Monday, 31 January 2011

molecular biology - How do I create a probe for in situ hybridization?

I have tried to make the probe several times, but it has failed again and again. The probe after hydrolysis usually turns out to be very short (maybe around 50 nt). I did not check the RNA before hydrolysis except the last time, when I did see a band (though a weak one); even so, it still produced a very short probe after hydrolysis. I use SP6 polymerase to synthesize the RNA, the original length of the DNA is 1.5 kb, and my expected length after hydrolysis is 500 nt.



I'm guessing it may be phenol contamination from the phenol:chloroform extraction step, because my 260/280 ratio is low (around 1.6-1.7) and my 260/230 ratio is also low (about 1.6-1.8). Also, the absorbance peak is not at 260 nm; it is almost at 270 nm.
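For reference, the hydrolysis time here was sanity-checked with the formula commonly attributed to Cox et al. (1984), $t = (L_i - L_f)/(k\,L_i L_f)$ with $k \approx 0.11$ kb⁻¹ min⁻¹; the function name below is my own, and the constant should be verified against your protocol. A minimal sketch in R:

# Commonly cited alkaline-hydrolysis timing formula (Cox et al. 1984);
# a sketch for sanity-checking the hydrolysis time, not protocol advice.
hydrolysis_min <- function(L_start_kb, L_final_kb, k = 0.11) {
  (L_start_kb - L_final_kb) / (k * L_start_kb * L_final_kb)
}
hydrolysis_min(1.5, 0.5)   # ~12 min to go from 1.5 kb to 500 nt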

Saturday, 29 January 2011

exoplanet - Planets orbiting Alpha Centauri

I read that a few years ago some planets were discovered orbiting Alpha Centauri. Has there been any more information since then concerning the possibility of life there? This is the closest star system to Earth, but it would still take us 63,000 years to reach it.

Wednesday, 26 January 2011

botany - What is the largest perennial herbaceous plant?

As mentioned above, the largest perennial herbaceous plant is indeed the banana. Whilst the main reference for this ("Yes, we have more bananas", an article in the Royal Horticultural Society Journal from May 2002) has been removed from their website, I'm sure a copy could be ordered should you need one.



This summary of the banana mentions that the pseudostem of the banana plant grows to 6-7.6 m in height. I haven't found anything taller when looking around, so I see no reason to doubt the removed article.



Regarding the comments on bamboo, there are 7 genera containing bamboo species. All of these (Arthrostylidium Rupr., Bambusa Schreb., Chusquea Kunth, Dendrocalamus Nees, Phyllostachys Siebold & Zucc., Pseudosasa Makino and Sasa Makino & Shibata) are described as having persistent woody stems, so they don't meet your herbaceous requirement.

Tuesday, 25 January 2011

ecology - How do susceptible organisms prevent parasites from overcoming resistance?

In an environment where all plants are resistant to certain parasites, a rare strain which has a mechanism against this resistance has free rein: lots of food and no competition. However, on plants which do not have the resistance, this rare parasite strain may be at a disadvantage compared to parasites without the counter-resistance mechanism (even if only because it wastes energy on a mechanism it doesn't need).



By mixing resistant and non-resistant plants, it is probably possible to maintain enough competition from parasites lacking that counter-resistance mechanism to prevent the ones carrying it from taking over. It's natural selection at work :)

Saturday, 22 January 2011

genetics - What is the difference between copy number changes with and without allelic imbalance?

I require some clarification on copy number aberrations (structural gain and loss in chromosomes). From what I understand, gain/loss per se can be divided into two types.



Consider two alleles, A and B at the same locus on homologous chromosomes.



  1. Allelic Balance: in one type of structural gain/loss, there is no resulting allelic imbalance; examples:



    • AB ==> AABB (duplication),

    • AB ==> -- (loss).


  2. Allelic Imbalance: in the other type of structural gain/loss, there is resulting allelic imbalance; examples:



    • AB ==> AAB (duplication of A),

    • AB ==> -B (deletion LOH),

    • AB ==> BB (copy-neutral LOH).


First, is what I have described above correct?



Second, if it is correct, what is the biology underpinning these copy number aberration types? Are the mechanisms responsible for bringing about allelically-imbalanced CN change distinct from the mechanisms for bringing about allelically-balanced CN change? What are these mechanisms?



NB. I'm confining this to just copy number-related allelic imbalance.
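To make the distinction concrete, here is a small illustration of my own (not from any particular paper) of the signals one would expect from SNP-array data for each state above: the total copy number $n_A+n_B$ and the B-allele frequency $n_B/(n_A+n_B)$. All names below are just for illustration.

# Expected SNP-array signals for the states listed above (illustration only).
states <- data.frame(
  state = c("AB", "AABB", "--", "AAB", "-B", "BB"),
  nA    = c(1, 2, 0, 2, 0, 0),
  nB    = c(1, 2, 0, 1, 1, 2)
)
states$copy_number <- states$nA + states$nB
states$BAF <- ifelse(states$copy_number > 0, states$nB / states$copy_number, NA)
states   # balanced states keep BAF = 0.5 (or undefined); imbalanced ones deviate

Note that copy-neutral LOH (BB) shows a shifted BAF with no change in total copy number, which is exactly why both signals are needed to tell these classes apart.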

Friday, 21 January 2011

Why do satellites never run out of power?

Actually, satellites may in a sense run out of power when their solar panels stop working properly because they have degraded with age. Then we lose contact with them and they become mere trash in orbit, endangering other satellites and, most importantly, the ISS.



So yes, satellites use solar energy, and it is enough, since solar panels in space are much more effective. At 500 kilometres altitude, a satellite spends no more than about 38% of each orbit out of direct sunshine. On Earth, by contrast, you also have to contend with cloudy weather, storms and so on.
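As a sanity check on that 38% figure: for a circular orbit and a simple cylindrical-shadow model (my assumption), the maximum eclipse fraction is $\arcsin(R_E/(R_E+h))/\pi$. A minimal sketch in R:

# Maximum fraction of a circular orbit spent in Earth's shadow,
# using a simple cylindrical-shadow approximation.
eclipse_fraction <- function(h_km, R_earth_km = 6371) {
  asin(R_earth_km / (R_earth_km + h_km)) / pi
}
eclipse_fraction(500)   # ~0.38, matching the figure quoted above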

Monday, 17 January 2011

Computing the Sérsic profile of a galaxy from jpg images

I am trying to calculate the Sérsic profile of various galaxies from the SDSS, based on the images provided by the Galaxy Zoo site. I am doing this as part of a Kaggle competition on using machine learning to predict galaxy morphology. I have no chance of a high rank in this competition, so I'm not hesitant to ask for help.



I used the R contourLines function to identify the isophotes of the galaxy and then fit ellipses to each isophote. This seemed to work well: the isophotes are almost always well fit by the ellipses, and the ellipses are nearly concentric. Then, letting I be the pixel intensity of an isophote and R be the length of the semi-major axis of the corresponding ellipse, I need to fit an equation of the form



$$\log I(R) = \log I_0 - k\,R^{1/n}$$


The simple approach seemed to be to take the log of both sides and use OLS regression, so I fit a linear model in R of the form



log(log(I)) ~ log(R)


The resulting graphs showed a good fit, but the resulting Sérsic indices n are almost always less than one and never as large as two. This doesn't seem right, since indices of 4 or higher seem common in my reading. I don't get anywhere near 4 even for an image of M87.



Possibly taking log log flattens things out too much, so that the index is not responsive enough. I tried using nls to work with just the log, but it didn't move the indices much.
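For what it's worth, here is a minimal sketch of what I mean by fitting on the log-intensity scale directly with nls, avoiding the double log. The vector names sma and intensity are placeholders for the measured semi-major axes and isophote intensities; the starting values are guesses.

# Sketch: fit log I(R) = log I0 - k * R^(1/n) directly with nls,
# assuming vectors sma (semi-major axes) and intensity (isophote intensities).
df  <- data.frame(sma = sma, logI = log(intensity))
fit <- nls(logI ~ logI0 - k * sma^(1/n),
           data  = df,
           start = list(logI0 = max(df$logI), k = 1, n = 2))
coef(fit)["n"]   # the Sersic index estimate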



Is there any standard software or algorithm for computing the Sérsic index from an image? Are there reference images I can work from that would let me check if my algorithm is reasonable? Any recommendations on how to proceed would be welcome.



UPDATE: I have found the programs GALFIT and GIM2D which look like they might be useful. Any other software that is commonly used for this?

Sunday, 16 January 2011

evolution - Why are there exactly four nucleobases in DNA?

Here is a possible answer given by this paper:



http://www.ncbi.nlm.nih.gov/pubmed/16794952
or
http://www.math.unl.edu/~bdeng1/Papers/DengDNAreplication.pdf



It gives a Darwinian explanation for the question, approaching the problem from Claude Shannon's theory of communication. It treats DNA replication, conceptually and mathematically, the same as a data transmission, and concludes that a system of four bases, not two, not six, replicates the most genetic information in the shortest amount of time.



The communication analogy goes like this. Suppose you have two data transmission services: one can transmit, say, 1 MB per second, and the other can do 2 MB per second but costs less than twice as much. The answer is obvious: you buy the second service, because it offers a higher rate per cost. As a data service, it does not care what information you consume -- it can be spam, video, audio, etc. All that matters is the transmission rate. DNA replication is likewise a data transmission channel in which one base at a time is replicated along the mother DNA template. It too does not care whether the process is for a bacterial genome, a plant genome, or an animal genome. The pay-off is in information and the cost is in time. Unlike our abiotic communication systems, time is both the sender and the receiver of all messages of life, and different life forms or species are merely time's cell phones. So if one system can replicate more information per unit time than another, the faster one will win the evolutionary arms race. A prey operating on a slow replicator system will not be able to compete with, nor adapt to, a predator operating on a fast one.



Now, because the A-T pair has only two weak hydrogen bonds while the C-G pair has three, A and T take a shorter time to duplicate than C and G do. Although the replication time per base is some fraction of a nanosecond, the time adds up quickly for genomes with base pairs in the billions. So having the C-G pair may slow down replication, but the gain is in information: one base pair gives you 1 bit of information per base, while two pairs give you 2 bits per base. However, having more base pairs may eventually run into diminishing returns in the information replication rate if the new bases take too long to replicate. Hence the search for the optimal replication rate, measured in information bits per base per unit time. Without information there would be no diversity and no complexity. Without the replication of information there would be no life.



Using a simple transmission/replication rate calculation due to Shannon, you can calculate the mean rate for the AT system, the CG system, the ATCG system, and hypothetical 6-base or, generally, 2n-base systems whose additional bases take progressively longer to replicate. The analysis shows that the ATCG system has the optimal replication rate if the CG bases take 1.65 to 3 times longer to replicate than the AT bases. That is, a base-2 system replicates its bases faster but does not carry enough information per base to achieve a higher bit rate, while a base-6 system carries more information per base but replicates more slowly on average, ending up with a suboptimal bit rate.
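To see where calculations like this come from, here is a toy version (my own sketch, not the paper's model, and it will not reproduce the paper's exact figures). It uses Shannon's capacity for a noiseless channel with unequal symbol durations: the capacity is $\log_2 x_0$, where $x_0$ solves $\sum_i x^{-t_i} = 1$ over the symbol (base) replication times $t_i$. The time ratio tau = 2 below is an assumed value.

# Shannon capacity with unequal symbol times; capacity = log2(x0),
# where x0 solves sum_i x^(-t_i) = 1.
capacity <- function(times) {
  f  <- function(x) sum(x^(-times)) - 1
  x0 <- uniroot(f, c(1 + 1e-9, 100))$root
  log2(x0)
}
tau <- 2   # assumed: a CG base takes twice as long to replicate as an AT base
c(AT   = capacity(c(1, 1)),          # 1 bit per time unit
  CG   = capacity(c(tau, tau)),      # 0.5
  ATCG = capacity(c(1, 1, tau, tau)))  # ~1.45: the mixed 4-base system wins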



DNA Replication Rate



According to a comparison from the paper, the base-4 system is about 40% faster than the A-T only system, and 133% faster than the G-C only system. Assuming life on Earth started about 4 billion years ago, an A-T only system would have set evolution back by 1 billion years, and a G-C only system by 2.3 billion years. A hypothetical base-6 system would have done so by 80 million years. In other words, life is where it should be because the base-4 system is able to transmit information through the time bottleneck at the optimal bit rate.



In conclusion, life is about replicating the most information in the shortest time, and the base-4 system does this best. If ever there were other systems, they would have lost the informatic competition to the base-4 system from the get-go. Darwin's principle works at life's most basic and most important level.



There are other explanations, all non-Darwinian. Most are based on the bases' molecular structures. But these types of explanation border on circular argument -- using observations to explain themselves. They also face a catch-22 problem, since there is no way to exhaust all possible bases for replication. Such lines of exploration are still fruitful, because the more knowledge the better. But without taking information and its replication into consideration, it is hard to imagine a sensible answer to the question.

space - What happens outside of the universe?

There are a number of ways of answering this question. Here are a few.



Firstly, we can simply state that, by definition, the universe is everything, and therefore there is simply nothing beyond it. Not nothing as in empty space, but nothing at all, not even space. (Or it could be that the universe is infinite.)



Secondly, we could look at what we mean when we say that the universe is expanding. What this really means is that the space-time metric is growing. It does not mean that the universe is expanding into some void, but that distances inside the universe are increasing with time. My favourite parallel is with a balloon being blown up: points on the surface get further apart without the addition of any more material to the balloon. (It is, of course, an imperfect parallel, as the balloon is expanding in a medium.)



Thirdly, and now we are touching on metaphysics (in the sense that it may be impossible to test some or any of these theories), we can say that "our" universe is one of many and that it may well have an edge, which even now could be crashing into some other universe; or perhaps we are inside a black hole inside another universe. These multiverse theories are ten-a-penny, but they are not necessarily without scientific merit.

Saturday, 15 January 2011

astrophysics - Size of the Universe

I think you are misunderstanding an important concept. There is a finite number of atoms in the observable universe — that is, the part we can see (notice the lower-case "u").



When people say "the universe", they often mean the observable universe. Sloppy, I know. The whole Universe (notice the capital "U" here) may be infinite. Also, I don't know where you got the idea that the Universe has an infinite number of dimensions.

How strong does a light source on the Moon have to be to be visible from Earth?

Flashes from some meteoroid impacts on the Moon can be seen with the naked eye. How much light would a future lunar base have to emit in order to be visible as a dot in the lunar night (like the flag of Tunisia)? I suppose it is a bit complicated, depending on the phase of the Moon, given that the new Moon is so close to the Sun in the sky.




Friday, 14 January 2011

observation - Why can't we observe the Oort cloud with a telescope?

The angular resolution of the telescope has no direct bearing on our ability to detect Oort cloud objects, beyond how that angular resolution affects the depth to which one can detect the light from faint objects. Any telescope can detect stars, even though their actual discs are way beyond its angular resolution.



The detection of Oort cloud objects is simply a question of detecting the (unresolved) reflected light in exactly the same way that one detects a faint (unresolved) star. Confirmation of the Oort cloud nature of the object would then come by observing at intervals over a year or so and obtaining a very large ($>2$ arcseconds) parallax.



The question amounts to: how deep do you need to go? We can answer this in two ways: (i) a back-of-the-envelope calculation assuming the object reflects light from the Sun with some albedo; (ii) scaling the brightness of comets observed when they are distant from the Sun.



(i) The luminosity of the Sun is $L=3.83\times10^{26}\ \mathrm{W}$. Let the distance to the Oort cloud be $D$ and the radius of the (assumed spherical) Oort object be $R$.
The light from the Sun incident on the object is $\pi R^2 L/4\pi D^2$.
Now assume that a fraction $f$ of this is reflected uniformly into a $2\pi$ solid angle. This latter point is an approximation: the light will not be reflected isotropically, but this represents some average over any viewing angle.



To a good approximation, as $D \gg 1$ au, we can assume that the distance from the Oort object to the Earth is also $D$. Hence the flux of light received at the Earth is
$$F_{E} = f\,\frac{\pi R^2 L}{4\pi D^2}\,\frac{1}{2\pi D^2} = f\,\frac{R^2 L}{8\pi D^4}$$



Putting some numbers in, let $R=10$ km and let $D=10,000$ au. Cometary material has a very low albedo, but let's be generous and assume $f=0.1$:
$$ F_E = 3\times10^{-29}\left(\frac{f}{0.1}\right) \left(\frac{R}{10\ \mathrm{km}}\right)^2 \left(\frac{D}{10^4\ \mathrm{au}}\right)^{-4}\ \mathrm{W\,m^{-2}}$$



To convert this to a magnitude, assume the reflected light has the same spectrum as sunlight. The Sun has an apparent visual magnitude of -26.74, corresponding to a flux at the Earth of $1.4\times10^{3}\ \mathrm{W\,m^{-2}}$. Converting the flux ratio to a magnitude difference, we find that the apparent magnitude of our fiducial Oort object is 52.4.
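A quick numerical check of the two steps above (just a sketch; the constants are the ones quoted in the text):

# Back-of-the-envelope Oort object flux and magnitude, as derived above.
L_sun <- 3.83e26                              # W
au    <- 1.496e11                             # m
f <- 0.1; R <- 10e3; D <- 1e4 * au            # albedo, radius (m), distance (m)
F_E <- f * R^2 * L_sun / (8 * pi * D^4)       # flux at Earth, W m^-2
m   <- -26.74 + 2.5 * log10(1.4e3 / F_E)      # magnitude relative to the Sun
c(F_E = F_E, mag = m)                         # ~3e-29 W m^-2 and ~52.4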



(ii) Halley's comet is similar (10 km radius, low albedo) to the fiducial Oort object considered above. Halley's comet was observed by the VLT in 2003, with a magnitude of 28.2, at a distance of 28 au from the Sun. We can now just scale this magnitude, bearing in mind that it scales as distance to the power of four, because the light must first be received and we then see it reflected.
Thus at 10,000 au, Halley would have a magnitude of $28.2 - 10 \log (28/10^{4}) = 53.7$, in reasonable agreement with my other estimate. (Incidentally, my crude formula in (i) above suggests an $f=0.1$, $R=10$ km comet at 28 au would have a magnitude of 26.9. Given that Halley probably has a smaller $f$, this is excellent consistency.)
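The same scaling as a one-line arithmetic check (reflected flux falls as $D^{-4}$, so the magnitude change is $-10\log_{10}(D_1/D_2)$):

# Magnitude of Halley scaled from 28 au out to 10^4 au.
28.2 - 10 * log10(28 / 1e4)   # ~53.7, as stated above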



The observation of Halley by the VLT represents the pinnacle of what is possible with today's telescopes. Even the Hubble Ultra Deep Field only reached visual magnitudes of about 29. Thus a big Oort cloud object remains more than 20 magnitudes below this detection threshold!



The most feasible way of detecting Oort objects is when they occult background stars. The possibilities for this are discussed by Ofek & Nakar (2010) in the context of the photometric precision provided by Kepler. The rate of occultations (which are of course single, unrepeatable events) was calculated to be between zero and 100 over the whole Kepler mission, depending on the size and distance distribution of the Oort objects. As far as I am aware, nothing has come of this (yet).

Thursday, 13 January 2011

star systems - Estimates of exoplanets distribution consistent with current data

As you've correctly mentioned, observation biases play a huge role in the current understanding of planetary populations. Let's have a look at a slightly outdated plot (two months old) that shows the semi-major-axis vs. mass distribution of currently known planets, including KOIs (unconfirmed planets).



Current planetary population



Here we see two things:



  • Hot Jupiters seem to make up a significant part of the confirmed planets (confirmed = RV + Transit)

  • The farther out we go, the less likely it is that a planet has a measured radius (black dots)

Now, knowing how our methods and instrumentation work, we can try to debias this data.
Transit measurements are the easiest to debias: the detection probability is a simple geometric function of orbital alignment, planetary radius and stellar radius, so one can simply overplot the found planets with this probability. This correction is therefore usually not of great interest to the planetary community.
Edit: the probability is usually given as $\sim r_{\mathrm{stellar}}/2a$, with $r_{\mathrm{stellar}}$ being the stellar radius and $a$ the planet's semi-major axis. A simple derivation is here; a discussion of how everything gets more complicated when combining with RV measurements is here.
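As an illustration of what such a geometric debiasing looks like in practice, here is a toy sketch. It assumes the $\sim r_{\mathrm{stellar}}/2a$ probability quoted above and a solar-radius star; the detection counts are entirely invented numbers.

# Toy geometric debiasing: divide detected counts by the transit probability.
r_star_au <- 6.957e8 / 1.496e11              # solar radius in au
p_transit <- function(a) r_star_au / (2 * a) # probability quoted above, a in au
a         <- c(0.05, 0.1, 1.0)               # semi-major axes, au
found     <- c(40, 12, 1)                    # hypothetical detected counts
corrected <- found / p_transit(a)            # debiased occurrence estimate
rbind(a = a, p_transit = p_transit(a), corrected = corrected)

Note how the single planet found at 1 au corrects to a larger underlying population than the forty found close in.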



Any instrument one uses, however, will have its own quirks and biases, and starspots or eclipsing binaries can lead to false positives. Petigura+ 2013, fig. 1, try to assess those effects for Kepler transits and come up with the following detection probability map.



Rough correction for transit incompleteness



The corrected population would then be found as $\frac{\mathrm{found\ population}}{\mathrm{completeness}}$. Mathematical issues like division by zero are easily interpreted as stemming from the incomplete data: where the detectability is zero, we can't know whether there are 0 or 1 or 20 planets hidden in a bin.
When projected onto one axis or the other, these corrected data then yield corrected radius or period occurrence rates.
I won't show those for the transit method, as I want to encourage you to check the original reference.



I will, however, show one of those plots for the radial velocity method, from Mayor+ 2011, fig. 12, which analyses their HARPS spectrograph data from the preceding decade:






In black: found planets; in red: the debiased radial-velocity population.
Note: a Jupiter mass is roughly 320 Earth masses, so in this plot the Hot Jupiters are on the far right. This closes the thematic circle back to my initial note about Hot Jupiters: while in black Jupiter-like planets seem very common, in the debiased version this is no longer the case. Mayor et al. estimate that Hot Jupiters make up only 0.5% of the total planetary population with this method.



Note also that although the debiased number of terrestrial planets increases significantly, the error bars explode in size. This stems simply from doing statistics with low numbers. For reasons of simplicity and clarity, the error bars on the first two plots are not shown.




Concluding:
It is important to debias the data we have about exoplanets, method by method.
When we do this, we get rough ideas of what the true planetary populations look like.



Recent results and non-stop public announcements about "hundreds of newly found planets" can lead to a certain, totally understandable euphoria that we now know everything about planets.
This is, however, very far from true. The number distribution in the full mass-radius-semi-major-axis space is still very, very thinly sampled, so in the end we are doing low-number statistics when debiasing, which is scientifically unsatisfying.
Or, said differently: there is still a lot of exciting work to be done!

Wednesday, 12 January 2011

evolution - Why is selfishness the 'obvious' strategy?

It basically comes down to a question of the unit of selection.



From the common viewpoint, in which natural selection is seen as acting on individual organisms, it's almost a tautology that the organisms favored by selection are those that maximize their own reproductive fitness. Thus, the possibility that some organisms might engage in acts that help another organism at their own expense may seem like a paradox, or at least a puzzle in need of explanation.



One solution to this puzzle is offered by the gene-centered view of evolution, where selection is viewed as acting on genes, with organisms being merely convenient (and usually, but not always, cooperative) collections of genes that (usually) reproduce together. From this viewpoint, it is not at all surprising that evolution might favor genes that cause an organism to help other organisms, provided that there's a statistical tendency for those other organisms to also be carrying the same gene.



Other mechanisms for the evolution of cooperation do also exist. For example, organisms with sufficiently advanced cognitive capabilities may indeed engage in reciprocal altruism, where they help others only if those others have shown themselves willing to help them in exchange. Such exchanges, being mutually beneficial, do indeed help both participating organisms, and are thus selected for even at the organism level. However, to persist (in the absence of gene or group level selection effects), they generally need some form of enforcement and/or learning mechanism — if cheaters can keep receiving help without ever helping anyone else, they'll do better than the cooperators and eventually displace them.
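A toy numerical illustration of that last point (my own sketch, not from any source above; the payoff values T=5, R=3, P=1, S=0 are the standard prisoner's-dilemma convention): with repeated interactions, a conditional cooperator such as tit-for-tat (TFT) resists invasion by unconditional defectors (ALLD), whereas in a one-shot game defection dominates.

# Total payoffs over repeated prisoner's-dilemma rounds.
# TFT cooperates first, then copies its partner's last move; ALLD always defects.
payoff <- function(rounds) {
  c(TFT_vs_TFT   = 3 * rounds,            # mutual cooperation every round
    TFT_vs_ALLD  = 0 + (rounds - 1) * 1,  # suckered once, then mutual defection
    ALLD_vs_TFT  = 5 + (rounds - 1) * 1,  # one exploitation, then punishment
    ALLD_vs_ALLD = 1 * rounds)            # mutual defection every round
}
payoff(1)    # one-shot: ALLD beats TFT
payoff(10)   # repeated: TFT pairs (30) outscore an ALLD invader (14)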



Also, in some situations, an organism acting simply to help itself (e.g. by modifying its environment to be more suitable for itself) may also end up helping other organisms that just happen to be nearby. In such a situation, selection indeed favors cooperative behavior even in isolated organisms by default, with the evolution and persistence of "freeloaders" (who spend less effort on improving their surroundings, relying instead on others to do the work) being possible only as long as there are sufficiently many cooperators around.



Of course, none of these mechanisms are mutually exclusive. Indeed, the presence of some gene-level selection effects tends to be unavoidable in any realistic situation involving cooperative interactions; it's hard to come up with a setting in which no two interacting organisms ever share genes, and obviously the fact that two organisms share a common genetic predisposition to cooperate makes them more likely to do so when they meet.



(However, one not-so-realistic theoretical limit case where that does happen is that of an infinitesimally rare cooperative mutant invading an infinitely large and well-mixed population. The popular use of this simplifying limit assumption in classical models of evolution may be one reason why cooperative behavior is so often seen as something remarkable and hard to explain by evolutionary theorists.)

Tuesday, 11 January 2011

redshift - Estimating galactic dust extinction for medium band filters

The quantity you want is basically the extinction law, and is usually called $k(\lambda)$. An extinction law is a fit to several measurements of the extinction $A_\lambda$ in some direction (or an average of several directions).



Cardelli et al. (1989) provides different functional forms for the mean extinction law, parametrized in their Eq. 1 as
$$
\frac{A_\lambda}{A_V} = a(x) + \frac{b(x)}{R_V},
$$
where $x$ is the inverse wavelength in $\mu\mathrm{m}^{-1}$, and the coefficients are given separately for IR, optical, UV, and FUV in Eqs. 2, 3, 4, and 5, respectively. The total-to-selective extinction $R_V\equiv A_V/E(B-V)$ takes different values for different lines of sight, but usually lies in the range 2.5 to 6, with 3.1 being a typical value in the Milky Way.



To get the quantity you're interested in, simply convert your favorite wavelength to $x$, stick it into Eq. 1, and multiply by $R_V$:
$$
k(\lambda) \equiv \frac{A_\lambda}{E(B-V)} = \frac{A_\lambda}{A_V} R_V.
$$
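A minimal sketch of that conversion in code. The functions a_x and b_x stand in for the Cardelli et al. coefficient functions (their Eqs. 2-5), which you must supply for the relevant wavelength regime; only the Eq. 1 combination is implemented here.

# k(lambda) = A_lambda / E(B-V), built from Cardelli et al. Eq. 1.
k_lambda <- function(lambda_um, a_x, b_x, R_V = 3.1) {
  x <- 1 / lambda_um                    # inverse wavelength in um^-1
  (a_x(x) + b_x(x) / R_V) * R_V         # (A_lambda / A_V) * R_V
}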

Monday, 10 January 2011

bioinformatics - Source of DNA sequences

This is more of a comment, but too long to put in a comment box, so I'm putting it here.



This is a fun idea you have. I have a half-baked suggestion (assuming you're looking for input) if you want to explore it further -- or not... by the time I finish writing this, I might realize it's too silly, but let's see.



It might be fun to take the sequences of two organisms, let's say mouse and human, and align certain regions to each other -- imagine this as playing a piano, where the "left hand" might be the mouse sequence and the "right hand" the human.



So, say you take a gene that is shared by both, like CCND1. You can align the two sequences against each other and you'll find that large portions are common (with some mismatches, obviously). In these regions, the left and right hands are playing together (in different octaves).



You'll also find gaps in the alignments where you'll have a stretch of "mouse only" or "human only" sequence, and in these regions the left or right hand plays alone (solo).



For instance, say the two alignments look like this:



mouse: CGTGGGAGGCTCTTGAGCCTGGAAACACTATCGCAGTTTGTACGGAATGCACTTGTTCTTTACAAAAGG
human: CTTGGGCGACA---GAGC---GAGACTTTGTCTCAAAAAAGAAG--------------------AAAAG


In this case, you see stretches of the alignment where the mouse (left hand) plays a solo, and other stretches where the two hands play in "harmony."
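A tiny sketch of the mapping, using truncated pieces of the alignment above; the base-to-note assignment is completely arbitrary, just for illustration.

# Map bases to notes and gaps to rests; one row per "hand".
mouse <- "CGTGGGAGGCTCTTGAGCC"
human <- "CTTGGGCGACA---GAGC-"
note_of <- c(A = "C4", C = "E4", G = "G4", T = "B4", "-" = "rest")
left  <- note_of[strsplit(mouse, "")[[1]]]
right <- note_of[strsplit(human, "")[[1]]]
rbind(left, right)   # solo wherever one row is a rest, "harmony" elsewhere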

Sunday, 9 January 2011

bioinformatics - Finding and Comparing Homologous Genes

If you don't have any idea which other genes your gene will match, try something like BLAST at the NCBI website.



This will give you a list of hits that you can then align with a multiple sequence aligner (MSA). The same NCBI page can give you a tree reconstructed from the results of the search, although there is a variety of methods you can use if you would rather download the sequences and build the gene family alignment yourself: tick "[x] Select All -- Get selected sequences" in the NCBI BLAST results page, then download them in FASTA or another format with "Send to -- File -- Format FASTA -- Create File".



If what you want instead is to include your gene sequence into the best aligning place in an existing gene family alignment, you can try PAGAN.

Saturday, 8 January 2011

Is the influence of gravity instantaneous?

The first question as stated has a rather trivial answer:




"If the sun magically disappeared, instantly, along with all its influences, how long would it take its gravity to stop having an effect on us?"




Since the Sun's gravity is among its influences, it would instantly stop having an effect on us. That's just part of the magical situation, and doesn't even involve any physics. A bit more interesting is the question without the bolded part.



In general relativity, changes in the gravitational field propagate at the speed of light. Thus, one might expect that the magical and instant disappearance of the Sun would not affect earth for about eight minutes, since that's how long light from the Sun takes to reach Earth.



However, this is mistaken, because the instant disappearance of the Sun itself violates general relativity: the Einstein field equation enforces a kind of local conservation law on the stress-energy tensor, analogous to the non-divergence of the magnetic field in electromagnetism. In any small neighborhood of spacetime there are no local sources or sinks of stress-energy; it must come from somewhere and go somewhere. Since the magical, instant disappearance of the Sun violates general relativity, it does not make sense to use that theory to predict what happens in such a situation.



Thus, the Sun's gravity instantly ceasing any effect on the Earth is just as consistent with general relativity as having any sort of time-delay. Or to be precise, it's no more inconsistent.




My big question, now, is: "How do we know it's instant?"




It's not instant, but it can appear that way.




We can't possibly move an object large enough to have a noticeable gravitational influence fast enough to measure if it creates (or doesn't create) a doppler-like phenomenon.




We don't have to: solar system dynamics are quite fast enough. A simple calculation due to Laplace in the early nineteenth century concluded that if gravity aberrated, Earth's orbit would crash into the Sun on a time-scale of about four centuries. Thus gravity does not aberrate appreciably; more careful analyses concluded that, in the Newtonian framework, the speed of gravity must be more than $2\times10^{10}$ times the speed of light to be consistent with the observed lack of aberration.



This may seem puzzling given general relativity's claim that changes in the gravitational field propagate at the speed of light, but it's actually not that peculiar. As an analogy, the electric field of a uniformly moving electric charge is directed toward the instantaneous position of the charge--not where the charge used to be, as one might expect from a speed-of-light delay. This doesn't mean that electromagnetism propagates instantaneously--if you wiggle the charge, that information propagates no faster than $c$, as the electromagnetic field changes in response to your action. Instead, it's just something that's true for uniformly moving charges: the electric field "anticipates" where the charge will be if no influence acts on it. If the charge's velocity changes slowly enough, it will look like electromagnetism is instantaneous, even though it really isn't.



Gravity does this even better: the gravitational field of a uniformly accelerating mass points toward its current position. Thus, gravity "anticipates" where the mass will be based not just on its current velocity, but also on its acceleration. So if conditions are such that the acceleration of gravitating bodies changes slowly (as is the case in the solar system), gravity will look instantaneous. But this is only approximately true--it's just a very good approximation under the conditions of the solar system. After all, Newtonian gravity works well.



A detailed analysis of this can be found in Steve Carlip's Aberration and the Speed of Gravity, Phys.Lett.A 267:81-87 (2000) [arXiV:gr-qc/9909087].




If he was wrong, how do we know it's not?




We have a lot of evidence for general relativity, but the best current evidence that gravitational radiation behaves as GTR says it does is the Hulse-Taylor binary. However, there is no direct observation of gravitational radiation yet. The connection between the degree of apparent cancellation of velocity-dependent effects in both electromagnetism and gravity, including its connection with the dipole nature of EM radiation and the quadrupole nature of gravitational radiation, can also be found in Carlip's paper.

rotation - Why does the Earth have a tilt of ~23°?

First up, the tilt is currently about 23.44 degrees, and it changes slowly over time.



The reason for Earth's tilt is still not really settled, but scientists at Princeton stated on August 25, 2006 that planet Earth may have 'tilted' to keep its balance. Quote:




By analyzing the magnetic composition of ancient sediments found in the remote Norwegian archipelago of Svalbard, Princeton University's Adam Maloof has lent credence to a 140-year-old theory regarding the way the Earth might restore its own balance if an unequal distribution of weight ever developed in its interior or on its surface.



The theory, known as true polar wander, postulates that if an object of sufficient weight -- such as a supersized volcano -- ever formed far from the equator, the force of the planet's rotation would gradually pull the heavy object away from the axis the Earth spins around. If the volcanoes, land and other masses that exist within the spinning Earth ever became sufficiently imbalanced, the planet would tilt and rotate itself until this extra weight was relocated to a point along the equator.




The same goes for the lack of proof about the exact consequences for the planet, or whether it will happen again. All of that is still being researched and debated, but those Princeton scientists have thrown in some interesting perspectives, which you'll discover when you visit the link I provided and read what they wrote in full.



Besides "consequences" for Earth as a planet, it should be noted that the tilt of Earth is the reason why we have seasons. So even when we yet have to find out what the consequences for Earth (as a planet) have been and/or will be, we do know that the tilt surely has consequences for all beings that live on planet Earth… the seasons that influence us all: summer, fall, winter, and spring.



And - last but not least - as for your question of how we know where 0 degrees would be: just look at the Earth's orbit around the Sun. Take the axis at a 90° angle to the plane of the planet's orbit, and that is where 0° would be. But I guess a picture explains more than a thousand words, so I quickly created the following graphic:



tilt and axis explained

galaxies - Create Position-Velocity diagram from a velocity field

Such diagrams are usually called rotation curves; they show the velocity of stars/gas in a disk galaxy viewed edge-on, measured along the line of sight (LOS), as a function of distance from the center.



That is, the $x$ axis gives the distance of the stars from the center ($x=0$) of the galaxy, and the $y$ axis gives the measured velocities at a given distance. Along a given LOS you'll find not a single velocity but a (Gaussian-like) distribution of velocities, both because there is a certain scatter in velocities, and because you only measure the component of the velocities projected onto your LOS, while both speed and direction change along the LOS, as seen here:



[figure: rotation velocities projected onto the line of sight]



If they all had the same velocity along the LOS, the plot you show would be a thin line; instead it is smeared out.



Due to mass being concentrated in the center, there is a steep increase in velocity at low $x$. If there were only the mass you can see, the velocity would decrease at larger $x$ due to the larger distance from the central mass. The reason it stays more or less constant is thought to be the extra, invisible mass known as dark matter.



The reason the left side of the plot is similar to the right, but mirrored not only across the $y$ axis but also across the $x$ axis, is that on this side the stars move toward you, while on the right side they move away from you. The whole galaxy evidently moves away from you at $500\,\mathrm{km\,s^{-1}}$, since this is the velocity at $x=0$.



So, to recap: How do you generate such a plot? You measure the distribution of velocities along many lines of sight in the galactic plane, from one side ($-R_\mathrm{gal}$) to the other ($+R_\mathrm{gal}$). For instance, at $x=14\,\mathrm{kpc}$, you'd measure $V=675\pm12\,\mathrm{km\,s^{-1}}$:



[figure: the velocity distribution measured along a single line of sight]
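Here is a toy construction of such a position-velocity plot (my own sketch: an idealised edge-on disk with a flat rotation curve, where the 220 km/s circular speed is an arbitrary choice and the 500 km/s systemic velocity matches the plot above):

# Toy position-velocity diagram for an edge-on disk with a flat rotation curve.
v_sys <- 500; v_c <- 220; R_gal <- 15            # km/s, km/s, kpc
x <- seq(-R_gal, R_gal, length.out = 201)        # projected distance, kpc
pv <- lapply(x, function(xi) {
  # gas along this sightline sits at galactocentric radii R >= |xi|;
  # its LOS velocity component is v_c * xi / R (largest at the tangent point)
  R <- seq(max(abs(xi), 1e-3), R_gal, length.out = 50)
  v_sys + v_c * xi / R
})
plot(rep(x, each = 50), unlist(pv), pch = ".",
     xlab = "x [kpc]", ylab = "v_LOS [km/s]")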



In practice, images like this are made using a grism, which disperses the light according to its wavelength, together with a slit placed along the galactic plane so that light from the rest of the field is blocked. This is called slit spectroscopy, in contrast to "normal" spectroscopy, where you block all light except for a little hole placed on some object (e.g. a star), and integral field spectroscopy, where you get a whole spectrum behind every pixel of an image.



[figure: slit spectroscopy setup]

Friday, 7 January 2011

How long is DNA stable in a freezer?

If the DNA is pure, it should last quite a long time. If there are enzymes and other biological molecules in there, -80 °C will work much better.



I think you could keep pure DNA at -20 °C practically indefinitely.



Purity is the main issue there; having the pH stabilized, the tube properly sealed, and so on also makes all the difference.

Thursday, 6 January 2011

Cosmological deflation? - Astronomy

There are a number of flaws with your idea (despite the relativity-breaking transatlantic journey times that would be possible).



Inflation is a popular theoretical model put forward to solve three main issues with Big Bang cosmology: the horizon problem, the flatness problem and the magnetic monopole problem (see the wiki for details). It posits a scalar field called the inflaton, a physical field that pervades all space. During inflation this field undergoes a phase transition to a lower energy state, releasing huge amounts of energy that drive the expansion. To force the field back to a higher energy state would require inconceivable amounts of energy, applied across the entire universe. Not only is this impossible, but it also, annoyingly, rules out its use for local galactic travel.




Or maybe a technically very advanced civilization could keep its solar system inflated by the time of a deflation of the whole universe




I assume you mean contraction, as opposed to the current expansion that we see today due to dark energy. Again, this would be unfeasible for the reasons outlined above, and if contraction were to proceed to its conclusion, i.e. a singularity like the Big Bang, then no information from this universe could be carried through that singularity, even if there were another universe on the other side.

Wednesday, 5 January 2011

visualisation of the universe's expansion

You were correct; your teacher was incorrect.



It is space itself that expands - much as the surface of a balloon does.



An explosion is a poor analogy in contrast because - as you suggest - it implies something to expand into.



For another analogy, not as accurate as the balloon but maybe helpful: imagine being trapped inside an expanding loaf of bread in an oven. You have no knowledge of the world outside the loaf, and you may assume the loaf is infinite in extent. But as the dough rises, the gaps inside the loaf get larger without the mass of the loaf itself increasing. You could compare our position in the universe to being inside such an infinite loaf.

distances - Can we tell how fast bodies are moving away by measuring their frequency?

Your understanding is correct. The Doppler shift observed from a galaxy is the sum of the shift due to its peculiar velocity with respect to the "Hubble flow" and the redshift due to the Hubble flow itself, which is caused by the expansion of the universe.



There is no direct way from a spectrum to separate these two components - they have the same qualitative result.



In principle, the expansion of the universe (or a change in the peculiar velocity) could be directly measured by looking for a change in redshift with time, which would depend on the cosmological parameters.



This is an extremely small effect and is confounded by the peculiar motions of individual galaxies. Nevertheless, measuring this redshift drift is one of the prime goals of the CODEX instrument on the E-ELT (see Pasquini et al. 2010, http://esoads.eso.org/abs/2010Msngr.140...20P ), using Lyman-alpha absorption systems towards distant quasars. The same experiment is also planned for the Square Kilometre Array, using the 21 cm line (Kloeckner et al. 2015, http://arxiv.org/abs/1501.03822 ).



In both cases, to overcome the experimental uncertainties (e.g. at 21 cm, the effect amounts to line drifts of about 0.1 Hz over a decade), observations of millions of galaxies must be combined.
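For scale, here is a sketch of the expected drift using the standard relation $\dot z = (1+z)H_0 - H(z)$; flat ΛCDM with $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ are my assumed parameters.

# Expected cosmological redshift drift per decade at z = 2 (flat LCDM).
H0 <- 70 * 1e3 / 3.086e22                         # H0 in s^-1
E  <- function(z, Om = 0.3) sqrt(Om * (1 + z)^3 + 1 - Om)
dz_dt <- function(z) (1 + z) * H0 - H0 * E(z)     # s^-1
dz_dt(2) * 3.156e8                                # ~2e-11 per decade: tiny indeed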



There is no prospect of measuring this effect in an individual galaxy. Furthermore, I fear your understanding of cosmological redshift is flawed: the dependence on distance is a statistical average, not an absolute dependence. Individual galaxies move in the individual gravitational potentials of the objects around them, which gives them their peculiar velocities with respect to the flow. This velocity could increase or decrease as a galaxy gets further away, but it is never expected to be large enough to be detectable on human timescales for any individual galaxy. In addition, any change in peculiar velocity should average to zero when looking at millions of galaxies, leaving the redshift drift due to the expansion.

Sunday, 2 January 2011

How can a supernova affect black hole in a binary system?

The likely result would be either a black hole-black hole binary system, a neutron star-black hole binary system, or a situation in which the black hole and the compact remnant from the second supernova explosion go their separate ways at reasonably high speeds.



You cannot disrupt a black hole in this way. In fact all that will happen to the original black hole is that it will likely get a bit more massive from accreting some of the supernova ejecta.

newtonian gravity - Can the centripetal force be inverted?

The simple answer is no.
The centripetal force is what we call an 'inertial force', as opposed to a 'fundamental force'. This means there are no charges attached to it, and no field (or field analogue) that propagates this force - unlike e.g. electromagnetism and gravity.



Inertial forces originate purely from changes of your frame of reference (or 'viewpoint') and from how this frame moves relative to your original frame.
The name 'fictitious force' for this concept is also very common. The corresponding Wikipedia page discusses an example with a decelerating bus, which I can only very strongly recommend studying in order to understand the concept of inertial forces.



The direction of the centripetal force is tied to the acceleration at any given moment. Any path that a massive object takes through space has a curvature at every point. This local curvature defines a so-called osculating circle (see below), towards whose centre the centripetal force points. Thus, there is no way to reverse it.



[figure: osculating circles along a curved path]
(c) Wikipedia, Wiki commons license



Sorry if this answer seems a bit vague, but I think this is the best I can do without going deeper into the mathematics of classical mechanics that covers these topics.

Saturday, 1 January 2011

biochemistry - Why insects are so energy-efficient while flying?

Insect flight muscle is capable of achieving the highest metabolic rate of all animal tissues, and this tissue may be considered an exquisite example of biochemical adaptation.



Locusts, for example, may (almost instantaneously) increase their oxygen consumption up to 70-fold when starting to fly. In humans, exercise can increase O2 consumption at most about 20-fold, and for birds in flight the figure is about 10-fold (Wegener, 1996; Sacktor, 1976).



As Wegener (1996) has put it (in his definitive paper):




The aerobic scope (the ratio of maximal to basal rate of respiration) of insects is unrivalled in the animal kingdom




Flight is powered by ATP hydrolysis, and these impressive metabolic rates are achieved by very effective control of ATP hydrolysis and regeneration.



  • Metabolism is aerobic, thus allowing for much more efficient ATP production from hexoses (as compared with, say, anaerobic metabolism).



  • Flight muscle may account for up to 20% of body mass.


  • In insects, haemoglobin and myoglobin are absent. Instead, gaseous O2 is transported to the tissues by a system of tubules and deposited so close to the site of consumption that (seemingly) it may reach mitochondria by diffusion.


  • Locusts fuel flight by burning sugars in the early stages, gradually changing to use lipids as fuel. (In bees, flight is totally fuelled by hexose consumption).

    This is achieved by effective control of glycogen breakdown and glycolysis, by modifying the activity of glycogen phosphorylase (glycogen breakdown) and of phosphofructokinase (PFK), a key control enzyme of glycolysis.


  • There is an enormous literature on these topics, but suffice it to say that, in the case of glycolysis, control is very efficiently achieved by allosteric regulation of PFK, where fructose 1,6-bisphosphate and fructose 2,6-bisphosphate play key roles (see Sacktor, 1976).


  • This allosteric control very effectively allows glycolysis to be turned on (almost instantaneously) and run at its maximum rate, and likewise to be turned off (almost instantaneously).

References



Wegener, G. (1996) Flying insects: model systems in exercise physiology.
Experientia May 15;52(5):404-12. (See here)



Sacktor B. (1976) Biochemical adaptations for flight in the insect.
Biochem Soc Symp. 1976;(41):111-31. (See here)