Sunday, 31 January 2010

exoplanet - Could we detect an earth-like planet around another star?

The short answer is no; we cannot quite detect earth-like planets around Sun-like stars with orbital periods of 1 year.



The two main planetary detection techniques are transit photometry and the radial velocity variation technique. Direct imaging of earth-like planets at 1 au from the host star is utterly impossible with current technology - the problem is not the sensitivity, it is the contrast achievable at small angular separations.



The first demands high-precision photometry (a transit of an "Earth" across a "Sun" produces a lightcurve dip of only about 0.01%). This kind of precision has been achieved by space-based observatories, but they have not observed stars for long enough to build up the requisite number of transits (you need at least a few) to confirm a detection at periods of 365 days. The Kepler primary mission ceased after about 4 years, meaning it will be tricky (though not impossible) to dredge convincing Earth-like transit signals out of the data at periods of 1 year - and even then you need to perform some sort of follow-up to show that it is a planetary-mass object rather than a false positive, and to actually estimate the mass to show that it is a rocky planet.
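
As a quick sanity check on that 0.01% figure, the transit depth is roughly the ratio of the projected areas of planet and star; a minimal sketch in Python, assuming nominal terrestrial and solar radii:

```python
# Approximate transit depth for an Earth-sized planet crossing a Sun-like star.
R_earth = 6.371e6   # m
R_sun   = 6.957e8   # m

depth = (R_earth / R_sun) ** 2   # fraction of starlight blocked
print(f"Transit depth ~ {depth:.2e} ({depth * 100:.4f} %)")
# ~8.4e-5, i.e. roughly 0.008-0.01 %
```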



Which brings us to the Doppler radial velocity technique. The motion of the Earth-Sun system results in the Sun executing a 1-year orbit around the common centre of mass with an amplitude of about 9 cm/s. This is about a factor of 5-10 smaller than the best precision available at any telescope in the world right now.
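
That 9 cm/s figure follows from simple momentum balance: the Sun's reflex speed is the Earth's orbital speed scaled by the mass ratio. A minimal sketch, assuming circular orbits:

```python
# Reflex radial-velocity amplitude of the Sun induced by the Earth (circular orbit).
M_sun   = 1.989e30   # kg
M_earth = 5.972e24   # kg
v_earth = 29.78e3    # m/s, Earth's orbital speed

K_sun = v_earth * M_earth / M_sun   # momentum balance: M_sun * K = M_earth * v
print(f"Solar reflex amplitude ~ {K_sun * 100:.1f} cm/s")   # ~9 cm/s
```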



So - although none have been confirmed yet (there are candidates in the Kepler data), that does not mean that Earth-like planets are uncommon. Indeed, most sensible extrapolations of the frequency of Earth-sized planets found at closer orbital distances suggest that they could be quite common (e.g. $\sim 25\%$ of Sun-like stars; Petigura et al. 2013).

Saturday, 30 January 2010

galaxy - Why aren't new stars in Earth's relative proximity constantly discovered?

To the naked eye, the answer is almost certainly no, both because stars move across the sky enormously slowly and because, although 5,000 visible stars may sound like a lot, they cover only a tiny fraction of the sky.



Hubble, which can see perhaps tens of millions of stars or more, has imaged two stars that are approaching a crossing of each other's path from our point of view (see the link here). With a big enough telescope it probably happens from time to time, though I wouldn't want to try to calculate how often. But to the naked eye, I'm comfortable saying no. In fact, it was often assumed that stars didn't move at all and were fixed in the sky (contrary to what Macrobius said); that was the popular view prior to Halley's observation.



There was also Tycho Brahe's "De Nova Stella", or "new star", which we now know to have been a supernova, and that was quite the surprise at the time. Nobody thought a new star could appear, because the stars were believed to be fixed and permanent - but that appearance wasn't by the mechanism you suggest.



Consider how small stars are from our point of view. Alpha Centauri A, the larger of the pair, is about 1.7 million km across and about 4.3 light years, or 41 trillion km, away. Its diameter is roughly 24 million times smaller than its distance from us - the equivalent of looking at a golf ball from roughly 600 miles away. Now scatter 5,000 golf balls, each some 600 miles away, across the sky and let them drift around very, very slowly: how often do you think one golf ball passes in front of another? Almost never. Granted, that's not quite right, as the atmosphere smears each star out a bit, so each golf ball is smudged to maybe the size of a basketball - but they still almost never pass in front of one another, at least if we only count the 5,000 naked-eye stars.
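
To put a rough number on "almost never": the fraction of the sky covered by 5,000 seeing-smeared stellar disks is tiny. A minimal back-of-the-envelope sketch, assuming each star is blurred to about 1 arcsecond across:

```python
import math

n_stars = 5000
blur_diameter_arcsec = 1.0            # assumed seeing-smeared size of a naked-eye star
sky_area_deg2 = 41253                 # whole sky in square degrees
sky_area_arcsec2 = sky_area_deg2 * 3600**2

disk_area = math.pi * (blur_diameter_arcsec / 2) ** 2
covered_fraction = n_stars * disk_area / sky_area_arcsec2
print(f"Fraction of sky covered: {covered_fraction:.1e}")   # ~7e-9
```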



Binary stars are a different matter: if they are lined up right they can pass in front of each other, and such eclipses have certainly been observed by telescope - but not with the naked eye. We can't visibly tell that Alpha Centauri is two stars (three with the more distant Proxima, which is too faint to see at all). The two bright components are on average about a billion miles apart, yet even that separation can't be resolved by the naked eye; the pair was first resolved by telescope in 1689.



There simply aren't enough visible stars (and, taking HDE's point, most of the 5,000 visible stars weren't even catalogued until relatively recently), so there's essentially zero chance that anyone has ever watched a star appear "new" by emerging from behind another star.



Using Hubble, it can happen, but not to human sight.

geocentrism - How can we tell that the sun is moving with theories such as the theory of relativity?

There are some simple answers to your questions, and some (historic) controversy. I'll answer the simple elements first (but backwards by your ordering of the questions), and then describe the nature of the controversy.



  1. What is absolute motion?

Absolute motion is motion that is referenced (compared to) a feature that you can say is standing still. For example, how do you know the speed of a car? You compare it to something that is standing still.



  2. What is relative motion?

Relative motion is motion that is referenced (compared to) something that may be moving. For example, when driving, you may find that the car next to you is edging ahead of you. You can say that it is moving 5 miles per hour faster than you - that is its relative speed. (Note, it also has an "absolute" speed, say 65 miles per hour, while you have an "absolute" speed of 60 miles per hour. Here I use "absolute" in the sense of comparing speed to the surface of the Earth, not a universal absolute.)



This brings us to the definition of a reference frame, which is critical for understanding the later arguments. A reference frame is a perspective from which one can begin measuring things. It is a defined zero point from which you can measure distance, speed etc. So, in the example above, you have a reference point, yourself in the car, and you see the other car moving ahead of you at 5 miles an hour, and you also see the ground moving past you at 60 miles an hour. The driver of the other car also has a reference point, himself, and sees you as moving backwards at 5 miles an hour, and the ground moving past at 65 miles an hour. Someone sitting on the ground also has a reference point, seeing two cars moving away at 60 and 65 miles an hour respectively.
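
A minimal numerical sketch of those three viewpoints, assuming the simple (Galilean) rule that velocities just subtract between frames:

```python
# Velocities measured from the ground frame (mph).
v_you   = 60.0
v_other = 65.0

def velocity_in_frame(v_object, v_frame):
    """Speed of an object as seen from a frame moving at v_frame (Galilean)."""
    return v_object - v_frame

print(velocity_in_frame(v_other, v_you))    #  5.0  -> other car edges ahead of you
print(velocity_in_frame(v_you, v_other))    # -5.0  -> you drift backwards for the other driver
print(velocity_in_frame(0.0, v_you))        # -60.0 -> the ground streams past you
```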



  3. How do we know that indeed the earth moves?

We compare the Earth to other things. We can compare it to the Sun, and therefore know that it is both spinning on its axis (rotating) and moving around the Sun (revolving). We compare the Sun to the Milky Way, noting that the entire solar system is orbiting the centre of our galaxy. We compare our galaxy to other galaxies, and note that it is drifting towards our local Great Attractor. And (this is where the controversy of the other answers comes into play) we can compare our speed to that of an object at rest with respect to the Cosmic Microwave Background Radiation (CMBR for short), and can even deduce how fast the local Great Attractor is moving compared to that.



So what is the controversy? The controversy is around the question, "Is there a universal reference frame?" In other words, is there such a thing as a point in the universe that we can say is the zero point from which we measure all positions and velocities?



Historically, this was fought out between Einstein and Lorentz. Before the Michelson-Morley experiments, it was thought that light travelled through a ghostly, stationary material called the "aether", which could be considered the universal reference frame. The Michelson-Morley experiments were set up to determine the Earth's relative motion through the aether, but they found none. Lorentz created a mathematical method that could account for this, but it was missing a physical basis. Then along came Einstein, who replaced the assumption of an aether with the assumption that light travels at a constant speed. From this he developed the now famous Special Theory of Relativity (although that is the name it subsequently came to be known by). This was a little too far for Lorentz, and he developed a counter theory, which slowly transformed into a theory very similar to Einstein's, with one exception - Lorentz believed that there was an, as yet undetectable, zero reference frame for the universe. Here is what he said in 1910:




Provided that there is an aether, then under all systems x, y, z, t, one is preferred by the fact, that the coordinate axes as well as the clocks are resting in the aether. If one connects with this the idea (which I would abandon only reluctantly) that space and time are completely different things, and that there is a "true time" (simultaneity thus would be independent of the location, in agreement with the circumstance that we can have the idea of infinitely great velocities), then it can be easily seen that this true time should be indicated by clocks at rest in the aether. However, if the relativity principle had general validity in nature, one wouldn't be in the position to determine, whether the reference system just used is the preferred one. Then one comes to the same results, as if one (following Einstein and Minkowski) deny the existence of the aether and of true time, and to see all reference systems as equally valid. Which of these two ways of thinking one is following, can surely be left to the individual.




Einstein maintained that not only was a universal reference frame undetected, it was unnecessary, and Occam's razor should rule (i.e. that if something is unnecessary for the theory to work, then it shouldn't be included in the theory). It is interesting that both theories actually predict the same things, but in the end, Einstein won. Most physicists these days would state that a universal reference frame does not exist.



Since that time, two key things have happened that impact this. First is the discovery of the expansion of the universe. This means that, even if we were to define an arbitrary universal zero position and time, other reference frames that start out stationary compared to this reference frame will start to move away, even though the objects there have undergone no acceleration. This greatly complicates any attempt to set a universal reference frame, and makes it much easier to consider that none exists. Tick one for Einstein.



The second thing was the discovery of the CMBR. This is the relic of a change in the entire universe around 380,000 years after the big bang. At that point, the universe went from opaque (where light couldn't travel very far without encountering some matter to interact with) to transparent (where some light could travel uninterrupted until someone like you measures it). Two interesting (and relevant) features of the CMBR were found - it comes from every direction (consequently it is universal, i.e. it can be seen from anywhere), and it is very consistent in intensity from every direction (termed isotropic). This means that anyone, anywhere, can measure their speed relative to this universally observable radiation. Furthermore, this effective reference frame can be used to measure velocity irrespective of the expansion of the universe, if one considers certain interpretations about comoving reference frames. Although this is very different from a universal reference frame defined by an aether, it is certainly, in one sense, a universal reference frame. Tick one to Lorentz.
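
For example, the measured dipole anisotropy of the CMB translates directly into our speed relative to it. A minimal sketch, assuming the commonly quoted dipole amplitude of about 3.36 mK:

```python
# Our velocity relative to the CMB rest frame, from the dipole anisotropy.
c = 299_792_458.0      # m/s
T_cmb = 2.725          # K, mean CMB temperature
dT_dipole = 3.36e-3    # K, dipole amplitude (assumed value)

v = c * dT_dipole / T_cmb          # first-order Doppler: dT/T ~ v/c
print(f"Speed relative to the CMB ~ {v / 1000:.0f} km/s")   # ~370 km/s
```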



So what does this mean? Does it mean that Einstein was wrong? No. Einstein was concerned with the physics of light and electrodynamics, of the relationship between time and position and velocity. The basis of Einstein's argument was that, no matter how fast you go relative to any other object, the physics of what you measure locally will be the same. The original concept of a universal frame implied otherwise, and Einstein has been proven experimentally correct over that original concept.



However, the concept of using the CMBR to define a universal reference frame is a slightly different idea than the original concept (and more in line with Lorentz's final theory). It does not imply any change in the local physics due to relative speed. But it is, as suggested by Rob Jeffries above, a useful reference frame. I would argue it is a special reference frame, in a manner that science has created special zero points throughout history (e.g. sea level, 0 degrees latitude, binding energy etc). None of these zero points change the physics of what is measured locally, but they make it much easier to compare measurements at different locations etc. In the same way, the CMBR reference frame makes it easier to answer a range of questions about non-local physics. For example, what is the age of the universe? (The age is maximum at the CMBR reference frame.)



So, in conclusion, when we see arguments between physicists about whether the CMBR is an absolute or universal reference frame, it is usually because one side or the other is interpreting the term in the manner of the original concept, with all its incorrect implications, rather than the lesser concept of a natural reference that defines one's speed relative to a universal feature of the universe.

Tuesday, 26 January 2010

gravity - Wouldn't the rings of Saturn experience tidal effect?

Most of the roughly 60 moons in the Saturn system are far from the rings and very small, so their effect on the rings is negligible. Larger moons that are closer in (such as Enceladus) do have a rather significant effect on the rings, but because their gravitational pull on the ring particles is directed radially outward, it is hardly visible. On the other hand, small moons inside the rings that are only a few kilometres in diameter cause significant changes in the ring system.





The orbits of those embedded moons are slightly inclined relative to the rings, which causes the ring material to bulge in the direction of the small moon. These effects are fairly significant: without a moon present the rings are mostly just a few hundred metres thick or less, but with one the material can be spread out over possibly hundreds of kilometres.

Why can we detect gravitational waves?

The short answer is that waves that are "in the apparatus" are indeed stretched. However the "fresh waves" being produced by the laser are not. So long as the "new" waves spend much less time in the interferometer than it takes to expand them (which takes roughly 1/gravitational wave frequency), then the effect you are talking about can be neglected.



Details:



There is an apparent paradox: you can think about the detection in two ways. On the one hand you can imagine that the lengths of the detector arms change and that the round-trip travel time of a light beam is subsequently changed and so the difference in the time-of-arrival of wavecrests translates into a phase difference that is detected in the interferometer. On the other hand you have the analogy to the expansion of the universe - if the arm length is changed, then isn't the wavelength of the light changed by exactly the same factor and so there can be no change in the phase difference? I guess this latter is your question.



Well clearly, the detector works so there must be a problem with the second interpretation. There is an excellent discussion of this by Saulson 1997, from which I give a summary.



Interpretation 1:



If the two arms are in the $x$ and $y$ directions, and the incoming wave travels in the $z$ direction, then the metric due to the wave can be written
$$ds^2 = -c^2 dt^2 + (1+ h(t))dx^2 + (1-h(t))dy^2,$$
where $h(t)$ is the strain of the gravitational wave.



For light travelling on geodesic paths the metric interval is $ds^2=0$. This means that (considering only the arm aligned along the x-axis for a moment)
$$c\,dt = \sqrt{1 + h(t)}\,dx \simeq \left(1 + \frac{1}{2}h(t)\right)dx$$
The time taken to travel the path is therefore increased to
$$\tau_+ = \int dt = \frac{1}{c}\int \left(1 + \frac{1}{2}h(t)\right)dx$$



If the original arm is of length $L$ and the perturbed arm length is $L(1+h/2)$, then the time difference for a photon to make the round trip along each arm is
$$\Delta \tau = \tau_+ - \tau_- \simeq \frac{2L}{c}h(t)$$
leading to a phase difference in the signals of
$$\Delta \phi = \frac{4\pi L}{\lambda} h(t)$$
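
As an illustration of the size of this effect, here is a minimal sketch plugging in LIGO-like numbers. The values below - a 1064 nm laser, the roughly 300 km effective path length quoted further down, and a strain of $10^{-21}$ - are assumptions for the purpose of the estimate:

```python
import math

# Illustrative numbers only (assumed, not official LIGO specifications).
L_eff = 3.0e5       # m, effective optical path length in one arm
h     = 1e-21       # gravitational-wave strain
lam   = 1.064e-6    # m, laser wavelength

dtau = 2 * L_eff * h / 299_792_458.0        # round-trip time difference
dphi = 4 * math.pi * L_eff * h / lam        # corresponding phase difference

print(f"Time difference  ~ {dtau:.1e} s")   # ~2e-24 s
print(f"Phase difference ~ {dphi:.1e} rad") # ~4e-9 rad
```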



Interpretation 2:



In analogy with the expansion of the universe, the gravitational wave does change the wavelength of light in each arm of the experiment. However, only the waves that are in the apparatus as the gravitational wave passes through can be affected.



Suppose that $h(t)$ is a step function, so that the arm changes length from $L$ to $L(1+h(0)/2)$ instantaneously. The waves that are just arriving back at the detector will be unaffected by this change, but subsequent wavecrests will each have had successively further to travel, and so a phase lag builds up gradually to the value defined above in interpretation 1. The time taken for the phase lag to build up will be $2L/c$.



But then what about the waves that enter the apparatus later? For those, the laser frequency is unchanged and as the speed of light is constant, then the wavelength is unchanged. These waves travel in a lengthened arm and therefore experience a phase lag exactly equivalent to interpretation 1.



In practice, the "buildup time" for the phase lag is short compared with the reciprocal of the frequency of the gravitational waves. For example the LIGO path length is about 300 km, so the "build up time" would be 0.002 s compared with the reciprocal of the $sim 100$ Hz signal of 0.01 s and so is relatively unimportant when interpreting the signal.
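
A quick check of those numbers (the 300 km effective path is the figure assumed in the text above):

```python
# Phase-lag build-up time versus gravitational-wave timescale.
c = 299_792_458.0   # m/s
L_eff = 3.0e5       # m, effective path length (300 km, as quoted in the text)
f_gw = 100.0        # Hz, typical signal frequency

t_build = 2 * L_eff / c     # ~0.002 s
t_gw = 1 / f_gw             # 0.01 s
print(f"build-up time {t_build:.3f} s  <<  GW timescale {t_gw:.2f} s")
```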

human biology - What evidence gives clues to the physiological basis for conversion disorder?

Conversion disorder has a set of DSM diagnosis criteria, which, among other things, includes ruling out all neurological disease.



However, as the media has shown us (and one could argue a biased portrayal), many of these young people in Le Roy, NY who were diagnosed with conversion disorder have exhibited tics and starts that are highly reminiscent of Tourette's Syndrome, which is thought to have some basis in pathology of the basal ganglia (and perhaps the thalamus and frontal cortex).



Granted, the issue is being looked into as having an environmental cause, so I can understand how the diagnosis may be reshaped, but I'm more curious about the initial diagnosis of "mass hysteria".



If this syndrome causes real physical symptoms and yet is "psychogenic", through what physiological means is the disease acting? Why would ruling out a neurological basis be a valid criterion?

Monday, 25 January 2010

gravity - Gravitational Propulsion - Astronomy

I am not a physicist, in fact I am just a family doctor, so I beg of you to excuse me of anything I say that you may find outrageous :)



But I came to this forum to see if anyone would be able to validate or offer a perspective on a documentary I watched today. It is an old one to say the least, probably from the 80s, but still has some very interesting information.



Basically, a supposed physicist by the name of Bob Lazar had been working on gravitational propulsion systems at Los Alamos Laboratory, and had been part of a project where they harnessed gravity for travel. Since gravity distorts spacetime, he proposed that a ship could do the same thing in order to travel. Furthermore, he said that by harnessing gravity and targeting gravitons at points in spacetime, it would create a divot at that location, and the ship could then move toward it in as little time as possible. Basically, this is not a propulsion system at all, but rather an "attractive" system. He was talking about a stable super-heavy element with 115 protons (element 115) that could create this "amplifying of gravity waves". I am really confused, but don't worry, I will provide a link to this information. Could someone comment on the validity of such a case?



http://www.karinya.com/travel1.htm

Thursday, 21 January 2010

formation - How are boulders formed on asteroids?

It looks like you are asking about rubble piles, asteroids that are made up of a large number of different sized objects that are weakly held together by gravity. A few of the component objects are large, but most are very small (down to grains of sand).



By way of analogy, think of playing pool. Rack the 15 target balls but leave the rack on. When you strike the rack with the cue ball, the cue ball just bounces off - and that's about it.



Now let's try again, but this time with the rack removed. To make the target balls even more rubble-pile-like, we'll use a piece of cardboard to add a bit of space between them. Now something very different happens when the cue ball strikes the target balls. The balls are only loosely connected to one another, which makes them very good at absorbing the momentum of the cue ball and at distributing the energy and momentum throughout the rack. Given a low to moderate cue-ball velocity, the collision will be close to perfectly inelastic.



And that's how you get a rubble pile. It's one inelastic collision after another after another. Over the course of 4.6 billion years, you have a pile of dust and sand with some rocks and a few large boulders mixed in.

Wednesday, 20 January 2010

microbiology - Antibacterial hand soaps (and related products); what are they good for?

I thought I would attempt to answer my own question. The only other answer currently here (by Larian LeQuella), while helpful, doesn't really answer my original question.



To begin, we need to put the results into context.



Fact: Hand-washing using tap water alone will reduce the amount of bacteria and viruses on hands (they might not actually be killed, but they are no longer on your hands).



Some examples:



  • Ansari et al. (1989) performed an experiment where a fecal solution was placed on participants' hands and subsequently washed off using one of a range of soaps. They found a reduction of around 83% in human rotavirus and 90% in E. coli.


  • Shojaeia et al. (2006) randomly selected 150 food handlers in Iran, and instructed them to wash and scrub their hands with sterile water. Before the intervention, around 73% were contaminated with bacteria (primarily Staphylococcus aureus or E. coli); afterwards, 32% were.


Fact: More bacteria and viruses are removed from hands when washing with some type of soap (not necessarily antibacterial).



Some examples:



  • Ansari et al. (1989) (op. cit.) found that certain agents (e.g. 70% isopropanol) can increase the reduction in E. coli and human rotavirus to around 98%. However, this was not true of all possible hand-washing agents.


  • Mbithi et al. (1993) also conducted an experiment in which a fecal solution was placed on participants' hands and subsequently washed off using one of a range of soaps. In this case, the hepatitis A virus (HAV) and poliovirus type 1 (PV) were considered. They found that tap water gave a reduction of around 80% for HAV and around 85% for PV. The reduction increased to around 88-92% for HAV and 90-98% for PV for most soaps tested.


[There will be zillions of additional examples to the above.]



A large survey by Aiello et al. (2007), which considered the efficacy of triclosan (one of the common active ingredients in antibacterial hand soap), found that, overall, previous research has indicated no significant additional benefit over ordinary hand soap. They write:




Soaps containing triclosan within the range of concentrations commonly
used in the community setting (0.1%–0.45% wt/vol) were no more
effective than plain soap at preventing infectious illness symptoms
and reducing bacterial levels on the hands.




This is probably not the last word on the matter, e.g. Fischler et al. (2007) claimed a significant difference in the transmission and acquisition of E. coli and Shigella flexneri after using an antibacterial soap. It is also plausible that antibacterial agents other than triclosan (where the focus has been) provide a health benefit (for example, methylchloroisothiazolinone/methylisothiazolinone). In fact, the Aiello et al. (op. cit.) paper itself indicates a non-negligible effect from >1% triclosan. Kimberly-Clark give a 99.9% or more bacterial killing efficacy for a range of bacteria in a 2003 in vitro experiment of their product; obtainable via Googling "kimberly-clark killing efficacy".



(Side note: There is also a suspicion that the use of triclosan will give rise to triclosan resistance or cross-resistance to antibiotics (e.g. by Levy (2001), Aielloa and Larson (2003), Yazdankhah et al. (2006)). Although, Cole et al. (2003) and Weber and Rutala (2006) claim otherwise in the case of antibiotic resistance.)



This leads to the original question: Which common diseases would likely be more affected by antibacterial hand soaps (and other antibacterial products) than their non-antibacterial counterpart? I.e. if we wanted to test the claim that a certain antibacterial hand soap has some positive effect (vs. a non-antibacterial version), what would be some common diseases that we could test for, and expect to find a non-negligible result?



I offer the following candidate for a bacterial disease whose effect should be noticeably reduced by the use of antibacterial hand soaps:



  • Staphylococcus aureus is a bacterial species that causes a range of noticeable illnesses and can be transferred by skin-to-skin contact and contact with contaminated objects. We can thus reasonably expect it to be affected by an effective antibacterial hand soap. In fact, washing hands is recommended for reducing the prevalence of S. aureus, along with zillions of other diseases (by e.g. Better Health Channel). This is one of the bacteria for which Kimberly-Clark quote a 99.99% in vitro kill efficacy for their product.

Although not directly related to hand-washing, Brady et al. (1990) used a 1% triclosan preparation and Zafar et al. (1995) used 0.3% triclosan to control methicillin-resistant Staphylococcus aureus.




The single additional measure of changing handwashing and bathing soap to a preparation containing 0.3% triclosan (Bacti-Stat) was associated with the immediate termination of the acute phase of the MRSA outbreak. -- Zafar et al.




Similarly, a 1.5% triclocarban bath has been used to treat atopic dermatitis (Breneman et al. 2000):




The antimicrobial soap regimen caused significantly greater
improvement in the severity and extent of skin lesions than the
placebo soap regimen, which correlated with reductions both in S
aureus in patients with positive cultures at baseline and in total
aerobic organisms.




The above results suggest that antibacterial soaps could have a beneficial effect against S. aureus (more than a non-antibacterial counterpart). However, it is unclear as to how much of this effect implies a comparable effect for hand washing alone (and in an ordinary household setting). Moreover, there are many different antibacterial hand soaps and other products, whose active ingredients might be at different dosage, all of which would further affect the outcomes.

Tuesday, 19 January 2010

sound - Would a woodwind instrument still play in outer space?

They wouldn't work.



In a pipe, wind blowing over the fipple, or past a reed causes vortices which give lots of different frequencies. Then at the open end of the pipe (or open finger holes) the change to a fixed pressure causes most of the vibration to be reflected back down the pipe, setting up standing waves at various harmonics, which produce the tone. With no atmosphere in space, the air would rush explosively down the pipe, there would be no reflection, no establishment of standing waves, hence no tone formed.



If you put a microphone on the body of the instrument you would probably pick up a lot of noise from the rushing air, but you would not get a tone. The air going down the pipe would be moving quite fast - don't put your head in the way!

angular resolution - Will the E-ELT use Adaptive Optics at visible wavelengths?

From looking at the E-ELT website, it appears that at first light, the AO will only work for near-IR. Specifically, the instrument that can use AO is the MICADO instrument. The description page for this instrument states




MICADO, or the Multi-Adaptive Optics Imaging Camera for Deep
Observations, is one of the first-light instruments for the European
Extremely Large Telescope (E-ELT) instrument and takes the Adaptive
Optics technique to the next level. It will be the first dedicated
imaging camera for the E-ELT and works with the multi-conjugate
adaptive optics module, MAORY.



MICADO will equip the E-ELT with a first light capability for
diffraction limited imaging at near-infrared wavelengths.




The MAORY instrument will be the specific AO instrument which, at first light will operate at




wavelengths from 0.8–2.4µm




So in short, the answer appears to be that the E-ELT will only have AO capabilities for near-IR at first light.



As to whether or not there are plans for visible AO capabilities, I couldn't find anything specific to that question. Given that first light for this project is 8 years away at best, I think that plans for future upgrades are tentative at best, and likely not easily accessible to those not on the project. I will say that while the MAORY instrument appears to be the intended AO system at first light, a second AO instrument, called ATLAS, is listed as one that will eventually make its way onto the telescope. While I can find no specific details of this particular instrument, it's possible that this will provide for visible AO.



On a tangentially related note, this telescope will also have active optics that will help to improve its imaging capabilities.

Monday, 18 January 2010

Does the event horizon of a black hole increase or decrease by adding mass?

The radius of the event horizon ($r_\mathrm{s}$) is directly proportional to the mass of the black hole ($M$). More exactly:



$$r_\mathrm{s} = \frac{2 G M}{c^2}$$



The black holes whose merger was detected by LIGO would each have been about 90 km in radius, and after merger, a little less than 180km.
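
A minimal sketch of that calculation, assuming roughly GW150914-like component masses of about 29 and 36 solar masses and a ~62 solar mass remnant (spin ignored for simplicity):

```python
# Schwarzschild radius r_s = 2GM/c^2 for a few illustrative masses.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 299_792_458.0    # m/s
M_sun = 1.989e30     # kg

def schwarzschild_radius_km(mass_solar):
    return 2 * G * (mass_solar * M_sun) / c**2 / 1000.0

for m in (29, 36, 62):   # assumed pre- and post-merger masses
    print(f"{m:>3} solar masses -> r_s ~ {schwarzschild_radius_km(m):.0f} km")
```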



Read about the Schwarzschild radius

Saturday, 16 January 2010

observation - Calculating longitude from star culmination

This is my first post, and I've accidentally deleted almost all I've written!



a later edit with an answer



The RA of a culminating star gives you the local sidereal time; comparing it with the Greenwich Sidereal Time then gives your longitude from the difference between the two.



If local sidereal time is less than Greenwich, your position is west of the prime meridian at Greenwich; if it is more, then your position is east of Greenwich.



Four minutes of Right Ascension = one degree of longitude.
One minute of RA = 15 arcminutes of longitude.



A worked example:



Greenwich Sidereal Time: 17:42
Local Sidereal Time: 16:36



Greenwich is ahead of (more than) local, therefore local is west.


The difference (17:42 − 16:36) = 42 minutes plus 24 minutes



= 1 hour 6 minutes of RA to the west of Greenwich



Thus, 60 minutes = 15° longitude



Add 4 minutes = 1° longitude



Add 2 minutes = 0°30' longitude



= longitude 16°30'W



Which is within 0°05' of the usually quoted figure for my location.
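
A minimal sketch of the same conversion in Python, using the worked example's sidereal times:

```python
def longitude_from_sidereal(gst_hours, lst_hours):
    """Longitude in degrees (east positive) from Greenwich and local sidereal time."""
    diff_hours = lst_hours - gst_hours          # LST - GST
    if diff_hours > 12:  diff_hours -= 24       # wrap into the range -12..+12 hours
    if diff_hours < -12: diff_hours += 24
    return diff_hours * 15.0                    # 1 hour of RA = 15 degrees

gst = 17 + 42 / 60   # Greenwich Sidereal Time 17:42
lst = 16 + 36 / 60   # Local Sidereal Time 16:36
lon = longitude_from_sidereal(gst, lst)
print(f"Longitude = {abs(lon):.1f} degrees {'W' if lon < 0 else 'E'}")   # 16.5 W
```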



http://www.abecedarical.com/javascript/script_clock.html will show GST



I use a Sidereal Clock app on my Android tablet, set to show both GST and local sidereal time at my location.



information for background



This is the historical background for the search for a reliable method of calculating longitude when at sea



The Quest for Longitude and the Rise of Greenwich – a Brief History



I've edited this several times because I'm finding my way around how this works and made several editing mistakes! I've also tried to add another link, and got an error message, but no explanation. Is there a limit to the number of hyperlinks permitted?

Friday, 15 January 2010

dna sequencing - Questions to ask to a panel of people that will be sequenced

One that I wish we asked the subjects in our studies is: When do you normally go to sleep each evening and when do you normally wake up? This can get more detailed, but these 2 basic questions can allow segregation into morning or evening types (or both).



Deep phenotyping is critical to a better understanding of the genetic influence on disease risk phenotypes, and so one should collect lifestyle data - like diet, sleep, exercise - which all influence the genotype-phenotype association (these are called gene-environment interactions). If you can come up with phenotypes that distinguish different paths to the "same" disease outcome, that would be relevant to ask. In other words, there are often many paths to a very similar disease outcome, but as those paths are different, the disease may be slightly different. This requires sub-classification of disease - think of the different types of breast cancer. So, collecting these phenotypes, if you know them, in conjunction with diet, exercise, etc, will give you some very interesting possibilities.

hydrostatic equilibrium - Are larger stars rounder?

In terms of mean angular velocity, the distribution of rotation rates among main-sequence stars is well known. Allen (1963) compiled data on mass, radius, and equatorial velocity, which was then expanded upon by McNally (1965), who focused on angular velocity and angular momentum. It became clear that angular velocity rises from low rates at spectral type G and later to a peak around type A, and then slowly decreases towards earlier types.



Equatorial velocity continues increasing up to mid-B type stars before slowly decreasing, but because of the larger radii of O and B type main-sequence stars, the peak in angular velocity occurs before this. As Jean-Louis Tassoul notes in Stellar Rotation, many O type stars have rotational periods similar to those of G-type stars like the Sun!



The distribution is not smooth and uniform (McNally noticed a strange discontinuity in angular momentum per unit mass near A0 and A5 stars; see his Figure 2); Barnes (2003) observed two distinct populations in open clusters, consisting of slower rotators (the I sequence) and faster rotators (the C sequence). Stars may migrate from one sequence to the other as they evolve. Interestingly enough, stars on the I sequence lose angular momentum $J$ faster than stars on the C sequence:
$$\frac{\mathrm{d}J}{\mathrm{d}t}\propto-\omega^n,\quad\text{where}\quad\begin{cases}
n=3\text{ on the I sequence}\\
n=1\text{ on the C sequence}
\end{cases}$$
Here, of course, $\omega$ is angular velocity. These results obey Skumanich's law.



Oblateness can be determined from mass, radius, and angular velocity as
$$f=\frac{5\omega^2R^3}{4GM}$$
Using this and McNally's data, some quick calculations get me the following table:
$$\begin{array}{|c|c|}
\hline \text{Spectral type} & f/f(\text{O5}) \\
\hline \text{O5} & 1\\
\hline \text{B0} & 1.28\\
\hline \text{B5} & 1.84\\
\hline \text{A0} & 1.67\\
\hline \text{A5} & 1.35\\
\hline \text{F0} & 0.482\\
\hline \text{F5} & 0.0387\\
\hline \text{G0} & 0.000314\\
\hline
\end{array}$$
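
As an illustration of how the formula is applied, here is a minimal sketch evaluating $f$ for roughly solar values. The rotation period, radius, and mass below are assumed round numbers, and real stars, being centrally condensed, are somewhat less oblate than this estimate suggests:

```python
import math

# Oblateness estimate f = 5 * omega^2 * R^3 / (4 G M) for rough solar values.
G = 6.674e-11                 # m^3 kg^-1 s^-2
M = 1.989e30                  # kg
R = 6.957e8                   # m
P = 25.4 * 86400              # s, assumed equatorial rotation period

omega = 2 * math.pi / P
f = 5 * omega**2 * R**3 / (4 * G * M)
print(f"Oblateness f ~ {f:.1e}")   # of order 1e-5, i.e. the star is very round
```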

Wednesday, 13 January 2010

isro - Utilization of data from satellites maintained by India

Right now, the data from ASTROSAT is not available, as the instruments are still being checked. According to the Astronomical Society of India,




ASTROSAT is a public observatory and is therefore available for any potential researcher in India and abroad. The first 6 months after launch will be devoted to extensive tests of all the systems. The next 6 months will be for observations of the sky by the teams that built the instruments. However, a year after the launch, a certain fraction of ASTROSAT time will be available for any scientist in India who proposes an observation that passes review. Two years after the launch, international scientists can apply as well. In addition, once an observation is done by ASTROSAT and some time has passed, the entire data will be made public to anyone who is interested !




For using data from Mangalyaan, a proposal has to be submitted to ISRO:




Proposals could be submitted by individuals or a group of scientists, academicians belonging to recognized institutions, universities and government organizations of India.




The present deadline for submission of proposals has already passed (Oct 10). I think the rules for using data from the Mars Orbiter Mission (MOM) are similar to those for ASTROSAT. First, the data will be available to Indian and partner institutions, and then to others. In general, the data will be made available to the public after some time, usually after the results have been published.

Sunday, 10 January 2010

neuroscience - How does Golgi's neural histological stain work?

Some background information.



First of all, one should note that Golgi staining belongs to the so-called morphological types of staining in neuroscience, where the actual anatomy of different neurons is revealed (as opposed to other techniques, like Ca-imaging or potential imaging).



Second, Golgi staining is a type of silver staining, where the deposition of silver or its salts (here: silver chromate) reveals the morphological traits of the cells.



And, third, Golgi staining is applied to fixed preparations. This means that the tissue (normally a brain slice) is pre-treated (here: with formol) to kill all cells and arrest every biological process.



What is known about the targets.



The common description of the target is quoted as "a limited number of cells at random in their entirety". This is the essence of Golgi staining:



  1. Only single cells are stained, so there is no interference from stained neighbouring cells when deriving the morphological structure of a cell (very important for light microscopy, where you get integrated input from different depths).


  2. The cells are stained at random; there is no known preference among neuronal cells for Golgi staining, and I haven't seen any other types of CNS cells stained with Golgi, so it is quite specific for neurons (leaving macro- and microglia, astrocytes etc. unstained).


  3. The cells are stained in their entirety, meaning that the complete cell is stained very nicely, showing the detailed arborisation of the dendritic tree; this was very important in the study of Purkinje cells in the cerebellum.


Are larger neurons more likely to be stained? Are specific cell types more susceptible than others?



Golgi staining is used mostly for brain slices (I have never seen or heard of its application to other tissues). Traditionally, some of the biggest cells here are pyramidal neurons and some of the smallest are interneurons (often GABA-ergic) -- both are amenable to Golgi staining (reference), and there is seemingly no clustering of stained cells by size or transmitter type.



What is preventing us from using the advanced molecular biology techniques to understand the process?



I can name several reasons for this:



  1. Since Golgi staining is applied to fixed preparations, the tissue is already "damaged" (formol leads to desiccation and shrinkage of the cells), so it is difficult to use some of the finer molecular biology methods to investigate these tissues.


  2. There is no way to tell beforehand which cells will get stained. And once the microcrystallisation of silver chromate has started, it can't (easily) be stopped or reversed. It is therefore difficult to look afterwards at what caused the staining, when the whole cell is already impregnated with silver.


  3. I think there have been no real attempts to crack the mystery of this staining: however interesting it might be, it seems to be an interdisciplinary question on the border between biology and chemistry. So maybe one day somebody will look into it and explain everything.


Saturday, 9 January 2010

Reasoning questions on eclipses - Astronomy

Let me put it this way. If the Moon's orbital plane were exactly aligned with the Earth's orbital plane (which, by the way, contains the Sun, not surprisingly), then there would be a solar eclipse somewhere on Earth at every New Moon. But in fact the Moon's orbital plane is tilted by several degrees, and its orientation shifts about in a very complicated fashion. Figuring out why this is, and how the plane moves, kept Isaac Newton busy for a few years.



Both the Moon and the Sun subtend about half a degree on the sky. The distance to the Moon also changes considerably, so its angular size varies around this half degree. So, to get a total solar eclipse, the two need to be aligned to a very high degree, and the Moon must not be near the far end of its distance range.
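
A minimal sketch of those angular sizes, assuming mean distances and diameters:

```python
import math

def angular_diameter_deg(diameter_km, distance_km):
    return math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))

# Assumed mean values.
print(f"Sun : {angular_diameter_deg(1.3927e6, 1.496e8):.3f} deg")   # ~0.53 deg
print(f"Moon: {angular_diameter_deg(3474.8,   3.844e5):.3f} deg")   # ~0.52 deg
```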

Friday, 8 January 2010

planet - Is atmospheric turbulence irrelevant for exoplanetary transits and radial velocity measurements?

Atmospheric turbulence is known to scatter photons in a quasi-random way along their path throughout the atmosphere, resulting in lower imaging resolution than would have been anticipated by instrument-only considerations.



I have been wondering whether the same effects can play a relevant role in limiting the sensitivities for photometry in transits or for spectrometry in radial velocity measurements.



My thoughts so far:



  • Transits: as I'm no observer, I don't know if atmospheric turbulence is actually strong enough to scatter source photons out of the line of sight, rendering them undetected. This would fiddle with the signal-to-noise ratio per measurement and would let it fluctuate over time.

  • Radial velocity: Turbulence should be able to influence a spectral measurement from the ground if the induced turbulent broadening is significant compared to the line width that can be resolved with the instrument considered. Taking the turbulence-induced Doppler shift $\Delta v/c$ as $10\,\mathrm{cm/s}/c \sim 10^{-7}$ (I assumed turbulent eddy velocities to be comparable to typical winds) as typical for Earth's atmosphere, this should be insignificant even for a high-resolution spectrograph like HARPS, which has $\lambda/\Delta \lambda \sim 10^5$.
    However, smaller eddies rotate faster; they could thus reach the detectability range when $\Delta v/c \sim 10^{-5}$.

Here my expertise in this topic ends, and I'd hope for someone from this community to illuminate the points above. Also googling usually only points to the benefits in direct imaging.
Bonus question: Would adaptive optics always help to remedy any issues that might arise?

Thursday, 7 January 2010

galaxy - Stellar systems: what is the difference between virial, dynamical and thermodynamic equilibrium?

I'm currently going through Binney & Tremaine (2008) on my own to learn about stellar dynamics. I also have been perusing additional online resources such as this scholarpedia wiki.



Often when distinguishing between collision-less vs collisional stellar systems, the virial theorem is invoked along with the equations for "crossing time" (also known as "dynamical time") and "relaxation time." A large galaxy is said to be collision-less because its relaxation time is many orders of magnitude higher than its age, whereas a dense stellar system (e.g., a globular cluster) is collisional because its relaxation time is less than its age.



But what is the relationship between this so-called "relaxed" state and virial, dynamical, and thermodynamic equilibrium? What do the three different kinds of equilibria intuitively mean?



For example, I have heard that large galaxies are assumed to be in virial equilibrium and then people derive "dynamical masses" (why not "virial masses"?). What would it take and/or mean for a large elliptical galaxy to be not just in virial equilibrium, but also in dynamical or thermodynamic equilibrium?

human biology - Does muscle growth trigger angiogenesis?

So heavier people generally have more blood than lighter people (this is why heavier people generally need higher doses of a medication for the same effect). They also have more blood to draw from. But this fails to differentiate between muscle mass and fat mass.



So here's my question: Is the net angiogenesis per gram of muscle more or less than the net angiogenesis per gram of fat?

Monday, 4 January 2010

Expansion of Space - Astronomy

The Universe is remarkably homogeneous, i.e. the same everywhere, and isotropic, i.e. the same in every direction (NB homogeneity does not imply isotropy and there are toy cosmological models which are homogeneous but not isotropic) and this is the underlying physical assumption of big bang cosmology.



When modelling the Universe, the simplification that it is 100% homogeneous and isotropic is most often used, because on the very largest scale (i.e. the scale of the observable Universe) this is very close to being true, and any deviations from it are small enough that they do not affect most of the properties being studied. In a 100% homogeneous and isotropic Universe, expansion takes place at all places and on all scales.



However, we know that the Universe is not 100% homogeneous and isotropic, especially on smaller scales; for example, the centre of a star is a very different place from intergalactic space. In view of this, it is a fair question to ask how expansion manifests itself on smaller scales.



We know that in gravitationally bound systems such as our Local Group (which includes Andromeda), matter tends to be drawn closer by gravity rather than to move apart as objects do in an expanding Universe, so it seems our Local Group is not expanding. Still, it is very tempting to picture expansion on this scale as a small repulsive force which acts against gravity but is not strong enough to overcome it - in that sense, even the Local Group would be "expanding".



However, the view of expansion acting like a small repulsive force on small scales is not necessarily the right one. A counterexample is the Einstein-Straus "Swiss cheese" model, in which expansion only takes place in deep space and not in the space around stars (modelled as vacuoles). The Einstein-Straus model, though, cannot be taken as the last word on the subject, as the way it models stars in space is not that realistic.



Overall, then, how expansion manifests itself on smaller scales is still very much an open question, and we just don't know what tiny corrections, if any, it would make to the dynamics of a very small system such as the Solar System.

Sunday, 3 January 2010

Is Earth's 1g solid surface gravity unusually high for exoplanets?

Ultimately we don't know enough about exoplanets to be sure; for now our data is skewed towards more massive planets, which are easier to detect using the Doppler wobble, and large-diameter planets (almost certainly gas giants), which are easy to detect by the dimming of their host star when they eclipse it relative to us. More data is coming in every day, and as fantastic as Kepler has been, I think we need to at least hold out for the James Webb Space Telescope to be online before we draw any really hard conclusions from the data.



Without the data, all we can rely on is our theories of planet formation, which are fairly well developed.
Earth is probably denser than an average planet of its size, as a result of its collision with a roughly Mars-sized object (nicknamed Theia) early in its development. Theia's core would have been absorbed into Earth's core, but the outer layers of both bodies were stripped away, creating a ring which would coalesce into our Moon. This would leave Earth with a more massive core than a planet forming at its distance would otherwise have.



We can see this in the densities of the terrestrial planets:

Object    Mean density (g/cm³)   Uncompressed density (g/cm³)   Semi-major axis (AU)
Mercury   5.4                    5.3                            0.39
Venus     5.2                    4.4                            0.72
Earth     5.5                    4.4                            1.0
Mars      3.9                    3.8                            1.5

Credit: Wikipedia



Planets closer to their star are naturally going to have higher densities as a result of mass differentiation; denser material settling to the core of a planet or the center of a solar accretion disk.



Looking at it from the perspective of habitability: we know that density is positively correlated with surface gravity, so we can expect that Earth has a slightly higher than average surface gravity for a planet in the habitable zone around a star of about one solar mass.
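
To make the density-gravity link concrete: for a given radius, surface gravity scales linearly with bulk density, since $g = GM/R^2 = \frac{4}{3}\pi G \rho R$. A minimal sketch, assuming an Earth-like radius and a couple of illustrative densities:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
R_earth = 6.371e6  # m

def surface_gravity(density_g_cm3, radius_m=R_earth):
    rho = density_g_cm3 * 1000.0               # g/cm^3 -> kg/m^3
    return 4.0 / 3.0 * math.pi * G * rho * radius_m

for rho in (3.9, 5.5):   # roughly Mars-like vs Earth-like bulk density
    print(f"rho = {rho} g/cm^3 -> g ~ {surface_gravity(rho):.1f} m/s^2")
```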



That being said, most stars do not have one solar mass; most of the stars in the universe are red dwarfs, which are much dimmer and lighter than our Sun and have a closer, narrower habitable zone. A habitable planet around a red dwarf would probably be smaller and lighter, but denser, owing to its lower-mass accretion cloud and its closer proximity to its star, respectively.



I think we could expect the majority of exoplanets to be planets similar to Mercury orbiting red dwarfs.
If this is the case, we can expect that Earth has a high surface gravity relative to other terrestrial planets (although FAR more massive terrestrial planets of similar diameter exist) and about average gravity when taking all planets into account.

Saturday, 2 January 2010

planet - Beginner telescope for my daughter

I have been looking to get into astronomy myself, and I have been impressed with the reviews of the Celestron FirstScope, which is around $50. It has a decent optic size and is small enough to pack up and take out to a clear area if you live in a high light-pollution area. The only drawback I can find with the scope (without actually having used it) is that it doesn't have a viewfinder, but you can add one of those easily.

Friday, 1 January 2010

transcription - tools to reconstruct the transcriptional regulatory circuits?

Inferring transcriptional / regulatory networks from empirical data is an active area of research, and to my knowledge there aren't many mature tools for this type of analysis. I see mostly mathematicians, statisticians, and engineers working on this problem, probably because of the intense quantitative theory involved. Even if mature tools do exist, I doubt they're tailored for the typical biologist--more likely, they are geared toward scientists with a more quantitative background.



That being said, I am aware of 2 or 3 pieces of software that may provide a starting point for the curious or the adventurous: AIRnet (described here), iBioSim (described by Barker's PhD thesis, currently the second hit on this Google search), and maybe Ingenuity Pathways Analysis (which requires a paid license). The only one of these tools I've even tried to use is iBioSim, and at the time (2 or so years ago) it was a very kludgy process.