Tuesday 27 April 2010

What happens if two or three stars collide with each other?

The collision of another star with our sun would be a very effective way of ending all life on Earth. Fortunately, stars are so far apart that the chance of it happening during the sun's lifetime is very close to zero.



However, in the densely packed cores of globular clusters the rate of collision is much higher, and there may be a collision somewhere in the Milky Way once every 10,000 years or so. Three-way collisions don't happen (or are exceptionally unlikely), but they would resemble two successive two-star collisions.



The result varies depending on the sizes of the stars, their densities, their relative velocity, and whether the collision is head-on or not. Generally, a denser star is likely to come out of a collision in better shape. A red giant, for example, could have most of its very rarefied outer layers stripped away while its dense core escapes.



On the other hand, if two main-sequence stars like our sun collide, they first compress each other. Some of the stellar envelope escapes perpendicular to the collision axis (think of jam being squeezed out of a sandwich), but within a few hours the two stars have mixed and merged, and a new, larger star has formed.



The mixing adds lots more hydrogen to the core of the star, which rejuvenates it. The first evidence of stellar collisions came from globular clusters, which contain some large blue stars that seem younger than all the other stars in the cluster ("blue stragglers").



A collision between a white dwarf and a main-sequence star is bad news for the star. The collision heats the star's core and triggers a massive release of thermonuclear energy, far more than is needed to completely disrupt it; the star is destroyed in the process. The white dwarf, however, emerges relatively unscathed.



White dwarf-white dwarf collisions can, depending on the combined mass, result in gravitational collapse to a neutron star.



The paper "Stellar collisions, mergers and their consequences" is a very readable summary.

solar system - How did Batygin estimate when Planet Nine was ejected?

Page 12 of Batygin & Brown (2016) says that a speculative formation scenario can be drawn from recent solar system formation simulations by Bromley & Kenyon and by Izidoro et al. These suggest that, to explain the properties of the observed planets, the formation of Uranus and Neptune was probably accompanied by at least one other ice giant, whose core may have been ejected very early in the solar system's history.



The source of the claim you mention is actually an article by Eric Hand in Science Magazine. He points out that, to explain why this planet was ejected yet still remains part of the solar system in a much wider orbit, you may need it to have been slowed down by residual gas in the protosolar disk. So I assume the 10-million-year figure is an (uncertain) upper limit on the dispersal of the disk, and I would guess the 3-million-year lower limit is just how long it takes to form a 10 Earth-mass ice-giant core at the distance of Neptune.

Monday 26 April 2010

classification - Distribution of stars - Astronomy

I'm looking for the distribution of stars, in percent, across the Harvard classification (from O to M).



I have searched everywhere but found nothing. I'm not looking for exact values, just rough generic ones.



After that, within each class I'm going to need the distribution across the luminosity classes.



If you have any link, or any lead that could help, I'll take it.



I'm working on a space video game, and I'm going to generate solar systems :)
I have written a first script which generates plausible values, but not good enough for us: http://global-chaos.fr/old/generator/



In this first generator we simplified the star classification, but we want to move to a more realistic classification, with more diversity of stars.
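If it helps with the generator: here is a minimal Python sketch that draws spectral classes from the commonly quoted rough frequencies of main-sequence stars (O is vanishingly rare, M around three quarters of all stars). These weights are approximate and worth tuning for your game.

```python
import random

# Rough fractions of main-sequence stars per Harvard class.
# Approximate, commonly quoted values; tune them for your game.
CLASS_WEIGHTS = {
    "O": 0.0000003,
    "B": 0.0013,
    "A": 0.006,
    "F": 0.03,
    "G": 0.076,
    "K": 0.121,
    "M": 0.765,
}

def random_spectral_class(rng=random):
    """Draw one Harvard spectral class, weighted by rough abundance."""
    classes = list(CLASS_WEIGHTS)
    weights = list(CLASS_WEIGHTS.values())
    return rng.choices(classes, weights=weights, k=1)[0]

# Generate a small mock population and show the class counts.
population = [random_spectral_class() for _ in range(10_000)]
print({c: population.count(c) for c in "OBAFGKM"})
```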

nebula - Will new stars stop forming at some point of time?

Cosmic GDP has already crashed, as Peak Star was ~11 billion years ago.



According to Sobral et al.'s prediction, all future star formation will add only about 5% to the stellar mass that exists in the universe today, "even if we wait forever." More theoretical predictions, such as this one, suggest that nebulae will run out of hydrogen on the order of $10^{13}$ years, while star formation will still occasionally happen, due to collisions of brown dwarfs, until somewhere on the order of $10^{14}$ years.



Of course, hydrogen itself may have a finite lifetime. The half-life of a proton is experimentally known to be longer than $10^{34}$ years, but it may still be quite finite.

Sunday 25 April 2010

Analysing the results of real-time PCR

You also probably need to check whether your samples have been contaminated with PCR reaction inhibitors, which is very common if you first extract your mRNA, digest the remaining DNA, and then run a PCR at less than a 10-fold dilution. You need at least a 200-fold dilution to get rid of all the artifacts.



Once you've diluted your samples, run replicates of successive dilutions of your target and baseline mRNA (200x, 1000x and 5000x dilutions). Normally you will see a nice, regular spacing between the curves on your rtPCR plot; if you don't, something went wrong with the reaction.
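As a sanity check on that spacing: with amplification efficiency E (1.0 = perfect doubling per cycle), a d-fold dilution should shift the quantification cycle by log(d) / log(1 + E). A minimal sketch, assuming perfect efficiency:

```python
import math

def expected_ct_shift(dilution_factor, efficiency=1.0):
    """Cycles separating two successive dilutions on an rtPCR plot.

    With amplification efficiency E (1.0 = perfect doubling), a d-fold
    dilution needs log(d) / log(1 + E) extra cycles to reach threshold.
    """
    return math.log(dilution_factor) / math.log(1.0 + efficiency)

# Spacing expected between the 200x, 1000x and 5000x dilutions above:
for step in (1000 / 200, 5000 / 1000):
    print(f"{step:.0f}-fold step: {expected_ct_shift(step):.2f} cycles")
```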



More generally, rtPCR is a pretty tricky experiment, and if you want it to be 100% reliable there are a number of controls to perform. You can find a good, comprehensive guideline over here.

extra terrestrial - Could the KIC 8462852 (Alien megastructure star) be explained by orbiting gas clouds?

There are some theories to explain that. The following explanations I took from Wikipedia's article.



  1. The star has a small red dwarf companion which recently crossed its Oort cloud equivalent (at 885 AU). A star passing this close would surely cause havoc and seriously disturb comets' orbits. This could result in a swarm of comets being thrown into the inner system at once, on highly eccentric orbits. However, there are severe doubts about this.


  2. It was also proposed that a planet with a very big ring system transits the star (or perhaps nearly misses, with only part of the rings transiting). This is not unprecedented, and has already been seen with another star.


  3. Astronomer Jason Wright suggested that the star might be younger than it seems and is still coalescing material around it.


  4. Astronomer Bradley Schaefer presented just a few days ago (2016-01-13) a study in which he concludes that the star dimmed roughly 20% from 1890 to 1989, which is unprecedented for an F-type star. So there could still be more weird things going on than previously thought.


  5. The NASA Infrared Telescope Facility found similarities with another star, Eta Corvi, which is undergoing a Late Heavy Bombardment.


Wikipedia's article also cites another possibility (which grabbed some headlines around the world). But this is just a (unfortunately notable) sensationalistic, pseudo-journalistic claim that nobody can take seriously:



  1. There is an advanced alien civilization building a giant device like a Dyson Sphere or something similarly big.

LocalFluff posted an answer suggesting another possibility:




  1. The anomalies correlate strictly with the telescope's orbital period and orientation and are obviously a man-made artefact caused by some unforeseen malfunction of the instrument; nothing astrophysical is involved.



So, the conclusion (at least my conclusion) is that we simply don't know yet what is happening, and we need many follow-up observations, which will likely tell us a lot. Especially around May 2017, when the strange "megastructure" transit is predicted to happen again.



I personally would guess that the close encounter with the red dwarf triggered a Late Heavy Bombardment. But this is only a guess of mine, and I have no way to provide evidence for it.

Friday 23 April 2010

the moon - Is extraterrestrial mining more difficult or impractical for bodies without plate tectonics?

I vaguely remember my dad talking about this.



Uranium and other heavy elements are dense. When a celestial body is molten (early in its life), the heavy elements sink towards its core. With plate tectonics, some of those heavy elements are brought back up to the surface. This is why we can mine uranium near the surface on Earth: it was carried back up by convection currents.
In conclusion, it's probably not worth it at all to mine on a non-tectonically active planet or moon.



If anyone can extend this, please do.

Thursday 22 April 2010

space - humans leave earth forever?

NASA is sending humans to Mars because it has an evolutionary history very similar to Earth's, meaning we can discover more about the past and future of our own planet. Mars may have been capable of supporting life in the past, and if we can work this out we can address the fundamental question: does life exist anywhere else?



Now, about humans leaving Earth. Humanity will have to leave Earth at some point, due to issues like overpopulation and resource shortages, and NASA, along with many other scientists, is planning things like human missions to asteroids and, eventually, the technology for human missions into deep space. Astronauts on the ISS are helping us understand how the body changes in space (for example, fluids inside your body aren't pulled down by gravity), and they are helping us prove the technologies needed for deep-space exploration. Here are two articles about Stephen Hawking arguing that we need to leave Earth: http://www.space.com/8924-stephen-hawking-humanity-won-survive-leaving-earth.html and http://www.theregister.co.uk/2015/02/20/hawking_alert_leave_planet_earth_stupid_humans/



As for exploring further into space, and why we haven't encountered aliens: some scientists believe that there are many, many sentient civilisations out there, but that they wipe themselves out so quickly they never cross paths.



Furthermore, there are many things to think about if we are to travel to and colonise other planets: the planet's gravity will most likely be different, and the atmosphere will have a different density. This is why we need to expand our presence in the solar system, and one day send a long-term crewed mission to Mars.

Claim that 30-m class telescopes will have resolution far superior to Hubble: true?

This article makes the claim that the Giant Magellan Telescope (GMT, number 4 in the list) will have resolution 10 times better than that of Hubble, while the Thirty Meter Telescope (TMT, number 3 in the list) will have resolution 12 times better than that of Hubble. These are claims I find impossible to believe.



A simple application of, for instance, the Rayleigh criterion ($\theta = 1.22~\lambda/D$, where $\theta$ is the angular resolution of the telescope, $\lambda$ is the wavelength of the light in question, and $D$ is the diameter of the telescope) shows that comparing angular resolutions is as simple as comparing the diameters of the telescope apertures. Hubble's mirror is 2.4 m; comparing with the 24.5 m GMT and the 30 m TMT, we see that the GMT would have 10.2 times the angular resolution of Hubble, while the TMT would have 12.5 times. I suspect a calculation like this is how the article linked above came up with its numbers.
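For what it's worth, the arithmetic is easy to check. A minimal sketch, assuming visible light at 550 nm (the exact wavelength only scales the absolute numbers, not the ratios):

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600

def rayleigh_limit_arcsec(diameter_m, wavelength_m=550e-9):
    """Diffraction-limited resolution: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

hst, gmt, tmt = 2.4, 24.5, 30.0
print(f"HST: {rayleigh_limit_arcsec(hst):.3f} arcsec")          # ~0.058 arcsec
print(f"GMT: {gmt / hst:.1f}x HST, TMT: {tmt / hst:.1f}x HST")  # 10.2x, 12.5x
```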



However, the Rayleigh criterion only applies to telescopes working at their diffraction limit. Space telescopes (if they're designed and built correctly) can work close to the diffraction limit, perhaps even at it. Ground-based telescopes, however, are limited in angular resolution by the atmosphere, which limits the seeing to about an arcsecond even at the best sites on Earth. Thus the GMT and TMT, by themselves, will not have better image resolution than Hubble.



My question, then, is whether this article is correct (possibly for one of the reasons I list below) or whether it naively applied the Rayleigh criterion for angular resolution with no thought about how the atmosphere affects the resolving capabilities of these large ground-based telescopes.



Possible reasons the article may still be correct:



  • Adaptive optics, a developing technology that allows telescopes to correct for distortions introduced by the atmosphere. Perhaps the GMT and TMT will have very fancy adaptive optics systems.

  • Another technique, such as speckle imaging or lucky imaging.

  • The article could be referring to spectral resolution rather than angular resolution (or image resolution). However, obtaining good spectral resolution is as much the job of the spectrograph as of the telescope, so I don't consider spectral resolution an inherent property of a telescope.

Tuesday 20 April 2010

bioinformatics - First pass protein structure prediction

Have you used HHpred to search for homologues? What were your criteria for deciding there were no homologues? You could potentially go down to around 30% sequence identity and still model.



I would submit the query sequence to both I-TASSER and Rosetta and see if the two servers agree on the topology. I-TASSER will always provide 5 models of your query sequence (even if they are rubbish), ranked by a confidence score.



While you wait for the above servers to finish, I would submit the sequence to a number of secondary structure prediction servers to reach a general consensus on the secondary structure of your protein. Some servers you could use are:



If the servers come to a consensus, compare the secondary structure prediction with the secondary structure of the models, to see whether the ab initio modelling concurs with the prediction. This will give you more reason to be confident in your model.



I would then validate your models using:



  • Prosa - This calculates a Z-score and plots where your model falls among structures of similar amino-acid length from the PDB.

  • DOPE score - This is implemented in Modeller and measures the "nativeness" of your model; the lower the score, the better. You will need to install Python and have some knowledge of how to use the command line. A minimal sketch of running DOPE with Modeller follows below.
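Since I offered it, here is that sketch, closely following the standard Modeller evaluation recipe. It assumes Modeller is installed and licensed, and 'model.pdb' is a placeholder for your own model file.

```python
from modeller import environ, selection
from modeller.scripts import complete_pdb

env = environ()
env.libs.topology.read(file='$(LIB)/top_heav.lib')   # heavy-atom topology
env.libs.parameters.read(file='$(LIB)/par.lib')      # parameter library

mdl = complete_pdb(env, 'model.pdb')   # 'model.pdb' is a placeholder name
atmsel = selection(mdl)                # select all atoms in the model
score = atmsel.assess_dope()           # DOPE: lower (more negative) is better
print(score)
```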

Finally, are there any constraints you could introduce, such as an oligomerisation site or a protein-protein interaction site, to help with the modelling?

Sunday 18 April 2010

Oxygen in a methane atmosphere akin to methane in oxygen atmosphere?

I agree with Cipher's answer, but just to add: from a certain point of view, oxygen could be stored and used like a fuel in a situation like that.



Now, as far as aliens doing so: I have some doubt that you can have life (intelligent life, if they're using fuel) on a planet with a nitrogen/CH4 atmosphere, because I don't see how the respiratory cycles would balance out. I looked into that a little bit and it gets pretty complicated, so I don't want to say it's impossible, but I suspect it's unlikely. Still, if we take your scenario and assume there's intelligent life on a rocky Goldilocks planet with oceans, breathing N2/CH4 (maybe throw some CO2 in there), I see no reason why they couldn't use oxygen as fuel - at least, running "oxygen lines" inside their alien homes to make fire, perhaps to cook with. It seems like it would be a useful convenience. You don't gain net energy by pulling oxygen from silicate rock, iron oxide, or water and then burning it in a methane atmosphere, so I don't think it would be an energy source - unless they use bacteria to produce the oxygen, in which case, maybe. Oxygen wouldn't concentrate underground the way hydrocarbons do, so in that sense it wouldn't be drillable, but as a transportable, fuel-like substance it would work. I see no reason why that couldn't be done.



If we ever set up an astronaut colony on Titan (which, while much farther from Earth than Mars, has several advantages), one advantage is that the surface pressure is manageable, so structures wouldn't need to contain a significant fraction of 14.7 psi. They'd need to be mostly airtight, but compared with the structural integrity needed to avoid losing atmosphere to a vacuum, that's easy.



Astronauts/colonists on Titan would be able to collect CO2 ice and H2O ice from the moon's surface and use them to grow plants indoors; in time they would probably have an abundance of oxygen, and under certain circumstances that oxygen could be used in a similar way to fuel. I think the scenario exactly as you wrote it is very unlikely, but oxygen being used like fuel in a methane- or hydrogen-rich atmosphere, more likely by some kind of astronaut colony: there's no reason I can think of why that couldn't be done.



PS - This is on the speculative side for an answer here, but those are my thoughts on the question.

gravity - What would happen if a body were to fall into a neutron star?

Let's assume that what is falling onto the neutron star is "normal" material - i.e. a planet, an asteroid or something like that. As the material heads towards the neutron star it gains an enormous amount of kinetic energy. If we assume it starts from infinity, then the energy gained (and turned into kinetic energy) is approximately (ignoring GR)
$$ \frac{1}{2}m v^2 = \frac{GMm}{R}, $$
where $m$ is the mass of the object (which cancels) and $M$ and $R$ are the mass and radius of the neutron star (let's assume typical values of $1.4\,M_{\odot}$ and 10 km respectively).



This results in a velocity at the neutron star surface of $1.9 \times 10^{8}$ m/s, i.e. large enough that the calculation really ought to be done with relativistic mechanics.



However, I doubt that the object would get to the surface intact, due to tidal forces. The Roche limit for the breakup of a rigid object occurs when the object is at a distance
$$d = 1.26\,R\left(\frac{\rho_{NS}}{\rho_O}\right)^{1/3},$$
where $\rho_{NS}$ and $\rho_O$ are the average densities of the neutron star and the object respectively. For rocky material, $\rho_O \simeq 5000$ kg/m$^{3}$. For our fiducial neutron star, $\rho_{NS} \simeq 7\times10^{17}$ kg/m$^{3}$. Thus when the object gets closer than $d \simeq 500{,}000$ km it will be torn apart by the tidal field.



It will thus arrive in the vicinity of the neutron star as an extremely hot, ionised gas. But if the material has even the slightest angular momentum it cannot fall directly onto the neutron star surface without first shedding that angular momentum, so it will form (or join) an accretion disk. As angular momentum is transported outwards, material moves inwards until it is hooked onto the neutron star's magnetic field and makes its final journey onto the neutron star surface, probably passing through an accretion shock close to the magnetic pole if the neutron star is already accreting strongly. Roughly a few per cent of the rest-mass energy is converted into kinetic energy and then heat, which is partly deposited in the neutron star crust along with the matter (nuclei and electrons) and partly radiated away.



At the high densities in the outer crust the raw material (certainly if it contains many protons) will be burned in rapid nuclear reactions. If enough material is accreted in a short time this can lead to a runaway thermonuclear burst until all the light elements have been consumed. Subsequent electron captures make the material more and more neutron rich until it settles down to the equilibrium composition of the crust, which consists of neutron-rich nuclei and ultra-relativistically degenerate electrons (no free neutrons).

Saturday 17 April 2010

molecular biology - What is the least costly method to generate sequential amino acid deletions?

Deletions may make sense if you are analyzing the N-terminus or C-terminus of a protein. If you are looking at an internal region however, keep in mind that the more AAs you delete, the more likely you are to disrupt the overall protein structure. If you delete any random selection of 8 AAs within a protein, there's a chance you'll knock out activity by changing the protein fold or stability. That's not useful information.



This question is usually first addressed by alanine scanning - sequentially or additively changing amino acids to alanine. This is much more informative than deletions. Even better, you can choose to replace wild-type AAs with other AAs of similar size but differing charge or hydrophobicity. Then you are most likely to change the function of a region without changing the structure.



The QuikChange kit works great, but if cost is an issue you can do whole-plasmid mutagenesis PCR with your own reagents, and make your own competent cells.



In my experience though, whole plasmid PCR can be tricky - if it doesn't work the first time it can be difficult to troubleshoot. If time is an issue, I'd recommend doing overlap extension PCR with the same mutagenic primers, plus one set of amplification primers at the 5' and 3' end of your gene. Make a large batch of digested vector, test as a negative control, and use it for ligation of all of your different inserts.

Tuesday 13 April 2010

Can a star have a ring system?

They certainly can. A ring often forms around a celestial body when its gravity rips apart another, smaller celestial body. The Sun is really massive, so it could disrupt any object that is not dense enough. Just Google the Roche limit for more information (and better explanations).



Now, take a look at our solar system: you have the asteroid belt between Mars and Jupiter, and the Kuiper belt beyond Neptune. Would you consider these to be rings? They sure are not as smooth as Saturn's, but to my mind they still are rings.

Monday 12 April 2010

galaxy - How do astronomers estimate the total mass of dust in clouds and galaxies?

Dust absorbs stellar light (primarily in the ultraviolet), and is heated up. Subsequently it cools by emitting infrared, "thermal" radiation. Assuming a dust composition and grain size distribution, the amount of emitted IR light per unit dust mass can be calculated as a function of temperature. Observing the object at several different IR wavelengths, a Planck curve can be fitted to the data points, yielding the dust temperature. The more UV light incident on the dust, the higher the temperature.
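To make the fitting step concrete, here is a minimal sketch: an optically thin "grey" (modified) blackbody fitted to purely illustrative far-IR fluxes, with the emissivity index fixed at an assumed beta = 1.8 and all constant factors absorbed into the normalisation.

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23
NU0 = c / 250e-6        # reference frequency (250 um), used for scaling
BETA = 1.8              # assumed dust emissivity index

def greybody(nu, T, A):
    """Optically thin modified blackbody; constants absorbed into A."""
    return A * (nu / NU0) ** (3 + BETA) / np.expm1(h * nu / (k_B * T))

# Purely illustrative fluxes (Jy) at 70-500 um, roughly a ~25 K source.
wavelengths_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])
nu = c / (wavelengths_um * 1e-6)
flux = np.array([2.0, 4.1, 1.9, 0.8, 0.3])

(T_fit, A_fit), _ = curve_fit(greybody, nu, flux, p0=(25.0, 10.0))
print(f"fitted dust temperature ~ {T_fit:.1f} K")
```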



The result is somewhat sensitive to the assumptions, and thus the uncertainties are sometimes quite large. The more IR data points obtained, the better. If only one IR point is available, the temperature cannot be calculated. Then there's a degeneracy between incident UV light and the amount of dust, and the mass can only be estimated to within some orders of magnitude (I think).



If lines from various atomic or molecular transitions are seen as well, the composition can be better constrained. The size distribution can be determined from fitting the theoretical spectrum of a given distribution to observed dust spectra. This information is often not available in a given high-redshift galaxy, so here we can be forced to assume that the dust is similar in nature to "local" dust, i.e. in the Milky Way and our nearest neighbors.



If you're interested in the relevant equations, they can be found in many places, e.g. here.



Another way to estimate the dust mass is to measure the metallicity of the gas with which the dust is mixed, either from emission lines or from absorption lines if a background source is available. The dust mass is then found from an assumed dust-to-metals ratio, which is pretty well established in the local Universe, and to some extent also at higher redshifts.

Sunday 11 April 2010

Is there a variation of ISO 8601 for julian calendar dates?

Astronomers use "Julian day numbers" to indicate time. The Julian day number is the number of days since Noon GMT on January 1st 4713BC (in the Julian calendar). A time is then a single real number. For example, 00:30:00.0 UT January 1, 2013 (Gregorian), is 2456293.520833 (wikipedia)



So, except for specifying the epoch, the Julian calendar is not used by astronomers to specify dates. There is no international standard equivalent to ISO 8601 for representing dates in the Julian calendar.



Ancient dates are often expressed as years before present, where "present" is 1950. That year is chosen because it precedes most of the atmospheric nuclear testing that dumped massive amounts of C14 into the atmosphere and made carbon dating valid only for years before 1950.

staining - How long does it take to stain cells?

For fluorescence immunocytochemistry, there are three main incubations that determine the duration of the procedure; generally it takes about 2 days to stain your cells before you can visualize them. Once cells or tissue are fixed onto glass, you first incubate with a primary antibody that directly targets a specific antigen (or antigens), usually overnight. After primary labelling, a secondary antibody is applied to target the primary antibody, which usually takes a few hours; these secondary antibodies come conjugated to a fluorescent dye (Cy5, TRITC, FITC, AlexaFluors, etc.). A nuclear counterstain is really quick, only about 5 minutes. After all the labelling and the nuclear stain, you can mount the coverslip with mounting medium, and once it cures you can visualize the cells.



Abcam has one of many quick reference guides.

Saturday 10 April 2010

Would a satellite in geosynchronous orbit between the earth and moon track across the sky together?

The answer to the headline question is clearly: No.



The full moon has an apparent diameter of about 0.5°. Its apparent motion is roughly 360°/24 h, neglecting the orbital motion of Earth and Moon. Hence it covers 0.5° in about 24 h × 0.5°/360° = 0.033 h = 2 min; that is, the Moon moves one Moon diameter in roughly 2 minutes.
(The neglected orbital motion induces an error of about 3.65%, the sidereal Earth day divided by the sidereal lunar orbit, 0.997 d/27.3 d, hence an error of just a few seconds over an apparent motion of 0.5°.)
In 35 minutes, the Moon's apparent motion is therefore about 17 Moon diameters.
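The same arithmetic as a tiny sketch:

```python
MOON_DIAMETER_DEG = 0.5
DEG_PER_MINUTE = 360.0 / (24 * 60)     # diurnal rate, orbital motion neglected

minutes_per_diameter = MOON_DIAMETER_DEG / DEG_PER_MINUTE
print(minutes_per_diameter)            # 2.0 minutes per Moon diameter
print(35 / minutes_per_diameter)       # ~17.5 diameters in 35 minutes
```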



The geosynchronous satellite, in contrast, remains at the same apparent position, by definition.



Planets and stars differ from the Moon's apparent motion by only about the error estimate above; that is, relative to the Moon they drift roughly 27.3 times more slowly than a geosynchronous satellite does.

Thursday 8 April 2010

observation - Radio telescope targeting

Large radio telescopes have pretty good pointing accuracy:



The second critical parameter is the beam width, which depends strongly on frequency:



beamwidth = wavelength / dish diameter



When you use several dishes in an interferometer, you can increase their effective accuracy and decrease their collective beam width.
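As a worked example of that formula (a sketch; the 76 m dish of the Lovell telescope and the 21 cm hydrogen line are just convenient values to assume):

```python
import math

ARCMIN_PER_RAD = 180 / math.pi * 60

def beamwidth_arcmin(wavelength_m, dish_diameter_m):
    """Approximate beam width, lambda / D radians, converted to arcmin."""
    return wavelength_m / dish_diameter_m * ARCMIN_PER_RAD

# A 76 m dish observing the 21 cm hydrogen line:
print(f"{beamwidth_arcmin(0.21, 76.0):.1f} arcmin")   # ~9.5 arcmin
```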



A bit more info on the mechanics of the Lovell telescope:




A control computer calculates the required drive rates to follow each radio source. The drive motors are servo-controlled, so there is a continuous check that the correct rate has been achieved. The position of the telescope is constantly monitored and fed back to the control computer to ensure that the telescope is pointing correctly.



For good tracking the pointing accuracy should be about a twentieth of the resolution. Since the resolution is proportional to the wavelength being received (see below), it follows that the pointing accuracy is more critical at shorter wavelengths. The control computer is able to correct for pointing errors caused by the telescope bowl sagging under its own weight as it moves up and down. In this way the pointing errors can be kept to about 10 arcsec.




So, servo motors and, presumably, calibration are what enable this accuracy.



Further reading: the story of Jodrell Bank, and a Radio Electronics article.

Wednesday 7 April 2010

star - Statistically, what would the average distance of the closest black hole be?

Let us assume that $N$ stars have ever been born in the Milky Way galaxy, and give them masses between 0.1 and $100\,M_{\odot}$. Next, assume that stars are born with a mass distribution that approximates the Salpeter mass function, $n(m) \propto m^{-2.3}$. Then assume that all stars with mass $m > 25\,M_{\odot}$ end their lives as black holes.



So, if $n(m) = Am^{-2.3}$, then
$$N = \int^{100}_{0.1} A m^{-2.3}\, dm$$
and thus $A = 0.065\,N$.



The number of black holes created will be
$$N_{BH} = \int^{100}_{25} A m^{-2.3}\, dm = 6.4\times10^{-4}\, N,$$
i.e. about 0.06% of the stars in the Galaxy become black holes. NB: the finite lifetime of the Galaxy is irrelevant here, because it is much longer than the lifetimes of black hole progenitors.



Now, I follow the other answers by scaling to the number of stars in the solar neighbourhood, taken to be approximately 1000 in a sphere of 15 pc radius, i.e. $\simeq 1$ pc$^{-3}$. Thus the black hole density is $\simeq 6.4 \times 10^{-4}$ pc$^{-3}$, and so we expect one black hole within about 7 pc.
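For anyone wanting to play with the assumptions, the whole estimate fits in a short sketch; the IMF slope, the mass limits, and the 1 pc$^{-3}$ stellar density are the assumptions stated above:

```python
import math

ALPHA = 2.3              # Salpeter-like IMF slope: n(m) ~ m^-ALPHA
M_LO, M_HI = 0.1, 100.0  # stellar mass range, solar masses
M_BH_MIN = 25.0          # minimum progenitor mass for a black hole

def imf_integral(a, b):
    """Closed-form integral of m^-ALPHA from a to b (valid for ALPHA != 1)."""
    p = 1.0 - ALPHA
    return (b**p - a**p) / p

f_bh = imf_integral(M_BH_MIN, M_HI) / imf_integral(M_LO, M_HI)
print(f"fraction of stars ending as black holes: {f_bh:.1e}")   # ~6.4e-4

n_star = 1.0             # assumed local stellar density, stars per pc^3
n_bh = f_bh * n_star
r = (3.0 / (4.0 * math.pi * n_bh)) ** (1.0 / 3.0)
print(f"expected distance to nearest black hole: ~{r:.0f} pc")  # ~7 pc
```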



OK, so why might this number be wrong? Although it is very insensitive to the assumed upper mass limit for stars, it is very sensitive to the assumed lower mass limit for black hole formation. That limit could be higher or lower, depending on the very uncertain details of late stellar evolution and mass loss from massive stars, and this could drive our answer up or down.



Some fraction $f$ of these black holes will merge with other black holes or will escape the Galaxy due to "kicks" from a supernova explosion or interactions with other stars in their dense, clustered birth environments (though not all black holes require a supernova explosion for their creation). We don't know what this fraction is, but it increases our answer by a factor $(1-f)^{-1/3}$.



Even if they don't escape, it is highly likely that black holes will have a much higher velocity dispersion, and hence spatial dispersion above and below the Galactic plane, compared with "normal" stars. This is especially true considering most black holes will be very old: most star formation (including massive star formation) occurred early in the life of the Galaxy, and black hole progenitors die very quickly. Old stars (and black holes) have their kinematics "heated", so their velocity and spatial dispersions increase.



I conclude that black holes will therefore be under-represented in the solar neighbourhood compared with the crude calculation above, so you should treat the 7 pc as a lower limit to the expectation value, although of course it is possible (though not probable) that a closer one exists.

Ejected planets during the early stages of the formation of a solar system?

This is similar to a previously asked question, but I am asking only about theory rather than observational evidence. Assuming there were a much larger number of protoplanets in the early solar system, what percentage of these would theoretically have been ejected (orbital velocity increased beyond escape velocity by interactions), and what percentage would just have been moved out to a more distant orbit (say, the Kuiper belt or Oort cloud)?
In other words, should we expect to find rocky worlds in the far outer solar system that formed inside the current radius of the asteroid belt?

Tuesday 6 April 2010

cosmology - What are the cosmological ramifications if we probabilise and continuify the order of differentiation in $F=\frac{d(mv)}{dt}$?

Newton's second law of motion states that $F=\frac{d(mv)}{dt}$. This is a first-order differential equation, in which the order of differentiation of the momentum is 1. So we can write it as $F=\frac{d^k(mv)}{dt^k}$, where $k=1$.



Newton used the dot notation for differentiation and unlike Leibniz did not conceive of orders of differentiation that were not integers. Perhaps if he had he would have changed his notation, because it's hard to draw half a dot! We, however, have no such difficulty.



Let's make $k$ probabilistic, so that its mean value is 1 but it varies from 1 with decreasing probability as the difference from 1 increases. Let's assume the probability density function of its distribution is normal, so that $f(k)=N(1,\sigma)$, and let's further assume that $\sigma$ is a universal constant.



If $\sigma = 0$, we get $F=\frac{d(mv)}{dt}$, and either a Newtonian universe or, when we take mass to be relativistic, an Einsteinian one.



If we assume a big bang cosmology on the Lambda-CDM model, is there a value $\epsilon > 0$ such that if $\sigma < \epsilon$ we still get the same cosmology, because the tweak we have made to the equation $F=\frac{d(mv)}{dt}$ is too small to make a difference?



And what happens to the currently prevalent version of big bang cosmology when we increase $\sigma$ enough that a difference is actually made?



I ask the question in that way because I am not saying let's give $\sigma$ a large value such as 0.5. I am saying let's give it the very tiny value at which it just begins to have a cosmological effect. What will that effect be?



Bearing in mind that non-integer-order derivatives are non-local, do we get matter appearing out of empty space, and a possible basis on which to support a steady-state cosmology? What other adjustments might be suggested, to big bang theory, quantum theory, or both?
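To make the non-integer orders concrete (this is only an illustrative numerical sketch, not part of any standard cosmology): the Grünwald-Letnikov construction defines $D^k$ for any real $k$, and it visibly displays the non-locality mentioned above, since every past value of the function enters the sum.

```python
import numpy as np
from scipy.special import binom

def gl_derivative(f, t, k, h=0.01):
    """Grunwald-Letnikov derivative of order k at t, lower terminal 0.

    D^k f(t) ~ h^-k * sum_j (-1)^j * C(k, j) * f(t - j*h),
    summing over the whole history back to t = 0 (non-local for
    non-integer k); k = 1 recovers the ordinary backward difference.
    """
    j = np.arange(int(t / h) + 1)
    coeffs = (-1.0) ** j * binom(k, j)
    return np.sum(coeffs * f(t - j * h)) / h**k

f = lambda t: t**2
print(gl_derivative(f, 2.0, k=1.0))   # ~4.0, since d/dt t^2 = 2t
print(gl_derivative(f, 2.0, k=0.5))   # ~4.26 = Gamma(3)/Gamma(2.5) * t^1.5
```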

plant physiology - How do white Caladiums perform enough photosynthesis to support their mass?

There are many different kinds of plants that have independently evolved this sort of variegation (non-green areas) in the leaves. However, the mechanisms by which they effect this vary between species.



Some have little or no chlorophyll in the non-green areas, but many others have changed the architecture of their leaf cell layers in the non-green areas, creating refraction effects that make the leaves appear white, but don't significantly change their photosynthetic efficiency (Sheue et al.) In fact, Arum italicum white areas actually have higher efficiency in certain light conditions! (La Rocca et al.)



I can't find any information on which type of variegation Caladium shows. The genus Caladium is related to the Arum I mentioned above, which uses the "architecture" method, but it's also related to Epipremnum, which seems to use the "no-chlorophyll" method.



Sheue et al. mention that with some species that use the architecture method, using transmitted light instead of reflected light reveals that the white spots are actually green; while in no-chlorophyll plants the spots remain white. If you have a Caladium around, you could try holding its leaf up to a bright light, and see!



So, to answer your question: if Caladium is an 'architecture-variegation' plant, its white areas perform photosynthesis at or near the efficiency of the green areas of the leaf. If it's a 'no-chlorophyll-variegation' plant, it would have reduced efficiency, but might make up for this with strategies including:



  • Having a larger total leaf area (La Rocca et al. and also see this question)

  • Only developing large white areas if there's plenty of light as they grow (Garland et al.)

  • Increasing the mitochondrial activity in its white areas, which could lead to other energy efficiencies (Toshoji et al.)

  • Using Homo sapiens to help it reproduce and provide plenty of light for it. Many of these extreme variegation patterns are found in horticulturally bred varieties!

  • Just using its green areas for photosynthesis, and "accepting" the fact that the white areas are not producing energy. In natural populations you often don't find varieties as completely white as in that picture: they usually show white spots and plenty of green. This is true of Caladium species in the wild, which may use the variegation to mimic infestation and so prevent herbivory: potential pests think the leaves are already infested, and avoid them (Soltau et al.). In other words, even if the white patches represent decreased photosynthetic efficiency, it could be a trade-off that achieves less herbivory.


Sunday 4 April 2010

asteroids - What is a dead comet?

Comets are bodies that formed in the outer solar system and are composed largely of ices (water, CO2, and others). The Rosetta mission is discovering lots of new science about the composition of comets right now.



The asteroids are more varied: some are rocky, some metallic, and some contain a lot of ice. They have various origins, but most orbit between Mars and Jupiter, in the plane of the solar system.



In contrast, comets tend to have highly elliptical orbits, which are often highly inclined relative to the rest of the solar system. This is because they are falling towards the sun, having been disturbed from their birthplace in the outer solar system.



The chief characteristic of a comet is that, as it comes near the sun, its ices sublimate and form a coma: a giant sphere of gas and dust that surrounds the comet's icy nucleus. The solar wind pushes this into a tail that points away from the sun.



After many orbits of the sun a comet will eventually run out of ice, at least on its surface, and will no longer form a coma. How long this takes depends on the orbit of the comet, but about half a million years seems to be an estimate of the average life span. As it is no longer active, it is a "dead comet". Although having a "dead comet" on Halloween is a bit of spin, the term is real and was in use before 2015 TB145, for example on this page about near-Earth objects.



In fact, as noted there, many near-Earth "asteroids" may actually be dead comets. The evidence that 2015 TB145 is one is that it has a highly inclined, elliptical orbit and that it is very dark. Comets such as Halley are blacker than soot.

redshift - How does one measure velocities of far-off, bright objects

The "redshift" of a distant galaxy is defined in terms of its line of sight velocity. In our model of the expanding universe, once we move away from the local group of galaxies (which have their own peculiar motions), distant galaxies follow the Hubble flow and to first order have a line of sight velocity tht is proportional to their distance way (it gets more complicated for very distant galaxies).



Distant galaxies may well have a "tangential" velocity too, but for galaxies outside the Local Group these velocities will be negligible compared with the redshift velocity; i.e. the line-of-sight velocity due to the expansion of the universe is dominant.



I guess by "parallax drift" you actually mean proper motion - which is the rate at which a star's position changes with respect to the celestial coordinate system. This proper motion depends on how far away the star is and how fast it is moving tangentially with respect to the solar system.



Thus to estimate a tangential velocity you need both the proper motion and the distance to the star.



I think the most distant object for which a proper motion has been determined with any accuracy is the Andromeda galaxy, a couple of million light years away. This was achieved by studying the positions of many stars in Andromeda over a 7-year period using the Hubble Space Telescope. The details can be found in Sohn et al. (2012), but the headline numbers are a proper motion of a mere $\sim 0.05$ milliarcseconds per year(!), implying a tangential velocity (with respect to the solar system) of about 150 km/s.
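For reference, the standard conversion is $v_t = 4.74\,\mu\,d$, with $\mu$ in arcsec/yr and $d$ in pc. A quick sketch, assuming a round-number distance of 770 kpc for Andromeda (the loose rounding of the headline numbers explains the difference from the quoted ~150 km/s):

```python
def tangential_velocity_kms(mu_mas_per_yr, distance_pc):
    """v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]; 4.74 km/s = 1 au/yr."""
    return 4.74 * (mu_mas_per_yr / 1000.0) * distance_pc

# Andromeda: ~0.05 mas/yr at ~770 kpc (an assumed round-number distance)
print(f"{tangential_velocity_kms(0.05, 770e3):.0f} km/s")   # ~180 km/s
```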



Another candidate is the measurement of the velocities of material in the jet of the active galaxy M87 by Meyer et al. (2013). This galaxy is 50 million light years away, but the motion of the jet is detectable only because it is moving relativistically.



These are quite special cases. In general, the tangential velocities of stars in our Galaxy are small, and large-scale surveys of proper motions are generally inaccurate beyond a few thousand light years. The upcoming Gaia results will improve this dramatically, giving us good proper motions for objects out to tens of thousands of light years.

Saturday 3 April 2010

software - Where can I find a set of data of the initial conditions of our solar system?

I was able to get the Cartesian orbital vectors for all the major bodies from HORIZONS at the J2000 epoch only, though I could extend the coverage forward through time; it's easy to get data overload doing this. My simulation is modelled using the laws of gravitation and motion alone, and this gives results that are surprisingly close to those published. Running the solar system backwards (by reversing the velocity vectors) has given me the initial vectors back to 1900. This is all I needed, and the results were close enough for my purposes. I still have the CSV files.
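For anyone else hunting for this data: the same state vectors can now be pulled programmatically, e.g. with astroquery's JPL Horizons interface. A minimal sketch (Mars, heliocentric, at the J2000 epoch; assumes astroquery is installed):

```python
from astroquery.jplhorizons import Horizons

# '499' is the Mars body centre in the Horizons numbering;
# epochs takes a Julian date (2451545.0 = J2000).
obj = Horizons(id='499', location='@sun', epochs=2451545.0)
vec = obj.vectors()   # astropy Table: x, y, z (au), vx, vy, vz (au/day)
print(vec['x', 'y', 'z', 'vx', 'vy', 'vz'])
```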



I have also had all sorts of problems with the HORIZONS interface. For instance, changing the date had no effect on the value of the vectors, i.e. all specified start dates returned the same values. Lately I have not been able to reproduce this behaviour. There are obviously some serious problems with this interface, especially lately.



I know the data I got was correct because it correlates perfectly with published events, e.g. the recent transit of Mercury.



I too am still looking for this type of data.

Is there a way to estimate the age of M dwarf stars?

Really difficult.



Very young ($<50$ Myr) M dwarfs may still exhibit lithium in their photospheres. Any older and it will have been depleted.



For older dwarfs you then move on to looking at rotation and activity. Both decline with age, but in an M dwarf as cool as this the decay timescale is many Gyr, so they are only weakly constraining and not very well calibrated for such low-mass stars. A very slow rotation rate or a very low level of magnetic activity might indicate that it is older than 5 Gyr.



The lack of flares and the lack of light-curve modulation by starspots may indicate that it has spun down, is magnetically inactive, and is therefore older than 5 Gyr (I cannot immediately locate the source of this Wikipedia claim). The ratio of X-ray to bolometric luminosity would be more definitive and could probably be derived from the ROSAT all-sky survey (or a constraining upper limit could be set).



A useful paper is Stelzer et al. (2013), http://arxiv.org/abs/1302.1061, which gives various activity diagnostics for nearby stars, including this one. Its projected rotation velocity is 1.5 km/s, which suggests a rotation period of $\sim 10$ days for a star of this size (see the sketch below). This could hardly be less constraining on the age! The X-ray to bolometric flux ratio is $10^{-4.45}$, which suggests the star is neither very young nor very old: say between 1 and 5 Gyr for a star of this mass.
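For the record, that period estimate comes from $P = 2\pi R / v$, where $v \sin i$ only gives a lower limit on the equatorial velocity and hence an upper limit on the period. A sketch, assuming a radius of ~0.3 solar radii for an M dwarf of this type:

```python
import math

R_SUN_KM = 6.957e5   # solar radius in km

def max_rotation_period_days(vsini_kms, radius_rsun):
    """Upper limit on rotation period: P <= 2*pi*R / (v sin i)."""
    circumference_km = 2.0 * math.pi * radius_rsun * R_SUN_KM
    return circumference_km / vsini_kms / 86400.0

# v sin i = 1.5 km/s; R ~ 0.3 R_sun is an assumed, typical mid-M radius
print(f"{max_rotation_period_days(1.5, 0.3):.0f} days")   # ~10 days
```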



Another possibility is to look at the Galactic space motion and position. Older stars tend to have higher velocities, especially perpendicular to the Galactic plane, and also tend to be found further from the plane. The velocities of this object do not look large, suggesting it is not very old, i.e. younger than perhaps 10 Gyr, and likely a young-disk object with an age of $<5$ Gyr.



You can also look at the metallicity, although this is difficult to measure in M dwarfs. Maldonado et al. (2015) give [Fe/H]$= -0.05 \pm 0.03$. This is very close to the solar value and does not constrain the age beyond suggesting the star is younger than 10 Gyr.



Beyond this, it is basically impossible. Even if you had a very precise distance and could place the star on an HR diagram, low-mass M dwarfs evolve so little in luminosity and temperature over many Gyr that this offers no constraint.



So my conclusion: the age is between 1 and 5 Gyr, with the probability peaking somewhere in the middle.

Friday 2 April 2010

solar system - Celestial impacts

Have we ever observed, or better yet, recorded, an impact from a comet/meteorite/fragment/etc on another celestial body? For example, in all its time orbiting Mars, has the MRO observed anything impacting the surface?



All the other objects in our solar system that we've observed (aside from gas giants) are either speckled or virtually covered with craters, but I've never heard mention of actually seeing one of these impacts happen. At a basic level I'm curious if we've ever actually directly seen one, but what would be even better is if there's a publicly-available video of it happening.