Monday, 30 January 2012

botany - What's the effect of oxygen deficit on plants?

During daylight, the plant photosynthesises faster than it respires, so there is no net uptake of oxygen (the oxygen, of course, being produced as part of photosynthesis).



Of course, this only applies to tissues where photosynthesis is occurring. For the roots of the plant, oxygen must always be present in the surrounding soil/growth medium for respiration. Therefore, if no oxygen reaches the roots, the root cells will be unable to produce ATP by respiration and will consequently die. This will eventually lead to the death of the entire plant, as it is unable to take up nutrients without a functioning root network.



This is the cause of plant death when soil is waterlogged - the water fills spaces in the soil that would otherwise contain air (i.e. oxygen).



Certain plants (often crop plants), rice being the example given in the comments below, are able to survive waterlogging and the resulting low oxygen supply, but only in the short term. Rice plants in particular have adaptations that allow the transport of oxygen from the aerial parts of the plant down to the roots; this does, of course, mean that rice cannot survive being totally submerged for an extended period of time. Plants such as rice may also have better-adapted anaerobic fermentation pathways that become the main producers of ATP when aerobic respiration has ceased, allowing them to cope with transient submersion better than other plants.



Molecular strategies for improving waterlogging tolerance in plants

Sunday, 29 January 2012

stellar evolution - Why does shell fusion produce more energy than core fusion?

Ultimately this is more of an overly long comment, as I think a more satisfying and complete answer would properly explain things in a more concrete fashion—more of a "it has to do this because..." answer than my "it can do this because..." one.



The short answer to the first question is that helium fusion needs ~25 times the temperature that hydrogen fusion does. The proton-proton chain initiates around $4\times 10^6$ Kelvin, whereas helium fusion doesn't begin until around $10^8$ Kelvin. So when the main-sequence stage ends and the helium core contracts and its temperature rises, the "edge" of the core can reach temperatures well in excess of the minimum hydrogen fusion temperature, and so a shell around it can too. Fusion rates are (approximately) polynomial in temperature, with the degree depending on the reaction in question, so small increases in temperature can produce substantially more fusion. Gravity is strong enough to overwhelm the pressure from the induced fusion, and so will contract the surrounding shell to temperatures in excess of the minimum needed. This is basically what happens in the cores of massive main-sequence stars (relative to, say, the Sun). Indeed, our own Sun's energy comes mostly from the proton-proton chain, with a core temperature of around $1.57\times 10^7$ Kelvin, nearly four times the minimum necessary. And still, the core needs to be nearly 10 times hotter than that to initiate helium fusion.
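To get a feel for that sensitivity, here is a toy sketch. The exponents are rough textbook values near the relevant temperatures (roughly $T^4$ for the proton-proton chain near solar core conditions, roughly $T^{40}$ for triple-alpha helium burning near ignition); the exact values vary with temperature, so treat this as illustrative only:

```python
# Toy illustration of how steeply power-law fusion rates respond to
# temperature. Exponents are rough textbook values, not exact.
PP_EXPONENT = 4              # proton-proton chain, near solar core temperature
TRIPLE_ALPHA_EXPONENT = 40   # helium (triple-alpha) burning, near ignition

def rate_boost(temperature_factor, exponent):
    """Factor by which a rate ~ T^exponent grows if T rises by the given factor."""
    return temperature_factor ** exponent

# A mere 10% rise in temperature:
print(rate_boost(1.10, PP_EXPONENT))            # ~1.5x the energy output
print(rate_boost(1.10, TRIPLE_ALPHA_EXPONENT))  # ~45x the energy output
```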



For the second question, the short answer is that the core has undergone thermal expansion after the helium flash, and so occupies the temperature range most conducive to a strong hydrogen fusion rate. The material outside the core is now at lower temperature and pressure, so its fusion rate is reduced substantially. The energy output therefore comes principally from the core, and helium fusion at near-minimum temperatures releases less energy than the hydrogen shell at (well) beyond minimum temperatures did. Thus the star overall produces less energy and contracts.



The remaining questions are explained in similar fashion: one has to pay attention to the sensitivity of reaction rates to temperature, and what the temperatures in those shells actually are. The sensitivities are different for each reaction chain, and the temperatures can go well beyond the minimum necessary.

Saturday, 28 January 2012

cmb - Why can we observe the Cosmic Microwave Background no matter the direction we look?

Until the Universe was 380,000 years old, it was filled with a gas of protons and electrons. There was also radiation, in thermal equilibrium with the matter, and because it was so hot, the protons and electrons couldn't form neutral hydrogen; every time they "tried", an energetic photon would knock off the electron.



This gas was everywhere. And photons traveled and scattered in all directions:






Photons (purple) scatter on free electrons (green), and both are mixed with protons (red).



380,000 years after the Big Bang, the temperature had fallen sufficiently that neutral atoms could form (this is called recombination). The radiation, which until then had scattered continuously on free electrons, could now stream freely between the atoms (this is called decoupling).



So they did. Still in all directions:






This free streaming is still taking place. Photons travel in all directions, and are everywhere. The photons that you are able to see are the ones that started out at a particular distance from you and in a particular direction, but other photons started out at smaller and larger distances, and in other directions. You just don't see them, because you happen to be right here. But a person in another place in the Universe would see the same as you.



The photons that we observe as the CMB come from a region we call the surface of last scattering, which corresponds to the surface of a shell centered on us. But there is nothing special about this "surface", except that it consists of all the points in the Universe that are roughly 47 billion light-years away from us.




Friday, 27 January 2012

exoplanet - Which is the largest planet ever observed?

Yes, there is a limit. Anything with a mass larger than about 13 times that of Jupiter would be called a brown dwarf (a failed star), though whether such an object would consist entirely of gas, or have a rocky/icy core as is probable for most giant planets, is not presently observable. Anything larger than about 75 Jupiter masses we would just call a star. The exact definition of a planet (especially the 13 Jupiter mass boundary) is still disputed.



Of the bona fide planets that have been detected and confirmed, the catalogue at exoplanets.org lists Kepler-435b as the one with the largest measured radius (although its radius error bar overlaps with those of other planets). The quoted radius is $1.99 \pm 0.18$ times that of Jupiter.



Most giant planets have very similar radii for masses between about 0.5 and 10 times that of Jupiter. The reason for this is that they are largely supported by electron degeneracy pressure. The diversity in the radii (between about 0.7 and 2 times that of Jupiter) of such planets is not yet fully understood.



The plot below shows mass vs radius for "planets". The smaller (probably rocky/icy) planets do show a trend of increasing radius with mass (the solid line is where a theoretical relationship for rocky/icy planets has been used to estimate the mass from the radius). The gas giants above about 0.5 Jupiter masses show no trend and a small scatter.



[Figure: mass versus radius for planets]

cell biology - Significance of basal lamina for outer layers of epithelium

The basal lamina, a specialised type of extracellular matrix (ECM) that differs between cell types, acts as a base for stratified epithelial cells to layer on top of, and therefore has a supportive role as well as providing a base for attachment (for the layer of cells immediately on top of it) [1]. Layers of epithelial cells on top of further strata of similar cells, with no differing tissue composition underneath, would have no support, and cell types would be able to mix freely within tissues. The different cell types within tissues are kept separate by this ECM to maintain areas for specific functions. The importance of this can be seen by studying epithelial cancers, where the epithelial cell layer breaks through the basement membrane and invades the surrounding tissue. This can cause massive disruption of tissue function.



Yes, it seems that the layers of cells are attached by various junctional complexes, desmosomes (maculae adherentes) being the most abundant in stratified epithelia.



References



[1] http://www.histology.leeds.ac.uk/tissue_types/connective/con_basal_lam.php

Thursday, 26 January 2012

Is Jupiter just a super earth with hydrogen atmosphere?


Does it originate from a super-earth with an excessive collection of hydrogen?




Basically no, and this is why:



Source from the water article linked below.



Jupiter likely formed outside the frost-line, so it never had a rocky or even a molten magma surface. It formed with much too high a percentage of ices, which, under pressure, became hot gas and perhaps, at one time, a very deep ocean around the whole planet.



As Rob Jeffries said, the formation of Jupiter isn't well understood, so there's some uncertainty there, but I don't see how a planet that forms with as much water, CO2, NH3 and other gases as Jupiter did could ever form what we would consider a solid surface. Its large size and heat of formation probably kept it highly gaseous during its entire formation. Perhaps it was once a water world of sorts, but my guess is it always resembled a gas giant once it was recognizable as a planet.



The Earth is about 0.02% water (some websites say 0.05%, but whichever percentage, the Earth is still almost entirely rocky mantle and metallic core). I don't think a planet that forms outside the frost line would ever become Earth-like or a super-earth.



Inside the frost line, a super-earth could in time become a gas giant, but outside, I don't think so.



That said, Jupiter might well have an Earth-like ratio of elements at its core, but I don't believe that means it was ever a super-earth.

Monday, 23 January 2012

temperature - How cold is interstellar space?

You can stick a thermometer in space, and if it is a super-high-tech one, it might show you the temperature of the gas. But since the interstellar medium (ISM) is so dilute, a normal thermometer will radiate energy away faster than it can absorb it, and thus it won't reach thermal equilibrium with the gas.
It won't cool all the way to 0 K, though, since the cosmic microwave background radiation won't allow it to cool below 2.7 K, as described by David Hammen.



The term "temperature" is a measure of the average energy of the particles of a gas (other definitions exist e.g. for a radiation field). If the gas is very thin, but particles move at the same average speed as, say, at the surface of Earth, the gas is still said to have a temperature of, say, 27º C, or $ 300,mathrm{K}$.



The ISM consists of several different phases, each with their own physical characteristics and origins. Arguably, the three most important phases are (see e.g. Ferrière 2001):



Molecular clouds

Stars are born in dense molecular clouds with temperatures of just 10-20 K. In order for a star to form, the gas must be able to collapse gravitationally, which is impossible if the atoms move too fast.



The warm neutral medium

The molecular clouds themselves form from gas that is neutral, i.e. not ionized. Since most of the gas is hydrogen, this means that it has a temperature of roughly $10^4\,\mathrm{K}$, above which hydrogen tends to get ionized.



The hot ionized medium

Gas that accretes onto the galaxy in its early phases tends to have a much higher temperature, roughly $10^6\,\mathrm{K}$. Additionally, the radiative feedback from hot stars (O and B), and the kinetic and radiative energy injected by supernova explosions, ionize and heat gas bubbles that expand. This gas comprises the hot ionized medium.



Cooling

The reason that the ISM is so sharply divided into phases, as opposed to just being a smooth mixture of particles of all sorts of energies, is that gas cools by various physical processes that have a rather temperature-specific efficiency.
"Cooling" means converting the kinetic energy of particles into radiation that is able to leave the system.



Hot gas

Very hot gas is fully collisionally ionized, and thus cools mainly through free electrons emitting bremsstrahlung. This mechanism becomes inefficient below $\sim10^6\,\mathrm{K}$.



Warm gas

Between $10^4\,\mathrm{K}$ and $10^6\,\mathrm{K}$, recombinations (i.e. electrons being caught by ions) and collisional excitation with subsequent de-excitation lead to emission, removing energy from the system.
Here the metallicity$^\dagger$ of the gas is important, since the various elements have different energy levels.



Cool gas

At lower temperatures, the gas is almost fully neutral, so recombinations cease to have any influence. Collisions between hydrogen atoms become too weak to excite the atoms, but if molecules or metals are present, cooling is still possible through fine/hyperfine lines and rotational/vibrational lines, respectively.



The total cooling is the sum of all these processes, but it is dominated by one or a few processes at any given temperature. The figures below from Sutherland & Dopita (1993) show the main cooling processes (left) and the main cooling elements (right), as a function of temperature:



[Figures: cooling processes (left) and cooling elements (right) vs. temperature]



The thick line shows the total cooling rate. The figure below, from the same paper, shows the total cooling rate for different metallicities. The metallicity is on a logarithmic scale, so [Fe/H] = 0 means Solar metallicity, [Fe/H] = –1 means 0.1 times Solar metallicity, and "nil" is zero metallicity.



[Figure: total cooling rate vs. temperature for different metallicities]



Since these processes don't cover the full temperature range equally, the gas will tend to reach certain "plateaus" in temperature, i.e. it will tend to occupy certain specific temperatures. When gas cools, it contracts. From the ideal gas law, we know that the pressure $P$ is proportional to the product of the density $n$ and the temperature $T$. If there is pressure equilibrium in the ISM (which there isn't always, but in many cases this is a good assumption), then $nT$ is constant, and thus if a parcel of hot ionized gas cools from $10^7\,\mathrm{K}$ to $10^4\,\mathrm{K}$, it must contract to increase its density by a factor of $10^3$. Thus, cooler clouds are smaller and denser, and in this way the ISM is divided up into its various phases.
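A one-line sketch of that argument, assuming strict pressure equilibrium ($nT$ constant):

```python
# At fixed pressure P ~ n*T, the density ratio is the inverse of the
# temperature ratio: n_cool / n_hot = T_hot / T_cool.
def compression_factor(T_hot, T_cool):
    return T_hot / T_cool

print(compression_factor(1e7, 1e4))  # 1000: hot ionized gas -> warm gas
```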



So, to conclude, interstellar space is not as cold as you may think. However, since it is extremely dilute, heat transfer is very inefficient, so if you leave your spaceship you will radiate energy away faster than you can absorb it from the gas.




$^\dagger$In astronomy, the term "metal" refers to all elements that are not hydrogen or helium, and "metallicity" is the fraction of the gas that consists of metals.

Saturday, 21 January 2012

celestial mechanics - Two body orbit of equal masses


I know they will be orbiting about a common center of mass, i.e. the
barycenter. But, do the velocities have to be equal in magnitude and
opposite in direction (normal to R when R is their distance from each
other) for the orbit to be stable?




The orbital velocities do not have to be, and in general are not, equal in magnitude. What is equal is the angular velocity, that is, the angular rate (e.g., in $\mathrm{rad/s}$) at which the two bodies orbit their common barycenter. The orbital radius $r$, orbital velocity $v$, and angular velocity $\omega$ are related by the equation



$$\omega = v/r$$



Note that these are all scalar quantities, and $v$ can be thought of as the component of the velocity vector which is perpendicular to $r$. Since conservation of angular momentum implies that $\omega$ remains constant for each body individually, we know that $v/r$ must also be constant, which implies that a body orbiting farther from the barycenter must be orbiting faster, and vice versa.
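A minimal numerical sketch of these relations for circular orbits (SI units; the masses and separation below are made-up values for illustration):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def barycentric_orbit(m1, m2, a):
    """Circular two-body orbit with separation a (in metres).

    Both bodies share one angular velocity omega; each orbits the
    barycenter at a radius inversely proportional to its own mass.
    Returns (omega, v1, v2).
    """
    omega = math.sqrt(G * (m1 + m2) / a**3)  # Kepler's third law
    r1 = a * m2 / (m1 + m2)  # heavier body sits closer to the barycenter
    r2 = a * m1 / (m1 + m2)
    return omega, omega * r1, omega * r2

# A 2:1 mass ratio: the lighter body orbits twice as far out and,
# with omega shared, moves twice as fast.
print(barycentric_orbit(2e30, 1e30, 1.5e11))
```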



Now of course that simplifying explanation works for two bodies. Once you throw in more than two bodies, things become more complicated and the barycenter can move around, causing more complex motions.




I think, if the velocity of one mass were to vary with respect to the
other, it would create a moving barycenter, in which case the two masses
would collide or throw one another out of orbit




This is a subtly different question. If one of the two masses had a varying orbital velocity, that would imply it was gaining or losing energy by some mechanism. This can occur through things like tidal interactions, as it does for our Earth and Moon. As stated above, the angular velocity must remain constant, which implies that a body whose velocity is changing must also migrate towards or away from the barycenter, potentially resulting in a collision or escape. Since the Moon is gaining orbital energy through tidal interactions, it is moving farther away as a result. Another example might be two black holes emitting gravitational waves as they orbit; the waves carry away energy, so the black holes draw closer together until they collide.

terrestrial planets - Statistics of elements abundance in exoplanets

The observational determination of the chemical abundances in exoplanets is in its infancy. In terms of terrestrial-type planets, i.e. those of size less than a few Earth radii, the constraints are confined to comparing the measured densities (obtained from the masses and radii of transiting planets found by Kepler and CoRoT) with models of what planets with certain assumed compositions would look like. An excellent recent example of this can be found in Dressing et al. (2015). In this paper they make the claim that all of the low-mass planets are consistent with a single, simple two-component model (a mixture of 83% MgSiO$_3$ and 17% iron), but that this changes at higher masses, where more volatile elements or significant water are needed to explain the lower densities. The plot below, taken from that paper, illustrates the available data and should be quite up to date. Note how all the low-mass planets (and the Earth and Venus) can lie on the same family of models.



Planet mass vs radius from Dressing et al. (2015)



I don't think that the authors are claiming that this is exactly what all the planets are made of, but simply illustrating that at present there do not seem to be any large deviations from such a composition (for example, planets that are made solely of iron).



There are relatively few planets on this diagram, because it is difficult to obtain the masses of small transiting planets (it requires detection of the Doppler shift caused by the pull of the planet on its host star).



Of course different models yield somewhat different results. For instance, Wagner et al. (2012) used the same data for Kepler-10b and CoRoT-7b and their own detailed models to argue that these planets have an iron core that makes up about 60% of the planet - i.e. a lot more than makes up the Earth.



At the moment, the data for the lowest-mass planets indicate that there could be only a limited amount of diversity. But the information we are working with (the sample size, and the fact that only masses and radii are determined) is too sparse to be sure.



From a theoretical point of view there are many ideas. The basic concept about the formation of the terrestrial-type planets is that they form (relatively) close in to the parent star and have compositions that are reflective of what elements and minerals can condense out of the protoplanetary disk at high temperatures. This in turn depends on the balance of elements that are present in the protoplanetary disk, where in the disk the planet forms, the detailed structure of the protoplanetary disk, how it cools and how planets migrate in the disk. Unsurprisingly, by varying some of these conditions it is possible to create planets with a wide variety of compositions, which as I said above, appears to be mildly contradicted by the available evidence. At the most basic level we understand why the Earth and other terrestrial planets are comparatively lacking in "volatile" elements like hydrogen and helium (and to some extent C, N, O and noble gases), because they (or minerals containing them) could not condense at the high temperatures close to the proto-Sun.



Examples of these theoretical approaches can be found in Moriarty et al. (2014) (with which you are familiar), but also see Carter-Bond et al. (2012) for examples of how chemical diversity might arise. It seems that the Mg/Si and C/O ratios have the biggest influence on the final compositions of the formed planets. A low C/O ratio favours the formation of silicates and fewer carbon-carrying compounds; but if there is more carbon than oxygen, it becomes more favourable to form carbon and silicon carbide (I guess this is what you mean by "carbon planets"), though this also depends on the temperature in the region where the planet forms. For reference, the solar C/O ratio is 0.54; the relative abundance of carbon in the Earth is much lower (than in the Sun), but the C/O ratio measured in other stars can be higher.

immunology - From inflammation to sickness and depression: when the immune system subjugates the brain

Cytokines are essentially signalling molecules of the immune system. Broadly speaking these can be classed as pro- or anti-inflammatory. Pro-inflammatory cytokines promote inflammation, and anti-inflammatory cytokines inhibit inflammation.



Inflammation (again, broadly speaking) is associated with the 'innate' immune response - this is the immediate response by neutrophils and macrophages (among others), and these cells release a variety of molecules designed to damage cells around them. This is great if those cells are bacterial, or damaged host cells that need removing, but not so great if that is 'normally' functioning tissue.



Aging is associated with increased systemic inflammation, i.e. a greater proportion of pro- to anti-inflammatory cytokines. The causes of this aren't entirely certain, but it could be that the adaptive immune response (i.e. the memory T-/B-cells) is not as effective at older ages, so the innate response is more important, or that it is simply left 'unchecked'. Either way, the result is that the pro-inflammatory response can end up damaging host tissues, including the brain, without the corresponding repair instigated by anti-inflammatory cells/cytokines.



Any anti-inflammatory medication (e.g. aspirin, ibuprofen, or glucocorticoids) will, in theory, reduce inflammation, but by suppressing the immune system the risk of infection naturally rises, and there may be unforeseen consequences to prolonged use (the immune system does a lot more than just fend off bacteria; e.g. it is essential for tissue maintenance and regeneration), so I am by no means suggesting the use of anti-inflammatories to protect brain function!

Thursday, 19 January 2012

astrophysics - Does this black hole magnetohydrodynamics equation even superficially make sense?

My question is about the journal paper mentioned in . Please understand that this paper has never been posted on arXiv, and I can provide only a link whose content is behind a paywall.



Summary of my question: it boils down to "whether a spatial coordinate of a fiducial observer can have a nonzero partial derivative with respect to the coordinate time."



I am interested in the validity of the central result of this paper. It is Eq. 4.24, which reads
\begin{equation}
\begin{split}
&\nabla\cdot\left[\frac{\alpha}{\varpi^{2}}\left\{1-\left(\frac{\omega-\Omega^{F}}{\alpha}\varpi\right)^{2}\right\}\nabla\Psi\right]
- \frac{\omega-\Omega^{F}}{\alpha}\nabla\Omega^{F}\cdot\nabla\Psi\\
&+ \frac{4\pi\dot{\varpi}}{\alpha^{2}\varpi}\left(1-\frac{\dot{\Phi}}{4\pi}\right)\frac{\partial \Omega^{F}}{\partial z} + 4\pi \frac{\partial}{\partial z}\left[\frac{\dot{\varpi}}{\alpha\varpi}\frac{\omega-\Omega^{F}}{\alpha}\left(1-\frac{\dot{\Phi}}{4\pi}\right)\right]\\
&+\frac{1}{\alpha\varpi^{2}}\left[\left(\frac{\dot{\alpha}}{\alpha}+\frac{\dot{\varpi}}{\varpi}\right)\dot{\Psi}-\ddot{\Psi}\right] + \frac{\dot{\varphi}}{\varpi}\frac{\omega-\Omega^{F}}{\alpha}\frac{\partial\Psi}{\partial\varpi}\\
&-\frac{16\pi^{2}\xi}{\varpi^{2}}\left(1-\frac{\dot{\Phi}}{4\pi}\right) = 0,
\end{split}
\end{equation}
where a dot on top of a symbol denotes a partial derivative with respect to the coordinate time $t$.



The above partial differential equation is supposed to describe the magnetosphere of a Kerr black hole. The authors use the spherical coordinates $(r,\theta,\varphi)$ and define $\varpi$ as follows:
\begin{equation}
\varpi \equiv \frac{\Sigma}{\rho}\sin\theta,
\end{equation}
where
\begin{equation}
\rho^{2} \equiv r^{2} + a^{2}\cos^{2}\theta,
\end{equation}
\begin{equation}
\Sigma^{2} \equiv (r^{2}+a^{2})^{2} - a^{2}\Delta\sin^{2}\theta,
\end{equation}
and
\begin{equation}
\Delta \equiv r^{2} + a^{2} - 2Mr.
\end{equation}
Note also that $\alpha$ is the lapse function, defined as
\begin{equation}
\alpha \equiv \frac{\rho}{\Sigma}\sqrt{\Delta}.
\end{equation}



The functions $\Psi(t,\textbf{r})$ and $\Phi(t,\textbf{r})$ denote the magnetic and electric fluxes through an $\textbf{m}$-loop passing through $\textbf{r}$, where $\textbf{m} \equiv \varpi\hat{e}_{\varphi}$ is the Killing vector associated with axisymmetry.



What confuses me is the following: $\varpi$, $\varphi$, and $\alpha$ are simply spatial coordinates or combinations thereof, and their (partial) time derivatives should all be identically equal to zero, because space and time coordinates are independent variables. This would render many parts of Eq. 4.24 nothing but convoluted ways to express the number zero.



I have also tried to follow the derivation of Eq. 4.24, and figured out that the authors implicitly assumed the following relations:
\begin{equation}
\dot{\Phi} = \dot{\varpi}\frac{\partial\Phi}{\partial \varpi}
\end{equation}
and
\begin{equation}
\dot{\Psi} = \dot{\varpi}\frac{\partial\Psi}{\partial \varpi}.
\end{equation}
Recall that a dot means a partial derivative with respect to time. As $\dot{\varpi}$ is identically zero, the above relations seem to be wrong.



However, what makes me somewhat unsure about my conclusion is that this paper is published in The Astrophysical Journal, a renowned peer-reviewed journal in astrophysics. (I have little expertise in astrophysics.)



Could someone verify whether my suspicion is well founded or correct me where I am wrong? Thanks in advance!

Wednesday, 18 January 2012

biochemistry - Solution based measurement of Solvent-Accessible Surface Area of macromolecules

I only know of one method, but here it is. You create a probe sphere the size of a water molecule (its van der Waals radius) and then 'roll' it along the surface. I know this as a Richards-Lee surface; Wikipedia has another name for it.






This looks complicated, but it's not. You move the probe sphere along the surface of the molecule in the XY plane until it just touches the van der Waals radius of the protein, taking the center of the sphere as the surface, all the way around the molecule. If you like, you can also color the surface by the charge at each position, which is useful for discussing solvent interactions.



Then you translate along the z axis and trace another contour, until you run out of protein. Apparently Jmol and other packages will do this for you.
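For a concrete flavour of such calculations, here is a toy sketch of a closely related point-sampling scheme (Shrake-Rupley, rather than the contour-slicing described above): sample points on each atom's probe-expanded sphere and keep the ones not buried inside any neighbour. The atom coordinates, radii, and 1.4 Å water probe below are illustrative assumptions:

```python
import math

def sphere_points(n):
    """~Evenly distributed unit vectors (golden-spiral lattice)."""
    pts = []
    golden = math.pi * (3 - math.sqrt(5))
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n
        r = math.sqrt(1 - z * z)
        theta = golden * i
        pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

def sasa(atoms, r_probe=1.4, n_points=200):
    """Estimate solvent-accessible surface area (Angstrom^2).

    atoms is a list of (x, y, z, radius). For each atom, sample points
    on its probe-expanded sphere and count those not buried inside any
    other atom's expanded sphere.
    """
    unit = sphere_points(n_points)
    total = 0.0
    for i, (x, y, z, r) in enumerate(atoms):
        R = r + r_probe
        exposed = 0
        for ux, uy, uz in unit:
            px, py, pz = x + R * ux, y + R * uy, z + R * uz
            buried = any(
                (px - xj) ** 2 + (py - yj) ** 2 + (pz - zj) ** 2
                < (rj + r_probe) ** 2
                for j, (xj, yj, zj, rj) in enumerate(atoms) if j != i
            )
            exposed += not buried
        total += 4 * math.pi * R * R * exposed / n_points
    return total

# Two touching "atoms" of radius 1.7 A: each loses some surface
# where the water probe cannot fit between them.
print(sasa([(0, 0, 0, 1.7), (3.4, 0, 0, 1.7)]))
```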



Wikipedia references a more mathematical method, LCPO, which I am not so familiar with.



Is this accurate? As usual with such calculations, it's more of a guess than an answer. You can do the calculation on any structure or any ensemble of structures (like NMR gives). It doesn't capture how the molecule might be flexible or dynamic. If you read up on your physical chemistry, you'll see that proteins breathe and can allow diffusion into the core rather readily. If I recall correctly, you can get rather large molecules quenching heme fluorescence in hemoglobin at room temperature.



If you are looking to dock two proteins, SAS might be more useful. It's an important piece of information, but not an ultimate answer. I'm afraid with proteins that doesn't happen so easily.



@bobthejoe asked about SAS for proteins for which no structure exists.
This is extremely difficult to even guess at. The unhelpful answer is that the radius of the protein goes roughly as the cube root of its molecular weight (and hence the surface area as the two-thirds power).



By getting a solution of the protein and shooting it in a synchrotron, you can get a mean radius of gyration pretty easily, which will give you an ellipsoidal volume (and surface area) for the protein. Again, most of the particulars would be lost, and this could easily be off by 25% for an irregularly shaped protein. For a regular globular protein it might give an answer similar to the power law above.



I have seen physical chemistry experiments that look for changes in osmotic pressure when the salt concentration in a solution of the protein changes substantially (Adrian Parsegian's work at NIH in the late 80s).



I doubt you will find any of these answers useful, as their mean error is going to be very large (20-200%), and they also assume the protein is soluble and amenable to the experimental conditions.



Solvent probes can help too, for instance exposing the protein to D2O and then doing mass spectrometry on it. This is still only going to give you a general idea of how much of the peptide is surface-exposed. A protein structure is still pretty much necessary for getting any accurate measurement of SAS, I think.

Tuesday, 17 January 2012

astrophysics - Calculating Orbital Period

I was using this formula to calculate the orbital period of a satellite in days:




T = sqrt[(4*pi^2)*R^3/(G*M_center)]




Where R is the radius of the orbit (the length of the semi-major axis), G is the universal gravitational constant, and M_center is the mass of the object being orbited.



I'm attempting to calculate the orbital period for a planet that has a diameter of 5,124 kilometers, or 804,500,000/157 meters, and a mass of 5.526*10^13 Kilograms.



The star, which takes the place of M_center, has a mass of 3.978*10^30 kilograms. The length of the semi-major axis is 2.3 AU, or 3.44*10^11 meters.



When substituted, I get:




T = sqrt[(4*pi^2)*(3.44*10^11)^3/(6.674*10^-11)*(3.978*10^30)




I simplified this to:




T = sqrt[(1.607094708736*10^33)/(6.674*10^-11)*(5.5616^13)




Which further simplifies to:




T = sqrt(1.607094708736*10^33)/(2.656092*10^20)




Something definitely seems wrong at this point. But, this comes out to:




T = sqrt(6.0505988073304689747192491826337340724643574093066053*10^12)



T = 2.459796*10^6




I find it hard to believe that it takes a planet that long to orbit a star that is only 2.3 AU away. Obviously, I am doing something very wrong, but I simply do not know where.



Somewhere, I am missing something.
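For what it's worth, re-running the numbers suggests an exponent slip in the simplification step ($4\pi^2 a^3 \approx 1.6\times10^{36}$, not $1.6\times10^{33}$), and in SI units the formula returns seconds, not days. A quick sketch using the values quoted above:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_star = 3.978e30  # kg, mass of the star
a = 3.44e11        # m, semi-major axis (2.3 AU)

# Kepler's third law; the result is in SECONDS.
T = 2 * math.pi * math.sqrt(a**3 / (G * M_star))

print(T)                     # ~7.8e7 seconds
print(T / 86400)             # ~900 days
print(T / (86400 * 365.25))  # ~2.5 years
```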

Saturday, 14 January 2012

jupiter - How much of a difference do good lenses make?


...a very blurry, small view of Jupiter with the 4mm and Barlow...




Be aware that a 4mm eyepiece and a 3x Barlow at the same time will give you a very high magnification - too high! For regular Jupiter viewing I would suggest you stick to 100x or 200x at most, unless the air is exceptionally still. (After a few sessions you'll find out what "still" air means.)



A lot of beginner scopes like this are sold with Barlows that tend to give too much power; my advice is to keep it packed away for most sessions.



So I'd stick to using that 4mm on its own; Jupiter will look quite small, but with practice you can usually tease a bit more detail out of the image.



I don't have one of these scopes, but my experience of comparing cheap modern eyepieces with expensive ones is that the lower-cost ones are generally OK these days. 4mm is quite a high power, so the extra magnification will cause a lot of blurriness, and it will take patience to get the focusing at its sharpest, so don't panic if everything is looking fuzzy at the moment.



For a first stage I would suggest a few things that are nothing to do with the eyepieces at all...



  • Collimation - look this up, it just means adjusting the two mirrors
    so your eye is looking right down the tube in a straight line. For a
    long focal length scope like yours, it's not likely to be a problem
    unless one of them is wildly out of line.

  • Tube currents/thermal behaviour - on almost any cold night, when
    the tube and main mirror are still warm from being indoors, rising
    air currents in the tube will mess up your image and make it
    shimmery at high power. Low power will look OK. It might take an
    hour or so for the image to improve (a guess).

Anyway, in summary, my guess is that changing the eyepieces straight away won't make a dramatic difference. Eyepiece makers will say otherwise, of course :)

Wednesday, 11 January 2012

optics - How to make a telescope for viewing planets, moon and DSOs using a convex lens of aperture 100 mm and focal length 200 mm and other lenses at home?

I generally agree with the answer above, but have a couple more insights which might help you if you decide to proceed with trying to make your own scope...
The lens pairs that James mentioned (crown and flint) are known as a doublet. Glass has two key properties in play here - its index of refraction (how much it bends light) and its dispersion (how much that bending changes over color). The lens pair balances a strongly convex crown (low index and low dispersion) with a weakly concave flint (high index and high dispersion). The dispersions are designed to cancel out, while you want the curvature of the convex crown to overpower the concave flint in terms of index, so it still has some ability to focus. The design also inherently lends itself toward long focal lengths which are desirable in telescope objectives.
Eyepieces, due to their desirable short focal lengths, necessitate more lenses, which allow you to balance the chromatic aberration and also address other optical aberrations that come into play at such short focal lengths (distortion, astigmatism, coma, and spherical aberration being the main concerns). There are well-established design forms which are often used for making well-corrected eyepieces, some of which can be found right on Wikipedia: https://en.wikipedia.org/wiki/Eyepiece#Eyepiece_designs



One thing to consider, which I don't think has been mentioned, is the selection of focal lengths and apertures. A 5cm aperture is plenty sufficient to view the Galilean moons, and probably some bright DSOs, if your optics are well corrected and your focal lengths well chosen. The system magnification is the ratio of the focal lengths of the objective lens and the eyelens/eyepiece (200cm/2.2cm = 90.9x). This means that something like the Galilean moons, which have a maximum extent of about 1/8 degree, would be magnified to an apparent extent of about 11 degrees (much easier to resolve).
Your aperture selection (particularly of your objective) will determine the light-gathering ability. But the magnification applies here too: with a 5cm objective at 91x, your "exit pupil" will only be 0.55mm in diameter, which is tiny compared to your eye's pupil. You'd still be able to see the object, and your eye can comfortably accommodate up to a 30cm objective aperture at this magnification (3mm exit pupil). Keep in mind there is a tradeoff between aperture and aberrations, so unless you're designing a very well-aligned 2- or 3-element objective, you may want to stick with a maximum objective aperture of 50-75mm.
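A small sketch of these two relations (magnification as the focal-length ratio, exit pupil as aperture over magnification), using the numbers from the text:

```python
def telescope_basics(f_objective_mm, f_eyepiece_mm, aperture_mm):
    """Magnification and exit pupil for a simple two-lens telescope."""
    magnification = f_objective_mm / f_eyepiece_mm
    exit_pupil_mm = aperture_mm / magnification
    return magnification, exit_pupil_mm

# 200 cm objective, 2.2 cm eyepiece, 5 cm aperture:
mag, pupil = telescope_basics(2000, 22, 50)
print(mag)    # ~90.9x
print(pupil)  # ~0.55 mm -- tiny next to a dark-adapted eye pupil
```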



In terms of alignment, don't just set the lenses a certain distance apart and expect to see an image. You will need to allow for some adjustment, which is probably easier looking at a distant object during the day. After you form an image, you may need to adjust the centration and tilt of the eyelens to form the sharpest image to optimize your alignment.



All that said, a small-aperture, high-end refractive (glass) telescope can perform better than a reflective telescope of the same aperture. But as aperture increases, the cost of the materials and the impact of aberrations make refractive telescopes vastly inferior to reflective (mirror) telescopes. Since the design for these necessitates only one powered mirror and an off-the-shelf eyepiece for $50 or less, the best bang for your buck will definitely be a reflective telescope. Sorry if that's not what you were hoping to hear, but it's why most telescopes on the market today are reflective.

Tuesday, 10 January 2012

cosmology - On the cosmological principle

Neither of the two cases are completely inconceivable:



A homogeneous, anisotropic universe



A universe with galaxies spread evenly all over, but all spinning in the same direction. This universe would look the same no matter where you lived, but have a net angular momentum, so looking in one direction you'd see all galaxies spinning along your line of sight, and in another direction, you'd see them spinning perpendicular to this direction.



Another example is a universe that has been permeated by density waves in one direction. Along this direction, you'd see the density of galaxies alternating between high and low, and perpendicular to it you'd see a constant density.



[Figure: homogeneous but anisotropic universes]



Yesterday's papers on arXiv included one (Schucker 2016) that discusses the possibility that we might live in another type of homogeneous, anisotropic universe, namely one in which the observed expansion rate depends upon the direction in which you look. This is called a "Bianchi I universe", and it isn't just a hypothetical curiosity (although the results of this paper are statistically non-significant). See also @JonesTheAstronomer's answer.



An inhomogeneous, isotropic universe



As John Rennie has taught us, the Big Bang didn't happen at a point. However, if it had, and we happened to live in the central region, we could observe the same in all directions, but see a gradually thinning universe, or perhaps a density increasing out to some point and then decreasing, depending on exactly how this explosion came about. This scenario would, however, imply that we inhabit a special place in the Universe, which would make Copernicus sad. If a universe is isotropic from more than one location, it must also be homogeneous.



[Figure: an inhomogeneous but isotropic universe]

Sunday, 8 January 2012

Where do new stars get their hydrogen from?

Stars only burn hydrogen in their cores, where the temperature gets high enough for nuclear reactions to occur. They end their lives when they run out of fuel in the core, but lots of hydrogen still exists in their envelopes.



The Sun will burn 5-10% of its mass before exhausting the hydrogen in its core. When this happens, the core will contract as the pressure support from fusion disappears. The Sun will then start to burn hydrogen in a shell around the core. Eventually, when the Sun dies, it will have burned less than half of its hydrogen. Larger stars burn an even smaller fraction.



This means that, when stars die, they still leave hydrogen behind for the next generation.



Galaxies can still run out of gas, though. After all, each $M_\odot$ of stars formed burns of the order of $1\,M_\odot$ of gas, so if a galaxy isn't fueled with new gas it will become depleted on a timescale of order one over its specific star formation rate sSFR, which is its star formation rate SFR (measured in Solar masses per year) divided by its stellar mass $M_*$ (in Solar masses):
$$
t_\mathrm{depl} \sim \frac{1}{\mathrm{sSFR}} = \frac{M_*/M_\odot}{\mathrm{SFR}/M_\odot\,\mathrm{yr}^{-1}}.
$$
For instance, a $10^9\,M_\odot$ galaxy with a star formation rate of $10\,M_\odot\,\mathrm{yr}^{-1}$ will become depleted of gas in roughly $10^8$ years.



Gas-depleted galaxies do exist, and the more depleted they are, the lower the star formation rate (e.g. Rose et al. 2010). In general, though, the timescale is longer than the above, since galaxies also accrete gas from the surrounding circumgalactic medium.

the sun - Is it possible for the Sun and the Moon to crash into each other?

Technically speaking, only the Moon would crash into the Sun. That being said, it's certainly one of many possibilities in the far distant future.



About 7.6 billion years from now, the Sun will have ballooned to a size large enough to engulf much of the inner solar system. One of the consequences of growing into a red giant will be mass loss through rapid shedding of material in a stronger solar wind. This mass loss will weaken the Sun's pull on the planets, allowing them to migrate to higher orbits; in Earth's case, as far out as 1.2 AU from the center of the Sun. However, there will eventually be a large increase in the density of the solar wind where Earth orbits the Sun. This wind, and the extended chromosphere, will create drag on Earth as it moves around the Sun and will rob the planet of angular momentum (orbital velocity). The Earth will slowly spiral into a smaller orbit, and some sources think this may lead to the Earth falling into the Sun.



In this scenario, the Earth-Moon system is gravitationally stable, and the Earth has sufficient gravity to hold onto the Moon during this time: the Earth will pull the Moon down with it.



What is not known to me is whether the drag from the solar wind and chromosphere will cause the Moon to lose enough angular momentum to spiral inwards and ultimately merge with the Earth before the Earth is engulfed by the Sun. This would be a spectacular sight, as the Moon would first be ripped apart by tidal interaction with the Earth (inside the Roche limit, about 11,500 miles from the center of the Earth). Our Moon would then resemble a string of moonlets, each eventually falling to Earth. Either way, the result is the same: the Moon will end up inside the Sun.



Sources:



http://www.smithsonianmag.com/smart-news/earth-will-die-a-hot-horrible-death-when-the-sun-expands-and-swallows-us-and-now-we-know-what-that-looks-like-28965223/?no-ist



http://www.space.com/3373-earth-moon-destined-disintegrate.html

Thursday, 5 January 2012

cosmology - what is the current explanation for the formation of cosmic voids?

You might like to watch this short movie from one of the authorities on this subject:
https://www.youtube.com/watch?v=wI12X2zczqI



The movie starts by answering the question "what is the cosmic web and what does it look like?". Simulations can reproduce such structures, but they do not explain "why" they look the way they do. The clue lies in the geometric appearance of the structures: The voids resemble the elements of a Voronoi tessellation in which the walls of the Voronoi polyhedra intersect in lines that are identified with the filaments. We need to understand the dynamics that generated this pattern.



At around 2:30 the movie addresses the issue of why it looks the way it does, using a simple ballistic model for the motion of a set of discrete particles that represent a random density field (the Zel'dovich model). The filaments are there, but too fuzzy. So at ~3:00 it generalizes this to achieve greater realism by making the particles sticky. This is the "adhesion model", which is governed by the Burgers-Hopf equation and is easily solved by geometric means. There is no gravity in this: it's ballistics acting on a density field generated by a Gaussian random process with a known covariance function (or power spectrum).



The adhesion model is a remarkable representation of the structure. It shows how nothing "special" is required to generate the pattern of the cosmic web. That's the "how".



The "why does the structure look like it does?" is a little more complex: this is the question of emergent geometry which is understood in terms of the technical language of the Lagrangian description of the flow. But the graphics is pretty and quite didactic, so worth a look. This happens after around 5:30.



The movie compares the appearance of the flow simultaneously in Lagrangian and Eulerian space. In the Lagrangian picture the coordinates follow the particles, whereas in the Eulerian picture structures evolve relative to a fixed coordinate grid.



A short descriptive article goes with the movie:
http://arxiv.org/pdf/1205.1669v1.pdf
and a short explanation of the Lagrangian view is at
http://arxiv.org/pdf/1211.5385v1.pdf
This is part of the thesis work of Johan Hidding from the Kapteyn Institute in Groningen (supervisor Rien van de Weygaert, the father of the cosmic web according to Richard Bond). A full presentation of the nature of the singularities in terms of Morse theory is at
http://arxiv.org/pdf/1311.7134v1.pdf
and there are several other papers pending.

Wednesday, 4 January 2012

Super moon or Super lag?

This is the animation of the Moon's position over Los Angeles, CA skies at 8 p.m. local (PDT) time from September 25 to September 27, 2015 (made with Sky Map Online):



  Sky map



As you can see, there isn't any way to describe its apparent movement with a single angular distance, whichever method you use. In astronomy, the position of objects in the night sky is most commonly given by RA/Dec (Right Ascension and Declination), which places an object on the celestial sphere.



Unless we count the Supermoon Eclipse on September 27, 2015 (your time; September 28 UTC), one that won't occur again until October 8, 2033, exactly one saros period after the last one, there wasn't anything as extraordinarily dramatic about the Moon's movement on those nights as you describe. So what most likely happened is that your observations weren't taken precisely enough: perhaps you didn't time your observations precisely, didn't measure the Moon's position relative to some fixed direction from a fixed vantage point, didn't project its position onto your scale taking the curvature of the celestial sphere into account, or a bit of any or all of these.



The Moon, if observed at the same solar time from the same location, appears to move eastward (when it's visible and above the horizon; westwards when it's below the horizon, regardless of whether that's during day or night) by roughly 13° per day. Put differently, each day, at the same time, it will appear to lag about an hour behind its position the day before. Why? Simply because it completes one orbit around the Earth every 27.321582 days, so at the exact same solar time the next day, it will be 360°/27.321582 ≈ 13.18° farther east. Its apparent movement during a single night is of course still east to west, because the Earth rotates on its axis towards the east much faster than the Moon orbits the Earth.
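The arithmetic, as a tiny sketch:

```python
SIDEREAL_MONTH_DAYS = 27.321582  # one lunar orbit, in days

eastward_deg_per_day = 360 / SIDEREAL_MONTH_DAYS    # ~13.18 degrees
lag_minutes_per_day = eastward_deg_per_day / 360 * 24 * 60

print(eastward_deg_per_day)  # ~13.2
print(lag_minutes_per_day)   # ~53 minutes -- the "about an hour" of daily lag
```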

Tuesday, 3 January 2012

galaxy - Acceleration in Galaxies Collision


Some questions pop up in my mind.



Can the gravitational interaction slingshot an entire stellar system?
How much acceleration (in km/s²) can we expect these systems to
undergo, in the most extreme plausible case?




Basically this comes down to gravity assists. A gravity assist is essentially a three-body interaction: you need a point of reference and two velocities. When we send a spacecraft around a planet for a gravity assist, the point of reference is the Sun. The maximum gravity-assist velocity is $2U + v$ (see picture).






Source.



I agree with James Kiflinger. Reaching galactic escape velocity is very unlikely, because galaxies have large dark matter halos where most of the mass is contained, and actually escaping all that requires a very high velocity. Even with spiral arms extending outwards, the vast, vast majority of stars would stay under the gravitational control of the merged galaxies.



From this question, our Sun orbits the center of the Milky Way at 220 km/s, and its escape velocity is 537 km/s, so it would need a velocity boost of 317 km/s to escape the Milky Way. That kind of boost is possible, given that Andromeda and the Milky Way will fly towards each other at something like escape velocity (537 km/s), perhaps a bit more as Andromeda is larger, on top of orbital speeds (similar to our Sun's 220 km/s). So there's enough velocity to kick a star out if the gravity assist is positioned just right, but you'd need a very heavy star: a star the mass of our Sun simply couldn't deflect another star of the same mass flying past at several hundred km/s, as the gravitational bend would be much less than 180 degrees. You'd really need a very large star, maybe a few dozen solar masses travelling at high speed, to (maybe) kick another star out of the combined galaxy. Such an event would be hugely rare but theoretically possible. An individual star could get multiple gravity assists, but two stars getting close enough for a good gravity assist would be rare enough that a star actually being ejected, while it could happen, would be very rare. With billions of stars per galaxy it could happen to a significant number of stars, but it would still be a one-in-a-million (so to speak) event.




How can this acceleration affect the system? Can it destabilise
planetary orbits, for example?




First things first: strong accelerations are rare. For a star to accelerate around another star, they need to get quite close, and in such a scenario the first thing you'd notice is a second sun in the sky, growing gradually brighter.



Even when Andromeda and the Milky Way collide, from the point of view of your average star it wouldn't be very interesting, unless you actually saw Andromeda's galactic core move towards you; then it might look cool, but mostly it would just look like a bunch of stars, like we see in the sky today. The average distance between neighbouring stars in our neck of the woods is maybe 5-6 light-years; in the center of the galaxy, less, but there is still a great deal of space between stars. So even as Andromeda and the Milky Way collide, from one star's perspective most of the stars would remain quite distant, and another star passing as close as one light-year would be very rare, perhaps a once-every-million-years event.



If our Sun were to pass another similar star about one light-year away, it would be bright in the sky, but otherwise the effect would be negligible unless the star were very large.



For a Sun-like star to have the gravitational effect on Earth that Jupiter has (Jupiter is 4 AU from Earth at closest pass, and our Sun is 1047 times the mass of Jupiter), it would have to pass about 32 times farther away than that, about 129 AU, which is a tiny fraction of a light-year, roughly 1/490. Even at 100 solar masses, such a star would have to pass within about 1/49th of a light-year to exert the gravitational tug on Earth that Jupiter does. That's crazy close and would be enormously rare. It would happen to some stars, but your average solar system with life on it would, on average, not be affected in that way.
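A sketch of that comparison: matching Jupiter's pull means matching $M/d^2$, so $d = 4\,\mathrm{AU}\times\sqrt{M_\star/M_\mathrm{Jup}}$ (the 1047 mass ratio is the one used above):

```python
import math

M_SUN_IN_JUPITER_MASSES = 1047.0
AU_PER_LIGHT_YEAR = 63241.0

def jupiter_equivalent_distance_au(star_mass_msun):
    """Distance at which a passing star pulls on Earth as hard as
    Jupiter does from 4 AU (equal M/d^2)."""
    return 4.0 * math.sqrt(star_mass_msun * M_SUN_IN_JUPITER_MASSES)

for mass in (1, 100):
    d = jupiter_equivalent_distance_au(mass)
    # ~129 AU (~1/490 ly) for 1 Msun; ~1294 AU (~1/49 ly) for 100 Msun
    print(mass, d, AU_PER_LIGHT_YEAR / d)
```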



Luminosity is more likely to be a factor than gravity. A star with 100 solar masses would be about a million times as luminous as our Sun, and at a million solar luminosities it would only need to pass within 1,000 AU, about 1/60th of a light-year, to equal the flux we receive from our Sun; a star that bright would also put out lots of UV light. Even so, fly-bys that close would be very rare, though we would still be more likely to be cooked by a large star than orbitally perturbed by one. Smaller stars would need to fly much closer to have an effect, but smaller stars are also much more common.



Outer planets would be more vulnerable to near fly-bys and orbital perturbations, and Oort cloud objects even more so, but again, that kind of near fly-by wouldn't happen often.




How can it affect life? Can it affect the magnetosphere, or change its
shape? Can solar flares/wind get stronger in some places due to
particle acceleration?




As stated, two stars would have to get unusually close for there to be any effect at all. The most likely effect would be interactions between the two stars' Oort clouds, possibly leading to a big increase in comet collisions, which in itself could be pretty devastating when a good-sized comet hits a planet. Such events would still be rare, but rather than every few million years, they might happen every few thousand (a ballpark guess).



Magnetosphere: doubtful. If a star got close enough to affect a planet's magnetosphere, the planet would be vaporized.



Solar wind/solar flares: also no measurable effect. A star's flares are internally driven and not changed by objects flying past, especially at the distances involved even in a close flyby of two stars.




If another star passes really near (let's say one light-year at closest
approach) but at great speed, can its light reach a planet at high
energy levels, say X-rays, and affect that planet's life?




Again, no measurable effect. In the collision of two galaxies you might see stars fly past each other at about 1000 km/s, but that's nowhere near enough velocity to create any kind of blue-shifted X-ray effect. Other than the night sky changing more rapidly over the centuries than usual, there's no effect from stars flying past each other at 1000 km/s.



Mostly, there would be little to worry about. Dangerously close star-on-star fly-bys would be rare, even in a galaxy-on-galaxy collision.

Sunday, 1 January 2012

cosmology - How long would it take for a rogue planet to evaporate in the late stages of the Universe?

This page by physicist John Baez explains what will happen in the long term to bodies that aren't massive enough to collapse into black holes, like rogue planets and white dwarfs, assuming they don't cross paths with preexisting black holes and get absorbed. Short answer: they'll evaporate, for reasons unrelated to Hawking radiation. It's apparently just a thermodynamic matter: the internal thermal energy of the body periodically gives particles on the surface enough kinetic energy to reach escape velocity and leave the body (the wiki article here mentions this is known as 'Jeans escape'). Here's the full discussion:
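A rough sketch of the Jeans-escape criterion, comparing a typical thermal speed with the escape speed (the Earth-like mass, radius, and 100 K temperature below are illustrative assumptions, not values from Baez's page):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23   # Boltzmann constant, J/K
m_H = 1.6735e-27     # kg, mass of a hydrogen atom

def v_escape(mass_kg, radius_m):
    """Escape speed from the surface of a body."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def v_thermal(T, m):
    """Typical thermal speed, sqrt(3 k_B T / m)."""
    return math.sqrt(3 * k_B * T / m)

# Earth-like rogue planet at 100 K: the typical hydrogen atom moves far
# slower than escape speed, but the Maxwell-Boltzmann tail guarantees
# some particles always exceed it, so mass leaks away over vast times.
print(v_escape(5.97e24, 6.37e6))  # ~11,200 m/s
print(v_thermal(100, m_H))        # ~1,600 m/s
```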




Okay, so now we have a bunch of isolated black dwarfs, neutron stars, and black holes together with atoms and molecules of gas, dust particles, and of course planets and other crud, all very close to absolute zero.



As the universe expands these things eventually spread out to the point where each one is completely alone in the vastness of space.



So what happens next?



Well, everybody loves to talk about how all matter eventually turns to iron thanks to quantum tunnelling, since iron is the nucleus with the least binding energy, but unlike the processes I've described so far, this one actually takes quite a while. About $10^{1500}$ years, to be precise. (Well, not too precise!) So it's quite likely that proton decay or something else will happen long before this gets a chance to occur.



For example, everything except the black holes will have a tendency to "sublimate" or "ionize", gradually losing atoms or even electrons and protons, despite the low temperature. Just to be specific, let's consider the ionization of hydrogen gas — although the argument is much more general. If you take a box of hydrogen and keep making the box bigger while keeping its temperature fixed, it will eventually ionize. This happens no matter how low the temperature is, as long as it's not exactly absolute zero — which is forbidden by the 3rd law of thermodynamics, anyway.



This may seem odd, but the reason is simple: in thermal equilibrium any sort of stuff minimizes its free energy, E - TS: the energy minus the temperature times the entropy. This means there is a competition between wanting to minimize its energy and wanting to maximize its entropy. Maximizing entropy becomes more important at higher temperatures; minimizing energy becomes more important at lower temperatures — but both effects matter as long as the temperature isn't zero or infinite.




[I'll interrupt this explanation to note that while any completely isolated system just maximizes its entropy in the long term, this isn't true for a system that's in contact with some surrounding system. Suppose your system is connected to a much bigger collection of surroundings (like being immersed in a fluid, or even a sea of cosmic background radiation), and the system can trade energy in the form of heat with the surroundings (which won't appreciably change the temperature of the surroundings, given the assumption that the surroundings are much larger than the system; the surroundings are what's known as a thermal reservoir), but they can't trade other quantities like volume. Then the statement that the total entropy of system + surroundings must be maximized is equivalent to the statement that the system alone must minimize a quantity called its "Helmholtz free energy", which is what Baez is talking about in that last paragraph; see this answer or this page. And incidentally, if they can trade both energy and volume, maximizing the total entropy of system + surroundings is equivalent to saying the system on its own must minimize a slightly different quantity called its "Gibbs free energy" (which is equal to the Helmholtz free energy plus pressure times volume); see "Entropy and Gibbs free energy" here.]




Think about what this means for our box of hydrogen. On the one hand, ionized hydrogen has more energy than hydrogen atoms or molecules. This makes hydrogen want to stick together in atoms and molecules, especially at low temperatures. But on the other hand, ionized hydrogen has more entropy, since the electrons and protons are more free to roam. And this entropy difference gets bigger and bigger as we make the box bigger. So no matter how low the temperature is, as long as it's above zero, the hydrogen will eventually ionize as we keep expanding the box.



(In fact, this is related to the "boiling off" process that I mentioned already: we can use thermodynamics to see that the stars will boil off the galaxies as they approach thermal equilibrium, as long as the density of galaxies is low enough.)



However, there's a complication: in the expanding universe, the temperature is not constant — it decreases!



So the question is, which effect wins as the universe expands: the decreasing density (which makes matter want to ionize) or the decreasing temperature (which makes it want to stick together)?



In the short run this is a fairly complicated question, but in the long run, things may simplify: if the universe is expanding exponentially thanks to a nonzero cosmological constant, the density of matter obviously goes to zero. But the temperature does not go to zero. It approaches a particular nonzero value! So all forms of matter made from protons, neutrons and electrons will eventually ionize!



Why does the temperature approach a particular nonzero value, and what is this value? Well, in a universe whose expansion keeps accelerating, each pair of freely falling observers will eventually no longer be able to see each other, because they get redshifted out of sight. This effect is very much like the horizon of a black hole - it's called a "cosmological horizon". And, like the horizon of a black hole, a cosmological horizon emits thermal radiation at a specific temperature. This radiation is called Hawking radiation. Its temperature depends on the value of the cosmological constant. If we make a rough guess at the cosmological constant, the temperature we get is about $10^{-30}$ Kelvin.



This is very cold, but given a low enough density of matter, this temperature is enough to eventually ionize all forms of matter made of protons, neutrons and electrons! Even something big like a neutron star should slowly, slowly dissipate. (The crust of a neutron star is not made of neutronium: it's mainly made of iron.)