Monday, 31 December 2012

botany - Can any plant regenerate missing tissue?

In general, plant cells divide and begin to differentiate only at special regions of the plant known as meristems. Two of the primary types of meristem are the root apical meristem (at root tips) and the shoot apical meristem (at shoot tips)^. Within the shoot apical meristem, cells divide and begin to differentiate into different cell types (such as the various cells of the leaf, or vascular cells). Later growth (of, say, a leaf) is largely a result of cell expansion (cell division does still occur, but drops off as the leaf expands). Therefore, if you punch a hole in a leaf, it probably won't be filled in, because the cells in that leaf have finished growing and dividing.



However, as a shoot grows, more meristems are created. These are found in the axillary buds, just above where the leaf meets the stem. The meristems in the axillary buds can grow to form branches. Different plants obviously make different numbers of branches, but there is a common control mechanism known as apical dominance, where the meristem at the tip of the shoot suppresses the growth of the lower axillary buds. This is why a shoot with no branches can be made to grow branches by cutting off the tip (gardeners often do this to make "leggy" plants more bushy).



All of that was a long explanation to say, no, a plant doesn't normally^^ regenerate in the sense of filling in cells that have gone missing. However, if you cut off a shoot, the next remaining bud might begin to grow and, in a sense, replace the part that was lost. In that case, an existing bud is recruited to form a new branch and replace lost functionality, but I wouldn't say that qualifies as regenerating missing tissue.



^There are other types of meristem as well.



^^If you torture plant cells enough you can force them to become "stem cells" and thereby make an entirely new plant, but this is rare in nature.

Saturday, 29 December 2012

evolution - Is Behe's experiment (evolving the bacterial flagellum) plausible in the lab?

You may be interested in this paper and a video that summarizes it. They make it quite clear that 1) effectively none of the parts of the flagellum are original to it, and 2) there is a plausible evolutionary path (one involving only incremental refinement steps) that could have produced it.



The video mentions, but doesn't describe, experiments supporting the proposed model. I assume they involve individual refinement steps or statistical properties of the environment, not the whole-thing-at-once scenario that Behe outlines. Obviously, if you can show that each step is independently adaptive, then the whole chain is shown to be evolutionarily possible, without trying to set up an experiment where you win the lottery n times simultaneously.



Personally I think the fact that the most awesome thing about the flagellum -- the rotation -- already exists in ATP synthase steals a lot of the flagellum's thunder. :)



Edit (Douglas S. Stones): Following the above references led me to this paper:




M.J. Pallen, N.J. Matzke "From The Origin of Species to the origin of bacterial flagella" Nature Reviews Microbiology 4 (2006), 784-790. (pdf)




In this article the authors discuss the possibility of designing a lab experiment to reproduce (steps of) the evolution of the flagellum.




Scott Minnich speculated in his testimony that studies on flagellar
evolution need not be restricted to sequence analysis or theoretical
models, but that instead this topic could become the subject of
laboratory-based experimental studies. But obviously, one cannot
model millions of years of evolution in a few weeks or months.



So how
might such studies be conducted? One option might be to look back in
time. It is feasible to use phylogenetic analyses to reconstruct
plausible ancestral sequences of modern-day proteins, and then
synthesize and investigate these ancestral proteins. Proof of
principle for this approach has already been demonstrated on several
NF proteins[69–75]. Similar studies could recreate plausible ancestors
for various flagellar components (for example, the common ancestor of
flagellins and HAP3 proteins). These proteins could then be reproduced
in the laboratory in order to examine their properties (for example,
how well they self-assemble into filaments and what those filaments
look like).



An alternative, more radical, option would be to model
flagellar evolution prospectively, for example, by creating random or
minimally constrained libraries and then iteratively selecting
proteins that assemble into ever more sophisticated artificial
analogues of the flagellar filament.



Another experimental option might
be to investigate the environmental conditions that favour or
disfavour bacterial motility. The fundamental physics involved
(diffusion due to Brownian motion) is mathematically tractable, and
has already been used to predict, for example, that powered motility
is useless in very small bacteria[76,77].




[For readability, I've added some line breaks to the above. There are too many cited references to list them all.]

Wednesday, 12 December 2012

bioinformatics - Displaying nucleotide at a single position from RNA-seq reads in a BAM file

How do I display the nucleotide at a single position from the reads in a BAM file? I have been looking at variation using samtools mpileup, but I want to actually just display the nucleotides at the position I am interested in. This seems like something you should be able to do, but I can't figure out how.



To be clear I have a BAM file of a bunch of reads. I'm looking to do something like samtools magic reads.bam chr3:10000 where I get back something like:



T T T T T T T T T T T T T T T T T T T A A A A A A A A A A A A A.


I just want to sanity check the output of bcftools by actually looking at the base calls.
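There doesn't seem to be a dedicated `samtools magic` subcommand, but something like `samtools mpileup -r chr3:10000-10000 reads.bam` (restricting mpileup to the single 1-based position) gets close: the fifth column of its output is an encoded string of the base calls from every read covering that position. As a sketch, the helper below decodes that string into plain bases; it is a simplified decoder covering the common cases of the mpileup base-string encoding, not a full parser.

```python
def decode_pileup_bases(pileup, ref_base):
    """Decode a samtools mpileup base string into the actual base calls.

    Handles the common cases: '.' and ',' (match on forward/reverse strand),
    explicit mismatch letters, '^X' read-start markers, '$' read-end markers,
    and '+n<seq>'/'-n<seq>' indel annotations. A simplified sketch.
    """
    calls = []
    i = 0
    while i < len(pileup):
        c = pileup[i]
        if c == '^':          # read start: also skip the mapping-quality char
            i += 2
        elif c == '$':        # read end marker
            i += 1
        elif c in '+-':       # indel: skip '+<len><seq>' or '-<len><seq>'
            i += 1
            n = ''
            while pileup[i].isdigit():
                n += pileup[i]
                i += 1
            i += int(n)
        elif c in '.,':       # match to the reference base
            calls.append(ref_base.upper())
            i += 1
        else:                 # mismatch letter (or deletion placeholder '*')
            calls.append(c.upper())
            i += 1
    return calls

# e.g. a pileup column of "..,,TTt" over reference base 'a'
print(decode_pileup_bases("..,,TTt", "a"))
```

Joining the result with spaces gives exactly the kind of `T T T ... A A A` listing asked for above.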

Monday, 10 December 2012

evolution - Is extreme cladism a mainstream position in the species debate?

In the philosophy of biology it has been claimed many times that a popular position regarding the question of what species are, among biologists, is cladism. For my current purposes, the defining trait of cladism is captured in the following quote:




According to cladism, a species becomes extinct whenever it sends forth a new side species. (LaPorte 2004, 54)




So, for example, if Homo floresiensis is truly a species, and it derives, via a speciation event, from Homo sapiens, then, after the speciation, there is no Homo sapiens: there is Homo floresiensis and a new species, very similar to sapiens. In general, whenever there is speciation the mother species disappears.



I am under the impression that this description of the situation is not the one most mainstream biologists would endorse. I take it that a more common description would be one under which, if a number of Homo sapiens get isolated in an island and go on to form a new species, Homo sapiens exists both before and after this speciation event. I would like to know whether cladism, described as above, is really a popular position, or whether I am right about what most biologists would say about the relation between floresiensis and sapiens.



It might also be that biologists declare themselves cladists when explicitly theorizing about the nature of species, but fall back to a less extreme form of cladism in actual practice. Evidence for or against this possibility (coming from biology, not the philosophy of biology) would also be appreciated.



As Noah has pointed out below, what I have called extreme cladism might be very far from cladism as it is usually interpreted. This is precisely the kind of claim I would like to see substantiated.




LaPorte, Joseph. 2004. Natural Kinds and Conceptual Change. Cambridge University Press.


Friday, 7 December 2012

genetics - Do we get 1/4 of our genes from each grandparent?

I agree with the person above, but it might be more correct to say we get 1/4 of our total DNA from each of our grandparents: although genes are really important because they code for specific proteins, there is a great deal of non-coding DNA in between them. In fact, 98% of our genome is non-coding (although we are slowly but surely discovering novel functions for this DNA). It is important to think of it as 1/4 of your DNA, not your genes, because it is in this non-coding DNA that many mutations occur, and these inherited mutations are what make things like DNA fingerprinting for forensic analysis possible, as in CSI.



I just wanted to add three more thoughts:



Though it is true that 1/4 of your DNA comes from each grandparent, that is only true for your autosomes, not necessarily for your sex chromosomes. The X chromosome recombines regularly in the mother, but in the father the X and the Y mostly do not recombine. (Small pseudoautosomal regions of the Y can recombine, but the sex-determining part of the Y and its associated genes stay completely intact.) This means that if you are a male, your father, his father, and his father all share essentially the same Y chromosome.



Additionally, on the women's side: because the woman makes the egg, and thus contributes all the early embryonic organelles and RNAs, she donates the mitochondrial DNA, which has its own set of genes that are really important and have been linked to many diseases. So every child of a woman shares her mitochondrial DNA, as she shares her mother's, and her mother's mother's, and so on.



If ~1/4 of our DNA comes from each grandparent, then 1/8 of your DNA comes from each great-grandparent, 1/16 from each great-great-grandparent (your grandparents' grandparents), and so on, halving with each generation. The result is that there comes a point where an individual ancestor has contributed so little to your actual DNA that it becomes almost negligible. So if you are an American, who most likely has ancestors from all over the world, even if you have one great-great-great-great-grandparent who was a French immigrant while the rest of your ancestors were all British, the French DNA is only 1/64 of your total DNA. So even if you are a complete mix, there comes a point where so many different people have contributed to you being you that you can establish a cut-off and just say: 3/4 of my ancestors were born in America, I'm American!
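The halving can be made concrete in a couple of lines. Note these are expected values only: because recombination transmits chromosomes in large chunks, the actual fraction inherited from any one distant ancestor varies around (and can even fall to zero below) these figures.

```python
from fractions import Fraction

def ancestor_fraction(generations_back):
    """Expected fraction of your autosomal DNA from one ancestor
    `generations_back` generations ago (1 = parent, 2 = grandparent, ...)."""
    return Fraction(1, 2 ** generations_back)

print(ancestor_fraction(2))  # grandparent: 1/4
print(ancestor_fraction(3))  # great-grandparent: 1/8
print(ancestor_fraction(6))  # great-great-great-great-grandparent: 1/64
```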

Thursday, 6 December 2012

genetics - Which patterns do I have to avoid when modifying the 3'-UTR?

I want to change a pre-miRNA sequence (in my case the pre-miRNA is encoded in the 3'UTR of a gene) and then put it in a lentivirus to see if it is still processed.



After the modification (permutation of ten regions of ~20 nt each), what kinds of newly created sequence patterns do I have to be careful about? I do not want to disturb the gene in which the miRNA is encoded.

Friday, 30 November 2012

molecular biology - If inhibiting S6 kinase decreases protein translation, could inhibiting S6 kinase possibly slow down long-term potentiation in neurons?

I can't rule it out, but it sounds a lot like trying to tune a piano with a sledgehammer.



Neuronal LTP depends on protein translation, but so does absolutely everything else in the cell. Inhibiting protein synthesis at the ribosome will block the formation of all proteins, not just the ones responsible for LTP. Unless there's a link I don't know about between LTP and total levels of protein translation, you're really going to want to look into inhibiting the production of proteins specifically responsible for LTP and not protein synthesis in general.

Sunday, 25 November 2012

homework - To understand why satellite cells are genetically inactive in Barr body

DAPI (4',6-diamidino-2-phenylindole) preferentially binds AT-rich DNA (although it binds CG-rich DNA, too), which can give chromosomes distinctive banding patterns if they are polytene or in metaphase. In interphase condensed chromosomes, such as the inactive X chromosomes of female mammals (Barr Body), the relatively high concentration of tightly-packed DNA makes the chromosome appear as a brighter spot in the nucleus. When the DNA of a chromosome is decondensed (such as the rest of the chromosomes in interphase), it appears as more-or-less homogeneously stained DNA in the nucleus.



A great picture is here (see below): A is the DAPI staining, B is a protein localized to the Barr body, and C is the RNA (Xist) which coats the Barr body.



Barr body



Since the active X is not condensed, it appears just as the rest of the chromosomes do, and so cannot be identified among the mixture of autosomes. The difference in DAPI appearance has nothing to do with activity per se, but rather with differences in how tightly the chromosomes are packaged.



I do not understand the first part of your question, though. You cannot tell from a still image that the Barr body is relatively inactive. If you were doing experiments on live cells, you could measure RNA production (using a labeled nucleotide), use immunofluorescence to show localization of e.g. RNA polymerase, or use reporter genes located on the inactive versus active X chromosomes. If you clarify your question, I can answer better.

Friday, 23 November 2012

molecular biology - What is the mechanism of transgene integration (from expression vector to the host genome)?

What is your host in this case? For integration into the genome of a bacterium, you would need to use an "integration" vector. Most commercial vectors (such as pUC18) will be maintained without integrating into the host genome.



Here is a description of an integration vector from the Bacillus Genetic Stock Center to give you an idea of how foreign DNA would be integrated into a Bacillus genome:




Integration vectors are plasmids that feature conditional replication
coupled with a selectable marker. If the plasmid is transformed into
an appropriate host under conditions that select for the plasmid’s
presence but restrict its replication, all transformants will have
integrated the plasmid into their chromosome (or some other resident
DNA capable of replicating under the selective conditions). In
practice, the selectable marker usually specifies antibiotic
resistance. Conditional replication usually means that the plasmid has
replication functions that work in E. coli but not in gram-positive
bacteria, such as B. subtilis. Sometimes a temperature-sensitive
replication phenotype is employed instead. Integration is targeted to
a particular locus on the chromosome by including identical sequences
on the plasmid. If there is a single homologous sequence, a single
crossover will integrate the entire chromosome into the target locus
by a Campbell-type mechanism. If there are two homologous sequences,
and they are relatively close together on the chromosome, then a
double crossover will result in a cassette integrating between the
chromosomal targets. (source)




For most routine molecular biology labwork, there is no need to integrate genes into the E. coli genome. Integration is usually used to generate "knockout" cell lines to study gene function in the bacteria of interest.

Thursday, 22 November 2012

botany - Photosynthetic Pigments vs. Chloroplasts

Photosynthetic pigments are the chemicals which take part in photosynthesis; in particular, they are the ones which absorb photons and either fluoresce (emit photons of a different wavelength) or emit electrons. Pigments are molecules, and chlorophyll is a key example. These pigments are required for photosynthesis to take place, as they generate the electrons which create the electrochemical gradients that power photosynthesis; thus all photosynthetic organisms have photosynthetic pigments of some kind.



Chloroplasts are membrane-bound organelles in plant cells, made up of many hundreds of thousands of molecules including pigments. Plant photosynthesis takes place on their internal membrane, the thylakoid. Specifically, the thylakoid membrane of chloroplasts is the membrane across which the aforementioned electrochemical gradients are created in plants.



Chloroplasts originated from a free-living bacterium, probably a cyanobacterium, entering a eukaryotic cell. So prokaryotes don't have them because the chloroplast endosymbiosis event was one way in which plants diverged from their non-plant ancestors. So you can think of a cyanobacterium as a free-living chloroplast - they have their own internal membranes similar to the thylakoid across which electrochemical gradients are created for photosynthesis. Conversely, you can think of a chloroplast as a small cyanobacterium living symbiotically inside a plant cell.

Wednesday, 21 November 2012

biochemistry - Can scientists create totally synthetic life?

In principle it is possible. Life doesn’t contain some divine or intrinsically spiritual element that we would have to add to our artificial organism potion to breathe life into it. At this moment we are limited by gaps in our knowledge and by the current state of technology.



We first have to better understand fundamental principles of life on a multi-level scale: from quantum mechanics, through biochemistry, structural biology, molecular evolution, to macroscopic function and behavior on the organism level. This, together with development of enabling technologies, will require decades of research but some steps have already been taken.



One of the promising approaches is re-writing, as exemplified in this work:




We redesign the genome of a natural biological system, bacteriophage T7, in order to specify an engineered surrogate that, if viable, would be easier to study and extend. (...) The resulting chimeric genome encodes a viable bacteriophage that appears to maintain key features of the original while being simpler to model and easier to manipulate. The viability of our initial design suggests that the genomes encoding natural biological systems can be systematically redesigned and built anew in service of scientific understanding or human intention.




or a minimal cell synthesis project:




Construction of a chemical system capable of replication and evolution, fed only by small molecule nutrients, is now conceivable. This could be achieved by stepwise integration of decades of work on the reconstitution of DNA, RNA and protein syntheses from pure components. (...) Completion would yield a functionally and structurally understood self-replicating biosystem. (...) Our proposed minimal genome is 113 kbp long and contains 151 genes. We detail building blocks already in place and major hurdles to overcome for completion.




So, technically it’s very difficult but definitely can be done, which is really exciting.

Monday, 5 November 2012

transcription - Why does T7 RNA polymerase require a reducing environment (i.e. DTT)?

A quick search on T7 cysteines gave some clues:




Bacteriophage T7-induced DNA polymerase is composed of a 1: 1
complex of phage-induced gene 5 protein and Escherichia coli
thioredoxin. Preparation of active subunits in the absence of
sulfhydryl reagents indicates the reduced form of thioredoxin is
sufficient for formation of the active holoenzyme. The oxidized
form of thioredoxin, thioredoxin modified at one active site
sulfhydryl by iodoacetate or methyl iodide, or thioredoxin modified
at both active site sulfhydryls by N-ethylmaleimide, are all
inactive, being defective in complex formation with gene 5
protein.




Adler and Modrich, J Biol Chem 258:6956 (1983)



There's a more recent paper (Aguirre et al, Inorganic Chemistry 48:4425 (2009)) that mentions the enzyme's "critical sulfhydryl cysteine groups", but unfortunately I only have access to the abstract.



Update: It seems to be an old finding, rather than a rationale concerning the cytoplasmic redox state. According to Chamberlin and Ring, JBC 248:2235 (1973),




General Requirements-The general requirements for T7 RNA
synthesis directed by T7 DNA polymerase are shown in Table
I. As expected for a template directed polymerase, RNA
synthesis shows an absolute requirement for DNA, the 4
ribonucleoside triphosphates and Mg++.




(no surprises there ;)




The activity of the enzyme is reduced significantly if a sulfhydryl reducing agent such as β-mercaptoethanol is omitted from the reaction. The addition of 10^-5 M p-hydroxymercuribenzoate to the assay system in the absence of β-mercaptoethanol abolished all activity, indicating that the enzyme contains a sulfhydryl group necessary for activity.




However, if you look at Table I, the remaining activity after removing β-mercaptoethanol is still 74%.



There seem to be 7 exposed cysteines (Mukherjee et al, Cell 110:81 (2002)), but I could not find any paper discussing their roles.

Sunday, 4 November 2012

biochemistry - What does the human body use oxygen for besides the final electron acceptor in the electron transport chain?

Another small addition




There is a class of oxidoreductases called oxygenases, which incorporate molecular oxygen into their substrates rather than just using it as an electron acceptor as oxidases do (note that the terminal enzyme in the ETC is an oxidase, and there are other such oxidases). In other words, oxygen is not a cofactor but a co-substrate. Oxygenases are further classified into dioxygenases and monooxygenases, which incorporate two oxygen atoms and one oxygen atom respectively. Examples:



  • Cytochrome P450 family (monooxygenase): involved in detoxification of xenobiotics

  • Cyclooxygenase (dioxygenase): involved in production of prostaglandins, which mediate pain and inflammation. Many NSAID painkillers such as aspirin and ibuprofen inhibit cyclooxygenases, including cyclooxygenase-2 (COX2)

  • Lipoxygenase (dioxygenase): Involved in production of leukotrienes which are involved in inflammation.

  • Monoamine oxidase (strictly an oxidase rather than an oxygenase): involved in catabolism of neurotransmitters such as epinephrine, norepinephrine and dopamine.


Does oxygen deprivation result in death just due to the halting of ATP
production, or is there some other reason as well?




Death predominantly occurs because of the halt in ATP production. Some cells, such as neurons (and perhaps also cardiac muscle cells), are highly sensitive to loss of oxygen (for their energy requirements), and clinical death from hypoxia usually occurs because of loss of basic brain function.




What percentage of the oxygen we take in through respiration is
expelled later through the breath as carbon dioxide?




As already mentioned, there is roughly a 1:1 ratio of CO2 production to O2 consumption. However, as indicated in a comment by CurtF, O2 does not form CO2; it forms water in the last reaction of the ETC. CO2 is produced in other reactions (pyruvate decarboxylation and the Krebs cycle).



The complete oxidation of 1 molecule of glucose produces about 32 molecules of ATP, mostly via the ETC (see here). Three of the ETC complexes pump protons, and the chain ultimately depends on oxygen as the final electron acceptor; you can assume roughly 1/2 a molecule of O2 is consumed per ~3 ATP produced. Therefore ~32 molecules of ATP would consume roughly 5-6 molecules of O2, which matches the ~6 molecules of CO2 released per glucose: roughly a 1:1 ratio of CO2 production to O2 consumption.



We can see it like this:



FADH2 enters the ETC at the second complex whereas NADH enters at the first, so electrons from FADH2 drive fewer proton pumps and yield less ATP; each carrier nevertheless still passes its electrons to 1/2 O2 at the end of the chain.



An NADH or a FADH2 molecule requires 1/2 a molecule of O2. Per glucose, glycolysis yields 2 NADH, pyruvate decarboxylation 2 NADH, and the Krebs cycle 6 NADH and 2 FADH2: 12 carriers in total, requiring 12/2 = 6 molecules of O2. On the other side, pyruvate decarboxylation releases 2 molecules of CO2 and the Krebs cycle 4, i.e. 6 molecules of CO2 per glucose.



However, the 2 cytosolic NADH molecules from glycolysis cost the equivalent of ~2 ATP to shuttle their electrons into the mitochondria, so the realized ATP yield is somewhat lower; the O2:CO2 ratio itself stays close to 1:1.
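The bookkeeping can be made explicit in a few lines, assuming 1/2 O2 per NADH or FADH2 oxidized and counting the two NADH from pyruvate decarboxylation:

```python
# Per glucose: electron carriers fed into the ETC, and CO2 released.
nadh = 2 + 2 + 6        # glycolysis + pyruvate decarboxylation + Krebs cycle
fadh2 = 2               # Krebs cycle (succinate dehydrogenase)
o2_consumed = (nadh + fadh2) / 2   # each carrier reduces 1/2 O2 to water
co2_released = 2 + 4    # pyruvate decarboxylation + Krebs cycle
print(o2_consumed, co2_released)
```

For pure glucose oxidation the respiratory quotient (CO2 out / O2 in) works out to exactly 1; measured whole-body values deviate from 1 because fats and proteins (with lower quotients) are oxidized as well.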



Another factor to keep in mind is that the three proton-pumping complexes do not actually produce ATP; they just pump protons to create an electrochemical potential. The F0F1-ATP synthase probably works only after a threshold H+ potential is established. The figure of ~1 ATP per complex is a mean value, not exactly what happens per reaction.

Wednesday, 24 October 2012

physiology - Below which temperature human muscles don't work?

If you dissect striated muscle out of almost any organism, the actual contractile apparatus works over a wide range of temperatures -- that's at the single-muscle-fiber scale. The muscle itself continues to work at all (thawed) temperatures below body temperature; the problem comes with its regulation.



The shivering response -- a centrally controlled involuntary contractile pattern -- overwhelms voluntary muscle control, so the mechanism behind loss of muscle coordination in hypothermia is the same as the shivering mechanism. When the core temperature drops, the midbrain starts over-riding voluntary control of the muscles. When the core temperature drops far enough (around 32C), the shivering often slows down or stops. Voluntary movement becomes compromised, probably because the brain simply isn't working: neuron firing rates are so slow that sensation, processing, and motor responses are all critically impaired.



The feeling of numbness does not actually directly accompany a loss of muscle contractility. You can walk pretty much indefinitely on frozen feet if you can keep your balance (and keep your core temperature up). Lots of people survive severe frostbite of their feet (their feet, however, often do not survive). The reason it seems your hands don't work when they get cold is that you can't feel what you're doing (note: your hands can be much colder than your core body temperature). But the muscles themselves work right up until they freeze solid.



UPDATE:
Here's a paper that directly addresses the scenario posed by the OP -- the decrease in grip strength with temperature. Figure 1 of that paper illustrates their experimental setup: they measure the contractile strength of the index finger while manipulating the temperature of the rest of the hand. They show that contractile function is impaired as temperature falls, looking at temperatures as low as 12C.



They measure as much as a 50% impairment in twitch tension upon cooling to 12C. It's interesting that they review results suggesting that some of this effect is intrinsic to the muscle fiber (not neurological), showing that I should refine what I meant by "continuing to work" in my opening paragraph. (I meant having the ability to generate contractile force when equilibrated in a solution containing sufficient ATP and Ca$^{2+}$, not the ability to contract optimally.) For fun, I linearly extrapolated the final arm of Figure 5 and found that the 'voluntary tension' approached 25% at 5C. This suggests that total failure of voluntary contraction happens somewhere below the freezing point of water (muscle freezes below 0C because of colligative effects).
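The back-of-the-envelope step above amounts to fitting a straight line through the last two points of the curve and reading it off at a lower temperature. A minimal sketch, with placeholder (temperature, % voluntary tension) points that are NOT the paper's actual data:

```python
def linear_extrapolate(p1, p2, x):
    """Extend the straight line through p1 = (x1, y1) and p2 = (x2, y2) to x."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x - x1)

# Illustrative points only: tension falling from 80% at 20 C to 50% at 12 C,
# extrapolated down to 5 C.
print(linear_extrapolate((20, 80.0), (12, 50.0), 5))
```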

Friday, 19 October 2012

meiosis - How do you see the stage of the second meiotic arrest in oogenesis in the given video?

I seem to understand the thing now.



The video is greatly simplified animal-cell meiosis I and II.
In oogenesis, after each anaphase you get one cell with very little cytoplasm (a polar body) and another cell with most of the cytoplasm. In the video the amounts of cytoplasm are equal, so the picture is idealized.



The video is better suited to explaining male gametogenesis (spermatogenesis), since there the amount of cytoplasm does not differ after each anaphase.



The video also goes too fast to teach you oogenesis, since in reality the process is arrested for a long time. The first meiotic arrest (in prophase I) begins before birth and can last 12-50 years, until a given oocyte is ovulated after puberty. The second meiotic arrest (in metaphase II) is much shorter: the released secondary oocyte stays arrested only until fertilization, after which it completes meiosis and develops into an ovum.

Saturday, 13 October 2012

human biology - What is the purpose of the adrenal medulla?

The adrenal medulla is less of a 'real' endocrine organ like the others in the endocrine system and much more an extension of the sympathetic nervous system. In fact, its chromaffin cells are modified neurons by descent and secrete adrenalin and some noradrenalin upon stimulation by sympathetic preganglionic fibres, effectively turning the medulla into a sort of 'endocrine ganglion' with the whole cardiovascular system as its 'postganglionic fibres'.



Adrenalin in the circulation seems to have pretty much the same effect as all the adrenergic neurons of the sympathetic nervous system (which innervates all blood vessels, organs etc.): arteriole constriction, cardiac output increase, breath rate increase, pupil dilation, glucagon secretion and insulin inhibition, stimulation of glycolysis and glycogenolysis etc.



So what is the purpose of the adrenal medulla, if all of the effects which the endocrine adrenalin produces are essentially the same as those produced by the sympathetic mass response?

Tuesday, 9 October 2012

physiology - Are human fetuses more likely to be male?

Fisher's principle is not applicable to fetuses, because it was formulated in terms of parental expenditure: it basically states that the ratio of males to females that reach reproductive age will tend to 1:1.



Several conventional assumptions about the mechanisms involved are mentioned in the canonical paper by James (2007):




(a) there are equal numbers of X and Y chromosomes in mammalian sperms;
(b) X and Y stand an equal chance of achieving conception;
(c) therefore equal numbers of male and female zygotes are formed; and
(d) therefore any variation of sex ratio at birth is due to sex selection between conception and birth.




James brings much evidence that none of these conventional beliefs is true. Rather, they are dependent upon many factors: exposure to stress during pregnancy, glucose level etc. He reports that there is an excess of males at birth in almost all human populations, and the natural sex ratio at birth is usually between 1.02 and 1.08. However, the ratio may deviate significantly from this range for natural reasons. (I really recommend reading the paper I linked, it is available for free).



Branum et al (2009) analyze birth statistics in the US taking into account many factors like ethnicity, gestational age and plurality and show that the ratio can increase even more with growing gestational age and has different values among different races.



So, taking everything together, I can say that YES, the chances for a human fetus to be male are indeed higher.

Wednesday, 3 October 2012

biochemistry - Negative feedback in the fructose metabolism in liver

The main reaction of fructose in the liver is phosphorylation, catalyzed by ketohexokinase (UniProt P50053). One study in rats showed that a fructose load can deplete the liver of nucleoside triphosphates (ATP, GTP) even though ADP and GDP are inhibitors of the enzyme, which shows that there is no effective negative-feedback regulation.



M. I. Phillips, D. R. Davies: The mechanism of guanosine triphosphate depletion in the liver after a fructose load. The role of fructokinase. The Biochemical Journal 228(3), Jun 1985, pp. 667–671. ISSN 0264-6021. PMID 2992452. PMC 1145036.

Thursday, 27 September 2012

botany - What factors affect the rate of transpiration in plant leaves?

I'm trying to get my head around factors which affect transpiration in leaves.



For example, how would applying petroleum jelly to the surface of plant leaves affect their rate of transpiration?



I get that it's basically going to decrease the transpiration rate because the stomata will be covered, but I'm not sure about these parts:



  • How would it affect the rate of transpiration if only the top surface was covered?

  • How would it affect the rate of transpiration if only the bottom surface was covered?

Also, I think I'm right in saying that increasing wind / applying a fan to plant leaves will increase the transpiration rate, but why?

Monday, 24 September 2012

Robotic surgery for treating cancer?

I'm assuming you are not talking about a single solid tumor, but rather one where the tumor is loose and distributed throughout the tissue, or has metastasized.



I guess the answer is that you could, but it would have to be one amazing machine. This robot would have to examine each individual cell and destroy it based on what it could sense about the cell's surface properties. Identifying cancer cells by the proteins and glycosyl (carbohydrate) molecules on the cell surface cannot be done reliably with a touch sensor. Cancers can display many such patterns, and healthy cells can quite easily look very similar. Cancer cells don't carry an obvious sign saying 'I'm cancer'...



Then there is the structure of tumors. Glioma, for instance, is one of the most difficult brain cancers you can have. Glioma cells push out long, axon-like projections - it's hard to imagine a robot that could find those in a brain without tearing up all the neural connections. In prostate cancer - one of the most common cancers in men - the cancer cells are embedded in the tissue, surrounded by necessary, healthy cells.



This is why chemicals and nanostructures are the most commonly pursued means of combating cancer: they can reach cells that are embedded in solid organs. Differentiating cancer cells from other kinds of cells in vivo is a pretty hard problem - antibodies are sometimes suggested as a possible aid there, but they are not reliable.



This blog post is a pretty up-to-date summary of where we are:



http://www.sciencebasedmedicine.org/index.php/personalized-medicine-vs-evolution/

Saturday, 22 September 2012

evolution - Productive turnover and generations in the fruit fly

What do you mean by "you couldn't find anything on the internet"? Drosophila generation time is explained here.



Your other question is a bit off-topic here but I'll give all advice I have heard myself:



You can leave out a banana skin and catch some fruit flies, then do the experiment in your kitchen :) Joking aside, unless you can perform the experiment independently, it will probably take a long time, and it's quite possible that you won't be able to do it at all (unless you become a famous scientist). I'm not familiar with the experiments you mention, but if you require equipment and/or funding, you will need a qualification and/or a good explanation of why people should give you money or let you use their equipment for this experiment.



I don't know what stage of education you are at, but it sounds like you are still at school ("when I'm older")? Most countries have some schemes where students (below uni) can apply for young researcher kind of things, so you could try and google that. Apart from that, the best bet is probably to try and get into a uni with good facilities and research programmes for their students. What you study shouldn't matter so much as long as it's science.



In response to Marta's comment: if you understand the experiment and you think you can gather everything they used at home, nothing speaks against just doing it on your own. Just make sure you don't let those flies swarm your house ;)

Sunday, 16 September 2012

molecular biology - How to find ion/water channel related genes

We now have a collection of transcripts at hand and would like to investigate some particular ones that are ion/water channel related. How should we go about this? Could anybody point out how to find annotated genes that are ion/water channel related? If there are resources specific to fish, that would be perfect. Thanks.
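As a starting point, channel-related transcripts can often be pulled out of an annotation table by keyword matching against gene descriptions. A minimal sketch, with invented file layout, keywords, and gene records (none of these names come from the question):

```python
# Hypothetical sketch: filter an annotation table for ion/water channel genes.
# Assumes each row is (gene_id, description); keywords are illustrative.

KEYWORDS = ("ion channel", "aquaporin", "water channel",
            "potassium channel", "sodium channel", "chloride channel")

def find_channel_genes(rows):
    """Return gene IDs whose description mentions a channel-related keyword."""
    hits = []
    for gene_id, description in rows:
        desc = description.lower()
        if any(kw in desc for kw in KEYWORDS):
            hits.append(gene_id)
    return hits

# Toy annotation records standing in for a real annotation file.
rows = [
    ("aqp1", "aquaporin 1 (water channel)"),
    ("actb", "beta-actin, cytoskeletal protein"),
    ("kcnj2", "inward rectifier potassium channel 2"),
]
print(find_channel_genes(rows))  # ['aqp1', 'kcnj2']
```

For real data, matching against Gene Ontology terms (e.g. "ion channel activity") is more robust than free-text descriptions, and resources such as Ensembl or ZFIN carry fish-specific annotations.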

Thursday, 13 September 2012

botany - How long will a vegetable live for after being harvested?

The short answer is that as long as the vegetable/fruit is fresh looking - i.e. the cells have not disintegrated - they will be respiring, many cells will be functioning quite normally, and the plant is still technically alive. In cases where the part of the plant we treat as a vegetable is a part intended for reproduction (e.g. a seed, or a tuber like a potato) the plant will keep growing.



The point at which the plant dies is not clearly defined like it is in animals, but generally if you can still eat it, it's still alive.



Death in plants is quite different from that in animals - we refer to it as senescence. The key difference is that it happens to tissues and organs which can die and separate from the organism. Individual leaves can die without the plant's health being affected. Once this has happened to all the parts, the organism is considered dead, but if there is any respiring tissue left, it's still alive.

Monday, 3 September 2012

assay development - Providing small molecules to cells on a filter plate

Let's imagine that I have mammalian cells that I've immobilized on a filter. Now I want to keep providing small molecules to these immobilized cells without resolubilizing the cells.



The caveat is that I would like to do so without fixing the cells or waiting for them to adhere to the filter. The small molecules are small enough to diffuse through the filter.

evolution - Why do we grow so much hair on our heads compared to our bodies?

I've been wondering about head hair, facial hair in particular. Human males can grow very extensive beards should they choose not to shave - however, you do not really see this in our chimpanzee cousins! Yes, they have little pseudo-beards, but the difference is that they do not shave; that is simply the length their hair reaches, whereas humans can grow theirs to their heart's content (*this may not be the case; see this question).



I can't really see why this would have been selected, unless it's simply that (evolutionarily speaking) women like men with long beards?



So my question is: why can humans perpetually grow head hair, yet we have lost the majority of our body hair, in comparison to chimpanzees and other ape family members?

microbiology - Aren't antibiotic resistant probiotics dangerous?

Usually, resistance genes are located on plasmids - additional rings of DNA in the bacterium that form part of its genome. These plasmids drive their own exchange with other bacteria, even those of other species.



B. clausii, the probiotic organism in question here, appears to be special, though, in that it has no plasmids. Its resistance genes sit on the primary, ring-shaped chromosome and should not be transferred to other bacteria via plasmid exchange. This doesn't rule out other means of gene transfer such as phages or conjugation, however. One study tried, unsuccessfully, to transfer a macrolide resistance gene to other bacteria. The authors conclude:




A potential hazard is transfer of resistance to microorganisms
pathogenic for humans. The risk that this event will occur and the
consequences in terms of morbidity and mortality have not been
evaluated. Parameters required for risk assessment include studies on
the nature and mobility of the resistance genes of probiotics.




The only other paper on a B. clausii resistance gene didn't examine its transferability. That clearly shows we don't know enough.



Bozdogan, B., Galopin, S. & Leclercq, R. (2004) Characterization of a new erm-related macrolide resistance gene present in probiotic strains of Bacillus clausii. Applied and Environmental Microbiology 70(1), 280–284. ISSN 0099-2240. PMID 14711653. PMC 321311.

Sunday, 26 August 2012

rna - Can DNA produce ribozyme-like structures?

There are no known natural DNA enzymes (deoxyribozymes), but there are various synthetic ones. The first deoxyribozyme to be reported (Breaker and Joyce, 1994) catalyzed the Pb²⁺-dependent cleavage of RNA.



Various deoxyribozymes have since been synthesized; they can catalyze RNA cleavage, RNA ligation, and many other reactions such as DNA phosphorylation, thymine dimer photoreversion, and even a Diels-Alder reaction (see Baum and Silverman, 2008 for a review).



DNA lacks the 2'-hydroxyl group that RNA possesses, but there is some evidence that this doesn't significantly reduce the catalytic potential of deoxyribozymes compared to ribozymes. In one experiment, DNA and RNA enzymes catalyzing the same carbon-carbon bond formation were compared and achieved comparable catalytic rates (Chandra and Silverman, 2008).



For more information about DNA enzymes, you can look at the publications from the Silverman lab; they're probably the most active research group in this field.




Saturday, 25 August 2012

human biology - How does the brain's energy consumption depend on mental activity?

I have already addressed the facts of this question on Skeptics.SE, here and here. You should read both papers very carefully; I highlighted the most important facts, but this is a very tricky question, especially when it comes to defining what mental activity is. The papers also explain how the fMRI signal is linked to NEURONAL activity - as far as I remember, there is no strong direct link.



You assume in your question that a mathematician solving a differential equation needs higher mental activity than a child reading a book. Is this legitimate? It seems intuitive, but it is also very subjective. The papers mention that at both the highest and lowest energy consumption we lose consciousness. I will not draw conclusions from this, but since you are talking about conscious mental activities, it may answer your question. To me it means that our understanding of the human brain in neurobiology is at the level of the Rutherford atomic model in physics at the beginning of the 20th century: we do not really have a clue how information is processed, or how it is constrained by physical laws and the principles of entropy and energy. From the two papers, it looks as though the human brain does not raise its energy consumption the way a computer would (the computer analogy pretty much fails when applied to the human brain); most of the energy is used for unconscious processes in "standby mode".



As in physics, extreme cases such as savants and the mentally disabled are probably the best starting point for excluding possible models of the human brain and its physical boundary conditions, since we cannot approach these questions in a reductionistic way. How can savants like Kim Peek process such huge amounts of information AND store it? He was able to scan book pages just once and know them by heart thereafter, yet his brain did not consume more energy than an average human brain. So "mental activity" is probably not a well-defined term or quantity, or even really suited to scientific use. Does neuronal activity mean mental activity (in the sense of your definition)? Reading the papers, the problem is separating mental from neuronal activity. First you would have to know which basic brain functions and processes consume most of the energy. However, the brain is not built in a modular way like a computer (where most energy is used for constantly refreshing RAM), so there is no objective way to analyse and separate this modular energy consumption - if it even is modular.



In my opinion, most models of information processing in the human brain are intuitive guesses (Rutherford again); we need much more detailed experiments and data (e.g. the Blue Brain Project). fMRI is like analysing an atom with a magnifying glass. The more promising approach from a biophysical perspective is probably not the level of "mental activity" but the actual amount of information processed by the brain and the energy consumption linked to it (Kim Peek). For that, however, we need a model of how this information is stored in the brain. Do normal humans store the same information as Kim Peek when scanning a page, and are we just unable to recall it consciously? When solving a differential equation, how much energy goes into recalling facts - and is that not similar to reading a book? How much goes into the logical task itself, and is there really a difference at all?



I will stop here; I hope you gained some insight into why the question is important but too early to be definitively answered. I think we will learn a lot more from projects like Blue Brain than we have from fMRI experiments.

Wednesday, 8 August 2012

zoology - How do insects breathe?

The main difference between tracheal gas exchange, and other forms of gas exchange (except simple diffusion) is that it is generally a passive process. Organisms with lungs, gills, or other modified respiratory organs can actively pump the oxygen-containing medium (usually air or water) across their respiratory surface, and some also pump their blood across in the opposite direction to cause a countercurrent exchange to enable maximum efficiency of oxygen/CO₂ transfer.



The tracheal system is a more efficient mode of gas exchange than simple diffusion, which involves oxygen passing across the organism's body surface (a semi-permeable membrane) and is therefore rate-limited. Instead, the epidermis is invaginated to form tracheae (tubes) through which air can travel passively. The tracheal system is highly branched and terminates in tracheoles (fine, fluid-filled tubes) which closely line respiring tissues, providing oxygen and allowing release of CO₂.



Tracheal respiration limits the size of insects, since their oxygen demands cannot be met if the air has to diffuse very far. Some insects require more oxygen than others: many larger and more active insects have evolved to supplement the tracheal system with pumps, gas gills (permanent or temporary bubbles of air which aquatic insects carry), or mechanically ventilated air sacs (honeybees have these; see Gullan & Cranston, 2005, and Snodgrass, 1956).



Furthermore, tracheal systems can be 'open' or 'closed', or the insect can engage in discontinuous gas exchange. Insects with open tracheal systems have spiracles (holes in the cuticle). Closed tracheal systems are more common in aquatic insects and come in two major designs: highly branched systems over the internal surface, allowing cutaneous gas exchange; and filamentous or lamellate arrangements, which are analogous to a primitive gill (see Wigglesworth, 1964, or Gullan & Cranston, 2005, chapter 10 for more on aquatic insect adaptations). Discontinuous gas exchange involves coordinated opening and closing of different spiracles, producing a unidirectional current that allows more effective ventilation.



References



  • Gullan, P.J. & Cranston, P.S. (2010) The Insects: An Outline of Entomology. John Wiley & Sons.


  • Snodgrass, R. (1956) Anatomy of the Honey Bee. Comstock Publishing Associates, Ithaca, New York.


  • Wigglesworth, V.B. (1968) The Life of Insects. New American Library.


Tuesday, 7 August 2012

genetics - Extreme examples of protein translation/use coupling/decoupling?

One example that may interest you, Rory M, is the so-called toxin-antitoxin pairs in bacteria. Interestingly, many bacterial species need a way to protect themselves from their own toxins, so for a particular toxin a corresponding antitoxin is synthesized (most often both are proteins, but the antitoxin may also be RNA; three types of toxin-antitoxin systems are known - see http://en.wikipedia.org/wiki/Toxin-antitoxin_system). The antitoxin remains bound to the toxin until the latter is secreted to its designated location; if the antitoxin decouples prematurely, cell death may occur. I'm not too sure how long the two are "stored" within the cell; I imagine this could vary greatly between species.

Saturday, 4 August 2012

molecular biology - Effect of histidine on the binding affinity of HisP

Still, if you rephrase your question as "If histidine is abundant, HisP's job is to stop the histidine pathway as a repressor; if HisP binds less tightly to promoters, the pathway should not produce as much histidine", then the answer rests on a further assumption: what is the effect of HisP binding the promoter of the enzyme's gene - does it suppress transcription or amplify it?



If it suppresses transcription, the answer is "more tightly"; if it amplifies transcription, the answer is "less tightly". I suggest you state this clearly in your answer, as the question is not quite precise.




I guess your answer might be wrong.



The question rests on another assumption: how does the protein HisP regulate histidine biosynthesis - by positive or negative feedback?



Generally, amino acid synthesis is regulated by negative feedback loops, so that cells can control the amount of amino acids they produce. In this case, the answer should be "more tightly": since HisP functions as a repressor, it should bind the promoter more tightly to repress transcription more strongly, which in turn generates less histidine synthesis enzyme. (I believe this is what your teacher wants you to answer.)



In the other case, biological systems sometimes use positive feedback regulation to amplify sensitivity to environmental signals or to generate bistability (phenotypic switching). In that case, the answer would be "less tightly".
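The negative-feedback case can be made concrete with a toy promoter-binding model (all numbers are invented for illustration and are not from the question): with simple saturation binding, a tighter-binding repressor occupies the promoter more of the time, so less enzyme, and hence less histidine, is made.

```python
# Toy model of negative-feedback repression (illustrative parameters only).
# A repressor with higher affinity (lower Kd) occupies the promoter more
# often, leaving it free less often, so less enzyme/histidine is produced.

def promoter_occupancy(repressor, kd):
    """Fraction of time the promoter is bound by repressor (simple binding)."""
    return repressor / (repressor + kd)

def histidine_output(repressor, kd, max_rate=100.0):
    """Synthesis rate proportional to the fraction of *free* promoter."""
    return max_rate * (1.0 - promoter_occupancy(repressor, kd))

tight = histidine_output(repressor=10.0, kd=1.0)    # high-affinity repressor
loose = histidine_output(repressor=10.0, kd=100.0)  # low-affinity repressor
print(tight, loose)  # tighter binding -> lower histidine synthesis
```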


human biology - Where does vasoconstriction occur in limbs?

Reduced blood flow to a region of the body occurs through 2 principal mechanisms.



1) The smooth muscle fibers in the tunica media layer of the arteries contract and reduce the diameter of the artery, limiting blood flow due to increased resistance (this is the mechanism in @agrimaldi's answer).



2) Circularly oriented smooth muscle fibers at the junction of a metarteriole (the distal end of an arteriole) and a capillary bed form a precapillary sphincter, which serves as a valve and prevents blood flow into the capillary bed.



So, to answer your question: it is both. The narrowing of the arteries occurs fairly continuously along the limb, but there are also "pinch points" where metarterioles join the capillary beds.



The endocrine and nervous systems are pretty integrated so in most cases it would be the result of the actions of both systems.

Thursday, 2 August 2012

synthetic biology - Designing genes with DNAWorks: Maximum nonzero score?

You might want to read Gibson's paper on the step-wise assembly of the mouse mitochondrial genome (1):






He started with 60-base oligos with 20-base overlaps, assembling 5 of those 60-mers into a backbone to obtain 384-base fragments. In the next step, he joined 5 of those 384-mers, obtaining ~1.2 kb constructs. You can do the same, but in the second step use 2× 384-mers to get your ~600 bp gene. Gibson didn't use DNAWorks to chop up the sequence; he just started from base 1, so that his fragments were F1[1:60], F2[41:100], F3[81:140], etc.
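The chopping scheme described above (start at base 1, 60-base fragments, a 40-base step giving 20-base overlaps) is easy to generate yourself; here is a minimal sketch with a made-up sequence:

```python
# Sketch of the fragment scheme above: chop a target sequence into 60-base
# oligos with 20-base overlaps, starting from base 1, i.e. F1[1:60],
# F2[41:100], F3[81:140], ...  The input sequence is a toy example.

def chop_oligos(seq, oligo_len=60, overlap=20):
    """Return overlapping oligos covering seq; step = oligo_len - overlap."""
    step = oligo_len - overlap
    oligos = []
    for start in range(0, len(seq), step):
        oligos.append(seq[start:start + oligo_len])
        if start + oligo_len >= len(seq):
            break  # last oligo reaches the end of the sequence
    return oligos

seq = "ACGT" * 50  # 200-base toy sequence
frags = chop_oligos(seq)
print(len(frags), len(frags[0]))  # number of oligos, length of the first
```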



I think that 15b homology is pretty low (Gibson used 20b for assembling the 60-mers in the first step, and 40b homology for the subsequent assemblies). A 60b oligo length is standard.



You can also try one step assembly with all of your oligos (via PCA) as you planned, but I think that the two step assembly might be more efficient.



Let me know if you need more help.



1. Gibson et al, 2009. Chemical synthesis of the mouse mitochondrial genome

Wednesday, 1 August 2012

bioinformatics - What exactly are computers used for in DNA sequencing?

Computers are used in several steps of sequencing, from the raw data to finished sequence (or not):





Modern sequencers usually use fluorescent labelling of DNA fragments in solution. The fluorescence encodes the different base types. To achieve high throughput, millions or billions of sequencing reactions are performed in parallel in microscopic quantities on a glass chip, and for each micro-reaction, the label needs to be recorded at each step in the reaction.



This means: the sequencer takes a digital photograph of the chip containing the sequencing reagent. This photo has differently coloured pixels which need to be told apart and assigned a specific colour value.



Digital image of a sequencing chip



As you can see, this (strongly magnified) image (fragment) is very fuzzy and most of the dots overlap. This makes it hard to determine which colour to assign to which pixel.





One such image is recorded at each step of the sequencing process, yielding one image per base of the fragments. For a fragment length of 75 bases, that'd be 75 images.



Once you have analysed the images, you get colour spectra for each pixel across the images. The spectra for each pixel correspond to one sequence fragment (“read”) and are considered separately. So for each fragment you get such a spectrum:



Base calling spectrum



Now you need to decide which base to assign for each position (“base calling”, top row). For most positions this is fairly easy but sometimes the signal overlaps (towards the beginning in the above image) or decays significantly (near the middle). This has to be considered when deciding the base calling quality (i.e. which confidence you assign to your decision for a given base).
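The base-calling step just described can be caricatured in a few lines: pick the strongest channel at each position and turn its relative dominance into a Phred-scaled quality score. This is a deliberately naive sketch with invented intensities; real base callers also model dye cross-talk, phasing, and signal decay:

```python
# Toy base caller: choose the channel (A/C/G/T) with the highest intensity
# and derive a crude Phred-style quality from how dominant that channel is.
import math

BASES = "ACGT"

def call_base(intensities):
    """intensities: dict base -> signal for one cycle of one cluster."""
    best = max(BASES, key=lambda b: intensities[b])
    total = sum(intensities.values())
    p_error = 1.0 - intensities[best] / total  # crude error estimate
    p_error = max(p_error, 1e-4)               # cap quality at Q40
    quality = int(round(-10 * math.log10(p_error)))  # Phred scale
    return best, quality

clean = {"A": 950, "C": 20, "G": 15, "T": 15}   # sharp, unambiguous signal
fuzzy = {"A": 400, "C": 350, "G": 150, "T": 100}  # overlapping signal
print(call_base(clean))  # confident call
print(call_base(fuzzy))  # same base, much lower confidence
```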



Doing this for each read yields billions of reads, each representing a short fragment of the original DNA that you sequenced.



Alas, this was the easy part. Most bioinformatics analysis starts here; that is, the machines emit files containing the short sequence fragments. Now we need to make a sequence from them.





The key point that allows retrieving the original sequence from these small fragments is the fact that these fragments are randomly distributed over the genome, and they are overlapping.



The next step depends on whether you have a similar, already sequenced genome at hand. Often, this is the case. For instance, there is a high-quality “reference sequence” of the human genome and since all the genomic sequences of all humans are ~99.9% identical (depending on how you count), you can simply look where your reads align to the reference.



Read mapping



This is done to search for single changes between the reference and your currently studied genome, for example to detect mutations that lead to diseases.



So all you have to do is to map the reads back to their original location in the reference genome (in blue) and look for differences (such as base pair differences, insertions, deletions, inversions …).



Mapped reads



Two points make this hard:



  1. You have billions (!) of reads, and the reference genome is often several gigabytes in size. Even with the fastest conceivable implementation of string search, this would take prohibitively long.


  2. The strings don’t match precisely. First of all, there are of course differences between the genomes – otherwise, you wouldn’t sequence the data at all, you’d already have it! Most of these differences are single base pair differences – SNPs (= single nucleotide polymorphisms) – but there are also larger variations that are much harder to deal with (and they are often ignored in this step).



    Furthermore, the sequencing machines aren’t perfect. A lot of things influence the quality, first and foremost the quality of the sample preparation, and minute differences in the chemistry. All this leads to errors in the reads.


In summary, you need to find the position of billions of small strings in a larger string which is several gigabytes in size. All this data doesn’t even fit into a normal computer’s memory. And you need to account for mismatches between the reads and the genome.
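To make the problem concrete, here is a deliberately brute-force sketch of mismatch-tolerant read mapping (sequences invented for illustration). Real mappers use indexed structures such as FM-indexes or hash-based seeds precisely because this linear scan is prohibitively slow at scale:

```python
# Naive approximate read mapping: slide the read along the reference and
# report the position with the fewest mismatches (up to a threshold).

def map_read(reference, read, max_mismatches=2):
    """Return (position, mismatches) of the best hit, or None."""
    best = None
    for pos in range(len(reference) - len(read) + 1):
        window = reference[pos:pos + len(read)]
        mm = sum(1 for a, b in zip(read, window) if a != b)
        if mm <= max_mismatches and (best is None or mm < best[1]):
            best = (pos, mm)
    return best

reference = "TTGACGTACGTTAGCCGTACGATCGATTGCA"
read = "GTACGATCAA"  # one base differs from the reference (a "sequencing error")
print(map_read(reference, read))  # (position, number of mismatches)
```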



Unfortunately, this still doesn’t yield the complete genome. The main reason is that some regions of the genome are highly repetitive and badly conserved, so that it’s impossible to map reads uniquely to such regions.



As a consequence, you instead end up with distinct, contiguous blocks ("contigs") of mapped reads. Each contig is a sequence fragment, like a read, but much larger (and hopefully with fewer errors).



Assembly



Sometimes you want to sequence a new organism, so you don't have a reference sequence to map to. Instead, you need to do a de novo assembly. An assembly can also be used to piece contigs from mapped reads together (but different algorithms are used).



Again we use the property of the reads that they overlap. If you find two fragments which look like this:



ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
ATCCCCAAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG


You can be quite certain that they overlap like this in the genome:



ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
                  ATCCCCATTCAACACGCTA-AGCTTGGCGGGGCATACGCACTG


(Notice again that this isn’t a perfect match.)



So now, instead of searching for all the reads in a reference sequence, you search for head-to-tail correspondences between reads in your collection of billions of reads.



If you compare mapping a read to searching for a needle in a haystack (an oft-used analogy), then assembling reads is akin to comparing all the straws in the haystack to one another and putting them in order of similarity.
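The head-to-tail overlap detection at the heart of assembly can be sketched as follows (exact suffix-prefix matching on toy reads modelled on the example above; real assemblers tolerate mismatches and use k-mer indexes instead of all-pairs comparison):

```python
# Find the longest suffix of read a that equals a prefix of read b, the core
# operation of overlap-based assembly (exact matching only, for illustration).

def overlap(a, b, min_len=10):
    """Length of the longest suffix of a matching a prefix of b (>= min_len)."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

r1 = "ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT"
r2 = "AAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG"
olen = overlap(r1, r2)
merged = r1 + r2[olen:]  # join the two reads at their overlap
print(olen, merged)
```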

Tuesday, 17 July 2012

human biology - Why are goosebumps so ineffective at keeping us warm?

When your brain, the hypothalamic temperature centers in particular, detects that the temperature is too warm or cold, it initiates a number of controls to try and correct this.



Goosebumps appear due to piloerection. This is one of the reactions that occur when the temperature is too low.



This causes hairs to stand on end as a result of contractions in muscles attached to hair follicles called arrector pili.



This particular reflex is not actually important in human beings. In animals, however, this mechanism traps a layer of air, providing insulation; this way, heat loss is greatly reduced.



The other mechanisms are very adept at maintaining temperature in the human body. These include sweating, dilation (vasodilation) and constriction (vasoconstriction) of skin blood vessels and increasing and decreasing the body's heat production.



For example, when it's too hot, dilation of the skin blood vessels can increase heat transfer by up to eight times.



Source: Guyton and Hall. Medical Physiology. 11th ed. Elsevier Saunders.

Saturday, 14 July 2012

human biology - Why is the microbial ecosystem of the gut so susceptible to disruption by pathogens?

There are two types of food poisoning:



Alimentary intoxication



This is the case when you consume food which is contaminated with some toxins, and those are responsible for development of the poisoning symptoms. The source organisms of these toxins might not be present anymore (killed by heating during cooking, for example). In this case there is no massive invasion of any foreign organisms into the gut.



Alimentary toxico-infection



This happens when you eat food contaminated with microorganisms, which then proliferate massively in your alimentary system, causing the symptoms of poisoning. The intake of bacteria can be massive: even 2-3 spoonfuls of contaminated food might contain millions of bacteria (e.g., Staphylococcus in contaminated dairy products). In this case, the balance of the gut microflora is dramatically changed by the introduction of a considerable amount of foreign microorganisms.



So, even if the poisoning seems "minor" (i.e., its symptoms are not dramatic), there may be large numbers of bacteria invading the gut.



The second important point is the increased emptying of the gut due to diarrhea, which washes out some of the "good" bacteria, especially in the case of profuse diarrhea. The newly arriving bacteria are not necessarily those present in the normal microflora, and it takes days or even weeks for the microflora to reach homeostasis again.



One last point: even without poisoning, the microflora varies, and the proportions of different bacterial fractions fluctuate over time. This is normal and depends on your eating habits, environment, immune status, and many other factors.

Thursday, 12 July 2012

human biology - What are tendons made of specifically

As you correctly say, tendons are made up of collagen fibers. Collagen is one of the most important proteins (or, to be more specific, family of proteins, as there are many types of collagen) forming connective tissue in the body.



Collagen molecules have a particular structure that allows them to form long fibers, composed of three strands that form a triple helix. This is a schema of a collagen helix (each ball represents one amino acid):



Collagen helix
(source: Wikipedia)



These helices can then be bound together to form a collagen fiber through the action of an enzyme called lysyl oxidase, which cross-links two lysine residues from two different helices (lysine is one of the amino acids that make up collagen).



Here is a scanning electron micrograph of a collagen fiber:



Collagen fiber, SEM
(source: Science photo library)



Collagen is secreted out of the cells that produce it, so although there may be cells around the collagen molecules, it is important to understand that collagen is part of what is called the extracellular matrix - the extracellular structure that supports the cells in our body.



As for the photo you linked, it is a hematoxylin and eosin (H&E) stain of a tendon. Hematoxylin colours cell nuclei dark blue, so the dark spots are definitely cells.
The pink "waves" are indeed collagen fibers; the cells are probably tenocytes, the specialized fibroblasts of the tendon, which produce the collagen.

Friday, 6 July 2012

nomenclature - Genetic networks vs genetic architectures?

What is the difference between the terms genetic network and genetic architecture? I've heard both in a variety of contexts used by different people, so I am interested in what people think they mean, other than what is described in Wikipedia:



Genetic architecture refers to the underlying genetic basis of a phenotypic trait



Genetic regulatory network (GRN) is a collection of DNA segments in a cell which interact with each other indirectly and with other substances in the cell, thereby governing the rates at which genes in the network are transcribed into mRNA.



EDIT: so what I take from the answers so far is that a genetic network is the molecular wiring of all the interacting loci, whereas genetic architecture describes the phenotypic consequence(s) one would be able to see from that network. Then, trying to bring the two definitions together, if we would assume we knew all the molecular details of a genetic network, we would only need to add the other factors in the model, such as environmental perturbations, to end up with the description of the genetic architecture, right?

Tuesday, 3 July 2012

biochemistry - What implications has the missing 2'-OH on the capability of DNA to form 3D structures?

To make sure I'm not comparing apples and pears, my (attempt to) answer the question will be broken into two parts: comparison of single-stranded nucleic acids and double stranded ones.



Single stranded DNA and RNA



Both DNA and RNA can form complex single-stranded tertiary structures in which the secondary structure elements associate through van der Waals contacts and hydrogen bonds. The presence of the 2'-hydroxyl group makes the ribose ring prefer different conformations than the deoxyribose in DNA. Also, since the 2'-OH moiety is both a hydrogen-bond donor and acceptor, it gives RNA greater flexibility to form complex 3D structures and greater stability to remain in one of these conformations. As Aleadam notes, this paper shows that tRNA and its DNA analog form similar tertiary structures, though the tDNA is not as stable as the tRNA:




Therefore, we submit that the global conformation of nucleic acids is primarily dictated by the interaction of purine and pyrimidine bases with atoms and functional groups common to both RNA and DNA. In this view the 2-hydroxyl group, in tRNA at least, is an auxiliary structural feature whose role is limited to fostering local interactions, which increase the stability of a given conformation.




These authors also show that at least one loop in the tDNA analogue is more susceptible to cleavage by a restriction endonuclease. In this region the tRNA has a water molecule hydrogen-bonded to the 2′-hydroxyl group.



I was not able to find more of such interesting comparisons in the literature.



Double stranded DNA and RNA



Both DNA and RNA can form double-stranded structures. Again, the sugar conformation determines the shape of the helix: a DNA helix is usually B-form, whereas helical RNA adopts the A-geometry under nearly all conditions. In an RNA helix we find the ribose predominantly in the C3′-endo conformation, as the 2′-OH sterically disfavors the C2′-endo conformation necessary for B-form geometry.



Physiological significance



dsRNA and ssDNA often provide a signal to the cell that something is wrong. dsRNA is of course seen in normal processes like RNA interference, but it can also stop protein synthesis and signal viral infection (cf. double-stranded RNA viruses). Similarly, ssDNA is much more prone to degradation than dsDNA; it often signals DNA damage or infection by single-stranded DNA viruses, and can induce cell death. Therefore, due to these functions, under normal conditions the 3D structure of DNA is mostly a double-stranded helix, whereas RNA has a single-stranded, "protein-like", complex 3D structure.

Tuesday, 26 June 2012

dna - What conditions should I use for Gel Red staining?

I don't think you should need to vary the concentration of the GelRed. Mine came with instructions giving the exact concentration and how to dilute it. Optionally, salt can be added, which I did, and it has worked for me.



I have only done this with post-staining: 1-2.5% agarose with TAE, using a 10,000x stock solution from Phenix Research. The GelRed stock and working solutions need to be kept in the dark. I am assuming you have a transilluminator and (optional) filters to visualize with. GelRed has similar excitation and emission wavelengths to EtBr, so a standard transilluminator and filter should work without any changes. It will not be visible to the naked eye.



That being said, there are a ton of things that can go wrong in casting the gel, electrophoresis, staining, and visualization. If bands are visible but just blurry, I would suspect a problem with either the gel casting or the electrophoresis. Make sure the agarose is completely clear and has cooled down a little before pouring. Otherwise, voltage is one of the main things I have noticed that can have significant effects on how sharp or blurry bands appear. You might try different gel percentages and different voltages; there are formulas that can help you estimate the best value for each.
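As a rough starting point for choosing a gel percentage, here is a small Python sketch. The size ranges are typical figures from common cloning protocols, not from the GelRed instructions, so treat them as ballpark values:

```python
# Typical agarose percentages and the double-stranded DNA size range (kb)
# each resolves well. Rough figures from common protocols; your mileage
# may vary with buffer, voltage and comb size.
AGAROSE_RESOLUTION_KB = {
    0.5: (1.0, 30.0),
    0.7: (0.8, 12.0),
    1.0: (0.5, 10.0),
    1.5: (0.2, 3.0),
    2.0: (0.05, 2.0),
}

def suggest_percentage(fragment_kb):
    """Pick the densest gel whose quoted range covers the fragment."""
    candidates = [pct for pct, (lo, hi) in AGAROSE_RESOLUTION_KB.items()
                  if lo <= fragment_kb <= hi]
    return max(candidates) if candidates else None

print(suggest_percentage(0.5))  # 2.0: small fragments sharpen on denser gels
print(suggest_percentage(8.0))  # 1.0
```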

evolution - Height and natural selection in humans?

@kate has what is probably the more correct answer for the observed pattern.



But as an experiment, I set up a basic simulation to approximate the conditions that you lay out:



  1. Starting mean heights of 5'8" (172.72 cm) and 5'4" (162.56 cm) with standard deviations of 2.8" (7.112 cm). I used cm, because it's easier than dealing with inches.

  2. Males will not mate with females that are taller than themselves.

  3. Females will not mate with males more than 8" taller.

  4. Males will not mate with females more than 8" shorter (follows from #3 above).

The problem that I quickly ran into was that, by truncating part of the normal distribution, the variance in height at each generation gradually decreased. After about 20 generations, the means weren't evolving because there was so little variation in height.



Human height is one of the most studied quantitative traits, going back over 100 years to some of the very first statisticians (Fisher, Galton). Height is a polygenic trait with very high heritability (h² ≈ 0.8) [1]. Genome-wide association studies have reported 54 genes involved in the determination of human height [2].



Imagine that each of these 54 genes has just two alleles, a and b: a gives +1 to height, b gives -1. So aa would be +2, ab or ba 0, and bb -2. The sum of all those alleles correlates with height, so if all 54 loci were aa, the genetic value would be +108.



The problem comes in when people only mate with taller people. Over time, the proportion of b's will decrease, and the proportion of a's will increase, but only to a point. Once all the alleles are fixed at a, there won't be any room left. The genetic variation will be exhausted. Without the input of new alleles, height will cease to evolve.
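The additive model above can be sketched in a few lines of Python (the genotype encoding is purely illustrative):

```python
import random

N_LOCI = 54  # loci reported by the GWAS cited below

def genetic_value(genotype):
    """Sum allele effects: 'a' contributes +1, 'b' contributes -1."""
    return sum(1 if allele == 'a' else -1
               for pair in genotype for allele in pair)

def random_genotype(p_a):
    """One diploid individual with allele 'a' at frequency p_a."""
    draw = lambda: 'a' if random.random() < p_a else 'b'
    return [(draw(), draw()) for _ in range(N_LOCI)]

# The ceiling described above: all 54 loci homozygous for 'a'
tallest = [('a', 'a')] * N_LOCI
print(genetic_value(tallest))  # 108

# Once 'a' is fixed (p_a = 1), every individual has the same genetic value:
# the variance is exhausted and height stops responding to selection.
values = [genetic_value(random_genotype(1.0)) for _ in range(100)]
print(min(values), max(values))  # 108 108
```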



[1] Lettre, G. 2011. Recent progress in the study of the genetics of height. Hum Genet 129:465-472.



[2] Visscher, PM. 2008. Sizing up human height variation. Nat Genet 40(5):489-490.

Saturday, 16 June 2012

physiology - Dimensionless number for blood volume

Blood volume is not a dimensionless number: it's a volume. Historically we measured it in patients or volunteers by giving a large carbohydrate molecule, such as a starch, that is neither digestible nor harmful to the body. As with every other body-fluid compartment volume (plasma, interstitial, intracellular and extracellular), blood volume is estimated by intravenously injecting a known amount (a known volume at a known concentration) of a particular compound. Once that compound equilibrates, you take a blood sample and measure the compound's concentration again.



Initial Concentration * Initial Volume = Final Concentration * Final Volume



When you inject a known volume of a known concentration of a compound that only fills the "blood" compartment, and then measure the final concentration, you can solve for the "final volume", which is the blood volume.
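As a worked example of that dilution principle (with made-up, non-clinical numbers):

```python
def blood_volume_ml(injected_conc, injected_vol_ml, final_conc):
    """Solve C1 * V1 = C2 * V2 for V2, the volume the tracer spread into.

    Concentrations can be in any unit as long as both use the same one.
    """
    return injected_conc * injected_vol_ml / final_conc

# Inject 10 ml of a 50 mg/ml tracer; after equilibration a sample
# reads 0.1 mg/ml, so the tracer spread into 5000 ml, i.e. ~5 litres.
print(blood_volume_ml(50, 10, 0.1))  # 5000.0
```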

Thursday, 7 June 2012

immunology - What cells would have the CD3 marker on them (other than T-cells)

As you know, CD3 is




a protein complex and is composed of four distinct chains. In mammals, the complex contains a CD3γ chain, a CD3δ chain, and two CD3ε chains.




Natural Killer (NK) cells have been shown to express CD3 epsilon proteins, but not CD3 delta or gamma.



Also, on a side note, there are several other non-peripheral mononucleated cells besides Purkinje cells that can express CD3, but mostly in pathological circumstances. For instance, Warthin-Finkeldey cells, also known as polykaryocytes, display CD3, although the origin of these cells is not certain (they might be multinucleated T-cells).



(PS: If you want more information on CD3, I found this site very interesting.)

Sunday, 3 June 2012

neuroscience - Is the minicolumn the unit of the neocortex?

There are many arguments about what the unit of the neocortex is. "Columns" seem to be the standard, but what exactly those are is extremely contradictory between individuals, cortical regions, and species. Oftentimes, when a column is referred to, it's actually a functional column without any anatomical borders (such as Hubel and Wiesel's ocular dominance columns). Sometimes the "column" is semi-anatomical, as in the rat barrel cortex. Other times, these functional columns are conflated with anatomical units, without any evidence for a border.



So my question is, could Mountcastle's "minicolumns" be the actual anatomical unit of the cortex? I've heard arguments that they are mere developmental relics. But they seem like the only reliable and consistent unit in the cortex.

physiology - Why is the human body able to repair a broken bone and not a heart muscle?

The heart does have stem cells in it, and there is cell turnover in the heart of about 1% per year, which is much slower than in your skin, but not nothing. This allows your heart to grow during your life, and to remodel itself slightly to become stronger and more efficient when you get in shape.



The heart can repair itself; when damaged, it doesn't simply stay damaged. Unfortunately, the 'repair' leaves particularly useless scar tissue. After a heart attack the dead muscle does get repaired, but very poorly: the 'scar' barely contracts, and isn't as strong as the heart wall around it.



This is mostly a function of the very specialized heart myocytes, and of the (relative) evolutionary uselessness of being able to regenerate your heart after injury: in the wild, if your heart was injured, you were probably dead.

Saturday, 2 June 2012

botany - Serological assays not detecting native proteins

Is there anyone out there who has done much work with serological assays? We have antiserum for a manufactured viral protein, but no luck so far getting it to detect native protein (unless today's attempt worked, which we will find out tomorrow).



What common problems or possible considerations should we bear in mind when testing?

Monday, 28 May 2012

proteins - Two subunits connected by only one disulfide bridge: quaternary structure?

I've always simply assumed quaternary structure to be characterized by non-covalent interactions such as hydrogen bonding, van der Waals interactions and whatnot. However, if two distinct polypeptides were only connected by one covalent disulfide bridge, would this be considered as quaternary structure, assuming that non-covalent interactions between the subunits are either negligible or even repulsive?



In other words, can a disulfide bridge, on its own, convey quaternary structure?



On a side note, are there any notable examples of this type of interaction?

Saturday, 26 May 2012

Last-ditch efforts to maintain thermal homeostasis

I was in the gym's steam-room today and a thought occurred to me: have I truly thwarted all possible mechanisms for maintaining thermal homeostasis?



There's sweating, which is thwarted because the steam-room's atmosphere is as close to 100% humidity as possible, so there's almost no evaporative cooling.



There's convection, which is thwarted because the ambient temperature is above normal body temperature.



And I can't get rid of heat by exhaling, because every lungful of air I inhale is already above normal body temperature.



I think that, eventually, I would go into hyperthermia, but besides the above, are there any other last-ditch attempts to lower core temperature that my body could make?

Thursday, 24 May 2012

How can I normalize mRNA samples for sequencing?

Is there an easy, inexpensive, not too labor-intensive way to normalise mRNA samples so that, even though one loses information about gene expression levels, each of the transcripts in the transcriptome is equally represented in the sample for sequencing? That is to say, one has a uniform distribution of transcripts, so that the lowly-expressed transcripts still have a good chance of being sequenced.



I have seen a few papers mentioning protocols for mRNA normalization, but I can't tell if any of them are practical in real life wet lab situations.

cell biology - What's the distinction between a tetrad and a synaptonemal complex in meiosis?

As far as I can tell there is a distinction.



A tetrad refers to the entire group of four chromatids after they have come together for crossing over in Prophase I (synapsis).



A synaptonemal complex, as you would expect, is formed during synapsis. This is a protein complex that connects the intervening regions of paired chromosomes in some circumstances; it is not required, and mutant yeast that cannot form this complex have still been shown to be able to exchange genetic information.



In other words, you can have a tetrad without a synaptonemal complex, but not vice versa.

history - Why does the Hertzsprung–Russell diagram's x-axis go from large temperatures to lower?

The original Hertzsprung-Russell diagrams constructed by Henry Russell and Ejnar Hertzsprung consisted of absolute magnitude on the y-axis and spectral type, or an indicator of spectral type, on the x-axis. Below you can see an original HR diagram produced by Russell in 1913.



Original HR diagram



When the diagrams were constructed, it was not at all clear what the sequence of spectral types or spectral type indicators actually meant. It turned out of course that the sequence (in modern day parlance O,B,A,F,G,K,M) actually corresponds to decreasing temperature.



Astronomers have simply stuck with this convention to the present day; there is no particular reason for it. Most HR diagrams are now plotted with temperature (decreasing) along the x-axis, although that is not what the original HR diagram used.
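To make the convention concrete, here is a quick Python check that the modern OBAFGKM sequence is monotonic in temperature; the numbers are rough textbook lower bounds for each class's effective temperature:

```python
# Approximate lower bound of each spectral class's effective temperature (K),
# listed in the traditional OBAFGKM order (rough textbook values).
SPECTRAL_TEMPS = {
    "O": 30000, "B": 10000, "A": 7500, "F": 6000,
    "G": 5200, "K": 3700, "M": 2400,
}

sequence = list(SPECTRAL_TEMPS)           # ['O', 'B', ..., 'M']
temps = list(SPECTRAL_TEMPS.values())
# The sequence corresponds to strictly decreasing temperature, which is
# why the HR diagram's x-axis runs hot-to-cold.
print(all(hot > cool for hot, cool in zip(temps, temps[1:])))  # True
```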

Thursday, 17 May 2012

saturn - How long do planetary rings last?

I'm surprised that this question hasn't been asked before (here or on Physics), to the best of my knowledge. It's one that I might have asked when I was a bit younger, and one that I think other people will ask.



Anyway, it's clear that Saturn's rings won't form a moon, and the same is likely to be true for other ring systems. However, I'm guessing that they won't last forever (it's just a guess).



How long do planetary rings in general last? What mechanisms could cause them to dissipate/fall apart/end? I'm guessing the Poynting-Robertson effect could come into play, but I'm not sure.



And for anyone curious, yes, I checked just for the fun of it, and Yahoo Answers had a bunch of really, really bad, unsourced and most likely inaccurate answers (given that there was no consensus), ranging from '3 million years' to '13-18 billion years' to 'forever'.

evolution - How does Artificial Selection work?

You seem to have a few different concepts in there...




But mutations are always completely random and human beings have no control over it.




Aside from the fact that mutations are not completely random (not always, at least), it is not true that humans have no control over them. Imagine you grow plants, and you start to cross those plants that have a larger stalk, allowing them to be more resistant to wind: you are effectively selecting a certain mutation. Surely, you cannot choose which specific sequence you want to mutate, but you are still enriching your population of plants for a specific mutation.
Nowadays of course you can specifically mutate the genome in the lab, but that is a different matter.




Would it have been possible to domesticate dogs from wolves, if there would have been no mutations in wolves to begin with?




Note that artificial selection has been used to generate the different breeds of dogs, by breeding animals with specific characters.
However, domestication is a different matter. Wolves and dogs are the same species (Canis lupus) and domestication is not strictly dependent on selective breeding. You can domesticate an adult animal without inducing mutations in its DNA. Probably epigenetics plays an important role there, and epigenetic patterns could possibly be transmitted to the offspring.



Of course, you will then use selective breeding to expand the domesticated population, and to select those traits you are interested in. With artificial selection you are just selecting the mutations you want; the way you generate them is irrelevant to the matter. For instance, you can irradiate plants to enhance the mutation rate, and consequently the appearance of new favourable traits.

Friday, 11 May 2012

dna - Synthetic biology using existing cells

There is this guy, Martin Hanczyc, working on protocells to better understand how the beginning of life occurred. He makes synthetic protocells. They don't have any DNA in them, but they are pretty cool, and perhaps the beginnings of making synthetic cells. Perhaps once science has figured out how cells began, and what their very minimal needs are, completely synthetic cells can be created.



http://www.ted.com/talks/martin_hanczyc_the_line_between_life_and_not_life.html



Also, just thinking, what would we consider completely synthetic cells? If we took synthetic protocells and they eventually evolved into a cell with DNA would that still be synthetic?

Thursday, 10 May 2012

botany - Do any plants exhibit hormonal changes similar to puberty?

In flowering plants (the angiosperms) there are several developmental transitions in the life of the plant. I won't list the plants, because the list includes pretty much all of them (although the magnitude in the change of developmental pace differs widely between taxa and environments).



First there is seed germination, which is controlled hormonally. Absence of germination is usually imposed by abscisic acid, whilst germination is caused at the appropriate time by gibberellic acid and ethylene (among other things; Holdsworth, Bentsink & Soppe, 2008).



Next, in many herbaceous species there is a transition between a spreading growth stage (e.g. rosette growth) and the flowering stage. The 'growth spurt' here is the differentiation and elongation of the flowering stem, and then subsequently the sudden flowering of buds. The transition is also controlled hormonally, by a variety of hormones including auxin (Zhao, 2010), gibberellic acid, ethylene (Schaller, 2012), and the long anticipated, recently confirmed florigen (Choi, 2012). Ethylene and abscisic acid then play important roles in the next developmental transition when seeds and fruits are produced and dehisced.



Small RNAs are also now being revealed to play a large role in controlling the timing of development, acting upstream of the hormonal changes. In particular, some key miRNAs are involved in auxin-based regulation of branching and in embryogenesis (Nodine & Bartel, 2010), and RNA silencing is involved in the switch from rosette growth to flowering growth (reviewed in Poethig, 2009 and Baurle & Dean, 2006).




Monday, 7 May 2012

spectra - Are hot stars like O-type stars entirely composed of helium?

The lines that appear in a star's spectrum mainly reflect its temperature, not its composition; see here.



O-type stars start out with the same sort of composition as other stars; that is, they are mainly H and He (approximately 75% and 25% by mass) with traces of heavier elements.

Saturday, 5 May 2012

telescope - Nebula and galaxies using 70mm scope

Whether you'll be able to see them depends on the levels of light pollution in your area. As TildalWave mentioned, a number of nebulae and galaxies are perfectly observable with the naked eye, so unless you live somewhere very bright, you should be fine. Under really dark skies, objects like M31 are very easy to find with the naked eye. Where I live, I can hardly see it even with bright binoculars, due to the very poor sky conditions induced by a large number of shopping centres fond of pointing powerful searchlights at the sky for whatever reason.



As to how much you'll be able to see, you can get a general idea of what should be visible under ideal conditions by calculating your telescope's limiting magnitude.



In reality, you'll also have to factor in the quality of the optics (mainly transmission) of both the telescope and the eyepieces. Of the two eyepieces you mentioned in your question, you should use the 25 mm one: it will give you a lower magnification and a bigger field of view (better for most bright nebulae and galaxies, due to their often significant angular size).
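To make those numbers concrete, here is a small Python sketch; the 700 mm focal length is an assumed value for a typical 70 mm refractor (the question doesn't state it), and the limiting-magnitude formula is one common rule of thumb rather than an exact law:

```python
import math

APERTURE_MM = 70
FOCAL_LENGTH_MM = 700  # assumed; check the tube or manual for your scope

def magnification(eyepiece_focal_mm):
    """Magnification = telescope focal length / eyepiece focal length."""
    return FOCAL_LENGTH_MM / eyepiece_focal_mm

def limiting_magnitude(aperture_mm):
    """A common approximation: m_lim = 2.7 + 5 * log10(aperture in mm)."""
    return 2.7 + 5 * math.log10(aperture_mm)

print(magnification(25))   # 28.0: lower power, wider field, better for DSOs
print(magnification(10))   # 70.0
print(round(limiting_magnitude(APERTURE_MM), 1))  # ~11.9 under ideal skies
```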



Here's a highly configurable calculator that you can use.



Since you live in the northern hemisphere, objects from the Messier catalogue are great candidates to begin your observations with.

Tuesday, 24 April 2012

observation - Why are distant objects observed in the near infrared?

I was reading an article that explains why JWST is a successor to Hubble and not a replacement for it. They explained that Hubble's science pushed astronomers to look at longer wavelengths, and then they said:




In particular, more distant objects are more highly redshifted, and their light is pushed from the UV and optical into the near-infrared.




So basically, to observe the first galaxies, astronomers have to observe in the infrared. My question is: why do distant objects require observations in the infrared?



Is it because they are at a very large distance from us, so the light has lost a lot of energy on its way and is only detectable in the infrared?
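To put numbers on the quoted statement: the expansion of the universe stretches every wavelength by a factor of (1 + z), so the rest-frame ultraviolet light of a very early galaxy arrives in the near-infrared. A quick Python check using hydrogen's Lyman-alpha line:

```python
LYMAN_ALPHA_NM = 121.6  # rest-frame wavelength of hydrogen's Lyman-alpha line

def observed_wavelength_nm(rest_nm, z):
    """Cosmological redshift stretches wavelengths by (1 + z)."""
    return rest_nm * (1 + z)

# A galaxy at redshift z = 7: its far-UV emission arrives at ~973 nm,
# in the near-infrared, outside the optical band Hubble was built for.
print(observed_wavelength_nm(LYMAN_ALPHA_NM, 7))  # 972.8
```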

Monday, 23 April 2012

the sun - Are we still going to have rainbows if Sun is replaced by another star?

Rainbows would lack most of the blue, and some of the green, for red stars.
For a blue star, the blue part of the rainbow would be more intense.



For more complex colors, the rainbow may show some gaps. A rainbow is essentially a spectrum of the portion of the star's light which is visible to our eyes, and to which the atmosphere is transparent.



Stars vary in brightness. A blue giant would be large and glaring, a red dwarf faint.



Colors in the rainbow would be blurred, and hence closer to white, for large stars, and sharper and more distinct for small stars, according to the angular size of the respective light source.



... This all assumes that there is still rain. With small, red stars, it would get too cold for rain; with large blue stars, Earth would heat up too much.
To adjust for these effects, the distance to the star would need to be modified.
And of course, the length of the year and the orbital velocity may change.
This could then cause different tides, changes in volcanism, etc.



Exchanging the star could cause various other effects, too: different polar lights, effects on the ionosphere, the ozone layer, atmospheric erosion, and more...

Sunday, 22 April 2012

big bang theory - The Fermi paradox

I think this is too broad, but I'll offer the following:



The star Kepler 444 is orbited by several small exoplanets, assumed to be rocky. Kepler 444 is estimated to be a very old star, perhaps 11 billion years old, with a metal content of about one third that of the Sun.



Whilst the planets around Kepler 444 are small, they are too hot to be "Earth-like"; however, there appears to be no reason why planets at larger orbital radii (which are much harder to detect by the transit technique) should not be there.



Thus the answer appears to be, demonstrably, at least 11 billion years ago.



However, the limit cannot be much longer than this, since a certain time must elapse between the formation of the first stars in the Galaxy and the enrichment of the interstellar medium with metals. These metals (all elements heavier than He are referred to as such) are required to build a "rocky" planet. While Kepler 444 demonstrates that you don't need solar metallicity, you still need some.



The place where this enrichment generally happened fastest was the Galactic bulge, where a burst of star formation probably enriched the ISM in much less than a billion years.



Thus, in principle, I would say less than a billion years after the formation of the Galaxy, which is probably not much more than 11 billion years ago.



Kepler 444



http://en.m.wikipedia.org/wiki/Kepler-444



http://adsabs.harvard.edu/abs/2015ApJ...799..170C



The second part of your question is hard. It has taken 4.5 billion years for "life like ours" to evolve here. Since we don't fully understand the factors that led to this, the only realistic answer is that it probably takes about another 4.5 billion years after the formation of planets for life "exactly like ours" to emerge.

evolution - What are samples of "Outlaw Genes"

I read this in a paper




Keller and Ross describe their greenbeard gene as an ‘outlaw’.
Admittedly, the comment is only made in passing, but are they correct?
In this context an outlaw is usually defined as a gene whose action
favours itself, but opposes the reproductive interests of the
individual organism.
Where there are outlaws, natural selection at
different loci is pulling the organism in different directions.
Theoretically speaking, green-beard genes...




Yes: http://www.sciencedirect.com/science/article/pii/S0140175083901562



Basically, loving your children is 'outlaw' because it makes you sacrifice resources for them. However, if your reproductive interest is defined as maximizing the number of your children, then that is just your genes working properly. Outlaw genes would be something that makes you die for a cause, or something like that, I suppose.

Friday, 20 April 2012

Space time and aging - Astronomy

Einstein's general theory of relativity explains the time dilation caused by massive objects: the deeper you sit in a gravitational well, the slower your clock runs relative to a distant observer. That means that "standing" on Jupiter, which isn't actually possible due to the lack of a surface, would make time elsewhere pass faster relative to you; but you would not age slower. Suppose your lifespan is 80 years. On Jupiter you would still live 80 years by your own clock, but the time that had passed on Earth during your 80 years would be slightly more than that. The reverse holds for the Moon.
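To show how small the effect is, here is a back-of-the-envelope Python sketch using the weak-field time-dilation factor and published values for Jupiter's gravitational parameter and equatorial radius (it ignores Earth's own, smaller, gravitational dilation and both bodies' rotation):

```python
import math

GM_JUPITER = 1.26687e17   # Jupiter's GM, m^3/s^2
R_JUPITER = 7.1492e7      # Jupiter's equatorial radius, m
C = 2.99792458e8          # speed of light, m/s

# Clock-rate factor relative to a far-away observer: sqrt(1 - 2GM/(r c^2))
factor = math.sqrt(1 - 2 * GM_JUPITER / (R_JUPITER * C**2))

seconds_in_80_years = 80 * 365.25 * 24 * 3600
lag_s = seconds_in_80_years * (1 - factor)
print(f"over an 80-year life, a Jupiter clock falls ~{lag_s:.0f} s behind")
```

So over a whole lifetime the difference amounts to well under a minute; real, but nothing you would ever notice.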

Thursday, 19 April 2012

atmosphere - Why argon instead of another noble gas?

Doing a bit of reading up on this, I might have an answer, though credit where credit is due, the answer isn't really mine:



https://www.reddit.com/r/askscience/comments/3wsy99/why_is_neon_so_rare_on_earth/



When the planets coalesced, it's likely that there was very little ice or gas around the inner planets as they formed, so there was little local material for the Earth's atmosphere and water (CH4, NH3, CO2 and H2O being the 4 most common ices outside the frost line). These likely came from asteroids and meteors that formed outside the frost line and later crashed onto Earth.



Neon is the 5th most common element in the Milky Way, but because all noble gases have very low freezing points, it's likely not very common even on comets or meteors, for the same reason that water or CO2 aren't common inside the frost line: Neon and the other noble gases likely stay free and don't collect on comets or meteors in large amounts. (I looked, but couldn't find an article to verify that.)



But if comets have low noble gas content, then we have to look for an alternate source. With that in mind, and going back to the first link, Argon is produced by radioactive decay of Potassium 40, and that would explain its relative abundance compared to the cosmically more common noble gas, Neon. Helium (alpha particles) is also produced inside the Earth, and Radon is too in small amounts, but Radon itself also decays; that's not related to your question though.



If Argon on planets comes primarily from Potassium 40, you should expect the amount of Argon to roughly track the amount of potassium on a planet, and not to be a fixed percentage of the atmosphere. A 2nd factor is how much gets blown off the planet over long periods of time. Venus in general should be able to retain much of its Argon, based on an atomic weight (40) similar to CO2's (44), but if it loses even a tiny percentage of its Argon over time, that would be a factor too.



Now, to see if this is possible, I should run some numbers, but I warn you, my math can be a little rusty.



Potassium is the 7th most common element in the Earth's lithosphere, at roughly 2% by mass, and about 0.0117% of that Potassium is Potassium 40. Using a very rough estimate of 2.3 x 10^19 tonnes for the Earth's crust, 2.3*10^19 * 2*10^-2 * 1.17*10^-4 = about 5.4*10^13, or 54 trillion tons of Potassium 40 currently in the Earth's crust. (There's probably a fair bit more in the mantle, so these numbers are rough.)



With a half-life of about 1.248 billion years, there has been time for roughly 3 half-lives if we start after the Late Heavy Bombardment, which means a bit over 7/8ths of the original Potassium 40 in the Earth's crust has decayed: the original stock was about 8 times the current 54 trillion tons, of which roughly 7/8ths, or about 380 trillion tons, has decayed. Only about 11% of Potassium 40 decays into Argon 40 (the other 89% undergoes beta decay into Calcium 40), which leaves roughly 40 trillion tons of Argon 40 formed on Earth by Potassium decay. (I'm ignoring any that might have been produced prior to the Late Heavy Bombardment, because that could have blown some of the atmosphere off the Earth or heated the atmosphere enough for the Sun to strip some of it.)



The mass of the atmosphere is about 5,140 trillion tons, and 1.288% of that (by mass, not volume) = about 66 trillion tons, so the Argon we should expect from Potassium 40 decay and the amount of Argon in the atmosphere are pretty close. Some Argon might have escaped, and some should still be trapped inside the Earth, but the numbers are close enough to work, and I think that's very likely the answer. It also suggests that the Earth has lost relatively little Argon to space, which also fits with the Atmospheric Escape article.
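To keep the arithmetic honest, here is the estimate redone in Python with the ~11% branching fraction folded in, taking potassium at roughly 2% of the crust by mass (the value consistent with it being the 7th most abundant crustal element); all other inputs are the rough figures above:

```python
# Rough inputs, all ballpark figures from the estimate above
CRUST_MASS_T = 2.3e19     # tonnes, very rough
K_FRACTION = 0.02         # potassium ~2% of the crust by mass
K40_FRACTION = 1.17e-4    # K-40 share of natural potassium
HALF_LIVES = 3            # ~3 half-lives since the Late Heavy Bombardment
AR40_BRANCH = 0.11        # ~11% of K-40 decays yield Ar-40 (rest -> Ca-40)

k40_now = CRUST_MASS_T * K_FRACTION * K40_FRACTION
k40_original = k40_now * 2**HALF_LIVES          # 8x the current stock
ar40_produced = k40_original * (1 - 2**-HALF_LIVES) * AR40_BRANCH

ATMOSPHERE_T = 5.14e15
ar_in_air = ATMOSPHERE_T * 0.01288              # argon ~1.288% of air by mass

print(f"Ar-40 from crustal K-40 decay: ~{ar40_produced / 1e12:.0f} trillion t")
print(f"Ar in the atmosphere:          ~{ar_in_air / 1e12:.0f} trillion t")
```

Roughly 40 trillion tons produced against roughly 66 trillion tons in the air: the same order of magnitude, with the shortfall plausibly covered by the mantle's potassium.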



A 2nd way to look at this is that Argon 40 makes up 99.6% of the Argon in the atmosphere, and stellar nucleosynthesis likely wouldn't account for a ratio anywhere close to that (not a typical stellar link, but Wikipedia says Argon 36 is the most common isotope produced in stars). The decay of Potassium 40 does explain the 99.6% Argon 40 ratio.



If we apply a similar estimate to Venus, with Venus' atmosphere about 94 times the mass of Earth's, and we assume a similar amount of Argon 40 being produced in Venus' crust, we could roughly expect 1.28%/60, or about 0.02%, Argon by mass in Venus's atmosphere; or perhaps, if the Earth lost a fairly high share of its lighter crust elements after the giant impact, we might expect a bit more than that on Venus, perhaps 0.03% or 0.04% as a rough estimate. Using your number of 0.007%, that's lower than I calculate it should be, but Venus could have lost a higher share of its Argon than Earth, and it also might be slower than Earth to release gas trapped in its crust, because it doesn't have plate tectonics. So the number for Venus looks "about right" too. It's the Potassium 40 in the crust. I'm convinced.



Interesting question. I learned something researching it.