Tuesday, 30 June 2009

safety - Neutralizing TCA washes

A neutralization reaction occurs between a Brønsted acid and a Brønsted base to form a salt.



In this case, you have to add a Brønsted base to neutralize your acid. If the final pH matters, you should work out the exact quantity of base to add. Otherwise, just use litmus paper to check the pH of your solution and add base until it is neutral.
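If you go the "exact quantity" route, the arithmetic is plain stoichiometry: trichloroacetic acid is monoprotic, so one mole of a monoacidic base such as NaHCO3 neutralizes one mole of acid. A minimal sketch (molar masses are standard values; the helper function and the 10% excess default are my own illustration, not part of any protocol):

```python
# Stoichiometry sketch: grams of NaHCO3 needed to neutralize a TCA wash.
# TCA (CCl3COOH) is monoprotic, so the reaction is 1:1:
#   CCl3COOH + NaHCO3 -> CCl3COONa + H2O + CO2
M_TCA = 163.39    # g/mol, trichloroacetic acid
M_NAHCO3 = 84.01  # g/mol, sodium bicarbonate

def bicarb_needed(tca_grams, excess=1.1):
    """Grams of NaHCO3 to neutralize a given mass of TCA.

    A small excess (10% by default) is harmless here, since
    bicarbonate is itself only a weak base.
    """
    moles_tca = tca_grams / M_TCA
    return moles_tca * M_NAHCO3 * excess

# e.g. a wash containing 50 g of TCA:
print(round(bicarb_needed(50), 1))  # ~28.3 g
```

In practice you would still confirm with litmus paper, since the actual TCA content of a wash is rarely known precisely.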



EDIT



In view of the other answer proposed, I want to stress the following:



Do NOT dispose of your solution in the sink/sewer. Leave it in its plastic bin until a lab technician takes it to the furnace.



If the base you use is sodium bicarbonate (NaHCO3), there will be bubbling, caused by the decomposition of carbonic acid (H2CO3) into water (H2O) and carbon dioxide (CO2). You've successfully neutralized the TCA once there's no bubbling upon addition of NaHCO3.



It is indeed best to proceed slowly while adding your base.

Sunday, 28 June 2009

molecular biology - What is a good miniprep protocol for the class room?

I think the spin column kits are the way to go. As mentioned already, the benefits of the kits are that they are easy, safe, and (most importantly) the way almost every actual lab does plasmid purification these days.



The biggest criticism of the commercial kits is that you can get by easily without knowing anything about what is actually happening in the tube. However, as noted in the comments to Nick T's answer, there is no secret proprietary technology in the kits--they use the alkaline lysis method. This means that you can still teach the mechanism in the class while getting the benefit of the spin columns.



One option, for teaching purposes, is to make all of your own solutions and just use the spin columns on the supernatant after pelleting the precipitated membrane/protein. That way, you get the benefit of insisting on real names for the solutions (NaOH is a more educational name than P2) and you get to use the spin columns in place of the ethanol precipitation. I believe there are cheap sources that sell just the spin columns, although I have never used them. You want to avoid the ethanol precipitation because it adds a bare minimum of 30 minutes to the protocol (for my protocols, it was more like 1.5 hours or more). And waiting for ethanol to evaporate so you can resuspend is like watching paint dry unless you have a SpeedVac.



Yield shouldn't be a big issue in a class situation, but you can boost yield with the column by prolonging the final elution step. I think the given instruction is to let the TE buffer (Tris) "elute" for 1 minute before the final spin, but I have routinely let the TE sit for about 15-30 minutes before spinning down. The difference in yield blew out the exposure on the gel camera.



There are certain cases where phenol:chloroform extractions are still desirable, but this is definitely not one of them. I used to do P:Cs routinely when working with RNA and yeast genomic DNA. I have never seen anyone do anything other than a miniprep when preparing plasmids. What's more, the hassle of working in a fume hood and the risk of phenol burns encouraged a lot of people in the lab to prefer the spin kits even when working with genomic DNA.

genetics - Which bacteria have the highest mutation rate?


From my reading on M. tuberculosis, I know that this organism has a pretty high mutation rate




Huh, that's news to me. In fact, Mtb has a rather low mutation rate and rather low genetic variance. See the paper by Sherman & Gagneux in Nature Genetics. The paper does state that the mutation rate in latent infection is higher than expected or previously assumed, but that's a different story.



That said, mutation rate depends strongly on the genomic context. You can have mutator strains in E. coli or other bacteria that exceed the mutation rates of other strains by orders of magnitude.



A clinical example of a bacterium with mutator strains that help in acquiring resistance is Pseudomonas aeruginosa. More on that in the paper by Ferroni et al.

Saturday, 27 June 2009

spectroscopy - Which elements are an indication of habitable exoplanets?

While not strictly relevant to your question, I'm very much looking forward to what the James Webb Space Telescope might tell us about exoplanet atmospheres. That's probably the thing I'm most looking forward to in astronomy over the next 5-10 years.



The one thing I've heard they are looking for specifically is the combination of oxygen (which implies photosynthesis) and CH4, which, in combination with oxygen, implies bacterial digestion of plant matter (aka cow farts). Those two together are of particular interest because CH4 and O2 tend to react with each other, so if they aren't being regularly produced, they aren't likely to both be present, at least in measurable amounts. Other factors like plate tectonics, methane escaping from the ground or from under the oceans, comet impacts, and perhaps even lightning might make such readings a little unreliable, but as I understand it, that combo of CH4 and O2 is the big one to look for.



Other interesting things might be the CO2/O2 ratio, also implying photosynthesis and perhaps telling us something about plate tectonics or O2 respiration. Variation in H2O content could tell us something about weather: whether the planet has freezing/thawing periods like Mars, or vast deserts with few oceans and little water, or occasional hurricanes. Dust particles could tell us something about the soil or volcanic activity. I would think abundant NH3 might be an indicator of little or no life, but I'm not positive there. Water-ammonia oceans might be possible and still host life, though we might find that not a good planet to live on.



Finally, "habitable" is a term that needs a bit of exploration. Let's use the Earth of some 2 billion years ago as an example: there was oxygen in the atmosphere, there was CO2 in the atmosphere, there were oceans, there was a magnetic field, there was even an ozone layer. So it had everything Knu8 listed, but there were no trees, and there wasn't even any soil. There might have been a thin microbial layer living on the rocky surface or, if too hot, perhaps in caves. There were no fish; there weren't even any plants or krill-like things growing in the oceans for fish to eat. It was mostly single-celled life, mostly in the oceans and perhaps lakes. Earth 2 billion years ago might not be our first choice for habitation. It was also prone to photosynthesis-driven snowball Earth periods, on top of having no soil and no trees.



There's also the question as to whether we'd want to inhabit an "earth-like" planet. Bacteria can be pretty hostile, and if there's life on the planet, it might not be safe to settle; there would also be ethical questions. If there's no life, then ethics isn't an issue, but there's likely a lot more terraforming. It's all relatively moot given the difficulty of that kind of space travel, but "habitable" is a more difficult question than it first appears. Our technology might be able to make a dead planet habitable over time, whereas a planet with life on it might appear more habitable but could pose unforeseen problems. I'm not sure which would be more ideal.



I'm not an expert in this, but what to look for in exoplanet atmospheres for habitability is a complicated subject, and I've only touched on it briefly. Fun thing to think about, though.

telescope - What is the formula to compute King tracking rate for a given set of topocentric coordinates?

Canbury Tech has a good background on King tracking rate, which approximates the average sidereal motion of a refracted star across the sky. Most amateur telescope mounts have some sort of notion of this "average" rate, which is generally about 15.037 arc seconds per second.
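For reference, the unrefracted sidereal rate against which the King rate is compared falls straight out of the length of the sidereal day. A quick sketch (the sidereal day length is the standard value; the King rate figure is just the one quoted above, not derived here):

```python
# Mean sidereal rate: 360 degrees per sidereal day, in arcsec/s.
SIDEREAL_DAY_S = 86164.0905          # seconds
sidereal_rate = 360 * 3600 / SIDEREAL_DAY_S
print(round(sidereal_rate, 4))       # ~15.0411 arcsec/s

# The King rate quoted above (~15.037 arcsec/s) is slightly slower,
# because refraction lifts a star toward the zenith, compressing its
# apparent path relative to the geometric one.
KING_RATE = 15.037
print(round(sidereal_rate - KING_RATE, 4))  # ~0.0041 arcsec/s difference
```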



However, King actually produced a formula that allows the rate to be calculated precisely for a given set of sky coordinates. Modern microprocessor-controlled mounts ought to be able to make use of this technique, but many don't. As the author of telescope drive software, I'd like to incorporate this feature into my work, but I haven't been able to locate the formula. BBAstroDesigns has a calculator for this on their web site, but they don't show the formula anywhere.



Can anyone enlighten me as to how to compute this value?

Wednesday, 24 June 2009

What is the difference between a neutron star and a white dwarf?

In a neutron star, the force of gravity is strong enough to press the protons and electrons together to form neutrons {1}; a white dwarf is merely very compact, still supported by electron degeneracy pressure. With even more mass, you get a black hole.
All three types are outcomes of stellar death, when the failing fusion in the core of a star is no longer able to counteract gravity. What a star becomes when it collapses depends on its mass.

volcanism - What mechanism could have formed the Moon's Oceanus Procellarum rift-like gravity anomalies?

According to the NASA JPL web report, Gravity Gradients Frame Oceanus Procellarum, a rift-like ring structure surrounding the Moon's Oceanus Procellarum has been detected as a gravity anomaly by the GRAIL scientific mission (it appears as the dark blue features in the image below).



[Image: GRAIL gravity gradient map of the Oceanus Procellarum region]



Image credit: NASA/Colorado School of Mines/MIT/GSFC/Scientific Visualization Studio



What mechanism could have formed the Moon's Oceanus Procellarum rift-like gravity anomalies?

Sunday, 21 June 2009

genetics - What are the function(s) of Alu elements in the cell?

I've really not studied this, but Wikipedia does a good job covering some of the functions, particularly the associated diseases that result from changes in those sequences. Of particular interest is this sentence:




The discovery of Alu subfamilies led to the hypothesis of master/source genes, and provided the definitive link between transposable elements (active elements) and interspersed repetitive DNA (mutated copies of active elements).




The article goes on to discuss the effects of Alu changes:




Alu elements are a common source of mutation in humans, but such mutations are often confined to non-coding regions where they have little discernible impact on the bearer. However, the variation generated can be used in studies of the movement and ancestry of human populations, and the mutagenic effect of Alu and retrotransposons in general has played a major role in the recent evolution of the human genome. There are also a number of cases where Alu insertions or deletions are associated with specific effects in humans:



Associations with human disease



Alu insertions are sometimes disruptive and can result in inherited disorders. However, most Alu variation acts as markers that segregate with the disease so the presence of a particular Alu allele does not mean that the carrier will definitely get the disease. The first report of Alu-mediated recombination causing a prevalent inherited predisposition to cancer was a 1995 report about hereditary nonpolyposis colorectal cancer.



The following human diseases have been linked with Alu insertions:



Breast cancer



Ewing's sarcoma



Familial hypercholesterolemia



Hemophilia



Neurofibromatosis



Diabetes mellitus type II



And the following diseases have been associated with single-nucleotide DNA variations in Alu elements impacting transcription levels:



Alzheimer's disease



Lung cancer



Gastric cancer



Other Alu-associated human mutations



The ACE gene, encoding angiotensin-converting enzyme, has 2 common variants, one with an Alu insertion (ACE-I) and one with the Alu deleted (ACE-D). This variation has been linked to changes in sporting ability: the presence of the Alu element is associated with better performance in endurance-oriented events (e.g. triathlons), whereas its absence is associated with strength- and power-oriented performance.



The opsin gene duplication which resulted in the re-gaining of trichromacy in Old World primates (including humans) is flanked by an Alu element, implicating Alu in the evolution of three-colour vision.




Of course, there is still a great deal to study on this. For instance, the University of Iowa has a team studying this "junk" DNA.




Part of the answer to how and why primates differ from other mammals, and humans differ from other primates, may lie in the repetitive stretches of the genome that were once considered "junk."



A new study by researchers at the University of Iowa Carver College of Medicine finds that when a particular type of repetitive DNA segment, known as an Alu element, is inserted into existing genes, it can alter the rate at which proteins are produced -- a mechanism that could contribute to the evolution of different biological characteristics in different species. The study was published in the Feb. 15 issue of the journal Proceedings of the National Academy of Sciences (PNAS).



"Repetitive elements of the genome can provide a playground for the creation of new evolutionary characteristics," said senior study author Yi Xing, Ph.D., assistant professor of internal medicine and biomedical engineering, who holds a joint appointment in the UI Carver College of Medicine and the UI College of Engineering. "By understanding how these elements function, we can learn more about genetic mechanisms that might contribute to uniquely human traits."



Alu elements are a specific class of repetitive DNA that first appeared about 60 to 70 million years ago during primate evolution. They do not exist in genomes of other mammals. Alu elements are the most common form of mobile DNA in the human genome, and are able to transpose, or jump, to different positions in the genome sequence. When they jump into regions of the genome containing existing genes, these elements can become new exons -- pieces of messenger RNAs that carry the genetic information.




There is a paper by Srikanta et al. entitled An alternative pathway for Alu retrotransposition suggests a role in DNA double-strand break repair (PDF).



Hope that helps.

Friday, 19 June 2009

Does the sun now "bypass" some of the original zodiac constellations?

This question got me thinking of an interview with, or a talk by, Neil deGrasse Tyson that I saw somewhere in the YouTube universe, in which he posed a question one might ask to challenge a literal interpretation of one's astrological sign. For some reason I can't find it now, but I believe he said that currently the sun does not actually pass through all twelve zodiac constellations any more.



I'm not looking for opinions or any particulars about astrology, please, just the science (if there is any)! If someone is familiar with the statement, I'd like to hear whether I've got it right, and whether it's due to proper motion, precession, redrawing/redefinition of constellation boundaries, or something else.

the sun - Given a date, obtain the latitude and longitude where the Sun is at zenith

Searching, it is easy to find the terminator line (the boundary between day and night), or the position of the Sun in the sky given a position on Earth and a time; but I can't find how to obtain the point where the Sun is at zenith for a given date (and time).



I need to obtain the center of the illuminated zone of the Earth (latitude and longitude) at a given time. (Well, actually I need the opposite, the nadir point, but from one you can easily calculate the other.)



Does anyone know the function?



Thank you.
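A rough first-order sketch of what such a function can look like (the function name and structure are my own illustration, not from any library; it ignores the equation of time, which can shift the subsolar longitude by up to about 4 degrees, and uses a simple cosine approximation for the declination):

```python
import math
from datetime import datetime

def subsolar_point(dt_utc):
    """Approximate (latitude, longitude) of the point where the Sun
    is at zenith, for a UTC datetime. First-order sketch only."""
    doy = dt_utc.timetuple().tm_yday
    hours = dt_utc.hour + dt_utc.minute / 60 + dt_utc.second / 3600
    # Solar declination: simple cosine approximation (degrees)
    lat = -23.44 * math.cos(math.radians(360 / 365 * (doy + 10)))
    # The Sun is overhead at local solar noon; Earth turns 15 deg/hour
    lon = (12 - hours) * 15
    lon = (lon + 180) % 360 - 180   # normalize to [-180, 180)
    return lat, lon

lat, lon = subsolar_point(datetime(2009, 6, 21, 12, 0))
print(round(lat, 1), round(lon, 1))  # near (23.4, 0.0) at the June solstice
```

The nadir point is then simply (-lat, lon + 180°), wrapped back into range.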

Thursday, 18 June 2009

rotation - Do solstices and equinoxes shift over time?

The Gregorian Calendar was created so that annual astronomical events, specifically the vernal equinox (used to determine when Easter is), would on average keep their places in the calendar year over time. It is the best official approximation to the definition of the tropical year, which is defined as "the length of time that the Sun takes to return to the same position in the cycle of seasons". Because this calendar describes 97 leap years out of every 400 years, it defines the average year as exactly 365.2425 solar days, or exactly 365 days, 5 hours, 49 minutes, and 12 seconds.



However, the mean tropical year is in reality about 365 days, 5 hours, 48 minutes, and 45 seconds, or 27 seconds shorter.



Because the Gregorian Calendar is based on the tropical year, the calendar dates of the year will keep up with the solstices and equinoxes, and thus the seasons. If this calendar were exactly the length of the tropical year, then the calendar would keep the vernal (northward) equinox around March 20th for all time.



But because of the slight inaccuracy, it will take about 3,200 years (60 s/min * 60 min/hr * 24 hr/day / 27 s/year) for these 27 seconds to add up every year to be 1 full day, and that will result in the solstices and equinoxes marching backwards in the calendar by 1 day every 3,200 years or so, depending on the accuracy of the 27 seconds difference. This very slow shift is due to the slight inaccuracy in the Gregorian calendar in, on average, matching the tropical year, not because of the precession of the equinoxes.



3,200 years from now, if the Gregorian Calendar is still used, the date of the vernal (northward) equinox will on average be one day earlier in March. The precession of the equinoxes will still occur, so the orientation of the Earth's axis (though not its tilt angle) will be significantly different from today. The Earth will also be at a noticeably different position in its orbit on the vernal (northward) equinox than it is today, in 2014, but the equinox will still be in March.



This inaccuracy may very slowly increase over time, because according to the same Wikipedia page for the tropical year, the tropical year is very slowly getting shorter, and the mean solar day is even more slowly getting longer. But for 10,000 years to come, the Gregorian Calendar will keep the vernal (northward) equinox in March, even if it slowly shifts earlier in the month.



This is in contrast to the scenario that you imply, where the calendar date would correspond to the relative position of the Earth in its orbit around the Sun. That describes the sidereal year, the time taken for the Sun to reach the same spot in the sky relative to the stars, which is 365 days, 6 hours, 9 minutes, and 10 seconds. A sidereal calendar would explain why you might think that precession would cause the dates of equinoxes and solstices to change in the calendar year. That would result in a shift in the calendar of one full month in 1/12th the cycle length of the precession of the equinoxes, or about 1 full month about every 2,000 years.
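The two drift rates above can be checked with a few lines of arithmetic (the precession period is the standard ~25,772-year value):

```python
# Drift of the Gregorian calendar against the tropical year:
# the calendar year is ~27 s too long, so equinoxes creep earlier.
seconds_per_day = 86400
error_s_per_year = 27
years_per_day_drift = seconds_per_day / error_s_per_year
print(round(years_per_day_drift))          # 3200 years per 1-day shift

# By contrast, a sidereal calendar would drift with precession:
# one full month (1/12 of the cycle) roughly every two millennia.
precession_cycle_years = 25_772
print(round(precession_cycle_years / 12))  # ~2148 years per month
```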

Wednesday, 17 June 2009

gravity - What is the gravitational force felt on Earth from the other planets in our solar system?

Because of the inverse square law for Newtonian gravity, the acceleration due to gravity $g_b$ at the surface of the Earth due to a body of mass $m_b$ at a distance $d_b \gg r_e$ (where $r_e \approx 6371\ \mathrm{km}$ denotes the radius of the Earth; note all distances will need to be in $\mathrm{km}$ in what follows) is:
$$
g_b = g \times \frac{m_b}{m_e} \times \left(\frac{r_e}{d_b}\right)^2
$$
where $g$ is the usual acceleration due to gravity from the Earth at the Earth's surface ($\approx 10\ \mathrm{m/s}^2$), and $m_e \approx 6.0 \times 10^{24}\ \mathrm{kg}$. We get the maximum acceleration due to a body when that body is at its closest to the Earth, which is what we use from now on (except for the Sun and Moon, where the mean distance is used).



Now for the Moon, $d_b \approx 0.384 \times 10^6\ \mathrm{km}$ and $m_b \approx 7.3 \times 10^{22}\ \mathrm{kg}$, so the acceleration at the Earth's surface due to the Moon is $g_b \approx 3.3 \times 10^{-5}\ \mathrm{m/s}^2$.



Then putting this relation and Solar-System data into a spreadsheet gives the acceleration at the Earth's surface due to each body.

[Original post included a spreadsheet screenshot of the results.]
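The spreadsheet step is easy to reproduce in code; the masses and distances below are rounded textbook values (mean distance for the Sun and Moon, rough closest approach for the planets), so treat the outputs as order-of-magnitude figures:

```python
# Acceleration at the Earth's surface due to other bodies:
#   g_b = g * (m_b / m_e) * (r_e / d_b)^2
G_EARTH = 9.8        # m/s^2
M_EARTH = 6.0e24     # kg
R_EARTH = 6371.0     # km

def g_body(mass_kg, distance_km):
    return G_EARTH * (mass_kg / M_EARTH) * (R_EARTH / distance_km) ** 2

# (mass in kg, distance in km)
bodies = {
    "Moon":    (7.3e22, 0.384e6),   # mean distance
    "Sun":     (2.0e30, 1.496e8),   # mean distance
    "Venus":   (4.9e24, 0.38e8),    # ~closest approach
    "Jupiter": (1.9e27, 5.9e8),     # ~closest approach
}
for name, (m, d) in bodies.items():
    print(f"{name:8s} {g_body(m, d):.1e} m/s^2")
```

The Moon comes out at about 3.3e-5 m/s^2, matching the worked value above; the Sun dominates everything else at a few times 1e-3 m/s^2, and the planets are all far smaller.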

genetics - What is the functional and structural distinction between core (H2A, H2B, H3,H4) and linker(H1/H5) histones?

The core histones are H2A, H2B, H3, and H4, and the linker histones are H1 and H5. The structure of the nucleosome is well explained in wikipedia:




Two of each of the core histones assemble to form one octameric nucleosome core particle, and 147 base pairs of DNA wrap around this core particle 1.65 times in a left-handed super-helical turn. The linker histone H1 binds the nucleosome and the entry and exit sites of the DNA, thus locking the DNA into place and allowing the formation of higher order structure. The most basic such formation is the 10 nm fiber or beads on a string conformation. This involves the wrapping of DNA around nucleosomes with approximately 50 base pairs of DNA separating each pair of nucleosomes (also referred to as linker DNA). The assembled histones and DNA is called chromatin. Higher-order structures include the 30 nm fiber (forming an irregular zigzag) and 100 nm fiber, these being the structures found in normal cells. During mitosis and meiosis, the condensed chromosomes are assembled through interactions between nucleosomes and other regulatory proteins.




The core histones have a positive net charge, which facilitates the interaction with the negatively charged phosphate groups of DNA.



Apart from defining the nucleosome structure, the function of the core histones is regulatory: they can switch gene expression on and off via histone modifications such as acetylation or methylation.



Gene expression is ON when there is:

• DNA demethylation
• histone acetylation
• H3K4 methylation
• H3K36 methylation
• H3K9 demethylation
• H3K27 demethylation

Gene expression is OFF when there is:

• DNA methylation
• histone deacetylation
• H3K4 demethylation
• H3K36 demethylation
• H3K9 methylation
• H3K27 methylation

(Here "H3K4 demethylation" is an abbreviation for: lysine 4 of H3 is demethylated.)
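The ON/OFF table above is essentially a lookup, so as a study aid it can be written directly as one (a sketch containing only the marks listed above, not an exhaustive set):

```python
# Transcriptional state associated with each mark in the table above.
MARK_STATE = {
    "DNA demethylation": "ON",
    "histone acetylation": "ON",
    "H3K4 methylation": "ON",
    "H3K36 methylation": "ON",
    "H3K9 demethylation": "ON",
    "H3K27 demethylation": "ON",
    "DNA methylation": "OFF",
    "histone deacetylation": "OFF",
    "H3K4 demethylation": "OFF",
    "H3K36 demethylation": "OFF",
    "H3K9 methylation": "OFF",
    "H3K27 methylation": "OFF",
}

print(MARK_STATE["H3K9 methylation"])  # OFF
print(MARK_STATE["H3K4 methylation"])  # ON
```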



Berger SL. 2002. Histone modifications in transcriptional regulation. Current Opinion in Genetics & Development 12: 142–148 is also a nice review.

Monday, 15 June 2009

the sun - What triggers solar flares?

Magnetic fields are generated by currents - i.e. by the motion of charged particles. As you say, the Sun is full of freely moving charged particles, and these generate currents which in turn generate magnetic fields. No metals required.



Most of the magnetic field generation is thought to occur at the interface between the radiative interior of the Sun and an outer convective envelope. This region, called the tachocline, is subject to large shearing motions which are able to take small magnetic fields and amplify them. The stronger magnetic fields are then buoyant, because magnetic fields in a plasma exert a pressure. They therefore emerge at the solar surface in the form of loops of magnetic field.
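The buoyancy step can be made quantitative: in magnetohydrodynamics, a magnetic field contributes a pressure of its own, in SI units

```latex
P_{\mathrm{mag}} = \frac{B^2}{2\mu_0}.
```

A flux tube must match the total (gas plus magnetic) pressure of its surroundings, so the gas inside it needs a lower pressure, and hence a lower density, than the gas outside; that density deficit is what makes the tube buoyant.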



These loops are strongly coupled to the plasma in the Sun. As the plasma moves around turbulently, driven by convective motions and differential rotation on the Sun's surface, the footpoints of the loops are sheared and twisted. There comes a point where the loop will snap back into a configuration with lower magnetic potential energy through a process called reconnection. The reconnection process has a side effect of accelerating particles within the magnetic loops. These relativistic particles smash into the photosphere (actually, they are mainly stopped in the chromosphere which sits above the photosphere), where they release their energy, heating plasma to millions of degrees, which evaporates into the corona. This is a solar flare.

Sunday, 14 June 2009

dna - How distantly related are eusocial insects? Aren't members of a species much more related than 1/4, 1/2, or 3/4?

In an evolutionary genetic comparison like this, we are talking about members within a species. They share almost all their genes, because if they didn't, they would belong to a different species.



However, within species there exist different versions of the same genes, called 'alleles'. When we say that you are 0.5 related to each of your parents, we mean that statistically, 50% of your alleles should be those which your father has, and 50% of your alleles are those passed down from your mother.



Eusocial insects have a different mechanism. Bee males are produced without fertilisation, meaning they carry only one copy of each bee gene. When a male produces sperm, he has only this one set, so all his sperm cells end up carrying the same set of alleles.



Females, on the other hand, have the normal double set, with two possibly different versions of each gene. So if you look at one gene, half of the queen's eggs should carry one version and the other half should carry the other version. All females of one hive are produced by the same queen and the one male she mated with. Remember, the male has only one set, so the alleles coming from the male are the same in all female offspring.



This means a female's genes are made up of 50% from the father (identical across all females) and 50% from the mother (where half the females get one version and half get the other). Statistically, looking at one gene, there is therefore a 75% chance that two sister bees share the same version of that gene.
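The 75% figure is easy to verify with a toy simulation under the assumptions above (one queen, one haploid father; the code is my own sketch):

```python
import random

def sister_relatedness(n_loci=100_000, seed=1):
    """Fraction of alleles shared between two full-sister bees.

    Father is haploid: both sisters always inherit his single allele.
    Mother is diploid: each sister gets one of her two alleles at random,
    so the maternal alleles match half the time.
    """
    random.seed(seed)
    shared = 0
    for _ in range(n_loci):
        shared += 1                # paternal allele: always identical
        if random.random() < 0.5:  # maternal alleles: match with prob 1/2
            shared += 1
    return shared / (2 * n_loci)   # two alleles per locus

print(round(sister_relatedness(), 3))  # ~0.75
```

The expectation is (1 + 0.5) / 2 = 0.75, exactly the figure in the answer.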

Saturday, 13 June 2009

observation - Around what apparent magnitude can the naked eye observe an object during a full moon?

It may shave off 1 or 2 magnitudes, but it depends on many factors: the nature of the object (star, nebula, galaxy), its altitude in the sky (higher altitude means less light scatter), transparency, etc. There's no One Single Answer To Rule Them All.
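To put "1 or 2 magnitudes" in linear terms: each magnitude is a flux factor of 100^(1/5) ≈ 2.512, so the faintest object you can still see must be several times brighter than under a dark sky. A quick check:

```python
# Brightness factor corresponding to a loss of limiting magnitude.
def flux_ratio(delta_mag):
    return 100 ** (delta_mag / 5)

print(round(flux_ratio(1), 2))  # 2.51: faintest visible object ~2.5x brighter
print(round(flux_ratio(2), 2))  # 6.31: ~6.3x brighter
```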



Light pollution from artificial sources has a greater impact in most cases.

On the genetics behind caste marriages


Is the son of a couple engaged in farming better suited to the same profession unless he receives a couple of recessive alleles from both parents?




Not everything is coded in the genome; environmental cues play a very important role in shaping who you are. Profession is just one of those things that is not encoded in genes.
Many people do not continue doing the job of their parents, yet do very well in life.
Also note that, aside from the fact that there is no "agriculture" gene, having a recessive allele does not mean you will miss a certain phenotype.
What is true is that someone who has a higher education will probably be keener to have their kids study. Conversely, the kids of someone who is a poor farmer, barely able to provide for his family, will probably not have the opportunity to go to school (let alone university) and therefore will continue to work the land. It has nothing to do with genetics.




Generally there is a lot of unscientific information floating around about these matters. One of the grapevines is that it is genetically better for one to marry outside their caste to create diversity(?). Is such a thing true?




Continuing to reproduce within the same group of people will limit genetic diversity and allow recessive mutations to accumulate, which is generally a bad thing. That is why particular illnesses are very common in closed communities.



For instance, Amish people have a higher incidence of genetic disorders such as Ellis-van Creveld syndrome (a form of dwarfism).



From The Gene for the Ellis–van Creveld Syndrome Is Located on Chromosome 4p16 - Ruiz-Perez et al., Nat. Genetics 2000




Autosomal recessive transmission of the disorder is supported by
multiple affected siblings with unaffected parents in families with
known parental consanguinity. The largest known pedigree segregating
with EVC is that reported by McKusick and his colleagues in the Old
Order Amish (McKusick et al., 1964). The prevalence of living persons
affected with EVC among the Lancaster County Amish was estimated to be
2 in 1000, whereas the frequency of occurrence among live births was 5
in 1000. McKusick et al. estimated that the frequency of heterozygous
carriers in the Old Order Amish is as high as 13% (McKusick et al.,
1964). All cases of EVC in the Amish at that time were traced to a
common founding couple, Samuel King and his wife, who immigrated to
Pennsylvania in 1744.




This is a well-known effect, called the founder effect.
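As a sanity check, the Amish numbers quoted above are internally consistent under Hardy-Weinberg assumptions (random mating within the community; the calculation below is my own illustration, not from the paper):

```python
import math

# Carrier (heterozygote) frequency 2q(1-q) = 0.13, per McKusick's estimate.
# Solve for the recessive allele frequency q: q^2 - q + 0.065 = 0.
carrier_freq = 0.13
q = (1 - math.sqrt(1 - 2 * carrier_freq)) / 2
affected_per_1000 = q**2 * 1000

print(round(q, 3))                  # ~0.07
print(round(affected_per_1000, 1))  # ~4.9 affected births per 1000,
                                    # close to the quoted 5 in 1000
```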



As for India, a study on the genetics behind castes was done a couple of years ago.



Reconstructing Indian Population History - Reich et al., Nature 2009



From the paper:




Haldane wrote 45 years ago that “if inter-caste marriages in India become common, various… recessive characters will become rarer”. However, it has not been generally appreciated that this applies to groups throughout India, and not only to groups in the south where consanguinity is common. We hypothesize that founder effects are responsible for an even higher burden of recessive diseases in India than consanguinity.
To test this hypothesis, we used our data to estimate the probability that two alleles from a group share a common ancestor more recently than that group’s divergence from other Indians, and compared this to the probability that an individual’s two alleles share an ancestor in the last few generations due to consanguinity. Nine of the 15 Indian groups for which we could make this assessment had a higher probability of recessive disease due to founder events than to consanguinity, including all the Indo-European speaking groups.
An additional reason why some diseases are expected to occur at elevated frequencies in India is shared descent from a common Indian ancestral population. An example is a 25 base pair deletion in MYBPC3 that increases heart failure risk by about 7-fold, and occurs at around 4% throughout India but is nearly absent elsewhere.




They conclude that:




We have documented a high level of population substructure in India, and have shown that the model of mixture between two ancestral populations ASI and ANI provides an excellent description of genetic variation in many Indian groups.
[...]
By showing that a large proportion of Indian groups descend from strong founder events, these results highlight the importance of identifying recessive diseases in these groups and mapping causal genes.




It is very important to understand that these results in no way show a genetic basis for the existence of castes, which remain purely social entities. Rather, they show that an environmental constraint can influence the distribution of genetic traits, by restricting how genes mix in the population.

Thursday, 11 June 2009

star - Rate of Mass Loss from the Solar Wind

This is problem 1-4 from Principles of Stellar Evolution and Nucleosynthesis by Clayton:



Assuming at the Earth a characteristic velocity of 400 km/s and density of 10 amu/cm$^{3}$ for the solar wind, calculate the rate of mass loss for the Sun.



There weren't any formulas about this in the section, so I took a stab at it with dimensional analysis.



$$\frac{dM}{dt} = \frac{\rho V}{\Delta t} = \rho v A$$
$$\frac{dM}{dt} = \left( \frac{10\ \mathrm{amu}}{\mathrm{cm}^{3}} \right) \left( \frac{400\ \mathrm{km}}{\mathrm{s}} \right) \left( 4 \pi (6.96 \times 10^{10}\ \mathrm{cm})^{2} \right) \left( \frac{10^{5}\ \mathrm{cm}}{\mathrm{km}} \right) \left( \frac{10^{-24}\ \mathrm{g}}{1\ \mathrm{amu}} \right) \left( \frac{M_{\odot}}{2 \times 10^{33}\ \mathrm{g}} \right) \left( \frac{3600\ \mathrm{s}}{\mathrm{hr}} \right) \left( \frac{24\ \mathrm{hr}}{\mathrm{day}} \right) \left( \frac{365\ \mathrm{day}}{\mathrm{yr}} \right)$$
$$\frac{dM}{dt} = 3.84 \times 10^{-19}\ M_{\odot}/\mathrm{yr}$$



However, the answer given in the book is $0.4 \times 10^{-13}\ M_{\odot}/\mathrm{yr}$. So I'm off by about five orders of magnitude. Can anyone point out where I went wrong and/or point me in the correct direction?
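For what it's worth, the post's arithmetic can be reproduced directly. The sketch below uses the same inputs as the post (including the rounded 1 amu = 1e-24 g), and also evaluates the same expression with the Earth's orbital radius in place of the solar radius, since the 400 km/s and 10 amu/cm^3 are values measured at Earth:

```python
import math

AMU_G = 1e-24            # g, rounded value used in the post
RHO = 10 * AMU_G         # g/cm^3, solar wind density at Earth
V = 400e5                # cm/s
M_SUN = 2e33             # g
SEC_PER_YR = 3600 * 24 * 365

def mass_loss(radius_cm):
    """dM/dt in solar masses per year through a sphere of given radius."""
    area = 4 * math.pi * radius_cm**2
    return RHO * V * area * SEC_PER_YR / M_SUN

print(f"{mass_loss(6.96e10):.2e}")   # 3.84e-19: the post's value (solar radius)
print(f"{mass_loss(1.496e13):.2e}")  # 1.77e-14: same flux through a 1 AU sphere
```

The ratio between the two spheres' areas, (1 AU / R_sun)^2 ≈ 4.6e4, is roughly the five orders of magnitude in question.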

zoology - How does a jumping spider manage to "jump" on the ceiling?

I just witnessed a small jumping spider jump on some kind of louse or bed bug on the ceiling. How does it do that without falling? I have yet to find a high-speed-camera video of a jumping spider jumping on a ceiling.



My hypothesis is that the spider would spit silk and anchor it on the ceiling, then jump, and allow itself to be swung by the silk onto the prey. But I don't have a high speed camera to prove that.



EDIT:
I now have some evidence that supports my hypothesis: I just saw a jumping spider attempt to jump on a prey and fail to land properly. It didn't fall directly to the floor. Instead, it dangled from its own silk about 2 cm below the ceiling, then climbed right back up the thread of silk onto the ceiling.

Wednesday, 10 June 2009

telescope - How useful are filters for spotting nebulae?

With an 8" scope, a filter will very likely give you better results than observing without one. Although a filter blocks light, the crucial point is that it increases contrast (by blocking light pollution and extraneous wavelengths more than the nebula's own emission), thereby allowing you to spot low-contrast diffuse nebulae (like IC59 and IC1318) much more easily. This is in fact more critical for visual observation than for photography, because in photography it is possible to increase contrast in post-production. You will find that visual astronomers go to great lengths to increase contrast: baffling, flocking, premium mirrors, etc.



Light pollution filters are broadband, meaning they pass most light except the wavelengths emitted by streetlights. From darker skies (like Bortle 4.5), this will not give dramatic results. I would recommend starting with an OIII filter instead. The OIII filter is a narrowband filter, meaning it cuts all light except a very narrow wavelength range emitted by doubly ionized oxygen. Very often OIII filters make the difference between being able to see an object and not being able to see it, even from very dark skies. As an excellent test of the OIII filter, check out the Veil Nebula; from your skies, this object would be rather difficult without an OIII filter but rather stunning with one.

Tuesday, 9 June 2009

solar system - What is the future of our universe?


What is the future of our universe?




Like StephenG said, nobody knows for sure. But we do have confidence that the universe is expanding, and we also have confidence that the expansion is speeding up. So extrapolating from that, the future looks cold and lonely and bleak. A bit like life for the older generation!




Is the universe heading towards a Big Freeze, a Big Rip, a Big Crunch or a Big Bounce? Or is it part of an infinitely recurring cyclic model?




I'd say the Big Crunch and the Big Bounce are out. The universe didn't contract when it was small and dense. Instead it expanded, and that expansion is increasing. So it looks like we're in for a Big Freeze. However I wouldn't rule out a Big Rip of sorts. Have a look at page 5 of this paper where Milgrom mentions the strength of space. Then think of the balloon analogy, but make it a bubble-gum balloon, in vacuum.



Image courtesy of the one-minute astronomer.



The skin gets thinner and the balloon expands, then the skin gets even weaker, so the balloon expands even faster, and so on. Bubble-gum bubbles sometimes end badly, and there's something about this article by Phil Plait that strikes a chord.




Or is it part of an infinitely recurring cyclic model?




I don't know. I have no evidence to suggest that there's any kind of recycling going on, and I can't think of a mechanism by which this might occur. I have of course read about "conformal cyclic cosmology", but I just don't buy it, along with other stuff from Penrose.

Monday, 8 June 2009

galaxy - How much of the Milky Way is visible to the naked eye from earth?

At any one time, an average observer can see about 2,500 stars in a clear dark sky. Note that eyesight varies, and sharp-eyed individuals may be able to see stars a half-magnitude dimmer than the average eye can (apparent magnitude is a scale in which each integer step corresponds to a factor of $2.512$ ($100^{0.2}$) in brightness). A very dark sky may allow magnitude $+7.5$ or even $+8$ stars to be seen, but in a typical "dark" non-urban sky the limit is often $+5.5$ to $+6.5$.
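The magnitude arithmetic above can be sketched in a couple of lines (this is Pogson's relation, a standard definition rather than anything specific to this answer):

```python
def brightness_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference: 100**(dm/5)."""
    return 100 ** (delta_mag / 5)

print(brightness_ratio(1))  # ~2.512, one magnitude step
print(brightness_ratio(5))  # 100.0, five steps by definition
```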



Supernovae can potentially be seen as far as 13 billion light-years (ly) away, essentially from the 'edge' of our observable universe. So, it's not saying much to say that a supernova might be seen across a distance of 100,000 ly.



Some types of supernovae can exceed $-22$ in absolute magnitude, where absolute magnitude is defined as the apparent magnitude an object would have if observed from a distance of 32.6 ly (to be honest, I'm not sure whether supernova absolute brightness is defined in exactly this same way). By way of comparison, the Sun's absolute magnitude is $+4.8$ (lower numbers indicate brighter objects).



Because of dust and the interstellar medium, the distance to which we can see is just a small fraction of the size of the Milky Way. In reality, few stars are bright enough to be seen from more than 400 ly away. Deneb is a notable exception, with an estimated distance of 2,600 ly (though it may be as close as 1,550 ly; the large uncertainty is due to its variation in brightness). Only 6 visible stars are thought to be farther from us than 1,000 ly.
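The "only the brightest stars beyond 400 ly" point follows from the standard distance-modulus relation, $m - M = 5\log_{10}(d_{\mathrm{pc}}) - 5$; a sketch (the example absolute magnitudes are illustrative, not taken from the text):

```python
def max_visible_distance_ly(abs_mag, limiting_mag=6.0):
    """Largest distance at which a star of absolute magnitude abs_mag
    stays brighter than the naked-eye limit (distance modulus, inverted)."""
    d_pc = 10 ** ((limiting_mag - abs_mag + 5) / 5)
    return d_pc * 3.26  # parsecs -> light-years

print(max_visible_distance_ly(0.0))   # ~517 ly for a fairly luminous star
print(max_visible_distance_ly(-8.0))  # ~20,600 ly for a rare supergiant
```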



So, to sum up: looking inwards into the Milky Way, our visibility is restricted to roughly the nearest 1,000–2,000 ly (while the MW disk's radius is 50,000 to 90,000 ly and we're about 27,000 ly from the center), and only the brightest stars are visible from more than about 400 ly away.



Looking outwards, on the other hand, we can see the Andromeda Galaxy (but not its individual stars), which is 2,500,000 ly from us. In other words, most people blame dust for the poor visibility, but that's mostly relevant only for telescopes. For the naked eye, our limited eyesight is the real barrier (not to mention the dearth of really dark skies).

Sunday, 7 June 2009

evolution - Why do eukaryotic organisms have introns in their DNA?

Prokaryotes can't have introns because their transcription is coupled to translation; there is no time or space for splicing, since intron removal would break the coupling. Eukaryotes evolved the nucleus, where splicing can be done. The eukaryotic ancestor that developed the nucleus could afford more variability (because of introns) than species without it, and so had greater fitness.



Bacteria can't afford high-complexity compartmentalization, a process that requires a lot of available energy per gene. A eukaryotic cell can have tens, hundreds, or even thousands of mitochondria with an energy output similar to a bacterial cell's, while each mitochondrial genome is about 100–500 times smaller (16 kb for the human mitochondrion compared to 4,000 kb for an E. coli cell).



I hope that clarifies your doubts, and you can see that this is a debatable answer.



Sorry for my bad English.



Sources:



Lane & Martin 2010.



Martin 2011

Friday, 5 June 2009

bioinformatics - When does BLAST fail to align 2 DNA sequences?

I am not sure I understand BLAST correctly.



BLAST works differently for DNA and protein. In the protein "seeding" step there is a threshold T, which means the seed word need not match perfectly. For DNA, however, there seems to be no T in the seeding step: we look for exact word matches and extend them into the neighbouring sequence.



So, if there is no exact match of word length 5 between these two sequences, BLAST will fail.
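The DNA seeding step described above amounts to an exact k-mer lookup; here is a minimal sketch with word length 5 as in the question (real BLASTN defaults to a larger word size, typically 11):

```python
def find_seeds(query, subject, w=5):
    """Return (query_pos, subject_pos) pairs where a length-w word matches exactly."""
    index = {}
    for i in range(len(subject) - w + 1):
        index.setdefault(subject[i:i + w], []).append(i)
    seeds = []
    for i in range(len(query) - w + 1):
        for j in index.get(query[i:i + w], []):
            seeds.append((i, j))
    return seeds

print(find_seeds("ACGTACGTAC", "TTTTTTTTTT"))  # [] -> nothing to extend
print(find_seeds("ACGTACGTAC", "GGACGTAGG"))   # [(0, 2), (4, 2)] via "ACGTA"
```

Each seed would then be extended in both directions; with no seeds at all, the search reports nothing.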

Is ours the first universe?

It is very hard to tell whether our Universe is unique. The difficulty is that the closer we try to probe the first fractions of a nanosecond after the Big Bang, the more our current formulations of gravity and the three other fundamental forces break down.



However, with all that said, my own personal belief is that our Universe is not unique. The estimated time for random quantum fluctuations to generate a new Big Bang, as quoted in the page you referred to, is an unimaginably large time scale.



The question I will pose back to you is: how could we ever tell whether we are a unique Universe or part of a collection of Universes?

Thursday, 4 June 2009

evolution - How do we know that dinosaurs were related to lizards and/or birds?

In general the answer is always the same: you construct a phylogenetic tree. In order to locate different species on this tree in relation to each other, you use various features to compare which species are more similar to each other than others.



The best way of doing this is by comparing their DNA sequence, especially orthologous genes (i.e. genes common to the species compared).



Unfortunately, genetic sequences usually aren’t available for extinct species. You can still compare homologous features, though. For instance, mammals are all characterised by the possession of mammary glands. Similarly, all vertebrates have a vertebral column, and all birds (Aves) are feathered, warm-blooded, egg-laying vertebrates.



The collection of many such features from fossil records allows the creation of more or less detailed phylogenies. The Wikipedia explanation mentions several transitional fossil forms which trace the evolution from dinosaurs to modern birds via several intermediates. All of the inferences are based on anatomical resemblance.



This may sound like weak evidence, but in fact anatomical homology has proved to be sufficiently accurate in constructing other phylogenies, where we have been able to verify the correctness using genome comparison data. So while there is much uncertainty about the precise branching point of birds from dinosaurs (or maybe archosaurs), there is near-certainty that the common ancestor of birds and dinosaurs was, in fact, an archosaur.

observation - Have we seen a black hole?

The answer is no. There are no resolved images at any wavelength of black holes or black hole candidates that demonstrate their lensing effect.



There are of course lensing images due to massive objects that probably have black holes at their centres (e.g. Courbin et al. 2010 and see below), but that is not the same thing.




A quasar acting as a gravitational lens - Courbin et al. (2010)

How much heat is generated from waxing and waning of reflected radiation from the Sun?

The article doesn't go into specifics and appears not to have been written by Dr. Evans at all, but I'll pull some quotes.




When it is completed his work will be published as two scientific
papers. Both papers are undergoing peer review




and




He has been summarising his results in a series of blog posts on his
wife Jo Nova’s blog for climate sceptics.



He is about half way through his series, with blog post 8, “Applying
the Stefan-Boltzmann Law to Earth”, published on Friday.




(Footnote: I'm guessing that in Australia, "summarising" and "sceptics" are spelled differently than in the US, because that's a copy-paste.)



I've only read one of his summaries. In mathematics you have to look at the details, which he's not provided, so he's basically saying little more than "this is true, here's generally why, I've done the math, please take my word for it, I'll publish the numbers later". Now, Michael Mann is also presenting non-peer-reviewed work to Paris, so what's good for the goose is good for the gander, I suppose.



Now, I'm just a guy who likes science, but my understanding is that ideas have been presented before they are finished or peer reviewed fairly often. Einstein did this in fact regarding general relativity and another scientist actually beat him to publishing the theory (though, the other scientist was gracious about it and gave Einstein full credit), so, I don't think it's necessarily bad to present a summary prior to peer review.



I think it is, however, unusual to post summaries of an idea on a blog saying "I figured it out, the majority of research on this subject is wrong". Dr. Evans is saying he has proof, but he's acting like a junk science blogger.



Dr. Evans's "trust me, I did the math" claim kind of requires that we look at his track record, and his track record isn't very strong, though he will say he's being attacked by the establishment.



According to this site, he hasn't published anything peer-reviewed since the 1980s and he's on their Climate Denier list. He's also on Skeptical Science's Climate Misinformers list.



And here's a list of debunked claims he's made (and if this list is accurate, he's not a scientist at all, just a guy on a fishing expedition; no good scientist would agree with pretty much every counter-argument against climate change, because that's not how the scientific method works. You can disagree with something, that's fine, but to agree with every counter-argument is silly). Here's another, more detailed explanation of what he's gotten wrong, from 2011. Evans seems to be in this debate to disagree with it much more than to do scientific research.



It's pretty much impossible to make a true scientific argument against Evans's "proof" without his specifics, and that kind of proof/disproof can get long and complicated; for now, disproving him is impossible. But should we listen to him?



It says above that he's published 8 blog posts related to his recent research. I'm not going to dig up all 8, but here's the most recent one: Applying the Stefan-Boltzmann Law to Earth. Now, I'm just a layman, but even I can see problems with his argument (and he should too, given that he's an engineer with a PhD). The Stefan-Boltzmann law is an approximation: a physical model for calculating radiation into space.



The problem with his approach is that the best way to measure how much heat/energy leaves the Earth by radiation is to measure it directly, by satellite. The amount of energy that radiates from the Earth into space varies with temperature, snow cover, cloud cover, even humidity, and probably one or two other things I'm overlooking. If you try to calculate this energy by playing with the Stefan-Boltzmann law instead of relying on direct measurements, you're allowing yourself a lot of fudge factors and inviting a far greater error than direct measurements would give you.
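For reference, the Stefan-Boltzmann law itself is a one-liner; the fudge factors enter when choosing an effective temperature and emissivity for a patchy, cloudy planet. A sketch:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_flux(temp_kelvin, emissivity=1.0):
    """Blackbody flux j = emissivity * sigma * T**4."""
    return emissivity * SIGMA * temp_kelvin ** 4

# Earth's effective radiating temperature of ~255 K gives ~240 W/m^2,
# in rough balance with the ~240 W/m^2 of absorbed sunlight.
print(radiated_flux(255))
```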



On the quoted article, let me pull out an example:




Dr Evans has a theory: solar activity. What he calls “albedo
modulation”, the waxing and waning of reflected radiation from the
Sun, is the likely cause of global warming.




OK, so which is it, solar activity or albedo modulation? Because they're not the same thing: the first takes place on the Sun, the second on the Earth. This paragraph makes no sense to me.




He predicts global temperatures, which have plateaued, will begin to
cool significantly, beginning between 2017 and 2021. The cooling will
be about 0.3C in the 2020s. Some scientists have even forecast a mini
ice age in the 2030s.




Now, this paragraph is particularly devious. El Niños tend to warm the Earth; La Niñas cool it. The effect is temporary and not huge, but enough to cause year-to-year variation. A strong El Niño drove the big spike in global temperature in 1998, and we're in an El Niño now (I've edited my answer: since 2014 they've been talking about entering an El Niño, and I gather it's officially started now).



We had more La Niña years than El Niño from 2006 to 2013, with the only small El Niño coinciding with 2010, which set records for temperature. A lot of the much-discussed hiatus in warming is related to there being only one small El Niño over those 7 years.



Predicting 2017 as the time when the cooling will "begin" is devious, because that could be around the time the El Niño has ended and the oceans have switched back to a La Niña (which usually follows an El Niño). That would create a temporary cooling for a year or two, which he will no doubt take credit for if it happens. He also predicts 2021, which could go either way, and he gives an amount, but that doesn't change the fact that he's making a prediction and hoping the El Niño of 2015 will end and make his prediction look good.



Real global warming or cooling can't be measured in one year anyway; perhaps if the data are adjusted for ocean currents and the occasional mega-volcano you can get some measure of warming/cooling from a single year, but it's still only one year. That's a really short period to base any predictions on, and not something I'd trust very far at all.



And on "scientists have predicted a mini ice age in the 2030s": that's not actually true. There was a study on sunspots which predicted that we could see a sunspot low around 2030, perhaps similar to the Maunder Minimum that may have caused the mini ice age, but the scientists who made that prediction were very clear that they were not predicting a new mini ice age; they said the effect would be smaller than the effect of CO2.



Here's the mini ice age prediction, which a few people made (but not the scientists who did the research).



Here's an article that explains why it isn't true.



So, there's a lot of bad and a handful of false statements in that article you quoted, which, granted, wasn't written by Dr. Evans himself, but still, it's hard for me to take it seriously.



Until he publishes his results, he can't be proved or disproved, but based on what I've read, I find it hard to take him seriously. My hunch is he's not trying to reach scientists at all; he's trying to reach his target audience: those who question climate change. He gives them a name and an alternative argument they can stand on. An argument doesn't have to be correct; it only needs to sound correct, and with that you can usually convince a percentage of people to agree with you.



Not sure how much that helps, but that's my take, and I went through and tried to clean up my long answer a bit. If you'll forgive me, it reminds me of the old joke: how can you tell Dr. David Evans is lying? He's talking or writing. :-)

Wednesday, 3 June 2009

notation - From Mean Moon to True Moon in an old procedural calendar

This is a follow-up to How to interpret this old degree notation?



It is about an old calendar system which generates a luni-solar calendar with a procedural method, using a handful of calculations and some constants derived from astronomical observations.



This method originates from ancient India. Today it is used in Thailand to determine public holidays and observance days (uposatha days) for Buddhist monks, which fall on New- and Full Moon days.



Actually it's a neat system that allowed village "astronomers/astrologers" to construct a calendar with a method they could memorize by rote, without astronomical tools for precise measurements. It is dreadfully underdocumented, and I've been collating the method and its practices in Calculating the Uposatha Moondays.



Most useful have been the actual formulas in Rules for Interpolation in the Thai Calendar by J.C. Eade; it is just some basic arithmetic, but following his notation is a puzzle in itself.



Effectively, I'm trying to implement these formulas (see image below) in a golang package.



With the earlier help, I arrived at step 13 (Mean Moon). Hooray :)



Steps 14 and 15 are a puzzle again.



Step 14.



How would you interpret this? "(1780 + 80) * 3 on base 808, and add 2"



Step 15.



Eade has (8; 11 : 7) - (6; 27 : 12) = (1; 3 : 55), but that doesn't work.



That specific subtraction gives me (1; 13 : 55), which could be a typo in the paper, but that value doesn't carry things forward either.
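Assuming Eade's triples are (zodiacal sign; degree : minute) with 30 degrees to a sign and 60 minutes to a degree (my reading of the notation, not stated in the excerpt), plain mixed-radix subtraction does reproduce (1; 13 : 55), which supports the typo theory:

```python
def to_minutes(sign, deg, minute):
    # 30 degrees per zodiacal sign, 60 minutes per degree (assumed radices)
    return (sign * 30 + deg) * 60 + minute

def from_minutes(total):
    return (total // 60 // 30, (total // 60) % 30, total % 60)

diff = to_minutes(8, 11, 7) - to_minutes(6, 27, 12)
print(from_minutes(diff))  # (1, 13, 55), not the paper's (1; 3 : 55)
```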



Things straighten up at steps 16, 17, and 18, which produce the paper's results; to get there I faked the values for steps 14 and 15, so the code works at least for the example case given in the paper.



In suriya.go, this business is happening in:



func (suDay *SuriyaDay) Init(ce_year int, lunar_year_day int)


J.C. Eade, Rules for Interpolation in the Thai Calendar

Do all stars have the potential to have life supporting planets?

Great question. The goldilocks zone is usually defined in terms of a region where the equilibrium temperature of the planet lies between some temperature limits (these temperature limits are somewhat debatable, but irrelevant for the purposes of this question - the boundary becomes fuzzy). This region can be calculated by working out how much flux is received from the star at a given radius.



In that sense, all stars have a goldilocks zone: you can always work out some distance from a star where the flux is roughly equivalent to the flux we receive from the Sun. It is much closer in for a dim star and much further out for a luminous star.
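Since flux falls off as $1/d^2$, that distance scales as the square root of the star's luminosity; here is a sketch of the "Earth-equivalent flux" distance described above (a simplification, not a full habitable-zone model):

```python
import math

def earth_equivalent_distance_au(luminosity_lsun):
    """Distance where stellar flux equals Earth's insolation.
    Flux ~ L / (4*pi*d^2), so d in AU = sqrt(L in solar units)."""
    return math.sqrt(luminosity_lsun)

print(earth_equivalent_distance_au(1.0))    # 1.0 AU for a Sun twin
print(earth_equivalent_distance_au(0.01))   # 0.1 AU for a dim red dwarf
print(earth_equivalent_distance_au(100.0))  # 10.0 AU for a luminous star
```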



But there is a difference between a habitable zone and a continuously habitable zone. If you seriously want life to evolve on a planet, it will take time. How much time? No one is sure, but it seems to have taken a few hundred million years in our solar system. There are also good reasons to suppose that young planets are not going to be habitable: either they are still extremely hot after formation or they are being bombarded by debris.



Thus you can probably exclude any star with a main sequence lifetime of less than about 100 million years (or longer if you feel conservative about how long it takes life to get going). This rules out stars of more than about 5 solar masses.



You might also rule out stars that evolve quickly. If a star changes its luminosity on a short timescale, then the habitable zone moves drastically too. This will happen near the ends of the lives of all main-sequence stars, and for all subgiant and giant stars. So for these, yes, there is an instantaneous habitable zone, but no region stays in the habitable zone for hundreds of millions of years or more.



You might want to consider not only the luminosity of the star, but its spectrum as well. For instance, both hot white dwarfs and low-mass M-dwarfs are reasonably stable and long-lived; they are faint, so have habitable zones close to the star (much closer than 1 au). However, for different reasons, both these types of object emit copious UV radiation. In hot white dwarfs it would be because they have hot photospheres. In M-dwarfs they can be highly magnetically active into old age, having a hot chromosphere and corona that would strongly irradiate any nearby planet. You might consider that their "habitable zones" were in fact uninhabitable. Of course an atmosphere and a strong magnetic field might counteract this.



A lot of this depends on your terms of reference and definitions. The Wikipedia page on habitable zones has quite a nice discussion, which emphasizes the debatable nature of this topic.



A final thought, which is not in the wiki page. Multiplicity could mess things up. Planetary systems will not be stable in the habitable zone of either star in a binary system if their separation is comparable with the orbital radius of the habitable zone. If you had two stars like the Sun, their separations would have to be less than a few tenths of an au or greater than of order 10 au in order to allow something to orbit at about 1 au around one of them. Even then, there could be all sorts of dynamical instabilities which prevent long-lived planets, especially if there were other planets in the system too.

Tuesday, 2 June 2009

dust - V838 Monocerotis "light-echo" images morphed into nice video, but why so few original images?

The V838 Monocerotis expansion (not a supernova) and the observation of the subsequent "spectacular" light echo was quite a notable event! From Nature 422, 405-408 (27 March 2003)



Nature cover



From Astronom. J. 135, 2, 2008 or ArXiv




"Galactic light echoes are extremely rare. The only other known example of extent similar to that of V838 Mon was the echo produced by Nova GK Persei 1901 (Kapteyn 1902; Perrine 1902; Ritchey 1902). Following early misunderstandings, light-echo geometry was properly described by Couderc (1939), and more recent discussions are given by many authors, including Chevalier (1986), Felten (1991), Sparks (1994), Sugerman (2003), and references therein".




It was the sole topic of an international conference (photo from here):



Conference



From here it's noted that the Hubble observations were inserted into the observing schedule using the Director's Discretionary time, since the peer review process is too slow to accommodate observations of transient events. While there is no mention of any reason why observations were not more frequent, nor continued in 2003, one can speculate.




"Based on the highly structured appearance of the initial ground-based images, our team proposed for Director’s Discretionary (DD) time on HST for a program of direct imaging and imaging polarimetry. The team members are as follows: S. Starrfield (Arizona State University); Z. Levay, N. Panagia, W. Sparks, B. Sugerman, R. White, and myself (STScI); A. Henden (AAVSO); M. Wagner (University of Arizona); R. Corradi (Issac Newton Group); U. Munari (Padova University); L. Crause (SAAO); and M. Dopita (ANU)".



"We received HST observing time at five epochs in 2002 through DD allocations: April, May, September, October, and December. All of the observations were made with the Advanced Camera for Surveys (ACS), which had been installed in HST during SM3b in March 2002. I need not emphasize to this audience how extraordinarily unfortunate it is that no HST observations were obtained during 2003—the loss of this opportunity is truly incalculable. However, the echoes were imaged twice in 2004 through the Hubble Heritage program, in February and October. More happily, the HST Cycle 14 allocation committee did award our team observing time for an intensive HST imaging campaign from October 2005 to January 2006, and we also have two more epochs of observations scheduled in Cycle 15 for late 2006 and early 2007".




Figure 2 of the Nature paper describes the preservation of the actual light curve (history) within the structure of the light-echo shell:







"FIGURE 2. HST images of the light echoes
The apparently superluminal expansion of the echoes as light from the outburst propagates outward into surrounding dust is shown dramatically. Images were taken in 2002 on 30 April (a), 20 May (b), 2 September (c) and 28 October (d). Each frame is 83″ × 83″; north is up and east to the left. Imaging on 30 April was obtained only in the B filter, but B, V and I were used on the other three dates, allowing us to make full-colour renditions. The time evolution of the stellar outburst (Fig. 1) is reflected by structures visible in these colour images. In b, for example, note the series of rings and filamentary structures, especially in the upper right quadrant. Close examination shows that each set of rings has a sharp, blue outer edge, a dip in intensity nearer the star, and then a rebrightening to a redder plateau. Similar replicas of the outburst light curve are seen propagating outwards throughout all of the colour images."




Again from Astronom. J. 135, 2, 2008 or ArXiv







Figure 2. Images representing the degree of linear polarization, p, for each of the four epochs of data shown in Figure 1. Image scales and orientations are the same as in Figure 1. The image stretch is linear, ranging from black representing zero linear polarization to full white representing ~50% linear polarization. These images illustrate the apparent outward motion of a ring of highly polarized light in the light echo.



Abstract
Following the outburst of the unusual variable star V838 Monocerotis in 2002, a spectacular light echo appeared. A light echo provides the possibility of direct geometric distance determination, because it should contain a ring of highly linearly polarized light at a linear radius of ct, where t is the time since the outburst. We present imaging polarimetry of the V838 Mon light echo, obtained in 2002 and 2005 with the Advanced Camera for Surveys on board the Hubble Space Telescope, which confirms the presence of the highly polarized ring. Based on detailed modeling that takes into account the outburst light curve, the paraboloidal echo geometry, and the physics of dust scattering and polarization, we find a distance of 6.1 ± 0.6 kpc. The error is dominated by the systematic uncertainty in the scattering angle of maximum linear polarization, taken to be θ_max = 90° ± 5°. The polarimetric distance agrees remarkably well with a distance of 6.2 ± 1.2 kpc obtained from the entirely independent method of main-sequence fitting to a sparse star cluster associated with V838 Mon. At this distance, V838 Mon at maximum light had M_V ≃ −9.8, making it temporarily one of the most luminous stars in the Local Group. Our validation of the polarimetric method offers promise for measurement of extragalactic distances using supernova light echoes.
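The geometric idea behind the "ring at linear radius ct" statement can be sketched with small-angle trigonometry (a toy illustration only, not the paper's full paraboloid-plus-dust-physics model; the sample numbers below are made up for shape):

```python
import math

def echo_distance_ly(t_years, ring_radius_arcsec):
    """Distance at which a ring of linear radius c*t subtends the given angle.
    Light travels one light-year per year, so c*t in ly equals t in years."""
    theta_rad = math.radians(ring_radius_arcsec / 3600.0)
    return t_years / math.tan(theta_rad)

# A polarized ring of ~30 arcsec a few years after outburst would imply
# a distance of roughly 2e4 ly (~6 kpc).
print(echo_distance_ly(3.0, 30.0))
```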


Monday, 1 June 2009

How many earths fit in the observable universe?

Without checking the numbers in detail: according to Wikipedia, the volume of the observable universe is about $3.5\cdot 10^{80}\ \mathrm{m}^3$, and the volume of Earth is about $1.08321\cdot 10^{21}\ \mathrm{m}^3$.



By dividing the two volumes we get a factor of $3.2\cdot 10^{59}$ or, written out as a decimal number: the observable comoving volume of the universe is about
320,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000-times the volume of Earth.
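The division itself, for anyone who wants to reproduce it:

```python
V_UNIVERSE_M3 = 3.5e80    # observable-universe volume, the Wikipedia figure above
V_EARTH_M3 = 1.08321e21   # Earth's volume

ratio = V_UNIVERSE_M3 / V_EARTH_M3
print(f"{ratio:.1e}")  # 3.2e+59
```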

How Are Radioactive Decay Rates Influenced by Neutrinos - On Earth and Other Dense Planets

In the paper that this report is based on [1], they simply see an annual period in the $\beta$ decay rates of radioactive isotope samples in the lab. Basically, the rate is a fraction of a percent higher in winter than in summer. They conclude that, absent any simple instrumentation explanation:




we conclude that these results are consistent with the hypothesis that
nuclear decay rates may be influenced by some form of solar radiation.




Several things can change in a laboratory over a year. Obviously, temperature and humidity change, and these were tested in the experiment. But radon levels also change as the amount of outside air exchanged with inside air changes. The solar cosmic-ray flux (high-energy electrons, protons, and He nuclei generated in the chromosphere of the Sun) changes as the Sun angle changes, and so does the neutrino flux (produced in the core). These could be affecting nuclear decay rates directly, or the instrument used to measure them (subtle changes in threshold energies, false counts from ions produced in the instrument, potential shifts, etc.).