Sunday 26 August 2012

rna - Can DNA produce ribozyme-like structures?

There are no known natural DNA enzymes (deoxyribozymes), but various synthetic DNA enzymes exist. The first deoxyribozyme to be reported (Breaker and Joyce, 1994) catalyzed the Pb2+-dependent cleavage of RNA.



Various deoxyribozymes have been synthesized. They can catalyze RNA cleavage, RNA ligation and many other reactions, such as DNA phosphorylation, thymine dimer photoreversion, and even a Diels-Alder reaction (see Baum and Silverman, 2008 for a review).



DNA lacks the 2'-hydroxyl group that RNA possesses, but there is some evidence that this doesn't significantly reduce the catalytic potential of deoxyribozymes compared to ribozymes. In one experiment, DNA and RNA enzymes that catalyze a carbon-carbon bond formation were compared, and both achieved comparable catalytic rates (Chandra and Silverman, 2008).



For more information about DNA enzymes, have a look at the publications from the Silverman lab, which is probably the most active research group in this field.




Saturday 25 August 2012

human biology - How does the brain's energy consumption depend on mental activity?

I have already addressed the facts behind this question on Skeptics.SE, here and here. You should read both papers very carefully. I highlighted the most important facts, but this is a very tricky question, especially when it comes to defining what mental activity is. The papers also explain how the fMRI signal is linked to NEURONAL activity; as far as I remember, there is no strong direct link.



You assume in your question that a mathematician solving a differential equation needs higher mental activity than a child reading a book. Is this legitimate? It seems intuitive, but it is also very subjective. The papers mention that at both the highest and the lowest energy consumption we lose consciousness. I will not draw conclusions from this, but since you are asking about conscious mental activities, it may answer your question. To me it suggests that our understanding of the human brain in neurobiology is at the level of the Rutherford atomic model in physics at the beginning of the 20th century: we do not really have a clue how information is processed, or how that processing is constrained by physical laws and the principles of entropy and energy. From the two papers, it looks as though the human brain does not raise its energy consumption with workload the way a computer does (the computer analogy pretty much fails when applied to the human brain). Most of the energy is used for unconscious processes in "standby mode".



As in physics, extreme cases such as savants and the mentally disabled are probably the best starting point for ruling out possible models of the human brain and for establishing physical boundary conditions, since we cannot approach these questions in a purely reductionist way. How can savants like Kim Peek process such huge amounts of information AND store it? He is able to scan book pages just once and know them by heart thereafter, yet his brain does not consume more energy than an average human brain. So "mental activity" is probably not a well-defined term or quantity, or even really suited to scientific use. Does neuronal activity mean mental activity (in the sense of your definition)? Reading the papers, the problem is separating mental from neuronal activity. First you would have to know which basic brain functions and processes consume most of the energy. However, the brain is not built in a modular way like a computer (where most energy goes into constantly refreshing RAM), so there is no objective way to analyse and separate this modular energy consumption, if it is modular at all.



In my opinion, most models of information processing in the human brain are intuitive guessing (again, Rutherford). We need much more detailed experiments and data (e.g. the Blue Brain Project); fMRI is like analysing an atom with a magnifying glass. From a biophysical perspective, the more promising approach is probably not the level of "mental activity" but the measurable amount of information processed by the brain and the energy consumption linked to it (Kim Peek). For that, however, we need a model of how this information is stored in the brain. Do normal humans store the same information as Kim Peek when scanning a page, and are we just unable to recall it consciously? When solving a differential equation, how much energy do you spend recalling facts, and is that not similar to reading a book? How much goes into the logical task itself, and is there really a difference at all?



I will stop here. I hope you have gained some insight: the question is of course important, but it is too early to answer it definitively. I think we will learn a lot more from projects like Blue Brain than we have from fMRI experiments.

Wednesday 8 August 2012

zoology - How do insects breathe?

The main difference between tracheal gas exchange and other forms of gas exchange (except simple diffusion) is that it is generally a passive process. Organisms with lungs, gills, or other modified respiratory organs can actively pump the oxygen-containing medium (usually air or water) across their respiratory surface, and some also pump their blood across it in the opposite direction, creating a countercurrent exchange that maximizes the efficiency of oxygen/CO₂ transfer.



The tracheal system is a more efficient mode of gas exchange than diffusion because it does not rely solely on oxygen passing across the organism's body surface (a semi-permeable membrane), which limits the rate of movement. Instead, the epidermis is invaginated to form tracheae (tubes) through which air can travel passively. The tracheal system is highly branched and terminates in tracheoles (fluid-filled sacs) that closely line respiring tissues, supplying oxygen and allowing the release of CO₂.



Tracheal respiration limits the size of insects, since their oxygen demands cannot be met if the air has to diffuse very far. Some insects require more oxygen than others: many larger and more active insects have evolved to supplement the tracheal system with pumps, gas gills (permanent or temporary bubbles of air which aquatic insects carry), or mechanically ventilated air sacs (honeybees have these; see Gullan & Cranston, 2005, and Snodgrass, 1956).



Furthermore, tracheal systems can be 'open' or 'closed', or the insect can engage in discontinuous gas exchange. Insects with open tracheal systems have spiracles (holes in their cuticle). Closed tracheal systems are more common in aquatic insects and come in two major designs: highly branched systems over the internal surface, allowing cutaneous gas exchange; and filamentous or lamellate arrangements, which are analogous to a primitive gill (see Wigglesworth, 1964, or Gullan & Cranston, 2005, chapter 10 for more on aquatic insect adaptations). Discontinuous gas exchange involves the coordinated opening and closing of different spiracles, which produces a unidirectional current that allows more effective ventilation.



References



  • Gullan, P.J. & Cranston, P.S. (2010) The Insects: An Outline of Entomology. John Wiley & Sons.


  • Snodgrass, R. (1956) Anatomy of the Honey Bee. Comstock Publishing Associates, Ithaca, New York.


  • Wigglesworth, V.B. (1968) The Life of Insects. New American Library.


Tuesday 7 August 2012

genetics - Extreme examples of protein translation/use coupling/decoupling?

One example that may interest you, Rory M, is the so-called toxin-antitoxin pairs in bacteria. Interestingly enough, many bacterial species need a way to protect themselves from their own toxins. Thus, for a particular toxin, a corresponding antitoxin is synthesized (most often both are proteins, but the antitoxin may be RNA as well; three types of toxin-antitoxin system are known, see http://en.wikipedia.org/wiki/Toxin-antitoxin_system). The antitoxin remains bound to the toxin until the latter is secreted to its designated location. If the antitoxin decouples prematurely, cell death may occur. I'm not sure how long the two are "stored" within the cell; I imagine this could vary greatly between species.

Saturday 4 August 2012

molecular biology - Effect of histidine on the binding affinity of HisP

Still, if you rephrase your question as "If histidine is abundant, HisP's job is to stop the histidine pathway as a 'repressor'. If HisP binds less tightly to promoters, the pathway should not produce as much histidine,"



then it rests on another assumption: what is the effect of HisP binding the promoter of the enzyme's gene? Does it suppress transcription or amplify it?



If it suppresses transcription, the answer is "more tightly"; if it amplifies transcription, the answer is "less tightly". I suggest you state this clearly in your answer, as the question is not quite accurate.




I guess your answer might be wrong.



The question rests on another assumption: how does the protein HisP regulate histidine biosynthesis, by positive or by negative feedback regulation?



Generally, amino acid synthesis is regulated by negative feedback loops, so that cells can control the amount of amino acids they produce. In that case, the answer should be "more tightly": since HisP functions as a repressor, it should bind the promoter more tightly, repressing transcription further and thereby generating less histidine-synthesizing enzyme. (I believe this is what your teacher wants you to answer.)



In the other case, biological systems sometimes use positive feedback regulation to amplify their sensitivity to environmental noise or to generate bistability (phenotypic switching). In that case, the protein would bind the promoter less tightly in order to generate more histidine.


human biology - Where does vasoconstriction occur in limbs?

Reduced blood flow to a region of the body occurs through two principal mechanisms.



1) The smooth muscle fibers in the tunica media layer of the arteries contract and reduce the diameter of the artery, limiting blood flow through increased resistance (this is the mechanism in @agrimaldi's answer).



2) Circularly oriented smooth muscle fibers at the junction of a metarteriole (the distal end of an arteriole) and a capillary bed form a precapillary sphincter, which serves as a valve and prevents blood flow into the capillary bed.



So to answer your question: it is both. The narrowing of the arteries occurs fairly continuously along the limb, but there are also "pinch points" where metarterioles join the capillary beds.



The endocrine and nervous systems are closely integrated, so in most cases vasoconstriction results from the actions of both systems.

Thursday 2 August 2012

synthetic biology - Designing genes with DNAWorks: Maximum nonzero score?

You might want to read Gibson's paper on the step-wise assembly of the mouse mitochondrial genome (1):






He started with 60-base oligos with 20-base overlaps and assembled 5 of those 60-mers into a backbone, obtaining 384 b fragments. In the next step, he joined 5 of those 384-mers, obtaining 1.2 kb constructs. You can do the same, but in the second step use 2x 384-mers to get your ~600 bp gene. Gibson didn't use DNAWorks to chop up the sequence; he just started from base 1, so that his fragments were F1 [1:60], F2 [41:100], F3 [81:140], etc.
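The F1 [1:60], F2 [41:100], F3 [81:140] chopping scheme is just a fixed-size sliding window, which is easy to reproduce yourself. Here is a minimal Python sketch (the function name `chop_oligos` and the dummy sequence are mine, for illustration only):

```python
def chop_oligos(seq, oligo_len=60, overlap=20):
    """Split a sequence into overlapping oligos, mimicking the
    F1 [1:60], F2 [41:100], F3 [81:140], ... scheme described above."""
    step = oligo_len - overlap  # advance 40 bases per oligo
    oligos = []
    start = 0
    while start < len(seq):
        oligos.append(seq[start:start + oligo_len])
        if start + oligo_len >= len(seq):
            break
        start += step
    return oligos

# A 140-base dummy sequence yields exactly F1 [1:60], F2 [41:100], F3 [81:140]
seq = "ACGT" * 35  # 140 bases
frags = chop_oligos(seq)
```

Each fragment shares its last 20 bases with the next fragment's first 20, which is exactly the homology the assembly step relies on.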



I think that 15 b homology is pretty low (Gibson used 20 b for assembling the 60-mers in the first step, and 40 b homology for the subsequent assemblies). A 60 b oligo length is standard.



You can also try a one-step assembly with all of your oligos (via PCA) as you planned, but I think the two-step assembly might be more efficient.



Let me know if you need more help.



1. Gibson et al, 2009. Chemical synthesis of the mouse mitochondrial genome

Wednesday 1 August 2012

bioinformatics - What exactly are computers used for in DNA sequencing?

Computers are used in several steps of sequencing, from the raw data to finished sequence (or not):





Modern sequencers usually use fluorescent labelling of DNA fragments in solution. The fluorescence encodes the different base types. To achieve high throughput, millions or billions of sequencing reactions are performed in parallel in microscopic quantities on a glass chip, and for each micro-reaction, the label needs to be recorded at each step in the reaction.



This means: the sequencer takes a digital photograph of the chip containing the sequencing reagent. This photo has differently coloured pixels which need to be told apart and assigned a specific colour value.



Digital image of a sequencing chip



As you can see, this (strongly magnified) image (fragment) is very fuzzy and most of the dots overlap. This makes it hard to determine which colour to assign to which pixel.





One such image is registered for each step of the sequencing process, yielding one image for each base of the fragments. For a read length of 75 bases, that’d be 75 images.



Once you have analysed the images, you get colour spectra for each pixel across the images. The spectra for each pixel correspond to one sequence fragment (“read”) and are considered separately. So for each fragment you get such a spectrum:



Base calling spectrum



Now you need to decide which base to assign for each position (“base calling”, top row). For most positions this is fairly easy but sometimes the signal overlaps (towards the beginning in the above image) or decays significantly (near the middle). This has to be considered when deciding the base calling quality (i.e. which confidence you assign to your decision for a given base).
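As a toy illustration of this base-calling step (deliberately simplified; real base callers model channel cross-talk, phasing and signal decay, and emit Phred-scaled qualities), one could pick the brightest of the four channels at each position and derive a crude confidence from how dominant that channel is. The data below is made up:

```python
def call_bases(intensities):
    """Toy base caller: for each position, pick the channel (A, C, G, T)
    with the highest intensity and report how dominant it was."""
    calls = []
    for pos in intensities:               # pos: dict mapping base -> intensity
        base = max(pos, key=pos.get)      # brightest channel wins
        total = sum(pos.values())
        confidence = pos[base] / total if total else 0.0
        calls.append((base, confidence))
    return calls

# Two invented positions: one clean signal, one with overlapping channels
reads_signal = [
    {"A": 0.9, "C": 0.05, "G": 0.03, "T": 0.02},  # clear call
    {"A": 0.4, "C": 0.35, "G": 0.15, "T": 0.10},  # ambiguous: low confidence
]
calls = call_bases(reads_signal)
```

The ambiguous position still gets a call, but with a much lower confidence value, which is what a real quality score encodes.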



Doing this for each read yields billions of reads, each representing a short fragment of the original DNA that you sequenced.



Alas, this was the easy part. Most bioinformatics analysis starts here; that is, the machines emit files containing the short sequence fragments. Now we need to make a sequence from them.





The key point that allows retrieving the original sequence from these small fragments is the fact that these fragments are randomly distributed over the genome, and they are overlapping.



The next step depends on whether you have a similar, already sequenced genome at hand. Often, this is the case. For instance, there is a high-quality “reference sequence” of the human genome and since all the genomic sequences of all humans are ~99.9% identical (depending on how you count), you can simply look where your reads align to the reference.



Read mapping



This is done to search for single changes between the reference and your currently studied genome, for example to detect mutations that lead to diseases.



So all you have to do is to map the reads back to their original location in the reference genome (in blue) and look for differences (such as base pair differences, insertions, deletions, inversions …).



Mapped reads



Two points make this hard:



  1. You have got billions (!) of reads, and the reference genome is often several gigabytes large. Even with the fastest thinkable implementation of a string search, this would take prohibitively long.


  2. The strings don’t match precisely. First of all, there are of course differences between the genomes – otherwise, you wouldn’t sequence the data at all, you’d already have it! Most of these differences are single base pair differences – SNPs (= single nucleotide polymorphisms) – but there are also larger variations that are much harder to deal with (and they are often ignored in this step).



    Furthermore, the sequencing machines aren’t perfect. A lot of things influence the quality, first and foremost the quality of the sample preparation, and minute differences in the chemistry. All this leads to errors in the reads.


In summary, you need to find the position of billions of small strings in a larger string which is several gigabytes in size. All this data doesn’t even fit into a normal computer’s memory. And you need to account for mismatches between the reads and the genome.



Unfortunately, this still doesn’t yield the complete genome. The main reason is that some regions of the genome are highly repetitive and badly conserved, so that it’s impossible to map reads uniquely to such regions.



As a consequence, you instead end up with distinct, contiguous blocks (“contigs”) of mapped reads. Each contig is a sequence fragment, like a read, but much larger (and hopefully with fewer errors).



Assembly



Sometimes you want to sequence a new organism, so you don’t have a reference sequence to map to. Instead, you need to do a de novo assembly. An assembly can also be used to piece together the contigs from mapped reads (though different algorithms are used).



Again we use the property of the reads that they overlap. If you find two fragments which look like this:



ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
ATCCCCAAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG


You can be quite certain that they overlap like this in the genome:



ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT
                  ATCCCCATTCAACACGCTA-AGCTTGGCGGGGCATACGCACTG


(Notice again that this isn’t a perfect match.)



So now, instead of searching for all the reads in a reference sequence, you search for head-to-tail correspondences between reads in your collection of billions of reads.
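A naive version of this head-to-tail comparison can be sketched in a few lines of Python, using the two example reads from above. (Real assemblers index k-mers or suffixes rather than scanning every possible overlap length, and the mismatch tolerance here is an arbitrary choice of mine.)

```python
def best_overlap(left, right, min_len=15, max_mismatch_frac=0.1):
    """Find the longest suffix of `left` that matches a prefix of `right`,
    allowing a small fraction of mismatches; return its length (0 if none)."""
    for ov in range(min(len(left), len(right)), min_len - 1, -1):
        tail, head = left[-ov:], right[:ov]
        mismatches = sum(a != b for a, b in zip(tail, head))
        if mismatches <= max_mismatch_frac * ov:
            return ov
    return 0

r1 = "ACGTCGATCGCTAGCCGCATCAGCAAACAACACGCTACAGCCT"
r2 = "ATCCCCAAACAACACGCTACAGCCTGGCGGGGCATAGCACTGG"
ov = best_overlap(r1, r2)
merged = r1 + r2[ov:] if ov else None  # join the reads into one contig
```

With these two reads, the function finds the 25-base overlap (which contains two mismatches) and merges the reads into a single 61-base contig.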



If you compare mapping a read to searching for a needle in a haystack (an often-used analogy), then assembling reads is akin to comparing every straw in the haystack to every other straw and putting them in order of similarity.