Sunday 30 December 2007

ag.algebraic geometry - Points of a variety defined by Galois descent

The following seems to give a reasonable affirmative answer which avoids computing the coordinate ring directly, and replaces condition (2) with the more natural condition that the subset $\Sigma := X(\overline{k})$ in (1) is stable under the action of the Galois group on $\overline{k}^n$.



Let's be cleaner by working more generally over an arbitrary (not necessarily perfect) field $k$ and with geometrically reduced closed subschemes $X$ in a fixed separated $k$-scheme $Y$ locally of finite type. (Note: now affine schemes are gone; can take $Y$ to be an affine space, but this is irrelevant.) The
${\rm{Gal}}(k_s/k)$-stable set $\Sigma = X(k_s)$ in $Y(k_s)$ recovers $X$ as follows. For a $k$-algebra $A$, $X(A)$ is the ${\rm{Gal}}(k_s/k)$-invariants in $X(A_{k_s})$, so we just need to describe $X(A_{k_s})$ as a ${\rm{Gal}}(k_s/k)$-stable subset of $Y(A_{k_s})$. The description in this latter case will be in terms of $\Sigma$, and the ${\rm{Gal}}(k_s/k)$-stability of $\Sigma$ inside of $Y(k_s)$ will ensure that the description we give for $X(A_{k_s})$ is ${\rm{Gal}}(k_s/k)$-stable inside of $Y(A_{k_s})$. That being noted, we rename $k_s$ as $k$ so that $k$ is separably closed and $\Sigma$ is simply a set of $k$-rational points of $Y$ (so the notation is now marginally cleaner).



First assume $A$ is geometrically reduced in the sense that $A_K$ is reduced for any extension field $K/k$. Since $X(A)$ is the direct limit (inside $Y(A)$) of the $X(A_i)$ as $A_i$ varies through $k$-subalgebras of finite type in $A$ (all of which are geometrically reduced), we may assume $A$ is finitely generated over $k$.
Then the $k$-points are Zariski-dense (as $k = k_s$) and so the condition on $y \in Y(A)$ that it lies in $X(A)$ is that $y(\xi) \in \Sigma$ for all $k$-points $\xi$ of $A$. That describes $X(A)$ for any (possibly not finitely generated) $k$-algebra $A$ that is geometrically reduced. In general, to check if $y \in Y(A)$ lies in $X(A)$ amounts to the same for each local ring of $A$, so we can assume $A$ is local. Then the condition for $y$ to be in $X(A)$ is exactly that there is a local map of local $k$-algebras $B \rightarrow A$ with $B$ geometrically reduced such that $y$ is in the image of $X(B)$ under the induced map $Y(B) \rightarrow Y(A)$. I don't claim this formulation is the best way to think about it, but it "works".



Of course, one can apply this process to any ${\rm{Gal}}(k_s/k)$-stable subset $\Sigma$ of $Y(k_s)$ provided that we first replace $\Sigma$ with the set of $k_s$-points of its Zariski closure in $Y_{k_s}$. Then we just obtain the Galois descent $X$ of the Zariski closure in $Y_{k_s}$ of $\Sigma$. In general $X(k_s)$ may be larger than $\Sigma$, but nonetheless $\Sigma$ is Zariski-dense in $X_{k_s}$. This is perfectly interesting in practice, regardless of whether or not $\Sigma$ is equal to $X(k_s)$, since it is what underlies the construction of derived groups, commutator subgroups, images, orbits, and related things in the theory of linear algebraic groups over a general field. For example, the $k$-group ${\rm{PGL}}_n$ is its own derived group in the sense of algebraic groups, but the commutator subgroup of ${\rm{PGL}}_n(k_s)$ is a proper subgroup whenever $k$ is imperfect and ${\rm{char}}(k) \mid n$.



To give a nifty application, suppose one begins with an arbitrary closed subscheme $X'$ in $Y$ (such as $X' = Y$!), then forms the ${\rm{Gal}}(k_s/k)$-stable set $X'(k_s)$ (which could well be empty, or somehow really tiny), and then applies the above procedure to get a geometrically reduced closed subscheme $X$ in $X'$. What is it? It is the maximal geometrically reduced closed subscheme of $X'$, and one can check its formation is compatible with products (as well as separable extensions $K/k$, such as completions $k_v/k$ for a global field $k$). If $k$ is perfect then $X = X'_{\rm{red}}$, so this is more interesting when $k$ is imperfect. It is especially interesting in the special case when $X'$ is equipped with a structure of $k$-group scheme. Then $X$ is its maximal smooth closed $k$-subgroup, since geometrically reduced $k$-groups locally of finite type are smooth. So what? If one is faced with the task of studying the Tate-Shafarevich set for such an $X'$ (e.g., maybe $X'$ is a nasty automorphism scheme of something nice) then all that really intervenes is $X$ since it captures all of the local points, so for some purposes we can replace the possibly bad $X'$ with the smooth $X$. (This trick is used in the proof of finiteness of Tate-Shafarevich sets for arbitrary affine groups of finite type over global function fields.) But beware: if the $k$-group $X'$ is connected (and $k$ is imperfect) then $X$ may be disconnected and have much smaller dimension; see Remark C.4.2 in the book "Pseudo-reductive groups" for an example.

pr.probability - Football Squares

Dear Colleagues,



This is a math question for people who know the rules of (American) football.



Every year my barber runs a “football squares” game. He finds 100 customers, each of whom puts in 20 dollars, and each person is assigned a square on a 10 by 10 grid. After all squares are sold he picks numbers out of a hat to label each row and column 0 through 9. So each contestant winds up assigned an ordered pair of numbers – in my case (3,7) this year. Now prizes are awarded by inspecting the last digit of the score of each team in the Super Bowl at the end of the first, second, and third quarters, and at the final score. Each winner gets $500.



For example, suppose that the scores are AFC 14, NFC 10 at the end of the first quarter. Then the person with (4,0) wins $500.



Now this is a fair bet, in the sense that each square is assigned randomly (and I believe that my barber doesn’t cheat). However, it would seem that some numbers are “better” than others. Football scores are not random. For instance, at the end of the first quarter it is extremely unlikely that (5,5) will win. On the other hand, (0,7) would seem like a good number. What is needed is some probabilistic analysis based upon the actual scoring patterns in football, together with the actual scores of many pro football games. I am unable to find any analysis of this on the internet. I have tagged this as a probability question, but since I work in operator algebras for a living this may be mistagged, and I ask you to pardon my error.



Let’s make it precise. Let $f: \{0,1,2,\ldots,9\} \times \{0,1,2,\ldots,9\} \times \{1,2,3,\mathrm{final}\} \to [0,1]$ be the function that assigns to a point $(x,y,z)$ the probability that the score at the end of the $z$th quarter of the Super Bowl will be equal to $(x,y)$ mod 10. Find the function $f$. Where does $f$ achieve its max?
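
For what it's worth, here is a minimal sketch (my own, not part of the original question) of how one might estimate $f$ empirically; the file name and column names are made up for illustration, and you would need a table of historical quarter-by-quarter scores to feed it.

    # Sketch: estimate f(x, y, z) from historical quarter-by-quarter scores.
    # Assumes a hypothetical CSV "scores.csv" with columns:
    #   quarter ("1", "2", "3", or "final"), team1_score, team2_score
    import csv
    from collections import Counter

    counts = {q: Counter() for q in ("1", "2", "3", "final")}
    totals = Counter()

    with open("scores.csv") as fh:
        for row in csv.DictReader(fh):
            q = row["quarter"]
            key = (int(row["team1_score"]) % 10, int(row["team2_score"]) % 10)
            counts[q][key] += 1
            totals[q] += 1

    def f(x, y, z):
        """Empirical probability that the score mod 10 is (x, y) at stage z."""
        return counts[z][(x, y)] / totals[z] if totals[z] else 0.0

    # Example: which square looks best at the end of the first quarter?
    print(max((f(x, y, "1"), (x, y)) for x in range(10) for y in range(10)))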



How about it, colleagues? Inquiring minds want to know!



CS

Friday 28 December 2007

nt.number theory - Can an infinite sequence of integers generate integer-area triangles?

I'll throw out a dumb idea: can anyone find a rational point on



$$y^2 = - (x^2-x+1)(x^2+x+1)(x^2-x-1)(x^2+x-1)?$$



UPDATE: The above formula used to have a sign error, which I have just fixed, and Bjorn's response was to the version with the sign error. Thanks to Kevin Buzzard for pointing this out to me.



Because, if so, $a_n=x^n$ gives triangles with rational area. Of course, this still wouldn't give an integer solution, but it would rule out a number of easy arguments against one existing.



I did a brute force search of values of $x$ with numerator and denominator under 5000 and didn't find any, but I don't think that is large enough to even count as evidence against one existing.
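
For anyone who wants to repeat or extend that search, here is a rough sketch of the kind of brute-force computation described above (my own illustration, not the original author's code): it runs over $x = p/q$ in lowest terms with a small height bound and checks whether the right-hand side is the square of a rational.

    # Sketch: brute-force search for rational points on
    #   y^2 = -(x^2 - x + 1)(x^2 + x + 1)(x^2 - x - 1)(x^2 + x - 1)
    from fractions import Fraction
    from math import gcd, isqrt

    def rhs(x):
        return -(x*x - x + 1) * (x*x + x + 1) * (x*x - x - 1) * (x*x + x - 1)

    def is_square(n):
        return n >= 0 and isqrt(n) ** 2 == n

    BOUND = 200  # the post above searched up to 5000
    for q in range(1, BOUND + 1):
        for p in range(-BOUND, BOUND + 1):
            if gcd(abs(p), q) != 1:
                continue
            v = rhs(Fraction(p, q))
            # v = y^2 requires numerator and denominator to both be perfect squares
            if v >= 0 and is_square(v.numerator) and is_square(v.denominator):
                print("found:", Fraction(p, q), v)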

Tuesday 25 December 2007

cv.complex variables - Are there compact analogues of Cartan's theorems A and B?

Dear Colin, for $X$ a connected complex manifold, denote by $\mathcal M(X)$ its field of meromorphic functions.



A) It is not true that a germ of a holomorphic function $f_x \in \mathcal O_{X,x}$ is induced by a global meromorphic function: many compact complex manifolds have only the constants $\mathbb C$ as meromorphic functions:
$\mathcal M(X) = \mathbb C$. There is an example with $X$ a surface in Shafarevich's Basic Algebraic Geometry, volume 2, page 164.



B) The best analogue of Theorem B is probably the Cartan-Serre result that for any coherent sheaf $\mathcal F$ on the compact manifold $X$, the cohomology vector spaces $H^q(X,\mathcal F)$, $q \geq 1$, are finite-dimensional over $\mathbb C$.



(Original article: Cartan-Serre, C.R.Acad.Sci. Paris 237 (1953), 128-130)

lab techniques - What is the difference between HPLC and FPLC and why is FPLC preferable for protein purification?

The only difference between FPLC and HPLC is the amount of pressure the pumps apply to the column. FPLC columns have a maximum pressure of about 3-4 MPa, whereas HPLC columns can withstand or require much higher pressures. As a general rule, HPLC columns won't work with old FPLC equipment; FPLC columns can go on HPLCs as long as the pressure can be regulated.



Manufacturers have been marketing separate equipment to handle these different classes of columns, but the trend seems to be heading towards machines that can handle both types of columns without issue. A GE rep told me a few years ago that they've improved the pumps on the AKTAs to the point that "they're technically HPLCs now." The term "FPLC" is probably on its way out.

at.algebraic topology - A possible generalization of the homotopy groups.

I was told by Brian Griffiths that Fox was hoping to obtain a generalisation of the van Kampen theorem and so to continue the work of J.H.C. Whitehead on adding relations to homotopy groups (see his 1941 paper with that title).



However if one frees oneself from the base point fixation one might be led to consider Loday's cat$^n$-group of a based $(n+1)$-ad, $X_* = (X; X_1, \ldots, X_n)$; let $\Phi X_*$ be the space of maps $I^n \to X$ which take the faces of the $n$-cube $I^n$ in direction $i$ into $X_i$ and the vertices to the base point. Then $\Phi$ has compositions $+_i$ in direction $i$ which form a lax $n$-fold groupoid. However the group $\Pi X_* = \pi_1(\Phi, x)$, where $x$ is the constant map at the base point $x$, inherits these compositions to become a cat$^n$-group, i.e. a strict $n$-fold groupoid internal to the category of groups (the proof is non-trivial).



There is a Higher Homotopy van Kampen Theorem for this functor $Pi$ which enables some new nonabelian calculations in homotopy theory (see our paper in Topology 26 (1987) 311-334).



So a key step is to move from spaces with base point to certain structured spaces.



Comment Feb 16, 2013: The workers in algebraic topology near the beginning of the 20th century were looking for higher dimensional versions of the fundamental group, since they knew that the nonabelian fundamental group was useful in problems of analysis and geometry. In 1932, Cech submitted a paper on Higher Homotopy Groups to the ICM at Zurich, but Alexandroff and Hopf quickly proved the groups were abelian for $n >1$ and on these grounds persuaded Cech to withdraw his paper, so that only a small paragraph appeared in the Proceedings. It is reported that Hurewicz attended that conference. In due course, the idea of higher versions of the fundamental group came to be seen as a mirage.



One explanation of the abelian nature of the higher homotopy groups is that group objects in the category of groups are abelian groups, as a result of the interchange law, also called the Eckmann-Hilton argument. However group objects in the category of groupoids are equivalent to crossed modules, and so are in some sense "more nonabelian" than groups. Crossed modules were first defined by J.H.C. Whitehead, 1946, in relation to second relative homotopy groups. This leads to the possibility, now realised, of "higher homotopy groupoids", Higher Homotopy Seifert-van Kampen Theorems, and the notions of higher dimensional group theory.
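
To spell out the interchange argument (a standard computation, included here for completeness): if $\cdot$ and $\circ$ are two unital operations on the same set, with a common unit $1$, satisfying the interchange law $(a \circ b)\cdot(c \circ d) = (a\cdot c)\circ(b\cdot d)$, then
$$a\cdot b = (a\circ 1)\cdot(1\circ b) = (a\cdot 1)\circ(1\cdot b) = a\circ b = (1\cdot a)\circ(b\cdot 1) = (1\circ b)\cdot(a\circ 1) = b\cdot a,$$
so the two operations coincide and are commutative.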



See this presentation for more background.

Monday 24 December 2007

ag.algebraic geometry - Transition Functions and Complex Structure

For a real manifold $M$ the transition functions of the tangent bundle $T(M)$ come from the Jacobian of the change-of-coordinate maps.



When $M$ is complex, it has a complex tangent bundle $T_{\mathbb{C}}M$, which can be identified with the holomorphic vector bundle $T^{(1,0)} \subset TM \otimes \mathbb{C}$. The transition functions on $T_{\mathbb{C}}M$ are given by the (complex) Jacobian of the change-of-coordinate maps, so the same is true for $T^{(1,0)}$.



Since $T^{(0,1)}$ is the complex conjugate bundle, its transition functions are the complex conjugate of the same Jacobian.
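
Concretely (just spelling out the statements above): if $w = w(z)$ is a holomorphic change of coordinates between two charts, then the transition functions are
$$g_{\alpha\beta} = \left(\frac{\partial w^i}{\partial z^j}\right) \text{ for } T^{(1,0)}, \qquad \overline{g_{\alpha\beta}} = \left(\overline{\frac{\partial w^i}{\partial z^j}}\right) \text{ for } T^{(0,1)}.$$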

human biology - Possible? When a pregnant woman suffers organ damage, would the fetus send stem cells to the damaged organ to help repair it?

I am quite sure that there is a blood-placental barrier between the mother and the baby, so that nothing (except a type of antibody) can pass through it.



But I remember reading somewhere that when a pregnant woman suffers organ damage, the fetus would send stem cells to the damaged organ to help repair it.



Anything to support that?

Sunday 23 December 2007

biochemistry - What is a coupled reaction and why do cells couple reactions?

A coupled biochemical reaction is one where the free energy of a thermodynamically favourable reaction (such as the hydrolysis of ATP) is used to 'drive' a thermodynamically unfavourable one, by coupling or 'mechanistically joining' the two reactions.



To put it another way, two (or more) reactions may be combined by an enzyme (for example) such that a spontaneous reaction may be made to 'drive' a non-spontaneous one. Such reactions may be considered coupled (see, for example, Silbey & Alberty (2001), quoted below).



An example is the reaction catalyzed by glyceraldehyde-3-phosphate dehydrogenase (EC 1.2.1.12; GAPDH) [see here] .



 Glyceraldehyde-3-phosphate + NAD+ + Pi → 1,3-diPhosphoGlycerate + NADH + H+


We can think of this reaction in terms of two separate reactions which are coupled mechanistically by the enzyme: (i) the NAD+-linked oxidation of an aldehyde to a carboxylic acid (the aldehyde dehydrogenase reaction), and (ii) the phosphorylation of a carboxylic acid. (Like ATP, a phosphorylated carboxylic acid may be considered a 'high energy' compound, that is, one where the equilibrium of reaction 2 below lies very much to the left, strongly favouring hydrolysis.)



Reaction 1



 RCHO + NAD+ + H2O → RCOOH + NADH + H+


Reaction 2 (Pi is inorganic phosphate).



 RCOOH + Pi → RC(=O)(O-Pi) + H2O 


The NAD+-linked oxidation of an aldehyde (reaction 1) is practically irreversible. That is, at equilibrium it has proceeded almost totally to the right. (As far as I am aware,
this reaction has never been convincingly reversed in vitro, although I know people who have tried!) As stated above, the position of equilibrium of reaction 2 lies very much to the left.



How can one 'drive' the formation of a phosphorylated carboxylic acid by coupling it to the (spontaneous) NAD+-linked oxidation of an aldehyde?



A simplified version of the GAPDH reaction is as follows (a more complete mechanism, supported by a lot of experimental evidence, may be found in Fersht (1999), which I quote below).



Step 1. Formation of an enzyme-linked thiohemiacetal.



 E-SH + RCHO → E-S-C(R)(H)(OH) 


A sulphydryl on the enzyme (part of a Cys residue) reacts with the aldehyde group on the substrate to give a thiohemiacetal. (In the representation above, groups in brackets are all connected to a single (tetrahedral) carbon atom).



Step 2. The thiohemiacetal is oxidized by enzyme-bound NAD+ to a thiol-ester (the key step).



 E-S-C(R)(H)(OH) + NAD+ → E-S-C(=O)(R) + NADH + H+


This (enzyme-bound) thiol-ester is a 'high energy' intermediate wherein, it may be envisaged, the free energy of aldehyde oxidation has been 'trapped'.



The final step of the GAPDH reaction is now spontaneous (proceeds to the right).



Step 3. Attack on the thiol-ester by inorganic phosphate

(Pi is inorganic phosphate)



E-S-C(=O)(R) + Pi → E-SH + R-C(=O)(O-Pi)


Thus, the free energy of NAD+-linked aldehyde oxidation has been 'sequestered' and used to 'drive' the thermodynamically unfavourable phosphorylation of a carboxylic acid, by coupling the two reactions via a ('high energy') thiol-ester.



The 'thermodynamic price' is that the GAPDH reaction (unlike NAD+-linked aldehyde oxidation) is freely reversible.



As stated above, this is a simplified version of the GAPDH reaction. The (tetrameric) enzyme contains a tightly bound NAD+ for a start, and this needs to be taken account of. A fuller account may be found in the following reference:



  • Fersht, Alan. (1999) Structure and Mechanism in Protein Science, pp 469 - 471, W.H. Freeman & Co.

For a fuller treatment of coupled biochemical reactions, see



  • Silbey, R.J. & Alberty, R.A. (2001) Physical Chemistry (3rd Edn) pp 281 - 283.
    (I have relied heavily on this text for the first part of this answer).

Pyruvate kinase (EC 2.7.1.40) [see here] is another great example of a coupled biochemical reaction. In this case the reaction is almost irreversible in the direction of ATP synthesis!



The standard transformed free energy (ΔG°′) for the hydrolysis of phosphoenol-pyruvate (PEP) to pyruvate and phosphate is ~ −62 kJ/mol. This represents an equilibrium constant of about 10^10 in favour of hydrolysis! (see Walsh, quoted below, pp 229-230).
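
As a rough check on that figure (my own back-of-the-envelope arithmetic, using the standard relation between the standard free energy change and the equilibrium constant at 25 °C):
$$K' = e^{-\Delta G^{\circ\prime}/RT} = e^{62{,}000/(8.314 \times 298)} \approx e^{25} \approx 7 \times 10^{10},$$
which is indeed of the order of 10^10 in favour of hydrolysis.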



For comparison, ΔG°′ for ATP + H2O → ADP + Pi is about −40 kJ/mol.



Thus the pyruvate kinase reaction may be viewed as a coupled biochemical reaction where the free energy of PEP hydrolysis is coupled to (almost irreversible) ATP synthesis.



Why does PEP have such a large negative ΔG°′? The enol form of pyruvate does not exist in appreciable quantities in aqueous solution at pH 7 (Pocker et al., 1969; Damitio et al., 1992). PEP may be considered a 'trapped' form of a thermodynamically unstable enol which is released upon hydrolysis, thus 'pulling' the equilibrium to the right. (See Walsh, quoted below, p 230, for a more thorough explanation.)



Personally, I have always considered the reaction catalyzed by PK to be pretty amazing.



References



  • Damitio, J., Smith, G., Meany, J. E., Pocker, Y. (1992). A comparative study of the enolization of pyruvate and the reversible dehydration of pyruvate hydrate.
    J. Am. Chem. Soc., 114, 3081–3087.


  • Pocker, Y., Meany, J. E., Nist, B. J., & Zadorojny, C. (1969) The Reversible Hydration of Pyruvic Acid. I. Equilibrium Studies. J. Phys. Chem.
    76, 2879 – 2882.


  • Walsh, C. (1979) Enzymatic Reaction Mechanisms. W.H. Freeman & Co.



Oxidative Phosphorylation

The above examples are instances of substrate-level coupling. But perhaps the most important coupled reaction is that which occurs in oxidative phosphorylation, i.e. that which occurs between the oxidation of fuels via the respiratory redox chain and the synthesis of ATP via the ATP synthetase complex. In short, that which occurs in respiration.



I have 'steered clear' up to this point, as it is a very complex area and difficult to do justice to in a few lines. However, in an attempt at completeness, I'll have a go.



In the chemiosmotic theory of oxidative phosphorylation (due primarily to Peter Mitchell) electron transport via the respiratory chain to molecular oxygen creates a proton gradient across the inner mitochondrial membrane (which is normally impermeable to protons). Protons are pumped outwards. This proton gradient, or protonmotive force, may be used to 'drive' the following (thermodynamically unfavourable) reaction to the right, via the ATP synthetase complex.



ADP + Pi →  ATP


This is commonly referred to as 'ATP synthesis', but what is meant is that the above reaction is maintained far from equilibrium to the right. (And this reaction, being far from equilibrium, may be used, by coupling, to 'drive' thermodynamically unfavourable processes).
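
To put a rough number on the driving force (a standard textbook relation, not part of the original answer): the free energy made available per mole of protons flowing back across the membrane is
$$\Delta G = -F\,\Delta p, \qquad \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH},$$
so a protonmotive force $\Delta p$ of roughly 200 mV corresponds to about 19 kJ per mole of protons, and several protons must cross per ATP made.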



That is, electron transport to oxygen is coupled to 'ATP synthesis' via a proton gradient. Among other characteristics, the coupled reactions in this case are spatially separated. (The ATP synthetase is an example of rotary catalysis, see here).



Furthermore, the two reactions may be uncoupled.



A great example of an uncoupler is 2,4-dinitrophenol (DNP). This is a weakly acidic lipophilic compound where both the un-ionized (dinitrophenol) and ionized (dinitrophenolate) forms are membrane-permeable.



It may be envisaged that DNP shuttles protons across the membrane (thereby dissipating the proton gradient) by diffusing back and forth (through the membrane), picking up protons on the outside and depositing them on the inside.



Now electrons originating from foodstuffs are passed via the respiratory chain to oxygen but no ATP is synthesized: the reactions are uncoupled, and the free energy is dissipated as heat. In fact, DNP was at one time used as a slimming agent (see here), but (among other side effects) it also causes blindness.



See Abeles, Frey & Jencks (pp 620-621, quoted below), for a more thorough treatment of the mechanism of DNP.



In the analogy of a car used by nico, this uncoupled reaction is equivalent to the case where the driver has his/her foot full on the accelerator (thus burning fuel at almost the maximum rate) but where the clutch is disengaged (thus no movement or 'work' is being done). To perhaps push the analogy a bit too far, a car moving along at, say, 50 km/h is an example of a coupled energy transformation: the burning of fuel is coupled to movement.



Historically, of course, it was at one time thought that all ATP synthesis occurred via substrate-level coupling. Peter Mitchell changed all that.



These topics are treated in detail in almost every textbook of biochemistry. For example, Chapter 22 (The Electron Transport Pathway and Oxidative Phosphorylation) of Abeles, Frey and Jencks (quoted below).



The texts of four Nobel lectures, by Fritz Lipmann, Peter Mitchell, Paul Boyer and John Walker, for which the pdf files are available to all, are also excellent sources (also quoted in full below).



References



  • Abeles, R.H., Frey, P.A. & Jencks, W.P. (1992) Biochemistry. Jones & Barlett, Publishers.


  • Boyer, P. D. (1997) [Nobel Lecture] Energy, Life, and ATP (pdf available here)


  • Lipmann, F. (1953) [Nobel Lecture] Development of the Acetylation Problem: A Personal Account (pdf available here)


  • Mitchell, P. (1978) [Nobel Lecture] David Keilin's Respiratory Chain Concept and Its Chemiosmotic Consequences (pdf available here)


  • Walker, J. E. (1997) [Nobel Lecture] ATP Synthesis by Rotary Catalysis (pdf available here)


These two classics also deserve to be quoted.



  • Lipmann, F. (1941) Metabolic Generation and Utilization of Phosphate Bond Energy Advances in Enzymology, Vol 1, pp 99 - 162. [Introduces the concept of the 'high energy bond' (much derided at the time), and elucidates the central role of ATP in biological energy transformations. Without doubt, a classic reference].


  • Hinkle, P., & McCarty, R. E. (1978) How cells make ATP. Scientific American, Vol 238, pp 104-117 & 121-123.


combinatorial geometry - Drawing 3-configurations of points and lines with straight lines

It is well-known that the black-and-white coloring of the Heawood graph on 14 vertices determines a combinatorial 3-configuration with 7 "points" and 7 "lines", known as the Fano plane. Similarly, any cubic bipartite graph of girth at least 6 with a given black-and-white coloring can be regarded as the Levi graph or incidence graph of a 3-configuration. The Fano plane can be drawn in the Euclidean plane with 6 straight lines and one curved line but not with all lines straight. There are many combinatorial 3-configurations, such as the Pappus or Desargues configurations, that can be realized as geometric configurations of points and lines in the Euclidean plane. Call such a configuration realizable. It is easy to see that if a combinatorial configuration is realizable then its dual configuration is realizable. (The combinatorial dual is obtained by interchanging black and white colors in the coloring of its Levi graph.) This means that the property of realizability is, in fact, a property of bipartite graphs, and that the Heawood graph is not realizable.



I would like to know what is known about the status of the following complexity decision problem.




Input: Cubic connected bipartite graph G of girth at least 6.



Question: Is G realizable?




I am aware of the recent book "Configurations of Points and Lines" by Branko Grunbaum, of the book "Computational Oriented Matroids" by Juergen Bokowski, and of the book "Computational Synthetic Geometry" by Bokowski and Sturmfels. I am not sure if any of them gives the final answer to this problem.

set theory - Decidability of the Axiom of Choice

There are, of course, exactly two ways for a theory to decide AC. Either it proves that AC is true, or it proves that AC is false.



Consequently, if you have a theory extending ZF and deciding AC, then either your theory includes ZFC or it includes ZF+¬AC. Thus, there are two minimal possibilities which meet your requirement, and any theory extending ZF and deciding AC must extend one of them.



The results of Godel on the constructible universe show that if ZF is consistent, then so is ZF+AC. And the results of Cohen on the forcing method show that if ZF is consistent, then so is ZF+¬AC. So both of these minimal theories are consistent, if ZF itself is consistent.



There are, of course, a huge variety of further extensions of ZFC that are intensely studied in set theory, and you can learn about them in any introductory graduate level set theory text. For example, one will want to know whether V=L, or whether CH holds, or GCH, or Diamond, whether there are Suslin trees or not, or large cardinals, and so on. Similarly, there are also a large variety of further extensions of ZF+¬AC that are studied, some quite intensely. For example, one might want to have the countable AC, or DC, or the Axiom of Determinacy and so on.

Saturday 22 December 2007

ag.algebraic geometry - Commutative rings to algebraic spaces in one jump?

Typically, in the functor of points approach, one constructs the category of algebraic spaces by first constructing the category of locally representable sheaves for the global Zariski topology (Schemes) on $CRing^{op}$. That is, one takes the full subcategory of $Psh(CRing^{op})$ which consists of objects $S$ such that $S$ is a sheaf in the global Zariski topology and $S$ has a cover by representables in the induced topology on $Psh(CRing^{op})$. This is the category of schemes. Then, one takes this category, equips it with the etale topology, and repeats the construction of locally representable sheaves on this site (Sch with the etale topology) to get the category of algebraic spaces.



Can we "skip" the category of schemes entirely by putting a different topology on $CRing^{op}$?



My intuition is that since every scheme can be covered by affines, and every algebraic space can be covered by schemes, we can cut out the middle-man and just define algebraic spaces as locally representable sheaves for the global etale topology on $CRing^{op}$. If this ends up being the case, is there any sort of interesting further generalization before stacks, perhaps taking locally representable sheaves in a flat Zariski-friendly topology like fppf or fpqc?



Some motivation: In algebraic geometry, all of our data comes from commutative rings in a functorial way (intentionally vague). All of the grothendieck topologies with nice notions of descent used in Algebraic geometry can be expressed in terms of commutative rings, e.g., the algebraic and geometric forms of Zariski's Main theorem are equivalent, we can describe etale morphisms in terms of etale ring maps, et cetera. What I'm trying to see is whether or not we can really express all of algebraic geometry as "left-handed commutative algebra + sheaves (including higher sheaves like stacks)". The functor of points approach for schemes validates this intuition in the simplest case, but does it actually generalize further?



The main question is italicized, but feel free to tell me if I've incorrectly characterized something in the motivation or the background.

Friday 21 December 2007

biochemistry - Why is uracil used in RNA rather than thymine?

Thymine has a greater resistance to photochemical mutation, making the genetic message more stable. This offers a rough explanation of why thymine is more protected than uracil.



However, the real question is: Why does thymine replace uracil in DNA? The important thing to notice is that while uracil exists as both uridine (U) and deoxy-uridine (dU), thymine only exists as deoxy-thymidine (dT). So the question becomes: Why do cells go to the trouble of methylating uracil to thymine before it can be used in DNA? and the easy answer is: methylation protects the DNA.



Besides using dT instead of dU, most organisms also use various enzymes to modify DNA after it has been synthesized. Two such enzymes, dam and dcm, methylate adenines and cytosines, respectively, along the entire DNA strand. This methylation makes the DNA unrecognizable to many nucleases (enzymes which break down DNA and RNA), so that it cannot be easily attacked by invaders, like viruses or certain bacteria. Obviously, methylating the nucleotides before they are incorporated ensures that the entire strand of DNA is protected.



Thymine also protects the DNA in another way. If you look at the components of nucleic acids, phosphates, sugars, and bases, you see that they are all very hydrophilic (water soluble). Obviously, adding a hydrophobic (water insoluble) methyl group to part of the DNA is going to change the characteristics of the molecule. The major effect is that the methyl group will be repelled by the rest of the DNA, moving it to a fixed position in the major groove of the helix. This solves an important problem with uracil - though it prefers adenine, uracil can base-pair with almost any other base, including itself, depending on how it situates itself in the helix. By tacking it down to a single conformation, the methyl group restricts uracil (thymine) to pairing only with adenine. This greatly improves the efficiency of DNA replication, by reducing the rate of mismatches, and thus mutations.

Thursday 20 December 2007

set theory - Set comprehension when the condition is false

The Cartesian product of two empty sets is the singleton set $\{()\}$ containing the empty tuple. So, given a set $A$ which is empty, $A \times A$ is defined as: $$ A \times A = \{ (a,a) \mid a \in A \} = \{ () \} $$ Now, does that mean that $()$ satisfies the condition $a \in A$? And if so, why don't we include the empty tuple in the Cartesian product of non-empty sets?



(It would be nice if you could point out which concept I misunderstand: the set comprehension, or the tuple.)



Thanks in advance.



[edit: I should add the following link: Wikipedia: Empty_product#Nullary_Cartesian_product]

co.combinatorics - Is there a poset with 0 with countable automorphism group?

It seems unlikely (once you assume d.c.c.). Define the height of an element $x$ in $P$ to be the length of the shortest unrefinable chain from $x$ to $0$.



Let $P_n$ denote the elements of $P$ whose height is at most $n$. Since each element has a finite number of covers, the number of elements in $P_n$ is finite.



By d.c.c., every element of $P$ is in some $P_n$.



Let $G$ denote the automorphisms of $P$ and let $G_n$ denote the automorphisms of $P_n$. $G$ is the inverse limit of the system $G_n$. Let $H_n$ denote the image of $G$ inside $G_n$. (Note that this might not be all of $G_n$, since there could be automorphisms of $P_n$ that don't extend to $P$.) $G$ is also the inverse limit of the system $H_n$.



If the system $H_n$ stabilizes, then $G$ is finite. On the other hand, if $H_n$ doesn't stabilize, then the cardinality of $G$ is an infinite product, i.e. uncountable.

Wednesday 19 December 2007

rt.representation theory - Compact generation for modular representations

You might want to try tilting modules. Those at least provide a generating set with trivial self-extensions.



If I recall correctly, each tilting is left and right orthogonal to all but finitely many simples, and vice-versa, and every finite-dimensional module has a finite-length tilting resolution.



That is, the derived category of finite-dimensional modular representations is derived equivalent to bounded, finite rank perfect complexes over the endomorphism ring of the tiltings.

Sunday 16 December 2007

nt.number theory - Permutations with identical objects

Note: I tried asking this on math.stackexchange (here), but didn't really receive an answer - so I figured this might be the right place.




How can I find the number of k-permutations of n objects, where there are x types of objects, and r1, r2, r3 ... rx give the number of each type of object?



Example:




I have 20 letters from the alphabet. There are some duplicates - 4 of them are a, 5 of them are b, 8 of them are c, and 3 are d. How many unique 15-letter permutations can I make?




In the example:



n = 20
k = 15
x = 4
r1 = 4, r2 = 5, r3 = 8, r4 = 3



Furthermore, if there isn't a straightforward solution: how efficiently can this problem be solved?
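
For concreteness, here is one standard way such counts can be computed (my own sketch, not part of the original question): the number of $k$-permutations of a multiset with multiplicities $r_1, \ldots, r_x$ is $k!$ times the coefficient of $t^k$ in $\prod_i \left(1 + t + \tfrac{t^2}{2!} + \cdots + \tfrac{t^{r_i}}{r_i!}\right)$, which can be evaluated exactly by polynomial multiplication.

    # Sketch: count k-permutations of a multiset via exponential generating
    # functions: answer = k! * [t^k] prod_i (sum_{j=0..r_i} t^j / j!).
    from fractions import Fraction
    from math import factorial

    def multiset_k_permutations(k, multiplicities):
        poly = [Fraction(1)]  # poly[j] = coefficient of t^j, kept exact
        for r in multiplicities:
            factor = [Fraction(1, factorial(j)) for j in range(r + 1)]
            new = [Fraction(0)] * min(len(poly) + r, k + 1)
            for i, a in enumerate(poly):
                for j, b in enumerate(factor):
                    if i + j <= k:
                        new[i + j] += a * b
            poly = new
        return int(poly[k] * factorial(k)) if k < len(poly) else 0

    # The example above: 20 letters with multiplicities 4, 5, 8, 3;
    # how many distinct 15-letter arrangements?
    print(multiset_k_permutations(15, [4, 5, 8, 3]))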

lo.logic - How bad can the recursive properties of finitely presented groups be?

Your question is very interesting. I don't have a complete
answer.



First, let me note that in a finitely presented group, the
Cayley graph itself may not be a decidable graph, since to
know whether a node in the graph, which is a word in
the presentation, is trivial amounts exactly to the
word problem for that presentation. And since there are
finitely presented groups having an undecidable word
problem, there are finitely presented groups having an
undecidable Cayley graph.



This suggests an interesting sub-case of your problem, the
case when the Cayley graph is decidable. And in this case,
one can at least find a non-intersecting infinite path that
is low.



Let me explain. For any group presentation, one may form
the tree T of all finite non-intersecting paths. This is
the tree of attempts to build an infinite non-intersecting
path. Your question is equivalent to asking, when the group
is infinite, whether or not this tree has a computable
infinite branch. If the Cayley graph is decidable, then
this tree will be decidable. Thus, by the Low Basis
Theorem, it follows that there is a branch b through the tree which
is low, meaning that the halting problem relative to b is
Turing equivalent to the ordinary halting problem. In
particular, this branch b is strictly below the halting
problem in the Turing degrees. This shows that any
computably-presented group with a decidable Cayley graph
admits a low non-intersecting path. Such a path is close to
being computable, but perhaps not quite computable. (Even in
this very special case when the Cayley graph is decidable,
I'm not sure whether there must be a computable
non-intersecting path.)



In the general case, the argument shows that for any group presentation of an infinite group, we can find an infinite non-intersecting path s, which has low degree relative to the Turing degree of the Cayley graph (low in the technical sense). I suspect that this is the best that one can say.



There is another classical theorem in computability that
seems relevant to your question, namely, the fact that
there are computable infinite binary branching trees T,
subtrees of $2^{\omega}$, having no computable
infinite branches. These trees therefore constitute
violations of the computable analogue of Konig's lemma.
This problem is very like yours, isn't it? I am intrigued
by the possibility of using that classical construction to
build an example of the kind of group you seek.



There is a natural generalization of your question beyond
the finitely presented groups, to the class of
finitely-generated but computably presented groups. That
is, consider a group presentation with finitely many
generators and a computable list of relations. It may be
easier to find an instance of the kind of group you seek
having this more general form, since one can imagine a
priority argument, where one gradually adds relations as
the construction proceeds in order to meet various
requirements that diagonalize against the computable paths.




Here are some resources on general decidability questions for finite group presentations: the undecidability of the word problem, the conjugacy problem, and the group isomorphism problem.

Friday 14 December 2007

soft question - Dimension Leaps

Here is a closely related pair of examples from operator theory, von Neumann's inequality and the theory of unitary dilations of contractions on Hilbert space, where things work for 1 or 2 variables but not for 3 or more.



In one variable, von Neumann's inequality says that if $T$ is an operator on a (complex) Hilbert space $H$ with $\|T\| \leq 1$ and $p$ is in $\mathbb{C}[z]$, then $\|p(T)\| \leq \sup\{|p(z)| : |z| = 1\}$. Szőkefalvi-Nagy's dilation theorem says that (with the same assumptions on $T$) there is a unitary operator $U$ on a Hilbert space $K$ containing $H$ such that if $P : K \to H$ denotes orthogonal projection of $K$ onto $H$, then $T^n = PU^n|_H$ for each positive integer $n$.



These results extend to two commuting variables, as Ando proved in 1963. If $T_1$ and $T_2$ are commuting contractions on $H$, Ando's theorem says that there are commuting unitary operators $U_1$ and $U_2$ on a Hilbert space $K$ containing $H$ such that if $P : K \to H$ denotes orthogonal projection of $K$ onto $H$, then $T_1^{n_1}T_2^{n_2} = PU_1^{n_1}U_2^{n_2}|_H$ for each pair of nonnegative integers $n_1$ and $n_2$. This extension of Sz.-Nagy's theorem has the extension of von Neumann's inequality as a corollary: If $T_1$ and $T_2$ are commuting contractions on a Hilbert space and $p$ is in $\mathbb{C}[z_1,z_2]$, then $\|p(T_1,T_2)\| \leq \sup\{|p(z_1,z_2)| : |z_1| = |z_2| = 1\}$.



Things aren't so nice in 3 (or more) variables. Parrott showed in 1970 that 3 or more commuting contractions need not have commuting unitary dilations. Even worse, the analogues of von Neumann's inequality don't hold for $n$-tuples of commuting contractions when $n \geq 3$. Some have considered the problem of quantifying how badly the inequalities can fail. Let $K_n$ denote the infimum of the set of those positive constants $K$ such that if $T_1,\ldots,T_n$ are commuting contractions and $p$ is in $\mathbb{C}[z_1,\ldots,z_n]$, then $\|p(T_1,\ldots,T_n)\| \leq K \cdot \sup\{|p(z_1,\ldots,z_n)| : |z_1| = \cdots = |z_n| = 1\}$. So von Neumann's inequality says that $K_1 = 1$, and Ando's Theorem yields $K_2 = 1$. It is known in general that $K_n \geq \frac{\sqrt{n}}{11}$. When $n > 2$, it is not known whether $K_n < \infty$.



See Paulsen's book (2002) for more. On page 69 he writes:




The fact that von Neumann’s inequality holds for two commuting contractions
but not three or more is still the source of many surprising results and
intriguing questions. Many deep results about analytic functions come
from this dichotomy. For example, Agler [used] Ando’s theorem to deduce an
analogue of the classical Nevanlinna–Pick interpolation formula
for analytic functions on the bidisk. Because of the failure of a von
Neumann inequality for three or more commuting contractions, the analogous
formula for the tridisk is known to be false, and the problem of finding the
correct analogue of the Nevanlinna–Pick formula for polydisks
in three or more variables remains open.


Tuesday 11 December 2007

at.algebraic topology - primitive of an exact differential form with special properties

I think the answer can depend on how you interpret the question. Let me show that the answer is negative for one of the interpretations already in the case of $2$-dimensional manifolds. We study the question locally in a neighbourhood of a vertex of a triangulation, so the condition on the integral over $n$-simplexes does not play any role. The obstruction to the existence of $\beta$ comes from the local behaviour of the curves at a vertex.



Lemma. Let $\alpha = dx \wedge dy$ on $\mathbb R^2$. For $n \ge 8$ there exist $\gamma_1, \ldots, \gamma_n$, smooth rays on $\mathbb R^2$ that meet at $0$ with different tangent vectors, such that there is no $\beta$ defined in any neighbourhood of $0$ with $d\beta = \alpha$ and vanishing when restricted to each $\gamma_i$.



It is clear that this lemma implies the negative answer to a version of the question, when we are not allowed to deform the triangulation.



Proof of Lemma. Suppose by contradiction that $\beta$ exists. Then $\beta_1 = \beta - \frac{1}{2}(x\,dy - y\,dx)$ is a closed 1-form. So we can write $\beta_1 = dF$, where $F$ is a function defined in a neighbourhood of $0$, $F(0) = 0$. Since the number $n$ of the rays is more than $2$, $dF$ should vanish at zero. Moreover, it is not hard to see that, since the number of rays is more than $4$, the quadratic term of $F$ vanishes at zero too.



Now, since $\beta$ vanishes on $\gamma_i$, the restriction of $\beta_1$ to $\gamma_i$ equals $\frac{1}{2}(y\,dx - x\,dy)$. So we get the formula for $F$, restricted to $\gamma_i$:
$$F = \frac{1}{2}\int_{\gamma_i} y\,dx - x\,dy.$$
Now we will choose the rays $\gamma_1, \ldots, \gamma_8$. Namely $\gamma_1(t) = (t, t^2)$, $\gamma_2(t) = (t, t - t^2)$, and take $\gamma_3, \ldots, \gamma_8$ by consecutively rotating $\gamma_1, \gamma_2$ by $\pi/2$, $\pi$, $3\pi/2$.



It is not hard to see that $F$ is cubic modulo higher-order terms in $t$ when restricted to each $\gamma_i$. At the same time, $F$ is positive on $\gamma_1, \gamma_3, \gamma_5, \gamma_7$ and negative when restricted to the other rays. So it changes its sign at least $8$ times on a little circle surrounding $0$. This is impossible for a cubic function (in a little neighbourhood the cubic term of $F$ should be dominating). Contradiction.

order theory - Ordering of tuples equivalent to mapping to R?

I'm not sure what the point of taking tuples is, since all you seem to care about is the cardinality of the set of tuples (which is the same as the cardinality of the reals).



If you're asking whether every total ordering on a set with the cardinality of the reals is order-isomorphic to the usual ordering on a subset of the reals, the answer is no. In the reals, there can be at most countably many disjoint intervals, but that's not true in the long line.



If you're asking whether tuples with the lexicographic total order are order-isomorphic to a subset of the reals, the answer is again no, even for 2-tuples, for the same reason: the 2-tuples have uncountably many disjoint intervals, namely those with endpoints (a,b) and (a,c) (with b < c).



As for "I'm not really sure what area of maths this is": order theory.

Monday 10 December 2007

ct.category theory - What is the classifying space of "G-bundles with connections"

Let $G$ be a (maybe Lie) group, and $M$ a space (perhaps a manifold). Then a principal $G$-bundle over $M$ is a bundle $P \to M$ on which $G$ acts (by fiber-preserving maps), so that each fiber is a $G$-torsor (a $G$-action isomorphic, although not canonically so, to the action of $G$ on itself by multiplication). A map of $G$-bundles is a bundle map that plays well with the actions.



Then I more-or-less know what the classifying space of $G$ is: it's some bundle $EG \to BG$ that's universal in the homotopy category of (principal) $G$-bundles. I.e. any $G$-bundle $P \to M$ has a (unique up to homotopy) map $P \to EG$ and $M \to BG$, and conversely any map $M \to BG$ (up to homotopy) determines a (unique up to isomorphism) bundle $P \to M$ by pulling back along the obvious square.



At least this is how I think it works. Wikipedia's description of $BG$ is here.



So, let $G$ be a Lie group and $M$ a smooth manifold. On a $G$-bundle $P \to M$ I can think about connections. As always, a connection should determine for each smooth path in $M$ a $G$-torsor isomorphism between the fibers over the ends of the path. So in particular, a bundle-with-connection is a (smooth) functor from the path space of $M$ to the category of $G$-torsors. But not all of these are connections: the value of holonomy along a path is an invariant up to "thin homotopy", which is essentially homotopy that does not push away from the image of the curve. So one could say that a bundle-with-connection is a smooth functor from the thin-homotopy-path-space.



More hands-on, a connection on $P \to M$ is a ${\rm Lie}(G)$-valued one-form on $P$ that is (1) invariant under the $G$ action, and (2) restricts on each fiber to the canonical ${\rm Lie}(G)$-valued one-form on $G$ that takes a tangent vector to its left-invariant field (thought of as an element of ${\rm Lie}(G)$).



Anyway, my question is: is there a "space" (of some sort) that classifies $G$-bundles over $M$ with connections? By which I mean, the data of such a bundle should be the same (up to ...) as a map $M \to$ this space. The category of $G$-torsors is almost right, but then the map comes not from $M$ but from its thin-homotopy path space.



Please re-tag as desired.

ds.dynamical systems - What is the "category of bifurcations"?

While reading the introduction to this paper by Curtis McMullen, I came to the following (bold added):




In this paper we show that every bifurcation set contains a copy of the boundary of the Mandelbrot set or its degree $d$ generalization. The Mandelbrot sets $M_d$ are thus universal; they are initial objects in the category of bifurcations, providing a lower bound on the complexity of $B(f)$ for all families $f_t$.




(Here $f_t$ is a family of rational functions mapping $P^1$ to itself and depending holomorphically on a parameter $t$ ranging over some complex manifold, and the bifurcation set (or locus) $B(f)$ is the set of $t$ at which the dynamics of $f_t$ undergo a discontinuous change. For a positive integer $d \geq 2$, $M_d$ is the set of $t$ for which the function $z \mapsto z^d + t$ has a connected Julia set, and $\partial M_d$, the boundary of $M_d$, is the bifurcation locus for the family $\{z \mapsto z^d + t\}_t$.)



Even without knowing exactly what the "category of bifurcations" is, I can see an analogy between the claim about Mandelbrot sets and the concept of initial objects, but presumably something more definite is intended. My question is thus how to interpret the second sentence quoted from McMullen's paper, or more simply: what is the "category of bifurcations"?

Friday 7 December 2007

molecular biology - How is Taq polymerase produced?

For recombinant Taq polymerase, industrial-scale production produces liters of highly concentrated enzyme in a single run. It comes in such small, dilute quantities when sold that only a few preps a year would be necessary to satisfy research lab demand.



I don't know the Thermus aquaticus protocol, but considering the achievements in yield for E. coli, and the minimal price difference between the two enzymes, I'd imagine that yields are similar there. That would say to me that nobody's farming hot springs for T. aquaticus.

ag.algebraic geometry - Looking for reference talking about relationship between descent theory and cohomological descent

The relationship between cohomological descent and Lurie's Barr-Beck is exactly the same as the relationship between ordinary descent and ordinary Barr-Beck. To put things somewhat blithely, let's say you have some category of geometric objects $\mathsf{C}$ (e.g. varieties) and some contravariant functor $\mathsf{Sh}$ from $\mathsf{C}$ to some category of categories (e.g. to $X$ gets associated its derived quasi-coherent sheaves, or as in SGA its bounded constructible complexes of $\ell$-adic sheaves). Now let's say you have a map $p : Y \rightarrow X$ in $\mathsf{C}$, and you want to know if it's good for descent or not. All you do is apply Barr-Beck to the pullback map $p^\ast : \mathsf{Sh}(X) \rightarrow \mathsf{Sh}(Y)$. For this there are two steps: check the conditions, then interpret the conclusion. The first step is very simple -- you need something like $p^\ast$ conservative, which usually happens when $p$ is suitably surjective, and some more technical condition which I think is usually good if $p$ isn't like infinite-dimensional or something, maybe. For the second step, you need to relate the endofunctor $p^\ast p_\ast$ (here $p_\ast$ is right adjoint to $p^\ast$... you should assume this exists) to something more geometric; this is possible whenever you have a base-change result for the fiber square gotten from the two maps $p : Y \rightarrow X$ and $p : Y \rightarrow X$ (which are the same map). For instance in the $\ell$-adic setting you're OK if $p$ is either proper or smooth (or flat, actually, I think). Anyway, when you have this base-change result (maybe for $p$ as well as for its iterated fiber products), you can (presumably) successfully identify the algebras over the monad $p^\ast p_\ast$ (should I say co- everywhere?) with the limit of $\mathsf{Sh}$ over the usual simplicial object associated to $p$, and so Barr-Beck tells you that $\mathsf{Sh}(X)$ identifies with this too, and that's descent. The big difference between this homotopical version and the classical one is that you need the whole simplicial object and not just its first few terms, to have the space to patch your higher gluing homotopies together.
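
In symbols (just restating the conclusion above), descent along $p$ says that pullback induces an equivalence
$$\mathsf{Sh}(X) \;\xrightarrow{\ \sim\ }\; \varprojlim_{[n] \in \Delta} \mathsf{Sh}\big(Y^{\times_X (n+1)}\big),$$
the limit of the cosimplicial diagram of categories obtained by applying $\mathsf{Sh}$ to the Cech nerve of $p$.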

Thursday 6 December 2007

Why are there N's after Sanger sequencing?




I'll add on to this answer. A Sanger sequence is really only good for (at best) 800-900 bp. Beyond this point, there is simply too little DNA left to give a reliable signal that you can confidently call as a specific nucleotide. A good habit to get into is to always examine your chromatograms and traces, and reject everything after you start getting your first run of N's. I don't usually worry about the first N at the end of the read, however, because it can be attributed to error, and the following bases are still confidently called.
– user560
Apr 16 '12 at 7:03


at.algebraic topology - Circle bundles over $RP^2$

I think that they have Seifert fiber space presentation as:
$(On,1|(1,b))$.



Or
$(On,1|(1,b),(a_1,b_1),...,(a_r,b_r))$, if you allow an orbifold with cone points in $RP^2$.



You can look at the cases by decomposing $RP^2 = Mo \cup_{\partial} D$, so the orientable 3-manifold will be the


1) orientable $Q = Mo \tilde{\times} S^1$, the twisted circle bundle over the Möbius band, well known to be equivalent to the orientable I-bundle over the Klein bottle, with boundary a torus $T$,
2) and a Dehn filling in the remaining disk $D$, with whichever fibered solid torus or tori.

We could say that $(On, 1 \mid (1,b)) = Q \cup_T W(1,b)$, for a fibered $(1,b)$ solid torus $W$.

mathematics education - Depressed graduate student.

You have to ask yourself some basic questions.



1) Perhaps you lost your enthusiasm for a good reason. Maybe your initial enthusiasm was naive. Maybe you liked mathematics for unsustainable reasons? You're likely to go through far larger "down" cycles in the future if you stay in mathematics (we all do, it's a chronic problem in the field) so if you're going to stick with mathematics you have to find some kind of joy you can hold on to, through all kinds of messy situations.



2) Maybe you really do love mathematics but there's aggravating factors causing problems. Maybe you're not working on enough easy problems. Solving easy problems is fun, and if they're the right kind of problems you build up new skills. This is one of the reasons why I frequent this webpage.



I had a bunch of issues like (2) as a grad student. IMO I'm mostly better off for them. I'm talking about issues like solving a problem (or making progress on a problem) and finding out perhaps way too late that the problem had been solved by someone else. I found it pretty tricky to balance focus with awareness of what other people are doing. The math arXiv and MathSciNet are excellent resources that help with that.



Being in a very active place where you can talk to lots of people about various areas of mathematics helps. Being surrounded by enthusiastic people helps. Going to small conferences where you get to know people can help. Talking to people about what you're interested in helps. Barring external impetus, "computing the daylights out of things" is an excellent fall-back procedure. I know quite a few very successful mathematicians for which this is one of the main approaches to things. You start piling up enough computations on things that interest you and you notice patterns -- maybe not what you were looking for, but sometimes of interest to people for reasons you never expected. Sometimes publishable. :)



edit: After reading your recent edit I can say I saw some similar things as a grad student. Sometimes the most talented/bright/whatever grad students have a hard time completing a Ph.D. Some students have too high expectations of themselves. They give up because they realize they're not going to prove the Riemann hypothesis -- they want that great big creative insight. In that regard it's good to ensure such grad students are working on both big hard problems and medium-sized publishable work, so that they can complete a Ph.D even if they never prove the Riemann hypothesis or whatever. Basically, always make sure you have a manageable goal in sight. If your goals are only huge enormous things, you're setting yourself up for a potentially horrible failure. On the other hand, some people want that kind of situation, and if they're conscious of it, IMO you might as well let them be. It's their life. If they prove a major theorem, we're all the better off for it. If they don't, well at least they tried.

nt.number theory - References for Artin motives

A motive is a chunk of a variety cut out by correspondences. (If you like, it is something of which we can take cohomology.)



Artin motives are what one gets by restricting to zero-dimensional varieties. If the ground
field is algebraically closed then zero-dimensional varieties are simply finite unions of points, so there is not much to say; the only invariant is the number of points.



But if the ground field $K$ is not algebraically closed (but is perfect, e.g. char $0$,
so that we can describe all finite extensions by Galois theory), then there are many
interesting $0$-dimensional motives, and in fact the category of Artin motives (with
coefficients in a field $F$ of characteristic $0$, say) is equal to the category of continuous
representations of $Gal(\overline{K}/K)$ on $F$-vector spaces (where the $F$-vector spaces are given their discrete topology; in other words, the representation must factor through $Gal(E/K)$ for some finite extension $E$ of $K$).



Perhaps from a geometric perspective, these motives seem less interesting than others. On the other hand, number theoretically, they are very challenging to understand. The Artin conjecture about the holomorphicity of $L$-functions of Artin motives, which is the basic reciprocity conjecture regarding such motives, remains very wide open, with very few non-abelian cases known. (Of course, for representations with abelian image, these
conjectures amount to class field theory, which is already quite non-trivial.)

Tuesday 4 December 2007

The other classical limit of a quantum enveloping algebra?

Let $\mathbb K$ be a field (of characteristic 0, say), $\mathfrak g$ a Lie bialgebra over $\mathbb K$, and $\mathcal U \mathfrak g$ its usual universal enveloping algebra. Then the coalgebra structure on $\mathfrak g$ is equivalent to a co-Poisson structure on $\mathcal U \mathfrak g$, i.e. a map $\hat\delta : \mathcal U \mathfrak g \to (\mathcal U \mathfrak g)^{\otimes 2}$ satisfying some axioms. A formal quantization of $\mathfrak g$ is a Hopf algebra $\mathcal U_\hbar \mathfrak g$ over $\mathbb K[[\hbar]]$ (topologically free as a $\mathbb K[[\hbar]]$-module) that deforms $\mathcal U \mathfrak g$, in the sense that it comes with an isomorphism $\mathcal U_\hbar \mathfrak g / \hbar \mathcal U_\hbar \mathfrak g \cong \mathcal U \mathfrak g$, and moreover that deforms the comultiplication in the direction of $\hat\delta$: $$\Delta = \Delta_0 + \hbar \hat\delta + O(\hbar^2),$$ where $\Delta$ is the comultiplication on $\mathcal U_\hbar \mathfrak g$ and $\Delta_0$ is the (trivial, i.e. for which $\mathfrak g$ is primitive) comultiplication on $\mathcal U\mathfrak g$. This makes precise the "classical limit" criterion: "$\lim_{\hbar \to 0} \mathcal U_\hbar \mathfrak g = \mathcal U \mathfrak g$".



I am wondering about "the other" classical limit of $\mathcal U_\hbar \mathfrak g$. Recall that $\mathcal U\mathfrak g$ is filtered by declaring that $\mathbb K \hookrightarrow \mathcal U\mathfrak g$ has degree $0$ and that $\mathfrak g \hookrightarrow \mathcal U\mathfrak g$ has degree $\leq 1$ (this generates $\mathcal U\mathfrak g$, and so defines the filtration on everything). Then the associated graded algebra of $\mathcal U\mathfrak g$ is the symmetric (i.e. polynomial) algebra $\mathcal S\mathfrak g$. On the other hand, the Lie structure on $\mathfrak g$ induces a Poisson structure on $\mathcal S\mathfrak g$, and one should understand $\mathcal U \mathfrak g$ as a "quantization" of $\mathcal S\mathfrak g$ in the direction of the Poisson structure. Alternately, let $k$ range over non-zero elements of $\mathbb K$, and consider the endomorphism of $\mathfrak g$ given by multiplication by $k$. Then for $x,y \in \mathfrak g$, we have $[kx,ky] = k(k[x,y])$. Let $\mathfrak g_k$ be $\mathfrak g$ with $[,]_k = k[,]$. Then $\lim_{k\to 0} \mathcal U\mathfrak g_k = \mathcal S\mathfrak g$ with the desired Poisson structure.



I know that there are functorial quantizations of Lie bialgebras, and these quantizations give rise to the Drinfeld-Jimbo quantum groups. So presumably I can just stick $mathfrak g_k$ into one of these, and watch what happens, but these functors are hard to compute with, in the sense that I don't know any of them explicitly. So:




How should I understand the "other" classical limit of $mathcal U_hbar mathfrak g$, the one that gives a commutative (but not cocommutative) algebra?




If there is any order to the world, in the finite-dimensional case it should give the dual to $mathcal U(mathfrak g^*)$, where $mathfrak g^*$ is the Lie algebra with bracket given by the Lie cobracket on $mathfrak g$. Indeed, B. Enriquez has a series of papers (which I'm in the process of reading) with abstracts like "functorial quantization that respects duals and doubles".



One answer that does not work: there is no non-trivial filtered $hbar$-formal deformation of $mathcal Umathfrak g$. If you demand that the comultiplication $Delta$ respect the filtration on $mathcal Umathfrak g otimes mathbb K[[hbar]]$ and that $Delta = Delta_0 + O(hbar)$, then the coassociativity constraints imply that $Delta = Delta_0$.



This makes it hard to do the $mathfrak g mapsto mathfrak g_k$ trick, as well. The most naive thing gives terms of degree $k^{-1}$ in the description of the comultiplication.

Sunday 2 December 2007

nt.number theory - Values of cusp forms at q = 1 ?

I'll give a slightly uncertain answer, based somewhat on my recollection of conversations with Zagier a month ago about similar questions.



If we were to imitate Euler, we might consider $f(1)$ as
$$f(1) = sum_{n geq 1} a_n = sum_{n geq 1} a_n n^{-0} = L(f,0).$$
So the analytic continuation of the L-function suggests that $f(1)$ should be identified with the value of the L-function at zero. By the functional equation, this relates to the L-function at the right edge of the critical strip.
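To recall the shape of the functional equation being invoked (in one common normalization, for a newform $f$ of weight $k$ and level $N$; signs and normalizations vary with conventions): setting

$$Lambda(f,s) = N^{s/2} (2pi)^{-s} Gamma(s) L(f,s),$$

one has $Lambda(f,s) = epsilon Lambda(f,k-s)$ with $epsilon = pm 1$. For weight $k = 2$ this ties $L(f,0)$ to $L(f,2)$, the value at the right edge of the critical strip.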



So, for a cusp form of weight two, arising from an elliptic curve $E$ over $Q$, the value $L(f,0)$ is related to $L(E,2)$. An interpretation of this L-value, conjectured by Zagier, was proven by Goncharov and Levin, in "Zagier's conjecture on $L(E,2)$", Invent. Math. 132 (1998).



As for the analytic question, you are considering the "value" of a cusp form $f$ on the real axis, which bounds the upper half-plane. Almost by definition, there is a Sato hyperfunction $f_{bdr}$ on the real axis, which describes this boundary behavior of the holomorphic function $f$ on the upper half-plane. I am not sure if the following is published, but I have the impression that there might be a preprint now or soon which proves the following result:



At every (positive? I don't recall) rational number $q$, the hyperfunction $f_{bdr}$ is $C^infty$ at $q$. Its value at $1$ is $L(f,0)$ as described above.



I think that saying "a hyperfunction is $C^infty$ at $q$" means that the hyperfunction can be expressed as the distributional derivative of a continuous function -- $f = g^{(k)}$ for some $k geq 0$ -- and $g$ happens to be $C^infty$ at $q$. But I'm not much of an analyst.



I think that the value $f(1)$ also exists as the limit $lim_{z rightarrow 1} f(z)$, if $z$ approaches $1$ via a geodesic in the upper half-plane.



I don't think you'll see Sha or the torsion directly, as these appear at the central value $L(f,1)$. On the other hand, I do think you'll find $L(f,-n)$ for all $n geq 0$ (or equivalently, $L(f,2+n)$ ), by looking at the derivatives $f^{(n)}(1)$ of the boundary hyperfunction of $f$ at $1$.

molecular biology - Why are some genes dominant over others? What is the mechanism behind it?

Below are some insights about the evolution of dominance. This does not directly answer your question, but understanding how dominance is thought to evolve also helps in understanding the mechanisms governing dominance relationships between alleles.




There are several hypotheses about the evolution of dominance. First of all, it is important to note that empirical observations show that beneficial alleles tend to be more dominant than detrimental alleles. The two main hypotheses to explain the evolution of dominance were formulated by Ronald Fisher and by Sewall Wright.



Fisher's hypothesis



According to Fisher's hypothesis, between two equally beneficial alleles, if one is more dominant than the other, then its heterozygote carrier will have higher fitness. In consequence, beneficial alleles evolve to become more dominant while detrimental alleles evolve to become recessive (so that they can hide from selection in heterozygotes).



Wright's hypothesis



According to Wright's hypothesis, beneficial alleles are more dominant because of the kinetics of biochemical reactions. The rate of a biochemical reaction is a function of the concentration of the substrates of interest. This function is called the "Michaelis-Menten function", after its authors. The Michaelis-Menten function looks like this:



[Figure: Michaelis-Menten curve of reaction rate against concentration]



Think about a knock-out mutation. Such a mutation will halve, in heterozygotes, the concentration of the protein produced by the gene in question. Imagine the concentration of protein in the wild-type homozygote is 3 (see the graph above); the rate of reaction in this homozygote is therefore about 3. The heterozygote would have a concentration of 1.5, and the rate of the reaction is therefore about 2.5-2.75. Assuming that the rate of this biochemical reaction is directly related to fitness, then at the locus of interest, beneficial alleles are necessarily dominant and detrimental alleles are necessarily recessive. Selection for dominance is not involved in Wright's model.
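For readers who like to see numbers, here is a minimal Python sketch of Wright's argument; the Michaelis-Menten parameters below (Vmax, Km) are made up purely for illustration and are not taken from the answer.

# Wright's argument, numerically: the Michaelis-Menten rate law
# v = Vmax * S / (Km + S) is concave, so halving the protein
# concentration barely changes the reaction rate.
# Vmax and Km are illustrative values only.
def rate(conc, vmax=3.5, km=0.5):
    return vmax * conc / (km + conc)

wildtype_homozygote = rate(3.0)   # two functional alleles
heterozygote = rate(1.5)          # one functional, one knock-out allele
knockout_homozygote = rate(0.0)   # two knock-out alleles

print(wildtype_homozygote, heterozygote, knockout_homozygote)  # 3.0 2.625 0.0
# The heterozygote's rate is far closer to the wild type's than to the
# knock-out's, so the functional allele behaves as (nearly) dominant.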



Is Fisher's or Wright's hypothesis correct?



The current state of the art is to consider Wright's model to be correct and Fisher's model to be wrong. In reality, the truth probably lies somewhere in between these two extremes. Note also that some other alternative explanations may exist in the literature, but Wright's and Fisher's original hypotheses are by far the most considered hypotheses for the evolution of dominance. During my undergrad, I remember a speaker (whose name I have forgotten, sorry!) who showed that some alleles actually have domains that are directly responsible for decreasing the expression of the other allele on the sister chromosome, suggesting that Fisher's hypothesis might sometimes be a good explanation as well.

at.algebraic topology - Universal Covering Space of Wedge Products

Today I was studying for a qualifying exam, and I came up with the following question:




Is there a simple description of the universal cover of a wedge product in terms of the universal covers of the subspaces?




This question came about after calculating universal covers of the wedge of spheres ($mathbb{S}^1 veemathbb{S}^1$ and $mathbb{S}^1 veemathbb{S}^n$) and the wedge of projective space with spheres. In these cases, the universal cover looks like the cross product of the sheets of the universal covers of each space in the wedge.



For the case of wedging two spheres, we can use the fact that $pi_{ngeq2}left(Uright)$ is isomorphic to $pi_{ngeq2}left(Xright)$ for $U$ covering $X$.



I googled around a bit to try and find something, but nothing appeared.



Thanks in advance!

ag.algebraic geometry - Can a singular Deligne-Mumford stack have a smooth coarse space?

I think if the coarse moduli space is smooth, so is the DM stack, because $mathcal X to X$ is a gerbe, which is always smooth (since smoothness can be checked fppf locally on $X$, and $B(G/X)$ is smooth over $X$). A stack (or a morphism of stacks, not necessarily representable) is defined to be smooth if one can find a presentation which is smooth over the base. And if it is smooth, then any presentation is smooth. That's why I got confused about Anton's example. Maybe someone can explain this to me. Thanks in advance.

ct.category theory - What are natural transformations in 1-categories?

Here is a counterexample for your next-to-last question. Let S be a set with more than one element and consider the two full subcategories of Cat on, respectively, the single category which is the discrete category on S, and the single category which is the codiscrete category on S. In each case, when viewing Cat as a 1-category, the resulting full subcategory has a single object with endomorphisms Hom(S, S). However, if we view Cat as a 2-category, the former subcategory has no nontrivial natural transformations and thus really is BHom(S, S), while the latter has a unique natural transformation between any two functors and thus is actually • up to 2-equivalence.



Cat-the-1-category and Cat-the-2-category are very different constructs which unfortunately usually go by the same name. Even though they have "the same" objects, I suggest thinking of their objects as being different kinds of things. An object of Cat-the-1-category has more information than an object of Cat-the-2-category; we may talk about the cardinality of its set of objects, not just the cardinality of its set of isomorphism classes of objects. (This shouldn't seem too strange, since an object of Cat-the-0-category is a "specific" category, of which we may talk about the actual set of objects.) Put differently, an object of Cat-the-1-category is a "monoid with many objects", while an object of Cat-the-2-category is what we more often think of when thinking about categories (especially large ones).



In your example, you expressed Ab as a full subcategory of Cat-the-1-category. The full subcategory of Cat-the-2-category on the same objects is not Ab, since it has nontrivial natural automorphisms, as others have pointed out. It only becomes Ab after truncation—replacing each Hom-category by its set of isomorphism classes of objects. For Grp, the situation is worse, since distinct group homomorphisms may be naturally isomorphic as functors. The usual way to repair this is to work with "pointed categories", as described at this nlab page. But of course this is a kind of extra structure on a category, and if I'm allowed to introduce arbitrary extra structure then the question is too easy. Anyways, I'm not sure that one should expect various concrete categories to naturally be full subcategories of either Cat-the-1-category or Cat-the-2-category.

Saturday 1 December 2007

ag.algebraic geometry - Restriction of Ext sheaves

Hi Andrea, I don't think one can prove much without flatness. Let's assume the simplest case, that $X=text{Spec}(S)$, $Y= text{Spec}(R)$, and $Rto S$ is a finite local homomorphism with $R$ regular. Then I claim what you want is equivalent to flatness.



Your condition amounts to
$$text{Ext}_R^i(M,N)otimes_RS cong text{Ext}_S^i(Motimes S,Notimes S)$$
for $R$-modules $M,N$. There is a well-known result that the first $i$ such that $text{Ext}_R^i(R/I,R)neq 0$ is the length of the longest $R$-regular sequence in $I$. Let $M=R/m_R$ and $N=R$; then by the Ext condition we can conclude that $m_RS$ contains an $S$-regular sequence of length equal to $text{dim} S$. So $S$ is Cohen-Macaulay.



But then "miracle flatness" implies that $f$ is flat! This also provides counter-examples: if $S$ is not Cohen-Macaulay, choose $i$ to be $text{depth} S$.



ADDED: for the sake of completeness, here is a class of examples to show that the second nice situation ($mathcal F$ is locally free) can't be generalized too much.



Let $Y=mathbb A^n$, $X=V(f)subset Y$ such that $f$ vanishes at the origin. Let $mathcal F$ be locally free on $Y$ minus the origin, but not free at the origin. Let $mathcal G$ be any torsion-free coherent sheaf on $Y$. Then I claim the condition you want (let's call it $(*)$) will not hold.



From the short exact sequence $0to mathcal G to mathcal G to mathcal G/(f)to 0$ (the first map is multiplication by $f$, exact because $mathcal G$ is torsion-free) one can take $mathcal Hom(mathcal F,-)$ to get a long exact sequence. Looking at this long exact sequence, $(*)$ means precisely that the multiplication-by-$f$ maps
$$mathcal Ext^i(mathcal F,mathcal G) to mathcal Ext^i(mathcal F,mathcal G) $$
must be injective for all $i>0$. But all these $mathcal Ext$ sheaves vanish away from the origin, so their stalks at the origin have finite length, and multiplication by $f$, which vanishes at the origin, cannot be injective on a nonzero finite-length module; hence they have to be $0$. Now localizing at the origin, taking a minimal free resolution of the stalk of $mathcal F$ to compute Ext, and using Nakayama, one can show that $mathcal F$ must also be free there, a contradiction.

Friday 30 November 2007

molecular biology - Is the eukaryotic nucleus composed of a single or double membrane?

The nucleus is topologically a single membrane but functionally, and as visualized, a double membrane. Naturally, this is a bit confusing.



First consider the mitochondria and plastids. Each of these organelles has two entirely separate lipid bilayers, one of which is nested inside the other. If you start inside the organelle and draw a line to an external point, the line will pass through two lipid bilayers (assume there are no membrane folds to be encountered), first through the nested one and then through the entirely separate outer bilayer.



The nucleus at a gross level appears to be structured similarly. A line from the interior to the exterior will usually pass through two lipid bilayers, one that is the inner wall of the nucleus, then through the outer wall. This may seem like the inner wall is separate from, and nested inside, the outer wall, but it is not. As nicely illustrated in your reference (Martin 2005), the two walls are continuous with each other and not nested. Because of this, a line from nuclear interior to exterior may in fact not pass through any lipid bilayers; it may pass through a nuclear pore. Topologically (mathematically) speaking, the two walls are one surface, and the "inside" of the nucleus is on the same side of the surface as the outside. (The topological interior is perinuclear space and the interior of the endoplasmic reticulum.) Functionally though, the structure acts as a double membrane since it physically constrains its contents.



The distinction is not very important for most biological contexts, but for a few purposes needs to be considered. The main reason to pay attention to the distinction is that it probably is related to the evolutionary history of the nucleus: it is likely to have evolved from a single bilayer somehow. In contrast, the mitochondria and plastids, with nested bilayers, evolved from two bilayers, one from a separate cell and one from an enveloping piece of a host cell. This is why the distinction is important in the cited paper, which concerns the evolution of the nucleus.

sequences and series - Seeking for a formula or an expression to generate non-repeatative random number ..

When we hear random permutations, we bring in our intuition about permutations, and try to give a method which could generate a complicated permutation. Thus, I think we didn't pay enough attention to your examples like n*3 mod N, which for most situations would not be an acceptable way of generating random numbers. The only problem is what to do if N is divisible by 3. As far as I can tell, divisibility by 10 is irrelevant, so I'm not sure why you mentioned it.



You say you don't want to write a program, just a simple formula in Excel. This is reasonable, and even something which makes sense mathematically: There are a few operations available in Excel formulas such as addition, exponentiation, factorial, conditional evaluation based on whether a statement is true or false (characteristic functions), etc. Can one create a formula with fixed complexity which takes in n and N, and which is a permutation of {1,...,N} for a fixed N? Trivially returning n works, but can one produce a permutation other than (+-n+k mod N)+1?



I suggest creating a formula which is equivalent to the following:



If N is not divisible by 71, return (71*n mod N) + 1.
Otherwise N is divisible by 71. Permute the last digit base 71: return a + (3*b mod 71) +1
where n-1 = a + b and a is divisible by 71 and $0 le b lt 71$, i.e.,
b = n-1 mod 71.
a = n-1 - (n-1 mod 71).



=IF(MOD(N,71)<>0, MOD(71*n,N)+1, n-MOD(n-1,71)+MOD(3*MOD(n-1,71),71)).



(Debugging left to the reader.)
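For what it's worth, here is a minimal Python transcription of the mapping described above (my own sketch; the constant 71 and the two cases follow the text, everything else is an implementation choice):

# Python version of the permutation sketched above: multiply by 71 mod N,
# except when N is divisible by 71, in which case permute the last
# base-71 "digit" instead.
def perm(n, N):
    if N % 71 != 0:
        return (71 * n) % N + 1
    b = (n - 1) % 71          # last digit base 71
    a = (n - 1) - b           # the rest, a multiple of 71
    return a + (3 * b) % 71 + 1

# Sanity check that this really is a permutation of {1, ..., N}.
N = 213                        # divisible by 71, chosen only as an example
assert sorted(perm(n, N) for n in range(1, N + 1)) == list(range(1, N + 1))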



This would be lousy as a random permutation, but it may be acceptable for some purposes.



A better random permutation might be based on f(n), where f reverses the lowest binary digits of n if n is at most the greatest power of 2 less than N, and does nothing if n is greater. Try f(N+1-f(n)). This can be done using the DEC2BIN and StrReverse functions, but you need a little Excel expertise to use those.
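Here is a hedged Python sketch of that bit-reversal idea as I read it (how many low bits to reverse, and the off-by-one bookkeeping, are my own choices, not from the text):

# f reverses the low bits of n when n lies in {1, ..., m}, where m is the
# greatest power of 2 less than N, and fixes n otherwise; g composes it
# with n -> N + 1 - n as suggested above.
def f(n, N):
    m = 1
    while m * 2 < N:
        m *= 2                                    # m = greatest power of 2 < N
    if n > m:
        return n
    bits = m.bit_length() - 1                     # number of low bits to reverse
    r = int(format(n - 1, "0{}b".format(bits))[::-1], 2)
    return r + 1

def g(n, N):
    return f(N + 1 - f(n, N), N)

# Sanity check that g is a permutation of {1, ..., N}.
N = 20
assert sorted(g(n, N) for n in range(1, N + 1)) == list(range(1, N + 1))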



Once you have a few ways to generate random permutations, you can compose them, and even using unsatisfactory random permutations like adding floor(sqrt(N)) can improve the appearance of the resulting permutation.

Thursday 29 November 2007

biochemistry - What is the correct model for enzyme-substrate complementarity?

Both models are true depending on how you frame the mechanisms of catalysis. As mentioned by @Blues, proteins are highly dynamic. In that manner, a protein will adopt both the unbound active state shown in the induced fit model and the complementary shape shown in the lock and key model.



(Apologies, since this is the only figure that I could find to explain this concept.) Using the above description, in the induced fit model the E state will change its structure to the E*S state. In the lock and key model, the E state will be equivalent to the E*S state. According to the figure below, this would imply that the E*S state always exists, but as it is a few kcal higher in free energy, the state is rarely seen. Thermodynamically, this means that the "lock" always exists but it is an unstable configuration. When the substrate is added to the system, it will stabilize the lock and thermodynamically favor an E*S state.



[Figure: free-energy diagram comparing the E and E*S conformational states]



Long story short, the induced fit model is a good explanation of how enzymes morph into an active state, but depending on how you frame the mechanism, you are always seeing a lock-key model (at least according to my enzymology professor). Unfortunately, the majority of biochemistry textbooks continue to teach the induced fit model, since it is a much easier concept to understand given most undergrads' and first-year graduates' understanding of statistical thermodynamics.



The induced fit model is more appropriately used to understand the mechanisms of substrate specificity. As hinted by your professor, enzymes will perform their function in the lock-key mechanism. This is true for many serine proteases, which all carry out the exact same reaction. However, substrate specificity can be incorporated by destabilizing the E*S complex, which largely has to do with the E state.

ag.algebraic geometry - Algebraic versus Analytic Brauer Group

Let $X$ be a smooth projective algebraic variety over $mathbb{C}$. Then I think that someone (Serre?) showed that the Cohomological Etale Brauer Group agrees with the torsion part of the Analytic Brauer Group $H^{2}(X,mathcal{O}^{times})$. This latter group is calculated in the classical (metric) topology on the associated complex manifold with the sheaf of nowhere vanishing holomorphic functions.



However there can easily be non-torsion elements in $H^{2}(X,mathcal{O}^{times})$: for instance consider the image in $H^{3}(X,mathbb{Z}) cap (H^{(2,1)}(X) oplus H^{(1,2)}(X))$.



Could there be a topology more refined than the etale topology but defined algebraically which can see these non-torsion classes? Notice that one can also ask the question for any $H^{i}(X,mathcal{O}^{times})$. For $i=0,1$ the Zariski and etale topologies work fine.



Why do things break down for $i>1$?

Wednesday 28 November 2007

algebraic groups - "Eigenvalue characters"

This question is an addition to my question on simultaneous diagonalization from yesterday and it is probably also obvious but I just don't know this: Let $G$ be a commutative affine algebraic group over an algebraically closed field $k$. Let $G_s$ be the semisimple part of $G$. Let $rho:G rightarrow GL_n(V)$ be an embedding. Then $rho(G_s)$ is a set of commuting diagonalizable endomorphisms and I know from yesterday that I have unique morphisms of algebraic groups $chi_i: rho(G_s) rightarrow mathbb{G}_m$, $1 leq i leq r$, and a decomposition $V = bigoplus _{i=1}^r E _{chi_i}$, where $E_{chi_i} = lbrace v in V mid fv = chi_i(f)v forall f in rho(G_s) rbrace$. Now, my question is: are the morphisms $chi_i$ independent of $rho$ so that I get well-defined morphisms $chi_i:G_s rightarrow mathbb{G}_m$?



If somebody knows what I'm talking about, then please change the title appropriately! :)

pr.probability - What's the standard name for sets of a given size with maximal probability (or a given probability and minimal size)?

The definition I'm going to give isn't quite the concept I really want, but it's a good approximation. I don't want to make the definition too technical and specific because if there's a standard name for a slightly different definition, then I want to know about it.



Let $(X,mu)$ be a measure space, and let $rho$ be a probability measure on $X$. I call a subset $A$ of $X$ special if for all measurable $Bsubseteq X$,



  1. $mu(B)leqmu(A)$ implies $rho(B)leqrho(A)$, and

  2. $mu(B)=mu(A)$ and $rho(B)=rho(A)$ implies $B=A$ up to measure zero (with respect to both $mu$ and $rho$).

What is the standard name for my "special" sets? Equivalently, one could stipulate $mu(A)leqbeta$ and call $A$ "special" if it is essentially the unique maximizer of $rho(A)$ given that constraint.



Also equivalently, we could stipulate a particular $rho$-measure and consider sets achieving that $rho$-measure having the smallest possible $mu$-measure. That's probably the most intuitive way to think about this: we're looking for sets that contain a certain (heuristically: large) fraction of the mass of $rho$ but are as small as possible (with respect to $mu$). That seems like a completely natural and obvious concept, which is why I think it should have a standard name. But I have almost no training in statistics, so I don't know what the name is.



This example might be far-fetched, but just to illustrate: suppose the FBI has knowledge that somebody is going to attempt a terrorist attack in a certain huge city at a particular hour. They might not know where, but they might have (some estimate of) a probability distribution for the location of the attack. They want to distribute agents strategically throughout the city, but they probably don't have enough agents to cover the entire city. Let's say every agent can forestall an attack if it occurs within a certain radius of his/her position (which is unrealistic, since the number of nearby agents surely also matters, but ignore that); then, to maximize the probability that the attack will be stopped, to an approximation, they should distribute their agents uniformly over a special subset of the city's area. To approach this from the other perspective, it could be the case that 99% of the mass of their probability distribution is contained in a region with very small area. (The one with the smallest area will be a special set.) Then, to save resources, if they're okay with 99:1 odds (c'est la vie), they might only distribute a relatively small number of agents to that small special region.



If $rho$ has a density $f$ with respect to $mu$ (when it makes sense to talk about such), then special sets are closely related to the superlevel sets of $f$, i. e., sets of the form ${x:f(x)geq c}$ for $cgeq 0$. (I think they're basically the same, but specialness of $A$ is unaffected by changing $A$ by a set of measure zero, so a superlevel set actually corresponds to an equivalence class of special sets.) I mention this here because (1) the connection to superlevel sets is one of my reasons for caring about specialness, and (2) "superlevel sets of the density" is not the answer I'm looking for.


Example 1

Here's a very simple example in which special sets can be completely characterized. Let $X={x_1,ldots,x_n}$ be a finite set, and let $mu$ be counting measure on $X$. Let $rho$ be any probability distribution on $X$, which necessarily has a density function $f:Xtomathbb{R}_+$, so by definition, $f(x_1) + ldots + f(x_n) = 1$ and $rho(A) = sum_{xin A} f(x)$. Suppose that no two points have the same $f$-value; then, without loss of generality, $f(x_1) > f(x_2) > ldots > f(x_n)$. It's easy to see that the special sets in this setup are exactly the sets $A_k = {x_1,x_2,ldots,x_k}$, i. e., which contain the largest $k$ points as measured by $rho$, for $k=0,ldots,n$. (Why: if you have some other candidate special set $B$, then $A_{\#B}$ has the same $mu$-measure as $B$ but higher $rho$-measure, so $B$ can't be special.) It's easy to generalize this example to the case in which $f$ isn't necessarily one-to-one: you have to treat all points with the same $f$-value as a block: either all of them are in the special set, or none of them are. (Otherwise, there's no way to satisfy the "uniqueness" part (point 2) of the definition.)
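To make Example 1 concrete, here is a small Python illustration (the probabilities are arbitrary made-up values, not part of the question):

# Special sets for counting measure on a finite set: sort the points by
# their probability and take prefixes. Probabilities are arbitrary examples.
f = {"x1": 0.4, "x2": 0.3, "x3": 0.2, "x4": 0.1}

points = sorted(f, key=f.get, reverse=True)           # x1, x2, x3, x4
special_sets = [points[:k] for k in range(len(points) + 1)]

def rho(A):
    return sum(f[x] for x in A)

# For each k, points[:k] maximizes rho among all k-element subsets, and
# (since the f-values are distinct) it is the unique such maximizer.
print(special_sets)
print([round(rho(A), 2) for A in special_sets])   # [0.0, 0.4, 0.7, 0.9, 1.0]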


Example 2

Here's a generalization of the first example that hopefully clarifies what I said above. Let $(X,mu)$ be some nice measure space on which integration of functions makes sense (like a Riemannian manifold, or just $mathbb{R}^d$). Let $f:Xtomathbb{R}_+$ be a nonnegative integrable function with $int_X f(x) dmu = 1$, and let $rho$ be the probability measure $rho(Y) = int_Y f(x) dmu$, so $f$ is the density of $rho$ with respect to $mu$. Fix some $cgeq 0$ and let $A={x:f(x)geq c}$.




Claim: $A$ is a special set.




Proof: It suffices to show that if $mu(B) = mu(A)$, then $rho(B)leq rho(A)$, with equality if and only if $B$ and $A$ differ by a set of measure zero. If $mu(B) = mu(A)$, then $mu(B-A) = mu(A-B)$. Now we write
$begin{align*} rho(A) - rho(B) &= int_A f(x) dmu - int_B f(x) dmu \\
&= int_{A-B} f(x) dmu - int_{B-A} f(x) dmu \\
&= int_{A-B} f(x) dmu - int_{A-B}c\,dmu - int_{B-A} f(x) dmu + int_{B-A}c\,dmu\\
&= int_{A-B} (f(x)-c) dmu - int_{B-A} (f(x)-c) dmu.
end{align*}$



By construction, $f(x) geq c$ on $A$ and $f(x) < c$ on $B-A$, so the first integral is nonnegative and the second integral is nonpositive, and is in fact negative unless $mu(B-A)=0$, in which case $mu(A-B)=0$ as well. Thus, $rho(A)-rho(B)geq 0$, with strict inequality unless $A$ and $B$ differ by measure zero, QED.

Tuesday 27 November 2007

evolution - Which came first: The Chicken or the Egg?

Dunno about any serious scientific inquiries into the answer but I always thought the answer is egg. At some point the modern chicken has come into being as the progeny of two pre-modern chickens, however it had to be an egg before it could be a chicken and its parents couldn't have been modern chickens.



However, all that presupposes that you can draw a line in the evolutionary history of the modern (extant) chicken and say this is modern and what goes before is not. I dunno if that really can be done.



Edit: came across this guardian article: http://www.guardian.co.uk/science/2006/may/26/uknews




"Whether chicken eggs preceded chickens hinges on the nature of chicken eggs," said panel member and philosopher of science David Papineau at King's College London.



"I would argue it's a chicken egg if it has a chicken in it. If a kangaroo laid an egg from which an ostrich hatched, that would surely be an ostrich egg, not a kangaroo egg. By this reasoning, the first chicken did indeed come from a chicken egg, even though that egg didn't come from chickens."




And from Prof. Brookfield of the University of Nottingham:




The first chicken must have differed from its parents by some genetic change, perhaps a very subtle one, but one which caused this bird to be the first ever to fulfil our criteria for truly being a chicken; Thus the living organism inside the eggshell would have had the same DNA as the chicken that it would develop into, and thus would itself be a member of the species of chicken


ag.algebraic geometry - What properties "should" spectrum of noncommutative ring have?

I know almost nothing about noncommutative rings, but I have thought a bit about what the general concept of spectra might or should be, so I'll venture an answer.



One other property you might ask for is that it has a good categorical description. I'll explain what I mean.



The spectrum of a commutative ring can be described as follows. (I'll just describe its underlying set, not its topology or structure sheaf.) We have the category CRing of commutative rings, and the full subcategory Field of fields. Given a commutative ring $A$, we get a new category $A/$Field: an object is a field $k$ together with a homomorphism $A to k$, and a morphism is a commutative triangle. The set of connected-components of this category $A/$Field is $mathrm{Spec} A$.
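As a quick sanity check (my own example, not part of the original answer): take $A = mathbb{Z}$. An object of $A/$Field is a field $k$ together with a map $mathbb{Z} to k$; its kernel is a prime ideal of $mathbb{Z}$, and every morphism in the category preserves it. Conversely, two objects with the same kernel are connected through $mathbb{Q}$ or $mathbb{F}_p$, so the connected components are indexed by $(0), (2), (3), (5), ldots$, recovering $mathrm{Spec} mathbb{Z}$.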



There's a conceptual story here. Suppose we think instead about algebraic topology. Topologists (except "general" or "point-set" topologists) are keen on looking at spaces from the point of view of Euclidean space. For example, a basic thought of homotopy theory is that you probe a space by looking at the paths in it, i.e. the maps from $[0, 1]$ to it. We have the category Top of all topological spaces, and the subcategory Δ consisting of the standard topological simplices $Delta^n$ and the various face and degeneracy maps between them. For each topological space $A$ we get a new category Δ$/A$, in which an object is a simplex in $A$ (that is, an object $Delta^n$ of Δ together with a map $Delta^n to A$) and a morphism is a commutative triangle. This new category is basically the singular simplicial set of $A$, lightly disguised.



There are some differences between the two situations: the directions have been reversed (for the usual algebra/geometry duality reasons), and in the topological case, taking the set of connected-components of the category wouldn't be a vastly interesting thing to do. But the point is this: in the topological case, the category Δ$/A$ encapsulates




how $A$ looks from the point of view of simplices.




In the algebraic case, the category $A/$Field encapsulates




how $A$ looks from the point of view of fields.




$mathrm{Spec} A$ is the set of connected-components of this category, and so gives partial information about how $A$ looks from the point of view of fields.

pr.probability - How many trial picks expectedly sufficient to cover a sample space?

The expected number of picks needed equals the sum of the probabilities that at least $t$ picks are needed, which means that $t-1$ subsets left at least one value uncovered. We can use inclusion-exclusion to get the probability that at least one value is uncovered.



The probability that a particular set of $k$ values is uncovered after $t-1$ subsets are chosen is



$$Bigg(frac{n-k choose r}{n choose r}Bigg)^{t-1}$$



So, by inclusion-exclusion, the probability that at least one value is uncovered is



$$ sum_{k=1}^n {n choose k}(-1)^{k-1}Bigg(frac{n-k choose r}{n choose r}Bigg) ^{t-1} $$



And then the expected number of subsets needed to cover everything is



$$ sum_{t=1}^infty sum_{k=1}^n {n choose k}(-1)^{k-1} Bigg(frac{n-k choose r}{n choose r}Bigg)^{t-1} $$



Change the order of summation and use $s=t-1$:



$$ sum_{k=1}^n {n choose k}(-1)^{k-1} sum_{s=0}^infty Bigg( frac{n-k choose r}{n choose r}Bigg)^s$$



The inner sum is a geometric series.



$$ sum_{k=1}^n {n choose k} (-1)^{k-1}frac{n choose r}{{n choose r}-{n-k choose r}}$$



$$ {n choose r} sum_{k=1}^n (-1)^{k-1}frac{n choose k}{{n choose r}-{n-k choose r}}$$



I'm sure that should simplify further, but at least now it's a simple sum. I've checked that this agrees with the coupon collection problem for $r=1$.
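As a numerical sanity check, one can evaluate the final sum and compare it with the classical coupon-collector value $n H_n$ when $r = 1$; here is a small script I would use for that (not from the original answer):

from fractions import Fraction
from math import comb

def expected_picks(n, r):
    # Expected number of r-subsets drawn until all n values are covered,
    # using the alternating sum derived above (assumes 1 <= r <= n).
    total = Fraction(0)
    for k in range(1, n + 1):
        total += Fraction((-1) ** (k - 1) * comb(n, k),
                          comb(n, r) - comb(n - k, r))
    return comb(n, r) * total

n = 6
print(float(expected_picks(n, 1)))                              # 14.7
print(float(n * sum(Fraction(1, i) for i in range(1, n + 1))))  # n * H_n = 14.7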



Interestingly, Mathematica "simplifies" this sum for particular values of $r$, although what it returns even for the next case is too complicated to repeat, involving EulerGamma, the gamma function at half-integer values, and PolyGamma[0,1+n].

Sunday 25 November 2007

differential topology - Every Manifold Cobordant to a Simply Connected Manifold

Assume that $M^n$ has $pi_1$ finitely generated (Edit: and n>3). Choose a generator. We will construct (using surgery) a cobordism to $M'$ which kills that generator, and by induction we can kill all of $pi_1$. Choose an embedded loop which represents the generator, and choose a tubular neighborhood of the loop. We can view this as a (n-1)-dimensional vector bundle over $S^1$, the normal bundle. Since $M$ is oriented, this is a trivial vector bundle so we can identify this tubular neighborhood with $S^1 times D^{n-1}$.



Now we build the cobordism. We take $M times I$, which is a cobordism from $M$ to itself. To one end we glue $D^2 times D^{n-1}$ along the boundary piece $S^1 times D^{n-1}$ via its embedding into $M$. This is just attaching a handle to $M times I$. This new manifold is a cobordism from $M$ to $M'$, where $M'$ is just $M$ where we've done surgery along the given loop.



A van Kampen theorem argument shows that we have exactly killed the given generator of $pi_1$. Repeating this gives us a cobordism to a simply connected manifold.
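To spell out the van Kampen step (a sketch, assuming $n geq 4$): write $M'$ as the union of $M setminus (S^1 times mathring{D}^{n-1})$ and $D^2 times S^{n-2}$ glued along $S^1 times S^{n-2}$. Since the loop has codimension $n-1 geq 3$, removing its tubular neighborhood does not change $pi_1$; the handle piece $D^2 times S^{n-2}$ is simply connected; and $pi_1(S^1 times S^{n-2}) cong mathbb{Z}$ is generated by the chosen loop $gamma$. Van Kampen then gives $pi_1(M') cong pi_1(M)/langlelangle gamma ranglerangle$, killing exactly the chosen generator.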



Note that it is essential that our manifold was oriented. $mathbb{RP}^2$ is a counter example in the non-oriented setting, as all simply connected 2-manifolds are null-cobordant, but $mathbb{RP}^2$ is not.




[I was implicitly thinking high dimensions. Thanks to Tim Perutz for suggesting something was amiss when n=3]



If n=3 then this is "surgery in the middle dimension" and it is more subtle. First of all the normal bundle is an oriented 2-plane bundle over the circle, so there are in fact $mathbb{Z} = pi_1(SO(2))$ many ways to trivialize the bundle (these are normal framings). Ignoring this, if you carry out the above construction, you will see that (up to homotopy) M' is the union of $M - (S^1 times D^2)$ and $D^2 times S^1$ along $S^1 times S^1$. This can (and does) enlarge the fundamental group.



However a different argument works in dimensions n=1,2,3. The oriented bordism groups in those dimensions are all zero (see the Wikipedia entry on cobordism), so in fact every oriented 3-manifold is cobordant to the empty set (a simply connected manifold). The fastest way to see this is probably a direct calculation of the first few homotopy groups of the Thom spectrum MSO.

Saturday 24 November 2007

ag.algebraic geometry - Uniqueness/motivation for the Suslin-Voevodsky theory of relative cycles.

I will just sum up the situation as I see it (too big for the comment box).



One important goal is to set up a good intersection theory for cycles without quotienting by rational equivalence, and using it to get a composition product for finite correspondences, which are by definition elements of groups of the form $c_{equi}(Xtimes_S Y/X,0)$



It is true that the variety of definitions of cycle groups in the paper is somewhat confusing. There are 16 possible groups because, starting from the "bare" notion of relative cycles (def. 3.1.3), there are 4 binary conditions: being effective, being equidimensional, having compact support (c, PropCycl), and being "special", i.e. satisfying the equivalent conditions of lemma 3.3.9 (everything except Cycl and PropCycl). So you have



1)$z_{equi}(X/S,r)subset z(X/S,r)subset Cycl(X/S,r) supset Cycl_{equi}(X/S,r)$



and their effective counter-parts.



2)$c_{equi}(X/S,r)subset c(X/S,r)subset PropCycl(X/S,r) supset PropCycl_{equi}(X/S,r)$



and their effective counter-parts.



((1) is then a "subline" of (2).)



In a sense, the most satisfying definition would be to use only cycles which are flat over $S$ (the $mathbb{Z}Hilb$-groups, or the closely related $z_{equi}$) but pullbacks along arbitrary morphisms are not defined there in general.



With the groups Cycl, thanks to the relative cycle condition built into Cycl, you have pullbacks along arbitrary morphisms, but only with rational coefficients (thm 3.3.1; the denominators of the multiplicities are divisible by residue characteristics).



The main interest of the "special" relative cycles $z(-,-)$ is in their definition: they admit integral pullbacks! Then you have the small miracle that this condition is stable under those pullbacks and you get a subpresheaf. This means that using them you can set up intersection theory with integral coefficients even on singular char p schemes.



All this zoology simplifies when $S$ is nice: there are some results when $S$ is geometrically unibranch, but the nicest case is $S$ regular, in which case the chains of inclusions I wrote down collapse; you are left with two distinctions which are reasonable from the point of view of classical intersection theory: effective/non-effective, general/with compact support. Furthermore, the intersection multiplicities are computed by the Tor multiplicity formula, so the Suslin-Voevodsky theory is really an extension of the local intersection theory of regular rings as in Serre's book.