Wednesday, 31 October 2007

microbiology - If a human takes antibiotics are all bacteria in the body killed?

No



There are several reasons why this might not be true, as Alexander has discussed. An antibiotic often has a molecular target that isn't present in all bacteria; it's extremely hard to get antibiotics into certain parts of your body; and some bacteria will be defended against an antibiotic attack by biofilms, resistance mechanisms, and sheer statistical probability.



That is not to say that many don't die. Indeed, one of the major causes of Clostridium difficile infection is that antibiotics kill most of your gut bacteria, allowing the somewhat better protected C. diff to proliferate, start producing toxins, and send you to the hospital with symptoms ranging from diarrhea to perforated colon and worse. That disease is a direct consequence of "Antibiotics kill some but not all bacteria in you".

ca.analysis and odes - Growth of the "cube of square root" function

Here is a proof that $|g(n)|\le 1$ for all but finitely many $n$. You can extract an explicit bound for $n$ from the argument and check the smaller values by hand.



If $f(n)=n^{3/2}$ without the floor, then $g(n)\sim \frac{3}{4\sqrt n}$, so it is positive and tends to 0. When you replace $n^{3/2}$ by its floor, $g(n)$ changes by at most 2, hence the only chance for failure is to have $g(n)=2$ when the fractional parts of $n^{3/2}$ and $(n+2)^{3/2}$ are very small and the fractional part of $(n+1)^{3/2}$ is very close to 1 (the difference is less than $\frac{const}{\sqrt{n}}$).



Let $a,b,c$ denote the nearest integers to $n^{3/2}$, $(n+1)^{3/2}$ and $(n+2)^{3/2}$. Then $c-2b+a=0$ because it is an integer very close to $(n+2)^{3/2}-2(n+1)^{3/2}+n^{3/2}$. Denote $m=c-b=b-a$. Then $(n+1)^{3/2}-n^{3/2}<m$ and $(n+2)^{3/2}-(n+1)^{3/2}>m$. Observe that
$$
\frac{3}{2}\sqrt{n}<(n+1)^{3/2} - n^{3/2} < \frac{3}{2}\sqrt{n+1}
$$
(the bounds are just the bounds for the derivative of $x^{3/2}$ on $[n,n+1]$). Therefore
$$
\frac{3}{2}\sqrt{n} < m < \frac{3}{2}\sqrt{n+2}
$$
or, equivalently,
$$
n < \frac49 m^2 < n+2.
$$
If $m$ is a multiple of 3, this inequality implies that $n+1=\frac49 m^2=(\frac23m)^2$; then $(n+1)^{3/2}=(\frac23m)^3$ is an integer, and not slightly smaller than an integer as it should be. If $m$ is not divisible by 3, then
$$
n+1 = \frac49 m^2 + r
$$
where $r$ is a fraction with denominator 9 and $|r|<1$. From Taylor expansion
$$
f(x+r) = f(x) + r f'(x) + \frac12 r^2 f''(x+r_1), \quad 0<r_1<r,
$$
for $f(x)=x^{3/2}$, we have
$$
(n+1)^{3/2} = \left(\frac49 m^2 + r\right)^{3/2} = \frac8{27}m^3 + mr + \delta
$$
where $0<\delta<\frac1{4m}$.
This cannot be close to an integer because it is close (from above) to a fraction with denominator 27.
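The finitely many exceptional $n$ can be located numerically. Here is a minimal sketch, assuming $g(n)$ denotes the second difference $\lfloor (n+2)^{3/2}\rfloor - 2\lfloor (n+1)^{3/2}\rfloor + \lfloor n^{3/2}\rfloor$ (consistent with the argument above); it uses the identity $\lfloor n^{3/2}\rfloor = \lfloor\sqrt{n^3}\rfloor$ to stay in exact integer arithmetic:

```python
from math import isqrt

def g(n):
    """Second difference of floor(n^(3/2)).
    floor(n^(3/2)) = isqrt(n**3) exactly for non-negative integers n,
    so no floating-point error can creep in."""
    f = lambda m: isqrt(m ** 3)
    return f(n + 2) - 2 * f(n + 1) + f(n)

# List the n below some bound where |g(n)| > 1.
print([n for n in range(1, 10000) if abs(g(n)) > 1])
```

For example $g(1) = 5 - 2\cdot 2 + 1 = 2$, so the bound does genuinely fail for at least one small $n$, matching the "all but finitely many" statement.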

evolution - How did the human brain evolve?


why would such an individual have an evolutionary advantage?




The first thing to keep in mind is that natural selection will seize upon even very slight advantages and evolve them up to the point where evolving them further is no longer viable. So, to you a slight increase in intelligence might be unnoticeable, but in a naturally selecting system it would be very noticeable. Don't mix up sexual selection (where the preference of one sex shapes the selection of genes) with natural selection (where fitness to the environment shapes the selection of genes). Sexual selection may or may not have helped shape brain evolution; I can't produce evidence that it did.



But it is easier to create evolutionary anecdotes about the increased fitness that slight gains in intelligence convey. It's simplistic to speak of just "intelligence" as evolving. Many different nervous structures evolved throughout the animal kingdom, each with a function that directly increased fitness. I get the feeling that you're specifically talking about the evolution of the neocortex, because you reference human intelligence.



So, what I guess you're really asking is what evolutionary advantage does the neocortex convey? The answer isn't that hard, once you understand a little about the neocortex. It is an organ that simply models the external and internal world as perceived by your sensory organs. By external world, I mean modeling things like vision, hearing, touch, etc... By internal world, I mean modeling the actions you can perform, the plans you make, and so on. Increasing the size of the neocortex is essentially increasing the possible complexity of these models. Not only can you find more subtle and complex patterns out there, you can develop increasingly subtle plans and ruses too.



Watching documentary footage of tribal hunters provides a really stark look at just how useful the neocortex is. It frames our intelligence in terms of a situation similar to one we evolved in. It's not hard to imagine why the growth has been explosive. Outsmarting other animals is a great niche that in our evolutionary history was virtually unexploited.

fa.functional analysis - Hilbert subspaces of indefinite inner product spaces

Let $E$ be a real linear space, endowed with a non-degenerate symmetric
bilinear form $(.,.)$.



Suppose that the [indefinite] inner product space $(E,(.,.))$
satisfies the following [sequential] properties:



(WSC) If the sequence $\{(x_{n}, y)\}$ is Cauchy for each $y$ in $E$, then there
exists some $x$ in $E$ such that $\left(x_{n}-x,y\right) \rightarrow 0$
for every $y$ in $E$.



(That is to say, $(E,(.,.))$ is weakly sequentially complete.)



and



(DPG) If $\left(x_{n},y\right) \rightarrow 0$ for every $y$ in $E$,
then $\left(x_{n},x_{n}\right) \rightarrow 0$.



(That would be a sort of "Dunford-Pettis & Grothendieck" property for indefinite inner product spaces.)



Suppose also that $|E|$ is "big enough" (at least $|E| > |\mathbb{R}|$).



Conjecture. $(E,(.,.))$ contains an infinite-dimensional Hilbert
subspace.



(That is, there exists a linear isometry from $(\ell^{2},\langle\cdot,\cdot\rangle)$
into $(E,(.,.))$.)

Tuesday, 30 October 2007

dg.differential geometry - Do hyperKahler manifolds live in quaternionic-Kahler families?

A geometry question that I thought about more seriously a few years ago... thought it'd be a good first question for MO.



I'm aware that there are a number of Torelli type theorems now proven for compact HyperKahler manifolds. Also, I think that Y. Andre has considered some families of HyperKahler (or holomorphic symplectic) manifolds in some paper.



But, when I see such a moduli problem studied, the data of a HyperKahler manifold seems to include a preferred complex structure. For example, a HyperKahler manifold is instead viewed as a holomorphic symplectic manifold. I'm aware of various equivalences, but there are certainly different amounts of data one could choose as part of a moduli problem.



I have never seen families of hyperKahler manifolds in which the distinction between hyperKahler rotations and other variation is suitably tracked. Here is what I have in mind for a "quaternionic-Kahler family of hyperKahler manifolds":



Fix a quaternionic-Kahler base space $X$, with twistor bundle $Z \rightarrow X$. Thus the fibres $Z_x$ of $Z$ over $X$ are just Riemann spheres $P^1(C)$, and $Z$ has an integrable complex structure.



A family of hyperKahler manifolds over $X$ should be (I think) a fibration of complex manifolds $\pi: E \rightarrow Z$, such that:



  1. Each fibre $E_z = \pi^{-1}(z)$ is a hyperKahler manifold $(M_z, J_z)$ with distinguished integrable complex structure $J_z$.

  2. For each point $x \in X$, let $Z_x \cong P^1(C)$ be the twistor fibre. Then the family $E_x$ of hyperKahler manifolds with complex structure over $P^1(C)$ should be (isomorphic to) the family $(M, J_t)$ obtained by fixing a single hyperKahler manifold, and letting the complex structure vary in the $P^1(C)$ of possible complex structures. (I think this is called hyperKahler rotation.)

In other words, the actual hyperKahler manifold should only depend on a point in the quaternionic Kahler base space $X$, but the complex structure should "rotate" in the twistor cover $Z$.



This sort of family seems very natural to me. Can any professional geometers make my definition precise, give a reference, or some reason why such families are a bad idea? I'd be happy to see such families, even for hyperKahler tori (which I was originally interested in!)

Monday, 29 October 2007

ecoli - What is the appropriate method to send a strain?

I believe Addgene sends strains as bacterial stabs.



This is the protocol from Qiagen.




E. coli strains can also be stored for up to 1 year as stabs in soft
agar. Stab cultures are used to transport or send bacterial strains to
other labs. Prepare stab cultures as follows:



  1. Prepare and autoclave LB agar (standard LB medium containing 0.7% agar) as described in the last issue of QIAGEN News (1).


  2. Cool the LB agar to below 50°C (when you can hold it comfortably) and add the appropriate antibiotic(s). While the agar is still liquid,
    add 1 ml agar to a 2-ml screw-cap vial under sterile conditions, then
    leave to solidify.


Vials of agar can be prepared in batches and stored at room
temperature until required.



  3. Using a sterile straight wire, pick a single colony from a freshly streaked plate and stab it deep down into the soft agar several times
    (Figure 1).

  4. Incubate the vial at 37°C for 8–12 h, leaving the cap slightly loose.



  5. Seal the vial tightly and store in the dark, preferably at 4°C.

When recovering a stored strain, it is advisable to check the
antibiotic markers by streaking the strain onto a selective plate.




I believe if the shipping takes less than 24 hours you could send a stab on ice.



Hope that helps



Ben

fa.functional analysis - Functional calculus for direct integrals

Suppose I have a direct integral of Hilbert spaces $H = \int^{\oplus} H_x \, dx$, and suppose I have an operator $T: H \to H$ which is decomposable, and so it can be written as
$T = \int^{\oplus} T_x$ for some measurable field of operators $T_x$. Suppose furthermore that every $T_x$ is self-adjoint (and so also $T$ is self-adjoint), and let $f$ be a bounded measurable function on $\mathbb{R}$.



Under what conditions is $f(T)$ decomposable (I guess always) and equal to the direct integral of the field $f(T_x)$?



One paper which says something about this problem is Chow, Gilfeather, "Functions of direct integrals of operators". It actually states that the only condition needed is that the $T_x$ are contractions. But unfortunately I don't understand this paper, since it doesn't state its assumptions very precisely - for example, it doesn't seem to be assumed that the operator $T$ (or the operators $T_x$) is (are) normal, and so I don't know what kind of functional calculus is being considered.
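For intuition, here is a finite-dimensional toy version of the question: a sketch under the obvious simplification of a direct sum of two fibres instead of a direct integral, with `numpy` standing in for the spectral theorem. For block-diagonal self-adjoint $T = T_1 \oplus T_2$, the functional calculus does act fibrewise:

```python
import numpy as np

def apply_function(T, f):
    """Functional calculus for a finite-dimensional self-adjoint operator:
    diagonalize T and apply f to its eigenvalues."""
    w, V = np.linalg.eigh(T)
    return V @ np.diag(f(w)) @ V.conj().T

# Two self-adjoint "fibres" and their direct sum.
T1 = np.array([[1.0, 2.0], [2.0, -1.0]])
T2 = np.array([[0.0, 1.0], [1.0, 3.0]])
Z = np.zeros((2, 2))
T = np.block([[T1, Z], [Z, T2]])

f = np.tanh  # a bounded (here even continuous) function on the real line
lhs = apply_function(T, f)
rhs = np.block([[apply_function(T1, f), Z], [Z, apply_function(T2, f)]])
print(np.allclose(lhs, rhs))  # True
```

Of course, the analytic content of the question is precisely what this toy case hides: measurability of the field $f(T_x)$ and the passage from sums to integrals.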

Saturday, 27 October 2007

zoology - Why is it that cats can jump so high for their size, compared with humans?

My cat is about 1' high at the shoulder, and I am a little over 6', but my cat can easily jump onto something as high as I am. That is 6x its height. If a cat can do this, then why can't I jump up onto my barn roof? That is a little less than 36' up. I have a hard time jumping onto even a 4' platform. Now if my cat had trouble jumping onto an 8" platform, I would think that pathetic. Do cats have muscles 20x stronger than humans, for their mass?
Is it just their skeleton providing leverage?

Friday, 26 October 2007

ag.algebraic geometry - Mirror of local Calabi-Yau

The physicists (see e.g. this paper of Aganagic and Vafa) will write the mirror as a threefold $X$ which is an affine conic bundle over the holomorphic symplectic surface $\mathbb{C}^{\times}\times \mathbb{C}^{\times}$ with discriminant a Seiberg-Witten curve $\Sigma \subset \mathbb{C}^{\times}\times \mathbb{C}^{\times}$. In terms of the affine coordinates $(u,v)$ on $\mathbb{C}^{\times}\times \mathbb{C}^{\times}$, the curve $\Sigma$ is given by the equation
$$
\Sigma : u + v + a uv^{-1} + 1 = 0,
$$
and so $X$ is the hypersurface in $\mathbb{C}^{\times}\times \mathbb{C}^{\times} \times \mathbb{C}^2$ given by the equation
$$
X : xy = u + v + a uv^{-1} + 1.
$$



From a geometric point of view it may be more natural to think of the mirror not as an affine conic fibration over a surface but as an affine fibration by two dimensional quadrics over a curve. The idea will be to start with the Landau-Ginzburg mirror of $\mathbb{P}^{1}$, which is $\mathbb{C}^{\times}$ equipped with the superpotential $w = s + as^{-1}$, and to consider a bundle of affine two dimensional quadrics on $\mathbb{C}^{\times}$ which degenerates along a smooth fiber of the superpotential, e.g. the fiber $w^{-1}(0)$. In this setting the mirror will be a hypersurface in $\mathbb{C}^{\times}\times \mathbb{C}^{3}$ given by the equation
$$
xy - z^2 = s + as^{-1}.
$$
Up to change of variables this is equivalent to the previous picture but it also makes sense in non-toric situations. Presumably one can obtain this way the mirror of a Calabi-Yau which is the total space of a rank two (semistable) vector bundle of canonical determinant on a curve of higher genus.

ag.algebraic geometry - How to introduce Kahler differential in category?

It turns out that the whole complex of concepts



works in remarkable generality on pure category-theoretic grounds with respect to any category, and is nothing but different facets of one single general concept: that of the



This goes back to the old observation by Quillen, that the category of modules over a ring is equivalent to the category of abelian group objects in the given overcategory of rings. All other concepts follow from this: derivations are sections through the over-objects, and the assignment of Kähler differentials is the left-adjoint to the projection down from the overcategory.



The notion of "tangent $(\infty,1)$-category" takes this idea to its full generality: this is the over-$(\infty,1)$-category, fiberwise stabilized. See the above link for details.



This complete picture, based on Quillen's old idea, is fully developed and exposed in the first part of the very nice article



So the answer to the question is: a notion of Kähler differentials exists with respect to any $(\infty,1)$-category $C$! Here for given $C$, the resulting notion models universal modules for objects in $C^{op}$, regarded as function rings over the objects in $C$.



I can't quite tell what the abelian category is supposed to be that appears in the question. But notice that the plain vanilla version of the story is obtained by letting $C$ be the category of (simplicial) monoids in the abelian category $Ab$ of abelian groups.



So, indeed, for any abelian category whatsoever, it makes very good sense to regard the category of monoids inside it as a replacement for the category of rings, regard the category of abelian group objects in the slice-categories of that as the corresponding bifibration of modules, and take the corresponding Kähler differentials to be the corresponding universal modules with respect to derivations, just following the general nonsense linked to above.

Thursday, 25 October 2007

linear algebra - Extracting integer multiplicative factors from the sum of certain sets of (finite-precision) real numbers?

Update based on Michael's answer (thanks again!) - Can the LLL or PSLQ algorithms provide a (knowably - i.e. not just incidentally) unique solution for the set of integer multiplicative factors? Are there other algorithms (perhaps with worse run-time complexities) that can?



Imagine that one has a set of 'n' finite-precision real numbers - (r_1, r_2, ..., r_n), each with an associated positive integer multiplicative factor, (i_1, i_2, ..., i_n). Here, one only has access to the values in the set of real numbers, as well as the sum of the real numbers multiplied by their corresponding multiplicative factors, 'S', i.e. - S = i_1*r_1 + i_2*r_2 + ... + i_n*r_n.



What's the most efficient way to test that the sum of a particular set of real numbers will (or, obviously, will not) always allow us to extract a unique solution for the set of integer multiplicative factors? Or to find/do this with fewest restrictions on the values of the integer multiplicative factors?



In the case that this has a very simple answer, apologies.



(Edit - Changed "each with an associated "set of" positive integer multiplicative factors" --> "each with an associated positive integer multiplicative factor". One finite-precision real number 'r_k' has one integer multiplicative factor 'i_k'.)
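As a baseline for thinking about uniqueness (not an efficient method like LLL or PSLQ), here is a brute-force sketch with hypothetical values: it enumerates all small factor vectors and reports every one consistent with S, so a unique recovery corresponds to exactly one match. The names, bounds, and tolerances are illustrative assumptions.

```python
from itertools import product

def consistent_factors(r, S, max_factor=10, tol=1e-9):
    """Enumerate positive integer vectors (i_1, ..., i_n) with entries up to
    max_factor, and return those with sum(i_k * r_k) within tol of S.
    A unique recovery means exactly one vector is returned."""
    matches = []
    for cand in product(range(1, max_factor + 1), repeat=len(r)):
        if abs(sum(i * x for i, x in zip(cand, r)) - S) < tol:
            matches.append(cand)
    return matches

# Rationally dependent values: recovery is ambiguous.
print(consistent_factors([1.0, 0.5], 3.0))        # both (1, 4) and (2, 2) fit

# 1 and sqrt(2) are rationally independent: recovery is unique here.
print(consistent_factors([1.0, 2 ** 0.5], 3.0 + 2 * 2 ** 0.5))
```

The second example hints at the general phenomenon: when the r_k are rationally independent and the factors are bounded, the solution is unique, but finite precision limits how large the factors can be before distinct combinations collide within the tolerance.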

pr.probability - Conditional expectation of convolution product equals..

Assuming $\Omega$ has the structure for defining convolutions, I don't think it is ever an algebra homomorphism. Take $X$ to be supported on $\mathcal{G}^c$, i.e. take some set of non-zero measure in $\mathcal{G}^c$ and let $X$ be a function whose support lies in that set; then $E(X\ast Y|\mathcal{G})\neq 0$ but $E(X|\mathcal{G})=0$, so $E(X|\mathcal{G})\ast E(Y|\mathcal{G})=0$.



Edit: Scratch what I said. I was confusing sub-$\sigma$-algebra with sub-algebra of random variables, and even in the finite case my statement is completely incorrect. In almost every instance $E(X|\mathcal{G})$ will not be zero, as Jonas points out in the comments.

Wednesday, 24 October 2007

ag.algebraic geometry - Why is a partition function of a Topological Conformal Field Theory related to Deligne-Mumford space

As I said in the comments, you should read AJ's answer to this question.



If you haven't read Costello's paper "Topological conformal field theories and Calabi-Yau categories", then you should definitely take a look at it, as the paper you reference is the sequel to this paper. You should also take a look at Lurie's TFTs paper.



In Costello's work and in Lurie's work, you will notice that TCFTs, by definition, involve moduli spaces of nonsingular Riemann surfaces (or nonsingular algebraic curves).



On the other hand, in order to do Gromov-Witten theory, we also need to consider moduli spaces of certain singular Riemann surfaces ("stable" Riemann surfaces). This is where Deligne-Mumford spaces come into play. So Gromov-Witten theory and TCFT are not exactly the same thing; they involve different moduli spaces. The idea of Costello (and Kontsevich) is that sometimes we can take a TCFT and extend the theory from the uncompactified moduli space to the compactified moduli space, thus getting something which is "a Gromov-Witten theory" associated to the TCFT.



One of Costello's and Kontsevich's motivations comes from mirror symmetry. The idea is that the Fukaya category of a compact symplectic manifold $X$ should give a TCFT. This is why I asked this question. Then, we should be able to extend this TCFT to the DM boundary and obtain the Gromov-Witten theory of the manifold. On the mirror side, for example the derived category of coherent sheaves on a Calabi-Yau variety $Y$ should also give a TCFT. Again, if we extend this TCFT to the DM boundary, we should get "a Gromov-Witten theory", which will not be the Gromov-Witten theory of $Y$, but it should at least be related to the Gromov-Witten theory of whatever $Y$ is mirror to.



I might be wrong about this, but I think that in some sense we have to consider compactifications of the moduli spaces, such as the Deligne-Mumford compactification (but there are other possible compactifications), because in order to obtain things like partition functions or the Gromov-Witten potential function, we must do integrals over the moduli spaces in question. But if the moduli spaces are non-compact, which they are, there may be issues in defining these integrals. So one way to get around this is to compactify.



In any case that is at least vaguely the broad picture. If you want to know more details you will have to clean up your question and make it more specific.

Sunday, 21 October 2007

fa.functional analysis - Factorization through $ell_{1}$ and operator ideals

Recently, I bumped into the class of operators that factor through $\ell_{1}(X)$ for some set $X$. For now, $X$ is a set of arbitrary cardinality, but if it leads to a more concrete answer to my questions below, feel free to restrict $X$ to be countable. The restriction makes little to no difference to what follows. I should also warn that I am fairly ignorant of operator ideals and Banach space theory, so please be gentle.



First some definitions. Define,



$$
\|T\|_{\ell_{1}}= \inf\{\|R\|\,\|S\|\}
$$



where the infimum is taken over all the factorizations of $T$ as $A\xrightarrow{S}\ell_{1}(X)\xrightarrow{R}B$. Obviously, $\|T\|\leq \|T\|_{\ell_{1}}$. Define $\mathcal{L}(A, B)$ to be the linear space of operators that factor through some $\ell_{1}(X)$, with the above norm. The little bit of thought that I have dedicated to this has produced the following up to now (and please correct me if I have fumbled somewhere):



  1. We have $\|RTS\|_{\ell_{1}}\leq \|R\|\|T\|_{\ell_{1}}\|S\|$. Sketch: obvious.


  2. The normed space $\mathcal{L}(A, B)$ is complete. Sketch: If $(T_{n})$ is $\|\cdot\|_{\ell_{1}}$-Cauchy then it has a uniform limit. To prove that this limit factors through some $\ell_{1}(X)$, note two things. First, if you have a factorization through $\ell_{1}(X)$ as $RS$ and $X\subseteq Y$ then, since $\ell_{1}(X)$ is a norm-1 complemented subspace of $\ell_{1}(Y)$, you can make the factorization pass through the larger $\ell_{1}(Y)$ without altering $\|R\|\|S\|$. Second, one has the isometric isomorphism
    $$
    \sum_{n}\ell_{1}(X_{n})\cong \ell_{1}(\coprod_{n} X_{n}),
    $$
    which allows one to take a sequence of factorizations and push them all to a common space $\ell_{1}(X)$. Thus the uniform limit factors through some $\ell_{1}(X)$.


  3. Finite-rank operators factor through $\ell_{1}(X)$. Sketch: all finite-dimensional spaces are linearly homeomorphic to $\ell_{1}(n)$. These first three conditions taken together mean that $(A, B)\mapsto \mathcal{L}(A, B)$ is an operator ideal (or Banach ideal; I am uncertain of the official terminology).


  4. Each $T$ is completely continuous. Sketch: a sequence in $\ell_{1}(X)$ lives inside a copy of $\ell_{1}$. The Schur property of $\ell_{1}$ gives the result.


Now for my first batch of questions: can this class of operators be characterized? Any more salient properties of these operators? And what about the norm $\|\cdot\|_{\ell_{1}}$ - is there some other more enlightening description of it? How far is it from the operator norm?



The second batch of questions concerns what properties are required of a full subcategory $C$ of the category of Banach spaces so that one obtains an operator ideal by factorizing operators through it. An obvious example is the ideal of weakly compact operators, which by Davis-Figiel-Johnson-Pelczynski is the class of operators that factor through reflexive spaces. My guess is that something like $\omega_{1}$-filteredness of $C$, with $\omega_{1}$ the first uncountable ordinal, is enough for the argument to go through, but I am sure someone smarter and more knowledgeable has already thought about this.



If you have appropriate references, that would be great; extra kudos if available online. Next September I will have access to a library and plan to get my hands on the Defant-Floret monograph Tensor norms and operator ideals -- not a very cheerful prospect actually, as the book looks rather daunting. The book Absolutely summing operators by Diestel, Jarchow and Tonge should also be useful, but alas, last time I checked it was not available.



Regards, TIA,
G. Rodrigues

Saturday, 20 October 2007

nt.number theory - moduli space and modularity

Kisin's work is fairly technical, and is devoted to studying deformations of Galois representations which arise by taking $\overline{K}$-valued points of a finite flat group scheme
over $\mathcal{O}_K$ (where $K$ is a finite extension of $\mathbb{Q}_p$).



The subtlety of this concept is that when $K$ is ramified over $\mathbb{Q}_p$ (more precisely,
when $e \geq p-1$, where $e$ is the ramification degree of $K$ over $\mathbb{Q}_p$), there
can be more than one finite flat group scheme modelling a given Galois representation.
E.g. if $p = 2$ and $K = {\mathbb Q}_2$ (so that $e = 1 = 2 - 1$), the trivial character
with values in the finite field $\mathbb{F}_2$ has two finite flat models over $\mathbb{Z}_2$:
the constant etale group scheme $\mathbb{Z}/2 \mathbb{Z}$, and the group scheme $\mu_2$
of 2nd roots of unity.



In general, as $e$ increases, there are more and more possible models. Kisin's work shows that they are in fact classified by a certain moduli space (the "moduli of finite flat group schemes" of the title). He is able to get some control over these moduli spaces, and hence prove new modularity lifting theorems; in particular, with this (and several other fantastic ideas) he is able to extend the Taylor--Wiles modularity lifting theorem to the context of arbitrary ramification at $p$, provided one restricts to a finite flat deformation problem.
This result plays a key role in the proof of Serre's conjecture by Khare, Wintenberger, and Kisin.



The detailed geometry of the moduli spaces is controlled by some Grassmannian-type structures that are very similar to ones arising in the study of local models of Shimura varieties. However, there is not an immediately direct connection between the two situations.



EDIT: It might be worth remarking that, in the study of modularity of elliptic curves,
the fact that the modular forms classifying elliptic curves over $\mathbb{Q}$ are themselves
functions on the moduli space of elliptic curves is something of a coincidence.



One can already see this from the fact that lots of other objects over $\mathbb{Q}$ that are not elliptic curves are also classified by modular forms, e.g. any abelian variety
of $GL_2$-type.



When one studies more general instances of the Langlands correspondence, it becomes increasingly clear that these two roles of elliptic curves (providing the moduli space,
and then being classified by modular forms which are functions on the moduli space) are independent of one another.



Of course, historically, it helped a lot that the same theory that was developed to study the Diophantine properties of elliptic curves was also available to study the Diophantine properties of the moduli spaces (which again turn out to be curves, though typically not elliptic curves) and their Jacobians
(which are abelian varieties, and so can be studied by suitable generalizations of many of the tools developed in the study of elliptic curves). But this is a historical relationship between the two roles that elliptic curves play, not a mathematical one.

Friday, 19 October 2007

nt.number theory - Modular forms and the Riemann Hypothesis

I know two statements about modular forms that are Riemann Hypothesis-ish.



First, note that the constant term of the level-one non-holomorphic Eisenstein series $E_s$ is $y^s+c(s)y^{-s}$, and that the poles of $c(s)$ are the same as the poles of $E_s$. We can directly calculate that $c(s)={\Lambda(s)\over\Lambda(1+s)}$ (this depends on your precise normalization of the Eisenstein series), where $\Lambda$ is the completed zeta function. We can actually say something about the location of the poles of $E_s$ (using the spectral theory of automorphic forms). Unfortunately, we only know how to control poles for ${\rm Re}(s)\ge 0$. This does give an alternate proof of the nonvanishing of $\zeta(s)$ at the edge of the critical strip (from the lack of poles of ${\Lambda(it)\over\Lambda(1+it)}$), but it doesn't seem possible to go further to the left (though it does generalize to other $L$-functions appearing as the constant term of cuspidal-data Eisenstein series).



Second, the values of modular forms at certain (Heegner) points in the upper-half plane can be related to zeta functions. For example, $E_s(i)={\Lambda_{{\mathbb Q}(i)}(s)\over \Lambda_{\mathbb Q}(2s)}$. The general statement is simple to express adelically. Take a quadratic extension $k_1$ of $k$, and let $H$ denote $k_1^{\times}$ as a $k$-group and $E_s$ the standard level-one Eisenstein series on $G=GL_2(k)$. Take a character $\chi$ on $Z_{\mathbb A}H_k\backslash H_{\mathbb A}$; then
$$\int_{Z_{\mathbb A}H_k\backslash H_{\mathbb A}}E_s(h)\chi(h)\, dh={\Lambda_{k_1}(s,\chi)\over \Lambda_k(2s)}$$
where $Z$ denotes the center of $G$, and we have normalized the measure on the quotient space to be 1. Note that since $H$ is a non-split torus in $G$, the quotient is compact, so the integral is finite. In fact, the integrand is invariant (on the right) under a compact open subgroup $K$ of $H_{\mathbb A}$, so the integral is actually over the double coset space $Z_{\mathbb A}H_k\backslash H_{\mathbb A}/K$, which is actually a finite group.
In order to get the Riemann zeta function in the numerator on the right-hand side, you would need to integrate over a split torus, which is precisely the Mellin transform, and you would have convergence issues. Note that if it did converge, the Mellin transform of $E_s$ would be
$$\int_{Z_{\mathbb A}M_k\backslash M_{\mathbb A}} E_s(a)|a|^v\, da={\Lambda(v+s)\Lambda(v+1-s)\over\Lambda(2s)}$$



The second idea is more commonly discussed in the context of subconvexity problems for general $L$-functions. (See Iwaniec's Spectral Methods of Automorphic Forms, especially Chapter 13.) A class of subconvexity results is the Lindelöf Hypothesis, which is one of the stronger implications of the Riemann Hypothesis.

mp.mathematical physics - Is the 'massive' Calogero-Moser system still integrable?

Background



The (rational) Calogero-Moser system is the dynamical system which describes the evolution of $n$ particles on the line $\mathbb{C}$ which repel each other with force inversely proportional to the cube of their distance. If the particles have (distinct!) positions $q_i$ and momenta $p_i$, then the Hamiltonian which describes this system is
$$ H=\sum_i p_i^2+\sum_{i\neq k}\frac{1}{(x_i-x_k)^2} $$
There are many interesting properties of this system, but one of the first interesting properties is that it is `completely integrable'. This means that solving it explicitly amounts to solving a series of straightforward integrals.



The integrability can most easily be shown by showing that the phase space for this system includes into a symplectic reduction of a certain matrix space, and then noticing that the above Hamiltonian is a restriction of an integrable Hamiltonian on the whole space. This is done by assigning to any ensemble of points $q_i$ and momenta $p_i$ a pair of $n\times n$ matrices $X$ and $Y$, where $X$ is the diagonal matrix with $q_i$ on the diagonal entries, while $Y$ is given by
$$ Y_{ii}=p_i, \; Y_{ik}=(x_i-x_k)^{-1}, \; i\neq k $$
This matrix assignment defines a map from the configuration space $CM_n$ of the CM system to the space of pairs of matrices. The space of pairs of matrices $(X,Y)$ is naturally a symplectic space from the bilinear form $(X,Y)\cdot (X',Y')=Tr(XY')-Tr(X'Y)$, and the action of $GL_n$ by simultaneous conjugation naturally has a moment map. Therefore, we symplectically reduce the space of pairs of matrices at a specific coadjoint orbit (not the origin) and get a new symplectic space $\overline{CM}_n$.



Composing the above matrix assignment with symplectic reduction, we get a map $CM_n\rightarrow \overline{CM}_n$. This map turns out to be a symplectic inclusion which has dense image. We also notice that the functions $Tr(Y^i)$, as $i$ goes from $1$ to $n$, descend to a Poisson-commuting family of functions on $\overline{CM}_n$, and because $\overline{CM}_n$ is $2n$-dimensional, each of the functions $Tr(Y^i)$ gives an integrable flow on $\overline{CM}_n$. Finally, we notice that $Tr(Y^2)$ restricts to $H$ on $CM_n$.
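The last claim can be checked numerically. One caveat: with the purely real off-diagonal entries $Y_{ik}=(x_i-x_k)^{-1}$ as written above, the cross terms in $Tr(Y^2)$ come out with a minus sign; the sketch below uses the common convention of inserting a factor of $\sqrt{-1}$ in the off-diagonal entries, which makes $Tr(Y^2)$ equal $H$ on the nose. This normalization is an assumption on my part, not something stated in the text.

```python
import numpy as np

def cm_lax_matrix(q, p):
    """Lax-type matrix for the rational CM system, with the convention
    Y[i, k] = sqrt(-1) / (q_i - q_k) off the diagonal (see lead-in)."""
    n = len(q)
    Y = np.diag(p).astype(complex)
    for i in range(n):
        for k in range(n):
            if i != k:
                Y[i, k] = 1j / (q[i] - q[k])
    return Y

def hamiltonian(q, p):
    """H = sum p_i^2 + sum_{i != k} 1/(q_i - q_k)^2."""
    n = len(q)
    pairs = sum(1.0 / (q[i] - q[k]) ** 2
                for i in range(n) for k in range(n) if i != k)
    return sum(x * x for x in p) + pairs

q, p = [0.3, 1.1, 2.7], [0.5, -0.2, 0.9]
Y = cm_lax_matrix(q, p)
print(abs(np.trace(Y @ Y).real - hamiltonian(q, p)) < 1e-9)  # True
```

Note that $Y$ is Hermitian with this convention, so $Tr(Y^2)$ is automatically real.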



The Massive Version of the CM System



Now, make the following change to the system. To every particle, assign a number $m_i$ (the mass), which can be in $\mathbb{C}$, but I am interested in the case where the $m_i$ are positive integers. Define the massive CM Hamiltonian as
$$H_m=\sum_i\frac{p_i^2}{m_i}+\sum_{i\neq k}\frac{m_im_k}{(x_i-x_k)^2} $$
The physical meaning of this equation is that particles still feel a force inversely proportional to the cube of their distance, but the force is proportional to the masses of the particles involved; also, particles resist acceleration in proportion to their mass. If the force were to drop off inversely proportional to the square of their distance, and attract instead of repel, this would model how massive particles move under the influence of gravity.



Questions



  1. Is this system integrable?

  2. Can it be realized in a similar matrix form?

  3. Does it have any interesting or new behavior compared to the usual CM system?

An Idea



It is almost possible to realize this Hamiltonian by a simple modification of the previous approach. Let $M$ denote the diagonal matrix with the $m_i$ on the diagonal. Then
$$\mathrm{Tr}(MYMY)=\sum_im_i^2p_i^2+\sum_{i\neq k}\frac{m_im_k}{(x_i-x_k)^2}.$$
The functions $\mathrm{Tr}((MY)^i)$ should again form a Poisson-commutative family. Rescaling the $p_i$ by $m_i^{3/2}$ gives the massive Hamiltonian $H_m$; however, this rescaling is not symplectic, and so it won't preserve the flows.



Another Idea



In the case of integer $m_i$, one possibility is to work with $N\times N$ matrices rather than $n\times n$ matrices, where $N=\sum m_i$. Then it is possible to construct a matrix $X$ with eigenvalues $x_i$, each occurring with multiplicity $m_i$, as well as a matrix $Y$ such that $(X,Y)$ defines a point in $\overline{CM}_N$. The Hamiltonian $\mathrm{Tr}(Y^2)$ even restricts to the correct 'massive' Hamiltonian $H_m$. However, the flow described by this Hamiltonian on $\overline{CM}_N$ will in almost all cases immediately separate eigenvalues that started together, which we don't want. If we restrict the Hamiltonian to the closed subspace where the eigenvalues are required to stay together, then this gives the desired flow. Unfortunately, restricting to a closed subvariety doesn't preserve integrability of a Hamiltonian.

gromov witten theory - Why would I want to know (equivariant) quantum cohomology?

Hi, I am not really an expert in this field, but I think (equivariant) quantum cohomology rings have very nice combinatorics and show up in many places. A few months ago I finished a paper (joint with Christian Korff) which connects the more or less completely understood quantum cohomology of the Grassmannian with the Verlinde algebra (which for me is the fusion algebra of certain tilting modules for a quantum group at a root of unity).



Have a look at arXiv:0909.2347!



There are indeed lots of connections with integrable systems. In our work we look at a very easy situation: take the affine Dynkin diagram for affine sl(n) (that is, a circle with n points!). Then consider the integrable system where you can place particles at these n places, either "bosonic" (however many you want at each place) or "fermionic" (at most one at each place). Now there is the operation of moving a particle to the next place. This defines linear maps on the space of all particle configurations which fix the total number of particles. Then we define Schur functors where the variables are these (non-commuting) operators.



Now the quantum cohomology of the Grassmannian Gr(k,n+k) has a basis which we can identify with certain partitions or with fermionic particle configurations on an (n+k)-circle using k particles.



Whereas the fusion algebra at level k has a basis which can again be identified with certain partitions or with bosonic particle configurations on an n-circle.



The funny thing now is that the multiplication in either of the rings is just given by
a*b = (the Schur polynomial attached to a, viewed as an operator) applied to b.



This description of quantum cohomology was found by Postnikov a few years ago, but he did not connect it to this integrable system. The whole thing reproves and makes explicit an old result due to Witten, Gepner, Vafa and Intriligator: the fusion ring of gl(n) at level k is isomorphic to the quantum cohomology of the Grassmannian Gr(k,n+k) when we specialise q to 1.



So already in this really basic baby example something interesting shows up, so I guess one really should study all sorts of equivariant quantum cohomology rings!

Thursday, 18 October 2007

ag.algebraic geometry - Why do automorphism groups of algebraic varieties have natural algebraic group structure?

It is not always true that the automorphism group of an algebraic variety has a natural algebraic group structure. For example, the automorphism group of $\mathbb{A}^2$ includes all the maps of the form $(x,y) \mapsto (x, y+f(x))$ where $f$ is any polynomial. I haven't thought through how to say this precisely in terms of functors, but this subgroup morally should be a connected infinite-dimensional object, and is thus not a subobject of an algebraic group.



On the other hand, I believe that the automorphism group of a projective algebraic variety $X$ can be given the structure of an algebraic group in a fairly natural way. This is something I've thought about myself, but not written down a careful proof nor found a reference for: for any automorphism $f$ of $X$, consider the graph of $f$ as a subscheme of $X \times X$, and thus a point of the Hilbert scheme of $X\times X$. In this way, we get an embedding of point sets from $\mathrm{Aut}(X)$ into $\mathrm{Hilb}(X\times X)$.



I believe that it should be easy to show that (1) $\mathrm{Aut}(X)$ is open in $\mathrm{Hilb}(X\times X)$, and thus acquires a natural scheme structure, and (2) composition of automorphisms is a map of schemes.

lo.logic - Logical problems in category theory

Short answer: category theorists often elide the extra annotations when employing typical ambiguity or universe polymorphism. Proof theorists demand that these annotations be provided, and study how they behave.



If you want to be pedantic, then you have to annotate all instances of "category of sets" or "category of categories" with the additional word "small". Then the objects of the category of small sets do not form a small set, and the category of small categories is not a small category.



The next step is to replace "small" with an arbitrary natural number, where the objects of the "category of 0-small sets" form a 1-small set. Often, when fully annotated, it turns out that a proof will work for any value of "N" (where all the references to Set or Cat in the proof involve "offsets" from that N, such as "(N+3)-small sets"). Proofs which are parametric in this N (or some sequence N,M,... with inequality constraints between them) are called universe-polymorphic proofs, and are quite similar to a phenomenon in Principia Mathematica called typical ambiguity (although PM asserted a staggeringly powerful axiom about typical ambiguity without any sort of formal justification). You can re-apply these proofs at arbitrary levels in the transfinite hierarchy of universes, and they still hold.



That said, nobody has yet proven that a category of ALL categories (of every "smallness") cannot exist in the way that Russell proved that a set of all sets cannot exist. However, there is some evidence that you would have to omit certain axioms that might seem to be "obvious" at first glance.

evolution - How do we know that dinosaurs were related to lizards and/or birds?

In general the answer is always the same: you construct a phylogenetic tree. In order to locate different species on this tree in relation to each other, you use various features to determine which species are more similar to one another than to others.



The best way of doing this is by comparing their DNA sequence, especially orthologous genes (i.e. genes common to the species compared).



Unfortunately, genetic sequences usually aren’t available for extinct species. You can still compare homologous features, though. For instance, the class of mammals is characterised by the possession of mammary glands. Similarly, all vertebrates have a vertebral column and all Aves are feathered, warm-blooded, egg-laying vertebrates.



The collection of many such features from fossil records allows the creation of more or less detailed phylogenies. The Wikipedia explanation mentions several transitional fossil forms which trace the evolution from dinosaurs to modern birds via several intermediates. All of these inferences are based on anatomical resemblance.
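As a toy illustration of the idea (the presence/absence trait matrix below is entirely made up, not real data, and this crude clustering is only a stand-in for real phylogenetic inference), one can group taxa by shared anatomical characters, assuming SciPy is available:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Hypothetical characters, one column each:
# [vertebral column, feathers, archosaur-style hip, mammary glands,
#  toothless beak, wishbone]
taxa = ["mouse", "crocodile", "pigeon", "theropod dinosaur"]
traits = np.array([
    [1, 0, 0, 1, 0, 0],   # mouse
    [1, 0, 1, 0, 0, 0],   # crocodile
    [1, 1, 1, 0, 1, 1],   # pigeon
    [1, 1, 1, 0, 0, 1],   # theropod (fossil characters)
])

# Hamming distance on characters, then average-linkage clustering.
Z = linkage(pdist(traits, metric="hamming"), method="average")
first_merge = {int(Z[0, 0]), int(Z[0, 1])}
print(sorted(taxa[i] for i in first_merge))  # ['pigeon', 'theropod dinosaur']
```

The first merge joins the bird and the dinosaur, since they share the most characters in this (invented) matrix.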



This may sound like weak evidence, but in fact anatomical homology has proved sufficiently accurate in constructing other phylogenies, where we have been able to verify its correctness using genome comparison data. So while there is much uncertainty about the precise branching point of birds from dinosaurs (or maybe archosaurs), there is near-certainty that the common ancestor of birds and dinosaurs was, in fact, an archosaur.

Wednesday, 17 October 2007

co.combinatorics - How many vertices of a polytope can be chopped off to produce a k-vertex facet?

Let P be a simple n-facet d-polytope with facet F, and let F have k vertices. Let H be a halfspace and Q be a simple (n-1)-facet polytope such that H ∩ Q = P.



In terms of k, what is an upper bound on the number of vertices of Q contained in ℝ^d \ H? Informally: what is the largest number of vertices of Q that can be chopped off to produce a k-vertex facet F?



I've derived an algorithm to compute this value exactly in terms of the f-vector of F, but I'd like to determine a tight upper bound when the f-vector is unavailable.



Some observations: the removed set of vertices cannot contain a facet of Q, so there would seem to be an upper limit on its size. Naturally k ≥ d, since cutting off a single vertex produces d vertices.



If this question is particularly difficult or contains any open problems of which I'm not aware, I'd be interested in knowing that, too!

Monday, 15 October 2007

biochemistry - How, on a physical level, does ATP confer energy?

Usually in biology (and since this is ATP, it most probably is biology), it's one of two things.



The gamma-phosphate (the third one, farthest from the adenosine) is very unstable, meaning the phosphoanhydride bond is easy to break. The cell "allows" it to break, but only at the cost of moving the phosphate to some other molecule, such as a serine or glycerol or fructose. This phosphorylation creates a bond with lower energy than the phosphoanhydride, and so is favored overall. Imagine the personification: the gamma-phosphate hates being attached to anything, but hates being attached to ADP the most.



Alternatively, if ATP hydrolysis is coupled via an enzyme, it is usually done through transient storage of the energy in protein conformation. An enzyme binds ATP, which makes the protein structure "bend" or conform around the ATP. This puts loads of strain (energy = A) on the protein, which is offset by the stabilization of binding the ATP (energy = B). This strain can make an enzymatic surface open up on the protein, which itself takes a lot of energy to make (energy = C). The surface can catalyze some reaction (X+Y->Z in your example) that costs some energy (energy = D). The completion of that reaction alters the enzyme's catalytic site to something new and higher-energy (energy = E), which can be alleviated by cleavage of the ATP (-7.3 kcal/mol). Alas, ADP and P do not fit well into that site of the enzyme, so they float out, restoring that ATP-binding surface to its original state. Provided A>B>C>D>E>-7.3, the cycle will continue until the ATP is exhausted or you have no more Z to make.



Typing "enzyme catalysis cycle ATP" gives a few examples. Here's a few:
DNA gyrase
Actin-myosin cycle

ac.commutative algebra - Atiyah-MacDonald, exercise 7.19 - "decomposition using irreducible ideals"

An ideal $\mathfrak{a}$ is called irreducible if $\mathfrak{a} = \mathfrak{b} \cap \mathfrak{c}$ implies $\mathfrak{a} = \mathfrak{b}$ or $\mathfrak{a} = \mathfrak{c}$. Atiyah-MacDonald Lemma 7.11 says that in a Noetherian ring, every ideal is a finite intersection of irreducible ideals. Exercise 7.19 is about the uniqueness of such a decomposition.



7.19. Let $\mathfrak{a}$ be an ideal in a Noetherian ring. Let
$$\mathfrak{a} = \bigcap_{i=1}^r \mathfrak{b}_i = \bigcap_{j=1}^s \mathfrak{c}_j$$ be two minimal decompositions of $\mathfrak{a}$ as an intersection of irreducible ideals. [I assume minimal means that none of the ideals can be omitted from the intersection.] Prove that $r = s$ and that (possibly after reindexing) $\sqrt{\mathfrak{b}_i} = \sqrt{\mathfrak{c}_i}$ for all $i$.



Comments: It's true that every irreducible ideal in a Noetherian ring is primary (Lemma 7.12), but I don't think our result follows from the analogous statement about primary decompositions. For example, here is Example 8.6 from Hassett's $\textit{Introduction to Algebraic Geometry}$.



8.6 Consider $I = (x^2, xy, y^2) \subset k[x,y]$. We have $$I = (y, x^2) \cap (y^2, x) = (y+x, x^2) \cap (x, (y+x)^2),$$ and all these ideals (other than $I$) are irreducible.



If my interpretation of "minimal" is correct, then this is a minimal decomposition using irreducible ideals, but it is not a minimal primary decomposition, because the radicals are not distinct: they all equal $(x,y)$.
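Hassett's example can be machine-checked with a computer algebra system. Below is a sketch using SymPy and the standard elimination trick $I \cap J = \big(tI + (1-t)J\big) \cap k[x,y]$; the helper names are mine.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

def intersect(I, J):
    # Ideal intersection via elimination: compute a lex Groebner basis of
    # t*I + (1-t)*J with t ordered first, and keep the generators free of t.
    G = groebner([t * f for f in I] + [(1 - t) * g for g in J],
                 t, x, y, order='lex')
    return [p for p in G.exprs if t not in p.free_symbols]

def gb(F):
    # Reduced lex Groebner basis, as a canonical form for comparing ideals.
    return set(groebner(F, x, y, order='lex').exprs)

I = [x**2, x*y, y**2]
dec1 = intersect([y, x**2], [y**2, x])
dec2 = intersect([y + x, x**2], [x, (y + x)**2])

# Both intersections should generate the same ideal as I.
print(gb(dec1) == gb(I), gb(dec2) == gb(I))
```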



There is a hint in the textbook: show that for each $i = 1, \ldots, r$, there exists $j$ such that $$\mathfrak{a} = \mathfrak{b}_1 \cap \cdots \cap \mathfrak{b}_{i-1} \cap \mathfrak{c}_j \cap \mathfrak{b}_{i+1} \cap \cdots \cap \mathfrak{b}_r.$$ I was not able to prove the hint.



I promise this exercise is not from my homework.



Update. There doesn't seem to be much interest in my exercise. I've looked at various solution sets on the internet, and I believe they all make the mistake of assuming that a minimal irreducible decomposition is a minimal primary decomposition. Does anyone know of a reference which discusses irreducible ideals? Some google searches have produced Hassett's book that I mention above and not much else.

dg.differential geometry - Commutator of Lie derivative and codifferential?

Let $(M,g)$ be a smooth Riemannian manifold. Let $d$ be the exterior derivative and $\delta$ the codifferential on forms. For a smooth vector field $X$, let $L_X$ be the associated Lie derivative. We know from Cartan's formula that $L_X = d\iota_X + \iota_X d$, where $\iota_X$ is the interior product with $X$. So it is well known that $L_X$ and $d$ commute: for any form $\omega$, we have $L_X d\omega = d L_X\omega$.



This is, of course, not true for the codifferential: in general $[L_X,\delta]\neq 0$. For certain cases the answer is well known: if $X$ is a Killing vector ($L_X g = 0$), then since it leaves the metric structure invariant, it commutes with the Hodge star operator, and so $L_X$ commutes with $\delta$. Another useful case is when $X$ is conformally Killing with constant conformal factor ($L_X g = k g$ with $dk = 0$). In this case conformality implies that the commutator $[L_X,*] = k^\alpha\, *$, where $\alpha$ is some numerical power depending on the rank of the form being acted on (I think... correct me if I am wrong), so we have $[L_X,\delta] \propto \delta$.



So my question is: is there a nice general formula for the commutator $[L_X,\delta]$? If it is written down somewhere, a reference would be helpful. (In the Riemannian case, by working with suitable symmetrisations of the metric connection one can get a fairly ugly answer via something like $\delta\omega \propto \mathrm{tr}_{g^{-1}} \nabla\omega$, using that the commutators $[L_X, g^{-1}]$ and $[L_X,\nabla]$ are fairly well known [the latter gives a second-order deformation tensor measuring affine-Killingness]. But this formula is the same for the divergence of arbitrary covariant tensors. I am wondering if there is a better formula for forms in particular.)



A simple explanation of why what I am asking is idiotic would also be welcome.

Sunday, 14 October 2007

cv.complex variables - Ways to prove the fundamental theorem of algebra

Here is the proof by Pukhlikov (1997) at



http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=mp&paperid=6&option_lang=eng



which Ilya mentioned as being only in Russian so far. What I present below is not a literal translation (as if anyone on this site cares...).



The argument will use only real variables: there is no use of complex numbers anywhere.
The goal is to show, for every $n \geq 1$, that each monic polynomial of degree $n$ in ${\mathbf R}[X]$ is a product of linear and quadratic polynomials.
This is clear for $n = 1$ and $2$, so from now on let $n \geq 3$ and assume by induction
that nonconstant polynomials of degree less than $n$ admit
factorizations into a product of linear and quadratic polynomials.



First, some context: we're going to make use of proper mappings. A complex-variable proof on this page listed by Gian depends on the fact that a nonconstant one-variable complex polynomial is a proper mapping $\mathbf C \rightarrow \mathbf C$. Of course a nonconstant one-variable real polynomial is a proper mapping $\mathbf R \rightarrow \mathbf R$, but that is not the kind of proper mapping we will use. Instead, we will use the fact (to be explained below) that multiplication of real one-variable polynomials of a fixed degree is a proper mapping on spaces of polynomials. I suppose if you find yourself teaching a course where you want to give the students an interesting but not well-known application of the concept of a proper mapping, you could direct them to this argument.



Now let's get into the proof. It suffices to focus on monic polynomials and their monic factorizations.
For any positive integer $d$, let $P_d$ be the space of monic polynomials
of degree $d$:
$$
x^d + a_{d-1}x^{d-1} + \cdots + a_1x + a_0.
$$
By induction, every polynomial in $P_1, \dots, P_{n-1}$ is a product of linear and quadratic polynomials. We will show every polynomial in $P_n$ is a product of
a polynomial in some $P_k$ and $P_{n-k}$ with $1 \leq k \leq n-1$, and therefore
is a product of linear and quadratic polynomials.
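The statement being proved is easy to illustrate numerically (this uses NumPy's root finder, which the proof itself of course does not rely on; the example polynomial is mine): pair each complex root with its conjugate to obtain real quadratic factors.

```python
import numpy as np

# x^4 - 2x^3 + 3x^2 - 4x + 5, an arbitrary monic example
coeffs = np.array([1.0, -2, 3, -4, 5])
roots = np.roots(coeffs)

factors = []
used = np.zeros(len(roots), dtype=bool)
for i, r in enumerate(roots):
    if used[i]:
        continue
    if abs(r.imag) < 1e-9:
        factors.append(np.array([1.0, -r.real]))              # real linear factor
        used[i] = True
    else:
        # pair r with its complex conjugate -> monic real quadratic factor
        j = next(k for k in range(len(roots))
                 if not used[k] and k != i
                 and abs(roots[k] - r.conjugate()) < 1e-6)
        used[i] = used[j] = True
        factors.append(np.array([1.0, -2 * r.real, abs(r) ** 2]))

product = np.array([1.0])
for f in factors:
    product = np.polymul(product, f)
print(np.allclose(product, coeffs))  # True
```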



For $n \geq 3$ and $1 \leq k \leq n-1$, define the multiplication map
$$\mu_k \colon P_k \times P_{n-k} \rightarrow P_n \text{ by } \mu_k(g,h) = gh.$$ Let $Z_k$ be the image of $\mu_k$ in $P_n$ and
$$Z = \bigcup_{k=1}^{n-1} Z_k.$$
These are the monic polynomials of degree $n$ which are composite. We want
to show $Z = P_n$. To achieve this we will look at topological properties of $\mu_k$.



We can identify $P_d$ with ${\mathbf R}^d$ by associating to the polynomial displayed above the vector $(a_{d-1},\dots,a_1,a_0)$. This makes $\mu_k \colon P_k \times P_{n-k} \rightarrow P_n$ a continuous mapping. The key point
is that $\mu_k$ is a proper mapping: its inverse images of compact sets are
compact. To explain why $\mu_k$ is proper, we will use an idea of Pushkar' to "compactify" $\mu_k$
to a mapping on projective spaces. (In the journal where Pukhlikov's paper appeared, the paper by Pushkar' with his nice idea comes immediately afterwards. Pukhlikov's own approach to proving $\mu_k$ is proper is more complicated and I will not be translating it!)



Let $Q_d$ be the nonzero real polynomials of degree $\leq d$ considered
up to scaling. There is a bijection
$Q_d \rightarrow {\mathbf P}^d({\mathbf R})$ associating to a class of polynomials
$[a_dx^d + \cdots + a_1x + a_0]$ in $Q_d$ the point $[a_d,\dots,a_1,a_0]$.
In this way we make $Q_d$ a compact Hausdorff space.
The monic polynomials of degree $d$, namely $P_d$, embed into
$Q_d$ in a natural way and are identified in ${\mathbf P}^d({\mathbf R})$
with a standard copy of ${\mathbf R}^d$.



Define $\widehat{\mu}_k \colon Q_k \times Q_{n-k} \rightarrow Q_n$
by $\widehat{\mu}_k([g],[h]) = [gh]$.
This is well-defined and restricts on the embedded subsets of monic polynomials to the
mapping $\mu_k \colon P_k \times P_{n-k} \rightarrow P_n$. In natural homogeneous coordinates, $\widehat{\mu}_k$ is a polynomial mapping, so
it is continuous. Since projective spaces are compact and Hausdorff,
$\widehat{\mu}_k$ is a proper map. Then, since
$\widehat{\mu}_k^{-1}(P_n) = P_k \times P_{n-k}$,
restricting $\widehat{\mu}_k$ to $P_k \times P_{n-k}$ shows $\mu_k$ is proper.



Since proper mappings are closed mappings,
each $Z_k$ is a closed subset of $P_n$, so $Z = Z_1 \cup \cdots \cup Z_{n-1}$
is closed in $P_n$. Topologically, $P_n \cong {\mathbf R}^n$ is connected,
so if we could show $Z$ is also open in $P_n$ then we would immediately
get $Z = P_n$ (since $Z \not= \emptyset$), which was our goal. Alas, it will not be easy to show $Z$ is open directly, but a modification of this
idea will work.



We want to show that if a polynomial $f$ is in $Z$ then all polynomials in $P_n$ near
$f$ are also in $Z$. The inverse function theorem is a natural tool to use in
this context: supposing $f = \mu_k(g,h)$, is the Jacobian determinant of
$\mu_k \colon P_k \times P_{n-k} \rightarrow P_n$ nonzero at $(g,h)$?
If so, then $\mu_k$ has a continuous local inverse defined in a neighborhood of $f$.



To analyze $\mu_k$ near $(g,h)$, we
write all (nearby) points in $P_k \times P_{n-k}$ as
$(g+u,h+v)$ where $\deg u \leq k-1$ and $\deg v \leq n-k-1$ (allowing $u = 0$ or $v = 0$ too). Then
$$
\mu_k(g+u,h+v) = (g+u)(h+v) = gh + gv + hu + uv = f + (gv + hu) + uv.
$$
As functions of the coefficients of $u$ and $v$, the coefficients of $gv + hu$ are all linear
and the coefficients of $uv$ are all higher-degree polynomials.



If $g$ and $h$ are relatively prime then
every polynomial of degree less than $n$ is uniquely of the form
$gv + hu$ where $\deg u < \deg g$ or $u = 0$, and $\deg v < \deg h$ or $v = 0$;
while if $g$ and $h$ are not relatively prime then
we can write $gv + hu = 0$ for some nonzero polynomials
$u$ and $v$ with $\deg u < \deg g$ and $\deg v < \deg h$.
Therefore the Jacobian of $\mu_k$ at $(g,h)$ is invertible if $g$ and $h$ are relatively prime
and not otherwise.
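In coefficient coordinates the linear part $(u,v)\mapsto gv+hu$ is given by a Sylvester-style matrix, whose determinant is (up to sign) the resultant of $g$ and $h$. Here is a small numerical check of the coprimality criterion (helper names and examples mine, NumPy assumed):

```python
import numpy as np

def pairing_matrix(g, h):
    """Matrix of (u, v) -> g*v + h*u in coefficient coordinates, where
    deg u < deg g and deg v < deg h; it is square of size deg g + deg h,
    and its determinant is the resultant of g and h up to sign."""
    k, l = len(g) - 1, len(h) - 1
    n = k + l
    M = np.zeros((n, n))
    for j in range(k):                        # u = x^j -> column h * x^j
        col = np.polymul(h, [1.0] + [0.0] * j)
        M[n - len(col):, j] = col
    for j in range(l):                        # v = x^j -> column g * x^j
        col = np.polymul(g, [1.0] + [0.0] * j)
        M[n - len(col):, k + j] = col
    return M

g = np.array([1.0, 0.0, -1.0])    # x^2 - 1
h = np.array([1.0, -3.0, 2.0])    # (x - 1)(x - 2): shares the root x = 1 with g
h2 = np.array([1.0, -5.0, 6.0])   # (x - 2)(x - 3): coprime to g

d1 = np.linalg.det(pairing_matrix(g, h))
d2 = np.linalg.det(pairing_matrix(g, h2))
print(np.isclose(d1, 0), np.isclose(d2, 0))  # True False
```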



We conclude that if $f \in Z$ can be written somehow as a product
of nonconstant relatively prime polynomials then a neighborhood of $f$ in $P_n$ is inside $Z$.
Every $f \in Z$ is a product of linear and quadratic polynomials, so
$f$ can't be written as a product of nonconstant relatively prime
polynomials precisely when it is a power of a linear or quadratic polynomial. Let $Y$ be all these "degenerate" polynomials in $P_n$:
all $(x+a)^n$ for real $a$ if $n$ is odd, and all $(x^2+bx+c)^{n/2}$ for real $b$ and $c$ if $n$ is even. (Note when $n$ is even that $(x+a)^n = (x^2 + 2ax + a^2)^{n/2}$.) We have shown $Z - Y$ is open in $P_n$. This is weaker than our hope of
showing $Z$ is open in $P_n$. But we're in good shape, as long as
we change our focus from $P_n$ to $P_n - Y$. If $n = 2$ then $Y = P_2$ and $P_2 - Y$ is empty.
For the first time we will use the fact that $n \geq 3$.



Identifying $P_n$ with ${\mathbf R}^n$ using polynomial coefficients,
$Y$ is either an algebraic curve ($n$ odd) or an algebraic surface ($n$ even) sitting in
${\mathbf R}^n$. For $n \geq 3$, the complement of an algebraic curve
or algebraic surface in ${\mathbf R}^n$ is path-connected, and thus
connected.



The set $Z-Y$ is nonempty since $(x-1)(x-2)\cdots(x-n)$ is in it. Since $Z$ is closed in $P_n$, $Z \cap (P_n - Y) = Z - Y$ is closed in $P_n - Y$.
The inverse function theorem tells us that
$Z - Y$ is open in $P_n$, so it is open in $P_n - Y$.
Therefore $Z - Y$ is a nonempty open and closed subset of $P_n - Y$.
Since $P_n - Y$ is connected and $Z - Y$ is nonempty, $Z - Y = P_n - Y$.
Since $Y \subset Z$, we get $Z = P_n$, and this completes Pukhlikov's "real" proof
of the Fundamental Theorem of Algebra.



We kept proving and proving, and finally we proved it. Hooray! :)

nt.number theory - Can $N^2$ have only digits 0 and 1, other than $N=10^k$?

In the interest of completeness, here is what I put on the 20-questions wiki — we might as well repeat it here on the $\infty$-questions site. I had basically the same idea as Ilya: do a branched search to look for the digits of $N^2$. However, the code that I wrote in Python works from the 10-adic end, while Ilya's works from the Archimedean end. Both programs support a heuristic model that implies that solutions in the integers are very unlikely. If you wanted an optimized search in the integers, you would work from both ends and try to match the partial solutions. And you would probably want to implement the algorithms in C++ rather than in Python.



maxmod = 10**24

def check(x):
    # success: x**2 uses only the digits 0 and 1
    if not str(x**2).replace('0', '').replace('1', ''):
        print('Eureka:', x, x**2)

def search(x, mod):
    x %= mod
    if mod == maxmod:
        check(x)
        check(mod - x)
        return
    # choose the next base-10 digit of x so that the next digit of x**2 is 0 or 1
    top = -(x**2 // mod) % 10
    x += (top + top % 2) // 2 * mod
    search(x, mod * 10)
    search(x + 5 * mod, mod * 10)

search(1, 10)  # a solution must be 1 or 9 mod 10


Also, it's tempting to mark the problem as open rather than as a "puzzle". I did find several papers that analyzed the congruence structure of integers with restricted digits, and one that looked at prime factors. Two are by Erdős, Mauduit, and Sárközy [1] [2], and two are by Banks and Shparlinski (one also with Conflitti) [3] [4]. Presumably some of these authors can say whether the problem should be called open.

lo.logic - Countable atomless boolean algebra covered by a larger boolean algebra

The answer to the revised version of the question is Yes. In fact, there is no need to assume that B is atomless, but rather, only that it is infinite.



Suppose that B is any infinite Boolean algebra. It follows that there is a countable maximal antichain A ⊆ B. The idea of the proof is to map A arbitrarily into your countable atomless Boolean algebra Q, and then extend to B in a way I will describe. Enumerate the maximal antichain A = { a_n | n ∈ ω } and the nonzero elements of Q as { q_n | n ∈ ω }. We will associate a_n with q_n. In order to define f, suppose that b is any element of B. Let A_b = { q_n | b ∧ a_n ≠ 0 } be the associated set in Q. Define the function f : B → Q by f(b) = ∨ A_b if A_b is finite, and otherwise f(b) = 1.



We now make several observations about this function f. First, the function is clearly onto, since f(a_n) = q_n. Also, f(1) = 1, since 1 meets every element of A, and f(0) = 0 since 0 meets no element of A. Moreover, f(b) = 0 iff b = 0, since no nonzero element of B has zero meet with every element of A, as A was a maximal antichain.



Because (b ∨ c) ∧ a = (b ∧ a) ∨ (c ∧ a), it follows that A_{b ∨ c} = A_b ∪ A_c. From this, it follows that f(b ∨ c) = f(b) ∨ f(c), since if either set is infinite, then the answer is 1, and if both are finite, we are taking the join of two finite joins. Thus, f is join-preserving.



It follows that f is an order-homomorphism, since b ≤ c implies b ∨ c = c, which implies f(b) ∨ f(c) = f(b ∨ c) = f(c), which implies f(b) ≤ f(c).



So f has all the desired properties.



Note that f definitely does not respect negation, since f(¬a_n) = 1 for every n. And f definitely does not respect meet, since any two elements of A have meet 0, but the corresponding q_n must sometimes be nonzero.



This construction has some affinity with your example. Namely, if you take the various half-open unit characteristic functions as the elements of the maximal antichain (and use the corresponding q_n's), then your f and my construction are the same.

Thursday, 11 October 2007

fa.functional analysis - When is a Banach space a Hilbert space?

In this simple note http://arxiv.org/abs/0907.1813 (to appear in Colloq. Math.), Rossi and I proved a characterization in terms of "inversion of Riesz representation theorem".



Here is the result: let $X$ be a normed space and recall Birkhoff-James orthogonality: $x\in X$ is orthogonal to $y\in X$ iff for all scalars $\lambda$ one has $\|x\|\leq\|x+\lambda y\|$.



Let $H$ be a Hilbert space and $x\rightarrow f_x$ the Riesz representation. Observe that $x\in \mathrm{Ker}(f_x)^\perp$, which can be required using Birkhoff-James orthogonality:



Theorem: Let $X$ be a normed (resp. Banach) space and $x\rightarrow f_x$ an isometric mapping from $X$ to $X^*$ such that



1) $f_x(y)=\overline{f_y(x)}$;



2) $x\in \mathrm{Ker}(f_x)^\perp$ (in the sense of Birkhoff and James).



Then $X$ is a pre-Hilbert (resp. Hilbert) space and the mapping $x\rightarrow f_x$ is the Riesz representation.

Wednesday, 10 October 2007

big list - One-step problems in geometry

Here's a cute question which Frederic Bourgeois asked me on a train journey recently. He was asked it by Givental, if my memory serves correctly, but I've no idea where it came from originally. Anyway, the question:



There is a mountain of frictionless ice in the shape of a perfect cone with a circular base. A cowboy is at the bottom and he wants to climb the mountain. So, he throws up his lasso which slips neatly over the top of the cone, he pulls it tight and starts to climb. If the mountain is very steep, with a narrow angle at the top, there is no problem; the lasso grips tight and up he goes. On the other hand if the mountain is very flat, with a very shallow angle at the top, the lasso slips off as soon as the cowboy pulls on it. The question is: what is the critical angle at which the cowboy can no longer climb the ice-mountain?



To solve it, you should think like a geometer and not an engineer. (And yes, it needs just one trick which is certainly applicable elsewhere.)



P.S. When I was asked the question, I failed miserably!

dg.differential geometry - Why is the exterior algebra so ubiquitous?

I will only answer for the link between determinants, differential forms and the Grassmannian.



The fact is that the determinant, up to sign, represents the volume of a parallelepiped having the n assigned vectors as sides. The sign is determined by the orientation of this solid.



Indeed, the axioms for the determinant can be translated geometrically: for instance, the fact that the determinant vanishes when two columns are equal corresponds to the fact that a solid lying in a hyperplane has volume 0.



Now take a linear map f expressed by a matrix A: the image of the unit cube is the solid generated by the columns of A, so f stretches volumes by a factor of |det(A)|, by the previous remark.



This is the infinitesimal expression of the usual formula for the change of variables in an integral, and it is the reason why the Jacobian determinant appears there: it is just the infinitesimal factor by which volumes are multiplied. I hope this gives a rough explanation of why the determinant appears in this formula.
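The volume-scaling statement is easy to check in coordinates. The following sketch (NumPy assumed, example matrix mine) pushes the unit square through a linear map and compares the image's area, computed with the shoelace formula, against |det A|:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Corners of the unit square, in counterclockwise order.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
image = square @ A.T                      # image parallelogram under x -> A x

# Shoelace formula for the (unsigned) area of the image polygon.
px, py = image[:, 0], image[:, 1]
area = 0.5 * abs(np.dot(px, np.roll(py, -1)) - np.dot(py, np.roll(px, -1)))

print(np.isclose(area, abs(np.linalg.det(A))))  # True (both equal 5.5)
```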



Now to differential forms. Assume you want to integrate a quantity on a manifold, say a function. You may try to integrate it in local coordinates, but the result will depend on the coordinates chosen. So in order to get something well defined you need a quantity whose local expression changes by the inverse factor (OK, I'm neglecting orientation here). This is exactly an n-form, whose local expression changes by the determinant of the Jacobian of the inverse change of coordinates.



This vague discussion should so far give an idea of why differential forms of maximal degree are apt to be integrated on oriented manifolds. Now choose a manifold M. You can integrate k-forms on M over k-dimensional subvarieties of M, so differential forms of any degree appear as dual elements of subvarieties of the corresponding dimension. Pushing this correspondence a bit explains why the complex of differential forms gives the cohomology of M. But this is a topological invariant, so it has plenty of other constructions.



So we get an analytic tool (differential forms) which describes part of the topology of M; something which is of course worth studying. Phew! If you got this far, you can understand what kind of link I see between determinants and differential forms.



As a particular case, this also gives an explanation of the link with the Grassmannian: to a given subspace A you associate the (constant) differential forms dual to it, up to multiples; this allows you to think of a point of the Grassmannian as a point in a projective space, giving (more or less) the usual Plücker embedding. I mean: dual elements to general subvarieties are nonconstant differential forms, but if you restrict to subspaces you can use constant differential forms.



I don't have an intuitive explanation of the link with irreducible representations of GL, and I don't know about fermions, so I can't help you there.

examples - Applications of the Chinese remainder theorem

The Mayan calendar system uses a number of different periodic processes, and provides a simple but very important example of a practical use of the CRT.



The Tzolkin, or Day Count, has twenty day names (Ik, Akbal, ..., Ahau) and thirteen numbers, 1-13. Each day, the day name advances, and so does the number. For example, 7 Ik is followed by 8 Akbal. These name/number pairs repeat in a 260-day cycle, which has been in continuous, uninterrupted use since at least 600 BC.



The Haab, or Vague Year, is a 365 day year consisting of 19 months (Pop, Uo, ..., Cumku, Uayeb). The first 18 months have 20 days and Uayeb has five days. The Haab runs 0 Pop, 1 Pop, ..., 19 Pop, 0 Uo, 1 Uo,..., 4 Uayeb and then repeats to 0 Pop.



Together, the Tzolkin and Haab form the calendar round, with dates given by Tzolkin then Haab, for example, 7 Ik 0 Cumku. This cycle repeats every 18980 days, about 52 years, which means that a calendar round date is good for most practical purposes (such as birth dates).



The earlier Mayan period, from around the 1st century BC to the 13th century AD, also featured a system known as the long count, recently made somewhat famous by the fact that it finished a 5126 year cycle on December 21, 2012.
Long count dates have Kin (days) which run 0-19. 20 Kin make one Uinal, 18 Uinal make one Tun, 20 Tun make one Katun, and 20 Katun make one Baktun. Dates are written with Baktun first, as, for example, 9.7.17.12.14. After 13 Baktun, the date goes back to zero, so that 12.19.19.17.19 was followed by 0.0.0.0.0.
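Since each long count unit is a fixed multiple of days, converting a date to a day count is simple place-value arithmetic. A quick sketch in Python (the function name is my own):

```python
def long_count_to_days(baktun, katun, tun, uinal, kin):
    """Days elapsed since 0.0.0.0.0, using the place values above:
    20 Kin = 1 Uinal, 18 Uinal = 1 Tun, 20 Tun = 1 Katun, 20 Katun = 1 Baktun."""
    return kin + 20 * uinal + 360 * tun + 7200 * katun + 144000 * baktun

# The day after 12.19.19.17.19 completes 13 Baktun and the count resets.
assert long_count_to_days(12, 19, 19, 17, 19) + 1 == 13 * 144000
```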



One major problem with studying the Mayan calendar is that the long count dates fell out of use hundreds of years before the Spanish arrived, and it is nontrivial to decide which Mayan long count dates correspond to which dates in the modern western calendar system - the 'correlation problem'.



The key document, the Chronicle of Oxcutzcab, says that a tun ended 13 Ahua 8 Xul in AD 1539, thus tying together the long count (tun ending), the calendar round, and the Julian calendar. From ancient records, the long count is known to have begun on 4 Ahua, 8 Cumku. So, given 0.0.0.0.0 = 4 Ahua 8 Cumku, one needs to solve $x$.0.0 = 13 Ahua 8 Xul. The day number gives the equation $360 x equiv 9 pmod{13}$. Since 8 Xul is 125 days after 8 Cumku, the Haab gives the second equation $360 x equiv 125 pmod{365}$.



So, there's a simple little use of CRT: Solve for $x$, and find $x equiv 924 pmod{949}$.
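The arithmetic can be checked by brute force over one full period (a quick sketch; the variable names are mine):

```python
from math import lcm

# The calendar round length is the lcm of the two cycles.
assert lcm(260, 365) == 18980

# The two congruences from the text: 360x = 9 (mod 13) and 360x = 125 (mod 365).
solutions = [x for x in range(lcm(13, 365))
             if (360 * x) % 13 == 9 and (360 * x) % 365 == 125]

# Every solution lies in the single residue class 924 mod 949 (= 13 * 73).
assert solutions and all(x % 949 == 924 for x in solutions)
print(solutions[0])  # 924
```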



To finish the story, the year AD 1539 contains the long count tun 924 = 2.6.4.0.0 plus some multiple of 949 tun = 2.7.9.0.0. There is enough historical evidence to guess the date to within 949 tun (about 935 years), and so one learns that 11.16.0.0.0 is in AD 1539. Finally, the calendar round is still in use and so one can determine that 11.16.0.0.0 is November 12, 1539. I'll leave it as an exercise to determine that December 21, 2012 really was 0.0.0.0.0.

mg.metric geometry - Sequences of evenly-distributed points in a product of intervals

One way to interpret this result is that it comes from the periodicity of the continued fraction expansion of $phi = 1 + frac{1}{1+frac{1}{cdots}}$, in the sense that it has no "better-than-expected" rational convergents, whereas for example with $pi = (3;7,15,1,292,cdots)$ we may truncate just before the 292 to get a very good approximation (355/113 I believe).



So one may look at numbers of the form $x_n = (n;n,n,n,cdots)$, which satisfy $x_n^2 -nx_n - 1 = 0$, or $$x_n = frac{n+sqrt{n^2+4}}{2}.$$ So a few good sequences may be for example $left{nx_2right}$ where $x_2 = 1+sqrt{2}$, the so-called "silver ratio", or the same for $x_3 = (3+sqrt{13})/2.$
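One can sanity-check the all-$n$s expansion numerically; this sketch uses the naive floating-point continued fraction algorithm (fine for a handful of digits; the function name is mine):

```python
from math import sqrt

def cf_digits(x, k):
    """First k continued-fraction digits of x, computed naively in floating point."""
    digits = []
    for _ in range(k):
        a = int(x)
        digits.append(a)
        x = 1.0 / (x - a)
    return digits

# x_3 = (3 + sqrt(13))/2 should have expansion (3; 3, 3, 3, ...)
print(cf_digits((3 + sqrt(13)) / 2, 6))  # [3, 3, 3, 3, 3, 3]
```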



EDIT: These are in some cases pretty good approximations; one way to measure the "well-distribution" of such a sequence is to take the fractional parts of $mx$ for $m = 1, cdots, M$, sort them, compute the maximum difference between consecutive terms, and multiply this by $M$ to get some number in the range $[1,M)$. This can be accomplished in one line in Mathematica as follows:



WellDistribution[x_,M_]:=
Max[Differences[Sort[Table[N[FractionalPart[x*m]], {m, 1, M}]]]]*M;


Some interesting things happen with this when we vary $n$; perhaps I'll make a new post out of it.
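For readers without Mathematica, the same measurement can be sketched in Python (my translation of the one-liner above, keeping its convention of ignoring the wrap-around gap):

```python
from math import sqrt

def well_distribution(x, M):
    """Sort the fractional parts of x, 2x, ..., Mx, take the largest gap
    between consecutive sorted values, and scale it by M."""
    parts = sorted((x * m) % 1.0 for m in range(1, M + 1))
    return max(b - a for a, b in zip(parts, parts[1:])) * M

silver = 1 + sqrt(2)   # x_2 above
print(well_distribution(silver, 1000))  # close to 1, i.e. very evenly spread
```

By contrast, a rational $x$ gives a badly distributed sequence: the fractional parts pile up on finitely many values, so the scaled gap is large.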

human biology - What is the healing process of mouth wounds?

To build on the answers from @Armatus and @S-Sunil



The healing mechanism involves the inflammatory process, which is much the same throughout the body. In both skin and mucosa (both "epithelial" tissues), when there is a break, platelets and clotting factors clot off any bleeding vessels, and white blood cells (neutrophils and macrophages in particular) collect and destroy any bacteria, dead cells and muck. Then the process of regeneration occurs (with ongoing inflammation): stem cells in the surrounding tissue regrow cells, new blood vessels may be formed, and scar tissue is laid down to give extra strength. Eventually, after these stages of healing, there is "remodelling", where the structure of the tissue is gradually refined.






As to why oral wounds heal quickly and don't get infected much, there are a bunch of reasons. One is that the head and neck have an excellent blood supply: think of scalp wounds, where you bleed like crazy but then heal very well. Another is that mucous membranes have immune functions that stop invading microbes. Generally speaking we divide this into "innate" immunity, which is a general response, and "adaptive" immunity, which is tailored towards specific bugs. The mucosa has both. There are neutrophils and macrophages (innate) which live in that area, there are lymphoid patches (like lymph nodes, and adaptive), there are immunoglobulins specific to mucosa (IgA), which are adaptive in nature, and the epithelial lining cells themselves will signal to the rest of the immune system if there is damage or an infection. Plus, saliva itself has chemicals and enzymes which break down oral bugs.



Oral immunity



Most of the oral bacteria themselves are not particularly invasive. Just think, every time you brush your teeth, you cause multiple abrasions in your mouth. In fact, measurable amounts of bacteria from the mouth end up in your bloodstream every time you brush your teeth! And yet, we don't end up with bloodstream infections as a result. This is partially because the rest of our immune system (and the structure of our heart and vessels) is intact, and partially because oral bacteria are not very invasive or pathogenic. They kind of have a sweet deal living in your mouth minding their own business and not killing their host, and even causing infection in an oral wound would make their continued survival less likely.



In addition I should probably point out that skin has a great deal of bacteria on it as well, and yet we rarely get infections from them (unless colonised by an invasive species), for the same reasons.



colonisation, invasion, infection

Tuesday, 9 October 2007

ag.algebraic geometry - Is there any rational curve on an Abelian variety?

Over $mathbb C$ you can argue as follows.
Suppose you have a morphism $mathbb P^1(mathbb C) to A$ ($A$ = abelian variety). Since $mathbb P^1(mathbb C)$ is simply connected, the morphism lifts to the universal cover of $A$, affine space $mathbb C^n$. But since $mathbb P^1(mathbb C)$ is complete and connected, the lift to affine space must be constant, and hence the original morphism is constant too.



The answers by Charles, Felipe and jvp are better because they work over arbitrary fields, but since the argument just given is so ridiculously elementary (introductory topology), I thought it might still be of some interest (also, it works in the holomorphic category if $A$ is a complex torus, maybe not algebraic).

Sunday, 7 October 2007

How is representation theory used in modular/automorphic forms?

Since you mentioned Galois representations, I can briefly discuss the simplest version of the connection there and point you to Diamond and Shurman's excellent book which discusses modular forms with an aim towards this perspective.



The connection here is to representations of the absolute Galois group $G = text{Gal}(bar{mathbb{Q}}/mathbb{Q})$. By the Kronecker-Weber theorem, one-dimensional (continuous, complex) representations of $G$ are classified by Dirichlet characters, so it is natural to ask about the next hardest case, the two-dimensional representations. A large class of them can be constructed as follows. Given an elliptic curve $E$ defined over $mathbb{Q}$, the points of order dividing $n$ (hereby designated by $E[n]$) form a group isomorphic to $(mathbb{Z}/nmathbb{Z})^2$, and since their coordinates are algebraic numbers, $G$ acts on them. This gives a representation



$$G to text{GL}_2(mathbb{Z}/nmathbb{Z}).$$



As is, this representation causes problems because $mathbb{Z}/nmathbb{Z}$ isn't an integral domain. So what we do is we take $n$ to be all the powers of $ell$ for a fixed prime $ell$ and take the inverse limit over all the corresponding $E[ell^n]$. The result is a gadget called a Tate module, which is a $G$-module isomorphic (as an abstract group) to $mathbb{Z}_{ell}^2$, and which therefore defines a representation



$$G to text{GL}_2(mathbb{Z}_{ell}).$$



So how does one identify the representation corresponding to $E$? The standard answer is to look at certain ("conjugacy classes" of) elements of $G$ called Frobenius elements, which come from lifts of Frobenius morphisms. Although Frobenius elements aren't always well-defined, it turns out that the trace $a_{p,E}$ of the Frobenius element corresponding to $p$ in a representation is, and so we can identify a representation by giving the numbers $a_p$ for all $p$. (I am not really familiar with the details here, but I believe this works because Frobenius elements are dense in $G$.) It turns out that if $p$ is a prime of good reduction, $a_{p,E} = p + 1 - |E(mathbb{F}_p)|$, so these numbers can actually be obtained in a fairly concrete manner (where $E(mathbb{F}_p)$ is the set of points of $E$ over $mathbb{F}_p$). (Again, I am not really familiar with the details here, including what happens when $p$ doesn't have good reduction.)
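For primes of good reduction, the numbers $a_{p,E}$ really can be computed concretely by naive point counting; here is a sketch (the test curve $y^2 = x^3 - x$ and the function name are my own choices for illustration):

```python
def a_p(p, a, b):
    """Trace of Frobenius a_p = p + 1 - #E(F_p) for the curve
    y^2 = x^3 + a*x + b over F_p (p an odd prime of good reduction),
    counting affine points plus the point at infinity."""
    sqrt_table = {}
    for y in range(p):
        sqrt_table.setdefault(y * y % p, []).append(y)
    count = 1  # the point at infinity
    for x in range(p):
        count += len(sqrt_table.get((x * x * x + a * x + b) % p, []))
    return p + 1 - count

# y^2 = x^3 - x is supersingular at p = 3 (mod 4), so a_p = 0 there.
print(a_p(7, -1, 0), a_p(13, -1, 0))  # 0 6
```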



Now: one statement of the modularity theorem, formerly the Taniyama-Shimura conjecture, is that there exists a cusp eigenform $f$ of weight $2$ for $Gamma_0(N)$ for some $N$ (called the conductor of $E$) such that, whenever $p$ is a prime of good reduction,



$$a_{p, f} = a_{p, E}$$



where $a_{p, f}$ is the $p^{th}$ Fourier coefficient of $f$. In other words, cusp eigenforms of weight $2$ "are the same thing as" a large class of two-dimensional representations of $G$. The Langlands program is at least in part about generalizations of this statement to higher-dimensional representations of $G$, but there are many qualified number theorists here who can tell you what this is all about.

ag.algebraic geometry - how good an approximation to the equivariant derived category is given by the Grassmannian filtration of the classifying space?

So, let's say one has an action of $GL_n$ on an algebraic variety $X$ over a field $k$, and two objects $F,G$ in the equivariant derived category (i.e., the derived category of constructible sheaves on the stack $X/GL_n$).



For each integer $m$, let $Y_m$ be the space of injective linear maps $k^nto k^m$ and let $X_m=(Y_mtimes X)/GL_n$ (with the diagonal action, as usual). Note that we have a map $p_m:X_mto X/GL_n$.



Now, it's a fact that $Hom_{X/GL_n}(F,G)$ injects into the inverse limit $varprojlim Hom_{X_m}(p_m^*F,p_m^*G)$, but the map to $Hom_{X_m}(p_m^*F,p_m^*G)$ usually isn't injective for any fixed $m$.




Can anything precise be said about how fast this kernel shrinks?




The most boring case is when $F$ and $G$ are both the constant sheaf on a point. Then $Hom_{X/GL_n}(F,G)=H^*(BGL_n)$, the cohomology of the classifying space and $Hom_{X_m}(p_m^*F,p_m^*G)=H^*(Gr(m,n))$, the cohomology of the Grassmannian of $n$-planes in $m$-space. In this case the kernel is pretty well understood.



Ideally, the kernel in general would simply come from this case: i.e. these cohomology rings act on the right and left no matter what $X$ is, and the kernel might be generated by multiplying maps by classes in the kernel of the map from $H^*(BGL_n) to H^*(Gr(m,n))$, the map from the cohomology of the classifying space to the cohomology of the Grassmannian. This seems like a reasonable statement, but I'm not sure where to look for it.

human biology - Why do people say that trans fatty acids are bad for your health?

You are right that the biology textbooks claim that trans-"unsaturated" fatty acids can be metabolized. Note: the textbooks make it clear that "trans" means the fatty acid is unsaturated, and it can be monounsaturated or polyunsaturated.



Trans fatty acids are generally being blamed in order to preserve the unfortunate false "theory" that certain polyunsaturated fatty acids are essential for humans. This false "theory" is the basis for the epidemic of obesity and most of the strange and lifestyle diseases. The actual culprits of the epidemic of lifestyle diseases are the high levels of polyunsaturated fatty acids (both trans and cis) we consume through processed unsaturated vegetable fats.



Check your textbooks: atherosclerotic plaque is made up of oxidised polyunsaturated fatty acids and cholesterol (which has been labelled LDL cholesterol).



Prostaglandins, which are produced in the body by polyunsaturated fatty acids, have been identified by many different research studies as the molecules underlying diseases and ailments such as diabetes, asthma, cancers, dysmenorrhea (menstrual pains), baldness, cramps and muscle pull, migraine, glaucoma, delivery and labour problems, etc.



Aspirin according to the research results, reduces the chances of cancer occurrence or dying early from cancer. Aspirin stops the production of prostaglandins in the body.

ac.commutative algebra - Modules over a Gorenstein ring

Here is a proof which may not be the best but demonstrates some standard techniques:



Since $R$ has finite inj. dim. one can replace $M$ by a high syzygy, so one can assume $M$ has full depth. Thus one can kill a full regular sequence for both $M$ and $R$ and (as finiteness of proj. or inj. dim are not affected) assume $R$ is Artinian. Now, your last question shows that you already know in this case $M$ must be injective.



Map a free module onto $M$ and look at the exact sequence:



$$ 0 to N to R^n to M to 0$$



As $R$ is Gorenstein, $N$ also has finite injective dimension, so $N$ is injective. But then $text{Ext}_R^1(M,N)=0$, hence the sequence splits, and $M$ must be free.



PS: The name is Bruns. You said in your profile that you are a graduate student interested in commutative algebra. If that is indeed the case, then perhaps taking a serious course and talking to the experts at your institution would be more effective than learning it on MO (-: Good luck!

The 'real' use of Quantum Algebra, Non-commutative Geometry, Representation Theory, and Algebraic Geometry to Physics

Of the topics you mentioned, perhaps Representation Theory (of Lie (super)algebras) has been the most useful. I realise that this is not the point of your question, but some people may not be aware of the extent of its pervasiveness. Towards the bottom of the answer I mention also the use of representation theory of vertex algebras in condensed matter physics.



The representation theory of the Poincaré group (work of Wigner and Bargmann) underpins relativistic quantum field theory, which is the current formulation for elementary particle theories like the ones our experimental friends test at the LHC.



The quark model, which explains the observed spectrum of baryons and mesons, is essentially an application of the representation theory of SU(3). This resulted in the Nobel to Murray Gell-Mann.



The standard model of particle physics, for which Nobel prizes have also been awarded, is also heavily based on representation theory. In fact, there is a very influential Physics Report by Slansky called Group theory for unified model building, which for years was the representation theory bible for particle physicists.



More generally, many of the more speculative grand unified theories are based on fitting the observed spectrum in unitary irreps of simple Lie algebras, such as $mathfrak{so}(10)$ or $mathfrak{su}(5)$. Not to mention the supersymmetric theories like the minimal supersymmetric standard model.



Algebraic Geometry plays a huge rôle in String Theory: not just in the more formal aspects of the theory (understanding D-branes in terms of derived categories, stability conditions,...) but also in the attempts to find phenomenologically realistic compactifications. See, for example, this paper and others by various subsets of the same authors.



Perturbative string theory is essentially a two-dimensional (super)conformal field theory and such theories are largely governed by the representation theory of infinite-dimensional Lie (super)algebras or, more generally, vertex operator algebras. You might not think of this as "real", but in fact two-dimensional conformal field theory describes many statistical mechanical systems at criticality, some of which can be measured in the lab.
In fact, the first (and only?) manifestation of supersymmetry in Nature is the Josephson junction at criticality, which is described by a superconformal field theory. (By the way, the "super" in "superconductivity" and the one in "supersymmetry" are not the same!)

Saturday, 6 October 2007

pr.probability - probability puzzle - selecting a person

I have written an article about this game and solved it numerically for the case of 10 people. I computed that the probability is the same for each of the 9 people.



In short, I have defined the following:



Let $P(n,i,j,k)$ be the probability that, at the $n$-th round, person $k$ is holding the coin, while the people from $i$ counting clockwise to $k$ have all received the coin before.



And 3 recurrence equations are formulated:



$P (n+1,i,j,k)= frac{1}{2} P (n,i,j,k-1)+ frac{1}{2} P (n,i,j,k+1)$ ........(1)



$P (n+1,i,j,j)=frac{1}{2} P (n,i,j-1,j-1)+ frac{1}{2} P(n,i,j,j-1)$ ........(2)



....(3)



For details, please refer to:



Solving a probability game using recurrence equations and python



In this article, $a(n,i)$, which denotes the probability of the $i$-th person being the head in the $n$-th round, is found and plotted out as well.
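Independently of the recurrences, the claim that each of the other 9 people is equally likely can be checked by direct simulation (a sketch; I assume the usual setup in which the coin starts at one person and moves to a uniformly random neighbour each round, and the person who receives the coin last is selected):

```python
import random

def last_to_receive(n=10, rng=random):
    """Pass a coin around a circle of n people, moving one step left or
    right with probability 1/2; return the last person to get the coin
    for the first time."""
    pos, visited = 0, {0}
    while len(visited) < n:
        pos = (pos + rng.choice((-1, 1))) % n
        visited.add(pos)
    return pos

random.seed(0)
trials = 20000
counts = [0] * 10
for _ in range(trials):
    counts[last_to_receive()] += 1

# Persons 1..9 should each be last about 1/9 of the time; person 0 never is.
frequencies = [c / trials for c in counts[1:]]
```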

dg.differential geometry - Two definitions of Calabi-Yau manifolds

I have looked for a while for a proof which does not use the Calabi-Yau theorem, and nobody seems to know it.



Also, there are plenty of non-Kaehler manifolds with canonical bundle trivial topologically and non-trivial as a holomorphic bundle (the Hopf surface is the easiest example).



The argument actually uses the Calabi-Yau theorem, Bochner's vanishing, Berger's classification of holonomy and Bogomolov's decomposition theorem.



From the Calabi-Yau theorem you infer that there exists a Ricci-flat Kaehler metric. Since the Ricci curvature is the curvature of the canonical bundle, this implies that the canonical bundle admits a flat connection.



Of course, this does not mean that it is trivial holomorphically; in fact, the canonical bundle is flat on the Hopf surface and on the Enriques surface, which are not Calabi-Yau.



For Calabi-Yau manifolds, however, it is known that the Albanese map is a locally trivial fibration and has Calabi-Yau fibers with trivial first Betti number. This is shown using Bochner's vanishing theorem, which implies that all holomorphic 1-forms are parallel.



Now, by the adjunction formula, you prove that the canonical bundle of the total space is trivial if it is trivial for the base and the fiber. The base is a torus, and the fiber is a Calabi-Yau with $H^1(M)=0$. For the latter, triviality of the canonical bundle follows from Bogomolov's decomposition theorem, because such a Calabi-Yau manifold is a finite quotient of a product of simple Calabi-Yau manifolds and hyperkaehler manifolds having holonomy $SU(n)$ and $Sp(n)$. Bogomolov's decomposition is itself a non-trivial result, and (in this generality) I think it can only be deduced from Berger's classification. The original proof of Bogomolov was elementary, but he assumed holomorphic triviality of the canonical bundle, which is what we are trying to prove.



This argument is extremely complicated; also, it is manifestly useless in the non-Kaehler situation (and in many other interesting situations). I would be very interested in any attempt to simplify it.



Update: Just as I was writing this reply, Dmitri posted a link to Bogomolov's article, where he proves that some power of the canonical bundle is always trivial, without using the Calabi-Yau theorem.

Tuesday, 2 October 2007

ag.algebraic geometry - Why the rank of a locally free sheaves is well defined?

Let $mathcal{F}$ be a locally free sheaf on $X$. For any $x$ in $X$ there exists $x in U subset_{open} X $ such that




$mathcal{F}|_U cong mathcal{O}_X|_U^{(I)}$ $(star)$.




In particular, for each $y$ in this particular $U$, one has $mathcal{F}_y cong mathcal{O}_{X,y}^{(I)}$ (which is given by the isomorphism above!!!).



Suppose now $X$ is connected and $mathcal{F}$ is locally free (we need this). Fix an indexing set $I$ (and I think I need to take this $I$ to be one of the indexing sets from $(star)$ above). The properties of $mathcal{F}$ show that the set




$S_I = {x in X : mathcal{F}_x cong mathcal{O}_{X,x}^{(I)}}$



is both closed and open in $X$. We know that there exists




$x$ in $X$ with $mathcal{F}_x cong mathcal{O}_{X,x}^{(I)}$,



so $S_I$ is nonempty, and since $X$ is connected, we have $S_I = X$.




In particular, $text{rank}_{mathcal{O}_{X,x}}(mathcal{F}_x)$ is constant as $x$ varies in $X$.

Group-Adjoint and Hopf-Algebra-Adjoint Maps

The warm-up for any Hopf algebra construction is to try it on the two Hopf algebras $mathbb K G$ and $C(G,mathbb K)$, the group ring and ring of functions respectively for a finite group $G$.



In the group ring $mathbb K G$, the group conjugation manifests in the forward direction. Recall that the structure maps on $mathbb K G$ are simply the linearizations of the structure maps on $G$, where the comultiplication $Delta$ is the duplication map $g mapsto gotimes g$ for $gin G$ and the antipode $S$ is $g mapsto g^{-1}$. Then the conjugation is:
$$ gotimes h mapsto S(g_{(1)}) , h , g_{(2)} $$
where I have used the Sweedler notation to denote the comultiplication, and the multiplication is simply concatenation. So this is a map $mathbb K G^{otimes 2} to mathbb K G$ extending $(g,h) mapsto g^{-1}hg$. Incidentally, it's probably better to use the other conjugation $(g,h) mapsto ghg^{-1}$, as that gives a left-action of $G$ on $G$, and you are writing the actor on the left. But I'll keep the ordering you gave in the question, for now. At the end, I'll come back to this, and it will be clear where the confusion is.
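The left-versus-right point is already visible for plain group conjugation, before any Hopf structure enters. A small Python check on $S_3$ (all names mine; permutations are composed with the left factor applied last):

```python
from itertools import permutations

def compose(p, q):
    """(p . q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj_left(g, h):   # g h g^{-1}
    return compose(compose(g, h), inverse(g))

def conj_right(g, h):  # g^{-1} h g
    return compose(compose(inverse(g), h), g)

S3 = list(permutations(range(3)))

# g h g^{-1} satisfies the left-action axiom g1 . (g2 . h) = (g1 g2) . h ...
assert all(conj_left(g1, conj_left(g2, h)) == conj_left(compose(g1, g2), h)
           for g1 in S3 for g2 in S3 for h in S3)
# ... while g^{-1} h g does not (it is a right action, not a left one).
assert any(conj_right(g1, conj_right(g2, h)) != conj_right(compose(g1, g2), h)
           for g1 in S3 for g2 in S3 for h in S3)
```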



In $C(G,mathbb K)$, everything is naturally reversed. In particular, for $G$ finite, $C(G,mathbb K)$ has a basis given by the delta functions $delta_g$ for $gin G$. The multiplication is commutative and the comultiplication is (for nonabelian $G$) noncocommutative. We want the map $delta_{g^{-1}hg} mapsto delta_g otimes delta_h$, or perhaps in the other order, depending on your conventions.



So let's read backwards. We first blow up $a = delta_{g^{-1}hg}$ to $a_{(1)} otimes a_{(2)} otimes a_{(3)}$, where we think of $a_{(1)}$ as $delta_{g^{-1}}$, etc. (of course, it isn't just that, but we'll pick out that term). Now we need to move the last term past the middle term, and antipode the first term: $S(a_{(1)}) otimes a_{(3)} otimes a_{(2)}$. Then the final multiplication makes sure that $S(a_{(1)})$ and $a_{(3)}$ are supported at the same points in $G$. So, all together, I think it's reasonable to call:
$$ a mapsto S(a_{(1)})a_{(3)} otimes a_{(2)} $$
a "coadjoint coaction", with the caveat as above that it's probably on the wrong side.



In any case, up to left and right, this is your second proposal. And left and right is hard, for the following reason. There seems to be no consensus in the quantum groups literature for whether the dual to $A otimes B$ is $A^* otimes B^*$ or $B^* otimes A^*$. Or rather, it's clear from the representation theory of Hopf algebras that it must be the latter, but many of the early (and later) texts on Hopf algebras use the former dual when working in vector spaces (defining duals to Hopf algebras, etc.).



But also, for $C(G,mathbb K)$ it doesn't matter: $ab = ba$. So really what you must do is check that your proposal is really a coaction, because in Hopf land this will pick out the difference.



In any case, I think you'll find when you work it out that this is not a coaction, exactly because $(g,h) mapsto g^{-1}hg$ is not an action of groups (the $g$ is on the wrong side). If you do the correctly-sided action $(g,h) mapsto ghg^{-1}$, then in $mathbb K G$ this is $gotimes h mapsto g_{(1)}hS(g_{(2)})$, and in $C(G,mathbb K)$ it is $a mapsto a_{(1)}S(a_{(3)}) otimes a_{(2)}$ (or the one you initially started with, if you have the opposite left-right convention when dualizing). Then you can just check directly that this is in fact a coaction of Hopf algebras.

genetics - How many people's DNA were involved in the compilation of the reference human genome?

tl;dr: In the Human Genome Project, they used the DNA of four people (though one male donor provided >70% of the DNA). The Celera genome was compiled from the DNA of five people.




In the [...] Human Genome Project (HGP), [...] scientists used white
blood cells from the blood of two male and two female donors (randomly
selected from 20 of each) -- each donor yielding a separate DNA
library. One of these libraries (RP11 [anonymous donor from Buffalo,
NY]) was used considerably more than others, due to quality
considerations.



[...]



In the Celera Genomics private-sector project, DNA from five different
individuals were used for sequencing. The lead scientist of Celera
Genomics at that time, Craig Venter, later acknowledged (in a public
letter to the journal Science) that his DNA was one of 21 samples in
the pool, five of which were selected for use.



On September 4, 2007, a team led by Craig Venter published his
complete DNA sequence,[21] unveiling the six-billion-nucleotide genome
of a single individual for the first time.




Source: Wikipedia

Monday, 1 October 2007

pr.probability - Difference Equations & Possible Limits

The answer to this may well be in some elementary textbook - a reference might be more useful than a short answer here.



If we look at the behaviour of a point in $mathbb R^n$ under matrix multiplication, we know that there are only a few pictures (it can remain fixed at a point, go in a circle, go in a spiral, etc.). In projective space, there are even fewer (spirals become circles, etc.).



I'm wondering if there are any similar theorems about what pictures are possible in the 'piecewise constant' as opposed to 'constant' case. Precisely, I have an (open) region of $mathbb R^n$ sliced up into smaller regions by hyperplanes; on each of these regions, I have a (single) matrix. Furthermore, on the boundaries, these matrices agree. I can now run the same sort of dynamics, and I'm curious as to what can happen. In addition to the question as to which 'pictures' are possible, I'm interested in any other general theorems that show up around here - this problem showed up in something else I was doing, and it is a little far from what I usually do.



Thanks!



PS: If the 'piecewise constant' condition seems ridiculous, we can think of the transformation on the line which takes x to 0.7x if x<=0, and takes x to 0.4x if x >=0. These transformations 'look different' at 0 as written, but obviously do the same thing. The piecewise constant condition is essentially the same thing, along more interesting subspaces.
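Iterating this one-dimensional example (a trivial sketch) shows the single picture available here, namely contraction to the fixed point 0:

```python
def step(x):
    """The piecewise-linear map from the example above; the two pieces
    agree at the boundary x = 0, so the map is well defined there."""
    return 0.7 * x if x <= 0 else 0.4 * x

x = -1.0
for _ in range(50):
    x = step(x)
print(abs(x) < 1e-6)  # True: the orbit contracts to 0
```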



[edited to respond to comment] Thanks for asking, I hope this is more clear: Space is sliced up into regions. Within each region there is a single (constant) matrix. Different regions may have different matrices; however, they must agree on the boundaries.