Saturday, 30 June 2007

homework - Where does meiosis II of oogenesis end exactly in tuba uterina - uterus?

I know that metaphase II ends at the ampulla of the tuba uterina, but I am not completely sure where telophase II ends. Is it in the triangular section of the cervix uteri?



I just have an intuition that the process does not end at the ampulla, since some time passes before the egg reaches the uterus. It must finish before implantation in the uterus, since the egg is already a blastocyst at that stage.



2. When does meiosis II of oogenesis end exactly?

pr.probability - Is there a simple inductive procedure for generating labeled trees uniformly at random, without direct recourse to Prüfer sequences?

This is an interesting question. For any fixed positive integer $d \geq 2$, write $T_d^{\infty}$
for the complete infinite rooted $d$-ary tree (by this I mean every node has exactly $d$ children). Luczak and Winkler proved the existence of a procedure which will generate a sequence $(T_{n,d})_{n \geq 0}$ such that for all $n \geq 1$,



(a) the distribution of $T_{n,d}$ is uniformly random over all $n$-node subtrees of $T^{\infty}_d$ that contain the root of $T^{\infty}_d$; and



(b) $T_{n,d}$ is a subtree of $T_{n+1,d}$.



It is not hard to show that (a) implies that for all $n$, $T_{n,d}$ is distributed as a Galton-Watson tree with offspring distribution $\mathrm{Bin}(d,1/d)$, conditioned to have total size $n$.
Since $\mathrm{Bin}(d,1/d)$ tends to a $\mathrm{Poisson}(1)$ distribution as $d$ becomes large,
this means that as $d \to \infty$, the distribution of $T_{n,d}$ tends to that of a Galton-Watson tree with offspring distribution $\mathrm{Poisson}(1)$ conditioned to have $n$ nodes (let me write $\mathrm{PGW}_n(1)$ for this distribution). The latter distribution is the same as that of a uniformly random labelled rooted tree on labels $1,\ldots,n$. (At least, the latter is true once we label the Galton-Watson tree uniformly at random, or alternately remove the labels of the labelled rooted tree.)



As noted by Lyons et al. (Theorem 2.1), all this implies in particular that one can define a similar sequence $(T_n)_{n \geq 1}$ such that for all $n$, $T_n$ is a subtree of $T_{n+1}$ and $T_n$ has distribution $\mathrm{PGW}_n(1)$.



However, the construction in the Luczak-Winkler paper uses flows, and it is not 100% obvious how it "passes through to the $d=\infty$ limit." (I say this with the caveat that I didn't make any serious attempt at figuring this out.) As a consequence, while it is known that there exists a generation procedure of the type you are looking for, I am not aware of an explicit description of the actual rule for where the leaf should be attached to $T_n$ to create $T_{n+1}$. I asked Peter Winkler about this at a conference last year and he also didn't know (though I don't know whether he had thought about this specific question in depth, either).
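For comparison, the non-inductive baseline the question wants to avoid is easy to state: decode a uniformly random Prüfer sequence via Cayley's bijection. A minimal Python sketch (this is the standard decoding, not the Luczak-Winkler flow construction, and the function name is mine):

```python
import heapq
import random

def random_labeled_tree(n):
    """Return the edge list of a uniformly random labeled tree on
    {1, ..., n}, by decoding a uniform random Prüfer sequence."""
    if n < 2:
        return []
    seq = [random.randint(1, n) for _ in range(n - 2)]
    degree = [1] * (n + 1)          # degree[v] = 1 + multiplicity of v in seq
    for v in seq:
        degree[v] += 1
    # Repeatedly attach the smallest current leaf to the next sequence entry.
    leaves = [v for v in range(1, n + 1) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[leaf] -= 1           # this leaf is now used up
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    # Exactly two vertices of degree 1 remain; join them.
    u, w = (v for v in range(1, n + 1) if degree[v] == 1)
    edges.append((u, w))
    return edges
```

Rooting the result at vertex 1 gives a uniform labelled rooted tree, i.e. a sample from $\mathrm{PGW}_n(1)$ after forgetting labels; but note this resamples from scratch for each $n$, which is exactly the incremental leaf-attachment property it lacks.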

Friday, 29 June 2007

Effects of Polyphasic vs Monophasic sleep in humans

Reading "Polyphasic Sleep: Facts and Myths" (Dr Piotr Wozniak), it is pointed out that human infants do undergo polyphasic sleep. As this is when most of our development obviously occurs, I do not know how to proceed further with the question of how it would affect development. Perhaps the issue is more how it would affect the day-to-day performance of a developed individual? If so, Dr Wozniak suggests that this is likely to be highly disruptive to the individual.




Those well-defined effects of natural sleep-affecting stimuli on sleep patterns lead to an instant conclusion: the claim that humans can adapt to any sleeping pattern is false. A sudden shift in the schedule, as in shift work, may lead to a catastrophic disruption of sleep control mechanisms. 25% of the North American population may work in some variant of a shift schedule. Many shift workers never adapt to shifts in sleep patterns. At times, they work partly in conditions of harmful disconnect from their body clock, and return to restful sleep once their shift returns to their preferred timing. At worst, the constant shift of the working hours results in a loss of synchrony between various physiological variables and the worker never gets any quality sleep. This propels an individual on a straight path to a volley of health problems...



It appears that polyphasic sleep runs into precisely the same problems as seen in jet lag or shift work. The human body clock is not adapted to sleeping in patterns other than monophasic or biphasic sleep.




It would therefore seem that polyphasic sleep is certainly detrimental to health, if not development.



However, in studies of cognitive performance under differing sleep patterns run by Dr Claudio Stampi (published as ISBN 0-8176-3462-2), he concluded that polyphasic sleep was more efficient than monophasic sleep. It may therefore be possible that polyphasic sleep patterns have no detrimental effect on development.




Individuals sleeping for 30 minutes every four hours, for a daily total of only 3 hours of sleep, performed better and were more alert than when they had 3 hours of uninterrupted sleep.




There are a couple of theories mentioned in the above book (beginning pg 5) which support monophasic sleep as an evolutionary development rather than a social convention:



  1. Polyphasic sleep is regularly seen in smaller mammals that have very high metabolic rates, requiring them to spend most of their time foraging or hunting. A long sleep would therefore be highly impractical for them, as they would wake without the energy required to hunt their next meal. Humans do not have this need, as they are larger and do not require such regular meals.

  2. Monophasic sleep would be beneficial for the early human hunter-gatherer, as our eyes are not well adapted to seeing at night. Any time spent awake without daylight is therefore much less useful, which makes it more beneficial to sleep for an extended period when the sun is down.

I am sure that social factors would have an effect; however, I would imagine the evolutionary pressures to be more significant.



I really would recommend the book (http://sleepwarrior.com/Claudio_Stampi_-_Why_We_Nap.pdf) if you have not yet encountered it, as I found it very informative and packed with references to studies that you may find helpful.

Galois theory and rational points on elliptic curves

What you've written down is relevant for finding rational torsion points on an elliptic curve. If that's what you want to do, Galois theory is certainly relevant. For instance, suppose you have an elliptic curve in Weierstrass form,



y^2 = f(x)



with f a cubic. Now suppose you find that f(x) has a linear factor (x-a). (I certainly take this to be a "Galois-theoretic" condition on f.) Then you've found a rational point of your curve, namely (a,0).



The relationship between Galois theory and points of infinite order is more subtle, involving Galois cohomology, and is discussed in chapter 10 of Silverman's book The Arithmetic of Elliptic Curves.

ct.category theory - What are natural examples of "bimorphism" classes?

I'm going to be deliberately provocative and say that I don't really know of any use for the concept of bimorphism as such. (I also don't really like the name; it sounds to me like something that's both a morphism and a comorphism.)



One use that's been proposed is "to find situations in which bimorphism ⇒ isomorphism." Such situations may be interesting, but as far as I can tell they are rarely (if ever) used. What seems to happen much more often is that we have some factorization system (E,M) and we use the fact that a morphism lying in both E and M is an isomorphism (E ∩ M = iso), which is true for any factorization system. The most common case is probably (extremal epi, mono), followed perhaps by (epi, extremal mono); both of these are factorization systems as soon as the relevant factorizations exist.



It might happen, in some case, that E consists of exactly the epimorphisms and M of exactly the monomorphisms (such as when all epis, or all monos, are extremal). But as far as I can tell this fact -- especially the epi part of it -- is hardly ever relevant, because in practice it's quite hard to characterize the epis in a given category or to check that a given morphism is epi, nor is the answer often especially meaningful. Since monadic functors also create limits, and in particular monomorphisms, a morphism in a category monadic over Set is monic iff it is injective -- but this is not true for epis, and even in quite nice categories the epis can be fairly bizarre. It's usually the extremal epis which coincide with the "surjections" and form a factorization system with the monos.



For instance, Andrew cited vector spaces as an example of a balanced category. But as I pointed out in my comment, do we ever use that fact? What we actually teach our undergraduates is that injective+surjective=iso for vector spaces; we (or, at least, I) don't tell them anything about why surjective=epi, or even what epi means. And when doing linear algebra, I might occasionally use the fact that surjections are in particular epi (which just follows because the forgetful functor to Set is faithful), but never the converse. It's just as true for groups, rings, fields, monoids, etc. that injective+surjective=iso, and we use that fact in doing algebra all the time -- but does the non-surjective ring epimorphism Z → Q, showing that rings (unlike vector spaces) are not balanced, ever actually bother us in practice?



In the topological situation, it's true that the epimorphisms in Top are precisely the surjective continuous maps. But does that fact really help you when looking for conditions ensuring that a continuous bijection is an isomorphism, or using that fact in practice? Odds are the property of a continuous bijection you're going to use is that it's continuous and a bijection, not that it's monic and epic in the category Top.



The categorical version of "continuous bijection in Top" is "inverted by the forgetful functor to Set," and I think that in general the property of "being inverted by a forgetful functor" is quite interesting and important. For instance, a forgetful functor with the property that any morphism inverted by it is already an isomorphism is called conservative, and these include all monadic functors. The question about all the different topologies one can put on a given set also seems to me to really be about morphisms inverted by the forgetful functor; is it really important here that continuous surjections are the epis in Top? I expect that if you modify the definition of Top a little, then it may no longer be true that epis coincide with continuous surjections, and in that case I bet that it is the continuous surjections which are of more interest.



At this point, perhaps the most interesting thing I know about bimorphisms is that they often form the middle class of a ternary factorization system. I'll be happy to be proven wrong, however.

Thursday, 28 June 2007

ag.algebraic geometry - Definition of Chow groups over Spec Z

So, with a lot of extra care about dimension/codimension, it seems to be possible to define Chow groups over Spec Z, if I understand the above answers correctly.



I may point out that in the book by Elman, Karpenko and Merkurjev, "The Algebraic and Geometric Theory of Quadratic Forms" (even though the title does not suggest so), they very carefully work out Chow groups, even some version of higher Chow groups. They begin by treating Chow groups over general excellent schemes (something not written so explicitly in Fulton), so the treatment is quite general, and only later do they impose additional assumptions, like equidimensionality, working over a field, and so on. So maybe it is worth having a look at that.



On the other hand, they get a pullback along non-flat morphisms only under the typical more restrictive conditions. This, however, is crucial for turning Chow groups into the Chow ring.



So I think the construction of the intersection product [which uses the pullback along the embedding of the diagonal $X \to X \times X$] is another very critical matter over Spec Z (but according to one of the other answers it can be done, which sounds very interesting).



Last but not least, just maybe another perspective: if one writes down the classical intersection multiplicity of two cycles, that can be done by first multiplying both cycles of complementary codimension [so for this we need a ring structure, but let's just assume somebody can give such a structure even over Z, just to find out where we would actually be going]; then the product lies in $CH^n(X)$, $n$ being the dimension of our scheme. Now to turn that into the classical intersection multiplicity one can push this cycle forward along the structural map to the base field,



$X \to \mathrm{Spec}(k)$



over a field(!), and $CH_0(\mathrm{Spec}\,k) = \mathbb{Z}$, so we get our intersection number. Voilà.
But if we are proper over Spec Z, we could at best push forward



$X \to \mathrm{Spec}(\mathbb{Z})$



but $CH_0(\mathrm{Spec}(\mathbb{Z})) = 0$, so nothing very interesting seems to result here.
[this argument however only makes sense if the dimension shifts in this Spec Z setup would be carried over analogously, which maybe is also problematic here for the reason that Spec Z is one-dimensional and Spec k zero-dimensional. I am just saying all this because best and supercool would of course be somebody with a Spec $\mathbb{F}_1$ having



$CH_0(\mathrm{Spec}\,\mathbb{F}_1) = ?$ (....something, probably rather $\mathbb{R}$ than $\mathbb{Z}$)



and that could then be our Spec Z intersection number, by giving $\mathbb{F}_1$ the role of a "virtual base field", and I guess some people say this should link to the Arakelovian one.... but well, that's very speculative]



So I think some people's expectation goes in the direction that the "interesting" way of doing intersection theory over Spec Z needs such a final $\mathbb{F}_1$-twist.



Note maybe that the classical analogue would be



$\mathbb{P}^1 \leftrightarrow \mathrm{Spec}(\mathbb{Z}) + (\text{infinite place})$



but $CH_0(\mathbb{P}^1) = \mathbb{Z}$, whereas $CH_0(\mathrm{Spec}(\mathbb{Z})) = 0$, so we kind of miss something if we just use classical Chow groups over Spec Z. For other questions, classical methods work well even for Spec Z without needing $\mathbb{F}_1$ or so; for example, the étale fundamental groups of both $\mathbb{P}^1$ and Spec Z are trivial. But for Chow theory some additional tricks seem to be required.



At least that is my impression. Of course this arithmetic aspect of intersection theory over Spec Z is a kind of different story and it also makes perfect sense to talk about classical Chow groups over Spec Z, so there is certainly nothing wrong in having $CH_0(\mathrm{Spec}(\mathbb{Z})) = 0$; just maybe for some sorts of questions of arithmetical content, this type of Chow theory may not be the right approach.

biochemistry - Which sequence characteristics influence the transcription efficiency of T7 polymerase?

By far the most important part is the very beginning of the transcript, especially positions +1 and +2. The conserved consensus sequence in class III T7 promoters is GGGAGA; any changes in the first two nucleotides severely reduce the transcription efficiency. Changes at positions +3 to +6 have much smaller effects.



Additionally, changes that put many AU base pairs in that region (e.g. GGUUU) seem to affect the transcription efficiency negatively.



Other sequences that are problematic anywhere, not only at the beginning, are long stretches of uridines or adenines. Sequences with eight or more consecutive uridines or adenines can cause the polymerase to slip, which results in transcripts with more uridines or adenines than the template encodes.
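The rules above can be collected into a quick screening function. A toy sketch (the thresholds are my reading of the text, not validated design rules, and `t7_transcript_warnings` is a made-up helper name):

```python
import re

def t7_transcript_warnings(seq):
    """Flag sequence features reported to reduce T7 transcription yield.
    `seq` is the intended transcript, 5'->3' (DNA input is converted)."""
    seq = seq.upper().replace("T", "U")
    warnings = []
    # Positions +1/+2: the class III consensus starts GGGAGA, and changes
    # in the first two nucleotides severely reduce efficiency.
    if not seq.startswith("GG"):
        warnings.append("transcript does not start with GG (+1/+2)")
    # An A/U-rich 5' end (e.g. GGUUU) is reported to lower efficiency.
    head = seq[:6]
    if head.count("A") + head.count("U") >= 3:
        warnings.append("A/U-rich 5' end")
    # Runs of 8+ A or U anywhere can cause polymerase slippage.
    if re.search(r"A{8,}|U{8,}", seq):
        warnings.append("homopolymer run of 8+ A/U (slippage risk)")
    return warnings
```

For example, a transcript starting GGGAGA with no long A/U runs comes back clean, while GGUUU-type starts are flagged.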



These characteristics are detailed in the 1989 paper by Milligan and Uhlenbeck.




Milligan, J. F. & Uhlenbeck, O. C. Synthesis of small RNAs using T7 RNA polymerase. Meth. Enzymol. 180, 51–62 (1989).

synthetic biology - T7 phage promoter action in mammals

Amazingly, yes.



See High level gene expression in mammalian cells by a nuclear T7-phage RNA polymerase. The authors recognize that the T7 RNA polymerase tends to remain in the cytoplasm and is thus unable to transcribe genes in the nucleus. To remedy this limitation, the T7 polymerase was fused to a nuclear localization signal peptide to localize the polymerase correctly.



Promega sells a T7 expression system for RNAi production.

Wednesday, 27 June 2007

soft question - What's your favorite equation, formula, identity or inequality?

It's too hard to pick just one formula, so here's another: the Cauchy-Schwarz inequality:




$\|x\|\,\|y\| \geq |\langle x,y\rangle|$, with equality iff $x$ and $y$ are parallel.




Simple, yet incredibly useful. It has many nice generalizations (like Holder's inequality), but here's a cute generalization to three vectors in a real inner product space:




$\|x\|^2 \|y\|^2 \|z\|^2 + 2\langle x,y\rangle\langle y,z\rangle\langle z,x\rangle \geq \|x\|^2\langle y,z\rangle^2 + \|y\|^2\langle z,x\rangle^2 + \|z\|^2\langle x,y\rangle^2$, with equality iff one of $x,y,z$ is in the span of the others.




There are corresponding inequalities for 4 vectors, 5 vectors, etc., but they get unwieldy after this one. All of the inequalities, including Cauchy-Schwarz, are actually just generalizations of the 1-dimensional inequality:




$\|x\| \geq 0$, with equality iff $x = 0$,




or rather, instantiations of it in the 2nd, 3rd, etc. exterior powers of the vector space.
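The three-vector inequality is exactly the statement that the Gram determinant of $(x,y,z)$ is nonnegative, which makes it easy to sanity-check numerically. A small sketch (function names are mine):

```python
def dot(u, v):
    """Standard inner product of two real vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_slack(x, y, z):
    """LHS minus RHS of the three-vector inequality. This equals the
    determinant of the Gram matrix of (x, y, z), so it is >= 0, with
    equality iff x, y, z are linearly dependent."""
    xy, yz, zx = dot(x, y), dot(y, z), dot(z, x)
    xx, yy, zz = dot(x, x), dot(y, y), dot(z, z)
    return (xx * yy * zz + 2 * xy * yz * zx
            - xx * yz ** 2 - yy * zx ** 2 - zz * xy ** 2)
```

With integer vectors the check is exact: for instance `gram_slack([1, 2, 3], [4, 5, 6], [5, 7, 9])` is 0, since the third vector is the sum of the first two.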

lab reagents - What are key factors when evaluating and comparing miniprep columns?

Seems like you have covered the essentials here. I can't think of anything else.



Since the Qiagen patent on spin prep columns ran out, these kits are very cheap - $0.40 each? In the 3 or 4 kits I've used, they all seem to use the same protocol and about the same buffers, so there might be differences in quality or yield, but if so, they are small.



You can even make your own buffers, as the recipes are on OpenWetWare. None of the kits I've used has suggested that these buffers can't be used. They even usually call the buffers by the same names.



So your application would have to be pretty discriminating to tell the kits apart. Even if this were the case, the quality could probably be improved a lot just by washing the column more. Can you provide more details about how a few percent one way or another might be an issue?

Tuesday, 26 June 2007

reference request - Sum of Log Normal random variables

Hi,



Here are the most interesting references I could find.



I post them because I think other people might be interested in this issue,
so here are the article titles:



Asmussen, Rojas-Nandayapa - Sums of Dependent lognormal Random Variables, Asymptotics and Simulation



Vanduffel, Chen, Dhaene, Goovaerts, Henrard - Optimal Approximations for Risk Measures of Sums of Lognormals based on Conditional Expectations



Li - A Novel Accurate Approximation Method of Lognormal Sum Random Variables



Gao, Xu, Ye- Asymptotic Behavior of Tail Density for Sum of Correlated Lognormal Variables



Mehta, Molisch, Wu, Zhang - Approximating the Sum of Correlated Lognormal or Lognormal-Rice Random Variables



Fu, Madan, Wang - Pricing Continuous Asian Options, A Comparison of Monte Carlo and Laplace Transform Inversion Methods



Vecer - New Pricing of Asian Options



And a few other articles not directly applicable to this problem but interesting on their own :



Eden, Viens - General Upper and Lower Tail Estimates using Malliavin Calculus and Stein's Equations



Barndorff-Nielsen, Klüppelberg - Tail Exactness of Multivariate Saddlepoint Approximations



Have a nice day

Monday, 25 June 2007

dg.differential geometry - Fundamental group of a compact space form.

Space forms are complete (connected) Riemannian manifolds of constant sectional curvature.
These fall into three classes: Euclidean, with universal covering isometric to $\mathbb{R}^n$,
spherical, with universal covering isometric to $S^n$, and hyperbolic, with universal covering isometric to $\mathbb{H}^n$.



Do there exist compact space forms of the same dimension from two different classes having isomorphic fundamental groups?



This cannot happen for $n = 2$, since the Gauss-Bonnet theorem



$$
\int_M K \,\mathrm{vol}_M = 2\pi\,\chi(M)
$$



shows that the Euler characteristic $\chi(M)$ is zero, positive, or negative when the space form $M$ is Euclidean, spherical, or hyperbolic, respectively. But for (closed) surfaces the fundamental group determines the Euler characteristic.



It is essential for the question to require compactness, otherwise there are trivial examples. Dividing out $\mathbb{R}^2$ by the group of isometries generated by
$(x,y) \mapsto (x+1,y)$ yields a Euclidean space form with fundamental group isomorphic to $\mathbb{Z}$, while dividing out $\mathbb{H}^2$ by the group of isometries generated by
$(x,y) \mapsto (x+1,y)$ yields a hyperbolic space form also with fundamental group isomorphic to $\mathbb{Z}$.



The standard reference on space forms is Spaces of Constant Curvature by Joseph A. Wolf.
The classification of Euclidean space forms is given in Chapter 3, and of spherical ones in Chapter 7. Wolf does not treat hyperbolic space forms, possibly because not much was known about them in 1967. Unfortunately, the fundamental groups are infinite for the compact Euclidean space forms, and finite for the spherical space forms (which are necessarily compact, being quotients of $S^n$). So a hypothetical example has to involve a hyperbolic space form.



An example might drop out of the theory of three-manifolds. In dimension three the space forms belong to three of the eight Thurston model geometries. A pair yielding an example would have to be one Euclidean and one hyperbolic, since it follows from Perelman's geometrization theorem that the spherical ones are precisely those with finite fundamental group.

Saturday, 23 June 2007

ca.analysis and odes - Lipschitz functions in $\mathbb{R}^n$

Let $f = (f_1,\ldots,f_n): [a,b] \rightarrow \mathbb{R}^n$ be a continuously differentiable function. (See the comments above for an explanation as to why the hypotheses have been strengthened.)



For $1 \leq i \leq n$, let



$L_i = \max_{x \in [a,b]} |f_i'(x)|$,



so that, by the Mean Value Theorem, for $x,y \in [a,b]$,



$|f_i(x)-f_i(y)| = |f_i'(c)|\,|x-y| \leq L_i |x-y|$.



Then, taking the standard Euclidean norm on $\mathbb{R}^n$,



$|f(x)-f(y)|^2 = \sum_{i=1}^n |f_i(x)-f_i(y)|^2 \leq \left(\sum_{i=1}^n L_i^2\right) |x-y|^2$,



so



$|f(x)-f(y)| \leq \sqrt{\sum_{i=1}^n L_i^2}\; |x-y|$.



Thus we can take



$L = \sqrt{\sum_{i=1}^n L_i^2}$.



Since all norms on $\mathbb{R}^n$ are equivalent -- i.e., differ at most by a multiplicative constant -- the choice of norm on $\mathbb{R}^n$ will change the expression of the Lipschitz constant $L$ in terms of the Lipschitz constants $L_i$ of the components, but not whether $f$ is Lipschitz.
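As a numerical illustration of the bound $L = \sqrt{\sum_i L_i^2}$, here is a Python sketch (the maxima are approximated by grid sampling, so this is an estimate rather than a certified bound; names are mine):

```python
import math

def lipschitz_constant(f_primes, a, b, samples=1000):
    """Approximate L = sqrt(sum_i L_i^2), where L_i = max |f_i'| on [a, b],
    by sampling each component derivative on a uniform grid."""
    Ls = [max(abs(fp(a + (b - a) * k / samples)) for k in range(samples + 1))
          for fp in f_primes]
    return math.sqrt(sum(L * L for L in Ls))

# Example: f(t) = (sin t, cos t) on [0, pi], so f' = (cos, -sin),
# L_1 = L_2 = 1, and the bound is L = sqrt(2).
L = lipschitz_constant([math.cos, lambda t: -math.sin(t)], 0.0, math.pi)
```

For this $f$ the exact Lipschitz constant is $1$ (arc length of the unit circle), so the example also shows the bound need not be tight.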

number fields - Given an integer n and a finite extension K of Q, find a polynomial of degree n that is irreducible over K

Here is a more general result.



Theorem: Let $(K, |\cdot|)$ be a non-Archimedean normed field with completion $\hat{K}$. Let $\mathcal{L}/\hat{K}$ be a finite separable extension of degree $d$. Then there exists a degree $d$ separable field extension $L/K$ such that $L\hat{K} = \mathcal{L}$.



In particular, as long as the completion of $K$ admits a separable field extension of a certain degree $d$, so does $K$ itself, necessarily of the form $K[t]/(P(t))$ by the primitive element theorem. Moreover, as long as $K$ has characteristic zero and carries a nontrivial discrete valuation, it admits finite separable extensions of all finite degrees.



For a proof of this theorem using Krasner's Lemma, see Section 3.5 of



http://math.uga.edu/~pete/8410Chapter3.pdf



When the norm corresponds to a discrete valuation $v$ (e.g. $|\cdot| = |\cdot|_{\mathfrak{p}}$
the $\mathfrak{p}$-adic norm for a prime ideal $\mathfrak{p}$ of a number field $K$) one can get away with less: by weak approximation, there exists $\alpha \in K$ with $v(\alpha) = 1$. For any positive integer $n$ prime to the characteristic of $K$, by Eisenstein's Criterion the polynomial $t^n - \alpha \in K[t]$ is (separable and) irreducible even over the completion $\hat{K}$, so is certainly irreducible over $K$.

rt.representation theory - Beilinson-Bernstein localization in positive characteristic

This is a follow-up to this question; in particular, I'm wondering if anyone can expand upon the interesting answers given by Kevin McGerty and David Ben-Zvi there. (In particular, in this question I'm essentially quoting their answers).



Here's the setup: Let $k$ denote an algebraically closed field of positive characteristic and let $G$ be a semisimple algebraic group over $k$. Let $D$ denote the sheaf of ordinary differential operators on the flag variety $G/B$ of $G$; i.e., $D$ is the sheaf of divided-power differential operators. Also let $H$ denote the hyperalgebra of $G$.



Now, over $\mathbb{C}$ there is an equivalence of categories between $D$-modules and $H$-modules with a certain central character. My question is: Is there any sort of localization theorem like this in positive characteristic? Kashiwara and Lauritzen have shown that $G/B$ is not $D$-affine in general, so perhaps one should look for a derived equivalence. (Bezrukavnikov, Mirkovic, and Rumynin have answered a similar question, but instead of $D$ they take the sheaf of crystalline/PD differential operators, and instead of $H$ they take the enveloping algebra of the Lie algebra of $G$).

Thursday, 21 June 2007

ac.commutative algebra - Serre intersection formula and derived algebraic geometry?

There are a number of comments to make about Serre's intersection formula and its relation to derived algebraic geometry.



First, we should be a little more cautious about attribution. The idea of using "derived rings" to give an intrinsic version of the Serre intersection formula is not recent. The idea goes back at least to thoughts of Deligne, Kontsevich, Drinfeld, and Beilinson in the 1980s (and possibly earlier). These ideas have been made precise in a number of ways, in particular in work of Kapranov & Ciocan-Fontaine, and Toën & Vezzosi. EDIT: As Ben-Zvi reminded me below, one should also mention Behrend and Behrend-Fantechi on DG schemes and virtual fundamental classes. Of course Lurie's work has been the most comprehensive and powerful in its treatment of the foundations of DAG, but it's important to understand that his work arose in the context of these fascinating ideas.



Now, just to provide a little context, let me try to recall how Serre's formula arises from DAG considerations. Let's start by using the notation above, but let's assume for simplicity that $X$, $Y$, and $Z$ are all local schemes. (Some of the technicalities of DAG arise in making sheaf theory work with some sort of "derived rings," so our discussion will be easier if we ignore that for now.) So we write $X=\mathrm{Spec}(A)$, $Y=\mathrm{Spec}(B)$, and $Z=\mathrm{Spec}(C)$ for local rings $A$, $B$, and $C$.



Now if our aim is to intersect $Y$ and $Z$ in $X$, we know how to do that algebro-geometrically. We form the fiber product $Y\times_X Z=\mathrm{Spec}(B\otimes_A C)$. The tensor product that appears here is really the thing we're going to alter. To do that, we're going to regard $B$ and $C$ as (discrete) simplicial (commutative) $A$-algebras, and we're going to form the derived tensor product. This produces a new simplicial commutative ring $B\otimes^{\mathbf{L}}_A C$ whose homotopy groups are exactly the groups $\mathrm{Tor}^A_i(B,C)$. The intersection multiplicity is simply the length of $B\otimes^{\mathbf{L}}_A C$ as a simplicial $A$-module.
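Unwinding the homotopy groups, this length is exactly the Euler characteristic in Serre's original intersection formula:

$$\chi^A(B,C) \;=\; \sum_{i \geq 0} (-1)^i\, \mathrm{length}_A\, \mathrm{Tor}_i^A(B,C),$$

where the $i=0$ term is the naive length of $B\otimes_A C$ and the higher $\mathrm{Tor}$ terms are the correction that makes the multiplicity behave well.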



As Ben Webster says, the real joy of DAG is in thinking of the geometry of our new derived ring $Botimes^{mathbf{L}}_AC$ as a single unit instead of thinking only of its disembodied homotopy groups. The question you're asking seems to be: does thinking geometrically about this gadget help us to prove Serre's multiplicity conjectures in a more conceptual manner?



The short answer is: I don't know. I do not think a new proof of any of these has been announced using DAG (and it's definitely not in any of Lurie's papers), and in any case I do not think DAG has the potential to make the conjectures "easy." But let me see if I can make a case for the following idea: revisiting Serre's original method of reduction to the diagonal in the context of DAG.



Recall that, if $k$ is a field, if $A$ is a $k$-algebra, and if $M$ and $N$ are $A$-modules, then $$M\otimes_A N=A\otimes_{A\otimes_k A}(M\otimes_k N).$$
Hence to understand $\mathrm{Tor}^A_{\ast}(M,N)$, it suffices to understand $\mathrm{Tor}^{A\otimes_k A}_{\ast}(A,-)$. This allowed Serre to reduce to the case of the diagonal in $\mathrm{Spec}(A\otimes_k A)$. The key point here is that everything is flat over $k$, so Serre could only use this to prove the multiplicity conjectures for $A$ essentially of finite type over a field. Observe that the same equality holds if we work in the derived setting: if $M$ and $N$ are simplicial $A$-modules, and $A$ is an $R$-algebra, then the derived tensor product of $M$ and $N$ over $A$ can be computed as
$$A\otimes^{\mathbf{L}}_{A\otimes^{\mathbf{L}}_R A}(M\otimes^{\mathbf{L}}_R N).$$
The gadget on the right (or, strictly speaking, its homotopy) has a name familiar to topologists; it's the Hochschild homology $\mathrm{HH}^R(A,M\otimes^{\mathbf{L}}_R N)$.



The hope is that we've chosen $R$ cleverly enough that $B\otimes^{\mathbf{L}}_R C$ is "less complicated" than $B\otimes^{\mathbf{L}}_A C$. (More precisely, we want the $\mathrm{Tor}$-amplitude of $M$ and $N$ to decrease when we think of them as $R$-modules. There's a particular way of building $R$, but let me skip over this point.)



Has our situation improved? Perhaps only a little: we've turned our problem of looking at the derived intersection $Y\times^h_X Z$ into the study of the derived intersection of the diagonal inside $X\times^h_R X$ with some simpler derived subscheme $Y\times^h_R Z$ thereof. But now we can try to iterate this, working inductively.



I don't know whether this can be made to work, of course.

human biology - Could inhibition of progerin formation slow the rate at which a body ages?

Progeria (and related) syndromes are essentially a collection of 'accelerated aging' phenotypes caused by single mutations; Progerin is a shortened version of the protein Lamin A, and is therefore not found in individuals without a loss-of-function mutation in the LMNA gene (the wiki page you reference). As far as we are aware these genes do not 'cause' aging in individuals without the mutations.



LMNA is a normal component of the nuclear lamina (a structure inherent to the nucleus). This review discusses the various diseases associated with mutations in this gene, some of which present 'accelerated aging' phenotypes. However, as far as I know, there is limited evidence to suggest that LMNA, or indeed any lamina-associated protein, is involved in 'normal aging'. A recent GWAS meta-analysis found a variant in LMNA that is associated with longevity in humans, but the association is relatively weak (OR = 1.18, P = 7×10⁻⁴), so even if this is a true association, it seems that (as usual in aging research) there are many other factors to consider, and it is not a single gene that is doing the aging.



So to stress the point: progerin has no function in 'normal' human aging - it is a defective protein caused by a germline (or novel) mutation in the LMNA gene. Accelerated aging is the symptom of this genetic disorder, and is not completely analogous to normal aging (as you point out, there is no cognitive decline that is associated with normal human aging).

Tuesday, 19 June 2007

circadian rhythms - How is human biological clock modelled in modern science?

The situation is quite complex, and there are certain things that we do not fully understand, but I will try to give you an explanation.



First of all, at the cellular level, you have the genetic components of the circadian clock, the clock genes and their protein products.



You can divide clock proteins into positive and negative regulators: the positive regulators modulate transcription of clock-controlled genes and are inhibited by the negative regulators in a loop that lasts approximately 24 hours (in Latin: circa = around and dies = day).



For instance, the Clock and Bmal1 nuclear proteins, when present in sufficiently large amounts, will dimerize and act as transcription factors on a series of genes that contain a region in their promoter called an E-box. Amongst these are the Period (Per) genes.
Per is then synthesised and exported to the cytoplasm. Here, if present in sufficiently large amounts, it can heterodimerize with the Cryptochrome (Cry) proteins: the dimer will then enter the nucleus and inhibit the transcriptional activity of the Clock/Bmal1 pair. This will, in turn, block the transcription of Per, which will therefore no longer be able to dimerize with Cry and sustain the inhibition of Clock/Bmal1, and so on. All of this takes approximately 24 hours.
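The core of this transcription-translation loop (activation, accumulation of a repressor, delayed shut-off) can be caricatured with a toy negative-feedback ODE system. The sketch below is a Goodwin-style three-variable model; all variable names and rate constants are my own illustrative assumptions, not measured values:

```python
# Toy Goodwin-style negative-feedback loop, loosely mimicking:
# activator -> Per mRNA -> PER/CRY complex -| activator.
# All names and parameters are illustrative assumptions, not measured values.

def simulate(steps=20000, dt=0.01, n=10):
    m, p, r = 0.1, 0.1, 0.1       # mRNA, cytoplasmic protein, nuclear repressor
    mrna_levels = []
    for _ in range(steps):
        dm = 1.0 / (1.0 + r**n) - 0.2 * m   # transcription, repressed by r
        dp = 0.5 * m - 0.2 * p              # translation and degradation
        dr = 0.5 * p - 0.2 * r              # nuclear entry and degradation
        m, p, r = m + dt * dm, p + dt * dp, r + dt * dr
        mrna_levels.append(m)
    return mrna_levels

levels = simulate()
```

With these arbitrary parameters the mRNA level overshoots and then relaxes; steeper repression and explicit delays are what produce self-sustained ~24-hour oscillations in more serious models.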



This basic loop (and some more) is nicely schematised in this review by Reppert and Weaver



Coordination of circadian timing in mammals. - Reppert and Weaver, Nature. 2002 Aug 29;418(6901):935-41.



Schematics of circadian clock - from Reppert and Weaver, Nature 2002



I report here the legend of the figure:




The clock mechanism comprises interactive positive (green) and negative (red) feedback loops. CLOCK (C, oval) and BMAL1 (B, oval) form heterodimers and activate transcription of the Per, Cry and Rev-Erbalpha genes through E-box enhancers. As the levels of PER proteins increase (P, blue circle), they complex with CRY proteins (C, diamond) and CKIε/CKIδ (ε/δ, circle), and are phosphorylated (p). In the nucleus, the CRY–PER–CKIε/CKIδ complexes associate with CLOCK–BMAL1 heterodimers to shut down transcription while the heterodimer remains bound to DNA, forming the negative feedback loop. For the positive feedback loop, increasing REV-ERBα levels (R, circle) act through Rev-Erb/ROR response elements in the Bmal1 promoter to repress (-) Bmal1 transcription. CRY-mediated inhibition of CLOCK–BMAL1-mediated transcription de-represses (activates) Bmal1 transcription, because REV-ERBα-mediated repression is inhibited. An activator (A, circle) may positively regulate Bmal1 transcription (?) alone or by interacting with mPER2. There are probably kinases (?) other than CKIε and CKIδ that participate in phosphorylation of clock proteins.




The clock machinery, however, is even more complex than this and different tissues need to integrate different signals (e.g. light, hormonal concentrations and so on).



However, as nice as this may be, our bodies are not made of one single cell... if we "zoom out" a little bit and go to the tissue or whole-body level, we see that circadian rhythms are present in all sorts of tissues and involve all sorts of different physiological processes. A very interesting question is then: how do cells from different organs coordinate with each other so that the interrelation of different circadian processes is always the same? For instance, the release of cortisol peaks in the morning, our body temperature has a peak in the late afternoon and a trough in the early hours of the morning, and our heart rate is higher during the daytime.



In 1972, an article by Moore and Eichler was published, showing that lesioning the suprachiasmatic nucleus (SCN) of the hypothalamus (a region at the base of the brain) resulted in the "breakage" of the circadian rhythm of corticosterone (the stress hormone).



Loss of a circadian adrenal corticosterone rhythm following suprachiasmatic lesions in the rat. - Moore and Eichler, Brain Res. 1972 Jul 13;42(1):201-6



The fact that lesioning a bit of the brain resulted in loss of circadian rhythm in the adrenal gland was obviously a very big deal: the central circadian clock of the organism had been discovered.



I will not go too much into that but, since 1972, many advances have been made and nowadays we know that, although the role of the SCN is very important, it is not the only clock in the body. Several peripheral clocks exist, in organs such as the pituitary gland, the liver or the pancreas. How all these clocks communicate with each other and what their contribution is to the generation of non-circadian rhythms (ultradian and infradian) are questions that remain to be solved.



A few interesting papers on the matter:



A riot of rhythms: neuronal and glial circadian oscillators in the mediobasal hypothalamus. - Guilding et al., Mol Brain. 2009 Aug 27;2:28.



A clockwork web: circadian timing in brain and periphery, in health and disease. - Hastings et al., Nat Rev Neurosci. 2003 Aug;4(8):649-61.



Peripheral circadian oscillators in mammals: time and food. - Schibler et al., J Biol Rhythms. 2003 Jun;18(3):250-60.

nt.number theory - Modular congruences related to sums of Catalan numbers

I am curious if somebody can be helpful concerning the following
experimental observation:



There exist two rational sequences $\alpha_0,\alpha_1,\dots$ and
$\beta_0,\beta_1,\dots$, both with values in $\mathbb Z[1/3]$,
such that
$$\sum_{k=0}^{p-1}\binom{2k}{k}\frac{k^j}{k+1}\equiv
\alpha_j+p\beta_j\pmod{p^2}$$
for every prime number $p\equiv 1\pmod 6$ and
$$\sum_{k=0}^{p-1}\binom{2k}{k}\frac{k^j}{k+1}\equiv
-(-1)^j-\alpha_j+p\beta_j\pmod{p^2}$$
for every prime number $p\equiv 5\pmod 6$.



(More precisely, the sequences $3^n\alpha_n$ and $3^n\beta_n$ are seemingly integral.)



The sequence $\alpha_0,\alpha_1,\dots$ starts as
$$1, 0, -2/3, 4/3, -22/9, 140/27, -14, 1316/27, -17078/81, 87860/81, -1562042/243, 31323292/729, \dots$$
and the first terms $\beta_0,\beta_1,\dots$ are
$$0,0,2/3,-2,14/3,-34/3,98/3,-350/3,1526/3,-2622,46634/3,-311734/3,2316158/3,
-18920018/3,\dots$$
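For $j=0$ these values give $\alpha_0=1$, $\beta_0=0$, so the congruences specialize to $\sum_{k=0}^{p-1}C_k\equiv 1\pmod{p^2}$ for $p\equiv 1\pmod 6$ and $\equiv -2\pmod{p^2}$ for $p\equiv 5\pmod 6$, where $C_k$ is the $k$-th Catalan number. A quick numerical sanity check (a sketch, not part of the original question):

```python
from math import comb

def catalan_sum_mod(p, j=0):
    """sum_{k=0}^{p-1} C_k * k^j modulo p^2, with C_k the k-th Catalan number."""
    total = 0
    for k in range(p):
        c_k = comb(2 * k, k) // (k + 1)  # Catalan numbers are integers
        total += c_k * k**j
    return total % (p * p)

# j = 0: expect 1 mod p^2 for p = 7, 13 (p ≡ 1 mod 6)
#        and -2 mod p^2 for p = 5, 11 (p ≡ 5 mod 6).
checks = {p: catalan_sum_mod(p) for p in (5, 7, 11, 13)}
```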



Let me end by remarking that one has as a special case a similar result when replacing Catalan numbers by central binomial coefficients.



Update: The existence of the sequence $\alpha_n$ is explained by the paper of Zhi-Wei Sun;
see the answer by dke below.



Experimentally, the quotient sequence $\frac{\beta_n}{\alpha_n}$ (defined for $n\geq 2$)
seems to converge very quickly towards $-\frac{4\sqrt{3}\pi}{9}=-2.4183991523\dots$ (the error is
smaller than $10^{-78}$ for $n=120$).



The sequence $\frac{\alpha_{n+1}}{\alpha_n}-\frac{\alpha_n}{\alpha_{n-1}}$
converges perhaps (fairly slowly) towards something like $-.72\dots$.

Monday, 18 June 2007

ho.history overview - What was Gödel's real achievement?

If I remember correctly, the notion of a model is already present in Hilbert-Bernays, where finite structures are used for proving absolute consistency of some theories, and is probably older. Again, if I remember correctly, Frege, Russell, and Hilbert did have formal systems and the notion of a formal proof. Skolem's construction of a term model (now famous through Skolem's Paradox, because the set of real numbers of the model turns out to be countable) is in his 1922 paper, whereas Gödel's completeness theorem is from 1929. In other words, it seems that Skolem already had all the tools necessary for proving completeness in 1922. It seems that Hilbert had even stated the question of completeness for first-order theories before this date, and Gödel learned about this problem in Carnap's logic course in 1928.



Hilbert's 10th problem, from his famous list of 23 problems, asks for an algorithm to decide the existence of solutions of Diophantine equations. I think there were attempts after this to understand what an algorithm is. Many definitions which came out before Turing's were equivalent to his definition, although they were not philosophically satisfactory; at least, Gödel did not accept any of them as capturing the intuitive notion of computability before Turing's definition.



Gödel's collected works can shed more light on these issues.



EDIT: Also



Solomon Feferman, "Gödel on finitism, constructivity and Hilbert’s program"
http://math.stanford.edu/~feferman/papers/bernays.pdf




Hilbert and Ackermann posed the fundamental problem of the completeness of the first-order predicate calculus in their logic text of 1928; Gödel settled that question in the affirmative in his dissertation a year later. [page 2]



Hilbert introduced first order logic and raised the question of completeness much earlier, in his lectures of 1917-18. According to Awodey and Carus (2001), Gödel learned of this completeness problem in his logic course with Carnap in 1928 (the one logic course that he ever took!). [page 2, footnote]




Martin Davis, "What did Gödel believe and when did he believe it", BSL, 2005




Gödel has emphasized the important role that his philosophical views had
played in his discoveries. Thus, in a letter to Hao Wang of December 7,
1967, explaining why Skolem and others had not obtained the completeness
theorem for predicate calculus, ... [page 1]


Statements in group theory which imply deep results in number theory

The notion of arithmetically equivalent number fields is a good example of a connection between group theory and number theory, see for example:
http://sbseminar.wordpress.com/2007/08/29/zeta-function-relations-and-linearly-equivalent-group-actions/



a couple of specific applications:



Lemma: Let $G$ be a finite $p$-group. Any two subgroups of index $p$ are quasi-conjugate.



Corollary: Two number fields $K$, $L$ of prime degree $p$ are arithmetically equivalent if and only if $[KL:\mathbb Q] \neq p^2$.
See "A remark about zeta functions of number fields of prime degree" by R. Perlis.



Also, by doing some basic group theory one can prove that any two arithmetically equivalent number fields of degree less than $7$ must be isomorphic. (This is also proven in a paper by Perlis, but I don't remember which one.)



Another result that comes to my mind with this question (totally unrelated to arithmetical equivalence) is that every group of odd order can be realized as a Galois group over $\mathbb Q$ (the odd order theorem plus Shafarevich's theorem).

nt.number theory - Erik Westzynthius's cool upper bound argument: update?

After studying Kanold's 1967 paper on Jacobsthal's function (and being inspired by a preprint, http://arxiv.org/abs/1208.5342, which I discuss below), I found an argument, mostly very simple, which gives some nice results for the effort involved.  While Kanold deserves some of the credit for the argument, I have yet to see a statement by him or by anyone else that gives these results, so I present them here.  (Kanold wrote several articles on Jacobsthal's function, many of which I am tracking down, which might contain this argument.  I am happy to accept help in obtaining electronic copies of them.)  This is the post I promised a few months ago in a supplement to a question of Timothy Foo,
Analogues of Jacobsthal's function.



For maximum ooh-aah effect, I assume $n$ is squarefree and has $k \gt 2$ prime factors, one of which I call Peter, or $p$ for short. Now $1+tn$ is coprime to $n$ for any integer $t$.  So are most integers of the form $1 + tn/p$, the exceptions being those that are multiples of $p$, and those multiples do not occur as consecutive terms.  Thus, every interval of length $2n/p$ $(=g(p)n/p)$ has at least one integer coprime to $n$ of the form $1+tn/p$.



Let's go further with this.  Let $d \gt 1$ be a divisor of $n$, and let $f=n/d$.  (Here I use the fact that $n$ is squarefree to get $f$ coprime to $d$.)  Then numbers of the shape $1+tf$ form an arithmetic progression, are coprime to $f$, and (as can be seen by multiplying by the inverse of $f$ in the ring of integers mod $d$) you can't pick $g(d)$ consecutive members of this progression without hitting something coprime to $d$ as well.  So $g(n) \leq g(d)f = g(d)n/d$.
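This inequality is easy to test numerically for small squarefree $n$. Below is a brute-force sketch (the function name is mine): $g(n)$ is computed as the largest gap between consecutive integers coprime to $n$, scanned over one full period.

```python
from math import gcd

def jacobsthal(n):
    """g(n): the least m such that every m consecutive integers contain
    one coprime to n; equivalently, the maximal gap between consecutive
    integers coprime to n.  Brute force over one period."""
    coprimes = [c for c in range(1, n + 2) if gcd(c, n) == 1]
    return max(b - a for a, b in zip(coprimes, coprimes[1:]))

# Primorial values g(2)=2, g(6)=4, g(30)=6, g(210)=10, and the bound
# g(n) <= g(d) * n/d for a divisor d of squarefree n, e.g. n=30, d=15.
values = [jacobsthal(n) for n in (2, 6, 30, 210)]
```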



While I'm here, let me sharpen the inequality, assuming $f \gt 1$ and $d \gt 1$ are coprime:
there are $\phi(f)$ totatives $c$ of $f$ in the interval $[0,f]$, so I can repeat the argument with $c+tf$ instead of $1+tf$.  In the worst case, using all $\phi(f)$ progressions, I get $g(fd) \leq g(d)f - f + g(f)$, which mildly improves upon Kanold's bound $g(d)f -\phi(f)+1$, and matches it when $f$ is prime.  (Of course, for $n=fd$ I really want $g(n)$ to be near $O(g(d)+g(f))$, but I don't yet know how to show that with grade school arithmetic.)



How to use this inequality? Pick the largest divisor $d$ for which one can comfortably compute (a subquadratic in $k$ upper bound for) $g(d)$; I pick $d$ to contain most of the large prime factors of $n$: find a prime $q$ dividing $n$ so that $\sigma^{-1}(d)=\sum_{p \text{ prime},\, p|n,\, p \geq q} 1/p$ is less than $1 + 1/2q$; a routine argument yields that $g(d)$ is $O(qk)$.  The ugly part is to show that $q \lt k^{0.5}$ (or else $d=n$), that $n/d \lt 2^{3q/2}$, which for large $k$ approaches $2^{3(k^{\epsilon + 1/e})/2}$, and that asymptotically $g(n)$ is $O(e^{k^{1/e}+D\log(k)})$.  This isn't hard after using one of Mertens's theorems and a Chebyshev function; it just isn't pretty.  (Also, for smaller $n$, $\epsilon + 1/e$ can be close to $1/2$, but with patience $\epsilon$ will tend to zero.)



This gives a bound that is asymptotically better than my first efforts at this, improves slightly on Kanold's bound of $2^{\sqrt{k}}$ ($k^{0.5}$ replaced by $Ck^{0.37}+ D\log(k)$) for $k$ not too large, and does not need Kanold's requirement that $k > e^{50}$.  Up until one chooses $d$ and crunches the formulae, it is also a very elementary argument; I suspect even Legendre knew about using the multiplicative inverse to transform a general arithmetic progression to an (effectively) consecutive sequence of integers and still preserve the property of interest here, being a unit in a certain ring (or missing it by that much).



(One of the benefits of letting this sit for a few months before posting is that I can add cool observations like: If I could get the inequality down to $g(n) \leq g(d)g(n/d)$, I could iterate the
above simple estimate to get an explicit bound of $O(k^c)$, where $c$ is a positive number less than 3.  Or like: using more advanced work combined with the above, I can get $g(n)
\leq e^{k^{e^{-a}}}Ck^{a}$ for some integers $a$, which seems better than $Ck^{4\log\log{k}}$ if you don't look too closely.)



Further, one can use a computer to refine the method slightly and get estimates which do quite well for small values of $n$, where small here means $k<100$.  Asymptotically though, Stevens's and my upper bounds eventually outperform this bound.



Also, there has been a nice result out of University College Dublin that I will briefly interpret.  Fintan Costello and Paul Watts find a way of presenting a related function recursively, then numerically compute a lower bound on this function which implies an upper bound on Jacobsthal's function computed on some particular values.  I thank them for reminding me about using a multiplicative inverse mod $d$ for $f$, so they deserve a "piece of the action".



These authors work in (and sometimes away from) the integer interval $BM = [b+1,b+2,\ldots,b+m]$.  Given squarefree $n$ and its distinct prime factors, listed in some order as $q_1$ to $q_k$, define $Q_i$ as  $\prod_{0 \lt j \leq i} q_j$. One approach to computing the size $\pi(b,m,n)$ of the set $CP(b,m,n)$ which has those integers in $BM$ coprime to $n$ is to do the standard inclusion-exclusion argument: if we represent by $F(b,m,d)$ the multiples of $d$ in $BM$, and say there are $f(b,m,d)$ many such multiples, and abuse some notation, I then write
$CP(b,m,n) = \sum_{d | n} \mathrm{sgn}(d,F(b,m,d))$ .  Here $\mathrm{sgn}$ is to suggest adding elements of the set $F(b,m,d)$ if $d$ has an even number of prime factors, and subtracting them instead when $d$ has an odd number of prime factors.
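Concretely, for squarefree $n$ the signed sum over divisors is an inclusion-exclusion over subsets of the prime factors. A small sketch comparing it against a direct count (function names are mine, not Costello and Watts's):

```python
from math import gcd
from itertools import combinations

def pi_direct(b, m, n):
    """Number of integers in BM = [b+1, ..., b+m] coprime to n."""
    return sum(1 for x in range(b + 1, b + m + 1) if gcd(x, n) == 1)

def pi_incl_excl(b, m, primes):
    """Same count by inclusion-exclusion over the distinct prime factors
    of squarefree n; f(d) = number of multiples of d in BM."""
    def f(d):
        return (b + m) // d - b // d
    total = 0
    for r in range(len(primes) + 1):
        for subset in combinations(primes, r):
            d = 1
            for p in subset:
                d *= p
            total += (-1) ** r * f(d)   # sign by parity of #prime factors
    return total

counts = (pi_direct(10, 20, 30), pi_incl_excl(10, 20, (2, 3, 5)))
```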



To set up for the recurrent expression, Costello and Watts use just some of the terms on the right hand side of the abused equation, and reorganize the rest of the terms.  In my interpretation of their work, they start with the multiset identity



$$CP(b,m,n) \cup \biguplus_{0 \lt i \leq k} F(b,m,q_i) =
BM \uplus  \biguplus_{0 \lt i \lt j \leq k} RCP(i,j)$$



where $RCP(i,j)$ is $F(b,m,q_iq_j) \cap CP(b,m,Q_{i-1})$, or the subset of $BM$ which has those multiples of $q_iq_j$ whose soonest prime factor in common with $n$ is $q_i$.



One sees this identity holds by considering a member of $BM$ which has exactly $t$ distinct prime factors in common with $n$.
If $t$ is $0$, then the member occurs only once in $CP(b,m,n)$ and similarly only once in $BM$.  Otherwise, it occurs exactly $t$ times in the left hand side in $t$ distinct terms $F(b,m,q_i)$, and if $l$ is soonest such that $q_l$ is a prime factor of the member, the member occurs only once in each of $t-1$ sets
$RCP(l,j)$ (remember $l$ comes sooner than $j$) and only once in $BM$.



Now the term $RCP(i,j)$ is a subset of an arithmetic progression $A$ with common difference $q_iq_j$. By using the technique above of multiplying by a suitable inverse of $q_iq_j$ in the ring of integers mod $Q_{i-1}$, $A$ corresponds to an integer interval starting near some integer $c_{ijbm}$, of length $f(b,m,q_iq_j)$, which preserves the coprimality status with respect to $Q_{i-1}$: to wit, the size of $RCP(i,j)$ is $\pi(c_{ijbm},f(b,m,q_iq_j),Q_{i-1})$.  Using the $\pi$ term for the size of $CP$ and translating the other sets to numbers gives the numerical recurrent formula of Costello and Watts:
$$\pi(b,m,n) = m - \sum_{0 \lt i \leq k} f(b,m,q_i)
+ \sum_{0 \lt i \lt j \leq k} \pi(c_{ijbm},f(b,m,q_iq_j),Q_{i-1}).$$



Following work of Hagedorn, who computed $h(k)=g(P_k)$ for $k$ less than 50, where $P_k$ is the $k$th primorial, Costello and Watts use their formula and some analysis of coincidence of prime residues to compute an inequality for $\pi_{min}(m,n)$, which is the minimum over all integers $b$ of $\pi(b,m,n)$.  They underestimate $f(b,m,q_iq_j)$ by $\lfloor m/q_iq_j \rfloor$, ignore the $c$'s by using $\pi_{min}$, pull out the $i=1$ terms from the double sum and rewrite that portion to include a term $E$, depending only on $m$ and the $p_i$, which arises from looking at when estimates for the sizes of the $F(b,m,p_i)$ and $F(b,m,2p_i)$ sets can be improved, and come up with (a refined version, using $p$'s for $q$'s, of) the inequality
$$m - \sum_{0 \lt i \leq k}  \lceil \frac{m}{p_i} \rceil + \sum_{1 \lt i \leq k} \lfloor \frac{m}{2p_i} \rfloor + E + \sum_{1 \lt i \lt j \leq k} \pi_{min}(\lfloor \frac{m}{p_ip_j} \rfloor,P_{i-1}) \leq \pi_{min}(m,P_k).$$



With this inequality, Costello and Watts compute $\pi_{low}$, a lower bound approximation to $\pi_{min}$.  Since $h(k) \leq m$ iff $\pi_{min}(m,P_k) \gt 0$, computing $\pi_{low}(m,P_k)$ for various $m$ will give an upper bound on $h(k)$.  They say their computations for $k \leq 10000$ suggest $h(k) \leq Ck^2 \log k$, where $C$ is a constant less than $0.3$.  Although this is achieved using data from Hagedorn's work, even without that their algorithm yields values which are a vast improvement on known and easily computable bounds, even the ones listed above.



One item to explore is how an algorithm based on this approximation will perform given different orderings of the prime factors.  I suspect that letting the larger primes come first will give tighter results.  Another item to explore is to see if there is a better term $F$ that will supplant $E$ and some of the recurrent terms in the double sum.  The idea of rewriting the $pi$ function recursively, while not new, is given new life in this double sum form, and suggests revisiting some old approaches with an eye toward computability.



Gerhard "Ask Me About Coprime Integers" Paseman, 2013.02.05

co.combinatorics - Dyck paths on rectangles

Is a sum OK?



I am used to a different rotation of the paths. I think the paths you are looking for can also be described as all paths above the x-axis, with steps (1,1) and (1,-1), that start at (0,0) and end on the line x=y+n at some point (x,y) between (n,0) and (n+m,m).



(If instead they end at the line x=n, we get the Ballot paths.)



Let B(n,k) be the Ballot numbers, B(n,k) = # paths from (0,0) to (n,k). Now, all paths must cross the line x=n. From there on it is just a binomial path, so the number of paths is
sum_{k=0,2,4,...,n} B(n,k) * ((n-m-k)/2 choose k/2)



(n choose k) = the binomial coefficient n!/(k!(n-k)!)
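The Ballot numbers above can be sanity-checked with a small dynamic program: count the (1,1)/(1,-1) paths from (0,0) that never go below the x-axis, and compare with the standard reflection-principle closed form B(n,k) = (n choose (n+k)/2) - (n choose (n+k)/2 + 1). (This checks only the path counts B(n,k), not the final sum.)

```python
from math import comb

def ballot_dp(n, k):
    """Paths from (0,0) with steps (1,1), (1,-1), staying at height >= 0,
    ending at (n, k), counted by dynamic programming."""
    heights = {0: 1}                      # height -> number of paths so far
    for _ in range(n):
        nxt = {}
        for h, ways in heights.items():
            for h2 in (h + 1, h - 1):
                if h2 >= 0:
                    nxt[h2] = nxt.get(h2, 0) + ways
        heights = nxt
    return heights.get(k, 0)

def ballot_formula(n, k):
    """Reflection-principle closed form for the same count."""
    if k < 0 or k > n or (n + k) % 2:
        return 0
    j = (n + k) // 2
    return comb(n, j) - comb(n, j + 1)    # comb returns 0 when j + 1 > n
```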

Sunday, 17 June 2007

gn.general topology - When does local invertibility imply invertibility?

Generally, local invertibility does not imply invertibility. However, for differentiable functions from $\mathbb{R}$ to $\mathbb{R}$, surjectivity and local invertibility do imply invertibility.



As well as being the most obvious, it's also the only (non-contrived) case that I can think of. Are there any more?



Specifically, I'm looking for examples of spaces $X$ which are at least topological spaces (but may be more structured) and subsets of endomorphisms on $X$ for which local invertibility implies invertibility.

Saturday, 16 June 2007

co.combinatorics - 1 rectangle

The upper bound is <3.95.



I hope the code below will show correctly...



It proves that assuming a sum >=3.95 in the central AxB rectangle of the grid
({-B,-B+A,-2A,-A,0,A,2A,B-A,B}+{0,A}) x ({-2B,-B-A,-B,-B+A,-2A,-A,0,A,2A,B-A,B,B+A,2B}+{0,B})
leads to a contradiction in a finite number of steps. 3.95 is NOT best possible for this grid, but 3.94 does not lead to a contradiction. It would be easy to refine the number, but it is probably
more worthwhile to search a larger grid (which starts to get slow in awk).



awk 'BEGIN {

A=1;
# pick B large enough to ensure that there
# are no accidental squares in the grid below
B=1000;

# setting up the grid
x[0]=-B; x[1]=-B+A;
x[1]=-B+A; x[2]=-B+2*A;
x[3]=-2*A; x[4]=-A;
x[4]=-A; x[5]=0;
x[5]=0; x[6]=A;
x[6]=A; x[7]=2*A;
x[7]=2*A; x[8]=3*A;
x[9]=B-A; x[10]=B;
x[10]=B; x[11]=B+A;
M=11;

y[0]=-2*B; y[2]=-B;
y[1]=-B-A; y[5]=-A;
y[2]=-B; y[6]=0;
y[3]=-B+A; y[7]=A;
y[4]=-2*A; y[9]=B-2*A;
y[5]=-A; y[10]=B-A;
y[6]=0; y[11]=B;
y[7]=A; y[12]=B+A;
y[8]=2*A; y[13]=B+2*A;
y[10]=B-A; y[14]=B+B-A;
y[11]=B; y[15]=B+B;
y[12]=B+A; y[16]=B+B+A;
y[15]=2*B; y[17]=3*B;
N=17;

for(i=0; i<=M; i++)
for(j=i; j<=M; j++)
for(k=0; k<=N; k++)
for(l=k; l<=N; l++)
# 0 sum for degenerate rectangles
if(i==j || k==l) {
lo[i,j,k,l]=0;
hi[i,j,k,l]=0;
}
# squares
else if(x[j]-x[i]==y[l]-y[k]) {
lo[i,j,k,l]=-1;
hi[i,j,k,l]=1;
}
# other rectangles
else {
lo[i,j,k,l]=-4;
hi[i,j,k,l]=4;
}

# central rectangle: assume its sum is >=3.95
lo[5,6,6,11]=3.95;

iter=10000;
active=1;
while(iter-- && active) {
active=0;

# traverse all possible combinations of 1 rectangle split into 4
for(i=0; i<M; i++)
for(j=i+1; j<=M; j++)
for(k=0; k<N; k++)
for(l=k+1; l<=N; l++)
for(m=i; m<j; m++)
for(n=k; n<l; n++) {
lo0=lo[i,j,k,l];
lo1=lo[i,m,k,n];
lo2=lo[m,j,k,n];
lo3=lo[i,m,n,l];
lo4=lo[m,j,n,l];
hi0=hi[i,j,k,l];
hi1=hi[i,m,k,n];
hi2=hi[m,j,k,n];
hi3=hi[i,m,n,l];
hi4=hi[m,j,n,l];

# 3rd argument in max() and min() functions
# is for printing purposes only...
lo0=max(lo0, lo1+lo2+lo3+lo4, 0);
hi0=min(hi0, hi1+hi2+hi3+hi4, 0);
lo1=max(lo1, lo0-hi2-hi3-hi4, 1);
lo2=max(lo2, lo0-hi1-hi3-hi4, 2);
lo3=max(lo3, lo0-hi1-hi2-hi4, 3);
lo4=max(lo4, lo0-hi1-hi2-hi3, 4);
hi1=min(hi1, hi0-lo2-lo3-lo4, 1);
hi2=min(hi2, hi0-lo1-lo3-lo4, 2);
hi3=min(hi3, hi0-lo1-lo2-lo4, 3);
hi4=min(hi4, hi0-lo1-lo2-lo3, 4);

if(lo0>hi0 || lo1>hi1 || lo2>hi2 || lo3>hi3 || lo4>hi4) {
print "CONTRADICTION AT", i,m,j,k,n,l;
exit;
}

lo[i,j,k,l]=lo0;
lo[i,m,k,n]=lo1;
lo[m,j,k,n]=lo2;
lo[i,m,n,l]=lo3;
lo[m,j,n,l]=lo4;
hi[i,j,k,l]=hi0;
hi[i,m,k,n]=hi1;
hi[m,j,k,n]=hi2;
hi[i,m,n,l]=hi3;
hi[m,j,n,l]=hi4;
}
}
print "FINISHED OK";
}

function max(s,t, where) {

if(s<t) {
print "lo=" t, "for", i,m,j,k,n,l, "(" where ")";
active=1;
s=t;
}
return(s);
}

function min(s,t, where) {

if(s>t) {
print "hi=" t, "for", i,m,j,k,n,l, "(" where ")";
active=1;
s=t;
}
return(s);
}
'

Friday, 15 June 2007

nt.number theory - A "round" lattice with low kissing number?

If the question is about the asymptotic behavior in high dimensions, then I do not think it is known, or even conjectured, that the kissing number for the densest packing is not polynomial in $n$. To the best of my memory (but check the book by Conway and Sloane), there are no known examples of lattices where the kissing number is exponential, and the best known bounds are only quasi-polynomial ($n^{\log n}$).



So "dense lattice packing" and "huge kissing number" are not the same in very high dimensions. (They are the same in dimension 24!) If you are interested in interesting notions of low density packings, you can think about packings which are locally densest but globally very sparse. It does make sense to consider, even in low dimensions, what is the smallest "local maximum" of density for lattices.



Update: Let me add some more details on the connection between the kissing number, and more generally "the low end of the distance distribution" of a packing, and the density. It is convenient to talk, more generally, about spherical codes, and also to draw the analogy with binary codes. The densest examples of spherical codes and binary error-correcting codes are obtained by randomized constructions. For those, the kissing number is expected to be subexponential. It is known that if these randomized bounds can be improved to get higher density, then some relaxed notion of a kissing number will be exponential. (This is a result of Nati Linial and me for the binary case, and I think that it was extended to larger alphabets and spherical codes.) As I mentioned, for sphere packing (and I think also for general spherical codes) no example is known where the kissing number is exponential. For error-correcting codes over a large alphabet, the algebraic-geometric codes (which are denser than the randomized construction) give such an example. This example can be transformed to the binary case, but it does not give a higher density.



I think that the notion of minimum density among local maximum density packings was studied and also the corresponding question for covering. At least in low dimensions. This looks like the "correct" understanding of the question. But I do not recall the details.



More update: Frank Vallentin gave me some further relevant information. Indeed the relevant notion in discussing the problem is that of a "perfect lattice". In a perfect lattice, the following equality is true:



$\dim \mathrm{span}\{v v^t : v \text{ a shortest nonzero lattice vector}\} = n(n+1)/2$.



(In general one only has the inequality "$\le$".) A lattice which is a local maximum of the packing density function is always perfect.



Trying to find perfect lattices with low density (which seems to be one purpose of the question, if we put aside the kissing number issue) is interesting. Coxeter conjectured (Extreme forms, 1951) that the lattice $A_n$ (which is perfect) gives the lowest density among all perfect lattices. This is known to be true up to dimension 8.



There is another conjecture by Conway and Sloane (Low dimensional lattices III, Perfect forms, 1988) saying that Coxeter's conjecture is wrong if the dimension is large enough...

dg.differential geometry - Cone angles for Riemannian metrics in polar coordinates

This is the simplest case of a question that's been bugging me for a while: say we have a Riemannian metric in polar coordinates on a (2-d) surface:
$g=dr^2+f^2(r, \theta)\,d\theta^2$, such that the parameter $\theta$ runs from $0$ to $2\pi$. Assume that $f$ is a smooth function on $(0,\infty)\times S^1$ such that $f(0, \theta)=0$.



Define the cone angle at the pole to be
$C=\lim_{r\rightarrow 0^{+}} \frac{L(\partial B(r))}{r}$, where $B(r)$ is the geodesic disc of radius $r$ centered at the origin. Then it's fairly easy to see (by switching into Cartesian coordinates) that a necessary condition for the metric to be smooth is that $C=2\pi$. If $C<2\pi$, there is a cone point at the origin. One can write out a cone metric and show that the triangle inequality holds, so the metric is singular, but it still induces a metric space structure.



Now, if $C>2\pi$, it seems pretty clear that we'll end up with a space which violates the triangle inequality; it will be shorter to take a broken segment through the origin than to follow the shortest geodesic (in the sense of a curve $\gamma(t)$ such that $D_{\gamma'}\gamma'=0$). One can show this directly for some simple cases, e.g. a flat metric with a cone angle greater than $2\pi$.
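For the flat case this can be made completely explicit. A hedged sketch of the computation (using the standard unrolling description of flat cones):

```latex
\[ g = dr^2 + c^2 r^2\, d\theta^2, \qquad \theta \in [0,2\pi), \qquad C = 2\pi c. \]
Cutting along a ray through the apex and unrolling gives a planar sector of
angle $2\pi c$, with $(r,\theta)$ sent to polar angle $c\theta$. Two points
$p=(r,0)$ and $q=(r,\theta_0)$ are separated, the two ways around the apex,
by unrolled angles
\[ \varphi_1 = c\,\theta_0, \qquad \varphi_2 = c\,(2\pi-\theta_0). \]
If $\min(\varphi_1,\varphi_2) < \pi$, the minimizing path unrolls to a straight
chord of length $2r\sin(\varphi/2) < 2r$. But if $c>1$ and $\theta_0=\pi$, then
$\varphi_1=\varphi_2=c\pi>\pi$: any path avoiding the apex must sweep an
unrolled angle greater than $\pi$, which forces its length to be at least $2r$,
and the infimum $2r$ is realized only by the broken segment through the apex.
So no smooth geodesic realizes the distance, matching the claim above.
```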



But there must be an elementary proof of the general case! I can't seem to find one, though, and I spent the afternoon playing around with the Toponogov and Rauch comparison estimates to no avail. The basic problem I'm having is that the cone angle condition is essentially a condition on metric balls, but we expect a violation of the triangle inequality, which is a condition on distances.



This is not really related to anything I'm working on, but it's driving me crazy, so I'd appreciate any insight.

Thursday, 14 June 2007

Models of ZFC Set Theory - Getting Started

This answer is going to be a bit too informal, but I hope it helps.



Imagine we have the collection of all sets. Let us call them the real sets, and their membership relation the real set membership. The empty set is "actually" empty, and the class of all ordinals is "actually" a proper class.



Now that we have the real sets we can use them as the "ontological substratum" upon which everything else will be built from. And this, of course, includes formal theories and their models.



A model of any first-order theory is then only a real set. This applies to your favorite set theory too. So the models of your set theory are only real sets (but the models don't know it, just as they don't know if their empty sets are actually empty or if their set membership is the real one).



This view fits well, for example, with the idea of moving from a transitive model to a generic extension of it or to one with a constructible universe: we are simply moving from a class of models to another one, each one consisting of real sets.



But this view also leaves us with too many entities, and maybe here we have an opportunity to apply Occam's razor. It looks like we have two kinds of theories: one for the real sets, which is made of things that are not sets (we can formalize our informal talk about them, but that makes essentially no difference), and another one for the models of set theory, which is made of sets.



The real sets and the theory of the real sets belong to a world where there are real sets, but there are also pigs and cows, and human languages and many other things. We don't need all that to do mathematics, do we? So why not dive into the world of the real sets and ignore everything else?



If this story sounds too platonistic, I am sure it must have a formalistic counterpart.



With my question:



How to think like a set (or a model) theorist.



I expected to obtain an official view about all this stuff. I somehow succeeded on this, but as you can see, I'm still working on it.



Here is a related answer to a related question which I also find useful:



Is it necessary that model of theory is a set?

Tuesday, 12 June 2007

ct.category theory - In which categories is every coalgebra a sum of its finite-dimensional subcoalgebras?

Let $\mathcal V$ be a reasonably nice category — I'm interested in the case when $\mathcal V$ is $\mathbb K$-linear for some field $\mathbb K$, abelian, and has all products and coproducts (hence all limits and colimits, as it is abelian), but I don't mind if you demand similar or weaker properties — with a reasonably nice monoidal structure $\otimes$. Recall that a (counital coassociative) coalgebra in $\mathcal V$ is an object $A\in\mathcal V$ along with maps $A \to 1$ and $A \to A\otimes A$ that are coassociative and counital in the sense that certain diagrams commute. The notion of homomorphism of coalgebras is obvious.



When $\mathcal V = \text{VECT}$ is the category of vector spaces, coalgebras satisfy a particular fundamental property that makes them essentially easy. Namely, any coalgebra in $\text{VECT}$ is the (vector space) sum of its finite-dimensional subcoalgebras. On the other hand, the corresponding statement in $\text{VECT}^{\rm op}$ fails: it is not true that every algebra in $\text{VECT}$ is a pullback of its finite-dimensional quotient algebras. This is in spite of the fact that for many purposes $\text{VECT}$ and $\text{VECT}^{\rm op}$ are equally nice categories.



For a general sufficiently nice category $\mathcal V$, I should replace the word "sum" by "colimit" and I should replace "finite-dimensional" by "dualizable". All together, my question is:




For which sufficiently nice monoidal categories is it true that every coalgebra object is a colimit of its dualizable subcoalgebra objects?




This is, of course, an open-ended question. The very best would be some necessary and sufficient conditions that are easier to check, but that's probably too hard: natural (and naturally occurring) easily-checked sufficient conditions would suffice.

zoology - What was the reason for some plant and animals to become giant in course of evolution?

Thanks for asking an interesting question which made me think.



The short answer is that something evolves if there is an advantage to the genes involved, and, by 'advantage' I mean it produces more copies of the genes in the next generation so more individuals with that characteristic will be present in the population.



As to what particular advantage increased body mass had to any particular species that would depend on the particular species and its environment at the time. There is almost certainly not a 'one-fits-all' answer.



Most very big land mammals are herbivores, and there is an advantage in having a large digestive tract when you eat a lot of plant material with a fairly low nutrient level. However, when there are also predators around, getting big and slow might be more of a disadvantage than staying small and fast. With evolution there is often a balance of competing forces which are traded off against one another.



Where large mammals have become isolated on islands with no predators, such as a now-extinct species of elephant on Flores, they tend to become smaller. This suggests that avoiding predators by simply being too big for them to kill might be an important factor.



And of course with many species we have the effect of sex-selection. If females select for the biggest males, there will be a selection pressure towards greater size.



With plants it could be something else entirely. Trees, for example, have long trunks basically for one of two reasons.



Firstly to put their leaves and seeds out of the reach of herbivores, so we have a kind of arms race like that between giraffes and acacia trees in Africa.



Secondly, when they grow in close clumps like woods and forests, there will be competition for sunlight and soil resources like water. A species which can reach higher and put its leaves above those of its neighbours will win the competition, and there will also then be pressure on its neighbours to evolve longer trunks. Long trunks mean that water must be pumped a long way to the leaves, so the tree with the longest trunk will also need a bigger root system. Again there is a trade-off between making a long trunk and dominating the water supply underground.



Many trees are also believed to have formed symbiotic associations with fungi in their root systems which help them take in the nutrients and water required to 'service' a long trunk and high-up leaf crown so they may be the only types of plants capable of doing this anyway.



One thing you might like to think about is why some whales are very big and some are relatively small (dolphins, porpoises, killer whales). What is the difference in their lifestyle, food, habitat, and so on, that could drive these evolutionary differences?

nt.number theory - Composite pairs of the form n!-1 and n!+1

As far as nonstandard models go: we can indeed get $\mathbb{Z}$-like intervals $I$ such that each $x\in I$ has a standard factor. The proof is via Compactness, and the Chinese Remainder Theorem:



First, adjoin a constant symbol $c$ to our language. Let $p_i$ be the $i^{\mathrm{th}}$ prime number, let $q_i=p_{2i}$, and let $r_i=p_{2i+1}$.



Define numbers $a_i$, $b_i$ by recursion as follows:



$a_0=0$, $a_{n+1}=\min\lbrace x: \forall k\in\mathbb{N},\, j\le n\ (x\not=a_j+kq_j)\rbrace$



$b_0=0$, $b_{n+1}=\min\lbrace x: \forall k\in\mathbb{N},\, j\le n\ (x\not=b_j+kr_j)\rbrace$



Now, for each $i\in\mathbb{N}$, let $\sigma_i$ express "$c$ is congruent to $-a_i \pmod{q_i}$", let $\tau_i$ express "$c$ is congruent to $b_i \pmod{r_i}$," and let $\Sigma=\lbrace \sigma_i: i\in\mathbb{N}\rbrace\cup\lbrace \tau_i: i\in\mathbb{N}\rbrace$. By the Chinese Remainder Theorem, every finite subset of $\Sigma$ is consistent with True Arithmetic $TA$, so by Compactness, $\Sigma$ itself is consistent with $TA$. So there is some nonstandard model of $TA$ in which $\Sigma$ holds; clearly, in such a model, every number in the $\mathbb{Z}$-like interval centered on $c$ has a standard factor.
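A finite analogue of this argument can be run concretely: pick a distinct prime for each offset in a window and solve the resulting congruences with the Chinese Remainder Theorem; every number in the window around the solution then has a small prime factor. A minimal Python sketch (the `crt` helper and the choice of window and primes are my own illustration, not taken from the proof above):

```python
# Finite analogue of the covering-congruences idea: choose one prime q
# per offset i in a window and force c + i ≡ 0 (mod q) via CRT.

def crt(pairs):
    """Solve x ≡ r (mod m) for all (r, m), with pairwise coprime moduli."""
    r, m = 0, 1
    for ri, mi in pairs:
        # lift the current solution r (mod m) to one valid mod m * mi
        t = ((ri - r) * pow(m, -1, mi)) % mi
        r, m = r + m * t, m * mi
    return r % m, m

offsets = range(-3, 4)             # the interval around c
primes = [2, 3, 5, 7, 11, 13, 17]  # one prime per offset
c, M = crt([(-i % q, q) for i, q in zip(offsets, primes)])

# every number in the window c-3 .. c+3 has a small ("standard") factor
for i, q in zip(offsets, primes):
    assert (c + i) % q == 0
```

Of course, a genuinely infinite interval of this kind only exists in a nonstandard model, which is where Compactness comes in.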



I have no idea whether every nonstandard model has such an interval, however.

Monday, 11 June 2007

algebraic number theory - When is the composition of two totally ramified extension totally ramified?

Let me give an elementary answer in the case of abelian exponent-$p$ extensions of $K$, where $K$ is a finite extension of $\mathbb{Q}_p$ containing a primitive $p$-th root $\zeta$ of $1$. This is the basic case, and Kummer theory suffices.



Such extensions correspond to sub-$\mathbb{F}_p$-spaces in $\overline{K^\times} = K^\times/K^{\times p}$ (thought of as a vector space over $\mathbb{F}_p$; not to be confused with the multiplicative group of an algebraic closure of $K$).



It can be shown fairly easily that the unramified degree-$p$ extension of $K$ corresponds to the $\mathbb{F}_p$-line $\bar U_{pe_1}$, where $e_1$ is the ramification index of $K|\mathbb{Q}_p(\zeta)$ and $\bar U_{pe_1}$ is the image in $\bar K^\times$ of the group of units congruent to $1$ modulo the maximal ideal to the exponent $pe_1$. This is the "deepest line" in the filtration on $\bar K^\times$. See for example prop. 16 of arXiv:0711.3878.



An abelian extension $L|K$ of exponent $p$ is totally ramified if and only if the subspace $D$ which gives rise to $L$ (in the sense that $L=K(\sqrt[p]{D})$) does not contain the line $\bar U_{pe_1}$.



Now, if $L_1$ and $L_2$ are given by the sub-$\mathbb{F}_p$-spaces $D_1$ and $D_2$, then the compositum $L_1L_2$ is given by the subspace $D_1D_2$ (the subspace generated by the union of $D_1$ and $D_2$). Thus the compositum $L_1L_2$ is totally ramified if and only if $D_1D_2$ does not contain the deepest line $\bar U_{pe_1}$.



Addendum. A similar remark can be made when the base field $K$ is a finite extension of $\mathbb{F}_p((\pi))$. Abelian extensions $L|K$ of exponent $p$ correspond to sub-$\mathbb{F}_p$-spaces of $\overline{K^+}=K^+/\wp(K^+)$ (not to be confused with an algebraic closure of $K$), by Artin–Schreier theory. The unramified degree-$p$ extension corresponds to the image of $\mathfrak{o}$ in $\bar K$, which is an $\mathbb{F}_p$-line $\bar{\mathfrak o}$ (say).



Thus, the compositum of two totally ramified abelian extensions $L_i|K$ of exponent $p$ is totally ramified precisely when the subspace $D_1+D_2$ does not contain the line $\bar{\mathfrak o}$, where $D_i$ is the subspace giving rise to $L_i$ in the sense that $L_i=K(\wp^{-1}(D_i))$. See Parts 5 and 6 of arXiv:0909.2541.

ag.algebraic geometry - Hilbert scheme of points on a complex surface

Here is a geometric description in the case of $H_n(mathbb{C}^2)$. This is meant to be a geometric rewrite of Proposition 2.6 in Mark Haiman's "(t,q)-Catalan numbers and the Hilbert scheme",
Discrete Math. 193 (1998), 201-224.



Let $S= (\mathbb{C}^2)^n/S_n$; notice that this is an orbifold. Let $S_0$ be the open dense set where the $n$ points are distinct. For $D$ an $n$-element subset of $\mathbb{Z}_{\geq 0}^2$, let $A_{D}$ be the polynomial $\det(x_i^{a} y_i^{b})$, where $(a, b)$ ranges over the elements of $D$ and $i$ runs from $1$ to $n$. For any $D$ and $D'$, the ratio $A_D/A_{D'}$ is a meromorphic function on $S$, and is well defined on $S_0$.
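As a concrete illustration for n = 2: the polynomials A_D are small determinants, and they vanish exactly when the configuration degenerates. A minimal sketch (the helper functions are my own, evaluated numerically at sample points):

```python
def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def A_D(D, pts):
    # A_D = det(x_i^a * y_i^b): rows indexed by points, columns by (a, b) in D
    return det([[x ** a * y ** b for (a, b) in D] for (x, y) in pts])

D1, D2 = [(0, 0), (1, 0)], [(0, 0), (0, 1)]
print(A_D(D1, [(1, 5), (3, 7)]))   # x2 - x1 = 2
print(A_D(D1, [(3, 5), (3, 7)]))   # 0: equal x-coordinates
print(A_D(D2, [(3, 5), (3, 5)]))   # 0: the two points collide
```

This makes visible why the products $A_D A_{D'}$ cut out (set-theoretically) the locus where points collide, as in Haiman's description below.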



Map $S_0$ into $S_0 \times \mathbb{CP}^{\infty}$, where the homogeneous coordinates on $\mathbb{CP}^{\infty}$ are the $A_{D}$'s. (Only finitely many of the $A_D$'s are needed, but it would be a little time consuming to say which ones.) The Hilbert scheme is the closure of $S_0$ in $S \times \mathbb{CP}^{\infty}$.



Algebraically, we can describe this as the blow-up of $S$ along the ideal generated by all products $A_D A_{D'}$. Haiman points out that the reduction of this ideal cuts out the locus where two of the points collide, and he speculates that this ideal may be reduced. If his speculation is correct, then we can describe $H_n(\mathbb{C}^2)$ geometrically as the blow-up of $(\mathbb{C}^2)^n/S_n$ along the reduced locus where at least two of the points are equal.

ac.commutative algebra - Why is an elliptic curve a group?


Short answer: because it's a complex torus. The explanation below will take us through many topics.




Topological covers



The curve should be considered over the complex numbers, where it can be seen as a Riemann surface, hence a two-dimensional oriented closed manifold. How do we find out whether this particular one is a sphere, a torus, or something else? Just consider the two-fold covering of the $x$-axis and compute the Euler characteristic as $-2 \cdot 2 + 4 = 0$ (don't forget the point at infinity).
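The same count can be phrased via the Riemann–Hurwitz formula for the degree-2 projection to the $x$-line $\mathbb{CP}^1$, branched at the three roots of the cubic and at the point at infinity:

```latex
\chi(E) \;=\; 2\,\chi(\mathbb{CP}^1) - \#\{\text{branch points}\}
        \;=\; 2\cdot 2 - 4 \;=\; 0,
\qquad
\chi(E) = 2 - 2g \;\Longrightarrow\; g = 1,
```

so the curve is a genus-one surface, i.e. a torus.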



Complex tori



So this is a torus; now, a torus with a complex structure can always be written as a quotient $\mathbb C/\Lambda$, where $\Lambda$ is the lattice of periods. It can be computed as the set of integrals $\int_\gamma \omega$ of a differential form $\omega$ over all elements $\gamma \in \pi_1$. The choice of differential form is unique up to a scalar $\lambda \in \mathbb C$.



Algebraic addition



A complex map of the torus into itself that leaves the lattice $\Lambda$ fixed can only be a shift. Once you select a base point, these shifts are in one-to-one correspondence with points of $E$. We have a unique distinguished point — infinity — so let's choose it as the base point. It follows that we now have an addition map $(u, v) \to u\oplus v$, though defined purely analytically so far.



Geometric meaning



Now let's stop and ask ourselves: how can we see this addition geometrically? For a start, consider the map that sends $u$ to the third point of intersection of the curve with the line through $u$ and 0 (the infinity point). It's not hard to see that this map fixes 0 but sends every class $\gamma$ in the fundamental group to $-\gamma$, so it must be the map $u\mapsto -u$.



Group theory laws



What would happen if you took a line through $u$ and $v$? By temporarily changing coordinates so that $u$ becomes the infinity point, one writes down that map as $(u, v) \mapsto -(u+v)$.
Now if you took three points, there would be two different ways to add them; those would lead to $(u+v)+w$ and $u+(v+w)$ as complex numbers, which we know to be equal because addition of complex numbers is associative.



Logically proven



In the above, we worked over the complex numbers, but the associativity we proved is a formal identity about substituting rational expressions into one another. Since the identity holds over the complex numbers, it holds over all fields.
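Because the group law really is a pair of rational expressions, it can be evaluated verbatim over a finite field, and associativity still holds. A minimal chord-and-tangent sketch over a prime field (the curve, prime, and helper names are my own example, not from the text above):

```python
# Chord-and-tangent group law on y^2 = x^3 + a*x + b over F_p.
# The curve (p, a, b) is an arbitrary illustrative choice.
p, a, b = 97, 2, 3
O = None  # the point at infinity, serving as the identity

def add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O  # P + (-P) = O (also covers doubling a 2-torsion point)
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

# all affine points of the curve, found by brute force
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - x ** 3 - a * x - b) % p == 0]

# associativity holds, just as it does over the complex numbers
P, Q, R = points[0], points[1], points[2]
assert add(add(P, Q), R) == add(P, add(Q, R))
```

The formulas for $s$, $x_3$, $y_3$ are exactly the rational expressions that the complex-analytic picture produces.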



(In any case, the big discovery of the mid-20th century was that you can actually take all of the intuition described above and apply it to elliptic curves over an arbitrary field.)



Analytic computations (bonus)



Consider a line that passes through the points $u$, $0$ and $-u$. This line is actually vertical, and the corresponding function is well defined on the curve, with two zeroes ($u$ and $-u$) and one double pole at infinity. By shifting and multiplying several such functions we obtain meromorphic functions on the complex torus with poles $p_i$ and zeroes $z_i$ satisfying $\sum p_i = \sum z_i$. This method gives all such functions and only them; it's not hard to see that only meromorphic functions with this property are allowed on an elliptic curve.



For example, the $\wp'$-function is the one that has a triple pole at 0 and simple zeroes at the points $\frac12 w_1, \frac12 w_2, \frac12(w_1+ w_2)$, where $w_1, w_2$ are generators of $\Lambda$.



Jacobian of a curve (bonus 2)



The formula above describes which types of functions are allowed on our curve. It is a good idea to organize this information into a group: in this case, the information is that the single expression $p_1 + p_2 + \cdots + p_n - z_1 - \cdots - z_n$, considered as a point of the curve, must vanish. For curves of higher genus, more relations are necessary; for $\mathbb C\mathbb P^1$, no relations beyond "number of poles = number of zeroes" are necessary. These are relations in the group of classes of divisors (= the Jacobian of the curve) mentioned in other answers.



In particular, elliptic curves coincide with their Jacobians, and that's another explanation of the additive law.

geometry - How can I sample uniformly from a surface?

Just to develop what Fedja said above:



Say you have a map $\phi(x,y)=(u(x,y),v(x,y),w(x,y))$ from $D = [a,b]^2$ to $\mathbb{R}^3$ that represents a surface (a Klein bottle in your initial example). The infinitesimal rectangle $(x,x+dx) \times (y,y+dy)$ is mapped to a patch of area $|\partial_x \phi \wedge \partial_y \phi|\, dx\, dy = A(x,y)\, dx\, dy$. So it suffices to sample points inside $D$ distributed according to the probability density $p(x,y) \propto A(x,y)$.



1: As Fedja mentioned, you can use a rejection-sampling approach - you just need to find a bound on $\|A\|_{\infty}$, which should be very easy since $D$ is simple.



2: You can use an MCMC approach, which is equally easy to implement (e.g. an independence sampler) - it might be a little bit faster.
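To make approach 1 concrete, here is a minimal rejection-sampling sketch for a torus (the parametrization and radii are my own example; the same pattern applies to the Klein-bottle map $\phi$, with its own area element and bound):

```python
import math
import random

# Uniform sampling on a torus via rejection: accept (u, v) with
# probability proportional to the area element |phi_u x phi_v|.
R, r = 2.0, 0.7  # example radii, R > r
random.seed(0)

def area_element(u, v):
    # for the standard torus parametrization, |phi_u x phi_v| = r*(R + r*cos v)
    return r * (R + r * math.cos(v))

A_MAX = r * (R + r)  # easy upper bound on the area element over the domain

def sample_surface(n):
    pts = []
    while len(pts) < n:
        u = random.uniform(0.0, 2.0 * math.pi)
        v = random.uniform(0.0, 2.0 * math.pi)
        if random.uniform(0.0, A_MAX) < area_element(u, v):  # rejection step
            pts.append(((R + r * math.cos(v)) * math.cos(u),
                        (R + r * math.cos(v)) * math.sin(u),
                        r * math.sin(v)))
    return pts

pts = sample_surface(1000)
```

Sampling $(u, v)$ uniformly and then accepting with probability $A(u,v)/A_{\max}$ yields points distributed with density proportional to the surface area, i.e. uniformly on the surface itself.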

Saturday, 9 June 2007

sleep - What is the timing of information assimilation within a human brain?

A little background: I'm an avid dreamer and have great dream recall, sometimes up to 5-7 per night.



In my experience, I can sometimes trace some elements of the dream to an event that occurred within a 1-3 day window prior to the dream.



I've done similar experiments with the process of idea generation, where an idea that I have can be traced to experiences from the previous 1-3 days.



This makes me wonder whether there have been studies on how rapidly the brain absorbs and assimilates information from short-term memory into long-term memory. Have there been any studies on how long the brain keeps information before discarding it or processing/assimilating it into dreams or new ideas?

gr.group theory - Topological simplicity and dense subgroups

Let $G$ be a (topologically) simple Hausdorff topological group. Let $H$ be a dense subgroup of $G$. Now throw away the topology. What restrictions are known on the structure of $H$ as an abstract group? I imagine not much can be said if $G$ has a very coarse topology, but I am particularly interested in the case where $G$ is totally disconnected and locally compact, that is, the intersection of all open compact subgroups of $G$ is trivial.



A related question: two (t.d.l.c.) topological groups $G$ and $K$ have a dense subgroup $H$ in common. Suppose $G$ is (topologically) simple. What does this say about $K$?



I don't have a precise question I want to answer here, this is more of an appeal for references on the subject.

taxonomy - On which date did the official name change of Lactobacillus sanfranciscensis change?

Lactobacillus sanfranciscensis first shows up in pubmed in Gänzle et al. (1998). They reference Trüper and De'Clari (1997) for the name Lactobacillus sanfranciscensis. The latter say:




As none of them makes sense in the nominative apposition construction,
we hereby correct these names to forms that are in agreement with Rule
12c as follows.



...



Lactobacillus sanfrancisco ("the city San Francisco") is corrected to Lactobacillus sanfranciscensis (adjective: "from San Francisco").




That issue of Int J Syst Bacteriol was published 1 July 1997, so I guess that's your date.



References



Gänzle M G, M Ehmann, and W P Hammes. 1998. Modeling of growth of Lactobacillus sanfranciscensis and Candida milleri in response to process parameters of sourdough fermentation. Appl Environ Microbiol 64(7):2616–2623.



Trüper H G, De'Clari L. 1997. Taxonomic note: necessary correction of specific epithets formed as substantives (nouns) "in apposition." Int J Syst Bacteriol 47:908–909.

Friday, 8 June 2007

gn.general topology - A question about the dispersion points of connected metric spaces

Here I construct an example which proves the answer is NO.



Take the KK fan:






Remove its dispersion point at the top. Now you have a Cantor set of lines of rationals/irrationals that cannot be separated horizontally. Stretch this into a "Cantor-like tube" and weave it closer and closer to a point $p$ in the plane while shrinking its diameter and making sure that every loop goes a distance of $7$ away from $p$.






Remove $p$ and you have a hereditarily disconnected space ($\simeq$ the KK fan minus its vertex). If $A$ is nonempty and clopen in $X$ then $A$ must snake around the tube forever, so it limits to $p$ and thus $p\in A$. Therefore $X$ is connected.
The ball of radius $1$ around $p$ is hereditarily disconnected.



EDIT: I am basically taking the space which consists of the curve pictured, and the origin $p=(0,0)$ (so $\{0\}\times (0,1]$ is not included). The difference is that instead of weaving a line, I am weaving this "Cantor tube" while shrinking its diameter. In the first case, if I remove $p$ then I get something $\simeq [0,\infty)$, whereas in the second case I get something $\simeq$ my Cantor tube.




cell biology - How does the Golgi Apparatus perform its function?

This actually is an awesome story: both how it happens and how it was discovered. It is one of those stories for which people get a Nobel Prize, and it was Günter Blobel who was awarded this highest possible prize for his discovery, which according to his Nobel Prize citation was:




"for the discovery that proteins have intrinsic signals that govern
their transport and localization in the cell"




If you watch or read his Nobel Prize lecture you will find answers to your question formulated much better than I can do here. His Nobel lecture opened with the following words, which might already answer your question:




Imagine a large factory that manufactures thousands of different items
in millions of copies every hour, that promptly packages and ships
each of them to waiting customers. Naturally, to avoid chaos, each
product requires a clearly labeled address tag. Günter Blobel is being
awarded this year's Nobel Prize in Physiology or Medicine for having
shown that newly synthesized proteins, analogous to the products
manufactured in the factory, contain built-in signals, or address
tags, that direct them to their proper cellular destination.




Still, I recommend reading (or watching) to the end; there are some interesting points on how it was discovered and what experiments were done.

What is the infinite-dimensional-manifold structure on the space of smooth paths mod thin homotopy?

Okay, you asked for it!




Question: What is the manifold structure on $P^1(M)$?







Answer: There isn't one.





Update: The biggest failing is actually that the obvious model space is not a vector space. The space of paths mod thin homotopy in $\mathbb{R}^n$ does not inherit a well-defined addition from the space of paths in $\mathbb{R}^n$. Full details at the nLab page http://ncatlab.org/nlab/show/smooth+structure+of+the+path+groupoid.



(Update added here, rather than at the end, as it's the most direct answer to the specific question; the rest should be viewed as extra for those interested in more than just whether or not this space is a smooth manifold.)




It is, as you say, a smooth space. This is formal: whatever category of generalised smooth spaces you like, take the quotient of $P(M)$ by thin homotopies. All the proposed categories of generalised smooth spaces admit quotients, so the quotient exists and is a smooth space. Depending on your choice of category, the description of this smooth space may vary. For example, its Frölicher structure and its diffeological structure are very different.



But it is not "locally linear" in any sense. The basic problem is that, as you say, within an equivalence class you have paths wrapping all the way around the manifold. This destroys any hope of local linearity.



As for the proposed local model, you hit the nail on the head when you say:




It's not clear to me how to put a topology on it,




Absolutely! Topologising these spaces can lead to quite strange behaviour. You want a LCTVS structure, else you haven't a hope of even starting, and that can distort the topology from what you expect. For example, if you take piecewise-smooth paths (with no quotient) then the LCTVS topology on that is the $C^0$-topology! Indeed, simply taking so-called "lazy paths" could be fraught with difficulties (I notice that you define "lazy" slightly differently to how I've seen it done before with sitting instances). Is that space a manifold? (I know the answer to this one, but if you don't then you should start with that one as it is a much easier question and will hone your skills a little.)



If you really want a manifold, the solution is to go one step further. Rather than quotienting out by thin homotopies, make your "thing" into a 2-structure and put the thin homotopies in at the 2-level. Keep all paths at the 1-level. Then each level has a manifold structure and by mapping into a 1-structure you effectively quotient out by the 2-structure but never actually have to consider the quotient itself.



To coin a phrase:




Quotients are horrible, it's a shame so many people think otherwise.




Lastly, that's not to say that there is no way of making $P^1(M)$ into a manifold. There may well be. But if there is, it'll be so convoluted and contrived that it won't look anything like the quotient of $P(M)$. A cautionary tale here is the case of all paths in a manifold, $C^\infty(\mathbb{R},M)$. That can be made into a manifold, but it has uncountably many components, for example, so looks absolutely horrid.



Okay, not quite lastly. There's lots of details here that have been glossed over. If you are really interested in working out the smooth space structure of this particular space then I (and I suspect Urs and Konrad) would be very interested in seeing it done and helping out. But MO isn't the place for that. Hop on over to the nLab, create a spin-off of http://ncatlab.org/nlab/show/path+groupoid, and start working.



Further Reading



  1. Constructing smooth manifolds of loop spaces. canonical page. The point of this is to figure out exactly when the "standard method" (alluded to by Tim) works. The distinction between "loop" and "path" is irrelevant.


  2. The Smooth Structure of the Space of Piecewise-Smooth Loops. canonical page. Why you should be very, very nervous whenever anyone says "consider piecewise-smooth maps"; and take as a cautionary tale as to the inadvisability of going beyond smooth maps in general.


  3. Work of David Roberts on the nLab. This is where I got the 2-idea that I mentioned above.


  4. Other relevant nLab pages: http://ncatlab.org/nlab/show/generalized+smooth+space, http://ncatlab.org/nlab/show/smooth+loop+space and further.


  5. Of course, the magnificent book by Kriegl and Michor. (I'm going to create a separate MO account for that book; its role will be to post an answer on relevant questions simply saying "Read Me".)



In response to Konrad's comment below, I've started an nlab page to work out the smooth structure of this space. The initial content considers the linear structure of the space of paths in some Euclidean space modded out by thin homotopy. The page is http://ncatlab.org/nlab/show/smooth+structure+of+the+path+groupoid.

Thursday, 7 June 2007

evolution - How do members of cryptic species know who to mate with?

As both @Rory M and @Alexander Galkin suggest, there are various non-visual mating behaviors to allow these species to select mates and also allow taxonomists and researchers to identify these species. And they hit on the two major ones, courtship rituals (mating calls, throat bulging, dancing) and pheromones.



Let's have a look at two examples:



  • The widespread Amazonian frog Allobates femoralis is now being approached as a species complex, with mating differences between the biologically distinct species ranging from differences in mating-call notes to cephalic amplexus (head grabbing during mate selection) and various other behavioral differences.[1]

  • Some crickets mate preferentially based on their species-specific song, though there is probably some genetic mixing among those crickets.

I think it's important to note that these species are cryptic because we as humans have decided they're hard to tell apart. That similar morphology may also be difficult for predators to distinguish, or may be used by one or both species to parasitize the other; it may be maintained by sexual selection, or there may simply be no strong selective pressure to change. But the important part is that "cryptic species" is a box that we've created to describe traits we see.



Cryptic species as a window on diversity and conservation is a decent overview paper. (It's available here if you don't have journal access.)