Wednesday 28 February 2007

gr.group theory - Collapsible group words


What is the length $f(n)$ of the shortest nontrivial group word $w_n$ in $x_1,\ldots,x_n$ that collapses to $1$ when we substitute $x_i=1$ for any $i$?




For example, $f(2)=4$, with the commutator $[x_1,x_2]=x_1 x_2 x_1^{-1} x_2^{-1}$ attaining the bound.



For any $m,n \ge 1$, the construction $w_{m+n}(\vec{x},\vec{y}):=[w_m(\vec{x}),w_n(\vec{y})]$ shows that $f(m+n) \le 2 f(m) + 2 f(n)$.
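The collapsing property is easy to check mechanically. Below is a quick sanity check (illustrative code; the word representation and helper names are mine): a word is a list of (generator, exponent) letters, substituting $x_i=1$ deletes the letters of $x_i$, and free reduction cancels adjacent inverse pairs.

```python
# Verify that the iterated-commutator construction collapses to 1
# when any single variable is set to 1.

def reduce_word(w):
    """Freely reduce a word: cancel adjacent inverse pairs (stack-based)."""
    out = []
    for letter in w:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()
        else:
            out.append(letter)
    return out

def inverse(w):
    return [(g, -e) for g, e in reversed(w)]

def commutator(u, v):
    return u + v + inverse(u) + inverse(v)

def substitute_identity(w, i):
    """Set generator i equal to 1 (delete its letters), then reduce."""
    return reduce_word([l for l in w if l[0] != i])

# w_2 = [x1, x2] and w_4 = [[x1, x2], [x3, x4]], of length 2*4 + 2*4 = 16
w2a = commutator([(1, 1)], [(2, 1)])
w2b = commutator([(3, 1)], [(4, 1)])
w4 = commutator(w2a, w2b)

assert len(w4) == 16
assert reduce_word(w4) != []        # nontrivial as a free-group word
assert all(substitute_identity(w4, i) == [] for i in (1, 2, 3, 4))
```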



Is $f(1),f(2),\ldots$ the same as sequence A073121:
$$ 1,4,10,16,28,40,52,64,88,112,136,ldots ?$$
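For what it's worth, the terms displayed above coincide with the upper bound obtained by iterating the commutator bound optimally, starting from $f(1)=1$ (a quick computation, not a proof that $f$ equals this sequence):

```python
# Check that the listed terms agree with taking the recursion
# f(m+n) <= 2 f(m) + 2 f(n) with equality, starting from f(1) = 1.
a073121 = [1, 4, 10, 16, 28, 40, 52, 64, 88, 112, 136]

f = [None, 1]  # f[0] unused
for n in range(2, len(a073121) + 1):
    f.append(min(2 * f[m] + 2 * f[n - m] for m in range(1, n // 2 + 1)))

assert f[1:] == a073121
```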



Motivation: beating the iterated-commutator construction would improve the best known bounds on the size of the smallest group not satisfying a given identity.

Tuesday 27 February 2007

molecular genetics - What is the reason behind choosing the reporter gene when experimenting on your gene of interest?

Different genes serve different purposes. For example, if you want to perform colocalization studies, then fluorescent genes like eGFP and DsRed (or any variation of those, namely Emerald, mCherry, etc.) will be quite useful, since you can use different filters on your microscope for the various fluorophores. For morphological assessments, perhaps a LacZ gene or alkaline phosphatase may be better suited, since the detection of these enzymes is compatible with standard histological techniques and you usually get much better contrast than with fluorescence. For quantification purposes, you would probably use firefly luciferase or CAT, since there are very good quantitative assays for these enzymes.



The decision will also depend on the lab's degree of experience with each detection technique, and even on the availability of the constructs. If a reporter has already been made by the lab next door and it fits your needs, you'll just use it instead of creating your own, even if that reporter gene would not have been your first choice.

ct.category theory - Derived Functors in arbitrary triangulated categories

Let ${\mathcal D}$ be a triangulated category, ${\mathcal C}$ a triangulated subcategory and $Q: {\mathcal D}\to {\mathcal D}/{\mathcal C}$ the corresponding Verdier localization. Now suppose we have a triangulated functor ${\mathbb F}: {\mathcal D}\to {\mathcal T}$ to some other triangulated category ${\mathcal T}$.



My question is the following: Under which circumstances do we have some kind of "right derived" functor of ${\mathbb F}$ with respect to ${\mathcal C}$? By that I mean a triangulated functor $\textbf{R}{\mathbb F}: {\mathcal D}/{\mathcal C}\to {\mathcal T}$ together with a natural transformation ${\mathbb F}\Rightarrow \textbf{R}{\mathbb F}\circ Q$ which is initial with this property.



Does there exist such a treatment of derived functors in arbitrary triangulated categories?



Thank you.

Monday 26 February 2007

nt.number theory - Prime numbers with given difference

Let natural numbers $N_1,N_2,N_3,\ldots,N_k$ be given such that for every prime $p$ less than or equal to $k$, the set $N_1,N_2,N_3,\ldots,N_k$ does not contain all residues modulo $p$. Is it true that there exists a number $X$ such that $X+N_1,X+N_2,X+N_3,\ldots,X+N_k$ are all prime? I think it should follow from some theorems about prime numbers in arithmetic progressions.
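The hypothesis here is the usual admissibility condition for prime constellations; note that for $p > k$ the $k$ numbers cannot cover all $p$ residues, so only primes $p \le k$ need checking. A small illustrative check (helper names are mine):

```python
# Admissibility: the offsets must miss at least one residue class
# modulo every prime p <= k.
def primes_upto(n):
    sieve = [False, False] + [True] * (n - 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def admissible(offsets):
    for p in primes_upto(len(offsets)):
        if len({n % p for n in offsets}) == p:  # all residues mod p are hit
            return False
    return True

# {0, 2, 6} is admissible (realized by 5, 7, 11); {0, 2, 4} is not,
# since it covers all residues modulo 3.
assert admissible([0, 2, 6])
assert not admissible([0, 2, 4])
```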

physics - Two interacting bodies in an external field

I hope MO is the right place for this question (if not, where would you pose it?).



Consider a two-body system in classical mechanics. As long as the interaction depends only on the distance of the two bodies, the two-body problem is integrable/solvable. Now consider the two bodies in a fixed external field. (This is only one step away from a three-body system that is known to be non-integrable in general, but obviously different from it.)




Question: Can conditions on the combination of interaction and external field be given explicitly under which the problem is integrable/solvable?



It might be the case that the problem is always solvable. In this case the following reference request becomes predominant:



Reference request: Where can I find an explicit and elaborated treatment of this problem?
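One standard special case may be worth recording as a partial sanity check (this is textbook mechanics, not a full answer): if the external field is uniform, the problem is always integrable, because the center-of-mass and relative motions decouple. Writing $\vec{R} = (m_1 \vec{r}_1 + m_2 \vec{r}_2)/(m_1+m_2)$ and $\vec{r} = \vec{r}_1 - \vec{r}_2$, the equations of motion

$$m_1 \ddot{\vec{r}}_1 = \vec{F}(\vec{r}_1-\vec{r}_2) + m_1 \vec{g}, \qquad m_2 \ddot{\vec{r}}_2 = -\vec{F}(\vec{r}_1-\vec{r}_2) + m_2 \vec{g}$$

split into free fall of the center of mass, $\ddot{\vec{R}} = \vec{g}$, and the ordinary two-body problem $\mu \ddot{\vec{r}} = \vec{F}(\vec{r})$ with reduced mass $\mu = m_1 m_2/(m_1+m_2)$. So any obstruction to integrability must come from the spatial variation of the external field.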


ecology - Are there any side-effects from a mosquitoes extinction?

Eliminating Culicidae (mosquitoes) because some of the species transmit viruses is a very bad idea. I have used the family name to make you realize there are more than 3,500 species of mosquitoes, and only a few are responsible for the transmission of diseases or viruses.



First, male mosquitoes are important pollinators of many flowers (here is an example: http://www.mosquitoreviews.com/mosquitoes-niche-pollinate.html ).



Secondly, mosquitoes are an important food source for many predators such as bats, birds, frogs and fish. If you think this is negligible, just come down to Quebec in August and try to walk naked in the woods: you will see thousands of happy female mosquitoes on your skin within a minute.



Indeed, eliminating species because they have some adverse consequences for humans is seeing the world as a simple thing. Ecosystems are complex, and removing a group of animals will have consequences for the rest of the members of the ecosystem, including us. Controlling the population is a much better idea. For example, in Montreal, the SOPFIM treats the sewers with a larvicide (methoprene) to control the mosquito population in the city. They also treat swamps close to the city. But the most efficient way to reduce mosquito populations is by removing stagnant water (tires, plastic bags...). Because of those efforts, there are very few cases of West Nile virus in Montreal. So before going to the extreme, educate people about the biology of mosquitoes and control the population.



Cheers

Sunday 25 February 2007

soft question - Is discrete mathematics mainstream?

There's a curious sense in which almost no one really feels comfortably mainstream, regardless of how they stand with respect to the cohomological divide, or even of their community status. The Grothendieck phenomenon is rather an obvious example, but there are many others. If we venture outside of mathematics proper, Noam Chomsky, often referred to as the most cited intellectual alive, frequently speaks of himself as an outsider. (Specifically in relation to his linguistics, not his politics.)



Of course, it’s tempting to speculate about the honesty of such self-perception, but I tend to think of it as largely reflective of the human condition. It may also be that this kind of view goes well with a sort of rebellious energy conducive to creative intensity. For people who like literature, the sensibility is wonderfully captured in the novella 'Tonio Kroeger' by Thomas Mann. The irony is that almost anyone who reads the story is able to relate to the loner, as is also the case with the typical rebel in simpler dramas.



Why go far? Here we have Tim Gowers, an enormously respected mathematician by any standard, apparently presenting himself as a spokesman for the tributaries. In his case, I take it as the prototypical gentlemanly self-effacement one finds often in Britain.



At the very least, the whole picture is complicated.



The point is it’s probably not worth spending too much energy on this question. Administrative constraints, classifications, and selections are a real enough part of life within which we have to find some equilibrium, but serious mathematics has too much unity to be divided by the watery metaphor.



David Corfield once (good-humouredly) misquoted me with regard to the perceived distinction:



'Which do you like better, the theorem on primes in arithmetic progressions or the one on arithmetic progressions in primes?’



The original context of that dichotomy, however, was a far-fetched suggestion that there should be a common framework for the two theorems.



Added: The more I think of it, the more it seems that the original thrust of cohomology was very combinatorial, as might be seen in old textbooks like Seifert and Threlfall. The way I teach it to undergraduates is along the lines of:



space $X$ --> triangulation $T$ --> Euler characteristic $\chi_T(X)=V_T+F_T-E_T$ --> $T$-independence of $\chi(X)$ --> dependence of $V_T$ etc. on $T$ --> 'refined incarnation' of $V_T, E_T, F_T$ as $h_0$, $h_1$, and $h_2$, which are independent of $T$ --> refined $h_i$ as $H_i$.



The emphasis throughout is on capturing the combinatorial essence of the space.
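The first computation in this pipeline can be sketched in a few lines (a toy example of my own): computing $\chi$ from the triangulation of the $2$-sphere given by the boundary of a tetrahedron.

```python
# Compute the Euler characteristic chi = V - E + F of a triangulated
# surface, here the boundary of a tetrahedron (a triangulation of S^2).
from itertools import combinations

faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

vertices = {v for f in faces for v in f}
edges = {frozenset(e) for f in faces for e in combinations(f, 2)}

chi = len(vertices) - len(edges) + len(faces)
assert chi == 2  # chi(S^2) = 2, independently of the triangulation
```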

epigenetics - Are prions an important driver in evolution?

It is proposed that prions are a good mechanism for "testing" phenotypic variation.



There are many identified proteins with prion-determining domains (PrD) in the yeast genome that can spontaneously switch between conformations with some low probability (e.g., check SUP35 for one example, and [1] for a good overview of more). The theory is that:



  1. the low probability of switching from non-prion to prion state allows for many more mutations and variations to accumulate -- generating greater genetic diversity than in standard expressed gene variability where most mutations are silent or detrimental

  2. the prions provide a ready form of non-permanent inheritability that can be "trialed" by offspring and others in a colony of organisms -- this can be especially beneficial during say temporary changes in environment

  3. if the prion phenotype is widely successful, selective pressure can easily mutate it into a more permanent fixture in the genome.

Check out the excellent paper published just last week in Nature exploring this topic [2]. To give a sense of just how evolutionarily advantageous prions can be: in the authors' experiments and analysis, they note that 40% of the prion traits they analyzed were beneficial to growth (e.g., in the paper, strain UCD939 gains additional resistance to acidic conditions from the prion [PSI+]).



Assuming these hypotheses, prions would thus play a significant role in the evolution and variability of organisms.



[1] Crow et al., 2011. doi:10.1016/j.semcdb.2011.03.003



[2] Halfmann et al., 2012. doi:10.1038/nature10875

Saturday 24 February 2007

nt.number theory - Proving non-existence of solutions to $3^n-2^m=t$ without using congruences

I made a passing comment under Max Alekseyev's cute answer to this question and Pete Clark suggested I raise it explicitly as a different question. I cannot give any motivation for it however---it was just a passing thought. My only motivation is that it looks like fairly elementary number theory but I don't know the answer.



OK, so one problem raised in the question linked to above was "prove there are no solutions to $3^n-2^m=41$ in non-negative integers", and Alekseyev's answer was "go mod 60". It was remarked afterwards that going mod 601 or 6553 would also nail it. For example, modulo 6553 (which is prime), 3 has order 39 and 2 has order 117, but none of the 39 values of $3^n-41$ modulo 6553 is a power of 2 modulo 6553.
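These assertions are quick to verify by machine; a small illustrative script (variable and function names are mine):

```python
# Check the mod 6553 claim: 3 has order 39, 2 has order 117, and no
# value of 3^n - 41 (mod 6553) is a power of 2 (mod 6553).
p = 6553

def order(a, p):
    """Multiplicative order of a modulo p."""
    x, k = a % p, 1
    while x != 1:
        x = x * a % p
        k += 1
    return k

assert order(3, p) == 39 and order(2, p) == 117

powers_of_2 = {pow(2, m, p) for m in range(117)}
assert all((pow(3, n, p) - 41) % p not in powers_of_2 for n in range(39))
```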



My question (really just a passing remark) is:



Is there an integer $t$ such that the equation $3^n-2^m=t$ has no solutions in non-negative integers $m$, $n$, but for which there are solutions modulo $N$ for all $N\geq1$? (By which of course I mean that for each $N\geq1$ the equation is satisfied mod $N$ for some integers $m,n\geq0$ depending on $N$; I am not suggesting that $m$ and $n$ be taken modulo $N$ or are independent of $N$.)



This for me looks like a "Hasse principle" sort of thing---in general checking congruences doesn't give enough information about solvability of the polynomial in integers and there are many examples of such phenomena in mathematics. As exponential Diophantine equations are harder than normal ones I would similarly expect the Hasse Principle to fail here, but others seemed to be more optimistic.

class field theory - Intuition for Group Cohomology

See also this Math.SE post I wrote for some more motivation: http://math.stackexchange.com/a/270266/873. Recall that $\mathrm H^*(G,M)=\mathrm{Ext}^*_{\mathbb{Z}[G]}(\mathbb{Z},M)$.



After learning some more math, I've come across the following example of a use of group cohomology which sheds some light on its geometric meaning. (If you want to see a somewhat more concrete explanation of how group cohomology naturally arises, skip the next paragraph.)



We define an elliptic curve to be $E=\mathbb{C}/L$ for a two-dimensional lattice $L$. Note that the first homology group of this elliptic curve is isomorphic to $L$ precisely because it is a quotient of the universal cover $\mathbb{C}$ by $L$. A theta function is a section of a line bundle on an elliptic curve. Since any line bundle can be lifted to $\mathbb{C}$, the universal cover, and any line bundle over a contractible space is trivial, the line bundle is a quotient of the trivial line bundle over $\mathbb{C}$. We can define a function $j(\omega,z): L \times \mathbb{C} \to \mathbb{C} \setminus \{0\}$. Then we identify $(z,w) \in \mathbb{C}^2$ (i.e. the line bundle over $\mathbb{C}$) with $(z+\omega, j(\omega,z)w)$. For this equivalence relation to give a well-defined bundle over $\mathbb{C}/L$, we need the following: suppose $\omega_1,\omega_2 \in L$. Then $(z,w)$ is identified with $(z+\omega_1+\omega_2, j(\omega_1+\omega_2,z)w)$. But $(z,w)$ is identified with $(z+\omega_1, j(\omega_1,z)w)$, which in turn is identified with $(z+\omega_1+\omega_2, j(\omega_2,z+\omega_1)j(\omega_1,z)w)$. In other words, this forces $j(\omega_1+\omega_2,z) = j(\omega_2,z+\omega_1)j(\omega_1,z)$. This means that, if we view $j$ as a function from $L$ to the set of non-vanishing holomorphic functions $\mathbb{C} \to \mathbb{C}$, with (right) $L$-action on this set defined by $(\omega f)(z) = f(z+\omega)$, then $j$ is in fact a $1$-cocycle in the language of group cohomology. Thus $H^1(L,\mathcal{O}(\mathbb{C})^{\times})$, where $\mathcal{O}(\mathbb{C})^{\times}$ denotes the (multiplicative) $L$-module of non-vanishing holomorphic functions on $\mathbb{C}$, classifies line bundles over $\mathbb{C}/L$. What's more, this set is also classified by the sheaf cohomology $H^1(E,\mathcal{O}(E)^{\times})$ (where $\mathcal{O}(E)$ is the sheaf of holomorphic functions on $E$, and the $\times$ indicates the group of units of the ring of holomorphic functions).
That is, we can compute the sheaf cohomology of a space by considering the group cohomology of the action of the homology group on the universal cover! In addition, the $0$th group cohomology (this time of the meromorphic functions, not just the holomorphic ones) consists of the $L$-invariant elements, i.e. the elliptic functions, and similarly the $0$th sheaf cohomology is the global sections, again the elliptic functions.
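The cocycle identity $j(\omega_1+\omega_2,z) = j(\omega_2,z+\omega_1)j(\omega_1,z)$ is also easy to test numerically. As an illustration (the specific automorphy factor below is the standard one for the Jacobi theta function, supplied here by me as an example): for the lattice $L = \mathbb{Z} + \mathbb{Z}\tau$ and $\omega = m\tau + n$, take $j(\omega,z) = \exp(-\pi i(m^2\tau + 2mz))$.

```python
# Numerical sanity check of the cocycle condition for the classical
# theta automorphy factor j(w, z) = exp(-pi*i*(m^2*tau + 2*m*z)),
# where w = m*tau + n is a lattice vector.
import cmath

tau = 0.3 + 1.1j          # any point of the upper half-plane
z = 0.7 - 0.2j

def j(m, n, z):
    """Automorphy factor for the lattice vector w = m*tau + n."""
    return cmath.exp(-cmath.pi * 1j * (m * m * tau + 2 * m * z))

for (m1, n1) in [(1, 0), (2, -1), (-1, 3)]:
    for (m2, n2) in [(0, 1), (1, 2), (-2, -1)]:
        w1 = m1 * tau + n1
        lhs = j(m1 + m2, n1 + n2, z)
        rhs = j(m2, n2, z + w1) * j(m1, n1, z)
        assert cmath.isclose(lhs, rhs, rel_tol=1e-9)
```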



More concretely, a theta function is a meromorphic function such that $\theta(z+\omega)=j(\omega,z)\theta(z)$ for all $z \in \mathbb{C}$, $\omega \in L$. (It is easy to see that $\theta$ then gives a well-defined section of the line bundle on $E$ given by $j(\omega,z)$ described above.) Then, note that $\theta(z+\omega_1+\omega_2)=j(\omega_1+\omega_2,z)\theta(z) = j(\omega_2,z+\omega_1)j(\omega_1,z)\theta(z)$, meaning that $j$ must satisfy the cocycle condition! More generally, if $X$ is a contractible Riemann surface, and $\Gamma$ is a group which acts on $X$ under sufficiently nice conditions, consider meromorphic functions $f$ on $X$ such that $f(\gamma z)=j(\gamma,z)f(z)$ for $z \in X$, $\gamma \in \Gamma$, where $j: \Gamma \times X \to \mathbb{C}$ is holomorphic for fixed $\gamma$. Then one can similarly check that for $f$ to be well-defined, $j$ must be a $1$-cocycle in $H^1(\Gamma,\mathcal{O}(X)^{\times})$! (I.e. with $\Gamma$ acting by precomposition on $\mathcal{O}(X)^{\times}$, the group of units of the ring of holomorphic functions on $X$.) Thus the cocycle condition arises from a very simple and natural definition (that of a function which transforms according to a function $j$ under the action of a group). A basic example is a modular form such as $G_{2k}(z)$, which satisfies $G_{2k}(\gamma z) = (cz+d)^{2k} G_{2k}(z)$, where $\gamma = \left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \in SL_2(\mathbb{Z})$ acts as a fractional linear transformation. It follows automatically that something as simple as $(cz+d)^{2k}$ is a cocycle in group cohomology, since $G_{2k}$ is, for example, nonzero.

Friday 23 February 2007

homotopy theory - Classifying Space of a Group Extension

Yes. The principal bundles are the same and your guess that BA is an abelian group is exactly right. A good reference for this story, and of Segal's result that David Roberts quotes, is Segal's paper:



G. Segal, Cohomology of topological groups, Symposia Mathematica IV (1970), 377-387.



The functors E and B can be described in two steps. First you form a simplicial topological space, and then you realize this space. It is easy to see directly that EG is always a group and that there is an inclusion G --> EG, which induces the action. The quotient is BG. Under suitable conditions, for example if G is locally contractible (which includes the discrete case), the map EG --> BG will admit local sections and so EG will be a G-principal bundle over BG. This is proven in the appendix of Segal's paper, above. There are other conditions (well pointedness) which will do a similar thing.



The inclusion of G into EG is a normal subgroup precisely when G is abelian, and so in this case BG is again an abelian group.



I believe your question was implicitly in the discrete setting, but the non-discrete setting is relevant and is the subject of Segal's paper. Roughly, here is the answer: given an abelian (topological) group H, the BH-principal bundles over a space X are classified by the homotopy classes of maps [X, BBH]. When H is discrete, BBH = K(H,2). If X = K(G,1) for a discrete group G, these correspond to (central) group extensions:



H --> E --> G



If G has topology, then the group extensions can be more interesting. For example, there can be non-trivial group extensions which are trivial as principal bundles. Easy examples exist when H is a contractible group. However, Segal developed a cohomology theory which classifies all these extensions. That is the subject of his paper.

human biology - What are the constraints when growing an artificial brain?

The comments above are quite relevant -- we are years / decades away from growing functional brains in the lab, so there are probably innumerable constraints that we have likely not even thought of yet.



That being said, you might find this paper interesting: "Adaptive flight control with living neuronal networks on microelectrode arrays" [1]. The authors were able to successfully culture rat neurons and electrically interface with them in order to control pitch and yaw in the X-Plane flight-simulation software. It's a very crude system compared to an actual brain, but it's able to use cells successfully as rudimentary living neural networks. If you consider this to be a successful "artificial brain," then your constraints are all the ones associated with culturing neurons.



[1] DeMarse et al., 2005

Thursday 22 February 2007

rt.representation theory - a question about irreducibility of representations and Kirillov conjecture

I think it's best to look at the relatively recent paper of Moshe Baruch, Annals of Math., "A Proof of Kirillov's Conjecture" -- in the introduction of his paper, he discusses the basic techniques of proof, and a bit of the history (Bernstein proved this conjecture in the p-adic case, for example).



Baruch and others (e.g. Kirillov, in the original conjecture, I think) consider unitary representations. This is necessary for the methods which they use. From the beginning, they use the "converse of Schur's lemma", i.e., if $Hom_P(V,V)$ is one-dimensional then $V$ is irreducible. This converse of Schur's lemma requires one to work in the unitary (or unitarizable) setting.



Now, to address Kevin Buzzard's point, consider $G = GL_2(F)$ for a $p$-adic field $F$, and a unitary principal series representation $V = Ind_B^G \chi \delta^{1/2}$, where $\chi = \chi_1 \boxtimes \chi_2$ is a unitary character of the standard maximal torus, and $\delta$ is the modular character of the Borel $B$.



Restricting $V$ back down to $B$, one gets a short exact sequence of $B$-modules:
$$0 \rightarrow V(BwB) \rightarrow V \rightarrow V(B) \rightarrow 0,$$
where $V(X)$ denotes a space of functions (compactly supported modulo $B$ on the left) on the ($B$-stable) locus $X$. One can check that these spaces are nonzero using the structure of the Bruhat cells, and hence the restriction of $V$ to $B$ is reducible, as Kevin suggests.



But, if one considers the Hilbert space completion $\hat V$ of $V$, with respect to a natural Hermitian inner product, one finds that $\hat V$ is an irreducible unitary representation of $G$ which remains irreducible upon restriction to $B$ (and to the even smaller "mirabolic" subgroup of Kirillov's conjecture). Here it is important to note that "irreducibility" for unitary representations on Hilbert spaces refers to closed subspaces. The $B$-stable subspace $V(BwB)$ of $V$ is not closed, and its closure is all of $\hat V$, I think.



So - I think that Kirillov's conjecture is false, in the setting of smooth representations of $p$-adic groups (and most probably for smooth representations of moderate growth of real groups).



However, the techniques still apply in the smooth setting to give weaker (but still useful) results. After all, it is still useful to know that $Hom_P(V,V)$ is one-dimensional! This can be used to prove multiplicity one for certain representations, for example.



The general technique to prove $Hom_P(V,V)$ is one-dimensional involves various forms of Frobenius reciprocity and characterization of distributions. Without explaining too much (you should look at old papers of Bernstein, perhaps), and being sloppy about dualities sometimes,
$$Hom_P(V,V) \cong Hom_P(End(V), C) \cong Hom_G(End(V), Ind_P^G C).$$
Some sequence of Frobenius reciprocity and linear algebra (I don't think I have it quite right above) identifies $Hom_P(V,V)$ with a space of functions or distributions $f: G \rightarrow End(V)$ which are $(P,V)$-bi-invariant. In other words,
$$f(p_1 g p_2) = \pi(p_1^{-1}) \circ f(g) \circ \pi(p_2),$$
or something close.



So in the end, one is led to classify a family of $P$-bi-quasi-invariant $End(V)$-valued distributions on $G$. This leads to two problems: one geometric, involving the $P$-double cosets in $G$. This is particularly easy for the "mirabolic" subgroup $P$. The second problem is often more difficult, analyzing distributions on each double coset, and proving most of them are zero or else have very simple properties.



Hope this clarifies a little bit... you might read more on the Gelfand-Kazhdan method (Gross has an exposition in the Bulletin) to understand this better.

Wednesday 21 February 2007

endocrinology - Which human body hormonal systems exhibit 24 hour diurnal cyclical activity?

The real answer is probably more than you want, but it's easy to do better than the list above.



I took a look through GEO for human circadian expression data and, surprisingly, I found only two datasets.



Looking at GSE2703, the rhesus circadian expression experiment: they identified 355 genes that are rhythmically expressed. This is not a great experiment, because they only looked over a single 24-hour period, and only at the adrenal gland. Nonetheless, they found 355 genes which seemed to be circadian; the table is supplemental data to the article listed below.



I see a fibroblast growth factor receptor, some hydrocarbon nuclear receptor components, sterol regulatory factors, bone morphogenetic protein 2, a glutamate receptor, a thrombospondin receptor, ryanodine receptor 3 (what is that?), lysophosphatidic acid G-protein-coupled receptor 2, and purinergic receptor P2Y (G-protein coupled). You might find more if you know what you are looking for.



The other circadian study was on human muscle, which will no doubt give different answers. I imagine circadian behavior is highly tissue dependent.



Reference: Lemos DR, Downs JL, Urbanski HF. Twenty-four-hour rhythmic gene expression in the rhesus macaque adrenal gland. Mol Endocrinol 2006 May;20(5):1164-76

Tuesday 20 February 2007

rt.representation theory - Why aren't representations of monoids studied so much?

The representation theory of finite semigroups is an interesting blend of group representation theory and the representation theory of finite dimensional algebras. The subject is both old, going back to A.H. Clifford (from Clifford theory in group representation theory), and at the same time is in its infancy.



The reason why semigroup representation theory is not so well studied lies, in my opinion, in its origins. A description of the simple modules for a finite semigroup was given by work of Clifford, Munn and Ponizovsky in the forties and fifties. It was further clarified by Rhodes and Zalcstein and by Lallement and Petrich in the sixties. Roughly speaking, the main theorem states that all irreducible representations of a finite semigroup can be constructed from irreducible representations of associated finite groups in a very explicit way. Sadly, this beautiful work was written up relying very heavily on the structure theory of finite semigroups, which is not widely known, and so the literature is virtually inaccessible to nonspecialists. The approach used here foreshadows the development of stratified and quasihereditary algebras by Cline, Parshall and Scott. In fact, in 1972, Nico computed a bound on the global dimension of the algebra of a finite von Neumann regular semigroup by finding a sequence of heredity ideals, discovering the bound they give on global dimension years before the notion of a heredity ideal was invented.



The character table of a finite semigroup was investigated in the sixties and seventies and shown to be invertible (although it is not orthogonal like in the group case). A method for writing a class function as a linear combination of irreducible characters was given that amalgamated the group situation with Möbius inversion in posets.



Progress on finite semigroup representation theory then more or less stalled for a number of years. I believe this was for two main reasons.



  1. There was a lack of ready-made applications.

  2. Semigroup algebras are almost never semisimple, and the modern representation theory of quivers, etc., was only invented in the seventies. By then finite semigroup theorists were interested in other problems and were mostly unaware of developments in the representation theory of finite dimensional algebras.

In the eighties and early nineties, there was some renewed investigation of the representation theory of finite semigroups, mostly due to Putcha, Okninski, Renner and their collaborators. In particular, connections with quivers and quasihereditary algebras and other aspects of modern representation theory were made.



The past decade has seen a renaissance in the subject of semigroup representation theory, spurred on by probabilists and algebraic combinatorialists. Bidigare, Hanlon, Rockmore, Diaconis and Brown, to name a few, have shown that a number of random walks are much more easily analyzed using semigroup representation theory than using group representation theory. For instance, it is nearly trivial to compute eigenvalues for the riffle shuffle and the top-to-random shuffle using semigroup theory; it is more difficult using the representation theory of the symmetric group. Moreover, the diagonalizability of these walks is not explained by group theory, but it is explained by semigroup theory.



Also Bidigare's observation that Solomon's descent algebra associated to a finite Coxeter group is a subalgebra of a hyperplane face semigroup algebra has been important to people in algebraic combinatorics. There are also applications of semigroup theory to automata theory in particular in connection to the notorious Cerny conjecture on synchronizing automata.



In the last year, a half-dozen papers on semigroup representation theory have appeared on the ArXiv, many by nonsemigroup theorists. I expect that the trend will continue. We now know how to compute the quiver for a large class of finite semigroups, describe in semigroup theoretic terms projective indecomposable modules and for some classes we have techniques for computing global dimension. Semigroups with basic algebras over a given field have been described.



What is needed is a book covering all this for the general public!



Edit. Our new paper gives a close connection between monoid representation theory, poset topology and Leray numbers of simplicial complexes with classifying spaces of small categories thrown in. If browsing this paper doesn't convince you that monoid representation theory has something to it, then I don't know what will.



Edit. (2/18/14) Since this question just got bumped, let me add the new paper http://arxiv.org/abs/1401.4250 which gives a general introduction to Markov chains and semigroup representation theory and new examples.



Edit(4/1/15). Since this question just got bumped again, let me add that I am in the process of writing a book on the representation theory of monoids. In a sense I started writing this book because of this question (which was the first MO question I ever answered). Hopefully the book will be an answer to this question. I will make a link available shortly from my blog page.

ca.analysis and odes - "exchange" of real analyticity and integration

Sorry for the impreciseness of the title. It is merely meant for an analogy.



Exchanges of limiting operations and integration are basically derived from Lebesgue's dominated convergence theorem. For instance, let $f: \mathbb{R}^2 \to \mathbb{R}$ be Borel measurable. Let $f(\cdot, u) \in C^k(I)$ for some open set $I$ and for all $u$ in a Borel set $D$. Let



$g = \int_D f(x,u) \,{\rm d} u$.



Then a sufficient condition for $g \in C^k(I)$ is that $f^{(k)}(x, \cdot)$ is dominated by an integrable function on $D$, i.e., $\sup_{x \in I} |f^{(k)}(x, \cdot)| \in L^1(D)$, and $g^{(k)}(x) = \int_D f^{(k)}(x,u) \,{\rm d} u$ holds in $I$.
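As a concrete illustration of this condition (my own toy example, with the domination hypothesis satisfied): take $f(x,u) = e^{-u}\sin(xu)$ on $D=[0,\infty)$, so that $|\partial_x f| \le u e^{-u} \in L^1(D)$ and $g(x) = x/(1+x^2)$ in closed form.

```python
# Numerically check that g'(x) equals the integral of df/dx for
# f(x, u) = exp(-u) * sin(x*u) on D = [0, oo).
import math

def integrate(h, a=0.0, b=40.0, n=200_000):
    """Composite midpoint rule; b = 40 makes the exp(-u) tail negligible."""
    du = (b - a) / n
    return sum(h(a + (i + 0.5) * du) for i in range(n)) * du

def g(x):
    return integrate(lambda u: math.exp(-u) * math.sin(x * u))

x, eps = 1.3, 1e-5
numeric_derivative = (g(x + eps) - g(x - eps)) / (2 * eps)
integral_of_derivative = integrate(lambda u: u * math.exp(-u) * math.cos(x * u))

assert abs(numeric_derivative - integral_of_derivative) < 1e-5
assert abs(g(x) - x / (1 + x * x)) < 1e-5   # closed form of g for this f
```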



My question is about when real-analyticity is preserved under integration: if $f$ is real-analytic in $I$ for each $u$, i.e., $f(\cdot, u) \in C^{\omega}(I)$ for all $u \in D$, what would be a sufficient condition for $g \in C^{\omega}(I)$?



Following the above rationale, we obtain the following condition: for each $x_0 \in I$,
1) the radius of convergence of $f(x, u) = \sum_k a_k(u) (x-x_0)^k$ is bounded away from zero for all $u \in D$;
2) integrability: $\int_D \sum_k |a_k(u) (x-x_0)^k| \,{\rm d} u < \infty$.
Then the analyticity of $g$ follows from Fubini's theorem.



Questions:
1) Is there another sufficient condition, different from the above 'superficial' generalization, perhaps exploiting another characterization of real analyticity? The absolute integrability might not be easy to check.
2) Is there a more local version, which might give the radius of convergence of $g$?



Thanks!

genomics - How is the sequenced genome of a person useful to him in practice, now?

A human genome sequence can uncover large deletions and insertions in the genome and would give the genotypes of both common and private (rare to very rare) small polymorphisms (e.g. SNPs) and SSRs (simple sequence repeats). From this information, one can learn about some curiosity traits (say, sensing asparagus metabolites in the urine, slow or fast twitch muscles, curly vs not curly hair) and, more importantly, about disease risk. Ancestry and family relationships can also be learned as in, for example, a half-sibling relationship.



As many diseases are polygenic, with numerous loci each making small contributions to the variance, it becomes very difficult to fully describe the level of increased or decreased risk. In other words, we don't yet know all the players, and so assessing their contribution to disease risk can't be done with complete confidence. Compounding this problem further are both epistasis and gene-environment interactions. Epistasis is a gene-gene (gene-gene-gene, etc.) interaction, such that genes A and B separately make no contribution to the phenotype (disease risk), but do so in combination - say, in ab/ab individuals. A gene-environment interaction can be described as when an allele associates with increased risk only when an environmental factor (e.g., exercise, fat in the diet, sun exposure, oxygen tension (altitude), etc.) passes a certain threshold.



Still, the most powerful predictor of disease risk, onset and progression is family history. A genome sequence approaches that history (half your genome is from your mother and half from your father, of course), but remains fairly uninterpretable in terms of complex traits. You may carry a couple variants posing increased risk for heart disease, but what about the other hundred or so loci that also make small contributions to this affliction? We could say more if we knew exactly which those other hundred were, how much each contributes, and whether they interact with each other or with the environment; as it stands, only an incomplete picture emerges.
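The "many loci, each with a small contribution" picture can be made concrete as a toy polygenic risk score: a weighted sum of risk-allele counts. Everything below (SNP identifiers, effect sizes, genotype) is invented purely for illustration; real scores use thousands of loci with effect sizes estimated from GWAS.

```python
# Toy polygenic risk score: sum of (effect size x risk-allele count) over loci.
# All SNP identifiers and effect sizes here are hypothetical.

# Per-allele effect sizes (e.g. log-odds) from an imaginary association study.
effect_sizes = {"rs0000001": 0.12, "rs0000002": -0.05, "rs0000003": 0.30}

# One person's risk-allele counts (0, 1, or 2 copies at each locus).
genotype = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}

def polygenic_risk_score(betas, counts):
    """Weighted sum of allele counts; ignores epistasis and environment,
    which is exactly the limitation discussed above."""
    return sum(betas[snp] * counts.get(snp, 0) for snp in betas)

score = polygenic_risk_score(effect_sizes, genotype)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

Note that such a linear score is blind to both epistasis and gene-environment interactions, which is one reason it underperforms family history.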



With rare diseases and altered genomes (say in cancer compared to normal), great strides are being made to either identify the small number of defective genes (rare diseases) or the commandeered pathways exploited for uncontrolled growth (cancer). In this regard, much can be learned, with applications reaching the clinic more quickly, though still on a timescale of years.

ag.algebraic geometry - Appropriate journal to publish a determinantal inequality

I have recently made the following observation:



Let $v_i := (v_{i1}, v_{i2})$, $1 leq i leq k$, be non-zero positive elements of $mathbb{Q}^2$ such that no two of them are proportional. Let $M$ be the $k times k$ matrix whose entries are $m_{ij} := max${$v_{il}/v_{jl}: 1 leq l leq 2$}. Then $det M neq 0$.
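A quick sanity check of the statement in exact rational arithmetic (a sketch; the three vectors below are one arbitrary instance, not a proof):

```python
from fractions import Fraction

# Vectors v_i = (v_i1, v_i2): positive, pairwise non-proportional.
vs = [(Fraction(1), Fraction(2)),
      (Fraction(2), Fraction(1)),
      (Fraction(1), Fraction(1))]

# Entries m_ij = max(v_i1/v_j1, v_i2/v_j2); diagonal entries are 1.
k = len(vs)
M = [[max(vs[i][0] / vs[j][0], vs[i][1] / vs[j][1]) for j in range(k)]
     for i in range(k)]

def det(A):
    """Determinant by cofactor expansion along the first row (fine for small k)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det(M))  # 1 for this instance, in particular nonzero
```

Using `Fraction` avoids any floating-point doubt about whether a small determinant is actually zero.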



The above statement is equivalent to the basic case of a result I recently discovered about pull back of divisors under a birational mapping of algebraic surfaces. I was going to include it as a part of another paper, then noticed the equivalent statement stated above and found it a bit amusing. My question is: is it worthwhile to try to publish it in a journal (as an example of an application of algebraic geometry to derive an arithmetic inequality), and if it is, then which journal(s)?



It is of course also very much possible that it is already known, or has a trivial proof (or counterexample!) - anything along those directions would also be appreciated.



Edit: Let me elaborate a bit about the geometric statement. In the 'other' paper, I define, for two algebraic varieties $X subseteq Y$, something called "linking number at infinity" (with respect to $X$) of two divisors with support in $Y setminus X$. I can show that when $Y$ is a surface (under some additional conditions), the matrix of linking numbers at infinity of the divisors with support in $Y setminus X$ is non-singular. In a special (toric) case, the matrix of linking numbers takes the form of $M$ defined above. So the question is if the result about non-singularity of the matrix and its corresponding implication(s) are publishable anywhere.

Mathematics for machine learning

For basic neural networks (i.e. if you just need to build and train one), I think basic calculus is sufficient, maybe things like gradient descent and more advanced optimization algorithms. For more advanced topics in NNs (convergence analysis, links between NNs and SVMs, etc.), somewhat more advanced calculus may be needed.



For machine learning, mostly you need to know probability/statistics, things like Bayes theorem, etc.



Since you are a biologist, I don't know whether you studied linear algebra. Some basic ideas from there are definitely extremely useful. Specifically, linear transformations, diagonalization, SVD (that's related to PCA, which is a pretty basic method for dimensionality reduction).
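As a tiny illustration of why the linear algebra matters, here is the core idea of PCA sketched in plain Python: form a covariance matrix and extract its dominant eigenvector by power iteration. This is a didactic sketch on made-up 2-D data, not a substitute for a proper SVD routine.

```python
import math

# Made-up 2-D data lying exactly on the line spanned by (3, 1).
points = [(3, 1), (6, 2), (-3, -1), (-6, -2), (0, 0)]

# Center the data and form the 2x2 covariance matrix.
n = len(points)
mx = sum(p[0] for p in points) / n
my = sum(p[1] for p in points) / n
centered = [(x - mx, y - my) for x, y in points]
cxx = sum(x * x for x, y in centered) / n
cxy = sum(x * y for x, y in centered) / n
cyy = sum(y * y for x, y in centered) / n

# Power iteration: repeatedly apply the covariance matrix and renormalize;
# the iterate converges to the dominant eigenvector (the first principal component).
v = (1.0, 0.0)
for _ in range(50):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = math.hypot(w[0], w[1])
    v = (w[0] / norm, w[1] / norm)

print(v)  # approximately (3, 1) / sqrt(10), the direction of maximal variance
```

In practice one computes all components at once via the SVD of the centered data matrix, which is also numerically better behaved.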



The book by Duda/Hart/Stork has several appendices which describe the basic math needed to understand the rest of the book.

mp.mathematical physics - Path integrals outside QFT

The path integral has many applications:



Mathematical Finance:



In mathematical finance one is faced with the problem of finding the price for an "option."



An option is a contract between a buyer and a seller that gives the buyer the right but not the obligation to buy or sell a specified asset, the underlying, on or before a specified future date, the option's expiration date, at a given price, the strike price. For example, an option may give the buyer the right but not the obligation to buy a stock at some future date at a price set when the contract is settled.



One method of finding the price of such an option involves path integrals. The price of the underlying asset varies with time between when the contract is settled and the expiration date. The set of all possible paths of the underlying in this time interval is the space over which the path integral is evaluated. The integral over all such paths is taken to determine the average payoff the seller will make to the buyer for the settled strike price. This average payoff is then discounted, i.e. adjusted for interest, to arrive at the current value of the option.
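The recipe "average the payoff over paths, then discount" can be sketched as a Monte Carlo estimate for a European call under geometric Brownian motion. The parameter values are illustrative only; for a European option only each path's endpoint matters, so every "path" collapses to a single lognormal draw.

```python
import math
import random

def mc_european_call(s0, strike, rate, vol, maturity, n_paths, seed=0):
    """Average the discounted payoff max(S_T - K, 0) over simulated endpoints.

    Path-dependent options (Asian, barrier, ...) would need the full
    trajectory of each path, not just the endpoint as here.
    """
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp(drift + vol * math.sqrt(maturity) * z)
        total += max(s_t - strike, 0.0)
    # Discount the average payoff back to today.
    return math.exp(-rate * maturity) * total / n_paths

price = mc_european_call(s0=100, strike=100, rate=0.05, vol=0.2,
                         maturity=1.0, n_paths=50000)
print(price)  # close to the Black-Scholes value of about 10.45 for these inputs
```

The path-integral viewpoint becomes genuinely necessary once the payoff depends on the whole trajectory rather than the endpoint.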



Statistical Mechanics:



In statistical mechanics the path integral is used in more-or-less the same manner as it is used in quantum field theory. The main difference being a factor of $i$.



One has a given physical system at a given temperature $T$ with an internal energy $U(phi)$ dependent upon the configuration $phi$ of the system. The probability that the system is in a given configuration $phi$ is proportional to



$e^{-U(phi)/k_B T}$,



where $k_B$ is a constant called the Boltzmann constant. The path integral is then used to determine the average value of any quantity $A(phi)$ of physical interest



$left< A right> := Z^{-1} int D phi A(phi) e^{-U(phi)/k_B T}$,



where the integral is taken over all configurations and $Z$, the partition function, is used to properly normalize the answer.
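In practice this average is estimated by Monte Carlo sampling rather than exact integration. Here is a minimal Metropolis sketch for a single degree of freedom with $U(phi) = phi^2$ and $k_B = 1$, where the Boltzmann weight is Gaussian and $left< phi^2 right> = T/2$ exactly:

```python
import math
import random

def metropolis_average(u, observable, temperature, n_steps, step=1.0, seed=0):
    """Estimate <A> = Z^{-1} * integral A(phi) exp(-U(phi)/T) dphi
    (k_B set to 1) by Metropolis sampling: a proposed move is accepted
    with probability min(1, exp(-(U_new - U_old)/T)), which leaves the
    Boltzmann distribution invariant."""
    rng = random.Random(seed)
    phi = 0.0
    total, count = 0.0, 0
    for i in range(n_steps):
        proposal = phi + rng.uniform(-step, step)
        du = u(proposal) - u(phi)
        if du <= 0 or rng.random() < math.exp(-du / temperature):
            phi = proposal
        if i > n_steps // 10:          # discard a burn-in prefix
            total += observable(phi)
            count += 1
    return total / count

est = metropolis_average(u=lambda p: p * p, observable=lambda p: p * p,
                         temperature=1.0, n_steps=200000)
print(est)  # should land near the exact value T/2 = 0.5
```

Replacing the single variable $phi$ by a field configuration (e.g. spins on a lattice) gives exactly the path-integral computations used in lattice statistical mechanics.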



Physically Correct Rendering:



Rendering is a process of generating an image from a model through execution of a computer program.



The model contains various lights and surfaces. The properties of a given surface are described by a material. A material describes how light interacts with the surface. The surface may be mirrored, matte, diffuse or any number of other things. To determine the color of a given pixel in the produced image one must trace all possible paths from the lights of the model to the surface point in question. The path integral is used to implement this process through various techniques such as path tracing, photon mapping, and Metropolis light transport.



Topological Quantum Field Theory:



In topological quantum field theory the path integral is used in the exact same manner as it is used in quantum field theory.



Basically, anywhere one uses Monte Carlo methods one is using the path integral.

Monday 19 February 2007

Why do we need admissible isomorphisms for differential Galois theory?

I don't think I have seen the terminology "admissible isomorphism" being used in differential Galois theory, except for Kaplansky's book. I guess in E. Kolchin's work everything is assumed to lie in a universal differential extension, and therefore he never makes the distinction.



Your observation about Picard-Vessiot extensions is right, and I don't think one needs the notion of admissibility to develop ordinary Picard-Vessiot theory, which is a theory based on the "equation" approach. In fact the efforts done at the time were focused on developing a Galois theory of differential fields that wasn't necessarily associated to differential equations (but it had to be a generalization of PV of course). However, many problems arise when one takes this "extension" approach, in fact finding the right notion of a normal extension is not easy. Classically a field extension $M$ over $K$ is normal if every isomorphism into some extension field of $M$ is an automorphism. However the equivalent statement for differential algebra implies that $M$ is algebraic over $K$ and that is too strong (in fact this is one of the main reasons why one has to allow admissible isomorphisms). Here are two early approaches to normality:




$M$ is weakly normal if $K$ is the fixed field of the set of all differential automorphisms of $M$ over $K$.




Apparently this definition wasn't very fruitful, and not much could be proven. The next step was the following definition:




$M$ is normal over $K$ if it is weakly normal over all intermediate differential fields.




This wasn't bad and Kolchin could prove that the map $Lto Gal(M/L)$ where $Ksubset Lsubset M$ bijects onto a certain subset of subgroups of $Gal (M/K)$. However, the characterization of these subsets was an open question (Kolchin referred to it as a blemish). The property he was missing was already there in the theory of equations, as the existence of a superposition formula (that every solution is some differential rational function of the fundamental solutions and some constants). The relevant section in Kaplansky's book is Section 21. Now an admissible isomorphism of $M$ over $K$ is a differential isomorphism, fixing $K$ elementwise, of $M$ onto a subfield of a given larger differential field $N$. Thus, an admissible isomorphism $sigma$ lets you consider the compositum $Mcdot sigma(M)$ which is crucial to translating a superposition principle to field extensions. Indeed, if one denotes $C(sigma)$ to be the field of constants of $Mcdot sigma(M)$, then Kolchin defined an admissible isomorphism $sigma$ to be strong if it is the identity on the field of constants of $M$ and satisfies
$$Mcdot C(sigma)=Mcdot sigma(M)=sigma (M)cdot C(sigma)$$



This was the right interpretation of what was happening in the PV case and so a strongly normal extension $M$ over $K$ was defined as an extension where $M$ is finitely generated over $K$ as a differential field, and every admissible isomorphism of $M$ over $K$ is strong. Now the theory became more complete. $Gal(M/K)$ may be identified with an algebraic group and there is a bijection between the intermediate fields and its closed subgroups. Now this incorporates finite normal extensions (when $Gal(M/K)$ is finite), Picard-Vessiot extensions (when $Gal(M/K)$ is linear) or Weierstrass extensions (when $Gal(M/K)$ is isomorphic to an elliptic curve).



For a better exposition of this, see if you can find "Algebraic Groups and Galois Theory in the Work of Ellis R. Kolchin" by Armand Borel.

special functions - Relation between full elliptic integrals of the first and third kind

I am working on a calculation involving the Ronkin function of a hyperplane in 3-space.
I get a horrible matrix with full elliptic integrals as entries. A priori I know that the matrix is symmetrical and that give me a relation between full elliptic integrals of the first and third kind.
I can not find transformations in the literature that explain the relation and I think I need one in order to simplify my matrix.



The relation



With the notation



$operatorname{K}(k) = int_0^{frac{pi}{2}}frac{dvarphi}{sqrt{1-k^2sin^2varphi}},$
$qquad$ $Pi(alpha^2,k)=int_0^{frac{pi}{2}}frac{dvarphi}{(1-alpha^2sin^2varphi)sqrt{1-k^2sin^2varphi}}$



$k^2 = frac{(1+a+b-c)(1+a-b+c)(1-a+b+c)(-1+a+b+c)}{16abc},quad a,b,c > 0$



the following is true:



$2frac{(1+a+b-c)(1-a-b+c)(a-b)}{(a-c)(b-c)}operatorname{K}(k)+$



$(1-a-b+c)(1+a-b-c)Pileft( frac{(1+a-b+c)(-1+a+b+c)}{4ac},kright) +$



$frac{(a+c)(1+b)(1-a-b+c)(-1-a+b+c)}{(a-c)(-1+b)}Pileft( frac{(1+a-b+c)(-1+a+b+c)(a-c)^2}{4ac(1-b)^2},kright)+$



$(1-a-b+c)(-1+a+b+c)Pileft( frac{(1-a+b+c)(-1+a+b+c)}{4ac},kright)+$



$frac{(1+a)(b+c)(-1-a+b+c)(-1+a+b-c)}{(1-a)(c-b)}Pileft( frac{(1-a+b+c)(-1+a+b+c)(b-c)^2}{4ac(1-a)^2},kright)$



$= 0$.



Is there some addition formula or transformation between elliptic integrals of the first and third kind that will explain this?

Sunday 18 February 2007

lab techniques - When running gels what is the difference between constant volts or constant amps?

I'll answer only for SDS-Page, which is the system I am most familiar with.



With a discontinuous buffer system, such as the well-known Laemmli system, resistance increases during electrophoresis, as (very mobile) chloride ions are replaced by glycinate (glycine ions).



From Ohms law:




Voltage (V) = Current (I) x Resistance (R)




and the definition of power (Watts):




Watts (W) = Current (I) x Voltage (V) = I² x R




  • At constant current electrophoresis will proceed at a uniform rate, however, as the resistance increases, so too will the voltage and the amount of heat generated.

    In some discontinuous buffer systems, such as the Tricine/SDS system due to Schägger & von Jagow, this may be enough to crack the gel plates!



  • At constant voltage, the current will decrease during electrophoresis; as a result, less heat will be generated than with constant-current electrophoresis, but the rate of migration of the sample will decrease with the decrease in current.
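The two regimes fall straight out of Ohm's law; a quick numerical sketch with invented resistance values rising during the run:

```python
# Resistance rises during electrophoresis as chloride is replaced by glycinate.
# The ohm values below are invented purely for illustration.
resistances = [100.0, 150.0, 200.0, 250.0]  # ohms, increasing over the run

# Constant current (say 30 mA): V = I*R rises, so power W = I^2 * R rises too.
current = 0.030  # amperes
const_I = [(current * r, current ** 2 * r) for r in resistances]  # (volts, watts)

# Constant voltage (say 150 V): I = V/R falls, so power W = V^2 / R falls too.
voltage = 150.0  # volts
const_V = [(voltage / r, voltage ** 2 / r) for r in resistances]  # (amps, watts)

print([round(w, 3) for _, w in const_I])  # heat output climbs late in the run
print([round(w, 1) for _, w in const_V])  # heat output drops; the run slows instead
```

This is why constant current can crack plates in high-resistance systems like the Tricine/SDS gels mentioned above, while constant voltage trades heat for longer run times.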

References



Laemmli, U.K. (1970) Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature 227, 680-685. [PubMed]



Schägger, H. & von Jagow, G (1987).
Tricine-sodium dodecyl sulfate-polyacrylamide gel electrophoresis for the separation of proteins in the range from 1 to 100 kDa. Anal Biochem 166, 368-379.[PubMed]

pr.probability - distribution of degree of minimum polynomial for eigenvalues of random matrix with elements in finite field

This is an attempt to extend the current full-fledged random matrix theory to fields of positive characteristic. So here is a possible setup for the problem: Let $A_{n,p}$ be an $n times n$ matrix with iid entries taking values uniformly in $F_p$. Then one should be able to find its eigenvalues together with multiplicities, which might lie in some finite extension of the field $F_p$. To ensure diagonalizability, one might even take $A_{n,p}$ to be symmetric or antisymmetric (I am not so sure if that guarantees diagonalizability in $F_p$ but I have no counterexamples either). Now the question is if we associate to each eigenvalue $lambda$ the degree of its minimal polynomial $d(lambda)$, then does the distribution of $d(lambda)$ as $n$ goes to infinity converge to some law upon normalization (say maybe Gaussian)? I am very curious whether others have studied this problem before. Maybe it's completely trivial.
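For intuition about eigenvalues escaping to extensions, one can brute-force which eigenvalues of a small matrix already lie in $F_p$ itself; any eigenvalue not found this way has minimal polynomial of degree at least 2. A sketch for $2 times 2$ matrices over $F_5$:

```python
def eigenvalues_in_fp(matrix, p):
    """Return the eigenvalues of a 2x2 matrix over F_p that lie in F_p itself,
    by testing det(A - lam*I) = 0 mod p for every lam in F_p.
    Eigenvalues missing here lie in a proper extension of F_p."""
    (a, b), (c, d) = matrix
    return {lam for lam in range(p)
            if ((a - lam) * (d - lam) - b * c) % p == 0}

# Characteristic polynomial x^2 - 1 splits over F_5: eigenvalues 1 and -1 = 4.
print(eigenvalues_in_fp([[0, 1], [1, 0]], 5))   # {1, 4}

# x^2 - 2 is irreducible over F_5 (2 is not a square mod 5),
# so both eigenvalues live in F_25 and none appear here.
print(eigenvalues_in_fp([[0, 1], [2, 0]], 5))   # set()
```

For larger $n$ one would instead factor the characteristic polynomial over $F_p$ and record the degrees of its irreducible factors, which is exactly the statistic $d(lambda)$ asked about.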

Saturday 17 February 2007

co.combinatorics - Number of permutations with a specified number of fixed points

The "semi-exponential" generating function for these is



$sum_{n=0}^infty sum_{k=0}^n {F(k,n) z^n u^k over n!} = {exp((u-1)z) over 1-z}$



which follows from the exponential formula.



These numbers are apparently called the rencontres numbers although I'm not sure how standard that name is.



Now, how do we get a formula for these numbers out of this? First note that



$$exp((u-1)z) = 1 + (u-1)z + {(u-1)^2 over 2!} z^2 + {(u-1)^3 over 3!} z^3 + cdots $$



and therefore the "coefficient" (actually a polynomial in $u$) of $z^n$ in $exp((u-1)z)/(1-z)$ is



$$ P_n(u) = 1 + (u-1) + {(u-1)^2 over 2!} + cdots + {(u-1)^n over n!} = sum_{j=0}^n {{(u-1)^j } over j!} $$



since division of a generating function by $1-z$ has the effect of taking partial sums of the coefficients.



The coefficient of $u^k$ in $P_n(u)$ (which I'll denote $[u^k] P_n(u)$, where $[u^k]$ denotes taking the $u^k$-coefficient) is then



$$ [u^k] P_n(u) = sum_{j=0}^n [u^k] {(u-1)^j over j!} $$



But we only need to do the sum for $j = k, ldots, n$; the lower terms are zero, since they are the $u^k$-coefficient of a polynomial of degree less than $k$. So



$$ [u^k] P_n(u) = sum_{j=k}^n [u^k] {(u-1)^j over j!} $$



and by the binomial theorem,



$$ [u^k] P_n(u) = sum_{j=k}^n {(-1)^{j-k} over k! (j-k)!} $$



Finally, $F(k,n) = n! [u^k] P_n(u)$, and so we have



$$ F(k,n) = n! sum_{j=k}^n {(-1)^{j-k} over k!(j-k)!} $$
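The closed form is easy to check against a brute-force count over all permutations; a small sanity check for $n = 5$:

```python
from itertools import permutations
from math import factorial

def rencontres(k, n):
    """F(k, n) = n! * sum_{j=k}^{n} (-1)^(j-k) / (k! (j-k)!), as derived above.
    Each summand n!/(k!(j-k)!) is an integer (k!(j-k)! divides j!, which
    divides n!), so exact integer arithmetic is safe."""
    return sum((-1) ** (j - k) * factorial(n) // (factorial(k) * factorial(j - k))
               for j in range(k, n + 1))

def brute_force(k, n):
    """Count permutations of {0,...,n-1} with exactly k fixed points."""
    return sum(1 for p in permutations(range(n))
               if sum(p[i] == i for i in range(n)) == k)

n = 5
print([rencontres(k, n) for k in range(n + 1)])   # [44, 45, 20, 10, 0, 1]
assert all(rencontres(k, n) == brute_force(k, n) for k in range(n + 1))
```

Note the familiar features: $F(0, n)$ is the derangement number, and $F(n-1, n) = 0$ since fixing $n-1$ points forces the last one.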

Learning Class Field Theory: Local or Global First?

I learned class field theory from the Harvard two-semester algebraic number theory sequence that Davidac897 alluded to, so I can really only speak for the "local first" approach (I don't even know what a good book to follow for doing the other approach would be, although I found this interesting book review which seems relevant to the topic at hand.).



This is a tough question to answer, partly because local-first/global-first is not the only pedagogical decision that needs to be made when teaching/learning class field theory, but more importantly because the answer depends upon what you want to get out of the experience of learning class field theory (of course, it also depends upon what you already know). Class field theory is a large subject and it is quite easy to lose the forest for the trees (not that this is necessarily a bad thing; the trees are quite interesting in their own right). Here are a number of different things one might want to get out of a course in class field theory, in no particular order (note that this list is probably a bit biased based on my own experience).



(a) a working knowledge of the important results of (global) class field theory and ability to apply them to relevant situations. This is more or less independent of the items below, since one doesn't need to understand the proofs of the results in order to apply them. I second Pete Clark's recommendation of Cox's book /Primes of the form x^2 + ny^2/.



Now on to stuff involved in the proofs of class field theory:



(b) understanding of the structure and basic properties of local fields and adelic/idelic stuff (not class field theory itself, but material that might be taught in a course covering class field theory if it isn't assumed as a prerequisite).



(c) knowledge of the machinery and techniques of group cohomology/Galois cohomology, or of the algebraic techniques used in non-cohomology proofs of class field theory. Most of the "modern" local-first presentations of local class field theory use the language of Galois cohomology. (It's not necessary, though; one can do all the algebra involved without cohomology. The cohomology is helpful in organizing the information involved, but may seem like a bit much of a sledgehammer to people with less background in homological algebra.)



(d) understanding of local class field theory and the proofs of the results involved (usually via Galois cohomology of local fields) as done, e.g. in Serre's /Local Fields/.



(e) understanding of class formations, that is, the underlying algebraic/axiomatic structure that is common to local and global class field theory. (Read the Wikipedia page on "class formations" for a good overview.) In both cases the main results of class field theory follow more or less from the axioms of class formations; the main thing that makes the results of global class field theory harder to prove than the local version is that in the global case it is substantially harder to prove that the class formation axioms are in fact satisfied.



(f) understanding the proofs of the "hard parts" of global class field theory. Depending upon one's approach, these proofs may be analytic or algebraic (historically, the analytic proofs came first, which presumably means they were easier to find). If you go the analytic route, you also get:



(g) understanding of L-functions and their connection to class field theory (Chebotarev density and its proof may come in here). This is the point I know the least about, so I won't say anything more.



There are a couple more topics I can think of that, though not necessary to a course covering class field theory, might come up (and did in the courses I took):



(h) connections with the Brauer group (typically done via Galois cohomology).



(i) examples of explicit class field theory: in the local case this would be via Lubin-Tate formal groups, and in the global case with an imaginary quadratic base field this would be via the theory of elliptic curves with complex multiplication (j-invariants and elliptic functions; Cox's book mentioned above is a good reference for this).



Obviously, this is a lot, and no one is going to master all these in a first course; although in theory my two-semester sequence covered all this, I feel that the main things I got out of it were (c), (d), (e), (h), and (i). (I already knew (b), I acquired (a) more from doing research related to class field theory before and after taking the course, and (f) and (g) I never really learned that well). A more historically-oriented course of the type you mention would probably cover (a), (f), and (g) better, while bypassing (b-e).



Which of these one prefers depends a lot on what sort of mathematics one is interested in. If one's main goal is to be able to use class field theory as in (a), one can just read Cox's book or a similar treatment and skip the local class field theory. Algebraically inclined people will find the cohomology in items (c) and (d) worth learning for its own sake, and they will find it simpler to deal with the local case first. Likewise, people who prefer analytic number theory or the study of L-functions in general will probably prefer the insights they get from going via (g).



I'm not sure I'm reaching a conclusion here: I guess what I mean to say is -- I took the "modern" local-first, Galois cohomology route (where by "modern" we actually mean "developed by Artin and Tate in the 50's") and, being definitely the algebraic type, I enjoyed what I learned, but still felt like I didn't have a good grip on the big picture. (Note: I learned the material out of Cassels and Frohlich mostly, but if I had to choose a book for someone interested in taking the local-first route I'd probably suggest Neukirch's /Algebraic Number Theory/ instead.) Other approaches may give a better view of the big picture, but it can be hard to keep an eye on the big picture when going through the gory details of proving everything.



(PS, directed at the poster, whom I know personally: David, if you're interested in advice geared towards your specific situation, you should of course feel welcome to contact me directly about it.)

molecular biology - Reverse transcription PCR optimization

I see big fuzzy bands around 100 bp as well. They're most likely RNA contamination. To get rid of them, digest your RT-PCR products with RNase H. But if you just need to visualize your band of interest, and the fuzzy bands aren't getting in the way, it shouldn't be a problem.



I usually input anywhere from 1-2 ug of RNA into my RT-PCR reaction using the Invitrogen Superscript III kit for a total volume of 20 ul. After the reaction, I use 1/10 the volume of that (2 ul) for downstream (PCR) applications. This usually gives me nice results.



To optimize RT-PCR for detection of a specific target, consider using gene specific primers (make sure to use only anti-sense primers) in your RT-PCR instead of oligo-dT or random hexamers. This enriches your target when you use them for downstream applications. When you use GSPs though, make sure to run a parallel reaction with control (i.e. beta actin) primers to prove that your RT-PCR reaction is working.

Friday 16 February 2007

at.algebraic topology - Is it always possible to compute the Betti numbers of a nice space with a well-chosen Lefschetz zeta function?

Having resolved my ignorance concerning surface groups I can now answer question 1 negatively (or at least some formulation thereof). It is impossible if $Y$ is an oriented surface of genus at least $2$.



Suppose that $f: Y to Y$ is a self-map of the surface such that the eigenvalues of $f^*$ acting on each $H^i(Y)$ are all nonzero (otherwise we can't "detect" the Betti numbers), and such that $H^i(Y)$ and $H^j(Y)$ do not have eigenvalues of common magnitude for $i neq j$. Then in particular $f^*$ acts on $H^2(Y)$ nontrivially, say by multiplication by some integer $d$. This integer cannot be $pm 1$ since then $H^0(Y)$ and $H^2(Y)$ would contain eigenvectors with eigenvalues of equal magnitude.



Consider the subgroup $H = f_*(pi_1(Y))$ inside $G = pi_1(Y)$. If this had infinite index, then $f$ would lift to a map to some infinite covering of $Y$, so it would induce a trivial map of $H^2$. So $H$ has finite index in $G$. Let $X to Y$ be the corresponding covering space. Then $pi_1(X)$ is a quotient of $pi_1(Y)$, hence its abelianization has rank $leq 2g$ where $g$ is the genus of $Y$. This implies that $X$ is a closed surface of genus at most $g$. But its Euler characteristic is precisely $[G:H]$ times the Euler characteristic of $Y$, so $X = Y$. Thus $f$ induces a surjection on $pi_1(Y)$. By the post cited above, $f$ actually induces an isomorphism on $pi_1(Y)$, so it is a homotopy equivalence. In particular, $d = pm 1$, contrary to assumption.



After writing this it occurs to me that you might object to me ruling out the case $d = -1$... At any rate, this shows that the eigenvalues can't ever look like they do in the case of the Riemann hypothesis, with magnitude $q^{i/2}$ on $H^i$ for some $q>1$.

Thursday 15 February 2007

ag.algebraic geometry - Nonsingular/Normal Schemes

For curves over a field $k$, normal implies regular. (The point is that a normal Noetherian local ring
of dimension one is automatically regular, i.e. a DVR.) If $k$ is not
perfect, then it might not be smooth over $k$.



The reason is that in this case it is possible to have a regular local $k$-algebra of dimension one whose base-change to $overline{k}$ is no longer regular. (On the other hand,
smoothness over $k$ is preserved by base-change, since it is a determinental condition
on Jacobians.)



Here is a (somewhat cheap) example: let $l$ be a non-separable extension of $k$ (which
exists since $k$ is not perfect), and let $X = text{Spec} l[t],$ thought of as a $k$-scheme.
This will be regular, but not smooth over $k$.



In dimension 2, even over an algebraically closed field, normal does not imply regular
(and so in particular, does not imply smooth). Normal is equivalent to having the singular locus be of codimension 2 or higher (so for a surface, just a bunch of points) (this is what Serre calls $R_1$) together with the condition that if a rational function on some open subset has no poles in codimension one, it is in fact regular on that open set (this is Serre's condition $S_2$).



For a surface in ${mathbb P}^3$, which is necessarily cut out by a single equation, the
condition $S_2$ is automatic (this is true of any local complete intersection in a smooth variety), so normal is equivalent to the singular locus being 0-dimensional.



For surfaces in higher dimensional projective space, $R_1$ and $S_2$ are independent
conditions; either can be satisfied without the other. And certainly both together (i.e. normality) are still weaker than smoothness.



From Serre's criterion (normal is equivalent to $R_1$ and $S_2$) you can see that normality
just involves conditions in codimension one or two. Thus for curves it says a lot,
for surfaces it says something, but it diverges further from smoothness the higher the dimension of the variety is.



Edit: As Hailong pointed out in a comment (now removed), I shouldn't say that $S_2$ is a condition only in dimension 2;
one must check it at all points. Nevertheless, at some sufficiently vague level, the spirit
of the preceding remark is true: $R_1$ and $S_2$ capture less and less information about the local structure of the variety, the higher the dimension of the variety.

algebraic groups - Whenever I read "centraliser of maximal split torus", I think of...

It would help to place your question in the context of the foundational 1965 paper Groupes reductifs by Borel and Tits, freely available online from
NUMDAM: http://archive.numdam.org/



For example, their Section 4 studies centralizers of maximal $k$-split tori
in terms of roots, parabolic subgroups, Levi subgroups. This set-up was used
by Tits to codify many details of the classification of semisimple groups over
fields of special interest: finite, local, algebraic number fields, etc.
Relative to a field of definition, certain Levi subgroups of parabolic subgroups
are natural examples of the centralizers you want. Your proposed example needs
to be placed more carefully within this Borel-Tits framework, I think.



The story about structure and classification of reductive groups over arbitrary
fields is a long one, but the Tits strategy is to start with the known split
groups and then adapt the Dynkin diagram to a field of definition. See his
paper in the proceedings of the 1965 AMS Summer Institute at Boulder, available
freely online through AMS e-math in the first part of the volume:
MR0224710 (37 #309)
Tits, J.
Classification of algebraic semisimple groups. 1966 Algebraic Groups and Discontinuous Subgroups (Proc. Sympos. Pure Math., Boulder, Colo., 1965) pp. 33--62 Amer. Math. Soc., Providence, R.I., 1966.
See the Web page http://www.ams.org/online_bks/pspum9/



More details were worked
out by a student of Tits at Bonn: see MR0432776 (55 #5759)
Selbach, Martin
Klassifikationstheorie halbeinfacher algebraischer Gruppen.
Diplomarbeit, Univ. Bonn, Bonn, 1973. Bonner Mathematische Schriften, Nr. 83. Mathematisches Institut der Universität Bonn, Bonn, 1976. v+140 pp.



Your group is of inner type A in the classification, using the Dieudonne
determinant notation. So this really isn't so "typical", but occurs in the
Tits list. The "split" data in his diagrams is somewhat independent of
the ground field, but the remaining classification problem for anisotropic
groups depends strongly on the field.

Tuesday 13 February 2007

proteins - What's the opposite of a thermophile?

Thermophiles, heat-loving organisms, have been a popular topic of research for decades due in large part to the utility of their enzymes in various chemical reactions (Taq Pol single-enzymedly made PCR practical). One of the signatures of thermophiles is that their proteins resist heat denaturation up to much higher temperatures than their mesophilic (middle-loving) homologs.



Is there a corresponding group of extremophiles whose proteins resist cold denaturation down to much lower temperature than their mesophilic homologs? If so, could someone please point me towards a few good literature reviews on the topic? Or at least give me the proper name for this group of cold-loving extremophiles?

Monday 12 February 2007

soft question - Most 'obvious' open problems in complexity theory

The following two statements are really "obviously false", but are still open:



$EXP^{NP} subseteq$ depth-2-$TC^0$



$EXP^{NP} subseteq$ depth-2-$AC^0[6]$



Just as a reminder:



  • $EXP^{NP}$ is exponential time plus an oracle for NP. It contains $NEXP$ (nondeterministic exponential time), $EXP$, and $NP$.


  • By "depth-2-$TC^0$" I mean the class of polynomial-size, depth-two circuit families where each gate is an arbitrary threshold function -- i.e., if it has $m$ Boolean inputs $x_1, dots, x_m$, it is defined by reals $a_1, dots, a_m, theta$ and has output 1 iff $sum a_i x_i geq theta$.


  • By depth-2-$AC^0[6]$ I mean the class of polynomial-size, depth-two circuit families where each gate is a "standard" $MOD_6$ gate: if it has $m$ Boolean inputs $x_1, dots, x_m$, it has output 1 iff $sum x_i neq 0$ mod 6.
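To make the gate definitions concrete, here is a minimal sketch of both gate types evaluated on Boolean inputs (the weights and input tuples are arbitrary illustrations):

```python
def threshold_gate(weights, theta, xs):
    """Threshold gate: output 1 iff sum(a_i * x_i) >= theta, for real weights a_i."""
    return int(sum(a * x for a, x in zip(weights, xs)) >= theta)

def mod6_gate(accepting, xs):
    """A-MOD_6 gate: output 1 iff sum(x_i) mod 6 lies in the accepting set A.
    The 'standard' MOD_6 gate is the special case A = {1, 2, 3, 4, 5}."""
    return int(sum(xs) % 6 in accepting)

m = 4
for xs in [(1, 1, 1, 1), (1, 0, 1, 1), (0, 0, 0, 0)]:
    # AND of m bits is a single threshold gate: all weights 1, theta = m.
    assert threshold_gate([1] * m, m, xs) == int(all(xs))
    # On fewer than 6 inputs, the standard MOD_6 gate computes OR.
    assert mod6_gate({1, 2, 3, 4, 5}, xs) == int(any(xs))
print("ok")
```

A single threshold gate thus computes AND trivially; the open problems above ask whether two levels of such gates can simulate all of $EXP^{NP}$, which is what is "obviously" false but unproven.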


Expanding on the second open problem: Given $A subseteq mathbb{Z}_6$, define an $A$-$MOD_6$ gate to be one which outputs 1 iff $sum x_i in A$ mod 6.



The most embarrassing open problem in circuit complexity may be the following: Show that for all possible subsets $A$, the AND function requires superpolynomial-size depth-2 circuits of $A$-$MOD_6$ gates.



[PS: Thanks to Arkadev Chattopadhyay for explaining some of these $MOD_6$ problems to me.]

geometric analysis - How does curvature change under perturbations of a Riemannian metric?

Elaborating on Deane Yang's answer and Willie Wong's comment: Since
$M^{2}subsetmathbb{R}^{2}$ is a $C^{infty}$ submanifold with boundary, the
Euclidean coordinates are global. Generally, if $M^{n}$ is a compact manifold
with boundary, we can cover it by a finite number of charts ${x^{i}}$, where
for any $C^{infty}$ metric $g$ the functions $g^{ij}$ and $partial^{alpha
}g_{ij}$ are bounded (depending on $g$ and $|alpha|$) and where $alpha$ is a
multi-index with $|alpha|geq0$.



The scalar curvature $R_{g}$ (twice the Gauss curvature $K$ if $n=2$) is
$$
R_{g}=g^{jk}(partial_{ell}Gamma_{jk}^{ell}-partial_{j}Gamma_{ell
k}^{ell}+Gamma_{jk}^{p}Gamma_{ell p}^{ell}-Gamma_{ell k}^{p}Gamma
_{jp}^{ell})=(g^{-1})^{2}astpartial^{2}g+(g^{-1})^{3}ast(partial g)^{2}
$$
since the Christoffel symbols have the form $Gamma=g^{-1}astpartial g$,
where $partial^{k}g$ denotes some $k$-th partial derivative of $g_{ij}$ and
where $ast$ denotes a linear combination of products while summing over
repeated indices. From the formula for $R$ we have for metrics $g,g^{prime}$,
begin{align*}
& |R_{g}(x)-R_{g^{prime}}(x)|\
& leq C(|g^{-1}|^{2}+|g^{prime-1}|^{2})|partial^{2}g-partial^{2}g^{prime
}|+C(|g^{-1}|^{4}+|g^{prime-1}|^{4})(|partial g|^{2}+|partial g^{prime
}|^{2})|g-g^{prime}|\
& +C(|g^{-1}|^{3}+|g^{prime-1}|^{3}){(|partial^{2}g|+|partial^{2}
g^{prime}|)|g-g^{prime}|+(|partial g|+|partial g^{prime}|)leftvert
partial g-partial g^{prime}rightvert }
end{align*}
since $|g^{-1}-g^{prime-1}|leq C(|g^{-1}|^{2}+|g^{prime-1}|^{2}
)|g-g^{prime}|$.



Let $hat{Omega}=C^{2}(M,operatorname{Sym})$. Given $hinhat{Omega}$,
define $||h||=sup_{xin M}max_{i,j,k,ell}{|h_{ij}(x)|,|h_{ij,k}
(x)|,|h_{ij,kell}(x)|}$. Then $|R_{g}(x)-R_{g^{prime}}(x)|leq
C||g-g^{prime}||$, where $C$ depends on bounds on the inverses and the first
and second derivatives of $g$ and $g^{prime}$.
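The estimate above can be sanity-checked numerically. Here is a minimal sketch, assuming an orthogonal metric $ds^2 = E du^2 + G dv^2$ and evaluating the classical coordinate formula for the Gauss curvature with finite differences (the function names and test metrics are my own choices):

```python
import math

def gauss_curvature(E, G, u, v, h=1e-4):
    # Gauss curvature of the orthogonal metric ds^2 = E du^2 + G dv^2:
    # K = -(1/(2*sqrt(EG))) * [d/du(G_u/sqrt(EG)) + d/dv(E_v/sqrt(EG))],
    # evaluated here with central finite differences of step h.
    sq = lambda a, b: math.sqrt(E(a, b) * G(a, b))
    Gu = lambda a, b: (G(a + h, b) - G(a - h, b)) / (2 * h)
    Ev = lambda a, b: (E(a, b + h) - E(a, b - h)) / (2 * h)
    term_u = (Gu(u + h, v) / sq(u + h, v) - Gu(u - h, v) / sq(u - h, v)) / (2 * h)
    term_v = (Ev(u, v + h) / sq(u, v + h) - Ev(u, v - h) / sq(u, v - h)) / (2 * h)
    return -(term_u + term_v) / (2 * sq(u, v))

# Round unit sphere: E = 1, G = sin(u)^2 has constant K = 1.
K0 = gauss_curvature(lambda u, v: 1.0, lambda u, v: math.sin(u) ** 2, 1.0, 0.5)

# A small non-factorizable C^2 perturbation of size eps moves K by O(eps),
# in line with the Lipschitz-type estimate above.
eps = 1e-3
K1 = gauss_curvature(lambda u, v: 1.0 + eps * math.sin(u + v),
                     lambda u, v: math.sin(u) ** 2, 1.0, 0.5)
print(K0, abs(K1 - K0))
```

Note that a perturbation of the form $G mapsto c(v)G$ alone would leave $K$ unchanged (it is just a reparametrization of $v$), which is why the sketch perturbs $E$ with a function of both variables.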



Elaborating on Terence Tao's answer and Deane Yang's comment: One reason it is
convenient to compute in local coordinates ${x^{i}}$ is that $[partial
_{i},partial_{j}]=0$. So the expression for the Christoffel symbols has only
$3$ terms instead of the $6$ terms comprising the formula for $nabla$:
$Gamma_{ij}^{k}=frac{1}{2}g^{kell}(partial_{i}g_{jell}+partial
_{j}g_{iell}-partial_{ell}g_{ij})$, which is symmetric in $i$ and $j$. With
$frac{partial}{partial s}g_{ij}=v_{ij}$, the variation formula is easy to
compute: $frac{partial}{partial s}Gamma_{ij}^{k}=frac{1}{2}g^{kell
}(nabla_{i}v_{jell}+nabla_{j}v_{iell}-nabla_{ell}v_{ij})$, since the
computation of this tensor formula at any point $p$ may be done in coordinates
where $partial_{i}g_{jk}(p)=0$ (such as normal coordinates centered at $p$);
this enables us to convert $partial_{i}$ to $nabla_{i}$ and to ignore the
$frac{partial}{partial s}g^{kell}$ term since it is multiplied by terms of
the form $partial g$. Now the variation of the Riemann curvature tensor is
$frac{partial}{partial s}R_{ijk}^{ell}=nabla_{i}(dfrac{partial
}{partial s}Gamma_{jk}^{ell})-nabla_{j}(dfrac{partial}{partial s}
Gamma_{ik}^{ell})$ using the same trick of computing at the center $p$ of
normal coordinates and replacing $partial$ by $nabla$ (note that
$frac{partial}{partial s}(GammaastGamma)=0$ at $p$ by the product rule);
the resulting formula is true in any coordinates since it is tensorial.



Generally, it is convenient to compute in local coordinates because it can be
done more or less mechanically. For example, if $alpha$ is a $1$-form, then
$nabla_{i}nabla_{j}alpha_{k}-nabla_{j}nabla_{i}alpha_{k}=-R_{ijk}^{ell
}alpha_{ell}$. One can remember this as the contraction of
$-operatorname{Rm}$ and $alpha$, where the lower indices $i,j,k$ on
$operatorname{Rm}$ appear in the same order as the first term on the left.
Similarly, if $beta$ is a $2$-tensor, then $nabla_{i}nabla_{j}beta_{kell
}-nabla_{j}nabla_{i}beta_{kell}=-R_{ijk}^{m}beta_{mell}-R_{ijell}
^{m}beta_{km}$, where the lower indices of $operatorname{Rm}$ are $i,j$
and then either $k$ or $ell$, with upper dummy index $m$ on
$operatorname{Rm}$ also replacing either $k$ or $ell$ on $beta$.

Sunday 11 February 2007

physiology - Were dinosaurs 'hot-blooded' or 'cold-blooded'?

This is a question I have often heard, and there is no single certain answer to it. There are several scientific hypotheses about the metabolism of dinosaurs, but none of them has ever been proved or completely disproved.



Types of metabolism



First of all, the terms "cold-blooded" and "hot-blooded" are not scientific. In biology, organisms are classified according to how constant their body temperature is:



  • homeothermic (from Greek "hom(e)o" = equal, same) -- organisms that manage to maintain a constant body temperature.

  • poikilothermic (from Greek "poikilos" = changing, changeable) -- organisms whose body temperature fluctuates over time. The implication is that the body temperature follows fluctuations in the environmental temperature, even though this is not strictly correct (see below).

There is another, orthogonal classification, which takes into account the metabolic properties of organisms:



  • endothermic organisms actively produce heat in their bodies, whereas

  • ectothermic organisms mostly rely upon external heat sources.

What do we know about dinosaurs' metabolism?



Unfortunately there are no living dinosaurs, so we can only reason about their metabolism based on fossils (and possibly DNA from mosquitoes trapped in amber).



One of the most cited works here is "Dinosaur Fossils Predict Body Temperatures" (link to the free full text), where the authors write:




Perhaps the greatest mystery surrounding dinosaurs concerns whether
they were endotherms, ectotherms, or some unique intermediate form.
Here we present a model that yields estimates of dinosaur body
temperature based on ontogenetic growth trajectories obtained from
fossil bones. The model predicts that dinosaur body temperatures
increased with body mass from approximately 25 °C at 12 kg to
approximately 41 °C at 13,000 kg. The model also successfully predicts
observed increases in body temperature with body mass for extant
crocodiles. These results provide direct evidence that dinosaurs were
reptiles that exhibited inertial homeothermy.




And exactly here we need the terminology I presented above to understand the findings:



  1. Dinosaurs were endothermic -- they were able to actively produce heat in their bodies.

  2. The produced heat can only be carried away through the body surface. Because the body surface increases quadratically ($n^2$) with linear size, whereas the weight grows cubically ($n^3$), the bigger the dinosaurs were, the higher the temperature their bodies reached. That could have been one of the growth-limiting factors, and it explains why especially big species preferred to live in water.

  3. Even though the dinosaurs were homeothermic, we don't know anything about their thermoregulation. Modern reptiles do not have thermoregulation mechanisms that would keep the body temperature constant. Dinosaurs, however, owing to their increased heat production, might have had smaller fluctuations in body temperature than smaller reptiles. This is what is termed "inertial homeothermy" here.
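The surface-to-volume argument in point 2 is just the square-cube law; a minimal numerical sketch (the cube shape and the units are arbitrary assumptions of mine):

```python
# Square-cube law: for geometrically similar bodies, metabolic heat production
# scales roughly with volume (~L^3) while heat loss scales with surface area
# (~L^2), so the surface-to-volume ratio falls as 6/L for a cube of side L.
def surface_to_volume(L):
    surface = 6 * L ** 2  # cube of side L, arbitrary units
    volume = L ** 3
    return surface / volume

for L in (1, 10, 100):
    print(L, surface_to_volume(L))  # ratio shrinks: big bodies shed heat slowly
```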

senescence - What are the effects of combining rapamycin with dietary restriction?

To clarify; administration of rapamycin (a drug) to lab organisms (including mice [1]) extends lifespan. Similarly, restricting the intake of nutrients to the minimum without causing malnutrition also extends lifespan in lab animals (including primates [2]).



Rapamycin inhibits the mTOR pathway (mammalian Target Of Rapamycin) - specifically mTORC1 (Complex 1) - which influences protein synthesis, autophagy and inflammation (among others). Upstream factors of mTOR include nutrient availability and insulin signaling (see "Deconvoluting mTOR biology" for good review [3]).



It has been hypothesized that the lifespan-extending effects of caloric restriction (CR) are mediated by mTOR (one can see why: mTOR is affected by nutrient availability). In fact, the answer may depend on the method of CR:



Greer et al. [4] report that different methods of CR in C. elegans, for instance feeding the worms a diluted food source, or conversely feeding them only on alternate days, do not necessarily require the same genetic pathways. Not only this, but CR combined with a genetic mutant (eat-2) has additive lifespan-enhancing effects.



So whilst the evidence is not concrete, and I look forward to other studies in mammals similar to the one by Greer et al., it looks as though rapamycin and CR have similar but not identical effects on lifespan. Rapamycin specifically inhibits an individual pathway which is involved in many processes, and some of its effects are not necessarily desirable (e.g. rapamycin inhibits the immune system [5]). On the other hand, CR (in most of its forms) seems to be mediated by mTOR, and the difference is critical: mTOR is not necessarily inhibited by CR; it is just required for CR's effect.



Therefore combining rapamycin and CR is unlikely to have an additive effect, as rapamycin may override any influence CR has on mTOR signaling, but I have not seen a study in which this has been tried. Combining different methods of CR (or developing drugs to do just that) may well have additive lifespan-enhancing effects.



  1. Harrison DE, Strong R, Sharp ZD, et al. Rapamycin fed late in life extends lifespan in genetically heterogeneous mice. Nature. 2009;460(7253):392-5.

  2. Colman RJ, Anderson RM, Johnson SC, et al. Caloric restriction delays disease onset and mortality in rhesus monkeys. Science (New York, N.Y.). 2009;325(5937):201-4.

  3. Weber JD, Gutmann DH. Deconvoluting mTOR biology. Cell cycle (Georgetown, Tex.). 2012;11(2):236-48.

  4. Greer EL, Brunet A. Different dietary restriction regimens extend lifespan by both independent and overlapping genetic pathways in C. elegans. Aging cell. 2009;8(2):113-27.

  5. Thomson AW, Turnquist HR, Raimondi G. Immunoregulatory functions of mTOR inhibition. Nature reviews. Immunology. 2009;9(5):324-37.

Saturday 10 February 2007

gn.general topology - What is enough to conclude that something is a CW complex (part II)?

A while ago I asked a question about recognizing CW complexes and got an extremely nice and concrete answer. However, I am still interested in a more general treatment and therefore pose the following closely related question:



Assume that $X$ is an $(n-1)$-dimensional finite CW complex, and assume that $X'$ is given as a set by the disjoint union of $X$ and a single open cell $e$ of dimension $n$. I.e., $e$ is an open subspace of $X'$ homeomorphic to the open $n$-disc (and of course $X$ is homeomorphic to the complement). Also assume that $X'$ is compact Hausdorff.



Question: Are there some natural topological conditions to further put on $X'$ such that it follows that $X'$ is in fact a CW-complex with $X$ as a sub-CW complex and $e$ a single new cell?



Remark: The fact that $X$ is such is equivalent to whether or not one can modify the homeomorphism of $e$ to the open unit disc such that the inverse extends to the closed unit disc.



Ideas on answers (but I have no proofs or counter examples in any of the cases):



1) $X'$ is locally contractible.



2) $X'$ has the homotopy type of a CW complex.



3) The Combination of 1) and 2).



4) $X'$ is homeomorphic to a CW complex



The last is weird, but it is not clear to me that even this is enough!



Any ideas, counter examples, or references?

rt.representation theory - Restriction map for Lie algebra/Lie group cohomology associated to a complex semisimple Lie algebra and a semisimple Lie-subalgebra

Let $mathfrak{g}$ be a finite-dimensional complex semisimple Lie algebra (or the corresponding Lie group). For definiteness, I'll take $mathfrak{g}$ to be of type $A_n$, that is, $mathfrak{g} = mathfrak{sl}_{n+1}(mathbb{C})$, but my question applies to semisimple Lie algebras of arbitrary Lie type. Consider the Dynkin diagram for $mathfrak{g}$. We can remove a node from the diagram to obtain a sub-diagram of type $A_{n-1}$. The sub-diagram corresponds to a copy of the Lie algebra $mathfrak{g}':=mathfrak{sl}_n(mathbb{C})$ sitting inside $mathfrak{g}$.



The structure of the Lie algebra cohomology rings $H^bullet(mathfrak{g},mathbb{C})$ and $H^bullet(mathfrak{g}',mathbb{C})$ are known, and are the same as the cohomology rings $H^bullet(G,mathbb{C})$ and $H^bullet(G',mathbb{C})$ for the corresponding complex Lie group. The computation of the cohomology rings is classical; for Lie algebras the computation is a result of Koszul.



In the specific case $mathfrak{g} = mathfrak{sl}_{n+1}(mathbb{C})$, we have $H^bullet(mathfrak{g},mathbb{C}) = Lambda(x_3,x_5,ldots,x_{2n+1})$, an exterior algebra on homogeneous generators of degrees $3,5,ldots,2n+1$. Then $H^bullet(mathfrak{g}',mathbb{C}) = Lambda(x_3,x_5,ldots,x_{2n-1})$. (For other Lie types, the cohomology ring is still an exterior algebra on homogeneous generators of certain odd degrees depending on the root system.)



The inclusion of Lie algebras $mathfrak{g}' rightarrow mathfrak{g}$ gives rise to a corresponding restriction map in cohomology: $H^bullet(mathfrak{g},mathbb{C}) rightarrow H^bullet(mathfrak{g}',mathbb{C})$.




Is the restriction map in cohomology the "obvious'' map from $Lambda(x_3,x_5,ldots,x_{2n+1})$ to $Lambda(x_3,x_5,ldots,x_{2n-1})$, that is, the map that takes $x_i$ to $x_i$ for $3 leq i leq 2n-1$ and takes $x_{2n+1}$ to zero? If so, can you provide a reference for this fact? For other Lie types, is the restriction map also the obvious map?




Edit: In response to the comments below, I think I should have phrased the question as follows:




Hopefully clarified version of question: Is there a choice of generators for the cohomology rings $H^bullet(mathfrak{g},mathbb{C})$ and $H^bullet(mathfrak{g}',mathbb{C})$ such that the restriction map in cohomology has the above described form?




I acknowledge that things could get messier for restriction maps in type $D_n$ and for types $E_6$, $E_7$, and $E_8$, because in those cases the degrees of the generators for the cohomology ring aren't as well-behaved. But maybe something can still be said in general about the restriction map (e.g., when is it surjective?).

ag.algebraic geometry - Permanence of regularity in "generalised" semistable models

Given a regular ring $A$ with an element $t$, consider the "generalised" semistable model
$S := A[X_1,...,X_n]/(P_1cdots P_n - t)$ over $A$, where $P_i := {X_i}^{e_i}$ and $e_i$ are positive integers.



Question: For which $A$, $t$, and $(e_i)$ is $S$ regular?



E.g., when $n=2$ and $A$ is a discrete valuation ring and $t$ is a uniformiser, then it is easy to prove that $S$ is regular when $e_1$ and $e_2$ are both equal to 1. When $nge 3$ and $(A,t)$ is a DVR with uniformiser $t$, is there a reference?

Friday 9 February 2007

If I graft two trees together while young, will they grow as one plant?

If two trees grow close enough together that their trunks touch anywhere along their length, then they will eventually fuse. This generally only happens at the trunk because, unlike small branches, the trunk really can't be pushed out of the way as easily. It doesn't necessarily need to be two trees of the same species, either.



There used to be a fused sycamore-maple on my school campus (it was damaged in Sandy and was cut down). They weren't completely fused together, but you could see a joint at the base and, about 20 feet below the canopy, the point where the trunks essentially became the same. Above that there was no distinction between the two separate trunks.



But to finally answer your question: when the trees fuse they pretty much become conjoined twins. I'm not sure whether they transfer genetic material to each other, but they do share resources.

Thursday 8 February 2007

lo.logic - a proof that L_min is not in coRE?

This is a problem with a surprisingly unique history. See the survey




Marcus Schaefer: A guided tour of minimal indices and shortest descriptions. Arch. Math. Log. 37(8): 521-548 (1998)




for more discussion. A proof that the problem is not coRE can be extracted from page 6 of that survey. For convenience I will give a self-contained argument here. Unfortunately it is not as simple as the "not-RE" proof!



We will show a Turing reduction from $L_{ALL}$ = { $ M~|~L(M) = Sigma^{star} $} to $L_{min}$, which suffices since $L_{ALL}$ is complete for the second level of the arithmetic hierarchy (implying it is not coRE). Intuitively, this means that given an oracle for $L_{min}$ we could decide the halting problem for machines which have oracles to the halting problem. A coRE language cannot have this property.



Let $hat{M}$ be the lexicographically first machine that accepts no inputs, under some encoding of machines.



First, we observe that given an oracle for $L_{min}$, we can effectively determine whether $M(M)$ halts or not, for a given $M$. (Recall this is enough to determine whether $M(x)$ halts or not, for given $M$, $x$.)



Define a machine $N_M$ which on input $t$, simulates $M(M)$ for up to $t$ steps and halts iff the simulation halts. To determine if $M(M)$ halts, make a list ${cal L}$ of all machines $M' neq hat{M}$ with $M' leq N_M$ such that $M' in L_{min}$. This list can be computed using an oracle for $L_{min}$. Observe that all such machines accept at least one input, since we excluded $hat{M}$ and they are all minimal. Via dovetailing, we can effectively find integers $t'$ such that each $M'(M')$ halts in $t'$ steps. Let $t''$ be the largest such $t'$. Then $M(M)$ halts iff $M(M)$ halts in at most $t''$ steps.
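The dovetailing step can be pictured with ordinary generators standing in for machine simulations (a sketch of the scheduling trick only; `dovetail` and the toy `runs` below are my own illustrative names, not part of the proof):

```python
import itertools
from collections import deque

def dovetail(computations):
    """Interleave several (possibly infinite) generators round-robin,
    yielding (index, value) whenever one produces a value. Finished
    generators are dropped; the rest keep getting turns, so no single
    non-halting computation can starve the others."""
    queue = deque(enumerate(computations))
    while queue:
        i, gen = queue.popleft()
        try:
            yield i, next(gen)
            queue.append((i, gen))  # schedule this computation again
        except StopIteration:
            pass  # this computation halted; stop scheduling it

# Toy run: one never-ending "machine" among finite ones does not block progress.
runs = [iter([7]), itertools.count(), iter([1, 2, 3])]
first_outputs = {}
for i, val in itertools.islice(dovetail(runs), 9):
    first_outputs.setdefault(i, val)
print(first_outputs)  # {0: 7, 1: 0, 2: 1}
```

In the proof, the analogous interleaving runs each $M'(M')$ for more and more steps until a halting time $t'$ is observed for every machine on the list.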



Now that we can decide the halting problem, we turn to computing a function version of $L_{min}$: given an oracle for $L_{min}$, we can output the minimum machine $M'$ equivalent to the input $M$. If $M in L_{min}$ this is easy. Otherwise we make a list ${cal L}$ of all machines that are in $L_{min}$ and are smaller than $M$. Then we begin to compute, for increasing inputs $x$, bit tables indicating the accept/reject behavior of all machines in ${cal L}$ on the inputs seen so far. (We can do this because we can solve the halting problem with the oracle!) When we find that a machine $M''$ differs from $M$ on some input, we remove it from ${cal L}$. If $M$ is not minimal, there will eventually be only one machine $M'$ left which has not yet differed from $M$, since all machines in ${cal L}$ are minimal; this $M'$ must be the minimum machine equivalent to $M$.



Finally, we define a machine $N$ that recognizes $L_{ALL}$ with an oracle for $L_{min}$. Let $M_{ALL}$ be the minimum Turing machine that accepts everything. If the input $M$ is less than $M_{ALL}$, output NO. ($M$ must reject something.) Otherwise compute the minimum machine $M'$ equivalent to $M$. If $M' = M_{ALL}$ then output YES; otherwise output NO. Note $L(N) = L_{ALL}$.

brain - Is it possible to lose synapses over time?

Regarding your question about losing synapses: yes, synapses are regularly lost in a process called Synaptic Pruning. From the Wikipedia article:




A decrease in synapses is seen after adolescence reflecting synaptic pruning, and approximately 50% of neurons during development do not survive until adulthood. Pruning is influenced by environmental factors and is widely thought to represent learning.




Synaptic pruning is most significant during development due to "excessive" Neurogenesis that creates a large number of neurons, many of which will later be pruned. It's not excessive in a bad way, it's simply the case that more neurons are generated than necessary and thus pruning is needed to keep the connections efficient. There's no "measure twice, cut once", instead pruning is used to refine connections.



Synaptic Pruning really doesn't have anything to do with forgetting; forgetting is natural and simply happens because connections (via Recall/Rehearsal) are not made strongly enough to fully encode memories into Long Term Memory.



Note that Synaptic Pruning is not the cause for Alzheimer's or other age related degenerative disorders. From Wikipedia again:




Synaptic pruning is classified separately from the regressive events seen during older ages. While developmental pruning is experience dependent, the deteriorating connections that are synonymous with old age are not. The stereotyped pruning can be related to the chiseling and molding of stone into a statue. Once the statue is complete, the weather will begin to erode the statue and this represents the experience independent deletion of connections.




Synaptic Pruning is a normal process of development and occurs in all individuals. Alzheimer's is a very visible disorder due to media focus, but not part of normal developmental/aging effects.



Instead I suggest you read more specifically about Alzheimer's disease, there's quite a bit of information on it.

Wednesday 7 February 2007

cohomology - Cech to derived spectral sequence and sheafification

Yes, this is true in general.



It suffices to show the stalks vanish. Pick $x in X$ and take an injective resolution $0 to {cal F} to I^0 to cdots$. For any open $U$ containing $x$, we get a chain complex



$$0 to I^0(U) to I^1(U) to cdots$$



whose cohomology groups are $H^p(U,{cal F}|_U)$.



Taking direct limits of these sections gives the chain complex



$$0 to I^0_x to I^1_x to cdots$$



of stalks, which has zero cohomology in positive degrees because the original complex was a resolution. However, direct limits are exact and so we find



$$0 = {rm colim}_{x in U} H^p(U,{cal F}|_U) = {underline H}^p({cal F})_x$$



as desired.



Generally, cohomology tells you the obstructions to patching local solutions into global solutions, and this says that locally those obstructions vanish.

soft question - Why are the Dynkin diagrams E6, E7 and E8 always drawn the way they are drawn?

I disagree with the universality of your question, but I agree that the diagrams are often drawn in similar ways. They are drawn that way because they are easy to draw that way, and there isn't a good reason to deviate from what we are taught. I have seen both of your proposed alternatives for the $E_6$ diagram in the literature, and I might even say that your first alternative is the most common drawing I've seen.



There are in fact alternative conventions, e.g., if you put a 120 degree angle between consecutive edges, you don't have to spend as much time drawing vertices, and this can be helpful if you have to draw a lot of them, or have difficulty drawing convincing dots. As Guntram noted in the comments, some Wikipedia contributor has found a clever way to compress all of the Dynkin diagrams into a small, unreadable box.

Monday 5 February 2007

career - Tools for Organizing Papers?

As for physical papers:



I have two coexisting systems. The first is a file cabinet organized by author. Organizing large numbers of papers by subject or date or whatnot ravels out of control. The second is a series of magazine racks labelled by project, which contain papers directly relevant to the corresponding projects.



As for electronic papers:



I used to put them in folders by author, with helpful filenames like



Fukaya-FloerHomologyFor3ManifoldsWithBoundaryI.pdf
HatcherLawson-StabilityTheoremsForConcordanceImpliesIsotopyAndHCobordismImpliesDiffeomorphism.pdf
KustermansRognesTuset-TheModularSquareForQuantumGroups.pdf



but maintaining this got old. I tried Papers, and thought it was going to be fabulous, but like Scott, wasn't won over in the end.



But now search has gotten good enough there is much less need for explicit organization. You can just put the pdfs all together in a ginormous folder, and whenever you need something just search for it. It's the google way. (And if you use Google Desktop, then literally so.)



Here is the one thing I would like to be able to add to this system: it would be great to be able to add tags to papers, which would even further facilitate targeted retrieval and browsing. Does anyone know an easy way to do this?

reference request - What are some good resources for mathematical translation?

I think the Cornell mathematics library has a little math technobabble dictionary for converting between various standard languages like English, French and German.



Ask at the desk. If that doesn't work, ask Jim West because I think he's the guy that pointed it out to me. It used to be on that central shelf that's sitting in front of the main door to the library.



edit: if you find out the name of the dictionary, please post it here.

examples - Do good math jokes exist?

Kurd Lasswitz, mathematician, writer, inventor of science fiction in Germany, wrote this "nth part of Faust" for the Breslau Mathematical Society 1882:



"Characters:
Prost, a mathematics student in his later semesters, about to take the state examination,
Mephisto, Dx (pronounced "De-ix"), king of the differential spirits, a fraternity freshman.
Place: Breslau. Time: after dinner. (On the right a sofa; on the table, among all sorts of books, a beer stein and beer bottles; on the left a blackboard on a stand, with chalk and sponge. Written across the entire blackboard is a monstrous differential equation.)



Prost at the table, busy with his books. He fortifies himself.



Prost



Alas, I have now studied geometry,
analysis and algebra,
and sadly number theory too,
and how -- well, everyone knows!
Here I stand now, a candidate,
and can find no counsel for my thesis.
Gladly would I let them tease me as "Herr Doktor";
for twelve semesters now I have been dragging
my symbols up, down, across and crooked
around on the paper,
and I see that we cannot integrate anything.
It is truly enough to make one bang one's head against the wall.



True, I am not so addled
that I would torment myself like a pedant,
whenever I raise a series to a power,
by checking whether it also converges,
... "

structural biology - How does one calculate the resolution of a crystal structure?

Hi, sorry I missed this one -- not too hard for "biology".



If you look at a protein crystal (or any crystal, really) in an X-ray beam, it scatters the beam into lots of spots (diffraction reflections). If you look at a picture of crystalline diffraction, at larger angles from the center of the X-ray beam the reflections get weaker and weaker and basically just stop, provided the wavelength is short enough (in all crystallography labs it's plenty short -- from 1.5 down to 0.9 Angstroms).



The resolution is marked by the scattering angle of the last measurable spot (judgements of where the spots stop can vary a little, but in practice there is not much disagreement between observers). Once you have the scattering angle, you can calculate the resolution from the Bragg scattering equation:



lambda = 2d sin(theta)



which is rearranged to solve for 'd'



d = lambda / (2 sin(theta))



where lambda is the wavelength of the incident radiation and theta is the scattering angle.
The spacing d -- the apparent width of the 'slit' that produced this highest-resolution reflection -- is what is called the 'resolution' of the X-ray scattering experiment.
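In this rearranged form the calculation is a one-liner. A small sketch (the numbers -- Cu K-alpha radiation at 1.54 Angstroms and a last-visible spot at 2*theta = 30 degrees -- are made-up examples of mine):

```python
import math

def resolution(wavelength, two_theta_deg):
    """Bragg's law d = lambda / (2 sin(theta)), with the scattering
    angle 2*theta given in degrees and lambda in Angstroms."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength / (2 * math.sin(theta))

# Cu K-alpha radiation (1.54 A); reflections visible out to 2*theta = 30 degrees
print(resolution(1.54, 30.0))  # ~2.98 A resolution
```

Note that spots at larger scattering angles give smaller d, i.e. higher (better) resolution.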



There is one slightly tricky part: in the typical diagram depicting Bragg reflection/scattering, you would take the incident X-ray beam as the first beam and the high-resolution scattered beam as the second, and the angle between them is more easily measured as 180 - 2*theta.

lie algebras - Why is Lie's Third Theorem difficult?

@Theo, doubtless in the year since you answered this question, someone who can produce such impressive notes will have sorted out all the details of a thorough answer, but my own proof of Lie III from my forthcoming exposition on Lie theory, for what it's worth, is basically the proof that two connected, simply connected Lie groups with the same Lie algebra are isomorphic, turned on its head. It runs roughly as follows. Once we have the local Lie group, form the set of all formal products of the form $gamma = Pi_j expleft(X_j(tau)right)$, where the $X_j(tau)$ are $C^1$ paths in the local Lie group, of course small enough that CBH applies to all pairs of their products. Then you can show that all continuous deformations of this path through the set of formal products can be written, thanks to the Hadamard formula, as $Pi_j expleft(X_j(tau)right) Pi_j expleft(delta X_j(tau)right)$; i.e., you can shuffle all the variation to one end of the product. Now restrict the variations $delta X_j$ to be small enough that the CBH formula applies to each stage of the n-fold product $Delta = Pi_j expleft(delta X_j(tau)right)$. Thus, even though we have not rigorously defined the "value" of the formal product, the concept of small continuous, but finite, variations of the path that leave the "value" of the formal product unchanged is now meaningful: it is any such variation for which, as calculated by CBH, $e = Delta = Pi_j expleft(delta X_j(tau)right)$, where $e$ is the group identity. So now we can define two of our formal products to be equivalent if one can find a finite sequence of small, continuous variations, each leaving the product's "value" unchanged in the sense just defined, that step by step deforms one formal product into the other. We then condense the big set of formal products into its equivalence classes modulo this equivalence.

This approach does two things: $mathbf{(i)}$ it irons out any inconsistency that might arise from an element's having horribly many potential representations as different formal products, and $mathbf{(ii)}$ the set of equivalence classes, as a set of homotopy classes, is simply connected by construction. So now we just check that this beast is a Lie group and we are done. In my exposition this is easy, because I use essentially a converse of Satz 1 of Freudenthal's 1941 "Die Topologie der Lieschen Gruppen als algebraisches Phänomen. I" to define a Lie group: I show that one can take essentially the properties listed in Freudenthal's theorem, use them as axioms, and show that a group that is a simply connected manifold builds itself from them. What we have done, of course, is to build the unique connected, simply connected Lie group that has the Lie algebra we input to our proof.



I believe my approach is somewhat like the J.P. Serre's 1964 lectures approach cited in the first answer.

mp.mathematical physics - Asymptotic Matching of an logarithmic Outer solution to an exponential growing inner solution

Your question is a bit vague. Indeed, the standard procedure would involve rejecting the singular solutions and then using a combination of inner and outer expansions to satisfy the boundary conditions. There are various specific methods of achieving the goal (Vishik-Lyusternik and matched asymptotic expansions are the most popular), but typically one or several boundary conditions are only satisfied asymptotically (i.e. with an error vanishing as the small parameter tends to its limit, the error often being exponentially small). Therefore, if some (or all) of your boundary conditions are satisfied only approximately (i.e. only in the limit of the vanishing small parameter), this may in principle be a "feature" of the method that you are using. Otherwise, if the boundary conditions cannot be satisfied even in this sense, it usually signals the presence of yet another boundary layer which needs to be accounted for.



Kevorkian and Cole is an excellent source; you may also find helpful Van Dyke's "Perturbation methods in fluid mechanics" (a bit terse), Nayfeh's "Perturbation methods" (textbook) and de Jager and Furu's "The theory of singular perturbations" (good alternative).