Saturday, 30 September 2006

ag.algebraic geometry - Is the complex moduli of Quintic Calabi-Yau toric?

The complex moduli space does not admit a toric strucutre, since the orbifold fundamental group of a toric orbifold must be abelian. Indeed, $pi_1(mathbb C^*)^n$ surjects on the orbifold fundamental group. Also, the orbifold stabisier of each point on a toric orbifold
is a finite abelian group. At the same time the stabiliser of the quintic $sum_i z^5=0$
is a non-comutative group. Also I am sure that the orbifold fundamental group of the moduli space of quintics contains free (non-abelian) subgroups, but I don't know how to prove it.



Also it should be true that the Tiechmuller space is not algebraic. It least this happen in lower dimensions for cubics in $mathbb CP^2$ and for quartics in $mathbb CP^3$.
In the first case the Theichmuiller space is a disk, and in the second it is
a hermitian domain of type IV. Moduli spaces of polarised K3 are discussed here
for example, here:



http://people.bath.ac.uk/masgks/Papers/k3moduli.pdf

structural biology - How would one describe the R-factor in crystallography?

Crystallography requires the collection of many measurements (could be a few thousand to even millions depending on the size of the molecule and complexity of the crystal (technically speaking, the size of the crystal's unit cell is a major determining factor for the size of the data set). I'm not going to assume this is a small molecule crystal like a salt or a large molecule like a protein. its not necessary here.



I'll call this a set of intensity measurements, even though what is actually used is the square root of the intensities measured, called the structure factor.



The only way we really know that the molecular structure model is correct is that it generates as accurately as possible the intensities that were experimentally measured.



R, which I believe stands for residual, is a fractional difference between the measured intensities and what any proposed molecular model gives.



The residual is calculated as the absolute value of F(model)-F(measured). This means that 0.0 is a perfect match while 1.0 is a perfectly awful fit that shows the model is perfectly awful.



In practice 0.60 is usually as bad as random model will give you (there's a statistical argument as to why it doesn't go higher). Also the measurements are usually not perfect - they contain errors in measurements or artifacts from imperfect crystals or the detector, so an R value of < 0.20 (20%) is typically what you see in a reasonable paper. I think its commonly less than 0.15 for most structures now in fact.

molecular biology - How does translational coupling work in prokaryotes?

Translational coupling describes in how some cases an mRNA will code from more than one protein (i.e. will be polycistronic). Translational coupling is thought to be mostly used as a way to make a set of genes are translated at roughly the same amount in the cell.



Translational coupling is very common in prokaryotes and nearly half of e coli genes are found in a polycistronic operon. What we know about them is revealing. There are some fancy mechanisms to adjust the ratios of these adjacent genes, which are still coupled, but with ratios that are not just 1:1. Its shown that the later genes are often translated at somewhat lower frequency because the first genes are available more quickly before the mRNA degrades.



Eric Alm @ MIT wrote this great paper on how operons evolve.



I've only been able to find this reference to a eukaryotic case of "translational coupling" which is very rare, but does exist. The most common cases of translationally coupled genes in eukaryotes are RNA viruses which usually contain only a single full length mRNA which codes for all the genes in the virus. The selective pressures to keep the viral genome small and constrain the ratios of these genes, will cause these genes to even overlap, starting the next one before the current gene finishes.

Friday, 29 September 2006

molecular biology - How do I prepare and clone from E. coli DNA?

Yes, you must fragment the genome in order to insert it into a vector for cloning; you can't "insert" the whole 5 Mbp genome of E. coli into a vector. It's difficult to transform cells with huge plasmids, 2-20 kbp is an optimal range. In any case, if you want a clone of the whole genome, wait 30 minutes and the cell will happily oblige you.



Most procedures that isolate genomic DNA will fragment it in the first place, as it is much too large and fragile to stay together, and if it did it, the majority would be caught up in other cell debris and discarded. Vortexing with glass beads is a typical first step to randomly fragment it. After this, you can digest the DNA and your vector to place matching ends on it, ligate it, and transform host cells.



If you want a targeted approach (to extract a specific gene, a technique with which I am unfamiliar), you may be able to use PCR on your extracted, fragmented DNA, as there would hopefully be one intact segment spanning what you want.

Thursday, 28 September 2006

ho.history overview - A mathematical idea "abstract enough to be useless for physics"

Dear Jérôme, I doubt that Grothendieck ever said that.



However, in an analogous vein, Jean Leray, a brilliant French mathematician, was taken prisoner by the Germans in 1940 and sent to Oflag XVIIA ("Offizierslager", officers' prison camp) in Edelsbach (Austria), where he remained for five years till the end of WW2.



He managed to hide from his captors that he was an expert in fluid dynamics and mechanics, lest they would force him to contribute to their war effort (submarines, planes).
Instead, he organized a course, attended by his fellow prisoners, on the foundations of Algebraic Topology, a harmless subject for applications in his eyes. It is in these courses that he introduced sheaves, cohomology of sheaves and spectral sequences.



His strategy worked out fine since these discoveries didn't play any role in the construction of weapons by the German enemy, who never cared about Leray's courses and findings. On the other hand, these theoretical tools have had a non entirely negligible role in pure mathematics since.

biochemistry - Protein construct design

I am trying to create some constructs of a certain protein deleting well defined domains (at either terminus) to determine interaction regions with other proteins etc., 3 constructs with varying start/end sites have ended up being aggregated in E coli (while the full length protein expresses reasonably well). My question is what considerations one should use to determine start/end sites to maximize chances of getting soluble, purifiable protein. The criterion I used were:



  1. Preferably loop region in known x-ray structure

  2. Use hydrophobicity plots to minimize hydrophobic residues at both termini

  3. 3-4 residue linker between protein and affinity tag (GST/6xHis)

From personal experience with an earlier construct, this can turn out to be an idiosyncratic exercise but I was wondering if I was missing any crucial parameters.



EDIT: I also played around with the PDB file of the non-truncated protein to check whether huge hydrophobic patches were exposed on deletion of the domain(s), and I didn't find any such patches that could potentially be exposed and lead to aggregation

Tuesday, 26 September 2006

Hochschild/Cyclic Homology of von Neumann Algebras: Useless?

Some further thoughts: the most striking results I know of on "purely algebraic cyclic/Hochschild homology" are due to Wodzicki, see e.g.



Homological properties of rings of functional-analytic type, Proceedings of the National Academy of Sciences USA 87 (1990), 4910-4911



which states that stable C*-algebras have trivial cyclic homology. Obviously this doesn't answer your II_1 factor question...



Also: your remark that in some cases, we can ignore the analysis and make the situation a bit simpler confuses me a little. To get anywhere with cyclic or Hochschild homology, we need to do some kind of comparison of resolutions, or construction of contracting homotopies, or something like that. My intuition - but I don't work much on operator algebras, so I could well be wrong here - is that a von Neumann algebra is such a big object we usually can only get a handle on it by looking at suitable subsets which generate its unit ball in the WOT/SOT. So for group von Neumann algebras, one tries to see what's going on for translations, and thence to deduce more general results by exploiting w*-w* continuity; or else use projections and approximation arguments. If we go to a purely algebraic category, then it is no longer sufficient to define things on dense subsets - one really needs a global definition, one really needs to verify that certain putative identities are satisfied by each element of the von Neumann algebra.



Sorry if that's a bit waffly. I think my point is that imposing continuity restrictions actually makes things easier, because - intuitively - more things are going to be projective/injective/flat relative to one's restricted class of short exact sequences. This is why, for instance, we know that $H^n_{cb}(M,M)=0$ for any von Neumann algebra M, but why the analogous claim without the 'cb' is open and back-breaking. In a similar vein, if you work in a restricted category then one does indeed get some known instances of homological non-triviality (though at the level of modules, not at the level of cyclic homology):



M. E. Polyakov, An Example of a Spatially Nonflat von Neumann Algebra



I should also say that the Hilbert module stuff you mention doesn't really connect to your original question about cyclic (co)homology. It's interesting, and I think more has been done, but it's just different - so if that's what interests you, cyclic and Hochschild homology may be something of a distraction.

Monday, 25 September 2006

gt.geometric topology - Name for the motion of an immersion?

I have an immersion of a 2-simplicial complex S in $mathbb{R}^3$, and then a piecewise linear motion of that immersion over an interval of time [0,1].




Is there an existing name for the map $f:Stimes[0,1]tomathbb{R}^3times[0,1]$?




Update: here is a more detailed definition of f. Let $i_t$ be the immersion of S in $mathbb{R}^3$ parameterized over the piecewise linear motion: $i:Stimes[0,1]tomathbb{R}^3$. Now, extrude $mathbb{R}^3$ into a space-time $mathbb{R}^3times[0,1]$. Then the map above that I'm interested in is defined as $f(x,t)=(i_t(x),t)$.



Calling the map a homotopy seems incorrect because then the codomain should really be $mathbb{R}^3$. I'm interested in looking at the critical phenomena in a Morse theory or singularity theory sense, though I'm relatively ignorant of those fields. Perhaps there is some standard terminology to use from there.



Another possibility which came to mind was thinking of this swept immersion as a cobordism, although that didn't seem quite right since I care about the temporal ordering and resulting causality between the critical phenomena. (e.g. collisions)

co.combinatorics - Number of valid topologies on a finite set of n elements

It's wiiiiiiide open to compute it exactly. As far as I know the "feature that makes it intractable" is that there's no real feature that makes it tractable. Very broadly speaking, if you want to count the ways that a generic type of structure can be put on an n-element set, there's no efficient way to do this -- you basically have to enumerate the structures one by one. This is essentially because "given a description of a structure type and $n$, count the number of structures on an n-element set" is a ridiculously broad problem which ends up reducing to lots and lots of different counting and decision problems. Alternatively I think you can argue via Kolmogorov complexity and all that, but that's not my style?



So in any case, the burden of proof is on the person who claims that an efficient counting algorithm should exist. (If you believe some crazy things about complexity theory, like P = PSPACE, this starts to become less true since the structure types hard to enumerate will usually be hard to describe. But if you believe that you're a lost cause in any case :P) It's still reasonable to ask for further justification, though. I'd attempt to give some, but I've been awake for like 30 hours and it would be even more handwavey than the above. The short version: If you do enough enumerative combinatorics, you start to see that nice formulas for enumerating structures arise from one of a few situations. Some really big ones are:



  1. The ability to derive a sufficiently nice recurrence relation. This is a rather nebulous property, and some surprising structure types have cute recurrence relations. I can't really tell you a good solid reason why the number of finite topologies doesn't admit a nice recurrence relation; if you work with it a while, it just doesn't feel like it does.

  2. A classification theorem such that the structures in each class have nice formulas. Sometimes these are deep, sometimes they aren't. One non-deep example is in the usual (non-linear-algebra) proof of Cayley's formula on the number of trees. Point-set topology is weird, and it's pretty weird even in the finite case. This is out of the question.

  3. If the set of structures on a set of n elements is very rigid, there may be an "algebraic" way of counting them. Again, point-set topology is too weird for this to kick in.

So my answer boils down to: It's intractable 'cause it is. Not a particularly satisfying reason, but sometimes that's the way combinatorics works. Sorry if you read that whole post -- I meant for it to be shorter and have more content, but it ended up like most tales told by idiots. But hopefully you learned something, or at least had fun with it?

How expensive is knowledge? Knots, Links, 3 and 4-manifold algorithms.

With geometrization, Rubinstein's 3-sphere recognition algorithm and the Manning algorithm, 3-manifold theory has reached a certain maturity where many questions are "readily" answerable about 3-manifolds, knots and links. I'd like to have a community wiki post where we collectively build up a knowledge of just how much is algorithmically known, and how "expensive" it is to know something, in the sense of run-time upper bounds, and memory-usage upper bounds (if available). At worst I'd like to know "there's no known algorithm".



I hope to shape the discussion in this way: This initial post will list the "things we want to know", and the responses will pick off one of these topics and provide details. Whatever details you know, with references and examples of implementations (if available). Preference for algorithms you imagine are genuinely implementable -- ones with reasonable run-time estimates, reasonable memory-consumption, and assume reasonable start-up datum, something that you could feed to a computer without too much pain. As topics from the top post are answered, they will be erased from the top post to keep clutter low.



So if there are things you want to know -- for example, is there an estimate on how long it would take to compute (?triangulations?) of all finite-volume complete hyperbolic 3-manifolds which have the n-th largest volume (among the volumes of all finite-volume manifolds), where n is the input ordinal? -- add your question to the top post.



Of course, upper bounds on run-times is all I'm looking for. If you know the complexity class on the nose, great. The bias is more towards actual run-times of algorithms you'd consider using.



Let me get the ball rolling. I know answers to a few of these, and I'll try to get around to a few of them in the coming days.



knots and links



  • Given a planar diagram for a knot, how expensive is it to compute: the Alexander polynomial, the Jones polynomial, or the HOMFLYPT polynomial? To what extent do these benefit from a quantum computer?


  • Given a planar diagram for a knot, how expensive is it to compute a presentation matrix for the Alexander module? How about the Tristram-Levine or Milnor signatures?


  • What are the best run-times for unknot recognition, such as (any modified version of) the Haken algorithm? Dynnikov's work on unknot recognition would go here, as well.


  • Are there run-time estimates on how long it would take to determine if a knot is slice? ribbon?


3-manifolds



These questions all assume triangulations as input.



  • How expensive is 3-sphere recognition? The connect-sum decomposition? (Jaco, Rubinstein, Burton, etc)


  • How expensive is the compression-body decomposition, and the JSJ-decomposition?


  • How expensive is hyperbolisation (for a triangulated, hyperbolisable 3-manifold) i.e. the closed+cusped Manning algorithm. (Manning, ?Tillman?, others?)


  • How expensive is geometrization? (?)


  • How expensive is it to compute the Alexander ideals of a triangulated 3-manifold?


  • How expensive is it to produce a surgery presentation of a 3-manifold from a triangulation? (D.Thurston and Costantino's work is the closest related to this that I know)


  • Given an ordinal $n$ representing the volume of a hyperbolic $3$-manifold of finite volume, I want to know the actual volume (as a real number). How difficult is that to know? How about reconstructing the 3-manifold as well?


  • Given a triangulated cusped hyperbolisable $3$-manifold, is there an efficient algorithm to construct the Epstein-Penner decomposition?


4-manifolds



  • Given a triangulated rational homology 3-sphere, how expensive is it to compute the generalized Rochlin invariant? (or the Rochlin invariant for a homology 3-sphere)


  • Same question, but given a surgery presentation for the rational homology 3-sphere. In this case there is the Kaplan algorithm.


  • What computable invariants of Farber-Levine pairings are there, and how hard are they to compute from a surgery presentation of a triangulation of a 4-manifold?


  • Is the Oszvath-Szabo ''d''-invariant of $spin^c$ rational homology spheres algorithmically computable now, given a surgery presentation? How are run-times?


cv.complex variables - Lacunar series with an interesting (in-formula) symmetry.

So, I wrote out a table of functions like so:



$sum_{n=1}^{infty} (-1)^{n+1}q^{n}=$ $+q^{1}$ $-q^{2}$ $+q^{3}$ $-q^{4}$ $+q^{5}$ + $ldots$



$sum_{n=1}^{infty} (-1)^{n}q^{n^{2}}=$ $-q^{1}$ $+q^{4}$ $-q^{9}$ & $+q^{16}$ $-q^{25}$ $ldots$



$sum_{n=1}^{infty} (-1)^{n+1}q^{n^{3}}=$ $+q^{1}$ $-q^{8}$ $+q^{27}$ $-q^{64}$ $+q^{125}$ $ldots$



$sum_{n=1}^{infty} (-1)^{n}q^{n^{4}}=$ $-q^{1}$ & $+q^{16}$ $-q^{81}$ $+q^{256}$ $-q^{625}$ $ldots$



$sum_{n=1}^{infty} (-1)^{n+1}q^{n^{5}}=$ $+q^{1}$ $-q^{32}$ $+q^{243}$ $-q^{1024}$ $+q^{3125}$ $ldots$



And noticed that it is possible to rewrite (by transposing the first column so
it becomes the first row). The essential (though incomplete) statement of the
symmetry here is:



$X(q) = sum_{m}^{infty} sum_{n}^{infty} (-1)^{m+n} q^{m^{n}} = sum_{m}^{infty} sum_{n}^{infty} (-1)^{m+n} q^{n^{m}}$



Writing it out appropriately:



$X(q)=sum_{m=0}^{infty} (-1)^{m}q^{2^{m}} + sum_{m=0}^{infty} (-1)^{m+1}q^{3^{m}} + sum_{m=0}^{infty} (-1)^{m}q^{4^{m}} + sum_{m=0}^{infty} (-1)^{m+1}q^{5^{m}} + ldots$



$X(q) = sum_{n=1}^{infty} (-1)^{n+1}q^{n} + sum_{n=1}^{infty} (-1)^{n}q^{n^{2}} + sum_{n=1}^{infty} (-1)^{n+1}q^{n^{3}} + sum_{n=1}^{infty} (-1)^{n}q^{n^{4}} + sum_{n=1}^{infty} (-1)^{n+1}q^{n^{5}} + ldots$



Using mpmath I get numerically:




>>> nsum(lambda p: (nsum(lambda n: ((-1)**(n+1))*((1/2.0)**(n**(2.0*p-1))), [1,inf])), [1,inf]) + nsum(lambda p: (nsum(lambda n: ((-1)**(n))*((1/2.0)**(n**(2.0*p))), [1,inf])), [1,inf])
mpf('-0.10999554665856692')
>>> nsum(lambda p: (nsum(lambda n: ((-1)**n)*((1/2.0)**((2.0*p)**n)), [0,inf])), [1,inf]) + nsum(lambda p: (nsum(lambda n: ((-1)**(n+1))*((1/2.0)**((2.0*p+1)**n)), [0,inf])), [1,inf])
mpf('-0.10999554665855271')


And using mpmath's plotting facility, I obtained a picture of $X(q)$:





Questions:



  1. $lim_{qrightarrow 0} X(q) = -1/2$ by numerical evaluation, but just algebraically evaluating the function definition would lead one to believe that X(0)=0. What's going on here?


  2. All theta function identities (including Ramanujan's mock theta functions) that I've seen involve terms with $q^{n^2}$, but nothing higher in the uppermost exponent. Is there any work on series with $q^{n^{3}}$ I've found a paper by A. Sebbar which might be relevant.


  3. Has the function $X(q)$ been studied before? And if so, under what name? Does it have any interesting properties which aren't obvious from its definition. What are the appropriate lower bounds for the most compact representation? (summation) Does this function have any interesting symmetries under the modular group?


rt.representation theory - References for Lie superalgebras

For a quick, self-learning introduction you can take a look at Alberto Elduque's talks and papers in




Alberto Elduque’s Research



starting first with the talk called "Simple modular Lie superalgebras; Encuentro Matemático Hispano-Marroquí (Casablanca, 2008)."

Sunday, 24 September 2006

linear algebra - Matrix approximation

I'll address the last question (about an a priori bound for $epsilon$).



If $ngg kgg m$, the worst-case bound for $epsilon$ is between $c(m)cdot k^{-2/(m-1)}$ and $C(m)cdot k^{-1/(m-1)}$ (probably near the former but I haven't checked this carefully). Note that the bound does not depend on $n$.



Proof.
The columns of $A$ form a set $S$ of cardinality at most $n$ in $mathbb R^m$. For a given $epsilon$, a suitable $B$ exists if and only if there is a subset $Tsubset S$ of cardinality at most $m$ such that the convex hull $conv(T)$ majorizes the set $(1-epsilon)S$ in the following sense: for every $vin S$, there is a point in $conv(T)$ which is component-wise greater than $(1-epsilon)v$. And this majorization is implied by the following: $conv(sym(T))$ contains the set $(1-epsilon)S$, or equivalently, the set $(1-epsilon)conv(sym(S))$, where by $sym(X)$ denotes the minimal origin-symmetric set containing $X$, that is, $sym(X)=Xcup -X$.



Consider the polytope $P=conv(sym(S))$. We want to find a subset of its vertices of cardinality at most $k$, such that their convex hull approximates $P$ up to $(1-epsilon)$-rescaling. This problem is invariant under linear transformations, and we may assume that $P$ has nonempty interior. Then Fritz John's theorem asserts that there is a linear transformation of $mathbb R^m$ which transforms $P$ to a body contained in the unit ball and containing the ball of radius $1/sqrt m$. For such a set, $(1-epsilon)$-scaling approximation follows from $(epsilon/sqrt m)$-approximation in the sense of Hausdorff distance. So it suffices to choose $T$ to be an $(epsilon/sqrt m)$-net in $S$. Then a standard packing argument gives the above upper bound for $epsilon$.



On the other hand, if $S$ is contained in the unit sphere and separated away from the coordinate hyperplanes, you must choose $T$ to be a $sqrtepsilon$-net in $S$. This gives the lower bound; the "worst case" is a uniformly packed set of $n=C(m)cdot k$ points on the sphere.



UPDATE.



Fritz John theorem, also known as John Ellipsoid Theorem, says that for any origin-symmetric convex body $Ksubsetmathbb R^m$, there is an ellipsoid $E$ (also centered at the origin) such that $Esubset Ksubsetsqrt m E$. (There is a non-symmetric variant as well but the constant is worse.) The linear transformation that I used just sends $E$ to the unit ball. There are lecture notes about John ellipsoid here and probably in many other sources.



Comparing scaling distance (see also Banach-Mazur distance) and Hausdorff distance between convex bodies is based on the following. The scaling distance is determined by the worst ratio of the support functions of the two bodies, and the Hausdorff distance is the maximum difference between the support functions. Once you captured the bodies between two balls, you can compare relative and absolute difference. This should be explained in any reasonable textbook in convex geometry; unfortunately I'm not an expert in textbooks, especially English-language ones.



By "packing argument" I mean variants of the following argument showing that for any $epsilon$, any subset $S$ of the unit ball in $mathbb R^m$ contains an $epsilon$-net of cardinality at most $(1+2/epsilon)^m$. Take a maximal $epsilon$-separated subset $T$ of $S$, it is always an $epsilon$-net. Since $T$ is $epsilon$-separated, the balls of radius $epsilon/2$ centered at the points of $T$ are disjoint, hence the sum of their volumes is no greater than the volume of the $(1+epsilon/2)$-ball that contains them all. Writing the volume of an $r$-ball as $c(m)cdot r^m$ yields the result. This argument gives a rough estimate
$$
epsilon le (2sqrt m+1) k^{-1/m}
$$
in the original problem (up to errors in my quick computations). To improve the exponent one can consider the $(m-1)$-dimensional surface of $P$ rather that the whole ball.

set theory - What are interesting families of subsets of a given set?

Another ultrafilter cousin is the concept of a majority
space
. This is a family $M$ of nonempty subsets of $X$,
called the majorities, such that any superset of a
majority is a majority, every subset of $X$ or its
complement is a majority, and if disjoint sets are
majorities, then they are complements. A strict majority
space has $Yin Mto Y^cnotin M$, and otherwise they are
called weak majorities. A vast majority space is closed
under finite differences in majorities. There are other
various overwhelming majority concepts.



The main point is that the majority space concept
generalizes the ultrafilter concept by omitting the
intersection rule. Every ultrafilter on $X$ is a majority
space. But there are others. For example, on a finite set,
one may take the the subsets with at least half the size, and of course this situation motivates the voting theory terminology.



On an infinite set $X$, one can divide it into a finite odd
number of disjoint pieces $X_i$, each carrying an
ultrafilter $mu_i$ on $X_i$, and then saying that
$Ysubset X$ is a majority if for most $i$ one has
$Ycap X_iinmu_i$. This produces a vast majority on
$X$ that is not an ultrafilter.



Eric Pacuit has investigated majority logic, and I recall that Andreas Blass has some very interesting work showing that it is consistent with ZFC that every majority space derives from ultrafilters in a simple way.

Saturday, 23 September 2006

ag.algebraic geometry - Nonalgebraic complex varieties

The simplest example of a complex analytic non-algebraic manifold (and hence, non-projective) is probably the Hopf surface. Indeed, any smooth complete complex algebraic variety, projective or not, is bimeromorphically Kaehler,so its cohomology admits a Hodge decomposition, which can't exist for the Hopf surface, since its first Betti number is odd.



Any smooth complex algebraic variety is Moishezon, i.e. the transcendence degree of the field of meromorphic functions equals the dimension. All Moishezon surfaces are algebraic and even projective (Kodaira), but starting from dimension 3 there are Moishezon non-algebraic varieties. Here is an example (given in Hironaka's thesis). Let $C$ be a nodal plane cubic (or any other curve with one node and no other singularities) in $mathbf{P}^3(mathbf{C})$ and let $P$ be the singular point of $C$. Take a Euclidean neighborhood $U$ of $P$ such that $Ucap C$ is analytically two branches $C_1$ and $C_2$ intersecting transversally. Let $X$ be $mathbf{P}^3(mathbf{C})setminus{P}$ blown up along $Csetminus{{}P{}}$ and let $Y$ be the result of blowing up $U$ along $C_1$ and then blowing up the result along the proper preimage of $C_2$. Note that $Y$ exists only in the analytic category.



Both $X$ any $Y$ map to $mathbf{P}^3(mathbf{C})$ and the parts of both $X$ and $Y$ over $Usetminus P$ can be naturally identified. So we glue them together to get an analytic manifold $Z$. It is Moishezon, since it is bimeromorphic to $mathbf{P}^3(mathbf{C})$. Let us show that it is not algebraic. Let $L$ be the preimage of a point in $Csetminus P$ and let $L_i,i=1,2$ be the preimage of a point of $C_isetminus P$. The preimage of $P$ itself is two transversal lines, $L'$ and $L''$, the first of which appears after the first blow-up and the second one after the second. We have $[L]=[L_1]=[L_2], [L_2]=[L''],[L_1]=[L']+[L'']$. (Here I really wish I could draw you a picture!) So $[L']=0$ i.e. we have a $mathbf{P}^1$ inside $Z$ which is homologous to zero. This is impossible for an algebraic variety (as David writes in his blog posting).



Hironaka's thesis also contains examples of complete, algebraic but not projective manifolds constructed in a similar fashion.



upd: woops, wrote this answer in a hurry just before going out for drinks; missed a couple of things as a result. These have now been fixed. The curve $C$ is a nodal plane cubic, not conic. Also, David doesn't actually show in his posting that the class of an irreducible curve in a smooth complete variety is non-trivial, but this is easy anyway: let $Z$ be the ambient smooth complete variety and let $W$ be an irreducible curve. Take a smooth point $Q$ of $W$ and let $U$ be an affine neighborhood containing $Q$. There is an irreducible hypersurface $S$ through $Q$ in $U$ that does not contain $Wcap U$ and intersects $W$ transversally at $Q$. So the closure $bar S$ of $S$ in $Z$ does not contain $W$ and intersects $W$ transversally at at least one point. So the Poincar'e dual class of $bar S$ takes a positive value on the class of $W$.



Note that the class of a reducible curve may well be zero.

oa.operator algebras - A result about Fredholm operator

When I read the article "Index Theory" in Handbook of global analysis, I meet a result as below(Corollary 2.13):



If every $F_0in mathcal {F}(H_1,H_2)$, there is an open neighborhood $U_0subseteq mathcal {B}(H_1,H_2)$, such that $Fin U_0$ implies $F((KerF_0)^perp)oplus F_0(H_1)^perp =H_2$



I didn't find this result in other books.
I can't understand the proof about it. $Fv+w=F(v-f_0)+w$? Why?



Edit:
$H_1$ and $H_2$ are separable Hilbert spaces.



$mathcal {F}(H_1,H_2)$ is the spaces of Fredholm operators.



$mathcal {B}(H_1,H_2)$ is the spaces of bounded operators.



In the proof, construct a $overline{F}:H_1oplus F_0(H_1)^perp to H_2oplus kerF_0$ by
$overline{F}(v,w)=(Fv-w,pi_{KerF_0}v)$, this is a isomorphism. Since $overline{F}$ is onto, for any $(u, f_0)in H_2oplus kerF_0$, there is $(v,w)in H_1oplus F_0(H_1)^perp$, with $u=Fv-w$ and $pi_{KerF_0}v=f_0$.



$pi_{KerF_0}: H_1to KerF_0$

Friday, 22 September 2006

ac.commutative algebra - When a formal power series is a rational function in disguise

Continued fractions!



To motivate this answer, first recall the continued fraction algorithm for testing whether a real number is rational. Namely, given a real number $r$, subtract its floor $lfloor r rfloor$, take the reciprocal, and repeat. The number $r$ is rational if and only if at some point subtracting the floor gives $0$.



Of course, an infinite precision real number is not something that a Turing machine can examine fully in finite time. In practice, the input would be only an approximation to a real number, say specified by giving the first 100 digits after the decimal point. There is no longer enough information given to determine whether the number is rational, but it still makes sense to ask whether up to the given precision it is a rational number of small height, i.e., with numerator and denominator small relative to the amount of precision given. If the number is rational of small height, one will notice this when computing its continued fraction numerically, because subtracting the floor during one of the first few steps (before errors compound to the point that they dominate the results) will give a number that is extremely small relative to the precision; replacing this remainder by $0$ in the continued fraction built up so far gives the small height rational number.



What is the power series analogue? Instead of the field of real numbers, work with the field of formal Laurent series $k((x))$, whose elements are series with at most finitely many terms with negative powers of $x$: think of $x$ as being small. For $f = sum a_n x^n in k((x))$, define $lfloor f rfloor = sum_{n le 0} a_n x^n$; this is a sum with only finitely many nonzero terms. Starting with $f$, compute $f - lfloor f rfloor$, take the reciprocal, and repeat. The series $f$ is a rational function (in $k(x)$) if and only if at some point subtracting the floor gives $0$.



The same caveats as before apply. In practice, the model is that one has exact arithmetic for elements of $k$ (the coefficients), but a series will be specified only partially: maybe one is given only the first 100 terms of $f$, say. The only question you can hope to answer is whether $f$ is, up to the given precision, equal to a rational function of low height (i.e., with numerator and denominator of low degree). The answer will become apparent when the continued fraction algorithm is applied: check whether subtracting the floor during one of the first few steps gives a series that starts with a high positive power of $x$.
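

This test is easy to run in practice. Below is a minimal sketch (mine, not part of the original answer) for truncated Laurent series over the rationals, representing a series as a dict mapping exponents to `Fraction` coefficients, together with a precision bound; the function names are invented for illustration.

```python
from fractions import Fraction

def reciprocal(f, prec):
    """1/f for a nonzero truncated Laurent series f (dict: exponent -> coeff,
    coefficients known for exponents < prec).  Returns (series, new precision)."""
    v = min(e for e, c in f.items() if c)        # valuation of f
    n = prec - v                                 # known coefficients of the unit part
    a = [f.get(v + i, Fraction(0)) for i in range(n)]
    b = [Fraction(1) / a[0]]                     # invert the unit part term by term
    for k in range(1, n):
        b.append(-sum(a[j] * b[k - j] for j in range(1, k + 1)) / a[0])
    return {i - v: b[i] for i in range(n)}, prec - 2 * v

def looks_rational(f, prec, max_steps=20):
    """Continued-fraction test described above: subtract the floor
    (the terms with exponent <= 0), take the reciprocal, repeat.
    True means f agrees with a low-height rational function to this precision."""
    for _ in range(max_steps):
        r = {e: c for e, c in f.items() if e > 0 and c}   # f - floor(f)
        if not r:
            return True
        f, prec = reciprocal(r, prec)
        if prec <= 1:
            return False                         # precision exhausted: no verdict
    return False
```

On the truncation of $1/(1-x)$ (thirty coefficients, all equal to $1$) the test succeeds after one step, while the truncated exponential series exhausts its precision without ever producing a zero remainder.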



Bonus: Just as periodic continued fractions in the classical case correspond to quadratic irrational real numbers, periodic continued fractions in the Laurent series case correspond to series belonging to a quadratic extension of $k(x)$, i.e., to the function field of a hyperelliptic curve over $k$. Abel in 1826 exploited this idea as an ingredient in a method for determining which hyperelliptic integrals could be computed in elementary terms!

polynomials - Effective algorithm to test positivity

If by effective you mean "is this computable", then yes, the computational versions of Tarski-Seidenberg such as cylindrical algebraic decomposition give you a finite algorithm. (I suppose this is assuming your polynomial has rational coefficients, or at least algebraic coefficients each given by a polynomial they satisfy along with an interval isolating them from other roots. I would guess the problem is not computable if your coefficients are given by Turing machines which compute [successively better approximations to] the reals in question, but I'm guessing that's not the problem you're asking about.)



If by effective you mean "can this be done in polynomial time", the answer is probably not; the problem is NP-hard. In particular, a matrix $A$ is defined to be copositive if $x^T A xgeq 0$ for all elementwise nonnegative column vectors $x$. That is to say, $A$ is copositive if and only if $left[begin{smallmatrix}x_1^2 & cdots & x_n^2end{smallmatrix}right]Aleft[begin{smallmatrix}x_1^2 & cdots & x_n^2end{smallmatrix}right]^Tgeq 0$ for all real $x$, so checking copositivity is a particular example of the kind of problem you have mentioned. Murty and Kabadi's 1987 paper shows that checking if an integer matrix is not copositive is NP-complete. In particular this means that checking nonnegativity is NP-hard even in the degree four case.
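

To make the reduction concrete, here is a quick illustrative sketch (mine, not from the cited paper). Sampling the quartic form $(x_1^2,dots,x_n^2) A (x_1^2,dots,x_n^2)^T$ can certify that a matrix is *not* copositive by exhibiting a witness; a "True" answer is only evidence, consistent with the NP-hardness above.

```python
import random

def quartic_form(A, x):
    """[x_1^2 ... x_n^2] A [x_1^2 ... x_n^2]^T -- the degree-4 polynomial
    whose global nonnegativity is equivalent to copositivity of A."""
    y = [xi * xi for xi in x]
    n = len(A)
    return sum(A[i][j] * y[i] * y[j] for i in range(n) for j in range(n))

def probably_copositive(A, trials=10_000, seed=0):
    """Random sampling: a return of False comes with an implicit witness;
    True is only (unproven) evidence of copositivity."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(trials):
        x = [rng.uniform(-1, 1) for _ in range(n)]
        if quartic_form(A, x) < 0:
            return False
    return True
```

For example, the identity matrix passes the test (its quartic form is a sum of fourth powers), while the matrix [[1, -2], [-2, 1]] fails already at x = (1, 1), where the form evaluates to -2.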



If by effective you mean "are there polynomial-time methods that work well in practice", the answer is yes. In particular one can check sufficient conditions like whether $f$ is a sum of squares of polynomials (and a hierarchy of tighter conditions) in polynomial time using semidefinite programming, and often this allows one to solve such problems. Such methods often enable one to compute the global minimum of a polynomial and so can often even give a "no" answer despite the fact that they are a priori just sufficient conditions for nonnegativity. If you are interested in such techniques I would recommend the papers by Parrilo, Lasserre, Nie, etc. and in particular Parrilo's course on MIT OpenCourseWare.

cv.complex variables - Getting a differential equation for a function from a functional equation of its Mellin transform

If $f$ is a locally integrable function then its Mellin transform
$mathcal{M}[f]$ is defined by
$$ mathcal{M}[f] (s) = int_0^{infty} x^{s - 1} f (x) dx . $$
This integral usually converges in a strip $alpha < Re ; s < beta$ and
defines an analytic function. For our purposes we can assume that
$mathcal{M}[f]$ converges in the right half-plane.



Let us denote $F (s) =mathcal{M}[f] (s)$. Provided that the corresponding
Mellin transforms exist, the basic general theory tells us that, for instance,
$$
mathcal{M} left[ frac{d}{d x} f (x) right] = - (s - 1) F
(s - 1),quad
mathcal{M} left[ x^{mu} f (x) right] = F (s + mu) .
$$
This allows us to translate a differential equation for $f (x)$ into a
functional equation for its Mellin transform $F (s)$.



Example:
For instance, the function $f (x) = e^{- x}$ satisfies the differential
equation
$$ f' (x) + f (x) = 0 $$
which translates to the functional equation
$$ - (s - 1) F (s - 1) + F (s) = 0 $$
for its Mellin transform. Of course, the Mellin transform of $e^{- x}$ is
nothing but the gamma function $Gamma (s)$ which is well-known for
satisfying exactly this functional equation.
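

Both the transform and the functional equation can be checked numerically. The following rough sketch (my own, using a crude midpoint rule with a truncation point tuned to $e^{-x}$) approximates Gamma(5) = 24 and verifies $F(s) = (s-1) F(s-1)$ to quadrature accuracy.

```python
import math

def mellin_numeric(f, s, upper=60.0, n=200_000):
    """Midpoint-rule approximation to M[f](s) = int_0^oo x^(s-1) f(x) dx,
    truncated at x = upper (adequate here because e^(-x) decays fast)."""
    h = upper / n
    return sum(h * ((i + 0.5) * h) ** (s - 1) * f((i + 0.5) * h) for i in range(n))

F5 = mellin_numeric(lambda x: math.exp(-x), 5.0)   # ~ Gamma(5) = 24
F4 = mellin_numeric(lambda x: math.exp(-x), 4.0)   # ~ Gamma(4) = 6
# Functional equation F(s) = (s - 1) F(s - 1):
# F5 - 4 * F4 is ~ 0 up to quadrature error.
```
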



Now, let us assume that we are given a function $f (x)$ and its Mellin
transform $F (s)$. Further, suppose that we know that $F (s)$, just as the
gamma function, can be analytically extended to the whole complex plane with
poles at certain nonpositive integers. We also know that $F (s)$ satisfies a
functional equation which we would like to translate back into a differential
equation for $f (x)$. Formally, we obtain, say, a third order differential
equation with polynomial coefficients. Can we conclude that $f (x)$ solves
this DE?



The issue is that in our case the derivatives of $f (x)$ develop singularities
in the domain and are no longer integrable. So Mellin transforms can't be
defined in the usual way for them (and so we can't just use Mellin inversion).



What I am looking for is conditions under which we can still conclude that the
functional equation for $F (s)$ translates into a differential equation for $f
(x)$. Preferably, these should be conditions on $F (s)$ and not on $f (x)$. If
it helps, we can assume $f (x)$ to be compactly supported.



Any help or references are greatly appreciated!

dna - What can you tell about a person, having only their whole genome as information?

I've had a little encounter with this question in the past few months so I'm updating here...



The overall answer is 'really a lot about some things, but not as much as you'd like to think about others.' There is a scientific genome interpretation 'contest' that has been going on for the past few years called CAGI (Critical Assessment of Genome Interpretation). This is meant to be a cutting-edge set of challenges and it's worth looking them over. Last year there was one challenge in particular: answering questions about ten individuals given only a list of traits and their genome sequences.



It was not so easy, it turns out - simply looking up variants and cross-referencing them to the literature led to poor predictions. Glaucoma, asthma, migraine, irritable bowel syndrome, color blindness, lupus, and lactose intolerance are examples from a list of 40-odd contest questions. If you register on the site you can get some of the results, or there is a paper reporting them; only four labs tried the challenge and the accuracies were sometimes not great, topping out with Rachel Karchin's lab at an AUC of 90%. Even some of the phenotypes you would think are easy are not a simple lookup. Genetics is not as pre-determined as we think.



On the other hand, we are good at inferring our history and genealogy. A reasonable example of this is the 23andMe analysis. They have a lot less information about you than a complete genome sequence, but they do seem to make the most of the 0.0001% of the genome that they do have.



They have a nice lookup of some of the hereditary disease data that is available. "Increased risk of skin cancer" or of Alzheimer's are only chances, and it's never clear how much your actual lifestyle has an impact on the 44 outcomes, but it's there.



What they also have is your ancestral analysis, which is crazy interesting. It shows where your maternal and paternal lineages come from and how strongly; they have a very pretty heatmap of the continents for this. National Geographic's database is probably more sophisticated, and ancestry.com also does this. This is pretty cool.



Also included are things like curly hair and the color of skin, eyes, and hair.



Like SimaPro says, they are described in OMIM. OMIM stands for 'Online Mendelian Inheritance in Man', and all the straightforward directly inheritable traits we know of are listed there. It's worth taking a look. There are not a lot of known single-mutation genetically inheritable conditions.



If the genome sequence includes methylation, then some other traits, which are epigenetic rather than genetic, could also be determined from the sequence. The most famous example of this is MEST, which will give an indicator of whether you are a doting parent or not.

Thursday, 21 September 2006

microbiology - What does the 34/70 in Saccharomyces pastorianus Weihenstephan 34/70 stand for?

I am not sure why you say there is no information... a quick Google search returned a few interesting pages...



In this paper:



Progress in Metabolic Engineering of Saccharomyces cerevisiae - Nevoigt, Microbiol Mol Biol Rev. 2008



the author says:




The identification of the entire genomic sequence of a commonly used lager brewer's yeast strain, i.e., Weihenstephan Nr. 34 (34/70), represents a breakthrough in the molecular analysis of lager brewer's yeast.




So, it would look like 34/70 is just a catalogue number, with no specific meaning.



Curiously, according to the Wikipedia page on Saccharomyces pastorianus:




S. pastorianus never grows above 34 °C (93 °F)




So, I cannot exclude the hypothesis that the 34 could come from there although, well, I personally lean towards the catalogue number.



Other interesting links:



The paper about the S. pastorianus genome sequencing:
Genome sequence of the lager brewing yeast, an interspecies hybrid. - Nakao et al., DNA Res. 2009



An article comparing two different strains of S. pastorianus, 34/70 and 34/78 (again, catalog number hypothesis seems to be the most obvious explanation)



Molecular species of phosphatidylethanolamine from continuous cultures of Saccharomyces pastorianus syn. carlsbergensis strains. - Tosch, Yeast. 2006



The NCBI taxonomy page (entry #520522)

Wednesday, 20 September 2006

How to tackle this puzzle?

Well, we know that the sum is at most $14+13+12+11+10+9+8+7=84$, so the product is at most $7056$.



If there are $7$ or more children, then the product is at least $8!>7056$, so there are at most $6$ children.



Furthermore, if there are $6$ children, the sum is at most $84-8-7=69$, so the product is at most $69^2=4761$, but the product is at least $7!=5040$, so there cannot be $6$ children.



Let $S$ denote the sum and $P$ the product, and let $n$ be the number of children. By the AM-GM inequality, we have $frac{S}{n} ge sqrt[n]{P} = sqrt[n]{S^2}$, so $frac{S^n}{n^n} ge S^2$, or $S^{n-2} ge n^n$. This rules out $n=2$, since it would give $1 = S^0 ge 2^2 = 4$; hence $n ge 3$.



I'm not sure there's much else you can do without getting into some messy casework. To rule out $3$ and $5$, you can divide into cases like "Suppose at least two children are older than 11," etc, and use similar arguments regarding sums and products as above. To then find the result given that $n=4$, you'll need to use some divisibility arguments and, yes, a little bit of casework.
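

For what it's worth, the casework can be delegated to a computer. Here is a short brute-force sketch under my reading of the constraints implied above (distinct integer ages between 2 and 14, product of the ages equal to the square of their sum); if the actual puzzle differs, adjust the bounds.

```python
from itertools import combinations

def solutions(n, lo=2, hi=14):
    """All n-element sets of distinct ages in [lo, hi] whose product
    equals the square of their sum."""
    found = []
    for ages in combinations(range(lo, hi + 1), n):
        s = sum(ages)
        p = 1
        for a in ages:
            p *= a
        if p == s * s:
            found.append(ages)
    return found

# The counting arguments above predict no solutions for n >= 6;
# checking every n from 2 to 7 takes a fraction of a second.
```
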

ag.algebraic geometry - Do we have non-abelian sheaf cohomology?

The quick reply is: not really for $i gt 2$, and not in the way you perhaps expect for $i=2$, see below.



The comment on Charles' answer about 'teaching you never to ask that question again' is partly true, partly not. The lesson to learn from Giraud is that really one does not use groups for coefficients of higher cohomology. For a start, Giraud's $H^2(X,G)$ is not functorial with respect to group homomorphisms $Gto H$! One also does not get the exact sequences that one expects (this is due to the lack of functoriality). But this is not a problem with his definition of the cohomology set, but a problem with what category you believe the coefficients lie in. This is because the coefficient object of Giraud's cohomology is actually the crossed module $AUT(G) = (G to Aut(G))$, and the assignment $G mapsto AUT(G)$ is not functorial. (Aside: Giraud contains lots of other important things on stacks and gerbes and sites and so on, so the book is not a waste of time by any means)



But little-known work by Debremaeker [1-3] from the 1970s fixed this up and showed that the Giraud cohomology really is functorial with respect to morphisms of crossed modules. This has been recently extended by Aldrovandi and Noohi [4], who showed that it is functorial with respect to weak maps of crossed modules, aka butterflies/papillons.



It was realised by John E. Roberts (no relation) and Ross Street that the most general nonabelian cohomology has as coefficient objects higher categories. In fact, we now know that the coefficients of $n^{th}$ degree cohomology is an $n$-category (usually an $n$-groupoid, though), even when we are talking about usual abelian cohomology.



Everything I've talked about is just for groups etc in Set, but it can all be done internal to a topos, i.e. for sheaves of groups, and more generally a Barr-exact category (and probably weaker, but Barr-exact means that the monadic description of cohomology therein due to Duskin (probably going back to Beck) works fine).




[1] R. Debremaeker, Cohomologie a valeurs dans un faisceau de groupes croises sur un site. I, Acad. Roy. Belg. Bull. Cl. Sci. (5), 63, (1977), 758 -- 764.



[2] R. Debremaeker, Cohomologie a valeurs dans un faisceau de groupes croises sur un site. II, Acad. Roy. Belg. Bull. Cl. Sci. (5), 63, (1977), 765 -- 772.



[3] R. Debremaeker, Non abelian cohomology, Bull. Soc. Math. Belg., 29, (1977), 57 -- 72.



[4] E. Aldrovandi and B. Noohi, Butterflies I: Morphisms of 2-group stacks, Advances in Mathematics, 221, (2009), 687 -- 773.

Tuesday, 19 September 2006

human biology - Is the protein in teardrops still attached to cells, or is it released and free-flowing?

I am not sure I understand your question.



According to the article you mention the proteins in teardrops kill the bacteria which are invading the eye (e.g. also present in the teardrops):




"Those jaws chew apart the walls of the bacteria that are trying to
get into your eyes and infect them,"




EDIT: These proteins are enzymes called lysozymes. They are free-flowing proteins in human tears: they are actively produced in the lacrimal glands and actively secreted into the lacrimal fluid.

Monday, 18 September 2006

dg.differential geometry - Why is GL(n,C)/U(n) a CAT(0) space?

The title says it all. In one of his answers to the question "Convex hull in CAT(0)" (I don't have the points to post a link, if someone doesn't mind link-ifying this that would be cool), Greg Kuperberg said that GL(n,C)/U(n) is a CAT(0) space. I was wondering why this is true, or if there's a reference for this.

human biology - The genetic and physiological origins of laughter?

This Wikipedia article defines laughter in many terms, such as...




"a visual expression of happiness, or an inward feeling of joy"




and




"a part of human behavior regulated by the brain, helping humans clarify their intentions in social interaction and providing an emotional context to conversations".




Note: the emphasis was added by myself.



The article also states that laughter is "probably genetic", and that




"Scientists have noted the similarity in forms of laughter induced by tickling among various primates, which suggests that laughter derives from a common origin among primate species."




According to this report which the Wikipedia referenced, the expression of laughter in general is present among other great apes. Phylogenetic trees were reconstructed to represent the evolution of this trait among primates (which concludes that the ability to laugh must have a genetic basis, at least to some degree).



Note: I also found this WikiAnswers post, but it clearly can't be that reliable.



A search on the OMIM database yielded this result, where children with Angelman syndrome were characterized with "excessive laughter", this result, in which Charles Bonnet syndrome was characterized with "inappropriate laughter", as well as many other genetic mutations which resulted in some sort of uncontrollable laughter.



I understand that laughter is a complex psychological expression of emotion likely associated with more abstract thought, and that there simply can't be "a laughter gene", but my question is:



Is there any known genetic or physiological origin of laughter? What is biologically different in great apes which allows for laughter, in comparison with a panoply of other animals? I'm looking for more of a molecular answer rather than an ecological or psychological one.

Sunday, 17 September 2006

homological algebra - An exercise in group cohomology

Here is an exercise from Serre's "Local Fields", where he starts to do cohomology: Let G act on an abelian group A, and let f be an inhomogeneous n-cochain, i.e. $fin C^n(G,A).$ Define an operator T on f by $Tf(g_1,g_2,cdots,g_n)=g_1g_2ldots g_n f(g_n^{-1},g_{n-1}^{-1},ldots,g_1^{-1})$. It is clear that $T^2f=f$. It is also not too hard to show that $T(df)=(-1)^{n+1}d(Tf)$. Thus f is a cocycle iff Tf is, and f is a coboundary iff Tf is. When n=1, it is straightforward to see that -f is cohomologous to Tf.
Then the exercise wants us to show when n= 0,3 mod 4, f is cohomologous to Tf,
while when n=1,2 mod 4, Tf is cohomologous to -f.
Any idea will be appreciated.
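

Not an answer, but the stated identities are easy to sanity-check by machine before hunting for the cohomologies. A sketch (my own, with invented helper names) for the smallest nontrivial case, $G = mathbb Z/2$ acting on $A = mathbb Z$ by sign:

```python
import itertools
import random

# G = Z/2 = {0, 1} acting on A = Z by sign: the nontrivial element acts as -1.
def act(g, a):
    return -a if g % 2 else a

def mul(*gs):
    return sum(gs) % 2

def inv(g):
    return g            # every element of Z/2 is its own inverse

def d(f, n):
    """Inhomogeneous coboundary C^n(G, A) -> C^(n+1)(G, A)."""
    def df(*g):         # g is an (n+1)-tuple
        total = act(g[0], f(*g[1:]))
        for i in range(1, n + 1):
            total += (-1) ** i * f(*(g[:i-1] + (mul(g[i-1], g[i]),) + g[i+1:]))
        total += (-1) ** (n + 1) * f(*g[:n])
        return total
    return df

def T(f, n):
    """Tf(g_1,...,g_n) = g_1...g_n . f(g_n^{-1},...,g_1^{-1})."""
    def tf(*g):
        return act(mul(*g), f(*[inv(x) for x in reversed(g)]))
    return tf

# Check T^2 = id and T(df) = (-1)^(n+1) d(Tf) on random cochains.
rng = random.Random(0)
G = [0, 1]
for n in (1, 2):
    table = {g: rng.randint(-5, 5) for g in itertools.product(G, repeat=n)}
    f = lambda *g: table[g]
    Tdf, dTf = T(d(f, n), n + 1), d(T(f, n), n)
    for g in itertools.product(G, repeat=n + 1):
        assert Tdf(*g) == (-1) ** (n + 1) * dTf(*g)
    for g in itertools.product(G, repeat=n):
        assert T(T(f, n), n)(*g) == f(*g)
```
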

at.algebraic topology - unpointed brown representability theorem

Yes, Brown representability holds for such functors.
There are not really any material differences between this and the proof of Brown representability in the pointed case.



EDIT: My previous version of this was not rigorous enough. I was trying to be clever and get away with just simple cell attachments, which only work if you already know that the functor is represented by a space. Sorry for the delay in reworking, but this particular proof has enough details that it takes time to write up.



As you say, you begin by decomposing such functors so without loss of generality $F(pt)$ is a single point.



Start with $X_{-1}$ as a point. Assume you've inductively constructed an $(n-1)$-dimensional complex $X_{n-1}$ with an element $x_{n-1} in F(X_{n-1})$ so that, for all CW-inclusions $Z to Y$ of finite CW complexes with $Y$ formed by attaching a $k$-cell for $k < n$, the map
$$
[Y,X_{n-1}] to [Z,X_{n-1}] times_{F(Z)} F(Y)
$$is surjective.



Now, define a "problem" of dimension $n$ to be a CW-inclusion $Z to Y$ where $Y$ is a subspace of $mathbb{R}^infty$ formed by attaching a single $n$-cell to $Z$, together with an element of
$$
Map(Z,X_{n-1}) times_{F(Z)} F(Y).
$$
The fact that $Y$ has a fixed embedding in $mathbb{R}^infty$ means that there is a set of problems $S$, whose elements are tuples $(Z_s,Y_s,f_s,y_s)$ with $f_s$ a map $Z_s to X_{n-1}$ and $y_s$ a compatible element of $F(Y_s)$.



Let $X_n$ be the pushout of the diagram
$$
X_{n-1} leftarrow coprod_{s in S} Z_s rightarrow coprod_{s in S} Y_s
$$
where the lefthand maps are defined by the maps $f_s$ and the righthand maps are the given $CW$-inclusions. This is a relative $CW$-inclusion formed by attaching a collection of $n$-cells; therefore, $X_n$ still has the extension property for relative cell inclusions of dimension less than $n$.



The space $X_n$ is homotopy equivalent to the homotopy pushout of the given diagram, which is formed by gluing together mapping cylinders. Specifically, $X_n$ is weakly equivalent to the space
$$
X_{n-1} times {0} cup (coprod_S Z_s times [0,1]) cup (coprod_S Y_s times {1})
$$
which decomposes into two CW-subcomplexes:
$$
A = X_{n-1} times {0} cup (coprod Z_s times [0,1/2])
$$
which deformation retracts to $X_{n-1}$, and
$$
B = (coprod Z_s times [1/2,1]) cup (coprod Y_s times {1})
$$
which deformation retracts to $coprod Y_s$ with intersection $A cap B cong coprod Z_s$. The Mayer-Vietoris property and the coproduct axiom then imply that there is an element $x_n in F(X_n)$ whose restriction to $A$ is $x_{n-1}$ and whose restriction to $B$ is $prod y_s$.



Taking colimits, you have a CW-complex $X$ with an element $x in F(X)$ (constructed using a mapping telescope + Mayer-Vietoris argument) so that, for all CW-inclusions $Z to Y$ obtained by attaching a single cell, the map
$$
[Y,X] to [Z,X] times_{F(Z)} F(Y)
$$ is surjective.



Now you need to show that for any finite CW-complex $K$, the map $[K,X] to F(K)$ is a bijection.



First, surjectivity is straightforward by induction on the skeleta of $K$. More specifically, for any $K$ with subcomplex $L$, an element of $F(K)$, and a map $L to X$ realizing the restriction to $F(L)$, you induct on the cells of $Ksetminus L$. Then, injectivity: if you have two maps $K to X$ with the same images in $F(K)$, you apply the above-proven stronger surjectivity property to the inclusion $K times {0,1} to K times [0,1]$ to show that there is a homotopy between said maps.

Saturday, 16 September 2006

Finite dimensionality of certain $C^{star}$-algebras

In the discussion about the question Finite-dimensional subalgebras of $C^{star}$-algebras the following separate question came up:



Let $H$ be a Hilbert space and $a_1, dots, a_n in B(H)$ be self-adjoint operators. Consider the operators $x_1a_1+x_2a_2+dots + x_n a_n$ , where the $x_i$'s are complex variables and assume that there is a polynomial $p(z,x_1,dots,x_n) in mathbb C[z,x_1,dots,x_n]$ such that $z$ is in the spectrum of $x_1a_1+x_2a_2+dots + x_n a_n$ if and only if $p(z,x_1,dots,x_n)=0$.




Question: Is the subalgebra of $B(H)$ which is generated by the operators $a_1 , dots, a_n$ finite dimensional?


math communication - Is a free alternative to MathSciNet possible?

This was discussed a little on the algebraic topology list last autumn; you can look up the archives to see what was said.



Technologically, this is easy. The problems come in when you think beyond that.



  1. How would such a site start? Initially, there would be very few reviews, so no one would have a reason to visit the site (the probability that the paper one wants has been reviewed being almost nil). For obvious reasons, importing an initial dataset from MathSciNet or Zentralblatt is extremely unlikely. If no-one visits, no-one's going to contribute.


  2. How would such a site maintain itself? The big problem with reviews is that the person writing the review gets almost no gain from it but (to do it properly) has to put in a fair amount of effort (which is why so many reviews on MathSciNet and Zentralblatt are so appalling, just copying out the summary of the paper, and why the few gems are so greatly appreciated). That's a huge imbalance. MathSciNet sorts this out by awarding "points" to reviewers ("And points mean prizes!"). What would a free alternative offer?


  3. How would such a site maintain its standards? Here, one gets into extremely murky waters. One idea suggested on the alg-top list was to use something like the stackoverflow model, but as this is going to involve opinions it could be extremely dangerous. Often, the most useful information in a review is a reason not to read a given paper - if you're looking at the review, you are probably already inclined to read it - and some of the classic reviews are those that rip apart a paper. Who's going to write those on an open system?


There are other ways of essentially filling the same role as reviews. You read a review to find out whether or not it's worth reading a paper. But the main problem comes before that: which papers should I read? Once I've found that out, I use the review more to figure out whether or not I should bother finding the paper in my library. If the paper is freely available, it's almost as quick to read the paper itself as to read the review (but I often read the review first because I find the paper via MathSciNet, so the review is there anyway). So I would much prefer something to speed up finding papers: a better system of linking papers together, along the lines of "If you enjoyed this paper, you might also enjoy ..."



So a really useful thing to do would be to have a "related papers" section linked to a given paper. This could be started by the author - who would have every incentive to do so (since it increases the likelihood that their paper is understood) and who would find it very easy to do (since they would have such a list of papers lying on their desk - it's all the articles, books, and so forth that they read when writing the paper in the first place). Basically, it's an expanded and commented reference list, and one which can be added to by others (so that if reading a paper, you find some other paper very useful which the original author didn't know about, or knew so well they didn't think to mention it, then you can add it).




Added later in response to some of the comments and the changes in the question.



First, a minor point. MathSciNet and the arXiv already have the capability to link articles together. If you look at a typical MathSciNet review then you'll see at the top right corner a box linking to where the article was cited, either in articles or reviews. I've found this an invaluable tool and I hope that the AMS will extend it historically (the links are only to recent articles where this data could be found fairly automatically). The arXiv has experimental support for full text search, so you can search for the arXiv identifier of one of your articles and find all those that cite it, for example.



Now on to my major point. "Someone should set up a site that ...". Who's to say that this has to be centralised? After the discussions on the alg-top list and the subsequent discussions on the rForum, I've come to the conclusion that having a central system isn't the best idea. All that is really needed is a central place from which all other places can be reached. And that already exists. It's called the arXiv. The arXiv accepts trackbacks (subject to some approval) so when you blog about a paper, send a trackback to the arXiv and then you should get linked to from the page on the paper.



Of course, this only works for articles that are on the arXiv. So then lobby the AMS to accept trackbacks as well (they can bung their standard "the AMS has no responsibility for non-AMS sites" disclaimer on). The basic information in MathSciNet is freely available, these trackbacks can easily be added to those.



In the meantime, instead of waiting for someone to come along with some central setup (which probably won't be quite what you personally were thinking of) simply put your notes online. Here's the message from the front page of the nLab:




We all make notes as we read papers, read books and doodle on pads of paper. The nLab is somewhere to put all those notes, and, incidentally, to make them available to others. Others might read them and add or polish them. But even if they don’t, it is still easier to link from them to other notes that you’ve made.




On the nLab you will find information about papers that people have found useful. You can search for a particular paper to see if anyone's commented on it, used it, or cited it. Some papers/books have their own pages. Here's one example http://ncatlab.org/nlab/show/Topological+Quantum+Field+Theories+from+Compact+Lie+Groups and here's another http://ncatlab.org/nlab/show/Elephant.



The basic point is that you can do this yourselves, now, without needing someone else to say "Here's the way to do it". You benefit right now without doing loads of extra work, and everyone else benefits incidentally as a result. Everyone wins. It doesn't have to be the nLab, you can use whatever software you like. Just put it online. Somewhere. Anywhere. Stick the arXiv/MR identifier somewhere prominent and the search engines will pick it up.



Then the rest of us will get into the habit of searching the internet for comments on articles and finding yours.



(Incidentally, along with this, make sure that you put your own articles on your webpages. Every journal that I've ever encountered allows you to do this and this is really a Must-Do for academics. Even if you just put a scan of older papers, it's invaluable for those whose libraries don't carry subscriptions to every single journal under the sun (and those that have no library at all). There is No Excuse for not doing this, especially given that photocopiers now can easily scan straight to PDF.)

ct.category theory - Ends and coends as Kan extensions (without using the subdivision category of Mac Lane)?

Ends and coends should be thought of as very canonical constructions: as Finn said, they can be described as weighted limits and colimits, where the weights are hom-functors.



Recall that if $J$ is a (small) category, a weight on $J$ is a functor $W: J to Set$. The limit of a functor $F: J to C$ with respect to a weight $W$ is an object $lim_J F$ of $C$ that represents the functor



$$C^{op} to Set: c mapsto Nat(W, hom_C(c, F-)).$$



Dually, given a weight $W: J^{op} to Set$, the weighted colimit of $F: J to C$ with respect to $W$ is an object $colim_J F$ that represents the functor



$$C to Set: c mapsto Nat(W, hom_C(F-, c)).$$



Then, as Finn notes above, the end of a functor $F: J^{op} times J to C$ is the weighted limit of $F$ with respect to the weight $hom_J: J^{op} times J to Set$, and the coend is the weighted colimit of $F$ with respect to $hom_{J^{op}}: J times J^{op} to Set$.
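

As a sanity check on these definitions (a standard example, not spelled out above): for functors $F, G: J to Set$, the end of $(j, j') mapsto Set(Fj, Gj')$ with respect to the hom-weight is exactly the set of natural transformations,

$$ int_{j in J} Set(Fj, Gj) cong Nat(F, G), $$

recovering the usual description of $Nat(F, G)$ as an equalizer of the products $prod_j Set(Fj, Gj)$ and $prod_{j to j'} Set(Fj, Gj')$.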



The ordinary limit of $F$ is the weighted limit of $F$ with respect to the terminal functor $t: J to Set$. Ordinary limits suffice for ordinary ($Set$-based) categories, but they are inadequate for enriched category theory. The concept of weight was introduced to give an adequate theory of enriched limits and colimits (replacing $Set$ by suitable $V$, and functors as above by enriched functors, etc.)



Weighted colimits and weighted limits (in particular coends and ends) can be expressed in terms of Kan extensions. For any weight $W$ in $Set^{J^{op}}$, the weighted colimit of $F: J to C$ (if it exists) is the value of the left Kan extension of $F: J to C$ along the Yoneda embedding $y: J to Set^{J^{op}}$ when evaluated at $W$, in other words



$$(Lan_y F)(W)$$



A similar statement can be made for weighted limits, as values of a right Kan extension.

ct.category theory - yoneda-embedding vs. dual vector space

I don't see the need to try to make vector spaces into categories. I would just say that in each case we have a closed symmetric monoidal category (respectively Vect or Cat), a map f : X ⊗ Y → Z for some objects X, Y, Z (respectively $langle-,-rangle$ : V ⊗ V → R and Hom : C^{op} × C → Set) and we are forming the associated map X → hom(Y,Z) (where hom denotes the internal hom functor). The double dual construction is obtained by setting Y = hom(X,Z) and letting f be the evaluation map; it doesn't depend on anything but X and Z.



That said, there is a great analogy between Vect and Cat, where R and Set play parallel roles: but what corresponds to the construction sending C to Hom(Cop, Set) is the free vector space functor from Set to Vect. The analogy goes something like this. (I am omitting some technical conditions for convenience.)



sets                 categories
vector spaces        cocomplete categories (and colimit-preserving functors)
additive structure   colimits
free v.s. on S       category of presheaves on C
the ground field     Set
(comm.) algebras     cocomplete closed (symmetric) monoidal categories
A-modules            cocomplete V-enriched categories
etc.


I am not claiming there is a way to take an object in one column and get a corresponding object in the other column (although under some circumstances that may be possible): rather that it is fruitful to use the left-hand column as a way of thinking about the right-hand column.
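To make the left-hand column a bit more concrete, here is a small sketch (my own illustration, not part of the original answer) of "free v.s. on S": the free vector space on a set $S$, realized as finitely supported functions $S \to R$, i.e. dicts, together with the universal property that any function out of $S$ extends linearly.

```python
# Sketch: the free vector space on a set S as finitely supported
# functions S -> R, represented by dicts {element: coefficient}.
from collections import defaultdict

def basis(s):
    """The basis vector corresponding to an element s of S."""
    return {s: 1.0}

def add(v, w):
    out = defaultdict(float)
    for vec in (v, w):
        for k, c in vec.items():
            out[k] += c
    return {k: c for k, c in out.items() if c != 0}

def scale(a, v):
    return {k: a * c for k, c in v.items()} if a != 0 else {}

def extend(f, v):
    """Universal property: a function f : S -> V extends linearly
    to a map free(S) -> V applied to the formal combination v."""
    result = {}
    for s, c in v.items():
        result = add(result, scale(c, f(s)))
    return result

v = add(scale(2.0, basis("x")), basis("y"))   # the formal combination 2*x + y
w = extend(lambda s: basis(s.upper()), v)     # a relabeling map, extended linearly
assert w == {"X": 2.0, "Y": 1.0}
```

The categorified analogue of `extend` is the colimit-preserving extension of a functor $C \to D$ along the Yoneda embedding, which is exactly the left Kan extension discussed above.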



See this nLab page for an introduction to these ideas.

Friday, 15 September 2006

rt.representation theory - Are there elements of fixed weight in a crystal not killed by too many Kashiwara operators?

I've come across an annoying lemma trying to finish up an argument, and I was hoping one of you guys knew about it.



Question: Given



  1. a weight $\lambda$ of a simple Lie algebra $\mathfrak{g}$, and

  2. integers $n_\alpha$ for each simple root $\alpha$,

is there a highest weight $\nu$ such that, in the crystal with highest weight $\nu$, there is an element $x$ of weight $\lambda$ such that $\tilde{F}_\alpha^{n_\alpha} x \neq 0$?



This is true in $\mathfrak{sl}_2$, which makes me hopeful about other Lie algebras, but the argument isn't coming together for me.

gr.group theory - Automorphisms of supergroups of non-coHopfian groups

In this question, I asked whether there existed groups $G$ with finitely presentable subgroups $H$ such that $gHg^{-1}$ is a proper subgroup of $H$ for some $g \in G$. Robin Chapman pointed out that the group of affine automorphisms of $\mathbb{Q}$ contains examples where $H \cong \mathbb{Z}$.



This leads me to the following more general question. A group $\Gamma$ is "coHopfian" if any injection $\Gamma \hookrightarrow \Gamma$ is an isomorphism. To put it another way, $\Gamma$ does not contain any proper subgroup isomorphic to itself. The canonical example of a non-coHopfian group is a free group $F_n$ on $n$ letters. Chapman's example exploits the fact that $F_1 \cong \mathbb{Z}$ contains proper subgroups $k\mathbb{Z}$ isomorphic to $\mathbb{Z}$.



Now let $\Gamma$ be a non-coHopfian group and let $\Gamma' \subset \Gamma$ be a proper subgroup with $\Gamma' \cong \Gamma$. Question: does there exist a group $\Gamma''$ such that $\Gamma \subset \Gamma''$, together with an automorphism $\phi$ of $\Gamma''$ such that $\phi(\Gamma) = \Gamma'$? How about if we restrict ourselves to the cases where $\Gamma$ and $\Gamma''$ are finitely presentable? I expect that the answer is "no", and I'd be interested in conditions that would assure that it is "yes".



If such a $\Gamma''$ existed, then we could construct an example answering my linked-to question above by taking $G$ to be the semidirect product of $\Gamma''$ and $\mathbb{Z}$, with $\mathbb{Z}$ acting on $\Gamma''$ via $\phi$. This question thus can be viewed as asking whether Chapman's answer really used something special about $\mathbb{Z}$.

Thursday, 14 September 2006

Is the semigroup M(n, Z) finitely presented? If so, where can I find a presentation of it?

Assuming by M(n, Z) you mean the semigroup (monoid) of n × n matrices over the integers under multiplication: no, it is not even finitely generated, because the determinant M(n, Z) → Z is a surjective monoid homomorphism (Z denoting the monoid of integers under multiplication) and Z is not finitely generated under multiplication (by the infinitude of primes).
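The two facts the argument rests on can be checked by hand; here is a small sketch (my own illustration, with hypothetical helper functions rather than any library API) for the 2 × 2 case: the determinant is multiplicative, and it hits every integer, so a finite generating set for M(2, Z) would push forward to a finite generating set for (Z, ×).

```python
# Sketch: det : M(2, Z) -> (Z, *) is a surjective monoid homomorphism.
# A generating set of M(2, Z) would therefore yield one for (Z, *),
# which cannot exist because there are infinitely many primes.

def det2(m):
    """Determinant of a 2x2 integer matrix given as ((a, b), (c, d))."""
    (a, b), (c, d) = m
    return a * d - b * c

def mul2(m, n):
    """Product of two 2x2 integer matrices."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

A = ((2, 1), (5, 3))   # det = 1
B = ((3, 0), (0, 7))   # det = 21

# Multiplicativity: det(AB) = det(A) det(B)
assert det2(mul2(A, B)) == det2(A) * det2(B)

# Surjectivity: diag(k, 1) has determinant k for every integer k
for k in range(-10, 11):
    assert det2(((k, 0), (0, 1))) == k
```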

Wednesday, 13 September 2006

immunology - Why do dendritic cells have CD4/CD8 on their surface?

There is no direct connection between CD (cluster of differentiation) receptors and the T-cell receptor (TCR).



CD receptors are used to label and distinguish the different cells belonging to the immune system: macrophages, T- and B-cells, etc. Dendritic cells play a significant role as antigen-presenting cells in the vertebrate (including human) immune system and thus bear certain CD receptors.



These cells participate in the process by which the immune system learns a new antigen. During the very first phase the antigen is taken up by these cells, cleaved and processed. Depending upon the antigenic properties of the molecule, it can be recognized as a valid antigen. In this case dendritic cells participate in the process of antigen presentation, where this new antigen, bound to an MHC molecule, is expressed on the cell membrane.






TCR is the protein that binds to this complex (MHC + antigen); once this binding takes place, the signal about the antigen is propagated further into the immune system (leading to the formation or augmentation of an immune response, depending upon the cell type that engages this complex).

anatomy - What is the average Leg-to-Foot Length/Width Ratio?

I'm making a program that estimates a probable size (widest width and longest length) for a human's foot given the length and width of the leg to which it is attached. However, try as I might, I can't seem to find the average foot-to-leg size ratio. Does anyone know this? It would be even better if you have different ratios for males and females.



As a note, I don't want to hear that this varies depending on circumstances; I just want a good average number that will let me generate people who aren't grossly out of proportion.

soft question - Mathematicians who were late learners?-list

In May 2006, the AMS Notices printed a remembrance article for Serge Lang. Dorian Goldfield was one of the contributors, and as an undergraduate, he described himself as follows:




Of the many people who had serious
interactions with Serge, I am one of
those who came away with fierce
admiration and loyalty. In the
mid-1960s, I was an undergraduate in
the Columbia engineering school on
academic probation with a C–average.
In my senior year I had an idea for a
theorem which combined ergodic theory
and number theory in a new way, and I
approached Serge and showed him what I
was doing. Although I was only a
C–level student in his undergraduate
analysis class he took an immediate
interest in my work and asked Lorch if
he thought there was anything in it.
When Lorch came back with a positive
response, Lang immediately invited me
to join the graduate program at
Columbia the next year, September
1967.




Then again, Goldfield was not a "late learner" as he was 20 when he finished college and 22 when he earned his PhD. But...

kt.k theory homology - The ring $C^{infty}(M)$?

Let $M$ be a smooth paracompact manifold. I think that the ring $C^\infty(M)$ encodes much (possibly almost all?) of the geometric or topological information about $M$.



(e.g. let $E$ be a vector bundle over $M$ and $\Gamma(E)$ the set of smooth sections of $E$. Then $\Gamma(E)$ is a $C^\infty(M)$-module. (Actually, I think $\Gamma(E)$ is a projective $C^\infty(M)$-module, because every short exact sequence of vector bundles splits.))



But I have a feeling that $C^\infty(M)$ is too large to turn problems of manifold theory into algebraic or ring-theoretic problems.



Is there any well-known concrete description of the ring $C^\infty(M)$ for some manifold $M$ with simple topology?

soft question - What should be offered in undergraduate mathematics that's currently not (or isn't usually)?

Computer Science. I know programming has been said already, but computer science isn't programming. (There's the famous Dijkstra quote: “Computer science is no more about computers than astronomy is about telescopes.”)



There is a vast and beautiful field of computer science out there that draws on algebra, category theory, topology, order theory, logic and other areas and that doesn't get much of a mention in mathematics courses (AFAIK). Example subjects are areas like F-(co)algebras for (co)recursive data structures, the Curry-Howard isomorphism and intuitionistic logic and computable analysis.



When I did programming as part of my mathematics course, I gave it up. It was merely error analysis for a bunch of numerical methods. I had no idea that concepts I learnt in algebraic topology could help me reason about lists and trees (e.g. functors), or that transfinite ordinals aren't just playthings for set theorists and can be immediately applied to termination proofs for programs, or that if my proof didn't use the law of the excluded middle then maybe I could automatically extract a program from it, or that there's a deep connection between computability and continuity, and so on.
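As a small illustration of the F-(co)algebra idea mentioned above (my own sketch, not part of the original answer): a binary tree is the fixed point of a functor $F(X) = \mathrm{Leaf} + X \times X$, and a structural fold (catamorphism) is determined by an F-algebra, i.e. one case per constructor. The same traversal pattern then computes many different results.

```python
# Sketch: a catamorphism (structural fold) over a binary tree.
# The tree type is the fixed point of the functor
#   F(X) = Leaf(int) | Node(X, X),
# and a fold is determined by an F-algebra: one map per constructor.
from dataclasses import dataclass
from typing import Callable, TypeVar, Union

A = TypeVar("A")

@dataclass
class Leaf:
    value: int

@dataclass
class Node:
    left: "Tree"
    right: "Tree"

Tree = Union[Leaf, Node]

def fold(leaf: Callable[[int], A], node: Callable[[A, A], A], t: Tree) -> A:
    """Replace each constructor by the corresponding algebra map."""
    if isinstance(t, Leaf):
        return leaf(t.value)
    return node(fold(leaf, node, t.left), fold(leaf, node, t.right))

t = Node(Node(Leaf(1), Leaf(2)), Leaf(3))

total = fold(lambda v: v, lambda l, r: l + r, t)          # sum of leaves
depth = fold(lambda v: 1, lambda l, r: 1 + max(l, r), t)  # height of the tree

assert total == 6
assert depth == 3
```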

Monday, 11 September 2006

biochemistry - Is there a binding affinity metric for interactions not in equilibria?

I am investigating the strength of binding of a small peptide to a protein by isolating the bound form and subjecting it to collisions with gas molecules (CID mass spectrometry) to dissociate the complex.
When I plot the collision energy against the proportion of bound and unbound protein, I get a curve that looks like a Kd curve. However, as the interaction isn't at equilibrium (i.e. it is in the gas phase), calculating a Kd value wouldn't be correct.



So is there another value I could calculate to describe the strength of the interaction?
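One summary value sometimes used for such gas-phase dissociation curves is the midpoint: the collision energy at which half the complex has dissociated (a "CE50"-style number). A minimal sketch, assuming scipy is available; the data below are synthetic and purely for illustration:

```python
# Sketch (not from the question): fit a logistic curve to fraction-bound vs.
# collision-energy data and report the midpoint as a stability metric.
# The data here are synthetic, generated only to make the example runnable.
import numpy as np
from scipy.optimize import curve_fit

def logistic(ce, ce50, slope):
    """Fraction of complex still bound at collision energy `ce`."""
    return 1.0 / (1.0 + np.exp((ce - ce50) / slope))

ce = np.linspace(5, 50, 10)                      # collision energies (arbitrary units)
frac_bound = logistic(ce, ce50=25.0, slope=4.0)  # synthetic "measurements"

(fit_ce50, fit_slope), _ = curve_fit(logistic, ce, frac_bound, p0=[20.0, 1.0])
print(f"CE50 = {fit_ce50:.1f}")  # midpoint: energy at 50% dissociation
```

The fitted midpoint lets you rank different peptide-protein complexes by gas-phase stability even though no equilibrium constant is defined.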

at.algebraic topology - Cohomology of fibrations over the circle: how to compute the ring structure?

This is a continuation of Ryan's answer above, but it has become too large for a comment. I wanted to work out the details of Ryan's example explicitly, so that we can see explicitly where your conditions fail to determine the cohomology; perhaps this can help you to pin down precisely what conditions you want. It doesn't seem that we actually need Kitano here, just Johnson's classic results.



Let $S_g \to M^3 \to S^1$ be a mapping torus of an element of the Torelli group, i.e. of a diffeomorphism $S_g \to S_g$ acting trivially on homology. Such a bundle admits cohomology classes satisfying the Leray-Hirsch condition [this is a fun exercise], implying that as $H^\ast(S^1)$-modules, $H^\ast(M^3) = H^\ast(S_g) \otimes H^\ast(S^1)$. Thus the following do not depend on the monodromy:



  • $Q = H^\ast(S_g)$,

  • $I$, which is $Q$ with grading shifted by 1 (if $H^\ast(S^1) = \mathbb{Z}[t]/t^2$, this is $tQ$),

  • the action of $Q$ on $I$ (just the action of $Q$ on $tQ$),

  • and the Massey products on $Q = H^\ast(S_g)$ [although perhaps I misunderstand what you mean here].

However, Johnson's work implies that your 3-manifold has the same cohomology ring as the product $S_g \times S^1$ iff the monodromy lies in the kernel of a certain homomorphism called the Johnson homomorphism; in particular, the ring $H^\ast(E)$ depends on the monodromy. It seems this shows that the answers to 1) and 2) are both "No".



Now we can compare this with your conditions to see exactly what information we're missing; it turns out to be exactly the "Johnson homomorphism". The exact sequence above $0 \to I \to H^\ast(E) \to Q \to 0$ has a splitting as abelian groups $H^\ast(E) = Q \oplus tQ$ coming from the Leray-Hirsch theorem as above. The only information we don't know automatically is the cup product of $Q$ in this splitting with itself. We know that when we project back to the $Q$ factor we recover the cup product there, which means that the missing information is the projection onto the $tQ$ factor. Letting e.g. $Q(1)$ denote the degree 1 part, the cup product is a map $Q(1) \wedge Q(1) \to H^2(E)$. Projecting onto the $tQ$ factor, we have $Q(1) \wedge Q(1) \to tQ(2)$. But both $Q(1)$ and $tQ(2)$ are isomorphic to $H^1(S_g)$, so this projection of the cup product is a map $\bigwedge^2 H^1(S_g) \to H^1(S_g)$. This exactly encodes the data not determined by your conditions; Johnson's beautiful result is that this map is exactly the Johnson homomorphism, originally defined from the algebraic properties of the monodromy. In particular he showed that this missing data could be zero or nonzero, and in fact can be anything in the subspace $\bigwedge^3 H^1 < \mathrm{Hom}(\bigwedge^2 H^1, H^1)$.



This was first laid out in Johnson's survey "A survey of the Torelli group" (MR0718141), and the details are worked out carefully in Hain, "Torelli groups and geometry of moduli spaces of curves" (MR1397061). What Kitano is doing is different, or rather a generalization of this: showing that just as the cup product on $H^\ast(E)$ detects the Johnson homomorphism, the higher Massey products on $H^\ast(E)$ detect "higher Johnson homomorphisms" measuring deeper algebraic invariants. (If any of this is useful, please consider it a partial repayment for your beautiful summary of Hodge theory in this answer.)

Sunday, 10 September 2006

at.algebraic topology - How to localize a model category with respect to a class of maps created by a left Quillen functor

I suspect that you already have one, but here is a proof. I will assume that $M$ and $N$ are combinatorial and that $M$ is left proper (otherwise, I don't think that the literature contains a general construction of the left Bousfield localizations of $M$ by any small set of maps). Everything needed for a quick proof is available in Appendix A of



J. Lurie, Higher topos theory, Annals of Mathematics Studies, vol. 170, Princeton University Press, 2009.



First, there exists a cofibrant resolution functor $Q$ in $M$ which is accessible: the one obtained by the small object argument (as accessible functors are closed under colimits, it is sufficient to know that $\mathrm{Hom}_M(X,-)$ is an accessible functor for any object $X$ in $M$, which is true, as $M$ is combinatorial). Let $W$ be the class of maps $f$ of $M$ such that $L(Q(f))$ is a weak equivalence in $N$. As $N$ is combinatorial, the class of weak equivalences of $N$ is accessible (see Corollary A.2.6.9 in loc. cit.). Therefore, by virtue of Corollary A.2.6.5 in loc. cit., the class $W$ is accessible. To prove what you want, it is sufficient to check that $M$, $W$ and $C = \{$cofibrations of $M\}$ satisfy the conditions of Proposition A.2.6.8 in loc. cit. The only nontrivial part is the fact that the class $C \cap W$ satisfies all the usual stability properties of a class of trivial cofibrations (namely: stability under pushout and transfinite composition). That is where we use left properness. For instance, if $A \to B$ is in $W$ and $A \to A'$ is a cofibration of $M$, we would like the map $A' \to B' = A' \amalg_A B$ to be in $W$ as well. This is clear, by definition, if $A$, $A'$ and $B$ are cofibrant. For the general case, as $M$ is left proper, $B'$ is (weakly equivalent to) the homotopy pushout $A' \amalg^h_A B$, and as left derived functors of left Quillen functors preserve homotopy pushouts, we may assume after all that $A$, $A'$ and $B$ are cofibrant (by considering adequate cofibrant resolutions to construct the homotopy pushout in a canonical way), and we are done. The case of transfinite composition is similar.

zoology - Why is there a difference in the rotation of the tail fin in fish compared to marine mammals?

While fish tend to move from side to side (lateral undulation) for which a vertical tail makes sense, the land ancestors of marine mammals had their limbs under them and so their spines were already adapted to up and down movement (dorsoventral undulation). When these animals moved to marine environments, they continued up and down movement in their swimming, for which a horizontal tail makes sense.



(The Wikipedia article on fins gives some more detail, and links to this webpage on Berkeley.edu.) A paper by Thewissen et al. suggests that for cetaceans, dorsoventral undulation as a swimming strategy came first, and the horizontal tail evolved later.




More detail:



In a third example beyond fishes and marine mammals, the ichthyosaurs and other aquatic reptiles developed vertical tails, even though, like marine mammals, they evolved from four-footed land animals. This may be because the legs/spines/gaits of land reptiles differ from those of land mammals, so that the earliest ichthyosaurs swam with lateral undulation, as reflected in their spinal modifications.



This blog post by Brian Switek gives a really superb run-down on the issue, with figures and citations. I'll quote this part which deals with the dorsoventral undulation theory:




...mammals and their relatives were carrying their legs underneath their bodies and not out to the sides since the late Permian, and so the motion of their spine adapted to move in an up-and-down motion rather than side-to-side like many living reptiles and amphibians. Thus Pakicetus [the pre-cetacean land mammal] would not have evolved a tail for side-to-side motion like ichthyosaurs or sharks because they would have had to entirely change the way their spinal column was set up first.




Switek goes on to talk about exceptions in some marine mammals:




At this point some of you might raise the point that living pinnipeds like seals and sea lions move in a side-to-side motion underwater. That may be true on a superficial level, but pinnipeds primarily use their modified limbs (hindlimbs in seals and forelimbs in sea lions) to move through the water; they aren't relying on a large fluke or caudal fin providing most of the propulsion with the front fins/limbs providing lift and allowing for change in direction. This diversity of strategies in living marine mammals suggests differing situations encountered by differing ancestors with their own suites of characteristics, but in the case of whales it seems that their ancestors were best fitted to move by undulating their spinal column and using their limbs to provide some extra propulsion/direction.


Saturday, 9 September 2006

Stability analysis of a system of 2 second order nonlinear differential equations

This is an answer to Charles' restatement of the question.



Recall that an equation F(x, x', x'') = 0 (e.g. x'' + sin x = 0) can be written as a system



X' = f(X), where X = (x, x')^T (e.g. f(x, x') = (x', -sin x)^T), and that system can be linearized about an equilibrium E = (x_, x'_)^T to obtain a linear equation
X' = AX, where A is the 2 × 2 matrix given by the derivative of f at E.



So too a larger system F(x,x',x'',y,y',y'') = 0 can be written as a system



X' = f(X) where X = (x,x',y,y')



Given an equilibrium E = (x_,x'_,y_,y'_), the linearization of X' = f(X) about E is
again X' = AX, where X = (x, x', y, y') and A is the derivative of f evaluated at E. A is a 4 × 4 matrix. If all eigenvalues of A have negative real part, the equilibrium is stable. If some eigenvalue has positive real part, the equilibrium is unstable. If there are no eigenvalues with positive real part, but there are eigenvalues lying on the imaginary axis, then the equilibrium is "spectrally stable," and further analysis is required to determine the nonlinear stability.
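For the pendulum-like example x'' + sin x = 0 above, the recipe can be sketched numerically (my own illustration; the finite-difference Jacobian is a hypothetical helper, not part of the answer):

```python
# Sketch: linearize X' = f(X) at an equilibrium and classify stability
# by the eigenvalues of the Jacobian A = Df(E), for x'' + sin x = 0.
import numpy as np

def f(X):
    x, v = X
    return np.array([v, -np.sin(x)])

def jacobian(f, E, h=1e-6):
    """Numerical Jacobian of f at the point E (central finite differences)."""
    E = np.asarray(E, dtype=float)
    n = len(E)
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        A[:, j] = (f(E + e) - f(E - e)) / (2 * h)
    return A

# Equilibrium at the bottom (x = 0): eigenvalues ±i, purely imaginary,
# so the equilibrium is spectrally stable (a center).
A = jacobian(f, [0.0, 0.0])
print(np.linalg.eigvals(A))

# Equilibrium at the top (x = pi): eigenvalues ±1, one with positive
# real part, so the equilibrium is unstable (a saddle).
B = jacobian(f, [np.pi, 0.0])
print(np.linalg.eigvals(B))
```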



With regard to the energy of the system, you are looking for a function of the form V(x, x', y, y') whose value along solutions is constant, i.e. whose time derivative along solutions is zero. A good place to start is with the guess 1/2((x')^2 + (y')^2) + F(x, y). You should be able to figure out what F needs to be in this particular example.



For future reference, this forum is (I believe) intended primarily for questions from students and practitioners who are a little further along in their study. I decided to answer the question because I can imagine being very frustrated at not having some information that it would take an expert five minutes to explain and the question didn't strike me as the kind which would encourage others to attempt to turn this forum into a homework help site. If the moderators disagree, I apologize.

Thursday, 7 September 2006

ag.algebraic geometry - When is a morphism proper?

Assume $V$ and $W$ are quasiprojective. Let $i: V \to X$ be a locally closed embedding with $X$ projective (for instance $X$ could be $\mathbb{P}^n$). Consider the induced map $g: V \to X \times W$; this is also a locally closed embedding. Then $f$ is proper iff $g$ is a closed embedding, or equivalently if $g(V)$ is closed.



As for the topological approach, use the definition of properness given by Charles Staats. Let $f: X \to Y$ be a continuous map of Hausdorff second countable topological spaces. The base change $f': X' \to Y'$ of $f$ by a continuous map $g: Y' \to Y$ is defined by letting $X'$ be the set of pairs $(x, y')$ in $X \times Y'$ such that $f(x) = g(y')$ (with the induced topology from $X \times Y'$), and $f': X' \to Y'$ the obvious projection. Then $f$ is proper if and only if all its base changes are closed. This may not be logically relevant, but I find it very comforting.
To connect the two cases, note that a locally closed embedding of complex algebraic varieties is closed in the Zariski topology iff it is closed in the Euclidean topology.

Wednesday, 6 September 2006

at.algebraic topology - The space of compact subspaces of $R^infty$ homotopy equivalent to a given finite complex.

To me, the Hausdorff metric is an unaccustomed way of making such a space of spaces. I think I don't trust it because fixing a homotopy type gives you a set that is neither closed nor open in general.



But yes I believe the picture is that some kind of "space of spaces of homotopy type $X$" is closely related to $A(X)$.



Let's start with smooth manifolds, but of codimension zero. For a fixed $n$ and a finite complex $K \subset \mathbb{R}^n$, let $M_n(K)$ be the space of smooth compact $n$-manifolds $N \subset \mathbb{R}^n$ containing $K$ in the interior as a deformation retract. (Let's say, the simplicial set where a $p$-simplex is a suitable thing in $\Delta^p \times \mathbb{R}^n$ such that the projection to $\Delta^p$ is a smooth fiber bundle.) You can map $M_n(K) \to M_{n+1}(K)$ by crossing with $[-1,1]$ (and doing something about corners), and you can consider the (homotopy) colimit over $n$. Using the classification of $h$-cobordisms you can work out that the set of components is the Whitehead group of $K$. The loopspace of one component is the smooth stable pseudoisotopy space of $K$. To get the idea, think of the case when $K$ is a point: the space $M_n(K)$ is then, after you discard extraneous components corresponding to cases where the boundary is not simply connected -- which were going to go away anyway upon stabilizing over $n$ -- the quotient of {embeddings $D^n \to \mathbb{R}^n$}/$O(n)$ by {diffeomorphisms $D^n \to D^n$}. It's also a kind of "space of all $h$-cobordisms on $S^{n-1}$", and thus a delooping of the (unstable) pseudoisotopy space of $S^{n-1}$.



When $K$ is more complicated than a point, it's important to distinguish between the space of all blah blah blah containing $K$ as a deformation retract and the space of all blah blah homotopy equivalent to $K$; they differ by the space of homotopy equivalences $K \to K$.



There is a similar story for the piecewise linear or topological case.



The piecewise linear manifold version of this construction can, I believe, be shown to be equivalent to a non-manifold construction more like what you asked about: some kind of "space of compact PL spaces in $\mathbb{R}^n$ containing a fixed $K$ as deformation retract".



Waldhausen tells us that the stable smooth construction above and the stable PL construction above are respectively (the underlying spaces of spectra which are) the fiber of a map from the suspension spectrum of $K \cup \{\mathrm{point}\}$ to $A(K)$, and the fiber of a map from $A(*) \wedge (K \cup \{\mathrm{point}\})$ to $A(K)$.

Tuesday, 5 September 2006

lo.logic - Is there any proof assistant based on first-order logic?

Isabelle supports many different logics, and it has a formulation of first order logic which you may browse here: http://isabelle.in.tum.de/dist/library/FOL/index.html. However, even though proofs are natural deduction in flavor, it does not produce anything a logician would understand as a natural deduction derivation upon shallow inspection.



The automated theorem provers Prover9, E, SPASS and Vampire are all first order systems. They do not produce proofs using natural deduction (they are all typically resolution/paramodulation based systems).



It sounds like ProofWeb is exactly what you want. It provides a system for displaying the accompanying natural deduction/sequent calculus proof alongside a computer-assisted formalization. It also has a really nice interactive interface for students, and provides the possibility of assigning exercises. On the other hand, I know that it has been largely developed for Coq, which is way, way more expressive than first order logic. And even though I know that there is a development of set theory within Coq, I suspect modifying the system for basic set theory would be a nontrivial exercise.

Monday, 4 September 2006

pr.probability - Is ERNIE output skewed by statistical tests?

Yes. In the same way that flipping a fair coin (with equal probabilities of getting heads or tails) eight times in a row is likely to come up all heads 1/256 of the time, or all tails 1/256 of the time. The psychological perception of a sequence with a run of 8 heads or 8 tails is that it is so unlikely as to never occur at all; whereas we mathematicians see a run of 8 in 8 flips as occurring 2/256, or just under 1%, of the time.
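The arithmetic above can be checked by brute force (a quick sketch, not part of the original answer):

```python
# Sketch: count, over all equally likely sequences of 8 fair-coin flips,
# how many are a "run of 8" (all heads or all tails).
from itertools import product

sequences = list(product("HT", repeat=8))
runs_of_8 = [s for s in sequences if len(set(s)) == 1]

assert len(sequences) == 256   # 2**8 equally likely outcomes
assert len(runs_of_8) == 2     # HHHHHHHH and TTTTTTTT
print(len(runs_of_8) / len(sequences))  # 0.0078125, just under 1%
```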



The opposite error also occurs with some frequency in biomedical and medical experiments. The standard for accepting a result in a clinical or medical trial is $p < 0.05$: that there is less than a 5% probability that the results occurred by chance. Thus, one in twenty times, it is possible that a random occurrence or set of occurrences will be perceived or accepted as statistically valid when it is not.



But it also depends on how much data (how many draws) are in the sample being gauged for randomness. The smaller the sample size, the more likely you are to discard a valid but unreasonable appearing "true" random sequence. So my answer is really a qualified "maybe".



Shouldn't the validity or "true randomness" of the method be the gauge, along with a check to see that the algorithm is properly implemented? The problem, of course, with software is that bugs can creep into the implementation at any point:



  • the compiler could be messed up, generating incorrect code,


  • the ALU (the arithmetic logic unit) in the central processing unit could be incorrectly implemented; e.g. the Pentium chip's floating-point unit incorrectly calculated certain division operations depending upon the operands (the FDIV bug),


  • the algorithm specified in the program requirements may be correct, but may be implemented incorrectly,


  • the algorithm may be correct for 32-bit integer math calculations, but incorrect if the system uses 64-bit arithmetic, or vice versa,


I wonder if the software code and hardware are vetted as well there as they are in Las Vegas by the Nevada Gaming Commission, which oversees gambling and the electronic machinery for slots, electronic poker, etc.

dg.differential geometry - Are these operators defined on 2D surfaces self-adjoint?

My research group has found/proposed a fundamental operator in quantum mechanics, which I call the Cartesian momentum (for mathematicians, the 2007 reference is probably sufficient). However, I do not know whether it is self-adjoint or not (we are all physicists). If a mathematician can give a definite answer even for simple surfaces such as the cylinder and the sphere, he would then have a nice paper.



The standard representation of a curved smooth surface $M$ embedded in $\mathbb{R}^3$ is



$\mathbf{r}(\xi, \zeta) = \left( x(\xi, \zeta),\, y(\xi, \zeta),\, z(\xi, \zeta) \right)$.



The covariant derivatives of $\mathbf{r}$ are $\mathbf{r}_{\mu} = \partial \mathbf{r} / \partial x^{\mu}$.



The contravariant derivatives



$\mathbf{r}^{\mu} \equiv g^{\mu\nu} \mathbf{r}_{\nu}$



are the generalized inverses of the covariant ones $\mathbf{r}_{\mu}$.



The unit normal vector at the point $(\xi, \zeta)$ is $\mathbf{n} = \mathbf{r}^{\xi} \times \mathbf{r}^{\zeta} / \sqrt{g}$.



The Hermitian Cartesian momentum $\mathbf{p}$ takes a compact form,



$\mathbf{p} = -i\hbar \left( \mathbf{r}^{\mu} \partial_{\mu} + H \mathbf{n} \right),$



where $H$ is the mean curvature of the surface. When the motion is constraint-free or in a flat plane, i.e., when $H = 0$, the constraint-induced term $H\mathbf{n}$ vanishes, and the Cartesian momentum operator reduces to its usual form, $\mathbf{p} = -i\hbar\nabla$.



For a particle moving on the surface of a sphere of radius $r$, with $x = r\sin\theta\cos\varphi$, $y = r\sin\theta\sin\varphi$, $z = r\cos\theta$,



the Hermitian operators for the Cartesian momenta $p_i$ are, respectively,



$p_{x} = -\frac{i\hbar}{r}\left(\cos\theta\cos\varphi\,\frac{\partial}{\partial\theta} - \frac{\sin\varphi}{\sin\theta}\,\frac{\partial}{\partial\varphi} - \sin\theta\cos\varphi\right),$



$p_{y} = -\frac{i\hbar}{r}\left(\cos\theta\sin\varphi\,\frac{\partial}{\partial\theta} + \frac{\cos\varphi}{\sin\theta}\,\frac{\partial}{\partial\varphi} - \sin\theta\sin\varphi\right),$



$p_{z} = \frac{i\hbar}{r}\left(\sin\theta\,\frac{\partial}{\partial\theta} + \cos\theta\right).$



On the spherical surface, the complete set of spherical harmonics defines the Hilbert space.




Refs.



2003, Liu Q H and Liu T G, Quantum Hamiltonian for the Rigid Rotator, Int. J. Theoret. Phys. 42(2003)2877.



2004, Liu Q H, Hou J X, Xiao Y P and Li L X, Quantum Motion on 2D Surface of Nonspherical Topology, Int. J. Theoret. Phys. 43(2004)1011.



2005, Xiao Y P, Lai M M, Hou J X, Chen X W and Liu Q H, A Secondary Operator Ordering Problem for a Charged Rigid Planar Rotator in Uniform Magnetic Field, Comm. Theoret. Phys. 44(2005)49.



2006a, Lai M M, Wang X, Xiao Y P and Liu Q H, Gauge Transformation and Constraint Induced Operator Ordering for Charged Rigid Planar Rotator in Uniform Magnetic Field, Comm. Theoret. Phys. 46(2006) 843.



2006b, Wang X, Xiao Y P, Liu T G, Lai M M and Rao, Quantum Motion on 2D Surfaces of Spherical Topology, Int. J. Theoret. Phys. 45(2006)2509.



2006c, Liu Q H, Universality of Operator Ordering in Kinetic Energy Operator for Particles Moving on two Dimensional Surfaces, Int. J. Theoret. Phys. 45(2006)2167.



2007, Liu Q H, Tong C L and Lai M M, Constraint-induced mean curvature dependence of Cartesian momentum operators, J. Phys. A 40(2007)4161.



2010, Zhu X M, Xu M and Liu Q H, Wave packets on spherical surface viewed from expectation values of Cartesian variables, Int. J. Geom. Meth. Mod. Phys., 7(2010)411-423.