Saturday, 31 March 2007

big list - Essential theorems in group (co)homology


  1. Interpretation of cohomology of small degree:

$H^1(G,A)$ = crossed homomorphisms $G \to A$ modulo principal ones.



$H^2(G,A)$ = equivalence classes of extensions of G by A.



$H^3(G,Z(A))$ = obstructions to the existence of extensions of $G$ by $A$ (here $Z(A)$ is the center of $A$).



2. Transfer and its applications: if $G$ is finite, then



1) $H^i(G,M)$ is a torsion group annihilated by multiplication by $|G|$.



2) The $p$-primary component of $H^i(G,M)$ embeds into $H^i(P,M)$, for any Sylow $p$-subgroup $P \subset G$.



3. In general, Brown's book "Cohomology of Groups" gives a decent overview of what is good to know.

Friday, 30 March 2007

nt.number theory - Lagrange four-squares theorem: efficient algorithm with units modulo a prime?

I'm looking at algorithms to construct short paths in a particular Cayley graph defined in terms of quadratic residues. This has led me to consider a variant on Lagrange's four-squares theorem.



The Four Squares Theorem is simply that for any $n \in \mathbb{N}$, there exist $w,x,y,z \in \mathbb{N}$ such that
$$
n = w^2 + x^2 + y^2 + z^2 .
$$
Furthermore, using algorithms presented by Rabin and Shallit (which seem to be state-of-the-art), such decompositions of $n$ can be found in $\mathrm{O}(\log^4 n)$ random time, or about $\mathrm{O}(\log^2 n)$ random time if you don't mind depending on the ERH or allowing a finite but unknown number of instances with less-well-bounded running time.
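For small $n$ the theorem is easy to verify directly. The sketch below is a naive brute-force search, emphatically not the Rabin-Shallit algorithm; the function name is mine:

```python
from math import isqrt

def four_squares(n):
    """Naive search for w <= x <= y <= z with n = w^2 + x^2 + y^2 + z^2.
    A solution exists for every n >= 0 by Lagrange's theorem; this brute
    force is far slower than the randomized Rabin-Shallit algorithms."""
    for w in range(isqrt(n) + 1):
        for x in range(w, isqrt(n - w*w) + 1):
            for y in range(x, isqrt(n - w*w - x*x) + 1):
                r = n - w*w - x*x - y*y
                z = isqrt(r)
                if z * z == r:
                    return (w, x, y, z)
    return None  # unreachable for n >= 0
```

For example, `four_squares(7)` returns `(1, 1, 1, 2)`.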



I am considering a Cayley graph $G_N$ defined on the integers modulo $N$, where two residues are adjacent if their difference is a "quadratic unit" (a multiplicative unit which is also a quadratic residue) or the negation of one (so that the graph is undirected). Paths starting at zero in this graph correspond to decompositions of residues as sums of squares.



It can be shown that four squares do not always suffice; for instance, consider $N = 24$, where $G_N$ is the 24-cycle, corresponding to the fact that 1 is the only quadratic unit mod 24. However, finding decompositions of residues into "squares" can be helpful in finding paths in the graphs $G_N$. The only caveat is that only squares which are relatively prime to the modulus are usable.
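To make the construction concrete, here is a small sketch (function names are mine) that builds $G_N$ and confirms the $N = 24$ claim above:

```python
from math import gcd

def quadratic_units(N):
    """Multiplicative units mod N that are squares of units."""
    return {(u * u) % N for u in range(1, N) if gcd(u, N) == 1}

def cayley_graph(N):
    """Adjacency lists: residues differing by +/- a quadratic unit mod N."""
    S = quadratic_units(N)
    S |= {(-s) % N for s in S}   # symmetrize so the graph is undirected
    return {v: sorted((v + s) % N for s in S) for v in range(N)}
```

Running `cayley_graph(24)` gives every vertex exactly the two neighbours $v \pm 1$, i.e. the 24-cycle.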



So, the question: let $p$ be prime, and $n \in \mathbb{Z}_p\ (:= \mathbb{Z}/p\mathbb{Z})$. Under what conditions can we efficiently discover multiplicative units $w,x,y,z \in \mathbb{Z}_p^\ast$ such that $n = w^2 + x^2 + y^2 + z^2$? Is there a simple modification of Rabin and Shallit's algorithms which is helpful?



Edit: In retrospect, I should emphasize that my question is about efficiently finding such a decomposition, and for $p > 3$. Obviously for $p = 3$, only $n = 1$ has a solution. Less obviously, one may show that the equation is always solvable for $n \in \mathbb{Z}_p^\ast$, for any $p > 3$ prime.
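For small primes, the solvability claim in the last paragraph can be checked by brute force. This is an $O(p^3)$ illustration of the statement, not the efficient algorithm the question asks for; the function name is mine:

```python
def unit_four_square(n, p):
    """Brute-force search for units w,x,y,z mod a prime p with
    w^2 + x^2 + y^2 + z^2 = n (mod p).  O(p^3): illustration only."""
    # map each nonzero square mod p to one unit square root of it
    root = {}
    for u in range(1, p):
        root.setdefault((u * u) % p, u)
    for w in range(1, p):
        for x in range(1, p):
            for y in range(1, p):
                t = (n - w*w - x*x - y*y) % p
                if t in root:           # t is a nonzero square mod p
                    return (w, x, y, root[t])
    return None
```

For $p = 7$, this finds a decomposition into four units for every residue $n$.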

Thursday, 29 March 2007

soft question - Which mathematical ideas have done most to change history?

Structuralism in mathematics. It may have started in linguistics, but it reached mathematics next, promoted largely through Weil and Bourbaki, category theory, and then the grand vision of Grothendieck. Structuralism is not so much a single mathematical idea as a way of thinking about properties and definitions, what mathematical objects are, and how we should study them. The ideas expanded out from mathematics swiftly, and in the course of 20th century intellectual development, it is hard to find an idea as pervasive and influential as the structuralist approach.



(There is a book by Amir Aczel on Bourbaki that tells some of the story. I found the book to be unfortunately rather poorly written, but informative nonetheless.)



Structuralism is literally everywhere. It contains the idea that objects are characterised by their relationships relative to all other objects, rather than having an inherent identity of their own. For example, one sees an element of this in passing from the old notion of groups as collections of transformations of something to the more abstract notion of a set equipped with the structure of a group multiplication law. Through Levi-Strauss, structuralism was introduced into anthropology. It created a large school of thought in history, sociology, political science, and so on.



Up above, I see that the Google PageRank algorithm was mentioned. One can view this as an example of structuralism in action - the rank of a website is computed by the algorithm as a certain function of its relationship to all other websites rather than as a function of the content of the site itself.

Wednesday, 28 March 2007

co.combinatorics - Path connected coloured sets on the squared paper

I believe that such a 3x3 square does not necessarily exist.



A counterexample would take the form of an infinite still life pattern in the life-like cellular automaton rule B123678/S34 (these rules are chosen so that the only patterns that remain stable are the ones in which the number of live cells in each 3x3 box is 4 or 5). Additionally, both the live and dead cells of the pattern should be connected.
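For finite patterns one can at least check the still-life condition mechanically. A counterexample to the question would have to be infinite, so this only tests finite candidate regions; the helper names are mine, and the birth/survival sets are read off the rule string B123678/S34:

```python
def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def is_still_life(live):
    """Check that a finite set of live cells is a still life under
    B123678/S34: live cells survive with 3 or 4 live neighbours,
    dead cells are born with 1, 2, 3, 6, 7 or 8 live neighbours."""
    birth, survive = {1, 2, 3, 6, 7, 8}, {3, 4}
    # every live cell must survive
    for c in live:
        if sum(n in live for n in neighbors(c)) not in survive:
            return False
    # no dead cell adjacent to the pattern may be born
    frontier = {n for c in live for n in neighbors(c)} - live
    return all(sum(n in live for n in neighbors(d)) not in birth
               for d in frontier)
```

For instance, a plain 2x2 block fails: each live cell has 3 neighbours and survives, but the dead cells along its boundary see 1 or 2 live neighbours and get born.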



But as the following partial double spiral shows (copy and paste it into Golly to view and test) it's possible to form partial double-spiral patterns that, at least in the center of the pattern, have the desired properties. I don't see any good reason why it shouldn't be possible to continue the spiral infinitely.



x = 31, y = 31, rule = B123678/S34
14b4o$12b3o2b3o$10b3o6b3o$8b3o3b4o3b3o$6b3o3b3o2b3o3b3o$5b2o3b3o6b3o3b
2o$5bo2b3o3b4o3b3o2bo$4b2ob2o3b3o2b3o3b2ob2o$4bo2bo2b3o6b3o2bo2bo$3b2o
b2ob2o3b4o3b2ob2ob2o$3bo2bo2bo2b3o2b3o2bo2bo2bo$2b2ob2ob2ob2o6b2ob2ob
2ob2o$2bo2bo2b2obo2b4o2bo2bo2bo2bo$2bo2bo2bo2bob2o2b2ob2ob2ob2ob2o$b2o
b2ob2ob2o2bo2bo2bo2bo2bo2bo$b2ob2ob2ob2ob2ob2ob2ob2ob2ob2o$bo2bo2bo2bo
2bo2bo2bo2bo2bo2bo$2ob2ob2ob2ob2ob2o2bo2bo2bo2bo$o2bo2bo2bo2b2o2bob2ob
2ob2ob2o$2ob2ob2ob2o4b2obo2bo2bo2bo$bo2bo2bo2b6o2bo2bo2bo2bo$b2ob2ob2o
3b2o3b2ob2ob2ob2o$2bo2bo2b3o4b3o2bo2bo2bo$2b2ob2o3b6o3b2ob2ob2o$3bo2b
3o3b2o3b3o2bo2bo$3b2o3b3o4b3o3b2ob2o$4b3o3b6o3b3o2bo$6b3o3b2o3b3o3b2o$
8b3o4b3o3b3o$10b6o3b3o$12b2o3b3o!


Here's a screenshot:

[screenshot of the double-spiral pattern]

nt.number theory - Dihedral extensions and the Ankeny - Artin - Chowla conjecture

Jensen and Yui ("Polynomials with $D_p$ as Galois group",
J. Number Theory 15, 347-375 (1982)) proved that if $p = 4n+1$
is a regular prime, then there is no normal extension of the
rationals with Galois group $D_p$ (dihedral of order $2p$)
ramified only at $p$. When I first read it I noticed that such an
extension exists if and only if $p$ divides $u$, where $t+u\sqrt{p}$
is the fundamental unit of the real quadratic number field with
discriminant $p$ (Ankeny, Artin and Chowla conjectured that this
never happens; it is known that this property is equivalent to
the divisibility of the Bernoulli number $B_{(p-1)/2}$ by $p$,
hence implies that $p$ is irregular).



I recall having seen this result in print a few years later,
but can't find it anymore. Can anyone help me?

Tuesday, 27 March 2007

human biology - How do the brain and nerves create electrical pulses?

So, let us introduce some keywords.



The "electrical pulse" that is sent between the brain and nerves is called an Action Potential (AP). It is propagated along a nerve fiber to the target organ.



Basically, a neuronal cell has a body and several long extended structures that "sprout" from the cell body. Dendrites receive signals from other cells and convey them towards the cell body by creating small electrical currents. The axon is a single "sprout" that is usually much thinner and longer than the dendrites, and it conveys action potentials from near the cell body to target cells and organs. Some axons can be as long as 80-90 cm (imagine!). At the place where the axon leaves the nerve cell body there is a small protrusion called the axon hillock.



The AP originates at a special part of the axon called the axon initial segment (AIS). The initial segment is the first part of the axon as it leaves the cell body and sits immediately after the axon hillock.



The electrical pulse is a short electrical discharge that can be seen as a sudden movement of many charged particles from one place to another. In our cells, ions of Na+ (sodium), K+ (potassium) and Cl- (chloride) (and in some cases also Ca2+) constitute these charged particles.



There are two types of driving forces for these particles: besides the potential gradient, i.e. the difference in total charge between two places, there is also the concentration gradient, i.e. the difference in concentration between two places. These forces can point in opposite directions, and thus by exploiting one force (say, the concentration gradient) we can influence the other.



What we need here again is a so-called semi-permeable membrane: a barrier for ions, but only for specific ones. We need this because our main ions -- Na+ and K+ -- are both positively charged. The cell membrane, with its ion pumps, acts as such a semi-permeable barrier, moving K+ into the cell and Na+ outwards but not the opposite. Therefore we have two concentration gradients: Na+ (peak concentration outside) and K+ (peak concentration inside).



In order to start the pulse we need to initiate a massive ionic drift from one place to another. This is done by the cell, and the first event here is a drastic increase in the membrane's permeability to Na+ ions. Na+ ions massively enter the cell, and the charge they carry in forms the upstroke of the action potential.



The protective mechanism of the cell immediately starts working against the Na+ invasion by opening the reserve shunts -- the K+ channels. K+ leaves the cell, taking away some charge, and this appears as the decay of the action potential. But potassium channels are generally slower, which is why the decay of the pulse is more gradual, not as sharp as the upstroke.



You might be wondering now: what triggers the rapid change of membrane permeability? There are several factors that may contribute to this process.



  1. A change in membrane potential. Sodium and potassium channels are voltage-sensitive: if the resting potential of the membrane, formed by the concentration gradients and normally about -90 to -70 mV (millivolts), is raised to about -40 mV, the sodium channels are triggered. This is how the impulse propagates -- having originated at one place, it depolarizes the adjacent membrane area, sodium enters the cell there, and the AP travels along the nerve. The AIS is the site of AP initiation because this part of the cell has a very high density of voltage-gated sodium channels.


  2. Chemical agents, called neurotransmitters, can be detected by receptors on the cell membrane. Some of these receptors are ion channels themselves and open directly when a neurotransmitter binds. Other receptors act through intracellular signals to open ion channels. This is how the signal arises at the sites of nerve cell contacts -- neurotransmitters, like acetylcholine or adrenaline, act here as triggers for membrane permeability.


Monday, 26 March 2007

nt.number theory - Question concerning the arithmetic average of the Euler phi function:

There is information on page 68 of Montgomery and Vaughan's book, and also on page 51 of "Introduction to analytic and probabilistic number theory" by Gérald Tenenbaum. Briefly, Montgomery has established that



$$
\limsup_{x \to +\infty} \frac{R(x)}{x\sqrt{\log\log x}} > 0
$$



and similarly with the limit inferior. So there is only modest room for improvement. Unfortunately I cannot find any reference to an upper bound conditional on RH. On page 40 Tenenbaum has a reference to page 144 of Walfisz' book on exponential sums. Walfisz uses Vinogradov's method to show that



$$
R(x) = O\left(x \log^{2/3}(x) (\log\log x)^{4/3}\right).
$$



I don't own a copy of Walfisz' book, so I have no further details.
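For what it's worth, the error term is easy to explore numerically. Here I take $R(x) = \sum_{n \le x} \varphi(n) - 3x^2/\pi^2$, which I assume is the normalization intended above; the function names are mine:

```python
from math import pi

def phi_sieve(x):
    """Euler phi for 0..x by a standard sieve."""
    phi = list(range(x + 1))
    for p in range(2, x + 1):
        if phi[p] == p:                  # p is prime
            for k in range(p, x + 1, p):
                phi[k] -= phi[k] // p
    return phi

def R(x):
    """Error term R(x) = sum_{n <= x} phi(n) - 3x^2/pi^2."""
    return sum(phi_sieve(x)[1:]) - 3 * x * x / pi**2
```

Already at $x = 100$ the partial sum $\sum_{n \le 100} \varphi(n) = 3044$ sits within a few units of $3x^2/\pi^2 \approx 3039.6$.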

Sunday, 25 March 2007

ag.algebraic geometry - Nonsingular point on a hypersurface

Say we are given a hypersurface in the variables $\textbf{x} = (x_1, \dots, x_n) \in \mathbb{R}^n$, defined by $$P(\textbf{x})=0$$ for some homogeneous polynomial $P$ over $\mathbb{R}$. I assume this is nondegenerate.



We are given a nonsingular point $\textbf{x}_0$, say, on this hypersurface.



Question: Is it possible to find a point on the hypersurface arbitrarily close (in Euclidean distance) to $\textbf{x}_0$ which is nonsingular with respect to a given subset of the variables? I.e., such that at least one of the partial derivatives with respect to these variables is nonvanishing?



E.g., can we find an arbitrarily close point $\textbf{x}_1$ with $$\frac{\partial}{\partial x_n} P(\textbf{x}) \Big|_{\textbf{x} = \textbf{x}_1} \neq 0\,?$$ Further, could we even find a point arbitrarily close such that NONE of the partial derivatives vanish?

ag.algebraic geometry - resolution of singularities on surfaces

Not necessarily, I don't think. There are surface singularities (though I can't easily recall an example, but one whose resolution has exceptional divisor an elliptic curve) for which blowing up at a point gives you a singular curve. The example I'm thinking of is in Kollár's Exercises.



In the case I'm thinking of, you have a surface with a single point singularity, you blow it up, and you get a rational curve singularity, which if you blow up (or normalize, either one) gives you an elliptic curve over it.



EDIT: found it, it's exercise 68, do all three parts to see some of the things that can happen.

soft question - Which mathematicians have influenced you the most?

The graduate advisor at Queens College of the City University of New York, Nick Metas, was and continues to be my greatest influence.



I first had a conversation on the phone with Nick over 15 years ago when I was a young chemistry major taking calculus and just becoming interested in mathematics. We spoke for over 3 hours and we were friends from that moment on.



It was Nick who indoctrinated me into the ways of true rigor through his courses, countless conversations, and an equal measure of stories. Nick is a true scholar, and much of my knowledge of the textbook literature and research papers from the 1960's onward I learned from him. The capacity for self-learning I gained from Nick got me through the lean years at CUNY during my illnesses, when there wasn't much of a mathematics department there.



In relation to the reference to Gian-Carlo Rota above, I am Rota's mathematical grandson through Nick. Nick loved Rota, and his eyes light up when he speaks of his dissertation advisor and friend from his student days at MIT. I hope someday there's someone famous I can feel that way about. But no one's influenced me more than Nick.



Nick has been my friend and advisor for all things mathematical, and he celebrated his 74th birthday yesterday quietly in his usual office hour, with dozens of students asking him for advice or just listening to his wonderful stories and jokes. Regardless of what happens, it is Nick whose influence has shaped me the most as a mathematician, student, and mentor.

human biology - What causes light to be brighter in the corner of the eyes?

Your retina has two kinds of light-sensing cells: cones and rods. Cones are responsible for color vision, i.e. hue, while rods handle differences in value, i.e. how bright the light is.



Humans have adapted to have cones mostly in the center of the eye (which corresponds to what you're looking directly at), where the most distinguishing color information is needed. It's not often that you'd use your peripheral vision to identify the actual color of something, and this explains why you're seeing a difference between pink and red. The intensity of the color is correct, but you aren't able to identify the hue because of a lack of cone nerves.



So yes, this is normal. A fun trick (and also totally helpful if you have a messy room) is to look a foot or so above the floor when you're walking in the dark. It allows you to see what you're walking on better, since you need intensity receptors more than color receptors.



References:



http://regentsprep.org/Regents/physics/phys09/ceyes/sensing.htm



http://hyperphysics.phy-astr.gsu.edu/hbase/vision/rodcone.html

Saturday, 24 March 2007

economics - General Equilibrium for Mathematicians

I think it is worth pointing out that, from a purely mathematical point of view, economic general equilibrium theory is an exercise in fixed point theory.



The same may be said of the theory of non-cooperative games. John Nash invented the solution concept now known as the Nash equilibrium in his thesis. Von Neumann dismissed Nash's result as "just a fixed point theorem", but Nash eventually received the Nobel Prize in economics for this work.



The mathematical setup for economic general equilibrium theory focuses on constructing what is called the "excess demand correspondence". This is derived from assumptions about how consumers and producers formulate their plans to take best advantage of the prices they observe in the market place. The excess demand correspondence associates to any vector of market prices (one price for each commodity) the convex set of vectors of aggregate excess demands (one excess demand -- possibly negative -- for each commodity) that will arise when consumers and producers respond to that specified price vector.



The main idea of the proof is then to find another convex set of price vectors, each of which can be interpreted as a supporting hyperplane to the given convex set of excess demands, and each of which maximizes the market value of the excess demand.



This construction can then be shown to yield an upper semi-continuous, convex-set-valued function from the convex set of allowable vectors of market prices to the space of its convex subsets. One then applies an appropriate fixed point theorem to deduce the existence of a price vector which, because of the structure of the excess demand correspondence, has the property that the value of excess demand in each market is zero. This is the market equilibrium price vector.
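As a toy illustration of the excess demand construction (my own minimal example, not part of the general theory): in a two-good exchange economy with Cobb-Douglas consumers the excess demand for good 1 is a continuous, decreasing function of its relative price, so the fixed-point argument collapses to the intermediate value theorem and the equilibrium price can be found by bisection.

```python
def excess_demand(p, agents):
    """Excess demand for good 1 at prices (p, 1) in a two-good exchange
    economy.  Each agent is (a, e1, e2): Cobb-Douglas budget share a on
    good 1, endowments e1 and e2 of the two goods."""
    z = 0.0
    for a, e1, e2 in agents:
        income = p * e1 + e2
        z += a * income / p - e1     # demand minus endowment
    return z

def equilibrium_price(agents, lo=1e-6, hi=1e6):
    """Bisection: for these economies z is continuous, positive for tiny
    p and negative for huge p, so a zero exists in between."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_demand(mid, agents) > 0:
            lo = mid                 # good 1 too cheap: raise its price
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

By Walras' law, once the market for good 1 clears at this price, the market for good 2 clears automatically.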

Friday, 23 March 2007

nt.number theory - Is there a "finitary" solution to the Basel problem?

Gabor Toth's Glimpses of Algebra and Geometry contains the following beautiful proof (perhaps I should say "interpretation") of the formula $\displaystyle \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} \mp \cdots$, which I don't think I've ever seen before. Given a non-negative integer $r$, let $N(r)$ be the number of ordered pairs $(a, b) \in \mathbb{Z}^2$ such that $a^2 + b^2 \le r^2$, i.e. the number of lattice points in the ball of radius $r$. Then if $r_2(n)$ is the number of ordered pairs $(a, b) \in \mathbb{Z}^2$ such that $a^2 + b^2 = n$, it follows that $N(r) = 1 + r_2(1) + \cdots + r_2(r^2)$.



On the other hand, once one has characterized the primes which are sums of two squares, it's not hard to show that $r_2(n) = 4(d_1(n) - d_3(n))$, where $d_i(n)$ is the number of divisors of $n$ congruent to $i \bmod 4$. So we want to count the number of divisors congruent to $i \bmod 4$, for $i = 1, 3$, of the numbers less than or equal to $r^2$, and take the difference. This gives



$\displaystyle \frac{N(r) - 1}{4} = \left\lfloor r^2 \right\rfloor - \left\lfloor \frac{r^2}{3} \right\rfloor + \left\lfloor \frac{r^2}{5} \right\rfloor \mp \cdots$



and now the desired result follows by dividing by $r^2$ and taking the limit.
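The displayed identity is exact for every integer radius, which is easy to confirm numerically (a sketch; the function names are mine):

```python
from math import isqrt, pi

def N(r):
    """Number of lattice points (a, b) with a^2 + b^2 <= r^2."""
    return sum(2 * isqrt(r*r - a*a) + 1 for a in range(-r, r + 1))

def alternating_floor_sum(r):
    """floor(r^2/1) - floor(r^2/3) + floor(r^2/5) - ..."""
    total, sign = 0, 1
    for d in range(1, r * r + 1, 2):
        total += sign * (r * r // d)
        sign = -sign
    return total
```

For example, at $r = 2$ there are 13 lattice points and $\lfloor 4 \rfloor - \lfloor 4/3 \rfloor = 3 = (13-1)/4$; dividing $4 \cdot$`alternating_floor_sum(r)` by $r^2$ then converges to $\pi$ as in the argument above.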



Question: Does a similar proof exist of the formula $\displaystyle \frac{\pi^2}{6} = 1 + \frac{1}{2^2} + \frac{1}{3^2} + \cdots$?



By "similar" I mean one first establishes a finitary result with a clear number-theoretic or combinatorial meaning and then takes a limit.

soft question - Math paper authors' order

In Medicine and in Surgery, the convention is similar to that of the Physical Sciences with the most significant contributor being first or last, or with the owner of the lab equipment or funding getting senior author position as the last author.



However, there is a curve ball in Medical and Surgical Journals in that the first three authors are the ones who gain the most credit. The reason for this is that back in the pre-WWW-historic era, when I wrote papers that went into Surgical journals and when I went through medical school and surgical residency, the medical journal articles were all indexed in the Index Medicus.



The Index Medicus was a hard-copy index prepared at the end of each year and found in every medical library, with three sets of listings sorted by Medical Subject Headings (MeSH), by title of the journal article, and by the last names of the first three authors. This paper index was how people found journal articles of interest and how authors gained "publication cred." I ended up as third author on many papers, giving me a lot of cred, even above some grad students and post-docs who helped with the experiments but had not supervised, designed, or originally proposed some of them (i.e. conception and design) as I had.



Because of the problem with "author inflation" (people being added to author lists as a courtesy or to accommodate seniority), journals in medical fields such as JAMA (Journal of the American Medical Association) now require authors to submit signed Authorship Responsibility Forms which outline specifically what constitutes valid criteria for being listed as an author on a paper:



Obtaining funding is listed as one of the possible criteria, as are administrative, technical, or material support. Some of these criteria surprised me as being rather flimsy in some contexts.

terminology - What does «generic» mean in algebraic geometry?

An irreducible scheme $B$ has a unique generic point $\eta$. The generic fiber of a family $X \to B$ is the fiber $X_{\eta}$ over that special point $\eta$.



A general fiber $X_b$ is a fiber over $b \in B$ that belongs to some fixed open set $U \subset B$. And very general means that $b$ belongs to $V$, the complement of countably many Zariski closed proper subsets $Z_i$ of $B$.



That is the most common modern terminology. In older (and not so old) books sometimes generic is used where general would be more appropriate.



Added in response to Kevin Lin's comment: In classical alg. geometry, people care about general fibers. The scheme theory provides generic fibers, which are really very convenient to have, since they are so concrete. The way "generic to general" usually works is as follows: You prove that the generic fiber has a property P, and that the property P is constructible. Then P holds for any $b$ in an open neighborhood of $\eta$, that is for a general $b$. EGAs contain a long list of properties which are constructible in proper (e.g. projective) families: smoothness, CM, normality, etc., etc.



(And, yes, similar things were discussed in multiple other MO questions. One thing MO seriously lacks is a clear organization of the accumulated knowledge, so that people do not constantly ask and answer variations of the same question.)

Hopf algebras arising as Group Algebras

As Ben says, a commutative Hopf algebra $R$ is, by definition, the coordinate ring of an affine group scheme. There are also group schemes that are not affine, such as abelian varieties, so these are excluded from the question. Since the question is about algebras, let's say that the group scheme $G$ is defined over a field $k$. Then $G$ is morally, but not actually, the same as its group of $k$-rational points $G(k)$. Unless $G$ is a finite group with $R = k[G(k)]^*$, the dual of the usual group algebra, there are three possible differences between $R$ and a (dual) group algebra.



First, the group elements of $G(k)$ are ideals of $R$ with residue field $k$, and their group law is given by the coproduct on $R$. If $p$ is such an ideal, viewed as a point on $G$, then in general the function that is 1 on $p$ and $0$ on the rest of $G$ is not regular; it is not an element of $R$. This is one way to tell that $G$ must be 0-dimensional in order for $R$ to be a literal group algebra.



Second, if $k$ is not algebraically closed, there may be other closed points in $G$ whose residue field is a field extension of $k$. If the field extension is separable, then the group law on these points is multivalued. For instance if $G = \mathrm{GL}(n,\mathbb{R})$, then it has complex points, which correspond to complex conjugate pairs of complex matrices. The way to multiply two of these points is to multiply conjugates in all possible ways. (This is an example of making a tensor product $E \otimes_k F$ of two fields over a field, an operation that also came up in another MO question.)



Third, in characteristic $p$, $G$ may not be reduced. The simplest example is actually finite-dimensional: Take the universal enveloping algebra $U(L)$ of an abelian Lie algebra, which is to say a polynomial algebra, and divide by the ideal of $p$th powers of elements of $L$ to obtain a finite-dimensional Hopf algebra $u(L)$ which is a local ring. (This is similar and related to a common construction with quantum group Hopf algebras.) However, Cartier and Oort showed that algebraic group schemes in characteristic zero are reduced. You can always reduce $G$ as a scheme, but, as in the example $u(L)$, you may be throwing away everything interesting.

ca.analysis and odes - spiral of Theodorus

Here's a sketch of a proof that the constant you want exists, and how to find it.



Let
$$
f(n) = \arctan(1) + \arctan(1/\sqrt{2}) + \arctan(1/\sqrt{3}) + \cdots + \arctan(1/\sqrt{n}).
$$
You want to show that $f(n) = K\sqrt{n} + C + o(1)$ for some constants $K$ and $C$. (If you're not familiar with the $o$-notation, think of $o(1)$ as representing some function which goes to $0$ as $n$ goes to infinity.)



Then take the power series expansion of $\arctan(1/\sqrt{k})$; this is



$$
(*) \qquad k^{-1/2} - \frac{1}{3} k^{-3/2} + \frac{1}{5} k^{-5/2} - \cdots
$$



So summing over $1$ to $n$, we should get
\begin{align*}
f(n) = {} & (1^{-1/2} + 2^{-1/2} + \cdots + n^{-1/2}) \\
- \, \frac{1}{3} & (1^{-3/2} + 2^{-3/2} + \cdots + n^{-3/2}) \\
+ \, \frac{1}{5} & (1^{-5/2} + 2^{-5/2} + \cdots + n^{-5/2}) - \cdots
\end{align*}
Now, $1^{-1/2} + 2^{-1/2} + \cdots + n^{-1/2}$ has the asymptotic form
$$
2\sqrt{n} + \zeta(1/2) + O(n^{-1/2})
$$
where I cheated a bit and asked Maple, and $\zeta$ is the Riemann zeta function. And for $j \ge 3$, $1^{-j/2} + 2^{-j/2} + \cdots + n^{-j/2}$ has the asymptotic form
$$
\zeta(j/2) - O(n^{-j/2 + 1})
$$
where, if you're not familiar with the $O$-notation, $O(n^{-j/2+1})$ should be thought of as a function that goes to zero at least as fast as $n^{-j/2 + 1}$ as $n$ goes to infinity. So, assuming that we can rearrange series however we like,
$$
f(n) = 2\sqrt{n} + \left(\zeta(1/2) - \frac{1}{3}\zeta(3/2) + \frac{1}{5}\zeta(5/2) - \cdots\right) + o(1).
$$
Since $\zeta(s)$ is very close to $1$ when $s$ is a large real number, that alternating series should converge. Again cheating and using Maple, I claim it converges to about $-2.157782997$. This is the constant you call $\varphi$, and what you called $K$ is equal to $2$. (An easier way to see that your $K$ is $2$ is to note that $\arctan(1/\sqrt{n})$ is about $1/\sqrt{n}$, and approximate the sum by an integral.)

soft question - Famous mathematical quotes

Oh, he seems like an okay person, except for being a little strange in some ways. All day he sits at his desk and scribbles, scribbles, scribbles. Then, at the end of the day, he takes the sheets of paper he's scribbled on, scrunches them all up, and throws them in the trash can. --J. von Neumann's housekeeper, describing her employer.

Thursday, 22 March 2007

gr.group theory - substitute for Serre's twisting when the "twisting" is outer

The short answer is that there is not much to say about the relationship between $H^1(G, B)$ and a twist $H^1(G, B_c)$ where $c$ is a cocycle taking values in $Aut(B)$. (I am going to write $B_c$ for the twist instead of Serre's notation $_cB$ for the sake of easy typesetting.) You can get a good feel for what is possible by soaking in sections I.5.7 and III.1.4 of Serre's Galois Cohomology.



Section I.5.7



One thing you can do -- as exhibited in I.5.7 -- is twist all three terms in a short exact sequence of $G$-modules and get a new short exact sequence, assuming the obvious compatibility conditions hold. Serre starts with an exact sequence




$1 to A to B to C to 1$




where $A$ is assumed central in $B$. Then he fixes a 1-cocycle $c$ with values in $C$ and twists to get an exact sequence




$1 to A to B_c to C_c to 1$.




Note that this twist $B_c$ is not an inner twist of $B$, because $c$ need not be in the image of $H^1(G, B) to H^1(G, C)$.



This may look like a lame example, in that the twist of $B$ is "pretty close" to being inner. But already here you don't have any results regarding a connection between $H^1(G, B)$ and $H^1(G, B_c)$. That's a pretty fuzzy statement; Serre says as much as you can say with precision in Remark 1: "it is, in general, false that $H^1(G, B_c)$ is in bijective correspondence with $H^1(G, B)$."



Section III.1.4



This section discusses your question for the specific case where $G$ is the absolute Galois group of a field $k$ and $B$ is the group of $n$-by-$n$ matrices of determinant 1 with entries in a separable closure of $k$. Serre explains what you get as $B_c$ when you twist $B$ by a cocycle with values in $Aut(B)$. You can get, for example, a special unitary group.



You can find explicit descriptions of $H^1(G, B_c)$ for some $B_c$'s in The Book of Involutions, pages 393 (Cor. 29.4) and 404 (box in middle of page). Note that for $B_c$ as in Section I.5.7, $H^1(G, B_c)$ is a group (a nice coincidence) but in the case where you get a true special unitary group, $H^1(G, B_c)$ does not have a reasonable group structure--it is just a pointed set like you expect.

How quickly do estrogens break down in the environment?

Of all the synthetic hormones we use, estrogens are probably the most common. They are used for birth control as well as hormone replacement therapy. This researcher also shows that there is plenty of it in milk because dairy cows are often pregnant while they are being milked.



Estrogen is a sturdy compound, very much like cholesterol. I was wondering if anyone had any idea how long it survives in the environment, given that some people are concerned about it disrupting animals' life cycles. How fast does it break down in the wild?

ca.analysis and odes - How to see meromorphicity of a function locally?

Given a germ of an analytic function on a (compact, for simplicity) Riemann surface, how can one see (locally) whether this is a "germ of a meromorphic function"? I.e., if I do analytic continuation along various paths, how can I be sure that I will never see an essential singularity?



Another formulation of this question is: how can one tell whether a convergent Taylor series determines a meromorphic function on the universal covering space of the Riemann surface?



The fact that there will be no essential singularity certainly implies something: e.g., when our Riemann surface is $\mathbb{CP}^1$, for a Taylor series to determine a meromorphic function it must be rational. But how does one check this locally, in a neighborhood of a point?



Thanks



P.S. I don't really know how to tag this question. Suggest a tag in comment please if possible.

big list - Top specialized journals

The following is my personal (i.e., includes all of my mathematical prejudices) ranked list of subject area journals in number theory.



From best to worst:



1) Algebra and Number Theory



2) International Journal of Number Theory



3) Journal de Théorie des Nombres de Bordeaux



4) Journal of Number Theory



5) Acta Arithmetica



6) Integers: The Journal of Combinatorial Number Theory



7) Journal of Integer Sequences



8) JP Journal of Algebra and Number Theory



For a slightly longer list, see



http://www.numbertheory.org/ntw/N6.html



but I don't have any personal experience with the journals listed there but not above.



Moreover, I think 1) is clearly the best (a very good journal), then 2)-5) are of roughly similar quality (all quite solid), then 6) and 7) have some nice papers and also some papers which I find not so interesting, novel and/or correct; I have not seen an interesting paper published in 8).



But I don't think that even 1) is as prestigious as the top subject journals in certain other areas, e.g. JDG or GAFA. There are some other excellent journals which, although not subject area journals, seem to be rather partial to number theory, e.g. Crelle, Math. Annalen, Compositio Math.



Finally, as far as analytic and combinatorial number theory goes, I think 4) and 5) should be reversed. (Were I an analytic number theorist, this would have caused me to rank 5) higher than 4) overall.)

Wednesday, 21 March 2007

linear algebra - Why were matrix determinants once such a big deal?

Very good question, I think. There is indeed a backlash against determinants, as evidenced for instance by Sheldon Axler's book Linear Algebra Done Right. To quote Axler from this link,




The novel approach used throughout the book takes great care to motivate concepts and simplify proofs. For example, the book presents, without having defined determinants, a clean proof that every linear operator on a finite-dimensional complex vector space (or on an odd-dimensional real vector space) has an eigenvalue.




Indeed. If you think that determinants are taught in order to invert matrices and compute eigenvalues, it becomes clear very soon that Gaussian elimination outperforms determinants in all but the smallest instances.



On the other hand, my undergraduate education spent a tremendous amount of time on determinants (or maybe it just felt that way). We built the theory (from $\dim \Lambda^n(K^n) = 1$), proved some interesting theorems (row/column expansion, block determinants, the derivative of $\det(A(x))$), but never used determinants to perform prosaic tasks. Instead, we spent a lot of time computing $n \times n$ determinants such as Vandermonde (regular and lacunary), Cauchy, circulant, Sylvester (for resultants)... and of course, giving a few different proofs of the Cayley-Hamilton theorem!
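As a quick sanity check of the Vandermonde formula $\det(x_i^{\,j})_{0 \le j < n} = \prod_{i<j}(x_j - x_i)$ mentioned above (a sketch of mine, not from the original course; the Leibniz-formula determinant is only meant for tiny matrices):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz-formula determinant; exponential time, but fine for tiny matrices
    n = len(M)
    total = Fraction(0)
    for pi in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if pi[i] > pi[j]:
                    sign = -sign  # one inversion flips the sign
        term = Fraction(1)
        for i in range(n):
            term *= M[i][pi[i]]
        total += sign * term
    return total

xs = [Fraction(v) for v in (1, 3, 4, 7)]
n = len(xs)
V = [[x ** j for j in range(n)] for x in xs]  # rows (1, x, x^2, x^3)
expected = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        expected *= xs[j] - xs[i]
assert det(V) == expected  # Vandermonde: det = prod_{i<j} (x_j - x_i)
```

Exact rational arithmetic sidesteps the numerical issues that make floating-point determinant checks unreliable.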



What's the moral of the story? I think it's twofold:



  1. Determinants are mostly a concern from a theoretical point of view. Computationally, the definition is awful. The most sensible thing to do to compute a determinant is to use Gaussian elimination, but if you're going to go through that bother, chances are that it's not really the determinant that you're after but rather something else that elimination will give you.


  2. Determinants are a fertile ground to get to grips with a lot of really fundamental mathematical tools that a student of abstract mathematics should know backwards and forwards. If you do everything I described above, you must learn deep results about the structure of the symmetric group $S_n$ (and more generally about multilinear forms), ten flavors of induction, and practical uses of group morphisms (from $GL_n(K)$ to $K^\star$). And of course, the existence of determinants itself is crucial to the more theoretical developments such a student will encounter later on.
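To make the computational remark in point 1 concrete, here is a minimal sketch (mine, not from the post) of the $O(n^3)$ Gaussian-elimination route to the determinant, using exact fractions to dodge floating-point issues:

```python
from fractions import Fraction

def det_gauss(M):
    # determinant via Gaussian elimination with exact fractions, O(n^3)
    A = [[Fraction(v) for v in row] for row in M]
    n = len(A)
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)      # a zero column: the matrix is singular
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det              # a row swap flips the sign
        det *= A[col][col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    return det

assert det_gauss([[2, 1], [5, 3]]) == 1
assert det_gauss([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 0
```

The determinant is the product of the pivots, with a sign flip per row swap; compare this with the $n!$ terms of the Leibniz definition.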


I've had pure math undergrads who had learned linear algebra from Axler's book. They knew how to compute a determinant. They had no idea why anyone would want to. So determinants are still a big deal, but just for the right audience: I'm perfectly fine with most scientists and engineers ignoring determinants beyond $3 \times 3$. Mathematics students, especially those with a theoretical bent, can learn a lot from determinants.

dg.differential geometry - Extending diffeomorphisms of Riemannian surfaces to the ambient space

Q1: Definitely not always. More like "almost never". If the automorphism extends to $\mathbb{R}^3$, then the bundle $S^1 \ltimes_f M$ would embed in $S^4$. Here $S^1 \ltimes_f M$ is the bundle over $S^1$ with fiber $M$ and monodromy $f$. The most commonly used obstructions to embedding in this case are things like the Alexander polynomial and Milnor signatures.



I don't see where the metric on $M$ plays a role for this.



If you want to see automorphisms that extend (and do not extend) for your Q1, take a look at my arXiv paper. You'll also find some references to several Jonathan Hillman papers that explore such obstructions.



In the case that your surface is unknotted -- bounding handlebodies on both sides (thinking of $M \subset S^3$) -- the automorphisms of $M$ that extend are well-known. They form the mapping class group of a Heegaard splitting of $S^3$. It's an infinite group. Generators are known for it (if I recall correctly, they're the automorphisms induced by handle slides), but off the top of my head I'm not sure how much is known about the structure of the group. Do a little Googling on "mapping class group of a Heegaard splitting of S^3" and you should start finding relevant material.



To respond to your 2nd edit: if the co-dimension is high enough, all automorphisms extend. This is a theorem of Hassler Whitney's. The basic idea is this: let $f : M \to M$ be an automorphism and let $i : M \to \mathbb{R}^k$ be any embedding. So you have two embeddings, $i \circ f$ and $i$, of $M$ in $\mathbb{R}^k$. Any two embeddings $M \to \mathbb{R}^k$ are isotopic provided the co-dimension is large enough; $k \geq 2m+3$ suffices, for example. So isotope your standard inclusion to the one pre-composed with $f$. The Isotopy Extension Theorem gives you the result.



For example, if $\Sigma$ is a Heegaard splitting / the surface is unknotted, $\Sigma \subset \mathbb{R}^3$ (or $\subset S^3$), and you have an automorphism $f : \Sigma \to \Sigma$, a necessary and sufficient condition for $f$ to extend to $\mathbb{R}^3$ (or to a side-preserving automorphism of the pair $(S^3,\Sigma)$ in the $S^3$ case) is that if $C \subset \Sigma$ is a curve that bounds a disc on either the inside or outside of $\Sigma$ respectively, then $f(C)$ bounds a disc on the inside or outside of $\Sigma$ respectively (here I'm using inside/outside in the sense of the Jordan-Brouwer separation theorem). Since the fundamental group of the complement is just a free product of infinite cyclic groups, this is something that can be checked rather easily provided you know the map $f$ well enough.

ag.algebraic geometry - A good example of a curve for geometric Langlands

This is a kind of question we asked ourselves about 10 years ago :)



Our answer: $P^1$ with nodes and cusps (and more general singularities) provides very good examples for doing this. The answer is actually motivated by Serre's "Algebraic groups and algebraic class fields ...", where he works with generalized Jacobians and abelian Langlands (i.e. class field theory).



We considered the part of the Langlands program that treats Hitchin D-modules (these are NOT all Hecke eigensheaves).



In papers with Dmitry Talalaev we described the classical Hitchin system on such curves.
http://arxiv.org/abs/hep-th/0303069
Hitchin system on singular curves I



http://arxiv.org/abs/hep-th/0309059 - here are more general singularities



The second step was to quantize the Hitchin Hamiltonians.
Actually it is the same as quantizing the Gaudin Hamiltonians.
A naive recipe works only for sl(2), sl(3) - http://arxiv.org/abs/hep-th/0404106



The breakthrough came when Dmitry Talalaev
(http://arxiv.org/abs/hep-th/0404153, Quantization of the Gaudin System)
found how to do TWO things simultaneously; he proposed a beautiful formula for:



1) All quantum Hitchin (Gaudin) Hamiltonians (later generalized to the whole center of the universal enveloping algebra of the loop algebra)



2) At the same time it gives the GL-oper explicitly (moreover it gives a "universal" GL-oper,
meaning that its coefficients are quantum Hitchin (Gaudin) Hamiltonians rather than complex numbers). Fixing the values of the Hitchin Hamiltonians we get a complex-valued GL-oper, which corresponds by Langlands to these Hamiltonians. So the Langlands correspondence Hitchin D-module -> GL-oper is made very explicit.



3) His formula makes explicit the idea that "the GL-oper is a quantization of the spectral curve".



To some extent this solves the question of the Langlands correspondence for the GL Hitchin system.
We have not written down the proof of the "Hecke-eigenvaluedness" of the Hitchin D-modules,
but it seems rather clear (maybe not the right word) if you take the appropriate point of view
on Hecke transformations - as in the paper by A. Braverman and R. Bezrukavnikov:
http://arxiv.org/abs/math/0602255
Geometric Langlands correspondence for D-modules in prime characteristic: the GL(n) case



One of the key ideas is that you can do everything in the "classical limit" and then quantize.
They worked over finite fields, so they could use a trick to pass from classical to quantum;
over the complex numbers we have explicit formulas by Talalaev, so the same should work.



Let me also mention that Hecke transformations are also known as Bäcklund transformations
in integrability theory; the relevant papers are:



http://arxiv.org/abs/nlin/0004003 Backlund transformations for finite-dimensional integrable systems: a geometric approach
V. Kuznetsov, P. Vanhaecke



http://arxiv.org/abs/nlin/0110045 Hitchin Systems - Symplectic Hecke Correspondence and Two-dimensional Version
A.M. Levin, M.A. Olshanetsky, A. Zotov




It would be a very nice project to consider from this point of view $P^1$ with a cusp;
the cotangent bundle to the moduli space of vector bundles is $\{[X,Y]=0\}/GL(n)$,
the same thing considered in Etingof and Ginzburg's paper



http://arxiv.org/abs/math/0011114 Symplectic reflection algebras, Calogero-Moser space, and deformed Harish-Chandra homomorphism



It would be very nice (and should be simple) to explicitly describe
the Hecke-Bäcklund transformations and their action on the Calogero-Moser system, and so on...

ho.history overview - Map of Number Theory

Your question about one book for number theory is like a non-mathematician asking about one book for all mathematics. It is simply not possible. It is a growing subject in various directions. The best I can attempt is to give a book each for each direction, approximating your question. It is impossible to give anything better than this.



For Analytic Number Theory, what you ask can be achieved by:



Iwaniec And Kowalski, Analytic Number Theory.



This is THE book. It is quite comprehensive. Includes L-functions, modular forms, random matrices, whatever.



For algebraic number theory, the book:



Cassels and Frohlich, Algebraic Number Theory



would tell you all about developments up to class field theory and Tate's thesis. It includes the cohomological version. This is a MUST for algebraic number theorists.



For Langlands' program, use the reference that Pete gives.



For Iwasawa theory, there are two books by Coates and Sujatha.



You might want to know a bit more about the applications of algebraic geometry into number theory. The way to go is through Silverman on elliptic curves, Q. Liu's book, Serre's books, etc..



A historic overview up to the time of Legendre can be found in Weil's book, "Number Theory: An Approach Through History from Hammurapi to Legendre".

Tuesday, 20 March 2007

Is it possible for the repeated doubling of a non-torsion point of an elliptic curve to stay bounded in the affine plane?

EDIT: this answer is wrong. I misread the question as looking at the group generated by P, not the points obtained by repeated doubling. The answer would be OK if the subset of S^1 obtained by taking a non-torsion point and repeatedly doubling came arbitrarily close to the origin---but it may not, as the comments below show. As I write, this question is still open. If a correct answer appears I might well delete this one.



Original answer:



"Bounded" in what sense? You mention heights, that's why I ask. But in fact the answer is "no" in both cases. The height will get bigger because of standard arguments on heights. And the absolute values of $x_n$ and $y_n$ will also be unbounded: think topologically! The real points on the curve are $S^1$ or $S^1 \times \mathbb{Z}/2\mathbb{Z}$, and if the point isn't torsion then the subgroup it generates will be dense in the identity component and hence will contain points arbitrarily close to the identity, which, by continuity, translates to "arbitrarily large absolute value" in the affine model.
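To illustrate the caveat in the edit above in the circle model alone (a numerical sketch of mine; it says nothing about boundedness of the affine coordinates on the curve): under the doubling map on $S^1 = \mathbb{R}/\mathbb{Z}$, a point whose binary expansion is the Fibonacci word is irrational (hence "non-torsion"), yet its doubling orbit stays at distance at least $1/8$ from $0$, because the word contains no "11" and no "000":

```python
# Fibonacci word via the substitution 0 -> 01, 1 -> 0: aperiodic, and it
# contains neither "11" nor "000", so every binary shift lies in [1/8, 3/4]
w = "0"
while len(w) < 400:
    w = "".join("01" if c == "0" else "0" for c in w)
assert "11" not in w and "000" not in w

def frac_from(digits):
    # value of the binary expansion 0.d1 d2 ... (truncated to the window)
    return sum(int(d) / 2 ** (i + 1) for i, d in enumerate(digits))

# doubling on S^1 is the shift on binary digits; track the distance to 0
dists = [min(f, 1 - f) for f in (frac_from(w[k:k + 60]) for k in range(300))]
assert min(dists) >= 0.125  # the orbit never gets closer to 0 than 1/8
```

The point is just that "non-torsion" does not force the doubling orbit (as opposed to the full subgroup generated) to approach the identity.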

rt.representation theory - some confusion about the explicit construction of irreducible representations of $S_n$

In this book chapter, the irreducible representations of the symmetric group $S_n$ are given in terms of polytabloids of a Ferrers diagram $\lambda$, defined as
$e_t = \sum_{\pi \in C_t} \text{sgn}(\pi)\, e_{\pi \{t\}}$.
Here $t$ is a tableau of $\lambda$, $C_t$ is the column-stabilizing subgroup for $t$ in $S_n$, and $\text{sgn}$ is the signature of the permutation $\pi$. Finally, $\{t\}$ is the equivalence class of tableaux (called a tabloid) represented by $t$, where two tableaux are considered equivalent if they have the same row entries.



My question is: how is the definition of polytabloids above independent of the choice of $t$ in its equivalence class? For instance, if $t$ is the tableau {1,2},{3,4}, then it's equivalent to $s$ = {2,1},{3,4}, but $e_t \neq e_s$. So maybe it's not independent of the representative. But then there seem to be too many polytabloids. I would also appreciate it if someone could help me establish the connection with Fulton and Harris's book on representation theory, problem 4.47. I am not sure what is meant by a standard tableau there. Also, in the second construction of the irreps of $S_n$ in the same problem, I am not sure how the action of $S_n$ on the polynomials is defined.

reference request - Uniformization theorem for Riemann surfaces

As has been pointed out, the inequivalence of the three is elementary.



The original proofs of Koebe and Poincare were by means of harmonic functions, i.e. the
Laplace equation $\Delta u = 0$. This approach was later considerably streamlined by means of Perron's method for constructing harmonic functions. Perron's method is very nice, as it is elementary (in complex analysis terms) and requires next to no topological assumptions.
A modern proof of the full uniformization theorem along these lines may be found in the book "Conformal Invariants" by Ahlfors.



The second proof of Koebe uses holomorphic functions, i.e. the Cauchy-Riemann equations, and
some topology.



There is a proof by Borel that uses the nonlinear PDE that expresses that the Gaussian curvature is constant. This ties in with the differential-geometric version of the Uniformization Theorem: Any surface (smooth, connected 2-manifold without boundary) carries a Riemannian metric with constant Gaussian curvature. (valid also for noncompact surfaces).



There is a proof by Bers using the Beltrami equation (another PDE).



For special cases the proof is easier. The case of a compact simply connected Riemann surface can be done by constructing a nonconstant meromorphic function by means of harmonic functions, and this is less involved than the full case. There is a short paper by Fisher, Hubbard and Wittner where the case of domains on the Riemann sphere is done by means of an idea of Koebe. (Subtle point here: Fisher et al consider non-simply connected domains on the Riemann sphere. The universal covering is a simply connected Riemann surface, but it is not obvious that it is biholomorphic to a domain on the Riemann sphere, so the Riemann Mapping Theorem does not apply).



The Uniformization Theorem lies a good deal deeper than the Riemann Mapping Theorem.
The latter is the special case of the former where the Riemann surface is a simply connected domain on the Riemann sphere.



I decided to add a comment to clear up a misunderstanding. The theorem that a simply connected surface (say smooth, connected 2-manifold without boundary) is diffeomorphic to the plane (a.k.a. the disk, diffeomorphically) or the sphere, is a theorem in topology, and is not the Uniformization Theorem. The latter says that any simply connected Riemann surface is biholomorphic (or conformally equivalent; same in complex dimension $1$) to the disk, the complex plane or the Riemann sphere.



But the topology theorem is a corollary of the Uniformization Theorem. To see this, suppose $X$ is a simply connected (smooth etc.) surface. Step (1): Immerse it in $\mathbb{R}^3$ so as to miss the origin. Step (2): Put the Riemann sphere (with its complex structure!) in $\mathbb{R}^3$ in the form of the unit sphere. Step (3): For every tangent space $T_pX$ on $X$, carry the complex structure $J$ from the corresponding tangent space on the Riemann sphere by parallel transport (Gauss map) to $T_pX$. This is well-defined by choosing a basepoint and recalling that $X$ is simply connected. Step (4): Presto! $X$ is now a Riemann surface (it carries a complex structure), so it is biholomorphic to the disk or the plane or the Riemann sphere, and thus diffeomorphic to one of the three.



Of course, I have glided over the question of immersing the surface in 3-space, because this is topology. Actually, I vaguely recall that there is a classification of noncompact topological surfaces by Johannsen (sp?), and no doubt the topological theorem would immediately fall out of that.

fourier analysis - Can the Walsh-Hadamard transform be used for convolution?

Not in the sense I think you mean it. First of all, the Walsh-Hadamard transform is a Fourier transform - but on the group $(\mathbb{Z}/2\mathbb{Z})^n$ instead of on the group $\mathbb{Z}/N\mathbb{Z}$. That means you can use it to compute convolutions in the space of functions $(\mathbb{Z}/2\mathbb{Z})^n \to \mathbb{C}$. Unfortunately, unlike the case of $\mathbb{Z}/N\mathbb{Z}$, you can't use this to approximate a compactly supported convolution on $\mathbb{Z}$, at least not directly.
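To make the positive part concrete (a sketch of mine, not from the answer): the fast Walsh-Hadamard transform diagonalizes XOR-convolution on $(\mathbb{Z}/2\mathbb{Z})^n$, exactly parallel to the FFT and cyclic convolution:

```python
def fwht(a):
    # in-place, unnormalized fast Walsh-Hadamard transform; len(a) a power of 2
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def xor_convolve(a, b):
    # c[k] = sum over i XOR j == k of a[i]*b[j]: transform, multiply, invert
    n = len(a)
    fa, fb = fwht(list(a)), fwht(list(b))
    fc = [x * y for x, y in zip(fa, fb)]
    fwht(fc)                      # applying the transform twice multiplies by n
    return [v // n for v in fc]   # exact for integer inputs

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [8, 7, 6, 5, 4, 3, 2, 1]
direct = [0] * 8
for i in range(8):
    for j in range(8):
        direct[i ^ j] += a[i] * b[j]
assert xor_convolve(a, b) == direct
```

The indexwise XOR here plays the role that addition mod $N$ plays for the ordinary FFT, which is exactly why the trick does not transfer to convolution on $\mathbb{Z}$.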

Monday, 19 March 2007

co.combinatorics - Balls in boxes (partition)

This proof uses a combinatorial equivalent of the Borsuk-Ulam theorem. I think the proof is a little more complicated than the average proofs here, so please check my related paper if you have difficulty understanding it.



Octahedral Tucker's lemma. If for any set-pair $A, B \subset [n]$, $A \cap B = \emptyset$, $A \cup B \neq \emptyset$ we have a color $\lambda(A,B) \in \pm[n-1]$ such that $\lambda(A,B) = -\lambda(B,A)$, then there are two set-pairs $(A_{1},B_{1})$ and $(A_{2},B_{2})$ such that $(A_{1},B_{1}) \subset (A_{2},B_{2})$ and $\lambda(A_{1},B_{1}) = -\lambda(A_{2},B_{2})$.



We will use this lemma for $n=100$. If, for the boxes in $A$, the sum of the red balls is more than half of the total number of red balls, then we set $\lambda(A,B) = +\text{red}$. If it is more than half in $B$, then we set $\lambda(A,B) = -\text{red}$. We do similarly for blue and green (if $\lambda$ is not yet set to red). We also set $\lambda(A,B) = \pm(|A|+|B|)$ if $|A|+|B| \le 96$ (if it is not yet set to anything else). This way the cardinality of the range of $\lambda$ is 99, just as in the lemma. It is easy to verify that it satisfies the conditions of the lemma, so there must be a set-pair for which we did not set any value. But in that case either $A$ or $B$ must be bigger than $96/2 = 48$, thus at least 49. We can put the remaining boxes into the other part and we are done.



Note that this proof easily generalizes to more colors.

big list - Interesting applications of the Pigeon-hole Principle

I think the solutions of these questions (using the pigeonhole principle) are very interesting; the first question is easy, but the second is more advanced:



1) For any integer $n$, there are infinitely many integers written with digits $0$ and $1$ only
that are divisible by $n$.



2) For any digit sequence $s = a_1 a_2 \cdots a_n$, there is at least one $k$ such that $2^k$ begins with $s$.
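Both statements can be checked by direct search; for 1), the pigeonhole step is that among the repunits $1, 11, 111, \ldots$ (up to $n+1$ ones) two must be congruent mod $n$, and their difference uses only the digits 0 and 1. A sketch of mine (the helper names are my own):

```python
def zero_one_multiple(n):
    # pigeonhole: among the repunits 1, 11, ..., (n+1 ones), two are congruent
    # mod n, and their difference -- some 1s followed by 0s -- is divisible by n
    seen, r = {}, 0
    for k in range(1, n + 2):
        r = (r * 10 + 1) % n
        if r == 0:
            return int("1" * k)
        if r in seen:
            return int("1" * (k - seen[r]) + "0" * seen[r])
        seen[r] = k

def power_of_two_starting_with(s):
    # terminates because frac(k*log10(2)) is equidistributed (log10(2) irrational)
    k = 1
    while not str(1 << k).startswith(s):
        k += 1
    return k

for n in range(1, 300):
    m = zero_one_multiple(n)
    assert m % n == 0 and set(str(m)) <= {"0", "1"}

assert str(1 << power_of_two_starting_with("12")).startswith("12")
```

For 2) the search can take a while for long digit strings, since the target interval for $\{k \log_{10} 2\}$ shrinks with the length of $s$.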

Friday, 16 March 2007

lo.logic - Is there formal definition of universal quantification?

From Wikipedia, quantification has the meaning:




In logic, quantification is the
binding of a variable ranging over a
domain of discourse




Is there any formal "definition" of the universal quantifier, for example one using the definition of the domain of discourse?



I mean: is there a formula built without the universal quantifier (or the existential one) which has the same meaning when interpreted over a defined domain of discourse?



For example:
Suppose we use a domain of discourse (DoD) given by $U = \{x \mid \phi(x)\}$ for some $\phi(x)$. Then naively we may write:



$(\forall (x \in U)\, \Phi(x)) \equiv (\{x \mid \phi(x)\} \Rightarrow \Phi(x))$



In words: to say that some property holds for every x in the DoD is the same as to say that if x is chosen from the DoD then it has this property.



We may also try the following one:
$(\forall (x \in U)\, \Phi(x)) \equiv ((\{x \mid \phi(x)\} \Rightarrow \Phi(x)) \Rightarrow (\phi(x) \Leftrightarrow \Phi(x)))$



In words: to say that some property holds for every x in the DoD is the same as to say that $\phi$ and $\Phi$ are "evenly spanned".



Do You know any reference for such matter?




Gabriel: Yes, I agree that from a formal point of view, in mathematical practice the DoD is a set, and extending it to a bigger universe is usually done in a purely formal way, possibly by adding extra axioms, etc. But this is a kind of mathematical practice: "nearly every decent theory we know is defined for a DoD that is a set or smaller, but since it also works for proper classes we try to write it that way". We then omit an important statement: each time, the DoD has to be defined, and additional axioms about its existence, definition and properties have to be added to the theory. I am only a hobbyist, but I do not know any theorem which states: a structure serving as the DoD for a formal theory over a countable language has to have "this and this" property. Of course, in a formula such as $\{x \mid \phi(x)\}$ we may require that $\phi(x)$ has some property: for example, that it is in a first-order language, or a second-order one, or a finite-order language, etc. To me it is rather clear that it cannot be whatever I like. As far as I know, we do not have any theory for that. But maybe I am wrong?



So my question is: what does "for all" mean in the context of different definitions of the DoD (and likewise "there exists")? Do we have a clear meaning for very big universes? We use an operator here named "for all", but can we define its meaning in a syntactical way? If not, may we be sure that the meaning of the phrase "for all" is clearly defined?



I suggest this is an example of incomplete inductive reasoning about the possible ways of using the universal quantifier in mathematics. Moreover I suppose, even after reading something about the Hilbert epsilon calculus, that quantifiers usually have only an intuitive meaning; that is, their definition is far from the level of formality of, say, the binary relation $\in$ in ZFC, where it may be interpreted by anything (for example, in the von Neumann hierarchy "model" of ZFC it is an order). When we try to define a formal theory we want to abstract from the "meaning" of the symbols and give only purely syntactical rules for them. As far as I know (but I know not much), there is no such definition for quantifiers, even in the Hilbert epsilon calculus, because it omits the question of possible, acceptable or correct definitions of the domain of discourse.

Thursday, 15 March 2007

homeostasis - Glomerular Filtration Rate

The previous answer isn't quite correct, because, well, blood flow is complicated and the body has autoregulatory mechanisms. As Alex mentioned, vasoconstriction doesn't occur in capillaries because they lack a muscle layer. It does occur in the small arterioles before and after the glomerulus (the afferent and efferent arterioles, respectively).



Bernoulli's principle mainly applies to larger arteries and arterioles, and not capillaries. Capillaries are so tiny that the red blood cell is a significant portion of the diameter, so the assumption of no viscosity is no longer valid. See this discussion for an explanation.



To address your question, here's a basic picture of the glomerulus, from Gray's Anatomy.
image



The afferent arteriole is in red, and the efferent arteriole in blue. The glomerular filtration rate (GFR) depends on the pressure difference between the two, which affects the hydrostatic pressure at the glomerulus. Also, note that vasodilation has the opposite effect of vasoconstriction on GFR. (For a nice image and explanation, look at Figure 6 here) So:



  • Constriction of the afferent arteriole decreases the blood flow into the glomerulus and thus the glomerular hydrostatic pressure, which leads to a decrease in GFR.


  • In contrast, constriction of the efferent arteriole decreases blood flow out of the glomerulus, and this increases the glomerular hydrostatic pressure and leads to an increase in GFR.


Both the afferent and efferent arterioles are regulated through hormones (and drugs that inhibit the hormones). Prostaglandins dilate the afferent arterioles, and NSAIDs inhibit this action. Angiotensin II preferentially constricts the efferent arteriole, and this action is restricted by an ACE inhibitor. (Source: First Aid for the USMLE Step 1)

Wednesday, 14 March 2007

Human sleep cycles and dream times, what influences the timing and intensity? Sleep history included

I'm doing sleep and dreams research and have developed an iPhone application to help me track my bedtime, rise time, sleep onset and also mark dreams. The app also monitors overall activity overnight as an Actigraph, recording periods of elevated motion and inactivity.



I have a lot of sleep cycle data that I have trouble analyzing.



I know that the circadian rhythm is at play, with 90-110 minute sleep cycles exhibited over the course of the night; each one is a progression through various sleep stages, from NREM to REM. A dream is more likely to be reported if the sleeper is awakened during REM.



What I'm seeing is a mix of orderly patterns and a lot of chaotic behavior that I cannot explain. The times of going to sleep and awakening are particularly chaotic, while dreams exhibit more orderly behavior, being reported at similar times on subsequent days.



I'm also interested in any insight as to what's causing me to wake up at certain times and report dreams. What could be involved in this kind of behavior? I saw research about diuretic hormones not being suppressed during the night, causing the urge to urinate to awaken the dreamer.



Right now I have 90 days of sleep onset and dream data that looks like this:









Complete 90 day sleep history



  1. Each black marker indicates the sleep onset time for the night.

  2. The orange marker indicates the time of getting out of bed.

  3. The green markers are dreams that I report upon awakening from a dream, so dreams occur prior to the marker's location.

  4. Each green marker's brightness indicates more clearly remembered and recorded dream content.

  5. Cyan markers are lucid dreams, where I was aware that I was dreaming within a dream.

  6. The cyan color of a marker indicates that I could control the dream's content from within the dream.

  7. Red markers are periods of insomnia.

  8. The cyan line is the number of hours after bedtime.

  9. The purple line is a sleep cycle approximation (90 minutes).
The data is stacked with each row being a day of data. You will notice that your eyes start to automatically seek patterns within the vertical arrangement of markers. Because each marker is a dream reported upon awakening, they roughly trace the sleep cycles over the course of the night.



I'm interested in learning: what influences the desire to go to bed at a particular time?
What influences the times of getting out of bed?
Is there some sort of analysis I can perform to understand the pattern within the sleep cycles as it evolves over multiple nights?



Any input is appreciated!

Tuesday, 13 March 2007

co.combinatorics - Generalization of permanent definition based on number of permutation cycles

Let $A$ be an $n$ by $n$ matrix and $x$ a free parameter. Define
$$p(A,x) = \sum_{\pi \in S_n} x^{n(\pi)} A_{1\pi(1)} \cdots A_{n\pi(n)},$$
where $\pi$ ranges over the permutation group $S_n$ and $n(\pi)$ is the number of cycles in the cycle decomposition of $\pi$. Clearly, $p(A,1) = \mathrm{perm}(A)$, the permanent. In general, $p(A,x)$ has properties in common with the permanent, such as $p(PAP^{-1},x) = p(A,x)$ for permutation matrices $P$ (conjugation relabels the indices and preserves cycle counts).
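A brute-force sketch of the definition for small $n$ (mine; the helper names `cycle_count` and `p_poly` are my own, and `p_poly` returns the coefficient list of $p(A,x)$ in powers of $x$):

```python
from itertools import permutations

def cycle_count(pi):
    # number of cycles of the permutation pi (a tuple of images of 0..n-1)
    seen, count = set(), 0
    for i in range(len(pi)):
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = pi[j]
    return count

def p_poly(A):
    # coefficient list of p(A, x): coeff[k] is the coefficient of x^k
    n = len(A)
    coeff = [0] * (n + 1)
    for pi in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][pi[i]]
        coeff[cycle_count(pi)] += term
    return coeff

A = [[1, 2], [3, 4]]
coeff = p_poly(A)         # here: [0, 6, 4], i.e. p(A, x) = 4x^2 + 6x
assert sum(coeff) == 10   # p(A, 1) = permanent = 1*4 + 2*3
```

Summing the coefficients evaluates the polynomial at $x=1$, recovering the permanent as stated above.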



Is this a well-known structure in combinatorics and where might I find more information?

ac.commutative algebra - Bizarre operation on polynomials

I came across something similar in the context of extending boolean functions to real arguments. I thought it was pretty amusing, so let me share it here.



If we understand 1 to be "true" and 0 to be "false", one way to define "x and y" is as xy. Similarly, "x or y" can be defined as x + y - xy, and "not x" can be 1 - x. There are good reasons to prefer these definitions over others. For example, if x and y are the probabilities of independent events, then xy, x + y - xy, etc. are respectively the probabilities of the conjunction, disjunction, etc. This gives a systematic way to extend boolean functions $\{\mathrm{false}, \mathrm{true}\}^n \to \{\mathrm{false}, \mathrm{true}\}$ to polynomials $[0,1]^n \to [0,1]$. These polynomials satisfy some (but relatively few) of the nice properties of their boolean counterparts, like de Morgan's law: x + y - xy = 1-(1-x)(1-y).



On the other hand, we could treat 0 as "true" and infinity as "false" and try to define boolean functions on the nonnegative extended real line $[0,\infty]$. It seems we can begin by taking our polynomials defined above, expressed appropriately, and interchanging addition with multiplication and subtraction with division!



Conjunction: $x \cdot y \ \to\ x + y$



Disjunction: $(x + y) - (x \cdot y) \ \to\ (x \cdot y) / (x + y)$



Negation: $1 - x \ \to\ 1 / x$



Amusingly, these substitutions preserve de Morgan's law: $xy/(x + y) = 1/(1/x + 1/y)$. I don't think you can run with this all the way to the finish line. For example, the polynomial for exclusive-or is x + y - 2xy, but I don't see an easy way to express that to make the substitution go through. However, I do believe we have the following:




For every boolean function $\{\mathrm{false}, \mathrm{true}\}^n \to \{\mathrm{false}, \mathrm{true}\}$, there is a polynomial extension $f:[0,1]^n \to [0,1]$ and a rational extension $g:[0,\infty]^n \to [0,\infty]$ such that, expressed appropriately, $f$ and $g$ are obtained from one another by the addition/multiplication interchange described above.




For example, for exclusive-or, we have $(x + y - xy)(1 - xy) \to xy/(x+y) + 1/(x+y) = (1 + xy)/(x + y)$. However, $(x + y - xy)(1-xy) = x + y - 2xy + xy(1-x)(1-y) \neq x + y - 2xy$, so we used a "noncanonical" polynomial. There are other examples where we can use the canonical polynomial, though. For example, for the 3-ary majority function, we have



$$xy + xz + yz - 2xyz \ \to\ (x + y)(x + z)(y + z)/(x + y + z)^2.$$
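A quick numerical check of this majority-function pair (my own sketch, not from the original; `BIG` plays the role of $\infty$ for "false", and a tiny offset avoids $0/0$ at the all-true corner):

```python
BIG = 1e9    # stands in for "infinity" (false) in the multiplicative picture
EPS = 1e-12  # tiny offset so the all-true corner is not 0/0

def maj_poly(x, y, z):
    # canonical polynomial for 3-ary majority (1 ~ true, 0 ~ false)
    return x*y + x*z + y*z - 2*x*y*z

def maj_rat(x, y, z):
    # rational function from the addition/multiplication interchange (0 ~ true)
    return (x + y) * (x + z) * (y + z) / (x + y + z) ** 2

for bits in range(8):
    x, y, z = ((bits >> i) & 1 for i in range(3))
    want = 1 if x + y + z >= 2 else 0
    assert maj_poly(x, y, z) == want
    # in the multiplicative picture: small value ~ true, large value ~ false
    u, v, w = (EPS if b else BIG for b in (x, y, z))
    assert (maj_rat(u, v, w) < 1.0) == (want == 1)
```

At each of the eight boolean corners the rational function comes out near $0$ exactly when the polynomial comes out $1$, matching the two conventions for "true".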



I know this is not exactly what you asked about, since it involves subtraction and it's really an operation on expressions, not functions, but I hope it's in the right spirit.

Monday, 12 March 2007

senescence - Do we know if RHEB is more sensitive to some amino acids than other amino acids?

RHEB does not sense amino acids. See the abstract of this paper.




Activation of this pathway requires inhibition of the tumor suppressor
complex TSC1/2. TSC2 is a GTPase-activating protein for the small
GTPase Ras homologue enriched in brain (Rheb), GTP loading of which
activates mTOR by a yet unidentified mechanism. The level at which
this pathway senses the availability of amino acids is unknown but is
suggested to be at the level of TSC2. Here, we show that amino-acid
depletion completely blocks insulin- and TPA-induced Rheb activation.
This indicates that amino-acid sensing occurs upstream of Rheb.
Despite this, amino-acid depletion can still inhibit mTOR/S6 kinase
signaling in TSC2-/- fibroblasts. Since under these conditions
Rheb-GTP levels remain high, a second level of amino-acid sensing
exists, affecting mTOR activity in a Rheb-independent fashion.




RHEB also doesn't have any amino-acid binding (or any small molecule binding) domain.

Sunday, 11 March 2007

ag.algebraic geometry - Relationship between algebraic and holomorphic differential forms

Yes, every algebraic differential form is holomorphic and yes, the differential preserves the algebraic differential forms. If you are interested in projective smooth varieties then every holomorphic differential form is automatically algebraic thanks to Serre's GAGA. This answers (3).



Concerning (1) and (2) I suggest that you consult some standard reference as Hartshorne's Algebraic Geometry.



Edit: As pointed out by Mariano in the comments below, there are subtle points when one compares Kähler differentials and holomorphic differentials. I have to confess that I had not thought about them when I first posted my answer above.



Algebraic differential $1$-forms over a Zariski open set $U$ are elements of the module generated by $a\,db$, with $a$ and $b$ regular functions over $U$ (hence algebraic), subject to the relations $d(ab) = a\,db + b\,da$, $d(a + b) = da + db$ and $d\lambda = 0$ for any complex number $\lambda$. Since these are the rules of calculus there is a natural map to the module of holomorphic $1$-forms over $U$. This map is injective, since the regular functions on $U$ are not very different from quotients of polynomials.



If instead of considering the ring $B$ of regular functions over $U$ one considers the ring $A$ of holomorphic functions over $U$, then one can still consider its $A$-module of Kähler differentials. If $U$ has sufficiently many holomorphic functions, for instance if $U$ is Stein, then one now has a surjective map to the holomorphic $1$-forms on $U$ which is no longer injective, as pointed out by Georges Elencwajg in this other MO question.

ca.analysis and odes - How to invert this series?

The inverse series doesn't have that form. If $z=e^{-Q} (1+Q^{-1})$ then



$$Q = - \log z + \log (1+Q^{-1}) = -\log z - (\log z)^{-1} + O((\log z)^{-2}) \quad \mbox{as } z \to 0^{+}.$$



The general form should be



$$Q = - \log z + \sum_{i>0} b_i (\log z)^{-i}.$$



You might be able to coerce this into the Lagrange inversion form, but I don't see how right now. I would just generalize the solution above:



Write
$$Q = - \log z + \log \left( 1 + \sum a_i Q^{-i} \right)
= - \log z + \sum_k \frac{(-1)^{k+1}}{k} \left( \sum a_i Q^{-i} \right)^k.$$



Expanding this will give you a formula of the form
$$Q = - \log z + \sum_{i=1}^N c_i Q^{-i} + O(Q^{-N-1}) \quad (*)$$
for any $N$ you like. If you already know that $Q = - \log z + \sum_{i=0}^{N-1} b_i (\log z)^{-i} + O((\log z)^{-N})$, then plug your known values into $(*)$ to deduce the value of $b_N$.
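The two-term asymptotic expansion is easy to confirm numerically (a sketch of my own; the tolerance constant is an assumption consistent with an $O((\log z)^{-2})$ error term):

```python
import math

# For z = e^{-Q} (1 + 1/Q), check numerically that
# Q = -log z - (log z)^{-1} + O((log z)^{-2}) as z -> 0+.
for Q_true in [10.0, 20.0, 40.0]:
    z = math.exp(-Q_true) * (1 + 1/Q_true)
    L = math.log(z)                 # L = log z < 0 for small z
    approx = -L - 1/L               # two-term asymptotic expansion
    # the error should shrink like (log z)^{-2}
    assert abs(Q_true - approx) < 3 / L**2
```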

nt.number theory - Lower bounds for linear forms of logarithms (a la Baker)?

Let $\lambda_1$, $\lambda_2$, and $a$ be three fixed complex algebraic numbers.



For a given integer $n$, write
$\Theta(n) = \arg(a \lambda_1^n + \lambda_2^n)$.



Assuming $\Theta(n)$ is not zero, I am looking for 'good' lower bounds on
$|\Theta(n)|$. By 'good' I mean that if $|\Theta(n)| > B(n)$,
then $1/B(n)$ should asymptotically grow slower than any exponential in $n$.



Is there a way to use one of Baker's theorems (which provide effective
lower bounds on linear combinations of logs of algebraic numbers) to achieve
this?



For example, writing instead $\Gamma(n) = \arg(a \lambda_1^n \lambda_2^n)$
(say), one can get polynomial bounds on $|\Gamma(n)|$: noting that
$\displaystyle{\Gamma(n) = \log\left(\frac{a \lambda_1^n \lambda_2^n}
{|a \lambda_1^n \lambda_2^n|}\right) =
\log a + n \log \lambda_1 + n \log \lambda_2 - \log |a| -
n \log |\lambda_1| - n \log |\lambda_2|}$
we can apply e.g. Baker-Wüstholz (1993) to the above linear form
and get a lower bound $|\Gamma(n)| > C(n)$ (assuming that $|\Gamma(n)|$ is non-zero)
such that $1/C(n)$ is in fact bounded by a fixed polynomial in $n$.



The problem in getting a similar lower bound for $|\Theta(n)|$ is that, even though
$\Theta(n)$ can be written as a linear combination of logs of algebraic
numbers of constant degree, as for $\Gamma(n)$, the height of the
algebraic number $a \lambda_1^n + \lambda_2^n$ is potentially exponential in
$n$, and it does not seem that taking logs will help here.



The critical case is of course when $\lambda_1$ and $\lambda_2$ have the
same magnitude. In fact, I would be happy for an approach with even very
simple values of $a$, such as $a = 2$.

Saturday, 10 March 2007

cell biology - Can NSAIDs impact negatively the healing of tendons?

I would suggest you contact your doctor if you are suffering from tendinitis.



NSAIDs are often used as part of treatment for tendinitis, but like all medications they can have side effects. Therefore, again, if you have concerns about taking NSAIDs, contact your doctor.



As for the mechanism of action of NSAIDs: they don't constrict vessels but instead reduce the production of inflammatory mediators (prostaglandins), which act to dilate some vessels. These inflammatory mediators also make blood vessels more permeable so that nutrients and cells from the blood can pass through the vessel walls and carry out repairs to damaged tissue. The problem with the body's inflammatory response, however, is that it can be too severe, so that rather than promoting healing it can itself cause damage and pain. For this reason (and many others) NSAIDs may be useful.



But again, if you have concerns about taking NSAIDs or about any medical condition, please consult your doctor. The internet is not an appropriate place for medical advice; your doctor is.

Friday, 9 March 2007

co.combinatorics - Inequality of the number of integer partitions

No, he probably means exactly what he said. That is the way the partition function is usually defined. But either way, the answer is no.



If $q(k,n)$ counts partitions of $n$ into integers no bigger than $k$, as Jonah suggests, then note that $q(2,2m) = m+1$ for every $m$. (A partition is determined by the number of 2's.) So being able to compare values of $q(k,n)$ would in particular entail being able to compare $q(k,n)$ to any given integer.



As for the question as actually asked, note that $p(2k,4k-1)=k+1$ for every $k$. Once again, knowing the relative sizes of all $p(k,n)$ is tantamount to knowing whether $p(k,n)$ is more or less than each integer, i.e. knowing the values of $p(k,n)$.
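The value $q(2,2m)=m+1$ is easy to confirm with a short dynamic program (my own helper, using the "parts of size at most $k$" convention mentioned above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(k, n):
    """Number of partitions of n into parts of size at most k."""
    if n == 0:
        return 1
    if k == 0 or n < 0:
        return 0
    # either use at least one part of size k, or use no part of size k
    return q(k, n - k) + q(k - 1, n)

# q(2, 2m) = m + 1: the partition is determined by the number of 2's
for m in range(1, 20):
    assert q(2, 2 * m) == m + 1
```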

Thursday, 8 March 2007

zoology - Are there dextral/sinistral higher animals?

Handedness has been studied in several different species of toads. As basal tetrapods, the authors argue that these taxa are unlikely to be influenced by human hand dominance and are thus a better model for studying the evolution of handedness.



Bisazza et al. (1997) studied "pawedness" in Bufo bufo, B. viridis, and B. marinus in wild-caught animals by determining which forelimb was used to remove a piece of paper from the animals' snout or a balloon placed over its head.



The results were mixed. B. bufo preferred the right forelimb in both tests, but B. viridis and B. marinus did not. B. marinus did preferentially turn its head left in another experiment, using the right forelimb for support.



A subsequent study by Malashichev and Nikitina (2002) showed that Bombina viridis is "lefthanded," while Bombina bombina is "ambidextrous."



So based on these studies, lateralization appears to have a long evolutionary history.

gr.group theory - Realizing groups as automorphism groups of graphs.

In the topological setting or if you want to relate the size of the graph to the size of the group, there are two relevant results:



(1) Any closed subgroup of $S_\infty$, i.e., of the group of all (not just finitary) permutations of $\mathbb N$, is topologically isomorphic to the automorphism group of a countable graph.



(2) The abstract group of increasing homeomorphisms of $\mathbb R$, $\mathrm{Homeo}_+(\mathbb R)$, has no non-trivial actions on a set of size $<2^{\aleph_0}$. So in particular, it cannot be represented as the automorphism group of a graph with fewer than continuum many vertices.

Wednesday, 7 March 2007

nt.number theory - The resultant and the ideal generated by two polynomials in $\mathbb{Z}[x]$

This difference was well-known in the 19th century when people
a) Knew about invariants, and
b) Calculated by hand.
I believe a lot of the confusion today stems from Lang's Algebra book which is at best misleading about how to interpret what the resultant and discriminant are (and the ideas of famous books, right or wrong, tend to be perpetuated in other people's books!).



As an example, the resultant of the two polynomials $3x+1$ and $3x+2$ is, according to Sylvester's matrix definition, equal to $3$. Here Voloch's $D=1$. Surely this makes no sense according to the well-known theory, that a prime $p$ dividing the resultant of two polynomials should be interpreted to mean that these polynomials share a root when reduced mod $p$? This is evidently nonsense in this example ... unless one re-interprets these polynomials projectively (which is what one should do).



But now if we look at the pair of polynomials $y+2$ and $3y+2$ then the resultant is $4$, and here $D=4$, but how do you interpret here $2^2$ dividing the resultant? It is not immediate from the interpretation of modern algebra books!
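Both resultants can be checked directly from Sylvester's matrix definition; here is a small sketch of my own, using exact rational arithmetic (note that sign conventions for the resultant vary, so only the absolute value is asserted):

```python
from fractions import Fraction

def sylvester_resultant(f, g):
    """Resultant of f and g, given as coefficient lists [a_n, ..., a_0],
    computed as the determinant of the Sylvester matrix."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):          # n shifted copies of f
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (size - m - 1 - i))
    for i in range(m):          # m shifted copies of g
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (size - n - 1 - i))
    # exact Gaussian elimination for the determinant
    det = Fraction(1)
    for col in range(size):
        pivot = next((r for r in range(col, size) if rows[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            rows[col], rows[pivot] = rows[pivot], rows[col]
            det = -det
        det *= rows[col][col]
        for r in range(col + 1, size):
            factor = rows[r][col] / rows[col][col]
            rows[r] = [a - factor * b for a, b in zip(rows[r], rows[col])]
    return det

assert abs(sylvester_resultant([3, 1], [3, 2])) == 3   # res(3x+1, 3x+2)
assert abs(sylvester_resultant([1, 2], [3, 2])) == 4   # res(y+2, 3y+2): divisible by 2^2
```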



There are all sorts of reasons that prime powers can divide a resultant (and discriminant) and it is complicated to understand all the cases when you wish to interpret higher power divisibility.



In Bhargava's work, he needs to understand squarefree values of a multivariable polynomial which is the value of the discriminant of a class of parametrized polynomials. In other words he needs to parametrize when $p^2$ divides terms in this particular class of discriminants. Even this relatively simple request breaks down into several non-trivial cases, which he handles so beautifully as if to make it look trivial, but it's not.

protocol - How to set up a slow cooling on an AB Veriti thermal cycler?

As Thomas pointed out, there is a maximum ramp rate which you can set up on the thermocycler, and its value can go from 0% to 100%. I then assumed that a 100% ramp rate would correspond to the maximum ramp rate of 4.25°C/s for the heating block, or 5°C/s for the sample, which would mean that a cooling speed of 0.1°C/s corresponds to a ramp rate of ~2.4%. Subsequently, I set up my reaction at a 2% ramp rate and started a timer when the ramp cycle started. To my surprise, the temperature was cooling down too slowly. Thus, I empirically found out that a ramp rate of 5% corresponds to ~0.1°C/s cooling speed.



I wish there were a more precise way to calculate it. Perhaps the slope of the line changes at different temperature intervals. It would be great if I could talk to the AB people so that they could explain my finding.
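The back-of-the-envelope conversion used above (assuming, as I did, that the ramp percentage scales linearly with the block's maximum rate) is just:

```python
# Assumed linear scaling: percent = desired_rate / max_block_rate * 100
max_block_rate = 4.25   # deg C/s at a 100% setting, per the instrument spec quoted above
desired_rate = 0.1      # deg C/s target cooling speed

percent = desired_rate / max_block_rate * 100
print(f"predicted ramp setting: {percent:.1f}%")   # ~2.4%, vs the ~5% found empirically
```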

Tuesday, 6 March 2007

open problem - How can an approach to $P$ vs $NP$ based on descriptive complexity avoid being a natural proof in the sense of Razborov-Rudich?

EDIT: This question has been modified to make it a stand-alone question. Feel free to retract your votes for the previous version.



Here are Vinay Deolalikar's paper, and Richard Lipton's first post about it, and the wiki page on polymath site summarizing the discussions about it. His approach is based on descriptive complexity.



One of the famous barriers for separating $NP$ from $P$ is the Razborov-Rudich Natural Proofs barrier. Richard Lipton remarked about his paper and the natural proofs barrier that apparently "it exploits a uniform characterization of P that may not extend to give lower bounds against circuits". A question which is mentioned in one of the comments on Lipton's post is:




How essential is the uniformity of $P$ to his proof?




i.e. is the uniformity of $P$ used in such an essential way that the barrier will not apply to it? (By essential I mean that the proof does not work for the non-uniform version.)



So here are my questions:




Are there any previous computational complexity results based on descriptive complexity that avoid the Razborov-Rudich natural proofs barrier (because of being based on descriptive complexity)?



How can an approach to $P$ vs $NP$ based on descriptive complexity avoid being a natural proof in the sense of Razborov-Rudich?




A related question is:




What are the complexity results using uniformity in an essential way other than proofs by diagonalization?





Related closed MO posts:
https://mathoverflow.net/questions/34947/when-would-you-read-a-paper-claiming-to-have-settled-a-long-open-problem-like-p
https://mathoverflow.net/questions/34953/whats-wrong-with-this-proof-closed



Discussion on meta:
http://tea.mathoverflow.net/discussion/590/whats-wrong-with-this-proof/

ct.category theory - Classifying spaces for enriched categories

Edit: Modified in accordance with Tom Leinster's entirely reasonable objections.



Sorry to exhume this question from 5+ years ago. In case someone is still looking for an answer, note that a very specific version (restricting to $V = \text{Cat}$) is addressed up to homotopy in the paper




M. Bullejos and A. Cegarra, On the geometry of 2-categories and their classifying spaces. K-Theory, 29:211-229, 2003.




using the Duskin/Street nerve construction. Given a $V$-enriched category $C$, one constructs the simplicial set $\Delta C$ as follows: vertices are the objects of $C$, and higher simplices spanning objects $x_0,\ldots,x_d$ consist of



  1. 1-morphisms $f_{ij}:x_i \to x_j$ for $0 \leq i \leq j \leq d$, and

  2. 2-morphisms $\alpha_{ijk}:f_{ik} \Rightarrow f_{jk} \circ f_{ij}$ for $0 \leq i \leq j \leq k \leq d$,
    subject to certain associativity and identity constraints (see the Introduction of the linked paper for details).

One can also construct the Segal nerve as outlined in Chris Schommer-Pries's answer, but it is not as directly related to the objects and morphisms of the underlying enriched category. Bullejos and Cegarra show in the linked paper that the two constructions are naturally homotopy-equivalent so it doesn't matter too much either way!

lo.logic - cut elimination

Cut elimination is indispensable for studying fragments of arithmetic. Consider for example the classical Parsons–Mints–Takeuti theorem:



Theorem. If $I\Sigma_1\vdash\forall x\,\exists y\,\phi(x,y)$ with $\phi\in\Sigma^0_1$, then there exists a primitive recursive function $f$ such that $\mathrm{PRA}\vdash\forall x\,\phi(x,f(x))$.



The proof goes roughly as follows. We formulate $\Sigma^0_1$-induction as a sequent rule
$$\frac{\Gamma,\phi(x)\longrightarrow\phi(x+1),\Delta}{\Gamma,\phi(0)\longrightarrow\phi(t),\Delta},$$
include axioms of Q as extra initial sequents, and apply cut elimination to a proof of the sequent $\longrightarrow\exists y\,\phi(x,y)$ so that the only remaining cut formulas appear as principal formulas in the induction rule or in some axiom of Q. Since other rules have the subformula property, all formulas in the proof are now $\Sigma^0_1$, and we can prove by induction on the length of the derivation that existential quantifiers in the succedent are (provably in PRA) witnessed by a primitive recursive function given witnesses to existential quantifiers in the antecedent.



Now, why did we need to eliminate cuts here? Because even if the sequent $\phi\longrightarrow\psi$ consists of formulas of low complexity (here: $\Sigma^0_1$), we could have derived it by a cut
$$\frac{\phi\longrightarrow\chi\qquad\chi\longrightarrow\psi}{\phi\longrightarrow\psi}$$
where $\chi$ is an arbitrarily complex formula, and then the witnessing argument above breaks.



To give an example from a completely different area: cut elimination is often used to prove decidability of (usually propositional) non-classical logics. If you show that the logic has a complete calculus enjoying cut elimination and therefore subformula property, there are only finitely many possible sequents that can appear in a proof of a given formula. One can thus systematically list all possible proofs, either producing a proof of the formula, or showing that it is unprovable. Again, cut elimination is needed here to have a bound on the complexity of formulas appearing in the proof.
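For the propositional case, this proof-search argument can be made concrete: in a cut-free calculus with invertible rules, every rule application shrinks the sequent, so the search below terminates and decides provability. This is a minimal G3-style sketch of my own (the formula encoding is an arbitrary choice, not from any particular source):

```python
def provable(gamma, delta):
    """Decide the sequent Gamma => Delta in a cut-free G3-style calculus
    for classical propositional logic. Atoms are strings; compound
    formulas are tuples such as ('imp', A, B)."""
    if any(f in delta for f in gamma):        # axiom: shared formula
        return True
    for i, f in enumerate(gamma):             # invertible left rules
        if isinstance(f, tuple):
            g = gamma[:i] + gamma[i+1:]
            op = f[0]
            if op == 'not': return provable(g, delta + [f[1]])
            if op == 'and': return provable(g + [f[1], f[2]], delta)
            if op == 'or':  return provable(g + [f[1]], delta) and provable(g + [f[2]], delta)
            if op == 'imp': return provable(g, delta + [f[1]]) and provable(g + [f[2]], delta)
    for i, f in enumerate(delta):             # invertible right rules
        if isinstance(f, tuple):
            d = delta[:i] + delta[i+1:]
            op = f[0]
            if op == 'not': return provable(gamma + [f[1]], d)
            if op == 'and': return provable(gamma, d + [f[1]]) and provable(gamma, d + [f[2]])
            if op == 'or':  return provable(gamma, d + [f[1], f[2]])
            if op == 'imp': return provable(gamma + [f[1]], d + [f[2]])
    return False                              # atomic sequent with no shared atom

# Peirce's law ((p -> q) -> p) -> p is provable without cut:
peirce = ('imp', ('imp', ('imp', 'p', 'q'), 'p'), 'p')
assert provable([], [peirce])
# p -> q is not valid:
assert not provable([], [('imp', 'p', 'q')])
# excluded middle:
assert provable([], [('or', 'p', ('not', 'p'))])
```

Every formula appearing during the search is a subformula of the goal, which is exactly the bound that cut elimination provides.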



Sigfpe wrote above in his answer that cut elimination makes proofs more complex, but that’s not actually true: cut elimination makes proofs longer, but more elementary, it eliminates complex concepts (formulas) from the proof. The latter is often useful, and it is the primary reason why so much time and energy is devoted to cut elimination in proof theory. In most applications of cut elimination one does not really care about having no cuts in the proof, but about having control of which formulas can appear in the proof.

Monday, 5 March 2007

lo.logic - Theory interpreted in non-set domain of discourse may be consistent?

It appears that you yearn to study various first-order theories, but do not want to be constrained by any requirement that your models, or domains of discourse, be sets. There are several ways to take such a proposal.



On the one hand, many mathematicians have yearnings similar to yours, and this has led them to try to use category theory as a theoretical background for their mathematical investigations. Surely this is part of the attraction of category theory, and some promote category theory as a kind of alternative foundation of mathematics (that is, alternative to set theory) for precisely this kind of reason. But you say that you are not especially interested in adopting that view.



Another way to study mathematical structures that are not sets, while keeping a principally set-theoretic background, is to focus on the set-class distinction in set theory. If V is the universe of all sets, we can define certain classes in V, such as { x | φ(x) }, where φ is any property. Such a class is not always a set. For example, the Russell paradox is based on the observations that if the collection R = { x | x ∉ x } were a set, then R ∈ R iff R ∉ R, a contradiction. So R is not a set. But R is still a collection of sorts, and we call it a class. A proper class is a class that is not a set. In ZFC, one can treat classes and proper classes by manipulating their definitions. That is, the classes do not exist as objects within the set-theoretic universe, but rather as definable subcollections of the universe. The intuition is that proper classes are simply too big to be sets. Other proper classes would include the class V itself (consisting of all sets), the class of all ordinals, the class of all cardinals, the class of all groups, all rings, all monoids, etc. Each of these classes is too large to be a set, but each has a perfectly clear definition defining a family of objects.



There are other formalizations of set theory, such as Gödel-Bernays set theory GBC and Kelley-Morse set theory, that allow one to treat classes as objects. In these theories, there are two kinds of objects, sets and classes, and every set is a class, but there are classes that are not sets (such as those I listed above). It turns out that GBC is a conservative extension of ZFC, which means that the assertions purely about sets that are provable in GBC are exactly the same as the assertions about sets that are provable in ZFC. Kelley-Morse, in contrast, is not conservative over ZFC, and it implies, in particular, that there must be set models of ZFC.



Now, the point is that you could study magmas that are proper classes. These would not be sets, but would still exist and could be formally analyzed as mathematical structures in these various set-theoretic backgrounds. For example, one magma is simply the set union operation: (a,b) maps to a ∪ b, defined on all pairs of sets a, b. This magma is not a set, simply because it is much too large. There are innumerably many other such examples.

Saturday, 3 March 2007

neuroscience - What are the positive and negative effects of insulin on cognitive function?

Diabetes



I've not read about any conclusive evidence for a link between insulin and differential cognitive function, but I have read studies that link type-2 diabetes and impaired cognition (1). I will point out now that this is cross-sectional, so the study only reports associations (i.e. diabetes may not necessarily cause the cognitive impairment).



The study I have mentioned does not conclude that this is caused by raised insulin, but rather the effects on the vascular system (specifically microvascular). They also find a significant interaction between diabetes and smoking status, in the context of cognitive impairment. Again, this seems likely to be the vascular system, rather than insulin levels.



A recent review paper also refers to the links between diabetes and executive function (2), but again makes no reference to insulin as the cause, but rather microvascular changes, hypertension, and other associated traits (again, the causes are not known, these are speculative based on the evidence).



Diabetes may not be the best model for studying this though, as it can be characterized by either high insulin (tolerance), or low insulin (impairment) - therefore the association may not be found this way.



Insulin



In a separate review (this may be the best one for you if you only read one of the papers I've referenced) the author posits that whilst insulin may have a neuroprotective role, it also increases amyloid-beta metabolism and tau phosphorylation, possibly contributing to Alzheimer's-like pathologies (3). Studies have found that insulin infusion directly improves cognitive performance (4), but the long-term effects are less certain - constantly raised serum insulin levels are unlikely to improve overall health!



However the link between IGF-1 and improved cognition is less disputed, so you may well be right in thinking that the 'overall' effect of insulin on cognition may be protective, but this may just be a marker of good health overall, which is certainly protective!

Friday, 2 March 2007

gn.general topology - A question about disconnecting a Euclidean space or a Hilbert space

Assume the complement of $S$ in $\mathbb{R}^n$ is not connected, say $A$ and $B$ are relatively closed, disjoint, and nonempty in $\mathbb{R}^n\setminus S$. Let $O$ be the complement of the closure of $B$ and $U$ the complement of the closure of $A$; then $O$ and $U$ are disjoint nonempty open subsets of $\mathbb{R}^n$, and the complement of their union, $F$, is closed in $\mathbb{R}^n$, a subset of $S$, and it separates $\mathbb{R}^n$.

In short: $S$ contains a closed set that also separates; as you noted, that set is zero-dimensional and hence the answer is "no" for Euclidean spaces. I don't know (yet) about Hilbert space.

Thursday, 1 March 2007

ag.algebraic geometry - Polynomial with two repeated roots

I have a polynomial $f(t) \in \mathbb{C}[t]$ of degree 4, and I'd like to know when it has two repeated roots, in terms of its coefficients.



Phrased otherwise I'd like to find the equations of the image of the squaring map



$\mathrm{sq} \colon \mathbb{P}(\mathbb{C}[t]^{\leq 2}) \rightarrow \mathbb{P}(\mathbb{C}[t]_{\leq 4})$.



(for some reason the first lower index wouldn't parse, so I put it on top).



Of course I can write the map explicitly and then find enough equations by hand, but this looks cumbersome. I'm not an expert in elimination theory, so I wondered if there is some simple device to find explicit equations for this image. For instance one can detect polynomials with one repeated root using the discriminant, but I don't know how to proceed from this.
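In the affine chart of monic quartics, the equations can at least be worked out by brute-force coefficient matching (a sketch under that monicity assumption, not the projective equations asked for; the coefficient names are mine): writing $f = t^4 + a_3 t^3 + a_2 t^2 + a_1 t + a_0 = (t^2+pt+q)^2$ forces $p = a_3/2$ and $q = (a_2 - p^2)/2$, leaving the two equations $a_1 = 2pq$ and $a_0 = q^2$.

```python
from fractions import Fraction

def is_square_of_quadratic(a3, a2, a1, a0):
    """Check whether the monic quartic t^4 + a3 t^3 + a2 t^2 + a1 t + a0
    equals (t^2 + p t + q)^2 for some p, q, by matching coefficients."""
    a3, a2, a1, a0 = map(Fraction, (a3, a2, a1, a0))
    p = a3 / 2                 # from the t^3 coefficient: 2p = a3
    q = (a2 - p * p) / 2       # from the t^2 coefficient: p^2 + 2q = a2
    return a1 == 2 * p * q and a0 == q * q

# (t^2 + 3t + 2)^2 = t^4 + 6t^3 + 13t^2 + 12t + 4
assert is_square_of_quadratic(6, 13, 12, 4)
# t^4 + 1 has four distinct roots
assert not is_square_of_quadratic(0, 0, 0, 1)
```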

Why sin and cos in the Fourier Series?

$1$. Mathematical reason.



There is one reason which makes the basis of complex exponentials look very natural, and the reason is from complex analysis. Let $f(z)$ be a complex analytic function in the complex plane, with period $1$.



Then make the substitution $q = e^{2\pi i z}$. This way the analytic function $f$ actually becomes a meromorphic function of $q$ around zero, and $z = i\infty$ corresponds to $q = 0$. The Fourier expansion of $f(z)$ is then nothing but the Laurent expansion of $f(q)$ at $q = 0$.



Thus we have made use of a very natural function in complex analysis, the exponential function, to see the periodic function in another domain. And in that domain, the Fourier expansion is nothing but the Laurent expansion, which is a most natural thing to consider in complex analysis.
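This correspondence is easy to see numerically. For instance (my own choice of test function), $f(z) = 1/(2 - e^{2\pi i z})$ corresponds to $F(q) = 1/(2-q)$, whose Laurent (here Taylor) coefficients at $q=0$ are $2^{-(n+1)}$, and these match the Fourier coefficients $c_n = \int_0^1 f(z) e^{-2\pi i n z}\,dz$:

```python
import cmath

# f(z) = 1/(2 - e^{2 pi i z}) has F(q) = 1/(2 - q) = sum_{n>=0} q^n / 2^{n+1},
# so its Fourier coefficients should be c_n = 2^{-(n+1)} for n >= 0, and 0 for n < 0.
def f(z):
    return 1 / (2 - cmath.exp(2j * cmath.pi * z))

def fourier_coeff(n, samples=256):
    # the trapezoidal rule on a periodic analytic function converges very fast
    return sum(f(k / samples) * cmath.exp(-2j * cmath.pi * n * k / samples)
               for k in range(samples)) / samples

for n in range(-3, 6):
    expected = 2.0 ** -(n + 1) if n >= 0 else 0.0
    assert abs(fourier_coeff(n) - expected) < 1e-12
```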



Here you can make suitable modifications when $f$ is periodic in some domain which is not the whole complex plane. In that case in the $q$-domain, $f$ will be analytic in some circle around $0$, and you can use that to get a Laurent expansion. The modular forms, for instance, are defined only in the upper half-plane, and what we get here is called the $q$-expansion.



However, from the point of view of real analysis, $L^p$-spaces etc., any other basis would do just as well as the complex exponentials. The complex exponentials are special for complex-analytic reasons.



$2$. Physical reason.



There are historical reasons also. For instance, in electrical engineering or the theory of waves, it is very useful to decompose a function into its frequency components, and this is the reason for the great importance of Fourier analysis in electrical engineering and in communication theory. The impedance offered by a circuit depends on the frequency of the signal that is being fed in; a circuit consisting of capacitors, inductors etc. reacts differently to different frequencies, and thus the sine/cosine wave decomposition is very natural from a physical point of view. It was from this context, and also from the theory of heat conduction, that Fourier analysis developed.