Where Algebra and Geometry Intersect

This command will display a list of lists in nice little boxes. Very useful:

netList


I've always been frustrated and confused by the statement of the theorem defining the Plücker relations. There are too many letters! In addition to the m and n which give the size of a matrix, there are also p, q, s, t, and then a, b, c, which are indexed by the previous six letters. I'm currently reading this expository paper by Bruns and Conca – http://arxiv.org/pdf/math/0302058v3 – and unfortunately these notational woes still exist. I think I've just figured out the point again and am writing it down as a note to myself:

The basic idea is that you want to look at maximal minors of an m by n matrix with m < n. Then you pick some number of columns (more than m, fewer than n) – call these the "shifty columns" – and then some other columns split into two sets: left columns and right columns.

What we’ll do next is write down a standard product of two maximal minors

(Minor 1)(Minor 2)

Put all the “left columns” on the far left, and all the “right columns” on the far right.

(Left guys, Stuff)(Stuff, Right Guys)

The idea is that we take the shifty columns and throw them into the “stuff” portion of the previous line. There are just two rules:

1. Split the shifty columns so that both sets of parentheses describe a maximal minor (but then again it wouldn’t make sense otherwise)

2. Make sure that the “Stuff” in each parenthesis is written in increasing order.

Then the theorem says that if we take the sum over all ways of splitting the stuff up (multiplied by appropriate signs of permutations), we get 0. Or something like that. This isn't precise at all, but it's specific enough to get the idea across, which is likely all you really need to read a paper that uses this, say to prove the Straightening Law.

Notes: A few days later I tried working out some examples, and there’s one more key point: The number of shifty columns needs to be more than the size of each minor. In other words, you can’t say that (13|24)=(12|34). You have to permute at least 3 guys in this case!
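To convince myself, here is a quick sanity check in Python for the smallest interesting case: the maximal minors of a 2 by 4 matrix, where shifting the three columns 2, 3, 4 (more than the minor size 2, as the note above requires) produces the classical three-term relation p12·p34 − p13·p24 + p14·p23 = 0. The matrix entries below are arbitrary, and columns are 0-indexed in the code.

```python
def minor(M, i, j):
    """2x2 minor of the 2xN matrix M taken on columns i and j (0-indexed)."""
    return M[0][i] * M[1][j] - M[0][j] * M[1][i]

# An arbitrary 2x4 matrix.
M = [[3, 1, 4, 1],
     [5, 9, 2, 6]]

# All maximal (2x2) minors, indexed by their column pairs.
p = {(i, j): minor(M, i, j) for i in range(4) for j in range(i + 1, 4)}

# Three-term relation from shifting columns {1, 2, 3} (0-indexed) between
# the two minors, with alternating signs of the permutations:
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(rel)  # 0
```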

It's been a grip since I've posted, and I probably won't start posting more regularly for a while, but I typed an email today which I may as well put up here. It's about normal rings which are not necessarily domains.

—-

In his book, Matsumura admits that most people just use the word normal to mean an integrally closed domain. However, he still follows the more general treatment of Serre and Grothendieck. This is useful in that it simplifies some of the statements in Eisenbud's book, and is not that much more difficult. Here's all you need to know:

Let’s assume all rings are Noetherian.

Defn: A ring R is normal if R_p is a normal domain for every prime ideal p. (Note that by definition, a local normal ring is a domain.)

This implies, using the Chinese remainder theorem, that if the minimal primes of R are {p_1, …, p_n}, then R is a product of normal domains as follows:

R ≅ R/p_1 × … × R/p_n
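As a toy illustration of the same Chinese-remainder decomposition: Z/6 has minimal primes (2) and (3), and it splits as Z/2 × Z/3. In fact both factors are fields, so Z/6 really is a normal ring which is not a domain. A quick Python check of the isomorphism:

```python
# The decomposition R = R/p_1 x ... x R/p_n in the toy case R = Z/6,
# whose minimal primes are (2) and (3): r -> (r mod 2, r mod 3).
phi = {r: (r % 2, r % 3) for r in range(6)}

# phi is bijective ...
assert len(set(phi.values())) == 6
# ... and respects addition and multiplication componentwise.
for a in range(6):
    for b in range(6):
        s, t = (a + b) % 6, (a * b) % 6
        assert phi[s] == ((phi[a][0] + phi[b][0]) % 2, (phi[a][1] + phi[b][1]) % 3)
        assert phi[t] == ((phi[a][0] * phi[b][0]) % 2, (phi[a][1] * phi[b][1]) % 3)
print("Z/6 = Z/2 x Z/3 verified")
```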

In fact, we have the following "structure theorem":

If R is Noetherian, TFAE:

1) R is normal

2) R is a finite product of normal domains

3) R is reduced and integrally closed in Quot(R)

4) R satisfies Serre’s conditions R1 and S2

Computationally, there is no nice general algorithm for computing the integral closure of a ring. There are methods implemented in Macaulay 2, but they are fairly ad hoc, and even now Eisenbud and Stillman are trying to improve them. However, a general strategy that one can use to compute integral closures (by hand) is:

1) Find all “reasonable” elements integral over the ring

2) Show that R adjoin these guys is integrally closed. This can usually be done by showing that the ring is isomorphic to a polynomial ring (or a product of polynomial rings). Anything harder than this would be unreasonable to do by hand.

In the case that R is a domain, implicit in the above is that if the integral closure is a polynomial ring, then R itself must be a subring of a polynomial ring, and sometimes you might even be able to eyeball that.

For example, you can check (using a Gröbner basis, for instance) that k[t^2, t^3] is isomorphic to k[x,y]/(y^2 - x^3); since t = t^3/t^2 is integral over this ring and k[t] is integrally closed, the integral closure is k[t].
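Here's a quick numerical sketch (in Python, with exact rational arithmetic) of both claims: the defining relation holds identically under x = t², y = t³, and t = y/x satisfies the monic equation T² − x = 0, so it is integral over k[t², t³]:

```python
from fractions import Fraction

# Sanity check: under x = t^2, y = t^3 the relation y^2 - x^3 holds,
# and t = y/x is a root of the monic polynomial T^2 - x, hence integral
# over k[t^2, t^3]; adjoining it gives k[t].
n_checked = 0
for t in (Fraction(p, q) for p in range(-3, 4) for q in (1, 2, 3)):
    x, y = t ** 2, t ** 3
    assert y ** 2 - x ** 3 == 0
    if x != 0:
        assert y / x == t             # recover t inside the fraction field
        assert (y / x) ** 2 - x == 0  # t satisfies the monic equation T^2 - x
    n_checked += 1
print(n_checked, "values of t checked")
```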

More generally, the integral closure of k[f_1, …, f_n] in k[x_1, …, x_m] (if the f_i are monomials) will be given by some combinatorial subset of R^n (see Eisenbud, Ch. 4).

I’ve found computing normalizations to be one of the most frustrating experiences, but once you figure out the basics, they’re not so bad.

Every student in an algebraic geometry course will come across Serre's famous "twisting sheaf" $\mathcal{O}(d)$ on projective space at some point or another. It is perhaps the first example of an invertible sheaf other than the structure sheaf. I was quite shocked at first that one could have sheaves all locally isomorphic, yet globally different, but a moment's thought about manifolds or vector bundles shows this is a common occurrence. In algebra, to see the difference one can simply compute the global sections and see that $\Gamma(\mathbb{P}^n, \mathcal{O}(d))$ is the $k$-vector space spanned by all monomials of degree $d$ in $x_0, \dots, x_n$. Since these are different for different values of $d$, the sheaves are not the same.

Rather than compute the sections explicitly to show these sheaves aren't isomorphic (which isn't hard, and is good to do every once in a while to keep in shape), I'd like to try and explain some of the geometry behind $\mathscr{O}(1)$ on $\mathbb{P}^1$, and hopefully explain why it is called a "twisting sheaf". The basic idea is this: invertible sheaves correspond to line bundles, in that they are (locally) rank one free modules on $\mathbb{P}^1$. We will work over the real numbers, so that $\mathbb{P}^1(\mathbb{R})$ is just the circle, and we will try to figure out exactly what the line bundle structure is in this case. (It's the Möbius strip!) I should warn you now that I won't be entirely rigorous with the details, but hope to get pretty close. (Oh, we'll also be ignoring all non-$\mathbb{R}$-valued points. Scheme theorists can write their own blog post and we'll put it on Critch's corner.)

The first thing we must do is split $\mathbb{P}^1$ up into two affine pieces. Let's say $U_0 = \{x_0 \neq 0\}$ and $U_1 = \{x_1 \neq 0\}$. Now both are isomorphic to the real line, and the map gluing $U_0$ to $U_1$ sends $t \mapsto 1/t$. So far we've done nothing other than describe the projective line and how it is glued together. Before we introduce line bundles, let's pick coordinates for $U_0$ and $U_1$. Call the coordinate on $U_0$ $t = x_1/x_0$, and let $U_1$ have coordinate $s = x_0/x_1 = 1/t$. This makes sense because we want to identify a point on either patch with the "original" point $(x_0 : x_1)$ in $\mathbb{P}^1$.

Now we're ready to introduce the line bundle $\mathscr{O}(1)$. If we jump completely back to scheme theory now, we would see that sections of $\mathscr{O}(1)$ on $U_0$ are given by elements of the module $\mathbb{R}[t] \cdot e_0$, by which we mean the free module with generator $e_0$ over the ring $\mathbb{R}[t]$. Similarly, sections on $U_1$ are given by the module $\mathbb{R}[s] \cdot e_1$, where we'll set $s = 1/t$. What's important to realize here is that in both cases we have a coordinate ring and a generator. Most important, however, is the relationship between the two modules. Notice that $t \cdot e_0 = e_1$ and $s \cdot e_1 = e_0$. In other words, on any patch, multiplying the coordinate by the generator gives you the other generator. We're now ready to play!

Let's start on $U_0$ at the point $t = 1$, at the point $1 \cdot e_0$ in the fiber, so our data is $(1, 1_0)$ (where the subscript just denotes which patch we are in). Since we've already used the word coordinate, let's call these two numbers "components" to keep our sanity. Now we'd like to switch components to $U_1$. The first component remains unchanged since $1/1 = 1$. (Remember the two affine patches are just glued by taking reciprocals.) And further, to get the second component, we just multiply our original second component by the first (multiplying the generator by the coordinate). Thus we have $(1, 1_1)$.

Now let's take a walk and move from the fiber over $s = 1$ to the fiber over $s = -1$, keeping track of the point in the line. Since we're just in a normal affine patch, nothing bizarre happens and we end up at the point $(-1, 1_1)$. Now let's switch back to $U_0$ before going back home to where we started. Again $1/(-1) = -1$, and again multiplying by the first component, we obtain the following: $(-1, -1_0)$.

Finally, move past $t = 0$ now to get back to $t = 1$, and we see that we arrive at the point $(1, -1_0)$, which is not the point we started with! If we were ants walking along this line bundle, as we go all the way around the circle, we end up flipped upside down when we arrive back home. This is precisely what happens with the Möbius strip!
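The whole walk can be simulated in a few lines of Python. The transition rule below encodes the relation between the two generators: switching patches inverts the coordinate and multiplies the fiber coefficient by the new coordinate. (The tuple representation is just my own bookkeeping, not standard notation.)

```python
def switch_patch(point):
    """Change trivialization of O(1) on the real projective line.

    A point is (patch, coord, fiber): 'fiber' is the coefficient of the
    local generator e_0 or e_1.  Since t*e_0 = e_1 (and s*e_1 = e_0 with
    s = 1/t), changing patches inverts the coordinate and multiplies the
    fiber coefficient by the new coordinate.
    """
    patch, coord, fiber = point
    new_coord = 1.0 / coord
    return (1 - patch, new_coord, fiber * new_coord)

# Walk once around the circle, starting at t = 1 with fiber value 1*e_0.
p = (0, 1.0, 1.0)
p = switch_patch(p)        # now on U_1 at s = 1, fiber 1*e_1
p = (p[0], -1.0, p[2])     # walk through s = 0 to s = -1
p = switch_patch(p)        # back on U_0 at t = -1, fiber -1*e_0
p = (p[0], 1.0, p[2])      # walk through t = 0 back to t = 1

print(p)  # (0, 1.0, -1.0): same base point, fiber flipped -- the Mobius strip
```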

Suppose that $X$ is a scheme and that $\varphi : X \to \mathbb{P}^n$ is a morphism. It's not hard to see that $\varphi^* \mathcal{O}(1)$ is an invertible sheaf and is generated by the sections $s_i = \varphi^* x_i$. Conversely, given an invertible sheaf $\mathscr{L}$ and global sections $s_0, \dots, s_n$ which globally generate it, there is a morphism $\varphi : X \to \mathbb{P}^n$ such that $\mathscr{L} \cong \varphi^* \mathcal{O}(1)$ and $s_i = \varphi^* x_i$. This may seem like a complicated statement, but the idea is really quite simple.

To see the simplicity, let's look at the simple case of $\mathcal{O}(1)$ on $\mathbb{P}^1$, and take the global sections $x$ and $y$. The map we are talking about above is simply the map from $\mathbb{P}^1$ to $\mathbb{P}^1$ given by $(x : y)$. (I don't know how to do square brackets on WordPress.) In fact, the only thing we really have to check to make sure this is well defined is that it doesn't have any *base points*, i.e. points which get sent to $(0 : 0)$. It's easy to see this is not the case here, but what's more important is that the intuition here is exactly the same in the general case. The system will have base points (in this sense) if and only if the sections do not generate the sheaf. So for example, the sections $x$ and $y$ of $\mathcal{O}(1)$ on $\mathbb{P}^2$ do not globally generate, because they do not generate the sheaf at the point $(0 : 0 : 1)$. If you really want to work out the details, this all boils down to the fact that locally at this point we can divide by $z$ since it is nonzero, but we cannot divide by $x$ or $y$, so we will never get the function $z$.

To construct the map in general, you need to define maps locally and patch them together, but the basic idea is exactly the same as in this example. One useful application of this notion is the fact that automorphisms of $\mathbb{P}^n$ are precisely given by invertible $(n+1) \times (n+1)$ matrices modulo scalar multiples. That such matrices yield automorphisms is just a simple exercise in matrix multiplication, which is good to do! To see the other direction, given a morphism $\varphi : \mathbb{P}^n \to \mathbb{P}^n$: since $\operatorname{Pic}(\mathbb{P}^n) = \mathbb{Z}$, and since $\varphi$ is an automorphism and $\mathcal{O}(1)$ is a generator, we know that $\varphi^* \mathcal{O}(1)$ must also be a generator. Thus it is $\mathcal{O}(1)$ or $\mathcal{O}(-1)$, and since only one of these has global sections, we conclude it must be $\mathcal{O}(1)$. Thus the pullbacks of the $x_i$ in the image must be linear combinations of the $x_i$ in the domain, and thus we can use these equations to form a matrix (forgive me for not being explicit here!), as required. Finally, uniqueness follows since giving a morphism is the same as giving a sheaf (in this case $\mathcal{O}(1)$) and some sections (the linear combinations).

All of this generality shows us the existence of morphisms if we take a sheaf and some sections, but the map we end up with could be very badly behaved. For example, it could fail to be injective, among other horrors. The best case scenario would be if this map were a closed immersion (turning our scheme into a projective variety!). Luckily there's a criterion for when this happens, and we'll state it now.

**Proposition:** Let $k$ be an algebraically closed field, let $X$ be a projective scheme over $k$, and let $\varphi : X \to \mathbb{P}^n$ be a morphism (over $k$) corresponding to $\mathscr{L}$ and $s_0, \dots, s_n$ as above. Let $V \subseteq \Gamma(X, \mathscr{L})$ be the subspace spanned by the $s_i$. Then $\varphi$ is a closed immersion if and only if

- elements of $V$ separate points, i.e. for any two distinct closed points $P, Q \in X$ there is an $s \in V$ such that $s \in \mathfrak{m}_P \mathscr{L}_P$ but $s \notin \mathfrak{m}_Q \mathscr{L}_Q$, or vice versa, and
- elements of $V$ separate tangent vectors, i.e. for each closed point $P$, the set $\{ s \in V : s_P \in \mathfrak{m}_P \mathscr{L}_P \}$ spans the $k$-vector space $\mathfrak{m}_P \mathscr{L}_P / \mathfrak{m}_P^2 \mathscr{L}_P$.

[I'm not going to give the proof here, but the idea is that the first condition implies that the morphism is injective (and we get that the map is a homeomorphism onto its image via standard nonsense about images of proper schemes, etc.), whereas the second condition is used to prove that the structure map is surjective. The algebraically closed condition is necessary to get a handle on the types of closed points.]

Instead, let's see this theorem in action. Let's use $\mathbb{P}^1$ and $\mathcal{O}(2)$. The vector space spanned by the sections $x^2$ and $y^2$ has no base points, but it does not separate points, since both $(1 : 1)$ and $(1 : -1)$ get sent to $(1 : 1)$.

The notion of separating tangent vectors is a bit more subtle, but the geometric idea is that you don't want to "squash" tangent vectors. For example, the squashing map for the cuspidal curve $y^2 z = x^3$, where the map $\mathbb{P}^1 \to \mathbb{P}^2$ is given by $(s : t) \mapsto (st^2 : t^3 : s^3)$, squashes the tangent vector over the origin. Indeed, if $P$ is the origin $(0 : 0 : 1)$, then $\mathfrak{m}_P$ is generated by $x$ and $y$, but neither of their pullbacks is a generator of the maximal ideal at $(1 : 0)$. In fact the pullbacks are $t^2$ and $t^3$, which lie in $\mathfrak{m}^2$, and the pullback of $z$ is a unit.

Finally, I'll close with the somewhat simpler statement for a curve $C$. In this case, the tangent space at a point $P$ is 1-dimensional, so for the set above to generate it just says that some function vanishing at $P$ does not vanish twice there. If you know about divisors, this is just saying that $P$ is not a base point of $|D - P|$, if $D$ is the divisor associated to the map.

After what felt like entirely too much work, I think I finally understand the criterion for a scheme to be integral. My officemate Critch has a great criterion (with a detailed proof) in the affine case on his website. In the more general case, Wikipedia simply says that $X$ is integral if and only if it is connected and can be covered by integral affine spectra. (This doesn't appear to be correct.) In Liu's book the statement (as an exercise, assuming finitely many irreducible components) is that $X$ is integral if and only if it is connected and all its stalks are integral domains, whereas when Ravi assigns this problem you need to assume the space is locally Noetherian. Despite the seeming disagreement among the Google results, I think the last two statements are correct (and that the Wikipedia article forgot to include locally Noetherian). Just to be formal, I should probably state something now:

If $X$ is a connected, locally Noetherian scheme, all of whose stalks are integral domains, then $X$ is integral.

In what follows, we'll assume we are working with a locally Noetherian scheme, but all that we'll really need is that there are finitely many irreducible components… and all we *really* need is that any irreducible component intersects at least one other irreducible component. I don't know of a counterexample to this proposition if you remove the finiteness condition. I'm putting this in Critch's Corner, where I'll put some of these interesting questions and counterexamples.

After tiring of Google, I decided to bite the bullet and look in EGA. I found a proposition (Prop 2.1.9, 2nd edition) which says that a point $x$ lies in only one irreducible component if and only if the nilradical of the local ring $\mathcal{O}_{X,x}$ is prime.

First let's show how this implies the proposition above. One direction is trivial, so we show the converse: that if all the local rings are integral domains, then so is the whole scheme. Since being reduced is a local property, we'll just show irreducibility.

Let $X_1, \dots, X_n$ be the irreducible components of $X$, and suppose for contradiction that $n > 1$. Then $X_1$ must intersect one of the others, since $X$ is connected. (Here is where we use the finiteness bit.) Let $x \in X_1 \cap X_2$, say. Then by the proposition, the nilradical of $\mathcal{O}_{X,x}$ is not prime, which contradicts the assumption that all the local rings are integral.

The proof of Prop 2.1.9 is not hard. It first reduces to the affine case by noting that irreducible closed subsets of an open set $U \subseteq X$ are in natural correspondence with irreducible closed subsets of $X$ which meet $U$. And now the affine case is easy and fun. The basic idea is that if $X = \operatorname{Spec} A$ and $x$ corresponds to the prime $\mathfrak{p}$, then the minimal primes of $A_\mathfrak{p}$ correspond to the minimal primes of $A$ contained in $\mathfrak{p}$. Geometrically, the irreducible components correspond to minimal primes. So the proposition is just saying that one irreducible component through the point means one minimal prime in the local ring. (cf. Example II.3.0.1 in Hartshorne)

There is an interesting application of this. My friend Paul asked if there is an easy way to see that if $X$ and $Y$ are integral $k$-schemes, then so is $X \times_k Y$. (Again, we should probably be assuming some Noetherian conditions on $X$ and $Y$, but you can sort that out yourself.) By the above, we just need to show that the fiber product is connected. Take finite open affine covers by $\operatorname{Spec} A_i$ and $\operatorname{Spec} B_j$. All these open sets intersect nontrivially since $X$ (resp. $Y$) is irreducible, and we claim that the $\operatorname{Spec}(A_i \otimes_k B_j)$ cover $X \times_k Y$. (Is this obvious/true?) Now I think this shows that the fiber product is connected, and hence we can apply the proposition.

I’m going to think a bit more about how to tell if a given collection of subsets covers a fiber product. I think we worked out a nice way of doing this at one point.

This is a post that I wrote last fall but never finished because it was getting out of control. I’m going to put it up because someone might find it useful as is.

—

Today I’m going to type up a lot of the basics of intersection theory from the beginning. We’ll be following Fulton’s “Introduction to Intersection Theory in Algebraic Geometry.” We begin by looking at some of the historical ways that people studied intersection theory. In the meantime we will try to learn something about divisors. And of course we’ll have a lot of pictures. So let’s get started – let’s learn some intersection theory!

One of the most basic questions in intersection theory is to describe the intersection of several hypersurfaces. Perhaps the simplest case would be the intersection of two curves of degrees $d$ and $e$ in $\mathbb{P}^2$. Working over the complex numbers in projective space, if we are able to count the multiplicity correctly, the answer is that the curves intersect in precisely $de$ points. This result is usually referred to as Bézout's theorem. As we will see, the hardest part of this theory (and of intersection theory in general) is determining the correct way to define multiplicity.

First some motivation for curves: Suppose that curves $C$ and $D$ intersect at a point $P$. We'd like to define the intersection multiplicity. If the curves intersect transversely, then this intersection number should be 1. To understand what happens when the curves do not intersect transversely, consider the following sequence of curves.

As these pictures indicate, if we want intersection multiplicity to have any sort of continuity property, then we had better make sure that the intersection multiplicity is three in all of these cases. Similarly, the following picture shows the case of a tangent line on a conic having multiplicity two.

**What if they don't intersect transversely?**

As a matter of computing the multiplicity: if two curves do not intersect transversely, we can look locally around $P$ and write out the power series for $C$ and $D$. These power series will agree for some number of terms, which we can define to be the multiplicity. Let's do an example to see what is going on.

**Example:** Consider the intersection of the line $y = 0$ with the parabola $y = x^2$. In local coordinates the power series are $y = 0$ and $y = x^2$. Since these agree for two terms, we say they intersect with multiplicity two.

Ok, perhaps you thought that was a stupid example. Here’s a better one:

**Example**: Consider the intersection of the line $y = 0$ with the cusp $y^2 = x^3$. In local coordinates at the origin… how exactly should this work? So intuitively, if you plug $y = 0$ into the equation you obtain $x^3 = 0$, which should then intersect with multiplicity three by a power series argument. Is there a way to do this directly?

In both of these examples, we can also compute the intersection multiplicity simply by looking at the dimension of a local ring,

$\dim_k \mathcal{O}_{\mathbb{A}^2, P} / (f, g),$

where $f, g$ are the local equations for the curves. In the second example above, this computation yields

$\dim_k k[x,y]_{(x,y)} / (y, y^2 - x^3) = \dim_k k[x]_{(x)} / (x^3) = 3.$
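Both multiplicity computations can be sketched in Python by substituting the line into the other curve's equation and reading off the order of vanishing at the origin. This is a toy sketch that assumes the line is $y = 0$ and the intersection point is the origin; polynomials are stored as sparse coefficient dictionaries.

```python
def restrict_to_y0(curve):
    """Substitute y = 0 into a polynomial stored as {(i, j): coeff} for x^i y^j."""
    out = {}
    for (i, j), c in curve.items():
        if j == 0:  # only terms without y survive the substitution
            out[i] = out.get(i, 0) + c
    return out

def order_at_origin(poly_in_x):
    """Order of vanishing at x = 0: the lowest degree with a nonzero coefficient."""
    return min(i for i, c in poly_in_x.items() if c != 0)

parabola = {(2, 0): 1, (0, 1): -1}   # x^2 - y:    restricts to x^2
cusp     = {(3, 0): 1, (0, 2): -1}   # x^3 - y^2:  restricts to x^3

print(order_at_origin(restrict_to_y0(parabola)))  # 2
print(order_at_origin(restrict_to_y0(cusp)))      # 3
```

This matches the local-ring computation, since $\dim_k k[x]_{(x)}/(x^n) = n$.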

Ideally, an intersection theory for varieties in higher dimension would provide us with a way of counting multiplicity, so that if $n$ hypersurfaces of degrees $d_1, \dots, d_n$ in $\mathbb{P}^n$ intersected to form finitely many points, then counted with multiplicity there would be $d_1 \cdots d_n$ points – the natural generalization of Bézout's theorem. At the end of this entry we will give a proper definition (and see why the above ideas aren't quite good enough!)

Let $C$ be a curve in $\mathbb{P}^2$. One number we can associate to $C$ is its class. The class of a curve is defined to be the number of tangent lines to the curve passing through a general point. Equivalently, this is the degree of the dual curve $C^\vee$, which is constructed as follows: Recall that by duality, lines in $\mathbb{P}^2$ are in natural bijection with points of the dual plane $(\mathbb{P}^2)^\vee$, given by the correspondence $\{ax + by + cz = 0\} \leftrightarrow (a : b : c)$.

The dual curve of $C$ is the curve $C^\vee \subseteq (\mathbb{P}^2)^\vee$ consisting of all the points dual to the tangent lines of $C$. This can be realized quite explicitly, in fact.

If $F$ is a homogeneous polynomial defining $C$, then $C^\vee$ is given by the set of all points of the form

$(F_x(p) : F_y(p) : F_z(p))$ for $p \in C$,

where $F_x$ denotes the partial derivative with respect to $x$, etc.

**Example**: The dual of the curve defined by $x^2 + y^2 - z^2$ is given by the set of all points $(2x : 2y : -2z)$, which, if we call our new coordinates $(a : b : c)$ on $(\mathbb{P}^2)^\vee$, we see is the curve defined by $a^2 + b^2 - c^2$.
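A quick Python sanity check of this example: for sample rational points on the conic, the gradient point $(2x : 2y : -2z)$ indeed satisfies the same equation in the dual coordinates.

```python
def conic(x, y, z):
    return x * x + y * y - z * z

# Rational points on x^2 + y^2 - z^2 = 0 coming from Pythagorean triples.
points = [(3, 4, 5), (5, 12, 13), (8, 15, 17), (1, 0, 1)]

for (x, y, z) in points:
    assert conic(x, y, z) == 0
    # The dual point is the gradient (F_x, F_y, F_z) = (2x, 2y, -2z) ...
    a, b, c = 2 * x, 2 * y, -2 * z
    # ... and it satisfies the "same" equation in the dual coordinates,
    # since a^2 + b^2 - c^2 = 4(x^2 + y^2 - z^2).
    assert conic(a, b, c) == 0
print("dual conic verified on", len(points), "sample points")
```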

Below is an example of a more complicated curve and its dual (original on the left, dual on the right). The equations for the curves are listed in the plots. I used Mathematica to plot everything, and Macaulay 2 to find the equations (which is basically just finding relations among the partial derivatives).

As you can see in the above examples, singularities can arise in taking the dual curve, most notably from points of inflection on the original curve. It should now be apparent, after a moment's thought, that the degree of the dual curve (that is, the number of intersection points with a generic line) is the same as the class of the original curve, by duality. To get a grasp on this number, we define the polar curve associated to $C$.

If $P = (a : b : c)$ is any point in the plane, we define the polar curve $C_P$ to be the curve defined by the equation

$a F_x + b F_y + c F_z = 0.$

It is straightforward to check that a nonsingular point $Q$ of $C$ is on $C_P$ exactly when the tangent line to $C$ at $Q$ passes through $P$. Also, if $Q$ is not a point of inflection, then $C$ and $C_P$ intersect transversally at $Q$. Thus if the degree of $C$ is $d$ and $C$ is nonsingular, then

$\deg C^\vee = \#(C \cap C_P) = d(d-1).$

But wait… if $d > 2$, then this seems to imply that the degree of the dual curve is always greater than the degree of the original curve. Since we want (and indeed it's true!) that the double dual is just the original curve, this should be cause for alarm. The answer lies in studying what happens at singular points. Unfortunately, if $C$ has singular points, they will always lie on this intersection as well. You can check that ordinary nodes will contribute 2 to this intersection, and ordinary cusps 3. Thus the correct formula (at least for curves containing only ordinary nodes and ordinary cusps) is

$\deg C^\vee = d(d-1) - 2\delta - 3\kappa,$

where $\delta$ is the number of nodes and $\kappa$ the number of cusps.

Thus it happens that just the right number of singularities arise on the dual curve, so that when we take the dual again, we get our original curve back. Indeed, in the example of our cubic above, we started with something of degree 3 and obtained something of degree 6 with some singularities. Thus the degree of the double dual should be $6 \cdot 5 = 30$ minus some singularities. Using Mathematica, you discover it has exactly nine singular points, all of them ordinary cusps, yielding the magic formula $30 - 3 \cdot 9 = 3$.
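This degree bookkeeping is easy to sanity-check in Python; the function below just encodes deg $C^\vee$ = $d(d-1)$ minus 2 per ordinary node and 3 per ordinary cusp, the corrections described above.

```python
def dual_degree(d, nodes=0, cusps=0):
    """Degree of the dual curve: d(d-1) minus 2 per ordinary node, 3 per cusp."""
    return d * (d - 1) - 2 * nodes - 3 * cusps

# A smooth plane cubic: its dual has degree 3*2 = 6 ...
assert dual_degree(3) == 6
# ... and the dual's nine ordinary cusps bring the double dual back to degree 3.
assert dual_degree(6, cusps=9) == 3
print("double dual degree:", dual_degree(6, cusps=9))
```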