
Springer GTM 150.

"It has seemed to me for a long time that commutative algebra is best practiced with knowledge of the geometric ideas that played a great role in its formation: in short, with a view toward algebraic geometry."

# Part I Basic Constructions

## Chapter 3 Associated Primes and Primary Decomposition

### Section 3.2 Prime Avoidance

• Lemma 3.3 (Prime Avoidance): If on the other hand n > 2, ... Suppose J is contained in the union of n ideals, at most two of which are not prime, but is not contained in any one of the ideals. Any subset of (n-1) of the ideals contains at most two non-primes. If J were contained in the union of such a subset of (n-1) ideals, then by the inductive hypothesis J would be contained in one of these ideals, contrary to assumption; so J is not contained in the union of any proper subset of these ideals. Thus we can pick $x_i \in J \setminus \bigcup_{j \neq i} I_j$. Then carry out the rest of the proof from the book. The reason we need all but two ideals prime (as opposed to just one prime ideal, as it may first appear) is that we're applying the inductive hypothesis to *all* the (n-1)-element subsets of the ideals, and hence recursively to subsets of those subsets, so in the worst case we may be throwing away a prime ideal at each step until we get down to the n=2 case, at which point we don't need a prime ideal to make it work.

This is one of the strangest-stated and strangest-proved results I've seen. I think the main problem is a failure to factor some of the technical details out of the main argument. Let me try my hand at breaking this up in a far more reasonable way:

Lemma a: Suppose that $I_1, \ldots, I_n$ are ideals of a k-algebra R, where k is an infinite field, and suppose $J \subseteq \bigcup_{j=1}^n I_j$ is an ideal of R. Then $J \subseteq I_j$ for some j.

Proof: (Proof as in book)

Lemma b: Suppose that $I_1, I_2$ are ideals of a ring R and $J \subseteq I_1 \cup I_2$ is an ideal of R. Either $J \subseteq I_1$ or $J \subseteq I_2$.

Proof: (Proof as in book)

Lemma c: Suppose that $I_1, \ldots, I_n$ are ideals of a ring R, and that P is a prime ideal of R. If $J \subseteq P \cup \bigcup_{j=1}^n I_j$, then either $J \subseteq P$ or $J \subseteq \bigcup_{j=1}^n I_j$.

Proof: Suppose not. By induction on n, we may further assume that J is not contained in $P \cup \bigcup_{j \neq i} I_j$ for any i (otherwise discard $I_i$ and apply the inductive hypothesis). Then we can find $p \in J \setminus \bigcup_{j=1}^n I_j$ and $x_i \in J \setminus \left[P \cup \bigcup_{j \neq i} I_j\right]$. Note that $p \in P$ (since $p \in J \subseteq P \cup \bigcup_j I_j$ but avoids every $I_j$), and likewise each $x_i \in I_i$ (since $x_i$ avoids P and every $I_j$ with $j \neq i$). Let $x = p + x_1 x_2 \cdots x_n$. Since $x \in J \subseteq P \cup \bigcup_{j=1}^n I_j$, either $x \in P$ or $x \in I_k$ for some k. In the former case, we have $x_1 \cdots x_n = x - p \in P$, so some $x_j \in P$ since P is prime. But of course this contradicts the way we chose $x_j$. In the latter case, $x_k \in I_k$ gives $x_1 \cdots x_n \in I_k$, so $p = x - x_1 \cdots x_n \in I_k$, contradicting the choice of $p$.

Corollary to lemma c: If at most two of the $I_j$ are not prime, we can iterate the lemma to peel off the prime ideals one at a time, reducing either to a single ideal or to the two-ideal situation of lemma b. In either case, J is contained in one of the ideals of the union.
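To see why "at most two non-prime" can't be weakened, here's a computational sanity check (my own illustration, not from the book) of the standard counterexample with three non-prime ideals: in $R = \mathbb{F}_2[x,y]/(x,y)^2$, the ideal $J = (x,y)$ is covered by the three proper ideals $(x)$, $(y)$, $(x+y)$, yet lies in none of them. A brute-force check in Python, writing elements of R as triples $(a,b,c) \leftrightarrow a + bx + cy$:

```python
from itertools import product

# Elements of R = F_2[x,y]/(x,y)^2 as (a, b, c) <-> a + b*x + c*y.
def mul(u, v):
    a, b, c = u
    d, e, f = v
    # (a+bx+cy)(d+ex+fy) = ad + (ae+bd)x + (af+cd)y, since x^2 = xy = y^2 = 0
    return (a * d % 2, (a * e + b * d) % 2, (a * f + c * d) % 2)

R = list(product((0, 1), repeat=3))

def ideal(*gens):
    """Ideal generated by gens: close under R-multiples and sums (R is finite)."""
    elems = {(0, 0, 0)}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for r in R:
                for s in list(elems):
                    t = tuple((u + v) % 2 for u, v in zip(mul(r, g), s))
                    if t not in elems:
                        elems.add(t)
                        changed = True
    return elems

x, y = (0, 1, 0), (0, 0, 1)
Ix, Iy, Ixy = ideal(x), ideal(y), ideal((0, 1, 1))
J = ideal(x, y)                     # the maximal ideal (x, y)
union = Ix | Iy | Ixy

print(J <= union)                   # True: J is covered by the three ideals
print(J <= Ix, J <= Iy, J <= Ixy)   # False False False: but J lies in none of them
```

Since the only prime of R is (x, y) itself, all three covering ideals are non-prime, so the hypothesis really is sharp.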

## Chapter 4 Integral Dependence and the Nullstellensatz

### Section 4.1 The Cayley-Hamilton Theorem and Nakayama's Lemma

• So far as I can tell, many of the results in this section dealing with integral elements are much simpler if we base the arguments on minimal polynomials rather than characteristic polynomials; the Cayley-Hamilton theorem doesn't seem to be logically necessary here.
• Theorem 4.3 (Cayley-Hamilton): If we write m for the column vector whose entries are the $m_j$,: Note the weird trick in play here. We've written A for the matrix of $\varphi$. Normally we'd regard this matrix as acting on $R^n$, with elements of $R^n$ representing elements of M via the (not necessarily injective) map $R^n \to M$ where $e_j \mapsto m_j$. What we're doing here, though, is regarding the matrix A as acting on $M^n$ instead. This trick is the key to the entire proof -- it's where the magic is happening -- but it's easy to miss if you're not looking carefully.
• Corollary 4.4a: or equivalently $1 - q(\alpha) \alpha = 0$. That is, the two sides are equal as elements of $\operatorname{End}_R(M)$.
• Corollary 4.8 (Nakayama's Lemma): Warning: It is tempting, but in general wrong, to use Nakayama's lemma to prove that a module M is finitely generated by exhibiting finitely many generators for M/IM. (Notice that M being finitely generated was already a hypothesis of Nakayama's lemma.) For instance, with $R = \mathbb{Z}_{(p)}$, $I = (p)$, and $M = \mathbb{Q}$, we have $IM = M$, so $M/IM = 0$ is certainly finitely generated, but M is not a finitely generated R-module.
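Returning to Theorem 4.3: the classical statement it generalizes, that a square matrix satisfies its own characteristic polynomial, is easy to sanity-check numerically. A minimal sketch in Python for a 2x2 integer matrix (my own example, not from the book), using $p(t) = t^2 - \operatorname{tr}(A)\,t + \det(A)$:

```python
# Check Cayley-Hamilton for a concrete 2x2 integer matrix:
# A satisfies its characteristic polynomial t^2 - tr(A)*t + det(A).
A = [[1, 2], [3, 4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr = A[0][0] + A[1][1]                        # trace = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = -2
A2 = matmul(A, A)

# p(A) = A^2 - tr*A + det*I should be the zero matrix
pA = [[A2[i][j] - tr * A[i][j] + (det if i == j else 0) for j in range(2)]
      for i in range(2)]
print(pA)  # [[0, 0], [0, 0]]
```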

### Section 4.2 Normal Domains and the Normalization Process

• Normal domains are defined on page 118. They are integral domains which are integrally closed in their field of fractions.
• Proposition 4.10: factorial: This is defined on page 14, and is a synonym for "unique factorization domain" (UFD).
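A standard example of a non-normal domain (my illustration, not from the book): $\mathbb{Z}[\sqrt{5}]$ is not normal, since $\alpha = (1+\sqrt{5})/2$ lies in its fraction field and is integral over it, satisfying the monic polynomial $t^2 - t - 1$, yet $\alpha \notin \mathbb{Z}[\sqrt{5}]$. A quick check with exact rational arithmetic, writing elements of $\mathbb{Q}(\sqrt{5})$ as pairs $(a, b) \leftrightarrow a + b\sqrt{5}$:

```python
from fractions import Fraction as F

# Elements of Q(sqrt(5)) as pairs (a, b) <-> a + b*sqrt(5).
def mul(u, v):
    a, b = u
    c, d = v
    return (a * c + 5 * b * d, a * d + b * c)

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

# alpha = (1 + sqrt(5))/2: in the fraction field, but not in Z[sqrt(5)]
alpha = (F(1, 2), F(1, 2))

# Verify alpha^2 - alpha - 1 = 0, so alpha is integral over Z
val = add(mul(alpha, alpha), (-alpha[0] - 1, -alpha[1]))
print(val)  # (Fraction(0, 1), Fraction(0, 1))
```

So the normalization of $\mathbb{Z}[\sqrt{5}]$ is strictly larger (it contains $\alpha$).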

### Section 4.4 Primes in an Integral Extension

• Proposition 4.15 (Lying over and going up): we may assume that R is local with maximal ideal P.

We have a commutative diagram

$\begin{array}{c c c} R_P & \hookrightarrow & S_P \\ \uparrow & & \uparrow \\ R & \hookrightarrow & S \end{array}$

Since the contraction (i.e., preimage) of $P_P < R_P$ to $R$ is just $P$, if we can find an ideal Q' of $S_P$ whose contraction in $R_P$ is $P_P$, then the contraction to R is P, as desired. By commutativity of the diagram, the contraction Q of Q' to S would then also contract to P. In pictures,

$\begin{array}{c c c c c} P_P < & R_P & \hookrightarrow & S_P & > Q' \\ & \uparrow & & \uparrow \\ P < & R & \hookrightarrow & S & > Q\end{array}$
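A concrete instance of lying over (my example, not the book's): for the integral extension $\mathbb{Z} \subseteq \mathbb{Z}[i]$ and $P = (5)$, the prime $Q = (2+i)$ of $\mathbb{Z}[i]$ contracts to P. A brute-force verification that $Q \cap \mathbb{Z} = 5\mathbb{Z}$ in a small range:

```python
# n in Z lies in Q = (2+i) iff n = (2+i)(c+di) for some integers c, d,
# i.e., n = (2c - d) + (c + 2d)i with zero imaginary part.
def in_Q(n, bound=50):
    return any(2 * c - d == n and c + 2 * d == 0
               for c in range(-bound, bound) for d in range(-bound, bound))

contraction = [n for n in range(-20, 21) if in_Q(n)]
print(contraction)  # exactly the multiples of 5 in range
```

(From $c + 2d = 0$ we get $c = -2d$ and then $n = -5d$, so the contraction is $5\mathbb{Z}$, as the code confirms in range.)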

### Section 4.5 The Nullstellensatz

• Lemma 4.20 (Rabinowitch's Trick): ... S is Jacobson, and since S is a domain, it follows that the intersection of the maximal ideals of S is 0. In a Jacobson ring, the Jacobson radical is the same as the nilradical (this is immediate from the definitions), and in a domain the nilradical is of course zero.
• Lemma 4.20 (Rabinowitch's Trick): b is contained in all nonzero prime ideals that S may have. Thus the ideal (0) must be a maximal ideal: otherwise every maximal ideal of S would be nonzero, so b would be contained in the Jacobson radical of S, which we ruled out previously (that radical is zero, and $b \neq 0$).
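As a toy illustration of "Jacobson radical = nilradical" in a Jacobson ring (my example, not from the book): in $\mathbb{Z}/12$, the maximal ideals are $(2)$ and $(3)$, and both radicals come out to $\{0, 6\}$:

```python
n = 12

# Nilradical: elements with some power equal to 0 mod 12
nilradical = {a for a in range(n) if any(pow(a, k, n) == 0 for k in range(1, n + 1))}

# Jacobson radical: intersection of the maximal ideals (2) and (3) of Z/12
jacobson = ({a for a in range(n) if a % 2 == 0}
            & {a for a in range(n) if a % 3 == 0})

print(sorted(nilradical), sorted(jacobson))  # [0, 6] [0, 6]
```

Of course in a domain (the case used in the lemma) both radicals are just 0; the point of the toy example is only that the two coincide.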