mirror of
https://github.com/Smaug123/static-site-pipeline
synced 2025-10-10 18:28:53 +00:00
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-08-19T00:00:00Z"
aliases:
- /categorytheory/category-theory-introduction/
- /category-theory-introduction/
title: Category Theory introduction
---

The next few posts will follow my journey through the book [Category Theory], by Steve Awodey. I’m using the second edition, if anyone wants to join me. I will read the book and make notes here as I go along: doing the exercises (and posting up the ones that seem interesting enough), coming up with my own intuition pumps, and generally writing down my thought processes. The idea is to see how a fledgling mathematician studies a text, and to record my thoughts so I can refresh my memory more easily in future.

As I go, I’m also creating an Anki deck of the definitions, although that might not appear on this site.

[Category Theory]: http://ukcatalogue.oup.com/product/9780199237180.do
hugo/content/awodey/2015-08-19-what-is-a-category.md

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-08-19T00:00:00Z"
math: true
aliases:
- /categorytheory/what-is-a-category/
- /what-is-a-category/
title: What is a Category?
---

This post will cover the initial "examples" section of [Category Theory]. Because there aren't really very deep concepts in this section, this is probably a less interesting post to read than the others in this series.

The introduction lasts until the bottom of page 4, which is where a *category* is defined. I read the definition in a kind of blank haze, not really taking it in, but I was reassured by the line "we will have plenty of examples very soon". On re-reading the definition, I've summarised it as "objects; arrows which go from object to object; associative composition of arrows; identity arrows which compose in the obvious way". That's a very general definition, as the text points out, so I'm just going to wait for the examples before trying to understand this properly.

The first example is the category of sets, with arrows being the functions between sets. That destroys my nice idea that "a category can be represented by a [simple (directed) graph][simple graph] together with a single identity arrow on each node": there are many functions between the same two sets, and in particular more than one arrow \\(A \to A\\). I'll relax my mental picture to "directed multigraph".

Then there's the category of finite sets. I'll just check that's a category - oh, it's actually really obvious and there's not really anything to check.

Then the category of sets with injective functions. The "is this a category" check is done in the text.

What about surjective functions? The composition of surjective functions is surjective, and the identity function is surjective, so that does also form a category.

The first exercise in the text is where the arrows are \\(f: A \to B\\) such that, for each \\(b \in B\\), the preimage \\(f^{-1}(b) \subset A\\) has at most two elements. (A moment of confusion before I realise that this is almost the definition of "injective".) That's clearly not a category: the composition of two such arrows might fail to satisfy the property. For instance, take \\(f: \{0, 1, 2, 3 \} \to \{0, 1\}\\) the "is my input odd" function, and \\(g: \{0, 1\} \to \{0\}\\) the constant function; the composition \\(g \circ f\\) is the constant zero function, which is four-to-one.
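That failure of composition is small enough to check mechanically. Here's a quick Python sketch (my own bookkeeping, not from the book) that counts fibre sizes:

```python
# f is "is my input odd?" on {0,1,2,3}; g is the constant function to {0}.
f = {0: 0, 1: 1, 2: 0, 3: 1}
g = {0: 0, 1: 0}

def fibre_sizes(fn):
    """Map each output to the number of inputs that hit it."""
    sizes = {}
    for x, y in fn.items():
        sizes[y] = sizes.get(y, 0) + 1
    return sizes

gf = {x: g[f[x]] for x in f}  # the composite g . f

assert max(fibre_sizes(f).values()) <= 2   # f is a legal arrow
assert max(fibre_sizes(g).values()) <= 2   # g is a legal arrow
assert max(fibre_sizes(gf).values()) == 4  # but g . f is four-to-one
```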

Now comes the category of posets with monotone functions. Not much comes to mind about that.

The category of sets with binary relations as the arrows is one that is less intuitive for me, mainly because I'm still not used to thinking of relations \\(\sim\\) between \\(X\\) and \\(Y\\) as subsets of \\(X \times Y\\). The identity arrow is easy enough: it's the obvious "equality" relation in which \\(a \sim a\\) only. The composition is a little less obvious: \\(a (S \circ R) c\\) iff there is \\(b\\) such that \\(a S b\\) and \\(b R c\\). Can I come up with an example of that? Let \\(S = \ \leq\\) on \\(\mathbb{R}\\), and \\(R = \ \geq\\). Then \\(S \circ R\\) is just the "everything is related" relation, since we may let \\(b=c\\) or \\(b=a\\) depending on whether \\(a \leq c\\) or \\(a \geq c\\). OK, I'm a bit happier about that. It's easy to show that we have a category.
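To get more comfortable with relations-as-subsets, here is a finite sanity check of the \\(\leq\\) / \\(\geq\\) example (my own sketch, shrinking \\(\mathbb{R}\\) down to \\(\{0,\dots,3\}\\)):

```python
def compose(S, R):
    """a (S.R) c iff there is b with a S b and b R c (the convention above)."""
    return {(a, c) for (a, b1) in S for (b2, c) in R if b1 == b2}

X = range(4)
leq = {(a, b) for a in X for b in X if a <= b}  # S = "less than or equal"
geq = {(a, b) for a in X for b in X if a >= b}  # R = "greater than or equal"

# S.R relates everything: pick b = max(a, c).
assert compose(leq, geq) == {(a, c) for a in X for c in X}

# The equality relation really is an identity for this composition.
identity = {(a, a) for a in X}
assert compose(leq, identity) == leq and compose(identity, leq) == leq
```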

Then comes a matrices example (which I've simplified from the textual example), where the objects are natural numbers - possibly repeated - and the arrows are integer matrices of the right dimensions for matrix multiplication to be defined. I thought that was a pretty neat example.

Finite categories: the book gives the definitions of \\(0\\), \\(1\\), \\(2\\) and \\(3\\). There's an obvious way to extend this to higher natural numbers. The section about "we may specify a finite category by just writing down a directed graph and making sure the arrows work" rings a strong bell with [free group]s, and indeed, the book calls them "free categories".

Now we come to the definition of a "functor", which I immediately parse as a "category homomorphism" and move on. (Questions which come to mind: are any of the above categories related by some functor? I don't care much about that for the moment.)

Preorders form a category which is drawn in almost exactly the same way as the [Hasse diagram] for a partial order (omitting identity arrows). That's a category in which the arrows represent a relation rather than functions between objects.

The topological-space example I skipped because I didn't know what a \\(T_0\\) space was. (However, I did observe that the specialisation ordering is trivial on sufficiently separated spaces.)

Example from the category of proofs in a particular deductive system: the identity arrow \\(1_{\phi}\\) should be the trivial deduction of \\(\phi\\) from premise \\(\phi\\). Very neat. It rings a bell from what I've heard of the [Curry-Howard isomorphism], and indeed the next example makes me think even more strongly of that.

Discrete category on a set: yep, checks out. I should verify that they are posets, which they are: the poset in which nothing is comparable except each element with itself.

Monoids: oh dear, this example looks long. OK, I know what a monoid is ("group without inverses"), but how is it a category? Little mental shift of gear to thinking of elements as arrows, and it all becomes clear. The "free category" relations from earlier, then, correspond to the "free group" relations on the generators. I check that the set of functions \\(X \to X\\) is actually a monoid, which it is. It seems easier to view it as a subcategory of the category of sets; and lo and behold the next paragraph points this out. We get to the bit about "monoid homomorphisms" - yes, they are indeed functors, which is not at all unexpected given that my understanding of "functor" is "category homomorphism", and monoids are categories.

## Summary

This is actually the second time I've read this section - the first time was before I had the idea of blogging my progress - and now I think I've got a good feel for what a category is. The next section is titled "Isomorphisms", which should give me a better idea of which categories are "the same". I noticed that the integers (when implemented as categories) seem to form a preorder, and indeed a poset; this corresponds nicely with their implementation as finite ordinals, with \\(3 = \{2\}\\) and so forth. I like seeing things crop up in different implementations all over the place like that.

[Category Theory]: http://ukcatalogue.oup.com/product/9780199237180.do
[simple graph]: https://en.wikipedia.org/wiki/Graph_(mathematics)#Simple_graph
[free group]: https://en.wikipedia.org/wiki/Free_group
[Hasse diagram]: https://en.wikipedia.org/wiki/Hasse_diagram
[Curry-Howard isomorphism]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
hugo/content/awodey/2015-08-20-new-categories-from-old.md

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-08-20T00:00:00Z"
math: true
aliases:
- /categorytheory/new-categories-from-old/
- /new-categories-from-old/
title: New categories from old
---

Here, I will be going through the Isomorphisms and Constructions sections of Awodey's Category Theory - pages 12 through 17.

The first definition here is that of an isomorphism within a category. I notice that it corresponds with the usual definition of an isomorphism, but it's not phrased in exactly the same way. Up till now, "isomorphism" has strictly meant "bijective homomorphism". Are these two notions secretly the same? They can't be, because arrows aren't necessarily homomorphisms. Let's proceed with this slightly unfamiliar definition: an isomorphism is an "arrow which is invertible on either side by the same inverse". The book asks us to prove that inverses are unique - that's easy by the usual group-inverses proof, which only really requires associativity.
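That associativity-only argument is worth writing out once. If \\(g\\) and \\(h\\) are both inverses of \\(f: A \to B\\), so that \\(f \circ h = 1_B\\) and \\(g \circ f = 1_A\\), then:

```latex
g = g \circ 1_B = g \circ (f \circ h) = (g \circ f) \circ h = 1_A \circ h = h
```

Only the identity laws and associativity were used, so this works in any category.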

I need to be careful to remember that isomorphisms (as defined here) aren't between categories, but between members of a category. That is, they're not functors but arrows. (Of course an arrow may represent a functor, but that's beside the point.)

Now comes a paragraph about abstract definitions, which basically crystallises my thoughts that isomorphism is a more general form of "bijective homomorphism" which works in all categories. The example from the poset category with monotone functions as arrows is something I'm going to have to get my head around. Here goes.

What does the category-theoretic definition of an isomorphism look like in the category of posets? It's a monotone function which has a monotone inverse. (Ah, that's more like the definition I remember: "a homeomorphism is a continuous function with a continuous inverse".) How is that different from "bijective homomorphism"? We'll want a monotone function which has an inverse which is not monotone. The standard topological-spaces example is the identity map on an arbitrary set, from the discrete topology to the indiscrete topology: one direction is continuous and the other is not. Can I quickly turn that into a poset example? The obvious way to go would be the identity on the same set, from "nothing is related" to "total order". Definitely order-preserving: if \\(x < y\\) then \\(f(x) < f(y)\\) is vacuously true; definitely invertible; definitely not what we want an isomorphism to look like. I think I've got my head around the difference now.
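That example is small enough to machine-check. A sketch of my own, encoding each order as its set of related pairs:

```python
def monotone(f, leq_src, leq_dst):
    """f preserves order iff x <= y in the source forces f(x) <= f(y)."""
    return all((f[x], f[y]) in leq_dst for (x, y) in leq_src)

X = [0, 1, 2]
discrete = {(x, x) for x in X}                    # nothing related but x ~ x
chain = {(x, y) for x in X for y in X if x <= y}  # total order

ident = {x: x for x in X}  # the identity map, read in both directions

# Discrete -> chain: vacuously monotone, and a bijection...
assert monotone(ident, discrete, chain)
# ...but its inverse (chain -> discrete) is not monotone, so this
# bijective monotone map is not an isomorphism of posets.
assert not monotone(ident, chain, discrete)
```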

In the case of a monoid (viewed as a category), "only the abstract definition makes sense". Is that true? Firstly, what does the abstract definition look like? In a group, all elements are isomorphisms. If we take the monoid \\((\mathbb{Z} \cup \{ \infty \}, +)\\), the arrow \\(\infty: G \to G\\) is not an isomorphism because it has no inverse. That seems fine. Can I make sense of the idea of a monoid element being a "bijective homomorphism"? I could make the element act on the monoid by left multiplication, and I don't see anything wrong with that at the moment. I moved on at the time, but asked someone a bit later about this. The answer is that there are some categories which can't be viewed concretely at all, so the idea of "an arrow is a function" can't be made to make sense in some categories.

Definition of a group is next; I definitely understand that, and I discovered for myself that a group has all its arrows as isomorphisms. I'll skip the bit about some examples of groups, because I know it, and go to the definition of a group homomorphism. That bit is clear too, so on to Cayley's Theorem.

The proof which appears here is basically the same as the one I was taught: show that action-on-the-left gives us a way to turn \\(G\\) into a permutation group on itself.

The warning is interesting, and I hadn't noticed the feature it points out. I'll think about that a bit further. OK, it doesn't actually seem to be that problematic to understand, but definitely important to keep my thinking type-checked.

Theorem 1.6. This looks important. We instantiate objects by their collection of incoming arrows, and instantiate arrows by functions which "represent" an arrow in the same way as the regular representation does in groups. Actually, that doesn't seem particularly important: it's just saying "we can instantiate categories whose arrows form a set". Maybe the Remark will clear things up. It's basically saying by analogy that "there's nothing special about permutation groups, since all groups may be viewed as permutation groups, so stop thinking about them in that way please". I think I'll wait until the discussion of terminal objects before I try and get my head around the true interpretation of a concrete category.

Now the New Categories From Old section. The product looks easy enough, and its two projections are natural. The dual category likewise is pretty obvious, and makes the dual vector spaces idea much neater.

The arrow category takes me a while to get my head around. The composition operation clearly does compose arrows correctly. What does the arrow category of the integer category \\(3\\) look like? Let's call the objects of \\(3\\) by the names \\(a, b, c\\). Then the arrow category has six objects (three identity and three non-identity arrows). We can find all the commutative squares by brute force, which I did on paper: there are \\(3^4\\) candidate squares, but anything with \\(c\\) in the top left corner must be the identity arrow on the arrow \\(c \to c\\). That narrows it down enough for me to do this by hand. We end up with \\(a \to a\\) being connected to every arrow; \\(a \to b\\) connected to every arrow except \\(a \to a\\); \\(a \to c\\) connected only to \\(a \to c, b \to c, c \to c\\); \\(b \to b\\) connected to \\(b \to b, b \to c, c \to c\\); \\(b \to c\\) connected to \\(b \to c, c \to c\\); and \\(c \to c\\) connected to \\(c \to c\\). That is, if we omit the identity arrows, we obtain the following Hasse diagram.
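The brute force can also be delegated. A quick check of the hand computation (my own sketch; it exploits the fact that in a poset there is at most one arrow between any two objects, so every square that type-checks automatically commutes):

```python
# In the poset 3 = {a <= b <= c}, an arrow is a pair (x, y) with x <= y.
objs = "abc"
arrows = [(x, y) for x in objs for y in objs if x <= y]

def hom_exists(f, g):
    """A square f -> g exists iff both component arrows exist."""
    return f[0] <= g[0] and f[1] <= g[1]

connected = {f: [g for g in arrows if hom_exists(f, g)] for f in arrows}

assert connected[("a", "a")] == arrows  # a -> a sees every arrow
assert connected[("a", "b")] == [g for g in arrows if g != ("a", "a")]
assert connected[("a", "c")] == [("a", "c"), ("b", "c"), ("c", "c")]
assert connected[("c", "c")] == [("c", "c")]
```

Reassuringly, this agrees with the connections found on paper.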

![Arrow category of 3][arrow]

I don't think that was very enlightening. Motto: arrow categories aren't obviously anything in particular. What about the forgetful functors specified by taking the codomain or the domain? I'm happy that those are both functors, having stared at my diagram.

Now comes the slice category. I've read this over once and got absolutely nowhere, so let's try again more carefully. The objects I can deal with: any arrow which goes into \\(C\\). The arrows? I'll do this with the category \\(3\\) again. If we slice on \\(a\\) then the only object is the identity arrow, and the only arrow is another identity. If we slice on \\(b\\) then there are two objects: \\(a \to b\\) and \\(b \to b\\). (Just quickly went back to the definition of a category, to check that \\((b \to b) \circ (a \to b)\\) isn't an extra arrow; in general composition could yield a different arrow, but here it is just \\(a \to b\\) again.) Then in the slice category, there's an arrow \\((a \to b) \to (a \to b)\\) - namely the \\(C\\)-arrow \\(a \to a\\) - and an arrow \\((a \to b) \to (b \to b)\\) - namely the \\(C\\)-arrow \\(a \to b\\). We also have \\(b \to b\\)'s identity arrow. Therefore, we have recovered the category \\(2\\). That gives me intuition about what the identity arrows in the slice category look like.

I don't think I've got any more intuition here. I'll briefly move on to the bit about the functor which forgets the sliced object. Certainly I agree that the given functor behaves correctly on objects. Does it behave on arrows? Yes, that's obvious from the syntactic definition, but I'm not certain I grok it. (I notice at this point that the functor is not necessarily surjective, as the \\(3\\) example above shows.)

If I understand the composition law, then I should understand the arrows, so I'll aim for that instead. The composition law is clear from the book's diagram, on page 16: just add another triangle joined along edge \\(f'\\) to make a bigger supertriangle. OK, now I'm happier about the arrows in the slice category: they really are just arrows in the original category, and they join two slice-category objects (that is, arrows in \\(C\\)) if the two objects form a commutative triangle. This is actually a lot like the arrow category, by the looks of it.

What about this composition functor? It lets us slice on a different vertex by "changing the worldview", viewing everything through the lens of a particular arrow. I'm happy enough with that as a concept, although I recognise that my "understanding that this is a functor" is purely syntactic. Hopefully I'll get used to this with time.

"The whole slicing construction is a functor". Yes, OK, that follows from the existence of the composition functor. I repeat that I'm understanding this at the surface level only, and I don't really grok any of it.

What happens if we slice a group (viewed as a category) on its only object? Then we get a category whose objects are the elements of the group, and whose arrows \\(g \to h\\) are given by \\(h^{-1} \circ g \in C\\). That seems to have taken the group and told us how all its elements are related, which is mildly interesting.

I verify that the slice category of a poset category is the "principal ideal" as stated, and note with relief that we will see more examples soon.

The coslice category: that's obviously just the dual of the slice category.

The category of pointed sets: yep, it's a category. I really don't understand the isomorphism with the coslice category on sets. I can just about see it syntactically, but this is going to need a lot more work. I spent about ten minutes trying to work out what this really meant.

## Summary

I'm happy with some of these constructions, but I'll need a lot more work on others. I'll do these constructions on some more categories and see what happens.

After composing this post, I asked someone for intuition, and got the reply:

"The coslice category has objects which may be viewed as pairs \\((A, f)\\), where \\(f:\{ * \} \to A\\). So \\(f\\) is exactly a choice of element in \\(A\\). And the morphisms are maps such that the triangle commutes, i.e. the element "chosen" by \\(f\\) is the same as the one "chosen" by \\(f'\\)."

I think this has cleared things up, but time will tell.

[arrow]: {{< baseurl >}}images/CategoryTheorySketches/ArrowCategoryOf3.jpg
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-08-21T00:00:00Z"
math: true
aliases:
- /categorytheory/free-categories-and-foundations/
- /free-categories-and-foundations/
title: Free categories and foundations
---

Here, I will be going through the Free Categories and Foundations sections of Awodey's Category Theory - pages 18 through 25.

The definition of a free monoid is basically the same as that of a free group. However, I skim past and see the word "functor" appearing in the "no noise" statement, so I'll actually read this section properly.

Everything is familiar up until the definition of the universal mapping property. One bit confuses me for a moment - "every monoid has an underlying set, and every monoid homomorphism an underlying function; this is a functor" - until I realise that by "this", Awodey means "this construction" rather than "this underlying function".

Now comes the Universal Mapping Property of free monoids. This is a painful definition - I've spent fifteen minutes trying and failing to understand it - so I'll skip past it and come back when I've read some more.

Proposition 1.9: this is a proof in an area where I'm wrestling to keep everything in my mind at once, so I'll just prove the proposition myself, using Awodey to take short-cuts. Let \\(i: A \to \vert A^* \vert\\) be defined by inclusion - that is, taking \\(a\\) to the single-character word \\(a\\). Let \\(N\\) be a monoid and \\(f: A \to \vert N \vert\\). Define \\(\bar{f}: A^* \to N\\) as stated; it's clearly a homomorphism. It has \\(\vert \bar{f} \vert \circ i = f\\): indeed, \\(\vert \bar{f} \vert \circ i(a) = f(a)\\) manifestly. The homomorphism is unique, as Awodey proves at the end. Very well: I'm satisfied that \\(A^*\\) has the UMP of the free monoid on \\(A\\).
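A toy instance of Proposition 1.9, in Python (my own choices of target monoid and of \\(f\\), not Awodey's): take \\(N = (\mathbb{Z}, +, 0)\\) and let \\(f\\) assign a number to each letter; then \\(\bar{f}\\) is just "map letter-by-letter and multiply up in \\(N\\)".

```python
A = "xyz"
f = {"x": 1, "y": 10, "z": 100}

def i(a):
    return a  # inclusion: a letter becomes a one-letter word

def f_bar(word):
    """The unique homomorphism A* -> N extending f (here N = (int, +, 0))."""
    total = 0          # the unit of N
    for a in word:
        total += f[a]  # "multiply" (here: add) the images of the letters
    return total

# f-bar is a homomorphism: it sends concatenation to the operation of N...
assert f_bar("xy" + "zz") == f_bar("xy") + f_bar("zz")
# ...it sends the empty word to the unit...
assert f_bar("") == 0
# ...and f-bar composed with the inclusion i recovers f on every generator.
assert all(f_bar(i(a)) == f[a] for a in A)
```

Uniqueness is visible here too: once the letters' images are fixed, the homomorphism property forces the value on every word.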

Apparently the UMP captures "no junk and no noise". What Awodey says is plausible to me in that it hits the right words on the mark scheme, but the definition of the UMP is just too abstract. I'll try and break it into parts.

"There is a function \\(i : A \to \vert M(A) \vert\\)." That bit's fine: it's saying that the inclusion exists. "Words are built up from the set in some way."

"Given any monoid \\(N\\) and any function \\(f: A \to \vert N \vert \\), there is a monoid homomorphism \\(\bar{f} : M(A) \to N\\) such that \\(\vert \bar{f} \vert \circ i = f\\)." The final equality is saying "we may represent \\(f\\) instead by first including into the free monoid, then applying some analogue of \\(f\\)". Makes sense: "if we know where members of the free monoid go, then we definitely know where the generators go".

"Moreover, \\(\bar{f}\\) is unique." Well, if it weren't unique, we would have a choice of places to send a word in the free monoid, even if we knew where all the generators went.

I think I understand it better now. Still not on a particularly intuitive level, but now I'm convinced by Awodey's explanation.

Let's move on to Proposition 1.10, that the free monoid is determined uniquely up to isomorphism. That seems plain enough, on a syntactic level.

The bit about graphs is clear, but now there's another UMP to worry about. (Ah, I'm starting to understand that a UMP is a class of property, not just one particular property. Presumably there's one for lots of different structures.) The forgetful functor from Cat to Graphs is fine; the "different point of view" of a graph homomorphism makes me stop. Let's break down that diagram more carefully.

\\(i: C_0 \to C_1\\) is indeed a valid map: we may view the identity arrow operation as taking an object to its associated identity. The codomain and domain functions do indeed take arrows to objects. The composition operation takes pairs of arrows (which have the right codomain/domain) to single arrows. OK, that's not too scary a diagram, and I agree that a functor is as claimed.

After the same process of thought, I agree with the formulation of Graphs; and then I get to the description of the forgetful functor from Cat to Graphs. That is immediately comprehensible, and my first thought is that I don't know why Awodey didn't just come out with it straight away.

"Similarly for functors…" - this bit is "easier to demonstrate with chalk", but I'll just go back and do it mentally. It works out in the obvious way.

Finally, our second universal mapping property, this time of the free category generated by a graph. Armed with the (meagre) intuition from the free-monoid UMP, this is easier to understand. "We may include the graph into the free category, and given somewhere to map the generators, there is a unique way to determine where elements of the free category go". I had one of those rare moments of "I know exactly what is going on here", which is hopefully a good sign.

I'm intuitively happy with the examples given in the epilogue. If I were less lazy, I'd check from the UMP that the examples worked (that is, show that the categories so defined were unique, and that the free category satisfied the UMP).

Page 24 (on foundations) is familiar to me. I note the definition of "small" and "large" categories - natural enough. The definition of "locally small" looks a bit frightening at first, but on second glance it really is just what you'd expect "locally small" to mean. What would it mean for \\(Cat\\), the collection of small categories, to fail to be locally small? There would have to be two small categories such that the collection of functors between them was not a set. But the two categories are small, so they are sets, and there is a set of all functions between two sets. (However, the category of locally small categories would not be locally small: pick a non-small member \\(C\\), and define a functor \\(1 \to C\\) which selects an object. There are non-set-many of these.)

Finally, the warning that "concrete" is not "small". Once given the example of the poset category \\(\mathbb{R}\\), I'm satisfied.

## Summary

I took a few days to understand this section, not working at it very hard but just letting it trickle in when the mood took me. It was massively more difficult than the previous sections, but I think I've got my head around the universal mapping properties described. I don't know whether I could come up with them myself to describe other free objects, but I could certainly give it a go.

The exercises at the end of this chapter will be the true test of understanding.
hugo/content/awodey/2015-09-02-epis-monos.md

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-02T00:00:00Z"
math: true
aliases:
- /categorytheory/epis-monos/
- /epis-monos/
title: Epis and monos
---

This post is on pages 29 through 33 of Awodey. It took me a while to do this, because I was basically on holiday for the past week.

The definition of a [mono] and an [epi] seems at first glance to be basically the same thing as "injection" and "surjection". A mono is \\(f: A \to B\\) such that for all \\(g, h: C \to A\\), if \\(fg = fh\\) then \\(g=h\\). Indeed, if we take this in the category of sets, and let \\(g, h: \{1 \} \to A\\) ("picking out an element"), we precisely have the definition of "injection". An epi is \\(f: A \to B\\) such that for all \\(i, j:B \to D\\), if \\(if = jf\\) then \\(i=j\\). Again, in the category of sets, let \\(i, j: B \to \{1\}\\); then… ah. \\(if = jf\\) and \\(i=j\\) always, because there's only one function to the one-point set from a given set. I may have to rethink the "surjection" bit.
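The mono half can be verified exhaustively on small sets. A brute-force sketch of my own: over all functions \\(f: A \to B\\), left-cancellability against every pair \\(g, h: C \to A\\) coincides exactly with injectivity.

```python
from itertools import product

A, B, C = range(3), range(4), range(2)

def is_mono(f):
    """f is mono iff no distinct g, h : C -> A satisfy f.g = f.h."""
    for g_vals, h_vals in product(product(A, repeat=len(C)), repeat=2):
        g, h = dict(zip(C, g_vals)), dict(zip(C, h_vals))
        if all(f[g[c]] == f[h[c]] for c in C) and g != h:
            return False
    return True

# Over every function f : A -> B, "mono" and "injective" agree.
for f_vals in product(B, repeat=len(A)):
    f = dict(zip(A, f_vals))
    injective = len(set(f.values())) == len(A)
    assert is_mono(f) == injective
```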

Then there's Proposition 2.2, which I'm happy I've just basically proved anyway, so I skim it.

Example 2.3: "monos are often injective homomorphisms". I glance through the example as preparation for going through it with pencil and paper, and see "this follows from the presence of objects like the free monoid \\(M(1)\\)", which is extremely interesting. Now I'll go back through the example properly.

Suppose \\(h: M \to N\\) is monic. For any two distinct ways of selecting an element of the monoid's underlying set, we can lift those selections into mappings on the free monoid \\(M(1) \to M\\); they are distinct by the UMP. Applying \\(h\\) then takes the mappings into \\(N\\), maintaining distinctness by monicity; then the UMP lets us drag the mappings back into the sets, making selections \\(1 \to \vert N \vert\\). The converse is quite clear.

So it is clear where we needed the free monoid and its UMP: it was to give us a way to pass from talking about monoids to talking about sets, and back.

Example 2.4: every arrow in a poset category is both monic and epic. An arrow \\(f: A \to B\\) is monic iff for all \\(g, h: C \to A\\), \\(f g = f h \Rightarrow g = h\\). That is, to abuse notation horribly, \\(a \leq b\\) is monic iff \\(c \leq a \leq b, c \leq a \leq b \Rightarrow ((c \leq a) = (c \leq a))\\). Ah, it's clear why all arrows are monic: it's because there is at most one arrow between \\(A, B\\), so two arrows with the same codomain and domain must be the same. The same reasoning works for "the arrows are epic".

"Dually to the foregoing, the epis in the category of sets are the surjective functions". This is the bit from earlier I had to rethink. OK, let's take \\(f: A \to B\\) an epi in the category of sets. Let \\(i, j: B \to C\\), for some set \\(C\\). (Hopefully it'll become clear what \\(C\\) is to be.) Then \\(i f = j f\\) implies \\(i = j\\); we want to show that \\(f\\) hits every element of \\(B\\), so suppose it didn't hit \\(b\\). Then when we take the compositions \\(if, jf\\), we see that \\(i, j\\) are never asked about \\(b \in B\\), so in fact we are free to choose \\(i, j\\) to differ at \\(b\\). That means we just need to pick \\(C\\) to be a set with more than one element. OK, that's much easier, although it's not quite clear to me how this is "dually".
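The dual brute-force check (again my own sketch): right-cancellability against every pair \\(i, j: B \to C\\), with a two-element \\(C\\) exactly as the argument above suggests, coincides with surjectivity.

```python
from itertools import product

A, B, C = range(3), range(3), range(2)

def is_epi(f):
    """f is epi iff no distinct i, j : B -> C satisfy i.f = j.f."""
    for i_vals, j_vals in product(product(C, repeat=len(B)), repeat=2):
        i, j = dict(zip(B, i_vals)), dict(zip(B, j_vals))
        if all(i[f[a]] == j[f[a]] for a in A) and i != j:
            return False
    return True

# Over every function f : A -> B, "epi" and "surjective" agree.
for f_vals in product(B, repeat=len(A)):
    f = dict(zip(A, f_vals))
    surjective = set(f.values()) == set(B)
    assert is_epi(f) == surjective
```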
|
||||
|
||||
Then the example of the inclusion map \\(i\\) of the monoid \\(\mathbb{N} \cup \{ 0 \}\\) into the monoid \\(\mathbb{Z}\\). We're going to prove it's epic, so I'll try that before reading the proof. Let \\(g, h: \mathbb{Z} \to M\\) for some monoid \\(M\\); we want to show that \\(g i = h i \Rightarrow g = h\\). Indeed, suppose \\(g i = h i\\), but \\(g \not = h\\): that is, there is some \\(z \in \mathbb{Z}\\) such that \\(g(z) \not = h(z)\\). Since \\(gi = hi\\), we must have that \\(i\\) does not hit \\(z\\): that is, \\(z < 0\\). But \\(gi(-z) = hi(-z)\\) and so \\(g(-z) = h(-z)\\); whence \\(g(0) = g(z)+g(-z) \not = h(z)+h(-z) = h(0)\\). That is, \\(g, h\\) differ in the image of the unit. That is a contradiction because a homomorphism of monoids has a defined place to send the unit.
|
||||
|
||||
Looking back over the proof in the book, it's basically the same. Awodey specialises to \\(-1\\) first.
|
||||
|
||||
Proposition 2.6: every iso is monic and epic. I can't help but see the diagram when I read this, but I'll try and ignore it so I can prove it myself. Recall that an iso is an arrow such that there is an "inverse arrow". Let \\(f: A \to B\\) be an iso, and \\(i, j: B \to C\\) such that \\(if = jf\\). Then we may post-compose by \\(f\\)'s inverse - ah, it's clear now that this will work both forwards and backwards. This is exactly analogous to saying "we may left- or right-cancel in a group", and now I come to think of it, "epis are about right-cancelling" is something I just skipped over in the book.
|
||||
|
||||
I'm happy with "every mono-epi is iso in the category of sets", since we've already proved that the injections are precisely the monos, and the epis are precisely the surjections.
|
||||
|
||||
Now, the definition of a split mono/epi. That seems fine - it's a stronger condition than being a mono/epi. "Functors preserve identities" does indeed mean that they preserve split epis and split monos, clearly, because applying a functor to the splitting equation \\(e s = 1\\) yields \\(F(e) F(s) = 1\\) again.

The forgetful functor Mon to Set does not preserve the epi \\(\mathbb{N} \to \mathbb{Z}\\): we want to show that the inclusion of \\(\mathbb{N} \to \mathbb{Z}\\) (as sets) is not surjective. Oh, that's trivially obvious.

In Sets, every mono splits except the empty ones: yes, we already have a theorem that injections have left inverses. "Every epi splits" is the categorical axiom of choice: we already have a theorem that "surjections have right inverses" is equivalent to AC, so I'm happy with this bit.

Now the definition of a projective object. It's basically saying "arrows from this object may be pulled back through epis". A projective object "has a more free structure"? I don't really understand what that's saying, so I'll just accept the words and move on.

All sets are projective because of the axiom of choice? Fix set \\(P\\); we want to show that for any function \\(f: P \to X\\) and any surjection \\(e: E \to X\\), there is \\(\bar{f}: P \to E\\) with \\(e \circ \bar{f} = f\\). We have (by Choice) that \\(e\\) splits: there is a right inverse \\(e^{-1}\\) such that \\(e \circ e^{-1} = 1_X\\). Define \\(\bar{f} = e^{-1} \circ f\\) and we're done.

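
The Set-level content of this in a quick runnable sketch (the sets and names are my own): choosing one preimage per element is exactly the section that "every epi splits" provides, and composing with it gives the lift.

```python
# Lifting f: P -> X through a surjection e: E -> X by choosing preimages.
P = ["p1", "p2", "p3"]
E = [0, 1, 2, 3, 4]
X = ["a", "b"]

e = {0: "a", 1: "a", 2: "b", 3: "b", 4: "a"}      # a surjection E -> X
f = {"p1": "b", "p2": "a", "p3": "b"}             # an arbitrary map P -> X

# A section of e: pick one preimage for each element of X (this is the
# choice step; here everything is finite, so no axiom is needed).
section = {}
for k, v in e.items():
    section.setdefault(v, k)

# The lift f_bar = section . f satisfies e . f_bar = f.
f_bar = {p: section[f[p]] for p in P}
assert all(e[f_bar[p]] == f[p] for p in P)
```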
Any retract of a projective object is itself projective: I absolutely have to draw a diagram here. After a bit of confusion over left-composition happening as you go further to the right along the arrows, I spit out an answer.

![Retract of a projective object is projective][retract]

# Summary

This section was more definitional than idea-heavy, so I think I've got my head around it for now. I do still need to practise my fluency with converting compositions of arrows on the diagrams into composition of arrows as algebraically notated - I still have to keep careful track of domain and codomain to make sure I don't get confused.

[mono]: https://en.wikipedia.org/wiki/Monomorphism
[epi]: https://en.wikipedia.org/wiki/Epimorphism

[retract]: {{< baseurl >}}images/CategoryTheorySketches/RetractOfProjectiveIsProjective.jpg

@@ -0,0 +1,57 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-02T00:00:00Z"
math: true
aliases:
- /categorytheory/initial-generalised-elements/
- /initial-generalised-elements/
title: Initial, terminal, and generalised elements
---

This is pages 33 to 38 of Awodey.

This bit looks really cool. A categorical way of expressing "this set has one element only": a terminal object. We have more examples of UMPs - these aren't quite of the same form as the previous ones.

The proof that initial objects are unique up to unique isomorphism is easy - no need for me even to consider the diagram. On to the huge list of examples.

Sets example: agreed. I actually asked about this (the fact that Set is not isomorphic to its dual) on Stack Exchange, and got basically this answer. Just a quick check that the one-point sets are indeed unique up to unique isomorphism, which they are.

The category 0 is definitely initial in Cat; I agree that 1 is also terminal.

In Groups: the initial objects are those for which there is precisely one homomorphism from it to any other group. Such a group needs to be the trivial group: if \\(G\\) contains any non-identity element, then the identity map and the trivial map (sending everything to the unit) are two different homomorphisms \\(G \to G\\). The terminal objects: again that's just the trivial group, because if \\(T\\) were nontrivial, the same two maps would be two different homomorphisms \\(T \to T\\). In Rings, on the other hand, I agree that \\(\mathbb{Z}\\) is initial: the unit has to go somewhere, and that determines the image of all of \\(\mathbb{Z}\\).

Boolean algebras are something I ought to have met before in Part II Logic and Sets, but it was not lectured. I think I'll come back to this if it becomes important, because I feel like I have a good idea for the moment of what initial and terminal objects are.

Posets: an object is indeed initial iff it is the least element. We have that initial objects are unique up to unique isomorphism. What does that mean here? It means there is a unique arrow which has an inverse between these two elements. That is, it means the two elements are comparable and equal (by \\(a \leq b, b \leq a \Rightarrow a=b\\)). We therefore require there to be a *single* least element, if it is to be initial. What about the poset consisting of two identical copies of \\(\mathbb{N}\\), the elements of each copy incomparable to those of the other? There is no arrow from the 1 in the first \\(\mathbb{N}\\) into any element of the second \\(\mathbb{N}\\), so I'm happy that this is indeed not initial.

Identity arrow is terminal in the slice category: everything has a unique morphism into this arrow, yes, because there is always a single commutative triangle between an arrow into \\(X\\) and the identity arrow on \\(X\\).

Generalised elements, now. Hopefully this will be about ways of saying categorically that "this set has three elements", in the same way as "this set is terminal" was a categorical way of identifying a set with one element.

"A set \\(A\\) has an arrow \\(f\\) into the initial object \\(A \to 0\\) just if it is itself initial." An initial object, remember, is one which has exactly one arrow into every other object, so it must have an arrow into \\(A\\); but the composition of \\(f\\) with that arrow must then be the identity on \\(0\\), since there is only one arrow \\(0 \to 0\\). Therefore \\(A, 0\\) are isomorphic and hence both initial.
|
||||
|
||||
In monoids and groups, every object has a unique arrow to the initial object - that's trivial, since there is only one object. Unless it means objects in the category of monoids? The unique initial object is the trivial group, and it's also terminal. That makes more sense.

Curses, I'm actually going to have to understand Boolean algebras now. I'll flick back to the definition and try to understand example 4 above. The definition looks an awful lot like the definitions of intersection and union, so I think I'll just think of them in that way. What's a filter? It's what we get when we infect some sets with filterness, and filterness propagates to "parents" and to "children of two parents" (intersections). An ultrafilter then is a filter where adding any other set infects everything.

A filter \\(F\\) on \\(B\\) is an ultrafilter iff for every \\(b \in B\\), either \\(b \in F\\) or \\(b^C \in F\\) but not both: if \\(b \in F\\) then \\(b^C\\) can't be in \\(F\\) because then the empty set (that is, the intersection) is in the filter, so the filter is "everything". If \\(b \not \in F\\) then unless \\(b^C \in F\\), we could add \\(b\\) to \\(F\\) to obtain a strictly larger filter which still isn't everything, since \\(b^C\\) is still not in the augmented filter.

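
Since everything here is finite, the characterisation can be checked by brute force. This is my own sanity check on the powerset algebra of a three-element set, taking "maximal proper filter" as the definition of ultrafilter:

```python
from itertools import combinations

# The Boolean algebra of all subsets of a 3-element set.
S = frozenset({0, 1, 2})
algebra = [frozenset(c) for r in range(4) for c in combinations(S, r)]

def is_proper_filter(F):
    if not F or frozenset() in F:                      # nonempty, excludes bottom
        return False
    closed_up = all(c in F for b in F for c in algebra if b <= c)
    closed_meet = all(a & b in F for a in F for b in F)
    return closed_up and closed_meet

filters = [frozenset(F) for r in range(len(algebra) + 1)
           for F in combinations(algebra, r) if is_proper_filter(F)]

# Ultrafilter (maximal proper filter) iff it decides every b: exactly one of
# b and its complement belongs to the filter.
for F in filters:
    maximal = not any(F < G for G in filters)
    decides = all((b in F) != ((S - b) in F) for b in algebra)
    assert maximal == decides
```

On this finite algebra every proper filter is the up-set of some nonempty subset, and the ultrafilters are the three generated by singletons.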
Then I agree with the following stuff about "ultrafilters correspond to maps \\(B \to 2\\)". Not much more I can find to say there immediately.

Ring homomorphisms \\(p\\) from ring \\(A\\) into the initial ring \\(\mathbb{Z}\\) correspond with prime ideals: yep, since \\(p^{-1}(0)\\) is an ideal of \\(A\\) (being the kernel of \\(p\\)), which is prime because the quotient by it is isomorphic to \\(\mathbb{Z}\\), an integral domain.

From arrows from initial objects to arrows from terminal objects. The definition of a point of object \\(A\\) is a natural one, as is the warning that objects are not necessarily determined by points (this is in the case that structural information is bound up in the arrows, like in a monoid viewed as a category). How many points does a Boolean algebra have? The terminal Boolean algebra is the degenerate one in which \\(0 = 1\\); an arrow out of it must send \\(0\\) to \\(0\\) and \\(1\\) to \\(1\\), which is impossible when the target has \\(0 \not = 1\\). That is, a nontrivial Boolean algebra has no points at all.

"Generalised elements" is therefore a way of trying to capture all the information, which the terminal object does not necessarily. The example which follows is a summary of this idea. There is something there to prove: that \\(f = g\\) iff \\(fx = gx\\) for all arrows \\(x\\). This leaves me stuck for a bit - I'm reviewing possible ways to prove that two arrows are the same, but the only ways I can think of require some kind of invertibility. What does it even mean for two arrows to be equal? At this point I got horribly confused and asked StackExchange, where I was told that I don't need to worry about that - just let \\(x\\) be the identity arrow. (By the way, it seems that equality of arrows is in the first-order logic sense here.)
|
||||
|
||||
Example 2.13: aha, a way of showing categories are not isomorphic. Always handy to have ways of doing this. The number of \\(\mathbf{2}\\)-elements from \\(\{0 \leq 1 \}\\) to \\(\{x \leq y, x \leq z \}\\): \\(0\\) may map to \\(x\\), and then \\(1\\) may map to \\(x\\), \\(y\\), or \\(z\\); or \\(0\\) may map to \\(y\\) or \\(z\\), when \\(1\\) must map to the same; producing five such 2-elements. I'm not sure I see why this is invariant, but on the next page I see that will be explained, and it all seems quite satisfactory.

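
The count can be checked mechanically (my own check; the three-element chain below is my own choice of comparison poset, not necessarily the one Awodey uses): monotone maps out of \\(\mathbf{2}\\) are exactly comparable pairs.

```python
from itertools import product

# A 2-element of a poset is a monotone map from {0 <= 1}, i.e. a pair (a, b)
# with a <= b in the target poset.
def two_elements(elements, leq):
    return [(a, b) for a, b in product(elements, repeat=2) if leq(a, b)]

# The poset {x <= y, x <= z} from the example: five 2-elements.
V = ["x", "y", "z"]
leq_V = lambda a, b: a == b or (a == "x" and b in ("y", "z"))
assert len(two_elements(V, leq_V)) == 5

# The three-element chain 0 <= 1 <= 2 has six, so the invariant tells
# these two three-element posets apart.
assert len(two_elements([0, 1, 2], lambda a, b: a <= b)) == 6
```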
Example 2.14: ah, the "figures of shape \\(T\\) in \\(A\\)" interpretation makes it actually intuitive why the number of \\(2\\)-elements of the posets above are what they are. The arrows from the free monoid on one generator suffice to distinguish homomorphisms? That is, if we know where all \\(\mathbb{N}\\)-shapes go from \\(M\\), can we entirely determine the homomorphism? Yes, we can. If we have access to the elements of the monoid, we can do better (by simply specifying the image of each element), but of course we don't have the elements.

# Summary

I might need a bit more exposure to these ideas before I understand them properly, but I suspect the exercises at the end of this chapter will help with that. This feels like the first really categorical thing that has happened: ways of cheating so that we can consider the elements of structures without actually needing any elements.

@@ -0,0 +1,49 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-08T00:00:00Z"
math: true
aliases:
- /categorytheory/products/
- /products-in-category-theory/
title: Products in category theory
---

This is on pages 38 through 48 of Awodey. I've been looking forward to it, because products are things which come up all over the place and I'd heard that they are one of the main examples of a categorical idea.

I skim over the definition of the product in the category of sets, and go straight to the general definition. It seems natural enough: the product is defined uniquely such that given any generalised element of the product, it projects in a unique way to corresponding elements of the children.

Proving that products are unique up to isomorphism presumably goes in the same way as the other UMP-proofs have gone. I draw out the general diagram, then because we need to show isomorphism of two objects, I replace the "free" (unspecified) test object with one of the two objects of interest. Then the uniqueness conditions make everything fall out correctly. Moral: if we have a mapping property along the lines of "we can find unique arrows to make this diagram commute" then everything is easy.

![Diagrams for the UMP of the product][UMP of product]

Then we introduce some notation for the product. "A pair of objects may have many different products in a category". Yes, I can see why that's plausible, because we could define \\(\langle a, b \rangle\\) to be the ordered pair \\((b, a)\\), for instance, without changing any of the properties we're interested in.

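
A throwaway check of that remark (my own code, not from the book): pairs stored in the usual order and pairs stored reversed both satisfy the UMP for the product of sets, as long as the projections are chosen to match.

```python
# Two rival implementations of the product of sets A and B.
A, B = [1, 2], ["u", "v"]

# Implementation 1: P = A x B with the usual projections.
P1 = [(a, b) for a in A for b in B]
p1_1, p1_2 = (lambda x: x[0]), (lambda x: x[1])

# Implementation 2: store the pair reversed; the projections compensate.
P2 = [(b, a) for a in A for b in B]
p2_1, p2_2 = (lambda x: x[1]), (lambda x: x[0])

# UMP check against a test object X with arrows x1: X -> A, x2: X -> B.
X = ["s", "t"]
x1 = {"s": 1, "t": 2}
x2 = {"s": "v", "t": "u"}

u1 = {x: (x1[x], x2[x]) for x in X}      # mediating arrow into P1
u2 = {x: (x2[x], x1[x]) for x in X}      # mediating arrow into P2

for x in X:
    assert p1_1(u1[x]) == x1[x] and p1_2(u1[x]) == x2[x]
    assert p2_1(u2[x]) == x1[x] and p2_2(u2[x]) == x2[x]
```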
"Something is gained by considering arrows out of products": I'm aware of currying, which when Awodey points it out, makes me think nothing is really gained after all. I think I'll wait for Chapter 6 before I pass judgement on that.
|
||||
|
||||
Now for a huge list of examples. First there are two definitions of "ordered pair", which I called earlier (though not in this exact form). Then we see the usual products of structured sets, with which I'm already very familiar.
|
||||
|
||||
I'll verify the UMP for the product of two categories: let \\(x_1: X \to C, x_2: X \to D\\) be generalised elements. We want there to be a unique arrow \\(u : X \to (C \times D)\\) with \\(p_1 u = x_1, p_2 u = x_2\\), where \\(p_1, p_2\\) are the projection functors. Certainly there is an arrow given by stitching \\(x_1, x_2\\) together componentwise; is there another? Clearly not. Suppose \\(u_2\\) were another arrow \\(u_2: X \to (C \times D)\\). If \\(u_2(x) = (c, d)\\) then \\(p_1 u_2(x) = c\\) by the definition of the projection, and \\(u_2\\) is therefore specified on all generalised elements already. That argument is not very formal, and I don't really see how to formalise it properly.

The product of two groups according to this product construction is then self-evidently the product group we know and love. The product of two posets is also manifestly a poset, being a category where any pair of objects has at most one arrow between them. (Indeed, if there were two, we could project down to one of the posets to obtain two arrows between two elements.)

The greatest lower bound example takes me a while to get my head around. The UMP for the product says: \\(p \times q\\) is an element with \\(p \times q \leq p\\) and \\(p \times q \leq q\\), such that for all \\(x\\), if \\(x \leq p\\) and \\(x \leq q\\), then \\(x \leq p \times q\\). That is indeed the greatest lower bound, but it took me ages to work this out.

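
Concretely (my own check): in the divisibility ordering, the product in the UMP sense really is the greatest lower bound, which is the gcd.

```python
from math import gcd

# The divisibility poset on {1, ..., 30}: a <= b means a divides b.
elements = range(1, 31)
leq = lambda a, b: b % a == 0

def poset_product(p, q):
    lower = [x for x in elements if leq(x, p) and leq(x, q)]
    # the UMP: a lower bound that every lower bound is below
    for cand in lower:
        if all(leq(x, cand) for x in lower):
            return cand
    return None

for p in (12, 18, 7):
    for q in (8, 30, 5):
        assert poset_product(p, q) == gcd(p, q)
```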
I work through the topological spaces example without thinking too hard about it. It's not clear to me that Awodey has proved that the uniqueness part of the UMP is satisfied, but I'll just accept it and move on.

Type theory example: I've already met the lambda calculus, though never studied it in any detail. I skim over this, pausing at the equation "\\(\lambda x . c x = c\\) if no \\(x\\) is in \\(c\\)" - is this a typo for \\(\lambda x . c = c\\)? No, stupid of me - \\(c\\) represents a function, and the function \\(x \mapsto c(x)\\) is the same as the function \\(c\\). Then the category of types is indeed a category, and I'm happy with the proof that it has products. This time Awodey does certainly verify the uniqueness part of the UMP, by simply expanding everything and reducing it again.

A long remark on the Curry-Howard correspondence. Clearly the product here is conjunction - skimming down I see that Awodey says it is indeed a product (or, at least, that there is a functor from types to proofs in which products have conjunctions as their images). Very pretty.

"Categories with products": supposing every pair of objects has a product, we define a functor taking every pair to its product. That's intuitive in the sense of "structured sets", since I'm very familiar with that product construction. What does it mean in the poset case? Recall that the product was the greatest lower bound. A poset where every pair of elements has a greatest lower bound is actually a totally ordered set, and the greatest lower bound is the least of the two elements, so that also makes sense. I think I'll skip over the UMPs for \\(n\\)-ary products, but the idea of a terminal object as a nullary product is pretty neat. So that's why the empty product of real numbers is 1.
|
||||
|
||||
|
||||
|
||||
# Summary

As seems to be a general theme, I understand the syntax of products, and I can recognise some of them when they turn up, but have no real intuition for how they work. There will be more examples at the end of the chapter, which should clear things up a bit.

[UMP of product]: {{< baseurl >}}images/CategoryTheorySketches/UMPofProduct.jpg

63
hugo/content/awodey/2015-09-10-homsets-and-exercises.md
Normal file
@@ -0,0 +1,63 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-10T00:00:00Z"
math: true
aliases:
- /categorytheory/homsets-and-exercises/
- /homsets-and-exercises/
title: Hom-sets and exercises
---

This is on pages 48 through 52 of Awodey, covering the hom-sets section and the exercises at the end of Chapter 2. Only eight more chapters after this, and I imagine they'll be more difficult - I should probably step up the speed at which I'm doing this.

Awodey assumes we are working with locally small categories - recall that in such a category, given any two objects, there is a bona fide set of all arrows between those objects. That is, all the hom-sets are really sets.

We see the idea that any arrow induces a function on the hom-sets by composing on the left. Awodey doesn't mention currying here, but that seems to be the same phenomenon. Why is the map \\(\phi: g \mapsto (f \mapsto g \circ f)\\) a functor from \\(C\\) to the category of sets? I agree with the step-by-step proof Awodey gives, but I don't really have intuition for it. It feels a bit misleading to me that this is thought of as a functor into the category of sets, because that category contains many, many more things than we're actually interested in. It's like defining \\(f: \mathbb{N} \to \mathbb{R}\\) by \\(n \mapsto n\\) when you are only ever interested in the fact that \\(f\\) takes integer values. I'm sure it'll become more natural later when we look at representable functors.

Then an alternative explanation of the product construction, as a way of exploding an arrow \\(X \to P\\) into two child arrows \\(X \to A, X \to B\\). A diagram is a product iff that explosion is always an isomorphism. Then a functor preserves binary products if it… preserves binary products. I had to draw out a diagram to convince myself that \\(F\\) preserves products iff \\(F(A \times B) \cong FA \times FB\\) canonically, but I'm satisfied with it.

# Exercises

Exercises 1 and 2 I've [done already][epis-monos]. The uniqueness of inverses is easy by the usual group-theoretic argument: \\(fg = f g'\\) means \\(gfg = gf g'\\), so \\(g = g'\\) by cancelling the \\(g f = 1\\) term.

The composition of isos is an iso: easy, since \\(f^{-1} \circ g^{-1} = (g \circ f)^{-1}\\). \\(g \circ f\\) monic implies \\(f\\) monic, and \\(g \circ f\\) epic implies \\(g\\) epic: both follow immediately by just writing out the definitions. The counterexample to "\\(g \circ f\\) monic implies \\(g\\) monic" can be found in the category of sets: we want an injective composition where the second function is not injective. Easy: take \\(\{1 \} \to \mathbb{N}\\) and then \\(\mathbb{N} \to \{ 1 \}\\). The composition is the identity, but the second function is very non-injective.

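
The counterexample in runnable form (trivial, but it makes the direction of the failure vivid):

```python
# f: {1} -> N is the inclusion; g: N -> {1} collapses everything.
f = lambda x: x          # the inclusion {1} -> N
g = lambda n: 1          # the collapse N -> {1}

assert g(f(1)) == 1      # the composite is the identity on {1}, hence monic
assert g(2) == g(3)      # but g identifies distinct elements: not monic
```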
Exercise 5: a) and d) are equivalent by definition of "iso" and "split mono/epi". Isos are monic and epic, as we've already seen in the text (because we can cancel \\(f\\) in \\(x f = x' f\\), for instance), so we have that a) implies b) and c). If \\(f\\) is a mono and a split epi, then it has a right-inverse \\(g\\) such that \\(fg = 1\\); we claim that \\(g\\) is also a left-inverse. Indeed, \\(f g f = f 1\\) so \\(g f = 1\\) by mono-ness. Therefore b) implies a). Likewise c) implies a).

Exercise 6: Let \\(h: G \to H\\) be a monic graph hom. Let \\(v_1, v_2: 1 \to G\\) be homs from the graph with one vertex and no edges. Then \\(h v_1 = h v_2\\) implies \\(v_1 = v_2\\), so in fact \\(h\\) is injective. Likewise with edges, using the graph with one edge and two vertices, and the graph with one edge and one vertex. Conversely, suppose \\(h: G \to H\\) is not monic. Then there are \\(v_1: F_1 \to G, v_2: F_2 \to G\\) with \\(h v_1 = h v_2\\) but \\(v_1 \not = v_2\\). Since \\(h v_1 = h v_2\\), we must have that "their types match": \\(F_1 = F_2\\). We will denote that by \\(F\\). Then there is some vertex or edge on which \\(v_1\\) and \\(v_2\\) have different effects. If it's a vertex: then \\(v_1(v) \not = v_2(v)\\) for that vertex \\(v\\), but \\(h v_1 (v) = h v_2(v)\\), so \\(h\\) can't be injective. Likewise if it's an edge.

Exercises 7 and 8 I've [done already][epis-monos].

Exercise 9: the epis among posets are the surjections-on-elements. Let \\(f: P \to Q\\) be an epi of posets, so \\(x f = y f\\) implies \\(x = y\\). Suppose \\(f\\) is not surjective, so there is \\(q \in Q\\) it doesn't hit. Then let \\(x, y: Q \to \{ 1 \leq 2 \}\\) be the monotone maps with \\(y(r) = 2\\) iff \\(r \geq q\\), and \\(x(r) = 2\\) iff \\(r \geq q\\) with \\(r \not = q\\); these disagree exactly at \\(q\\). Since \\(q\\) is not in the image, we have \\(x f = y f\\), so by epi-ness \\(x, y\\) must agree at \\(q\\). This is a contradiction. Conversely, any surjection-on-elements is an epi, because if \\(x(q) \not = y(q)\\) then we may write \\(q = f(p)\\) for some \\(p\\), whence \\(x f(p) \not = y f(p)\\). The one-element poset is projective: let \\(s: X \to \{1\}\\) be an epi (surjective), and \\(\phi: P \to \{ 1 \}\\). Then \\(X\\) has an element, \\(u\\) say, since \\(s\\) is surjective. Then we may lift \\(\phi\\) over \\(s\\) by letting \\(\bar{\phi}: p \mapsto u\\), so that the composite \\(s \circ \bar{\phi} = \phi\\). (Quick check in my mind that this works for \\(P\\) the empty poset - it does.)

Exercise 10: Sets (implemented as discrete posets) are projective in the category of posets: the one-element poset is projective, and retracts of projective objects are projective. Let \\(A\\) be an arbitrary discrete poset. Define \\(r: 1 \to A\\) by selecting an element, and \\(s: A \to \{1\}\\). Then \\(A\\) is a retract of \\(\{1\}\\), so is projective. Afterwards, I looked in the solutions, and Awodey's proof is much more concrete than this. I [asked on Stack Exchange][SE question] whether my proof was valid, and the great Qiaochu Yuan himself pointed out that I had mixed up what "retract" meant, and had actually showed that \\(\{1\}\\) was a retract of \\(A\\). Back to the drawing board.

Exercise 10 revisited: Take a discrete poset \\(P\\); let \\(f: X \to Q\\) be an epi - that is, a surjection - and let \\(\phi: P \to Q\\) be an arrow (monotone map). For each \\(p \in P\\) we have \\(\phi(p)\\) appearing somewhere in the image of \\(f\\); pick any inverse image \\(x_p\\) such that \\(f(x_p) = \phi(p)\\). I claim that the function \\(p \mapsto x_p\\) is monotone (whence we're done). Indeed, the only comparisons in the discrete poset \\(P\\) are of the form \\(p \leq p\\), so there is nothing to check.

Example of a non-projective poset: let \\(A = P\\) be the poset \\(0 \leq 1 \leq 2\\), and let \\(i:A \to P\\) be the identity. Let \\(E\\) be the poset \\(0 \leq 2, 1 \leq 2\\) on the same elements, with the obvious identity-on-elements map \\(E \to P\\) as the epi. Then \\(i\\) doesn't lift across that epi, because \\(0_A\\) must map to \\(0_E\\) and \\(1_A\\) to \\(1_E\\), but \\(0 \leq_A 1\\) and \\(0 \not \leq_E 1\\).

Now, all projective posets are discrete: suppose the comparison \\(a < b\\) exists in the poset \\(P\\) (choose it with no element strictly between \\(a\\) and \\(b\\), so that breaking it leaves a poset), and let \\(X\\) be \\(P\\) but where we break that comparison. Let the epi \\(X \to P\\) be the obvious identity-on-elements map. Then the identity \\(\text{id}: P \to P\\) doesn't lift across it.

Exercise 11: Of course, the first thing is a diagram. An initial object in \\(A-\mathbf{Mon}\\) is \\((I, i)\\) such that there is precisely one arrow from \\((I, i)\\) to any other object: that is, precisely one commutative triangle exists. A free monoid \\(M(A)\\) on \\(A\\) is such that there is \\(j: A \to \vert M(A) \vert\\), and for any function \\(f: A \to \vert N \vert\\) there is a unique monoid hom \\(\bar{f}: M(A) \to N\\) with \\(\vert \bar{f} \vert \circ j = f\\). If \\((I, i)\\) is initial, it is therefore clear that \\(I\\) has the UMP of the free monoid on \\(A\\), just by looking at the diagram. Initial objects are unique up to isomorphism, and free monoids are too, so we automatically have the converse.

Exercise 12 I did in my head to my satisfaction while I was following the text.

Exercise 13: I wrote out several lines for this, amounting to showing that the unique \\(x: (A \times B) \times C \to A \times (B \times C)\\) guaranteed by the UMP of \\(A \times (B \times C)\\) is in fact an iso. The symbol shunting isn't very enlightening, so I won't reproduce it here.

Exercise 14: the UMP for an \\(I\\)-indexed product should be: \\(P\\) with arrows \\(\{ (p_i: P \to A_i) : i \in I \}\\) is a product iff for every object \\(X\\) with collections \\(\{ (x_i: X \to A_i) : i \in I \}\\) of arrows, there is a unique \\(x : X \to P\\) with \\(p_i \circ x = x_i\\) for each \\(i \in I\\). Then in the category of sets, the product of \\(X\\) over \\(i \in I\\) satisfies that for all \\(T\\) with \\( \{ (t_i: T \to X): i \in I \}\\) arrows, there is a unique \\(t: T \to P\\) with \\(p_i \circ t = t_i\\). If we let \\(P = \{ f: I \to X \} = X^I\\), we do get this result: let \\(t(\tau) : i \mapsto t_i(\tau)\\). This works if \\(p_i \circ (\tau \mapsto (i \mapsto t_i(\tau))) = (\tau \mapsto t_i(\tau))\\), so we just need to define the projection \\(p_i: X^I \to X\\) by \\(p_i(f) = f(i)\\). I think that makes sense.

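
A finite instance of this UMP in Set (the sets and names are mine): take \\(P = X^I\\) to be the set of functions \\(I \to X\\), with projections \\(p_i(f) = f(i)\\).

```python
from itertools import product

I = ["i", "j", "k"]
X = [0, 1]
# P = X^I: every function I -> X, encoded as a dict.
P = [dict(zip(I, values)) for values in product(X, repeat=len(I))]

p = {i: (lambda f, i=i: f[i]) for i in I}      # the projections p_i

# A test object T with a family of arrows t_i: T -> X.
T = ["s", "t"]
t = {"i": {"s": 0, "t": 1}, "j": {"s": 1, "t": 1}, "k": {"s": 0, "t": 0}}

# The mediating arrow tau |-> (i |-> t_i(tau)).
mediate = {tau: {i: t[i][tau] for i in I} for tau in T}

for i in I:
    for tau in T:
        assert p[i](mediate[tau]) == t[i][tau]   # p_i . mediate = t_i
```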
Exercise 15: I first draw a diagram. \\(\mathbb{C}_{A, B}\\) has a terminal object iff there is some \\((X, x_1, x_2)\\) such that for all \\((Y, y_1, y_2)\\), there is precisely one arrow \\((Y, y_1, y_2) \to (X, x_1, x_2)\\). \\(A\\) and \\(B\\) have a product in \\(\mathbb{C}\\) iff there is \\(P\\) and \\(p_1: P \to A, p_2: P \to B\\) such that for every \\(x_1: X \to A, x_2: X \to B\\) there is unique \\(x: X \to P\\) with the appropriate diagram commuting. If we let \\((Y, y_1, y_2) = (P, p_1, p_2)\\) then it becomes clear that if \\(A, B\\) have a product then \\(\mathbb{C}_{A, B}\\) has a terminal object - namely \\((Y, y_1, y_2)\\). Conversely, if \\(\mathbb{C}_{A, B}\\) has a terminal object \\((Y, y_1, y_2)\\), then our unique arrow \\(x: X \to Y\\) in \\(\mathbb{C}_{A, B}\\) corresponds to a unique product arrow in \\(\mathbb{C}\\), so the UMP for products is satisfied.

Exercise 16: Is this really as easy as it looks? The product functor takes \\(a: A, b: B \mapsto \langle a, b \rangle : A \times B\\). Maybe I've misunderstood something, but I can't see that it's any harder than that. There's a functor \\(X \mapsto (A \to X)\\), given by coslicing out by \\(A\\). I've squinted at the answers Awodey supplies, and this isn't an exercise he gives. I'll just shut my eyes and pretend this exercise didn't exist.

Exercise 17: The given morphism is indeed monic, because \\(1_A x = 1_A y\\) implies \\(x = y\\), and \\(\Gamma(f)x = \Gamma(f)y\\) implies \\(1_A x = 1_A y\\) because of the projection we may perform on the pair \\(\langle 1_A, f \rangle\\). \\(\Gamma\\) is a functor from sets to relations, clearly, but we've already done that in Section 1 question 1b).

Exercise 18: It would really help if Awodey had told us what a representable functor was, rather than just giving an example. Is he asking us to show that "the representable functor of Mon is the forgetful functor"? I'm going to hope that I can just drop Mon in for the category C in section 2.7. If we let \\(A\\) be the trivial monoid, then \\(\text{Hom}(A, -)\\) is a functor taking a monoid \\(M\\) to its set of underlying elements (each identified with a different hom \\(\{ 1 \} \to M\\)) - but hang on, there's only one such hom, so that line is nonsense. It would work in Sets, but not in Mon. We need \\(\text{Hom}(M, N)\\) to be isomorphic in some way to the set \\(\vert N \vert\\), and I just don't see how that's possible. Infuriatingly, this exercise doesn't have a solution in the answers section. I ended up looking this up, and the trick is to pick \\(M = \mathbb{N}\\). Then the homomorphisms \\(\phi: \mathbb{N} \to N\\) precisely determine elements of \\(N\\), by \\( \phi(1)\\). So that proves the result. Why did I not think of \\(\mathbb{N}\\) instead of \\(\{ 1 \}\\)? Probably just lack of experience.

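
An illustration of the trick (my own, with \\(\mathbb{Z}/4\\) under addition as the target): monoid homs out of \\(\mathbb{N}\\) are determined by the image of \\(1\\), so \\(\text{Hom}(\mathbb{N}, M)\\) matches the elements of \\(M\\).

```python
# M = Z/4 under addition mod 4.
M = range(4)

def hom_from_N(m):
    """The monoid hom N -> Z/4 with 1 |-> m, i.e. n |-> n*m mod 4."""
    return lambda n: (n * m) % 4

for m in M:
    phi = hom_from_N(m)
    assert phi(0) == 0                                   # preserves the unit
    assert all(phi(a + b) == (phi(a) + phi(b)) % 4       # preserves the operation
               for a in range(8) for b in range(8))
    assert phi(1) == m                                   # recovers the element m
```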
[epis-monos]: {% post_url 2015-09-02-epis-monos %}
[SE question]: http://math.stackexchange.com/q/1429746/259262

59
hugo/content/awodey/2015-09-15-duality-in-category-theory.md
Normal file
@@ -0,0 +1,59 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-15T00:00:00Z"
math: true
aliases:
- /categorytheory/duality/
- /duality-in-category-theory/
title: Duality in category theory
---

I don't have strong preconceptions about this chapter. The previous chapter I knew would contain general constructions, and I was looking forward to that, but this one is more unfamiliar to me. I'll be doing pages 53 through 61 of Awodey here - coproducts.

The first bits are stuff I recognise from when I flicked through Categories for the Working Mathematician, I think. Or something. Anyway, I recognise the notion of formal duality and the very-closely-related semantic duality. (Like the difference between "semantic truth" and "syntactic truth" in first-order logic.) It's probably a horrible sin to say it, but both of these are just obvious, once they've been pointed out.

Now the definition of a coproduct. The notation \\(A+B\\) is extremely suggestive, and I'd have preferred to try and work out what the coproduct was without that hint. \\(z_1: A \to Z\\) and \\(z_2: B \to Z\\) are "ways of selecting \\(A\\)- and \\(B\\)-shaped subsets of any object" (yes, that's not fully general, but for intuition I'll pretend I'm in a concrete category). So for any \\(Z\\), and for any way of selecting an \\(A\\)-shaped and a \\(B\\)-shaped subset of \\(Z\\), we can find a unique way of selecting an \\(A+B\\)-shaped subset according to the commuting-diagram condition. I'm still a bit unclear as to what that all means, so I whizz down to the Sets example below.

In Sets, if we can find an \\(A\\)-shaped subset of some set \\(Z\\), and a \\(B\\)-shaped subset, then we can find a subset which is shaped like the disjoint union of \\(A\\) and \\(B\\) in a unique way. (Note that our arrows need not be injective, which is why the \\(A+B\\)-shaped subset exists. For instance, if \\(A = \{1\}, B = \{1\}\\), and our \\(A\\)-shaped subset and \\(B\\)-shaped subset of \\(\{a,b \}\\) were both \\(\{a\}\\), then the \\(A+B\\)-shaped subset would be simply \\(\{a \}\\). Both selections of shape end up pointing at the same element.)

This leads me to wonder: what about in the category of sets with injections as arrows? Now it seems that the disjoint union no longer does the job: the arrows \\(z_1, z_2\\) which pick out \\(A\\)- and \\(B\\)-shaped subsets may have overlapping images in \\(Z\\), and then the mediating arrow out of the disjoint union cannot be injective. (Indeed it seems that interesting coproducts generally fail to exist in this category.)

The free-monoids coproduct: given any "co-test object" \\(N\\), and any two monoid homomorphisms selecting subsets of \\(N\\) corresponding to the shapes of \\(M(A)\\) and \\(M(B)\\), there should be a natural way to define a shape corresponding to some kind of union. The shape of \\(M(A)\\) corresponds exactly to "where we send the generators", so we can see intuitively that \\(M(A) + M(B) = M(A+B)\\). This is very much not a proof, and I'll make sure to check the diagrammatic proof from the book first; that proof is fine with me. "Forgetful functor preserves products -> a structure-imposing functor preserves coproducts" has a certain appeal to it, but I don't quickly see a sense in which the structure can be imposed in general.

Coproduct of two topological spaces: given a co-test topological space \\(X\\), and two continuous functions into \\(X\\) which pick out subspaces of shape \\(A\\) and \\(B\\), we want to find a space \\(P\\) such that for all \\(A\\)- and \\(B\\)-shape subspaces of \\(P\\), there is a unique \\(P\\)-shaped subspace of \\(X\\) composed of the same shapes as the \\(A\\)- and \\(B\\)-subspaces. Then it's fairly clear that \\(P\\) should be the disjoint union of \\(A\\) and \\(B\\) (compare with the fact that the forgetful functor to Set again yields the correct Set coproduct), but what topology? Surely it should be the "product" given by sets of the form (open in \\(A\\), open in \\(B\\)), since \\(A\\)-shaped subspaces of this will map directly into \\(A\\)-shaped subspaces of the co-test space, etc.
|
||||
|
||||
Coproducts, therefore, are a way of putting two things next to each other, and this is pointed out in the next paragraph, where the coproduct of two posets is the "disjoint union" of them. The coproduct of two rooted posets is what I'd have guessed, as well, given that we need to make sure the coproduct is also rooted.
|
||||
|
||||
Coproduct of two elements in a poset: that's easy by duality, since the opposite category of a poset is just the same poset with the opposite ordering. The product is the greatest lower bound, so the coproduct must be the least upper bound. How does this square with the idea of "put the two elements side by side"? This category is not concrete, so we need to work out what we mean by "an element of shape \\(A\\)". Since an arrow \\(A \to X\\) is precisely the fact that \\(A \leq X\\), we have that for every element \\(X\\) of the poset, the elements which compare less than or equal to \\(X\\) are exactly those \\(y\\) with "images of shape \\(y\\)" in \\(X\\). Therefore, the coproduct condition says "for every co-test object \\(X\\), for every pair of images of shape \\(A, B\\) in \\(X\\), there is an image of shape \\(A+B\\) in \\(X\\) which restricts to the images of shape \\(A\\) and \\(B\\) respectively". With a bit of mental gymnastics, that does correspond to \\(A+B\\) being the least upper bound.

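A tiny concrete check of this, in a poset I can actually compute with: the divisibility ordering on \\(\{1, \dots, 12\}\\), where the product is the gcd and the coproduct is the lcm. The example is mine, not the book's; a sketch in Python:

```python
from math import gcd

# In the poset of {1,...,12} under divisibility, an arrow a -> x is "a divides x".
# The coproduct of a and b is their least upper bound: lcm(a, b).

def lcm(a, b):
    return a * b // gcd(a, b)

P = range(1, 13)
a, b = 4, 6
coprod = lcm(a, b)

# UMP: whenever a and b both have "images" in x (both divide x), the coproduct
# also divides x; the mediating arrow is the fact lcm(a,b) | x, automatically
# unique because there is at most one arrow between any two poset elements.
for x in P:
    if x % a == 0 and x % b == 0:
        assert x % coprod == 0
assert coprod % a == 0 and coprod % b == 0   # the two "injections" exist
```
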
Coproduct of two formulae in the category of proofs: an arrow from one formula to another is a deduction. An "image of shape \\(A\\) in \\(X\\)" - an arrow \\(A \to X\\) - is therefore a statement that we can deduce \\(X\\) from \\(A\\). We want a formula \\(A+B\\) such that for any co-test formula \\(X\\), and for any images of \\(A, B\\) in \\(X\\), there is a unique image of \\(A+B\\) in \\(X\\) which respects the shapes of \\(A\\) and \\(B\\) in \\(A+B\\). Hang on - at this point I realise that the opposite category of the category of proofs is the "category of negated proofs", and the "opposite category" functor is simply taking "not" of everything. That's because the contrapositive of a statement is equivalent to the statement. Therefore since the product is the "and", the coproduct should be the "or" (which is the composition of "not-and-not", or "dual-product-dual"). I'll keep going anyway.

We need to be able to prove \\(A+B\\) from \\(A\\), and to prove \\(A+B\\) from \\(B\\). That's already mighty suggestive. Moreover, if there's a proof of \\(X\\) from \\(A\\), there needs to be a unique corresponding proof of \\(X\\) from \\(A+B\\). That's enough for my intuition to say "this is OR, beyond all reasonable doubt".

I now look at the book's explanation of this. Of course, I omitted to perform an actual proof that OR formed the coproduct, and that bites me here: identical arrows must yield identical proofs, but any proof which goes via "a OR b" must be different from one which ignores b. Memo: need to prove the intuitions.

Coproduct of two monoids. Ah, this is a cunning idea, viewing a monoid as a sub-monoid of its free monoid. We already know how to take the coproduct of two free monoids, and we can do the equiv-rel trick that worked with the category of proofs above. Is it possible that in general we do coproducts by moving into a free construction and then quotienting back down? I'm struggling to see how free posets might work, so I'll shelve that idea for now.

I went to sleep in between the previous paragraph and this one, so I'm now in a position to write out a proper proof that the coproduct of two monoids is as stated. I did it without prompting in a very concrete way: given (the equivalence class of) a word in \\(M(\vert A \vert + \vert B \vert)\\), and two maps \\(z_1: A \to N\\) and \\(z_2: B \to N\\), we send the letter \\(a \in \vert A \vert\\) to \\(z_1(a)\\), etc. The book gives a more abstract way of doing it. I don't feel like I could come up with that myself in a hurry without a better categorical idea of "quotient by an equivalence relation". At least this way gave me a good feel for why we needed to do the quotient: otherwise our \\(\phi: a \mapsto z_1(a)\\) could have been replaced by \\(a \mapsto u_A z_1(a)\\). The map is unique in this setting. Indeed, suppose \\(\phi([w]) \not = \phi_2([w])\\) for some \\(w\\). We may assume wlog that \\(w\\) is just one character long, since any longer and we could use that \\(\phi, \phi_2\\) are "homomorphic" to find a character where they differed. (That's where we need that we're working with equivalence classes.) Wlog \\([w] = [w_1]\\). Then \\(\phi([w_1]) \not = \phi_2([w_1])\\); but that means \\(z_1(w_1) \not = z_1(w_1)\\), a contradiction, because the map \\(\phi_2\\) also needs to commute with \\(z_1\\).

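The concrete map I describe can be sketched in Python (my own encoding of tagged words, with \\(N\\) taken to be the natural numbers under addition for the example):

```python
# Sketch of the mediating map out of the monoid coproduct, as in my proof above.
# Elements of A + B are words of tagged letters; phi maps a word to the product
# in N of the z1/z2-images. N is given as an operation and a unit (my encoding).

def phi(word, z1, z2, op, unit):
    """[z1, z2] applied to a word of tagged letters ('A', a) / ('B', b)."""
    result = unit
    for tag, x in word:
        result = op(result, z1(x) if tag == 'A' else z2(x))
    return result

# Example: A = B = (naturals, +, 0); z1 doubles, z2 triples.
word = [('A', 2), ('B', 1), ('A', 1)]   # the word "a2 b1 a1"
value = phi(word, lambda a: 2 * a, lambda b: 3 * b, lambda m, n: m + n, 0)
assert value == 2 * 2 + 3 * 1 + 2 * 1   # letter by letter, as in the proof
```
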
I make sure to note that the forgetful functor Mon to Sets doesn't preserve coproducts.

Aha! An example I've seen recently in a different context. (Oops, I've glanced to the bottom of the page, Proposition 3.11. I'll wait til I actually get there.)

I'm confused by the "in effect we have pairs of elements" idea. What about a word like \\(a_1 a_2 b_1 b_2 b_3\\)? Then we don't get a pair of elements containing \\(b_3\\). Ah, I see - Awodey is implicitly pairing with \\(0_A\\) in that example. I'd have preferred to have that spelled out. Now I do see that the underlying set of the coproduct should be the same as that of the product, and that the given coproduct is indeed a coproduct.

Now my "aha" moment from earlier. I've seen this fact referenced [a few days ago][stack exchange] on StackExchange. I can follow the proof, and I see where it relies on abelian-ness, but don't really see much of an idea behind it. The obvious arrows \\(A \to A\\) and \\(B \to B\\) are picked, but it seems to be something of a trick to pick the zero homomorphism \\(A \to B\\). In hindsight, it's the only homomorphism we could have picked, but it would have taken me a while to think of it.

I skim over the bit about abelian categories, and over the various dual notions to products (like "coproducts are unique up to isomorphism", "the empty coproduct is an initial object" etc).

# Summary

This was a bit less intuitive than the idea of the product. Instead of "finding \\(Z\\)-shaped things in \\(A\\) and \\(B\\) means we can find a \\(Z\\)-shaped thing in the product", we have "finding \\(A\\)- and \\(B\\)-shaped things in \\(Z\\) means we can find a coproduct-shaped thing in \\(Z\\) too". It took me a while to fix this in my mind, and it still seems to me a little harder for something to be a coproduct: we've had to bother with equivalence classes more.

[semantic truth]: https://en.wikipedia.org/wiki/Semantic_theory_of_truth
[syntactic truth]: https://en.wikipedia.org/wiki/Logical_consequence
[stack exchange]: https://math.stackexchange.com/a/1430755/259262

53
hugo/content/awodey/2015-09-16-equalisers.md
Normal file
@@ -0,0 +1,53 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-16T00:00:00Z"
math: true
aliases:
- /categorytheory/equalisers/
- /equalisers/
title: Equalisers
---

This is pages 62 through 71 of Awodey, on [equalisers] and coequalisers.

The first paragraph is really quite exciting. I can see that there would be a common generalisation of kernels and varieties - they're the same idea that lets us find complementary functions and particular integrals of linear differential equations, for instance. But the axiom of separation ("subset selection") as well? Now that's intriguing.

We are given the definition of an equaliser: given a pair of arrows with the same domain and codomain, it's an arrow \\(e\\) which may feed into that domain to make the two arrows "the same according to \\(e\\)".

Let's see the example of \\(f, g: \mathbb{R}^2 \to \mathbb{R}\\) with \\(f(x, y) = x^2+y^2, g(x,y) = 1\\). I'll try and find the equaliser (in Top) myself. It'll be a topological space \\(E\\) and a continuous function \\(e: E \to \mathbb{R}^2\\) such that \\(f \circ e = g \circ e\\). That is, such that \\(f \circ e = 1\\). That makes it easy: were it not for the "universal" property, \\(E\\) could be anything which has a continuous function mapping it into the unit circle in \\(\mathbb{R}^2\\), and \\(e\\) would be that mapping. (I'm beginning to see where the axiom of subset selection might come in.) But if we took the space \\(E = \{ (1, 0) \}\\) and the inclusion mapping as \\(e\\), this would fail the universal property: a test object which maps onto other points of the circle has no way of factoring through the single point. In order to make sure everything is specified uniquely, we'll want \\(E\\) to be the entire unit circle and its inherited topology. Ah, Awodey points out that in this case, the work is easy because the inclusion is monic and so uniqueness is automatic.

Let's do the same thing for Set. The equaliser of \\(f, g: A \to B\\) is a function \\(e: E \to A\\) such that \\(f \circ e = g \circ e\\). We need to make sure \\(f, g\\) only ever see the elements where they're equal after the \\(e\\)-filter has been applied to them, so \\(e\\) must only map into the set \\(\{a \in A : f(a) = g(a) \}\\). It should be easy to show that the equaliser is actually that set with the obvious inclusion into \\(A\\), and I look at the book to see that it is indeed so.

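A minimal sketch of that Set construction in Python (my own illustration, not the book's):

```python
# Equaliser in Sets: E = {a in A : f(a) = g(a)}, with e the inclusion into A.

A = range(-5, 6)
f = lambda a: a * a
g = lambda a: a + 2

E = [a for a in A if f(a) == g(a)]   # solutions of a^2 = a + 2: [-1, 2]
e = lambda x: x                      # the inclusion E -> A

# f o e = g o e by construction:
assert all(f(e(x)) == g(e(x)) for x in E)

# UMP: any h : Z -> A with f h = g h lands inside E, so it factors
# (uniquely) through the inclusion.
Z = [0, 1]
h = lambda z: -1 if z == 0 else 2
assert all(h(z) in E for z in Z)
```
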
"Every subset is an equaliser" is therefore true, and the characteristic function is indeed the obvious way to go about it. Huh - the axiom of subset selection has just fallen out, stating that there is an inverse to the characteristic function. Magic. Then \\(\text{Hom}(A, 2) \cong \mathbb{P}(A)\\), which we already knew because to specify an element of the power-set is precisely to specify which elements of \\(A\\) are included.
|
||||
|
||||
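The bijection \\(\text{Hom}(A, 2) \cong \mathbb{P}(A)\\) is easy to check by brute force on a small set - a sketch in Python (my own illustration):

```python
from itertools import product as iproduct

# Hom(A, 2) ~ P(A): a characteristic function A -> {0, 1} is exactly a subset.
A = (0, 1, 2)

def to_subset(chi):                   # Hom(A, 2) -> P(A)
    return frozenset(a for a in A if chi[a] == 1)

def to_chi(S):                        # P(A) -> Hom(A, 2)
    return tuple(1 if a in S else 0 for a in A)

# The two maps are mutually inverse, so |Hom(A, 2)| = |P(A)| = 2^|A|.
chis = list(iproduct((0, 1), repeat=len(A)))
subsets = {to_subset(chi) for chi in chis}
assert len(subsets) == 2 ** len(A)
assert all(to_chi(to_subset(chi)) == chi for chi in chis)
```
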
Equalisers are monic: well, the diagram certainly looks like being the right shape, and it's intuitive for Sets: if \\(E \to A\\) weren't injective, then we could choose more than one way of mapping \\(Z \to E \to A \to B\\). The proof in general mimics the Sets example.

Then a blurb on how equalisers can often be made as "restrict the sets and inherit the structure". That's a nice rule of thumb. Awodey points out the "kernel of homomorphism" interpretation, which I'd already pattern-matched in the first paragraph. The equaliser is basically specifying an equivalence class under an equiv rel.

Hah, I wrote that before seeing that a coequaliser is a generalisation of "take the quotient by an equiv rel". Makes sense: if the equaliser is an equivalence class, it seems reasonable for its dual to be a quotient. I skip past the definition of an equivalence relation, because I already know it. What does the definition of a coequaliser really mean? It's an arrow \\(q: B \to Q\\) such that once we've let \\(f, g\\) do their thing, we can apply \\(q\\) to make it look like they've done the *same* thing. It's the other way round from the equaliser, which restricted \\(f, g\\) so that they could only do the same thing. I can see why this is like taking a quotient, and the next example makes that very clear.

Coproduct of two rooted posets: we quotient by the appropriate equivalence relation. That is, we co-equalise using \\(\{ 0 \}\\) and its obvious inclusions into the two posets. I draw out the diagram and after some wrestling I convince myself that the rooted-posets coproduct is as stated. I'm still getting used to this diagram-chasing. I'll wait til the exercises to do the Top example.

Presentations of algebras. I've never seen a demonstration that all groups can be obtained as presentations of free groups, I think, although it's fairly clear that it can be done (just specify every single relation that can possibly hold - in effect writing out the Cayley table). I would prefer it if Awodey defined \\(F(1)\\) explicitly, since it takes me a moment to realise it's the free algebra on one generator. Then \\(F(3) = F(x(1), y(1), z(1))\\). We then perform the next coequaliser. Awodey is again confusing me a bit, and I have to stop and work out that by \\(q(y^2)\\), he means \\(q \circ (1 \mapsto y^2)\\), and by \\(q(1)\\) he means \\(q \circ (1 \mapsto 1)\\). It's obvious what the intent is - chaining together these coequalisers in the obvious way. Each coequaliser doesn't significantly change the structure of the free group, so each coequaliser can be applied in turn, using the inherited structure where necessary. However, this is a bit of a confusing write-up.

"The steps can be done simultaneously": oh dear. It looks like we should be able to do this construction sequentially all the time, and that is conceptually easier, but I'll try and understand the all-at-once construction anyway. Firstly, we define \\(F(2)\\) because we want to equalise two 2-tuples. (It would be \\(F(3)\\) if we had three constraints and so wanted to equalise two 3-tuples.) My instinct would have been to do this with the algebra product instead of the algebra coproduct - using \\(F(2) = F(1) \times F(1)\\). I asked a friend about this. The intuition is apparently that the product is good for imposing multiple conditions at the same time, while what we really want is a way to impose one of a number of conditions. The coproduct (by way of the direct sum) has the notion of "one of a number of things, but not necessarily all of them together".
|
||||
|
||||
I draw out the diagram again the next day. This time it makes a bit more sense that we need the coproduct: it's because we need a way of getting from \\(F(1)\\) to \\(F(3)\\), and if we used the product we'd have all our arrows ending at \\(F(1)\\) rather than originating at \\(F(1)\\). I can see why the UMP guarantees the uniqueness of the algebra given by these generators and relations now.
|
||||
|
||||
On to the specialisation to monoids. The construction is… odd, so I'll try and forget about it for a couple of minutes and then do it myself. We want to construct the free group on all the generators we have available - that is, all the monoid elements - so we're going to need a functor \\(T\\) (to use Awodey's notation) taking \\(N \to M(\vert N \vert)\\). Then we're also going to need a way to specify all the different relations in the monoid. We can do that by specifying a mapping taking a word \\(x_1, x_2, \dots, x_n\\) to the monoid product \\(x_1 x_2 \dots x_n\\). Write \\(C\\) for that multiplication ("C" for "collapsing") functor \\(T(N) \to N\\). Aha: our restrictions become of the form \\(C(x_1, x_2) = x_3\\), representing the equation \\(x_1 x_2 = x_3\\).
|
||||
|
||||
Very well: we're going to need our left-hand sides to be going from \\(T^2 N \to T N\\), and our right-hand sides likewise. Then we'll coequalise them. Let \\(f: T^2 N \to T N\\) take a word of words to its corresponding word of products, and let \\(g: T^2 N \to T N\\) take a word of words to the product of products. Wait, that's got the wrong codomain. Let \\(g: T^2 N \to T N\\) take a word of words to the corresponding word of letters. That's better: we have basically provided a list of equivalences between \\((x_1, x_2)\\) and \\(x_1 x_2\\).

Finally, we take the coequaliser \\(e\\) of \\(f\\) and \\(g\\), and hope and pray that \\(N\\) (our original monoid) has the UMP for the resulting object. I remember from the proof in the book that we should first show that the coequaliser arrow is the operation "take the product of the word". (In hindsight, that's a great place to start. It's harder to deal with a function without knowing what it is.) Certainly the "take the product of the word" function does what we want, but does it actually satisfy the UMP? Drawing out a diagram convinces me that it does: any \\(\phi\\) which doesn't care how the letters are grouped between words, descends uniquely to a map from the monoid of all words.

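Here's the shape of that construction in Python, taking \\(N\\) to be the integers under addition (my choice of example; this only spot-checks \\(e \circ f = e \circ g\\) on samples, it proves nothing):

```python
from functools import reduce

# T takes a set to words over it (lists); f multiplies each inner word out,
# g flattens; the coequaliser arrow e multiplies a whole word out in N.

def f(word_of_words):                    # T^2 N -> T N: word of products
    return [sum(w) for w in word_of_words]

def g(word_of_words):                    # T^2 N -> T N: word of letters (flatten)
    return [x for w in word_of_words for x in w]

def e(word):                             # T N -> N: take the product of the word
    return reduce(lambda m, n: m + n, word, 0)

samples = [[[1, 2], [3]], [[], [4, 5]], [[7], [], [8, 9]]]
for ww in samples:
    assert e(f(ww)) == e(g(ww))          # e coequalises f and g
```
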
The coequaliser arrow therefore definitely does go into \\(N\\), and we can identify \\(N\\) with the coequaliser in the obvious way by including from the coequaliser (which still technically has the structure of a free monoid).

# Summary

This is yet another thing I've started to get a feel for, but not really understood. I now know what coequalisers and equalisers are for, and the utility of the "duals" idea. The exercises will certainly be helpful.

[equalisers]: https://en.wikipedia.org/wiki/Equaliser_(mathematics)#In_category_theory

66
hugo/content/awodey/2015-09-19-duality-exercises.md
Normal file
@@ -0,0 +1,66 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-19T00:00:00Z"
math: true
aliases:
- /categorytheory/duality-exercises/
- /duality-exercises/
title: Duality exercises
---

Exercise 1 is easy: at the end of Chapter 2 the corresponding products statement was proved, and the obvious dual statement turns out to be this one.

Exercise 2 falls out of the appropriate diagram, whose upper triangle is irrelevant.

![Free monoid functor preserves coproducts][ex 2]

Exercise 3 I've [already proved][duality] - search on "sleep".

Exercise 4: Let \\(\pi_1: \mathbb{P}(A + B) \to \mathbb{P}(A)\\) be given by \\(\pi_1(S) = S \cap A\\), and \\(\pi_2: \mathbb{P}(A+B) \to \mathbb{P}(B)\\) likewise by \\(S \mapsto S \cap B\\). Claim: this has the UMP of the product of \\(\mathbb{P}(A)\\) and \\(\mathbb{P}(B)\\). Indeed, if \\(z_1: Z \to \mathbb{P}(A)\\) and \\(z_2: Z \to \mathbb{P}(B)\\) are given, then \\(z: Z \to \mathbb{P}(A + B)\\) is specified uniquely by \\(S \mapsto z_1(S) \cup z_2(S)\\) (taking the disjoint union).

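Exercise 4 can also be spot-checked by brute force on small disjoint sets - my own illustration in Python:

```python
from itertools import combinations

# P(A + B) ~ P(A) x P(B), with projections S |-> S n A and S |-> S n B
# (A and B disjoint, so A + B is just their union here).

A = frozenset({"a1", "a2"})
B = frozenset({"b1"})
U = A | B

def powerset(X):
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

pi1 = lambda S: S & A
pi2 = lambda S: S & B
pair_up = lambda SA, SB: SA | SB     # the mediating map: take the union

# <pi1, pi2> is a bijection P(A+B) -> P(A) x P(B):
seen = {(pi1(S), pi2(S)) for S in powerset(U)}
assert len(seen) == len(powerset(A)) * len(powerset(B))
assert all(pair_up(pi1(S), pi2(S)) == S for S in powerset(U))
```
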
Exercise 5: Let the coproduct of \\(A, B\\) be their disjunction. Then the "coproduct" property is saying "if we can prove \\(Z\\) from \\(A\\) and from \\(B\\), then we can prove it from \\(A \vee B\\)", which is clearly true. The uniqueness of proofs is sort of obvious, but I don't see how to prove it - I'm not at all used to the syntax of natural deduction. I look at the answer, which makes everything clear, although I still don't know if I could reproduce it. I understand its spirit, but not the mechanics of how to work in the category of proofs.

Exercise 6: we need that for any two monoid homomorphisms \\(f, g: A \to B\\) there is a monoid \\(E\\) and a monoid homomorphism \\(e: E \to A\\) universal with \\(f e = g e\\). Certainly there is a monoid hom \\(e: E \to A\\) with that property (namely the trivial hom), so we just need to find one that is "big enough". Let \\(E\\) be the subset of \\(A\\) on which \\(f = g\\), which is nonempty because they must be equal on \\(1_A\\). I claim that it is a monoid with \\(A\\)'s operation. Indeed, if \\(f(a) = g(a)\\) and \\(f(b) = g(b)\\) then \\(f(ab) = f(a) f(b) = g(a) g(b) = g(ab)\\). This also works with abelian groups - and apparently groups as well.

Finally we need that this structure satisfies the universal property. Let \\(Z\\) be a monoid with hom \\(h: Z \to A\\), such that \\(f h = g h\\). We want a hom \\(\bar{h} : Z \to E\\) with \\(e \bar{h} = h\\). But if \\(f h = g h\\) then we must have the image of \\(h\\) being in \\(E\\), so we can just take \\(\bar{h}\\) to be the inclusion. This reasoning works for abelian groups too. We relied on Mon having a terminal element and monoids being well-pointed.

Finite products: we just need to check binary products and the existence of a terminal object (the empty product). Terminal objects are easy: the trivial monoid/group is terminal - indeed it's a zero object, being initial as well. Binary products: the componentwise direct product satisfies the UMP for the product, since if \\(z_1: Z \to A, z_2: Z \to B\\) then take \\(z: Z \to A \times B\\) by \\(z(y) = \langle z_1(y), z_2(y) \rangle\\). This is obviously homomorphic, while the projections make sure it is unique.

Exercise 7 falls out of another diagram. The (1) label refers to arrows forced by the first step of the argument; the (2) label to the arrow forced by the (1) arrows.

![Coproduct of projectives is projective][ex 7]

Exercise 8: an injective object is \\(I\\) such that for any \\(X, E\\) with arrows \\(h: X \to I, m: X \to E\\) with \\(m\\) monic, there is \\(\bar{h}: E \to I\\) with \\(\bar{h} m = h\\). Let \\(P, Q\\) be posets, and let \\(f: P \to Q\\) be monic. Then for any points \\(x, y: \{ 1 \} \to P\\) we have \\(fx = fy \Rightarrow x=y\\), so \\(f\\) is injective. Conversely, if \\(f\\) is not monic then we can find \\(a: A \to P, b: B \to P\\) with \\(fa = fb\\) but \\(a \not = b\\). This means \\(A = B\\) because the arrows \\(fa, fb\\) agree on their domain; so we have \\(a, b: A \to P\\) and \\(x \in A\\) with \\(a(x) \not = b(x)\\). But \\(f a(x) = f b(x)\\), so we have \\(f\\) not injective.

Now, a non-injective poset: we want to set up a situation where we force some extra structure on \\(X\\). If \\(I\\) has two distinct nontrivial chunks which have no elements comparable between the chunks, then \\(I\\) is not injective. Indeed, let \\(X = I\\). Then the inclusion \\(X \to I\\) does not lift across the map which sends one chunk "on top of" the other: say one chunk is \\(\{a \leq b \}\\) and the other \\(\{c \leq d\}\\), then the map would have image \\(a \leq b \leq c \leq d\\).

What about an injective poset? The dual of "posets" is "posets", so we can just take the dual of any projective poset - for instance, any discrete poset. Anything well-ordered will also do, suggests my intuition, but I looked it up and apparently the injective posets are exactly the complete lattices. Therefore a wellordering will almost never do. I couldn't see why \\(\omega\\) failed to be injective, so I asked a question on Stack Exchange; midway through, I [realised why][SE].

Exercise 9: \\(\bar{h}\\) is obviously a homomorphism. Indeed, \\(\bar{h}(a) \bar{h}(b) = h i(a) h i(b) = h(i(a) i(b))\\) because \\(h\\) is a homomorphism. But \\(i(a)\\) is the wordification of the letter \\(a\\), and \\(i(b)\\) likewise of \\(b\\), so we have \\(i(a) i(b)\\) is the word \\((a, b)\\), which is itself the inclusion of the product \\(ab\\).

Exercise 10: Functors preserve the structure of diagrams, so we just need to show that the unique arrow guaranteed by the coequaliser UMP corresponds to a *unique* arrow in Sets. We need to show that given a function \\(\vert M \vert \to \vert N \vert\\) there is only one possible homomorphism \\(M \to N\\) which forgetful-functors down to it. But a homomorphism \\(M \to N\\) does specify where every single set element in \\(\vert M \vert\\) goes, so uniqueness is indeed preserved.

Exercise 11: Let \\(R\\) be the smallest equiv rel on \\(B\\) with \\(f(x) \sim g(x)\\) for all \\(x \in A\\). Claim: the projection \\(\pi: B \to B/R\\) is a coequaliser of \\(f, g: A \to B\\). Indeed, let \\(C\\) be another set, with a function \\(c: B \to C\\) such that \\(c f = c g\\). Then there is a unique function \\(q: B/R \to C\\) with \\(q \pi = c\\): namely, \\(q([b]) = c(b)\\). This is well-defined because \\(c f = c g\\) means \\(c\\) identifies \\(f(x)\\) with \\(g(x)\\) for each \\(x\\), so \\(c\\) is constant on each \\(R\\)-equivalence class.

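A concrete sketch of this coequaliser in Python, building the generated equivalence relation with a crude union-find (the implementation choice is mine):

```python
# Coequaliser of f, g : A -> B in Sets: quotient B by the smallest
# equivalence relation with f(x) ~ g(x) for every x in A.

def coequaliser(A, B, f, g):
    parent = {b: b for b in B}
    def find(b):                         # representative of b's class
        while parent[b] != b:
            b = parent[b]
        return b
    for x in A:                          # impose f(x) ~ g(x)
        parent[find(f(x))] = find(g(x))
    # pi : B -> B/R, with classes named by canonical representatives
    return lambda b: find(b)

A = [0, 1]
B = ["p", "q", "r", "s"]
f = lambda x: "p" if x == 0 else "q"
g = lambda x: "q" if x == 0 else "r"
pi = coequaliser(A, B, f, g)

assert all(pi(f(x)) == pi(g(x)) for x in A)    # pi coequalises f and g
assert pi("s") != pi("p")                      # s stays in its own class
```
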
Exercise 12 I've [already done][duality] - search on "wrestling", though I didn't write this up.

Exercise 13: I left this question to the end and couldn't be bothered to decipher the notation.

Exercise 14: The equaliser of \\(f p_1\\) and \\(f p_2\\) is universal \\(e: E \to A \times A\\) such that \\(f p_1 e = f p_2 e\\). Let \\(E = \{ (a, b) \in A \times A : f(a) = f(b) \}\\) and \\(e\\) the inclusion. It is an equivalence relation manifestly: if \\(f(a) = f(b)\\) and \\(f(b) = f(c)\\) then \\(f(a) = f(c)\\), and so on.

The kernel of \\(\pi: A \mapsto A/R\\), the quotient by an equiv rel \\(R\\), is \\(\{ (a, b) \in A \times A : \pi(a) = \pi(b) \}\\). This is obviously \\(R\\), since \\(a \sim b\\) iff \\(\pi(a) = \pi(b)\\). That's what it means to take the quotient.

The coequaliser of the two projections \\(R \to A\\) is the quotient of \\(A\\) by the equiv rel generated by the pairs \\(\langle \pi_1(x), \pi_2(x) \rangle\\), as in exercise 11. This is precisely the specified quotient.

The final part of the exercise is a simple summary of the preceding parts.

Exercise 15 is more of a "check you follow this construction" than an actual exercise. I do follow it.

[duality]: {{< ref "2015-09-15-duality-in-category-theory" >}}
[ex 2]: /images/CategoryTheorySketches/FreeMonoidFunctorPreservesCoproducts.jpg
[ex 7]: /images/CategoryTheorySketches/CoproductOfProjectivesIsProjective.jpg
[SE]: https://math.stackexchange.com/a/1442264/259262

57
hugo/content/awodey/2015-09-19-groups-in-categories.md
Normal file
@@ -0,0 +1,57 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-19T00:00:00Z"
math: true
aliases:
- /categorytheory/groups-in-categories/
- /groups-in-categories/
title: Groups in categories
---

I go into this chapter hoping that it will be on things I already know about group theory. This post will be on pages 75 through 85 of Awodey.

I already know about groups over sets, but this looks like they can be made more generally over other categories. It is clear that we will need to consider only categories with finite products, because the notion of a binary operation requires us to work on pairs of elements.

The definition of a group is the obvious one: an object \\(G\\) with an "inverses" arrow \\(i: G \to G\\), a "multiplication" arrow \\(m: G \times G \to G\\) and a "unit" arrow \\(u: 1 \to G\\), such that \\(m\\) is associative in the obvious way, \\(u\\) is a unit for \\(m\\), and \\(i\\) is an inverse with respect to \\(m\\) - drawing out the appropriate diagrams.

The definition of a homomorphism is likewise very familiar, and the examples which follow are very clear. (The operations are arrows, so they must preserve structure.)

Example 4.4 is a group in the category of groups. I remember having proved Proposition 4.5 on an example sheet somewhere, but it wasn't indicated there that it was anything particularly important. I've only glanced over the construction of a group in the category of groups, so I'll try and work out what it is myself. A group in the category of groups is a group \\(G\\) together with its self-product \\(G \times G\\), and associative homomorphism \\(m: G \times G \to G\\), and \\(u: \{ 1 \} \to G\\), and \\(i: G \to G\\) which acts as an inverse for \\(m\\). This is still a bit nonspecific, so can we say anything about \\(m\\)? It must preserve the group structure on \\(((G, \cdot), m, i)\\), and we know \\(\cdot\\) preserves the group structure on \\((G, \cdot)\\). Is there perhaps a way to get them to play nicely together?

I'll write \\(\cdot\\) for the original group operation on \\(G\\). Then \\(m(a \cdot b, c \cdot d) = m(a, c) \cdot m(b, d)\\) because \\(m\\) is a homomorphism \\(G \times G \to G\\). Letting \\(a = 1_G, d = 1_G\\) yields \\(m(b, c) = c \cdot b\\). Letting \\(b=1_G, c=1_G\\) yields \\(m(a, d) = a \cdot d\\). Therefore in fact \\(m\\) is the group operation on \\(G\\), and \\(G\\) is also abelian. (I won't bother with the converse, since on looking, the book says it's easy.)

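The interchange-law calculation above can be sanity-checked by brute force (my own check, not the book's proof): in an abelian group the group operation itself satisfies the interchange law, while in \\(S_3\\) it fails, so by the argument above \\(S_3\\) carries no group structure in the category of groups.

```python
from itertools import product, permutations

# Check the interchange law m(a.b, c.d) = m(a,c).m(b,d) in the special case
# where m is the group operation itself.

def interchange_holds(elems, op):
    return all(op(op(a, b), op(c, d)) == op(op(a, c), op(b, d))
               for a, b, c, d in product(elems, repeat=4))

# Z/4 is abelian: its own operation satisfies the interchange law.
z4 = range(4)
add4 = lambda a, b: (a + b) % 4
assert interchange_holds(z4, add4)

# S3 is non-abelian: composition of permutations fails the interchange law.
s3 = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
assert not interchange_holds(s3, compose)
```
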
A strict monoidal category is a monoid in Cat. Dear heavens, this is confusingly general. I'll have to go through the examples Awodey gives.

The operation of taking products and coproducts (that is, the meet/join operations) does indeed satisfy the criterion - ah, I move down and see that Awodey points out that these only hold up to isomorphism, not equality, so this isn't "strict". In posets, though, there's at most one arrow between any two objects, so we really do have equality.

A discrete monoidal category is a standard Set-monoid: I can see that each Set-monoid is a discrete monoidal category. How about the converse? Yep, that's fine as long as we're talking about small categories. (I briefly got confused between the morphisms and the \\(\otimes\\) operation, but that's cleared up now.)

A strict non-poset monoidal category is the finite ordinals: since no arrow between two different ordinals has an inverse, we must have that objects are unique not just up to isomorphism, but in a more specific sense. This again lets us say that this is a strict monoidal category.

I'll leave that and hope for the best. Next is the category of groups, and we see the familiar equivalence between kernels of homomorphisms and normal subgroups. There's also this idea of "the equaliser is the subgroup; the coequaliser is the quotient" from earlier. I prove the coequaliser statement myself without looking at the proof - it's not hard, and it just involves showing that for \\(H\\) normal in \\(G\\), if \\(k: G \to K\\) is such that \\(k i = k u\\), then \\(k\\) is constant on \\(H\\) and so descends to the quotient \\(G/H\\). The category-theoretic statement about coequalisers is much more fearsome than the concrete group-theoretic one!

I'm very familiar with these results, so having done one of them (the coequalisers one), I skip through to the first one I don't know, which is Corollary 4.11. Actually this is the First Isomorphism Theorem in disguise. I whizz down to the exercises and see that the cokernel construction is an exercise, so I'll leave it til then (I'd like to avoid fragmenting them, and also I can't be bothered at the moment).

Section 4.3: groups as categories. Groups certainly are categories - that's how I defined them in my Anki card for the category theory deck. A functor is therefore clearly a group hom, as Awodey says.

Ah, that's cool. Functors from a group (viewed as a category) to any category form "representations" of that group. Elements of \\(G\\) become automorphisms of an object in \\(C\\). In the case of the functor into the category of finite-dimensional vector spaces with linear maps, we can have \\(G\\) appearing as the automorphism group of a wide variety of different objects: for instance, \\(C_5\\) acts on \\(\mathbb{C}^1\\), or on \\(\mathbb{C}^2\\) or on…

In the case of the functor into the category of sets, it's most natural to identify \\(G\\) with a subgroup of some permutation group and to make \\(G\\) act on the appropriate set; in fact this looks like the only way of describing such a functor, since every group is unique up to isomorphism, so corresponds to only one "distinct" permutation-subgroup.

Now we see the definition of a congruence on a category. It's easy to see that this is an equivalence relation on arrows which only ever identifies arrows that share a domain and codomain, and which is compatible with composition.

The congruence category uses some rather strange notation. What even is \\(C_0\\)? Surely it must be the set of objects, and \\(C_1\\) the set of arrows, but that isn't notation I remember from earlier in the book. Once that's settled, the definitions become easy: the congruence category is "the thing on which we need to take the quotient" in order to get the quotient by the congruence. It is the category whose morphisms are instead "congruent pairs of arrows" in the original category, and the composition is well-defined because \\(f' f\\) and \\(g' g\\) are congruent whenever \\(f' \sim g'\\) and \\(f \sim g\\).

There are indeed two projection functors, because we're working on a category which has "pairs of arrows" as its morphisms; then the coequaliser of those two is the desired quotient. That seems fine.

We then construct the "kernel of a functor \\(F\\)" in an analogous way to groups: two arrows are \\(F\\)-congruent iff \\(F\\) treats them in the same way, and we define the quotient category to be universal such that for any congruence \\(\sim\\), \\(F\\) descends to the quotient by \\(\sim\\) iff \\(\sim\\) is a sub-congruence of \\(\sim_F\\). (I had to sleep on this one, but I think I understand it now.)

Finally, picking \\(\sim\\) to equal \\(\sim_F\\) gives that every functor descends to the quotient by its own kernel: the quotient functor is bijective on objects and surjective on hom-sets, and the descended functor is injective on hom-sets.

# Summary

This section was an interesting one, but it took me a while to get the hang of it. I'm used to all of this in a concrete setting; seeing it in the abstract makes everything quite difficult. I'm going back to the section on hom-sets now, because the last paragraph is not intuitive at all to me, and I feel it ought to be.
50
hugo/content/awodey/2015-09-22-limits-and-pullbacks.md
Normal file
@@ -0,0 +1,50 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-22T00:00:00Z"
math: true
aliases:
- /categorytheory/limits-and-pullbacks/
- /limits-and-pullbacks/
title: Limits and pullbacks
---

I'm going to skip pages 85 through 88 of Awodey for the moment, because time is starting to get short and I want to make sure I'm doing stuff which is relevant to the Part III course on category theory. Therefore, I'll skip straight to Chapter 5, pages 89 through 95. (There's not really a nice way to break this chapter up into small chunks, because the next many pages are on pullbacks.)
We have indeed seen that every subset of a set is an equaliser: just define two functions which agree on that subset and nowhere else. (The indicator function on the subset, and the constant-1 function, for example.) A mono is a generalised subset: well, we have that arrows are generalised elements, so can we make a mono represent a collection of generalised elements? Yes, sort of: given any generalised element which is "in the subset" - that is, on which the equaliser-functions agree - that element lifts over the mono, so can be interpreted as an element of the mono. It's a bit dubious, but it'll do.
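
A quick check of the equaliser claim in Sets (my own sketch): the equaliser of the indicator function of \\(S\\) and the constant-1 function is exactly \\(S\\).

```python
# Every subset S ⊆ X is the equaliser of its indicator function and the
# constant-1 function: the equaliser is the set where the two functions agree.
X = {1, 2, 3, 4, 5}
S = {2, 4}

chi = lambda x: 1 if x in S else 0   # indicator function of S
one = lambda x: 1                    # constant-1 function

equaliser = {x for x in X if chi(x) == one(x)}
assert equaliser == S
```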

The idea of "an injective function which is isomorphic onto its image" comes up quite often, so the next chunk is quite familiar. Then the collection of subobjects of \\(X\\) lives in the slice \\(\mathbf{C}/X\\) (restricting attention to the monos into \\(X\\)), and the morphisms are the same as in the slice category: commuting triangles.
Because our arrows are monic, we can have at most one way to complete any given commuting triangle, so we get the natural idea of "there is exactly one natural inclusion map from a subset to its parent set". Finally, we define what it means for two objects to be "the same object" in this setting: namely, each includes into the other. (Remark 5.2 describes the process of quotienting out those objects which are "the same" in this sense, and points out that in Set, each subobject is isomorphic only to itself.)
We then see that subobjects of subobjects of \\(X\\) are subobjects of \\(X\\), because the composition of monic things is monic. We therefore have a way of including subobjects of subobjects of \\(X\\) into \\(X\\), and that lets us define the obvious membership relation.
The final example in this section is that of the equaliser, which is actually a subobject consisting of generalised elements which \\(f, g\\) view as being the same. I follow this construction as symbols, but as ever, I don't really have an intuition for it. I'll accept that and move on.
Pullbacks next. A pullback is a universal way of completing a square. My first thought on seeing the definition is that this is an awful lot like a product: given \\(f: A \to C, g: B \to C\\) we seek a product of \\(A\\) and \\(B\\) such that the projection diagram commutes with \\(f\\) and \\(g\\) in the right way. However, products are unique up to isomorphism, so there is "only one" product anyway: we can't just look for one which behaves in the right way, can we?
I'm going to have to try and get this in Sets. Let \\(A = \{ 1, 2 \}\\), \\(B = \{4, 5 \}\\), \\(C = \{1, 2, 4, 5 \}\\) and \\(f, g\\) the inclusions. Then the pullback \\(P\\) must be the empty set - ah, this is the intersection operation Awodey mentioned earlier, and I sense an equaliser going on here. What about \\(A = \{1, 2, 4 \}\\) instead? Then we need \\(P\\) to be \\(\{4\}\\) only.
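
These two examples can be checked mechanically (my own sketch; `pullback` is my hypothetical helper computing the standard Sets pullback \\(\{(a, b) : f(a) = g(b)\}\\)):

```python
# The pullback of f: A -> C, g: B -> C in Sets is {(a, b) : f(a) = g(b)}.
def pullback(A, B, f, g):
    return {(a, b) for a in A for b in B if f(a) == g(b)}

inc = lambda x: x  # both f and g are inclusions into C

# Disjoint A and B: the pullback is empty (the intersection is empty).
assert pullback({1, 2}, {4, 5}, inc, inc) == set()
# Overlapping A and B: the pullback picks out the intersection {4}.
assert pullback({1, 2, 4}, {4, 5}, inc, inc) == {(4, 4)}
```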

Ah, I understand my confusion. Products are indeed unique - but they are universal: they are the most general kind of thing which satisfies the UMP of the product. There are other things which satisfy the "UMP-without-the-U" of the product: the statement of the UMP but without the word "unique". We want to pick the most general one of those which satisfies a certain property. So a product is just a pullback where \\(C\\) is terminal, for instance.
Proposition 5.5 is a description of the pullback as an equaliser. I knew there would be something like this! Without looking at the proof, I can tell it'll revolve around the fact that equalisers are monic (that'll be the step which guarantees uniqueness). The proof follows just by drawing out the diagram, really.
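
In Sets the proposition can be verified on the example above (my own finite check): equalising \\(f p_1\\) and \\(g p_2\\) on the product gives the same set as the pullback formula.

```python
# Proposition 5.5 in Sets: the pullback of f, g is the equaliser of
# f∘p1, g∘p2 : A×B -> C.  A finite check that the two constructions agree.
A, B, C = {1, 2, 4}, {4, 5}, {1, 2, 4, 5}
f = g = lambda x: x  # inclusions into C

product = {(a, b) for a in A for b in B}
equalised = {(a, b) for (a, b) in product if f(a) == g(b)}  # equaliser of f∘p1, g∘p2
direct = {(a, b) for a in A for b in B if f(a) == g(b)}     # pullback directly
assert equalised == direct == {(4, 4)}
```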
![Pullback exists if equalisers and products do][pullback exists]
Now comes a demonstration that inverse images are a kind of pullback. I don't see a way to understand this intuitively enough that I could reproduce it - the idea is simple but very much counter to my intuitions. I'll just plough on.
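
Even without an intuition, the Sets case is easy to check (my own sketch): pulling the inclusion of \\(M \subseteq B\\) back along \\(f: A \to B\\) gives a set in bijection with \\(f^{-1}(M)\\).

```python
# Inverse image as a pullback: pulling the inclusion m: M -> B back
# along f: A -> B yields (up to iso) the subset f^{-1}(M) of A.
A = {0, 1, 2, 3}
B = {0, 1}
f = lambda a: a % 2
M = {0}                                            # subset of B; m is the inclusion

P = {(a, m) for a in A for m in M if f(a) == m}    # the pullback of f and m
assert {a for (a, m) in P} == {a for a in A if f(a) in M}  # i.e. f^{-1}(M)
```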
In a pullback of \\(f: A \to B, m: M \to B\\), if \\(m\\) is monic then its parallel arrow \\(m'\\) is: that follows from another diagram.
![Monic implies parallel arrow is monic in a pullback][monic]
# Summary
I get the impression that the idea of a limit is a very general one, of which presumably pullbacks are a specific example - I can't think of something which generalises the idea of "inverse image" off the top of my head. We're going to have six more pages on pullbacks, and then the idea of a limit will be introduced. (This chapter is rather long.)

I do like the way Awodey is doing this: give examples of specific constructions, and then show how they may be unified. I glanced down to the blurb at the start of the "limits" section, and saw that another such unification is about to take place. I'm looking forward to that.
[pullback exists]: {{< baseurl >}}images/CategoryTheorySketches/PullbackExistsWithEqualisers.jpg
[monic]: {{< baseurl >}}images/CategoryTheorySketches/ParallelArrowInPullbackIsMonic.jpg
@@ -0,0 +1,61 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-22T00:00:00Z"
math: true
aliases:
- /categorytheory/properties-of-pullbacks-limits/
- /properties-of-pullbacks-limits/
title: Limits and pullbacks 2
---

We have just had the definition of a pullback; now in Awodey pages 95 through 100 we'll see some more about them, and after that we'll get the more general unifying idea of the limit in pages 100 through 105.
Lemma 5.8 states that a certain commuting diagram is a pullback. The proof is by "diagram chase", and I can see why - my proof goes along the lines of gesturing several times at various parts of the diagram. Then the corollary takes me a moment to get my head around, but then I turn my head sideways and it pops out of the diagram. If you push the \\(h'\\) line into the page in Lemma 5.8, and rotate the diagram by ninety degrees, you end up with the diagram of Corollary 5.9; then part 2 of Lemma 5.8 is the corollary.

The operation of pullback is a functor. Given a "base" arrow \\(h\\), we may define a functor which takes an arrow \\(f\\) and pulls back along \\(f, h\\). It seems very plausible, but it takes me a while of staring at the diagrams before it makes sense. In particular, the diagram in the book doesn't do a good job of separating the two functor laws which are proved: namely that \\(h^* 1_X = 1_{h^* X}\\), and that \\(h^*\\) respects composition of arrows.

The corollary is that \\(f^{-1}\\) is a functor, which follows because the operation "pull things back along \\(f\\)" is a functor. Then we get that \\(f^{-1}\\) descends to the quotient by equivalence. This is all a set of symbols which I barely understand, so I take a break and then go back over the whole thing again.
Pullback is a functor. Fine. Then \\(f^{-1}: \text{Sub}(B) \to \text{Sub}(A)\\) - which is defined as the pullback of an inclusion and \\(f\\) - must also be a functor, because it is exactly the operation "take a certain pullback". The statement that \\(M \subseteq N \Rightarrow f^{-1}(M) \subseteq f^{-1}(N)\\) is just the statement that "if we apply the pullback by \\(f\\) and an inclusion, the relation \\(\subseteq\\) is preserved", which is true because pullback is a functor. (Recall that \\(M \subseteq N\\) iff there is \\(g: M \to N\\) with the triangle \\(m: M \to Z, n: N \to Z\\) commuting. We're working throughout with subobjects of the object \\(Z\\).)
Now we do have \\(M \equiv N\\) implies \\(f^{-1}(M) \equiv f^{-1}(N)\\) - recall that \\(M \equiv N\\) iff both are \\(\subseteq\\) each other - so \\(f^{-1}\\) is constant on equivalence classes and so descends to the quotient. That's a bit clearer now.
Phew, a concrete example is coming up: a pullback in Sets. I draw out the general diagram first, then write in the assumptions we make, and end up with a diagram a lot like the one in the book, except that I've labelled the unlabelled arrow "inclusion".
Ah, I'm starting to get this. The operation "take inverses" is a function which takes one "major" argument \\(f: A \to B\\), and one "minor" argument \\(M\\) (from which we extract the corresponding subset-arrow \\(m: M \to B\\)). The output is the pullback diagram, which may be interpreted as just the pullback object from those two arrows.
Once I've realised that the operation "take inverses" is as above, the top of the following page (p99) becomes trivially obvious, although I still have to do some mental work to do the interpretation in terms of substituting a term \\(f\\) for a variable \\(f\\) in function \\(\phi\\). It seems like a very complicated way of saying something very simple.
Then we see the naturality of the isomorphism \\(2^A \cong \mathbb{P}(A)\\). First, am I convinced we've even shown that there is an isomorphism? Certainly each function \\(A \to 2\\) corresponds (by inverses) to a unique member of \\(\mathbb{P}(A)\\), while each member of \\(\mathbb{P}(A)\\) corresponds to a unique member of \\(2^A\\) given by the characteristic function. Now, does the naturality diagram really commute? Yes, that's what happened above: \\(f^{-1}(V_{\phi}) = V_{\phi f}\\).
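
Here's a finite check of that naturality equation (my own sketch): with \\(\phi: B \to 2\\) and \\(f: A \to B\\), the inverse image of the truth set of \\(\phi\\) equals the truth set of \\(\phi f\\).

```python
# Naturality of 2^A ≅ P(A): for φ: B -> 2 and f: A -> B,
# f^{-1}(V_φ) = V_{φ∘f}, where V_φ is the truth set of φ.
A = {0, 1, 2, 3}
B = {0, 1, 2}
f = lambda a: a % 3
phi = lambda b: 1 if b in {0, 2} else 0   # a characteristic function B -> 2

V_phi = {b for b in B if phi(b) == 1}
lhs = {a for a in A if f(a) in V_phi}     # f^{-1}(V_φ)
rhs = {a for a in A if phi(f(a)) == 1}    # V_{φ∘f}
assert lhs == rhs == {0, 2, 3}
```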
This section has one final example: reindexing an indexed family of sets. The definition of \\(p\\) is fine; then we pull it back along \\(\alpha\\). I need to check that the guessed pullback object is indeed a pullback, for which I need a diagram. The required property eludes me completely until I realise that the topmost arrow of Awodey's diagram is in fact the identity; then the UMP falls out easily.

Section 5.4 is entitled "limits", and it promises to unify pretty much everything we've already seen. Recall the theorem that if a category has finite products and equalisers then it has pullbacks and a terminal object, because we may take an equaliser over the product to obtain a pullback, and we may perform the empty product to obtain a terminal object. Now Awodey proves the converse: constructing the product as a pullback over the terminal object, and constructing the equaliser as a pullback of the diagonal pair \\(\langle 1, 1 \rangle\\) against the pair \\(\langle f, g \rangle\\) we want to equalise.
Define a "diagram of type \\(\mathbf{J}\\) in \\(\mathbf{C}\\)" in the way you'd hope: since an arrow \\(X \to Y\\) is thought of as a shape-\\(X\\) subset of \\(Y\\), we should consider a shape-\\(\mathbf{J}\\) "subset" of \\(\mathbf{C}\\) to be a functor \\(\mathbf{J} \to \mathbf{C}\\).

Define a *cone* to the diagram \\(D\\) as - well, the name is quite suggestive. Fix a base object \\(C\\) of \\(\mathbf{C}\\), and take an arrow \\(C \to D_j\\) for each object of the diagram, all linked to this base \\(C\\). (Of course, we insist that these arrows commute with the arrows of the diagram.)
A morphism of cones behaves in the obvious way: send the base point to its new position, and send each arrow to its new arrow. (We keep the \\(D_j\\) the same, because we need to preserve the diagram; we're only changing the position of the apex of the cone.)
Finally, the definition of a limit! It's a terminal object in the category of cones on a given diagram. All cones have exactly one arrow going into this cone (if it exists). The "closest cone to the diagram" idea is a nice one, and I can see how this links with the idea of a universal mapping property. The UMPs we've seen up to now are of the form "draw this diagram, and select the closest object that fulfils it" - how neat. This immediately covers the product, pullback and equaliser examples; from the empty diagram, there is precisely one cone for each object (namely "pick a vertex, and have no maps at all"), so the category of cones is just the original category, so the limit is a terminal object.
Now, a theorem on an equivalent condition for having all finite limits. If a category has all finite limits, then it trivially has all finite products and equalisers, because they're limits. Therefore we need to show that if a category has all finite products and equalisers, then we can build any limit. The proof will have to start by fixing some finite category \\(\mathbf{J}\\) and considering some fixed diagram of shape \\(\mathbf{J}\\) in \\(\mathbf{C}\\). Construct the cone category. We're going to have to manufacture the limit somehow, given that we have finite products and equalisers. At this point I look in the book and it tells me that the first step is to consider the product of all the objects in the diagram. OK, that is a cone-shape - it has the right arrows. Could it be a limit? We'd need that for any other cone \\(X\\), there was a unique arrow \\(X \to \prod D_i\\) commuting with the projections. That doesn't actually hold, though: consider \\(D_1, D_2\\) as our diagram, and take \\(D_1 \times D_2\\) as the product. Then \\(D_1 \times \{ \langle x, x \rangle : x \in D_2 \}\\) doesn't have a unique arrow into \\(D_1 \times D_2\\), because we could take either the second or the third projection, so we want to equalise out by such manipulations.
Ugh, I just don't see how to do this. I'll have to look at the book again. The construction is quite complicated: we take the product over all the possible arrows (ways to get to) \\(D_j\\) from any object \\(D_i\\), and we'll equalise out the different ways to get to each object. This becomes much clearer from a diagram, where it actually looks like the only possible way to do it: basically list all the different ways to get from A to B, and equalise out by "viewing them all as being the same way".
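
The construction can be spelled out for a finite diagram in Sets (my own sketch, not Awodey's notation): take the product of the objects and cut down to the tuples on which every arrow of the diagram commutes - for the diagram \\(A \to C \leftarrow B\\) this recovers the pullback.

```python
import itertools

# A finite diagram in Sets: objects and arrows (source, target, function).
objects = {'A': [0, 1], 'B': [0, 1, 2], 'C': [0, 1]}
arrows = {'f': ('A', 'C', lambda x: x), 'g': ('B', 'C', lambda x: x % 2)}

# The limit: tuples in the product on which every arrow commutes,
# i.e. for each arrow s -> t, applying it to the s-component gives
# the t-component.  This is the product-then-equalise construction.
names = sorted(objects)
limit = []
for tup in itertools.product(*(objects[n] for n in names)):
    x = dict(zip(names, tup))
    if all(fn(x[s]) == x[t] for (s, t, fn) in arrows.values()):
        limit.append(x)

assert len(limit) == 3  # exactly the pullback of A -> C <- B
```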
![Equalisers to make limits][equalisers to make limits]
Once that's done, the rest is bookkeeping to check that we've actually made a cone, and that the cone is a limit, by showing that "cone" is precisely "thing which satisfies the equaliser diagram"; then the fact that we made a limit falls straight out of the uniqueness part of the UMP.
The final bit on "we didn't use the finiteness condition" is clear, and the dual bit is clear (though I have not much idea about what a colimit or a cocone is). Presumably we'll see some examples of colimits later, but I imagine the coequaliser and coproduct are examples.
# Summary
This section was really neat. Quite hard to understand - took a lot of time and effort to get the pullbacks idea - but the feeling of unification was great fun. Next up will be "preservation of limits" and colimits, and after that will come some exercises (which I think are sorely needed). Then the next chapter is on another kind of construction which is not a limit, and then the really meaty sections which Awodey has called "higher category theory" and which occupy a large chunk of the Part III introductory category theory course.
[equalisers to make limits]: {{< baseurl >}}images/CategoryTheorySketches/EqualisersToMakeLimits.jpg
71
hugo/content/awodey/2015-09-23-properties-of-limits.md
Normal file
@@ -0,0 +1,71 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-23T00:00:00Z"
math: true
aliases:
- /categorytheory/properties-of-limits/
- /properties-of-categorical-limits/
title: Properties of categorical limits
---

We've seen how limits are formed, and that finite limits exist iff finite products and equalisers do. Now we get to see about continuous functors and colimits, pages 105 through 114 of Awodey.
The definition of a continuous functor is obvious in hindsight given the real-valued version: it "preserves all limits", where "preserves a particular limit" means the obvious thing that limits of cones of the given shape remain limits when the functor is applied.

The example is the representable functor, taking any arrow \\(f\\) in category \\(\mathbf{C}\\) to its corresponding "apply me on the left!" arrow \\(f \circ -\\) in Sets. That arrow basically amounts to the relevant commutative triangle in \\(\mathbf{C}\\). I hope the following proof will help me understand the representable functors more clearly.

Representable functors preserve all limits: we need to preserve all products and all equalisers. Awodey shows the empty product first, which is clear: the terminal object goes to the terminal object. Then an arbitrary product \\(\prod_{i \in I} X_i\\) gets sent to \\(\text{Hom}(C, \prod_i X_i)\\), which is itself a product because \\(f: C \to \prod_i X_i\\) corresponds exactly with \\(\{ f_i: C \to X_i \mid i \in I\}\\). (Indeed, the projections give \\(f \mapsto \{ f_i \mid i \in I\}\\); conversely, the UMP of the product gives a unique \\(f\\) for the collection \\(\{ f_i \mid i \in I \}\\).)
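
The binary-product case can be sanity-checked by counting on finite sets (my own sketch; `homs` is my hypothetical helper enumerating all functions between finite sets):

```python
# Hom(C, X×Y) ≅ Hom(C, X) × Hom(C, Y) for finite sets: a counting check,
# plus a check that f ↦ (p1∘f, p2∘f) is injective (hence bijective here).
from itertools import product

C = [0, 1]
X = ['x0', 'x1', 'x2']
Y = ['y0', 'y1']

def homs(dom, cod):
    """All functions dom -> cod, encoded as tuples of values."""
    return list(product(cod, repeat=len(dom)))

XxY = list(product(X, Y))
assert len(homs(C, XxY)) == len(homs(C, X)) * len(homs(C, Y))  # 36 == 9 * 4

# The bijection: postcompose with the two projections.
split = {f: (tuple(p[0] for p in f), tuple(p[1] for p in f)) for f in homs(C, XxY)}
assert len(set(split.values())) == len(split)  # injective, so bijective by counting
```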
This has given me the intuition that "the representable functor preserves all the structure" in the sense that the diagrams will look the same before and after having done the functor.
Equalisers are the other thing to show, and that falls out of the definition in a completely impenetrable way. I can't distill that into "the representable functor preserves all the structure" so easily.
Then the definition of a contravariant functor. I've heard the terms "covariant" and "contravariant" before, several times, when people talk about tensors and general relativity and electromagnetism, but I could never understand what was meant by them. This definition is clearer: a functor which reverses input arrows with respect to the objects. Operations like \\(f \mapsto f^{-1}\\) would be contravariant, for instance.

The representable functor \\(\text{Hom}_{\mathbf{C}} ( -, C) : \mathbf{C}^{\text{op}} \to \mathbf{Sets}\\) is certainly contravariant, taking \\(A\\) to \\(\text{Hom}(A, C)\\) and an arrow \\(f: B \to A\\) to \\(f^* : \text{Hom}(A, C) \to \text{Hom}(B, C)\\) by \\((a \mapsto g(a)) \mapsto (b \mapsto g(f(b)))\\). The contravariant functor reverses the order of arrows in its argument; it takes arrows to co-arrows, so it should take colimits to co-colimits, or limits. I need to keep in mind this example, to avoid the intuition that "functors take things to things and cothings to cothings": if the functor is contravariant, it flips the co-ness of its input.
Example: a coproduct is a colimit, so \\(\text{Hom}_{\mathbf{C}} ( - , C)\\) should take the coproduct to a product. That might be why we had \\(\mathbb{P}(A+B) \cong \mathbb{P}(A) \times \mathbb{P}(B)\\) as Boolean algebras: the functor \\(\mathbb{P}\\) might be contravariant. What does it do to the arrow \\(B \to A\\)? Recall that an arrow in the category of Boolean algebras (interpreted as posets) is an order-preserving map. Huh, not contravariant after all: the \\(\mathbb{P}\\) functor seems covariant to me. There must be some other reason; [it turns out][SE] that I'm mixing up two different functors, one of which is covariant and takes sets to sets, and one of which is contravariant and takes sets to Boolean algebras.
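
The isomorphism at least checks out at the level of counting subsets (my own sketch, working with the underlying sets):

```python
# P(A+B) ≅ P(A) × P(B): a subset of a disjoint union is exactly a pair
# of subsets, one from each summand.  Checked by counting for small sets.
from itertools import chain, combinations

def powerset(s):
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

A = {('a', x) for x in {1, 2}}      # tag the elements to force disjointness
B = {('b', x) for x in {1, 2, 3}}

assert len(powerset(A | B)) == len(powerset(A)) * len(powerset(B))  # 32 == 4 * 8
```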

"The ultrafilters in a coproduct of Boolean algebras correspond to pairs of ultrafilters": recall that the functor \\(\text{Ult}: \mathbf{BA}^{\text{op}} \to \mathbf{Sets}\\) takes a Boolean algebra to its set of ultrafilters, an ultrafilter \\(U\\) corresponding to its indicator function picking out whether a given element is in the filter, and takes an arrow \\(f: B \to A\\) of Boolean algebras to the arrow \\(\text{Ult}(f): \text{Ult}(A) \to \text{Ult}(B)\\) by \\(\text{Ult}(f)(1_U) = 1_U \circ f\\); and so it is representable. (I barely remember this. I think I deferred properly thinking about representable functors until Awodey covered them properly.) At least once we've proved that, we do get "ultrafilters in the coproduct correspond to pairs of ultrafilters", by the iso in the previous paragraph.
The exponent law is much easier - it follows immediately from the same iso.
(Oh, by the way, we have that limits are unique up to unique isomorphism, because they may be formed from products and equalisers which are themselves unique up to unique isomorphism.)
Next section: colimits. The construction of the co-pullback (that is, pushout) is dual to that of the pullback: take the coproduct and then coequalise across the two sides of the square. So the coproduct of two rooted posets would be the pushout of the two "pick out the root" functions: let \\(A = \{ 0 \}\\), and \\(B, C\\) be rooted posets with roots \\(0_B, 0_C\\). Then the pushout of \\(f: A \to B\\) by \\(f(0) = 0_B\\) and \\(g: A \to C\\) by \\(g(0) = 0_C\\) is just the coproduct of the two rooted posets.

Ugh, a geometrical example next. Actually, this is fairly neat: the coproduct of two discs, but where we view two points as being the same if they are both images of the inclusion. That's just two discs glued together along their boundary circles, which is topologically the same as a sphere. In the next lower dimension, we want to take two intervals, glued together at their endpoints, making a circle.
Then the definition of a colimit, which is the obvious dual to that of a limit. I skip through to the "direct limit" idea, where the colimit is taken over a linearly ordered indexing category. I can immediately see that this might be associated with the idea of a limit in \\(\mathbb{R}\\), but I'll save that until after the worked example, which is the direct limit of groups.
The colimit setup is all pretty obvious in retrospect, but I didn't try and come up with it myself. (The exercises will show whether it really is obvious!) The colimiting object does exist because coproducts and coequalisers do, and we can construct it as the coproduct followed by a certain coequaliser - namely, the one where "following a path through the sequence, then going out to the colimit, is the same as just going straight to the colimit". That is, such that \\(p_n g_{n-1} g_{n-2} \dots g_i = p_i\\), where the \\(p_i: G_i \to L\\) are the maps into the colimit. The equivalence relation whose quotient we take, is therefore: if \\(x \in G_n, y \in G_m\\), then \\(x \sim y\\) iff there is some \\(k\\) such that if we follow along the homomorphisms starting from \\(x\\) and \\(y\\), we eventually hit a common element. (Indeed, if there existed elements \\(x, y\\) which didn't have this property, then \\(p_m g_{m-1} \dots g_n(x) \not = p_n(x)\\).) I think I've got that.
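
A concrete instance of the "eventually equal" relation (my own sketch, using the chain \\(\mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \xrightarrow{\times 2} \cdots\\), whose colimit is the dyadic rationals \\(\mathbb{Z}[1/2]\\); `push` and `eventually_equal` are my hypothetical names):

```python
# Chain Z --*2--> Z --*2--> Z --> ... ; the colimit is Z[1/2].
# (n, x) represents x / 2^n; (n, x) ~ (m, y) iff pushing both far enough
# along the chain makes them meet, i.e. x * 2^m == y * 2^n.
def push(n, x, k):              # follow the chain maps k steps from level n
    return x * (2 ** k)

def eventually_equal(n, x, m, y):
    k = max(n, m) + 1           # any common level far enough along will do:
    return push(n, x, k - n) == push(m, y, k - m)  # the maps are injective

assert eventually_equal(0, 1, 1, 2)        # 1 at level 0 ~ 2 at level 1: both are "1"
assert not eventually_equal(0, 1, 1, 1)    # 1 at level 1 represents 1/2, not 1
assert eventually_equal(2, 4, 0, 1)        # 4 at level 2 represents 4/4 = 1
```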

The operations are the obvious ones, and we've made a kind of "infinite union" of these groups, where the maps \\(u_n: G_n \to G_{\infty}\\) are the "inclusions". Universality is inherited from Sets, so as long as the limiting structure obeys the group axioms, we have indeed ended up with a colimit.

What does it mean, then, for a functor \\(F: \mathbf{C} \to \mathbf{D}\\) to "create limits of type \\(\mathbf{J}\\)"? For each diagram \\(C\\) in \\(\mathbf{C}\\) of type \\(\mathbf{J}\\), and each limiting cone on \\(F C\\) in \\(\mathbf{D}\\), there is a unique cone in \\(\mathbf{C}\\) which is sent to that limiting cone by \\(F\\), and moreover that cone is itself a limit.
In the example above, \\(F\\) is the forgetful functor Groups to Sets, \\(\mathbf{J}\\) is the ordinal category \\(\omega\\). For each diagram \\(D\\) in Sets of type \\(\omega\\), the colimit of the diagram is given by taking the coproduct of all the \\(D_i\\), and identifying \\(x_n \sim g_n(x_n)\\) (where \\(g_n: D_n \to D_{n+1}\\) is the arrow in \\(D\\) corresponding to the arrow in \\(\omega\\) from \\(n\\) to \\(n+1\\)). Then we can pull this back through the forgetful functor to obtain a corresponding cocone in Groups, and we can check that it's still a colimit. That is, \\(F\\) creates \\(\omega\\)-colimits.
Why does it create all limits? Take a diagram \\(C: \mathbf{J} \to \mathbf{Groups}\\) and limit \\(p_j: L \to U C_j\\) in \\(\mathbf{Sets}\\). Then we need a unique Groups-cone which is a limit for \\(C\\). The Set-limit can be assigned a group structure, apparently. It's obvious how to do that in the case that the limit was an ordinal - it's the same as we saw above - but in general…
I'll leave that for the moment, because I want to get on to adjoints sooner rather than later (they're apparently treated very early in the Part III course).
The idea behind the cumulative hierarchy construction is clear in the light of the \\(\omega\\) example above, and this makes it immediately obvious that each \\(V_{\alpha}\\) is transitive. The construction of the colimit is the obvious one (although I keep having to convince myself that it is indeed a colimit, rather than a limit).
What does it mean to have all colimits of type \\(\omega\\)? A diagram of \\(\omega\\)-shape is an \\(\omega\\)-chain. A colimit of that chain would compare bigger than all the elements of the chain (that's "there is an arrow \\(n \to \omega\\)" - that is, "it is a cocone"), and would have the property that if \\(n \leq x\\) for all \\(n\\) then \\(\omega \leq x\\) (that's "all other cocones have a map into the colimit"). The colimit is a "least upper bound" for the specified chain. A monotone map is called continuous if it maintains this kind of least upper bound.
Then we have a restated version of the theorem that "an order-preserving map on a complete poset has a fixed point", which I remember from Part II Logic and Sets. The proof here is very different, though. I follow it through, doing pretty natural things, until "The last step follows because the first term \\(d_0 = 0\\) of the sequence is trivial". Does it actually make a difference? If we remove the first element of the chain, I think it couldn't possibly alter anything in this case, even if the first element were not trivial, because we've already taken the quotient identifying "things which are eventually equal".
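
The proof's construction is the usual Kleene-style iteration, and it can be watched happening on a finite example (my own sketch; `h` is my hypothetical monotone, \\(\omega\\)-continuous map on the powerset of \\(\{0, \dots, 4\}\\) ordered by inclusion):

```python
# Iterate h from the bottom element and take the least upper bound of the
# chain 0 ≤ h(0) ≤ h²(0) ≤ ...  On a finite poset the chain stabilises,
# and the stable value is the least fixed point.
def h(S):
    return S | {0} | {x + 1 for x in S if x + 1 < 5}  # monotone in ⊆

S = frozenset()                  # bottom element: the empty set
chain = [S]
while h(chain[-1]) != chain[-1]:
    chain.append(h(chain[-1]))

assert all(a <= b for a, b in zip(chain, chain[1:]))  # it really is a chain
assert chain[-1] == frozenset({0, 1, 2, 3, 4})        # the least fixed point
```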
I was a little confused by the statement of the theorem. "Of course \\(h\\) has a least fixed point, because it's well-ordered" was my thought, but obviously that's nonsense because \\([0,1]\\) is not well-ordered. So there is some work to do here, although it's easy work.

The final example seems almost trivial when it's spelled out, but I would never have come up with it myself. Basically it's saying that "you need to check that your proposed colimit object actually exists, and if it doesn't, you might have to add things to your colimit until it starts existing". I don't know how common a problem this turns out to be in practice, but the dual says that we can't assume naive limits exist either.
# Summary
This was another rather difficult section. Fortunately the exercises come next, and that should help a lot. I've dropped behind a bit on my Anki deck, and need to Ankify the colimits section.
[SE]: http://math.stackexchange.com/a/1448655/259262
@@ -0,0 +1,51 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-29T00:00:00Z"
math: true
aliases:
- /categorytheory/exponentials/
- /exponentials-in-category-theory/
title: Exponentials in category theory
---

Now we come to Chapter 6 of Awodey, on exponentials, pages 119 through 128. Supposedly, this represents a kind of universal property which is not of the form "for every arrow which makes this diagram commute, that arrow factors through this one".

First, we define the currying of a function \\(f: A \times B \to C\\), producing for each \\(a \in A\\) a function \\(f(a) : B \to C\\) - that is, a function \\(f(a) \in C^B\\). That is, we view \\(f: A \to C^B\\), defining an isomorphism of hom-sets \\(\text{Hom}_{\mathbf{Sets}}(A \times B, C) \cong \text{Hom}_{\mathbf{Sets}}(A, C^B)\\).
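
On finite sets the isomorphism is just curry/uncurry (my own sketch), and the exponent arithmetic agrees with it:

```python
# Hom(A×B, C) ≅ Hom(A, C^B) on finite sets: currying and uncurrying
# are mutually inverse, and the hom-set sizes agree.
A, B, C = [0, 1], ['b0', 'b1', 'b2'], ['c0', 'c1']

def curry(f):
    return lambda a: (lambda b: f((a, b)))

def uncurry(g):
    return lambda ab: g(ab[0])(ab[1])

f = lambda ab: C[(ab[0] + B.index(ab[1])) % 2]   # some arrow A×B -> C

# Round trip: uncurry(curry(f)) agrees with f everywhere.
assert all(uncurry(curry(f))((a, b)) == f((a, b)) for a in A for b in B)
# Counting: |C|^(|A|·|B|) == (|C|^|B|)^|A|.
assert len(C) ** (len(A) * len(B)) == (len(C) ** len(B)) ** len(A)
```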
Now, we try to generalise this construction, by generalising the "currying" construct to allow for more kinds of evaluation. We just need a way to take \\(C^B \times B \to C\\) in a universal way. The resulting diagram is perhaps not something I could have come up with, but it is extremely reminiscent of the UMP of the free monoid.
The general definition of an exponential is then "a way of currying", defined in terms of "a way of evaluating". We get some terminology - the "evaluation" is the way of evaluating, and the "transpose" of an arrow is the curried form. We can also define the transpose of a curried arrow, by giving it a way of evaluating on any input; the UMP tells us that if we transpose twice, we recover the original arrow; therefore, the "curry me" operation is an isomorphism between \\(\text{Hom}_{\mathbf{C}}(A \times B, C)\\) and \\(\text{Hom}_{\mathbf{C}}(A, C^B)\\). (This is all probably very harmful, thinking of this in terms of currying, but so far I think it is helping.)
|
||||
|
||||
A category is then Cartesian closed if it has all finite products and exponentials. That is, if we can define multi-variable functions which curry. (Yes, arrows are usually not functions. This is for my beginner's intuition.)
|
||||
|
||||
Then Example 6.4, showing that the product of two posets is a poset, and defining the exponential to be the Sets-exponential but with the pointwise ordering on arrows. There is work to do to show that the evaluation is an arrow and that the transpose of an arrow is an arrow.
|
||||
|
||||
Restricting to \\(\omega\\)CPOs, we still need to show that \\(Q^P\\) is an \\(\omega\\)CPO. Indeed, given an \\(\omega\\)-chain in \\(Q^P\\), we need to find an upper bound in \\(Q^P\\). Say the chain was \\(f_1, f_2, \dots\\). Then for each \\(p\\), the chain with members \\(f_i(p)\\) has a least upper bound \\(f(p)\\). This defines an order-preserving function because if \\(p \leq q\\) then each \\(f_i(p) \leq f_i(q)\\), and weak inequalities respect the limiting operation. Therefore our prospective exponential is in fact in the category.
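
The pointwise construction can be checked mechanically on a finite truncation. A sketch in Python, where the poset is a finite chain and the \\(f_i\\) are an eventually-constant chain of monotone maps (all toy data of my own):

```python
# Pointwise least upper bound f(p) = sup_i f_i(p) of a chain of monotone maps.
P = range(4)                       # a finite chain standing in for the poset P

def make_f(i):
    # f_i(p) = min(i, p): monotone in p, and increasing as i grows
    return lambda p: min(i, p)

chain = [make_f(i) for i in range(10)]

def f(p):
    # the candidate least upper bound, taken pointwise
    return max(g(p) for g in chain)

# f is an upper bound for the chain...
assert all(g(p) <= f(p) for g in chain for p in P)
# ...and is itself monotone, by the argument in the text:
assert all(f(p) <= f(q) for p in P for q in P if p <= q)
```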

\\(\epsilon\\) needs to be \\(\omega\\)-continuous: it needs to respect least upper bounds. Let \\((f_i, p_i)\\) be an \\(\omega\\)-chain in \\(Q^P \times P\\). (I'll take it as read that products exist.) We need that evaluating the least upper bound, \\(\epsilon(f, p)\\), yields the limit of \\(\epsilon(f_i, p_i)\\). This follows from the lemma that if the LUB of \\((f_i)\\) is \\(f\\), and of \\((p_i)\\) is \\(p\\), then the least upper bound of \\((f_i, p_i)\\) is \\((f, p)\\) (which is true: it is an upper bound, while any other upper bound is bigger than it). Then \\(\epsilon(f, p) = f(p)\\) while \\(\epsilon(f_i, p_i) = f_i(p_i)\\), so we do get the result: each \\(f_i(p_i) \leq f(p)\\) because \\(f_i(p_i) \leq f(p_i) \leq f(p)\\), while any other upper bound \\(g\\) would have all \\(f_i(p_i) \leq g\\) so (fixing \\(j\\)) all \\(f_j(p_i) \leq g\\), so all \\(f_j(p) \leq g\\), so (releasing \\(j\\)) \\(f(p) \leq g\\).

Finally, the transpose of an \\(\omega\\)-continuous function needs to be \\(\omega\\)-continuous: let \\(f: A \times B \to C\\) be \\(\omega\\)-continuous. Its transpose is \\(\bar{f}: A \to C^B\\) given by \\(\epsilon \circ (\bar{f} \times 1_B) = f\\). If \\(\bar{f}\\) weren't \\(\omega\\)-continuous, there would be a witness sequence \\((a_i)\\) which had \\(\lim \bar{f}(a_i) \not = \bar{f}(\lim a_i)\\); plugging this into the definition of \\(\bar{f}\\) gives that \\((a_i)\\) is a witness against the \\(\omega\\)-continuity of \\(f\\). Contradiction.

And now for something completely different: an exponential with more structure than previously. I just check the definition of the product graph, because I don't think we had it in our Graph Theory course; it seems to be the obvious one, taking pairs of vertices and corresponding pairs of edges. Then the exponential graph. This is defined to have vertices "set-exponential of the vertices", and an edge between \\(\phi: G \to H\\), \\(\psi: G \to H\\) is an \\(e(G)\\)-indexed collection of edges in \\(H\\) which have "the source is where \\(\phi\\) takes the corresponding \\(G\\)-source" and "the target is where \\(\psi\\) takes the corresponding \\(G\\)-target". It's a way of embedding \\(G\\) into \\(H\\) along \\(\phi\\) and \\(\psi\\).

The evaluation is the obvious one given those structures, and the transpose of a map is the curried version of that map. The different thing about this system is the fact that our maps have to have two parts (one for vertices and one for edges).

"Basic facts about exponentials". The transpose of evaluation, without looking at the rest of the page: \\(\epsilon: B^A \times A \to B\\) must transpose to \\(\bar{\epsilon}: B^A \to B^A\\) with \\(\epsilon \circ (\bar{\epsilon} \times 1_{A}) = \epsilon\\). If \\(\epsilon\\) were monic, we could say that \\(\bar{\epsilon} = 1_{B^A}\\) immediately, but it's not monic. Ah, but we do have that \\(\bar{\epsilon}\\) is uniquely specified by the UMP, so it must be \\(1_{B^A}\\) after all. Maybe that'll help me remember things, if nothing else.

A proof that "exponentiation by a fixed object" is a functor: it starts in Set, which makes me worry that representable functors are going to be involved again (because we seem to be able to cast many things as Set-based things). Onwards: currying is certainly functorial in Set because application of functions is associative, and because we check that the identity curries in the right way.

In general, the definition of the exponential of an arrow \\(\beta: B \to C\\) is the obvious one: there's only one way to make an element of \\(C^A\\) given one in \\(B^A\\) and a map \\(\beta : B \to C\\), and that's to "evaluate at \\(a\\), then do \\(\beta\\)". This method does keep the identity map as an identity: \\(1: B \to B\\) causes \\(f: A \to B\\) to become \\(f: A \to B\\), of course. It respects composition by just writing a couple of lines of symbol-manipulation.
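
That symbol-manipulation can be brute-forced on small finite sets, modelling functions as dicts. A sketch with toy sets of my own:

```python
# Check that "post-compose with beta" (the action of (-)^A on arrows) sends
# identities to identities and respects composition, on small finite sets.
from itertools import product

A = [0, 1]
B = ["x", "y"]
C = [10, 20]
D = [True, False]

def all_functions(dom, cod):
    # every function dom -> cod, represented as a dict
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def exp(beta, dom):
    # the action of (-)^A on an arrow beta: it sends f to beta . f
    return lambda f: {a: beta[f[a]] for a in dom}

beta = {"x": 10, "y": 20}                     # beta : B -> C
gamma = {10: True, 20: False}                 # gamma : C -> D
gamma_beta = {b: gamma[beta[b]] for b in B}   # gamma . beta : B -> D

for f in all_functions(A, B):                 # every f in B^A
    # (gamma . beta)^A agrees with gamma^A . beta^A:
    assert exp(gamma_beta, A)(f) == exp(gamma, A)(exp(beta, A)(f))
    # the identity on B is sent to the identity on B^A:
    assert exp({b: b for b in B}, A)(f) == f
```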

Finally, the transpose of \\(1_{A \times B}\\), which is a map \\(\eta: A \to (A \times B)^B\\). This takes a value \\(a\\) and returns a function \\(b \mapsto (a, b)\\). Then some symbol shunting gives \\(\bar{f} = f^B \circ \eta\\).
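
The shunting can be written out. A sketch, writing \\(f^B\\) for the image of \\(f: A \times B \to C\\) under the functor \\((-)^B\\), whose action on arrows satisfies \\(\epsilon \circ (f^B \times 1_B) = f \circ \epsilon\\) by definition:

```latex
\begin{align*}
\epsilon \circ ((f^B \circ \eta) \times 1_B)
  &= \epsilon \circ (f^B \times 1_B) \circ (\eta \times 1_B) \\
  &= f \circ \epsilon \circ (\eta \times 1_B) \\
  &= f \circ 1_{A \times B} = f.
\end{align*}
```

The middle step is that definition, and the last uses \\(\epsilon \circ (\eta \times 1_B) = 1_{A \times B}\\), the defining property of \\(\eta\\) as the transpose of the identity; uniqueness in the UMP then forces the transpose of \\(f\\) to be \\(f^B \circ \eta\\).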

![Calculating the exponential][exponential]

# Summary

This section is the one I've thought most concretely about so far. That's probably something I'll have to unlearn. It's useful already being familiar with currying; this chapter would have been a lot harder without already having that intuition.

[exponential]: {{< baseurl >}}images/CategoryTheorySketches/ExponentialEvaluation.jpg
83
hugo/content/awodey/2015-09-29-limit-exercises.md
Normal file
@@ -0,0 +1,83 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-29T00:00:00Z"
math: true
aliases:
- /categorytheory/limit-exercises/
- /limit-exercises/
title: Limits exercises
---

These are located on pages 114 through 118 of Awodey.

Exercise 1 follows by just drawing out the diagrams for the product and the pullback: they end up being the same diagram and the same UMP.

Exercise 2 a): \\(m\\) is monic iff \\(mx = my \Rightarrow x=y\\); the diagram is a pullback iff for all \\(x: A \to M\\) and \\(y: A \to M\\) with \\(m x = m y\\), there is a unique \\(z: A \to M\\) such that \\(x = z = y\\) - which is exactly the statement that \\(x = y\\).

Exercise 2 b): We can draw the cube line by line, checking that each pullback arrow exists, and ending up with a diagram.

![Pullback cube][pullback]

We still need the pullback of the pullback square to be a pullback square. If we can prove that \\(P\\) forms a pullback of \\(f \circ f^{-1}(\alpha), \beta\\) then we're done by the two-pullbacks lemma using the square with downward-arrow \\(f^{-1}(\beta): f^{-1}(B) \to Y\\). But it is: if we pull back the "diagonal square" \\(A \times_X B \to X\\) and \\(f\\), then we do get \\(P\\), and so all the commutative properties hold.

Exercise 2 c): this follows by drawing out the diagram. We pull back the "\\(m\\) is monic" square along \\(f\\) to obtain the "\\(m'\\) is monic" square; this is a pullback because of the "\\(f, m\\) pull back to \\(m'\\)" square.

Exercise 3: Let \\(x', y': R \to M'\\) with \\(m' x' = m' y'\\). Then \\(f m' x' = f m' y'\\); while labelling the unlabelled arrow in Awodey's diagram \\(\alpha\\), have \\(m \alpha x' = m \alpha y'\\) because the diagram commutes. But by monicness of \\(m\\), have \\(\alpha x' = \alpha y'\\). By the UMP of the pullback, there is a unique arrow \\(r: R \to M'\\) such that \\(\alpha r = \alpha x'\\) and \\(m' r = m' x'\\), and so \\(r=x'\\). Likewise \\(r=y'\\) (since \\(\alpha y' = \alpha x'\\) and \\(m' x' = m' y'\\)). Hence \\(x'=y'\\).

Exercise 4: One direction is easy. Suppose \\(z \in_A M \Rightarrow z \in_A N\\). Let \\(z = m: M \to A\\). Then \\(M \in_A N\\) so \\(M \subseteq N\\).

Conversely, suppose \\(M \subseteq N\\) by means of \\(f: M \to N\\), and \\(z: Z \to A\\) gives \\(Z \in_A M\\). Then \\(z\\) lifts to \\(fz: Z \to N\\), and the entire diagram commutes as required.

Exercise 5 is apparently a duplicate of Exercise 4.

Exercise 6 is very similar in shape to some things we've already proved. Let \\(z: Z \to A\\) be such that \\(fz = gz\\). We need to find \\(\bar{z}: Z \to E\\) such that \\(e \bar{z} = z\\). Since \\(fz = gz\\), the arrow \\(Z \to B \times B\\) by \\(\langle f, g \rangle \circ z\\) is equal to the arrow \\(Z \to B \times B\\) given by \\(\langle 1_B, 1_B \rangle \circ f \circ z\\); so by the UMP of the pullback, there is \\(\bar{z}: Z \to E\\) with \\(e\bar{z} = z\\). That's all we needed.

Exercise 7: we need to show that \\(\text{Hom}_{\mathbf{C}}(C, L)\\) is a limit for \\(\text{Hom}_{\mathbf{C}}(C, \cdot) \circ D = \text{Hom}_{\mathbf{C}}(C, D): \mathbf{J} \to \mathbf{Sets}\\). Equivalently, we need to show that the representable functor preserves products and equalisers, so let \\(p_1: P \to A, p_2: P \to B\\) be a product in \\(\mathbf{C}\\). I claim that \\(p_1' : \text{Hom}_{\mathbf{C}}(C, P) \to \text{Hom}_{\mathbf{C}}(C, A)\\) by \\(p_1': f \mapsto p_1 f\\), and likewise \\(p_2': f \mapsto p_2 f\\), form a product. Indeed, let \\(x_1: X \to \text{Hom}_{\mathbf{C}}(C, A)\\) and \\(x_2: X \to \text{Hom}_{\mathbf{C}}(C, B)\\). Then \\(\langle x_1(z), x_2(z) \rangle\\) is of the form \\(\langle C \to A, C \to B \rangle\\) for all \\(z \in X\\), so there is a unique corresponding \\(C \to P\\) for each \\(z \in X\\). This therefore constructs a product.
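
The products half can be confirmed by brute force on small finite sets. A sketch, with toy data of my own:

```python
# Check that f |-> (p1 . f, p2 . f) is a bijection
# Hom(C, A x B) -> Hom(C, A) x Hom(C, B) for small finite sets.
from itertools import product

C = [0, 1]
A = ["a", "b"]
B = [True, False]
P = list(product(A, B))            # the product A x B in Sets

def hom(dom, cod):
    # every function dom -> cod, as a dict
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

pairs = {(tuple((c, f[c][0]) for c in C),     # p1 . f
          tuple((c, f[c][1]) for c in C))     # p2 . f
         for f in hom(C, P)}
# Distinct f give distinct pairs, and the counts match, so it is a bijection:
assert len(pairs) == len(hom(C, P)) == len(hom(C, A)) * len(hom(C, B))
```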

Now the equalisers part. Let \\(e: E \to A\\) equalise \\(f, g: A \to B\\), and write \\(f^*, g^*\\) for the images of \\(f, g\\) under the representable functor. Let \\(x: X \to \text{Hom}_{\mathbf{C}}(C, A)\\) be such that \\(f^* x = g^* x\\). We need to lift \\(x\\) over \\(e^*\\). For each \\(z \in X\\), we have \\(x(z): C \to A\\) an arrow in \\(\mathbf{C}\\); this has \\(f \circ x(z) = g \circ x(z)\\), so \\(x(z)\\) lifts to unique \\(\overline{x(z)}: C \to E\\). This specifies a unique morphism \\(X \to \text{Hom}_{\mathbf{C}}(C, E)\\) as required.

Exercise 8: It seems intuitive that partial maps should define a category. However, let's go for it. There is an identity arrow - namely, the pair \\((\vert id_A \vert, A)\\). This does behave as the identity, because the pullback of the identity with anything gives that anything. The composition of arrows is evidently an arrow (because the composition of monos is monic). We just need associativity of composition, which comes out of drawing the diagrams of what happens when we do the triple composition in the two available ways. We can complete each of the two diagrams using the two pullbacks lemma, as in the picture.

![Partial maps associative][partial]
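
In \\(\mathbf{Sets}\\), the composite of partial maps can also be written down concretely: a partial map is a dict defined on part of its domain, and composition restricts along the pullback. A small sketch with toy data of my own, checking associativity on one triple:

```python
# Partial maps in Sets as dicts; composition is defined exactly where the
# first map is defined and lands inside the domain of the second.
def compose(g, f):
    # the pullback in action: restrict f to the preimage of dom(g)
    return {a: g[f[a]] for a in f if f[a] in g}

f = {0: "x", 1: "y"}              # partial map out of A = {0, 1, 2}
g = {"x": 10}                     # partial map out of B, undefined at "y"
h = {10: True, 20: False}         # partial map out of C

# Associativity on this triple: both bracketings give the same partial map.
assert compose(h, compose(g, f)) == compose(compose(h, g), f) == {0: True}
```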

The map \\(\mathbf{C} \to \mathbf{Par}(\mathbf{C})\\) given by \\((f: A \to B) \mapsto (\vert f \vert, A)\\) is a functor: it respects the identity arrow by inspection, while composition is respected by just looking at the diagram. It is clearly the identity on objects, by definition of the partial-maps category.

![Partial maps functor is a functor][Partial maps functor]

Exercise 9: Diagrams is a category: identity arrows are just identity arrows from the parent category; the composition of commutative squares is itself a commutative square (well, rectangle); composing with the identity arrow doesn't change anything. Taking the vertex objects of limits does determine a functor: it takes the identity arrows to identity arrows because taking a diagram to itself means taking its unique limit vertex to itself. It respects domains/codomains, because… well, it just does: if \\(f: D_1 \to D_2\\) in Diagrams, then \\(\lim f\\) is uniquely specified to go from limit-vertex 1 to limit-vertex 2. (By the way, the intuition for what an arrow in this category is, is the placing of one diagram above another with linking arrows between the objects.) Better justification: there is a unique morphism between the limit vertices, because we can use the arrow to determine a collection of morphisms from one limit vertex to the other making \\(D_1\\) into a cone for \\(D_2\\).

The last part follows because \\(\mathbf{Diagrams}(I, \mathbf{Sets})\\) is isomorphic to \\(\mathbf{Sets}^I\\). Sets has all limits, so the theorem holds, and hence there is a product functor. This seems a little nonrigorous, but I can't put my finger on why.

Exercise 10: we've already seen this. I'll state it anyway. The copullback of arrows \\(f: A \to B\\) and \\(g: A \to C\\) is the universal \\(P\\) and arrows \\(p_1: B \to P, p_2: C \to P\\) such that for any \\(b: B \to Z, c: C \to Z\\) with \\(cg = bf\\), there is a unique \\(p: P \to Z\\) with \\(p p_1 = b, p p_2 = c\\), as in the diagram.

![Definition of a pushout][pushout]

The construction of a pushout with coequalisers and coproducts is done by taking the coproduct of \\(B\\) and \\(C\\), and coequalising the two sides of the square.
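
That construction can be carried out concretely in \\(\mathbf{Sets}\\). A sketch with toy data of my own: the coproduct is a tagged disjoint union, and the coequaliser is the quotient identifying the two images of each element of \\(A\\):

```python
# Pushout in Sets: disjoint union B + C, quotiented so that ("B", f(a))
# and ("C", g(a)) are identified for every a in A.
def pushout(A, B, C, f, g):
    elems = [("B", b) for b in B] + [("C", c) for c in C]
    parent = {e: e for e in elems}      # naive union-find, no compression

    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e

    for a in A:                         # coequalise the two composites
        parent[find(("B", f[a]))] = find(("C", g[a]))

    classes = {}                        # the pushout object: the classes
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return classes

A = {0}
f = {0: "b0"}                           # f : A -> B
g = {0: "c0"}                           # g : A -> C
P = pushout(A, {"b0", "b1"}, {"c0"}, f, g)
# b0 and c0 get glued together; b1 is untouched, so two classes remain:
assert len(P) == 2
```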

Exercise 11: To show that the diagram is an equaliser, we need to show that any \\(z: Z \to \mathbb{P}(X)\\), which causes the two \\(\mathbb{P}(r_i): \mathbb{P}(X) \to \mathbb{P}(R)\\) to be equal, factors through \\(\mathbb{P}(q): \mathbb{P}(Q) \to \mathbb{P}(X)\\). Any \\(z: Z \to \mathbb{P}(X)\\) is a selection of subsets of \\(X\\) for each element of \\(Z\\); the condition that it equalises \\(\mathbb{P}(r_1), \mathbb{P}(r_2)\\) is exactly the same as saying that if we take the \\(r_1\\)-inverse image and the \\(r_2\\)-inverse image of the result, then we get the same subset of \\(R\\). Can we make it assign an indicator function on \\(Q\\)? We're going to have to prove that \\(z: Z \to \mathbb{P}(X)\\) maps only into unions of equivalence classes, and then the map will descend.

OK, we have "for each element of \\(Z\\), we pick out a subset of \\(X\\) which has the property that finding everything which that subset twiddles on the left, we get the same set as everything which that subset twiddles on the right". Suppose an element \\(a\\) is in the image of \\(z \in Z\\). Then we must have the entire equivalence class of \\(a\\) in the image set, because \\(\mathbb{P}(r_1)(\{ a \}) = \{ (a, x) \mid x \sim a \}\\) but \\(\mathbb{P}(r_2)(\{ a \}) = \{ (x, a) \mid x \sim a \}\\). These can't be equal unless the only thing in the equivalence class is \\(a\\). The reasoning generalises for when more than one thing is in the image set, by taking appropriate unions. Therefore the map does descend.
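
The descent argument can be checked exhaustively on a small example: a subset equalises the two inverse-image maps precisely when it is a union of equivalence classes. A sketch with toy data of my own:

```python
# For every subset S of X, compare "S has equal inverse images under the two
# projections r1, r2 : R -> X" with "S is a union of equivalence classes".
from itertools import chain, combinations

X = {0, 1, 2, 3}
# an equivalence relation whose classes are {0, 1} and {2, 3}
R = {(a, b) for a in X for b in X if (a < 2) == (b < 2)}

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

for S in subsets(X):
    equalises = ({p for p in R if p[0] in S} == {p for p in R if p[1] in S})
    union_of_classes = all((a in S) == (b in S) for (a, b) in R)
    assert equalises == union_of_classes
```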

Exercise 12: the limit is such that for any cone, there is a unique way to factor the cone through the limit. What is a cone? It's a way of identifying a subshape of every element of the sequence, such that all other subshapes also appear in this limit subshape. But the only shape in \\([0]\\) is \\([0]\\), so the limit must be isomorphic to \\([0]\\).

The colimit must be \\(\omega\\). Indeed, a cocone is precisely an identification of a subset which contains an \\(\omega\\)-wellordered subset, and the colimit is the smallest \\(\omega\\)-well-ordered subset.

Exercise 13 a): The limit of \\(M_0 \to M_1 \to \dots\\) is just \\(M_0\\) - same reasoning as Exercise 12 - so it's an abelian group. It seems like the colimit should also be abelian. Let \\(C\\) be the colimit, and let \\(x, y \in C\\). I claim that there is some \\(n\\) such that \\(x, y \in M_n\\), whence we're done because \\(M_n\\) is abelian. (Strictly, I claim that there is \\(n\\) and \\(\alpha, \beta\\) such that \\(i_n(\alpha) = x, i_n(\beta) = y\\), where \\(i_n\\) is the inclusion.) It's enough to show that there is \\(m\\) and \\(n\\) such that \\(x \in M_m, y \in M_n\\), because then the maximum of \\(m, n\\) would do. If there weren't such an \\(m\\) for \\(x\\), we could take the cocone \\(C \setminus \{ x \}\\), and this would fail to factor through \\(C\\).

I then had a clever but sadly bogus idea: the second diagram is the same as the first but in the opposite category. Therefore by duality, we have that colimits <-> limits, so the limits and colimits are indeed abelian. This is bogus because the opposite category of Monoids is not Monoids, so we're not working in the right category any more.

Let's go back to the beginning. The colimit is \\(N_0\\) by the same reasoning that made the limit of the \\(M_i\\) sequence be \\(M_0\\). That means it's an abelian group. Taking the limit of \\(N_0 \gets N_1 \gets \dots\\): our limit is a shape \\(L\\) which is in \\(N_0\\), which is itself an image of \\(N_1\\), which… This is a kind of generalised intersection, and the (infinite) intersection of abelian groups is an abelian group, so the intuition should be that the limit is also an abelian group.

Someone on Stack Exchange [gave a cunning way to continue][SE], considering involutions \\(x \mapsto x^{-1}\\). I don't know if I'd ever have come up with that.

Exercise 13 b): now they are all finite groups. The limit of the \\(M_i\\) is \\(M_0\\), so this certainly has the "all elements have orders" property. The colimit of the \\(N_i\\) is \\(N_0\\), so likewise. The colimit \\(M\\) of the \\(M_i\\): every element \\(x\\) appears in some \\(M_x\\) (and all later ones) as above, and it must have an order in those groups, so it has an order in \\(M\\) too (indeed, each \\(M_i\\) is a subgroup of \\(M\\)). The limit of the \\(N_i\\): what about \\(C_2 \gets C_2^2 \gets C_2^3 \gets \dots\\), each arrow being the quotient by the first coordinate? No, the limit of that is \\(C_2^{\mathbb{N}}\\) in which every element has order 2. If we use \\(C_{n!}\\) instead? Ugh, I'm confused. I'll leave this for the moment and try to press on. If it becomes vital to understand limits in great detail in the time left before my course starts, I'll come back to this.

[pullback]: /images/CategoryTheorySketches/PullbackCube.jpg
[Partial maps functor]: /images/CategoryTheorySketches/PartialMapsFunctor.jpg
[partial]: /images/CategoryTheorySketches/PartialMapAssociative.jpg
[pushout]: /images/CategoryTheorySketches/PushoutDefinition.jpg
[SE]: https://math.stackexchange.com/a/1454266/259262
49
hugo/content/awodey/2015-09-30-heyting-algebras.md
Normal file
@@ -0,0 +1,49 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-09-30T00:00:00Z"
math: true
aliases:
- /categorytheory/heyting-algebras/
- /heyting-algebras/
title: Heyting algebras
---

Now that we've had the definition of an exponential, we move on to the Heyting algebra, pages 129 through 131 of Awodey. This is still in the "exponentials" chapter. I stop shortly after the definition of a Heyting algebra, so as to move on to the more general stuff which is more relevant to the Part III course.

The first thing to come is the definition of an exponential \\(b^a\\) in a Boolean algebra \\(B\\) (regarded as a poset category). Without looking at the definition, I draw out a picture. We need to find \\(c^b\\) and \\(\epsilon: c^b \times b \to c\\) such that for all \\(f: a \times b \to c\\) there is \\(\bar{f}: a \to c^b\\) unique with \\(\epsilon \circ (\bar{f} \times 1_b) = f\\).

The first thing to note is that arrows are already unique if they exist, because we are in a poset category, so we don't have to worry about uniqueness of \\(\bar{f}\\). Then note that \\(f: a \times b \to c\\) is nothing more nor less than the statement that \\(a \times b \leq c\\) - that is, that the greatest lower bound of \\(a\\) and \\(b\\) is \\(\leq c\\), or that \\(c\\) is not a lower bound for both \\(a\\) and \\(b\\) simultaneously (assuming \\(a \times b \not = c\\)). The definition of \\(\bar{f}\\) is precisely the statement that \\(a \leq c^b\\), and \\(\epsilon\\) says precisely that the GLB of \\(c^b\\) and \\(b\\) is \\(\leq c\\).

In order to piece this together, we're going to want to know what the product of two arrows looks like. We're in a poset category, so it comes from "propagating the two arrows downwards until they hit a common basepoint, and taking that arrow": it is the arrow between the GLB of the domains and the GLB of the codomains. Therefore the product arrow \\(\bar{f} \times 1_b\\) is the arrow between the GLB of \\(a, b\\) and the GLB of \\(c^b, b\\).

![Product of arrows][arrow product]

Therefore the following picture is justified.

![Exponential in boolean category][exponential]

What could \\(c^b\\) be? If we let \\(f\\) be the arrow \\(\text{GLB}(c^b, b) \to c^b\\), then \\(\bar{f} = f\\), and \\(\bar{f} \times 1_b\\) is the identity arrow on that GLB. I don't know if this is helping, and I'm forced to look at the book.

The book gives \\(c^b\\) as \\((\neg b \vee c)\\), the LUB of \\(\neg b\\) and \\(c\\). This certainly does have an appropriate evaluation arrow and it is an exponential (having worked through the lines in the book), but I really don't see how one could have come up with that.
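
The formula can at least be verified mechanically in a power-set Boolean algebra. A sketch in Python, with a toy underlying set of my own:

```python
# In the power-set Boolean algebra on U, check that x <= (not-b join c)
# holds exactly when (x meet b) <= c, for every b, c, x: that is the UMP
# of the exponential c^b in a poset category.
from itertools import chain, combinations

U = {0, 1, 2}

def subsets(s):
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

for b in subsets(U):
    for c in subsets(U):
        exponential = (U - b) | c      # the book's candidate, not-b or c
        for x in subsets(U):
            # <= on sets is inclusion; & is the meet
            assert (x <= exponential) == ((x & b) <= c)
```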

A Heyting algebra has finite intersections, unions and exponentials (where \\(a \Rightarrow b\\) is defined such that \\(x \leq (a \Rightarrow b)\\) iff \\((x \wedge a) \leq b\\)). What does this exponential really mean? In a Boolean algebra, it's an object which has as its subsets precisely those things which intersect with \\(a\\) to give a subset of \\(b\\). I can draw that in terms of a Venn diagram.

The distributive property holds, as I write out myself given the first line.

Now the definition of a complete poset (which I already know as "all subsets have a least upper bound"). Why is completeness equivalent to cocompleteness? In a Boolean algebra, this is easy because "join" is "complement-meet-complement". Actually, I'm now a bit confused: \\(\omega\\), the first infinite well-ordering, is not complete as a poset, but it certainly looks cocomplete. I check the definition of "complete" again to see if I'm going mad, and I see that it's "all limits exist", not just "\\(\omega\\)-limits exist". But then why does the book say "a poset is complete if it is so as a category - that is, if it has all set-indexed meets"? OK, \\(\omega\\) has a meet - namely \\(0\\) - but for it to have a join, we need \\(a \in \omega\\) such that for any \\(c \in \omega\\), all elements of \\(\omega\\) are \\(\leq c\\) iff \\(a \leq c\\). Since \\(c+1 \not \leq c\\), we must have \\(a \not \leq c\\): that is, \\(a\\) is bigger than all members of \\(\omega\\). Therefore \\(\omega \subseteq \omega\\) doesn't have a join. Can we find a corresponding subset of \\(\omega\\) without a meet? No: the meet of any subset of a well-ordered set is just the least element. I'm horribly confused, so I've asked on [Stack Exchange]; the reply came that the corresponding meetless subset is \\(\emptyset\\), which I forgot to consider.

OK, let's try again. Suppose our poset has a meetless subset \\((a_i)\\) - that is, one which doesn't have a greatest lower bound. Remember, our poset might not have a terminal object, so actually we might have to change this into a proof by contradiction rather than contrapositive: let's assume all subsets have joins, so in particular there is a terminal object (the empty join). I would love to say "Then the corresponding complement of \\(\{ a_i \}\\) has no join, because its least upper bound is a greatest lower bound for \\(\{ a_i \}\\)", but \\(\{ 1 \} \subset \omega\\) has \\(1\\) as its LUB, but its complement has \\(0\\) as its GLB. However, what I could say is "Let \\(\{ b_i \}\\) be the set of elements which are less than every element of \\(a_i\\). This doesn't have a least upper bound, because that would be a GLB of \\(a_i\\)." That's better.

The power set algebra is certainly a complete Heyting algebra, as I mentioned above with the Venn diagram, or by Awodey's reasoning with the distributive law. The statement that Heyting algebras correspond to intuitionistic propositional calculus (where excluded middle may not apply) is pretty neat, but I'm afraid I'm still a bit lost.

The next section is on propositional calculus, where Awodey provides a set of axioms for intuitionistic logic.

At this point, I was told that exponentials don't really turn up in the Part III course, and since my aim here is to get an advantage in terms of the course, I'm skipping to the next chapter.

[arrow product]: {{< baseurl >}}images/CategoryTheorySketches/ArrowProduct.jpg
[exponential]: {{< baseurl >}}images/CategoryTheorySketches/ExponentialInBooleanAlgebra.jpg
[Stack Exchange]: http://math.stackexchange.com/q/1459373/259262