diff --git a/README.md b/README.md
index 06c57bb..7b9ed94 100644
--- a/README.md
+++ b/README.md
@@ -7,33 +7,13 @@ Every part of it is terrible.
Some of it is unmaintained.
You have been warned.
-## Checking out
-
-The builder is intended to `git subtree` to separate the mechanics of building from the content to be built, though you could simply inline your own content into the repo at the specified locations, or use submodules if you don't care about Nix support.
-
-The subtree checkout is automated using `./checkout.sh`, which you should update to point to your own content.
-After `./checkout.sh` has executed, you should be on a new branch called `working`, in which those remote repos have been read in appropriately as subtrees.
-When you want to commit changes to this repo, you need to cherry-pick them out onto `main` or similar, so as not to introduce the subtrees to the history of `main`; the `working` branch is created so that it's harder to get confused this way.
-At some point, I may write helper scripts to make the workflows sane.
-
## How to use
-I currently maintain two distinct build pipelines that should do the same thing.
-I'm in the process of standardising them as much as possible so that the choice boils down to "use Docker or use Nix to get my dependencies in place", with all the actual scripts shared.
+`nix build`.
-* A Nix flake (`./flake.nix`). Invoke using a plain old `nix build` to get the rendered site symlinked to `./result`.
-* A shell script (`./build.sh`) which runs a collection of pipelines in Docker images. Invoke using `sh ./build.sh` to get the rendered site in `./public`.
+### The `pdfs` flake
-The repository is intended to contain subtrees (see "Checking out" above) which refer to example content:
-
-* `hugo`, which refers to a [Hugo](https://github.com/gohugoio/hugo) static site directory, no tweaks required.
-* `pdfs`, which must contain a collection of TeX files and a text file `pdf-targets.txt`.
-* `images`, which must contain a collection of folders containing image files, and a text file `image-targets.txt`.
-* `meta`, which contains some amount of miscellaneous metadata.
-
-### The `pdfs` folder
-
-The `pdfs` folder is expected to contain a structure such as the following:
+The `pdfs` flake is expected to output a structure such as the following:
```
file1.tex
@@ -60,9 +40,9 @@ static/Quux/file2.tex
static/Quux/file2.pdf
```
-### The `images` folder
+### The `images` flake
-The `images` folder is expected to contain a structure such as the following:
+The `images` flake is expected to output a structure such as the following:
```
FolderName/image1.jpg
@@ -107,3 +87,9 @@ However, in the immediate future I intend adding support for the following:
There is a work-in-progress linting script, which is not currently included in the Nix build.
It is intended to be run after `./build.sh`, and it runs a number of checks on the rendered output, such as ensuring that all HTML is syntactically valid.
+
+## License
+
+Code from the Anatole theme is MIT-licensed, and a copy of the licence sits alongside it.
+The content of this website does not yet have a licence, because I haven't thought that far ahead: all rights reserved, you can `git clone` the repository from GitHub, but nothing else.
+Contact me if you want to use it for some reason.
diff --git a/flake.lock b/flake.lock
index 32e7208..b6cca1d 100644
--- a/flake.lock
+++ b/flake.lock
@@ -42,22 +42,6 @@
"type": "github"
}
},
- "content-source": {
- "flake": false,
- "locked": {
- "lastModified": 1696092055,
- "narHash": "sha256-CmQ0pcr0yiDQypcvFJ1jCviGgZCJ4Zw0t1JooX5LshM=",
- "owner": "Smaug123",
- "repo": "static-site-content",
- "rev": "56b120fccfea31c0f761b4ebd68aa9af7d8d40e2",
- "type": "github"
- },
- "original": {
- "owner": "Smaug123",
- "repo": "static-site-content",
- "type": "github"
- }
- },
"extra-content": {
"flake": false,
"locked": {
@@ -247,7 +231,6 @@
"root": {
"inputs": {
"anki-decks": "anki-decks",
- "content-source": "content-source",
"extra-content": "extra-content",
"flake-utils": "flake-utils_2",
"images": "images",
diff --git a/flake.nix b/flake.nix
index 0b239f1..d1c2b1e 100644
--- a/flake.nix
+++ b/flake.nix
@@ -25,10 +25,6 @@
url = "github:Smaug123/anki-decks";
inputs.flake-utils.follows = "flake-utils";
};
- content-source = {
- url = "github:Smaug123/static-site-content";
- flake = false;
- };
};
outputs = {
@@ -39,7 +35,6 @@
images,
pdfs,
anki-decks,
- content-source,
extra-content,
scripts,
}:
@@ -91,7 +86,7 @@
pname = "patrickstevens.co.uk";
version = "0.1.0";
- src = content-source;
+ src = ./hugo;
buildInputs = [
pkgs.hugo
diff --git a/hugo/.gitignore b/hugo/.gitignore
new file mode 100644
index 0000000..46a2051
--- /dev/null
+++ b/hugo/.gitignore
@@ -0,0 +1,13 @@
+/result
+public/
+.ionide/
+static/images/galleries
+images/**/*-thumb.jpg
+
+static/misc/**/*.tex
+static/misc/**/*.pdf
+
+.DS_Store
+
+.idea/
+.hugo_build.lock
diff --git a/hugo/assets/css/fontawesome.css b/hugo/assets/css/fontawesome.css
new file mode 100644
index 0000000..b24f753
--- /dev/null
+++ b/hugo/assets/css/fontawesome.css
@@ -0,0 +1,74 @@
+.fa,.fab,.fad,.fal,.far,.fas {
+ -moz-osx-font-smoothing: grayscale;
+ -webkit-font-smoothing: antialiased;
+ display: inline-block;
+ font-style: normal;
+ font-variant: normal;
+ text-rendering: auto;
+ line-height: 1
+}
+.fa-2x {
+ font-size: 2em
+}
+
+.fa-github:before {
+ content: "\f09b"
+}
+
+.fa-stack-exchange:before {
+ content: "\f18d"
+}
+
+.fa-envelope:before {
+ content: "\f0e0"
+}
+
+.fa-linkedin:before {
+ content: "\f08c"
+}
+
+.fa-calendar-day:before {
+ content: "\f783"
+}
+
+@font-face {
+ font-family:"Font Awesome 5 Brands";
+ font-style:normal;
+ font-weight:400;
+ font-display:block;
+ src:url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.eot);
+ src:url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.woff2) format("woff2"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.woff) format("woff"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.ttf) format("truetype"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-brands-400.svg#fontawesome) format("svg")
+ }
+
+@font-face {
+ font-family: "Font Awesome 5 Free";
+ font-style: normal;
+ font-weight: 900;
+ font-display: block;
+ src: url('https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.eot');
+ src: url('https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.eot?#iefix') format("embedded-opentype"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.woff2) format("woff2"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.woff) format("woff"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.ttf) format("truetype"),
+ url(https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.1/webfonts/fa-solid-900.svg#fontawesome) format("svg")
+}
+
+.fab {
+ font-family: "Font Awesome 5 Brands"
+}
+
+.fab,.far {
+ font-weight: 400
+}
+
+.fa,.far,.fas {
+ font-family: "Font Awesome 5 Free"
+}
+
+.fa,.fas {
+ font-weight: 900
+}
diff --git a/hugo/assets/css/sidenote.css b/hugo/assets/css/sidenote.css
new file mode 100644
index 0000000..0a37dc0
--- /dev/null
+++ b/hugo/assets/css/sidenote.css
@@ -0,0 +1,62 @@
+span .sidenote {
+ display: inline;
+}
+
+.sidenote:hover .sidenote-label {
+ background-color: #36e281;
+ color:#fff
+}
+
+.sidenote:hover .sidenote-content {
+ border: .2rem dashed;
+ padding: .875rem;
+ border-color:#36e281
+}
+
+.sidenote-label {
+ border-bottom:.2rem dashed #36e281
+}
+
+.sidenote-checkbox {
+ display:none
+}
+
+.sidenote-content {
+ display: block;
+ position: absolute;
+ box-sizing: border-box;
+ border: .075rem solid #bfbfbf;
+ border-radius: .2rem;
+ margin-top: -1.5rem;
+ padding: 1rem;
+ text-align:left
+}
+
+.sidenote-content.sidenote-right {
+ right: 0;
+ width: 16%;
+}
+
+@media screen and (max-width: 78.5rem) {
+ .sidenote-content.sidenote-right {
+ display:none
+ }
+}
+
+.sidenote-delimiter {
+ display:none
+}
+
+@media screen and (max-width: 78.5rem) {
+ .sidenote-content.sidenote-right {
+ position: static;
+ margin-top: 1rem;
+ margin-bottom: 1rem;
+ width: 100%;
+ margin-right:0
+ }
+
+ .sidenote-checkbox:checked ~ .sidenote-content.sidenote-right {
+ display: block
+ }
+}
\ No newline at end of file
diff --git a/hugo/assets/css/syntax.css b/hugo/assets/css/syntax.css
new file mode 100644
index 0000000..350286e
--- /dev/null
+++ b/hugo/assets/css/syntax.css
@@ -0,0 +1,59 @@
+/* Background */ .chroma { color: #f8f8f2; background-color: #272822 }
+/* Error */ .chroma .err { color: #960050; background-color: #1e0010 }
+/* LineTableTD */ .chroma .lntd { vertical-align: top; padding: 0; margin: 0; border: 0; }
+/* LineTable */ .chroma .lntable { border-spacing: 0; padding: 0; margin: 0; border: 0; width: auto; overflow: auto; display: block; }
+/* LineHighlight */ .chroma .hl { display: block; width: 100%;background-color: #ffffcc }
+/* LineNumbersTable */ .chroma .lnt { margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f }
+/* LineNumbers */ .chroma .ln { margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f }
+/* Keyword */ .chroma .k { color: #66d9ef }
+/* KeywordConstant */ .chroma .kc { color: #66d9ef }
+/* KeywordDeclaration */ .chroma .kd { color: #66d9ef }
+/* KeywordNamespace */ .chroma .kn { color: #f92672 }
+/* KeywordPseudo */ .chroma .kp { color: #66d9ef }
+/* KeywordReserved */ .chroma .kr { color: #66d9ef }
+/* KeywordType */ .chroma .kt { color: #66d9ef }
+/* NameAttribute */ .chroma .na { color: #a6e22e }
+/* NameClass */ .chroma .nc { color: #a6e22e }
+/* NameConstant */ .chroma .no { color: #66d9ef }
+/* NameDecorator */ .chroma .nd { color: #a6e22e }
+/* NameException */ .chroma .ne { color: #a6e22e }
+/* NameFunction */ .chroma .nf { color: #a6e22e }
+/* NameOther */ .chroma .nx { color: #a6e22e }
+/* NameTag */ .chroma .nt { color: #f92672 }
+/* Literal */ .chroma .l { color: #ae81ff }
+/* LiteralDate */ .chroma .ld { color: #e6db74 }
+/* LiteralString */ .chroma .s { color: #e6db74 }
+/* LiteralStringAffix */ .chroma .sa { color: #e6db74 }
+/* LiteralStringBacktick */ .chroma .sb { color: #e6db74 }
+/* LiteralStringChar */ .chroma .sc { color: #e6db74 }
+/* LiteralStringDelimiter */ .chroma .dl { color: #e6db74 }
+/* LiteralStringDoc */ .chroma .sd { color: #e6db74 }
+/* LiteralStringDouble */ .chroma .s2 { color: #e6db74 }
+/* LiteralStringEscape */ .chroma .se { color: #ae81ff }
+/* LiteralStringHeredoc */ .chroma .sh { color: #e6db74 }
+/* LiteralStringInterpol */ .chroma .si { color: #e6db74 }
+/* LiteralStringOther */ .chroma .sx { color: #e6db74 }
+/* LiteralStringRegex */ .chroma .sr { color: #e6db74 }
+/* LiteralStringSingle */ .chroma .s1 { color: #e6db74 }
+/* LiteralStringSymbol */ .chroma .ss { color: #e6db74 }
+/* LiteralNumber */ .chroma .m { color: #ae81ff }
+/* LiteralNumberBin */ .chroma .mb { color: #ae81ff }
+/* LiteralNumberFloat */ .chroma .mf { color: #ae81ff }
+/* LiteralNumberHex */ .chroma .mh { color: #ae81ff }
+/* LiteralNumberInteger */ .chroma .mi { color: #ae81ff }
+/* LiteralNumberIntegerLong */ .chroma .il { color: #ae81ff }
+/* LiteralNumberOct */ .chroma .mo { color: #ae81ff }
+/* Operator */ .chroma .o { color: #f92672 }
+/* OperatorWord */ .chroma .ow { color: #f92672 }
+/* Comment */ .chroma .c { color: #75715e }
+/* CommentHashbang */ .chroma .ch { color: #75715e }
+/* CommentMultiline */ .chroma .cm { color: #75715e }
+/* CommentSingle */ .chroma .c1 { color: #75715e }
+/* CommentSpecial */ .chroma .cs { color: #75715e }
+/* CommentPreproc */ .chroma .cp { color: #75715e }
+/* CommentPreprocFile */ .chroma .cpf { color: #75715e }
+/* GenericDeleted */ .chroma .gd { color: #f92672 }
+/* GenericEmph */ .chroma .ge { font-style: italic }
+/* GenericInserted */ .chroma .gi { color: #a6e22e }
+/* GenericStrong */ .chroma .gs { font-weight: bold }
+/* GenericSubheading */ .chroma .gu { color: #75715e }
diff --git a/hugo/config.toml b/hugo/config.toml
new file mode 100644
index 0000000..ce3a426
--- /dev/null
+++ b/hugo/config.toml
@@ -0,0 +1,111 @@
+baseURL = "/"
+disablePathToLower = true
+languageCode = "en-gb"
+title = "Patrick Stevens"
+theme = "anatole"
+buildFuture = false
+enableEmoji = true
+paginate = 20
+
+[params]
+profilePicture = "/images/AboutMe/profile"
+title = "Patrick Stevens"
+author = "Patrick Stevens"
+customCss = ["css/syntax.css", "css/fontawesome.css"]
+favicon = "favicons/"
+
+[params.math]
+enable = true
+use = "katex-css"
+
+[[params.socialIcons]]
+icon = "fab fa-github"
+title = "GitHub"
+url = "https://github.com/Smaug123/"
+
+[[params.socialIcons]]
+icon = "fab fa-stack-exchange"
+title = "Stack Exchange"
+url = "https://math.stackexchange.com/users/259262/patrick-stevens"
+
+[[params.socialIcons]]
+icon = "fas fa-envelope"
+title = "e-mail"
+url = "mailto:patrick+sidebar@patrickstevens.co.uk"
+
+[[params.socialIcons]]
+icon = "fab fa-linkedin"
+title = "LinkedIn"
+url = "https://www.linkedin.com/in/patrick-stevens-2846017b/"
+
+[markup.highlight]
+ codeFences = true
+ guessSyntax = true
+ hl_Lines = ""
+ lineNoStart = 1
+ lineNos = true
+ lineNumbersInTable = true
+ tabWidth = 4
+ noClasses = false
+
+pygmentsUseClasses = true
+pygmentsCodefences = true
+
+[menu]
+ [[menu.main]]
+ name = "Home"
+ identifier = "home"
+ url = "/"
+ [[menu.main]]
+ name = "Posts"
+ identifier = "posts"
+ url = "/posts"
+ [[menu.main]]
+ name = "About Me"
+ identifier = "about-me"
+ url = "/about"
+ [[menu.main]]
+ name = "About This Site"
+ identifier = "about-this-site"
+ url = "/about-this-site"
+ [[menu.main]]
+ name = "Top Posts"
+ identifier = "top-posts"
+ url = "/top-posts"
+ [[menu.main]]
+ name = "Reading List"
+ identifier = "reading-list"
+ url = "/reading-list"
+ [[menu.main]]
+ name = "Film List"
+ identifier = "films"
+ url = "/films"
+ [[menu.main]]
+ name = "Lifehacks"
+ identifier = "lifehacks"
+ url = "/lifehacks"
+
+[frontmatter]
+date = ["date", "publishDate", "lastmod"]
+lastmod = ["lastmod", "date", "publishDate"]
+publishDate = ["publishDate", "date"]
+expiryDate = ["expiryDate"]
+
+[outputFormats]
+ [outputFormats.Markdown]
+ baseName = "markdown"
+ isPlainText = true
+ mediaType = "text/markdown"
+
+[mediaTypes]
+ [mediaTypes.'text/markdown']
+ suffixes = ['md']
+
+[outputs]
+post = ['HTML', 'markdown']
+page = ['HTML', 'markdown']
+
+defaultContentLanguage = 'en'
+[languages]
+ [languages.en]
+ weight = 1
diff --git a/hugo/content/ILAS/index.md b/hugo/content/ILAS/index.md
new file mode 100644
index 0000000..7d0ee74
--- /dev/null
+++ b/hugo/content/ILAS/index.md
@@ -0,0 +1,8 @@
+---
+lastmod: "2023-09-09T23:30:00.0000000+01:00"
+title: Imre Leader Appreciation Society
+author: patrick
+layout: page
+---
+
+I used to maintain an archive of the Imre Leader Appreciation Society for posterity through WebCitation, but WebCitation itself is now dead, so here I simply link to [Konrad Dąbrowski's capture](https://www.konraddabrowski.co.uk/ilas/index.html).
diff --git a/hugo/content/about-this-site/index.md b/hugo/content/about-this-site/index.md
new file mode 100755
index 0000000..78cb80e
--- /dev/null
+++ b/hugo/content/about-this-site/index.md
@@ -0,0 +1,35 @@
+---
+lastmod: "2022-07-31T20:16:44.0000000+01:00"
+title: About this website
+author: patrick
+layout: page
+---
+
+This website has been around in one form or another since June 26th, 2013.
+
+The website is hosted on [DigitalOcean] and is served statically by [NGINX].
+[Cloudflare] is sitting between my DigitalOcean droplet and you.
+Your HTTPS connection is secure to Cloudflare, and secure from Cloudflare to the droplet.
+
+The rendering engines are [Hugo] for the site, [pdftex] for PDFs, and [ImageMagick] to create image thumbnails.
+The Hugo theme is [Anatole] with a variety of modifications, most notably to remove most uses of JavaScript and to incorporate [Danila Fedore's sidenotes](https://danilafe.com/blog/sidenotes/) ([archive](https://web.archive.org/web/20210116232126/https://danilafe.com/blog/sidenotes/)).
+Mathematical notation in HTML is rendered by [KaTeX].
+
+You can access the TeX source of any PDF I authored in TeX by replacing the ".pdf" extension with ".tex".
+
+The infrastructure for this website is defined and managed by [Pulumi]; you can see it [on GitHub](https://github.com/Smaug123/PulumiConfig/).
+That repository also specifies the [Nix] configuration for the server.
+
+ [static]: https://en.wikipedia.org/wiki/Static_web_page
+ [GitHub Pages]: https://pages.github.com
+ [Hugo]: https://gohugo.io/
+ [Wordpress]: https://wordpress.org
+ [Anatole]: https://themes.gohugo.io/anatole/
+ [CloudFlare]: https://www.cloudflare.com
+ [DigitalOcean]: https://www.digitalocean.com
+ [NGINX]: https://www.nginx.com/
+ [pdftex]: https://www.tug.org/applications/pdftex/
+ [ImageMagick]: https://imagemagick.org/index.php
+ [KaTeX]: https://katex.org/
+ [Pulumi]: https://www.pulumi.com
+ [Nix]: https://nixos.org/
diff --git a/hugo/content/about/index.md b/hugo/content/about/index.md
new file mode 100644
index 0000000..998d033
--- /dev/null
+++ b/hugo/content/about/index.md
@@ -0,0 +1,52 @@
+---
+lastmod: "2022-08-20T14:18:00.0000000+01:00"
+title: About me
+author: patrick
+layout: page
+sidenotes: true
+---
+I am Patrick Stevens, a software engineer based in London, England.
+I completed my BA+MMath at the University of Cambridge.
+
+Social media accounts:
+
+* [GitHub][Github: Smaug123].
+* [Hacker News][Hacker News: Smaug123].
+* [Email](mailto:patrick+sidebar@patrickstevens.co.uk).
+* [LinkedIn][LinkedIn] (used almost never).
+* [Twitter][Twitter: smaug12345] (used almost never). My handle is @smaug12345.
+
+I am very interested in maths and puzzle-solving.
+For instance:
+
+* I have one of the [top twenty answers on the Maths StackExchange](https://math.stackexchange.com/questions/1681993/why-is-1-frac11-frac11-ldots-not-real/1682008#1682008) by upvotes.
+* On [Hacker.org], I am [laz0r][Hacker.org: laz0r], one of the top 50 users, although I have run out of low-hanging fruit on that site and I haven't returned to it for a while.
+* For three years running, I have participated in person in the [MIT Mystery Hunt](https://en.wikipedia.org/wiki/MIT_Mystery_Hunt), solving with Team Palindrome; if interested, see [the captain's write-up from 2019](https://www.ericberlin.com/2019/01/23/mystery-hunt-2019/) and [from 2020](https://www.ericberlin.com/2020/01/22/a-really-absurdly-long-post-about-the-mit-mystery-hunt/). In 2021, [we won](https://www.ericberlin.com/2021/01/19/my-mystery-hunt-2021-wrapup/).
+* I have solved {{< side right project-euler "a number">}}{{< /side >}} of [Project Euler] problems.
+
+Languages:
+
+* F#: this is my day job.
+* Mathematica (recreationally); it's a lovely Lisp-ish thing. At one point, I was very active on the [Mathematica StackExchange].
+* Python: better-than-code-monkey experience. The open-source [Sextant] and [Endroid] are in Python; I contributed to both of these in my dim and distant past.
+* C#: my day job interacts moderately frequently with C# code.
+* Delphi: the language I learned first.
+* Agda: I'm [playing around][Agda] with this one at the moment.
+
+I have an interest in rationality, philosophy, lifehacking, and the links between these. I like to play around with words and constrained writing (such as poetry).
+
+[Twitter: smaug12345]: https://twitter.com/smaug12345 "My Twitter account"
+[Github: Smaug123]: https://github.com/Smaug123/ "Patrick Stevens Github account"
+[Hacker News: Smaug123]: https://news.ycombinator.com/user?id=Smaug123
+[Public key: Patrick Stevens]: https://keybase.io/patrickstevens
+[Project Euler]: https://projecteuler.net/
+[Hacker.org]: http://www.hacker.org "Hacker.org"
+[Hacker.org: laz0r]: http://www.hacker.org/forum/profile.php?mode=viewprofile&u=13437 "My Hacker.org profile"
+[Agda]: {{< ref "2018-07-21-dependent-types-overview" >}}
+[GitHub Page]: https://pages.github.com
+[Endroid]: https://launchpad.net/endroid
+[Launchpad]: https://launchpad.net/~patrickas
+[Sextant]: https://launchpad.net/ensoft-sextant
+[Maths StackExchange]: https://math.stackexchange.com
+[Mathematica StackExchange]: https://mathematica.stackexchange.com/users/30771/patrick-stevens
+[LinkedIn]: https://www.linkedin.com/in/patrick-stevens-2846017b/
diff --git a/hugo/content/anki-decks/index.md b/hugo/content/anki-decks/index.md
new file mode 100755
index 0000000..e5cea1b
--- /dev/null
+++ b/hugo/content/anki-decks/index.md
@@ -0,0 +1,18 @@
+---
+lastmod: "2023-09-08T19:22:27.0000000+01:00"
+title: Anki decks
+author: patrick
+layout: page
+comments: true
+---
+
+I have deleted almost all of the Anki decks on this page, because I think they would do more harm than good.
+They were made during a time when I didn't really know how to use Anki appropriately.
+Any remaining decks here are CC-BY-SA.
+
+* [Geography]. You can filter out the `london-tube` tag if you like, or `world-capitals`, or `american-geography`.
+
+
+This work by Patrick Stevens is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.
+
+[Geography]: /AnkiDecks/Geography.apkg
\ No newline at end of file
diff --git a/hugo/content/awodey/2015-08-19-category-theory-introduction.md b/hugo/content/awodey/2015-08-19-category-theory-introduction.md
new file mode 100644
index 0000000..75be1b4
--- /dev/null
+++ b/hugo/content/awodey/2015-08-19-category-theory-introduction.md
@@ -0,0 +1,18 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-08-19T00:00:00Z"
+aliases:
+- /categorytheory/category-theory-introduction/
+- /category-theory-introduction/
+title: Category Theory introduction
+---
+
+The next few posts will be following me on my journey through the book [Category Theory], by Steve Awodey. I’m using the second edition, if anyone wants to join me. I will read the book and make notes here as I go along: doing the exercises (if they seem interesting enough, I’ll post them up here), coming up with my own intuition pumps, and generally writing down my thought processes. The idea is to see how a fledgling mathematician studies a text, and to record my thoughts so I can refresh my memory more easily in future.
+
+As I go, I’m also creating an Anki deck of the definitions, by the way, although that might not appear on this site.
+
+[Category Theory]: http://ukcatalogue.oup.com/product/9780199237180.do
diff --git a/hugo/content/awodey/2015-08-19-what-is-a-category.md b/hugo/content/awodey/2015-08-19-what-is-a-category.md
new file mode 100644
index 0000000..4cc89e1
--- /dev/null
+++ b/hugo/content/awodey/2015-08-19-what-is-a-category.md
@@ -0,0 +1,57 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-08-19T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/what-is-a-category/
+- /what-is-a-category/
+title: What is a Category?
+---
+
+This post will cover the initial "examples" section of [Category Theory]. Because there aren't really very deep concepts in this section, this is probably a less interesting post to read than the others in this series.
+
+The introduction lasts until the bottom of page 4, which is where a *category* is defined. I read the definition in a kind of blank haze, not really taking it in, but I was reassured by the line "we will have plenty of examples very soon". On re-reading the definition, I've summarised it into "objects, arrows which go from object to object, associative compositions of arrows, identity arrows which compose in the obvious way". That's a very general definition, as the text points out, so I'm just going to wait until the examples before trying to understand this properly.
+
+The first example is the category of sets, with arrows being the functions between sets. That destroys my nice idea that "a category can be represented by a [simple (directed) graph][simple graph] together with a single identity arrow on each node": indeed, there are lots and lots of functions between the same two sets, and indeed more than one arrow \\(A \to A\\). I'll relax my mental idea into "directed multigraph".
+
+Then there's the category of finite sets. I'll just check that's a category - oh, it's actually really obvious and there's not really anything to check.
+
+Then the category of sets with injective functions. The "is this a category" check is done in the text.
+
+What about surjective functions? The composition of surjective functions is surjective, and the identity function is surjective, so that does also form a category.
+
+The first exercise in the text is where the arrows are \\(f: A \to B\\) such that \\(f^{-1}(b) \subset A\\) has at most two elements. (A moment of confusion before I realise that this is almost the definition of "injective".) That's clearly not a category: the composition of two of those might fail to satisfy the property. For instance, \\(f: \{0, 1, 2, 3 \} \to \{0, 1\}\\) the "is my input odd" function, and \\(g: \{0, 1\} \to \{0\}\\) the constant function; the composition of these is the constant zero function which is four-to-one.
+
+Now comes the category of posets with monotone functions. Not much comes to mind about that.
+
+The category of sets with binary relations as the arrows is one that is less intuitive for me, mainly because I'm still not used to thinking of relations \\(\sim\\) (such that \\(x \in X\\) may \\(\sim y \in Y\\)) as subsets of \\(X \times Y\\). The identity arrow is easy enough: it's the obvious "equality" relation that \\(a \sim a\\) only. The composition is a little less obvious: \\(a (S \circ R) c\\) iff there is \\(b\\) such that \\(a S b\\) and \\(b R c\\). Can I come up with an example of that? Let \\(S = \ \leq\\) on \\(\mathbb{R}\\), and \\(R = \ \geq\\). Then \\(S \circ R\\) is just the "everything is related" relation, since we may either let \\(b=a\\) or \\(b=c\\) depending on whether \\(a \leq c\\) or \\(a \geq c\\). OK, I'm a bit happier about that. It's easy to show that we have a category.
+
+Then comes a matrices example (which I've simplified from the textual example), where the objects are natural numbers - possibly repeated - and the arrows are integer matrices of the right dimensions that matrix multiplication is defined. I thought that was a pretty neat example.
+
+Finite categories: the book gives the definitions of \\(0\\), \\(1\\), \\(2\\) and \\(3\\). There's an obvious way to extend this to higher natural numbers. The section about "we may specify a finite category by just writing down a directed graph and making sure the arrows work" rings a strong bell with [free group]s, and indeed, the book calls them "free categories".
+
+Now we come to the definition of a "functor", which I immediately parse as a "category homomorphism" and move on. (Questions which come to mind: are any of the above categories related by some functor? I don't care much about that for the moment.)
+
+Preorders form a category which is drawn in almost exactly the same way as the Hasse diagram for a partial order (omitting identity arrows). That's a category in which the arrows represent the order relation itself, rather than functions with a domain and codomain.
+
+The topological-space example I skipped because I didn't know what a \\(T_0\\) space was. (However, I did observe that the specialisation ordering is trivial on sufficiently separated spaces.)
+
+Example from the category of proofs in a particular deductive system: the identity arrow \\(1_{\phi}\\) should be the trivial deduction \\(\phi\\) from premise \\(\phi\\). Very neat. It rings a bell from what I've heard of the [Curry-Howard isomorphism], and indeed the next example makes me think even more strongly of that.
+
+Discrete category on a set: yep, checks out. I should verify that they are posets, which they are: the poset with order relation "almost nothing is comparable".
+
+Monoids: oh dear, this example looks long. OK, I know what a monoid is ("group without inverses"), but how is it a category? Little mental shift of gear to thinking of elements as arrows, and it all becomes clear. The "free category" relations from earlier, then, correspond to the "free group" relations on the generators. I check that the set of functions \\(X \to X\\) is actually a monoid, which it is. It seems easier to view it as a subcategory of the category of sets; and lo and behold the next paragraph points this out. We get to the bit about "monoid homomorphisms" - yes, they are indeed functors, which is not at all unexpected given that my understanding of "functor" is "category homomorphism", and monoids are categories.
+
+## Summary
+This is actually the second time I've read this section - the first time was before I had the idea of blogging my progress - and now I think I've got a good feel for what a category is. The next section is titled "Isomorphisms", which should give me a better idea of which categories are "the same". I noticed that the integers (when implemented as categories) seem to form a preorder, and indeed a poset; this corresponds nicely with their implementation as finite ordinals, with \\(3 = \{2\}\\) and so forth. I like seeing things crop up in different implementations all over the place like that.
+
+
+[Category Theory]: http://ukcatalogue.oup.com/product/9780199237180.do
+[simple graph]: https://en.wikipedia.org/wiki/Graph_(mathematics)#Simple_graph
+[free group]: https://en.wikipedia.org/wiki/Free_group
+[Hasse diagram]: https://en.wikipedia.org/wiki/Hasse_diagram
+[Curry-Howard isomorphism]: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence
diff --git a/hugo/content/awodey/2015-08-20-new-categories-from-old.md b/hugo/content/awodey/2015-08-20-new-categories-from-old.md
new file mode 100644
index 0000000..25f6357
--- /dev/null
+++ b/hugo/content/awodey/2015-08-20-new-categories-from-old.md
@@ -0,0 +1,71 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-08-20T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/new-categories-from-old/
+- /new-categories-from-old/
+title: New categories from old
+---
+
+Here, I will be going through the Isomorphisms and Constructions sections of Awodey's Category Theory - pages 12 through 17.
+
+The first definition here is that of an isomorphism within a category. I notice that it corresponds with the usual definition of an isomorphism, but it's not phrased in exactly the same way. Until now, "isomorphism" has strictly meant "bijective homomorphism". Are these two notions secretly the same? They can't be, because arrows aren't necessarily homomorphisms. Let's proceed with this slightly unfamiliar definition: it is an "arrow which is invertible on either side by the same inverse". The book asks us to prove that inverses are unique - that's easy by the usual group-inverses proof, which only really requires associativity.
+
+I need to be careful to remember that isomorphisms (as defined here) aren't between categories, but between members of a category. That is, they're not functors but arrows. (Though of course an arrow may represent a functor, but that's beside the point.)
+
+Now comes a paragraph about abstract definitions, which basically crystallises my thoughts that isomorphism is a more general form of "bijective homomorphism" which works in all categories. The example from the poset category with monotone functions as arrows is something I'm going to have to get my head around. Here goes.
+
+What does the category-theoretic definition of an isomorphism look like in the category of posets? It's a monotone function which has a monotone inverse. (Ah, that's more like the definition I remember: "a homeomorphism is a continuous function with a continuous inverse".) How is that different from "bijective homomorphism"? We'll want a monotone function which has an inverse which is not monotone. The standard topological spaces example was on an arbitrary space, between the discrete topology and the indiscrete topology. One direction is continuous and the other is not. Can I quickly turn that into a poset example? The obvious way to go would be on the same set from "nothing is related" to "total order". Definitely order-preserving: if \\(x < y\\) then \\(f(x) < f(y)\\) is vacuously true; definitely invertible; definitely not what we want an isomorphism to look like. I think I've got my head around the difference now.
+
+In the case of a monoid (viewed as a category), "only the abstract definition makes sense". Is that true? Firstly, what does the abstract definition look like? In a group, all elements are isomorphisms. If we take the monoid \\((\mathbb{Z} \cup \{ \infty \}, +)\\), the arrow \\(\infty: G \to G\\) is not an isomorphism because it has no inverse. That seems fine. Can I make sense of the idea of a monoid element being a "bijective homomorphism"? I could make the element act on the monoid by left multiplication, and I don't see anything wrong with that at the moment. I moved on at the time, but asked someone a bit later about this. The answer is that there are some categories which can't be viewed concretely at all, so the idea of "an arrow is a function" can't be made to make sense in some categories.
+
+Definition of a group is next; I definitely understand that, and I discovered for myself that a group has all its arrows as isomorphisms. I'll skip the bit about some examples of groups, because I know it, and go to the definition of a group homomorphism. That bit is clear too, so on to Cayley's Theorem.
+
+The proof which appears here is basically the same as the one I was taught: show that action-on-the-left gives us a way to turn \\(G\\) into a permutation group on itself.
+
+The warning is interesting, and I hadn't noticed the feature it points out. I'll think about that a bit further. OK, it doesn't actually seem to be that problematic to understand, but definitely important to keep my thinking type-checked.
+
+Theorem 1.6. This looks important. We instantiate objects by their collection of incoming arrows, and instantiate arrows by functions which "represent" an arrow in the same way as the regular representation does in groups. Actually, that doesn't seem particularly important: it's just saying "we can instantiate categories whose arrows form a set". Maybe the Remark will clear things up. It's basically saying by analogy that "there's nothing special about permutation groups, since all groups may be viewed as permutation groups, so stop thinking about them in that way please". I think I'll wait until the discussion of terminal objects before I try and get my head around the true interpretation of a concrete category.
+
+Now the New Categories From Old section. The product looks easy enough, and its two projections are natural. The dual category likewise is pretty obvious, and makes the dual vector spaces idea much neater.
+
+The arrow category takes me a while to get my head around. The composition operation clearly does compose arrows correctly. What does the arrow category of the integer category \\(3\\) look like? Let's call the objects of \\(3\\) by the names \\(a, b, c\\). Then the arrow category has six objects (three identity and three non-identity arrows). We can find all the commutative squares by brute force, which I did on paper: there are \\(3^4\\) squares, but anything with \\(c\\) in the top left corner must be the identity arrow on the arrow \\(c \to c\\). That narrows it down enough for me to do this by hand. We end up with \\(a \to a\\) being connected to every arrow; \\(a \to b\\) connected to every arrow except \\(a \to a\\); \\(a \to c\\) connected only to \\(a \to c, b \to c, c \to c\\); \\(b \to b\\) connected to \\(b \to b, b \to c, c \to c\\); \\(b \to c\\) connected to \\(b \to c, c \to c\\); and \\(c \to c\\) connected to \\(c \to c\\). That is, if we omit the identity arrows, we obtain the following Hasse diagram.
+
+![Arrow category of 3][arrow]
+
+I don't think that was very enlightening. Motto: arrow categories aren't obviously anything in particular. What about the forgetful functors specified by taking the codomain or the domain? I'm happy that those are both functors, having stared at my diagram.
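+
+The brute-force search above is small enough to script, too. A sketch, under my own encoding of \\(3\\) as the poset \\(a \leq b \leq c\\): since a poset has at most one arrow between any two objects, a square commutes as soon as its sides exist, so there is a morphism \\((x \to y) \to (u \to v)\\) in the arrow category exactly when \\(x \leq u\\) and \\(y \leq v\\).
+
+```python
+from itertools import product
+
+# The poset category 3: objects a <= b <= c, one arrow x -> y iff x <= y.
+objs = "abc"
+leq = lambda x, y: objs.index(x) <= objs.index(y)
+arrows = [(x, y) for x, y in product(objs, repeat=2) if leq(x, y)]
+
+# Arrow-category hom-sets: (x,y) -> (u,v) exists iff x <= u and y <= v.
+hom = {f: [g for g in arrows if leq(f[0], g[0]) and leq(f[1], g[1])]
+       for f in arrows}
+
+for f, gs in hom.items():
+    print(f, "->", gs)   # reproduces the connections listed above
+```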
+
+Now comes the slice category. I've read this over once and got absolutely nowhere, so let's try again more carefully. The objects I can deal with: any arrow which goes into \\(C\\). The arrows? I'll do this with the category \\(3\\) again. If we slice on \\(a\\) then the only object is the identity arrow, and the only arrow is another identity. If we slice on \\(b\\) then there are two objects: \\(a \to b\\) and \\(b \to b\\). (Just quickly went back to the definition of a category, to check that the composite \\((b \to b) \circ (a \to b)\\) isn't a new arrow; in general a composite could be, but here it is just \\(a \to b\\) again.) Then in the slice category, there's an arrow \\((a \to b) \to (a \to b)\\) - namely the \\(C\\)-arrow \\(a \to a\\) - and an arrow \\((a \to b) \to (b \to b)\\) - namely the \\(C\\)-arrow \\(a \to b\\). We also have \\(b \to b\\)'s identity arrow. Therefore, we have recovered the category \\(2\\). That gives me intuition about what the identity arrows in the slice category look like.
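+
+This slice computation can be scripted the same way (again with my own encoding of \\(3\\) as the poset \\(a \leq b \leq c\\)): an arrow \\((x \to b) \to (y \to b)\\) in the slice is an arrow \\(x \to y\\) of \\(3\\), and over a poset the triangle commutes automatically.
+
+```python
+from itertools import product
+
+objs = "abc"
+leq = lambda x, y: objs.index(x) <= objs.index(y)
+
+def slice_over(c):
+    # Objects: arrows of 3 into c, identified with their domains.
+    # Arrows f -> g: arrows between those domains in 3.
+    obs = [x for x in objs if leq(x, c)]
+    return obs, [(x, y) for x, y in product(obs, repeat=2) if leq(x, y)]
+
+print(slice_over("b"))   # two objects and three arrows: the category 2
+```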
+
+I don't think I've got any more intuition here. I'll briefly move on to the bit about the functor which forgets the sliced object. Certainly I agree that the given functor behaves correctly on objects. Does it behave on arrows? Yes, that's obvious from the syntactic definition, but I'm not certain I grok it. (I notice at this point that the functor is not necessarily surjective, as the \\(3\\) example above shows.)
+
+If I understand the composition law, then I should understand the arrows, so I'll aim for that instead. The composition law is clear from the book's diagram, on page 16: just add another triangle joined along edge \\(f'\\) to make a bigger supertriangle. OK, now I'm happier about the arrows in the slice category: they really are just arrows in the original category, and they join two slice-category objects (that is, arrows in \\(C\\)) if the two objects form a commutative triangle. This is actually a lot like the arrow category, by the looks of it.
+
+What about this composition functor? It lets us slice out on a different vertex by "changing the worldview", viewing everything through the lens of a particular arrow. I'm happy enough with that as a concept, although I recognise that my "understanding that this is a functor" is purely syntactic. Hopefully I'll get used to this with time.
+
+"The whole slicing construction is a functor". Yes, OK, that follows from the existence of the composition functor. I repeat that I'm understanding this at the surface level only, and I don't really grok any of it.
+
+What happens if we slice out a group (viewed as a category) by its only object? Then we get a category which has objects {the elements of the group}, and arrows \\(g \to h\\) given by \\((h^{-1} \circ g) \in C\\). That seems to have taken the group and told us how all its elements are related, which is mildly interesting.
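+
+A tiny check with \\(\mathbb{Z}/3\\) written additively (my own choice of group): between any two objects there is exactly one arrow, namely \\(k = g - h\\), the additive version of \\(h^{-1} \circ g\\).
+
+```python
+# Slicing the one-object category Z/3 over its single object:
+# objects are the group elements, and an arrow g -> h is an element k
+# with h + k = g, i.e. k = g - h, which always exists and is unique.
+n = 3
+elements = range(n)
+arrows = {(g, h): [k for k in elements if (h + k) % n == g]
+          for g in elements for h in elements}
+assert all(len(ks) == 1 for ks in arrows.values())   # exactly one arrow each
+assert arrows[(1, 2)] == [(1 - 2) % n]               # the arrow is g - h
+```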
+
+I verify that the slice category of a poset category is the "principal ideal" as stated, and note with relief that we will see more examples soon.
+
+The coslice category: that's obviously just the dual of the slice category.
+
+The category of pointed sets: yep, it's a category. I really don't understand the isomorphism with the coslice category on sets. I can just about see it syntactically, but this is going to need a lot more work. I spent about ten minutes trying to work out what this really meant.
+
+## Summary
+
+I'm happy with some of these constructions, but I'll need a lot more work on others. I'll do these constructions on some more categories and see what happens.
+
+After composing this post, I asked someone for intuition, and got the reply:
+
+"The coslice category has objects which may be viewed as pairs \\((A, f)\\), where \\(f:\{ * \} \to A\\). So \\(f\\) is exactly a choice of element in \\(A\\). And the morphisms are maps such that the triangle commutes, i.e. the element "chosen" by \\(f\\) is the same as the one "chosen" by \\(f'\\)."
+
+I think this has cleared things up, but time will tell.
+
+[arrow]: {{< baseurl >}}images/CategoryTheorySketches/ArrowCategoryOf3.jpg
diff --git a/hugo/content/awodey/2015-08-21-free-categories-and-foundations.md b/hugo/content/awodey/2015-08-21-free-categories-and-foundations.md
new file mode 100644
index 0000000..eac8a14
--- /dev/null
+++ b/hugo/content/awodey/2015-08-21-free-categories-and-foundations.md
@@ -0,0 +1,56 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-08-21T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/free-categories-and-foundations/
+- /free-categories-and-foundations/
+title: Free categories and foundations
+---
+
+Here, I will be going through the Free Categories and Foundations sections of Awodey's Category Theory - pages 18 through 25.
+
+The definition of a free monoid is basically the same as that of a free group. However, I skim past and see the word "functor" appearing in the "no noise" statement, so I'll actually read this section properly.
+Everything is familiar up until the definition of the universal mapping property. One bit confuses me for a moment - "every monoid has an underlying set, and every monoid homomorphism an underlying function; this is a functor" - until I realise that by "this", Awodey means "this construction" rather than "this underlying function".
+
+Now comes the Universal Mapping Property of free monoids. This is a painful definition - I've spent fifteen minutes trying and failing to understand it - so I'll skip past it and come back when I've read some more.
+
+Proposition 1.9: this is a proof in an area where I'm wrestling to keep everything in my mind at once, so I'll just prove the proposition myself, using Awodey to take short-cuts. Let \\(i: A \to \vert A^* \vert\\) be defined by inclusion - that is, taking \\(a\\) to the single-character word \\(a\\). Let \\(N\\) be a monoid and \\(f: A \to \vert N \vert\\). Define \\(\bar{f}: A^* \to N\\) as stated; it's clearly a homomorphism. It has \\(\vert \bar{f} \vert \circ i = f\\): indeed, \\(\vert \bar{f} \vert \circ i(a) = f(a)\\) manifestly. The homomorphism is unique, as Awodey proves at the end. Very well: I'm satisfied that \\(A^*\\) has the UMP of the free monoid on \\(A\\).
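+
+A concrete instance of the proposition (the particular \\(A\\) and \\(N\\) are my own choices): take \\(A = \{a, b\}\\), \\(N\\) the integers under multiplication, and any \\(f\\); then \\(\bar{f}\\) is just a fold of \\(f\\) over the letters of a word.
+
+```python
+from functools import reduce
+
+# A* is strings over A; i includes a letter as a one-letter word;
+# fbar multiplies out the images of the letters.
+f = {'a': 2, 'b': 3}
+
+def fbar(word):
+    return reduce(lambda m, c: m * f[c], word, 1)
+
+i = lambda a: a                            # inclusion A -> |A*|
+assert all(fbar(i(a)) == f[a] for a in f)  # |fbar| . i = f
+u, v = 'ab', 'bba'
+assert fbar(u + v) == fbar(u) * fbar(v)    # fbar is a homomorphism
+assert fbar('') == 1                       # empty word to the unit
+```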
+
+Apparently the UMP captures "no junk and no noise". What Awodey says is plausible to me in that it hits the right words on the mark scheme, but the definition of the UMP is just too abstract. I'll try and break it into parts.
+
+"There is a function \\(i : A \to \vert M(A) \vert\\)." That bit's fine: it's saying that the inclusion exists. "Words are built up from the set in some way."
+
+"Given any monoid \\(N\\) and any function \\(f: A \to \vert N \vert \\), there is a monoid homomorphism \\(\bar{f} : M(A) \to N\\) such that \\(\vert \bar{f} \vert \circ i = f\\)." The final equality is saying "we may represent \\(f\\) instead by first including into the free monoid, then applying some analog of \\(f\\)". Makes sense: "if we know where members of the free monoid go, then we definitely know where the generators go".
+
+"Moreover, \\(\bar{f}\\) is unique." Well, if it weren't unique, we would have a choice of places to send a word in the free monoid, even if we knew where all the generators went.
+
+I think I understand it better now. Still not on a particularly intuitive level, but now I'm convinced by Awodey's explanation.
+
+Let's move on to Proposition 1.10, that the free monoid is determined uniquely up to isomorphism. That seems plain enough, on a syntactic level.
+
+The bit about graphs is clear, but now there's another UMP to worry about. (Ah, I'm starting to understand that a UMP is a class of property, not just one particular property. Presumably there's one for lots of different structures.) The forgetful functor from Cat to Graphs is fine; the "different point of view" of a graph homomorphism makes me stop. Let's break down that diagram more carefully.
+
+\\(i: C_0 \to C_1\\) is indeed a valid map: we may view the identity arrow operation as taking an object to its associated identity. The codomain and domain functions do indeed take arrows to objects. The composition operation takes pairs of arrows (which have the right codomain/domain) to single arrows. OK, that's not too scary a diagram, and I agree that a functor is as claimed.
+
+After the same process of thought, I agree with the formulation of Graphs; and then I get to the description of the forgetful functor Cat to Graphs. That is immediately comprehensible, and my first thought is that I don't know why Awodey didn't just come out with it straight away.
+
+"Similarly for functors…" - this bit is "easier to demonstrate with chalk", but I'll just go back and do it mentally. It works out in the obvious way.
+
+Finally, our second universal mapping property, this time the free category generated by a graph. Armed with the (meagre) intuition from the free-monoid UMP, this is easier to understand. "We may include the graph into the free category, and given somewhere to map the generators, there is a unique way to determine where elements of the free category go". I had one of those rare moments of "I know exactly what is going on here", which is hopefully a good sign.
+
+I'm intuitively happy with the examples given in the epilogue. If I were less lazy, I'd check from the UMP that the examples worked (that is, show that categories so defined were unique, and that the free category satisfied the UMP).
+
+Page 24 (on foundations) is familiar to me. I note the definition of "small" and "large" categories - natural enough. The definition of "locally small" looks a bit frightening at first, but on second glance it really is just what you'd expect "locally small" to mean. What would it mean for \\(Cat\\), the collection of small categories, to not be locally small? There would have to be two small categories such that the collection of functors between them was not a set. But the two categories are small, so they are sets, and there is a set of all functions between two sets. (However, the category of locally small categories would not be locally small: pick a non-small member \\(C\\), and define a functor \\(1 \to C\\) which selects an element. There are non-set-many of these.)
+
+Finally, the warning that "concrete" is not "small". Once given the example of the poset category \\(\mathbb{R}\\), I'm satisfied.
+
+# Summary
+
+I took a few days to understand this section, not working at it very hard but just letting it trickle in when the mood took me. It was massively more difficult than the previous sections, but I think I've got my head around the universal mapping properties described. I don't know whether I could come up with them myself to describe other free objects, but I could certainly give it a go.
+
+The exercises at the end of this chapter will be the true test of understanding.
diff --git a/hugo/content/awodey/2015-09-02-epis-monos.md b/hugo/content/awodey/2015-09-02-epis-monos.md
new file mode 100644
index 0000000..a9144ce
--- /dev/null
+++ b/hugo/content/awodey/2015-09-02-epis-monos.md
@@ -0,0 +1,60 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-02T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/epis-monos/
+- /epis-monos/
+title: Epis and monos
+---
+
+This post is on pages 29 through 33 of Awodey. It took me a while to do this, because I was basically on holiday for the past week.
+
+The definition of a [mono] and an [epi] seems at first glance to be basically the same thing as "injection" and "surjection". A mono is \\(f: A \to B\\) such that for all \\(g, h: C \to A\\), if \\(fg = fh\\) then \\(g=h\\). Indeed, if we take this in the category of sets, and let \\(g, h: \{1 \} \to A\\) ("picking out an element"), we precisely have the definition of "injection". An epi is \\(f: A \to B\\) such that for all \\(i, j:B \to D\\), if \\(if = jf\\) then \\(i=j\\). Again, in the category of sets, let \\(i, j: B \to \{1\}\\); then… ah. \\(if = jf\\) and \\(i=j\\) always, because there's only one function to the one-point set from a given set. I may have to rethink the "surjection" bit.
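+
+The "monic iff injective" claim can be brute-forced over small finite sets (the particular sizes are my own; testing \\(C\\) of sizes one and two suffices, since a singleton \\(C\\) already detects any failure of injectivity):
+
+```python
+from itertools import product
+
+def functions(dom, cod):
+    # every function dom -> cod, represented as a dict
+    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]
+
+A, B = [0, 1], [0, 1, 2]
+for f in functions(A, B):
+    injective = len(set(f.values())) == len(A)
+    monic = all(g == h
+                for C in ([0], [0, 1])
+                for g in functions(C, A)
+                for h in functions(C, A)
+                if all(f[g[c]] == f[h[c]] for c in C))
+    assert monic == injective
+```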
+
+Then there's Proposition 2.2, which I'm happy I've just basically proved anyway, so I skim it.
+
+Example 2.3: "monos are often injective homomorphisms". I glance through the example as preparation for going through it with pencil and paper, and see "this follows from the presence of objects like the free monoid \\(M(1)\\)", which is extremely interesting. Now I'll go back through the example properly.
+
+Suppose \\(h: M \to N\\) is monic. For any two distinct ways of selecting an element of the monoid's underlying set, we can lift those selections into mappings on the free monoid \\(M(1) \to M\\); they are distinct by the UMP. Applying \\(h\\) then takes the mappings into \\(N\\), maintaining distinctness by monicity; then the UMP lets us drag the mappings back into the sets, making selections from \\(1 \to \vert N \vert\\). The converse is quite clear.
+
+So it is clear where we needed the free monoid and its UMP: it was to give us a way to pass from talking about monoids to talking about sets, and back.
+
+Example 2.4: every arrow in a poset category is both monic and epic. An arrow \\(f: A \to B\\) is monic iff for all \\(g, h: C \to A\\), \\(f g = f h \Rightarrow g = h\\). That is, to abuse notation horribly, \\(a \leq b\\) is monic iff \\(c \leq a \leq b, c \leq a \leq b \Rightarrow ((c \leq a) = (c \leq a))\\). Ah, it's clear why all arrows are monic: it's because there is at most one arrow between \\(A, B\\), so two arrows with the same codomain and domain must be the same. The same reasoning works for "the arrows are epic".
+
+"Dually to the foregoing, the epis in the category of sets are the surjective functions". This is the bit from earlier I had to rethink. OK, let's take \\(f: A \to B\\) an epi in the category of sets. Let \\(i, j: B \to C\\), for some set \\(C\\). (Hopefully it'll become clear what \\(C\\) is to be.) Then \\(i f = j f\\) implies \\(i = j\\); we want to show that \\(f\\) hits every element of \\(B\\), so suppose it didn't hit \\(b\\). Then when we take the compositions \\(if, jf\\), we see that \\(i, j\\) never are asked about \\(b \in B\\), so in fact we are free to choose \\(i, j\\) to differ. That means we just need to pick \\(C\\) to be a set with more than one element. OK, that's much easier, although it's not quite clear to me how this is "dually".
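+
+The witness construction in that argument can be written out directly (my own encoding; the two-element set is \\(\{0, 1\}\\)):
+
+```python
+# If f : A -> B misses some b, build i, j : B -> {0,1} that agree on the
+# image of f but differ at b; then i.f = j.f with i != j, so f is not epic.
+A, B = [0, 1], [0, 1, 2]
+f = {0: 0, 1: 1}                            # not surjective: misses 2
+b = (set(B) - set(f.values())).pop()
+i = {x: 0 for x in B}
+j = dict(i); j[b] = 1
+assert i != j
+assert all(i[f[a]] == j[f[a]] for a in A)   # yet they agree after f
+```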
+
+Then the example of the inclusion map \\(i\\) of the monoid \\(\mathbb{N} \cup \{ 0 \}\\) into the monoid \\(\mathbb{Z}\\). We're going to prove it's epic, so I'll try that before reading the proof. Let \\(g, h: \mathbb{Z} \to M\\) for some monoid \\(M\\); we want to show that \\(g i = h i \Rightarrow g = h\\). Indeed, suppose \\(g i = h i\\), but \\(g \not = h\\): that is, there is some \\(z \in \mathbb{Z}\\) such that \\(g(z) \not = h(z)\\). Since \\(gi = hi\\), we must have that \\(i\\) does not hit \\(z\\): that is, \\(z < 0\\), so \\(-z > 0\\) and \\(g(-z) = h(-z)\\). Now \\(g(z) + g(-z) = g(0)\\) is the unit of \\(M\\), and so is \\(g(-z) + g(z)\\): that is, \\(g(z)\\) is a two-sided inverse of \\(g(-z)\\). Likewise \\(h(z)\\) is a two-sided inverse of \\(h(-z) = g(-z)\\). Inverses in a monoid are unique when they exist, so \\(g(z) = h(z)\\) - a contradiction.
+
+Looking back over the proof in the book, it's basically the same. Awodey specialises to \\(-1\\) first.
+
+Proposition 2.6: every iso is monic and epic. I can't help but see the diagram when I read this, but I'll try and ignore it so I can prove it myself. Recall that an iso is an arrow such that there is an "inverse arrow". Let \\(f: A \to B\\) be an iso, and \\(i, j: B \to C\\) such that \\(if = jf\\). Then we may post-compose by \\(f\\)'s inverse - ah, it's clear now that this will work both forwards and backwards. This is exactly analogous to saying "we may left- or right-cancel in a group", and now I come to think of it, "epis are about right-cancelling" is something I just skipped over in the book.
+
+I'm happy with "every mono-epi is iso in the category of sets", since we've already proved that the injections are precisely the monos, and the epis are precisely the surjections.
+
+Now, the definition of a split mono/epi. That seems fine - it's a stronger condition than plain mono/epi, since a one-sided inverse lets us cancel directly. "Functors preserve identities" does indeed mean that they preserve split epis and split monos, because the splitting equation \\(e \circ s = 1\\) consists of a composition and an identity, both of which any functor preserves.
+
+The forgetful functor Mon to Set does not preserve the epi \\(\mathbb{N} \to \mathbb{Z}\\): we want to show that the inclusion of \\(\mathbb{N} \to \mathbb{Z}\\) (as sets) is not surjective. Oh, that's trivially obvious.
+
+In Sets, every mono splits except the empty ones: yes, we already have a theorem that injections have left inverses. "Every epi splits" is the categorical axiom of choice: we already have a theorem that "surjections have right inverses" is equivalent to AC, so I'm happy with this bit.
+
+Now the definition of a projective object. It's basically saying "arrows from this object may be pulled back through epis". A projective object "has a more free structure"? I don't really understand what that's saying, so I'll just accept the words and move on.
+
+All sets are projective because of the axiom of choice? Fix set \\(P\\); we want to show that for any function \\(f: P \to X\\) and any surjection \\(e: E \to X\\), there is \\(\bar{f}: P \to E\\) with \\(e \circ \bar{f} = f\\). We have (by Choice) that \\(e\\) splits: there is a right inverse \\(e^{-1}\\) such that \\(e \circ e^{-1} = 1_X\\). Define \\(\bar{f} = e^{-1} \circ f\\) and we're done.
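+
+This argument is easy to animate on finite sets, where choosing the section really is just "pick one preimage per element" (the particular \\(P, E, X, f, e\\) are my own):
+
+```python
+# f : P -> X and a surjection e : E -> X; a section s of e gives
+# fbar = s . f with e . fbar = f, exhibiting the set P as projective.
+P, E, X = range(4), range(6), range(2)
+f = {p: p % 2 for p in P}
+e = {el: el % 2 for el in E}           # surjective onto X
+s = {}
+for preimage, value in e.items():      # choose one preimage per value;
+    s.setdefault(value, preimage)      # this choice is where AC lives
+fbar = {p: s[f[p]] for p in P}
+assert all(e[fbar[p]] == f[p] for p in P)   # e . fbar = f
+```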
+
+Any retract of a projective object is itself projective: I absolutely have to draw a diagram here. After a bit of confusion over left-composition happening as you go further to the right along the arrows, I spit out an answer.
+
+![Retract of a projective object is projective][retract]
+
+# Summary
+
+This section was more definitional than idea-heavy, so I think I've got my head around it for now. I do still need to practise my fluency with converting compositions of arrows on the diagrams into composition of arrows as algebraically notated - I still have to keep careful track of domain and codomain to make sure I don't get confused.
+
+[mono]: https://en.wikipedia.org/wiki/Monomorphism
+[epi]: https://en.wikipedia.org/wiki/Epimorphism
+
+[retract]: {{< baseurl >}}images/CategoryTheorySketches/RetractOfProjectiveIsProjective.jpg
diff --git a/hugo/content/awodey/2015-09-02-initial-generalised-elements.md b/hugo/content/awodey/2015-09-02-initial-generalised-elements.md
new file mode 100644
index 0000000..6077aed
--- /dev/null
+++ b/hugo/content/awodey/2015-09-02-initial-generalised-elements.md
@@ -0,0 +1,57 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-02T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/initial-generalised-elements/
+- /initial-generalised-elements/
+title: Initial, terminal, and generalised elements
+---
+
+This is pages 33 to 38 of Awodey.
+
+This bit looks really cool. A categorical way of expressing "this set has one element only": a terminal object. We have more examples of UMPs - these aren't quite of the same form as the previous ones.
+
+The proof that initial objects are unique up to unique isomorphism is easy - no need for me even to consider the diagram. On to the huge list of examples.
+
+Sets example: agreed. I actually asked about this (the fact that Set is not isomorphic to its dual) on Stack Exchange, and got basically this answer. Just a quick check that the one-point sets are indeed unique up to unique isomorphism, which they are.
+
+The category 0 is definitely initial in Cat; I agree that 1 is also terminal.
+
+In Groups: the initial objects are those from which there is precisely one homomorphism to any other group. Such a group needs to be the trivial group, since if \\(G\\) contains any other element, there are two distinct homomorphisms \\(G \to G\\): the identity, and the map sending everything to the identity element. The terminal objects: again that's just the trivial group, because for any nontrivial group \\(G\\), we can take two different homomorphisms \\(G \times G \to G\\), namely projection onto the first or second coordinate. In Rings, on the other hand, I agree that \\(\mathbb{Z}\\) is initial: the unit has to go somewhere, and that determines the image of all of \\(\mathbb{Z}\\).
+
+Boolean algebras are something I ought to have met before in Part II Logic and Sets, but it was not lectured. I think I'll come back to this if it becomes important, because I feel like I have a good idea for the moment of what an initial/terminal object is.
+
+Posets: an object is indeed initial iff it is the least element. We have that initial objects are unique up to unique isomorphism. What does that mean here? It means there is a unique arrow which has an inverse between these two elements. That is, it means the two elements are comparable and equal (by \\(a \leq b, b \leq a \Rightarrow a=b\\)). We therefore require there to be a *single* least element, if it is to be initial. What about the poset consisting of two identical copies of \\(\mathbb{N}\\), the elements of each copy incomparable to those of the other? There is no arrow from the 1 in the first \\(\mathbb{N}\\) into any element of the second \\(\mathbb{N}\\), so I'm happy that this is indeed not initial.
+
+Identity arrow is terminal in the slice category: everything has a unique morphism into this arrow, yes, because there is always a single commutative triangle between an arrow into \\(X\\) and the identity arrow on \\(X\\).
+
+Generalised elements, now. Hopefully this will be about ways of saying categorically that "this set has three elements", in the same way as "this set is terminal" was a categorical way of identifying a set with one element.
+
+"A set \\(A\\) has an arrow \\(f\\) into the initial object \\(A \to 0\\) just if it is itself initial." An initial object, remember, is one which has exactly one arrow into every other object, so it must have an arrow into \\(A\\); but the composition of \\(f\\) with that arrow must then be the identity on \\(0\\), since there is only one arrow \\(0 \to 0\\). Therefore \\(A, 0\\) are isomorphic and hence both initial.
+
+In monoids and groups, every object has a unique arrow to the initial object - that's trivial, since there is only one object. Unless it means objects in the category of monoids? The unique initial object is the trivial group, and it's also terminal. That makes more sense.
+
+Curses, I'm actually going to have to understand Boolean algebras now. I'll flick back to the definition and try to understand example 4 above. The definition looks an awful lot like the definitions of intersection and union, so I think I'll just think of them in that way. What's a filter? It's what we get when we infect some sets with filterness, and filterness propagates to "parents" and to "children of two parents" (intersections). An ultrafilter then is a filter where adding any other set infects everything.
+
+A filter \\(F\\) on \\(B\\) is an ultrafilter iff for every \\(b \in B\\), either \\(b \in F\\) or \\(b^C \in F\\) but not both: if \\(b \in F\\) then \\(b^C\\) can't be in \\(F\\) because then the empty set (that is, the intersection) is in the filter, so the filter is "everything". If \\(b \not \in F\\) then unless \\(b^C \in F\\), we could add \\(b\\) to \\(F\\) to obtain a strictly larger filter which still isn't everything, since \\(b^C\\) is still not in the augmented filter.
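+
+On a finite Boolean algebra this equivalence can be checked exhaustively. A sketch on the powerset of \\(\{0,1,2\}\\) (my own encoding, brute-forcing every proper filter and comparing "ultra" with "maximal"):
+
+```python
+from itertools import combinations
+
+top = frozenset({0, 1, 2})
+B = [frozenset(c) for r in range(4) for c in combinations(sorted(top), r)]
+
+def is_filter(F):
+    # proper filter: contains top, omits bottom, closed under meets,
+    # and upward closed
+    return (top in F and frozenset() not in F
+            and all(x & y in F for x in F for y in F)
+            and all(y in F for x in F for y in B if x <= y))
+
+filters = [set(c) for r in range(1, len(B) + 1)
+           for c in combinations(B, r) if is_filter(set(c))]
+
+for F in filters:
+    ultra = all(b in F or (top - b) in F for b in B)
+    maximal = not any(F < G for G in filters)
+    assert ultra == maximal
+```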
+
+Then I agree with the following stuff about "ultrafilters correspond to maps \\(B \to 2\\)". Not much more I can find to say there immediately.
+
+Ring homomorphisms \\(p\\) from a ring \\(A\\) into the initial ring \\(\mathbb{Z}\\) correspond with prime ideals: yep, since \\(p^{-1}(0)\\) is an ideal of \\(A\\) (being the kernel of \\(p\\)), which is prime because the quotient by it embeds into \\(\mathbb{Z}\\), an integral domain.
+
+From arrows from initial objects to arrows from terminal objects. The definition of a point of object \\(A\\) is a natural one, as is the warning that objects are not necessarily determined by points (this is in the case that structural information is bound up in the arrows, like in a monoid viewed as a category). How many points does a Boolean algebra have? The terminal Boolean algebra is the degenerate one in which \\(0 = 1\\); an arrow out of it must preserve both \\(0\\) and \\(1\\), which is impossible whenever the target has \\(0 \not = 1\\). That is, a nontrivial Boolean algebra has no points at all.
+
+"Generalised elements" is therefore a way of trying to capture all the information, which the terminal object does not necessarily. The example which follows is a summary of this idea. There is something there to prove: that \\(f = g\\) iff \\(fx = gx\\) for all arrows \\(x\\). This leaves me stuck for a bit - I'm reviewing possible ways to prove that two arrows are the same, but the only ways I can think of require some kind of invertibility. What does it even mean for two arrows to be equal? At this point I got horribly confused and asked StackExchange, where I was told that I don't need to worry about that - just let \\(x\\) be the identity arrow. (By the way, it seems that equality of arrows is in the first-order logic sense here.)
+
+Example 2.13: aha, a way of showing categories are not isomorphic. Always handy to have ways of doing this. The number of \\(\mathbf{2}\\)-elements from \\(\{0 \leq 1 \}\\) to \\(\{x \leq y, x \leq z \}\\): \\(0\\) may map to \\(x\\), then \\(1\\) may map to \\(x\\), \\(y\\) or \\(z\\); or \\(0\\) may map to \\(y\\) or \\(z\\), when \\(1\\) must map to the same, producing five such 2-elements. I'm not sure I see why this is invariant, but on the next page I see that will be explained, and it all seems quite satisfactory.
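+
+The count of five can be confirmed by enumeration (my own encoding; a \\(\mathbf{2}\\)-element of a poset \\(P\\) is just an ordered pair \\(p \leq q\\), and the three-element chain is my own choice of comparison object):
+
+```python
+from itertools import product
+
+# A 2-element of a poset P is a monotone map {0 <= 1} -> P,
+# i.e. an ordered pair (p, q) with p <= q in P.
+def two_elements(elements, leq):
+    return [(p, q) for p, q in product(elements, repeat=2) if (p, q) in leq]
+
+vee = two_elements('xyz', {('x', 'x'), ('y', 'y'), ('z', 'z'),
+                           ('x', 'y'), ('x', 'z')})
+chain = two_elements('xyz', {('x', 'x'), ('y', 'y'), ('z', 'z'),
+                             ('x', 'y'), ('x', 'z'), ('y', 'z')})
+print(len(vee), len(chain))   # 5 and 6: the counts differ
+```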
+
+Example 2.14: ah, the "figures of shape \\(T\\) in \\(A\\)" interpretation makes it actually intuitive why the numbers of \\(2\\)-elements of the posets above are what they are. The arrows from the free monoid on one generator suffice to distinguish homomorphisms? That is, if we know where all \\(\mathbb{N}\\)-shapes go from \\(M\\), can we entirely determine the homomorphism? Yes, we can. If we have access to the elements of the monoid, we can do better (by simply specifying the image of each element), but of course we don't have the elements.
+
+# Summary
+
+I might need a bit more exposure to these ideas before I understand them properly, but I suspect the exercises at the end of this chapter will help with that. This feels like the first really categorical thing that has happened: ways of cheating so that we can consider the elements of structures without actually needing any elements.
diff --git a/hugo/content/awodey/2015-09-08-products-in-category-theory.md b/hugo/content/awodey/2015-09-08-products-in-category-theory.md
new file mode 100644
index 0000000..de9f08c
--- /dev/null
+++ b/hugo/content/awodey/2015-09-08-products-in-category-theory.md
@@ -0,0 +1,49 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-08T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/products/
+- /products-in-category-theory/
+title: Products in category theory
+---
+
+This is on pages 38 through 48 of Awodey. I've been looking forward to it, because products are things which come up all over the place and I'd heard that they are one of the main examples of a categorical idea.
+
+I skim over the definition of the product in the category of sets, and go straight to the general definition. It seems natural enough: the product is defined (uniquely up to isomorphism) so that any pair of generalised elements of the two factors arises as the projections of a unique generalised element of the product.
+
+Proving that products are unique up to isomorphism presumably goes in the same way as the other UMP-proofs have gone. I draw out the general diagram, then because we need to show isomorphism of two objects, I replace the "free" (unspecified) test object with one of the two objects of interest. Then the uniqueness conditions make everything fall out correctly. Moral: if we have a mapping property along the lines of "we can find unique arrows to make this diagram commute" then everything is easy.
+
+![Diagrams for the UMP of the product][UMP of product]
+
+Then we introduce some notation for the product. "A pair of objects may have many different products in a category". Yes, I can see why that's plausible, because we could define \\(\langle a, b \rangle\\) to be the ordered pair \\((b, a)\\), for instance, without changing any of the properties we're interested in.
+
+"Something is gained by considering arrows out of products": I'm aware of currying, which when Awodey points it out, makes me think nothing is really gained after all. I think I'll wait for Chapter 6 before I pass judgement on that.
+
+Now for a huge list of examples. First there are two definitions of "ordered pair", which I called earlier (though not in this exact form). Then we see the usual products of structured sets, with which I'm already very familiar.
+
+I'll verify the UMP for the product of two categories: let \\(x_1: X \to C, x_2: X \to D\\) be generalised elements. We want there to be a unique arrow \\(u : X \to (C \times D)\\) with \\(p_1 u = x_1, p_2 u = x_2\\), where \\(p_1, p_2\\) are the projection functors. Certainly there is an arrow given by stitching \\(x_1, x_2\\) together componentwise; is there another? Clearly not. Suppose \\(u_2: X \to (C \times D)\\) were another such arrow. If \\(u_2(x) = (c, d)\\) then \\(p_1 u_2(x) = c\\); but \\(p_1 u_2 = x_1\\), so \\(c = x_1(x)\\), and \\(u_2\\) is therefore specified on all generalised elements already. That argument is not very formal, and I don't really see how to formalise it properly.
+
+The product of two groups according to this product construction is then self-evidently the product group we know and love. The product of two posets is also manifestly a poset, being a category where any pair of objects has at most one arrow between them. (Indeed, if there were two, we could project down to one of the posets to obtain two arrows between two elements.)
+
+The greatest lower bound example takes me a while to get my head around. The UMP for the product says: \\(p \times q\\) is an element with \\(p \times q \leq p\\) and \\(p \times q \leq q\\), such that for all \\(x\\), if \\(x \leq p\\) and \\(x \leq q\\), then \\(x \leq p \times q\\). That is indeed the greatest lower bound, but it took me ages to work this out.
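+
+To convince myself, I brute-force the poset UMP in a few lines of Python (the divisibility poset on the divisors of 12 is my own choice, not from the book):
+
```python
# Product in a poset category = greatest lower bound.
# Poset: divisors of 12, with an arrow x -> y exactly when x divides y.
elements = [1, 2, 3, 4, 6, 12]

def leq(x, y):
    return y % x == 0  # x <= y iff x divides y

def product(p, q):
    """The product p x q: a lower bound of p and q through which
    every other lower bound factors, i.e. the greatest lower bound."""
    lower_bounds = [z for z in elements if leq(z, p) and leq(z, q)]
    candidates = [m for m in lower_bounds
                  if all(leq(x, m) for x in lower_bounds)]
    assert len(candidates) == 1  # in a poset, products are unique outright
    return candidates[0]

assert product(4, 6) == 2    # the gcd, as expected
assert product(2, 3) == 1
assert product(12, 12) == 12
```
+
+(In this category the product really is the gcd, which is a pleasing sanity check.)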
+
+I work through the topological spaces example without thinking too hard about it. It's not clear to me that Awodey has proved that the uniqueness part of the UMP is satisfied, but I'll just accept it and move on.
+
+Type theory example: I've already met the lambda calculus, though never studied it in any detail. I skim over this, pausing at the equation "\\(\lambda x . c x = c\\) if no \\(x\\) is in \\(c\\)" - is this a typo for \\(\lambda x . c = c\\)? No, stupid of me - \\(c\\) represents a function, and the function \\(x \mapsto c(x)\\) is the same as the function \\(c\\). Then the category of types is indeed a category, and I'm happy with the proof that it has products. This time Awodey does certainly verify the uniqueness part of the UMP, by simply expanding everything and reducing it again.
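+
+The eta rule is easy to sanity-check in Python, where "no \\(x\\) is in \\(c\\)" corresponds to wrapping an existing function in a fresh lambda:
+
```python
# Eta: lambda x . c x = c, when x does not occur in c.
# In Python terms: eta-expanding a function changes nothing pointwise.
c = abs
eta_expanded = lambda x: c(x)

assert all(eta_expanded(v) == c(v) for v in range(-5, 6))
```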
+
+A long remark on the Curry-Howard correspondence. Clearly the product here is conjunction - skimming down I see that Awodey says it is indeed a product (or, at least, that there is a functor from types to proofs in which products have conjunctions as their images). Very pretty.
+
+"Categories with products": supposing every pair of objects has a product, we define a functor taking every pair to its product. That's intuitive in the sense of "structured sets", since I'm very familiar with that product construction. What does it mean in the poset case? Recall that the product was the greatest lower bound. A poset where every pair of elements has a greatest lower bound is actually a totally ordered set, and the greatest lower bound is the least of the two elements, so that also makes sense. I think I'll skip over the UMPs for \\(n\\)-ary products, but the idea of a terminal object as a nullary product is pretty neat. So that's why the empty product of real numbers is 1.
+
+
+
+# Summary
+
+As seems to be a general theme, I understand the syntax of products, and I can recognise some of them when they turn up, but have no real intuition for how they work. There will be more examples at the end of the chapter, which should clear things up a bit.
+
+[UMP of product]: {{< baseurl >}}images/CategoryTheorySketches/UMPofProduct.jpg
diff --git a/hugo/content/awodey/2015-09-10-homsets-and-exercises.md b/hugo/content/awodey/2015-09-10-homsets-and-exercises.md
new file mode 100644
index 0000000..c564df6
--- /dev/null
+++ b/hugo/content/awodey/2015-09-10-homsets-and-exercises.md
@@ -0,0 +1,63 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-10T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/homsets-and-exercises/
+- /homsets-and-exercises/
+title: Hom-sets and exercises
+---
+
+This is on pages 48 through 52 of Awodey, covering the hom-sets section and the exercises at the end of Chapter 2. Only eight more chapters after this, and I imagine they'll be more difficult - I should probably step up the speed at which I'm doing this.
+
+Awodey assumes we are working with locally small categories - recall that in such categories, given any two objects, there is a bona fide set of all arrows between those objects. That is, all the hom-sets are really sets.
+
+We see the idea that any arrow induces a function on the hom-sets by composing on the left. Awodey doesn't mention currying here, but that seems to be the same phenomenon. Why is the map \\(\phi: g \mapsto (f \mapsto g \circ f)\\) a functor from \\(C\\) to the category of sets? I agree with the step-by-step proof Awodey gives, but I don't really have intuition for it. It feels a bit misleading to me that this is thought of as a functor into the category of sets, because that category contains many, many more things than we're actually interested in. It's like saying \\(f: \mathbb{N} \to \mathbb{R}\\) by \\(n \mapsto n\\), when you are only ever interested in the fact that \\(f\\) takes integer values. I'm sure it'll become more natural later when we look at representable functors.
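+
+Here's how I picture the post-composition functor concretely in Sets; the particular sets and functions below are mine, chosen small enough to check functoriality pointwise:
+
```python
# Hom(A, -): an arrow g: B -> C induces g_*: Hom(A, B) -> Hom(A, C)
# by post-composition, f |-> g . f.
A = [0, 1]

def compose(g, f):
    return lambda x: g(f(x))

def post(g):                 # the action of Hom(A, -) on the arrow g
    return lambda f: compose(g, f)

f = lambda a: a + 1          # an element of Hom(A, B), with B = {0, 1, 2}
g = lambda b: b % 2          # an arrow B -> C, with C = {0, 1}
h = lambda c: 1 - c          # an arrow C -> C

# Functoriality: (h . g)_* = h_* . g_*, checked pointwise on A.
lhs = post(compose(h, g))(f)
rhs = post(h)(post(g)(f))
assert all(lhs(a) == rhs(a) for a in A)
```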
+
+Then an alternative explanation of the product construction, as a way of exploding an arrow \\(X \to P\\) into two child arrows \\(X \to A, X \to B\\). A diagram is a product iff that explosion is always an isomorphism. Then a functor preserves binary products if it… preserves binary products. I had to draw out a diagram to convince myself that \\(F\\) preserves products iff \\(F(A \times B) \cong FA \times FB\\) canonically, but I'm satisfied with it.
+
+
+# Exercises
+
+Exercises 1 and 2 I've [done already][epis-monos]. The uniqueness of inverses is easy by the usual group-theoretic argument: \\(fg = f g'\\) means \\(gfg = gf g'\\), so \\(g = g'\\) by cancelling the \\(g f = 1\\) term.
+
+The composition of isos is an iso: easy, since \\(f^{-1} \circ g^{-1} = (g \circ f)^{-1}\\). \\(g \circ f\\) monic implies \\(f\\) monic (and dually, \\(g \circ f\\) epic implies \\(g\\) epic): follows immediately by just writing out the definitions. The counterexample to "\\(g \circ f\\) monic implies \\(g\\) monic" can be found in the category of sets: we want an injective composition where the second function is not injective. Easy: take \\(\{1 \} \to \mathbb{N}\\) and then \\(\mathbb{N} \to \{ 1 \}\\). The composition is the identity, but the second function is very non-injective.
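+
+That counterexample is small enough to check mechanically (a throwaway sketch, with `range(10)` standing in for \\(\mathbb{N}\\)):
+
```python
# g . f monic does not make g monic: {1} -> N -> {1}.
f = lambda x: 0        # include the one-point set into the naturals
g = lambda n: 1        # collapse the naturals back to a point

def injective_on(fn, domain):
    images = [fn(x) for x in domain]
    return len(images) == len(set(images))

one_point = [1]
naturals = range(10)   # finite stand-in for the naturals

assert injective_on(lambda x: g(f(x)), one_point)  # the composite is injective
assert not injective_on(g, naturals)               # but g is very non-injective
```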
+
+Exercise 5: a) and d) are equivalent by definition of "iso" and "split mono/epi". Isos are monic and epic, as we've already seen in the text (because we can cancel \\(f\\) in \\(x f = x' f\\), for instance), so we have that a) implies b) and c). If \\(f\\) is a mono and a split epi, then it has a right-inverse \\(g\\) such that \\(fg = 1\\); we claim that \\(g\\) is also a left-inverse. Indeed, \\(f g f = f 1\\) so \\(g f = 1\\) by mono-ness. Therefore b) implies a). Likewise c) implies a).
+
+Exercise 6: Let \\(h: G \to H\\) be a monic graph hom. Let \\(v_1, v_2: 1 \to G\\) be homs from the graph with one vertex and no edges. Then \\(h v_1 = h v_2\\) implies \\(v_1 = v_2\\), so in fact \\(h\\) is injective. Likewise with edges, using the graph with one edge and two vertices, and the graph with one edge and one vertex. Conversely, suppose \\(h: G \to H\\) is not monic. Then there are \\(v_1: F_1 \to G, v_2: F_2 \to G\\) with \\(h v_1 = h v_2\\) but \\(v_1 \not = v_2\\). Since \\(h v_1 = h v_2\\), we must have that "their types match": \\(F_1 = F_2\\). We will denote that by \\(F\\). Then there is some vertex or edge on which \\(v_1\\) and \\(v_2\\) have different effects. If it's a vertex: then \\(v_1(v) \not = v_2(v)\\) for that vertex \\(v\\), but \\(h v_1 (v) = h v_2(v)\\), so \\(h\\) can't be injective. Likewise if it's an edge.
+
+Exercises 7 and 8 I've [done already][epis-monos].
+
+Exercise 9: the epis among posets are the surjections-on-elements. Let \\(f: P \to Q\\) be an epi of posets, so \\(x f = y f\\) implies \\(x = y\\). Suppose \\(f\\) is not surjective, so there is \\(q \in Q\\) it doesn't hit. Then let \\(x, y: Q \to \{ 1, 2 \}\\), disagreeing at \\(q\\). We have \\(x f = y f\\) so \\(x, y\\) must agree at \\(q\\). This is a contradiction. Conversely, any surjection-on-elements is an epi, because if \\(x(q) \not = y(q)\\) then we may write \\(q = f(p)\\) for some \\(p\\), whence \\(x f(p) \not = y f(p)\\). The one-element poset is projective: let \\(s: X \to \{1\}\\) be an epi (surjective), and \\(\phi: P \to \{ 1 \}\\). Then \\(X\\) has an element, \\(u\\) say, since \\(s\\) is surjective. Then we may lift \\(\phi\\) over \\(s\\) by letting \\(\bar{\phi}: p \mapsto u\\), so that the composite \\(s \circ \bar{\phi} = \phi\\). (Quick check in my mind that this works for \\(P\\) the empty poset - it does.)
+
+Exercise 10: Sets (implemented as discrete posets) are projective in the category of posets: the one-element poset is projective, and retracts of projective objects are projective. Let \\(A\\) be an arbitrary discrete poset. Define \\(r: 1 \to A\\) by selecting an element, and \\(s: A \to \{1\}\\). Then \\(A\\) is a retract of \\(\{1\}\\), so is projective. Afterwards, I looked in the solutions, and Awodey's proof is much more concrete than this. I [asked on Stack Exchange][SE question] whether my proof was valid, and the great Qiaochu Yuan himself pointed out that I had mixed up what "retract" meant, and had actually showed that \\(\{1\}\\) was a retract of \\(A\\). Back to the drawing board.
+
+Exercise 10 revisited: Take a poset \\(P\\), and let \\(f: X \to P\\) be an epi - that is, surjection. Let \\(A\\) be a discrete poset and \\(\phi: A \to P\\) an arrow (monotone map). For each \\(a \in A\\) we have \\(\phi(a)\\) appearing in some form in \\(X\\); pick any inverse image \\(x_a\\) such that \\(f(x_a) = \phi(a)\\). I claim that the function \\(a \mapsto x_a\\) is monotone (whence we're done). Indeed, since \\(A\\) is discrete, \\(a \leq b\\) only when \\(a = b\\), in which case \\(x_a = x_b\\).
+
+Example of a non-projective poset: let \\(A = P\\) be the poset \\(0 \leq 1 \leq 2\\), and let \\(i:A \to P\\) be the identity. Let \\(E\\) be the poset on the same elements with only \\(0 \leq 2, 1 \leq 2\\), with the obvious element-wise map \\(E \to P\\) as the epi. Then \\(i\\) doesn't lift across that epi, because \\(0_A\\) must map to \\(0_E\\) and \\(1_A\\) to \\(1_E\\), but \\(0 \leq_A 1\\) and \\(0 \not \leq_E 1\\).
+
+Now, all projective posets are discrete: suppose the comparison \\(a < b\\) exists in the poset \\(P\\), and let \\(X\\) be \\(P\\) but where we break that comparison. Let the epi \\(X \to P\\) be the obvious element-wise map. Then the identity \\(\text{id}: P \to P\\) doesn't lift across that epi.
+
+Exercise 11: Of course, the first thing is a diagram. An initial object in \\(A-\mathbf{Mon}\\) is \\((I, i)\\) such that there is precisely one arrow from \\((I, i)\\) to any other object: that is, precisely one commutative triangle exists. A free monoid \\(M(A)\\) on \\(A\\) is such that there is \\(j: A \to \vert M(A) \vert\\), and for any function \\(f: A \to \vert N \vert\\) there is a unique monoid hom \\(\bar{f}: M(A) \to N\\) with \\(\vert \bar{f} \vert \circ j = f\\). If \\((I, i)\\) is initial, it is therefore clear that \\(I\\) has the UMP of the free monoid on \\(A\\), just by looking at the diagram. Initial objects are unique up to isomorphism, and free monoids are too, so we automatically have the converse.
+
+Exercise 12 I did in my head to my satisfaction while I was following the text.
+
+Exercise 13: I wrote out several lines for this, amounting to showing that the unique \\(x: (A \times B) \times C \to A \times (B \times C)\\) guaranteed by the UMP of \\(A \times (B \times C)\\) is in fact an iso. The symbol shunting isn't very enlightening, so I won't reproduce it here.
+
+Exercise 14: the UMP for an \\(I\\)-indexed product should be: \\(P\\) with arrows \\(\{ (p_i: P \to A_i) : i \in I \}\\) is a product iff for every object \\(X\\) with collections \\(\{ (x_i: X \to A_i) : i \in I \}\\) of arrows, there is a unique \\(x : X \to P\\) with \\(p_i \circ x = x_i\\) for each \\(i \in I\\). Then in the category of sets, the product of \\(X\\) over \\(i \in I\\) satisfies that for all \\(T\\) with \\( \{ (t_i: T \to X): i \in I \}\\) arrows, there is a unique \\(t: T \to P\\) with \\(p_i \circ t = t_i\\). If we let \\(P = \{ f: I \to X \} = X^I\\), we do get this result: let \\(t(\tau) : i \mapsto t_i(\tau)\\). This works if \\(p_i \circ (\tau \mapsto (i \mapsto t_i(\tau))) = (\tau \mapsto t_i(\tau))\\), so we just need to define the projection \\(p_i: X^I \to X\\) by \\(p_i(f) = f(i)\\). I think that makes sense.
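+
+To make sure I believe the projection definition, a quick concrete check in Python, with functions \\(I \to X\\) represented as dicts (the indexing set and the family of arrows below are invented for the test):
+
```python
# I-indexed product of copies of X in Set: the function space X^I,
# with projections p_i(f) = f(i).
I = ["a", "b", "c"]

def p(i):
    return lambda f: f[i]          # represent f: I -> X as a dict

# A family t_i: T -> X on a test object T, with X = {0, 1}.
T = range(4)
t_family = {"a": lambda tau: tau % 2,
            "b": lambda tau: 0,
            "c": lambda tau: (tau + 1) % 2}

def t(tau):                        # the mediating map T -> X^I
    return {i: t_family[i](tau) for i in I}

# p_i . t = t_i for every i, as the UMP demands.
assert all(p(i)(t(tau)) == t_family[i](tau) for i in I for tau in T)
```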
+
+Exercise 15: I first draw a diagram. \\(\mathbb{C}_{A, B}\\) has a terminal object iff there is some \\((X, x_1, x_2)\\) such that for all \\((Y, y_1, y_2)\\), there is precisely one arrow \\((Y, y_1, y_2) \to (X, x_1, x_2)\\). \\(A\\) and \\(B\\) have a product in \\(\mathbb{C}\\) iff there is \\(P\\) and \\(p_1: P \to A, p_2: P \to B\\) such that for every \\(x_1: X \to A, x_2: X \to B\\) there is unique \\(x: X \to P\\) with the appropriate diagram commuting. If we let \\((Y, y_1, y_2) = (P, p_1, p_2)\\) then it becomes clear that if \\(A, B\\) have a product then \\(\mathbb{C}_{A, B}\\) has a terminal object - namely \\((Y, y_1, y_2)\\). Conversely, if \\(\mathbb{C}_{A, B}\\) has a terminal object \\((Y, y_1, y_2)\\), then our unique arrow \\(x: X \to Y\\) in \\(\mathbb{C}_{A, B}\\) corresponds to a unique product arrow in \\(\mathbb{C}\\), so the UMP for products is satisfied.
+
+Exercise 16: Is this really as easy as it looks? The product functor takes \\(a: A, b: B \mapsto \langle a, b \rangle : A \times B\\). Maybe I've misunderstood something, but I can't see that it's any harder than that. There's a functor \\(X \mapsto (A \to X)\\), given by coslicing out by \\(A\\). I've squinted at the answers Awodey supplies, and this isn't an exercise he gives. I'll just shut my eyes and pretend this exercise didn't exist.
+
+Exercise 17: The given morphism is indeed monic, because \\(1_A x = 1_A y\\) implies \\(x = y\\), and \\(\Gamma(f)x = \Gamma(f)y\\) implies \\(1_A x = 1_A y\\) because of the projection we may perform on the pair \\(\langle 1_A, f \rangle\\). \\(\Gamma\\) is a functor from sets to relations, clearly, but we've already done that in Section 1 question 1b).
+
+Exercise 18: It would really help if Awodey had told us what a representable functor was, rather than just giving an example. Is he asking us to show that "the representable functor of Mon is the forgetful functor"? I'm going to hope that I can just drop Mon in for the category C in section 2.7. If we let \\(A\\) be the trivial monoid, then \\(\text{Hom}(A, -)\\) is a functor taking a monoid \\(M\\) to its set of underlying elements (each identified with a different hom \\(\{ 1 \} \to M\\)) - but hang on, there's only one such hom, so that line is nonsense. It would work in Sets, but not in Mon. We need \\(\text{Hom}(M, N)\\) to be isomorphic in some way to the set \\(\vert N \vert\\), and I just don't see how that's possible. Infuriatingly, this exercise doesn't have a solution in the answers section. I ended up looking this up, and the trick is to pick \\(M = \mathbb{N}\\). Then the homomorphisms \\(\phi: \mathbb{N} \to N\\) precisely determine elements of \\(N\\), by \\( \phi(1)\\). So that proves the result. Why did I not think of \\(\mathbb{N}\\) instead of \\(\{ 1 \}\\)? Probably just lack of experience.
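+
+The \\(\mathbb{N}\\) trick is easy to demonstrate with a small monoid; I'll use \\(\mathbb{Z}/4\\) under addition (my own choice of example):
+
```python
# Monoid homs (N, +, 0) -> M are freely determined by the image of 1:
# phi(n) is phi(1) combined with itself n times.
# So Hom(N, M) is in bijection with the underlying set |M|.
M = range(4)                       # Z/4 under addition
op = lambda a, b: (a + b) % 4
unit = 0

def hom_from_generator(m):
    """The unique monoid hom N -> M sending 1 to m."""
    def phi(n):
        acc = unit
        for _ in range(n):
            acc = op(acc, m)
        return acc
    return phi

# Each element m is recovered as phi(1), so m |-> phi hits every hom exactly once.
assert all(hom_from_generator(m)(1) == m for m in M)

# And each phi really is a homomorphism: phi(a + b) = phi(a) op phi(b).
phi = hom_from_generator(3)
assert all(phi(a + b) == op(phi(a), phi(b))
           for a in range(5) for b in range(5))
```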
+
+[epis-monos]: {% post_url 2015-09-02-epis-monos %}
+[SE question]: http://math.stackexchange.com/q/1429746/259262
diff --git a/hugo/content/awodey/2015-09-15-duality-in-category-theory.md b/hugo/content/awodey/2015-09-15-duality-in-category-theory.md
new file mode 100644
index 0000000..9c01581
--- /dev/null
+++ b/hugo/content/awodey/2015-09-15-duality-in-category-theory.md
@@ -0,0 +1,59 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-15T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/duality/
+- /duality-in-category-theory/
+title: Duality in category theory
+---
+
+I don't have strong preconceptions about this chapter. The previous chapter I knew would contain general constructions, and I was looking forward to that, but this one is more unfamiliar to me. I'll be doing pages 53 through 61 of Awodey here - coproducts.
+
+The first bits are stuff I recognise from when I flicked through Categories for the Working Mathematician, I think. Or something. Anyway, I recognise the notion of formal duality and the very-closely-related semantic duality. (Like the difference between ["semantic truth"][semantic truth] and ["syntactic truth"][syntactic truth] in first-order logic.) It's probably a horrible sin to say it, but both of these are just obvious, once they've been pointed out.
+
+Now the definition of a coproduct. The notation \\(A+B\\) is extremely suggestive, and I'd have preferred to try and work out what the coproduct was without that hint. \\(z_1: A \to Z\\) and \\(z_2: B \to Z\\) are "ways of selecting \\(A\\)- and \\(B\\)-shaped subsets of any object" (yes, that's not fully general, but for intuition I'll pretend I'm in a concrete category). So for any \\(Z\\), and for any way of selecting an \\(A\\)-shaped and a \\(B\\)-shaped subset of \\(Z\\), we can find a unique way of selecting an \\(A+B\\)-shaped subset according to the commuting-diagram condition. I'm still a bit unclear as to what that all means, so I whizz down to the Sets example below.
+
+In Sets, if we can find an \\(A\\)-shaped subset of some set \\(Z\\), and a \\(B\\)-shaped subset, then we can find a subset which is shaped like the disjoint union of \\(A\\) and \\(B\\) in a unique way. (Note that our arrows need not be injective, which is why the \\(A+B\\)-shaped subset exists. For instance, if \\(A = \{1\}, B = \{1\}\\), and our \\(A\\)-shaped subset and \\(B\\)-shaped subset of \\(\{a,b \}\\) were both \\(\{a\}\\), then the \\(A+B\\)-shaped subset would be simply \\(\{a \}\\). Both selections of shape end up pointing at the same element.)
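+
+The tagged disjoint union and its copairing map are easy to write down; here is the exact \\(A = B = \{1\}\\) situation from above, in Python:
+
```python
# Coproduct in Set: tagged disjoint union with injections i1, i2.
# For any z1: A -> Z, z2: B -> Z there is a unique copairing A + B -> Z.
A, B = [1], [1]
i1 = lambda a: ("A", a)
i2 = lambda b: ("B", b)
coproduct = [i1(a) for a in A] + [i2(b) for b in B]

def copair(z1, z2):
    def u(tagged):
        tag, x = tagged
        return z1(x) if tag == "A" else z2(x)
    return u

# Both shape-selections point at the same element 'a' of Z = {'a', 'b'}:
z1 = lambda a: "a"
z2 = lambda b: "a"
u = copair(z1, z2)

assert len(coproduct) == 2                       # A + B keeps the copies apart
assert [u(x) for x in coproduct] == ["a", "a"]   # but u folds them together
assert all(u(i1(a)) == z1(a) for a in A)         # u . i1 = z1
assert all(u(i2(b)) == z2(b) for b in B)         # u . i2 = z2
```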
+
+This leads me to wonder: what about in the category of sets with injections as arrows? Now it seems that the coproduct is only defined on disjoint sets, because the arrows \\(z_1, z_2\\) which pick out \\(A\\)- and \\(B\\)-shaped subsets now need to have distinct images in \\(Z\\) so that the coproduct may pick out an \\(A \cup B\\)-shaped subset.
+
+The free-monoids coproduct: given any "co-test object" \\(N\\), and any two monoid homomorphisms selecting subsets of \\(N\\) corresponding to the shapes of \\(M(A)\\) and \\(M(B)\\), there should be a natural way to define a shape corresponding to some kind of union. The shape of \\(M(A)\\) corresponds exactly to "where we send the generators", so we can see intuitively that \\(M(A) + M(B) = M(A+B)\\). This is very much not a proof, and I'll make sure to check the diagrammatic proof from the book first; that proof is fine with me. "Forgetful functor preserves products -> a structure-imposing functor preserves coproducts" has a certain appeal to it, but I don't quickly see a sense in which the structure can be imposed in general.
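+
+The "generators tell you everything" intuition, as a sketch in Python (the generator names and the target monoid of strings under concatenation are my own invention):
+
```python
# M(A) + M(B) = M(A + B): words over the disjoint union of the generators.
# Homs z1: M(A) -> N, z2: M(B) -> N are fixed by where the generators go,
# and the copairing evaluates a mixed word letter by letter.
send = {"a1": "x", "a2": "y",   # where z1 sends the generators of A
        "b1": "z"}              # where z2 sends the generator of B

def copair(word):
    out = ""                    # the unit of N (strings under concatenation)
    for letter in word:
        out += send[letter]
    return out

assert copair(["a1", "b1", "a2"]) == "xzy"  # letters interleave freely
assert copair([]) == ""                     # the empty word goes to the unit
```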
+
+Coproduct of two topological spaces: given a co-test topological space \\(X\\), and two continuous functions into \\(X\\) which pick out subspaces of shape \\(A\\) and \\(B\\), we want to find a space \\(P\\) such that for all \\(A\\)- and \\(B\\)-shape subspaces of \\(P\\), there is a unique \\(P\\)-shaped subspace of \\(X\\) composed of the same shapes as the \\(A\\)- and \\(B\\)-subspaces. Then it's fairly clear that \\(P\\) should be the disjoint union of \\(A\\) and \\(B\\) (compare with the fact that the forgetful functor to Set again yields the correct Set coproduct), but what topology? Surely it should be the "product" given by sets of the form (open in \\(A\\), open in \\(B\\)), since \\(A\\)-shaped subspaces of this will map directly into \\(A\\)-shaped subspaces of the co-test space, etc.
+
+Coproducts, therefore, are a way of putting two things next to each other, and this is pointed out in the next paragraph, where the coproduct of two posets is the "disjoint union" of them. The coproduct of two rooted posets is what I'd have guessed, as well, given that we need to make sure the coproduct is also rooted.
+
+Coproduct of two elements in a poset: that's easy by duality, since the opposite category of a poset is just the same poset with the opposite ordering. The product is the greatest lower bound, so the coproduct must be the least upper bound. How does this square with the idea of "put the two elements side by side"? This category is not concrete, so we need to work out what we mean by "an element of shape \\(A\\)". Since an arrow \\(A \to X\\) is precisely the fact that \\(A \leq X\\), we have that for every element \\(y\\) of the poset, all elements which compare less than or equal to that element have "images of shape \\(y\\)" in \\(X\\). Therefore, the coproduct condition says "for every co-test object \\(X\\), for every pair of images of shape \\(A, B\\) in \\(X\\), there is an image of shape \\(A+B\\) in \\(X\\) which restricts to the images of shape \\(A\\) and \\(B\\) respectively". With a bit of mental gymnastics, that does correspond to \\(A+B\\) being the least upper bound.
+
+Coproduct of two formulae in the category of proofs: an arrow from one formula to another is a deduction. An "image of shape \\(A\\) in \\(X\\)" - an arrow \\(A \to X\\) - is therefore a statement that we can deduce \\(X\\) from \\(A\\). We want a formula \\(A+B\\) such that for any co-test formula \\(X\\), and for any images of \\(A, B\\) in \\(X\\), there is a unique image of \\(A+B\\) in \\(X\\) which respects the shapes of \\(A\\) and \\(B\\) in \\(A+B\\). Hang on - at this point I realise that the opposite category of the category of proofs is the "category of negated proofs", and the "opposite category" functor is simply taking "not" of everything. That's because the contrapositive of a statement is equivalent to the statement. Therefore since the product is the "and", the coproduct should be the "or" (which is the composition of "not-and-not", or "dual-product-dual"). I'll keep going anyway.
+
+We need to be able to prove \\(A+B\\) from \\(A\\), and to prove \\(A+B\\) from \\(B\\). That's already mighty suggestive. Moreover, if there's a proof of \\(X\\) from \\(A\\), there needs to be a unique corresponding proof of \\(X\\) from \\(A+B\\). That's enough for my intuition to say "this is OR, beyond all reasonable doubt".
+
+I now look at the book's explanation of this. Of course, I omitted to perform an actual proof that OR formed the coproduct, and that bites me here: identical arrows must yield identical proofs, but any proof which goes via "a OR b" must be different from one which ignores b. Memo: need to prove the intuitions.
+
+Coproduct of two monoids. Ah, this is a cunning idea, viewing a monoid as a sub-monoid of its free monoid. We already know how to take the coproduct of two free monoids, and we can do the equiv-rel trick that worked with the category of proofs above. Is it possible that in general we do coproducts by moving into a free construction and then quotienting back down? I'm struggling to see how free posets might work, so I'll shelve that idea for now.
+
+I went to sleep in between the previous paragraph and this one, so I'm now in a position to write out a proper proof that the coproduct of two monoids is as stated. I did it without prompting in a very concrete way: given a word('s equivalence class) in \\(M(\vert A \vert + \vert B \vert)\\), and two maps \\(z_1: A \to N\\) and \\(z_2: B \to N\\), we send the letter \\(a \in \vert A \vert\\) to \\(z_1(a)\\), etc. The book gives a more abstract way of doing it. I don't feel like I could come up with that myself in a hurry without a better categorical idea of "quotient by an equivalence relation". At least this way gave me a good feel for why we needed to do the quotient: otherwise our \\(\phi: a \mapsto z_1(a)\\) could have been replaced by \\(a \mapsto u_A z_1(a)\\). The map is unique in this setting. Indeed, suppose \\(\phi([w]) \not = \phi_2([w])\\) for some \\(w\\). We may assume wlog that \\(w\\) is just one character long, since any longer and we could use that \\(\phi, \phi_2\\) are "homomorphic" to find a character where they differed. (That's where we need that we're working with equivalence classes.) Wlog \\([w] = [w_1]\\). Then \\(\phi([w_1]) \not = \phi_2([w_1])\\); but that means \\(z_1(w_1) \not = z_1(w_1)\\), a contradiction, because both maps need to commute with \\(z_1\\).
+
+I make sure to note that the forgetful functor Mon to Sets doesn't preserve coproducts.
+
+Aha! An example I've seen recently in a different context. (Oops, I've glanced to the bottom of the page, Proposition 3.11. I'll wait til I actually get there.)
+
+I'm confused by the "in effect we have pairs of elements" idea. What about a word like \\(a_1 a_2 b_1 b_2 b_3\\)? Then we don't get a pair of elements containing \\(b_3\\). Ah, I see - Awodey is implicitly pairing with \\(0_A\\) in that example. I'd have preferred to have that spelled out. Now I do see that the underlying set of the coproduct should be the same as that of the product, and that the given coproduct is indeed a coproduct.
+
+Now my "aha" moment from earlier. I've seen this fact referenced [a few days ago][stack exchange] on StackExchange. I can follow the proof, and I see where it relies on abelian-ness, but don't really see much of an idea behind it. The obvious arrows \\(A \to A\\) and \\(B \to B\\) are picked, but it seems to be something of a trick to pick the zero homomorphism \\(A \to B\\). In hindsight, it's the only homomorphism we could have picked, but it would have taken me a while to think of it.
+
+I skim over the bit about abelian categories, and over the various dual notions to products (like "coproducts are unique up to isomorphism", "the empty coproduct is an initial object" etc).
+
+# Summary
+
+This was a bit less intuitive than the idea of the product. Instead of having "finding \\(Z\\)-shaped things in \\(A\\) and \\(B\\) means we can find a \\(Z\\)-shaped thing in the product", we have "finding \\(A\\)- and \\(B\\)-shaped things in \\(Z\\) means we can find a coproduct-shaped thing in \\(Z\\) too", but it took me a while to fix this in my mind, and it still seems to me a little less easy for something to be a coproduct: we've been bothering with equivalence classes more.
+
+[semantic truth]: https://en.wikipedia.org/wiki/Semantic_theory_of_truth
+[syntactic truth]: https://en.wikipedia.org/wiki/Logical_consequence
+[stack exchange]: https://math.stackexchange.com/a/1430755/259262
diff --git a/hugo/content/awodey/2015-09-16-equalisers.md b/hugo/content/awodey/2015-09-16-equalisers.md
new file mode 100644
index 0000000..cc4b54d
--- /dev/null
+++ b/hugo/content/awodey/2015-09-16-equalisers.md
@@ -0,0 +1,53 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-16T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/equalisers/
+- /equalisers/
+title: Equalisers
+---
+
+This is pages 62 through 71 of Awodey, on [equalisers] and coequalisers.
+
+The first paragraph is really quite exciting. I can see that there would be a common generalisation of kernels and varieties - they're the same idea that lets us find complementary functions and particular integrals of linear differential equations, for instance. But the axiom of separation ("subset selection") as well? Now that's intriguing.
+
+We are given the definition of an equaliser: given a pair of arrows with the same domain and codomain, it's an arrow \\(e\\) which may feed into that domain to make the two arrows be "the same according to \\(e\\)".
+
+Let's look at the example of \\(f, g: \mathbb{R}^2 \to \mathbb{R}\\) with \\(f(x, y) = x^2+y^2, g(x,y) = 1\\). I'll try and find the equaliser (in Top) myself. It'll be a topological space \\(E\\) and a continuous function \\(e: E \to \mathbb{R}^2\\) such that \\(f \circ e = g \circ e\\). That is, such that \\(f \circ e = 1\\). That makes it easy: were it not for the "universal" property, \\(E\\) could be anything which has a continuous function mapping it into the unit circle in \\(\mathbb{R}^2\\), and \\(e\\) would be that mapping. (I'm beginning to see where the axiom of subset selection might come in.) But if we took the space \\(E = \{ (1, 0) \}\\) and the inclusion mapping as \\(e\\), this would fail the uniqueness property because there's more than one way we can continuously map a single point into that unit circle. In order to make sure everything is specified uniquely, we'll want \\(E\\) to be the entire unit circle and its inherited topology. Ah, Awodey points out that in this case, the work is easy because the inclusion is monic and so uniqueness is automatic.
+
+Let's do the same thing for Set. The equaliser of \\(f, g: A \to B\\) is a function \\(e: E \to A\\) such that \\(f \circ e = g \circ e\\). We need to make sure \\(f, g\\) only ever see the elements where they're equal after the \\(e\\)-filter has been applied to them, so \\(e\\) must only map into the set \\(\{a \in A : f(a) = g(a) \}\\). It should be easy to show that the equaliser is actually that set with the obvious inclusion into \\(A\\), and I look at the book to see that it is indeed so.
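+
+And indeed the Sets recipe is a couple of lines of Python (with a discrete stand-in for the circle example: \\(f(x) = x^2\\) and \\(g(x) = 4\\) on a range of integers, my own choice):
+
```python
# Equaliser of f, g: A -> B in Set: the subset where they agree,
# together with its inclusion into A.
A = range(-5, 6)
f = lambda x: x * x
g = lambda x: 4

E = [a for a in A if f(a) == g(a)]   # the equaliser object
e = lambda x: x                      # the inclusion E -> A

assert E == [-2, 2]
assert all(f(e(x)) == g(e(x)) for x in E)   # f . e = g . e
```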
+
+"Every subset is an equaliser" is therefore true, and the characteristic function is indeed the obvious way to go about it. Huh - the axiom of subset selection has just fallen out, stating that there is an inverse to the characteristic function. Magic. Then \\(\text{Hom}(A, 2) \cong \mathbb{P}(A)\\), which we already knew because to specify an element of the power-set is precisely to specify which elements of \\(A\\) are included.
+
+Equalisers are monic: well, the diagram certainly looks like being the right shape, and it's intuitive for Sets: if \\(E \to A\\) weren't injective, then we could choose more than one way of mapping \\(Z \to E \to A \to B\\). The proof in general mimics the Sets example.
+
+Then a blurb on how equalisers can often be made as "restrict the sets and inherit the structure". That's a nice rule of thumb. Awodey points out the "kernel of homomorphism" interpretation, which I'd already pattern-matched in the first paragraph. The equaliser is basically specifying an equivalence class under an equiv rel.
+
+Hah, I wrote that before seeing that a coequaliser is a generalisation of "take the quotient by an equiv rel". Makes sense: if the equaliser is an equivalence class, it seems reasonable for its dual to be a quotient. I skip past the definition of an equivalence relation, because I already know it. What does the definition of a coequaliser really mean? It's an arrow \\(q: B \to Q\\) such that once we've let \\(f, g\\) do their thing, we can apply \\(q\\) to make it look like they've done the *same* thing. It's the other way round from the equaliser, which restricted \\(f, g\\) so that they could only do the same thing. I can see why this is like taking a quotient, and the next example makes that very clear.
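+
+The quotient picture can be made concrete with a little union-find: glue \\(f(a)\\) to \\(g(a)\\) for every \\(a\\), and the coequaliser is the map sending each element of \\(B\\) to its class (the sets and functions here are a made-up example):
+
```python
# Coequaliser of f, g: A -> B in Set: quotient B by the smallest
# equivalence relation identifying f(a) with g(a) for each a.
A = [0, 1]
B = [0, 1, 2, 3]
f = lambda a: a          # f(0) = 0, f(1) = 1
g = lambda a: a + 1      # g(0) = 1, g(1) = 2

parent = {b: b for b in B}

def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

for a in A:
    parent[find(f(a))] = find(g(a))   # glue f(a) to g(a)

q = find                              # the quotient map B -> Q
Q = {q(b) for b in B}

assert all(q(f(a)) == q(g(a)) for a in A)  # q . f = q . g
assert len(Q) == 2                         # 0 ~ 1 ~ 2 collapse; 3 survives
```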
+
+Coproduct of two rooted posets: we quotient by the appropriate equivalence relation. That is, we coequalise using \\(\{ 0 \}\\) and its obvious inclusions into the two posets. I draw out the diagram and after some wrestling I convince myself that the rooted-posets coproduct is as stated. I'm still getting used to this diagram-chasing. I'll wait until the exercises to do the Top example.
+
+Presentations of algebras. I've never seen a demonstration that all groups can be obtained as presentations of free groups, I think, although it's fairly clear that it can be done (just specify every single relation that can possibly hold - in effect writing out the Cayley table). I would prefer it if Awodey defined \\(F(1)\\) explicitly, since it takes me a moment to realise it's the free algebra on one generator. Then \\(F(3) = F(x(1), y(1), z(1))\\). We then perform the next coequaliser. Awodey is again confusing me a bit, and I have to stop and work out that by \\(q(y^2)\\), he means \\(q \circ (1 \mapsto y^2)\\), and by \\(q(1)\\) he means \\(q \circ (1 \mapsto 1)\\). It's obvious what the intent is - chaining together these coequalisers in the obvious way. Each coequaliser doesn't significantly change the structure of the free group, so each coequaliser can be applied in turn, using the inherited structure where necessary. However, this is a bit of a confusing write-up.
+
+"The steps can be done simultaneously": oh dear. It looks like we should be able to do this construction sequentially all the time, and that is conceptually easier, but I'll try and understand the all-at-once construction anyway. Firstly, we define \\(F(2)\\) because we want to coequalise two 2-tuples. (It would be \\(F(3)\\) if we had three constraints and so wanted to coequalise two 3-tuples.) My instinct would have been to do this with the algebra product instead of the algebra coproduct - using \\(F(2) = F(1) \times F(1)\\). I asked a friend about this. The intuition is apparently that the product is good for imposing multiple conditions at the same time, while what we really want is a way to impose one of a number of conditions. The coproduct (by way of the direct sum) has the notion of "one of a number of things, but not necessarily all of them together".
+
+I draw out the diagram again the next day. This time it makes a bit more sense that we need the coproduct: it's because we need a way of getting from \\(F(1)\\) to \\(F(3)\\), and if we used the product we'd have all our arrows ending at \\(F(1)\\) rather than originating at \\(F(1)\\). I can see why the UMP guarantees the uniqueness of the algebra given by these generators and relations now.
+
+On to the specialisation to monoids. The construction is… odd, so I'll try and forget about it for a couple of minutes and then do it myself. We want to construct the free monoid on all the generators we have available - that is, all the monoid elements - so we're going to need a functor \\(T\\) (to use Awodey's notation) taking \\(N \to M(\vert N \vert)\\). Then we're also going to need a way to specify all the different relations in the monoid. We can do that by specifying a mapping taking a word \\(x_1, x_2, \dots, x_n\\) to the monoid product \\(x_1 x_2 \dots x_n\\). Write \\(C\\) for that multiplication ("C" for "collapsing") arrow \\(T(N) \to N\\). Aha: our restrictions become of the form \\(C(x_1, x_2) = x_3\\), representing the equation \\(x_1 x_2 = x_3\\).
+
+Very well: we're going to need our left-hand sides to be going from \\(T^2 N \to T N\\), and our right-hand sides likewise. Then we'll coequalise them. Let \\(f: T^2 N \to T N\\) take a word of words to its corresponding word of products, and let \\(g: T^2 N \to T N\\) take a word of words to the product of products. Wait, that's got the wrong codomain. Let \\(g: T^2 N \to T N\\) take a word of words to the corresponding word of letters. That's better: we have basically provided a list of equivalences between \\((x_1, x_2)\\) and \\(x_1 x_2\\).
+
+Finally, we take the coequaliser \\(e\\) of \\(f\\) and \\(g\\), and hope and pray that \\(N\\) (our original monoid) has the UMP for the resulting object. I remember from the proof in the book that we should first show that the coequaliser arrow is the operation "take the product of the word". (In hindsight, that's a great place to start. It's harder to deal with a function without knowing what it is.) Certainly the "take the product of the word" function does what we want, but does it actually satisfy the UMP? Drawing out a diagram convinces me that it does: any \\(\phi\\) which doesn't care how the letters are grouped between words, descends uniquely to a map from the monoid of all words.
+
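+The whole setup is easy to simulate for a concrete monoid. Below, a Python sketch with \\(N\\) the integers under addition (my choice of example, not the book's): \\(f\\) collapses each inner word to its product, \\(g\\) forgets the grouping, and the candidate coequaliser arrow takes the product of a word.
+
+```python
+from itertools import chain
+
+# N is a concrete monoid (here: integers under +). TN is the free monoid on
+# |N|, i.e. words (tuples) of elements; T^2 N is words of words.
+
+def f(ww):
+    """Word of words -> word of products: multiply out each inner word in N."""
+    return tuple(sum(w) for w in ww)
+
+def g(ww):
+    """Word of words -> word of letters: forget the inner grouping."""
+    return tuple(chain.from_iterable(ww))
+
+def q(w):
+    """Candidate coequaliser arrow TN -> N: take the product of the word."""
+    return sum(w)
+
+ww = ((1, 2), (3,), (4, 5))
+print(f(ww))  # (3, 3, 9): each inner word collapsed to its product
+print(g(ww))  # (1, 2, 3, 4, 5): the grouping forgotten
+
+# q coequalises f and g: it doesn't care how the letters were grouped.
+assert q(f(ww)) == q(g(ww)) == 15
+```
+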
+The coequaliser arrow therefore definitely does go into \\(N\\), and we can identify \\(N\\) with the coequaliser in the obvious way by including from the coequaliser (which still technically has the structure of a free monoid).
+
+# Summary
+
+This is yet another thing I've started to get a feel for, but not really understood. I now know what coequalisers and equalisers are for, and the utility of the "duals" idea. The exercises will certainly be helpful.
+
+[equalisers]: https://en.wikipedia.org/wiki/Equaliser_(mathematics)#In_category_theory
diff --git a/hugo/content/awodey/2015-09-19-duality-exercises.md b/hugo/content/awodey/2015-09-19-duality-exercises.md
new file mode 100644
index 0000000..763b7fc
--- /dev/null
+++ b/hugo/content/awodey/2015-09-19-duality-exercises.md
@@ -0,0 +1,66 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-19T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/duality-exercises/
+- /duality-exercises/
+title: Duality exercises
+---
+
+Exercise 1 is easy: at the end of Chapter 2 the corresponding products statement was proved, and the obvious dual statement turns out to be this one.
+
+Exercise 2 falls out of the appropriate diagram, whose upper triangle is irrelevant.
+
+![Free monoid functor preserves coproducts][ex 2]
+
+Exercise 3 I've [already proved][duality] - search on "sleep".
+
+Exercise 4: Let \\(\pi_1: \mathbb{P}(A + B) \to \mathbb{P}(A)\\) be given by \\(\pi_1(S) = S \cap A\\), and \\(\pi_2: \mathbb{P}(A+B) \to \mathbb{P}(B)\\) likewise by \\(S \mapsto S \cap B\\). Claim: this has the UMP of the product of \\(\mathbb{P}(A)\\) and \\(\mathbb{P}(B)\\). Indeed, if \\(z_1: Z \to \mathbb{P}(A)\\) and \\(z_2: Z \to \mathbb{P}(B)\\) are given, then \\(z: Z \to \mathbb{P}(A + B)\\) is specified uniquely by \\(S \mapsto z_1(S) \cup z_2(S)\\) (taking the disjoint union).
+
+Exercise 5: Let the coproduct of \\(A, B\\) be their disjunction. Then the "coproduct" property is saying "if we can prove \\(Z\\) from \\(A\\) and from \\(B\\), then we can prove it from \\(A \vee B\\)", which is clearly true. The uniqueness of proofs is sort of obvious, but I don't see how to prove it - I'm not at all used to the syntax of natural deduction. I look at the answer, which makes everything clear, although I still don't know if I could reproduce it. I understand its spirit, but not the mechanics of how to work in the category of proofs.
+
+Exercise 6: we need that for any two monoid homomorphisms \\(f, g: A \to B\\) there is a monoid \\(E\\) and a monoid homomorphism \\(e: E \to A\\) universal with \\(f e = g e\\). Certainly there is a monoid hom \\(e: E \to A\\) with that property (namely the trivial hom), so we just need to find one that is "big enough". Let \\(E\\) be the subset of \\(A\\) on which \\(f = g\\), which is nonempty because they must be equal on \\(1_A\\). I claim that it is a monoid with \\(A\\)'s operation. Indeed, if \\(f(a) = g(a)\\) and \\(f(b) = g(b)\\) then \\(f(ab) = f(a) f(b) = g(a) g(b) = g(ab)\\). This also works with abelian groups - and apparently groups as well.
+
+Finally we need that this structure satisfies the universal property. Let \\(Z\\) be a monoid with hom \\(h: Z \to A\\), such that \\(f h = g h\\). We want a hom \\(\bar{h} : Z \to E\\) with \\(e \bar{h} = h\\). But if \\(f h = g h\\) then we must have the image of \\(h\\) being in \\(E\\), so we can just take \\(\bar{h}\\) to be \\(h\\) with its codomain restricted to \\(E\\). This reasoning works for abelian groups too. We relied on Mon having a terminal object and monoids being well-pointed.
+
+Finite products: we just need to check binary products and the existence of a terminal object (the empty product). That's easy: the trivial monoid/group is terminal. Binary products: the componentwise direct product satisfies the UMP for the product, since if \\(z_1: Z \to A, z_2: Z \to B\\) then take \\(z: Z \to A \times B\\) by \\(z(y) = \langle z_1(y), z_2(y) \rangle\\). This is obviously homomorphic, while the projections make sure it is unique.
+
+Exercise 7 falls out of another diagram. The (1) label refers to arrows forced by the first step of the argument; the (2) label to the arrow forced by the (1) arrows.
+
+![Coproduct of projectives is projective][ex 7]
+
+Exercise 8: an injective object is \\(I\\) such that for any \\(X, E\\) with arrows \\(h: X \to I, m: X \to E\\) with \\(m\\) monic, there is \\(\bar{h}: E \to I\\) with \\(\bar{h} m = h\\). Let \\(P, Q\\) be posets, and let \\(f: P \to Q\\) be monic. Then for any points \\(x, y: \{ 1 \} \to P\\) we have \\(fx = fy \Rightarrow x=y\\), so \\(f\\) is injective. Conversely, if \\(f\\) is not monic then we can find \\(a: A \to P, b: B \to P\\) with \\(fa = fb\\) but \\(a \not = b\\). This means \\(A = B\\), because the equal arrows \\(fa, fb\\) have equal domains; so we have \\(a, b: A \to P\\) and \\(x \in A\\) with \\(a(x) \not = b(x)\\). But \\(f a(x) = f b(x)\\), so we have \\(f\\) not injective.
+
+Now, a non-injective poset: we want to set up a situation where we force some extra structure on \\(X\\). If \\(I\\) has two distinct nontrivial chunks which have no elements comparable between the chunks, then \\(I\\) is not injective. Indeed, let \\(X = I\\). Then the inclusion \\(X \to I\\) does not lift across the map which sends one chunk "on top of" the other: say one chunk is \\(\{a \leq b \}\\) and the other \\(\{c \leq d\}\\), then the map would have image \\(a \leq b \leq c \leq d\\).
+
+What about an injective poset? The dual of "posets" is "posets", so we can just take the dual of any projective poset - for instance, any discrete poset. Anything well-ordered will also do, suggests my intuition, but I looked it up and apparently the injective posets are exactly the complete lattices. Therefore a well-ordering will almost never do. I couldn't see why \\(\omega\\) failed to be injective, so I asked a question on Stack Exchange; midway through, I [realised why][SE].
+
+Exercise 9: \\(\bar{h}\\) is obviously a homomorphism. Indeed, \\(\bar{h}(a) \bar{h}(b) = h i(a) h i(b) = h(i(a) i(b))\\) because \\(h\\) is a homomorphism. But \\(i(a)\\) is the wordification of the letter \\(a\\), and \\(i(b)\\) likewise of \\(b\\), so we have \\(i(a) i(b)\\) is the word \\((a, b)\\), which is itself the inclusion of the product \\(ab\\).
+
+Exercise 10: Functors preserve the structure of diagrams, so we just need to show that the unique arrow guaranteed by the coequaliser UMP corresponds to a *unique* arrow in Sets. We need to show that given a function \\(\vert M \vert \to \vert N \vert\\) there is only one possible homomorphism \\(M \to N\\) which forgetful-functors down to it. But a homomorphism \\(M \to N\\) does specify where every single set element in \\(\vert M \vert\\) goes, so uniqueness is indeed preserved.
+
+Exercise 11: Let \\(R\\) be the smallest equiv rel on \\(B\\) with \\(f(x) \sim g(x)\\) for all \\(x \in A\\). Claim: the projection \\(\pi: B \to B/R\\) is a coequaliser of \\(f, g: A \to B\\). Indeed, let \\(C\\) be another set, with a function \\(c: B \to C\\) such that \\(c f = c g\\). Then there is a unique function \\(q: B/R \to C\\) with \\(q \pi = c\\): namely, \\(q([b]) = c(b)\\). This is well-defined because \\(c\\) is constant on each \\(R\\)-class: \\(c\\) identifies \\(f(x)\\) with \\(g(x)\\) for every \\(x\\), and \\(R\\) is the smallest equivalence relation doing so.
+
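+This construction is mechanical enough to run. A Python sketch (union-find is my implementation choice; the exercise only asks for the smallest such equivalence relation):
+
+```python
+# Coequaliser of f, g : A -> B in Set: quotient B by the smallest equivalence
+# relation R with f(x) ~ g(x) for every x in A. Union-find computes the classes.
+
+def coequaliser(A, B, f, g):
+    parent = {b: b for b in B}
+
+    def find(b):
+        while parent[b] != b:
+            b = parent[b]
+        return b
+
+    for x in A:
+        parent[find(f(x))] = find(g(x))  # merge the classes of f(x), g(x)
+
+    classes = {frozenset(b for b in B if find(b) == find(c)) for c in B}
+    pi = find  # the projection B -> B/R, with classes named by representatives
+    return classes, pi
+
+A = {0, 1}
+B = {0, 1, 2, 3}
+f = lambda x: x      # together these identify 0 ~ 2 and 1 ~ 3
+g = lambda x: x + 2
+
+classes, pi = coequaliser(A, B, f, g)
+assert classes == {frozenset({0, 2}), frozenset({1, 3})}
+assert all(pi(f(x)) == pi(g(x)) for x in A)  # pi coequalises f and g
+```
+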
+Exercise 12 I've [already done][duality] - search on "wrestling", though I didn't write this up.
+
+Exercise 13: I left this question to the end and couldn't be bothered to decipher the notation.
+
+Exercise 14: The equaliser of \\(f p_1\\) and \\(f p_2\\) is universal \\(e: E \to A \times A\\) such that \\(f p_1 e = f p_2 e\\). Let \\(E = \{ (a, b) \in A \times A : f(a) = f(b) \}\\) and \\(e\\) the inclusion. It is manifestly an equivalence relation: if \\(f(a) = f(b)\\) and \\(f(b) = f(c)\\) then \\(f(a) = f(c)\\), and so on.
+
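+The kernel-pair description is easy to test. A Python sketch (checking the equivalence-relation axioms by brute force on a small \\(A\\) and \\(f\\) of my choosing):
+
+```python
+from itertools import product
+
+# Kernel pair of f : A -> B, i.e. the equaliser of f.p1 and f.p2 on A x A:
+#     E = {(a, b) : f(a) = f(b)}. It is an equivalence relation on A.
+
+def kernel_pair(A, f):
+    return {(a, b) for a, b in product(A, repeat=2) if f(a) == f(b)}
+
+A = {0, 1, 2, 3, 4}
+f = lambda a: a % 2
+E = kernel_pair(A, f)
+
+assert all((a, a) in E for a in A)                        # reflexive
+assert all((b, a) in E for (a, b) in E)                   # symmetric
+assert all((a, c) in E for (a, b) in E for (b2, c) in E
+           if b == b2)                                    # transitive
+```
+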
+The kernel of \\(\pi: A \to A/R\\), the quotient by an equiv rel \\(R\\), is \\(\{ (a, b) \in A \times A : \pi(a) = \pi(b) \}\\). This is obviously \\(R\\), since \\(a \sim b\\) iff \\(\pi(a) = \pi(b)\\). That's what it means to take the quotient.
+
+The coequaliser of the two projections \\(R \to A\\) is the quotient of \\(A\\) by the equiv rel generated by the pairs \\(\langle \pi_1(x), \pi_2(x) \rangle\\), as in exercise 11. This is precisely the specified quotient.
+
+The final part of the exercise is a simple summary of the preceding parts.
+
+Exercise 15 is more of a "check you follow this construction" than an actual exercise. I do follow it.
+
+[duality]: {{< ref "2015-09-15-duality-in-category-theory" >}}
+[ex 2]: /images/CategoryTheorySketches/FreeMonoidFunctorPreservesCoproducts.jpg
+[ex 7]: /images/CategoryTheorySketches/CoproductOfProjectivesIsProjective.jpg
+[SE]: https://math.stackexchange.com/a/1442264/259262
diff --git a/hugo/content/awodey/2015-09-19-groups-in-categories.md b/hugo/content/awodey/2015-09-19-groups-in-categories.md
new file mode 100644
index 0000000..d4c91bf
--- /dev/null
+++ b/hugo/content/awodey/2015-09-19-groups-in-categories.md
@@ -0,0 +1,57 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-19T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/groups-in-categories/
+- /groups-in-categories/
+title: Groups in categories
+---
+
+I go into this chapter hoping that it will be on things I already know about group theory. This post will be on pages 75 through 85 of Awodey.
+
+I already know about groups in the category of sets, but it looks like they can be defined more generally, in other categories. It is clear that we will need to consider only categories with finite products, because the notion of a binary operation requires us to work on pairs of elements.
+
+The definition of a group is the obvious one: an object \\(G\\) with an "inverses" arrow \\(i: G \to G\\), a "multiplication" arrow \\(m: G \times G \to G\\) and a "unit" arrow \\(u: 1 \to G\\), such that \\(m\\) is associative in the obvious way, \\(u\\) is a unit for \\(m\\), and \\(i\\) is an inverse with respect to \\(m\\) - drawing out the appropriate diagrams.
+
+The definition of a homomorphism is likewise very familiar, and the examples which follow are very clear. (The operations are arrows, so they must preserve structure.)
+
+Example 4.4 is a group in the category of groups. I remember having proved Proposition 4.5 on an example sheet somewhere, but it wasn't indicated there that it was anything particularly important. I've only glanced over the construction of a group in the category of groups, so I'll try and work out what it is myself. A group in the category of groups is a group \\(G\\) together with its self-product \\(G \times G\\), an associative homomorphism \\(m: G \times G \to G\\), and \\(u: \{ 1 \} \to G\\), and \\(i: G \to G\\) which acts as an inverse for \\(m\\). This is still a bit nonspecific, so can we say anything about \\(m\\)? It must preserve the group structure on \\(((G, \cdot), m, i)\\), and we know \\(\cdot\\) preserves the group structure on \\((G, \cdot)\\). Is there perhaps a way to get them to play nicely together?
+
+I'll write \\(\times\\) as a shorthand for \\(m\\). Then \\(m(a \cdot b, c \cdot d) = m(a, c) \cdot m(b, d)\\) because \\(m\\) is a homomorphism \\(G \times G \to G\\). Letting \\(a = 1_G, d = 1_G\\) yields \\(m(b, c) = c \cdot b\\). Letting \\(b=1_G, c=1_G\\) yields \\(m(a, d) = a \cdot d\\). Therefore in fact \\(m\\) is the group operation on \\(G\\), and \\(G\\) is also abelian. (I won't bother with the converse, since on looking, the book says it's easy.)
+
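+The key law in that argument - the interchange law \\(m(a \cdot b, c \cdot d) = m(a, c) \cdot m(b, d)\\) - can be brute-forced on small groups. A Python sketch (taking \\(m\\) to be the group operation itself, which the argument shows is forced anyway): the law holds for the abelian \\(\mathbb{Z}/6\\) and fails for the non-abelian \\(S_3\\).
+
+```python
+from itertools import permutations, product
+
+# Interchange law with m taken to be the group operation op:
+#     op(op(a, b), op(c, d)) == op(op(a, c), op(b, d)) for all a, b, c, d.
+# It holds precisely when the group is abelian (set a and d to the identity).
+
+def interchange_holds(elements, op):
+    return all(op(op(a, b), op(c, d)) == op(op(a, c), op(b, d))
+               for a, b, c, d in product(elements, repeat=4))
+
+# Z/6 under addition mod 6 is abelian: the law holds.
+Z6 = range(6)
+assert interchange_holds(Z6, lambda a, b: (a + b) % 6)
+
+# S_3, permutations of {0, 1, 2} under composition, is not abelian: it fails.
+S3 = list(permutations(range(3)))
+compose = lambda p, q: tuple(p[q[i]] for i in range(3))
+assert not interchange_holds(S3, compose)
+```
+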
+A strict monoidal category is a monoid in Cat. Dear heavens, this is confusingly general. I'll have to go through the examples Awodey gives.
+
+The operation of taking products and coproducts (that is, the meet/join operations) does indeed satisfy the criterion - ah, I move down and see that Awodey points out that these only hold up to isomorphism, not equality, so this isn't "strict". In posets, though, there's at most one arrow between any two objects, so we really do have equality.
+
+A discrete monoidal category is a standard Set-monoid: I can see that each Set-monoid is a discrete monoidal category. How about the converse? Yep, that's fine as long as we're talking about locally small categories. (I briefly got confused between the morphisms and the \\(\otimes\\) operation, but that's cleared up now.)
+
+A strict monoidal category which is not a poset: the finite ordinals. Since no arrow between two different ordinals has an inverse, we must have that objects are unique not just up to isomorphism, but in a more specific sense. This again lets us say that this is a strict monoidal category.
+
+I'll leave that and hope for the best. Next is the category of groups, and we see the familiar equivalence between kernels of homomorphisms and normal subgroups. There's also this idea of "the equaliser is the subgroup; the coequaliser is the quotient" from earlier. I prove the coequaliser statement myself without looking at the proof - it's not hard, and it just involves showing that for \\(H\\) normal in \\(G\\), if \\(k: G \to K\\) is such that \\(k i = k u\\), then \\(k\\) is trivial on \\(H\\) and so descends to the quotient \\(G/H\\). The category-theoretic statement about coequalisers is much more fearsome than the concrete group-theoretic one!
+
+I'm very familiar with these results, so having done one of them (the coproducts one), I skip through to the first one I don't know, which is Corollary 4.11. Actually this is the First Isomorphism Theorem in disguise. I whizz down to the exercises and see that the cokernel construction is an exercise, so I'll leave it until then (I'd like to avoid fragmenting them, and also I can't be bothered at the moment).
+
+Section 4.3: groups as categories. Groups certainly are categories - that's how I defined them in my Anki card for the category theory deck. A functor is therefore clearly a group hom, as Awodey says.
+
+Ah, that's cool. Functors from a group (viewed as a category) to any category form "representations" of that group. Elements of \\(G\\) become automorphisms of an object in \\(C\\). In the case of the functor into the category of finite-dimensional vector spaces with linear maps, we can have \\(G\\) appearing as the automorphism group of a wide variety of different objects: for instance, \\(C_5\\) acts on \\(\mathbb{C}^1\\), or on \\(\mathbb{C}^2\\) or on…
+
+In the case of the functor into the category of sets, it's most natural to identify \\(G\\) with a subgroup of some permutation group and to make \\(G\\) act on the appropriate set; in fact this looks like the only way of describing such a functor, since every group is unique up to isomorphism, so corresponds to only one "distinct" permutation-subgroup.
+
+Now we see the definition of a congruence on a category. It's easy to see that this is the equivalence relation we get by identifying all arrows which go to and from the same place, or an equiv rel with more classes than that.
+
+The congruence category uses some rather strange notation. What even is \\(C_0\\)? Surely it must be the set of objects, and \\(C_1\\) the set of arrows, but that isn't notation I remember from earlier in the book. Once that's settled, the definitions become easy: the congruence category is "the thing on which we need to take the quotient" in order to get the quotient by the congruence. It is the category where the morphisms are instead "congruent pairs of arrows" in the original category, and the composition is well-defined because \\(f' f\\) and \\(g' g\\) are congruent if \\(f', f\\) and \\(g', g\\) are.
+
+There are indeed two projection functors, because we're working on a category which has "pairs of arrows" as its morphisms; then the coequaliser of those two is the desired quotient. That seems fine.
+
+We then construct the "kernel of a functor \\(F\\)" in an analogous way to groups: two arrows are \\(F\\)-congruent iff \\(F\\) treats them in the same way, and we define the quotient category to be universal such that for any congruence \\(\sim\\), \\(F\\) descends to the quotient by \\(\sim\\) iff \\(\sim\\) is a sub-congruence of \\(\sim_F\\). (I had to sleep on this one, but I think I understand it now.)
+
+Finally, picking \\(\sim\\) to equal \\(\sim_F\\) gives that all functors descend to their quotients, where the descent is bijective on objects, surjective on hom-sets, and the descended map is injective on hom-sets.
+
+# Summary
+
+This section was an interesting one, but it took me a while to get the hang of it. I'm used to all of this in a concrete setting; seeing it in the abstract makes everything quite difficult. I'm going back to the section on hom-sets now, because the last paragraph is not intuitive at all to me, and I feel it ought to be.
diff --git a/hugo/content/awodey/2015-09-22-limits-and-pullbacks.md b/hugo/content/awodey/2015-09-22-limits-and-pullbacks.md
new file mode 100644
index 0000000..07e0137
--- /dev/null
+++ b/hugo/content/awodey/2015-09-22-limits-and-pullbacks.md
@@ -0,0 +1,50 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-22T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/limits-and-pullbacks/
+- /limits-and-pullbacks/
+title: Limits and pullbacks
+---
+
+I'm going to skip pages 85 through 88 of Awodey for the moment, because time is starting to get short and I want to make sure I'm doing stuff which is relevant to the Part III course on category theory. Therefore, I'll skip straight to Chapter 5, pages 89 through 95. (There's not really a nice way to break this chapter up into small chunks, because the next many pages are on pullbacks.)
+
+We have indeed seen that every subset of a set is an equaliser: just define two functions which agree on that subset and nowhere else. (The indicator function on the subset, and the constant-1 function, for example.) A mono is a generalised subset: well, we have that arrows are generalised elements, so can we make a mono represent a collection of generalised elements? Yes, sort of: given any generalised element which is "in the subset" - that is, on which the equaliser-functions agree - that element lifts over the mono, so can be interpreted as an element of the mono. It's a bit dubious, but it'll do.
+
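+The "indicator versus constant-1" trick from the paragraph above, spelled out in Python (a tiny check, nothing more; the example sets are mine):
+
+```python
+# Every subset S of A is an equaliser: the indicator function of S and the
+# constant-1 function agree on S and nowhere else.
+
+def indicator(S):
+    return lambda a: int(a in S)
+
+one = lambda a: 1  # the constant-1 function
+
+A = {1, 2, 3, 4}
+S = {2, 4}
+chi_S = indicator(S)
+
+# The equaliser of chi_S and one, computed directly, recovers exactly S.
+assert {a for a in A if chi_S(a) == one(a)} == S
+```
+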
+The idea of "an injective function which is isomorphic onto its image" comes up quite often, so the next chunk is quite familiar. Then the collection of subobjects of \\(X\\) is just the slice \\(\mathbf{C}/X\\), and the morphisms are the same as in the slice category: commuting triangles.
+
+Because our arrows are monic, we can have at most one way to complete any given commuting triangle, so we get the natural idea of "there is exactly one natural inclusion map from a subset to its parent set". Finally, we define what it means for two objects to be "the same object" in this setting: namely, each includes into the other. (Remark 5.2 describes the process of quotienting out those objects which are "the same" in this sense, and points out that in Set, each subobject is isomorphic only to itself.)
+
+We then see that subobjects of subobjects of \\(X\\) are subobjects of \\(X\\), because the composition of monic things is monic. We therefore have a way of including subobjects of subobjects of \\(X\\) into \\(X\\), and that lets us define the obvious membership relation.
+
+The final example in this section is that of the equaliser, which is actually a subobject consisting of generalised elements which \\(f, g\\) view as being the same. I follow this construction as symbols, but as ever, I don't really have an intuition for it. I'll accept that and move on.
+
+Pullbacks next. A pullback is a universal way of completing a square. My first thought on seeing the definition is that this is an awful lot like a product: given \\(f: A \to C, g: B \to C\\) we seek a product of \\(A\\) and \\(B\\) such that the projection diagram commutes with \\(f\\) and \\(g\\) in the right way. However, products are unique up to isomorphism, so there is "only one" product anyway: we can't just look for one which behaves in the right way, can we?
+
+I'm going to have to try and get this in Sets. Let \\(A = \{ 1, 2 \}\\), \\(B = \{4, 5 \}\\), \\(C = \{1, 2, 4, 5 \}\\) and \\(f, g\\) the inclusions. Then the pullback \\(P\\) must be the empty set - ah, this is the intersection operation Awodey mentioned earlier, and I sense an equaliser going on here. What about \\(A = \{1, 2, 4 \}\\) instead? Then we need \\(P\\) to be \\(\{4\}\\) only.
+
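+Both toy examples above can be checked mechanically. A Python sketch of the pullback in Sets, using the standard set-of-pairs description (identified with its image where the text does so):
+
+```python
+# Pullback of f : A -> C and g : B -> C in Set:
+#     P = {(a, b) : f(a) = g(b)}, together with the two projections.
+
+def pullback(A, B, f, g):
+    return {(a, b) for a in A for b in B if f(a) == g(b)}
+
+identity = lambda x: x
+
+# First example: A = {1, 2} and B = {4, 5} included into C = {1, 2, 4, 5}.
+# They share no elements, so the pullback (their intersection) is empty.
+assert pullback({1, 2}, {4, 5}, identity, identity) == set()
+
+# Second example: A = {1, 2, 4} instead, so the pullback picks out {4}.
+assert pullback({1, 2, 4}, {4, 5}, identity, identity) == {(4, 4)}
+```
+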
+Ah, I understand my confusion. Products are indeed unique - but they are universal: they are the most general kind of thing which satisfies the UMP of the product. There are other things which satisfy the "UMP-without-the-U" of the product: the statement of the UMP but without the word "unique". We want to pick the most general one of those which satisfies a certain property. So a product is just a pullback where \\(C\\) is terminal, for instance.
+
+Proposition 5.5 is a description of the pullback as an equaliser. I knew there would be something like this! Without looking at the proof, I can tell it'll revolve around the fact that equalisers are monic (that'll be the step which guarantees uniqueness). The proof follows just by drawing out the diagram, really.
+
+![Pullback exists if equalisers and products do][pullback exists]
+
+Now comes a demonstration that inverse images are a kind of pullback. I don't see a way to understand this intuitively enough that I could reproduce it - the idea is simple but very much counter to my intuitions. I'll just plough on.
+
+In a pullback of \\(f: A \to B, m: M \to B\\), if \\(m\\) is monic then its parallel arrow \\(m'\\) is: that follows from another diagram.
+
+![Monic implies parallel arrow is monic in a pullback][monic]
+
+# Summary
+
+I get the impression that the idea of a limit is a very general one, of which presumably pullbacks are a specific example - I can't think of something which generalises the idea of "inverse image" off the top of my head. We're going to have six more pages on pullbacks, and then the idea of a limit will be introduced. (This chapter is rather long.)
+
+I do like the way Awodey is doing this: give examples of specific constructions, and then show how they may be unified. I glanced down to the blurb at the start of the "limits" section, and saw that another such unification is about to take place. I'm looking forward to that.
+
+[pullback exists]: {{< baseurl >}}images/CategoryTheorySketches/PullbackExistsWithEqualisers.jpg
+[monic]: {{< baseurl >}}images/CategoryTheorySketches/ParallelArrowInPullbackIsMonic.jpg
diff --git a/hugo/content/awodey/2015-09-22-properties-of-pullbacks-limits.md b/hugo/content/awodey/2015-09-22-properties-of-pullbacks-limits.md
new file mode 100644
index 0000000..091f22a
--- /dev/null
+++ b/hugo/content/awodey/2015-09-22-properties-of-pullbacks-limits.md
@@ -0,0 +1,61 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-22T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/properties-of-pullbacks-limits/
+- /properties-of-pullbacks-limits/
+title: Limits and pullbacks 2
+---
+
+We have just had the definition of a pullback; now in Awodey pages 95 through 100 we'll see some more about them, and after that we'll get the more general unifying idea of the limit in pages 100 through 105.
+
+Lemma 5.8 states that a certain commuting diagram is a pullback. The proof is by "diagram chase", and I can see why - my proof goes along the lines of gesturing several times at various parts of the diagram. Then the corollary takes me a moment to get my head around, but then I turn my head sideways and it pops out of the diagram. If you push the \\(h'\\) line into the page in Lemma 5.8, and rotate the diagram by ninety degrees, you end up with the diagram of Corollary 5.9; then part 2 of Lemma 5.8 is the corollary.
+
+The operation of pullback is a functor. Given a "base" arrow \\(h\\), we may define a functor which takes an arrow \\(f\\) and pulls back along \\(f, h\\). It seems very plausible, but it takes me a while of staring at the diagrams before it makes sense. In particular, the diagram in the book doesn't do a good job of splitting up the two statements which are proved: namely that \\(h^* 1_X = 1_{h^* X}\\) and that \\(h^*\\) preserves composition.
+
+The corollary is that \\(f^{-1}\\) is a functor, which follows because the operation "pull things back along \\(f\\)" is a functor. Then we get that \\(f^{-1}\\) descends to the quotient by equivalence. This is all a set of symbols which I barely understand, so I have a break and then go back over the whole thing again.
+
+Pullback is a functor. Fine. Then \\(f^{-1}: \text{Sub}(B) \to \text{Sub}(A)\\) - which is defined as the pullback of an inclusion and \\(f\\) - must also be a functor, because it is exactly the operation "take a certain pullback". The statement that \\(M \subseteq N \Rightarrow f^{-1}(M) \subseteq f^{-1}(N)\\) is just the statement that "if we apply the pullback by \\(f\\) and an inclusion, the relation \\(\subseteq\\) is preserved", which is true because pullback is a functor. (Recall that \\(M \subseteq N\\) iff there is \\(g: M \to N\\) with the triangle \\(m: M \to Z, n: N \to Z\\) commuting. We're working throughout with subobjects of the object \\(Z\\).)
+
+Now we do have \\(M \equiv N\\) implies \\(f^{-1}(M) \equiv f^{-1}(N)\\) - recall that \\(M \equiv N\\) iff both are \\(\subseteq\\) each other - so \\(f^{-1}\\) is constant on equivalence classes and so descends to the quotient. That's a bit clearer now.
+
+Phew, a concrete example is coming up: a pullback in Sets. I draw out the general diagram first, then write in the assumptions we make, and end up with a diagram a lot like the one in the book, except that I've labelled the unlabelled arrow "inclusion".
+
+Ah, I'm starting to get this. The operation "take inverses" is a function which takes one "major" argument \\(f: A \to B\\), and one "minor" argument \\(M\\) (from which we extract the corresponding subset-arrow \\(m: M \to B\\)). The output is the pullback diagram, which may be interpreted as just the pullback object from those two arrows.
+
+Once I've realised that the operation "take inverses" is as above, the top of the following page (p99) becomes trivially obvious, although I still have to do some mental work to do the interpretation in terms of substituting a term for a variable in a formula \\(\phi\\). It seems like a very complicated way of saying something very simple.
+
+Then we see the naturality of the isomorphism \\(2^A \cong \mathbb{P}(A)\\). First, am I convinced we've even shown that there is an isomorphism? Certainly each function \\(A \to 2\\) corresponds (by inverses) to a unique member of \\(\mathbb{P}(A)\\), while each member of \\(\mathbb{P}(A)\\) corresponds to a unique member of \\(2^A\\) given by the characteristic function. Now, does the naturality diagram really commute? Yes, that's what happened above: \\(f^{-1}(V_{\phi}) = V_{\phi f}\\).
+
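+The commuting square can be checked on a small example. A Python sketch of \\(f^{-1}(V_{\phi}) = V_{\phi f}\\) (the particular \\(A, B, f, \phi\\) are my own choices):
+
+```python
+# Naturality of 2^A and P(A): for f : A -> B and phi : B -> 2, the inverse
+# image under f of V_phi = {b : phi(b) = 1} equals V_{phi . f}.
+
+def V(phi, X):
+    """The subset of X classified by the characteristic function phi."""
+    return {x for x in X if phi(x) == 1}
+
+def preimage(f, A, S):
+    return {a for a in A if f(a) in S}
+
+A = {0, 1, 2, 3}
+B = {0, 1, 2}
+f = lambda a: a % 3
+phi = lambda b: int(b >= 1)  # classifies {1, 2} inside B
+
+# The naturality square commutes: f^{-1}(V_phi) = V_{phi . f}.
+assert preimage(f, A, V(phi, B)) == V(lambda a: phi(f(a)), A) == {1, 2}
+```
+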
+This section has one final example: reindexing an indexed family of sets. The definition of \\(p\\) is fine; then we pull it back along \\(\alpha\\). I need to check that the guessed pullback object is indeed a pullback, for which I need a diagram. The required property eludes me completely until I realise that the topmost arrow of Awodey's diagram is in fact the identity; then the UMP falls out easily.
+
+Section 5.4 is entitled "limits", and it promises to unify pretty much everything we've already seen. Recall the theorem that if a category has finite products and equalisers then it has pullbacks and a terminal object, because we may take an equaliser of arrows out of the product to obtain a pullback, and we may perform the empty product to obtain a terminal object. Now Awodey proves the converse: constructing the product as a pullback over a terminal object, and constructing the equaliser as a pullback of the identity pair of arrows against the pair \\(\langle f, g \rangle\\) we want to equalise.
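+
+The "products and equalisers give pullbacks" direction is concrete enough to check in Sets; a quick Python sketch, with a toy example of my own:
+
+```python
+from itertools import product
+
+def equaliser(f, g, A):
+    # Equaliser of f, g: A -> B in Sets: the subset where they agree.
+    return {a for a in A if f(a) == g(a)}
+
+A, B = {0, 1, 2}, {0, 1}
+f = lambda a: a % 2          # f: A -> C
+g = lambda b: b              # g: B -> C, with C = {0, 1}
+
+# The pullback of f and g is the equaliser of f.p1 and g.p2 on A x B:
+AxB = set(product(A, B))
+P = equaliser(lambda p: f(p[0]), lambda p: g(p[1]), AxB)
+assert P == {(0, 0), (1, 1), (2, 0)}
+```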
+
+Define a "diagram of type \\(\mathbf{J}\\) in \\(\mathbf{C}\\)" in the way you'd hope: since an arrow \\(X \to Y\\) is thought of as a shape-\\(X\\) subset of \\(Y\\), we should consider a shape-\\(\mathbf{J}\\) "subset" of \\(\mathbf{C}\\) to be a functor \\(\mathbf{J} \to \mathbf{C}\\).
+
+Define a *cone* to the diagram \\(D\\) as - well, the name is quite suggestive. Fix a base object \\(C\\) of \\(\mathbf{C}\\), and then take a family of arrows \\(C \to D_j\\), one for each object in the shape \\(\mathbf{J}\\), all emanating from this base \\(C\\). (Of course, we insist that the arrows of this cone commute with the arrows of the diagram.)
+
+A morphism of cones behaves in the obvious way: send the base point to its new position, and send each arrow to its new arrow. (We keep the \\(D_j\\) the same, because we need to preserve the diagram; we're only changing the position of the apex of the cone.)
+
+Finally, the definition of a limit! It's a terminal object in the category of cones on a given diagram. All cones have exactly one arrow going into this cone (if it exists). The "closest cone to the diagram" idea is a nice one, and I can see how this links with the idea of a universal mapping property. The UMPs we've seen up to now are of the form "draw this diagram, and select the closest object that fulfils it" - how neat. This immediately covers the product, pullback and equaliser examples; from the empty diagram, there is precisely one cone for each object (namely "pick a vertex, and have no maps at all"), so the category of cones is just the original category, so the limit is a terminal object.
+
+Now, a theorem on an equivalent condition for having all finite limits. If a category has all finite limits, then it trivially has all finite products and equalisers, because they're limits. Therefore we need to show that if a category has all finite products and equalisers, then we can build any limit. The proof will have to start by fixing some finite category \\(\mathbf{J}\\) and considering some fixed diagram of shape \\(\mathbf{J}\\) in \\(\mathbf{C}\\). Construct the cone category. We're going to have to manufacture the limit somehow, given that we have finite products and equalisers. At this point I look in the book and it tells me that the first step is to consider the product of all the objects in the diagram. OK, that is a cone-shape - it has the right arrows. Could it be a limit? We'd need that for any other cone \\(X\\), there was a unique arrow \\(X \to \prod D_i\\) commuting with the projections. That doesn't actually hold, though: consider \\(D_1, D_2\\) as our diagram, and take \\(D_1 \times D_2\\) as the product. Then \\(D_1 \times \{ \langle x, x \rangle : x \in D_2 \}\\) doesn't have a unique arrow into \\(D_1 \times D_2\\), because we could take either the second or the third projection, so we want to equalise out by such manipulations.
+
+Ugh, I just don't see how to do this. I'll have to look at the book again. The construction is quite complicated: we take the product over all the possible arrows (ways to get to) \\(D_j\\) from any object \\(D_i\\), and we'll equalise out the different ways to get to each object. This becomes much clearer from a diagram, where it actually looks like the only possible way to do it: basically list all the different ways to get from A to B, and equalise out by "viewing them all as being the same way".
+
+![Equalisers to make limits][equalisers to make limits]
+
+Once that's done, the rest is bookkeeping to check that we've actually made a cone, and that the cone is a limit, by showing that "cone" is precisely "thing which satisfies the equaliser diagram"; then the fact that we made a limit falls straight out of the uniqueness part of the UMP.
+
+The final bit on "we didn't use the finiteness condition" is clear, and the dual bit is clear (though I don't have much idea about what a colimit or a cocone is). Presumably we'll see some examples of colimits later, but I imagine the coequaliser and coproduct are examples.
+
+# Summary
+
+This section was really neat. Quite hard to understand - took a lot of time and effort to get the pullbacks idea - but the feeling of unification was great fun. Next up will be "preservation of limits" and colimits, and after that will come some exercises (which I think are sorely needed). Then the next chapter is on another kind of construction which is not a limit, and then the really meaty sections which Awodey has called "higher category theory" and which occupy a large chunk of the Part III introductory category theory course.
+
+[equalisers to make limits]: {{< baseurl >}}images/CategoryTheorySketches/EqualisersToMakeLimits.jpg
diff --git a/hugo/content/awodey/2015-09-23-properties-of-limits.md b/hugo/content/awodey/2015-09-23-properties-of-limits.md
new file mode 100644
index 0000000..fa03cab
--- /dev/null
+++ b/hugo/content/awodey/2015-09-23-properties-of-limits.md
@@ -0,0 +1,71 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-23T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/properties-of-limits/
+- /properties-of-categorical-limits/
+title: Properties of categorical limits
+---
+
+We've seen how limits are formed, and that they exist iff products and equalisers do. Now we get to see continuous functors and colimits, pages 105 through 114 of Awodey.
+
+The definition of a continuous functor is obvious in hindsight given the real-valued version: it "preserves all limits", where "preserves a particular limit" means the obvious thing: limit cones of the given shape remain limit cones when the functor is applied.
+
+The example is the representable functor, taking any arrow in category \\(\mathbf{C}\\) to its corresponding "apply me on the left!" arrow in Sets. That corresponds basically to the relevant commutative triangle in \\(\mathbf{C}\\). I hope the following proof will help me understand the representable functors more clearly.
+
+Representable functors preserve all limits: we need to preserve all products and all equalisers. Awodey shows the empty product first, which is clear: the terminal object goes to the terminal object. Then an arbitrary product \\(\prod_{i \in I} X_i\\) gets sent to \\(\text{Hom}(C, \prod_i X_i)\\), which is itself a product because \\(f: C \to \prod_i X_i\\) corresponds exactly with \\(\{ f_i: C \to X_i \mid i \in I\}\\). (Indeed, the projections give \\(f \mapsto \{ f_i \mid i \in I\}\\); conversely, the UMP of the product gives a unique \\(f\\) for the collection \\(\{ f_i \mid i \in I \}\\).)
+
+This has given me the intuition that "the representable functor preserves all the structure" in the sense that the diagrams will look the same before and after having done the functor.
+
+Equalisers are the other thing to show, and that falls out of the definition in a completely impenetrable way. I can't distill that into "the representable functor preserves all the structure" so easily.
+
+Then the definition of a contravariant functor. I've heard the terms "covariant" and "contravariant" before, several times, when people talk about tensors and general relativity and electromagnetism, but I could never understand what was meant by them. This definition is clearer: a functor which reverses the direction of the arrows between objects. Operations like \\(f \mapsto f^{-1}\\) would be contravariant, for instance.
+
+The representable functor \\(\text{Hom}_{\mathbf{C}} ( -, C) : \mathbf{C}^{\text{op}} \to \mathbf{Sets}\\) is certainly contravariant, taking \\(A\\) to \\(\text{Hom}(A, C)\\) and an arrow \\(f: B \to A\\) to \\(f^* : \text{Hom}(A, C) \to \text{Hom}(B, C)\\) by \\((a \mapsto g(a)) \mapsto (b \mapsto g(f(b)))\\). The contravariant functor reverses the order of arrows in its argument; it takes arrows to co-arrows, so it should take colimits to co-colimits, or limits. I need to keep in mind this example, to avoid the intuition that "functors take things to things and cothings to cothings": if the functor is contravariant, it flips the co-ness of its input.
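+
+Watching \\(f^*\\) reverse the order of composition makes this stick for me; here's a toy Python sketch (my own encoding, with functions as dicts so they compare by value):
+
+```python
+def compose(g, f):
+    # (g . f)(x) = g[f[x]] for dict-encoded functions
+    return {x: g[fx] for x, fx in f.items()}
+
+def precompose(f):
+    # Hom(-, C) on arrows: f: B -> A becomes f*: Hom(A, C) -> Hom(B, C)
+    return lambda h: compose(h, f)
+
+f = {0: "a", 1: "b"}        # f: B -> A
+g = {"a": 10, "b": 20}      # g: A -> D
+h = {10: "yes", 20: "no"}   # h: D -> C, an element of Hom(D, C)
+
+# Contravariance: (g . f)* = f* . g* - the order of composition flips.
+assert precompose(compose(g, f))(h) == precompose(f)(precompose(g)(h)) == {0: "yes", 1: "no"}
+```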
+
+Example: a coproduct is a colimit, so \\(\text{Hom}_{\mathbf{C}} ( - , C)\\) should take the coproduct to a product. That might be why we had \\(\mathbb{P}(A+B) \cong \mathbb{P}(A) \times \mathbb{P}(B)\\) as Boolean algebras: the functor \\(\mathbb{P}\\) might be contravariant. What does it do to the arrow \\(B \to A\\)? Recall that an arrow in the category of Boolean algebras (interpreted as posets) is an order-preserving map. Huh, not contravariant after all: the \\(\mathbb{P}\\) functor seems covariant to me. There must be some other reason; [it turns out][SE] that I'm mixing up two different functors, one of which is covariant and takes sets to sets, and one of which is contravariant and takes sets to Boolean algebras.
+
+"The ultrafilters in a coproduct of Boolean algebras correspond to pairs of ultrafilters": recall that the functor \\(\text{Ult}: \mathbf{BA}^{\text{op}} \to \mathbf{Sets}\\) takes a Boolean algebra to its set of ultrafilters, each identified with the indicator function picking out whether a given element is in the filter, and takes an arrow \\(f: B \to A\\) of Boolean algebras to the arrow \\(\text{Ult}(f): \text{Ult}(A) \to \text{Ult}(B)\\) by \\(\text{Ult}(f)(1_U) = 1_U \circ f\\), and so it is representable. (I barely remember this. I think I deferred properly thinking about representable functors until Awodey covered them properly.) At least once we've proved that, we do get "ultrafilters in the coproduct correspond to pairs of ultrafilters", by the iso in the previous paragraph.
+
+The exponent law is much easier - it follows immediately from the same iso.
+
+(Oh, by the way, we have that limits are unique up to unique isomorphism, because they may be formed from products and equalisers which are themselves unique up to unique isomorphism.)
+
+Next section: colimits. The construction of the co-pullback (that is, pushout) is dual to that of the pullback: take the coproduct and then coequalise across the two sides of the square. So the coproduct of two rooted posets would be the pushout of the two "pick out the root" functions: let \\(A = \{ 0 \}\\), and \\(B, C\\) be rooted posets with roots \\(0_B, 0_C\\). Then the pushout of \\(f: A \to B\\) by \\(f(0) = 0_B\\) and \\(g: A \to C\\) by \\(g(0) = 0_C\\) is just the coproduct of the two rooted posets.
+
+Ugh, a geometrical example next. Actually, this is fairly neat: the coproduct of two discs, but where we view two points as being the same if they are both images of the inclusion. That's just two discs glued together along their boundary circles, which is topologically the same as a sphere. In the next lower dimension, we want to take two intervals, glued together at their endpoints, making a circle.
+
+Then the definition of a colimit, which is the obvious dual to that of a limit. I skip through to the "direct limit" idea, where the colimit is taken over a linearly ordered indexing category. I can immediately see that this might be associated with the idea of a limit in \\(\mathbb{R}\\), but I'll save that until after the worked example, which is the direct limit of groups.
+
+The colimit setup is all pretty obvious in retrospect, but I didn't try and come up with it myself. (The exercises will show whether it really is obvious!) The colimiting object does exist because coproducts and coequalisers do, and we can construct it as the coproduct followed by a certain coequaliser - namely, the one where "following a path through the sequence, then going out to the colimit, is the same as just going straight to the colimit". That is, such that \\(p_n g_{n-1} g_{n-2} \dots g_i = p_i\\), where the \\(p_i: G_i \to L\\) are the maps into the colimit. The equivalence relation whose quotient we take, is therefore: if \\(x \in G_n, y \in G_m\\), then \\(x \sim y\\) iff there is some \\(k\\) such that if we follow along the homomorphisms starting from \\(x\\) and \\(y\\), we eventually hit a common element. (Indeed, if there existed elements \\(x, y\\) which didn't have this property, then \\(p_m g_{m-1} \dots g_n(x) \not = p_n(x)\\).) I think I've got that.
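+
+A Python sketch of that equivalence relation, for a hypothetical \\(\omega\\)-chain (sets with inclusion maps rather than groups, to keep it short; all the names are mine):
+
+```python
+# An omega-chain G_0 -> G_1 -> ... where G_n = {0, ..., n} and the
+# chain maps g_n: G_n -> G_{n+1} are the inclusions.
+def g(n):
+    return lambda x: x
+
+def push(n, x, m):
+    # Follow the chain maps to carry x from level n up to level m >= n.
+    for k in range(n, m):
+        x = g(k)(x)
+    return x
+
+def equivalent(p, q):
+    # (n, x) ~ (m, y) iff the two elements agree at some common later stage.
+    (n, x), (m, y) = p, q
+    k = max(n, m)
+    return push(n, x, k) == push(m, y, k)
+
+# The element 1 of G_2 and the element 1 of G_5 become identified in the
+# colimit; distinct elements at the same level do not.
+assert equivalent((2, 1), (5, 1))
+assert not equivalent((3, 1), (3, 2))
+```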
+
+The operations are the obvious ones, and we've made a kind of "infinite union" of these groups, where the maps \\(u_n: G_n \to G_{\infty}\\) are the "inclusions". Universality is inherited from Sets, so as long as the limiting structure obeys the group axioms, we have indeed ended up with a colimit.
+
+What does it mean, then, for functor \\(F: \mathbf{C} \to \mathbf{D}\\) to "create limits of type \\(\mathbf{J}\\)"? For each diagram \\(C\\) of type \\(\mathbf{J}\\) in \\(\mathbf{C}\\), and each limit in \\(\mathbf{D}\\) of the image diagram \\(F \circ C\\), there is a unique cone in \\(\mathbf{C}\\) which is sent to that limit by \\(F\\), and moreover that cone is itself a limit.
+
+In the example above, \\(F\\) is the forgetful functor Groups to Sets, \\(\mathbf{J}\\) is the ordinal category \\(\omega\\). For each diagram \\(D\\) in Sets of type \\(\omega\\), the colimit of the diagram is given by taking the coproduct of all the \\(D_i\\), and identifying \\(x_n \sim g_n(x_n)\\) (where \\(g_n: D_n \to D_{n+1}\\) is the arrow in \\(D\\) corresponding to the arrow in \\(\omega\\) from \\(n\\) to \\(n+1\\)). Then we can pull this back through the forgetful functor to obtain a corresponding cocone in Groups, and we can check that it's still a colimit. That is, \\(F\\) creates \\(\omega\\)-colimits.
+
+Why does it create all limits? Take a diagram \\(C: \mathbf{J} \to \mathbf{Groups}\\) and limit \\(p_j: L \to U C_j\\) in \\(\mathbf{Sets}\\). Then we need a unique Groups-cone which is a limit for \\(C\\). The Set-limit can be assigned a group structure, apparently. It's obvious how to do that in the case that the shape of the diagram was an ordinal - it's the same as we saw above - but in general…
+
+I'll leave that for the moment, because I want to get on to adjoints sooner rather than later (they're apparently treated very early in the Part III course).
+
+The idea behind the cumulative hierarchy construction is clear in the light of the \\(\omega\\) example above, and this makes it immediately obvious that each \\(V_{\alpha}\\) is transitive. The construction of the colimit is the obvious one (although I keep having to convince myself that it is indeed a colimit, rather than a limit).
+
+What does it mean to have all colimits of type \\(\omega\\)? A diagram of \\(\omega\\)-shape is an \\(\omega\\)-chain. A colimit of that chain would compare bigger than all the elements of the chain (that's "there is an arrow \\(n \to \omega\\)" - that is, "it is a cocone"), and would have the property that if \\(n \leq x\\) for all \\(n\\) then \\(\omega \leq x\\) (that's "the colimit has a map into every other cocone"). The colimit is a "least upper bound" for the specified chain. A monotone map is called continuous if it preserves this kind of least upper bound.
+
+Then we have a restated version of the theorem that "an order-preserving map on a complete poset has a fixed point", which I remember from Part II Logic and Sets. The proof here is very different, though. I follow it through, doing pretty natural things, until "The last step follows because the first term \\(d_0 = 0\\) of the sequence is trivial". Does it actually make a difference? If we remove the first element of the chain, I think it couldn't possibly alter anything in this case, even if the first element were not trivial, because dropping finitely many terms of an \\(\omega\\)-chain doesn't change its least upper bound.
+
+I was a little confused by the statement of the theorem. "Of course \\(h\\) has a least fixed point, because it's well-ordered" was my thought, but obviously that's nonsense because \\([0,1]\\) is not well-ordered. So there is some work to do here, although it's easy work.
+
+The final example seems almost trivial when it's spelled out, but I would never have come up with it myself. It's basically saying "you need to check that your proposed colimit object actually exists, and if it doesn't, you might have to add things to your colimit until it starts existing". I don't know how common a problem this turns out to be in practice, but the dual says that we can't assume naive limits exist either.
+
+# Summary
+
+This was another rather difficult section. Fortunately the exercises come next, and that should help a lot. I've dropped behind a bit on my Anki deck, and need to Ankify the colimits section.
+
+[SE]: http://math.stackexchange.com/a/1448655/259262
diff --git a/hugo/content/awodey/2015-09-29-exponentials-in-category-theory.md b/hugo/content/awodey/2015-09-29-exponentials-in-category-theory.md
new file mode 100644
index 0000000..793459d
--- /dev/null
+++ b/hugo/content/awodey/2015-09-29-exponentials-in-category-theory.md
@@ -0,0 +1,51 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-29T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/exponentials/
+- /exponentials-in-category-theory/
+title: Exponentials in category theory
+---
+
+Now we come to Chapter 6 of Awodey, on exponentials, pages 119 through 128. Supposedly, this represents a kind of universal property which is not of the form "for every arrow which makes this diagram commute, that arrow factors through this one".
+
+First, we define the currying of a function \\(f: A \times B \to C\\), producing a function \\(f(a) : B \to C\\) - that is, a function \\(f(a) \in C^B\\). That is, we view \\(f: A \to C^B\\), defining an isomorphism of homsets between \\(\text{Hom}_{\mathbf{Sets}}(A \times B, C)\\) and \\(\text{Hom}_{\mathbf{Sets}}(A, C^B)\\).
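+
+In Sets this isomorphism is just the familiar curry/uncurry pair; a quick Python sketch:
+
+```python
+def curry(f):
+    # Hom(A x B, C) -> Hom(A, C^B)
+    return lambda a: lambda b: f(a, b)
+
+def uncurry(g):
+    # Hom(A, C^B) -> Hom(A x B, C)
+    return lambda a, b: g(a)(b)
+
+f = lambda a, b: 10 * a + b   # f: A x B -> C
+assert curry(f)(3)(4) == 34
+# The two operations are mutually inverse, witnessing the isomorphism:
+assert uncurry(curry(f))(3, 4) == f(3, 4)
+```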
+
+Now, we try to generalise this construction, by generalising the "currying" construct to allow for more kinds of evaluation. We just need a way to take \\(C^B \times B \to C\\) in a universal way. The resulting diagram is perhaps not something I could have come up with, but it is extremely reminiscent of the UMP of the free monoid.
+
+The general definition of an exponential is then "a way of currying", defined in terms of "a way of evaluating". We get some terminology - the "evaluation" is the way of evaluating, and the "transpose" of an arrow is the curried form. We can also define the transpose of a curried arrow, by giving it a way of evaluating on any input; the UMP tells us that if we transpose twice, we recover the original arrow; therefore, the "curry me" operation is an isomorphism between \\(\text{Hom}_{\mathbf{C}}(A \times B, C)\\) and \\(\text{Hom}_{\mathbf{C}}(A, C^B)\\). (Thinking of all this in terms of currying is probably very harmful in the long run, but so far I think it is helping.)
+
+A category is then Cartesian closed if it has all finite products and exponentials. That is, if we can define multi-variable functions which curry. (Yes, arrows are usually not functions. This is for my beginner's intuition.)
+
+Then Example 6.4, showing that the product of two posets is a poset, and defining the exponential to be the Sets-exponential but with the pointwise ordering on arrows. There is work to do to show that the evaluation is an arrow and that the transpose of an arrow is an arrow.
+
+Restricting to \\(\omega\\)CPOs, we still need to show that \\(Q^P\\) is an \\(\omega\\)CPO. Indeed, given an \\(\omega\\)-chain in \\(Q^P\\), we need to find an upper bound in \\(Q^P\\). Say the chain was \\(f_1, f_2, \dots\\). Then for each \\(p\\), the chain with members \\(f_i(p)\\) has a least upper bound \\(f(p)\\). This defines an order-preserving function because if \\(p \leq q\\) then each \\(f_i(p) \leq f_i(q)\\), and weak inequalities respect the limiting operation. Therefore our prospective exponential is in fact in the category.
+
+\\(\epsilon\\) needs to be \\(\omega\\)-continuous: it needs to respect least upper bounds. Let \\((f_i, p_i)\\) be an \\(\omega\\)-chain in \\(Q^P \times P\\). (I'll take it as read that products exist.) We need that evaluating the least upper bound, \\(\epsilon(f, p)\\), yields the limit of \\(\epsilon(f_i, p_i)\\). This follows from the lemma that if the LUB of \\((f_i)\\) is \\(f\\), and of \\((p_i)\\) is \\(p\\), then the least upper bound of \\((f_i, p_i)\\) is \\((f, p)\\) (which is true: it is an upper bound, while any other upper bound is bigger than it). Then \\(\epsilon(f, p) = f(p)\\) while \\(\epsilon(f_i, p_i) = f_i(p_i)\\), so we do get the result: each \\(f_i(p_i) \leq f(p)\\) because \\(f_i(p_i) \leq f(p_i) \leq f(p)\\), while any other upper bound \\(g\\) would have all \\(f_i(p_i) \leq g\\) so (fixing \\(j\\)) all \\(f_j(p_i) \leq g\\), so all \\(f_j(p) \leq g\\), so (releasing \\(j\\)) \\(f(p) \leq g\\).
+
+Finally, the transpose of an \\(\omega\\)-continuous function needs to be \\(\omega\\)-continuous: let \\(f: A \times B \to C\\) be \\(\omega\\)-continuous. Its transpose is \\(\bar{f}: A \to C^B\\) given by \\(\epsilon \circ (\bar{f} \times 1_B) = f\\). If \\(\bar{f}\\) weren't \\(\omega\\)-continuous, there would be a witness sequence \\((a_i)\\) which had \\(\lim \bar{f}(a_i) \not = \bar{f}(\lim a_i)\\); plugging this into the definition of \\(\bar{f}\\) gives that \\((a_i)\\) is a witness against the \\(\omega\\)-continuity of \\(f\\). Contradiction.
+
+And now for something completely different: an exponential with more structure than previously. I just check the definition of the product graph, because I don't think we had it in our Graph Theory course; it seems to be the obvious one, taking pairs of vertices and corresponding pairs of edges. Then the exponential graph. This is defined so as to have vertices "set-exponential of the vertices", and an edge between \\(\phi: G \to H\\), \\(\psi: G \to H\\) is a \\(e(G)\\)-indexed collection of edges in \\(H\\) which have "the source is where \\(\phi\\) takes the corresponding \\(G\\)-source" and "the target is where \\(\psi\\) takes the corresponding \\(G\\)-target". It's a way of embedding \\(G\\) into \\(H\\) along \\(\phi\\) and \\(\psi\\).
+
+The evaluation is the obvious one given those structures, and the transpose of a map is the curried version of that map. The different thing about this system is the fact that our maps have to have two parts (one for vertices and one for edges).
+
+"Basic facts about exponentials". The transpose of evaluation, without looking at the rest of the page: \\(\epsilon: B^A \times A \to B\\) must transpose to \\(\bar{\epsilon}: B^A \to B^A\\) with \\(\epsilon \circ (\bar{\epsilon} \times 1_{A}) = \epsilon\\). If \\(\epsilon\\) were monic, we could say that \\(\bar{\epsilon} = 1_{B^A}\\) immediately, but it's not monic. Ah, but we do have that \\(\bar{\epsilon}\\) is uniquely specified by the UMP, so it must be \\(1_{B^A}\\) after all. Maybe that'll help me remember things, if nothing else.
+
+A proof that "exponentiation by a fixed object" is a functor: it starts in Set, which makes me worry that representable functors are going to be involved again (because we seem to be able to cast many things as Set-based things). Onwards: currying is certainly functorial in Set because composition of functions is associative, and because we check that the identity curries in the right way.
+
+In general, the definition of the exponential of an arrow \\(\beta: B \to C\\) is the obvious one: there's only one way to make an element of \\(C^A\\) given one in \\(B^A\\) and a map \\(\beta : B \to C\\), and that's to "evaluate at \\(a\\), then do \\(\beta\\)". This method does keep the identity map as an identity: \\(1: B \to B\\) causes \\(f: A \to B\\) to become \\(f: A \to B\\), of course. It respects composition too, as a couple of lines of symbol-manipulation show.
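+
+The functor laws for \\((-)^A\\) in Sets can be spot-checked in Python (pointwise, since Python functions don't compare extensionally; the example data is mine):
+
+```python
+def exp_A(beta):
+    # (-)^A on an arrow beta: B -> C is postcomposition:
+    # beta^A : B^A -> C^A, sending g to beta . g.
+    return lambda g: lambda a: beta(g(a))
+
+def compose(g2, g1):
+    return lambda x: g2(g1(x))
+
+g = lambda a: a + 1       # g: A -> B, an element of B^A
+beta = lambda b: 2 * b    # beta: B -> C
+gamma = lambda c: c - 3   # gamma: C -> D
+
+# Identity law: (1_B)^A leaves g unchanged (checked at a point).
+assert exp_A(lambda b: b)(g)(5) == g(5)
+# Composition law: (gamma . beta)^A = gamma^A . beta^A.
+assert exp_A(compose(gamma, beta))(g)(5) == exp_A(gamma)(exp_A(beta)(g))(5) == 9
+```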
+
+Finally, the transpose of \\(1_{A \times B}\\), which is a map \\(\eta: A \to (A \times B)^B\\). This takes a value \\(a\\) and returns a function \\(b \mapsto (a, b)\\). Then some symbol shunting gives \\(\bar{f} = f^A \circ \eta\\).
+
+![Calculating the exponential][exponential]
+
+# Summary
+
+This section is the one I've thought most concretely about so far. That's probably something I'll have to unlearn. It's useful already being familiar with currying; this chapter would have been a lot harder without already having that intuition.
+
+[exponential]: {{< baseurl >}}images/CategoryTheorySketches/ExponentialEvaluation.jpg
diff --git a/hugo/content/awodey/2015-09-29-limit-exercises.md b/hugo/content/awodey/2015-09-29-limit-exercises.md
new file mode 100644
index 0000000..dfdea99
--- /dev/null
+++ b/hugo/content/awodey/2015-09-29-limit-exercises.md
@@ -0,0 +1,83 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-29T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/limit-exercises/
+- /limit-exercises/
+title: Limits exercises
+---
+
+These are located on pages 114 through 118 of Awodey.
+
+Exercise 1 follows by just drawing out the diagrams for the product and the pullback: they end up being the same diagram and the same UMP.
+
+Exercise 2 a): \\(m\\) is monic iff \\(mx = my \Rightarrow x=y\\); the diagram is a pullback iff for all \\(x: A \to M\\) and \\(y: A \to M\\) with \\(m x = m y\\), there is a unique \\(z: A \to M\\) such that \\(x = z = y\\).
+
+Exercise 2 b): We can draw the cube line by line, checking that each pullback arrow exists, and ending up with a diagram.
+
+![Pullback cube][pullback]
+
+We still need the pullback of the pullback square to be a pullback square. If we can prove that \\(P\\) forms a pullback of \\(f \circ f^{-1}(\alpha), \beta\\) then we're done by the two-pullbacks lemma using the square with downward-arrow \\(f^{-1}(\beta): f^{-1}(B) \to Y\\). But it is: if we pull back the "diagonal square" \\(A \times_X B \to X\\) and \\(f\\), then we do get \\(P\\), and so all the commutative properties hold.
+
+Exercise 2 c): this follows by drawing out the diagram. We pull back the "\\(m\\) is monic" square along \\(f\\) to obtain the "\\(m'\\) is monic" square; this is a pullback because of the "\\(f, m\\) pull back to \\(m'\\)" square.
+
+Exercise 3: Let \\(x', y': R \to M'\\) with \\(m' x' = m' y'\\). Then \\(f m' x' = f m' y'\\); while labelling the unlabelled arrow in Awodey's diagram \\(\alpha\\), have \\(m \alpha x' = m \alpha y'\\) because the diagram commutes. But by monicness of \\(m\\), have \\(\alpha x' = \alpha y'\\). By the UMP of the pullback, there is a unique arrow \\(r: R \to M'\\) such that \\(\alpha r = \alpha x'\\) and \\(m' r = m' x'\\), and so \\(r=x'\\). Likewise \\(r=y'\\) (since \\(\alpha y' = \alpha x'\\) and \\(m' x' = m' y'\\)). Hence \\(x'=y'\\).
+
+Exercise 4: One direction is easy. Suppose \\(z \in_A M \Rightarrow z \in_A N\\). Let \\(z = m: M \to A\\). Then \\(M \in_A N\\) so \\(M \subseteq N\\).
+
+Conversely, suppose \\(M \subseteq N\\) by means of \\(f: M \to N\\), and \\(z: Z \to A\\) gives \\(Z \in_A M\\). Then \\(z\\) lifts to \\(fz: Z \to N\\), and the entire diagram commutes as required.
+
+Exercise 5 is apparently a duplicate of Exercise 4.
+
+Exercise 6 is very similar in shape to some things we've already proved. Let \\(z: Z \to A\\) be such that \\(fz = gz\\). We need to find \\(\bar{z}: Z \to E\\) such that \\(e \bar{z} = z\\). Since \\(fz = gz\\), the arrow \\(Z \to B \times B\\) by \\(\langle f, g \rangle \circ z\\) is equal to the arrow \\(Z \to B \times B\\) given by \\(\langle 1_B, 1_B \rangle \circ f \circ z\\); so by the UMP of the pullback, there is \\(\bar{z}: Z \to E\\) with \\(e\bar{z} = z\\). That's all we needed.
+
+Exercise 7: we need to show that \\(\text{Hom}_{\mathbf{C}}(C, L)\\) is a limit for \\(\text{Hom}_{\mathbf{C}}(C, \cdot) \circ D = \text{Hom}_{\mathbf{C}}(C, D): \mathbf{J} \to \mathbf{Sets}\\). Equivalently, we need to show that the representable functor preserves products and equalisers, so let \\(p_1: P \to A, p_2: P \to B\\) be a product in \\(\mathbf{C}\\). I claim that \\(p_1' : \text{Hom}_{\mathbf{C}}(C, P) \to \text{Hom}_{\mathbf{C}}(C, A)\\) by \\(p_1': f \mapsto p_1 f\\), and likewise \\(p_2': f \mapsto p_2 f\\), form a product. Indeed, let \\(x_1: X \to \text{Hom}_{\mathbf{C}}(C, A)\\) and \\(x_2: X \to \text{Hom}_{\mathbf{C}}(C, B)\\). Then \\(\langle x_1(z), x_2(z) \rangle\\) is of the form \\(\langle C \to A, C \to B \rangle\\) for all \\(z \in X\\), so there is a unique corresponding \\(C \to P\\) for each \\(z \in X\\). This therefore constructs a product.
+
+Now the equalisers part. Let \\(e: E \to A\\) equalise \\(f, g: A \to B\\), and write \\(f^*, g^*\\) for the images of \\(f, g\\) under the representable functor. Let \\(x: X \to \text{Hom}_{\mathbf{C}}(C, A)\\) be such that \\(f^* x = g^* x\\). We need to lift \\(x\\) over \\(e^*\\). For each \\(z \in X\\), we have \\(x(z): C \to A\\) an arrow in \\(\mathbf{C}\\); this has \\(f \circ x(z) = g \circ x(z)\\), so \\(x(z)\\) lifts to unique \\(\overline{x(z)}: C \to E\\). This specifies a unique morphism \\(X \to \text{Hom}_{\mathbf{C}}(C, E)\\) as required.
+
+Exercise 8: It seems intuitive that partial maps should define a category. However, let's go for it. There is an identity arrow - namely, the pair \\((\vert id_A \vert, A)\\). This does behave as the identity, because the pullback of the identity with anything gives that anything. The composition of arrows is evidently an arrow (because the composition of monos is monic). We just need associativity of composition, which comes out of drawing the diagrams of what happens when we do the triple composition in the two available ways. We can complete each of the two diagrams using the two pullbacks lemma, as in the picture.
+
+![Partial maps associative][partial]
+
+The map \\(\mathbf{C} \to \mathbf{Par}(\mathbf{C})\\) given by \\((f: A \to B) \mapsto (\vert f \vert, A)\\) is a functor: it respects the identity arrow by inspection, while composition is respected by just looking at the diagram. It is clearly the identity on objects, by definition of the partial-maps category.
+
+![Partial maps functor is a functor][Partial maps functor]
+
+Exercise 9: Diagrams is a category: identity arrows are just identity arrows from the parent category; the composition of commutative squares is itself a commutative square (well, rectangle); composing with the identity arrow doesn't change anything. Taking the vertex objects of limits does determine a functor: it takes the identity arrows to identity arrows because taking a diagram to itself means taking its unique limit vertex to itself. It respects domains/codomains, because… well, it just does: if \\(f: D_1 \to D_2\\) in Diagrams, then \\(\lim f\\) is uniquely specified to go from limit-vertex 1 to limit-vertex 2. (By the way, the intuition for what an arrow in this category is, is the placing of one diagram above another with linking arrows between the objects.) Better justification: there is a unique morphism between the limit vertices, because we can use the arrow to determine a collection of morphisms from one limit vertex to the other making \\(D_1\\) into a cone for \\(D_2\\).
+
+The last part follows because \\(\mathbf{Diagrams}(I, \mathbf{Sets})\\) is isomorphic to \\(\mathbf{Sets}^I\\). Sets has all limits, so the theorem holds, and hence there is a product functor. This seems a little nonrigorous, but I can't put my finger on why.
+
+Exercise 10: we've already seen this. I'll state it anyway. The copullback of arrows \\(f: A \to B\\) and \\(g: A \to C\\) is the universal \\(P\\) and arrows \\(p_1: B \to P, p_2: C \to P\\) such that for any \\(b: B \to Z, c: C \to Z\\) with \\(cg = bf\\), there is a unique \\(p: P \to Z\\) with \\(p p_1 = b, p p_2 = c\\), as in the diagram.
+
+![Definition of a pushout][pushout]
+
+The construction of a pushout with coequalisers and coproducts is done by taking the coproduct of \\(B\\) and \\(C\\), and coequalising the two sides of the square.
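+
+That construction can be carried out concretely in finite Sets. Here is a hedged Python sketch (my own code, not from the book): the coproduct is a tagged disjoint union, and the coequaliser glues \\(f(a)\\) to \\(g(a)\\) for each \\(a \in A\\), here via a naive union-find.
+
+```python
+def pushout(A, B, C, f, g):
+    """Pushout of f: A -> B and g: A -> C in finite Sets.
+
+    Built exactly as described above: form the coproduct B + C as a
+    tagged disjoint union, then coequalise the two composites out of A
+    by gluing ('B', f(a)) to ('C', g(a)) for every a in A.
+    """
+    elems = [('B', b) for b in B] + [('C', c) for c in C]
+    parent = {e: e for e in elems}
+
+    def find(x):  # naive union-find representative
+        while parent[x] != x:
+            x = parent[x]
+        return x
+
+    for a in A:
+        rb, rc = find(('B', f[a])), find(('C', g[a]))
+        if rb != rc:
+            parent[rb] = rc
+
+    classes = {}
+    for e in elems:
+        classes.setdefault(find(e), set()).add(e)
+    return {frozenset(c) for c in classes.values()}
+
+# Glue two two-point sets along a single shared point:
+P = pushout({0}, {'b0', 'b1'}, {'c0', 'c1'}, {0: 'b0'}, {0: 'c0'})
+assert len(P) == 3  # b0 and c0 are identified; b1 and c1 stay separate
+```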
+
+Exercise 11: To show that the diagram is an equaliser, we need to show that any \\(z: Z \to \mathbb{P}(X)\\) whose composites with the two \\(\mathbb{P}(r_i): \mathbb{P}(X) \to \mathbb{P}(R)\\) are equal factors uniquely through \\(\mathbb{P}(q): \mathbb{P}(Q) \to \mathbb{P}(X)\\). Any \\(z: Z \to \mathbb{P}(X)\\) assigns a subset of \\(X\\) to each element of \\(Z\\); the condition that it equalises \\(\mathbb{P}(r_1), \mathbb{P}(r_2)\\) is exactly the same as saying that if we take the \\(r_1\\)-inverse image and the \\(r_2\\)-inverse image of the result, then we get the same subset of \\(R\\). Can we make it assign an indicator function on \\(Q\\)? We're going to have to prove that \\(z: Z \to \mathbb{P}(X)\\) maps only into unions of equivalence classes, and then the map will descend.
+
+OK, we have "for each element of \\(Z\\), we pick out a subset of \\(X\\) which has the property that finding everything which that subset twiddles on the left, we get the same set as everything which that subset twiddles on the right". Suppose an element \\(a\\) is in the image of \\(z \in Z\\). Then we must have the entire equivalence class of \\(a\\) in the image set, because \\(\mathbb{P}(r_1)(\{ a \}) = \{ (a, x) \mid x \sim a \}\\) but \\(\mathbb{P}(r_2)(\{ a \}) = \{ (x, a) \mid x \sim a \}\\). These can't be equal unless the only thing in the equivalence class is \\(a\\). The reasoning generalises for when more than one thing is in the image set, by taking appropriate unions. Therefore the map does descend.
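+
+The descent argument can be sanity-checked on a tiny example. A hedged Python sketch (my own, with \\(\mathbb{P}\\) on arrows implemented as inverse image): the subsets equalising \\(\mathbb{P}(r_1)\\) and \\(\mathbb{P}(r_2)\\) turn out to be exactly the unions of equivalence classes, i.e. exactly the image of \\(\mathbb{P}(q)\\).
+
+```python
+from itertools import combinations
+
+def preimage(h, S):
+    """P on arrows: the inverse image of S under h (h given as a dict)."""
+    return frozenset(x for x in h if h[x] in S)
+
+def subsets(s):
+    return [frozenset(c) for k in range(len(s) + 1)
+            for c in combinations(s, k)]
+
+# X = {0, 1, 2} with 0 ~ 1 and 2 on its own; R is the relation as a set
+# of pairs, r1 and r2 its two projections, q the quotient map.
+X = {0, 1, 2}
+R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}
+r1 = {p: p[0] for p in R}
+r2 = {p: p[1] for p in R}
+q = {0: 'a', 1: 'a', 2: 'b'}
+
+equalised = {S for S in subsets(X)
+             if preimage(r1, S) == preimage(r2, S)}
+unions_of_classes = {preimage(q, T) for T in subsets({'a', 'b'})}
+assert equalised == unions_of_classes
+```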
+
+Exercise 12: the limit is such that for any cone, there is a unique way to factor the cone through the limit. What is a cone? It's a way of identifying a subshape of every element of the sequence, such that all other subshapes also appear in this limit subshape. But the only shape in \\([0]\\) is \\([0]\\), so the limit must be isomorphic to \\([0]\\).
+
+The colimit must be \\(\omega\\). Indeed, a cocone is precisely an identification of a subset which contains an \\(\omega\\)-wellordered subset, and the colimit is the smallest \\(\omega\\)-well-ordered subset.
+
+Exercise 13 a): The limit of \\(M_0 \to M_1 \to \dots\\) is just \\(M_0\\) - same reasoning as Exercise 12 - so it's an abelian group. It seems like the colimit should also be abelian. Let \\(C\\) be the colimit, and let \\(x, y \in C\\). I claim that there is some \\(n\\) such that \\(x, y \in M_n\\), whence we're done because \\(M_n\\) is abelian. (Strictly, I claim that there is \\(n\\) and \\(\alpha, \beta\\) such that \\(i_n(\alpha) = x, i_n(\beta) = y\\), where \\(i_n\\) is the inclusion.) It's enough to show that there is \\(m\\) and \\(n\\) such that \\(x \in M_m, y \in M_n\\), because then the maximum of \\(m, n\\) would do. If there weren't such an \\(m\\) for \\(x\\), we could take the cocone \\(C \setminus \{ x \}\\), and this would fail to factor through \\(C\\).
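+
+A hedged concrete illustration of the "common \\(M_n\\)" argument (my example, not Awodey's): take \\(M_n = C_{2^n}\\), each included in the next as the index-2 subgroup; the colimit is the Prüfer 2-group, realised as fractions \\(k/2^n\\) modulo 1. Any two elements already live in a common stage, so they commute there.
+
+```python
+from fractions import Fraction
+
+def add(x, y):
+    """Addition in the colimit: fractions with 2-power denominator, mod 1."""
+    return (x + y) % 1
+
+x = Fraction(1, 4)  # lives in M_2 = C_4 (and every later stage)
+y = Fraction(3, 8)  # lives in M_3 = C_8
+
+# Commutativity, inherited from the common stage M_3:
+assert add(x, y) == add(y, x) == Fraction(5, 8)
+
+# Every element has finite order (relevant to Exercise 13 b):
+order, z = 1, y
+while z != 0:
+    z = add(z, y)
+    order += 1
+assert order == 8
+```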
+
+I then had a clever but sadly bogus idea: the second diagram is the same as the first but in the opposite category. Therefore, by duality, colimits and limits swap, so the limits and colimits are indeed abelian. This is bogus because the opposite category of Monoids is not Monoids, so we're not working in the right category any more.
+
+Let's go back to the beginning. The colimit is \\(N_0\\) by the same reasoning that made the limit of the \\(M_i\\) sequence be \\(M_0\\). That means it's an abelian group. Taking the limit of \\(N_0 \gets N_1 \gets \dots\\): our limit is a shape \\(L\\) which is in \\(N_0\\), which is itself an image of \\(N_1\\), which… This is a kind of generalised intersection, and the (infinite) intersection of abelian groups is an abelian group, so the intuition should be that the limit is also an abelian group.
+
+Someone on Stack Exchange [gave a cunning way to continue][SE], considering the involution \\(x \mapsto x^{-1}\\). I don't know if I'd ever have come up with that.
+
+Exercise 13 b): now they are all finite groups. The limit of the \\(M_i\\) is \\(M_0\\), so this certainly has the "all elements have orders" property. The colimit of the \\(N_i\\) is \\(N_0\\), so likewise. The colimit \\(M\\) of the \\(M_i\\): every element \\(x\\) appears in some \\(M_x\\) (and all later ones) as above, and it must have an order in those groups, so it has an order in \\(M\\) too (indeed, each \\(M_i\\) is a subgroup of \\(M\\)). The limit of the \\(N_i\\): what about \\(C_2 \gets C_2^2 \gets C_2^3 \gets \dots\\), each arrow being the quotient by the first coordinate? No, the limit of that is \\(C_2^{\mathbb{N}}\\) in which every element has order 2. If we use \\(C_{n!}\\) instead? Ugh, I'm confused. I'll leave this for the moment and try to press on. If it becomes vital to understand limits in great detail in the time left before my course starts, I'll come back to this.
+
+[pullback]: /images/CategoryTheorySketches/PullbackCube.jpg
+[Partial maps functor]: /images/CategoryTheorySketches/PartialMapsFunctor.jpg
+[partial]: /images/CategoryTheorySketches/PartialMapAssociative.jpg
+[pushout]: /images/CategoryTheorySketches/PushoutDefinition.jpg
+[SE]: https://math.stackexchange.com/a/1454266/259262
diff --git a/hugo/content/awodey/2015-09-30-heyting-algebras.md b/hugo/content/awodey/2015-09-30-heyting-algebras.md
new file mode 100644
index 0000000..18b203d
--- /dev/null
+++ b/hugo/content/awodey/2015-09-30-heyting-algebras.md
@@ -0,0 +1,49 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-09-30T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/heyting-algebras/
+- /heyting-algebras/
+title: Heyting algebras
+---
+
+Now that we've had the definition of an exponential, we move on to the Heyting algebra, pages 129 through 131 of Awodey. This is still in the "exponentials" chapter. I stop shortly after the definition of a Heyting algebra, so as to move on to the more general stuff which is more relevant to the Part III course.
+
+The first thing to come is the definition of an exponential \\(b^a\\) in a Boolean algebra \\(B\\) (regarded as a poset category). Without looking at the definition, I draw out a picture. We need to find \\(c^b\\) and \\(\epsilon: c^b \times b \to c\\) such that for all \\(f: a \times b \to c\\) there is \\(\bar{f}: a \to c^b\\) unique with \\(\epsilon \circ (\bar{f} \times 1_b) = f\\).
+
+The first thing to note is that arrows are already unique if they exist, because we are in a poset category, so we don't have to worry about uniqueness of \\(\bar{f}\\). Then note that \\(f: a \times b \to c\\) is nothing more nor less than the statement that \\(a \times b \leq c\\) - that is, that the greatest lower bound of \\(a\\) and \\(b\\) is \\(\leq c\\), or that \\(c\\) is not a lower bound for both \\(a\\) and \\(b\\) simultaneously (assuming \\(a \times b \not = c\\)). The definition of \\(\bar{f}\\) is precisely the statement that \\(a \leq c^b\\), and \\(\epsilon\\) says precisely that the GLB of \\(c^b\\) and \\(b\\) is \\(\leq c\\).
+
+In order to piece this together, we're going to want to know what the product of two arrows looks like. We're in a poset category, so it comes from "propagating the two arrows downwards until they hit a common basepoint, and taking that arrow": it is the arrow between the GLB of the domains and the GLB of the codomains. Therefore the product arrow \\(\bar{f} \times 1_B\\) is the arrow between the GLB of \\(a, b\\) and the GLB of \\(c^b, b\\).
+
+![Product of arrows][arrow product]
+
+Therefore the following picture is justified.
+
+![Exponential in boolean category][exponential]
+
+What could \\(c^b\\) be? If we let \\(f\\) be the arrow \\(\text{GLB}(c^b, b) \to c^b\\), then \\(\bar{f} = f\\), and \\(\bar{f} \times 1_b\\) is the identity arrow on that GLB. I don't know if this is helping, and I'm forced to look at the book.
+
+The book gives \\(c^b\\) as \\((\neg b \vee c)\\), the LUB of \\(\neg b\\) and \\(c\\). This certainly does have an appropriate evaluation arrow and it is an exponential (having worked through the lines in the book), but I really don't see how one could have come up with that.
+
+A Heyting algebra has finite intersections, unions and exponentials (where \\(a \Rightarrow b\\) is defined such that \\(x \leq (a \Rightarrow b)\\) iff \\((x \wedge a) \leq b\\)). What does this exponential really mean? In a Boolean algebra, it's an object which has as its subsets precisely those things which intersect with \\(a\\) to give a subset of \\(b\\). I can draw that in terms of a Venn diagram.
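+
+The defining property can be machine-checked in a small powerset algebra. A hedged Python sketch (my own): in \\(\mathbb{P}(\{1,2,3\})\\), the Boolean-algebra formula from above, transcribed as \\(a \Rightarrow b := \neg a \vee b\\), satisfies \\(x \leq (a \Rightarrow b)\\) iff \\((x \wedge a) \leq b\\) for every choice of \\(x, a, b\\).
+
+```python
+from itertools import combinations
+
+U = frozenset({1, 2, 3})
+
+def subsets(s):
+    return [frozenset(c) for k in range(len(s) + 1)
+            for c in combinations(s, k)]
+
+def implies(a, b):
+    """The Boolean-algebra candidate exponential: (not a) or b."""
+    return (U - a) | b
+
+# The defining adjunction of the Heyting exponential:
+#   x <= (a => b)   iff   (x and a) <= b
+# (frozenset's <= is subset inclusion, & is intersection)
+for a in subsets(U):
+    for b in subsets(U):
+        for x in subsets(U):
+            assert (x <= implies(a, b)) == ((x & a) <= b)
+```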
+
+The distributive property holds, as I write out myself given the first line.
+
+Now the definition of a complete poset (which I already know as "all subsets have a least upper bound"). Why is completeness equivalent to cocompleteness? In a Boolean algebra, this is easy because "join" is "complement-meet-complement". Actually, I'm now a bit confused: \\(\omega\\), the first infinite well-ordering, is not complete as a poset, but it certainly looks cocomplete. I check the definition of "complete" again to see if I'm going mad, and I see that it's "all limits exist", not just "\\(\omega\\)-limits exist". But then why does the book say "a poset is complete if it is so as a category - that is, if it has all set-indexed meets"?
+
+OK, \\(\omega\\) has a meet - namely \\(0\\) - but for it to have a join, we need \\(a \in \omega\\) such that for any \\(c \in \omega\\), all elements of \\(\omega\\) are \\(\leq c\\) iff \\(a \leq c\\). Since \\(c+1 \not \leq c\\), we must have \\(a \not \leq c\\): that is, \\(a\\) is bigger than all members of \\(\omega\\). Therefore \\(\omega \subseteq \omega\\) doesn't have a join. Can we find a corresponding subset of \\(\omega\\) without a meet? No: the meet of any nonempty subset of a well-ordered set is just its least element. I'm horribly confused, so I've asked on [Stack Exchange]; the reply came that the corresponding meetless subset is \\(\emptyset\\), which I forgot to consider.
+
+OK, let's try again. Suppose our poset has a meetless subset \\((a_i)\\) - that is, one which doesn't have a greatest lower bound. Remember, our poset might not have a terminal object, so actually we might have to change this into a proof by contradiction rather than contrapositive: let's assume all subsets have joins, so in particular there is a terminal object (the empty join). I would love to say "Then the corresponding complement of \\(\{ a_i \}\\) has no join, because its least upper bound is a greatest lower bound for \\(\{ a_i \}\\)", but \\(\{ 1 \} \subset \omega\\) has \\(1\\) as its LUB, but its complement has \\(0\\) as its GLB. However, what I could say is "Let \\(\{ b_i \}\\) be the set of elements which are less than every element of \\(a_i\\). This doesn't have a least upper bound, because that would be a GLB of \\(a_i\\)." That's better.
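+
+The repaired argument - the join of the set of common lower bounds is a meet - can be checked on a small complete poset. A hedged sketch (my own example, not the book's): the divisors of 12 under divisibility, where join is lcm and meet is gcd.
+
+```python
+from math import gcd
+
+P = [1, 2, 3, 4, 6, 12]  # divisors of 12, ordered by divisibility
+
+def leq(x, y):
+    return y % x == 0
+
+def join(xs):
+    out = 1  # the empty join is the bottom element, 1
+    for x in xs:
+        out = out * x // gcd(out, x)  # lcm
+    return out
+
+def meet_via_joins(S):
+    """Meet of S computed as the join of the set of common lower bounds."""
+    lower = [b for b in P if all(leq(b, s) for s in S)]
+    return join(lower)
+
+assert meet_via_joins([4, 6]) == 2   # agrees with gcd(4, 6)
+assert meet_via_joins([]) == 12      # the empty meet is the top element
+```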
+
+The power set algebra is certainly a complete Heyting algebra, as I mentioned above with the Venn diagram, or by Awodey's reasoning with the distributive law. The statement that Heyting algebras correspond to intuitionistic propositional calculus (where excluded middle may not apply) is pretty neat, but I'm afraid I'm still a bit lost.
+
+The next section is on propositional calculus, where Awodey provides a set of axioms for intuitionistic logic.
+
+At this point, I was told that exponentials don't really turn up in the Part III course, and since my aim here is to get an advantage in terms of the course, I'm skipping to the next chapter.
+
+[arrow product]: {{< baseurl >}}images/CategoryTheorySketches/ArrowProduct.jpg
+[exponential]: {{< baseurl >}}images/CategoryTheorySketches/ExponentialInBooleanAlgebra.jpg
+[Stack Exchange]: http://math.stackexchange.com/q/1459373/259262
diff --git a/hugo/content/comparisons/index.md b/hugo/content/comparisons/index.md
new file mode 100644
index 0000000..4d53802
--- /dev/null
+++ b/hugo/content/comparisons/index.md
@@ -0,0 +1,82 @@
+---
+lastmod: "2021-05-21T18:23:48.0000000+01:00"
+title: Product comparisons
+author: patrick
+layout: page
+---
+
+This page is an ongoing history of product comparisons, made informally but blindly.
+
+# Fig rolls
+
+Compared Boland with McVitie's, 2022-04-18.
+It was pretty obvious to me in the blind testing that the McVitie's ones were the big-brand ones, but Boland were much nicer.
+Tasted more of fig, and had less pastry around them.
+
+# Miso soup
+
+I'm still on a quest for the perfect miso soup; I remember one that was really great from my childhood, and I don't know which it was.
+
+Clearspring Organic Japanese Brown White miso soup is really insipid.
+Their "hearty red" tastes dark and kind of a bit unpleasant to me.
+Their "mellow white with tofu and green onions" seemed oddly vegetal on the first tasting, but on re-tasting seems good. One to compare with the others. Not mind-blowing.
+Their "instant miso soup with sea vegetables" is a bit darker than the "mellow white", also perfectly adequate; should taste-test against the others. Not mind-blowing.
+
+## Clearspring Organic White Miso Instant Soup Paste vs Itsu Miso'easy Traditional Miso
+
+These were quite different from each other.
+Itsu was powerful; Clearspring was subtle.
+Initially I thought that Itsu won by a country mile, but actually I could imagine a mood in which Clearspring wins; it's not totally clear cut.
+Itsu is still the victor.
+
+## Yutaka
+
+This was actually pretty good, though I didn't have it in a comparison with any other.
+I'll buy this again.
+
+# Houmous
+
+A standard useful thing to have around when you need calories.
+
+If you want to push the boat out, Natoora Spring Herb Houmous is very nice but rather expensive.
+
+## Ocado own-brand vs Tesco own-brand
+
+There's no contest here.
+Ocado tastes of tahini; Tesco is just a bit insipid.
+Ocado is the clear winner.
+
+# Baked beans
+
+Heinz beats Sainsbury's own brand by a country mile.
+It's not remotely close.
+Sainsbury's is sickly-sweet.
+
+Marks and Spencer beans are perfectly fine. I didn't try them side-by-side with Heinz, but I'd say they're just as good.
+
+# Meat replacements
+
+Beyond Burgers are pretty good, honestly, although they are much better when cooked correctly (i.e. not overdone).
+Honest Burger on Tottenham Court Road did them well; Neat Burger also do them well.
+
+I was unable to tell the difference between Impossible (in Boston, 2020) and real meat.
+However, according to the person I was with, it was extremely obvious which was which.
+So your mileage probably will vary.
+
+# Tinned tuna
+
+Sainsbury's Taste the Difference Albacore tuna is really good: meaty chunks, not flakes.
+Similarly Waitrose own-brand tuna.
+
+Princes is clearly cheap and flaky.
+
+Marks and Spencer Tuna Steaks are somewhere in between - not as good as Waitrose or the Taste the Difference Sainsbury's, but much better than Princes.
+
+# Nutella
+
+The "cocoa" one is much less sweet, though I'm not sure I'd call the taste "chocolate" - it just tastes less nutty.
+I think I prefer the normal one.
+
+# Tinned tomatoes
+
+Marks and Spencer own-brand are, bizarrely, not very full of tomatoes. I tried making the Dishoom Chicken Ruby recipe with them and it took an extra tin compared with when I made the same thing with Waitrose own-brand.
diff --git a/hugo/content/films/index.md b/hugo/content/films/index.md
new file mode 100644
index 0000000..0bf4aed
--- /dev/null
+++ b/hugo/content/films/index.md
@@ -0,0 +1,130 @@
+---
+lastmod: "2023-09-02T13:30:00.0000000+01:00"
+title: Films
+author: patrick
+layout: page
+---
+
+This page holds a list of films I have watched, spoiler-free, starting from 9th January 2015.
+
+* [A Haunting in Venice](https://www.imdb.com/title/tt22687790/): Well, I really enjoyed this, and I think I was surrounded by heathens in the cinema. I successfully called precisely none of the plot, and it all tied up so neatly. Ariadne Oliver will always be Zoë Wanamaker to me, but I believed Kenneth Branagh. Top-tier Poirot.
+
+* [Oppenheimer](https://en.wikipedia.org/wiki/Oppenheimer_(film)): Brilliant. It was a little too long, but I couldn't pick out anything to take away. Great acting, great filming, a bit harrowing.
+
+* [Barbie](https://en.wikipedia.org/wiki/Barbie_(film)): Pretty good. I did feel a bit like I was having messages shoved down my throat - so many different messages! - the whole way through. Enjoyable watch, though.
+
+* [Man of Steel]: Not much to say here, really. A film with no apparent merit at all, and so long, too! There was one single cute moment, where Superman gets thrown into a sign saying "106 Days Without an Accident", knocking off the digits 1 and 6 to produce "0 Days Without an Accident" as he destroys the building site.
+
+* [Pride and Prejudice and Zombies]: We watched this because we wanted a zom-rom-com. While perhaps not strictly a rom-com, it did not disappoint. Funny throughout. While I can never tell the Bennets apart at the best of times, it doesn't really matter.
+
+* [Bohemian Rhapsody]: I was glued to this the whole way through. It was touching in all the right places, heroic in all the right places, and generally just a great introduction to some history I actually knew surprisingly little about. Would watch again.
+
+* [Good Omens][Good Omens Wikipedia]: The book is laugh-out-loud funny, and the series (written by Neil Gaiman) did not disappoint. Very true to the book, many excellent actors acting excellently.
+
+* [Star Wars: The Last Jedi][TLJ]: I know this has sparked some mixed reviews, but I absolutely loved this. There is a particular scene (spoiler-free words to identify the scene: light speed, silence, escape) which was an absolute treat, but every scene with a lightsaber or with Rey experiencing the Force is great. I could watch this again and again.
+
+* [Arrival][Arrival IMDB]: I loved this. It felt like it was trying to be cerebral, and it largely succeeded. The end dragged on a bit, and there was a little bit of rabbit-out-of-a-hattery, but some great ideas. Could have done with a bit more exploring of the language.
+
+* Netflix's House of Cards (not technically a film but a TV series): I loved the whole series. I really understand the motivation of most of the characters; they are unabashedly evil but I can see nuanced reasons for *why* they're evil. I described this series as "the Count of Monte Cristo, but more so", which I think is fair.
+
+* [TRON: Legacy][Tron Legacy IMDB]: Ehhhh. Flashy graphics, minimal and predictable plot. Computers! Technology! Quantum teleportation and genetic algorithms! Perhaps I'm just getting cynical in my old age. At least it's better than Hackers (below).
+
+* [Suicide Squad][Suicide Squad IMDB]: This was mildly amusing, though the plot was a bit simple and predictable. With some minor changes, it could have been a much more interesting film. Flashy, anyway.
+
+* [Metropolis][Metropolis IMDB]: I can see why this film is considered iconic. Its depiction of technology must have been well ahead of its time, and I think it's a good film. It does an excellent job of displaying peril and similar. The problem is that it's very long for my modern-day attention span.
+
+* [Ender's Game][Ender's Game IMDB]: Ehhhh. It was watchable. The book was substantially better, although the book wasn't any better than "moderately good". Don't bother with this film.
+
+* [Eddie the Eagle][Eddie the Eagle IMDB]: I read a review of this film, calling it "saccharine". While everyone else seemed to love it, I thought it was watchable but too sickly.
+
+* [Donnie Darko][Donnie Darko IMDB]: Given that I'd heard Primer was "Donnie Darko for grown-ups", I was expecting a bit more from this film. It was pretty good anyway, but somewhat less satisfying than I'd hoped. A watchable sort-of-horror film.
+
+* [Children of Men][Children of Men IMDB]: The premise of this film is very promising, but to be honest, it was mainly just used to set up a fairly run-of-the-mill action movie. Not a bad action movie, but there are certainly better ones (such as any of the Bourne series).
+
+* [Hackers][Hackers IMDB]: Good grief. This is literally the worst film I have ever seen. Apparently this was Angelina Jolie's first film. I'm shocked her career took off, after this. Such realistic. Many technical. Very hackers. Wow.
+
+* [V for Vendetta][V IMDB]: Wow. This is possibly my favourite film ever. All the Alan Moore-related films I've seen have been excellent, but this is even better than Watchmen, and I was not expecting any film to achieve that.
+
+* [Memento][Memento IMDB]: I loved this film. It's a kind of thriller, about someone who has no ability to lay down new memories. I don't want to say much more about it, for spoiler purposes, but it was excellent. Contains several members of the cast of The Matrix.
+
+* [Life of Pi][Life of Pi IMDB]: a very pretty film, with some gently funny bits, but if I'd paid to see it in a cinema I think I'd have been a bit disappointed. A film to go to sleep to, as a friend described it.
+
+* [Star Wars VII][Star Wars VII IMDB]: this is one of two films I've ever seen after which the audience applauded; the other was [The King's Speech][King's Speech IMDB]. Good fun, and not the disappointment I was expecting.
+
+* [Airplane!][Airplane IMDB]: brilliant and extremely random comedy. Many very quotable lines.
+
+* [Some Like It Hot][SLIH IMDB]: one of the funniest sort-of-rom-coms ever made. Falls into the "we watch this when ill". It contains a cross-dressing Tony Curtis. What more is there to want from a film?
+
+* [My Favourite Wife][My Favourite Wife IMDB]: another happy Cary Grant film. Not much more to say, really. Great fun. A bit less funny than Bringing Up Baby (below).
+
+* [Bringing Up Baby][Bringing Up Baby IMDB]: very laugh-out-loud comedy with Cary Grant and Katharine Hepburn. It's one of the films my family watches if any of us is ill - quiet but hilarious.
+
+* [The Importance of Being Earnest][Earnest IMDB]: extremely funny, very British, and has all the right people in it. Great film for cheering-up.
+
+* [The Lion King][Lion King IMDB]: Good heavens, this was boring. Scar was the only remotely good character, and he got nowhere near enough screen time. Otherwise, just a deathly dull film.
+
+* [Inside Out][Inside Out IMDB]: sweet adventure film centred on the personified emotions of a girl who is undergoing a major life event. A [feels]y film; nearly everyone I know loved it, and I certainly did.
+
+* [Mad Max: Fury Road][Mad Max IMDB]: mildly entertaining action/post-apocalyptic film, which was basically just one very long action scene. I could have done with about an hour less of this film.
+
+* [Avengers: Age of Ultron][Ultron IMDB]: flashy and pretty fun action film, lots of nods to earlier films in the series, but a bit less plotty than I'd hoped. Even the usual suspension of disbelief wasn't quite enough. You get a bit tired of whizz-bang effects after having them nonstop for two hours.
+
+* [Kingsman][Kingsman IMDB]: very fun action film which I'd happily see again. Colin Firth is particularly good in his role.
+
+* [The Borrowers][Borrowers IMDB]: I loved the book when I was much smaller, but sadly even the presence of Stephen Fry and Victoria Wood isn't enough to make up for the mess they made of the plot. Arrietty was made into a complete idiot who did pretty much exactly the wrong thing at every turn.
+
+* [Interstellar][Interstellar IMDB]: as of this writing, [my favourite film][Interstellar review] in its genre (disaster movie with space/time shenanigans, I suppose).
+
+* [Inception][Inception IMDB]: kind of predictable, and while people said things like "it's a game of chess" and "mind-bending", it really wasn't. Not a bad film per se, but Predestination and Primer (below) are just better.
+
+* [Predestination][Predestination IMDB]: I called the plot from about halfway through, but lots of other people said they didn't and it was all quite mysterious to them. Great film for those who want a more interesting Inception.
+
+* [Primer][Primer IMDB]: mind-boggling time-travel film whose events I still don't properly understand. One of the most cerebral films I've ever seen.
+
+* [Now You See Me][NYSM IMDB]: magic-tricks/heist film which really didn't live up to its trailer. I predicted essentially the entire plot from about ten minutes in, and the film was miles too long for the amount of interesting stuff which happened in it.
+
+* [The Illusionist][Illusionist IMDB]: film with magic and political intrigue, which I really enjoyed.
+
+* [Limitless][Limitless IMDB]: fun, if non-cerebral, film about someone who becomes much more intelligent than normal. Much better than Now You See Me, which I think was aiming for the same general effect.
+
+[Tron Legacy IMDB]: https://www.imdb.com/title/tt1104001
+[Metropolis IMDB]: https://www.imdb.com/title/tt0017136
+[Suicide Squad IMDB]: https://www.imdb.com/title/tt1386697/
+[Hackers IMDB]: https://www.imdb.com/title/tt0113243/
+[V IMDB]: https://www.imdb.com/title/tt0434409/
+[Memento IMDB]: https://www.imdb.com/title/tt0209144/
+[Interstellar IMDB]: https://www.imdb.com/title/tt0816692/
+[Predestination IMDB]: https://www.imdb.com/title/tt2397535/
+[Primer IMDB]: https://www.imdb.com/title/tt0390384
+[Inception IMDB]: https://www.imdb.com/title/tt1375666/
+[NYSM IMDB]: https://www.imdb.com/title/tt1670345/
+[Illusionist IMDB]: https://www.imdb.com/title/tt0443543/
+[Limitless IMDB]: https://www.imdb.com/title/tt1219289/
+[Borrowers IMDB]: https://www.imdb.com/title/tt1975269/
+[Kingsman IMDB]: https://www.imdb.com/title/tt2802144/
+[Ultron IMDB]: https://www.imdb.com/title/tt2395427/
+[Inside Out IMDB]: https://www.imdb.com/title/tt2096673/
+[Mad Max IMDB]: https://www.imdb.com/title/tt1392190/
+[Lion King IMDB]: https://www.imdb.com/title/tt0110357
+[Earnest IMDB]: https://www.imdb.com/title/tt0278500
+[Bringing Up Baby IMDB]: https://www.imdb.com/title/tt0029947/
+[My Favourite Wife IMDB]: https://www.imdb.com/title/tt0029284/
+[SLIH IMDB]: https://www.imdb.com/title/tt0053291/
+[Airplane IMDB]: https://www.imdb.com/title/tt0080339/
+[Star Wars VII IMDB]: https://www.imdb.com/title/tt2488496/
+[King's Speech IMDB]: https://www.imdb.com/title/tt1504320
+[Life of Pi IMDB]: https://www.imdb.com/title/tt0454876/
+[Donnie Darko IMDB]: https://www.imdb.com/title/tt0246578
+[Children of Men IMDB]: https://www.imdb.com/title/tt0206634
+[Eddie the Eagle IMDB]: https://www.imdb.com/title/tt1083452
+[Ender's Game IMDB]: https://www.imdb.com/title/tt1731141/
+[Arrival IMDB]: https://www.imdb.com/title/tt2543164/
+[TLJ]: https://www.imdb.com/title/tt2527336/
+[Good Omens Wikipedia]: https://en.wikipedia.org/wiki/Good_Omens_(TV_series)
+[Bohemian Rhapsody]: https://en.wikipedia.org/wiki/Bohemian_Rhapsody_(film)
+[Pride and Prejudice and Zombies]: https://www.imdb.com/title/tt1374989/
+[Man of Steel]: https://www.imdb.com/title/tt0770828/
+
+[Interstellar review]: {{< ref "2014-12-09-film-recommendation-interstellar" >}}
+
+[feels]: https://knowyourmeme.com/memes/feels
diff --git a/hugo/content/lifehacks/index.md b/hugo/content/lifehacks/index.md
new file mode 100755
index 0000000..4cbf06a
--- /dev/null
+++ b/hugo/content/lifehacks/index.md
@@ -0,0 +1,29 @@
+---
+lastmod: "2022-12-31T23:21:44.0000000+00:00"
+title: Lifehacks
+author: patrick
+layout: page
+---
+If I ever become rich and famous, I'm sure I'll be besieged with requests for "how to do better in life". I hereby head such requests off at the pass, by providing a list of [lifehacks] I am either using or considering the use of.
+
+* For learning smallish but numerous facts (such as a list of theorems), I use [Anki], which is a [spaced-repetition] learning system, allowing you to enter flashcards and have them shown to you regularly. The time between repetitions of a certain flashcard changes, depending on how well you've been doing on that flashcard - so marking your performance on a particular card as "easy" rather than "hard" tells Anki that you don't want to see that card for a while. It's a bit like the antithesis of cramming, where you see the material exactly once and use it a short time later; Anki is designed for reviewing the material many times (at an optimal spacing) for recall whenever you need it. The idea is to make use of the [spacing effect] - an extremely powerful memory technique that is currently ignored by almost all methods of formal teaching ([Memrise] is a notable exception; I used Memrise until I used Anki).
+* A surprisingly good way of making myself work when I'm feeling unmotivated is to gather a few like-minded friends and to work in absolute silence with them (possibly on completely unrelated topics). Oddly, I'd not thought of it until reading a LessWrong post on the subject. There's a kind of "all in this together" feeling, as well as the public commitment effect.
+* I use [f.lux], an application which dims and tints red the computer screen after dusk. I have no idea whether or not it has any effect on wakefulness at night (that is, whether or not being bathed in a standard blue glow keeps me awake), but it certainly feels nicer on the eye.
+* I am currently in the middle of learning [Dvorak], which is a keyboard layout (QWERTY is the usual one) that is supposedly easier on the hands than QWERTY. It puts vowels all together in easy-to-reach places, and the most common consonants in easy places such that words tend to be made of letters which lie in different hands. (In QWERTY, for example, the word "the" is oddly hard to type, for such a common word - all the characters are away from the home row - but in Dvorak it's just a simple flourish from right to left on the home row.) A friend tells me that [Colemak] is better than Dvorak, but I'd already half-learnt Dvorak by the time ey told me this, and Dvorak interfered heavily with my attempts to learn Colemak. It appears to be much of a muchness, anyway - both are considerably better than QWERTY.
+* I don't know if it qualifies as a lifehack - more of a biohack or something - but [lucid dreaming] is really cool, and it doesn't take an enormous amount of commitment to learn to do (it just requires the setting up of a few habits throughout the day).
+* Something I'm strongly considering when it's set up and running properly is [Soylent], a food substitute being developed by an engineer, which is nutritionally complete and satisfying. As of this writing, the creator has been on it for three and a half months without ill effects (once he'd sorted out the balance, anyway - he discovered a sulphur deficiency at the start of the third month, which is a very hard deficiency to give someone normally!) Currently, someone I know is using [the Exante diet][1] (the presence of a link is not necessarily an endorsement), which consists of similar but very low-calorie meal replacements; this person has been a very interesting source of information on replacing meals in this way. Their main objection to the diet seems to be the monotony, but supposedly Soylent is bland enough not to suffer from this (I could eat bread until the cows came home, for instance, but not chocolate). The Soylent Corp. says that Soylent will get cheaper as the company is set up and grows. (The creator wrote a response, but the link is now dead.)
+* How to get up in the morning: because it's really quite hard to motivate yourself to move, I count down from 10 to 1, with the resolution that on the count of 1 I will get up. It's much easier to motivate yourself to count down from 10 than it is to move your entire body somewhere uncomfortable, and once I'm counting, consistency pressure is enough to make me follow through. I'm careful with this technique - I never use it on anything I'm not absolutely certain to do. It might pollute the technique irreparably if I had an excuse that "oh, once I did this and it didn't work!".
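+
+The growing-interval scheduling described in the first bullet can be sketched in a few lines. This is a toy illustration only - not Anki's actual scheduler, which derives from SuperMemo's SM-2 algorithm - and the grade labels and multipliers are invented for the example:
+
+```python
+def next_interval(days, grade):
+    """Next review gap in days; grade is 'hard', 'good' or 'easy'.
+
+    Toy model of spaced repetition: a failed ('hard') card resets to
+    tomorrow, while successful reviews multiply the gap, so material
+    you know well is shown progressively less often.
+    """
+    if grade == 'hard':
+        return 1  # forgotten: back to tomorrow
+    factor = 2.5 if grade == 'easy' else 2.0
+    return round(days * factor)
+
+# Four successful reviews stretch a daily card out to about a month:
+gap = 1
+for grade in ['good', 'good', 'easy', 'good']:
+    gap = next_interval(gap, grade)
+assert gap == 20  # 1 -> 2 -> 4 -> 10 -> 20
+```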
+
+More to come.
+
+ [1]: https://www.exantediet.com/ "Exante diet"
+ [lifehacks]: https://en.wikipedia.org/wiki/Life_hack
+ [spaced-repetition]: https://en.wikipedia.org/wiki/Spaced_repetition
+ [Anki]: http://ankisrs.net/
+ [spacing effect]: https://en.wikipedia.org/wiki/Spacing_effect
+ [Memrise]: https://www.memrise.com/
+ [f.lux]: https://justgetflux.com/
+ [Dvorak]: https://en.wikipedia.org/wiki/Dvorak_Simplified_Keyboard
+ [Colemak]: http://colemak.com/
+ [lucid dreaming]: https://en.wikipedia.org/wiki/Lucid_dream
+ [Soylent]: https://soylent.com
diff --git a/hugo/content/posts/2013-06-26-cucats-puzzlehunt.md b/hugo/content/posts/2013-06-26-cucats-puzzlehunt.md
new file mode 100644
index 0000000..e5e042f
--- /dev/null
+++ b/hugo/content/posts/2013-06-26-cucats-puzzlehunt.md
@@ -0,0 +1,27 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-06-26T00:00:00Z"
+aliases:
+- /archives/10/index.html
+- /wordpress/archives/10/index.html
+- /uncategorized/cucats-puzzlehunt/
+title: CUCaTS Puzzlehunt
+---
+At the end of last (that is, Lent 2012-2013) term at Cambridge, I took part in the [Cambridge University Computing and Technology Society](https://cucats.org) [Puzzlehunt](https://cucats.org/puzzlehunt) (for some reason, as of this writing, they haven't yet updated that page for this year's Puzzlehunt, but last year's is up there). A short summary: the Puzzlehunt is a treasure hunt around Cambridge, crossed with a whole bunch of online computing-based puzzles. It's very difficult, and it lasts for twenty-four hours.
+
+It was great fun, and while my team was hampered considerably by the fact that (having found out about the event only a day in advance) we had all planned various May Week celebrations to coincide with the first five hours or so of the twenty-four hour competition, we still gave it a good shot and came fifth of about nine, as far as I remember. (Team G, for the win!)
+
+For possibly the first time ever, I adopted a sensible strategy of separating the programs I wrote for each puzzle, and saving them as I went. This means I have a record of my attempts at each puzzle - they're all in the form of [Mathematica](https://www.wolfram.com) notebooks.
+
+My attempts are *extremely* rough-and-ready, being thrown together in the shortest time possible.
+
+Mathematica Notebook files (.nb) can be read through the Wolfram CDF Player, which can be installed free from [the Wolfram website](https://www.wolfram.com/player "Wolfram CDF player page"); the plugin is quite large, so I can release them as PDFs instead if anyone wants. (Using the CDF player gives syntax highlighting and interactivity, not that many of these files will be interactive, because they were made so quickly.)
+
+* [Keyboard Cat](/cucats/Puzzlehunt2013/KeyboardCat.nb)
+* [The Chase](/cucats/Puzzlehunt2013/TheChase.nb)
+
+More to follow, when I've put a bit of explanatory commentary in them.
diff --git a/hugo/content/posts/2013-06-26-first-post.md b/hugo/content/posts/2013-06-26-first-post.md
new file mode 100644
index 0000000..37b8e1e
--- /dev/null
+++ b/hugo/content/posts/2013-06-26-first-post.md
@@ -0,0 +1,16 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+date: "2013-06-26T00:00:00Z"
+aliases:
+- /archives/4/index.html
+- /wordpress/archives/4/index.html
+- /uncategorized/first-post/
+- /first-post/
+title: First post
+---
+Hello all!
+
+In the spirit of shouting into an echoing void, this is my first post, testing whether the setup works. Some content will probably turn up soon.
diff --git a/hugo/content/posts/2013-06-26-sylow-theorems.md b/hugo/content/posts/2013-06-26-sylow-theorems.md
new file mode 100644
index 0000000..88b3d96
--- /dev/null
+++ b/hugo/content/posts/2013-06-26-sylow-theorems.md
@@ -0,0 +1,110 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2013-06-26T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/sylow-theorems/
+- /sylow-theorems/
+title: Sylow theorems
+summary: "A fairly long and winding way through a proof of the three Sylow theorems."
+---
+(This post is mostly to set up a kind of structure for the website; in particular, to be the first in a series of posts summarising some mathematical results I stumble across.)
+
+EDIT: There is now [an Anki deck](/AnkiDecks/SylowTheoremsProof.apkg) of this proof, and a [collection of poems][sylow sonnets] summarising it.
+
+In Part IB of the Mathematical Tripos (that is, second-year material), there is a course called Groups, Rings and Modules. I took it in the academic year 2012-2013, when it was lectured by [Imre Leader](https://en.wikipedia.org/wiki/Imre_Leader). He told us that there were three main proofs of the [Sylow theorems](https://en.wikipedia.org/wiki/Sylow_theorems), two of which were horrible and one of which was nice; he presented the "nice" one. At the time, I thought this was the most beautiful proof of anything I'd ever seen, although other people have told me it's a disgusting proof.
+
+# Theorem - the Sylow Theorems
+
+Let \\(G\\) be a group, of order \\(p^k m\\) for some prime \\(p\\), where the [HCF](https://en.wikipedia.org/wiki/Greatest_common_divisor) \\((p,m) = 1\\). Then:
+
+1. There is a subgroup \\(H\\) of \\(G\\), of order \\(p^k\\) (a Sylow p-subgroup);
+2. All such subgroups are conjugate to each other;
+3. The number of such subgroups, \\(n_p\\), satisfies \\(n_p \equiv 1 \pmod p\\) and \\(n_p \mid m\\).
+
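+As a concrete illustration (my own example, not from the lectures): take \\(G = S_4\\), of order \\(24 = 2^3 \times 3\\). For \\(p = 2\\), the third theorem demands \\(n_2 \equiv 1 \pmod 2\\) and \\(n_2 \mid 3\\), so \\(n_2\\) is \\(1\\) or \\(3\\); in fact \\(n_2 = 3\\) (the three dihedral subgroups of order \\(8\\)). For \\(p = 3\\), it demands \\(n_3 \equiv 1 \pmod 3\\) and \\(n_3 \mid 8\\), so \\(n_3\\) is \\(1\\) or \\(4\\); in fact \\(n_3 = 4\\) (the subgroups generated by the \\(3\\)-cycles).
+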
+# Proof
+
+The proof goes as follows: pick a p-subgroup \\(P\\) of maximal size; then introduce its normaliser \\(N\\), and show that the orbit of \\(P\\) under conjugation by \\(G\\) is precisely the set of Sylow p-subgroups.
+
+## First Sylow theorem
+
+The proof starts out in a natural way, by naming a subgroup \\(P\\) of order \\(p^a\\) for some \\(a\\). Such a subgroup certainly exists, by [Cauchy's Theorem](https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(group_theory)) (which gives the case \\(a=1\\)). If we select \\(a\\) to be maximal, then we wish to show that \\(a=k\\), or equivalently (which seems even easier) that \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\).
+
+Now, how do we show that \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\)? Well, we don't know anything about such a quotient unless \\(P\\) is normal in \\(G\\). But we can't guarantee this - so let's introduce a subgroup, \\(N\\), in which \\(P\\) is normal. The natural one to pick, because we're trying to make the subgroup as big as possible, is the [normaliser](https://en.wikipedia.org/wiki/Centralizer_and_normalizer) \\(N(P)\\) - that is, \\(\{g : g P g^{-1} = P\}\\), or \\(Stab_G(P)\\) under the conjugation action. This is the largest subgroup of \\(G\\) in which \\(P\\) is normal.
+
+Then we want to show that \\(\dfrac{ \vert G \vert }{ \vert N \vert } \times \dfrac{ \vert N \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\); this is true if and only if neither of the multiplicands is divisible by \\(p\\).
+
+### The second multiplicand
+
+It looks like it will be easier to start with the second multiplicand, because it's got a really really obvious interpretation.
+
+We want to show that \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\). Now, by [Lagrange's Theorem](https://en.wikipedia.org/wiki/Lagrange%27s_theorem_(group_theory)) we have \\(\dfrac{ \vert N \vert }{ \vert P \vert } = \vert \dfrac{N}{P} \vert \\).
+
+Suppose \\( \vert \dfrac{N}{P} \vert \equiv 0 \pmod p\\). Then by Cauchy's Theorem, there is an element \\(h \in \dfrac{N}{P}\\) such that the [order](https://en.wikipedia.org/wiki/Order_(group_theory)) \\(o(h) = p\\); let \\(H = \langle h \rangle\\), the group generated by \\(h\\). But we got to this quotient group \\(\dfrac{N}{P}\\) by applying the projection map \\(\pi : N \rightarrow \dfrac{N}{P}\\), so what happens when we "un-quotient" (that is, apply \\(\pi^{-1}\\))? The preimage \\(\pi^{-1}(H) \leq N\\) has order \\( \vert H \vert \vert P \vert = p \vert P \vert \\), because \\(\pi\\) is a \\( \vert P \vert \\)-to-one mapping; so it is a p-subgroup of \\(G\\) strictly larger than \\(P\\), contradicting the maximality of \\(P\\).
+
+Hence \\( \vert \dfrac{N}{P} \vert \not \equiv 0 \pmod p\\).
+
+### The first multiplicand
+
+The first multiplicand, \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\), is the number of conjugates of \\(P\\), by the [Orbit-Stabiliser Theorem](https://en.wikipedia.org/wiki/Orbit_stabiliser_theorem#Orbit-stabilizer_theorem_and_Burnside.27s_lemma) applied to the conjugation action: the stabiliser of \\(P\\) is \\(N\\), while the orbit of \\(P\\) is simply the set of its conjugate subgroups. We want to show that this is not divisible by \\(p\\). We can do much more with the conjugates themselves, so let \\(X = \{gPg^{-1} : g \in G\}\\).
+
+We would like to show that \\( \vert X \vert \not \equiv 0 \pmod p\\). This expression rings a bell - we've seen it before, as a key idea in the [class equation](https://en.wikipedia.org/wiki/Conjugacy_class#Conjugacy_class_equation). In order to use the class equation, we need to act on \\(X\\). There are only three groups we've met so far: \\(N\\), \\(P\\) and \\(G\\). The group we haven't yet used is \\(P\\), and it's a [p-group](https://en.wikipedia.org/wiki/P-group) (and we know a bit about actions of p-groups). What's the only obvious action to use? It has to be conjugation.
+
+Let \\(P\\) act on \\(X\\) by conjugation. Since the orbits partition the set \\(X\\) and have order dividing \\( \vert P \vert \\), the order of each orbit is one of \\(1, p, p^2, \dots , p^a = \vert P \vert \\). \\(P\\) is clearly in an orbit all of its own (since \\(p P p^{-1} = P\\) for every \\(p \in P\\)). What we really want is for \\(P = e P e^{-1}\\) to be the only conjugate of \\(P\\) which is in its own orbit, because then we have \\( \vert X \vert \equiv 1 \pmod p\\) (since the orbits partition the set).
+
+Suppose we have \\(g\\) such that \\(g P g^{-1}\\) is in an orbit of size 1. Then \\(p g P g^{-1} p^{-1} = g P g^{-1}\\) for all \\(p \in P\\), and so (by conjugating with \\(g^{-1}\\)) we have \\(g^{-1} p g P g^{-1} p^{-1} g = P\\), and so \\(g^{-1} p g\\) stabilises \\(P\\) and so is in \\(N\\). So \\(g^{-1} P g\\) is contained within \\(N\\).
+
+Now, we know that \\(g^{-1} P g\\) is contained within \\(N\\), so we can use functions defined on \\(N\\). We have that \\(\pi : N \rightarrow \dfrac{N}{P}\\) (the quotient map) is a homomorphism with kernel \\(P\\); that is, \\(\pi(P) = \{e\}\\). Now consider \\(\pi(g^{-1} P g)\\): because \\(\pi\\) is a homomorphism, this is \\(\pi(g^{-1}) \pi(P) \pi(g)\\); but \\(\pi(P) = \{e\}\\), so the expression is just \\(\{\pi(g^{-1}) \pi(g)\} = \{\pi(g^{-1} g)\} = \{e\}\\).
+
+Hence \\(g^{-1} P g\\) is contained in the kernel of \\(\pi\\). But it's also the same size as \\(P\\) which is itself the kernel of \\(\pi\\). Hence \\(g^{-1} P g = P\\).
+
+So there is only one orbit of size \\(1\\); since the orbits partition the set and every other orbit has size divisible by \\(p\\), we get \\( \vert X \vert = \dfrac{ \vert G \vert }{ \vert N \vert } \equiv 1 \pmod p\\), which in particular is not divisible by \\(p\\).
+
+This concludes the proof of the first Sylow theorem.
+
+## Second Sylow theorem
+
+Given a Sylow p-subgroup \\(Q\\) of \\(G\\), we want to show that it is conjugate to \\(P\\).
+
+Use \\(X\\) as before, the set \\(\{g P g^{-1} : g \in G \}\\). In the first theorem, we had \\(P\\) acting on \\(X\\); now let's use \\(Q\\) in the same way. We want to show that there is some \\(g \in G\\) such that \\(g^{-1} Q g = P\\), or equivalently that \\(Q \in X\\).
+
+Let \\(Q\\) act on \\(X\\) by conjugation. We have that \\( \vert X \vert \\) is not a multiple of \\(p\\) by the earlier part, but \\(X\\) is a union of orbits, each of size \\(p^s\\) for some \\(s\\). Hence some orbit has size \\(1\\): there is a \\(g \in G\\) such that \\(\{g P g^{-1} \}\\) is an entire orbit under conjugation by \\(Q\\). (That is, there is \\(g \in G\\) such that \\(q g P g^{-1} q^{-1} = g P g^{-1}\\) for all \\(q \in Q\\).) Hence, as before, all elements of \\(g^{-1} Q g\\) fix \\(P\\) under conjugation, and hence \\(g^{-1} Q g \subset N\\).
+
+Now, \\(g^{-1} Q g \subset N\\), so we can apply the projection map \\(\pi\\) to it. We show that \\(\pi(g^{-1} Q g) = \{e\}\\). Indeed, suppose it isn't. Then \\(H = \pi(g^{-1} Q g)\\) is a non-trivial subgroup of \\(\dfrac{N}{P}\\), because \\(g^{-1} Q g\\) is a subgroup of \\(N\\). Its order divides that of \\(g^{-1} Q g\\), because applying a homomorphism to a subgroup yields a subgroup of order dividing that of the original - so it is a non-trivial power of \\(p\\), and in particular a multiple of \\(p\\). Also, its order divides that of \\(\dfrac{N}{P}\\), by Lagrange, because it's a subgroup of \\(\dfrac{N}{P}\\) - and this is not a multiple of \\(p\\). But now we have a multiple of \\(p\\) dividing a non-multiple of \\(p\\): contradiction.
+
+Then \\(\pi(g^{-1} Q g) = \{e\}\\) says precisely that \\(g^{-1} Q g \subset \mathrm{Ker}(\pi) = P\\); and since \\(g^{-1} Q g\\) has the same size as \\(P\\), we conclude that \\(g^{-1} Q g = P\\).
+
+This concludes the proof of the second Sylow theorem.
+
+## Third Sylow theorem
+
+We now want to show that the number \\(n_p\\) of Sylow p-subgroups is \\(1 \pmod p\\) and divides \\(m\\).
+
+We certainly have that \\(n_p = \vert X \vert \\), because every Sylow p-subgroup is a conjugate of \\(P\\), but also every conjugate of \\(P\\) (that is, every member of \\(X\\)) is itself a subgroup of \\(G\\), and has the same size as \\(P\\), so is also a Sylow p-subgroup. Hence, just as before, \\(n_p \equiv 1 \pmod p\\).
+
+Also, \\(n_p\\) is the size of an orbit under conjugation, and hence by the Orbit-Stabiliser Theorem it divides \\( \vert G \vert = p^k m\\); but \\(n_p\\) has no factor of \\(p\\), so it must divide \\(m\\).
+
+This concludes the proof of the third Sylow theorem.
+
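+(A standard application, to see the third theorem at work: if \\( \vert G \vert = 15 = 3 \times 5\\), then \\(n_3 \equiv 1 \pmod 3\\) and \\(n_3 \mid 5\\) force \\(n_3 = 1\\), and similarly \\(n_5 = 1\\). A unique Sylow p-subgroup is normal, since conjugation permutes the Sylow p-subgroups; so \\(G\\) has normal subgroups of orders \\(3\\) and \\(5\\), and it follows that \\(G \cong C_3 \times C_5 \cong C_{15}\\).)
+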
+# Summary
+
+So the proof went as follows:
+
+1. We're looking for information about Sylow p-subgroups, so we pick the maximum possible p-subgroup and hope that it's a Sylow one.
+2. How do we know whether this p-group is Sylow? If \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not divisible by \\(p\\).
+3. What can we do with a quotient? Not much, but we *can* use a quotient of a normal subgroup. We can't guarantee that \\(P\\) is normal in \\(G\\), so we split up the fraction into \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\) and \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\).
+4. What's a good normal subgroup to use? We have a choice. We'll go for the normaliser \\(N = N(P)\\), because that gives a nice interpretation to \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\). (But otherwise, this step seems a bit arbitrary to me.)
+5. Now we'll go for \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\); this is definitely something to do with the quotient group \\(\dfrac{N}{P}\\). Let's imagine its size were divisible by \\(p\\); then we can use Cauchy on \\(\dfrac{N}{P}\\) and get a contradiction on moving back to \\(N\\).
+6. Let's now consider \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\); the normaliser is something to do with conjugates, so we'll consider the conjugation action. Happily, this expression then becomes the size of the orbit of \\(P\\) under the conjugation action; call that orbit \\(X\\).
+7. We need \\( \vert X \vert \not \equiv 0 \pmod p\\). Remember the class equation; we want to act on \\(X\\) using a p-group. \\(P\\) is such a p-group, so we'll let \\(P\\) act on \\(X\\). The only natural action to use is conjugation. We know straight away that \\(P\\) is in an orbit all to itself; we need it to be the only one.
+8. Name a different conjugate of \\(P\\); call it \\(g P g^{-1}\\). We need this to be exactly \\(P\\). It's got the right size already, so we just need it to be contained in \\(P\\). Here comes a leap of faith: what's special about \\(P\\)? It's the kernel of a homomorphism \\(\pi: N \rightarrow \dfrac{N}{P}\\) (because it's a normal subgroup of \\(N\\)). So, after proving that \\(\pi\\) is defined on what we want to give as its arguments (that is, after showing that \\(g P g^{-1}\\) is contained in \\(N\\), or equivalently that all elements of \\(g P g^{-1}\\) stabilise \\(P\\) under conjugation), consider \\(\pi(g^{-1} P g)\\). This is clearly \\(\{e\}\\), and hence \\(g^{-1} P g\\) is in the kernel of \\(\pi\\), and hence is a subset of \\(P\\), as required.
+9. Now the second theorem: all the Sylow p-subgroups need to be conjugate. Name a Sylow p-subgroup \\(Q\\), and have it act on \\(X\\) as above. Then in exactly the same way as in step 7, since \\( \vert X \vert \\) is not a multiple of \\(p\\), we have that there is some \\(h \in G\\) such that \\(\{h P h^{-1}\}\\) is an entire orbit under conjugation by \\(Q\\).
+10. Exactly as in step 8, a conjugate \\(h P h^{-1}\\) is on its own in an orbit, so it is fixed under conjugation by every element in \\(h^{-1} Q h\\). Hence \\(H = h^{-1} Q h\\) is contained within \\(N\\) and we can use \\(\pi\\). Suppose that \\(H\\) is not fully contained in the kernel of \\(\pi\\); then applying \\(\pi\\) to it gives us a subgroup, which must have prime power order (from the fact that \\(h^{-1} Q h\\) had prime power order); it also has order dividing that of \\(\dfrac{N}{P}\\), which is not a multiple of \\(p\\): contradiction.
+11. \\(H\\), a conjugate of \\(Q\\), is hence contained in the kernel of \\(\pi\\). Then since it is of the same size as the kernel, it must be the kernel, but that is \\(P\\).
+12. Now the third theorem: we've just shown that \\(X\\) is precisely the set of Sylow p-subgroups, so \\( \vert X \vert \equiv 1 \pmod p\\) is just what we want (but we've already shown it back in step 8); and since it is also precisely an orbit when \\(G\\) acts on \\(P\\) by conjugation, it must have order dividing that of \\(G\\).
+
+ [sylow sonnets]: {{< ref "2013-08-31-slightly-silly-sylow-pseudo-sonnets" >}}
diff --git a/hugo/content/posts/2013-07-03-in-which-i-augment-the-lexicon.md b/hugo/content/posts/2013-07-03-in-which-i-augment-the-lexicon.md
new file mode 100644
index 0000000..45fcc8b
--- /dev/null
+++ b/hugo/content/posts/2013-07-03-in-which-i-augment-the-lexicon.md
@@ -0,0 +1,27 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-03T00:00:00Z"
+aliases:
+- /uncategorized/in-which-i-augment-the-lexicon/
+- /in-which-i-augment-the-lexicon/
+title: In which I augment the lexicon
+summary: "A few dubiously-real words which I think should be more widely used."
+---
+(This is my first post written in Dvorak; accordingly, it is a bit shorter than I would like, since I am very slow at it. [Tsuyoku naritai](http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/ "I want to become stronger"), and all that.)
+
+A really nice website I've come across in my wanderings is [Pretty Rational], a growing collection of pithy quotes about rationality, illustrated by one Katie Hartman.
+
+![Reality provides us with facts so romantic that imagination itself could add nothing to them][reality]
+
+This particular Jules Verne quote is expounded upon in [a LessWrong post](http://lesswrong.com/lw/or/joy_in_the_merely_real/ "Joy in the Merely Real"), as so many things are, but I can't help noticing that the source of the quote doesn't seem to appear on the Internet. If anyone knows where the quote appears, please let me know! It may turn out to be another Einsteinism - a word I hereby coin to mean "something misattributed to a(n) historical figure whom we think of as wise" - but the quote itself would be undiminished.
+
+Another niche in the language is "[evilogue](http://www.cracked.com/article_18798_6-words-that-need-to-be-invented-5Bcomic5D.html "Evilogue")" - don't click any links on that page, as Cracked is the third-hardest website on the Internet to escape, after [TV Tropes](http://tvtropes.org) and the [SCP wiki](http://scp-wiki.net). An evilogue is claimed in a situation in which someone has asked you for your opinion of (for example) a company, and you hate that company without at this time being able to recall any specific evidence. Then you may state that you have an evilogue, meaning that if ey wants you to, you will find the evidence you were referring to, at your leisure. (Beware, of course, of being unduly influenced by your past opinion - if in the course of your research you find your concerns to be unjustified, do tell the other person and update accordingly. You shouldn't be looking for new evidence, but finding the evidence you used originally.)
+
+My final bestowal on the English language (for the moment) is the word "yop", being a "yes" in response to a negative question. When asked "So I'm not the Pope after all?", the correct answer for most people would be "No" (you're not the Pope); the answer to "So I'm not sentient after all?" would usually (but not necessarily, according to [John Searle](https://en.wikipedia.org/wiki/Philosophical_zombie "P-zombie")) be "Yop" (you are sentient). This avoids the needless ambiguity of "Yes" or the prolixity of "No [or Yes], you are sentient".
+
+[Pretty Rational]: https://web.archive.org/web/20151016143228/http://prettyrational.com/
+[reality]: https://web.archive.org/web/20141012121546/http://prettyrational.com/wp-content/uploads/2013/06/PrettyRational_Reality.jpg
diff --git a/hugo/content/posts/2013-07-04-cambridge-undergrad-maths-tips.md b/hugo/content/posts/2013-07-04-cambridge-undergrad-maths-tips.md
new file mode 100644
index 0000000..45a91e7
--- /dev/null
+++ b/hugo/content/posts/2013-07-04-cambridge-undergrad-maths-tips.md
@@ -0,0 +1,31 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-04T00:00:00Z"
+aliases:
+- /uncategorized/cambridge-undergrad-maths-tips/
+- /cambridge-undergrad-maths-tips/
+title: Cambridge undergrad maths tips
+---
+I wrote this when I was excessively bored during exam term of my first year. It may grow as I get better at working (I'm something of a [revisionist](https://en.wikipedia.org/wiki/Ministry_of_Truth)). The advice is entirely Cambridge-based; a lot of it probably applies to other places with minor alterations. Most of this comes from personal experience.
+
+During a supervision, your supervisor will be writing all the time. As soon as you leave the supervision, mark the sheets that are particularly important in some obvious way (eg. by colouring in the corner). That way, when you're frantically flicking through the notes at the end of the year, you'll see where the information you need is. By "particularly important", I mean the places where the supervisor explains something fundamental to many questions, rather than the ins and outs of one particular question.
+
+Use Anki during the course - after each lecture, add the key factoids to your Anki deck for that course. (It's a bit annoying to do at the time, but it's seriously *so* much easier this way when it comes to the end of the year.) Try and get into the habit of doing some Anki every day. Remember that Anki does LaTeX!
+
+If there's anything you don't understand, email your supervisor quickly. Some of the supervisors are absolutely brilliant at replying to emails; but all of them will reply eventually. If you have a DoS [director of studies] who is all-powerful (my first-year DoS was head of almost everything in the maths department), most of your requests can be granted (even possibly to the extent of shuffling lecture times around at the start of the year, if given plenty of warning and a *very* good reason).
+
+It's going to feel weird at first, but you almost certainly aren't the best in the year - you're likely to be average. (That's what "average" is usually taken to *mean* - pun not originally intended.) This means that the lecturer probably isn't interested in hearing your pedantry or requests for rigour during the lecture. It's the supervisor's job to clear up points that you didn't understand. If nobody you've spoken to understands something from the lecture, then it might indeed be the lecturer's fault; in that case, email the lecturer, or go down to speak to them at the end of the lecture. If you notice something wrong that the lecturer's written, then unless you're absolutely sure the lecturer's made a mistake, check with the person next to you before calling it out. Protocol is to wait for a brief pause in speech before shouting "Should *this bit* be *this* instead of *what's on the board*?" - try and be as specific as you can, saying (for instance) "In your statement of Theorem 16, the first line says 'f is differentiable' - should that be 'g is differentiable'?". Most people are not specific when they spot a problem, and it makes it much harder for the lecturer to diagnose the problem if they don't know exactly where it is.
+
+Don't let your sleep cycle get too out-of-sync. It's absolutely fine (after the first couple of weeks of the term, anyway) to go to bed at whatever time you're tired - in my experience, everyone else is also tired and welcomes the chance to sleep. This is put on hold during the first couple of weeks of the term, because that's when everyone's all excited to be there and there's not too much work.
+
+If you have anything impairing your work that your DoS could conceivably help with, raise it as soon as you can. The earlier your DoS knows about it, the earlier something can be done, and your DOS is paid to worry about this sort of thing.
+
+If both you and a friend are having trouble working, go together to the library and work next to each other. You might find it helpful to view it as a competition between you, or as a "suffering in comradeship" kind of thing. Maintain an absolute rule of "no talking to each other", though. Schedule a break every 45 minutes or so: go outside and stretch your legs, and at the start of each block you can ask each other about things you got stuck on during the previous one.
+
+You will not be able to do every question on the example sheets [problem sheets you do as homework] easily. You're expected to have a good go at them all, but not to complete them (that would be a bonus). For those questions you can't do, pretend you are in an interview: write down your thought processes, what you've tried and why it failed. Pretend you're trying to appear really intelligent and solution-seeking in front of a prospective employer.
+
+In your answers, use lots of words; your answer should not just be a list of equations, but a coherent argument. It's a hundred times easier to mark if you explain every step properly, and it means you can go back over it at revision time; it's not that hard to do at the time, too. If you find that you pick up your work before a supervision and have no idea what you're wittering on about, you need to make your answers clearer.
diff --git a/hugo/content/posts/2013-07-06-cambridge-vocab-a-guide-for-the-mystified.md b/hugo/content/posts/2013-07-06-cambridge-vocab-a-guide-for-the-mystified.md
new file mode 100644
index 0000000..2989dd6
--- /dev/null
+++ b/hugo/content/posts/2013-07-06-cambridge-vocab-a-guide-for-the-mystified.md
@@ -0,0 +1,23 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-06T00:00:00Z"
+aliases:
+- /uncategorized/cambridge-vocab-a-guide-for-the-mystified/
+- /cambridge-vocab-a-guide-for-the-mystified/
+title: Cambridge vocab - a guide for the mystified
+---
+There is an awfully large collection of confusing words you will encounter on first coming to study at Cambridge. You pick them up really quickly in the natural run of things, but I thought perhaps a mini-dictionary might be helpful. The list is alphabetised (if I'm competent enough, anyway) and may, like so many of my writings, grow. Apologies for my crude attempts at pronunciations for the non-obvious words, but it's very hard to find someone who can read [IPA](https://en.wikipedia.org/wiki/IPA).
+
+* **Boatie** - one of the many people who row. Rowing is a very big thing at Cambridge, and some people are extremely dedicated to it (to the extent of getting up at six in the morning to train).
+* **Formal** - a contraction of "**formal hall**", this is an event in which you are served a three-course meal in college. Exceptions in the number of courses may apply between colleges, but I'm not aware of any non-three-coursers; exceptions also apply on special occasions, so the Jesus Christmas formal had seven courses (if I recall correctly). Usually you would wear a suit or mid-scale posh dress, with gown. Most formals start and end with a Latin grace. This is probably the closest experience to Hogwarts that Cambridge has to offer. You would almost always go to formal with other people you know (booking en-masse), as a celebration (such as for birthdays).
+* **Mathmo** - a mathematician. The word is used to refer both to maths students, and also (less commonly) to people who may not be studying maths but who share the mildly Aspergers-y traits of stereotypical mathematicians. The word is very adaptable - so, for instance, a Trinity mathmo might be referred to as a **Trinmo**, a mathmo who enjoys applied courses rather than pure courses might be referred to as an **appliedmo**, and so forth. It can also (in some circles) be femininised as **mathma**.
+* **Muso** - a music student.
+* **Natsci** (pron. "nat-ski") - a contraction of Natural Sciences, the subject studied by anyone who wishes to study a scientific subject. People studying (say) Biology would apply for Natsci, and then specialise later through judicious choice of courses. The Natscis are broadly subdivided into **Physnatscis** and **Bionatscis**. Also refers to Natsci students.
+* **Pennying** - a drinking game (in the loosest sense of the word "game", even for drinking games) fairly common across the UK, as far as I can tell. To my knowledge, the rules differ between Oxford, Durham and Cambridge; I present the Cambridge rules. If your drink is sitting on a surface, without your hand being in contact with the glass, anyone else (though decorum dictates that this may only be done by people who are themselves drinking alcohol) is at liberty to drop a penny into the glass, whereupon you are honour-bound to down the drink. "An empty glass is a full glass" - that is, if an empty glass is pennied, you must fill your glass with drink and then down it. For this reason, it is wise to keep some liquid in your glass at all times. If you catch the penny in your teeth as you finish your drink, the pennier must down eir drink in turn. A "double penny" occurs when two people penny the same drink; in this situation, the second pennier must down eir drink, and the one who is pennied does not have to do so.
+* **Staircase** - this is the generic term for where students live if they are in college - the very vertical equivalent of a block of flats. They are essentially the same as dormitories, and usually have their own kitchen(s). A house owned by the college and used as accommodation can be referred to as an **external staircase**.
+* **Swap** - this is a sort of cross between a party and a speed-dating event. Usually they take the form of a formal (see above) or a trip to a local curry-house. They are designed to get lots of people who share an interest, or some sort of connection, to get to know each other very quickly. The Christ's College hockey team might **swap with** the Jesus hockey team, for example, meaning that the teams go to a formal (or curry-house) and have a meal. Swaps are usually pretty ad-hoc; they are planned entirely by the people who are swapping.
+* **Tripos** (pron. "try-poss") - the [Wikipedia article](https://en.wikipedia.org/wiki/Tripos "Tripos Wikipedia page") says it all, really, but this is the term used to refer to a course of study (the Mathematical Tripos, or the Historical Tripos, for example).
diff --git a/hugo/content/posts/2013-07-07-mundane-magics.md b/hugo/content/posts/2013-07-07-mundane-magics.md
new file mode 100644
index 0000000..78c64ac
--- /dev/null
+++ b/hugo/content/posts/2013-07-07-mundane-magics.md
@@ -0,0 +1,24 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2013-07-07T00:00:00Z"
+aliases:
+- /psychology/mundane-magics/
+- /mundane-magics/
+title: Mundane magics
+---
+I have stumbled across a LessWrong post on the importance of [seeing what is real for just how cool it is](http://lesswrong.com/lw/ve/mundane_magic/ "LessWrong post on Mundane Magic"). It lists such examples as:
+
+* *Vibratory Telepathy*. By transmitting invisible vibrations through the very air itself, two users of this ability can *share thoughts*. As a result, Vibratory Telepaths can form emotional bonds much deeper than those possible to other primates.
+* *Psychometric Tracery*. By tracing small fine lines on a surface, the Psychometric Tracer can leave impressions of emotions, history, knowledge, even the structure of other spells. This is a higher level than Vibratory Telepathy as a Psychometric Tracer can share the thoughts of long-dead Tracers who lived thousands of years earlier. By reading one Tracery and inscribing another simultaneously, Tracers can duplicate Tracings; and these replicated Tracings can even contain the detailed pattern of other spells and magics. Thus, the Tracers wield almost unimaginable power as magicians; but Tracers can get in trouble trying to use complicated Traceries that they could not have Traced themselves.
+
+I thought I would give a few more. First, I hereby rename *The Eye* (as that post's author names this ability) to *Force Perception*, and I dub a user of any of these magics a Mage.
+
+* *Modular Incarnation*. An extremely powerful technique that allows enormous flexibility of function, Modular Incarnation is a method of creating superstructures out of tiny Modules, each specialised for a specific task. Out of a single generic Module, a huge variety of specialised Modules can be created, which together can be assembled into structures which can channel various other magics, including Modular Incarnation itself. Thus can an Incarnator increase eir abilities by leaps and bounds from the moment of the birth of eir Incarnatory power. The Incarnator must be wary of this ability, for in its nigh-unimaginable power lies the danger of upsetting the balance of the Modules: an Incarnator can become overrun by eir own frantically replicating Modules, the tide of which is as yet very hard to stem, even using the greatest achieved extent of the Ultimate Power.
+* *Elemental Shielding.* Users of this passive ability are granted a flexible, regenerating defence against fire, earth, air and water. It also gives the user a constant diagnostic of eir surroundings, allowing the Shielder to understand what adjustments to make to eir environment *without even thinking about it*.
+* *Infiltration Adaptation.* One of the most successful forms of attack that can be made on a Mage is the insertion of weapons so small that even the greatest of Force Perceptors cannot detect them. These weapons are a perversion of Modular Incarnation, and as such have the potential to be immensely powerful, but users of the Infiltration Adaptation ability can detect and neutralise them by creating a defence consisting of many thousands of Modules, each tailored to be highly effective against a single weapon that was once used against the Mage. In this way, each unsuccessful attack strengthens the Mage: after only a short period to gather eir strength, the Mage recovers, usually with no discernible damage dealt.
+* *The Web of Pure Extraction*. Among the many ways to apply the Ultimate Power, the WPE may be one of its purest instantiations. Thousands of Extractors have together spent thousands of years in building a magnificent edifice which lies just outside this world, intersecting everywhere yet nowhere tangible. Through this power, Extractors can predict with staggering accuracy the outcomes of events happening at all scales, from the level of the fabric of reality itself up to levels encompassing all that is known to exist, and even further. Extractors can use the structure already created to solve problems that no other art can; and the structure is so well integrated with itself that particularly strong Extractors may use parts of the structure to affect other parts which lesser Extractors deem totally unrelated.
+* *The Web of Mental Distribution*. Closely related in structure to the Web of Pure Extraction (to the extent that its name even derives from the WPE), the WMD represents the culmination of decades of work to integrate the arts of Psychometric Tracery with the Force. Through use of an abstracted version of Psychometric Tracery, users of the WMD may share thoughts across enormous distances and times, connecting all Distributors to better fuel the Ultimate Power.
diff --git a/hugo/content/posts/2013-07-08-an-obvious-improvement-to-tennis.md b/hugo/content/posts/2013-07-08-an-obvious-improvement-to-tennis.md
new file mode 100644
index 0000000..461a58b
--- /dev/null
+++ b/hugo/content/posts/2013-07-08-an-obvious-improvement-to-tennis.md
@@ -0,0 +1,40 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-08T00:00:00Z"
+aliases:
+- /uncategorized/an-obvious-improvement-to-tennis/
+- /an-obvious-improvement-to-tennis/
+title: An obvious improvement to tennis
+---
+So yesterday the [Wimbledon tennis tournament](https://en.wikipedia.org/wiki/The_Championships,_Wimbledon) was decided. The system for verifying whether the tennis ball is out or not (and hence whether play for the point stops or continues) on the main courts is as follows:
+
+1. The ball lands.
+2. The linesperson in charge of the line nearest to the landing point of the ball works out whether the ball landed inside or outside the region demarcated by the line.
+3. The umpire decides whether or not to overrule the linesperson's decision.
+4. The [Hawkeye](https://en.wikipedia.org/wiki/Hawk-Eye) ball-tracking system determines whether the ball landed inside or outside the region demarcated by the line.
+5. If either player disagrees with the official decision (that is, if the linesperson called "out" when the player thought the ball was in, or the linesperson was silent when the player thought the ball was out, or if the umpire overruled a decision that the player thinks was correct) then that player informs the umpire that ey wishes to "challenge" the linesperson. In this instance, the Hawkeye reading is consulted (and the ball's trajectory slowly animated on a big screen, for added tension) and regarded as definitive.
+
+The problem I have with this system is the process of "challenging". Each player starts out with a challenge count of three. If a player makes a challenge, and Hawkeye contradicts the official call, then the challenge count is maintained at its current level. If a player makes a challenge, and Hawkeye agrees with the official call, then the challenge count for that player is decremented. A player cannot challenge if eir challenge count is 0. On entering a tie-break, each player's challenge count is incremented.
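
The bookkeeping described above amounts to a small state machine. As a sketch only (the class and method names are my own invention, not anything from the official rules), it might look like:

```python
class ChallengeCount:
    """Hypothetical model of the challenge-count rules described above."""

    def __init__(self, initial=3):
        # Each player starts the match with three challenges.
        self.remaining = initial

    def can_challenge(self):
        # A player cannot challenge once the count reaches 0.
        return self.remaining > 0

    def challenge(self, hawkeye_contradicts_call):
        """Make a challenge; the count only drops when Hawkeye upholds the official call."""
        if not self.can_challenge():
            raise RuntimeError("no challenges remaining")
        if not hawkeye_contradicts_call:
            self.remaining -= 1
        return hawkeye_contradicts_call

    def enter_tiebreak(self):
        # On entering a tie-break, each player's count is incremented.
        self.remaining += 1
```

The asymmetry is the interesting part: a successful challenge (Hawkeye contradicts the call) leaves the count unchanged, while an unsuccessful one spends a challenge.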
+
+This resulted in an unhappy event in the last match of the Wimbledon tournament. The player who went on to lose ([Novak Djokovic](https://en.wikipedia.org/wiki/Novak_Djokovic)) used up his three permitted challenges in unsuccessful attempts to overrule the official rulings. Then in one particularly close game, Djokovic was denied a point when his opponent's shot was deemed to be "in". He became angry (displaying the unfortunate tendency of professional sports players to throw temper tantrums at the drop of a hat) and shouted at the umpire that the call should be overruled. He had no challenges remaining, and so could not force the official decision to be reassessed; I suspect his attitude very much did not help to press his case at this point. Later, the commentators showed the Hawkeye ruling to the TV broadcast; the opponent's shot was in fact "out", and Djokovic was vindicated. As I say, he went on to lose (pretty comprehensively, I gather, although I didn't really pay attention); it is conceivable, though admittedly unlikely, that this dispute cost Djokovic the match.
+
+My question is this: why do we rely on linespeople to do that which is done better by Hawkeye?
+
+Would it not be massively more sensible if the linespeople were allowed to do exactly what they normally do (as a salve to those who do not wish to sully the tradition), but the umpire were provided with Hawkeye's ruling after every point so that ey could overrule as necessary? This changes nothing except the umpire's ability to carry out a task ey already has to do. Of course, in instances where Hawkeye is unavailable, such as on the lower courts at Wimbledon, nothing need change.
+
+Hawkeye supposedly has an average error of 3.6mm, roughly equivalent to the fluff on the ball. I propose that the umpire should be provided with the possible error along with Hawkeye's decision, and that it should be down to eir judgement which verdict to accept in such tight cases that Hawkeye might have made an error. (I would suggest defaulting to the normal method of judgement in that case - that is, "continue as if Hawkeye had not been invented".)
+
+The only reason that I can think of to limit the number of allowable challenges is to prevent time being wasted in the administrative process. However, umpire overrules (and challenges themselves) happen rarely enough that I think the following procedure would be quite sufficient:
+
+1. The ball lands.
+2. The linesperson in charge of the line nearest to the landing point of the ball works out whether the ball landed "in" or "out".
+3. Hawkeye determines whether the ball landed "in" or "out".
+4. The umpire reads Hawkeye's decision off a screen.
+5. The umpire decides whether or not to overrule the official call.
+6. If the umpire decides to overrule the call, the ball's trajectory is animated slowly on a big screen.
+
+Now, this does (of course) do nothing to resolve the problem of conflicting verdicts during a very fast rally - the umpire cannot concentrate on both the game and the Hawkeye reading at the same time. But then there's no existing solution to that problem anyway, and I do not propose to resolve it at the current time.
diff --git a/hugo/content/posts/2013-07-09-stumbled-across-9th-july-2013.md b/hugo/content/posts/2013-07-09-stumbled-across-9th-july-2013.md
new file mode 100644
index 0000000..8bd3b4c
--- /dev/null
+++ b/hugo/content/posts/2013-07-09-stumbled-across-9th-july-2013.md
@@ -0,0 +1,32 @@
+---
+lastmod: "2022-08-21T10:39:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-07-09T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-9th-july-2013/
+- /stumbled-across-9th-july-2013/
+title: Stumbled across 9th July 2013
+---
+Being bored over the summer holiday, I decided that I would document the cool things I ran across on the Internet. Over the last week, there have been many of these. If I see anything particularly amazing, it'll go in one of these aggregation posts.
+
+* Neurons are surprisingly beautiful:
+* A rather neat and very short story:
+* A *bit* less short but just as good a short story:
+* A rant with which students can all identify, in The Cambridge Student magazine: now lost from the Internet.
+* An Easter Island word "tingo" means "to borrow objects from a friend’s house one by one until there are none left": <http://web.archive.org/web/20100516040410/http://blog.web-translations.com/2008/12/toujours-tingo-words-that-dont-exist-in-english/>
+* Musings on free will:
+* A thing that I just have to share again:
+* The human brain is a really weird piece of kit:
+* We *have* to make one of these at some point:
+* This is quite soothing in a weird kind of way:
+* It is possible to be deficient in arsenic. (Link to the Soylent Discourse forum is permanently defunct.)
+* A really useful website for when you don't want to have to spin up Wolfram|Alpha to work out time differences:
+* Why never to talk to the police (seriously, never talk to the police):
+* A fascinating book about the power of positive and negative reinforcement, and why they're often done wrongly: [Don’t Shoot the Dog]
+* The Church of England really took its time, but at last they've done it:
+* The Hawkeye Initiative, for the liberation of women in comics:
+
+[Don’t Shoot the Dog]: https://web.archive.org/web/20130206170903/http://www.papagalibg.com/FilesStore/karen_pryor_-_don_t_shoot_the_dog.pdf
diff --git a/hugo/content/posts/2013-07-10-imre-leader-appreciation-society.md b/hugo/content/posts/2013-07-10-imre-leader-appreciation-society.md
new file mode 100644
index 0000000..5cf842a
--- /dev/null
+++ b/hugo/content/posts/2013-07-10-imre-leader-appreciation-society.md
@@ -0,0 +1,15 @@
+---
+lastmod: "2022-08-21T11:04:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-10T00:00:00Z"
+aliases:
+- /uncategorized/imre-leader-appreciation-society/
+- /imre-leader-appreciation-society/
+title: Imre Leader Appreciation Society
+---
+There was once a small website devoted to noting the more interesting quotes from our more idiosyncratic lecturers.
+It sadly vanished from the web, although after some detective work, I found a copy floating around on one of Amazon's servers.
+I stored them for posterity using the archival service WebCitation, which is itself now dead, so instead I shall link to [Konrad Dąbrowski's capture](https://www.konraddabrowski.co.uk/ilas/index.html).
diff --git a/hugo/content/posts/2013-07-12-a-framework-for-discussing-pricelessness.md b/hugo/content/posts/2013-07-12-a-framework-for-discussing-pricelessness.md
new file mode 100644
index 0000000..8614bc6
--- /dev/null
+++ b/hugo/content/posts/2013-07-12-a-framework-for-discussing-pricelessness.md
@@ -0,0 +1,36 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- philosophy
+comments: true
+date: "2013-07-12T00:00:00Z"
+aliases:
+- /philosophy/a-framework-for-discussing-pricelessness/
+- /a-framework-for-discussing-pricelessness/
+title: A framework for discussing "pricelessness"
+---
+Sometimes some people argue that certain things are "priceless" - that is, worth an infinite amount of money to them. I posit that what this really means is that it would take work and uncomfortable imagination to evaluate the worth of that thing to them.
+
+The example that triggered this framework was my evaluation of how much my sense of smell was worth to me. (It was late at night and I couldn't get to sleep, so I just let my mind wander around for a bit.) I was unable to quantify the amount I would pay to keep my sense of smell, but it is certainly finite, as the following thought experiment demonstrates.
+
+Suppose that you are the Master [hmm, no gender-neutral version of that word exists, as far as I know] of the Universe. For the purposes of this discussion, humans haven't explored the rest of space, and so while you are the Master of everything, you don't actually know what the "everything" is - but it doesn't really matter to you, because there's so much you can do on Earth. Perhaps you'll branch out later. In the absence of your commands, the world ticks over much as it normally does, but if you want anything at all, you can issue a demand, and it will be met as soon as possible, by the people best-suited to dealing with it. You could, for instance, insist on being given a project to work on, which will lie within your range of abilities but will be nice and challenging, and will take you at least a week but less than a year. (This allows you to prevent yourself from becoming [a mere wanting-thing](http://lesswrong.com/lw/ww/high_challenge/ "LessWrong page on High Challenge"), if you don't want to be one of those.)
+
+The penalty for abdication is pretty severe. You were elected Master of the Universe because you are the single person best suited to the role; no-one else can come close to your suitability, so to make sure you never abdicate, it is enshrined in immutable law (the only thing you can't change, in fact) that were you to abdicate, you would have everything taken from you, and would be dumped penniless without a single possession (including clothes) in the centre of London (or substitute place where it's really hard to get started in life). After all, reasoned the lawmakers, why on earth would you want to retire?
+
+Now suppose that you are kidnapped, entirely by surprise, by a mad scientist. Ey says to you:
+
+> I want to be Master of the Universe. If you don't elect me MotU, I will in my anger take away your sense of smell - but of course I don't have the power to take the Mastery of the Universe from you, so you'll still be MotU. But I am a merciful mad scientist, so I will give you this device that hooks straight into your brain and tells you what you would be smelling if you still had a sense of smell. That way you'll know whether your toast is burning - you just won't have the [quale], and I am so cunning that it will be beyond the ken of mortals to replace the quale. I will be so depressed that I will retire to the Bahamas [[capital Nassau](/anki-decks "My Anki decks, including capitals of the world")] and never trouble you again.
+> If you hand the Mastery of the Universe to me, I will be ever so grateful - I will leave you with your sense of smell. But the penalty for abdication is pretty severe, as you know.
+> Make your choice.
+
+Of course, assume the [least convenient possible universe] when considering a thought experiment - for instance, assume that the smelling-device is no better and no worse than your nose at detecting chemicals, so that it is not an improvement to what is currently your sense of smell; assume that you never bothered to change the dictionary so that the penalty for abdication as outlined in law would no longer be what it says on the tin, etc.
+
+In this thought experiment, it's a one-off: you lose your sense of smell and keep Mastery of the Universe, or you become absolutely nothing and keep your sense of smell. (A variant might be that the mad scientist replaces your senses one by one until you give up the Mastery.)
+
+I rather suspect I would forgo the qualia associated with smell, in order to keep my Mastery of the Universe. This imposes an upper limit on the value of the qualia associated with my sense of smell - and hence my sense of smell cannot be priceless to me.
+
+This framework is very flexible - it adapts to thinking about essentially anything. You may, of course, feel that you would give up the Mastery in order to retain your sense of smell; in that case, the thought experiment has given a lower limit, and your sense of smell could still be priceless, but at least you've actually thought about it.
+
+[least convenient possible universe]: http://lesswrong.com/lw/2k/the_least_convenient_possible_world/
+[quale]: https://en.wikipedia.org/wiki/Qualia
diff --git a/hugo/content/posts/2013-07-13-stumbled-across-13th-july-2013.md b/hugo/content/posts/2013-07-13-stumbled-across-13th-july-2013.md
new file mode 100644
index 0000000..533ba6a
--- /dev/null
+++ b/hugo/content/posts/2013-07-13-stumbled-across-13th-july-2013.md
@@ -0,0 +1,24 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-07-13T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-13th-july-2013/
+- /stumbled-across-13th-july-2013/
+title: Stumbled across 13th July 2013
+---
+* This is really quite heartwarming:
+* Interesting article on current trends in fiction:
+* A ridiculous reason for a rocket to explode:
+* A very information-dense way of storing data long-term: (compare which is much less information-dense but much more easily decoded in the event of being discovered after the collapse of civilisation)
+* A cool thing to do with a Raspberry Pi and a microwave:
+* I really want one of these - I think I might order one: (also, the word "plug" is insanely wonderful when spoken in a French accent)
+* An interesting idea for making the world a better place:
+* A look at how to infer causality or not, as the case may be, depending on the data:
+* I hope they get to producing this quickly:
+* Thank goodness for that - regular expressions are the most unreadable things ever:
+* Something else I would do if I had eternity to play with:
+* Glass ceiling issues:
diff --git a/hugo/content/posts/2013-07-14-prerequisites-for-hypothetical-situations.md b/hugo/content/posts/2013-07-14-prerequisites-for-hypothetical-situations.md
new file mode 100644
index 0000000..8e322de
--- /dev/null
+++ b/hugo/content/posts/2013-07-14-prerequisites-for-hypothetical-situations.md
@@ -0,0 +1,83 @@
+---
+lastmod: "2022-08-21T10:47:44.0000000+01:00"
+author: patrick
+categories:
+- philosophy
+- psychology
+comments: true
+date: "2013-07-14T00:00:00Z"
+aliases:
+- /philosophy/psychology/prerequisites-for-hypothetical-situations/
+- /prerequisites-for-hypothetical-situations/
+title: Prerequisites for hypothetical situations
+---
+When I discover (or, more rarely, think up) a thought experiment about a moral point, and discuss it with an arbitrary person whom I will (for convenience) call Kim, the conversation usually goes like this:
+
+> Me: {Interesting scenario} - what do you think?
+>
+> Kim: I would just {avoids point of scenario by nitpicking}
+>
+> Me: You know what I meant. {applies easy fix to scenario to prevent nitpick}
+>
+> Kim: Well then, I'd {avoids point of scenario by raising unrelated moral issue}
+>
+> Me: That's not the point. The point is {point} - let's say I constructed the scenario to make {moral issue} not an issue.
+>
+> Kim: Hmm. {avoids point again}
+
+And so on, and so on.
+
+Now I have a platform on which to present the prerequisites for using hypothetical situations as aids to moral understanding.
+
+# Logical rudeness
+
+I have read two excellent pieces about logical rudeness - one [by Peter Suber][logical rudeness], and one on [LessWrong][lw logical rudeness].
+Logical rudeness is a term used to denote a whole variety of techniques used to *appear to win arguments*, rather than to *address the issues at hand*.
+I can't offhand think of a way to improve Eliezer Yudkowsky's explanation on the LessWrong page I linked, so I won't elaborate on it.
+
+The main way people are logically rude with moral dilemmas [I suffered a little dilemma here myself, wondering whether to sound pretentious by pluralising as "dilemmata"] is in working out lots of ways in which your hypothetical situation could, in fact, not be about the point you want it to be about.
+A paraphrased real-life example that actually happened to me:
+
+> Me: {outlines the [torture vs. dust specks] scenario} - what do you think?
+>
+> Kim: But how can you possibly even contemplate torturing a person! You're an evil person!
+>
+> Me: I would contemplate torturing a person if it would avert some greater harm, yes. That's not to say I would torture a person.
+>
+> Kim: But torture! Evil!
+
+This example shows Kim latching on to an emotional part of the hypothetical situation, and using it to launch an [ad hominem].
+This is not only logically rude (I could have outlined any scenario at all, and included the word "torture", and got the same result; Kim ignores the effort I put in to the explanation) but also verges on the socially rude.
+(In the actual situation in which this happened, I lost my temper, I am ashamed to say; the discussion, which was between about ten people, quickly turned into what was essentially a shouting match, which was only dissolved when some of us insisted on watching the latest episode of Doctor Who.)
+The key way to avoid this is to make sure that you never stop yourself considering something, and never condemn others for considering something.
+It's a moral dilemma - you're meant to feel uncomfortable while thinking about it.
+You shouldn't be afraid just to think something, and it takes some time and effort to learn [not to avoid uncomfortable thoughts](http://lesswrong.com/lw/21b/ugh_fields/ "Ugh Fields LessWrong post").
+(Obviously, speaking those uncomfortable thoughts is certainly something to consider avoiding.)
+
+# The Least Convenient Possible World
+
+The other major way people avoid grappling with moral dilemmas is to say, "But your hypothetical situation doesn't actually work, because of {flaw}."
+It's a very natural thing to do.
+My major inspiration on this is the LessWrong post on [considering the least convenient possible world](http://lesswrong.com/lw/2k/the_least_convenient_possible_world/) during debates.
+(As an aside, I'm not sure whether to use the word "argument", "debate" or "discussion" - an argument is a pointless thing, while a debate is something you enter with the aim of winning.
+Neither of these is what I am actually talking about, but the word "discussion" is becoming a little monotonous.)
+
+The usual situation: it's perfectly obvious to you (or at least would become so after five minutes of thought) what the flaws are in the presentation of the hypothetical situation, and it is probably abundantly clear that those flaws could be fixed, but because you want to *win the argument* rather than *address the moral issue*, you point out the flaws and waste the time of all concerned.
+
+However, the aim of a moral discussion is not to prove yourself to be a better arguer, but to discover what your thoughts are on an issue you've never really seen before. If you are going to point out the flaws in the given situation, at least do so while presenting a solution. My usual tactic when someone (let's make it Kim again) presents me with a moral dilemma is to begin the discussion with something like:
+
+> I presume we can ignore {flaw}? I could fix it with {fix}.
+
+Invariably Kim will reply with something along the lines of "Yeah, that's what I meant" - and that is the signal for "I am trying to discuss a moral problem, not to construct a watertight scenario." If Kim were instead to respond with "Hmm, I hadn't considered that…", then that would be an indication that ey was looking for the implementation flaws in the situation ey had outlined. Then and only then would I generate more such flaws.
+
+I'm not holding myself up to be a paragon of hypothetical-considerators, but I like to think that I'm a bit better at it than most people are. My overarching rule is:
+
+> If either party in a discussion has become angry, you have failed.
+
+Of course, [some people][trolls] just enter into arguments in order to make you or them angry (after all, it's quite fun to be angry about something that doesn't matter) - but if you actually want a fruitful discussion, avoid inflaming people.
+
+[trolls]: https://en.wikipedia.org/wiki/Trolling
+[ad hominem]: https://yourlogicalfallacyis.com/ad-hominem
+[torture vs. dust specks]: http://lesswrong.com/lw/kn/torture_vs_dust_specks/
+[logical rudeness]: https://dash.harvard.edu/bitstream/handle/1/4317660/suber_rudeness.html
+[lw logical rudeness]: http://lesswrong.com/lw/1p1/logical_rudeness/
diff --git a/hugo/content/posts/2013-07-14-the-multiple-drafts-view-of-consciousness.md b/hugo/content/posts/2013-07-14-the-multiple-drafts-view-of-consciousness.md
new file mode 100644
index 0000000..b5daf26
--- /dev/null
+++ b/hugo/content/posts/2013-07-14-the-multiple-drafts-view-of-consciousness.md
@@ -0,0 +1,48 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- philosophy
+- psychology
+comments: true
+date: "2013-07-14T00:00:00Z"
+aliases:
+- /philosophy/psychology/the-multiple-drafts-view-of-consciousness/
+- /the-multiple-drafts-view-of-consciousness/
+title: The Multiple Drafts view of consciousness
+---
+I've been reading one of [Daniel Dennett's](https://en.wikipedia.org/wiki/Daniel_Dennett) books, *Consciousness Explained*. Aside from the fact that the author has an incredible beard and is therefore correct on all matters, he can also write a very cogent book. In *Consciousness Explained*, Dennett outlines what he calls the Multiple Drafts approach to explaining consciousness; this blog post is my attempt to summarise that view in a couple of short analogies.
+
+Dennett starts off by providing evidence that our time-perception is somewhat malleable: we can interpret [two dots of different colours][colour phi] (appearing separated by a short distance in time and space) as a single moving dot that changes colour abruptly at some point. The key puzzle here is that we perceive the colour to have changed *before* seeing the second coloured dot. Dennett then outlines what seem to be the two mainstream points of view on how this happens.
+
+* The Orwellian view: that at the time of perception, we saw exactly what happened, and then we edit this after the fact to reflect a more logical sequence of events (à la [Minitrue]);
+* The Stalinist view: that the information is edited before even making its way into the consciousness.
+
+Dennett points out that both of these options implicitly assert the existence of a "Cartesian Theatre" - a place where consciousness is experienced as information is gathered. In particular, the Stalinist view requires consciousness to be experienced after sufficient time has passed for some decisions to be made. By the way, in arguing against this supposition, Dennett doesn't mention that there is precedent for this kind of behaviour in the reflex action, which we explicitly only realise we have made after it has happened; but it's a minor point, since there are sound physiological reasons for why the reflex action doesn't come under conscious control (the signal for action never actually enters the brain, but is headed off at the brain stem). He then gives a third possible view - the Multiple Drafts model. In each of the next two analogies, I will liken the consciousness to a general in war, making decisions based on reports from the battlefield. In fact, Dennett argues that since the Cartesian Theatre does not exist (that is, consciousness isn't something that is recorded and played back to some internal watcher), this type of analogy is deeply flawed, and the third analogy will contain an appropriate adjustment.
+Central to the analogies are two reports in particular:
+
+1. "At location X at time 15:00:00, M happened", analogous to the report-to-the-consciousness "My hand tells me that I drew near to a source of intense heat at time \___";
+2. "At location X at time 15:00:02, N happened", analogous to the report-to-the-consciousness "My eyes tell me that I touched the hot plate at time \___".
+
+We consider the case that report 2 arrives before report 1 (even though report 2 describes events which occurred later than report 1) - this is quite conceivable given the distance that messages must travel in the nervous system. (Please ignore the fact that this particular effect is probably going to work in reverse for this particular example, the eyes being closer to the brain than the hand - and assume that every decision is made in the brain, so that reflexes don't happen. It's harder than you might think to come up with something sufficiently urgent that isn't made as a reflex!)
+
+# The Stalinist analogy
+
+In this version of events, the reports come in from the battlefield, and flow through the general's underlings. The underlings see that the reports are in the wrong order, and switch them round so that they are in the right order, before presenting them to the general in the order {2,1} to consider; they also decide that there is a missing piece of information [corresponding to the "change-in-colour-of-dot" situation, but that doesn't fit with this analogy] between reports 2 and 1, so they insert it. The general acts on the augmented reports, and they are then sent off to be filed away for future reference.
+
+# The Orwellian analogy
+
+In this version of events, the reports come in from the battlefield, but the underlings don't correct the order of the reports, so the general sees {2,1}. The general acts on the reports once they've both been received, noticing that some information seems to be missing and adding it in, and sends them off to be filed. The archivist sees that they are in the wrong order, and switches them round just before filing them.
+
+# The Multiple Drafts analogy
+
+In this version of events, the reports come in one by one from the battlefield, but there is no general - just a room full of underlings. The first report (which records a later event) comes in, and the underlings all update their states-of-mind accordingly. Then the second report (which records an earlier event) comes in; the underlings nearest the door update soonest, and the report makes its way around the room from underling to underling. The underlings act on the reports (Multiple Drafts doesn't address how this happens - for the purposes of this analogy, let it be by everyone shouting at once, and the majority view prevails). As time goes on, more reports flood in, but eventually every underling has received reports 1 and 2 (this may happen before or after the action based on those reports is taken), and the archivist-underling files what ey thinks happened.
+
+Under Multiple Drafts, then, there is never a "point at which information enters the consciousness", but rather a "time interval in which information is making its way around the consciousness". The name of the model comes from an analogy to writing a summary of the events - starting from the first report to arrive, a summary is written; then the second report is added, it progresses around the consciousness, and wherever it arrives, the summary is updated to reflect the new information. Thus there are *multiple drafts* of the summary in existence at once. When the information is fully incorporated (that is, consensus has been reached on what the summary should contain), the consciousness is free to store the consensus draft in memory for future reference. Note that this could happen some time after the events described in the summary - Dennett is careful to separate "what happened" from "how the consciousness stores what happened".
+
+The reason Multiple Drafts is so attractive is that there is no experimental way to differentiate between Orwellian and Stalinist. Either way, the subject of an experiment will report the same thing, so it is strange to draw a distinction between these two possible methods. Having noticed that Orwellian and Stalinist are indistinguishable, the natural question is "why do we think they are different?" - and it turns out that the only real reason is that we think there is a centre of consciousness, through which the information must flow. Only under that interpretation is there a difference between amending-before-consideration and amending-after-consideration. So we relax the assumption of a centre of consciousness, and we end up with a "smear" of time during which information is incorporated, rather than an absolute time of perception. (This is borne out by experiment, by the way - we are very flexible when it comes to simultaneous perception.)
+
+This idea makes sense - we don't perceive space absolutely either, and can happily work with receiving information about space at smeared-out times, adding more information to the model as we find out more. I nudge the table-leg with my foot, someone reacts, I am swinging my foot to kick it again, but just when I can no longer stop the kick I realise that it was in fact a human leg, and the other person glares at me - my perception of the layout of space below the table has developed as new information came in, but out of sync with the information itself (the information bunched up and all came along at once). There is no particular reason why our time-perception should be any different.
+
+The book is an excellent one, very coherently written - this blog post doesn't really do it justice (although that's the point of this blog - to get me practised at writing). As of this writing, I am only half-way through the book, but it is shaping up well.
+
+[colour phi]: https://en.wikipedia.org/wiki/Color_Phi_phenomenon
+[Minitrue]: https://en.wikipedia.org/wiki/Minitrue "Ministry of Truth"
diff --git a/hugo/content/posts/2013-07-18-my-objection-to-the-one-logical-leap-view.md b/hugo/content/posts/2013-07-18-my-objection-to-the-one-logical-leap-view.md
new file mode 100644
index 0000000..e325bbf
--- /dev/null
+++ b/hugo/content/posts/2013-07-18-my-objection-to-the-one-logical-leap-view.md
@@ -0,0 +1,80 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- philosophy
+- psychology
+comments: true
+date: "2013-07-18T00:00:00Z"
+math: true
+aliases:
+- /philosophy/psychology/my-objection-to-the-one-logical-leap-view/
+- /my-objection-to-the-one-logical-leap-view/
+title: My objection to the One Logical Leap view
+---
+A large chunk of the reason why changing someone's mind is so difficult is the fact that our deeply-held beliefs seem so obviously true to us, and we find it hard to understand why those beliefs aren't obvious to others. Example:
+
+> A: A god exists - look around you; everything you see is so obviously created, not stumbled upon!
+> B: No, that's rubbish - look around you, everything you see is easily explained by understood processes!
+
+The basic problem here is that B *sees things differently* to A. Everything that B sees is automatically interpreted through the prism of "process that is understood", and that is a really hard thing to convey to A. The same evidence is spun in two totally different ways, and yet people argue as if the other party were only *one logical leap* away from coming round to their point of view. (Weirdly, I can't find a source for that phrase, though I have heard it before, so in lieu of a source I will very briefly summarise the viewpoint.) I was under the impression that it is often trumpeted that at the heart of every argument is one logical leap (OLL) that makes that argument significantly different from the opposition's, and that if one could only convey why that point was so important, then one could sway everyone to one's point of view.
+
+There is implicit evidence that people think in this way - the fact that when people are arguing earnestly with each other, each in an actual attempt to change the other's mind, they usually repeat the same argument again and again, as if it were simply a killer blow. Now, quite aside from the obvious symmetry here (both sides feel that their point is the one thing that just needs to be understood), there is a deeper point to be drawn about how we think, one that exposes the OLL as a fallacy.
+
+# Ideas are not atomic
+
+There are precious few ideas that are what I call "atomic". An atomic idea is one that most observers will experience in essentially the same way. The idea that \\(1+1=2\\) is the closest I can get to an atomic idea - most people, I suspect, know that numbers can be added, and for small numbers, I posit that we all think of addition in the same way. Certainly the concept of "addition" is very much observer-dependent, in that a mathematician will probably have a very different view of addition to, say, a painter - but we have all been so well drilled in \\(1+1=2\\) that I suspect we all view it (not "addition", but "\\(1+1=2\\)") in the same way - as an isolated fact. By the way, the main difference in the concept of "addition" generally is, I think, that for a mathematician, addition is a small part of a much larger edifice (involving the Peano axioms and so forth), whereas I have met many people to whom "maths" is merely a collection of isolated computational techniques, for whom addition is simply an extra tool.
+
+Most ideas are not like \\(1+1=2\\). If you were to get me to [free-associate](https://en.wikipedia.org/wiki/Free_association_%28psychology%29) on the word "death", for instance, my immediate reaction would probably be "bad, get rid of it". If you were to get J. K. Rowling to do the same, you'd probably get "inevitable, must reconcile with". (I base this on the final book of the Harry Potter series, in which a major theme is the portrayal of death as "the next great adventure".) "Death" is a concept which varies heavily from person to person - it is *not atomic*. In order to change someone's view of death, it is likely that (for most people) a large reshuffling of the worldview would have to take place - for me, you would probably have to do one of the following:
+
+* weaken my "human life is to be desired" axiom (in the process drastically altering my aesthetic principles);
+
+* prove to me that there was something desirable after death (in the process weakening my ultra-materialistic worldview);
+
+* show me that there would be horrific consequences to the prolonging of life (but that wouldn't change my view that "it would be better if we could get rid of death").
+
+Common to the two options that are actually changing my mind (the first and second) is the requirement that you break down a key part of my worldview. I think that this is why opinions are so hard to change - because they so quickly become very heavily bound up with the entire worldview. Few ideas have sufficient force to alter my entire model of the world (although they do exist: for me, one such idea was [Cached Thoughts](http://lesswrong.com/lw/k5/cached_thoughts/)). The "one logical leap" in an argument is merely the global interface of a particularly large chunk of world-model - the tip of an iceberg.
+
+At this point, I will explicitly attempt to do what I have been claiming is impossible - to convey my worldview to you. I attempt this in order to show just how much worldview sits behind my simple opinion that "the One Logical Leap does not exist" - and how much harder it would be than it first appears to change my mind on it. I very much doubt that I will succeed in explaining my mind to the extent that I would like, for reasons explained throughout this post. Anyway, I took fifteen minutes of introspection, and here (hopefully in a reasonably logical order) are the major areas of worldview on which this article rests. I will refer to these bullet points throughout the article, and will leave out my views on mathematics and death (which have already been mentioned, but are not central to my argument). It should go without saying that these are incomplete generalisations.
+
+1. Thought is computation, mainly oriented around pattern-matching and caching
+
+2. There is a correct answer to essentially every question - we just don't have the computational power (in our brains or otherwise), or are not using it correctly, to answer them
+
+3. The process of speech is literally the process of sharing thoughts (to a lesser extent, my [non-rent-paying](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) belief that the mind is an entity that is distributed across multiple brains, as Hofstadter outlines in his book *I Am a Strange Loop*)
+
+4. There are low- and high-bandwidth ways to share thoughts (one-way blog posts are not high on the list of effective thought-sharing means), but we only really use low-bandwidth ones
+
+5. The mind is a vast collection of models of the world, constantly reaching consensus to provide a single contiguous model
+
+6. Humans are very bad at evaluating new ideas, and most of the thought happens below the level of consciousness
+
+7. For most people, argument is a battle to prove yourself right
+
+Of course, my statement of these aspects of my worldview is really inadequate - a soundbite summary of a seething mass of thought (viewpoints 4 and 5). As an exercise for the unusually-interested reader, you might find it interesting to go back and see where these views appeared implicitly up to now. I have tried to make this post as non-circular as I can, but it was harder than I expected to express that "the worldview is tightly bound up and hard to express". Now that I have these viewpoints explicitly labelled, I can outline my argument properly.
+
+* People think in "one logical leap" terms - that is, they believe that B is only one short step of understanding away from coming round to A's viewpoint
+
+* Worldviews are very hard (and therefore slow) to convey, because although we can share thoughts (viewpoint 3), we can't do it anywhere near fast enough (viewpoint 4) to put across what we call "a viewpoint" but is really a truly massive edifice (viewpoint 5)
+
+* People therefore receive novel thoughts slowly enough that they have time to pattern-match some standard answers to them (viewpoint 1), and thereby avoid dealing with the "logical leap" the other party is trying to convey
+
+* Unless A is very careful, B will interpret A's argument as an attack on B's own worldview (viewpoint 7) and is thus incentivised to find objections
+
+* Hence, it is extremely hard to change a worldview.
+
+* What I think of as "an obvious idea" is only obvious to me because that's how my pattern-matcher works (viewpoints 1 and 5)
+
+* To change your pattern-matcher sufficiently to view my idea as "obvious" is to alter your worldview
+
+* Therefore, my "obvious idea" is outlandish to you, unless our worldviews are sufficiently well-aligned already.
+
+An atomic idea, of course, doesn't suffer from this problem - it is seen by everyone in the same way, so it can just be packaged up, spoken, and understood as it was intended. Now, a single idea can be so powerful that it reshapes my worldview; or many different small, nearly-but-not-quite-atomic ideas relating to the same worldview can be presented, with my worldview adapting to accommodate them (viewpoints 1, 5 and 6); or I suppose I could maintain some kind of cognitive dissonance in which an idea doesn't fit on top of my regular worldview, but I don't really count this as a good solution. But when conveying ideas, people don't make use of this fact-which-is-so-obvious-to-me: that it is hard to persuade people because the task is huge. (I hasten to add, by way of example, that it only became obvious to me after the considerable change in worldview brought about by reading most of LessWrong.) People almost invariably present to me a single idea without the supporting worldview (and of course I include myself in "people", though I do try to ameliorate the effect) - and then the idea has no worldview to slot into when it arrives in my brain, so I unconsciously and consciously find ways to reject it. Learning to defeat this effect - to recognise when you're automatically rejecting an argument, and to stop yourself - is the essence of a pretty big chunk of rationality, but that's a post for another time.
+
+# What to do with this information
+
+You may well say, "That's all very well, but what difference does it make?" - and that's a very natural question to ask, because (in my experience) the balance of probability suggests that you don't have the worldview which would make it obvious (and after all, you're reading this blog post to learn about my worldview). Over just the last few months, it has become a part of my worldview that people really don't like to evaluate arguments - probably because to do so means the other person has "won", as there's a reasonably effective social norm against adapting your view in response to evidence or argument. There are two very simply-stated ways you can use what I've been trying to convey of my worldview:
+
+* Notice when you're rejecting an idea on worldview-incompatibility grounds, so that you can actually think about it rather than letting your already-stored model of the world decide for you
+
+* When you are trying to convey a point, and you have the luxury of time, give much more justification than you think should be necessary, and remember, when the other person is obtusely refusing to absorb your idea, that it's probably because you aren't conveying enough background-idea. Yes, it may be very hard and time-consuming to do enough of this - but at least it stops you from unconsciously or consciously thinking that the only reason the other person isn't taking in your idea is that ey's stupid - and in my experience, that seems to be a major driving force behind the development of discussions into proper angry rows.
+
+# Post Scriptum
+
+This is far from the most coherent work to flow from my pen, I realise - it's an oddly hard thing to construct a cogent discussion on, because the argument is itself that the argument is hard to put across. My usual structure for an argument would be something like "Statement of argument, evidence, expound", but here my evidence is part of the statement of the argument, and my "expound" is also "evidence". That throws the whole structure into disarray. Ah well - let it stand as a key example in favour of the argument.
diff --git a/hugo/content/posts/2013-07-21-on-shakespeare.md b/hugo/content/posts/2013-07-21-on-shakespeare.md
new file mode 100644
index 0000000..177ca98
--- /dev/null
+++ b/hugo/content/posts/2013-07-21-on-shakespeare.md
@@ -0,0 +1,58 @@
+---
+lastmod: "2022-08-21T10:51:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-21T00:00:00Z"
+aliases:
+- /uncategorized/on-shakespeare/
+- /on-shakespeare/
+title: On Shakespeare
+---
+I've now seen two Shakespeare plays at the [Globe](https://en.wikipedia.org/wiki/Shakespeare%27s_Globe) - once in person, to see A Midsummer Night's Dream, and once with a one-year-and-eighty-mile gap between viewing and performance (through the [Globe On Screen](https://www.dramaonlinelibrary.com/shakespeares-globe-on-screen) project), to see Twelfth Night.
+Both times the plays were excellent.
+Both were comedies, and both were laugh-out-loud funny.
+
+The performance of Twelfth Night, then, was beamed into a local-ish cinema for our viewing pleasure.
+(Definitely more comfortable than the seating at the Globe, although I am reliably informed that if you go to the Globe, you really have to be a groundling, standing at the front next to the stage, in order to get the proper experience.)
+My seat was next to those of some young-ish children.
+The result of taking several young children to a three-hour performance of a play which isn't in Modern English was predictable, but it got me thinking.
+(Bear with me - this will become relevant.)
+
+I said that both the plays were laugh-out-loud funny.
+I'll be talking about Twelfth Night, as that's the one I remember best (since it happened in the past week).
+In fact, it started out pretty dull - I was completely lost for the first five minutes while some Count or other pontificated about how much he loved a reclusive Lady.
+I was only able to get the gist of what he was saying, through snatching out some words every now and again.
+However, as soon as the Count got off-stage, the play picked up immensely, and became properly funny.
+It was really noticeable that Shakespeare was writing in two different registers - the posh one, with the Count wittering on in soliloquy, was all but incomprehensible to me, while the standard register, in which everyone else spoke, was pretty much just English.
+
+It is also really hard to grasp the nature of the humour of Shakespeare just from reading the plays.
+Once they are being performed, however, it becomes immediately obvious that every other line is an innuendo of some sort.
+Even while the Count is talking, Shakespeare gives him double-entendres ("How will she love, when the rich golden shaft/ Hath kill'd the flock of all affections else/ That live in her…") - we are clearly meant to be laughing at him, such a serious character accidentally making ribald puns - and once the silly characters come on, the humour just gets coarser.
+You don't see that from the script unless you're actually looking for it - but actors can make so much more of it, with their freedom to move around and inflect.
+In fact, with the exception of the wordplay of the Fool and the plot-based shenanigans (twins being mistaken for one another, and so on), I would say that well over half the humour in Twelfth Night is sexual in nature.
+
+Cue smooth segue to the English National Curriculum, which seems desperate to get children learning Shakespeare.
+Thankfully, [Michael Gove](https://en.wikipedia.org/wiki/Michael_Gove) doesn't seem to have gone sufficiently mad as to insist on its teaching in primary school (that is, from the age of 4 to 11), but before his reforms take place in 2014, it is/was required (link now dead) that pupils be taught at least one Shakespeare play in Key Stage 3 (that is, aged 11 to 14).
+I can't find information about the draft 2014 curriculum for Key Stage 3, but I'm sure it appears in there too, given Gove's attitudes to pedagogy.
+
+I just don't understand why pupils are taught Shakespeare at such a young age.
+I speak for myself here, but 14 is really not old enough to understand the main source of humour (innuendo) in Shakespeare plays.
+Shakespeare's comedies are full of it - essentially all non-plot-based humour is sexual in nature.
+It is bizarre that pupils who are too young to understand the humour would be taught to analyse it.
+The language is difficult enough to read (another hurdle that simply goes away when it's acted properly), but the plays are simply horrendously drab unless you are able to grasp the humour - when you remove three-quarters of the humour from a comedy, what is left?
+
+Aside from the fact that such young children can't really understand the humour, it's also difficult for a teacher to teach, unless that teacher is one of a very unusual breed who can talk to eir pupils candidly about anything at all without it feeling awkward.
+Most teachers would find it much easier simply to ignore the double-entendres in the first place - I know that when I was taught A Midsummer Night's Dream in Year 6 (aged 10-11), my teacher focused entirely on plot, but the plot of AMND is nothing special.
+The same happened when I was subsequently taught AMND in Year 8 (aged 12-13) - even worse, we were shown a film adaptation that was just not funny.
+(This may be because I was too young to be amused by Shakespeare-humour, but I actually think the film didn't portray it at all.)
+
+So we have this strange situation of young children being taught centuries-old plays, of which they understand neither the content nor the syntax.
+There is absolutely no reason for a pupil to find Shakespeare relevant or useful in any way, taught like this.
+It's a shame, because the simple fact that "Twelfth Night is laugh-out-loud funny" is enough to tell me that Shakespeare is relevant.
+There are a couple of interesting historical notes to be gleaned from it - for instance, the treatment of the puritanical Malvolio, the only character not to receive a happy ending (aside from the pirate and the Fool), seems to show that people really liked to put down killjoys back then, in contrast to our view now (I find Malvolio's plight rather sad, and so does everyone else I've spoken to).
+But that's not really why I think Shakespeare is relevant - I think his plays are relevant in much the same way that I think the Marx Brothers' films are.
+They are really entertaining plays.
+Humour seems not to have changed very much over the last few centuries.
+Taking pupils at a young age, and turning them off good plays which are part of our cultural heritage, is something of a travesty.
diff --git a/hugo/content/posts/2013-07-22-the-orbitstabiliser-theorem.md b/hugo/content/posts/2013-07-22-the-orbitstabiliser-theorem.md
new file mode 100644
index 0000000..38f4d51
--- /dev/null
+++ b/hugo/content/posts/2013-07-22-the-orbitstabiliser-theorem.md
@@ -0,0 +1,28 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2013-07-22T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/the-orbitstabiliser-theorem/
+- /the-orbitstabiliser-theorem/
+title: The Orbit/Stabiliser Theorem
+---
+The Orbit/Stabiliser Theorem is a simple theorem in group theory. Thanks to [Tim Gowers](https://gowers.wordpress.com/2011/11/09/group-actions-ii-the-orbit-stabilizer-theorem/) for the proof I outline here - I find it much more intuitive than the proof that was presented in lectures, and it involves equivalence relations (which I think are wonderful things).
+
+Theorem: if a group \\(G\\) acts on a set \\(X\\), then for any \\(x \in X\\), \\(\vert \{g(x) : g \in G\} \vert \times \vert \{g \in G: g(x) = x\} \vert = \vert G \vert\\).
+
+Proof: We fix an element \\(x \in X\\), and define two equivalence relations on \\(G\\): \\(g \sim h\\) iff \\(g(x) = h(x)\\), and \\(g \cdot h\\) iff \\(h^{-1} g \in \text{Stab}_G(x)\\), where \\(\text{Stab}_G(k) = \{g \in G: g(k) = k\}\\).
+
+Now, these are the same relation (we will check that they are indeed equivalence relations - don't worry!). This is because \\(g \sim h \iff g(x) = h(x) \iff h^{-1}g(x) = x \iff h^{-1}g \in \text{Stab}_G(x) \iff g \cdot h\\).
+
+And \\(\sim\\) is an equivalence relation, almost trivially: it is reflexive since \\(g \sim g \iff g(x) = g(x)\\) is obviously true; it is symmetric, since \\(g \sim h \iff g(x) = h(x) \iff h(x) = g(x) \iff h \sim g\\); it is transitive similarly.
+
+Now, it is clear that the number of equivalence classes of \\(\sim\\) is just the size of the orbit \\(\{g(x), g \in G \}\\), because for each equivalence class there is one member of the orbit (with \\([g]\\) representing \\(g(x)\\)), and for each member of the orbit there is one equivalence class (with \\(g(x)\\) being represented solely by \\([g]\\)).
+
+It is also clear that the size of the stabiliser \\(\text{Stab}_G(x)\\) is just the size of an equivalence class \\([g]\\) of \\(\cdot\\): for each member \\(s\\) of the stabiliser, we have that \\(g \cdot (g s)\\), so \\(\vert [g] \vert \geq \vert \text{Stab}_G(x) \vert\\); while for each member \\(h\\) of \\([g]\\) we have that \\(h^{-1}g \in \text{Stab}_G(x)\\) by definition of \\(\cdot\\) - but all these \\(h^{-1}g\\) are different (because otherwise we could cancel a \\(g\\)), so \\(\vert [g] \vert \leq \vert \text{Stab}_G(x) \vert\\).
+
+And the equivalence classes of \\(\sim\\) (which is the same relation as \\(\cdot\\)) partition the set \\(G\\) and all have the same size, so (size of an equivalence class) times (number of equivalence classes) is just \\(\vert G \vert\\) - but this is exactly what we required.
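The theorem is easy to sanity-check computationally. Here is a minimal sketch in Python (the group, the vertex labelling and the helper `compose` are my own choices, purely for illustration): the dihedral group of symmetries of a square acts on the square's four vertices, the orbit of a vertex has size 4, its stabiliser has size 2, and indeed \\(4 \times 2 = 8 = \vert G \vert\\).

```python
# Symmetries of a square, acting on its vertices 0, 1, 2, 3 (in cyclic order).
# A symmetry is represented as a permutation: a tuple p where p[i] is the
# image of vertex i.
identity = (0, 1, 2, 3)
rotation = (1, 2, 3, 0)    # rotate by a quarter turn
reflection = (1, 0, 3, 2)  # reflect, swapping vertices 0<->1 and 2<->3

def compose(p, q):
    """The symmetry 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(4))

# Close {identity} under composition with the two generators to obtain the
# whole group (the dihedral group of order 8).
group = {identity}
while True:
    bigger = group | {compose(s, g) for s in (rotation, reflection) for g in group}
    if bigger == group:
        break
    group = bigger

x = 0  # the element whose orbit and stabiliser we compute
orbit = {g[x] for g in group}                 # {g(x) : g in G}
stabiliser = {g for g in group if g[x] == x}  # {g in G : g(x) = x}

print(len(orbit), len(stabiliser), len(group))  # 4 2 8
assert len(orbit) * len(stabiliser) == len(group)
```

With the vertices relabelled, the same sketch works for any small permutation group - and the composition convention ("apply \\(q\\) first") matches the proof's use of \\(h^{-1}g\\).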
diff --git a/hugo/content/posts/2013-07-24-stumbled-across-24th-july-2013.md b/hugo/content/posts/2013-07-24-stumbled-across-24th-july-2013.md
new file mode 100644
index 0000000..79ae218
--- /dev/null
+++ b/hugo/content/posts/2013-07-24-stumbled-across-24th-july-2013.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-07-24T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across/
+- /stumbled-across/
+title: Stumbled across 24th July 2013
+---
+
+* This is something I will try at some point, probably when I get back to uni:
+* This was fun:
+* Hah - stupid copyright owners:
+* The government's got around to allowing the testing of driverless cars:
+* An insightful comic about getting to sleep:
+* Roll on the cheap and easy satellites:
+* A bunch of interesting sciency things, including a new application of zapping current through the brain:
+* At last!
+* I didn't see this at the time - consider my faith in humanity restored:
+* Excellent essay on why it's hard to prohibit same-sex marriage: [cached][gender and same-sex marriage]
+
+[gender and same-sex marriage]: http://web.archive.org/web/20140723074138/http://linuxmafia.com/faq/Essays/marriage.html
diff --git a/hugo/content/posts/2013-07-25-metathought.md b/hugo/content/posts/2013-07-25-metathought.md
new file mode 100644
index 0000000..b582f85
--- /dev/null
+++ b/hugo/content/posts/2013-07-25-metathought.md
@@ -0,0 +1,42 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2013-07-25T00:00:00Z"
+aliases:
+- /psychology/metathought/
+- /metathought/
+title: Metathought
+sidenotes: true
+---
+
+I have recently discovered the game of [Agricola](https://en.wikipedia.org/wiki/Agricola_%28board_game%29), a board game involving using resources (family members, stone, etc) to build a thriving farm.
+The game is turn-based, with the possible actions each turn being severely limited.
+This means the game is in large part about optimising under constraint (the foundation of any good game).
+However, during gameplay I also detected a certain resonance between Agricola and the game of [Magic: The Gathering](https://en.wikipedia.org/wiki/Magic_the_gathering), beyond the usual "constrained optimisation" theme.
+While I was playing Agricola, there was a kind of niggle in the back of my mind, telling me that "ooh, this is like Magic".
+
+I notice a similar affinity when reading essentially anything by [Douglas R Hofstadter](https://en.wikipedia.org/wiki/Doug_Hofstadter), an author [famed](https://xkcd.com/917/ "xkcd I'm So Meta") for his "metaness".
+That is, when reading a good Hofstadter piece, I get a similar niggle (considerably weaker than the Magic-Agricola one) telling me that "ooh, this is a bit like Magic".
+Hofstadter invents puns and connections which feel so natural that you'd be forgiven for thinking that he had invented English specially for the purpose, were it not for the fact that his book [Gödel, Escher, Bach](https://en.wikipedia.org/wiki/Godel_escher_bach) was translated (I am told) with the same level of scintillation into at least {{< side right lang "eight other languages." >}}French, German, Spanish, Chinese, Swedish, Dutch, Italian and Russian, according to the bottom of Tal Cohen's review (cached).{{< /side >}}
+This leads me to wonder whether what I'm really noticing is not just constrained optimisation, but "metathought" - thought on a higher level of abstraction to the usual.
+With Hofstadter, it's on the level of words as well as of the symbols of thought that the words invoke; with Magic, it's thinking about plans and strategies involving the other player(s) and the interactions between their cards and mine; with Agricola it's thinking about the aims of the other player(s) and how best to compete for the limited actions available to us both.
+I note that I don't feel the resonance with chess - archetypical of "deterministic games", where you know exactly what moves are available to both sides - so the resonance is not a marker for "putting myself in others' shoes".
+Rather, it seems to be a marker of *interaction* - between players' plans, or between words and meaning, and so on.
+
+Closely linked to this is the related concept of [introspection](http://lesswrong.com/lw/6p6/the_limits_of_introspection/). It's a [well-researched](https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect "Dunning-Kruger effect Wikipedia page") [fact](https://en.wikipedia.org/wiki/Four_stages_of_competence "Four stages of competence Wikipedia page") that people are ([in general](http://lesswrong.com/lw/1xh/living_luminously/ "Luminosity LessWrong page")) [bad](http://lesswrong.com/lw/i4/belief_in_belief/ "Belief in belief LessWrong page") at introspection (hence the existence of [behaviourism](https://en.wikipedia.org/wiki/Behaviorism) and [heterophenomenology](https://en.wikipedia.org/wiki/Heterophenomenology)). I've trained myself over the last year or so to be much better at introspection than I was [^uncertain] - I notice myself shying away from thinking things, I recognise when there's a specific thing I can't be bothered to think about, and so forth. Of course, I am (as yet) imperfect, but I am [trying not to be](http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/ "I want to become stronger").
+
+How is this, as I have claimed, "closely linked"? I am slowly forming the opinion that it takes a reasonably good level of introspective ability just to be able to notice resonances between things. [^general] I am waiting for experimental evidence on this (and it is possible that my subjects are reading this blog, so I won't say what the tests are). {{< side right interrelation "However, I've noticed these resonances myself to a greater degree since learning introspection.">}}A possible explanation is that I've just been doing more interrelated things recently, so I would be very likely to spot more interrelations.{{< /side >}}
+
+The feeling of "affinity" between things is very difficult for me to describe - it's kind of a shade of extra interpretation laid on a concept, but it's not linked to any of the commonly-recognised senses, so English isn't very well set up to define it - and the feeling is very weak. I sometimes think of it as making an extra brushstroke on a watercolour - the added colour is there, but it's very slight - perhaps slight enough to go unnoticed by someone who is not in the habit of noticing eir thoughts. It also feels like an area of light (in both senses - "not dark" and "not heavy") at the (literal) back of my mind. (Ah, how difficult it is to describe qualia accurately!) However it feels to me, it is my experience that people very rarely claim that one activity is similar to another in some abstract way (as I do with Magic and Gödel, Escher, Bach) - this may be because I don't notice it when they so claim, or that they never so claim because no-one else ever so claims, or that they never so claim because they don't notice the resonance, or that the resonance isn't actually there and I'm delusional (although in this instance that seems a bit unlikely, if I say so myself).
+
+Why do I think that what I'm noticing is "metathought" rather than merely "constrained optimisation"? Well, I very rarely feel the resonance, and I'm always solving constrained optimisation problems without feeling the resonance (how succinct can I make this post, how many chocolates can I get away with eating…), so I suspect that it's not just the optimisation aspect. The only other link I have come up with at the moment is metathought. Magic, in particular, has the potential for very complicated interactions involving thinking hard about which strategies will be successful and when exactly to do things; Hofstadter's punning is ridiculously meta anyway; and Agricola is heavily based on working out what the opponents will be doing and taking that into account (that is, it requires *reflection*, a key component of metathought), all while juggling your own strategies. I note for completeness that I read Gödel, Escher, Bach well before I discovered the game of Magic, and I didn't feel the resonance with GEB on first playing Magic - it was only once I'd played a good deal of Magic that I started feeling the resonance. Alternatively put, I feel the "resonance with Magic", rather than "resonance with things in the class to which Magic belongs".
+
+I get slight shades of the same resonance when solving crosswords, and maybe even sometimes when proving mathematical statements - but take this with a pinch of salt, because I've had time to create a pattern for "I feel this resonance when…", and it's much easier to fill that pattern than to actually work out whether I do feel that resonance. I explicitly noticed it and noted it to myself when playing Agricola and when reading the Ricercar from Gödel, Escher, Bach - any other examples are potentially suspect, now that I've thought the concept through, because the feeling of resonance is so weak compared to the thought "If my hypothesis is correct, I should feel resonance now". (I came to this realisation while writing this paragraph.) It would appear that I may have accidentally corrupted my ability to feel this "resonance" in weak cases. Unfortunately, this makes it very hard to provide further tests: in particular, I need cases when I would predict feeling resonance but in fact do not.
+
+Anyway, I hypothesised that "resonance" is only felt by people who naturally or artificially have good introspection. I would be very interested to hear of evidence on this point - if you feel (with some kind of justification) that you have unusually good introspection, or if you think you have felt the kind of resonance that I describe (of course, my description was poor!), do let me know - I don't know which way causality runs, if any, and I would like to know whether it's just some oddity of my own, or whether It's A Thing that no-one bothers to mention for some reason.
+
+
+[^general]: The resonance which is the main subject of this post is a single instance of a more general class of relations - for instance, there is a different kind of resonance between Scrabble and [Countdown](https://en.wikipedia.org/wiki/Countdown_%28game_show%29).
+
+[^uncertain]: Or at least, I hope I have - it certainly feels like it's working, but then again, it would probably feel like that if I were getting *worse* at introspection, because I'd be getting worse at telling whether I was getting better or not.
diff --git a/hugo/content/posts/2013-07-29-stumbled-across-29-july-2013.md b/hugo/content/posts/2013-07-29-stumbled-across-29-july-2013.md
new file mode 100644
index 0000000..11e65ba
--- /dev/null
+++ b/hugo/content/posts/2013-07-29-stumbled-across-29-july-2013.md
@@ -0,0 +1,21 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-07-29T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-2/
+- /stumbled-across-29-july-2013/
+title: Stumbled across 29th July 2013
+---
+* Hehe:
+* Wow - light trapped for a full minute:
+* The importance of a consistent utility function:
+* Obama promised to be friendly to whistleblowers, and has quietly removed said promise:
+* I wholeheartedly agree with this site:
+* Good post on belief-in-belief:
+* Huh. A strange system, the US medical system:
+* Very much this - about how the media has lost the plot about PRISMgate:
+* Aaand my faith in humanity is once again shattered:
diff --git a/hugo/content/posts/2013-07-30-on-to-do-lists-as-direction-in-life.md b/hugo/content/posts/2013-07-30-on-to-do-lists-as-direction-in-life.md
new file mode 100644
index 0000000..752b0e1
--- /dev/null
+++ b/hugo/content/posts/2013-07-30-on-to-do-lists-as-direction-in-life.md
@@ -0,0 +1,35 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-07-30T00:00:00Z"
+aliases:
+- /uncategorized/on-to-do-lists-as-direction-in-life/
+- /on-to-do-lists-as-direction-in-life/
+title: On to-do lists as direction in life
+---
+[Getting Things Done](https://en.wikipedia.org/wiki/Getting_Things_Done) has gathered something of a [cult following](http://web.archive.org/web/20130428015707/http://www.wired.com/techbiz/people/magazine/15-10/ff_allen? "Wired article on GTD") [archived due to [link rot][1]] since its inception. As a way of getting things done, it's pretty good - separate tasks out into small bits on your to-do list so that you have mental room free to consider the bigger picture. However, there's a certain aspect of to-do lists that I've not really seen mentioned before, and which I find to be really helpful.
+
+My to-do list takes up a large amount of space on one of my virtual desktops (specifically, on [Dashboard](https://en.wikipedia.org/wiki/Dashboard_%28Mac_OS%29)). It consists of a large number of short-term goals, with some longer-term goals and a couple of very long-term goals mixed in. Sample:
+
+> Library books: Flow, The Mind's I, Consciousness Explained
+>
+> Go and see the [Aurora](https://en.wikipedia.org/wiki/Aurora_borealis)
+>
+> See how many [taste buds](https://en.wikipedia.org/wiki/Supertaster "Supertaster") I have
+>
+> Update list of books on blog
+
+There are very long-term goals like seeing the Aurora (which I intend doing during the next solar maximum in seven years or so), some goals which can be accomplished very quickly (like seeing whether I am officially a supertaster), an ongoing task (updating the blog) and a list of the library books I have out at the moment.
+
+The reason I like this arrangement so much is that it doesn't make you feel bad to see a wall full of to-do items that you've not done. Because a fair few of the goals are so long-term, I expect to see lots of items on the list, so I don't get the sinking feeling when I see everything I have left to do. It also feels really good to tick off a long-term goal (my most recent being "Get a [Kindle](https://en.wikipedia.org/wiki/Amazon_Kindle)"), and it feels better than it otherwise would to tick off a short-term goal, since it is surrounded by things that I know won't get ticked off for a while, so it feels (by association) like a bigger accomplishment.
+
+It also means that I should never forget to do something big that I want to do. So often, I hear people say "I wish I could… before I die", or similar. Now I have a system for recording all these things that cross my mind, so I will eventually get round to doing them. (I should note that on a fairly regular basis, I read through the whole list and work out which items are feasible right now - hopefully this will mitigate the "that's a long-term goal, ignore it" effect.) My goal to "play in the [Tallis Fantasia](https://en.wikipedia.org/wiki/Fantasia_on_a_Theme_by_Thomas_Tallis)" is one such entry.
+
+I think that this kind of method of writing down goals could be used to create some sort of life direction. I've seen services into which you enter your long-term goals, and then when you complete one, you tell the system and you gain "experience points", levelling up after reaching a certain threshold of points. I like this idea, but I postulate that it encourages thinking of long-term goals as different things to short-term goals, and that this is not necessarily desirable. A goal is a goal; some are big-impact long-term things, some are big-impact short-term things, and so on; the system seems to create an artificial distinction between short-term and long-term. My system, in its simplicity, avoids this distinction. I can see a pattern of goals that reflects my future life; to get a bit soppy about it, I can see a much clearer "direction" this way, listing internships, the research I want to do for interest, a certain walk that is strongly recommended from Cambridge to Grantchester, and so on. The lack of "levels of abstraction", I think, makes it much easier to do long-term things that I would otherwise put off.
+
+I now get to tick something else off the list - hooray! I hope something comes along soon to replace it.
+
+ [1]: https://en.wikipedia.org/wiki/Link_rot "Link rot Wikipedia page"
diff --git a/hugo/content/posts/2013-08-04-new-computer-setup.md b/hugo/content/posts/2013-08-04-new-computer-setup.md
new file mode 100644
index 0000000..a5bab01
--- /dev/null
+++ b/hugo/content/posts/2013-08-04-new-computer-setup.md
@@ -0,0 +1,60 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-08-04T00:00:00Z"
+aliases:
+- /uncategorized/new-computer-setup/
+- /new-computer-setup/
+title: New computer setup
+---
+
+*Editor's note: this is a snapshot of life in 2013-08-04. My setup has changed substantially since then.*
+
+In case I ever have to get a new computer (or, indeed, in case anyone else is interested), I hereby present the (updating) list of applications and so forth that I would immediately install to get a computer up to usability.
+
+* Browser: [Firefox] with [Ghostery], [HTTPS Everywhere], and [NoScript] (and remember to turn on Do Not Track…)
+* Mail client: [Thunderbird] with [Enigmail]
+* Messaging client: [Adium] on Mac, and possibly [Pidgin] for others - I've never used a non-Mac chat client. Beware: as of this writing, Pidgin stores passwords in plain text, so don't save passwords in Pidgin.
+* Encryption: GPG ([Windows][GPG Windows], [Mac][GPG Mac], [Linux][GPG Linux])
+* Text editor: Vim
+* Memory training: [Anki]
+* Movie viewing: [VLC]
+* Screen colour muter: [f.lux]
+* Backup software: [CrashPlan] - but I also keep local backups using whatever built-in automated backup utility the OS provides
+* FTP client: [FileZilla], or [Cyberduck] on a Mac
+* Syncing: [Dropbox] (but I want to get rid of this, because of privacy concerns)
+* Computational software: [Mathematica]
+* Music: [iTunes] (but I want to switch this for something not-Apple, and it has no Linux version)
+* Gaming: [Steam]
+* RSS reader: Currently, my RSS feed is presented in-browser, at [NewsBlur].
+
+
+[Firefox]: https://www.mozilla.org/en-US/firefox/new/
+[Thunderbird]: https://www.mozilla.org/en-US/thunderbird/
+
+[Ghostery]: https://www.ghostery.com/
+[HTTPS Everywhere]: https://www.eff.org/https-everywhere
+[NoScript]: https://addons.mozilla.org/en-US/firefox/addon/noscript/
+[Enigmail]: http://www.enigmail.net/home/index.php
+
+[Dropbox]: https://www.dropbox.com/
+[Mathematica]: https://www.wolfram.com
+[iTunes]: https://www.apple.com/itunes/
+[Steam]: https://store.steampowered.com/
+[Anki]: http://ankisrs.net/
+[NewsBlur]: https://www.newsblur.com
+[FileZilla]: https://filezilla-project.org/
+[Cyberduck]: http://cyberduck.io/
+[CrashPlan]: https://www.crashplan.com/
+
+[f.lux]: http://stereopsis.com/flux/
+[VLC]: https://videolan.org/vlc/
+[Pidgin]: https://www.pidgin.im/
+[Adium]: https://adium.im/
+[GPG Linux]: https://gnupg.org/
+[GPG Mac]: https://gpgtools.org/
+[GPG Windows]: http://www.gpg4win.org/
diff --git a/hugo/content/posts/2013-08-04-stumbled-across-4-august-2013.md b/hugo/content/posts/2013-08-04-stumbled-across-4-august-2013.md
new file mode 100644
index 0000000..2ce554a
--- /dev/null
+++ b/hugo/content/posts/2013-08-04-stumbled-across-4-august-2013.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-08-04T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-3/
+- /stumbled-across-4-august-2013/
+title: Stumbled across 4th August 2013
+---
+* An ad developer has misgivings:
+* Hint for dealing with some automated phone helplines - swear at them and they'll put you through to a human:
+* The future is coming:
+* A large collection of replacements for various PRISM-vulnerable services:
+* Some people think in a really rather interesting way:
+* The joys of a memoryless distribution:
+* An impressive photograph: [largest photo cached]
+* A fair chunk of the "1910's predicted Year 2000 technologies" has been invented:
+* A sweet video about Street View:
+* How to enable encryption in your emails using [GPG][GPG]:
+
+[GPG]: https://en.wikipedia.org/wiki/GNU_Privacy_Guard
+[largest photo cached]: https://web.archive.org/web/20130814173950/http://www.oddly-even.com/2013/07/31/the-largest-photo-ever-taken-of-tokyo-is-zoomable-and-it-is-glorious/
diff --git a/hugo/content/posts/2013-08-11-stumbled-across-11th-august-2013.md b/hugo/content/posts/2013-08-11-stumbled-across-11th-august-2013.md
new file mode 100644
index 0000000..c825cbe
--- /dev/null
+++ b/hugo/content/posts/2013-08-11-stumbled-across-11th-august-2013.md
@@ -0,0 +1,24 @@
+---
+lastmod: "2022-08-21T11:10:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-08-11T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-11th-august-2013/
+- /stumbled-across-11th-august-2013/
+title: Stumbled across 11th August 2013
+---
+* A thousand times this (EDIT 2022: the link is dead and I have no idea what I was referring to).
+* A possible fix for the "[economic problem][1] of democracy":
+* A fascinating look at privacy online, how we're not built for privacy, and how tribal cultures attain privacy:
+* I'm all for healthy competition and so forth, but do we really want such massive phones?
+* This is the kind of thing that I never quite have the courage or the morals to do:
+* This is an excellent summary for why I'm trying to find a good Gmail replacement:
+* A guide for dealing with introverts (not that many of my friends need it - perhaps that's why they're my friends):
+* I didn't know this was such a widespread problem:
+* I agree with this article on the state of maths teaching entirely - I had some excellent teachers, but I could see from the textbooks how it was designed to be taught:
+* How is it that Scandinavia manages to be so nice all the time?!
+
+ [1]: https://en.wikipedia.org/wiki/Criticism_of_democracy#Economic_criticisms
diff --git a/hugo/content/posts/2013-08-18-thinking-styles.md b/hugo/content/posts/2013-08-18-thinking-styles.md
new file mode 100644
index 0000000..b9fa7aa
--- /dev/null
+++ b/hugo/content/posts/2013-08-18-thinking-styles.md
@@ -0,0 +1,37 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2013-08-18T00:00:00Z"
+math: true
+aliases:
+- /psychology/thinking-styles/
+- /thinking-styles/
+title: Thinking styles
+---
+All the way back in primary school (ages 4 to 11, in case a non-Brit is reading this), we were told repeatedly that "people learn things in different ways". There were two years in primary school when I had a teacher who was very into [Six Thinking Hats](https://en.wikipedia.org/wiki/Six_Thinking_Hats) (leading to the worst outbreak of headlice I've ever encountered) and [mind maps](https://en.wikipedia.org/wiki/Mind_map). I never understood mind maps, and whenever we were told to create a mind map, I'd make mine as linear and boxy as possible, out of simple frustration with the pointless task of making a picture of something that I already had perfectly well-set-out in my mind. I quickly learnt to correlate "making a mind map" with "being slow and inefficient at thinking". (This was back when my memory was still exceptionally good, so I wasn't really learning much at school - having read, and therefore memorised, a good children's encyclopaedia was enough for me - and hence relative to me, pretty much everyone else was slow and inefficient, because I'd already learnt the material.)
+
+It's only now that I've realised that perhaps some people actually do think in a way that makes mind maps helpful. I'm not bad at spatial visualisation (not great, but not totally inept), but I don't think in pictures at all. Apparently, [about 3% of people](https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizing-from-one-example) [sorry, the source for the statistic wasn't given on that page] simply do not have mental images - I don't fall into that 3%, but a close family member tells me ey does - ey cannot make sense of pictures at all without translating them into words. (Possibly a genetic bias? At least another two close family members are very visual indeed.) Ey told me that the world is really not set up for people who can't visualise: whenever you say you don't understand something, the default response is apparently to say exactly the same thing again, but accompanied by a picture - completely useless for a non-visualiser. I've never noticed this before, and a quick memory trawl is inconclusive, but I will certainly keep a look out for it and against it.
+
+A prime example (no pun intended) of an extraneous visual approach to something was the multiplication of two two-digit numbers by using the fact that \\((a+b)(c+d)=ac+bc+ad+bd\\). As an example, I'll take the numbers 35 and 27. The method involved drawing a box of (nominal) side lengths 27x35, and drawing two lines to divide the sides (nominally) into 20+7 and 30+5. Then in each of the four sub-boxes thus created, you had to write the area of that sub-box (that is, calculate \\(30 \times 20\\), \\(30 \times 7\\), \\(5 \times 20\\), \\(5 \times 7\\)) and then add them all up to get the total area. This method seemed like an enormous waste of time and space to me; I had already learnt to multiply arbitrary numbers together through the [Kumon](https://en.wikipedia.org/wiki/Kumon) program by using the standard [long multiplication](https://en.wikipedia.org/wiki/Multiplication_algorithm#Long_multiplication), and to have to learn a method that was about ten times slower and used four times more paper seemed immensely wasteful. I formed the opinion that the reason people were bad at multiplication was that they were being told to use these useless methods that no-one in their right mind could possibly understand. The [Generalising from One Example](http://lesswrong.com/lw/dr/generalizing_from_one_example/) LessWrong post contains an extremely relevant passage:
+
+> I only really discovered this in my last job as a school teacher. There's a lot of data on teaching methods that students enjoy and learn from. I had some of these methods...inflicted...on me during my school days, and I had no intention of abusing my own students in the same way. And when I tried the sorts of really creative stuff I would have loved as a student...it fell completely flat. What ended up working? Something pretty close to the teaching methods I'd hated as a kid. Oh. Well. Now I know why people use them so much. And here I'd gone through life thinking my teachers were just inexplicably bad at what they did, never figuring out that I was just the odd outlier who couldn't be reached by this sort of stuff.
+
+And it's only very recently that it occurred to me that this is quite possibly exactly my experience. The visual techniques simply work for other people.
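+For the record, the box method applied to the earlier example is just the distributive law written out (my arithmetic, spelling out the identity the method relies on):
+
+```latex
+35 \times 27 = (30+5)(20+7)
+            = 30 \times 20 + 30 \times 7 + 5 \times 20 + 5 \times 7
+            = 600 + 210 + 100 + 35
+            = 945
+```
+
+These are exactly the same partial products that long multiplication produces - the box merely lays them out spatially rather than positionally.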
+
+Another example (again from arithmetic) is the [number line][1] (and the closely related and suggestively named [real line][2]). A large chunk of the first few years at primary school was devoted to learning to count and add (pretty tedious stuff, especially if you already knew how to count and add!). One of the key methods used was the number line - so, for instance, to work out \\(8-3\\), you had to count forward 8 and go back 3. I hated this method - again, it wasted time (why not just go forward 5?) and space (draw out a line? no thanks!). Apparently there was a study done on an untouched-by-society tribe, and it turns out that viewing numbers spatially is not inbuilt in humans. [^study] Maybe I was just unusually unable to learn this view of numbers.
+
+Over the last few years, however, most noticeably as I have come to learn more maths, I have started to rely on pictures considerably more than I used to. I discovered the memory technique of "imagine a picture, the more ridiculous the better" to link two concepts (that's how I'm learning the capitals of the world - Luanda is the capital of Angola, which I remember as a [footballer](https://en.wikipedia.org/wiki/Soccer) scoring a GOAL [Angola] by kicking the ball into a [LOO](https://en.wikipedia.org/wiki/Toilet "Toilet") which is sitting between the goalposts), and I have used it to learn a variety of things. In the topic of [analysis](https://en.wikipedia.org/wiki/Mathematical_analysis), I rely on pictures as a guide to intuition - the statement that "for every \\(\epsilon > 0\\), there is a \\(\delta > 0\\) such that for all \\(y\\) where \\(\vert y - x \vert < \delta\\), we have \\(\vert f(y) - f(x) \vert < \epsilon\\)" is far easier to absorb with a picture in mind.
+* But it may be a bit too half-baked:
+* I love a good visualisation:
+* I laughed pretty much constantly through this piece of bureaucracy-hacking:
+* This is a problem with the Internet of Things as well as with mind-computer interfaces:
+* Wow - it's possible to represent words as vectors so that *vector('Paris') - vector('France') + vector('Italy')* results in a vector that is very close to *vector('Rome')*:
+* Let there be food:
+* One of the manifold reasons why the USA's [TSA][1] should be scrapped:
+* An excellent witty dialogue between some experts in their respective fields:
+* How to disagree correctly:
+
+ [1]: https://en.wikipedia.org/wiki/Transportation_Security_Administration "TSA Wikipedia page"
diff --git a/hugo/content/posts/2013-08-26-topology-made-simple.md b/hugo/content/posts/2013-08-26-topology-made-simple.md
new file mode 100644
index 0000000..3747ae2
--- /dev/null
+++ b/hugo/content/posts/2013-08-26-topology-made-simple.md
@@ -0,0 +1,43 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2013-08-26T00:00:00Z"
+math: true
+aliases:
+- /wordpress/archives/364/index.html
+- /mathematical_summary/topology-made-simple/
+- /topology-made-simple/
+title: Topology made simple
+---
+I've been learning some basic [topology][1] over the last couple of months, and it strikes me that there are some *very* confusing names for things. Here I present an approach that hopefully avoids confusing terminology.
+
+We define a **topology** \\(\tau\\) on a set \\(X\\) to be a collection of sets such that: for every pair of sets \\(x,y \in \tau\\), we have that \\(x \cap y \in \tau\\); the empty set \\(\emptyset\\) and \\(X\\) are both in \\(\tau\\); for every \\(x \in \tau\\) we have that \\(x \subset X\\); and that \\(\displaystyle \cup_{\alpha} x_{\alpha}\\) is in \\(\tau\\) if all the \\(x_{\alpha}\\) are in \\(\tau\\). (That is: \\(\tau\\) contains the empty set and the entire set; sets in \\(\tau\\) are subsets of \\(X\\); not-necessarily-countable unions of sets in \\(\tau\\) are in \\(\tau\\); and finite intersections of sets in \\(\tau\\) are in \\(\tau\\).) We then say that \\((X, \tau)\\) is a **topological space**.
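+A tiny concrete example (my own, for illustration): on \\(X = \{1,2,3\}\\), take
+
+```latex
+\tau = \{\, \emptyset,\ \{1\},\ \{1,2\},\ X \,\}
+```
+
+Every union and every intersection of members of \\(\tau\\) is again in \\(\tau\\) (for instance \\(\{1\} \cup \{1,2\} = \{1,2\}\\) and \\(\{1\} \cap \{1,2\} = \{1\}\\)), and \\(\emptyset\\) and \\(X\\) are both in \\(\tau\\), so \\((X, \tau)\\) is a topological space.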
+
+If a set \\(x\\) is in \\(\tau\\), then we say that \\(x\\) is **fibble**. On the other hand, if \\(x^{\mathsf{c}}\\) (the complement of \\(x\\)) is in \\(\tau\\), then we say that \\(x\\) is **gobble**.
+
+We define a **metric space** \\((X,d)\\) to be a set \\(X\\) together with a "distance" function \\(d: X \times X \to \mathbb{R}\\) such that: \\(d(x,y)=0\\) iff \\(x=y\\); \\(d(x,y)=d(y,x)\\); and \\(d(x,y)+d(y,z) \geq d(x,z)\\). (That is, "the distance between two points is 0 iff they're the same point; the distance is the same whether we go forwards or backwards; and taking a detour never shortens the distance".)
+
+We then define a **fiball** \\(B(x,\delta )\\) to be "the set of all \\(y \in X\\) that are within \\(\delta\\) of \\(x\\)" - that is, \\(\{ y \in X: d(x,y)<\delta \}\\).
+
+It turns out that we can create (or **induce**) a topology out of a metric space, by considering the fiballs. Let \\(x \in \tau\\) iff \\(x\\) is a union (not necessarily countable) of fiballs in the metric space. We can see that this is a topology, because unions of (things which are unions of fiballs) are unions of fiballs; the empty set is the union of no fiballs; the entire set \\(X\\) is the union of all possible fiballs; and it can be checked that intersections behave as required (although that takes a tiny bit of work).
+
+Now we see why fiballs are called "fiballs" - because in the induced topology, fiballs are fibble.
+
+We can define a **gobball** in the same way, by making the strict inequality weak in the definition of the fiball (that is, taking \\(d(x,y) \leq \delta\\)). And it can be verified that gobballs are gobble.
+
+We can keep going with these definitions - a **continuous function** between two topological spaces \\(f: (X, \tau) \to (Y, \sigma)\\) is defined to be one such that if \\(y \subset Y\\) is fibble in \\(Y\\), then \\(f^{-1}(y)\\) is fibble in \\(X\\), and so forth.
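+To make the definition concrete (my own illustration): take \\(f: \mathbb{R} \to \mathbb{R}\\) given by \\(f(x) = x^2\\), with the topology induced by the usual distance \\(d(x,y) = \vert x-y \vert\\) on both sides. The preimage of the fibble set \\((1,4)\\) is
+
+```latex
+f^{-1}\big( (1,4) \big) = \{ x : 1 < x^2 < 4 \} = (-2,-1) \cup (1,2)
+```
+
+which is a union of open intervals, hence a union of fiballs, hence fibble - so this \\(f\\) passes the continuity test on that particular fibble set.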
+
+Eventually we come to the reason that I've used the words "fibble" and "gobble". Consider the metric \\(d: \mathbb{R} \times \mathbb{R} \to \mathbb{R}\\) given by \\(d(x,y) = \vert x-y \vert\\). It can easily be checked that \\((\mathbb{R},d)\\) is a metric space, and so it induces a topology on \\(\mathbb{R}\\). What is the fiball \\(B(x,\delta)\\)? It is precisely the set of points which are within \\(\delta\\) of \\(x\\) - that is, the open interval \\((x-\delta, x+\delta)\\). So we know that open intervals are fibble. Note also that \\((1,2) \cup (3,4)\\) is fibble, but is not an open interval. All well and good.
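+For instance (my own worked example), that last set is exhibited as a union of fiballs by
+
+```latex
+(1,2) \cup (3,4) = B(1.5,\ 0.5) \cup B(3.5,\ 0.5)
+```
+
+since under this metric \\(B(x,\delta)\\) is exactly \\((x-\delta, x+\delta)\\).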
+
+But now consider a different topology on \\(\mathbb{R}\\). Let \\(x\\) be fibble if it is a union of half-open intervals \\([a,b)\\). It can be checked that this is a topology. Now the set \\([1,2) \cup [3,4)\\) is fibble, and note that it is not an open interval. We can see that \\((1,2)\\) is still fibble (it's the union of the fibble sets \\([x, 2)\\) for \\(1 < x < 2\\)).
diff --git a/hugo/content/posts/2013-09-13-stumbled-across-14th-september-2013.md b/hugo/content/posts/2013-09-13-stumbled-across-14th-september-2013.md
new file mode 100644
index 0000000..81b6f5e
--- /dev/null
+++ b/hugo/content/posts/2013-09-13-stumbled-across-14th-september-2013.md
@@ -0,0 +1,29 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stumbled_across
+comments: true
+date: "2013-09-13T00:00:00Z"
+aliases:
+- /stumbled_across/stumbled-across-14th-september-2013/
+- /stumbled-across-14th-september-2013/
+title: Stumbled across 14th September 2013
+---
+* On the merits of silence (I wholeheartedly agree):
+
+* Given the previous results on humans' sense of physical location, I'm not particularly surprised that you can make yourself identify your body as being somewhere other than where it really is:
+
+* Aaand the future arrives:
+
+* Another reason why Finland is amazing:
+
+* A thought-provoking story: [archived version](http://web.archive.org/web/20010802144026/http://www.tor.com/72ltrs.html)
+
+* On the "mundane magics" kind of lines:
+
+* Not sure what to make of this - I actually can't remember who narrated Paddington in the audio-books of my youth:
+
+* This links in heavily with the thesis of the book [Flow](https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi#Flow), which I'm reading at the moment:
+
+This is the first post that I'm syndicating to social media. I hope it works. (If anyone has any ideas about what to syndicate - for instance, that Stumbled Across posts should not, or especially should, or things like that - then do let me know.)
diff --git a/hugo/content/posts/2013-09-21-how-to-prove-that-you-are-a-god.md b/hugo/content/posts/2013-09-21-how-to-prove-that-you-are-a-god.md
new file mode 100644
index 0000000..f8a9ff5
--- /dev/null
+++ b/hugo/content/posts/2013-09-21-how-to-prove-that-you-are-a-god.md
@@ -0,0 +1,38 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-09-21T00:00:00Z"
+aliases:
+- /uncategorized/how-to-prove-that-you-are-a-god/
+- /how-to-prove-that-you-are-a-god/
+title: How to prove that you are a god
+---
+I came across an interesting question while reading the blog of [Scott Aaronson][1] today. The question was as follows:
+
+> In the world of the colour-blind, how could I prove that I could see colour?
+
+I'm presuming, to make the discussion more life-like and less cheaty, that this civilisation hasn't discovered that light comes in wavelengths, or that it has but it can't distinguish very well between wavelengths (so that all coloured light falls into the same bucket of 100nm to 1000nm, for instance). The challenge is to design an experimental protocol to confirm or deny that I have access to information that the colour-blind do not. This question is much harder than the corresponding question in the world of the blind, because having vision tells you so much more than having colour vision (simply set up a flag two miles away, have someone raise it at a random time, note down the time you saw it raised, and compare notes).
+
+Oh. That's unfortunate. This protocol works perfectly well to determine colour, too - I just need to provide two flags of different colour, present them for inspection so that the experimenters verify that they look identical to them, mark the base of the flagpoles A and B in some way that the colour-blind can detect (etching?), note down which colour corresponds to which flag, walk a hundred metres, have the experimenter wave one of the flags at random, write down which flag was waved, repeat to taste.
+
+How about a proof that I could hear when no-one else knew what hearing was? I would need to find something that I could hear that no-one else could detect - perhaps the dropping of a vase fifty metres away - and, while blindfolded, raise my hand when I heard the vase drop. I would, of course, have to remember to explain that there could be a time delay over long distances.
+
+Stereo sound (the ability to detect where something is by the sound it makes)? I shut my eyes, someone walks around me and claps once; I point to that person.
+
+Smell? Easy - simply uncork a test tube of water or hydrogen sulphide. I can identify which was used.
+
+Taste? Again, we could dissolve sugar and salt separately into water.
+
+[Proprioception][2]? It seems odd to me that any physical being could have managed to evolve language and not proprioception, but I could at least demonstrate the ability to exercise fine control over my body by pulling the spring of a [Newton meter][3] with a toe, finger, mouth, etc. This should be good evidence that I know how much strength I am exerting. I could also do this blind (although my results would be more rough, because I'm not a good proprioceptor), and I could also not see the scale of the Newton meter. To test awareness of where my body parts are, I could place both hands behind my back and have someone move one of them (with me blindfolded); I would then touch the moved hand with my other hand.
+
+Language (the fact that "the sounds I am making convey information")? This would require two people who spoke the same language, of course. We could be placed in separate booths, with a row of pictures in front of us. Someone would point to a picture in one booth, the person in the relevant booth would describe it, the other person would point to the corresponding picture.
+
+This discussion turned out to be less interesting than I would have liked. Anyway, it would appear to imply that if someone did indeed have extra senses, that person would easily be able to convince me of this fact. For instance, in the world of the colour-blind, I would present two items, saying "These items differ in a property which I can sense and which you cannot; show me one and I will tell you which it was". If the experiment were repeated and I were consistently able to say which item was shown, then I think this should count as proof that I can see colour. Of course, any limitations on my power ("I can't necessarily distinguish between any two items, but I can distinguish between these two", or "I can only distinguish between two items when it's a full moon in three days and when I've received a blood sacrifice and when the experimenter has sufficient faith in my abilities") should be declared up front, so that they can't be used to explain away failure (so, for instance, we could in advance find someone very credulous). Parallels to the [Randi prize][4] fully intended.
+
+ [1]: http://www.scottaaronson.com
+ [2]: https://en.wikipedia.org/wiki/Proprioception
+ [3]: https://en.wikipedia.org/wiki/Spring_scale
+ [4]: http://www.skepdic.com/randi.html
diff --git a/hugo/content/posts/2013-10-10-plot-armour.md b/hugo/content/posts/2013-10-10-plot-armour.md
new file mode 100644
index 0000000..5ab2e25
--- /dev/null
+++ b/hugo/content/posts/2013-10-10-plot-armour.md
@@ -0,0 +1,51 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+comments: true
+date: "2013-10-10T00:00:00Z"
+aliases:
+- /creative/plot-armour/
+- /plot-armour/
+title: Plot Armour
+---
+*Wherein I dabble in parodic fiction. The title refers to the TV Tropes page on [Plot Armour][1], but don't follow that link unless you first resolve not to click on any links on that page. TV Tropes is the hardest extant website from which to escape.*
+
+Jim, third-in-command of the Watchers, ducked behind the Warlord's force-field, desperately trying to catch his breath in the face of an inexorable onslaught. His attackers, the hundred-strong members of the Hourglass Collective, had never been defeated in pitched battle. As testament to their ability, two thousand of the finest troops the Watchers had to offer stood motionless around him, suspended in time; even now, even with five of the most experienced Watchers still fighting, the Hourglass forces were calmly and efficiently slitting the throats of the frozen soldiers. Skilled in cultivating terror, they were working in from afar, and it looked to Jim as though he would have to endure another half-hour of helplessness before they got to him at last. Jim and the Warlord had only survived this far by virtue of an accidental and uncontrollable burst of power from the Founder of the Watchers, released at a fortuitous moment to counter the time-suspension channelled by the Hourglass. That had given the Warlord time to protect five people, before the Founder had collapsed.
+
+Sophia, the Second Vigilant, most powerful of the Watchers, the Founder's first recruit, was still fighting. She had been the recipient of the Warlord's first force-field, naturally, and she was using her borrowed time well. Jim was recovering, moving nearer to her, and her power waxed correspondingly: he was exerting his power to heal her and to fuel her efforts. She began to glow, first dimly but soon as bright as the moon and then as the sun on a cloudy day, and as her light fell on the ranks of the Hourglass, all movement across the battlefield stopped. Sophia gently closed her eyes; in response, threads of light began to take shape around the Hourglass, weaving a net to contain the enemy. Too slowly: a pulse of power blasted forth from the Collective, tearing through the weave and ripping away the Warlord's force-fields. Sophia teetered on her feet, her power spent, but Jim was too far away, having been frozen in place by the calm Sophia had laid on the battlefield. She fell even as he ran towards her, his healing power growing as he did so, but he was too late to stop her from falling unconscious.
+
+Three remaining Watchers, against a hundred of the Hourglass. The Warlord had used everything he had to create his force-fields. Jim had no offensive abilities at all. That left Christine, who (as the recipient of the Warlord's final, weakest, force-field) had been badly affected by the Collective's retaliation. Even with Jim's presence already staunching her head wound, her skill of intuition was still very much off-kilter. Her mind was sluggish, the chains of correlation and causation drifting to her as through treacle.
+
+After far too long, the first key insight came to her.
+
+"Warlord! Jim! Do you remember anything at all from before you threw up the force-fields?" she whispered to him, with as little voice as she could manage. Even that would have been audible to some of the far-away Hourglass, such was the eerie silence over the battlefield.
+
+Her two comrades stared at her in confusion for ten seconds. At last, the Warlord's furrowed brow cleared, and he announced proudly that he could recall the whole series of events in perfect detail. Jim nodded along.
+
+Christine closed her eyes. She, too, could now remember the assassination, the declarations of war, the summoning of the Watchers, and the start of the battle. Odd - but the exchange slipped from her mind as she made another connection.
+
+"Since when could anyone stop time?! How can the Hourglass possibly have the power to suspend an entire army? How come you can heal us, Jim? These aren't normal things for humans to be able to do!"
+
+At the edge of her mind, she could feel an explanation forming, but she was thoroughly spooked by now, and she squashed the nascent reasoning. The final piece clicked into place.
+
+"Jim - say something. Anything - recite the first ten digits of pi, in as normal a voice as you can," she ordered.
+
+"Three point one four one five nine two six…" Jim recited. He was beyond thinking that Christine was being weird, requiring the value of pi with an army advancing upon them - he had long since learnt to go with her requests for information, as you could never tell which piece of data would cause everything to make sense to her superhuman intuitive powers.
+
+"No - can you say it more *normally*?" Christine clarified, cutting him off.
+
+In a monotone, Jim reeled off "Three point one four one five nine two six…"
+
+That was all the confirmation Christine's inductive power needed.
+
+"OK. This will be a shock to you both, Warlord, Jim, but we're in a story. We're fictional. This situation we're in makes no sense at all. We had no backstory until I explicitly requested it, and it took a little while to come to us. And no-one seems to be capable of just *saying* something! Every time, we're ordering, or clarifying, or reeling things off, but never *saying*! We are fictional, and our author is not particularly competent to boot. That gives us a way out of our conveniently dramatic Dire Straits.
+
+"Author! We're three of the most powerful members of the Watchers, and we've been the entire focus of this short story. There are no other plausible protagonists. You must find a way for us to survive, or else the story ends and you will have wasted all this time on another creative endeavour that came to nothing!"
+
+The Hourglass were approaching faster now, provoked by Christine's loud outburst. Only thirty feet away, then twenty, then ten.
+
+The front runner drew a dagger, and slit Jim's throat, then the Warlord's, then Christine's.
+
+ [1]: http://tvtropes.org/pmwiki/pmwiki.php/Main/PlotArmor
diff --git a/hugo/content/posts/2013-10-11-meaning-what-you-say.md b/hugo/content/posts/2013-10-11-meaning-what-you-say.md
new file mode 100644
index 0000000..964beab
--- /dev/null
+++ b/hugo/content/posts/2013-10-11-meaning-what-you-say.md
@@ -0,0 +1,34 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-10-11T00:00:00Z"
+aliases:
+- /uncategorized/meaning-what-you-say/
+- /meaning-what-you-say/
+title: Meaning what you say
+---
+In conversation with (say, for the purposes of propagating a stereotype) humanities students, I am often struck by how imprecisely language is used, and how much confusion arises therefrom. A case in point:
+
+> A: I think that froogles should be sprogged!
+>
+> B: Sprogging froogles would make the bimmers go plog.
+>
+> A: But I use froogles all the time - I don't care about the bimmers! Why are you so caught up on the plogging of bimmers?
+
+Here, we see Person A espousing a view, Person B contributing a fact, and Person A responding as if the fact were an attack. This happens *all the time*, and it's not just that I'm incapable of sounding non-threatening, because Person A-style people seem to respond in the same way whoever's acting as Person B. It may well be an excellent tactic during a competitive debate, because a Person A-style response makes you sound impassioned and commanding. However, when it comes to attempting to divine truth, it's thoroughly detrimental. Person B has to spend the next few sentences saying that ey's not attacking A [^pun] - and that's time wasted which could have been spent discussing bimmer-plogging and its relevance.
+
+As a mathematician, you quickly learn to be able to shift into a state of mind in which you mean exactly what you say, and no more. Without this skill, I suspect it is very hard to be a mathematician. Imagine I, as Person B, said "10 is not a multiple of 3", and you (as person A) replied, "But 10 is a multiple of 2, and you didn't mention that!" You would be laughed out of the room, because it is simply taken for granted that I didn't mean to say anything beyond "10 is not a multiple of 3".
+
+Similarly, as a truth-finder (as opposed to debater), I should have the freedom to say "If the cinema were closed, it would very likely have little to no impact on your life" without my interlocutor assuming that I mean "The cinema should be closed" or "The cinema will be closed" or "You are a moron".[^cinema] Fine, in a competitive debate, no holds are barred, but in real life we should be trying to find truth, and it's much harder to do that if you have to keep clarifying every statement. "If the cinema were to be closed (and it might not be), then it would very likely have little to no impact on your life, but I'm not saying that its overall cultural value shouldn't mean that the cinema ought to stay" is considerably less easy to read and write. I, as its author, have limited room in my memory to store all the little dangly bits of sentence that I intend to include. You, as its reader, have limited room in your memory, some of which is taken up in holding irrelevant points like "Patrick is not arguing with you".
+
+Once Person A responds in that way, it becomes much harder for Person B to maintain a calm fact-finding frame of mind. It flashes through B's mind, "Person A has just attacked me! I must defend myself!", even if ey is trying as hard as possible to be balanced and to think clearly. [^experience] It clouds the rest of the discussion.
+
+Essentially, what I want is for everyone to receive training in meaning exactly what they say, and in understanding exactly what is said. I find that it adds greatly to pretty much every conversation if all parties are able to switch into this mode as necessary (to resolve some particular question of fact, for instance). I recognise that my causing offence to another person is always a failing on my part, but it is a lot of work to maintain the context of "I must de-offendify every factual statement". If nothing else, it's just one more thing I have to remember to do once I've realised what words are coming out of my mouth. Fine for normal conversation, since so much of that is based around appearing as un-offensive [^inoffensive] as possible, but it is a great burden when you're attempting to perform a distributed computation (namely, using two or more brains to discover whether a statement is true or false).
+
+[^pun]: See what I did there? "Ey, A"?
+[^cinema]: This example comes from a discussion about a certain news story about anti-monopoly laws in Cambridge cinemas (entitled "Cambridge set to lose Cineworld or Arts Picturehouse following Competition Commission ruling", of Cambridge News, published 2013-10-08 and now defunct). In fact, in this example, the cinema will be *sold on*, not closed - another reason for clear mathematical thinking in distinguishing "if X then Y" from "X is true".
+[^experience]: I am extrapolating from my own experience here - I am not yet well-enough practised at adopting the frame of mind that "the other person is only attacking me because ey doesn't know better - it doesn't really count".
+[^inoffensive]: Not inoffensive, which is something a little different.
diff --git a/hugo/content/posts/2013-10-13-training-away-mental-bias.md b/hugo/content/posts/2013-10-13-training-away-mental-bias.md
new file mode 100644
index 0000000..cadb376
--- /dev/null
+++ b/hugo/content/posts/2013-10-13-training-away-mental-bias.md
@@ -0,0 +1,46 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2013-10-13T00:00:00Z"
+aliases:
+- /psychology/training-away-mental-bias/
+- /training-away-mental-bias/
+title: Training away mental bias
+---
+*In which I recount an experiment I have been performing. Please be aware that in this article I am in "[meaning what I say][1]" mode.*
+
+For the past year or so, I have been consciously trying to identify and counteract places in the "natural", everyday use of language in which gender bias is implicitly assumed to be correct. The kind of thing I mean is:
+
+> A: I called the plumber.
+>
+> B: And what did he say?
+
+I have also been keeping tabs on the way the word "man" creeps into occupations and so forth:
+
+> We are looking for a new chairman for the society.
+
+Specifically, I did the following to counter this culturally-imposed tendency:
+
+1. I switched to using gender-neutral pronouns in my writing (although more recently, I have reverted to "he-or-she" in conversation)
+2. I formed a habit of noticing whenever I thought the words "she" or "he", and checking whether I actually knew the gender of the person in question
+3. If for some reason I need to invent a person, and gender-neutral won't do, I flip a coin to determine that character's gender (which will, in theory, completely eliminate gender bias in my characters)
+
+I think that I have succeeded in correcting the bias, at least partially. A week ago, I was even caught by surprise when someone referred to an electrician of unspecified gender as "he" - I had to backtrack mentally and work out whether I'd missed the specification of eir gender, before I realised that this was simply the usual bias being demonstrated by other people. In much the same way, it would surprise me for an electrician to be *assumed* to be called Fred [^coinflips] ("I phoned for an electrician to fix our wiring. Fred was amazing."), or to be gluten-intolerant ("I phoned for an electrician to fix our wiring, but of course he couldn't eat the cheesecake I'd made for him."), or to be particularly tall ("I phoned for an electrician to fix our wiring, but she had the obvious trouble moving around in the attic.").
+
+This is not to proselytise - I've never pointed out people's gender bias unless they've specifically asked for me to do something similar, because in my experience (sample size of 1, when I explained the problem as I see it to someone) people get annoyed and call me a "feminist". [^slight] I don't understand why people would get annoyed that I would like women and men to be treated equally, and the issue is further obscured by the labelling of my views as "feminism". Not that I am against feminism particularly, but using the word "feminist" just clouds the issue. In the same way, calling yourself a "liberal" encompasses an enormous range of policies, and I may not agree with every single one of them, so identifying myself as a liberal would misrepresent me. For me to be a "feminist" could be interpreted by some as "this person wishes for men to be replaced entirely by women", rather than the interpretation I would prefer (namely, "this person wishes for males to be treated as fairly as females in all things"). Classifying your argument immediately makes everything harder for all parties, as it then sets up a pressure to remain consistent with the entire category you have given, rather than with what you intended to convey. Silly example: "I'm a utilitarian" vs "I think people should act so as to maximise the happiness of people around them".
+
+Given that the person-who-got-annoyed in question was female, I don't think I have the right to overrule her position. [^parallel]
+
+Anyway, I think that my attempt to realign my thoughts so that the implicit, unannounced anti-female bias is less pronounced has been a success. I do not claim perfection, of course, and I will keep going with my new habits (they're a part of me now, so it's easier to keep going than not). I have no real-world outcomes to measure, apart from being acutely aware of everyone else's bias [^mean] - I intend at some point to take a test to give me a quantitative answer, but at present I can't find the particular test I have in mind. [^test] I hope these habits are having a good effect on my thinking.
+
+
+[^coinflips]: I'm flipping coins for the genders in this paragraph, too.
+[^slight]: As if that were some kind of horrendous slight upon my good name.
+[^parallel]: Although I can't help drawing a parallel with extremely wealthy people claiming that "money is unimportant"; it feels as bogus as a philosopher claiming that "truth is relative" - which is simply asking for you not to believe em.
+[^mean]: Again, I mean what I say: I do not necessarily mean that "I am unbiased" or "I am unaware of my own bias".
+[^test]: Its protocol is to flash up pairs of words like \{"good","female"\} or \{"uncle, male"\}, whereupon the testee presses a button for "related" and a different button for "unrelated". The idea is that it is easy to determine whether \{"uncle","male"\} are related, but if bias is present then \{"competent","female"\} will be harder (and hence slower) to determine than \{"competent", "male"\}, because we are so used to thinking of males implicitly as more competent.
+
+[1]: /meaning-what-you-say/
diff --git a/hugo/content/posts/2013-10-20-the-ravenous.md b/hugo/content/posts/2013-10-20-the-ravenous.md
new file mode 100644
index 0000000..4aa3a78
--- /dev/null
+++ b/hugo/content/posts/2013-10-20-the-ravenous.md
@@ -0,0 +1,56 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+comments: true
+date: "2013-10-20T00:00:00Z"
+aliases:
+- /creative/the-ravenous/
+- /the-ravenous/
+title: The Ravenous
+sidenotes: true
+---
+[Once upon a midnight dreary][1], while I pondered, weak and weary,
+I required a snack to feed me. Reaching in the kitchen drawer -
+With the scissors, cut the wrapping, I revealed a jar of tapen-
+Ade of olives. Gently snapping, snapping off the lid, I saw:
+Lines of mouldy olive scored the tapenade. The lid I saw
+Speckled with each mocking spore.
+
+How the pangs of hunger rumbled while I cursed the jar I'd fumbled;
+Indistinct, I faintly mumbled, "May this torture last no more!"
+Suddenly I saw the bread bin; eagerly towards it edging,
+Bravely to my stomach pledging, pledging food would be in store.
+Opening that sacred vessel, only crumbs were left in store.
+Savagely the bag I tore.
+
+Now my thoughts turned to basmati; I would make a dish quite hearty,
+And my shattered brain was party to such plans of starch galore.
+Trembling I imagined sauces rich in spice and such resources,
+Gripped by these enchanting forces, opened I the cupboard door.
+Slavering, excitement mounting, opened I the cupboard door;
+Rice stocks were exceeding poor.
+
+How my stomach needed filling. Dreams of pancakes gently grilling
+Served to give me eager willingness to find a bag of {{< side right flour-footnote `flour.`>}} Pronounced "floor". {{< /side >}}
+Happily it was not lacking. Took the eggs out from their packing,
+Fetched a bowl, and in it cracking, cracking eggs so batter'd pour.
+Tipped the milk (blue top, full-fat) in, mixing up so batter'd pour.
+Sugar I could not ignore.
+
+Took out oil, and put the gas on. Measured out a goodly ration,
+Ladled it in practised fashion, spread it thin, my movements sure.
+Round the edges batter bubbled, far too quiet. The heat I doubled;
+Soon I'd be no longer troubled: hunger'd bother me no more.
+Oh, to be no longer troubled, hunger both'ring me no more.
+Crêpes: a food which I adore.
+
+Tested I the pancake, dipping fish-slice in to start its flipping;
+Grabbed the pan, towards me tipping. "Now be cooked!" I did implore.
+In my eagerness to turn it (lest I tarry and I burn it)
+With such horror I discern it: I had dropped it to the floor.
+Ah, with terror I discern that I had dropped it to the floor.
+Quoth the pancake: "Nevermore."
+
+ [1]: https://en.wikipedia.org/wiki/The_Raven "The Raven Wikipedia page"
diff --git a/hugo/content/posts/2013-10-24-how-to-do-analysis-questions.md b/hugo/content/posts/2013-10-24-how-to-do-analysis-questions.md
new file mode 100644
index 0000000..3fe711b
--- /dev/null
+++ b/hugo/content/posts/2013-10-24-how-to-do-analysis-questions.md
@@ -0,0 +1,115 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2013-10-24T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/how-to-do-analysis-questions/
+- /how-to-do-analysis-questions/
+title: How to do Analysis questions
+---
+This post is for posterity, made shortly after [Dr Paul Russell][1] lectured Analysis II in Part IB of the Maths Tripos at Cambridge. In particular, he demonstrated a way of doing certain basic questions. It may be useful to people who are only just starting the study of analysis and/or who are doing example sheets in it.
+
+The first example sheet of an Analysis course will usually be full of questions designed to get you up and running with the basic definitions. For instance, one question from the first example sheet of Analysis II this year is as follows:
+
+> Show that if \\((f_n)\\) is a sequence of uniformly continuous real functions on \\(\mathbb{R}\\), and if \\(f_n \to f\\) uniformly, then \\(f\\) is uniformly continuous.
+
+This is one of those questions which only exists to make sure that you know what "uniformly continuous" and "converges uniformly" mean.
+
+How do we solve this question? The key with a definitions-question is to avoid employing the brain wherever possible. So the first step is to define \\((f_n)\\) and \\(f\\), and to write down everything we know about them:
+
+* Let \\((f_n)\\) be a sequence of uniformly continuous real functions on \\(\mathbb{R}\\), and \\(f\\) a real function on \\(\mathbb{R}\\), such that \\(f_n \to f\\) uniformly.
+* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), for every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that for all \\(\epsilon\\), there exists \\(N\\) such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x) \vert < \epsilon\\).
+
+Now, what do we want to prove?
+
+* [Don't write this down yet - this line goes at the end of the proof!] Therefore, for every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert x-y \vert < \delta\\), \\(\vert f(y)-f(x) \vert < \epsilon\\). Hence \\(f\\) is uniformly continuous.
+
+So what can we get from what we know? Everything we know is about "for all \\(\epsilon\\)". So we fix an arbitrary \\(\epsilon\\). If we can prove something that is true for this \\(\epsilon\\), with no further assumptions, then we are done for all \\(\epsilon\\).
+
+* Fix arbitrary \\(\epsilon\\) greater than \\(0\\).
+
+Now what do we know?
+
+* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that there exists \\(N\\) such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+* \\(\epsilon > 0\\).
+
+Aha! Now we have a definite something existing (namely, the \\(N\\) in the second condition). Let's fix it into existence.
+
+* Let \\(N\\) be such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+
+What do we know?
+
+* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer.
+
+Now, we have two "for all"s competing with each other. The more specific is the second one, so we'll fix that into existence.
+
+* Fix arbitrary \\(n\\) greater than or equal to \\(N\\).
+
+What do we know?
+
+* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\).
+
+Now we have a choice of "for all"s again, but this time they aren't "talking about the same thing" (last time, both were integers referring to which \\(f_n\\) we were talking about; this time, one is an integer and one is an arbitrary real). However, now we have \\(n \geq N\\) which we can talk about; let's wring more information out of it, by using the "uniformly continuous" bit.
+
+* Since each \\(f_n\\) is uniformly continuous, there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\).
+
+Aha - another "there exists" condition (on \\(\delta\\)). Let's fix it.
+
+* Fix \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
+
+What do we know?
+
+* Since each \\(f_n\\) is uniformly continuous, for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\).
+
+Two more "for all" conditions. Let's fix them into existence:
+
+* Let \\(x\\) be an arbitrary real, and let \\(y\\) be such that \\(\vert y-x\vert < \delta\\).
+
+What do we know?
+
+* Since each \\(f_n\\) is uniformly continuous, \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
+* Since \\(f_n \to f\\) uniformly, we have that \\(\vert f_n(x)-f(x) \vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\), and \\(x\\) is real, and \\(\vert y-x\vert < \delta\\).
+
+Now the conditions are really small things. It's kind of unclear how to proceed from here, so let's look at what we wanted to prove again:
+
+> For every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert x-y\vert < \delta\\), \\(\vert f(y)-f(x)\vert < \epsilon\\).
+
+Applying what we know, this becomes:
+
+* [to be proved] For all \\(y\\) with \\(\vert x-y\vert < \delta\\), \\(\vert f(y)-f(x)\vert < \epsilon\\).
+
+Aha! We have already got something to do with \\(y\\) (namely that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\)), and we have something to do with \\(f(x)\\) (namely that \\(\vert f_n(x)-f(x)\vert < \epsilon\\)). Hence \\(\vert f_n(y)-f_n(x)\vert + \vert f_n(x)-f(x)\vert < 2\epsilon\\), and the triangle inequality gives us that \\(\vert f_n(y)-f(x)\vert < 2\epsilon\\). Eek - we need to turn that \\(f_n(y)\\) into an \\(f(y)\\). We have no way of doing that, so we must have missed out some information somewhere. Backtracking, the nearest-to-the-end bit of missed out information was when we fixed \\(x, y\\). We threw away information in "for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\)" when we fixed \\(x\\) - it applies to \\(y\\) too. So we'll add a new statement to the "what do we know?" list:
+
+* \\(\vert f_n(y)-f(x)\vert < 2\epsilon\\).
+* \\(\vert f_n(y)-f(y)\vert < \epsilon\\).
+* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\), and \\(x\\) is real, and \\(\vert y-x \vert < \delta\\).
+
+And now it just drops out of the triangle inequality that \\( \vert f(y)-f(x) \vert < 3 \epsilon\\).
+
+Now, \\(\epsilon\\) was arbitrary, \\(N\\) was dictated by the conditions, \\(n \geq N\\) was arbitrary, \\(\delta\\) was dictated by the conditions, \\(x\\) was arbitrary, \\(y\\) was arbitrary subject to \\(\vert y-x \vert < \delta\\).
+
+Hence we have proved that for every \\(\epsilon\\) there exists \\(N\\) such that for all \\(n \geq N\\) there is a \\(\delta\\) such that for all \\(x\\), for all \\(y\\) with \\(\vert y-x\vert < \delta\\), \\(\vert f(y)-f(x)\vert < 3\epsilon\\).
+
+We can clean this statement up. Notice that neither \\(n\\) nor \\(N\\) was involved in the final expression, so we can simply get rid of them to obtain:
+
+> For every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x\\), for all \\(y\\) with \\(\vert y-x\vert < \delta\\), \\(\vert f(y)-f(x) \vert < 3\epsilon\\).
+
+From this, it is easy to obtain the required result. We want to turn \\(3 \epsilon\\) into \\(\epsilon\\) - but that's fine, because the expression holds for every \\(\epsilon\\), so in particular if we fix \\(\epsilon\\) then it holds for \\(\dfrac{\epsilon}{3}\\). We'll just use the \\(\delta\\) from that \\(\dfrac{\epsilon}{3}\\) instead. This gives us that \\(f\\) is uniformly continuous, as required, and without actually engaging the brain except to carry out two algorithms:
+
+* "write down what we know; if there exists something, fix it, and repeat; if for all something, then fix an arbitrary one, and repeat; if we're stuck, go back through, looking to see if we missed out any information during a fixing-arbitrary-for-all phase";
+* "when the information we have is simple enough, compare terms from what we know with the expression that we want to show; use the triangle inequality to get them in there".
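+
+For reference, the whole estimate we assembled piece by piece is just one triangle-inequality chain (same names as above: \\(n \geq N\\) fixed, \\(\delta\\) taken from the uniform continuity of \\(f_n\\), and \\(\vert y-x \vert < \delta\\)):
+
+\\[
+\vert f(y)-f(x) \vert \leq \vert f(y)-f_n(y) \vert + \vert f_n(y)-f_n(x) \vert + \vert f_n(x)-f(x) \vert < \epsilon + \epsilon + \epsilon = 3\epsilon.
+\\]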
+
+ [1]: https://www.dpmms.cam.ac.uk/~par31 "Paul Russell"
diff --git a/hugo/content/posts/2013-11-07-my-quest-for-a-new-phone.md b/hugo/content/posts/2013-11-07-my-quest-for-a-new-phone.md
new file mode 100644
index 0000000..4358cfd
--- /dev/null
+++ b/hugo/content/posts/2013-11-07-my-quest-for-a-new-phone.md
@@ -0,0 +1,96 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-11-07T00:00:00Z"
+aliases:
+- /uncategorized/my-quest-for-a-new-phone/
+- /my-quest-for-a-new-phone/
+title: My quest for a new phone
+---
+*This post is unfinished, and may never be finished - I have decided that the Nexus 5 is sufficiently cheap, nice-looking and future-proof to outweigh the boredom of continuing the research here, especially given that such research by necessity has a very short lifespan. I am one of those people who hates shopping with a fiery passion.*
+
+My current phone is a five-year-old [Nokia 1680]. It has recently developed a disturbing tendency to turn off when I'm not watching it.
+This puts me in the market for a new phone. Having looked over the Internet for guides to which phone to buy, I've become lost in the swamp of information, so I am using this post to order my thoughts.
+
+# My current phone usage
+
+I use my phone pretty rarely. It has a camera, but I have only used it once ever (and that picture was so blurry that it doesn't really count). It has a colour screen, which I would happily forgo if it made the battery life better. The battery lasts about a week between charges at my current usage level. I have made about five calls on it in the past year, and sent a few hundred texts. The £20 of credit I gave it about four months ago is now down to £2.50, but I used it unusually often to make calls (four of the five calls I mentioned were in that period). The phone can connect to the Internet, but I have never used it thusly, because interacting with web pages would be too painful on that screen and with those buttons. My current tariff is pay-as-you-go, with Tesco Mobile, on a plan that doesn't seem to exist any more (4p per text, and some unspecified amount for calls).
+
+# Projected phone usage
+
+I have two main options available.
+
+* Buy a dumbphone
+* Buy a smartphone.
+
+These options greatly affect the way I would use the phone. For a dumbphone, I would use it much as I use my current phone: for rare calls and for less-rare texts. For a smartphone, I would branch out considerably, into using it for calendar syncing, to-do lists, GPS/maps/directions, on-the-go information, computation and so forth. I would not use it for games (because they are simply a waste of time that I could be using to [become more awesome][2], and because they aren't fun anyway). I don't see myself using it as a camera, either. I will not be installing social media apps on a smartphone, because I hate it when people use them in front of me, and because I categorically do not want to become one of these people who incessantly posts about what food they had this morning. I reserve public self-broadcasting platforms for those things which I think could be important or interesting to many people (and I amend the "what-I-post" category in response to feedback), or which I'm proud of having created, and it's much harder to find/make these things on a phone screen than on a computer with keyboard and big screen.
+
+# Requirements for a dumbphone
+
+* Cheap - I do not want to spend more than £50 on a dumbphone
+* Long battery life
+* No need for a camera or a colour screen - [eInk] sounds ideal
+* No need for Internet access
+
+# Requirements for a smartphone
+
+* Calendar syncing (I could host a [CalDAV] server on this website, so interoperability should be easy)
+* To-do list syncing (I have switched to [Workflowy] for to-do lists, and that can be accessed in-browser, so it only needs a web browser)
+* Preferably maps/GPS
+* Smooth user experience (I want to feel like I'm controlling [JARVIS])
+* Cheaper is better - I do not want to spend more than £400 on a smartphone
+* Preferably [libre][7] and more preferably secure/NSA-proof, although this is not paramount
+* At least a four-inch screen, preferably larger (up to a maximum of six inches)
+
+# Dumbphone research
+
+It would appear that very few purely eInk phones have ever been created. There are a few dual-screen [LCD and eInk phones][8], but they are primarily smartphones; what I want from eInk is more like a [Kindle] turned into a phone. The [eInk page on phones][10] demonstrates three phones, but they are either dual-screen or truly dreadful ([as in][11], only [two lines of text][12] can appear on the screen at once). It looks like eInk is a no-go.
+
+I am reduced to looking for dumbphones without a camera, colour screen or Internet access.
+
+# Smartphone research
+
+## Operating system
+
+There are two main OSs in use: [Android] and [iOS]. I say this because [Windows Phone] OS is ugly enough to fail the JARVIS requirement, and [Blackberry] phones… hmm. My cached thoughts on Blackberry phones run along the lines of "don't like them, uncool" more than anything else. I find myself generating excuses not to include them in this list, even though I don't actually know much about them. Better put them in.
+
+### iOS
+
+The only phone devices which run iOS are Apple's iPhones. With an education discount, the only model I can buy new within my £400 limit is the [iPhone 4S] (at £349). This model has access to [Siri] (the Apple personal assistant).
+
+Apple offers a "[refurbished and clearance][19]" store, but they do not offer iPhones through this.
+
+### Android
+
+Android phones are very widely available. Because there is such a huge choice of phones already, I will make the simplifying assumption that I only want a phone which runs [Android 4.4 "KitKat"][KitKat] (the latest version of Android, as of this writing).
+
+### Blackberry
+
+It turns out that only two Blackberry phones have full-size touchscreens. Screens too small to fit a reasonable amount of text fail the JARVIS criterion, which leaves only the [Z30][21] and [Z10][22]. However, from what [I've seen][23], the Blackberry OS is rather uglier than Android or iOS. For the sake of simplifying the discussion, I will go with my cached self and rule out Blackberry.
+
+ [Nokia 1680]: https://en.wikipedia.org/wiki/Nokia_1680_classic "Nokia 1680 Wikipedia page"
+ [2]: http://lesswrong.com/lw/iri/how_to_become_a_1000_year_old_vampire/ "Thousand year old vampire LessWrong page"
+ [eInk]: http://www.eink.com "eInk"
+ [calDAV]: https://en.wikipedia.org/wiki/CalDAV "CalDAV Wikipedia page"
+ [Workflowy]: https://workflowy.com "Workflowy"
+ [JARVIS]: https://www.youtube.com/watch?v=D156TfHpE1Q "JARVIS Youtube video"
+ [7]: https://en.wikipedia.org/wiki/Gratis_versus_libre "Free-as-in-freedom Wikipedia page"
+ [8]: http://gizmodo.com/5967746/this-dual-lcd-and-e-ink-phone-will-be-available-in-2013 "LCD/eInk phone example"
+ [Kindle]: https://en.wikipedia.org/wiki/Amazon_Kindle "Amazon Kindle"
+ [10]: http://web.archive.org/web/20130718152515/http://www.eink.com/customer_showcase_cell_phones.html "eInk phones showcase"
+ [11]: https://en.wikipedia.org/wiki/Motorola_Fone "Motofone eInk phone"
+ [12]: https://en.wikipedia.org/wiki/Motofone_f3#Display_technology "Motofone F3 Wikipedia page"
+ [Android]: https://en.wikipedia.org/wiki/Android_OS "Android Wikipedia page"
+ [iOS]: https://en.wikipedia.org/wiki/IOS "iOS Wikipedia page"
+ [Windows Phone]: https://en.wikipedia.org/wiki/Windows_Phone_8 "Windows Phone Wikipedia page"
+ [Blackberry]: http://uk.blackberry.com/smartphones.html "Blackberry phone"
+ [iPhone 4S]: https://en.wikipedia.org/wiki/Iphone_4s "iPhone 4S Wikipedia page"
+ [Siri]: https://en.wikipedia.org/wiki/Siri "Siri Wikipedia page"
+ [19]: http://store.apple.com/uk/browse/home/specialdeals "Apple Refurbished store"
+ [KitKat]: https://www.android.com/kitkat/ "Android KitKat"
+ [21]: http://uk.blackberry.com/smartphones/blackberry-z30.html "Blackberry Z30"
+ [22]: http://uk.blackberry.com/smartphones/blackberry-z10 "Blackberry Z10"
+ [23]: http://www.youtube.com/watch?v=nyjMVJ3ISDQ "Blackberry Z30 Youtube video"
diff --git a/hugo/content/posts/2013-11-12-markov-chain-card-trick.md b/hugo/content/posts/2013-11-12-markov-chain-card-trick.md
new file mode 100644
index 0000000..bf81fb9
--- /dev/null
+++ b/hugo/content/posts/2013-11-12-markov-chain-card-trick.md
@@ -0,0 +1,59 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2013-11-12T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/markov-chain-card-trick/
+- /markov-chain-card-trick/
+title: Markov Chain card trick
+---
+In my latest lecture on [Markov Chains][1] in Part IB of the Mathematical Tripos, our lecturer showed us a very nice little application of the theorem that "if a discrete-time chain is aperiodic, irreducible and positive-recurrent, then there is an invariant distribution to which the chain tends as time increases". In particular, let \\(X\\) be a Markov chain on a state space consisting of "the value of a card revealed from a deck of cards", where aces count 1 and picture cards count 10. Let \\(P\\) be randomly chosen from the range \\(1 \dots 5\\), and let \\(X_0 = P\\). Proceed as follows: define \\(X_n\\) as "the value of the \\(\sum_{i=0}^{n-1} X_i\\)-th card". Stop when the position \\(\sum_{i=0}^{n-1} X_i\\) of the newest \\(X_n\\) would be greater than \\(52\\) - that is, when the walk would run off the end of the deck.
+
+That is, I shuffle a pack of cards, and you select one of the first five at random. I then deal out the rest of the cards in order; you hop through the cards as they are revealed. For instance, if the deck looked like \\(\{5,4,9,10,1,2,6,8,8,3, \dots \}\\) and you picked \\(2\\) as your starting value, then your list of numbers would look like \\(\{4, 2,8, \dots \}\\) (moving forward four cards, then two, then eight, and so on). We keep going until I run out of cards to deal out, at which point I triumphantly announce the value of the card which you last remembered.
+
+How is this done? The point is that we are both walking along the same Markov chain, just from different starting positions. As soon as we both hit the same card, we are locked together for all time, and it is simply a matter of ensuring that we hit the same card at some point. But this is precisely what the quoted theorem tells us: if we go on for long enough, we will fall into the same distribution, and hence will likely hit the same card as each other at some point. I ran some simulations to determine the probability with which we end on the same value. The code is kind of dirty, for which I apologise - it was thrown together quickly, and is written in the [write-only][2] [Mathematica][3]. We first assume that all picture cards are 10s, and that aces are 1s.
+
+ nums = Flatten[{ConstantArray[Range[1, 10], 4], ConstantArray[10, 12]}];
+
+The following function runs one simulation using each of the supplied starting indices, using the given order of cards:
+
+    test[perm_, startPos_List] := ({Length[#[[1]]], #[[2]]} &@ NestWhile[{#[[1]][[#[[2]] + 1 ;;]], #[[1]][[#[[2]]]]} &, {perm, #}, Length[#[[1]]] >= #[[2]] &]) & /@ startPos
+
+It is astonishingly illegible. Read it as: "For each starting position supplied: start off with the input permutation and starting position. While the starting position is a valid position of the list (so it is less than or equal to the length of the list), set the starting position to the value of the card at that starting position, and set the list of cards to be everything after that position. Repeat until we've run out of cards. Then output the length of the remaining list of cards [and hence, indirectly, the final position we hit], and the last value we remembered."
+
+The following line of code runs a hundred thousand simulations with a random order of cards each time:
+
+ True/(False + True) /. Rule @@@ Tally[ Function[{inputStartPos}, #[[1, 1]] == #[[2, 1]] &@ test[RandomSample[nums, Length@nums], inputStartPos]] /@ RandomChoice[Range[4], {100000, 2}]] // N
+
+Again, it is illegible. Read it as: "We want the proportion of good results to all results, where a run counts as 'good' if both players stopped at the same card at the end. Do that for a hundred thousand different pairs of random starting points less than \\(6\\), and tally them all up. Give me a numerical answer at the end, not a fraction." This program output 0.76764 - that is, there is a better-than-three-quarters chance of "winning" in this variant, where we insist that players pick one of the first five cards to start with, and where we don't care that queens, kings, jacks and tens are all different.
+
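+The same simulation can be sketched in ordinary Python (a hypothetical re-implementation, not the original code; all names here are mine):

```python
import random

# Card values: aces count 1; tens and the twelve picture cards all count 10.
NUMS = [v for v in range(1, 11) for _ in range(4)] + [10] * 12

def final_value(deck, start):
    """Hop through the deck from 1-based position `start`, each time
    jumping forward by the value of the card landed on; return the
    last value remembered before running off the end."""
    pos, last = start, None
    while pos <= len(deck):
        last = deck[pos - 1]
        pos += last
    return last

def win_rate(trials, max_start=5, seed=0):
    """Estimate the probability that two players, starting at random
    positions in 1..max_start, end on the same card."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        deck = NUMS[:]
        rng.shuffle(deck)
        if final_value(deck, rng.randint(1, max_start)) == \
           final_value(deck, rng.randint(1, max_start)):
            wins += 1
    return wins / trials
```

+With a hundred thousand trials, `win_rate(100000)` should land near the three-quarters figure reported above.
+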
+In order to be a bit more clever, I used a simple [Bayesian update technique][4] to quantify the confidence in the answer. Performing 5000 trials and updating from a prior of "uniformly likely that the required probability is any \\(\dfrac{n}{5000}\\) for integer \\(n\\)", I got the following PDF:
+
+This has mean 0.756297 and standard deviation 0.00606961.
+
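+With a uniform prior, this update has a closed form: after \\(k\\) wins in \\(n\\) trials, the posterior over the win probability is \\(\mathrm{Beta}(k+1, n-k+1)\\). A minimal sketch (in Python; the function name is mine, and no original data is embedded):

```python
def posterior_mean_std(wins, trials):
    """Mean and standard deviation of the Beta(wins + 1, trials - wins + 1)
    posterior over a win probability, starting from a uniform prior."""
    a, b = wins + 1, trials - wins + 1
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5
```

+Plugging in \\(n = 5000\\) trials with a win count near \\(0.756 \times 5000\\) reproduces a standard deviation of about 0.006, matching the figures above.
+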
+What if we want a different range of starting values? The following table gives the mean and standard deviation of \\(p\\) for different ranges of allowed starting cards.
+
+| \\(N\\) | Mean | Standard deviation |
+| --- | --- | --- |
+| 1 | 0.999884 | 0.000191799 |
+| 2 | 0.840064 | 0.0051822 |
+| 3 | 0.805078 | 0.0056006 |
+| 5 | 0.756897 | 0.00606454 |
+| 10 | 0.69912 | 0.00648421 |
+
+(For \\(N=1\\), the true value is, of course, 1.)
+
+How about if we make 10s different from picture cards? Let's make jacks 11, queens 12 and kings 13:
+
+| \\(N\\) | Mean | Standard deviation |
+| --- | --- | --- |
+| 2 | 0.834066 | 0.00525959 |
+| 5 | 0.716913 | 0.0063691 |
+| 10 | 0.673331 | 0.0066306 |
+
+So your odds of winning are still pretty good, even if we insist that all cards are different (ignoring suit).
+
+ [1]: http://www.statslab.cam.ac.uk/~grg/teaching/markovc.html "Markov Chains course page"
+ [2]: https://en.wikipedia.org/wiki/Write-only_language "Write-only language Wikipedia page"
+ [3]: https://www.wolfram.com "Wolfram Mathematica"
+ [4]: https://web.archive.org/web/20131019220645/http://www.databozo.com/2013/09/15/Bayesian_updating_of_probability_distributions.html "Bayesian updating"
diff --git a/hugo/content/posts/2013-11-23-the-jean-paul-sartre-cookbook.md b/hugo/content/posts/2013-11-23-the-jean-paul-sartre-cookbook.md
new file mode 100644
index 0000000..049c644
--- /dev/null
+++ b/hugo/content/posts/2013-11-23-the-jean-paul-sartre-cookbook.md
@@ -0,0 +1,41 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-11-23T00:00:00Z"
+aliases:
+- uncategorized/the-jean-paul-sartre-cookbook/
+title: The Jean-Paul Sartre Cookbook
+---
+> Many thanks to the [Guru Bursill-Hall][1] for bringing this tract to my attention through his weekly History of Maths bulletins. It was originally written in 1987 by Marty Smith, according to the Internet.
+
+# The Jean-Paul Sartre Cookbook
+
+**October 3.** Spoke with Camus today about my cookbook. Though he has never actually eaten, he gave me much encouragement. I rushed home immediately to begin work. How excited I am! I have begun my formula for a Denver omelet.
+
+**October 4.** Still working on the omelet. There have been stumbling blocks. I keep creating omelets one after another, like soldiers marching into the sea, but each one seems empty, hollow, like stone. I want to create an omelet that expresses the meaninglessness of existence, and instead they taste like cheese. I look at them on the plate, but they do not look back. Tried eating them with the lights off. It did not help. Malraux suggested paprika.
+
+**October 6.** I have realized that the traditional omelet form (eggs and cheese) is bourgeois. Today I tried making one out of cigarette, some coffee, and four tiny stones. I fed it to Malraux, who puked. I am encouraged, but my journey is still long.
+
+**October 10.** I find myself trying ever more radical interpretations of traditional dishes, in an effort to somehow express the void I feel so acutely. Today I tried this recipe:
+
+> **Tuna Casserole**
+> Ingredients: 1 large casserole dish.
+>
+> Place the casserole dish in a cold oven. Place a chair facing the oven and sit in it forever. Think about how hungry you are. When night falls, do not turn on the light.
+
+While a void is expressed in this recipe, I am struck by its inapplicability to the bourgeois lifestyle. How can the eater recognize that the food denied him is a tuna casserole and not some other dish? I am becoming more and more frustrated.
+
+**October 25.** I have been forced to abandon the project of producing an entire cookbook. Rather, I now seek a single recipe which will, by itself, embody the plight of man in a world ruled by an unfeeling God, as well as providing the eater with at least one ingredient from each of the four basic food groups.
+
+To this end, I purchased six hundred pounds of foodstuffs from the corner grocery and locked myself in the kitchen, refusing to admit anyone. After several weeks of work, I produced a recipe calling for two eggs, half a cup of flour, four tons of beef, and a leek. While this is a start, I am afraid I still have much work ahead.
+
+**November 15.** Today I made a Black Forest cake out of five pounds of cherries and a live beaver, challenging the very definition of the word cake. I was very pleased. Malraux said he admired it greatly, but could not stay for dessert. Still, I feel that this may be my most profound achievement yet, and have resolved to enter it in the Betty Crocker Bake-Off.
+
+**November 30.** Today was the day of the Bake-Off. Alas, things did not go as I had hoped. During the judging, the beaver became agitated and bit Betty Crocker on the wrist. The beaver's powerful jaws are capable of felling blue spruce in less than ten minutes and proved, needless to say, more than a match for the tender limbs of America's favourite homemaker. I only got third place. Moreover, I am now the subject of a rather nasty lawsuit.
+
+**December 1.** I have been gaining twenty-five pounds a week for two months, and I am now experiencing light tides. It is stupid to be so fat. My pain and ultimate solitude are still as authentic as they were when I was thin, but seem to impress girls far less. From now on, I will live on cigarettes and black coffee.
+
+ [1]: http://web.archive.org/web/20201113203936/https://www.dpmms.cam.ac.uk/~piers/ "Guru Piers Bursill-Hall"
diff --git a/hugo/content/posts/2013-12-14-the-training-game.md b/hugo/content/posts/2013-12-14-the-training-game.md
new file mode 100644
index 0000000..0e251d8
--- /dev/null
+++ b/hugo/content/posts/2013-12-14-the-training-game.md
@@ -0,0 +1,24 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-12-14T00:00:00Z"
+aliases:
+- /uncategorized/the-training-game/
+- /the-training-game/
+title: The Training Game
+---
+The book Don't Shoot the Dog, by Karen Pryor, contains a simple exercise in demonstrating [clicker training][1]. This is a very successful technique used to produce behaviour in animals: having first associated the sound of a click with the reward of attention or food, one can then use the click as an immediate substitute for the reward (so that one can train more complicated, time-critical actions through positive reinforcement; a click is instant, but food or attention requires the trainer approaching the trainee).
+
+The demonstration exercise involves a person designated the Trainer, and a person designated the Trainee. The trainer has a goal in mind, but cannot communicate that goal to the trainee; the only interaction allowed is a click when the trainee is doing something vaguely correct. As an example, the trainee can be made to move towards a light switch by dint of a click when ey is pointing towards the switch, then a click when ey moves in that direction (ignoring any attempts to move in a different direction); the trainer then draws attention to the general area of the light by clicking whenever the trainee looks in the right direction, and then for any hand movement, then for hand movement in the direction of the light switch. This kind of incremental reinforcement can be used to achieve all sorts of interesting behaviour. (I seem to remember, from Don't Shoot the Dog, that it has been used in chickens to make them do hundred-step dances, although I may have mis-remembered that.)
+
+The exercise, then, demonstrates the power of reinforcement to produce order from chaos. With one trainer and several trainees, I would imagine that the problem becomes harder, but not insurmountably so (click when the person whose attention you need moves - it would take a while, but eventually I think I could train individual behaviour out of the group).
+
+But what about one trainee and several trainers? Imagine a scenario in which a single trainee is in a room alone, with the clicks of two trainers coming through the door in such a way that the trainee can hear only a single click. No matter which of the trainers produced it, the trainee can't tell the difference between different trainers' commands. The two trainers have competing goals (or the same goals?), and they perform the above clicker-training procedure. Would any useful behaviour result? I can imagine that an animal would get hopelessly confused by the competing goals, but a human might be able to get some kind of result. (We must assume in the contradictory case that the trainers have among their goals that "progress towards the opposing goal should be minimised"; that prevents them from teaming up to, say, perform the two goals sequentially.)
+
+Imagine that one trainer aims to make the trainee do the [Macarena][2], while the other trainer wishes the trainee to assume the [lotus position][3]. The goals are contradictory. I would imagine that the trainee would receive reinforcement towards being low down (in order to sit), as well as for standing straight and still (the starting position for the Macarena). I suspect that the trainee would infer some completely unrelated behaviour. I don't know if there's an official name for "excessively powerful inference" - [pareidolia][4] (the tendency to see faces in random settings) is a related phenomenon, and might cover this. I would be interested to know what behaviour would result from this kind of stimulus. Perhaps an experiment is in order (or, if you are also interested, do convey your results to me).
+
+ [1]: https://en.wikipedia.org/wiki/Clicker_training "Clicker training Wikipedia page"
+ [2]: https://en.wikipedia.org/wiki/Macarena_%28song%29 "Macarena Wikipedia page"
+ [3]: https://en.wikipedia.org/wiki/Lotus_position "Lotus position Wikipedia page"
+ [4]: https://en.wikipedia.org/wiki/Pareidolia "Pareidolia Wikipedia page"
diff --git a/hugo/content/posts/2013-12-22-three-explanations-of-the-monty-hall-problem.md b/hugo/content/posts/2013-12-22-three-explanations-of-the-monty-hall-problem.md
new file mode 100644
index 0000000..613bd71
--- /dev/null
+++ b/hugo/content/posts/2013-12-22-three-explanations-of-the-monty-hall-problem.md
@@ -0,0 +1,40 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2013-12-22T00:00:00Z"
+aliases:
+- /mathematical_summary/three-explanations-of-the-monty-hall-problem/
+- /three-explanations-of-the-monty-hall-problem/
+title: Three explanations of the Monty Hall Problem
+---
+Earlier today, I had a rather depressing conversation with several people, in which it was revealed to me that many people will attempt to argue against the dictates of mathematical and empirical fact in the instance of the [Monty Hall Problem][1]. I present a version of the problem which is slightly simpler than the usual statement (I have replaced goats with empty rooms).
+
+> Monty Hall is a game show presenter. He shows you three doors; behind one of the three is a car, and the other two hide empty rooms. You have a free choice: you pick one of the doors. Monty Hall then opens a door which you did not pick, which he knows is an empty-room door. Then he gives you the choice: out of the two doors remaining, you may switch your choice to the other door, or stick with the one you first picked. You will get whatever is behind the door you end up with. You want to pick the car; do you stick with your first choice, or do you switch to the other door?
+
+The solution is that you should switch. I present three explanations for why this is true, each of which makes it obvious to me in a different way. They may not help.
+
+# Different worlds
+
+Imagine three possible worlds: I pick a door, and the car is behind the first, second or third door. These choices are equally likely: the position of the car is randomly chosen by Monty Hall beforehand. Hence there are three possible worlds that I could find myself in. Let's suppose I picked door 1; it doesn't matter.
+
+* In the "I pick door 1, car in 1" world, if I switch my door, I lose; if I keep my door, I win.
+* In the "I pick door 1, car in 2" world, if I switch my door, I win; if I keep my door, I lose.
+
+The "I pick door 1, car in 3" world is identical to the previous one.
+
+That is, in two cases out of three, switching wins for me. That means switching is better than sticking: I win in two-thirds of the worlds if I switch, and I only win in a third of the worlds if I stick. (This is the brute-force approach to understanding the problem.)
+
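+The world-counting above is easy to check by simulation (a sketch, not part of the original post; the door-numbering convention is mine):

```python
import random

def play(switch, rng):
    """Play one round of the game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens an empty-room door that the player did not pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(play(switch, rng) for _ in range(trials)) / trials
```

+`win_rate(True)` comes out near 2/3 and `win_rate(False)` near 1/3, matching the count of worlds.
+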
+# Extra information
+
+Let's suppose I pick a door, and then Monty Hall reveals a false door. Of course, when I picked my door, I had a 1/3 chance of having picked the car, and that probability is unchanged when Monty reveals the false door. However, only if I switch am I given a chance to use the information that Monty has provided: the fact that just two doors remain, one of them hiding nothing and one of them hiding the car. That makes my chance of winning a car 1/2 if I switch (actually, it's 2/3 if we condition correctly, but that's not instantly obvious and this is an informal explanation), but only 1/3 if I stick. This means it's better for me to switch. Essentially, switching restarts the game: nothing was special about my original choice, so I can discard it without changing anything and re-pick from the improved game with one fewer door. (This is the information-theoretic approach of incorporating new information.)
+
+The same idea can be seen if we think of the question in a slightly different way. Once you've picked a door, and before Monty Hall opens any door, Monty asks you, "Would you like to look behind the door you picked, or behind the two doors you didn't pick?" If you reply "my door, please", that's the same as sticking with your original choice: Monty opens an empty-room door (changing nothing; after all, you know Monty will do this before you even start the game) and then your original door. If you reply "the other two, please", Monty opens an empty-room door and then the other door. (That's the same as switching choices.) Essentially, Monty is giving you a choice of two doors in the second case, and only one door in the first. The reason that his opening an empty-room door changes something in this case, is because we might as well consider it as "Monty opens the other two doors simultaneously": you get a 2-in-3 chance this time, since Monty's opening two of the three doors.
+
+# Extreme problem
+
+Consider the phrasing of the problem as "You pick a door. If you picked the car, Monty Hall opens every door except the one you picked, and one random empty-room door. If you picked an empty room, Monty Hall opens every door except the one you picked, and the car." Now, this exactly reflects the original problem, but is amenable to extension in the following way. Instead of having one car and two empty rooms, have one car and a hundred empty rooms. Now, when you pick, Monty Hall opens every door except for the one you picked, and one other. You started with a one-in-101 chance of having picked the car. At the end, Monty Hall has left only two doors. The probability that you originally picked the car is very low (1 in 101). But if we switch, we suddenly see that Monty Hall has removed almost all of the chaff that caused us to have only a 1 in 101 chance originally. Now it's clear that re-picking from the game in its new state gives us at least a 1 in 2 chance of the car (conditioning properly, switching actually wins 100 times in 101), which dwarfs the 1 in 101 chance we keep by sticking. The only way we have of re-picking is to switch doors.
+
+ [1]: https://en.wikipedia.org/wiki/Monty_hall_problem "Monty Hall problem Wikipedia page"
diff --git a/hugo/content/posts/2013-12-30-smartphone-charter.md b/hugo/content/posts/2013-12-30-smartphone-charter.md
new file mode 100644
index 0000000..1837855
--- /dev/null
+++ b/hugo/content/posts/2013-12-30-smartphone-charter.md
@@ -0,0 +1,30 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2013-12-30T00:00:00Z"
+aliases:
+- /uncategorized/smartphone-charter/
+- /smartphone-charter/
+title: Smartphone Charter
+---
+I am shortly to receive a new [Nexus 5][1]. I am determined not to become a smartphone zombie, and so I hereby commit to the following Charter.
+
+* I will keep my phone free of social networking apps, and I will ensure that I do not know the passwords to access their web interfaces. While they can be really quite handy, they are usually simply a distraction. People are used to the fact that I am present on the Internet only when I have my computer with me; there's no need for that to change.
+* I will only look at text messages when I'm not talking to someone already.
+* I will never look at [reddit][2] or [Hacker News][3] or suchlike on my phone, unless there is no-one else around. Similarly, I will not access my news feeds from my phone. It's far too easy to waste time and attention on them, when such attention is expected from the people I'm with.
+* If I am doing something on my phone, and someone asks me to stop, I will do one of the following (with number 1 being heavily preferred, and number 3 only in emergency):
+ 1. I will stop using my phone within ten seconds
+ 2. I will explain what I am doing, and ask permission to continue
+ 3. I will explain what I am doing (or say that an explanation will be forthcoming as soon as possible), and continue.
+* I will keep my phone out of reach of my bed when I go to sleep. It's easy to become lost in the Internet, especially when you're tired and not really concentrating.
+* I will be able to access emails on my phone, but I will set it up so that it only checks manually.
+* I will not install games on my phone. It's not there as "something to keep me entertained when I'm bored" but as "something to be useful when needed", and in my experience, games seem to intrude.
+
+If I break any of these, you're allowed to get annoyed with me. (The converse is false in general.)
+
+ [1]: https://en.wikipedia.org/wiki/Nexus_5 "Nexus 5 Wikipedia page"
+ [2]: http://www.reddit.com/ "reddit"
+ [3]: https://news.ycombinator.com "Hacker News"
diff --git a/hugo/content/posts/2014-01-02-the-creation.md b/hugo/content/posts/2014-01-02-the-creation.md
new file mode 100644
index 0000000..4cd2ff3
--- /dev/null
+++ b/hugo/content/posts/2014-01-02-the-creation.md
@@ -0,0 +1,33 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+comments: true
+date: "2014-01-02T00:00:00Z"
+aliases:
+- /creative/the-creation/
+- /the-creation/
+title: The Creation
+---
+Once upon a time, before this bountiful age of Matter and Light, there was only the Fell. A single being, surrounded by Chaos, content to remain alone forever (for it did not know what a "friend" was). It had not the power to shape the Chaos; neither had it the inclination, for it needed nothing and had no desires. For seething unchanging aeons, it persisted.
+
+Then Chaos bore new fruit. A single electron, a point source of *charge*. The *electric field* thereby induced resonated throughout all of Chaos, propagating yet further, every second by the same amount; and so the Fell recognised *distance*. The Fell experienced *curiosity* then: for an electromagnetic field was entirely a novel sensation to it. The place it inhabited was changed, from isotropic to merely *spherically symmetric*: now the Fell identified *direction*. It began to *move towards* the point charge, first *slowly*, and then *faster*, until its *velocity* approached that of the electric field itself. All this was for to discover the nature of the descendant of Chaos.
+
+As the Fell approached the electron, its existence became *threatened*: as a simple pattern in Chaos, it could exist indefinitely, but approaching a source of electric charge was a new disturbance, one which the pattern had not been purposelessly selected to overcome. And it recoiled from the intrusion with great force, the influence of the electron growing with the inverse square of the Fell's distance from it, much faster than was comfortable.
+
+But the pattern that was the Fell was changed by the charge, and the charge was changed by the pattern. The same perturbations that had caused the first electron were still latent in the Chaos, and the Fell's scramble to escape the charge was enough to revive them. A second electron emerged, accompanied by a single *photon*.
+
+Now there was unbound *energy* in Chaos. Before the Fell could even begin to *react*, Chaos began to resonate, shuffling, its patterns collapsing into such regularity that a great explosion of *matter* emerged. At the speed of light, *things* emerged, a great array of *muons*, *quarks* and their ilk. The Fell could but race away from the catastrophe; most of it was shorn away in that first burst of creation, before it could flee. And so it continued to exist.
+
+*Gradually*, the flurry of *order* was calmed. Chaos is infinite, unquenchable, and the energy which the Fell unwittingly brought into existence was but finite. At the boundary of the sphere of roiling matter did the Fell rest, recovering itself, painstakingly forging its old patterns anew from the Chaos. It felt the unconstrained resonance of the matter, and so could it know what was *happening* in this new world.
+
+And indeed it came to pass that the *Universe* settled down, protected from Chaos by its sheer radius. *Gravity*, not present in the isotropic Chaos, was very much a factor in the Universe, and things came together to form new patterns. With nothing better to *do*, the Fell learnt to peer into the Universe, *polling* it with the gentlest bursts of electromagnetism to discern what new *wonders* occurred. (The Fell grew larger and larger, forcing its pattern onto Chaos, to keep and examine this new information.) It learnt to send information into the Universe by gently affecting the boundary, and eventually it occurred to the Fell to *create* something. It planned and tweaked, and when it was satisfied, it chose a star and a newly-made planet, and altered it subtly.
+
+It came to pass that self-replicating structures emerged on that planet. With startling speed, they became better-adapted to their environment. The Fell's usual languid pace of existence was not enough to keep up with the rapidity of the changes, so it began to poll for information much more frequently. It felt *tenderness* for what it had wrought, and it tried to keep that planet from harm.
+
+And the changes accelerated, faster and faster: an *exponential* with no apparent end. The Fell struggled to keep up, polling yet faster; its error rate was low, but with so many polls occurring, every so often it misjudged and sent a beam of energy that was so powerful that it affected the planet's star itself, causing plasma to gout out of it.
+
+Reptiles had emerged before the Fell realised how quickly the changes were now happening. It stretched itself to its limit, polling more and more frequently until it could go no faster, desperate to document everything. It had no energy spare to protect the planet, and it came to pass that a very large chunk of rock hit, causing the destruction of the incumbent life; and so mammals emerged, followed in short order by primates and then humans.
+
+Therefore, send not to know for whom the Fell polls - it polls for thee.
diff --git a/hugo/content/posts/2014-01-12-denouement-of-myst-iii-exile.md b/hugo/content/posts/2014-01-12-denouement-of-myst-iii-exile.md
new file mode 100644
index 0000000..4be6996
--- /dev/null
+++ b/hugo/content/posts/2014-01-12-denouement-of-myst-iii-exile.md
@@ -0,0 +1,53 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-01-12T00:00:00Z"
+aliases:
+- /uncategorized/denouement-of-myst-iii-exile/
+- /denouement-of-myst-iii-exile/
+title: 'Denouement of Myst III: Exile'
+---
+A long time ago, in a galaxy far far away, I completed [Myst III: Exile][1]. It's a stupendously good puzzle game. For some reason, it popped into my mind again a couple of days ago. This post contains very hefty spoilers for that game (it will completely ruin the ending - I will be discussing information-exchange protocols which are key to completing it), so if you're ever going to play it, don't read this post yet. It's a brilliant game - I highly recommend it.
+
+The spoilers start here. Weak spoilers are first, so that you have time to stop reading if your eyes are accidentally moving downwards. After those weak spoilers will come a discussion of the final puzzle of the game. If you're familiar with the general Myst universe, skip the next two paragraphs; start at the sentence "Proper spoilers start here".
+
+The series of Myst games revolves around the concept of a *Linking Book*, a means of moving from world to world (these worlds are called "Ages") by touching the front page of certain books. Each book is a link to its ultimate *Descriptive Book*, which was at some point written by a Writer, and which describes an Age in such detail that the Age actually comes to exist. The act of touching the front page of a Linking Book or the Descriptive Book causes you to be transported into the Age described by the corresponding Descriptive Book. (By the way, you can't bring the Linking Book with you into the Age it links to.) The destruction of the Descriptive Book of an Age causes the Age to be destroyed.
+
+Atrus is a master Writer. After the destruction of his civilisation (the D'ni), he writes a new Age, called Releeshan, in which the remnants of the D'ni can start afresh. Myst III: Exile starts with you ("the Stranger", since you are never named or depicted in any way, in any of the Myst series) being invited to explore Releeshan yourself with Atrus for the first time; he shows you the Descriptive Book, which you will use to enter Releeshan. However, just as you're about to enter, a person, Saveedro, appears, starts a fire, and grabs the Descriptive Book, before linking back out. The book he used falls to the floor, and you rashly follow him. So the events of the game begin. (It turns out that the fire burns the Linking Book, so you're conveniently on your own.)
+
+Proper spoilers start here; the next couple of paragraphs describe the set-up of the final puzzle. At the end of the game, you've tracked down Saveedro. It turns out that for plotty reasons, he hates Atrus's sons with a fiery passion (this is elaborated on in games 1 and 4), and this was his way of getting back at Atrus. (He intended Atrus to follow him, not you, the Stranger.) He also wanted to show Atrus the consequences of his sons' actions. To that end, he has caused you to end up in the Age of Narayan, Saveedro's home Age, which used to be vibrantly natural but was ruined by Atrus's sons. Saveedro's home, which is where you have ended up, is shielded off from the rest of Narayan, and Saveedro desperately wants to get out into Narayan proper.
+
+Saveedro's home is divided into two chunks, which we will consider to be concentric circles, with an impenetrable shield between them and an impenetrable shield surrounding the whole set-up. (That's why Saveedro can't escape: he is stuck behind two shields, unable to get through even one.) Linking to Narayan takes you to the inner circle. From there, you can turn on the power to a device which can inhibit one of the shields, but not both at once. Being a friend of Atrus, you (naturally) have access to his journal, which gives you the key insights necessary to activate this device. The device can switch between inhibiting either of the two shields, and the mechanism controlling which shield is inhibited is inside the inner circle. (That is, while the inner shield is raised, you can't reach the mechanism from the outer circle.)
+
+One can leave the house only from the outer circle, and only while the outer shield is inhibited. However, one person alone can't manage this: after activating the inhibitor, one can inhibit the inner shield (leaving the outer raised) and thereby get into the outer circle, or one can inhibit the outer shield (leaving the inner raised); but having crossed into the outer circle, one is cut off from the controls, which are in the inner circle. There is a single small passage between the inner and outer circles, but it's not big enough for a person to get through.
+
+Saveedro still holds Releeshan, which it is your objective to recover. In his home, you have found a Linking Book that will return you to Atrus's home, Tomahna. You want to obtain Releeshan and return it to Atrus; Saveedro wants to escape his house. The scenario starts with you in the inner circle next to the inhibitor's controls, and Saveedro in the outer circle next to the outer shield; the inner shield is currently inhibited, and the outer shield is raised.
+
+The official solutions are as follows (least-optimal to optimal in order, so you can think about the puzzle if you want to):
+
+1. Leave immediately, using the Tomahna Linking Book. (Then Saveedro follows you through the book you leave behind, and kills you.)
+2. Release Saveedro immediately. (Then he destroys Releeshan, because he is still angry with Atrus. Many years of Atrus's life's work are now gone.)
+3. Cut the power to the inhibitor, thereby raising both shields. Saveedro collapses, having had freedom waved in front of him and snatched away (this time, he is trapped away from all his belongings, which are in the inner circle). He hands you Releeshan through the small passage, and pleads with you to let him out. You have several choices: you can link back to Atrus (ending the game remorseful that Saveedro is alone); you can turn power back on to the inhibitor (then Saveedro runs back and kills you); or you can switch the inhibitor's controls to the outer shield and then power it back up (then Saveedro gets out of his house, and salutes you as he leaves towards the houses in the distance) before linking back to Atrus. This last ending is optimal.
+
+# The actual point of this post
+
+I was wondering whether there is a better solution, in the sense that "we don't have to cause Saveedro unnecessary anguish, and/or we can complete the scenario without requiring anyone to trust anyone else". (Recall that the solutions officially require Saveedro to be provoked into trusting you.) What we need is some sort of mechanism or box to hold Releeshan, which is open if and only if the outer shield is inhibited. Then Saveedro can go into the inner circle with you (inner shield inhibited), switch the inhibitor (opening the box), and place the book inside it (after verifying that it does indeed open under and only under those conditions). Then he switches the inhibitor (closing the box, and allowing him through to the outer circle), goes through to the outer circle, you switch the inhibitor (opening the box and allowing him to escape), take Releeshan, and link away. He cannot return to kill you (because the inner shield is up).
+
+Possible failure modes: you could shut off power to the inhibitor, thereby causing Saveedro to be trapped and you to have the unopenable box. This is obviously non-optimal, but it is precluded in-game by the fact that the controller of power to the inhibitor is located quite far from the inhibitor itself, so Saveedro has ample time to get back into the inner circle and kill you. You could link straight out, which leads to the exact scenario portrayed in-game. Saveedro could simply destroy Releeshan before putting it in the box (but then you will never release him).
+
+I can see parallels between this kind of scenario and the creation of currencies like Bitcoin. There are some pretty impressive protocols to allow parties to spend money without being able to spend the same money twice, and so forth. What I really want is an information-based solution to this Myst problem.
+
+I have heard of information-swapping protocols, to ensure that two parties swapping information will not do so asymmetrically, even if one party is evil with respect to the other. That sounds perfect for this.
+
+New plan: create a box with a long combination lock, and modify the inhibitor so that it requires the same long combination to operate it. Place Releeshan in the box. You each pick digits for the combination in turn, without the other knowing your digits. Lock the box using that combination, and lock the inhibitor in the "inhibiting the inner shield" position. Then Saveedro goes into the outer circle, taking the box; you go with him, ensure that it is anchored firmly in place, and go back. Saveedro reads out his first digit: you punch it into the inhibitor, and he punches it into the box. Then you read out your first digit, and so on, alternating. If at any point you stop entering digits into the inhibitor, or do so incorrectly, Saveedro simply stops reading out his digits, and you can't get to Releeshan. (This is why Saveedro needs to contribute some digits at all.) Similarly, Saveedro can't stop punching the digits into the box, or else you will not release him.
+
+Now, as you get nearer to the end, you might be able to stop entering digits and just brute-force the box open; then Saveedro can come and kill you. Alternatively, Saveedro can stop reading out his digits, but that serves him no purpose: you know the code up to the point where he stopped, and he's back where he started. (We will assume that there is a way to ensure that the same combination has been set for the box and the inhibitor.) Suppose, then, that Saveedro has read out all his digits, and there are (say) four of yours left to enter. You don't read them out, but punch them into the inhibitor and release Saveedro. Then you re-set the inhibitor and collect Releeshan, since you have the complete code to open the box.
+
+If you turn off power to the inhibitor, Saveedro will simply never give you the code to access Releeshan, so that option is ruled out on the grounds that an approach exists which doesn't require you to trust each other but still lets you both get what you want.
+
+Can anyone see any failure modes I've missed, or any simplifications? It probably works fine with just five numbers (one picked by Saveedro, and four by you), but I wanted to include a way to exchange arbitrary information.
+
+ [1]: https://en.wikipedia.org/wiki/Myst_III:_Exile "Myst III: Exile Wikipedia page"
diff --git a/hugo/content/posts/2014-01-24-introduction-to-functional-programming-syntax-of-mathematica.md b/hugo/content/posts/2014-01-24-introduction-to-functional-programming-syntax-of-mathematica.md
new file mode 100644
index 0000000..28ebe0c
--- /dev/null
+++ b/hugo/content/posts/2014-01-24-introduction-to-functional-programming-syntax-of-mathematica.md
@@ -0,0 +1,115 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- programming
+comments: true
+date: "2014-01-24T00:00:00Z"
+math: true
+aliases:
+- /uncategorized/introduction-to-functional-programming-syntax-of-mathematica/
+- /introduction-to-functional-programming-syntax-of-mathematica/
+title: Introduction to functional programming syntax of Mathematica
+---
+Recently, I was browsing the [Wolfram Community][1] forum, and I came across the following question:
+
+> What are the symbols @, #, / in Mathematica?
+
+I remember that grasping the basics of functional programming took me quite a lot of mental effort (well worth it, I think!) so here is my attempt at a guide to the process.
+
+In Mathematica, there are only two things you can work with: the Symbol and the Atom. There is only one way to combine these things: you can provide them as arguments to each other. We denote "\\(x\\) with arguments \\(y\\) and \\(z\\)" by "`x[y,z]`".
+
+What is an Atom? As the name suggests, it is something indivisible, like the number 2 or the string "Hello!". So that the language isn't too complicated to implement, by "indivisible" we mean "indivisible without any further work" - so the number 15 is "divisible" (in the sense that it's 3×5), but not in our sense, because it takes work to find the divisors of a number. Similarly, the string "Hello!" is "divisible" into characters, but that again takes work.
+
+A Symbol is something which we, as programmers, tell Mathematica to give meaning to. We also tell it under what circumstances that Symbol has meaning. For instance, I might say to Mathematica, "In future, when I ask you for the Symbol $MachinePrecision, you will pretend I said instead the Atom 15.9546." Something else I might say to Mathematica is, "In future, when I ask you for the Symbol Plus, combined with the arguments 1 and 2, you will pretend I said instead the Atom 3."
+
+In Mathematica's syntax, we write the above as:
+
+ $MachinePrecision = 15.9546;
+ Plus[1, 2] = 3;
+
+(The semicolons prevent Mathematica from printing the value we gave. Without the semicolons, it would print out 15.9546 and 3. In fact, the semicolons are a shorthand for the Symbol CompoundExpression, but that's not important.)
+
+Furthermore, we can ask Mathematica, "In future, when I ask you for Plus combined with zero and any other argument x, return that argument x". In Mathematica's syntax, that is:
+
+`Plus[0, Pattern[x, Blank[]] ] := x`
+
+More compactly:
+
+`Plus[0, x_] := x`
+
+Now, we have had to be careful. Mathematica needs a way of distinguishing the Symbol `x` from the "free argument" `x`. We want the "free argument" - that is, we want to be able to supply any argument we like, and just temporarily call it x. We do that using the Pattern symbol, better recognised as `x_` . I won't go into how Pattern works in terms of the Symbol/Atom idea, but just recognise that `x_` *matches* things, rather than *being* a thing.
+
+Now, we'll assume that there is already a "plus one" method - that Mathematica already knows how to do `Plus[1, x_]`. Let's also assume that it knows what `Plus[-1, x_]` is (not hard to do, in principle, once we know `Plus[1, x_]`). Then we can define Plus over the positive integers:
+
+`Plus[x_, y_] := Plus[Plus[-1, x], Plus[1, y]]`
+
+And so forth. This is how we build up functions out of Symbols and Atoms.
+
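+To see these rules in action, here is how `Plus[2, 3]` might hypothetically rewrite, step by step. (The real built-in `Plus` doesn't work this way, of course; this just follows the rules we defined above, assuming the "plus one"/"minus one" rules fire before the general one.)
+
+    Plus[2, 3]
+    Plus[Plus[-1, 2], Plus[1, 3]]    (* by our rule for Plus[x_, y_] *)
+    Plus[1, 4]                       (* by the assumed "plus one"/"minus one" rules *)
+    Plus[Plus[-1, 1], Plus[1, 4]]
+    Plus[0, 5]
+    5                                (* by our rule Plus[0, x_] := x *)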
+Now, there is a shorthand for `f[x]`. We can instead write `f@x`. This means exactly the same thing.
+
+A really important Symbol is `List`. `List[x, y, z]` (or, in shorthand, `{x, y, z}`) represents a collection of things. There's nothing "special" about `List` - it's interpreted in exactly the same way as everything else - but it's a convenient, conventional way to lump several things together. (It would all have worked in exactly the same way if the creators of the language had decided that Yerpik would be the symbol that represented a generic collection; even `Plus` could be used this way, if we made sure to tell Mathematica that "Plus" should not be evaluated in the usual way. You could even use the number 2 as the list-indicating symbol, or even use it as `Plus` usually is used, leading to expressions like `2[5,6] == 11`.) We can define functions like `Length[list_]`, so `Length[{1, 2, 3}]` is just 3.
+
+Since everything is essentially function application ("apply a symbol to an expression"), we might explore ways to apply several functions at once, or to apply a function to several different parts of an expression. It turns out that a really useful thing to do is to be able to apply a function to all the inside bits of a List. We call this "mapping":
+
+`Map[f, {a, b, c}] == {f[a], f[b], f[c]}`
+
+More generally, `Map[f, s[a1, a2, … ]] == s[f[a1], f[a2], …]`, but we use `List` instead of `s` for convenience. There is a shorthand, reminiscent of the `f@x` notation: we use `f /@ {a, b, c}` to denote "mapping".
+
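+To make `Map` concrete, here is a small sketch (with a made-up function `square`, assuming a fresh session):
+
+    square[x_] := x^2
+    Map[square, {1, 2, 3}]    (* gives {1, 4, 9} *)
+    square /@ {1, 2, 3}       (* exactly the same thing *)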
+It's all very well to want to map a function across the arguments to a symbol (let's call that symbol, which has those arguments, the Head of an expression, so `Head[f[x,y]]` is just `f`), but what about if we want to apply the function *to the Head symbol*? Actually, this turns out to be quite rare (the function is `Operate[p, f[x,y]]` to give `(p[f])[x,y]` ), but it's much more common to want to replace the Head completely. For instance, we might want to supply a List as arguments to a function, as follows:
+
+`f[x_, y_] := x + y^2`
+
+How would we get `f` to act on the List `{5, 6}`? We can't just say `f[{5, 6}]` because f requires two inputs, not the one that is `List[5, 6]`. Mathematica's syntax is that instead of `f@{5,6}`, we use `f@@{5, 6}`. This is shorthand for `Apply[f, {5,6}]`, and it returns `f[5, 6]`, which is 41.
+
+More generally, `f@@g[x, y] == f[x, y]`. (Note, however, that Mathematica evaluates things as much as possible before doing these transformations, so `f@@Plus[5,6]` doesn't give you `f[5,6]` but `f@@11`, an expression which makes no sense. Mathematica's convention is that Atoms don't really have a Head, so replacing the Head with `f` does nothing; hence `f@@11` will return 11.)
+
+Particularly in conjunction with `Map`, it can be useful to Apply a function not to an expression, but to each of the arguments of the expression. That is, given a List `{{1, 2}, {3, 4}}`, which is `{List[1, 2], List[3, 4]}`, we might want to output `{f[1, 2], f[3, 4]}`. We do this with the shorthand `f@@@{{1, 2}, {3, 4}}`, which is really `Apply[f, {{1, 2}, {3, 4}}, {1}]`.
+
+This situation might arise if we wanted to "transpose" two strings "ab" and "cd" to return "ac" and "bd" (imagine writing the strings out in a table, and reading the answer down the columns instead of across the rows). We could use `StringJoin@@@Transpose@Map[Characters, {"ab", "cd"}]`. What does this expression do? The first thing that actually changes when it is evaluated is `Map[Characters, {"ab", "cd"}]`, which returns `{{"a", "b"}, {"c", "d"}}`. Then `Transpose` sees that new list, and flips things round to `{{"a", "c"}, {"b", "d"}}`, which is `{List["a", "c"], List["b", "d"]}`. Then `StringJoin` is asked not to hit the outer `List`, nor to hit the inner Lists, but to *replace* the List head on the inner Lists: the expression becomes `{StringJoin["a", "c"], StringJoin["b", "d"]}`, or `{"ac", "bd"}`.
+
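+If `f` is left undefined, Mathematica keeps these expressions symbolic, which makes the shorthands introduced so far easy to compare side by side:
+
+    f @ {1, 2}                (* f[{1, 2}] *)
+    f /@ {1, 2}               (* {f[1], f[2]} *)
+    f @@ {1, 2}               (* f[1, 2] *)
+    f @@@ {{1, 2}, {3, 4}}    (* {f[1, 2], f[3, 4]} *)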
+Now, it's all very well to have functions that work like this. But what if we wanted to take the second character of a string? There's a function for that - `StringTake` - but it needs arguments. We could define a new function `takeSecondChars[str_] := StringTake[str, {2}]`, but that's unwieldy if we only want this function once - and what if we wanted the third character instead, next time?
+
+There is a really useful way to define functions without names. Unsurprisingly, they look like:
+
+`Function[{x, y, …}, …]`
+
+So in the above example, we'd have `Function[{str}, StringTake[str, {2}]]`. And then to map it across a list would look like:
+
+`Function[{str}, StringTake[str, {2}]] /@ {"str1", "str2", "str3"}`
+
+We can also apply it to a string: `Function[{str}, StringTake[str, {2}]]["string"]`, or `Function[{str}, StringTake[str, {2}]]@"string"`.
+
+There's a really compact shorthand. Instead of `Function[{args}, body]` we use `(body)&`. We don't even bother naming the arguments; we use the `Slot[i]` function to get the `i`th argument. `Slot[i]` is more neatly written as `#i`, while just the `#` symbol is interpreted as `#1`.
+
+Hence our function becomes `StringTake[#, {2}]&`, and its mapping looks like:
+
+`StringTake[#, {2}]& /@ {"str1", "str2", "str3"}`
+
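+A few more pure-function examples in the same vein (outputs shown in comments, assuming a fresh session):
+
+    (#1 + #2^2) &[5, 6]                        (* 41, like f[5, 6] earlier *)
+    (10 #) & /@ {1, 2, 3}                      (* {10, 20, 30} *)
+    StringTake[#, {3}] & /@ {"str1", "str2"}   (* {"r", "r"} *)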
+It takes some getting used to, but after a while it becomes extremely natural. In my most recent coursework project, there are almost no programs I wrote which don't use this syntax, even though the coursework is aimed at the language Matlab which is almost the antithesis of this idea of "symbols with arguments". Once you become able to see problems in this way - mapping small functions over expressions, and so forth - you start seeing it everywhere. The idea is about sixty years old - it's the principle of Lisp - and it's ridiculously powerful. Since functions are just expressions, you can use them to alter themselves. For instance, memoisation is trivial:
+
+    fibonacci[n_] := (fibonacci[n] = fibonacci[n-1] + fibonacci[n-2])
+    fibonacci[1] = 1;
+    fibonacci[2] = 1;
+
+That is, "Whenever I ask you for fibonacci[n], you will set the value of fibonacci[n] to be the sum of the two previous values." (We need both base cases, or the recursion would never bottom out.) Note that this is "set the value of fibonacci[n] to be", not "return" - this is a permanent change (well, as permanent as the Mathematica session), and it means that the value of fibonacci[36] is instantly available forever after once you've calculated it once.
+
+You can also get some crazy things with Slot notation, because `#0` (which is `Slot[0]`) represents *the function itself*. Off the top of my head, an example is:
+
+    If[# < 10, #0[# + 1] + #, #] &[1]
+
+This generates the tenth triangle number, 55. The function evaluates to exactly its input unless that input is less than 10; in that case, the function evaluates to (its input, plus "this function evaluated at input+1"). Written as a named function, it is `f[x_] := If[x < 10, f[x+1]+x, x]`, evaluated at the input 1. (You might be tempted to write `Boole[# < 10] #0[# + 1] + #` instead, where `Boole[arg]` returns 1 if arg is `True` and 0 otherwise, but that version never terminates: multiplication doesn't short-circuit, so `#0[# + 1]` is evaluated even when the `Boole` factor is 0.) It gets quite mind-bending quite quickly, and I don't think I have ever used `#0` in earnest. Another example I came up with quickly was:
+
+ If[Cos[#] == #, #, #0[Cos[#]]] &[1.]
+
+This finds a fixed point of the function Cos, starting at the initial input 1. (It has to be a numerical input, otherwise Mathematica will just keep going forever with better and better symbolic expressions for this fixed point, like `Cos[Cos[Cos[1]]]`. It rightly recognises that, for instance, `Cos[Cos[Cos[1]]]` is not equal to `Cos[Cos[Cos[Cos[1]]]]`, so it never stops.)
+
+The last really useful piece of shorthand I can think of at the moment is `//`, which is another way to apply functions. Instead of `f@x`, we can use `x//f`. This has the benefit of making it a bit clearer what is contentful and what is mere afterthought, because the functions which are evaluated last actually appear at the end:
+
+`CharacterRange["a","z"] // StringJoin`
+
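+Chained `//` applications then read left-to-right, like a pipeline; a minimal hypothetical example:
+
+    Range[5] // Reverse              (* {5, 4, 3, 2, 1} *)
+    Range[5] // Reverse // Total     (* 15 *)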
+Of course, the usual function notation can be used:
+
+`1 // (If[# < 10, #0[#+1] + #, #] &)`
+
+Phew, that was a whistlestop tour in rather more words than I had hoped - turns out there are far more Mathematica concepts that I've internalised than I had thought, all of which are really quite fundamental and indispensable. I understand much better why people say Mathematica has a steep learning curve, and why it is derided as a "write-only language" - that final example is ridiculous!
+
+ [1]: http://community.wolfram.com
diff --git a/hugo/content/posts/2014-01-28-writing-essays.md b/hugo/content/posts/2014-01-28-writing-essays.md
new file mode 100644
index 0000000..c79b16b
--- /dev/null
+++ b/hugo/content/posts/2014-01-28-writing-essays.md
@@ -0,0 +1,60 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-01-28T00:00:00Z"
+aliases:
+- /uncategorized/writing-essays/
+- /writing-essays/
+title: Writing essays
+---
+The aim of this post is twofold: to find out whether a certain mental habit of mine is common, and to draw parallels between that habit and the writing of essays.
+
+I don't know whether this is common or not, but when I'm feeling particularly not-alert (for instance, when I'm nearly asleep, or while I'm doing routine tasks like cooking), I sometimes accidentally latch onto a topic and mentally explain it to myself, as if I were teaching it to the Ancient Greeks (who, naturally, speak English). As an example, last night's topic of discourse was "the composition of soil", in which I "talked" about soil, roughly according to the following outline. It is laid out to display what occurred to me, and the order in which it occurred to me to "say" it.
+
+The contents of soil
+
+* Soil contains fungi
+
+    * what is a fungus?
+
+* Soil contains fungi, lots and lots, which contributes to
+
+* we eat fungi
+
+    * we don't just eat mushrooms, we also are starting to eat Quorn etc
+
+* we eat fungi - more specifically, the reproductive organs of the mycelium
+
+    * what is a mycelium? it's a web that can span large areas
+
+        * "fairy circles" - mycelium is why mushrooms often appear in arcs, because the mushrooms - the reproductive organs - appear at the periphery of the web
+
+* Soil contains fungi, lots and lots, which contributes to its taste
+
+    * I once accidentally ate some of a mouldy slice of bread, and it tasted just like soil
+
+        * the mould looks the same as the mould which you get in damp areas of a house
+
+    * you can actually see something which is closely related to a mycelium on mouldy bread - webs of fungus
+
+* we eat fungi - Quorn, for instance, something similar to which was eaten during the first world war in Germany because of famine
+
+    * Quorn can be made in huge vats, 250 kg can be made using the same resources as would make a kilogram of chicken
+
+        * chicken is the most-eaten meat in the world, and our treatment of them can be horrible
+
+        * there are environmental problems associated with using the resources that could produce a quarter of a ton of food to instead produce a kilogram of chicken
+
+You get the idea. I'm essentially doing a depth-first search of my internal knowledge-base, starting from a particular place. When I feel that a topic is getting too big to include (for instance, I stop after "environmental problems" because that leads to a very large nexus of topics in my knowledge-base), I stop the search and backtrack. When I feel that a fact is particularly interesting but doesn't have too much relevant content after it, I stop (for instance, the "fairy circles" fact, which could lead to a digression on myths and legends, but I deem that too big a logical leap).
+
+This rings a bell with [an essay Paul Graham wrote][1] about essays, and more strongly with an anecdote, by a teacher of English, whose source (infuriatingly) I can't find or recall. He (I think it was "he") set a student the exercise "write an essay about your home town". She looked at him blankly, so he refined it to "write an essay about the High Street of your home town". The process continued until it got to "write 500 words about the top-left brick in the front of the bank on the High Street of your home town". The student left, almost in tears. The next day, she returned with five thousand words of essay, and said that "once I got started, I just didn't stop".
+
+What I am doing is very close to this view of writing an essay. If I kept going long enough (in an awake state), I would presumably hit areas I don't know much about (for instance, how is it that there is a kind of mushroom that can punch through tarmac? Hydraulics, I know, but that's a [stop sign][2].) That's where the research would start, and where I would start discovering new things - and that's where Paul Graham's view of writing an essay would happen.
+
+This very post was written somewhat in this manner, but to save space and time, I made the knowledge-tree much smaller. (Alternative ending to that sentence: "I destroyed my time machine and burned all my papers".) Naturally, when actually formulating an essay from such a tree, it is important only to keep that which is interesting and/or useful, and it is necessary to restrict the output to a reasonable length. A blog format, in particular, prefers shorter pieces, so maybe I should**—**
+
+ [1]: http://www.paulgraham.com/essay.html "Paul Graham on essays"
+ [2]: http://lesswrong.com/lw/it/semantic_stopsigns/ "Semantic Stop-signs LessWrong page"
diff --git a/hugo/content/posts/2014-02-16-rage-rage-against-the-poets-hardest-sell.md b/hugo/content/posts/2014-02-16-rage-rage-against-the-poets-hardest-sell.md
new file mode 100644
index 0000000..51903f6
--- /dev/null
+++ b/hugo/content/posts/2014-02-16-rage-rage-against-the-poets-hardest-sell.md
@@ -0,0 +1,38 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+comments: true
+date: "2014-02-16T00:00:00Z"
+aliases:
+- /creative/rage-rage-against-the-poets-hardest-sell/
+- /rage-rage-against-the-poets-hardest-sell/
+title: Rage, rage against the poet’s hardest sell
+---
+I feel that I can write a sonnet well.
+While sonnets are an easy thing to spout,
+It’s really hard to write a villanelle.
+
+By rhyming, any story I can tell:
+in couplets, rhyme and rhythm evens out.
+I feel that I can write a sonnet well.
+
+But alternately-structured verse is hell.
+The poet struggles, juggles words about:
+It’s really hard to write a villanelle.
+
+Enthusiasm’s difficult to quell.
+An acolyte of Shakespeare, I’m devout:
+I feel that I can write a sonnet well.
+
+But triplets are a task on which I dwell,
+I’m running out of rhymes, without a doubt.
+It’s really hard to write a villanelle.
+
+For sonnets, you don’t have to be [Kal-El][1]
+to make a super stanza just work out.
+I feel that I can write a sonnet well;
+It’s really hard to write a villanelle.
+
+ [1]: https://en.wikipedia.org/wiki/Superman "Superman Wikipedia page"
diff --git a/hugo/content/posts/2014-03-20-a-roundup-of-some-board-games.md b/hugo/content/posts/2014-03-20-a-roundup-of-some-board-games.md
new file mode 100644
index 0000000..a566d17
--- /dev/null
+++ b/hugo/content/posts/2014-03-20-a-roundup-of-some-board-games.md
@@ -0,0 +1,62 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-03-20T00:00:00Z"
+aliases:
+- /uncategorized/a-roundup-of-some-board-games/
+- /a-roundup-of-some-board-games/
+title: A roundup of some board games
+---
+It has been commented to me that it's quite hard to find out (on the Internet) what different games involve. For instance, [Agricola][1] is a game about farming (and that's easy to find out), but what you actually do while playing it is not easy to discover. Here, then, is a brief overview of some games.
+
+# Agricola
+
+[Agricola][2] is a game in which you control a farm, and are aiming to make your farm thrive. It is a multiplayer game (for two to five) divided into turns. During each turn, you can take several actions (the number of actions you can take is determined by the number of people you have on your farm; you start out with two, and some actions increase the number of people you have). The actions are shared between all players - that is, if I take an action, you may not take that same action this turn. There is no other inter-player interaction - no attacking or anything, and you all have your own farm to manage. Your aim is to use actions to gather resources, build and extend your house, and plough fields; at the end of the game (after fourteen rounds, which is about forty minutes) everyone scores their own farm according to a set checklist, and the winner is the one who has the most prosperous farm.
+
+# Settlers of Catan
+
+[Catan][3] is a game in which you are trying to build up your civilisation essentially from scratch. It is multiplayer (two to four), and is divided into turns. The game is played on a common board, which you gradually populate with your own settlements, cities and roads, while attempting to make sure that other people can't foil your plans with their own building. (Once something is built, it can't be un-built, so the game is competitive only in a strategic sense, not a combat sense.) You aim to gather resources (which you can trade freely with opponents) so as to build more such trappings (your settlements and cities gain resources according to dice rolls), and the winner is the first to reach a certain size of civilisation. Games last about 45 minutes.
+
+# Diplomacy
+
+[Diplomacy][4] is a game almost entirely down to how well you can connive with and against opponents. It takes place in turns, but actions happen essentially simultaneously in a turn; the real action happens in between turns, when you go and plot with other people. Games take many hours, and are very multiplayer (eight or so, I think, is normal). Your aim is to take over the world, which you can only feasibly do by persuading people both to assist you and to foil your opponents' attempts.
+
+# Dominion
+
+[Dominion][5] is a two-to-four player deck-building game. You aim to have acquired the most Victory cards by the end of the game. Turns involve playing cards you have already acquired, and acquiring more cards; the cards you acquire become part of an "economy" that is almost never subtracted from, but you may only use a small subset of your cards during any one turn. It is somewhat like Magic: the Gathering (below), but restricted so that cards only modify the structure of your turn and allow you to draw more cards. (There are some "attack" cards, but I find them not to be conducive to fun play.)
+
+# Magic: the Gathering
+
+[Magic][6] is a rather different game to those listed above. It is a collector's game: you acquire cards over your lifetime, although some formats involve getting a random selection of cards and doing the best you can with those. It is multiplayer (two players is common, but it goes arbitrarily high). The format is turn-based - each turn is subdivided - but the key point is that cards can do pretty much anything to the game. Win conditions can be altered, turns can be prevented, cards can be renamed, all as the result of card effects. Your aim is to win the game, which is usually done by taking the opponent's life total down to 0 or by forcing them to draw a card when they have no cards left to draw (that is, after they have already drawn all of their cards). Many other win conditions exist - [one card][7] causes you to win if you have a certain ridiculous number of cards; [one card][8]'s active effect is that a target opponent loses the game; [one card][9] causes you to win if nothing happens for a while; and so forth. A complete list can be seen on the [Gatherer card search facility][10] (you might want to search for "wins the game" as well as the default "win the game").
+
+That description of win conditions was intended to convey how complicated the game can become. It is not for the faint-hearted - it takes a while to get to grips with the myriad mechanics.
+
+# Mafia/Werewolf/Avalon/The Resistance
+
+These games ([Mafia][11] and Werewolf are isomorphic, as are Avalon and [The Resistance][12]) are highly-multiplayer games (up to ten people) in which there are two teams: a team of innocents and a team of hidden spies (I'll refer to those as the Mafia). The job of the former team is to unmask the spies; the latter team usually wins by remaining undetected.
+
+In Mafia, the game revolves around group voting to "kill" people (and the first person killed will have a much less interesting game!). The innocents naturally want to kill the Mafia; the Mafia want to kill all the innocents. The Mafia get an extra turn in between every group vote, in which they can elect among themselves to kill someone. (That is, every normal turn, two people die.) The innocents win if all the Mafia are dead and an innocent is still alive; the Mafia win if they kill all the innocents. There are some extra roles to complicate things, but those are the basics.
+
+Avalon is slightly different - nobody dies at any point. In a round, someone chooses a team of people to go Questing, and the group votes on whether to allow that team to Quest. If the vote comes out negative, the next person chooses a team, and so on. This repeats until a team is approved; then if the team contains a Mafia member, the mission can be failed by that member (secretly: no-one finds out who the Mafia was). Otherwise, it succeeds. The aim is to accumulate failed/succeeded missions (depending on whether you are Mafia or innocent).
+
+# Dixit
+
+[Dixit][13] is a [Keynesian beauty contest][14] style game. Everyone gets cards with pictures on them, and each round, a different Storyteller describes one of their cards. Then everyone puts one of their cards into a pile, and the cards are all placed in the centre of the table. Everyone then guesses which card was the Storyteller's card, based on their description. Points are allocated to any player whose card was guessed (that is, a player who put in a card which matches the description enough for someone to mistake it for the Storyteller's card). The Storyteller gets points *unless* everyone or no-one guessed correctly. (That is, the description must not be so obscure that no-one gets the answer right, and it must not be so obvious that everyone does.)
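
That scoring rule is fiddly enough to be worth pinning down. Here is a minimal sketch in Python of who scores in a round, encoding only the rule as stated above (real editions attach specific point values, which I haven't encoded, and this representation of votes and cards is my own):

```python
def score_round(votes, card_owner, storyteller):
    """votes: guesser -> card they picked; card_owner: card -> player who played it.
    Returns the set of players who score this round."""
    storyteller_card = next(c for c, p in card_owner.items() if p == storyteller)
    n_correct = sum(1 for c in votes.values() if c == storyteller_card)
    # Any player whose decoy card attracted a vote scores.
    scorers = {card_owner[c] for c in votes.values()} - {storyteller}
    # The Storyteller scores unless everyone or no-one guessed correctly.
    if 0 < n_correct < len(votes):
        scorers.add(storyteller)
    return scorers

owners = {"A": "sam", "B": "bob", "C": "cat"}   # sam is the Storyteller
print(score_round({"bob": "A", "cat": "B"}, owners, "sam"))
```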
+
+ [1]: https://en.wikipedia.org/wiki/Agricola_(board_game) "Agricola Wikipedia page"
+ [2]: https://en.wikipedia.org/wiki/Agricola_(board_game) "Agricola Wikipedia page"
+ [3]: https://en.wikipedia.org/wiki/Catan "Settlers of Catan Wikipedia page"
+ [4]: https://en.wikipedia.org/wiki/Diplomacy_(board_game) "Diplomacy Wikipedia page"
+ [5]: https://en.wikipedia.org/wiki/Dominion_(game) "Dominion Wikipedia page"
+ [6]: https://en.wikipedia.org/wiki/Magic:_The_Gathering "Magic: the Gathering Wikipedia page"
+ [7]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=288878 "Battle of Wits Magic card"
+ [8]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=288992 "Door to Nothingness Magic card"
+ [9]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=265418 "Azor's Elocutors Magic card"
+ [10]: https://gatherer.wizards.com/Pages/Search/Default.aspx?text=+[win]+[the]+[game] "Gatherer"
+ [11]: https://en.wikipedia.org/wiki/Mafia_(party_game) "Mafia Wikipedia page"
+ [12]: https://en.wikipedia.org/wiki/The_Resistance_(party_game) "The Resistance Wikipedia page"
+ [13]: https://en.wikipedia.org/wiki/Dixit_(card_game) "Dixit Wikipedia page"
+ [14]: https://en.wikipedia.org/wiki/Keynesian_beauty_contest "Keynesian beauty contest"
diff --git a/hugo/content/posts/2014-03-30-how-to-discover-the-contraction-mapping-theorem.md b/hugo/content/posts/2014-03-30-how-to-discover-the-contraction-mapping-theorem.md
new file mode 100644
index 0000000..ba1fa47
--- /dev/null
+++ b/hugo/content/posts/2014-03-30-how-to-discover-the-contraction-mapping-theorem.md
@@ -0,0 +1,72 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2014-03-30T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof_discovery/how-to-discover-the-contraction-mapping-theorem/
+- /how-to-discover-the-contraction-mapping-theorem/
+title: How to discover the Contraction Mapping Theorem
+---
+A little while ago I set myself the exercise of stating and proving the [Contraction Mapping Theorem][1]. It turned out that I mis-stated it in three different aspects ("contraction", "non-empty" and "complete"), but I was able to correct the statement because there were several points in the proof where it was very natural to do a certain thing (and where that thing turned out to rely on a correct statement of the theorem).
+
+Here, then, is how you might go about discovering it from the point of having a definition of a [Lipschitz function][2] on a metric space \\((X, d)\\) (that is, a function \\(f\\) for which there exists \\(\lambda \in \mathbb{R}^{>0}\\) such that for all \\(x, y \in X\\), \\(d(f(x),f(y)) \leq \lambda d(x,y)\\)). We'll aim for a statement describing the fixed points of such a function.
+
+## Define the terms
+
+What is a "fixed point"? There's nowhere obvious to start other than working out what we mean by one of these. Well, what we mean is "a point \\(x \in X\\) such that \\(f(x) = x\\)". We'll also define \\((X, d)\\) to be an arbitrary metric space, and \\(f\\) an arbitrary Lipschitz function on that space with Lipschitz constant \\(\lambda\\).
+
+## How might we proceed?
+
+We're looking for a fixed point. We have a Lipschitz function (that is, one which "draws points together", in the sense that two points which are originally \\(\delta\\) apart end up \\(\lambda \delta\\) apart, or closer, after \\(f\\) is applied to them). That suggests the idea of starting out with two arbitrary points, repeatedly pulling them closer together with \\(f\\), and seeing where we end up. Actually, on second thoughts, we can dispense with one of the arbitrary points, because we can make another point given our arbitrary \\(x\\) - namely \\(f(x)\\).
+
+## What did we assume?
+
+So far, we've made a (silly) assumption: that the space \\(X\\) is not empty, because we've just picked a point in it. In order to use this "\\(f\\) draws points together", we're going to want \\(\lambda < 1\\), otherwise it's actually blowing them outwards.
+
+## How might we proceed?
+
+We have two points, \\(x\\) and \\(f(x)\\). We want to pull them together using \\(f\\), so it's natural to keep applying \\(f\\) to them. So that we can have access to all these values, we'll define a sequence \\(z_i = f(z_{i-1})\\) and \\(z_0 = x\\). What we really want is for this sequence to converge to the fixed point (after all, if we're drawing the points together to some limit, we'd imagine that the limit of the sequence is a local accumulator in some sense).
+
+Now, we know nothing about this metric space, and we know nothing about the limit of the sequence. There's a key thing we do in analysis if we want a limit of a sequence but know nothing about it: we show that it is [Cauchy][3]. In order to use this, though, we'll need to suppose that the metric space is complete (so that Cauchy sequences converge).
+
+Then we want to show that this sequence \\(z_i\\) is Cauchy. That is, we want \\(d(z_i,z_j) \to 0\\) as \\(i,j \to \infty\\) independently of each other, which means that for all \\(\epsilon > 0\\) there exists \\(N \in \mathbb{N}\\) such that for all \\(i, j > N\\), \\(d(f^i(x), f^j(x)) < \epsilon\\).
+
+Aha - we have \\(d(f^i(x), f^j(x))\\). We know \\(f\\) is Lipschitz, so ([wlog][4] \\(i \leq j\\)) this is \\(d(f^i(x), f^j(x)) \leq \lambda^i d(x, f^{j-i}(x))\\). It would be very convenient if the \\(d\\) expression were bounded, because then as \\(i \to \infty\\), the \\(\lambda^i\\) will take care of the rest (since \\(\lambda < 1\\)).
+
+But what else do we know about \\(d\\)? We're going to need something to bound \\(d(x, f^{j-i}(x))\\), but we don't know anything about this expression - we only know about \\(d(z_i, f(z_i)) \leq \lambda d(z_{i-1}, z_i)\\), by the Lipschitzness of \\(f\\). But in fact we can make \\(d(x, f^{j-i}(x))\\) in terms of those: \\(d\\) is a metric, which means that it obeys the triangle inequality.
+
+Hence \\(\displaystyle d(x, f^{j-i}(x)) \leq d(x, f(x)) + d(f(x), f^{j-i}(x)) \leq \dots \leq \sum_{k=1}^{j-i} d(z_{k-1}, z_k)\\). This we can bound: it's \\(\displaystyle \leq \sum_{k=1}^{j-i} \lambda^{k-1} d(z_0, z_1) = d(z_0, z_1) \sum_{k=1}^{j-i} \lambda^{k-1}\\). And, joy of joys, this sum is bounded, because the infinite sum \\(\displaystyle \sum_{k=1}^{\infty} \lambda^{k-1} = \dfrac{1}{1-\lambda}\\).
+
+Hence \\(d(z_i, z_j) < \lambda^i d(z_0, z_1) \dfrac{1}{1-\lambda}\\). This goes to \\(0\\) as \\(i \to \infty\\), so the sequence \\(z_i\\) is Cauchy.
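
This bound is easy to check numerically. Here is a minimal sketch with the particular contraction \\(f(x) = x/2 + 1\\) on \\(\mathbb{R}\\) (my choice of example: \\(\lambda = 1/2\\), fixed point \\(2\\)), confirming that the observed \\(d(z_i, z_j)\\) always sits below \\(\lambda^i \, d(z_0, z_1) / (1 - \lambda)\\):

```python
# f(x) = x/2 + 1 is Lipschitz with constant 1/2; its fixed point is 2.
lam = 0.5
f = lambda x: lam * x + 1

z = [0.0]                 # z_0 = x = 0
for _ in range(30):
    z.append(f(z[-1]))    # z_i = f(z_{i-1})

d01 = abs(z[1] - z[0])
for i in range(20):
    for j in range(i, 31):
        # the Cauchy bound derived above
        assert abs(z[i] - z[j]) <= lam**i * d01 / (1 - lam) + 1e-12

print(abs(z[20] - 2))     # geometric convergence towards the fixed point
```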
+
+## What did we assume?
+
+In this section, we assumed that the space was complete.
+
+## Summary
+
+So far, we have shown that the sequence \\(f^i(x)\\) is Cauchy, so it converges to a limit. We'll call the limit \\(L\\): so we have \\(f^i(x) \to L\\) as \\(i \to \infty\\).
+
+## What next?
+
+It feels like we're very close to a result now. What we really want is for \\(L\\) to be a fixed point: we need \\(f(L) = L\\). Equivalently, we need \\(f(\lim z_i) = \lim z_i\\); but \\(z_i = f(z_{i-1})\\), so this is \\(f(\lim z_i) = \lim f(z_i)\\). This will be trivial if \\(f\\) is continuous. But \\(f\\) is Lipschitz, so it is uniformly continuous and hence continuous (this is a really simple lemma).
+
+That is, \\(L\\) is a fixed point of \\(f\\): we have proved that \\(f\\) has a fixed point.
+
+## Extension
+
+But we don't have to stop there - if we're drawing points together using \\(f\\), and we end up at a fixed point, surely there can't be two fixed points (since if there were, \\(f\\) would draw them together). Let's aim to prove that \\(f\\)'s fixed point is unique, by supposing that \\(L_1, L_2\\) are fixed points. Then \\(d(L_1, L_2) = d(f(L_1), f(L_2))\\), because \\(L_1, L_2\\) are fixed points, and then \\(d(f(L_1), f(L_2)) \leq \lambda d(L_1, L_2) < d(L_1, L_2)\\), contradiction.
+
+## Summary
+
+We have shown that there exists a unique fixed point \\(L\\) of a Lipschitz function \\(f\\) with Lipschitz constant \\(\lambda < 1\\) on a non-empty complete metric space. Moreover, we have shown that \\(f^i(x) \to L\\) for all \\(x\\), because we can perform this same construction of \\(z_i\\) starting from any point \\(x\\). Even more, we have shown that convergence is geometrically fast (by the \\(\lambda^i\\) term). This is a really strong theorem, and all I needed to remember in order to construct it was that Lipschitz functions were important and that we were looking for information about fixed points. (I didn't look up anything during the proof - I checked my statement of it afterwards, and it turned out to be correct. I didn't change anything after I finished it.)
+
+ [1]: https://en.wikipedia.org/wiki/Contraction_mapping_theorem "Contraction Mapping Theorem Wikipedia page"
+ [2]: https://en.wikipedia.org/wiki/Lipschitz_function "Lipschitz function Wikipedia page"
+ [3]: https://en.wikipedia.org/wiki/Cauchy_sequence "Cauchy sequence Wikipedia page"
+ [4]: https://en.wikipedia.org/wiki/Wlog "Wlog Wikipedia page"
diff --git a/hugo/content/posts/2014-04-04-discovering-a-proof-of-heine-borel.md b/hugo/content/posts/2014-04-04-discovering-a-proof-of-heine-borel.md
new file mode 100644
index 0000000..0390cdc
--- /dev/null
+++ b/hugo/content/posts/2014-04-04-discovering-a-proof-of-heine-borel.md
@@ -0,0 +1,68 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2014-04-04T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof_discovery/discovering-a-proof-of-heine-borel/
+- /discovering-a-proof-of-heine-borel/
+title: Discovering a proof of Heine-Borel
+---
+I'm running through my Analysis proofs, trying to work out which ones are genuinely hard and which follow straightforwardly from my general knowledge base. I don't find the [Heine-Borel Theorem][1] "easy" enough that I can even forget its statement and still prove it (like [I can with the Contraction Mapping Theorem][2]), but it turns out to be easy in the sense that it follows simply from all the theorems I already know. Here, then, is my attempt to discover a proof of the theorem, using as a guide all the results I know but can't necessarily prove without lots of effort.
+
+# Statement of the theorem
+
+The Heine-Borel Theorem states that a subset of \\(\mathbb{R}^n\\) is compact if and only if it is closed and bounded.
+
+# First direction
+
+One direction looks easy - if we assume our set is not closed or not bounded, it should be simple to show that it is not compact, using an argument based on the fact that \\((0,1]\\) is not compact and \\([0, \infty)\\) is not compact. Both of those I know how to prove.
+
+## Assume not closed
+
+If the set \\(S\\) is not closed, the only thing we can do is take a sequence \\((x_n)_{n \geq 1}\\) tending to a limit \\(x\\) which is not in \\(S\\). From this, we need to create an open cover of \\(S\\) which has no finite subcover.
+
+In one dimension, this is easy because we can just take a ball around each \\(x_i\\), each ball overlapping by a tiny bit with the next. Clearly since any finite cover must include each \\(x_i\\), it must also include those balls, whence it must include an infinite number of balls (contradiction). However, in more dimensions this is not so obvious, because we don't have this handy "next ball" concept. What was really key in that 1D example was that the balls around each \\(x_i\\) didn't overlap to the extent that a ball contained more than one \\(x_i\\), and that no ball got near the forbidden limit point. (There was always "room to keep going" - in the \\((0,1]\\) example, taking the sets \\((\dfrac{1}{n+1}, \dfrac{1}{n})\\) and filling in some tiny balls around each \\(\dfrac{1}{n}\\), every set is some finite distance away from \\(0\\).)
+
+In more dimensions, if we create an open cover of \\(S\\) such that no set gets near the limit point \\(x\\) - that is, such that each set in the cover has some neighbourhood of \\(x\\) which it doesn't encroach upon - then any finite cover must also have some neighbourhood of \\(x\\) which it doesn't encroach upon. (A finite collection of things which don't get close to 0 must also not get close to 0.) Hence, because we have a sequence tending to \\(x\\) in \\(S\\), which \*does\* get close to \\(x\\), one of the \\(x_i\\) can't be included in our finite cover. That contradicts compactness.
+
+## Assume not bounded
+
+Remember our key example here was \\([0, \infty)\\). Since our set isn't bounded, we can take a sequence in it getting arbitrarily far out from \\(0\\) (that is, for every \\(n\\) there is \\(x_n\\) such that \\(\vert x_n \vert \geq n\\)). But then the easiest cover to use is just the set of balls centred on \\(0\\) with radius \\(n\\); this is an infinite cover, but there is no finite subcover because if we ever stop, there's an \\(x_n\\) we've missed.
+
+# The other direction
+
+Here's the bit that looks harder, because we're taking any closed bounded set and showing a strong property of it. Remember, though, that we have in fact proved this in 1D already: we proved the Bolzano-Weierstrass property of the reals, and it is a fact (although I don't remember how to prove it) that sequential compactness implies compactness. Let's see if we can make that proof work. (The proof I know goes along the lines of "fix an infinite sequence in an interval; keep halving the interval; there's an infinite subsequence in one of the halves; repeat".)
+
+Firstly, we're faced with an arbitrary closed bounded set. With the not-closed or not-bounded sets it was easier - we had somewhere to start from. We're going to need to make the problem simpler, because closed bounded sets can look really quite odd. The simplest possible closed bounded set is the closed ball centred on the origin of radius \\(r\\), but that's not great for halving. What we can halve is a box \\([-r, r]^n\\) - that's the second-simplest possible closed bounded set (arguably the most simple).
+
+Take an open cover of the box, and assume for contradiction that it has no finite subcover. Divide the box up into \\(2^n\\) smaller boxes by cutting halfway along each side. One of these boxes must have no finite subcover of the original cover (otherwise they'd all have finite subcovers, so we could union them all together to get a finite subcover of the big box), so we can repeat on that box. Inductively we obtain a sequence of nested boxes, none of which has a finite subcover in the original cover; the \\(k\\)th box has side length \\(2r \cdot 2^{-k}\\).
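
The subdivision step can be made concrete. In this small sketch (the representation is mine), a box is a lower corner plus a side length, subdividing yields the \\(2^n\\) sub-boxes, and we arbitrarily keep the first sub-box at each stage, standing in for "the sub-box with no finite subcover"; all the argument needs is that the side length halves every time.

```python
from itertools import product

def subdivide(corner, side):
    """Split the box with given lower corner and side length into 2^n
    half-sized boxes, returned as their lower corners."""
    half = side / 2
    return [tuple(c + off * half for c, off in zip(corner, offsets))
            for offsets in product((0, 1), repeat=len(corner))]

r = 1.0
corner, side = (-r, -r, -r), 2 * r   # the box [-r, r]^3
for k in range(10):
    corner = subdivide(corner, side)[0]   # in the proof: the bad sub-box
    side /= 2

print(side)  # 2r * 2^(-10): the boxes shrink geometrically
```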
+
+What do we know about these nested boxes? In 1D, the proof then went "our infinite sequence therefore has a limit": there was a point which lay in every box. We'd love that to be true here: an infinite sequence of closed nested boxes must have non-empty intersection. Fortunately, that's easy to prove: take a sequence \\(z_n\\) such that each \\(z_n\\) lies in the \\(n\\)th box but not the \\(n+1\\)th. This sequence tends to a limit, because it's clearly Cauchy; we'll show that the limit lies in every box. Indeed, we know that the boxes are closed, so the sequence \\(z_n, z_{n+1}, z_{n+2}, \dots \to z\\) tells us that \\(z\\) lies in box \\(n\\) for every \\(n\\), so there is no \\(n\\) such that \\(z\\) is not in the \\(n\\)th box, and hence \\(z\\) is in every box.
+
+Now, we have our nested boxes homing in on \\(z\\), and \\(z\\) lies in all of these boxes. Moreover, the boxes get smaller and smaller, quite rapidly, and each of them requires an infinite number of sets from our original cover in order to cover it. But where is \\(z\\)? \\(z\\) lies in some set \\(U\\) in the original cover; \\(U\\) is some finite size, so it must cover one of the boxes completely, because the sizes of the boxes go to zero. Formally, \\(U\\) contains some ball \\(B_z(\epsilon)\\); for all \\(\epsilon\\) there is \\(n\\) such that the \\(n\\)th box lies wholly in \\(B_z(\epsilon)\\); hence \\(U\\) contains the \\(n\\)th box, for some \\(n\\).
+
+This contradicts the fact that the \\(n\\)th box has no finite subcover from the original cover - we've just covered it with a single set!
+
+Hence all boxes are compact.
+
+## Dealing with all possible closed bounded sets
+
+We've dealt with the easiest kind of closed bounded sets. How can we transform any other closed bounded set into one of these? We can't, necessarily - closed sets aren't necessarily unions of closed boxes - but what we can say is that every closed bounded set is contained in some closed box. (Indeed, every bounded set is.) It would be great if a closed subset \\(C\\) of a compact set \\(X\\) were compact.
+
+That's easy, though - if we have an open cover of \\(C\\), we can make an open cover of \\(X\\) by just adding \\(U = \mathbb{R}^n - C\\) to the cover. (That extra set is open, being the complement of a closed set.) Then this has a finite subcover, by compactness of \\(X\\); that subcover may contain \\(U\\), but if it does, just throw it out and we've got a finite subcover of \\(C\\). Hence \\(C\\) is compact.
+
+# Summary
+
+We proved it as follows:
+
+1. Show the easier direction: assume not closed, make sequence tending to point not in set, define open cover such that no set individually gets close to that point; any finite subcover doesn't get close to that point, so the sequence can't be in the finite subcover. Assume not bounded, then use the nested balls centred on the origin.
+2. Do the easiest case of boxes, by taking an open cover with no finite subcover, repeatedly dividing up the box to get a sequence of nested boxes each with no finite subcover; there is a point in every box (by defining a sequence of points, one in each box, which must tend to a limit); that point is in some open set in the original cover, and eventually the boxes get small enough that a box is entirely contained within an open set - contradiction.
+3. Do the harder cases of boxes, by showing that a closed subset of a compact set is compact (by taking an open cover, extending it to a cover of the big set, and compactly taking a finite subcover, which turns back into a finite subcover of the small set).
+
+ [1]: https://en.wikipedia.org/wiki/Heine-Borel_theorem "Heine-Borel theorem"
+ [2]: /how-to-discover-the-contraction-mapping-theorem/ "How to discover the Contraction Mapping Theorem"
diff --git a/hugo/content/posts/2014-04-07-useful-conformal-mappings.md b/hugo/content/posts/2014-04-07-useful-conformal-mappings.md
new file mode 100644
index 0000000..4df55a7
--- /dev/null
+++ b/hugo/content/posts/2014-04-07-useful-conformal-mappings.md
@@ -0,0 +1,44 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-04-07T00:00:00Z"
+math: true
+aliases:
+- /uncategorized/useful-conformal-mappings/
+- /useful-conformal-mappings/
+title: Useful conformal mappings
+---
+This post is to be a list of conformal mappings, so that I can get better at answering questions of the form "Find a conformal mapping from one given region to another". The following Mathematica code is rough-and-ready, but it is designed to demonstrate where a given region goes under a given transformation.
+
+    whereRegionGoes[f_, pred_, xrange_, yrange_] :=
+     whereRegionGoes[f, pred, xrange, yrange] =
+ With[{xlist = Join[{x}, xrange], ylist = Join[{y}, yrange]},
+ ListPlot[
+ Transpose@
+ Through[{Re, Im}[
+ f /@ (#[[1]] + #[[2]] I & /@
+ Select[Flatten[Table[{x, y}, xlist, ylist], 1],
+ With[{z = #[[1]] + I #[[2]]}, pred[z]] &])]]]]
+
+* Möbius maps - these are of the form \\(z \mapsto \dfrac{az+b}{c z+d}\\). They keep circles and lines as circles and lines, so they are extremely useful when mapping a disc to a half-plane. A map is defined entirely by how it acts on any three points: there is a unique Möbius map taking any three points to any three points (and hence any circle/line to circle/line). (Some of the following are Möbius maps.)
+* To take the unit disc to the upper half plane, \\(z \mapsto \dfrac{z-i}{i z-1}\\)
+* To take the upper half plane to the unit disc, \\(z \mapsto \dfrac{z-i}{z+i}\\) (the [Cayley transform][1])
+* To rotate by 90 degrees about the origin, \\(z \mapsto i z\\)
+* To translate by \\(a\\), \\(z \mapsto a + z\\)
+* To scale by factor \\(a \in \mathbb{R}\\) from the origin, \\(z \mapsto a z\\)
+* \\(z \mapsto \exp(z)\\) takes a vertical strip to an annulus - but note that it is not bijective, because its domain is simply connected while its range is not.
+* \\(z \mapsto \exp(z)\\) takes a horizontal strip of width \\(\pi\\) centred on \\(\mathbb{R}\\) onto the right half-plane.
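
These maps are easy to sanity-check numerically. Here is a quick sketch (in Python rather than Mathematica, using built-in complex numbers) verifying that the Cayley transform \\(z \mapsto \dfrac{z-i}{z+i}\\) sends the upper half plane strictly inside the unit disc, and the real line onto the unit circle:

```python
import random

cayley = lambda z: (z - 1j) / (z + 1j)

random.seed(0)
for _ in range(1000):
    # a random point in the upper half plane lands strictly inside the unit disc
    z = complex(random.uniform(-50, 50), random.uniform(1e-6, 50))
    assert abs(cayley(z)) < 1
    # a random real point lands on the unit circle
    x = random.uniform(-50, 50)
    assert abs(abs(cayley(x)) - 1) < 1e-9

print("Cayley transform checks passed")
```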
+
+## Maps which might not be conformal
+
+These maps are useful but we can only use them when the domain doesn't include a point where \\(f'(z) = 0\\) (as that would stop the map from being conformal).
+
+* To "broaden" a wedge symmetric about the real axis pointing rightwards, \\(z \mapsto z^2\\)
+* To take a half-strip \\(Re(z) > 0, 0 < Im(z) < \dfrac{\pi}{2}\\) to the top-right quadrant: \\(z \mapsto \sinh(z)\\)
+* To take a half-strip \\(Im(z) > 0, -\frac{\pi}{2} < Re(z) < \frac{\pi}{2}\\) to the upper half plane: \\(z \mapsto \sin(z)\\)
+
+ [1]: https://en.wikipedia.org/wiki/Cayley_transform#Conformal_map "Cayley transform Wikipedia page"
diff --git a/hugo/content/posts/2014-04-15-sample-topology-question.md b/hugo/content/posts/2014-04-15-sample-topology-question.md
new file mode 100644
index 0000000..41bcf3c
--- /dev/null
+++ b/hugo/content/posts/2014-04-15-sample-topology-question.md
@@ -0,0 +1,53 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2014-04-15T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/sample-topology-question/
+- /sample-topology-question/
+title: Sample topology question
+---
+As part of the recent series on how I approach maths problems, I give another one here (question 14 on the Maths Tripos IB 2007 paper 4). The question is:
+
+> Show that a compact metric space has a countable dense subset.
+
+This is intuitively clear if we go by our favourite examples of metric spaces (namely \\(\mathbb{R}^n\\), the discrete metric and the indiscrete metric). Indeed, in \\(\mathbb{R}^n\\), which isn't even compact, we have the rationals (so the theorem doesn't give a necessary condition, only a sufficient one); in the indiscrete metric, any singleton \\(\{x \}\\) is dense (since the only closed non-empty set is the whole space); in the discrete metric, where every set is open, we can't possibly be compact unless the space is finite, so that's why the theorem doesn't hold for a topology with so many sets.
+
+However, there are some really weird metric spaces out there, and if there's one thing I've learnt about topology it's that intuition-by-examples is an extremely bad way to prove things, although it's often a good way to work out *how* to prove something.
+
+Right. Our metric space could be really odd - it might be massively uncountable or something - so that means we're going to have to build our dense subset anew for each metric space. (It's like trying to find a good diet for your pet - the possible pets are so diverse that one diet won't fit all, so we have to find the right diet for each pet individually.) The "countable" bit can only come in from the rationals or naturals - it can't pop out of the metric space itself, because we have no idea how huge the metric space might be.
+
+That's all I can come up with for meta-reasoning at the moment. Let's find an example to guide intuition. By far the simplest is \\([a,b] \subset \mathbb{R}\\), whose dense subset is \\(\mathbb{Q} \cap [a,b]\\).
+
+My first thought is to make a dense subset by grabbing an arbitrary point \\(x\\) and then taking one point \\(x_p\\) such that \\(d(x_p, x) = p\\) for all rational \\(p\\). That definitely works for \\([a,b]\\), but actually it clearly fails in \\(\mathbb{R}^2\\) - what if we happened to pick our points so they all lay on the same line? They'd be dense along that line, but not anywhere else in the set. It's going to be a lot of work to fix this in \\(\mathbb{R}^2\\) without using special properties of \\(\mathbb{R}\\), so I'll abandon that line of thought.
+
+Nothing obvious has come of the "density" part of the statement. Let's move on to the other bit - we know our metric space is compact (or, equivalently, that any open cover has a finite subcover). That means we're going to want to create an open cover. Because our metric space might be so odd, the only obvious cover to take is one consisting of a ball around every point. (Those balls might all be different sizes, of course.) That's the only way to make sure that we have actually included our entire space in the cover.
+
+Compactness then gives us that there is a finite subcover of this cover of balls. That's not going to get us very far if we require a countable number of points, though. Where might we get a *point* rather than an open set (after all, compactness is all about sets, not points)? The only possible place is as the centre of some ball. Aha - we need to create a countable number of points, each of which lies at the centre of some ball. Equivalently, we want a countable number of balls.
+
+OK, we can create hugely many balls to cover the set (wrap every point in a ball), and we can turn that into finitely many balls to cover the set (by compactness). How can we get countably many? Obviously not from the "hugely many" directly, because it might be very very uncountable - but we can make countable from finite, by taking a countable union. That is, we're going to need a countable union of {finitely many balls which cover the set}.
+
+The simplest way I can create that countable union is to make every ball the same size (\\(\frac{1}{n}\\)), and use the cover \\(B_{\frac{1}{n}}\\) consisting of a \\(\frac{1}{n}\\)-ball around every point. We use compactness to turn that into \\(C_{\frac{1}{n}}\\) a collection of finitely many balls (which covers the entire space), and consider the union of all these \\(C_{\frac{1}{n}}\\).
+
+This has given us a countable collection of points \\(\cup_{n \geq 1} \cup_{j = 1}^{i_n} P_{\frac{1}{n},j}\\) (namely, the centres of the balls; notationally, \\(P_{\frac{1}{n}, j}\\) refers to the centre of the \\(j\\)th of the \\(i_n\\) balls in \\(C_{\frac{1}{n}}\\)). Now, we want that set to be dense - we need the closure to be the entire space. What would it mean if the closure weren't the entire space? There would be a point which was in the space but not the closure.
+
+At this point, I move back to the \\(\mathbb{R}^2\\) intuition-guide. I have drawn a mental picture of \\([0,1] \times [0,1]\\) with a countable collection of balls covering it, with a single point not in the closure of the set of centres. Aha, something is not right here - how can a point manage not to be in the closure of that set, unless it is outside the cover?
+
+Suppose \\(x\\) does not lie in \\(\text{cl}(\cup_n P_{\frac{1}{n}})\\) - that is, \\(x\\) is outside the closure. Then \\(x\\) lies in an open set - namely the complement of the closure - so there is an open ball \\(B_{\epsilon}(x)\\) which lies outside the closure. I can feel that we're going to use \\(\frac{1}{n}\\)-ness at some point, because that's how we defined our cover, so let's make \\(\epsilon = \frac{1}{m}\\) for some \\(m\\) (which we can do - if our original \\(\epsilon\\) didn't work, make it smaller until it is the reciprocal of an integer).
+
+Then we have a radius-\\(\frac{1}{m}\\) ball which doesn't lie inside the closure. That doesn't bode well for \\(C_{\frac{1}{m}}\\) being a cover, but it's just possible that the balls may sit next to each other in some way that makes it work (that's how vague my thoughts are, not just my incompetence at communication). For safety, let's consider \\(C_{\frac{1}{2m}}\\) instead.
+
+Then we have \\(x \in B_{\frac{1}{2m}}(k)\\) for some \\(k \in P_{\frac{1}{2m}}\\), because \\(C_{\frac{1}{2m}}\\) was a cover so \\(x\\) does lie in a ball in that cover; pick \\(k\\) to be the centre of that ball. In particular, \\(k\\) lies at most \\(\frac{1}{2m}\\) away from \\(x\\), so \\(k\\) lies both in \\(B_{\frac{1}{m}}(x)\\) (which is outside the closure) and in our set of centres (which is contained in the closure). This is a contradiction - we've found a point which is both in and not in the closure.
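+Spelling the key comparison out in symbols:
+
+\\[ d(k, x) \leq \frac{1}{2m} < \frac{1}{m} \implies k \in B_{\frac{1}{m}}(x) \subseteq X \setminus \text{cl}\left(\cup_n P_{\frac{1}{n}}\right), \\]
+
+while \\(k\\) is a centre and so certainly lies in the closure.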
+
+Hence we must have the closure being the entire space, which means our countable collection of points is dense.
+
+# Summary
+
+I started off by thinking about the problem - working out roughly how I might be able to attack it, and deciding that it was too general for clever tricks to work. I then constructed an intuition-guide example, and worked off that, but decided that the line of attack suggested by my example would be very hard in general.
+
+Having exhausted one of the parts of the theorem's statement, I moved to the other, and followed my nose. The problem was so general that there were only a few possible places we could acquire a countable collection of points; compactness suggested using balls around every point in the space, to get a finite cover of balls. From finite we can create countable by just taking a union, so I made the finite covers more formal (giving the balls a particular size) and took the union of all of them. That naturally gives a countable set of points (the centres of the balls); in the spirit of "do as little work as possible", I set out to prove that this set was dense. Assuming the contrary made it obvious from my intuition-picture that the set was indeed dense.
diff --git a/hugo/content/posts/2014-04-17-cayley-hamilton-theorem.md b/hugo/content/posts/2014-04-17-cayley-hamilton-theorem.md
new file mode 100644
index 0000000..fc202aa
--- /dev/null
+++ b/hugo/content/posts/2014-04-17-cayley-hamilton-theorem.md
@@ -0,0 +1,58 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2014-04-17T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/cayley-hamilton-theorem/
+- /cayley-hamilton-theorem/
+title: Cayley-Hamilton theorem
+---
+This is to detail a much easier proof (at least, I find it so) of [Cayley-Hamilton][1] than the ones which appear on the Wikipedia page. It only applies in the case of complex vector spaces; most of the post is taken up with a proof of a lemma about complex matrices that is very useful in many contexts.
+
+The idea is as follows: given an arbitrary square matrix, upper-triangularise it (looking at it in basis \\(B\\)). Then consider how \\(A-\lambda I\\) acts on the vectors of \\(B\\); in particular, how it deals with the subspace spanned by \\(b_1, \dots, b_i\\).
+
+# Lemma: upper-triangulation
+
+> Given a square matrix \\(A\\), there is a basis with respect to which \\(A\\) is upper-triangular.
+
+Proof: by induction. It's obviously true for \\(1 \times 1\\) matrices, as they're already triangular. Now, let's take an arbitrary \\(n \times n\\) matrix \\(A\\). We want to make it upper-triangular. In particular, thinking about the top-left element, we need \\(A\\) to have an eigenvector (since if \\(A\\) is upper-triangular with respect to basis \\(B\\), then \\(A(b_1) = \lambda b_1\\), where \\(\lambda\\) is the top-left element). OK, let's grab an eigenvector \\(v_1\\) with eigenvalue \\(\lambda\\).
+
+We'd love to be done by induction at this point - if we extend our eigenvector to a basis, that extension itself spans a smaller space, on which \\(A\\) is upper-triangulable. We have that every subspace has a complement, so let's pick a complement of \\(\text{span}(v_1)\\) and call it \\(W\\).
+
+Now, we want \\(A\\) to be upper-triangulable on \\(W\\). It makes sense, then, to restrict it to \\(W\\) - we'll call the restriction \\(\tilde{A}\\), and that's a linear map from \\(W\\) to \\(V\\). Our inductive hypothesis requires a square matrix - an endomorphism of \\(W\\), rather than just a linear map into \\(V\\) - so we need to make the codomain \\(W\\) as well. That means we have to throw out the top row (the \\(v_1\\)-component of the output) - that is, we compose with \\(\pi\\), the projection map onto \\(W\\).
+
+Then \\(\pi \cdot \tilde{A}\\) is \\((n-1)\times(n-1)\\), and so we can induct to state that there is a basis of \\(W \leq V\\) with respect to which \\(\pi \cdot \tilde{A}\\) is upper-triangular. Let's take that basis of \\(W\\) as our extension to \\(v_1\\), to make a basis of \\(V\\). (These are \\(n-1\\) length-\\(n\\) vectors.)
+
+Then we construct \\(A\\)'s matrix as \\(A(v_1), A(v_2), \dots, A(v_n)\\). (That's how we construct a matrix for a map in a basis: state where the basis vectors go under the map.)
+
+Now, with respect to this basis \\(v_1, \dots, v_n\\), what does \\(A\\) look like? Certainly \\(A(v_1) = \lambda v_1\\) by definition. \\(\pi(A(v_2)) = \pi(\tilde{A}(v_2))\\) because \\(\tilde{A}\\) acts just the same as \\(A\\) on \\(W\\); by upper-triangularity of \\(\pi \cdot \tilde{A}\\), we have that \\((\pi \cdot \tilde{A})(v_2) = k v_2\\) for some \\(k\\) (since \\(v_2\\) is the first basis vector of \\(W\\)). The first element (the \\(v_1\\) coefficient) of \\(A(v_2)\\), who knows? (We threw that information away by taking \\(\pi\\).) But that doesn't matter - we're looking for upper-triangulability rather than diagonalisability, so we're allowed to have spare elements sitting at the top of the matrix.
+
+And so forth: \\(A\\) is upper-triangular with respect to some basis.
+
+## Note
+
+Remember that we threw out some information by projecting onto \\(W\\). If it turned out that we didn't throw out any information - if it turned out that we could always "fill in with zeros" - then we'd find that we'd constructed a basis of eigenvectors, and that the matrix was diagonalisable. (This is how the two ideas are related.)
+
+# Theorem
+
+Recall the statement of the theorem:
+
+> Every square matrix satisfies its characteristic polynomial.
+
+Now, this would be absolutely trivial if our matrix \\(A\\) were diagonalisable - just look at it in a basis with respect to which \\(A\\) is diagonal (recalling that change-of-basis doesn't change characteristic polynomial), and we end up with \\(n\\) simultaneous equations which are conveniently decoupled from each other (by virtue of the fact that \\(A\\) is diagonal).
+
+We can't assume diagonalisability - but we've shown that there is something nearly as good, namely upper-triangulability. Let's assume (by picking an appropriate basis) that \\(A\\) is upper-triangular. Now, let's say the characteristic polynomial is \\(\chi(x) = (x - \lambda_1)(x-\lambda_2) \dots (x-\lambda_n)\\). What does \\(\chi(A)\\) do to the basis vectors?
+
+Well, let's consider the first basis vector, \\(e_1\\). We have that \\(A(e_1) = \lambda_1 e_1\\) because \\(A\\) is upper-triangular with top-left element \\(\lambda_1\\), so we have \\((A-\lambda_1 I)(e_1) = 0\\). If we look at the characteristic polynomial as \\((x-\lambda_n)\dots (x-\lambda_1)\\), then, we see that \\(\chi(A)(e_1) = 0\\).
+
+What about the second basis vector? \\(A(e_2) = k e_1 + \lambda_2 e_2\\); so \\((A - \lambda_2 I)(e_2) = k e_1\\). We've pulled the \\(2\\)nd basis vector into an earlier-considered subspace, and happily we can kill it by applying \\((A-\lambda_1 I)\\). That is, since the factors are polynomials in \\(A\\) and hence commute, \\(\chi(A)(e_2) = (A-\lambda_n I)\dots (A-\lambda_1 I)(A-\lambda_2 I)(e_2) = (A-\lambda_n I)\dots (A-\lambda_1 I) (k e_1) = 0\\).
+
+Keep going: the final case is the \\(n\\)th basis vector, \\(e_n\\). \\(A-\lambda_n I\\) has a zero in the bottom-right entry, and is upper-triangular, so it must take \\(e_n\\) to the subspace spanned by \\(e_1, \dots, e_{n-1}\\). Hence \\((A-\lambda_1 I)\dots (A-\lambda_n I)(e_n) = 0\\).
+
+Since \\(\chi(A)\\) is zero on a basis, it must be zero on the whole space, and that is what we wanted to prove.
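+For a concrete sanity check, here is the \\(2 \times 2\\) case worked by hand (my own addition, with \\(k\\) the unknown top-right entry):
+
+\\[ A = \begin{pmatrix} \lambda_1 & k \\\\ 0 & \lambda_2 \end{pmatrix}, \qquad \chi(A) = (A - \lambda_1 I)(A - \lambda_2 I) = \begin{pmatrix} 0 & k \\\\ 0 & \lambda_2 - \lambda_1 \end{pmatrix} \begin{pmatrix} \lambda_1 - \lambda_2 & k \\\\ 0 & 0 \end{pmatrix} = 0. \\]
+
+The right-hand factor maps everything into \\(\text{span}(e_1)\\) (it scales \\(e_1\\) and sends \\(e_2\\) to \\(k e_1\\)), and the left-hand factor then kills \\(\text{span}(e_1)\\) - exactly the pattern of the general argument.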
+
+ [1]: https://en.wikipedia.org/wiki/Cayley-Hamilton_theorem "Cayley-Hamilton theorem"
diff --git a/hugo/content/posts/2014-04-26-sequentially-compact-iff-compact.md b/hugo/content/posts/2014-04-26-sequentially-compact-iff-compact.md
new file mode 100644
index 0000000..8bc6ff0
--- /dev/null
+++ b/hugo/content/posts/2014-04-26-sequentially-compact-iff-compact.md
@@ -0,0 +1,122 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2014-04-26T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof_discovery/sequentially-compact-iff-compact/
+- /sequentially-compact-iff-compact/
+title: Sequentially compact iff compact
+---
+[Prof Körner][1] told us during the [IB Metric and Topological Spaces][2] course that the real meat of the course (indeed, its hardest theorem) was "a metric space is sequentially compact iff it is compact". At the moment, all I remember of this result is that one direction requires Lebesgue's lemma (whose statement I don't remember) and that the other direction is quite easy. I'm going to try and discover a proof - I'll be honest when I have to look things up.
+
+# Easy direction
+
+I don't remember which direction was the easy one. However, I do know that in Analysis we prove very early on that closed intervals are sequentially compact (that is, they have the Bolzano-Weierstrass theorem), so I'm going to guess that that's the easy direction.
+
+## Thought process
+
+Suppose the space is compact. (Then for every open cover there is a finite subcover.) We want to show that every sequence has a convergent subsequence, so of course we'll try a proof by contradiction, because the statement is so general.
+
+Suppose the sequence \\(x_n\\) has no convergent subsequence. That is, no subsequence of \\(x_n\\) converges to \\(y\\), for any \\(y\\). We're aiming for some kind of open cover, and we're in a very general kind of metric space, so we're going to have to generate our cover by considering balls around every point.
+
+What does it mean for every subsequence of \\(x_n\\) not to converge to \\(y\\)? It means that for every subsequence, there is some ball around \\(y\\) such that infinitely many \\(x_n\\) in the subsequence lie outside that ball. My first thought is that we've made a sequence which might be useful - the \\(x_n\\) lying outside balls of radius \\(\frac{1}{m}\\) - but it's not obvious whether that will in fact be useful, because all we know about this sequence is that it doesn't get near a particular point.
+
+OK, let's look at the "for every \\(y\\)" bit, because that's bound to be where our cover comes from. We're going to want a ball around each \\(y\\), so let's say the ball is of radius \\(\delta_y\\). (We'll delay stating what \\(\delta_y\\) actually is in value, because I have no idea what it's going to be.) Ah, then we know that for every subsequence, there are infinitely many \\(x_i\\) which lie outside the ball \\(B(y, \delta_y)\\).
+
+What does our finite subcover look like? It's a finite collection (say, \\(k\\) many) of balls, and we know that there are infinitely many \\(x_i\\) in any subsequence such that the \\(x_i\\) are outside a given one of those balls. But this is a contradiction: take the subsequence of \\(x_n\\) such that all of the \\(x_i\\) in the subsequence lie outside ball 1. Then take a subsequence of that such that all the elements lie outside ball 2. Repeat: eventually we end up with a subsequence of \\(x_n\\) such that all the elements lie outside every one of the \\(k\\) balls. But those balls cover the space, so there is nowhere left for the subsequence to live - contradiction.
+
+## Proof
+
+Suppose \\((X,d)\\) is a compact metric space, and take a sequence \\(x_n\\) in \\(X\\). We show that there exists \\(y \in X\\) such that there is a subsequence \\(z_i\\) of the \\(x_n\\) such that \\(z_i \to y\\).
+
+Indeed, if the sequence \\(x_n\\) gets arbitrarily close to \\(y\\) then there is a subsequence of \\(x_n\\) tending to \\(y\\) (namely, let \\(\epsilon_m = \frac{1}{m}\\); then pick \\(x_{n_m}\\) such that \\(d(x_{n_m}, y) < \epsilon_m\\)), so it is enough to show that there is some \\(y\\) such that the sequence \\(x_n\\) gets arbitrarily close to \\(y\\).
+
+We show that this is true. Indeed, suppose not. Then for all \\(y\\) there exists \\(\delta_y\\) such that \\(x_n\\) never gets within \\(\delta_y\\) of \\(y\\) (for all \\(n > N\\), some \\(N\\) - the sequence might have started at \\(y\\), but we know it never returns after some point). Take a cover consisting of those \\(B(y, \delta_y)\\); by compactness, there is a finite subcover.
+
+Now, we have that for the \\(i\\)th ball in the cover, there exists \\(N_i\\) such that \\(x_n\\) never gets into the \\(i\\)th ball for \\(n > N_i\\); but there are only finitely many balls, so \\(x_n\\) never gets into any of the balls for \\(n > N = \text{max}(N_i)\\). But the finite collection of balls is a cover. That is, no \\(x_n\\) is in \\(X\\), for \\(n > N\\) - contradiction.
+
+## Postscript
+
+That did indeed turn out to be the easier direction, then.
+
+# Hard direction
+
+I'm not even going to begin attempting to find out what Lebesgue's lemma is on my own, so I'll just look it up and state it.
+
+> For a sequentially compact metric space \\((X, d)\\), and an open cover \\(U_{\alpha}\\), we have that there exists \\(\delta\\) such that for all \\(x \in X\\), there exists \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\).
+
+That is, "given any open cover, we can find a ball-width such that for every point, a ball of that width lies entirely in some set in the cover". It feels kind of related to Hausdorffness - while "metric spaces are Hausdorff" guarantees that we can wrap distinct points in non-overlapping balls, Lebesgue's lemma tells us that if our distinct points are not covered by the same set then we can separate them while remaining in those different sets in the cover.
+
+OK, let's go for a proof of this.
+
+## Proving Lebesgue's lemma
+
+Well, where can we start? To actually produce such a \\(\delta\\), it looks like we'd need to take some kind of minimum, and that would require a finite cover (which is assuming compactness). So that's not a good place to start.
+
+If we don't know where to start, we contradict. Suppose there is no \\(\delta\\) such that for all \\(x \in X\\) there exists \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\). That is, for every \\(\delta\\) there exists \\(x \in X\\) such that for all open sets in the cover, \\(B(x, \delta) \not \subset U_{\alpha}\\).
+
+We're in a sequentially compact space - we need a sequence, so that it can have a convergent subsequence. Mindlessly (nearly literally - I'm exhausted at the moment, having had an unusually long supervision since proving the easier direction), I'll take \\(\delta_n = \frac{1}{n}\\) and create a sequence \\(x_n\\) such that \\(B(x_n, \frac{1}{n})\\) is not wholly contained in any set of the cover. Then the \\(x_n\\) has a convergent subsequence \\(x_{n_i} \to x\\), say.
+
+Picture pause. We've got our \\(x_{n_i}\\) tending to \\(x\\), with ever-decreasing balls around them. It seems sensible that at some point (since the position of the balls, the centre \\(x_{n_i}\\), is hardly changing, while the radius is getting smaller) the balls will get so small that they start being contained in some cover-set.
+
+That's actually so close to a proof that I'll write it up formally from this point.
+
+### Proof
+
+Let \\((X, d)\\) be a sequentially compact metric space, and let \\(U_\alpha\\) be a cover (ranging \\(\alpha\\) over some indexing set). Assume for contradiction that for every \\(\delta\\) there exists \\(x \in X\\) such that for all \\(\alpha\\), \\(B(x, \delta) \not \subset U_{\alpha}\\).
+
+Specialise to the sequence \\(\delta_n = \frac{1}{n}\\), and let \\(x_n\\) be the corresponding \\(x \in X\\). Then by sequential compactness, there exists a subsequence \\(x_{n_i}\\) tending to some \\(x\\).
+
+Now, \\(B(x_{n_i}, \frac{1}{n_i}) \not \subset U_{\alpha}\\) for any \\(\alpha\\). Also, because each \\(U_{\alpha}\\) is open, we have that for every \\(\alpha\\) such that \\(x \in U_{\alpha}\\) there exists \\(\epsilon_{\alpha}\\) such that \\(B(x, \epsilon_{\alpha})\\) is wholly contained within \\(U_{\alpha}\\).
+
+Fix some \\(\alpha\\) such that \\(x \in U_{\alpha}\\), and let \\(\epsilon = \epsilon_{\alpha}\\). Take \\(n_i\\) such that \\(d(x_{n_i}, x) < \frac{\epsilon}{2}\\) (possible, because \\(x_{n_i} \to x\\)). We have \\(B(x_{n_i}, \frac{1}{n_i})\\) entirely contained in \\(B(x, \epsilon)\\), because any point in the former ball is at most \\(\frac{1}{n_i}\\) away from \\(x_{n_i}\\), which is itself at most \\(\frac{\epsilon}{2}\\) away from \\(x\\); hence any point in \\(B(x_{n_i}, \frac{1}{n_i})\\) is at most \\(\frac{1}{n_i} +\frac{\epsilon}{2}\\) away from \\(x\\). Picking \\(n_i > \frac{2}{\epsilon}\\) (as well as such that \\(d(x_{n_i}, x) < \frac{\epsilon}{2}\\)) ensures that \\(\frac{1}{n_i} +\frac{\epsilon}{2} < \epsilon\\).
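+Written out, the containment claim is just the triangle inequality: for any \\(y \in B(x_{n_i}, \frac{1}{n_i})\\),
+
+\\[ d(y, x) \leq d(y, x_{n_i}) + d(x_{n_i}, x) < \frac{1}{n_i} + \frac{\epsilon}{2} < \epsilon \quad \text{once } n_i > \frac{2}{\epsilon}, \\]
+
+so \\(B(x_{n_i}, \frac{1}{n_i}) \subseteq B(x, \epsilon) \subseteq U_{\alpha}\\).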
+
+But this is a contradiction: we have a ball entirely contained in some \\(U_{\alpha}\\) - namely \\(B(x, \epsilon)\\) - which contains a ball which is not entirely contained in \\(U_{\alpha}\\) - namely \\(B(x_{n_i}, \frac{1}{n_i})\\).
+
+## Proving the main theorem
+
+OK, what do we have? We have that any open cover of a sequentially compact space allows us to draw a ball of *predetermined width* around each point, such that every ball is contained entirely in a set from the cover.
+
+What do we want? We want every open cover of a sequentially compact space to have a finite subcover. [^when]
+
+OK, let's do the only possible thing and take an open cover of a sequentially compact space. We might be able to build a finite subcover because of our predetermined-width balls, but I want a picture first.
+
+### Pictures (feel free to skip)
+
+Let's use \\([0, 1]\\) and the cover \\([0, \frac{1}{5}), (\frac{1}{n+2}, \frac{1}{n}), (\frac13, 1]\\) where \\(n \geq 2\\), and let's suppose \\(\delta\\) is suitably small - say \\(\delta = \frac{1}{100}\\). (Checking exactly which \\(\delta\\) work is fiddly, but some small \\(\delta\\) certainly does, since \\([0, \frac15)\\) handles everything near \\(0\\).) Then a \\(\delta\\)-ball around any point remains in some set of the cover. The reason we have a finite subcover in this case is that the sets in the cover get smaller, so eventually we can just discard the ones which are too small to contain a \\(\delta\\)-ball. It turns out that wasn't a great intuition guide - metric spaces can be a lot odder than that.
+
+We want a space where the "balls get smaller" argument fails. Let's use \\(\mathbb{R} \cup \{ \infty \}\\) (the one-point compactification, metrised so that it looks like a circle), and the cover \\((n-\frac34, n+\frac34)\\) along with some ball around \\(\infty\\). The reason this one works is because the ball around infinity makes sure we can throw out most of the sets of the cover, because they are contained in the ball around infinity. (A suitable \\(\delta\\) is \\(\frac14\\).)
+
+### End of pictures
+
+Hmm, I don't think I can easily come up with an example which explains exactly why the theorem is true. I slept on this, and got no further, so I looked up the next step: assume that it is not possible to cover the space with a finite number of \\(B(x_i, \delta)\\). (This should perhaps have been suggested to me by my finite examples, in hindsight.) It turns out that this step makes it really easy.
+
+Then for all finite sequences \\((x_i)_{i=1}^n\\), there is a point \\(x_{n+1}\\) which lies in none of the balls \\(B(x_i, \delta)\\); this forms a sequence which must have a convergent subsequence. Because the covering-balls are all of fixed width \\(\delta\\), we must have that eventually the points in the subsequence draw together enough to sit in the same ball.
+
+## Proof
+
+Suppose \\((X, d)\\) is a sequentially compact metric space which is not compact, and fix an arbitrary open cover \\(U_{\alpha}\\) such that there is no finite subcover. Then by Lebesgue's lemma, there is \\(\delta\\) such that for all \\(x \in X\\), there is \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\).
+
+Now, if it were possible to cover \\(X\\) with a finite number of \\(B(x_i, \delta)\\) then we would have a finite subcover (namely, \\(U_{\alpha_{x_i}}\\) for each \\(i\\)). Hence it is impossible to cover \\(X\\) with a finite number of \\(B(x_i, \delta)\\). Take a sequence \\((x_n)_{n=1}^{\infty}\\) such that \\(x_i\\) does not lie in any \\(B(x_j, \delta)\\) for \\(j < i\\) (and where \\(x_1\\) is arbitrary). Then there is a convergent subsequence \\(x_{n_i} \to x\\), say; wlog let \\(n_i = i\\), for ease of notation (so the original sequence converged).
+
+But this contradicts the requirement that \\(x_i\\) always lies outside \\(B(x_j, \delta)\\) for \\(j < i\\): indeed, \\(d(x_i, x_j) < \delta\\) for sufficiently large \\(i, j\\), since convergent sequences are Cauchy.
+
+Hence \\((X, d)\\) is compact.
+
+# Postscript
+
+Ouch, that took a long time. There were three key ideas I ended up using.
+
+1. One direction is so easy that it's one of the first theorems we prove in Analysis.
+2. Lebesgue's lemma.
+3. Contradict ALL the things. (Every single major step in either direction of the proof is a contradiction, and everything just falls out.)
+
+[^when]: When do we want it? Now!
+
+ [1]: https://en.wikipedia.org/wiki/Tom_Körner "Prof Körner Wikipedia page"
+ [2]: https://www.dpmms.cam.ac.uk/study/IB/MetricTopologicalSpaces/ "Met+Top"
diff --git a/hugo/content/posts/2014-05-03-discovering-a-proof-of-sylvesters-law-of-inertia.md b/hugo/content/posts/2014-05-03-discovering-a-proof-of-sylvesters-law-of-inertia.md
new file mode 100644
index 0000000..b24bbf1
--- /dev/null
+++ b/hugo/content/posts/2014-05-03-discovering-a-proof-of-sylvesters-law-of-inertia.md
@@ -0,0 +1,66 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+- proof_discovery
+comments: true
+date: "2014-05-03T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof_discovery/discovering-a-proof-of-sylvesters-law-of-inertia/
+- /discovering-a-proof-of-sylvesters-law-of-inertia/
+title: Discovering a proof of Sylvester's Law of Inertia
+---
+*This is part of what has become a series on discovering some fairly basic mathematical results, and/or discovering their proofs. It's mostly intended so that I start finding the results intuitive - having once found a proof myself, I hope to be able to reproduce it without too much effort in the exam.*
+
+# Statement of the theorem
+
+[Sylvester's Law of Inertia][1] states that given a quadratic form \\(A\\) on a real finite-dimensional vector space \\(V\\), there is a diagonal matrix \\(D\\), with entries \\(( \underbrace{1, \dots, 1}_{p}, \underbrace{-1, \dots, -1}_{q}, 0, 0, \dots, 0 )\\), to which \\(A\\) is congruent; moreover, \\(p\\) and \\(q\\) are the same however we transform \\(A\\) into this diagonal form.
+
+# Proof
+
+The very first thing we need to know is that \\(A\\) is diagonalisable. (If it isn't diagonalisable, we don't have a hope of getting into this nice form.) We know of a few classes of diagonalisable matrices - symmetric, Hermitian, etc. All we know about \\(A\\) is that it is a real quadratic form. What does that mean? It means that \\(A(x) = x^T M x\\) for some real matrix \\(M\\), if we move into some coordinate system; transposing the scalar \\(x^T M x\\) gives \\(x^T M x = x^T M^T x\\), so \\(M\\) and \\(\frac12 (M + M^T)\\) define the same quadratic form - that is, we may as well take the representing matrix to be symmetric. Hence \\(A\\) has a symmetric matrix and so is diagonalisable: there is an orthogonal matrix \\(P\\) such that \\(P^{-1}AP = D\\), where \\(D\\) is diagonal. (Recall that a matrix \\(Q\\) is orthogonal if it satisfies \\(Q^{-1} = Q^T\\).)
+
+Now we might as well consider \\(D\\) in diagonal form. Some of the elements are positive, some negative, and some zero - it's easy to transform \\(D\\) so that the positive ones are all together, the negative ones are all together and the zeros are all together, by swapping basis vectors. (For instance, if we want to swap diagonal elements in positions \\((i,i), (j,j)\\), just swap \\(e_i, e_j\\).) Now we can scale every non-zero diagonal element down to \\(\pm 1\\), by scaling the basis vectors - if we scale \\(e_i\\) by \\(\frac{1}{\sqrt{ \vert A_{i,i} \vert }}\\), calling the resulting basis vector \\(f_i\\), we'll get \\(A(f_i) = \frac{1}{\vert A_{i,i} \vert} A(e_i) = \frac{A_{i,i}}{\vert A_{i,i} \vert} = \pm 1\\) as required. (The square root comes from the fact that \\(A\\) is a *quadratic* form, so \\(A(a x) = a^2 A(x)\\).)
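+In matrix language the rescaling is a congruence. Writing \\(Q\\) for the diagonal scaling matrix (my notation; take the entry to be \\(1\\) wherever \\(D_{i,i} = 0\\)):
+
+\\[ Q = \text{diag}\left( \frac{1}{\sqrt{\vert D_{1,1} \vert}}, \dots, \frac{1}{\sqrt{\vert D_{n,n} \vert}} \right), \qquad Q^T D Q = \text{diag}(\text{sgn}(D_{1,1}), \dots, \text{sgn}(D_{n,n})). \\]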
+
+Hence we've got \\(A\\) into the right form. But how do we show that the number of positive and negative elements is an invariant?
+
+## Positive bit
+
+All I remember from the notes is that there's something to do with positive definite subspaces. It turns out that's a really big hint, and I haven't been able to think up how you might discover it. Sadly, I'll just continue as if I'd thought it up for myself rather than remembering it.
+
+The following section was my first attempt. My supervisor then told me that it's a bit inaccurate (and some of it doesn't make sense). In particular, I talk about the dimension of \\(V \backslash P\\) for \\(P\\) a subspace of \\(V\\) - but \\(V \backslash P\\) isn't even a space (it doesn't contain \\(0\\)). During the supervision I attempted to refine it by using \\(P^C\\) the complement of \\(P\\) in \\(V\\), but even that is vague, not least because complements aren't unique.
+
+### Original attempt
+
+We have a subspace \\(P\\) on which \\(A\\) is positive definite - namely, make \\(A\\) diagonal and then take the first \\(p\\) basis vectors. (Remember, positive definite iff \\(A(x, x) > 0\\) unless \\(x = 0\\); but \\(A(x,x) > 0\\) for \\(x \in P\\) because \\(x^T A x\\) is a sum of positive things.) Similarly, we have a subspace \\(Q\\) on which \\(A\\) is negative semi-definite (namely "everything which isn't in \\(P\\)"). Then what we want is: for any other diagonal form of \\(A\\), there is the same number of 1s on the diagonal, and the same number of -1s, and the same number of 0s. That is, we want to ensure that just by changing basis, we can't alter the size of the subspace on which \\(A\\) is positive-definite.
+
+We'll show that for any subspace \\(R\\) on which \\(A\\) is positive-definite, we must have \\(\dim(R) \leq \dim(P)\\). Indeed, let's take \\(R\\) on which \\(A\\) is positive definite. The easiest way to ensure that its dimension is less than that of \\(P\\) is to show that it's contained in \\(P\\). Now, that might be hard - we don't know anything about what's in \\(R\\) - but we might do better in showing that nothing in \\(R\\) is also in \\(V \backslash P\\), because we know \\(A\\) is negative semi-definite on \\(V \backslash P\\), and that's inherently in tension with the positive-definiteness on \\(R\\).
+
+Suppose \\(r \not \in P\\) and \\(r \in R\\). Then \\(A(r,r) \leq 0\\) (by the first condition) and \\(A(r,r) > 0\\) (by the second condition, since \\(R\\) is positive-definite) - contradiction.
+
+That was quick - we showed, for all subspaces \\(R\\) on which \\(A\\) is positive-definite, that \\(\dim(R) \leq \dim(P)\\).
+
+### Supervisor-vetted version
+
+We have a subspace \\(P\\) on which \\(A\\) is positive-definite - namely, make \\(A\\) diagonal and take the first \\(p\\) basis vectors. We'll call the set of basis vectors \\(\{e_1, \dots, e_n \}\\); then \\(P\\) is spanned by \\(\{e_1, \dots, e_p \}\\).
+
+Now, let's take any subspace \\(\tilde{P}\\) on which \\(A\\) is positive-definite. We want \\(\dim(\tilde{P}) \leq \dim(P)\\); to that end, take \\(N\\) spanned by \\(\{e_{p+1}, \dots, e_n \}\\). We show that \\(\tilde{P} \cap N = \{0\}\\). Indeed, if \\(r \in\tilde{P} \cap N\\), with \\(r \not = 0\\), then:
+
+* \\(r \in \tilde{P}\\) so \\(A(r,r) > 0\\)
+* \\(r \in N\\) so \\(A(r,r) \leq 0\\)
+
+But this is a contradiction. Hence \\(\tilde{P} \cap N\\) is the zero space, and so \\(\dim(\tilde{P}) \leq \dim(P)\\) because \\(\dim(P) + \dim(N) = n\\) while \\(\dim(\tilde{P}) + \dim(N) \leq n\\).
+
+### Commentary
+
+Notice that my original version is conceptually quite close to correct: "take something in a positive-definite space, show that it can't be in the negative-semi-definite bit and hence must be in \\(P\\)". I was careless in not checking that what I had written made sense. I am slightly surprised that no alarm bells were triggered by my using \\(V \backslash P\\) as a space - I hope that now my background mental checks will come to include this idea of "make sure that when you transform objects, you retain their properties".
+
+### Completion (original and hopefully correct)
+
+Identically we can show that for all subspaces \\(Q\\) on which \\(A\\) is negative-definite, \\(\dim(Q) \leq \dim(N)\\) (with \\(N\\) defined analogously to \\(P\\) but with negative-definiteness instead of positive-definiteness). And we already know that congruence preserves matrix rank (because \\(P\\) is invertible, and multiplying by an invertible matrix doesn't change rank), so we have that the number of zeros in any diagonal representation of \\(A\\) is the same.
+
+Hence in any diagonal representation of \\(A\\) with \\(p', q', z'\\) the number of \\(1, -1, 0\\) respectively on the diagonal, we need \\(p' \leq p, q' \leq q, z' = z\\) - but because the diagonal is the same size on each matrix (since the matrices don't change dimension), we must have equality throughout.
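+
+The law is easy to sanity-check numerically. Here's a minimal sketch (the matrices are my own hypothetical example, not from the post): congruence by an invertible \\(P\\) preserves the signature, and for a \\(2 \times 2\\) symmetric matrix a negative determinant means exactly "one positive and one negative eigenvalue", so no eigenvalue solver is needed.
+
+```python
+# Signature check under congruence (example matrices are hypothetical).
+D = [[1, 0], [0, -1]]            # a diagonal form: signature (1, 1)
+P = [[1, 2], [0, 1]]             # an invertible change-of-basis matrix (det = 1)
+
+def matmul(X, Y):
+    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
+            for i in range(2)]
+
+Pt = [list(row) for row in zip(*P)]   # transpose of P
+M = matmul(matmul(Pt, D), P)          # M is congruent to D
+det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
+# For a symmetric 2x2 matrix, det < 0 means one positive and one negative
+# eigenvalue - the same signature as D, as the law demands.
+assert det_M < 0
+```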
+
+ [1]: https://en.wikipedia.org/wiki/Sylvester's_Law_of_Inertia "Sylvester's law of inertia Wikipedia page"
diff --git a/hugo/content/posts/2014-05-26-proof-that-symmetric-matrices-are-diagonalisable.md b/hugo/content/posts/2014-05-26-proof-that-symmetric-matrices-are-diagonalisable.md
new file mode 100644
index 0000000..f6d17d9
--- /dev/null
+++ b/hugo/content/posts/2014-05-26-proof-that-symmetric-matrices-are-diagonalisable.md
@@ -0,0 +1,28 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2014-05-26T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof-that-symmetric-matrices-are-diagonalisable/
+- /proof-that-symmetric-matrices-are-diagonalisable/
+title: Proof that symmetric matrices are diagonalisable
+---
+This comes up quite frequently, but I've been stuck for an easy memory-friendly way to do this. I trawled through the 1A Vectors and Matrices course notes, and found the following mechanical proof. (It's not a discovery-proof - I looked it up.)
+
+## Lemma
+
+Let \\(A\\) be a symmetric matrix. Then any eigenvectors corresponding to different eigenvalues are orthogonal. (This is a very standard fact that is probably hammered very hard into your head if you have ever studied maths post-secondary-school.) The proof of this is of the "write it down, and you can't help proving it" variety:
+
+Suppose \\(\lambda, \mu\\) are different eigenvalues of \\(A\\), corresponding to eigenvectors \\(x, y\\). Then \\(Ax = \lambda x\\), \\(A y = \mu y\\). Hence (transposing the first equation) \\(x^T A^T = \lambda x^T\\); the left-hand side is \\(x^T A\\). Hence \\(x^T A y = \lambda x^T y\\); but \\(A y = \mu y\\), so this is \\(\mu x^T y = \lambda x^T y\\), and hence \\((\lambda - \mu) x^T y = 0\\). Since \\(\lambda \not = \mu\\), this means \\(x^T y = 0\\).
+
+## Theorem
+
+Now, suppose \\(A\\) has eigenvalues \\(\lambda_1, \dots, \lambda_n\\). They might not all be distinct; take the distinct ones, \\(\lambda_1, \dots, \lambda_r\\), together with an eigenvector for each - by the lemma, these eigenvectors are pairwise orthogonal. Then extend this set of eigenvectors to a basis of \\(\mathbb{R}^n\\), and orthonormalise that basis using the [Gram-Schmidt process][1]. (That this produces an orthonormal basis can be proved - it's tedious but not hard, as long as you remember what the Gram-Schmidt process is, and I think it's safe to assume.) With respect to this basis, \\(A\\) is a matrix which is diagonal in the first \\(r\\) entries. Moreover, we are performing an orthonormal change of basis, and conjugation by orthogonal matrices preserves the property of "symmetricness" (proof: \\((P^T A P)^T = P^T A^T P = P^T A P\\)), so the \\(r+1\\)th to \\(n\\)th row/column block is symmetric. It is also real (because we have performed a conjugation by a real matrix). And we have that the first \\(r\\) columns of \\(P^T A P\\) are filled with zeros below the diagonal (being the images of eigenvectors), so \\(P^T A P\\) is also filled with zeros in the first \\(r\\) rows above the diagonal, because it is a symmetric matrix.
+
+Now by induction, the sub-matrix occupying rows and columns \\(r+1\\) to \\(n\\) is diagonalisable by an orthogonal matrix. Hence we are done: all symmetric matrices are diagonalisable by an orthogonal change of basis. (The eigenvectors produced by the inductive step must be orthogonal to the ones we've already found, because they lie in a subspace which is orthogonal to the span of the ones we already found.)
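+
+The lemma is easy to sanity-check numerically. Here's a minimal sketch (the matrix entries are a hypothetical example of mine): for the symmetric matrix with rows \\((a, b)\\) and \\((b, c)\\), an eigenvector for eigenvalue \\(\lambda\\) is \\((b, \lambda - a)\\), and the eigenvalues come from the characteristic quadratic.
+
+```python
+import math
+
+# A symmetric 2x2 matrix [[a, b], [b, c]] with b != 0, so its eigenvalues
+# are distinct; the lemma says the corresponding eigenvectors are orthogonal.
+a, b, c = 2.0, 1.0, 3.0
+tr, det = a + c, a * c - b * b
+disc = math.sqrt(tr * tr - 4 * det)   # positive, so two distinct eigenvalues
+lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
+# An eigenvector for eigenvalue lam of [[a, b], [b, c]] is (b, lam - a).
+v1 = (b, lam1 - a)
+v2 = (b, lam2 - a)
+dot = v1[0] * v2[0] + v1[1] * v2[1]
+assert abs(dot) < 1e-9                # orthogonal, as the lemma predicts
+```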
+
+ [1]: https://en.wikipedia.org/wiki/Gram-Schmidt_process "Gram-Schmidt process Wikipedia page"
diff --git a/hugo/content/posts/2014-06-25-possible-cons-of-Soylent.md b/hugo/content/posts/2014-06-25-possible-cons-of-Soylent.md
new file mode 100644
index 0000000..546f6f4
--- /dev/null
+++ b/hugo/content/posts/2014-06-25-possible-cons-of-Soylent.md
@@ -0,0 +1,29 @@
+---
+lastmod: "2022-12-31T23:21:00.0000000+00:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-06-25T00:00:00Z"
+aliases:
+- /uncategorized/possible-cons-of-Soylent/
+- /possible-cons-of-Soylent/
+title: Possible cons of Soylent
+---
+
+I have seen many glowing reviews of [Soylent](https://soylent.com), and many vitriolic [naturalistic](https://en.wikipedia.org/wiki/Appeal_to_nature) arguments against it. What I have not really seen is a proper collection of credible reasons why you might not want to try Soylent (that is, reasons which do not boil down to "it’s not natural, therefore Soylent is bad" or "food is great, therefore Soylent is bad").
+
+This page used to contain citations in the form of links to the Soylent Discourse forum at `discourse.soylent.com`.
+However, that site is now defunct.
+
+* *Soylent is untested.* Indeed, there are apparently trials being run (there was originally a link to a post from the founder of Soylent, but the link is dead), but I have not seen any data coming out of them (or indeed any evidence of a trial, other than the founder’s word). It is perfectly plausible that Soylent misses out something important - [lycopene](https://en.wikipedia.org/wiki/Lycopene), for instance, may turn out to be highly beneficial. Of course, various fast-foody diets don’t contain lycopene or whatever anyway. The fact that no-one has yet become ill (apart from a well-known and easily-fixed sodium problem) in a diet-related way from Soylent is insufficient evidence that Soylent is safe.
+
+* *Soylent is even more addictive than whole food.* People often report that Soylent makes them feel really really good for a few days, before they adjust to their new level of wellbeing and "good" becomes "normal". Then returning to whole food causes them to feel sluggish and generally not very well. On the other hand, some report that whole food becomes extra-tasty, so perhaps it’s a balancing act - switching from Soylent to a good diet may be important.
+
+* *You hate the idea/you find cooking too fun.* Fine, don’t eat it.
+
+* *It’s effort to test and tune your home-made recipe.* Everyone is different, and you might need to make up for pre-existing deficiencies or whatever. As much as the DIY community and [Rosa Labs](http://www.rosalabs.com/) would like it, one size does not fit all, and it might take a while to find out what you need.
+
+* *There are side-effects of adjusting to Soylent.* People usually report gas when starting a soylent, and sometimes it doesn’t seem to settle down; why this happens also seems to be unclear. There are other symptoms, like headaches (which are apparently usually down to having not enough sodium or not enough water) and bloatedness (which is apparently solved by not drinking the Soylent so quickly).
+
+* *Expense.* There are some DIY recipes which are very expensive. This is often because protein is dear, and low-carb soylents must be mostly protein and fat by necessity. Too high a fat content is unpalatable, so the expensive protein makes up the calories.
diff --git a/hugo/content/posts/2014-07-13-solvability-of-nonograms.md b/hugo/content/posts/2014-07-13-solvability-of-nonograms.md
new file mode 100644
index 0000000..f86ea1e
--- /dev/null
+++ b/hugo/content/posts/2014-07-13-solvability-of-nonograms.md
@@ -0,0 +1,48 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2014-07-13T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/nonograms/
+- /solvability-of-nonograms/
+title: Solvability of nonograms
+---
+Recently, a friend re-introduced me to the joys of the [nonogram] (variously known as "hanjie" or "griddler"). I was first shown these about ten years ago, I think, because they appeared in [The Times]. When The Times stopped printing them, I forgot about them for a long time, until two years ago, or thereabouts, I tried these on [a website][griddlers.net]. I find the process much more satisfying on paper with a pencil than on computer, so I gave them up again and forgot about them again.
+
+Anyway, the thought occurred to me: is a given griddler always solvable, and is it solvable uniquely? That is, given a grid and the edge entries, is it always a valid puzzle?
+
+Notation: we will say that a given *solved grid* has an *edge-set* consisting of the numbers we would see if we were about to start solving the nonogram. We say that an edge-set *applies to* a solved grid if that edge-set is consistent with the solved grid. (For instance, the empty edge-set doesn't apply to any solved grid apart from the zero-size grid.)
+
+Then our question has become: is there in some way a bijection between (edge-sets) and (solved grids)?
+
+# Existence of edge-sets
+
+We can trivially describe any solved grid by an edge-set and a grid size: simply write down the grid size of the solved grid, and write down the obvious edge-set. (We do need the grid size to be specified, because given an edge-set which applies to a solved grid, we can create a new grid to which that edge-set applies by simply appending a blank row to the solved grid.)
+
+# Uniqueness of edge-sets
+
+Is there an obvious reason why we could never have two different edge-sets applying to the same solved grid? It seems intuitively clear that a given solved grid can only have the obvious edge-set (namely, the one we get by writing down the blocks in each row and column in the obvious way). Is this rigorous as a proof? Yes: suppose that we had two edge-sets describing the same solved grid, and (wlog) the sets differ in the first row. In fact, let us wlog that our solved grid is only one row long.
+
+* If one edge-set is empty, we're done: because the two edge-sets are not the same, that means the other edge-set is non-empty, and so under the first edge-set the solved grid is empty, while under the second the solved grid is nonempty.
+* If both edge-sets are non-empty: suppose the first starts with the number \\(a\\), and the second with the number \\(b\\). Then we have some number of blank squares, and then \\(a\\) filled-in squares (by edge-set 1) and also \\(b\\) filled-in squares (by edge-set 2); hence \\(a=b\\), because our solved grid is fixed.
+
+# Existence of solutions
+Must a solution exist for a given grid size and edge-set? Is it possible to create a nonogram with no solution? One strategy for proving this might be to count the number of allowable edge-sets and to count the number of allowable solved grids (the latter problem is extremely easy if we consider a grid as being a binary number whose bits are laid out in a rectangle), because we have that any two finite sets of the same size must biject. However, the former problem sounds very hard.
+
+On second thoughts (read: I slept on this), it's blindingly obvious that there is a grid with no solution - namely, the one-by-one grid with edge-set "1 as column heading, 0 as row heading". So there certainly are edge-sets which don't have a solution grid.
+
+# Uniqueness of solutions
+OK, if we don't always have solvability, how about the "easy puzzle-setting property": that a given edge-set and grid-size cannot have two solved grids to which the edge-set applies? If this were true, it would make generating puzzles extremely easy: simply draw out a solved grid, write down its edge-set (which is unique, as shown above), and set that edge-set and grid-size as the puzzle, without fear that someone could sit down and solve the puzzle validly to get a different grid to your solution.
+
+On the same second thoughts as the 'existence of solutions' thoughts, it's clear that the 2-by-2 grid with a diagonal black stripe has two solutions - namely, send the stripe top-left to bottom-right, or top-right to bottom-left. Curses.
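+
+These observations are easy to check mechanically. Here's a short sketch (the function names are mine, not from the post): compute the obvious edge-set of a solved grid, and confirm that the two diagonal stripes share one.
+
+```python
+def runs(line):
+    """Lengths of the maximal blocks of filled cells in one row or column."""
+    out, count = [], 0
+    for cell in line:
+        if cell:
+            count += 1
+        elif count:
+            out.append(count)
+            count = 0
+    if count:
+        out.append(count)
+    return out
+
+def edge_set(grid):
+    """The obvious edge-set of a solved grid: the runs of every row and column."""
+    return [runs(row) for row in grid], [runs(col) for col in zip(*grid)]
+
+# Two distinct solved grids (the diagonal stripes) with the same edge-set,
+# so edge-sets do not determine solutions uniquely.
+stripe1 = [[1, 0], [0, 1]]
+stripe2 = [[0, 1], [1, 0]]
+assert stripe1 != stripe2
+assert edge_set(stripe1) == edge_set(stripe2)
+```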
+
+# Summary
+Every solved grid has an edge-set, which is unique to that grid. However, not all edge-sets are solvable, and we don't have uniqueness of solutions. That was much less interesting than I had hoped.
+
+[nonogram]: https://en.wikipedia.org/wiki/Nonogram
+[griddlers.net]: https://www.griddlers.net/home
+[The Times]: http://www.thetimes.co.uk/tto/news/
diff --git a/hugo/content/posts/2014-07-15-what-maths-does-to-the-brain.md b/hugo/content/posts/2014-07-15-what-maths-does-to-the-brain.md
new file mode 100644
index 0000000..023446b
--- /dev/null
+++ b/hugo/content/posts/2014-07-15-what-maths-does-to-the-brain.md
@@ -0,0 +1,47 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2014-07-15T00:00:00Z"
+disqus: true
+math: true
+aliases:
+- /psychology/what-maths-does-to-the-brain/
+- /what-maths-does-to-the-brain/
+title: What maths does to the brain
+---
+
+In my activities on [The Student Room], a student forum, someone (let's call em Entity, because I like that word) recently asked me about the following question.
+
+>Isaac places some counters onto the squares of an 8 by 8 chessboard so that there is at most one counter in each of the 64 squares. Determine, with justification, the maximum number that he can place without having five or more counters in the same row, or in the same column, or on either of the two long diagonals.
+
+You might like to have a think about it before I give, first, the answer Entity gave, and then my commentary on it.
+
+I paraphrase Entity's answer:
+
+>The maximum is 32, because the maximum along each row is 4 and so having 33 counters means having more than one row being full. Moreover, I have found a pattern which satisfies the 32 requirement. Hence we have shown that the correct answer is at most and at least 32, so it must be 32.
+
+I'm going to assume that the 32-pattern is correct, because I wasn't shown the purported answer. What interested me was that my mind immediately pointed out internally that we have made an unproved claim. Again, you might like to think what the unproved claim might be - it's completely trivial to prove, but I found it fascinating. It'll come in the next paragraph.
+
+The unproved claim is "having 33 counters means having more than one row being full". There are a couple of trivial proofs:
+
+* \\(\frac{33}{64} > \frac{1}{2}\\) is the proportion of the board which is becountered, and the mean of eight quantities (the proportion of counters in each row) which are all less than or equal to a half cannot itself be greater than a half. Hence at least one of the eight quantities is greater than a half (that is, a row has more than four counters in).
+* The [pigeonhole principle] gives the result directly in a similar way (33 pigeons into eight holes means one hole has more than four pigeons).
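+
+The arithmetic in both bullet points is a one-liner to check (a small sketch of mine, using the board sizes from the puzzle):
+
+```python
+import math
+
+SQUARES, ROWS, MAX_PER_ROW = 64, 8, 4
+assert 33 / SQUARES > 1 / 2          # more than half the board is becountered
+assert ROWS * MAX_PER_ROW == 32      # 8 rows of at most 4 hold at most 32 counters
+assert math.ceil(33 / ROWS) == 5     # pigeonhole: some row gets 5 or more
+```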
+
+However, my mind flagged this claim automatically as something that wasn't necessarily obvious. It turned out to be trivial, but it is an example of a step which is in general not true. For instance:
+
+> Consider the natural numbers (greater than 0). The set of even numbers takes up half the space. Now if we remove the number 2 from the set of even numbers, we have the collection still taking up half the space, but now it's a smaller set - it's missing an element. Conundrum.
+
+Here, we used very similar reasoning ("removing something from a set makes it take up proportionally less space") but got nonsense, ultimately because the pigeonhole principle doesn't apply to infinite sets.
+
+I think what I did here was recognise a general pattern, but I struggle to work out what that pattern might be. The closest I've come is "if one property of a structure holds, then an obviously related property of that structure holds", because I'm pretty sure my thought wasn't triggered by the need for the pigeonhole principle. (In that case, the pattern would have been "if we fill up some slots, then some subset of the slots must be full", which is much more specific and trivial than I feel my reaction was. It felt like a specific instance of a very general check.)
+
+A similar pattern which is much more concrete is the distinction between "if" and "only if" and "if and only if". A mathematician trains emself early on to not get confused between these. It doesn't take too long before you simply stop having the mental architecture that lets you make a mistake like "all odd squares are squares of odd numbers. Indeed, if \\(n\\) is odd then \\(n^2\\) is odd. QED" unless the structures you're working with are quite a bit more complicated. Of course, my mental checks can be overwhelmed by complexity, and I have certainly proved the wrong direction of a problem many times, but in everyday conversation and in simpler mathematical problems, it becomes not only easy but automatic to distinguish between "implies" and "is implied by".
+
+It feels vaguely similar to some of the filters I've installed in myself for other reasons. For instance, earlier today I was asked which of five leaflets looked best. I had already seen one of them before, and my first reaction (before any other) was "I've seen this one before, so I'm likely to think it looks better". I have a few anti-bias systems like these, and I have no idea whether they're useful or not, but I can certainly feel them going sometimes, without any input from myself.
+
+
+[The Student Room]: http://www.thestudentroom.co.uk
+[pigeonhole principle]: https://en.wikipedia.org/wiki/Pigeonhole_principle
diff --git a/hugo/content/posts/2014-07-19-music-practice.md b/hugo/content/posts/2014-07-19-music-practice.md
new file mode 100644
index 0000000..62d2029
--- /dev/null
+++ b/hugo/content/posts/2014-07-19-music-practice.md
@@ -0,0 +1,38 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-07-19T00:00:00Z"
+aliases:
+- /uncategorized/music-practice/
+- /music-practice/
+title: Music practice
+---
+
+A couple of weeks ago, someone opined to me that there was a type of person who was just able to sit down and play at the piano, without sheet music.
+
+I, myself, am capable of playing [precisely one piece][Jarrod Radnich PotC] inexpertly, from memory, at the piano. (My rendering of that piece is *nowhere* near the arranger's standard.) I can play nothing else without sheet music. I very much think that this is the natural state for essentially every musician who has not spent thousands upon thousands of hours practising in a general way. That is, almost no-one can naturally sit down and play a piece from memory without a lot of work beforehand, and almost no-one can improvise well without a great deal of effort directed either at learning how to improvise, or at learning generally the mechanics of playing.
+
+# How technical practice helps
+The syllabus for [ABRSM exams] contains a large body of scale-work, arpeggios and related patterns. There is a reason these are featured so heavily that one cannot attain a Distinction grade without them: while they may not help much in learning pieces up to Grade 8, they really are useful beyond that point. Someone who can sit down in front of an unseen piece and note automatically that "the left hand is an Alberti bass in F#-major" is at a distinct advantage compared to someone who has never practised F#-major arpeggios, because the latter person has to read each note in both hands; the former can concentrate almost solely on the right hand's melody line. An impossible piece can become quite do-able if you can reduce the left hand's job to that of repeating a memorised action.
+
+# How general performance practice helps
+Because the same patterns occur so often in music, there is essentially no upper limit on how many useful actions there are to memorise. Most phrases of music end in one of a couple of [recognised ways][cadence] (cadences), and a given cadence doesn't vary that much in its presentation. Someone well-practised could quite conceivably only need to read three-quarters of a piece, knowing that the remaining quarter is already-familiar cadences.
+
+And, of course, it is hard to practise actual pieces without coming across cadences - they show up so regularly. By just performing general practice of a wide range of pieces, you naturally come to be able to play cadences without much thought. If you devote effort to learning particular chordal patterns, this process becomes even easier.
+
+If you play the piano with any level of seriousness, you have probably played a [fugue] at some point. A fugue is a piece of music mainly characterised by a single melody which is repeated at various pitches, and around which a richly textured harmony is built. The idea is to bat this theme between several 'voices' (for instance, a fugue might have four voices, two played in the left hand and two in the right, analogously to a four-part choir), with each voice either playing the theme or embellishing upon it. It's kind of like a more complicated and interesting canon. The key point is that fugues are all very similar in style, and if you have the skill of playing a single tune more than once simultaneously, in different voices and offset from each other, then you can pretty much play a fugue. That skill comes with practice.
+
+Anyone who has sat a music exam knows how important it is to be able to recover from mistakes. The best way to recover from a mistake would be to improvise something that sounds plausible (ideally the original piece!) until you picked up the thread again. This, too, comes with practice: I have noticed myself that I have over time got substantially better at ignoring mistakes I make during a performance. Every so often, if I'm caught out while playing a piece I know well, I can just about invent a semi-plausible bar to fill in the gap before I recover. (If nothing else, I might be able to play the right chords in an unexpected [inversion] or something.) I understand that this skill has pretty much no upper bound.
+
+# Summary
+The point is, then, that it is probable that sheer mind-numbing amounts of practice are what make people able to sit down and play. Certainly some may require less practice than others, but anyone who can play at the drop of a hat has probably practised an awful lot to get like that. I certainly know of no counterexamples.
+
+
+[Jarrod Radnich PotC]: https://www.youtube.com/watch?v=n4JD-3-UAzM
+[ABRSM exams]: https://en.wikipedia.org/wiki/ABRSM#Practical_exams
+[cadence]: https://en.wikipedia.org/wiki/Cadence_(music)
+[fugue]: https://en.wikipedia.org/wiki/Fugue
+[inversion]: https://en.wikipedia.org/wiki/Chord_inversion#Chords
diff --git a/hugo/content/posts/2014-07-21-perfect-pitch.md b/hugo/content/posts/2014-07-21-perfect-pitch.md
new file mode 100644
index 0000000..fa87ea0
--- /dev/null
+++ b/hugo/content/posts/2014-07-21-perfect-pitch.md
@@ -0,0 +1,48 @@
+---
+lastmod: "2022-08-21T12:09:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-07-21T00:00:00Z"
+aliases:
+- /uncategorized/perfect-pitch/
+- /perfect-pitch/
+title: Perfect pitch
+---
+
+I have a limited form of [perfect (absolute) pitch][perfect pitch], which I am sometimes asked about. Often it's the same questions, so here they are. No doubt people with better perfect pitch than mine will be annoyed at this impudent upstart claiming the ability, but perfect pitch comes on a spectrum anyway. Apparently some people can identify notes to within the nearest fifth of a semitone, while some can only identify the semitone closest to the note. I am a bit further towards the "tone-deaf" end of that spectrum.
+
+# References for notes
+
+Anyway, I have been able to sing a [concert A] without reference since about the age of 12, I think, on account of having learnt the violin since much younger than then. From then, until about the age of 15, I kind of accumulated more notes I could use as references (A because it's concert tuning; E because it's the start of [Für Elise]; D because it's the start of [Pachelbel's Canon] and of the Libera Me from [Fauré's Requiem]). Annoyingly, these were all notes we tune violin strings to; it's very easy to find the four notes of a violin given an A, because we hear that sound every time we start playing the violin. Eventually I picked up an unreliable B-flat (from [a rather rousing Christmas carol][This Little Babe]), which I always had to cross-check with the A.
+
+Then I started noticing that the F which lies at the bottom of my vocal range had a very distinctive feel on the piano. Not a particular piano - just that I could recognise that F when played on the piano. Similarly, middle C started to feel like a C. I came to be able to reproduce the C vocally, by imagining pressing the middle-C key and singing the note that it played.
+
+That is, I could identify notes ACDEF B-flat. More tentatively, I could identify G as being kind of weedy and characterless (as opposed to the rich understated heroism of F - sounds silly, but I can find no other way to describe it off the top of my head).
+
+I still have trouble with most accidentals (that is, flats and sharps), although I've just now realised that I can do F-sharp from [Tim Minchin]'s excellent [song of the same name][F Sharp] and I can do D-sharp from the start of [Chopin]'s [Nocturne in B][Chopin Nocturne in B]. So it's really just C-sharp, A-flat and B that I don't have references for. I can identify the white notes (except B, which feels a bit like a chameleon, could be either a C or a B-flat) on a piano by sound, and I can identify all the notes by producing them, or producing the next-door note, and comparing with what I heard.
+
+Having said that, I'm significantly slower and less accurate when there is background noise - particularly tuned background noise. It feels like my internal scale is fuzzy and easily subject to external influence.
+
+# FAQ
+*Have you always had it?* No, I picked it up mid-to-late secondary school. Also, my ability depends on having been playing music recently (by "recently" I mean "in the last week or so"). If the last few weeks have been musicless, I become much slower and less accurate.
+
+*What's it like to have it?* No different from otherwise, for the most part. It doesn't get in the way unless I ask for it, with some exceptions. In particular, I usually listen to a piece of music without noticing the notes, although I am not that fast at identifying most notes, so they might well pass me by before I have a chance to decide what they are. In the same way, the individual letters of a text don't bother you as you read.
+
+I said "exceptions": I am quite sensitive to instruments being out of tune. I don't know whether I'm much more sensitive than other people in this area - maybe they're all being polite in pretending not to notice. After a few minutes to get used to the pitch, it usually swamps my absolute representation of notes, and then I stop noticing out-of-tuneness (because I no longer have a reliable baseline).
+
+*Can you distinguish sound better than normal?* Apparently so, but I don't think it's caused by my perfect pitch. On a now-defunct online test, I scored in the 87th percentile of test takers, reliably distinguishing tones 0.75 hertz apart around 500 hertz. I imagine that's to do with musical training.
+
+
+[perfect pitch]: https://en.wikipedia.org/wiki/Perfect_pitch
+[concert A]: https://en.wikipedia.org/wiki/Concert_A
+[Für Elise]: https://en.wikipedia.org/wiki/Fur_Elise
+[Pachelbel's Canon]: https://en.wikipedia.org/wiki/Pachelbel%27s_Canon
+[This Little Babe]: https://www.youtube.com/watch?v=BTyIP7m8Btg
+[Fauré's Requiem]: https://en.wikipedia.org/wiki/Faur%C3%A9_Requiem
+[F Sharp]: https://www.youtube.com/watch?v=5Ju8Wxmrk3s
+[Tim Minchin]: https://en.wikipedia.org/wiki/Tim_Minchin
+[Chopin Nocturne in B]: https://www.youtube.com/watch?v=BhIP4hDBp-E
+[Chopin]: https://en.wikipedia.org/wiki/Chopin
diff --git a/hugo/content/posts/2014-08-19-parables.md b/hugo/content/posts/2014-08-19-parables.md
new file mode 100644
index 0000000..43cf0cd
--- /dev/null
+++ b/hugo/content/posts/2014-08-19-parables.md
@@ -0,0 +1,34 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+comments: true
+date: "2014-08-19T00:00:00Z"
+aliases:
+- /creative/parables/
+- /parables/
+title: Parables, chapter 1, verses 1-10
+---
+
+One day, a group of investors came to [Bezos] in the Temple and begged of him, "You are known throughout the land for your wisdom. Please tell us: what lessons did you learn early in life, which we have not yet learnt?"
+
+Bezos replied thus.
+
+"When I was but a child, when I had not yet seen seven summers, I discovered that my teacher had a bountiful store of chocolates hidden in the stationery cupboard. Being of an enterprising frame of mind, I proceeded to eat one of them every day for a week." For he was mindful of the need to preserve the source of good things.
+
+"The next Monday, the teacher took me aside, and asked me whether I had been eating the chocolates. I replied that I had no idea who had been eating the chocolates, and expressed astonishment that indeed there were free chocolates to be had so near to my place of work." He knew that the key to deceit was remembering what you *should* know, as a cover for what you *did* know.
+
+"But the teacher was wise beyond my years. Ey said to me, 'I saw you take chocolates last Friday!' And to prove it, ey brandished the selfsame wrapper I had carelessly discarded." And even these decades later, a tear ran down Bezos's cheek, that his scheme had failed in so predictable a manner.
+
+"I realised that now was the time for the truth. I explained myself: 'I am sorry, O teacher, that I allowed you to discover my scheme. I understand now that you become suspicious after only four repetitions of a deception, and not the five I thought were safe. In future, I shall be more careful.' I was a simple mind then, and believed that it was right to tell the truth. I wished to be held accountable for my lies." One of the investors nodded sympathetically.
+
+"To my surprise, the teacher flew into a rage. I was put into detention. That day I learnt that while the truth should set you free, this only holds up to the point of maintaining your societal role." He knew now that truth is secondary, when one is an underling.
+
+"I saw an opportunity to prevent further suffering. 'I see you are attempting to negatively reinforce me against telling the truth and explaining my actions. I have learnt my lesson - you need not apply further reinforcement. I shall remember this.'"
+
+"And that was the day I was expelled from my school, and was left to forge my own path."
+
+One's prescribed roles should not confine behaviour overmuch. That way lies stagnation and inactivity.
+
+[Bezos]: https://en.wikipedia.org/wiki/Jeff_Bezos
diff --git a/hugo/content/posts/2014-08-26-python-script-shadowing.md b/hugo/content/posts/2014-08-26-python-script-shadowing.md
new file mode 100644
index 0000000..06dbf5e
--- /dev/null
+++ b/hugo/content/posts/2014-08-26-python-script-shadowing.md
@@ -0,0 +1,26 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- programming
+comments: true
+date: "2014-08-26T00:00:00Z"
+aliases:
+- /uncategorized/python-script-shadowing/
+title: Python, script shadowing
+---
+
+*A very brief post about the solution to a problem I came across in Python.*
+
+In the course of my work on [Sextant] (specifically the project to add support for accessing a [Neo4j] instance over SSH), I ran into a problem whose nature is explained [here][Name shadowing trap] as the Name Shadowing Trap. Essentially, in a project whose root directory contains a `bin/executable.py` script, intended as a thin wrapper to the module `executable`, you can't `import executable`, because `bin/executable.py` shadows the module `executable`.
+
+The particular example I had was a wrapper called `sextant.py`, which needed to `import sextant` somewhere in the code. There was no guarantee that the wrapper script would be located in a predictable place relative to the module, because `pip` has a lot of liberty about where it puts various files during a package installation. I really didn't want to mess with `PYTHONPATH` if at all possible; a maybe-workable solution might have been to alter `PYTHONPATH` to put the module `sextant` at the front temporarily, so that its import would take precedence over that of `sextant.py`, but it seemed like a dirty way to do it.
+
+No workaround was listed, other than to rename the script. A brief Google didn't give me anything more useful. Eventually, I asked someone in person, and ey told me to get rid of the `.py` from the end of the script name. That stops Python from recognising it as a script (for the purposes of `import`). As long as you have the right [shebang] at the top of the script, though, and its permissions are set to be executable, you can still run it.
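
The trap is easy to reproduce. Below is a self-contained sketch (the package name `foo` and the directory layout are hypothetical, purely for illustration) showing that dropping the `.py` extension is exactly what un-shadows the import:

```python
import os
import subprocess
import sys
import tempfile

root = tempfile.mkdtemp()
bindir = os.path.join(root, "bin")
pkgdir = os.path.join(root, "lib", "foo")
os.makedirs(bindir)
os.makedirs(pkgdir)

# The real package the wrapper is supposed to import.
with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("WHO = 'real package'\n")

# A thin wrapper script with the same name as the package.
with open(os.path.join(bindir, "foo.py"), "w") as f:
    f.write(
        "import foo\n"
        "if __name__ == '__main__':\n"
        "    print(getattr(foo, 'WHO', 'shadowed'))\n"
    )

env = dict(os.environ, PYTHONPATH=os.path.join(root, "lib"))

def run(script):
    return subprocess.run([sys.executable, script], env=env,
                          capture_output=True, text=True).stdout.strip()

# With the .py extension, the script's own directory (sys.path[0]) wins:
# "import foo" finds bin/foo.py, i.e. the script imports itself.
out1 = run(os.path.join(bindir, "foo.py"))

# Drop the extension and the file no longer looks like a module to the
# import system, so the import falls through to the package on PYTHONPATH.
os.rename(os.path.join(bindir, "foo.py"), os.path.join(bindir, "foo"))
out2 = run(os.path.join(bindir, "foo"))

print(out1)  # shadowed
print(out2)  # real package
```

The renamed script still runs fine under `python bin/foo` (or directly, given a shebang and execute permissions); it just stops being a candidate for `import foo`.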
+
+(Keywords in the hope that Google might direct people to this page if they have the same problem: Python shadow module script same name.)
+
+[Sextant]: https://launchpad.net/ensoft-sextant
+[Neo4j]: https://neo4j.com
+[Name shadowing trap]: http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html
+[shebang]: https://en.wikipedia.org/wiki/Shebang_(Unix)
diff --git a/hugo/content/posts/2014-09-09-sum-of-two-squares-theorem.md b/hugo/content/posts/2014-09-09-sum-of-two-squares-theorem.md
new file mode 100644
index 0000000..5750627
--- /dev/null
+++ b/hugo/content/posts/2014-09-09-sum-of-two-squares-theorem.md
@@ -0,0 +1,85 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2014-09-09T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/sum-of-two-squares-theorem/
+- /sum-of-two-squares-theorem/
+title: Sum-of-two-squares theorem
+---
+
+*Wherein I detail the most beautiful proof of a theorem I've ever seen, in a bite-size form suitable for an Anki deck. I attach the [Anki deck], which contains the bulleted lines of this post as flashcards.*
+
+# Statement
+There's no particularly nice way to motivate this in this context, I'm afraid, so we'll just dive in. I have found this method extremely hard to motivate - a few of the steps are glorious magic.
+
+* \\(n\\) is a sum of two squares iff in the prime factorisation of \\(n\\), primes 3 mod 4 appear only to even powers.
+
+# Proof
+We're going to need a few background results.
+
+## Background
+* \\(\mathbb{Z}[i]\\), the ring of [Gaussian integers], is a UFD.
+* In a UFD, [irreducible]s are [prime].
+* \\(-1\\) is square mod \\(p\\) iff \\(p\\) is not 3 mod 4.
+
+Additionally, we'll call a number which is the sum of two squares a **nice** number.
+
+## First implication: if primes 3 mod 4 appear only to even powers…
+We prove the result first for the primes, and will then show that niceness is preserved on taking products.
+
+* Let \\(p=2\\). Then \\(p\\) is trivially the sum of two squares: it is \\(1^2+1^2\\).
+* Let \\(p\\) be 1 mod 4.
+* Then modulo \\(p\\), \\(-1\\) is square: say \\(x^2 \equiv -1 \pmod{p}\\).
+* That is, there is \\(n \in \mathbb{N}\\) such that \\(x^2 + 1 = n p\\).
+* That is, there is \\(n \in \mathbb{N}\\) such that \\((x+i)(x-i) = n p\\).
+* \\(p\\) divides \\((x+i)(x-i)\\), but it does not divide either of the two multiplicands (since it does not divide their imaginary parts).
+* Therefore \\(p\\) is not prime in the complex integers.
+* Since \\(\mathbb{Z}[i]\\) is a UFD, \\(p\\) is not irreducible in the complex integers.
+* Hence there exist non-invertible \\(a, b \in \mathbb{Z}[i]\\) such that \\(a b = p\\).
+* Taking norms, \\(N(p) = N(ab)\\).
+* Since the norm is multiplicative, \\(N(p) = N(a) N(b)\\).
+* \\(N(p) = p^2\\), so \\(p^2 = N(a) N(b)\\).
+* Neither \\(a\\) nor \\(b\\) was invertible, so neither of them has norm 1 (since in \\(\mathbb{Z}[i]\\), having norm 1 is equivalent to being invertible).
+* Hence \\(N(a) = N(b) = p\\): since \\(p\\) is prime and neither norm is 1, the only way to write \\(p^2 = N(a) N(b)\\) is as \\(p \times p\\).
+* Let \\(a = u+iv\\). Then \\(N(a) = u^2 + v^2 = p\\), which was what we needed.
+
+Next, we need to take care of this "even powers" business:
+
+* \\(p^2\\) is a sum of two squares if \\(p\\) is 3 mod 4: indeed, it is \\(0^2 + p^2\\).
+
+All we now need is for niceness to be preserved under multiplication. (Recall \\(w^*\\) denotes the conjugate of \\(w\\).)
+
+* Let \\(x, y\\) be the sum of two squares each, \\(x_1^2 + x_2^2\\) and \\(y_1^2 + y_2^2\\).
+* Then \\(x = (x_1 + i x_2)(x_1 - i x_2)\\), and similarly for \\(y\\).
+* Then \\(x y = (x_1 + i x_2)(x_1 - i x_2)(y_1 + i y_2)(y_1 - i y_2)\\).
+* So \\(x y = w w^*\\), where \\(w = (x_1 + i x_2)(y_1 + i y_2)\\).
+* Hence \\(x y = N(w)\\), so is a sum of two squares (since norms are precisely sums of two squares).
+
+Together, this is enough to prove the first direction of the theorem.
+
+## Second implication: if \\(n\\) is the sum of two squares…
+We'll suppose that \\(n = x^2 + y^2\\) has a prime factor which is 3 mod 4, and show that it divides both \\(x\\) and \\(y\\).
+
+* Let \\(n = x^2 + y^2\\) have prime factor \\(p\\) which is 3 mod 4.
+* Then, reducing mod \\(p\\), we have \\(x^2 + y^2 = 0\\).
+* That is, \\(x^2 = - y^2\\).
+* If \\(y\\) were not zero mod \\(p\\), it would be invertible.
+* Then \\((x y^{-1})^2 \equiv -1 \pmod{p}\\).
+* This contradicts that \\(p\\) is 3 mod 4 (since \\(-1\\) is not square mod \\(p\\)). So \\(y\\) is divisible by \\(p\\).
+* Symmetrically, \\(x\\) is divisible by \\(p\\).
+* Hence \\(p^2\\) divides \\(n\\), so we can divide through by it and repeat inductively.
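
This step can be sanity-checked by exhaustive search over residues; a small Python sketch (not part of the proof):

```python
# For each small prime p, list all residue pairs (x, y) with
# x^2 + y^2 = 0 (mod p).
solutions = {}
for p in (3, 5, 7, 11, 19, 23):
    solutions[p] = [(x, y) for x in range(p) for y in range(p)
                    if (x * x + y * y) % p == 0]

# For primes that are 3 mod 4, the only solution is x = y = 0 (mod p)...
for p in (3, 7, 11, 19, 23):
    assert solutions[p] == [(0, 0)]

# ...but not for p = 5, where e.g. 1^2 + 2^2 = 5 = 0 (mod 5).
assert (1, 2) in solutions[5]
```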
+
+That ends the proof. Its beauty lies in the way it regards sums of two squares as norms of complex integers, and dances into and out of \\(\mathbb{C}\\), \\(\mathbb{Z}[i]\\) and \\(\mathbb{Z}\\) where necessary.
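
The statement itself is also easy to sanity-check by brute force; here is a quick Python sketch (the helper names are my own):

```python
from math import isqrt

def is_nice(n):
    # Brute-force: is n a sum of two squares?
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def prime_powers(n):
    # Prime factorisation by trial division: {prime: exponent}.
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def theorem_says_nice(n):
    # Primes that are 3 mod 4 must appear to even powers.
    return all(e % 2 == 0 for p, e in prime_powers(n).items() if p % 4 == 3)

assert all(is_nice(n) == theorem_says_nice(n) for n in range(1, 5000))
print("checked up to 5000")
```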
+
+[Gaussian integers]: https://en.wikipedia.org/wiki/Gaussian_integers
+[UFD]: https://en.wikipedia.org/wiki/Unique_factorization_domain
+[irreducible]: https://en.wikipedia.org/wiki/Irreducible_element
+[prime]: https://en.wikipedia.org/wiki/Prime_element
+[Anki deck]: {{< baseurl >}}AnkiDecks/SumOfTwoSquaresTheorem.apkg
diff --git a/hugo/content/posts/2014-12-02-christmas-carols.md b/hugo/content/posts/2014-12-02-christmas-carols.md
new file mode 100644
index 0000000..1cbd678
--- /dev/null
+++ b/hugo/content/posts/2014-12-02-christmas-carols.md
@@ -0,0 +1,43 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-12-02T00:00:00Z"
+aliases:
+- /uncategorized/christmas-carols/
+- /christmas-carols/
+title: Christmas carols
+---
+
+In which I provide my favourite carols and my favourite renditions of them.
+
+In no particular order, except that 1) must be at the start and 9) at the end.
+
+1) [Once in Royal David's City][Once]. Always opens the Festival of Nine Lessons and Carols. Has the same problem as 9) in that the only nice recordings seem to have congregations in, but I suppose that's all part of it.
+
+2) [The Three Kings]. My favourite. This performance (King's College) has a soloist who is a bit strident, I think, but all the other ones I've listened to are even stridenter.
+
+3) O Holy Night. My second-favourite. It took until 2017 before I found a recording I liked: it's by the Elora Festival Singers. (Pavarotti is a bit forceful. Most of the recordings appear to be soloists only, singing in very American voices. I want a SATB choir with soloist(s) and, if there must be accompaniment, organ. The soloist(s) must be reverent rather than joyful, and the choir must be singing the standard chordal patterns rather than funky modern ones. There's a version done by Libera which almost passes muster, but it's not SATB and it is accompanied by lighthearted orchestra. It's a solemn piece.)
+
+4) [This Little Babe]. I don't usually like Britten, but this one is too rousing. I had trouble finding a good version of this, but these people nailed it.
+
+5) [In Dulci Jubilo]. King's College does it perfectly.
+
+6) [In the Bleak Midwinter][Bleak] (Darke's setting). I'm sensing a theme with the King's choir.
+
+7) [It Came Upon the Midnight Clear][Midnight Clear]. This performance is beautifully smooth.
+
+8) [This Is the Truth Sent From Above]. Vaughan Williams had to make it into the list.
+
+9) [Hark, the Herald Angels Sing][Hark]. Have to end a carol service with that. Wow, there are some bad arrangements of this out there (Mormon Tabernacle Choir, I'm looking at you, and Pentatonix, which would be so nice if they didn't sing with such weirdly non-British vowel sounds). I still haven't found one in which there isn't a congregation.
+
+[Once]: https://www.youtube.com/watch?v=NMGMV-fujUY
+[The Three Kings]: https://www.youtube.com/watch?v=HIedUioo_Jk
+[This Little Babe]: https://www.youtube.com/watch?v=aPnP5zzHJoQ
+[In Dulci Jubilo]: https://www.youtube.com/watch?v=iXze_TLUTqM
+[Bleak]: https://www.youtube.com/watch?v=GPpy3XSk6c0
+[Midnight Clear]: https://www.youtube.com/watch?v=rSn0_Zj6gjQ
+[This Is the Truth Sent From Above]: https://www.youtube.com/watch?v=5M_8vjqWYmM
+[Hark]: https://www.youtube.com/watch?v=A_iLXNSIaYc
diff --git a/hugo/content/posts/2014-12-09-film-recommendation-interstellar.md b/hugo/content/posts/2014-12-09-film-recommendation-interstellar.md
new file mode 100644
index 0000000..3f6a5d7
--- /dev/null
+++ b/hugo/content/posts/2014-12-09-film-recommendation-interstellar.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+comments: true
+date: "2014-12-09T00:00:00Z"
+aliases:
+- /uncategorized/film-recommendation-interstellar/
+title: Film recommendation, Interstellar
+---
+
+I’ve just come back from seeing [Interstellar], a film of peril and physics. This post will be spoiler-free except for sections which are in [rot13].
+
+I thought the film was excellent. My previous favourite film in its genre was [Sunshine], but this beats it in many ways, chiefly that the physics portrayed in Interstellar - relativity, primarily - is not so wrong that it’s immediately implausible. Indeed, some physics-driven plot twists (such as *gvqny sbeprf arne n oynpx ubyr*) I called in advance, which is a testament to how closely the film matched my physical expectations. My stomach nearly dropped out when the characters realised what relativity meant for them.
+
+This is one of few films whose outcome was truly tense and uncertain for me. Characters were reasonably well-developed, and Michael Caine was in it. Good long story, told at the right pace, and there weren’t too many concessions made to the plot. (By which I mean, it felt like things often happened as they would in real life, rather than just to make a good story, and I had genuine feelings of empathic frustration when reality intervened in the plot.)
+
+The film lasted perhaps seven minutes too long, in my opinion. *V gubhtug vg fubhyq unir raqrq jvgu gur cebgntbavfg qlvat bhgfvqr Fnghea, naq uhznavgl'f shgher hapregnva ohg thnenagrrq gb pbagnva tbqubbq.* I think it’s made to cater to USA audiences rather than British ones; we Brits tend to like emotions to be portrayed with subtlety in films. There were several places I thought the ending was going to be very different: *gung Pbbcre jbhyq qvr ba gur sebmra cynarg; gung gurl jbhyq fynz vagb gur oynpx ubyr naq qvr; gung Zhecul'f oebgure jbhyq xvyy Zhecul jura fur oenaqvfurq gur jngpu*. My favourite ending would simply have been the film without its last scene.
+
+Additionally, a little too much was made of *ybir genafpraqf gvzr naq fcnpr*: while I can believe one irrational person saying this, it stretches the imagination for an entire team of scientists to think it.
+
+I should stress that those are pretty much my only problems with this film, and they’re all pretty minor. I loved the soundtrack; the visual effects were astonishing (vaguely reminiscent of 2001: A Space Odyssey). I’d go so far as to say that this film is beautiful, not just in a visual sense but in an arty sense: its spirit is pure, or something like that. Very much worth the price of entry, at a little under £3/hr.
+
+[Interstellar]: https://en.wikipedia.org/wiki/Interstellar_(film)
+[rot13]: https://rot13.com/
+[Sunshine]: https://en.wikipedia.org/wiki/Sunshine_(2007_film)
diff --git a/hugo/content/posts/2014-12-19-matrix-puzzle.md b/hugo/content/posts/2014-12-19-matrix-puzzle.md
new file mode 100644
index 0000000..23efd84
--- /dev/null
+++ b/hugo/content/posts/2014-12-19-matrix-puzzle.md
@@ -0,0 +1,91 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2014-12-19T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/matrix-puzzle/
+title: Matrix puzzle
+---
+
+I recently saw a problem from an Indian maths olympiad:
+
+> There is a square arrangement made out of n elements on each side (n^2 elements total). You can assign a value of +1 or -1 to any element. A function f is defined as the sum of the products of the elements of each row, over all rows, and g is defined as the sum of the products of the elements of each column, over all columns. Prove that, for n an odd number, f+g can never be 0.
+
+There is a very quick solution, similar in flavour to that [famous dominoes puzzle][Mutilated chessboard]. However, I didn’t come up with it immediately, and my investigation led down an interesting route.
+
+Preliminary observations
+===========
+
+It is easy to see that given a matrix of \\(1\\)s and \\(-1\\)s, \\(f, g\\) are unchanged on reordering rows or columns, and are merely swapped on taking the transpose (so \\(f+g\\) is unchanged). There is also a very useful lemma: \\(f, g\\) are unchanged if we negate the four corners of a rectangle in the matrix, since each affected row and column has exactly two entries negated, which leaves its product the same.
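
The rectangle lemma is easy to test empirically; a randomised Python sketch (the helper names are mine):

```python
from math import prod
from random import choice, sample, seed

seed(0)

def f_and_g(m):
    # f: sum of row products; g: sum of column products.
    return (sum(prod(row) for row in m),
            sum(prod(col) for col in zip(*m)))

# Randomised check: negating the four corners of a rectangle
# changes neither f nor g.
for _ in range(500):
    n = 5
    m = [[choice((1, -1)) for _ in range(n)] for _ in range(n)]
    i, k = sample(range(n), 2)   # two distinct rows
    j, l = sample(range(n), 2)   # two distinct columns
    before = f_and_g(m)
    for r, c in ((i, j), (i, l), (k, j), (k, l)):
        m[r][c] = -m[r][c]
    assert f_and_g(m) == before

print("lemma holds on 500 random flips")
```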
+
+The idea then occurs: perhaps there is a [normal form] of some kind?
+
+Specification of normal form
+========
+
+Given any four -1’s laid out at the corners of a rectangle, we may flip them all into 1’s without changing \\(f, g\\). Similarly, given any three -1’s on the corners of a rectangle, where the fourth corner is 1, we may flip to get a rectangle with one -1 and three 1’s.
+
+Repeat this procedure until there are no rectangles with three or more corners -1. (Note that we might get a different answer depending on the order we do this in!) A Mathematica procedure to do this (expressed in a very disgusting way) is as follows.
+{% raw %}
+ internalReduce[mat_] := Module[{m = mat},
+ Do[If[(i != k && j != l) &&
+ Count[Extract[m, {{i, j}, {k, j}, {i, l}, {k, l}}], -1] >
+ 2, {m[[i, j]], m[[i, l]], m[[k, j]],
+ m[[k, l]]} = -{m[[i, j]], m[[i, l]], m[[k, j]], m[[k, l]]};
+ ], {i, 1, Length[mat]}, {j, 1, Length[mat]}, {k, 1,
+ Length[mat]}, {l, 1, Length[mat]}];
+ m]
+ reduce[mat_] := FixedPoint[internalReduce, mat]
+{% endraw %}
+
+Notice that columns which contain more than one -1 must not overlap, in the sense that no two columns with more than one -1 may have a -1 in the same row. Indeed, if they did, we’d have a submatrix somewhere of the form `{{-1, -1}, {1, -1}}`, which contradicts the “we’ve finished flipping” condition. Hence we may rearrange rows so that all -1’s appear together in contiguous columns.
+
+We may then rearrange columns so that reading from the left, we see successive columns with decreasingly many -1’s. Rearrange rows again so that they appear stacked on top of each other.
+
+![example of reduced matrix][reduced matrix]
+
+We’ve ended up with a normal form: columns of -1’s, diagonally adjoined to each other, followed by rows of -1’s. (The following Mathematica code relies on the fact that SortBy is a stable sort.)
+
+`normalform[mat_] := SortBy[Transpose@SortBy[Transpose@reduce[mat], -Count[#, -1] &], Count[#, -1] /. {0 -> Infinity} &]`
+
+We haven’t shown that it’s unique yet, and indeed it’s not. As a counterexample, `{{-1,1,1,1,1}, {-1,1,1,1,1}, {1,-1,1,1,1}, {1,-1,1,1,1}, {1,1,1,1,1}}` is transformed into `{{-1,1,1,1,1}, {-1,1,1,1,1}, {-1,1,1,1,1}, {-1,1,1,1,1}, {1,1,1,1,1}}` by a rectangle-flip.
+
+This suggests a further improvement to the normal form: by flipping in this way, we may insist that any column of -1’s, other than the first, must contain only one -1. Indeed, if it contained two or more, we would flip two of them into the first column, rearrange so that all columns were contiguous -1’s again, and repeat.
+
+What does our matrix look like now? It’s a column of -1, followed by some diagonal -1’s, followed by a row of -1. We’ll call this the canonical form, although I’ve still not shown uniqueness.
+
+![example of matrix in canonical form][canonical matrix]
+
+Restatement of problem
+========
+
+The problem then becomes: given a matrix in canonical form, show that \\(f+g\\) cannot be 0.
+
+Notice that if the long column is \\(r\\) long, and there are \\(s\\) diagonal -1’s, and the long row is \\(t\\) long, and the matrix is \\(n \times n\\), then \\(f = -r-s+(-1)^t + (n-s-r-1)\\), \\(g = -t-s+(-1)^r + (n-s-t-1)\\).
+
+Hence \\(f+g = 2n - 2(r+2s+t+1) + (-1)^r + (-1)^t\\).
+
+Any choice of \\(r, s, t, n\\) with \\(r+s+1 \leq n; s+t+1 \leq n; r, t>1\\) yields a valid matrix. We therefore need to show that for all such \\(r, s, t\\) and odd \\(n\\) we have \\(2(n-r-2s-t-1) + (-1)^r + (-1)^t \not = 0\\).
+
+Solution
+=======
+
+Reducing this mod 4 (the \\(4s\\) term vanishes), it is enough to show that \\(2(n-r-t-1) + (-1)^r + (-1)^t \not \equiv 0 \pmod{4}\\). We can case-bash on the parities of \\(r, t\\); the case \\(r\\) odd, \\(t\\) even is symmetric to \\(r\\) even, \\(t\\) odd, so three cases suffice, and in each the congruence does indeed fail.
+
+* \\(r, t\\) even: \\(2r\\) and \\(2t\\) are multiples of 4, leaving \\(2(n-1) + 2 = 2n\\); since \\(n\\) is odd, this is not \\(0 \pmod{4}\\).
+* \\(r\\) even, \\(t\\) odd: the \\((-1)^r\\) and \\((-1)^t\\) terms cancel, leaving \\(2(n-r-t-1)\\); here \\(n-r-t-1\\) is odd, so this is \\(2 \pmod{4}\\), not 0.
+* \\(r, t\\) odd: \\(2r + 2t\\) is a multiple of 4, leaving \\(2(n-1) - 2 \equiv 2n \pmod{4}\\), which is again not \\(0 \pmod{4}\\).
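
For small odd \\(n\\) the original claim can also be verified exhaustively; a Python sketch, with a contrasting even case:

```python
from itertools import product
from math import prod

def f_plus_g(entries, n):
    # Reshape a flat tuple of ±1 entries into an n x n matrix and
    # return (sum of row products) + (sum of column products).
    rows = [entries[i * n:(i + 1) * n] for i in range(n)]
    return (sum(prod(r) for r in rows)
            + sum(prod(c) for c in zip(*rows)))

# Odd n: f + g is never 0 (exhaustive for n = 3, all 512 matrices).
assert all(f_plus_g(m, 3) != 0 for m in product((1, -1), repeat=9))

# Even n: the claim fails, e.g. for this 2x2 matrix.
assert f_plus_g((1, -1, 1, 1), 2) == 0
```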
+
+Summary
+=======
+
+Once we had this canonical form, it was easy to find \\(f, g\\) and therefore analyse the behaviour of \\(f+g\\). Next steps: prove that canonical forms are unique (perhaps using the fact that \\(f, g\\) are invariant across forms, and showing a result along the lines that any two canonical forms with the same \\(f, g\\) must be equivalent). I won’t do that now.
+
+[Mutilated chessboard]: https://en.wikipedia.org/wiki/Mutilated_chessboard_problem
+[normal form]: https://en.wikipedia.org/wiki/Canonical_form
+[reduced matrix]: {{< baseurl >}}images/Matrices/matrix_reduced.jpg
+[canonical matrix]: {{< baseurl >}}images/Matrices/matrix_canonical.jpg
diff --git a/hugo/content/posts/2014-12-23-latin-translation-tips.md b/hugo/content/posts/2014-12-23-latin-translation-tips.md
new file mode 100644
index 0000000..af024a8
--- /dev/null
+++ b/hugo/content/posts/2014-12-23-latin-translation-tips.md
@@ -0,0 +1,42 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2014-12-23T00:00:00Z"
+aliases:
+- /uncategorized/latin-translation-tips/
+- /latin-translation-tips/
+title: Latin translation tips
+---
+
+I'm clearing out my computer, and found a file which may as well be here.
+
+Chunking:
+----
+
+1. The first thing to do is to run through the sentence, identifying the verbs and anything that looks like it might be a verb (even in a strange form, like “passus” or “ascendere”).
+2. Run through a second time, looking for structures like “ut + subjunctive” and “non solum… sed etiam…” - if a verb you spotted is in an odd form, this is when you look quickly for why it’s in that form.
+3. Look for any subordinate clauses (like “dixit Caecilius, qui in horto laborabat…”)
+4. If you see an adjective-looking thing, it probably has to go with a noun.
+5. With that in mind, chunk the text, remembering that two verbs in the same chunk is unlikely unless one is something like “dixit” or “poterat”, which can modify another verb. Remember that chunks shouldn’t be too long, but lots of really short words together might not count against the length limit. Try reading out each chunk - rhythm takes time to learn to grasp, but it might help you.
+
+Once the text is chunked:
+----
+
+1. Remember that your chunking is probably wrong somewhere, but also is probably broadly right.
+2. In each chunk, if there’s a nominative and a verb then try and translate those first. Then think about what the verb “expects”; if the verb is looking for an accusative, find an accusative, while if it’s looking for a dative, find a dative. For example, “docet” = “he teaches” is looking for an accusative, while “trahet” = “he drags” is looking both for an accusative (“he drags something”) and possibly a dative (“he drags something somewhere”).
+3. If it looks like a jumble of words, identify the case of everything (in poetry, it can help if you scan the text) - this should tell you what goes with what. Don’t be too fussy about getting the right case, though - I’d be happy with “dative or ablative”, most of the time, because that’s usually clear from context - as long as you have the right case among your options!
+
+Guessing vocab:
+----
+
+Try and work out what the principal parts of a verb are. The English word from a given Latin one almost always comes from the past passive participle (the fourth principal part), by adding “tion” instead of “us”: “passus” -> “passion” (a bit misleading if you don’t know about the Passion of the Christ, because it means “suffering”); “traho” -> “tractus” -> “traction”, and it actually means “drag”.
+How to guess the principal parts is the kind of thing you learn with time, but as a general rule, “t” -> “s” (as in “patior passus”) and almost everything else goes to “ct”: “pingere pictus” from which “depiction” so “painting”, “facere factus” from which “manufaction” which isn’t really English but tells you it means “making”, etc.
+
+General:
+----
+
+* If you see lots and lots of things in the same case, ask yourself whether they all go together, or whether there’s some reason that more things than usual should be in that case. Usually it’ll be the former, with the major exception being “que” = “and”. (e.g. Caesarem Brutumque - Caesar isn’t described by the word “Brutus”, but they’re both affected by the same verb.)
+* Don’t be afraid to amend your earlier translation, if something becomes clarified by later text. Keep looking at the English you get, to make sure you’re on track; while you’re working, it’s better to leave something blank than to get it wrong, so don’t guess too early. Once you’ve gone over the whole thing, or you’ve got to a point where everything afterwards is impossible without help from earlier, then you can guess. (And, of course, leave nothing blank when you hand it in!) If you do amend the translation, score out the old one with a thin line - don’t scribble it out - because then the examiners might take pity on you if it turned out to be right the first time after all. If you make a significant amendment (you find out that Brutus is actually doing what you thought Caesar was doing, for example) then you should reread the whole translation; check that the new interpretation isn’t just impossible from what Latin has come earlier, and check whether earlier parts make more sense under the new interpretation.
diff --git a/hugo/content/posts/2015-01-29-motivational-learning.md b/hugo/content/posts/2015-01-29-motivational-learning.md
new file mode 100644
index 0000000..6055f17
--- /dev/null
+++ b/hugo/content/posts/2015-01-29-motivational-learning.md
@@ -0,0 +1,28 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- uncategorized
+comments: true
+date: "2015-01-29T00:00:00Z"
+aliases:
+- /uncategorized/motivational-learning/
+- /motivational-learning/
+title: Motivational learning
+---
+
+*In which I am a wizard.*
+
+Sometimes as a student, the work piles up and I start to think "I'll never finish this". It becomes easy to think that there's no point in working because the work will never be over. When that happens to me, I imagine that my course is magic/alchemy/something with flashy special effects. I'm going through the Wizardry Academy, and I'll graduate able to manipulate the four elements. Even if I'm not the best in the year at it, I'm still able to *manipulate the elements*, and if I work at it, I'll be able to manipulate them better and in flashier ways - that's not something most people can do!
+
+I tend not to take this analogy very far. It's usually enough just for me to pretend I'm [Kvothe] for a moment, and I'm all motivated again. However, the trick kind of works for specific topics, too. At the moment, for instance, I need to know how to classify the [representations] of a group, per [Slate Star Codex's article][extreme mnemonics].
+
+An arcanist who is working with minerals needs to know lots of properties of those minerals, and is greatly advantaged by performing certain rituals to divine the Affinities of a metal. As you know, metals are nothing more nor less than a physical embodiment of a collection of Aspects, and you get a different kind of metal for each Aspect that has gone into its construction. All metals have an Affinity with Nothing - that's just standard Elemental Theory. Metals only have a certain number of Affinities, too, and it turns out to be a fact that each Affinity corresponds exactly with a purity band of the metal, and you can see which purity band goes with an Affinity if you look at the Affinity through a Tracer. (On that note, recall from the first Alchemy course you ever took that there is a ritual we can perform to extract a particular Aspect already present in a metal. Purity bands are what we call the product of that ritual, and represent a distilled Aspect which is still related to the original metal.)
+
+A mineral is an algebraic structure; a metal, a finite group. An Aspect is a group element, and so if we have different generators for the group, we get a different group. An Affinity of a group is a complex irreducible representation. All finite groups have the trivial representation, as is standard Representation Theory. Finite groups only have a certain number of irreducible complex representations, and they are in bijection with the conjugacy classes of the group. (If you apply the trace operator to a representation, you obtain a character.) From any first course in group theory, we can extract the conjugacy class of an element of a group, and it is those conjugacy classes which are in bijection with the characters.
+
+It's paraphrased a bit, and my notation is a bit sloppy, but it certainly sounds more interesting than representation theory.
+
+[Kvothe]: https://en.wikipedia.org/wiki/The_Kingkiller_Chronicle
+[representations]: https://en.wikipedia.org/wiki/Group_representation
+[extreme mnemonics]: https://slatestarcodex.com/2013/08/14/extreme-mnemonics/
diff --git a/hugo/content/posts/2015-08-19-awodey.md b/hugo/content/posts/2015-08-19-awodey.md
new file mode 100644
index 0000000..227dc4e
--- /dev/null
+++ b/hugo/content/posts/2015-08-19-awodey.md
@@ -0,0 +1,14 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+title: Sequence on Awodey's Category Theory
+author: patrick
+layout: page
+date: "2015-08-19T00:00:00Z"
+comments: true
+---
+
+In the summer of 2015, I worked through Awodey's [Category Theory][book], and I produced [a large collection of posts][posts] as I tried to understand its contents.
+These posts are probably not of much interest to anyone who is just looking for something to read, so they're siloed off.
+
+[book]: https://global.oup.com/ukhe/product/category-theory-9780199237180
+[posts]: /awodey
diff --git a/hugo/content/posts/2015-08-21-proof-by-contradiction.md b/hugo/content/posts/2015-08-21-proof-by-contradiction.md
new file mode 100644
index 0000000..3c9d7ed
--- /dev/null
+++ b/hugo/content/posts/2015-08-21-proof-by-contradiction.md
@@ -0,0 +1,33 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2015-08-21T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/proof-by-contradiction/
+- /proof-by-contradiction/
+title: Proof by contradiction
+summary: Here I explain proof by contradiction so that anyone who has ever done a sudoku and seen algebra may understand it.
+---
+
+Here I explain proof by contradiction so that anyone who has ever done a [sudoku] and seen algebra may understand it.
+
+Imagine you are doing a sudoku, and you have narrowed a particular cell down to being either a 1 or a 3. You're not sure which it is, so you do the "guess and see" approach: you guess it's a 1. That forces this other cell to be an 8, this one to be a 5, and then - oh no! That one over there has to be a 7, but there's already a 7 in its row! That means we have to backtrack: our first guess of 1 was wrong, so it has to be a 3 after all.
+
+That was a proof by contradiction that the cell was a 3.
+
+Now I present the standard proof that \\(\sqrt{2}\\) is not [expressible as a fraction][rational] \\(\frac{p}{q}\\) where \\(p, q\\) are whole numbers.
+
+Analogy: "the cell was a 1" corresponds to "\\(\sqrt{2}\\) is fraction-expressible". "The cell was a 3" corresponds to "\\(\sqrt{2}\\) is not fraction-expressible".
+
+Suppose \\(\sqrt{2}\\) were fraction-expressible. Then we could write it explicitly as \\(\sqrt{2} = \frac{p}{q}\\), and we can insist that \\(q > 0\\): if it's negative, we can move the negative up to the \\(p\\). If we clear denominators, we get \\(q \sqrt{2} = p\\); then square both sides, to get \\(2 q^2 = p^2\\).
+
+But now think about how many times 2 divides the left-hand side and the right-hand side. 2 divides a square an even number of times, if it divides it at all (because the prime factors of \\(q^2\\) are exactly those of \\(q\\), each appearing twice). So 2 divides \\(q^2\\) an even number of times, and hence divides the left-hand side \\(2 q^2\\) an odd number of times; but it divides the right-hand side \\(p^2\\) an even number of times. Since the two sides are equal, the number of times 2 divides them is both odd and even. No number is both odd and even!
+
+We've done the equivalent of finding a 7 appearing twice in a single row. We have to backtrack and conclude that the starting cell was a 3 after all: \\(\sqrt{2}\\) is not fraction-expressible.
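
The factor-counting step can be checked mechanically for small numbers; a brief Python sketch:

```python
def v2(n):
    # The number of times 2 divides n, for n >= 1.
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

# 2 q^2 always contains an odd number of factors of 2, while p^2 always
# contains an even number - so 2 q^2 = p^2 has no solutions.
assert all(v2(2 * q * q) % 2 == 1 for q in range(1, 1000))
assert all(v2(p * p) % 2 == 0 for p in range(1, 1000))
```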
+
+[sudoku]: https://en.wikipedia.org/wiki/Sudoku
+[rational]: https://en.wikipedia.org/wiki/Rational_number
diff --git a/hugo/content/posts/2015-09-25-lottery-odds.md b/hugo/content/posts/2015-09-25-lottery-odds.md
new file mode 100644
index 0000000..479b14a
--- /dev/null
+++ b/hugo/content/posts/2015-09-25-lottery-odds.md
@@ -0,0 +1,36 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2015-09-25T00:00:00Z"
+redirect-from:
+- /mathematical_summary/lottery-odds/
+- /lottery-odds/
+title: Lottery odds
+summary:
+ It has been proposed to me that if one is to play the National Lottery, one should be sure to select one's own numbers instead of allowing the machine to select them for you. This is not an optimal strategy.
+---
+
+It has been proposed to me that if one is to play the National Lottery, one should be sure to select one's own numbers instead of allowing the machine to select them for you.
+
+To summarise and slightly simplify the Lottery: at some point during the week, the entrant picks six distinct numbers from 1 to 49 inclusive, and buys a ticket with those numbers on. There is also the option to let the ticket vending machine choose numbers at random, instead of having you choose them. Then on Wednesday evening, six numbers are selected from 1 to 49 on live TV by a process which is as near to true random as we can get while still retaining drama. If all six of your numbers match all six of the prize numbers, you win a prize. (In the actual game, there are also smaller prizes for matching fewer numbers, and so on.)
+
+The argument goes as follows: if you let the vending machine decide your numbers, you have the square of the probability of winning. (That is, a much smaller chance.) Indeed, in order to win, the vending machine first needs to select the winning numbers, and then the TV machine also does.
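+
+For concreteness, here are the two numbers being conflated, computed with Python's standard library (the single-ticket odds are one in \\(\binom{49}{6}\\)):
+
+```python
+from math import comb
+
+tickets = comb(49, 6)  # ways to choose 6 distinct numbers from 49
+print(tickets)         # 13983816
+p = 1 / tickets        # probability that one fixed ticket wins
+print(p * p)           # the far tinier squared probability the bogus argument assigns
+```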
+
+This is, of course, a confusion of the probability of A given B, and the probability of A and B. What was calculated was the probability that the vending machine picks the six given numbers and the TV picks the six given numbers. What is actually required is the probability that the TV picks the six given numbers given that the vending machine also did.
+
+By the way, "A and B" is definitely distinct from "A given B": in a population, the probability that a person is both Albert Einstein and a man is rather low, but the probability that a given person is a man given that they are Albert Einstein is 1.
+
+The first way to make the lottery more intuitive is to note that we could have conducted the lottery so that we already drew the TV's winning numbers, in secret, before you bought your ticket. Only on buying it do you find out whether you've won or not. Now you are simply trying to match six specific numbers by buying your ticket (although you do not know what they are in advance, you do know they are fixed), and the vending machine can guess exactly as well as you can. By analogy: the TV person flips a coin, and then tells you that you will win if you can guess what the outcome of the coin flip was. It's obvious that you'll win half the time if you pick heads, and half the time if you pick tails, and you won't do any better than the vending machine whichever way you guess. Now, instead, let's say that you pick your heads/tails option first, and then the TV person flips the coin. Nothing has changed except the order in which we do things, and the machine will still do just as well as you. (The analogy, of course, is that selecting the six numbers you want to win is the same as selecting the heads/tails option you want to win.)
+
+That is, the bogus argument of the third paragraph is not time-independent. If you simply shuffle some of the stages of the lottery around, even though this should have no effect on the outcome, the bogus argument says the outcome should be different.
+
+The second way: let's say I'm in competition with you to win the most money on the lottery. I'm going to pick the "vending machine" option. You claim I'm thirteen million times less likely to win when the vending machine has picked my numbers, so you surely won't object if I change the lottery slightly so that if I choose the "vending machine" option, it picks two sets of six numbers and enters me for them both simultaneously. That doubles my winning chance, but it's still a damn sight worse than the penalty of thirteen million times I incurred by picking the "vending machine" option. You likewise won't mind if I change the lottery so that the "vending machine" option picks ten sets of numbers. A hundred. Thirteen million, which brings me into parity with you: according to you, we're now equally likely to win. But wait - now the machine has picked every combination. I win if any combination wins! And I'm still… just as likely as you to win? Come back to me when you're winning every time and we can rethink.
+
+The third way to make the distinction more intuitive is to make everything much smaller. Let's say I just need to pick one number, and the TV picks one number, each out of 3 instead of 49. Now, the cases in which I win are precisely {(1,1), (2,2), (3,3)}, where (a,b) means "I picked number a, and the TV picked number b". The cases in which I do not win are precisely {(1,2), (1,3), (2,1), (2,3), (3,1), (3,2)}. All of these are equally likely - (1,1) is exactly as likely as (1,2), because if I sneakily relabelled the TV's lottery balls by swapping 1 for 2 then that should have no effect on the outcomes - so my chance of winning is 3/9, or 1/3. This is independent of the means I used to pick my choice, because there is exactly one winning outcome for each of my possible choices. The situation is completely symmetrical: relabelling all the choices doesn't change anything. If it helps, we could think of the option "let the vending machine decide" as "I choose the number 1. Now I let the vending machine apply some scrambling operation I don't know, and it will spit out the number I'll end up using." This doesn't change any of the probabilities, because the statement of the problem is completely independent of what labels appear on the choices (as long as they're all different).
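+
+That small case is easy to enumerate exhaustively (a sketch; `scramble` stands in for whatever relabelling the vending machine might apply):
+
+```python
+from itertools import product
+
+outcomes = list(product([1, 2, 3], repeat=2))  # (my pick, TV pick), all equally likely
+wins = [(a, b) for (a, b) in outcomes if a == b]
+print(len(wins), len(outcomes))                # 3 of 9: probability 1/3
+
+# Each of my possible choices has exactly one winning outcome...
+for mine in [1, 2, 3]:
+    assert sum(1 for tv in [1, 2, 3] if tv == mine) == 1
+
+# ...and scrambling my choice first changes nothing: still 3 wins out of 9.
+scramble = {1: 2, 2: 3, 3: 1}                  # an arbitrary relabelling
+assert sum(1 for (a, b) in outcomes if scramble[a] == b) == 3
+```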
+
+I fear that my third way might require more maths than most people have - the idea of symmetry isn't exactly common.
+
+Anyway, everyone should agree that the lottery is a bad investment if your intention is only to gain money out of it. (Aside from anything else, if you stood to gain anything from playing the lottery, then by symmetry so must everyone else, so the lottery itself must stand to lose. There's simply nowhere else the gain could come from. The lottery would be closed down immediately if it made a loss.)
diff --git a/hugo/content/posts/2015-11-12-eilenberg-moore.md b/hugo/content/posts/2015-11-12-eilenberg-moore.md
new file mode 100644
index 0000000..e61637c
--- /dev/null
+++ b/hugo/content/posts/2015-11-12-eilenberg-moore.md
@@ -0,0 +1,50 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- awodey
+comments: true
+date: "2015-11-12T00:00:00Z"
+math: true
+aliases:
+- /categorytheory/eilenberg-moore/
+- /eilenberg-moore/
+title: Eilenberg-Moore
+summary: As an exercise in understanding the definitions involved, I find the Eilenberg-Moore category of a certain functor.
+---
+
+During my attempts to understand the fearsomely difficult Part III course "[Introduction to Category Theory][course]" by PT Johnstone, I came across the monadicity of the power-set functor \\(\mathbf{Sets} \to \mathbf{Sets}\\). The monad is given by the triple \\((\mathbb{P}, \eta_A: A \to \mathbb{P}(A), \mu_A: \mathbb{PP}(A) \to \mathbb{P}(A))\\), where \\(\eta_A: a \mapsto \{ a \}\\), and \\(\mu_A\\) is the union operator. So \\(\mu_A(\{ \{1, 2 \}, \{3\} \}) = \{1,2,3 \}\\).
+
+It's easy enough to check that this is a monad. We have a theorem saying that every monad has an associated "[Eilenberg-Moore]" category - the category of algebras over that monad. What, then, is the E-M category for this monad?
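+
+That check can even be done by brute force on a small set, representing subsets as `frozenset`s (a sketch; all the names are mine):
+
+```python
+from itertools import chain, combinations
+
+def powerset(s):
+    """All subsets of s, as frozensets."""
+    s = list(s)
+    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]
+
+def eta(a):
+    """The unit: a maps to {a}."""
+    return frozenset([a])
+
+def mu(sets):
+    """The multiplication: union of a set of sets."""
+    return frozenset(chain.from_iterable(sets))
+
+A = frozenset([1, 2])
+PA = powerset(A)
+PPPA = powerset(powerset(PA))
+
+# Unit laws: mu . eta and mu . P(eta) are both the identity on P(A).
+for X in PA:
+    assert mu(eta(X)) == X
+    assert mu(frozenset(eta(a) for a in X)) == X
+
+# Associativity: mu . mu = mu . P(mu), checked on all of P(P(P(A))).
+for Y in PPPA:
+    assert mu(mu(Y)) == mu(frozenset(mu(Z) for Z in Y))
+```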
+
+Recall: an algebra over the monad is a pair \\((A, \alpha)\\) where \\(A\\) is a set and \\(\alpha: \mathbb{P}(A) \to A\\), such that the following two diagrams commute. (That is, \\(\alpha\\) here is an operation which takes a collection of elements of \\(A\\), and returns an element of \\(A\\).)
+
+![Power-set monad algebra diagram][PowersetMonad]
+
+Aha! The second diagram says that the operation \\(\alpha\\) is "massively associative": however we group up terms and successively apply \\(\alpha\\) to them, we'll come up with the same answer. Mathematica calls this attribute "[Flat]"ness, when applied to finite sets only.
+
+Moreover, it doesn't matter what order we feed the elements in to \\(\alpha\\), since it works only on sets and not on ordered sets. So \\(\alpha\\) is effectively commutative. (Mathematica calls this "[Orderless]".)
+
+The first diagram says that \\(\alpha\\) applied to a singleton is just the contained element. Mathematica calls this attribute "[OneIdentity]".
+
+Finally, \\(\alpha(\{a, a\}) = \alpha(\{a\})\\), because \\(\alpha\\) is implemented by looking at a set of inputs, and \\(\{a, a\}\\) is the same set as \\(\{a\}\\).
+
+So what is an algebra over this monad? It's a set equipped with an infinitarily-Flat, OneIdentity, commutative operation which ignores repeated arguments. If we forgot that "repeated arguments" requirement, we could use any finite set with any commutative monoid structure; the nonnegative reals together with infinity, under addition; and so on. However, this way we're reduced to monoids whose operation satisfies \\(a+a = a\\). That's not many monoids.
+
+What operations do work this way? The [Flatten]-followed-by-[Sort] operation in Mathematica obeys this, if the underlying set \\(A\\) is a power-set of a well-ordered set. The union operation also works, if the underlying set is a complete poset - so the power-set example is subsumed in that.
+
+Have we by some miracle got every algebra? If we have an arbitrary algebra \\((A, \alpha)\\), we want to define a complete poset which has \\(\alpha\\) acting as the union. So we need some ordering on \\(A\\); and if \\(x \leq y\\), we need \\(\alpha(\{x, y\}) = y\\). That looks like a fair enough definition to me. It turns out that this definition just works.
+
+So the Eilenberg-Moore category of the covariant power-set functor is just the category of complete posets.
+
+(Subsequently, I looked up the definition of "complete poset", and it turns out I mean "complete lattice". I've already identified the need for unions of all sets to exist, so this is just a terminology issue. A complete poset only has sups of directed sequences. A complete lattice has all sups.)
+
+
+[course]: /archive/2015IntroToCategoryTheory.pdf
+[Eilenberg-Moore]: https://ncatlab.org/nlab/show/Eilenberg-Moore+category
+[PowersetMonad]: {{< baseurl >}}images/CategoryTheorySketches/PowersetMonad.jpg
+[OneIdentity]: https://reference.wolfram.com/language/ref/OneIdentity.html
+[Orderless]: https://reference.wolfram.com/language/ref/Orderless.html
+[Flat]: https://reference.wolfram.com/language/ref/Flat.html
+[Flatten]: https://reference.wolfram.com/language/ref/Flatten.html
+[Sort]: https://reference.wolfram.com/language/ref/Sort.html
diff --git a/hugo/content/posts/2015-11-28-my-first-forcing.md b/hugo/content/posts/2015-11-28-my-first-forcing.md
new file mode 100644
index 0000000..227edd5
--- /dev/null
+++ b/hugo/content/posts/2015-11-28-my-first-forcing.md
@@ -0,0 +1,61 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2015-11-28T00:00:00Z"
+math: true
+aliases:
+- /mathematical_summary/my-first-forcing/
+- /my-first-forcing/
+title: My First Forcing
+summary:
+ In the Part III Topics in Set Theory course, we have used forcing to show the consistency of the Continuum Hypothesis, and we are about to show the consistency of its negation. I don't really grok forcing at the moment, so I thought I would go through an example.
+---
+
+In the Part III Topics in Set Theory course, we have used [forcing] to show the consistency of the [Continuum Hypothesis][CH], and we are about to show the consistency of its negation. I don't really grok forcing at the moment, so I thought I would go through an example.
+
+A forcing is just a [quasiorder], so I'll pick a nice one: \\(\mathbb{N}\\), with the usual order. Let's go through some terminology: condition \\(p \in \mathbb{N}\\) is stronger than condition \\(q \in \mathbb{N}\\) (according to my course's convention) iff \\(q \leq p\\). All conditions are compatible, because for every pair of conditions there is a condition stronger than both of them.
+
+The dense subsets of this forcing are precisely the unbounded ones: that is, the infinite ones.
+
+The directed subsets are precisely all subsets, because there is always a natural greater than or equal to any two specified naturals. The downward-closed subsets are the initial segments.
+
+The generic set existence theorem is in this case satisfied trivially by \\(G = \mathbb{N}\\), which is generic relative to any collection of dense subsets, and which contains any specified element.
+
+The sets which are \\(\mathbb{P}\\)-generic over \\(\mathbb{M}\\) (any model which contains \\(\mathbb{N}\\)) are those initial segments of \\(\mathbb{N}\\) which intersect every dense set: that is, the only \\(\mathbb{P}\\)-generic set over \\(\mathbb{M}\\) is \\(\mathbb{N}\\) itself.
+
+\\(\mathbb{P}\\) is not separative, because it's total, so every pair of elements is compatible. That means our forcing isn't guaranteed to add any elements. Let's plough on anyway.
+
+What are the \\(\mathbb{P}\\)-names of rank \\(0\\)? The empty set is the only such name.
+
+What are the \\(\mathbb{P}\\)-names of rank \\(1\\)? They are all of the form \\(\tau = \{ (n_i, \sigma_i) : n_i \in \mathbb{N}, \sigma_i = \emptyset, i < i_0 \in \text{Ord} \}\\): that is, \\(\{ (n_i, \emptyset): n_i \in \mathbb{N} \}\\). Hence the \\(\mathbb{P}\\)-names of rank \\(1\\) are in one-to-one correspondence with the subsets of \\(\mathbb{N}\\), and subset \\(N\\) is taken to \\(\{ (n, \emptyset) : n \in N \}\\).
+
+What are the \\(\mathbb{P}\\)-names of rank \\(2\\)? They are of the form \\(\tau = \{ (n_i, \sigma_i): n_i \in \mathbb{N}, (\sigma_i = \emptyset) \vee (\sigma_i = N \subseteq \mathbb{N}) \}\\), where I'm abusing notation and identifying the subset of \\(\mathbb{N}\\) with its corresponding \\(\mathbb{P}\\)-name of rank \\(1\\). (This isn't a horrible abuse, because \\(\emptyset\\) means the same thing in the two contexts.) That is, it's basically an arbitrary relation between naturals and subsets of naturals.
+
+The ones of rank \\(3\\), after some mental gymnastics, turn out effectively to be arbitrary relations between pairs of naturals and subsets of naturals; and those of rank \\(n\\) are arbitrary relations between \\((n-1)\\)-tuples of naturals and subsets of naturals.
+
+The ones of rank \\(\omega\\) look like relations between \\(\omega\\)-indexed tuples of naturals and subsets of naturals, and so on. I'm willing to proceed on the assumption that they are.
+
+On to the interpretation. We can interpret with respect to any set \\(G \subseteq \mathbb{N}\\), although most of our theorems only really talk about when \\(G\\) is \\(\mathbb{P}\\)-generic: that is, when it is \\(\mathbb{N}\\) itself.
+
+The interpretation of anything of rank \\(0\\) is, of course, the empty set. If we take anything of rank \\(1\\) - that is, a subset of the naturals - its interpretation is either the empty set (if \\(G\\) doesn't intersect the subset) or the set containing the empty set (if \\(G\\) does intersect the subset).
+
+Let \\(\sim\\) be a relation between the naturals and subsets of the naturals: that is, a name of rank \\(2\\). Then the interpretation is \\(\{ \sigma_G: (\exists p \in G: p \sim \sigma) \}\\). That is, for everything in \\(G\\), take everything it twiddles, and interpret that (producing the empty set if \\(G\\) doesn't intersect the twiddled thing, or \\(\{ \emptyset \}\\) if it does). Hence we produce the empty set if nothing in \\(G\\) twiddles anything; we get \\(\{ \emptyset \}\\) if everything in \\(G\\) only twiddles things which intersect \\(G\\); and \\(\{ \{ \emptyset \}, \emptyset \}\\) if {something in \\(G\\) twiddles something which intersects \\(G\\), and something in \\(G\\) twiddles something which is disjoint from \\(G\\)}.
+
+Repeating, it looks like we're building the ordinals, and with the right choice of \\(\mathbb{P}\\)-name, we'll get every ordinal for most choices of \\(G\\) (including the only generic one, \\(\mathbb{N}\\)).
+
+I'm struggling to think why the entire class of ordinals isn't in this extension. If we started from a countable transitive model, there's a theorem which says that not only have we gained no new ordinals, but we still remain countable. So perhaps we've only actually generated the ordinals up to the Hartogs ordinal of the CTM (that is, \\(\omega_1\\)).
+
+Let's move into \\(\mathbb{M}\\). As far as \\(\mathbb{M}\\) is concerned, we've just verified the existence of the von Neumann hierarchy (that is, we can show that every subset of every ordinal is present as an interpretation), so our forcing hasn't added anything at all. Aha, I've got it! Every \\(\mathbb{P}\\)-name lives in \\(\mathbb{M}\\), and so there are only countably many of those, but \\(\mathbb{M}\\) thinks that lots of those \\(\mathbb{P}\\)-names are different, though they are actually (from our outside, \\(V\\), perspective) the same. \\(\mathbb{M}\\) doesn't have enough power to show they're the same. Therefore, from \\(\mathbb{M}\\)'s point of view, every ordinal really does exist. The previous paragraph was all backwards: our interpretations contain every ordinal because \\(\mathbb{M}\\) thinks every ordinal is represented among the \\(\mathbb{P}\\)-names, even though to us, armed with the super-strong "large cardinal axiom" that the CTM isn't everything, there are only countably many of those names.
+
+Are there indeed countably many of those names, to us in \\(V\\)? There must be, because they all live inside the countable model \\(\mathbb{M}\\). Indeed, if we go up to rank \\(\alpha = \omega_1\\), we are attempting to talk about \\(V\\)-uncountable families of elements drawn from this countable model, so actually there aren't any \\(\mathbb{P}\\)-names of rank \\(\omega_1\\).
+
+OK. The above all goes to show that if we force our CTM by \\(\mathbb{N}\\), we don't get anything new. (And this doesn't contradict our theorem that if \\(\mathbb{P}\\) is separative, then we do get something new, because \\(\mathbb{N}\\) is not separative.) Hooray! I feel like I've just cast my first spell with a shiny new magic wand, examined what the spell did, and discovered that it did nothing more than check that magic was still working today.
+
+Next time, I'll try a separative forcing, so I'm guaranteed something new.
+
+[forcing]: https://en.wikipedia.org/wiki/Forcing_(mathematics)
+[CH]: https://en.wikipedia.org/wiki/Continuum_hypothesis
+[quasiorder]: https://en.wikipedia.org/wiki/Preorder
diff --git a/hugo/content/posts/2015-12-24-general-adjoint-functor-theorem.md b/hugo/content/posts/2015-12-24-general-adjoint-functor-theorem.md
new file mode 100644
index 0000000..d4101d8
--- /dev/null
+++ b/hugo/content/posts/2015-12-24-general-adjoint-functor-theorem.md
@@ -0,0 +1,20 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- categorytheory
+comments: true
+date: "2015-12-24T00:00:00Z"
+aliases:
+- /categorytheory/general-adjoint-functor-theorem/
+- /general-adjoint-functor-theorem/
+title: General Adjoint Functor Theorem
+---
+
+Just a post to draw attention to [my new article][article] about the [General Adjoint Functor Theorem][GAFT].
+It's a motivation of the GAFT and its proof.
+I've never seen it motivated in this way, and it's actually quite a natural theorem.
+I haven't managed to motivate the Special Adjoint Functor Theorem at all, although I'm told that it's natural if you know the Stone-Čech compactification.
+
+[article]: /misc/AdjointFunctorTheorems/AdjointFunctorTheorems.pdf
+[GAFT]: https://ncatlab.org/nlab/show/adjoint+functor+theorem
diff --git a/hugo/content/posts/2015-12-31-monadicity-theorems.md b/hugo/content/posts/2015-12-31-monadicity-theorems.md
new file mode 100644
index 0000000..f10959c
--- /dev/null
+++ b/hugo/content/posts/2015-12-31-monadicity-theorems.md
@@ -0,0 +1,16 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- categorytheory
+comments: true
+date: "2015-12-31T00:00:00Z"
+aliases:
+- /categorytheory/monadicity-theorems/
+- /monadicity-theorems/
+title: Monadicity Theorems
+---
+
+Another short post to highlight the existence of [an article about the Monadicity Theorems][mts], in which I prove one direction of both the Crude and Precise versions. Comments and corrections would be very much appreciated, because there is an awful lot of work involved in proving those theorems. It would be good to know of any parts where the argument is unclear, unmotivated, too long-winded, or wrong.
+
+[mts]: /misc/MonadicityTheorems/MonadicityTheorems.pdf
diff --git a/hugo/content/posts/2016-01-01-multiplicative-determinant.md b/hugo/content/posts/2016-01-01-multiplicative-determinant.md
new file mode 100644
index 0000000..f73de87
--- /dev/null
+++ b/hugo/content/posts/2016-01-01-multiplicative-determinant.md
@@ -0,0 +1,20 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-01-01T00:00:00Z"
+aliases:
+- /mathematical_summary/multiplicative-determinant/
+- /multiplicative-determinant/
+title: Multiplicative determinant
+---
+
+I'm clearing out my desktop again, and found [this document on the multiplicativity of the
+determinant][doc], which I wrote in 2014. It might as well be up here.
+
+I should note that this document contains no motivation of any kind. It is simply an
+exercise in symbol-shunting, and it has no clever ideas in it.
+
+[doc]: /misc/MultiplicativeDetProof/MultiplicativeDetProof.pdf
diff --git a/hugo/content/posts/2016-01-26-representable-functors.md b/hugo/content/posts/2016-01-26-representable-functors.md
new file mode 100644
index 0000000..0c08ba6
--- /dev/null
+++ b/hugo/content/posts/2016-01-26-representable-functors.md
@@ -0,0 +1,17 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- categorytheory
+comments: true
+date: "2016-01-26T00:00:00Z"
+aliases:
+- /categorytheory/representable-functors/
+- /representable-functors/
+title: Representable functors
+---
+
+Just a post to draw attention to [my new article][article] about representable functors and their links to adjoint functors.
+It's very short, but it gives a reason for being interested in representable functors: they are basically "those with left adjoints", up to minor quibbles.
+
+[article]: /misc/RepresentableFunctors/RepresentableFunctors.pdf
diff --git a/hugo/content/posts/2016-02-05-friedberg-muchnik-theorem.md b/hugo/content/posts/2016-02-05-friedberg-muchnik-theorem.md
new file mode 100644
index 0000000..78aeecb
--- /dev/null
+++ b/hugo/content/posts/2016-02-05-friedberg-muchnik-theorem.md
@@ -0,0 +1,17 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-02-05T00:00:00Z"
+aliases:
+- /mathematical_summary/friedberg-muchnik-theorem/
+- /friedberg-muchnik-theorem/
+title: Friedberg-Muchnik theorem
+---
+
+Another short post to point out [my new article on the Friedberg-Muchnik theorem][FM], a theorem from computability theory. It uses what is known officially as a finite injury priority method, and the proof is cribbed entirely from [Dr Thomas Forster][tf].
+
+[FM]: /misc/FriedbergMuchnik/FriedbergMuchnik.pdf
+[tf]: https://www.dpmms.cam.ac.uk/~tf/
diff --git a/hugo/content/posts/2016-03-03-a-certain-limit.md b/hugo/content/posts/2016-03-03-a-certain-limit.md
new file mode 100644
index 0000000..977ac75
--- /dev/null
+++ b/hugo/content/posts/2016-03-03-a-certain-limit.md
@@ -0,0 +1,67 @@
+---
+lastmod: "2021-10-25T23:24:01.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+math: true
+date: "2016-03-03T00:00:00Z"
+title: Why do we get complex numbers in a certain expression?
+summary: Answering the question, "Why does a continued fraction containing only 1, subtraction, and division result in one of two complex numbers?".
+---
+
+*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/1681993/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+So we all know that the continued fraction containing all 1s...
+
+$$
+x = 1 + \frac{1}{1 + \frac{1}{1 + \ldots}}
+$$
+
+yields the golden ratio \\(x = \phi\\), which can easily be proven by rewriting it as \\(x = 1 + \dfrac{1}{x}\\), solving the resulting quadratic equation and assuming that a continued fraction that only contains additions will give a positive number.
+
+Now, a friend asked me what would happen if we replaced all additions with subtractions:
+
+$$
+x = 1 - \frac{1}{1 - \frac{1}{1 - \ldots}}
+$$
+
+I thought "oh cool, I know how to solve this...":
+
+$$
+x = 1 - \frac{1}{x}
+$$
+
+$$
+x^2 - x + 1 = 0
+$$
+
+And voila, I get...
+
+$$ x \in \{e^{i\pi/3}, e^{-i\pi/3} \} $$
+
+Ummm... why does a continued fraction containing only 1s, subtraction and division result in one of two complex (as opposed to real) numbers?
+
+(I have a feeling this is something like the \\(\sum_i (-1)^i\\) thing, that the infinite continued fraction isn't well-defined unless we can express it as the limit of a converging series, because the truncated fractions \\(1 - \frac{1}{1-1}\\) etc. aren't well-defined, but I thought I'd ask for a well-founded answer. Even if this is the case, do the two complex numbers have any "meaning"?)
+
+# Answer
+
+You're attempting to take a limit.
+
+$$
+x_{n+1} = 1-\frac{1}{x_n}
+$$
+
+This recurrence actually never converges, from any real starting point.
+Indeed, \\(x_2 = 1-\frac{1}{x_1}; \\ x_3 = 1-\frac{1}{1-1/x_1} = 1-\frac{x_1}{x_1-1} = \frac{1}{1-x_1}; \\ x_4 = x_1\\)
+
+So the sequence is periodic with period 3.
+Therefore it converges if and only if it is constant; but the only way it could be constant is, as you say, if \\(x_1\\) is one of the two complex numbers you found.
+
+Therefore, what you have is actually basically a proof by contradiction that the sequence doesn't converge when you consider it over the reals.
+
+However, you have found exactly the two values for which the iteration does converge; that is their significance.
+
+Alternatively viewed, the map \\(z \mapsto 1-\frac{1}{z}\\) is a certain transformation of the complex plane, which has precisely two fixed points. You might find it an interesting exercise to work out what that map does to the complex plane, and examine in particular what it does to points on the real line.
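+
+A quick numerical check of the period-3 claim and the two fixed points (a sketch):
+
+```python
+import cmath
+
+def f(z):
+    return 1 - 1 / z
+
+# Period 3 from a generic real start: f(f(f(x))) returns to x.
+x = 0.7
+assert abs(f(f(f(x))) - x) < 1e-12
+
+# The fixed points are exactly e^{i*pi/3} and e^{-i*pi/3}.
+for z in (cmath.exp(1j * cmath.pi / 3), cmath.exp(-1j * cmath.pi / 3)):
+    assert abs(f(z) - z) < 1e-12
+```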
diff --git a/hugo/content/posts/2016-03-28-clojure-exercism.md b/hugo/content/posts/2016-03-28-clojure-exercism.md
new file mode 100644
index 0000000..7e964ee
--- /dev/null
+++ b/hugo/content/posts/2016-03-28-clojure-exercism.md
@@ -0,0 +1,173 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- programming
+comments: true
+date: "2016-03-28T00:00:00Z"
+aliases:
+- /clojure-exercism/
+title: Clojure and Exercism
+summary:
+ I've been trying to learn Clojure through Exercism, a programming exercises tool.
+ It took me an hour to get Hello, World! up and running, so I thought I'd document how it's done.
+ I'm using Leiningen on Mac OS 10.11.4.
+---
+
+I've been trying to learn [Clojure] (a LISP) through [Exercism], a programming exercises tool.
+It took me an hour to get Hello, World! up and running, so I thought I'd document how it's done.
+I'm using [Leiningen] on Mac OS 10.11.4.
+
+The [Installing Clojure page] on Exercism details how to install Leiningen; that part is easy.
+Installing `exercism` is likewise easy, so we run `exercism fetch clojure hello-world`.
+
+And then we enter a world of pain.
+
+`exercism` downloads a project structure:
+
+ hello-world/
+ -- project.clj
+ -- README.md
+ -- test/
+ -- hello_world_test.clj
+
+The README helpfully tells us what Hello, World! is, and a specification for the answer.
+How are we to come up with our answer?
+`lein` gives access to a REPL we can use to write an answer, but there's no indication of
+where to put our files so that `lein` can see them.
+
+Let's run `lein test` to see what `lein` complains about.
+
+ Exception in thread "main" java.io.FileNotFoundException:
+ Could not locate hello_world__init.class or hello_world.clj on classpath.
+ Please check that namespaces with dashes use underscores in the Clojure file name.,
+ compiling:(hello_world_test.clj:1:1)
+
+Fine. It's looking for `hello_world.clj`. Let's make one!
+
+I've put the following in `hello-world/hello_world.clj`:
+
+ (defn hello
+ []
+ "Hello, World!"
+ [name]
+ (str "Hello, " name "!"))
+
+ (defn main- [& _] (println "Hello!"))
+
+`lein test` fails again, with the same error.
+
+Do we get any hints from the test file?
+It starts with a namespace declaration:
+
+ (ns hello-world-test
+ (:require [clojure.test :refer [deftest is]]
+ hello-world))
+
+We're going to want a `hello-world` namespace, so let's put that at the top of our `hello_world.clj`.
+
+ (ns hello-world)
+
+Still fails with the same error.
+OK, the thing that is telling `lein` what to do must be `project.clj`, and it turns out to contain the following:
+
+ (defproject hello-world "0.1.0-SNAPSHOT"
+ :description "hello-world exercise."
+ :url "https://github.com/exercism/xclojure/tree/master/exercises/hello-world"
+ :dependencies [[org.clojure/clojure "1.8.0"]])
+
+None of that tells `lein` where to look for the source file.
+If we make a new `lein` project somewhere, let's see what the project file is supposed to look like.
+
+Go to a temporary directory and use `lein new app newproj`.
+The source tree looks like:
+
+ newproj/
+ -- CHANGELOG.md
+ -- LICENSE.md
+ -- README.md
+ -- doc/
+ -- intro.md
+ -- project.clj
+ -- resources/
+ -- src/
+ -- newproj/
+ -- core.clj
+ -- test/
+ -- newproj/
+ -- core_test.clj
+
+And `project.clj` looks like:
+
+ (defproject newproj "0.1.0-SNAPSHOT"
+ :description "FIXME: write description"
+ :url "http://example.com/FIXME"
+ :license {:name "Eclipse Public License"
+ :url "http://www.eclipse.org/legal/epl-v10.html"}
+ :dependencies [[org.clojure/clojure "1.8.0"]]
+ :main ^:skip-aot newproj.core
+ :target-path "target/%s"
+ :profiles {:uberjar {:aot :all}})
+
+The only interesting thing there seems to be `:main ^:skip-aot newproj.core`.
+Let's try putting `:main ^:skip-aot hello-world` into our own `project.clj`.
+
+`lein test` continues to fail with the same error.
+Looking up `:skip-aot`, it just tells `lein` to skip Ahead-Of-Time compilation, which isn't what we want.
+
+With a heavy heart, then, let's restructure `hello-world` so it looks exactly like `newproj`:
+
+ hello-world/
+ -- README.md
+ -- project.clj
+ -- src/
+ -- hello_world/
+ -- hello_world.clj
+ -- test/
+ -- hello_world/
+ -- hello_world_test.clj
+
+Miraculous! We have a different error!
+
+ Exception in thread "main" java.io.FileNotFoundException:
+ Could not locate hello_world_test__init.class or hello_world_test.clj on classpath.
+
+I think this might be a back-step, because beforehand it was at least finding the test file.
+I get the same error if I navigate into the test folder and run `lein test`.
+And if we try `lein run`, we get the original error:
+
+ Exception in thread "main" java.io.FileNotFoundException:
+ Could not locate hello_world__init.class or hello_world.clj on classpath.
+
+From the [Leiningen documentation]:
+
+> The `src/my_stuff/core.clj` file corresponds to the `my-stuff.core` namespace.
+
+That would imply that our source file corresponds to the `hello-world.hello-world` namespace.
+Let's try flattening the structure a bit, returning `hello_world_test.clj` to where `lein` at least
+recognised it:
+
+    hello-world/
+    -- README.md
+    -- project.clj
+    -- src/
+       -- hello_world.clj
+    -- test/
+       -- hello_world_test.clj
+
+And it works! Woohoo!
+(Well, the tests fail, but that's because I'm new to Clojure and missed out a bunch of parentheses.)
+
+The final contents of `src/hello_world.clj`, causing the tests to pass, were:
+
+    (ns hello-world)
+
+    (defn hello
+      ([] "Hello, World!")
+      ([namevar] (str "Hello, " namevar "!")))
+
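+The namespace-to-path rule we kept tripping over (hyphens become underscores, dots become directory separators) can be summarised in a tiny helper. This is a hypothetical illustrative sketch, not anything `lein` exposes; it's just the mapping made explicit:

```python
def ns_to_path(ns, root="src"):
    """Map a Clojure namespace name to the source path where
    Leiningen expects to find it: dots become directory
    separators and hyphens become underscores."""
    return root + "/" + ns.replace("-", "_").replace(".", "/") + ".clj"

print(ns_to_path("my-stuff.core"))  # src/my_stuff/core.clj
print(ns_to_path("hello-world"))    # src/hello_world.clj
```

+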
+[Clojure]: https://clojure.org/
+[Exercism]: https://exercism.io/
+[Installing Clojure page]: https://exercism.io/languages/clojure
+[Leiningen]: https://leiningen.org
+[Leiningen documentation]: https://github.com/technomancy/leiningen/blob/stable/doc/TUTORIAL.md#creating-a-project
diff --git a/hugo/content/posts/2016-04-08-another-monty-hall-explanation.md b/hugo/content/posts/2016-04-08-another-monty-hall-explanation.md
new file mode 100644
index 0000000..beca258
--- /dev/null
+++ b/hugo/content/posts/2016-04-08-another-monty-hall-explanation.md
@@ -0,0 +1,52 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-04-08T00:00:00Z"
+aliases:
+- /another-monty-hall-explanation/
+title: Another Monty Hall explanation
+---
+
+Recall the [Monty Hall problem]: the host, Monty Hall, shows you three doors, named A, B and C.
+You are assured that behind one of the doors is a car, and behind the two others there is a goat each.
+You want the car.
+You pick a door, and Monty Hall opens one of the two doors you didn't pick that he knows contains a goat.
+He offers you the chance to switch guesses from the door you first picked to the one remaining door.
+Should you switch or stick?
+
+I'll slightly reframe the problem: let's pretend you are playing cooperatively with Monty Hall, where he
+knows the layouts and he is trying to open two goat-doors, and you're trying for the car; you're not allowed to communicate.
+The game is (noting the distinction between "picking" a door - i.e. announcing your intention to open it - and opening it):
+
+* You pick a door;
+* Monty Hall opens a door you didn't pick;
+* You open a door Monty Hall didn't just open;
+* Monty Hall opens the remaining door.
+
+(The problem is the same: in standard Monty Hall, you win if and only if you open the car door and Monty Hall opens two goat doors.
+Let's say Monty Hall really likes goats, and not inquire further.)
+
+You pick a door, B say. Monty Hall now opens a goat-door, C say,
+because he knows the layouts and can pick one with a goat behind.
+
+At this point, you know Monty Hall *decided not to open* door A.
+Why would he not have chosen door A?
+It's either because he chose randomly between his available goaty options A and C,
+or because he knew A had a car behind so he was choosing the only goat door available to him.
+(Remember, Monty Hall wants to find goats.)
+
+If he chose randomly, you're better off sticking, because that means you have the car.
+But if he *actively refused* door A (which can only happen because it had a car behind), that means you need to switch to door A.
+
+He chose randomly with probability 1/3 (because he chose randomly if, and only if, you originally picked the car).
+He actively refused door A with probability 2/3, therefore.
+
+So with 2/3 probability, you're in the case that means you guarantee yourself a car if you switch.
+With 1/3 probability, you're in the case that means you guarantee yourself a car if you stick.
+
+So you should switch.
+
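+The argument is easy to check empirically. A quick simulation sketch (one of many ways to write it) recovers the 2/3 figure:

```python
import random

def play(switch, trials=100_000):
    """Play many Monty Hall games; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # your initial pick
        # Monty opens a door that is neither your pick nor the car.
        # (Which goat door he opens doesn't change the win rate.)
        monty = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

# Switching wins about 2/3 of the time; sticking wins about 1/3.
print(play(switch=True), play(switch=False))
```

+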
+[Monty Hall problem]: {{< ref "2013-12-22-three-explanations-of-the-monty-hall-problem" >}}
diff --git a/hugo/content/posts/2016-04-13-independence-of-choice.md b/hugo/content/posts/2016-04-13-independence-of-choice.md
new file mode 100644
index 0000000..aee64d5
--- /dev/null
+++ b/hugo/content/posts/2016-04-13-independence-of-choice.md
@@ -0,0 +1,59 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-04-13T00:00:00Z"
+math: true
+aliases:
+- /independence-of-choice/
+title: Independence of the Axiom of Choice (for programmers)
+summary: So you've heard that the Axiom of Choice is magical and special and unprovable and independent of set theory, and you're here to work out what that means.
+---
+
+So you've heard that the Axiom of Choice is magical and special and unprovable and independent of set theory,
+and you're here to work out what that means.
+Let's not get too hung up on what the Axiom of Choice (or "AC") actually is, because you probably don't care.
+Let's instead discuss what it means for something to be "independent".
+
+Often I hear the layperson say things like "AC is unprovable".
+This is true in a sense, but it's misleading.
+
+Take an object \\(n\\) of the type "integer" - so \\(5\\), \\(-100\\), that kind of thing.
+Here is what I will call the Positivity Hypothesis (or "PH"):
+
+> \\(n\\) is (strictly) greater than \\(0\\).
+
+Of course, depending on how we chose \\(n\\), PH might be true or it might be false, although it can't be both.
+So, while maths might let us prove which of PH or not-PH holds for our given \\(n\\),
+maths will emphatically not let us prove that PH is always true, and it will not let us prove that PH is always false.
+(Maths would be stupid if it did that, because PH is neither always true nor always false.
+The integers \\(5\\) and \\(-100\\) witness that PH can be true and can be false respectively.)
+
+Therefore PH is independent of integer theory.
+It's not magic: there is no god-given reason why PH mysteriously resists all efforts to prove it.
+It's simply not always true, but it's not always false either.
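+
+In code, the situation is trivial (a sketch; `ph` is just the predicate above):

```python
def ph(n):
    """The Positivity Hypothesis for a particular integer n."""
    return n > 0

# PH holds for some objects of type "integer" and fails for others:
# that is all "PH is independent of integer theory" amounts to.
print(ph(5), ph(-100))  # True False
```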
+
+Let's go back to the Axiom of Choice.
+
+The usual system of set theory (which is used as a foundation for all of maths) is a collection of nine axioms,
+together comprising what is known as ZF.
+(If we add Choice to that collection as a tenth axiom, we obtain the set theory called ZFC.)
+In the "integers" analogy above, "the integer type" plays the role of ZF.
+
+Now, just as we may pick an object of type "integer", we may pick a set-theory of type "ZF".
+A "set theory of type ZF" is my informal phrasing for what is usually called "a model of ZF".
+(I'm eliding the question of the consistency of ZF, and I'll just assume it's consistent.)
+In the "integers" analogy, the number \\(5\\) plays the role of one of these set theories,
+as does the number \\(-100\\).
+We can ask of this set theory whether it obeys AC (for which we substituted PH in the analogy).
+
+And it turns out that for some models of set theory, AC holds, and for some models, it doesn't.
+It's quite hard to describe models of set theory, because set theory supports so much complexity;
+the integers are much easier to specify.
+However, if you want the names of two models: in the model which contains precisely the "constructible sets", AC holds, while in Solovay's model, AC fails.
+
+That's all there is to it.
+Maths won't let us prove AC, because it's not true of every set theory of the type "ZF".
+Maths won't let us prove AC is false, because there are some set theories of the type "ZF" in which it is true.
diff --git a/hugo/content/posts/2016-04-21-modular-machines.md b/hugo/content/posts/2016-04-21-modular-machines.md
new file mode 100644
index 0000000..4f7473c
--- /dev/null
+++ b/hugo/content/posts/2016-04-21-modular-machines.md
@@ -0,0 +1,32 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-04-21T00:00:00Z"
+aliases:
+- /modular-machines/
+title: Modular machines
+---
+
+I've written [a blurb][MM] about what a modular machine is (namely, another Turing-equivalent form of computing machine),
+and how a Turing machine may be simulated in one.
+(In fact, that blurb now contains an overview of how we may use modular machines to produce a group with insoluble word problem,
+and how to use them to embed a recursively presented group into a finitely presented one.)
+
+A modular machine is like a slightly more complicated version of a Turing machine, but it has the advantage
+that it is easier to embed a modular machine into a group than it is to embed a Turing machine directly into a group.
+We can use this embedding to show that there is a group with unsolvable word problem:
+solving the word problem would correspond to determining whether a certain Turing machine halted.
+
+This is part of my revision process for the Part III course on "Infinite Groups and Decision Problems".
+It's probably more comprehensible if you already know what a modular machine is.
+Below are some notes which are handwritten, because I needed to draw pictures easily; the linked notes are typeset but might be less legible.
+
+![Notes1]
+![Notes2]
+
+[MM]: /misc/ModularMachines/EmbedMMIntoTuringMachine.pdf
+[Notes1]: /images/ModularMachines/ModularMachines1.jpg
+[Notes2]: /images/ModularMachines/ModularMachines2.jpg
diff --git a/hugo/content/posts/2016-04-27-tennenbaums-theorem.md b/hugo/content/posts/2016-04-27-tennenbaums-theorem.md
new file mode 100644
index 0000000..95ecd71
--- /dev/null
+++ b/hugo/content/posts/2016-04-27-tennenbaums-theorem.md
@@ -0,0 +1,18 @@
+---
+lastmod: "2020-11-07T15:42:41.0000000+00:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-04-27T00:00:00Z"
+aliases:
+- /tennenbaums-theorem/
+title: Tennenbaum's theorem
+---
+
+Most recent exposition: [an article][tennenbaum] on [Tennenbaum's Theorem].
+Comments welcome.
+The proof is cribbed from Dr Thomas Forster, but his notes only sketched the fairly crucial last step, on account of the notes not yet being complete.
+
+[tennenbaum]: /misc/Tennenbaum/Tennenbaum.pdf
+[Tennenbaum's Theorem]: https://en.wikipedia.org/wiki/Tennenbaum%27s_theorem
diff --git a/hugo/content/posts/2016-05-25-finitistic-reducibility.md b/hugo/content/posts/2016-05-25-finitistic-reducibility.md
new file mode 100644
index 0000000..7f84d1f
--- /dev/null
+++ b/hugo/content/posts/2016-05-25-finitistic-reducibility.md
@@ -0,0 +1,68 @@
+---
+lastmod: "2021-09-12T22:50:36.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-05-25T00:00:00Z"
+math: true
+aliases:
+- /finitistic-reducibility/
+title: Finitistic reducibility
+summary: A quick overview of the definition of the mathematical concept of finitistic reducibility.
+---
+
+There is a [Hacker News thread][HN] at the moment about [an article on Quanta][quanta]
+which describes a paper which claims to prove that Ramsey's theorem for pairs is finitistically reducible.
+That thread contains lots of people being a bit confused about what this means.
+I wrote a comment which I hope is elucidating; this is that comment.
+
+It is a fact of mathematics that there are some statements which are solely about finite objects,
+but to prove them requires reasoning about an infinite object.
+The [TREE function]'s well-definedness is one of them.
+For a more accessible example than TREE, I think the [Ackermann function] falls into this category.
+The Ackermann function \\(A(n+1, m+1) = A(n, A(n+1, m))\\) is well-defined for all \\(n\\) and \\(m\\)
+(we prove this by induction over \\(\mathbb{N} \times \mathbb{N}\\)),
+but the proof relies on considering the [lexicographic order][lex] on \\(\mathbb{N} \times \mathbb{N}\\)
+which is inherently infinite.
+(I'm not totally certain that all proofs of Ackermann's well-definedness rely on an infinite object,
+but the only proof known to me does.)
+Ackermann's function itself is in some sense a "finite" object,
+but the proof of its well-definedness is in some sense "infinite".
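+
+To make the recurrence concrete, here is a sketch of the function (filling in the standard Ackermann-Péter base cases, which the recurrence above leaves implicit):

```python
import sys
sys.setrecursionlimit(1_000_000)  # the recursion gets deep quickly

def ackermann(n, m):
    """Ackermann-Péter function: total and computable,
    but provably not primitive-recursive."""
    if n == 0:
        return m + 1                 # A(0, m) = m + 1
    if m == 0:
        return ackermann(n - 1, 1)   # A(n+1, 0) = A(n, 1)
    # A(n+1, m+1) = A(n, A(n+1, m))
    return ackermann(n - 1, ackermann(n, m - 1))

# Growth explodes: A(3, m) = 2^(m+3) - 3, and A(4, 2) already
# has about 20,000 decimal digits.
print(ackermann(1, 2), ackermann(2, 2), ackermann(3, 3))  # 4 7 61
```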
+
+Whatever the status of my conjecture that "you can't prove that Ackermann's function is well-defined without considering an infinite object",
+it is [certainly a fact][ack not primrec] that Ackermann is not [primitive-recursive],
+and "primitive-recursive functions" corresponds to the lowest level of the five "mysterious levels" the article talks about.
+
+There are some mathematicians ("finitists") who don't believe that any infinite objects exist.
+Such mathematicians will reject any proof that relies on an infinite object,
+so their mathematics is necessarily less wide-ranging than the usual version.
+Any result that shows that more things are finitistically true is good,
+because it means the finitists get to use these facts the rest of us were already happy about.
+
+So the analogy is as follows.
+Imagine that we knew of this "infinitary" proof that Ackermann is well-defined,
+but we hadn't proved that no "finitary" proof exists.
+(So finitists are not happy to use Ackermann, because it might not actually be well-defined according to them:
+any known proof requires dealing with an infinite object.)
+Now, this paper comes along and proves that actually a finitary proof exists.
+Suddenly the finitists are happy to use the Ackermann function.
+
+Similarly, in real life, most mathematicians were quite happy to use \\(R_2^2\\) (Ramsey's theorem for pairs) to reason about finite objects,
+but the finitists rejected such proofs.
+Now, because of the paper, it turns out that the finitists are allowed to use \\(R_2^2\\) after all,
+because there is a purely finitistic reason why \\(R_2^2\\) is true.
+
+The actual definition of TREE is a bit too long for me to explain here,
+but it is an example of a function like Ackermann, which is well-defined,
+but if you're not allowed to consider infinite objects during the proof, then it is provably impossible to show that TREE is well-defined.
+So the statement "TREE is well-defined" is, in some sense, "less constructive" or "more infinitary" than \\(R_2^2\\).
+
+
+[HN]: https://news.ycombinator.com/item?id=11763080
+[quanta]: https://www.quantamagazine.org/mathematicians-bridge-finite-infinite-divide-20160524
+[TREE function]: https://en.wikipedia.org/wiki/Kruskal's_tree_theorem
+[Ackermann function]: https://en.wikipedia.org/wiki/Ackermann_function
+[lex]: https://en.wikipedia.org/wiki/Lexicographical_order
+[primitive-recursive]: https://en.wikipedia.org/wiki/Primitive_recursive_function
+[ack not primrec]: http://planetmath.org/ackermannfunctionisnotprimitiverecursive
diff --git a/hugo/content/posts/2016-06-13-the-use-of-jargon.md b/hugo/content/posts/2016-06-13-the-use-of-jargon.md
new file mode 100644
index 0000000..116fcf4
--- /dev/null
+++ b/hugo/content/posts/2016-06-13-the-use-of-jargon.md
@@ -0,0 +1,67 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- psychology
+comments: true
+date: "2016-06-13T00:00:00Z"
+aliases:
+- /the-use-of-jargon/
+title: The use of jargon
+summary: "Why jargon is a really useful thing to have and use."
+---
+
+I was recently having a late-night argument with someone about the following thesis:
+
+> If you can't explain something in a simple way, you don't understand it.
+
+They were using this to argue something like the following:
+
+> Jargon is unhelpful because it sets a very high barrier for entry into any field.
+
+My reply, as something of a mathematician, is as follows.
+
+While there are certainly more accessible parts of physics and maths which can be well-explained by analogies and imprecise language
+(and, indeed, we often use them with students, and Brian Cox uses them in e.g. documentaries),
+this approach has led to the horrible nightmare which is everyone wrongly thinking that they understand quantum mechanics (QM) because they heard some cool analogies.
+QM has very little in common with its analogies;
+the analogies are basically just there to give an idea that "things are weird, classical intuition will fail".
+It's the flip side to "if you use abstruse language then you create an environment where you must pass the initiation tests to take part":
+
+> If you use imprecise language then you create an environment where everyone thinks they understand but they're all wrong.
+
+Both approaches have merits, and boringly the correct answer is probably "use a mixture of the two, with the ratio depending on appropriateness to the subject".
+However, physics is increasingly a subfield of maths since the advent of QM and general relativity (which are purely mathematical frameworks),
+and in maths we find the precise language *extremely* important because we strive for total rigour in this, the only subject where it's actually possible.
+Most people start doing maths without access to the language,
+and they often find lots of interesting stuff
+([Ramanujan] is a particular example of such a mathematician,
+who did a lot of great work before ever interacting with Western mathematicians),
+but once you know the language, the language creates a framework which goes some way to guaranteeing the correctness of your results and which can help you spot connections/see more patterns.
+
+From a maths point of view, documentaries are there to get people interested in playing around for themselves,
+rather than to actually impart mathematical knowledge.
+In an ideal world, I think we'd let people discover loads of maths on their own,
+and then show them the precise framework and language it fits into,
+but there just isn't time,
+so we teach it by shoving the framework down students' throats until they either give up maths or become divinely inspired and start playing with it for themselves.
+Additionally a lot of the maths I study [though this might be historical accident,
+derived from our tradition of using jargon] consists of the study of objects which have very few properties, so they defy analogy.
+
+Sometimes it turns out that a certain collection of "very few properties",
+like the collection by which we define the objects we call [groups],
+happen to capture a certain intuition
+(in this case, the idea of "symmetry" [turns out in a deep way][Cayley's theorem] to be precisely captured by groups).
+However, that seems to be the exception rather than the rule,
+and a general collection of "few properties" won't have a neat accessible analogy that anyone has been able to find.
+Especially when you study metamathematics, as well,
+some very deep theorems turn out to hinge on *exactly* what you mean by "the integers" or "the real numbers" or whatever.
+In such fringe cases it is absolutely necessary to be totally precise that we mean "the integers" in a specific technical sense rather than "the integers" as a fuzzy concept,
+or else one will almost certainly go wrong.
+
+So there are definitely cases where the "stupid jargon" is necessary to maintain clarity of thought.
+(Some such theorems do actually impinge on reality, too! Usually via computer science.)
+
+[Ramanujan]: https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
+[groups]: https://en.wikipedia.org/wiki/Group_(mathematics)
+[Cayley's theorem]: https://arbital.com/p/cayley_theorem_symmetric_groups/
diff --git a/hugo/content/posts/2016-06-15-part-iii-essay.md b/hugo/content/posts/2016-06-15-part-iii-essay.md
new file mode 100644
index 0000000..ef4f909
--- /dev/null
+++ b/hugo/content/posts/2016-06-15-part-iii-essay.md
@@ -0,0 +1,20 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematical_summary
+comments: true
+date: "2016-06-15T00:00:00Z"
+aliases:
+- /part-iii-essay/
+title: Part III essay
+---
+
+Now that my time in [Part III] is over, I feel justified in releasing [my essay],
+which is on the subject of [Non-standard Analysis].
+It was supervised by Dr Thomas Forster
+(to whom I owe many thanks for exposing me to such an interesting subject, and for agreeing to supervise the essay).
+
+[Part III]: https://en.wikipedia.org/wiki/Part_III_of_the_Mathematical_Tripos
+[Non-standard Analysis]: https://en.wikipedia.org/wiki/Non-standard_analysis
+[my essay]: https://www.patrickstevens.co.uk/misc/NonstandardAnalysis/NonstandardAnalysisPartIII.pdf
diff --git a/hugo/content/posts/2016-08-05-be-a-beginner.md b/hugo/content/posts/2016-08-05-be-a-beginner.md
new file mode 100644
index 0000000..d0d31eb
--- /dev/null
+++ b/hugo/content/posts/2016-08-05-be-a-beginner.md
@@ -0,0 +1,45 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+comments: true
+date: "2016-08-05T00:00:00Z"
+aliases:
+- /be-a-beginner/
+title: Be a Beginner
+summary: Being a beginner at something is great, especially if it's something that humans are built for.
+---
+
+TL;DR: Being a beginner at something is great, especially if it's something that humans are built for.
+
+Humans are, among other things, [persistence hunters].
+That means one of the ways we are adapted to catch prey is by the brutest of brute-force techniques:
+on foot, we follow a large animal as it runs, until it sits down and dies of exhaustion,
+whereupon we eat it.
+We're pretty slow, but we can run for hours in the full heat of the day
+(we're unreasonably effective at regulating our own body temperature)
+and we just don't stop.
+One of the adaptations by which the human body is built is the ability to run at a constant speed for a long time.
+This art is, of course, increasingly unnecessary,
+as we have supplanted it with tools wrought of pure intellect (agriculture and so forth);
+but the underlying mechanisms are still there in [most of] our bodies.
+
+If you start something as a beginner, you make extremely rapid progress.
+The general effect has a name: the [Pareto principle],
+which is a rule of thumb which states that 80% of the effects come from 20% of the causes.
+If you just learn the most basic 20% of something,
+that often gets you 80% of the total possible effects.
+Beginners improve rapidly in most human endeavours.
+
+I started running using the NHS [Couch to 5k] programme, about nine weeks ago.
+In that time, I have gone from being able to run fitfully for about thirty seconds before having to stop and breathe,
+to being able to run for thirty minutes and only stopping because that's when the timer finished.
+It wasn't particularly fun, but it's always satisfying to improve rapidly at something,
+and it is certainly better to be able to run for half an hour than not to be able to run at all.
+(I had a similar experience with lifting weights, a year and a half ago, except I actually find that fun.)
+
+This post is to recommend being a beginner every so often,
+and specifically to point to the Couch to 5k programme for those who don't currently do things that involve running.
+
+[persistence hunters]: https://en.wikipedia.org/wiki/Persistence_hunting
+[Pareto principle]: https://en.wikipedia.org/wiki/Pareto_principle
+[Couch to 5k]: https://www.nhs.uk/live-well/exercise/couch-to-5k-week-by-week
diff --git a/hugo/content/posts/2016-08-07-a-free-market.md b/hugo/content/posts/2016-08-07-a-free-market.md
new file mode 100644
index 0000000..7f9c9f4
--- /dev/null
+++ b/hugo/content/posts/2016-08-07-a-free-market.md
@@ -0,0 +1,50 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- creative
+- fiction
+comments: true
+date: "2016-08-07T00:00:00Z"
+aliases:
+- /a-free-market/
+title: A Free Market
+summary: The story of Martin's search for a kaki fruit.
+---
+
+Martin was walking through the farmers' market.
+He had scored off nearly everything on his shopping list, but one item stubbornly remained:
+he needed some kaki fruit for a new sorbet recipe he wanted to try out.
+
+High and low he searched,
+weaving in and out of the stalls,
+but his mission proved… well, let us say that it was not successful.
+
+Finally, he thought to give up and place his problem into better hands than his own.
+He forged towards the market's finest attraction,
+the Personal Shopper ("Guaranteed to find your stuff!").
+Her name was Posy,
+and she had been a fixture here for the last twenty years:
+that was when she first noticed the curious way that no-one could ever find quite what they wanted at the weirdly inefficient market.
+Posy was uncannily good at navigating the cobbled rows between the stalls,
+and had an unerring eye for picking out exactly what the customer required.
+
+Martin poured out his problems.
+"Please! I need your help to find a kaki fruit. The recipe will be ruined without it."
+
+Posy smiled, assumed a look of determination, and forged off,
+leaving Martin to scurry behind her as she ducked first left,
+then left again, then (for some reason) a third and a fourth time.
+After what had to be the eighth or ninth left turn through the higgledy-piggledy stalls,
+with Martin hopelessly lost,
+she stopped in front of a little tent whose sign read
+"Children educated and tutored in etiquette: inquire within".
+She raised the entrance flap, and an elderly lady emerged.
+
+Angry, baffled and confused, Martin raised his voice,
+ignoring the proper-and-prim-looking lady from the tent.
+"Why haven't you found me a kaki fruit?
+I thought you knew this market like the back of your hand!"
+
+"Haven't you heard?" said Posy incredulously.
+"It's better to ask for a governess than seek persimmons."
diff --git a/hugo/content/posts/2016-08-10-reinvent-maths.md b/hugo/content/posts/2016-08-10-reinvent-maths.md
new file mode 100644
index 0000000..e6cd89e
--- /dev/null
+++ b/hugo/content/posts/2016-08-10-reinvent-maths.md
@@ -0,0 +1,34 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2016-08-10T00:00:00Z"
+title: How far back does mathematical understanding go?
+summary: Answering the question, "how far back in time would maths be understandable to a modern mathematician?".
+---
+
+*This is my answer to the same [question posed on the WorldBuilding Stack Exchange](https://worldbuilding.stackexchange.com/q/51166/13796). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+How far back in time could a mathematician go while spending as little time as possible relearning stuff?
+
+Background: The main character has realised that he can travel back in time voluntarily, and he wishes to travel back to a time-period where he can participate in the beginning of maths while relearning as little as possible.
+
+Magic: To make things clear, I'll add this in. The magic allows him to communicate in the time-period's language easily. He can understand it effortlessly, and it stops the other people from asking him very incriminating questions (like where he is from, etc.). They simply think he is a travelling scholar and leave it at that. (It stops them from digging too deeply, even if he does not know what they think is common sense.) They have also given him food and a place to stay.
+
+# Answer
+
+It strongly depends which area of maths you're talking about.
+
+* Category theory is basically new, so before the 1950s or so, it just didn't exist in anything like its modern form.
+* Combinatorics has been around for a long time, but before Erdős it looked very different.
+* Before Newton and Leibniz, the notion of calculus wasn't very clear, and its notation would make it very difficult for us modern-day people to work with.
+* Before Cauchy, they didn't really have what we would refer to as a "rigorous" foundation of analysis, and the relevant language changed substantially since Cauchy to take into account the new approach to rigour.
+* There was a time, even some point after the Renaissance IIRC, when mathematicians were still not really sold on this whole "rigour" thing, and the art of defining things crisply so as to deduce (nearly) incontrovertible stuff about them. The entire mindset of mathematics is different now.
+
+A first-year undergraduate going back before Newton could, if their ideas were taken seriously, revolutionise multiple areas of maths simply because we now know (and take for granted) the correct ways of thinking about certain fields of study.
+Conversely, of course, the first-year undergraduate would have a hard time following the maths of the day, because the technical language and frameworks are so unfamiliar.
+The only frameworks I can think of which haven't changed much post-Renaissance are Euclidean geometry and arithmetic, though of course geometry and number theory have advanced substantially since then.
diff --git a/hugo/content/posts/2016-12-31-complex-infinity.md b/hugo/content/posts/2016-12-31-complex-infinity.md
new file mode 100644
index 0000000..841005b
--- /dev/null
+++ b/hugo/content/posts/2016-12-31-complex-infinity.md
@@ -0,0 +1,32 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2016-12-31T00:00:00Z"
+title: What does Mathematica mean by ComplexInfinity?
+summary: Answering the question, "Why does WolframAlpha say that a quantity is ComplexInfinity?".
+---
+
+*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2078754/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+When entered into [Wolfram|Alpha](https://www.wolframalpha.com/), \\(\infty^{\infty}\\) results in "complex infinity".
+Why?
+
+# Answer
+
+WA's `ComplexInfinity` is the same as Mathematica's: it represents a complex "number" which has infinite magnitude but unknown or nonexistent phase.
+One can use `DirectedInfinity` to specify the phase of an infinite quantity, if it approaches infinity in a certain direction.
+The standard `Infinity` is the special case of phase `0`.
+Note that `Infinity` is different from `Indeterminate` (which would be the output of e.g. `0/0`).
+
+Some elucidating examples:
+
+* `0/0` returns `Indeterminate`, since (for instance) the limit may be approached as \\(\frac{1/n}{1/n}\\) or as \\(\frac{2/n}{1/n}\\), resulting in two different real numbers (\\(1\\) and \\(2\\) respectively).
+* `1/0` returns `ComplexInfinity`, since (for instance) the limit may be approached as \\(\frac{1}{-1/n}\\) or as \\(\frac{1}{1/n}\\), but every possible way of approaching the limit gives an infinite answer.
+* `Abs[1/0]` returns `Infinity`, since the limit is guaranteed to be infinite and approached along the real line in the positive direction.
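+
+The direction-dependence behind these answers is easy to see numerically (a plain-Python sketch rather than Mathematica):

```python
ns = [10, 100, 1000]
from_above = [1 / (1 / n) for n in ns]    # approach 0 from above: phase 0
from_below = [1 / (-1 / n) for n in ns]   # approach 0 from below: phase pi

# The magnitude diverges along every direction, but the phase depends
# on the direction taken: infinite magnitude, no well-defined phase.
print(from_above, from_below)
```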
+
+In your particular example, you get `ComplexInfinity` because the infinite limit may be approached as (e.g.) \\(n^n\\) or as \\(n^{n+i}\\).
diff --git a/hugo/content/posts/2017-02-14-cauchy-schwarz-proof.md b/hugo/content/posts/2017-02-14-cauchy-schwarz-proof.md
new file mode 100644
index 0000000..a242ebb
--- /dev/null
+++ b/hugo/content/posts/2017-02-14-cauchy-schwarz-proof.md
@@ -0,0 +1,17 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- mathematics
+comments: true
+date: "2017-02-14T00:00:00Z"
+aliases:
+- /cauchy-schwarz-proof/
+title: Proof of Cauchy-Schwarz
+---
+
+This is just a link to a [beautiful proof][proof] of the [Cauchy-Schwarz inequality][CS].
+There are a number of elegant proofs, but this is by far my favourite, because (as pointed out in the paper) it "builds itself".
+
+[CS]: https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality
+[proof]: http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/New%20Problems/CSNewProof/CauchySchwarzInequalityProof.pdf
diff --git a/hugo/content/posts/2017-03-14-maths-olympiad.md b/hugo/content/posts/2017-03-14-maths-olympiad.md
new file mode 100644
index 0000000..574f2da
--- /dev/null
+++ b/hugo/content/posts/2017-03-14-maths-olympiad.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-01-24T12:53:36.0000000+00:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2017-03-14T00:00:00Z"
+title: The relationship between the IMO and research mathematics
+summary: Answering the question, "does the International Maths Olympiad help research mathematics?".
+---
+
+*This is my answer to the same [question posed on the Academia Stack Exchange](https://academia.stackexchange.com/q/86451/51909). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+I was reading a note of Hojoo Lee on inequalities which is written for International Math Olympiad (IMO) participants. Although he writes that “target readers are challenging high school students and undergraduate students”, it appears to be quite advanced.
+
+It occurred to me to ask, do these IMO problems contribute towards research work in math? Do these math notes/books give good overview for research work?
+
+# Answer
+
+I think of Olympiad problems more as "parlour tricks".
+They're really difficult, and it's super-impressive if someone's good at them, but the skills are very different to the skills you need in research.
+As a big example of a difference: the Olympiad rewards quick accurate leaps of reasoning, because you're under such time pressure.
+Research rewards long-term grit and persistence through blind alleys and repeated failure.
diff --git a/hugo/content/posts/2017-11-05-abuse-of-notation.md b/hugo/content/posts/2017-11-05-abuse-of-notation.md
new file mode 100644
index 0000000..db288ec
--- /dev/null
+++ b/hugo/content/posts/2017-11-05-abuse-of-notation.md
@@ -0,0 +1,38 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2017-11-05T00:00:00Z"
+title: Abuse of notation in function application
+summary: Answering the question, "Are these examples of abuses of notation?".
+---
+
+*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2505777/259262). It is therefore licensed under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+I have often seen notation like this:
+
+> Let \\(f : \mathbb{R}^2 \to \mathbb{R}\\) be defined by \\(f(x, y) = x^2 + 83xy + y^7\\).
+
+How does this make any sense?
+If the domain is \\(\mathbb{R}^2\\) then \\(f\\) should be mapping individual tuples.
+
+Also when speaking of algebraic structures why do people constantly interchange the carrier set with the algebraic structure itself?
+For example you might see someone write this:
+
+> Given any field \\(\mathbb{F}\\) take those elements in our field \\(a \in \mathbb{F}\\) that satisfy the equation \\(a^8 = a\\).
+
+How does this make any sense?
+If \\(\mathbb{F}\\) is a field then it is a tuple equipped with two binary operations and corresponding identity elements all of which satisfy a variety of axioms.
+
+# Answer
+
+The example you've given of a function is not an abuse. \\(x\\) is instead shorthand for \\(\pi_1(t)\\) and \\(y\\) is shorthand for \\(\pi_2(t)\\) and \\((x, y)\\) is shorthand for \\(t\\).
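A concrete way to see the shorthand, sketched here in Python (an aside, not from the original answer): the "two-argument" definition is really a function of a single tuple \\(t\\), with \\(x\\) and \\(y\\) naming its projections.

```python
def f(t):
    # t is a single element of R^2; x and y name its two projections,
    # pi_1(t) and pi_2(t), exactly as in the shorthand f(x, y) = ...
    x, y = t
    return x**2 + 83 * x * y + y**7

# f is applied to one tuple, as the type R^2 -> R demands.
assert f((1, 2)) == 1**2 + 83 * 1 * 2 + 2**7  # 1 + 166 + 128 = 295
```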
+
+\\(g \in G\\) is a very minor abuse, yes.
+"A group \\(G\\) is a set \\(G\\) endowed with some operations" is a slight abuse, but one which will never be misinterpreted.
+It is done this way to avoid the proliferation of unnecessary and confusing symbols.
+For the same reason, we use the symbol \\(+\\) to refer to the three different operations of addition of integers, rationals, and reals.
diff --git a/hugo/content/posts/2018-02-03-epsilon-delta.md b/hugo/content/posts/2018-02-03-epsilon-delta.md
new file mode 100644
index 0000000..5c1f985
--- /dev/null
+++ b/hugo/content/posts/2018-02-03-epsilon-delta.md
@@ -0,0 +1,26 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2018-02-03T00:00:00Z"
+title: Infinitesimals as an idea that took a long time
+summary: Answering the question, "Which mathematical ideas took a long time to define rigorously?".
+---
+
+*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2633847/259262). It is therefore licensed under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: E.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)
+
+More rare are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.
+
+I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem to often lead to the development of radically new branches of mathematics. What are further examples of this type?
+
+# Answer
+
+Following from the continuity example, in which the epsilon-delta formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson's work in the late 1950s and early 1960s before we had "the right construction" of infinitesimals via ultrapowers, in a way that made infinitesimal manipulation fully rigorous as a way of dealing with the reals. They had been a very useful tool for centuries before then, with (e.g.) Cauchy using them regularly, attempting but failing to formalise them, and with Leibniz's calculus being defined entirely in terms of infinitesimals.
+
+Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal \\(x\\) doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
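To spell out why \\(x\\) has no square root among formal Laurent series (a one-line check, implicit in the answer above): any nonzero Laurent series has a well-defined lowest-order term,

```latex
f(x) = \sum_{n \ge k} a_n x^n \quad (a_k \neq 0)
\quad\Longrightarrow\quad
f(x)^2 = a_k^2 x^{2k} + (\text{higher-order terms}),
```

so \\(f^2 = x\\) would force \\(2k = 1\\), which no integer \\(k\\) satisfies.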
diff --git a/hugo/content/posts/2018-04-08-kinds-of-number.md b/hugo/content/posts/2018-04-08-kinds-of-number.md
new file mode 100644
index 0000000..84fd955
--- /dev/null
+++ b/hugo/content/posts/2018-04-08-kinds-of-number.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- stack-exchange
+comments: true
+date: "2018-04-08T00:00:00Z"
+title: What is lost when we move between number systems?
+summary: Answering the question, "What is lost when we move from the reals to the complex numbers?".
+---
+
+*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2728317/259262). It is therefore licensed under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*
+
+# Question
+
+As I know, when you move to "bigger" number systems (such as from the complexes to the quaternions) you lose some properties (e.g. moving from the complexes to the quaternions loses commutativity). But does this also hold when you move, for example, from the naturals to the integers, or from the reals to the complexes, and what properties do you lose?
+
+# Answer
+
+The most important ones, as I see it:
+
+* Naturals to integers: lose well-orderedness, gain "abelian group" (and, indeed, "ring").
+* Integers to rationals: lose discreteness, gain "field".
+* Rationals to reals: lose countability, gain "Cauchy-complete".
+* Reals to complexes: lose a compatible total order, gain the Fundamental Theorem of Algebra.
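The last point is visible even in ordinary programming languages; a small Python illustration (an aside, not from the original answer):

```python
# The reals admit a total order compatible with their arithmetic...
assert 2.0 < 3.0

# ...but there is no compatible total order on the complexes, and Python
# mirrors the mathematics by refusing to compare complex numbers at all.
try:
    (1 + 2j) < (3 + 4j)
    comparable = True
except TypeError:
    comparable = False

assert not comparable
```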
diff --git a/hugo/content/posts/2018-06-02-json-comments.md b/hugo/content/posts/2018-06-02-json-comments.md
new file mode 100644
index 0000000..0b8f9f0
--- /dev/null
+++ b/hugo/content/posts/2018-06-02-json-comments.md
@@ -0,0 +1,25 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- hacker-news
+- programming
+comments: true
+date: "2018-06-02T00:00:00Z"
+title: JSON comments (a note from Hacker News)
+summary: "A quick note from Hacker News about why the comment-handling situation in JSON is bad."
+---
+
+In response to [a linkpost](https://news.ycombinator.com/item?id=17358103) to [an article about how YAML isn't perfect](https://arp242.net/weblog/yaml_probably_not_so_great_after_all.html), [user jiveturkey](https://news.ycombinator.com/user?id=jiveturkey) [commented with confusion](https://news.ycombinator.com/item?id=17359727):
+
+> > JSON doesn't support comments
+>
+> eh?
+>
+> `{ "firstName": "John", "lastName": "Smith", "comment": "foo", }`
+>
+> I know it isn't the same as `#comments`, but who cares really.
+
+[I replied](https://news.ycombinator.com/item?id=17359800):
+
+> The trouble there is that your comments come in-band. What if you're trying to serialise something and you don't have the power to insist that it's not a dictionary with "comment" as a key?
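To make the problem concrete (an aside, not from the original thread): any tool that honours the in-band convention must strip `"comment"` keys, and in doing so it destroys documents that legitimately use that key as data.

```python
import json

def strip_comments(obj):
    """Naively honour the in-band convention by dropping "comment" keys."""
    if isinstance(obj, dict):
        return {k: strip_comments(v) for k, v in obj.items() if k != "comment"}
    if isinstance(obj, list):
        return [strip_comments(v) for v in obj]
    return obj

# Here "comment" is real data, not an annotation...
doc = json.loads('{"firstName": "John", "comment": "posted by John"}')

# ...so honouring the convention silently loses it.
assert strip_comments(doc) == {"firstName": "John"}
```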
diff --git a/hugo/content/posts/2018-07-21-dependent-types-overview.md b/hugo/content/posts/2018-07-21-dependent-types-overview.md
new file mode 100644
index 0000000..5c117c6
--- /dev/null
+++ b/hugo/content/posts/2018-07-21-dependent-types-overview.md
@@ -0,0 +1,185 @@
+---
+lastmod: "2021-09-12T22:47:44.0000000+01:00"
+author: patrick
+categories:
+- programming
+- mathematics
+comments: true
+date: "2018-07-21T00:00:00Z"
+aliases:
+- /dependent-types-overview/
+title: Dependent types overview
+summary: "A quick overview of dependent types."
+---
+
+# Proving things in Agda, part 1: what is dependent typing?
+
+[Agda] is a [dependently-typed] programming language which I've been investigating over the last couple of months, inspired by Conor McBride's [CS410] lecture series.
+Because it is dependently typed, its type system is powerful enough to encode mathematical truth: you can use the type system to verify proofs of mathematical statements, and even to largely obviate the need for tests, since the compiler can verify almost any property of your program.
+This post is an overview of what that means.
+
+Before you read any of the Agda code that lives in [my Agda repository][GitHub], please keep in mind that I'm an Agda novice who is exploring.
+I make no claims that any of this code is any good; only that it is correct.
+I'm also not interested in performance, since I'm using it as a proof environment rather than as a source of runnable programs; while all of the code is runnable, I have not optimised it at all.
+We shall see that the mere existence of these programs is enough to constitute mathematical proof.
+
+## What is a type system?
+
+I think of a type system as one or both of two things.
+
+* A way of informing the compiler that certain objects are supposed to match up in certain ways, such that this information may vanish at runtime but allows the compiler to help you when you're writing the program.
+* A way of ensuring at runtime that you don't perform nonsensical operations on objects that don't support those operations.
+
+For example, the language Python has a type system which is "dynamic": you don't specify the type of an object while you're writing the program, so the compiler can't really use type information to help you.
+The language F# has a "static" type system: the type of every object is known up front (whether you write it explicitly or the compiler infers it), so the compiler has more opportunities to tell whether you've told your program to do something inconsistent.
+
+From now on, I'll focus on the first kind of type system (i.e. on type systems where you specify types while you're writing the program, so the compiler can help you).
+
+## What can a type system do for you?
+
+Any Python programmer has probably encountered a certain extremely common bug: since strings are iterable, it's all too easy to iterate accidentally over a single string when you intended to iterate over a list of strings.
+A baby example, in which the bug is very obvious, is as follows:
+
+{{< highlight python >}}
+stringsList = ["hello", "world"]
+for ch in stringsList[0]: # Oops - stringsList[0], not stringsList
+ print(ch)
+# Expected: "hello" and then "world"
+# Actual: "h", then "e", then "l", then "l", then "o"
+{{< / highlight >}}
+
+Python's dynamic typing means that you often can't find out that you've iterated over the wrong thing until you run the program; worse, as here, the program may not blow up at all, but silently do the wrong thing.
+(It doesn't help that there's no such thing as a character in Python, only a string of length 1, so iterating over a string always succeeds.)
+
+In F#, this is a class of bug that never makes it to runtime, because you know the type of every variable up front.
+
+{{< highlight fsharp >}}
+let stringsList = [ "hello" ; "world" ]
+stringsList.[0]
+|> List.map (printfn "%s") // doesn't compile!
+{{< / highlight >}}
+
+`List.map` can't take a string as an argument.
+Even if it could, `printfn "%s"` can't take a character as an argument.
+
+The type system has protected you from this particular bug.
+So far, so familiar.
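
As an aside (not in the original post): Python has since grown optional type hints, which let a static checker such as mypy catch the bug above without running the program, even though plain Python still executes it happily.

```python
from typing import List

strings_list: List[str] = ["hello", "world"]

def collect(items: List[str]) -> List[str]:
    # Intended to receive a list of strings and return it unchanged.
    return [s for s in items]

# The same bug: strings_list[0] is a str, not a List[str]. A static
# checker flags this call, but at runtime Python happily iterates over
# the characters of "hello".
result = collect(strings_list[0])
assert result == ["h", "e", "l", "l", "o"]
```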
+
+## Dependent types?
+
+In most common type systems, you're restricted to declaring that any particular object inhabits one of some fixed collection of types, or inhabits a type that is built out of those.
+(For example, `string` or `int` or `List