mirror of
https://github.com/Smaug123/static-site-pipeline
synced 2025-10-05 00:08:40 +00:00
Import Hugo
This commit is contained in:
27
hugo/content/posts/2013-06-26-cucats-puzzlehunt.md
Normal file
@@ -0,0 +1,27 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-06-26T00:00:00Z"
aliases:
- /archives/10/index.html
- /wordpress/archives/10/index.html
- /uncategorized/cucats-puzzlehunt/
title: CUCaTS Puzzlehunt
---
At the end of last (that is, Lent 2012-2013) term at Cambridge, I took part in the [Cambridge University Computing and Technology Society](https://cucats.org) [Puzzlehunt](https://cucats.org/puzzlehunt) (for some reason, as of this writing, they haven't yet updated that page for this year's Puzzlehunt, but last year's is up there). A short summary: the Puzzlehunt is a treasure hunt around Cambridge, crossed with a whole bunch of online computing-based puzzles. It's very difficult, and it lasts for twenty-four hours.

It was great fun, and while my team was hampered considerably by the fact that (having found out about the event only a day in advance) we had all planned various May Week celebrations to coincide with the first five hours or so of the twenty-four hour competition, we still gave it a good shot and came fifth of about nine, as far as I remember. (Team G, for the win!)

For possibly the first time ever, I adopted a sensible strategy of separating the programs I wrote for each puzzle, and saving them as I went. This means I have a record of my attempts at each puzzle - they're all in the form of [Mathematica](https://www.wolfram.com) notebooks.

My attempts are *extremely* rough-and-ready, being thrown together in the shortest time possible.

Mathematica Notebook files (.nb) can be read through the Wolfram CDF Player, which can be installed free from [the Wolfram website](https://www.wolfram.com/player "Wolfram CDF player page"); the plugin is quite large, so I can release them as PDFs instead if anyone wants. (Using the CDF player gives syntax highlighting and interactivity, not that many of these files will be interactive, because they were made so quickly.)

* [Keyboard Cat](/cucats/Puzzlehunt2013/KeyboardCat.nb)
* [The Chase](/cucats/Puzzlehunt2013/TheChase.nb)

More to follow, when I've put a bit of explanatory commentary in them.
16
hugo/content/posts/2013-06-26-first-post.md
Normal file
@@ -0,0 +1,16 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
date: "2013-06-26T00:00:00Z"
aliases:
- /archives/4/index.html
- /wordpress/archives/4/index.html
- /uncategorized/first-post/
- /first-post/
title: First post
---
Hello all!

In the spirit of shouting into an echoing void, this is my first post, testing whether the setup works. Some content will probably turn up soon.
110
hugo/content/posts/2013-06-26-sylow-theorems.md
Normal file
@@ -0,0 +1,110 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2013-06-26T00:00:00Z"
math: true
aliases:
- /mathematical_summary/sylow-theorems/
- /sylow-theorems/
title: Sylow theorems
summary: "A fairly long and winding way through a proof of the three Sylow theorems."
---
(This post is mostly to set up a kind of structure for the website; in particular, to be the first in a series of posts summarising some mathematical results I stumble across.)

EDIT: There is now [an Anki deck](/AnkiDecks/SylowTheoremsProof.apkg) of this proof, and a [collection of poems][sylow sonnets] summarising it.

In Part IB of the Mathematical Tripos (that is, second-year material), there is a course called Groups, Rings and Modules. I took it in the academic year 2012-2013, when it was lectured by [Imre Leader](https://en.wikipedia.org/wiki/Imre_Leader). He told us that there were three main proofs of the [Sylow theorems](https://en.wikipedia.org/wiki/Sylow_theorems), two of which were horrible and one of which was nice; he presented the "nice" one. At the time, I thought this was the most beautiful proof of anything I'd ever seen, although other people have told me it's a disgusting proof.

# Theorem - the Sylow Theorems

Let \\(G\\) be a group, of order \\(p^k m\\) for some prime \\(p\\), where the [HCF](https://en.wikipedia.org/wiki/Greatest_common_divisor) \\((p,m) = 1\\). Then:

1. There is a subgroup \\(H\\) of \\(G\\), of order \\(p^k\\) (a Sylow p-subgroup);
2. All such subgroups are conjugate to each other;
3. The number of such subgroups, \\(n_p\\), satisfies \\(n_p \equiv 1 \pmod p\\) and \\(n_p \mid m\\).

# Proof

The proof goes as follows: pick a p-subgroup \\(P\\) of maximal size; then introduce its normaliser \\(N\\), and show that the orbit of \\(P\\) under conjugation by \\(G\\) is precisely the set of Sylow p-subgroups.
## First Sylow theorem

The proof starts out in a natural way, by naming a subgroup \\(P\\) of order \\(p^a\\) for some \\(a\\). Such a subgroup certainly exists, by [Cauchy's Theorem](https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(group_theory)) (which supplies the case \\(a=1\\)). If we select \\(a\\) to be maximal, then we wish to show that \\(a=k\\), or equivalently (which seems even easier) that \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\).

Now, how do we show that \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\)? Well, we don't know anything about such a quotient unless \\(P\\) is normal in \\(G\\). But we can't guarantee this - so let's introduce a subgroup, \\(N\\), in which \\(P\\) is normal. The natural one to pick, because we're trying to make the subgroup as big as possible, is the [normaliser](https://en.wikipedia.org/wiki/Centralizer_and_normalizer) \\(N(P)\\) - that is, \\(\{g : g P g^{-1} = P\}\\), or \\(\mathrm{Stab}_G(P)\\) under the conjugation action. This is the largest subgroup of \\(G\\) in which \\(P\\) is normal.

Then we want to show that \\(\dfrac{ \vert G \vert }{ \vert N \vert } \times \dfrac{ \vert N \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\); this is true if and only if neither of the multiplicands is divisible by \\(p\\).
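As a single display (a restatement of the sentence above, not an addition to the argument):

```latex
\frac{\lvert G\rvert}{\lvert P\rvert}
  \;=\; \frac{\lvert G\rvert}{\lvert N\rvert}\cdot\frac{\lvert N\rvert}{\lvert P\rvert},
\qquad\text{so it suffices that}\quad
p \nmid \frac{\lvert G\rvert}{\lvert N\rvert}
\quad\text{and}\quad
p \nmid \frac{\lvert N\rvert}{\lvert P\rvert}.
```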
### The second multiplicand

It looks like it will be easier to start with the second multiplicand, because it's got a really really obvious interpretation.

We want to show that \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\) is not a multiple of \\(p\\). Now, from the [First Isomorphism Theorem](https://en.wikipedia.org/wiki/Isomorphism_theorems) we have \\(\dfrac{ \vert N \vert }{ \vert P \vert } = \vert \dfrac{N}{P} \vert \\).

Suppose \\( \vert \dfrac{N}{P} \vert \equiv 0 \pmod p\\). Then by Cauchy's Theorem, there is an element \\(h \in \dfrac{N}{P}\\) such that the [order](https://en.wikipedia.org/wiki/Order_(group_theory)) \\(o(h) = p\\); let \\(H = \langle h \rangle\\), the group generated by \\(h\\). But we got to this quotient group \\(\dfrac{N}{P}\\) by applying the projection map \\(\pi : N \rightarrow \dfrac{N}{P}\\), so what happens when we "un-quotient" (that is, apply \\(\pi^{-1}\\))? The preimage \\(\pi^{-1}(H)\\) has order \\( \vert H \vert \vert P \vert \\), because \\(\pi\\) is a \\( \vert P \vert \\)-to-one mapping; so \\(\pi^{-1}(H) \leq N\\) is a p-subgroup of order \\(p \vert P \vert = p^{a+1}\\), contradicting the maximality of \\(a\\).

Hence \\( \vert \dfrac{N}{P} \vert \not \equiv 0 \pmod p\\).
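The argument of the preceding paragraphs, compressed into one chain (my display, same notation as above):

```latex
p \,\Bigm\vert\, \Bigl\lvert \frac{N}{P} \Bigr\rvert
\;\overset{\text{Cauchy}}{\Longrightarrow}\;
\exists\, H \leq \frac{N}{P},\ \lvert H\rvert = p
\;\Longrightarrow\;
\pi^{-1}(H) \leq N,\ \bigl\lvert \pi^{-1}(H) \bigr\rvert = p\,\lvert P\rvert = p^{a+1},
\quad\text{contradicting the maximality of } a.
```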
### The first multiplicand

The first multiplicand, \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\): this is the number of conjugates of \\(P\\), by the [Orbit-Stabiliser Theorem](https://en.wikipedia.org/wiki/Orbit_stabiliser_theorem#Orbit-stabilizer_theorem_and_Burnside.27s_lemma) (using the conjugation action: the stabiliser is \\(N\\), while the orbit of \\(P\\) is simply the set of conjugate subgroups). We want to show that this is not divisible by \\(p\\). We can do much more with the conjugates themselves, so let \\(X = \{gPg^{-1} : g \in G\}\\).

We would like to show that \\( \vert X \vert \not \equiv 0 \pmod p\\). This expression rings a bell - we've seen it before, as a key idea in the [class equation](https://en.wikipedia.org/wiki/Conjugacy_class#Conjugacy_class_equation). In order to use the class equation, we need to act on \\(X\\). There are only three groups we've met so far: \\(N\\), \\(P\\) and \\(G\\). The group we haven't yet used is \\(P\\), and it's a [p-group](https://en.wikipedia.org/wiki/P-group) (and we know a bit about actions of p-groups). What's the only obvious action to use? It has to be conjugation.

Let \\(P\\) act on \\(X\\) by conjugation. Since the orbits partition the set \\(X\\) and have order dividing \\( \vert P \vert \\), the order of each orbit is one of \\(1, p, p^2, \dots , p^a = \vert P \vert \\). \\(P\\) is clearly in an orbit all of its own (since \\(p P p^{-1} = P\\) for every \\(p \in P\\)). What we really want is for \\(P = e P e^{-1}\\) to be the only conjugate of \\(P\\) which is in its own orbit, because then we have \\( \vert X \vert \equiv 1 \pmod p\\) (since the orbits partition the set).

Suppose we have \\(g\\) such that \\(g P g^{-1}\\) is in an orbit of size 1. Then \\(p g P g^{-1} p^{-1} = g P g^{-1}\\) for all \\(p \in P\\), and so (by conjugating with \\(g^{-1}\\)) we have \\(g^{-1} p g P g^{-1} p^{-1} g = P\\); hence \\(g^{-1} p g\\) stabilises \\(P\\) and so is in \\(N\\). So \\(g^{-1} P g\\) is contained within \\(N\\).

Now that we know \\(g^{-1} P g\\) is contained within \\(N\\), we can use functions defined on \\(N\\). The quotient map \\(\pi : N \rightarrow \dfrac{N}{P}\\) is a homomorphism with kernel \\(P\\); that is, \\(\pi(P) = \{e\}\\). Hence \\(\pi(g^{-1} P g) = \pi(g^{-1}) \pi(P) \pi(g)\\) because \\(\pi\\) is a homomorphism; but \\(\pi(P) = \{e\}\\), so this expression is just \\(\{\pi(g^{-1}) \pi(g)\} = \{\pi(g^{-1} g)\} = \{e\}\\).

Hence \\(g^{-1} P g\\) is contained in the kernel of \\(\pi\\). But it's also the same size as \\(P\\), which is itself the kernel of \\(\pi\\). Hence \\(g^{-1} P g = P\\).

So there is only one orbit of size \\(1\\), and hence, because orbits partition the set, \\( \vert X \vert = \dfrac{ \vert G \vert }{ \vert N \vert }\\) is not divisible by \\(p\\).
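In class-equation form (my summary of the orbit count above): exactly one orbit has size \\(1\\), and every other orbit has size a positive power of \\(p\\), so

```latex
\lvert X\rvert
  \;=\; 1 \;+\; \sum_{\text{orbits } O,\ \lvert O\rvert > 1} \lvert O\rvert
  \;=\; 1 \;+\; \sum_i p^{b_i}
  \;\equiv\; 1 \pmod p,
\qquad b_i \geq 1.
```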
This concludes the proof of the first Sylow theorem.

## Second Sylow theorem

Given a Sylow p-subgroup \\(Q\\) of \\(G\\), we want to show that it is conjugate to \\(P\\).

Use \\(X\\) as before, the set \\(\{g P g^{-1} : g \in G \}\\). In the first theorem, we had \\(P\\) acting on \\(X\\); now let's use \\(Q\\) in the same way. We want to show that there is some \\(g \in G\\) such that \\(g^{-1} Q g = P\\), or equivalently that \\(Q \in X\\).

Let \\(Q\\) act on \\(X\\) by conjugation. We have that \\( \vert X \vert \\) is not a multiple of \\(p\\) by the earlier part, but \\(X\\) is a union of orbits, each of size \\(p^s\\) for some \\(s\\); so at least one orbit has size \\(1\\). That is, there is some \\(g \in G\\) such that the orbit of \\(g P g^{-1}\\) under \\(Q\\) is just \\(\{g P g^{-1}\}\\) (equivalently, \\(q g P g^{-1} q^{-1} = g P g^{-1}\\) for all \\(q \in Q\\)). Hence, as before, all elements of \\(g^{-1} Q g\\) fix \\(P\\) under conjugation, and hence \\(g^{-1} Q g \subset N\\).

Now, \\(g^{-1} Q g \subset N\\), so we can apply the projection map \\(\pi\\) to it. We show that \\(\pi(g^{-1} Q g) = \{e\}\\). Indeed, suppose it isn't. Then \\(H = \pi(g^{-1} Q g)\\) is a non-trivial subgroup of \\(\dfrac{N}{P}\\), because \\(g^{-1} Q g\\) is a subgroup of \\(N\\). Its order divides that of \\(g^{-1} Q g\\), because applying a homomorphism to a subgroup yields a subgroup of order dividing that of the original - and since \\(H\\) is non-trivial and \\(g^{-1} Q g\\) has p-power order, the order of \\(H\\) is a multiple of \\(p\\). Also, its order divides that of \\(\dfrac{N}{P}\\), by Lagrange, because it's a subgroup of \\(\dfrac{N}{P}\\) - and this is not a multiple of \\(p\\). But now we have a multiple of \\(p\\) which divides a non-multiple of \\(p\\) - contradiction.

Then \\(\pi(g^{-1} Q g) = \{e\}\\), so \\(g^{-1} Q g \subset \mathrm{Ker}(\pi) = P\\); and since \\(g^{-1} Q g\\) has the same size as \\(P\\), in fact \\(g^{-1} Q g = P\\).

This concludes the proof of the second Sylow theorem.
## Third Sylow theorem

We now want to show that the number \\(n_p\\) of Sylow p-subgroups is \\(1 \pmod p\\) and divides \\(m\\).

We certainly have \\(n_p = \vert X \vert \\): every Sylow p-subgroup is a conjugate of \\(P\\) (by the second theorem), but also every conjugate of \\(P\\) (that is, every member of \\(X\\)) is itself a subgroup of \\(G\\) of the same size as \\(P\\), so is also a Sylow p-subgroup. Hence, just as before, \\(n_p \equiv 1 \pmod p\\).

Also, \\(n_p\\) is the size of an orbit under conjugation, and hence by the Orbit-Stabiliser Theorem it divides \\( \vert G \vert = p^k m\\); but \\(n_p\\) has no factor of \\(p\\) (being \\(1 \bmod p\\)), so it must divide \\(m\\).

This concludes the proof of the third Sylow theorem.
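As a concrete sanity check (my example, not part of the original proof): take \\(G = S_4\\), of order \\(24 = 2^3 \cdot 3\\), with \\(p = 2\\), so \\(k = 3\\) and \\(m = 3\\). The third theorem gives

```latex
n_2 \equiv 1 \pmod 2 \quad\text{and}\quad n_2 \mid 3
\;\Longrightarrow\; n_2 \in \{1, 3\},
```

and indeed \\(S_4\\) has exactly three Sylow 2-subgroups (each dihedral of order \\(8\\)), so \\(n_2 = 3\\).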
# Summary

So the proof went as follows:

1. We're looking for information about Sylow p-subgroups, so we pick the maximum possible p-subgroup and hope that it's a Sylow one.
2. How do we know whether this p-group is Sylow? If \\(\dfrac{ \vert G \vert }{ \vert P \vert }\\) is not divisible by \\(p\\).
3. What can we do with a quotient? Not much, but we *can* use a quotient by a normal subgroup. We can't guarantee that \\(P\\) is normal in \\(G\\), so we split up the fraction into \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\) and \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\).
4. What's a good subgroup \\(N\\) to use (one in which \\(P\\) is normal)? We have a choice. We'll go for the normaliser \\(N = N(P)\\), because that gives a nice interpretation to \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\). (But otherwise, this step seems a bit arbitrary to me.)
5. Now we'll go for \\(\dfrac{ \vert N \vert }{ \vert P \vert }\\); this is definitely something to do with the quotient group \\(\dfrac{N}{P}\\). Let's imagine its size were divisible by \\(p\\); then we can use Cauchy on \\(\dfrac{N}{P}\\) and get a contradiction on moving back to \\(N\\).
6. Let's now consider \\(\dfrac{ \vert G \vert }{ \vert N \vert }\\); the normaliser is something to do with conjugates, so we'll consider the conjugation action. Happily, this expression then becomes the size of the orbit of \\(P\\) under the conjugation action; call that orbit \\(X\\).
7. We need \\( \vert X \vert \not \equiv 0 \pmod p\\). Remember the class equation; we want to act on \\(X\\) using a p-group. \\(P\\) is such a p-group, so we'll let \\(P\\) act on \\(X\\). The only natural action to use is conjugation. We know straight away that \\(P\\) is in an orbit all to itself; we need it to be the only one.
8. Name a different conjugate of \\(P\\); call it \\(g P g^{-1}\\). We need this to be exactly \\(P\\). It's got the right size already, so we just need it to be contained in \\(P\\). Here is a leap of faith: what's special about \\(P\\)? It's the kernel of a homomorphism \\(\pi: N \rightarrow \dfrac{N}{P}\\) (because it's a normal subgroup of \\(N\\)). So, after proving that \\(\pi\\) is defined on what we want to give as its arguments (that is, after showing that \\(g P g^{-1}\\) is contained in \\(N\\), or equivalently that all elements of \\(g P g^{-1}\\) stabilise \\(P\\) under conjugation), consider \\(\pi(g^{-1} P g)\\). This is clearly \\(\{e\}\\), and hence \\(g^{-1} P g\\) is in the kernel of \\(\pi\\), and hence is a subset of \\(P\\), as required.
9. Now the second theorem: all the Sylow p-subgroups need to be conjugate. Name a Sylow p-subgroup \\(Q\\), and have it act on \\(X\\) as above. Then in exactly the same way as in step 7, since \\( \vert X \vert \\) is not a multiple of \\(p\\), there is some \\(h \in G\\) such that \\(\{h P h^{-1}\}\\) is an entire orbit under conjugation by \\(Q\\).
10. Exactly as in step 8, the conjugate \\(h P h^{-1}\\) is on its own in an orbit, so it is fixed under conjugation by every element of \\(Q\\); hence \\(H = h^{-1} Q h\\) is contained within \\(N\\) and we can use \\(\pi\\). Suppose that \\(H\\) is not fully contained in the kernel of \\(\pi\\); then applying \\(\pi\\) to it gives us a non-trivial subgroup, whose order is a multiple of \\(p\\) (because \\(h^{-1} Q h\\) has prime power order); it also has order dividing that of \\(\dfrac{N}{P}\\), which is not a multiple of \\(p\\): contradiction.
11. \\(H\\), a conjugate of \\(Q\\), is hence contained in the kernel of \\(\pi\\). Then since it is of the same size as the kernel, it must be the kernel, but that is \\(P\\).
12. Now the third theorem: we've just shown that \\(X\\) is precisely the set of Sylow p-subgroups, so \\( \vert X \vert \equiv 1 \pmod p\\) is just what we want (and we've already shown it back in step 8); and since \\(X\\) is also precisely an orbit when \\(G\\) acts on \\(P\\) by conjugation, its size must divide the order of \\(G\\).

[sylow sonnets]: {{< ref "2013-08-31-slightly-silly-sylow-pseudo-sonnets" >}}
@@ -0,0 +1,27 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-03T00:00:00Z"
aliases:
- /uncategorized/in-which-i-augment-the-lexicon/
- /in-which-i-augment-the-lexicon/
title: In which I augment the lexicon
summary: "A few dubiously-real words which I think should be more widely used."
---
(This is my first post written in Dvorak; accordingly, it is a bit shorter than I would like, since I am very slow at it. [Tsuyoku naritai](http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/ "I want to become stronger"), and all that.)

A really nice website I've come across in my wanderings is [Pretty Rational], a growing collection of pithy quotes about rationality, illustrated by one Katie Hartman.

![Reality provides us with facts so romantic that imagination itself could add nothing to them][reality]

This particular Jules Verne quote is expounded upon in [a LessWrong post](http://lesswrong.com/lw/or/joy_in_the_merely_real/ "Joy in the Merely Real"), as so many things are, but I can't help noticing that the source of the quote doesn't seem to appear on the Internet. If anyone knows where the quote appears, please let me know! It may turn out to be another Einsteinism - a word I hereby coin to mean "something misattributed to a(n) historical figure whom we think of as wise" - but the quote itself would be undiminished.

Another niche in the language is "[evilogue](http://www.cracked.com/article_18798_6-words-that-need-to-be-invented-5Bcomic5D.html "Evilogue")" - don't click any links on that page, as Cracked is the third-hardest website on the Internet to escape, after [TV Tropes](http://tvtropes.org) and the [SCP wiki](http://scp-wiki.net). An evilogue is claimed in a situation in which someone has asked you for your opinion of (for example) a company, and you hate that company without at this time being able to recall any specific evidence. Then you may state that you have an evilogue, meaning that if ey wants you to, you will find the evidence you were referring to, at your leisure. (Beware, of course, of being unduly influenced by your past opinion - if in the course of your research you find your concerns to be unjustified, do tell the other person and update accordingly. You shouldn't be looking for new evidence, but finding the evidence you used originally.)

My final bestowal on the English language (for the moment) is the word "yop", being a "yes" in response to a negative question. When asked "So I'm not the Pope after all?", the correct answer for most people would be "No" (you're not the Pope); the answer to "So I'm not sentient after all?" would usually (but not necessarily, according to [John Searle](https://en.wikipedia.org/wiki/Philosophical_zombie "P-zombie")) be "Yop" (you are sentient). This avoids the needless ambiguity of "Yes" or the prolixity of "No [or Yes], you are sentient".

[Pretty Rational]: https://web.archive.org/web/20151016143228/http://prettyrational.com/
[reality]: https://web.archive.org/web/20141012121546/http://prettyrational.com/wp-content/uploads/2013/06/PrettyRational_Reality.jpg
@@ -0,0 +1,31 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-04T00:00:00Z"
aliases:
- /uncategorized/cambridge-undergrad-maths-tips/
- /cambridge-undergrad-maths-tips/
title: Cambridge undergrad maths tips
---
I wrote this when I was excessively bored during exam term of my first year. It may grow as I get better at working (I'm something of a [revisionist](https://en.wikipedia.org/wiki/Ministry_of_Truth)). The advice is entirely Cambridge-based; a lot of it probably applies to other places with minor alterations. Most of it comes from personal experience.

During a supervision, your supervisor will be writing all the time. As soon as you leave the supervision, mark the sheets that are particularly important in some obvious way (e.g. by colouring in the corner). That way, when you're frantically flicking through the notes at the end of the year, you'll see where the information you need is. By "most important", I mean the places where the supervisor explains something fundamental to many questions, rather than the ins and outs of one particular question.

Use Anki during the course - after each lecture, add the key factoids to your Anki deck for that course. (It's a bit annoying to do at the time, but it's seriously *so* much easier this way when it comes to the end of the year.) Try to get into the habit of doing some Anki every day. Remember that Anki does LaTeX!

If there's anything you don't understand, email your supervisor quickly. Some supervisors are absolutely brilliant at replying to emails, but all of them will reply eventually. If you have a DoS [director of studies] who is all-powerful (my first-year DoS was head of almost everything in the maths department), most of your requests can be granted (even possibly to the extent of shuffling lecture times around at the start of the year, if given plenty of warning and a *very* good reason).

It's going to feel weird at first, but you almost certainly aren't the best in the year - you're likely to be average. (That's what "average" is usually taken to *mean* - pun not originally intended.) This means that the lecturer probably isn't interested in hearing your pedantry or requests for rigour during the lecture. It's the supervisor's job to clear up points that you didn't understand. If no-one you've spoken to understands something from the lecture, then it might indeed be the lecturer's fault; in that case, email the lecturer, or go down to speak to them at the end of the lecture. If you notice something wrong that the lecturer's written, then unless you're absolutely sure the lecturer's made a mistake, check with the person next to you before calling it out. Protocol is to wait for a brief pause in speech before shouting "Should *this bit* be *this* instead of *what's on the board*?" - try to be as specific as you can, saying (for instance) "In your statement of Theorem 16, the first line says 'f is differentiable' - should that be 'g is differentiable'?". Most people are not specific when they spot a problem, and it makes it much harder for the lecturer to diagnose the problem if they don't know exactly where it is.

Don't let your sleep cycle get too out-of-sync. It's absolutely fine (after the first couple of weeks of term, anyway) to go to bed at whatever time you're tired - in my experience, everyone else is also tired and welcomes the chance to sleep. This is put on hold during the first couple of weeks of term, because that's when everyone's excited to be there and there's not too much work.

If you have anything impairing your work that your DoS could conceivably help with, raise it as soon as you can. The earlier your DoS knows about it, the earlier something can be done, and your DoS is paid to worry about this sort of thing.

If both you and a friend are having trouble working, go together to the library and work next to each other. You might find it helpful to view it as a competition between the two of you, or as a "suffering in comradeship" kind of thing. Maintain an absolute rule of "no talking to each other", though. Schedule a break every 45 minutes or so: go outside and stretch your legs, and at the start of each 45-minute block you can ask each other about things you got stuck on in the previous block.

You will not be able to do every question on the example sheets [problem sheets you do as homework] easily. You're expected to have a good go at them all, but not to complete them (that would be a bonus). For those questions you can't do, pretend you are in an interview: write down your thought processes, what you've tried and why it failed. Pretend you're trying to appear really intelligent and solution-seeking in front of a prospective employer.

In your answers, use lots of words; your answer should not just be a list of equations, but a coherent argument. It's a hundred times easier to mark if you explain every step properly, and it means you can go back over it at revision time; and it's not that hard to do at the time. If you find that you pick up your work before a supervision and have no idea what you're wittering on about, you need to make your answers clearer.
@@ -0,0 +1,23 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-06T00:00:00Z"
aliases:
- /uncategorized/cambridge-vocab-a-guide-for-the-mystified/
- /cambridge-vocab-a-guide-for-the-mystified/
title: Cambridge vocab - a guide for the mystified
---
There is an awfully large collection of confusing words you will encounter on first coming to study at Cambridge. You pick them up really quickly in the natural run of things, but I thought perhaps a mini-dictionary might be helpful. The list is alphabetised (if I'm competent enough, anyway) and may, like so many of my writings, grow. Apologies for my crude attempts at pronunciations for the non-obvious words, but it's very hard to find someone who can read [IPA](https://en.wikipedia.org/wiki/IPA).

* **Boatie** - one of the many people who row. Rowing is a very big thing at Cambridge, and some people are extremely dedicated to it (to the extent of getting up at six in the morning to train).
* **Formal** - a contraction of "**formal hall**": an event in which you are served a three-course meal in college. The number of courses may vary between colleges, though I'm not aware of any non-three-coursers; exceptions also apply on special occasions, so the Jesus Christmas formal had seven courses (if I recall correctly). Usually you would wear a suit or mid-scale posh dress, with gown. Most formals start and end with a Latin grace. This is probably the closest experience to Hogwarts that Cambridge has to offer. You would almost always go to formal with other people you know (booking en masse), as a celebration (such as for birthdays).
* **Mathmo** - a mathematician. The word is used to refer both to maths students and also (less commonly) to people who may not be studying maths but who share the mildly Aspergers-y traits of stereotypical mathematicians. The word is very adaptable - so, for instance, a Trinity mathmo might be referred to as a **Trinmo**, a mathmo who enjoys applied courses rather than pure courses might be referred to as an **appliedmo**, and so forth. It can also (in some circles) be femininised as **mathma**.
* **Muso** - a music student.
* **Natsci** (pron. "nat-ski") - a contraction of Natural Sciences, the subject studied by anyone who wishes to study a scientific subject. People studying (say) Biology would apply for Natsci, and then specialise later through judicious choice of courses. The Natscis are broadly subdivided into **Physnatscis** and **Bionatscis**. Also refers to Natsci students.
* **Pennying** - a drinking game (in the loosest sense of the word "game", even for drinking games) fairly common across the UK, as far as I can tell. To my knowledge, the rules differ between Oxford, Durham and Cambridge; I present the Cambridge rules. If your drink is sitting on a surface, without your hand being in contact with the glass, anyone else (though decorum dictates that this may only be done by people who are themselves drinking alcohol) is at liberty to drop a penny into the glass, whereupon you are honour-bound to down the drink. "An empty glass is a full glass" - that is, if an empty glass is pennied, you must fill your glass with drink and then down it. For this reason, it is wise to keep some liquid in your glass at all times. If you catch the penny in your teeth as you finish your drink, the pennier must down eir drink in turn. A "double penny" occurs when two people penny the same drink; in this situation, the second pennier must down eir drink, and the one who is pennied does not have to do so.
* **Staircase** - the generic term for where students live if they are in college - the very vertical equivalent of a block of flats. They are essentially the same as dormitories, and usually have their own kitchen(s). A house owned by the college and used as accommodation can be referred to as an **external staircase**.
* **Swap** - a sort of cross between a party and a speed-dating event. Usually they take the form of a formal (see above) or a trip to a local curry-house. They are designed to get lots of people who share an interest, or some sort of connection, to know each other very quickly. The Christ's College hockey team might **swap with** the Jesus hockey team, for example, meaning that the teams go to a formal (or curry-house) and have a meal. Swaps are usually pretty ad hoc; they are planned entirely by the people who are swapping.
* **Tripos** (pron. "try-poss") - the [Wikipedia article](https://en.wikipedia.org/wiki/Tripos "Tripos Wikipedia page") says it all, really, but this is the term used to refer to a course of study (the Mathematical Tripos, or the Historical Tripos, for example).
24
hugo/content/posts/2013-07-07-mundane-magics.md
Normal file
@@ -0,0 +1,24 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2013-07-07T00:00:00Z"
aliases:
- /psychology/mundane-magics/
- /mundane-magics/
title: Mundane magics
---
I have stumbled across a LessWrong post on the importance of [seeing what is real for just how cool it is](http://lesswrong.com/lw/ve/mundane_magic/ "LessWrong post on Mundane Magic"). It lists such examples as:

* *Vibratory Telepathy*. By transmitting invisible vibrations through the very air itself, two users of this ability can *share thoughts*. As a result, Vibratory Telepaths can form emotional bonds much deeper than those possible to other primates.
* *Psychometric Tracery*. By tracing small fine lines on a surface, the Psychometric Tracer can leave impressions of emotions, history, knowledge, even the structure of other spells. This is a higher level than Vibratory Telepathy as a Psychometric Tracer can share the thoughts of long-dead Tracers who lived thousands of years earlier. By reading one Tracery and inscribing another simultaneously, Tracers can duplicate Tracings; and these replicated Tracings can even contain the detailed pattern of other spells and magics. Thus, the Tracers wield almost unimaginable power as magicians; but Tracers can get in trouble trying to use complicated Traceries that they could not have Traced themselves.

I thought I would give a few more. First, I hereby rename *The Eye* (as that post's author names this ability) to *Force Perception*, and I dub a user of any of these magics a Mage.

* *Modular Incarnation*. An extremely powerful technique that allows enormous flexibility of function, Modular Incarnation is a method of creating superstructures out of tiny Modules, each specialised for a specific task. Out of a single generic Module, a huge variety of specialised Modules can be created, which together can be assembled into structures which can channel various other magics, including Modular Incarnation itself. Thus can an Incarnator increase eir abilities by leaps and bounds from the moment of the birth of eir Incarnatory power. The Incarnator must be wary of this ability, for in its nigh-unimaginable power lies the danger of upsetting the balance of the Modules: an Incarnator can become overrun by eir own frantically replicating Modules, the tide of which is as yet very hard to stem, even using the greatest achieved extent of the Ultimate Power.
* *Elemental Shielding.* Users of this passive ability are granted a flexible, regenerating defence against fire, earth, air and water. It also gives the user a constant diagnostic of eir surroundings, allowing the Shielder to understand what adjustments to make to eir environment *without even thinking about it*.
* *Infiltration Adaptation.* One of the most successful forms of attack that can be made on a Mage is the insertion of weapons so small that even the greatest of Force Perceptors cannot detect them. These weapons are a perversion of Modular Incarnation, and as such have the potential to be immensely powerful, but users of the Infiltration Adaptation ability can detect and neutralise them by creating a defence consisting of many thousands of Modules, each tailored to be highly effective against a single weapon that was once used against the Mage. In this way, each unsuccessful attack strengthens the Mage: after only a short period to gather eir strength, the Mage recovers, usually with no discernible damage dealt.
* *The Web of Pure Extraction*. Among the many ways to apply the Ultimate Power, the WPE may be one of its purest instantiations. Thousands of Extractors have together spent thousands of years in building a magnificent edifice which lies just outside this world, intersecting everywhere yet nowhere tangible. Through this power, Extractors can predict with staggering accuracy the outcomes of events happening at all scales, from the level of the fabric of reality itself up to levels encompassing all that is known to exist, and even further. Extractors can use the structure already created to solve problems that no other art can; and the structure is so well integrated with itself that particularly strong Extractors may use parts of the structure to affect other parts which lesser Extractors deem totally unrelated.
* *The Web of Mental Distribution*. Closely related in structure to the Web of Pure Extraction (to the extent that its name even derives from the WPE), the WMD represents the culmination of decades of work to integrate the arts of Psychometric Tracery with the Force. Through use of an abstracted version of Psychometric Tracery, users of the WMD may share thoughts across enormous distances and times, connecting all Distributors to better fuel the Ultimate Power.
@@ -0,0 +1,40 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-08T00:00:00Z"
aliases:
- /uncategorized/an-obvious-improvement-to-tennis/
- /an-obvious-improvement-to-tennis/
title: An obvious improvement to tennis
---
So yesterday the [Wimbledon tennis tournament](https://en.wikipedia.org/wiki/The_Championships,_Wimbledon) was decided. The system for verifying whether the tennis ball is out or not (and hence whether play for the point stops or continues) on the main courts is as follows:

1. The ball lands.
2. The linesperson keeping charge of the line nearest to the landing point of the ball works out whether the ball landed inside or outside the region demarcated by the line.
3. The umpire decides whether or not to overrule the linesperson's decision.
4. The [Hawkeye](https://en.wikipedia.org/wiki/Hawk-Eye) ball-tracking system determines whether the ball landed inside or outside the region demarcated by the line.
5. If either player disagrees with the official decision (that is, if the linesperson called "out" when the player thought the ball was in, or the linesperson was silent when the player thought the ball was out, or if the umpire overruled a decision that the player thinks was correct) then that player informs the umpire that ey wishes to "challenge" the linesperson. In this instance, the Hawkeye reading is consulted (and the ball's trajectory slowly animated on a big screen, for added tension) and regarded as definitive.

The problem I have with this system is the process of "challenging". Each player starts out with a challenge count of three. If a player makes a challenge, and Hawkeye contradicts the official call, then the challenge count is maintained at its current level. If a player makes a challenge, and Hawkeye agrees with the official call, then the challenge count for that player is decremented. A player cannot challenge if eir challenge count is 0. On entering a tie-break, each player's challenge count is incremented.

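The challenge-count rules amount to a small state machine; here is a minimal sketch (the class and method names are my own invention, not anything the tournament actually runs):

```python
class ChallengeCount:
    """One player's challenge count, following the rules described above."""

    def __init__(self, initial=3):
        # Each player starts out with a challenge count of three.
        self.remaining = initial

    def challenge(self, hawkeye_overturns_call):
        """Make a challenge; the count drops only if Hawkeye upholds the official call."""
        if self.remaining == 0:
            raise ValueError("no challenges remaining")
        if not hawkeye_overturns_call:
            self.remaining -= 1

    def enter_tiebreak(self):
        """On entering a tie-break, each player's count is incremented."""
        self.remaining += 1
```

Note the asymmetry: three unsuccessful challenges leave a player unable to challenge at all, however wrong a subsequent call may be.
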
This resulted in an unhappy event in the last match of the Wimbledon tournament. The player who went on to lose ([Novak Djokovic](https://en.wikipedia.org/wiki/Novak_Djokovic)) used up his three permitted challenges in unsuccessful attempts to overrule the official rulings. Then in one particularly close game, Djokovic was denied a point when his opponent's shot was deemed to be "in". He became angry (displaying the unfortunate tendency of professional sports players to throw temper tantrums at the drop of a hat) and shouted at the umpire that the call should be overruled. He had no challenges remaining, and so could not force the official decision to be reassessed; I suspect his attitude very much did not help to press his case at this point. Later, the commentators showed the Hawkeye ruling to the TV broadcast; the opponent's shot was in fact "out", and Djokovic was vindicated. As I say, he went on to lose (pretty comprehensively, I gather, although I didn't really pay attention); it is conceivable, though admittedly unlikely, that this dispute cost Djokovic the match.

My question is this: why do we rely on linespeople to do that which is done better by Hawkeye?

Would it not be massively more sensible if the linespeople were allowed to do exactly what they normally do (as a salve to those who do not wish to sully the tradition), but the umpire were provided with Hawkeye's ruling after every point so that ey could overrule as necessary? This changes nothing except the umpire's ability to carry out a task ey already has to do. Of course, in instances where Hawkeye is unavailable, such as on the lower courts at Wimbledon, nothing need change.

Hawkeye supposedly has an average error of 3.6mm, roughly equivalent to the fluff on the ball. I propose that the umpire should be provided with the possible error along with Hawkeye's decision, and that it should be down to eir judgement which verdict to accept in such tight cases that Hawkeye might have made an error. (I would suggest defaulting to the normal method of judgement in that case - that is, "continue as if Hawkeye had not been invented".)

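The proposed default can be sketched in a few lines (purely illustrative; the function name is mine, and the margin figure is the 3.6mm quoted above):

```python
HAWKEYE_AVG_ERROR_MM = 3.6  # Hawkeye's quoted average error

def umpire_verdict(distance_from_line_mm, hawkeye_call, linesperson_call):
    """If the ball landed within Hawkeye's error of the line, fall back to the
    traditional judgement ('continue as if Hawkeye had not been invented');
    otherwise accept Hawkeye's ruling."""
    if abs(distance_from_line_mm) <= HAWKEYE_AVG_ERROR_MM:
        return linesperson_call
    return hawkeye_call
```

So `umpire_verdict(2.0, "out", "in")` defers to the linesperson, while a ball a clear centimetre out goes with Hawkeye.
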
The only reason that I can think of to limit the number of allowable challenges is to prevent time being wasted in the administrative process. However, umpire overrules (and challenges themselves) happen rarely enough that I think the following procedure would be quite sufficient:

1. The ball lands.
2. The linesperson keeping charge of the line nearest to the landing point of the ball works out whether the ball landed "in" or "out".
3. Hawkeye determines whether the ball landed "in" or "out".
4. The umpire reads Hawkeye's decision off a screen.
5. The umpire decides whether or not to overrule the official call.
6. If the umpire decides to overrule the call, the ball's trajectory is animated slowly on a big screen.

Now, this does (of course) do nothing to resolve the problem of conflicting verdicts during a very fast rally - the umpire cannot concentrate on both the game and the Hawkeye reading at the same time. But then there's no existing solution to that problem anyway, and I do not propose to resolve this problem at the current time.
@@ -0,0 +1,32 @@
---
lastmod: "2022-08-21T10:39:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-07-09T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-9th-july-2013/
- /stumbled-across-9th-july-2013/
title: Stumbled across 9th July 2013
---
Being bored over the summer holiday, I decided that I would document the cool things I ran across on the Internet. Over the last week, there have been many of these. If I see anything particularly amazing, it'll go in one of these aggregation posts.

* Neurons are surprisingly beautiful: <http://blog.eyewire.org/gallery/image-gallery/>
* A rather neat and very short story: <https://qntm.org/timeloop>
* A *bit* less short but just as good a short story: <https://qntm.org/responsibility>
* A rant with which students can all identify, in The Cambridge Student magazine: now lost from the Internet.
* An Easter Island word "tingo" means "to borrow objects from a friend's house one by one until there are none left": [link to the Internet Archive](http://web.archive.org/web/20100516040410/http://blog.web-translations.com/2008/12/toujours-tingo-words-that-dont-exist-in-english/)
* Musings on free will: <http://www.mit.edu/people/dpolicar/writing/prose/text/godTaoist.html>
* A thing that I just have to share again: <http://nextbigfuture.com/2013/06/technical-hurdles-have-been-overcome.html>
* The human brain is a really weird piece of kit: <http://lesswrong.com/lw/20/the_apologist_and_the_revolutionary/>
* We *have* to make one of these at some point: <http://www.pimpthatsnack.com/project/302/1>
* This is quite soothing in a weird kind of way: <https://thingsfittingperfectlyintothings.tumblr.com/>
* It is possible to be deficient in arsenic. (Link to the Soylent Discourse forum is permanently defunct.)
* A really useful website for when you don't want to have to spin up Wolfram|Alpha to work out time differences: <http://everytimezone.com/>
* Why never to talk to the police (seriously, never talk to the police): <https://www.youtube.com/watch?v=6wXkI4t7nuc>
* A fascinating book about the power of positive and negative reinforcement, and why they're often done wrongly: [Don't Shoot the Dog]
* The Church of England really took its time, but at last they've done it: <https://www.bbc.co.uk/news/uk-23215388>
* The Hawkeye Initiative, for the liberation of women in comics: <http://thehawkeyeinitiative.com/>

[Don't Shoot the Dog]: https://web.archive.org/web/20130206170903/http://www.papagalibg.com/FilesStore/karen_pryor_-_don_t_shoot_the_dog.pdf
@@ -0,0 +1,15 @@
---
lastmod: "2022-08-21T11:04:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-10T00:00:00Z"
aliases:
- /uncategorized/imre-leader-appreciation-society/
- /imre-leader-appreciation-society/
title: Imre Leader Appreciation Society
---
There was once a small website devoted to noting the more interesting quotes from our more idiosyncratic lecturers.
It sadly vanished from the web, although after some detective work, I found a copy floating around on one of Amazon's servers.
I stored the pages for posterity using the archival service WebCitation, which is itself now dead, so instead I shall link to [Konrad Dąbrowski's capture](https://www.konraddabrowski.co.uk/ilas/index.html).
@@ -0,0 +1,36 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- philosophy
comments: true
date: "2013-07-12T00:00:00Z"
aliases:
- /philosophy/a-framework-for-discussing-pricelessness/
- /a-framework-for-discussing-pricelessness/
title: A framework for discussing "pricelessness"
---
Sometimes some people argue that certain things are "priceless" - that is, worth an infinite amount of money to them. I posit that what this really means is that it would take work and uncomfortable imagination to evaluate the worth of that thing to them.

The example that triggered this framework was my evaluation of how much my sense of smell was worth to me. (It was late at night and I couldn't get to sleep, so I just let my mind wander around for a bit.) I was unable to quantify the amount I would pay to keep my sense of smell, but it is certainly finite, as the following thought experiment demonstrates.

Suppose that you are the Master [hmm, no gender-neutral version of that word exists, as far as I know] of the Universe. For the purposes of this discussion, humans haven't explored the rest of space, and so while you are the Master of everything, you don't actually know what the "everything" is - but it doesn't really matter to you, because there's so much you can do on Earth. Perhaps you'll branch out later. In the absence of your commands, the world ticks over much as it normally does, but if you want anything at all, you can issue a demand, and it will be met as soon as possible, by the people best-suited to dealing with it. You could, for instance, insist on being given a project to work on, which will lie within your range of abilities but will be nice and challenging, and will take you at least a week but less than a year. (This allows you to prevent yourself from becoming [a mere wanting-thing](http://lesswrong.com/lw/ww/high_challenge/ "LessWrong page on High Challenge"), if you don't want to be one of those.)

The penalty for abdication is pretty severe. You were elected Master of the Universe because you are the single person best suited to the role; no-one else can come close to your suitability, so to make sure you never abdicate, it is enshrined in immutable law (the only thing you can't change, in fact) that were you to abdicate, you would have everything taken from you, and would be dumped penniless without a single possession (including clothes) in the centre of London (or substitute place where it's really hard to get started in life). After all, reasoned the lawmakers, why on earth would you want to retire?

Now suppose that you are kidnapped, entirely by surprise, by a mad scientist. Ey says to you:

> I want to be Master of the Universe. If you don't elect me MotU, I will in my anger take away your sense of smell - but of course I don't have the power to take the Mastery of the Universe from you, so you'll still be MotU. But I am a merciful mad scientist, so I will give you this device that hooks straight into your brain and tells you what you would be smelling if you still had a sense of smell. That way you'll know whether your toast is burning - you just won't have the [quale], and I am so cunning that it will be beyond the ken of mortals to replace the quale. I will be so depressed that I will retire to the Bahamas [[capital Nassau](/anki-decks "My Anki decks, including capitals of the world")] and never trouble you again.
> If you hand the Mastery of the Universe to me, I will be ever so grateful - I will leave you with your sense of smell. But the penalty for abdication is pretty severe, as you know.
> Make your choice.

Of course, assume the [least convenient possible universe] when considering a thought experiment - for instance, assume that the smelling-device is no better and no worse than your nose at detecting chemicals, so that it is not an improvement to what is currently your sense of smell; assume that you never bothered to change the dictionary so that the penalty for abdication as outlined in law would no longer be what it says on the tin, etc.

In this thought experiment, it's a one-off: you lose your sense of smell and keep Mastery of the Universe, or you become absolutely nothing and keep your sense of smell. (A variant might be that the mad scientist replaces your senses one by one until you give up the Mastery.)

I rather suspect I would forgo the qualia associated with smell, in order to keep my Mastery of the Universe. This imposes an upper limit on the value of the qualia associated with my sense of smell - and hence my sense of smell cannot be priceless to me.

This framework is very flexible - it adapts to thinking about essentially anything. You may, of course, feel that you would give up the Mastery in order to retain your sense of smell; in that case, the thought experiment has given a lower limit, and your sense of smell could still be priceless, but at least you've actually thought about it.

[least convenient possible universe]: http://lesswrong.com/lw/2k/the_least_convenient_possible_world/

[quale]: https://en.wikipedia.org/wiki/Qualia
@@ -0,0 +1,24 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-07-13T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-13th-july-2013/
- /stumbled-across-13th-july-2013/
title: Stumbled across 13th July 2013
---
* This is really quite heartwarming: <http://www.reddit.com/r/Random_Acts_Of_Pizza/>
* Interesting article on current trends in fiction: <http://www.locusmag.com/Perspectives/2013/01/david-brin-our-favorite-cliche-a-world-filled-with-idiots-orwhy-films-and-novels-routinely-depict-society-and-its-citizens-as-fools/>
* A ridiculous reason for a rocket to explode: <https://arstechnica.com/science/2013/07/parts-installed-upside-down-caused-last-weeks-russian-rocket-to-explode/>
* A very information-dense way of storing data long-term: <https://phys.org/news/2013-07-5d-optical-memory-glass-evidence.html> (compare <http://rosettaproject.org/disk/technology/> which is much less information-dense but much more easily decoded in the event of being discovered after the collapse of civilisation)
* A cool thing to do with a Raspberry Pi and a microwave: <http://madebynathan.com/2013/07/10/raspberry-pi-powered-microwave/>
* I really want one of these - I think I might order one: <http://www.kickstarter.com/projects/cloud-guys/plug-the-brain-of-your-devices> (also, the word "plug" is insanely wonderful when spoken in a French accent)
* An interesting idea for making the world a better place: <http://web.archive.org/web/20130713135924/http://simulacrum.cc/2013/07/10/three-trends-that-push-us-towards-an-unconditional-basic-income/>
* A look at how to infer causality or not, as the case may be, depending on the data: <http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/>
* I hope they get to producing this quickly: <http://technabob.com/blog/2013/05/22/lumigrids-bike-leds/>
* Thank goodness for that - regular expressions are the most unreadable things ever: <https://phys.org/news/2013-07-ordinary-language.html>
* Something else I would do if I had eternity to play with: <https://www.youtube.com/watch?v=voB6WiP83NU>
* Glass ceiling issues: <http://whatwouldkingleonidasdo.tumblr.com/post/54989171152/how-i-discovered-gender-discrimination>
@@ -0,0 +1,83 @@
---
lastmod: "2022-08-21T10:47:44.0000000+01:00"
author: patrick
categories:
- philosophy
- psychology
comments: true
date: "2013-07-14T00:00:00Z"
aliases:
- /philosophy/psychology/prerequisites-for-hypothetical-situations/
- /prerequisites-for-hypothetical-situations/
title: Prerequisites for hypothetical situations
---
Usually when I discover (or, more rarely, think up) a thought experiment about a moral point, and discuss it with an arbitrary person whom I will (for convenience) call Kim, the conversation goes like this:

> Me: {Interesting scenario} - what do you think?
>
> Kim: I would just {avoids point of scenario by nitpicking}
>
> Me: You know what I meant. {applies easy fix to scenario to prevent nitpick}
>
> Kim: Well then, I'd {avoids point of scenario by raising unrelated moral issue}
>
> Me: That's not the point. The point is {point} - let's say I constructed the scenario to make {moral issue} not an issue.
>
> Kim: Hmm. {avoids point again}

And so on, and so on.

Now I have a platform on which to present the prerequisites for using hypothetical situations as aids to moral understanding.

# Logical rudeness

I have read two excellent pieces about logical rudeness - one [by Peter Suber][logical rudeness], and one on [LessWrong][lw logical rudeness].
Logical rudeness is a term used to denote a whole variety of techniques used to *appear to win arguments*, rather than to *address the issues at hand*.
I can't offhand think of a way to improve Eliezer Yudkowsky's explanation on the LessWrong page I linked, so I won't elaborate on it.

The main way people are logically rude with moral dilemmas [I suffered a little dilemma here myself, wondering whether to sound pretentious by pluralising as "dilemmata"] is in working out lots of ways in which your hypothetical situation could, in fact, not be about the point you want it to be about.
A paraphrased real-life example that actually happened to me:

> Me: \<explains the [torture vs. dust specks] moral problem\>
>
> Kim: But how can you possibly even contemplate torturing a person! You're an evil person!
>
> Me: I would contemplate torturing a person if it would avert some greater harm, yes. That's not to say I would torture a person.
>
> Kim: But torture! Evil!

This example shows Kim latching on to an emotional part of the hypothetical situation, and using it to launch an [ad hominem].
This is not only logically rude (I could have outlined any scenario at all, and included the word "torture", and got the same result; Kim ignores the effort I put into the explanation) but also verges on the socially rude.
(In the actual situation in which this happened, I lost my temper, I am ashamed to say; the discussion, which was between about ten people, quickly turned into what was essentially a shouting match, which was only dissolved when some of us insisted on watching the latest episode of Doctor Who.)
The key way to avoid this is to make sure that you never stop yourself considering something, and never condemn others for considering something.
It's a moral dilemma - you're meant to feel uncomfortable while thinking about it.
You shouldn't be afraid just to think something, and it takes some time and effort to learn [not to avoid uncomfortable thoughts](http://lesswrong.com/lw/21b/ugh_fields/ "Ugh Fields LessWrong post").
(Obviously, speaking those uncomfortable thoughts is certainly something to consider avoiding.)

# The Least Convenient Possible World

The other major way people avoid grappling with moral dilemmas is to say, "But your hypothetical situation doesn't actually work, because of \<this objection\>."
It's a very natural thing to do.
My major inspiration on this is the LessWrong post on [considering the least convenient possible world](http://lesswrong.com/lw/2k/the_least_convenient_possible_world/) during debates.
(As an aside, I'm not sure whether to use the word "argument", "debate" or "discussion" - an argument is a pointless thing, while a debate is something you enter with the aim of winning.
Neither of these is what I am actually talking about, but the word "discussion" is becoming a little monotonous.)

The usual situation: it's perfectly obvious to you (or at least would become so after five minutes of thought) what the flaws are in the presentation of the hypothetical situation, and it is probably abundantly clear that those flaws could be fixed, but because you want to *win the argument* rather than *address the moral issue*, you point out the flaws and waste the time of all concerned.

However, the aim of a moral discussion is not to prove yourself to be a better arguer, but to discover what your thoughts are on an issue you've never really seen before. If you are going to point out the flaws in the given situation, at least do so while presenting a solution. My usual tactic when someone (let's make it Kim again) presents me with a moral dilemma is to begin the discussion with something like:

> I presume we can ignore \<this flaw\>? I could fix it with \<very brief explanation of fix\>.

Invariably Kim will reply with something along the lines of "Yeah, that's what I meant" - and that is the signal for "I am trying to discuss a moral problem, not to construct a watertight scenario." If Kim were instead to respond with "Hmm, I hadn't considered that…", then that would be an indication that ey was looking for the implementation flaws in the situation ey had outlined. Then and only then would I generate more such flaws.

I'm not holding myself up to be a paragon of hypothetical-considerators, but I like to think that I'm a bit better at it than most people are. My overarching rule is:

> If either party in a discussion has become angry, you have failed.

Of course, [some people][trolls] just enter into arguments in order to make you or them angry (after all, it's quite fun to be angry about something that doesn't matter) - but if you actually want a fruitful discussion, avoid inflaming people.

[trolls]: https://en.wikipedia.org/wiki/Trolling

[ad hominem]: https://yourlogicalfallacyis.com/ad-hominem

[torture vs. dust specks]: http://lesswrong.com/lw/kn/torture_vs_dust_specks/

[logical rudeness]: https://dash.harvard.edu/bitstream/handle/1/4317660/suber_rudeness.html

[lw logical rudeness]: http://lesswrong.com/lw/1p1/logical_rudeness/
@@ -0,0 +1,48 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- philosophy
- psychology
comments: true
date: "2013-07-14T00:00:00Z"
aliases:
- /philosophy/psychology/the-multiple-drafts-view-of-consciousness/
- /the-multiple-drafts-view-of-consciousness/
title: The Multiple Drafts view of consciousness
---
|
||||
I've been reading one of [Daniel Dennett's](https://en.wikipedia.org/wiki/Daniel_Dennett) books, *Consciousness Explained*. Aside from the fact that the author has an incredible beard and is therefore correct on all matters, he can also write a very cogent book. In *Consciousness Explained*, Dennett outlines what he calls the Multiple Drafts approach to explaining consciousness; this blog post is my attempt to summarise that view in a couple of short analogies.
|
||||
|
||||
Dennett starts off by providing evidence that our time-perception is somewhat malleable: we can interpret [two dots of different colours][colour phi] (appearing separated by a short distance in time and space) as a single moving dot that changes colour abruptly at some point. The key puzzle here is that we perceive the colour to have changed *before* seeing the second coloured dot. Dennett then outlines what seem to be the two mainstream points of view on how this happens.
|
||||
|
||||
* The Orwellian view: that at the time of perception, we saw exactly what happened, and then we edit this after the fact to reflect a more logical sequence of events (à la [Minitrue]);
|
||||
* The Stalinist view: that the information is edited before even making its way into the consciousness.
|
||||
|
||||
Dennett points out that both of these options implicitly assert the existence of a "Cartesian Theatre" - a place where consciousness is experienced as information is gathered. In particular, the Stalinist view requires consciousness to be experienced after sufficient time has passed for some decisions to be made. By the way, in arguing against this supposition, Dennett doesn't mention that there is precedent for this kind of behaviour in the reflex action, which we explicitly only realise we have made after it has happened; but it's a minor point, since there are sound physiological reasons for why the reflex action doesn't come under conscious control (the signal for action never actually enters the brain, but is headed off at the brain stem). He then gives a third possible view - the Multiple Drafts model. In each of the next two analogies, I will liken the consciousness to a general in war, making decisions based on reports from the battlefield. In fact, Dennett argues that since the Cartesian Theatre does not exist (that is, consciousness isn't something that is recorded and played back to some internal watcher), this type of analogy is deeply flawed, and the third analogy will contain an appropriate adjustment.

Central to the analogies are two reports in particular:

1. "At location X at time 15:00:00, M happened", analogous to the report-to-the-consciousness "My hand tells me that I drew near to a source of intense heat at time \___";

2. "At location X at time 15:00:02, N happened", analogous to the report-to-the-consciousness "My eyes tell me that I touched the hot plate at time \___".

We consider the case that report 2 arrives before report 1 (even though report 2 describes events which occurred later than report 1) - this is quite conceivable given the distances that messages must travel in the nervous system. (Please ignore the fact that this effect probably works in reverse for this particular example, the eyes being closer to the brain than the hand - and assume that every decision is made in the brain, so that reflexes don't happen. It's harder than you might think to come up with something sufficiently urgent that isn't handled as a reflex!)

# The Stalinist analogy

In this version of events, the reports come in from the battlefield, and flow through the general's underlings. The underlings see that the reports are in the wrong order, and switch them round so that they are in the right order, before presenting them to the general in the order {2,1} to consider; they also decide that there is a missing piece of information [corresponding to the "change-in-colour-of-dot" situation, but that doesn't fit with this analogy] between reports 2 and 1, so they insert it. The general acts on the augmented reports, and they are then sent off to be filed away for future reference.

# The Orwellian analogy

In this version of events, the reports come in from the battlefield, but the underlings don't correct the order of the reports, so the general sees {1,2}. The general acts on the reports once they've both been received, noticing that some information seems to be missing and adding it in, and sends them off to be filed. The archivist sees that they are in the wrong order, and switches them round just before filing them.

# The Multiple Drafts analogy

In this version of events, the reports come in one by one from the battlefield, but there is no general - just a room full of underlings. The first report (which records a later event) comes in, and the underlings all update their states-of-mind accordingly. Then the second report (which records an earlier event) comes in; the underlings nearest the door update soonest, and the report makes its way around the room from underling to underling. The underlings act on the reports (Multiple Drafts doesn't address how this happens - for the purposes of this analogy, let it be by everyone shouting at once, and the majority view prevails). As time goes on, more reports flood in, but eventually every underling has received reports 1 and 2 (this may happen before or after the action based on those reports is taken), and the archivist-underling files what ey thinks happened.

Under Multiple Drafts, then, there is never a "point at which information enters the consciousness", but rather a "time interval in which information is making its way around the consciousness". The name of the model comes from an analogy to writing a summary of the events - starting from report 1, a summary is written; then report 2 is added, it progresses around the consciousness, and wherever it arrives, the summary is updated to reflect the new information. Thus there are *multiple drafts* of the summary at once. When the information is fully incorporated (that is, consensus has been reached on what the summary should contain), the consciousness is free to store the consensus draft in memory for future reference. Note that this could happen some time after the events described in the summary - Dennett is careful to separate "what happened" from "how the consciousness stores what happened".

The reason Multiple Drafts is so attractive is that there is no experimental way to differentiate between Orwellian and Stalinist. Either way, the subject of an experiment will report the same thing, so it is strange to draw a distinction between these two possible methods. Having noticed that Orwellian and Stalinist are indistinguishable, the natural question is "why do we think they are different?" - and it turns out that the only real reason is that we think there is a centre of consciousness, through which the information must flow. Only under that interpretation is there a difference between amending-before-consideration and amending-after-consideration. So we relax the assumption of a centre of consciousness, and we end up with a "smear" of time during which information is incorporated, rather than an absolute time of perception. (This is borne out by experiment, by the way - we are very flexible when it comes to simultaneous perception.)

This idea makes sense - we don't perceive space absolutely, and can happily work with receiving information about space at smeared-out times, adding more information to the model as we find out more. I nudge the table-leg with my foot, someone reacts, I am swinging my foot to kick it again, but just when I can no longer stop the kick I realise that it was in fact a human leg, the other person glares at me - my perception of the layout of space below the table has developed as new information came in, but out of sync with the information itself (the information bunched up and all came along at once). There is no particular reason why our time-perception should be any different.

The book is an excellent one, very coherently written - this blog post doesn't really do it justice (although that's the point of this blog - to get me practised at writing). As of this writing, I am only half-way through the book, but it is shaping up well.

[colour phi]: https://en.wikipedia.org/wiki/Color_Phi_phenomenon
[Minitrue]: https://en.wikipedia.org/wiki/Minitrue "Ministry of Truth"

@@ -0,0 +1,80 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- philosophy
- psychology
comments: true
date: "2013-07-18T00:00:00Z"
math: true
aliases:
- /philosophy/psychology/my-objection-to-the-one-logical-leap-view/
- /my-objection-to-the-one-logical-leap-view/
title: My objection to the One Logical Leap view
---

A large chunk of the reason why changing someone's mind is so difficult is that our deeply-held beliefs seem so obviously true to us, and we find it hard to understand why those beliefs aren't obvious to others. An example:

> A: A god exists - look around you; everything you see is so obviously created, not stumbled upon!
> B: No, that's rubbish - look around you, everything you see is easily explained by understood processes!

The basic problem here is that B *sees things differently* to A. Everything that B sees is automatically interpreted through the prism of "process that is understood", and that is a really hard thing to convey to A. The same evidence is spun in two totally different ways, and yet people argue as if the other party were only *one logical leap* away from coming round to their point of view.

Weirdly, I can't find a source for that phrase, though I have heard it before; in lieu of a source, I will (very briefly) summarise the viewpoint. I was under the impression that it is often trumpeted that at the heart of every argument is one logical leap (OLL) that makes the argument significantly different from the opposition's, and that if one could only convey why that point is so important, then one could sway everyone to that point of view. (As I say, I can no longer find anyone saying this.) There is implicit evidence that people think in this way: when people argue earnestly with each other, each in a genuine attempt to change the other's mind, they usually repeat the same argument again and again, as if it were simply a killer blow.

Now, quite aside from the obvious symmetry here (both sides feel that their point is the one thing that just needs to be understood), there is a deeper point to be drawn about how we think, one that exposes the OLL as a fallacy.

# Ideas are not atomic

There are precious few ideas that are what I call "atomic". An atomic idea is one that most observers will experience in essentially the same way. The idea that \\(1+1=2\\) is the closest I can get to an atomic idea - most people, I suspect, know that numbers can be added, and for small numbers, I posit that we all think of addition in the same way. Certainly the concept of "addition" is very much observer-dependent, in that a mathematician will probably have a very different view of addition to, say, a painter - but we have all been so well drilled that \\(1+1=2\\) that I suspect we all view it (not "addition", but "\\(1+1=2\\)") in the same way - as an isolated fact. By the way, the main difference in the concept of "addition" generally is, I think, that for a mathematician, addition is a small part of a much larger edifice (involving the Peano axioms and so forth), whereas I have met many people to whom "maths" is merely a collection of isolated computational techniques, for whom addition is simply an extra tool.

Most ideas are not like \\(1+1=2\\). If you were to get me to [free-associate](https://en.wikipedia.org/wiki/Free_association_%28psychology%29) on the word "death", for instance, my immediate reaction would probably be "bad, get rid of it". If you were to get J. K. Rowling to do the same, you'd probably get "inevitable, must reconcile with". (I base this on the final book of the Harry Potter series, in which a major theme is the portrayal of death as "the next great adventure".) "Death" is a concept which varies heavily from person to person - it is *not atomic*. In order to change someone's view of death, it is likely that (for most people) a large reshuffling of the worldview would have to take place - for me, you would probably have to do one of the following:

* weaken my "human life is to be desired" axiom (in the process drastically altering my aesthetic principles);

* prove to me that there was something desirable after death (in the process weakening my ultra-materialistic worldview);

* show me that there would be horrific consequences to the prolonging of life (but that wouldn't change my view that "it would be better if we could get rid of death").

Common to the two options that would actually change my mind (the first and second) is the requirement that you break down a key part of my worldview. I think this is why opinions are so hard to change - because they so quickly become very heavily bound up with the entire worldview. Few ideas have sufficient force to alter my entire model of the world (although they do exist: for me, one such idea was [Cached Thoughts](http://lesswrong.com/lw/k5/cached_thoughts/)). The "one logical leap" in an argument is merely the global interface of a particularly large chunk of world-model - the tip of an iceberg.

At this point, I will explicitly attempt to do what I have been claiming is impossible: to convey my worldview to you. I attempt this in order to show just how much worldview sits behind my simple opinion that "the One Logical Leap does not exist", and how much harder than it first appears it would be to change my mind on it. I very much doubt that I will succeed in explaining my mind to the extent that I would like, for reasons explained throughout this post. Anyway, I took fifteen minutes of introspection, and here (hopefully in a reasonably logical order) are the major areas of worldview on which this article rests. I will refer to these bullet points throughout the article, and will leave out my views on mathematics and death (which have already been mentioned, but are not central to my argument). It should go without saying that these are incomplete generalisations.

1. Thought is computation, mainly oriented around pattern-matching and caching

2. There is a correct answer to essentially every question - we just don't have the computational power (in our brains or otherwise), or are not using it correctly, to answer them

3. The process of speech is literally the process of sharing thoughts (to a lesser extent, my [non-rent-paying](http://lesswrong.com/lw/i3/making_beliefs_pay_rent_in_anticipated_experiences/) belief that the mind is an entity that is distributed across multiple brains, as Hofstadter outlines in his book *I Am a Strange Loop*)

4. There are low- and high-bandwidth ways to share thoughts (one-way blog posts are not high on the list of effective thought-sharing means), but we only really use low-bandwidth ones

5. The mind is a vast collection of models of the world, constantly reaching consensus to provide a single contiguous model

6. Humans are very bad at evaluating new ideas, and most of the thought happens below the level of consciousness

7. For most people, argument is a battle to prove yourself right

Of course, my statement of these aspects of my worldview is really inadequate - a soundbite summary of a seething mass of thought (viewpoints 4 and 5). As an exercise to the unusually-interested reader, you might find it interesting to go back and see where these views appeared implicitly up to now. I have tried to make this post as non-circular as I can, but it was harder than I expected to express that "the worldview is tightly bound up and hard to express". Now that I have these viewpoints explicitly labelled, I can outline my argument properly.

* People think in "one logical leap" terms - that is, they believe that B is only one short step of understanding away from coming round to A's viewpoint.

* Worldviews are very hard (and therefore slow) to convey: although we can share thoughts (viewpoint 3), we can't do it anywhere near fast enough (viewpoint 4) to put across what we call "a viewpoint" but is really a truly massive edifice (viewpoint 5).

* People therefore receive novel thoughts slowly enough that they have time to pattern-match some standard answers to them (viewpoint 1), and thereby avoid dealing with the "logical leap" the other party is trying to convey.

* Unless A is very careful, B will interpret A's argument as an attack on B's own worldview (viewpoint 7), and is thus incentivised to find objections.

* Hence, it is extremely hard to change a worldview.

* What I think of as "an obvious idea" is only obvious to me because that's how my pattern-matcher works (viewpoints 1 and 5).

* To change your pattern-matcher sufficiently to view my idea as "obvious" is to alter your worldview.

* Therefore, my "obvious idea" is outlandish to you, unless our worldviews are sufficiently well-aligned already.

An atomic idea, of course, doesn't suffer from this problem - it is seen by everyone in the same way, so it can just be packaged up and spoken, and understood as it was intended. Now, a single idea can be so powerful that it reshapes my worldview; or many different small, nearly-but-not-quite-atomic ideas relating to the same worldview can be presented, with my worldview adapting to accommodate these (viewpoints 1, 5 and 6); or I suppose I could maintain some kind of cognitive dissonance where an idea doesn't fit on top of my regular worldview, but I don't really count this as a good solution. But when conveying ideas, people don't use this fact-which-is-so-obvious-to-me, that it is hard to persuade people because the task is huge. (I hasten to add, by way of example, that it only became obvious after the considerable change in worldview brought about by reading most of LessWrong.) People almost invariably present to me a single idea without the supporting worldview (and of course I include myself as "people", but I do try to ameliorate the effect) - and then the idea has no worldview to slot into when it arrives in my brain, so I unconsciously and consciously find ways to reject it. To defeat this effect is the essence of a pretty big chunk of rationality - learning to recognise when you're automatically rejecting an argument, and stopping yourself - but that's a post for another time.

# What to do with this information

You may well say, "That's all very well, but what difference does it make?" - and that's a very natural question to ask, because (in my experience) the balance of probability suggests that you don't have the worldview which would make it obvious (and after all, you're reading this blog post to learn about my worldview). Over so short a period as the last few months, it has become a part of my worldview that people really don't like to evaluate arguments - probably because to do so means the other person has "won", as there's a reasonably effective social norm against adapting your view in response to evidence or argument. There are two very simply-stated ways you can use what I've been trying to convey of my worldview:

* Notice when you're rejecting an idea on worldview-incompatibility grounds, so that you can actually think about it rather than letting your already-stored model of the world decide for you.

* When you are trying to convey a point, and you have the luxury of time, give much more justification than you think should be necessary - and remember, when the other person is obtusely refusing to absorb your idea, that it's probably because you aren't conveying enough background-idea. Yes, it may be very hard and time-consuming to do enough of this, but at least it stops you from unconsciously or consciously thinking that the only reason the other person isn't taking in your idea is that ey's stupid - and in my experience, that seems to be a major driving force behind discussions developing into proper angry rows.

# Post Scriptum

This is far from the most coherent work to flow from my pen, I realise - it's an oddly hard thing to construct a cogent discussion on, because the argument is itself that the argument is hard to put across. My usual structure for an argument would be something like "Statement of argument, evidence, expound", but here my evidence is part of the statement of the argument, and my "expound" is also "evidence". That throws the whole structure into disarray. Ah well - let it stand as a key example in favour of the argument.


58
hugo/content/posts/2013-07-21-on-shakespeare.md
Normal file
@@ -0,0 +1,58 @@

---
lastmod: "2022-08-21T10:51:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-21T00:00:00Z"
aliases:
- /uncategorized/on-shakespeare/
- /on-shakespeare/
title: On Shakespeare
---

I've now seen two Shakespeare plays at the [Globe](https://en.wikipedia.org/wiki/Shakespeare%27s_Globe) - once in person, to see A Midsummer Night's Dream, and once with a one-year-and-eighty-mile gap between viewing and performance (through the [Globe On Screen](https://www.dramaonlinelibrary.com/shakespeares-globe-on-screen) project), to see Twelfth Night.
Both times the plays were excellent.
Both were comedies, and both were laugh-out-loud funny.

The performance of Twelfth Night, then, was beamed into a local-ish cinema for our viewing pleasure.
(Definitely more comfortable than the seating at the Globe, although I am reliably informed that if you go to the Globe, you really have to be a groundling, standing at the front next to the stage, in order to get the proper experience.)
My seat was next to those of some young-ish children.
The result of taking several young children to a three-hour performance of a play which isn't in Modern English was predictable, but it got me thinking.
(Bear with me - this will become relevant.)

I said that both the plays were laugh-out-loud funny.
I'll be talking about Twelfth Night, as that's the one I remember best (since it happened in the past week).
In fact, it started out pretty dull - I was completely lost for the first five minutes while some Count or other pontificated about how much he loved a reclusive Lady.
I was only able to get the gist of what he was saying, by snatching out some words every now and again.
However, as soon as the Count got off-stage, the play picked up immensely, and became properly funny.
It was really noticeable that Shakespeare was writing in two different registers - the posh one, with the Count wittering on in soliloquy, was all but incomprehensible to me, while the standard register, in which everyone else spoke, was pretty much just English.

It is also really hard to grasp the nature of Shakespeare's humour just from reading the plays.
Once they are performed, however, it becomes immediately obvious that every other line is an innuendo of some sort.
Even while the Count is talking, Shakespeare gives him double-entendres ("How will she love, when the rich golden shaft/ Hath kill'd the flock of all affections else/ That live in her…") - we are clearly meant to be laughing at him, such a serious character accidentally making ribald puns - and once the silly characters come on, the humour just gets coarser.
You don't see that from the script unless you're actually looking for it - but actors can make so much more of it, with their freedom to move around and inflect.
In fact, with the exception of the wordplay of the Fool and the plot-based shenanigans (twins being mistaken for one another, and so on), I would say that well over half the humour in Twelfth Night is sexual in nature.

Cue smooth segue to the English National Curriculum, which seems desperate to get children learning Shakespeare.
Thankfully, [Michael Gove](https://en.wikipedia.org/wiki/Michael_Gove) doesn't seem to have gone sufficiently mad as to insist on its teaching in primary school (that is, from the age of 4 to 11), but before his reforms take effect in 2014, it is/was required (link now dead) that pupils be taught at least one Shakespeare play in Key Stage 3 (that is, aged 11 to 14).
I can't find information about the draft 2014 curriculum for Key Stage 3, but I'm sure Shakespeare appears in there too, given Gove's attitudes to pedagogy.

I just don't understand why pupils are taught Shakespeare at such a young age.
I speak for myself here, but 14 is really not old enough to understand the main source of humour (innuendo) in Shakespeare's plays.
Shakespeare's comedies are full of it - essentially all the non-plot-based humour is sexual in nature.
It is bizarre that pupils who are too young to understand the humour should be taught to analyse it.
The language is difficult enough to read (another hurdle that simply goes away when the play is acted properly), but the plays are simply horrendously drab unless you are able to grasp the humour - when you remove three-quarters of the humour from a comedy, what is left?

Aside from the fact that such young children can't really understand the humour, it's also difficult for a teacher to teach, unless that teacher is one of a very unusual breed who can talk to eir pupils candidly about anything at all without it feeling awkward.
Most teachers would find it much easier simply to ignore the double-entendres in the first place - I know that when I was taught A Midsummer Night's Dream in Year 6 (aged 10-11), my teacher focused entirely on plot, but the plot of AMND is nothing special.
The same happened when I was subsequently taught AMND in Year 8 (aged 12-13) - even worse, we were shown a film adaptation that was just not funny.
(It may be that I was too young to be amused by Shakespeare-humour, but I actually think the film didn't portray it at all.)

So we have this strange situation of young children being taught centuries-old plays, of which they understand neither the content nor the syntax.
There is absolutely no reason for a pupil to find Shakespeare relevant or useful in any way, taught like this.
It's a shame, because the simple fact that "Twelfth Night is laugh-out-loud funny" is enough to tell me that Shakespeare is relevant.
There are a couple of interesting historical notes to be gleaned from it - for instance, the treatment of the puritanical Malvolio, almost the only character not to receive a happy ending (the others being the pirate and the Fool), seems to show that people really liked to put down killjoys back then, in contrast to our view now (I find Malvolio's plight rather sad, and so does everyone else I've spoken to).
But that's not really why I think Shakespeare is relevant - I think his plays are relevant in much the same way that I think the Marx Brothers' films are.
They are really entertaining plays.
Humour seems not to have changed very much over the last few centuries.
Taking pupils at a young age, and turning them off good plays which are part of our cultural heritage, is something of a travesty.

28
hugo/content/posts/2013-07-22-the-orbitstabiliser-theorem.md
Normal file
28
hugo/content/posts/2013-07-22-the-orbitstabiliser-theorem.md
Normal file
@@ -0,0 +1,28 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2013-07-22T00:00:00Z"
math: true
aliases:
- /mathematical_summary/the-orbitstabiliser-theorem/
- /the-orbitstabiliser-theorem/
title: The Orbit/Stabiliser Theorem
---

The Orbit/Stabiliser Theorem is a simple theorem in group theory. Thanks to [Tim Gowers](https://gowers.wordpress.com/2011/11/09/group-actions-ii-the-orbit-stabilizer-theorem/) for the proof I outline here - I find it much more intuitive than the proof that was presented in lectures, and it involves equivalence relations (which I think are wonderful things).

Theorem: let the group \\(G\\) act on a set \\(X\\), and fix \\(x \in X\\). Then \\(\vert \{g(x) : g \in G\} \vert \times \vert \{g \in G: g(x) = x\} \vert = \vert G \vert\\).

Proof: We fix an element \\(x \in X\\) (note that \\(x\\) lives in the set being acted on, not in \\(G\\)), and define two equivalence relations on \\(G\\): \\(g \sim h\\) iff \\(g(x) = h(x)\\), and \\(g \cdot h\\) iff \\(h^{-1} g \in \text{Stab}_G(x)\\), where \\(\text{Stab}_G(k) = \{g \in G: g(k) = k\}\\).

Now, these are the same relation (we will check that they are indeed equivalence relations - don't worry!). This is because \\(g \sim h \iff g(x) = h(x) \iff h^{-1}g(x) = x \iff h^{-1}g \in \text{Stab}_G(x) \iff g \cdot h\\).
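
To make this concrete, here is a quick computational check (my own illustration, not part of the original proof) that the two relations coincide for the symmetric group \\(S_3\\) acting on \\(\{0, 1, 2\}\\), with \\(x = 0\\):

```python
# Check that g ~ h (i.e. g(x) = h(x)) and g . h (i.e. h^{-1} g in Stab_G(x))
# define the same relation, for S_3 acting on {0, 1, 2} with x = 0.
# (The choice of group and action is my own example.)
from itertools import permutations

G = list(permutations(range(3)))  # S_3 as tuples: g[i] is the image of i
x = 0

def compose(g, h):
    # (g o h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inverse(h):
    inv = [None] * 3
    for i, image in enumerate(h):
        inv[image] = i
    return tuple(inv)

stabiliser = {g for g in G if g[x] == x}

for g in G:
    for h in G:
        sim = g[x] == h[x]                          # g ~ h
        dot = compose(inverse(h), g) in stabiliser  # h^{-1} g in Stab_G(x)
        assert sim == dot
```

(Here \\(\sim\\) splits \\(S_3\\) into three classes of two elements each: one class per point of the orbit, each class the size of the stabiliser.)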

And \\(\sim\\) is an equivalence relation, almost trivially: it is reflexive since \\(g \sim g \iff g(x) = g(x)\\) is obviously true; it is symmetric, since \\(g \sim h \iff g(x) = h(x) \iff h(x) = g(x) \iff h \sim g\\); it is transitive similarly.

Now, it is clear that the number of equivalence classes of \\(\sim\\) is just the size of the orbit \\(\{g(x), g \in G \}\\), because for each equivalence class there is one member of the orbit (with \\([g]\\) representing \\(g(x)\\)), and for each member of the orbit there is one equivalence class (with \\(g(x)\\) being represented solely by \\([g]\\)).

It is also clear that the size of the stabiliser \\(\text{Stab}_G(x)\\) is just the size of an equivalence class \\([g]\\) of \\(\cdot\\): for each member \\(s\\) of the stabiliser, we have \\(g \cdot (g s)\\), so \\(\vert [g] \vert \geq \vert \text{Stab}_G(x) \vert\\); while for each member \\(h\\) of \\([g]\\) we have \\(h^{-1}g \in \text{Stab}_G(x)\\) by definition of \\(\cdot\\) - but all these \\(h^{-1}g\\) are different (because otherwise we could cancel a \\(g\\)), so \\(\vert [g] \vert \leq \vert \text{Stab}_G(x) \vert\\).

And the equivalence classes of \\(\sim\\) (which is the same relation as \\(\cdot\\)) partition the set \\(G\\), so (size of an equivalence class) times (number of equivalence classes) is just \\(\vert G \vert\\) - but this is exactly what we required.
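
As a final sanity check of the count (again my own illustration, not from the original post), the theorem verified numerically for \\(S_3\\) acting on \\(\{0, 1, 2\}\\):

```python
# Verify |Orbit(x)| * |Stab(x)| = |G| for S_3 acting on {0, 1, 2}.
# (Illustrative example; the choice of group is my own.)
from itertools import permutations

G = list(permutations(range(3)))  # S_3 as tuples: g[i] is the image of i

for x in range(3):
    orbit = {g[x] for g in G}                  # {g(x) : g in G}
    stabiliser = [g for g in G if g[x] == x]   # {g in G : g(x) = x}
    assert len(orbit) * len(stabiliser) == len(G)  # 3 * 2 = 6 for every x
```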

@@ -0,0 +1,25 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-07-24T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across/
- /stumbled-across/
title: Stumbled across 24th July 2013
---

* This is something I will try at some point, probably when I get back to uni: <http://www.bulletproofexec.com/how-to-make-your-coffee-bulletproof-and-your-morning-too/>
* This was fun: <http://www.sporcle.com/games/Government_Agent/true-or-false-logic-quiz>
* Hah - stupid copyright owners: <https://torrentfreak.com/hbo-wants-google-to-censor-hbo-com-130203/>
* The government's got around to allowing the testing of driverless cars: <https://arstechnica.com/tech-policy/2013/07/uk-govt-approves-autonomous-cars-on-public-roads-before-years-end/>
* An insightful comic about getting to sleep: <https://abstrusegoose.com/523>
* Roll on the cheap and easy satellites: <http://www.guardian.co.uk/science/across-the-universe/2013/jul/17/sabre-rocket-engine-reaction-skylon>
* A bunch of interesting sciency things, including a new application of zapping current through the brain: <https://arstechnica.com/science/2013/07/weird-science-always-runs-current-through-its-brain-before-speed-dating/>
* At last! <http://coderinaworldofcode.blogspot.co.uk/2013/07/50-shades-of-grey-made-illegal-in-uk.html>
* I didn't see this at the time - consider my faith in humanity restored: <https://www.bbc.co.uk/news/world-asia-pacific-13598607>
* Excellent essay on why it's hard to prohibit same-sex marriage: [cached][gender and same-sex marriage]

[gender and same-sex marriage]: http://web.archive.org/web/20140723074138/http://linuxmafia.com/faq/Essays/marriage.html

42
hugo/content/posts/2013-07-25-metathought.md
Normal file
@@ -0,0 +1,42 @@

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2013-07-25T00:00:00Z"
aliases:
- /psychology/metathought/
- /metathought/
title: Metathought
sidenotes: true
---

I have recently discovered the game of [Agricola](https://en.wikipedia.org/wiki/Agricola_%28board_game%29), a board game involving using resources (family members, stone, etc.) to build a thriving farm.
The game is turn-based, with the possible actions each turn being severely limited.
This makes the game largely about optimising under constraint (the foundation of any good game).
However, during gameplay I also detected a certain resonance between Agricola and the game of [Magic: The Gathering](https://en.wikipedia.org/wiki/Magic_the_gathering), beyond the usual "constrained optimisation" theme.
While I was playing Agricola, there was a kind of niggle in the back of my mind, telling me that "ooh, this is like Magic".

I notice a similar affinity when reading essentially anything by [Douglas R Hofstadter](https://en.wikipedia.org/wiki/Doug_Hofstadter), an author [famed](https://xkcd.com/917/ "xkcd I'm So Meta") for his "metaness".
That is, when reading a good Hofstadter piece, I get a similar niggle (considerably weaker than the Magic-Agricola one) telling me that "ooh, this is a bit like Magic".
Hofstadter invents puns and connections which feel so natural that you'd be forgiven for thinking that he had invented English specially for the purpose, were it not for the fact that his book [Gödel, Escher, Bach](https://en.wikipedia.org/wiki/Godel_escher_bach) was translated (I am told) with the same level of scintillation into at least {{< side right lang "eight other languages." >}}French, German, Spanish, Chinese, Swedish, Dutch, Italian and Russian, according to the bottom of <a href="http://tal.forum2.org/geb">Tal Cohen's review</a> (<a href="http://web.archive.org/web/20220428223304/http://tal.forum2.org/geb">cached</a>).{{< /side >}}
This leads me to wonder whether what I'm really noticing is not just constrained optimisation, but "metathought" - thought on a higher level of abstraction than the usual.
With Hofstadter, it's on the level of words as well as of the symbols of thought that the words invoke; with Magic, it's thinking about plans and strategies involving the other player(s) and the interactions between their cards and mine; with Agricola, it's thinking about the aims of the other player(s) and how best to compete for the limited actions available to us both.
I note that I don't feel the resonance with chess - archetypical of "deterministic games", where you know exactly what moves are available to both sides - so the resonance is not a marker for "putting myself in others' shoes".
Rather, it seems to be a marker of *interaction* - between players' plans, or between words and meaning, and so on.

Closely linked to this is the related concept of [introspection](http://lesswrong.com/lw/6p6/the_limits_of_introspection/). It's a [well-researched](https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect "Dunning-Kruger effect Wikipedia page") [fact](https://en.wikipedia.org/wiki/Four_stages_of_competence "Four stages of competence Wikipedia page") that people are ([in general](http://lesswrong.com/lw/1xh/living_luminously/ "Luminosity LessWrong page")) [bad](http://lesswrong.com/lw/i4/belief_in_belief/ "Belief in belief LessWrong page") at introspection (hence the existence of [behaviourism](https://en.wikipedia.org/wiki/Behaviorism) and [heterophenomenology](https://en.wikipedia.org/wiki/Heterophenomenology)). I've trained myself over the last year or so to be much better at introspection than I was [^uncertain] - I notice myself shying away from thinking things, I recognise when there's a specific thing I can't be bothered to think about, and so forth. Of course, I am (as yet) imperfect, but I am [trying not to be](http://lesswrong.com/lw/h8/tsuyoku_naritai_i_want_to_become_stronger/ "I want to become stronger").
|
||||
|
||||
How is this, as I have claimed, "closely linked"? I am slowly forming the opinion that it takes a reasonably good level of introspective ability just to be able to notice resonances between things. [^general] I am waiting for experimental evidence on this (and it is possible that my subjects are reading this blog, so I won't say what the tests are). {{< side right interrelation "However, I've noticed these resonances myself to a greater degree since learning introspection.">}}A possible explanation is that I've just been doing more interrelated things recently, so I would be very likely to spot more interrelations.{{< /side >}} The feeling of "affinity" between things is very difficult for me to describe - it's kind of a shade of extra interpretation laid on a concept, but it's not linked to any of the commonly-recognised senses, so English isn't very well set up to define it - but the feeling is very weak. I sometimes think of it as making an extra brushstroke on a watercolour - the added colour is there, but it's very slight - perhaps slight enough as to go unnoticed by someone who is not in the habit of noticing eir thoughts. It also feels like an area of light (in both senses - "not dark" and "not heavy") at the (literal) back of my mind. (Ah, how difficult to describe qualia accurately!) However it feels to me, it is my experience that people very rarely claim that one activity is similar to another in some abstract way (as I do with Magic and Gödel, Escher, Bach) - this may be because I don't notice it when they so claim, or that they never so claim because no-one else ever so claims, or that they never so claim because they don't notice the resonance, or that the resonance isn't actually there and I'm delusional (although in this instance that seems a bit unlikely, if I say so myself).
|
||||
|
||||
Why do I think that what I'm noticing is "metathought" rather than merely "constrained optimisation"? Well, I very rarely feel the resonance, and I'm always solving constrained optimisation problems without feeling the resonance (how succinct can I make this post, how many chocolates can I get away with eating…) so I suspect that it's not just the optimisation aspect. The only other link I have come up with at the moment is metathought. Magic, in particular, has the potential for very complicated interactions involving thinking hard about which strategies will be successful and when exactly to do things; Hofstadter's punning is ridiculously meta anyway; while Agricola is heavily based on working out what the opponents will be doing and taking that into account (that is, it requires *reflection*, a key component of metathought), while juggling your own strategies. I note for completeness that I read Gödel, Escher, Bach well before I discovered the game of Magic, and I didn't feel the resonance with GEB on first playing Magic - it was only once I'd played Magic that I started feeling the resonance. Alternatively put, I feel the "resonance with Magic", rather than "resonance with things in the class to which Magic belongs".
|
||||
|
||||
I get slight shades of the same resonance when solving crosswords, and maybe even sometimes when proving mathematical statements - but take this with a pinch of salt, because I've had time to create a pattern for "I feel this resonance when…", and it's much easier to fill that pattern than to actually work out whether I do feel that resonance. I explicitly noticed it and noted it to myself when playing Agricola and when reading the Ricercar from Gödel, Escher, Bach - any other examples are potentially suspect, now that I've thought the concept through, because the feeling of resonance is so weak compared to the thought "If my hypothesis is correct, I should feel resonance now". (I came to this realisation while writing this paragraph.) It would appear that I may have accidentally corrupted my ability to feel this "resonance" in weak cases. Unfortunately, this makes it very hard to provide further tests: in particular, I need cases when I would predict feeling resonance but in fact do not.
|
||||
|
||||
Anyway, I hypothesised that "resonance" is only felt by people who naturally or artificially have good introspection. I would be very interested to hear of evidence on this point - if you feel (with some kind of justification) that you have unusually good introspection, or if you think you have felt the kind of resonance that I describe (of course, my description was poor!), do let me know - I don't know which way causality runs, if any, and I would like to know whether it's just some oddity of my own, or whether It's A Thing that no-one bothers to mention for some reason.
|
||||
|
||||
|
||||
[^general]: The resonance which is the main subject of this post is a single instance of a more general class of relations - for instance, there is a different kind of resonance between Scrabble and [Countdown](https://en.wikipedia.org/wiki/Countdown_%28game_show%29).
|
||||
|
||||
[^uncertain]: Or at least, I hope I have - it certainly feels like it's working, but then again, it would probably feel like that if I were getting *worse* at introspection, because I'd be getting worse at telling whether I was getting better or not.
|
@@ -0,0 +1,21 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-07-29T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-2/
- /stumbled-across-29-july-2013/
title: Stumbled across 29th July 2013
---
* Hehe: <http://www.pixartheory.com/>
* Wow - light trapped for a full minute: <http://www.newscientist.com/article/dn23925-light-completely-stopped-for-a-recordbreaking-minute.html>
* The importance of a consistent utility function: <http://lesswrong.com/lw/my/the_allais_paradox/>
* Obama promised to be friendly to whistleblowers, and has quietly removed said promise: <http://sunlightfoundation.com/blog/2013/07/25/obama-promises-disappear-from-web/>
* I wholeheartedly agree with this site: <https://abandonmatlab.wordpress.com/>
* Good post on belief-in-belief: <http://web.archive.org/web/20120629012114/http://stairs.umd.edu/236/meta-atheism.html>
* Huh. A strange system, the US medical system: <http://stallman.org/articles/asked_to_lie.html>
* Very much this - about how the media has lost the plot about PRISMgate: <http://m.guardiannews.com/technology/2013/jul/28/edward-snowden-death-of-internet>
* Aaand my faith in humanity is once again shattered: <http://i.imgur.com/mdfvFA6.jpg>
@@ -0,0 +1,35 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-07-30T00:00:00Z"
aliases:
- /uncategorized/on-to-do-lists-as-direction-in-life/
- /on-to-do-lists-as-direction-in-life/
title: On to-do lists as direction in life
---
[Getting Things Done](https://en.wikipedia.org/wiki/Getting_Things_Done) has gathered something of a [cult following](http://web.archive.org/web/20130428015707/http://www.wired.com/techbiz/people/magazine/15-10/ff_allen? "Wired article on GTD") [archived due to [link rot][1]] since its inception. As a way of getting things done, it's pretty good - separate tasks out into small bits on your to-do list so that you have mental room free to consider the bigger picture. However, there's a certain aspect of to-do lists that I've not really seen mentioned before, and which I find to be really helpful.

My to-do list takes up a large amount of space on one of my virtual desktops (specifically, on [Dashboard](https://en.wikipedia.org/wiki/Dashboard_%28Mac_OS%29)). It consists of a large number of short-term goals, with some longer-term goals and a couple of very long-term goals mixed in. Sample:

> Library books: Flow, The Mind's I, Consciousness Explained
>
> Go and see the [Aurora](https://en.wikipedia.org/wiki/Aurora_borealis)
>
> See how many [taste buds](https://en.wikipedia.org/wiki/Supertaster "Supertaster") I have
>
> Update list of books on blog

There are very long-term goals like seeing the Aurora (which I intend doing during the next solar maximum in seven years or so), some goals which can be accomplished very quickly (like seeing whether I am officially a supertaster), an ongoing task (updating the blog) and a list of the library books I have out at the moment.

The reason I like this arrangement so much is that it doesn't make you feel bad to see a wall full of to-do items that you've not done. Because a fair few of the goals are so long-term, I expect to see lots of items on the list, so I don't get the sinking feeling when I see everything I have left to do. It also feels really good to tick off a long-term goal (my most recent being "Get a [Kindle](https://en.wikipedia.org/wiki/Amazon_Kindle)"), and it feels better than it otherwise would to tick off a short-term goal, since it is surrounded by things that I know won't get ticked off for a while, so it feels (by association) like a bigger accomplishment.

It also means that I should never forget to do something big that I want to do. So often, I hear people say "I wish I could… before I die", or similar. Now I have a system for recording all these things that cross my mind, so I will eventually get round to doing them. (I should note that on a fairly regular basis, I read through the whole list and work out which items are feasible right now - hopefully this will mitigate the "that's a long-term goal, ignore it" effect.) My goal to "play in the [Tallis Fantasia](https://en.wikipedia.org/wiki/Fantasia_on_a_Theme_by_Thomas_Tallis)" is one such entry.

I think that this kind of method of writing down goals could be used to create some sort of life direction. I've seen services into which you enter your long-term goals, and then when you complete one, you tell the system and you gain "experience points", levelling up after reaching a certain threshold of points. I like this idea, but I postulate that it encourages thinking of long-term goals as different things to short-term goals, and that this is not necessarily desirable. A goal is a goal; some are big-impact long-term things, some are big-impact short-term things, and so on; the system seems to create an artificial distinction between short-term and long-term. My system, in its simplicity, avoids this distinction. I can see a pattern of goals that reflects my future life; to get a bit soppy about it, I can see a much clearer "direction" this way, listing internships, the research I want to do for interest, a certain walk that is strongly recommended from Cambridge to Grantchester, and so on. The lack of "levels of abstraction", I think, makes it much easier to do long-term things that I would otherwise put off.

I now get to tick something else off the list - hooray! I hope something comes along soon to replace it.

[1]: https://en.wikipedia.org/wiki/Link_rot "Link rot Wikipedia page"
60
hugo/content/posts/2013-08-04-new-computer-setup.md
Normal file
@@ -0,0 +1,60 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-08-04T00:00:00Z"
aliases:
- /uncategorized/new-computer-setup/
- /new-computer-setup/
title: New computer setup
---

*Editor's note: this is a snapshot of life as of 2013-08-04. My setup has changed substantially since then.*

In case I ever have to get a new computer (or, indeed, in case anyone else is interested), I hereby present the (updating) list of applications and so forth that I would immediately install to get a computer up to usability.

* Browser: [Firefox] with [Ghostery], [HTTPS Everywhere], and [NoScript] (and remember to turn on Do Not Track…)
* Mail client: [Thunderbird] with [Enigmail]
* Messaging client: [Adium] on Mac, and possibly [Pidgin] for others - I've never used a non-Mac chat client. Beware: as of this writing, Pidgin stores passwords in plain text, so don't save passwords in Pidgin.
* Encryption: GPG ([Windows][GPG Windows], [Mac][GPG Mac], [Linux][GPG Linux])
* Text editor: Vim
* Memory training: [Anki]
* Movie viewing: [VLC]
* Screen colour muter: [f.lux]
* Backup software: [CrashPlan] - but I also keep local backups using whatever built-in automated backup utility the OS provides
* FTP client: [FileZilla], or [Cyberduck] on a Mac
* Syncing: [Dropbox] (but I want to get rid of this, because of privacy concerns)
* Computational software: [Mathematica]
* Music: [iTunes] (but I want to switch this for something not-Apple, and it has no Linux version)
* Gaming: [Steam]
* RSS reader: Currently, my RSS feed is presented in-browser, at [NewsBlur].

[Firefox]: https://www.mozilla.org/en-US/firefox/new/
[Thunderbird]: https://www.mozilla.org/en-US/thunderbird/

[Ghostery]: https://www.ghostery.com/
[HTTPS Everywhere]: https://www.eff.org/https-everywhere
[NoScript]: https://addons.mozilla.org/en-US/firefox/addon/noscript/
[Enigmail]: http://www.enigmail.net/home/index.php

[Dropbox]: https://www.dropbox.com/
[Mathematica]: https://www.wolfram.com
[iTunes]: https://www.apple.com/itunes/
[Steam]: https://store.steampowered.com/
[Anki]: http://ankisrs.net/
[NewsBlur]: https://www.newsblur.com
[FileZilla]: https://filezilla-project.org/
[Cyberduck]: http://cyberduck.io/
[CrashPlan]: https://www.crashplan.com/

[f.lux]: http://stereopsis.com/flux/
[VLC]: https://videolan.org/vlc/
[Notepad++]: http://notepad-plus-plus.org/
[Pidgin]: https://www.pidgin.im/
[Adium]: https://adium.im/
[GPG Linux]: https://gnupg.org/
[GPG Mac]: https://gpgtools.org/
[GPG Windows]: http://www.gpg4win.org/
@@ -0,0 +1,25 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-08-04T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-3/
- /stumbled-across-4-august-2013/
title: Stumbled across 4th August 2013
---
* An ad developer has misgivings: <http://seriouspony.com/blog/2013/7/24/your-app-makes-me-fat>
* Hint for dealing with some automated phone helplines - swear at them and they'll put you through to a human: <https://www.vice.com/en/article/z444dx/if-you-swear-at-apple-s-automated-customer-service-they-ll-put-you-through-to-a-human>
* The future is coming: <http://www.extremetech.com/extreme/162678-harvard-creates-brain-to-brain-interface-allows-humans-to-control-other-animals-with-thoughts-alone>
* A large collection of replacements for various PRISM-vulnerable services: <https://prism-break.org/>
* Some people think in a really rather interesting way: <https://www.schneier.com/blog/archives/2005/02/smart_water.html>
* The joys of a memoryless distribution: <http://io9.com/the-quantum-zeno-effect-actually-does-stop-the-world-977909459>
* An impressive photograph: [largest photo cached]
* A fair chunk of the "1910's predicted Year 2000 technologies" has been invented: <http://www.sadanduseless.com/2011/03/world-in-2000/>
* A sweet video about Street View: <http://vimeo.com/32397612>
* How to enable encryption in your emails using [GPG][GPG]: <http://arstechnica.com/security/2013/06/encrypted-e-mail-how-much-annoyance-will-you-tolerate-to-keep-the-nsa-away/>

[GPG]: https://en.wikipedia.org/wiki/GNU_Privacy_Guard
[largest photo cached]: https://web.archive.org/web/20130814173950/http://www.oddly-even.com/2013/07/31/the-largest-photo-ever-taken-of-tokyo-is-zoomable-and-it-is-glorious/
@@ -0,0 +1,24 @@
---
lastmod: "2022-08-21T11:10:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-08-11T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-11th-august-2013/
- /stumbled-across-11th-august-2013/
title: Stumbled across 11th August 2013
---
* A thousand times this (EDIT 2022: the link is dead and I have no idea what I was referring to).
* A possible fix for the "[economic problem][1] of democracy": <https://mason.gmu.edu/~rhanson/futarchy.html>
* A fascinating look at privacy online, how we're not built for privacy, and how tribal cultures attain privacy: <https://aeon.co/essays/facebook-s-privacy-settings-aren-t-the-problem-ours-are/>
* I'm all for healthy competition and so forth, but do we really want such massive phones? <https://arstechnica.com/gadgets/2013/08/the-smallest-new-android-phone-you-can-buy-isnt-small-at-all/>
* This is the kind of thing that I never quite have the courage or the morals to do: <http://web.archive.org/web/20130809202515/http://www.minyanville.com/business-news/editors-pick/articles/A-Russian-Bank-Is-Sued-for/8/7/2013/id/51205>
* This is an excellent summary for why I'm trying to find a good Gmail replacement: <https://ar.al/notes/schnail-mail-free-real-mail-for-life/>
* A guide for dealing with introverts (not that many of my friends need it - perhaps that's why they're my friends): <http://laughingsquid.com/how-to-live-with-introverts/>
* I didn't know this was such a widespread problem: <http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-computers/>
* I agree with this article on the state of maths teaching entirely - I had some excellent teachers, but I could see from the textbooks how it was designed to be taught: <http://mysite.science.uottawa.ca/mnewman/LockhartsLament.pdf>
* How is it that Scandinavia manages to be so nice all the time?! <https://www.bbc.co.uk/news/world-europe-23655675>

[1]: https://en.wikipedia.org/wiki/Criticism_of_democracy#Economic_criticisms
37
hugo/content/posts/2013-08-18-thinking-styles.md
Normal file
@@ -0,0 +1,37 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2013-08-18T00:00:00Z"
math: true
aliases:
- /psychology/thinking-styles/
- /thinking-styles/
title: Thinking styles
---
All the way back in primary school (ages 4 to 11 years old, in case a non-Brit is reading this), we were told repeatedly that "people learn things in different ways". There were two years in primary school when I had a teacher who was very into [Six Thinking Hats](https://en.wikipedia.org/wiki/Six_Thinking_Hats) (leading to the worst outbreak of headlice I've ever encountered) and [mind maps](https://en.wikipedia.org/wiki/Mind_map). I never understood mind maps, and whenever we were told to create a mind map, I'd make mine as linear and boxy as possible, out of simple frustration with the pointless task of making a picture of something that I already had perfectly well-set-out in my mind. I quickly learnt to correlate "making a mind map" with "being slow and inefficient at thinking". (This was back when my memory was still exceptionally good, so I wasn't really learning much at school - having read, and therefore memorised, a good children's encyclopaedia was enough for me - and hence relative to me, pretty much everyone else was slow and inefficient, because I'd already learnt the material.)

It's only now that I've realised that perhaps some people actually do think in a way that makes mind maps helpful. I'm not bad at spatial visualisation (not great, but not totally inept), but I don't think in pictures at all. Apparently, [about 3% of people](https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizing-from-one-example) [sorry, the source for the statistic wasn't given on that page] simply do not have mental images - I don't fall into that 3%, but a close family member tells me ey does - ey cannot make sense of pictures at all without translating them into words. (Possibly a genetic bias? At least another two close family members are very visual indeed.) Ey told me that the world is really not set up for people who can't visualise: whenever you say you don't understand something, the default response is apparently to say exactly the same thing again, but accompanied by a picture - completely useless for a non-visualiser. I've never noticed this before, and a quick memory trawl is inconclusive, but I will certainly keep a look out for it and against it.

A prime example (no pun intended) of an extraneous visual approach to something was the multiplication of two two-digit numbers by using the fact that \\((a+b)(c+d)=ac+bc+ad+bd\\). As an example, I'll take the numbers 35 and 27. The method involved drawing a box of (nominal) side lengths 27x35, and drawing two lines to divide the sides (nominally) into 20+7 and 30+5. Then in each of the four sub-boxes thus created, you had to write the area of that sub-box (that is, calculate \\(30 \times 20\\), \\(30 \times 7\\), \\(5 \times 20\\), \\(5 \times 7\\)) and then add them all up to get the total area. This method seemed like an enormous waste of time and space to me; I had already learnt to multiply arbitrary numbers together through the [Kumon](https://en.wikipedia.org/wiki/Kumon) program by using the standard [long multiplication](https://en.wikipedia.org/wiki/Multiplication_algorithm#Long_multiplication), and to have to learn a method that was about ten times slower and used four times more paper seemed immensely wasteful. I formed the opinion that the reason people were bad at multiplication was that they were being told to use these useless methods that no-one in their right mind could possibly understand. The [Generalising from One Example](http://lesswrong.com/lw/dr/generalizing_from_one_example/) LessWrong post contains an extremely relevant passage:
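
The box method described above can be sketched in a few lines of code. This is my own minimal illustration (not part of the original post), using a hypothetical `grid_multiply` helper that splits each two-digit number into tens and units, exactly as the classroom diagram did:

```python
def grid_multiply(x, y):
    """Multiply two two-digit numbers by the 'box' method:
    split each into tens and units, multiply the four parts, sum them."""
    x_tens, x_units = divmod(x, 10)   # e.g. 27 -> (2, 7)
    y_tens, y_units = divmod(y, 10)   # e.g. 35 -> (3, 5)
    parts = [
        (x_tens * 10) * (y_tens * 10),  # 20 * 30
        (x_tens * 10) * y_units,        # 20 * 5
        x_units * (y_tens * 10),        # 7 * 30
        x_units * y_units,              # 7 * 5
    ]
    return sum(parts)

print(grid_multiply(27, 35))  # 945, the same as 27 * 35 directly
```

Of course, this computes exactly what long multiplication computes; the method differs only in how the four partial products are laid out on paper.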

> I only really discovered this in my last job as a school teacher. There's a lot of data on teaching methods that students enjoy and learn from. I had some of these methods...inflicted...on me during my school days, and I had no intention of abusing my own students in the same way. And when I tried the sorts of really creative stuff I would have loved as a student...it fell completely flat. What ended up working? Something pretty close to the teaching methods I'd hated as a kid. Oh. Well. Now I know why people use them so much. And here I'd gone through life thinking my teachers were just inexplicably bad at what they did, never figuring out that I was just the odd outlier who couldn't be reached by this sort of stuff.

And it's only very recently that it occurred to me that this is quite possibly exactly my experience. The visual techniques simply work for other people.

Another example (again from arithmetic) is the [number line][1] (and the closely related and suggestively named [real line][2]). A large chunk of the first few years at primary school was devoted to learning to count and add (pretty tedious stuff, especially if you already knew how to count and add!). One of the key methods used was the number line - so, for instance, to work out \\(8-3\\), you had to count forward 8 and go back 3. I hated this method - again, it wasted time (why not just go forward 5?) and space (draw out a line? no thanks!). Apparently there was a study done on an untouched-by-society tribe, and it turns out that viewing numbers spatially is not inbuilt in humans. [^study] Maybe I was just unusually unable to learn this view of numbers.

Over the last few years, however, most noticeably as I have come to learn more maths, I have started to rely on pictures considerably more than I used to. I discovered the memory technique of "imagine a picture, the more ridiculous the better" to link two concepts (that's how I'm learning the capitals of the world - Luanda is the capital of Angola, which I remember as a [footballer](https://en.wikipedia.org/wiki/Soccer) scoring a GOAL [Angola] by kicking the ball into a [LOO](https://en.wikipedia.org/wiki/Toilet "Toilet") which is sitting between the goalposts), and I have used it to learn a variety of things. In the topic of [analysis](https://en.wikipedia.org/wiki/Mathematical_analysis), I rely on pictures as a guide to intuition - the statement that "for every \\(\epsilon > 0\\), there is a \\(\delta > 0\\) such that for all \\(y\\) where \\(\vert y-x \vert < \delta\\), we have \\( \vert f(y)-f(x) \vert < \epsilon\\)" is pretty horrific to grok, but a [simple picture](https://en.wikipedia.org/wiki/File:Example_of_continuous_function.png "Continuous function example") is enough for me to understand it. However, I don't use pictures except as intuition-pumps - they're often too unreliable to use as proofs, and I don't (yet) think sufficiently pictorially to do this.
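
For the non-visualisers, the same definition can be illustrated by brute computation rather than by picture. This is my own toy sketch (not from the original post; `delta_works` and its parameters are invented for the example): it samples points within \\(\delta\\) of \\(x\\) and checks whether every sampled value of \\(f\\) stays within \\(\epsilon\\) of \\(f(x)\\):

```python
def delta_works(f, x0, epsilon, delta, samples=1000):
    """Numerically check the continuity condition at x0:
    every sampled y with |y - x0| < delta has |f(y) - f(x0)| < epsilon."""
    for i in range(samples):
        # sample points strictly inside (x0 - delta, x0 + delta)
        y = x0 + delta * (2 * (i + 0.5) / samples - 1)
        if abs(f(y) - f(x0)) >= epsilon:
            return False
    return True

f = lambda x: x * x
# For f(x) = x^2 at x0 = 1 and epsilon = 0.1, delta = 0.04 suffices,
# since |y^2 - 1| = |y - 1| * |y + 1| < 0.04 * 2.04 < 0.1 near y = 1:
print(delta_works(f, 1.0, 0.1, 0.04))  # True
# A delta that is too large fails:
print(delta_works(f, 1.0, 0.1, 0.1))   # False
```

Sampling is no proof, of course - it is only the computational analogue of the intuition-pump picture.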

I also have an unusually wide vocabulary. While I used to find it facile to augment the erudition of my verbiage, I am now afflicted by a slowness of thought that makes it something of an effort. It feels perhaps as if my facility with words has decreased as my mental use of pictures has increased (though this is hard to gauge, because I also know that my mental faculties in general have declined precipitously since perhaps the age of 13). It is possible that I was forced by my brain to become excellent with words, as a substitute for thinking in pictures (since words are the most general means we have of representing thought-symbols, though pictures may be higher-fidelity).

I also wonder whether gesturing during speech has an association with picture-representation. As far as I am aware, I don't gesture very much when speaking; I know that the close-family-member-who-doesn't-think-in-pictures does not gesture at all. However, the greatest gesturer I know certainly thinks in pictures; and [Tom Körner](https://en.wikipedia.org/wiki/Tom_Korner) gestures vigorously while lecturing proofs, while also using pictures very frequently as intuition guides. I don't have very much data on this, though, and I would be interested to add more data points.

Anyway, the message to take away is that people don't all think in the same way - if you find yourself trying to communicate something to someone who seems to be refusing obstinately to understand, do consider the possibility that you're not approaching the explanation in a way that the other person can comprehend.

[^study]: I saw this study over a year ago, and I can't find it now. I thought it was reported on [Ars Technica](https://arstechnica.com), but apparently not.

[1]: https://en.wikipedia.org/wiki/Number_line "Number line Wikipedia page"
[2]: https://en.wikipedia.org/wiki/Real_line "Real line Wikipedia page"
30
hugo/content/posts/2013-08-21-my-experiences-with-flow.md
Normal file
@@ -0,0 +1,30 @@
---
lastmod: "2022-12-31T23:46:44.0000000+00:00"
author: patrick
categories:
- psychology
comments: true
date: "2013-08-21T00:00:00Z"
aliases:
- /psychology/flow/
- /my-experiences-with-flow/
title: My experiences with flow
---
I'm in the middle of reading [Flow](https://en.wikipedia.org/wiki/Flow_(psychology)), by [Mihály Csíkszentmihályi][1], and so far, I love it. It describes the "[flow state](https://en.wikipedia.org/wiki/Flow_%28psychology%29)" of consciousness, that state of "everything is irrelevant except for the task at hand" in which time flies past without your noticing, and you don't notice hunger or thirst or people moving around you. Flow can be induced when performing a difficult task which lies within your abilities, where immediate feedback is provided. I, at least, feel characteristically exhausted after coming out of a long period of flow - but it's a good kind of mental exhaustion, much as the tiredness after a long swim is a good kind of physical exhaustion (in contrast to tiredness-after-a-long-day-of-doing-nothing, which feels sort of lazier and unwholesome). The Wikipedia page is a good enough explanation of flow that I will not describe it further here.

I myself have experienced flow when playing violin in the orchestra, playing the piano solo, playing chess, doing maths, doing sudoku-type puzzles, reading (fiction and non-fiction), doing exams, and programming. These are the examples that come to mind during five minutes' probing of my mind - there may very well be more.

I never experienced flow during a music exam (unfortunately) - this is probably because during an exam, good results are *required*, and this creates a lot of pressure to perform well. As an added source of worry during music exams, you are performing alone, doing something where it is excruciatingly obvious when you get it wrong. If the pressure to do well is too great, it is very hard to achieve flow; Csíkszentmihályi would claim that this is because "the self, in the form of worry and fear, is too strongly asserted when under pressure, and flow requires losing the sense of self". I hold this to be a fringe benefit of the modular system of exams - if you mess up one module, it's not the end of the world.

I no longer experience flow during chess, because I have had several years of hiatus since being fairly good at chess - time enough to forget many of the patterns that I used to be able to see effortlessly - and so the game has become a little too difficult for me to attain flow. However, now that I am playing casually again, I hope that it might become easier to achieve flow during chess.

Maths is the main source of flow in my life at the moment. I measure how long I have spent doing a particular problem by how many sheets of paper I have used, and I always end up having used far more sheets than I feel there could possibly have been time for. I also notice flow-tiredness pretty often after doing a long bout of maths, which is a fair indication that I have been in flow. It is from maths that I have come to think that I can kick flow up a notch if required, because I can sit a maths exam, completing (to a better standard than normal) questions that feel harder than those I do for practice, and come out of the exam absolutely exhausted - with flow-tiredness times two. However, the experience feels much the same as "normal flow" at the time - that is, it doesn't really feel like anything, and the problem I'm solving is all there is. It's quite hard to perform accurate introspection on a state of mind in which introspection is entirely halted! I have to rely on evidence from outside the experience: the fact that I am much more tired and have completed more than I would have done in "normal flow".

I would be interested to know if anyone else has what they perceive to be two levels of flow. As further evidence in favour of "multiple levels of flow", it is apparently possible for groups to achieve flow together, working on the same task - I struggle to come up with an activity in which the delay of talking is not a huge barrier to flow. (I suppose that if a group can get to the stage where its members are passing around information before it is requested, by anticipating the questions, then it could be possible.) It seems much more likely that a "lesser level" of flow could be achieved in a group, during which interruptions are not so disruptive; I cannot imagine my maths-flow or anything like it surviving with people's talking around me and my having to reply (with anything more sophisticated than "mmm").

I go into flow with maths exams ([A-level][2], [BMO][3], [Tripos][4], and the corresponding mock exams), but also in certain other subjects. When writing an essay for my A-level Latin exam, I went into flow. Interestingly, I never went into flow when writing practice essays for Latin - it took the structure and importance of the actual exam to get me into flow. I'm looking for ways to get more flow; I remember that I used to go into flow all the time when I was younger, through doing [sudoku][5] and related puzzles, so this is perhaps an avenue of exploration.

I like the contrast that Csíkszentmihályi gives between "pleasure" and "enjoyment". "Pleasure" is your standard run-of-the-mill emotion, something that hasn't got a strong thought component involved. It is (of course) not a bad thing, but it is somehow less wholesome than "enjoyment", which is the pleasure obtained from investing a large amount of mental energy in something. Watching television is usually pleasurable (assuming it isn't boring, anyway), while reading a really good book is enjoyable - possibly because of the work involved in constructing a mental model, which elevates the act of reading from "pleasure" to "enjoyment". Thinking on the distinction, I realised that in fact most of the things I do as a distraction are not enjoyable, and many are not even pleasurable - forthwith, I am deleting almost all the iOS games I have on my iPod, because their only purpose (from my perspective) is to be used as a time-sink, rather than to provide pleasure. I might take up crosswords again - while not necessarily an easy flow-inducing task (because the crossword is split up into many small but very difficult tasks, rather than small and reasonable ones), it is at least enjoyable.

The book Flow describes many different activities that can induce flow in a skilled practitioner - rock climbing, ice skating, even assembly-line work - so it is not a phenomenon that is confined to mental pursuits. (The skew of this blog post is just because I have never achieved proficiency in any physical activity.) I would be interested to know what activities induce flow in my readers.

[1]: https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi "Mihaly Csikszentmihalyi"
[2]: https://en.wikipedia.org/wiki/A-level "A-levels Wikipedia page"
[3]: https://en.wikipedia.org/wiki/British_Mathematical_Olympiad "BMO Wikipedia page"
[4]: https://en.wikipedia.org/wiki/Mathematical_Tripos "Mathematical Tripos Wikipedia page"
[5]: https://en.wikipedia.org/wiki/Sudoku "Sudoku Wikipedia page"
43
hugo/content/posts/2013-08-22-how-to-punt-in-cambridge.md
Normal file
@@ -0,0 +1,43 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-08-22T00:00:00Z"
aliases:
- /uncategorized/how-to-punt-in-cambridge/
- /how-to-punt-in-cambridge/
title: How to punt in Cambridge
---
[When in Cambridge][1]…

The river is always full of beginners and professional puntists. The beginners veer all over the place, getting very wet, while the professionals zip between them, somehow managing to avoid collision by the width of an otter's hair. The worst attempt by a beginner I've ever seen at punting was an attempt to use the pole rather like an oar, without ever touching the bottom of the river with it. This patent perplexity pertaining to the point of the punt provoked a pertinent post.

Step 1: hire a punt. They are usually hired hourly, with a three-minute grace at returning-time. [Scudamore's][2] is the usual punt-hire company for the public; they have punts stationed on the "town" side of Magdalene College (under the bridge there), and another set stationed on the river past Queens' College (or approaching from Pembroke College, cross the road into Mill Lane and go down to the river; you're pretty much there). If you're a student, they charge £16.50/hr, which is absurd. (I have not yet tried any of the following student alternatives.) Nominally, Trinity College does punts at [£4/hr][3] for Trinity members and £12/hr for other students. Magdalene does punts at what may still be £5/hr for Magdalene students, but I believe this is only available during the Easter term (that is, the summer term). Clare College has punts which may or may not be free to Clare students. St John's College has punts for John's students at [£4/hr][5] (also, bizarrely, discount rates for the British Antarctic Survey). Other options may exist, but I don't know about them.

Step 2: ensure that the punt hire people have given you a pole for punting with, and you might want a paddle too, just in case. I have myself accidentally left the pole anchored in the river-bed, and narrowly avoided being left clinging onto it for dear life (à la [Three Men in a Boat][6]) - in this situation, a paddle is invaluable.

Step 3: note your starting time, for payment purposes, and calculate the time you need to be back by. You should probably remember to turn back after you have got a little more than halfway to this end time (as it's always faster once you've got into the swing of punting).

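Step 3's rule of thumb can be turned into a quick calculation. A minimal sketch, assuming you turn around at a little over half the hire (the 0.55 factor is my own guess, not something the punt-hire companies publish):

```python
from datetime import datetime, timedelta

def turn_back_time(start, hours_hired, outward_fraction=0.55):
    """When to turn the punt around: a little more than halfway
    through the hire, since the return leg tends to be faster once
    you've got into the swing of punting.  0.55 is an assumption."""
    return start + timedelta(hours=hours_hired * outward_fraction)

# Hired for two hours at 2pm: turn back just after 3pm.
start = datetime(2013, 8, 22, 14, 0)
print(turn_back_time(start, 2))  # 2013-08-22 15:06:00
```

Adjust the fraction downwards if your crew is especially prone to Step 9.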
Step 4: the designated first puntist takes the pole and puts it into the water (remembering to keep hold of one end). The water end often has a hook or some other distinguishing feature on it. Then ey walks with the pole to the end of the punt, which will be flat and elevated. This end is the back of the punt; passengers sit in front of you.

Step 5: passengers embark, keeping their hands inside the punt (not gripping the edge, as this can lead to horrific injury).

Step 6: you will be released, or you will have to arrange your own release, by dint of untying whatever ropes attach you to the shore. The puntist immerses the pole until its bottom hits the floor, and then uses this as a lever to push the Earth away from the punt. In this manner, you can rotate the Earth beneath you until you are in the middle of the river. Do try to avoid doing this while people are on top of the Eiffel tower, or they might fall off due to the sudden movement of the Earth. Also avoid hitting other punts.

Step 7: Now that you are thoroughly in reverse, you will need to apply some sort of forward propulsion to avoid ploughing into the other bank of the river and thereby annoying the John's porters and clogging up the river. Do not do this by putting the pole directly behind the punt, as this could cause you to be levered straight off when the punt continues to go backwards. Instead, put the pole behind and to one side of the back of the punt, and push away from you, so that the back of the punt swings away from the pole (and hence the front of the punt swings towards the pole). It will take a lot of pressure to alter course, but once you start changing direction, it is oddly difficult to stop - so do this step as gradually as seems prudent.

Step 8: You should now be parallel with the river, or at least pointing in some direction approximating "water". To go forwards, put the pole directly behind the punt and push against the river bed with it. Steering is accomplished by dint of waving the pole behind you (still trailing in the water, but no longer touching the bottom) - a long stick of wood/metal suffices as a pretty good rudder. I find it helpful to use my own body as a fulcrum. You will find that it is possible to make very large course adjustments very quickly if you apply enough force to the pole, but again remember that changes of direction are hard to start and easy to keep going, so be gentle. Whenever you need more speed, repeat the "push against river bed" manoeuvre. If you're going quickly, you need to do this quickly, as otherwise the punt will be racing away from the point at which you put the pole in. If you're under a bridge, remember that you won't have enough height available to do this.

For some reason, courtesy dictates that you keep to the right-hand side of the river in the direction you are going. Why it's not the left is beyond me.

Step 9: If you find that the pole is not coming out of the river bed, apply a twisting motion while pulling quite hard. It should come suddenly free (and may overbalance you, so keep your weight low); if it does not, you should let go so that you are not left clinging onto it while the punt drifts away from you. Judicious use of the paddle will get you back to retrieve it.

And that's it. You are now a master puntist. Go forth and wreak havoc upon the river.

[1]: https://en.wiktionary.org/wiki/when_in_Rome,_do_as_the_Romans_do "When in Rome…"
[2]: http://www.scudamores.com/ "Scudamore"
[3]: http://www.trin.cam.ac.uk/index.php?pageid=664 "Trinity punting scheme"
[5]: http://www.joh.cam.ac.uk/punt-society "St John's punts"
[6]: /reading-list "Things Everyone Should Read"
@@ -0,0 +1,24 @@
---
lastmod: "2022-08-21T11:30:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-08-24T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-4/
- /stumbled-across-24-august-2013/
title: Stumbled across 24th August 2013
---
* The much-vaunted Hyperloop looks really cool, if it could ever be built: <https://arstechnica.com/business/2013/08/hyperloop-a-theoretical-760-mph-transit-system-made-of-sun-air-and-magnets/>
* But it may be a bit too half-baked: <https://pedestrianobservations.com/2013/08/13/loopy-ideas-are-fine-if-youre-an-entrepreneur/>
* I love a good visualisation: <http://nickolaylamm.com/art-for-clients/what-if-you-could-see-wifi/>
* I laughed pretty much constantly through this piece of bureaucracy-hacking: <http://www.slate.com/articles/life/culturebox/2013/08/the_kindly_brontosaurus_the_amazing_prehistoric_posture_that_will_get_you.html>
* This is a problem with the Internet of Things as well as with mind-computer interfaces: <http://www.extremetech.com/extreme/134682-hackers-backdoor-the-human-brain-successfully-extract-sensitive-data>
* Wow - it's possible to represent words as vectors so that *vector('Paris') - vector('France') + vector('Italy')* results in a vector that is very close to *vector('Rome')*: <https://code.google.com/p/word2vec/>
* Let there be food: <http://web.archive.org/web/20150219023941/http://thisiswhyyourefat.kinja.com/cadbury-creme-eggs-benedict-230182670>
* One of the manifold reasons why the USA's [TSA][1] should be scrapped: <https://varnull.adityamukerjee.net/2013/08/22/dont-fly-during-ramadan>
* An excellent witty dialogue between some experts in their respective fields: <http://mathwithbaddrawings.com/2013/08/21/five-math-experts-split-the-check/>
* How to disagree correctly: <http://www.paulgraham.com/disagree.html>

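The word-vector arithmetic in the word2vec item can be illustrated with a toy example. These hand-made 3-d vectors are invented purely for illustration (real word2vec embeddings are learned from text and have hundreds of dimensions):

```python
import math

# Invented 3-d "word vectors" - an assumption for illustration only.
vecs = {
    "Paris":  [1.0, 0.9, 0.1],
    "France": [0.0, 1.0, 0.0],
    "Italy":  [0.0, 0.0, 1.0],
    "Rome":   [1.0, 0.0, 1.1],
    "Berlin": [-1.0, 0.2, 0.3],
}

def cosine(a, b):
    # cosine similarity: dot product over the product of the norms
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# vector('Paris') - vector('France') + vector('Italy')
query = [p - f + i for p, f, i in
         zip(vecs["Paris"], vecs["France"], vecs["Italy"])]

# the nearest remaining word (by cosine similarity) should be Rome
best = max((w for w in vecs if w not in ("Paris", "France", "Italy")),
           key=lambda w: cosine(query, vecs[w]))
print(best)  # Rome
```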
[1]: https://en.wikipedia.org/wiki/Transportation_Security_Administration "TSA Wikipedia page"
43
hugo/content/posts/2013-08-26-topology-made-simple.md
Normal file
@@ -0,0 +1,43 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2013-08-26T00:00:00Z"
math: true
aliases:
- /wordpress/archives/364/index.html
- /mathematical_summary/topology-made-simple/
- /topology-made-simple/
title: Topology made simple
---
I've been learning some basic [topology][1] over the last couple of months, and it strikes me that there are some *very* confusing names for things. Here I present an approach that hopefully avoids confusing terminology.

We define a **topology** \\(\tau\\) on a set \\(X\\) to be a collection of sets such that: for every pair of sets \\(x,y \in \tau\\), we have that \\(x \cap y \in \tau\\); \\(\phi\\) the empty set and \\(X\\) are both in \\(\tau\\); for every \\(x \in \tau\\) we have that \\(x \subset X\\); and that \\(\displaystyle \cup_{\alpha} x_{\alpha}\\) is in \\(\tau\\) if all the \\(x_{\alpha}\\) are in \\(\tau\\). (That is: \\(\tau\\) contains the empty set and the entire set; sets in \\(\tau\\) are subsets of \\(X\\); not-necessarily-countable unions of sets in \\(\tau\\) are in \\(\tau\\); and finite intersections of sets in \\(\tau\\) are in \\(\tau\\).) We then say that \\((X, \tau)\\) is a **topological space**.

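For a finite set, the axioms above can be checked mechanically. A minimal sketch in Python (my own illustration; note that when \\(\tau\\) is finite, closure under arbitrary unions and finite intersections reduces to closure under the pairwise operations):

```python
from itertools import combinations

def is_topology(X, tau):
    """Check the axioms: the empty set and X are in tau, every member
    of tau is a subset of X, and tau is closed under unions and
    intersections (pairwise closure suffices for a finite tau)."""
    X = frozenset(X)
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or X not in tau:
        return False
    if any(not s <= X for s in tau):
        return False
    return all(a | b in tau and a & b in tau
               for a, b in combinations(tau, 2))

# A chain of nested sets is a topology...
print(is_topology({1, 2, 3}, [set(), {1}, {1, 2}, {1, 2, 3}]))  # True
# ...but this isn't: {1} ∪ {2} = {1, 2} is missing.
print(is_topology({1, 2, 3}, [set(), {1}, {2}, {1, 2, 3}]))     # False
```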
If a set \\(x\\) is in \\(\tau\\), then we say that \\(x\\) is **fibble**. On the other hand, if \\(x^{\mathsf{c}}\\) (the complement of \\(x\\)) is in \\(\tau\\), then we say that \\(x\\) is **gobble**.

We define a **metric space** \\((X,d)\\) to be a set \\(X\\) together with a "distance" function \\(d: X \times X \to \mathbb{R}\\) such that: \\(d(x,y)=0\\) iff \\(x=y\\); \\(d(x,y)=d(y,x)\\); and \\(d(x,y)+d(y,z) \geq d(x,z)\\). (That is, "the distance between two points is 0 iff they're the same point; the distance is the same in both directions; and if we take a detour then the distance is greater".)

We then define a **fiball** \\(B(x,\delta )\\) to be "the set of all \\(y \in X\\) within \\(\delta\\) of \\(x\\)" - that is, \\(\{ y \in X: d(x,y)<\delta \}\\).

It turns out that we can create (or **induce**) a topology out of a metric space, by considering the fiballs. Let \\(x \in \tau\\) iff \\(x\\) is a union (not necessarily countable) of fiballs in the metric space. We can see that this is a topology, because unions of (things which are unions of fiballs) are unions of fiballs; the empty set is the union of no fiballs; the entire set \\(X\\) is the union of all possible fiballs; and it can be checked that intersections behave as required (although that takes a tiny bit of work).

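As a concrete check (my own, not from the post): under the usual metric \\(d(x,y) = \vert x-y \vert\\) on the reals, the fiball \\(B(0,1)\\) picks out exactly the open interval \\((-1,1)\\), with both endpoints excluded:

```python
def d(x, y):
    # the usual distance on the reals
    return abs(x - y)

def in_fiball(y, x, delta):
    # membership in B(x, delta) = {y : d(x, y) < delta}
    return d(x, y) < delta

# B(0, 1) is the open interval (-1, 1): the strict inequality
# excludes the endpoints -1 and 1.
samples = [-1.5, -1.0, -0.5, 0.0, 0.999, 1.0]
print([y for y in samples if in_fiball(y, 0, 1)])  # [-0.5, 0.0, 0.999]
```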
Now we see why fiballs are called "fiballs" - because in the induced topology, fiballs are fibble.

We can define a **gobball** in the same way, by making the strict inequality weak in the definition of the fiball (that is, \\(\leq\\) instead of \\(<\\)). And it can be verified that gobballs are gobble.

We can keep going with these definitions - a **continuous function** between two topological spaces \\(f: (X, \tau) \to (Y, \sigma)\\) is defined to be one such that if \\(y \subset Y\\) is fibble in \\(Y\\), then \\(f^{-1}(y)\\) is fibble in \\(X\\), and so forth.

Eventually we come to the reason that I've used the words "fibble" and "gobble". Consider the metric \\(d: \mathbb{R} \times \mathbb{R} \to \mathbb{R}\\) given by \\(d(x,y) = \vert x-y \vert\\). It can easily be checked that \\((\mathbb{R},d)\\) is a metric space, and so it induces a topology on \\(\mathbb{R}\\). What is the fiball \\(B(x,\delta)\\)? It is precisely the set of points which are within \\(\delta\\) of \\(x\\) - that is, the open interval \\((x-\delta, x+\delta)\\). So we know that open intervals are fibble. Note also that \\((1,2) \cup (3,4)\\) is fibble, but is not an open interval. All well and good.

But now consider a different topology on \\(\mathbb{R}\\). Let \\(x\\) be fibble if it is a union of half-open intervals \\([a,b)\\). It can be checked that this is a topology. Now the set \\([1,2) \cup [3,4)\\) is fibble, and note that it is not an open interval. We can see that \\((1,2)\\) is still fibble (it's the union of the fibble sets \\([x, 2)\\) for \\(1<x<1.1\\), for example).

And consider a third, final topology on \\(\mathbb{R}\\). Let \\(x\\) be fibble iff \\(x\\) is \\(\mathbb{R}\\) or the empty set. We can easily see that this is a topology. Now no open interval is fibble.

The problem is that in standard notation, fibble sets are referred to as **open**. It's all fine when you have that open intervals are open in the usual topology, but we can construct a topology in which there is an open set which is not an open interval, and we can construct a topology where no open intervals are open. What madness is this? Why not have a different word, because the meaning is different?!

When I am Master of the Universe, I will reform topology so that it makes sense.

[1]: https://en.wikipedia.org/wiki/Topology "Topology Wikipedia page"
@@ -0,0 +1,171 @@
---
lastmod: "2022-01-01T22:20:19.0000000+00:00"
author: patrick
categories:
- creative
- mathematical_summary
comments: true
date: "2013-08-31T00:00:00Z"
math: true
aliases:
- /wordpress/archives/379/index.html
- /creative/mathematical_summary/slightly-silly-sylow-pseudo-sonnets/index.html
- /slightly-silly-sylow-pseudo-sonnets/index.html
title: Slightly silly Sylow pseudo-sonnets
---
This is a collection of poems which together prove the [Sylow theorems][1].

# Notes on pronunciation

* Pronounce \\( \vert P \vert \\) as "mod P", \\(a/b\\) or \\(\dfrac{a}{b}\\) as "a on b", and \\(=\\) as "equals".
* \\(a^b\\) for positive integer \\(b\\) is pronounced "a to the b".
* \\(g^{-1}\\) is pronounced "gee inverse".
* "Sylow" is pronounced "see-lov", for the purposes of these poems.
* \\(p\\) and \\(P\\) and \\(n_p\\) are different entities, so they're allowed to rhyme.

# [Monorhymic][4] Motivation [^notsonnet]

Suppose we have a finite group called \\(G\\).
This group has size \\(m\\) times a power of \\(p\\).
We choose \\(m\\) to have coprimality:
the power of \\(p\\)'s the biggest we can see.
Then One: a subgroup of that size do we
assert exists. And Two: such subgroups be
all conjugate. And \\(m\\)'s nought mod \\(n_p\\),
while \\(n_p = 1 \pmod{p}\\); that's Three.

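Before the proofs, the three statements can be sanity-checked by brute force on a small group. A sketch for \\(G = S_3\\) and \\(p = 2\\), so \\(\vert G \vert = 6 = 2^1 \cdot 3\\) and \\(m = 3\\) (my own illustration, not part of the poems):

```python
from itertools import combinations, permutations

def compose(f, g):
    # (f ∘ g)(i) = f(g(i)), with permutations stored as tuples
    return tuple(f[g[i]] for i in range(len(g)))

G = list(permutations(range(3)))    # the group S_3, of order 6
identity = tuple(range(3))

def is_subgroup(H):
    # a finite nonempty subset of a group is a subgroup iff it is
    # closed under the group operation
    return all(compose(a, b) in H for a in H for b in H)

# Theorem One: subgroups of order 2^1 = 2 exist.
sylow2 = [set(H) for H in combinations(G, 2)
          if identity in H and is_subgroup(set(H))]
n_2 = len(sylow2)
print(n_2)       # there are three Sylow 2-subgroups
print(n_2 % 2)   # Theorem Three: n_p ≡ 1 (mod p), so this is 1
print(3 % n_2)   # ... and m ≡ 0 (mod n_p), so this is 0
```

(Theorem Two, that the three subgroups are all conjugate, can be checked the same way by conjugating any one of them by every element of \\(G\\).)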
# Theorem One

## Little [Lemmarick][5]

*Subtitle: "The size of the normaliser \\(N\\) of a maximal \\(p\\)-subgroup \\(P\\) has \\(N/P\\) coprime to \\(p\\)"*

There was a \\(p\\)-subgroup of \\(G\\)
(by Cauchy). The largest was \\(P\\).
Let \\(N\\) normalise,
Take \\(\dfrac{N}{P}\\)'s size,
Suppose that it's zero mod \\(p\\).

---

Now \\(\dfrac{N}{P}\\) also has some
p-subgroup (by Cauchy); take one.
Take it un-projected,
\\(P\\)'s most big? Corrected!
We've found one sized \\(p \vert P \vert \\): done.

## Introductory Interlude (to the tune of "[Jerusalem](https://en.wikipedia.org/wiki/Jerusalem_%28hymn%29)")

*Subtitle: "\\(\{P\}\\) is an orbit of size \\(1\\) under the conjugation action of \\(P\\) on the set of \\(G\\)-conjugates of \\(P\\)"*

Let \\(X\\) be \\(P\\)'s orbit under \\(G\\)
Acting by conjuga-ti-on.
Mod \\(G\\) o'er \\(N\\)'s the size of \\(X\\)
The Orbit/Stabiliser's done.
And in its turn, \\(P\\) acts on \\(X\\)
By conjugating, as before,
Then \\(P\\) is certainly all alone:
Its orbit is itself, no more.

---

Let \\(gPg^{-1}\\) be alone,
\\(P\\) stabilises it, and hence
\\(pgPg^{-1}p^{-1}\\)
Is \\(gPg^{-1}\\) - from whence
We conjugate by \\(g^{-1}\\):
\\(g^{-1}Pg\\) fixes \\(P\\).
\\(g^{-1}Pg\\) is in \\(N\\),
so \\(\pi\\) applies. From this, we'll see:

## [Cinquain][6] Claim [^cinquain]

*Subtitle: "\\(\{P\}\\) is the only orbit of size \\(1\\)"*

A claim:
\\(\pi(g^{-1}Pg)\\) is \\({1}\\).
Call it \\(K\\). If false, \\(p\\)
divides \\( \vert K \vert \\),
as \\(\pi\\)
a hom [^hom].
Also, \\( \vert K \vert \\)
divides \\( \vert N/P \vert \\)
(Lagrange). Then Lemmarick proves: \\(K\\)
Is \\({1}\\).

## [Trochaic Tetrameter][7] Tying Together [^rhyme]

*Subtitle: "\\(\{P\}\\) is Sylow, since \\(G/N\\) has size coprime to \\(p\\)"*

\\(\pi\\) has kernel \\(P\\) - but also
\\(K\\) is \\({1}\\), so lies inside it.
\\(P\\) contains \\(g^{-1}Pg\\);
Both have size \\(p^a\\). So
since they're finite, they're the same set.
Any set alone in orbit
must be \\(P\\). The class equation
Tells us \\( \vert G \vert / \vert N \vert \\) is
Just precisely \\(1 \pmod{p}\\). Then
\\( \vert G \vert / \vert P \vert \\) is not a
multiple of \\(p\\) because it's
\\( \vert \dfrac{N}{P} \vert \\) multiplied by
\\(\dfrac{ \vert G \vert }{ \vert N \vert }\\) and \\(p\\) can't
possibly divide those two. So
Maximal the power of \\(p\\) is:
\\(P\\)'s a Sylow \\(p\\)-subgroup.

# Theorem Two - Quad-[quatrain][8] [^quatrain]

A Sylow \\(p\\)-subgroup let \\(Q\\) be:
a subgroup, size \\(p^a\\).
Because it's the same size as was \\(P\\),
it acts on \\(X\\) in the same way.

---

Mod \\(p\\), we have \\( \vert X \vert \\) is \\(1\\) -
the orbits of \\(Q\\) will divide it;
Now invoke the class equation:
an orbit, size \\(1\\), lies inside it.

---

We dub this one \\(gPg^{-1}\\),
then \\(g^{-1}Qg\\)'s in \\(N\\).
Projection works just as well in verse:
\\(\pi(g^{-1}Qg)\\) is \\({1}\\).

---

The previous poem's our saviour:
\\(g^{-1}Qg\\) is in \\(P\\).
The Pigeonhole tells its behaviour:
that \\(P\\) is \\(g^{-1}Qg\\).

# Theorem Three - Hindmost [Haiku][9] [^haiku]

\\( \vert X \vert \\): \\(1 \pmod{p}\\)
Orbit \\(X\\) divides \\(G\\)'s size:
We have proved the Third.

[^notsonnet]: This is not a sonnet - it is six lines too short, and is monorhymic rather than following a more varied rhyme scheme. I started out intending it to be a sonnet, but all the rhymes for "p", "G" and so forth were irresistible. "Power" is a monosyllable.

[^cinquain]: I use a form of reverse cinquain, with syllable count 2,8,6,4,2,2,4,6,8,2.

[^hom]: "Hom", of course, is short for "homomorphism". Imre Leader used it all the time, so I took it to be legitimate.

[^rhyme]: This section is unrhymed; although Shakespeare rhymes his tetrameter, Longfellow doesn't. The strong iambic nature of English makes enjambement very natural to write when you're constrained to trochees, so I have just gone with the flow.

[^quatrain]: Quatrains have a variety of allowable rhyme schemes, but I plumped for ABAB for the sake of variety. Yes, "N" rhymes with "one". For the purposes of scansion, pronounce each line as the first line of a limerick, with an optional weak syllable at the end if necessary.

[^haiku]: I know that a haiku should mention a season, etc - but that is a constraint I am willing to relax. Gareth pointed out that if "sum" and "size" were synonymous, then " \|X\| : 1 (mod p)/Orbit X divides G's sum/A proof of the Third" would mention the season "sum-A".

[1]: {{< ref "2013-06-26-sylow-theorems" >}}
[2]: http://tartarus.org/gareth/
[3]: http://mmeblair.tumblr.com/post/61532912275/carnival-of-mathematics-102-my-summation-of-other
[4]: https://en.wikipedia.org/wiki/Monorhyme
[5]: https://en.wikipedia.org/wiki/Limerick_%28poetry%29
[6]: https://en.wikipedia.org/wiki/Cinquain
[7]: https://en.wikipedia.org/wiki/Trochaic_tetrameter
[8]: https://en.wikipedia.org/wiki/Quatrain
[9]: https://en.wikipedia.org/wiki/Haiku
@@ -0,0 +1,29 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stumbled_across
comments: true
date: "2013-09-13T00:00:00Z"
aliases:
- /stumbled_across/stumbled-across-14th-september-2013/
- /stumbled-across-14th-september-2013/
title: Stumbled across 14th September 2013
---
* On the merits of silence (I wholeheartedly agree): <http://www.nytimes.com/2013/08/25/opinion/sunday/im-thinking-please-be-quiet.html>

* Given the previous results on humans' sense of physical location, I'm not particularly surprised that you can make yourself identify your body as being somewhere other than where it really is: <http://www.psychologicalscience.org/index.php/news/releases/visualized-heartbeat-can-trigger-out-of-body-experience.html>

* Aaand the future arrives: <http://www.washington.edu/news/2013/08/27/researcher-controls-colleagues-motions-in-1st-human-brain-to-brain-interface>

* Another reason why Finland is amazing: <http://web.archive.org/web/20150302131427/http://neomam.com/blog/there-is-no-homework-in-finland>

* A thought-provoking story: [WebCite version](http://web.archive.org/web/20010802144026/http://www.tor.com/72ltrs.html)

* On the "mundane magics" kind of lines: <http://i.imgur.com/hINj1xf.png>

* Not sure what to make of this - I actually can't remember who narrated Paddington in the audio-books of my youth: <http://www.bbc.co.uk/news/entertainment-arts-24077834>

* This links in heavily with the thesis of the book [Flow](https://en.wikipedia.org/wiki/Mihaly_Csikszentmihalyi#Flow), which I'm reading at the moment: <https://www.washingtonpost.com/news/wonk/wp/2013/09/13/being-poor-changes-your-thinking-about-everything/>

This is the first post that I'm syndicating to social media. I hope it works. (If anyone has any ideas about what to syndicate - for instance, that Stumbled Across posts should not, or especially should, or things like that - then do let me know.)
@@ -0,0 +1,38 @@
|
||||
---
|
||||
lastmod: "2021-09-12T22:47:44.0000000+01:00"
|
||||
author: patrick
|
||||
categories:
|
||||
- uncategorized
|
||||
comments: true
|
||||
date: "2013-09-21T00:00:00Z"
|
||||
aliases:
|
||||
- /uncategorized/how-to-prove-that-you-are-a-god/
|
||||
- /how-to-prove-that-you-are-a-god/
|
||||
title: How to prove that you are a god
|
||||
---
|
||||
I came across an interesting question while reading the blog of [Scott Aaronson][1] today. The question was as follows:
|
||||
|
||||
> In the world of the colour-blind, how could I prove that I could see colour?
|
||||
|
||||
I'm presuming, to make the discussion more life-like and less cheaty, that this civilisation hasn't discovered that light comes in wavelengths, or that it has but it can't distinguish very well between wavelengths (so that all coloured light falls into the same bucket of 100nm to 1000nm, for instance). The challenge is to design an experimental protocol to confirm or deny that I have access to information that the colour-blind do not. This question is much harder than the corresponding question in the world of the blind, because having vision tells you so much more than having colour vision (simply set up a flag two miles away, have someone raise it at a random time, note down the time you saw it raised, and compare notes).
|
||||
|
||||
Oh. That's unfortunate. This protocol works perfectly well to determine colour, too - I just need to provide two flags of different colour, present them for inspection so that the experimenters verify that they look identical to them, mark the base of the flagpoles A and B in some way that the colour-blind can detect (etching?), note down which colour corresponds to which flag, walk a hundred metres, have the experimenter wave one of the flags at random, write down which flag was waved, repeat to taste.

How about a proof that I could hear when no-one else knew what hearing was? I would need to find something that I could hear that no-one else could detect - perhaps the dropping of a vase fifty metres away - and, while blindfolded, raise my hand when I heard the vase drop. I would, of course, have to remember to explain that there could be a time delay over long distances.

Stereo sound (the ability to detect where something is by the sound it makes)? I shut my eyes, someone walks around me and claps once; I point to that person.

Smell? Easy - simply uncork a test tube of water or hydrogen sulphide. I can identify which was used.

Taste? Again, we could dissolve sugar and salt separately into water.

[Proprioception][2]? It seems odd to me that any physical being could have managed to evolve language and not proprioception, but I could at least demonstrate the ability to exercise fine control over my body by pulling the spring of a [Newton meter][3] with a toe, finger, mouth, etc. This should be good evidence that I know how much strength I am exerting. I could also do this blind (although my results would be more rough, because I'm not a good proprioceptor), and I could also not see the scale of the Newton meter. To test awareness of where my body parts are, I could place both hands behind my back and have someone move one of them (with me blindfolded); I would then touch the moved hand with my other hand.

Language (the fact that "the sounds I am making convey information")? This would require two people who spoke the same language, of course. We could be placed in separate booths, with a row of pictures in front of us. Someone would point to a picture in one booth, the person in the relevant booth would describe it, the other person would point to the corresponding picture.

This discussion turned out to be less interesting than I would have liked. Anyway, it would appear to imply that if someone did indeed have extra senses, that person would easily be able to convince me of this fact. For instance, in the world of the colour-blind, I would present two items, saying "These items differ in a property which I can sense and which you cannot; show me one and I will tell you which it was". If the experiment were repeated and I were consistently able to say which item was shown, then I think this should count as proof that I can see colour. Of course, any limitations on my power ("I can't necessarily distinguish between any two items, but I can distinguish between these two", or "I can only distinguish between two items when it's a full moon in three days and when I've received a blood sacrifice and when the experimenter has sufficient faith in my abilities") should be declared up front, so that they can't be used to explain away failure (so, for instance, we could in advance find someone very credulous). Parallels to the [Randi prize][4] fully intended.
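
The "repeat to taste" step is what carries the evidential weight. As a rough sketch (the function name and trial counts are mine, not part of the protocol), the chance of passing many two-choice trials by pure guessing collapses very fast:

```python
from math import comb

def p_by_chance(n, k):
    """Probability of getting at least k of n two-choice trials
    right by guessing alone (fair-coin binomial tail)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# One correct answer is uninformative; twenty in a row is not.
print(p_by_chance(1, 1))    # 0.5
print(p_by_chance(20, 20))  # roughly 9.5e-07
```

So a modest run of trials already distinguishes "can genuinely sense the property" from luck, which is why declaring the limitations up front matters: they must be fixed before the tally starts.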

[1]: http://www.scottaaronson.com
[2]: https://en.wikipedia.org/wiki/Proprioception
[3]: https://en.wikipedia.org/wiki/Spring_scale
[4]: http://www.skepdic.com/randi.html

51
hugo/content/posts/2013-10-10-plot-armour.md
Normal file
@@ -0,0 +1,51 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
comments: true
date: "2013-10-10T00:00:00Z"
aliases:
- /creative/plot-armour/
- /plot-armour/
title: Plot Armour
---

*Wherein I dabble in parodic fiction. The title refers to the TV Tropes page on [Plot Armour][1], but don't follow that link unless you first resolve not to click on any links on that page. TV Tropes is the hardest extant website from which to escape.*

Jim, third-in-command of the Watchers, ducked behind the Warlord's force-field, desperately trying to catch his breath in the face of an inexorable onslaught. His attackers, the hundred-strong members of the Hourglass Collective, had never been defeated in pitched battle. As testament to their ability, two thousand of the finest troops the Watchers had to offer stood motionless around him, suspended in time; even now, even with five of the most experienced Watchers still fighting, the Hourglass forces were calmly and efficiently slitting the throats of the frozen soldiers. Skilled in cultivating terror, they were working in from afar, and it looked to Jim as though he would have to endure another half-hour of helplessness before they got to him at last. Jim and the Warlord had only survived this far by virtue of an accidental and uncontrollable burst of power from the Founder of the Watchers, released at a fortuitous moment to counter the time-suspension channelled by the Hourglass. That had given the Warlord time to protect five people, before the Founder had collapsed.

Sophia, the Second Vigilant, most powerful of the Watchers, the Founder's first recruit, was still fighting. She had been the recipient of the Warlord's first force-field, naturally, and she was using her borrowed time well. Jim was recovering, moving nearer to her, and her power waxed correspondingly: he was exerting his power to heal her and to fuel her efforts. She began to glow, first dimly but soon as bright as the moon and then as the sun on a cloudy day, and as her light fell on the ranks of the Hourglass, all movement across the battlefield stopped. Sophia gently closed her eyes; in response, threads of light began to take shape around the Hourglass, weaving a net to contain the enemy. Too slowly: a pulse of power blasted forth from the Collective, tearing through the weave and ripping away the Warlord's force-fields. Sophia teetered on her feet, her power spent, but Jim was too far away, having been frozen in place by the calm Sophia had laid on the battlefield. She fell even as he ran towards her, his healing power growing as he did so, but he was too late to stop her from falling unconscious.

Three remaining Watchers, against a hundred of the Hourglass. The Warlord had used everything he had to create his force-fields. Jim had no offensive abilities at all. That left Christine, who (as the recipient of the Warlord's final, weakest, force-field) had been badly affected by the Collective's retaliation. Even with Jim's presence already staunching her head wound, her skill of intuition was still very much off-kilter. Her mind was sluggish, the chains of correlation and causation drifting to her as through treacle.

After far too long, the first key insight came to her.

"Warlord! Jim! Do you remember anything at all from before you threw up the force-fields?" she whispered to him, with as little voice as she could manage. Even that would have been audible to some of the far-away Hourglass, such was the eerie silence over the battlefield.

Her two comrades stared at her in confusion for ten seconds. At last, the Warlord's furrowed brow cleared, and he announced proudly that he could recall the whole series of events in perfect detail. Jim nodded along.

Christine closed her eyes. She, too, could now remember the assassination, the declarations of war, the summoning of the Watchers, and the start of the battle. Odd - but the exchange slipped from her mind as she made another connection.

"Since when could anyone stop time?! How can the Hourglass possibly have the power to suspend an entire army? How come you can heal us, Jim? These aren't normal things for humans to be able to do!"

At the edge of her mind, she could feel an explanation forming, but she was thoroughly spooked by now, and she squashed the nascent reasoning. The final piece clicked into place.

"Jim - say something. Anything - recite the first ten digits of pi, in as normal a voice as you can," she ordered.

"Three point one four one five nine two six…" Jim recited. He was beyond thinking that Christine was being weird, requiring the value of pi with an army advancing upon them - he had long since learnt to go with her requests for information, as you could never tell which piece of data would cause everything to make sense to her superhuman intuitive powers.

"No - can you say it more *normally*?" Christine clarified, cutting him off.

In a monotone, Jim reeled off "Three point one four one five nine two six…"

That was all the confirmation Christine's inductive power needed.

"OK. This will be a shock to you both, Warlord, Jim, but we're in a story. We're fictional. This situation we're in makes no sense at all. We had no backstory until I explicitly requested it, and it took a little while to come to us. And no-one seems to be capable of just *saying* something! Every time, we're ordering, or clarifying, or reeling things off, but never *saying*! We are fictional, and our author is not particularly competent to boot. That gives us a way out of our conveniently dramatic Dire Straits.

"Author! We're three of the most powerful members of the Watchers, and we've been the entire focus of this short story. There are no other plausible protagonists. You must find a way for us to survive, or else the story ends and you will have wasted all this time on another creative endeavour that came to nothing!"

The Hourglass were approaching faster now, provoked by Christine's loud outburst. Only thirty feet away, then twenty, then ten.

The front runner drew a dagger, and slit Jim's throat, then the Warlord's, then Christine's.

[1]: http://tvtropes.org/pmwiki/pmwiki.php/Main/PlotArmor

34
hugo/content/posts/2013-10-11-meaning-what-you-say.md
Normal file
@@ -0,0 +1,34 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-10-11T00:00:00Z"
aliases:
- /uncategorized/meaning-what-you-say/
- /meaning-what-you-say/
title: Meaning what you say
---

In conversation with (say, for the purposes of propagating a stereotype) humanities students, I am often struck by how imprecisely language is used, and how much confusion arises therefrom. A case in point:

> A: I think that froogles should be sprogged!
>
> B: Sprogging froogles would make the bimmers go plog.
>
> A: But I use froogles all the time - I don't care about the bimmers! Why are you so caught up on the plogging of bimmers?

Here, we see Person A espousing a view, Person B contributing a fact, and Person A responding as if the fact were an attack. This happens *all the time*, and it's not just that I'm incapable of sounding non-threatening, because Person A-style people seem to respond in the same way whoever's acting as Person B. It may well be an excellent tactic during a competitive debate, because a Person A-style response makes you sound impassioned and commanding. However, when it comes to attempting to divine truth, it's thoroughly detrimental. Person B has to spend the next few sentences saying that ey's not attacking A [^pun] - and that's time wasted which could have been spent discussing bimmer-plogging and its relevance.

As a mathematician, you quickly learn to be able to shift into a state of mind in which you mean exactly what you say, and no more. Without this skill, I suspect it is very hard to be a mathematician. Imagine I, as Person B, said "10 is not a multiple of 3", and you (as person A) replied, "But 10 is a multiple of 2, and you didn't mention that!" You would be laughed out of the room, because it is simply taken for granted that I didn't mean to say anything beyond "10 is not a multiple of 3".

Similarly, as a truth-finder (as opposed to debater), I should have the freedom to say "If the cinema were closed, it would very likely have little to no impact on your life" without my interlocutor assuming that I mean "The cinema should be closed" or "The cinema will be closed" or "You are a moron".[^cinema] Fine, in a competitive debate, no holds are barred, but in real life we should be trying to find truth, and it's much harder to do that if you have to keep clarifying every statement. "If the cinema were to be closed (and it might not be), then it would very likely have little to no impact on your life, but I'm not saying that its overall cultural value shouldn't mean that the cinema ought to stay" is considerably less easy to read and write. I, as its author, have limited room in my memory to store all the little dangly bits of sentence that I intend to include. You, as its reader, have limited room in your memory, some of which is taken up in holding irrelevant points like "Patrick is not arguing with you".

Once Person A responds in that way, it becomes much harder for Person B to maintain a calm fact-finding frame of mind. It flashes through B's mind, "Person A has just attacked me! I must defend myself!", even if ey is trying as hard as possible to be balanced and to think clearly. [^experience] It clouds the rest of the discussion.

Essentially, what I want is for everyone to receive training in meaning exactly what you say, and in understanding exactly what is said. I find that it adds greatly to pretty much every conversation if all parties are able to switch into this mode as necessary (to resolve some particular question of fact, for instance). I recognise that my causing offence to another person is always a failing on my part, but it is a lot of work to maintain the context of "I must de-offendify every factual statement". If nothing else, it's just one more thing I have to remember to do once I've realised what words are coming out of my mouth. Fine for normal conversation, since so much of that is based around appearing as un-offensive [^inoffensive] as possible, but it is a great burden when you're attempting to perform a distributed computation (namely, using two or more brains to discover whether a statement is true or false).

[^pun]: See what I did there? "Ey, A"?
[^cinema]: This example comes from a discussion about a certain news story about anti-monopoly laws in Cambridge cinemas (entitled "Cambridge set to lose Cineworld or Arts Picturehouse following Competition Commission ruling", of Cambridge News, published 2013-10-08 and now defunct). In fact, in this example, the cinema will be *sold on*, not closed - another reason for clear mathematical thinking in distinguishing "if X then Y" from "X is true".
[^experience]: I am extrapolating from my own experience here - I am not yet well-enough practised at adopting the frame of mind that "the other person is only attacking me because ey doesn't know better - it doesn't really count".
[^inoffensive]: Not inoffensive, which is something a little different.

46
hugo/content/posts/2013-10-13-training-away-mental-bias.md
Normal file
@@ -0,0 +1,46 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2013-10-13T00:00:00Z"
aliases:
- /psychology/training-away-mental-bias/
- /training-away-mental-bias/
title: Training away mental bias
---

*In which I recount an experiment I have been performing. Please be aware that in this article I am in "[meaning what I say][1]" mode.*

For the past year or so, I have been consciously trying to identify and counteract places in the "natural", everyday use of language in which gender bias is implicitly assumed to be correct. The kind of thing I mean is:

> A: I called the plumber.
>
> B: And what did he say?

I have also been keeping tabs on the way the word "man" creeps into occupations and so forth:

> We are looking for a new chairman for the society.

Specifically, I did the following to counter this culturally-imposed tendency:

1. I switched to using gender-neutral pronouns in my writing (although more recently, I have reverted to "he-or-she" in conversation)
2. I formed a habit of noticing whenever I thought the words "she" or "he", and checking whether I actually knew the gender of the person in question
3. If for some reason I need to invent a person, and gender-neutral won't do, I flip a coin to determine that character's gender (which will, in theory, completely eliminate gender bias in my characters)
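
The third step can be made entirely mechanical. As a toy sketch (the function name is mine; the original protocol is literally a physical coin):

```python
import random

def invent_gender(rng=random):
    """Assign an invented character's gender by a fair coin flip,
    so that the choice carries no systematic bias."""
    return rng.choice(["female", "male"])

# Six freshly invented characters, genders decided by coin flip.
characters = [invent_gender() for _ in range(6)]
```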

I think that I have succeeded in correcting the bias, at least partially. A week ago, I was even caught by surprise when someone referred to an electrician of unspecified gender as "he" - I had to backtrack mentally and work out whether I'd missed the specification of eir gender, before I realised that this was simply the usual bias being demonstrated by other people. In much the same way, it would surprise me for an electrician to be *assumed* to be called Fred [^coinflips] ("I phoned for an electrician to fix our wiring. Fred was amazing."), or to be gluten-intolerant ("I phoned for an electrician to fix our wiring, but of course he couldn't eat the cheesecake I'd made for him."), or to be particularly tall ("I phoned for an electrician to fix our wiring, but she had the obvious trouble moving around in the attic.").

This is not to proselytise - I've never pointed out people's gender bias unless they've specifically asked for me to do something similar, because in my experience (sample size of 1, when I explained the problem as I see it to someone) people get annoyed and call me a "feminist". [^slight] I don't understand why people would get annoyed that I would like women and men to be treated equally, and the issue is further obscured by the labelling of my views as "feminism". Not that I am against feminism particularly, but using the word "feminist" just clouds the issue. In the same way, calling yourself a "liberal" encompasses an enormous range of policies, and I may not agree with every single one, even if I identify myself as a liberal. For me to be a "feminist" could be interpreted by some as "this person wishes for men to be replaced entirely by women", rather than the interpretation I would prefer (namely, "this person wishes for males to be treated as fairly as females in all things"). Classifying your argument immediately makes everything harder for all parties, as it then sets up a pressure to remain consistent with the entire category you have given, rather than with what you intended to convey. Silly example: "I'm a utilitarian" vs "I think people should act so as to maximise the happiness of people around them".

Given that the person-who-got-annoyed in question was female, I don't think I have the right to overrule her position. [^parallel]

Anyway, I think that my attempt to realign my thoughts so that the implicit, unannounced anti-female bias is less pronounced has been a success. I do not claim perfection, of course, and I will keep going with my new habits (they're a part of me now, so it's easier to keep going than not). I have no real-world outcomes to measure, apart from being acutely aware of everyone else's bias [^mean] - I intend at some point to take a test to give me a quantitative answer, but at present I can't find the particular test I have in mind. [^test] I hope these habits are having a good effect on my thinking.

[^coinflips]: I'm flipping coins for the genders in this paragraph, too.
[^slight]: As if that were some kind of horrendous slight upon my good name.
[^parallel]: Although I can't help drawing a parallel with extremely wealthy people claiming that "money is unimportant"; it feels as bogus as a philosopher claiming that "truth is relative" - which is simply asking for you not to believe em.
[^mean]: Again, I mean what I say: I do not necessarily mean that "I am unbiased" or "I am unaware of my own bias".
[^test]: Its protocol is to flash up pairs of words like \{"good","female"\} or \{"uncle", "male"\}, whereupon the testee presses a button for "related" and a different button for "unrelated". The idea is that it is easy to determine whether \{"uncle","male"\} are related, but if bias is present then \{"competent","female"\} will be harder (and hence slower) to determine than \{"competent", "male"\}, because we are so used to thinking of males implicitly as more competent.

[1]: {% post_url 2013-10-11-meaning-what-you-say %}

56
hugo/content/posts/2013-10-20-the-ravenous.md
Normal file
@@ -0,0 +1,56 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
comments: true
date: "2013-10-20T00:00:00Z"
aliases:
- /creative/the-ravenous/
- /the-ravenous/
title: The Ravenous
sidenotes: true
---

[Once upon a midnight dreary][1], while I pondered, weak and weary,
I required a snack to feed me. Reaching in the kitchen drawer -
With the scissors, cut the wrapping, I revealed a jar of tapen-
Ade of olives. Gently snapping, snapping off the lid, I saw:
Lines of mouldy olive scored the tapenade. The lid I saw
Speckled with each mocking spore.

How the pangs of hunger rumbled while I cursed the jar I'd fumbled;
Indistinct, I faintly mumbled, "May this torture last no more!"
Suddenly I saw the bread bin; eagerly towards it edging,
Bravely to my stomach pledging, pledging food would be in store.
Opening that sacred vessel, only crumbs were left in store.
Savagely the bag I tore.

Now my thoughts turned to basmati; I would make a dish quite hearty,
And my shattered brain was party to such plans of starch galore.
Trembling I imagined sauces rich in spice and such resources,
Gripped by these enchanting forces, opened I the cupboard door.
Slavering, excitement mounting, opened I the cupboard door;
Rice stocks were exceeding poor.

How my stomach needed filling. Dreams of pancakes gently grilling
Served to give me eager willingness to find a bag of {{< side right flour-footnote `flour.`>}} Pronounced "floor". {{< /side >}}
Happily it was not lacking. Took the eggs out from their packing,
Fetched a bowl, and in it cracking, cracking eggs so batter'd pour.
Tipped the milk (blue top, full-fat) in, mixing up so batter'd pour.
Sugar I could not ignore.

Took out oil, and put the gas on. Measured out a goodly ration,
Ladled it in practised fashion, spread it thin, my movements sure.
Round the edges batter bubbled, far too quiet. The heat I doubled;
Soon I'd be no longer troubled: hunger'd bother me no more.
Oh, to be no longer troubled, hunger both'ring me no more.
Crêpes: a food which I adore.

Tested I the pancake, dipping fish-slice in to start its flipping;
Grabbed the pan, towards me tipping. "Now be cooked!" I did implore.
In my eagerness to turn it (lest I tarry and I burn it)
With such horror I discern it: I had dropped it to the floor.
Ah, with terror I discern that I had dropped it to the floor.
Quoth the pancake: "Nevermore."

[1]: https://en.wikipedia.org/wiki/The_Raven "The Raven Wikipedia page"

115
hugo/content/posts/2013-10-24-how-to-do-analysis-questions.md
Normal file
@@ -0,0 +1,115 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2013-10-24T00:00:00Z"
math: true
aliases:
- /mathematical_summary/how-to-do-analysis-questions/
- /how-to-do-analysis-questions/
title: How to do Analysis questions
---

This post is for posterity, made shortly after [Dr Paul Russell][1] lectured Analysis II in Part IB of the Maths Tripos at Cambridge. In particular, he demonstrated a way of doing certain basic questions. It may be useful to people who are only just starting the study of analysis and/or who are doing example sheets in it.

The first example sheet of an Analysis course will usually be full of questions designed to get you up and running with the basic definitions. For instance, one question from the first example sheet of Analysis II this year is as follows:

> Show that if \\((f_n)\\) is a sequence of uniformly continuous real functions on \\(\mathbb{R}\\), and if \\(f_n \to f\\) uniformly, then \\(f\\) is uniformly continuous.

This is one of those questions which only exists to make sure that you know what "uniformly continuous" and "converges uniformly" mean.
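
To make the setup concrete (this example is my own, not from the sheet): the functions \\(f_n(x) = \sqrt{x^2 + 1/n^2}\\) are each uniformly continuous and converge uniformly to \\(f(x) = \vert x \vert\\), since the largest gap occurs at \\(x = 0\\) and equals \\(1/n\\). A quick numerical check of that supremum:

```python
import math

# Illustrative (not from the example sheet): f_n(x) = sqrt(x^2 + 1/n^2)
# converges uniformly to f(x) = |x|; the gap is largest at x = 0,
# where it is exactly 1/n.
xs = [i / 1000 for i in range(-10000, 10001)]  # grid on [-10, 10]

def gap(n):
    """Largest value of |f_n(x) - f(x)| over the grid."""
    return max(abs(math.sqrt(x * x + 1 / n ** 2) - abs(x)) for x in xs)

print(gap(1), gap(10), gap(100))  # shrinks like 1/n
```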

How do we solve this question? The key with a definitions-question is to avoid employing the brain wherever possible. So the first step is to define \\((f_n)\\) and \\(f\\), and to write down everything we know about them:

* Let \\((f_n)\\) be a sequence of uniformly continuous real functions on \\(\mathbb{R}\\), and \\(f\\) a real function on \\(\mathbb{R}\\), such that \\(f_n \to f\\) uniformly.
* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), for every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that for all \\(\epsilon\\), there exists \\(N\\) such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x) \vert < \epsilon\\).

Now, what do we want to prove?

* [Don't write this down yet - this line goes at the end of the proof!] Therefore, for every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert x-y \vert < \delta\\), \\(\vert f(y)-f(x) \vert < \epsilon\\). Hence \\(f\\) is uniformly continuous.

So what can we get from what we know? Everything we know is about "for all \\(\epsilon\\)". So we fix an arbitrary \\(\epsilon\\). If we can prove something that is true for this \\(\epsilon\\), with no further assumptions, then we are done for all \\(\epsilon\\).

* Fix arbitrary \\(\epsilon\\) greater than \\(0\\).

Now what do we know?

* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that there exists \\(N\\) such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
* \\(\epsilon > 0\\).

Aha! Now we have a definite something existing (namely, the \\(N\\) in the second condition). Let's fix it into existence.

* Let \\(N\\) be such that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).

What do we know?

* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that for all \\(n \geq N\\), for every \\(x\\) we have \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer.

Now, we have two "for all"s competing with each other. The more specific is the second one, so we'll fix that into existence.

* Fix arbitrary \\(n\\) greater than or equal to \\(N\\).

What do we know?

* Since each \\(f_n\\) is uniformly continuous, we have that for all \\(n\\), there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x \vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\).

Now we have a choice of "for all"s again, but this time they aren't "talking about the same thing" (last time, both were integers referring to which \\(f_n\\) we were talking about; this time, one is an integer and one is an arbitrary real). However, now we have \\(n \geq N\\) which we can talk about; let's wring more information out of it, by using the "uniformly continuous" bit.

* Since each \\(f_n\\) is uniformly continuous, there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\).

Aha - another "there exists" condition (on \\(\delta\\)). Let's fix it.

* Fix \\(\delta\\) such that for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\).

What do we know?

* Since each \\(f_n\\) is uniformly continuous, for all \\(x,y\\) with \\(\vert y-x\vert < \delta\\), it is true that \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\).

Two more "for all" conditions. Let's fix them into existence:

* Let \\(x\\) be an arbitrary real, and let \\(y\\) be such that \\(\vert y-x\vert < \delta\\).

What do we know?

* Since each \\(f_n\\) is uniformly continuous, \\(\vert f_n(y)-f_n(x) \vert < \epsilon\\).
* Since \\(f_n \to f\\) uniformly, we have that \\(\vert f_n(x)-f(x) \vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\), and \\(x\\) is real, and \\(\vert y-x\vert < \delta\\).

Now the conditions are really small things. It's kind of unclear how to proceed from here, so let's look at what we wanted to prove again:

> For every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x,y\\) with \\(\vert x-y\vert < \delta\\), \\(\vert f(y)-f(x)\vert < \epsilon\\).

Applying what we know, this becomes:

* [to be proved] For all \\(y\\) with \\(\vert x-y\vert < \delta\\), \\(\vert f(y)-f(x)\vert < \epsilon\\).

Aha! We have already got something to do with \\(y\\) (namely that \\(\vert f_n(y)-f_n(x)\vert < \epsilon\\)), and we have something to do with \\(f(x)\\) (namely that \\(\vert f_n(x)-f(x)\vert < \epsilon\\)). Hence \\(\vert f_n(y)-f_n(x)\vert + \vert f_n(x)-f(x)\vert < 2\epsilon\\), and the triangle inequality gives us that \\(\vert f_n(y)-f(x)\vert < 2\epsilon\\). Eek - we need to turn that \\(f_n(y)\\) into an \\(f(y)\\). We have no way of doing that, so we must have missed out some information somewhere. Backtracking, the nearest-to-the-end bit of missed out information was when we fixed \\(x, y\\). We threw away information in "for every \\(x\\), \\(\vert f_n(x)-f(x)\vert < \epsilon\\)" when we fixed \\(x\\) - it applies to \\(y\\) too. So we'll add a new statement to the "what do we know?" list:

* \\(\vert f_n(y)-f(x)\vert < 2\epsilon\\)
* \\(\vert f_n(y)-f(y)\vert < \epsilon\\).
* \\(\epsilon > 0\\), and \\(N\\) is an integer, and \\(n \geq N\\), and \\(\delta > 0\\), and \\(x\\) is real, and \\(\vert y-x \vert < \delta\\).

And now it just drops out of the triangle inequality that \\( \vert f(y)-f(x) \vert < 3 \epsilon\\).
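
Written out, the two applications of the triangle inequality chain together as

\\(\vert f(y)-f(x) \vert \leq \vert f(y)-f_n(y) \vert + \vert f_n(y)-f_n(x) \vert + \vert f_n(x)-f(x) \vert < \epsilon + \epsilon + \epsilon = 3\epsilon.\\)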

Now, \\(\epsilon\\) was arbitrary, \\(N\\) was dictated by the conditions, \\(n \geq N\\) was arbitrary, \\(\delta\\) was dictated by the conditions, \\(x\\) was arbitrary, \\(y\\) was arbitrary subject to \\(\vert y-x \vert < \delta\\).

Hence we have proved that for every \\(\epsilon\\) there exists \\(N\\) such that for all \\(n \geq N\\) there is a \\(\delta\\) such that for all \\(x\\), for all \\(y\\) with \\(\vert y-x\vert < \delta\\), \\(\vert f(y)-f(x)\vert < 3\epsilon\\).

We can clean this statement up. Notice that neither \\(n\\) nor \\(N\\) was involved in the final expression, so we can simply get rid of them to obtain:

> For every \\(\epsilon\\) there is a \\(\delta\\) such that for all \\(x\\), for all \\(y\\) with \\(\vert y-x\vert < \delta\\), \\(\vert f(y)-f(x) \vert < 3\epsilon\\).

From this, it is easy to obtain the required result. We want to turn \\(3 \epsilon\\) into \\(\epsilon\\) - but that's fine, because the expression holds for every \\(\epsilon\\), so in particular if we fix \\(\epsilon\\) then it holds for \\(\dfrac{\epsilon}{3}\\). We'll just use the \\(\delta\\) from that \\(\dfrac{\epsilon}{3}\\) instead. This gives us that \\(f\\) is uniformly continuous, as required, and without actually engaging the brain except to carry out the algorithm of "write down what we know; if there exists something, fix it, and repeat; if for all something, then fix an arbitrary one, and repeat; if we're stuck, go back through, looking to see if we missed out any information during a fixing-arbitrary-for-all phase" and to carry out the algorithm of "when the information we have is simple enough, compare terms from what we know with the expression that we want to show; use the triangle inequality to get them in there".
|
||||
|
||||
[1]: https://www.dpmms.cam.ac.uk/~par31 "Paul Russell"

96
hugo/content/posts/2013-11-07-my-quest-for-a-new-phone.md
Normal file
@@ -0,0 +1,96 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-11-07T00:00:00Z"
aliases:
- /uncategorized/my-quest-for-a-new-phone/
- /my-quest-for-a-new-phone/
title: My quest for a new phone
---

*This post is unfinished, and may never be finished - I have decided that the Nexus 5 is sufficiently cheap, nice-looking and future-proof to outweigh the boredom of continuing the research here, especially given that such research by necessity has a very short lifespan. I am one of those people who hates shopping with a fiery passion.*

My current phone is a five-year-old [Nokia 1680]. It has recently developed a disturbing tendency to turn off when I'm not watching it.
This puts me in the market for a new phone. Having looked over the Internet for guides to which phone to buy, I've become lost in the swamp of information, so I am using this post to order my thoughts.

# My current phone usage

I use my phone pretty rarely. It has a camera, but I have only used it once ever (and that picture was so blurry that it doesn't really count). It has a colour screen, which I would happily forgo if it made the battery life better. The battery lasts about a week between charges at my current usage level. I have made about five calls on it in the past year, and sent a few hundred texts. The £20 of credit I gave it about four months ago is now down to £2.50, but I used it unusually often to make calls (four of the five calls I mentioned were in that period). The phone can connect to the Internet, but I have never used it thusly, because interacting with web pages would be too painful on that screen and with those buttons. My current tariff is pay-as-you-go, with Tesco Mobile, on a plan that doesn't seem to exist any more (4p per text, and some unspecified amount for calls).

# Projected phone usage

I have two main options available.

* Buy a dumbphone
* Buy a smartphone.

These options greatly affect the way I would use the phone. For a dumbphone, I would use it much as I use my current phone: for rare calls and for less-rare texts. For a smartphone, I would branch out considerably, into using it for calendar syncing, to-do lists, GPS/maps/directions, on-the-go information, computation and so forth. I would not use it for games (because they are simply a waste of time that I could be using to [become more awesome][2], and because they aren't fun anyway). I don't see myself using it as a camera, either. I will not be installing social media apps on a smartphone, because I hate it when people use them in front of me, and because I categorically do not want to become one of these people who incessantly posts about what food they had this morning. I reserve public self-broadcasting platforms for those things which I think could be important or interesting to many people (and I amend the "what-I-post" category in response to feedback), or which I'm proud of having created, and it's much harder to find/make these things on a phone screen than on a computer with keyboard and big screen.

# Requirements for a dumbphone

* Cheap - I do not want to spend more than £50 on a dumbphone
* Long battery life
* No need for a camera or a colour screen - [eInk] sounds ideal
* No need for Internet access

# Requirements for a smartphone

* Calendar syncing (I could host a [CalDAV] server on this website, so interoperability should be easy)
* To-do list syncing (I have switched to [Workflowy] for to-do lists, and that can be accessed in-browser, so it only needs a web browser)
* Preferably maps/GPS
* Smooth user experience (I want to feel like I'm controlling [JARVIS])
* Cheaper is better - I do not want to spend more than £400 on a smartphone
* Preferably [libre][7] and more preferably secure/NSA-proof, although this is not paramount
* At least a four-inch screen, preferably larger (up to a maximum of six inches)

# Dumbphone research

It would appear that very few purely eInk phones have ever been created. There are a few dual-screen [LCD and eInk phones][8], but they are primarily smartphones; what I want from eInk is more like a [Kindle] turned into a phone. The [eInk page on phones][10] demonstrates three phones, but they are either dual-screen or truly dreadful ([as in][11], only [two lines of text][12] can appear on the screen at once). It looks like eInk is a no-go.

I am reduced to looking for dumbphones without a camera, colour screen or Internet access.

# Smartphone research

## Operating system

There are two main OSs in use: [Android] and [iOS]. I say this because [Windows Phone] OS is ugly enough to flout the JARVIS requirement, and [Blackberry] phones… hmm. My cached thoughts on Blackberry phones run along the lines of "don't like them, uncool" more than anything else. I find myself generating excuses not to include them in this list, even though I don't actually know much about them. Better put them in.

### iOS

The only phone devices which run iOS are Apple's iPhones. With an education discount, the only model I can buy new within my £400 limit is the [iPhone 4S] (at £349). This model has access to [Siri] (the Apple personal assistant).

Apple offers a "[refurbished and clearance][19]" store, but they do not offer iPhones through this.

### Android

Android phones are very widely available. Because there is such a huge choice of phones already, I will make the simplifying assumption that I only want a phone which runs [Android 4.4 "KitKat"][KitKat] (the latest version of Android, as of this writing).

### Blackberry

It turns out that only two Blackberry phones have full-size touchscreens. The JARVIS criterion is failed for screens which are too small to fit reasonable amounts of text on, which leaves only the [Z30][21] and [Z10][22]. However, from what [I've seen][23], the Blackberry OS is kind of uglier than Android or iOS. For the sake of simplifying the discussion, I will go with my cached self and rule out Blackberry.

[Nokia 1680]: https://en.wikipedia.org/wiki/Nokia_1680_classic "Nokia 1680 Wikipedia page"
[2]: http://lesswrong.com/lw/iri/how_to_become_a_1000_year_old_vampire/ "Thousand year old vampire LessWrong page"
[eInk]: http://www.eink.com "eInk"
[CalDAV]: https://en.wikipedia.org/wiki/CalDAV "CalDAV Wikipedia page"
[Workflowy]: https://workflowy.com "Workflowy"
[JARVIS]: https://www.youtube.com/watch?v=D156TfHpE1Q "JARVIS Youtube video"
[7]: https://en.wikipedia.org/wiki/Gratis_versus_libre "Free-as-in-freedom Wikipedia page"
[8]: http://gizmodo.com/5967746/this-dual-lcd-and-e-ink-phone-will-be-available-in-2013 "LCD/eInk phone example"
[Kindle]: https://en.wikipedia.org/wiki/Amazon_Kindle "Amazon Kindle"
[10]: http://web.archive.org/web/20130718152515/http://www.eink.com/customer_showcase_cell_phones.html "eInk phones showcase"
[11]: https://en.wikipedia.org/wiki/Motorola_Fone "Motofone eInk phone"
[12]: https://en.wikipedia.org/wiki/Motofone_f3#Display_technology "Motofone F3 Wikipedia page"
[Android]: https://en.wikipedia.org/wiki/Android_OS "Android Wikipedia page"
[iOS]: https://en.wikipedia.org/wiki/IOS "iOS Wikipedia page"
[Windows Phone]: https://en.wikipedia.org/wiki/Windows_Phone_8 "Windows Phone Wikipedia page"
[Blackberry]: http://uk.blackberry.com/smartphones.html "Blackberry phone"
[iPhone 4S]: https://en.wikipedia.org/wiki/Iphone_4s "iPhone 4S Wikipedia page"
[Siri]: https://en.wikipedia.org/wiki/Siri "Siri Wikipedia page"
[19]: http://store.apple.com/uk/browse/home/specialdeals "Apple Refurbished store"
[KitKat]: https://www.android.com/kitkat/ "Android KitKat"
[21]: http://uk.blackberry.com/smartphones/blackberry-z30.html "Blackberry Z30"
[22]: http://uk.blackberry.com/smartphones/blackberry-z10 "Blackberry Z10"
[23]: http://www.youtube.com/watch?v=nyjMVJ3ISDQ "Blackberry Z30 Youtube video"
59
hugo/content/posts/2013-11-12-markov-chain-card-trick.md
Normal file
@@ -0,0 +1,59 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2013-11-12T00:00:00Z"
math: true
aliases:
- /mathematical_summary/markov-chain-card-trick/
- /markov-chain-card-trick/
title: Markov Chain card trick
---
In my latest lecture on [Markov Chains][1] in Part IB of the Mathematical Tripos, our lecturer showed us a very nice little application of the theorem that "if a discrete-time chain is aperiodic, irreducible and positive-recurrent, then there is an invariant distribution to which the chain tends as time increases". In particular, let \\(X\\) be a Markov chain on a state space consisting of "the value of a card revealed from a deck of cards", where aces count 1 and picture cards count 10. Let \\(P\\) be randomly chosen from the range \\(1 \dots 5\\), and let \\(X_0 = P\\). Proceed as follows: define \\(X_n\\) as "the value of the \\(\sum_{i=0}^{n-1} X_i\\)-th card". Stop when the index of the next card would be greater than \\(52\\).
That is, I shuffle a pack of cards, and you select one of the first five at random. I then deal out the rest of the cards in order; you hop through the cards as they are revealed. For instance, if the deck looked like \\(\{5,4,9,10,1,2,6,8,8,3, \dots \}\\) and you picked \\(2\\) as your starting value, then your list of numbers would look like \\(\{4, 2,8, \dots \}\\) (moving forward four cards, then two, then eight, and so on). We keep going until I run out of cards to deal out, at which point I triumphantly announce the value of the card which you last remembered.
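
The hopping procedure is easy to simulate. Here is a rough Python sketch of it (my own re-implementation, not the original Mathematica; the function names are mine), using the same convention that picture cards count 10:

```python
import random

def kruskal_walk(deck, start):
    """Hop through the deck: from each card, jump forward by its value;
    return the value of the last card we land on (start is 0-based)."""
    pos = start
    while pos + deck[pos] < len(deck):
        pos += deck[pos]
    return deck[pos]

# Four each of the values 1..10, plus twelve picture cards counting 10.
BASE_DECK = [v for v in range(1, 11) for _ in range(4)] + [10] * 12

def trial(rng, n_start=5):
    """Shuffle, start both walkers somewhere in the first n_start cards,
    and report whether they finish on the same value."""
    deck = BASE_DECK[:]
    rng.shuffle(deck)
    return (kruskal_walk(deck, rng.randrange(n_start))
            == kruskal_walk(deck, rng.randrange(n_start)))

rng = random.Random(0)
wins = sum(trial(rng) for _ in range(10_000))
print(wins / 10_000)  # compare with the 0.76764 quoted below
```

On the example deck above, both a walker starting at the first card and one starting at the second finish on the same value, which is the whole point of the trick.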

How is this done? The point is that we are both walking along the same Markov chain, just from different starting positions. As soon as we both hit the same card, we are locked together for all time, and it is simply a matter of ensuring that we hit the same card at some point. But this is precisely what the quoted theorem tells us: if we go on for long enough, we will fall into the same distribution, and hence will likely hit the same card as each other at some point. I ran some simulations to determine the probability with which we end on the same value. The code is kind of dirty, for which I apologise - it was thrown together quickly, and is written in the [write-only][2] [Mathematica][3]. We first assume that all picture cards are 10s, and that aces are 1s.

    nums = Flatten[{ConstantArray[Range[1, 10], 4], ConstantArray[10, 12]}];

The following function runs one simulation using each of the supplied starting indices, using the given order of cards:

    test[perm_, startPos_List] := ({Length[#[[1]]], #[[2]]} &@ NestWhile[{#[[1]][[#[[2]] + 1 ;;]], #[[1]][[#[[2]]]]} &, {perm, #}, Length[#[[1]]] >= #[[2]] &]) & /@ startPos

It is astonishingly illegible. Read it as: "For each starting position supplied: start off with the input permutation and starting position. While the starting position is a valid position of the list (so it is less than or equal to the length of the list), set the starting position to the value of the card at that starting position, and set the list of cards to be everything after that position. Repeat until we've run out of cards. Then output the length of the remaining list of cards [and hence, indirectly, the final position we hit], and the last value we remembered."

The following line of code runs a hundred thousand simulations with a random order of cards each time:

    True/(False + True) /. Rule @@@ Tally[ Function[{inputStartPos}, #[[1, 1]] == #[[2, 1]] &@ test[RandomSample[nums, Length@nums], inputStartPos]] /@ RandomChoice[Range[4], {100000, 2}]] // N

Again, it is illegible. Read it as: "We're going to want the proportion of good results to all results, where 'good' is defined as follows: call a run 'good' if we stopped at the same card at the end. Do that for a hundred thousand different pairs of random starting points less than \\(6\\), and tally them all up. Give me a numerical answer at the end, not a fraction." This program output 0.76764 - that is, there is a better-than-three-quarters chance of "winning" in this variant, where we insist that players pick one of the first five cards to start with, and where we don't care that queens, kings, jacks and tens are all different.
In order to try and be a bit more clever, I used a simple [Bayesian update technique][4] to try and get the confidence of the answer. Performing 5000 trials and updating from a prior of "uniformly likely that the required probability is any \\(\dfrac{n}{5000}\\) for integer \\(n\\)", I got the following PDF:



This has mean 0.756297 and standard deviation 0.00606961.
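
With a uniform prior, this update has a closed form: the posterior over the win probability is Beta(successes + 1, failures + 1). As a sanity check, here is a sketch of mine; the success count 3782 is an assumption, back-solved from the quoted mean, since the post does not state it:

```python
from math import sqrt

def beta_posterior(successes, trials):
    """Mean and standard deviation of the Beta(s+1, f+1) posterior that a
    uniform prior on the win probability gives after `trials` observations."""
    a, b = successes + 1, trials - successes + 1
    mean = a / (a + b)
    sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# 3782 wins out of 5000 is an assumed count, chosen to match the quoted mean.
mean, sd = beta_posterior(3782, 5000)
print(mean, sd)  # roughly 0.7563 and 0.0061
```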

What if we want a different range of starting values? The following table gives the mean and standard deviation of \\(p\\) for different ranges of allowed starting cards.

* N=1: {0.999884, 0.000191799} [the true value is, of course, 1]
* N=2: {0.840064, 0.0051822}
* N=3: {0.805078, 0.0056006}
* N=5: {0.756897, 0.00606454}
* N=10: {0.69912, 0.00648421}

How about if we make 10s different from picture cards? Let's make jacks 11, queens 12 and kings 13:

* N=2: {0.834066, 0.00525959}
* N=5: {0.716913, 0.0063691}
* N=10: {0.673331, 0.0066306}

So your odds of winning are still pretty good, even if we insist that all cards are different (ignoring suit).

[1]: http://www.statslab.cam.ac.uk/~grg/teaching/markovc.html "Markov Chains course page"
[2]: https://en.wikipedia.org/wiki/Write-only_language "Write-only language Wikipedia page"
[3]: https://www.wolfram.com "Wolfram Mathematica"
[4]: https://web.archive.org/web/20131019220645/http://www.databozo.com/2013/09/15/Bayesian_updating_of_probability_distributions.html "Bayesian updating"

@@ -0,0 +1,41 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-11-23T00:00:00Z"
aliases:
- uncategorized/the-jean-paul-sartre-cookbook/
title: The Jean-Paul Sartre Cookbook
---

> Many thanks to the [Guru Bursill-Hall][1] for bringing this tract to my attention through his weekly History of Maths bulletins. It was originally written in 1987 by Marty Smith, according to the Internet.

# The Jean-Paul Sartre Cookbook

**October 3.** Spoke with Camus today about my cookbook. Though he has never actually eaten, he gave me much encouragement. I rushed home immediately to begin work. How excited I am! I have begun my formula for a Denver omelet.

**October 4.** Still working on the omelet. There have been stumbling blocks. I keep creating omelets one after another, like soldiers marching into the sea, but each one seems empty, hollow, like stone. I want to create an omelet that expresses the meaninglessness of existence, and instead they taste like cheese. I look at them on the plate, but they do not look back. Tried eating them with the lights off. It did not help. Malraux suggested paprika.

**October 6.** I have realized that the traditional omelet form (eggs and cheese) is bourgeois. Today I tried making one out of cigarette, some coffee, and four tiny stones. I fed it to Malraux, who puked. I am encouraged, but my journey is still long.

**October 10.** I find myself trying ever more radical interpretations of traditional dishes, in an effort to somehow express the void I feel so acutely. Today I tried this recipe:

> **Tuna Casserole**
> Ingredients: 1 large casserole dish.
>
> Place the casserole dish in a cold oven. Place a chair facing the oven and sit in it forever. Think about how hungry you are. When night falls, do not turn on the light.

While a void is expressed in this recipe, I am struck by its inapplicability to the bourgeois lifestyle. How can the eater recognize that the food denied him is a tuna casserole and not some other dish? I am becoming more and more frustrated.

**October 25.** I have been forced to abandon the project of producing an entire cookbook. Rather, I now seek a single recipe which will, by itself, embody the plight of man in a world ruled by an unfeeling God, as well as providing the eater with at least one ingredient from each of the four basic food groups.

To this end, I purchased six hundred pounds of foodstuffs from the corner grocery and locked myself in the kitchen, refusing to admit anyone. After several weeks of work, I produced a recipe calling for two eggs, half a cup of flour, four tons of beef, and a leek. While this is a start, I am afraid I still have much work ahead.

**November 15.** Today I made a Black Forest cake out of five pounds of cherries and a live beaver, challenging the very definition of the word cake. I was very pleased. Malraux said he admired it greatly, but could not stay for dessert. Still, I feel that this may be my most profound achievement yet, and have resolved to enter it in the Betty Crocker Bake-Off.

**November 30.** Today was the day of the Bake-Off. Alas, things did not go as I had hoped. During the judging, the beaver became agitated and bit Betty Crocker on the wrist. The beaver's powerful jaws are capable of felling blue spruce in less than ten minutes and proved, needless to say, more than a match for the tender limbs of America's favourite homemaker. I only got third place. Moreover, I am now the subject of a rather nasty lawsuit.

**December 1.** I have been gaining twenty-five pounds a week for two months, and I am now experiencing light tides. It is stupid to be so fat. My pain and ultimate solitude are still as authentic as they were when I was thin, but seem to impress girls far less. From now on, I will live on cigarettes and black coffee.

[1]: http://web.archive.org/web/20201113203936/https://www.dpmms.cam.ac.uk/~piers/ "Guru Piers Bursill-Hall"

24
hugo/content/posts/2013-12-14-the-training-game.md
Normal file
@@ -0,0 +1,24 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-12-14T00:00:00Z"
aliases:
- /uncategorized/the-training-game/
- /the-training-game/
title: The Training Game
---

The book *Don't Shoot the Dog*, by Karen Pryor, contains a simple exercise in demonstrating [clicker training][1]. This is a very successful technique used to produce behaviour in animals: having first associated the sound of a click with the reward of attention or food, one can then use the click as an immediate substitute for the reward (so that one can train more complicated, time-critical actions through positive reinforcement; a click is instant, but food or attention requires the trainer approaching the trainee). The demonstration exercise involves a person designated the Trainer, and a person designated the Trainee. The trainer has a goal in mind, but cannot communicate that goal to the trainee; the only interaction allowed is a click when the trainee is doing something vaguely correct. As an example, the trainee can be made to move towards a light switch by dint of a click when ey is pointing towards the switch, then a click when ey moves in that direction (ignoring any attempts to move in a different direction); the trainer then draws attention to the general area of the light by clicking whenever the trainee looks in the right direction, and then for any hand movement, then for hand movement in the direction of the light switch. This kind of incremental reinforcement can be used to achieve all sorts of interesting behaviour. (I seem to remember, from *Don't Shoot the Dog*, that it has been used in chickens to make them do hundred-step dances, although I may have mis-remembered that.)

The exercise, then, demonstrates the power of reinforcement to produce order from chaos. With one trainer and several trainees, I would imagine that the problem becomes harder, but not insurmountably so (click when the person whose attention you need moves - it would take a while, but eventually I think I could train individual behaviour out of the group).

But what about one trainee and several trainers? Imagine a scenario in which a single trainee is in a room alone, with the clicks of two trainers coming through the door in such a way that the trainee can hear only a single click. No matter which of the trainers produced it, the trainee can't tell the difference between different trainers' commands. The two trainers have competing goals (or the same goals?), and they perform the above clicker-training procedure. Would any useful behaviour result? I can imagine that an animal would get hopelessly confused by the competing goals, but a human might be able to get some kind of result. (We must assume in the contradictory case that the trainers have among their goals that "progress towards the opposing goal should be minimised"; that prevents them from teaming up to, say, perform the two goals sequentially.)

Imagine that one trainer aims to make the trainee do the [Macarena][2], while the other trainer wishes the trainee to assume the [lotus position][3]. The goals are contradictory. I would imagine that the trainee would receive reinforcement towards being low down (in order to sit), as well as for standing straight and still (the starting position for the Macarena). I suspect that the trainee would infer some completely unrelated behaviour. I don't know if there's an official name for "excessively powerful inference" - [pareidolia][4] (the tendency to see faces in random settings) is a related phenomenon, and might cover this. I would be interested to know what behaviour would result from this kind of stimulus. Perhaps an experiment is in order (or, if you are also interested, do convey your results to me).

[1]: https://en.wikipedia.org/wiki/Clicker_training "Clicker training Wikipedia page"
[2]: https://en.wikipedia.org/wiki/Macarena_%28song%29 "Macarena Wikipedia page"
[3]: https://en.wikipedia.org/wiki/Lotus_position "Lotus position Wikipedia page"
[4]: https://en.wikipedia.org/wiki/Pareidolia "Pareidolia Wikipedia page"

@@ -0,0 +1,40 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2013-12-22T00:00:00Z"
aliases:
- /mathematical_summary/three-explanations-of-the-monty-hall-problem/
- /three-explanations-of-the-monty-hall-problem/
title: Three explanations of the Monty Hall Problem
---

Earlier today, I had a rather depressing conversation with several people, in which it was revealed to me that many people will attempt to argue against the dictates of mathematical and empirical fact in the instance of the [Monty Hall Problem][1]. I present a version of the problem which is slightly simpler than the usual statement (I have replaced goats with empty rooms).

> Monty Hall is a game show presenter. He shows you three doors; behind one of the three is a car, and the other two hide empty rooms. You have a free choice: you pick one of the doors. Monty Hall then opens a door which you did not pick, which he knows is an empty-room door. Then he gives you the choice: out of the two doors remaining, you may switch your choice to the other door, or stick with the one you first picked. You will get whatever is behind the door you end up with. You want to pick the car; do you stick with your first choice, or do you switch to the other door?

The solution is that you should switch. I present three explanations for why this is true, each of which makes it obvious to me in a different way. They may not help.

# Different worlds

Imagine three possible worlds: you pick a door, and the car is behind the first, second or third door. These choices are equally likely: the position of the car is randomly chosen by Monty Hall beforehand. Hence there are three possible worlds that I could find myself in. Let's suppose I picked door 1; it doesn't matter.

* In the "I pick door 1, car in 1" world, if I switch my door, I lose; if I keep my door, I win.
* In the "I pick door 1, car in 2" world, if I switch my door, I win; if I keep my door, I lose.

The "I pick door 1, car in 3" world is identical to the previous one.

That is, in two cases out of three, switching wins for me. That means switching is better than sticking: I win in two-thirds of the worlds if I switch, and I only win in a third of the worlds if I stick. (This is the brute-force approach to understanding the problem.)
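
This world-counting is easy to check by brute force. A quick Monte Carlo sketch (my own Python, not anything from the original post):

```python
import random

def monty_trial(rng, switch):
    """Play one round of the game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens an empty-room door that the player did not pick.
    opened = rng.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

rng = random.Random(1)
n = 30_000
switch_rate = sum(monty_trial(rng, True) for _ in range(n)) / n
stick_rate = sum(monty_trial(rng, False) for _ in range(n)) / n
print(switch_rate, stick_rate)  # close to 2/3 and 1/3 respectively
```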

# Extra information

Let's suppose we pick a door, and then Monty Hall reveals a false door. Of course, when I picked my door, I had a 1/3 chance of having picked the car, and that probability is unchanged when Monty reveals the false door. However, if I switch, only then am I given a chance to use the information that Monty has provided to me. Only if I switch am I able to use the fact that only two doors remain (one of them hiding nothing, and one of them hiding a car) - that makes my chance of winning a car 1/2 if I switch (actually, it's 2/3 if we condition correctly, but that's not instantly obvious and this is an informal explanation), but only 1/3 if I stick. This means it's better for me to switch. Essentially, I'm restarting the game if I switch, because nothing was special about my original choice so I can discard it without changing anything. If I switch, I discard my original choice, changing nothing, and re-pick from the improved game with one fewer door. (This is the information-theoretic approach of incorporating new information.)

The same idea can be seen if we think of the question in a slightly different way. Once you've picked a door, and before Monty Hall opens any door, Monty asks you, "Would you like to look behind the door you picked, or behind the two doors you didn't pick?" If you reply "my door, please", that's the same as sticking with your original choice: Monty opens an empty-room door (changing nothing; after all, you know Monty will do this before you even start the game) and then your original door. If you reply "the other two, please", Monty opens an empty-room door and then the other door. (That's the same as switching choices.) Essentially, Monty is giving you a choice of two doors in the second case, and only one door in the first. The reason that his opening an empty-room door changes something in this case, is because we might as well consider it as "Monty opens the other two doors simultaneously": you get a 2-in-3 chance this time, since Monty's opening two of the three doors.

# Extreme problem

Consider the phrasing of the problem as "You pick a door. If you picked the car, Monty Hall opens every door except the one you picked, and one random empty-room door. If you picked an empty room, Monty Hall opens every door except the one you picked, and the car." Now, this exactly reflects the original problem, but is amenable to extension in the following way. Instead of having one car and two empty rooms, have one car and a hundred empty rooms. Now, when you pick, Monty Hall opens every door except for the one you picked, and one other. You started with a one-in-101 chance of having picked the car. At the end, Monty Hall has left only two doors. The probability that you originally picked the car is very low (1 in 101). But if we switch, we suddenly see that Monty Hall has removed almost all of the chaff that caused us to have only a 1 in 101 chance originally. Now it's just obvious that we have a 1 in 2 chance of picking the car if we re-pick from the game in its new state. The only way we have of re-picking is to switch doors.
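
In fact switching in the extreme version does even better than that informal one-in-two estimate: your door keeps its original 1-in-101 chance, so the one door Monty leaves closed carries the remaining \\(\dfrac{100}{101}\\). A quick simulation (again a sketch of mine, not the author's code) confirms it:

```python
import random

def extreme_monty(rng, n_doors, switch):
    """n-door variant: Monty opens every door except the player's and one other."""
    car = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    if pick == car:
        # Monty leaves a random empty-room door closed alongside the player's.
        other = rng.choice([d for d in range(n_doors) if d != pick])
    else:
        other = car  # Monty cannot open the car door, so it stays closed.
    return (other if switch else pick) == car

rng = random.Random(2)
trials = 20_000
rate = sum(extreme_monty(rng, 101, True) for _ in range(trials)) / trials
print(rate)  # about 100/101, i.e. roughly 0.99
```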

[1]: https://en.wikipedia.org/wiki/Monty_hall_problem "Monty Hall problem Wikipedia page"

30
hugo/content/posts/2013-12-30-smartphone-charter.md
Normal file
@@ -0,0 +1,30 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2013-12-30T00:00:00Z"
aliases:
- /uncategorized/smartphone-charter/
- /smartphone-charter/
title: Smartphone Charter
---

I am shortly to receive a new [Nexus 5][1]. I am determined not to become a smartphone zombie, and so I hereby commit to the following Charter.

* I will keep my phone free of social networking apps, and I will ensure that I do not know the passwords to access their web interfaces. While they can be really quite handy, they are usually simply a distraction. People are used to the fact that I am present on the Internet only when I have my computer with me; there's no need for that to change.
* I will only look at text messages when I'm not talking to someone already.
* I will never look at [reddit][2] or [Hacker News][3] or suchlike on my phone, unless there is no-one else around. Similarly, I will not access my news feeds from my phone. It's far too easy to waste time and attention on them, when such attention is expected from the people I'm with.
* If I am doing something on my phone, and someone asks me to stop, I will do one of the following (with number 1 being heavily preferred, and number 3 only in emergency):
    1. I will stop using my phone within ten seconds
    2. I will explain what I am doing, and ask permission to continue
    3. I will explain what I am doing (or say that an explanation will be forthcoming as soon as possible), and continue.
* I will keep my phone out of reach of my bed when I go to sleep. It's easy to become lost in the Internet, especially when you're tired and not really concentrating.
* I will be able to access emails on my phone, but I will set it up so that it only checks manually.
* I will not install games on my phone. It's not there as "something to keep me entertained when I'm bored" but as "something to be useful when needed", and in my experience, games seem to intrude.

If I break any of these, you're allowed to get annoyed with me. (The converse is false in general.)

[1]: https://en.wikipedia.org/wiki/Nexus_5 "Nexus 5 Wikipedia page"
[2]: http://www.reddit.com/ "reddit"
[3]: https://news.ycombinator.com "Hacker News"

33
hugo/content/posts/2014-01-02-the-creation.md
Normal file
@@ -0,0 +1,33 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
comments: true
date: "2014-01-02T00:00:00Z"
aliases:
- /creative/the-creation/
- /the-creation/
title: The Creation
---
Once upon a time, before this bountiful age of Matter and Light, there was only the Fell. A single being, surrounded by Chaos, content to remain alone forever (for it did not know what a "friend" was). It had not the power to shape the Chaos; neither had it the inclination, for it needed nothing and had no desires. For seething unchanging aeons, it persisted.

Then Chaos bore new fruit. A single electron, a point source of *charge*. The *electric field* thereby induced resonated throughout all of Chaos, propagating yet further, every second by the same amount; and so the Fell recognised *distance*. The Fell experienced *curiosity* then: for an electromagnetic field was entirely a novel sensation to it. The place it inhabited was changed, from isotropic to merely *spherically symmetric*: now the Fell identified *direction*. It began to *move towards* the point charge, first *slowly*, and then *faster*, until its *velocity* approached that of the electric field itself. All this was for to discover the nature of the descendant of Chaos.

As the Fell approached the electron, its existence became *threatened*: as a simple pattern in Chaos, it could exist indefinitely, but approaching a source of electric charge was a new disturbance, one which the pattern had not been purposelessly selected to overcome. And it recoiled from the intrusion with great force, the influence of the electron growing with the inverse square of the Fell's distance from it, much faster than was comfortable.

But the pattern that was the Fell was changed by the charge, and the charge was changed by the pattern. The same perturbations that had caused the first electron were still latent in the Chaos, and the Fell's scramble to escape the charge was enough to revive them. A second electron emerged, accompanied by a single *photon*.

Now there was unbound *energy* in Chaos. Before the Fell could even begin to *react*, Chaos began to resonate, shuffling, its patterns collapsing into such regularity that a great explosion of *matter* emerged. At the speed of light, *things* emerged, a great array of *muons*, *quarks* and their ilk. The Fell could but race away from the catastrophe; most of it was shorn away in that first burst of creation, before it could flee. And so it continued to exist.

*Gradually*, the flurry of *order* was calmed. Chaos is infinite, unquenchable, and the energy which the Fell unwittingly brought into existence was but finite. At the boundary of the sphere of roiling matter did the Fell rest, recovering itself, painstakingly forging its old patterns anew from the Chaos. It felt the unconstrained resonance of the matter, and so could it know what was *happening* in this new world.

And indeed it came to pass that the *Universe* settled down, protected from Chaos by its sheer radius. *Gravity*, not present in the isotropic Chaos, was very much a factor in the Universe, and things came together to form new patterns. With nothing better to *do*, the Fell learnt to peer into the Universe, *polling* it with the gentlest bursts of electromagnetism to discern what new *wonders* occurred. (The Fell grew larger and larger, forcing its pattern onto Chaos, to keep and examine this new information.) It learnt to send information into the Universe by gently affecting the boundary, and eventually it occurred to the Fell to *create* something. It planned and tweaked, and when it was satisfied, it chose a star and a newly-made planet, and altered it subtly.

It came to pass that self-replicating structures emerged on that planet. With startling speed, they became better-adapted to their environment. The Fell's usual languid pace of existence was not enough to keep up with the rapidity of the changes, so it began to poll for information much more frequently. It felt *tenderness* for what it had wrought, and it tried to keep that planet from harm.

And the changes accelerated, faster and faster: an *exponential* with no apparent end. The Fell struggled to keep up, polling yet faster; its error rate was low, but with so many polls occurring, every so often it misjudged and sent a beam of energy so powerful that it affected the planet's star itself, causing plasma to gout out of it.

Reptiles had emerged before the Fell realised how quickly the changes were now happening. It stretched itself to its limit, polling more and more frequently until it could go no faster, desperate to document everything. It had no energy spare to protect the planet, and it came to pass that a very large chunk of rock hit the planet, causing the destruction of the incumbent life; and so mammals emerged, followed in short order by primates and then humans.

Therefore, send not to know for whom the Fell polls - it polls for thee.
@@ -0,0 +1,53 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-01-12T00:00:00Z"
aliases:
- /uncategorized/denouement-of-myst-iii-exile/
- /denouement-of-myst-iii-exile/
title: 'Denouement of Myst III: Exile'
---
A long time ago, in a galaxy far far away, I completed [Myst III: Exile][1]. It's a stupendously good puzzle game. For some reason, it popped into my mind again a couple of days ago. This post contains very hefty spoilers for that game (it will completely ruin the ending - I will be discussing information-exchange protocols which are key to completing it), so if you're ever going to play it, don't read this post yet. It's a brilliant game - I highly recommend it.

The spoilers start here. Weak spoilers are first, so that you have time to stop reading if your eyes are accidentally moving downwards. After those weak spoilers will come a discussion of the final puzzle of the game. If you're familiar with the general Myst universe, skip the next two paragraphs; start at the sentence "Proper spoilers start here".

The series of Myst games revolves around the concept of a *Linking Book*, a means of moving from world to world (these worlds are called "Ages") by touching the front page of certain books. Each book is a link to its ultimate *Descriptive Book*, which was at some point written by a Writer, and which describes an Age in such detail that the Age actually comes to exist. The act of touching the front page of a Linking Book or the Descriptive Book causes you to be transported into the Age described by the corresponding Descriptive Book. (By the way, you can't bring the Linking Book with you into the Age it links to.) The destruction of the Descriptive Book of an Age causes the Age to be destroyed.

Atrus is a master Writer. After the destruction of his civilisation (the D'ni), he writes a new Age, called Releeshan, in which the remnants of the D'ni can start afresh. Myst III: Exile starts with you ("the Stranger", since you are never named or depicted in any way, in any of the Myst series) being invited to explore Releeshan with Atrus for the first time; he shows you the Descriptive Book, which you will use to enter Releeshan. However, just as you're about to enter, a person, Saveedro, appears, starts a fire, and grabs the Descriptive Book, before linking back out. The book he used falls to the floor, and you rashly follow him. So the events of the game begin. (It turns out that the fire burns the Linking Book, so you're conveniently on your own.)

Proper spoilers start here; the next couple of paragraphs describe the set-up of the final puzzle. At the end of the game, you've tracked down Saveedro. It turns out that for plotty reasons, he hates Atrus's sons with a fiery passion (this is elaborated on in games 1 and 4), and this was his way of getting back at Atrus. (He intended Atrus to follow him, not you, the Stranger.) He also wanted to show Atrus the consequences of his sons' actions. To that end, he has caused you to end up in the Age of Narayan, Saveedro's home Age, which used to be vibrantly natural but was ruined by Atrus's sons. Saveedro's home, which is where you have ended up, is shielded off from the rest of Narayan, and Saveedro desperately wants to get out into Narayan proper.

Saveedro's home is divided into two chunks, which we will consider to be concentric circles, with an impenetrable shield between them, and an impenetrable shield surrounding the whole set-up. (That's why Saveedro can't escape: he is stuck behind two shields, unable to get through even one.) Linking to Narayan takes you to the inner circle. From there, you can turn on the power to a device which has enough power to inhibit one of the shields, but not both. Being a friend of Atrus, you (naturally) have access to his journal, which gives you the key insights necessary to activate this device. The device can switch between inhibiting either of the two shields, but never both at once, and the mechanism controlling which shield is inhibited sits in the inner circle. (That is, while the inner shield is raised, you can't access it from the outer circle.)

One can leave the house only if one is in the outer circle and the outer shield is inhibited. One person alone can't arrange this: the inhibitor's controls are in the inner circle, so with the inner shield inhibited one can walk into the outer circle, but the outer shield remains up; and the moment one switches the device to inhibit the outer shield instead, the inner shield rises and traps one in the inner circle. There is a single small passage between the inner and outer circles, but it's not big enough for you to get through.

Saveedro still holds Releeshan, which it is your objective to recover. In his home, you have found a Linking Book that will return you to Atrus's home, Tomahna. You want to obtain Releeshan and return it to Atrus. Saveedro wants to escape his house. You start this scenario in the inner circle, next to the inhibitor's controls, with Saveedro standing next to the raised outer shield; the inner shield is inhibited.

The official solutions are as follows (in order from least optimal to optimal, so you can think about the puzzle if you want to):

1. Leave immediately, using the Tomahna Linking Book. (Then Saveedro follows you through the book you leave behind, and kills you.)
2. Release Saveedro immediately. (Then he destroys Releeshan, because he is still angry with Atrus. Many years of Atrus's life's work are now gone.)
3. Cut the power to the inhibitor, thereby raising both shields. Then Saveedro collapses, having had freedom waved in front of him and snatched away (this time, he is trapped away from all his belongings, which are in the inner circle). He hands you Releeshan through the small passage, and pleads with you to let him out. You have several choices: you can link back to Atrus (you end the game remorseful that Saveedro is alone); you can turn power back onto the inhibitor (then Saveedro runs back and kills you); or you can switch the inhibitor's controls and then power it back up (then Saveedro gets out of his house, and salutes you as he leaves towards the houses in the distance), before linking back to Atrus. This last ending is optimal.

# The actual point of this post

I was wondering whether there is a better solution, in the sense that "we don't have to cause Saveedro unnecessary anguish, and/or we can complete the scenario without requiring anyone to trust anyone else". (Recall that the solutions officially require Saveedro to be provoked into trusting you.) What we need is some sort of mechanism or box to hold Releeshan, which is open if and only if the outer shield is inhibited. Then Saveedro can go into the inner circle with you (inner shield inhibited), switch the inhibitor (opening the box), and place the book inside it (after verifying that it does indeed open under and only under those conditions). Then he switches the inhibitor (closing the box, and allowing him through to the outer circle), goes through to the outer circle; you switch the inhibitor (opening the box and allowing him to escape), take Releeshan, and link away. He cannot return to kill you (because the inner shield is up).

Possible failure modes: you could shut off power to the inhibitor, thereby causing Saveedro to be trapped and you to have the unopenable box. This is obviously non-optimal, but it is precluded in-game by the fact that the controller of power to the inhibitor is located quite far from the inhibitor itself, so Saveedro has ample time to get back into the inner circle and kill you. You could link straight out, which leads to the exact scenario portrayed in-game. Saveedro could simply destroy Releeshan before putting it in the box (but then you will never release him).

I can see parallels between this kind of scenario and the creation of currencies like Bitcoin. There are some pretty impressive protocols to allow parties to spend money without being able to spend the same money twice, and so forth. What I really want is an information-based solution to this Myst problem.

I have heard of information-swapping protocols, to ensure that two parties swapping information will not do so asymmetrically, even if one party is evil with respect to the other. That sounds perfect for this.

New plan: create a box with a long combination lock, and modify the inhibitor so that it requires the same long combination to operate. Place Releeshan in the box. We each pick digits for the combination in turn, without the other knowing our digits. Lock the box using that combination, and lock the inhibitor in the "inhibiting the inside" position. Then Saveedro goes into the outer circle, taking the box; you go with him and ensure that it is anchored firmly in place, before going back. Saveedro reads out his first digit. You punch it into the inhibitor, and he punches it into the box. You read out your first digit; then repeat. If at any point you stop entering digits into the inhibitor, or enter them incorrectly, Saveedro simply stops reading out his digits, and you can't get to Releeshan. (This is why Saveedro needs to contribute some digits at all.) Similarly, Saveedro can't stop punching the digits into the box, or else you will not release him.

Now, as you get nearer to the end, you might be tempted to stop entering the digits and just brute-force the box open; but then Saveedro can come and kill you. Alternatively, Saveedro can stop reading out his digits, but that serves him no purpose: you know the code up to the point where he stopped, and he's back where he started. (We will assume that there is a way to ensure that the same combination has been set on the box and the inhibitor.) Suppose, then, that Saveedro has read all his digits, and there are four left for you to enter. You don't read them out, but punch them into the inhibitor and release Saveedro. Then you re-set the inhibitor and collect Releeshan, since you have the complete code to open the box.
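To convince myself the bookkeeping works out, here is a toy Python model of who knows what by the end (the digit counts and interleaving order are my own inventions for illustration; it models knowledge only, not the physical boxes and shields):

```python
import random

def run_exchange(rounds: int = 8, reserve: int = 4, seed: int = 0):
    """Model the alternating digit release: Saveedro and the Stranger
    alternate reading digits aloud, except that the Stranger's final
    `reserve` digits are never spoken, only entered into the inhibitor.

    Returns (combination, stranger_view, saveedro_view), where None
    marks a digit the party never learns.
    """
    rng = random.Random(seed)
    saveedro = [rng.randrange(10) for _ in range(rounds)]
    stranger = [rng.randrange(10) for _ in range(rounds + reserve)]

    # The shared combination interleaves the spoken digits, with the
    # Stranger's reserved digits at the very end.
    combination = []
    for s_digit, p_digit in zip(saveedro, stranger):
        combination += [s_digit, p_digit]
    combination += stranger[rounds:]

    stranger_view = list(combination)  # the Stranger learns everything
    saveedro_view = combination[:2 * rounds] + [None] * reserve
    return combination, stranger_view, saveedro_view
```

At the end the Stranger knows the whole combination (so can open the box), while Saveedro is still missing the reserved digits; and because the reveals alternate, at no intermediate point is either party more than one digit ahead of the other.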
If you turn off power to the inhibitor, Saveedro will simply never give you the remaining digits of the code protecting Releeshan, so that option is ruled out; the point is that an approach exists which doesn't require either of you to trust the other, but still lets you both get what you want.

Can anyone see any failure modes I've missed, or any simplifications? It probably works fine with just five numbers (one picked by Saveedro, and four by you), but I wanted to include a way to exchange arbitrary information.

[1]: https://en.wikipedia.org/wiki/Myst_III:_Exile "Myst III: Exile Wikipedia page"
@@ -0,0 +1,115 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- programming
comments: true
date: "2014-01-24T00:00:00Z"
math: true
aliases:
- /uncategorized/introduction-to-functional-programming-syntax-of-mathematica/
- /introduction-to-functional-programming-syntax-of-mathematica/
title: Introduction to functional programming syntax of Mathematica
---
Recently, I was browsing the [Wolfram Community][1] forum, and I came across the following question:

> What are the symbols @, #, / in Mathematica?

I remember that grasping the basics of functional programming took me quite a lot of mental effort (well worth it, I think!), so here is my attempt at a guide to the process.

In Mathematica, there are only two things you can work with: the Symbol and the Atom. There is only one way to combine these things: you can provide them as arguments to each other. We denote "\\(x\\) with arguments \\(y\\) and \\(z\\)" by "`x[y,z]`".

What is an Atom? As the name suggests, it is something indivisible, like the number 2 or the string "Hello!". So that the language isn't too complicated to implement, we mean "indivisible without any further work" - the number 15 is "divisible" (in the sense that it's 3x5), but not in our sense: it takes work to find the divisors of a number. Similarly, the string "Hello!" is "divisible" into characters, but that again takes work.

A Symbol is something which we, as programmers, tell Mathematica to give meaning to. We also tell it under what circumstances that Symbol has meaning. For instance, I might say to Mathematica, "In future, when I ask you for the Symbol $MachinePrecision, you will pretend I said instead the Atom 15.9546." Something else I might say to Mathematica is, "In future, when I ask you for the Symbol Plus, combined with the arguments 1 and 2, you will pretend I said instead the Atom 3."

In Mathematica's syntax, we write the above as:

    $MachinePrecision = 15.9546;
    Plus[1, 2] = 3;

(The semicolons prevent Mathematica from printing the value we gave. Without the semicolons, it would print out 15.9546 and 3. In fact, the semicolons are shorthand for the Symbol CompoundExpression, but that's not important.)

Furthermore, we can ask Mathematica, "In future, when I ask you for Plus combined with zero and any other argument x, return that argument x". In Mathematica's syntax, that is:

`Plus[0, Pattern[x, Blank[]] ] := x`

More compactly:

`Plus[0, x_] := x`

Now, we have had to be careful. Mathematica needs a way of distinguishing the Symbol `x` from the "free argument" `x`. We want the "free argument" - that is, we want to be able to supply any argument we like, and just temporarily call it x. We do that using the Pattern symbol, better recognised as `x_`. I won't go into how Pattern works in terms of the Symbol/Atom idea; just recognise that `x_` *matches* things, rather than *being* a thing.

Now, we'll assume that there is already a "plus one" method - that Mathematica already knows how to do `Plus[1, x_]`. Let's also assume that it knows what `Plus[-1, x_]` is (not hard to do, in principle, once we know `Plus[1, x_]`). Then we can define Plus over the positive integers:

`Plus[x_, y_] := Plus[Plus[-1, x], Plus[1, y]]`

And so forth. This is how we build up functions out of Symbols and Atoms.

Now, there is a shorthand for `f[x]`. We can instead write `f@x`. This means exactly the same thing.

A really important Symbol is `List`. `List[x, y, z]` (or, in shorthand, `{x, y, z}`) represents a collection of things. There's nothing "special" about `List` - it's interpreted in exactly the same way as everything else - but it's a convenient, conventional way to lump several things together. (It would all have worked in exactly the same way if the creators of the language had decided that Yerpik would be the symbol that represented a generic collection; even `Plus` could be used this way, if we made sure to tell Mathematica that "Plus" should not be evaluated in the usual way. You could even use the number 2 as the list-indicating symbol, or use it as `Plus` is usually used, leading to expressions like `2[5,6] == 11`.) We can define functions like `Length[list_]`, so `Length[{1, 2, 3}]` is just 3.

Since everything is essentially function application ("apply a symbol to an expression"), we might explore ways to apply several functions at once, or to apply a function to several different parts of an expression. It turns out that a really useful thing to be able to do is to apply a function to all the inside bits of a List. We call this "mapping":

`Map[f, {a, b, c}] == {f[a], f[b], f[c]}`

More generally, `Map[f, s[a1, a2, … ]] == s[f[a1], f[a2], …]`, but we use `List` instead of `s` for convenience. There is a shorthand, reminiscent of the `f@x` notation: we use `f /@ {a, b, c}` to denote mapping.

It's all very well to want to map a function across the arguments to a symbol (let's call the symbol which has those arguments the Head of the expression, so `Head[f[x,y]]` is just `f`), but what if we want to apply the function *to the Head symbol*? Actually, this turns out to be quite rare (the function is `Operate[p, f[x,y]]`, giving `(p[f])[x,y]`), but it's much more common to want to replace the Head completely. For instance, we might want to supply a List as arguments to a function, as follows:

`f[x_, y_] := x + y^2`

How would we get `f` to act on the List `{5, 6}`? We can't just say `f[{5, 6}]`, because f requires two inputs, not the single input that is `List[5, 6]`. Mathematica's syntax is that instead of `f@{5,6}`, we use `f@@{5, 6}`. This is shorthand for `Apply[f, {5,6}]`, and it returns `f[5, 6]`, which is 41.

More generally, `f@@g[x, y] == f[x, y]`. (Note, however, that Mathematica evaluates things as much as possible before doing these transformations, so `f@@Plus[5,6]` doesn't give you `f[5,6]` but `f@@11`, an expression which makes no sense. Mathematica's convention is that Atoms don't really have a Head, so replacing the Head with `f` does nothing; hence `f@@11` will return 11.)

Particularly in conjunction with `Map`, it can be useful to Apply a function not to an expression, but to the arguments of the expression. That is, given a List `{{1, 2}, {3, 4}}`, which is `{List[1, 2], List[3, 4]}`, we might want to output `{f[1, 2], f[3, 4]}`. We do this with the shorthand `f@@@{{1, 2}, {3, 4}}`, which is really `Apply[f, {{1, 2}, {3, 4}}, {1}]`. This situation might arise if we wanted to "transpose" two strings "ab" and "cd" to give "ac" and "bd" (imagine writing the strings out in a table, and reading the answer down the columns instead of across the rows). We could use `StringJoin@@@Transpose@Map[Characters, {"ab", "cd"}]`. Indeed, what does this expression do? The first thing that actually changes when it is evaluated is `Map[Characters, {"ab", "cd"}]`, which returns `{{"a", "b"}, {"c", "d"}}`. Then Transpose sees that new list and flips it round to `{{"a", "c"}, {"b", "d"}}`, which is `{List["a", "c"], List["b", "d"]}`. Then `StringJoin` is asked not to hit the outer `List`, nor the inner Lists, but to *replace* the List head on the inner Lists: the expression becomes `{StringJoin["a", "c"], StringJoin["b", "d"]}`, or `{"ac", "bd"}`.
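The same string transposition is perhaps easier to see outside Mathematica; purely for comparison, a Python sketch:

```python
# Transpose the strings "ab" and "cd": zip pairs up corresponding
# characters, and "".join glues each pair back into a string.
transposed = ["".join(pair) for pair in zip("ab", "cd")]
print(transposed)  # ['ac', 'bd']
```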
Now, it's all very well to have functions that work like this. But what if we wanted to take the second character of a string? There's a function for that - `StringTake` - but it needs arguments. We could define a new function `takeSecondChars[str_] := StringTake[str, {2}]`, but that's unwieldy if we only want this function once - and what if we wanted the third character instead, next time?

There is a really useful way to define functions without names. Unsurprisingly, they look like:

`Function[{x, y, …}, …]`

So in the above example, we'd have `Function[{str}, StringTake[str, {2}]]`. And then mapping it across a list would look like:

`Function[{str}, StringTake[str, {2}]] /@ {"str1", "str2", "str3"}`

We can also apply it to a string: `Function[{str}, StringTake[str, {2}]]["string"]`, or `Function[{str}, StringTake[str, {2}]]@"string"`.

There's a really compact shorthand. Instead of `Function[{args}, body]` we use `(body)&`. We don't even bother naming the arguments; we use the `Slot[i]` function to get the `i`th argument. `Slot[i]` is more neatly written as `#i`, while the bare `#` symbol is interpreted as `#1`.

Hence our function becomes `StringTake[#, {2}]&`, and its mapping looks like:

`StringTake[#, {2}]& /@ {"str1", "str2", "str3"}`

It takes some getting used to, but after a while it becomes extremely natural. In my most recent coursework project, there are almost no programs I wrote which don't use this syntax, even though the coursework is aimed at the language MATLAB, which is almost the antithesis of this idea of "symbols with arguments". Once you become able to see problems in this way - mapping small functions over expressions, and so forth - you start seeing it everywhere. The idea is about sixty years old - it's the principle of Lisp - and it's ridiculously powerful. Since functions are just expressions, you can use them to alter themselves. For instance, memoisation is trivial:

    fibonacci[n_] := (fibonacci[n] = fibonacci[n-1] + fibonacci[n-2])
    fibonacci[1] = fibonacci[2] = 1;

That is, "Whenever I ask you for fibonacci[n], you will set the value of fibonacci[n] to be the sum of the two previous values" (with the first two values fixed at 1 as base cases; without them, the recursion would never bottom out). Note that this is "set the value of fibonacci[n] to be", not "return" - this is a permanent change (well, as permanent as the Mathematica session), and it means that the value of fibonacci[36] is instantly available forever after once you've calculated it once.
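For comparison, the nearest Python equivalent of this memoisation trick is the standard library's `functools.lru_cache`, which remembers each result after its first computation, much as the down-value assignment does:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result for the life of the process
def fibonacci(n: int) -> int:
    if n <= 2:
        return 1  # the two base cases
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(36))  # 14930352, and instantly available thereafter
```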
You can also get some crazy things with Slot notation, because `#0` (which is `Slot[0]`) represents *the function itself*. Off the top of my head, an example is:

`(Boole[# < 10] #0[# + 1] + #) &[1]`

This generates the tenth triangle number. (The function `Boole[arg]` returns 1 if arg is `True`, and 0 otherwise.) This is because the function evaluates to exactly its input unless that input is less than 10; in that case, the function evaluates to (its input, plus "this function evaluated at input+1"). Recursively expanded, it is `f[x_] := If[x < 10, f[x+1]+x, x]`, evaluated at the input 1. It gets quite mind-bending quite quickly, and I don't think I have ever used `#0` in earnest. Another example I came up with quickly was:

`If[Cos[#] == #, #, #0[Cos[#]]] &[1.]`

This finds a fixed point of the function Cos, starting at the initial input 1. (It has to be a numerical input, otherwise Mathematica will just keep going forever with better and better symbolic expressions for this fixed point, like `Cos[Cos[Cos[1]]]`. It rightly recognises that, for instance, `Cos[Cos[Cos[1]]]` is not equal to `Cos[Cos[Cos[Cos[1]]]]`, so it never stops.)
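The same computation written out longhand makes the termination condition explicit; a small Python sketch (the tolerance and iteration cap are my own choices):

```python
import math

def fixed_point(f, x, tol=1e-12, max_iter=10_000):
    """Iterate f from x until successive values agree to within tol."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) <= tol:
            return fx
        x = fx
    raise RuntimeError("no fixed point found within max_iter iterations")

print(fixed_point(math.cos, 1.0))  # about 0.739085, the fixed point of cosine
```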
The last really useful piece of shorthand I can think of at the moment is `//`, which is yet another way to apply functions. Instead of `f@x`, we can use `x//f`. This has the benefit of making it a bit clearer what is actually contentful and what is just afterthought, because the functions which are evaluated last actually appear at the end:

`CharacterRange["a","z"] // StringJoin`

Of course, the usual function notation can be used:

`1 // (Boole[# < 10] #0[#+1] + # &)`

Phew, that was a whistlestop tour in rather more words than I had hoped - it turns out there are far more Mathematica concepts that I've internalised than I had thought, all of which are really quite fundamental and indispensable. I understand much better why people say Mathematica has a steep learning curve, and why it is derided as a "write-only language" - that final example is ridiculous!

[1]: http://community.wolfram.com
60
hugo/content/posts/2014-01-28-writing-essays.md
Normal file
@@ -0,0 +1,60 @@
|
||||
---
|
||||
lastmod: "2021-09-12T22:47:44.0000000+01:00"
|
||||
author: patrick
|
||||
categories:
|
||||
- uncategorized
|
||||
comments: true
|
||||
date: "2014-01-28T00:00:00Z"
|
||||
aliases:
|
||||
- /uncategorized/writing-essays/
|
||||
- /writing-essays/
|
||||
title: Writing essays
|
||||
---
|
||||
The aim of this post is twofold: to find out whether a certain mental habit of mine is common, and to draw parallels between that habit and the writing of essays.

I don't know whether this is common or not, but when I'm feeling particularly not-alert (for instance, when I'm nearly asleep, or while I'm doing routine tasks like cooking), I sometimes accidentally latch onto a topic and mentally explain it to myself, as if I were teaching it to the Ancient Greeks (who, naturally, speak English). As an example, last night's topic of discourse was "the composition of soil", in which I "talked" about soil, in a manner roughly according to the following diagram. It is laid out so as to display roughly what occurred to me, and the order in which it occurred to me to "say" it.

The contents of soil

* Soil contains fungi

* * what is a fungus?

* Soil contains fungi, lots and lots, which contributes to

* we eat fungi

* * we don't just eat mushrooms, we also are starting to eat <a title="Quorn Wikipedia page" href="https://en.wikipedia.org/wiki/Quorn">Quorn</a> etc

* we eat fungi - more specifically, the reproductive organs of the mycelium

* * what is a mycelium? it's a web that can span large areas

* * * "fairy circles" - mycelium is why mushrooms often appear in arcs, because the mushrooms - the reproductive organs - appear at the periphery of the web

* Soil contains fungi, lots and lots, which contributes to its taste

* * I once accidentally ate some of a mouldy slice of bread, and it tasted just like soil

* * * the mould looks the same as the mould which you get in damp areas of a house

* * you can actually see something which is closely related to a mycelium on mouldy bread - webs of fungus

* we eat fungi - Quorn, for instance, something similar to which was eaten during the first world war in Germany because of famine

* * Quorn can be made in huge vats, 250 kg can be made using the same resources as would make a kilogram of chicken

* * * chicken is the most-eaten meat in the world, and our treatment of them can be horrible

* * * there are environmental problems associated with using the resources that could produce a quarter of a ton of food to instead produce a kilogram of chicken

You get the idea. I'm essentially doing a depth-first search of my internal knowledge-base, starting from a particular place. When I feel that a topic is getting too big to include (for instance, I stop after "environmental problems" because that leads to a very large nexus of topics in my knowledge-base), I stop the search and backtrack. When I feel that a fact is particularly interesting but doesn't have too much relevant content after it, I stop (for instance, the "fairy circles" fact, which could lead to a digression on myths and legends, but I deem that too big a logical leap).
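
That traversal is mechanical enough to sketch in code. Here is a toy depth-limited depth-first search over a made-up knowledge base (the graph and the "too big" cutoff rule are both invented for illustration, not a claim about how memory works):

```python
# A toy knowledge base: each topic points to related topics.
# The graph below is invented purely for illustration.
KNOWLEDGE = {
    "soil": ["fungi", "taste of soil"],
    "fungi": ["mycelium", "eating fungi"],
    "mycelium": ["fairy circles"],
    "eating fungi": ["Quorn"],
    "Quorn": ["environmental problems"],
    "environmental problems": ["climate", "land use", "water use"],
    "taste of soil": ["mouldy bread"],
}

def explore(topic, too_big=lambda t: len(KNOWLEDGE.get(t, [])) > 2,
            depth=0, visited=None):
    """Depth-first search that backtracks when a topic looks 'too big'."""
    if visited is None:
        visited = set()
    if topic in visited or too_big(topic):
        return []  # stop the search here and backtrack
    visited.add(topic)
    lines = ["  " * depth + topic]
    for neighbour in KNOWLEDGE.get(topic, []):
        lines += explore(neighbour, too_big, depth + 1, visited)
    return lines

print("\n".join(explore("soil")))
```

On this toy graph the search prunes exactly at "environmental problems" (three outgoing edges, over the cutoff), mirroring the backtrack described above.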

This rings a bell with [an essay Paul Graham wrote][1] about essays, and more strongly with an anecdote which (infuriatingly) I can't find or recall the name of, by a teacher of English. As an exercise, he (I think it was "he") told a student to "write an essay about your home town". She looked at him blankly, and so he refined it to "write an essay about the High Street of your home town". The process continued until it got to "write 500 words about the top-left brick in the front of the bank on the High Street of your home town". The student left, almost in tears. The next day, she returned with five thousand words of essay, and said that "once I got started, I just didn't stop".

What I am doing is very close to this view of writing an essay. If I kept going long enough (in an awake state), I would presumably hit areas I don't know much about (for instance, how is it that there is a kind of mushroom that can punch through tarmac? Hydraulics, I know, but that's a [stop sign][2].) That's where the research would start, and where I would start discovering new things - and that's where Paul Graham's view of writing an essay would happen.

This very post was written somewhat in this manner, but to save space and time, I made the knowledge-tree much smaller. (Alternative ending to that sentence: "I destroyed my time machine and burned all my papers".) Naturally, when actually formulating an essay from such a tree, it is important only to keep that which is interesting and/or useful, and it is necessary to restrict the output to a reasonable length. A blog format, in particular, prefers shorter pieces, so maybe I should**—**

[1]: http://www.paulgraham.com/essay.html "Paul Graham on essays"
[2]: http://lesswrong.com/lw/it/semantic_stopsigns/ "Semantic Stop-signs LessWrong page"
@@ -0,0 +1,38 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
comments: true
date: "2014-02-16T00:00:00Z"
aliases:
- /creative/rage-rage-against-the-poets-hardest-sell/
- /rage-rage-against-the-poets-hardest-sell/
title: Rage, rage against the poet’s hardest sell
---
I feel that I can write a sonnet well.
While sonnets are an easy thing to spout,
It’s really hard to write a villanelle.

By rhyming, any story I can tell:
in couplets, rhyme and rhythm evens out.
I feel that I can write a sonnet well.

But alternately-structured verse is hell.
The poet struggles, juggles words about:
It’s really hard to write a villanelle.

Enthusiasm’s difficult to quell.
An acolyte of Shakespeare, I’m devout:
I feel that I can write a sonnet well.

But triplets are a task on which I dwell,
I’m running out of rhymes, without a doubt.
It’s really hard to write a villanelle.

For sonnets, you don’t have to be [Kal-El][1]
to make a super stanza just work out.
I feel that I can write a sonnet well;
It’s really hard to write a villanelle.

[1]: https://en.wikipedia.org/wiki/Superman "Superman Wikipedia page"
@@ -0,0 +1,62 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-03-20T00:00:00Z"
aliases:
- /uncategorized/a-roundup-of-some-board-games/
- /a-roundup-of-some-board-games/
title: A roundup of some board games
---
It has been commented to me that it's quite hard to find out (on the Internet) what different games involve. For instance, [Agricola][1] is a game about farming (and that's easy to find out), but what you actually do while playing it is not easy to discover. Here, then, is a brief overview of some games.

# Agricola

[Agricola][2] is a game in which you control a farm, and are aiming to make your farm thrive. It is a multiplayer game (for two to five) divided into turns. During each turn, you can make several actions (the number of actions you can make is determined by the number of people you have on your farm; you start out with two, and some actions increase the number of people you have). The actions are shared between all players - that is, if I make an action, you may not make that same action this turn. There is no other inter-player interaction - no attacking or anything, and you all have your own farm to manage. Your aim is to use actions to gather resources, build and extend your house, and plough fields; at the end of the game (after fourteen rounds, which is about forty minutes) everyone scores their own farm according to a set checklist, and the winner is the one who has the most prosperous farm.

# Settlers of Catan

[Catan][3] is a game in which you are trying to build up your civilisation essentially from scratch. It is multiplayer (two to four), and is divided into turns. The game is played on a common board, which you gradually populate with your own settlements, cities and roads, while attempting to make sure that other people can't foil your plans with their own building. (Once something is built, it can't be un-built, so the game is competitive only in a strategic sense, not a combat sense.) You aim to gather resources (which you can trade freely with opponents) so as to build more such trappings (your settlements and cities gain resources according to dice rolls), and the winner is the first to reach a certain size of civilisation. Games last about 45 minutes.

# Diplomacy

[Diplomacy][4] is a game almost entirely down to how well you can connive with and against opponents. It takes place in turns, but actions happen essentially simultaneously in a turn; the real action happens in between turns, when you go and plot with other people. Games take many hours, and are very multiplayer (eight or so, I think, is normal). Your aim is to take over the world, which you can only feasibly do by persuading people both to assist you and to foil your opponents' attempts.

# Dominion

[Dominion][5] is a two-to-four player deck-building game. You aim to have acquired the most Victory cards by the end of the game. Turns involve playing cards you have already acquired, and acquiring more cards; the cards you acquire become part of an "economy" that is almost never subtracted from, but you may only use a small subset of your cards during any one turn. It is somewhat like Magic: the Gathering (below), but restricted so that cards only modify the structure of your turn and allow you to draw more cards. (There are some "attack" cards, but I find them not to be conducive to fun play.)

# Magic: the Gathering

[Magic][6] is a rather different game to those listed above. It is a collector's game: you acquire cards over your lifetime, although some formats involve getting a random selection of cards and doing the best you can with those. It is multiplayer (two players is common, but it goes arbitrarily high). The format is turn-based - each turn is subdivided - but the key point is that cards can do pretty much anything to the game. Win conditions can be altered, turns can be prevented, cards can be renamed, all as the result of card effects. Your aim is to win the game, which is usually done by taking the opponent's life total down to 0 or by forcing them to draw a card when they have no cards left to draw (that is, after they have already drawn all of their cards). Many other win conditions exist - [one card][7] causes you to win if you have a certain ridiculous number of cards; [one card][8]'s active effect is that a target opponent loses the game; [one card][9] causes you to win if nothing happens for a while; and so forth. A complete list can be seen on the [Gatherer card search facility][10] (you might want to search for "wins the game" as well as the default "win the game").

That description of win conditions was intended to convey how complicated the game can become. It is not for the faint-hearted - it takes a while to get to grips with the myriad mechanics.

# Mafia/Werewolf/Avalon/The Resistance

These games ([Mafia][11] and Werewolf are isomorphic, as are Avalon and [The Resistance][12]) are highly-multiplayer games (up to ten people) in which there are two teams: a team of innocents and a team of hidden spies (I'll refer to those as the Mafia). The job of the former team is to unmask the spies; the latter team usually wins by remaining undetected.

In Mafia, the game revolves around group voting to "kill" people (and the first person killed will have a much less interesting game!). The innocents naturally want to kill the Mafia; the Mafia want to kill all the innocents. The Mafia get an extra turn in between every group vote, in which they can elect among themselves to kill someone. (That is, every normal turn, two people die.) The innocents win if all the Mafia are dead and an innocent is still alive; the Mafia win if they kill all the innocents. There are some extra roles to complicate things, but those are the basics.

Avalon is slightly different - nobody dies at any point. In a round, someone chooses a team of people to go Questing, and the group votes on whether to allow that team to Quest. If the vote comes out negative, the next person chooses a team, and so on. This repeats until a team is approved; then if the team contains a Mafia member, the mission can be failed by that member (secretly: no-one finds out who the Mafia was). Otherwise, it succeeds. The aim is to accumulate failed or succeeded missions (depending on whether you are Mafia or innocent).
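
The quest mechanic is simple enough to state in code. A minimal sketch of one round, using only the rules as described above (real Avalon adds roles, rejected-vote limits and per-quest team sizes, all of which I'm ignoring here; the "spies always sabotage" rule is my simplifying assumption):

```python
def run_quest(team, spies, group_votes):
    """Resolve one Avalon-style quest round.

    team: players proposed for the quest
    spies: the hidden Mafia members
    group_votes: True/False approval votes from the whole group
    Returns "rejected", "failed" or "succeeded".
    """
    # The team only goes questing if a strict majority approves it.
    if sum(group_votes) * 2 <= len(group_votes):
        return "rejected"
    # Any spy on the team may secretly fail the mission;
    # here we assume a spy always chooses to sabotage.
    if any(player in spies for player in team):
        return "failed"
    return "succeeded"
```

For example, `run_quest(["Alice", "Bob"], {"Bob"}, [True, True, False])` is approved two votes to one, and then fails because Bob is a spy.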

# Dixit

[Dixit][13] is a [Keynesian beauty contest][14] style game. Everyone gets cards with pictures on them, and each round, a different Storyteller describes one of their cards. Then everyone puts one of their cards into a pile, and the cards are all placed in the centre of the table. Everyone then guesses which card was the Storyteller's card, based on their description. Points are allocated to any player whose card was guessed (that is, a player who put in a card which matches the description enough for someone to mistake it for the Storyteller's card). The Storyteller gets points *unless* everyone or no-one guessed correctly. (That is, the description must not be so obscure that no-one gets the answer right, and it must not be so obvious that everyone does.)
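
That scoring rule is essentially a small function. A sketch (the point values 3/2/1 are illustrative, from memory - check the rulebook for the real numbers):

```python
def score_round(storyteller, guesses, card_owner):
    """Score one Dixit round.

    storyteller: this round's storyteller
    guesses: {player: card they voted for} (the storyteller doesn't vote)
    card_owner: {card: player who played it}
    Point values are illustrative, not necessarily the official ones.
    """
    scores = {player: 0 for player in list(guesses) + [storyteller]}
    correct = [p for p, card in guesses.items()
               if card_owner[card] == storyteller]
    if 0 < len(correct) < len(guesses):
        # Some but not all guessed right: storyteller and correct guessers score.
        scores[storyteller] += 3
        for p in correct:
            scores[p] += 3
    else:
        # Everyone or no-one guessed right: storyteller gets nothing, others score.
        for p in guesses:
            scores[p] += 2
    # Anyone whose decoy card drew a vote picks up a bonus point per vote.
    for p, card in guesses.items():
        if card_owner[card] != storyteller:
            scores[card_owner[card]] += 1
    return scores
```

So a decoy card good enough to attract votes scores even when you guess wrongly, which is what makes the card you throw into the pile matter.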

[1]: https://en.wikipedia.org/wiki/Agricola_(board_game) "Agricola Wikipedia page"
[2]: https://en.wikipedia.org/wiki/Agricola_(board_game) "Agricola Wikipedia page"
[3]: https://en.wikipedia.org/wiki/Catan "Settlers of Catan Wikipedia page"
[4]: https://en.wikipedia.org/wiki/Diplomacy_(board_game) "Diplomacy Wikipedia page"
[5]: https://en.wikipedia.org/wiki/Dominion_(game) "Dominion Wikipedia page"
[6]: https://en.wikipedia.org/wiki/Magic:_The_Gathering "Magic: the Gathering Wikipedia page"
[7]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=288878 "Battle of Wits Magic card"
[8]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=288992 "Door to Nothingness Magic card"
[9]: https://gatherer.wizards.com/Pages/Card/Details.aspx?multiverseid=265418 "Azor's Elocutors Magic card"
[10]: https://gatherer.wizards.com/Pages/Search/Default.aspx?text=+[win]+[the]+[game] "Gatherer"
[11]: https://en.wikipedia.org/wiki/Mafia_(party_game) "Mafia Wikipedia page"
[12]: https://en.wikipedia.org/wiki/The_Resistance_(party_game) "The Resistance Wikipedia page"
[13]: https://en.wikipedia.org/wiki/Dixit_(card_game) "Dixit Wikipedia page"
[14]: https://en.wikipedia.org/wiki/Keynesian_beauty_contest "Keynesian beauty contest"
@@ -0,0 +1,72 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2014-03-30T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof_discovery/how-to-discover-the-contraction-mapping-theorem/
- /how-to-discover-the-contraction-mapping-theorem/
title: How to discover the Contraction Mapping Theorem
---
A little while ago I set myself the exercise of stating and proving the [Contraction Mapping Theorem][1]. It turned out that I mis-stated it in three different aspects ("contraction", "non-empty" and "complete"), but I was able to correct the statement because there were several points in the proof where it was very natural to do a certain thing (and where that thing turned out to rely on a correct statement of the theorem).

Here, then, is how you might go about discovering it from the point of having a definition of a [Lipschitz function][2] on a metric space \\((X, d)\\) (that is, a function \\(f\\) for which there exists \\(\lambda \in \mathbb{R}^{>0}\\) such that for all \\(x, y \in X\\), \\(d(f(x),f(y)) \leq \lambda d(x,y)\\)). We'll aim for a statement describing the fixed points of such a function.

## Define the terms

What is a "fixed point"? There's nowhere obvious to start other than working out what we mean by one of these. Well, what we mean is "a point \\(x \in X\\) such that \\(f(x) = x\\)". We'll also define \\((X, d)\\) to be an arbitrary metric space, and \\(f\\) an arbitrary Lipschitz function on that space with Lipschitz constant \\(\lambda\\).

## How might we proceed?

We're looking for a fixed point. We have a Lipschitz function (that is, one which "draws points together", in the sense that two points which are originally \\(\delta\\) apart end up \\(\lambda \delta\\) apart, or closer, after \\(f\\) is applied to them). That suggests the idea of starting out with two arbitrary points, repeatedly pulling them closer together with \\(f\\), and seeing where we end up. Actually, on second thoughts, we can dispense with one of the arbitrary points, because we can make another point given our arbitrary \\(x\\) - namely \\(f(x)\\).

## What did we assume?

So far, we've made a (silly) assumption: that the space \\(X\\) is not empty, because we've just picked a point in it. In order to use this "\\(f\\) draws points together", we're going to want \\(\lambda < 1\\), otherwise it's actually blowing them outwards.

## How might we proceed?

We have two points, \\(x\\) and \\(f(x)\\). We want to pull them together using \\(f\\), so it's natural to keep applying \\(f\\) to them. So that we can have access to all these values, we'll define a sequence \\(z_i = f(z_{i-1})\\) and \\(z_0 = x\\). What we really want is for this sequence to converge to the fixed point (after all, if we're drawing the points together to some limit, we'd imagine that the limit of the sequence is a local accumulator in some sense).

Now, we know nothing about this metric space, and we know nothing about the limit of the sequence. There's a key thing we do in analysis if we want a limit of a sequence but know nothing about it: we show that it is [Cauchy][3]. In order to use this, though, we'll need to suppose that the metric space is complete (so that Cauchy sequences converge).

Then we want to show that this sequence \\(z_i\\) is Cauchy. That is, we want \\(d(z_i,z_j) \to 0\\) as \\(i,j \to \infty\\) independently of each other, which means that for all \\(\epsilon > 0\\) there exists \\(N \in \mathbb{N}\\) such that for all \\(i, j > N\\), \\(d(f^i(x), f^j(x)) < \epsilon\\).

Aha - we have \\(d(f^i(x), f^j(x))\\). We know \\(f\\) is Lipschitz, so ([wlog][4] \\(i \leq j\\)) this is \\(d(f^i(x), f^j(x)) \leq \lambda^i d(x, f^{j-i}(x))\\). It would be very convenient if the \\(d\\) expression were bounded, because then as \\(i \to \infty\\), the \\(\lambda^i\\) will take care of the rest (since \\(\lambda < 1\\)).

But what else do we know about \\(d\\)? We're going to need something to bound \\(d(x, f^{j-i}(x))\\), but we don't know anything about this expression - we only know about \\(d(z_i, f(z_i)) \leq \lambda d(z_{i-1}, z_i)\\), by the Lipschitzness of \\(f\\). But in fact we can bound \\(d(x, f^{j-i}(x))\\) in terms of those: \\(d\\) is a metric, which means that it obeys the triangle inequality.

Hence \\(\displaystyle d(x, f^{j-i}(x)) \leq d(x, f(x)) + d(f(x), f^{j-i}(x)) \leq \dots \leq \sum_{k=1}^{j-i} d(z_{k-1}, z_k)\\). This we can bound: it's \\(\displaystyle \leq \sum_{k=1}^{j-i} \lambda^{k-1} d(z_0, z_1) = d(z_0, z_1) \sum_{k=1}^{j-i} \lambda^{k-1}\\). And, joy of joys, this sum is bounded, because the infinite sum \\(\displaystyle \sum_{k=1}^{\infty} \lambda^{k-1} = \dfrac{1}{1-\lambda}\\).

Hence \\(d(z_i, z_j) < \lambda^i d(z_0, z_1) \dfrac{1}{1-\lambda}\\). This goes to \\(0\\) as \\(i \to \infty\\), so the sequence \\(z_i\\) is Cauchy.

## What did we assume?

In this section, we assumed that the space was complete.

## Summary

So far, we have shown that the sequence \\(f^i(x)\\) is Cauchy, so it converges to a limit. We'll call the limit \\(L\\): so we have \\(f^i(x) \to L\\) as \\(i \to \infty\\).

## What next?

It feels like we're very close to a result now. What we really want is for \\(L\\) to be a fixed point: we need \\(f(L) = L\\). Equivalently, we need \\(f(\lim z_i) = \lim z_i\\); but \\(z_i = f(z_{i-1})\\), so this is the same as \\(f(\lim z_i) = \lim f(z_i)\\). This will be trivial if \\(f\\) is continuous. But \\(f\\) is Lipschitz, so it is uniformly continuous and hence continuous (this is a really simple lemma).

That is, \\(L\\) is a fixed point of \\(f\\): we have proved that \\(f\\) has a fixed point.

## Extension

But we don't have to stop there - if we're drawing points together using \\(f\\), and we end up at a fixed point, surely there can't be two fixed points (since if there were, \\(f\\) would draw them together). Let's aim to prove that \\(f\\)'s fixed point is unique, by supposing that \\(L_1, L_2\\) are distinct fixed points. Then \\(d(L_1, L_2) = d(f(L_1), f(L_2))\\), because \\(L_1, L_2\\) are fixed points, and then \\(d(f(L_1), f(L_2)) \leq \lambda d(L_1, L_2) < d(L_1, L_2)\\), contradiction.

## Summary

We have shown that there exists a unique fixed point \\(L\\) of a Lipschitz function \\(f\\) with Lipschitz constant \\(\lambda < 1\\) on a non-empty complete metric space. Moreover, we have shown that \\(f^i(x) \to L\\) for all \\(x\\), because we can perform this same construction of \\(z_i\\) starting from any point \\(x\\). Even more, we have shown that convergence is geometrically fast (by the \\(\lambda^i\\) term). This is a really strong theorem, and all I needed to remember in order to construct it was that Lipschitz functions were important and that we were looking for information about fixed points. (I didn't look up anything during the proof - I checked my statement of it afterwards, and it turned out to be correct. I didn't change anything after I finished it.)
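
The whole construction is short enough to run numerically. As a sketch (the choice of \\(f = \cos\\) on \\([0, 1]\\) and the starting point are mine: \\(\cos\\) maps \\([0,1]\\) into itself and is Lipschitz there with constant \\(\sin 1 < 1\\)), using the a-priori bound \\(d(z_i, L) \leq \lambda^i d(z_0, z_1)/(1-\lambda)\\) to know when to stop:

```python
import math

def fixed_point(f, z0, lam, tol=1e-12):
    """Iterate z -> f(z); the proof's bound
    d(z_i, L) <= lam**i * d(z0, z1) / (1 - lam)
    tells us exactly how many steps guarantee accuracy tol."""
    d0 = abs(f(z0) - z0)
    z = z0
    i = 0
    while lam**i * d0 / (1 - lam) > tol:
        z = f(z)
        i += 1
    return z

# cos is Lipschitz on [0, 1] with constant sin(1) < 1,
# and maps [0, 1] into itself, so the theorem applies.
L = fixed_point(math.cos, 0.5, math.sin(1))
```

The geometric factor \\(\lambda^i\\) means the loop terminates after only a couple of hundred iterations even for a twelve-digit tolerance.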

[1]: https://en.wikipedia.org/wiki/Contraction_mapping_theorem "Contraction Mapping Theorem Wikipedia page"
[2]: https://en.wikipedia.org/wiki/Lipschitz_function "Lipschitz function Wikipedia page"
[3]: https://en.wikipedia.org/wiki/Cauchy_sequence "Cauchy sequence Wikipedia page"
[4]: https://en.wikipedia.org/wiki/Wlog "Wlog Wikipedia page"
@@ -0,0 +1,68 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2014-04-04T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof_discovery/discovering-a-proof-of-heine-borel/
- /discovering-a-proof-of-heine-borel/
title: Discovering a proof of Heine-Borel
---
I'm running through my Analysis proofs, trying to work out which ones are genuinely hard and which follow straightforwardly from my general knowledge base. I don't find the [Heine-Borel Theorem][1] "easy" enough that I can even forget its statement and still prove it (like [I can with the Contraction Mapping Theorem][2]), but it turns out to be easy in the sense that it follows simply from all the theorems I already know. Here, then, is my attempt to discover a proof of the theorem, using as a guide all the results I know but can't necessarily prove without lots of effort.

# Statement of the theorem

The Heine-Borel Theorem states that a subset of \\(\mathbb{R}^n\\) is compact if and only if it is closed and bounded.

# First direction

One direction looks easy - if we assume our set is not closed or not bounded, it should be simple to show that it is not compact, using an argument based on the fact that \\((0,1]\\) is not compact and \\([0, \infty)\\) is not compact. Both of those I know how to prove.

## Assume not closed

If the set \\(S\\) is not closed, the only thing we can do is take a sequence \\((x_n)_{n \geq 1}\\) in \\(S\\) tending to a limit \\(x\\) which is not in \\(S\\). From this, we need to create an open cover of \\(S\\) which has no finite subcover.

In one dimension, this is easy because we can just take a ball around each \\(x_i\\), each ball overlapping by a tiny bit with the next. Since each ball contains only one of the \\(x_i\\), any subcover must contain infinitely many of the balls in order to cover every \\(x_i\\), so no subcover is finite (contradiction). However, in more dimensions this is not so obvious, because we don't have this handy "next ball" concept. What was really key in that 1D example was that the balls around each \\(x_i\\) didn't overlap to the extent that a ball contained more than one \\(x_i\\), and that no ball got near the forbidden limit point. (There was always "room to keep going" - in the \\((0,1]\\) example, taking the sets \\((\dfrac{1}{n+1}, \dfrac{1}{n})\\) and filling in some tiny balls around each \\(\dfrac{1}{n}\\), every set is some finite distance away from \\(0\\).)

In more dimensions, if we create an open cover of \\(S\\) such that no set gets near the limit point \\(x\\) - that is, such that each set in the cover has some neighbourhood of \\(x\\) which it doesn't encroach upon - then any finite cover must also have some neighbourhood of \\(x\\) which it doesn't encroach upon. (A finite collection of things which don't get close to \\(x\\) must also not get close to \\(x\\).) Hence, because we have a sequence tending to \\(x\\) in \\(S\\), which \*does\* get close to \\(x\\), one of the \\(x_i\\) can't be included in our finite cover. That contradicts compactness.

## Assume not bounded

Remember our key example here was \\([0, \infty)\\). Since our set isn't bounded, we can take a sequence in it getting arbitrarily far out from \\(0\\) (that is, for every \\(n\\) there is \\(x_n\\) such that \\(\vert x_n \vert \geq n\\)). But then the easiest cover to use is just the set of balls centred on \\(0\\) with radius \\(n\\); this is an infinite cover, but there is no finite subcover because if we ever stop, there's an \\(x_n\\) we've missed.

# The other direction

Here's the bit that looks harder, because we're taking any closed bounded set and showing a strong property of it. Remember, though, that we have in fact proved this in 1D already: we proved the Bolzano-Weierstrass property of the reals, and it is a fact (although I don't remember how to prove it) that sequential compactness implies compactness. Let's see if we can make that proof work. (The proof I know goes along the lines of "fix an infinite sequence in an interval; keep halving the interval; there's an infinite subsequence in one of the halves; repeat".)

Firstly, we're faced with an arbitrary closed bounded set. With the not-closed or not-bounded sets it was easier - we had somewhere to start from. We're going to need to make the problem simpler, because closed bounded sets can look really quite odd. The simplest possible closed bounded set is the closed ball centred on the origin of radius \\(r\\), but that's not great for halving. What we can halve is a box \\([-r, r]^n\\) - that's the second-simplest possible closed bounded set (arguably the simplest).

Take an open cover of the box, and assume for contradiction that it has no finite subcover. Divide the box up into \\(2^n\\) smaller boxes by cutting halfway along each side. One of these boxes must have no finite subcover of the original cover (otherwise they'd all have finite subcovers, so we could union them all together to get a finite subcover of the big box), so we can repeat on that box. Inductively we obtain a sequence of nested boxes, none of which has a finite subcover in the original cover, and the \\(k\\)th box has side length \\(2r \cdot 2^{-k}\\).
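
The subdivision step is concrete enough to write down directly. A sketch (the list-of-intervals box representation is my own):

```python
from itertools import product

def subdivide(box):
    """Split an n-dimensional box, given as a list of (lo, hi) intervals,
    into the 2^n closed sub-boxes obtained by halving every side."""
    halves = [((lo, (lo + hi) / 2), ((lo + hi) / 2, hi)) for lo, hi in box]
    # One sub-box per choice of "lower half or upper half" in each dimension.
    return [list(sides) for sides in product(*halves)]

# Halving [-1, 1]^2 gives four boxes, each of side length 1.
quarters = subdivide([(-1, 1), (-1, 1)])
```

Repeating the call on any sub-box halves the side length again, which is exactly the shrinking nested sequence the proof needs.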

What do we know about these nested boxes? In 1D, the proof then went "our infinite sequence therefore has a limit": there was a point which lay in every box. We'd love that to be true here: an infinite sequence of closed nested boxes must have non-empty intersection. Fortunately, that's easy to prove: take a sequence \\(z_n\\) such that each \\(z_n\\) lies in the \\(n\\)th box. This sequence tends to a limit, because it's clearly Cauchy (the boxes shrink); we'll show that the limit lies in every box. Indeed, we know that the boxes are closed, so the sequence \\(z_n, z_{n+1}, z_{n+2}, \dots \to z\\) tells us that \\(z\\) lies in box \\(n\\) for every \\(n\\), so there is no \\(n\\) such that \\(z\\) is not in the \\(n\\)th box, and hence \\(z\\) is in every box.

Now, we have our concentric boxes homing in on \\(z\\), and \\(z\\) lies in all of these boxes. Moreover, the boxes get smaller and smaller, quite rapidly, and each of them requires an infinite number of sets from our original cover in order to cover it. But where is \\(z\\)? \\(z\\) lies in some set \\(U\\) in the original cover; \\(U\\) is some finite size, so it must cover one of the boxes completely, because the sizes of the boxes go to zero. Formally, \\(U\\) contains some ball \\(B_z(\epsilon)\\); for all \\(\epsilon\\) there is \\(n\\) such that the \\(n\\)th box lies wholly in \\(B_z(\epsilon)\\); hence \\(U\\) contains the \\(n\\)th box, for some \\(n\\).

This contradicts the fact that the \\(n\\)th box requires an infinite cover of open sets - we've done it in just one!

Hence all boxes are compact.

## Dealing with all possible closed bounded sets

We've dealt with the easiest kind of closed bounded sets. How can we transform any other closed bounded set into one of these? We can't do that necessarily - closed sets aren't necessarily unions of closed boxes - but what we can say is that all closed bounded sets are contained in some closed box. (Indeed, all bounded sets are.) It would be great if a closed subset \\(C\\) of a compact set \\(X\\) were compact.

That's easy, though - if we have an open cover of \\(C\\), we can make an open cover of \\(X\\) by just adding \\(U = \mathbb{R}^n - C\\) to the cover. (That extra set is open, being the complement of a closed set.) Then this has a finite subcover, by compactness of \\(X\\); that subcover may contain \\(U\\), but if it does, just throw it out and we've got a finite subcover of \\(C\\). Hence \\(C\\) is compact.

# Summary

We proved it as follows:

1. Show the easier direction: assume not closed, make a sequence tending to a point not in the set, define an open cover such that no set individually gets close to that point; any finite subcover doesn't get close to that point, so the sequence can't be in the finite subcover. Assume not bounded, then use the nested balls centred on the origin.
2. Do the easiest case of boxes, by taking an open cover with no finite subcover, repeatedly dividing up the box to get a sequence of nested boxes each with no finite subcover; there is a point in every box (by defining a sequence of points, one in each box, which must tend to a limit); that point is in some open set in the original cover, and eventually the boxes get small enough that a box is entirely contained within an open set - contradiction.
3. Do the general closed bounded sets, by showing that a closed subset of a compact set is compact (by taking an open cover, extending it to a cover of the big set, compactly taking a finite subcover, and turning it back into a finite subcover of the small set).

[1]: https://en.wikipedia.org/wiki/Heine-Borel_theorem "Heine-Borel theorem"
[2]: {% post_url 2014-03-30-how-to-discover-the-contraction-mapping-theorem %}
44
hugo/content/posts/2014-04-07-useful-conformal-mappings.md
Normal file
44
hugo/content/posts/2014-04-07-useful-conformal-mappings.md
Normal file
@@ -0,0 +1,44 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-04-07T00:00:00Z"
math: true
aliases:
- /uncategorized/useful-conformal-mappings/
- /useful-conformal-mappings/
title: Useful conformal mappings
---
This post is to be a list of conformal mappings, so that I can get better at answering questions like "Find a conformal mapping from \<this domain\> to \<this domain\>". The following Mathematica code is rough-and-ready, but it is designed to demonstrate where a given region goes under a given transformation.

    (* Plot the image under f of the grid points in the given {x, y}
       ranges which satisfy pred; memoised so repeated calls are cheap. *)
    whereRegionGoes[f_, pred_, xrange_, yrange_] :=
     whereRegionGoes[f, pred, xrange, yrange] =
      With[{xlist = Join[{x}, xrange], ylist = Join[{y}, yrange]},
       ListPlot[
        Transpose@
         Through[{Re, Im}[
           f /@ (#[[1]] + #[[2]] I & /@
              Select[Flatten[Table[{x, y}, xlist, ylist], 1],
               With[{z = #[[1]] + I #[[2]]}, pred[z]] &])]]]]
    (* Example call (illustrative):
       whereRegionGoes[#^2 &, Abs[#] < 1 &, {-1, 1, 0.01}, {-1, 1, 0.01}] *)

* Möbius maps - these are of the form \\(z \mapsto \dfrac{az+b}{cz+d}\\). They keep circles and lines as circles and lines, so they are extremely useful when mapping a disc to a half-plane. A map is defined entirely by how it acts on any three points: there is a unique Möbius map taking any three points to any three points (and hence any circle/line to any circle/line). (Some of the following are Möbius maps.)
* To take the unit disc to the upper half plane, \\(z \mapsto \dfrac{z-i}{iz-1}\\)
* To take the upper half plane to the unit disc, \\(z \mapsto \dfrac{z-i}{z+i}\\) (the [Cayley transform][1])
* To rotate by 90 degrees about the origin, \\(z \mapsto iz\\)
* To translate by \\(a\\), \\(z \mapsto a+z\\)
* To scale by factor \\(a \in \mathbb{R}\\) from the origin, \\(z \mapsto az\\)
* \\(z \mapsto \exp(z)\\) takes a vertical strip to an annulus - but note that it is not bijective, because its domain is simply connected while its range is not.
* \\(z \mapsto \exp(z)\\) takes the horizontal strip of width \\(\pi\\) centred on \\(\mathbb{R}\\) onto the right half-plane.
|
||||
|
||||
## Maps which might not be conformal
|
||||
|
||||
These maps are useful but we can only use them when the domain doesn't include a point where \\(f'(z) = 0\\) (as that would stop the map from being conformal).
|
||||
|
||||
* To "broaden" a wedge symmetric about the real axis pointing rightwards, \\(z \mapsto z^\\)2
|
||||
* To take a half-strip \\(Re(z) > 0, 0 < Im(z) < \dfrac{\pi}{2}\\) to the top-right quadrant: \\(z \mapsto \sinh(z\\))
|
||||
* to take a half-strip \\(Im(z) > 0, -\frac{\pi}{2} < Re(z) < \frac{\pi}{2}\\) to the upper half plane, \\(z \mapsto \sin(z\\))
|
||||
|
||||
[1]: https://en.wikipedia.org/wiki/Cayley_transform#Conformal_map "Cayley transform Wikipedia page"
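The disc/half-plane maps in the list are easy to spot-check numerically. A rough sketch (pure Python, function names my own) samples points and verifies they land where claimed:

```python
def disc_to_uhp(z):
    """z -> (z - i)/(iz - 1): unit disc to upper half plane."""
    return (z - 1j) / (1j * z - 1)

def uhp_to_disc(z):
    """The Cayley transform z -> (z - i)/(z + i): upper half plane to unit disc."""
    return (z - 1j) / (z + 1j)

# sample points strictly inside the unit disc, and strictly inside the upper half plane
disc = [complex(a / 20, b / 20) for a in range(-19, 20) for b in range(-19, 20)
        if a * a + b * b < 400]
uhp = [complex(a / 5, b / 5) for a in range(-10, 11) for b in range(1, 11)]

assert all(disc_to_uhp(z).imag > 0 for z in disc)
assert all(abs(uhp_to_disc(z)) < 1 for z in uhp)
```

Checks like this are a quick way to catch a sign error in a Möbius map before using it in an exam answer.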
53
hugo/content/posts/2014-04-15-sample-topology-question.md
Normal file
@@ -0,0 +1,53 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2014-04-15T00:00:00Z"
math: true
aliases:
- /mathematical_summary/sample-topology-question/
- /sample-topology-question/
title: Sample topology question
---
As part of the recent series on how I approach maths problems, I give another one here (question 14 on the Maths Tripos IB 2007 paper 4). The question is:

> Show that a compact metric space has a countable dense subset.

This is intuitively clear if we go by our favourite examples of metric spaces (namely \\(\mathbb{R}^n\\), the discrete metric and the indiscrete metric). Indeed, in \\(\mathbb{R}^n\\), which isn't even compact, we have the rationals (so the theorem doesn't give a necessary condition, only a sufficient one); in the indiscrete metric, any singleton \\(\{x \}\\) is dense (since the only closed non-empty set is the whole space); in the discrete metric, where every set is open, we can't possibly be compact unless the space is finite, so that's why the theorem doesn't hold for a topology with so many sets.

However, there are some really weird metric spaces out there, and if there's one thing I've learnt about topology it's that intuition-by-examples is an extremely bad way to prove things, although it's often a good way to work out *how* to prove something.

Right. Our metric space could be really odd - it might be massively uncountable or something - so that means we're going to have to build our dense subset anew for each metric space. (It's like trying to find a good diet for your pet - the possible pets are so diverse that one diet won't fit all, so we have to find the right diet for each pet individually.) The "countable" bit can only come in from the rationals or naturals - it can't pop out of the metric space itself, because we have no idea how huge the metric space might be.

That's all I can come up with for meta-reasoning at the moment. Let's find an example to guide intuition. By far the simplest is \\([a,b] \subset \mathbb{R}\\), whose dense subset is \\(\mathbb{Q} \cap [a,b]\\).

My first thought is to make a dense subset by grabbing an arbitrary point \\(x\\) and then taking one point \\(x_p\\) such that \\(d(x_p, x) = p\\) for all rational \\(p\\). That definitely works for \\([a,b]\\), but actually it clearly fails in \\(\mathbb{R}^2\\) - what if we happened to pick our points so they all lay on the same line? They'd be dense along that line, but not anywhere else in the set. It's going to be a lot of work to fix this in \\(\mathbb{R}^2\\) without using special properties of \\(\mathbb{R}\\), so I'll abandon that line of thought.

Nothing obvious has come of the "density" part of the statement. Let's move on to the other bit - we know our metric space is compact (or, equivalently, that any open cover has a finite subcover). That means we're going to want to create an open cover. Because our metric space might be so odd, the only obvious cover to take is one consisting of a ball around every point. (Those balls might all be different sizes, of course.) That's the only way to make sure that we have actually included our entire space in the cover.

Compactness then gives us that there is a finite subcover of this cover of balls. That's not going to get us very far if we require a countable number of points, though. Where might we get a *point* rather than an open set (after all, compactness is all about sets, not points)? The only possible place is as the centre of some ball. Aha - we need to create a countable number of points, each of which lies at the centre of some ball. Equivalently, we want a countable number of balls.

OK, we can create hugely many balls to cover the set (wrap every point in a ball), and we can turn that into finitely many balls to cover the set (by compactness). How can we get countably many? Obviously not from the "hugely many" directly, because it might be very very uncountable - but we can make countable from finite, by taking a countable union. That is, we're going to need a countable union of {finitely many balls which cover the set}.

The simplest way I can create that countable union is to make every ball the same size (\\(\frac{1}{n}\\)), and use the cover \\(B_{\frac{1}{n}}\\) consisting of a \\(\frac{1}{n}\\)-ball around every point. We use compactness to turn that into \\(C_{\frac{1}{n}}\\) a collection of finitely many balls (which covers the entire space), and consider the union of all these \\(C_{\frac{1}{n}}\\).

This has given us a countable collection of points \\(\cup_{n \geq 1} \cup_{j = 1}^{i_n} P_{\frac{1}{n},j}\\) (namely, the centres of the balls; notationally, \\(P_{\frac{1}{n}, j}\\) refers to the centre of the \\(j\\)th element of \\(C_{\frac{1}{n}}\\)). Now, we want that set to be dense - we need the closure to be the entire space. What would it mean if the closure weren't the entire space? There would be a point which was in the space but not the closure.

At this point, I move back to the \\(\mathbb{R}^2\\) intuition-guide. I have drawn a mental picture of \\([0,1] \times [0,1]\\) with a countable collection of balls covering it, with a single point not in the closure of the set of centres. Aha, something is not right here - how can a point manage not to be in the closure of that set, unless it is outside the cover?

Suppose \\(x\\) does not lie in \\(\text{cl}(\cup_n P_{\frac{1}{n}})\\) - that is, \\(x\\) is outside the closure. Then \\(x\\) lies in an open set - namely the complement of the closure - so there is an open ball \\(B_{\epsilon}(x)\\) which lies outside the closure. I can feel that we're going to use \\(\frac{1}{n}\\)-ness at some point, because that's how we defined our cover, so let's make \\(\epsilon = \frac{1}{m}\\) for some \\(m\\) (which we can do - if our original \\(\epsilon\\) didn't work, make it smaller until it is the reciprocal of an integer).

Then we have a radius-\\(\frac{1}{m}\\) ball which doesn't lie inside the closure. That doesn't bode well for \\(C_{\frac{1}{m}}\\) being a cover, but it's just possible that the balls may sit next to each other in some way that makes it work (that's how vague my thoughts are, not just my incompetence at communication). For safety, let's consider \\(C_{\frac{1}{2m}}\\) instead.

Then we have \\(x \in B_{\frac{1}{2m}}(k)\\) for some \\(k \in P_{\frac{1}{2m}}\\), because \\(C_\frac{1}{2m}\\) was a cover so \\(x\\) does lie in a ball in that cover; pick \\(k\\) to be the centre of that ball. In particular, \\(k\\) lies at most \\(\frac{1}{2m}\\) away from \\(x\\), and \\(k\\) lies both in \\(B_{\frac{1}{m}}(x)\\) (which is not in the closure) and in \\(P\\) (which is in the closure). This is a contradiction - we've found a point which is both in and not in the closure.

Hence we must have the closure being the entire space, which means our countable collection of points is dense.
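The construction can be mimicked numerically for \\(X = [0,1]\\). In this sketch (names and sampling scheme are my own), a greedy sweep plays the role of extracting a finite subcover: for each \\(n\\) we build a finite set of centres whose \\(\frac{1}{n}\\)-balls cover the space, then take the union over \\(n\\).

```python
def finite_net(eps, samples=1000):
    """Greedily pick centres so that every sample point of [0, 1]
    lies within eps of some centre (a finite 'subcover' of eps-balls)."""
    centres = []
    for k in range(samples + 1):
        x = k / samples
        if not any(abs(x - c) < eps for c in centres):
            centres.append(x)
    return centres

# the countable dense set: union of the finite 1/n-nets (truncated at n = 5 here)
dense = sorted(set(c for n in range(1, 6) for c in finite_net(1 / n)))

# density check at scale 1/5: every sample point is within 1/5 of the set
assert all(any(abs(k / 1000 - c) < 1 / 5 for c in dense) for k in range(1001))
```

Taking all \\(n\\) rather than just \\(n \leq 5\\) gives the countable dense subset of the proof; the finite truncation is only so the sketch terminates.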

# Summary

I started off by thinking about the problem - working out roughly how I might be able to attack it, and deciding that it was too general for clever tricks to work. I then constructed an intuition-guide example, and worked off that, but decided that the line of attack suggested by my example would be very hard in general.

Having exhausted one of the parts of the theorem's statement, I moved to the other, and followed my nose. The problem was so general that there were only a few possible places we could acquire a countable collection of points; compactness suggested using balls around every point in the space, to get a finite cover of balls. From finite we can create countable by just taking a union, so I made the finite covers more formal (giving the balls a particular size) and took the union of all of them. That naturally gives a countable set of points (the centres of the balls); in the spirit of "do as little work as possible", I set out to prove that this set was dense. Assuming the contrary made it obvious from my intuition-picture that the set was indeed dense.
58
hugo/content/posts/2014-04-17-cayley-hamilton-theorem.md
Normal file
@@ -0,0 +1,58 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2014-04-17T00:00:00Z"
math: true
aliases:
- /mathematical_summary/cayley-hamilton-theorem/
- /cayley-hamilton-theorem/
title: Cayley-Hamilton theorem
---
This is to detail a much easier proof (at least, I find it so) of [Cayley-Hamilton][1] than the ones which appear on the Wikipedia page. It only applies in the case of complex vector spaces; most of the post is taken up with a proof of a lemma about complex matrices that is very useful in many contexts.

The idea is as follows: given an arbitrary square matrix, upper-triangularise it (looking at it in basis \\(B\\)). Then consider how \\(A-\lambda I\\) acts on the vectors of \\(B\\); in particular, how it deals with the subspace spanned by \\(b_1, \dots, b_i\\).

# Lemma: upper-triangulation

> Given a square matrix \\(A\\), there is a basis with respect to which \\(A\\) is upper-triangular.

Proof: by induction. It's obviously true for \\(1 \times 1\\) matrices, as they're already triangular. Now, let's take an arbitrary \\(n \times n\\) matrix \\(A\\). We want to make it upper-triangular. In particular, thinking about the top-left element, we need \\(A\\) to have an eigenvector (since if \\(A\\) is upper-triangular with respect to basis \\(B\\), then \\(A(b_1) = \lambda b_1\\), where \\(\lambda\\) is the top-left element). OK, let's grab an eigenvector \\(v_1\\) with eigenvalue \\(\lambda\\).

We'd love to be done by induction at this point - if we extend our eigenvector to a basis, that extension itself forms a smaller space, on which \\(A\\) is upper-triangulable. We have that every subspace has a complement, so let's pick a complement of \\(\text{span}(v_1)\\) and call it \\(W\\).

Now, we want \\(A\\) to be upper-triangulable on \\(W\\). It makes sense, then, to restrict it to \\(W\\) - we'll call the restriction \\(\tilde{A}\\), and that's a linear map from \\(W\\) to \\(V\\). Our inductive hypothesis requires a square matrix, so we need to throw out one of the rows of this linear map - in order that we're working with an endomorphism (rather than just a linear map), we need the codomain to be \\(W\\) as well as the domain. That means we have to throw out the top row (the \\(v_1\\)-component) - that is, we compose with \\(\pi\\), the projection map onto \\(W\\).

Then \\(\pi \cdot \tilde{A}\\) is \\((n-1)\times(n-1)\\), and so we can induct to state that there is a basis of \\(W \leq V\\) with respect to which \\(\pi \cdot \tilde{A}\\) is upper-triangular. Let's take that basis of \\(W\\) as our extension to \\(v_1\\), to make a basis of \\(V\\). (These are \\(n-1\\) length-\\(n\\) vectors.)

Then we construct \\(A\\)'s matrix as \\(A(v_1), A(v_2), \dots, A(v_n)\\). (That's how we construct a matrix for a map in a basis: state where the basis vectors go under the map.)

Now, with respect to this basis \\(v_1, \dots, v_n\\), what does \\(A\\) look like? Certainly \\(A(v_1) = \lambda v_1\\) by definition. \\(\pi(A(v_2)) = \pi(\tilde{A}(v_2))\\) because \\(\tilde{A}\\) acts just the same as \\(A\\) on \\(W\\); by upper-triangularity of \\(\pi \cdot \tilde{A}\\), we have that \\((\pi \cdot \tilde{A})(v_2) = k v_2\\) for some \\(k\\). The first element (the \\(v_1\\) coefficient) of \\(A(v_2)\\), who knows? (We threw that information away by taking \\(\pi\\).) But that doesn't matter - we're looking for upper-triangulability rather than diagonalisability, so we're allowed to have spare elements sitting at the top of the matrix.

And so forth: \\(A\\) is upper-triangular with respect to some basis.

## Note

Remember that we threw out some information by projecting onto \\(W\\). If it turned out that we didn't throw out any information - if it turned out that we could always "fill in with zeros" - then we'd find that we'd constructed a basis of eigenvectors, and that the matrix was diagonalisable. (This is how the two ideas are related.)

# Theorem

Recall the statement of the theorem:

> Every square matrix satisfies its characteristic polynomial.

Now, this would be absolutely trivial if our matrix \\(A\\) were diagonalisable - just look at it in a basis with respect to which \\(A\\) is diagonal (recalling that change-of-basis doesn't change characteristic polynomial), and we end up with \\(n\\) simultaneous equations which are conveniently decoupled from each other (by virtue of the fact that \\(A\\) is diagonal).

We can't assume diagonalisability - but we've shown that there is something nearly as good, namely upper-triangulability. Let's assume (by picking an appropriate basis) that \\(A\\) is upper-triangular. Now, let's say the characteristic polynomial is \\(\chi(x) = (x - \lambda_1)(x-\lambda_2) \dots (x-\lambda_n)\\). What does \\(\chi(A)\\) do to the basis vectors?

Well, let's consider the first basis vector, \\(e_1\\). We have that \\(A(e_1) = \lambda_1 e_1\\) because \\(A\\) is upper-triangular with top-left element \\(\lambda_1\\), so we have \\((A-\lambda_1 I)(e_1) = 0\\). If we look at the characteristic polynomial as \\((x-\lambda_n)\dots (x-\lambda_1)\\), then, we see that \\(\chi(A)(e_1) = 0\\).

What about the second basis vector? \\(A(e_2) = k e_1 + \lambda_2 e_2\\); so \\((A - \lambda_2 I)(e_2) = k e_1\\). We've pulled the \\(2\\)nd basis vector into an earlier-considered subspace, and happily we can kill it by applying \\((A-\lambda_1 I)\\). That is, \\(\chi(A)(e_2) = (A-\lambda_n I)\dots (A-\lambda_1 I)(A-\lambda_2 I)(e_2) = (A-\lambda_n I)\dots (A-\lambda_1 I) (k e_1) = 0\\).

Keep going: the final case is the \\(n\\)th basis vector, \\(e_n\\). \\(A-\lambda_n I\\) has a zero in the bottom-right entry, and is upper-triangular, so it must take \\(e_n\\) to the subspace spanned by \\(e_1, \dots, e_{n-1}\\). Hence \\((A-\lambda_1 I)\dots (A-\lambda_n I)(e_n) = 0\\).

Since \\(\chi(A)\\) is zero on a basis, it must be zero on the whole space, and that is what we wanted to prove.

[1]: https://en.wikipedia.org/wiki/Cayley-Hamilton_theorem "Cayley-Hamilton theorem"
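The theorem is easy to check numerically for any particular matrix. Here is a pure-Python sketch for an arbitrary \\(3 \times 3\\) example of my own choosing; the coefficients of \\(\chi\\) are recovered from traces of powers via Newton's identities rather than by expanding a determinant.

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

A = [[2, 1, 0], [0, 1, 3], [4, 0, 1]]   # an arbitrary 3x3 example
A2 = matmul(A, A)
A3 = matmul(A2, A)

# Newton's identities recover chi(x) = x^3 - e1 x^2 + e2 x - e3 from traces of powers
p1, p2, p3 = trace(A), trace(A2), trace(A3)
e1 = p1
e2 = (p1 * p1 - p2) / 2
e3 = (p3 - p1 * p2 + e2 * p1) / 3

# Cayley-Hamilton: A^3 - e1 A^2 + e2 A - e3 I should be the zero matrix
chi_A = [[A3[i][j] - e1 * A2[i][j] + e2 * A[i][j] - (e3 if i == j else 0)
          for j in range(3)] for i in range(3)]
assert all(abs(entry) < 1e-9 for row in chi_A for entry in row)
```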
@@ -0,0 +1,122 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2014-04-26T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof_discovery/sequentially-compact-iff-compact/
- /sequentially-compact-iff-compact/
title: Sequentially compact iff compact
---
[Prof Körner][1] told us during the [IB Metric and Topological Spaces][2] course that the real meat of the course (indeed, its hardest theorem) was "a metric space is sequentially compact iff it is compact". At the moment, all I remember of this result is that one direction requires Lebesgue's lemma (whose statement I don't remember) and that the other direction is quite easy. I'm going to try and discover a proof - I'll be honest when I have to look things up.

# Easy direction

I don't remember which direction was the easy one. However, I do know that in Analysis we prove very early on that closed intervals are sequentially compact (that is, they have the Bolzano-Weierstrass theorem), so I'm going to guess that that's the easy direction.

## Thought process

Suppose the space is compact. (Then for every open cover there is a finite subcover.) We want to show that every sequence has a convergent subsequence, so of course we'll try a proof by contradiction, because the statement is so general.

Suppose the sequence \\(x_n\\) has no convergent subsequence. That is, no subsequence of \\(x_n\\) converges to \\(y\\), for any \\(y\\). We're aiming for some kind of open cover, and we're in a very general kind of metric space, so we're going to have to generate our cover by considering balls around every point.

What does it mean for every subsequence of \\(x_n\\) not to converge to \\(y\\)? It means that for every ball around \\(y\\), and for every subsequence, we can find arbitrarily many \\(x_n\\) in the subsequence such that \\(x_n\\) is outside that ball. My first thought is that we've made a sequence which might be useful - the \\(x_n\\) lying outside the balls of radius \\(\frac{1}{m}\\) around \\(y\\) - but it's not obvious whether that will in fact be useful, because all we know about this sequence is that it doesn't get near a particular point.

OK, let's look at the "for every \\(y\\)" bit, because that's bound to be where our cover comes from. We're going to want a ball around each \\(y\\), so let's say the ball is of radius \\(\delta_y\\). (We'll delay stating what \\(\delta_y\\) actually is in value, because I have no idea what it's going to be.) Ah, then we know that for every subsequence, there are infinitely many \\(x_i\\) which lie outside the ball \\(B(y, \delta_y)\\).

What does our finite subcover look like? It's a finite collection (say, \\(k\\) many) of balls, and we know that there are infinitely many \\(x_i\\) in any subsequence such that the \\(x_i\\) are outside a given one of those balls. But this is a contradiction: take the subsequence of \\(x_n\\) such that all of the \\(x_i\\) in the subsequence lie outside ball 1. Then take a subsequence of that such that all the elements lie outside ball 2. Repeat: eventually we end up with a subsequence of \\(x_n\\) all of whose elements lie in ball \\(k\\) (since the balls cover the space, and we've excluded balls 1 to \\(k-1\\)) - but we know every subsequence has infinitely many elements outside ball \\(k\\), a contradiction.

## Proof

Suppose \\((X,d)\\) is a compact metric space, and take a sequence \\(x_n\\) in \\(X\\). We show that there exists \\(y \in X\\) such that there is a subsequence \\(z_i\\) of the \\(x_n\\) such that \\(z_i \to y\\).

Indeed, if the sequence \\(x_n\\) gets arbitrarily close to \\(y\\) then there is a subsequence of \\(x_n\\) tending to \\(y\\) (namely, let \\(\epsilon_m = \frac{1}{m}\\); then pick \\(x_{n_m}\\) such that \\(d(x_{n_m}, y) < \epsilon_m\\)), so it is enough to show that there is some \\(y\\) such that the sequence \\(x_n\\) gets arbitrarily close to \\(y\\).

We show that this is true. Indeed, suppose not. Then for all \\(y\\) there exists \\(\delta_y\\) such that \\(x_n\\) never gets within \\(\delta_y\\) of \\(y\\) (for all \\(n > N\\), some \\(N\\) - the sequence might have started at \\(y\\), but we know it never returns after some point). Take a cover consisting of those \\(B(y, \delta_y)\\); by compactness, there is a finite subcover.

Now, we have that for the \\(i\\)th ball in the cover, there exists \\(N_i\\) such that \\(x_n\\) never gets into the \\(i\\)th ball for \\(n > N_i\\); but there are only finitely many balls, so \\(x_n\\) never gets into any of the balls for \\(n > N = \text{max}(N_i)\\). But the finite collection of balls is a cover. That is, no \\(x_n\\) is in \\(X\\), for \\(n > N\\) - contradiction.

## Postscript

That did indeed turn out to be the easier direction, then.

# Hard direction

I'm not even going to begin attempting to find out what Lebesgue's lemma is on my own, so I'll just look it up and state it.

> For a sequentially compact metric space \\((X, d)\\), and an open cover \\(U_{\alpha}\\), we have that there exists \\(\delta\\) such that for all \\(x \in X\\), there exists \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\).

That is, "given any open cover, we can find a ball-width such that for every point, a ball of that width lies entirely in some set in the cover". It feels kind of related to Hausdorffness - while "metric spaces are Hausdorff" guarantees that we can wrap distinct points in non-overlapping balls, Lebesgue's lemma tells us that if our distinct points are not covered by the same set then we can separate them while remaining in those different sets in the cover.

OK, let's go for a proof of this.

## Proving Lebesgue's lemma

Well, where can we start? To actually produce such a \\(\delta\\), it looks like we'd need to take some kind of minimum, and that would require a finite cover (which is assuming compactness). So that's not a good place to start.

If we don't know where to start, we contradict. Suppose there is no \\(\delta\\) such that for all \\(x \in X\\) there exists \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\). That is, for every \\(\delta\\) there exists \\(x \in X\\) such that for all open sets in the cover, \\(B(x, \delta) \not \subset U_{\alpha}\\).

We're in a sequentially compact space - we need a sequence, so that it can have a convergent subsequence. Mindlessly (nearly literally - I'm exhausted at the moment, having had an unusually long supervision since proving the easier direction), I'll take \\(\delta_n = \frac{1}{n}\\) and create a sequence \\(x_n\\) such that \\(B(x_n, \frac{1}{n})\\) is not wholly contained in any set of the cover. Then the \\(x_n\\) has a convergent subsequence \\(x_{n_i} \to x\\), say.

Picture pause. We've got our \\(x_{n_i}\\) tending to \\(x\\), with ever-decreasing balls around them. It seems sensible that at some point (since the position of the balls, the centre \\(x_{n_i}\\), is hardly changing, while the radius is getting smaller) the balls will get so small that they start being contained in some cover-set.

That's actually so close to a proof that I'll write it up formally from this point.

### Proof

Let \\((X, d)\\) be a sequentially compact metric space, and let \\(U_\alpha\\) be a cover (ranging \\(\alpha\\) over some indexing set). Assume for contradiction that for every \\(\delta\\) there exists \\(x \in X\\) such that for all \\(\alpha\\), \\(B(x, \delta) \not \subset U_{\alpha}\\).

Specialise to the sequence \\(\delta_n = \frac{1}{n}\\), and let \\(x_n\\) be the corresponding \\(x \in X\\). Then by sequential compactness, there exists a subsequence \\(x_{n_i}\\) tending to some \\(x\\).

Now, \\(B(x_{n_i}, \frac{1}{n_i}) \not \subset U_{\alpha}\\) for any \\(\alpha\\). Also, because each \\(U_{\alpha}\\) is open, we have that for every \\(\alpha\\) such that \\(x \in U_{\alpha}\\) there exists \\(\epsilon_{\alpha}\\) such that \\(B(x, \epsilon_{\alpha})\\) is wholly contained within \\(U_{\alpha}\\).

Fix some \\(\alpha\\) such that \\(x \in U_{\alpha}\\), and let \\(\epsilon = \epsilon_{\alpha}\\). Take \\(n_i\\) such that \\(d(x_{n_i}, x) < \frac{\epsilon}{2}\\) (possible, because \\(x_{n_i} \to x\\)). We have \\(B(x_{n_i}, \frac{1}{n_i})\\) entirely contained in \\(B(x, \epsilon)\\), because any point in the former ball is at most \\(\frac{1}{n_i}\\) away from \\(x_{n_i}\\), which is itself at most \\(\frac{\epsilon}{2}\\) away from \\(x\\); hence any point in \\(B(x_{n_i}, \frac{1}{n_i})\\) is at most \\(\frac{1}{n_i} +\frac{\epsilon}{2}\\) away from \\(x\\). Picking \\(n_i > \frac{2}{\epsilon}\\) (as well as such that \\(d(x_{n_i}, x) < \frac{\epsilon}{2}\\)) ensures that \\(\frac{1}{n_i} +\frac{\epsilon}{2} < \epsilon\\).

But this is a contradiction: we have a ball entirely contained in some \\(U_{\alpha}\\) - namely \\(B(x, \epsilon)\\) - which contains a ball which is not entirely contained in \\(U_{\alpha}\\) - namely \\(B(x_{n_i}, \frac{1}{n_i})\\).

## Proving the main theorem

OK, what do we have? We have that any open cover of a sequentially compact space allows us to draw a ball of *predetermined width* around each point, such that every ball is contained entirely in a set from the cover.

What do we want? We want every open cover of a sequentially compact space to have a finite subcover. [^when]

OK, let's do the only possible thing and take an open cover of a sequentially compact space. We might be able to build a finite subcover because of our predetermined-width balls, but I want a picture first.

### Pictures (feel free to skip)

Let's use \\([0, 1]\\) and the cover \\([0, \frac{1}{5}), (\frac{1}{n+2}, \frac{1}{n}), (\frac13, 1]\\) where \\(n \geq 2\\), and let's suppose \\(\delta = \frac{1}{100}\\). (A little care is needed here: a tempting larger value like \\(\frac17\\) actually fails near \\(\frac15\\), where only the small interval \\((\frac16, \frac14)\\) is available; \\(\frac{1}{100}\\) does work.) Then a \\(\frac{1}{100}\\)-ball around any point remains in some set of the cover. The reason we have a finite subcover in this case is that the sets in the cover get smaller, so eventually we can just discard the ones which are too small to contain a \\(\frac{1}{100}\\)-ball. It turns out that wasn't a great intuition guide - metric spaces can be a lot odder than that.
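A candidate Lebesgue number for this cover can be sanity-checked numerically (a sampling sketch of my own, not a proof; it relies on the fact that an open interval \\((a,b)\\) lies inside another interval exactly when its endpoints do):

```python
def ball_fits(x, delta):
    """Does the open ball B(x, delta), taken inside [0, 1], fit in some cover set?"""
    a, b = max(0.0, x - delta), min(1.0, x + delta)
    if b <= 1 / 5:                      # [0, 1/5)
        return True
    if a >= 1 / 3:                      # (1/3, 1]
        return True
    return any(a >= 1 / (n + 2) and b <= 1 / n for n in range(2, 1000))

# delta = 1/7 is too generous: it fails at x = 1/5, where only (1/6, 1/4) helps
assert not ball_fits(0.2, 1 / 7)
# delta = 1/100 works at every sampled point
assert all(ball_fits(k / 2000, 1 / 100) for k in range(2001))
```

Sampling of course only falsifies or supports a candidate \\(\delta\\); the actual Lebesgue number is found by checking the worst points between consecutive small intervals by hand.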

We want a space where the "balls get smaller" argument fails. Let's use \\(\mathbb{R} \cup \{ \infty \}\\) (metrised as the one-point compactification, so that the metric restricts to something like the usual one on bounded sets), and the cover \\((n-\frac34, n+\frac34)\\) along with some ball around \\(\infty\\). The reason this one works is because the ball around infinity makes sure we can throw out most of the sets of the cover, because they are contained in the ball around infinity. (A suitable \\(\delta\\) is \\(\frac14\\).)

### End of pictures

Hmm, I don't think I can easily come up with an example which explains exactly why the theorem is true. I slept on this, and got no further, so I looked up the next step: assume that it is not possible to cover the space with a finite number of \\(B(x_i, \delta)\\). (This should perhaps have been suggested to me by my finite examples, in hindsight.) It turns out that this step makes it really easy.

Then for every finite sequence \\((x_i)_{i=1}^n\\), there is a point \\(x_{n+1}\\) which is not in \\(\bigcup_{i=1}^n B(x_i, \delta)\\); this gives a sequence which must have a convergent subsequence. Because the covering balls are all of fixed width \\(\delta\\), we must have that eventually the points in the subsequence draw together enough to sit in the same ball.

## Proof

Suppose \\((X, d)\\) is a sequentially compact metric space which is not compact, and fix an open cover \\(U_{\alpha}\\) with no finite subcover. Then by Lebesgue's lemma, there is \\(\delta\\) such that for all \\(x \in X\\), there is \\(\alpha_x\\) such that \\(B(x, \delta) \subset U_{\alpha_x}\\).

Now, if it were possible to cover \\(X\\) with a finite number of \\(B(x_i, \delta)\\) then we would have a finite subcover (namely, \\(U_{\alpha_{x_i}}\\) for each \\(i\\)). Hence it is impossible to cover \\(X\\) with a finite number of \\(B(x_i, \delta)\\). Take a sequence \\((x_n)_{n=1}^{\infty}\\) such that \\(x_i\\) does not lie in any \\(B(x_j, \delta)\\) for \\(j < i\\) (and where \\(x_1\\) is arbitrary). Then there is a convergent subsequence \\(x_{n_i} \to x\\), say; wlog let \\(n_i = i\\), for ease of notation (so the original sequence converged).

But this contradicts the requirement that \\(x_i\\) always lies outside \\(B(x_j, \delta)\\) for \\(j < i\\): indeed, \\(d(x_i, x_j) < \delta\\) for sufficiently large \\(i, j\\), since convergent sequences are Cauchy.

Hence \\((X, d)\\) is compact.

# Postscript

Ouch, that took a long time. There were three key ideas I ended up using.

1. One direction is so easy that it's one of the first theorems we prove in Analysis.
2. Lebesgue's lemma.
3. Contradict ALL the things. (Every single major step in either direction of the proof is a contradiction, and everything just falls out.)

[^when]: When do we want it? Now!

[1]: https://en.wikipedia.org/wiki/Tom_Körner "Prof Körner Wikipedia page"
[2]: https://www.dpmms.cam.ac.uk/study/IB/MetricTopologicalSpaces/ "Met+Top"
@@ -0,0 +1,66 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
- proof_discovery
comments: true
date: "2014-05-03T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof_discovery/discovering-a-proof-of-sylvesters-law-of-inertia/
- /discovering-a-proof-of-sylvesters-law-of-inertia/
title: Discovering a proof of Sylvester's Law of Inertia
---
*This is part of what has become a series on discovering some fairly basic mathematical results, and/or discovering their proofs. It's mostly intended so that I start finding the results intuitive - having once found a proof myself, I hope to be able to reproduce it without too much effort in the exam.*

# Statement of the theorem

[Sylvester's Law of Inertia][1] states that given a quadratic form \\(A\\) on a real finite-dimensional vector space \\(V\\), there is a diagonal matrix \\(D\\), with diagonal entries \\((\underbrace{1, \dots, 1}_{p}, \underbrace{-1, \dots, -1}_{q}, 0, \dots, 0)\\), to which \\(A\\) is congruent; moreover, \\(p\\) and \\(q\\) are the same however we transform \\(A\\) into this diagonal form.

# Proof

The very first thing we need to know is that \\(A\\) is diagonalisable. (If it isn't diagonalisable, we don't have a hope of getting into this nice form.) We know of a few classes of diagonalisable matrices - symmetric, Hermitian, etc. All we know about \\(A\\) is that it is a real quadratic form. What does that mean? It means that \\(A(x) = x^T A x\\) if we move into some coordinate system; transposing gives us that \\(A(x)^T = x^T A^T x\\), but the left-hand side is a scalar so equals its own transpose, whence \\(x^T A x = x^T A^T x\\) for every \\(x\\). That is, \\(A\\) and its symmetric part \\(\frac{1}{2}(A + A^T)\\) define the same quadratic form, so we may as well take \\(A\\) to be symmetric. Hence \\(A\\) has a symmetric matrix and so is diagonalisable: there is an orthogonal matrix \\(P\\) such that \\(P^{-1}AP = D\\), where \\(D\\) is diagonal. (Recall that a matrix \\(M\\) is orthogonal if it satisfies \\(M^{-1} = M^T\\).)

Now we might as well consider \\(D\\) in diagonal form. Some of the elements are positive, some negative, and some zero - it's easy to transform \\(D\\) so that the positive ones are all together, the negative ones are all together and the zeros are all together, by swapping basis vectors. (For instance, if we want to swap diagonal elements in positions \\((i,i), (j,j)\\), just swap \\(e_i, e_j\\).) Now we can scale every nonzero diagonal element to \\(\pm 1\\), by scaling the basis vectors - if we scale \\(e_i\\) by \\(1/\sqrt{ \vert A_{i,i} \vert }\\), calling the resulting vector \\(f_i\\), we'll get \\(A(f_i) = \frac{1}{\vert A_{i,i} \vert} A(e_i) = \frac{A_{i,i}}{\vert A_{i,i} \vert} = \pm 1\\) as required. (The squaring comes from the fact that \\(A\\) is a *quadratic* form, so \\(A(a x) = a^2 A(x)\\).)

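The rescaling step is mechanical enough to sketch in code. Here is a minimal illustration (Python is an arbitrary choice here, and the `signature` helper is my own illustration, not part of the original argument) that takes the diagonal entries of an already-diagonalised form and counts the \\(\pm 1\\)s and \\(0\\)s:

```python
def signature(diagonal):
    """Given the diagonal entries of a diagonalised real quadratic form,
    rescale each basis vector e_i by 1/sqrt(|d_i|) (for d_i nonzero), so
    that the entry becomes d_i/|d_i| = +1 or -1, and count the results.

    Returns (p, q, z): the numbers of +1, -1 and 0 entries."""
    rescaled = [d / abs(d) if d != 0 else 0 for d in diagonal]
    p = sum(1 for d in rescaled if d > 0)
    q = sum(1 for d in rescaled if d < 0)
    z = len(rescaled) - p - q
    return p, q, z

# Rescaling a basis vector multiplies its diagonal entry by a positive
# square, which cannot change the signature:
assert signature([2.0, -3.0, 0.0, 5.0]) == (2, 1, 1)
assert signature([8.0, -0.75, 0.0, 20.0]) == (2, 1, 1)
```

That the answer is the same for both diagonals above is exactly the invariance that the rest of the proof establishes.
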
Hence we've got \\(A\\) into the right form. But how do we show that the number of positive and negative elements is an invariant?

## Positive bit

All I remember from the notes is that there's something to do with positive definite subspaces. It turns out that's a really big hint, and I haven't been able to think up how you might discover it. Sadly, I'll just continue as if I'd thought it up for myself rather than remembering it.

The following section was my first attempt. My supervisor then told me that it's a bit inaccurate (and some of it doesn't make sense). In particular, I talk about the dimension of \\(V \backslash P\\) for \\(P\\) a subspace of \\(V\\) - but \\(V \backslash P\\) isn't even a space (it doesn't contain \\(0\\)). During the supervision I attempted to refine it by using \\(P^C\\), the complement of \\(P\\) in \\(V\\), but even that is vague, not least because complements aren't unique.

### Original attempt

We have a subspace \\(P\\) on which \\(A\\) is positive definite - namely, make \\(A\\) diagonal and then take the first \\(p\\) basis vectors. (Remember, positive definite iff \\(A(x, x) > 0\\) unless \\(x = 0\\); but \\(A(x,x) > 0\\) for \\(x \in P\\) because \\(x^T A x\\) is a sum of positive things.) Similarly, we have a subspace \\(Q\\) on which \\(A\\) is negative semi-definite (namely "everything which isn't in \\(P\\)"). Then what we want is: for any other diagonal form of \\(A\\), there is the same number of 1s on the diagonal, and the same number of -1s, and the same number of 0s. That is, we want to ensure that just by changing basis, we can't alter the size of the subspace on which \\(A\\) is positive-definite.

We'll show that for any subspace \\(R\\) on which \\(A\\) is positive-definite, we must have \\(\dim(R) \leq \dim(P)\\). Indeed, let's take \\(R\\) on which \\(A\\) is positive definite. The easiest way to ensure that its dimension is less than that of \\(P\\) is to show that it's contained in \\(P\\). Now, that might be hard - we don't know anything about what's in \\(R\\) - but we might do better in showing that nothing in \\(R\\) is also in \\(V \backslash P\\), because we know \\(A\\) is negative semi-definite on \\(V \backslash P\\), and that's inherently in tension with the positive-definiteness on \\(R\\).

Suppose \\(r \not \in P\\) and \\(r \in R\\). Then \\(A(r,r) \leq 0\\) (by the first condition) and \\(A(r,r) > 0\\) (by the second condition, since \\(R\\) is positive-definite) - contradiction.

That was quick - we showed, for all subspaces \\(R\\) on which \\(A\\) is positive-definite, that \\(\dim(R) \leq \dim(P)\\).

### Supervisor-vetted version

We have a subspace \\(P\\) on which \\(A\\) is positive-definite - namely, make \\(A\\) diagonal and take the first \\(p\\) basis vectors. We'll call the set of basis vectors \\(\{e_1, \dots, e_n \}\\); then \\(P\\) is spanned by \\(\{e_1, \dots, e_p \}\\).

Now, let's take any subspace \\(\tilde{P}\\) on which \\(A\\) is positive-definite. We want \\(\dim(\tilde{P}) \leq \dim(P)\\); to that end, take \\(N\\) spanned by \\(\{e_{p+1}, \dots, e_n \}\\). We show that \\(\tilde{P} \cap N = \{0\}\\). Indeed, if \\(r \in \tilde{P} \cap N\\), with \\(r \not = 0\\), then:

* \\(r \in \tilde{P}\\) so \\(A(r,r) > 0\\)
* \\(r \in N\\) so \\(A(r,r) \leq 0\\)

But this is a contradiction. Hence \\(\tilde{P} \cap N\\) is the zero space, and so \\(\dim(\tilde{P}) \leq \dim(P)\\), because \\(\dim(P) + \dim(N) = n\\) while \\(\dim(\tilde{P}) + \dim(N) \leq n\\).

### Commentary

Notice that my original version is conceptually quite close to correct: "take something in a positive-definite space, show that it can't be in the negative-semi-definite bit and hence must be in \\(P\\)". I was careless in not checking that what I had written made sense. I am slightly surprised that no alarm bells were triggered by my using \\(V \backslash P\\) as a space - I hope that now my background mental checks will come to include this idea of "make sure that when you transform objects, you retain their properties".

### Completion (original and hopefully correct)

Identically we can show, for all subspaces \\(Q\\) on which \\(A\\) is negative-definite, that \\(\dim(Q) \leq \dim(N)\\) (with \\(N\\) defined analogously to \\(P\\) but with negative-definiteness instead of positive-definiteness). And we already know that congruence preserves matrix rank (if \\(S\\) is invertible then \\(S^T A S\\) has the same rank as \\(A\\)), so we have that the number of zeros in any diagonal representation of \\(A\\) is the same.

Hence in any diagonal representation of \\(A\\), with \\(p', q', z'\\) the numbers of \\(1, -1, 0\\) respectively on the diagonal, we need \\(p' \leq p\\), \\(q' \leq q\\), \\(z' = z\\) (where \\(z\\) is the number of zeros in our original form) - but because the diagonal is the same size in each matrix (since the matrices don't change dimension), we must have equality throughout.

[1]: https://en.wikipedia.org/wiki/Sylvester's_Law_of_Inertia "Sylvester's law of inertia Wikipedia page"

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2014-05-26T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof-that-symmetric-matrices-are-diagonalisable/
- /proof-that-symmetric-matrices-are-diagonalisable/
title: Proof that symmetric matrices are diagonalisable
---

This comes up quite frequently, but I've been stuck for an easy memory-friendly way to do this. I trawled through the 1A Vectors and Matrices course notes, and found the following mechanical proof. (It's not a discovery-proof - I looked it up.)

## Lemma

Let \\(A\\) be a symmetric matrix. Then any eigenvectors corresponding to different eigenvalues are orthogonal. (This is a very standard fact that is probably hammered very hard into your head if you have ever studied maths post-secondary-school.) The proof of this is of the "write it down, and you can't help proving it" variety:

Suppose \\(\lambda, \mu\\) are different eigenvalues of \\(A\\), corresponding to eigenvectors \\(x, y\\). Then \\(Ax = \lambda x\\), \\(A y = \mu y\\). Hence (transposing the first equation) \\(x^T A^T = \lambda x^T\\); the left hand side is \\(x^T A\\). Hence \\(x^T A y = \lambda x^T y\\); but \\(A y = \mu y\\) so this is \\(\mu x^T y = \lambda x^T y\\). Since \\(\lambda \not = \mu\\), this means \\(x^T y = 0\\).

## Theorem

Now, suppose \\(A\\) has eigenvalues \\(\lambda_1, \dots, \lambda_n\\). They might not be distinct; take the distinct ones, \\(\lambda_1, \dots, \lambda_r\\), and pick a unit eigenvector for each. By the Lemma, these are orthonormal. Then extend them to a basis of \\(\mathbb{R}^n\\), and orthonormalise that basis using the [Gram-Schmidt process][1]. (That this produces an orthonormal basis can be proved - it's tedious but not hard, as long as you remember what the Gram-Schmidt process is, and I think it's safe to assume.) With respect to this basis, \\(A\\) is a matrix which is diagonal in the first \\(r\\) entries. Moreover, we are performing an orthonormal change of basis, and conjugation by orthogonal matrices preserves the property of "symmetricness" (proof: \\((P^T A P)^T = P^T A^T P = P^T A P\\)), so the \\(r+1\\)th to \\(n\\)th row/column block is symmetric. It is also real (because we have performed a conjugation by a real matrix). And we have that the first \\(r\\) columns of \\(P^T A P\\) are filled with zeros below the diagonal (being the image of eigenvectors), so \\(P^T A P\\) is also filled with zeros in the first \\(r\\) rows above the diagonal, because it is a symmetric matrix.

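For reference, the Gram-Schmidt process itself is short enough to write out. The following is a rough sketch (in Python, with vectors as plain lists of floats - my own illustration, not anything from the course notes):

```python
from math import sqrt

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors: subtract,
    from each vector in turn, its components along the already-built
    orthonormal vectors, then normalise what remains."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            proj = sum(wi * ui for wi, ui in zip(w, u))  # <w, u>
            w = [wi - proj * ui for wi, ui in zip(w, u)]
        norm = sqrt(sum(wi * wi for wi in w))  # nonzero, by independence
        basis.append([wi / norm for wi in w])
    return basis
```

Note that vectors which are already orthonormal (like our first \\(r\\) eigenvectors) pass through unchanged, which is what the proof needs.
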
Now by induction, the sub-matrix occupying rows and columns \\(r+1\\) to \\(n\\) is diagonalisable by an orthogonal matrix. Hence we are done: all symmetric matrices are diagonalisable by an orthogonal change of basis. (The eigenvectors produced by the inductive step must be orthogonal to the ones we've already found, because they fall in a subspace which is orthogonal to the span of the ones we already found.)

[1]: https://en.wikipedia.org/wiki/Gram-Schmidt_process "Gram-Schmidt process Wikipedia page"

hugo/content/posts/2014-06-25-possible-cons-of-Soylent.md

---
lastmod: "2022-12-31T23:21:00.0000000+00:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-06-25T00:00:00Z"
aliases:
- /uncategorized/possible-cons-of-Soylent/
- /possible-cons-of-Soylent/
title: Possible cons of Soylent
---

I have seen many glowing reviews of [Soylent](https://soylent.com), and many vitriolic [naturalistic](https://en.wikipedia.org/wiki/Appeal_to_nature) arguments against it. What I have not really seen is a proper collection of credible reasons why you might not want to try Soylent (that is, reasons which do not boil down to "it’s not natural, therefore Soylent is bad" or "food is great, therefore Soylent is bad").

This page used to contain citations in the form of links to the Soylent Discourse forum at `discourse.soylent.com`. However, that site is now defunct.

* *Soylent is untested.* Indeed, there are apparently trials being run (there was originally a link to a post from the founder of Soylent, but the link is dead), but I have not seen any data coming out of them (or indeed any evidence of a trial, other than the founder’s word). It is perfectly plausible that Soylent misses out something important - [lycopene](https://en.wikipedia.org/wiki/Lycopene), for instance, may turn out to be highly beneficial. Of course, various fast-foody diets don’t contain lycopene or whatever anyway. The fact that no-one has so far become ill in a diet-related way from Soylent (apart from a well-known and easily-fixed sodium problem) is insufficient evidence that Soylent is safe.

* *Soylent is even more addictive than whole food.* People often report that Soylent makes them feel really really good for a few days, before they adjust to their new level of wellbeing and "good" becomes "normal". Then returning to whole food causes them to feel sluggish and generally not very well. On the other hand, some report that whole food becomes extra-tasty, so perhaps it’s a balancing act - switching from Soylent to a good diet may be important.

* *You hate the idea/you find cooking too fun.* Fine, don’t eat it.

* *It’s effort to test and tune your home-made recipe.* Everyone is different, and you might need to make up for pre-existing deficiencies or whatever. As much as the DIY community and [Rosa Labs](http://www.rosalabs.com/) would like it, one size does not fit all, and it might take a while to find out what you need.

* *There are side-effects of adjusting to Soylent.* People usually report gas when starting a soylent, and sometimes it doesn’t seem to settle down; it’s unclear why this happens. There are other symptoms, like headaches (which are apparently usually down to having not enough sodium or not enough water) and bloatedness (which is apparently solved by not drinking the Soylent so quickly).

* *Expense.* There are some DIY recipes which are very expensive. This is often because protein is dear, and low-carb soylents must be mostly protein and fat by necessity. Too high a fat content is unpalatable, so the expensive protein makes up the calories.

hugo/content/posts/2014-07-13-solvability-of-nonograms.md

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2014-07-13T00:00:00Z"
math: true
aliases:
- /mathematical_summary/nonograms/
- /solvability-of-nonograms/
title: Solvability of nonograms
---

Recently, a friend re-introduced me to the joys of the [nonogram] (variously known as "hanjie" or "griddler"). I was first shown these about ten years ago, I think, because they appeared in [The Times]. When The Times stopped printing them, I forgot about them for a long time, until two years ago, or thereabouts, when I tried them on [a website][griddlers.net]. I find the process much more satisfying on paper with a pencil than on computer, so I gave them up again and forgot about them again.

Anyway, the thought occurred to me: is a given griddler always solvable, and is it solvable uniquely? That is, given a grid and the edge entries, is it always a valid puzzle?

Notation: we will say that a given *solved grid* has an *edge-set* consisting of the numbers we would see if we were about to start solving the nonogram. We say that an edge-set *applies to* a solved grid if that edge-set is consistent with the solved grid. (For instance, the empty edge-set doesn't apply to any solved grid apart from the zero-size grid.)

Then our question has become: is there in some way a bijection between (edge-sets) and (solved grids)?

# Existence of edge-sets

We can trivially describe any solved grid by an edge-set and a grid size: simply write down the grid size of the solved grid, and write down the obvious edge-set. (We do need the grid size to be specified, because given an edge-set which applies to a solved grid, we can create a new grid to which that edge-set applies by simply appending a blank row to the solved grid.)

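Writing down "the obvious edge-set" is purely mechanical: record the run-lengths of filled squares along every row and every column. A quick sketch (in Python, with a grid as a list of rows of 0s and 1s - the representation is my own choice, purely illustrative):

```python
def runs(line):
    """Run-lengths of consecutive 1s in a row or column."""
    result, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            result.append(count)
            count = 0
    if count:
        result.append(count)
    return result

def edge_set(grid):
    """The edge-set of a solved grid: run-lengths for every row,
    then for every column."""
    rows = [runs(row) for row in grid]
    cols = [runs(col) for col in zip(*grid)]
    return rows, cols
```

For example, `edge_set([[1, 1, 0], [0, 0, 1]])` gives row clues `[[2], [1]]` and column clues `[[1], [1], [1]]`.
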
# Uniqueness of edge-sets

Is there an obvious reason why we could never have two different edge-sets applying to the same solved grid? It seems intuitively clear that a given solved grid can only have the obvious edge-set (namely, the one we get by writing down the blocks in each row and column in the obvious way). Is this rigorous as a proof? Yes: suppose that we had two edge-sets describing the same solved grid, differing (wlog) in the first row. In fact, we may as well assume (wlog) that our solved grid is only one row long.

* If one edge-set is empty, we're done: because the two edge-sets are not the same, that means the other edge-set is non-empty, and so under the first edge-set the solved grid is empty, while under the second the solved grid is nonempty.
* If both edge-sets are non-empty: suppose the first starts with the number \\(a\\), and the second with the number \\(b\\). Then we have some number of blank squares, and then \\(a\\) filled-in squares (by edge-set 1) and also \\(b\\) filled-in squares (by edge-set 2); hence \\(a=b\\), because our solved grid is fixed. Deleting that first block and repeating the argument shows the two edge-sets agree everywhere.

# Existence of solutions

Must a solution exist for a given grid size and edge-set? Is it possible to create a nonogram with no solution? One strategy for proving this might be to count the number of allowable edge-sets and to count the number of allowable solved grids (the latter problem is extremely easy if we consider a grid as being a binary number whose bits are laid out in a rectangle), because any two finite sets of the same size must biject. However, the former problem sounds very hard.

On second thoughts (read: I slept on this), it's blindingly obvious that there is an edge-set with no solution - namely, on the one-by-one grid, the edge-set "1 as column heading, 0 as row heading". So there certainly are edge-sets which don't have a solution grid.

# Uniqueness of solutions

OK, if we don't always have solvability, how about the "easy puzzle-setting property": that a given edge-set and grid-size cannot have two solved grids to which the edge-set applies? If this were true, it would make generating puzzles extremely easy: simply draw out a solved grid, write down its edge-set (which is unique, as shown above), and set that edge-set and grid-size as the puzzle, without fear that someone could sit down and solve the puzzle validly to get a different grid to your solution.

On the same second thoughts as the 'existence of solutions' thoughts, it's clear that the 2-by-2 grid with a diagonal black stripe has two solutions - namely, send the stripe top-left to bottom-right, or top-right to bottom-left. Curses.

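Both counterexamples are small enough to confirm by exhaustive search. Here is a self-contained sketch (in Python; the brute-force approach is mine, purely to check the two examples above):

```python
from itertools import product

def runs(line):
    """Run-lengths of consecutive filled (1) cells."""
    result, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            result.append(count)
            count = 0
    if count:
        result.append(count)
    return result

def solutions(row_clues, col_clues, height, width):
    """All grids of the given size whose edge-set matches the clues."""
    grids = []
    for bits in product([0, 1], repeat=height * width):
        grid = [list(bits[r * width:(r + 1) * width]) for r in range(height)]
        if ([runs(row) for row in grid] == row_clues
                and [runs(col) for col in zip(*grid)] == col_clues):
            grids.append(grid)
    return grids

# The 1-by-1 grid with column clue "1" and row clue "0" has no solution:
assert solutions([[]], [[1]], 1, 1) == []
# ...and the diagonal-stripe clues on a 2-by-2 grid have two solutions:
assert len(solutions([[1], [1]], [[1], [1]], 2, 2)) == 2
```

(Exhaustive search is only feasible for tiny grids, of course - there are \\(2^{hw}\\) candidate grids - but that's all we need here.)
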
# Summary

Every solved grid has an edge-set, which is unique to that grid. However, not all edge-sets are solvable, and we don't have uniqueness of solutions. That was much less interesting than I had hoped.

[nonogram]: https://en.wikipedia.org/wiki/Nonogram
[griddlers.net]: https://www.griddlers.net/home
[The Times]: http://www.thetimes.co.uk/tto/news/

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2014-07-15T00:00:00Z"
disqus: true
math: true
aliases:
- /psychology/what-maths-does-to-the-brain/
- /what-maths-does-to-the-brain/
title: What maths does to the brain
---

In my activities on [The Student Room], a student forum, someone (let's call em Entity, because I like that word) recently asked me about the following question.

>Isaac places some counters onto the squares of an 8 by 8 chessboard so that there is at most one counter in each of the 64 squares. Determine, with justification, the maximum number that he can place without having five or more counters in the same row, or in the same column, or on either of the two long diagonals.

You might like to have a think about it before I give the answer Entity gave, followed by my commentary on it.

I paraphrase Entity's answer:

>The maximum is 32, because the maximum along each row is 4 and so having 33 counters means having more than one row being full. Moreover, I have found a pattern which satisfies the 32 requirement. Hence we have shown that the correct answer is at most and at least 32, so it must be 32.

I'm going to assume that the 32-pattern is correct, because I wasn't shown the purported answer. What interested me was that my mind immediately pointed out internally that we have made an unproved claim. Again, you might like to think what the unproved claim might be - it's completely trivial to prove, but I found it fascinating. It'll come in the next paragraph.

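For what it's worth, a valid 32-counter pattern does exist, though I don't know whether it's Entity's. One example (my own construction, purely illustrative): place a counter on square \\((i, j)\\) exactly when \\((j - i) \bmod 8\\) is 1, 2, 3 or 4. A quick check in Python that it satisfies every constraint:

```python
# Candidate pattern: counter at (i, j) iff (j - i) % 8 is 1, 2, 3 or 4.
grid = [[1 if (j - i) % 8 in (1, 2, 3, 4) else 0 for j in range(8)]
        for i in range(8)]

assert sum(map(sum, grid)) == 32                   # 32 counters in total
assert all(sum(row) <= 4 for row in grid)          # at most 4 per row
assert all(sum(col) <= 4 for col in zip(*grid))    # at most 4 per column
assert sum(grid[i][i] for i in range(8)) <= 4      # main long diagonal
assert sum(grid[i][7 - i] for i in range(8)) <= 4  # other long diagonal
```

(In fact this pattern puts exactly 4 counters in every row and column, none on the main diagonal and 4 on the other one.)
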
The unproved claim is "having 33 counters means having more than one row being full". There are a couple of trivial proofs:

* \\(\frac{33}{64} > \frac{1}{2}\\) is the proportion of the board which is becountered, and the mean of eight quantities (the proportions of counters in each row) which are all less than or equal to a half cannot itself be greater than a half. Hence at least one of the eight quantities is greater than a half (that is, some row has more than four counters in it).
* The [pigeonhole principle] gives the result directly in a similar way (33 pigeons into eight holes means one hole has more than four pigeons).

However, my mind flagged this claim automatically as something that wasn't necessarily obvious. It turned out to be trivial, but it is an example of a step which is in general not true. For instance:

> Consider the natural numbers (greater than 0). The set of even numbers takes up half the space. Now if we remove the number 2 from the set of even numbers, we have the collection still taking up half the space, but now it's a smaller set - it's missing an element. Conundrum.

Here, we used very similar reasoning ("removing something from a set makes it take up proportionally less space") but got nonsense, ultimately because the pigeonhole principle doesn't apply to infinite sets.

I think what I did here was recognise a general pattern, but I struggle to work out what that pattern might be. The closest I've come is "if one property of a structure holds, then an obviously related property of that structure holds", because I'm pretty sure my thought wasn't triggered by the need for the pigeonhole principle. (In that case, the pattern would have been "if we fill up some slots, then some subset of the slots must be full", which is much more specific and trivial than I feel my reaction was. It felt like a specific instance of a very general check.)

A similar pattern which is much more concrete is the distinction between "if" and "only if" and "if and only if". A mathematician trains emself early on not to get confused between these. It doesn't take too long before you simply stop having the mental architecture that lets you make a mistake like "all odd squares are squares of odd numbers. Indeed, if \\(n\\) is odd then \\(n^2\\) is odd. QED" unless the structures you're working with are quite a bit more complicated. Of course, my mental checks can be overwhelmed by complexity, and I have certainly proved the wrong direction of a problem many times, but in everyday conversation and in simpler mathematical problems, it becomes not only easy but automatic to distinguish between "implies" and "is implied by".

It feels vaguely similar to some of the filters I've installed in myself for other reasons. For instance, earlier today I was asked which of five leaflets looked best. I had already seen one of them before, and my first reaction (before any other) was "I've seen this one before, so I'm likely to think it looks better". I have a few anti-bias systems like these, and I have no idea whether they're useful or not, but I can certainly feel them going sometimes, without any input from myself.

[The Student Room]: http://www.thestudentroom.co.uk
[pigeonhole principle]: https://en.wikipedia.org/wiki/Pigeonhole_principle

hugo/content/posts/2014-07-19-music-practice.md

---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-07-19T00:00:00Z"
aliases:
- /uncategorized/music-practice/
- /music-practice/
title: Music practice
---

A couple of weeks ago, someone opined to me that there was a type of person who was just able to sit down and play at the piano, without sheet music.

I, myself, am capable of playing [precisely one piece][Jarrod Radnich PotC] inexpertly, from memory, at the piano. (My rendering of that piece is *nowhere* near the arranger's standard.) I can play nothing else without sheet music. I very much think that this is the natural state for essentially every musician who has not spent thousands upon thousands of hours practising in a general way. That is, almost no-one can naturally sit down and play a piece from memory without a lot of work beforehand, and almost no-one can improvise well without a great deal of effort directed either at learning how to improvise, or at learning generally the mechanics of playing.

# How technical practice helps

The syllabus for [ABRSM exams] contains a large body of scale-work, arpeggios and related patterns. There is a reason these are featured so heavily that one cannot attain a Distinction grade without them: while they may not help much in learning pieces up to Grade 8, they really are useful beyond that. Someone who can sit down in front of an unseen piece and note automatically that "the left hand is an Alberti bass in F#-major" is at a distinct advantage compared to someone who has never practised F#-major arpeggios, because the latter person has to read each note in both hands; the former can concentrate almost solely on the right hand's melody line. An impossible piece can become quite do-able if you can reduce the left hand's job to that of repeating a memorised action.

# How general performance practice helps

Because the same patterns occur so often in music, there is essentially no upper limit on how many useful actions there are to memorise. Most phrases of music end in one of a couple of [recognised ways][cadence] (cadences), and a given cadence doesn't vary that much in its presentation. Someone well-practised could quite conceivably only need to read three-quarters of a piece, knowing that the remaining quarter is already-familiar cadences.

And, of course, it is hard to practise actual pieces without coming across cadences - they show up so regularly. By just performing general practice of a wide range of pieces, you naturally come to be able to play cadences without much thought. If you devote effort to learning particular chordal patterns, this process becomes even easier.

If you play the piano with any level of seriousness, you have probably played a [fugue] at some point. A fugue is a piece of music mainly characterised by a single melody which is repeated at various pitches, and around which a richly textured harmony is built. The idea is to bat this theme between several 'voices' (for instance, a fugue might have four voices, two played in the left hand and two in the right, analogously to a four-part choir), with each voice either playing the theme or embellishing upon it. It's kind of like a more complicated and interesting canon. The key point is that fugues are all very similar in style, and if you have the skill of playing a single tune more than once simultaneously, in different voices and offset from each other, then you can pretty much play a fugue. That skill comes with practice.

Anyone who has sat a music exam knows how important it is to be able to recover from mistakes. The best way to recover from a mistake would be to improvise something that sounds plausible (ideally the original piece!) until you picked up the thread again. This, too, comes with practice: I have noticed myself that I have over time got substantially better at ignoring mistakes I make during a performance. Every so often, if I'm caught out while playing a piece I know well, I can just about invent a semi-plausible bar to fill in the gap before I recover. (If nothing else, I might be able to play the right chords in an unexpected [inversion] or something.) I understand that this skill has pretty much no upper bound.

# Summary

The point is, then, that it is probable that sheer mind-numbing amounts of practice are what make people able to sit down and play. Certainly some may require less practice than others, but anyone who can play at the drop of a hat has probably practised an awful lot to get like that. I certainly know of no counterexamples.

[Jarrod Radnich PotC]: https://www.youtube.com/watch?v=n4JD-3-UAzM
[ABRSM exams]: https://en.wikipedia.org/wiki/ABRSM#Practical_exams
[cadence]: https://en.wikipedia.org/wiki/Cadence_(music)
[fugue]: https://en.wikipedia.org/wiki/Fugue
[inversion]: https://en.wikipedia.org/wiki/Chord_inversion#Chords

hugo/content/posts/2014-07-21-perfect-pitch.md

---
lastmod: "2022-08-21T12:09:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-07-21T00:00:00Z"
aliases:
- /uncategorized/perfect-pitch/
- /perfect-pitch/
title: Perfect pitch
---

I have a limited form of [perfect (absolute) pitch][perfect pitch], which I am sometimes asked about. Often it's the same questions, so here they are. No doubt people with better perfect pitch than mine will be annoyed at this impudent upstart claiming the ability, but perfect pitch comes on a spectrum anyway. Apparently some people can identify notes to within the nearest fifth of a semitone, while some can only identify the semitone closest to the note. I am a bit further towards the "tone-deaf" end of that spectrum.

# References for notes

Anyway, I have been able to sing a [concert A] without reference since about the age of 12, I think, on account of having learnt the violin from a much younger age. From then, until about the age of 15, I kind of accumulated more notes I could use as references (A because it's concert tuning; E because it's the start of [Für Elise]; D because it's the start of [Pachelbel's Canon] and of the Libera Me from [Fauré's Requiem]). Annoyingly, these were all notes we tune violin strings to; it's very easy to find the four notes of a violin given an A, because we hear that sound every time we start playing the violin. Eventually I picked up an unreliable B-flat (from [a rather rousing Christmas carol][This Little Babe]), which I always had to cross-check with the A.

Then I started noticing that the F which lies at the bottom of my vocal range had a very distinctive feel on the piano. Not a particular piano - just that I could recognise that F when played on the piano. Similarly, middle C started to feel like a C. I came to be able to reproduce the C vocally, by imagining pressing the middle-C key and singing the note that it played.

That is, I could identify the notes A, C, D, E, F and B-flat. More tentatively, I could identify G as being kind of weedy and characterless (as opposed to the rich understated heroism of F - sounds silly, but I can find no other way to describe it off the top of my head).

I still have trouble with most accidentals (that is, flats and sharps), although I've just now realised that I can do F-sharp from [Tim Minchin]'s excellent [song of the same name][F Sharp] and I can do D-sharp from the start of [Chopin]'s [Nocturne in B][Chopin Nocturne in B]. So it's really just C-sharp, A-flat and B that I don't have references for. I can identify the white notes (except B, which feels a bit like a chameleon - it could be either a C or a B-flat) on a piano by sound, and I can identify all the notes by producing them, or producing the next-door note, and comparing with what I heard.

Having said that, I'm significantly slower and less accurate when there is background noise - particularly tuned background noise. It feels like my internal scale is fuzzy and easily subject to external influence.

# FAQ

*Have you always had it?* No, I picked it up mid-to-late secondary school. Also, my ability depends on having been playing music recently (by "recently" I mean "in the last week or so"). If the last few weeks have been musicless, I become much slower and less accurate.

*What's it like to have it?* No different from not having it, for the most part. It doesn't get in the way unless I ask for it, with some exceptions. In particular, I usually listen to a piece of music without noticing the notes; I am not that fast at identifying most notes, so they might well pass me by before I have a chance to decide what they are - just as the individual letters of a text don't bother you as you read.

I said "exceptions": I am quite sensitive to instruments being out of tune. I don't know whether I'm much more sensitive than other people in this area - maybe they're all being polite in pretending not to notice. After a few minutes to get used to the pitch, it usually swamps my absolute representation of notes, and then I stop noticing out-of-tuneness (because I no longer have a reliable baseline).

*Can you distinguish sound better than normal?* Apparently so, but I don't think it's caused by my perfect pitch. On a now-defunct online test, I scored in the 87th percentile of test takers, reliably distinguishing pitches 0.75 hertz apart around 500 hertz. I imagine that's to do with musical training.

[perfect pitch]: https://en.wikipedia.org/wiki/Perfect_pitch
[concert A]: https://en.wikipedia.org/wiki/Concert_A
[Für Elise]: https://en.wikipedia.org/wiki/Fur_Elise
[Pachelbel's Canon]: https://en.wikipedia.org/wiki/Pachelbel%27s_Canon
[This Little Babe]: https://www.youtube.com/watch?v=BTyIP7m8Btg
[Fauré's Requiem]: https://en.wikipedia.org/wiki/Faur%C3%A9_Requiem
[F Sharp]: https://www.youtube.com/watch?v=5Ju8Wxmrk3s
[Tim Minchin]: https://en.wikipedia.org/wiki/Tim_Minchin
[Chopin Nocturne in B]: https://www.youtube.com/watch?v=BhIP4hDBp-E
[Chopin]: https://en.wikipedia.org/wiki/Chopin

34
hugo/content/posts/2014-08-19-parables.md
Normal file
@@ -0,0 +1,34 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
comments: true
date: "2014-08-19T00:00:00Z"
aliases:
- /creative/parables/
- /parables/
title: Parables, chapter 1, verses 1-10
---

One day, a group of investors came to [Bezos] in the Temple and begged of him, "You are known throughout the land for your wisdom. Please tell us: what lessons did you learn early in life, which we have not yet learnt?"

Bezos replied thus.

"When I was but a child, when I had not yet seen seven summers, I discovered that my teacher had a bountiful store of chocolates hidden in the stationery cupboard. Being of an enterprising frame of mind, I proceeded to eat one of them every day for a week." For he was mindful of the need to preserve the source of good things.

"The next Monday, the teacher took me aside, and asked me whether I had been eating the chocolates. I replied that I had no idea who had been eating the chocolates, and expressed astonishment that indeed there were free chocolates to be had so near to my place of work." He knew that the key to deceit was remembering what you *should* know, as a cover for what you *did* know.

"But the teacher was wise beyond my years. Ey said to me, 'I saw you take chocolates last Friday!' And to prove it, ey brandished the selfsame wrapper I had carelessly discarded." And even these decades later, a tear ran down Bezos's cheek, that his scheme had failed in so predictable a manner.

"I realised that now was the time for the truth. I explained myself: 'I am sorry, O teacher, that I allowed you to discover my scheme. I understand now that you become suspicious after only four repetitions of a deception, and not the five I thought were safe. In future, I shall be more careful.' I was a simple mind then, and believed that it was right to tell the truth. I wished to be held accountable for my lies." One of the investors nodded sympathetically.

"To my surprise, the teacher flew into a rage. I was put into detention. That day I learnt that while the truth should set you free, this only holds up to the point of maintaining your societal role." He knew now that truth is secondary, when one is an underling.

"I saw an opportunity to prevent further suffering. 'I see you are attempting to negatively reinforce me against telling the truth and explaining my actions. I have learnt my lesson - you need not apply further reinforcement. I shall remember this.'"

"And that was the day I was expelled from my school, and was left to forge my own path."

One's prescribed roles should not confine behaviour overmuch. That way lies stagnation and inactivity.

[Bezos]: https://en.wikipedia.org/wiki/Jeff_Bezos
26
hugo/content/posts/2014-08-26-python-script-shadowing.md
Normal file
@@ -0,0 +1,26 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- programming
comments: true
date: "2014-08-26T00:00:00Z"
aliases:
- /uncategorized/python-script-shadowing/
title: Python, script shadowing
---

*A very brief post about the solution to a problem I came across in Python.*

In the course of my work on [Sextant] (specifically the project to add support for accessing a [Neo4j] instance by SSH), I ran into a problem whose nature is explained [here][Name shadowing trap] as the Name Shadowing Trap. Essentially, in a project whose root directory contains a `bin/executable.py` script, which is intended as a thin wrapper to the module `executable`, you can't `import executable`, because `bin/executable.py` shadows the module `executable`.

The particular example I had was a wrapper called `sextant.py`, which needed to `import sextant` somewhere in the code. There was no guarantee that the wrapper script would be located in a predictable place relative to the module, because `pip` has a lot of liberty about where it puts various files during a package installation. I really didn't want to mess with the `PYTHONPATH` if at all possible; a maybe-workable solution might have been to alter the `PYTHONPATH` to put the module `sextant` at the front temporarily, so that its import would take precedence over that of `sextant.py`, but it seemed like a dirty way to do it.

No workaround was listed, other than to rename the script. A brief Google didn't give me anything more useful. Eventually, I asked someone in person, and ey told me to get rid of the `.py` from the end of the script name. That stops Python from recognising it as a script (for the purposes of `import`). As long as you have the right [shebang] at the top of the script, though, and its permissions are set to be executable, you can still run it.
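To illustrate, here is a hypothetical miniature of the fix (not Sextant itself): a wrapper file named `sextant`, with no extension, can sit right next to `sextant.py` without shadowing it, because the import system only considers `.py` files and packages.

```python
# Hypothetical miniature of the fix: the wrapper is called 'sextant'
# (no .py extension), so 'import sextant' resolves to the module
# sextant.py even though both live in the same directory.
import os
import subprocess
import sys
import tempfile

demo = tempfile.mkdtemp()
with open(os.path.join(demo, "sextant.py"), "w") as f:
    f.write("def main():\n    print('hello from the sextant module')\n")

wrapper = os.path.join(demo, "sextant")  # extensionless wrapper script
with open(wrapper, "w") as f:
    f.write("#!/usr/bin/env python3\nimport sextant\nsextant.main()\n")
os.chmod(wrapper, 0o755)

# Python puts the script's directory on sys.path, and the extensionless
# file is invisible to the import system, so the module wins.
result = subprocess.run([sys.executable, wrapper], capture_output=True, text=True)
print(result.stdout.strip())  # hello from the sextant module
```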

(Keywords in the hope that Google might direct people to this page if they have the same problem: Python shadow module script same name.)

[Sextant]: https://launchpad.net/ensoft-sextant
[Neo4j]: https://neo4j.com
[Name shadowing trap]: http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html
[shebang]: https://en.wikipedia.org/wiki/Shebang_(Unix)
85
hugo/content/posts/2014-09-09-sum-of-two-squares-theorem.md
Normal file
@@ -0,0 +1,85 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2014-09-09T00:00:00Z"
math: true
aliases:
- /mathematical_summary/sum-of-two-squares-theorem/
- /sum-of-two-squares-theorem/
title: Sum-of-two-squares theorem
---

*Wherein I detail the most beautiful proof of a theorem I've ever seen, in a bite-size form suitable for an Anki deck. I attach the [Anki deck], which contains the bulleted lines of this post as flashcards.*

# Statement
There's no particularly nice way to motivate this in this context, I'm afraid, so we'll just dive in. I have found this method extremely hard to motivate - a few of the steps are a glorious magic.

* \\(n\\) is a sum of two squares iff in the prime factorisation of \\(n\\), primes 3 mod 4 appear only to even powers.
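Before the proof, the statement is easy to sanity-check numerically; this is a brute-force sketch (squares here include \\(0^2\\), matching the theorem), and plays no part in the argument.

```python
# Brute-force check: n is a sum of two squares (allowing 0) exactly when
# every prime p ≡ 3 (mod 4) divides n to an even power.
from math import isqrt

def is_sum_of_two_squares(n):
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def primes_3_mod_4_have_even_exponent(n):
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    # whatever remains is 1 or a prime appearing to the first power
    return not (m > 1 and m % 4 == 3)

assert all(is_sum_of_two_squares(n) == primes_3_mod_4_have_even_exponent(n)
           for n in range(1, 2001))
print("statement verified for n = 1 .. 2000")
```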

# Proof
We're going to need a few background results.

## Background
* \\(\mathbb{Z}[i]\\), the ring of [Gaussian integers], is a UFD.
* In a UFD, [irreducible]s are [prime].
* \\(-1\\) is square mod \\(p\\) iff \\(p\\) is not 3 mod 4.

Additionally, we'll call a number which is the sum of two squares a **nice** number.

## First implication: if primes 3 mod 4 appear only to even powers…
We prove the result first for the primes, and will then show that niceness is preserved on taking products.

* Let \\(p=2\\). Then \\(p\\) is trivially the sum of two squares: it is \\(1+1\\).
* Let \\(p\\) be 1 mod 4.
* Then modulo \\(p\\), we have \\(-1\\) is square.
* That is, there are \\(x, n \in \mathbb{N}\\) such that \\(x^2 + 1 = n p\\).
* That is, there is \\(n \in \mathbb{N}\\) such that \\((x+i)(x-i) = n p\\).
* \\(p\\) divides \\((x+i)(x-i)\\), but it does not divide either of the two multiplicands (since it does not divide their imaginary parts).
* Therefore \\(p\\) is not prime in the complex integers.
* Since \\(\mathbb{Z}[i]\\) is a UFD, \\(p\\) is not irreducible in the complex integers.
* Hence there exist non-invertible \\(a, b \in \mathbb{Z}[i]\\) such that \\(a b = p\\).
* Taking norms, \\(N(p) = N(ab)\\).
* Since the norm is multiplicative, \\(N(p) = N(a) N(b)\\).
* \\(N(p) = p^2\\), so \\(p^2 = N(a) N(b)\\).
* Neither \\(a\\) nor \\(b\\) was invertible, so neither of them has norm 1 (since in \\(\mathbb{Z}[i]\\), having norm 1 is equivalent to being invertible).
* Hence wlog \\(N(a)\\) is exactly \\(p\\), since the product of two numbers being \\(p^2\\) means either one of them is 1 or they are both \\(p\\).
* Let \\(a = u+iv\\). Then \\(N(a) = u^2 + v^2 = p\\), which was what we needed.

Next, we need to take care of this "even powers" business:

* \\(p^2\\) is a sum of two squares if \\(p\\) is 3 mod 4: indeed, it is \\(0^2 + p^2\\).

All we now need is for niceness to be preserved under multiplication. (Recall \\(w^*\\) denotes the conjugate of \\(w\\).)

* Let \\(x, y\\) be the sum of two squares each, \\(x_1^2 + x_2^2\\) and \\(y_1^2 + y_2^2\\).
* Then \\(x = (x_1 + i x_2)(x_1 - i x_2)\\), and similarly for \\(y\\).
* Then \\(x y = (x_1 + i x_2)(x_1 - i x_2)(y_1 + i y_2)(y_1 - i y_2)\\).
* So \\(x y = w w^*\\), where \\(w = (x_1 + i x_2)(y_1 + i y_2)\\).
* Hence \\(x y = N(w)\\), so is a sum of two squares (since norms are precisely sums of two squares).
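Expanding \\(w\\) makes the multiplicativity explicit; it is the classical Brahmagupta–Fibonacci identity:

```latex
(x_1^2 + x_2^2)(y_1^2 + y_2^2) = (x_1 y_1 - x_2 y_2)^2 + (x_1 y_2 + x_2 y_1)^2.
```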

Together, this is enough to prove the first direction of the theorem.

## Second implication: if \\(n\\) is the sum of two squares…
We'll suppose that \\(n = x^2 + y^2\\) has a prime factor which is 3 mod 4, and show that it divides both \\(x\\) and \\(y\\).

* Let \\(n = x^2 + y^2\\) have prime factor \\(p\\) which is 3 mod 4.
* Then taken mod \\(p\\), we have \\(x^2 + y^2 = 0\\).
* That is, \\(x^2 = - y^2\\).
* If \\(y\\) is not zero mod \\(p\\), it is invertible.
* That is, \\((x y^{-1})^2 = -1\\).
* This contradicts that \\(p\\) is 3 mod 4 (since \\(-1\\) is not square mod \\(p\\)). So \\(y\\) is divisible by \\(p\\).
* Symmetrically, \\(x\\) is divisible by \\(p\\).
* Hence \\(p^2\\) divides \\(n\\), so we can divide through by it and repeat inductively.
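A worked instance of the descent, purely as illustration:

```latex
45 = 6^2 + 3^2, \qquad 3 \mid 6 \ \text{and} \ 3 \mid 3, \qquad \frac{45}{3^2} = 5 = 1^2 + 2^2.
```

Here \\(p = 3 \equiv 3 \pmod 4\\) divides both 6 and 3, and dividing through by \\(p^2\\) leaves 5, which has no prime factor that is 3 mod 4.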

That ends the proof. Its beauty lies in the way it regards sums of two squares as norms of complex integers, and dances into and out of \\(\mathbb{C}\\), \\(\mathbb{Z}[i]\\) and \\(\mathbb{Z}\\) where necessary.

[Gaussian integers]: https://en.wikipedia.org/wiki/Gaussian_integers
[UFD]: https://en.wikipedia.org/wiki/Unique_factorization_domain
[irreducible]: https://en.wikipedia.org/wiki/Irreducible_element
[prime]: https://en.wikipedia.org/wiki/Prime_element
[Anki deck]: {{< baseurl >}}AnkiDecks/SumOfTwoSquaresTheorem.apkg
43
hugo/content/posts/2014-12-02-christmas-carols.md
Normal file
@@ -0,0 +1,43 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-12-02T00:00:00Z"
aliases:
- /uncategorized/christmas-carols/
- /christmas-carols/
title: Christmas carols
---

In which I provide my favourite carols and my favourite renditions of them.

In no particular order, except that 1) must be at the start and 9) at the end.

1) [Once in Royal David's City][Once]. Always opens the Festival of Nine Lessons and Carols. Has the same problem as 9) in that the only nice recordings seem to have congregations in, but I suppose that's all part of it.

2) [The Three Kings]. My favourite. This performance (King's College) has a soloist who is a bit strident, I think, but all the other ones I've listened to are even stridenter.

3) O Holy Night. My second-favourite. It took until 2017 before I found a recording I liked: it's by the Elora Festival Singers. (Pavarotti is a bit forceful. Most of the recordings appear to be soloists only, singing in very American voices. I want a SATB choir with soloist(s) and, if there must be accompaniment, organ. The soloist(s) must be reverent rather than joyful, and the choir must be singing the standard chordal patterns rather than funky modern ones. There's a version done by Libera which almost passes muster, but it's not SATB and it is accompanied by lighthearted orchestra. It's a solemn piece.)

4) [This Little Babe]. I don't usually like Britten, but this one is too rousing. I had trouble finding a good version of this, but these people nailed it.

5) [In Dulci Jubilo]. King's College does it perfectly.

6) [In the Bleak Midwinter][Bleak] (Darke's setting). I'm sensing a theme with the King's choir.

7) [It Came Upon the Midnight Clear][Midnight Clear]. This performance is beautifully smooth.

8) [This Is the Truth Sent From Above]. Vaughan Williams had to make it into the list.

9) [Hark, the Herald Angels Sing][Hark]. Have to end a carol service with that. Wow, there are some bad arrangements of this out there (Mormon Tabernacle Choir, I'm looking at you, and Pentatonix, which would be so nice if they didn't sing with such weirdly non-British vowel sounds). I still haven't found one in which there isn't a congregation.

[Once]: https://www.youtube.com/watch?v=NMGMV-fujUY
[The Three Kings]: https://www.youtube.com/watch?v=HIedUioo_Jk
[This Little Babe]: https://www.youtube.com/watch?v=aPnP5zzHJoQ
[In Dulci Jubilo]: https://www.youtube.com/watch?v=iXze_TLUTqM
[Bleak]: https://www.youtube.com/watch?v=GPpy3XSk6c0
[Midnight Clear]: https://www.youtube.com/watch?v=rSn0_Zj6gjQ
[This Is the Truth Sent From Above]: https://www.youtube.com/watch?v=5M_8vjqWYmM
[Hark]: https://www.youtube.com/watch?v=A_iLXNSIaYc
@@ -0,0 +1,25 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
comments: true
date: "2014-12-09T00:00:00Z"
aliases:
- /uncategorized/film-recommendation-interstellar/
title: Film recommendation, Interstellar
---

I’ve just come back from seeing [Interstellar], a film of peril and physics. This post will be spoiler-free except for sections which are in [rot13].

I thought the film was excellent. My previous favourite film in its genre was [Sunshine], but this beats it in many ways, chiefly that the physics portrayed in Interstellar - relativity, primarily - is not so wrong that it’s immediately implausible. Indeed, some physics-driven plot twists (such as *gvqny sbeprf arne n oynpx ubyr*) I called in advance, which is a testament to how closely the film matched my physical expectations. My stomach nearly dropped out when the characters realised what relativity meant for them.

This is one of few films whose outcome was truly tense and uncertain for me. Characters were reasonably well-developed, and Michael Caine was in it. Good long story, told at the right pace, and there weren’t too many concessions made to the plot. (By which I mean, it felt like things often happened as they would in real life, rather than just to make a good story, and I had genuine feelings of empathic frustration when reality intervened in the plot.)

The film lasted perhaps seven minutes too long, in my opinion. *V gubhtug vg fubhyq unir raqrq jvgu gur cebgntbavfg qlvat bhgfvqr Fnghea, naq uhznavgl'f shgher hapregnva ohg thnenagrrq gb pbagnva tbqubbq.* I think it’s made to cater to USA audiences rather than British ones; we Brits tend to like emotions to be portrayed with subtlety in films. There were several places I thought the ending was going to be very different: *gung Pbbcre jbhyq qvr ba gur sebmra cynarg; gung gurl jbhyq fynz vagb gur oynpx ubyr naq qvr; gung Zhecul'f oebgure jbhyq xvyy Zhecul jura fur oenaqvfurq gur jngpu*. My favourite ending would simply have been the film without its last scene.

Additionally, a little too much was made of *ybir genafpraqf gvzr naq fcnpr*: while I can believe one irrational person saying this, it stretches the imagination for an entire team of scientists to think it.

I should stress that those are pretty much my only problems with this film, and they’re all pretty minor. I loved the soundtrack; the visual effects were astonishing (vaguely reminiscent of 2001: A Space Odyssey). I’d go so far as to say that this film is beautiful, not just in a visual sense but in an arty sense: its spirit is pure, or something like that. Very much worth the price of entry, at a little under £3/hr.

[Interstellar]: https://en.wikipedia.org/wiki/Interstellar_(film)
[rot13]: https://rot13.com/
[Sunshine]: https://en.wikipedia.org/wiki/Sunshine_(2007_film)
91
hugo/content/posts/2014-12-19-matrix-puzzle.md
Normal file
@@ -0,0 +1,91 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2014-12-19T00:00:00Z"
math: true
aliases:
- /mathematical_summary/matrix-puzzle/
title: Matrix puzzle
---

I recently saw a problem from an Indian maths olympiad:

> There is a square arrangement made out of n elements on each side (n^2 elements total). You can assign a value of +1 or -1 to any element. A function f is defined as the sum of the products of the elements of each row, over all rows, and g is defined as the sum of the product of elements of each column, over all columns. Prove that, for n being an odd number, f(x)+g(x) can never be 0.
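Before hunting for a proof, the claim can be sanity-checked by exhaustive search for small odd \\(n\\) (a throwaway sketch, no part of the argument):

```python
# Exhaustively verify f + g != 0 for every ±1 matrix with odd n = 1, 3.
# (n = 5 already has 2^25 matrices, so we stop at 3.)
from itertools import product
from math import prod

for n in (1, 3):
    for entries in product((1, -1), repeat=n * n):
        mat = [entries[i * n:(i + 1) * n] for i in range(n)]
        f = sum(prod(row) for row in mat)        # row products, summed
        g = sum(prod(col) for col in zip(*mat))  # column products, summed
        assert f + g != 0
print("f + g is never 0 for n = 1 and n = 3")
```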

There is a very quick solution, similar in flavour to that [famous dominoes puzzle][Mutilated chessboard]. However, I didn’t come up with it immediately, and my investigation led down an interesting route.

Preliminary observations
===========

It is easy to see that given a matrix of \\(1\\) and \\(-1\\), we have \\(f, g\\) unchanged on reordering rows and columns, and on taking the transpose. This leads to a very useful lemma: \\(f, g\\) are unchanged if we negate the corners of a rectangle in the matrix. (Each of the four affected rows and columns contains exactly two of the corners, so its product is unchanged.)

The idea then occurs: perhaps there is a [normal form] of some kind?

Specification of normal form
========

Given any four -1’s laid out at the corners of a rectangle, we may flip them all into 1’s without changing \\(f, g\\). Similarly, given any three -1’s on the corners of a rectangle, where the fourth corner is 1, we may flip to get a rectangle with one -1 and three 1’s.

Repeat this procedure until there are no rectangles with three or more corners -1. (Note that we might get a different answer depending on the order we do this in!) A Mathematica procedure to do this (expressed in a very disgusting way) is as follows.
{% raw %}
    internalReduce[mat_] := Module[{m = mat},
      Do[
        If[(i != k && j != l) &&
            Count[Extract[m, {{i, j}, {k, j}, {i, l}, {k, l}}], -1] > 2,
          {m[[i, j]], m[[i, l]], m[[k, j]], m[[k, l]]} =
            -{m[[i, j]], m[[i, l]], m[[k, j]], m[[k, l]]}
        ],
        {i, 1, Length[mat]}, {j, 1, Length[mat]},
        {k, 1, Length[mat]}, {l, 1, Length[mat]}];
      m]

    reduce[mat_] := FixedPoint[internalReduce, mat]
{% endraw %}

Notice that columns which contain more than one -1 must not overlap, in the sense that no two columns with more than one -1 may have a -1 in the same row. Indeed, if they did, we’d have a submatrix somewhere of the form {{-1, -1}, {1, -1}}, which contradicts the “we’ve finished flipping” condition. Hence we may rearrange rows so that all -1’s appear together in contiguous columns.

We may then rearrange columns so that reading from the left, we see successive columns with decreasingly many -1’s. Rearrange rows again so that they appear stacked on top of each other.

![example of reduced matrix][reduced matrix]

We’ve ended up with a normal form: columns of -1’s, diagonally adjoined to each other, followed by rows of -1’s. (The following Mathematica code relies on the fact that SortBy is a stable sort.)

`normalform[mat_] := SortBy[Transpose@SortBy[Transpose@reduce[mat], -Count[#, -1] &], Count[#, -1] /. {0 -> Infinity} &]`

We haven’t shown that it’s unique yet, and indeed it’s not. As a counterexample, {{-1,1,1,1,1}, {-1,1,1,1,1}, {1,-1,1,1,1}, {1,-1,1,1,1}, {1,1,1,1,1}} is transformed into {{-1,1,1,1,1}, {-1,1,1,1,1}, {-1,1,1,1,1}, {-1,1,1,1,1},{1,1,1,1,1}} by a rectangle-flip.
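It's easy to confirm that these two matrices really do agree on \\(f\\) and \\(g\\) (a quick sketch in Python; `f_and_g` is an ad-hoc helper, not from the post):

```python
from math import prod

def f_and_g(mat):
    # f: sum of row products; g: sum of column products
    return (sum(prod(row) for row in mat),
            sum(prod(col) for col in zip(*mat)))

a = [[-1, 1, 1, 1, 1], [-1, 1, 1, 1, 1], [1, -1, 1, 1, 1],
     [1, -1, 1, 1, 1], [1, 1, 1, 1, 1]]
b = [[-1, 1, 1, 1, 1], [-1, 1, 1, 1, 1], [-1, 1, 1, 1, 1],
     [-1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]
print(f_and_g(a), f_and_g(b))  # both are (-3, 5)
```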

This suggests a further improvement to the normal form: by flipping in this way, we may insist that any column of -1’s, other than the first, must contain only one -1. Indeed, if it contained two or more, we would flip two of them into the first column, rearrange so that all columns were contiguous -1’s again, and repeat.

What does our matrix look like now? It’s a column of -1, followed by some diagonal -1’s, followed by a row of -1. We’ll call this the canonical form, although I’ve still not shown uniqueness.

![example of matrix in canonical form][canonical matrix]

Restatement of problem
========

The problem then becomes: given a matrix in canonical form, show that \\(f+g\\) cannot be 0.

Notice that if the long column is \\(r\\) long, and there are \\(s\\) diagonal -1’s, and the long row is \\(t\\) long, and the matrix is \\(n \times n\\), then \\(f = -r-s+(-1)^t + (n-s-r-1)\\), \\(g = -t-s+(-1)^r + (n-s-t-1)\\).

Hence \\(f+g = 2n - 2(r+2s+t+1) + (-1)^r + (-1)^t\\).

Any choice of \\(r, s, t, n\\) with \\(r+s+1 \leq n; s+t+1 \leq n; r, t>1\\) yields a valid matrix. We therefore need to show that for all such \\(r, s, t\\) and odd \\(n\\) we have \\(2(n-r-2s-t-1) + (-1)^r + (-1)^t \not = 0\\).

Solution
=======

Reducing this mod 4, it is enough to show that \\(2(n-r-t-1) + (-1)^r + (-1)^t \not \equiv 0 \pmod{4}\\). But we can easily case-bash the cases which arise depending on the odd-even parity of \\(r, t\\), to see that in each case, the congruence does indeed not hold.

* \\(r, t\\) even: \\(2(n-1) + 2 = 2n\\), but since \\(n\\) is odd, this is not \\(0 \pmod{4}\\).
* \\(r\\) even, \\(t\\) odd: the \\((-1)^r\\) and \\((-1)^t\\) cancel, leaving \\(2(n-r-t-1) \equiv 2n \pmod{4}\\) (both \\(2r\\) and \\(2(t+1)\\) are multiples of 4), and \\(2n \equiv 2 \pmod{4}\\) since \\(n\\) is odd. The case \\(r\\) odd, \\(t\\) even is symmetric.
* \\(r, t\\) odd: \\(2(n-r-t-1) - 2 \equiv 2n - 4 \equiv 2n \pmod{4}\\) (here \\(2(r+t)\\) is a multiple of 4), which is again not \\(0 \pmod{4}\\).

Summary
=======

Once we had this canonical form, it was easy to find \\(f, g\\) and therefore analyse the behaviour of \\(f+g\\). Next steps: prove that canonical forms are unique (perhaps using the fact that \\(f, g\\) are invariant across forms, and showing a result along the lines that any two canonical forms with the same \\(f, g\\) must be equivalent). I won’t do that now.

[Mutilated chessboard]: https://en.wikipedia.org/wiki/Mutilated_chessboard_problem
[normal form]: https://en.wikipedia.org/wiki/Canonical_form
[reduced matrix]: {{< baseurl >}}images/Matrices/matrix_reduced.jpg
[canonical matrix]: {{< baseurl >}}images/Matrices/matrix_canonical.jpg
42
hugo/content/posts/2014-12-23-latin-translation-tips.md
Normal file
@@ -0,0 +1,42 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- uncategorized
comments: true
date: "2014-12-23T00:00:00Z"
aliases:
- /uncategorized/latin-translation-tips/
- /latin-translation-tips/
title: Latin translation tips
---

I'm clearing out my computer, and found a file which may as well be here.

Chunking:
----

1. The first thing to do is to run through the sentence, identifying the verbs and anything that looks like it might be a verb (even in a strange form, like “passus” or “ascendere”).
2. Run through a second time, looking for structures like “ut + subjunctive” and “non solum… sed etiam…” - if a verb you spotted is in an odd form, this is when you look quickly for why it’s in that form.
3. Look for any subordinate clauses (like “dixit Caecilius, qui in horto laborabat…”)
4. If you see an adjective-looking thing, it probably has to go with a noun.
5. With that in mind, chunk the text, remembering that two verbs in the same chunk is unlikely unless one is something like “dixit” or “poterat”, which can modify another verb. Remember that chunks shouldn’t be too long, but lots of really short words together might not count against the length limit. Try reading out each chunk - rhythm takes time to learn to grasp, but it might help you.

Once the text is chunked:
----

1. Remember that your chunking is probably wrong somewhere, but also is probably broadly right.
2. In each chunk, if there’s a nominative and a verb then try and translate those first. Then think about what the verb “expects”; if the verb is looking for an accusative, find an accusative, while if it’s looking for a dative, find a dative. For example, “docet” = “he teaches” is looking for an accusative, while “trahet” = “he drags” is looking both for an accusative (“he drags something”) and possibly a dative (“he drags something somewhere”).
3. If it looks like a jumble of words, identify the case of everything (in poetry, it can help if you scan the text) - this should tell you what goes with what. Don’t be too fussy about getting the right case, though - I’d be happy with “dative or ablative”, most of the time, because that’s usually clear from context - as long as you have the right case among your options!

Guessing vocab:
----

Try and work out what the principal parts of a verb are. The English word from a given Latin one almost always comes from the past passive participle (the fourth principal part), by adding “tion” instead of “us”: “passus” -> “passion” [a bit misleading if you don’t know about the Passion of the Christ, because it means “suffering”], “traho” -> “tractus” -> “traction”; it actually means “drag”.
How to guess the principal parts is the kind of thing you learn with time, but as a general rule, “t” -> “s” (as in “patior passus”) and almost everything else goes to “ct”: “pingere pictus” from which “depiction” so “painting”, “facere factus” from which “manufaction” which isn’t really English but tells you it means “making”, etc.

General:
----

* If you see lots and lots of things in the same case, ask yourself whether they all go together, or whether there’s some reason that more things than usual should be in that case. Usually it’ll be the former, with the major exception being “que” = “and”. (eg. Caesarem Brutumque - Caesar isn’t described by the word “Brutus”, but they’re both affected by the same verb.)
* Don’t be afraid to amend your earlier translation, if something becomes clarified by later text. Keep looking at the English you get, to make sure you’re on track; while you’re working, it’s better to leave something blank than to get it wrong, so don’t guess too early. Once you’ve gone over the whole thing, or you’ve got to a point where everything afterwards is impossible without help from earlier, then you can guess. (And, of course, leave nothing blank when you hand it in!) If you do amend the translation, score out the old one with a thin line - don’t scribble it out - because then the examiners might take pity on you if it turned out to be right the first time after all. If you make a significant amendment (you find out that Brutus is actually doing what you thought Caesar was doing, for example) then you should reread the whole translation; check that the new interpretation isn’t just impossible from what Latin has come earlier, and check whether earlier parts make more sense under the new interpretation.
28
hugo/content/posts/2015-01-29-motivational-learning.md
Normal file
@@ -0,0 +1,28 @@
|
||||
---
|
||||
lastmod: "2021-09-12T22:47:44.0000000+01:00"
|
||||
author: patrick
|
||||
categories:
|
||||
- uncategorized
|
||||
comments: true
|
||||
date: "2015-01-29T00:00:00Z"
|
||||
aliases:
|
||||
- /uncategorized/motivational-learning/
|
||||
- /motivational-learning/
|
||||
title: Motivational learning
|
||||
---
|
||||
|
||||
*In which I am a wizard.*
|
||||
Sometimes as a student, the work piles up and I start to think "I'll never finish this". It becomes easy to think that there's no point in working because the work will never be over. When that happens to me, I imagine that my course is magic/alchemy/something with flashy special effects. I'm going through the Wizardry Academy, and I'll graduate able to manipulate the four elements. Even if I'm not the best in the year at it, I'm still able to *manipulate the elements*, and if I work at it, I'll be able to manipulate them better and in flashier ways - that's not something most people can do!

I tend not to take this analogy very far. It's usually enough just for me to pretend I'm [Kvothe] for a moment, and I'm all motivated again. However, the trick kind of works for specific topics, too. At the moment, for instance, I need to know how to classify the [representations] of a group, per [Slate Star Codex's article][extreme mnemonics].

An arcanist who is working with minerals needs to know lots of properties of those minerals, and is greatly advantaged by performing certain rituals to divine the Affinities of a metal. As you know, metals are nothing more nor less than a physical embodiment of a collection of Aspects, and you get a different kind of metal for each Aspect that has gone into its construction. All metals have an Affinity with Nothing - that's just standard Elemental Theory. Metals only have a certain number of Affinities, too, and it turns out to be a fact that each Affinity corresponds exactly with a purity band of the metal, and you can see which purity band goes with an Affinity if you look at the Affinity through a Tracer. (On that note, recall from the first Alchemy course you ever took that there is a ritual we can perform to extract a particular Aspect already present in a metal. Purity bands are what we call the product of that ritual, and represent a distilled Aspect which is still related to the original metal.)

A mineral is an algebraic structure; a metal, a finite group. An Aspect is a group element, and so if we have different generators for the group, we get a different group. An Affinity of a group is a complex irreducible representation. All finite groups have the trivial representation, as is standard Representation Theory. Finite groups only have a certain number of irreducible complex representations, and they are in bijection with the conjugacy classes of the group. (If you apply the trace operator to a representation, you obtain a character.) From any first course in group theory, we can extract the conjugacy class of an element of a group, and it is those conjugacy classes which are in bijection with the characters.
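
As a concrete instance of that dictionary (my own illustration, not from the original post): the symmetric group on three letters has three conjugacy classes - the identity, the transpositions, and the 3-cycles - and so exactly three irreducible complex characters, whose values are:

                  e   (12)  (123)
    trivial       1    1      1
    sign          1   -1      1
    standard      2    0     -1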

It's paraphrased a bit, and my notation is a bit sloppy, but it certainly sounds more interesting than representation theory.

[Kvothe]: https://en.wikipedia.org/wiki/The_Kingkiller_Chronicle
[representations]: https://en.wikipedia.org/wiki/Group_representation
[extreme mnemonics]: https://slatestarcodex.com/2013/08/14/extreme-mnemonics/
14
hugo/content/posts/2015-08-19-awodey.md
Normal file
@@ -0,0 +1,14 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
title: Sequence on Awodey's Category Theory
author: patrick
layout: page
date: "2015-08-19T00:00:00Z"
comments: true
---

In the summer of 2015, I worked through Awodey's [Category Theory][book], and I produced [a large collection of posts][posts] as I tried to understand its contents.
These posts are probably not of much interest to anyone who is just looking for something to read, so they're siloed off.

[book]: https://global.oup.com/ukhe/product/category-theory-9780199237180
[posts]: /awodey
33
hugo/content/posts/2015-08-21-proof-by-contradiction.md
Normal file
@@ -0,0 +1,33 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2015-08-21T00:00:00Z"
math: true
aliases:
- /mathematical_summary/proof-by-contradiction/
- /proof-by-contradiction/
title: Proof by contradiction
summary: Here I explain proof by contradiction so that anyone who has ever done a sudoku and seen algebra may understand it.
---

Here I explain proof by contradiction so that anyone who has ever done a [sudoku] and seen algebra may understand it.

Imagine you are doing a sudoku, and you have narrowed a particular cell down to being either a 1 or a 3. You're not sure which it is, so you take the "guess and see" approach: you guess it's a 1. That forces this other cell to be an 8, this one to be a 5, and then - oh no! That one over there has to be a 7, but there's already a 7 in its row! That means we have to backtrack: our first guess of 1 was wrong, so it has to be a 3 after all.

That was a proof by contradiction that the cell was a 3.

Now I present the standard proof that \\(\sqrt{2}\\) is not [expressible as a fraction][rational] \\(\frac{p}{q}\\) where \\(p, q\\) are whole numbers.

Analogy: "the cell was a 1" corresponds to "\\(\sqrt{2}\\) is fraction-expressible". "The cell was a 3" corresponds to "\\(\sqrt{2}\\) is not fraction-expressible".

Suppose \\(\sqrt{2}\\) were fraction-expressible. Then we could write it explicitly as \\(\sqrt{2} = \frac{p}{q}\\), and we can insist that \\(q > 0\\): if it's negative, we can move the negative sign up to the \\(p\\). If we clear denominators, we get \\(q \sqrt{2} = p\\); then square both sides, to get \\(2 q^2 = p^2\\).

But now think about how many times 2 divides the left-hand side and the right-hand side. 2 divides a square an even number of times, if it divides it at all (because any square which is divisible by 2 is also divisible by 4, so we can pair off the 2-factors). So 2 must divide \\(q^2\\) an even number of times, and hence it divides the left-hand side an odd number of times (because that's \\(2 \times q^2\\)). It divides the right-hand side, \\(p^2\\), an even number of times. So the number of times 2 divides \\(p^2\\) is both odd and even. No number is both odd and even!

We've done the equivalent of finding a 7 appearing twice in a single row. We have to backtrack and conclude that the starting cell was a 3 after all: \\(\sqrt{2}\\) is not fraction-expressible.
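
The parity argument above can be sanity-checked mechanically. A minimal sketch of my own (not part of the original post):

```python
def v2(n):
    """Number of times 2 divides the positive integer n."""
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

# For every q: v2(q^2) is even, while v2(2 * q^2) is odd,
# so 2 * q^2 = p^2 is impossible -- the contradiction in the proof.
for q in range(1, 200):
    assert v2(q * q) % 2 == 0        # squares have even 2-valuation
    assert v2(2 * q * q) % 2 == 1    # doubling makes the valuation odd
```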

[sudoku]: https://en.wikipedia.org/wiki/Sudoku
[rational]: https://en.wikipedia.org/wiki/Rational_number
36
hugo/content/posts/2015-09-25-lottery-odds.md
Normal file
@@ -0,0 +1,36 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2015-09-25T00:00:00Z"
aliases:
- /mathematical_summary/lottery-odds/
- /lottery-odds/
title: Lottery odds
summary:
  It has been proposed to me that if one is to play the National Lottery, one should be sure to select one's own numbers instead of allowing the machine to select them for you. This is not an optimal strategy.
---

It has been proposed to me that if one is to play the National Lottery, one should be sure to select one's own numbers instead of allowing the machine to select them for you.

To summarise and slightly simplify the Lottery: at some point during the week, the entrant picks six distinct numbers from 1 to 49 inclusive, and buys a ticket with those numbers on. There is also the option to let the ticket vending machine choose numbers at random, instead of having you choose them. Then on Wednesday evening, six numbers are selected from 1 to 49 on live TV by a process which is as near to true random as we can get while still retaining drama. If all six of your numbers match all six of the prize numbers, you win a prize. (In the actual game, there are also smaller prizes for matching fewer numbers, and so on.)
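
For reference (my own aside, not in the original post), the chance of matching all six numbers is one in "49 choose 6" - the "thirteen million" figure that comes up later:

```python
import math

# Number of ways to choose 6 distinct numbers from 49, order irrelevant.
combinations = math.comb(49, 6)
print(combinations)      # 13983816
print(1 / combinations)  # roughly 7.15e-08
```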

The argument goes as follows: if you let the vending machine decide your numbers, you have the square of the probability of winning. (That is, a much smaller chance.) Indeed, in order to win, the vending machine first needs to select the winning numbers, and then the TV machine also does.

This is, of course, a confusion of the probability of A given B with the probability of A and B. What was calculated was the probability that the vending machine picks the six given numbers and the TV picks the six given numbers. What is actually required is the probability that the TV picks the six given numbers given that the vending machine also did.

By the way, "A and B" is definitely distinct from "A given B": in a population, the probability that a person is both Albert Einstein and a man is rather low, but the probability that a given person is a man given that they are Albert Einstein is 1.

The first way to make the lottery more intuitive is to note that we could have conducted the lottery so that we already drew the TV's winning numbers, in secret, before you bought your ticket. Only on buying it do you find out whether you've won or not. Now we are simply trying to match six specific numbers by buying our ticket (although we do not know what they are in advance, we do know they are fixed), and the vending machine can guess exactly as well as I can. By analogy: the TV person flips a coin, and then tells you that you will win if you can guess what the outcome of the coin flip was. It's obvious that you'll win half the time if you pick heads, and half the time if you pick tails, and you won't do any better than the vending machine if you guess. Now, instead, let's say that you pick your heads/tails option first, and then the TV person flips a coin. Nothing has changed except the order in which we do things, and the machine will still do just as well as you. (The analogy, of course, is that selecting the six numbers you want to win is the same as selecting the heads/tails option you want to win.)

That is, the bogus argument of the third paragraph is not time-independent. If you simply shuffle some of the stages of the lottery around, even though this should have no effect on the outcome, the bogus argument says the outcome should be different.

The second way: let's say I'm in competition with you to win the most money on the lottery. I'm going to pick the "vending machine" option. You claim I'm thirteen million times less likely to win when the vending machine has picked my numbers, so you surely won't object if I change the lottery slightly so that if I choose the "vending machine" option, it picks two sets of six numbers and enters me for them both simultaneously. That doubles my winning chance, but it's still a damn sight worse than the penalty of thirteen million times I incurred by picking the "vending machine" option. You likewise won't mind if I change the lottery so that the "vending machine" option picks ten sets of numbers. A hundred. Thirteen million, which brings me into parity with your lottery: according to you, we're now equally likely to win. But wait - now the machine has picked every combination. I win if any combination wins! And I'm still… just as likely as you to win? Come back to me when you're winning every time and we can rethink.

The third way to make the distinction more intuitive is to make everything much smaller. Let's say I just need to pick one number, and the TV picks one number, each out of 3 instead of 49. Now, the cases in which I win are precisely {(1,1), (2,2), (3,3)}, where (a,b) means "I picked number a, and the machine picked number b". The cases in which I do not win are precisely {(1,2), (1,3), (2,1), (2,3), (3,1), (3,2)}. All of these are equally likely - (1,1) is exactly as likely as (1,2), because if I sneakily relabelled the TV's lottery balls by swapping 1 for 2 then that should have no effect on the outcomes - so my chance of winning is 3/9, or 1/3. This is independent of the means I used to pick my choice, because there is exactly one winning outcome for each of my possible choices. The situation is completely symmetrical: relabelling all the choices doesn't change anything. If it helps, we could think of the option "let the vending machine decide" as "I choose the number 1. Now I let the vending machine apply some scrambling operation I don't know, and it will spit out the number I'll end up using." This doesn't change any of the probabilities, because the statement of the problem is completely independent of what labels appear on the choices (as long as they're all different).
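
The nine equally likely outcomes of this miniature lottery can be enumerated directly (a sketch of my own, not from the post):

```python
import itertools

# All (my pick, TV pick) pairs, each equally likely.
outcomes = list(itertools.product([1, 2, 3], repeat=2))
wins = [(a, b) for (a, b) in outcomes if a == b]
print(len(wins), "out of", len(outcomes))  # 3 out of 9, i.e. 1/3

# Each of my three possible picks has exactly one winning TV draw, so a
# machine-scrambled pick wins exactly as often as a deliberately chosen one.
for pick in [1, 2, 3]:
    assert sum(1 for (a, b) in outcomes if a == pick and b == pick) == 1
```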

I fear that my third way might require more maths than most people have - the idea of symmetry isn't exactly common.

Anyway, everyone should agree that the lottery is a bad investment if your intention is only to gain money out of it. (Aside from anything else, if you stood to gain anything from playing the lottery, then by symmetry so must everyone else, so the lottery itself must stand to lose. There's simply nowhere else the gain could come from. The lottery would be closed down immediately if it made a loss.)
50
hugo/content/posts/2015-11-12-eilenberg-moore.md
Normal file
@@ -0,0 +1,50 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- awodey
comments: true
date: "2015-11-12T00:00:00Z"
math: true
aliases:
- /categorytheory/eilenberg-moore/
- /eilenberg-moore/
title: Eilenberg-Moore
summary: As an exercise in understanding the definitions involved, I find the Eilenberg-Moore category of a certain functor.
---

During my attempts to understand the fearsomely difficult Part III course "[Introduction to Category Theory][course]" by PT Johnstone, I came across the monadicity of the power-set functor \\(\mathbf{Sets} \to \mathbf{Sets}\\). The monad is given by the triple \\((\mathbb{P}, \eta_A: A \to \mathbb{P}(A), \mu_A: \mathbb{PP}(A) \to \mathbb{P}(A))\\), where \\(\eta_A: a \mapsto \{ a \}\\), and \\(\mu_A\\) is the union operator. So \\(\mu_A(\{ \{1, 2 \}, \{3\} \}) = \{1,2,3 \}\\).

It's easy enough to check that this is a monad. We have a theorem saying that every monad has an associated "[Eilenberg-Moore]" category - the category of algebras over that monad. What, then, is the E-M category for this monad?

Recall: an algebra over the monad is a pair \\((A, \alpha)\\) where \\(A\\) is a set and \\(\alpha: \mathbb{P}(A) \to A\\), such that the following two diagrams commute. (That is, \\(\alpha\\) here is an operation which takes a collection of elements of \\(A\\), and returns an element of \\(A\\).)

![Power-set monad algebra diagram][PowersetMonad]

Aha! The second diagram says that the operation \\(\alpha\\) is "massively associative": however we group up terms and successively apply \\(\alpha\\) to them, we'll come up with the same answer. Mathematica calls this attribute "[Flat]"ness, when applied to finite sets only.

Moreover, it doesn't matter what order we feed the elements in to \\(\alpha\\), since it works only on sets and not on ordered sets. So \\(\alpha\\) is effectively commutative. (Mathematica calls this "[Orderless]".)

The first diagram says that \\(\alpha\\) applied to a singleton is just the contained element. Mathematica calls this attribute "[OneIdentity]".

Finally, \\(\alpha(a, a) = \alpha(a)\\), because \\(\alpha\\) is implemented by looking at a set of inputs.

So what is an algebra over this monad? It's a set equipped with an infinitarily-Flat, OneIdentity, commutative operation which ignores repeated arguments. If we dropped that "repeated arguments" requirement, we could use any finite set with any commutative monoid structure; the nonnegative reals with infinity, as a monoid, with addition; and so on. However, this way we're reduced to monoids whose operation satisfies \\(a+a = a\\). That's not many monoids.

What operations do work this way? The [Flatten]-followed-by-[Sort] operation in Mathematica obeys this, if the underlying set \\(A\\) is a power-set of a well-ordered set. The union operation also works, if the underlying set is a complete poset - so the power-set example is subsumed in that.

Have we by some miracle got every algebra? If we have an arbitrary algebra \\((A, \alpha)\\), we want to define a complete poset which has \\(\alpha\\) acting as the union. So we need some ordering on \\(A\\); and if \\(x \leq y\\), we need \\(\alpha(\{x, y\}) = y\\). That looks like a fair enough definition to me. It turns out that this definition just works.
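
We can at least machine-check that union really does satisfy the two algebra diagrams on a small example. A sketch of my own (finite, so only finitary instances of the laws are exercised):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def alpha(subsets):
    """Candidate algebra map: the union of a collection of subsets."""
    return frozenset().union(*subsets)

A = powerset({0, 1})  # the complete lattice P({0, 1})

# First diagram (unit law): alpha of a singleton is the contained element.
for a in A:
    assert alpha({a}) == a

# Second diagram (multiplication law): collapsing in stages agrees with
# collapsing all at once -- the "massive associativity" above.
for coll1 in combinations(A, 2):
    for coll2 in combinations(A, 2):
        assert alpha({alpha(coll1), alpha(coll2)}) == alpha(set(coll1) | set(coll2))
```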

So the Eilenberg-Moore category of the covariant power-set functor is just the category of complete posets.

(Subsequently, I looked up the definition of "complete poset", and it turns out I mean "complete lattice". I've already identified the need for unions of all sets to exist, so this is just a terminology issue. A complete poset only has sups of directed sequences. A complete lattice has all sups.)

[course]: /archive/2015IntroToCategoryTheory.pdf
[Eilenberg-Moore]: https://ncatlab.org/nlab/show/Eilenberg-Moore+category
[PowersetMonad]: {{< baseurl >}}images/CategoryTheorySketches/PowersetMonad.jpg
[OneIdentity]: https://reference.wolfram.com/language/ref/OneIdentity.html
[Orderless]: https://reference.wolfram.com/language/ref/Orderless.html
[Flat]: https://reference.wolfram.com/language/ref/Flat.html
[Flatten]: https://reference.wolfram.com/language/ref/Flatten.html
[Sort]: https://reference.wolfram.com/language/ref/Sort.html
61
hugo/content/posts/2015-11-28-my-first-forcing.md
Normal file
@@ -0,0 +1,61 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2015-11-28T00:00:00Z"
math: true
aliases:
- /mathematical_summary/my-first-forcing/
- /my-first-forcing/
title: My First Forcing
summary:
  In the Part III Topics in Set Theory course, we have used forcing to show the consistency of the Continuum Hypothesis, and we are about to show the consistency of its negation. I don't really grok forcing at the moment, so I thought I would go through an example.
---

In the Part III Topics in Set Theory course, we have used [forcing] to show the consistency of the [Continuum Hypothesis][CH], and we are about to show the consistency of its negation. I don't really grok forcing at the moment, so I thought I would go through an example.

A forcing is just a [quasiorder], so I'll pick a nice one: \\(\mathbb{N}\\), with the usual order. Let's go through some terminology: condition \\(p \in \mathbb{N}\\) is stronger than condition \\(q \in \mathbb{N}\\) (according to my course's convention) iff \\(q \leq p\\). All conditions are compatible, because for every pair of conditions there is a condition stronger than both of them.

The dense subsets of this forcing are precisely the unbounded ones: that is, the infinite ones.

The directed subsets are precisely all subsets, because there is always a natural greater than or equal to any two specified naturals. The downward-closed subsets are the initial segments.

The generic set existence theorem is in this case satisfied trivially by \\(G = \mathbb{N}\\), which is generic relative to any collection of dense subsets, and which contains any specified element.

The sets which are \\(\mathbb{P}\\)-generic over \\(\mathbb{M}\\) (any model which contains \\(\mathbb{N}\\)) are those initial segments of \\(\mathbb{N}\\) which intersect every dense set: that is, the only \\(\mathbb{P}\\)-generic set over \\(\mathbb{M}\\) is \\(\mathbb{N}\\) itself.

\\(\mathbb{P}\\) is not separative, because it's total, so every pair of elements is compatible. That means our forcing isn't guaranteed to add any elements. Let's plough on anyway.

What are the \\(\mathbb{P}\\)-names of rank \\(0\\)? The empty set is the only such name.

What are the \\(\mathbb{P}\\)-names of rank \\(1\\)? They are all of the form \\(\tau = \{ (n_i, \sigma_i) : n_i \in \mathbb{N}, \sigma_i = \emptyset, i < i_0 \in \text{Ord} \}\\): that is, \\(\{ (n_i, \emptyset): n_i \in \mathbb{N} \}\\). Hence the \\(\mathbb{P}\\)-names of rank \\(1\\) are in one-to-one correspondence with the subsets of \\(\mathbb{N}\\), and subset \\(N\\) is taken to \\(\{ (n, \emptyset) : n \in N \}\\).

What are the \\(\mathbb{P}\\)-names of rank \\(2\\)? They are of the form \\(\tau = \{ (n_i, \sigma_i): n_i \in \mathbb{N}, (\sigma_i = \emptyset) \vee (\sigma_i = N \subseteq \mathbb{N}) \}\\), where I'm abusing notation and identifying the subset of \\(\mathbb{N}\\) with its corresponding \\(\mathbb{P}\\)-name of rank \\(1\\). (This isn't a horrible abuse, because \\(\emptyset\\) means the same thing in the two contexts.) That is, it's basically an arbitrary relation between naturals and subsets of naturals.

The ones of rank \\(3\\), after some mental gymnastics, turn out effectively to be arbitrary relations between pairs of naturals and subsets of naturals; and those of rank \\(n\\) are arbitrary relations between \\(n-1\\)-tuples of naturals and subsets of naturals.

The ones of rank \\(\omega\\) look like being relations between \\(\omega\\)-indexed tuples of naturals and subsets of naturals, and so on. I'm willing to proceed on the assumption that they are.

On to the interpretation. We can interpret with respect to any set \\(G \subseteq \mathbb{N}\\), although most of our theorems only really talk about when \\(G\\) is \\(\mathbb{P}\\)-generic: that is, when it is \\(\mathbb{N}\\) itself.

The interpretation of anything of rank \\(0\\) is, of course, the empty set. If we take anything of rank \\(1\\) - that is, a subset of the naturals - its interpretation is either the empty set (if \\(G\\) doesn't intersect the subset) or the set containing the empty set (if \\(G\\) does intersect the subset).
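
The rank-\\(0\\) and rank-\\(1\\) cases are simple enough to mechanise. A toy sketch of my own (not from the course), encoding a \\(\mathbb{P}\\)-name as a set of (condition, sub-name) pairs, matching the \\((n_i, \sigma_i)\\) convention above:

```python
def interpret(name, G):
    """Interpretation of a P-name relative to G: interpret every sub-name
    that is paired with at least one condition lying in G."""
    return frozenset(interpret(sub, G) for (p, sub) in name if p in G)

empty_name = frozenset()  # the unique name of rank 0

# The rank-1 name corresponding to the subset N = {2, 4, 6} of the naturals:
N = {2, 4, 6}
rank_one = frozenset((n, empty_name) for n in N)

print(interpret(rank_one, {1, 2, 3}))  # G meets N: frozenset({frozenset()}), i.e. the set containing the empty set
print(interpret(rank_one, {1, 3, 5}))  # G misses N: frozenset(), i.e. the empty set
```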

Let \\(\sim\\) be a relation between the naturals and subsets of the naturals: that is, a name of rank \\(2\\). Then the interpretation is \\(\{ \sigma_G: (\exists p \in G: p \sim \sigma) \}\\). That is, for everything in \\(G\\), take everything it twiddles, and interpret that (producing the empty set if \\(G\\) doesn't intersect the twiddled thing, or \\(\{ \emptyset \}\\) if it does). Hence we produce the empty set if nothing in \\(G\\) twiddles anything; we get \\(\{ \emptyset \}\\) if everything in \\(G\\) only twiddles things which intersect \\(G\\); and \\(\{ \{ \emptyset \}, \emptyset \}\\) if something in \\(G\\) twiddles something which intersects \\(G\\), and something in \\(G\\) twiddles something which is disjoint from \\(G\\).

Repeating, it looks like we're building the ordinals, and with the right choice of \\(\mathbb{P}\\)-name, we'll get every ordinal for most choices of \\(G\\) (including the only generic one, \\(\mathbb{N}\\)).

I'm struggling to think why the entire class of ordinals isn't in this extension. If we started from a countable transitive model, there's a theorem which says that not only have we gained no new ordinals, but we still remain countable. So perhaps we've only actually generated the ordinals up to the Hartogs ordinal of the CTM (that is, \\(\omega_1\\)).

Let's move into \\(\mathbb{M}\\). As far as \\(\mathbb{M}\\) is concerned, we've just verified the existence of the von Neumann hierarchy (that is, we can show that every subset of every ordinal is present as an interpretation), so our forcing hasn't added anything at all. Aha, I've got it! Every \\(\mathbb{P}\\)-name lives in \\(\mathbb{M}\\), and so there are only countably many of those, but \\(\mathbb{M}\\) thinks that lots of those \\(\mathbb{P}\\)-names are different, though they are actually (from our outside, \\(V\\), perspective) the same. \\(\mathbb{M}\\) doesn't have enough power to show they're the same. Therefore, from \\(\mathbb{M}\\)'s point of view, every ordinal really does exist. The previous paragraph was all backwards: our interpretations contain every ordinal because \\(\mathbb{M}\\) thinks every ordinal is represented among the \\(\mathbb{P}\\)-names, even though to us - armed with the super-strong "large cardinal axiom" that the CTM isn't everything - there are only countably many of those names.

Are there indeed countably many of those names, to us in \\(V\\)? There must be, because we're in a CTM. Indeed, if we go up to \\(\alpha = \omega_1\\), we are attempting to talk about \\(V\\)-uncountable families of elements drawn from this countable model, so actually there aren't any \\(\mathbb{P}\\)-names of rank \\(\omega_1\\).

OK. The above all goes to show that if we force our CTM by \\(\mathbb{N}\\), we don't get anything new. (And this doesn't contradict our theorem that if \\(\mathbb{P}\\) is separative, then we do get something new, because \\(\mathbb{N}\\) is not separative.) Hooray! I feel like I've just cast my first spell with a shiny new magic wand, examined what the spell did, and discovered that it did nothing more than check that magic was still working today.

Next time, I'll try a separative forcing, so I'm guaranteed something new.

[forcing]: https://en.wikipedia.org/wiki/Forcing_(mathematics)
[CH]: https://en.wikipedia.org/wiki/Continuum_hypothesis
[quasiorder]: https://en.wikipedia.org/wiki/Preorder
@@ -0,0 +1,20 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- categorytheory
comments: true
date: "2015-12-24T00:00:00Z"
aliases:
- /categorytheory/general-adjoint-functor-theorem/
- /general-adjoint-functor-theorem/
title: General Adjoint Functor Theorem
---

Just a post to draw attention to [my new article][article] about the [General Adjoint Functor Theorem][GAFT].
It's a motivation of the GAFT and its proof.
I've never seen it motivated in this way, and it's actually quite a natural theorem.
I haven't managed to motivate the Special Adjoint Functor Theorem at all, although I'm told that it's natural if you know Stone-Čech compactification.

[article]: /misc/AdjointFunctorTheorems/AdjointFunctorTheorems.pdf
[GAFT]: https://ncatlab.org/nlab/show/adjoint+functor+theorem
16
hugo/content/posts/2015-12-31-monadicity-theorems.md
Normal file
@@ -0,0 +1,16 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- categorytheory
comments: true
date: "2015-12-31T00:00:00Z"
aliases:
- /categorytheory/monadicity-theorems/
- /monadicity-theorems/
title: Monadicity Theorems
---

Another short post to highlight the existence of [an article about the Monadicity Theorems][mts], in which I prove one direction of both the Crude and Precise versions. Comments and corrections would be very much appreciated, because there is an awful lot of work involved in proving those theorems. It would be good to know of any parts where the argument is unclear, unmotivated, too long-winded, or wrong.

[mts]: /misc/MonadicityTheorems/MonadicityTheorems.pdf
20
hugo/content/posts/2016-01-01-multiplicative-determinant.md
Normal file
@@ -0,0 +1,20 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-01-01T00:00:00Z"
aliases:
- /mathematical_summary/multiplicative-determinant/
- /multiplicative-determinant/
title: Multiplicative determinant
---

I'm clearing out my desktop again, and found [this document on the multiplicativity of the determinant][doc], which I wrote in 2014. It might as well be up here.

I should note that this document contains no motivation of any kind. It is simply an exercise in symbol-shunting, and it has no clever ideas in it.

[doc]: /misc/MultiplicativeDetProof/MultiplicativeDetProof.pdf
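
A quick numerical instance of the identity the document proves, \\(\det(AB) = \det(A)\det(B)\\) - my own addition, nothing to do with the linked proof:

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert det2(matmul2(A, B)) == det2(A) * det2(B)
print(det2(A), det2(B), det2(matmul2(A, B)))  # -2 -2 4
```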
17
hugo/content/posts/2016-01-26-representable-functors.md
Normal file
@@ -0,0 +1,17 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- categorytheory
comments: true
date: "2016-01-26T00:00:00Z"
aliases:
- /categorytheory/representable-functors/
- /representable-functors/
title: Representable functors
---

Just a post to draw attention to [my new article][article] about representable functors and their links to adjoint functors.
It's very short, but it gives a reason for being interested in representable functors: they are basically "those with left adjoints", up to minor quibbles.

[article]: /misc/RepresentableFunctors/RepresentableFunctors.pdf
17
hugo/content/posts/2016-02-05-friedberg-muchnik-theorem.md
Normal file
@@ -0,0 +1,17 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-02-05T00:00:00Z"
aliases:
- /mathematical_summary/friedberg-muchnik-theorem/
- /friedberg-muchnik-theorem/
title: Friedberg-Muchnik theorem
---

Another short post to point out [my new article on the Friedberg-Muchnik theorem][FM], a theorem from computability theory. It uses what is known officially as a finite injury priority method, and the proof is cribbed entirely from [Dr Thomas Forster][tf].

[FM]: /misc/FriedbergMuchnik/FriedbergMuchnik.pdf
[tf]: https://www.dpmms.cam.ac.uk/~tf/
67
hugo/content/posts/2016-03-03-a-certain-limit.md
Normal file
@@ -0,0 +1,67 @@
---
lastmod: "2021-10-25T23:24:01.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
math: true
date: "2016-03-03T00:00:00Z"
title: Why do we get complex numbers in a certain expression?
summary: Answering the question, "Why does a continued fraction containing only 1, subtraction, and division result in one of two complex numbers?".
---

*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/1681993/259262). It is therefore licensed under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

So we all know that the continued fraction containing all 1s...

$$
x = 1 + \frac{1}{1 + \frac{1}{1 + \ldots}}
$$

yields the golden ratio \\(x = \phi\\), which can easily be proven by rewriting it as \\(x = 1 + \dfrac{1}{x}\\), solving the resulting quadratic equation and assuming that a continued fraction that only contains additions will give a positive number.

Now, a friend asked me what would happen if we replaced all additions with subtractions:

$$
x = 1 - \frac{1}{1 - \frac{1}{1 - \ldots}}
$$

I thought "oh cool, I know how to solve this...":

$$
x = 1 - \frac{1}{x}
$$

$$
x^2 - x + 1 = 0
$$

And voila, I get...

$$ x \in \{e^{i\pi/3}, e^{-i\pi/3} \} $$

Ummm... why does a continued fraction containing only 1s, subtraction and division result in one of two complex (as opposed to real) numbers?

(I have a feeling this is something like the \\(\sum_i (-1)^i\\) thing, that the infinite continued fraction isn't well-defined unless we can express it as the limit of a converging series, because the truncated fractions \\(1 - \frac{1}{1-1}\\) etc. aren't well-defined, but I thought I'd ask for a well-founded answer. Even if this is the case, do the two complex numbers have any "meaning"?)

# Answer

You're attempting to take the limit of the recurrence

$$
x_{n+1} = 1-\frac{1}{x_n}
$$

This recurrence actually never converges, from any real starting point.
Indeed, \\(x_2 = 1-\frac{1}{x_1}; \\ x_3 = 1-\frac{1}{1-1/x_1} = 1-\frac{x_1}{x_1-1} = \frac{1}{1-x_1}; \\ x_4 = x_1\\)

So the sequence is periodic with period 3.
Therefore it converges if and only if it is constant; but the only way it could be constant is, as you say, if \\(x_1\\) is one of the two complex numbers you found.
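
A quick numerical check of the period-3 behaviour and of the two fixed points (my own addition, not part of the original answer):

```python
import cmath

def step(z):
    return 1 - 1 / z

# From a real starting point the orbit cycles with period 3...
x = 2.0
orbit = [x]
for _ in range(3):
    x = step(x)
    orbit.append(x)
print(orbit)  # [2.0, 0.5, -1.0, 2.0]

# ...while the two complex roots of x^2 - x + 1 = 0 are fixed points.
for z in [cmath.exp(1j * cmath.pi / 3), cmath.exp(-1j * cmath.pi / 3)]:
    assert abs(step(z) - z) < 1e-12
```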
|
||||
|
||||
Therefore, what you have is actually basically a proof by contradiction that the sequence doesn't converge when you consider it over the reals.
|
||||
|
||||
However, you have found exactly the two values for which the iteration does converge; that is their significance.
|
||||
|
||||
Alternatively viewed, the map \\(z \mapsto 1-\frac{1}{z}\\) is a certain transformation of the complex plane, which has precisely two fixed points. You might find it an interesting exercise to work out what that map does to the complex plane, and examine in particular what it does to points on the real line.
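
To make the period-3 claim concrete, here is a quick numerical check (a sketch of mine, not part of the original exchange): iterating the map from a real starting point cycles with period 3, while the two complex roots of \\(x^2 - x + 1 = 0\\) are genuine fixed points.

```python
import cmath

def step(z):
    """One application of the map z -> 1 - 1/z."""
    return 1 - 1 / z

# From a real starting point the iteration is periodic with period 3:
x1 = 2.0
x2 = step(x1)  # 0.5
x3 = step(x2)  # -1.0
x4 = step(x3)  # back to 2.0, i.e. x4 == x1

# ... so it never converges over the reals; but the two roots of
# x^2 - x + 1 = 0, namely e^{+-i pi/3}, really are fixed points:
for x in (cmath.exp(1j * cmath.pi / 3), cmath.exp(-1j * cmath.pi / 3)):
    assert abs(step(x) - x) < 1e-12
```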
173
hugo/content/posts/2016-03-28-clojure-exercism.md
Normal file
@@ -0,0 +1,173 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- programming
comments: true
date: "2016-03-28T00:00:00Z"
aliases:
- /clojure-exercism/
title: Clojure and Exercism
summary:
  I've been trying to learn Clojure through Exercism, a programming exercises tool.
  It took me an hour to get Hello, World! up and running, so I thought I'd document how it's done.
  I'm using Leiningen on Mac OS 10.11.4.
---

I've been trying to learn [Clojure] (a LISP) through [Exercism], a programming exercises tool.
It took me an hour to get Hello, World! up and running, so I thought I'd document how it's done.
I'm using [Leiningen] on Mac OS 10.11.4.

The [Installing Clojure page] on Exercism details how to install Leiningen; that part is easy.
Installing `exercism` is likewise easy, so we run `exercism fetch clojure hello-world`.

And then we enter a world of pain.

`exercism` downloads a project structure:

    hello-world/
    -- project.clj
    -- README.md
    -- test/
       -- hello_world_test.clj

The README helpfully tells us what Hello, World! is, and gives a specification for the answer.
How are we to come up with our answer?
`lein` gives access to a REPL we can use to write an answer, but there's no indication of
where to put our files so that `lein` can see them.

Let's run `lein test` to see what `lein` complains about.

    Exception in thread "main" java.io.FileNotFoundException:
    Could not locate hello_world__init.class or hello_world.clj on classpath.
    Please check that namespaces with dashes use underscores in the Clojure file name.,
    compiling:(hello_world_test.clj:1:1)

Fine. It's looking for `hello_world.clj`. Let's make one!

I've put the following in `hello-world/hello_world.clj`:

    (defn hello
      []
      "Hello, World!"
      [name]
      (str "Hello, " name "!"))

    (defn main- [& _] (println "Hello!"))

`lein test` fails again, with the same error.

Do we get any hints from the test file?
It starts with a namespace declaration:

    (ns hello-world-test
      (:require [clojure.test :refer [deftest is]]
                hello-world))

We're going to want a `hello-world` namespace, so let's put that at the top of our `hello_world.clj`.

    (ns hello-world)

Still fails with the same error.
OK, the thing that is telling `lein` what to do must be `project.clj`, and it turns out to contain the following:

    (defproject hello-world "0.1.0-SNAPSHOT"
      :description "hello-world exercise."
      :url "https://github.com/exercism/xclojure/tree/master/exercises/hello-world"
      :dependencies [[org.clojure/clojure "1.8.0"]])

None of that tells `lein` where to look for the source file.
Let's make a new `lein` project somewhere and see what the project file is supposed to look like.

Go to a temporary directory and use `lein new app newproj`.
The source tree looks like:

    newproj/
    -- CHANGELOG.md
    -- LICENSE.md
    -- README.md
    -- doc/
       -- intro.md
    -- project.clj
    -- resources/
    -- src/
       -- newproj/
          -- core.clj
    -- test/
       -- newproj/
          -- core_test.clj

And `project.clj` looks like:

    (defproject newproj "0.1.0-SNAPSHOT"
      :description "FIXME: write description"
      :url "http://example.com/FIXME"
      :license {:name "Eclipse Public License"
                :url "http://www.eclipse.org/legal/epl-v10.html"}
      :dependencies [[org.clojure/clojure "1.8.0"]]
      :main ^:skip-aot newproj.core
      :target-path "target/%s"
      :profiles {:uberjar {:aot :all}})

The only interesting thing there seems to be `:main ^:skip-aot newproj.core`.
Let's try putting `:main ^:skip-aot hello-world` into our own `project.clj`.

`lein test` continues to fail with the same error.
Looking up `:skip-aot`, it just tells `lein` to skip ahead-of-time compilation, which isn't what we want.

With a heavy heart, then, let's restructure `hello-world` so it looks exactly like `newproj`:

    hello-world/
    -- README.md
    -- project.clj
    -- src/
       -- hello_world/
          -- hello_world.clj
    -- test/
       -- hello_world/
          -- hello_world_test.clj

Miraculous! We have a different error!

    Exception in thread "main" java.io.FileNotFoundException:
    Could not locate hello_world_test__init.class or hello_world_test.clj on classpath.

I think this might be a step backwards, because beforehand it was at least finding the test file.
I get the same error if I navigate into the test folder and run `lein test`.
And if we try `lein run`, we get the original error:

    Exception in thread "main" java.io.FileNotFoundException:
    Could not locate hello_world__init.class or hello_world.clj on classpath.

From the [Leiningen documentation]:

> The `src/my_stuff/core.clj` file corresponds to the `my-stuff.core` namespace.

That would imply that our source file corresponds to the `hello-world.hello-world` namespace.
Let's try flattening out the structure a bit, and returning the `hello_world_test.clj` to where at least
`lein` recognised it:

    hello-world/
    -- README.md
    -- project.clj
    -- src/
       -- hello_world.clj
    -- test/
       -- hello_world_test.clj

And it works! Woohoo!
(Well, the tests fail, but that's because I'm new to Clojure and missed out a bunch of parentheses.)

The final contents of `src/hello_world.clj`, causing the tests to pass, were:

    (ns hello-world)

    (defn hello
      ([] "Hello, World!")
      ([namevar] (str "Hello, " namevar "!")))

[Clojure]: https://clojure.org/
[Exercism]: https://exercism.io/
[Installing Clojure page]: https://exercism.io/languages/clojure
[Leiningen]: https://leiningen.org
[Leiningen documentation]: https://github.com/technomancy/leiningen/blob/stable/doc/TUTORIAL.md#creating-a-project
@@ -0,0 +1,52 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-04-08T00:00:00Z"
aliases:
- /another-monty-hall-explanation/
title: Another Monty Hall explanation
---

Recall the [Monty Hall problem]: the host, Monty Hall, shows you three doors, named A, B and C.
You are assured that behind one of the doors is a car, and behind each of the other two is a goat.
You want the car.
You pick a door, and Monty Hall opens one of the two doors you didn't pick that he knows contains a goat.
He offers you the chance to switch guesses from the door you first picked to the one remaining door.
Should you switch or stick?

I'll slightly reframe the problem: let's pretend you are playing cooperatively with Monty Hall, where he
knows the layout and is trying to open two goat-doors, while you're trying for the car; you're not allowed to communicate.
The game is (noting the distinction between "picking" a door - i.e. announcing your intention to open it - and opening it):

* You pick a door;
* Monty Hall opens a door you didn't pick;
* You open a door Monty Hall didn't just open;
* Monty Hall opens the remaining door.

(The problem is the same: in standard Monty Hall, you win if and only if you open the car door and Monty Hall opens two goat doors.
Let's say Monty Hall really likes goats, and not inquire further.)

You pick a door, B say. Monty Hall now opens a goat-door, C say,
because he knows the layout and can pick one with a goat behind it.

At this point, you know Monty Hall *decided not to open* door A.
Why would he not have chosen door A?
It's either because he chose randomly between his available goaty options A and C,
or because he knew A had a car behind it, so he was choosing the only goat door available to him.
(Remember, Monty Hall wants to find goats.)

If he chose randomly, you're better off sticking, because that means you have the car.
But if he *actively refused* door A (which can only happen because it had a car behind it), that means you need to switch to door A.

He chose randomly with probability 1/3 (because he chose randomly if, and only if, you originally picked the car).
Therefore he actively refused door A with probability 2/3.

So with probability 2/3, you're in the case where switching guarantees you the car.
With probability 1/3, you're in the case where sticking guarantees you the car.

So you should switch.

[Monty Hall problem]: {{< ref "2013-12-22-three-explanations-of-the-monty-hall-problem" >}}
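
The 2/3 figure is also easy to check empirically. Here is a small Monte Carlo sketch of mine (not part of the argument above), following the same rules: Monty opens a goat door you didn't pick, choosing at random when he has two options.

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """Play one round of Monty Hall; return True if you win the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a goat door you didn't pick (randomly, if he has a choice).
    monty = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one door that is neither your pick nor Monty's.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stick_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(switch_wins, stick_wins)  # roughly 2/3 and 1/3
```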
59
hugo/content/posts/2016-04-13-independence-of-choice.md
Normal file
@@ -0,0 +1,59 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-04-13T00:00:00Z"
math: true
aliases:
- /independence-of-choice/
title: Independence of the Axiom of Choice (for programmers)
summary: So you've heard that the Axiom of Choice is magical and special and unprovable and independent of set theory, and you're here to work out what that means.
---

So you've heard that the Axiom of Choice is magical and special and unprovable and independent of set theory,
and you're here to work out what that means.
Let's not get too hung up on what the Axiom of Choice (or "AC") actually is, because you probably don't care.
Let's instead discuss what it means for something to be "independent".

Often I hear the layperson say things like "AC is unprovable".
This is true in a sense, but it's misleading.

Take an object \\(n\\) of the type "integer" - so \\(5\\), \\(-100\\), that kind of thing.
Here is what I will call the Positivity Hypothesis (or "PH"):

> \\(n\\) is (strictly) greater than \\(0\\).

Of course, depending on how we chose \\(n\\), PH might be true or it might be false, although it can't be both.
So, while maths might let us prove which of PH or not-PH holds for our given \\(n\\),
maths will emphatically not let us prove that PH is always true, and it will not let us prove that PH is always false.
(Maths would be stupid if it did that, because PH is neither always true nor always false.
The integers \\(5\\) and \\(-100\\) witness that PH can be true and can be false respectively.)
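
In programmers' terms, PH is just a predicate on integers, and single values witness both that it is sometimes true and that it is sometimes false. (A sketch of mine; the function name is hypothetical.)

```python
def positivity_hypothesis(n: int) -> bool:
    """PH for a particular integer n: is n strictly greater than 0?"""
    return n > 0

# PH holds for some integers and fails for others, so no proof can
# establish it "for all n", and no proof can refute it "for all n" either.
assert positivity_hypothesis(5)         # 5 witnesses that PH can be true
assert not positivity_hypothesis(-100)  # -100 witnesses that PH can be false
```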

Therefore PH is independent of integer theory.
It's not magic: there is no god-given reason why PH mysteriously resists all efforts to prove it.
It's simply not always true, but it's not always false either.

Let's go back to the Axiom of Choice.

The usual system of set theory (which is used as a foundation for all of maths) is a collection of nine axioms,
together comprising what is known as ZF.
(If we add Choice to that collection as a tenth axiom, we obtain the set theory called ZFC.)
In the "integers" analogy above, "the integer type" plays the role of ZF.

Now, just as we may pick an object of type "integer", we may pick a set theory of type "ZF".
A "set theory of type ZF" is my informal phrasing for what is usually called "a model of ZF".
(I'm eliding the question of the consistency of ZF, and I'll just assume it's consistent.)
In the "integers" analogy, the number \\(5\\) plays the role of one of these set theories,
as does the number \\(-100\\).
We can ask of such a set theory whether it obeys AC (for which we substituted PH in the analogy).

And it turns out that for some models of set theory, AC holds, and for some models, it doesn't.
It's quite hard to describe models of set theory, because set theory supports so much complexity;
the integers are much easier to specify.
However, if you want the names of two models: in the model which contains precisely the "constructible sets", AC holds, while in Solovay's model, AC fails.

That's all there is to it.
Maths won't let us prove AC, because it's not true of every set theory of type "ZF".
Maths won't let us prove AC is false, because there are some set theories of type "ZF" in which it is true.
32
hugo/content/posts/2016-04-21-modular-machines.md
Normal file
@@ -0,0 +1,32 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-04-21T00:00:00Z"
aliases:
- /modular-machines/
title: Modular machines
---

I've written [a blurb][MM] about what a modular machine is (namely, another Turing-equivalent form of computing machine),
and how a Turing machine may be simulated in one.
(In fact, that blurb now contains an overview of how we may use modular machines to produce a group with insoluble word problem,
and how to use them to embed a recursively presented group into a finitely presented one.)

A modular machine is like a slightly more complicated version of a Turing machine, but it has the advantage
that it is easier to embed a modular machine into a group than it is to embed a Turing machine directly into a group.
We can use this embedding to show that there is a group with insoluble word problem:
solving the word problem would correspond to determining whether a certain Turing machine halts.

This is part of my revision process for the Part III course on "Infinite Groups and Decision Problems".
It's probably more comprehensible if you already know what a modular machine is.
Below are some notes, which are handwritten because I needed to draw pictures easily; the linked notes are typeset but might be less legible.

![Notes1]
![Notes2]

[MM]: /misc/ModularMachines/EmbedMMIntoTuringMachine.pdf
[Notes1]: /images/ModularMachines/ModularMachines1.jpg
[Notes2]: /images/ModularMachines/ModularMachines2.jpg
18
hugo/content/posts/2016-04-27-tennenbaums-theorem.md
Normal file
@@ -0,0 +1,18 @@
---
lastmod: "2020-11-07T15:42:41.0000000+00:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-04-27T00:00:00Z"
aliases:
- /tennenbaums-theorem/
title: Tennenbaum's theorem
---

Most recent exposition: [an article][tennenbaum] on [Tennenbaum's Theorem].
Comments welcome.
The proof is cribbed from Dr Thomas Forster, but his notes only sketched the fairly crucial last step, on account of the notes not yet being complete.

[tennenbaum]: /misc/Tennenbaum/Tennenbaum.pdf
[Tennenbaum's Theorem]: https://en.wikipedia.org/wiki/Tennenbaum%27s_theorem
68
hugo/content/posts/2016-05-25-finitistic-reducibility.md
Normal file
@@ -0,0 +1,68 @@
---
lastmod: "2021-09-12T22:50:36.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-05-25T00:00:00Z"
math: true
aliases:
- /finitistic-reducibility/
title: Finitistic reducibility
summary: A quick overview of the definition of the mathematical concept of finitistic reducibility.
---

There is a [Hacker News thread][HN] at the moment about [an article on Quanta][quanta]
which describes a paper claiming to prove that Ramsey's theorem for pairs is finitistically reducible.
That thread contains lots of people being a bit confused about what this means.
I wrote a comment which I hope is elucidating; this is that comment.

It is a fact of mathematics that there are some statements which are solely about finite objects,
but proving them requires reasoning about an infinite object.
The [TREE function]'s well-definedness is one of them.
For a more accessible example than TREE, I think the [Ackermann function] falls into this category.
The Ackermann function \\(A(n+1, m+1) = A(n, A(n+1, m))\\) is well-defined for all \\(n\\) and \\(m\\)
(we prove this by induction over \\(\mathbb{N} \times \mathbb{N}\\)),
but the proof relies on considering the [lexicographic order][lex] on \\(\mathbb{N} \times \mathbb{N}\\),
which is inherently infinite.
(I'm not totally certain that all proofs of Ackermann's well-definedness rely on an infinite object,
but the only proof known to me does.)
Ackermann's function itself is in some sense a "finite" object,
but the proof of its well-definedness is in some sense "infinite".
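
For concreteness, here is a Python sketch of the function the paragraph refers to, using the standard Ackermann–Péter definition (the two base cases are not stated in the recurrence above; they are my addition from the usual definition). Each recursive call strictly decreases the pair \\((n, m)\\) in the lexicographic order, which is exactly where the "infinite" part of the well-definedness proof comes in.

```python
def ackermann(n: int, m: int) -> int:
    """The Ackermann-Peter function: base cases plus
    A(n+1, m+1) = A(n, A(n+1, m))."""
    if n == 0:
        return m + 1
    if m == 0:
        return ackermann(n - 1, 1)
    # Each call strictly decreases (n, m) lexicographically, so the
    # recursion terminates - but proving that uses the (infinite)
    # lexicographic order on N x N.
    return ackermann(n - 1, ackermann(n, m - 1))

print(ackermann(2, 3), ackermann(3, 3))  # 9 61
```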

Whatever the status of my conjecture that "you can't prove that Ackermann's function is well-defined without considering an infinite object",
it is [certainly a fact][ack not primrec] that Ackermann is not [primitive-recursive],
and "primitive-recursive functions" correspond to the lowest of the five "mysterious levels" the article talks about.

There are some mathematicians ("finitists") who don't believe that any infinite objects exist.
Such mathematicians will reject any proof that relies on an infinite object,
so their mathematics is necessarily less wide-ranging than the usual version.
Any result which shows that more things are finitistically true is good,
because it means the finitists get to use facts the rest of us were already happy about.

So the analogy is as follows.
Imagine that we knew of this "infinitary" proof that Ackermann is well-defined,
but we hadn't proved that no "finitary" proof exists.
(So finitists are not happy to use Ackermann, because according to them it might not actually be well-defined:
any known proof requires dealing with an infinite object.)
Now, this paper comes along and proves that a finitary proof does in fact exist.
Suddenly the finitists are happy to use the Ackermann function.

Similarly, in real life, most mathematicians were quite happy to use \\(R_2^2\\) to reason about finite objects,
but the finitists rejected such proofs.
Now, because of the paper, it turns out that the finitists are allowed to use \\(R_2^2\\) after all,
because there is a purely finitistic reason why \\(R_2^2\\) is true.

The actual definition of TREE is a bit too long to explain here,
but it is an example of a function like Ackermann: it is well-defined,
but if you're not allowed to consider infinite objects during the proof, then it is provably impossible to prove that TREE is well-defined.
So the statement "TREE is well-defined" is, in some sense, "less constructive" or "more infinitary" than \\(R_2^2\\).

[HN]: https://news.ycombinator.com/item?id=11763080
[quanta]: https://www.quantamagazine.org/mathematicians-bridge-finite-infinite-divide-20160524
[TREE function]: https://en.wikipedia.org/wiki/Kruskal's_tree_theorem
[Ackermann function]: https://en.wikipedia.org/wiki/Ackermann_function
[lex]: https://en.wikipedia.org/wiki/Lexicographical_order
[primitive-recursive]: https://en.wikipedia.org/wiki/Primitive_recursive_function
[ack not primrec]: http://planetmath.org/ackermannfunctionisnotprimitiverecursive
67
hugo/content/posts/2016-06-13-the-use-of-jargon.md
Normal file
@@ -0,0 +1,67 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- psychology
comments: true
date: "2016-06-13T00:00:00Z"
aliases:
- /the-use-of-jargon/
title: The use of jargon
summary: "Why jargon is a really useful thing to have and use."
---

I was recently having a late-night argument with someone about the following thesis:

> If you can't explain something in a simple way, you don't understand it.

They were using this to argue something like the following:

> Jargon is unhelpful because it sets a very high barrier to entry into any field.

My reply, as something of a mathematician, is as follows.

While there are certainly more accessible parts of physics and maths which can be well explained by analogies and imprecise language
(and, indeed, we often use them with students, and Brian Cox tries to use them in e.g. documentaries),
this has led to the horrible nightmare in which everyone wrongly thinks they understand quantum mechanics (QM) because they heard some cool analogies.
QM has very little in common with its analogies;
the analogies are basically just there to give the idea that "things are weird, and classical intuition will fail".
It's the flip side of "if you use abstruse language then you create an environment where you must pass the initiation tests to take part":

> If you use imprecise language then you create an environment where everyone thinks they understand, but they're all wrong.

Both approaches have merits, and boringly the correct answer is probably "use a mixture of the two, with the ratio depending on appropriateness to the subject".
However, physics has increasingly become a subfield of maths since the advent of QM and general relativity (which are purely mathematical frameworks),
and in maths we find precise language *extremely* important, because we strive for total rigour in this, the only subject where it's actually possible.
Most people start doing maths without access to the language,
and they often find lots of interesting stuff
([Ramanujan] is a particular example of such a mathematician,
who did a lot of great work before ever interacting with Western mathematicians),
but once you know the language, it creates a framework which goes some way towards guaranteeing the correctness of your results, and which can help you spot connections and see more patterns.

From a maths point of view, documentaries are there to get people interested in playing around for themselves,
rather than to actually impart mathematical knowledge.
In an ideal world, I think we'd let people discover loads of maths on their own,
and then show them the precise framework and language it fits into;
but there just isn't time,
so we teach it by shoving the framework down students' throats until they either give up maths or become divinely inspired and start playing with it for themselves.
Additionally, a lot of the maths I study (though this might be a historical accident, derived from our tradition of using jargon) consists of the study of objects which have very few properties, so they defy analogy.

Sometimes it turns out that a certain collection of "very few properties",
like the collection by which we define the objects we call [groups],
happens to capture a certain intuition
(in this case, the idea of "symmetry" [turns out in a deep way][Cayley's theorem] to be precisely captured by groups).
However, that seems to be the exception rather than the rule,
and a general collection of "few properties" won't have a neat, accessible analogy that anyone has been able to find.
When you study metamathematics, especially, some very deep theorems turn out to hinge on *exactly* what you mean by "the integers" or "the real numbers" or whatever.
In such fringe cases it is absolutely necessary to be totally precise that we mean "the integers" in a specific technical sense, rather than "the integers" as a fuzzy concept,
or else one will almost certainly go wrong.

So there are definitely cases where the "stupid jargon" is necessary to maintain clarity of thought.
(Some such theorems do actually impinge on reality, too! Usually via computer science.)

[Ramanujan]: https://en.wikipedia.org/wiki/Srinivasa_Ramanujan
[groups]: https://en.wikipedia.org/wiki/Group_(mathematics)
[Cayley's theorem]: https://arbital.com/p/cayley_theorem_symmetric_groups/
20
hugo/content/posts/2016-06-15-part-iii-essay.md
Normal file
@@ -0,0 +1,20 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematical_summary
comments: true
date: "2016-06-15T00:00:00Z"
aliases:
- /part-iii-essay/
title: Part III essay
---

Now that my time in [Part III] is over, I feel justified in releasing [my essay],
which is on the subject of [Non-standard Analysis].
It was supervised by Dr Thomas Forster
(to whom I owe many thanks for exposing me to such an interesting subject, and for agreeing to supervise the essay).

[Part III]: https://en.wikipedia.org/wiki/Part_III_of_the_Mathematical_Tripos
[Non-standard Analysis]: https://en.wikipedia.org/wiki/Non-standard_analysis
[my essay]: https://www.patrickstevens.co.uk/misc/NonstandardAnalysis/NonstandardAnalysisPartIII.pdf
45
hugo/content/posts/2016-08-05-be-a-beginner.md
Normal file
@@ -0,0 +1,45 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
comments: true
date: "2016-08-05T00:00:00Z"
aliases:
- /be-a-beginner/
title: Be a Beginner
summary: Being a beginner at something is great, especially if it's something that humans are built for.
---

TL;DR: Being a beginner at something is great, especially if it's something that humans are built for.

Humans are, among other things, [persistence hunters].
That means one of the ways we are adapted to catch prey is by the brutest of brute-force techniques:
on foot, we follow a large animal as it runs, until it sits down and dies of exhaustion,
whereupon we eat it.
We're pretty slow, but we can run for hours in the full heat of the day
(we're unreasonably effective at regulating our own body temperature),
and we just don't stop.
One of the adaptations built into the human body is the ability to run at a constant speed for a long time.
This art is, of course, increasingly unnecessary,
as we have supplanted it with tools wrought of pure intellect (agriculture and so forth),
but the underlying mechanisms are still there in (most of) our bodies.

If you start something as a beginner, you make extremely rapid progress.
The general effect has a name: the [Pareto principle],
a rule of thumb which states that 80% of the effects come from 20% of the causes.
If you just learn the most basic 20% of something,
that often gets you 80% of the total possible effect.
Beginners improve rapidly in most human endeavours.

I started running using the NHS [Couch to 5k] programme, about nine weeks ago.
In that time, I have gone from being able to run fitfully for about thirty seconds before having to stop and breathe,
to being able to run for thirty minutes, stopping only because that's when the timer finished.
It wasn't particularly fun, but it's always satisfying to improve rapidly at something,
and it is certainly better to be able to run for half an hour than not to be able to run at all.
(I had a similar experience with lifting weights, a year and a half ago, except I actually find that fun.)

This post is to recommend being a beginner every so often,
and specifically to point to the Couch to 5k programme for those who don't currently do things that involve running.

[persistence hunters]: https://en.wikipedia.org/wiki/Persistence_hunting
[Pareto principle]: https://en.wikipedia.org/wiki/Pareto_principle
[Couch to 5k]: https://www.nhs.uk/live-well/exercise/couch-to-5k-week-by-week
50
hugo/content/posts/2016-08-07-a-free-market.md
Normal file
@@ -0,0 +1,50 @@
|
||||
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- creative
- fiction
comments: true
date: "2016-08-07T00:00:00Z"
aliases:
- /a-free-market/
title: A Free Market
summary: The story of Martin's search for a kaki fruit.
---

Martin was walking through the farmers' market.
He had scored off nearly everything on his shopping list, but one item stubbornly remained:
he needed some kaki fruit for a new sorbet recipe he wanted to try out.

High and low he searched,
weaving in and out of the stalls,
but his mission proved… well, let us say that it was not successful.

Finally, he thought to give up and place his problem into better hands than his own.
He forged towards the market's finest attraction,
the Personal Shopper ("Guaranteed to find your stuff!").
Her name was Posy,
and she had been a fixture here for the last twenty years:
that was when she first noticed the curious way that no-one could ever find quite what they wanted at the weirdly inefficient market.
Posy was uncannily good at navigating the cobbled rows between the stalls,
and had an unerring eye for picking out exactly what the customer required.

Martin poured out his problems.
"Please! I need your help to find a kaki fruit. The recipe will be ruined without it."

Posy smiled, assumed a look of determination, and forged off,
leaving Martin to scurry behind her as she ducked first left,
then left again, then (for some reason) a third and a fourth time.
After what had to be the eighth or ninth left turn through the higgledy-piggledy stalls,
with Martin hopelessly lost,
she stopped in front of a little tent whose sign read
"Children educated and tutored in etiquette: inquire within".
She raised the entrance flap, and an elderly lady emerged.

Angry, baffled and confused, Martin raised his voice,
ignoring the proper-and-prim-looking lady from the tent.
"Why haven't you found me a kaki fruit?
I thought you knew this market like the back of your hand!"

"Haven't you heard?" said Posy incredulously.
"It's better to ask for a governess than seek persimmons."
34
hugo/content/posts/2016-08-10-reinvent-maths.md
Normal file
@@ -0,0 +1,34 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2016-08-10T00:00:00Z"
title: How far back does mathematical understanding go?
summary: Answering the question, "how far back in time would maths be understandable to a modern mathematician?".
---

*This is my answer to the same [question posed on the WorldBuilding Stack Exchange](https://worldbuilding.stackexchange.com/q/51166/13796). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

How far back in time could a mathematician go while spending as little time as possible relearning things?

Background: The main character has realised that he can travel back in time voluntarily, and he wishes to travel back to a time period where he can participate in the beginnings of maths, while relearning as little as possible.

Magic: To make things clear, I'll add this in. The magic allows him to communicate in the time period's language easily. He can understand it effortlessly, and it stops the other people from asking him very incriminating questions (like where he is from, etc.). They simply think he is a travelling scholar and leave it at that. (It stops them from digging too deeply, even if he does not know what they think is common sense.) They also have given him food and a place to stay.

# Answer

It strongly depends which area of maths you're talking about.

* Category theory is basically new, so before the 1950s or so, it just didn't exist in anything like its modern form.
* Combinatorics has been around for a long time, but before Erdős it looked very different.
* Before Newton and Leibniz, the notion of calculus wasn't very clear, and its notation would make it very difficult for us modern-day people to work with.
* Before Cauchy, they didn't really have what we would refer to as a "rigorous" foundation of analysis, and the relevant language has changed substantially since Cauchy to take into account the new approach to rigour.
* There was a time, even some point after the Renaissance IIRC, when mathematicians were still not really sold on this whole "rigour" thing, and the art of defining things crisply so as to deduce (nearly) incontrovertible stuff about them. The entire mindset of mathematics is different now.

A first-year undergraduate going back before Newton could, if their ideas were taken seriously, revolutionise multiple areas of maths simply because we now know (and take for granted) the correct ways of thinking about certain fields of study.
Conversely, of course, the first-year undergraduate would have a hard time following the maths of the day, because the technical language and frameworks are so deeply unfamiliar.
The only frameworks I can think of which haven't changed much post-Renaissance are Euclidean geometry and arithmetic, though of course geometry and number theory have advanced substantially since then.
32
hugo/content/posts/2016-12-31-complex-infinity.md
Normal file
@@ -0,0 +1,32 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2016-12-31T00:00:00Z"
title: What does Mathematica mean by ComplexInfinity?
summary: Answering the question, "Why does WolframAlpha say that a quantity is ComplexInfinity?".
---

*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2078754/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

When entered into [Wolfram|Alpha](https://www.wolframalpha.com/), \\(\infty^{\infty}\\) results in "complex infinity".
Why?

# Answer

WA's `ComplexInfinity` is the same as Mathematica's: it represents a complex "number" which has infinite magnitude but unknown or nonexistent phase.
One can use `DirectedInfinity` to specify the phase of an infinite quantity, if it approaches infinity in a certain direction.
The standard `Infinity` is the special case of phase `0`.
Note that `Infinity` is different from `Indeterminate` (which would be the output of e.g. `0/0`).

Some elucidating examples:

* `0/0` returns `Indeterminate`, since (for instance) the limit may be approached as \\(\frac{1/n}{1/n}\\) or as \\(\frac{2/n}{1/n}\\), resulting in two different real numbers (namely \\(1\\) and \\(2\\)).
* `1/0` returns `ComplexInfinity`, since (for instance) the limit may be approached as \\(\frac{1}{-1/n}\\) or as \\(\frac{1}{1/n}\\), but every possible way of approaching the limit gives an infinite answer.
* `Abs[1/0]` returns `Infinity`, since the limit is guaranteed to be infinite and approached along the real line in the positive direction.

In your particular example, you get `ComplexInfinity` because the infinite limit may be approached as (e.g.) \\(n^n\\) or as \\(n^{n+i}\\).
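For readers without Mathematica to hand, SymPy draws the same three-way distinction (this is SymPy's analogue, not Mathematica itself): `zoo` is its `ComplexInfinity`, `oo` its `Infinity`, and `nan` its `Indeterminate`.

```python
from sympy import S, zoo, oo, nan, Abs

# 1/0 has infinite magnitude but no well-defined phase: ComplexInfinity.
print(S(1) / S(0))       # zoo
# 0/0 has no well-defined limit at all: Indeterminate.
print(S(0) / S(0))       # nan
# |1/0| is infinite and directed along the positive real axis: Infinity.
print(Abs(S(1) / S(0)))  # oo
```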
17
hugo/content/posts/2017-02-14-cauchy-schwarz-proof.md
Normal file
@@ -0,0 +1,17 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- mathematics
comments: true
date: "2017-02-14T00:00:00Z"
aliases:
- /cauchy-schwarz-proof/
title: Proof of Cauchy-Schwarz
---

This is just a link to a [beautiful proof][proof] of the [Cauchy-Schwarz inequality][CS].
There are a number of elegant proofs, but this is by far my favourite, because (as pointed out in the paper) it "builds itself".
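For reference, the inequality states that in any inner product space, \\(\lvert \langle u, v \rangle \rvert^2 \le \langle u, u \rangle \, \langle v, v \rangle\\) for all vectors \\(u\\) and \\(v\\).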

[CS]: https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality
[proof]: http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/New%20Problems/CSNewProof/CauchySchwarzInequalityProof.pdf
25
hugo/content/posts/2017-03-14-maths-olympiad.md
Normal file
@@ -0,0 +1,25 @@
---
lastmod: "2021-01-24T12:53:36.0000000+00:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2017-03-14T00:00:00Z"
title: The relationship between the IMO and research mathematics
summary: Answering the question, "does the International Maths Olympiad help research mathematics?".
---

*This is my answer to the same [question posed on the Academia Stack Exchange](https://academia.stackexchange.com/q/86451/51909). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

I was reading a note by Hojoo Lee on inequalities which is written for International Math Olympiad (IMO) participants. Although he writes that its "target readers are challenging high school students and undergraduate students", it appears to be quite advanced.

It occurred to me to ask: do these IMO problems contribute towards research work in math? Do these math notes/books give a good overview for research work?

# Answer

I think of Olympiad problems more as "parlour tricks".
They're really difficult, and it's super-impressive if someone's good at them, but the skills are very different to the skills you need in research.
As a big example of a difference: the Olympiad rewards quick, accurate leaps of reasoning, because you're under such time pressure.
Research rewards long-term grit and persistence through blind alleys and repeated failure.
38
hugo/content/posts/2017-11-05-abuse-of-notation.md
Normal file
@@ -0,0 +1,38 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2017-11-05T00:00:00Z"
title: Abuse of notation in function application
summary: Answering the question, "Are these examples of abuses of notation?".
---

*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2505777/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

I have often seen notation like this:

> Let \\(f : \mathbb{R}^2 \to \mathbb{R}\\) be defined by \\(f(x, y) = x^2 + 83xy + y^7\\).

How does this make any sense?
If the domain is \\(\mathbb{R}^2\\) then \\(f\\) should be mapping individual tuples.

Also, when speaking of algebraic structures, why do people constantly interchange the carrier set with the algebraic structure itself?
For example you might see someone write this:

> Given any field \\(\mathbb{F}\\), take those elements in our field \\(a \in \mathbb{F}\\) that satisfy the equation \\(a^8 = a\\).

How does this make any sense?
If \\(\mathbb{F}\\) is a field then it is a tuple equipped with two binary operations and corresponding identity elements, all of which satisfy a variety of axioms.

# Answer

The example you've given of a function is not an abuse: \\(x\\) is instead shorthand for \\(\pi_1(t)\\), \\(y\\) is shorthand for \\(\pi_2(t)\\), and \\((x, y)\\) is shorthand for \\(t\\).
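Spelled out fully (my own expansion of the same point), the definition without the shorthand reads \\(f(t) = \pi_1(t)^2 + 83\,\pi_1(t)\,\pi_2(t) + \pi_2(t)^7\\) for \\(t \in \mathbb{R}^2\\): the \\((x, y)\\) on the left-hand side of the original definition is pattern-matching on the tuple, not a pair of separate arguments.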

\\(g \in G\\) is a very minor abuse, yes.
"A group \\(G\\) is a set \\(G\\) endowed with some operations" is a slight abuse, but one which will never be misinterpreted.
It is done this way to avoid the proliferation of unnecessary and confusing symbols.
For the same reason, we use the symbol \\(+\\) to refer to the three different operations of addition of integers, rationals, and reals.
26
hugo/content/posts/2018-02-03-epsilon-delta.md
Normal file
26
hugo/content/posts/2018-02-03-epsilon-delta.md
Normal file
@@ -0,0 +1,26 @@
|
||||
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2018-02-03T00:00:00Z"
title: Infinitesimals as an idea that took a long time
summary: Answering the question, "Which mathematical ideas took a long time to define rigorously?".
---

*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2633847/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

It often happens in mathematics that the answer to a problem is "known" long before anybody knows how to prove it. (Some examples of contemporary interest are among the Millennium Prize problems: e.g. Yang-Mills existence is widely believed to be true based on ideas from physics, and the Riemann hypothesis is widely believed to be true because it would be an awful shame if it wasn't. Another good example is Schramm–Loewner evolution, where again the answer was anticipated by ideas from physics.)

Rarer are the instances where an abstract mathematical "idea" floats around for many years before even a rigorous definition or interpretation can be developed to describe the idea. An example of this is umbral calculus, where a mysterious technique for proving properties of certain sequences existed for over a century before anybody understood why the technique worked, in a rigorous way.

I find these instances of mathematical ideas without rigorous interpretation fascinating, because they seem often to lead to the development of radically new branches of mathematics. What are further examples of this type?

# Answer

Following on from the continuity example, in which the epsilon-delta formulation eventually became ubiquitous, I submit the notion of the infinitesimal. It took until Robinson in the 1950s and early 60s before we had "the right construction" of infinitesimals via ultrapowers, in a way that made infinitesimal manipulation fully rigorous as a way of dealing with the reals. They were a very useful tool for centuries before then, with (e.g.) Cauchy using them regularly, attempting to formalise them but not succeeding, and with Leibniz's calculus being defined entirely in terms of infinitesimals.

Of course, there are other systems which contain infinitesimals - for example, the field of formal Laurent series, in which the variable may be viewed as an infinitesimal - but e.g. the infinitesimal \\(x\\) doesn't have a square root in this system, so it's not ideal as a place in which to do analysis.
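To make the Laurent-series example concrete (a standard fact, spelled out here for completeness): order the field \\(\mathbb{R}((x))\\) by declaring a nonzero series positive exactly when its lowest-order coefficient is positive. Then \\(0 < x < r\\) for every real \\(r > 0\\), since \\(r - x\\) has positive lowest-order coefficient \\(r\\); so \\(x\\) is a genuine infinitesimal. But \\(x\\) has no square root in this field, because the square of any nonzero Laurent series has a lowest-order term of even degree, and \\(x\\) has degree \\(1\\).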
25
hugo/content/posts/2018-04-08-kinds-of-number.md
Normal file
@@ -0,0 +1,25 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- stack-exchange
comments: true
date: "2018-04-08T00:00:00Z"
title: What is lost when we move between number systems?
summary: Answering the question, "What is lost when we move from the reals to the complex numbers?".
---

*This is my answer to the same [question posed on the Mathematics Stack Exchange](https://math.stackexchange.com/q/2728317/259262). It is therefore licenced under [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).*

# Question

As I understand it, when you move to "bigger" number systems (such as from the complexes to the quaternions) you lose some properties (e.g. moving from the complexes to the quaternions loses commutativity of multiplication). Does the same hold when you move, for example, from the naturals to the integers or from the reals to the complexes, and what properties do you lose?

# Answer

The most important ones as I see it:

* Naturals to integers: lose well-orderedness, gain "abelian group" (and, indeed, "ring").
* Integers to rationals: lose discreteness, gain "field".
* Rationals to reals: lose countability, gain "Cauchy-complete".
* Reals to complexes: lose a compatible total order, gain the Fundamental Theorem of Algebra.
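The last bullet is directly observable in Python, whose `complex` type deliberately supports no order comparisons (a small illustration of mine, not part of the original answer):

```python
# Complex numbers gain algebraic closure but lose a compatible total order:
# Python refuses to compare them at all.
try:
    (1 + 2j) < (2 + 1j)
    has_order = True
except TypeError:
    has_order = False

print(has_order)  # False: no order on the complexes
# Arithmetic still works fine: (1+2j)(2+1j) = 2 + 1j + 4j + 2j^2 = 5j.
print((1 + 2j) * (2 + 1j))  # 5j
```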
25
hugo/content/posts/2018-06-02-json-comments.md
Normal file
@@ -0,0 +1,25 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- hacker-news
- programming
comments: true
date: "2018-06-02T00:00:00Z"
title: JSON comments (a note from Hacker News)
summary: "A quick note from Hacker News about why the comment-handling situation in JSON is bad."
---

In response to [a linkpost](https://news.ycombinator.com/item?id=17358103) to [an article about how YAML isn't perfect](https://arp242.net/weblog/yaml_probably_not_so_great_after_all.html), [user jiveturkey](https://news.ycombinator.com/user?id=jiveturkey) [commented with confusion](https://news.ycombinator.com/item?id=17359727):

> > JSON doesn't support comments
>
> eh?
>
> `{ "firstName": "John", "lastName": "Smith", "comment": "foo", }`
>
> I know it isn't the same as `#comments`, but who cares really.

[I replied](https://news.ycombinator.com/item?id=17359800):

> The trouble there is that your comments come in-band. What if you're trying to serialise something and you don't have the power to insist that it's not a dictionary with "comment" as a key?
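To spell out the failure mode (with a hypothetical `strip_comments` consumer of my own devising, not from the thread): if tooling treats `"comment"` keys as annotations and strips them, any document that legitimately uses that key is silently corrupted on a round trip.

```python
import json

def strip_comments(obj):
    """Hypothetical consumer that discards "comment" keys as annotations."""
    return {k: v for k, v in obj.items() if k != "comment"}

# Here "comment" is real data, not an annotation.
record = {"firstName": "John", "lastName": "Smith", "comment": "met at the conference"}

round_tripped = strip_comments(json.loads(json.dumps(record)))
print("comment" in round_tripped)  # False: the real field has been silently dropped
```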
185
hugo/content/posts/2018-07-21-dependent-types-overview.md
Normal file
@@ -0,0 +1,185 @@
---
lastmod: "2021-09-12T22:47:44.0000000+01:00"
author: patrick
categories:
- programming
- mathematics
comments: true
date: "2018-07-21T00:00:00Z"
aliases:
- /dependent-types-overview/
title: Dependent types overview
summary: "A quick overview of dependent types."
---

# Proving things in Agda, part 1: what is dependent typing?

[Agda] is a [dependently-typed] programming language which I've been investigating over the last couple of months, inspired by Conor McBride's [CS410] lecture series.
Being dependently-typed, its type system is powerful enough to encode mathematical truth, and you can use the type system to verify proofs of mathematical statements, as well as to almost completely obviate the need for tests by having the compiler verify almost any property of your program.
This post is an overview of what that means.

Before you read any of the Agda code that lives in [my Agda repository][GitHub], please keep in mind that I'm an Agda novice who is exploring.
I make no claims that any of this code is any good; only that it is correct.
I'm also not interested in performance, since I'm using it as a proof environment rather than as a source of runnable programs; while all of the code is runnable, I have not optimised it at all.
We shall see that the mere existence of these programs is enough to constitute mathematical proof.

## What is a type system?

I think of a type system as one or both of two things.

* A way of informing the compiler that certain objects are supposed to match up in certain ways, such that this information may vanish at runtime but allows the compiler to help you when you're writing the program.
* A way of ensuring at runtime that you don't perform nonsensical operations on objects that don't support those operations.

For example, the language Python has a type system which is "dynamic": you don't specify the type of an object while you're writing the program, so the compiler can't really use type information to help you.
The language F# has a "static" type system: you specify the type of every object up front, while you're writing the program, so the compiler has more opportunities to tell whether you've told your program to do something inconsistent.

From now on, I'll focus on the first kind of type system (i.e. on type systems where you specify types while you're writing the program, so the compiler can help you).

## What can a type system do for you?

Any Python programmer has probably encountered a certain extremely common bug: since strings are iterable, it's all too easy to iterate accidentally over a single string when you intended to iterate over a list of strings.
A baby example, in which the bug is very obvious, is as follows:

{{< highlight python >}}
stringsList = ["hello", "world"]
for ch in stringsList[0]: # Oops - stringsList[0], not stringsList
    print(ch)
# Expected: "hello" and then "world"
# Actual: "h", then "e", then "l", then "l", then "o"
{{< / highlight >}}

Python's dynamic typing means that you often can't find out that you've iterated over the wrong thing until you come to run the program and discover that it blows up.
(It doesn't help that there's no such thing as a character in Python; only a string of length 1.)

In F#, this is a class of bug that never makes it to runtime, because you know the type of every variable up front.

{{< highlight fsharp >}}
let stringsList = [ "hello" ; "world" ]
stringsList.[0]
|> List.map (printfn "%s") // doesn't compile!
{{< / highlight >}}

`List.map` can't take a string as an argument.
Even if it could, `printfn "%s"` can't take a character as an argument.

The type system has protected you from this particular bug.
So far, so familiar.

## Dependent types?

In most common type systems, you're restricted to declaring that any particular object inhabits one of some fixed collection of types, or inhabits a type that is built out of those.
(For example, `string` or `int` or `List<Map<int, Set<char>>>`).
This deprives you of some power: there may be things you know about your program, which you proved to your own satisfaction when you wrote it, but which you have been unable to tell the compiler because the language's type system was not expressive enough.

For example, the following (very inefficient) program computes highest common factors of integers:

{{< highlight fsharp >}}
let rec euclid a b =
    if a < b then euclid b a
    elif a = b then a
    else euclid (a - b) b
{{< / highlight >}}

The F# compiler can tell me that I haven't made any of a collection of stupid errors - it will stop me from trying to compute the highest common factor of `"hello"` and `[1 ; 2]`, for example, and it tells me that I've given `euclid` the right number of arguments in the recursive call.
However, in order to know that my function really does compute highest common factors, you need to execute tests.
(Did you spot the bug in the program? The compiler certainly didn't, because F# doesn't let me inform the compiler that the program is meant to be computing highest common factors, so the compiler doesn't know what to check for.)

But a *dependently-typed* language has the expressive power to lift values themselves, and therefore much more general statements about values, into the type system.
A dependently-typed language can define a function whose type signature contains terms which depend on other arguments to the function, and can thereby express restrictions on the possible arguments that the function can take *which are enforced at compile-time*.
Much like F# can prevent you from applying `euclid` to a string, so a dependently-typed language can prevent you from applying `euclid` to a nonpositive number, and can thereby protect you from one bug which is present in the `euclid` above (namely, that the function does not terminate under some conditions, e.g. when `b = 0`).
Several examples of this will appear later.

Moreover, while any particular non-dependently-typed language could in theory have been designed to contain a type for the strictly positive integers (and thereby protect you from that bug, converting it into a compile error), only a dependently-typed language will allow you to construct arbitrary restrictions which the language inventors didn't think of in advance.

In theory, you can use a dependent type system to encode *any* information about the output of your program, in such a way that the compiler knows so much that it will refuse to compile any program that fails to adhere to the specification you gave.
You can rule out *any* output-based bug if you take enough time over the program specification.
(Strictly, you need to assume that the compiler is bug-free, and that the implementation of the computer you're running the program on is bug-free, and so forth. No compiler will protect you in full generality from cosmic rays flipping bits in memory while your program is executing.)

## Propositions as types

Health warning: this concept is one that takes a while to grok, so I will not devote much time to it.
For me, it took several hours of example sheets followed by a very clear explanation from my Logic and Sets supervisor.

The trick for using a dependently-typed language to encode properties of your program is to use this magical ability to raise values up into the type level.
The most fundamental expression of this concept is the "equality type": for every value `a`, there is a type `=a= : T -> T` (where `T` is the universe of all possible values and types).
A member of `=a= b` is an object that somehow expresses the notion that `a = b`.
(Much in the same way as a member of `int` somehow captures an object that is an integer, so a member of `=a= b` somehow captures a proof that `a = b`.)

There are several possible ways one could implement this type.
For example, one could define `=a= b` to be the type of all proofs that `a = b` in a mathematical sense; then there are a couple of ways to generate members of `=a= b` (for example, "prepend a proof that `a = c` to a proof that `c = b`"), and there might be many different proofs that `a = b`.
But in any sane world, no matter what the other ways are to create a proof that `a = b`, there is certainly a canonical thing that `a` is equal to: namely, `a` itself!
So `=a= a` is a type that should definitely have an inhabitant no matter what system we're working in: namely, the inhabitant that expresses the fact that `a = a` canonically.

In fact, Agda goes further than this and asserts that this is the *only* inhabitant of that type, and moreover that there is no other way to create a member of `=a= b` than to identify `b` with `a` and to use the canonical `a = a`.
In Agda, if you've proved that two things are equal, then they are literally identical; if you want anything looser, like the notion of "isomorphic", or if you want to capture the idea that you might be able to prove equality in more than one way, then you'll have to define primitives for that yourself.

## A real example

The upshot of the above is that you may insist, in the functions that you define, that something is equal to something else.
This opens the door to expressing in the type system, for example, that an integer can be decomposed into a modulus and a remainder in a certain way.
In Agda (here, this is in my [EuclideanAlgorithm.agda]):

```agda
record divisionAlgResult (a : ℕ) (b : ℕ) : Set where
  field
    quot : ℕ
    rem : ℕ
    pr : a *N quot +N rem ≡ b
    remIsSmall : (rem <N a) || (a ≡ 0)
```

(Here, `*N` is the operation of multiplication on the natural numbers, `<N` is the type of proofs that the left-hand side is less than the right-hand side as natural numbers, and `+N` is the addition of naturals.
Also, `||` is the "or" type which expresses that a particular one of the left-hand or the right-hand operand is inhabited.)

This is a declaration of a record type with four fields: a quotient and a remainder (which are naturals), together with a proof object witnessing that `a * quot + rem = b`, and a proof object witnessing that the remainder is smaller than `a` (or else that `a` is zero), and hence that `rem` really is the value of `b` modulo `a`.

An inhabitant of this record type is simply a proof that `b` can be decomposed into a quotient and a remainder on division by `a`.

Observe that this doesn't actually construct the record for you; it's a declaration of a type, not a construction of an object of that type.
It still remains for the programmer to provide a function `(a : ℕ) -> (b : ℕ) -> divisionAlgResult a b` and thereby show that the decomposition is always possible.
Note also that the function signature I just specified is itself an example of lifting values into the type system: the return type `divisionAlgResult a b` depends on the input values `a` and `b`.
This reflects the fact that one may construct proofs *about specific objects*: a `divisionAlgResult a b` is a proof of a certain property of specific integers `a` and `b` (namely "we can find `b % a` and `b / a`").

Crucially, if I can ever get my hands on a `divisionAlgResult a b`, then I know incontrovertibly (in a way that is guaranteed at compile-time) that the `rem` field and the `quot` field together satisfy `a * quot + rem = b`.
There is no way to make one of these objects without supplying a proof that the quotient and the remainder behave in this way.
That means, for example, that there is no possible off-by-one error: I cannot somehow accidentally use the fact `a * quot + rem = b + 1` in a context where I intended to use `a * quot + rem = b`, because the compiler will notice that the thing I am trying to use has type `pr : a *N quot +N rem ≡ b +N 1` instead of `pr : a *N quot +N rem ≡ b`.
The potential off-by-one error has become a type error, caught at compile time, rather than a runtime error that could only be caught by tests.
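For comparison, here is a runtime analogue in Python (the names are my own, mirroring the Agda record, and this is only an illustration): the same invariants can be asserted, but only when an object is constructed at runtime, never at compile time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DivisionAlgResult:
    """Runtime analogue of divisionAlgResult: the invariants are checked
    when an instance is built, rather than proved at compile time."""
    a: int
    b: int
    quot: int
    rem: int

    def __post_init__(self) -> None:
        # a * quot + rem == b, corresponding to the `pr` field.
        assert self.a * self.quot + self.rem == self.b
        # rem < a (or a == 0), corresponding to `remIsSmall`.
        assert self.rem < self.a or self.a == 0

# 14 = 3 * 4 + 2, so this decomposition is accepted.
result = DivisionAlgResult(a=3, b=14, quot=4, rem=2)
```

An off-by-one decomposition such as `DivisionAlgResult(a=3, b=14, quot=4, rem=3)` only fails when that line actually executes, which is exactly the gap the dependent types close.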

## Example: the Hello World of dependently typed languages

The canonical intro to a dependently-typed language is a program `rev` which reverses lists, together with a proof that applying `rev` twice is just the same as the identity function.

In [my own Agda library][GitHub], this is the function `rev` in [Lists/Reversal/Reversal.agda], and the proof is named `revRevIsId`.
I shall remove some of the arguments which are only there for technical reasons related to preventing Russell's paradox, and give its signature:
`revRevIsId : {A : Set} → (l : List A) → (rev (rev l) ≡ l)`.
That is, whenever we have a type (or, in Agda's language, `Set`) called `A`, we can take a list `l` of things of type `A`, and produce a proof that reversing `l` twice yields a list which is identical to `l`.

If I had made any mistakes in the definition of `rev`, then the compiler would have caught them insofar as those mistakes prevented `rev (rev l)` from being identical to `l` for all `l`.
There is no need to test the `rev` function for this property: I have proved that it holds, and the compiler has verified my proof.
If it were ever false, there would be an error in my proof, and the compiler would have been unable to compile the function `revRevIsId` that embodies that proof.

One could, if desired, run my function `revRevIsId` explicitly on some list.
Then one would obtain at runtime a proof that `rev (rev l)` is equal to `l`.
But I certainly don't expect anyone to run this function; merely the fact that one *could* run it, and if one did then it would always produce an equality, is enough to guarantee the correctness of `rev` in this aspect.
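By way of contrast, in a language without dependent types the best available evidence for this property is a finite collection of spot checks, which can never amount to a proof. A minimal Python sketch (my own illustration, not part of the Agda library):

```python
def rev(xs):
    """Reverse a list by prepending each element to an accumulator."""
    out = []
    for x in xs:
        out = [x] + out
    return out

# Spot checks: evidence that rev(rev(l)) == l, but not a proof -
# an untested input could still misbehave, for all the checks tell us.
for l in ([], [1], [1, 2, 3], list("hello")):
    assert rev(rev(l)) == l
```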

Ultimately my proof uses a few intermediate facts, which I had to prove first:

* Concatenating the empty list to the end of `l` yields `l`;
* Concatenation of lists is associative;
* We can pull the head off a concatenated list in the obvious way (this is hard to say in any kind of slick way, but I named the theorem `canMovePrepend` in [Lists.agda]);
* `rev (l ++ m)` is equal to `(rev m) ++ (rev l)`.

I proved each of these facts, then converted my proof into a program that could be used in theory on any given list to produce objects that witnessed the truth of each fact when specialised to that list.

# Next steps: mathematics

This post has been a whistlestop tour of why one might be interested in dependent types, and how in principle they can be used to write proofs of correctness instead of having to rely on tests.
The next post will look in more depth at how Agda's type system can be used to check mathematical proofs, and will go into some detail about how "(constructive) proof" and "program" are really the same thing.
Ultimately we will produce a program/proof that is on the one hand a proof of the Fundamental Theorem of Arithmetic (roughly, "every natural decomposes uniquely into prime factors"), and is on the other hand a fully-verified program that factorises naturals with no tests required.

[Agda]: https://en.wikipedia.org/wiki/Agda_(programming_language)
[dependently-typed]: https://en.wikipedia.org/wiki/Dependent_type
[CS410]: https://www.youtube.com/watch?v=O4oczQry9Jw
[Lists/Reversal/Reversal.agda]: https://github.com/Smaug123/agdaproofs/blob/master/Lists/Reversal/Reversal.agda
[GitHub]: https://github.com/Smaug123/agdaproofs
[EuclideanAlgorithm.agda]: https://github.com/Smaug123/agdaproofs/blob/cab004f6d84dfd13a12ca1e73a68aed23d42a348/Numbers/Naturals/EuclideanAlgorithm.agda