Tuesday, May 22, 2012

Gödel's Incompleteness Theorems and You - A Helpful Guide

I get a ton of emails asking me what is up with Gödel's Incompleteness Theorems, and with good reason: they are complicated as fuck, but they're also among the most important results ever obtained in logic. They're also super clever - it wasn't enough that Gödel was incredibly good at logic, he also had to be inventive as fuck to come up with this procedure. The theorems stand at this really weird crossroads of being important, celebrated, and complicated, and as a result they're a part of logic that people tend to home in on, even when they have no context whatsoever. It's like asking what made Bobby Fischer so great at chess when you don't know how the pieces move - there's so much context necessary. So it's easy for people who don't know what they're talking about to get away with misrepresenting Gödel.

What do they actually show?

They say (very roughly) that no set of axioms capable of proving certain arithmetical truths can prove every arithmetical truth without also proving something false.

Examples might help.

Let's say there are fifteen sentences - 1-10 are true, and 11-15 are false. You've got some axioms, some rules you can play with like "If a=b and b=c, then a=c" and "if a=b, then b=a." You want to combine those rules in a way that lets you prove other sentences, sentences like "1+2=3+0". According to Gödel's Incompleteness Theorems, though, there is no way to get all ten of 1-10, whatever they are, without also sneaking in one of 11-15, which is bad. Maybe there are these three rules, rules a, b, and c, that combined can get you 1-6, and rules a, b, and d can get you 5-10. But if you combine rules a, b, c, and d all at once, they'd prove 1-10, but they'd also prove 12, which is bad. And this happens no matter what. You can only ever prove some of 1-10, even though we know they're all true. If we want to keep false statements out of the party, we're going to have to keep some true ones out, too. Which true ones? Depends on how you decide who can come in. But there have to be some true ones.
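If it helps to see the bookkeeping, here's a toy sketch in Python. The sentence numbers and rule sets are completely made up - this is just set arithmetic standing in for real logic:

```python
# Toy model only: each "rule combo" is just the set of sentence numbers
# it lets you prove. Sentences 1-10 are true, 11-15 are false.
TRUE_SENTENCES = set(range(1, 11))
FALSE_SENTENCES = set(range(11, 16))

rules_abc = {1, 2, 3, 4, 5, 6}             # proves 1-6: all true, but incomplete
rules_abd = {5, 6, 7, 8, 9, 10}            # proves 5-10: all true, but incomplete
rules_abcd = rules_abc | rules_abd | {12}  # combining them sneaks in 12

def diagnose(provable):
    if provable & FALSE_SENTENCES:
        return "inconsistent"  # proves something false - party ruined
    if TRUE_SENTENCES - provable:
        return "incomplete"    # misses some true sentences
    return "complete and consistent"  # Gödel says: not happening for real arithmetic

print(diagnose(rules_abc))   # incomplete
print(diagnose(rules_abcd))  # inconsistent
```

The dilemma is the whole point: every rule combo here lands in "incomplete" or "inconsistent," and Gödel showed that for actual arithmetic there's no third bucket.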

That is an oversimplification because we're trying to prove infinitely many true statements. So let's say we have infinitely many statements, and they're numbered "statement 1, statement 2..." on to infinity. Now, let's say all the even statements are true, and all the odd ones are false. We might be able to come up with a combination of rules that can prove all the statements divisible by four, or all the statements ending in an '8'. Maybe we can combine those so we can prove statements 4, 8, 12, 16, 18, 20... and so on. But we have to leave out statements 2, and 6, and 10... Any set of rules that allows us to prove all the even statements, unfortunately will also prove an odd statement, and the odd statements are false, so everything goes to hell.
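Same idea, sketched with rules as predicates on statement numbers. Again, these particular rules are made up for illustration - real proof rules aren't this tidy:

```python
# Made-up rules: one proves statements divisible by 4, one proves
# statements ending in 8. Even statements are true, odd ones false.
def provable(n):
    return n % 4 == 0 or n % 10 == 8

window = range(1, 31)  # peek at a finite chunk of the infinite list
proved = [n for n in window if provable(n)]
missed = [n for n in window if n % 2 == 0 and not provable(n)]

print(proved)  # [4, 8, 12, 16, 18, 20, 24, 28] - all true, so far so good
print(missed)  # [2, 6, 10, 14, 22, 26, 30] - true, but out of reach
# Gödel's point: any rule set strong enough to catch ALL the evens
# would catch an odd one too.
```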

Again, that's an oversimplification since we can't quite order all the true statements of arithmetic - they aren't, like, clean and neatly countable like that, and no set of clean, neatly countable rules will allow us to prove all of them either. We're guaranteed some chaotic bullshit.

Why are we guaranteed chaotic bullshit? Well, Gödel managed to encode logical statements in numbers, which is fucking sweet. And since these arithmetical systems can decide arithmetical relationships (like whether one number is related to another in some way), they can say things about the logical statements encoded in those numbers without ever decoding them.
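Here's a baby version of that encoding trick. Gödel used prime factorizations; byte-encoding a string, as below, is cruder, but it's the same move - one number that carries the whole sentence:

```python
# Encode a sentence as a single number, and decode it back.
def encode(sentence: str) -> int:
    return int.from_bytes(sentence.encode("utf-8"), "big")

def decode(number: int) -> str:
    return number.to_bytes((number.bit_length() + 7) // 8, "big").decode("utf-8")

g = encode("1+2=3+0")
print(g)          # one honking number that IS the sentence
print(decode(g))  # 1+2=3+0
```

Once sentences are numbers, statements *about numbers* can secretly be statements *about sentences* - that's the sweet part.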

So he managed to make a number, call that number y, that encodes the sentence "The sentence encoded by the number y is not provable." Which, what?! So if the system can prove that arithmetical statement, then the system is inconsistent. And then everything is broken.
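The closest thing to this trick in everyday programming is a quine - a program that refers to, and reproduces, its own source code. This isn't Gödel's actual construction, just an analogy for the self-reference:

```python
# A quine: the program's output is its own source code. Gödel's
# sentence pulls the same self-reference stunt, except instead of
# "print my own code" it says "my own code is not provable."
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Run it and the output is character-for-character the two lines above. A program can talk about itself without any spooky infinite regress; Gödel showed a sentence of arithmetic can too.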

Importantly, Gödel's first incompleteness theorem only applies to arithmetic. It's about a particular subset of math. If that doesn't seem every interesting to you, suck it - it's the reality. It isn't about metaphysics or ethics or quantum physics or any of those other things. It says that arithmetic is messy in a way we wish it weren't, a way we cannot clean up despite our best efforts.

The second incompleteness theorem in particular says that any proof system with neatly ordered, countable rules that includes arithmetic and also can say things about provability cannot prove its own consistency. All this is really saying is that almost every proof system we have will fail to prove its own consistency. If a proof system manages to say about itself HEY I'M CONSISTENT, then either it is lying to you or it isn't very strong.

So then how the fuck did such confusion creep in? How did this pretty cool but very specific proof manage to run rampant among spiritual hucksters and non-specialists?

Okay. Well, it turns out that, combined with some other important results in math and computer science, Gödel's Incompleteness Theorems say that computers can never prove every truth of arithmetic. If you decided, "I'm going to tell my computer some rules for determining which arithmetical statements - like 1+1=2 or 2x3=7 - are true, and which ones are false!" regardless of what rules you tell your computer, it's going to fuck up, either by missing a true one or letting in a false one.
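Here's a hedged sketch of why "tell your computer some rules" goes sideways. One natural rule for claims of the form "every number n does X" is to hunt for a counterexample - which can catch false claims, but can never confirm true ones:

```python
# A naive rule for judging "every number n satisfies P(n)" claims:
# hunt for a counterexample. If the claim is false, you catch it
# eventually; if it's true, the hunt never ends, so we give up at a
# cutoff - and giving up means true statements go unproven.
def hunt_counterexample(P, cutoff=10_000):
    for n in range(cutoff):
        if not P(n):
            return n   # claim refuted: here's the bad n
    return None        # no counterexample found - which is NOT a proof

print(hunt_counterexample(lambda n: n != 7))      # 7 - "no number is 7" is false
print(hunt_counterexample(lambda n: n + 0 == n))  # None - true, still unproven
```

This one strategy isn't Gödel's theorem, obviously - it's just a taste of the flavor of fuck-up: miss a true one, or (if you get impatient and declare "no counterexample by 10,000, must be true") let in a false one.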

And OKAY HERE COMES THE PART PEOPLE FUCK UP ON SO STAY WITH ME HERE, IF YOU READ THIS PART YOU HAVE TO READ THE REST some people think that the human mind/brain works like a computer, and so is also limited by the Incompleteness Theorems. NO SERIOUSLY STOP IT'S NOT WHAT YOU THINK.

All that means is that it's possible we can't prove everything that is true about arithmetic, and we can't prove everything that is true about the way we think. And if it seems like we can prove everything that is true about arithmetic or the way we think, we might not be like computers after all.

Might it say something more?

Probably not. Douglas Hofstadter's excellent Gödel, Escher, Bach, which I highly recommend to anyone interested in neuroscience and cognition, contains a paragraph that very clearly states what some people try to do with Gödel's Incompleteness Theorems [emphasis added]:

Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true. What might such high-level concepts be? It has been proposed for eons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone.

I've taken the liberty of bolding Hofstadter's qualifiers of doubt, because otherwise when Hofstadter says "suggests - but by no means proves!" you might be tempted to think that he's saying that Gödel's Incompleteness Theorems give us good reason to believe, but not absolute proof, that there is something 'higher-order' going on with the brain. He's not saying that. Gödel's Incompleteness Theorems 'suggest' free will or a soul in the same way that your roommate not being in his room 'suggests' he has been murdered by an international conspiracy dating back to Da Vinci. Like, it hypothetically could be considered evidence of murder, but to show all that, you would need so much more fucking context. If someone said to you, "Dude, your roommate isn't in his room. I think he's been murdered." You would react with so many more questions. "Is he missing? How long has he been gone for? Is there evidence of a struggle? Are you sure he isn't just pooping in the bathroom?"

His being missing in no way establishes conclusively that he has been murdered. So when someone says that Gödel's Incompleteness Theorems show that there is a soul or free will or any other claim outside of a VERY VERY NARROW BODY OF MATHEMATICS you should take that to mean "If a whole bunch of other, really really controversial shit aaaaaaaaaaalllllllllll happened to be true, then Gödel's Incompleteness Theorems might maybe show that something weird is going on, and I am claiming that weird thing is a soul, even though Gödel in no way backs me up on that claim."

For starters, in order for the Incompleteness Theorems to say anything at all about persons you would need to show that our brains/minds are somehow equivalent to computers IN A VERY VERY SPECIFIC WAY - i.e. they can be represented by Turing machines. Don't worry if you don't know what that is - it's just a very basic computer that can do all the things your computer can do with some very basic parts. Importantly, things about Turing machines are provable, so loose analogies between brain/mind and computers just won't fucking cut it. And developing even tighter analogies to a computer doesn't make the case any stronger. Without an actual proof, you don't get to invoke Gödel. Developing a good analogy between the mind/brain and a computer is like finding out that someone saw your roommate with a suspicious guy somewhere else. Like, that doesn't actually prove that this guy murdered your roommate or even that your roommate was murdered at all. If you desperately want it to be the case that your roommate was murdered, you might take it to prove that your roommate was murdered, but for all you know he could be planning a surprise party for you and this mysterious guy was just a clown he wanted to hire.
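For the curious, a Turing machine really is that basic: a tape, a read/write head, a current state, and a lookup table of rules. Here's a minimal simulator - the particular machine below is my own toy example (it increments a binary number), nothing canonical:

```python
# A Turing machine: a tape, a head position, a state, and a rule table
# mapping (state, symbol) -> (symbol to write, head move, next state).
def run_tm(input_str, rules, start="right", max_steps=10_000):
    tape = dict(enumerate(input_str))  # sparse tape; "_" means blank
    head, state = 0, start
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: walk right to the end of the number, then carry 1s leftward.
INCREMENT = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1",  0, "halt"),
    ("carry", "_"): ("1",  0, "halt"),
}

print(run_tm("1011", INCREMENT))  # 1100
print(run_tm("111", INCREMENT))   # 1000
```

That's the whole computer. The "mind/brain equals Turing machine" claim is the claim that everything your brain does reduces, in a mathematically exact way, to a rule table like INCREMENT - which is exactly the kind of precise equivalence a loose analogy doesn't buy you.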

So, even if you've got a really super-tight analogy of the brain to a computer - and they've been developed, believe me - you still can't prove shit. You can at best just sort of gesture in the direction of Gödel's Incompleteness Theorems and say, "maaaaaaaaaaaaaaaybe there is something about the human brain/mind such that we cannot prove it is true using the human brain/mind." If that happens to be right, then by your very argument there is no way to prove that you are right. This is one of the deeply frustrating things about Gödel's Incompleteness Theorems in general - systems that bump up against them are sort of necessarily vague about where and how that bumping happens. If we are one of those systems then we can't ever prove we are one of those systems and you'll always be limited to speculation.

You might be tempted here to think that, since close analogies and rough gestures are the best we can do, you should use them. That argument might go something like this: "Well, you're saying that if the mind/brain-computer analogy works, it would be impossible to prove their equivalence. So, we have some really close analogies that seem to work and also suggest that such analogies are as far as we can go. Since those analogies are so well-developed, don't we have grounds to go ahead and make the leap and believe they're probably right?"


No. We do not.

Historically, one way to sound like an incredibly assclown is to assume we pretty much know how the brain works. That "we only use 10%" figure? Left brain/right brain distinctions? Really, all sorts of "Hey guys, this is it!"-type claims have turned out to be way super wrong. And Turing machines are an incredibly specific sort of thing. So no, some decent analogies do not justify assuming the brain works like a Turing machine in a very strict and mathematically describable way.

But let's just pretend that we could establish that the mind/brain works like a Turing machine, and so is limited by Gödel's Incompleteness Theorems. Would that mean that we probably have a soul or free will or something like them?


All it would mean is there is something about the mind/brain that we can't prove, but that is true. For example, maybe we can't solve a particular kind of math problem because we can't reason well enough about that sort of math problem. Maybe the thing we can't prove just is that the mind/brain works like a Turing machine. The fact that we can't prove that we have free will or a soul doesn't show that that's the thing that is true and can't be shown. It just means that we can't prove those things. It could be that we can't prove them because they're false. Even if you found your roommate's body, you would still need so much more evidence that a shadowy international conspiracy was involved - you're not even close. Maybe there are in fact arguments that a shadowy international conspiracy was involved, evidence you find elsewhere, just like there might be other arguments for free will or a soul somewhere else in other branches of philosophy. But Gödel's Incompleteness Theorems lend such negligible support to those conclusions that maybe at best you can say that 'Gödel's Incompleteness Theorems don't conclusively prove that there is not a soul or free will.' If you say that the Incompleteness Theorems imply that there is a soul or free will or God or some other metaphysical entity you are wrong and shut up. They don't imply anything of the sort and they don't work with other arguments to imply anything like that.

And that's the deal with Gödel's Incompleteness Theorems - they're these super-cool mathematical proofs that create numbers that have secret messages like "you can't prove I'm true" encoded into them. It's awesome. However, they only prove things about numbers and the sorts of systems that contain them. So next time someone says, "You know, Gödel showed that..." get ready to bring down the ol' NOPE hammer. It turns out that, unless you do mathematical logic, very high-level computer science, or some other very specialized aspect of mathematics and logic, Gödel's incompleteness theorems very probably have nothing whatsoever to say about how you go about your day-to-day life.
You can get Gödel, Escher, Bach from Amazon. It's pretty cool stuff.

The excerpt above was taken from the Mechanism page of Wikipedia, which discusses some of the applications of Gödel to persons. In particular, you'll note Gödel himself was skeptical of such attempts.


  1. So when I teach my students that the first theorem shows that any system of formal logic capable of proving statements true will find some statements to be undecidable, am I doing it wrong? Are there systems of formal logic which cannot be used to reproduce the axioms of Peano arithmetic? Because if arithmetic is flawed, then so is the system used to validate it.

    1. Jeff, yes you are doing it wrong. If you take Th(N) to be the set of sentences which are true in the standard model of arithmetic and then use Th(N) as your axiom set, then you can prove all the true statements of arithmetic trivially, as they are axioms. This doesn't break the incompleteness theorems because Th(N) is not recursive, which is one of the additional assumptions needed in the proof.

      In answer to your second question, predicate calculus probably can't express facts about numbers in a way we like.

      Finally Philosophy bro, Th(N) is very much countable, it just isn't recursive.

    2. I guess if your system of formal logic only contains the axioms (a = a) and (a != b), it can prove statements true, making the statement "any system of formal logic..." false, but that hardly seems worth consideration. Going around saying, "Things are themselves, and not things that aren't themselves" seems like it would get old after awhile.

      For more complicated and applicable systems, isn't formal logic, as a method of operating/evaluating statements by applying rules in succession, by definition recursive? It doesn't seem like it would have much use otherwise. If Th(N) is axiomatic and non-recursive, containing no inference rules (such as "iff a == b and b == c, a == c"), how would it be able to "say" anything beyond Th(N), let alone anything interesting enough to be non-trivial as a philosophical method (as opposed to the triviality of proving truth when everything's an axiom)?

    3. This comment has been removed by the author.

    4. Jeff,

      Just because the Second Incompleteness Theorem says you can't use Peano to validate its own consistency using itself, it doesn't mean you can't use other systems.


      ZFC shows Peano is consistent.

    5. No, it doesn't. It is indeed a theorem of ZFC that Peano Arithmetic (PA) is consistent. But that doesn't *show* that PA is consistent unless we can also prove that ZFC is consistent. For if ZFC is inconsistent, then what it says about PA's consistency might be one of its false deliverances. But by Gödel's theorem, ZFC (if it's consistent) cannot prove its own consistency and any system that can is even stronger than ZFC and hence in need of a proof of *its* consistency. But by Gödel's theorem again, any system capable of proving that will be stronger still; and so on. There is no magic bullet for consistency.

      BTW, Philosophy Bro isn't quite right when he says that Gödel's theorem only applies to number theory. It applies to any system (like ZFC) capable of representing Gödel's coding trick.

  2. I think I actually understood some (most?) of that - which proves how brilliant you are at summarizing and explaining these very complicated concepts.

    Nice one, bro.

    Can you recommend any reading on the concept/nature of emergence?

  3. "If that doesn't seem every interesting to you" seems to be a word error of some sort.

    1. Yeah, I think The Bro means *very* interesting. Otherwise, awesome summary, Bro. Even if Gödel's incompleteness theorems don't explain day-to-day living, arithmetic is, in itself, kinda fascinating. To me, anyway. :-)

    2. also, "Historically, one way to sound like an incredibly assclown".

      Love the work, but what I love more is to yell it at those passing by so they maybe pay attention and think a bit about things. Hurts flow when you read a part that's got a mixup in it and then have to try to correct it on the fly.

  4. Daniel you are an incredibly assclown

  5. "It could be that we can't prove them because they're false."

    No it couldn't. If the only reason that we couldn't prove them is because they're false then we could prove their negations.

    Otherwise, this is great!

    1. I'm not sure how what you're saying conflicts with what I'm saying.

      If you're a geometry student who is asked to prove that we can determine a unique triangle given one angle, the length of a side touching that angle, and the length of the side NOT touching that angle, you would struggle for a while, and you would find yourself unable to do that.

      Because it is false.

      Now we can construct counter-examples, but we can't prove that we can construct a unique triangle from an angle and those two sides because it is false that we can construct a unique triangle out of that.

      This was the case even before Euclid discovered it.

      It might be we can't prove there's a soul because there aren't souls - that doesn't mean we necessarily can prove there are not souls.

    2. But we can prove the negation of that statement pretty easily (like, not THAT easily cause I never would have been able to do it but like, Euclid can). So it's not undecidable.

      Here's what I'm saying. "There is a soul" which is a proposition which is true or false, and independently, decidable or unprovable. No statement is unprovable *because it is false*—its truth and provability are orthogonal to each other. If "there is a soul" is false then "there is a soul is false" is true, and "there is a soul is false" is provable or unprovable.

      It's a minor quibble but it seems like you're conflating provability and truth there a little bit. Nothing is "unprovable because it is false" in the strict sense in which we mean "unprovable". To prove X doesn't mean to establish that X is true. It means to follow valid rules of inference within a logical system to assign a truth value to X.

    3. To anyone, if you're confused about the triangle uniqueness referenced above, it might be because uniqueness actually is guaranteed when the angle is opposite the longer side. In this case, it is possible to determine a unique triangle given an angle and two sides.

      To Philosophy Bro, the brain will never be proven to be a Turing Machine, since this would require all sorts of impossible knowledge. It could be the case that a very good approximation is shown, and this will probably be good enough to apply Godel's Incompleteness Theorem. An unprovable statement for example will be one that references states of mind of the brain in calculating the truth of that statement. Assuming the brain is capable of reading itself with some kind of device, it will run into contradictions in thinking of its own state. This doesn't mean the brain couldn't predict this and set up an outside system to solve it, however. But this would be like adding axioms to the theory.

    4. To the Anonymous guy who said the brain will never be proven to be a Turing Machine...

      "A man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine." - Alan Turing

      The problem with Turing machines is that they require an infinitely long tape. In this respect, there's no such thing as an actual Turing Machine. That said, if we measure off of potential rather than the actual... if you give a guy the simple tools above and equip him with immortality and an infinite ability to write, then yeah, sure he's a Turing machine.

      You know that, in reality, the computer you write on is not a Turing machine? It's actually based off of the Von Neumann architecture, which yes, was inspired by principles from the theoretical Turing Machine.

      The point is that, no, it's pretty safe to say that humans are not Turing Machines as I cast doubt that we possess the infinite memory required to be Turing Machines. It is safe to say we are Turing Complete. That said, I would not call us Turing Equivalent to the Von Neumann model as while I do believe a human can simulate all the actions that take place within a computer, the computer cannot do the same for a human.

      Anyway, people keep asking "are humans machines?" like they expect some revelation. Yes. We are undoubtedly machines as we possess all the qualities of a machine. The question isn't are we machines... it's have we transcended past machines to something more. I see no reason to believe we have as we cannot define what it means to be more than machine without mechanizing the very transcendental process we were describing in the first place.

    5. To Rafajafar...

      "A quote has never proven anything." - Anonymous

      I kid, I kid. What I meant was that we'll never prove the brain is a Turing machine because of things like Heisenberg's uncertainty principle, the planck limit, etc. I recognize I'm being picky here.

      Thinking more about this, I've come to realize that even if we did have perfect knowledge of the brain, we still couldn't call it a Turing machine, or even Turing Complete. If you give some human immortal powers and infinite tape and ink, then you could produce a Turing machine. We need to assume the human doesn't make mistakes and keeps working forever. The human is now capable of performing a mathematically rigorous operation for an infinite amount of time. But this human is not like any human without immortal powers! Real people can perform the function of a Turing machine for a finite amount of time, only. You just cannot assume a mortal brain works the same way an immortal one does. Maybe you'll disagree, I won't argue.

      Yes, people are machines, just not Turing machines. We don't know if the brain is equivalent to a mathematical structure, because we don't have enough knowledge. I don't know what you mean about transcending machines.

  6. Dude, that was difficult to follow, but nice job.

    I wanted to say, because I heard the brain/computer theory somewhere and it lodged itself in my thinking a while ago. I don't try and use it to prove free will or souls (dead against the second and pretty skeptical the first) but I think that emergent systems are exactly the kind of thing we need to reject the dualist mind-set about the... er... mind-set. The principle of emergent properties give us a concrete way to say "look, I know the idea that some atoms and shit can make complex thought and emotions and sticky stuff just seems too damn weird to you and you'd rather believe in a soul and god that made it all up instead but actually complex stuff comes out of simple stuff all the time." It's like people who think evolution couldn't have produced an eye because it's too complex...

    And yea maybe Godel doesn't have all that much to do with it but it still seems like a valid hypothetical model to guide our brain research. I think it leads to some really interesting philosophy without having to stray anywhere near souls or any of that junk. Here's just one interpretation straight off my thought process, for eg: If we can imagine that brains are mechanical, that consciousness is just a fancy word for some ultra high levels of complexity, then we can start to consider that consciousness might work at other levels than the human brain, just wherever you achieved enough complexity, and more to the point maybe other levels within the human brain. We could think of the subconscious as literally another consciousness within our brain, a different level of complexity, but maybe one that we can study and learn to communicate with better. Or we can look at it and say "why is this different level of complexity that makes our subconscious better at solving logic puzzles than the level of complexity that makes up our conscious self?"

    And yea you can derive whatever spiritual consciousness out of the universe with this but I think it's much more fun to think about the human, or computer or animal I suppose, the tangible that we could make hypotheses around and then model and test. And it could have implications in mental health, in computer programming...

    Just a thought bro. Chill, love the blog <3

  7. Thanks for tackling a difficult topic.

  8. Honestly, I thought you being a "bro" might mean you'd dumb it down for the masses. I didn't think it'd mean you'd actually complicate it. Poorly written, poorly explained. I can't believe I read it twice just to make sure I'm not insane for thinking this was bad.

    I know Godel's Incompleteness Theorems and their implications already, and you confused the hell out of me. "Bro-ing it up" (to coin a phrase) seems to have done nothing more than let you curse, use strange bolds and italics, and ignore writing for clarity. That's not helpful to anyone. You want to curse, fine, I fucking love cursing! Curse with purpose. You want to bold something, great, I love to find the important key words are phrases easier to notice. But do it for proper emphasis because it loses meaning when you do it so often. Most importantly, write clearly. If you really want to bring this stuff to many people, it helps to be 1) Concise, 2) Illustrative, and 3) Targeted. I'll tell you this... "Bro's" ... they wouldn't read this shit (<--- cursing with purpose).

    Look I could go over the problems (like saying that the First Incompleteness Theorem only applies to arithmetic when really it's any formal system which capable of expressing arithmetic), but I really don't want to and I don't have the time.

    I just hope you take this as advice and not insult, bro.

  9. "If you say that the Incompleteness Theorems imply that there is a soul or free will or God or some other metaphysical entity you are wrong and shut up."

    Sort of, depending on whether numbers count as metaphysical entities. The Incompleteness Theorems point us at a way of continuously discovering new kinds of numbers, or, at least, new kinds of mathematical systems that can be expressed as numbers, simply by taking the axiom set that delivers a given kind of numbers, using Gödel's work to come up with an undecidable statement, and then writing down two new axiom sets that include stating either that that previously undecidable statement is true or that it is false.

  10. Badiou's treatment of the incompleteness theorem may be incredibly instructive here, at least since a lot of commentators seem confused. Even though he makes a step that you find to be unforgivable ("math is ontology"), examples of his truth-process could help clarify what is at stake in arithmetical systems. Especially since Badiou shows how the incompleteness theorem is, itself, inscribed in arithmetic as the unnameable point where the truth-process turns in on itself. So, we can extend Gödel's proof to things like the French Revolution or a love affair. At the very least, even if you don't want to accept Badiou's fundamental claim about metaphysics, these examples provide interesting analogies.

  11. I'm a little dumbfounded that people actually use Goedel's incompleteness theorems to postulate a soul or some non-reductionist explanation for consciousness. The simplest explanation why we can understand logic outside the bounds of particular proof systems is that whatever logic our brains utilize is itself inconsistent, and so we can prove anything with it and even believe contradictions. As long as the contradictions we believe are isolated ala paraconsistent logics, these contradictions don't impact the rest of our reasoning.