What do they actually show?
They say (very roughly) that no set of axioms capable of proving certain arithmetical truths can prove every arithmetical truth without also proving something false.
Examples might help.
Let's say there are fifteen sentences - 1-10 are true, and 11-15 are false. You've got some axioms and some rules you can play with, like "If a=b and b=c, then a=c" and "If a=b, then b=a." You want to combine those rules in a way that lets you prove other sentences, sentences like "1+2=3+0". According to Gödel's Incompleteness Theorems, though, there is no way to get all ten of 1-10, whatever they are, without also sneaking in one of 11-15, which is bad. Maybe there are these three rules, rules a, b, and c, that combined can get you 1-6, and rules a, b, and d can get you 5-10. But if you combine rules a, b, c, and d all at once, they'd prove 1-10, but they'd also prove 12, which is bad. And this happens no matter what. You can only ever prove some of 1-10, even though we know they're all true. If we want to keep false statements out of the party, we're going to have to keep some true ones out, too. Which true ones? Depends on how you decide who can come in. But there have to be some true ones.
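The toy setup above can be sketched in a few lines of code. Everything here is made up for illustration - the sentence numbers, the rule names a through d, and what each bundle of rules proves are all from the example, not from Gödel:

```python
# Toy model: sentences 1-10 are true, 11-15 are false.
TRUE_SENTENCES = set(range(1, 11))
FALSE_SENTENCES = set(range(11, 16))

# What each (made-up) bundle of rules lets you prove:
proves = {
    frozenset('abc'):  set(range(1, 7)),           # rules a,b,c prove 1-6
    frozenset('abd'):  set(range(5, 11)),          # rules a,b,d prove 5-10
    frozenset('abcd'): set(range(1, 11)) | {12},   # all four prove 1-10... and 12
}

def is_sound(ruleset):
    """Sound: proves no false sentence."""
    return proves[ruleset].isdisjoint(FALSE_SENTENCES)

def is_complete(ruleset):
    """Complete: proves every true sentence."""
    return TRUE_SENTENCES <= proves[ruleset]

# In this toy world, no bundle is both sound and complete:
assert not any(is_sound(r) and is_complete(r) for r in proves)
```

The punchline is the last line: every bundle either misses a true sentence or lets a false one into the party.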
That is an oversimplification, because we're trying to prove infinitely many true statements. So let's say we have infinitely many statements, and they're numbered "statement 1, statement 2..." on to infinity. Now, let's say all the even statements are true, and all the odd ones are false. We might be able to come up with a combination of rules that can prove all the statements divisible by four, or all the statements ending in an '8'. Maybe we can combine those so we can prove statements 4, 8, 12, 16, 18, 20... and so on. But we have to leave out statements 2, and 6, and 10... Any set of rules that allows us to prove all the even statements will, unfortunately, also prove an odd statement, and the odd statements are false, so everything goes to hell.
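Here's that infinite version in code - again a toy model of my own, with "even statement numbers" standing in for true statements and each rule proving one infinite family:

```python
# Toy infinite case: even-numbered statements are "true", odd ones "false".
# Each rule proves an infinite family of statement numbers.
def divisible_by_four(n):
    return n % 4 == 0

def ends_in_eight(n):
    return n % 10 == 8

# Combining the two rules proves 4, 8, 12, 16, 18, 20, ... - all even,
# so no false positives, but it misses 2, 6, 10, ...: incomplete.
proved = sorted(n for n in range(1, 25)
                if divisible_by_four(n) or ends_in_eight(n))
missed = sorted(n for n in range(1, 25)
                if n % 2 == 0 and n not in proved)
```

Over statements 1-24, `proved` comes out to [4, 8, 12, 16, 18, 20, 24] and `missed` to [2, 6, 10, 14, 22] - everything proved is true, but plenty of truths get left out.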
Again, that's an oversimplification, since the true statements of arithmetic can't be lined up that neatly - there's no mechanical procedure that lists exactly the true ones, and no clean, mechanically checkable set of rules will allow us to prove all of them either. We're guaranteed some chaotic bullshit.
Why are we guaranteed chaotic bullshit? Well, Gödel managed to encode logical statements as numbers, which is fucking sweet. And since these systems can express arithmetical relationships (like whether one number is related to another in some way), they can talk about the logical statements encoded in those numbers without ever decoding them.
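A minimal version of that encoding trick, to make it concrete. This is a simplified scheme in the spirit of Gödel's (not his exact one): give each symbol a number, then pack a whole formula into a single number using prime factorization, which can be undone because factorization is unique:

```python
def primes():
    """Yield 2, 3, 5, 7, ... (trial division; fine for short formulas)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# A tiny symbol table (made up for illustration):
SYMBOLS = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}
CODES = {v: k for k, v in SYMBOLS.items()}

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... for symbol codes c1, c2, ..."""
    g = 1
    for sym, p in zip(formula, primes()):
        g *= p ** SYMBOLS[sym]
    return g

def decode(g):
    """Recover the formula from its Gödel number by factoring."""
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(CODES[e])
    return ''.join(out)
```

So `'S0=S0'` (read: "1 = 1", with S meaning "successor of") becomes one big number, and `decode(godel_number('S0=S0'))` hands the formula back. Statements *about* that number are, secretly, statements about the formula.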
So he managed to make a number, call that number y, that encodes the sentence "The sentence encoded by the number y is not provable." Which, what?! If the system can prove that sentence, then it has proved something false, and the system is inconsistent - everything is broken. And if the system can't prove it, then the sentence is true but unprovable, which is exactly the kind of left-out true statement we were talking about.
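The self-reference trick behind y is the same one behind a "quine" - a program whose output is its own source code. Here's a classic one (my example, not anything from Gödel's proof itself), wrapped in a function so we can inspect it:

```python
def quine_text():
    """Return the source of a program whose output is its own source.

    The template s describes a program; filling s into itself (s % s)
    produces the very program being described - Gödel's diagonal move.
    """
    s = 's = %r\nprint(s %% s)'
    return s % s
```

If you `exec(quine_text())`, the program prints exactly the text that `quine_text()` returns: it describes itself without any outside help, just as y encodes a sentence about y.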
Importantly, Gödel's first incompleteness theorem only applies to arithmetic and to systems strong enough to encode it. It's about a particular subset of math. If that doesn't seem very interesting to you, suck it - it's the reality. It isn't about metaphysics or ethics or quantum physics or any of those other things. It says that arithmetic is messy in a way we wish it weren't, a way we cannot clean up despite our best efforts.
The second incompleteness theorem in particular says that any proof system with neatly ordered, countable rules that includes arithmetic and also can say things about provability cannot prove its own consistency. All this is really saying is that almost every proof system we have will fail to prove its own consistency. If a proof system manages to say about itself HEY I'M CONSISTENT, then either it is lying to you or it isn't very strong.
So then how the fuck did such confusion creep in? How did this pretty cool but very specific proof manage to run rampant among spiritual hucksters and non-specialists?
Okay. Well, it turns out that, combined with some other important results in math and computer science, Gödel's Incompleteness Theorems say that computers can never prove every truth of arithmetic. If you decided, "I'm going to tell my computer some rules for determining which arithmetical statements - like 1+1=2 or 2x3=7 - are true, and which ones are false!" regardless of what rules you tell your computer, it's going to fuck up, either by missing a true one or letting in a false one.
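To be clear about where the line falls: statements with no variables, like "1+1=2" or "2*3=7", CAN be checked mechanically - you just compute both sides. Here's a little checker of my own to show that much is easy. Gödel's result bites once sentences quantify over ALL numbers ("for every n, ..."): no program correctly classifies every one of those.

```python
import ast
import operator

# Safe evaluator for closed arithmetic terms (no variables, no eval()):
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def value(node):
    """Compute the value of a variable-free arithmetic expression tree."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](value(node.left), value(node.right))
    raise ValueError("not a closed arithmetic term")

def check_equation(sentence):
    """Decide a variable-free equation like '1+1=2' by computing both sides."""
    left, right = sentence.split('=')
    return value(ast.parse(left, mode='eval').body) == \
           value(ast.parse(right, mode='eval').body)
```

So `check_equation('1+1=2')` is True and `check_equation('2*3=7')` is False - that fragment is decidable. The fuck-up the theorems guarantee happens at the quantified sentences, not these.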
And OKAY HERE COMES THE PART PEOPLE FUCK UP ON SO STAY WITH ME HERE, IF YOU READ THIS PART YOU HAVE TO READ THE REST some people think that the human mind/brain works like a computer, and so is also limited by the Incompleteness Theorems. NO SERIOUSLY STOP IT'S NOT WHAT YOU THINK.
All that means is that it's possible we can't prove everything that is true about arithmetic, and we can't prove everything that is true about the way we think. And if it seems like we can prove everything that is true about arithmetic or the way we think, we might not be like computers after all.
Might it say something more?
Probably not. Douglas Hofstadter's excellent Gödel, Escher, Bach, which I highly recommend to anyone interested in neuroscience and cognition, contains a paragraph that very clearly states what some people try to do with Gödel's Incompleteness Theorems [emphasis added]:
Looked at this way, Gödel's proof suggests – though by no means does it prove! – that there could be some high-level way of viewing the mind/brain, involving concepts which do not appear on lower levels, and that this level might have explanatory power that does not exist – not even in principle – on lower levels. It would mean that some facts could be explained on the high level quite easily, but not on lower levels at all. No matter how long and cumbersome a low-level statement were made, it would not explain the phenomena in question. It is analogous to the fact that, if you make derivation after derivation in Peano arithmetic, no matter how long and cumbersome you make them, you will never come up with one for G – despite the fact that on a higher level, you can see that the Gödel sentence is true. What might such high-level concepts be? It has been proposed for eons, by various holistically or "soulistically" inclined scientists and humanists that consciousness is a phenomenon that escapes explanation in terms of brain components; so here is a candidate at least. There is also the ever-puzzling notion of free will. So perhaps these qualities could be "emergent" in the sense of requiring explanations which cannot be furnished by the physiology alone.
I've taken the liberty of bolding Hofstadter's qualifiers of doubt, because otherwise when Hofstadter says "suggests - but by no means proves!" you might be tempted to think that he's saying that Gödel's Incompleteness Theorems give us good reason to believe, but not absolute proof, that there is something 'higher-order' going on with the brain. He's not saying that. Gödel's Incompleteness Theorems 'suggest' free will or a soul in the same way that your roommate not being in his room 'suggests' he has been murdered by an international conspiracy dating back to Da Vinci. Like, it hypothetically could be considered evidence of murder, but to show all that, you would need so much more fucking context. If someone said to you, "Dude, your roommate isn't in his room. I think he's been murdered." You would react with so many more questions. "Is he missing? How long has he been gone for? Is there evidence of a struggle? Are you sure he isn't just pooping in the bathroom?"
His being missing in no way establishes conclusively that he has been murdered. So when someone says that Gödel's Incompleteness Theorems show that there is a soul or free will or any other claim outside of a VERY VERY NARROW BODY OF MATHEMATICS you should take that to mean "If a whole bunch of other, really really controversial shit aaaaaaaaaaalllllllllll happened to be true, then Gödel's Incompleteness Theorems might maybe show that something weird is going on, and I am claiming that weird thing is a soul, even though Gödel in no way backs me up on that claim."
For starters, in order for the Incompleteness Theorems to say anything at all about persons you would need to show that our brains/minds are somehow equivalent to computers IN A VERY VERY SPECIFIC WAY - i.e. they can be represented by Turing machines. Don't worry if you don't know what that is - it's just a very basic computer that can do all the things your computer can do with some very basic parts. Importantly, things about Turing machines are provable, so loose analogies between brain/mind and computers just won't fucking cut it. And developing even tighter analogies to a computer doesn't make the case any stronger. Without an actual proof, you don't get to invoke Gödel. Developing a good analogy between the mind/brain and a computer is like finding out that someone saw your roommate with a suspicious guy somewhere else. Like, that doesn't actually prove that this guy murdered your roommate or even that your roommate was murdered at all. If you desperately want it to be the case that your roommate was murdered, you might take it to prove that your roommate was murdered, but for all you know he could be planning a surprise party for you and this mysterious guy was just a clown he wanted to hire.
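Since "Turing machine" is doing a lot of work in that paragraph: here's what "a very basic computer with some very basic parts" actually looks like - a tiny interpreter of my own, with a made-up machine table that increments a binary number. Tape, head, state, table; that's the whole machine:

```python
def run_tm(rules, tape, state='start', blank='_', max_steps=10_000):
    """Run a Turing machine until it reaches the 'halt' state.

    rules maps (state, symbol) -> (symbol_to_write, 'L' or 'R', next_state).
    """
    cells = dict(enumerate(tape))  # tape as a dict so it can grow either way
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        write, move, state = rules[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Example table: add 1 to a binary number written on the tape.
INCREMENT = {
    ('start', '0'): ('0', 'R', 'start'),  # scan right past the digits
    ('start', '1'): ('1', 'R', 'start'),
    ('start', '_'): ('_', 'L', 'add'),    # hit the end; turn around
    ('add', '1'):   ('0', 'L', 'add'),    # 1 plus carry is 0, keep carrying
    ('add', '0'):   ('1', 'L', 'halt'),   # 0 plus carry is 1, done
    ('add', '_'):   ('1', 'R', 'halt'),   # ran off the left end: new leading 1
}
```

`run_tm(INCREMENT, '101')` turns 5 into 6, i.e. returns '110'. The "VERY VERY SPECIFIC WAY" the argument needs is that the mind/brain be representable by something exactly this mathematically pinned-down - not just vaguely computer-ish.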
So, even if you've got a really super-tight analogy of the brain to a computer - and they've been developed, believe me - you still can't prove shit. You can at best just sort of gesture in the direction of Gödel's Incompleteness Theorems and say, "maaaaaaaaaaaaaaaybe there is something about the human brain/mind such that we cannot prove it is true using the human brain/mind." If that happens to be right, then by your very argument there is no way to prove that you are right. This is one of the deeply frustrating things about Gödel's Incompleteness Theorems in general - systems that bump up against them are sort of necessarily vague about where and how that bumping happens. If we are one of those systems then we can't ever prove we are one of those systems and you'll always be limited to speculation.
You might be tempted here to think that, since close analogies and rough gestures are the best we can do, you should use them. That argument might go something like this: "Well, you're saying that if the mind/brain-computer analogy works, it would be impossible to prove their equivalence. So, we have some really close analogies that seem to work and also suggest that such analogies are as far as we can go. Since those analogies are so well-developed, don't we have grounds to go ahead and make the leap and believe they're probably right?"
No. We do not.
Historically, one way to sound like an incredible assclown is to assume we pretty much know how the brain works. That "we only use 10%" figure? Left brain/right brain distinctions? Really, all sorts of "Hey guys, this is it!"-type claims have turned out to be way super wrong. And Turing machines are an incredibly specific sort of thing. So no, some decent analogies do not justify assuming the brain works like a Turing machine in a very strict and mathematically describable way.
But let's just pretend that we could establish that the mind/brain works like a Turing machine, and so is limited by Gödel's Incompleteness Theorems. Would that mean that we probably have a soul or free will or something like them?
All it would mean is that there is something about the mind/brain that we can't prove, but that is true. For example, maybe we can't solve a particular kind of math problem because we can't reason well enough about that sort of math problem. Maybe the thing we can't prove just is that the mind/brain works like a Turing machine. The fact that we can't prove that we have free will or a soul doesn't show that that's the thing that is true and can't be shown. It just means that we can't prove those things. It could be that we can't prove them because they're false. Even if you found your roommate's body, you would still need so much more evidence that a shadowy international conspiracy was involved - you're not even close. Maybe there are in fact arguments that a shadowy international conspiracy was involved, evidence you find elsewhere, just like there might be other arguments for free will or a soul somewhere else in other branches of philosophy. But Gödel's Incompleteness Theorems lend such negligible support to those conclusions that maybe at best you can say that 'Gödel's Incompleteness Theorems don't conclusively prove that there is not a soul or free will.' If you say that the Incompleteness Theorems imply that there is a soul or free will or God or some other metaphysical entity you are wrong and shut up. They don't imply anything of the sort and they don't work with other arguments to imply anything like that.
And that's the deal with Gödel's Incompleteness Theorems - they're these super-cool mathematical proofs, that create numbers that have secret messages like "you can't prove I'm true" encoded into them. It's awesome. However, they only prove things about numbers and the sorts of systems that contain them. So next time someone says, "You know, Gödel showed that..." get ready to bring down the ol' NOPE hammer. It turns out that, unless you do mathematical logic, very high-level computer science, or some other very specialized aspect of mathematics and logic, Gödel's incompleteness theorems very probably have nothing whatsoever to say about how you go about your day-to-day life.
You can get Godel, Escher, Bach from Amazon. It's pretty cool stuff.
The excerpt above was taken from the Mechanism page on Wikipedia, which discusses some of the applications of Gödel to persons. In particular, you'll note that Gödel himself was skeptical of such attempts.