David_Z has left a new comment on your post "Reader Mail #29":

...but you can't ever be exactly 100% sure there isn't a hidden contradiction somewhere

Are you exactly 100% sure of this? Isn't the allegation inherently self-contradictory?

...just trying to pick your brain, the idea intrigues me but I'd love to hear more about it - do you have a post where you explain in greater detail? If so, let me know and I'll check it out.

This is a well-known result from Mathematics, known as the "Incompleteness Theorem". Superficially, it makes sense. Could you prove, using the axioms of Mathematics, that the axioms themselves are consistent? Obviously not. Even if you could, it would be circular reasoning. The "Incompleteness Theorem" says that if you show me a valid proof, within Mathematics itself, for "Mathematics is consistent", I can turn it around and derive an actual contradiction from the axioms.

This subject deserves its own full post.

The Incompleteness Theorem is a famous result from Mathematics. Basically, the Incompleteness Theorem says that you'll never know everything.

The Incompleteness Theorem can be most easily explained in the language of Computer Science. The simplest version is known as the "Halting Problem". Pick any Turing-complete programming language. (A Turing-complete programming language is one that contains all the "standard" features. Anything that can be calculated in one Turing-complete language can be calculated in any other Turing-complete language. All the standard programming languages, such as C, C++, C#, Java, etc., are Turing-complete. Any actual computer has a finite amount of memory. Technically, a Turing-complete language would have to operate on a computer that can add as much extra memory as needed, along with a mechanism for addressing an unlimited amount of memory.)

I'll use pseudo-code. Suppose I have an arbitrary computer program X. If I run program X, will it finish in a finite amount of time, or will it be stuck in an infinite loop and never exit? Assume (for a later contradiction) that there were a computer program "HALT?" that, given any program as input, always answered this question correctly in a finite amount of time. I then write this program, and call it "Y".

program Y:
    if HALT?(Y), then loop forever
    else exit.

Obviously, the HALT? program gives the incorrect answer for program Y. If HALT? says Y halts, then Y loops forever; if HALT? says Y loops forever, then Y exits immediately. Either way, HALT? was wrong.
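To make the trick concrete, here's a sketch in Python. (This is my illustration, not a real halting oracle; no correct `halts()` function can exist, so we pass candidate implementations in and watch each one fail.)

```python
def make_adversary(halts):
    """Build program Y that does the opposite of whatever halts() predicts."""
    def y():
        if halts(y):
            while True:   # halts() said "Y halts" -- so loop forever
                pass
        else:
            return        # halts() said "Y loops" -- so exit immediately
    return y

def what_y_would_do(halts):
    """Report Y's behavior from its code, without actually running it."""
    y = make_adversary(halts)
    prediction = halts(y)                        # what the oracle claims
    actual = "loops" if prediction else "halts"  # which branch Y then takes
    return prediction, actual
```

Whatever candidate oracle you plug in, the prediction and the actual behavior disagree:

```python
print(what_y_would_do(lambda prog: True))    # (True, 'loops')
print(what_y_would_do(lambda prog: False))   # (False, 'halts')
```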

In Mathematics, this type of reasoning is called a "diagonalization argument". If you believe you can do something, I can point out a specific example of why you are wrong.
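The classic instance is Cantor's diagonal argument: given any list of infinite binary sequences, you can construct a sequence that differs from every entry on the list. A minimal Python sketch, using a finite square table for illustration:

```python
def diagonal(table):
    """Given n binary sequences of length n, build one that's on no row.

    The new sequence differs from row i in position i (flip the diagonal
    bit), so it cannot equal any row of the table.
    """
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
missing = diagonal(table)   # [1, 0, 1]
assert missing not in table
```

However long you make the list, the same one-liner produces a counterexample; that's the "specific example of why you are wrong".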

There's one big detail that I ignored: program Y has to refer to its own source code. I'm able to write a compiler. Looking at the full source code for a compiler, it's a big program. I just assumed one existed. If I wanted to make this proof 100% airtight, I would also have to show you the source code for my compiler. Of course, everybody knows that it's possible to write a compiler, so you shouldn't be bothered by this missing detail.

The language of Mathematics is Turing-complete just like a computer programming language. You can consider a full debugging trace of a computer program as equivalent to a "proof" that the program behaved correctly.

Suppose I wrote a computer program "PROVABLE?" that definitively answered the question "Given axioms {x1, x2, x3, ..., xn}, is y a theorem?" Assume the PROVABLE? program always returned an answer in a finite amount of time. That program would have to give at least one wrong answer, for the same reason the HALT? program did.
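Here's the asymmetry in miniature, using Hofstadter's MIU toy formal system (my choice of example, not part of the argument above): a breadth-first search over derivations can confirm a theorem when one exists, but a failed bounded search proves nothing about unprovability.

```python
from collections import deque

def successors(s):
    """Apply each MIU rewrite rule everywhere it fits."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (nothing)
    return out

def provable(target, axiom="MI", max_len=10):
    """Breadth-first proof search, bounded by string length."""
    seen, queue = {axiom}, deque([axiom])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in successors(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return False   # not found within the bound -- by itself, proves nothing
```

`provable("MIU")` and `provable("MIIU")` return True with short derivations; `provable("MU")` returns False, but the search alone can't tell you whether "MU" is unprovable or whether the bound was just too small. (It happens to be genuinely unprovable, by a separate invariant argument about the number of I's.)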

When Gödel proved the Incompleteness Theorem the first time, he found a way to encode Mathematical statements as numbers. He then constructed a statement that, when decoded, said "This statement is not provable." That self-reference trick was the equivalent of "writing the full source code for a compiler".
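The encoding itself is simple. One standard scheme (a sketch; Gödel's actual symbol codes were more elaborate) assigns each symbol a number and packs a formula into the exponents of consecutive primes:

```python
SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}
INVERSE = {v: k for k, v in SYMBOLS.items()}

def next_prime(found):
    """Smallest prime after the ones in found (trial division is fine here)."""
    k = found[-1] + 1 if found else 2
    while any(k % p == 0 for p in found):
        k += 1
    return k

def godel_number(formula):
    """Encode a formula as 2^c1 * 3^c2 * 5^c3 * ... for symbol codes ci."""
    n, primes = 1, []
    for ch in formula:
        p = next_prime(primes)
        primes.append(p)
        n *= p ** SYMBOLS[ch]
    return n

def decode(n):
    """Recover the formula by reading exponents off consecutive primes."""
    out, primes = [], []
    while n > 1:
        p = next_prime(primes)
        primes.append(p)
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(INVERSE[e])
    return "".join(out)
```

For example, `godel_number("0=0")` is 2^1 * 3^3 * 5^1 = 270, and `decode(270)` gives back `"0=0"`. Because encoding and decoding are mechanical, statements *about* numbers can talk about statements, which is what makes the self-referential sentence possible.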

This result is both disturbing and reassuring. Is it possible to tweak our axioms so that the Incompleteness Theorem doesn't hold? It turns out that if your axioms include "standard arithmetic", then the Incompleteness Theorem applies. People have come up with a bunch of different ways to describe "standard arithmetic", but they're all equivalent. In other words, if you describe "standard arithmetic" using axioms {X} and another way using axioms {Y}, then you can prove "{X} is consistent if and only if {Y} is consistent".

This result is disturbing, because there's no definitive process by which you can check if a statement is provable or not. The only way to be sure is to check every possible proof, forever. It's also reassuring, because it guarantees that there will always be something left to discover; you'll never know everything.

There's another interesting conclusion of the Incompleteness Theorem. Suppose you had a proof, using only the axioms of arithmetic themselves, that said "The axioms of arithmetic are consistent." In the same way that program Y operated above, I could manipulate your proof and come up with a proof of "The axioms of arithmetic are inconsistent". Therefore, you'll never be sure whether "standard arithmetic" is logically consistent or not. It probably is consistent, but you can't be sure unless you check every possible proof.
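Stated precisely, this is Gödel's second incompleteness theorem (writing $\mathrm{Con}(T)$ for the arithmetized statement "no contradiction is derivable from $T$"):

```latex
\text{If } T \text{ is a consistent, recursively axiomatizable theory containing arithmetic, then } T \nvdash \mathrm{Con}(T).
```

The fine print matters: a *stronger* theory can prove $\mathrm{Con}(T)$; what's impossible is $T$ certifying itself.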

There also is a complementary result, called the Completeness Theorem. The Completeness Theorem says that if something ACTUALLY REALLY IS ABSOLUTELY TRUE, you'll find a proof eventually. However, for any given statement, you won't know whether it's unprovable, or whether you just haven't been searching long enough.

The Incompleteness Theorem and Completeness Theorem seem contradictory, but they aren't. It turns out that some statements are NEITHER PROVABLE NOR DISPROVABLE! They're undecidable! How is that possible?

The answer comes from model theory. A "model" is a specific mathematical structure that satisfies a list of axioms, and there are a lot of different models that satisfy the axioms of "standard arithmetic". When people say "integer arithmetic", you know what they're talking about. However, no matter how hard you try, there's no way to pin down that specific model with a finite list of axioms; an infinite number of other "nonstandard" models satisfy the same axioms. The "undecidable" statements are true in some models, but false in other models. If a statement is actually true in every possible model, then you'll find a proof eventually, which is what the Completeness Theorem says.

Here's an example of an undecidable statement, from set theory (which includes the axioms of arithmetic and more). Suppose you have an infinite collection of non-empty sets. Can you always choose a specific element out of each set simultaneously? This is known as "the Axiom of Choice" (AC). It has been proven to be undecidable. If you believe "standard set theory" is logically consistent, then you can prove that "standard set theory + AC" is consistent. You can also prove that if "standard set theory" is logically consistent, then "standard set theory + !AC" is logically consistent. This is done by starting with a model of standard set theory and constructing two models, one satisfying each possibility. It's pointless to argue "Do you believe in the Axiom of Choice?" It's irrelevant. In practice, you can't test which one actually holds in this universe, because you can't construct an infinite collection of sets. (The Axiom of Choice is used to prove some statements in Real Analysis, which is the theoretical basis for calculus. The usual productive conclusions of calculus are still valid if you don't accept the Axiom of Choice, but the proofs are harder.)
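For a *finite* collection of non-empty sets, no axiom is needed at all; you can literally write the choice procedure as a terminating program. A trivial Python sketch (my illustration, just to show where the issue does not arise):

```python
def choice_function(family):
    """Pick one element from each non-empty set in a FINITE family.

    For finitely many sets this is an ordinary terminating algorithm;
    min() serves as an explicit rule for picking. The Axiom of Choice
    only becomes a genuine assumption when the family is infinite and
    no such rule is available.
    """
    return [min(s) for s in family]

family = [{3, 1, 4}, {1, 5}, {9, 2, 6}]
print(choice_function(family))   # [1, 1, 2]
```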

The Incompleteness Theorem is also an argument against all organized religions. Most organized religions say "God gave us a finite list of rules that cover how to live in every possible situation!" A finite, fixed list of rules can't possibly cover every possible situation. That's the reason old religions struggle with modern ideas like birth control. Nobody imagined safe and effective birth control when the religion's rules were invented, so the rules don't handle it well.

Similarly, the Incompleteness Theorem is an argument against the State. The State has a finite list of rules that are intended to cover every possible situation. No finite fixed set of rules will ever be sufficient. If you try to patch up the rules, you wind up creating more flaws than you fix.

The Incompleteness Theorem says that you'll never know everything. You won't know every possible Mathematical theorem unless you take forever checking every possible proof. For some statements, you won't know if they're provable or not unless you spend forever checking every possibility. Similarly, in other areas of science and philosophy, it isn't possible to know everything.

## 2 comments:

This comic is related in a way.

http://xkcd.com/468/

Hey FSK.

I'm interested in "finitism", which doesn't allow the axiom of infinity. There is a question about whether or not the incompleteness theorem holds for this set of axioms.

For instance, does the halting problem exist for Turing machines with finite memory?

I've done plenty with cardinalities of "infinite sets", and I understand how the math works, but at the core I think if you want to talk about a set, then you'd better have a terminating algorithm that yields it. Otherwise, what are you talking about?

My friend told me that he did his masters thesis on infinite-dimensional spaces.. is that useful?
