Scott Aaronson just posted on his blog the transcript of a very interesting talk he gave at Google a few weeks ago. He discusses various issues in computational complexity, especially quantum computation and the ramifications of P = NP, and why it would be a great surprise if it turned out to be true. I find his claim that P = NP would let us settle the other six Millennium Prize problems by a polynomial-time search through all possible proofs up to a certain length to be slightly suspicious (but only slightly, fortunately). Is there some way we could place a bound on the maximal possible length of the shortest proof of a given conjecture, should it be true? That seems very tricky, since some problems that can be stated fairly easily have very long proofs (for example, the four-color theorem). But I wonder whether expressing the four-color theorem (just the statement, not the proof) in ZFC formalism would actually require a large number of symbols. Still, a bound on the length of the shortest proof of a theorem requiring n characters to state seems very difficult to compute. It sounds faintly busy-beaver-like.
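The brute-force search in question can be sketched in a few lines. Everything here is a toy stand-in, not a real proof system: `check_proof` is a trivial placeholder for a genuine polynomial-time proof verifier, and the two-letter alphabet stands in for a real proof-encoding alphabet.

```python
from itertools import product

# Toy stand-in for a real proof-encoding alphabet (assumption).
ALPHABET = "ab"

def check_proof(statement: str, candidate: str) -> bool:
    """Toy placeholder for a polynomial-time proof checker.

    In this toy system, a "proof" of a statement is any string
    containing the statement as a substring.
    """
    return statement in candidate

def search_proofs(statement: str, max_len: int):
    """Exhaustively try every candidate proof up to max_len symbols.

    The search is exponential in max_len; the relevance of P = NP is
    that (since checking a proof is polynomial-time) such a search
    could in principle be replaced by a polynomial-time procedure.
    """
    for length in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if check_proof(statement, candidate):
                return candidate
    return None
```

The point is only the shape of the computation: the outer loops range over exponentially many candidates, while the check inside is cheap.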

“Still, a bound on the length of the shortest proof of a theorem requiring n characters to state seems very difficult to compute.”
It ought to be uncomputable, right? Such a function would yield a finite algorithm for checking whether statements are provable or not, which seems like something that should not be allowed to exist (I believe this is essentially the undecidability of provability, going back to Church and Turing, though I don’t know the official name of the theorem that says this).
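The finite algorithm the comment alludes to can be made concrete. In this sketch `f`, `check_proof`, and the alphabet are all parameters of a hypothetical toy system; the point is that a computable proof-length bound would turn proof search into a total decision procedure for provability, which cannot exist for any sufficiently strong formal system.

```python
from itertools import product

def provable(statement: str, f, check_proof, alphabet: str) -> bool:
    """Decide provability, given a computable bound f on proof length.

    Assumption: every provable statement of n characters has some proof
    shorter than f(n) symbols. Then an exhaustive search over the
    finitely many candidate proofs below the bound settles provability,
    always terminating with a yes-or-no answer.
    """
    bound = f(len(statement))
    for length in range(1, bound):
        for chars in product(alphabet, repeat=length):
            if check_proof(statement, "".join(chars)):
                return True
    return False
```

A usage example with a deliberately trivial system: take the only proof of a statement to be the statement itself, with only statements starting with "a" counting as provable, and f(n) = n + 2; then `provable("ab", ...)` returns True and `provable("ba", ...)` returns False, in finite time either way.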
Indeed, I think you can Gödelize: if you had a computable expression for such an f, you could construct a statement G that says “G has no proof of length < f(N)”, where N can be taken large enough to exceed the length of the statement itself. As in Gödel’s argument, if G is provable, then G has (by assumption) a proof of length < f(N), which implies that ~G is also provable, so mathematics (or at least your formal system) is inconsistent. On the other hand, if G is unprovable, then in particular G has no proof of length < f(N); since this fact can be checked by a finite search, it can be proved, hence G is provable, which is a contradiction.
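Spelled out as a derivation (a sketch, with |p| denoting the length of a proof p and f the hypothesized computable bound):

```latex
\noindent\textbf{Hypothesis.} $f$ is computable, and every provable
statement of length $\le N$ has a proof of length $< f(N)$.

\noindent\textbf{Construction.} By diagonalization, build $G$ with
$|G| \le N$ such that
\[
  G \;\equiv\; \text{``there is no proof $p$ of $G$ with } |p| < f(N)\text{.''}
\]

\noindent\textbf{Case 1.} If $G$ is provable, it has a proof $p$ with
$|p| < f(N)$; the existence of $p$ is finitely verifiable, so $\neg G$
is also provable, and the system proves both $G$ and $\neg G$:
inconsistency.

\noindent\textbf{Case 2.} If $G$ is unprovable, then no proof of length
$< f(N)$ exists; a finite search over all strings of length $< f(N)$
verifies this, so it is provable, making $G$ provable after all:
contradiction.
```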
So, fortunately, Gödel’s theorem is still true: regardless of whether P = NP, finite computations are still finite, and infinite computations are still infinite.
On the other hand, if P = NP were true, it doesn’t seem too outlandish to imagine that humanity could build a supercomputer capable of searching through all proofs of RH short enough for a human to reasonably understand in a lifetime, which I think is what Scott Aaronson had in mind. There would clearly be some technical difficulties. One of them would be determining the “compression factor” of human-expressed mathematics relative to computer-checkable proofs: if you have a page of mathematics that a human mathematician can understand in an hour, can you put an upper bound on how long a formal machine-checkable proof of the results on that page would have to be? This is like your question about the statement of the four-color theorem, although I doubt that ZFC is the ideal way to encode mathematics in a machine-checkable form. One might have a human-understandable proof of RH that depended on some sort of human intuition, but such that a formal computer-understandable proof would be forbiddingly long.
But if a computer search through all human-understandable proofs of RH turned up nothing, and RH were still true (and provable! albeit with a proof too vast for humans to comprehend), /then/ it would be pretty deep 🙂
I could say more on metamathematics and what it means to be “deep”, but I think I’ll hold off, because dinner’s ready 🙂