In a blog post on “Computational Knowledge and the Future of Pure Mathematics”, Stephen Wolfram lays out a vision that is in many ways exciting and challenging. What if all of mathematics could be expressed in a common formal notation, stored in computers so that it is searchable and amenable to computer-assisted discovery and proof of new theorems?
As a former mathematician who is now a programmer, I have had similar dreams for a very long time; I think that was inevitable, since anyone with that common background would imagine broadly the same things. Like Dr. Wolfram, I have thought carefully not merely about the knowledge-representation and UI issues in such a project, but also about the difficulties of staffing and funding it. So it was with a feeling more of recognition than anything else that I received much of the essay.
To his great credit, Dr. Wolfram has done much – more than anyone else – to bring this vision towards reality. Mathematica and Wolfram Alpha are concrete steps towards it, and far from trivial ones. They show, I think, that the vision is possible and could be achieved with relatively modest funding – less than (say) the budget of a typical summer-blockbuster movie.
But there is one question that looms unanswered in Dr. Wolfram’s call to action. Let us suppose that we think we have all of the world’s mathematics formalized in a huge database of linked theorems and proof sequences, diligently being crawled by search agents and inference engines. In tribute to Wolfram Alpha, let us call this system “Omega”. How, and why, would we trust Omega?
There are at least three levels of possible error in such a system. One would be human error in entering mathematics into it (a true theorem is entered incorrectly). Another would be errors in human mathematics (a false theorem is entered correctly). A third would be errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.
Errors of the first two kinds would eventually be discovered by using inference engines to consistency-check the entire database (unless the assertions in it separate into disconnected cliques, which seems unlikely). It was already clear to me thirty years ago when I first started thinking seriously about this problem that sanity-checking would have to be run as a continuing background process responding to every new mathematical assertion entered: I am sure this requirement has not escaped Dr. Wolfram.
The possibility of errors of the third kind – bugs in the inference engine(s) – is more troubling. Such bugs could mask errors of the first two kinds, lead to the generation of incorrect mathematics, and corrupt the database. So we have a difficult verification problem here: we can trust the database (eventually) if we trust the inference engines, but how do we know we can trust the inference engines?
Mathematical thinking cannot solve this problem, because the most likely kind of bug is not a bad inference algorithm but an incorrect implementation of a good one. Notice what has happened here, though: the verification problem for Omega no longer lives in the rarefied realm of pure mathematics but in the more concrete province of software engineering.
As such, there are things that experience can teach us. We don’t know how to do perfect software engineering, but we do know what the best practices are. And this is the point at which Dr. Wolfram’s proposal to build Omega on Mathematica and Wolfram Alpha begins to be troubling. These are amazing tools, but they’re closed source. They cannot be meaningfully audited for correctness by anyone outside Wolfram Research. Experience teaches us that this is a danger sign, a fragile single point of failure, and simply not tolerable in any project with the ambitions of Omega.
I think Dr. Wolfram is far too intelligent not to understand this, which makes his failure to address the issue the more troubling. For Omega to be trusted, the entire system will need to be transparent top to bottom. The design, the data representations, and the implementation code for its software must all be freely auditable by third-party mathematical topic experts and mathematically literate software engineers.
I would go so far as to say that any mathematician or software engineer asked to participate in this project is ethically required to insist on complete auditability and open source. Otherwise, what has the tradition of peer review and process transparency in science taught us?
I hope that Dr. Wolfram will address this issue in a future blog post. And I hope he understands that, for all his brilliance and impressive accomplishments, “Trust my secret code” will not – and cannot – be an answer that satisfies.
Proof checkers are quite a bit simpler than inference engines, so as long as you run the output of the inference engine through a proof checker, only the latter part needs to actually be trusted. Anyway, Wolfram’s vision is not novel at all – see the ‘QED manifesto’ and more recently Tim Gowers’ comments about building an ‘automated mathematician’s assistant’.
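To make the trusted-kernel point concrete, here is a minimal sketch in Python of how small the trusted part can be. The toy Hilbert-style encoding (formulas as strings or nested tuples, with only axiom and modus-ponens steps) is an illustrative assumption, not any real system’s format:

    # Untrusted code may search for proofs however it likes; only this
    # small function, which replays the proof line by line, needs auditing.
    def check(proof, axioms):
        derived = []
        for formula, rule, refs in proof:
            if rule == "axiom":
                assert formula in axioms, "not an axiom: %r" % (formula,)
            elif rule == "mp":  # modus ponens: from A and ("->", A, B) infer B
                a, a_implies_b = derived[refs[0]], derived[refs[1]]
                assert a_implies_b == ("->", a, formula), "bad inference step"
            else:
                raise ValueError("unknown rule: %r" % (rule,))
            derived.append(formula)
        return derived[-1]  # the theorem the proof establishes

    # Usage: prove Q from the axioms P and P -> Q.
    axioms = {"P", ("->", "P", "Q")}
    proof = [("P", "axiom", ()),
             (("->", "P", "Q"), "axiom", ()),
             ("Q", "mp", (0, 1))]
    print(check(proof, axioms))  # -> Q

However clever (or buggy) the engine that found the proof, a wrong step fails the replay; that is the sense in which only the checker has to be trusted.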
You’re assuming Dr. Wolfram has made a conscious decision to keep the source closed, rather than merely not seeing that it should be any different from Mathematica… and people trust Mathematica, don’t they?
Shouldn’t the inference engine be able to regenerate the theorem’s proof from its statement?
There are many cases where an original long proof has later been confirmed and reduced to a much smaller, more elegant proof. There are lots of proofs of Pythagoras’ theorem.
As far as the tradition of “peer review”, look up “Climate Change”. I think it now means that a friendly colleague peered at the paper for a few seconds. Oh, and the whole “science” is in the main closed data sets and closed source models. They don’t allow checking nor apparently high-sticking, but do call out icing.
There is another more subtle problem in some corners. I haven’t seen a proof of Cantor’s theorem that involves closed formal notation. Consider his infinite list (in binary):
.1
.01
.11
.001
.011
.101
.111
.0001
…
The problem is that the diagonal trick doesn’t work. You get .0111111… but it is equal to .1000…, which is the first number on the list. You can assert a number is not on the list, but I can use something from calculus: given any epsilon, I can find N (2**bits) such that there will be a number differing from your value by less than epsilon.
I’m not sure how an inference engine would handle this.
I’m also not sure how it would handle things like topology and other fields where I’m not sure the description is as easy as number theory.
It’s not just a question of having enough eyes of different types on the code (which IS important)…
There’s also an availability issue…what if the company goes under? No discipline should be so reliant on ANY one entity that it’s set back five years by that entity ceasing to exist. What if the company DOESN’T go under, but becomes a gatekeeper of what math gets priority in the system, effectively arbitrating the progress that’s allowed in one area vs another?
Not to mention integrity of another kind: To most people, math isn’t a high-value attack target, but since I’ve spent time doing information security for NSF-funded research projects and infrastructure projects that support science, I’ve become even more sure that it is. You see, in the end, some of us aren’t theoretical mathematicians…we are people who use math to DO things, things that matter.
There’s some pretty serious math involved in systems that HAVE to work, e.g.:
* countless financial systems (no, it’s not just four-function math going on there, trust me)
* cryptography, which we need for virtually everything
* medical research
* emergency management
What if this became exactly as central and important a tool as Wolfram thinks it might? We’d be relying on ONE entity not to get infiltrated (either technologically or on a human level, or both), and not to have anything else go wrong that might make, e.g., a comparison of two cryptographic algorithms get skewed, or a formula used in the financial industry get wrongly accepted just long enough to bankrupt a developing nation and put a few older Americans out of their retirement savings.
Decentralizing things to some degree (by allowing the same work to easily be reproduced elsewhere to confirm it) is a necessary risk mitigation.
Bertrand Russell and Alfred North Whitehead vs. Kurt Gödel – you can’t prove all of mathematics with mathematics. Turing vs. Turing – you can’t prove a program with a program. This project has been proven impossible. Open source can’t change the laws of the universe (yet). However, you could probably accomplish a lot of useful stuff, so there’s that.
I know there are existing projects to build up a full foundation of mathematics using existing proof assistant programs like Coq. I think this is the one I’ve heard of.
One of the things the constructivists are hoping for is that their recent theory work will let them start from a very basic set of axioms and build up to higher mathematics using only a very rigorous procedure that a computer can check, with a lower likelihood for that sort of mathematical hole than a non-constructivist approach. It’s harder than starting with non-constructive math, though, and it has been slow going from what I understand, though I suspect that’s because it’s a hard problem more than anything else. But that probably works in favor of a project like this… doing the harder work to reduce it to something truly mechanically checkable is a huge positive for reliability.
>One of the things the constructivists are hoping for is that their recent theory work will let them start from a very basic set of axioms and build up to higher mathematics using only a very rigorous procedure that a computer can check, with a lower likelihood for that sort of mathematical hole than a non-constructivist approach
Perhaps not entirely by coincidence, back when I was aiming at spending my life as a theoretical mathematician this was the exact research program I aimed at. Only I didn’t think of it as “constructivism” because it was the late 1970s and the constructivist idea wouldn’t coalesce into anything like a school (as opposed to individual foundationalists considering that the Intuitionists might have been on to something after all) until after I moved sideways into programming.
Roads not taken. Maybe if I’d stayed in mathematics I would have become a founding eminence of constructivist logic. Considering the sorts of things I wound up doing as a software engineer that seems rather plausible.
tz, some versions of Cantor’s diagonal argument have in fact been formalized. See e.g. here, from Metamath. And yes, the whole point of an inference engine is to generate proofs, but coming up with a proof given a statement can be computationally expensive. So proofs have to be stored as part of the system, at whatever level of detail suffices to make the remaining inferences easy enough.
John Savage, Gödel’s theorem is not a real obstacle here, any more than it’s an obstacle to doing any mathematical logic. The worst of it is that you’ll need to add consistency axioms whenever some proof depends on them, and keep track of which theorems depend on which ‘levels’ of consistency. Being able to work reasonably with multiple choices of axioms/foundations/logical systems is a requirement anyway, or so it seems to me.
Thanks for the Wolfram link. It was fascinating.
I think the proofs can be made more likely to be correct if the same theorem is proved in several different ways, and if implications from the theorem fit with other mathematics.
“I think Dr. Wolfram is far too intelligent not to understand this, which makes his failure to address the issue the more troubling.”
I doubt that Wolfram made a deliberate choice. I think he loves his Wolfram Alpha, and has an optimistic and expansive temperament. He’s much more focused on “it would be so great if this works” rather than “what would go wrong if this fails”.
This being said, I agree that it would be much better if math-with-substantial-computer-aid were open source.
I have no idea whether you can convince mathematicians to apply enough social pressure to Wolfram to get him to change his policy, and I’m getting an impression that the project is too huge and not-obviously-profitable to do it from the bottom up with volunteers.
Have you contacted Wolfram about going open source?
Firstly, ex contradictione sequitur quodlibet (from contradiction, anything follows), in that a single false theorem may well allow the theorem provers to conclude that 0=1, which would be easy to spot.
Secondly, I think the best way to ensure reliability of the system is to have multiple setups, each being completely different, except that they have the same input and output. This way, every query would be sent to both systems, and the answers would be compared and, hopefully, match to the byte.
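To make the “easy to spot” point concrete: once 0 = 1 is derivable, every equation follows in a couple of steps. Assuming ordinary ring arithmetic is in the database, for any a and b,

    a = a·1 = a·0 = 0 = b·0 = b·1 = b

so everything collapses at once, and even a shallow background consistency check trips immediately.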
I’m not sure that the source code matters so long as the outputs are available. After all, it doesn’t matter if a proof is written by a human, a machine, or a monkey at a typewriter as long as it is correct. Independent verification of the results can be done this way through an independent algorithm, or by hand if needed.
>Independent verification of the results can be done this way through an independent algorithm, or by hand if needed.
I doubt it. Well, the by-hand part, anyway.
Computer assistance for mathematics is doubtless going to do the same thing to mathematical proofs that the invention of compilers did to the size and complexity of programs. We don’t confine ourselves to writing with compilers the kinds of simple programs we could have written in handcrafted assembler; instead we push to the limit of our ability to manage program complexity with compilers.
The new normal, in the presence of Omega, will similarly be to do proofs that we cannot generate or understand without computer assistance. Thus, the source code really will matter.
Maybe fifteen years ago I read an article by a math professor talking about the kind of papers he was getting for master’s and doctoral theses. He was lamenting that they were almost all based on throwing lots of raw computational power at problems instead of trying to create traditionally elegant solutions.
It was basically an alarum of “the end of mathematics as we know it”, but it seems to have been wrong, or at least premature. Every new tool has its heyday; doubtless the old-timers gnashed their teeth about Leibniz and Newton and their “calculus,” but algebra and trigonometry haven’t gone away.
>He was lamenting that they were almost all based on throwing lots of raw computational power at problems instead of trying to create traditionally elegant solutions.
Maybe. But what if the stock of problems with elegant solutions has run low?
I don’t know that this is the case, and am not asserting it. But the possibility exists. The domain of “all mathematical problems with “elegant” solutions” looks inexhaustible now, but we might be closer to the edge of it than we know. Maybe this guy wasn’t seeing lazy papers; maybe reality was trying to tell him something.
The parallel question with respect to Omega is: what happens when unassisted human thinkers have mined out all problems that unassisted human thinkers can do?
Nancy Lebovitz, a piece of formal math is either correct or not, as far as it goes. One correct proof is enough. Math is different from informal (probabilistic) reasoning, from this POV. Where multiple proofs can be (empirically) useful is in figuring out generalizations that will still hold under slightly different conditions.
Lambert, it’s possible to spot inconsistencies in principle. But no, it’s not computationally easy. However this is unlikely to be a problem if we ground everything properly in well-regarded foundational systems, as finding inconsistencies in such foundations would be a major achievement in its own right.
>The new normal, in the presence of Omega, will similarly be to do proofs that we cannot generate or understand without computer assistance.
It seems rather worrying, at first thought, to have proofs that cannot be understood, yet I do not doubt that there was no small degree of suspicion of the first compiled programs, which evaporated with their increasing ubiquity and usefulness.
“You can assert a number is not on the list, but I can use something from calculus: given any epsilon, I can find N (2**bits) such that there will be a number differing from your value by less than epsilon.”
Er, it is actually true that for any real number, there is a rational number (part of a known countable set) that differs from it by less than epsilon for any given epsilon, so I’m not sure what this is supposed to prove.
Also, Cantor’s proof isn’t for the uncountability of the reals in particular; it is for the existence of uncountable sets at all – as a set of sequences, 011… doesn’t have to be equal to 100…. And anyway, the set of numbers that have two representations in any given radix is, as a subset of the rationals, countable anyway.
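For what it’s worth, here is a toy sketch of how formalized diagonal constructions dodge the dual-representation trap: diagonalize in decimal using only the digits 4 and 5, so the constructed number has a unique expansion and “differs in some digit” really does mean “is a different number”. The enumeration below is an arbitrary stand-in:

    from fractions import Fraction

    def nth_digit(x, j):
        # j-th decimal digit of a Fraction x in [0, 1), by exact integer math
        return (x.numerator * 10 ** (j + 1)) // x.denominator % 10

    def diagonal_digits(xs):
        # Pick, for each k, a digit differing from the k-th digit of xs[k].
        # Restricting to 4 and 5 sidesteps the 0.4999... = 0.5000... ambiguity.
        return [5 if nth_digit(x, k) == 4 else 4 for k, x in enumerate(xs)]

    # A finite prefix of a hypothetical enumeration: the k-th number is k/(k+2).
    xs = [Fraction(k, k + 2) for k in range(8)]
    print(diagonal_digits(xs))  # digits of a number differing from each xs[k]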
Probably the accepted definition of “elegant” will evolve to match the available tools.
@Lambert
> It seems rather worrying, at first thought, to have proofs that cannot be understood
But should we limit ourselves to the things that a few pounds of gray matter can handle? Should we require mathematics to be abstracted into a modality designed for a brain that evolved for hunter-gatherers?
Along these lines I recommend you read this article:
http://www.cringely.com/2014/04/15/big-data-new-artificial-intelligence/
Here is an excerpt:
“Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages. This statistical translator uses billions of word sequences mapped in two or more languages. This in English means that in French. There are no parts of speech, no subjects or verbs, no grammar at all. The system just figures it out. And that means there’s no need for theory. It works, but we can’t say exactly why because the whole process is data driven.”
That isn’t to say there isn’t a deterministic translation, just that it is far too complicated for a human to understand. That is a future of both heaven and hell. It reminds me of the Star Trek: Voyager episode “The Voyager Conspiracy”.
http://en.memory-alpha.org/wiki/The_Voyager_Conspiracy_(episode)
But, as with nanotechnology, irrespective of whether the future is scary and dangerous, it is the future that stands before us.
A sentient AI would find this tool most useful.
@esr:
Simplistically, you may be conflating three different issues here:
Describing proofs — well, I’ve done it in Python for some things, but in the general case it may necessitate the kind of DSL that is being described, or maybe you can do it with Python and sympy.
In any case, there should and probably will be at least one open document specification for proof description.
Checking proofs — A proof should simply describe a combination of sub-proofs recursively until you get to agreed-upon universal truths. There should and probably will be many implementations that can do this based on the agreed-upon truth specifications.
Generating proofs will probably be a fertile field for a long time, and people like Wolfram may be able to maintain closed systems that will do a better job in some cases than the open source alternatives.
Intuitively, it seems to me that checking proofs is in P but generating proofs is in NP.
If Wolfram comes up with some useful theorem that he has proved, but he is not willing to share the actual proof, that is a red flag. OTOH, if he shares the proof but not the system that generated the proof, I think we will find that a lot of people figure out how to do the checking in a timely manner.
On a somewhat tangential topic (still DSLs and proving things, but…), does anyone here have experience with TTCN-3?
It’s a testing DSL that is maintained by ETSI, and is apparently used for things like testing SIP stacks, IPv6 conformance, etc. In theory, it’s the sort of thing you might use for tests for GPSD.
My boss mentioned it to me yesterday, and I’m trying to figure out whether it brings anything to the table other than good pay for consultants in combination with letting not-really-programmers do a better job of automated testing.
I guess it’s not that well known, but it is apparently popular enough that there is at least one open-source compiler, and also a project that implements some of its concepts on top of Python.
I am trying to keep an open mind, but this hatchet job on Python by a university researcher who apparently makes his money by promoting TTCN-3 really raises my hackles:
http://www.site.uottawa.ca/~bernard/A%20comparison%20between%20ttcn-3%20and%20python%20v%2012.pdf
Doesn’t need to. Just needs to help find where our current understanding is lacking.
Does “Unassisted Human” exclude GMOs?
guest, a proof can be thoroughly sound, but if it has tens of thousands of steps – or more – it becomes harder and harder to be sure it’s sound.
I think there are two problems going on here: 1) Academia doesn’t sufficiently distrust closed source yet, and 2) the open-source tools for symbolic math right now are actually kind of terrible in comparison.
People in academia, despite all their talk about how transparency and reproducibility are what really matter, still disproportionately value status. And closed-source math/stats software packages like Matlab or Mathematica (or even SAS) have pretty high status: it’s what we teach to the undergrads! I think there’s a very real sense of “it’s officially sanctioned, so it must be okay and, besides, it’s more convenient than trying to get Sympy + Scipy + Matplotlib to work.” Discontinuing courses taught in closed-source languages is going to have to happen before we can get academia to treat closed source with the level of distrust it deserves.
The other part of the problem is that a bottom-up approach to building a system like “Omega” is detrimental to the quality of the system. The Scipy stack (and SAGE) is built on top of Python which, despite being a very good language in its own right, just isn’t the right base for this kind of work. The results are impressive, but there’s too much of an impedance mismatch between working with a general-purpose programming language and trying to do symbolic math.
The nice thing is that solving one of these problems should help solve the other – once people stop trusting closed source, there will be higher demand for higher-quality open source tools. And vice versa.
>people figure out how to do the checking in a timely manner
https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics
These problems just need ‘checking’.
@lambert:
Please say you weren’t paying attention and will try to do better.
@Patrick Maupin
“Intuitively, it seems to me that checking proofs is in P but generating proofs is in NP.”
I know I am being boringly pedantic, but NP actually means that checking the solution is in P, by definition. Finding a solution (generating the proof) is not known to be in P.
*Facepalm*
@Winter:
Yes, my statement was redundant for anybody who knows there are complexity classes beyond NP.
But, my bad, I was also conflating two different concepts. A problem could have a solution in P without having an obvious solution in P. But if Wolfram or anybody else comes up with a canonical way to fully describe proofs, then I believe that Wolfram-level intelligence probably will not be required to come up with a correctness checker that works in polynomial time — that there will be plenty of people around who could create such a checker, and I believe that open source implementations will happen relatively quickly.
I also believe that open source heuristic proof generators will happen, but those may or may not be much crappier than what Wolfram comes up with.
“…errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.”
Let’s say there are no such errors; the engines work perfectly. You turn this thing loose and it keeps finding more and more ‘junk theorems’, night and day. The database just grows and grows until any ‘interesting’ math gets completely buried under a mountain of trivial, boring, or otherwise undesirable results.
A technical glitch: Your link to the essay has a screwy “#comments” that drops you past the bottom of it.
>completely buried under a mountain of trivial, boring, or otherwise undesirable results.
Google Theorems, anyone? That said, optimising for interesting and nontrivial theorems will be a challenge, as will optimising the inference engines to use the most promising axioms first. (No good trying to prove the Riemann hypothesis with ‘1+1=2’.)
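On “most promising axioms first”: one simple, widely used family of heuristics is relevance filtering – rank candidate premises by symbol overlap with the goal. A minimal sketch (the whitespace tokenization is a toy assumption; real filters weight rare symbols and expand iteratively):

    def rank_premises(goal, premises):
        # Order premises by crude symbol overlap with the goal, so the
        # inference engine tries the most relevant-looking ones first.
        goal_syms = set(goal.split())
        def score(p):
            syms = set(p.split())
            return len(goal_syms & syms) / (len(syms) + 1)
        return sorted(premises, key=score, reverse=True)

    goal = "zeta ( rho ) = 0"
    premises = ["1 + 1 = 2",
                "zeta ( s ) = sum ( n ^ ( - s ) )",
                "gamma ( n + 1 ) = n * gamma ( n )"]
    print(rank_premises(goal, premises))  # the zeta definition ranks first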
@Nancy
No. The correct way to do something like this is to stick to the “de Bruijn criterion”, in which case it makes no difference how many steps there are. Of course, this article by renowned megalomaniac Stephen Wolfram doesn’t give you any information on the actual state of the art in this area, in light of which the contribution of this project appears considerably more modest.
I think about the proof of Fermat’s Last Theorem.
https://en.wikipedia.org/wiki/Wiles%27_proof_of_Fermat%27s_Last_Theorem
The proof is next to unreadable for specialists (others need not even try). Good proof checkers would help here.
As for finding it automatically, that would have to be built up step by step. In very small steps.
The proof assistants that real mathematicians are already using to do real work are Coq, Isabelle/HOL, etc., all of which are open source. There’s been work going on in this regard for years, see, for example, this list of 100 important theorems to formalize, most of which have since actually been formalized:
http://www.cs.ru.nl/~freek/100/
Wolfram is, as usual, woolgathering about stuff other people have already done, and is thinking about it worse than they would.
> Otherwise, what has the tradition of peer review and process transparency in science taught us?
Isn’t Wolfram infamous for shirking peer review for his “New Kind of Science”? What makes us think that he or his company respects it now?
“I think about the proof of Fermat’s Last Theorem.”
It would be supremely ironic if the great system gets built, and the first thing it does is reconstruct the “wonderful proof” that Fermat claimed to have discovered.
““…errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.”
Let’s say there are no such errors; the engines work perfectly. You turn this thing loose and it keeps finding more and more ‘junk theorems’, night and day. The database just grows and grows until any ‘interesting’ math gets completely buried under a mountain of trivial, boring, or otherwise undesirable results.”
I wonder how large the set of interesting proofs is relative to junk proofs? Is it similar to interesting books of N words vs. correctly spelled random word salad of N words? If so, that could get very large very fast.
I’m familiar with optimization problems in spaces with nice continuous variables – analytical mathematical operations though strike me as searching some space defined by discrete degrees of freedom (trees of operators, etc) – how is that usually done? Suppose you have some proof, and you want to search the space of nearby proofs (by a large set of permissible operations) for something simpler (more comprehensible, therefore more likely to be interesting?) – that local space is probably enormous, and your objective function is probably discontinuous over all these sequences of discrete changes you can make. How is optimization done for problems like that?
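One standard answer to that last question, for what it’s worth: stochastic local search – hill climbing with restarts, simulated annealing, genetic search – which needs only a way to enumerate neighboring edits and to compare costs, with no continuity or gradients required. A minimal simulated-annealing sketch, with bit-strings standing in for proof trees and bit flips standing in for the permissible rewrite operations:

    import math, random

    def anneal(state, neighbors, cost, steps=10000, t0=1.0):
        # Minimize a discrete, possibly discontinuous cost by random local
        # edits, occasionally accepting uphill moves to escape local minima.
        cur, cur_cost = state, cost(state)
        best, best_cost = cur, cur_cost
        for i in range(steps):
            t = t0 * (1 - i / steps) + 1e-9  # simple linear cooling schedule
            cand = random.choice(neighbors(cur))
            c = cost(cand)
            if c <= cur_cost or random.random() < math.exp((cur_cost - c) / t):
                cur, cur_cost = cand, c
                if c < best_cost:
                    best, best_cost = cand, c
        return best

    # Toy stand-in: "proofs" are bit-strings, cost is the number of ones,
    # neighbors are single-bit edits. A real search would use the logic's
    # rewrite rules as the neighbor moves.
    start = [1] * 16
    flips = lambda s: [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]
    print(anneal(start, flips, sum))  # converges to (near) all zeros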
As an engineer, I have had some difficulty following dry symbolic mathematics. For it to mean something to me (enable me to solve problems and intuit (with appropriately trained intuition) the behavior of complex systems), I have to be able to tie it into some sort of visualization. Given a proof that does something I can visualize or follow, I can “see” what I can do with it and how to apply it to various situations. If I were given some unmotivated abstract rule that came out of a proof that is too complex to understand, I’m not sure I could apply it in the same way. (See that it applies to a situation, see where the results are leading, etc.) “I see” is more than a metaphor for understanding in my case. I get the impression that mathematicians think of math in a far more symbolic manner than I do. (It doesn’t mean that I can’t turn the crank on a symbolic expression, but every symbolic expression ‘looks alike’ to me without the visualization hook.)
In order to create a viable open-source version of a project on this scale there would have to be a large online population of programmers with both the mathematical expertise to work on the problem and the inclination to spend their free time doing so. Unfortunately, I’m pretty sure that the necessary manpower doesn’t exist. Very few programmers have that particular combination of skills and interests, and those who do tend to already be vital keystones of projects in cryptography or other math-heavy fields. So where would an open source project find enough contributors to get anywhere?
@E. William Brown
> So where would an open source project find enough contributors to get anywhere?
Apart from the fact that such open source projects already exist (see comments above), contributors should be found among users – in math and computer science academia… and should be, given how important reproducibility is in the sciences.
@E. William Brown:
As has been pointed out, such things already exist, so you are predicting the impossibility of something that exists already.
By the way, the ability to have a very small kernel logic, such as the Predicative Calculus of Inductive Constructions, on the basis of which one’s reasoning is conducted, where proof objects can be verified with a very small program, is not merely quite practical but already something done in several systems. (The small kernel calculus implementation can in fact itself be formally verified without Gödel’s incompleteness theorem being a problem — the incompleteness theorem says you can never know if the core calculus is consistent, not that you can’t check whether a particular program checks its rules accurately.)
BTW, the state of the art in formal verification of software has advanced very far in recent years. We have a formally verified open source microkernel (seL4), a formally verified open source C compiler (CompCert), and loads of other interesting developments. A few decades ago no one thought we would ever have reasonable sized formally verified programs, but the art has changed enormously, largely because of all the engineering breakthroughs that have been made in extant proof assistant systems, all of which are open source. Coq in particular won the ACM Software System Award this year.
I wonder whether there is a way of defining some kind of Occamesque ‘elegance’ metric, and only store and calculate the theorems that are of a certain elegance, or devote a lot of processing time to generalisation of existing theorems, meaning that more can be proven using fewer theorems.
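A crude sketch of what such a metric might look like (every field name here is a hypothetical placeholder, and the weighting is purely illustrative): reward theorems whose statements are short relative to the work they do downstream, and deprioritize the rest:

    def elegance(thm):
        # Occam-flavored score: short statements that many later results
        # cite rank high; long, never-used machine output ranks low.
        size = len(thm["statement"]) + len(thm["proof"])
        return (1 + thm["cited_by"]) / size

    theorems = [
        {"statement": "a^2 + b^2 = c^2", "proof": "(600 steps)", "cited_by": 10000},
        {"statement": "x" * 400, "proof": "(2 steps)", "cited_by": 0},
    ]
    theorems.sort(key=elegance, reverse=True)  # keep or extend the top first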