The above definition seems to be very obvious and natural. But then this truth will prove that Mark Twain is correct – “Majority is always in the wrong”. Real numbers are not objects of nature, and therefore they must be false. Real numbers are points on a straight line. But there is no straight line in nature, because every object in the universe is continuously moving. Thus the foundation of mathematics is wrong, and therefore the entire mathematics must be wrong. Since money is also a real number, money must be false. Thus the entire economics must be false too. Since money controls everything on our earth, everything must be false or wrong.

However, engineering is not false, because engineering uses objects of nature. But since engineering also uses false mathematics, false science, and false money, engineering is unreliable, breaks down, and pollutes the environment. Society is severely suffering; poverty is spreading; wars, migration, violence, discrimination, etc., are destroying everything around us; and all because of false science and false money. If you eliminate money and create a moneyless economy (MLE), then heaven will come to earth.

– the proof of the Kepler Conjecture in 2017 which straddles the HOL Light and Isabelle/HOL proof assistants by Hales and his many collaborators (500k+ lines of code, 20+ person years effort): http://www.proof-technologies.com/flyspeck/

– the proof in Coq of the Feit-Thompson (odd order) theorem in abstract algebra in 2012 by Gonthier et al. (170k+ loc, 10+ py): https://github.com/math-comp/odd-order

– the correctness proof of the seL4 operating system kernel, 2009 and onwards, in Isabelle/HOL by Klein et al. (200k+ loc, 22+ py): https://sel4.systems

– the correctness proof of the CompCert C compiler, 2006 and onwards, in Coq by Leroy et al. (40k+ loc, 8+ py): http://compcert.inria.fr

I think that many CS researchers (at least in Theory B) would say that these proofs have indeed provided new insights besides platinum certification. However, another important positive consequence has been “spillovers” in formal mathematics library development and proof assistant tooling improvements, which other researchers have continued to build on.
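For readers who have never seen one, a formally verified proof is just a program that the proof assistant mechanically checks. A toy illustration in Lean 4 (not taken from any of the projects above; the statement and lemma name are just standard-library material):

```lean
-- A complete, machine-checked proof that addition of natural numbers
-- is commutative, discharged by the standard-library lemma Nat.add_comm.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The projects listed above are this, scaled up by five or six orders of magnitude.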

1) Many practical situations are quite simply extremely difficult to tackle using exact or “closed form” methods, so often simulations or numerical methods are the only available, practical approach.

2) Even when analytical methods might be possible, the expertise to apply them is not widely known, and the results are not easily verifiable either.

3) Sometimes applying an analytical approach might take longer (in thinking time) than doing a simulation.

4) Even if you manage to apply an analytical approach, if someone suggests a simple change to the model, you may have to do a full re-analysis and hope another analytical approach will work. If a numerical method was used, often you can simply change that aspect of the calculation and run it again. I.e., numerical methods are often more flexible.

5) Computing resources are easily available – even the simplest laptop can quickly run a quite sophisticated simulation.

6) Even analytical methods often end up relying on numerical methods. You can use the formula for solving a 3rd-degree equation all you want, but at the end of the day you need to calculate some square roots to get a definite answer, and how are you going to do them? Numerical methods.

7) Even as recently as the ’80s and ’90s, lay people understood what a mathematical proof was, and it held some respect. Today, many people are “unaware” of mathematical proof or don’t hold it in high regard. I have even heard things like “Yeah, proved until someone finds the opposite”, effectively treating a mathematical proof on the same level as empirical findings.

etc. etc.
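Point 6 can be made concrete: even the “exact” cubic formula bottoms out in root extractions, and those are computed iteratively. A minimal sketch in Python of Newton’s method for square roots (the function name and tolerance are illustrative, not from the original):

```python
def newton_sqrt(a, tol=1e-12):
    """Approximate sqrt(a) for a > 0 via Newton's method on x^2 - a = 0."""
    x = a if a >= 1 else 1.0   # any positive starting guess converges
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)  # Newton step: average x with a/x
    return x

print(newton_sqrt(2.0))  # converges in a handful of iterations
```

So the “analytical” answer 2^(1/3) + sqrt(5) is, in practice, a numerical computation anyway.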

I think there’s also a huge loss going in this direction, but it is what I am seeing. Maybe it is part of an even bigger trend of companies only chasing easy wins, meaning most R&D departments are mostly just applying existing knowledge and doing very little research these days. Of course, this also depends on where you set the bar for “research”. Is putting some neurons together and running Tensorflow “true” research? I would say not, but clearly the mainstream disagrees. Maybe that is why we are – arguably – seeing less innovation than we used to.

https://www.technologyreview.com/s/530901/technology-stalled-in-1970/

I can also see the other side of the coin – I got a lot of respect for exactly solving a model related to a biological problem 20 years ago as a PhD student. The formulas came out very elegant and just surprising enough to be beautiful. The others working in the field had relied on “simulations in spreadsheets” because of an inability to apply the math, which was by no means hard (relying just on elementary analysis); that’s why they were impressed. But to tell the truth, from a practical point of view I don’t think it changed anything. Sure, it looked very beautiful. But it was from an algebraic point of view that it was elegant and surprising, seeing these short formulas matching up with reality – not from a practical one. What they had before, however crudely it had been calculated, was “good enough” for practical purposes.

The classification of finite simple groups is currently “in the literature” in the sense that if you take the results in hundreds of journal articles (some of which prove things in far more generality than necessary for the proof) then a number of experts know how to put these results together to create a proof. My understanding (I am an algebraic number theorist, not a group theorist) is that there is currently no good “map” through the proof in the literature, however the experts believe it is complete (and I have no reason to disbelieve the experts). However the experts are dying out. Young people are not going into the area of finite group theory, because “the main problem has been resolved”, and the proposed 12 volume “second generation” proof is still only seven volumes in, with only one volume published in the last 10 years. The authors of these proposed volumes are in their 70s. I am genuinely concerned that the details of this proof are going to die.

Jim Arthur was recently awarded both the Wolf Prize and the Leroy P. Steele Prize, both mainly for lifetime achievement; however, both were no doubt sparked by his magnificent monograph “The endoscopic classification of representations”. The main results of this monograph are a spectacular classification of automorphic representations for many classical groups. The results rely on four manuscripts (references [A24] to [A27] in the monograph) which are “in preparation”. The book was published in 2013. These manuscripts have still not appeared. I asked Arthur last month and he assures me that he will get back to them at some point. Arthur is also in his 70s. Let me absolutely stress that I am in no way suggesting that there might be problems in the ideas Arthur has which will fill in the gaps in his monograph. However let me also suggest that if Arthur is hit by a bus tomorrow then actually we have a problem here. I have asked several leading number theorists if they feel confident of being able to prove the missing results themselves, and every single one has said “no, but I bet X could do it”; I then ask X, who says “no, but I bet Y could do it”, etc. I am genuinely concerned that the details of this proof are going to die.

I found Voevodsky’s 2014 IAS talk https://www.math.ias.edu/vladimir/sites/math.ias.edu.vladimir/files/2014_IAS.pdf very interesting. I am a working mathematician and I fully appreciate that the notion of proof in our subject is in pretty good shape. However the story Voevodsky tells in this talk leads me to the conclusion that it is not in as good a shape as many commentators here believe it to be. In particular I find Voevodsky’s claim “A technical argument by a trusted author, which is hard to check and looks similar to arguments known to be correct, is hardly ever checked in detail.” very disturbing. Voevodsky cites examples of mistakes in articles by both Bloch and himself in this talk. There are other examples of major works by prestigious authors turning out to be problematic. Manin’s proof of the Mordell conjecture for function fields over the complexes was accepted by the community for 25 years before Coleman found a mistake in it in the late ’80s (and also, fortunately in this case, a modification which fixed the proof). There is a mistake in Faltings-Chai about extending abelian schemes over an open set in a base scheme which invalidates some of the arguments — but the experts know what is ok and what is not. I would like to argue that “by induction” it is very likely that results which are currently accepted by the community will be rejected later. This for me is a problem. When this happens, experts will be able to assess the damage and will know what is still OK and what is not OK, assuming the area is still in fashion and there are still experts around. I am not naive enough to believe that areas currently in fashion will remain in fashion forever. This concerns me.

Mathematics is done by humans, and issues of the latter kind (proofs turning out to be wrong) are inevitable. But proofs dying out are more concerning to me. Other mathematicians have given me other examples in their areas, both of proofs which rely on results which were never satisfactorily written up and of proofs which rely on additional expertise which is dying out. If and when the computer revolution comes and substantial databases of formally verified mathematics start being built, I envisage problems when it comes to proofs which have not been correctly documented, and proofs which turn out to be not quite right, especially if they are in areas which are no longer regarded as fashionable e.g. because the major problems have been resolved to the community’s satisfaction.

I am not attempting to defend Horgan’s article, however I do believe there is some sort of debate to be had here, and as computer proof verification systems become more powerful these issues will slowly start to emerge. I do not know how long this will take. I have seen with my own eyes the deficiencies of the refereeing process, and I have seen with my own eyes the vast amount of work which one has to put in to prove even the simplest MSc-level mathematical results in a formal proof verification system, and I am not offering any answers. However I do believe that the era of formal proof verification systems will come, and when it comes I am concerned that poorly documented and incorrect proofs in areas which have dropped out of fashion will prove very difficult to fix.

I must admit that I don’t recall a theorem at those levels in the 2010s.

Second remark: if interested in Voevodsky’s type theory work, I would consult Mark Bickford.

1) I saw Michael Rabin give a talk on randomized algorithms for primality back in 1980. What struck me was that he said:

If the probability that the algorithm fails is 1/2^{100}, that’s about the same as the probability that a deterministic algorithm fails because a cosmic ray flips a bit.

so YEAH, good enough!
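For concreteness, here is a minimal Python sketch of the Miller–Rabin style test Rabin was describing (the function name, small-prime filter, and round count are illustrative choices, not from the talk). Each round wrongly passes a composite with probability at most 1/4, so 100 independent rounds drive the error below 4^{-100} – the “1/2^{100}” flavor of guarantee in the quote:

```python
import random

def is_probable_prime(n, rounds=100, seed=None):
    """Miller-Rabin: 'composite' answers are certain; 'prime' answers
    are wrong with probability at most 4^(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # cheap filter for small factors
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    rng = random.Random(seed)
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)  # random base for this round
        x = pow(a, d, n)             # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime
```

Note the asymmetry: a “composite” verdict is a proof, while a “prime” verdict is empirical evidence of exactly the cosmic-ray-grade reliability Rabin mentioned.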

2) I think empirical methods are GOOD for intuition and as the basis for conjectures, but not for proof. As such, perhaps such methods should have more of a place in the math community.

3) If someone claims to resolve P vs NP (or some other hard problem) and mistakes are found and that person RETRACTS, that’s FINE – that person still has my respect.

If someone claims to resolve P vs NP (or …) and mistakes are found and they NEVER RETRACT, that’s BAD, and that person does not have my respect.

Horgan is in a funny position in terms of my respect (yes, Horgan has sleepless nights wondering if he has my respect 🙂 ). He DID retract, but 25 years later. And what he retracted was not a proof, but a … prediction? If viewed as a prediction, then maybe 25 years is a fine amount of time to wait.

4) One odd issue here is just the word “proof”. The phrases “interactive proof” (e.g., IP = PSPACE) and “proof of the Poincaré Conjecture” really are two different uses of the word that are perhaps being confused.
