Japan Slices The Biggest Pi Ever
from the well,-that's-useful dept
Is it just me, or is it a really slow news day? It seems that some researchers in Japan have calculated pi to 1.2411 trillion places, roughly six times the old world record (talk about trouncing some poor mathematician's claim to fame). It took a supercomputer 400 hours to calculate the answer, but (get this) it took the team five years to write the software for the supercomputer to do this. To be honest, I'm a bit disappointed in the team. I mean, why stop at 1.2411 trillion places? If you've gone that far, you might as well continue on a few more billion, right? Anyway, in case you were wondering, there is no practical use for pi to 1.2411 trillion places.
Reader Comments
Nature of the business
Because computers store numbers as digital bits in base-2 form, it is mathematically impossible to store the exact value "0.1". It is not irrational (it's a perfectly ordinary rational number), but it has no finite base-2 representation: 0*1/2 + 0*1/4 + 0*1/8 + 1*1/16 + 1*1/32 + 0*1/64 + 0*1/128 + 1*1/256 + ..., repeating forever. Computers approximate such values with long, truncated sequences of negative powers of 2.
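A couple of lines of Python make the approximation visible (the exact digits shown assume standard IEEE 754 double precision):

```python
# 0.1 is stored as the nearest representable binary fraction,
# not as the exact decimal value:
print(f"{0.1:.20f}")       # 0.10000000000000000555
print(0.1 + 0.2 == 0.3)    # False: both sides carry tiny binary errors
```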
To avoid this representation problem, one would have to scale decimal values by powers of ten, converting the decimal-place digits into exact integers.
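For what it's worth, Python's standard decimal module works roughly this way: a value constructed from a string is kept as an integer coefficient plus a power-of-ten exponent, so decimal arithmetic comes out exact. A small sketch:

```python
from decimal import Decimal

# Built from a string, 0.1 is stored exactly as coefficient 1
# with exponent -1, i.e. 1 * 10**-1:
print(Decimal("0.1").as_tuple())    # DecimalTuple(sign=0, digits=(1,), exponent=-1)
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))    # True

# Built from a float, you just get the binary approximation back:
print(Decimal(0.1))    # 0.1000000000000000055511...
```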
And yes, calculating pi to trillions of digits does have a practical purpose. Some mathematical conjectures have been shown to fail only at very large numbers. If a comparable discovery were made for pi, it would have profound implications for mathematics, and therefore for cryptology, the engineering sciences, physics, cosmology, everything.
Re: Nature of the business
You were really beginning to frighten me there for a moment, dorpus... you actually had a rational thought... then you destroyed it.
Computers do not have any problem storing floating point numbers (of finite precision). Calculating with them can cause problems, but storing them does not. Most modern programming languages have single- and double-precision floating point data types, and dealing with floating point storage is quite easy even in assembly language. To a computer, storing 0.1 is no different than storing 42, or 270,000.147.
True, the machine stores a floating point in base-2 form, just like every other bit of data, so computers are as good at storing floating point numbers as they are at storing strings, integers, etc. Computers don't know how to use base-10 for storing bits, nor do they care about it except when they need to display the results to us.
However, in this particular exercise, you don't think the computer is actually computing the value of pi as one operation, do you? The calculation of pi is one a computer can do very easily: it breaks the computation down into an (infinite) series of computations of smaller values, using fractions, arctangents, etc. This is something a computer can do.
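For illustration, one classic arctangent series of this kind (not necessarily the one the Japanese team used) is Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239). Scaled by a power of ten, it needs nothing but integer arithmetic; a minimal Python sketch:

```python
def arctan_recip(x, one):
    """arctan(1/x) scaled by `one`, from the Taylor series
    arctan(1/x) = 1/x - 1/(3*x**3) + 1/(5*x**5) - ..."""
    power = one // x        # scaled x**-(2k+1), starting at k = 0
    total = power
    k = 1
    while power:
        power //= x * x
        term = power // (2 * k + 1)
        total += -term if k % 2 else term   # alternating signs
        k += 1
    return total

def pi_digits(digits):
    """pi * 10**digits as an integer, via Machin's formula."""
    one = 10 ** (digits + 10)   # ten guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, one) - arctan_recip(239, one))
    return pi // 10 ** 10       # drop the guard digits

print(pi_digits(20))    # 314159265358979323846
```

The guard digits soak up the error from truncating each division, and every term is just one integer divide: exactly the kind of repetitive small computation a machine grinds through happily.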
Re: Nature of the business
The problem of calculating very precise numbers is an old one in scientific programming. What makes matters worse is that modern CPUs have sacrificed mathematical purity for performance. Even at the design level, a modern CPU is no longer tested for the accuracy of every single possible state; only probabilistic methods are employed. At the manufacturing level, about half of CPUs are thrown out because they make too many calculation errors. The other half are "accurate enough", and the compiler trick you described usually hides the errors from users anyway.
Back in 1994, Intel famously had this problem with the new Pentium chip, which would make mistakes in scientific computations. Since then, chip makers have both cooperated with language designers to sweep the problem under the hood, and also kept a tighter lid on such problems.
The future of chip design may have less to do with speed and more to do with reliability of extremely precise calculations.
Re: Nature of the business
Yours is mostly fiction.
The bug in the Pentium was a few missing entries in the fdiv lookup table. Logically, the mechanism for deriving the results of fdiv to 80 bits is accurate; implementing it wrong is something else.
Also, it is virtually impossible to run every possible combination of states in a modern processor (24 million transistors, i.e. switches), even at a couple GHz. When you're talking emulation before production, double hah. When I worked at Intel, the emulator ran at about 8 Hz. Sure, computers run faster now, but the model is also several times the size. At that size, you can't even do all the standard integer math operations on all 32-bit integers, much less 80-bit floating point.
As for the need for reliability beyond 80 bits... that strikes me as a very small subset of people. Most people would probably be fine just using 64-bit integers and multiplying all values by 10,000 or so, if they need that level of accuracy.
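That trick is plain fixed-point arithmetic. A minimal sketch, assuming the scale factor of 10,000 (four decimal places) suggested above; the helper name fx_mul is made up for illustration:

```python
SCALE = 10_000   # four decimal places, per the comment's suggestion

# 0.1 and 0.2 become exact integers once scaled:
a = SCALE // 10           # 0.1 -> 1000
b = SCALE // 5            # 0.2 -> 2000
print(a + b == 3 * SCALE // 10)   # True: exactly 0.3, no binary rounding

def fx_mul(x, y):
    # a product carries SCALE twice, so it must be rescaled once
    return x * y // SCALE

print(fx_mul(15 * SCALE // 10, 2 * SCALE))   # 30000, i.e. 1.5 * 2 = 3.0000
```

The catch is range: with 64-bit integers and four decimal places, values top out around 9.2 * 10^14, and every multiply or divide needs that explicit rescale.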
Brandon
Re: Nature of the business
When NASA sent astronauts to the moon, they had 3 different computers make orbit and trajectory computations, since they would all come up with different answers.
Re: Nature of the business
Source?
I couldn't believe this: why would NASA use three different computers to come up with orbital and trajectory info if none of them came up with the right answer? These weren't Intel Pentiums, but custom-designed computers, built solely to calculate the flight trajectory and orbit of a space vessel. I couldn't find info on this system at NASA or elsewhere. (They did have a ton of information on the navigational systems within the Apollo spacecraft, but not about the computers on the ground.)
However, when it comes to NASA, I suspect the use of more than one computer for this operation was not due to bad computation, but to redundancy, especially in a real time environment where lives were at stake. If the computer went down, there would be a need to switch to a backup. When I've done real-time application development in the past for the government, they ALWAYS required multiple redundant systems to be in place.
If you have a source to back this claim up, I'd be really interested in reading it.