Automatically Improving Accuracy for Floating Point Expressions
PLDI '15 (2015), pp. 1–11
Summary: Our results demonstrate that Herbie can effectively discover transformations that substantially improve accuracy while imposing a median overhead of 40%.
Abstract: Scientific and engineering applications depend on floating point arithmetic to approximate real arithmetic. This approximation introduces rounding error, which can accumulate to produce unacceptable results. While the numerical methods literature provides techniques to mitigate rounding error, applying these techniques requires manually r…
- Floating point rounding errors are notoriously difficult to detect and debug [24, 25, 38].
- Floating point arithmetic makes these computations feasible, but it introduces rounding error, which may cause the approximate results to differ from the ideal real-number results by an unacceptable margin.
- When these floating point issues are discovered, many developers first try perturbing the code until the answers produced seem acceptable.
- Even the largest hardware-supported precision may still exhibit unacceptable rounding error, and increasing precision further would require simulating floating point in software, incurring orders of magnitude slowdown.
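The kind of error described above is easy to reproduce. A classic example is the expression sqrt(x+1) - sqrt(x): for large x the two square roots agree in almost all of their bits, so the subtraction cancels nearly every significant digit, while an algebraically equivalent rewrite avoids the problem entirely. A minimal sketch (the input 1e15 is just an illustrative choice):

```python
import math

def naive(x):
    # sqrt(x+1) and sqrt(x) agree in almost all bits for large x,
    # so this subtraction cancels nearly every significant digit.
    return math.sqrt(x + 1) - math.sqrt(x)

def rewritten(x):
    # Multiplying by the conjugate yields an expression equivalent
    # over the reals that avoids the cancelling subtraction.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
print(naive(x), rewritten(x))  # the two results disagree noticeably
```

Both functions compute the same real-number value, yet on IEEE doubles the naive form retains only a couple of accurate bits at this input, which is exactly the class of problem Herbie repairs automatically.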
- We evaluate Herbie on examples drawn from a classic numerical methods textbook and consider its broader applicability to floating point expressions extracted from a mathematical library as well as formulas from recent scientific articles.
- In addition to the case studies described above, we evaluated Herbie on benchmarks drawn from Hamming’s Numerical Methods for Scientists and Engineers (NMSE), a standard textbook for applying numerical analysis to scientific and engineering computations.
- Herbie automatically improves the accuracy of floating point expressions by randomly sampling inputs, localizing error, generating candidate rewrites, and merging rewrites with complementary effects
- The authors' evaluation includes twenty-eight worked examples and problems from Chapter 3, which discusses manually rearranging formulas to improve accuracy, the same task that Herbie automates.
- (Figure: double-precision accuracy results for the qlog, logq, expq3, 2cos, sqrtexp, expm1, expq2, 2sin, 2tan, and quadm benchmarks.)
- The authors plan to extend Herbie to reduce error accumulation within loops.
- The authors would like to explore combining Herbie with FPDebug, FPTaylor and Rosa, and STOKE.
- Program Transformations: M. Martel proposed a bounded exhaustive search for algebraically-equivalent programs for which a better accuracy bound could be statically proven. Martel’s line of work builds an abstract interpretation to bound rounding errors using a sound over-approximation. His technique then generates a set of programs equivalent over the real numbers, and chooses the one with the smallest rounding error. Martel’s approach, since it is based on abstract interpretation, generalizes well to programs with loops. However, the bounded exhaustive search limits the program transformations that can be found, since a brute-force search cannot scale with a large database of rewrites. It is also dependent on accurate static analyses for error, which makes supporting transcendental functions difficult. Herbie is fundamentally different from Martel’s work in its use of sampling rather than static analysis, its use of a guided search over brute-force enumeration, and its ability to change programs without preserving their real semantics, such as with series expansion.
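The bounded exhaustive search described above can be illustrated with a toy enumerator over expression trees. The representation and rule set below are invented for illustration (just commutativity and one direction of distributivity, not Martel's actual system); the point is that the set of reachable equivalents grows quickly with depth, which is why a brute-force search cannot scale to a large rewrite database.

```python
def rewrites(e):
    """Yield every expression one algebraic rewrite step away from e.
    Expressions are leaves (strings) or ("op", left, right) tuples."""
    if not isinstance(e, tuple):
        return
    op, a, b = e
    if op in ("+", "*"):
        yield (op, b, a)                               # commutativity
    if op == "*" and isinstance(b, tuple) and b[0] == "+":
        yield ("+", ("*", a, b[1]), ("*", a, b[2]))    # distribute a*(b+c)
    for a2 in rewrites(a):                             # rewrite inside subterms
        yield (op, a2, b)
    for b2 in rewrites(b):
        yield (op, a, b2)

def equivalents(e, depth):
    """Bounded exhaustive search: every expression reachable
    from e within `depth` rewrite steps."""
    seen, frontier = {e}, {e}
    for _ in range(depth):
        frontier = {r for x in frontier for r in rewrites(x)} - seen
        seen |= frontier
    return seen

expr = ("*", "a", ("+", "b", "c"))
for form in equivalents(expr, 2):
    print(form)
```

Even with two rules, the frontier grows combinatorially in the depth bound, whereas Herbie's guided search only expands candidates at subexpressions where sampling has localized the error.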
- This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082.
- M. Altman and M. McDonald. The robustness of statistical abstractions. Political Methodology, 1999.
- M. Altman and M. P. McDonald. Replication with attention to numerical accuracy. Political Analysis, 11(3):302–307, 2003. URL http://pan.oxfordjournals.org/content/11/3/302.abstract.
- M. Altman, J. Gill, and M. P. McDonald. Numerical Issues in Statistical Computing for the Social Scientist. Springer-Verlag, 2003.
- E. T. Barr, T. Vo, V. Le, and Z. Su. Automatic detection of floating-point exceptions. POPL ’13, 2013.
- F. Benz, A. Hildebrandt, and S. Hack. A dynamic program analysis to find floating-point accuracy problems. PLDI ’12, pages 453–462, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1205-9. URL http://doi.acm.org/10.1145/2254064.2254118.
- S. Boldo. Kahan’s algorithm for a correct discriminant computation at last formally proven. IEEE Transactions on Computers, 58(2):220–225, Feb. 2009. URL http://hal.inria.fr/inria-00171497/.
- S. Boldo, F. Clément, J.-C. Filliâtre, M. Mayero, G. Melquiond, and P. Weis. Wave Equation Numerical Resolution: a Comprehensive Mechanized Proof of a C Program. Journal of Automated Reasoning, 50(4): 423–456, Apr. 2013. URL http://hal.inria.fr/hal-00649240/en/.
- W.-F. Chiang, G. Gopalakrishnan, Z. Rakamaric, and A. Solovyev. Efficient search for inputs causing high floating-point errors. pages 43–52. ACM, 2014.
- V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979. URL http://www.jstor.org/stable/3689577.
- E. Darulova and V. Kuncak. Sound compilation of reals. POPL ’14, pages 235–248, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2544-8. URL http://doi.acm.org/10.1145/2535838.2535874.
- E. Darulova, V. Kuncak, R. Majumdar, and I. Saha. Synthesis of fixed-point programs. EMSOFT ’13, pages 22:1–22:10, Piscataway, NJ, USA, 2013. IEEE Press. ISBN 978-1-4799-1443-2. URL http://dl.acm.org/citation.cfm?id=2555754.2555776.
- M. Daumas and G. Melquiond. Certification of bounds on expressions involving rounded operators. ACM Trans. Math. Softw., 37(1):2:1–2:20, Jan. 2010. http://gappa.gforge.inria.fr/.
- H. Eldib and C. Wang. An SMT based method for optimizing arithmetic computations in embedded software code. FMCAD ’13, 2013.
- European Commission. The introduction of the euro and the rounding of currency amounts. Euro papers. European Commission, Directorate General II Economic and Financial Affairs, 1998.
- L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, and P. Zimmermann. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Transactions on Mathematical Software, 33(2):13:1– 13:15, June 2007. URL http://doi.acm.org/10.1145/1236463.1236468.
- D. Goldberg. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv., 23(1):5–48, Mar. 1991. URL http://doi.acm.org/10.1145/103162.103163.
- E. Goubault and S. Putot. Static analysis of finite precision computations. VMCAI’11, pages 232–247, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 978-3-642-18274-7. URL http://dl.acm.org/citation.cfm?id=1946284.1946301.
- R. Hamming. Numerical Methods for Scientists and Engineers. Dover Publications, 2nd edition, 1987.
- N. J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2nd edition, 2002. ISBN 0898715210.
- IEEE. IEEE standard for binary floating-point arithmetic. IEEE Std. 754-2008, 2008.
- A. Ioualalen and M. Martel. Synthesizing accurate floating-point formulas. ASAP ’13, pages 113–116, June 2013.
- W. Kahan. A Survey of Error Analysis. Defense Technical Information Center, 1971. URL http://books.google.com/books?id=dkW7tgAACAAJ.
- W. Kahan. Miscalculating area and angles of a needle-like triangle. Technical report, University of California, Berkeley, Mar. 2000. URL http://www.cs.berkeley.edu/~wkahan/Triangle.pdf.
- W. Kahan and J. D. Darcy. How Java’s floating-point hurts everyone everywhere. Technical report, University of California, Berkeley, June 1998. URL http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf.
- J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005. ISBN 0321295358.
- V. Lefèvre and J.-M. Muller. The table maker’s dilemma: our search for worst cases. Technical report, École Normale Supérieure de Lyon, Oct. 2003. URL http://perso.ens-lyon.fr/jean-michel.muller/Intro-to-TMD.htm.
- M. Martel. Program transformation for numerical precision. PEPM ’09, pages 101–110, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-327-3. URL http://doi.acm.org/10.1145/1480945.1480960.
- B. D. McCullough and H. D. Vinod. The numerical reliability of econometric software. Journal of Economic Literature, 37(2):633–665, 1999.
- D. Monniaux. The pitfalls of verifying floating-point computations. ACM Trans. Program. Lang. Syst., 30(3):12:1–12:41, May 2008. URL http://doi.acm.org/10.1145/1353445.1353446.
- C. G. Nelson. Techniques for Program Verification. PhD thesis, Stanford University, 1979.
- P. Panchekha. Numerical imprecision in complex square root, 2014. URL https://github.com/josdejong/mathjs/pull/208.
- P. Panchekha. Accuracy of sinh and complex cos/sin, 2014. URL https://github.com/josdejong/mathjs/pull/247.
- K. Quinn. Ever had problems rounding off figures? This stock exchange has. The Wall Street Journal, page 37, November 8, 1983.
- C. Rubio-González, C. Nguyen, H. D. Nguyen, J. Demmel, W. Kahan, K. Sen, D. H. Bailey, C. Iancu, and D. Hough. Precimonious: Tuning assistant for floating-point precision. SC ’13, 2013.
- E. Schkufza, R. Sharma, and A. Aiken. Stochastic optimization of floating point programs using tunable precision. PLDI ’14, 2014.
- A. Solovyev, C. Jacobsen, Z. Rakamaric, and G. Gopalakrishnan. Rigorous estimation of floating-point round-off errors with Symbolic Taylor Expansions. FM ’15. Springer, June 2015.
- N. Toronto and J. McCarthy. Practically accurate floating-point math. Computing in Science Engineering, 16(4):80–95, July 2014.