
Automatically improving accuracy for floating point expressions

PLDI 2015, pages 1–11


Abstract

Scientific and engineering applications depend on floating point arithmetic to approximate real arithmetic. This approximation introduces rounding error, which can accumulate to produce unacceptable results. While the numerical methods literature provides techniques to mitigate rounding error, applying these techniques requires manually rearranging expressions and understanding the finer details of floating point arithmetic.

Introduction
  • Floating point rounding errors are notoriously difficult to detect and debug [24, 25, 38].
  • Floating point arithmetic makes these computations feasible, but it introduces rounding error, which may cause the approximate results to differ from the ideal real-number results by an unacceptable margin.
  • When these floating point issues are discovered, many developers first try perturbing the code until the answers produced for their test inputs look right.
  • Even the largest hardware-supported precision may still exhibit unacceptable rounding error, and increasing precision further would require simulating floating point in software, incurring an orders-of-magnitude slowdown (see the sketch after this list).
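To make these bullets concrete, the following is an illustrative sketch rather than an example from the paper: the expression sqrt(x + 1) - sqrt(x) cancels catastrophically for large x in double precision, and software-simulated higher precision (Python's decimal module stands in for such a library here) recovers the answer at the cost of much slower arithmetic.

```python
import math
from decimal import Decimal, getcontext

# Illustrative example (not from the paper): sqrt(x + 1) - sqrt(x) for large x.
# The two square roots agree in almost all of their bits, so the subtraction
# cancels the significant digits and leaves mostly rounding error.
def naive(x: float) -> float:
    return math.sqrt(x + 1.0) - math.sqrt(x)

# Software-simulated higher precision: accurate, but every operation is far
# slower than a hardware double.
def high_precision(x: float, digits: int = 50) -> Decimal:
    getcontext().prec = digits
    d = Decimal(x)
    return (d + 1).sqrt() - d.sqrt()

x = 1e16
print(naive(x))           # 0.0 -- in doubles, x + 1 already rounds back to x
print(high_precision(x))  # ~5e-9, the correct answer
```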
Highlights
  • Floating point rounding errors are notoriously difficult to detect and debug [24, 25, 38]
  • Floating point arithmetic makes these computations feasible, but it introduces rounding error, which may cause the approximate results to differ from the ideal real-number results by an unacceptable margin
  • We evaluate Herbie on examples drawn from a classic numerical methods textbook [19] and consider its broader applicability to floating point expressions extracted from a mathematical library as well as formulas from recent scientific articles
  • Our results demonstrate that Herbie can effectively discover transformations that substantially improve accuracy while imposing a median overhead of 40%
  • In addition to the case studies described above, we evaluated Herbie on benchmarks drawn from Hamming’s Numerical Methods for Scientists and Engineers (NMSE) [19], a standard textbook for applying numerical analysis to scientific and engineering computations
  • Herbie automatically improves the accuracy of floating point expressions by randomly sampling inputs, localizing error, generating candidate rewrites, and merging rewrites with complementary effects; the sampling-based error measure is sketched after this list.
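A minimal sketch of the sampling-based error measure behind that pipeline is shown below. It is not Herbie's implementation: a 50-digit Decimal evaluation stands in for the paper's arbitrary-precision oracle, and the score (log2 of the distance in floating-point values between the approximate and the accurately computed result) is in the spirit of the paper's bits-of-error metric, not its exact definition.

```python
import math
import random
import struct
from decimal import Decimal, getcontext

getcontext().prec = 50  # high-precision oracle standing in for exact real arithmetic

def ordinal(f: float) -> int:
    """Map a double onto the integers so that adjacent doubles differ by 1 (sign-folded)."""
    u = struct.unpack("<Q", struct.pack("<d", f))[0]
    return u if u < (1 << 63) else (1 << 63) - u

def bits_of_error(approx: float, accurate: float) -> float:
    """Roughly: how many low-order bits of the answer are wrong."""
    return math.log2(1 + abs(ordinal(approx) - ordinal(accurate)))

def candidate(x: float) -> float:
    # Expression under test; it cancels badly for large x.
    return math.sqrt(x + 1.0) - math.sqrt(x)

def oracle(x: float) -> float:
    # Same expression evaluated with 50-digit Decimals, then rounded to a double.
    d = Decimal(x)
    return float((d + 1).sqrt() - d.sqrt())

# Sample inputs across many orders of magnitude and average the per-point error.
random.seed(0)
points = [random.uniform(1.0, 10.0) * 10.0 ** random.randint(0, 15) for _ in range(1000)]
avg = sum(bits_of_error(candidate(x), oracle(x)) for x in points) / len(points)
print(f"average bits of error for sqrt(x+1) - sqrt(x): {avg:.1f}")
```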
Results
  • In addition to the case studies described above, the authors evaluated Herbie on benchmarks drawn from Hamming’s Numerical Methods for Scientists and Engineers (NMSE) [19], a standard textbook for applying numerical analysis to scientific and engineering computations.
  • The authors' evaluation includes twenty-eight worked examples and problems from Chapter 3, which discusses manually rearranging formulas to improve accuracy, the same task that Herbie automates.
  • Per-benchmark results in double precision cover NMSE tests including qlog, logq, expq3, 2cos, sqrtexp, expm1, expq2, 2sin, 2tan, and quadm; a textbook example of the kind of rearrangement these benchmarks exercise is sketched after this list.
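The kind of rearrangement that chapter works through, and that Herbie rediscovers automatically, can be illustrated with the smaller-magnitude root of a quadratic. The snippet below is a standard textbook example, not output produced by Herbie.

```python
import math

# For x^2 + b*x + c with b > 0, the root of smaller magnitude is
# (-b + sqrt(b^2 - 4c)) / 2.  When b is large, sqrt(b^2 - 4c) is nearly b,
# so the addition cancels almost every significant digit.
def small_root_naive(b: float, c: float) -> float:
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

# Multiplying numerator and denominator by (-b - sqrt(b^2 - 4c)) gives the
# equivalent form 2c / (-b - sqrt(b^2 - 4c)), which has no cancellation.
def small_root_stable(b: float, c: float) -> float:
    return 2.0 * c / (-b - math.sqrt(b * b - 4.0 * c))

b, c = 1e8, 1.0
print(small_root_naive(b, c))   # loses most of its significant digits
print(small_root_stable(b, c))  # ~ -1e-8, accurate to double precision
```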
Conclusion
  • Herbie automatically improves the accuracy of floating point expressions by randomly sampling inputs, localizing error, generating candidate rewrites, and merging rewrites with complementary effects; a toy sketch of the sample-and-select step follows this list.
  • The authors' results demonstrate that Herbie can effectively discover transformations that substantially improve accuracy while imposing a median overhead of 40%.
  • The authors will extend Herbie to reduce error accumulation within loops.
  • The authors would like to explore combining Herbie with FPDebug, FPTaylor, Rosa, and STOKE.
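As a toy illustration of the sample-and-select step referenced above (and only that step; the paper's search also localizes error, draws candidates from a database of rewrite rules, takes series expansions, and merges candidates that win on different input regions), the sketch below scores two hand-written candidates against a high-precision Decimal oracle on random samples and keeps the better one.

```python
import math
import random
from decimal import Decimal, getcontext

getcontext().prec = 50
random.seed(1)

# Two algebraically equivalent candidates for sqrt(x + 1) - sqrt(x).
candidates = {
    "naive":      lambda x: math.sqrt(x + 1.0) - math.sqrt(x),
    "rearranged": lambda x: 1.0 / (math.sqrt(x + 1.0) + math.sqrt(x)),
}

def accurate(x: float) -> float:
    # High-precision evaluation of the original expression, rounded to a double.
    d = Decimal(x)
    return float((d + 1).sqrt() - d.sqrt())

def avg_relative_error(f, points) -> float:
    total = 0.0
    for x in points:
        true_value = accurate(x)
        total += abs(f(x) - true_value) / abs(true_value)
    return total / len(points)

# Keep whichever candidate does best on the sampled inputs.
points = [10.0 ** random.uniform(0.0, 15.0) for _ in range(200)]
best = min(candidates, key=lambda name: avg_relative_error(candidates[name], points))
print("selected candidate:", best)  # "rearranged" wins on these samples
```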
Related work
  • Program Transformations. M. Martel proposed a bounded exhaustive search for algebraically equivalent programs for which a better accuracy bound could be statically proven [28]. Martel’s line of work builds an abstract interpretation to bound rounding errors using a sound over-approximation. His technique then generates a set of programs equivalent over the real numbers, and chooses the one with the smallest rounding error. Martel’s approach, since it is based on abstract interpretation, generalizes well to programs with loops [22]. However, the bounded exhaustive search limits the program transformations that can be found, since a brute-force search cannot scale with a large database of rewrites. It is also dependent on accurate static analyses for error, which makes supporting transcendental functions difficult. Herbie is fundamentally different from Martel’s work in its use of sampling rather than static analysis, its use of a guided search over brute-force enumeration, and its ability to change programs without preserving their real semantics, such as with series expansion.
Funding
  • This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1256082.
Reference
  • [1] M. Altman and M. McDonald. The robustness of statistical abstractions. Political Methodology, 1999.
  • [2] M. Altman and M. P. McDonald. Replication with attention to numerical accuracy. Political Analysis, 11(3):302–307, 2003. URL http://pan.oxfordjournals.org/content/11/3/302.abstract.
  • [3] M. Altman, J. Gill, and M. P. McDonald. Numerical Issues in Statistical Computing for the Social Scientist. Springer-Verlag, 2003.
  • [4] E. T. Barr, T. Vo, V. Le, and Z. Su. Automatic detection of floating-point exceptions. POPL ’13, 2013.
  • [5] F. Benz, A. Hildebrandt, and S. Hack. A dynamic program analysis to find floating-point accuracy problems. PLDI ’12, pages 453–462, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1205-9. URL http://doi.acm.org/10.1145/2254064.2254118.
  • [6] S. Boldo. Kahan’s algorithm for a correct discriminant computation at last formally proven. IEEE Transactions on Computers, 58(2):220–225, Feb. 2009. URL http://hal.inria.fr/inria-00171497/.
  • [7] S. Boldo, F. Clément, J.-C. Filliâtre, M. Mayero, G. Melquiond, and P. Weis. Wave Equation Numerical Resolution: a Comprehensive Mechanized Proof of a C Program. Journal of Automated Reasoning, 50(4):423–456, Apr. 2013. URL http://hal.inria.fr/hal-00649240/en/.
  • [8] W.-F. Chiang, G. Gopalakrishnan, Z. Rakamaric, and A. Solovyev. Efficient search for inputs causing high floating-point errors. pages 43–52. ACM, 2014.
  • [9] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979. URL http://www.jstor.org/stable/3689577.
  • [10] E. Darulova and V. Kuncak. Sound compilation of reals. POPL ’14, pages 235–248, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2544-8. URL http://doi.acm.org/10.1145/2535838.2535874.
  • [11] E. Darulova, V. Kuncak, R. Majumdar, and I. Saha. Synthesis of fixed-point programs. EMSOFT ’13, pages 22:1–22:10, Piscataway, NJ, USA, 2013. IEEE Press. ISBN 978-1-4799-1443-2. URL http://dl.acm.org/citation.cfm?id=2555754.2555776.
  • [12] M. Daumas and G. Melquiond. Certification of bounds on expressions involving rounded operators. ACM Trans. Math. Softw., 37(1):2:1–2:20, Jan. 2010. URL http://gappa.gforge.inria.fr/.
  • [13] J. de Jong. math.js: An extensive math library for JavaScript and Node.js. URL http://mathjs.org/.
  • [14] H. Eldib and C. Wang. An SMT based method for optimizing arithmetic computations in embedded software code. FMCAD ’13, 2013.
  • [15] European Commission. The introduction of the euro and the rounding of currency amounts. Euro papers. European Commission, Directorate General II Economic and Financial Affairs, 1998.
  • [16] L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, and P. Zimmermann. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Transactions on Mathematical Software, 33(2):13:1–13:15, June 2007. URL http://doi.acm.org/10.1145/1236463.1236468.
  • [17] D. Goldberg. What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv., 23(1):5–48, Mar. 1991. URL http://doi.acm.org/10.1145/103162.103163.
  • [18] E. Goubault and S. Putot. Static analysis of finite precision computations. VMCAI ’11, pages 232–247, Berlin, Heidelberg, 2011. Springer-Verlag. ISBN 978-3-642-18274-7. URL http://dl.acm.org/citation.cfm?id=1946284.1946301.
  • [19] R. Hamming. Numerical Methods for Scientists and Engineers. Dover Publications, 2nd edition, 1987.
  • [20] N. J. Higham. Accuracy and Stability of Numerical Algorithms. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2nd edition, 2002. ISBN 0898715210.
  • [21] IEEE. IEEE standard for binary floating-point arithmetic. IEEE Std. 754-2008, 2008.
  • [22] A. Ioualalen and M. Martel. Synthesizing accurate floating-point formulas. ASAP ’13, pages 113–116, June 2013.
  • [23] W. Kahan. A Survey of Error Analysis. Defense Technical Information Center, 1971. URL http://books.google.com/books?id=dkW7tgAACAAJ.
  • [24] W. Kahan. Miscalculating area and angles of a needle-like triangle. Technical report, University of California, Berkeley, Mar. 2000. URL http://www.cs.berkeley.edu/~wkahan/Triangle.pdf.
  • [25] W. Kahan and J. D. Darcy. How Java’s floating-point hurts everyone everywhere. Technical report, University of California, Berkeley, June 1998. URL http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf.
  • [26] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2005. ISBN 0321295358.
  • [27] V. Lefèvre and J.-M. Muller. The table maker’s dilemma: our search for worst cases. Technical report, École Normale Supérieure de Lyon, Oct. 2003. URL http://perso.ens-lyon.fr/jean-michel.muller/Intro-to-TMD.htm.
  • [28] M. Martel. Program transformation for numerical precision. PEPM ’09, pages 101–110, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-327-3. URL http://doi.acm.org/10.1145/1480945.1480960.
  • [29] B. D. McCullough and H. D. Vinod. The numerical reliability of econometric software. Journal of Economic Literature, 37(2):633–665, 1999.
  • [30] D. Monniaux. The pitfalls of verifying floating-point computations. ACM Trans. Program. Lang. Syst., 30(3):12:1–12:41, May 2008. URL http://doi.acm.org/10.1145/1353445.1353446.
  • [31] C. G. Nelson. Techniques for Program Verification. PhD thesis, Stanford University, 1979.
  • [32] P. Panchekha. Numerical imprecision in complex square root, 2014. URL https://github.com/josdejong/mathjs/pull/208.
  • [33] P. Panchekha. Accuracy of sinh and complex cos/sin, 2014. URL https://github.com/josdejong/mathjs/pull/247.
  • [34] K. Quinn. Ever had problems rounding off figures? This stock exchange has. The Wall Street Journal, page 37, November 8, 1983.
  • [35] C. Rubio-González, C. Nguyen, H. D. Nguyen, J. Demmel, W. Kahan, K. Sen, D. H. Bailey, C. Iancu, and D. Hough. Precimonious: Tuning assistant for floating-point precision. SC ’13, 2013.
  • [36] E. Schkufza, R. Sharma, and A. Aiken. Stochastic optimization of floating point programs using tunable precision. PLDI ’14, 2014.
  • [37] A. Solovyev, C. Jacobsen, Z. Rakamaric, and G. Gopalakrishnan. Rigorous estimation of floating-point round-off errors with symbolic Taylor expansions. FM ’15. Springer, June 2015.
  • [38] N. Toronto and J. McCarthy. Practically accurate floating-point math. Computing in Science Engineering, 16(4):80–95, July 2014.
Author
Alex Sanchez-Stern
James R. Wilcox