Average Error: 0.2 → 0.1
Time: 2.6s
Precision: 64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.95492965855137202 \cdot x - \left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x\]
// Initial program (input)
double f(double x) {
        double r23736 = 0.954929658551372;
        double r23737 = x;
        double r23738 = r23736 * r23737;
        double r23739 = 0.12900613773279798;
        double r23740 = r23737 * r23737;
        double r23741 = r23740 * r23737;
        double r23742 = r23739 * r23741;
        double r23743 = r23738 - r23742;
        return r23743;
}

// Improved program (output)
double f(double x) {
        double r23744 = 0.954929658551372;
        double r23745 = x;
        double r23746 = r23744 * r23745;
        double r23747 = 0.12900613773279798;
        double r23748 = r23745 * r23745;
        double r23749 = r23747 * r23748;
        double r23750 = r23749 * r23745;
        double r23751 = r23746 - r23750;
        return r23751;
}

Error

[Plot: bits of error versus x]


Derivation

  1. Initial program [error: 0.2]

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied associate-*r* [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x}\]
  4. Final simplification [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - \left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x\]

Reproduce

herbie shell --seed 2020018 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))