Average Error: 0.2 → 0.1
Time: 2.9s
Precision: 64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.95492965855137202 \cdot x - \left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x\]
double f(double x) {
        // 0.954929658551372 * x - 0.12900613773279798 * ((x * x) * x)
        double x3 = (x * x) * x;
        return 0.954929658551372 * x - 0.12900613773279798 * x3;
}

double f(double x) {
        // 0.954929658551372 * x - (0.12900613773279798 * (x * x)) * x
        double t = 0.12900613773279798 * (x * x);
        return 0.954929658551372 * x - t * x;
}

Error

[Plot: bits error versus x]


Derivation

  1. Initial program (error 0.2)

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied associate-*r* (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x}\]
  4. Final simplification (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \left(0.129006137732797982 \cdot \left(x \cdot x\right)\right) \cdot x\]
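The associate-*r* rewrite in step 3 is an instance of the real-number identity

\[a \cdot \left(b \cdot c\right) = \left(a \cdot b\right) \cdot c,\]

applied here with \(a = 0.129\ldots\), \(b = x \cdot x\), and \(c = x\). The identity is exact over the reals but not over doubles, because each multiplication rounds its result; choosing the parenthesization whose intermediate roundings cancel better is what reduces the average error from 0.2 to 0.1 bits.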

Reproduce

herbie shell --seed 2020018 +o rules:numerics
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))