Average Error: 0.1 → 0.1
Time: 4.5s
Precision: 64
\[0.9549296585513720181381813745247200131416 \cdot x - 0.1290061377327979819096270830414141528308 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.9549296585513720181381813745247200131416 \cdot x - 0.1290061377327979819096270830414141528308 \cdot {x}^{3}\]
/* Input program: the cube is computed as (x * x) * x */
double f(double x) {
        double r17690 = 0.954929658551372;
        double r17691 = x;
        double r17692 = r17690 * r17691;
        double r17693 = 0.12900613773279798;
        double r17694 = r17691 * r17691;
        double r17695 = r17694 * r17691;
        double r17696 = r17693 * r17695;
        double r17697 = r17692 - r17696;
        return r17697;
}

#include <math.h>

/* Output program: the cube is computed via pow(x, 3) */
double f(double x) {
        double r17698 = 0.954929658551372;
        double r17699 = x;
        double r17700 = r17698 * r17699;
        double r17701 = 0.12900613773279798;
        double r17702 = 3.0;
        double r17703 = pow(r17699, r17702);
        double r17704 = r17701 * r17703;
        double r17705 = r17700 - r17704;
        return r17705;
}

Error

[Plot: bits of error versus x]


Derivation

  1. Initial program (error 0.1)

    \[0.9549296585513720181381813745247200131416 \cdot x - 0.1290061377327979819096270830414141528308 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied sub-neg (error 0.1)

    \[\leadsto \color{blue}{0.9549296585513720181381813745247200131416 \cdot x + \left(-0.1290061377327979819096270830414141528308 \cdot \left(\left(x \cdot x\right) \cdot x\right)\right)}\]
  4. Simplified (error 0.1)

    \[\leadsto 0.9549296585513720181381813745247200131416 \cdot x + \color{blue}{\left(-0.1290061377327979819096270830414141528308\right) \cdot {x}^{3}}\]
  5. Using strategy rm
  6. Applied cube-mult (error 0.1)

    \[\leadsto 0.9549296585513720181381813745247200131416 \cdot x + \left(-0.1290061377327979819096270830414141528308\right) \cdot \color{blue}{\left(x \cdot \left(x \cdot x\right)\right)}\]
  7. Applied associate-*r* (error 0.1)

    \[\leadsto 0.9549296585513720181381813745247200131416 \cdot x + \color{blue}{\left(\left(-0.1290061377327979819096270830414141528308\right) \cdot x\right) \cdot \left(x \cdot x\right)}\]
  8. Final simplification (error 0.1)

    \[\leadsto 0.9549296585513720181381813745247200131416 \cdot x - 0.1290061377327979819096270830414141528308 \cdot {x}^{3}\]

Reproduce

herbie shell --seed 2019308 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.95492965855137202 x) (* 0.129006137732797982 (* (* x x) x))))