Average Error: 0.1 → 0.1
Time: 2.1s
Precision: 64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[\mathsf{fma}\left(0.95492965855137202, x, \left(-0.129006137732797982\right) \cdot {x}^{3}\right)\]
double f(double x) {
        // 0.954929658551372 * x - 0.12900613773279798 * ((x * x) * x)
        double x_cubed = (x * x) * x;
        return 0.954929658551372 * x - 0.12900613773279798 * x_cubed;
}

#include <math.h>

double f(double x) {
        // fma(a, x, c) computes a * x + c with a single rounding
        return fma(0.954929658551372, x, -0.12900613773279798 * pow(x, 3));
}

Error

[Plot: bits error versus x]

Derivation

  1. Initial program (error 0.1)

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied fma-neg (error 0.1)

    \[\leadsto \color{blue}{\mathsf{fma}\left(0.95492965855137202, x, -0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\right)}\]
  4. Simplified (error 0.1)

    \[\leadsto \mathsf{fma}\left(0.95492965855137202, x, \color{blue}{\left(-0.129006137732797982\right) \cdot {x}^{3}}\right)\]
  5. Final simplification (error 0.1)

    \[\leadsto \mathsf{fma}\left(0.95492965855137202, x, \left(-0.129006137732797982\right) \cdot {x}^{3}\right)\]
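The fma-neg step rests on the standard semantics of fused multiply-add, which evaluates a product and sum with a single rounding; a sketch of the reasoning (using the usual definition of fma, not stated in the report itself):

\[\mathsf{fma}(a, b, c) = \operatorname{round}(a \cdot b + c), \qquad a \cdot x - t \;\leadsto\; \mathsf{fma}(a, x, -t)\]

Rewriting the subtraction this way replaces two roundings (one for the product \(a \cdot x\), one for the subtraction) with one, which is why the rule can reduce error even though the expressions are mathematically equal.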

Reproduce

herbie shell --seed 2020049 +o rules:numerics
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))