Average Error: 0.2 → 0.1
Time: 1.4s
Precision: binary64
Input:

\[0.954929658551372 \cdot x - 0.12900613773279798 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]

Output:

\[0.954929658551372 \cdot x - x \cdot \left(0.12900613773279798 \cdot \left(x \cdot x\right)\right)\]
(FPCore (x)
 :precision binary64
 (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))
(FPCore (x)
 :precision binary64
 (- (* 0.954929658551372 x) (* x (* 0.12900613773279798 (* x x)))))
double code(double x) {
	return ((double) (((double) (0.954929658551372 * x)) - ((double) (0.12900613773279798 * ((double) (((double) (x * x)) * x))))));
}
double code(double x) {
	return ((double) (((double) (0.954929658551372 * x)) - ((double) (x * ((double) (0.12900613773279798 * ((double) (x * x))))))));
}

Error

[Plot: bits error versus x]


Derivation

  1. Initial program, error 0.2

    \[0.954929658551372 \cdot x - 0.12900613773279798 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Simplified, error 0.1

    \[\leadsto \color{blue}{0.954929658551372 \cdot x - 0.12900613773279798 \cdot {x}^{3}}\]
  3. Using strategy rm
  4. Applied unpow3_binary64, error 0.2

    \[\leadsto 0.954929658551372 \cdot x - 0.12900613773279798 \cdot \color{blue}{\left(\left(x \cdot x\right) \cdot x\right)}\]
  5. Applied associate-*r*_binary64, error 0.1

    \[\leadsto 0.954929658551372 \cdot x - \color{blue}{\left(0.12900613773279798 \cdot \left(x \cdot x\right)\right) \cdot x}\]
  6. Final simplification, error 0.1

    \[\leadsto 0.954929658551372 \cdot x - x \cdot \left(0.12900613773279798 \cdot \left(x \cdot x\right)\right)\]

Reproduce

herbie shell --seed 2020219 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))