Average Error: 0.2 → 0.1
Time: 1.8s
Precision: binary64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[x \cdot \left(0.95492965855137202 - \left(0.129006137732797982 \cdot x\right) \cdot x\right)\]
/* Initial program */
double code(double x) {
	return 0.954929658551372 * x - 0.12900613773279798 * ((x * x) * x);
}

/* Rewritten program: same polynomial with x factored out */
double code(double x) {
	return x * (0.954929658551372 - (0.12900613773279798 * x) * x);
}
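The two programs are algebraically identical and differ only in rounding order, so a quick sanity check is to evaluate both at a few points and confirm they agree to within a few ULPs. The sketch below assumes any C99 compiler; the function names `cubic_original` and `cubic_rewritten` are ours, not Herbie's:

```c
/* Initial form: a*x - b*((x*x)*x) */
double cubic_original(double x) {
    return 0.954929658551372 * x
         - 0.12900613773279798 * ((x * x) * x);
}

/* Rewritten form: x * (a - (b*x)*x) */
double cubic_rewritten(double x) {
    return x * (0.954929658551372 - (0.12900613773279798 * x) * x);
}
```

Agreement at sample points does not by itself show the rewrite is more accurate; Herbie's error measurement (the 0.2 → 0.1 average bits of error above) is computed against a high-precision reference.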

Error

[Plot: bits of error versus x]


Derivation

  1. Initial program (error 0.2)

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Simplified (error 0.1)

    \[\leadsto \color{blue}{x \cdot \left(0.95492965855137202 - 0.129006137732797982 \cdot \left(x \cdot x\right)\right)}\]
  3. Using strategy rm
  4. Applied associate-*r* (error 0.1)

    \[\leadsto x \cdot \left(0.95492965855137202 - \color{blue}{\left(0.129006137732797982 \cdot x\right) \cdot x}\right)\]
  5. Final simplification (error 0.1)

    \[\leadsto x \cdot \left(0.95492965855137202 - \left(0.129006137732797982 \cdot x\right) \cdot x\right)\]
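The associate-*r* step above is an instance of the associativity identity, exact over the reals but changing the rounding order in floating point. With $a = 0.129\ldots$, $b = x$, $c = x$:

\[a \cdot \left(b \cdot c\right) \leadsto \left(a \cdot b\right) \cdot c\]

Together with the earlier factoring of $x$, this turns the expression into a Horner-style evaluation, which is the source of the error reduction from 0.2 to 0.1 bits.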

Reproduce

herbie shell --seed 2020155 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))