Average Error: 0.1 → 0.1
Time: 14.2s
Precision: 64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.95492965855137202 \cdot x - x \cdot \left(x \cdot \left(0.129006137732797982 \cdot x\right)\right)\]
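/* Initial program (Herbie's input), as C code. */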
double f(double x) {
        double r20696 = 0.954929658551372;
        double r20697 = x;
        double r20698 = r20696 * r20697;
        double r20699 = 0.12900613773279798;
        double r20700 = r20697 * r20697;
        double r20701 = r20700 * r20697;
        double r20702 = r20699 * r20701;
        double r20703 = r20698 - r20702;
        return r20703;
}

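/* Output program (Herbie's result); the same polynomial with the cubic term reassociated. */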
double f(double x) {
        double r20704 = 0.954929658551372;
        double r20705 = x;
        double r20706 = r20704 * r20705;
        double r20707 = 0.12900613773279798;
        double r20708 = r20707 * r20705;
        double r20709 = r20705 * r20708;
        double r20710 = r20705 * r20709;
        double r20711 = r20706 - r20710;
        return r20711;
}
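
Both versions compute the same real-valued polynomial and differ only in the order of the multiplications, so in binary64 they should agree to within a few ULPs. The sketch below compares them at one point; the names f_initial and f_output and the sample input are illustrative and not part of Herbie's report.

#include <stdio.h>

/* Initial program, renamed so both versions can coexist in one file (illustrative). */
static double f_initial(double x) {
        return 0.954929658551372 * x
               - 0.12900613773279798 * ((x * x) * x);
}

/* Herbie's output program, renamed likewise. */
static double f_output(double x) {
        return 0.954929658551372 * x
               - x * (x * (0.12900613773279798 * x));
}

int main(void) {
        double x = 0.5;  /* arbitrary sample point */
        printf("initial: %.17g\n", f_initial(x));
        printf("output:  %.17g\n", f_output(x));
        return 0;
}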

Error

[Plot: bits of error versus x]

Derivation

  1. Initial program (error 0.1)

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot \color{blue}{{x}^{1}}\right)\]
  4. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot \color{blue}{{x}^{1}}\right) \cdot {x}^{1}\right)\]
  5. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(\color{blue}{{x}^{1}} \cdot {x}^{1}\right) \cdot {x}^{1}\right)\]
  6. Applied pow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\color{blue}{{\left(x \cdot x\right)}^{1}} \cdot {x}^{1}\right)\]
  7. Applied pow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{{\left(\left(x \cdot x\right) \cdot x\right)}^{1}}\]
  8. Simplified (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left({x}^{3}\right)}}^{1}\]
  9. Using strategy rm
  10. Applied unpow3 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left(\left(x \cdot x\right) \cdot x\right)}}^{1}\]
  11. Applied unpow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{\left({\left(x \cdot x\right)}^{1} \cdot {x}^{1}\right)}\]
  12. Applied associate-*r* (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(0.129006137732797982 \cdot {\left(x \cdot x\right)}^{1}\right) \cdot {x}^{1}}\]
  13. Simplified (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(x \cdot \left(0.129006137732797982 \cdot x\right)\right)} \cdot {x}^{1}\]
  14. Final simplification (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - x \cdot \left(x \cdot \left(0.129006137732797982 \cdot x\right)\right)\]
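
Each step above applies an exact algebraic identity, so the initial and final expressions denote the same real function; the whole derivation amounts to reassociating the cubic term:

\[0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right) = x \cdot \left(x \cdot \left(0.129006137732797982 \cdot x\right)\right)\]

In binary64 the two evaluation orders can round differently, which is the only behavioral difference between the input and output programs; here the average error stays at 0.1.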

Reproduce

herbie shell --seed 2020045 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))