Average Error: 0.1 → 0.1
Time: 9.7s
Precision: 64
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.95492965855137202 \cdot x - x \cdot \left(\left(0.129006137732797982 \cdot x\right) \cdot x\right)\]
// Input program
double f(double x) {
        return 0.954929658551372 * x
             - 0.12900613773279798 * ((x * x) * x);
}

// Output program (rewritten by Herbie)
double f(double x) {
        return 0.954929658551372 * x
             - x * ((0.12900613773279798 * x) * x);
}

Error

(Plot: bits of error versus x.)


Derivation

  1. Initial program (error 0.1)

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot \color{blue}{{x}^{1}}\right)\]
  4. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot \color{blue}{{x}^{1}}\right) \cdot {x}^{1}\right)\]
  5. Applied pow1 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(\color{blue}{{x}^{1}} \cdot {x}^{1}\right) \cdot {x}^{1}\right)\]
  6. Applied pow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\color{blue}{{\left(x \cdot x\right)}^{1}} \cdot {x}^{1}\right)\]
  7. Applied pow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{{\left(\left(x \cdot x\right) \cdot x\right)}^{1}}\]
  8. Simplified (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left({x}^{3}\right)}}^{1}\]
  9. Using strategy rm
  10. Applied unpow3 (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left(\left(x \cdot x\right) \cdot x\right)}}^{1}\]
  11. Applied unpow-prod-down (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{\left({\left(x \cdot x\right)}^{1} \cdot {x}^{1}\right)}\]
  12. Applied associate-*r* (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(0.129006137732797982 \cdot {\left(x \cdot x\right)}^{1}\right) \cdot {x}^{1}}\]
  13. Simplified (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(\left(0.129006137732797982 \cdot x\right) \cdot x\right)} \cdot {x}^{1}\]
  14. Final simplification (error 0.1)

    \[\leadsto 0.95492965855137202 \cdot x - x \cdot \left(\left(0.129006137732797982 \cdot x\right) \cdot x\right)\]
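For reference, the rewrite rules invoked above encode the following standard identities (the rule names come from Herbie's rule set; the right-hand statements are the usual algebraic laws they correspond to):

\[
\begin{aligned}
\textsf{pow1}&: & x &= x^{1}\\
\textsf{pow-prod-down}&: & x^{a} \cdot y^{a} &= \left(x \cdot y\right)^{a}\\
\textsf{unpow3}&: & x^{3} &= \left(x \cdot x\right) \cdot x\\
\textsf{associate-*r*}&: & x \cdot \left(y \cdot z\right) &= \left(x \cdot y\right) \cdot z
\end{aligned}
\]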

Reproduce

herbie shell --seed 2020045 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))