Average Error: 0.1 bits → 0.1 bits
Time: 2.7s
Precision: 64 (binary64)
\[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
\[0.95492965855137202 \cdot x - \left(x \cdot 0.129006137732797982\right) \cdot {\left(x \cdot x\right)}^{1}\]
/* Input program */
double f(double x) {
        double r26015 = 0.954929658551372;
        double r26016 = x;
        double r26017 = r26015 * r26016;
        double r26018 = 0.12900613773279798;
        double r26019 = r26016 * r26016;
        double r26020 = r26019 * r26016;
        double r26021 = r26018 * r26020;
        double r26022 = r26017 - r26021;
        return r26022;
}

#include <math.h>

/* Output program (calls pow, so it needs math.h) */
double f(double x) {
        double r26023 = 0.954929658551372;
        double r26024 = x;
        double r26025 = r26023 * r26024;
        double r26026 = 0.12900613773279798;
        double r26027 = r26024 * r26026;
        double r26028 = r26024 * r26024;
        double r26029 = 1.0;
        double r26030 = pow(r26028, r26029);
        double r26031 = r26027 * r26030;
        double r26032 = r26025 - r26031;
        return r26032;
}

Error

[Plot: bits of error versus x]


Derivation

  1. Initial program [error: 0.1]

    \[0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot x\right)\]
  2. Using strategy rm
  3. Applied pow1 [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot x\right) \cdot \color{blue}{{x}^{1}}\right)\]
  4. Applied pow1 [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(x \cdot \color{blue}{{x}^{1}}\right) \cdot {x}^{1}\right)\]
  5. Applied pow1 [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\left(\color{blue}{{x}^{1}} \cdot {x}^{1}\right) \cdot {x}^{1}\right)\]
  6. Applied pow-prod-down [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \left(\color{blue}{{\left(x \cdot x\right)}^{1}} \cdot {x}^{1}\right)\]
  7. Applied pow-prod-down [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{{\left(\left(x \cdot x\right) \cdot x\right)}^{1}}\]
  8. Simplified [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left({x}^{3}\right)}}^{1}\]
  9. Using strategy rm
  10. Applied cube-mult [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot {\color{blue}{\left(x \cdot \left(x \cdot x\right)\right)}}^{1}\]
  11. Applied unpow-prod-down [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - 0.129006137732797982 \cdot \color{blue}{\left({x}^{1} \cdot {\left(x \cdot x\right)}^{1}\right)}\]
  12. Applied associate-*r* [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(0.129006137732797982 \cdot {x}^{1}\right) \cdot {\left(x \cdot x\right)}^{1}}\]
  13. Simplified [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - \color{blue}{\left(x \cdot 0.129006137732797982\right)} \cdot {\left(x \cdot x\right)}^{1}\]
  14. Final simplification [error: 0.1]

    \[\leadsto 0.95492965855137202 \cdot x - \left(x \cdot 0.129006137732797982\right) \cdot {\left(x \cdot x\right)}^{1}\]

Reproduce

herbie shell --seed 2020083 
(FPCore (x)
  :name "Rosa's Benchmark"
  :precision binary64
  (- (* 0.954929658551372 x) (* 0.12900613773279798 (* (* x x) x))))