exp2 (problem 3.3.7)

Percentage Accurate: 53.6% → 99.2%
Time: 9.8s
Alternatives: 6
Speedup: 34.8×

Specification

\[\left|x\right| \leq 710\]
\[\begin{array}{l} \\ \left(e^{x} - 2\right) + e^{-x} \end{array} \]
(FPCore (x) :precision binary64 (+ (- (exp x) 2.0) (exp (- x))))
double code(double x) {
	return (exp(x) - 2.0) + exp(-x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (exp(x) - 2.0d0) + exp(-x)
end function
public static double code(double x) {
	return (Math.exp(x) - 2.0) + Math.exp(-x);
}
def code(x):
	return (math.exp(x) - 2.0) + math.exp(-x)
function code(x)
	return Float64(Float64(exp(x) - 2.0) + exp(Float64(-x)))
end
function tmp = code(x)
	tmp = (exp(x) - 2.0) + exp(-x);
end
code[x_] := N[(N[(N[Exp[x], $MachinePrecision] - 2.0), $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\left(e^{x} - 2\right) + e^{-x}
\end{array}
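Why this specification is hard: for small |x|, exp(x) − 2 and exp(−x) are each of order 1, but their sum is of order x², so the subtraction cancels nearly every significant digit. A minimal Python sketch; the 4·sinh²(x/2) form used as a reference below is a mathematically equivalent rewrite assumed for comparison, not part of the specification:

```python
import math

def naive(x):
    # The original program: (e^x - 2) + e^-x
    return (math.exp(x) - 2.0) + math.exp(-x)

def reference(x):
    # Mathematically equivalent: e^x - 2 + e^-x = 4*sinh(x/2)^2,
    # which involves no subtraction of nearly equal quantities.
    s = math.sinh(x / 2.0)
    return 4.0 * s * s

# Near zero the true value is about x^2 = 1e-24, far below the
# rounding granularity of the order-1 intermediates in naive().
print(naive(1e-12), reference(1e-12))
```

At x = 1e-12 the naive form returns a value with no correct significant digits, while the reference form is correct to machine precision.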

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis shows accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The lines show averages, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 6 alternatives:

Alternative | Accuracy | Speedup
1 | 99.2% | 5.5×
2 | 99.2% | 5.5×
3 | 99.2% | 6.3×
4 | 99.0% | 7.7×
5 | 99.0% | 9.5×
6 | 98.4% | 34.8×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 53.6% accurate, 1.0× speedup

\[\begin{array}{l} \\ \left(e^{x} - 2\right) + e^{-x} \end{array} \]
(FPCore (x) :precision binary64 (+ (- (exp x) 2.0) (exp (- x))))
double code(double x) {
	return (exp(x) - 2.0) + exp(-x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (exp(x) - 2.0d0) + exp(-x)
end function
public static double code(double x) {
	return (Math.exp(x) - 2.0) + Math.exp(-x);
}
def code(x):
	return (math.exp(x) - 2.0) + math.exp(-x)
function code(x)
	return Float64(Float64(exp(x) - 2.0) + exp(Float64(-x)))
end
function tmp = code(x)
	tmp = (exp(x) - 2.0) + exp(-x);
end
code[x_] := N[(N[(N[Exp[x], $MachinePrecision] - 2.0), $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\left(e^{x} - 2\right) + e^{-x}
\end{array}

Alternative 1: 99.2% accurate, 5.5× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.002777777777777778, 0.08333333333333333\right) \cdot \left(x \cdot x\right), x \cdot x, x \cdot x\right) \end{array} \]
(FPCore (x)
 :precision binary64
 (fma
  (* (fma (* x x) 0.002777777777777778 0.08333333333333333) (* x x))
  (* x x)
  (* x x)))
double code(double x) {
	return fma((fma((x * x), 0.002777777777777778, 0.08333333333333333) * (x * x)), (x * x), (x * x));
}
function code(x)
	return fma(Float64(fma(Float64(x * x), 0.002777777777777778, 0.08333333333333333) * Float64(x * x)), Float64(x * x), Float64(x * x))
end
code[x_] := N[(N[(N[(N[(x * x), $MachinePrecision] * 0.002777777777777778 + 0.08333333333333333), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(x * x), $MachinePrecision] + N[(x * x), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.002777777777777778, 0.08333333333333333\right) \cdot \left(x \cdot x\right), x \cdot x, x \cdot x\right)
\end{array}
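In plain Python the same polynomial can be sketched with ordinary multiply-adds (math.fma only exists in Python 3.13+); the rounding differs slightly from the fma version above, but the key property, no cancellation near x = 0, is preserved. The 4·sinh²(x/2) reference is an assumption used only for checking:

```python
import math

def alt1_no_fma(x):
    # Alternative 1's polynomial x^2 + x^4/12 + x^6/360, written with
    # plain multiply-adds instead of fused multiply-adds.
    xx = x * x
    return (xx * 0.002777777777777778 + 0.08333333333333333) * xx * xx + xx

def reference(x):
    # Assumed reference: e^x - 2 + e^-x == 4*sinh(x/2)^2
    s = math.sinh(x / 2.0)
    return 4.0 * s * s

for x in (1e-8, 1e-3, 0.1):
    rel = abs(alt1_no_fma(x) - reference(x)) / reference(x)
    print(x, rel)  # truncation error ~x^8/20160, tiny for small |x|
```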
Derivation
  1. Initial program (51.6%)

    \[\left(e^{x} - 2\right) + e^{-x} \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
  4. Step-by-step derivation
    1. distribute-lft-in (N/A)

      \[\leadsto {x}^{2} \cdot \left(1 + \color{blue}{\left({x}^{2} \cdot \frac{1}{12} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)}\right) \]
    2. *-commutative (N/A)

      \[\leadsto {x}^{2} \cdot \left(1 + \left(\color{blue}{\frac{1}{12} \cdot {x}^{2}} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)\right) \]
    3. associate-+r+ (N/A)

      \[\leadsto {x}^{2} \cdot \color{blue}{\left(\left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
    4. distribute-lft-in (N/A)

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
    5. +-commutative (N/A)

      \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)} \]
    6. associate-*r* (N/A)

      \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    7. *-commutative (N/A)

      \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot \left({x}^{2} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    8. associate-*l* (N/A)

      \[\leadsto \color{blue}{\left(\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot {x}^{2}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    9. associate-*l* (N/A)

      \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot \left({x}^{2} \cdot {x}^{2}\right)\right)} \cdot {x}^{2} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    10. associate-*l* (N/A)

      \[\leadsto \color{blue}{\frac{1}{360} \cdot \left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    11. *-commutative (N/A)

      \[\leadsto \color{blue}{\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot \frac{1}{360}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
    12. lower-fma.f64 (N/A)

      \[\leadsto \color{blue}{\mathsf{fma}\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}, \frac{1}{360}, {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)\right)} \]
  5. Applied rewrites (98.7%)

    \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{6}, 0.002777777777777778, \mathsf{fma}\left({x}^{4}, 0.08333333333333333, x \cdot x\right)\right)} \]
  6. Taylor expanded in x around 0

    \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
  7. Step-by-step derivation
    1. +-commutative (N/A)

      \[\leadsto {x}^{2} \cdot \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1\right)} \]
    2. distribute-lft-in (N/A)

      \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot 1} \]
    3. associate-*r* (N/A)

      \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot 1 \]
    4. *-rgt-identity (N/A)

      \[\leadsto \left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + \color{blue}{{x}^{2}} \]
    5. lower-fma.f64 (N/A)

      \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot {x}^{2}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right)} \]
    6. pow-sqr (N/A)

      \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
    7. metadata-eval (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{\color{blue}{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
    8. lower-pow.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
    9. +-commutative (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}\right) \]
    10. lower-fma.f64 (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}\right) \]
    11. unpow2 (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
    12. lower-*.f64 (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
    13. unpow2 (N/A)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}\right) \]
    14. lower-*.f64 (98.7%)

      \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), \color{blue}{x \cdot x}\right) \]
  8. Applied rewrites (98.7%)

    \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x\right)} \]
  9. Step-by-step derivation
    1. Applied rewrites (98.7%)

      \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.002777777777777778, 0.08333333333333333\right) \cdot \left(x \cdot x\right), \color{blue}{x \cdot x}, x \cdot x\right) \]
    2. Add Preprocessing
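All of the polynomial alternatives come from the same Taylor expansion: e^x + e^{-x} − 2 = 2(cosh x − 1) = x² + x⁴/12 + x⁶/360 + O(x⁸). The decimal constants Herbie prints are just the binary64 roundings of 1/12 and 1/360, which a short check confirms:

```python
import math

# The decimal literals in the rewritten programs are the binary64
# roundings of the series coefficients 1/12 and 1/360.
assert 1 / 12 == 0.08333333333333333
assert 1 / 360 == 0.002777777777777778

# 2*(cosh x - 1) equals e^x - 2 + e^-x analytically; the degree-6
# polynomial tracks it to within the dropped x^8/20160 term.
x = 0.25
poly = x * x * (1 + x * x * (1 / 12 + x * x / 360))
assert abs(poly - (2 * math.cosh(x) - 2)) < 1e-8
```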

    Alternative 2: 99.2% accurate, 5.5× speedup

    \[\begin{array}{l} \\ \mathsf{fma}\left(\left(x \cdot x\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x\right) \end{array} \]
    (FPCore (x)
     :precision binary64
     (fma
      (* (* x x) (* x x))
      (fma 0.002777777777777778 (* x x) 0.08333333333333333)
      (* x x)))
    double code(double x) {
    	return fma(((x * x) * (x * x)), fma(0.002777777777777778, (x * x), 0.08333333333333333), (x * x));
    }
    
    function code(x)
    	return fma(Float64(Float64(x * x) * Float64(x * x)), fma(0.002777777777777778, Float64(x * x), 0.08333333333333333), Float64(x * x))
    end
    
    code[x_] := N[(N[(N[(x * x), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(0.002777777777777778 * N[(x * x), $MachinePrecision] + 0.08333333333333333), $MachinePrecision] + N[(x * x), $MachinePrecision]), $MachinePrecision]
    
    \begin{array}{l}
    
    \\
    \mathsf{fma}\left(\left(x \cdot x\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x\right)
    \end{array}
    
    Derivation
    1. Initial program (51.6%)

      \[\left(e^{x} - 2\right) + e^{-x} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
    4. Step-by-step derivation
      1. distribute-lft-in (N/A)

        \[\leadsto {x}^{2} \cdot \left(1 + \color{blue}{\left({x}^{2} \cdot \frac{1}{12} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)}\right) \]
      2. *-commutative (N/A)

        \[\leadsto {x}^{2} \cdot \left(1 + \left(\color{blue}{\frac{1}{12} \cdot {x}^{2}} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)\right) \]
      3. associate-+r+ (N/A)

        \[\leadsto {x}^{2} \cdot \color{blue}{\left(\left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
      4. distribute-lft-in (N/A)

        \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
      5. +-commutative (N/A)

        \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)} \]
      6. associate-*r* (N/A)

        \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      7. *-commutative (N/A)

        \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot \left({x}^{2} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      8. associate-*l* (N/A)

        \[\leadsto \color{blue}{\left(\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot {x}^{2}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      9. associate-*l* (N/A)

        \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot \left({x}^{2} \cdot {x}^{2}\right)\right)} \cdot {x}^{2} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      10. associate-*l* (N/A)

        \[\leadsto \color{blue}{\frac{1}{360} \cdot \left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      11. *-commutative (N/A)

        \[\leadsto \color{blue}{\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot \frac{1}{360}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
      12. lower-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}, \frac{1}{360}, {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)\right)} \]
    5. Applied rewrites (98.7%)

      \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{6}, 0.002777777777777778, \mathsf{fma}\left({x}^{4}, 0.08333333333333333, x \cdot x\right)\right)} \]
    6. Taylor expanded in x around 0

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
    7. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto {x}^{2} \cdot \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1\right)} \]
      2. distribute-lft-in (N/A)

        \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot 1} \]
      3. associate-*r* (N/A)

        \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot 1 \]
      4. *-rgt-identity (N/A)

        \[\leadsto \left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + \color{blue}{{x}^{2}} \]
      5. lower-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot {x}^{2}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right)} \]
      6. pow-sqr (N/A)

        \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
      7. metadata-eval (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{\color{blue}{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
      8. lower-pow.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
      9. +-commutative (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}\right) \]
      10. lower-fma.f64 (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}\right) \]
      11. unpow2 (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
      12. lower-*.f64 (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
      13. unpow2 (N/A)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}\right) \]
      14. lower-*.f64 (98.7%)

        \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), \color{blue}{x \cdot x}\right) \]
    8. Applied rewrites (98.7%)

      \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x\right)} \]
    9. Step-by-step derivation
      1. Applied rewrites (98.7%)

        \[\leadsto \mathsf{fma}\left(\left(x \cdot x\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\color{blue}{0.002777777777777778}, x \cdot x, 0.08333333333333333\right), x \cdot x\right) \]
      2. Add Preprocessing

      Alternative 3: 99.2% accurate, 6.3× speedup

      \[\begin{array}{l} \\ \mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right) \cdot \left(x \cdot x\right) \end{array} \]
      (FPCore (x)
       :precision binary64
       (*
        (fma (fma 0.002777777777777778 (* x x) 0.08333333333333333) (* x x) 1.0)
        (* x x)))
      double code(double x) {
      	return fma(fma(0.002777777777777778, (x * x), 0.08333333333333333), (x * x), 1.0) * (x * x);
      }
      
      function code(x)
      	return Float64(fma(fma(0.002777777777777778, Float64(x * x), 0.08333333333333333), Float64(x * x), 1.0) * Float64(x * x))
      end
      
      code[x_] := N[(N[(N[(0.002777777777777778 * N[(x * x), $MachinePrecision] + 0.08333333333333333), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision]
      
      \begin{array}{l}
      
      \\
      \mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right) \cdot \left(x \cdot x\right)
      \end{array}
      
      Derivation
      1. Initial program (51.6%)

        \[\left(e^{x} - 2\right) + e^{-x} \]
      2. Add Preprocessing
      3. Taylor expanded in x around 0

        \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
      4. Step-by-step derivation
        1. distribute-lft-in (N/A)

          \[\leadsto {x}^{2} \cdot \left(1 + \color{blue}{\left({x}^{2} \cdot \frac{1}{12} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)}\right) \]
        2. *-commutative (N/A)

          \[\leadsto {x}^{2} \cdot \left(1 + \left(\color{blue}{\frac{1}{12} \cdot {x}^{2}} + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)\right) \]
        3. associate-+r+ (N/A)

          \[\leadsto {x}^{2} \cdot \color{blue}{\left(\left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
        4. distribute-lft-in (N/A)

          \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right)} \]
        5. +-commutative (N/A)

          \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)} \]
        6. associate-*r* (N/A)

          \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        7. *-commutative (N/A)

          \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot \left({x}^{2} \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        8. associate-*l* (N/A)

          \[\leadsto \color{blue}{\left(\left(\frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot {x}^{2}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        9. associate-*l* (N/A)

          \[\leadsto \color{blue}{\left(\frac{1}{360} \cdot \left({x}^{2} \cdot {x}^{2}\right)\right)} \cdot {x}^{2} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        10. associate-*l* (N/A)

          \[\leadsto \color{blue}{\frac{1}{360} \cdot \left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right)} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        11. *-commutative (N/A)

          \[\leadsto \color{blue}{\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}\right) \cdot \frac{1}{360}} + {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right) \]
        12. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\left({x}^{2} \cdot {x}^{2}\right) \cdot {x}^{2}, \frac{1}{360}, {x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)\right)} \]
      5. Applied rewrites (98.7%)

        \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{6}, 0.002777777777777778, \mathsf{fma}\left({x}^{4}, 0.08333333333333333, x \cdot x\right)\right)} \]
      6. Taylor expanded in x around 0

        \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)} \]
      7. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto {x}^{2} \cdot \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1\right)} \]
        2. distribute-lft-in (N/A)

          \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + {x}^{2} \cdot 1} \]
        3. associate-*r* (N/A)

          \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)} + {x}^{2} \cdot 1 \]
        4. *-rgt-identity (N/A)

          \[\leadsto \left({x}^{2} \cdot {x}^{2}\right) \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + \color{blue}{{x}^{2}} \]
        5. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot {x}^{2}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right)} \]
        6. pow-sqr (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
        7. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{\color{blue}{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
        8. lower-pow.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{4}}, \frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}\right) \]
        9. +-commutative (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}\right) \]
        10. lower-fma.f64 (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}\right) \]
        11. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
        12. lower-*.f64 (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}\right) \]
        13. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}\right) \]
        14. lower-*.f64 (98.7%)

          \[\leadsto \mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), \color{blue}{x \cdot x}\right) \]
      8. Applied rewrites (98.7%)

        \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{4}, \mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x\right)} \]
      9. Step-by-step derivation
        1. Applied rewrites (98.7%)

          \[\leadsto \mathsf{fma}\left(\left(x \cdot x\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\color{blue}{0.002777777777777778}, x \cdot x, 0.08333333333333333\right), x \cdot x\right) \]
        2. Step-by-step derivation
          1. Applied rewrites (98.6%)

            \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right) \cdot \color{blue}{\left(x \cdot x\right)} \]
          2. Add Preprocessing

          Alternative 4: 99.0% accurate, 7.7× speedup

          \[\begin{array}{l} \\ \mathsf{fma}\left(x, x, 0.08333333333333333 \cdot \left(\left(x \cdot x\right) \cdot \left(x \cdot x\right)\right)\right) \end{array} \]
          (FPCore (x)
           :precision binary64
           (fma x x (* 0.08333333333333333 (* (* x x) (* x x)))))
          double code(double x) {
          	return fma(x, x, (0.08333333333333333 * ((x * x) * (x * x))));
          }
          
          function code(x)
          	return fma(x, x, Float64(0.08333333333333333 * Float64(Float64(x * x) * Float64(x * x))))
          end
          
          code[x_] := N[(x * x + N[(0.08333333333333333 * N[(N[(x * x), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
          
          \begin{array}{l}
          
          \\
          \mathsf{fma}\left(x, x, 0.08333333333333333 \cdot \left(\left(x \cdot x\right) \cdot \left(x \cdot x\right)\right)\right)
          \end{array}
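Alternative 4 keeps only the x² and x⁴/12 terms, dropping x⁶/360, which trades a little accuracy for speed. A sketch of that tradeoff, with plain multiply-adds standing in for fma and the mathematically equivalent 4·sinh²(x/2) assumed as the reference:

```python
import math

def reference(x):
    # Assumed reference: e^x - 2 + e^-x == 4*sinh(x/2)^2
    s = math.sinh(x / 2.0)
    return 4.0 * s * s

def poly2(x):
    # Alternative 4's polynomial: x^2 + x^4/12
    xx = x * x
    return xx + 0.08333333333333333 * (xx * xx)

def poly3(x):
    # Alternatives 1-3: x^2 + x^4/12 + x^6/360
    xx = x * x
    return (xx * 0.002777777777777778 + 0.08333333333333333) * xx * xx + xx

x = 0.5
err2 = abs(poly2(x) - reference(x))  # dominated by the dropped x^6/360 term
err3 = abs(poly3(x) - reference(x))  # dominated by the x^8/20160 term
print(err2, err3)
```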
          
          Derivation
          1. Initial program (51.6%)

            \[\left(e^{x} - 2\right) + e^{-x} \]
          2. Add Preprocessing
          3. Taylor expanded in x around 0

            \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)} \]
          4. Step-by-step derivation
            1. +-commutative (N/A)

              \[\leadsto {x}^{2} \cdot \color{blue}{\left(\frac{1}{12} \cdot {x}^{2} + 1\right)} \]
            2. distribute-lft-in (N/A)

              \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot 1} \]
            3. *-commutative (N/A)

              \[\leadsto {x}^{2} \cdot \color{blue}{\left({x}^{2} \cdot \frac{1}{12}\right)} + {x}^{2} \cdot 1 \]
            4. associate-*r* (N/A)

              \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \frac{1}{12}} + {x}^{2} \cdot 1 \]
            5. *-rgt-identity (N/A)

              \[\leadsto \left({x}^{2} \cdot {x}^{2}\right) \cdot \frac{1}{12} + \color{blue}{{x}^{2}} \]
            6. lower-fma.f64 (N/A)

              \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot {x}^{2}, \frac{1}{12}, {x}^{2}\right)} \]
            7. pow-sqr (N/A)

              \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12}, {x}^{2}\right) \]
            8. lower-pow.f64 (N/A)

              \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12}, {x}^{2}\right) \]
            9. metadata-eval (N/A)

              \[\leadsto \mathsf{fma}\left({x}^{\color{blue}{4}}, \frac{1}{12}, {x}^{2}\right) \]
            10. unpow2 (N/A)

              \[\leadsto \mathsf{fma}\left({x}^{4}, \frac{1}{12}, \color{blue}{x \cdot x}\right) \]
            11. lower-*.f64 (98.5%)

              \[\leadsto \mathsf{fma}\left({x}^{4}, 0.08333333333333333, \color{blue}{x \cdot x}\right) \]
          5. Applied rewrites (98.5%)

            \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{4}, 0.08333333333333333, x \cdot x\right)} \]
          6. Step-by-step derivation
            1. Applied rewrites (98.5%)

              \[\leadsto \mathsf{fma}\left(x, \color{blue}{x}, 0.08333333333333333 \cdot {x}^{4}\right) \]
            2. Step-by-step derivation
              1. Applied rewrites (98.5%)

                \[\leadsto \mathsf{fma}\left(x, x, 0.08333333333333333 \cdot \left(\left(x \cdot x\right) \cdot \left(x \cdot x\right)\right)\right) \]
              2. Add Preprocessing

              Alternative 5: 99.0% accurate, 9.5× speedup

              \[\begin{array}{l} \\ \mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot \left(x \cdot x\right) \end{array} \]
              (FPCore (x)
               :precision binary64
               (* (fma 0.08333333333333333 (* x x) 1.0) (* x x)))
              double code(double x) {
              	return fma(0.08333333333333333, (x * x), 1.0) * (x * x);
              }
              
              function code(x)
              	return Float64(fma(0.08333333333333333, Float64(x * x), 1.0) * Float64(x * x))
              end
              
              code[x_] := N[(N[(0.08333333333333333 * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision]
              
              \begin{array}{l}
              
              \\
              \mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot \left(x \cdot x\right)
              \end{array}
              
              Derivation
              1. Initial program (51.6%)

                \[\left(e^{x} - 2\right) + e^{-x} \]
              2. Add Preprocessing
              3. Taylor expanded in x around 0

                \[\leadsto \color{blue}{{x}^{2} \cdot \left(1 + \frac{1}{12} \cdot {x}^{2}\right)} \]
              4. Step-by-step derivation
                1. +-commutative (N/A)

                  \[\leadsto {x}^{2} \cdot \color{blue}{\left(\frac{1}{12} \cdot {x}^{2} + 1\right)} \]
                2. distribute-lft-in (N/A)

                  \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{12} \cdot {x}^{2}\right) + {x}^{2} \cdot 1} \]
                3. *-commutative (N/A)

                  \[\leadsto {x}^{2} \cdot \color{blue}{\left({x}^{2} \cdot \frac{1}{12}\right)} + {x}^{2} \cdot 1 \]
                4. associate-*r* (N/A)

                  \[\leadsto \color{blue}{\left({x}^{2} \cdot {x}^{2}\right) \cdot \frac{1}{12}} + {x}^{2} \cdot 1 \]
                5. *-rgt-identity (N/A)

                  \[\leadsto \left({x}^{2} \cdot {x}^{2}\right) \cdot \frac{1}{12} + \color{blue}{{x}^{2}} \]
                6. lower-fma.f64 (N/A)

                  \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot {x}^{2}, \frac{1}{12}, {x}^{2}\right)} \]
                7. pow-sqr (N/A)

                  \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12}, {x}^{2}\right) \]
                8. lower-pow.f64 (N/A)

                  \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{\left(2 \cdot 2\right)}}, \frac{1}{12}, {x}^{2}\right) \]
                9. metadata-eval (N/A)

                  \[\leadsto \mathsf{fma}\left({x}^{\color{blue}{4}}, \frac{1}{12}, {x}^{2}\right) \]
                10. unpow2 (N/A)

                  \[\leadsto \mathsf{fma}\left({x}^{4}, \frac{1}{12}, \color{blue}{x \cdot x}\right) \]
                11. lower-*.f64 (98.5%)

                  \[\leadsto \mathsf{fma}\left({x}^{4}, 0.08333333333333333, \color{blue}{x \cdot x}\right) \]
              5. Applied rewrites (98.5%)

                \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{4}, 0.08333333333333333, x \cdot x\right)} \]
              6. Step-by-step derivation
                1. Applied rewrites (98.5%)

                  \[\leadsto \mathsf{fma}\left(x, \color{blue}{x}, 0.08333333333333333 \cdot {x}^{4}\right) \]
                2. Step-by-step derivation
                  1. Applied rewrites (98.5%)

                    \[\leadsto \mathsf{fma}\left(x, x, 0.08333333333333333 \cdot \left(\left(x \cdot x\right) \cdot \left(x \cdot x\right)\right)\right) \]
                  2. Step-by-step derivation
                    1. Applied rewrites (98.5%)

                      \[\leadsto \mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot \color{blue}{\left(x \cdot x\right)} \]
                    2. Add Preprocessing

                    Alternative 6: 98.4% accurate, 34.8× speedup

                    \[\begin{array}{l} \\ x \cdot x \end{array} \]
                    (FPCore (x) :precision binary64 (* x x))
                    double code(double x) {
                    	return x * x;
                    }
                    
                    real(8) function code(x)
                        real(8), intent (in) :: x
                        code = x * x
                    end function
                    
                    public static double code(double x) {
                    	return x * x;
                    }
                    
                    def code(x):
                    	return x * x
                    
                    function code(x)
                    	return Float64(x * x)
                    end
                    
                    function tmp = code(x)
                    	tmp = x * x;
                    end
                    
                    code[x_] := N[(x * x), $MachinePrecision]
                    
                    \begin{array}{l}
                    
                    \\
                    x \cdot x
                    \end{array}
                    
                    Derivation
                    1. Initial program (51.6%)

                      \[\left(e^{x} - 2\right) + e^{-x} \]
                    2. Add Preprocessing
                    3. Taylor expanded in x around 0

                      \[\leadsto \color{blue}{{x}^{2}} \]
                    4. Step-by-step derivation
                      1. unpow2 (N/A)

                        \[\leadsto \color{blue}{x \cdot x} \]
                      2. lower-*.f64 (97.9%)

                        \[\leadsto \color{blue}{x \cdot x} \]
                    5. Applied rewrites (97.9%)

                      \[\leadsto \color{blue}{x \cdot x} \]
                    6. Add Preprocessing
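Keeping only the leading x² term makes the relative error grow like x²/12, so Alternative 6 is excellent very near 0 but already about 8% off at x = 1. A quick check, again using the mathematically equivalent 4·sinh²(x/2) as the assumed reference:

```python
import math

def reference(x):
    # Assumed reference: e^x - 2 + e^-x == 4*sinh(x/2)^2
    s = math.sinh(x / 2.0)
    return 4.0 * s * s

def rel_err_of_square(x):
    # Relative error of approximating e^x - 2 + e^-x by x*x alone;
    # the leading dropped term x^4/12 makes this grow like x^2/12.
    return abs(x * x - reference(x)) / reference(x)

print(rel_err_of_square(1e-4))  # tiny
print(rel_err_of_square(1.0))   # roughly 8%
```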

                    Developer Target 1: 99.9% accurate, 0.9× speedup

                    \[\begin{array}{l} \\ \begin{array}{l} t_0 := \sinh \left(\frac{x}{2}\right)\\ 4 \cdot \left(t\_0 \cdot t\_0\right) \end{array} \end{array} \]
                    (FPCore (x)
                     :precision binary64
                     (let* ((t_0 (sinh (/ x 2.0)))) (* 4.0 (* t_0 t_0))))
                    double code(double x) {
                    	double t_0 = sinh((x / 2.0));
                    	return 4.0 * (t_0 * t_0);
                    }
                    
                    real(8) function code(x)
                        real(8), intent (in) :: x
                        real(8) :: t_0
                        t_0 = sinh((x / 2.0d0))
                        code = 4.0d0 * (t_0 * t_0)
                    end function
                    
                    public static double code(double x) {
                    	double t_0 = Math.sinh((x / 2.0));
                    	return 4.0 * (t_0 * t_0);
                    }
                    
                    def code(x):
                    	t_0 = math.sinh((x / 2.0))
                    	return 4.0 * (t_0 * t_0)
                    
                    function code(x)
                    	t_0 = sinh(Float64(x / 2.0))
                    	return Float64(4.0 * Float64(t_0 * t_0))
                    end
                    
                    function tmp = code(x)
                    	t_0 = sinh((x / 2.0));
                    	tmp = 4.0 * (t_0 * t_0);
                    end
                    
                    code[x_] := Block[{t$95$0 = N[Sinh[N[(x / 2.0), $MachinePrecision]], $MachinePrecision]}, N[(4.0 * N[(t$95$0 * t$95$0), $MachinePrecision]), $MachinePrecision]]
                    
                    \begin{array}{l}
                    
                    \\
                    \begin{array}{l}
                    t_0 := \sinh \left(\frac{x}{2}\right)\\
                    4 \cdot \left(t\_0 \cdot t\_0\right)
                    \end{array}
                    \end{array}
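Unlike the Taylor-based alternatives, the developer target 4·sinh²(x/2) is mathematically equal to the original expression, so it stays accurate across the whole precondition range |x| ≤ 710 at the cost of a sinh call (hence the 0.9× speedup). A sketch contrasting the forms at a moderately large input:

```python
import math

x = 20.0  # well outside the region where the polynomial alternatives apply

s = math.sinh(x / 2.0)
target_val = 4.0 * (s * s)                       # developer target
naive_val = (math.exp(x) - 2.0) + math.exp(-x)   # also fine for large x
poly_val = x * x                                 # Alternative 6's leading term

# The target matches the original form to machine precision here,
# while the x*x approximation is off by many orders of magnitude.
print(target_val, naive_val, poly_val)
```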
                    

                    Reproduce

                    herbie shell --seed 2024311 
                    (FPCore (x)
                      :name "exp2 (problem 3.3.7)"
                      :precision binary64
                      :pre (<= (fabs x) 710.0)
                    
                      :alt
                      (! :herbie-platform default (* 4 (* (sinh (/ x 2)) (sinh (/ x 2)))))
                    
                      (+ (- (exp x) 2.0) (exp (- x))))