Hyperbolic secant

Percentage Accurate: 100.0% → 100.0%
Time: 7.3s
Alternatives: 13
Speedup: 1.9×

Specification

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the title); the vertical axis is accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows the average, and the dots represent individual samples.

Accuracy vs Speed

Herbie found 13 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 1.9× speedup

\[\begin{array}{l} \\ \frac{1}{\cosh x} \end{array} \]
(FPCore (x) :precision binary64 (/ 1.0 (cosh x)))
double code(double x) {
	return 1.0 / cosh(x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0 / cosh(x)
end function
public static double code(double x) {
	return 1.0 / Math.cosh(x);
}
def code(x):
	return 1.0 / math.cosh(x)
function code(x)
	return Float64(1.0 / cosh(x))
end
function tmp = code(x)
	tmp = 1.0 / cosh(x);
end
code[x_] := N[(1.0 / N[Cosh[x], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[\frac{2}{e^{x} + e^{-x}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{\mathsf{neg}\left(x\right)}}} \]
    2. clear-num (N/A)

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{\mathsf{neg}\left(x\right)}}{2}}} \]
    3. lift-+.f64 (N/A)

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{\mathsf{neg}\left(x\right)}}}{2}} \]
    4. lift-exp.f64 (N/A)

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{\mathsf{neg}\left(x\right)}}{2}} \]
    5. lift-exp.f64 (N/A)

      \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{\mathsf{neg}\left(x\right)}}}{2}} \]
    6. lift-neg.f64 (N/A)

      \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
    7. cosh-def (N/A)

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    8. lower-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    9. lower-cosh.f64 (100.0)

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
  4. Applied rewrites (100.0%)

    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
  5. Add Preprocessing
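The derivation above recognizes \(e^{x} + e^{-x}\) as \(2\cosh x\), so the whole program collapses to one library call. A minimal sketch checking that the two forms agree (the helper names are ours, not Herbie's):

```python
import math

def sech_original(x):
    # Original program: 2 / (e^x + e^(-x)), two exp calls.
    return 2.0 / (math.exp(x) + math.exp(-x))

def sech_cosh(x):
    # Alternative 1: 1 / cosh(x), one call, mathematically identical.
    return 1.0 / math.cosh(x)

# Both forms agree to double precision on ordinary inputs.
for x in (-5.0, -1.0, 0.0, 0.5, 3.0):
    assert math.isclose(sech_original(x), sech_cosh(x), rel_tol=1e-12)
```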

Alternative 2: 92.9% accurate, 0.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\frac{2}{e^{-x} + e^{x}} \leq 0:\\ \;\;\;\;\frac{2}{\left(\left(0.002777777777777778 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot x\right)}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.08472222222222223, x \cdot x, 0.20833333333333334\right), x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (if (<= (/ 2.0 (+ (exp (- x)) (exp x))) 0.0)
   (/ 2.0 (* (* (* 0.002777777777777778 (* x x)) x) (* (* x x) x)))
   (fma
    (fma (fma -0.08472222222222223 (* x x) 0.20833333333333334) (* x x) -0.5)
    (* x x)
    1.0)))
double code(double x) {
	double tmp;
	if ((2.0 / (exp(-x) + exp(x))) <= 0.0) {
		tmp = 2.0 / (((0.002777777777777778 * (x * x)) * x) * ((x * x) * x));
	} else {
		tmp = fma(fma(fma(-0.08472222222222223, (x * x), 0.20833333333333334), (x * x), -0.5), (x * x), 1.0);
	}
	return tmp;
}
function code(x)
	tmp = 0.0
	if (Float64(2.0 / Float64(exp(Float64(-x)) + exp(x))) <= 0.0)
		tmp = Float64(2.0 / Float64(Float64(Float64(0.002777777777777778 * Float64(x * x)) * x) * Float64(Float64(x * x) * x)));
	else
		tmp = fma(fma(fma(-0.08472222222222223, Float64(x * x), 0.20833333333333334), Float64(x * x), -0.5), Float64(x * x), 1.0);
	end
	return tmp
end
code[x_] := If[LessEqual[N[(2.0 / N[(N[Exp[(-x)], $MachinePrecision] + N[Exp[x], $MachinePrecision]), $MachinePrecision]), $MachinePrecision], 0.0], N[(2.0 / N[(N[(N[(0.002777777777777778 * N[(x * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision] * N[(N[(x * x), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(N[(-0.08472222222222223 * N[(x * x), $MachinePrecision] + 0.20833333333333334), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]]
Derivation
  1. Split input into 2 regimes
  2. if 2.0 / (exp(x) + exp(-x)) < 0.0

    1. Initial program 100.0%

      \[\frac{2}{e^{x} + e^{-x}} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
    4. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
      2. *-commutative (N/A)

        \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
      3. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
      4. +-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
      5. *-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
      6. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
      7. +-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      8. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      9. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      10. lower-*.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      11. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
      12. lower-*.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
      13. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
      14. lower-*.f64 (79.5)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
    5. Applied rewrites (79.5%)

      \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
    6. Taylor expanded in x around inf

      \[\leadsto \frac{2}{\frac{1}{360} \cdot \color{blue}{{x}^{6}}} \]
    7. Step-by-step derivation
      1. Applied rewrites (79.5%)

        \[\leadsto \frac{2}{\left(\left(x \cdot x\right) \cdot x\right) \cdot \color{blue}{\left(\left(0.002777777777777778 \cdot \left(x \cdot x\right)\right) \cdot x\right)}} \]

      if 0.0 < 2.0 / (exp(x) + exp(-x))

      1. Initial program 100.0%

        \[\frac{2}{e^{x} + e^{-x}} \]
      2. Add Preprocessing
      3. Taylor expanded in x around 0

        \[\leadsto \color{blue}{1 + {x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) - \frac{1}{2}\right)} \]
      4. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \color{blue}{{x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) - \frac{1}{2}\right) + 1} \]
        2. *-commutative (N/A)

          \[\leadsto \color{blue}{\left({x}^{2} \cdot \left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) - \frac{1}{2}\right) \cdot {x}^{2}} + 1 \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2} \cdot \left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) - \frac{1}{2}, {x}^{2}, 1\right)} \]
        4. sub-neg (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) + \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)}, {x}^{2}, 1\right) \]
        5. *-commutative (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \left(\mathsf{neg}\left(\frac{1}{2}\right)\right), {x}^{2}, 1\right) \]
        6. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}\right) \cdot {x}^{2} + \color{blue}{\frac{-1}{2}}, {x}^{2}, 1\right) \]
        7. lower-fma.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{5}{24} + \frac{-61}{720} \cdot {x}^{2}, {x}^{2}, \frac{-1}{2}\right)}, {x}^{2}, 1\right) \]
        8. +-commutative (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{-61}{720} \cdot {x}^{2} + \frac{5}{24}}, {x}^{2}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        9. lower-fma.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{-61}{720}, {x}^{2}, \frac{5}{24}\right)}, {x}^{2}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        10. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{-61}{720}, \color{blue}{x \cdot x}, \frac{5}{24}\right), {x}^{2}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        11. lower-*.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{-61}{720}, \color{blue}{x \cdot x}, \frac{5}{24}\right), {x}^{2}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        12. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{-61}{720}, x \cdot x, \frac{5}{24}\right), \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        13. lower-*.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{-61}{720}, x \cdot x, \frac{5}{24}\right), \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        14. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{-61}{720}, x \cdot x, \frac{5}{24}\right), x \cdot x, \frac{-1}{2}\right), \color{blue}{x \cdot x}, 1\right) \]
        15. lower-*.f64 (100.0)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.08472222222222223, x \cdot x, 0.20833333333333334\right), x \cdot x, -0.5\right), \color{blue}{x \cdot x}, 1\right) \]
      5. Applied rewrites (100.0%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.08472222222222223, x \cdot x, 0.20833333333333334\right), x \cdot x, -0.5\right), x \cdot x, 1\right)} \]
    8. Recombined 2 regimes into one program.
    9. Final simplification (90.0%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;\frac{2}{e^{-x} + e^{x}} \leq 0:\\ \;\;\;\;\frac{2}{\left(\left(0.002777777777777778 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot x\right)}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.08472222222222223, x \cdot x, 0.20833333333333334\right), x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \end{array} \]
    10. Add Preprocessing
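In Alternative 2, the else branch is the degree-6 Taylor polynomial of \(\operatorname{sech} x\) about 0, namely \(1 - x^2/2 + 5x^4/24 - 61x^6/720\), evaluated in Horner form with fused multiply-adds; the if branch only fires when the original quotient evaluates to zero or below (for instance when the denominator overflows). A sketch of the else branch in Python; note that `math.fma` exists only from Python 3.13, so the fallback here rounds twice per step rather than once:

```python
import math

# Single-rounding fma where available (Python 3.13+); otherwise a
# plain multiply-add, which performs two roundings per step.
fma = getattr(math, "fma", lambda a, b, c: a * b + c)

def sech_taylor(x):
    # 1 - x^2/2 + 5x^4/24 - 61x^6/720 in Horner form over t = x^2.
    t = x * x
    return fma(fma(fma(-0.08472222222222223, t, 0.20833333333333334), t, -0.5), t, 1.0)

# Truncation error grows with |x|: negligible at 0.1, about 1e-4 at 0.5.
assert math.isclose(sech_taylor(0.1), 1.0 / math.cosh(0.1), rel_tol=1e-8)
assert math.isclose(sech_taylor(0.5), 1.0 / math.cosh(0.5), rel_tol=1e-3)
```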

Alternative 3: 97.0% accurate, 2.3× speedup

    \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.00044444444444444447, x \cdot x, -0.013333333333333334\right), x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \end{array} \]
    (FPCore (x)
     :precision binary64
     (/
      2.0
      (fma
       (fma
        (*
         (fma (* (* x x) 7.71604938271605e-6) (* x x) -0.006944444444444444)
         (* x x))
        (fma
         (fma
          (fma -0.00044444444444444447 (* x x) -0.013333333333333334)
          (* x x)
          -0.4)
         (* x x)
         -12.0)
        1.0)
       (* x x)
       2.0)))
    double code(double x) {
    	return 2.0 / fma(fma((fma(((x * x) * 7.71604938271605e-6), (x * x), -0.006944444444444444) * (x * x)), fma(fma(fma(-0.00044444444444444447, (x * x), -0.013333333333333334), (x * x), -0.4), (x * x), -12.0), 1.0), (x * x), 2.0);
    }
    
    function code(x)
    	return Float64(2.0 / fma(fma(Float64(fma(Float64(Float64(x * x) * 7.71604938271605e-6), Float64(x * x), -0.006944444444444444) * Float64(x * x)), fma(fma(fma(-0.00044444444444444447, Float64(x * x), -0.013333333333333334), Float64(x * x), -0.4), Float64(x * x), -12.0), 1.0), Float64(x * x), 2.0))
    end
    
    code[x_] := N[(2.0 / N[(N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 7.71604938271605e-6), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.006944444444444444), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(N[(N[(-0.00044444444444444447 * N[(x * x), $MachinePrecision] + -0.013333333333333334), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.4), $MachinePrecision] * N[(x * x), $MachinePrecision] + -12.0), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
    
    
Derivation
    1. Initial program 100.0%

      \[\frac{2}{e^{x} + e^{-x}} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
    4. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
      2. *-commutative (N/A)

        \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
      3. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
      4. +-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
      5. *-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
      6. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
      7. +-commutative (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      8. lower-fma.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      9. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      10. lower-*.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
      11. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
      12. lower-*.f64 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
      13. unpow2 (N/A)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
      14. lower-*.f64 (90.0)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
    5. Applied rewrites (90.0%)

      \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
    6. Step-by-step derivation
      1. Applied rewrites (90.0%)

        \[\leadsto \frac{2}{\mathsf{fma}\left(\frac{1}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right)}}, \color{blue}{x} \cdot x, 2\right)} \]
      2. Step-by-step derivation
        1. Applied rewrites (69.3%)

          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \frac{1}{\mathsf{fma}\left(x \cdot 0.002777777777777778, x, -0.08333333333333333\right)}, 1\right), \color{blue}{x} \cdot x, 2\right)} \]
        2. Taylor expanded in x around 0

          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{129600} \cdot \left(x \cdot x\right), x \cdot x, \frac{-1}{144}\right) \cdot \left(x \cdot x\right), {x}^{2} \cdot \left({x}^{2} \cdot \left(\frac{-1}{2250} \cdot {x}^{2} - \frac{1}{75}\right) - \frac{2}{5}\right) - 12, 1\right), x \cdot x, 2\right)} \]
        3. Step-by-step derivation
          1. Applied rewrites (96.3%)

            \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.00044444444444444447, x \cdot x, -0.013333333333333334\right), x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
          2. Final simplification (96.3%)

            \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.00044444444444444447, x \cdot x, -0.013333333333333334\right), x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
          3. Add Preprocessing
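Alternative 3 instead approximates the denominator \(e^{x} + e^{-x} = 2\cosh x\) with a single nested-fma polynomial and keeps the final division. A Python sketch of that expression, transcribed from the FPCore above (`math.fma` requires Python 3.13+, so the fallback rounds twice per step):

```python
import math

fma = getattr(math, "fma", lambda a, b, c: a * b + c)  # math.fma needs 3.13+

def sech_alt3(x):
    # 2 divided by a polynomial in t = x^2 approximating e^x + e^(-x),
    # transcribed from the FPCore expression for Alternative 3.
    t = x * x
    inner = fma(fma(fma(-0.00044444444444444447, t, -0.013333333333333334), t, -0.4), t, -12.0)
    outer = fma(fma(t * 7.71604938271605e-6, t, -0.006944444444444444) * t, inner, 1.0)
    return 2.0 / fma(outer, t, 2.0)

# Close to sech near 0, where the series was expanded.
assert math.isclose(sech_alt3(0.5), 1.0 / math.cosh(0.5), rel_tol=1e-6)
```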

Alternative 4: 96.5% accurate, 2.6× speedup

          \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(-0.013333333333333334, x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \end{array} \]
          (FPCore (x)
           :precision binary64
           (/
            2.0
            (fma
             (fma
              (*
               (fma (* (* x x) 7.71604938271605e-6) (* x x) -0.006944444444444444)
               (* x x))
              (fma (fma -0.013333333333333334 (* x x) -0.4) (* x x) -12.0)
              1.0)
             (* x x)
             2.0)))
          double code(double x) {
          	return 2.0 / fma(fma((fma(((x * x) * 7.71604938271605e-6), (x * x), -0.006944444444444444) * (x * x)), fma(fma(-0.013333333333333334, (x * x), -0.4), (x * x), -12.0), 1.0), (x * x), 2.0);
          }
          
          function code(x)
          	return Float64(2.0 / fma(fma(Float64(fma(Float64(Float64(x * x) * 7.71604938271605e-6), Float64(x * x), -0.006944444444444444) * Float64(x * x)), fma(fma(-0.013333333333333334, Float64(x * x), -0.4), Float64(x * x), -12.0), 1.0), Float64(x * x), 2.0))
          end
          
          code[x_] := N[(2.0 / N[(N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 7.71604938271605e-6), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.006944444444444444), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(N[(-0.013333333333333334 * N[(x * x), $MachinePrecision] + -0.4), $MachinePrecision] * N[(x * x), $MachinePrecision] + -12.0), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
          
          
Derivation
          1. Initial program 100.0%

            \[\frac{2}{e^{x} + e^{-x}} \]
          2. Add Preprocessing
          3. Taylor expanded in x around 0

            \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
          4. Step-by-step derivation
            1. +-commutative (N/A)

              \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
            2. *-commutative (N/A)

              \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
            3. lower-fma.f64 (N/A)

              \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
            4. +-commutative (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
            5. *-commutative (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
            6. lower-fma.f64 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
            7. +-commutative (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
            8. lower-fma.f64 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
            9. unpow2 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
            10. lower-*.f64 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
            11. unpow2 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
            12. lower-*.f64 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
            13. unpow2 (N/A)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
            14. lower-*.f64 (90.0)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
          5. Applied rewrites (90.0%)

            \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
          6. Step-by-step derivation
            1. Applied rewrites (90.0%)

              \[\leadsto \frac{2}{\mathsf{fma}\left(\frac{1}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right)}}, \color{blue}{x} \cdot x, 2\right)} \]
            2. Step-by-step derivation
              1. Applied rewrites (69.3%)

                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \frac{1}{\mathsf{fma}\left(x \cdot 0.002777777777777778, x, -0.08333333333333333\right)}, 1\right), \color{blue}{x} \cdot x, 2\right)} \]
              2. Taylor expanded in x around 0

                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{129600} \cdot \left(x \cdot x\right), x \cdot x, \frac{-1}{144}\right) \cdot \left(x \cdot x\right), {x}^{2} \cdot \left(\frac{-1}{75} \cdot {x}^{2} - \frac{2}{5}\right) - 12, 1\right), x \cdot x, 2\right)} \]
              3. Step-by-step derivation
                1. Applied rewrites (95.2%)

                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(-0.013333333333333334, x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
                2. Final simplification (95.2%)

                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(\mathsf{fma}\left(-0.013333333333333334, x \cdot x, -0.4\right), x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
                3. Add Preprocessing

Alternative 5: 95.8% accurate, 3.1× speedup

                \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(-0.4, x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (/
                  2.0
                  (fma
                   (fma
                    (*
                     (fma (* (* x x) 7.71604938271605e-6) (* x x) -0.006944444444444444)
                     (* x x))
                    (fma -0.4 (* x x) -12.0)
                    1.0)
                   (* x x)
                   2.0)))
                double code(double x) {
                	return 2.0 / fma(fma((fma(((x * x) * 7.71604938271605e-6), (x * x), -0.006944444444444444) * (x * x)), fma(-0.4, (x * x), -12.0), 1.0), (x * x), 2.0);
                }
                
                function code(x)
                	return Float64(2.0 / fma(fma(Float64(fma(Float64(Float64(x * x) * 7.71604938271605e-6), Float64(x * x), -0.006944444444444444) * Float64(x * x)), fma(-0.4, Float64(x * x), -12.0), 1.0), Float64(x * x), 2.0))
                end
                
                code[x_] := N[(2.0 / N[(N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 7.71604938271605e-6), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.006944444444444444), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(-0.4 * N[(x * x), $MachinePrecision] + -12.0), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(-0.4, x \cdot x, -12\right), 1\right), x \cdot x, 2\right)}
                \end{array}
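As a quick sanity check on this alternative (a hypothetical snippet, not part of the Herbie output), the polynomial denominator can be evaluated in Python and compared with the exact 2/(e^x + e^-x). Plain multiply-adds stand in for fma here, since math.fma is only available from Python 3.13; the function names are illustrative only.

```python
import math

def sech_exact(x):
    # the original program: 2 / (e^x + e^-x)
    return 2.0 / (math.exp(x) + math.exp(-x))

def sech_alt5(x):
    # Alternative 5's denominator, with each fma(a, b, c) written
    # as a * b + c (math.fma requires Python 3.13+)
    x2 = x * x
    inner = (x2 * 7.71604938271605e-6) * x2 + -0.006944444444444444
    outer = (inner * x2) * (-0.4 * x2 + -12.0) + 1.0
    return 2.0 / (outer * x2 + 2.0)

for x in (0.1, 0.5, 1.0):
    print(x, sech_exact(x), sech_alt5(x))
```

Near zero the two agree to several significant digits; the approximation only drifts once |x| grows and the truncated series stops tracking e^x + e^-x.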
                
                Derivation
                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Taylor expanded in x around 0

                  \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                4. Step-by-step derivation
                  1. +-commutative N/A

                    \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                  2. *-commutative N/A

                    \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                  3. lower-fma.f64 N/A

                    \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                  4. +-commutative N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                  5. *-commutative N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                  6. lower-fma.f64 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                  7. +-commutative N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                  8. lower-fma.f64 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                  9. unpow2 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                  10. lower-*.f64 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                  11. unpow2 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                  12. lower-*.f64 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                  13. unpow2 N/A

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                  14. lower-*.f64 90.0%

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                5. Applied rewrites 90.0%

                  \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                6. Step-by-step derivation
                  1. Applied rewrites 90.0%

                    \[\leadsto \frac{2}{\mathsf{fma}\left(\frac{1}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right)}}, \color{blue}{x} \cdot x, 2\right)} \]
                  2. Step-by-step derivation
                    1. Applied rewrites 69.3%

                      \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \frac{1}{\mathsf{fma}\left(x \cdot 0.002777777777777778, x, -0.08333333333333333\right)}, 1\right), \color{blue}{x} \cdot x, 2\right)} \]
                    2. Taylor expanded in x around 0

                      \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{129600} \cdot \left(x \cdot x\right), x \cdot x, \frac{-1}{144}\right) \cdot \left(x \cdot x\right), \frac{-2}{5} \cdot {x}^{2} - 12, 1\right), x \cdot x, 2\right)} \]
                    3. Step-by-step derivation
                      1. Applied rewrites 95.1%

                        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(-0.4, x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
                      2. Final simplification 95.1%

                        \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \mathsf{fma}\left(-0.4, x \cdot x, -12\right), 1\right), x \cdot x, 2\right)} \]
                      3. Add Preprocessing
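The series used in step 3 of the derivation comes from 2·cosh(x) = Σ 2xⁿ/n! over even n, so the nested constants can be checked independently (a hypothetical verification, not part of the report): the x⁴ coefficient of the denominator should be 2/4! = 1/12 and the x⁶ coefficient 2/6! = 1/360, and the fused constant 7.71604938271605·10⁻⁶ in the final program is 1/129600.

```python
from math import factorial

# 2*cosh(x) = 2 + 2*x^2/2! + 2*x^4/4! + 2*x^6/6! + ...
# Herbie's nested form 2 + x^2*(1 + x^2*(1/12 + x^2/360)) implies
# an x^2 coefficient of 2/2! = 1, an x^4 coefficient of 2/4! = 1/12,
# and an x^6 coefficient of 2/6! = 1/360.
assert 2 / factorial(2) == 1.0
assert 2 / factorial(4) == 1 / 12
assert 2 / factorial(6) == 1 / 360

# the fused constant in the final program equals 1/129600
assert abs(7.71604938271605e-6 - 1 / 129600) < 1e-18
print("coefficients check out")
```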

                      Alternative 6: 94.6% accurate, 3.6× speedup

                      \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), -12, 1\right), x \cdot x, 2\right)} \end{array} \]
                      (FPCore (x)
                       :precision binary64
                       (/
                        2.0
                        (fma
                         (fma
                          (*
                           (fma (* (* x x) 7.71604938271605e-6) (* x x) -0.006944444444444444)
                           (* x x))
                          -12.0
                          1.0)
                         (* x x)
                         2.0)))
                      double code(double x) {
                      	return 2.0 / fma(fma((fma(((x * x) * 7.71604938271605e-6), (x * x), -0.006944444444444444) * (x * x)), -12.0, 1.0), (x * x), 2.0);
                      }
                      
                      function code(x)
                      	return Float64(2.0 / fma(fma(Float64(fma(Float64(Float64(x * x) * 7.71604938271605e-6), Float64(x * x), -0.006944444444444444) * Float64(x * x)), -12.0, 1.0), Float64(x * x), 2.0))
                      end
                      
                      code[x_] := N[(2.0 / N[(N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 7.71604938271605e-6), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.006944444444444444), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * -12.0 + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
                      
                      \begin{array}{l}
                      
                      \\
                      \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), -12, 1\right), x \cdot x, 2\right)}
                      \end{array}
                      
                      Derivation
                      1. Initial program 100.0%

                        \[\frac{2}{e^{x} + e^{-x}} \]
                      2. Add Preprocessing
                      3. Taylor expanded in x around 0

                        \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                      4. Step-by-step derivation
                        1. +-commutative N/A

                          \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                        2. *-commutative N/A

                          \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                        3. lower-fma.f64 N/A

                          \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                        4. +-commutative N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                        5. *-commutative N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                        6. lower-fma.f64 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                        7. +-commutative N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                        8. lower-fma.f64 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                        9. unpow2 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                        10. lower-*.f64 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                        11. unpow2 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                        12. lower-*.f64 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                        13. unpow2 N/A

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                        14. lower-*.f64 90.0%

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                      5. Applied rewrites 90.0%

                        \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                      6. Step-by-step derivation
                        1. Applied rewrites 90.0%

                          \[\leadsto \frac{2}{\mathsf{fma}\left(\frac{1}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right)}}, \color{blue}{x} \cdot x, 2\right)} \]
                        2. Step-by-step derivation
                          1. Applied rewrites 69.3%

                            \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), \frac{1}{\mathsf{fma}\left(x \cdot 0.002777777777777778, x, -0.08333333333333333\right)}, 1\right), \color{blue}{x} \cdot x, 2\right)} \]
                          2. Taylor expanded in x around 0

                            \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{129600} \cdot \left(x \cdot x\right), x \cdot x, \frac{-1}{144}\right) \cdot \left(x \cdot x\right), -12, 1\right), x \cdot x, 2\right)} \]
                          3. Step-by-step derivation
                            1. Applied rewrites 92.6%

                              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(7.71604938271605 \cdot 10^{-6} \cdot \left(x \cdot x\right), x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), -12, 1\right), x \cdot x, 2\right)} \]
                            2. Final simplification 92.6%

                              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 7.71604938271605 \cdot 10^{-6}, x \cdot x, -0.006944444444444444\right) \cdot \left(x \cdot x\right), -12, 1\right), x \cdot x, 2\right)} \]
                            3. Add Preprocessing

                            Alternative 7: 92.9% accurate, 4.8× speedup

                            \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)} \end{array} \]
                            (FPCore (x)
                             :precision binary64
                             (/
                              2.0
                              (fma
                               (fma (fma 0.002777777777777778 (* x x) 0.08333333333333333) (* x x) 1.0)
                               (* x x)
                               2.0)))
                            double code(double x) {
                            	return 2.0 / fma(fma(fma(0.002777777777777778, (x * x), 0.08333333333333333), (x * x), 1.0), (x * x), 2.0);
                            }
                            
                            function code(x)
                            	return Float64(2.0 / fma(fma(fma(0.002777777777777778, Float64(x * x), 0.08333333333333333), Float64(x * x), 1.0), Float64(x * x), 2.0))
                            end
                            
                            code[x_] := N[(2.0 / N[(N[(N[(0.002777777777777778 * N[(x * x), $MachinePrecision] + 0.08333333333333333), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
                            
                            \begin{array}{l}
                            
                            \\
                            \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}
                            \end{array}
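For reference (a hypothetical snippet, not produced by Herbie), Alternative 7's denominator 2 + x² + x⁴/12 + x⁶/360 can be evaluated Horner-style and compared with the exact value. As with any truncated Taylor series, it is excellent near 0 and drifts as |x| grows; plain multiply-adds stand in for fma (math.fma requires Python 3.13+).

```python
import math

def sech_exact(x):
    # the original program: 2 / (e^x + e^-x)
    return 2.0 / (math.exp(x) + math.exp(-x))

def sech_alt7(x):
    # 2 / (2 + x^2 + x^4/12 + x^6/360), Horner-style, with plain
    # multiply-adds in place of fma
    x2 = x * x
    p = (0.002777777777777778 * x2 + 0.08333333333333333) * x2 + 1.0
    return 2.0 / (p * x2 + 2.0)

# accurate near 0; the truncated series drifts for larger |x|
for x in (0.25, 1.0, 4.0):
    print(x, sech_exact(x), sech_alt7(x))
```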
                            
                            Derivation
                            1. Initial program 100.0%

                              \[\frac{2}{e^{x} + e^{-x}} \]
                            2. Add Preprocessing
                            3. Taylor expanded in x around 0

                              \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                            4. Step-by-step derivation
                              1. +-commutative N/A

                                \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                              2. *-commutative N/A

                                \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                              3. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                              4. +-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                              5. *-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                              6. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                              7. +-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              8. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              9. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              10. lower-*.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              11. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                              12. lower-*.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                              13. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                              14. lower-*.f64 90.0%

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                            5. Applied rewrites 90.0%

                              \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                            6. Add Preprocessing

                            Alternative 8: 92.8% accurate, 4.9× speedup

                            \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778 \cdot \left(x \cdot x\right), x \cdot x, 1\right), x \cdot x, 2\right)} \end{array} \]
                            (FPCore (x)
                             :precision binary64
                             (/ 2.0 (fma (fma (* 0.002777777777777778 (* x x)) (* x x) 1.0) (* x x) 2.0)))
                            double code(double x) {
                            	return 2.0 / fma(fma((0.002777777777777778 * (x * x)), (x * x), 1.0), (x * x), 2.0);
                            }
                            
                            function code(x)
                            	return Float64(2.0 / fma(fma(Float64(0.002777777777777778 * Float64(x * x)), Float64(x * x), 1.0), Float64(x * x), 2.0))
                            end
                            
                            code[x_] := N[(2.0 / N[(N[(N[(0.002777777777777778 * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
                            
                            \begin{array}{l}
                            
                            \\
                            \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778 \cdot \left(x \cdot x\right), x \cdot x, 1\right), x \cdot x, 2\right)}
                            \end{array}
                            
                            Derivation
                            1. Initial program 100.0%

                              \[\frac{2}{e^{x} + e^{-x}} \]
                            2. Add Preprocessing
                            3. Taylor expanded in x around 0

                              \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                            4. Step-by-step derivation
                              1. +-commutative N/A

                                \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                              2. *-commutative N/A

                                \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                              3. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                              4. +-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                              5. *-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                              6. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                              7. +-commutative N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              8. lower-fma.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              9. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              10. lower-*.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                              11. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                              12. lower-*.f64 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                              13. unpow2 N/A

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                              14. lower-*.f64 90.0%

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                            5. Applied rewrites 90.0%

                              \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                            6. Taylor expanded in x around inf

                              \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360} \cdot {x}^{2}, x \cdot x, 1\right), x \cdot x, 2\right)} \]
                            7. Step-by-step derivation
                              1. Applied rewrites 89.8%

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778 \cdot \left(x \cdot x\right), x \cdot x, 1\right), x \cdot x, 2\right)} \]
                              2. Add Preprocessing
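Compared with Alternative 7, this variant drops the x⁴/12 term (its denominator is 2 + x² + x⁶/360), replacing the innermost fma with a bare multiply at a small accuracy cost. A hypothetical comparison of the two denominators against the exact 2·cosh(x), using plain multiply-adds in place of fma:

```python
import math

def denom_alt7(x):
    # 2 + x^2 + x^4/12 + x^6/360 (Alternative 7's denominator)
    x2 = x * x
    return ((0.002777777777777778 * x2 + 0.08333333333333333) * x2 + 1.0) * x2 + 2.0

def denom_alt8(x):
    # same series with the x^4/12 term dropped: 2 + x^2 + x^6/360
    x2 = x * x
    return (0.002777777777777778 * x2 * x2 + 1.0) * x2 + 2.0

# the exact denominator is e^x + e^-x = 2*cosh(x)
x = 0.5
print(denom_alt7(x), denom_alt8(x), 2.0 * math.cosh(x))
```

At x = 0.5, Alternative 7's denominator stays within roughly 2·10⁻⁷ of 2·cosh(x), while dropping the x⁴/12 term costs about 5·10⁻³ in absolute terms.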

                              Alternative 9: 92.4% accurate, 5.0× speedup

                              \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\left(0.002777777777777778 \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot x\right), x \cdot x, 2\right)} \end{array} \]
                              (FPCore (x)
                               :precision binary64
                               (/ 2.0 (fma (* (* 0.002777777777777778 x) (* (* x x) x)) (* x x) 2.0)))
                              double code(double x) {
                              	return 2.0 / fma(((0.002777777777777778 * x) * ((x * x) * x)), (x * x), 2.0);
                              }
                              
                              function code(x)
                              	return Float64(2.0 / fma(Float64(Float64(0.002777777777777778 * x) * Float64(Float64(x * x) * x)), Float64(x * x), 2.0))
                              end
                              
                              code[x_] := N[(2.0 / N[(N[(N[(0.002777777777777778 * x), $MachinePrecision] * N[(N[(x * x), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision] * N[(x * x), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision]
                              
                              \begin{array}{l}
                              
                              \\
                              \frac{2}{\mathsf{fma}\left(\left(0.002777777777777778 \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot x\right), x \cdot x, 2\right)}
                              \end{array}
                              
                              Derivation
                              1. Initial program 100.0%

                                \[\frac{2}{e^{x} + e^{-x}} \]
                              2. Add Preprocessing
                              3. Taylor expanded in x around 0

                                \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                              4. Step-by-step derivation
                                1. +-commutativeN/A

                                  \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                                2. *-commutativeN/A

                                  \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                                3. lower-fma.f64N/A

                                  \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                                4. +-commutativeN/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                                5. *-commutativeN/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                                6. lower-fma.f64N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                                7. +-commutativeN/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                8. lower-fma.f64N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                9. unpow2N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                10. lower-*.f64N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                11. unpow2N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                                12. lower-*.f64N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                                13. unpow2N/A

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                                14. lower-*.f64 90.0

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                              5. Applied rewrites 90.0%

                                \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                              6. Taylor expanded in x around inf

                                \[\leadsto \frac{2}{\mathsf{fma}\left(\frac{1}{360} \cdot {x}^{4}, \color{blue}{x} \cdot x, 2\right)} \]
                              7. Step-by-step derivation
                                1. Applied rewrites 89.5%

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\left(\left(x \cdot x\right) \cdot x\right) \cdot \left(0.002777777777777778 \cdot x\right), \color{blue}{x} \cdot x, 2\right)} \]
                                2. Final simplification 89.5%

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\left(0.002777777777777778 \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot x\right), x \cdot x, 2\right)} \]
                                3. Add Preprocessing
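The derivation above bottoms out at 2/(x^6/360 + 2): every lower-order Taylor term has been expanded away. A minimal Python sketch of that final program (function name is ours; the fma is emulated with a separate multiply and add, which rounds twice where a true fused multiply-add rounds once):

```python
def sech_alt9(x):
    # final expression above: 2 / (x^6/360 + 2), fma emulated
    a = (0.002777777777777778 * x) * ((x * x) * x)   # ~ x^4/360
    return 2.0 / (a * (x * x) + 2.0)

print(sech_alt9(0.0))  # exactly 1.0: every term containing x vanishes
```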

                                Alternative 10: 89.0% accurate, 6.4× speedup?

                                \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot x, x, 2\right)} \end{array} \]
                                (FPCore (x)
                                 :precision binary64
                                 (/ 2.0 (fma (* (fma 0.08333333333333333 (* x x) 1.0) x) x 2.0)))
                                double code(double x) {
                                	return 2.0 / fma((fma(0.08333333333333333, (x * x), 1.0) * x), x, 2.0);
                                }
                                
                                function code(x)
                                	return Float64(2.0 / fma(Float64(fma(0.08333333333333333, Float64(x * x), 1.0) * x), x, 2.0))
                                end
                                
                                code[x_] := N[(2.0 / N[(N[(N[(0.08333333333333333 * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision] * x), $MachinePrecision] * x + 2.0), $MachinePrecision]), $MachinePrecision]
                                
                                \begin{array}{l}
                                
                                \\
                                \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot x, x, 2\right)}
                                \end{array}
                                
                                Derivation
                                1. Initial program 100.0%

                                  \[\frac{2}{e^{x} + e^{-x}} \]
                                2. Add Preprocessing
                                3. Taylor expanded in x around 0

                                  \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right)}} \]
                                4. Step-by-step derivation
                                  1. +-commutative N/A

                                    \[\leadsto \frac{2}{\color{blue}{{x}^{2} \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) + 2}} \]
                                  2. *-commutative N/A

                                    \[\leadsto \frac{2}{\color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 2} \]
                                  3. lower-fma.f64 N/A

                                    \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(1 + {x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right), {x}^{2}, 2\right)}} \]
                                  4. +-commutative N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) + 1}, {x}^{2}, 2\right)} \]
                                  5. *-commutative N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1, {x}^{2}, 2\right)} \]
                                  6. lower-fma.f64 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{12} + \frac{1}{360} \cdot {x}^{2}, {x}^{2}, 1\right)}, {x}^{2}, 2\right)} \]
                                  7. +-commutative N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{360} \cdot {x}^{2} + \frac{1}{12}}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                  8. lower-fma.f64 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{360}, {x}^{2}, \frac{1}{12}\right)}, {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                  9. unpow2 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                  10. lower-*.f64 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, \color{blue}{x \cdot x}, \frac{1}{12}\right), {x}^{2}, 1\right), {x}^{2}, 2\right)} \]
                                  11. unpow2 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                                  12. lower-*.f64 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), \color{blue}{x \cdot x}, 1\right), {x}^{2}, 2\right)} \]
                                  13. unpow2 N/A

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{360}, x \cdot x, \frac{1}{12}\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                                  14. lower-*.f64 90.0

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), \color{blue}{x \cdot x}, 2\right)} \]
                                5. Applied rewrites 90.0%

                                  \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.002777777777777778, x \cdot x, 0.08333333333333333\right), x \cdot x, 1\right), x \cdot x, 2\right)}} \]
                                6. Taylor expanded in x around 0

                                  \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{12}, x \cdot x, 1\right), x \cdot x, 2\right)} \]
                                7. Step-by-step derivation
                                  1. Applied rewrites 88.0%

                                    \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right), x \cdot x, 2\right)} \]
                                  2. Step-by-step derivation
                                    1. Applied rewrites 88.0%

                                      \[\leadsto \frac{2}{\mathsf{fma}\left(\mathsf{fma}\left(0.08333333333333333, x \cdot x, 1\right) \cdot x, \color{blue}{x}, 2\right)} \]
                                    2. Add Preprocessing
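Alternative 10's final program, 2/fma(fma(1/12, x·x, 1)·x, x, 2), is just 2/(2 + x² + x⁴/12). A minimal Python sketch checks it against the original program (function names are ours; the fma is emulated as a separate multiply and add, which rounds twice where a true fused multiply-add rounds once):

```python
import math

def sech_ref(x):
    # original program: 2 / (e^x + e^-x)
    return 2.0 / (math.exp(x) + math.exp(-x))

def sech_alt10(x):
    # 2 / (2 + x^2 + x^4/12), fma emulated as multiply-then-add
    inner = 0.08333333333333333 * (x * x) + 1.0
    return 2.0 / ((inner * x) * x + 2.0)

# the dropped x^6/360 term dominates the error, so agreement is best near 0
for x in (0.1, 0.5, 1.0):
    print(x, sech_ref(x), sech_alt10(x))
```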

                                    Alternative 11: 63.9% accurate, 9.4× speedup?

                                    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;x \leq 1.2:\\ \;\;\;\;\mathsf{fma}\left(x \cdot x, -0.5, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{2}{x \cdot x}\\ \end{array} \end{array} \]
                                    (FPCore (x)
                                     :precision binary64
                                     (if (<= x 1.2) (fma (* x x) -0.5 1.0) (/ 2.0 (* x x))))
                                    double code(double x) {
                                    	double tmp;
                                    	if (x <= 1.2) {
                                    		tmp = fma((x * x), -0.5, 1.0);
                                    	} else {
                                    		tmp = 2.0 / (x * x);
                                    	}
                                    	return tmp;
                                    }
                                    
                                    function code(x)
                                    	tmp = 0.0
                                    	if (x <= 1.2)
                                    		tmp = fma(Float64(x * x), -0.5, 1.0);
                                    	else
                                    		tmp = Float64(2.0 / Float64(x * x));
                                    	end
                                    	return tmp
                                    end
                                    
                                    code[x_] := If[LessEqual[x, 1.2], N[(N[(x * x), $MachinePrecision] * -0.5 + 1.0), $MachinePrecision], N[(2.0 / N[(x * x), $MachinePrecision]), $MachinePrecision]]
                                    
                                    \begin{array}{l}
                                    
                                    \\
                                    \begin{array}{l}
                                    \mathbf{if}\;x \leq 1.2:\\
                                    \;\;\;\;\mathsf{fma}\left(x \cdot x, -0.5, 1\right)\\
                                    
                                    \mathbf{else}:\\
                                    \;\;\;\;\frac{2}{x \cdot x}\\
                                    
                                    
                                    \end{array}
                                    \end{array}
                                    
                                    Derivation
                                    1. Split input into 2 regimes
                                    2. if x < 1.19999999999999996

                                      1. Initial program 100.0%

                                        \[\frac{2}{e^{x} + e^{-x}} \]
                                      2. Add Preprocessing
                                      3. Taylor expanded in x around 0

                                        \[\leadsto \color{blue}{1 + \frac{-1}{2} \cdot {x}^{2}} \]
                                      4. Step-by-step derivation
                                        1. +-commutative N/A

                                          \[\leadsto \color{blue}{\frac{-1}{2} \cdot {x}^{2} + 1} \]
                                        2. *-commutative N/A

                                          \[\leadsto \color{blue}{{x}^{2} \cdot \frac{-1}{2}} + 1 \]
                                        3. lower-fma.f64 N/A

                                          \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2}, \frac{-1}{2}, 1\right)} \]
                                        4. unpow2 N/A

                                          \[\leadsto \mathsf{fma}\left(\color{blue}{x \cdot x}, \frac{-1}{2}, 1\right) \]
                                        5. lower-*.f64 67.0

                                          \[\leadsto \mathsf{fma}\left(\color{blue}{x \cdot x}, -0.5, 1\right) \]
                                      5. Applied rewrites 67.0%

                                        \[\leadsto \color{blue}{\mathsf{fma}\left(x \cdot x, -0.5, 1\right)} \]

                                      if 1.19999999999999996 < x

                                      1. Initial program 100.0%

                                        \[\frac{2}{e^{x} + e^{-x}} \]
                                      2. Add Preprocessing
                                      3. Taylor expanded in x around 0

                                        \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2}}} \]
                                      4. Step-by-step derivation
                                        1. +-commutative N/A

                                          \[\leadsto \frac{2}{\color{blue}{{x}^{2} + 2}} \]
                                        2. unpow2 N/A

                                          \[\leadsto \frac{2}{\color{blue}{x \cdot x} + 2} \]
                                        3. lower-fma.f64 48.1

                                          \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                                      5. Applied rewrites 48.1%

                                        \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                                      6. Taylor expanded in x around inf

                                        \[\leadsto \frac{2}{{x}^{\color{blue}{2}}} \]
                                      7. Step-by-step derivation
                                        1. Applied rewrites 48.1%

                                          \[\leadsto \frac{2}{x \cdot \color{blue}{x}} \]
                                      8. Recombined 2 regimes into one program.
                                      9. Add Preprocessing
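Alternative 11's recombined program already appears above in C and Julia; an equivalent Python sketch makes the regime selection explicit (function name is ours; the fma in the Taylor branch is emulated as a separate multiply and add):

```python
def sech_alt11(x):
    # regime split at x = 1.2 (more precisely 1.19999999999999996,
    # the closest binary64 value to 1.2)
    if x <= 1.2:
        # Taylor branch: 1 - x^2/2
        return (x * x) * -0.5 + 1.0
    # asymptotic branch: 2 / x^2
    return 2.0 / (x * x)

print(sech_alt11(0.0), sech_alt11(2.0))
```

Note the asymptotic branch is only 48.1% accurate per the derivation above: 2/x² decays polynomially while sech(x) decays like 2e⁻ˣ, so the two diverge quickly past the split point.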

                                      Alternative 12: 77.0% accurate, 12.1× speedup?

                                      \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(x, x, 2\right)} \end{array} \]
                                      (FPCore (x) :precision binary64 (/ 2.0 (fma x x 2.0)))
                                      double code(double x) {
                                      	return 2.0 / fma(x, x, 2.0);
                                      }
                                      
                                      function code(x)
                                      	return Float64(2.0 / fma(x, x, 2.0))
                                      end
                                      
                                      code[x_] := N[(2.0 / N[(x * x + 2.0), $MachinePrecision]), $MachinePrecision]
                                      
                                      \begin{array}{l}
                                      
                                      \\
                                      \frac{2}{\mathsf{fma}\left(x, x, 2\right)}
                                      \end{array}
                                      
                                      Derivation
                                      1. Initial program 100.0%

                                        \[\frac{2}{e^{x} + e^{-x}} \]
                                      2. Add Preprocessing
                                      3. Taylor expanded in x around 0

                                        \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2}}} \]
                                      4. Step-by-step derivation
                                          1. +-commutative N/A

                                          \[\leadsto \frac{2}{\color{blue}{{x}^{2} + 2}} \]
                                          2. unpow2 N/A

                                          \[\leadsto \frac{2}{\color{blue}{x \cdot x} + 2} \]
                                          3. lower-fma.f64 74.0

                                          \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                                        5. Applied rewrites 74.0%

                                        \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                                      6. Add Preprocessing
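Alternative 12 is a single fused multiply-add, 2/(x² + 2): just the leading terms of the Taylor expansion. Python only gained `math.fma` in version 3.13; a sketch with a fallback (the fallback's separate multiply and add rounds twice, so it can differ from a true fma by one ulp):

```python
import math

def fma(a, b, c):
    f = getattr(math, "fma", None)  # math.fma: Python 3.13+
    return f(a, b, c) if f else a * b + c

def sech_alt12(x):
    # 2 / fma(x, x, 2), i.e. 2 / (x^2 + 2)
    return 2.0 / fma(x, x, 2.0)

print(sech_alt12(0.0))  # 1.0
```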

                                      Alternative 13: 51.4% accurate, 217.0× speedup?

                                      \[\begin{array}{l} \\ 1 \end{array} \]
                                      (FPCore (x) :precision binary64 1.0)
                                      double code(double x) {
                                      	return 1.0;
                                      }
                                      
                                      real(8) function code(x)
                                          real(8), intent (in) :: x
                                          code = 1.0d0
                                      end function
                                      
                                      public static double code(double x) {
                                      	return 1.0;
                                      }
                                      
                                      def code(x):
                                      	return 1.0
                                      
                                      function code(x)
                                      	return 1.0
                                      end
                                      
                                      function tmp = code(x)
                                      	tmp = 1.0;
                                      end
                                      
                                      code[x_] := 1.0
                                      
                                      \begin{array}{l}
                                      
                                      \\
                                      1
                                      \end{array}
                                      
                                      Derivation
                                      1. Initial program 100.0%

                                        \[\frac{2}{e^{x} + e^{-x}} \]
                                      2. Add Preprocessing
                                      3. Taylor expanded in x around 0

                                        \[\leadsto \color{blue}{1} \]
                                      4. Step-by-step derivation
                                        1. Applied rewrites 52.5%

                                          \[\leadsto \color{blue}{1} \]
                                        2. Add Preprocessing

                                        Reproduce

                                        ?
                                        herbie shell --seed 2024240 
                                        (FPCore (x)
                                          :name "Hyperbolic secant"
                                          :precision binary64
                                          (/ 2.0 (+ (exp x) (exp (- x)))))