Hyperbolic secant

Percentage Accurate: 100.0% → 100.0%
Time: 6.5s
Alternatives: 11
Speedup: 1.9×

Specification

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{2}{e^{x} + e^{-x}}
\end{array}
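The specification is the hyperbolic secant, sech x = 1/cosh x. As a quick sanity check (a sketch in Python, using only the standard math module), the program and 1/cosh agree to within a few ulps for moderate inputs:

```python
import math

def spec(x):
    # Original program: 2 / (e^x + e^-x)
    return 2.0 / (math.exp(x) + math.exp(-x))

# sech(x) = 1 / cosh(x); the two forms should agree to a few ulps
for x in [0.0, 0.5, -3.0, 10.0]:
    assert math.isclose(spec(x), 1.0 / math.cosh(x), rel_tol=1e-14)
```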

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is named in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows the average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 11 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{2}{e^{x} + e^{-x}}
\end{array}

Alternative 1: 100.0% accurate, 1.9× speedup

\[\begin{array}{l} \\ \frac{1}{\cosh x} \end{array} \]
(FPCore (x) :precision binary64 (/ 1.0 (cosh x)))
double code(double x) {
	return 1.0 / cosh(x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0 / cosh(x)
end function
public static double code(double x) {
	return 1.0 / Math.cosh(x);
}
def code(x):
	return 1.0 / math.cosh(x)
function code(x)
	return Float64(1.0 / cosh(x))
end
function tmp = code(x)
	tmp = 1.0 / cosh(x);
end
code[x_] := N[(1.0 / N[Cosh[x], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{\cosh x}
\end{array}
Derivation
  1. Initial program 100.0%

    \[\frac{2}{e^{x} + e^{-x}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
    2. clear-num (N/A)

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
    3. lift-+.f64 (N/A)

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
    4. lift-exp.f64 (N/A)

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
    5. lift-exp.f64 (N/A)

      \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
    6. lift-neg.f64 (N/A)

      \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
    7. cosh-def (N/A)

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    8. lower-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    9. lower-cosh.f64 (100.0%)

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
  4. Applied rewrites (100.0%)

    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
  5. Add Preprocessing
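The key steps above are clear-num, which rewrites 2/(e^x + e^-x) as 1/((e^x + e^-x)/2), and cosh-def, which recognizes (e^x + e^-x)/2 as cosh x. A sketch checking both forms numerically (the helper names are illustrative, not Herbie's):

```python
import math

def step_clear_num(x):
    # clear-num: 2/(e^x + e^-x)  ->  1 / ((e^x + e^-x) / 2)
    return 1.0 / ((math.exp(x) + math.exp(-x)) / 2.0)

def step_cosh_def(x):
    # cosh-def: (e^x + e^-x)/2  ->  cosh x
    return 1.0 / math.cosh(x)

for x in [-2.0, 0.25, 7.0]:
    assert math.isclose(step_clear_num(x), step_cosh_def(x), rel_tol=1e-14)
```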

Alternative 2: 88.5% accurate, 0.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (if (<= (+ (exp (- x)) (exp x)) 4.0)
   (fma (fma 0.20833333333333334 (* x x) -0.5) (* x x) 1.0)
   (/ 1.0 (* (* (fma 0.041666666666666664 (* x x) 0.5) x) x))))
double code(double x) {
	double tmp;
	if ((exp(-x) + exp(x)) <= 4.0) {
		tmp = fma(fma(0.20833333333333334, (x * x), -0.5), (x * x), 1.0);
	} else {
		tmp = 1.0 / ((fma(0.041666666666666664, (x * x), 0.5) * x) * x);
	}
	return tmp;
}
function code(x)
	tmp = 0.0
	if (Float64(exp(Float64(-x)) + exp(x)) <= 4.0)
		tmp = fma(fma(0.20833333333333334, Float64(x * x), -0.5), Float64(x * x), 1.0);
	else
		tmp = Float64(1.0 / Float64(Float64(fma(0.041666666666666664, Float64(x * x), 0.5) * x) * x));
	end
	return tmp
end
code[x_] := If[LessEqual[N[(N[Exp[(-x)], $MachinePrecision] + N[Exp[x], $MachinePrecision]), $MachinePrecision], 4.0], N[(N[(0.20833333333333334 * N[(x * x), $MachinePrecision] + -0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], N[(1.0 / N[(N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;e^{-x} + e^{x} \leq 4:\\
\;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x))) < 4

    1. Initial program 100.0%

      \[\frac{2}{e^{x} + e^{-x}} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{1 + {x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right)} \]
    4. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) + 1} \]
      2. *-commutative (N/A)

        \[\leadsto \color{blue}{\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) \cdot {x}^{2}} + 1 \]
      3. lower-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}, {x}^{2}, 1\right)} \]
      4. sub-neg (N/A)

        \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{5}{24} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)}, {x}^{2}, 1\right) \]
      5. metadata-eval (N/A)

        \[\leadsto \mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} + \color{blue}{\frac{-1}{2}}, {x}^{2}, 1\right) \]
      6. lower-fma.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{5}{24}, {x}^{2}, \frac{-1}{2}\right)}, {x}^{2}, 1\right) \]
      7. unpow2 (N/A)

        \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
      8. lower-*.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
      9. unpow2 (N/A)

        \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, x \cdot x, \frac{-1}{2}\right), \color{blue}{x \cdot x}, 1\right) \]
      10. lower-*.f64 (100.0%)

        \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), \color{blue}{x \cdot x}, 1\right) \]
    5. Applied rewrites (100.0%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)} \]

    if 4 < (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x)))

    1. Initial program 100.0%

      \[\frac{2}{e^{x} + e^{-x}} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
      2. clear-num (N/A)

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
      3. lift-+.f64 (N/A)

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
      4. lift-exp.f64 (N/A)

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
      5. lift-exp.f64 (N/A)

        \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
      6. lift-neg.f64 (N/A)

        \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
      7. cosh-def (N/A)

        \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
      8. lower-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
      9. lower-cosh.f64 (100.0%)

        \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    4. Applied rewrites (100.0%)

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    5. Taylor expanded in x around 0

      \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
    6. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
      2. *-commutative (N/A)

        \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
      3. lower-fma.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
      4. +-commutative (N/A)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
      5. lower-fma.f64 (N/A)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
      6. unpow2 (N/A)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      7. lower-*.f64 (N/A)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      8. unpow2 (N/A)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
      9. lower-*.f64 (68.7%)

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
    7. Applied rewrites (68.7%)

      \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
    8. Taylor expanded in x around inf

      \[\leadsto \frac{1}{{x}^{4} \cdot \color{blue}{\left(\frac{1}{24} + \frac{1}{2} \cdot \frac{1}{{x}^{2}}\right)}} \]
    9. Step-by-step derivation
      1. Applied rewrites (68.7%)

        \[\leadsto \frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot \color{blue}{x}} \]
    10. Recombined 2 regimes into one program.
    11. Final simplification (84.5%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x}\\ \end{array} \]
    12. Add Preprocessing
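The two regimes above can be sketched in plain Python. Note this is an approximation of the alternative, not Herbie's output: plain multiply-adds stand in for fma (math.fma only exists from Python 3.13), so rounding differs slightly.

```python
import math

def alt2(x):
    # Sketch of Alternative 2 with plain multiply-adds standing in for fma
    t = x * x
    if math.exp(-x) + math.exp(x) <= 4.0:
        # Taylor branch around 0: 1 - x^2/2 + (5/24) x^4
        return (0.20833333333333334 * t - 0.5) * t + 1.0
    # Asymptotic branch: reciprocal of x^2/2 + x^4/24
    return 1.0 / (((0.041666666666666664 * t + 0.5) * x) * x)

def sech(x):
    return 1.0 / math.cosh(x)

# Near zero the Taylor branch agrees with sech to high precision
assert math.isclose(alt2(0.01), sech(0.01), rel_tol=1e-12)
```

Away from zero the asymptotic branch is a much coarser fit, which is consistent with the 88.5% overall accuracy reported above.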

Alternative 3: 88.5% accurate, 0.9× speedup

    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(\left(0.041666666666666664 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x}\\ \end{array} \end{array} \]
    (FPCore (x)
     :precision binary64
     (if (<= (+ (exp (- x)) (exp x)) 4.0)
       (fma (fma 0.20833333333333334 (* x x) -0.5) (* x x) 1.0)
       (/ 1.0 (* (* (* 0.041666666666666664 (* x x)) x) x))))
    double code(double x) {
    	double tmp;
    	if ((exp(-x) + exp(x)) <= 4.0) {
    		tmp = fma(fma(0.20833333333333334, (x * x), -0.5), (x * x), 1.0);
    	} else {
    		tmp = 1.0 / (((0.041666666666666664 * (x * x)) * x) * x);
    	}
    	return tmp;
    }
    
    function code(x)
    	tmp = 0.0
    	if (Float64(exp(Float64(-x)) + exp(x)) <= 4.0)
    		tmp = fma(fma(0.20833333333333334, Float64(x * x), -0.5), Float64(x * x), 1.0);
    	else
    		tmp = Float64(1.0 / Float64(Float64(Float64(0.041666666666666664 * Float64(x * x)) * x) * x));
    	end
    	return tmp
    end
    
    code[x_] := If[LessEqual[N[(N[Exp[(-x)], $MachinePrecision] + N[Exp[x], $MachinePrecision]), $MachinePrecision], 4.0], N[(N[(0.20833333333333334 * N[(x * x), $MachinePrecision] + -0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], N[(1.0 / N[(N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision]]
    
    \begin{array}{l}
    
    \\
    \begin{array}{l}
    \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\
    \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\
    
    \mathbf{else}:\\
    \;\;\;\;\frac{1}{\left(\left(0.041666666666666664 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x}\\
    
    
    \end{array}
    \end{array}
    
    Derivation
    1. Split input into 2 regimes
    2. if (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x))) < 4

      1. Initial program 100.0%

        \[\frac{2}{e^{x} + e^{-x}} \]
      2. Add Preprocessing
      3. Taylor expanded in x around 0

        \[\leadsto \color{blue}{1 + {x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right)} \]
      4. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) + 1} \]
        2. *-commutative (N/A)

          \[\leadsto \color{blue}{\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) \cdot {x}^{2}} + 1 \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}, {x}^{2}, 1\right)} \]
        4. sub-neg (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{5}{24} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)}, {x}^{2}, 1\right) \]
        5. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} + \color{blue}{\frac{-1}{2}}, {x}^{2}, 1\right) \]
        6. lower-fma.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{5}{24}, {x}^{2}, \frac{-1}{2}\right)}, {x}^{2}, 1\right) \]
        7. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        8. lower-*.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
        9. unpow2 (N/A)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, x \cdot x, \frac{-1}{2}\right), \color{blue}{x \cdot x}, 1\right) \]
        10. lower-*.f64 (100.0%)

          \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), \color{blue}{x \cdot x}, 1\right) \]
      5. Applied rewrites (100.0%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)} \]

      if 4 < (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x)))

      1. Initial program 100.0%

        \[\frac{2}{e^{x} + e^{-x}} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. lift-/.f64 (N/A)

          \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
        2. clear-num (N/A)

          \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
        3. lift-+.f64 (N/A)

          \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
        4. lift-exp.f64 (N/A)

          \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
        5. lift-exp.f64 (N/A)

          \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
        6. lift-neg.f64 (N/A)

          \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
        7. cosh-def (N/A)

          \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
        8. lower-/.f64 (N/A)

          \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
        9. lower-cosh.f64 (100.0%)

          \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
      4. Applied rewrites (100.0%)

        \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
      5. Taylor expanded in x around 0

        \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
      6. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
        2. *-commutative (N/A)

          \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
        4. +-commutative (N/A)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
        5. lower-fma.f64 (N/A)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
        6. unpow2 (N/A)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
        7. lower-*.f64 (N/A)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
        8. unpow2 (N/A)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
        9. lower-*.f64 (68.7%)

          \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
      7. Applied rewrites (68.7%)

        \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
      8. Taylor expanded in x around inf

        \[\leadsto \frac{1}{{x}^{4} \cdot \color{blue}{\left(\frac{1}{24} + \frac{1}{2} \cdot \frac{1}{{x}^{2}}\right)}} \]
      9. Step-by-step derivation
        1. Applied rewrites (68.7%)

          \[\leadsto \frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot \color{blue}{x}} \]
        2. Taylor expanded in x around inf

          \[\leadsto \frac{1}{\left(\left(\frac{1}{24} \cdot {x}^{2}\right) \cdot x\right) \cdot x} \]
        3. Step-by-step derivation
          1. Applied rewrites (68.7%)

            \[\leadsto \frac{1}{\left(\left(0.041666666666666664 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x} \]
        4. Recombined 2 regimes into one program.
        5. Final simplification (84.5%)

          \[\leadsto \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(\left(0.041666666666666664 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x}\\ \end{array} \]
        6. Add Preprocessing

Alternative 4: 76.3% accurate, 1.0× speedup

        \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(x \cdot x, -0.5, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{2}{x \cdot x}\\ \end{array} \end{array} \]
        (FPCore (x)
         :precision binary64
         (if (<= (+ (exp (- x)) (exp x)) 4.0) (fma (* x x) -0.5 1.0) (/ 2.0 (* x x))))
        double code(double x) {
        	double tmp;
        	if ((exp(-x) + exp(x)) <= 4.0) {
        		tmp = fma((x * x), -0.5, 1.0);
        	} else {
        		tmp = 2.0 / (x * x);
        	}
        	return tmp;
        }
        
        function code(x)
        	tmp = 0.0
        	if (Float64(exp(Float64(-x)) + exp(x)) <= 4.0)
        		tmp = fma(Float64(x * x), -0.5, 1.0);
        	else
        		tmp = Float64(2.0 / Float64(x * x));
        	end
        	return tmp
        end
        
        code[x_] := If[LessEqual[N[(N[Exp[(-x)], $MachinePrecision] + N[Exp[x], $MachinePrecision]), $MachinePrecision], 4.0], N[(N[(x * x), $MachinePrecision] * -0.5 + 1.0), $MachinePrecision], N[(2.0 / N[(x * x), $MachinePrecision]), $MachinePrecision]]
        
        \begin{array}{l}
        
        \\
        \begin{array}{l}
        \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\
        \;\;\;\;\mathsf{fma}\left(x \cdot x, -0.5, 1\right)\\
        
        \mathbf{else}:\\
        \;\;\;\;\frac{2}{x \cdot x}\\
        
        
        \end{array}
        \end{array}
        
        Derivation
        1. Split input into 2 regimes
        2. if (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x))) < 4

          1. Initial program 100.0%

            \[\frac{2}{e^{x} + e^{-x}} \]
          2. Add Preprocessing
          3. Taylor expanded in x around 0

            \[\leadsto \color{blue}{1 + \frac{-1}{2} \cdot {x}^{2}} \]
          4. Step-by-step derivation
            1. +-commutative (N/A)

              \[\leadsto \color{blue}{\frac{-1}{2} \cdot {x}^{2} + 1} \]
            2. *-commutative (N/A)

              \[\leadsto \color{blue}{{x}^{2} \cdot \frac{-1}{2}} + 1 \]
            3. lower-fma.f64 (N/A)

              \[\leadsto \color{blue}{\mathsf{fma}\left({x}^{2}, \frac{-1}{2}, 1\right)} \]
            4. unpow2 (N/A)

              \[\leadsto \mathsf{fma}\left(\color{blue}{x \cdot x}, \frac{-1}{2}, 1\right) \]
            5. lower-*.f64 (100.0%)

              \[\leadsto \mathsf{fma}\left(\color{blue}{x \cdot x}, -0.5, 1\right) \]
          5. Applied rewrites (100.0%)

            \[\leadsto \color{blue}{\mathsf{fma}\left(x \cdot x, -0.5, 1\right)} \]

          if 4 < (+.f64 (exp.f64 x) (exp.f64 (neg.f64 x)))

          1. Initial program 100.0%

            \[\frac{2}{e^{x} + e^{-x}} \]
          2. Add Preprocessing
          3. Taylor expanded in x around 0

            \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2}}} \]
          4. Step-by-step derivation
            1. +-commutative (N/A)

              \[\leadsto \frac{2}{\color{blue}{{x}^{2} + 2}} \]
            2. unpow2 (N/A)

              \[\leadsto \frac{2}{\color{blue}{x \cdot x} + 2} \]
            3. lower-fma.f64 (46.6%)

              \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
          5. Applied rewrites (46.6%)

            \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
          6. Taylor expanded in x around inf

            \[\leadsto \frac{2}{{x}^{\color{blue}{2}}} \]
          7. Step-by-step derivation
            1. Applied rewrites (46.6%)

              \[\leadsto \frac{2}{x \cdot \color{blue}{x}} \]
          8. Recombined 2 regimes into one program.
          9. Final simplification (73.5%)

            \[\leadsto \begin{array}{l} \mathbf{if}\;e^{-x} + e^{x} \leq 4:\\ \;\;\;\;\mathsf{fma}\left(x \cdot x, -0.5, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{2}{x \cdot x}\\ \end{array} \]
          10. Add Preprocessing
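This alternative keeps only the lowest-order term in each regime, which explains its lower accuracy: the Taylor branch 1 - x²/2 is only good very near zero, and 2/x² is a coarse fit elsewhere. A sketch of the tradeoff (plain Python, without fma):

```python
import math

def alt4(x):
    if math.exp(-x) + math.exp(x) <= 4.0:
        return (x * x) * -0.5 + 1.0   # second-order Taylor: 1 - x^2/2
    return 2.0 / (x * x)              # leading term of 2 / (2 + x^2 + ...)

def sech(x):
    return 1.0 / math.cosh(x)

# Second-order Taylor is fine very near zero...
assert math.isclose(alt4(1e-3), sech(1e-3), rel_tol=1e-9)
# ...but the error grows quickly away from it
assert abs(alt4(1.0) - sech(1.0)) > 1e-2
```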

Alternative 5: 92.6% accurate, 4.8× speedup

          \[\begin{array}{l} \\ \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, x, 1\right)} \end{array} \]
          (FPCore (x)
           :precision binary64
           (/
            1.0
            (fma
             (*
              (fma (fma 0.001388888888888889 (* x x) 0.041666666666666664) (* x x) 0.5)
              x)
             x
             1.0)))
          double code(double x) {
          	return 1.0 / fma((fma(fma(0.001388888888888889, (x * x), 0.041666666666666664), (x * x), 0.5) * x), x, 1.0);
          }
          
          function code(x)
          	return Float64(1.0 / fma(Float64(fma(fma(0.001388888888888889, Float64(x * x), 0.041666666666666664), Float64(x * x), 0.5) * x), x, 1.0))
          end
          
          code[x_] := N[(1.0 / N[(N[(N[(N[(0.001388888888888889 * N[(x * x), $MachinePrecision] + 0.041666666666666664), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * x), $MachinePrecision] * x + 1.0), $MachinePrecision]), $MachinePrecision]
          
          \begin{array}{l}
          
          \\
          \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, x, 1\right)}
          \end{array}
          
          Derivation
          1. Initial program 100.0%

            \[\frac{2}{e^{x} + e^{-x}} \]
          2. Add Preprocessing
          3. Step-by-step derivation
            1. lift-/.f64 (N/A)

              \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
            2. clear-num (N/A)

              \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
            3. lift-+.f64 (N/A)

              \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
            4. lift-exp.f64 (N/A)

              \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
            5. lift-exp.f64 (N/A)

              \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
            6. lift-neg.f64 (N/A)

              \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
            7. cosh-def (N/A)

              \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
            8. lower-/.f64 (N/A)

              \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
            9. lower-cosh.f64 (100.0%)

              \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
          4. Applied rewrites (100.0%)

            \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
          5. Taylor expanded in x around 0

            \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
          6. Step-by-step derivation
            1. +-commutative (N/A)

              \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
            2. *-commutative (N/A)

              \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
            3. lower-fma.f64 (N/A)

              \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
            4. +-commutative (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
            5. *-commutative (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
            6. lower-fma.f64 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
            7. +-commutative (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            8. lower-fma.f64 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            9. unpow2 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            10. lower-*.f64 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            11. unpow2 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            12. lower-*.f64 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            13. unpow2 (N/A)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
            14. lower-*.f64 (89.7%)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
          7. Applied rewrites (89.7%)

            \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
          8. Step-by-step derivation
            1. Applied rewrites (89.7%)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, \color{blue}{x}, 1\right)} \]
            2. Add Preprocessing
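The denominator here is the degree-6 Taylor polynomial of cosh x, namely 1 + x²/2 + x⁴/24 + x⁶/720 (the constants 0.5, 0.041666…, and 0.0013888… are 1/2, 1/24, and 1/720). A sketch with plain multiply-adds in place of fma:

```python
import math

def alt5(x):
    t = x * x
    # Denominator is the degree-6 Taylor series of cosh x:
    # 1 + x^2/2 + x^4/24 + x^6/720 (plain multiply-adds stand in for fma)
    return 1.0 / ((((0.001388888888888889 * t + 0.041666666666666664) * t + 0.5) * x) * x + 1.0)

for x in [0.0, 0.5, 1.0]:
    assert math.isclose(alt5(x), 1.0 / math.cosh(x), rel_tol=1e-4)
```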

Alternative 6: 92.4% accurate, 4.9× speedup

            \[\begin{array}{l} \\ \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right), x \cdot x, 1\right)} \end{array} \]
            (FPCore (x)
             :precision binary64
             (/ 1.0 (fma (fma (* (* x x) 0.001388888888888889) (* x x) 0.5) (* x x) 1.0)))
            double code(double x) {
            	return 1.0 / fma(fma(((x * x) * 0.001388888888888889), (x * x), 0.5), (x * x), 1.0);
            }
            
            function code(x)
            	return Float64(1.0 / fma(fma(Float64(Float64(x * x) * 0.001388888888888889), Float64(x * x), 0.5), Float64(x * x), 1.0))
            end
            
            code[x_] := N[(1.0 / N[(N[(N[(N[(x * x), $MachinePrecision] * 0.001388888888888889), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
            
            \begin{array}{l}
            
            \\
            \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right), x \cdot x, 1\right)}
            \end{array}
            
            Derivation
            1. Initial program 100.0%

              \[\frac{2}{e^{x} + e^{-x}} \]
            2. Add Preprocessing
            3. Step-by-step derivation
              1. lift-/.f64N/A

                \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
              2. clear-numN/A

                \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
3. lift-+.f64 N/A

                \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
4. lift-exp.f64 N/A

                \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
5. lift-exp.f64 N/A

                \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
6. lift-neg.f64 N/A

                \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
7. cosh-def N/A

                \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
8. lower-/.f64 N/A

                \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
9. lower-cosh.f64 100.0

                \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
4. Applied rewrites 100.0%

              \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
            5. Taylor expanded in x around 0

              \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
            6. Step-by-step derivation
1. +-commutative N/A

                \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
2. *-commutative N/A

                \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
3. lower-fma.f64 N/A

                \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
4. +-commutative N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
5. *-commutative N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
6. lower-fma.f64 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
7. +-commutative N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
8. lower-fma.f64 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
9. unpow2 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
10. lower-*.f64 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
11. unpow2 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
12. lower-*.f64 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
13. unpow2 N/A

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
14. lower-*.f64 89.7

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
7. Applied rewrites 89.7%

              \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
            8. Taylor expanded in x around inf

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720} \cdot {x}^{2}, x \cdot x, \frac{1}{2}\right), x \cdot x, 1\right)} \]
            9. Step-by-step derivation
1. Applied rewrites 89.7%

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889 \cdot \left(x \cdot x\right), x \cdot x, 0.5\right), x \cdot x, 1\right)} \]
2. Final simplification 89.7%

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right), x \cdot x, 1\right)} \]
              3. Add Preprocessing
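As a quick sanity check (not part of the Herbie report), note that Alternative 6's denominator works out to 1 + x^2/2 + x^6/720: the around-inf expansion dropped the x^4/24 term of cosh's Taylor series. The Python sketch below compares it against the reference 1/cosh(x); ordinary multiply-adds stand in for the fused fma calls, so its rounding differs slightly from the binary64 fma version.

```python
import math

def sech_ref(x):
    # Reference value: 2 / (e^x + e^-x) = 1 / cosh(x)
    return 1.0 / math.cosh(x)

def sech_alt6(x):
    # Alternative 6 denominator: 1 + x^2/2 + x^6/720, evaluated
    # Horner-style with plain multiply-adds in place of fma.
    x2 = x * x
    return 1.0 / (((x2 * 0.001388888888888889) * x2 + 0.5) * x2 + 1.0)

# Close to x = 0 the truncated series tracks sech(x) well...
for x in (0.0, 0.1, 0.2):
    assert abs(sech_alt6(x) - sech_ref(x)) < 1e-4
# ...but sech decays like 2*e^(-|x|) while the polynomial reciprocal
# only decays like 720/x^6, so accuracy falls off for large |x|.
```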

              Alternative 7: 92.1% accurate, 4.9× speedup?

              \[\begin{array}{l} \\ \frac{1}{\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, x \cdot x, 1\right)} \end{array} \]
              (FPCore (x)
               :precision binary64
               (/
                1.0
                (fma
                 (* (* (fma 0.001388888888888889 (* x x) 0.041666666666666664) x) x)
                 (* x x)
                 1.0)))
              double code(double x) {
              	return 1.0 / fma(((fma(0.001388888888888889, (x * x), 0.041666666666666664) * x) * x), (x * x), 1.0);
              }
              
              function code(x)
              	return Float64(1.0 / fma(Float64(Float64(fma(0.001388888888888889, Float64(x * x), 0.041666666666666664) * x) * x), Float64(x * x), 1.0))
              end
              
              code[x_] := N[(1.0 / N[(N[(N[(N[(0.001388888888888889 * N[(x * x), $MachinePrecision] + 0.041666666666666664), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
              
              \begin{array}{l}
              
              \\
              \frac{1}{\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, x \cdot x, 1\right)}
              \end{array}
              
              Derivation
              1. Initial program 100.0%

                \[\frac{2}{e^{x} + e^{-x}} \]
              2. Add Preprocessing
              3. Step-by-step derivation
1. lift-/.f64 N/A

                  \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
2. clear-num N/A

                  \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
3. lift-+.f64 N/A

                  \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
4. lift-exp.f64 N/A

                  \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
5. lift-exp.f64 N/A

                  \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
6. lift-neg.f64 N/A

                  \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
7. cosh-def N/A

                  \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
8. lower-/.f64 N/A

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
9. lower-cosh.f64 100.0

                  \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
4. Applied rewrites 100.0%

                \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
              5. Taylor expanded in x around 0

                \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
              6. Step-by-step derivation
1. +-commutative N/A

                  \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
2. *-commutative N/A

                  \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
3. lower-fma.f64 N/A

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
4. +-commutative N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
5. *-commutative N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
6. lower-fma.f64 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
7. +-commutative N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
8. lower-fma.f64 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
9. unpow2 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
10. lower-*.f64 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
11. unpow2 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
12. lower-*.f64 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
13. unpow2 N/A

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
14. lower-*.f64 89.7

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
7. Applied rewrites 89.7%

                \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
              8. Taylor expanded in x around inf

                \[\leadsto \frac{1}{\mathsf{fma}\left({x}^{4} \cdot \left(\frac{1}{720} + \frac{1}{24} \cdot \frac{1}{{x}^{2}}\right), \color{blue}{x} \cdot x, 1\right)} \]
              9. Step-by-step derivation
1. Applied rewrites 89.6%

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, \color{blue}{x} \cdot x, 1\right)} \]
                2. Add Preprocessing

                Alternative 8: 88.5% accurate, 6.4× speedup?

                \[\begin{array}{l} \\ \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)} \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (/ 1.0 (fma (fma 0.041666666666666664 (* x x) 0.5) (* x x) 1.0)))
                double code(double x) {
                	return 1.0 / fma(fma(0.041666666666666664, (x * x), 0.5), (x * x), 1.0);
                }
                
                function code(x)
                	return Float64(1.0 / fma(fma(0.041666666666666664, Float64(x * x), 0.5), Float64(x * x), 1.0))
                end
                
                code[x_] := N[(1.0 / N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}
                \end{array}
                
                Derivation
                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Step-by-step derivation
1. lift-/.f64 N/A

                    \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
2. clear-num N/A

                    \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
3. lift-+.f64 N/A

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
4. lift-exp.f64 N/A

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
5. lift-exp.f64 N/A

                    \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
6. lift-neg.f64 N/A

                    \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
7. cosh-def N/A

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
8. lower-/.f64 N/A

                    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
9. lower-cosh.f64 100.0

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
4. Applied rewrites 100.0%

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                5. Taylor expanded in x around 0

                  \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
                6. Step-by-step derivation
1. +-commutative N/A

                    \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
2. *-commutative N/A

                    \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
3. lower-fma.f64 N/A

                    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
4. +-commutative N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
5. lower-fma.f64 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
6. unpow2 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
7. lower-*.f64 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
8. unpow2 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
9. lower-*.f64 84.5

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
7. Applied rewrites 84.5%

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
                8. Add Preprocessing
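A quick numerical check (not part of the Herbie report): Alternative 8's denominator 1 + x^2/2 + x^4/24 is the degree-4 Taylor truncation of cosh x, so its leading truncation error is about x^6/720. The sketch below, with plain multiply-adds standing in for the fused fma calls, shows the relative error growing with |x|.

```python
import math

def sech_alt8(x):
    # Alternative 8: 1 / (1 + x^2/2 + x^4/24), evaluated Horner-style;
    # ordinary ops approximate the fused fma calls of the report's code.
    x2 = x * x
    return 1.0 / ((0.041666666666666664 * x2 + 0.5) * x2 + 1.0)

# Relative error is roughly x^6/720: tiny at x = 0.1, noticeable at x = 1.
assert abs(sech_alt8(0.1) * math.cosh(0.1) - 1.0) < 1e-8
assert abs(sech_alt8(1.0) * math.cosh(1.0) - 1.0) > 1e-4
```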

                Alternative 9: 88.1% accurate, 6.6× speedup?

                \[\begin{array}{l} \\ \frac{1}{\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), x \cdot x, 1\right)} \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (/ 1.0 (fma (* 0.041666666666666664 (* x x)) (* x x) 1.0)))
                double code(double x) {
                	return 1.0 / fma((0.041666666666666664 * (x * x)), (x * x), 1.0);
                }
                
                function code(x)
                	return Float64(1.0 / fma(Float64(0.041666666666666664 * Float64(x * x)), Float64(x * x), 1.0))
                end
                
                code[x_] := N[(1.0 / N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                \frac{1}{\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), x \cdot x, 1\right)}
                \end{array}
                
                Derivation
                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Step-by-step derivation
1. lift-/.f64 N/A

                    \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
2. clear-num N/A

                    \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
3. lift-+.f64 N/A

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
4. lift-exp.f64 N/A

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
5. lift-exp.f64 N/A

                    \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
6. lift-neg.f64 N/A

                    \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
7. cosh-def N/A

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
8. lower-/.f64 N/A

                    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
9. lower-cosh.f64 100.0

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
4. Applied rewrites 100.0%

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                5. Taylor expanded in x around 0

                  \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
                6. Step-by-step derivation
1. +-commutative N/A

                    \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
2. *-commutative N/A

                    \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
3. lower-fma.f64 N/A

                    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
4. +-commutative N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
5. lower-fma.f64 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
6. unpow2 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
7. lower-*.f64 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
8. unpow2 N/A

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
9. lower-*.f64 84.5

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
7. Applied rewrites 84.5%

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
                8. Taylor expanded in x around inf

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{1}{24} \cdot {x}^{2}, \color{blue}{x} \cdot x, 1\right)} \]
                9. Step-by-step derivation
1. Applied rewrites 84.3%

                    \[\leadsto \frac{1}{\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), \color{blue}{x} \cdot x, 1\right)} \]
                  2. Add Preprocessing

                  Alternative 10: 76.4% accurate, 12.1× speedup?

                  \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(x, x, 2\right)} \end{array} \]
                  (FPCore (x) :precision binary64 (/ 2.0 (fma x x 2.0)))
                  double code(double x) {
                  	return 2.0 / fma(x, x, 2.0);
                  }
                  
                  function code(x)
                  	return Float64(2.0 / fma(x, x, 2.0))
                  end
                  
                  code[x_] := N[(2.0 / N[(x * x + 2.0), $MachinePrecision]), $MachinePrecision]
                  
                  \begin{array}{l}
                  
                  \\
                  \frac{2}{\mathsf{fma}\left(x, x, 2\right)}
                  \end{array}
                  
                  Derivation
                  1. Initial program 100.0%

                    \[\frac{2}{e^{x} + e^{-x}} \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2}}} \]
                  4. Step-by-step derivation
1. +-commutative N/A

                      \[\leadsto \frac{2}{\color{blue}{{x}^{2} + 2}} \]
2. unpow2 N/A

                      \[\leadsto \frac{2}{\color{blue}{x \cdot x} + 2} \]
3. lower-fma.f64 73.5

                      \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
5. Applied rewrites 73.5%

                    \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                  6. Add Preprocessing
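As a rough check (not from the report), Alternative 10 replaces e^x + e^-x with its quadratic Taylor truncation 2 + x^2, so it matches sech x through the x^2 term while giving up the exponential decay. A small Python sketch, with a plain add standing in for fma(x, x, 2):

```python
import math

def sech_alt10(x):
    # Alternative 10: 2 / (x*x + 2); a plain add stands in for fma(x, x, 2).
    return 2.0 / (x * x + 2.0)

# Both expand as 1 - x^2/2 + O(x^4) near 0, so small inputs agree well...
assert abs(sech_alt10(0.01) - 1.0 / math.cosh(0.01)) < 1e-8
# ...but 2/(x^2 + 2) decays polynomially, not exponentially, so the
# absolute error is already large by x = 2.
assert abs(sech_alt10(2.0) - 1.0 / math.cosh(2.0)) > 0.05
```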

                  Alternative 11: 51.1% accurate, 217.0× speedup?

                  \[\begin{array}{l} \\ 1 \end{array} \]
                  (FPCore (x) :precision binary64 1.0)
                  double code(double x) {
                  	return 1.0;
                  }
                  
                  real(8) function code(x)
                      real(8), intent (in) :: x
                      code = 1.0d0
                  end function
                  
                  public static double code(double x) {
                  	return 1.0;
                  }
                  
                  def code(x):
                  	return 1.0
                  
                  function code(x)
                  	return 1.0
                  end
                  
                  function tmp = code(x)
                  	tmp = 1.0;
                  end
                  
                  code[x_] := 1.0
                  
                  \begin{array}{l}
                  
                  \\
                  1
                  \end{array}
                  
                  Derivation
                  1. Initial program 100.0%

                    \[\frac{2}{e^{x} + e^{-x}} \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \color{blue}{1} \]
                  4. Step-by-step derivation
1. Applied rewrites 51.8%

                      \[\leadsto \color{blue}{1} \]
                    2. Add Preprocessing

                    Reproduce

                    herbie shell --seed 2024271 
                    (FPCore (x)
                      :name "Hyperbolic secant"
                      :precision binary64
                      (/ 2.0 (+ (exp x) (exp (- x)))))