Hyperbolic secant

Percentage Accurate: 100.0% → 100.0%
Time: 5.5s
Alternatives: 10
Speedup: 1.1×

Specification

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is named in the title); the vertical axis is accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The lines show averages, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 10 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{2}{e^{x} + e^{-x}} \end{array} \]
(FPCore (x) :precision binary64 (/ 2.0 (+ (exp x) (exp (- x)))))
double code(double x) {
	return 2.0 / (exp(x) + exp(-x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 2.0d0 / (exp(x) + exp(-x))
end function
public static double code(double x) {
	return 2.0 / (Math.exp(x) + Math.exp(-x));
}
def code(x):
	return 2.0 / (math.exp(x) + math.exp(-x))
function code(x)
	return Float64(2.0 / Float64(exp(x) + exp(Float64(-x))))
end
function tmp = code(x)
	tmp = 2.0 / (exp(x) + exp(-x));
end
code[x_] := N[(2.0 / N[(N[Exp[x], $MachinePrecision] + N[Exp[(-x)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 1.1× speedup

\[\begin{array}{l} \\ {\cosh x}^{-1} \end{array} \]
(FPCore (x) :precision binary64 (pow (cosh x) -1.0))
double code(double x) {
	return pow(cosh(x), -1.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = cosh(x) ** (-1.0d0)
end function
public static double code(double x) {
	return Math.pow(Math.cosh(x), -1.0);
}
def code(x):
	return math.pow(math.cosh(x), -1.0)
function code(x)
	return cosh(x) ^ -1.0
end
function tmp = code(x)
	tmp = cosh(x) ^ -1.0;
end
code[x_] := N[Power[N[Cosh[x], $MachinePrecision], -1.0], $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[\frac{2}{e^{x} + e^{-x}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 N/A

      \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
    2. clear-num N/A

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
    3. lift-+.f64 N/A

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
    4. lift-exp.f64 N/A

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
    5. lift-exp.f64 N/A

      \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
    6. lift-neg.f64 N/A

      \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
    7. cosh-def N/A

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    8. lower-/.f64 N/A

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    9. lower-cosh.f64 100.0

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
  4. Applied rewrites 100.0%

    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
  5. Final simplification 100.0%

    \[\leadsto {\cosh x}^{-1} \]
  6. Add Preprocessing
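
As a quick numerical sanity check (a sketch, not part of the report), the original program and the cosh-based rewrite can be transcribed from the Python listings above and compared directly; the helper names `sech_orig` and `sech_alt1` are ours:

```python
import math

def sech_orig(x):
    # Original program: 2 / (e^x + e^-x)
    return 2.0 / (math.exp(x) + math.exp(-x))

def sech_alt1(x):
    # Alternative 1: cosh(x)^-1
    return math.cosh(x) ** -1.0

# The two agree to within a few ulps across moderate inputs.
# (In Python, math.exp and math.cosh both raise OverflowError
# for |x| beyond roughly 709, unlike C where exp returns inf.)
for x in [0.0, 0.5, -1.0, 10.0, 100.0]:
    assert math.isclose(sech_orig(x), sech_alt1(x), rel_tol=1e-12)
```

The rewrite trades two `exp` calls, an add, and a divide for a single `cosh` call and a reciprocal, which is where the 1.1× speedup comes from.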

Alternative 2: 91.8% accurate, 1.6× speedup

\[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, x, 1\right)\right)}^{-1} \end{array} \]
(FPCore (x)
 :precision binary64
 (pow
  (fma
   (*
    (fma (fma 0.001388888888888889 (* x x) 0.041666666666666664) (* x x) 0.5)
    x)
   x
   1.0)
  -1.0))
double code(double x) {
	return pow(fma((fma(fma(0.001388888888888889, (x * x), 0.041666666666666664), (x * x), 0.5) * x), x, 1.0), -1.0);
}
function code(x)
	return fma(Float64(fma(fma(0.001388888888888889, Float64(x * x), 0.041666666666666664), Float64(x * x), 0.5) * x), x, 1.0) ^ -1.0
end
code[x_] := N[Power[N[(N[(N[(N[(0.001388888888888889 * N[(x * x), $MachinePrecision] + 0.041666666666666664), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * x), $MachinePrecision] * x + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[\frac{2}{e^{x} + e^{-x}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 N/A

      \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
    2. clear-num N/A

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
    3. lift-+.f64 N/A

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
    4. lift-exp.f64 N/A

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
    5. lift-exp.f64 N/A

      \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
    6. lift-neg.f64 N/A

      \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
    7. cosh-def N/A

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    8. lower-/.f64 N/A

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    9. lower-cosh.f64 100.0

      \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
  4. Applied rewrites 100.0%

    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
  5. Taylor expanded in x around 0

    \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
  6. Step-by-step derivation
    1. +-commutative N/A

      \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
    2. *-commutative N/A

      \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
    3. lower-fma.f64 N/A

      \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
    4. +-commutative N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
    5. *-commutative N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
    6. lower-fma.f64 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
    7. +-commutative N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    8. lower-fma.f64 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    9. unpow2 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    10. lower-*.f64 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    11. unpow2 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    12. lower-*.f64 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
    13. unpow2 N/A

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
    14. lower-*.f64 91.5

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
  7. Applied rewrites 91.5%

    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
  8. Step-by-step derivation
    1. Applied rewrites 91.5%

      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, \color{blue}{x}, 1\right)} \]
    2. Final simplification 91.5%

      \[\leadsto {\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, x, 1\right)\right)}^{-1} \]
    3. Add Preprocessing
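
To see the accuracy tradeoff concretely, here is a sketch of Alternative 2 in Python; plain multiply-adds stand in for `fma` (which Python's `math` module only gained in 3.13), and the helper name `sech_poly` is ours:

```python
import math

def sech_poly(x):
    # Alternative 2: invert the degree-6 Taylor polynomial of cosh,
    # evaluated in Horner form. 0.001388... is 1/720, 0.041666... is 1/24.
    x2 = x * x
    p = ((0.001388888888888889 * x2 + 0.041666666666666664) * x2 + 0.5) * x2 + 1.0
    return 1.0 / p

# Very accurate near 0, but the truncation error grows with |x|,
# which is why this alternative is 91.8% rather than 100% accurate.
assert math.isclose(sech_poly(0.1), 1.0 / math.cosh(0.1), rel_tol=1e-9)
assert abs(sech_poly(3.0) - 1.0 / math.cosh(3.0)) > 1e-3
```

The polynomial avoids `exp` and `cosh` entirely, which accounts for the larger 1.6× speedup.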

Alternative 3: 91.6% accurate, 1.6× speedup

    \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right) \cdot x, x, 1\right)\right)}^{-1} \end{array} \]
    (FPCore (x)
     :precision binary64
     (pow
      (fma (* (fma (* (* x x) 0.001388888888888889) (* x x) 0.5) x) x 1.0)
      -1.0))
    double code(double x) {
    	return pow(fma((fma(((x * x) * 0.001388888888888889), (x * x), 0.5) * x), x, 1.0), -1.0);
    }
    
    function code(x)
    	return fma(Float64(fma(Float64(Float64(x * x) * 0.001388888888888889), Float64(x * x), 0.5) * x), x, 1.0) ^ -1.0
    end
    
    code[x_] := N[Power[N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 0.001388888888888889), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * x), $MachinePrecision] * x + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
    
    
    Derivation
    1. Initial program 100.0%

      \[\frac{2}{e^{x} + e^{-x}} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-/.f64 N/A

        \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
      2. clear-num N/A

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
      3. lift-+.f64 N/A

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
      4. lift-exp.f64 N/A

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
      5. lift-exp.f64 N/A

        \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
      6. lift-neg.f64 N/A

        \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
      7. cosh-def N/A

        \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
      8. lower-/.f64 N/A

        \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
      9. lower-cosh.f64 100.0

        \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
    4. Applied rewrites 100.0%

      \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
    5. Taylor expanded in x around 0

      \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
    6. Step-by-step derivation
      1. +-commutative N/A

        \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
      2. *-commutative N/A

        \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
      3. lower-fma.f64 N/A

        \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
      4. +-commutative N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
      5. *-commutative N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
      6. lower-fma.f64 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
      7. +-commutative N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      8. lower-fma.f64 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      9. unpow2 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      10. lower-*.f64 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      11. unpow2 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      12. lower-*.f64 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
      13. unpow2 N/A

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
      14. lower-*.f64 91.5

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
    7. Applied rewrites 91.5%

      \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
    8. Step-by-step derivation
      1. Applied rewrites 91.5%

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right) \cdot x, \color{blue}{x}, 1\right)} \]
      2. Taylor expanded in x around inf

        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720} \cdot {x}^{2}, x \cdot x, \frac{1}{2}\right) \cdot x, x, 1\right)} \]
      3. Step-by-step derivation
        1. Applied rewrites 91.2%

          \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right) \cdot x, x, 1\right)} \]
        2. Final simplification 91.2%

          \[\leadsto {\left(\mathsf{fma}\left(\mathsf{fma}\left(\left(x \cdot x\right) \cdot 0.001388888888888889, x \cdot x, 0.5\right) \cdot x, x, 1\right)\right)}^{-1} \]
        3. Add Preprocessing
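
Alternative 3 differs from Alternative 2 only in dropping the x^4/24 term of the Taylor polynomial, keeping 1 + x^2/2 + x^6/720 before inverting. A sketch (the helper name `sech_poly3` is ours; plain multiply-adds stand in for `fma`):

```python
import math

def sech_poly3(x):
    # Alternative 3: invert 1 + x^2/2 + x^6/720 (the x^4/24 term is dropped).
    x2 = x * x
    return 1.0 / (((x2 * 0.001388888888888889) * x2 + 0.5) * x2 + 1.0)

# The dropped term is O(x^4), so near 0 this is still very accurate.
assert math.isclose(sech_poly3(0.01), 1.0 / math.cosh(0.01), rel_tol=1e-8)
```

One fewer constant term buys essentially the same speed with slightly lower accuracy, which is the tradeoff the plot above visualizes.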

Alternative 4: 91.3% accurate, 1.6× speedup

        \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, x \cdot x, 1\right)\right)}^{-1} \end{array} \]
        (FPCore (x)
         :precision binary64
         (pow
          (fma
           (* (* (fma 0.001388888888888889 (* x x) 0.041666666666666664) x) x)
           (* x x)
           1.0)
          -1.0))
        double code(double x) {
        	return pow(fma(((fma(0.001388888888888889, (x * x), 0.041666666666666664) * x) * x), (x * x), 1.0), -1.0);
        }
        
        function code(x)
        	return fma(Float64(Float64(fma(0.001388888888888889, Float64(x * x), 0.041666666666666664) * x) * x), Float64(x * x), 1.0) ^ -1.0
        end
        
        code[x_] := N[Power[N[(N[(N[(N[(0.001388888888888889 * N[(x * x), $MachinePrecision] + 0.041666666666666664), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
        
        
        Derivation
        1. Initial program 100.0%

          \[\frac{2}{e^{x} + e^{-x}} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 N/A

            \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
          2. clear-num N/A

            \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
          3. lift-+.f64 N/A

            \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
          4. lift-exp.f64 N/A

            \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
          5. lift-exp.f64 N/A

            \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
          6. lift-neg.f64 N/A

            \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
          7. cosh-def N/A

            \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
          8. lower-/.f64 N/A

            \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
          9. lower-cosh.f64 100.0

            \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
        4. Applied rewrites 100.0%

          \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
        5. Taylor expanded in x around 0

          \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
        6. Step-by-step derivation
          1. +-commutative N/A

            \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
          2. *-commutative N/A

            \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
          3. lower-fma.f64 N/A

            \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
          4. +-commutative N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
          5. *-commutative N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
          6. lower-fma.f64 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
          7. +-commutative N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          8. lower-fma.f64 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          9. unpow2 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          10. lower-*.f64 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          11. unpow2 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          12. lower-*.f64 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
          13. unpow2 N/A

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
          14. lower-*.f64 91.5

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
        7. Applied rewrites 91.5%

          \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
        8. Taylor expanded in x around inf

          \[\leadsto \frac{1}{\mathsf{fma}\left({x}^{4} \cdot \left(\frac{1}{720} + \frac{1}{24} \cdot \frac{1}{{x}^{2}}\right), \color{blue}{x} \cdot x, 1\right)} \]
        9. Step-by-step derivation
          1. Applied rewrites 90.7%

            \[\leadsto \frac{1}{\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, \color{blue}{x} \cdot x, 1\right)} \]
          2. Final simplification 90.7%

            \[\leadsto {\left(\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, x \cdot x, 1\right)\right)}^{-1} \]
          3. Add Preprocessing

Alternative 5: 91.3% accurate, 1.6× speedup

          \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\left(\left(\left(x \cdot x\right) \cdot 0.001388888888888889\right) \cdot x\right) \cdot x, x \cdot x, 1\right)\right)}^{-1} \end{array} \]
          (FPCore (x)
           :precision binary64
           (pow (fma (* (* (* (* x x) 0.001388888888888889) x) x) (* x x) 1.0) -1.0))
          double code(double x) {
          	return pow(fma(((((x * x) * 0.001388888888888889) * x) * x), (x * x), 1.0), -1.0);
          }
          
          function code(x)
          	return fma(Float64(Float64(Float64(Float64(x * x) * 0.001388888888888889) * x) * x), Float64(x * x), 1.0) ^ -1.0
          end
          
          code[x_] := N[Power[N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 0.001388888888888889), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
          
          
          Derivation
          1. Initial program 100.0%

            \[\frac{2}{e^{x} + e^{-x}} \]
          2. Add Preprocessing
          3. Step-by-step derivation
            1. lift-/.f64 N/A

              \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
            2. clear-num N/A

              \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
            3. lift-+.f64 N/A

              \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
            4. lift-exp.f64 N/A

              \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
            5. lift-exp.f64 N/A

              \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
            6. lift-neg.f64 N/A

              \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
            7. cosh-def N/A

              \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
            8. lower-/.f64 N/A

              \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
            9. lower-cosh.f64 100.0

              \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
          4. Applied rewrites 100.0%

            \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
          5. Taylor expanded in x around 0

            \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right)}} \]
          6. Step-by-step derivation
            1. +-commutative N/A

              \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) + 1}} \]
            2. *-commutative N/A

              \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right)\right) \cdot {x}^{2}} + 1} \]
            3. lower-fma.f64 N/A

              \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + {x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right), {x}^{2}, 1\right)}} \]
            4. +-commutative N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{{x}^{2} \cdot \left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) + \frac{1}{2}}, {x}^{2}, 1\right)} \]
            5. *-commutative N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}\right) \cdot {x}^{2}} + \frac{1}{2}, {x}^{2}, 1\right)} \]
            6. lower-fma.f64 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24} + \frac{1}{720} \cdot {x}^{2}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
            7. +-commutative N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\frac{1}{720} \cdot {x}^{2} + \frac{1}{24}}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            8. lower-fma.f64 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{720}, {x}^{2}, \frac{1}{24}\right)}, {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            9. unpow2 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            10. lower-*.f64 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, \color{blue}{x \cdot x}, \frac{1}{24}\right), {x}^{2}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            11. unpow2 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            12. lower-*.f64 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
            13. unpow2 N/A

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{720}, x \cdot x, \frac{1}{24}\right), x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
            14. lower-*.f64 91.5

              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
          7. Applied rewrites 91.5%

            \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right), x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
          8. Taylor expanded in x around inf

            \[\leadsto \frac{1}{\mathsf{fma}\left({x}^{4} \cdot \left(\frac{1}{720} + \frac{1}{24} \cdot \frac{1}{{x}^{2}}\right), \color{blue}{x} \cdot x, 1\right)} \]
          9. Step-by-step derivation
            1. Applied rewrites 90.7%

              \[\leadsto \frac{1}{\mathsf{fma}\left(\left(\mathsf{fma}\left(0.001388888888888889, x \cdot x, 0.041666666666666664\right) \cdot x\right) \cdot x, \color{blue}{x} \cdot x, 1\right)} \]
            2. Taylor expanded in x around inf

              \[\leadsto \frac{1}{\mathsf{fma}\left(\left(\left(\frac{1}{720} \cdot {x}^{2}\right) \cdot x\right) \cdot x, x \cdot x, 1\right)} \]
            3. Step-by-step derivation
              1. Applied rewrites 90.7%

                \[\leadsto \frac{1}{\mathsf{fma}\left(\left(\left(\left(x \cdot x\right) \cdot 0.001388888888888889\right) \cdot x\right) \cdot x, x \cdot x, 1\right)} \]
              2. Final simplification 90.7%

                \[\leadsto {\left(\mathsf{fma}\left(\left(\left(\left(x \cdot x\right) \cdot 0.001388888888888889\right) \cdot x\right) \cdot x, x \cdot x, 1\right)\right)}^{-1} \]
              3. Add Preprocessing

Alternative 6: 69.5% accurate, 1.7× speedup

              \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;x \leq 1.45:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x\right)}^{-1}\\ \end{array} \end{array} \]
              (FPCore (x)
               :precision binary64
               (if (<= x 1.45)
                 (fma (fma 0.20833333333333334 (* x x) -0.5) (* x x) 1.0)
                 (pow (* (* (fma 0.041666666666666664 (* x x) 0.5) x) x) -1.0)))
              double code(double x) {
              	double tmp;
              	if (x <= 1.45) {
              		tmp = fma(fma(0.20833333333333334, (x * x), -0.5), (x * x), 1.0);
              	} else {
              		tmp = pow(((fma(0.041666666666666664, (x * x), 0.5) * x) * x), -1.0);
              	}
              	return tmp;
              }
              
              function code(x)
              	tmp = 0.0
              	if (x <= 1.45)
              		tmp = fma(fma(0.20833333333333334, Float64(x * x), -0.5), Float64(x * x), 1.0);
              	else
              		tmp = Float64(Float64(fma(0.041666666666666664, Float64(x * x), 0.5) * x) * x) ^ -1.0;
              	end
              	return tmp
              end
              
              code[x_] := If[LessEqual[x, 1.45], N[(N[(0.20833333333333334 * N[(x * x), $MachinePrecision] + -0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], N[Power[N[(N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision], -1.0], $MachinePrecision]]
              
              \begin{array}{l}
              
              \\
              \begin{array}{l}
              \mathbf{if}\;x \leq 1.45:\\
              \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\
              
              \mathbf{else}:\\
              \;\;\;\;{\left(\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x\right)}^{-1}\\
              
              
              \end{array}
              \end{array}
              
              Derivation
              1. Split input into 2 regimes
              2. if x < 1.44999999999999996

                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Taylor expanded in x around 0

                  \[\leadsto \color{blue}{1 + {x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right)} \]
                4. Step-by-step derivation
                  1. +-commutative (N/A)

                    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) + 1} \]
                  2. *-commutative (N/A)

                    \[\leadsto \color{blue}{\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}\right) \cdot {x}^{2}} + 1 \]
                  3. lower-fma.f64 (N/A)

                    \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} - \frac{1}{2}, {x}^{2}, 1\right)} \]
                  4. sub-neg (N/A)

                    \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{5}{24} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)}, {x}^{2}, 1\right) \]
                  5. metadata-eval (N/A)

                    \[\leadsto \mathsf{fma}\left(\frac{5}{24} \cdot {x}^{2} + \color{blue}{\frac{-1}{2}}, {x}^{2}, 1\right) \]
                  6. lower-fma.f64 (N/A)

                    \[\leadsto \mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{5}{24}, {x}^{2}, \frac{-1}{2}\right)}, {x}^{2}, 1\right) \]
                  7. unpow2 (N/A)

                    \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
                  8. lower-*.f64 (N/A)

                    \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, \color{blue}{x \cdot x}, \frac{-1}{2}\right), {x}^{2}, 1\right) \]
                  9. unpow2 (N/A)

                    \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\frac{5}{24}, x \cdot x, \frac{-1}{2}\right), \color{blue}{x \cdot x}, 1\right) \]
                  10. lower-*.f64 (66.1%)

                    \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), \color{blue}{x \cdot x}, 1\right) \]
                5. Applied rewrites (66.1%)

                  \[\leadsto \color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)} \]

                3. if 1.44999999999999996 < x

                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Step-by-step derivation
                  1. lift-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
                  2. clear-num (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
                  3. lift-+.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
                  4. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
                  5. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
                  6. lift-neg.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
                  7. cosh-def (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                  8. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                  9. lower-cosh.f64 (100.0%)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                4. Applied rewrites (100.0%)

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                5. Taylor expanded in x around 0

                  \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
                6. Step-by-step derivation
                  1. +-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
                  2. *-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
                  3. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
                  4. +-commutative (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
                  5. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
                  6. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  7. lower-*.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  8. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
                  9. lower-*.f64 (75.8%)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
                7. Applied rewrites (75.8%)

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
                8. Taylor expanded in x around inf

                  \[\leadsto \frac{1}{{x}^{4} \cdot \color{blue}{\left(\frac{1}{24} + \frac{1}{2} \cdot \frac{1}{{x}^{2}}\right)}} \]
                9. Step-by-step derivation
                  1. Applied rewrites (75.8%)

                    \[\leadsto \frac{1}{\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot \color{blue}{x}} \]
                10. Recombined 2 regimes into one program.
                11. Final simplification (69.1%)

                  \[\leadsto \begin{array}{l} \mathbf{if}\;x \leq 1.45:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.20833333333333334, x \cdot x, -0.5\right), x \cdot x, 1\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right) \cdot x\right) \cdot x\right)}^{-1}\\ \end{array} \]
                12. Add Preprocessing
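
                The recombined two-regime program can be sanity-checked numerically. The sketch below is illustrative only (not part of the Herbie report); it mirrors the branch structure of Alternative 6, emulating each fma(a, b, c) as a plain a*b + c since math.fma only exists from Python 3.13:

                ```python
                import math

                def sech(x):
                    # Reference program: 2 / (e^x + e^-x)
                    return 2.0 / (math.exp(x) + math.exp(-x))

                def alt6(x):
                    # Branch structure of Alternative 6; fma(a, b, c) emulated as
                    # a*b + c, so the fused rounding is lost but the shape is kept.
                    if x <= 1.45:
                        return (0.20833333333333334 * (x * x) + -0.5) * (x * x) + 1.0
                    else:
                        return 1.0 / ((0.041666666666666664 * (x * x) + 0.5) * x * x)

                # Near zero the Taylor branch tracks the true value closely.
                print(abs(alt6(0.1) - sech(0.1)))
                ```

                Away from zero the else-branch is only a crude tail approximation, which is why this alternative averages 69.5% accuracy overall.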

                Alternative 7: 87.7% accurate, 1.8× speedup

                \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)\right)}^{-1} \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (pow (fma (fma 0.041666666666666664 (* x x) 0.5) (* x x) 1.0) -1.0))
                double code(double x) {
                	return pow(fma(fma(0.041666666666666664, (x * x), 0.5), (x * x), 1.0), -1.0);
                }
                
                function code(x)
                	return fma(fma(0.041666666666666664, Float64(x * x), 0.5), Float64(x * x), 1.0) ^ -1.0
                end
                
                code[x_] := N[Power[N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision] + 0.5), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                {\left(\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)\right)}^{-1}
                \end{array}
                
                Derivation
                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Step-by-step derivation
                  1. lift-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
                  2. clear-num (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
                  3. lift-+.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
                  4. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
                  5. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
                  6. lift-neg.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
                  7. cosh-def (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                  8. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                  9. lower-cosh.f64 (100.0%)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                4. Applied rewrites (100.0%)

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                5. Taylor expanded in x around 0

                  \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
                6. Step-by-step derivation
                  1. +-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
                  2. *-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
                  3. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
                  4. +-commutative (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
                  5. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
                  6. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  7. lower-*.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  8. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
                  9. lower-*.f64 (87.7%)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
                7. Applied rewrites (87.7%)

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
                8. Final simplification (87.7%)

                  \[\leadsto {\left(\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)\right)}^{-1} \]
                9. Add Preprocessing
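
                This alternative is just the reciprocal of the degree-4 Taylor polynomial of cosh x, i.e. 1 / (1 + x²/2 + x⁴/24). A quick numeric check (an illustrative sketch, with the fma calls written as plain multiply-adds rather than fused ones):

                ```python
                import math

                def sech(x):
                    # Reference program: 2 / (e^x + e^-x)
                    return 2.0 / (math.exp(x) + math.exp(-x))

                def alt7(x):
                    # Reciprocal of cosh's degree-4 Taylor polynomial,
                    # 1 + x^2/2 + x^4/24, evaluated Horner-style.
                    xx = x * x
                    return 1.0 / ((0.041666666666666664 * xx + 0.5) * xx + 1.0)

                for x in (0.0, 0.25, 0.5):
                    print(x, abs(alt7(x) - sech(x)))
                ```

                The error grows with |x| (the truncated x⁶/720 term dominates), consistent with the 87.7% average accuracy reported above.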

                Alternative 8: 87.3% accurate, 1.8× speedup

                \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), x \cdot x, 1\right)\right)}^{-1} \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (pow (fma (* 0.041666666666666664 (* x x)) (* x x) 1.0) -1.0))
                double code(double x) {
                	return pow(fma((0.041666666666666664 * (x * x)), (x * x), 1.0), -1.0);
                }
                
                function code(x)
                	return fma(Float64(0.041666666666666664 * Float64(x * x)), Float64(x * x), 1.0) ^ -1.0
                end
                
                code[x_] := N[Power[N[(N[(0.041666666666666664 * N[(x * x), $MachinePrecision]), $MachinePrecision] * N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                {\left(\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), x \cdot x, 1\right)\right)}^{-1}
                \end{array}
                
                Derivation
                1. Initial program 100.0%

                  \[\frac{2}{e^{x} + e^{-x}} \]
                2. Add Preprocessing
                3. Step-by-step derivation
                  1. lift-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{2}{e^{x} + e^{-x}}} \]
                  2. clear-num (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\frac{e^{x} + e^{-x}}{2}}} \]
                  3. lift-+.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x} + e^{-x}}}{2}} \]
                  4. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{\color{blue}{e^{x}} + e^{-x}}{2}} \]
                  5. lift-exp.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + \color{blue}{e^{-x}}}{2}} \]
                  6. lift-neg.f64 (N/A)

                    \[\leadsto \frac{1}{\frac{e^{x} + e^{\color{blue}{\mathsf{neg}\left(x\right)}}}{2}} \]
                  7. cosh-def (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                  8. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                  9. lower-cosh.f64 (100.0%)

                    \[\leadsto \frac{1}{\color{blue}{\cosh x}} \]
                4. Applied rewrites (100.0%)

                  \[\leadsto \color{blue}{\frac{1}{\cosh x}} \]
                5. Taylor expanded in x around 0

                  \[\leadsto \frac{1}{\color{blue}{1 + {x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right)}} \]
                6. Step-by-step derivation
                  1. +-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) + 1}} \]
                  2. *-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}\right) \cdot {x}^{2}} + 1} \]
                  3. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\frac{1}{2} + \frac{1}{24} \cdot {x}^{2}, {x}^{2}, 1\right)}} \]
                  4. +-commutative (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\frac{1}{24} \cdot {x}^{2} + \frac{1}{2}}, {x}^{2}, 1\right)} \]
                  5. lower-fma.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{24}, {x}^{2}, \frac{1}{2}\right)}, {x}^{2}, 1\right)} \]
                  6. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  7. lower-*.f64 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, \color{blue}{x \cdot x}, \frac{1}{2}\right), {x}^{2}, 1\right)} \]
                  8. unpow2 (N/A)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{24}, x \cdot x, \frac{1}{2}\right), \color{blue}{x \cdot x}, 1\right)} \]
                  9. lower-*.f64 (87.7%)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), \color{blue}{x \cdot x}, 1\right)} \]
                7. Applied rewrites (87.7%)

                  \[\leadsto \frac{1}{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(0.041666666666666664, x \cdot x, 0.5\right), x \cdot x, 1\right)}} \]
                8. Taylor expanded in x around inf

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{1}{24} \cdot {x}^{2}, \color{blue}{x} \cdot x, 1\right)} \]
                9. Step-by-step derivation
                  1. Applied rewrites (87.0%)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), \color{blue}{x} \cdot x, 1\right)} \]
                  2. Final simplification (87.0%)

                    \[\leadsto {\left(\mathsf{fma}\left(0.041666666666666664 \cdot \left(x \cdot x\right), x \cdot x, 1\right)\right)}^{-1} \]
                  3. Add Preprocessing
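
                  Compared with Alternative 7, this variant drops the 0.5 coefficient from the inner fma, so its denominator is 1 + x⁴/24 rather than 1 + x²/2 + x⁴/24: one fewer fused operation in exchange for a slightly lower average accuracy. An illustrative side-by-side check (not from the report; fma again written as a plain multiply-add):

                  ```python
                  import math

                  def sech(x):
                      # Reference program: 2 / (e^x + e^-x)
                      return 2.0 / (math.exp(x) + math.exp(-x))

                  def alt7(x):
                      # Alternative 7: 1 / (1 + x^2/2 + x^4/24)
                      xx = x * x
                      return 1.0 / ((0.041666666666666664 * xx + 0.5) * xx + 1.0)

                  def alt8(x):
                      # Alternative 8: 1 / (1 + x^4/24) -- the x^2/2 term is gone
                      xx = x * x
                      return 1.0 / (0.041666666666666664 * xx * xx + 1.0)

                  for x in (0.5, 2.0, 4.0):
                      print(x, abs(alt7(x) - sech(x)), abs(alt8(x) - sech(x)))
                  ```

                  At moderate inputs Alternative 7's extra term pays off, while for large |x| both forms decay like 1/x⁴ and behave similarly.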

                  Alternative 9: 76.0% accurate, 12.1× speedup

                  \[\begin{array}{l} \\ \frac{2}{\mathsf{fma}\left(x, x, 2\right)} \end{array} \]
                  (FPCore (x) :precision binary64 (/ 2.0 (fma x x 2.0)))
                  double code(double x) {
                  	return 2.0 / fma(x, x, 2.0);
                  }
                  
                  function code(x)
                  	return Float64(2.0 / fma(x, x, 2.0))
                  end
                  
                  code[x_] := N[(2.0 / N[(x * x + 2.0), $MachinePrecision]), $MachinePrecision]
                  
                  \begin{array}{l}
                  
                  \\
                  \frac{2}{\mathsf{fma}\left(x, x, 2\right)}
                  \end{array}
                  
                  Derivation
                  1. Initial program 100.0%

                    \[\frac{2}{e^{x} + e^{-x}} \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \frac{2}{\color{blue}{2 + {x}^{2}}} \]
                  4. Step-by-step derivation
                    1. +-commutative (N/A)

                      \[\leadsto \frac{2}{\color{blue}{{x}^{2} + 2}} \]
                    2. unpow2 (N/A)

                      \[\leadsto \frac{2}{\color{blue}{x \cdot x} + 2} \]
                    3. lower-fma.f64 (76.4%)

                      \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                  5. Applied rewrites (76.4%)

                    \[\leadsto \frac{2}{\color{blue}{\mathsf{fma}\left(x, x, 2\right)}} \]
                  6. Add Preprocessing
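
                  The 12.1× speedup comes from replacing two exp calls with a single fused multiply-add; the price is that 2/(x² + 2) only tracks sech x near zero. An illustrative check (not from the report; x*x + 2.0 stands in for the fma):

                  ```python
                  import math

                  def sech(x):
                      # Reference program: 2 / (e^x + e^-x)
                      return 2.0 / (math.exp(x) + math.exp(-x))

                  def alt9(x):
                      # 2 / fma(x, x, 2), with a plain multiply-add in place of the fma.
                      return 2.0 / (x * x + 2.0)

                  # Close near 0, drifts as |x| grows.
                  for x in (0.1, 1.0, 3.0):
                      print(x, sech(x), alt9(x))
                  ```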

                  Alternative 10: 50.3% accurate, 217.0× speedup

                  \[\begin{array}{l} \\ 1 \end{array} \]
                  (FPCore (x) :precision binary64 1.0)
                  double code(double x) {
                  	return 1.0;
                  }
                  
                  real(8) function code(x)
                      real(8), intent (in) :: x
                      code = 1.0d0
                  end function
                  
                  public static double code(double x) {
                  	return 1.0;
                  }
                  
                  def code(x):
                  	return 1.0
                  
                  function code(x)
                  	return 1.0
                  end
                  
                  function tmp = code(x)
                  	tmp = 1.0;
                  end
                  
                  code[x_] := 1.0
                  
                  \begin{array}{l}
                  
                  \\
                  1
                  \end{array}
                  
                  Derivation
                  1. Initial program 100.0%

                    \[\frac{2}{e^{x} + e^{-x}} \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \color{blue}{1} \]
                  4. Step-by-step derivation
                    1. Applied rewrites (46.6%)

                      \[\leadsto \color{blue}{1} \]
                    2. Add Preprocessing

                    Reproduce

                    herbie shell --seed 2024309 
                    (FPCore (x)
                      :name "Hyperbolic secant"
                      :precision binary64
                      (/ 2.0 (+ (exp x) (exp (- x)))))