Quotient of sum of exps

Percentage Accurate: 99.0% → 99.0%
Time: 7.3s
Alternatives: 20
Speedup: 1.0×

Specification

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
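This specification is the two-input softmax weight, equivalently the logistic function of a − b, since e^a/(e^a + e^b) = 1/(1 + e^(b−a)). A quick sketch (our illustration, not part of the report; the function names are ours) shows why the naive form is fragile: both exponentials can overflow even when the true result is well within range.

```python
import math

def naive(a, b):
    # Direct transcription of the specification
    return math.exp(a) / (math.exp(a) + math.exp(b))

def shifted(a, b):
    # Mathematically equal form: divide numerator and denominator by exp(a)
    return 1.0 / (1.0 + math.exp(b - a))

print(naive(1.0, 2.0))      # about 0.2689, i.e. 1 / (1 + e)
print(shifted(800.0, 1.0))  # 1.0; naive(800.0, 1.0) raises OverflowError in exp
```

The report's 99.0% accuracy suggests Herbie's sampled inputs mostly avoid this overflow region, which is why the improvements below are modest.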

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (named in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. Lines show averages and dots show individual samples.

Accuracy vs Speed

Herbie found 20 alternatives:

Alternative / Accuracy / Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}

Alternative 1: 99.0% accurate, 0.8× speedup

\[\begin{array}{l} \\ {\left(e^{-a} \cdot \left(e^{b} + e^{a}\right)\right)}^{-1} \end{array} \]
(FPCore (a b)
 :precision binary64
 (pow (* (exp (- a)) (+ (exp b) (exp a))) -1.0))
double code(double a, double b) {
	return pow((exp(-a) * (exp(b) + exp(a))), -1.0);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = (exp(-a) * (exp(b) + exp(a))) ** (-1.0d0)
end function
public static double code(double a, double b) {
	return Math.pow((Math.exp(-a) * (Math.exp(b) + Math.exp(a))), -1.0);
}
def code(a, b):
	return math.pow((math.exp(-a) * (math.exp(b) + math.exp(a))), -1.0)
function code(a, b)
	return Float64(exp(Float64(-a)) * Float64(exp(b) + exp(a))) ^ -1.0
end
function tmp = code(a, b)
	tmp = (exp(-a) * (exp(b) + exp(a))) ^ -1.0;
end
code[a_, b_] := N[Power[N[(N[Exp[(-a)], $MachinePrecision] * N[(N[Exp[b], $MachinePrecision] + N[Exp[a], $MachinePrecision]), $MachinePrecision]), $MachinePrecision], -1.0], $MachinePrecision]
\begin{array}{l}

\\
{\left(e^{-a} \cdot \left(e^{b} + e^{a}\right)\right)}^{-1}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
    2. lift-exp.f64 (N/A)

      \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
    3. sinh-+-cosh-rev (N/A)

      \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
    4. flip-+ (N/A)

      \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
    5. sinh-cosh (N/A)

      \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
    6. sinh-cosh (N/A)

      \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
    7. sinh---cosh-rev (N/A)

      \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
    8. associate-/l/ (N/A)

      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
    9. lower-/.f64 (N/A)

      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
    10. sinh-cosh (N/A)

      \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
    11. lower-*.f64 (N/A)

      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
    12. lower-exp.f64 (N/A)

      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
    13. lower-neg.f64 (98.8)

      \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
    14. lift-+.f64 (N/A)

      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
    15. +-commutative (N/A)

      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
    16. lower-+.f64 (98.8)

      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
  4. Applied rewrites (98.8%)

    \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
  5. Final simplification (98.8%)

    \[\leadsto {\left(e^{-a} \cdot \left(e^{b} + e^{a}\right)\right)}^{-1} \]
  6. Add Preprocessing
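As a sanity check (our sketch, not Herbie's output; function names are ours), the rewritten form (e^(−a)·(e^b + e^a))^(−1) can be compared against the original on ordinary inputs. The two agree to near machine precision, and the extra exp and pow calls explain the 0.8× speedup.

```python
import math

def original(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alternative_1(a, b):
    # (exp(-a) * (exp(b) + exp(a))) ** -1, mirroring the listings above
    return (math.exp(-a) * (math.exp(b) + math.exp(a))) ** -1.0

# Agreement to near machine precision on ordinary inputs
for a, b in [(0.5, -1.25), (3.0, 2.0), (-4.0, 1.5)]:
    rel = abs(original(a, b) - alternative_1(a, b)) / original(a, b)
    assert rel < 1e-12
```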

Alternative 2: 99.0% accurate, 0.8× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + {\left(e^{-b}\right)}^{-1}} \end{array} \]
(FPCore (a b)
 :precision binary64
 (/ (exp a) (+ (exp a) (pow (exp (- b)) -1.0))))
double code(double a, double b) {
	return exp(a) / (exp(a) + pow(exp(-b), -1.0));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + (exp(-b) ** (-1.0d0)))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.pow(Math.exp(-b), -1.0));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.pow(math.exp(-b), -1.0))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + (exp(Float64(-b)) ^ -1.0)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + (exp(-b) ^ -1.0));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Power[N[Exp[(-b)], $MachinePrecision], -1.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + {\left(e^{-b}\right)}^{-1}}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-exp.f64 (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \color{blue}{e^{b}}} \]
    2. sinh-+-cosh-rev (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \color{blue}{\left(\cosh b + \sinh b\right)}} \]
    3. flip-+ (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\cosh b - \sinh b}}} \]
    4. sinh---cosh-rev (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(b\right)}}}} \]
    5. lower-/.f64 (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(b\right)}}}} \]
    6. sinh-cosh (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(b\right)}}} \]
    7. lower-exp.f64 (N/A)

      \[\leadsto \frac{e^{a}}{e^{a} + \frac{1}{\color{blue}{e^{\mathsf{neg}\left(b\right)}}}} \]
    8. lower-neg.f64 (98.8)

      \[\leadsto \frac{e^{a}}{e^{a} + \frac{1}{e^{\color{blue}{-b}}}} \]
  4. Applied rewrites (98.8%)

    \[\leadsto \frac{e^{a}}{e^{a} + \color{blue}{\frac{1}{e^{-b}}}} \]
  5. Final simplification (98.8%)

    \[\leadsto \frac{e^{a}}{e^{a} + {\left(e^{-b}\right)}^{-1}} \]
  6. Add Preprocessing

Alternative 3: 99.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Add Preprocessing
  3. Add Preprocessing

Alternative 4: 98.4% accurate, 1.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -7.5 \cdot 10^{-7}:\\ \;\;\;\;{\left(e^{-a} - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -7.5e-7)
   (pow (- (exp (- a)) -1.0) -1.0)
   (pow (+ (exp b) 1.0) -1.0)))
double code(double a, double b) {
	double tmp;
	if (a <= -7.5e-7) {
		tmp = pow((exp(-a) - -1.0), -1.0);
	} else {
		tmp = pow((exp(b) + 1.0), -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-7.5d-7)) then
        tmp = (exp(-a) - (-1.0d0)) ** (-1.0d0)
    else
        tmp = (exp(b) + 1.0d0) ** (-1.0d0)
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -7.5e-7) {
		tmp = Math.pow((Math.exp(-a) - -1.0), -1.0);
	} else {
		tmp = Math.pow((Math.exp(b) + 1.0), -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -7.5e-7:
		tmp = math.pow((math.exp(-a) - -1.0), -1.0)
	else:
		tmp = math.pow((math.exp(b) + 1.0), -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -7.5e-7)
		tmp = Float64(exp(Float64(-a)) - -1.0) ^ -1.0;
	else
		tmp = Float64(exp(b) + 1.0) ^ -1.0;
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -7.5e-7)
		tmp = (exp(-a) - -1.0) ^ -1.0;
	else
		tmp = (exp(b) + 1.0) ^ -1.0;
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -7.5e-7], N[Power[N[(N[Exp[(-a)], $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[Exp[b], $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -7.5 \cdot 10^{-7}:\\
\;\;\;\;{\left(e^{-a} - -1\right)}^{-1}\\

\mathbf{else}:\\
\;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -7.5000000000000002e-7

    1. Initial program 99.9%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
      2. lift-exp.f64 (N/A)

        \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
      3. sinh-+-cosh-rev (N/A)

        \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
      4. flip-+ (N/A)

        \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
      5. sinh-cosh (N/A)

        \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
      6. sinh-cosh (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
      7. sinh---cosh-rev (N/A)

        \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
      8. associate-/l/ (N/A)

        \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
      9. lower-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
      10. sinh-cosh (N/A)

        \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
      11. lower-*.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
      12. lower-exp.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
      13. lower-neg.f64 (100.0)

        \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
      14. lift-+.f64 (N/A)

        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
      15. +-commutative (N/A)

        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
      16. lower-+.f64 (100.0)

        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
    4. Applied rewrites (100.0%)

      \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
    5. Taylor expanded in b around 0

      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
    6. Step-by-step derivation
      1. distribute-lft-in (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
      2. *-rgt-identity (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
      3. cancel-sign-sub (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
      4. distribute-lft-neg-out (N/A)

        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
      5. exp-neg (N/A)

        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
      6. lft-mult-inverse (N/A)

        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
      7. metadata-eval (N/A)

        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
      8. lower--.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
      9. lower-exp.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
      10. lower-neg.f64 (98.9)

        \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
    7. Applied rewrites (98.9%)

      \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]

    if -7.5000000000000002e-7 < a

    1. Initial program 98.3%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Taylor expanded in a around 0

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
    4. Step-by-step derivation
      1. lower-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
      2. +-commutative (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
      3. lower-+.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
      4. lower-exp.f64 (98.3)

        \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
    5. Applied rewrites (98.3%)

      \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (98.5%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -7.5 \cdot 10^{-7}:\\ \;\;\;\;{\left(e^{-a} - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \]
  5. Add Preprocessing
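The two regimes can be read off the code: the a ≤ −7.5·10⁻⁷ branch is the original with e^b Taylor-replaced by 1, and the else branch is the original with e^a replaced by 1. A small sketch (illustrative, not from the report; function names are ours) confirms each branch matches the original exactly when the dropped variable is 0:

```python
import math

def original(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alternative_4(a, b):
    if a <= -7.5e-7:
        # e^b ~ 1, leaving 1 / (exp(-a) + 1), written as (exp(-a) - -1)^-1 above
        return (math.exp(-a) - -1.0) ** -1.0
    # e^a ~ 1, leaving 1 / (exp(b) + 1)
    return (math.exp(b) + 1.0) ** -1.0

# Each branch agrees with the original when the neglected input is 0
print(abs(alternative_4(-2.0, 0.0) - original(-2.0, 0.0)))
print(abs(alternative_4(0.0, 1.0) - original(0.0, 1.0)))
```

Away from those lines the branches are only approximations, which is why overall accuracy drops to 98.4% while speed improves 1.5×.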

Alternative 5: 98.4% accurate, 1.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -0.4:\\ \;\;\;\;\frac{e^{a}}{2 + a}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -0.4) (/ (exp a) (+ 2.0 a)) (pow (+ (exp b) 1.0) -1.0)))
double code(double a, double b) {
	double tmp;
	if (a <= -0.4) {
		tmp = exp(a) / (2.0 + a);
	} else {
		tmp = pow((exp(b) + 1.0), -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-0.4d0)) then
        tmp = exp(a) / (2.0d0 + a)
    else
        tmp = (exp(b) + 1.0d0) ** (-1.0d0)
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -0.4) {
		tmp = Math.exp(a) / (2.0 + a);
	} else {
		tmp = Math.pow((Math.exp(b) + 1.0), -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -0.4:
		tmp = math.exp(a) / (2.0 + a)
	else:
		tmp = math.pow((math.exp(b) + 1.0), -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -0.4)
		tmp = Float64(exp(a) / Float64(2.0 + a));
	else
		tmp = Float64(exp(b) + 1.0) ^ -1.0;
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -0.4)
		tmp = exp(a) / (2.0 + a);
	else
		tmp = (exp(b) + 1.0) ^ -1.0;
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -0.4], N[(N[Exp[a], $MachinePrecision] / N[(2.0 + a), $MachinePrecision]), $MachinePrecision], N[Power[N[(N[Exp[b], $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -0.4:\\
\;\;\;\;\frac{e^{a}}{2 + a}\\

\mathbf{else}:\\
\;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -0.40000000000000002

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \frac{e^{a}}{\color{blue}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
      2. lower-+.f64 (N/A)

        \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
      3. lower-exp.f64 (100.0)

        \[\leadsto \frac{e^{a}}{\color{blue}{e^{a}} + 1} \]
    5. Applied rewrites (100.0%)

      \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
    6. Taylor expanded in a around 0

      \[\leadsto \frac{e^{a}}{2 + \color{blue}{a}} \]
    7. Step-by-step derivation
      1. Applied rewrites (98.9%)

        \[\leadsto \frac{e^{a}}{2 + \color{blue}{a}} \]

      if -0.40000000000000002 < a

      1. Initial program 98.3%

        \[\frac{e^{a}}{e^{a} + e^{b}} \]
      2. Add Preprocessing
      3. Taylor expanded in a around 0

        \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
      4. Step-by-step derivation
        1. lower-/.f64 (N/A)

          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
        2. +-commutative (N/A)

          \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
        3. lower-+.f64 (N/A)

          \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
        4. lower-exp.f64 (97.7)

          \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
      5. Applied rewrites (97.7%)

        \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
    8. Recombined 2 regimes into one program.
    9. Final simplification (98.0%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -0.4:\\ \;\;\;\;\frac{e^{a}}{2 + a}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \]
    10. Add Preprocessing

    Alternative 6: 98.4% accurate, 1.5× speedup

    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -0.4:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \end{array} \]
    (FPCore (a b)
     :precision binary64
     (if (<= a -0.4) (/ (exp a) 2.0) (pow (+ (exp b) 1.0) -1.0)))
    double code(double a, double b) {
    	double tmp;
    	if (a <= -0.4) {
    		tmp = exp(a) / 2.0;
    	} else {
    		tmp = pow((exp(b) + 1.0), -1.0);
    	}
    	return tmp;
    }
    
    real(8) function code(a, b)
        real(8), intent (in) :: a
        real(8), intent (in) :: b
        real(8) :: tmp
        if (a <= (-0.4d0)) then
            tmp = exp(a) / 2.0d0
        else
            tmp = (exp(b) + 1.0d0) ** (-1.0d0)
        end if
        code = tmp
    end function
    
    public static double code(double a, double b) {
    	double tmp;
    	if (a <= -0.4) {
    		tmp = Math.exp(a) / 2.0;
    	} else {
    		tmp = Math.pow((Math.exp(b) + 1.0), -1.0);
    	}
    	return tmp;
    }
    
    def code(a, b):
    	tmp = 0
    	if a <= -0.4:
    		tmp = math.exp(a) / 2.0
    	else:
    		tmp = math.pow((math.exp(b) + 1.0), -1.0)
    	return tmp
    
    function code(a, b)
    	tmp = 0.0
    	if (a <= -0.4)
    		tmp = Float64(exp(a) / 2.0);
    	else
    		tmp = Float64(exp(b) + 1.0) ^ -1.0;
    	end
    	return tmp
    end
    
    function tmp_2 = code(a, b)
    	tmp = 0.0;
    	if (a <= -0.4)
    		tmp = exp(a) / 2.0;
    	else
    		tmp = (exp(b) + 1.0) ^ -1.0;
    	end
    	tmp_2 = tmp;
    end
    
    code[a_, b_] := If[LessEqual[a, -0.4], N[(N[Exp[a], $MachinePrecision] / 2.0), $MachinePrecision], N[Power[N[(N[Exp[b], $MachinePrecision] + 1.0), $MachinePrecision], -1.0], $MachinePrecision]]
    
    \begin{array}{l}
    
    \\
    \begin{array}{l}
    \mathbf{if}\;a \leq -0.4:\\
    \;\;\;\;\frac{e^{a}}{2}\\
    
    \mathbf{else}:\\
    \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\
    
    
    \end{array}
    \end{array}
    
    Derivation
    1. Split input into 2 regimes
    2. if a < -0.40000000000000002

      1. Initial program 100.0%

        \[\frac{e^{a}}{e^{a} + e^{b}} \]
      2. Add Preprocessing
      3. Taylor expanded in b around 0

        \[\leadsto \frac{e^{a}}{\color{blue}{1 + e^{a}}} \]
      4. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
        2. lower-+.f64 (N/A)

          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
        3. lower-exp.f64 (100.0)

          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a}} + 1} \]
      5. Applied rewrites (100.0%)

        \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
      6. Taylor expanded in a around 0

        \[\leadsto \frac{e^{a}}{2} \]
      7. Step-by-step derivation
        1. Applied rewrites (98.9%)

          \[\leadsto \frac{e^{a}}{2} \]

        if -0.40000000000000002 < a

        1. Initial program 98.3%

          \[\frac{e^{a}}{e^{a} + e^{b}} \]
        2. Add Preprocessing
        3. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
        4. Step-by-step derivation
          1. lower-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
          2. +-commutative (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
          3. lower-+.f64 (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
          4. lower-exp.f64 (97.7)

            \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
        5. Applied rewrites (97.7%)

          \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
      8. Recombined 2 regimes into one program.
      9. Final simplification (98.0%)

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -0.4:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;{\left(e^{b} + 1\right)}^{-1}\\ \end{array} \]
      10. Add Preprocessing

      Alternative 7: 71.1% accurate, 2.1× speedup

      \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{elif}\;a \leq -0.048:\\ \;\;\;\;{\left(\left(\left(\frac{\frac{2}{b} + 1}{b} + 0.5\right) \cdot b\right) \cdot b\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
      (FPCore (a b)
       :precision binary64
       (if (<= a -1e+103)
         (pow (fma (- (* (* a a) -0.16666666666666666) 1.0) a 2.0) -1.0)
         (if (<= a -0.048)
           (pow (* (* (+ (/ (+ (/ 2.0 b) 1.0) b) 0.5) b) b) -1.0)
           (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0))))
      double code(double a, double b) {
      	double tmp;
      	if (a <= -1e+103) {
      		tmp = pow(fma((((a * a) * -0.16666666666666666) - 1.0), a, 2.0), -1.0);
      	} else if (a <= -0.048) {
      		tmp = pow(((((((2.0 / b) + 1.0) / b) + 0.5) * b) * b), -1.0);
      	} else {
      		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
      	}
      	return tmp;
      }
      
      function code(a, b)
      	tmp = 0.0
      	if (a <= -1e+103)
      		tmp = fma(Float64(Float64(Float64(a * a) * -0.16666666666666666) - 1.0), a, 2.0) ^ -1.0;
      	elseif (a <= -0.048)
      		tmp = Float64(Float64(Float64(Float64(Float64(Float64(2.0 / b) + 1.0) / b) + 0.5) * b) * b) ^ -1.0;
      	else
      		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
      	end
      	return tmp
      end
      
      code[a_, b_] := If[LessEqual[a, -1e+103], N[Power[N[(N[(N[(N[(a * a), $MachinePrecision] * -0.16666666666666666), $MachinePrecision] - 1.0), $MachinePrecision] * a + 2.0), $MachinePrecision], -1.0], $MachinePrecision], If[LessEqual[a, -0.048], N[Power[N[(N[(N[(N[(N[(N[(2.0 / b), $MachinePrecision] + 1.0), $MachinePrecision] / b), $MachinePrecision] + 0.5), $MachinePrecision] * b), $MachinePrecision] * b), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]]
      
      \begin{array}{l}
      
      \\
      \begin{array}{l}
      \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\
      \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\
      
      \mathbf{elif}\;a \leq -0.048:\\
      \;\;\;\;{\left(\left(\left(\frac{\frac{2}{b} + 1}{b} + 0.5\right) \cdot b\right) \cdot b\right)}^{-1}\\
      
      \mathbf{else}:\\
      \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 3 regimes
      2. if a < -1e103

        1. Initial program 100.0%

          \[\frac{e^{a}}{e^{a} + e^{b}} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
          2. lift-exp.f64 (N/A)

            \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
          3. sinh-+-cosh-rev (N/A)

            \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
          4. flip-+ (N/A)

            \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
          5. sinh-cosh (N/A)

            \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
          6. sinh-cosh (N/A)

            \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
          7. sinh---cosh-rev (N/A)

            \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
          8. associate-/l/ (N/A)

            \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
          9. lower-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
          10. sinh-cosh (N/A)

            \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
          11. lower-*.f64 (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
          12. lower-exp.f64 (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
          13. lower-neg.f64 (100.0)

            \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
          14. lift-+.f64 (N/A)

            \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
          15. +-commutative (N/A)

            \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
          16. lower-+.f64 (100.0)

            \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
        4. Applied rewrites (100.0%)

          \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
        5. Taylor expanded in b around 0

          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
        6. Step-by-step derivation
          1. distribute-lft-in (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
          2. *-rgt-identity (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
          3. cancel-sign-sub (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
          4. distribute-lft-neg-out (N/A)

            \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
          5. exp-neg (N/A)

            \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
          6. lft-mult-inverse (N/A)

            \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
          7. metadata-eval (N/A)

            \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
          8. lower--.f64 (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
          9. lower-exp.f64 (N/A)

            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
          10. lower-neg.f64 (100.0)

            \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
        7. Applied rewrites (100.0%)

          \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
        8. Taylor expanded in a around 0

          \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(a \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot a\right) - 1\right)}} \]
        9. Step-by-step derivation
          1. Applied rewrites (100.0%)

            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, \color{blue}{a}, 2\right)} \]
          2. Taylor expanded in a around inf

            \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{-1}{6} \cdot {a}^{2} - 1, a, 2\right)} \]
          3. Step-by-step derivation
            1. Applied rewrites (100.0%)

              \[\leadsto \frac{1}{\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)} \]

            if -1e103 < a < -0.048000000000000001

            1. Initial program 99.9%

              \[\frac{e^{a}}{e^{a} + e^{b}} \]
            2. Add Preprocessing
            3. Taylor expanded in a around 0

              \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
            4. Step-by-step derivation
              1. lower-/.f64 (N/A)

                \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
              2. +-commutative (N/A)

                \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
              3. lower-+.f64 (N/A)

                \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
              4. lower-exp.f64 (29.6%)

                \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
            5. Applied rewrites (29.6%)

              \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
            6. Taylor expanded in b around 0

              \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
            7. Step-by-step derivation
              1. Applied rewrites (13.9%)

                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
              2. Taylor expanded in b around inf

                \[\leadsto \frac{1}{{b}^{2} \cdot \left(\frac{1}{2} + \color{blue}{\left(\frac{1}{b} + \frac{2}{{b}^{2}}\right)}\right)} \]
              3. Step-by-step derivation
                1. Applied rewrites (52.6%)

                  \[\leadsto \frac{1}{\left(\left(\frac{\frac{2}{b} + 1}{b} + 0.5\right) \cdot b\right) \cdot b} \]

                if -0.048000000000000001 < a

                1. Initial program 98.3%

                  \[\frac{e^{a}}{e^{a} + e^{b}} \]
                2. Add Preprocessing
                3. Taylor expanded in a around 0

                  \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                4. Step-by-step derivation
                  1. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                  2. +-commutative (N/A)

                    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                  3. lower-+.f64 (N/A)

                    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                  4. lower-exp.f64 (97.7%)

                    \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                5. Applied rewrites (97.7%)

                  \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                6. Taylor expanded in b around 0

                  \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                7. Step-by-step derivation
                  1. Applied rewrites (69.3%)

                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                8. Recombined 3 regimes into one program.
                9. Final simplification (72.3%)

                  \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{elif}\;a \leq -0.048:\\ \;\;\;\;{\left(\left(\left(\frac{\frac{2}{b} + 1}{b} + 0.5\right) \cdot b\right) \cdot b\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                10. Add Preprocessing
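Since \(e^{a}/(e^{a}+e^{b}) = 1/(1+e^{b-a})\), the else branch above is the reciprocal of the degree-3 Taylor polynomial of \(1+e^{b}\) (it assumes \(a \approx 0\)). A minimal Python sketch of that relationship, not part of the Herbie output; plain Horner arithmetic stands in for `fma`:

```python
import math

def exact(a, b):
    # exp(a) / (exp(a) + exp(b)) rewritten in the equivalent sigmoid form,
    # which avoids computing exp(a) and exp(b) separately
    return 1.0 / (1.0 + math.exp(b - a))

def else_branch(b):
    # 1 / (2 + b + b^2/2 + b^3/6): the reciprocal of the cubic Taylor
    # polynomial of 1 + e^b, written in Horner form instead of fma()
    return 1.0 / (((b / 6.0 + 0.5) * b + 1.0) * b + 2.0)

# For small b (and a = 0) the cubic tracks the exact value closely:
print(abs(else_branch(0.01) - exact(0.0, 0.01)))  # tiny, around 1e-10
```

The first neglected Taylor term is \(b^{4}/24\), which is why the agreement degrades quickly once \(|b|\) grows past a few tenths.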

                Alternative 8: 71.6% accurate, 2.4× speedup

                \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{elif}\;a \leq -112000:\\ \;\;\;\;{b}^{5} \cdot -0.0020833333333333333\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                (FPCore (a b)
                 :precision binary64
                 (if (<= a -1e+103)
                   (pow (fma (- (* (* a a) -0.16666666666666666) 1.0) a 2.0) -1.0)
                   (if (<= a -112000.0)
                     (* (pow b 5.0) -0.0020833333333333333)
                     (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0))))
                double code(double a, double b) {
                	double tmp;
                	if (a <= -1e+103) {
                		tmp = pow(fma((((a * a) * -0.16666666666666666) - 1.0), a, 2.0), -1.0);
                	} else if (a <= -112000.0) {
                		tmp = pow(b, 5.0) * -0.0020833333333333333;
                	} else {
                		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
                	}
                	return tmp;
                }
                
                function code(a, b)
                	tmp = 0.0
                	if (a <= -1e+103)
                		tmp = fma(Float64(Float64(Float64(a * a) * -0.16666666666666666) - 1.0), a, 2.0) ^ -1.0;
                	elseif (a <= -112000.0)
                		tmp = Float64((b ^ 5.0) * -0.0020833333333333333);
                	else
                		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
                	end
                	return tmp
                end
                
                code[a_, b_] := If[LessEqual[a, -1e+103], N[Power[N[(N[(N[(N[(a * a), $MachinePrecision] * -0.16666666666666666), $MachinePrecision] - 1.0), $MachinePrecision] * a + 2.0), $MachinePrecision], -1.0], $MachinePrecision], If[LessEqual[a, -112000.0], N[(N[Power[b, 5.0], $MachinePrecision] * -0.0020833333333333333), $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]]
                
                \begin{array}{l}
                
                \\
                \begin{array}{l}
                \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\
                \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\
                
                \mathbf{elif}\;a \leq -112000:\\
                \;\;\;\;{b}^{5} \cdot -0.0020833333333333333\\
                
                \mathbf{else}:\\
                \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
                
                
                \end{array}
                \end{array}
                
                Derivation
                1. Split input into 3 regimes
                2. if a < -1e103

                  1. Initial program 100.0%

                    \[\frac{e^{a}}{e^{a} + e^{b}} \]
                  2. Add Preprocessing
                  3. Step-by-step derivation
                    1. lift-/.f64 (N/A)

                      \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                    2. lift-exp.f64 (N/A)

                      \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                    3. sinh-+-cosh-rev (N/A)

                      \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                    4. flip-+ (N/A)

                      \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                    5. sinh-cosh (N/A)

                      \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                    6. sinh-cosh (N/A)

                      \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                    7. sinh---cosh-rev (N/A)

                      \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                    8. associate-/l/ (N/A)

                      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                    9. lower-/.f64 (N/A)

                      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                    10. sinh-cosh (N/A)

                      \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                    11. lower-*.f64 (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                    12. lower-exp.f64 (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                    13. lower-neg.f64 (100.0%)

                      \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                    14. lift-+.f64 (N/A)

                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                    15. +-commutative (N/A)

                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                    16. lower-+.f64 (100.0%)

                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                  4. Applied rewrites (100.0%)

                    \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                  5. Taylor expanded in b around 0

                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                  6. Step-by-step derivation
                    1. distribute-lft-in (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                    2. *-rgt-identity (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                    3. cancel-sign-sub (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                    4. distribute-lft-neg-out (N/A)

                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                    5. exp-neg (N/A)

                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                    6. lft-mult-inverse (N/A)

                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                    7. metadata-eval (N/A)

                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                    8. lower--.f64 (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                    9. lower-exp.f64 (N/A)

                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                    10. lower-neg.f64 (100.0%)

                      \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                  7. Applied rewrites (100.0%)

                    \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                  8. Taylor expanded in a around 0

                    \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(a \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot a\right) - 1\right)}} \]
                  9. Step-by-step derivation
                    1. Applied rewrites (100.0%)

                      \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, \color{blue}{a}, 2\right)} \]
                    2. Taylor expanded in a around inf

                      \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{-1}{6} \cdot {a}^{2} - 1, a, 2\right)} \]
                    3. Step-by-step derivation
                      1. Applied rewrites (100.0%)

                        \[\leadsto \frac{1}{\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)} \]

                      if -1e103 < a < -112000

                      1. Initial program 100.0%

                        \[\frac{e^{a}}{e^{a} + e^{b}} \]
                      2. Add Preprocessing
                      3. Taylor expanded in a around 0

                        \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                      4. Step-by-step derivation
                        1. lower-/.f64 (N/A)

                          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                        2. +-commutative (N/A)

                          \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                        3. lower-+.f64 (N/A)

                          \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                        4. lower-exp.f64 (24.7%)

                          \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                      5. Applied rewrites (24.7%)

                        \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                      6. Taylor expanded in b around 0

                        \[\leadsto \frac{1}{2} + \color{blue}{b \cdot \left({b}^{2} \cdot \left(\frac{1}{48} + \frac{-1}{480} \cdot {b}^{2}\right) - \frac{1}{4}\right)} \]
                      7. Step-by-step derivation
                        1. Applied rewrites (2.8%)

                          \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(-0.0020833333333333333, b \cdot b, 0.020833333333333332\right) \cdot b\right) \cdot b - 0.25, \color{blue}{b}, 0.5\right) \]
                        2. Taylor expanded in b around inf

                          \[\leadsto \frac{-1}{480} \cdot {b}^{\color{blue}{5}} \]
                        3. Step-by-step derivation
                          1. Applied rewrites (71.1%)

                            \[\leadsto {b}^{5} \cdot -0.0020833333333333333 \]

                          if -112000 < a

                          1. Initial program 98.3%

                            \[\frac{e^{a}}{e^{a} + e^{b}} \]
                          2. Add Preprocessing
                          3. Taylor expanded in a around 0

                            \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                          4. Step-by-step derivation
                            1. lower-/.f64 (N/A)

                              \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                            2. +-commutative (N/A)

                              \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                            3. lower-+.f64 (N/A)

                              \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                            4. lower-exp.f64 (97.3%)

                              \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                          5. Applied rewrites (97.3%)

                            \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                          6. Taylor expanded in b around 0

                            \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                          7. Step-by-step derivation
                            1. Applied rewrites (68.9%)

                              \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                          8. Recombined 3 regimes into one program.
                          9. Final simplification (74.1%)

                            \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1 \cdot 10^{+103}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{elif}\;a \leq -112000:\\ \;\;\;\;{b}^{5} \cdot -0.0020833333333333333\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                          10. Add Preprocessing
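One reason all of these rewrites matter: the original quotient evaluates `exp(a)` and `exp(b)` directly, and in binary64 `exp` overflows for arguments above roughly 709.8, even though the true result always lies in (0, 1). A hedged Python illustration, not part of the Herbie output, using the algebraically equivalent sigmoid form:

```python
import math

def naive(a, b):
    # literal transcription of the original program
    return math.exp(a) / (math.exp(a) + math.exp(b))

def stable(a, b):
    # divide numerator and denominator by exp(a); only b - a is exponentiated
    return 1.0 / (1.0 + math.exp(b - a))

try:
    naive(800.0, 801.0)          # exp(800) > DBL_MAX: overflows
except OverflowError:
    pass                         # in C this path would yield inf/inf = NaN

print(stable(800.0, 801.0))      # ~0.2689, i.e. 1 / (1 + e), no overflow
```

The stable form still overflows if `b - a` itself is huge, which is roughly the situation the regime split on `a` above is handling with polynomial approximations.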

                          Alternative 9: 71.5% accurate, 2.5× speedup

                          \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                          (FPCore (a b)
                           :precision binary64
                           (if (<= b 9.2e+102)
                             (pow (fma (- (* (fma -0.16666666666666666 a 0.5) a) 1.0) a 2.0) -1.0)
                             (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0)))
                          double code(double a, double b) {
                          	double tmp;
                          	if (b <= 9.2e+102) {
                          		tmp = pow(fma(((fma(-0.16666666666666666, a, 0.5) * a) - 1.0), a, 2.0), -1.0);
                          	} else {
                          		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
                          	}
                          	return tmp;
                          }
                          
                          function code(a, b)
                          	tmp = 0.0
                          	if (b <= 9.2e+102)
                          		tmp = fma(Float64(Float64(fma(-0.16666666666666666, a, 0.5) * a) - 1.0), a, 2.0) ^ -1.0;
                          	else
                          		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
                          	end
                          	return tmp
                          end
                          
                          code[a_, b_] := If[LessEqual[b, 9.2e+102], N[Power[N[(N[(N[(N[(-0.16666666666666666 * a + 0.5), $MachinePrecision] * a), $MachinePrecision] - 1.0), $MachinePrecision] * a + 2.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                          
                          \begin{array}{l}
                          
                          \\
                          \begin{array}{l}
                          \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\
                          \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, a, 2\right)\right)}^{-1}\\
                          
                          \mathbf{else}:\\
                          \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
                          
                          
                          \end{array}
                          \end{array}
                          
                          Derivation
                          1. Split input into 2 regimes
                          2. if b < 9.1999999999999995e102

                            1. Initial program 98.5%

                              \[\frac{e^{a}}{e^{a} + e^{b}} \]
                            2. Add Preprocessing
                            3. Step-by-step derivation
                              1. lift-/.f64 (N/A)

                                \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                              2. lift-exp.f64 (N/A)

                                \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                              3. sinh-+-cosh-rev (N/A)

                                \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                              4. flip-+ (N/A)

                                \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                              5. sinh-cosh (N/A)

                                \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                              6. sinh-cosh (N/A)

                                \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                              7. sinh---cosh-rev (N/A)

                                \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                              8. associate-/l/ (N/A)

                                \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                              9. lower-/.f64 (N/A)

                                \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                              10. sinh-cosh (N/A)

                                \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                              11. lower-*.f64 (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                              12. lower-exp.f64 (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                              13. lower-neg.f64 (98.5%)

                                \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                              14. lift-+.f64 (N/A)

                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                              15. +-commutative (N/A)

                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                              16. lower-+.f64 (98.5%)

                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                            4. Applied rewrites (98.5%)

                              \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                            5. Taylor expanded in b around 0

                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                            6. Step-by-step derivation
                              1. distribute-lft-in (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                              2. *-rgt-identity (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                              3. cancel-sign-sub (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                              4. distribute-lft-neg-out (N/A)

                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                              5. exp-neg (N/A)

                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                              6. lft-mult-inverse (N/A)

                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                              7. metadata-eval (N/A)

                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                              8. lower--.f64 (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                              9. lower-exp.f64 (N/A)

                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                              10. lower-neg.f64 (77.3%)

                                \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                            7. Applied rewrites (77.3%)

                              \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                            8. Taylor expanded in a around 0

                              \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(a \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot a\right) - 1\right)}} \]
                            9. Step-by-step derivation
                              1. Applied rewrites (64.8%)

                                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, \color{blue}{a}, 2\right)} \]

                              if 9.1999999999999995e102 < b

                              1. Initial program 100.0%

                                \[\frac{e^{a}}{e^{a} + e^{b}} \]
                              2. Add Preprocessing
                              3. Taylor expanded in a around 0

                                \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                              4. Step-by-step derivation
                                1. lower-/.f64 (N/A)

                                  \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                2. +-commutative (N/A)

                                  \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                3. lower-+.f64 (N/A)

                                  \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                4. lower-exp.f64 (100.0%)

                                  \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                              5. Applied rewrites (100.0%)

                                \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                              6. Taylor expanded in b around 0

                                \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                              7. Step-by-step derivation
                                1. Applied rewrites (100.0%)

                                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                              8. Recombined 2 regimes into one program.
                              9. Final simplification (70.7%)

                                \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                              10. Add Preprocessing
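For inputs with \(b \approx 0\), the first branch above reduces to the reciprocal of \(2 - a + a^{2}/2 - a^{3}/6\), i.e. a cubic Taylor approximation of the logistic function \(1/(1+e^{-a})\). A small Python check, illustrative only (plain arithmetic in place of `fma`):

```python
import math

def branch1(a):
    # ((-a/6 + 1/2) * a - 1) * a + 2  =  2 - a + a^2/2 - a^3/6,
    # the cubic Taylor polynomial of 1 + e^(-a)
    return 1.0 / (((-a / 6.0 + 0.5) * a - 1.0) * a + 2.0)

# At b = 0 the original program equals the logistic function of a:
a = 0.01
logistic = 1.0 / (1.0 + math.exp(-a))
print(abs(branch1(a) - logistic))  # small for small |a|
```

This is why the branch is only accurate near \(a = 0\): the discarded \(a^{4}/24\) term dominates once \(|a|\) is no longer small, which shows up in the overall 71.4% accuracy figure.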

                              Alternative 10: 71.4% accurate, 2.5× speedup

                              \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                              (FPCore (a b)
                               :precision binary64
                               (if (<= b 9.2e+102)
                                 (pow (fma (- (* (* a a) -0.16666666666666666) 1.0) a 2.0) -1.0)
                                 (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0)))
                              double code(double a, double b) {
                              	double tmp;
                              	if (b <= 9.2e+102) {
                              		tmp = pow(fma((((a * a) * -0.16666666666666666) - 1.0), a, 2.0), -1.0);
                              	} else {
                              		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
                              	}
                              	return tmp;
                              }
                              
                              function code(a, b)
                              	tmp = 0.0
                              	if (b <= 9.2e+102)
                              		tmp = fma(Float64(Float64(Float64(a * a) * -0.16666666666666666) - 1.0), a, 2.0) ^ -1.0;
                              	else
                              		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
                              	end
                              	return tmp
                              end
                              
                              code[a_, b_] := If[LessEqual[b, 9.2e+102], N[Power[N[(N[(N[(N[(a * a), $MachinePrecision] * -0.16666666666666666), $MachinePrecision] - 1.0), $MachinePrecision] * a + 2.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                              
                              \begin{array}{l}
                              
                              \\
                              \begin{array}{l}
                              \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\
                              \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\
                              
                              \mathbf{else}:\\
                              \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
                              
                              
                              \end{array}
                              \end{array}
                              
                              Derivation
                              1. Split input into 2 regimes
                              2. if b < 9.1999999999999995e102

                                1. Initial program 98.5%

                                  \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                2. Add Preprocessing
                                3. Step-by-step derivation
                                  1. lift-/.f64 (N/A)

                                    \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                  2. lift-exp.f64 (N/A)

                                    \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                   3. sinh-+-cosh-rev (N/A)

                                    \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                                   4. flip-+ (N/A)

                                    \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                   5. sinh-cosh (N/A)

                                    \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                   6. sinh-cosh (N/A)

                                    \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                   7. sinh---cosh-rev (N/A)

                                    \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                   8. associate-/l/ (N/A)

                                    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                   9. lower-/.f64 (N/A)

                                    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                   10. sinh-cosh (N/A)

                                    \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                   11. lower-*.f64 (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                   12. lower-exp.f64 (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                   13. lower-neg.f64 (98.5%)

                                    \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                   14. lift-+.f64 (N/A)

                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                   15. +-commutative (N/A)

                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                   16. lower-+.f64 (98.5%)

                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                 4. Applied rewrites (98.5%)

                                  \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                5. Taylor expanded in b around 0

                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                6. Step-by-step derivation
                                   1. distribute-lft-in (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                   2. *-rgt-identity (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                   3. cancel-sign-sub (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                   4. distribute-lft-neg-out (N/A)

                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                   5. exp-neg (N/A)

                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                   6. lft-mult-inverse (N/A)

                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                   7. metadata-eval (N/A)

                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                   8. lower--.f64 (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                   9. lower-exp.f64 (N/A)

                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                   10. lower-neg.f64 (77.3%)

                                    \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                 7. Applied rewrites (77.3%)

                                  \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                8. Taylor expanded in a around 0

                                  \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(a \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot a\right) - 1\right)}} \]
                                9. Step-by-step derivation
                                   1. Applied rewrites (64.8%)

                                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, a, 0.5\right) \cdot a - 1, \color{blue}{a}, 2\right)} \]
                                  2. Taylor expanded in a around inf

                                    \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{-1}{6} \cdot {a}^{2} - 1, a, 2\right)} \]
                                  3. Step-by-step derivation
                                     1. Applied rewrites (64.5%)

                                      \[\leadsto \frac{1}{\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)} \]

                                    if 9.1999999999999995e102 < b

                                    1. Initial program 100.0%

                                      \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                    2. Add Preprocessing
                                    3. Taylor expanded in a around 0

                                      \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                    4. Step-by-step derivation
                                       1. lower-/.f64 (N/A)

                                        \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                       2. +-commutative (N/A)

                                        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                       3. lower-+.f64 (N/A)

                                        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                       4. lower-exp.f64 (100.0%)

                                        \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                     5. Applied rewrites (100.0%)

                                      \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                    6. Taylor expanded in b around 0

                                      \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                                    7. Step-by-step derivation
                                       1. Applied rewrites (100.0%)

                                        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                                    8. Recombined 2 regimes into one program.
                                     9. Final simplification (70.5%)

                                      \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 9.2 \cdot 10^{+102}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\left(a \cdot a\right) \cdot -0.16666666666666666 - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                    10. Add Preprocessing
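As a sanity check on the recombined program, the two regime polynomials can be compared against the original quotient. The Python sketch below is not part of Herbie's output: it uses plain double arithmetic in place of `fma` (so its rounding differs from the FPCore in the last bits), and the helper names `alt` and `ref` are illustrative.

```python
import math

def alt(a, b):
    # Regime split mirrored from the derivation above; plain arithmetic
    # stands in for fma, so results may differ from the FPCore slightly.
    if b <= 9.2e102:
        return 1.0 / ((a * a * -0.16666666666666666 - 1.0) * a + 2.0)
    return 1.0 / (((0.16666666666666666 * b + 0.5) * b + 1.0) * b + 2.0)

def ref(a, b):
    # The original program: exp(a) / (exp(a) + exp(b))
    return math.exp(a) / (math.exp(a) + math.exp(b))

# At the expansion point a = b = 0 both branches reduce to 1/2,
# which is exactly what the original program computes there.
print(alt(0.0, 0.0), ref(0.0, 0.0))
```

The truncated series drifts away from the quotient as |a| and |b| grow, which is consistent with the 70.5% accuracy reported after final simplification.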

                                     Alternative 11: 77.7% accurate, 2.5× speedup

                                    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                                    (FPCore (a b)
                                     :precision binary64
                                     (if (<= b 1e+103)
                                       (/ (exp a) 2.0)
                                       (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0)))
                                    double code(double a, double b) {
                                    	double tmp;
                                    	if (b <= 1e+103) {
                                    		tmp = exp(a) / 2.0;
                                    	} else {
                                    		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
                                    	}
                                    	return tmp;
                                    }
                                    
                                    function code(a, b)
                                    	tmp = 0.0
                                    	if (b <= 1e+103)
                                    		tmp = Float64(exp(a) / 2.0);
                                    	else
                                    		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
                                    	end
                                    	return tmp
                                    end
                                    
                                    code[a_, b_] := If[LessEqual[b, 1e+103], N[(N[Exp[a], $MachinePrecision] / 2.0), $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                                    
                                    \begin{array}{l}
                                    
                                    \\
                                    \begin{array}{l}
                                    \mathbf{if}\;b \leq 10^{+103}:\\
                                    \;\;\;\;\frac{e^{a}}{2}\\
                                    
                                    \mathbf{else}:\\
                                    \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
                                    
                                    
                                    \end{array}
                                    \end{array}
                                    
                                    Derivation
                                    1. Split input into 2 regimes
                                    2. if b < 1e103

                                      1. Initial program 98.5%

                                        \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                      2. Add Preprocessing
                                      3. Taylor expanded in b around 0

                                        \[\leadsto \frac{e^{a}}{\color{blue}{1 + e^{a}}} \]
                                      4. Step-by-step derivation
                                         1. +-commutative (N/A)

                                          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
                                         2. lower-+.f64 (N/A)

                                          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
                                         3. lower-exp.f64 (75.8%)

                                          \[\leadsto \frac{e^{a}}{\color{blue}{e^{a}} + 1} \]
                                       5. Applied rewrites (75.8%)

                                        \[\leadsto \frac{e^{a}}{\color{blue}{e^{a} + 1}} \]
                                      6. Taylor expanded in a around 0

                                        \[\leadsto \frac{e^{a}}{2} \]
                                      7. Step-by-step derivation
                                         1. Applied rewrites (74.7%)

                                          \[\leadsto \frac{e^{a}}{2} \]

                                        if 1e103 < b

                                        1. Initial program 100.0%

                                          \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                        2. Add Preprocessing
                                        3. Taylor expanded in a around 0

                                          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                        4. Step-by-step derivation
                                           1. lower-/.f64 (N/A)

                                            \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                           2. +-commutative (N/A)

                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                           3. lower-+.f64 (N/A)

                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                           4. lower-exp.f64 (100.0%)

                                            \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                         5. Applied rewrites (100.0%)

                                          \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                        6. Taylor expanded in b around 0

                                          \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                                        7. Step-by-step derivation
                                           1. Applied rewrites (100.0%)

                                            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                                        8. Recombined 2 regimes into one program.
                                         9. Final simplification (78.9%)

                                          \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                        10. Add Preprocessing
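The small-b branch here is simply e^a / 2, the value obtained by Taylor-expanding e^a / (e^a + 1) in a around 0. A minimal Python sketch, not from the report (plain arithmetic again stands in for `fma`; the helper names are illustrative):

```python
import math

def alt11(a, b):
    # Mirrors the FPCore for Alternative 11 above.
    if b <= 1e103:
        return math.exp(a) / 2.0
    return 1.0 / (((0.16666666666666666 * b + 0.5) * b + 1.0) * b + 2.0)

def ref(a, b):
    # The original program: exp(a) / (exp(a) + exp(b))
    return math.exp(a) / (math.exp(a) + math.exp(b))

# Exact agreement at the expansion point a = b = 0; the gap grows with |a|,
# which is why this alternative trades accuracy (77.7%) for speed (2.5x).
print(alt11(0.0, 0.0), ref(0.0, 0.0))
```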

                                         Alternative 12: 68.5% accurate, 2.5× speedup

                                        \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2 \cdot 10^{+92}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                                        (FPCore (a b)
                                         :precision binary64
                                         (if (<= b 2e+92)
                                           (pow (- (fma (- (* 0.5 a) 1.0) a 1.0) -1.0) -1.0)
                                           (pow (fma (fma (fma 0.16666666666666666 b 0.5) b 1.0) b 2.0) -1.0)))
                                        double code(double a, double b) {
                                        	double tmp;
                                        	if (b <= 2e+92) {
                                        		tmp = pow((fma(((0.5 * a) - 1.0), a, 1.0) - -1.0), -1.0);
                                        	} else {
                                        		tmp = pow(fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0), -1.0);
                                        	}
                                        	return tmp;
                                        }
                                        
                                        function code(a, b)
                                        	tmp = 0.0
                                        	if (b <= 2e+92)
                                        		tmp = Float64(fma(Float64(Float64(0.5 * a) - 1.0), a, 1.0) - -1.0) ^ -1.0;
                                        	else
                                        		tmp = fma(fma(fma(0.16666666666666666, b, 0.5), b, 1.0), b, 2.0) ^ -1.0;
                                        	end
                                        	return tmp
                                        end
                                        
                                        code[a_, b_] := If[LessEqual[b, 2e+92], N[Power[N[(N[(N[(N[(0.5 * a), $MachinePrecision] - 1.0), $MachinePrecision] * a + 1.0), $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(N[(0.16666666666666666 * b + 0.5), $MachinePrecision] * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                                        
                                        \begin{array}{l}
                                        
                                        \\
                                        \begin{array}{l}
                                        \mathbf{if}\;b \leq 2 \cdot 10^{+92}:\\
                                        \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\
                                        
                                        \mathbf{else}:\\
                                        \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\
                                        
                                        
                                        \end{array}
                                        \end{array}
                                        
                                        Derivation
                                        1. Split input into 2 regimes
                                        2. if b < 2.0000000000000001e92

                                          1. Initial program 98.5%

                                            \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                          2. Add Preprocessing
                                          3. Step-by-step derivation
                                             1. lift-/.f64 (N/A)

                                              \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                             2. lift-exp.f64 (N/A)

                                              \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                             3. sinh-+-cosh-rev (N/A)

                                              \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                                             4. flip-+ (N/A)

                                              \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                             5. sinh-cosh (N/A)

                                              \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                             6. sinh-cosh (N/A)

                                              \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                             7. sinh---cosh-rev (N/A)

                                              \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                             8. associate-/l/ (N/A)

                                              \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                             9. lower-/.f64 (N/A)

                                              \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                             10. sinh-cosh (N/A)

                                              \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                             11. lower-*.f64 (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                             12. lower-exp.f64 (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                             13. lower-neg.f64 (98.5%)

                                              \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                             14. lift-+.f64 (N/A)

                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                             15. +-commutative (N/A)

                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                             16. lower-+.f64 (98.5%)

                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                           4. Applied rewrites (98.5%)

                                            \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                          5. Taylor expanded in b around 0

                                            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                          6. Step-by-step derivation
                                             1. distribute-lft-in (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                             2. *-rgt-identity (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                             3. cancel-sign-sub (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                             4. distribute-lft-neg-out (N/A)

                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                             5. exp-neg (N/A)

                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                             6. lft-mult-inverse (N/A)

                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                             7. metadata-eval (N/A)

                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                             8. lower--.f64 (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                             9. lower-exp.f64 (N/A)

                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                             10. lower-neg.f64 (77.2%)

                                              \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                           7. Applied rewrites (77.2%)

                                            \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                          8. Taylor expanded in a around 0

                                            \[\leadsto \frac{1}{\left(1 + a \cdot \left(\frac{1}{2} \cdot a - 1\right)\right) - -1} \]
                                          9. Step-by-step derivation
                                             1. Applied rewrites (61.9%)

                                              \[\leadsto \frac{1}{\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1} \]

                                            if 2.0000000000000001e92 < b

                                            1. Initial program 100.0%

                                              \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                            2. Add Preprocessing
                                            3. Taylor expanded in a around 0

                                              \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                            4. Step-by-step derivation
                                               1. lower-/.f64 (N/A)

                                                \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                               2. +-commutative (N/A)

                                                \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                               3. lower-+.f64 (N/A)

                                                \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                               4. lower-exp.f64 (100.0%)

                                                \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                             5. Applied rewrites (100.0%)

                                              \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                            6. Taylor expanded in b around 0

                                              \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + b \cdot \left(\frac{1}{2} + \frac{1}{6} \cdot b\right)\right)}} \]
                                            7. Step-by-step derivation
                                               1. Applied rewrites (97.9%)

                                                \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), \color{blue}{b}, 2\right)} \]
                                            8. Recombined 2 regimes into one program.
                                             9. Final simplification (68.1%)

                                              \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2 \cdot 10^{+92}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(0.16666666666666666, b, 0.5\right), b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                            10. Add Preprocessing
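Unlike Alternative 10, the small-b branch here keeps the quadratic term: fma(0.5·a − 1, a, 1) − (−1) expands to a²/2 − a + 2, the degree-2 truncation of e^(−a) + 1 from the Taylor step above. A minimal Python check, not from the report (plain arithmetic in place of `fma`; the function name is illustrative):

```python
import math

def small_b_branch(a):
    # fma(0.5*a - 1, a, 1) - -1  ==  0.5*a**2 - a + 2,
    # the quadratic truncation of exp(-a) + 1.
    return 1.0 / ((0.5 * a - 1.0) * a + 1.0 - -1.0)

a = 0.1
exact = 1.0 / (math.exp(-a) + 1.0)  # the rewritten form 1 / (e^-a + 1)
print(small_b_branch(a), exact)
```

Near a = 0 the two agree to several digits; the error is dominated by the dropped cubic term, consistent with the 61.9% accuracy of this branch over the full sampled range.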

                                             Alternative 13: 65.0% accurate, 2.5× speedup

                                            \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                                            (FPCore (a b)
                                             :precision binary64
                                             (if (<= b 2.15e+140)
                                               (pow (- (fma (- (* 0.5 a) 1.0) a 1.0) -1.0) -1.0)
                                               (pow (fma (fma 0.5 b 1.0) b 2.0) -1.0)))
                                            double code(double a, double b) {
                                            	double tmp;
                                            	if (b <= 2.15e+140) {
                                            		tmp = pow((fma(((0.5 * a) - 1.0), a, 1.0) - -1.0), -1.0);
                                            	} else {
                                            		tmp = pow(fma(fma(0.5, b, 1.0), b, 2.0), -1.0);
                                            	}
                                            	return tmp;
                                            }
                                            
                                            function code(a, b)
                                            	tmp = 0.0
                                            	if (b <= 2.15e+140)
                                            		tmp = Float64(fma(Float64(Float64(0.5 * a) - 1.0), a, 1.0) - -1.0) ^ -1.0;
                                            	else
                                            		tmp = fma(fma(0.5, b, 1.0), b, 2.0) ^ -1.0;
                                            	end
                                            	return tmp
                                            end
                                            
                                            code[a_, b_] := If[LessEqual[b, 2.15e+140], N[Power[N[(N[(N[(N[(0.5 * a), $MachinePrecision] - 1.0), $MachinePrecision] * a + 1.0), $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(0.5 * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                                            
                                            \begin{array}{l}
                                            
                                            \\
                                            \begin{array}{l}
                                            \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\
                                            \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\
                                            
                                            \mathbf{else}:\\
                                            \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\
                                            
                                            
                                            \end{array}
                                            \end{array}
                                            
                                            Derivation
                                            1. Split input into 2 regimes
                                            2. if b < 2.15000000000000001e140

                                              1. Initial program 98.6%

                                                \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                              2. Add Preprocessing
                                              3. Step-by-step derivation
1. lift-/.f64 (N/A)

                                                  \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
2. lift-exp.f64 (N/A)

                                                  \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
3. sinh-+-cosh-rev (N/A)

                                                  \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
4. flip-+ (N/A)

                                                  \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
5. sinh-cosh (N/A)

                                                  \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
6. sinh-cosh (N/A)

                                                  \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
7. sinh---cosh-rev (N/A)

                                                  \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
8. associate-/l/ (N/A)

                                                  \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
9. lower-/.f64 (N/A)

                                                  \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
10. sinh-cosh (N/A)

                                                  \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
11. lower-*.f64 (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
12. lower-exp.f64 (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
13. lower-neg.f64 (98.6%)

                                                  \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
14. lift-+.f64 (N/A)

                                                  \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
15. +-commutative (N/A)

                                                  \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
16. lower-+.f64 (98.6%)

                                                  \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
4. Applied rewrites (98.6%)

                                                \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                              5. Taylor expanded in b around 0

                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                              6. Step-by-step derivation
1. distribute-lft-in (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
2. *-rgt-identity (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
3. cancel-sign-sub (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
4. distribute-lft-neg-out (N/A)

                                                  \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
5. exp-neg (N/A)

                                                  \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
6. lft-mult-inverse (N/A)

                                                  \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
7. metadata-eval (N/A)

                                                  \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
8. lower--.f64 (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
9. lower-exp.f64 (N/A)

                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
10. lower-neg.f64 (76.5%)

                                                  \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
7. Applied rewrites (76.5%)

                                                \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                              8. Taylor expanded in a around 0

                                                \[\leadsto \frac{1}{\left(1 + a \cdot \left(\frac{1}{2} \cdot a - 1\right)\right) - -1} \]
                                              9. Step-by-step derivation
1. Applied rewrites (60.8%)

                                                  \[\leadsto \frac{1}{\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1} \]
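For reference (a sketch of the algebra, not part of Herbie's output): the fma argument is exactly the second-order Taylor polynomial of \(e^{-a}\), so this branch's denominator approximates \(e^{-a} - -1\):

\[ e^{-a} \approx 1 - a + \frac{1}{2} a^{2} = \mathsf{fma}\left(\frac{1}{2} \cdot a - 1, a, 1\right), \qquad e^{-a} - -1 \approx \mathsf{fma}\left(\frac{1}{2} \cdot a - 1, a, 1\right) - -1 \]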

                                                if 2.15000000000000001e140 < b

                                                1. Initial program 100.0%

                                                  \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                2. Add Preprocessing
                                                3. Taylor expanded in a around 0

                                                  \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                4. Step-by-step derivation
1. lower-/.f64 (N/A)

                                                    \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
2. +-commutative (N/A)

                                                    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
3. lower-+.f64 (N/A)

                                                    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
4. lower-exp.f64 (100.0%)

                                                    \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
5. Applied rewrites (100.0%)

                                                  \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                                6. Taylor expanded in b around 0

                                                  \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
                                                7. Step-by-step derivation
1. Applied rewrites (95.2%)

                                                    \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
                                                8. Recombined 2 regimes into one program.
9. Final simplification (65.9%)

                                                  \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 1\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                                10. Add Preprocessing

Alternative 14: 55.8% accurate, 2.5× speedup

                                                \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 0.85:\\ \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\ \mathbf{elif}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\ \end{array} \end{array} \]
                                                (FPCore (a b)
                                                 :precision binary64
                                                 (if (<= b 0.85)
                                                   (pow (- (- 1.0 a) -1.0) -1.0)
                                                   (if (<= b 2.15e+140)
                                                     (pow (* (* a a) 0.5) -1.0)
                                                     (pow (* (* 0.5 b) b) -1.0))))
                                                double code(double a, double b) {
                                                	double tmp;
                                                	if (b <= 0.85) {
                                                		tmp = pow(((1.0 - a) - -1.0), -1.0);
                                                	} else if (b <= 2.15e+140) {
                                                		tmp = pow(((a * a) * 0.5), -1.0);
                                                	} else {
                                                		tmp = pow(((0.5 * b) * b), -1.0);
                                                	}
                                                	return tmp;
                                                }
                                                
                                                real(8) function code(a, b)
                                                    real(8), intent (in) :: a
                                                    real(8), intent (in) :: b
                                                    real(8) :: tmp
                                                    if (b <= 0.85d0) then
                                                        tmp = ((1.0d0 - a) - (-1.0d0)) ** (-1.0d0)
                                                    else if (b <= 2.15d+140) then
                                                        tmp = ((a * a) * 0.5d0) ** (-1.0d0)
                                                    else
                                                        tmp = ((0.5d0 * b) * b) ** (-1.0d0)
                                                    end if
                                                    code = tmp
                                                end function
                                                
                                                public static double code(double a, double b) {
                                                	double tmp;
                                                	if (b <= 0.85) {
                                                		tmp = Math.pow(((1.0 - a) - -1.0), -1.0);
                                                	} else if (b <= 2.15e+140) {
                                                		tmp = Math.pow(((a * a) * 0.5), -1.0);
                                                	} else {
                                                		tmp = Math.pow(((0.5 * b) * b), -1.0);
                                                	}
                                                	return tmp;
                                                }
                                                
                                                def code(a, b):
                                                	tmp = 0
                                                	if b <= 0.85:
                                                		tmp = math.pow(((1.0 - a) - -1.0), -1.0)
                                                	elif b <= 2.15e+140:
                                                		tmp = math.pow(((a * a) * 0.5), -1.0)
                                                	else:
                                                		tmp = math.pow(((0.5 * b) * b), -1.0)
                                                	return tmp
                                                
                                                function code(a, b)
                                                	tmp = 0.0
                                                	if (b <= 0.85)
                                                		tmp = Float64(Float64(1.0 - a) - -1.0) ^ -1.0;
                                                	elseif (b <= 2.15e+140)
                                                		tmp = Float64(Float64(a * a) * 0.5) ^ -1.0;
                                                	else
                                                		tmp = Float64(Float64(0.5 * b) * b) ^ -1.0;
                                                	end
                                                	return tmp
                                                end
                                                
                                                function tmp_2 = code(a, b)
                                                	tmp = 0.0;
                                                	if (b <= 0.85)
                                                		tmp = ((1.0 - a) - -1.0) ^ -1.0;
                                                	elseif (b <= 2.15e+140)
                                                		tmp = ((a * a) * 0.5) ^ -1.0;
                                                	else
                                                		tmp = ((0.5 * b) * b) ^ -1.0;
                                                	end
                                                	tmp_2 = tmp;
                                                end
                                                
code[a_, b_] := If[LessEqual[b, 0.85], N[Power[N[(N[(1.0 - a), $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision], If[LessEqual[b, 2.15*^140], N[Power[N[(N[(a * a), $MachinePrecision] * 0.5), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(0.5 * b), $MachinePrecision] * b), $MachinePrecision], -1.0], $MachinePrecision]]]
                                                
                                                \begin{array}{l}
                                                
                                                \\
                                                \begin{array}{l}
                                                \mathbf{if}\;b \leq 0.85:\\
                                                \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\
                                                
                                                \mathbf{elif}\;b \leq 2.15 \cdot 10^{+140}:\\
                                                \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\
                                                
                                                \mathbf{else}:\\
                                                \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\
                                                
                                                
                                                \end{array}
                                                \end{array}
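Algebraically, the three branches of this alternative reduce to the closed forms \(1/(2-a)\), \(2/a^2\), and \(2/b^2\). A small Python sketch (the name `alt14` is hypothetical) makes the regime selection explicit:

```python
def alt14(a, b):
    # Branch bodies exactly as generated above; comments give the closed form.
    if b <= 0.85:
        return ((1.0 - a) - -1.0) ** -1.0  # 1 / (2 - a)
    elif b <= 2.15e140:
        return ((a * a) * 0.5) ** -1.0     # 2 / a**2
    else:
        return ((0.5 * b) * b) ** -1.0     # 2 / b**2

print(alt14(0.0, 0.0))  # 0.5
print(alt14(2.0, 1.0))  # 0.5  (middle regime: 2 / 2**2)
```

The middle regime ignores `b` entirely and the last ignores `a`, which is why this alternative trades most of its accuracy for speed.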
                                                
                                                Derivation
                                                1. Split input into 3 regimes
                                                2. if b < 0.849999999999999978

                                                  1. Initial program 98.4%

                                                    \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                  2. Add Preprocessing
                                                  3. Step-by-step derivation
1. lift-/.f64 (N/A)

                                                      \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
2. lift-exp.f64 (N/A)

                                                      \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
3. sinh-+-cosh-rev (N/A)

                                                      \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
4. flip-+ (N/A)

                                                      \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
5. sinh-cosh (N/A)

                                                      \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
6. sinh-cosh (N/A)

                                                      \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
7. sinh---cosh-rev (N/A)

                                                      \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
8. associate-/l/ (N/A)

                                                      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
9. lower-/.f64 (N/A)

                                                      \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
10. sinh-cosh (N/A)

                                                      \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
11. lower-*.f64 (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
12. lower-exp.f64 (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
13. lower-neg.f64 (98.4%)

                                                      \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
14. lift-+.f64 (N/A)

                                                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
15. +-commutative (N/A)

                                                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
16. lower-+.f64 (98.4%)

                                                      \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
4. Applied rewrites (98.4%)

                                                    \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                                  5. Taylor expanded in b around 0

                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                  6. Step-by-step derivation
1. distribute-lft-in (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
2. *-rgt-identity (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
3. cancel-sign-sub (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
4. distribute-lft-neg-out (N/A)

                                                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
5. exp-neg (N/A)

                                                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
6. lft-mult-inverse (N/A)

                                                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
7. metadata-eval (N/A)

                                                      \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
8. lower--.f64 (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
9. lower-exp.f64 (N/A)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
10. lower-neg.f64 (79.4%)

                                                      \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
7. Applied rewrites (79.4%)

                                                    \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                  8. Taylor expanded in a around 0

                                                    \[\leadsto \frac{1}{\left(1 + -1 \cdot a\right) - -1} \]
                                                  9. Step-by-step derivation
1. Applied rewrites (54.0%)

                                                      \[\leadsto \frac{1}{\left(1 - a\right) - -1} \]

                                                    if 0.849999999999999978 < b < 2.15000000000000001e140

                                                    1. Initial program 100.0%

                                                      \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                    2. Add Preprocessing
                                                    3. Step-by-step derivation
1. lift-/.f64 (N/A)

                                                        \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
2. lift-exp.f64 (N/A)

                                                        \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
3. sinh-+-cosh-rev (N/A)

                                                        \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
4. flip-+ (N/A)

                                                        \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
5. sinh-cosh (N/A)

                                                        \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
6. sinh-cosh (N/A)

                                                        \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
7. sinh---cosh-rev (N/A)

                                                        \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
8. associate-/l/ (N/A)

                                                        \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
9. lower-/.f64 (N/A)

                                                        \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
10. sinh-cosh (N/A)

                                                        \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
11. lower-*.f64 (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
12. lower-exp.f64 (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
13. lower-neg.f64 (100.0%)

                                                        \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
14. lift-+.f64 (N/A)

                                                        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
15. +-commutative (N/A)

                                                        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
16. lower-+.f64 (100.0%)

                                                        \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
4. Applied rewrites (100.0%)

                                                      \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                                    5. Taylor expanded in b around 0

                                                      \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                    6. Step-by-step derivation
1. distribute-lft-in (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
2. *-rgt-identity (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
3. cancel-sign-sub (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
4. distribute-lft-neg-out (N/A)

                                                        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
5. exp-neg (N/A)

                                                        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
6. lft-mult-inverse (N/A)

                                                        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
7. metadata-eval (N/A)

                                                        \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
8. lower--.f64 (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
9. lower-exp.f64 (N/A)

                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
10. lower-neg.f64 (53.5%)

                                                        \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
7. Applied rewrites (53.5%)

                                                      \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                    8. Taylor expanded in a around 0

                                                      \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(\frac{1}{2} \cdot a - 1\right)}} \]
                                                    9. Step-by-step derivation
                                                      1. Applied rewrites 34.4%

                                                        \[\leadsto \frac{1}{\mathsf{fma}\left(0.5 \cdot a - 1, \color{blue}{a}, 2\right)} \]
                                                      2. Taylor expanded in a around inf

                                                        \[\leadsto \frac{1}{\frac{1}{2} \cdot {a}^{\color{blue}{2}}} \]
                                                      3. Step-by-step derivation
                                                        1. Applied rewrites 33.9%

                                                          \[\leadsto \frac{1}{\left(a \cdot a\right) \cdot 0.5} \]

                                                        if 2.15000000000000001e140 < b

                                                        1. Initial program 100.0%

                                                          \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                        2. Add Preprocessing
                                                        3. Taylor expanded in a around 0

                                                          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                        4. Step-by-step derivation
                                                          1. lower-/.f64 N/A

                                                            \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                          2. +-commutative N/A

                                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                          3. lower-+.f64 N/A

                                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                          4. lower-exp.f64 100.0

                                                            \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                                        5. Applied rewrites 100.0%

                                                          \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                                        6. Taylor expanded in b around 0

                                                          \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
                                                        7. Step-by-step derivation
                                                          1. Applied rewrites 95.2%

                                                            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
                                                          2. Taylor expanded in b around inf

                                                            \[\leadsto \frac{1}{\frac{1}{2} \cdot {b}^{\color{blue}{2}}} \]
                                                          3. Step-by-step derivation
                                                            1. Applied rewrites 95.2%

                                                              \[\leadsto \frac{1}{\left(0.5 \cdot b\right) \cdot b} \]
                                                          4. Recombined 3 regimes into one program.
                                                          5. Final simplification 58.2%

                                                            \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 0.85:\\ \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\ \mathbf{elif}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\ \end{array} \]
                                                          6. Add Preprocessing

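The report prints this three-regime program only in math notation; as a sketch (not part of the report's output), it can be transcribed directly into Python and compared against the original quotient. The names `regime_code` and `exact` are illustrative, and the branch thresholds are the ones chosen above.

```python
import math

def regime_code(a, b):
    # Three-regime program from the final simplification above.
    if b <= 0.85:
        return ((1.0 - a) - -1.0) ** -1.0   # 1 / (2 - a)
    elif b <= 2.15e140:
        return ((a * a) * 0.5) ** -1.0      # 2 / a^2
    else:
        return ((0.5 * b) * b) ** -1.0      # 2 / b^2

def exact(a, b):
    # Original quotient; overflows for large a or b, which the regimes avoid.
    return math.exp(a) / (math.exp(a) + math.exp(b))

print(regime_code(0.0, 0.0), exact(0.0, 0.0))  # both 0.5 at the expansion point
```

At a = b = 0, the first branch reduces to 1/2, matching the exact value; away from the expansion points the agreement degrades, consistent with the 58.2% accuracy reported.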
                                                          Alternative 15: 65.0% accurate, 2.6× speedup

                                                          \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                                                          (FPCore (a b)
                                                           :precision binary64
                                                           (if (<= b 2.15e+140)
                                                             (pow (fma (- (* 0.5 a) 1.0) a 2.0) -1.0)
                                                             (pow (fma (fma 0.5 b 1.0) b 2.0) -1.0)))
                                                          double code(double a, double b) {
                                                          	double tmp;
                                                          	if (b <= 2.15e+140) {
                                                          		tmp = pow(fma(((0.5 * a) - 1.0), a, 2.0), -1.0);
                                                          	} else {
                                                          		tmp = pow(fma(fma(0.5, b, 1.0), b, 2.0), -1.0);
                                                          	}
                                                          	return tmp;
                                                          }
                                                          
                                                          function code(a, b)
                                                          	tmp = 0.0
                                                          	if (b <= 2.15e+140)
                                                          		tmp = fma(Float64(Float64(0.5 * a) - 1.0), a, 2.0) ^ -1.0;
                                                          	else
                                                          		tmp = fma(fma(0.5, b, 1.0), b, 2.0) ^ -1.0;
                                                          	end
                                                          	return tmp
                                                          end
                                                          
                                                          code[a_, b_] := If[LessEqual[b, 2.15e+140], N[Power[N[(N[(N[(0.5 * a), $MachinePrecision] - 1.0), $MachinePrecision] * a + 2.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(0.5 * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                                                          
                                                          \begin{array}{l}
                                                          
                                                          \\
                                                          \begin{array}{l}
                                                          \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\
                                                          \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 2\right)\right)}^{-1}\\
                                                          
                                                          \mathbf{else}:\\
                                                          \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\
                                                          
                                                          
                                                          \end{array}
                                                          \end{array}
                                                          
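The listing above gives C, Julia, and Mathematica versions; a Python transcription (my addition, not emitted by the report) is sketched below. `math.fma` exists only from Python 3.13, so a plain multiply-add fallback is used when it is missing; the fallback loses the fused rounding but computes the same expression.

```python
import math

# Fused multiply-add: math.fma (Python >= 3.13) or an unfused fallback.
fma = getattr(math, "fma", lambda x, y, z: x * y + z)

def code(a, b):
    if b <= 2.15e140:
        return fma((0.5 * a) - 1.0, a, 2.0) ** -1.0
    else:
        return fma(fma(0.5, b, 1.0), b, 2.0) ** -1.0
```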
                                                          Derivation
                                                          1. Split input into 2 regimes
                                                          2. if b < 2.15000000000000001e140

                                                            1. Initial program 98.6%

                                                              \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                            2. Add Preprocessing
                                                            3. Step-by-step derivation
                                                              1. lift-/.f64 N/A

                                                                \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                                              2. lift-exp.f64 N/A

                                                                \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                                              3. sinh-+-cosh-rev N/A

                                                                \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                                                              4. flip-+ N/A

                                                                \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                                              5. sinh-cosh N/A

                                                                \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                              6. sinh-cosh N/A

                                                                \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                              7. sinh---cosh-rev N/A

                                                                \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                                              8. associate-/l/ N/A

                                                                \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                              9. lower-/.f64 N/A

                                                                \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                              10. sinh-cosh N/A

                                                                \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                                              11. lower-*.f64 N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                              12. lower-exp.f64 N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                              13. lower-neg.f64 98.6

                                                                \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                              14. lift-+.f64 N/A

                                                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                                              15. +-commutative N/A

                                                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                              16. lower-+.f64 98.6

                                                                \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                            4. Applied rewrites 98.6%

                                                              \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                                            5. Taylor expanded in b around 0

                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                            6. Step-by-step derivation
                                                              1. distribute-lft-in N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                                              2. *-rgt-identity N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                                              3. cancel-sign-sub N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                                              4. distribute-lft-neg-out N/A

                                                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                                              5. exp-neg N/A

                                                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                                              6. lft-mult-inverse N/A

                                                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                                              7. metadata-eval N/A

                                                                \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                                              8. lower--.f64 N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                                              9. lower-exp.f64 N/A

                                                                \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                                              10. lower-neg.f64 76.5

                                                                \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                                            7. Applied rewrites 76.5%

                                                              \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                            8. Taylor expanded in a around 0

                                                              \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(\frac{1}{2} \cdot a - 1\right)}} \]
                                                            9. Step-by-step derivation
                                                              1. Applied rewrites 60.8%

                                                                \[\leadsto \frac{1}{\mathsf{fma}\left(0.5 \cdot a - 1, \color{blue}{a}, 2\right)} \]

                                                              if 2.15000000000000001e140 < b

                                                              1. Initial program 100.0%

                                                                \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                              2. Add Preprocessing
                                                              3. Taylor expanded in a around 0

                                                                \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                              4. Step-by-step derivation
                                                                1. lower-/.f64 N/A

                                                                  \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                2. +-commutative N/A

                                                                  \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                3. lower-+.f64 N/A

                                                                  \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                4. lower-exp.f64 100.0

                                                                  \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                                              5. Applied rewrites 100.0%

                                                                \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                                              6. Taylor expanded in b around 0

                                                                \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
                                                              7. Step-by-step derivation
                                                                1. Applied rewrites 95.2%

                                                                  \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
                                                              8. Recombined 2 regimes into one program.
                                                              9. Final simplification 65.9%

                                                                \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.15 \cdot 10^{+140}:\\ \;\;\;\;{\left(\mathsf{fma}\left(0.5 \cdot a - 1, a, 2\right)\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                                              10. Add Preprocessing

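As a quick check (my sketch, not part of the report), the b ≤ 2.15e140 branch of Alternative 15 is the reciprocal of the second-order expansion of e^{-a} + 1 derived above, so it should track the exact quotient near a = b = 0. The fma only fuses the final rounding, so plain arithmetic suffices for this comparison; `taylor_branch` and `exact` are illustrative names.

```python
import math

def taylor_branch(a):
    # 1 / fma(0.5*a - 1, a, 2) written without fma: 1 / (2 - a + a^2/2).
    return 1.0 / ((0.5 * a - 1.0) * a + 2.0)

def exact(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

for a in (0.0, 0.01, 0.1):
    print(a, taylor_branch(a), exact(a, 0.0))
```

At a = 0 both sides give exactly 0.5; at a = 0.1 they agree to about four decimal places, and the gap widens as |a| grows, which is why accuracy falls off away from the expansion point.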
                                                              Alternative 16: 62.6% accurate, 2.6× speedup

                                                              \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -3.4 \cdot 10^{+141}:\\ \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \end{array} \]
                                                              (FPCore (a b)
                                                               :precision binary64
                                                               (if (<= a -3.4e+141)
                                                                 (pow (* (* a a) 0.5) -1.0)
                                                                 (pow (fma (fma 0.5 b 1.0) b 2.0) -1.0)))
                                                              double code(double a, double b) {
                                                              	double tmp;
                                                              	if (a <= -3.4e+141) {
                                                              		tmp = pow(((a * a) * 0.5), -1.0);
                                                              	} else {
                                                              		tmp = pow(fma(fma(0.5, b, 1.0), b, 2.0), -1.0);
                                                              	}
                                                              	return tmp;
                                                              }
                                                              
                                                              function code(a, b)
                                                              	tmp = 0.0
                                                              	if (a <= -3.4e+141)
                                                              		tmp = Float64(Float64(a * a) * 0.5) ^ -1.0;
                                                              	else
                                                              		tmp = fma(fma(0.5, b, 1.0), b, 2.0) ^ -1.0;
                                                              	end
                                                              	return tmp
                                                              end
                                                              
                                                              code[a_, b_] := If[LessEqual[a, -3.4e+141], N[Power[N[(N[(a * a), $MachinePrecision] * 0.5), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(0.5 * b + 1.0), $MachinePrecision] * b + 2.0), $MachinePrecision], -1.0], $MachinePrecision]]
                                                              
                                                              \begin{array}{l}
                                                              
                                                              \\
                                                              \begin{array}{l}
                                                              \mathbf{if}\;a \leq -3.4 \cdot 10^{+141}:\\
                                                              \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\
                                                              
                                                              \mathbf{else}:\\
                                                              \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\
                                                              
                                                              
                                                              \end{array}
                                                              \end{array}
                                                              
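As with the previous alternative, a Python transcription of this listing may be convenient (my addition, not emitted by the report); `math.fma` requires Python 3.13+, so an unfused fallback is used where it is unavailable.

```python
import math

# Fused multiply-add: math.fma (Python >= 3.13) or an unfused fallback.
fma = getattr(math, "fma", lambda x, y, z: x * y + z)

def code(a, b):
    if a <= -3.4e141:
        return ((a * a) * 0.5) ** -1.0
    else:
        return fma(fma(0.5, b, 1.0), b, 2.0) ** -1.0
```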
                                                              Derivation
                                                              1. Split input into 2 regimes
                                                              2. if a < -3.3999999999999998e141

                                                                1. Initial program 100.0%

                                                                  \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                2. Add Preprocessing
                                                                3. Step-by-step derivation
                                                                  1. lift-/.f64 N/A

                                                                    \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                                                  2. lift-exp.f64 N/A

                                                                    \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                                                  3. sinh-+-cosh-rev N/A

                                                                    \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                                                                  4. flip-+ N/A

                                                                    \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                                                  5. sinh-cosh N/A

                                                                    \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                  6. sinh-cosh N/A

                                                                    \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                  7. sinh---cosh-rev N/A

                                                                    \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                                                  8. associate-/l/ N/A

                                                                    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                  9. lower-/.f64 N/A

                                                                    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                  10. sinh-cosh N/A

                                                                    \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                  11. lower-*.f64 N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                  12. lower-exp.f64 N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                  13. lower-neg.f64 100.0

                                                                    \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                  14. lift-+.f64 N/A

                                                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                                                  15. +-commutative N/A

                                                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                  16. lower-+.f64 100.0

                                                                    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                4. Applied rewrites 100.0%

                                                                  \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                                                5. Taylor expanded in b around 0

                                                                  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                                6. Step-by-step derivation
                                                                  1. distribute-lft-in N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                                                  2. *-rgt-identity N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                                                  3. cancel-sign-sub N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                                                  4. distribute-lft-neg-out N/A

                                                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                                                  5. exp-neg N/A

                                                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                                                  6. lft-mult-inverse N/A

                                                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                                                  7. metadata-eval N/A

                                                                    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                                                  8. lower--.f64 N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                                                  9. lower-exp.f64 N/A

                                                                    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                                                  10. lower-neg.f64 100.0

                                                                    \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                                                7. Applied rewrites 100.0%

                                                                  \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                                8. Taylor expanded in a around 0

                                                                  \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(\frac{1}{2} \cdot a - 1\right)}} \]
                                                                9. Step-by-step derivation
                                                                  1. Applied rewrites 92.5%

                                                                    \[\leadsto \frac{1}{\mathsf{fma}\left(0.5 \cdot a - 1, \color{blue}{a}, 2\right)} \]
                                                                  2. Taylor expanded in a around inf

                                                                    \[\leadsto \frac{1}{\frac{1}{2} \cdot {a}^{\color{blue}{2}}} \]
                                                                  3. Step-by-step derivation
                                                                    1. Applied rewrites (92.5%)

                                                                      \[\leadsto \frac{1}{\left(a \cdot a\right) \cdot 0.5} \]

                                                                    if -3.3999999999999998e141 < a

                                                                    1. Initial program 98.6%

                                                                      \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                    2. Add Preprocessing
                                                                    3. Taylor expanded in a around 0

                                                                      \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                    4. Step-by-step derivation
                                                                      1. lower-/.f64 (N/A)

                                                                        \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                      2. +-commutative (N/A)

                                                                        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                      3. lower-+.f64 (N/A)

                                                                        \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                      4. lower-exp.f64 (87.1%)

                                                                        \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                                                    5. Applied rewrites (87.1%)

                                                                      \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                                                    6. Taylor expanded in b around 0

                                                                      \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
                                                                    7. Step-by-step derivation
                                                                      1. Applied rewrites (58.7%)

                                                                        \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
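The nested fma form above unrolls algebraically to the Taylor polynomial 2 + b·(1 + b/2) from the previous step. A sketch of the check (using an unfused stand-in for fma, since a true fused multiply-add only rounds once but has the same algebraic value):

```python
import math

def fma(x, y, z):
    # Unfused stand-in for a fused multiply-add (assumption: a real
    # fma rounds once; the algebraic value x*y + z is identical).
    return x * y + z

b = 0.01
nested = fma(fma(0.5, b, 1.0), b, 2.0)   # (0.5*b + 1)*b + 2
taylor = 2.0 + b * (1.0 + 0.5 * b)       # form from the Taylor step
assert abs(nested - taylor) < 1e-15
# Both approximate the exact denominator 1 + exp(b) near b = 0.
assert abs(nested - (1.0 + math.exp(b))) < 1e-6
```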
                                                                    8. Recombined 2 regimes into one program.
                                                                    9. Final simplification (63.5%)

                                                                      \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -3.4 \cdot 10^{+141}:\\ \;\;\;\;{\left(\left(a \cdot a\right) \cdot 0.5\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), b, 2\right)\right)}^{-1}\\ \end{array} \]
                                                                    10. Add Preprocessing
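The recombined regime program can be sketched in Python (an illustrative translation of the derivation above, not Herbie's emitted code; `math.fma` requires Python 3.13, so an unfused fallback is used here):

```python
import math

def alt(a, b):
    # Regime split from the derivation: a quadratic tail for very
    # negative a, the fma-based Taylor form otherwise.
    fma = getattr(math, "fma", lambda x, y, z: x * y + z)
    if a <= -3.4e141:
        return ((a * a) * 0.5) ** -1.0
    return fma(fma(0.5, b, 1.0), b, 2.0) ** -1.0

def naive(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

# Near a = b = 0 the alternative tracks the original closely.
assert abs(alt(0.0, 0.01) - naive(0.0, 0.01)) < 1e-6
```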

                                                                    Alternative 17: 53.9% accurate, 2.7× speedup

                                                                    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 8.5 \cdot 10^{+89}:\\ \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\ \end{array} \end{array} \]
                                                                    (FPCore (a b)
                                                                     :precision binary64
                                                                     (if (<= b 8.5e+89) (pow (- (- 1.0 a) -1.0) -1.0) (pow (* (* 0.5 b) b) -1.0)))
                                                                    double code(double a, double b) {
                                                                    	double tmp;
                                                                    	if (b <= 8.5e+89) {
                                                                    		tmp = pow(((1.0 - a) - -1.0), -1.0);
                                                                    	} else {
                                                                    		tmp = pow(((0.5 * b) * b), -1.0);
                                                                    	}
                                                                    	return tmp;
                                                                    }
                                                                    
                                                                    real(8) function code(a, b)
                                                                        real(8), intent (in) :: a
                                                                        real(8), intent (in) :: b
                                                                        real(8) :: tmp
                                                                        if (b <= 8.5d+89) then
                                                                            tmp = ((1.0d0 - a) - (-1.0d0)) ** (-1.0d0)
                                                                        else
                                                                            tmp = ((0.5d0 * b) * b) ** (-1.0d0)
                                                                        end if
                                                                        code = tmp
                                                                    end function
                                                                    
                                                                    public static double code(double a, double b) {
                                                                    	double tmp;
                                                                    	if (b <= 8.5e+89) {
                                                                    		tmp = Math.pow(((1.0 - a) - -1.0), -1.0);
                                                                    	} else {
                                                                    		tmp = Math.pow(((0.5 * b) * b), -1.0);
                                                                    	}
                                                                    	return tmp;
                                                                    }
                                                                    
                                                                    def code(a, b):
                                                                    	tmp = 0
                                                                    	if b <= 8.5e+89:
                                                                    		tmp = math.pow(((1.0 - a) - -1.0), -1.0)
                                                                    	else:
                                                                    		tmp = math.pow(((0.5 * b) * b), -1.0)
                                                                    	return tmp
                                                                    
                                                                    function code(a, b)
                                                                    	tmp = 0.0
                                                                    	if (b <= 8.5e+89)
                                                                    		tmp = Float64(Float64(1.0 - a) - -1.0) ^ -1.0;
                                                                    	else
                                                                    		tmp = Float64(Float64(0.5 * b) * b) ^ -1.0;
                                                                    	end
                                                                    	return tmp
                                                                    end
                                                                    
                                                                    function tmp_2 = code(a, b)
                                                                    	tmp = 0.0;
                                                                    	if (b <= 8.5e+89)
                                                                    		tmp = ((1.0 - a) - -1.0) ^ -1.0;
                                                                    	else
                                                                    		tmp = ((0.5 * b) * b) ^ -1.0;
                                                                    	end
                                                                    	tmp_2 = tmp;
                                                                    end
                                                                    
                                                                    code[a_, b_] := If[LessEqual[b, 8.5e+89], N[Power[N[(N[(1.0 - a), $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision], N[Power[N[(N[(0.5 * b), $MachinePrecision] * b), $MachinePrecision], -1.0], $MachinePrecision]]
                                                                    
                                                                    \begin{array}{l}
                                                                    
                                                                    \\
                                                                    \begin{array}{l}
                                                                    \mathbf{if}\;b \leq 8.5 \cdot 10^{+89}:\\
                                                                    \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\
                                                                    
                                                                    \mathbf{else}:\\
                                                                    \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\
                                                                    
                                                                    
                                                                    \end{array}
                                                                    \end{array}
                                                                    
                                                                    Derivation
                                                                    1. Split input into 2 regimes
                                                                    2. if b < 8.50000000000000045e89

                                                                      1. Initial program 98.5%

                                                                        \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                      2. Add Preprocessing
                                                                      3. Step-by-step derivation
                                                                        1. lift-/.f64 (N/A)

                                                                          \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                                                        2. lift-exp.f64 (N/A)

                                                                          \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                                                        3. sinh-+-cosh-rev (N/A)

                                                                          \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
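The sinh-+-cosh-rev rewrite relies on the identity cosh a + sinh a = e^a, which is easy to confirm numerically (a standalone check, not part of the Herbie output):

```python
import math

for a in (-2.0, 0.0, 0.5, 3.0):
    # cosh a + sinh a = (e^a + e^-a)/2 + (e^a - e^-a)/2 = e^a
    assert math.isclose(math.cosh(a) + math.sinh(a), math.exp(a), rel_tol=1e-12)
```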
                                                                        4. flip-+ (N/A)

                                                                          \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                                                        5. sinh-cosh (N/A)

                                                                          \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                        6. sinh-cosh (N/A)

                                                                          \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                        7. sinh---cosh-rev (N/A)

                                                                          \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                                                        8. associate-/l/ (N/A)

                                                                          \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                        9. lower-/.f64 (N/A)

                                                                          \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                        10. sinh-cosh (N/A)

                                                                          \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                        11. lower-*.f64 (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                        12. lower-exp.f64 (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                        13. lower-neg.f64 (98.5%)

                                                                          \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                        14. lift-+.f64 (N/A)

                                                                          \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                                                        15. +-commutative (N/A)

                                                                          \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                        16. lower-+.f64 (98.5%)

                                                                          \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                      4. Applied rewrites (98.5%)

                                                                        \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
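The rewritten form at this point is algebraically equal to the original quotient, since multiplying the denominator by e^(−a) is the same as dividing the numerator e^a out. A quick numeric sketch (not part of the Herbie output):

```python
import math

def original(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def rewritten(a, b):
    # 1 / (e^-a * (e^b + e^a)); algebraically equal to the original
    return 1.0 / (math.exp(-a) * (math.exp(b) + math.exp(a)))

for a, b in [(0.3, -0.7), (2.0, 1.0), (-1.5, 0.5)]:
    assert math.isclose(original(a, b), rewritten(a, b), rel_tol=1e-12)
```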
                                                                      5. Taylor expanded in b around 0

                                                                        \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                                      6. Step-by-step derivation
                                                                        1. distribute-lft-in (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                                                        2. *-rgt-identity (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                                                        3. cancel-sign-sub (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                                                        4. distribute-lft-neg-out (N/A)

                                                                          \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                                                        5. exp-neg (N/A)

                                                                          \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                                                        6. lft-mult-inverse (N/A)

                                                                          \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                                                        7. metadata-eval (N/A)

                                                                          \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                                                        8. lower--.f64 (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                                                        9. lower-exp.f64 (N/A)

                                                                          \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                                                        10. lower-neg.f64 (77.2%)

                                                                          \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                                                      7. Applied rewrites (77.2%)

                                                                        \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                                      8. Taylor expanded in a around 0

                                                                        \[\leadsto \frac{1}{\left(1 + -1 \cdot a\right) - -1} \]
                                                                      9. Step-by-step derivation
                                                                        1. Applied rewrites (49.6%)

                                                                          \[\leadsto \frac{1}{\left(1 - a\right) - -1} \]
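This regime's final form 1/((1 − a) − −1) is just 1/(2 − a), a first-order approximation that only tracks the original expression when both a and b are near zero, consistent with the 49.6% accuracy reported. A quick sketch of the check (not part of the Herbie output):

```python
import math

def linear_regime(a):
    # 1 / ((1 - a) - -1)  ==  1 / (2 - a), the first-order form above
    return ((1.0 - a) - -1.0) ** -1.0

# With b fixed at 0 and a small, this tracks e^a / (e^a + e^b).
a = 0.01
exact = math.exp(a) / (math.exp(a) + 1.0)
assert abs(linear_regime(a) - exact) < 1e-4
```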

                                                                        if 8.50000000000000045e89 < b

                                                                        1. Initial program 100.0%

                                                                          \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                        2. Add Preprocessing
                                                                        3. Taylor expanded in a around 0

                                                                          \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                        4. Step-by-step derivation
                                                                          1. lower-/.f64 (N/A)

                                                                            \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                          2. +-commutative (N/A)

                                                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                          3. lower-+.f64 (N/A)

                                                                            \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
                                                                          4. lower-exp.f64 (100.0%)

                                                                            \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
                                                                        5. Applied rewrites (100.0%)

                                                                          \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
                                                                        6. Taylor expanded in b around 0

                                                                          \[\leadsto \frac{1}{2 + \color{blue}{b \cdot \left(1 + \frac{1}{2} \cdot b\right)}} \]
                                                                        7. Step-by-step derivation
                                                                          1. Applied rewrites (83.2%)

                                                                            \[\leadsto \frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(0.5, b, 1\right), \color{blue}{b}, 2\right)} \]
                                                                          2. Taylor expanded in b around inf

                                                                            \[\leadsto \frac{1}{\frac{1}{2} \cdot {b}^{\color{blue}{2}}} \]
                                                                          3. Step-by-step derivation
                                                                            1. Applied rewrites (83.2%)

                                                                              \[\leadsto \frac{1}{\left(0.5 \cdot b\right) \cdot b} \]
                                                                          4. Recombined 2 regimes into one program.
                                                                          5. Final simplification (55.4%)

                                                                            \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 8.5 \cdot 10^{+89}:\\ \;\;\;\;{\left(\left(1 - a\right) - -1\right)}^{-1}\\ \mathbf{else}:\\ \;\;\;\;{\left(\left(0.5 \cdot b\right) \cdot b\right)}^{-1}\\ \end{array} \]
                                                                          6. Add Preprocessing

                                                                          Alternative 18: 40.7% accurate, 2.9× speedup

                                                                          \[\begin{array}{l} \\ {\left(\left(1 - a\right) - -1\right)}^{-1} \end{array} \]
                                                                          (FPCore (a b) :precision binary64 (pow (- (- 1.0 a) -1.0) -1.0))
                                                                          double code(double a, double b) {
                                                                          	return pow(((1.0 - a) - -1.0), -1.0);
                                                                          }
                                                                          
                                                                          real(8) function code(a, b)
                                                                              real(8), intent (in) :: a
                                                                              real(8), intent (in) :: b
                                                                              code = ((1.0d0 - a) - (-1.0d0)) ** (-1.0d0)
                                                                          end function
                                                                          
                                                                          public static double code(double a, double b) {
                                                                          	return Math.pow(((1.0 - a) - -1.0), -1.0);
                                                                          }
                                                                          
                                                                          def code(a, b):
                                                                          	return math.pow(((1.0 - a) - -1.0), -1.0)
                                                                          
                                                                          function code(a, b)
                                                                          	return Float64(Float64(1.0 - a) - -1.0) ^ -1.0
                                                                          end
                                                                          
                                                                          function tmp = code(a, b)
                                                                          	tmp = ((1.0 - a) - -1.0) ^ -1.0;
                                                                          end
                                                                          
                                                                          code[a_, b_] := N[Power[N[(N[(1.0 - a), $MachinePrecision] - -1.0), $MachinePrecision], -1.0], $MachinePrecision]
                                                                          
                                                                          \begin{array}{l}
                                                                          
                                                                          \\
                                                                          {\left(\left(1 - a\right) - -1\right)}^{-1}
                                                                          \end{array}
                                                                          
                                                                          Derivation
                                                                          1. Initial program 98.8%

                                                                            \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                          2. Add Preprocessing
                                                                          3. Step-by-step derivation
                                                                            1. lift-/.f64 (N/A)

                                                                              \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
                                                                            2. lift-exp.f64 (N/A)

                                                                              \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
                                                                            3. sinh-+-cosh-rev (N/A)

                                                                              \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
                                                                            4. flip-+ (N/A)

                                                                              \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
                                                                            5. sinh-cosh (N/A)

                                                                              \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                            6. sinh-cosh (N/A)

                                                                              \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
                                                                            7. sinh---cosh-rev (N/A)

                                                                              \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
                                                                            8. associate-/l/ (N/A)

                                                                              \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                            9. lower-/.f64 (N/A)

                                                                              \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                            10. sinh-cosh (N/A)

                                                                              \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                            11. lower-*.f64 (N/A)

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
                                                                            12. lower-exp.f64 (N/A)

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                            13. lower-neg.f64 (98.8%)

                                                                              \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
                                                                            14. lift-+.f64 (N/A)

                                                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
                                                                            15. +-commutative (N/A)

                                                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                            16. lower-+.f64 (98.8%)

                                                                              \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
                                                                          4. Applied rewrites98.8%

                                                                            \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
                                                                          5. Taylor expanded in b around 0

                                                                            \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
                                                                          6. Step-by-step derivation
                                                                            1. distribute-lft-inN/A

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
                                                                            2. *-rgt-identityN/A

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
                                                                            3. cancel-sign-subN/A

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
                                                                            4. distribute-lft-neg-outN/A

                                                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
                                                                            5. exp-negN/A

                                                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
                                                                            6. lft-mult-inverseN/A

                                                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
                                                                            7. metadata-evalN/A

                                                                              \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
                                                                            8. lower--.f64N/A

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
                                                                            9. lower-exp.f64N/A

                                                                              \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
                                                                            10. lower-neg.f6469.0

                                                                              \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
                                                                          7. Applied rewrites69.0%

                                                                            \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
                                                                          8. Taylor expanded in a around 0

                                                                            \[\leadsto \frac{1}{\left(1 + -1 \cdot a\right) - -1} \]
                                                                          9. Step-by-step derivation
                                                                            1. Applied rewrites41.7%

                                                                              \[\leadsto \frac{1}{\left(1 - a\right) - -1} \]
                                                                            2. Final simplification41.7%

                                                                              \[\leadsto {\left(\left(1 - a\right) - -1\right)}^{-1} \]
                                                                            3. Add Preprocessing

                                                                            Alternative 19: 40.7% accurate, 3.0× speedup?

                                                                            \[\begin{array}{l} \\ {\left(2 - a\right)}^{-1} \end{array} \]
                                                                            (FPCore (a b) :precision binary64 (pow (- 2.0 a) -1.0))
                                                                            double code(double a, double b) {
                                                                            	return pow((2.0 - a), -1.0);
                                                                            }
                                                                            
                                                                            real(8) function code(a, b)
                                                                                real(8), intent (in) :: a
                                                                                real(8), intent (in) :: b
                                                                                code = (2.0d0 - a) ** (-1.0d0)
                                                                            end function
                                                                            
                                                                            public static double code(double a, double b) {
                                                                            	return Math.pow((2.0 - a), -1.0);
                                                                            }
                                                                            
                                                                            def code(a, b):
                                                                            	return math.pow((2.0 - a), -1.0)
                                                                            
                                                                            function code(a, b)
                                                                            	return Float64(2.0 - a) ^ -1.0
                                                                            end
                                                                            
                                                                            function tmp = code(a, b)
                                                                            	tmp = (2.0 - a) ^ -1.0;
                                                                            end
                                                                            
                                                                            code[a_, b_] := N[Power[N[(2.0 - a), $MachinePrecision], -1.0], $MachinePrecision]
                                                                            
                                                                            \begin{array}{l}
                                                                            
                                                                            \\
                                                                            {\left(2 - a\right)}^{-1}
                                                                            \end{array}
                                                                            
                                                                            Derivation
                                                                            1. Initial program 98.8%

                                                                              \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                            2. Add Preprocessing
                                                                            3. Step-by-step derivation
  1. lift-/.f64 N/A

    \[\leadsto \color{blue}{\frac{e^{a}}{e^{a} + e^{b}}} \]
  2. lift-exp.f64 N/A

    \[\leadsto \frac{\color{blue}{e^{a}}}{e^{a} + e^{b}} \]
  3. sinh-+-cosh-rev N/A

    \[\leadsto \frac{\color{blue}{\cosh a + \sinh a}}{e^{a} + e^{b}} \]
  4. flip-+ N/A

    \[\leadsto \frac{\color{blue}{\frac{\cosh a \cdot \cosh a - \sinh a \cdot \sinh a}{\cosh a - \sinh a}}}{e^{a} + e^{b}} \]
  5. sinh-cosh N/A

    \[\leadsto \frac{\frac{\color{blue}{1}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
  6. sinh-cosh N/A

    \[\leadsto \frac{\frac{\color{blue}{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}}{\cosh a - \sinh a}}{e^{a} + e^{b}} \]
  7. sinh---cosh-rev N/A

    \[\leadsto \frac{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{\color{blue}{e^{\mathsf{neg}\left(a\right)}}}}{e^{a} + e^{b}} \]
  8. associate-/l/ N/A

    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
  9. lower-/.f64 N/A

    \[\leadsto \color{blue}{\frac{\cosh b \cdot \cosh b - \sinh b \cdot \sinh b}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
  10. sinh-cosh N/A

    \[\leadsto \frac{\color{blue}{1}}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)} \]
  11. lower-*.f64 N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(e^{a} + e^{b}\right)}} \]
  12. lower-exp.f64 N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} \cdot \left(e^{a} + e^{b}\right)} \]
  13. lower-neg.f64 98.8

    \[\leadsto \frac{1}{e^{\color{blue}{-a}} \cdot \left(e^{a} + e^{b}\right)} \]
  14. lift-+.f64 N/A

    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{a} + e^{b}\right)}} \]
  15. +-commutative N/A

    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
  16. lower-+.f64 98.8

    \[\leadsto \frac{1}{e^{-a} \cdot \color{blue}{\left(e^{b} + e^{a}\right)}} \]
4. Applied rewrites 98.8%

  \[\leadsto \color{blue}{\frac{1}{e^{-a} \cdot \left(e^{b} + e^{a}\right)}} \]
5. Taylor expanded in b around 0

  \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot \left(1 + e^{a}\right)}} \]
6. Step-by-step derivation
  1. distribute-lft-in N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} \cdot 1 + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}}} \]
  2. *-rgt-identity N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} + e^{\mathsf{neg}\left(a\right)} \cdot e^{a}} \]
  3. cancel-sign-sub N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)}\right)\right) \cdot e^{a}}} \]
  4. distribute-lft-neg-out N/A

    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{\left(\mathsf{neg}\left(e^{\mathsf{neg}\left(a\right)} \cdot e^{a}\right)\right)}} \]
  5. exp-neg N/A

    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{\frac{1}{e^{a}}} \cdot e^{a}\right)\right)} \]
  6. lft-mult-inverse N/A

    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \left(\mathsf{neg}\left(\color{blue}{1}\right)\right)} \]
  7. metadata-eval N/A

    \[\leadsto \frac{1}{e^{\mathsf{neg}\left(a\right)} - \color{blue}{-1}} \]
  8. lower--.f64 N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)} - -1}} \]
  9. lower-exp.f64 N/A

    \[\leadsto \frac{1}{\color{blue}{e^{\mathsf{neg}\left(a\right)}} - -1} \]
  10. lower-neg.f64 69.0

    \[\leadsto \frac{1}{e^{\color{blue}{-a}} - -1} \]
7. Applied rewrites 69.0%

  \[\leadsto \frac{1}{\color{blue}{e^{-a} - -1}} \]
8. Taylor expanded in a around 0

  \[\leadsto \frac{1}{2 + \color{blue}{-1 \cdot a}} \]
9. Step-by-step derivation
  1. Applied rewrites 41.7%

    \[\leadsto \frac{1}{2 - \color{blue}{a}} \]
  2. Final simplification 41.7%

    \[\leadsto {\left(2 - a\right)}^{-1} \]
  3. Add Preprocessing
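Since Alternative 19 comes from Taylor expanding first in b and then in a, both around 0, it is only trustworthy near the origin, which matches its 40.7% accuracy. A quick Python sketch (the helper names are mine, not Herbie's) comparing it against the initial program:

```python
import math

def exact(a, b):
    # initial program: e^a / (e^a + e^b)
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt19(a):
    # Alternative 19: (2 - a)^-1, derived for a and b near 0
    return (2.0 - a) ** -1.0

# close to the expansion point the two agree well...
assert abs(exact(0.01, 0.0) - alt19(0.01)) < 1e-4

# ...but far from it the linearization breaks down badly
assert abs(exact(3.0, 0.0) - alt19(3.0)) > 0.5
```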

                                                                              Alternative 20: 39.9% accurate, 315.0× speedup?

                                                                              \[\begin{array}{l} \\ 0.5 \end{array} \]
                                                                              (FPCore (a b) :precision binary64 0.5)
                                                                              double code(double a, double b) {
                                                                              	return 0.5;
                                                                              }
                                                                              
                                                                              real(8) function code(a, b)
                                                                                  real(8), intent (in) :: a
                                                                                  real(8), intent (in) :: b
                                                                                  code = 0.5d0
                                                                              end function
                                                                              
                                                                              public static double code(double a, double b) {
                                                                              	return 0.5;
                                                                              }
                                                                              
                                                                              def code(a, b):
                                                                              	return 0.5
                                                                              
                                                                              function code(a, b)
                                                                              	return 0.5
                                                                              end
                                                                              
                                                                              function tmp = code(a, b)
                                                                              	tmp = 0.5;
                                                                              end
                                                                              
                                                                              code[a_, b_] := 0.5
                                                                              
                                                                              \begin{array}{l}
                                                                              
                                                                              \\
                                                                              0.5
                                                                              \end{array}
                                                                              
                                                                              Derivation
                                                                              1. Initial program 98.8%

                                                                                \[\frac{e^{a}}{e^{a} + e^{b}} \]
                                                                              2. Add Preprocessing
                                                                              3. Taylor expanded in a around 0

                                                                                \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
                                                                              4. Step-by-step derivation
  1. lower-/.f64 N/A

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b}}} \]
  2. +-commutative N/A

    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
  3. lower-+.f64 N/A

    \[\leadsto \frac{1}{\color{blue}{e^{b} + 1}} \]
  4. lower-exp.f64 79.9

    \[\leadsto \frac{1}{\color{blue}{e^{b}} + 1} \]
5. Applied rewrites 79.9%

  \[\leadsto \color{blue}{\frac{1}{e^{b} + 1}} \]
6. Taylor expanded in b around 0

  \[\leadsto \frac{1}{2} \]
7. Step-by-step derivation
  1. Applied rewrites 40.9%

    \[\leadsto 0.5 \]
  2. Add Preprocessing
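A constant may look drastic, but 0.5 is exactly the true value whenever a = b, since the quotient reduces to e^a / (2 e^a); that symmetry is part of why even a bare constant still scores about 40%. A minimal Python check (the `exact` helper name is mine):

```python
import math

def exact(a, b):
    # initial program: e^a / (e^a + e^b)
    return math.exp(a) / (math.exp(a) + math.exp(b))

# On the diagonal a == b the quotient is x / (x + x): doubling is exact
# in binary64 and the correctly rounded division yields exactly 0.5.
for v in (0.0, 1.7, -12.25, 100.0):
    assert exact(v, v) == 0.5
```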

                                                                                Developer Target 1: 100.0% accurate, 2.7× speedup?

                                                                                \[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
                                                                                (FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
                                                                                double code(double a, double b) {
                                                                                	return 1.0 / (1.0 + exp((b - a)));
                                                                                }
                                                                                
                                                                                real(8) function code(a, b)
                                                                                    real(8), intent (in) :: a
                                                                                    real(8), intent (in) :: b
                                                                                    code = 1.0d0 / (1.0d0 + exp((b - a)))
                                                                                end function
                                                                                
                                                                                public static double code(double a, double b) {
                                                                                	return 1.0 / (1.0 + Math.exp((b - a)));
                                                                                }
                                                                                
                                                                                def code(a, b):
                                                                                	return 1.0 / (1.0 + math.exp((b - a)))
                                                                                
                                                                                function code(a, b)
                                                                                	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
                                                                                end
                                                                                
                                                                                function tmp = code(a, b)
                                                                                	tmp = 1.0 / (1.0 + exp((b - a)));
                                                                                end
                                                                                
                                                                                code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
                                                                                
                                                                                \begin{array}{l}
                                                                                
                                                                                \\
                                                                                \frac{1}{1 + e^{b - a}}
                                                                                \end{array}
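The developer target evaluates a single exponential of the difference b - a, so neither e^a nor e^b is ever formed on its own; that is what keeps it accurate even where the original quotient overflows. A short Python contrast (the `naive`/`stable` names are illustrative):

```python
import math

def naive(a, b):
    # original program: fails once exp(a) or exp(b) exceeds ~1.8e308
    return math.exp(a) / (math.exp(a) + math.exp(b))

def stable(a, b):
    # developer target: 1 / (1 + e^(b - a))
    return 1.0 / (1.0 + math.exp(b - a))

# moderate inputs: both forms agree closely
assert abs(naive(1.0, 2.0) - stable(1.0, 2.0)) < 1e-12

# large inputs: exp(800) overflows binary64, so the naive form raises
# OverflowError in Python, while the target only evaluates exp(-1)
try:
    naive(800.0, 799.0)
    raise AssertionError("expected exp(800) to overflow")
except OverflowError:
    pass
assert abs(stable(800.0, 799.0) - 1.0 / (1.0 + math.exp(-1.0))) < 1e-12
```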
                                                                                

                                                                                Reproduce

                                                                                ?
                                                                                herbie shell --seed 2024342 
                                                                                (FPCore (a b)
                                                                                  :name "Quotient of sum of exps"
                                                                                  :precision binary64
                                                                                
                                                                                  :alt
                                                                                  (! :herbie-platform default (/ 1 (+ 1 (exp (- b a)))))
                                                                                
                                                                                  (/ (exp a) (+ (exp a) (exp b))))