Quotient of sum of exps

Percentage Accurate: 99.0% → 100.0%
Time: 5.7s
Alternatives: 12
Speedup: 2.9×

Specification

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
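The naive form of this specification is numerically fragile. As a quick illustration (not part of Herbie's output; the function name `naive` is ours), a minimal Python sketch:

```python
import math

def naive(a, b):
    # Direct transcription of the specification.
    # Both exp calls overflow once a or b exceeds ~709.78 in binary64.
    return math.exp(a) / (math.exp(a) + math.exp(b))

# Fine for moderate inputs:
print(naive(1.0, 2.0))  # equals 1 / (1 + e), about 0.2689

# For large inputs Python's math.exp raises OverflowError
# (C's exp would return inf, making the quotient inf/inf = NaN),
# even though the true value of naive(1000, 1000) is exactly 0.5:
try:
    naive(1000.0, 1000.0)
except OverflowError:
    print("overflow")
```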

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is chosen in the title); the vertical axis is accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 12 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}

Alternative 1: 100.0% accurate, 2.9× speedup

\[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
double code(double a, double b) {
	return 1.0 / (1.0 + exp((b - a)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (1.0d0 + exp((b - a)))
end function
public static double code(double a, double b) {
	return 1.0 / (1.0 + Math.exp((b - a)));
}
def code(a, b):
	return 1.0 / (1.0 + math.exp((b - a)))
function code(a, b)
	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
end
function tmp = code(a, b)
	tmp = 1.0 / (1.0 + exp((b - a)));
end
code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{1 + e^{b - a}}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Final simplification 100.0%

    \[\leadsto \frac{1}{1 + e^{b - a}} \]
  6. Add Preprocessing
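Numerically, Alternative 1 wins because only the difference b - a reaches exp: dividing the numerator and denominator by e^a removes the overflow for large, nearly equal inputs. A small Python sketch (function names are ours):

```python
import math

def naive(a, b):
    # Original program from the report.
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt1(a, b):
    # Alternative 1: divide numerator and denominator by exp(a).
    return 1.0 / (1.0 + math.exp(b - a))

# The two forms agree closely for moderate inputs:
assert abs(naive(0.3, -1.2) - alt1(0.3, -1.2)) < 1e-12

# Only the rewritten form survives large, nearly equal inputs,
# where the naive form overflows:
print(alt1(1000.0, 1000.5))  # 1 / (1 + e^0.5), about 0.3775
```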

Alternative 2: 98.7% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -0.053:\\ \;\;\;\;\frac{1}{1 + e^{-a}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -0.053) (/ 1.0 (+ 1.0 (exp (- a)))) (/ 1.0 (+ 1.0 (exp b)))))
double code(double a, double b) {
	double tmp;
	if (a <= -0.053) {
		tmp = 1.0 / (1.0 + exp(-a));
	} else {
		tmp = 1.0 / (1.0 + exp(b));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-0.053d0)) then
        tmp = 1.0d0 / (1.0d0 + exp(-a))
    else
        tmp = 1.0d0 / (1.0d0 + exp(b))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -0.053) {
		tmp = 1.0 / (1.0 + Math.exp(-a));
	} else {
		tmp = 1.0 / (1.0 + Math.exp(b));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -0.053:
		tmp = 1.0 / (1.0 + math.exp(-a))
	else:
		tmp = 1.0 / (1.0 + math.exp(b))
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -0.053)
		tmp = Float64(1.0 / Float64(1.0 + exp(Float64(-a))));
	else
		tmp = Float64(1.0 / Float64(1.0 + exp(b)));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -0.053)
		tmp = 1.0 / (1.0 + exp(-a));
	else
		tmp = 1.0 / (1.0 + exp(b));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -0.053], N[(1.0 / N[(1.0 + N[Exp[(-a)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -0.053:\\
\;\;\;\;\frac{1}{1 + e^{-a}}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{1 + e^{b}}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -0.0529999999999999985

    1. Initial program 98.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.4%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.4%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.4%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 1.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 1.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 1.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 98.4%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 98.4%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 98.4%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 98.4%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]

    if -0.0529999999999999985 < a

    1. Initial program 98.9%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.9%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.9%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.9%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.9%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 98.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 98.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 98.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 98.2%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 98.7%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -0.053:\\ \;\;\;\;\frac{1}{1 + e^{-a}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \]
  5. Add Preprocessing
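Each branch of Alternative 2 drops one variable via a Taylor expansion about 0, so it is accurate only when the dropped variable is near 0. A small sketch of that trade-off (function names are ours):

```python
import math

def alt2(a, b):
    # Alternative 2: regime split at a = -0.053; each branch is the
    # exact form 1/(1 + exp(b - a)) with one variable expanded about 0.
    if a <= -0.053:
        return 1.0 / (1.0 + math.exp(-a))  # b dropped (expanded about b = 0)
    else:
        return 1.0 / (1.0 + math.exp(b))   # a dropped (expanded about a = 0)

def exact(a, b):
    return 1.0 / (1.0 + math.exp(b - a))

# Near b = 0 the first branch is essentially exact:
print(alt2(-5.0, 1e-9), exact(-5.0, 1e-9))

# But that branch ignores b entirely, so the error grows with |b|:
print(alt2(-5.0, 3.0), exact(-5.0, 3.0))
```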

Alternative 3: 93.1% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -9.5 \cdot 10^{+102}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -9.5e+102)
   (/ 1.0 (+ 2.0 (* a (+ (* a (+ 0.5 (* a -0.16666666666666666))) -1.0))))
   (/ 1.0 (+ 1.0 (exp b)))))
double code(double a, double b) {
	double tmp;
	if (a <= -9.5e+102) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (1.0 + exp(b));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-9.5d+102)) then
        tmp = 1.0d0 / (2.0d0 + (a * ((a * (0.5d0 + (a * (-0.16666666666666666d0)))) + (-1.0d0))))
    else
        tmp = 1.0d0 / (1.0d0 + exp(b))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -9.5e+102) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (1.0 + Math.exp(b));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -9.5e+102:
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)))
	else:
		tmp = 1.0 / (1.0 + math.exp(b))
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -9.5e+102)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))) + -1.0))));
	else
		tmp = Float64(1.0 / Float64(1.0 + exp(b)));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -9.5e+102)
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	else
		tmp = 1.0 / (1.0 + exp(b));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -9.5e+102], N[(1.0 / N[(2.0 + N[(a * N[(N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -9.5 \cdot 10^{+102}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{1 + e^{b}}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -9.4999999999999992e102

    1. Initial program 97.4%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 97.4%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 97.4%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 97.4%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 97.4%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 97.4%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 0.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 0.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 0.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 97.4%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 97.4%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 97.4%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 97.4%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    6. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if -9.4999999999999992e102 < a

    1. Initial program 99.1%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.1%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 99.1%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 99.1%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 99.1%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 99.1%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 87.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 87.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 87.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 92.3%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 93.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -9.5 \cdot 10^{+102}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \]
  5. Add Preprocessing
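The polynomial branch of Alternative 3 is the reciprocal of the cubic Taylor series of 1 + e^{-a} about a = 0, i.e. 1 + e^{-a} = 2 - a + a²/2 - a³/6 + ..., even though the regime applies it only at extremely negative a. A sketch verifying the series identity near 0 (function names are ours):

```python
import math

def cubic_in_a(a):
    # Alternative 3's first branch:
    # 1 / (2 + a*(a*(0.5 + a*(-1/6)) - 1)) == 1 / (2 - a + a^2/2 - a^3/6)
    return 1.0 / (2.0 + a * (a * (0.5 + a * -0.16666666666666666) + -1.0))

def exact(a):
    # Exact value after eliminating b: 1 / (1 + exp(-a))
    return 1.0 / (1.0 + math.exp(-a))

# Near a = 0 the truncated series tracks the exact value:
for a in (0.0, 0.05, -0.1):
    print(a, cubic_in_a(a), exact(a))
```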

Alternative 4: 73.6% accurate, 6.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := b \cdot \left(b \cdot 0.16666666666666666\right)\\ \mathbf{if}\;b \leq 2.8 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{elif}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + \frac{\left(b \cdot 0.5\right) \cdot \left(b \cdot 0.5\right) - t\_0 \cdot t\_0}{b \cdot 0.5 - t\_0}\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (let* ((t_0 (* b (* b 0.16666666666666666))))
   (if (<= b 2.8e+77)
     (/ 1.0 (+ 2.0 (* a (+ (* a (+ 0.5 (* a -0.16666666666666666))) -1.0))))
     (if (<= b 1e+103)
       (/
        1.0
        (+
         2.0
         (*
          b
          (+
           1.0
           (/ (- (* (* b 0.5) (* b 0.5)) (* t_0 t_0)) (- (* b 0.5) t_0))))))
       (/
        1.0
        (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666)))))))))))
double code(double a, double b) {
	double t_0 = b * (b * 0.16666666666666666);
	double tmp;
	if (b <= 2.8e+77) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else if (b <= 1e+103) {
		tmp = 1.0 / (2.0 + (b * (1.0 + ((((b * 0.5) * (b * 0.5)) - (t_0 * t_0)) / ((b * 0.5) - t_0)))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: t_0
    real(8) :: tmp
    t_0 = b * (b * 0.16666666666666666d0)
    if (b <= 2.8d+77) then
        tmp = 1.0d0 / (2.0d0 + (a * ((a * (0.5d0 + (a * (-0.16666666666666666d0)))) + (-1.0d0))))
    else if (b <= 1d+103) then
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + ((((b * 0.5d0) * (b * 0.5d0)) - (t_0 * t_0)) / ((b * 0.5d0) - t_0)))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double t_0 = b * (b * 0.16666666666666666);
	double tmp;
	if (b <= 2.8e+77) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else if (b <= 1e+103) {
		tmp = 1.0 / (2.0 + (b * (1.0 + ((((b * 0.5) * (b * 0.5)) - (t_0 * t_0)) / ((b * 0.5) - t_0)))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	t_0 = b * (b * 0.16666666666666666)
	tmp = 0
	if b <= 2.8e+77:
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)))
	elif b <= 1e+103:
		tmp = 1.0 / (2.0 + (b * (1.0 + ((((b * 0.5) * (b * 0.5)) - (t_0 * t_0)) / ((b * 0.5) - t_0)))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	t_0 = Float64(b * Float64(b * 0.16666666666666666))
	tmp = 0.0
	if (b <= 2.8e+77)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))) + -1.0))));
	elseif (b <= 1e+103)
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(Float64(Float64(Float64(b * 0.5) * Float64(b * 0.5)) - Float64(t_0 * t_0)) / Float64(Float64(b * 0.5) - t_0))))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	t_0 = b * (b * 0.16666666666666666);
	tmp = 0.0;
	if (b <= 2.8e+77)
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	elseif (b <= 1e+103)
		tmp = 1.0 / (2.0 + (b * (1.0 + ((((b * 0.5) * (b * 0.5)) - (t_0 * t_0)) / ((b * 0.5) - t_0)))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := Block[{t$95$0 = N[(b * N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[b, 2.8e+77], N[(1.0 / N[(2.0 + N[(a * N[(N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], If[LessEqual[b, 1e+103], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(N[(N[(N[(b * 0.5), $MachinePrecision] * N[(b * 0.5), $MachinePrecision]), $MachinePrecision] - N[(t$95$0 * t$95$0), $MachinePrecision]), $MachinePrecision] / N[(N[(b * 0.5), $MachinePrecision] - t$95$0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := b \cdot \left(b \cdot 0.16666666666666666\right)\\
\mathbf{if}\;b \leq 2.8 \cdot 10^{+77}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\

\mathbf{elif}\;b \leq 10^{+103}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + \frac{\left(b \cdot 0.5\right) \cdot \left(b \cdot 0.5\right) - t\_0 \cdot t\_0}{b \cdot 0.5 - t\_0}\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if b < 2.8e77

    1. Initial program 98.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.5%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 76.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 76.8%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 76.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 99.5%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 99.5%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 99.5%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 71.9%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    6. Taylor expanded in a around 0 64.5%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 2.8e77 < b < 1e103

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 42.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 42.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 42.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    6. Taylor expanded in b around 0 8.6%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 8.6%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 8.6%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
    9. Step-by-step derivation
      1. distribute-lft-in 8.6%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{\left(b \cdot 0.5 + b \cdot \left(b \cdot 0.16666666666666666\right)\right)}\right)} \]
      2. flip-+ 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{\frac{\left(b \cdot 0.5\right) \cdot \left(b \cdot 0.5\right) - \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right) \cdot \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right)}{b \cdot 0.5 - b \cdot \left(b \cdot 0.16666666666666666\right)}}\right)} \]
    10. Applied egg-rr 100.0%

      \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{\frac{\left(b \cdot 0.5\right) \cdot \left(b \cdot 0.5\right) - \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right) \cdot \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right)}{b \cdot 0.5 - b \cdot \left(b \cdot 0.16666666666666666\right)}}\right)} \]

    if 1e103 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 67.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 67.4%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 67.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 71.9%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.8 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{elif}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + \frac{\left(b \cdot 0.5\right) \cdot \left(b \cdot 0.5\right) - \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right) \cdot \left(b \cdot \left(b \cdot 0.16666666666666666\right)\right)}{b \cdot 0.5 - b \cdot \left(b \cdot 0.16666666666666666\right)}\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
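Symmetrically, the final branch of Alternative 4 is the reciprocal of a truncated Taylor series of 1 + e^b about b = 0 (1 + e^b = 2 + b + b²/2 + b³/6 + ...). A sketch checking the cubic version near 0 (function names are ours):

```python
import math

def cubic_in_b(b):
    # Alternative 4's final branch: 1 / (2 + b*(1 + b*(0.5 + b/6)))
    return 1.0 / (2.0 + b * (1.0 + b * (0.5 + b * 0.16666666666666666)))

def exact(b):
    # Exact value after eliminating a: 1 / (1 + exp(b))
    return 1.0 / (1.0 + math.exp(b))

# Near b = 0 the cubic matches closely; the regime nevertheless applies
# it only for astronomically large b, which is why accuracy drops:
for b in (0.0, 0.05, -0.1):
    print(b, cubic_in_b(b), exact(b))
```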

Alternative 5: 68.8% accurate, 15.2× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 3.6 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 3.6e+153)
   (/ 1.0 (+ 2.0 (* a (+ (* a (+ 0.5 (* a -0.16666666666666666))) -1.0))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 3.6e+153) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 3.6d+153) then
        tmp = 1.0d0 / (2.0d0 + (a * ((a * (0.5d0 + (a * (-0.16666666666666666d0)))) + (-1.0d0))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 3.6e+153) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 3.6e+153:
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 3.6e+153)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))) + -1.0))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 3.6e+153)
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 3.6e+153], N[(1.0 / N[(2.0 + N[(a * N[(N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 3.6 \cdot 10^{+153}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 3.6000000000000001e153

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 76.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 76.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 76.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 99.5%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 99.5%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 99.5%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 68.5%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    6. Taylor expanded in a around 0 60.9%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 3.6000000000000001e153 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 60.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 60.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 60.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 66.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 3.6 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing
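A quick sanity check (not part of the report) on the a-branch above: its denominator 2 + a·(a·(0.5 − a/6) − 1) is the degree-3 Taylor polynomial of 1 + e^(−a) around a = 0, so the branch should track 1/(1 + e^(−a)) for small a and drift badly far from 0:

```python
import math

def a_branch(a):
    # a-branch of Alternative 5; denominator = cubic Taylor polynomial of 1 + e^-a
    return 1.0 / (2.0 + a * ((a * (0.5 + a * -0.16666666666666666)) + -1.0))

# close agreement near a = 0
for a in [0.001, 0.01, 0.1]:
    assert math.isclose(a_branch(a), 1.0 / (1.0 + math.exp(-a)), rel_tol=1e-4)

# far from 0 the cubic is a poor substitute (here it even goes negative),
# consistent with the regime accuracies reported above
print(a_branch(3.0), 1.0 / (1.0 + math.exp(-3.0)))
```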

Alternative 6: 72.2% accurate, 15.2× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 1e+103)
   (/ 1.0 (+ 2.0 (* a (+ (* a (+ 0.5 (* a -0.16666666666666666))) -1.0))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666)))))))))
double code(double a, double b) {
	double tmp;
	if (b <= 1e+103) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 1d+103) then
        tmp = 1.0d0 / (2.0d0 + (a * ((a * (0.5d0 + (a * (-0.16666666666666666d0)))) + (-1.0d0))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 1e+103) {
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 1e+103:
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 1e+103)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))) + -1.0))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 1e+103)
		tmp = 1.0 / (2.0 + (a * ((a * (0.5 + (a * -0.16666666666666666))) + -1.0)));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 1e+103], N[(1.0 / N[(2.0 + N[(a * N[(N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 10^{+103}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 1e103

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 75.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 75.7%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 75.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 99.5%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 99.5%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 99.5%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 71.5%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    6. Taylor expanded in a around 0 63.4%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 1e103 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 67.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 67.4%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 67.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 70.0%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 10^{+103}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot \left(0.5 + a \cdot -0.16666666666666666\right) + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
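As an informal check (not from the report) on the else-branch above: its denominator 2 + b·(1 + b·(0.5 + b/6)) is the degree-3 Taylor polynomial of 1 + e^b, and for the huge b this branch actually receives, both the true quotient 1/(1 + e^b) and the cubic surrogate are vanishingly small:

```python
import math

def b_branch(b):
    # else-branch of Alternative 6; denominator = cubic Taylor polynomial of 1 + e^b
    return 1.0 / (2.0 + b * (1.0 + b * (0.5 + b * 0.16666666666666666)))

# near 0 the cubic tracks 1 + e^b closely
for b in [0.001, 0.01, 0.1]:
    assert math.isclose(b_branch(b), 1.0 / (1.0 + math.exp(b)), rel_tol=1e-4)

# for b around the 1e+103 regime boundary, 1/(1 + e^b) underflows to 0 in
# binary64, and the cubic branch likewise returns a value indistinguishable from 0
assert b_branch(1e103) < 1e-300
```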

Alternative 7: 65.2% accurate, 19.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2.7 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 2.7e+153)
   (/ 1.0 (+ 2.0 (* a (+ (* a 0.5) -1.0))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 2.7e+153) {
		tmp = 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 2.7d+153) then
        tmp = 1.0d0 / (2.0d0 + (a * ((a * 0.5d0) + (-1.0d0))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 2.7e+153) {
		tmp = 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 2.7e+153:
		tmp = 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 2.7e+153)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * 0.5) + -1.0))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 2.7e+153)
		tmp = 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 2.7e+153], N[(1.0 / N[(2.0 + N[(a * N[(N[(a * 0.5), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 2.7 \cdot 10^{+153}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 2.7000000000000001e153

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 76.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 76.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 76.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 99.5%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 99.5%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 99.5%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 68.5%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    6. Taylor expanded in a around 0 55.0%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]

    if 2.7000000000000001e153 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 60.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 60.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 60.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 61.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.7 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing
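Alternative 7 drops both regimes down to quadratic denominators. An informal check (not part of the report) that its a-branch still approximates the original quotient near the origin, just less tightly than the cubic variants:

```python
import math

def alt7(a, b):
    # Alternative 7: quadratic Taylor denominators in both regimes
    if b <= 2.7e+153:
        return 1.0 / (2.0 + a * ((a * 0.5) + -1.0))
    return 1.0 / (2.0 + b * (1.0 + (b * 0.5)))

for a in [0.01, 0.1]:
    exact = math.exp(a) / (math.exp(a) + 1.0)  # the original quotient with b = 0
    assert math.isclose(alt7(a, 0.0), exact, rel_tol=1e-3)
```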

Alternative 8: 53.5% accurate, 27.7× speedup

\[\begin{array}{l} \\ \frac{1}{2 + a \cdot \left(a \cdot 0.5 - -1\right)} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 2.0 (* a (- (* a 0.5) -1.0)))))
double code(double a, double b) {
	return 1.0 / (2.0 + (a * ((a * 0.5) - -1.0)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 + (a * ((a * 0.5d0) - (-1.0d0))))
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 + (a * ((a * 0.5) - -1.0)));
}
def code(a, b):
	return 1.0 / (2.0 + (a * ((a * 0.5) - -1.0)))
function code(a, b)
	return Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * 0.5) - -1.0))))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 + (a * ((a * 0.5) - -1.0)));
end
code[a_, b_] := N[(1.0 / N[(2.0 + N[(a * N[(N[(a * 0.5), $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 + a \cdot \left(a \cdot 0.5 - -1\right)}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 64.9%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  6. Taylor expanded in a around 0 49.1%

    \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]
  7. Step-by-step derivation
    1. sub-neg 49.1%

      \[\leadsto \frac{1}{2 + a \cdot \color{blue}{\left(0.5 \cdot a + \left(-1\right)\right)}} \]
    2. distribute-rgt-in 49.1%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(\left(0.5 \cdot a\right) \cdot a + \left(-1\right) \cdot a\right)}} \]
    3. *-commutative 49.1%

      \[\leadsto \frac{1}{2 + \left(\color{blue}{\left(a \cdot 0.5\right)} \cdot a + \left(-1\right) \cdot a\right)} \]
    4. metadata-eval 49.1%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a + \color{blue}{-1} \cdot a\right)} \]
  8. Applied egg-rr 49.1%

    \[\leadsto \frac{1}{2 + \color{blue}{\left(\left(a \cdot 0.5\right) \cdot a + -1 \cdot a\right)}} \]
  9. Step-by-step derivation
    1. mul-1-neg 49.1%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a + \color{blue}{\left(-a\right)}\right)} \]
    2. unsub-neg 49.1%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(\left(a \cdot 0.5\right) \cdot a - a\right)}} \]
    3. add-sqr-sqrt 19.8%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \color{blue}{\sqrt{a} \cdot \sqrt{a}}\right)} \]
    4. sqrt-unprod 39.3%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \color{blue}{\sqrt{a \cdot a}}\right)} \]
    5. sqr-neg 39.3%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \sqrt{\color{blue}{\left(-a\right) \cdot \left(-a\right)}}\right)} \]
    6. mul-1-neg 39.3%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \sqrt{\color{blue}{\left(-1 \cdot a\right)} \cdot \left(-a\right)}\right)} \]
    7. mul-1-neg 39.3%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \sqrt{\left(-1 \cdot a\right) \cdot \color{blue}{\left(-1 \cdot a\right)}}\right)} \]
    8. sqrt-unprod 28.9%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \color{blue}{\sqrt{-1 \cdot a} \cdot \sqrt{-1 \cdot a}}\right)} \]
    9. add-sqr-sqrt 48.4%

      \[\leadsto \frac{1}{2 + \left(\left(a \cdot 0.5\right) \cdot a - \color{blue}{-1 \cdot a}\right)} \]
    10. distribute-rgt-out-- 48.4%

      \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(a \cdot 0.5 - -1\right)}} \]
    11. *-commutative 48.4%

      \[\leadsto \frac{1}{2 + a \cdot \left(\color{blue}{0.5 \cdot a} - -1\right)} \]
  10. Applied egg-rr 48.4%

    \[\leadsto \frac{1}{2 + \color{blue}{a \cdot \left(0.5 \cdot a - -1\right)}} \]
  11. Final simplification 48.4%

    \[\leadsto \frac{1}{2 + a \cdot \left(a \cdot 0.5 - -1\right)} \]
  12. Add Preprocessing

Alternative 9: 54.0% accurate, 27.7× speedup

\[\begin{array}{l} \\ \frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 2.0 (* a (+ (* a 0.5) -1.0)))))
double code(double a, double b) {
	return 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 + (a * ((a * 0.5d0) + (-1.0d0))))
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
}
def code(a, b):
	return 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)))
function code(a, b)
	return Float64(1.0 / Float64(2.0 + Float64(a * Float64(Float64(a * 0.5) + -1.0))))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 + (a * ((a * 0.5) + -1.0)));
end
code[a_, b_] := N[(1.0 / N[(2.0 + N[(a * N[(N[(a * 0.5), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 64.9%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  6. Taylor expanded in a around 0 49.1%

    \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]
  7. Final simplification 49.1%

    \[\leadsto \frac{1}{2 + a \cdot \left(a \cdot 0.5 + -1\right)} \]
  8. Add Preprocessing
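Alternatives 8 and 9 differ only in a sign: Alternative 8's denominator 2 + a·(a/2 + 1) is the quadratic expansion of 1 + e^a, while Alternative 9's 2 + a·(a/2 − 1) expands 1 + e^(−a). An informal comparison (not from the report, which rates them 53.5% vs 54.0%), with b fixed at 0 so the original quotient equals 1/(1 + e^(−a)):

```python
import math

def alt8(a):
    # Alternative 8: denominator 2 + a + a^2/2, the expansion of 1 + e^a
    return 1.0 / (2.0 + a * ((a * 0.5) - -1.0))

def alt9(a):
    # Alternative 9: denominator 2 - a + a^2/2, the expansion of 1 + e^-a
    return 1.0 / (2.0 + a * ((a * 0.5) + -1.0))

# with b = 0 the original quotient is e^a / (e^a + 1) = 1/(1 + e^-a),
# which is what alt9's denominator expands, so alt9 should be closer
a = 0.1
exact = math.exp(a) / (math.exp(a) + 1.0)
assert abs(alt9(a) - exact) < abs(alt8(a) - exact)
```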

Alternative 10: 39.3% accurate, 61.0× speedup

\[\begin{array}{l} \\ 0.5 + a \cdot 0.25 \end{array} \]
(FPCore (a b) :precision binary64 (+ 0.5 (* a 0.25)))
double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0 + (a * 0.25d0)
end function
public static double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
def code(a, b):
	return 0.5 + (a * 0.25)
function code(a, b)
	return Float64(0.5 + Float64(a * 0.25))
end
function tmp = code(a, b)
	tmp = 0.5 + (a * 0.25);
end
code[a_, b_] := N[(0.5 + N[(a * 0.25), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
0.5 + a \cdot 0.25
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 64.9%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  6. Taylor expanded in a around 0 39.5%

    \[\leadsto \color{blue}{0.5 + 0.25 \cdot a} \]
  7. Final simplification 39.5%

    \[\leadsto 0.5 + a \cdot 0.25 \]
  8. Add Preprocessing
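
The expansion above truncates the simplified form σ(a) = 1/(1 + e^(−a)) at first order: σ(a) = 1/2 + a/4 − a³/48 + …. A small Python sketch (not part of the Herbie output) compares the truncation against the exact form, showing why accuracy drops to roughly 39.5% over the full input range while remaining good near a = 0:

```python
import math

def sigmoid(a):
    # exact form after the div-exp rewrite: 1 / (1 + e^(-a))
    return 1.0 / (1.0 + math.exp(-a))

def linear(a):
    # first-order Taylor expansion around a = 0
    return 0.5 + a * 0.25

for a in (0.0, 0.01, 0.1, 1.0, 4.0):
    print(a, sigmoid(a), linear(a), abs(sigmoid(a) - linear(a)))
```

The error grows like a³/48 for small a, so the approximation is only useful very close to zero.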

Alternative 11: 40.1% accurate, 61.0× speedup?

\[\begin{array}{l} \\ \frac{1}{2 - a} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (- 2.0 a)))
double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 - a)
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
def code(a, b):
	return 1.0 / (2.0 - a)
function code(a, b)
	return Float64(1.0 / Float64(2.0 - a))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 - a);
end
code[a_, b_] := N[(1.0 / N[(2.0 - a), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 - a}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 64.9%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  6. Taylor expanded in a around 0 39.9%

    \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
  7. Step-by-step derivation
    1. mul-1-neg 39.9%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
    2. unsub-neg 39.9%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  8. Simplified 39.9%

    \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  9. Final simplification 39.9%

    \[\leadsto \frac{1}{2 - a} \]
  10. Add Preprocessing
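
Here e^(−a) is replaced by its first-order expansion 1 − a inside the denominator, so 1/(1 + e^(−a)) becomes 1/(2 − a). A Python sketch (not part of the Herbie output) of how closely the two agree near a = 0:

```python
import math

def sigmoid(a):
    # stable rewrite of the original program: 1 / (1 + e^(-a))
    return 1.0 / (1.0 + math.exp(-a))

def alt(a):
    # e^(-a) ~ 1 - a near zero, so the denominator becomes 2 - a
    return 1.0 / (2.0 - a)

for a in (0.0, 0.1, 1.0, 10.0):
    print(a, sigmoid(a), alt(a))
```

The pole at a = 2 makes this form diverge badly away from zero (at a = 10 it is even negative), consistent with the 39.9% overall accuracy.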

Alternative 12: 39.2% accurate, 305.0× speedup?

\[\begin{array}{l} \\ 0.5 \end{array} \]
(FPCore (a b) :precision binary64 0.5)
double code(double a, double b) {
	return 0.5;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0
end function
public static double code(double a, double b) {
	return 0.5;
}
def code(a, b):
	return 0.5
function code(a, b)
	return 0.5
end
function tmp = code(a, b)
	tmp = 0.5;
end
code[a_, b_] := 0.5
\begin{array}{l}

\\
0.5
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 74.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 74.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 64.9%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  6. Taylor expanded in a around 0 39.0%

    \[\leadsto \color{blue}{0.5} \]
  7. Final simplification 39.0%

    \[\leadsto 0.5 \]
  8. Add Preprocessing
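
The zeroth-order expansion keeps only σ(0) = 1/2. The constant is in fact exact on the diagonal a = b, where the original quotient is x/(x + x); since doubling and dividing by a power of two are exact in binary64, the program returns exactly 0.5 there. A Python sketch (not part of the Herbie output):

```python
import math

def original(a, b):
    # e^a / (e^a + e^b), as in the initial program
    return math.exp(a) / (math.exp(a) + math.exp(b))

# On the diagonal a == b the quotient is x / (2x), which binary64
# evaluates exactly as 0.5:
print(original(3.0, 3.0))
print(original(-7.5, -7.5))
```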

Developer target: 100.0% accurate, 2.9× speedup?

\[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
double code(double a, double b) {
	return 1.0 / (1.0 + exp((b - a)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (1.0d0 + exp((b - a)))
end function
public static double code(double a, double b) {
	return 1.0 / (1.0 + Math.exp((b - a)));
}
def code(a, b):
	return 1.0 / (1.0 + math.exp((b - a)))
function code(a, b)
	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
end
function tmp = code(a, b)
	tmp = 1.0 / (1.0 + exp((b - a)));
end
code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{1 + e^{b - a}}
\end{array}
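
The original program computes exp(a) and exp(b) separately, which overflows binary64 once either argument exceeds about 709.8 even though the quotient itself is well within range; the target form only exponentiates the difference b − a. A Python sketch (not part of the Herbie output; Python's math.exp raises OverflowError where C's exp would return inf and the quotient would become inf/inf = NaN):

```python
import math

def original(a, b):
    # e^a / (e^a + e^b): both exponentials overflow for large inputs
    return math.exp(a) / (math.exp(a) + math.exp(b))

def target(a, b):
    # 1 / (1 + e^(b - a)): only the difference is exponentiated
    return 1.0 / (1.0 + math.exp(b - a))

print(original(1.0, 2.0), target(1.0, 2.0))  # agree for moderate inputs

try:
    original(800.0, 800.0)
except OverflowError:
    print("original overflows")

print(target(800.0, 800.0))  # 0.5
```

Note that exp(b − a) can still overflow when b − a exceeds ~709.8, but there the true answer underflows toward 0, and in C the quotient 1/(1 + inf) correctly flushes to 0.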

Reproduce

herbie shell --seed 2024085 
(FPCore (a b)
  :name "Quotient of sum of exps"
  :precision binary64

  :alt
  (/ 1.0 (+ 1.0 (exp (- b a))))

  (/ (exp a) (+ (exp a) (exp b))))