Quotient of sum of exps

Percentage Accurate: 98.9% → 100.0%
Time: 8.2s
Alternatives: 13
Speedup: 2.9×

Specification

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
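The specification is the weight a two-way softmax assigns to `a`; dividing the numerator and denominator by e^a shows it depends only on a - b and equals the logistic sigmoid of that difference. A small Python sketch of this identity (the helper names `spec` and `sigmoid` are ours, for illustration):

```python
import math

def spec(a, b):
    # the specification: e^a / (e^a + e^b)
    return math.exp(a) / (math.exp(a) + math.exp(b))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Dividing through by e^a: e^a / (e^a + e^b) = 1 / (1 + e^(b-a)) = sigmoid(a - b)
for a, b in [(0.0, 0.0), (2.0, -1.0), (-3.0, 1.5)]:
    assert abs(spec(a, b) - sigmoid(a - b)) < 1e-12
```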

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is chosen in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 13 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.9% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
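The initial program's 98.9% accuracy is lost to overflow: binary64 `exp` overflows for arguments above roughly 709.78, and the quotient then evaluates to inf / inf, i.e. NaN, even though the true value is well defined. A sketch of the failure mode (`exp_inf` is our helper; it mimics C's overflow-to-infinity `exp`, since Python's `math.exp` raises `OverflowError` instead):

```python
import math

def exp_inf(x):
    # mimic binary64 C exp(): saturate to +inf on overflow
    try:
        return math.exp(x)
    except OverflowError:
        return math.inf

def naive(a, b):
    # the initial program: e^a / (e^a + e^b)
    return exp_inf(a) / (exp_inf(a) + exp_inf(b))

print(naive(710.0, 710.0))  # nan from inf / (inf + inf); the true value is 0.5
```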

Alternative 1: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{-\mathsf{log1p}\left(e^{b - a}\right)} \end{array} \]
(FPCore (a b) :precision binary64 (exp (- (log1p (exp (- b a))))))
double code(double a, double b) {
	return exp(-log1p(exp((b - a))));
}
public static double code(double a, double b) {
	return Math.exp(-Math.log1p(Math.exp((b - a))));
}
def code(a, b):
	return math.exp(-math.log1p(math.exp((b - a))))
function code(a, b)
	return exp(Float64(-log1p(exp(Float64(b - a)))))
end
code[a_, b_] := N[Exp[(-N[Log[1 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]], $MachinePrecision])], $MachinePrecision]
\begin{array}{l}

\\
e^{-\mathsf{log1p}\left(e^{b - a}\right)}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 72.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. div-exp 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
    2. +-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
    3. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
    4. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
    5. add-exp-log 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
    6. rec-exp 99.6%

      \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
    7. sub-neg 99.6%

      \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
    8. metadata-eval 99.6%

      \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
    9. +-commutative 99.6%

      \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
    10. log1p-define 99.6%

      \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
    11. div-exp 100.0%

      \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
  6. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
  7. Final simplification 100.0%

    \[\leadsto e^{-\mathsf{log1p}\left(e^{b - a}\right)} \]
  8. Add Preprocessing
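Alternative 1 sidesteps the overflow in the naive quotient: only e^(b-a) can overflow, and an infinite intermediate flows through `log1p` and the outer `exp` to the correct limit (0) instead of producing NaN. A sketch of the rewritten form (`exp_sat` is our helper, added to mimic C's overflow-to-infinity `exp`):

```python
import math

def exp_sat(x):
    # mimic binary64 C exp(): saturate to +inf on overflow
    try:
        return math.exp(x)
    except OverflowError:
        return math.inf

def stable(a, b):
    # Alternative 1: e^(-log1p(e^(b - a)))
    return math.exp(-math.log1p(exp_sat(b - a)))

print(stable(710.0, 710.0))     # ~0.5, where the naive quotient gives nan
print(stable(1000.0, -1000.0))  # 1.0: e^(b-a) underflows to 0, log1p(0) = 0
print(stable(-1000.0, 1000.0))  # 0.0: e^(b-a) overflows to inf, exp(-inf) = 0
```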

Alternative 2: 98.7% accurate, 1.4× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0.998:\\ \;\;\;\;\frac{1}{e^{-a} - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.998) (/ 1.0 (- (exp (- a)) -1.0)) (/ 1.0 (- (exp b) -1.0))))
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.998) {
		tmp = 1.0 / (exp(-a) - -1.0);
	} else {
		tmp = 1.0 / (exp(b) - -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (exp(a) <= 0.998d0) then
        tmp = 1.0d0 / (exp(-a) - (-1.0d0))
    else
        tmp = 1.0d0 / (exp(b) - (-1.0d0))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (Math.exp(a) <= 0.998) {
		tmp = 1.0 / (Math.exp(-a) - -1.0);
	} else {
		tmp = 1.0 / (Math.exp(b) - -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if math.exp(a) <= 0.998:
		tmp = 1.0 / (math.exp(-a) - -1.0)
	else:
		tmp = 1.0 / (math.exp(b) - -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.998)
		tmp = Float64(1.0 / Float64(exp(Float64(-a)) - -1.0));
	else
		tmp = Float64(1.0 / Float64(exp(b) - -1.0));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (exp(a) <= 0.998)
		tmp = 1.0 / (exp(-a) - -1.0);
	else
		tmp = 1.0 / (exp(b) - -1.0);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.998], N[(1.0 / N[(N[Exp[(-a)], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(N[Exp[b], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0.998:\\
\;\;\;\;\frac{1}{e^{-a} - -1}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{e^{b} - -1}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.998

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 4.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 4.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 98.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 98.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 98.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]

    if 0.998 < (exp.f64 a)

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 99.1%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.3%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0.998:\\ \;\;\;\;\frac{1}{e^{-a} - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \]
  5. Add Preprocessing
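Alternative 2 trades accuracy for speed: its else branch drops `a` entirely (the Taylor expansion in a around 0), so it tracks the specification only when a is near zero. A sketch comparing the two on a point inside and a point outside that regime (`spec` and `alt2` restate the listings above):

```python
import math

def spec(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt2(a, b):
    # Alternative 2: regimes split on exp(a) <= 0.998, i.e. a <= log(0.998)
    if math.exp(a) <= 0.998:
        return 1.0 / (math.exp(-a) - -1.0)
    return 1.0 / (math.exp(b) - -1.0)

print(abs(alt2(0.001, 2.0) - spec(0.001, 2.0)))  # tiny: a is near 0
print(abs(alt2(5.0, 2.0) - spec(5.0, 2.0)))      # large: the else branch ignores a
```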

Alternative 3: 98.3% accurate, 1.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0.998:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.998) (/ (exp a) 2.0) (/ 1.0 (- (exp b) -1.0))))
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.998) {
		tmp = exp(a) / 2.0;
	} else {
		tmp = 1.0 / (exp(b) - -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (exp(a) <= 0.998d0) then
        tmp = exp(a) / 2.0d0
    else
        tmp = 1.0d0 / (exp(b) - (-1.0d0))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (Math.exp(a) <= 0.998) {
		tmp = Math.exp(a) / 2.0;
	} else {
		tmp = 1.0 / (Math.exp(b) - -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if math.exp(a) <= 0.998:
		tmp = math.exp(a) / 2.0
	else:
		tmp = 1.0 / (math.exp(b) - -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.998)
		tmp = Float64(exp(a) / 2.0);
	else
		tmp = Float64(1.0 / Float64(exp(b) - -1.0));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (exp(a) <= 0.998)
		tmp = exp(a) / 2.0;
	else
		tmp = 1.0 / (exp(b) - -1.0);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.998], N[(N[Exp[a], $MachinePrecision] / 2.0), $MachinePrecision], N[(1.0 / N[(N[Exp[b], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0.998:\\
\;\;\;\;\frac{e^{a}}{2}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{e^{b} - -1}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.998

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{1 + e^{a}}} \]
    4. Taylor expanded in a around 0 96.9%

      \[\leadsto \frac{e^{a}}{\color{blue}{2}} \]

    if 0.998 < (exp.f64 a)

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 99.1%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 98.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0.998:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \]
  5. Add Preprocessing
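The first branch of Alternative 3 comes from two successive Taylor expansions: first in b around 0, giving e^a / (1 + e^a), then in a around 0, replacing the denominator 1 + e^a with 2. A quick numeric look at how the second step degrades as |a| grows:

```python
import math

# e^a / (e^a + e^b)  --(b near 0)-->  e^a / (1 + e^a)  --(a near 0)-->  e^a / 2
for a in (0.0, 0.1, 0.5):
    full = math.exp(a) / (1.0 + math.exp(a))
    approx = math.exp(a) / 2.0
    print(a, abs(full - approx))  # error grows with |a|
```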

Alternative 4: 91.5% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq -920:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.05 \cdot 10^{+103}:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b -920.0)
   (+ 1.0 (exp b))
   (if (<= b 1.05e+103)
     (/ (exp a) 2.0)
     (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666))))))))))
double code(double a, double b) {
	double tmp;
	if (b <= -920.0) {
		tmp = 1.0 + exp(b);
	} else if (b <= 1.05e+103) {
		tmp = exp(a) / 2.0;
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= (-920.0d0)) then
        tmp = 1.0d0 + exp(b)
    else if (b <= 1.05d+103) then
        tmp = exp(a) / 2.0d0
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= -920.0) {
		tmp = 1.0 + Math.exp(b);
	} else if (b <= 1.05e+103) {
		tmp = Math.exp(a) / 2.0;
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= -920.0:
		tmp = 1.0 + math.exp(b)
	elif b <= 1.05e+103:
		tmp = math.exp(a) / 2.0
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= -920.0)
		tmp = Float64(1.0 + exp(b));
	elseif (b <= 1.05e+103)
		tmp = Float64(exp(a) / 2.0);
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= -920.0)
		tmp = 1.0 + exp(b);
	elseif (b <= 1.05e+103)
		tmp = exp(a) / 2.0;
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, -920.0], N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision], If[LessEqual[b, 1.05e+103], N[(N[Exp[a], $MachinePrecision] / 2.0), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq -920:\\
\;\;\;\;1 + e^{b}\\

\mathbf{elif}\;b \leq 1.05 \cdot 10^{+103}:\\
\;\;\;\;\frac{e^{a}}{2}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if b < -920

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
      2. +-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
      3. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
      4. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
      5. add-exp-log 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
      6. rec-exp 100.0%

        \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
      7. sub-neg 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
      8. metadata-eval 100.0%

        \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
      9. +-commutative 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
      10. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
      11. div-exp 100.0%

        \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
    6. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
    7. Taylor expanded in a around 0 100.0%

      \[\leadsto e^{-\color{blue}{\log \left(1 + e^{b}\right)}} \]
    8. Step-by-step derivation
      1. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    9. Simplified 100.0%

      \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    10. Step-by-step derivation
      1. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{-\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{-\mathsf{log1p}\left(e^{b}\right)}}} \]
      2. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\left(-\mathsf{log1p}\left(e^{b}\right)\right) \cdot \left(-\mathsf{log1p}\left(e^{b}\right)\right)}}} \]
      3. sqr-neg 100.0%

        \[\leadsto e^{\sqrt{\color{blue}{\mathsf{log1p}\left(e^{b}\right) \cdot \mathsf{log1p}\left(e^{b}\right)}}} \]
      4. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{\mathsf{log1p}\left(e^{b}\right)}}} \]
      5. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
      6. log1p-undefine 100.0%

        \[\leadsto e^{\color{blue}{\log \left(1 + e^{b}\right)}} \]
      7. rem-exp-log 100.0%

        \[\leadsto \color{blue}{1 + e^{b}} \]
      8. +-commutative 100.0%

        \[\leadsto \color{blue}{e^{b} + 1} \]
    11. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{b} + 1} \]

    if -920 < b < 1.0500000000000001e103

    1. Initial program 99.4%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0 91.2%

      \[\leadsto \color{blue}{\frac{e^{a}}{1 + e^{a}}} \]
    4. Taylor expanded in a around 0 88.8%

      \[\leadsto \frac{e^{a}}{\color{blue}{2}} \]

    if 1.0500000000000001e103 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 56.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 56.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 92.8%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq -920:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.05 \cdot 10^{+103}:\\ \;\;\;\;\frac{e^{a}}{2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
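The polynomial in Alternative 4's final branch is the degree-3 Taylor series of the denominator 1 + e^b, that is 2 + b + b²/2 + b³/6, evaluated in Horner form. Its reciprocal therefore approximates 1/(1 + e^b) well only for small b. A sketch (the name `horner_denom` is ours):

```python
import math

def horner_denom(b):
    # Horner form of 2 + b + b^2/2 + b^3/6, the cubic Taylor series of 1 + e^b
    return 2.0 + b * (1.0 + b * (0.5 + b * 0.16666666666666666))

for b in (0.0, 0.25, 1.0):
    print(b, abs(horner_denom(b) - (1.0 + math.exp(b))))  # error grows with |b|
```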

Alternative 5: 85.1% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq -1.7:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 2.7 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b -1.7)
   (+ 1.0 (exp b))
   (if (<= b 2.7e+77)
     (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))))
     (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666))))))))))
double code(double a, double b) {
	double tmp;
	if (b <= -1.7) {
		tmp = 1.0 + exp(b);
	} else if (b <= 2.7e+77) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= (-1.7d0)) then
        tmp = 1.0d0 + exp(b)
    else if (b <= 2.7d+77) then
        tmp = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0)))))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= -1.7) {
		tmp = 1.0 + Math.exp(b);
	} else if (b <= 2.7e+77) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= -1.7:
		tmp = 1.0 + math.exp(b)
	elif b <= 2.7e+77:
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= -1.7)
		tmp = Float64(1.0 + exp(b));
	elseif (b <= 2.7e+77)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= -1.7)
		tmp = 1.0 + exp(b);
	elseif (b <= 2.7e+77)
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, -1.7], N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision], If[LessEqual[b, 2.7e+77], N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq -1.7:\\
\;\;\;\;1 + e^{b}\\

\mathbf{elif}\;b \leq 2.7 \cdot 10^{+77}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if b < -1.69999999999999996

    1. Initial program 98.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 98.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 98.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 98.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 98.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 98.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 98.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 98.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 98.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 98.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. div-exp 98.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
      2. +-commutative 98.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
      3. metadata-eval 98.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
      4. sub-neg 98.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
      5. add-exp-log 98.0%

        \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
      6. rec-exp 98.0%

        \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
      7. sub-neg 98.0%

        \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
      8. metadata-eval 98.0%

        \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
      9. +-commutative 98.0%

        \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
      10. log1p-define 98.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
      11. div-exp 100.0%

        \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
    6. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
    7. Taylor expanded in a around 0 98.1%

      \[\leadsto e^{-\color{blue}{\log \left(1 + e^{b}\right)}} \]
    8. Step-by-step derivation
      1. log1p-define 98.1%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    9. Simplified 98.1%

      \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    10. Step-by-step derivation
      1. add-sqr-sqrt 96.1%

        \[\leadsto e^{\color{blue}{\sqrt{-\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{-\mathsf{log1p}\left(e^{b}\right)}}} \]
      2. sqrt-unprod 97.5%

        \[\leadsto e^{\color{blue}{\sqrt{\left(-\mathsf{log1p}\left(e^{b}\right)\right) \cdot \left(-\mathsf{log1p}\left(e^{b}\right)\right)}}} \]
      3. sqr-neg 97.5%

        \[\leadsto e^{\sqrt{\color{blue}{\mathsf{log1p}\left(e^{b}\right) \cdot \mathsf{log1p}\left(e^{b}\right)}}} \]
      4. sqrt-unprod 97.5%

        \[\leadsto e^{\color{blue}{\sqrt{\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{\mathsf{log1p}\left(e^{b}\right)}}} \]
      5. add-sqr-sqrt 97.5%

        \[\leadsto e^{\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
      6. log1p-undefine 97.5%

        \[\leadsto e^{\color{blue}{\log \left(1 + e^{b}\right)}} \]
      7. rem-exp-log 97.5%

        \[\leadsto \color{blue}{1 + e^{b}} \]
      8. +-commutative 97.5%

        \[\leadsto \color{blue}{e^{b} + 1} \]
    11. Applied egg-rr 97.5%

      \[\leadsto \color{blue}{e^{b} + 1} \]

    if -1.69999999999999996 < b < 2.6999999999999998e77

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 99.9%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 99.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 99.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 68.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 68.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.9%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 93.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 93.2%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 93.2%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 80.5%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 2.6999999999999998e77 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 58.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 92.2%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 92.2%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 92.2%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 86.0%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq -1.7:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 2.7 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
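The middle regime above replaces the quotient with a cubic polynomial in a, obtained by Taylor expanding first in b and then in a around 0. A minimal sketch (hypothetical check code, not part of the Herbie output) shows what this implies: the cubic tracks the exact quotient only when both a and b are small, and degrades away from 0, which is consistent with the accuracy figures reported for these Taylor steps.

```python
import math

def exact(a, b):
    # the original specification: exp(a) / (exp(a) + exp(b))
    return math.exp(a) / (math.exp(a) + math.exp(b))

def cubic(a):
    # middle-regime polynomial: 1 / (2 + a*(-1 + a*(0.5 - a/6)))
    return 1.0 / (2.0 + a * (-1.0 + a * (0.5 + a * -0.16666666666666666)))

# Near a = 0, b = 0 the cubic matches the quotient closely ...
for a in (-0.01, 0.0, 0.01):
    assert abs(cubic(a) - exact(a, 0.0)) < 1e-8

# ... but far from 0 it breaks down entirely (here it goes negative,
# while the true quotient always lies in (0, 1)):
assert not 0.0 < cubic(5.0) < 1.0
```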

Alternative 6: 100.0% accurate, 2.9× speedup?

\[\begin{array}{l} \\ \frac{1}{e^{b - a} + 1} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ (exp (- b a)) 1.0)))
double code(double a, double b) {
	return 1.0 / (exp((b - a)) + 1.0);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (exp((b - a)) + 1.0d0)
end function
public static double code(double a, double b) {
	return 1.0 / (Math.exp((b - a)) + 1.0);
}
def code(a, b):
	return 1.0 / (math.exp((b - a)) + 1.0)
function code(a, b)
	return Float64(1.0 / Float64(exp(Float64(b - a)) + 1.0))
end
function tmp = code(a, b)
	tmp = 1.0 / (exp((b - a)) + 1.0);
end
code[a_, b_] := N[(1.0 / N[(N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{e^{b - a} + 1}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 72.2%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Final simplification 100.0%

    \[\leadsto \frac{1}{e^{b - a} + 1} \]
  6. Add Preprocessing
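Alternative 6 is the classic numerically stable rewrite of this quotient: dividing numerator and denominator by e^a yields 1/(e^{b-a} + 1), so no exponential of a large input is ever formed. A minimal sketch (hypothetical check code, not part of the report) contrasts it with the original program:

```python
import math

def naive(a, b):
    # original program: exp(a) / (exp(a) + exp(b))
    return math.exp(a) / (math.exp(a) + math.exp(b))

def stable(a, b):
    # Alternative 6: 1 / (exp(b - a) + 1)
    return 1.0 / (math.exp(b - a) + 1.0)

# Where the naive form is representable, the two agree closely:
assert abs(naive(1.0, 2.0) - stable(1.0, 2.0)) < 1e-12

# For large inputs, exp(a) overflows (Python's math.exp raises
# OverflowError; C doubles would give inf / inf = NaN), even though
# the true result is an ordinary number:
try:
    naive(1000.0, 999.0)
    raised = False
except OverflowError:
    raised = True
assert raised
assert abs(stable(1000.0, 999.0) - 0.7310585786300049) < 1e-12
```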

Alternative 7: 67.0% accurate, 15.2× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 1.25 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 1.25e+153)
   (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 1.25e+153) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 1.25d+153) then
        tmp = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0)))))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 1.25e+153) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 1.25e+153:
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 1.25e+153)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 1.25e+153)
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 1.25e+153], N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 1.25 \cdot 10^{+153}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 1.25000000000000005e153

    1. Initial program 99.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 99.5%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 74.8%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 74.8%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 74.8%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 63.8%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 1.25000000000000005e153 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 68.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 68.3%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 1.25 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing

Alternative 8: 69.8% accurate, 15.2× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2.7 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 2.7e+77)
   (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666)))))))))
double code(double a, double b) {
	double tmp;
	if (b <= 2.7e+77) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 2.7d+77) then
        tmp = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0)))))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 2.7e+77) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 2.7e+77:
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 2.7e+77)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 2.7e+77)
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 2.7e+77], N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 2.7 \cdot 10^{+77}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 2.6999999999999998e77

    1. Initial program 99.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 99.5%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 75.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 75.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 75.7%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 75.7%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 75.7%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 65.6%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 2.6999999999999998e77 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 58.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 58.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 92.2%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 92.2%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 92.2%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 70.6%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.7 \cdot 10^{+77}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
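Alternative 8 earns its 15.2× speedup by eliminating the exp calls, but its b ≤ 2.7·10⁷⁷ branch evaluates a polynomial in a alone, discarding b. A minimal sketch (hypothetical check code, not part of the report) makes the resulting error concrete and illustrates why this alternative is only 69.8% accurate:

```python
import math

def exact(a, b):
    # original specification: exp(a) / (exp(a) + exp(b))
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt8(a, b):
    # Alternative 8 as generated above
    if b <= 2.7e+77:
        return 1.0 / (2.0 + a * (-1.0 + a * (0.5 + a * -0.16666666666666666)))
    return 1.0 / (2.0 + b * (1.0 + b * (0.5 + b * 0.16666666666666666)))

# In the b <= 2.7e77 branch the result depends only on a, so even a
# moderate b produces a large error against the true quotient:
# alt8(0, 5) = 0.5, while exact(0, 5) = 1 / (1 + e^5) is about 0.0067.
assert abs(alt8(0.0, 5.0) - exact(0.0, 5.0)) > 0.4
```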

Alternative 9: 63.4% accurate, 19.0× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 1.95 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 1.95e+153)
   (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a 0.5)))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 1.95e+153) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 1.95d+153) then
        tmp = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * 0.5d0))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 1.95e+153) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 1.95e+153:
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 1.95e+153)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * 0.5)))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 1.95e+153)
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 1.95e+153], N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 1.95 \cdot 10^{+153}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 1.94999999999999992e153

    1. Initial program 99.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 99.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 99.5%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 99.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 74.8%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 74.8%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 74.8%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 60.4%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]

    if 1.94999999999999992e153 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 68.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 68.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 100.0%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 65.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 1.95 \cdot 10^{+153}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing

Alternative 10: 52.5% accurate, 27.7× speedup

\[\begin{array}{l} \\ \frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a 0.5))))))
double code(double a, double b) {
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * 0.5d0))))
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
}
def code(a, b):
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))))
function code(a, b)
	return Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * 0.5)))))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
end
code[a_, b_] := N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 69.6%

    \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
  6. Step-by-step derivation
    1. rec-exp 69.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  7. Simplified 69.6%

    \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  8. Taylor expanded in a around 0 55.6%

    \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]
  9. Final simplification 55.6%

    \[\leadsto \frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)} \]
  10. Add Preprocessing
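
Alternative 10 drops b entirely and keeps only the quadratic Taylor expansion in a, which explains its low overall accuracy: it is a good approximation only when both inputs are near 0. A quick comparison (an illustrative sketch; the harness is ours, the two expressions come from this report):

```python
import math

def exact(a, b):
    # Original program: e^a / (e^a + e^b)
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt10(a, b):
    # Alternative 10: note that b does not appear in the expression at all
    return 1.0 / (2.0 + a * (-1.0 + a * 0.5))

print(exact(0.1, 0.0), alt10(0.1, 0.0))  # close near the expansion point
print(exact(0.1, 1.0), alt10(0.1, 1.0))  # diverges once b leaves 0
```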

Alternative 11: 39.0% accurate, 61.0× speedup

\[\begin{array}{l} \\ 0.5 + a \cdot 0.25 \end{array} \]
(FPCore (a b) :precision binary64 (+ 0.5 (* a 0.25)))
double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0 + (a * 0.25d0)
end function
public static double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
def code(a, b):
	return 0.5 + (a * 0.25)
function code(a, b)
	return Float64(0.5 + Float64(a * 0.25))
end
function tmp = code(a, b)
	tmp = 0.5 + (a * 0.25);
end
code[a_, b_] := N[(0.5 + N[(a * 0.25), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
0.5 + a \cdot 0.25
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 69.6%

    \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
  6. Step-by-step derivation
    1. rec-exp 69.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  7. Simplified 69.6%

    \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  8. Taylor expanded in a around 0 41.3%

    \[\leadsto \color{blue}{0.5 + 0.25 \cdot a} \]
  9. Step-by-step derivation
    1. *-commutative 41.3%

      \[\leadsto 0.5 + \color{blue}{a \cdot 0.25} \]
  10. Simplified 41.3%

    \[\leadsto \color{blue}{0.5 + a \cdot 0.25} \]
  11. Final simplification 41.3%

    \[\leadsto 0.5 + a \cdot 0.25 \]
  12. Add Preprocessing

Alternative 12: 39.7% accurate, 61.0× speedup

\[\begin{array}{l} \\ \frac{1}{2 - a} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (- 2.0 a)))
double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 - a)
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
def code(a, b):
	return 1.0 / (2.0 - a)
function code(a, b)
	return Float64(1.0 / Float64(2.0 - a))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 - a);
end
code[a_, b_] := N[(1.0 / N[(2.0 - a), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 - a}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 69.6%

    \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
  6. Step-by-step derivation
    1. rec-exp 69.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  7. Simplified 69.6%

    \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  8. Taylor expanded in a around 0 42.0%

    \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
  9. Step-by-step derivation
    1. neg-mul-1 42.0%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
    2. unsub-neg 42.0%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  10. Simplified 42.0%

    \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  11. Final simplification 42.0%

    \[\leadsto \frac{1}{2 - a} \]
  12. Add Preprocessing

Alternative 13: 38.9% accurate, 305.0× speedup

\[\begin{array}{l} \\ 0.5 \end{array} \]
(FPCore (a b) :precision binary64 0.5)
double code(double a, double b) {
	return 0.5;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0
end function
public static double code(double a, double b) {
	return 0.5;
}
def code(a, b):
	return 0.5
function code(a, b)
	return 0.5
end
function tmp = code(a, b)
	tmp = 0.5;
end
code[a_, b_] := 0.5
\begin{array}{l}

\\
0.5
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 99.6%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 72.2%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in a around 0 81.3%

    \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  6. Taylor expanded in b around 0 41.2%

    \[\leadsto \color{blue}{0.5} \]
  7. Final simplification 41.2%

    \[\leadsto 0.5 \]
  8. Add Preprocessing

Developer target: 100.0% accurate, 2.9× speedup

\[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
double code(double a, double b) {
	return 1.0 / (1.0 + exp((b - a)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (1.0d0 + exp((b - a)))
end function
public static double code(double a, double b) {
	return 1.0 / (1.0 + Math.exp((b - a)));
}
def code(a, b):
	return 1.0 / (1.0 + math.exp((b - a)))
function code(a, b)
	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
end
function tmp = code(a, b)
	tmp = 1.0 / (1.0 + exp((b - a)));
end
code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{1 + e^{b - a}}
\end{array}
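
The developer target is the standard numerically stable form: rewriting e^a / (e^a + e^b) as 1 / (1 + e^{b-a}) means only the difference b - a is exponentiated, so the quotient survives inputs where exp(a) and exp(b) individually overflow. A small demonstration (an illustrative sketch mirroring the Python listing above):

```python
import math

def naive(a, b):
    # Original program: overflows once exp(a) or exp(b) exceeds the binary64 range
    return math.exp(a) / (math.exp(a) + math.exp(b))

def stable(a, b):
    # Developer target: only the difference b - a is exponentiated
    return 1.0 / (1.0 + math.exp(b - a))

print(stable(710.0, 709.0))  # fine: 1 / (1 + e^-1)
try:
    naive(710.0, 709.0)      # math.exp(710) is out of binary64 range
except OverflowError as err:
    print("naive form overflows:", err)
```

One caveat: in C, `exp` of a huge positive difference returns +inf and the quotient correctly underflows to 0, whereas Python's `math.exp` raises `OverflowError` instead, so extreme differences still need a guard there.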

Reproduce

herbie shell --seed 2024096 
(FPCore (a b)
  :name "Quotient of sum of exps"
  :precision binary64

  :alt
  (/ 1.0 (+ 1.0 (exp (- b a))))

  (/ (exp a) (+ (exp a) (exp b))))