Quotient of sum of exps

Percentage Accurate: 98.9% → 100.0%
Time: 5.9s
Alternatives: 14
Speedup: 2.9×

Specification

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
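The specification translates directly into code, but both calls to exp overflow once an input passes roughly 709.78 (the largest x for which e^x is finite in binary64), even though the true quotient always lies strictly between 0 and 1. A minimal Python sketch of that failure mode (the name `naive` is ours):

```python
import math

def naive(a, b):
    # Direct transcription of the FPCore specification.
    return math.exp(a) / (math.exp(a) + math.exp(b))

# Moderate inputs are fine: the true value at a == b is exactly 0.5.
mid = naive(0.0, 0.0)

# But for a beyond ~709.78, exp(a) exceeds the binary64 range and Python's
# math.exp raises OverflowError, even though the true quotient is ~1.
try:
    naive(800.0, 0.0)
    overflowed = False
except OverflowError:
    overflowed = True
```

The alternatives below avoid this by rearranging the expression so that only the difference b - a is exponentiated.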

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable; that variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 14 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.9% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}

Alternative 1: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{-\mathsf{log1p}\left(e^{b - a}\right)} \end{array} \]
(FPCore (a b) :precision binary64 (exp (- (log1p (exp (- b a))))))
double code(double a, double b) {
	return exp(-log1p(exp((b - a))));
}
public static double code(double a, double b) {
	return Math.exp(-Math.log1p(Math.exp((b - a))));
}
def code(a, b):
	return math.exp(-math.log1p(math.exp((b - a))))
function code(a, b)
	return exp(Float64(-log1p(exp(Float64(b - a)))))
end
code[a_, b_] := N[Exp[(-N[Log[1 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]], $MachinePrecision])], $MachinePrecision]
\begin{array}{l}

\\
e^{-\mathsf{log1p}\left(e^{b - a}\right)}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Step-by-step derivation
    1. add-log-exp 99.6%

      \[\leadsto \color{blue}{\log \left(e^{\frac{1}{1 + e^{b - a}}}\right)} \]
    2. *-un-lft-identity 99.6%

      \[\leadsto \log \color{blue}{\left(1 \cdot e^{\frac{1}{1 + e^{b - a}}}\right)} \]
    3. log-prod 99.6%

      \[\leadsto \color{blue}{\log 1 + \log \left(e^{\frac{1}{1 + e^{b - a}}}\right)} \]
    4. metadata-eval 99.6%

      \[\leadsto \color{blue}{0} + \log \left(e^{\frac{1}{1 + e^{b - a}}}\right) \]
    5. add-log-exp 100.0%

      \[\leadsto 0 + \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    6. add-exp-log 100.0%

      \[\leadsto 0 + \color{blue}{e^{\log \left(\frac{1}{1 + e^{b - a}}\right)}} \]
    7. log-rec 100.0%

      \[\leadsto 0 + e^{\color{blue}{-\log \left(1 + e^{b - a}\right)}} \]
    8. log1p-udef 100.0%

      \[\leadsto 0 + e^{-\color{blue}{\mathsf{log1p}\left(e^{b - a}\right)}} \]
  5. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{0 + e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
  6. Step-by-step derivation
    1. +-lft-identity 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
  7. Simplified 100.0%

    \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
  8. Final simplification 100.0%

    \[\leadsto e^{-\mathsf{log1p}\left(e^{b - a}\right)} \]
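As a sanity check, Alternative 1 can be compared numerically against the original program on inputs where both are representable; at extremes where the original overflows, the rewritten form still returns the correctly rounded answer. A small Python sketch (function names are ours):

```python
import math

def original(a, b):
    # The initial program as given in the report.
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alternative1(a, b):
    # Herbie's Alternative 1: exp(-log1p(exp(b - a))).
    return math.exp(-math.log1p(math.exp(b - a)))

# The two forms agree closely on a moderate input:
err = abs(original(1.0, 2.0) - alternative1(1.0, 2.0))

# At a = 800 the original overflows, but here exp(b - a) simply
# underflows to 0.0 and the result is exactly 1.0:
extreme = alternative1(800.0, 0.0)
```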

Alternative 2: 98.4% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -0.49:\\ \;\;\;\;\frac{1}{1 + e^{-a}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -0.49) (/ 1.0 (+ 1.0 (exp (- a)))) (/ 1.0 (+ 1.0 (exp b)))))
double code(double a, double b) {
	double tmp;
	if (a <= -0.49) {
		tmp = 1.0 / (1.0 + exp(-a));
	} else {
		tmp = 1.0 / (1.0 + exp(b));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-0.49d0)) then
        tmp = 1.0d0 / (1.0d0 + exp(-a))
    else
        tmp = 1.0d0 / (1.0d0 + exp(b))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -0.49) {
		tmp = 1.0 / (1.0 + Math.exp(-a));
	} else {
		tmp = 1.0 / (1.0 + Math.exp(b));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -0.49:
		tmp = 1.0 / (1.0 + math.exp(-a))
	else:
		tmp = 1.0 / (1.0 + math.exp(b))
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -0.49)
		tmp = Float64(1.0 / Float64(1.0 + exp(Float64(-a))));
	else
		tmp = Float64(1.0 / Float64(1.0 + exp(b)));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -0.49)
		tmp = 1.0 / (1.0 + exp(-a));
	else
		tmp = 1.0 / (1.0 + exp(b));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -0.49], N[(1.0 / N[(1.0 + N[Exp[(-a)], $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -0.49:\\
\;\;\;\;\frac{1}{1 + e^{-a}}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{1 + e^{b}}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -0.48999999999999999

    1. Initial program 98.7%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.7%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 98.7%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 98.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 98.7%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 98.7%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 1.3%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 1.3%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 98.7%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]

    if -0.48999999999999999 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 100.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 98.6%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.0%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -0.49:\\ \;\;\;\;\frac{1}{1 + e^{-a}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \]
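Each branch of Alternative 2 is a Taylor truncation that drops one of the inputs, which is where the lost accuracy (98.4%) comes from: a branch matches the exact quotient when its dropped variable is near 0 and drifts as that variable grows. A rough Python illustration (function names are ours; `exact` uses the 1/(1 + e^(b-a)) form from step 3 of the derivation):

```python
import math

def alternative2(a, b):
    # Branch on a; each arm is a Taylor expansion that drops one input.
    if a <= -0.49:
        return 1.0 / (1.0 + math.exp(-a))   # b expanded around 0
    return 1.0 / (1.0 + math.exp(b))        # a expanded around 0

def exact(a, b):
    # Equivalent rearrangement of the original program.
    return 1.0 / (1.0 + math.exp(b - a))

# When the dropped variable really is 0, the branch is exact:
good = abs(alternative2(-1.0, 0.0) - exact(-1.0, 0.0))

# When it is not, the error becomes large:
bad = abs(alternative2(-1.0, 5.0) - exact(-1.0, 5.0))
```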

Alternative 3: 98.5% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -730:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -730.0) (/ (exp a) a) (/ 1.0 (+ 1.0 (exp b)))))
double code(double a, double b) {
	double tmp;
	if (a <= -730.0) {
		tmp = exp(a) / a;
	} else {
		tmp = 1.0 / (1.0 + exp(b));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-730.0d0)) then
        tmp = exp(a) / a
    else
        tmp = 1.0d0 / (1.0d0 + exp(b))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -730.0) {
		tmp = Math.exp(a) / a;
	} else {
		tmp = 1.0 / (1.0 + Math.exp(b));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -730.0:
		tmp = math.exp(a) / a
	else:
		tmp = 1.0 / (1.0 + math.exp(b))
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -730.0)
		tmp = Float64(exp(a) / a);
	else
		tmp = Float64(1.0 / Float64(1.0 + exp(b)));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -730.0)
		tmp = exp(a) / a;
	else
		tmp = 1.0 / (1.0 + exp(b));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -730.0], N[(N[Exp[a], $MachinePrecision] / a), $MachinePrecision], N[(1.0 / N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -730:\\
\;\;\;\;\frac{e^{a}}{a}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{1 + e^{b}}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -730

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Taylor expanded in b around 0 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{1 + e^{a}}} \]
    3. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{2 + a}} \]
    4. Step-by-step derivation
      1. +-commutative 100.0%

        \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    5. Simplified 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    6. Taylor expanded in a around inf 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{a}} \]

    if -730 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 100.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 98.1%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 98.6%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -730:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{1 + e^{b}}\\ \end{array} \]

Alternative 4: 100.0% accurate, 2.9× speedup

\[\begin{array}{l} \\ \frac{1}{e^{b - a} + 1} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ (exp (- b a)) 1.0)))
double code(double a, double b) {
	return 1.0 / (exp((b - a)) + 1.0);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (exp((b - a)) + 1.0d0)
end function
public static double code(double a, double b) {
	return 1.0 / (Math.exp((b - a)) + 1.0);
}
def code(a, b):
	return 1.0 / (math.exp((b - a)) + 1.0)
function code(a, b)
	return Float64(1.0 / Float64(exp(Float64(b - a)) + 1.0))
end
function tmp = code(a, b)
	tmp = 1.0 / (exp((b - a)) + 1.0);
end
code[a_, b_] := N[(1.0 / N[(N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{e^{b - a} + 1}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Final simplification 100.0%

    \[\leadsto \frac{1}{e^{b - a} + 1} \]
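Alternative 4 is the original program with numerator and denominator divided by e^a, so only the difference b - a is ever exponentiated. A short Python sketch (the name `alternative4` is ours):

```python
import math

def alternative4(a, b):
    # Herbie's Alternative 4: divide through by exp(a) first.
    return 1.0 / (math.exp(b - a) + 1.0)

r_mid = alternative4(0.0, 0.0)    # exactly 0.5
r_hi = alternative4(800.0, 0.0)   # exp(-800) underflows to 0.0, giving 1.0
```

One porting caveat: for b - a above roughly 709.78, IEEE arithmetic takes exp to +inf and the quotient correctly flushes to 0, but Python's math.exp raises OverflowError there, so a literal Python port may need a guard for that case.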

Alternative 5: 72.1% accurate, 2.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -720:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -720.0) (/ (exp a) a) (/ 1.0 (+ (+ b 2.0) (* 0.5 (* b b))))))
double code(double a, double b) {
	double tmp;
	if (a <= -720.0) {
		tmp = exp(a) / a;
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-720.0d0)) then
        tmp = exp(a) / a
    else
        tmp = 1.0d0 / ((b + 2.0d0) + (0.5d0 * (b * b)))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -720.0) {
		tmp = Math.exp(a) / a;
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -720.0:
		tmp = math.exp(a) / a
	else:
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)))
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -720.0)
		tmp = Float64(exp(a) / a);
	else
		tmp = Float64(1.0 / Float64(Float64(b + 2.0) + Float64(0.5 * Float64(b * b))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -720.0)
		tmp = exp(a) / a;
	else
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -720.0], N[(N[Exp[a], $MachinePrecision] / a), $MachinePrecision], N[(1.0 / N[(N[(b + 2.0), $MachinePrecision] + N[(0.5 * N[(b * b), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -720:\\
\;\;\;\;\frac{e^{a}}{a}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -720

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Taylor expanded in b around 0 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{1 + e^{a}}} \]
    3. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{2 + a}} \]
    4. Step-by-step derivation
      1. +-commutative 100.0%

        \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    5. Simplified 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    6. Taylor expanded in a around inf 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{a}} \]

    if -720 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 100.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 98.1%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    5. Taylor expanded in b around 0 65.1%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(b + 0.5 \cdot {b}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 65.1%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + b\right) + 0.5 \cdot {b}^{2}}} \]
      2. +-commutative 65.1%

        \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right)} + 0.5 \cdot {b}^{2}} \]
      3. unpow2 65.1%

        \[\leadsto \frac{1}{\left(b + 2\right) + 0.5 \cdot \color{blue}{\left(b \cdot b\right)}} \]
    7. Simplified 65.1%

      \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 75.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -720:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \]
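Step 5 of the second regime truncates the exponential series e^b = 1 + b + b²/2 + O(b³), which is where the second branch's denominator comes from:

\[\frac{1}{1 + e^{b}} = \frac{1}{1 + \left(1 + b + \frac{{b}^{2}}{2} + O\left({b}^{3}\right)\right)} \approx \frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)} \]

This truncation is only trustworthy for b near 0, which explains this alternative's lower overall accuracy.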

Alternative 6: 67.1% accurate, 9.2× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := 0.5 \cdot \left(a \cdot a\right)\\ t_1 := a - t_0\\ \mathbf{if}\;a \leq -1.32 \cdot 10^{+154}:\\ \;\;\;\;\frac{2}{a \cdot a}\\ \mathbf{elif}\;a \leq -1.25 \cdot 10^{+75}:\\ \;\;\;\;\frac{1}{\frac{4 + t_1 \cdot \left(t_0 - a\right)}{2 + t_1}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (let* ((t_0 (* 0.5 (* a a))) (t_1 (- a t_0)))
   (if (<= a -1.32e+154)
     (/ 2.0 (* a a))
     (if (<= a -1.25e+75)
       (/ 1.0 (/ (+ 4.0 (* t_1 (- t_0 a))) (+ 2.0 t_1)))
       (/ 1.0 (+ (+ b 2.0) (* 0.5 (* b b))))))))
double code(double a, double b) {
	double t_0 = 0.5 * (a * a);
	double t_1 = a - t_0;
	double tmp;
	if (a <= -1.32e+154) {
		tmp = 2.0 / (a * a);
	} else if (a <= -1.25e+75) {
		tmp = 1.0 / ((4.0 + (t_1 * (t_0 - a))) / (2.0 + t_1));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: t_0
    real(8) :: t_1
    real(8) :: tmp
    t_0 = 0.5d0 * (a * a)
    t_1 = a - t_0
    if (a <= (-1.32d+154)) then
        tmp = 2.0d0 / (a * a)
    else if (a <= (-1.25d+75)) then
        tmp = 1.0d0 / ((4.0d0 + (t_1 * (t_0 - a))) / (2.0d0 + t_1))
    else
        tmp = 1.0d0 / ((b + 2.0d0) + (0.5d0 * (b * b)))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double t_0 = 0.5 * (a * a);
	double t_1 = a - t_0;
	double tmp;
	if (a <= -1.32e+154) {
		tmp = 2.0 / (a * a);
	} else if (a <= -1.25e+75) {
		tmp = 1.0 / ((4.0 + (t_1 * (t_0 - a))) / (2.0 + t_1));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
def code(a, b):
	t_0 = 0.5 * (a * a)
	t_1 = a - t_0
	tmp = 0
	if a <= -1.32e+154:
		tmp = 2.0 / (a * a)
	elif a <= -1.25e+75:
		tmp = 1.0 / ((4.0 + (t_1 * (t_0 - a))) / (2.0 + t_1))
	else:
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)))
	return tmp
function code(a, b)
	t_0 = Float64(0.5 * Float64(a * a))
	t_1 = Float64(a - t_0)
	tmp = 0.0
	if (a <= -1.32e+154)
		tmp = Float64(2.0 / Float64(a * a));
	elseif (a <= -1.25e+75)
		tmp = Float64(1.0 / Float64(Float64(4.0 + Float64(t_1 * Float64(t_0 - a))) / Float64(2.0 + t_1)));
	else
		tmp = Float64(1.0 / Float64(Float64(b + 2.0) + Float64(0.5 * Float64(b * b))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	t_0 = 0.5 * (a * a);
	t_1 = a - t_0;
	tmp = 0.0;
	if (a <= -1.32e+154)
		tmp = 2.0 / (a * a);
	elseif (a <= -1.25e+75)
		tmp = 1.0 / ((4.0 + (t_1 * (t_0 - a))) / (2.0 + t_1));
	else
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	end
	tmp_2 = tmp;
end
code[a_, b_] := Block[{t$95$0 = N[(0.5 * N[(a * a), $MachinePrecision]), $MachinePrecision]}, Block[{t$95$1 = N[(a - t$95$0), $MachinePrecision]}, If[LessEqual[a, -1.32e+154], N[(2.0 / N[(a * a), $MachinePrecision]), $MachinePrecision], If[LessEqual[a, -1.25e+75], N[(1.0 / N[(N[(4.0 + N[(t$95$1 * N[(t$95$0 - a), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(2.0 + t$95$1), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(N[(b + 2.0), $MachinePrecision] + N[(0.5 * N[(b * b), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := 0.5 \cdot \left(a \cdot a\right)\\
t_1 := a - t_0\\
\mathbf{if}\;a \leq -1.32 \cdot 10^{+154}:\\
\;\;\;\;\frac{2}{a \cdot a}\\

\mathbf{elif}\;a \leq -1.25 \cdot 10^{+75}:\\
\;\;\;\;\frac{1}{\frac{4 + t_1 \cdot \left(t_0 - a\right)}{2 + t_1}}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if a < -1.31999999999999998e154

    1. Initial program 97.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 97.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 97.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 97.6%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 97.6%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 97.6%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 97.6%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 97.6%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 0.0%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 0.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 97.6%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(-1 \cdot a + 0.5 \cdot {a}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + -1 \cdot a\right) + 0.5 \cdot {a}^{2}}} \]
      2. neg-mul-1 100.0%

        \[\leadsto \frac{1}{\left(2 + \color{blue}{\left(-a\right)}\right) + 0.5 \cdot {a}^{2}} \]
      3. unsub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right)} + 0.5 \cdot {a}^{2}} \]
      4. *-commutative 100.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{{a}^{2} \cdot 0.5}} \]
      5. unpow2 100.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{\left(a \cdot a\right)} \cdot 0.5} \]
      6. associate-*l* 100.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{a \cdot \left(a \cdot 0.5\right)}} \]
    7. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}} \]
    8. Taylor expanded in a around inf 100.0%

      \[\leadsto \color{blue}{\frac{2}{{a}^{2}}} \]
    9. Step-by-step derivation
      1. unpow2 100.0%

        \[\leadsto \frac{2}{\color{blue}{a \cdot a}} \]
    10. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{2}{a \cdot a}} \]

    if -1.31999999999999998e154 < a < -1.2500000000000001e75

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 100.0%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 0.0%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 0.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 7.6%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(-1 \cdot a + 0.5 \cdot {a}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+7.6%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + -1 \cdot a\right) + 0.5 \cdot {a}^{2}}} \]
      2. neg-mul-17.6%

        \[\leadsto \frac{1}{\left(2 + \color{blue}{\left(-a\right)}\right) + 0.5 \cdot {a}^{2}} \]
      3. unsub-neg7.6%

        \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right)} + 0.5 \cdot {a}^{2}} \]
      4. *-commutative7.6%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{{a}^{2} \cdot 0.5}} \]
      5. unpow27.6%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{\left(a \cdot a\right)} \cdot 0.5} \]
      6. associate-*l* 7.6%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{a \cdot \left(a \cdot 0.5\right)}} \]
    7. Simplified 7.6%

      \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}} \]
    8. Step-by-step derivation
      1. associate-+l- 7.6%

        \[\leadsto \frac{1}{\color{blue}{2 - \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      2. flip-- 95.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{2 \cdot 2 - \left(a - a \cdot \left(a \cdot 0.5\right)\right) \cdot \left(a - a \cdot \left(a \cdot 0.5\right)\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}}} \]
      3. metadata-eval 95.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{4} - \left(a - a \cdot \left(a \cdot 0.5\right)\right) \cdot \left(a - a \cdot \left(a \cdot 0.5\right)\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      4. associate-*r* 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - \color{blue}{\left(a \cdot a\right) \cdot 0.5}\right) \cdot \left(a - a \cdot \left(a \cdot 0.5\right)\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      5. *-commutative 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - \color{blue}{0.5 \cdot \left(a \cdot a\right)}\right) \cdot \left(a - a \cdot \left(a \cdot 0.5\right)\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      6. associate-*r* 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(a - \color{blue}{\left(a \cdot a\right) \cdot 0.5}\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      7. *-commutative 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(a - \color{blue}{0.5 \cdot \left(a \cdot a\right)}\right)}{2 + \left(a - a \cdot \left(a \cdot 0.5\right)\right)}} \]
      8. associate-*r* 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(a - 0.5 \cdot \left(a \cdot a\right)\right)}{2 + \left(a - \color{blue}{\left(a \cdot a\right) \cdot 0.5}\right)}} \]
      9. *-commutative 95.5%

        \[\leadsto \frac{1}{\frac{4 - \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(a - 0.5 \cdot \left(a \cdot a\right)\right)}{2 + \left(a - \color{blue}{0.5 \cdot \left(a \cdot a\right)}\right)}} \]
    9. Applied egg-rr 95.5%

      \[\leadsto \frac{1}{\color{blue}{\frac{4 - \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(a - 0.5 \cdot \left(a \cdot a\right)\right)}{2 + \left(a - 0.5 \cdot \left(a \cdot a\right)\right)}}} \]

    if -1.2500000000000001e75 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 93.7%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 93.8%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 94.7%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    5. Taylor expanded in b around 0 63.3%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(b + 0.5 \cdot {b}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 63.3%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + b\right) + 0.5 \cdot {b}^{2}}} \]
      2. +-commutative 63.3%

        \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right)} + 0.5 \cdot {b}^{2}} \]
      3. unpow2 63.3%

        \[\leadsto \frac{1}{\left(b + 2\right) + 0.5 \cdot \color{blue}{\left(b \cdot b\right)}} \]
    7. Simplified 63.3%

      \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 71.8%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1.32 \cdot 10^{+154}:\\ \;\;\;\;\frac{2}{a \cdot a}\\ \mathbf{elif}\;a \leq -1.25 \cdot 10^{+75}:\\ \;\;\;\;\frac{1}{\frac{4 + \left(a - 0.5 \cdot \left(a \cdot a\right)\right) \cdot \left(0.5 \cdot \left(a \cdot a\right) - a\right)}{2 + \left(a - 0.5 \cdot \left(a \cdot a\right)\right)}}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \]
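Every derivation in this report funnels through the same key rewrite, from exp(a)/(exp(a) + exp(b)) to 1/(1 + exp(b - a)), before any Taylor expansion. A minimal Python sketch of why that step matters numerically (the helper names `naive` and `rewritten` are illustrative, not part of the report's output):

```python
import math

def naive(a, b):
    # Direct transcription of the original program. math.exp raises
    # OverflowError once its argument exceeds ~709.78, so large a or b
    # breaks this form even though the true ratio is finite.
    return math.exp(a) / (math.exp(a) + math.exp(b))

def rewritten(a, b):
    # The 1/(1 + e^(b - a)) form reached at step 3 of the derivations:
    # only the difference b - a is exponentiated. It can still overflow
    # when b greatly exceeds a, which is what the regime splits address.
    return 1.0 / (1.0 + math.exp(b - a))
```

For moderate arguments the two forms agree to machine precision, but only the rewritten one survives large `a`: `exp(-800)` merely underflows to zero, while `exp(800)` overflows.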

Alternative 7: 64.1% accurate, 23.3× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\ \;\;\;\;\frac{a + 2}{4 - a \cdot a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 4e+128)
   (/ (+ a 2.0) (- 4.0 (* a a)))
   (/ 1.0 (+ (+ b 2.0) (* 0.5 (* b b))))))
double code(double a, double b) {
	double tmp;
	if (b <= 4e+128) {
		tmp = (a + 2.0) / (4.0 - (a * a));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 4d+128) then
        tmp = (a + 2.0d0) / (4.0d0 - (a * a))
    else
        tmp = 1.0d0 / ((b + 2.0d0) + (0.5d0 * (b * b)))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 4e+128) {
		tmp = (a + 2.0) / (4.0 - (a * a));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 4e+128:
		tmp = (a + 2.0) / (4.0 - (a * a))
	else:
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 4e+128)
		tmp = Float64(Float64(a + 2.0) / Float64(4.0 - Float64(a * a)));
	else
		tmp = Float64(1.0 / Float64(Float64(b + 2.0) + Float64(0.5 * Float64(b * b))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 4e+128)
		tmp = (a + 2.0) / (4.0 - (a * a));
	else
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 4*^128], N[(N[(a + 2.0), $MachinePrecision] / N[(4.0 - N[(a * a), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(N[(b + 2.0), $MachinePrecision] + N[(0.5 * N[(b * b), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\
\;\;\;\;\frac{a + 2}{4 - a \cdot a}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 4.0000000000000003e128

    1. Initial program 99.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 99.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 99.5%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.5%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.5%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.5%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 72.8%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 72.8%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 75.9%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 49.5%

      \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
    6. Step-by-step derivation
      1. neg-mul-1 49.5%

        \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
      2. unsub-neg 49.5%

        \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
    7. Simplified 49.5%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
    8. Step-by-step derivation
      1. flip-- 64.6%

        \[\leadsto \frac{1}{\color{blue}{\frac{2 \cdot 2 - a \cdot a}{2 + a}}} \]
      2. +-commutative 64.6%

        \[\leadsto \frac{1}{\frac{2 \cdot 2 - a \cdot a}{\color{blue}{a + 2}}} \]
      3. associate-/r/ 64.6%

        \[\leadsto \color{blue}{\frac{1}{2 \cdot 2 - a \cdot a} \cdot \left(a + 2\right)} \]
      4. metadata-eval 64.6%

        \[\leadsto \frac{1}{\color{blue}{4} - a \cdot a} \cdot \left(a + 2\right) \]
    9. Applied egg-rr 64.6%

      \[\leadsto \color{blue}{\frac{1}{4 - a \cdot a} \cdot \left(a + 2\right)} \]
    10. Step-by-step derivation
      1. associate-*l/ 64.6%

        \[\leadsto \color{blue}{\frac{1 \cdot \left(a + 2\right)}{4 - a \cdot a}} \]
      2. *-lft-identity 64.6%

        \[\leadsto \frac{\color{blue}{a + 2}}{4 - a \cdot a} \]
    11. Simplified 64.6%

      \[\leadsto \color{blue}{\frac{a + 2}{4 - a \cdot a}} \]

    if 4.0000000000000003e128 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 100.0%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 60.0%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 60.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    5. Taylor expanded in b around 0 92.1%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(b + 0.5 \cdot {b}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 92.1%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + b\right) + 0.5 \cdot {b}^{2}}} \]
      2. +-commutative 92.1%

        \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right)} + 0.5 \cdot {b}^{2}} \]
      3. unpow2 92.1%

        \[\leadsto \frac{1}{\left(b + 2\right) + 0.5 \cdot \color{blue}{\left(b \cdot b\right)}} \]
    7. Simplified 92.1%

      \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 68.3%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\ \;\;\;\;\frac{a + 2}{4 - a \cdot a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \]

Alternative 8: 64.4% accurate, 23.3× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\ \;\;\;\;\frac{1}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 4e+128)
   (/ 1.0 (+ (- 2.0 a) (* a (* a 0.5))))
   (/ 1.0 (+ (+ b 2.0) (* 0.5 (* b b))))))
double code(double a, double b) {
	double tmp;
	if (b <= 4e+128) {
		tmp = 1.0 / ((2.0 - a) + (a * (a * 0.5)));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 4d+128) then
        tmp = 1.0d0 / ((2.0d0 - a) + (a * (a * 0.5d0)))
    else
        tmp = 1.0d0 / ((b + 2.0d0) + (0.5d0 * (b * b)))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 4e+128) {
		tmp = 1.0 / ((2.0 - a) + (a * (a * 0.5)));
	} else {
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 4e+128:
		tmp = 1.0 / ((2.0 - a) + (a * (a * 0.5)))
	else:
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 4e+128)
		tmp = Float64(1.0 / Float64(Float64(2.0 - a) + Float64(a * Float64(a * 0.5))));
	else
		tmp = Float64(1.0 / Float64(Float64(b + 2.0) + Float64(0.5 * Float64(b * b))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 4e+128)
		tmp = 1.0 / ((2.0 - a) + (a * (a * 0.5)));
	else
		tmp = 1.0 / ((b + 2.0) + (0.5 * (b * b)));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 4*^128], N[(1.0 / N[(N[(2.0 - a), $MachinePrecision] + N[(a * N[(a * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(N[(b + 2.0), $MachinePrecision] + N[(0.5 * N[(b * b), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\
\;\;\;\;\frac{1}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 4.0000000000000003e128

    1. Initial program 99.5%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 99.5%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 99.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 99.5%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.5%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.5%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.5%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 72.8%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 72.8%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 75.9%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 65.0%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(-1 \cdot a + 0.5 \cdot {a}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 65.0%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + -1 \cdot a\right) + 0.5 \cdot {a}^{2}}} \]
      2. neg-mul-1 65.0%

        \[\leadsto \frac{1}{\left(2 + \color{blue}{\left(-a\right)}\right) + 0.5 \cdot {a}^{2}} \]
      3. unsub-neg 65.0%

        \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right)} + 0.5 \cdot {a}^{2}} \]
      4. *-commutative 65.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{{a}^{2} \cdot 0.5}} \]
      5. unpow2 65.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{\left(a \cdot a\right)} \cdot 0.5} \]
      6. associate-*l* 65.0%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{a \cdot \left(a \cdot 0.5\right)}} \]
    7. Simplified 65.0%

      \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}} \]

    if 4.0000000000000003e128 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 100.0%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 60.0%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 60.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
    5. Taylor expanded in b around 0 92.1%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(b + 0.5 \cdot {b}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 92.1%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + b\right) + 0.5 \cdot {b}^{2}}} \]
      2. +-commutative 92.1%

        \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right)} + 0.5 \cdot {b}^{2}} \]
      3. unpow2 92.1%

        \[\leadsto \frac{1}{\left(b + 2\right) + 0.5 \cdot \color{blue}{\left(b \cdot b\right)}} \]
    7. Simplified 92.1%

      \[\leadsto \frac{1}{\color{blue}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 68.7%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 4 \cdot 10^{+128}:\\ \;\;\;\;\frac{1}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(b + 2\right) + 0.5 \cdot \left(b \cdot b\right)}\\ \end{array} \]
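The b ≤ 4·10¹²⁸ branch of Alternative 8 is recognizable as a truncated series: its denominator (2 − a) + a·(a·0.5) is the second-order Taylor expansion of 1 + e⁻ᵃ around a = 0. A quick Python check, fixing b = 0 where the expansion in b introduces no additional error (function names here are illustrative):

```python
import math

def alt8_branch(a):
    # Alternative 8's b <= 4e+128 branch: 1 / (2 - a + a^2/2).
    return 1.0 / ((2.0 - a) + (a * (a * 0.5)))

def exact(a, b):
    # Numerically stable form of the original exp(a) / (exp(a) + exp(b)).
    return 1.0 / (1.0 + math.exp(b - a))
```

Near a = 0 the branch tracks the exact value to a few parts in 10⁴; the agreement degrades as |a| grows, which the 64.4% overall accuracy reflects.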

Alternative 9: 53.2% accurate, 27.6× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -1.65:\\ \;\;\;\;\frac{1}{0.5 \cdot \left(a \cdot a\right) - a}\\ \mathbf{else}:\\ \;\;\;\;0.5 + a \cdot 0.25\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -1.65) (/ 1.0 (- (* 0.5 (* a a)) a)) (+ 0.5 (* a 0.25))))
double code(double a, double b) {
	double tmp;
	if (a <= -1.65) {
		tmp = 1.0 / ((0.5 * (a * a)) - a);
	} else {
		tmp = 0.5 + (a * 0.25);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-1.65d0)) then
        tmp = 1.0d0 / ((0.5d0 * (a * a)) - a)
    else
        tmp = 0.5d0 + (a * 0.25d0)
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -1.65) {
		tmp = 1.0 / ((0.5 * (a * a)) - a);
	} else {
		tmp = 0.5 + (a * 0.25);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -1.65:
		tmp = 1.0 / ((0.5 * (a * a)) - a)
	else:
		tmp = 0.5 + (a * 0.25)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -1.65)
		tmp = Float64(1.0 / Float64(Float64(0.5 * Float64(a * a)) - a));
	else
		tmp = Float64(0.5 + Float64(a * 0.25));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -1.65)
		tmp = 1.0 / ((0.5 * (a * a)) - a);
	else
		tmp = 0.5 + (a * 0.25);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -1.65], N[(1.0 / N[(N[(0.5 * N[(a * a), $MachinePrecision]), $MachinePrecision] - a), $MachinePrecision]), $MachinePrecision], N[(0.5 + N[(a * 0.25), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -1.65:\\
\;\;\;\;\frac{1}{0.5 \cdot \left(a \cdot a\right) - a}\\

\mathbf{else}:\\
\;\;\;\;0.5 + a \cdot 0.25\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -1.6499999999999999

    1. Initial program 98.7%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.7%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 98.7%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 98.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 98.7%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 98.7%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 1.3%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 1.3%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 98.7%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 57.5%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(-1 \cdot a + 0.5 \cdot {a}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 57.5%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + -1 \cdot a\right) + 0.5 \cdot {a}^{2}}} \]
      2. neg-mul-1 57.5%

        \[\leadsto \frac{1}{\left(2 + \color{blue}{\left(-a\right)}\right) + 0.5 \cdot {a}^{2}} \]
      3. unsub-neg 57.5%

        \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right)} + 0.5 \cdot {a}^{2}} \]
      4. *-commutative 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{{a}^{2} \cdot 0.5}} \]
      5. unpow2 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{\left(a \cdot a\right)} \cdot 0.5} \]
      6. associate-*l* 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{a \cdot \left(a \cdot 0.5\right)}} \]
    7. Simplified 57.5%

      \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}} \]
    8. Taylor expanded in a around inf 57.5%

      \[\leadsto \frac{1}{\color{blue}{-1 \cdot a + 0.5 \cdot {a}^{2}}} \]
    9. Step-by-step derivation
      1. neg-mul-1 57.5%

        \[\leadsto \frac{1}{\color{blue}{\left(-a\right)} + 0.5 \cdot {a}^{2}} \]
      2. +-commutative 57.5%

        \[\leadsto \frac{1}{\color{blue}{0.5 \cdot {a}^{2} + \left(-a\right)}} \]
      3. unpow2 57.5%

        \[\leadsto \frac{1}{0.5 \cdot \color{blue}{\left(a \cdot a\right)} + \left(-a\right)} \]
      4. unsub-neg 57.5%

        \[\leadsto \frac{1}{\color{blue}{0.5 \cdot \left(a \cdot a\right) - a}} \]
    10. Simplified 57.5%

      \[\leadsto \frac{1}{\color{blue}{0.5 \cdot \left(a \cdot a\right) - a}} \]

    if -1.6499999999999999 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 100.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 59.4%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 58.9%

      \[\leadsto \color{blue}{0.5 + 0.25 \cdot a} \]
    6. Step-by-step derivation
      1. *-commutative 58.9%

        \[\leadsto 0.5 + \color{blue}{a \cdot 0.25} \]
    7. Simplified 58.9%

      \[\leadsto \color{blue}{0.5 + a \cdot 0.25} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 58.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1.65:\\ \;\;\;\;\frac{1}{0.5 \cdot \left(a \cdot a\right) - a}\\ \mathbf{else}:\\ \;\;\;\;0.5 + a \cdot 0.25\\ \end{array} \]

Alternative 10: 52.8% accurate, 33.9× speedup?

\[\begin{array}{l} \\ \frac{a + 2}{4 - a \cdot a} \end{array} \]
(FPCore (a b) :precision binary64 (/ (+ a 2.0) (- 4.0 (* a a))))
double code(double a, double b) {
	return (a + 2.0) / (4.0 - (a * a));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = (a + 2.0d0) / (4.0d0 - (a * a))
end function
public static double code(double a, double b) {
	return (a + 2.0) / (4.0 - (a * a));
}
def code(a, b):
	return (a + 2.0) / (4.0 - (a * a))
function code(a, b)
	return Float64(Float64(a + 2.0) / Float64(4.0 - Float64(a * a)))
end
function tmp = code(a, b)
	tmp = (a + 2.0) / (4.0 - (a * a));
end
code[a_, b_] := N[(N[(a + 2.0), $MachinePrecision] / N[(4.0 - N[(a * a), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{a + 2}{4 - a \cdot a}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Taylor expanded in b around 0 71.3%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  5. Taylor expanded in a around 0 43.2%

    \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
  6. Step-by-step derivation
    1. neg-mul-1 43.2%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
    2. unsub-neg 43.2%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  7. Simplified 43.2%

    \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  8. Step-by-step derivation
    1. flip-- 58.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{2 \cdot 2 - a \cdot a}{2 + a}}} \]
    2. +-commutative 58.1%

      \[\leadsto \frac{1}{\frac{2 \cdot 2 - a \cdot a}{\color{blue}{a + 2}}} \]
    3. associate-/r/ 58.1%

      \[\leadsto \color{blue}{\frac{1}{2 \cdot 2 - a \cdot a} \cdot \left(a + 2\right)} \]
    4. metadata-eval 58.1%

      \[\leadsto \frac{1}{\color{blue}{4} - a \cdot a} \cdot \left(a + 2\right) \]
  9. Applied egg-rr 58.1%

    \[\leadsto \color{blue}{\frac{1}{4 - a \cdot a} \cdot \left(a + 2\right)} \]
  10. Step-by-step derivation
    1. associate-*l/ 58.1%

      \[\leadsto \color{blue}{\frac{1 \cdot \left(a + 2\right)}{4 - a \cdot a}} \]
    2. *-lft-identity 58.1%

      \[\leadsto \frac{\color{blue}{a + 2}}{4 - a \cdot a} \]
  11. Simplified 58.1%

    \[\leadsto \color{blue}{\frac{a + 2}{4 - a \cdot a}} \]
  12. Final simplification 58.1%

    \[\leadsto \frac{a + 2}{4 - a \cdot a} \]
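The flip-- step in this derivation multiplies 1/(2 − a) by (2 + a)/(2 + a), so Alternative 10's (a + 2)/(4 − a·a) is algebraically identical to 1/(2 − a), the first-order truncation of 1/(1 + e⁻ᵃ). A short Python check (function names are illustrative; note the forms have poles at a = 2 and a = ±2 respectively):

```python
import math

def alt10(a):
    # Alternative 10: (a + 2) / (4 - a * a), which ignores b entirely.
    return (a + 2.0) / (4.0 - (a * a))

def truncated(a):
    # The algebraically equal form 1 / (2 - a) from step 7 of the derivation.
    return 1.0 / (2.0 - a)
```

The two forms agree with each other to machine precision, but with the exact 1/(1 + e^(b−a)) only near a = b = 0, consistent with the reported 52.8% accuracy.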

Alternative 11: 53.2% accurate, 43.2× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -1.7:\\ \;\;\;\;\frac{2}{a \cdot a}\\ \mathbf{else}:\\ \;\;\;\;0.5 + a \cdot 0.25\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -1.7) (/ 2.0 (* a a)) (+ 0.5 (* a 0.25))))
double code(double a, double b) {
	double tmp;
	if (a <= -1.7) {
		tmp = 2.0 / (a * a);
	} else {
		tmp = 0.5 + (a * 0.25);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-1.7d0)) then
        tmp = 2.0d0 / (a * a)
    else
        tmp = 0.5d0 + (a * 0.25d0)
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -1.7) {
		tmp = 2.0 / (a * a);
	} else {
		tmp = 0.5 + (a * 0.25);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -1.7:
		tmp = 2.0 / (a * a)
	else:
		tmp = 0.5 + (a * 0.25)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -1.7)
		tmp = Float64(2.0 / Float64(a * a));
	else
		tmp = Float64(0.5 + Float64(a * 0.25));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -1.7)
		tmp = 2.0 / (a * a);
	else
		tmp = 0.5 + (a * 0.25);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -1.7], N[(2.0 / N[(a * a), $MachinePrecision]), $MachinePrecision], N[(0.5 + N[(a * 0.25), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -1.7:\\
\;\;\;\;\frac{2}{a \cdot a}\\

\mathbf{else}:\\
\;\;\;\;0.5 + a \cdot 0.25\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if a < -1.69999999999999996

    1. Initial program 98.7%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.7%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 98.7%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 98.7%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 98.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 98.7%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 98.7%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 1.3%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 1.3%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 98.7%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 57.5%

      \[\leadsto \frac{1}{\color{blue}{2 + \left(-1 \cdot a + 0.5 \cdot {a}^{2}\right)}} \]
    6. Step-by-step derivation
      1. associate-+r+ 57.5%

        \[\leadsto \frac{1}{\color{blue}{\left(2 + -1 \cdot a\right) + 0.5 \cdot {a}^{2}}} \]
      2. neg-mul-1 57.5%

        \[\leadsto \frac{1}{\left(2 + \color{blue}{\left(-a\right)}\right) + 0.5 \cdot {a}^{2}} \]
      3. unsub-neg 57.5%

        \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right)} + 0.5 \cdot {a}^{2}} \]
      4. *-commutative 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{{a}^{2} \cdot 0.5}} \]
      5. unpow2 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{\left(a \cdot a\right)} \cdot 0.5} \]
      6. associate-*l* 57.5%

        \[\leadsto \frac{1}{\left(2 - a\right) + \color{blue}{a \cdot \left(a \cdot 0.5\right)}} \]
    7. Simplified 57.5%

      \[\leadsto \frac{1}{\color{blue}{\left(2 - a\right) + a \cdot \left(a \cdot 0.5\right)}} \]
    8. Taylor expanded in a around inf 57.5%

      \[\leadsto \color{blue}{\frac{2}{{a}^{2}}} \]
    9. Step-by-step derivation
      1. unpow2 57.5%

        \[\leadsto \frac{2}{\color{blue}{a \cdot a}} \]
    10. Simplified 57.5%

      \[\leadsto \color{blue}{\frac{2}{a \cdot a}} \]

    if -1.69999999999999996 < a

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-/l* 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      3. remove-double-div 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
      4. exp-neg 99.9%

        \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
      5. associate-/r/ 99.9%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
      6. /-rgt-identity 99.9%

        \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
      7. *-commutative 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
      8. distribute-rgt-in 99.9%

        \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
      9. exp-neg 100.0%

        \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
      10. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
      11. prod-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
      12. unsub-neg 100.0%

        \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Taylor expanded in b around 0 59.4%

      \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
    5. Taylor expanded in a around 0 58.9%

      \[\leadsto \color{blue}{0.5 + 0.25 \cdot a} \]
    6. Step-by-step derivation
      1. *-commutative 58.9%

        \[\leadsto 0.5 + \color{blue}{a \cdot 0.25} \]
    7. Simplified 58.9%

      \[\leadsto \color{blue}{0.5 + a \cdot 0.25} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 58.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1.7:\\ \;\;\;\;\frac{2}{a \cdot a}\\ \mathbf{else}:\\ \;\;\;\;0.5 + a \cdot 0.25\\ \end{array} \]
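To see what this regime split buys, here is an illustrative Python comparison (not generated by Herbie) of the piecewise program against the original quotient. Note that the alternative drops b entirely, so it is only close when b is near 0, which is why its overall accuracy is just 53.2%:

```python
import math

def exact(a, b):
    # original program: e^a / (e^a + e^b)
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt11(a):
    # Alternative 11: piecewise approximation that ignores b
    if a <= -1.7:
        return 2.0 / (a * a)
    return 0.5 + a * 0.25

# Near a = 0 with b = 0, the linear branch tracks the exact value closely.
for a in [-0.2, 0.0, 0.2]:
    assert abs(alt11(a) - exact(a, 0.0)) < 1e-3
```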

Alternative 12: 38.9% accurate, 61.0× speedup

\[\begin{array}{l} \\ 0.5 + a \cdot 0.25 \end{array} \]
(FPCore (a b) :precision binary64 (+ 0.5 (* a 0.25)))
double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0 + (a * 0.25d0)
end function
public static double code(double a, double b) {
	return 0.5 + (a * 0.25);
}
def code(a, b):
	return 0.5 + (a * 0.25)
function code(a, b)
	return Float64(0.5 + Float64(a * 0.25))
end
function tmp = code(a, b)
	tmp = 0.5 + (a * 0.25);
end
code[a_, b_] := N[(0.5 + N[(a * 0.25), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
0.5 + a \cdot 0.25
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Taylor expanded in b around 0 71.3%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  5. Taylor expanded in a around 0 42.3%

    \[\leadsto \color{blue}{0.5 + 0.25 \cdot a} \]
  6. Step-by-step derivation
    1. *-commutative 42.3%

      \[\leadsto 0.5 + \color{blue}{a \cdot 0.25} \]
  7. Simplified 42.3%

    \[\leadsto \color{blue}{0.5 + a \cdot 0.25} \]
  8. Final simplification 42.3%

    \[\leadsto 0.5 + a \cdot 0.25 \]
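As the derivation shows, 0.5 + a · 0.25 is the first-order Taylor expansion of the logistic curve \(\frac{1}{1 + e^{-a}}\) at a = 0. An illustrative check (not Herbie output) that the error grows roughly quadratically in |a|:

```python
import math

def logistic(a):
    # the intermediate form 1 / (1 + e^{-a}) from step 4, with b = 0
    return 1.0 / (1.0 + math.exp(-a))

# The tangent line at a = 0 has intercept 1/2 and slope 1/4, so the
# approximation is tight near 0 and degrades as |a| grows.
for a, tol in [(0.1, 1e-3), (0.5, 1e-2), (1.0, 5e-2)]:
    assert abs((0.5 + a * 0.25) - logistic(a)) < tol
```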

Alternative 13: 39.6% accurate, 61.0× speedup

\[\begin{array}{l} \\ \frac{1}{2 - a} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (- 2.0 a)))
double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 - a)
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
def code(a, b):
	return 1.0 / (2.0 - a)
function code(a, b)
	return Float64(1.0 / Float64(2.0 - a))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 - a);
end
code[a_, b_] := N[(1.0 / N[(2.0 - a), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 - a}
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Taylor expanded in b around 0 71.3%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{-a}}} \]
  5. Taylor expanded in a around 0 43.2%

    \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
  6. Step-by-step derivation
    1. neg-mul-1 43.2%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
    2. unsub-neg 43.2%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  7. Simplified 43.2%

    \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  8. Final simplification 43.2%

    \[\leadsto \frac{1}{2 - a} \]
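Step 5 above amounts to replacing \(e^{-a}\) inside \(\frac{1}{1 + e^{-a}}\) by its first-order Taylor polynomial \(1 - a\), which yields \(\frac{1}{2 - a}\). Like the other Taylor-based alternatives, it is only usable for small |a|; an illustrative check (not Herbie output):

```python
import math

def logistic(a):
    # the intermediate form 1 / (1 + e^{-a}) from step 4, with b = 0
    return 1.0 / (1.0 + math.exp(-a))

# Substituting e^{-a} ~= 1 - a in the denominator gives 1 / (2 - a),
# accurate only while the linearization of exp holds.
for a in [-0.5, 0.0, 0.5]:
    assert abs(1.0 / (2.0 - a) - logistic(a)) < 5e-2
```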

Alternative 14: 38.8% accurate, 305.0× speedup

\[\begin{array}{l} \\ 0.5 \end{array} \]
(FPCore (a b) :precision binary64 0.5)
double code(double a, double b) {
	return 0.5;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0
end function
public static double code(double a, double b) {
	return 0.5;
}
def code(a, b):
	return 0.5
function code(a, b)
	return 0.5
end
function tmp = code(a, b)
	tmp = 0.5;
end
code[a_, b_] := 0.5
\begin{array}{l}

\\
0.5
\end{array}
Derivation
  1. Initial program 99.6%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 99.6%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-/l* 99.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    3. remove-double-div 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\color{blue}{\frac{1}{\frac{1}{e^{a}}}}}} \]
    4. exp-neg 99.6%

      \[\leadsto \frac{1}{\frac{e^{a} + e^{b}}{\frac{1}{\color{blue}{e^{-a}}}}} \]
    5. associate-/r/ 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a} + e^{b}}{1} \cdot e^{-a}}} \]
    6. /-rgt-identity 99.6%

      \[\leadsto \frac{1}{\color{blue}{\left(e^{a} + e^{b}\right)} \cdot e^{-a}} \]
    7. *-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a} \cdot \left(e^{a} + e^{b}\right)}} \]
    8. distribute-rgt-in 71.0%

      \[\leadsto \frac{1}{\color{blue}{e^{a} \cdot e^{-a} + e^{b} \cdot e^{-a}}} \]
    9. exp-neg 71.1%

      \[\leadsto \frac{1}{e^{a} \cdot \color{blue}{\frac{1}{e^{a}}} + e^{b} \cdot e^{-a}} \]
    10. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} + e^{b} \cdot e^{-a}} \]
    11. prod-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b + \left(-a\right)}}} \]
    12. unsub-neg 100.0%

      \[\leadsto \frac{1}{1 + e^{\color{blue}{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Taylor expanded in a around 0 79.3%

    \[\leadsto \frac{1}{\color{blue}{1 + e^{b}}} \]
  5. Taylor expanded in b around 0 41.9%

    \[\leadsto \color{blue}{0.5} \]
  6. Final simplification 41.9%

    \[\leadsto 0.5 \]

Developer target: 100.0% accurate, 2.9× speedup

\[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
double code(double a, double b) {
	return 1.0 / (1.0 + exp((b - a)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (1.0d0 + exp((b - a)))
end function
public static double code(double a, double b) {
	return 1.0 / (1.0 + Math.exp((b - a)));
}
def code(a, b):
	return 1.0 / (1.0 + math.exp((b - a)))
function code(a, b)
	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
end
function tmp = code(a, b)
	tmp = 1.0 / (1.0 + exp((b - a)));
end
code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{1 + e^{b - a}}
\end{array}
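Why this form is the target: dividing numerator and denominator by e^a means only the difference b − a is exponentiated, so the result stays finite even where e^a and e^b individually overflow binary64. A sketch of that behaviour in Python (illustrative, not part of the report; note math.exp raises OverflowError rather than returning inf):

```python
import math

def naive(a, b):
    # original program: overflows once a or b exceeds ~709.78
    return math.exp(a) / (math.exp(a) + math.exp(b))

def target(a, b):
    # developer target: 1 / (1 + e^{b - a})
    return 1.0 / (1.0 + math.exp(b - a))

a, b = 800.0, 801.0
try:
    naive(a, b)
    overflowed = False
except OverflowError:  # math.exp(800.0) is out of binary64 range
    overflowed = True

assert overflowed
# The target form only evaluates exp(1.0), giving 1 / (1 + e).
assert abs(target(a, b) - 1.0 / (1.0 + math.e)) < 1e-12
```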

Reproduce

herbie shell --seed 2023271 
(FPCore (a b)
  :name "Quotient of sum of exps"
  :precision binary64

  :herbie-target
  (/ 1.0 (+ 1.0 (exp (- b a))))

  (/ (exp a) (+ (exp a) (exp b))))