Quotient of sum of exps

Percentage Accurate: 98.6% → 100.0%
Time: 8.2s
Alternatives: 13
Speedup: 2.9×

Specification

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}
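Before the alternatives, it is worth seeing why the specification needs help at all. The sketch below (my own check, not part of the report) transcribes the FPCore into Python: the quotient is the logistic sigmoid of a − b and always lies in (0, 1], yet the naive evaluation overflows once either exponential exceeds the binary64 range.

```python
import math

def naive(a, b):
    # Direct transcription of the specification: e^a / (e^a + e^b).
    return math.exp(a) / (math.exp(a) + math.exp(b))

print(naive(1.0, 0.0))  # ~0.73106, the logistic sigmoid of a - b = 1

# The true value at (800, 0) is ~1.0, but exp(800) overflows binary64.
try:
    naive(800.0, 0.0)
    print("no overflow")
except OverflowError:
    print("OverflowError, despite a well-behaved true result")
```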

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line shows an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 13 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.6% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{a}}{e^{a} + e^{b}} \end{array} \]
(FPCore (a b) :precision binary64 (/ (exp a) (+ (exp a) (exp b))))
double code(double a, double b) {
	return exp(a) / (exp(a) + exp(b));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = exp(a) / (exp(a) + exp(b))
end function
public static double code(double a, double b) {
	return Math.exp(a) / (Math.exp(a) + Math.exp(b));
}
def code(a, b):
	return math.exp(a) / (math.exp(a) + math.exp(b))
function code(a, b)
	return Float64(exp(a) / Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = exp(a) / (exp(a) + exp(b));
end
code[a_, b_] := N[(N[Exp[a], $MachinePrecision] / N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{a}}{e^{a} + e^{b}}
\end{array}

Alternative 1: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{-\mathsf{log1p}\left(e^{b - a}\right)} \end{array} \]
(FPCore (a b) :precision binary64 (exp (- (log1p (exp (- b a))))))
double code(double a, double b) {
	return exp(-log1p(exp((b - a))));
}
public static double code(double a, double b) {
	return Math.exp(-Math.log1p(Math.exp((b - a))));
}
def code(a, b):
	return math.exp(-math.log1p(math.exp((b - a))))
function code(a, b)
	return exp(Float64(-log1p(exp(Float64(b - a)))))
end
code[a_, b_] := N[Exp[(-N[Log[1 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]], $MachinePrecision])], $MachinePrecision]
\begin{array}{l}

\\
e^{-\mathsf{log1p}\left(e^{b - a}\right)}
\end{array}
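As a quick numeric cross-check (mine, not Herbie's), Alternative 1 is algebraically 1/(1 + e^{b−a}): it matches the original program wherever the latter evaluates cleanly, and stays finite for large a where the original overflows.

```python
import math

def naive(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt1(a, b):
    # exp(-log1p(exp(b - a))) is 1 / (1 + exp(b - a)), evaluated stably.
    return math.exp(-math.log1p(math.exp(b - a)))

# Agreement on moderate inputs.
for a, b in [(0.0, 0.0), (3.5, -2.0), (-1.0, 4.0)]:
    assert abs(naive(a, b) - alt1(a, b)) < 1e-12

print(alt1(800.0, 0.0))  # 1.0 -- the naive form overflows here
```

One caveat: in Python, `math.exp` raises `OverflowError` when b − a is large and positive, whereas the C listing saturates to infinity and correctly returns 0 through `log1p` and `exp`.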
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 71.1%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. div-exp 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
    2. +-commutative 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
    3. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
    4. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
    5. add-exp-log 99.6%

      \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
    6. rec-exp 99.6%

      \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
    7. sub-neg 99.6%

      \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
    8. metadata-eval 99.6%

      \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
    9. +-commutative 99.6%

      \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
    10. log1p-define 99.6%

      \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
    11. div-exp 100.0%

      \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
  6. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
  7. Final simplification 100.0%

    \[\leadsto e^{-\mathsf{log1p}\left(e^{b - a}\right)} \]
  8. Add Preprocessing
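A side note on the last rewriting steps above (my illustration, not from the report): the point of reaching `log1p` is that when e^{b−a} is tiny, computing log(1 + x) directly absorbs x into the 1.0 before the logarithm ever sees it, while `log1p` keeps it:

```python
import math

x = 1e-20                  # stands in for exp(b - a) when b is far below a
print(math.log(1.0 + x))   # 0.0: x is lost in the addition 1.0 + x
print(math.log1p(x))       # ~1e-20, accurate to full precision
```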

Alternative 2: 86.1% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq -860:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.95 \cdot 10^{+96}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{6}{{b}^{3}}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b -860.0)
   (+ 1.0 (exp b))
   (if (<= b 1.95e+96)
     (/
      1.0
      (- (+ 1.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))) -1.0))
     (/ 6.0 (pow b 3.0)))))
double code(double a, double b) {
	double tmp;
	if (b <= -860.0) {
		tmp = 1.0 + exp(b);
	} else if (b <= 1.95e+96) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 6.0 / pow(b, 3.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= (-860.0d0)) then
        tmp = 1.0d0 + exp(b)
    else if (b <= 1.95d+96) then
        tmp = 1.0d0 / ((1.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0))))))) - (-1.0d0))
    else
        tmp = 6.0d0 / (b ** 3.0d0)
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= -860.0) {
		tmp = 1.0 + Math.exp(b);
	} else if (b <= 1.95e+96) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 6.0 / Math.pow(b, 3.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= -860.0:
		tmp = 1.0 + math.exp(b)
	elif b <= 1.95e+96:
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0)
	else:
		tmp = 6.0 / math.pow(b, 3.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= -860.0)
		tmp = Float64(1.0 + exp(b));
	elseif (b <= 1.95e+96)
		tmp = Float64(1.0 / Float64(Float64(1.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))) - -1.0));
	else
		tmp = Float64(6.0 / (b ^ 3.0));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= -860.0)
		tmp = 1.0 + exp(b);
	elseif (b <= 1.95e+96)
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	else
		tmp = 6.0 / (b ^ 3.0);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, -860.0], N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision], If[LessEqual[b, 1.95e+96], N[(1.0 / N[(N[(1.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision], N[(6.0 / N[Power[b, 3.0], $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq -860:\\
\;\;\;\;1 + e^{b}\\

\mathbf{elif}\;b \leq 1.95 \cdot 10^{+96}:\\
\;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\

\mathbf{else}:\\
\;\;\;\;\frac{6}{{b}^{3}}\\


\end{array}
\end{array}
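Reading the middle branch (my interpretation, consistent with the derivation below): the polynomial 1 + a·(−1 + a·(0.5 + a·(−1/6))) is the degree-3 Taylor polynomial of e^{−a}, so the branch approximates 1/(e^{−a} + 1) and ignores b entirely — hence the drop to 86.1% average accuracy. A quick check near a = 0:

```python
import math

def cubic(a):
    # 1 - a + a^2/2 - a^3/6: the degree-3 Taylor polynomial of exp(-a) about 0.
    return 1.0 + a * (-1.0 + a * (0.5 + a * -0.16666666666666666))

for a in (0.0, 0.1, -0.25):
    print(a, cubic(a), math.exp(-a))  # close for small |a|
```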
Derivation
  1. Split input into 3 regimes
  2. if b < -860

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
      2. +-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
      3. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
      4. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
      5. add-exp-log 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
      6. rec-exp 100.0%

        \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
      7. sub-neg 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
      8. metadata-eval 100.0%

        \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
      9. +-commutative 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
      10. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
      11. div-exp 100.0%

        \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
    6. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
    7. Taylor expanded in a around 0 100.0%

      \[\leadsto \color{blue}{e^{-\log \left(1 + e^{b}\right)}} \]
    8. Step-by-step derivation
      1. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    9. Simplified 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b}\right)}} \]
    10. Step-by-step derivation
      1. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{-\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{-\mathsf{log1p}\left(e^{b}\right)}}} \]
      2. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\left(-\mathsf{log1p}\left(e^{b}\right)\right) \cdot \left(-\mathsf{log1p}\left(e^{b}\right)\right)}}} \]
      3. sqr-neg 100.0%

        \[\leadsto e^{\sqrt{\color{blue}{\mathsf{log1p}\left(e^{b}\right) \cdot \mathsf{log1p}\left(e^{b}\right)}}} \]
      4. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{\mathsf{log1p}\left(e^{b}\right)}}} \]
      5. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
      6. log1p-undefine 100.0%

        \[\leadsto e^{\color{blue}{\log \left(1 + e^{b}\right)}} \]
      7. rem-exp-log 100.0%

        \[\leadsto \color{blue}{1 + e^{b}} \]
      8. +-commutative 100.0%

        \[\leadsto \color{blue}{e^{b} + 1} \]
    11. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{b} + 1} \]

    if -860 < b < 1.95e96

    1. Initial program 98.1%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.1%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.1%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.1%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.1%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.1%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.1%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 63.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.3%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 89.9%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Taylor expanded in a around 0 75.6%

      \[\leadsto \frac{1}{\color{blue}{\left(1 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)\right)} - -1} \]

    if 1.95e96 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 64.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 97.9%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
    9. Taylor expanded in b around inf 97.9%

      \[\leadsto \color{blue}{\frac{6}{{b}^{3}}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 84.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq -860:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.95 \cdot 10^{+96}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{6}{{b}^{3}}\\ \end{array} \]
  5. Add Preprocessing
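One remark on the first regime above (my observation, assuming IEEE 754 binary64): for b ≤ −860 the exponential is far below the smallest subnormal, so e^b underflows to zero and the branch 1 + e^b returns exactly 1.0 — which is also the correctly rounded value of the original quotient there for moderate a.

```python
import math

b = -900.0
print(math.exp(b))        # 0.0: e^-900 ~ 1e-391, below the 4.9e-324 subnormal floor
print(1.0 + math.exp(b))  # exactly 1.0
```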

Alternative 3: 98.3% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -114:\\ \;\;\;\;\frac{1}{e^{-a} - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -114.0) (/ 1.0 (- (exp (- a)) -1.0)) (/ 1.0 (- (exp b) -1.0))))
double code(double a, double b) {
	double tmp;
	if (a <= -114.0) {
		tmp = 1.0 / (exp(-a) - -1.0);
	} else {
		tmp = 1.0 / (exp(b) - -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-114.0d0)) then
        tmp = 1.0d0 / (exp(-a) - (-1.0d0))
    else
        tmp = 1.0d0 / (exp(b) - (-1.0d0))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -114.0) {
		tmp = 1.0 / (Math.exp(-a) - -1.0);
	} else {
		tmp = 1.0 / (Math.exp(b) - -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -114.0:
		tmp = 1.0 / (math.exp(-a) - -1.0)
	else:
		tmp = 1.0 / (math.exp(b) - -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -114.0)
		tmp = Float64(1.0 / Float64(exp(Float64(-a)) - -1.0));
	else
		tmp = Float64(1.0 / Float64(exp(b) - -1.0));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -114.0)
		tmp = 1.0 / (exp(-a) - -1.0);
	else
		tmp = 1.0 / (exp(b) - -1.0);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -114.0], N[(1.0 / N[(N[Exp[(-a)], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(N[Exp[b], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -114:\\
\;\;\;\;\frac{1}{e^{-a} - -1}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{e^{b} - -1}\\


\end{array}
\end{array}
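A hedged check of the two branches (the example inputs are mine): since −(−1) is just +1, the else branch is 1/(e^b + 1) and the first branch is 1/(e^{−a} + 1). The split matters: for very negative a, the else branch alone would be wildly wrong, while the chosen branch recovers the tiny true quotient.

```python
import math

def alt3(a, b):
    # Alternative 3, transcribed from the report's Python listing.
    if a <= -114.0:
        return 1.0 / (math.exp(-a) - -1.0)
    return 1.0 / (math.exp(b) - -1.0)

def naive(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

print(alt3(0.0, 0.0))     # 0.5
print(alt3(-200.0, 0.0))  # ~1.38e-87, agreeing with the original program
print(naive(-200.0, 0.0))
```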
Derivation
  1. Split input into 2 regimes
  2. if a < -114

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 2.7%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 2.7%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 98.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 98.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 98.6%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]

    if -114 < a

    1. Initial program 98.8%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.8%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.8%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.8%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.8%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.8%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.8%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 98.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 98.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.9%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 97.9%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 98.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -114:\\ \;\;\;\;\frac{1}{e^{-a} - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \]
  5. Add Preprocessing
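The flip side of that regime split (my observation; it is consistent with the 98.3% average): the else branch drops a entirely, so Alternative 3 is only faithful where that is harmless. At a = 5, b = 0, for instance, it returns 0.5 while the true quotient is about 0.9933 — the speed/accuracy tradeoff in concrete form.

```python
import math

def alt3(a, b):
    if a <= -114.0:
        return 1.0 / (math.exp(-a) - -1.0)
    return 1.0 / (math.exp(b) - -1.0)

true_value = math.exp(5.0) / (math.exp(5.0) + 1.0)
print(alt3(5.0, 0.0), true_value)  # 0.5 vs ~0.9933
```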

Alternative 4: 98.2% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;a \leq -475000000:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= a -475000000.0) (/ (exp a) a) (/ 1.0 (- (exp b) -1.0))))
double code(double a, double b) {
	double tmp;
	if (a <= -475000000.0) {
		tmp = exp(a) / a;
	} else {
		tmp = 1.0 / (exp(b) - -1.0);
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (a <= (-475000000.0d0)) then
        tmp = exp(a) / a
    else
        tmp = 1.0d0 / (exp(b) - (-1.0d0))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (a <= -475000000.0) {
		tmp = Math.exp(a) / a;
	} else {
		tmp = 1.0 / (Math.exp(b) - -1.0);
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if a <= -475000000.0:
		tmp = math.exp(a) / a
	else:
		tmp = 1.0 / (math.exp(b) - -1.0)
	return tmp
function code(a, b)
	tmp = 0.0
	if (a <= -475000000.0)
		tmp = Float64(exp(a) / a);
	else
		tmp = Float64(1.0 / Float64(exp(b) - -1.0));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (a <= -475000000.0)
		tmp = exp(a) / a;
	else
		tmp = 1.0 / (exp(b) - -1.0);
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[a, -475000000.0], N[(N[Exp[a], $MachinePrecision] / a), $MachinePrecision], N[(1.0 / N[(N[Exp[b], $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;a \leq -475000000:\\
\;\;\;\;\frac{e^{a}}{a}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{e^{b} - -1}\\


\end{array}
\end{array}
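For the first regime of Alternative 4 (my check, assuming IEEE 754 underflow behavior): with a ≤ −4.75×10^8, e^a underflows to 0.0, so e^a/a evaluates to a signed zero — and zero is also the correctly rounded binary64 value of the original quotient at such inputs.

```python
import math

a = -5.0e8
print(math.exp(a))      # 0.0 (underflow; Python's math.exp does not raise here)
print(math.exp(a) / a)  # -0.0, which compares equal to 0.0
```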
Derivation
  1. Split input into 2 regimes
  2. if a < -4.75e8

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{1 + e^{a}}} \]
    4. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{2 + a}} \]
    5. Step-by-step derivation
      1. +-commutative 100.0%

        \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    6. Simplified 100.0%

      \[\leadsto \frac{e^{a}}{\color{blue}{a + 2}} \]
    7. Taylor expanded in a around inf 100.0%

      \[\leadsto \color{blue}{\frac{e^{a}}{a}} \]

    if -4.75e8 < a

    1. Initial program 98.9%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.9%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.9%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.9%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.9%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.9%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 98.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 98.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.9%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.9%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 96.9%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 97.8%

    \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -475000000:\\ \;\;\;\;\frac{e^{a}}{a}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{e^{b} - -1}\\ \end{array} \]
  5. Add Preprocessing

Alternative 5: 86.1% accurate, 2.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq -860:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.3 \cdot 10^{+98}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b -860.0)
   (+ 1.0 (exp b))
   (if (<= b 1.3e+98)
     (/
      1.0
      (- (+ 1.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))) -1.0))
     (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666))))))))))
double code(double a, double b) {
	double tmp;
	if (b <= -860.0) {
		tmp = 1.0 + exp(b);
	} else if (b <= 1.3e+98) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= (-860.0d0)) then
        tmp = 1.0d0 + exp(b)
    else if (b <= 1.3d+98) then
        tmp = 1.0d0 / ((1.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0))))))) - (-1.0d0))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= -860.0) {
		tmp = 1.0 + Math.exp(b);
	} else if (b <= 1.3e+98) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= -860.0:
		tmp = 1.0 + math.exp(b)
	elif b <= 1.3e+98:
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0)
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= -860.0)
		tmp = Float64(1.0 + exp(b));
	elseif (b <= 1.3e+98)
		tmp = Float64(1.0 / Float64(Float64(1.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))) - -1.0));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= -860.0)
		tmp = 1.0 + exp(b);
	elseif (b <= 1.3e+98)
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, -860.0], N[(1.0 + N[Exp[b], $MachinePrecision]), $MachinePrecision], If[LessEqual[b, 1.3e+98], N[(1.0 / N[(N[(1.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq -860:\\
\;\;\;\;1 + e^{b}\\

\mathbf{elif}\;b \leq 1.3 \cdot 10^{+98}:\\
\;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if b < -860

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
      5. unsub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
      6. div-sub 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
      7. *-lft-identity 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      8. associate-*l/ 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
      9. lft-mult-inverse 100.0%

        \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
      10. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
      11. distribute-frac-neg 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
      12. remove-double-neg 100.0%

        \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
      13. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. div-exp 100.0%

        \[\leadsto \frac{1}{1 + \color{blue}{\frac{e^{b}}{e^{a}}}} \]
      2. +-commutative 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} + 1}} \]
      3. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} + \color{blue}{\left(--1\right)}} \]
      4. sub-neg 100.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - -1}} \]
      5. add-exp-log 100.0%

        \[\leadsto \frac{1}{\color{blue}{e^{\log \left(\frac{e^{b}}{e^{a}} - -1\right)}}} \]
      6. rec-exp 100.0%

        \[\leadsto \color{blue}{e^{-\log \left(\frac{e^{b}}{e^{a}} - -1\right)}} \]
      7. sub-neg 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(\frac{e^{b}}{e^{a}} + \left(--1\right)\right)}} \]
      8. metadata-eval 100.0%

        \[\leadsto e^{-\log \left(\frac{e^{b}}{e^{a}} + \color{blue}{1}\right)} \]
      9. +-commutative 100.0%

        \[\leadsto e^{-\log \color{blue}{\left(1 + \frac{e^{b}}{e^{a}}\right)}} \]
      10. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(\frac{e^{b}}{e^{a}}\right)}} \]
      11. div-exp 100.0%

        \[\leadsto e^{-\mathsf{log1p}\left(\color{blue}{e^{b - a}}\right)} \]
    6. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b - a}\right)}} \]
    7. Taylor expanded in a around 0 100.0%

      \[\leadsto \color{blue}{e^{-\log \left(1 + e^{b}\right)}} \]
    8. Step-by-step derivation
      1. log1p-define 100.0%

        \[\leadsto e^{-\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
    9. Simplified 100.0%

      \[\leadsto \color{blue}{e^{-\mathsf{log1p}\left(e^{b}\right)}} \]
    10. Step-by-step derivation
      1. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{-\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{-\mathsf{log1p}\left(e^{b}\right)}}} \]
      2. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\left(-\mathsf{log1p}\left(e^{b}\right)\right) \cdot \left(-\mathsf{log1p}\left(e^{b}\right)\right)}}} \]
      3. sqr-neg 100.0%

        \[\leadsto e^{\sqrt{\color{blue}{\mathsf{log1p}\left(e^{b}\right) \cdot \mathsf{log1p}\left(e^{b}\right)}}} \]
      4. sqrt-unprod 100.0%

        \[\leadsto e^{\color{blue}{\sqrt{\mathsf{log1p}\left(e^{b}\right)} \cdot \sqrt{\mathsf{log1p}\left(e^{b}\right)}}} \]
      5. add-sqr-sqrt 100.0%

        \[\leadsto e^{\color{blue}{\mathsf{log1p}\left(e^{b}\right)}} \]
      6. log1p-undefine 100.0%

        \[\leadsto e^{\color{blue}{\log \left(1 + e^{b}\right)}} \]
      7. rem-exp-log 100.0%

        \[\leadsto \color{blue}{1 + e^{b}} \]
      8. +-commutative 100.0%

        \[\leadsto \color{blue}{e^{b} + 1} \]
    11. Applied egg-rr 100.0%

      \[\leadsto \color{blue}{e^{b} + 1} \]

    if -860 < b < 1.3e98

    1. Initial program 98.1%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.1%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.1%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.1%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.1%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.1%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.1%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 63.5%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 63.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.3%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 89.9%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Taylor expanded in a around 0 75.6%

      \[\leadsto \frac{1}{\color{blue}{\left(1 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)\right)} - -1} \]

    if 1.3e98 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 64.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 97.9%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 84.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq -860:\\ \;\;\;\;1 + e^{b}\\ \mathbf{elif}\;b \leq 1.3 \cdot 10^{+98}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
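The middle branch above replaces $e^{-a}$ (what remains after Taylor-expanding away $b$) with its degree-3 Maclaurin polynomial in Horner form. A small sketch, added here for illustration, checks that the Horner cubic tracks `math.exp(-a)` closely for small `a`:

```python
import math

# Horner form of the degree-3 Taylor series of exp(-a) around a = 0,
# as it appears in the report's middle branch.
def exp_neg_cubic(a):
    return 1.0 + a * (-1.0 + a * (0.5 + a * -0.16666666666666666))

# The truncation error is on the order of a**4 / 24, so for small a
# the polynomial agrees with exp(-a) to many digits:
a = 0.01
print(abs(exp_neg_cubic(a) - math.exp(-a)))  # well below 1e-9
```

This is exactly why the piecewise program is fast (no `exp` calls in that branch) and why its accuracy falls off once `a` leaves the neighborhood of 0.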

Alternative 6: 100.0% accurate, 2.9× speedup?

\[\begin{array}{l} \\ \frac{1}{e^{b - a} + 1} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ (exp (- b a)) 1.0)))
double code(double a, double b) {
	return 1.0 / (exp((b - a)) + 1.0);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (exp((b - a)) + 1.0d0)
end function
public static double code(double a, double b) {
	return 1.0 / (Math.exp((b - a)) + 1.0);
}
def code(a, b):
	return 1.0 / (math.exp((b - a)) + 1.0)
function code(a, b)
	return Float64(1.0 / Float64(exp(Float64(b - a)) + 1.0))
end
function tmp = code(a, b)
	tmp = 1.0 / (exp((b - a)) + 1.0);
end
code[a_, b_] := N[(1.0 / N[(N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{e^{b - a} + 1}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{a} + \color{blue}{\left(-\left(-e^{b}\right)\right)}}{e^{a}}} \]
    5. unsub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{a} - \left(-e^{b}\right)}}{e^{a}}} \]
    6. div-sub 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{a}}{e^{a}} - \frac{-e^{b}}{e^{a}}}} \]
    7. *-lft-identity 71.1%

      \[\leadsto \frac{1}{\frac{\color{blue}{1 \cdot e^{a}}}{e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    8. associate-*l/ 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}} \cdot e^{a}} - \frac{-e^{b}}{e^{a}}} \]
    9. lft-mult-inverse 99.6%

      \[\leadsto \frac{1}{\color{blue}{1} - \frac{-e^{b}}{e^{a}}} \]
    10. sub-neg 99.6%

      \[\leadsto \frac{1}{\color{blue}{1 + \left(-\frac{-e^{b}}{e^{a}}\right)}} \]
    11. distribute-frac-neg 99.6%

      \[\leadsto \frac{1}{1 + \color{blue}{\frac{-\left(-e^{b}\right)}{e^{a}}}} \]
    12. remove-double-neg 99.6%

      \[\leadsto \frac{1}{1 + \frac{\color{blue}{e^{b}}}{e^{a}}} \]
    13. div-exp 100.0%

      \[\leadsto \frac{1}{1 + \color{blue}{e^{b - a}}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{1}{1 + e^{b - a}}} \]
  4. Add Preprocessing
  5. Final simplification 100.0%

    \[\leadsto \frac{1}{e^{b - a} + 1} \]
  6. Add Preprocessing
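Alternative 6 is the classic sigmoid/softmax stabilization: only the difference `b - a` enters the exponential, so a large common offset in `a` and `b` cancels instead of overflowing or underflowing both terms. A quick check, added here for illustration:

```python
import math

def naive(a, b):
    return math.exp(a) / (math.exp(a) + math.exp(b))

def rewritten(a, b):
    # Alternative 6: exp(b - a) depends only on the difference of the inputs.
    # Caveat: Python's math.exp raises OverflowError for arguments above
    # ~709.78, where the C/Fortran versions would return inf (and then 0.0).
    return 1.0 / (math.exp(b - a) + 1.0)

print(rewritten(-800.0, -800.0))  # 0.5, exact; the naive form hits 0.0 / 0.0 here
print(rewritten(3.0, 5.0))        # agrees with the naive form to within rounding
```

In the well-scaled range the two forms agree to a few ulps; at the extremes only the rewritten form survives, which matches the report's 100.0% accuracy figure.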

Alternative 7: 70.9% accurate, 13.9× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 6.5 \cdot 10^{+99}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 6.5e+99)
   (/
    1.0
    (- (+ 1.0 (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666)))))) -1.0))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666)))))))))
double code(double a, double b) {
	double tmp;
	if (b <= 6.5e+99) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 6.5d+99) then
        tmp = 1.0d0 / ((1.0d0 + (a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0))))))) - (-1.0d0))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 6.5e+99) {
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 6.5e+99:
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0)
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 6.5e+99)
		tmp = Float64(1.0 / Float64(Float64(1.0 + Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666)))))) - -1.0));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 6.5e+99)
		tmp = 1.0 / ((1.0 + (a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666)))))) - -1.0);
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 6.5e+99], N[(1.0 / N[(N[(1.0 + N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - -1.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 6.5 \cdot 10^{+99}:\\
\;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 6.5000000000000004e99

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.5%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 72.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Taylor expanded in a around 0 61.8%

      \[\leadsto \frac{1}{\color{blue}{\left(1 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)\right)} - -1} \]

    if 6.5000000000000004e99 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 64.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 97.9%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 67.7%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 6.5 \cdot 10^{+99}:\\ \;\;\;\;\frac{1}{\left(1 + a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right)\right) - -1}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
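Transcribing the FPCore above into Python (for illustration only) makes the speed/accuracy tradeoff concrete: the first branch drops `b` entirely (it was Taylor-expanded around 0), so the program is fast but its error grows as soon as `b` moves away from 0, which is why this alternative is only 70.9% accurate overall.

```python
import math

def alt7(a, b):
    # First branch: cubic Taylor approximation of exp(-a), ignoring b.
    if b <= 6.5e99:
        return 1.0 / ((1.0 + a * (-1.0 + a * (0.5 + a * -0.16666666666666666))) - -1.0)
    # Second branch: cubic Taylor approximation in b for astronomically large b.
    return 1.0 / (2.0 + b * (1.0 + b * (0.5 + b * 0.16666666666666666)))

# Reference value via the accurate rewriting 1 / (1 + exp(b - a)):
exact = lambda a, b: 1.0 / (1.0 + math.exp(b - a))

print(abs(alt7(0.01, 0.0) - exact(0.01, 0.0)))  # tiny when a and b are near 0
print(abs(alt7(0.0, 3.0) - exact(0.0, 3.0)))    # large once b is not small
```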

Alternative 8: 67.3% accurate, 15.2× speedup?

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 6.5 \cdot 10^{+145}:\\ \;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 6.5e+145)
   (/ 1.0 (+ (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666))))) 2.0))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 6.5e+145) {
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 6.5d+145) then
        tmp = 1.0d0 / ((a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0)))))) + 2.0d0)
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 6.5e+145) {
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 6.5e+145:
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0)
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 6.5e+145)
		tmp = Float64(1.0 / Float64(Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))))) + 2.0));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 6.5e+145)
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 6.5e+145], N[(1.0 / N[(N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 6.5 \cdot 10^{+145}:\\
\;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 6.50000000000000034e145

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.2%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.2%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.2%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.1%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 71.1%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 71.1%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 60.7%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 6.50000000000000034e145 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 63.6%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 63.6%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 91.8%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 91.8%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 91.8%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 64.7%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 6.5 \cdot 10^{+145}:\\ \;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing
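For reference, the two regime bodies above are Horner forms of truncated Maclaurin series of the simplified expressions \(1/(e^{-a}+1)\) and \(1/(e^{b}+1)\). The series below are a standard sanity check, not part of Herbie's output:

```latex
% b-regime (b \le 6.5 \cdot 10^{145}): cubic truncation of e^{-a}
\frac{1}{e^{-a}+1} \approx \frac{1}{2 - a + \frac{a^2}{2} - \frac{a^3}{6}}
  = \frac{1}{a\left(-1 + a\left(\tfrac{1}{2} - \tfrac{a}{6}\right)\right) + 2}

% else-regime: quadratic truncation of e^{b}
\frac{1}{e^{b}+1} \approx \frac{1}{2 + b + \frac{b^2}{2}}
  = \frac{1}{2 + b\left(1 + \tfrac{b}{2}\right)}
```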

Alternative 9: 70.8% accurate, 15.2× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 2.5 \cdot 10^{+96}:\\ \;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 2.5e+96)
   (/ 1.0 (+ (* a (+ -1.0 (* a (+ 0.5 (* a -0.16666666666666666))))) 2.0))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b (+ 0.5 (* b 0.16666666666666666)))))))))
double code(double a, double b) {
	double tmp;
	if (b <= 2.5e+96) {
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 2.5d+96) then
        tmp = 1.0d0 / ((a * ((-1.0d0) + (a * (0.5d0 + (a * (-0.16666666666666666d0)))))) + 2.0d0)
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * (0.5d0 + (b * 0.16666666666666666d0))))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 2.5e+96) {
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 2.5e+96:
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0)
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 2.5e+96)
		tmp = Float64(1.0 / Float64(Float64(a * Float64(-1.0 + Float64(a * Float64(0.5 + Float64(a * -0.16666666666666666))))) + 2.0));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * Float64(0.5 + Float64(b * 0.16666666666666666)))))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 2.5e+96)
		tmp = 1.0 / ((a * (-1.0 + (a * (0.5 + (a * -0.16666666666666666))))) + 2.0);
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * (0.5 + (b * 0.16666666666666666))))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 2.5e+96], N[(1.0 / N[(N[(a * N[(-1.0 + N[(a * N[(0.5 + N[(a * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * N[(0.5 + N[(b * 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 2.5 \cdot 10^{+96}:\\
\;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 2.5000000000000002e96

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.5%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.5%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.5%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.4%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.4%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 72.6%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 72.6%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 72.6%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 61.8%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(a \cdot \left(0.5 + -0.16666666666666666 \cdot a\right) - 1\right)}} \]

    if 2.5000000000000002e96 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 64.3%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 64.3%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + 0.16666666666666666 \cdot b\right)\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 97.9%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + \color{blue}{b \cdot 0.16666666666666666}\right)\right)} \]
    8. Simplified 97.9%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 67.7%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 2.5 \cdot 10^{+96}:\\ \;\;\;\;\frac{1}{a \cdot \left(-1 + a \cdot \left(0.5 + a \cdot -0.16666666666666666\right)\right) + 2}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot \left(0.5 + b \cdot 0.16666666666666666\right)\right)}\\ \end{array} \]
  5. Add Preprocessing

Alternative 10: 63.6% accurate, 19.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;b \leq 1.35 \cdot 10^{+129}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \end{array} \]
(FPCore (a b)
 :precision binary64
 (if (<= b 1.35e+129)
   (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a 0.5)))))
   (/ 1.0 (+ 2.0 (* b (+ 1.0 (* b 0.5)))))))
double code(double a, double b) {
	double tmp;
	if (b <= 1.35e+129) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (b <= 1.35d+129) then
        tmp = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * 0.5d0))))
    else
        tmp = 1.0d0 / (2.0d0 + (b * (1.0d0 + (b * 0.5d0))))
    end if
    code = tmp
end function
public static double code(double a, double b) {
	double tmp;
	if (b <= 1.35e+129) {
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	} else {
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	}
	return tmp;
}
def code(a, b):
	tmp = 0
	if b <= 1.35e+129:
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))))
	else:
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))))
	return tmp
function code(a, b)
	tmp = 0.0
	if (b <= 1.35e+129)
		tmp = Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * 0.5)))));
	else
		tmp = Float64(1.0 / Float64(2.0 + Float64(b * Float64(1.0 + Float64(b * 0.5)))));
	end
	return tmp
end
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (b <= 1.35e+129)
		tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
	else
		tmp = 1.0 / (2.0 + (b * (1.0 + (b * 0.5))));
	end
	tmp_2 = tmp;
end
code[a_, b_] := If[LessEqual[b, 1.35e+129], N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(1.0 / N[(2.0 + N[(b * N[(1.0 + N[(b * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;b \leq 1.35 \cdot 10^{+129}:\\
\;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if b < 1.35e129

    1. Initial program 98.6%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 98.6%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 98.6%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 98.6%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 98.6%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 72.0%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 72.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 99.5%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 99.5%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in b around 0 72.2%

      \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
    6. Step-by-step derivation
      1. rec-exp 72.2%

        \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    7. Simplified 72.2%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
    8. Taylor expanded in a around 0 57.8%

      \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]

    if 1.35e129 < b

    1. Initial program 100.0%

      \[\frac{e^{a}}{e^{a} + e^{b}} \]
    2. Step-by-step derivation
      1. *-lft-identity 100.0%

        \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
      2. associate-*l/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
      3. associate-/r/ 100.0%

        \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
      4. +-commutative 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
      5. remove-double-neg 100.0%

        \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
      6. sub-neg 100.0%

        \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
      7. div-sub 65.8%

        \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
      8. neg-mul-1 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
      9. *-commutative 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
      10. associate-*r/ 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
      11. metadata-eval 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
      12. distribute-neg-frac 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
      13. exp-neg 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
      14. distribute-rgt-neg-out 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
      15. exp-neg 65.8%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
      16. rgt-mult-inverse 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
    4. Add Preprocessing
    5. Taylor expanded in a around 0 100.0%

      \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
    6. Taylor expanded in b around 0 80.8%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + 0.5 \cdot b\right)}} \]
    7. Step-by-step derivation
      1. *-commutative 80.8%

        \[\leadsto \frac{1}{2 + b \cdot \left(1 + \color{blue}{b \cdot 0.5}\right)} \]
    8. Simplified 80.8%

      \[\leadsto \frac{1}{\color{blue}{2 + b \cdot \left(1 + b \cdot 0.5\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 61.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;b \leq 1.35 \cdot 10^{+129}:\\ \;\;\;\;\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{2 + b \cdot \left(1 + b \cdot 0.5\right)}\\ \end{array} \]
  5. Add Preprocessing
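As an illustrative cross-check (not part of the report; `exact` and `alt10_small_b` are made-up names), the a-regime polynomial of this alternative agrees closely with the original quotient near the origin, where both forms are well conditioned:

```python
import math

def exact(a, b):
    # original program; safe here because the inputs are tiny
    return math.exp(a) / (math.exp(a) + math.exp(b))

def alt10_small_b(a):
    # a-regime body of this alternative: 1 / (2 + a*(-1 + a*0.5)),
    # i.e. the quadratic truncation of 1 / (e^{-a} + 1)
    return 1.0 / (2.0 + a * (-1.0 + a * 0.5))

# near (0, 0) the truncation tracks the exact quotient to ~1e-7
for a in (0.0, 0.01, -0.01):
    assert abs(exact(a, 0.0) - alt10_small_b(a)) < 1e-6
```

The truncation error grows cubically in `a`, which is why this alternative's overall accuracy (63.6%) is so much lower than its accuracy at the sampled points near zero.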

Alternative 11: 53.3% accurate, 27.7× speedup

\[\begin{array}{l} \\ \frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 2.0 (* a (+ -1.0 (* a 0.5))))))
double code(double a, double b) {
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 + (a * ((-1.0d0) + (a * 0.5d0))))
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
}
def code(a, b):
	return 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))))
function code(a, b)
	return Float64(1.0 / Float64(2.0 + Float64(a * Float64(-1.0 + Float64(a * 0.5)))))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 + (a * (-1.0 + (a * 0.5))));
end
code[a_, b_] := N[(1.0 / N[(2.0 + N[(a * N[(-1.0 + N[(a * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 66.9%

    \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
  6. Step-by-step derivation
    1. rec-exp 66.9%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  7. Simplified 66.9%

    \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  8. Taylor expanded in a around 0 52.4%

    \[\leadsto \frac{1}{\color{blue}{2 + a \cdot \left(0.5 \cdot a - 1\right)}} \]
  9. Final simplification 52.4%

    \[\leadsto \frac{1}{2 + a \cdot \left(-1 + a \cdot 0.5\right)} \]
  10. Add Preprocessing

Alternative 12: 40.0% accurate, 61.0× speedup

\[\begin{array}{l} \\ \frac{1}{2 - a} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (- 2.0 a)))
double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (2.0d0 - a)
end function
public static double code(double a, double b) {
	return 1.0 / (2.0 - a);
}
def code(a, b):
	return 1.0 / (2.0 - a)
function code(a, b)
	return Float64(1.0 / Float64(2.0 - a))
end
function tmp = code(a, b)
	tmp = 1.0 / (2.0 - a);
end
code[a_, b_] := N[(1.0 / N[(2.0 - a), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2 - a}
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in b around 0 66.9%

    \[\leadsto \frac{1}{\color{blue}{\frac{1}{e^{a}}} - -1} \]
  6. Step-by-step derivation
    1. rec-exp 66.9%

      \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  7. Simplified 66.9%

    \[\leadsto \frac{1}{\color{blue}{e^{-a}} - -1} \]
  8. Taylor expanded in a around 0 38.1%

    \[\leadsto \frac{1}{\color{blue}{2 + -1 \cdot a}} \]
  9. Step-by-step derivation
    1. neg-mul-1 38.1%

      \[\leadsto \frac{1}{2 + \color{blue}{\left(-a\right)}} \]
    2. unsub-neg 38.1%

      \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  10. Simplified 38.1%

    \[\leadsto \frac{1}{\color{blue}{2 - a}} \]
  11. Final simplification 38.1%

    \[\leadsto \frac{1}{2 - a} \]
  12. Add Preprocessing

Alternative 13: 39.2% accurate, 305.0× speedup

\[\begin{array}{l} \\ 0.5 \end{array} \]
(FPCore (a b) :precision binary64 0.5)
double code(double a, double b) {
	return 0.5;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 0.5d0
end function
public static double code(double a, double b) {
	return 0.5;
}
def code(a, b):
	return 0.5
function code(a, b)
	return 0.5
end
function tmp = code(a, b)
	tmp = 0.5;
end
code[a_, b_] := 0.5
\begin{array}{l}

\\
0.5
\end{array}
Derivation
  1. Initial program 98.8%

    \[\frac{e^{a}}{e^{a} + e^{b}} \]
  2. Step-by-step derivation
    1. *-lft-identity 98.8%

      \[\leadsto \frac{\color{blue}{1 \cdot e^{a}}}{e^{a} + e^{b}} \]
    2. associate-*l/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{e^{a} + e^{b}} \cdot e^{a}} \]
    3. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{a} + e^{b}}{e^{a}}}} \]
    4. +-commutative 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} + e^{a}}}{e^{a}}} \]
    5. remove-double-neg 98.8%

      \[\leadsto \frac{1}{\frac{e^{b} + \color{blue}{\left(-\left(-e^{a}\right)\right)}}{e^{a}}} \]
    6. sub-neg 98.8%

      \[\leadsto \frac{1}{\frac{\color{blue}{e^{b} - \left(-e^{a}\right)}}{e^{a}}} \]
    7. div-sub 71.1%

      \[\leadsto \frac{1}{\color{blue}{\frac{e^{b}}{e^{a}} - \frac{-e^{a}}{e^{a}}}} \]
    8. neg-mul-1 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{-1 \cdot e^{a}}}{e^{a}}} \]
    9. *-commutative 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \frac{\color{blue}{e^{a} \cdot -1}}{e^{a}}} \]
    10. associate-*r/ 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{e^{a} \cdot \frac{-1}{e^{a}}}} \]
    11. metadata-eval 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \frac{\color{blue}{-1}}{e^{a}}} \]
    12. distribute-neg-frac 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \color{blue}{\left(-\frac{1}{e^{a}}\right)}} \]
    13. exp-neg 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - e^{a} \cdot \left(-\color{blue}{e^{-a}}\right)} \]
    14. distribute-rgt-neg-out 71.0%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{\left(-e^{a} \cdot e^{-a}\right)}} \]
    15. exp-neg 71.1%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-e^{a} \cdot \color{blue}{\frac{1}{e^{a}}}\right)} \]
    16. rgt-mult-inverse 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \left(-\color{blue}{1}\right)} \]
    17. metadata-eval 99.6%

      \[\leadsto \frac{1}{\frac{e^{b}}{e^{a}} - \color{blue}{-1}} \]
  3. Simplified 99.6%

    \[\leadsto \color{blue}{\frac{1}{\frac{e^{b}}{e^{a}} - -1}} \]
  4. Add Preprocessing
  5. Taylor expanded in a around 0 81.1%

    \[\leadsto \frac{1}{\color{blue}{e^{b}} - -1} \]
  6. Taylor expanded in b around 0 37.4%

    \[\leadsto \color{blue}{0.5} \]
  7. Final simplification 37.4%

    \[\leadsto 0.5 \]
  8. Add Preprocessing

Developer target: 100.0% accurate, 2.9× speedup

\[\begin{array}{l} \\ \frac{1}{1 + e^{b - a}} \end{array} \]
(FPCore (a b) :precision binary64 (/ 1.0 (+ 1.0 (exp (- b a)))))
double code(double a, double b) {
	return 1.0 / (1.0 + exp((b - a)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = 1.0d0 / (1.0d0 + exp((b - a)))
end function
public static double code(double a, double b) {
	return 1.0 / (1.0 + Math.exp((b - a)));
}
def code(a, b):
	return 1.0 / (1.0 + math.exp((b - a)))
function code(a, b)
	return Float64(1.0 / Float64(1.0 + exp(Float64(b - a))))
end
function tmp = code(a, b)
	tmp = 1.0 / (1.0 + exp((b - a)));
end
code[a_, b_] := N[(1.0 / N[(1.0 + N[Exp[N[(b - a), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{1 + e^{b - a}}
\end{array}
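The developer target is more than a cosmetic rewrite: it never exponentiates a or b alone, only their difference, so it survives inputs where the original quotient overflows. A quick illustrative check (`naive` and `target` are made-up names, and Python's `math.exp` raises rather than returning infinity):

```python
import math

def naive(a, b):
    # original program: exp(a) overflows binary64 for a > ~709.8
    return math.exp(a) / (math.exp(a) + math.exp(b))

def target(a, b):
    # developer target: only the difference b - a is exponentiated
    return 1.0 / (1.0 + math.exp(b - a))

print(target(710.0, 710.0))   # 0.5, the exact answer
try:
    naive(710.0, 710.0)
except OverflowError:
    print("naive form overflows")
```

Note that in Python the target form still raises `OverflowError` when `b - a` exceeds ~709.8; in the C listing above, `exp(b - a)` returns `inf` instead and the quotient correctly flushes to 0.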

Reproduce

herbie shell --seed 2024077 
(FPCore (a b)
  :name "Quotient of sum of exps"
  :precision binary64

  :alt
  (/ 1.0 (+ 1.0 (exp (- b a))))

  (/ (exp a) (+ (exp a) (exp b))))