symmetry log of sum of exp

Percentage Accurate: 53.9% → 98.9%
Time: 12.0s
Alternatives: 11
Speedup: 2.8×

Specification

\[\begin{array}{l} \\ \log \left(e^{a} + e^{b}\right) \end{array} \]
(FPCore (a b) :precision binary64 (log (+ (exp a) (exp b))))
double code(double a, double b) {
	return log((exp(a) + exp(b)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = log((exp(a) + exp(b)))
end function
public static double code(double a, double b) {
	return Math.log((Math.exp(a) + Math.exp(b)));
}
def code(a, b):
	return math.log((math.exp(a) + math.exp(b)))
function code(a, b)
	return log(Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = log((exp(a) + exp(b)));
end
code[a_, b_] := N[Log[N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(e^{a} + e^{b}\right)
\end{array}
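For large inputs the specification's naive formula is not just inaccurate but fails outright: exp overflows long before the true result log(e^a + e^b) does. A minimal sketch of the failure mode and of the standard log-sum-exp shift (shown only for comparison; it is not one of Herbie's alternatives):

import math

def naive(a, b):
    # math.exp overflows (OverflowError) once its argument exceeds about 709.78
    return math.log(math.exp(a) + math.exp(b))

def shifted(a, b):
    # standard log-sum-exp trick: factor out the larger exponent,
    # so the remaining exp argument is <= 0 and cannot overflow
    m = max(a, b)
    return m + math.log1p(math.exp(-abs(a - b)))

print(shifted(800.0, 800.0))   # about 800.693 (= 800 + log 2)
# naive(800.0, 800.0) raises "OverflowError: math range error"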

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 11 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 53.9% accurate, 1.0× speedup

\[\begin{array}{l} \\ \log \left(e^{a} + e^{b}\right) \end{array} \]
(FPCore (a b) :precision binary64 (log (+ (exp a) (exp b))))
double code(double a, double b) {
	return log((exp(a) + exp(b)));
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = log((exp(a) + exp(b)))
end function
public static double code(double a, double b) {
	return Math.log((Math.exp(a) + Math.exp(b)));
}
def code(a, b):
	return math.log((math.exp(a) + math.exp(b)))
function code(a, b)
	return log(Float64(exp(a) + exp(b)))
end
function tmp = code(a, b)
	tmp = log((exp(a) + exp(b)));
end
code[a_, b_] := N[Log[N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(e^{a} + e^{b}\right)
\end{array}

Alternative 1: 98.9% accurate, 0.7× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\log \left(e^{a} + e^{b}\right)\\ \end{array} \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.0) (/ b (+ (exp a) 1.0)) (log (+ (exp a) (exp b)))))
assert(a < b);
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.0) {
		tmp = b / (exp(a) + 1.0);
	} else {
		tmp = log((exp(a) + exp(b)));
	}
	return tmp;
}
NOTE: a and b should be sorted in increasing order before calling this function.
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (exp(a) <= 0.0d0) then
        tmp = b / (exp(a) + 1.0d0)
    else
        tmp = log((exp(a) + exp(b)))
    end if
    code = tmp
end function
assert a < b;
public static double code(double a, double b) {
	double tmp;
	if (Math.exp(a) <= 0.0) {
		tmp = b / (Math.exp(a) + 1.0);
	} else {
		tmp = Math.log((Math.exp(a) + Math.exp(b)));
	}
	return tmp;
}
[a, b] = sort([a, b])
def code(a, b):
	tmp = 0
	if math.exp(a) <= 0.0:
		tmp = b / (math.exp(a) + 1.0)
	else:
		tmp = math.log((math.exp(a) + math.exp(b)))
	return tmp
a, b = sort([a, b])
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.0)
		tmp = Float64(b / Float64(exp(a) + 1.0));
	else
		tmp = log(Float64(exp(a) + exp(b)));
	end
	return tmp
end
a, b = num2cell(sort([a, b])){:}
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (exp(a) <= 0.0)
		tmp = b / (exp(a) + 1.0);
	else
		tmp = log((exp(a) + exp(b)));
	end
	tmp_2 = tmp;
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.0], N[(b / N[(N[Exp[a], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision], N[Log[N[(N[Exp[a], $MachinePrecision] + N[Exp[b], $MachinePrecision]), $MachinePrecision]], $MachinePrecision]]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0:\\
\;\;\;\;\frac{b}{e^{a} + 1}\\

\mathbf{else}:\\
\;\;\;\;\log \left(e^{a} + e^{b}\right)\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.0

    1. Initial program 7.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. *-rgt-identityN/A

        \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
      2. associate-*r/N/A

        \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
      3. +-lowering-+.f64N/A

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
      4. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
      5. exp-lowering-exp.f64N/A

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
      6. associate-*r/N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
      7. *-rgt-identityN/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
      8. /-lowering-/.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. +-lowering-+.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
      10. exp-lowering-exp.f6498.4

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
    5. Simplified98.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    6. Taylor expanded in b around inf

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
    7. Step-by-step derivation
      1. /-lowering-/.f64N/A

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      2. +-lowering-+.f64N/A

        \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
      3. exp-lowering-exp.f6498.4

        \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
    8. Simplified98.4%

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]

    if 0.0 < (exp.f64 a)

    1. Initial program 67.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
  3. Recombined 2 regimes into one program.
  4. Final simplification74.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\log \left(e^{a} + e^{b}\right)\\ \end{array} \]
  5. Add Preprocessing
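As a quick check of the "Taylor expanded in b around 0" step used above (a hand verification, not part of Herbie's output): writing f(b) = log(e^a + e^b), we have f(0) = log(1 + e^a) and f'(b) = e^b / (e^a + e^b), so f'(0) = 1 / (1 + e^a), and the first-order expansion is

\[\log \left(e^{a} + e^{b}\right) \approx \log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}, \]

which is exactly the expression that is then lowered to log1p(e^a) + b / (1 + e^a).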

Alternative 2: 98.4% accurate, 1.0× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \mathsf{log1p}\left(e^{a}\right) + \frac{b}{e^{a} + 1} \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b) :precision binary64 (+ (log1p (exp a)) (/ b (+ (exp a) 1.0))))
assert(a < b);
double code(double a, double b) {
	return log1p(exp(a)) + (b / (exp(a) + 1.0));
}
assert a < b;
public static double code(double a, double b) {
	return Math.log1p(Math.exp(a)) + (b / (Math.exp(a) + 1.0));
}
[a, b] = sort([a, b])
def code(a, b):
	return math.log1p(math.exp(a)) + (b / (math.exp(a) + 1.0))
a, b = sort([a, b])
function code(a, b)
	return Float64(log1p(exp(a)) + Float64(b / Float64(exp(a) + 1.0)))
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := N[(N[Log[1 + N[Exp[a], $MachinePrecision]], $MachinePrecision] + N[(b / N[(N[Exp[a], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\mathsf{log1p}\left(e^{a}\right) + \frac{b}{e^{a} + 1}
\end{array}
Derivation
  1. Initial program 52.7%

    \[\log \left(e^{a} + e^{b}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in b around 0

    \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
  4. Step-by-step derivation
    1. *-rgt-identityN/A

      \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
    2. associate-*r/N/A

      \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
    3. +-lowering-+.f64N/A

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
    4. accelerator-lowering-log1p.f64N/A

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
    5. exp-lowering-exp.f64N/A

      \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
    6. associate-*r/N/A

      \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
    7. *-rgt-identityN/A

      \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
    8. /-lowering-/.f64N/A

      \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
    9. +-lowering-+.f64N/A

      \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
    10. exp-lowering-exp.f6472.5

      \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
  5. Simplified72.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
  6. Final simplification72.5%

    \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{e^{a} + 1} \]
  7. Add Preprocessing
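A minimal usage sketch for the Python version of this alternative (illustrative only; the wrapper name lse2 is hypothetical): the sort preprocessing is applied by the caller, so code only ever sees a <= b.

import math

def code(a, b):
    return math.log1p(math.exp(a)) + (b / (math.exp(a) + 1.0))

def lse2(a, b):
    # preprocessing step: sort the arguments so that a <= b
    a, b = sorted((a, b))
    return code(a, b)

print(lse2(3.0, -40.0))  # 3.0; only exp(-40) is evaluated, never exp(3)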

Alternative 3: 98.2% accurate, 1.0× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\log \left(e^{b} + \left(a + 1\right)\right)\\ \end{array} \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.0) (/ b (+ (exp a) 1.0)) (log (+ (exp b) (+ a 1.0)))))
assert(a < b);
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.0) {
		tmp = b / (exp(a) + 1.0);
	} else {
		tmp = log((exp(b) + (a + 1.0)));
	}
	return tmp;
}
NOTE: a and b should be sorted in increasing order before calling this function.
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    real(8) :: tmp
    if (exp(a) <= 0.0d0) then
        tmp = b / (exp(a) + 1.0d0)
    else
        tmp = log((exp(b) + (a + 1.0d0)))
    end if
    code = tmp
end function
assert a < b;
public static double code(double a, double b) {
	double tmp;
	if (Math.exp(a) <= 0.0) {
		tmp = b / (Math.exp(a) + 1.0);
	} else {
		tmp = Math.log((Math.exp(b) + (a + 1.0)));
	}
	return tmp;
}
[a, b] = sort([a, b])
def code(a, b):
	tmp = 0
	if math.exp(a) <= 0.0:
		tmp = b / (math.exp(a) + 1.0)
	else:
		tmp = math.log((math.exp(b) + (a + 1.0)))
	return tmp
a, b = sort([a, b])
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.0)
		tmp = Float64(b / Float64(exp(a) + 1.0));
	else
		tmp = log(Float64(exp(b) + Float64(a + 1.0)));
	end
	return tmp
end
a, b = num2cell(sort([a, b])){:}
function tmp_2 = code(a, b)
	tmp = 0.0;
	if (exp(a) <= 0.0)
		tmp = b / (exp(a) + 1.0);
	else
		tmp = log((exp(b) + (a + 1.0)));
	end
	tmp_2 = tmp;
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.0], N[(b / N[(N[Exp[a], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision], N[Log[N[(N[Exp[b], $MachinePrecision] + N[(a + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0:\\
\;\;\;\;\frac{b}{e^{a} + 1}\\

\mathbf{else}:\\
\;\;\;\;\log \left(e^{b} + \left(a + 1\right)\right)\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.0

    1. Initial program 7.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. *-rgt-identityN/A

        \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
      2. associate-*r/N/A

        \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
      3. +-lowering-+.f64N/A

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
      4. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
      5. exp-lowering-exp.f64N/A

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
      6. associate-*r/N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
      7. *-rgt-identityN/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
      8. /-lowering-/.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. +-lowering-+.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
      10. exp-lowering-exp.f6498.4

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
    5. Simplified98.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    6. Taylor expanded in b around inf

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
    7. Step-by-step derivation
      1. /-lowering-/.f64N/A

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      2. +-lowering-+.f64N/A

        \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
      3. exp-lowering-exp.f6498.4

        \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
    8. Simplified98.4%

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]

    if 0.0 < (exp.f64 a)

    1. Initial program 67.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in a around 0

      \[\leadsto \log \left(\color{blue}{\left(1 + a\right)} + e^{b}\right) \]
    4. Step-by-step derivation
      1. +-lowering-+.f6465.2

        \[\leadsto \log \left(\color{blue}{\left(1 + a\right)} + e^{b}\right) \]
    5. Simplified65.2%

      \[\leadsto \log \left(\color{blue}{\left(1 + a\right)} + e^{b}\right) \]
  3. Recombined 2 regimes into one program.
  4. Final simplification73.1%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\log \left(e^{b} + \left(a + 1\right)\right)\\ \end{array} \]
  5. Add Preprocessing
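The else branch above comes from replacing e^a with its first-order expansion in a (a hand check, not part of Herbie's output): since e^a = 1 + a + a^2/2 + ..., dropping the quadratic and higher terms gives

\[\log \left(e^{a} + e^{b}\right) \approx \log \left(\left(1 + a\right) + e^{b}\right), \]

with an error of order a^2, so the approximation is good when a is close to 0 while b may still be large.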

Alternative 4: 97.8% accurate, 1.0× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(e^{b}\right)\\ \end{array} \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.0) (/ b (+ (exp a) 1.0)) (log1p (exp b))))
assert(a < b);
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.0) {
		tmp = b / (exp(a) + 1.0);
	} else {
		tmp = log1p(exp(b));
	}
	return tmp;
}
assert a < b;
public static double code(double a, double b) {
	double tmp;
	if (Math.exp(a) <= 0.0) {
		tmp = b / (Math.exp(a) + 1.0);
	} else {
		tmp = Math.log1p(Math.exp(b));
	}
	return tmp;
}
[a, b] = sort([a, b])
def code(a, b):
	tmp = 0
	if math.exp(a) <= 0.0:
		tmp = b / (math.exp(a) + 1.0)
	else:
		tmp = math.log1p(math.exp(b))
	return tmp
a, b = sort([a, b])
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.0)
		tmp = Float64(b / Float64(exp(a) + 1.0));
	else
		tmp = log1p(exp(b));
	end
	return tmp
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.0], N[(b / N[(N[Exp[a], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision], N[Log[1 + N[Exp[b], $MachinePrecision]], $MachinePrecision]]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0:\\
\;\;\;\;\frac{b}{e^{a} + 1}\\

\mathbf{else}:\\
\;\;\;\;\mathsf{log1p}\left(e^{b}\right)\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.0

    1. Initial program 7.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. *-rgt-identityN/A

        \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
      2. associate-*r/N/A

        \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
      3. +-lowering-+.f64N/A

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
      4. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
      5. exp-lowering-exp.f64N/A

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
      6. associate-*r/N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
      7. *-rgt-identityN/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
      8. /-lowering-/.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. +-lowering-+.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
      10. exp-lowering-exp.f6498.4

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
    5. Simplified98.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    6. Taylor expanded in b around inf

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
    7. Step-by-step derivation
      1. /-lowering-/.f64N/A

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      2. +-lowering-+.f64N/A

        \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
      3. exp-lowering-exp.f6498.4

        \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
    8. Simplified98.4%

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]

    if 0.0 < (exp.f64 a)

    1. Initial program 67.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in a around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{b}\right)} \]
    4. Step-by-step derivation
      1. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{b}\right)} \]
      2. exp-lowering-exp.f6463.1

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{b}}\right) \]
    5. Simplified63.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{b}\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification71.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(e^{b}\right)\\ \end{array} \]
  5. Add Preprocessing
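A quick numerical sanity check of the else branch (illustrative only, using moderate inputs where the naive formula is still safe): dropping a entirely, i.e. treating e^a as 1, changes the result only slightly when a is close to 0.

import math

def check(a, b):
    exact = math.log(math.exp(a) + math.exp(b))  # safe here: inputs are moderate
    approx = math.log1p(math.exp(b))             # else branch: treat e^a as exactly 1
    return exact, approx

print(check(1e-9, 5.0))  # the two values differ only in the last few digits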

Alternative 5: 97.5% accurate, 1.4× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\ \end{array} \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b)
 :precision binary64
 (if (<= (exp a) 0.0) (/ b (+ (exp a) 1.0)) (fma a 0.5 (fma 0.5 b (log 2.0)))))
assert(a < b);
double code(double a, double b) {
	double tmp;
	if (exp(a) <= 0.0) {
		tmp = b / (exp(a) + 1.0);
	} else {
		tmp = fma(a, 0.5, fma(0.5, b, log(2.0)));
	}
	return tmp;
}
a, b = sort([a, b])
function code(a, b)
	tmp = 0.0
	if (exp(a) <= 0.0)
		tmp = Float64(b / Float64(exp(a) + 1.0));
	else
		tmp = fma(a, 0.5, fma(0.5, b, log(2.0)));
	end
	return tmp
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := If[LessEqual[N[Exp[a], $MachinePrecision], 0.0], N[(b / N[(N[Exp[a], $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision], N[(a * 0.5 + N[(0.5 * b + N[Log[2.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\begin{array}{l}
\mathbf{if}\;e^{a} \leq 0:\\
\;\;\;\;\frac{b}{e^{a} + 1}\\

\mathbf{else}:\\
\;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (exp.f64 a) < 0.0

    1. Initial program 7.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. *-rgt-identityN/A

        \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
      2. associate-*r/N/A

        \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
      3. +-lowering-+.f64N/A

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
      4. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
      5. exp-lowering-exp.f64N/A

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
      6. associate-*r/N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
      7. *-rgt-identityN/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
      8. /-lowering-/.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. +-lowering-+.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
      10. exp-lowering-exp.f6498.4

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
    5. Simplified98.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    6. Taylor expanded in b around inf

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
    7. Step-by-step derivation
      1. /-lowering-/.f64N/A

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      2. +-lowering-+.f64N/A

        \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
      3. exp-lowering-exp.f6498.4

        \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
    8. Simplified98.4%

      \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]

    if 0.0 < (exp.f64 a)

    1. Initial program 67.0%

      \[\log \left(e^{a} + e^{b}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in b around 0

      \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    4. Step-by-step derivation
      1. *-rgt-identityN/A

        \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
      2. associate-*r/N/A

        \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
      3. +-lowering-+.f64N/A

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
      4. accelerator-lowering-log1p.f64N/A

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
      5. exp-lowering-exp.f64N/A

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
      6. associate-*r/N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
      7. *-rgt-identityN/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
      8. /-lowering-/.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. +-lowering-+.f64N/A

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
      10. exp-lowering-exp.f6464.4

        \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
    5. Simplified64.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
    6. Taylor expanded in a around 0

      \[\leadsto \color{blue}{\log 2 + \left(\frac{1}{2} \cdot b + a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right)\right)} \]
    7. Step-by-step derivation
      1. associate-+r+N/A

        \[\leadsto \color{blue}{\left(\log 2 + \frac{1}{2} \cdot b\right) + a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right)} \]
      2. +-commutativeN/A

        \[\leadsto \color{blue}{a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right) + \left(\log 2 + \frac{1}{2} \cdot b\right)} \]
      3. accelerator-lowering-fma.f64N/A

        \[\leadsto \color{blue}{\mathsf{fma}\left(a, \frac{1}{2} - \frac{1}{4} \cdot b, \log 2 + \frac{1}{2} \cdot b\right)} \]
      4. sub-negN/A

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{\frac{1}{2} + \left(\mathsf{neg}\left(\frac{1}{4} \cdot b\right)\right)}, \log 2 + \frac{1}{2} \cdot b\right) \]
      5. +-commutativeN/A

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{\left(\mathsf{neg}\left(\frac{1}{4} \cdot b\right)\right) + \frac{1}{2}}, \log 2 + \frac{1}{2} \cdot b\right) \]
      6. *-commutativeN/A

        \[\leadsto \mathsf{fma}\left(a, \left(\mathsf{neg}\left(\color{blue}{b \cdot \frac{1}{4}}\right)\right) + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
      7. distribute-rgt-neg-inN/A

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{b \cdot \left(\mathsf{neg}\left(\frac{1}{4}\right)\right)} + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
      8. metadata-evalN/A

        \[\leadsto \mathsf{fma}\left(a, b \cdot \color{blue}{\frac{-1}{4}} + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
      9. accelerator-lowering-fma.f64N/A

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{\mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right)}, \log 2 + \frac{1}{2} \cdot b\right) \]
      10. +-commutativeN/A

        \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right), \color{blue}{\frac{1}{2} \cdot b + \log 2}\right) \]
      11. accelerator-lowering-fma.f64N/A

        \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right), \color{blue}{\mathsf{fma}\left(\frac{1}{2}, b, \log 2\right)}\right) \]
      12. log-lowering-log.f6462.9

        \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, -0.25, 0.5\right), \mathsf{fma}\left(0.5, b, \color{blue}{\log 2}\right)\right) \]
    8. Simplified62.9%

      \[\leadsto \color{blue}{\mathsf{fma}\left(a, \mathsf{fma}\left(b, -0.25, 0.5\right), \mathsf{fma}\left(0.5, b, \log 2\right)\right)} \]
    9. Taylor expanded in b around 0

      \[\leadsto \mathsf{fma}\left(a, \color{blue}{\frac{1}{2}}, \mathsf{fma}\left(\frac{1}{2}, b, \log 2\right)\right) \]
    10. Step-by-step derivation
      1. Simplified62.9%

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{0.5}, \mathsf{fma}\left(0.5, b, \log 2\right)\right) \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 71.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;e^{a} \leq 0:\\ \;\;\;\;\frac{b}{e^{a} + 1}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\ \end{array} \]
  5. Add Preprocessing
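One way to sanity-check the fma branch (a hand derivation, not part of Herbie's output): factoring out e^{(a+b)/2} gives an exact identity, and expanding the cosh term shows the branch is exact on the diagonal a = b and degrades quadratically away from it:

\[\log \left(e^{a} + e^{b}\right) = \frac{a + b}{2} + \log \left(2\cosh\frac{b - a}{2}\right) \approx \frac{a + b}{2} + \log 2 + \frac{\left(b - a\right)^{2}}{8}. \]

Dropping the quadratic term leaves (a + b)/2 + log 2, which is what fma(a, 0.5, fma(0.5, b, log 2)) computes.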

Alternative 6: 57.3% accurate, 2.6× speedup

    \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -1.4:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\ \end{array} \end{array} \]
    NOTE: a and b should be sorted in increasing order before calling this function.
    (FPCore (a b)
     :precision binary64
     (if (<= a -1.4) (* b 0.5) (fma a 0.5 (fma 0.5 b (log 2.0)))))
    assert(a < b);
    double code(double a, double b) {
    	double tmp;
    	if (a <= -1.4) {
    		tmp = b * 0.5;
    	} else {
    		tmp = fma(a, 0.5, fma(0.5, b, log(2.0)));
    	}
    	return tmp;
    }
    
    a, b = sort([a, b])
    function code(a, b)
    	tmp = 0.0
    	if (a <= -1.4)
    		tmp = Float64(b * 0.5);
    	else
    		tmp = fma(a, 0.5, fma(0.5, b, log(2.0)));
    	end
    	return tmp
    end
    
    NOTE: a and b should be sorted in increasing order before calling this function.
    code[a_, b_] := If[LessEqual[a, -1.4], N[(b * 0.5), $MachinePrecision], N[(a * 0.5 + N[(0.5 * b + N[Log[2.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
    
    \begin{array}{l}
    [a, b] = \mathsf{sort}([a, b])\\
    \\
    \begin{array}{l}
    \mathbf{if}\;a \leq -1.4:\\
    \;\;\;\;b \cdot 0.5\\
    
    \mathbf{else}:\\
    \;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\
    
    
    \end{array}
    \end{array}
    
    Derivation
    1. Split input into 2 regimes
    2. if a < -1.3999999999999999

      1. Initial program 7.0%

        \[\log \left(e^{a} + e^{b}\right) \]
      2. Add Preprocessing
      3. Taylor expanded in b around 0

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
      4. Step-by-step derivation
        1. *-rgt-identityN/A

          \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
        2. associate-*r/N/A

          \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
        3. +-lowering-+.f64N/A

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
        4. accelerator-lowering-log1p.f64N/A

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
        5. exp-lowering-exp.f64N/A

          \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
        6. associate-*r/N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
        7. *-rgt-identityN/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
        8. /-lowering-/.f64N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. +-lowering-+.f64N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
        10. exp-lowering-exp.f6498.4

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
      5. Simplified98.4%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
      6. Taylor expanded in b around inf

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      7. Step-by-step derivation
        1. /-lowering-/.f64N/A

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        2. +-lowering-+.f64N/A

          \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
        3. exp-lowering-exp.f6498.4

          \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
      8. Simplified98.4%

        \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
      9. Taylor expanded in a around 0

        \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
      10. Step-by-step derivation
        1. *-lowering-*.f6418.5

          \[\leadsto \color{blue}{0.5 \cdot b} \]
      11. Simplified18.5%

        \[\leadsto \color{blue}{0.5 \cdot b} \]

      if -1.3999999999999999 < a

      1. Initial program 67.0%

        \[\log \left(e^{a} + e^{b}\right) \]
      2. Add Preprocessing
      3. Taylor expanded in b around 0

        \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
      4. Step-by-step derivation
        1. *-rgt-identityN/A

          \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
        2. associate-*r/N/A

          \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
        3. +-lowering-+.f64N/A

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
        4. accelerator-lowering-log1p.f64N/A

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
        5. exp-lowering-exp.f64N/A

          \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
        6. associate-*r/N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
        7. *-rgt-identityN/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
        8. /-lowering-/.f64N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. +-lowering-+.f64N/A

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
        10. exp-lowering-exp.f6464.4

          \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
      5. Simplified64.4%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
      6. Taylor expanded in a around 0

        \[\leadsto \color{blue}{\log 2 + \left(\frac{1}{2} \cdot b + a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right)\right)} \]
      7. Step-by-step derivation
        1. associate-+r+N/A

          \[\leadsto \color{blue}{\left(\log 2 + \frac{1}{2} \cdot b\right) + a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right)} \]
        2. +-commutativeN/A

          \[\leadsto \color{blue}{a \cdot \left(\frac{1}{2} - \frac{1}{4} \cdot b\right) + \left(\log 2 + \frac{1}{2} \cdot b\right)} \]
        3. accelerator-lowering-fma.f64N/A

          \[\leadsto \color{blue}{\mathsf{fma}\left(a, \frac{1}{2} - \frac{1}{4} \cdot b, \log 2 + \frac{1}{2} \cdot b\right)} \]
        4. sub-negN/A

          \[\leadsto \mathsf{fma}\left(a, \color{blue}{\frac{1}{2} + \left(\mathsf{neg}\left(\frac{1}{4} \cdot b\right)\right)}, \log 2 + \frac{1}{2} \cdot b\right) \]
        5. +-commutativeN/A

          \[\leadsto \mathsf{fma}\left(a, \color{blue}{\left(\mathsf{neg}\left(\frac{1}{4} \cdot b\right)\right) + \frac{1}{2}}, \log 2 + \frac{1}{2} \cdot b\right) \]
        6. *-commutativeN/A

          \[\leadsto \mathsf{fma}\left(a, \left(\mathsf{neg}\left(\color{blue}{b \cdot \frac{1}{4}}\right)\right) + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
        7. distribute-rgt-neg-inN/A

          \[\leadsto \mathsf{fma}\left(a, \color{blue}{b \cdot \left(\mathsf{neg}\left(\frac{1}{4}\right)\right)} + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
        8. metadata-evalN/A

          \[\leadsto \mathsf{fma}\left(a, b \cdot \color{blue}{\frac{-1}{4}} + \frac{1}{2}, \log 2 + \frac{1}{2} \cdot b\right) \]
        9. accelerator-lowering-fma.f64N/A

          \[\leadsto \mathsf{fma}\left(a, \color{blue}{\mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right)}, \log 2 + \frac{1}{2} \cdot b\right) \]
        10. +-commutativeN/A

          \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right), \color{blue}{\frac{1}{2} \cdot b + \log 2}\right) \]
        11. accelerator-lowering-fma.f64N/A

          \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, \frac{-1}{4}, \frac{1}{2}\right), \color{blue}{\mathsf{fma}\left(\frac{1}{2}, b, \log 2\right)}\right) \]
        12. log-lowering-log.f6462.9

          \[\leadsto \mathsf{fma}\left(a, \mathsf{fma}\left(b, -0.25, 0.5\right), \mathsf{fma}\left(0.5, b, \color{blue}{\log 2}\right)\right) \]
      8. Simplified62.9%

        \[\leadsto \color{blue}{\mathsf{fma}\left(a, \mathsf{fma}\left(b, -0.25, 0.5\right), \mathsf{fma}\left(0.5, b, \log 2\right)\right)} \]
      9. Taylor expanded in b around 0

        \[\leadsto \mathsf{fma}\left(a, \color{blue}{\frac{1}{2}}, \mathsf{fma}\left(\frac{1}{2}, b, \log 2\right)\right) \]
      10. Step-by-step derivation
        1. Simplified62.9%

          \[\leadsto \mathsf{fma}\left(a, \color{blue}{0.5}, \mathsf{fma}\left(0.5, b, \log 2\right)\right) \]
    3. Recombined 2 regimes into one program.
    4. Final simplification 52.3%

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1.4:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(a, 0.5, \mathsf{fma}\left(0.5, b, \log 2\right)\right)\\ \end{array} \]
    5. Add Preprocessing

Alternative 7: 57.2% accurate, 2.7× speedup

      \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -1:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\log \left(2 + \left(a + b\right)\right)\\ \end{array} \end{array} \]
      NOTE: a and b should be sorted in increasing order before calling this function.
      (FPCore (a b)
       :precision binary64
       (if (<= a -1.0) (* b 0.5) (log (+ 2.0 (+ a b)))))
      assert(a < b);
      double code(double a, double b) {
      	double tmp;
      	if (a <= -1.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = log((2.0 + (a + b)));
      	}
      	return tmp;
      }
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      real(8) function code(a, b)
          real(8), intent (in) :: a
          real(8), intent (in) :: b
          real(8) :: tmp
          if (a <= (-1.0d0)) then
              tmp = b * 0.5d0
          else
              tmp = log((2.0d0 + (a + b)))
          end if
          code = tmp
      end function
      
      assert a < b;
      public static double code(double a, double b) {
      	double tmp;
      	if (a <= -1.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = Math.log((2.0 + (a + b)));
      	}
      	return tmp;
      }
      
      [a, b] = sort([a, b])
      def code(a, b):
      	tmp = 0
      	if a <= -1.0:
      		tmp = b * 0.5
      	else:
      		tmp = math.log((2.0 + (a + b)))
      	return tmp
      
      a, b = sort([a, b])
      function code(a, b)
      	tmp = 0.0
      	if (a <= -1.0)
      		tmp = Float64(b * 0.5);
      	else
      		tmp = log(Float64(2.0 + Float64(a + b)));
      	end
      	return tmp
      end
      
      a, b = num2cell(sort([a, b])){:}
      function tmp_2 = code(a, b)
      	tmp = 0.0;
      	if (a <= -1.0)
      		tmp = b * 0.5;
      	else
      		tmp = log((2.0 + (a + b)));
      	end
      	tmp_2 = tmp;
      end
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      code[a_, b_] := If[LessEqual[a, -1.0], N[(b * 0.5), $MachinePrecision], N[Log[N[(2.0 + N[(a + b), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]]
      
      \begin{array}{l}
      [a, b] = \mathsf{sort}([a, b])\\
      \\
      \begin{array}{l}
      \mathbf{if}\;a \leq -1:\\
      \;\;\;\;b \cdot 0.5\\
      
      \mathbf{else}:\\
      \;\;\;\;\log \left(2 + \left(a + b\right)\right)\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 2 regimes
      2. if a < -1

        1. Initial program 7.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        4. Step-by-step derivation
          1. *-rgt-identityN/A

            \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
          2. associate-*r/N/A

            \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
          3. +-lowering-+.f64N/A

            \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
          4. accelerator-lowering-log1p.f64N/A

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
          5. exp-lowering-exp.f64N/A

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
          6. associate-*r/N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
          7. *-rgt-identityN/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
          8. /-lowering-/.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
          9. +-lowering-+.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
          10. exp-lowering-exp.f6498.4

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
        5. Simplified98.4%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        6. Taylor expanded in b around inf

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        7. Step-by-step derivation
          1. /-lowering-/.f64N/A

            \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
          2. +-lowering-+.f64N/A

            \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
          3. exp-lowering-exp.f6498.4

            \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
        8. Simplified98.4%

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
        10. Step-by-step derivation
          1. *-lowering-*.f6418.5

            \[\leadsto \color{blue}{0.5 \cdot b} \]
        11. Simplified18.5%

          \[\leadsto \color{blue}{0.5 \cdot b} \]

        if -1 < a

        1. Initial program 67.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        4. Step-by-step derivation
          1. +-lowering-+.f6463.4

            \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        5. Simplified63.4%

          \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        6. Taylor expanded in a around 0

          \[\leadsto \log \color{blue}{\left(2 + \left(a + b\right)\right)} \]
        7. Step-by-step derivation
          1. +-lowering-+.f64N/A

            \[\leadsto \log \color{blue}{\left(2 + \left(a + b\right)\right)} \]
          2. +-commutativeN/A

            \[\leadsto \log \left(2 + \color{blue}{\left(b + a\right)}\right) \]
          3. +-lowering-+.f6461.8

            \[\leadsto \log \left(2 + \color{blue}{\left(b + a\right)}\right) \]
        8. Simplified61.8%

          \[\leadsto \log \color{blue}{\left(2 + \left(b + a\right)\right)} \]
      3. Recombined 2 regimes into one program.
      4. Final simplification51.5%

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\log \left(2 + \left(a + b\right)\right)\\ \end{array} \]
      5. Add Preprocessing
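As a rough check of the else branch: for a and b both near 0, expanding both exponentials to first order gives e^a + e^b ≈ 2 + a + b, so the branch computes log(2 + (a + b)). A small numeric comparison (illustrative only):

import math

a, b = 0.01, 0.02
print(math.log(math.exp(a) + math.exp(b)))  # 0.7081...
print(math.log(2.0 + (a + b)))              # 0.7080...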

Alternative 8: 56.7% accurate, 2.8× speedup

      \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -1:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(a + 1\right)\\ \end{array} \end{array} \]
      NOTE: a and b should be sorted in increasing order before calling this function.
      (FPCore (a b) :precision binary64 (if (<= a -1.0) (* b 0.5) (log1p (+ a 1.0))))
      assert(a < b);
      double code(double a, double b) {
      	double tmp;
      	if (a <= -1.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = log1p((a + 1.0));
      	}
      	return tmp;
      }
      
      assert a < b;
      public static double code(double a, double b) {
      	double tmp;
      	if (a <= -1.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = Math.log1p((a + 1.0));
      	}
      	return tmp;
      }
      
      [a, b] = sort([a, b])
      def code(a, b):
      	tmp = 0
      	if a <= -1.0:
      		tmp = b * 0.5
      	else:
      		tmp = math.log1p((a + 1.0))
      	return tmp
      
      a, b = sort([a, b])
      function code(a, b)
      	tmp = 0.0
      	if (a <= -1.0)
      		tmp = Float64(b * 0.5);
      	else
      		tmp = log1p(Float64(a + 1.0));
      	end
      	return tmp
      end
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      code[a_, b_] := If[LessEqual[a, -1.0], N[(b * 0.5), $MachinePrecision], N[Log[1 + N[(a + 1.0), $MachinePrecision]], $MachinePrecision]]
      
      \begin{array}{l}
      [a, b] = \mathsf{sort}([a, b])\\
      \\
      \begin{array}{l}
      \mathbf{if}\;a \leq -1:\\
      \;\;\;\;b \cdot 0.5\\
      
      \mathbf{else}:\\
      \;\;\;\;\mathsf{log1p}\left(a + 1\right)\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 2 regimes
      2. if a < -1

        1. Initial program 7.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        4. Step-by-step derivation
          1. *-rgt-identityN/A

            \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
          2. associate-*r/N/A

            \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
          3. +-lowering-+.f64N/A

            \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
          4. accelerator-lowering-log1p.f64N/A

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
          5. exp-lowering-exp.f64N/A

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
          6. associate-*r/N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
          7. *-rgt-identityN/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
          8. /-lowering-/.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
          9. +-lowering-+.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
          10. exp-lowering-exp.f6498.4

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
        5. Simplified98.4%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        6. Taylor expanded in b around inf

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        7. Step-by-step derivation
          1. /-lowering-/.f64N/A

            \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
          2. +-lowering-+.f64N/A

            \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
          3. exp-lowering-exp.f6498.4

            \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
        8. Simplified98.4%

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
        10. Step-by-step derivation
          1. *-lowering-*.f6418.5

            \[\leadsto \color{blue}{0.5 \cdot b} \]
        11. Simplified18.5%

          \[\leadsto \color{blue}{0.5 \cdot b} \]

        if -1 < a

        1. Initial program 67.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right)} \]
        4. Step-by-step derivation
          1. accelerator-lowering-log1p.f64N/A

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} \]
          2. exp-lowering-exp.f6464.2

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) \]
        5. Simplified64.2%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} \]
        6. Taylor expanded in a around 0

          \[\leadsto \mathsf{log1p}\left(\color{blue}{1 + a}\right) \]
        7. Step-by-step derivation
          1. +-commutativeN/A

            \[\leadsto \mathsf{log1p}\left(\color{blue}{a + 1}\right) \]
          2. +-lowering-+.f6462.5

            \[\leadsto \mathsf{log1p}\left(\color{blue}{a + 1}\right) \]
        8. Simplified62.5%

          \[\leadsto \mathsf{log1p}\left(\color{blue}{a + 1}\right) \]
      3. Recombined 2 regimes into one program.
      4. Final simplification52.0%

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -1:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(a + 1\right)\\ \end{array} \]
      5. Add Preprocessing

Alternative 9: 56.8% accurate, 2.8× speedup

      \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -126:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\log \left(b + 2\right)\\ \end{array} \end{array} \]
      NOTE: a and b should be sorted in increasing order before calling this function.
      (FPCore (a b) :precision binary64 (if (<= a -126.0) (* b 0.5) (log (+ b 2.0))))
      assert(a < b);
      double code(double a, double b) {
      	double tmp;
      	if (a <= -126.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = log((b + 2.0));
      	}
      	return tmp;
      }
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      real(8) function code(a, b)
          real(8), intent (in) :: a
          real(8), intent (in) :: b
          real(8) :: tmp
          if (a <= (-126.0d0)) then
              tmp = b * 0.5d0
          else
              tmp = log((b + 2.0d0))
          end if
          code = tmp
      end function
      
      assert a < b;
      public static double code(double a, double b) {
      	double tmp;
      	if (a <= -126.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = Math.log((b + 2.0));
      	}
      	return tmp;
      }
      
      [a, b] = sort([a, b])
      def code(a, b):
      	tmp = 0
      	if a <= -126.0:
      		tmp = b * 0.5
      	else:
      		tmp = math.log((b + 2.0))
      	return tmp
      
      a, b = sort([a, b])
      function code(a, b)
      	tmp = 0.0
      	if (a <= -126.0)
      		tmp = Float64(b * 0.5);
      	else
      		tmp = log(Float64(b + 2.0));
      	end
      	return tmp
      end
      
      a, b = num2cell(sort([a, b])){:}
      function tmp_2 = code(a, b)
      	tmp = 0.0;
      	if (a <= -126.0)
      		tmp = b * 0.5;
      	else
      		tmp = log((b + 2.0));
      	end
      	tmp_2 = tmp;
      end
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      code[a_, b_] := If[LessEqual[a, -126.0], N[(b * 0.5), $MachinePrecision], N[Log[N[(b + 2.0), $MachinePrecision]], $MachinePrecision]]
      
      \begin{array}{l}
      [a, b] = \mathsf{sort}([a, b])\\
      \\
      \begin{array}{l}
      \mathbf{if}\;a \leq -126:\\
      \;\;\;\;b \cdot 0.5\\
      
      \mathbf{else}:\\
      \;\;\;\;\log \left(b + 2\right)\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 2 regimes
      2. if a < -126

        1. Initial program 7.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        4. Step-by-step derivation
          1. *-rgt-identityN/A

            \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
          2. associate-*r/N/A

            \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
          3. +-lowering-+.f64N/A

            \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
          4. accelerator-lowering-log1p.f64N/A

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
          5. exp-lowering-exp.f64N/A

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
          6. associate-*r/N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
          7. *-rgt-identityN/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
          8. /-lowering-/.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
          9. +-lowering-+.f64N/A

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
          10. exp-lowering-exp.f6498.4

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
        5. Simplified98.4%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        6. Taylor expanded in b around inf

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        7. Step-by-step derivation
          1. /-lowering-/.f64N/A

            \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
          2. +-lowering-+.f64N/A

            \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
          3. exp-lowering-exp.f6498.4

            \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
        8. Simplified98.4%

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
        10. Step-by-step derivation
          1. *-lowering-*.f6418.5

            \[\leadsto \color{blue}{0.5 \cdot b} \]
        11. Simplified18.5%

          \[\leadsto \color{blue}{0.5 \cdot b} \]

        if -126 < a

        1. Initial program 67.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        4. Step-by-step derivation
          1. +-lowering-+.f6463.4

            \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        5. Simplified63.4%

          \[\leadsto \log \left(e^{a} + \color{blue}{\left(1 + b\right)}\right) \]
        6. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\log \left(2 + b\right)} \]
        7. Step-by-step derivation
          1. log-lowering-log.f64N/A

            \[\leadsto \color{blue}{\log \left(2 + b\right)} \]
          2. +-lowering-+.f6460.5

            \[\leadsto \log \color{blue}{\left(2 + b\right)} \]
        8. Simplified60.5%

          \[\leadsto \color{blue}{\log \left(2 + b\right)} \]
      3. Recombined 2 regimes into one program.
      4. Final simplification50.5%

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -126:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\log \left(b + 2\right)\\ \end{array} \]
      5. Add Preprocessing

Alternative 10: 56.2% accurate, 2.8× speedup

      \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -170:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(1\right)\\ \end{array} \end{array} \]
      NOTE: a and b should be sorted in increasing order before calling this function.
      (FPCore (a b) :precision binary64 (if (<= a -170.0) (* b 0.5) (log1p 1.0)))
      assert(a < b);
      double code(double a, double b) {
      	double tmp;
      	if (a <= -170.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = log1p(1.0);
      	}
      	return tmp;
      }
      
      assert a < b;
      public static double code(double a, double b) {
      	double tmp;
      	if (a <= -170.0) {
      		tmp = b * 0.5;
      	} else {
      		tmp = Math.log1p(1.0);
      	}
      	return tmp;
      }
      
      [a, b] = sort([a, b])
      def code(a, b):
      	tmp = 0
      	if a <= -170.0:
      		tmp = b * 0.5
      	else:
      		tmp = math.log1p(1.0)
      	return tmp
      
      a, b = sort([a, b])
      function code(a, b)
      	tmp = 0.0
      	if (a <= -170.0)
      		tmp = Float64(b * 0.5);
      	else
      		tmp = log1p(1.0);
      	end
      	return tmp
      end
      
      NOTE: a and b should be sorted in increasing order before calling this function.
      code[a_, b_] := If[LessEqual[a, -170.0], N[(b * 0.5), $MachinePrecision], N[Log[1 + 1.0], $MachinePrecision]]
      
      \begin{array}{l}
      [a, b] = \mathsf{sort}([a, b])\\
      \\
      \begin{array}{l}
      \mathbf{if}\;a \leq -170:\\
      \;\;\;\;b \cdot 0.5\\
      
      \mathbf{else}:\\
      \;\;\;\;\mathsf{log1p}\left(1\right)\\
      
      
      \end{array}
      \end{array}
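
      Since log1p(1) is just the constant log 2 ≈ 0.693, the else branch of this alternative does not depend on the inputs at all. A minimal illustrative restatement in Python (the helper name alt10 and the precomputed constant LN2 are illustrative, not from the report) could precompute it:

      import math

      LN2 = math.log(2.0)               # the value log1p(1.0) evaluates to

      def alt10(a, b):
          a, b = sorted((a, b))         # preprocessing: sort so that a <= b
          if a <= -170.0:
              return b * 0.5
          return LN2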
      
      Derivation
      1. Split input into 2 regimes
      2. if a < -170

        1. Initial program 7.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0 (see the sketch after this derivation)

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        4. Step-by-step derivation
          1. *-rgt-identity (N/A)

            \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
          2. associate-*r/ (N/A)

            \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
          3. +-lowering-+.f64 (N/A)

            \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
          4. accelerator-lowering-log1p.f64 (N/A)

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
          5. exp-lowering-exp.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
          6. associate-*r/ (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
          7. *-rgt-identity (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
          8. /-lowering-/.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
          9. +-lowering-+.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
          10. exp-lowering-exp.f64 (98.4%)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
        5. Simplified 98.4%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        6. Taylor expanded in b around inf

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        7. Step-by-step derivation
          1. /-lowering-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
          2. +-lowering-+.f64 (N/A)

            \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
          3. exp-lowering-exp.f64 (98.4%)

            \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
        8. Simplified 98.4%

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
        10. Step-by-step derivation
          1. *-lowering-*.f64 (18.5%)

            \[\leadsto \color{blue}{0.5 \cdot b} \]
        11. Simplified 18.5%

          \[\leadsto \color{blue}{0.5 \cdot b} \]

        if -170 < a

        1. Initial program 67.0%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right)} \]
        4. Step-by-step derivation
          1. accelerator-lowering-log1p.f64 (N/A)

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} \]
          2. exp-lowering-exp.f64 (64.2%)

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) \]
        5. Simplified 64.2%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} \]
        6. Taylor expanded in a around 0

          \[\leadsto \mathsf{log1p}\left(\color{blue}{1}\right) \]
        7. Step-by-step derivation
          1. Simplified 61.3%

            \[\leadsto \mathsf{log1p}\left(\color{blue}{1}\right) \]
      3. Recombined 2 regimes into one program.
      4. Final simplification 51.1%

        \[\leadsto \begin{array}{l} \mathbf{if}\;a \leq -170:\\ \;\;\;\;b \cdot 0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{log1p}\left(1\right)\\ \end{array} \]
      5. Add Preprocessing
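
      As a brief sketch (not report output) of the chain of Taylor steps used in the first regime above, and again in the matching steps of Alternative 11 below: expanding to first order in b about 0 gives

        \[\log \left(e^{a} + e^{b}\right) \approx \log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}, \qquad \text{since } \left.\frac{\partial}{\partial b} \log \left(e^{a} + e^{b}\right)\right|_{b = 0} = \frac{1}{1 + e^{a}}, \]

      after which the bounded log1p(e^a) term is dropped for large b, and the expansion about a = 0 replaces e^a by 1, leaving b / 2.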

        Alternative 11: 12.0% accurate, 50.7× speedup?

        \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ b \cdot 0.5 \end{array} \]
        NOTE: a and b should be sorted in increasing order before calling this function.
        (FPCore (a b) :precision binary64 (* b 0.5))
        assert(a < b);
        double code(double a, double b) {
        	return b * 0.5;
        }
        
        NOTE: a and b should be sorted in increasing order before calling this function.
        real(8) function code(a, b)
            real(8), intent (in) :: a
            real(8), intent (in) :: b
            code = b * 0.5d0
        end function
        
        assert a < b;
        public static double code(double a, double b) {
        	return b * 0.5;
        }
        
        [a, b] = sort([a, b])
        def code(a, b):
        	return b * 0.5
        
        a, b = sort([a, b])
        function code(a, b)
        	return Float64(b * 0.5)
        end
        
        a, b = num2cell(sort([a, b])){:}
        function tmp = code(a, b)
        	tmp = b * 0.5;
        end
        
        NOTE: a and b should be sorted in increasing order before calling this function.
        code[a_, b_] := N[(b * 0.5), $MachinePrecision]
        
        \begin{array}{l}
        [a, b] = \mathsf{sort}([a, b])\\
        \\
        b \cdot 0.5
        \end{array}
        
        Derivation
        1. Initial program 52.7%

          \[\log \left(e^{a} + e^{b}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in b around 0

          \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        4. Step-by-step derivation
          1. *-rgt-identity (N/A)

            \[\leadsto \log \left(1 + e^{a}\right) + \frac{\color{blue}{b \cdot 1}}{1 + e^{a}} \]
          2. associate-*r/ (N/A)

            \[\leadsto \log \left(1 + e^{a}\right) + \color{blue}{b \cdot \frac{1}{1 + e^{a}}} \]
          3. +-lowering-+.f64 (N/A)

            \[\leadsto \color{blue}{\log \left(1 + e^{a}\right) + b \cdot \frac{1}{1 + e^{a}}} \]
          4. accelerator-lowering-log1p.f64 (N/A)

            \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right)} + b \cdot \frac{1}{1 + e^{a}} \]
          5. exp-lowering-exp.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{a}}\right) + b \cdot \frac{1}{1 + e^{a}} \]
          6. associate-*r/ (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b \cdot 1}{1 + e^{a}}} \]
          7. *-rgt-identity (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{\color{blue}{b}}{1 + e^{a}} \]
          8. /-lowering-/.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \color{blue}{\frac{b}{1 + e^{a}}} \]
          9. +-lowering-+.f64 (N/A)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{\color{blue}{1 + e^{a}}} \]
          10. exp-lowering-exp.f64 (72.5%)

            \[\leadsto \mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + \color{blue}{e^{a}}} \]
        5. Simplified 72.5%

          \[\leadsto \color{blue}{\mathsf{log1p}\left(e^{a}\right) + \frac{b}{1 + e^{a}}} \]
        6. Taylor expanded in b around inf

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        7. Step-by-step derivation
          1. /-lowering-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
          2. +-lowering-+.f64 (N/A)

            \[\leadsto \frac{b}{\color{blue}{1 + e^{a}}} \]
          3. exp-lowering-exp.f64 (26.0%)

            \[\leadsto \frac{b}{1 + \color{blue}{e^{a}}} \]
        8. Simplified 26.0%

          \[\leadsto \color{blue}{\frac{b}{1 + e^{a}}} \]
        9. Taylor expanded in a around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot b} \]
        10. Step-by-step derivation
          1. *-lowering-*.f64 (6.9%)

            \[\leadsto \color{blue}{0.5 \cdot b} \]
        11. Simplified 6.9%

          \[\leadsto \color{blue}{0.5 \cdot b} \]
        12. Final simplification 6.9%

          \[\leadsto b \cdot 0.5 \]
        13. Add Preprocessing

        Reproduce

        ?
        herbie shell --seed 2024197 
        (FPCore (a b)
          :name "symmetry log of sum of exp"
          :precision binary64
          (log (+ (exp a) (exp b))))
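
        As a further illustrative spot check outside of Herbie (not part of the report; the helper functions reference and naive below are illustrative names), the original binary64 program can be compared against a higher-precision value computed with Python's decimal module:

        import math
        from decimal import Decimal, getcontext

        getcontext().prec = 50          # digits used for the reference value

        def reference(a, b):
            # log(exp(a) + exp(b)) evaluated in 50-digit decimal arithmetic
            da, db = Decimal(a), Decimal(b)
            return float((da.exp() + db.exp()).ln())

        def naive(a, b):
            # the original double-precision program; exp overflows above ~709,
            # and log(0) raises when both exponentials underflow to zero
            try:
                return math.log(math.exp(a) + math.exp(b))
            except (OverflowError, ValueError):
                return float("nan")

        for a, b in [(-1.0, 2.0), (-200.0, 3.0), (700.0, 710.0), (-800.0, -750.0)]:
            print(a, b, naive(a, b), reference(a, b))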