Octave 3.8, jcobi/3

Percentage Accurate: 94.4% → 99.8%
Time: 9.6s
Alternatives: 14
Speedup: 1.6×

Specification

\[\alpha > -1 \land \beta > -1\]
\[\begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2 \cdot 1\\ \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{t_0}}{t_0}}{t_0 + 1} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) (* 2.0 1.0))))
   (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) t_0) t_0) (+ t_0 1.0))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    t_0 = (alpha + beta) + (2.0d0 * 1.0d0)
    code = (((((alpha + beta) + (beta * alpha)) + 1.0d0) / t_0) / t_0) / (t_0 + 1.0d0)
end function
public static double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
def code(alpha, beta):
	t_0 = (alpha + beta) + (2.0 * 1.0)
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0)
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + Float64(2.0 * 1.0))
	return Float64(Float64(Float64(Float64(Float64(Float64(alpha + beta) + Float64(beta * alpha)) + 1.0) / t_0) / t_0) / Float64(t_0 + 1.0))
end
function tmp = code(alpha, beta)
	t_0 = (alpha + beta) + (2.0 * 1.0);
	tmp = (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + N[(2.0 * 1.0), $MachinePrecision]), $MachinePrecision]}, N[(N[(N[(N[(N[(N[(alpha + beta), $MachinePrecision] + N[(beta * alpha), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(t$95$0 + 1.0), $MachinePrecision]), $MachinePrecision]]
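
Because every operation in this program is rational arithmetic, the double-precision result can be compared against an exact evaluation. A minimal sketch in Python (the sample point and the helper names code/code_exact are ours, not part of the report; relative error is only a quick proxy for Herbie's bits-of-accuracy metric):

from fractions import Fraction

def code(alpha, beta):
    # double-precision program from the report
    t_0 = (alpha + beta) + (2.0 * 1.0)
    return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0)

def code_exact(alpha, beta):
    # same expression over exact rationals
    a, b = Fraction(alpha), Fraction(beta)
    t_0 = a + b + 2
    return (((a + b + b * a) + 1) / t_0 / t_0) / (t_0 + 1)

a, b = -0.9999999, 1.0e8  # arbitrary point satisfying alpha > -1, beta > -1
approx, exact = code(a, b), code_exact(a, b)
print(approx, float(exact), float(abs(Fraction(approx) - exact) / abs(exact)))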

Sampling outcomes in binary64 precision.

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is named in the plot title); the vertical axis is accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the line is an average, and the dots are individual samples.

Accuracy vs Speed

Herbie found 14 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, each blue circle shows an alternative, and the line shows the best available speed-accuracy tradeoffs.

Initial Program: 94.4% accurate, 1.0× speedup

\[\begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2 \cdot 1\\ \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{t_0}}{t_0}}{t_0 + 1} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) (* 2.0 1.0))))
   (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) t_0) t_0) (+ t_0 1.0))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    t_0 = (alpha + beta) + (2.0d0 * 1.0d0)
    code = (((((alpha + beta) + (beta * alpha)) + 1.0d0) / t_0) / t_0) / (t_0 + 1.0d0)
end function
public static double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
def code(alpha, beta):
	t_0 = (alpha + beta) + (2.0 * 1.0)
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0)
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + Float64(2.0 * 1.0))
	return Float64(Float64(Float64(Float64(Float64(Float64(alpha + beta) + Float64(beta * alpha)) + 1.0) / t_0) / t_0) / Float64(t_0 + 1.0))
end
function tmp = code(alpha, beta)
	t_0 = (alpha + beta) + (2.0 * 1.0);
	tmp = (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + N[(2.0 * 1.0), $MachinePrecision]), $MachinePrecision]}, N[(N[(N[(N[(N[(N[(alpha + beta), $MachinePrecision] + N[(beta * alpha), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(t$95$0 + 1.0), $MachinePrecision]), $MachinePrecision]]

Alternative 1: 99.8% accurate, 1.3× speedup

\[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} t_0 := -2 - \left(\alpha + \beta\right)\\ \frac{1 + \beta}{t_0} \cdot \frac{\frac{\alpha + 1}{t_0}}{\left(3 + \alpha\right) + \beta} \end{array} \end{array} \]
NOTE: alpha and beta should be sorted in increasing order before calling this function.
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (- -2.0 (+ alpha beta))))
   (* (/ (+ 1.0 beta) t_0) (/ (/ (+ alpha 1.0) t_0) (+ (+ 3.0 alpha) beta)))))
assert(alpha <= beta);
double code(double alpha, double beta) {
	double t_0 = -2.0 - (alpha + beta);
	return ((1.0 + beta) / t_0) * (((alpha + 1.0) / t_0) / ((3.0 + alpha) + beta));
}
NOTE: alpha and beta should be sorted in increasing order before calling this function.
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    t_0 = (-2.0d0) - (alpha + beta)
    code = ((1.0d0 + beta) / t_0) * (((alpha + 1.0d0) / t_0) / ((3.0d0 + alpha) + beta))
end function
assert alpha <= beta;
public static double code(double alpha, double beta) {
	double t_0 = -2.0 - (alpha + beta);
	return ((1.0 + beta) / t_0) * (((alpha + 1.0) / t_0) / ((3.0 + alpha) + beta));
}
alpha, beta = sorted([alpha, beta])
def code(alpha, beta):
	t_0 = -2.0 - (alpha + beta)
	return ((1.0 + beta) / t_0) * (((alpha + 1.0) / t_0) / ((3.0 + alpha) + beta))
alpha, beta = sort([alpha, beta])
function code(alpha, beta)
	t_0 = Float64(-2.0 - Float64(alpha + beta))
	return Float64(Float64(Float64(1.0 + beta) / t_0) * Float64(Float64(Float64(alpha + 1.0) / t_0) / Float64(Float64(3.0 + alpha) + beta)))
end
[alpha, beta] = num2cell(sort([alpha, beta])){:}
function tmp = code(alpha, beta)
	t_0 = -2.0 - (alpha + beta);
	tmp = ((1.0 + beta) / t_0) * (((alpha + 1.0) / t_0) / ((3.0 + alpha) + beta));
end
NOTE: alpha and beta should be sorted in increasing order before calling this function.
code[alpha_, beta_] := Block[{t$95$0 = N[(-2.0 - N[(alpha + beta), $MachinePrecision]), $MachinePrecision]}, N[(N[(N[(1.0 + beta), $MachinePrecision] / t$95$0), $MachinePrecision] * N[(N[(N[(alpha + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(N[(3.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
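
The per-language preprocessing lines above leave the sort to the caller. As real arithmetic the expression is symmetric in alpha and beta, so sorting only canonicalizes the evaluation order; a self-contained sketch in Python (code_sorted is our name for it):

def code_sorted(alpha, beta):
    # fold the sorting preprocessing into the function itself
    alpha, beta = sorted([alpha, beta])
    t_0 = -2.0 - (alpha + beta)
    return ((1.0 + beta) / t_0) * (((alpha + 1.0) / t_0) / ((3.0 + alpha) + beta))
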
Derivation
  1. Initial program 92.9%

    \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. Applied rewrites 92.1%

      \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
    2. Applied rewrites 91.5%

      \[\leadsto \frac{\color{blue}{\frac{-\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(2 + \alpha\right) + \beta\right) \cdot \left(-\left(\left(2 + \alpha\right) + \beta\right)\right)}}}{3 + \left(\beta + \alpha\right)} \]
    3. Applied rewrites 91.6%

      \[\leadsto \color{blue}{\frac{1}{\left(\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(\left(2 + \beta\right) + \alpha\right)\right) \cdot \frac{\left(2 + \beta\right) + \alpha}{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}}} \]
    4. Applied rewrites 99.8%

      \[\leadsto \color{blue}{\frac{1 + \beta}{-2 - \left(\alpha + \beta\right)} \cdot \frac{\frac{\alpha + 1}{-2 - \left(\alpha + \beta\right)}}{\left(3 + \alpha\right) + \beta}} \]
    5. Add Preprocessing
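
    The rewrites above rest on an exact factorization: the numerator of the original program factors, and t_0 = -2 - (α + β) is the negation of the repeated denominator factor, so the two sign flips cancel (t_0² = ((α + β) + 2)²):

    \[\left(\alpha + \beta\right) + \beta \cdot \alpha + 1 = \left(1 + \alpha\right) \cdot \left(1 + \beta\right), \qquad \frac{\left(1 + \alpha\right) \cdot \left(1 + \beta\right)}{{\left(\left(\alpha + \beta\right) + 2\right)}^{2} \cdot \left(\left(\alpha + \beta\right) + 3\right)} = \frac{1 + \beta}{t_0} \cdot \frac{\frac{\alpha + 1}{t_0}}{\left(3 + \alpha\right) + \beta} \]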

    Alternative 2: 99.5% accurate, 1.3× speedup

    \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} t_0 := -2 - \left(\beta + \alpha\right)\\ \mathbf{if}\;\beta \leq 5 \cdot 10^{+117}:\\ \;\;\;\;\frac{\frac{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}{t_0}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot t_0}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}\\ \end{array} \end{array} \]
    NOTE: alpha and beta should be sorted in increasing order before calling this function.
    (FPCore (alpha beta)
     :precision binary64
     (let* ((t_0 (- -2.0 (+ beta alpha))))
       (if (<= beta 5e+117)
         (/ (/ (* (+ beta 1.0) (+ 1.0 alpha)) t_0) (* (+ (+ beta alpha) 3.0) t_0))
         (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) (+ (+ 2.0 alpha) beta)))))
    assert(alpha <= beta);
    double code(double alpha, double beta) {
    	double t_0 = -2.0 - (beta + alpha);
    	double tmp;
    	if (beta <= 5e+117) {
    		tmp = (((beta + 1.0) * (1.0 + alpha)) / t_0) / (((beta + alpha) + 3.0) * t_0);
    	} else {
    		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
    	}
    	return tmp;
    }
    
    NOTE: alpha and beta should be sorted in increasing order before calling this function.
    real(8) function code(alpha, beta)
        real(8), intent (in) :: alpha
        real(8), intent (in) :: beta
        real(8) :: t_0
        real(8) :: tmp
        t_0 = (-2.0d0) - (beta + alpha)
        if (beta <= 5d+117) then
            tmp = (((beta + 1.0d0) * (1.0d0 + alpha)) / t_0) / (((beta + alpha) + 3.0d0) * t_0)
        else
            tmp = ((1.0d0 + alpha) / (beta + (alpha + 3.0d0))) / ((2.0d0 + alpha) + beta)
        end if
        code = tmp
    end function
    
    assert alpha <= beta;
    public static double code(double alpha, double beta) {
    	double t_0 = -2.0 - (beta + alpha);
    	double tmp;
    	if (beta <= 5e+117) {
    		tmp = (((beta + 1.0) * (1.0 + alpha)) / t_0) / (((beta + alpha) + 3.0) * t_0);
    	} else {
    		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
    	}
    	return tmp;
    }
    
    alpha, beta = sorted([alpha, beta])
    def code(alpha, beta):
    	t_0 = -2.0 - (beta + alpha)
    	tmp = 0
    	if beta <= 5e+117:
    		tmp = (((beta + 1.0) * (1.0 + alpha)) / t_0) / (((beta + alpha) + 3.0) * t_0)
    	else:
    		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)
    	return tmp
    
    alpha, beta = sort([alpha, beta])
    function code(alpha, beta)
    	t_0 = Float64(-2.0 - Float64(beta + alpha))
    	tmp = 0.0
    	if (beta <= 5e+117)
    		tmp = Float64(Float64(Float64(Float64(beta + 1.0) * Float64(1.0 + alpha)) / t_0) / Float64(Float64(Float64(beta + alpha) + 3.0) * t_0));
    	else
    		tmp = Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / Float64(Float64(2.0 + alpha) + beta));
    	end
    	return tmp
    end
    
    [alpha, beta] = num2cell(sort([alpha, beta])){:}
    function tmp_2 = code(alpha, beta)
    	t_0 = -2.0 - (beta + alpha);
    	tmp = 0.0;
    	if (beta <= 5e+117)
    		tmp = (((beta + 1.0) * (1.0 + alpha)) / t_0) / (((beta + alpha) + 3.0) * t_0);
    	else
    		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
    	end
    	tmp_2 = tmp;
    end
    
    NOTE: alpha and beta should be sorted in increasing order before calling this function.
    code[alpha_, beta_] := Block[{t$95$0 = N[(-2.0 - N[(beta + alpha), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[beta, 5*^117], N[(N[(N[(N[(beta + 1.0), $MachinePrecision] * N[(1.0 + alpha), $MachinePrecision]), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(N[(N[(beta + alpha), $MachinePrecision] + 3.0), $MachinePrecision] * t$95$0), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]]]
    
    
    Derivation
    1. Split input into 2 regimes
    2. if beta < 4.99999999999999983e117

      1. Initial program 99.2%

        \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. Applied rewrites 98.8%

          \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
        2. Applied rewrites 98.4%

          \[\leadsto \frac{\color{blue}{\frac{-\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(2 + \alpha\right) + \beta\right) \cdot \left(-\left(\left(2 + \alpha\right) + \beta\right)\right)}}}{3 + \left(\beta + \alpha\right)} \]
        3. Applied rewrites 98.5%

          \[\leadsto \color{blue}{\frac{\frac{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}{-2 - \left(\beta + \alpha\right)}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)}} \]

    3. if 4.99999999999999983e117 < beta

        1. Initial program 75.0%

          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        2. Add Preprocessing
        3. Taylor expanded in beta around -inf

          \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        4. Step-by-step derivation
          1. mul-1-neg N/A

            \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. lower-neg.f64 N/A

            \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          3. sub-neg N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          4. mul-1-neg N/A

            \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          5. distribute-neg-in N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          6. +-commutative N/A

            \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          7. distribute-neg-in N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          8. metadata-eval N/A

            \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          9. unsub-neg N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          10. lower--.f64 90.6

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        5. Applied rewrites 90.6%

          \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        6. Step-by-step derivation
          1. lift-/.f64 N/A

            \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
          2. lift-/.f64 N/A

            \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          3. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
          4. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
          5. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
          6. lift-*.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
          7. metadata-eval N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
        7. Applied rewrites 90.6%

          \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
      4. Recombined 2 regimes into one program.
      5. Add Preprocessing
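
    The "Taylor expanded in beta around -inf" step in the second regime can be checked by hand: with the numerator factored as (1 + α)(1 + β), one factor (1 + β)/((α + β) + 2) tends to 1 as |β| grows, which leaves exactly the else branch:

    \[\frac{\left(1 + \alpha\right) \cdot \left(1 + \beta\right)}{{\left(\left(\alpha + \beta\right) + 2\right)}^{2} \cdot \left(\left(\alpha + \beta\right) + 3\right)} = \frac{1 + \beta}{\left(\alpha + \beta\right) + 2} \cdot \frac{1 + \alpha}{\left(\left(\alpha + \beta\right) + 2\right) \cdot \left(\left(\alpha + \beta\right) + 3\right)} \approx \frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta} \]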

      Alternative 3: 99.5% accurate, 1.4× speedup

      \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} t_0 := \left(\beta + \alpha\right) + 2\\ \mathbf{if}\;\beta \leq 10^{+60}:\\ \;\;\;\;\frac{\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1}{\left(\left(3 + \left(\beta + \alpha\right)\right) \cdot t_0\right) \cdot t_0}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}\\ \end{array} \end{array} \]
      NOTE: alpha and beta should be sorted in increasing order before calling this function.
      (FPCore (alpha beta)
       :precision binary64
       (let* ((t_0 (+ (+ beta alpha) 2.0)))
         (if (<= beta 1e+60)
           (/
            (+ (fma beta alpha (+ beta alpha)) 1.0)
            (* (* (+ 3.0 (+ beta alpha)) t_0) t_0))
           (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) (+ (+ 2.0 alpha) beta)))))
      assert(alpha <= beta);
      #include <math.h> /* for fma */
      double code(double alpha, double beta) {
      	double t_0 = (beta + alpha) + 2.0;
      	double tmp;
      	if (beta <= 1e+60) {
      		tmp = (fma(beta, alpha, (beta + alpha)) + 1.0) / (((3.0 + (beta + alpha)) * t_0) * t_0);
      	} else {
      		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
      	}
      	return tmp;
      }
      
      alpha, beta = sort([alpha, beta])
      function code(alpha, beta)
      	t_0 = Float64(Float64(beta + alpha) + 2.0)
      	tmp = 0.0
      	if (beta <= 1e+60)
      		tmp = Float64(Float64(fma(beta, alpha, Float64(beta + alpha)) + 1.0) / Float64(Float64(Float64(3.0 + Float64(beta + alpha)) * t_0) * t_0));
      	else
      		tmp = Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / Float64(Float64(2.0 + alpha) + beta));
      	end
      	return tmp
      end
      
      NOTE: alpha and beta should be sorted in increasing order before calling this function.
      code[alpha_, beta_] := Block[{t$95$0 = N[(N[(beta + alpha), $MachinePrecision] + 2.0), $MachinePrecision]}, If[LessEqual[beta, 1*^60], N[(N[(N[(beta * alpha + N[(beta + alpha), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / N[(N[(N[(3.0 + N[(beta + alpha), $MachinePrecision]), $MachinePrecision] * t$95$0), $MachinePrecision] * t$95$0), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]]]
      
      
      Derivation
      1. Split input into 2 regimes
      2. if beta < 9.9999999999999995e59

        1. Initial program 99.8%

          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 N/A

            \[\leadsto \color{blue}{\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
          2. lift-/.f64 N/A

            \[\leadsto \frac{\color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          3. associate-/l/ N/A

            \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
          4. lift-/.f64 N/A

            \[\leadsto \frac{\color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
          5. associate-/l/ N/A

            \[\leadsto \color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
          6. lower-/.f64 N/A

            \[\leadsto \color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
        4. Applied rewrites 93.1%

          \[\leadsto \color{blue}{\frac{\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1}{\left(\left(3 + \left(\beta + \alpha\right)\right) \cdot \left(\left(\beta + \alpha\right) + 2\right)\right) \cdot \left(\left(\beta + \alpha\right) + 2\right)}} \]

      3. if 9.9999999999999995e59 < beta

        1. Initial program 78.2%

          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        2. Add Preprocessing
        3. Taylor expanded in beta around -inf

          \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        4. Step-by-step derivation
          1. mul-1-neg N/A

            \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. lower-neg.f64 N/A

            \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          3. sub-neg N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          4. mul-1-neg N/A

            \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          5. distribute-neg-in N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          6. +-commutative N/A

            \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          7. distribute-neg-in N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          8. metadata-eval N/A

            \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          9. unsub-neg N/A

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          10. lower--.f64 89.0

            \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        5. Applied rewrites 89.0%

          \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        6. Step-by-step derivation
          1. lift-/.f64 N/A

            \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
          2. lift-/.f64 N/A

            \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          3. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
          4. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
          5. lift-+.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
          6. lift-*.f64 N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
          7. metadata-eval N/A

            \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
        7. Applied rewrites 89.0%

          \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
      4. Recombined 2 regimes into one program.
      5. Add Preprocessing
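
      For reference, a self-contained Python transcription of Alternative 3 (a sketch: code_alt3 is our name, and math.fma exists only from Python 3.13 onward, with no exact drop-in replacement in older versions):

      import math  # math.fma requires Python >= 3.13

      def code_alt3(alpha, beta):
          alpha, beta = sorted([alpha, beta])  # preprocessing from the report
          t_0 = (beta + alpha) + 2.0
          if beta <= 1e+60:
              return (math.fma(beta, alpha, beta + alpha) + 1.0) / (((3.0 + (beta + alpha)) * t_0) * t_0)
          return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)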

      Alternative 4: 99.5% accurate, 1.5× speedup

      \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} t_0 := \left(2 + \beta\right) + \alpha\\ \mathbf{if}\;\beta \leq 10^{+60}:\\ \;\;\;\;\frac{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}{\left(\left(\left(\beta + \alpha\right) + 3\right) \cdot t_0\right) \cdot t_0}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}\\ \end{array} \end{array} \]
      NOTE: alpha and beta should be sorted in increasing order before calling this function.
      (FPCore (alpha beta)
       :precision binary64
       (let* ((t_0 (+ (+ 2.0 beta) alpha)))
         (if (<= beta 1e+60)
           (/ (* (+ beta 1.0) (+ 1.0 alpha)) (* (* (+ (+ beta alpha) 3.0) t_0) t_0))
           (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) (+ (+ 2.0 alpha) beta)))))
      assert(alpha <= beta);
      double code(double alpha, double beta) {
      	double t_0 = (2.0 + beta) + alpha;
      	double tmp;
      	if (beta <= 1e+60) {
      		tmp = ((beta + 1.0) * (1.0 + alpha)) / ((((beta + alpha) + 3.0) * t_0) * t_0);
      	} else {
      		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
      	}
      	return tmp;
      }
      
      NOTE: alpha and beta should be sorted in increasing order before calling this function.
      real(8) function code(alpha, beta)
          real(8), intent (in) :: alpha
          real(8), intent (in) :: beta
          real(8) :: t_0
          real(8) :: tmp
          t_0 = (2.0d0 + beta) + alpha
          if (beta <= 1d+60) then
              tmp = ((beta + 1.0d0) * (1.0d0 + alpha)) / ((((beta + alpha) + 3.0d0) * t_0) * t_0)
          else
              tmp = ((1.0d0 + alpha) / (beta + (alpha + 3.0d0))) / ((2.0d0 + alpha) + beta)
          end if
          code = tmp
      end function
      
      assert alpha <= beta;
      public static double code(double alpha, double beta) {
      	double t_0 = (2.0 + beta) + alpha;
      	double tmp;
      	if (beta <= 1e+60) {
      		tmp = ((beta + 1.0) * (1.0 + alpha)) / ((((beta + alpha) + 3.0) * t_0) * t_0);
      	} else {
      		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
      	}
      	return tmp;
      }
      
      alpha, beta = sorted([alpha, beta])
      def code(alpha, beta):
      	t_0 = (2.0 + beta) + alpha
      	tmp = 0
      	if beta <= 1e+60:
      		tmp = ((beta + 1.0) * (1.0 + alpha)) / ((((beta + alpha) + 3.0) * t_0) * t_0)
      	else:
      		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)
      	return tmp
      
      alpha, beta = sort([alpha, beta])
      function code(alpha, beta)
      	t_0 = Float64(Float64(2.0 + beta) + alpha)
      	tmp = 0.0
      	if (beta <= 1e+60)
      		tmp = Float64(Float64(Float64(beta + 1.0) * Float64(1.0 + alpha)) / Float64(Float64(Float64(Float64(beta + alpha) + 3.0) * t_0) * t_0));
      	else
      		tmp = Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / Float64(Float64(2.0 + alpha) + beta));
      	end
      	return tmp
      end
      
      [alpha, beta] = num2cell(sort([alpha, beta])){:}
      function tmp_2 = code(alpha, beta)
      	t_0 = (2.0 + beta) + alpha;
      	tmp = 0.0;
      	if (beta <= 1e+60)
      		tmp = ((beta + 1.0) * (1.0 + alpha)) / ((((beta + alpha) + 3.0) * t_0) * t_0);
      	else
      		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
      	end
      	tmp_2 = tmp;
      end
      
      NOTE: alpha and beta should be sorted in increasing order before calling this function.
      code[alpha_, beta_] := Block[{t$95$0 = N[(N[(2.0 + beta), $MachinePrecision] + alpha), $MachinePrecision]}, If[LessEqual[beta, 1*^60], N[(N[(N[(beta + 1.0), $MachinePrecision] * N[(1.0 + alpha), $MachinePrecision]), $MachinePrecision] / N[(N[(N[(N[(beta + alpha), $MachinePrecision] + 3.0), $MachinePrecision] * t$95$0), $MachinePrecision] * t$95$0), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]]]
      
      
      Derivation
      1. Split input into 2 regimes
      2. if beta < 9.9999999999999995e59

        1. Initial program 99.8%

          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. Applied rewrites 99.3%

            \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
          2. Applied rewrites 98.9%

            \[\leadsto \frac{\color{blue}{\frac{-\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(2 + \alpha\right) + \beta\right) \cdot \left(-\left(\left(2 + \alpha\right) + \beta\right)\right)}}}{3 + \left(\beta + \alpha\right)} \]
          3. Applied rewrites 93.1%

            \[\leadsto \color{blue}{\frac{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}{\left(\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(\left(2 + \beta\right) + \alpha\right)\right) \cdot \left(\left(2 + \beta\right) + \alpha\right)}} \]

      3. if 9.9999999999999995e59 < beta

          1. Initial program 78.2%

            \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. Add Preprocessing
          3. Taylor expanded in beta around -inf

            \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          4. Step-by-step derivation
            1. mul-1-neg N/A

              \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. lower-neg.f64 N/A

              \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            3. sub-neg N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            4. mul-1-neg N/A

              \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            5. distribute-neg-in N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            6. +-commutative N/A

              \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            7. distribute-neg-in N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            8. metadata-eval N/A

              \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            9. unsub-neg N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            10. lower--.f64 89.0

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          5. Applied rewrites 89.0%

            \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          6. Step-by-step derivation
            1. lift-/.f64 N/A

              \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
            2. lift-/.f64 N/A

              \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            3. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
            4. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
            5. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
            6. lift-*.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
            7. metadata-eval N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
          7. Applied rewrites 89.0%

            \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
        4. Recombined 2 regimes into one program.
        5. Add Preprocessing
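
      The two regimes approximate the same function, so they should roughly agree near the 1e+60 crossover. A quick spot check (branch_low/branch_high and the inputs are ours; this is a sanity check, not a proof):

      def branch_low(alpha, beta):
          # beta <= 1e+60 branch of Alternative 4
          t_0 = (2.0 + beta) + alpha
          return ((beta + 1.0) * (1.0 + alpha)) / ((((beta + alpha) + 3.0) * t_0) * t_0)

      def branch_high(alpha, beta):
          # beta > 1e+60 branch of Alternative 4
          return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)

      for beta in (1e+59, 1e+60, 1e+61):  # straddle the regime boundary
          print(beta, branch_low(0.5, beta), branch_high(0.5, beta))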

        Alternative 5: 98.6% accurate, 1.6× speedup

        \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} t_0 := \left(2 + \alpha\right) + \beta\\ \mathbf{if}\;\beta \leq 5.8 \cdot 10^{+15}:\\ \;\;\;\;\frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(3 + \beta\right) \cdot \left(2 + \beta\right)\right) \cdot t_0}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{t_0}\\ \end{array} \end{array} \]
        NOTE: alpha and beta should be sorted in increasing order before calling this function.
        (FPCore (alpha beta)
         :precision binary64
         (let* ((t_0 (+ (+ 2.0 alpha) beta)))
           (if (<= beta 5.8e+15)
             (/
              (fma (+ 1.0 alpha) beta (+ 1.0 alpha))
              (* (* (+ 3.0 beta) (+ 2.0 beta)) t_0))
             (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) t_0))))
        assert(alpha <= beta);
        #include <math.h> /* for fma */
        double code(double alpha, double beta) {
        	double t_0 = (2.0 + alpha) + beta;
        	double tmp;
        	if (beta <= 5.8e+15) {
        		tmp = fma((1.0 + alpha), beta, (1.0 + alpha)) / (((3.0 + beta) * (2.0 + beta)) * t_0);
        	} else {
        		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / t_0;
        	}
        	return tmp;
        }
        
        alpha, beta = sort([alpha, beta])
        function code(alpha, beta)
        	t_0 = Float64(Float64(2.0 + alpha) + beta)
        	tmp = 0.0
        	if (beta <= 5.8e+15)
        		tmp = Float64(fma(Float64(1.0 + alpha), beta, Float64(1.0 + alpha)) / Float64(Float64(Float64(3.0 + beta) * Float64(2.0 + beta)) * t_0));
        	else
        		tmp = Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / t_0);
        	end
        	return tmp
        end
        
        NOTE: alpha and beta should be sorted in increasing order before calling this function.
        code[alpha_, beta_] := Block[{t$95$0 = N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]}, If[LessEqual[beta, 5.8*^15], N[(N[(N[(1.0 + alpha), $MachinePrecision] * beta + N[(1.0 + alpha), $MachinePrecision]), $MachinePrecision] / N[(N[(N[(3.0 + beta), $MachinePrecision] * N[(2.0 + beta), $MachinePrecision]), $MachinePrecision] * t$95$0), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / t$95$0), $MachinePrecision]]]
        
        
        Derivation
        1. Split input into 2 regimes
        2. if beta < 5.8e15

          1. Initial program 99.8%

            \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. Add Preprocessing
          3. Taylor expanded in beta around 0

            \[\leadsto \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{3 + \alpha}} \]
          4. Step-by-step derivation
            1. lower-+.f64 96.3

              \[\leadsto \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{3 + \alpha}} \]
          5. Applied rewrites 96.3%

            \[\leadsto \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{3 + \alpha}} \]
          6. Step-by-step derivation
            1. lift-/.f64 N/A

              \[\leadsto \color{blue}{\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{3 + \alpha}} \]
            2. lift-/.f64 N/A

              \[\leadsto \frac{\color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{3 + \alpha} \]
            3. associate-/l/ N/A

              \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(3 + \alpha\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
            4. lift-/.f64 N/A

              \[\leadsto \frac{\color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(3 + \alpha\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
            5. associate-/l/ N/A

              \[\leadsto \color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\left(3 + \alpha\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
            6. lower-/.f64 N/A

              \[\leadsto \color{blue}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\left(3 + \alpha\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
          7. Applied rewrites 90.1%

            \[\leadsto \color{blue}{\frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(\alpha + 3\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)}} \]
          8. Taylor expanded in alpha around 0

            \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\color{blue}{\left(\left(2 + \beta\right) \cdot \left(3 + \beta\right)\right)} \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]
          9. Step-by-step derivation
            1. *-commutative N/A

              \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\color{blue}{\left(\left(3 + \beta\right) \cdot \left(2 + \beta\right)\right)} \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]
            2. lower-*.f64 N/A

              \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\color{blue}{\left(\left(3 + \beta\right) \cdot \left(2 + \beta\right)\right)} \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]
            3. lower-+.f64 N/A

              \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\color{blue}{\left(3 + \beta\right)} \cdot \left(2 + \beta\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]
            4. lower-+.f64 62.5

              \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(3 + \beta\right) \cdot \color{blue}{\left(2 + \beta\right)}\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]
          10. Applied rewrites 62.5%

            \[\leadsto \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\color{blue}{\left(\left(3 + \beta\right) \cdot \left(2 + \beta\right)\right)} \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]

        3. if 5.8e15 < beta

          1. Initial program 81.3%

            \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. Add Preprocessing
          3. Taylor expanded in beta around -inf

            \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          4. Step-by-step derivation
            1. mul-1-neg N/A

              \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. lower-neg.f64 N/A

              \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            3. sub-neg N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            4. mul-1-neg N/A

              \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            5. distribute-neg-in N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            6. +-commutative N/A

              \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            7. distribute-neg-in N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            8. metadata-eval N/A

              \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            9. unsub-neg N/A

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            10. lower--.f64 85.2

              \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          5. Applied rewrites 85.2%

            \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          6. Step-by-step derivation
            1. lift-/.f64 N/A

              \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
            2. lift-/.f64 N/A

              \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            3. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
            4. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
            5. lift-+.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
            6. lift-*.f64 N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
            7. metadata-eval N/A

              \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
          7. Applied rewrites 85.2%

            \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
        4. Recombined 2 regimes into one program.
        5. Add Preprocessing
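
        Relative to the exact factored form, the small-beta branch keeps the numerator (1 + α)(1 + β) as a single fma but drops α from two denominator factors, which is accurate when |α| is small relative to 2 + β:

        \[\frac{\left(1 + \alpha\right) \cdot \left(1 + \beta\right)}{{\left(\left(\alpha + \beta\right) + 2\right)}^{2} \cdot \left(\left(\alpha + \beta\right) + 3\right)} \approx \frac{\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(3 + \beta\right) \cdot \left(2 + \beta\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)} \]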

        Alternative 6: 98.5% accurate, 1.6× speedup

        \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} \mathbf{if}\;\beta \leq 7200000000:\\ \;\;\;\;\frac{\frac{-1 - \beta}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}\\ \end{array} \end{array} \]
        NOTE: alpha and beta should be sorted in increasing order before calling this function.
        (FPCore (alpha beta)
         :precision binary64
         (if (<= beta 7200000000.0)
           (/
            (/ (- -1.0 beta) (+ 2.0 beta))
            (* (+ (+ beta alpha) 3.0) (- -2.0 (+ beta alpha))))
           (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) (+ (+ 2.0 alpha) beta))))
        assert(alpha <= beta);
        double code(double alpha, double beta) {
        	double tmp;
        	if (beta <= 7200000000.0) {
        		tmp = ((-1.0 - beta) / (2.0 + beta)) / (((beta + alpha) + 3.0) * (-2.0 - (beta + alpha)));
        	} else {
        		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
        	}
        	return tmp;
        }
        
        NOTE: alpha and beta should be sorted in increasing order before calling this function.
        real(8) function code(alpha, beta)
            real(8), intent (in) :: alpha
            real(8), intent (in) :: beta
            real(8) :: tmp
            if (beta <= 7200000000.0d0) then
                tmp = (((-1.0d0) - beta) / (2.0d0 + beta)) / (((beta + alpha) + 3.0d0) * ((-2.0d0) - (beta + alpha)))
            else
                tmp = ((1.0d0 + alpha) / (beta + (alpha + 3.0d0))) / ((2.0d0 + alpha) + beta)
            end if
            code = tmp
        end function
        
        assert alpha <= beta;
        public static double code(double alpha, double beta) {
        	double tmp;
        	if (beta <= 7200000000.0) {
        		tmp = ((-1.0 - beta) / (2.0 + beta)) / (((beta + alpha) + 3.0) * (-2.0 - (beta + alpha)));
        	} else {
        		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
        	}
        	return tmp;
        }
        
        [alpha, beta] = sort([alpha, beta])
        def code(alpha, beta):
        	tmp = 0
        	if beta <= 7200000000.0:
        		tmp = ((-1.0 - beta) / (2.0 + beta)) / (((beta + alpha) + 3.0) * (-2.0 - (beta + alpha)))
        	else:
        		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)
        	return tmp
        
        alpha, beta = sort([alpha, beta])
        function code(alpha, beta)
        	tmp = 0.0
        	if (beta <= 7200000000.0)
        		tmp = Float64(Float64(Float64(-1.0 - beta) / Float64(2.0 + beta)) / Float64(Float64(Float64(beta + alpha) + 3.0) * Float64(-2.0 - Float64(beta + alpha))));
        	else
        		tmp = Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / Float64(Float64(2.0 + alpha) + beta));
        	end
        	return tmp
        end
        
        alpha, beta = num2cell(sort([alpha, beta])){:}
        function tmp_2 = code(alpha, beta)
        	tmp = 0.0;
        	if (beta <= 7200000000.0)
        		tmp = ((-1.0 - beta) / (2.0 + beta)) / (((beta + alpha) + 3.0) * (-2.0 - (beta + alpha)));
        	else
        		tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
        	end
        	tmp_2 = tmp;
        end
        
        NOTE: alpha and beta should be sorted in increasing order before calling this function.
        code[alpha_, beta_] := If[LessEqual[beta, 7200000000.0], N[(N[(N[(-1.0 - beta), $MachinePrecision] / N[(2.0 + beta), $MachinePrecision]), $MachinePrecision] / N[(N[(N[(beta + alpha), $MachinePrecision] + 3.0), $MachinePrecision] * N[(-2.0 - N[(beta + alpha), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]]
        
        \begin{array}{l}
        [alpha, beta] = \mathsf{sort}([alpha, beta])\\
        \\
        \begin{array}{l}
        \mathbf{if}\;\beta \leq 7200000000:\\
        \;\;\;\;\frac{\frac{-1 - \beta}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)}\\
        
        \mathbf{else}:\\
        \;\;\;\;\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}\\
        
        
        \end{array}
        \end{array}
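
        The repeated NOTE above is this alternative's preprocessing contract: the regime test on beta is only meaningful when beta is the larger of the two arguments. As a minimal sketch (the wrapper name jcobi3_sorted is ours, not part of the report), a Python caller would apply the sort before invoking the generated code from the Python listing above:

        def jcobi3_sorted(alpha, beta):
        	# Preprocessing from the report: pass the smaller value as alpha and
        	# the larger as beta, then evaluate the regime-split program `code`.
        	alpha, beta = sorted((alpha, beta))
        	return code(alpha, beta)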
        
        Derivation
        1. Split input into 2 regimes
        2. if beta < 7.2e9

          1. Initial program 99.8%

            \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
          2. Add Preprocessing
          3. Step-by-step derivation
            1. Applied rewrites (99.8%)

              \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
            2. Applied rewrites (99.4%)

              \[\leadsto \frac{\color{blue}{\frac{-\mathsf{fma}\left(1 + \alpha, \beta, 1 + \alpha\right)}{\left(\left(2 + \alpha\right) + \beta\right) \cdot \left(-\left(\left(2 + \alpha\right) + \beta\right)\right)}}}{3 + \left(\beta + \alpha\right)} \]
            3. Applied rewrites (99.5%)

              \[\leadsto \color{blue}{\frac{\frac{\left(\beta + 1\right) \cdot \left(1 + \alpha\right)}{-2 - \left(\beta + \alpha\right)}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)}} \]
            4. Taylor expanded in alpha around 0

              \[\leadsto \frac{\color{blue}{-1 \cdot \frac{1 + \beta}{2 + \beta}}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
            5. Step-by-step derivation
              1. associate-*r/ (N/A)

                \[\leadsto \frac{\color{blue}{\frac{-1 \cdot \left(1 + \beta\right)}{2 + \beta}}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              2. lower-/.f64 (N/A)

                \[\leadsto \frac{\color{blue}{\frac{-1 \cdot \left(1 + \beta\right)}{2 + \beta}}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              3. distribute-lft-in (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-1 \cdot 1 + -1 \cdot \beta}}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              4. metadata-eval (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-1} + -1 \cdot \beta}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              5. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{-1 + \color{blue}{\left(\mathsf{neg}\left(\beta\right)\right)}}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              6. unsub-neg (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-1 - \beta}}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              7. lower--.f64 (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-1 - \beta}}{2 + \beta}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
              8. lower-+.f64 (82.8%)

                \[\leadsto \frac{\frac{-1 - \beta}{\color{blue}{2 + \beta}}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]
            6. Applied rewrites (82.8%)

              \[\leadsto \frac{\color{blue}{\frac{-1 - \beta}{2 + \beta}}}{\left(\left(\beta + \alpha\right) + 3\right) \cdot \left(-2 - \left(\beta + \alpha\right)\right)} \]

            if 7.2e9 < beta

            1. Initial program 81.7%

              \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. Add Preprocessing
            3. Taylor expanded in beta around -inf

              \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            4. Step-by-step derivation
              1. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              2. lower-neg.f64 (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. sub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              4. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              5. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              6. +-commutative (N/A)

                \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              7. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              8. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              9. unsub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              10. lower--.f64 (85.1%)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            5. Applied rewrites (85.1%)

              \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            6. Step-by-step derivation
              1. lift-/.f64 (N/A)

                \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              2. lift-/.f64 (N/A)

                \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              4. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
              5. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
              6. lift-*.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
              7. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
            7. Applied rewrites (85.1%)

              \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
          4. Recombined 2 regimes into one program.
          5. Add Preprocessing
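
          A sanity check on this derivation (our algebra, not part of the report): the numerator of the original program factors, so both regimes approximate the same exact value

          \[ \left(\alpha + \beta\right) + \beta \cdot \alpha + 1 = \left(1 + \alpha\right)\left(1 + \beta\right), \qquad \frac{\left(1 + \alpha\right)\left(1 + \beta\right)}{\left(\alpha + \beta + 2\right)^{2}\left(\alpha + \beta + 3\right)}. \]

          This is why fma(beta, alpha, beta + alpha) + 1 appears in step 1, and why the low-beta branch can carry a (-1 - beta) numerator against a (-2 - (beta + alpha)) denominator factor: the two sign flips cancel.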

          Alternative 7: 61.8% accurate, 2.2× speedup

          \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} \mathbf{if}\;\beta \leq 10^{+60}:\\ \;\;\;\;\frac{1 + \alpha}{\left(\beta + \left(\alpha + 3\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{1 + \alpha}{\beta}}{3 + \left(\beta + \alpha\right)}\\ \end{array} \end{array} \]
          NOTE: alpha and beta should be sorted in increasing order before calling this function.
          (FPCore (alpha beta)
           :precision binary64
           (if (<= beta 1e+60)
             (/ (+ 1.0 alpha) (* (+ beta (+ alpha 3.0)) (+ (+ 2.0 alpha) beta)))
             (/ (/ (+ 1.0 alpha) beta) (+ 3.0 (+ beta alpha)))))
          assert(alpha < beta);
          double code(double alpha, double beta) {
          	double tmp;
          	if (beta <= 1e+60) {
          		tmp = (1.0 + alpha) / ((beta + (alpha + 3.0)) * ((2.0 + alpha) + beta));
          	} else {
          		tmp = ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
          	}
          	return tmp;
          }
          
          NOTE: alpha and beta should be sorted in increasing order before calling this function.
          real(8) function code(alpha, beta)
              real(8), intent (in) :: alpha
              real(8), intent (in) :: beta
              real(8) :: tmp
              if (beta <= 1d+60) then
                  tmp = (1.0d0 + alpha) / ((beta + (alpha + 3.0d0)) * ((2.0d0 + alpha) + beta))
              else
                  tmp = ((1.0d0 + alpha) / beta) / (3.0d0 + (beta + alpha))
              end if
              code = tmp
          end function
          
          assert alpha < beta;
          public static double code(double alpha, double beta) {
          	double tmp;
          	if (beta <= 1e+60) {
          		tmp = (1.0 + alpha) / ((beta + (alpha + 3.0)) * ((2.0 + alpha) + beta));
          	} else {
          		tmp = ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
          	}
          	return tmp;
          }
          
          [alpha, beta] = sort([alpha, beta])
          def code(alpha, beta):
          	tmp = 0
          	if beta <= 1e+60:
          		tmp = (1.0 + alpha) / ((beta + (alpha + 3.0)) * ((2.0 + alpha) + beta))
          	else:
          		tmp = ((1.0 + alpha) / beta) / (3.0 + (beta + alpha))
          	return tmp
          
          alpha, beta = sort([alpha, beta])
          function code(alpha, beta)
          	tmp = 0.0
          	if (beta <= 1e+60)
          		tmp = Float64(Float64(1.0 + alpha) / Float64(Float64(beta + Float64(alpha + 3.0)) * Float64(Float64(2.0 + alpha) + beta)));
          	else
          		tmp = Float64(Float64(Float64(1.0 + alpha) / beta) / Float64(3.0 + Float64(beta + alpha)));
          	end
          	return tmp
          end
          
          alpha, beta = num2cell(sort([alpha, beta])){:}
          function tmp_2 = code(alpha, beta)
          	tmp = 0.0;
          	if (beta <= 1e+60)
          		tmp = (1.0 + alpha) / ((beta + (alpha + 3.0)) * ((2.0 + alpha) + beta));
          	else
          		tmp = ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
          	end
          	tmp_2 = tmp;
          end
          
          NOTE: alpha and beta should be sorted in increasing order before calling this function.
          code[alpha_, beta_] := If[LessEqual[beta, 1e+60], N[(N[(1.0 + alpha), $MachinePrecision] / N[(N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision] * N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(N[(1.0 + alpha), $MachinePrecision] / beta), $MachinePrecision] / N[(3.0 + N[(beta + alpha), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]
          
          \begin{array}{l}
          [alpha, beta] = \mathsf{sort}([alpha, beta])\\
          \\
          \begin{array}{l}
          \mathbf{if}\;\beta \leq 10^{+60}:\\
          \;\;\;\;\frac{1 + \alpha}{\left(\beta + \left(\alpha + 3\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)}\\
          
          \mathbf{else}:\\
          \;\;\;\;\frac{\frac{1 + \alpha}{\beta}}{3 + \left(\beta + \alpha\right)}\\
          
          
          \end{array}
          \end{array}
          
          Derivation
          1. Split input into 2 regimes
          2. if beta < 9.9999999999999995e59

            1. Initial program 99.8%

              \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. Add Preprocessing
            3. Taylor expanded in beta around -inf

              \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            4. Step-by-step derivation
              1. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              2. lower-neg.f64 (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. sub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              4. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              5. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              6. +-commutative (N/A)

                \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              7. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              8. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              9. unsub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              10. lower--.f64 (19.2%)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            5. Applied rewrites (19.2%)

              \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            6. Step-by-step derivation
              1. lift-/.f64 (N/A)

                \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              2. lift-/.f64 (N/A)

                \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              4. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
              5. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
              6. lift-*.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
              7. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
            7. Applied rewrites (36.0%)

              \[\leadsto \color{blue}{\frac{1 + \alpha}{\left(\beta + \left(\alpha + 3\right)\right) \cdot \left(\left(2 + \alpha\right) + \beta\right)}} \]

            if 9.9999999999999995e59 < beta

            1. Initial program 78.2%

              \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. Add Preprocessing
            3. Step-by-step derivation
              1. Applied rewrites (76.9%)

                \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
              2. Taylor expanded in beta around inf

                \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
              3. Step-by-step derivation
                1. lower-/.f64 (N/A)

                  \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
                2. lower-+.f64 (88.6%)

                  \[\leadsto \frac{\frac{\color{blue}{1 + \alpha}}{\beta}}{3 + \left(\beta + \alpha\right)} \]
              4. Applied rewrites (88.6%)

                \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
            4. Recombined 2 regimes into one program.
            5. Add Preprocessing
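
            The Taylor step above has a compact reading (our gloss, using the factored form noted earlier): for fixed alpha and beta growing without bound,

            \[ \frac{\left(1 + \alpha\right)\left(1 + \beta\right)}{\left(\alpha + \beta + 2\right)^{2}\left(\alpha + \beta + 3\right)} = \frac{1 + \alpha}{\beta^{2}}\left(1 + O\left(1/\beta\right)\right), \]

            so the high-beta branch ((1 + alpha) / beta) / (3 + (beta + alpha)) keeps the leading term while reusing one exact denominator factor.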

            Alternative 8: 61.8% accurate, 2.2× speedup

            \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta} \end{array} \]
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            (FPCore (alpha beta)
             :precision binary64
             (/ (/ (+ 1.0 alpha) (+ beta (+ alpha 3.0))) (+ (+ 2.0 alpha) beta)))
            assert(alpha < beta);
            double code(double alpha, double beta) {
            	return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
            }
            
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            real(8) function code(alpha, beta)
                real(8), intent (in) :: alpha
                real(8), intent (in) :: beta
                code = ((1.0d0 + alpha) / (beta + (alpha + 3.0d0))) / ((2.0d0 + alpha) + beta)
            end function
            
            assert alpha < beta;
            public static double code(double alpha, double beta) {
            	return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
            }
            
            [alpha, beta] = sort([alpha, beta])
            def code(alpha, beta):
            	return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)
            
            alpha, beta = sort([alpha, beta])
            function code(alpha, beta)
            	return Float64(Float64(Float64(1.0 + alpha) / Float64(beta + Float64(alpha + 3.0))) / Float64(Float64(2.0 + alpha) + beta))
            end
            
            alpha, beta = num2cell(sort([alpha, beta])){:}
            function tmp = code(alpha, beta)
            	tmp = ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta);
            end
            
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            code[alpha_, beta_] := N[(N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta + N[(alpha + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(2.0 + alpha), $MachinePrecision] + beta), $MachinePrecision]), $MachinePrecision]
            
            \begin{array}{l}
            [alpha, beta] = \mathsf{sort}([alpha, beta])\\
            \\
            \frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}
            \end{array}
            
            Derivation
            1. Initial program 92.9%

              \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. Add Preprocessing
            3. Taylor expanded in beta around -inf

              \[\leadsto \frac{\frac{\color{blue}{-1 \cdot \left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            4. Step-by-step derivation
              1. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{\color{blue}{\mathsf{neg}\left(\left(-1 \cdot \alpha - 1\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              2. lower-neg.f64 (N/A)

                \[\leadsto \frac{\frac{\color{blue}{-\left(-1 \cdot \alpha - 1\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. sub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 \cdot \alpha + \left(\mathsf{neg}\left(1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              4. mul-1-neg (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              5. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\mathsf{neg}\left(\left(\alpha + 1\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              6. +-commutative (N/A)

                \[\leadsto \frac{\frac{-\left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right)}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              7. distribute-neg-in (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(\left(\mathsf{neg}\left(1\right)\right) + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              8. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(\color{blue}{-1} + \left(\mathsf{neg}\left(\alpha\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              9. unsub-neg (N/A)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              10. lower--.f64 (41.6%)

                \[\leadsto \frac{\frac{-\color{blue}{\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            5. Applied rewrites (41.6%)

              \[\leadsto \frac{\frac{\color{blue}{-\left(-1 - \alpha\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            6. Step-by-step derivation
              1. lift-/.f64 (N/A)

                \[\leadsto \color{blue}{\frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              2. lift-/.f64 (N/A)

                \[\leadsto \frac{\color{blue}{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
              3. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
              4. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} + 1} \]
              5. lift-+.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\color{blue}{\left(\alpha + \beta\right)} + 2 \cdot 1\right) + 1} \]
              6. lift-*.f64 (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}\right) + 1} \]
              7. metadata-eval (N/A)

                \[\leadsto \frac{\frac{-\left(-1 - \alpha\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + \color{blue}{2}\right) + 1} \]
            7. Applied rewrites (41.6%)

              \[\leadsto \color{blue}{\frac{\frac{1 + \alpha}{\beta + \left(\alpha + 3\right)}}{\left(2 + \alpha\right) + \beta}} \]
            8. Add Preprocessing
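
            The drop to 61.8% reflects that this branchless form is only asymptotically right. A quick comparison (our illustration, not from the report; printed values are approximate) against the original program shows the gap when alpha and beta are comparable and large:

            def orig(alpha, beta):
            	t_0 = (alpha + beta) + 2.0
            	return ((((alpha + beta) + (beta * alpha)) + 1.0) / t_0 / t_0) / (t_0 + 1.0)

            def alt8(alpha, beta):
            	return ((1.0 + alpha) / (beta + (alpha + 3.0))) / ((2.0 + alpha) + beta)

            print(orig(1e8, 1e8))  # ~1.25e-09
            print(alt8(1e8, 1e8))  # ~2.50e-09: the dropped factor (1 + beta)/(alpha + beta + 2) is ~1/2 here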

            Alternative 9: 55.3% accurate, 2.6× speedup

            \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \frac{\frac{1 + \alpha}{\beta}}{3 + \left(\beta + \alpha\right)} \end{array} \]
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            (FPCore (alpha beta)
             :precision binary64
             (/ (/ (+ 1.0 alpha) beta) (+ 3.0 (+ beta alpha))))
            assert(alpha < beta);
            double code(double alpha, double beta) {
            	return ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
            }
            
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            real(8) function code(alpha, beta)
                real(8), intent (in) :: alpha
                real(8), intent (in) :: beta
                code = ((1.0d0 + alpha) / beta) / (3.0d0 + (beta + alpha))
            end function
            
            assert alpha < beta;
            public static double code(double alpha, double beta) {
            	return ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
            }
            
            [alpha, beta] = sort([alpha, beta])
            def code(alpha, beta):
            	return ((1.0 + alpha) / beta) / (3.0 + (beta + alpha))
            
            alpha, beta = sort([alpha, beta])
            function code(alpha, beta)
            	return Float64(Float64(Float64(1.0 + alpha) / beta) / Float64(3.0 + Float64(beta + alpha)))
            end
            
            alpha, beta = num2cell(sort([alpha, beta])){:}
            function tmp = code(alpha, beta)
            	tmp = ((1.0 + alpha) / beta) / (3.0 + (beta + alpha));
            end
            
            NOTE: alpha and beta should be sorted in increasing order before calling this function.
            code[alpha_, beta_] := N[(N[(N[(1.0 + alpha), $MachinePrecision] / beta), $MachinePrecision] / N[(3.0 + N[(beta + alpha), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
            
            \begin{array}{l}
            [alpha, beta] = \mathsf{sort}([alpha, beta])\\
            \\
            \frac{\frac{1 + \alpha}{\beta}}{3 + \left(\beta + \alpha\right)}
            \end{array}
            
            Derivation
            1. Initial program 92.9%

              \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
            2. Add Preprocessing
            3. Step-by-step derivation
              1. Applied rewrites (92.1%)

                \[\leadsto \color{blue}{\frac{\left(\mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right) + 1\right) \cdot {\left(\left(\beta + \alpha\right) + 2\right)}^{-2}}{3 + \left(\beta + \alpha\right)}} \]
              2. Taylor expanded in beta around inf

                \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
              3. Step-by-step derivation
                1. lower-/.f64 (N/A)

                  \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
                2. lower-+.f64 (34.5%)

                  \[\leadsto \frac{\frac{\color{blue}{1 + \alpha}}{\beta}}{3 + \left(\beta + \alpha\right)} \]
              4. Applied rewrites (34.5%)

                \[\leadsto \frac{\color{blue}{\frac{1 + \alpha}{\beta}}}{3 + \left(\beta + \alpha\right)} \]
              5. Add Preprocessing

              Alternative 10: 54.7% accurate, 2.9× speedup

              \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} \mathbf{if}\;\beta \leq 10^{+156}:\\ \;\;\;\;\frac{1 + \alpha}{\beta \cdot \beta}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{\alpha}{\beta}}{\beta}\\ \end{array} \end{array} \]
              NOTE: alpha and beta should be sorted in increasing order before calling this function.
              (FPCore (alpha beta)
               :precision binary64
               (if (<= beta 1e+156) (/ (+ 1.0 alpha) (* beta beta)) (/ (/ alpha beta) beta)))
              assert(alpha < beta);
              double code(double alpha, double beta) {
              	double tmp;
              	if (beta <= 1e+156) {
              		tmp = (1.0 + alpha) / (beta * beta);
              	} else {
              		tmp = (alpha / beta) / beta;
              	}
              	return tmp;
              }
              
              NOTE: alpha and beta should be sorted in increasing order before calling this function.
              real(8) function code(alpha, beta)
                  real(8), intent (in) :: alpha
                  real(8), intent (in) :: beta
                  real(8) :: tmp
                  if (beta <= 1d+156) then
                      tmp = (1.0d0 + alpha) / (beta * beta)
                  else
                      tmp = (alpha / beta) / beta
                  end if
                  code = tmp
              end function
              
              assert alpha < beta;
              public static double code(double alpha, double beta) {
              	double tmp;
              	if (beta <= 1e+156) {
              		tmp = (1.0 + alpha) / (beta * beta);
              	} else {
              		tmp = (alpha / beta) / beta;
              	}
              	return tmp;
              }
              
              [alpha, beta] = sort([alpha, beta])
              def code(alpha, beta):
              	tmp = 0
              	if beta <= 1e+156:
              		tmp = (1.0 + alpha) / (beta * beta)
              	else:
              		tmp = (alpha / beta) / beta
              	return tmp
              
              alpha, beta = sort([alpha, beta])
              function code(alpha, beta)
              	tmp = 0.0
              	if (beta <= 1e+156)
              		tmp = Float64(Float64(1.0 + alpha) / Float64(beta * beta));
              	else
              		tmp = Float64(Float64(alpha / beta) / beta);
              	end
              	return tmp
              end
              
              alpha, beta = num2cell(sort([alpha, beta])){:}
              function tmp_2 = code(alpha, beta)
              	tmp = 0.0;
              	if (beta <= 1e+156)
              		tmp = (1.0 + alpha) / (beta * beta);
              	else
              		tmp = (alpha / beta) / beta;
              	end
              	tmp_2 = tmp;
              end
              
              NOTE: alpha and beta should be sorted in increasing order before calling this function.
              code[alpha_, beta_] := If[LessEqual[beta, 1e+156], N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta * beta), $MachinePrecision]), $MachinePrecision], N[(N[(alpha / beta), $MachinePrecision] / beta), $MachinePrecision]]
              
              \begin{array}{l}
              [alpha, beta] = \mathsf{sort}([alpha, beta])\\
              \\
              \begin{array}{l}
              \mathbf{if}\;\beta \leq 10^{+156}:\\
              \;\;\;\;\frac{1 + \alpha}{\beta \cdot \beta}\\
              
              \mathbf{else}:\\
              \;\;\;\;\frac{\frac{\alpha}{\beta}}{\beta}\\
              
              
              \end{array}
              \end{array}
              
              Derivation
              1. Split input into 2 regimes
              2. if beta < 9.9999999999999998e155

                1. Initial program 97.3%

                  \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                2. Add Preprocessing
                3. Taylor expanded in beta around inf

                  \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                4. Step-by-step derivation
                  1. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                  2. lower-+.f64 (N/A)

                    \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                  3. unpow2 (N/A)

                    \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                  4. lower-*.f64 (17.8%)

                    \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                5. Applied rewrites (17.8%)

                  \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]

                if 9.9999999999999998e155 < beta

                1. Initial program 77.1%

                  \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                2. Add Preprocessing
                3. Taylor expanded in beta around inf

                  \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                4. Step-by-step derivation
                  1. lower-/.f64 (N/A)

                    \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                  2. lower-+.f64 (N/A)

                    \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                  3. unpow2 (N/A)

                    \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                  4. lower-*.f64 (90.6%)

                    \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                5. Applied rewrites (90.6%)

                  \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                6. Taylor expanded in alpha around inf

                  \[\leadsto \frac{\alpha}{\color{blue}{{\beta}^{2}}} \]
                7. Step-by-step derivation
                  1. Applied rewrites (90.6%)

                    \[\leadsto \frac{\alpha}{\color{blue}{\beta \cdot \beta}} \]
                  2. Step-by-step derivation
                    1. Applied rewrites (94.4%)

                      \[\leadsto \frac{\frac{\alpha}{\beta}}{\beta} \]
                  3. Recombined 2 regimes into one program.
                  4. Add Preprocessing
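
                  The split near 1e+156 is plausibly driven by overflow (our reading; the exact threshold comes from Herbie's sampling): beta * beta exceeds the binary64 maximum (~1.8e308) once beta passes roughly 1.3e154, so the first branch collapses to zero through an infinite denominator, while dividing by beta twice stays finite. A minimal illustration with inputs chosen by us:

                  alpha, beta = 1e150, 1e200
                  print((1.0 + alpha) / (beta * beta))  # 0.0: beta * beta overflows to inf
                  print((alpha / beta) / beta)          # 1e-250: finite and near the true value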

                  Alternative 11: 55.2% accurate, 3.2× speedup

                  \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \frac{\frac{1 + \alpha}{\beta}}{\beta} \end{array} \]
                  NOTE: alpha and beta should be sorted in increasing order before calling this function.
                  (FPCore (alpha beta) :precision binary64 (/ (/ (+ 1.0 alpha) beta) beta))
                  assert(alpha < beta);
                  double code(double alpha, double beta) {
                  	return ((1.0 + alpha) / beta) / beta;
                  }
                  
                  NOTE: alpha and beta should be sorted in increasing order before calling this function.
                  real(8) function code(alpha, beta)
                      real(8), intent (in) :: alpha
                      real(8), intent (in) :: beta
                      code = ((1.0d0 + alpha) / beta) / beta
                  end function
                  
                  assert alpha < beta;
                  public static double code(double alpha, double beta) {
                  	return ((1.0 + alpha) / beta) / beta;
                  }
                  
                  [alpha, beta] = sort([alpha, beta])
                  def code(alpha, beta):
                  	return ((1.0 + alpha) / beta) / beta
                  
                  alpha, beta = sort([alpha, beta])
                  function code(alpha, beta)
                  	return Float64(Float64(Float64(1.0 + alpha) / beta) / beta)
                  end
                  
                  alpha, beta = num2cell(sort([alpha, beta])){:}
                  function tmp = code(alpha, beta)
                  	tmp = ((1.0 + alpha) / beta) / beta;
                  end
                  
                  NOTE: alpha and beta should be sorted in increasing order before calling this function.
                  code[alpha_, beta_] := N[(N[(N[(1.0 + alpha), $MachinePrecision] / beta), $MachinePrecision] / beta), $MachinePrecision]
                  
                  \begin{array}{l}
                  [alpha, beta] = \mathsf{sort}([alpha, beta])\\
                  \\
                  \frac{\frac{1 + \alpha}{\beta}}{\beta}
                  \end{array}
                  
                  Derivation
                  1. Initial program 92.9%

                    \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                  2. Add Preprocessing
                  3. Taylor expanded in beta around inf

                    \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                  4. Step-by-step derivation
                    1. lower-/.f64 (N/A)

                      \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                    2. lower-+.f64 (N/A)

                      \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                    3. unpow2 (N/A)

                      \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                    4. lower-*.f64 (33.8%)

                      \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                  5. Applied rewrites (33.8%)

                    \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                  6. Step-by-step derivation
                    1. Applied rewrites (34.7%)

                      \[\leadsto \frac{\frac{1 + \alpha}{\beta}}{\color{blue}{\beta}} \]
                    2. Add Preprocessing

                    Alternative 12: 51.7% accurate, 3.6× speedup

                    \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \begin{array}{l} \mathbf{if}\;\alpha \leq 1:\\ \;\;\;\;\frac{1}{\beta \cdot \beta}\\ \mathbf{else}:\\ \;\;\;\;\frac{\alpha}{\beta \cdot \beta}\\ \end{array} \end{array} \]
                    NOTE: alpha and beta should be sorted in increasing order before calling this function.
                    (FPCore (alpha beta)
                     :precision binary64
                     (if (<= alpha 1.0) (/ 1.0 (* beta beta)) (/ alpha (* beta beta))))
                    assert(alpha < beta);
                    double code(double alpha, double beta) {
                    	double tmp;
                    	if (alpha <= 1.0) {
                    		tmp = 1.0 / (beta * beta);
                    	} else {
                    		tmp = alpha / (beta * beta);
                    	}
                    	return tmp;
                    }
                    
                    NOTE: alpha and beta should be sorted in increasing order before calling this function.
                    real(8) function code(alpha, beta)
                        real(8), intent (in) :: alpha
                        real(8), intent (in) :: beta
                        real(8) :: tmp
                        if (alpha <= 1.0d0) then
                            tmp = 1.0d0 / (beta * beta)
                        else
                            tmp = alpha / (beta * beta)
                        end if
                        code = tmp
                    end function
                    
                    assert alpha < beta;
                    public static double code(double alpha, double beta) {
                    	double tmp;
                    	if (alpha <= 1.0) {
                    		tmp = 1.0 / (beta * beta);
                    	} else {
                    		tmp = alpha / (beta * beta);
                    	}
                    	return tmp;
                    }
                    
                    [alpha, beta] = sort([alpha, beta])
                    def code(alpha, beta):
                    	tmp = 0
                    	if alpha <= 1.0:
                    		tmp = 1.0 / (beta * beta)
                    	else:
                    		tmp = alpha / (beta * beta)
                    	return tmp
                    
                    alpha, beta = sort([alpha, beta])
                    function code(alpha, beta)
                    	tmp = 0.0
                    	if (alpha <= 1.0)
                    		tmp = Float64(1.0 / Float64(beta * beta));
                    	else
                    		tmp = Float64(alpha / Float64(beta * beta));
                    	end
                    	return tmp
                    end
                    
                    alpha, beta = num2cell(sort([alpha, beta])){:}
                    function tmp_2 = code(alpha, beta)
                    	tmp = 0.0;
                    	if (alpha <= 1.0)
                    		tmp = 1.0 / (beta * beta);
                    	else
                    		tmp = alpha / (beta * beta);
                    	end
                    	tmp_2 = tmp;
                    end
                    
                    NOTE: alpha and beta should be sorted in increasing order before calling this function.
                    code[alpha_, beta_] := If[LessEqual[alpha, 1.0], N[(1.0 / N[(beta * beta), $MachinePrecision]), $MachinePrecision], N[(alpha / N[(beta * beta), $MachinePrecision]), $MachinePrecision]]
                    
                    \begin{array}{l}
                    [alpha, beta] = \mathsf{sort}([alpha, beta])\\
                    \\
                    \begin{array}{l}
                    \mathbf{if}\;\alpha \leq 1:\\
                    \;\;\;\;\frac{1}{\beta \cdot \beta}\\
                    
                    \mathbf{else}:\\
                    \;\;\;\;\frac{\alpha}{\beta \cdot \beta}\\
                    
                    
                    \end{array}
                    \end{array}
                    
                    Derivation
                    1. Split input into 2 regimes
                    2. if alpha < 1

                      1. Initial program 99.8%

                        \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                      2. Add Preprocessing
                      3. Taylor expanded in beta around inf

                        \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                      4. Step-by-step derivation
                        1. lower-/.f64 (N/A)

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                        2. lower-+.f64 (N/A)

                          \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                        3. unpow2 (N/A)

                          \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                        4. lower-*.f64 (42.3%)

                          \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                      5. Applied rewrites (42.3%)

                        \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                      6. Taylor expanded in alpha around 0

                        \[\leadsto \frac{1}{\color{blue}{\beta} \cdot \beta} \]
                      7. Step-by-step derivation
                        1. Applied rewrites (41.0%)

                          \[\leadsto \frac{1}{\color{blue}{\beta} \cdot \beta} \]

                        if 1 < alpha

                        1. Initial program 80.7%

                          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                        2. Add Preprocessing
                        3. Taylor expanded in beta around inf

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                        4. Step-by-step derivation
                          1. lower-/.f64 (N/A)

                            \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                          2. lower-+.f64 (N/A)

                            \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                          3. unpow2 (N/A)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                          4. lower-*.f64 (18.7%)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                        5. Applied rewrites (18.7%)

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                        6. Taylor expanded in alpha around inf

                          \[\leadsto \frac{\alpha}{\color{blue}{{\beta}^{2}}} \]
                        7. Step-by-step derivation
                          1. Applied rewrites (18.1%)

                            \[\leadsto \frac{\alpha}{\color{blue}{\beta \cdot \beta}} \]
                        8. Recombined 2 regimes into one program.
                        9. Add Preprocessing
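
                        Read together, the two branches compute max(1, alpha) / (beta * beta). Our observation, not the report's: for alpha >= 0 this is within a factor of two of the true leading numerator,

                        \[ \frac{1}{2} \le \frac{\max\left(1, \alpha\right)}{1 + \alpha} \le 1 \quad \left(\alpha \ge 0\right), \]

                        while for alpha near -1 the ratio blows up, which is consistent with the 51.7% overall accuracy.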

                        Alternative 13: 52.3% accurate, 4.2× speedup

                        \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \frac{1 + \alpha}{\beta \cdot \beta} \end{array} \]
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        (FPCore (alpha beta) :precision binary64 (/ (+ 1.0 alpha) (* beta beta)))
                        assert(alpha < beta);
                        double code(double alpha, double beta) {
                        	return (1.0 + alpha) / (beta * beta);
                        }
                        
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        real(8) function code(alpha, beta)
                            real(8), intent (in) :: alpha
                            real(8), intent (in) :: beta
                            code = (1.0d0 + alpha) / (beta * beta)
                        end function
                        
                        assert alpha < beta;
                        public static double code(double alpha, double beta) {
                        	return (1.0 + alpha) / (beta * beta);
                        }
                        
                        [alpha, beta] = sort([alpha, beta])
                        def code(alpha, beta):
                        	return (1.0 + alpha) / (beta * beta)
                        
                        alpha, beta = sort([alpha, beta])
                        function code(alpha, beta)
                        	return Float64(Float64(1.0 + alpha) / Float64(beta * beta))
                        end
                        
                        alpha, beta = num2cell(sort([alpha, beta])){:}
                        function tmp = code(alpha, beta)
                        	tmp = (1.0 + alpha) / (beta * beta);
                        end
                        
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        code[alpha_, beta_] := N[(N[(1.0 + alpha), $MachinePrecision] / N[(beta * beta), $MachinePrecision]), $MachinePrecision]
                        
                        \begin{array}{l}
                        [alpha, beta] = \mathsf{sort}([alpha, beta])\\
                        \\
                        \frac{1 + \alpha}{\beta \cdot \beta}
                        \end{array}
                        
                        Derivation
                        1. Initial program 92.9%

                          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                        2. Add Preprocessing
                        3. Taylor expanded in beta around inf

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                        4. Step-by-step derivation
                          1. lower-/.f64 (N/A)

                            \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                          2. lower-+.f64 (N/A)

                            \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                          3. unpow2 (N/A)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                          4. lower-*.f64 (33.8%)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                        5. Applied rewrites (33.8%)

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                        6. Add Preprocessing

                        Alternative 14: 31.5% accurate, 4.9× speedup

                        \[\begin{array}{l} [alpha, beta] = \mathsf{sort}([alpha, beta])\\ \\ \frac{\alpha}{\beta \cdot \beta} \end{array} \]
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        (FPCore (alpha beta) :precision binary64 (/ alpha (* beta beta)))
                        assert(alpha < beta);
                        double code(double alpha, double beta) {
                        	return alpha / (beta * beta);
                        }
                        
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        real(8) function code(alpha, beta)
                            real(8), intent (in) :: alpha
                            real(8), intent (in) :: beta
                            code = alpha / (beta * beta)
                        end function
                        
                        assert alpha < beta;
                        public static double code(double alpha, double beta) {
                        	return alpha / (beta * beta);
                        }
                        
                        [alpha, beta] = sort([alpha, beta])
                        def code(alpha, beta):
                        	return alpha / (beta * beta)
                        
                        alpha, beta = sort([alpha, beta])
                        function code(alpha, beta)
                        	return Float64(alpha / Float64(beta * beta))
                        end
                        
                        alpha, beta = num2cell(sort([alpha, beta])){:}
                        function tmp = code(alpha, beta)
                        	tmp = alpha / (beta * beta);
                        end
                        
                        NOTE: alpha and beta should be sorted in increasing order before calling this function.
                        code[alpha_, beta_] := N[(alpha / N[(beta * beta), $MachinePrecision]), $MachinePrecision]
                        
                        \begin{array}{l}
                        [alpha, beta] = \mathsf{sort}([alpha, beta])\\
                        \\
                        \frac{\alpha}{\beta \cdot \beta}
                        \end{array}
                        
                        Derivation
                        1. Initial program 92.9%

                          \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
                        2. Add Preprocessing
                        3. Taylor expanded in beta around inf

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                        4. Step-by-step derivation
                            1. lower-/.f64 (N/A)

                            \[\leadsto \color{blue}{\frac{1 + \alpha}{{\beta}^{2}}} \]
                            2. lower-+.f64 (N/A)

                            \[\leadsto \frac{\color{blue}{1 + \alpha}}{{\beta}^{2}} \]
                            3. unpow2 (N/A)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                            4. lower-*.f64 (33.8%)

                            \[\leadsto \frac{1 + \alpha}{\color{blue}{\beta \cdot \beta}} \]
                          5. Applied rewrites (33.8%)

                          \[\leadsto \color{blue}{\frac{1 + \alpha}{\beta \cdot \beta}} \]
                        6. Taylor expanded in alpha around inf

                          \[\leadsto \frac{\alpha}{\color{blue}{{\beta}^{2}}} \]
                        7. Step-by-step derivation
                            1. Applied rewrites (23.4%)

                            \[\leadsto \frac{\alpha}{\color{blue}{\beta \cdot \beta}} \]
                          2. Add Preprocessing
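
                          This final alternative keeps only the leading term of both expansions (our gloss on the derivation): with the inputs sorted so that beta dominates,

                          \[ \frac{\left(1 + \alpha\right)\left(1 + \beta\right)}{\left(\alpha + \beta + 2\right)^{2}\left(\alpha + \beta + 3\right)} \;\to\; \frac{1 + \alpha}{\beta^{2}} \;\to\; \frac{\alpha}{\beta^{2}}, \]

                          dropping both +1 corrections. That makes it the cheapest candidate (4.9× speedup) and, at 31.5%, the least accurate.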

                          Reproduce

                          herbie shell --seed 2024314 
                          (FPCore (alpha beta)
                            :name "Octave 3.8, jcobi/3"
                            :precision binary64
                            :pre (and (> alpha -1.0) (> beta -1.0))
                            (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ (+ alpha beta) (* 2.0 1.0)) 1.0)))
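
                          The herbie shell command reads FPCore expressions from standard input and prints improved versions, so a typical invocation (assuming the FPCore above is saved to a file; jcobi3.fpcore is a name of our choosing) would be herbie shell --seed 2024314 < jcobi3.fpcore. Fixing the seed pins the sampled inputs so the accuracy figures in this report can be reproduced.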