Octave 3.8, jcobi/3

Percentage Accurate: 94.2% → 99.7%
Time: 14.9s
Alternatives: 24
Speedup: 1.3×

Specification

\[\alpha > -1 \land \beta > -1\]
\[\begin{array}{l} \\ \begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2 \cdot 1\\ \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{t\_0}}{t\_0}}{t\_0 + 1} \end{array} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) (* 2.0 1.0))))
   (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) t_0) t_0) (+ t_0 1.0))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    t_0 = (alpha + beta) + (2.0d0 * 1.0d0)
    code = (((((alpha + beta) + (beta * alpha)) + 1.0d0) / t_0) / t_0) / (t_0 + 1.0d0)
end function
public static double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
def code(alpha, beta):
	t_0 = (alpha + beta) + (2.0 * 1.0)
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0)
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + Float64(2.0 * 1.0))
	return Float64(Float64(Float64(Float64(Float64(Float64(alpha + beta) + Float64(beta * alpha)) + 1.0) / t_0) / t_0) / Float64(t_0 + 1.0))
end
function tmp = code(alpha, beta)
	t_0 = (alpha + beta) + (2.0 * 1.0);
	tmp = (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + N[(2.0 * 1.0), $MachinePrecision]), $MachinePrecision]}, N[(N[(N[(N[(N[(N[(alpha + beta), $MachinePrecision] + N[(beta * alpha), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(t$95$0 + 1.0), $MachinePrecision]), $MachinePrecision]]
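As a quick sanity check (an illustration, not part of the Herbie report), the specification can be transcribed directly into Python. At alpha = beta = 0 it reduces to 1/(2 · 2 · 3) = 1/12:

```python
def jcobi_spec(alpha, beta):
    # Direct transcription of the specification: the shared divisor
    # t_0 = (alpha + beta) + 2 is used twice, plus a final (t_0 + 1).
    t_0 = (alpha + beta) + (2.0 * 1.0)
    return (((alpha + beta) + (beta * alpha)) + 1.0) / t_0 / t_0 / (t_0 + 1.0)

# At alpha = beta = 0: t_0 = 2 and the numerator is 1, so the result is 1/12.
print(jcobi_spec(0.0, 0.0))
```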

Sampling outcomes in binary64 precision.

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 24 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 94.2% accurate, 1.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2 \cdot 1\\ \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{t\_0}}{t\_0}}{t\_0 + 1} \end{array} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) (* 2.0 1.0))))
   (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) t_0) t_0) (+ t_0 1.0))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    t_0 = (alpha + beta) + (2.0d0 * 1.0d0)
    code = (((((alpha + beta) + (beta * alpha)) + 1.0d0) / t_0) / t_0) / (t_0 + 1.0d0)
end function
public static double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + (2.0 * 1.0);
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
}
def code(alpha, beta):
	t_0 = (alpha + beta) + (2.0 * 1.0)
	return (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0)
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + Float64(2.0 * 1.0))
	return Float64(Float64(Float64(Float64(Float64(Float64(alpha + beta) + Float64(beta * alpha)) + 1.0) / t_0) / t_0) / Float64(t_0 + 1.0))
end
function tmp = code(alpha, beta)
	t_0 = (alpha + beta) + (2.0 * 1.0);
	tmp = (((((alpha + beta) + (beta * alpha)) + 1.0) / t_0) / t_0) / (t_0 + 1.0);
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + N[(2.0 * 1.0), $MachinePrecision]), $MachinePrecision]}, N[(N[(N[(N[(N[(N[(alpha + beta), $MachinePrecision] + N[(beta * alpha), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(t$95$0 + 1.0), $MachinePrecision]), $MachinePrecision]]

Alternative 1: 99.7% accurate, 0.4× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2\\ t_1 := \frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \alpha \cdot \beta\right) + 1}{t\_0}}{t\_0}}{1 + t\_0}\\ \mathbf{if}\;t\_1 \leq 0.1:\\ \;\;\;\;t\_1\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(t\_0 \cdot \left(\frac{\frac{2}{\beta + 1} + \left(\frac{\beta}{\beta + 1} + \frac{-1 - \beta}{\left(-1 - \beta\right) \cdot \left(-1 - \beta\right)}\right)}{\alpha} + \frac{1}{\beta + 1}\right)\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)}\\ \end{array} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) 2.0))
        (t_1
         (/
          (/ (/ (+ (+ (+ alpha beta) (* alpha beta)) 1.0) t_0) t_0)
          (+ 1.0 t_0))))
   (if (<= t_1 0.1)
     t_1
     (/
      1.0
      (*
       (*
        t_0
        (+
         (/
          (+
           (/ 2.0 (+ beta 1.0))
           (+
            (/ beta (+ beta 1.0))
            (/ (- -1.0 beta) (* (- -1.0 beta) (- -1.0 beta)))))
          alpha)
         (/ 1.0 (+ beta 1.0))))
       (+ alpha (+ beta 3.0)))))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + 2.0;
	double t_1 = (((((alpha + beta) + (alpha * beta)) + 1.0) / t_0) / t_0) / (1.0 + t_0);
	double tmp;
	if (t_1 <= 0.1) {
		tmp = t_1;
	} else {
		tmp = 1.0 / ((t_0 * ((((2.0 / (beta + 1.0)) + ((beta / (beta + 1.0)) + ((-1.0 - beta) / ((-1.0 - beta) * (-1.0 - beta))))) / alpha) + (1.0 / (beta + 1.0)))) * (alpha + (beta + 3.0)));
	}
	return tmp;
}
real(8) function code(alpha, beta)
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    real(8) :: t_0
    real(8) :: t_1
    real(8) :: tmp
    t_0 = (alpha + beta) + 2.0d0
    t_1 = (((((alpha + beta) + (alpha * beta)) + 1.0d0) / t_0) / t_0) / (1.0d0 + t_0)
    if (t_1 <= 0.1d0) then
        tmp = t_1
    else
        tmp = 1.0d0 / ((t_0 * ((((2.0d0 / (beta + 1.0d0)) + ((beta / (beta + 1.0d0)) + (((-1.0d0) - beta) / (((-1.0d0) - beta) * ((-1.0d0) - beta))))) / alpha) + (1.0d0 / (beta + 1.0d0)))) * (alpha + (beta + 3.0d0)))
    end if
    code = tmp
end function
public static double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + 2.0;
	double t_1 = (((((alpha + beta) + (alpha * beta)) + 1.0) / t_0) / t_0) / (1.0 + t_0);
	double tmp;
	if (t_1 <= 0.1) {
		tmp = t_1;
	} else {
		tmp = 1.0 / ((t_0 * ((((2.0 / (beta + 1.0)) + ((beta / (beta + 1.0)) + ((-1.0 - beta) / ((-1.0 - beta) * (-1.0 - beta))))) / alpha) + (1.0 / (beta + 1.0)))) * (alpha + (beta + 3.0)));
	}
	return tmp;
}
def code(alpha, beta):
	t_0 = (alpha + beta) + 2.0
	t_1 = (((((alpha + beta) + (alpha * beta)) + 1.0) / t_0) / t_0) / (1.0 + t_0)
	tmp = 0
	if t_1 <= 0.1:
		tmp = t_1
	else:
		tmp = 1.0 / ((t_0 * ((((2.0 / (beta + 1.0)) + ((beta / (beta + 1.0)) + ((-1.0 - beta) / ((-1.0 - beta) * (-1.0 - beta))))) / alpha) + (1.0 / (beta + 1.0)))) * (alpha + (beta + 3.0)))
	return tmp
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + 2.0)
	t_1 = Float64(Float64(Float64(Float64(Float64(Float64(alpha + beta) + Float64(alpha * beta)) + 1.0) / t_0) / t_0) / Float64(1.0 + t_0))
	tmp = 0.0
	if (t_1 <= 0.1)
		tmp = t_1;
	else
		tmp = Float64(1.0 / Float64(Float64(t_0 * Float64(Float64(Float64(Float64(2.0 / Float64(beta + 1.0)) + Float64(Float64(beta / Float64(beta + 1.0)) + Float64(Float64(-1.0 - beta) / Float64(Float64(-1.0 - beta) * Float64(-1.0 - beta))))) / alpha) + Float64(1.0 / Float64(beta + 1.0)))) * Float64(alpha + Float64(beta + 3.0))));
	end
	return tmp
end
function tmp_2 = code(alpha, beta)
	t_0 = (alpha + beta) + 2.0;
	t_1 = (((((alpha + beta) + (alpha * beta)) + 1.0) / t_0) / t_0) / (1.0 + t_0);
	tmp = 0.0;
	if (t_1 <= 0.1)
		tmp = t_1;
	else
		tmp = 1.0 / ((t_0 * ((((2.0 / (beta + 1.0)) + ((beta / (beta + 1.0)) + ((-1.0 - beta) / ((-1.0 - beta) * (-1.0 - beta))))) / alpha) + (1.0 / (beta + 1.0)))) * (alpha + (beta + 3.0)));
	end
	tmp_2 = tmp;
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]}, Block[{t$95$1 = N[(N[(N[(N[(N[(N[(alpha + beta), $MachinePrecision] + N[(alpha * beta), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / t$95$0), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(1.0 + t$95$0), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[t$95$1, 0.1], t$95$1, N[(1.0 / N[(N[(t$95$0 * N[(N[(N[(N[(2.0 / N[(beta + 1.0), $MachinePrecision]), $MachinePrecision] + N[(N[(beta / N[(beta + 1.0), $MachinePrecision]), $MachinePrecision] + N[(N[(-1.0 - beta), $MachinePrecision] / N[(N[(-1.0 - beta), $MachinePrecision] * N[(-1.0 - beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / alpha), $MachinePrecision] + N[(1.0 / N[(beta + 1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] * N[(alpha + N[(beta + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]]
Derivation
  1. Split input into 2 regimes
  2. if (/.f64 (/.f64 (/.f64 (+.f64 (+.f64 (+.f64 alpha beta) (*.f64 beta alpha)) #s(literal 1 binary64)) (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64)))) (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64)))) (+.f64 (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64))) #s(literal 1 binary64))) < 0.10000000000000001

    1. Initial program 99.9%

      \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    2. Add Preprocessing

    if 0.10000000000000001 < (/.f64 (/.f64 (/.f64 (+.f64 (+.f64 (+.f64 alpha beta) (*.f64 beta alpha)) #s(literal 1 binary64)) (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64)))) (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64)))) (+.f64 (+.f64 (+.f64 alpha beta) (*.f64 #s(literal 2 binary64) #s(literal 1 binary64))) #s(literal 1 binary64)))

    1. Initial program 1.6%

      \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
      2. div-inv (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1} \cdot \frac{1}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
      3. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}} \cdot \frac{1}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      4. clear-num (N/A)

        \[\leadsto \color{blue}{\frac{1}{\frac{\left(\alpha + \beta\right) + 2 \cdot 1}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}}} \cdot \frac{1}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      5. frac-times (N/A)

        \[\leadsto \color{blue}{\frac{1 \cdot 1}{\frac{\left(\alpha + \beta\right) + 2 \cdot 1}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}} \cdot \left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right)}} \]
      6. metadata-eval (N/A)

        \[\leadsto \frac{\color{blue}{1}}{\frac{\left(\alpha + \beta\right) + 2 \cdot 1}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}} \cdot \left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right)} \]
      7. lower-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{1}{\frac{\left(\alpha + \beta\right) + 2 \cdot 1}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}} \cdot \left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right)}} \]
      8. lower-*.f64 (N/A)

        \[\leadsto \frac{1}{\color{blue}{\frac{\left(\alpha + \beta\right) + 2 \cdot 1}{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}} \cdot \left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right)}} \]
    4. Applied rewrites (1.6%)

      \[\leadsto \color{blue}{\frac{1}{\left(\left(\left(\alpha + \beta\right) + 2\right) \cdot \frac{\left(\alpha + \beta\right) + 2}{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right) + 1}\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)}} \]
    5. Taylor expanded in alpha around -inf

      \[\leadsto \frac{1}{\left(\left(\left(\alpha + \beta\right) + 2\right) \cdot \color{blue}{\left(-1 \cdot \frac{\left(2 \cdot \frac{1}{-1 \cdot \beta - 1} + \frac{\beta}{-1 \cdot \beta - 1}\right) - -1 \cdot \frac{1 + \beta}{{\left(-1 \cdot \beta - 1\right)}^{2}}}{\alpha} - \frac{1}{-1 \cdot \beta - 1}\right)}\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)} \]
    6. Step-by-step derivation
      1. lower--.f64 (N/A)

        \[\leadsto \frac{1}{\left(\left(\left(\alpha + \beta\right) + 2\right) \cdot \color{blue}{\left(-1 \cdot \frac{\left(2 \cdot \frac{1}{-1 \cdot \beta - 1} + \frac{\beta}{-1 \cdot \beta - 1}\right) - -1 \cdot \frac{1 + \beta}{{\left(-1 \cdot \beta - 1\right)}^{2}}}{\alpha} - \frac{1}{-1 \cdot \beta - 1}\right)}\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)} \]
    7. Applied rewrites (99.7%)

      \[\leadsto \frac{1}{\left(\left(\left(\alpha + \beta\right) + 2\right) \cdot \color{blue}{\left(\left(-\frac{\frac{2}{\left(-\beta\right) + -1} + \left(\frac{\beta}{\left(-\beta\right) + -1} + \frac{1 + \beta}{\left(\left(-\beta\right) + -1\right) \cdot \left(\left(-\beta\right) + -1\right)}\right)}{\alpha}\right) - \frac{1}{\left(-\beta\right) + -1}\right)}\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (99.9%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \alpha \cdot \beta\right) + 1}{\left(\alpha + \beta\right) + 2}}{\left(\alpha + \beta\right) + 2}}{1 + \left(\left(\alpha + \beta\right) + 2\right)} \leq 0.1:\\ \;\;\;\;\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \alpha \cdot \beta\right) + 1}{\left(\alpha + \beta\right) + 2}}{\left(\alpha + \beta\right) + 2}}{1 + \left(\left(\alpha + \beta\right) + 2\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\left(\left(\left(\alpha + \beta\right) + 2\right) \cdot \left(\frac{\frac{2}{\beta + 1} + \left(\frac{\beta}{\beta + 1} + \frac{-1 - \beta}{\left(-1 - \beta\right) \cdot \left(-1 - \beta\right)}\right)}{\alpha} + \frac{1}{\beta + 1}\right)\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
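To see the two regimes side by side (an illustrative check, not taken from the report), evaluate both programs at a point in the t_1 ≤ 0.1 regime, where Alternative 1 returns the same triple-division expression as the original:

```python
def original(alpha, beta):
    # Initial program: three successive divisions by t_0, t_0, and t_0 + 1.
    t_0 = (alpha + beta) + 2.0
    return ((alpha + beta) + (beta * alpha) + 1.0) / t_0 / t_0 / (t_0 + 1.0)

def alternative_1(alpha, beta):
    # Herbie's Alternative 1: keep the original form when the value is
    # small (t_1 <= 0.1); otherwise use the series-expanded reciprocal.
    t_0 = (alpha + beta) + 2.0
    t_1 = ((alpha + beta) + (alpha * beta) + 1.0) / t_0 / t_0 / (1.0 + t_0)
    if t_1 <= 0.1:
        return t_1
    return 1.0 / ((t_0 * (((2.0 / (beta + 1.0)
                            + (beta / (beta + 1.0)
                               + (-1.0 - beta) / ((-1.0 - beta) * (-1.0 - beta))))
                           / alpha)
                          + 1.0 / (beta + 1.0)))
                  * (alpha + (beta + 3.0)))

# alpha = beta = 0.5 lands in the t_1 <= 0.1 branch: both give exactly 0.0625.
print(original(0.5, 0.5), alternative_1(0.5, 0.5))
```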

Alternative 2: 96.2% accurate, 0.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \left(\alpha + \beta\right) + 2\\ \mathbf{if}\;\beta \leq 2.3 \cdot 10^{+82}:\\ \;\;\;\;\frac{\frac{1 + \mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right)}{t\_0}}{t\_0 \cdot \left(\alpha + \left(\beta + 3\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{\left(\frac{1}{\beta} + \left(\alpha + \frac{\alpha}{\beta}\right)\right) + \left(1 + \left(-1 - \alpha\right) \cdot \frac{\alpha + 2}{\beta}\right)}{t\_0}}{1 + t\_0}\\ \end{array} \end{array} \]
(FPCore (alpha beta)
 :precision binary64
 (let* ((t_0 (+ (+ alpha beta) 2.0)))
   (if (<= beta 2.3e+82)
     (/
      (/ (+ 1.0 (fma alpha beta (+ alpha beta))) t_0)
      (* t_0 (+ alpha (+ beta 3.0))))
     (/
      (/
       (+
        (+ (/ 1.0 beta) (+ alpha (/ alpha beta)))
        (+ 1.0 (* (- -1.0 alpha) (/ (+ alpha 2.0) beta))))
       t_0)
      (+ 1.0 t_0)))))
double code(double alpha, double beta) {
	double t_0 = (alpha + beta) + 2.0;
	double tmp;
	if (beta <= 2.3e+82) {
		tmp = ((1.0 + fma(alpha, beta, (alpha + beta))) / t_0) / (t_0 * (alpha + (beta + 3.0)));
	} else {
		tmp = ((((1.0 / beta) + (alpha + (alpha / beta))) + (1.0 + ((-1.0 - alpha) * ((alpha + 2.0) / beta)))) / t_0) / (1.0 + t_0);
	}
	return tmp;
}
function code(alpha, beta)
	t_0 = Float64(Float64(alpha + beta) + 2.0)
	tmp = 0.0
	if (beta <= 2.3e+82)
		tmp = Float64(Float64(Float64(1.0 + fma(alpha, beta, Float64(alpha + beta))) / t_0) / Float64(t_0 * Float64(alpha + Float64(beta + 3.0))));
	else
		tmp = Float64(Float64(Float64(Float64(Float64(1.0 / beta) + Float64(alpha + Float64(alpha / beta))) + Float64(1.0 + Float64(Float64(-1.0 - alpha) * Float64(Float64(alpha + 2.0) / beta)))) / t_0) / Float64(1.0 + t_0));
	end
	return tmp
end
code[alpha_, beta_] := Block[{t$95$0 = N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]}, If[LessEqual[beta, 2.3e+82], N[(N[(N[(1.0 + N[(alpha * beta + N[(alpha + beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(t$95$0 * N[(alpha + N[(beta + 3.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(N[(N[(N[(1.0 / beta), $MachinePrecision] + N[(alpha + N[(alpha / beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + N[(1.0 + N[(N[(-1.0 - alpha), $MachinePrecision] * N[(N[(alpha + 2.0), $MachinePrecision] / beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / t$95$0), $MachinePrecision] / N[(1.0 + t$95$0), $MachinePrecision]), $MachinePrecision]]]
Derivation
  1. Split input into 2 regimes
  2. if beta < 2.29999999999999988e82

    1. Initial program 99.2%

      \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1}} \]
      2. lift-/.f64 (N/A)

        \[\leadsto \frac{\color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      3. associate-/l/ (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
      4. lower-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)}} \]
      5. lift-+.f64 (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right)} + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      6. +-commutative (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\beta \cdot \alpha + \left(\alpha + \beta\right)\right)} + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      7. lift-*.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\color{blue}{\beta \cdot \alpha} + \left(\alpha + \beta\right)\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      8. *-commutative (N/A)

        \[\leadsto \frac{\frac{\left(\color{blue}{\alpha \cdot \beta} + \left(\alpha + \beta\right)\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      9. lower-fma.f64 (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right)} + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      10. lift-*.f64 (N/A)

        \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right) + 1}{\left(\alpha + \beta\right) + \color{blue}{2 \cdot 1}}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      11. metadata-eval (N/A)

        \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right) + 1}{\left(\alpha + \beta\right) + \color{blue}{2}}}{\left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right) \cdot \left(\left(\alpha + \beta\right) + 2 \cdot 1\right)} \]
      12. *-commutative (N/A)

        \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right) + 1}{\left(\alpha + \beta\right) + 2}}{\color{blue}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) \cdot \left(\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1\right)}} \]
    4. Applied rewrites (98.7%)

      \[\leadsto \color{blue}{\frac{\frac{\mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right) + 1}{\left(\alpha + \beta\right) + 2}}{\left(\left(\alpha + \beta\right) + 2\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)}} \]

    if 2.29999999999999988e82 < beta

    1. Initial program 78.6%

      \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    2. Add Preprocessing
    3. Taylor expanded in beta around inf

      \[\leadsto \frac{\frac{\color{blue}{\left(1 + \left(\alpha + \left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right)\right)\right) - \frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    4. Step-by-step derivation
      1. sub-neg (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(1 + \left(\alpha + \left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right)\right)\right) + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      2. +-commutative (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\left(\alpha + \left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right)\right) + 1\right)} + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      3. associate-+l+ (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\alpha + \left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right)\right) + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      4. lower-+.f64 (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\alpha + \left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right)\right) + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      5. +-commutative (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\left(\frac{1}{\beta} + \frac{\alpha}{\beta}\right) + \alpha\right)} + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      6. associate-+l+ (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right)} + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      7. lower-+.f64 (N/A)

        \[\leadsto \frac{\frac{\color{blue}{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right)} + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      8. lower-/.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\color{blue}{\frac{1}{\beta}} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      9. lower-+.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \color{blue}{\left(\frac{\alpha}{\beta} + \alpha\right)}\right) + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      10. lower-/.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\color{blue}{\frac{\alpha}{\beta}} + \alpha\right)\right) + \left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      11. lower-+.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \color{blue}{\left(1 + \left(\mathsf{neg}\left(\frac{\left(1 + \alpha\right) \cdot \left(2 + \alpha\right)}{\beta}\right)\right)\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      12. associate-/l* (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(\mathsf{neg}\left(\color{blue}{\left(1 + \alpha\right) \cdot \frac{2 + \alpha}{\beta}}\right)\right)\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      13. distribute-lft-neg-in (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(\mathsf{neg}\left(\left(1 + \alpha\right)\right)\right) \cdot \frac{2 + \alpha}{\beta}}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      14. mul-1-neg (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(-1 \cdot \left(1 + \alpha\right)\right)} \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      15. lower-*.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(-1 \cdot \left(1 + \alpha\right)\right) \cdot \frac{2 + \alpha}{\beta}}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      16. distribute-lft-in (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(-1 \cdot 1 + -1 \cdot \alpha\right)} \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      17. metadata-eval (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(\color{blue}{-1} + -1 \cdot \alpha\right) \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      18. mul-1-negN/A

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(-1 + \color{blue}{\left(\mathsf{neg}\left(\alpha\right)\right)}\right) \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      19. unsub-negN/A

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(-1 - \alpha\right)} \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      20. lower--.f64N/A

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \color{blue}{\left(-1 - \alpha\right)} \cdot \frac{2 + \alpha}{\beta}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
      21. lower-/.f64 (N/A)

        \[\leadsto \frac{\frac{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(-1 - \alpha\right) \cdot \color{blue}{\frac{2 + \alpha}{\beta}}\right)}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
    5. Applied rewrites (88.2%)

      \[\leadsto \frac{\frac{\color{blue}{\left(\frac{1}{\beta} + \left(\frac{\alpha}{\beta} + \alpha\right)\right) + \left(1 + \left(-1 - \alpha\right) \cdot \frac{2 + \alpha}{\beta}\right)}}{\left(\alpha + \beta\right) + 2 \cdot 1}}{\left(\left(\alpha + \beta\right) + 2 \cdot 1\right) + 1} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (96.2%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;\beta \leq 2.3 \cdot 10^{+82}:\\ \;\;\;\;\frac{\frac{1 + \mathsf{fma}\left(\alpha, \beta, \alpha + \beta\right)}{\left(\alpha + \beta\right) + 2}}{\left(\left(\alpha + \beta\right) + 2\right) \cdot \left(\alpha + \left(\beta + 3\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{\left(\frac{1}{\beta} + \left(\alpha + \frac{\alpha}{\beta}\right)\right) + \left(1 + \left(-1 - \alpha\right) \cdot \frac{\alpha + 2}{\beta}\right)}{\left(\alpha + \beta\right) + 2}}{1 + \left(\left(\alpha + \beta\right) + 2\right)}\\ \end{array} \]
  5. Add Preprocessing
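Alternative 2 relies on a fused multiply-add in its first branch. A minimal Python sketch is below; note that `math.fma` only exists from Python 3.13, so this sketch substitutes a plain multiply-add, which gives up the single rounding of a true fma but illustrates the branch structure:

```python
def alternative_2(alpha, beta):
    # Sketch of Alternative 2; alpha * beta + (alpha + beta) stands in
    # for fma(alpha, beta, alpha + beta) (two roundings instead of one).
    t_0 = (alpha + beta) + 2.0
    if beta <= 2.3e+82:
        return (1.0 + (alpha * beta + (alpha + beta))) / t_0 \
            / (t_0 * (alpha + (beta + 3.0)))
    # Large-beta regime: Taylor expansion of the numerator in beta.
    return ((1.0 / beta + (alpha + alpha / beta))
            + (1.0 + (-1.0 - alpha) * ((alpha + 2.0) / beta))) / t_0 / (1.0 + t_0)

# At alpha = beta = 1 the first branch gives (1 + 3) / 4 / (4 * 5) = 0.05.
print(alternative_2(1.0, 1.0))
```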

Reproduce

herbie shell --seed 2024223 
(FPCore (alpha beta)
  :name "Octave 3.8, jcobi/3"
  :precision binary64
  :pre (and (> alpha -1.0) (> beta -1.0))
  (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ (+ alpha beta) (* 2.0 1.0)) 1.0)))