Octave 3.8, jcobi/1

Percentage Accurate: 63.3% → 81.7%
Time: 3.0s
Alternatives: 9
Speedup: 0.9×

Specification

\[\alpha > -1 \land \beta > -1\]
\[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
(FPCore (alpha beta)
  :precision binary64
  (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0))
double code(double alpha, double beta) {
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
}
real(8) function code(alpha, beta)
use fmin_fmax_functions
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    code = (((beta - alpha) / ((alpha + beta) + 2.0d0)) + 1.0d0) / 2.0d0
end function
public static double code(double alpha, double beta) {
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
}
def code(alpha, beta):
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
function code(alpha, beta)
	return Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
end
function tmp = code(alpha, beta)
	tmp = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
end
code[alpha_, beta_] := N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]
\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}
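As a quick sanity check on the specification (not part of the report itself), note that in exact arithmetic the program simplifies to (β + 1)/((α + β) + 2). A short Python probe, with arbitrarily chosen sample inputs, shows the two forms agreeing on benign inputs and illustrates where rounding hurts near the precondition boundary α, β → −1:

```python
def code(alpha, beta):
    # The specification, transcribed verbatim from the report.
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def simplified(alpha, beta):
    # Algebraically equal form:
    # ((beta - alpha)/(alpha + beta + 2) + 1)/2 = (beta + 1)/((alpha + beta) + 2).
    return (beta + 1.0) / ((alpha + beta) + 2.0)

# On benign inputs the two forms agree to machine precision.
print(code(0.5, 0.25), simplified(0.5, 0.25))

# Near the precondition boundary alpha, beta -> -1 the denominator
# (alpha + beta) + 2 suffers cancellation, amplifying rounding error.
a = b = -1.0 + 1e-12
print(code(a, b), simplified(a, b))
```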

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is named in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows the average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 63.3% accurate, 1.0× speedup

\[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
(FPCore (alpha beta)
  :precision binary64
  (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0))
double code(double alpha, double beta) {
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
}
real(8) function code(alpha, beta)
use fmin_fmax_functions
    real(8), intent (in) :: alpha
    real(8), intent (in) :: beta
    code = (((beta - alpha) / ((alpha + beta) + 2.0d0)) + 1.0d0) / 2.0d0
end function
public static double code(double alpha, double beta) {
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
}
def code(alpha, beta):
	return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
function code(alpha, beta)
	return Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
end
function tmp = code(alpha, beta)
	tmp = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
end
code[alpha_, beta_] := N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]
\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}

Alternative 1: 81.7% accurate, 0.9× speedup

\[\begin{array}{l} \mathbf{if}\;\beta \leq -5200000:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, 0.5\right)\\ \end{array} \]
(FPCore (alpha beta)
  :precision binary64
  (if (<= beta -5200000.0)
  (- -0.5 -0.5)
  (fma (/ (- alpha beta) (- -2.0 (+ alpha beta))) 0.5 0.5)))
double code(double alpha, double beta) {
	double tmp;
	if (beta <= -5200000.0) {
		tmp = -0.5 - -0.5;
	} else {
		tmp = fma(((alpha - beta) / (-2.0 - (alpha + beta))), 0.5, 0.5);
	}
	return tmp;
}
function code(alpha, beta)
	tmp = 0.0
	if (beta <= -5200000.0)
		tmp = Float64(-0.5 - -0.5);
	else
		tmp = fma(Float64(Float64(alpha - beta) / Float64(-2.0 - Float64(alpha + beta))), 0.5, 0.5);
	end
	return tmp
end
code[alpha_, beta_] := If[LessEqual[beta, -5200000.0], N[(-0.5 - -0.5), $MachinePrecision], N[(N[(N[(alpha - beta), $MachinePrecision] / N[(-2.0 - N[(alpha + beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] * 0.5 + 0.5), $MachinePrecision]]
\begin{array}{l}
\mathbf{if}\;\beta \leq -5200000:\\
\;\;\;\;-0.5 - -0.5\\

\mathbf{else}:\\
\;\;\;\;\mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, 0.5\right)\\


\end{array}
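The report lists C, Julia, and Mathematica versions of this alternative; for comparison, a hedged Python transcription is sketched below. Note that `math.fma` only exists from Python 3.13 onward, so this sketch falls back to a plain multiply-add, which performs two roundings where a true fma performs one:

```python
import math

def alt1(alpha, beta):
    # Regime found by Herbie: for very negative beta the result is
    # the constant expression -0.5 - -0.5, i.e. 0.0.
    if beta <= -5200000.0:
        return -0.5 - -0.5
    # fma((alpha - beta) / (-2 - (alpha + beta)), 0.5, 0.5).
    # math.fma requires Python >= 3.13; the fallback does two roundings.
    fma = getattr(math, "fma", lambda x, y, z: x * y + z)
    return fma((alpha - beta) / (-2.0 - (alpha + beta)), 0.5, 0.5)

print(alt1(0.5, 0.25))    # agrees with the original program on benign inputs
print(alt1(0.0, -6.0e6))  # constant regime
```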
Derivation
  1. Split input into 2 regimes
  2. if beta < -5.2e6

    1. Initial program 63.3%

      \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
    2. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
      2. lift-+.f64 (N/A)

        \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
      3. div-add (N/A)

        \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
      4. lift-/.f64 (N/A)

        \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
      5. mult-flip (N/A)

        \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
      6. associate-/l* (N/A)

        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
      7. lower-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
      8. lower-/.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
      9. frac-2neg (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
      10. lower-/.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
      11. metadata-eval (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
      12. lift-+.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
      13. add-flip (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
      14. sub-negate (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
      15. lower--.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
      16. metadata-eval (N/A)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
      17. metadata-eval (43.9%)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
    3. Applied rewrites (43.9%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
    4. Taylor expanded in beta around inf

      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
    5. Step-by-step derivation
      1. lower-/.f64 (20.0%)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
    6. Applied rewrites (20.0%)

      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
    7. Step-by-step derivation
      1. lift-fma.f64 (N/A)

        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
      2. add-flip (N/A)

        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
      3. lower--.f64 (N/A)

        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
      4. *-commutative (N/A)

        \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
      5. lower-*.f64 (N/A)

        \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
      6. metadata-eval (20.0%)

        \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
    8. Applied rewrites (20.0%)

      \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
    9. Taylor expanded in alpha around inf

      \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
    10. Step-by-step derivation
      1. Applied rewrites (45.2%)

        \[\leadsto \color{blue}{-0.5} - -0.5 \]

      if -5.2e6 < beta

      1. Initial program 63.3%

        \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
      2. Step-by-step derivation
        1. lift-/.f64 (N/A)

          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
        2. lift-+.f64 (N/A)

          \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
        3. div-add (N/A)

          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
        4. mult-flip (N/A)

          \[\leadsto \color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} \cdot \frac{1}{2}} + \frac{1}{2} \]
        5. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}, \frac{1}{2}, \frac{1}{2}\right)} \]
        6. lift-/.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}, \frac{1}{2}, \frac{1}{2}\right) \]
        7. frac-2neg (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
        8. lower-/.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
        9. lift--.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\mathsf{neg}\left(\color{blue}{\left(\beta - \alpha\right)}\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        10. sub-negate-rev (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        11. lower--.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        12. lift-+.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        13. add-flip (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        14. sub-negate (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
        15. lower--.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
        16. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{-2} - \left(\alpha + \beta\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
        17. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, \color{blue}{\frac{1}{2}}, \frac{1}{2}\right) \]
        18. metadata-eval (63.3%)

          \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, \color{blue}{0.5}\right) \]
      3. Applied rewrites (63.3%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, 0.5\right)} \]
    11. Recombined 2 regimes into one program.
    12. Add Preprocessing

    Alternative 2: 77.9% accurate, 0.9× speedup

    \[\begin{array}{l} \mathbf{if}\;\beta \leq -5200000:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;\beta \leq 2.1 \cdot 10^{-42}:\\ \;\;\;\;0.5 \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(\frac{\beta}{2 + \beta}, 0.5, 0.5\right)\\ \end{array} \]
    (FPCore (alpha beta)
      :precision binary64
      (if (<= beta -5200000.0)
      (- -0.5 -0.5)
      (if (<= beta 2.1e-42)
        (* 0.5 (- 1.0 (/ alpha (+ 2.0 alpha))))
        (fma (/ beta (+ 2.0 beta)) 0.5 0.5))))
    double code(double alpha, double beta) {
    	double tmp;
    	if (beta <= -5200000.0) {
    		tmp = -0.5 - -0.5;
    	} else if (beta <= 2.1e-42) {
    		tmp = 0.5 * (1.0 - (alpha / (2.0 + alpha)));
    	} else {
    		tmp = fma((beta / (2.0 + beta)), 0.5, 0.5);
    	}
    	return tmp;
    }
    
    function code(alpha, beta)
    	tmp = 0.0
    	if (beta <= -5200000.0)
    		tmp = Float64(-0.5 - -0.5);
    	elseif (beta <= 2.1e-42)
    		tmp = Float64(0.5 * Float64(1.0 - Float64(alpha / Float64(2.0 + alpha))));
    	else
    		tmp = fma(Float64(beta / Float64(2.0 + beta)), 0.5, 0.5);
    	end
    	return tmp
    end
    
    code[alpha_, beta_] := If[LessEqual[beta, -5200000.0], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[beta, 2.1e-42], N[(0.5 * N[(1.0 - N[(alpha / N[(2.0 + alpha), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(beta / N[(2.0 + beta), $MachinePrecision]), $MachinePrecision] * 0.5 + 0.5), $MachinePrecision]]]
    
    \begin{array}{l}
    \mathbf{if}\;\beta \leq -5200000:\\
    \;\;\;\;-0.5 - -0.5\\
    
    \mathbf{elif}\;\beta \leq 2.1 \cdot 10^{-42}:\\
    \;\;\;\;0.5 \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)\\
    
    \mathbf{else}:\\
    \;\;\;\;\mathsf{fma}\left(\frac{\beta}{2 + \beta}, 0.5, 0.5\right)\\
    
    
    \end{array}
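Alternative 2's middle branch is the Taylor expansion of the original program around β = 0, and a small Python check (with arbitrarily chosen sample values) confirms that it reproduces the original when β is negligible:

```python
def original(alpha, beta):
    # The initial program from the report.
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def small_beta_branch(alpha):
    # The beta <= 2.1e-42 branch: 0.5 * (1 - alpha / (2 + alpha)).
    return 0.5 * (1.0 - alpha / (2.0 + alpha))

# With beta = 0 the branch and the original agree to machine precision.
for a in (0.1, 1.0, 7.5):
    print(a, original(a, 0.0), small_beta_branch(a))
```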
    
    Derivation
    1. Split input into 3 regimes
    2. if beta < -5.2e6

      1. Initial program 63.3%

        \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
      2. Step-by-step derivation
        1. lift-/.f64 (N/A)

          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
        2. lift-+.f64 (N/A)

          \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
        3. div-add (N/A)

          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
        4. lift-/.f64 (N/A)

          \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
        5. mult-flip (N/A)

          \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
        6. associate-/l* (N/A)

          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
        7. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
        8. lower-/.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
        9. frac-2neg (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
        10. lower-/.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
        11. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
        12. lift-+.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
        13. add-flip (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
        14. sub-negate (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
        15. lower--.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
        16. metadata-eval (N/A)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
        17. metadata-eval (43.9%)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
      3. Applied rewrites (43.9%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
      4. Taylor expanded in beta around inf

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
      5. Step-by-step derivation
        1. lower-/.f64 (20.0%)

          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
      6. Applied rewrites (20.0%)

        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
      7. Step-by-step derivation
        1. lift-fma.f64 (N/A)

          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
        2. add-flip (N/A)

          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
        3. lower--.f64 (N/A)

          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
        4. *-commutative (N/A)

          \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
        5. lower-*.f64 (N/A)

          \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
        6. metadata-eval (20.0%)

          \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
      8. Applied rewrites (20.0%)

        \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
      9. Taylor expanded in alpha around inf

        \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
      10. Step-by-step derivation
        1. Applied rewrites (45.2%)

          \[\leadsto \color{blue}{-0.5} - -0.5 \]

        if -5.2e6 < beta < 2.1000000000000001e-42

        1. Initial program 63.3%

          \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
        2. Taylor expanded in beta around 0

          \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
        3. Step-by-step derivation
          1. lower-*.f64 (N/A)

            \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
          2. lower--.f64 (N/A)

            \[\leadsto \frac{1}{2} \cdot \left(1 - \color{blue}{\frac{\alpha}{2 + \alpha}}\right) \]
          3. lower-/.f64 (N/A)

            \[\leadsto \frac{1}{2} \cdot \left(1 - \frac{\alpha}{\color{blue}{2 + \alpha}}\right) \]
          4. lower-+.f64 (58.4%)

            \[\leadsto 0.5 \cdot \left(1 - \frac{\alpha}{2 + \color{blue}{\alpha}}\right) \]
        4. Applied rewrites (58.4%)

          \[\leadsto \color{blue}{0.5 \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]

        if 2.1000000000000001e-42 < beta

        1. Initial program 63.3%

          \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
        2. Step-by-step derivation
          1. lift-/.f64 (N/A)

            \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
          2. lift-+.f64 (N/A)

            \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
          3. div-add (N/A)

            \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
          4. mult-flip (N/A)

            \[\leadsto \color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} \cdot \frac{1}{2}} + \frac{1}{2} \]
          5. lower-fma.f64 (N/A)

            \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}, \frac{1}{2}, \frac{1}{2}\right)} \]
          6. lift-/.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}, \frac{1}{2}, \frac{1}{2}\right) \]
          7. frac-2neg (N/A)

            \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
          8. lower-/.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
          9. lift--.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\mathsf{neg}\left(\color{blue}{\left(\beta - \alpha\right)}\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          10. sub-negate-rev (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          11. lower--.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          12. lift-+.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          13. add-flip (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          14. sub-negate (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
          15. lower--.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
          16. metadata-eval (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{-2} - \left(\alpha + \beta\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
          17. metadata-eval (N/A)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, \color{blue}{\frac{1}{2}}, \frac{1}{2}\right) \]
          18. metadata-eval (63.3%)

            \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, \color{blue}{0.5}\right) \]
        3. Applied rewrites (63.3%)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, 0.5\right)} \]
        4. Taylor expanded in alpha around inf

          \[\leadsto \mathsf{fma}\left(\color{blue}{-1}, 0.5, 0.5\right) \]
        5. Step-by-step derivation
          1. Applied rewrites (45.2%)

            \[\leadsto \mathsf{fma}\left(\color{blue}{-1}, 0.5, 0.5\right) \]
          2. Taylor expanded in alpha around 0

            \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta}{2 + \beta}}, 0.5, 0.5\right) \]
          3. Step-by-step derivation
            1. lower-/.f64 (N/A)

              \[\leadsto \mathsf{fma}\left(\frac{\beta}{\color{blue}{2 + \beta}}, \frac{1}{2}, \frac{1}{2}\right) \]
            2. lower-+.f64 (43.0%)

              \[\leadsto \mathsf{fma}\left(\frac{\beta}{2 + \color{blue}{\beta}}, 0.5, 0.5\right) \]
          4. Applied rewrites (43.0%)

            \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta}{2 + \beta}}, 0.5, 0.5\right) \]
        6. Recombined 3 regimes into one program.
        7. Add Preprocessing

        Alternative 3: 75.8% accurate, 0.8× speedup

        \[\begin{array}{l} \mathbf{if}\;\alpha \leq -5.4 \cdot 10^{-46}:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;\alpha \leq 1.6 \cdot 10^{-16}:\\ \;\;\;\;\mathsf{fma}\left(\frac{\beta}{2 + \beta}, 0.5, 0.5\right)\\ \mathbf{elif}\;\alpha \leq 4.4 \cdot 10^{+41}:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{\alpha} + \frac{\beta}{\alpha}\\ \end{array} \]
        (FPCore (alpha beta)
          :precision binary64
          (if (<= alpha -5.4e-46)
          (- -0.5 -0.5)
          (if (<= alpha 1.6e-16)
            (fma (/ beta (+ 2.0 beta)) 0.5 0.5)
            (if (<= alpha 4.4e+41)
              (- -0.5 -0.5)
              (+ (/ 1.0 alpha) (/ beta alpha))))))
        double code(double alpha, double beta) {
        	double tmp;
        	if (alpha <= -5.4e-46) {
        		tmp = -0.5 - -0.5;
        	} else if (alpha <= 1.6e-16) {
        		tmp = fma((beta / (2.0 + beta)), 0.5, 0.5);
        	} else if (alpha <= 4.4e+41) {
        		tmp = -0.5 - -0.5;
        	} else {
        		tmp = (1.0 / alpha) + (beta / alpha);
        	}
        	return tmp;
        }
        
        function code(alpha, beta)
        	tmp = 0.0
        	if (alpha <= -5.4e-46)
        		tmp = Float64(-0.5 - -0.5);
        	elseif (alpha <= 1.6e-16)
        		tmp = fma(Float64(beta / Float64(2.0 + beta)), 0.5, 0.5);
        	elseif (alpha <= 4.4e+41)
        		tmp = Float64(-0.5 - -0.5);
        	else
        		tmp = Float64(Float64(1.0 / alpha) + Float64(beta / alpha));
        	end
        	return tmp
        end
        
        code[alpha_, beta_] := If[LessEqual[alpha, -5.4e-46], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[alpha, 1.6e-16], N[(N[(beta / N[(2.0 + beta), $MachinePrecision]), $MachinePrecision] * 0.5 + 0.5), $MachinePrecision], If[LessEqual[alpha, 4.4e+41], N[(-0.5 - -0.5), $MachinePrecision], N[(N[(1.0 / alpha), $MachinePrecision] + N[(beta / alpha), $MachinePrecision]), $MachinePrecision]]]]
        
        \begin{array}{l}
        \mathbf{if}\;\alpha \leq -5.4 \cdot 10^{-46}:\\
        \;\;\;\;-0.5 - -0.5\\
        
        \mathbf{elif}\;\alpha \leq 1.6 \cdot 10^{-16}:\\
        \;\;\;\;\mathsf{fma}\left(\frac{\beta}{2 + \beta}, 0.5, 0.5\right)\\
        
        \mathbf{elif}\;\alpha \leq 4.4 \cdot 10^{+41}:\\
        \;\;\;\;-0.5 - -0.5\\
        
        \mathbf{else}:\\
        \;\;\;\;\frac{1}{\alpha} + \frac{\beta}{\alpha}\\
        
        
        \end{array}
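Alternative 3's final branch comes from the Taylor expansion in α around infinity. A brief Python probe (sample values chosen arbitrarily) shows why it helps: for very large α the original program cancels away all significance and returns 0.0, while 1/α + β/α stays close to (β + 1)/((α + β) + 2), the algebraic simplification of the specification used here as a reference:

```python
def original(alpha, beta):
    # The initial program from the report.
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def large_alpha_branch(alpha, beta):
    # The alpha > 4.4e+41 branch: 1/alpha + beta/alpha.
    return (1.0 / alpha) + (beta / alpha)

def exact(alpha, beta):
    # Exact-arithmetic simplification of the specification, as reference.
    return (beta + 1.0) / ((alpha + beta) + 2.0)

a, b = 1.0e45, 3.0
print(original(a, b))            # 0.0: (beta - alpha)/((alpha + beta) + 2) rounds to -1
print(large_alpha_branch(a, b))  # ~4e-45, close to exact(a, b)
```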
        
        Derivation
        1. Split input into 3 regimes
        2. if alpha < -5.4e-46 or 1.6000000000000001e-16 < alpha < 4.3999999999999998e41

          1. Initial program 63.3%

            \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
          2. Step-by-step derivation
            1. lift-/.f64 (N/A)

              \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
            2. lift-+.f64 (N/A)

              \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
            3. div-add (N/A)

              \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
            4. lift-/.f64 (N/A)

              \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
            5. mult-flip (N/A)

              \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
            6. associate-/l* (N/A)

              \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
            7. lower-fma.f64 (N/A)

              \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
            8. lower-/.f64 (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
            9. frac-2neg (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
            10. lower-/.f64 (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
            11. metadata-eval (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
            12. lift-+.f64 (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
            13. add-flip (N/A)

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
            14. sub-negateN/A

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
            15. lower--.f64N/A

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
            16. metadata-evalN/A

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
            17. metadata-eval43.9%

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
          3. Applied rewrites43.9%

            \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
          4. Taylor expanded in beta around inf

            \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
          5. Step-by-step derivation
            1. lower-/.f6420.0%

              \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
          6. Applied rewrites20.0%

            \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
          7. Step-by-step derivation
            1. lift-fma.f64N/A

              \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
            2. add-flipN/A

              \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
            3. lower--.f64N/A

              \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
            4. *-commutativeN/A

              \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
            5. lower-*.f64N/A

              \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
            6. metadata-eval20.0%

              \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
          8. Applied rewrites20.0%

            \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
          9. Taylor expanded in alpha around inf

            \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
          10. Step-by-step derivation
            1. Applied rewrites45.2%

              \[\leadsto \color{blue}{-0.5} - -0.5 \]

            if -5.4e-46 < alpha < 1.6000000000000001e-16

            1. Initial program 63.3%

              \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
            2. Step-by-step derivation
              1. lift-/.f64N/A

                \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
              2. lift-+.f64N/A

                \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
              3. div-addN/A

                \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
              4. mult-flipN/A

                \[\leadsto \color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} \cdot \frac{1}{2}} + \frac{1}{2} \]
              5. lower-fma.f64N/A

                \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}, \frac{1}{2}, \frac{1}{2}\right)} \]
              6. lift-/.f64N/A

                \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}, \frac{1}{2}, \frac{1}{2}\right) \]
              7. frac-2negN/A

                \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
              8. lower-/.f64N/A

                \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\mathsf{neg}\left(\left(\beta - \alpha\right)\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
              9. lift--.f64N/A

                \[\leadsto \mathsf{fma}\left(\frac{\mathsf{neg}\left(\color{blue}{\left(\beta - \alpha\right)}\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              10. sub-negate-revN/A

                \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              11. lower--.f64N/A

                \[\leadsto \mathsf{fma}\left(\frac{\color{blue}{\alpha - \beta}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              12. lift-+.f64N/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              13. add-flipN/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              14. sub-negateN/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
              15. lower--.f64N/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}, \frac{1}{2}, \frac{1}{2}\right) \]
              16. metadata-evalN/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{\color{blue}{-2} - \left(\alpha + \beta\right)}, \frac{1}{2}, \frac{1}{2}\right) \]
              17. metadata-evalN/A

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, \color{blue}{\frac{1}{2}}, \frac{1}{2}\right) \]
              18. metadata-eval63.3%

                \[\leadsto \mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, \color{blue}{0.5}\right) \]
            3. Applied rewrites63.3%

              \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{\alpha - \beta}{-2 - \left(\alpha + \beta\right)}, 0.5, 0.5\right)} \]
            4. Taylor expanded in alpha around inf

              \[\leadsto \mathsf{fma}\left(\color{blue}{-1}, 0.5, 0.5\right) \]
            5. Step-by-step derivation
              1. Applied rewrites45.2%

                \[\leadsto \mathsf{fma}\left(\color{blue}{-1}, 0.5, 0.5\right) \]
              2. Taylor expanded in alpha around 0

                \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta}{2 + \beta}}, 0.5, 0.5\right) \]
              3. Step-by-step derivation
                1. lower-/.f64N/A

                  \[\leadsto \mathsf{fma}\left(\frac{\beta}{\color{blue}{2 + \beta}}, \frac{1}{2}, \frac{1}{2}\right) \]
                2. lower-+.f6443.0%

                  \[\leadsto \mathsf{fma}\left(\frac{\beta}{2 + \color{blue}{\beta}}, 0.5, 0.5\right) \]
              4. Applied rewrites43.0%

                \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{\beta}{2 + \beta}}, 0.5, 0.5\right) \]

              if 4.3999999999999998e41 < alpha

              1. Initial program 63.3%

                \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
              2. Taylor expanded in alpha around inf

                \[\leadsto \color{blue}{\frac{1}{2} \cdot \frac{2 + 2 \cdot \beta}{\alpha}} \]
              3. Step-by-step derivation
                1. lower-*.f64N/A

                  \[\leadsto \frac{1}{2} \cdot \color{blue}{\frac{2 + 2 \cdot \beta}{\alpha}} \]
                2. lower-/.f64N/A

                  \[\leadsto \frac{1}{2} \cdot \frac{2 + 2 \cdot \beta}{\color{blue}{\alpha}} \]
                3. lower-+.f64N/A

                  \[\leadsto \frac{1}{2} \cdot \frac{2 + 2 \cdot \beta}{\alpha} \]
                4. lower-*.f6417.9%

                  \[\leadsto 0.5 \cdot \frac{2 + 2 \cdot \beta}{\alpha} \]
              4. Applied rewrites17.9%

                \[\leadsto \color{blue}{0.5 \cdot \frac{2 + 2 \cdot \beta}{\alpha}} \]
              5. Taylor expanded in beta around 0

                \[\leadsto \frac{1}{\alpha} + \color{blue}{\frac{\beta}{\alpha}} \]
              6. Step-by-step derivation
                1. lower-+.f64N/A

                  \[\leadsto \frac{1}{\alpha} + \frac{\beta}{\color{blue}{\alpha}} \]
                2. lower-/.f64N/A

                  \[\leadsto \frac{1}{\alpha} + \frac{\beta}{\alpha} \]
                3. lower-/.f6417.9%

                  \[\leadsto \frac{1}{\alpha} + \frac{\beta}{\alpha} \]
              7. Applied rewrites17.9%

                \[\leadsto \frac{1}{\alpha} + \color{blue}{\frac{\beta}{\alpha}} \]
            6. Recombined 3 regimes into one program.
            7. Add Preprocessing
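As a quick editor-added sanity check (not part of the report), the small-alpha regime above replaces the original expression with fma(β/(2+β), 0.5, 0.5). Since Python only gained `math.fma` in 3.13, the sketch below emulates the fused step with an unfused multiply-add, which differs from a true fma by at most one rounding:

```python
def original(alpha, beta):
    # (((beta - alpha) / ((alpha + beta) + 2)) + 1) / 2, as in the report
    return ((beta - alpha) / ((alpha + beta) + 2.0) + 1.0) / 2.0

def small_alpha_branch(beta):
    # fma(beta / (2 + beta), 0.5, 0.5), emulated without a fused rounding
    return (beta / (2.0 + beta)) * 0.5 + 0.5

# When alpha is negligible the branch agrees with the original
# to machine precision.
assert abs(original(1e-20, 1.5) - small_alpha_branch(1.5)) < 1e-15
```

The agreement is exact in the limit alpha → 0 because the branch is just the original expression with alpha dropped from both numerator and denominator.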

            Alternative 4: 69.0% accurate, 0.3× speedup?

            \[\begin{array}{l} t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\ \mathbf{if}\;t_0 \leq 0.04:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;t_0 \leq 0.6:\\ \;\;\;\;0.5 + \beta \cdot \left(0.25 + -0.125 \cdot \beta\right)\\ \mathbf{else}:\\ \;\;\;\;-0.5 - -0.5\\ \end{array} \]
            (FPCore (alpha beta)
              :precision binary64
              (let* ((t_0
                    (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0)))
              (if (<= t_0 0.04)
                (- -0.5 -0.5)
                (if (<= t_0 0.6)
                  (+ 0.5 (* beta (+ 0.25 (* -0.125 beta))))
                  (- -0.5 -0.5)))))
            double code(double alpha, double beta) {
            	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
            	double tmp;
            	if (t_0 <= 0.04) {
            		tmp = -0.5 - -0.5;
            	} else if (t_0 <= 0.6) {
            		tmp = 0.5 + (beta * (0.25 + (-0.125 * beta)));
            	} else {
            		tmp = -0.5 - -0.5;
            	}
            	return tmp;
            }
            
            real(8) function code(alpha, beta)
            use fmin_fmax_functions
                real(8), intent (in) :: alpha
                real(8), intent (in) :: beta
                real(8) :: t_0
                real(8) :: tmp
                t_0 = (((beta - alpha) / ((alpha + beta) + 2.0d0)) + 1.0d0) / 2.0d0
                if (t_0 <= 0.04d0) then
                    tmp = (-0.5d0) - (-0.5d0)
                else if (t_0 <= 0.6d0) then
                    tmp = 0.5d0 + (beta * (0.25d0 + ((-0.125d0) * beta)))
                else
                    tmp = (-0.5d0) - (-0.5d0)
                end if
                code = tmp
            end function
            
            public static double code(double alpha, double beta) {
            	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
            	double tmp;
            	if (t_0 <= 0.04) {
            		tmp = -0.5 - -0.5;
            	} else if (t_0 <= 0.6) {
            		tmp = 0.5 + (beta * (0.25 + (-0.125 * beta)));
            	} else {
            		tmp = -0.5 - -0.5;
            	}
            	return tmp;
            }
            
            def code(alpha, beta):
            	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
            	tmp = 0.0
            	if t_0 <= 0.04:
            		tmp = -0.5 - -0.5
            	elif t_0 <= 0.6:
            		tmp = 0.5 + (beta * (0.25 + (-0.125 * beta)))
            	else:
            		tmp = -0.5 - -0.5
            	return tmp
            
            function code(alpha, beta)
            	t_0 = Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
            	tmp = 0.0
            	if (t_0 <= 0.04)
            		tmp = Float64(-0.5 - -0.5);
            	elseif (t_0 <= 0.6)
            		tmp = Float64(0.5 + Float64(beta * Float64(0.25 + Float64(-0.125 * beta))));
            	else
            		tmp = Float64(-0.5 - -0.5);
            	end
            	return tmp
            end
            
            function tmp_2 = code(alpha, beta)
            	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
            	tmp = 0.0;
            	if (t_0 <= 0.04)
            		tmp = -0.5 - -0.5;
            	elseif (t_0 <= 0.6)
            		tmp = 0.5 + (beta * (0.25 + (-0.125 * beta)));
            	else
            		tmp = -0.5 - -0.5;
            	end
            	tmp_2 = tmp;
            end
            
            code[alpha_, beta_] := Block[{t$95$0 = N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]}, If[LessEqual[t$95$0, 0.04], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[t$95$0, 0.6], N[(0.5 + N[(beta * N[(0.25 + N[(-0.125 * beta), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(-0.5 - -0.5), $MachinePrecision]]]]
            
            \begin{array}{l}
            t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\
            \mathbf{if}\;t_0 \leq 0.04:\\
            \;\;\;\;-0.5 - -0.5\\
            
            \mathbf{elif}\;t_0 \leq 0.6:\\
            \;\;\;\;0.5 + \beta \cdot \left(0.25 + -0.125 \cdot \beta\right)\\
            
            \mathbf{else}:\\
            \;\;\;\;-0.5 - -0.5\\
            
            
            \end{array}
            
            Derivation
            1. Split input into 2 regimes
            2. if t_0 < 0.04 or 0.6 < t_0, where t_0 = ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2

              1. Initial program 63.3%

                \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
              2. Step-by-step derivation
                1. lift-/.f64N/A

                  \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
                2. lift-+.f64N/A

                  \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
                3. div-addN/A

                  \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
                4. lift-/.f64N/A

                  \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                5. mult-flipN/A

                  \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                6. associate-/l*N/A

                  \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
                7. lower-fma.f64N/A

                  \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
                8. lower-/.f64N/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
                9. frac-2negN/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                10. lower-/.f64N/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                11. metadata-evalN/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
                12. lift-+.f64N/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
                13. add-flipN/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
                14. sub-negateN/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                15. lower--.f64N/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                16. metadata-evalN/A

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
                17. metadata-eval43.9%

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
              3. Applied rewrites43.9%

                \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
              4. Taylor expanded in beta around inf

                \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
              5. Step-by-step derivation
                1. lower-/.f6420.0%

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
              6. Applied rewrites20.0%

                \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
              7. Step-by-step derivation
                1. lift-fma.f64N/A

                  \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
                2. add-flipN/A

                  \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                3. lower--.f64N/A

                  \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                4. *-commutativeN/A

                  \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                5. lower-*.f64N/A

                  \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                6. metadata-eval20.0%

                  \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
              8. Applied rewrites20.0%

                \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
              9. Taylor expanded in alpha around inf

                \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
              10. Step-by-step derivation
                1. Applied rewrites45.2%

                  \[\leadsto \color{blue}{-0.5} - -0.5 \]

                if 0.04 < t_0 < 0.6, where t_0 = ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2

                1. Initial program 63.3%

                  \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                2. Taylor expanded in alpha around 0

                  \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                3. Step-by-step derivation
                  1. lower-*.f64N/A

                    \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 + \frac{\beta}{2 + \beta}\right)} \]
                  2. lower-+.f64N/A

                    \[\leadsto \frac{1}{2} \cdot \left(1 + \color{blue}{\frac{\beta}{2 + \beta}}\right) \]
                  3. lower-/.f64N/A

                    \[\leadsto \frac{1}{2} \cdot \left(1 + \frac{\beta}{\color{blue}{2 + \beta}}\right) \]
                  4. lower-+.f6443.0%

                    \[\leadsto 0.5 \cdot \left(1 + \frac{\beta}{2 + \color{blue}{\beta}}\right) \]
                4. Applied rewrites43.0%

                  \[\leadsto \color{blue}{0.5 \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                5. Taylor expanded in beta around 0

                  \[\leadsto \frac{1}{2} + \color{blue}{\beta \cdot \left(\frac{1}{4} + \frac{-1}{8} \cdot \beta\right)} \]
                6. Step-by-step derivation
                  1. lower-+.f64N/A

                    \[\leadsto \frac{1}{2} + \beta \cdot \color{blue}{\left(\frac{1}{4} + \frac{-1}{8} \cdot \beta\right)} \]
                  2. lower-*.f64N/A

                    \[\leadsto \frac{1}{2} + \beta \cdot \left(\frac{1}{4} + \color{blue}{\frac{-1}{8} \cdot \beta}\right) \]
                  3. lower-+.f64N/A

                    \[\leadsto \frac{1}{2} + \beta \cdot \left(\frac{1}{4} + \frac{-1}{8} \cdot \color{blue}{\beta}\right) \]
                  4. lower-*.f6426.5%

                    \[\leadsto 0.5 + \beta \cdot \left(0.25 + -0.125 \cdot \beta\right) \]
                7. Applied rewrites26.5%

                  \[\leadsto 0.5 + \color{blue}{\beta \cdot \left(0.25 + -0.125 \cdot \beta\right)} \]
              11. Recombined 2 regimes into one program.
              12. Add Preprocessing
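The middle branch of Alternative 4 is the degree-2 series 0.5 + β·(0.25 + (−0.125)·β). A small editor-added check (not from the report) confirms that its truncation error scales roughly like β³/16, which is why Herbie guards it with bounds on t_0:

```python
def original(alpha, beta):
    # (((beta - alpha) / ((alpha + beta) + 2)) + 1) / 2, as in the report
    return ((beta - alpha) / ((alpha + beta) + 2.0) + 1.0) / 2.0

def series_branch(beta):
    # 0.5 + beta * (0.25 + (-0.125) * beta): the degree-2 series in beta
    return 0.5 + beta * (0.25 + -0.125 * beta)

# Truncation error is about beta**3 / 16, so the branch is only
# trustworthy while beta (and hence t_0) stays moderate.
for beta in (1e-4, 1e-2):
    assert abs(original(0.0, beta) - series_branch(beta)) < beta ** 3
```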

              Alternative 5: 69.0% accurate, 0.3× speedup?

              \[\begin{array}{l} t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\ \mathbf{if}\;t_0 \leq 10^{-7}:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;t_0 \leq 0.6:\\ \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.125, \alpha, -0.25\right), \alpha, 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;-0.5 - -0.5\\ \end{array} \]
              (FPCore (alpha beta)
                :precision binary64
                (let* ((t_0
                      (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0)))
                (if (<= t_0 1e-7)
                  (- -0.5 -0.5)
                  (if (<= t_0 0.6)
                    (fma (fma 0.125 alpha -0.25) alpha 0.5)
                    (- -0.5 -0.5)))))
              double code(double alpha, double beta) {
              	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
              	double tmp;
              	if (t_0 <= 1e-7) {
              		tmp = -0.5 - -0.5;
              	} else if (t_0 <= 0.6) {
              		tmp = fma(fma(0.125, alpha, -0.25), alpha, 0.5);
              	} else {
              		tmp = -0.5 - -0.5;
              	}
              	return tmp;
              }
              
              function code(alpha, beta)
              	t_0 = Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
              	tmp = 0.0
              	if (t_0 <= 1e-7)
              		tmp = Float64(-0.5 - -0.5);
              	elseif (t_0 <= 0.6)
              		tmp = fma(fma(0.125, alpha, -0.25), alpha, 0.5);
              	else
              		tmp = Float64(-0.5 - -0.5);
              	end
              	return tmp
              end
              
              code[alpha_, beta_] := Block[{t$95$0 = N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]}, If[LessEqual[t$95$0, 1e-7], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[t$95$0, 0.6], N[(N[(0.125 * alpha + -0.25), $MachinePrecision] * alpha + 0.5), $MachinePrecision], N[(-0.5 - -0.5), $MachinePrecision]]]]
              
              \begin{array}{l}
              t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\
              \mathbf{if}\;t_0 \leq 10^{-7}:\\
              \;\;\;\;-0.5 - -0.5\\
              
              \mathbf{elif}\;t_0 \leq 0.6:\\
              \;\;\;\;\mathsf{fma}\left(\mathsf{fma}\left(0.125, \alpha, -0.25\right), \alpha, 0.5\right)\\
              
              \mathbf{else}:\\
              \;\;\;\;-0.5 - -0.5\\
              
              
              \end{array}
              
              Derivation
              1. Split input into 2 regimes
              2. if t_0 < 1e-7 or 0.6 < t_0, where t_0 = ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2

                1. Initial program 63.3%

                  \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                2. Step-by-step derivation
                  1. lift-/.f64N/A

                    \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
                  2. lift-+.f64N/A

                    \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
                  3. div-addN/A

                    \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
                  4. lift-/.f64N/A

                    \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                  5. mult-flipN/A

                    \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                  6. associate-/l*N/A

                    \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
                  7. lower-fma.f64N/A

                    \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
                  8. lower-/.f64N/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
                  9. frac-2negN/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                  10. lower-/.f64N/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                  11. metadata-evalN/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
                  12. lift-+.f64N/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
                  13. add-flipN/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
                  14. sub-negateN/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                  15. lower--.f64N/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                  16. metadata-evalN/A

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
                  17. metadata-eval43.9%

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
                3. Applied rewrites43.9%

                  \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
                4. Taylor expanded in beta around inf

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
                5. Step-by-step derivation
                  1. lower-/.f6420.0%

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
                6. Applied rewrites20.0%

                  \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
                7. Step-by-step derivation
                  1. lift-fma.f64N/A

                    \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
                  2. add-flipN/A

                    \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                  3. lower--.f64N/A

                    \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                  4. *-commutativeN/A

                    \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                  5. lower-*.f64N/A

                    \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                  6. metadata-eval20.0%

                    \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
                8. Applied rewrites20.0%

                  \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
                9. Taylor expanded in alpha around inf

                  \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
                10. Step-by-step derivation
                  1. Applied rewrites45.2%

                    \[\leadsto \color{blue}{-0.5} - -0.5 \]

                  if 1e-7 < t_0 < 0.6, where t_0 = ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2

                  1. Initial program 63.3%

                    \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                  2. Taylor expanded in beta around 0

                    \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                  3. Step-by-step derivation
                    1. lower-*.f64 (N/A)

                      \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                    2. lower--.f64 (N/A)

                      \[\leadsto \frac{1}{2} \cdot \left(1 - \color{blue}{\frac{\alpha}{2 + \alpha}}\right) \]
                    3. lower-/.f64 (N/A)

                      \[\leadsto \frac{1}{2} \cdot \left(1 - \frac{\alpha}{\color{blue}{2 + \alpha}}\right) \]
                    4. lower-+.f64 (58.4%)

                      \[\leadsto 0.5 \cdot \left(1 - \frac{\alpha}{2 + \color{blue}{\alpha}}\right) \]
                  4. Applied rewrites (58.4%)

                    \[\leadsto \color{blue}{0.5 \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                  5. Taylor expanded in alpha around 0

                    \[\leadsto \frac{1}{2} + \color{blue}{\alpha \cdot \left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right)} \]
                  6. Step-by-step derivation
                    1. lower-+.f64 (N/A)

                      \[\leadsto \frac{1}{2} + \alpha \cdot \color{blue}{\left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right)} \]
                    2. lower-*.f64 (N/A)

                      \[\leadsto \frac{1}{2} + \alpha \cdot \left(\frac{1}{8} \cdot \alpha - \color{blue}{\frac{1}{4}}\right) \]
                    3. lower--.f64 (N/A)

                      \[\leadsto \frac{1}{2} + \alpha \cdot \left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right) \]
                    4. lower-*.f64 (28.7%)

                      \[\leadsto 0.5 + \alpha \cdot \left(0.125 \cdot \alpha - 0.25\right) \]
                  7. Applied rewrites (28.7%)

                    \[\leadsto 0.5 + \color{blue}{\alpha \cdot \left(0.125 \cdot \alpha - 0.25\right)} \]
                  8. Step-by-step derivation
                    1. lift-+.f64 (N/A)

                      \[\leadsto \frac{1}{2} + \alpha \cdot \color{blue}{\left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right)} \]
                    2. +-commutative (N/A)

                      \[\leadsto \alpha \cdot \left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right) + \frac{1}{2} \]
                    3. lift-*.f64 (N/A)

                      \[\leadsto \alpha \cdot \left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right) + \frac{1}{2} \]
                    4. *-commutative (N/A)

                      \[\leadsto \left(\frac{1}{8} \cdot \alpha - \frac{1}{4}\right) \cdot \alpha + \frac{1}{2} \]
                    5. lower-fma.f64 (28.7%)

                      \[\leadsto \mathsf{fma}\left(0.125 \cdot \alpha - 0.25, \alpha, 0.5\right) \]
                    6. lift--.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\frac{1}{8} \cdot \alpha - \frac{1}{4}, \alpha, \frac{1}{2}\right) \]
                    7. sub-flip (N/A)

                      \[\leadsto \mathsf{fma}\left(\frac{1}{8} \cdot \alpha + \left(\mathsf{neg}\left(\frac{1}{4}\right)\right), \alpha, \frac{1}{2}\right) \]
                    8. lift-*.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\frac{1}{8} \cdot \alpha + \left(\mathsf{neg}\left(\frac{1}{4}\right)\right), \alpha, \frac{1}{2}\right) \]
                    9. metadata-eval (N/A)

                      \[\leadsto \mathsf{fma}\left(\frac{1}{8} \cdot \alpha + \frac{-1}{4}, \alpha, \frac{1}{2}\right) \]
                    10. lower-fma.f64 (28.7%)

                      \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.125, \alpha, -0.25\right), \alpha, 0.5\right) \]
                  9. Applied rewrites (28.7%)

                    \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(0.125, \alpha, -0.25\right), \alpha, 0.5\right) \]
                11. Recombined 2 regimes into one program.
                12. Add Preprocessing
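The regime-2 derivation above ends in the double-fma polynomial fma(fma(0.125, alpha, -0.25), alpha, 0.5). As a quick sanity check, not part of the Herbie output, the sketch below compares that polynomial against the original expression for small alpha with beta = 0, the neighborhood the Taylor expansion targets. The fmas are written with separate multiplies and adds, since fusing only changes the final rounding; the sample points and tolerance are illustrative choices.

```python
# Not from the report: sanity-check Herbie's Taylor-series rewrite
# 0.5 + alpha*(0.125*alpha - 0.25) against the original expression
# near alpha = 0, beta = 0 (the regime it was derived for).

def original(alpha, beta):
    # The initial program: ((beta - alpha)/((alpha + beta) + 2) + 1) / 2
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def poly(alpha):
    # fma(fma(0.125, alpha, -0.25), alpha, 0.5), written with plain * and +
    return (0.125 * alpha - 0.25) * alpha + 0.5

for a in (1e-6, 1e-3, 1e-2):
    assert abs(original(a, 0.0) - poly(a)) < 1e-4
```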

                Alternative 6: 68.9% accurate, 0.4× speedup

                \[\begin{array}{l} t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\ \mathbf{if}\;t\_0 \leq 10^{-7}:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;t\_0 \leq 0.6:\\ \;\;\;\;0.5 + 0.25 \cdot \beta\\ \mathbf{else}:\\ \;\;\;\;-0.5 - -0.5\\ \end{array} \]
                (FPCore (alpha beta)
                  :precision binary64
                  (let* ((t_0
                        (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0)))
                  (if (<= t_0 1e-7)
                    (- -0.5 -0.5)
                    (if (<= t_0 0.6) (+ 0.5 (* 0.25 beta)) (- -0.5 -0.5)))))
                double code(double alpha, double beta) {
                	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                	double tmp;
                	if (t_0 <= 1e-7) {
                		tmp = -0.5 - -0.5;
                	} else if (t_0 <= 0.6) {
                		tmp = 0.5 + (0.25 * beta);
                	} else {
                		tmp = -0.5 - -0.5;
                	}
                	return tmp;
                }
                
                real(8) function code(alpha, beta)
                use fmin_fmax_functions
                    real(8), intent (in) :: alpha
                    real(8), intent (in) :: beta
                    real(8) :: t_0
                    real(8) :: tmp
                    t_0 = (((beta - alpha) / ((alpha + beta) + 2.0d0)) + 1.0d0) / 2.0d0
                    if (t_0 <= 1d-7) then
                        tmp = (-0.5d0) - (-0.5d0)
                    else if (t_0 <= 0.6d0) then
                        tmp = 0.5d0 + (0.25d0 * beta)
                    else
                        tmp = (-0.5d0) - (-0.5d0)
                    end if
                    code = tmp
                end function
                
                public static double code(double alpha, double beta) {
                	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                	double tmp;
                	if (t_0 <= 1e-7) {
                		tmp = -0.5 - -0.5;
                	} else if (t_0 <= 0.6) {
                		tmp = 0.5 + (0.25 * beta);
                	} else {
                		tmp = -0.5 - -0.5;
                	}
                	return tmp;
                }
                
                def code(alpha, beta):
                	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
                	tmp = 0
                	if t_0 <= 1e-7:
                		tmp = -0.5 - -0.5
                	elif t_0 <= 0.6:
                		tmp = 0.5 + (0.25 * beta)
                	else:
                		tmp = -0.5 - -0.5
                	return tmp
                
                function code(alpha, beta)
                	t_0 = Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
                	tmp = 0.0
                	if (t_0 <= 1e-7)
                		tmp = Float64(-0.5 - -0.5);
                	elseif (t_0 <= 0.6)
                		tmp = Float64(0.5 + Float64(0.25 * beta));
                	else
                		tmp = Float64(-0.5 - -0.5);
                	end
                	return tmp
                end
                
                function tmp_2 = code(alpha, beta)
                	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                	tmp = 0.0;
                	if (t_0 <= 1e-7)
                		tmp = -0.5 - -0.5;
                	elseif (t_0 <= 0.6)
                		tmp = 0.5 + (0.25 * beta);
                	else
                		tmp = -0.5 - -0.5;
                	end
                	tmp_2 = tmp;
                end
                
                code[alpha_, beta_] := Block[{t$95$0 = N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]}, If[LessEqual[t$95$0, 1e-7], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[t$95$0, 0.6], N[(0.5 + N[(0.25 * beta), $MachinePrecision]), $MachinePrecision], N[(-0.5 - -0.5), $MachinePrecision]]]]
                
                \begin{array}{l}
                t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\
                \mathbf{if}\;t\_0 \leq 10^{-7}:\\
                \;\;\;\;-0.5 - -0.5\\
                
                \mathbf{elif}\;t\_0 \leq 0.6:\\
                \;\;\;\;0.5 + 0.25 \cdot \beta\\
                
                \mathbf{else}:\\
                \;\;\;\;-0.5 - -0.5\\
                
                
                \end{array}
                
                Derivation
                1. Split input into 2 regimes
                2. if (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64)) < 9.9999999999999995e-8 or 0.59999999999999998 < (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64))

                  1. Initial program 63.3%

                    \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                  2. Step-by-step derivation
                    1. lift-/.f64 (N/A)

                      \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
                    2. lift-+.f64 (N/A)

                      \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
                    3. div-add (N/A)

                      \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
                    4. lift-/.f64 (N/A)

                      \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                    5. mult-flip (N/A)

                      \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                    6. associate-/l* (N/A)

                      \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
                    7. lower-fma.f64 (N/A)

                      \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
                    8. lower-/.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
                    9. frac-2neg (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                    10. lower-/.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                    11. metadata-eval (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
                    12. lift-+.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
                    13. add-flip (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
                    14. sub-negate (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                    15. lower--.f64 (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                    16. metadata-eval (N/A)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
                    17. metadata-eval (43.9%)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
                  3. Applied rewrites (43.9%)

                    \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
                  4. Taylor expanded in beta around inf

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
                  5. Step-by-step derivation
                    1. lower-/.f64 (20.0%)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
                  6. Applied rewrites (20.0%)

                    \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
                  7. Step-by-step derivation
                    1. lift-fma.f64 (N/A)

                      \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
                    2. add-flip (N/A)

                      \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                    3. lower--.f64 (N/A)

                      \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                    4. *-commutative (N/A)

                      \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                    5. lower-*.f64 (N/A)

                      \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                    6. metadata-eval (20.0%)

                      \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
                  8. Applied rewrites (20.0%)

                    \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
                  9. Taylor expanded in alpha around inf

                    \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
                  10. Step-by-step derivation
                    1. Applied rewrites (45.2%)

                      \[\leadsto \color{blue}{-0.5} - -0.5 \]

                    if 9.9999999999999995e-8 < (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64)) < 0.59999999999999998

                    1. Initial program 63.3%

                      \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                    2. Taylor expanded in alpha around 0

                      \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                    3. Step-by-step derivation
                      1. lower-*.f64 (N/A)

                        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 + \frac{\beta}{2 + \beta}\right)} \]
                      2. lower-+.f64 (N/A)

                        \[\leadsto \frac{1}{2} \cdot \left(1 + \color{blue}{\frac{\beta}{2 + \beta}}\right) \]
                      3. lower-/.f64 (N/A)

                        \[\leadsto \frac{1}{2} \cdot \left(1 + \frac{\beta}{\color{blue}{2 + \beta}}\right) \]
                      4. lower-+.f64 (43.0%)

                        \[\leadsto 0.5 \cdot \left(1 + \frac{\beta}{2 + \color{blue}{\beta}}\right) \]
                    4. Applied rewrites (43.0%)

                      \[\leadsto \color{blue}{0.5 \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                    5. Taylor expanded in beta around 0

                      \[\leadsto \frac{1}{2} + \color{blue}{\frac{1}{4} \cdot \beta} \]
                    6. Step-by-step derivation
                      1. lower-+.f64 (N/A)

                        \[\leadsto \frac{1}{2} + \frac{1}{4} \cdot \color{blue}{\beta} \]
                      2. lower-*.f64 (27.3%)

                        \[\leadsto 0.5 + 0.25 \cdot \beta \]
                    7. Applied rewrites (27.3%)

                      \[\leadsto 0.5 + \color{blue}{0.25 \cdot \beta} \]
                  11. Recombined 2 regimes into one program.
                  12. Add Preprocessing
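Alternative 6's middle branch replaces the full expression with its series 0.5 + 0.25 * beta, derived by expanding in alpha around 0 and then in beta around 0. The sketch below, which is not part of the Herbie report, spot-checks that branch against the original program for a few small alpha, beta pairs; the sample points and the 1e-3 tolerance are illustrative choices.

```python
# Not from the report: check that the 0.5 + 0.25*beta branch of
# Alternative 6 tracks the original expression when alpha and beta
# are both near zero.

def original(alpha, beta):
    # The initial program: ((beta - alpha)/((alpha + beta) + 2) + 1) / 2
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def branch(beta):
    # The middle regime of Alternative 6
    return 0.5 + 0.25 * beta

for a, b in ((0.0, 1e-3), (1e-4, 1e-3), (1e-3, 1e-2)):
    assert abs(original(a, b) - branch(b)) < 1e-3
```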

                  Alternative 7: 68.8% accurate, 0.4× speedup

                  \[\begin{array}{l} t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\ \mathbf{if}\;t\_0 \leq 0.04:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;t\_0 \leq 0.6:\\ \;\;\;\;\mathsf{fma}\left(\alpha, -0.25, 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;-0.5 - -0.5\\ \end{array} \]
                  (FPCore (alpha beta)
                    :precision binary64
                    (let* ((t_0
                          (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0)))
                    (if (<= t_0 0.04)
                      (- -0.5 -0.5)
                      (if (<= t_0 0.6) (fma alpha -0.25 0.5) (- -0.5 -0.5)))))
                  double code(double alpha, double beta) {
                  	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                  	double tmp;
                  	if (t_0 <= 0.04) {
                  		tmp = -0.5 - -0.5;
                  	} else if (t_0 <= 0.6) {
                  		tmp = fma(alpha, -0.25, 0.5);
                  	} else {
                  		tmp = -0.5 - -0.5;
                  	}
                  	return tmp;
                  }
                  
                  function code(alpha, beta)
                  	t_0 = Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
                  	tmp = 0.0
                  	if (t_0 <= 0.04)
                  		tmp = Float64(-0.5 - -0.5);
                  	elseif (t_0 <= 0.6)
                  		tmp = fma(alpha, -0.25, 0.5);
                  	else
                  		tmp = Float64(-0.5 - -0.5);
                  	end
                  	return tmp
                  end
                  
                  code[alpha_, beta_] := Block[{t$95$0 = N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]}, If[LessEqual[t$95$0, 0.04], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[t$95$0, 0.6], N[(alpha * -0.25 + 0.5), $MachinePrecision], N[(-0.5 - -0.5), $MachinePrecision]]]]
                  
                  \begin{array}{l}
                  t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\
                  \mathbf{if}\;t\_0 \leq 0.04:\\
                  \;\;\;\;-0.5 - -0.5\\
                  
                  \mathbf{elif}\;t\_0 \leq 0.6:\\
                  \;\;\;\;\mathsf{fma}\left(\alpha, -0.25, 0.5\right)\\
                  
                  \mathbf{else}:\\
                  \;\;\;\;-0.5 - -0.5\\
                  
                  
                  \end{array}
                  
                  Derivation
                  1. Split input into 2 regimes
                  2. if (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64)) < 0.040000000000000001 or 0.59999999999999998 < (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64))

                    1. Initial program 63.3%

                      \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                    2. Step-by-step derivation
                      1. lift-/.f64 (N/A)

                        \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
                      2. lift-+.f64 (N/A)

                        \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
                      3. div-add (N/A)

                        \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
                      4. lift-/.f64 (N/A)

                        \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                      5. mult-flip (N/A)

                        \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                      6. associate-/l* (N/A)

                        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
                      7. lower-fma.f64 (N/A)

                        \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
                      8. lower-/.f64 (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
                      9. frac-2neg (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                      10. lower-/.f64 (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                      11. metadata-eval (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
                      12. lift-+.f64 (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
                      13. add-flip (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
                      14. sub-negate (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                      15. lower--.f64 (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                      16. metadata-eval (N/A)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
                      17. metadata-eval (43.9%)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
                    3. Applied rewrites (43.9%)

                      \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
                    4. Taylor expanded in beta around inf

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
                    5. Step-by-step derivation
                      1. lower-/.f64 (20.0%)

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
                    6. Applied rewrites (20.0%)

                      \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
                    7. Step-by-step derivation
                      1. lift-fma.f64 (N/A)

                        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
                      2. add-flip (N/A)

                        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                      3. lower--.f64 (N/A)

                        \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                      4. *-commutative (N/A)

                        \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                      5. lower-*.f64 (N/A)

                        \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                      6. metadata-eval (20.0%)

                        \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
                    8. Applied rewrites (20.0%)

                      \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
                    9. Taylor expanded in alpha around inf

                      \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
                    10. Step-by-step derivation
                      1. Applied rewrites (45.2%)

                        \[\leadsto \color{blue}{-0.5} - -0.5 \]

                      if 0.040000000000000001 < (/.f64 (+.f64 (/.f64 (-.f64 beta alpha) (+.f64 (+.f64 alpha beta) #s(literal 2 binary64))) #s(literal 1 binary64)) #s(literal 2 binary64)) < 0.59999999999999998

                      1. Initial program 63.3%

                        \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                      2. Taylor expanded in beta around 0

                        \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                      3. Step-by-step derivation
                        1. lower-*.f64 (N/A)

                          \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                        2. lower--.f64 (N/A)

                          \[\leadsto \frac{1}{2} \cdot \left(1 - \color{blue}{\frac{\alpha}{2 + \alpha}}\right) \]
                        3. lower-/.f64 (N/A)

                          \[\leadsto \frac{1}{2} \cdot \left(1 - \frac{\alpha}{\color{blue}{2 + \alpha}}\right) \]
                        4. lower-+.f64 (58.4%)

                          \[\leadsto 0.5 \cdot \left(1 - \frac{\alpha}{2 + \color{blue}{\alpha}}\right) \]
                      4. Applied rewrites (58.4%)

                        \[\leadsto \color{blue}{0.5 \cdot \left(1 - \frac{\alpha}{2 + \alpha}\right)} \]
                      5. Taylor expanded in alpha around 0

                        \[\leadsto \frac{1}{2} + \color{blue}{\frac{-1}{4} \cdot \alpha} \]
                      6. Step-by-step derivation
                        1. lower-+.f64 (N/A)

                          \[\leadsto \frac{1}{2} + \frac{-1}{4} \cdot \color{blue}{\alpha} \]
                        2. lower-*.f64 (28.4%)

                          \[\leadsto 0.5 + -0.25 \cdot \alpha \]
                      7. Applied rewrites (28.4%)

                        \[\leadsto 0.5 + \color{blue}{-0.25 \cdot \alpha} \]
                      8. Step-by-step derivation
                        1. lift-+.f64 (N/A)

                          \[\leadsto \frac{1}{2} + \frac{-1}{4} \cdot \color{blue}{\alpha} \]
                        2. +-commutative (N/A)

                          \[\leadsto \frac{-1}{4} \cdot \alpha + \frac{1}{2} \]
                        3. lift-*.f64 (N/A)

                          \[\leadsto \frac{-1}{4} \cdot \alpha + \frac{1}{2} \]
                        4. *-commutative (N/A)

                          \[\leadsto \alpha \cdot \frac{-1}{4} + \frac{1}{2} \]
                        5. lower-fma.f64 (28.4%)

                          \[\leadsto \mathsf{fma}\left(\alpha, -0.25, 0.5\right) \]
                      9. Applied rewrites (28.4%)

                        \[\leadsto \mathsf{fma}\left(\alpha, -0.25, 0.5\right) \]
                    11. Recombined 2 regimes into one program.
                    12. Add Preprocessing
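Alternative 7's middle branch is fma(alpha, -0.25, 0.5); written with a separate multiply and add it is 0.5 - 0.25 * alpha, the first-order series in alpha with beta near 0 (fusing into an fma only changes the final rounding). The sketch below, not part of the Herbie report, spot-checks that branch against the original program; sample points and tolerance are illustrative.

```python
# Not from the report: check that the fma(alpha, -0.25, 0.5) branch of
# Alternative 7 tracks the original expression for small alpha, beta = 0.

def original(alpha, beta):
    # The initial program: ((beta - alpha)/((alpha + beta) + 2) + 1) / 2
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def branch(alpha):
    # fma(alpha, -0.25, 0.5) written with plain * and +
    return alpha * -0.25 + 0.5

for a in (1e-4, 1e-3, 1e-2):
    assert abs(original(a, 0.0) - branch(a)) < 1e-3
```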

                    Alternative 8: 68.6% accurate, 0.4× speedup

                    \[\begin{array}{l} t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\ \mathbf{if}\;t\_0 \leq 10^{-7}:\\ \;\;\;\;-0.5 - -0.5\\ \mathbf{elif}\;t\_0 \leq 0.6:\\ \;\;\;\;0.5\\ \mathbf{else}:\\ \;\;\;\;-0.5 - -0.5\\ \end{array} \]
                    (FPCore (alpha beta)
                      :precision binary64
                      (let* ((t_0
                            (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0)))
                      (if (<= t_0 1e-7)
                        (- -0.5 -0.5)
                        (if (<= t_0 0.6) 0.5 (- -0.5 -0.5)))))
                    double code(double alpha, double beta) {
                    	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                    	double tmp;
                    	if (t_0 <= 1e-7) {
                    		tmp = -0.5 - -0.5;
                    	} else if (t_0 <= 0.6) {
                    		tmp = 0.5;
                    	} else {
                    		tmp = -0.5 - -0.5;
                    	}
                    	return tmp;
                    }
                    
                    real(8) function code(alpha, beta)
                    use fmin_fmax_functions
                        real(8), intent (in) :: alpha
                        real(8), intent (in) :: beta
                        real(8) :: t_0
                        real(8) :: tmp
                        t_0 = (((beta - alpha) / ((alpha + beta) + 2.0d0)) + 1.0d0) / 2.0d0
                        if (t_0 <= 1d-7) then
                            tmp = (-0.5d0) - (-0.5d0)
                        else if (t_0 <= 0.6d0) then
                            tmp = 0.5d0
                        else
                            tmp = (-0.5d0) - (-0.5d0)
                        end if
                        code = tmp
                    end function
                    
                    public static double code(double alpha, double beta) {
                    	double t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                    	double tmp;
                    	if (t_0 <= 1e-7) {
                    		tmp = -0.5 - -0.5;
                    	} else if (t_0 <= 0.6) {
                    		tmp = 0.5;
                    	} else {
                    		tmp = -0.5 - -0.5;
                    	}
                    	return tmp;
                    }
                    
                    def code(alpha, beta):
                    	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
                    	tmp = 0
                    	if t_0 <= 1e-7:
                    		tmp = -0.5 - -0.5
                    	elif t_0 <= 0.6:
                    		tmp = 0.5
                    	else:
                    		tmp = -0.5 - -0.5
                    	return tmp
                    
                    function code(alpha, beta)
                    	t_0 = Float64(Float64(Float64(Float64(beta - alpha) / Float64(Float64(alpha + beta) + 2.0)) + 1.0) / 2.0)
                    	tmp = 0.0
                    	if (t_0 <= 1e-7)
                    		tmp = Float64(-0.5 - -0.5);
                    	elseif (t_0 <= 0.6)
                    		tmp = 0.5;
                    	else
                    		tmp = Float64(-0.5 - -0.5);
                    	end
                    	return tmp
                    end
                    
                    function tmp_2 = code(alpha, beta)
                    	t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0;
                    	tmp = 0.0;
                    	if (t_0 <= 1e-7)
                    		tmp = -0.5 - -0.5;
                    	elseif (t_0 <= 0.6)
                    		tmp = 0.5;
                    	else
                    		tmp = -0.5 - -0.5;
                    	end
                    	tmp_2 = tmp;
                    end
                    
                    code[alpha_, beta_] := Block[{t$95$0 = N[(N[(N[(N[(beta - alpha), $MachinePrecision] / N[(N[(alpha + beta), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision] + 1.0), $MachinePrecision] / 2.0), $MachinePrecision]}, If[LessEqual[t$95$0, 1*^-7], N[(-0.5 - -0.5), $MachinePrecision], If[LessEqual[t$95$0, 0.6], 0.5, N[(-0.5 - -0.5), $MachinePrecision]]]]
                    
                    \begin{array}{l}
                    t_0 := \frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}\\
                    \mathbf{if}\;t_0 \leq 10^{-7}:\\
                    \;\;\;\;-0.5 - -0.5\\
                    \mathbf{elif}\;t_0 \leq 0.6:\\
                    \;\;\;\;0.5\\
                    \mathbf{else}:\\
                    \;\;\;\;-0.5 - -0.5\\
                    \end{array}
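
The regime split above can be exercised directly. The sketch below mirrors the Python listing above; the sample inputs are chosen here for illustration and are not taken from the report:

```python
def code(alpha, beta):
    # Same regime split as the listings above.
    t_0 = (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0
    if t_0 <= 1e-7:
        return -0.5 - -0.5   # evaluates to 0.0
    elif t_0 <= 0.6:
        return 0.5
    else:
        return -0.5 - -0.5   # evaluates to 0.0

print(code(0.0, 0.0))   # t_0 = 0.5      -> middle regime
print(code(1e9, 0.0))   # t_0 ~ 1e-9     -> first regime
print(code(0.0, 1e9))   # t_0 ~ 1 - 1e-9 -> third regime
```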
                    
                    Derivation
                    1. Split input into 2 regimes
                    2. if ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2 < 9.9999999999999995e-8 or 0.59999999999999998 < ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2

                      1. Initial program 63.3%

                        \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                      2. Step-by-step derivation
                        1. lift-/.f64 N/A

                          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2}} \]
                        2. lift-+.f64 N/A

                          \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}}{2} \]
                        3. div-add N/A

                          \[\leadsto \color{blue}{\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}{2} + \frac{1}{2}} \]
                        4. lift-/.f64 N/A

                          \[\leadsto \frac{\color{blue}{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                        5. mult-flip N/A

                          \[\leadsto \frac{\color{blue}{\left(\beta - \alpha\right) \cdot \frac{1}{\left(\alpha + \beta\right) + 2}}}{2} + \frac{1}{2} \]
                        6. associate-/l* N/A

                          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}} + \frac{1}{2} \]
                        7. lower-fma.f64 N/A

                          \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}, \frac{1}{2}\right)} \]
                        8. lower-/.f64 N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{\left(\alpha + \beta\right) + 2}}{2}}, \frac{1}{2}\right) \]
                        9. frac-2neg N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                        10. lower-/.f64 N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\color{blue}{\frac{\mathsf{neg}\left(1\right)}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}}{2}, \frac{1}{2}\right) \]
                        11. metadata-eval N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{\color{blue}{-1}}{\mathsf{neg}\left(\left(\left(\alpha + \beta\right) + 2\right)\right)}}{2}, \frac{1}{2}\right) \]
                        12. lift-+.f64 N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) + 2\right)}\right)}}{2}, \frac{1}{2}\right) \]
                        13. add-flip N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\mathsf{neg}\left(\color{blue}{\left(\left(\alpha + \beta\right) - \left(\mathsf{neg}\left(2\right)\right)\right)}\right)}}{2}, \frac{1}{2}\right) \]
                        14. sub-negate N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                        15. lower--.f64 N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{\left(\mathsf{neg}\left(2\right)\right) - \left(\alpha + \beta\right)}}}{2}, \frac{1}{2}\right) \]
                        16. metadata-eval N/A

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{\color{blue}{-2} - \left(\alpha + \beta\right)}}{2}, \frac{1}{2}\right) \]
                        17. metadata-eval 43.9%

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, \color{blue}{0.5}\right) \]
                      3. Applied rewrites 43.9%

                        \[\leadsto \color{blue}{\mathsf{fma}\left(\beta - \alpha, \frac{\frac{-1}{-2 - \left(\alpha + \beta\right)}}{2}, 0.5\right)} \]
                      4. Taylor expanded in beta around inf

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{\frac{1}{2}}{\beta}}, 0.5\right) \]
                      5. Step-by-step derivation
                        1. lower-/.f64 20.0%

                          \[\leadsto \mathsf{fma}\left(\beta - \alpha, \frac{0.5}{\color{blue}{\beta}}, 0.5\right) \]
                      6. Applied rewrites 20.0%

                        \[\leadsto \mathsf{fma}\left(\beta - \alpha, \color{blue}{\frac{0.5}{\beta}}, 0.5\right) \]
                      7. Step-by-step derivation
                        1. lift-fma.f64 N/A

                          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} + \frac{1}{2}} \]
                        2. add-flip N/A

                          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                        3. lower--.f64 N/A

                          \[\leadsto \color{blue}{\left(\beta - \alpha\right) \cdot \frac{\frac{1}{2}}{\beta} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right)} \]
                        4. *-commutative N/A

                          \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                        5. lower-*.f64 N/A

                          \[\leadsto \color{blue}{\frac{\frac{1}{2}}{\beta} \cdot \left(\beta - \alpha\right)} - \left(\mathsf{neg}\left(\frac{1}{2}\right)\right) \]
                        6. metadata-eval 20.0%

                          \[\leadsto \frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - \color{blue}{-0.5} \]
                      8. Applied rewrites 20.0%

                        \[\leadsto \color{blue}{\frac{0.5}{\beta} \cdot \left(\beta - \alpha\right) - -0.5} \]
                      9. Taylor expanded in alpha around inf

                        \[\leadsto \color{blue}{\frac{-1}{2}} - -0.5 \]
                      10. Step-by-step derivation
                        1. Applied rewrites 45.2%

                          \[\leadsto \color{blue}{-0.5} - -0.5 \]
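
Two of the intermediate forms in the regime-1 derivation above can be checked numerically. The sketch below (sample points chosen here for illustration) evaluates the fma-style rewrite from step 3 without an actual fused multiply-add (Python's `math.fma` only exists from 3.13, so the multiply and add here round separately), plus the large-beta Taylor form from step 8, comparing both against the original program:

```python
def original(alpha, beta):
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def fma_style(alpha, beta):
    # fma(beta - alpha, (-1 / (-2 - (alpha + beta))) / 2, 0.5), with the
    # multiply and add rounded separately (a true fma rounds only once).
    return (beta - alpha) * ((-1.0 / (-2.0 - (alpha + beta))) / 2.0) + 0.5

def large_beta(alpha, beta):
    # 0.5 / beta * (beta - alpha) - -0.5: the Taylor expansion in beta
    # around infinity, accurate only when beta dominates alpha.
    return (0.5 / beta) * (beta - alpha) - -0.5

# The fma-style form is algebraically identical to the original:
assert abs(fma_style(1.0, 3.0) - original(1.0, 3.0)) < 1e-15

# The Taylor form agrees only for large beta:
assert abs(large_beta(1.0, 1e10) - original(1.0, 1e10)) < 1e-9
```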

                        if 9.9999999999999995e-8 < ((beta - alpha) / ((alpha + beta) + 2) + 1) / 2 < 0.59999999999999998

                        1. Initial program 63.3%

                          \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                        2. Taylor expanded in alpha around 0

                          \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                        3. Step-by-step derivation
                          1. lower-*.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 + \frac{\beta}{2 + \beta}\right)} \]
                          2. lower-+.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \left(1 + \color{blue}{\frac{\beta}{2 + \beta}}\right) \]
                          3. lower-/.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \left(1 + \frac{\beta}{\color{blue}{2 + \beta}}\right) \]
                          4. lower-+.f64 43.0%

                            \[\leadsto 0.5 \cdot \left(1 + \frac{\beta}{2 + \color{blue}{\beta}}\right) \]
                        4. Applied rewrites 43.0%

                          \[\leadsto \color{blue}{0.5 \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                        5. Taylor expanded in beta around 0

                          \[\leadsto \frac{1}{2} \]
                        6. Step-by-step derivation
                          1. Applied rewrites 29.6%

                            \[\leadsto 0.5 \]
                        7. Recombined 2 regimes into one program.
                        8. Add Preprocessing
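
The middle-regime derivation above Taylor-expands alpha around 0, giving 0.5 * (1 + beta / (2 + beta)) before it is further collapsed to 0.5. A quick check (sample points chosen here for illustration) confirms that at alpha = 0 this intermediate form matches the original program exactly:

```python
def original(alpha, beta):
    return (((beta - alpha) / ((alpha + beta) + 2.0)) + 1.0) / 2.0

def middle_form(beta):
    # 0.5 * (1 + beta / (2 + beta)): the Taylor expansion in alpha around 0.
    return 0.5 * (1.0 + beta / (2.0 + beta))

# At alpha = 0 both compute the same doubles, since x / 2 and 0.5 * x
# round identically in IEEE 754 arithmetic.
assert middle_form(1.0) == original(0.0, 1.0)
assert middle_form(7.0) == original(0.0, 7.0)
```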

                        Alternative 9: 29.6% accurate, 19.0× speedup

                        \[0.5 \]
                        (FPCore (alpha beta)
                          :precision binary64
                          0.5)
                        double code(double alpha, double beta) {
                        	return 0.5;
                        }
                        
                        real(8) function code(alpha, beta)
                        use fmin_fmax_functions
                            real(8), intent (in) :: alpha
                            real(8), intent (in) :: beta
                            code = 0.5d0
                        end function
                        
                        public static double code(double alpha, double beta) {
                        	return 0.5;
                        }
                        
                        def code(alpha, beta):
                        	return 0.5
                        
                        function code(alpha, beta)
                        	return 0.5
                        end
                        
                        function tmp = code(alpha, beta)
                        	tmp = 0.5;
                        end
                        
                        code[alpha_, beta_] := 0.5
                        
                        0.5
                        
                        Derivation
                        1. Initial program 63.3%

                          \[\frac{\frac{\beta - \alpha}{\left(\alpha + \beta\right) + 2} + 1}{2} \]
                        2. Taylor expanded in alpha around 0

                          \[\leadsto \color{blue}{\frac{1}{2} \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                        3. Step-by-step derivation
                          1. lower-*.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(1 + \frac{\beta}{2 + \beta}\right)} \]
                          2. lower-+.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \left(1 + \color{blue}{\frac{\beta}{2 + \beta}}\right) \]
                          3. lower-/.f64 N/A

                            \[\leadsto \frac{1}{2} \cdot \left(1 + \frac{\beta}{\color{blue}{2 + \beta}}\right) \]
                          4. lower-+.f64 43.0%

                            \[\leadsto 0.5 \cdot \left(1 + \frac{\beta}{2 + \color{blue}{\beta}}\right) \]
                        4. Applied rewrites 43.0%

                          \[\leadsto \color{blue}{0.5 \cdot \left(1 + \frac{\beta}{2 + \beta}\right)} \]
                        5. Taylor expanded in beta around 0

                          \[\leadsto \frac{1}{2} \]
                        6. Step-by-step derivation
                          1. Applied rewrites 29.6%

                            \[\leadsto 0.5 \]
                          2. Add Preprocessing

                          Reproduce

                          herbie shell --seed 2025313 -o setup:search
                          (FPCore (alpha beta)
                            :name "Octave 3.8, jcobi/1"
                            :precision binary64
                            :pre (and (> alpha -1.0) (> beta -1.0))
                            (/ (+ (/ (- beta alpha) (+ (+ alpha beta) 2.0)) 1.0) 2.0))