Average Error: 3.8 → 3.8
Time: 51.8s
Precision: 64 bits (double)
\[\alpha \gt -1.0 \land \beta \gt -1.0\]
\[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1.0}{\left(\alpha + \beta\right) + 2.0 \cdot 1.0}}{\left(\alpha + \beta\right) + 2.0 \cdot 1.0}}{\left(\left(\alpha + \beta\right) + 2.0 \cdot 1.0\right) + 1.0}\]
\[\frac{\frac{1.0 + \mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right)}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)} \cdot \frac{1}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}{1.0 + \mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}\]
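Both expressions denote the same real-valued quantity (a reading added here for clarity, not part of Herbie's output):

\[\frac{1.0 + \left(\alpha + \beta\right) + \beta \cdot \alpha}{\left(\left(\alpha + \beta\right) + 2.0\right)^{2} \cdot \left(\left(\alpha + \beta\right) + 3.0\right)}\]

since fma(beta, alpha, beta + alpha) equals beta*alpha + beta + alpha and fma(2.0, 1.0, beta + alpha) equals (alpha + beta) + 2.0. The two versions differ only in how the intermediate operations are grouped and rounded in floating point.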
/* Initial program */
double f(double alpha, double beta) {
        double r3108343 = alpha;
        double r3108344 = beta;
        double r3108345 = r3108343 + r3108344;
        double r3108346 = r3108344 * r3108343;
        double r3108347 = r3108345 + r3108346;
        double r3108348 = 1.0;
        double r3108349 = r3108347 + r3108348;
        double r3108350 = 2.0;
        double r3108351 = r3108350 * r3108348;
        double r3108352 = r3108345 + r3108351;
        double r3108353 = r3108349 / r3108352;
        double r3108354 = r3108353 / r3108352;
        double r3108355 = r3108352 + r3108348;
        double r3108356 = r3108354 / r3108355;
        return r3108356;
}

/* Simplified program (fma requires <math.h>) */
double f(double alpha, double beta) {
        double r3108357 = 1.0;
        double r3108358 = beta;
        double r3108359 = alpha;
        double r3108360 = r3108358 + r3108359;
        double r3108361 = fma(r3108358, r3108359, r3108360);
        double r3108362 = r3108357 + r3108361;
        double r3108363 = 2.0;
        double r3108364 = fma(r3108363, r3108357, r3108360);
        double r3108365 = r3108362 / r3108364;
        double r3108366 = 1.0;
        double r3108367 = r3108366 / r3108364;
        double r3108368 = r3108365 * r3108367;
        double r3108369 = r3108357 + r3108364;
        double r3108370 = r3108368 / r3108369;
        return r3108370;
}
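
Both routines above are emitted under the same name f, so they cannot be linked into one program as-is. A minimal comparison harness (a sketch added here, with the functions renamed f_initial and f_simplified; the renaming and sample points are not part of Herbie's output) could look like:

#include <stdio.h>
#include <math.h>

/* Initial program, same operations and order as the first listing above. */
static double f_initial(double alpha, double beta) {
        double s = alpha + beta;
        double num = (s + beta * alpha) + 1.0;
        double den = s + 2.0 * 1.0;
        return ((num / den) / den) / (den + 1.0);
}

/* Simplified program, same operations and order as the second listing above. */
static double f_simplified(double alpha, double beta) {
        double s = beta + alpha;
        double num = 1.0 + fma(beta, alpha, s);
        double den = fma(2.0, 1.0, s);
        return ((num / den) * (1.0 / den)) / (1.0 + den);
}

int main(void) {
        /* Sample points satisfying the precondition alpha > -1 and beta > -1. */
        double pts[][2] = {{-0.5, -0.5}, {0.0, 0.0}, {3.0, 7.0}, {1e8, 1e-8}};
        for (int i = 0; i < 4; i++) {
                double a = pts[i][0], b = pts[i][1];
                printf("alpha=%-10g beta=%-10g initial=%.17g simplified=%.17g\n",
                       a, b, f_initial(a, b), f_simplified(a, b));
        }
        return 0;
}

Compile with a C99-or-later compiler and link the math library, for example cc harness.c -lm (harness.c being whatever file name you choose).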

Error

(Plots omitted: bits error versus alpha; bits error versus beta.)

Derivation

  1. Initial program (error 3.8)

    \[\frac{\frac{\frac{\left(\left(\alpha + \beta\right) + \beta \cdot \alpha\right) + 1.0}{\left(\alpha + \beta\right) + 2.0 \cdot 1.0}}{\left(\alpha + \beta\right) + 2.0 \cdot 1.0}}{\left(\left(\alpha + \beta\right) + 2.0 \cdot 1.0\right) + 1.0}\]
  2. Simplified (error 3.8)

    \[\leadsto \color{blue}{\frac{\frac{\frac{1.0 + \mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right)}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}{1.0 + \mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}\]
  3. Using strategy rm
  4. Applied div-inv (error 3.8); the rule itself is sketched after this derivation

    \[\leadsto \frac{\color{blue}{\frac{1.0 + \mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right)}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)} \cdot \frac{1}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}}{1.0 + \mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}\]
  5. Final simplification (error 3.8)

    \[\leadsto \frac{\frac{1.0 + \mathsf{fma}\left(\beta, \alpha, \beta + \alpha\right)}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)} \cdot \frac{1}{\mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}}{1.0 + \mathsf{fma}\left(2.0, 1.0, \beta + \alpha\right)}\]

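The div-inv rewrite applied in step 4 turns a division into multiplication by a reciprocal; stated generically (my paraphrase of the rule, not Herbie's own documentation):

\[\frac{x}{y} \leadsto x \cdot \frac{1}{y}\]

Here it is applied with x the inner quotient (1.0 + fma(beta, alpha, beta + alpha)) / fma(2.0, 1.0, beta + alpha) and y the shared denominator fma(2.0, 1.0, beta + alpha), which is why the simplified program computes 1 / fma(2.0, 1.0, beta + alpha) and multiplies by it instead of dividing a second time.
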
Reproduce

herbie shell --seed 2019165 +o rules:numerics
(FPCore (alpha beta)
  :name "Octave 3.8, jcobi/3"
  :pre (and (> alpha -1.0) (> beta -1.0))
  (/ (/ (/ (+ (+ (+ alpha beta) (* beta alpha)) 1.0) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ alpha beta) (* 2.0 1.0))) (+ (+ (+ alpha beta) (* 2.0 1.0)) 1.0)))