logq (problem 3.4.3)

Percentage Accurate: 8.5% → 100.0%
Time: 9.4s
Alternatives: 7
Speedup: 19.7×

Specification

\[\left|\varepsilon\right| < 1\]
\[\begin{array}{l} \\ \log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \end{array} \]
(FPCore (eps) :precision binary64 (log (/ (- 1.0 eps) (+ 1.0 eps))))
double code(double eps) {
	return log(((1.0 - eps) / (1.0 + eps)));
}
real(8) function code(eps)
    real(8), intent (in) :: eps
    code = log(((1.0d0 - eps) / (1.0d0 + eps)))
end function
public static double code(double eps) {
	return Math.log(((1.0 - eps) / (1.0 + eps)));
}
def code(eps):
	return math.log(((1.0 - eps) / (1.0 + eps)))
function code(eps)
	return log(Float64(Float64(1.0 - eps) / Float64(1.0 + eps)))
end
function tmp = code(eps)
	tmp = log(((1.0 - eps) / (1.0 + eps)));
end
code[eps_] := N[Log[N[(N[(1.0 - eps), $MachinePrecision] / N[(1.0 + eps), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right)
\end{array}
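A quick way to see why the original program scores only 8.5%: for |eps| below about 2⁻⁵³, both 1 - eps and 1 + eps round to exactly 1.0 in binary64, so the quotient is exactly 1.0 and the logarithm is 0, while the true value is approximately -2·eps. A minimal Python sketch of this failure mode (not part of the report output):

```python
import math

def naive(eps):
    # Direct transcription of the specification
    return math.log((1.0 - eps) / (1.0 + eps))

eps = 1e-17  # smaller than half an ulp of 1.0 (2**-53)
print(naive(eps))   # 0.0: both 1 - eps and 1 + eps round to exactly 1.0
print(-2.0 * eps)   # the true value is approximately -2e-17
```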

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs eps

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (named in the plot title); the vertical axis is accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 7 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 8.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \end{array} \]
(FPCore (eps) :precision binary64 (log (/ (- 1.0 eps) (+ 1.0 eps))))
double code(double eps) {
	return log(((1.0 - eps) / (1.0 + eps)));
}
real(8) function code(eps)
    real(8), intent (in) :: eps
    code = log(((1.0d0 - eps) / (1.0d0 + eps)))
end function
public static double code(double eps) {
	return Math.log(((1.0 - eps) / (1.0 + eps)));
}
def code(eps):
	return math.log(((1.0 - eps) / (1.0 + eps)))
function code(eps)
	return log(Float64(Float64(1.0 - eps) / Float64(1.0 + eps)))
end
function tmp = code(eps)
	tmp = log(((1.0 - eps) / (1.0 + eps)));
end
code[eps_] := N[Log[N[(N[(1.0 - eps), $MachinePrecision] / N[(1.0 + eps), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right)
\end{array}

Alternative 1: 100.0% accurate, 0.6× speedup

\[\begin{array}{l} \\ \mathsf{log1p}\left(-\varepsilon\right) - \mathsf{log1p}\left(\varepsilon\right) \end{array} \]
(FPCore (eps) :precision binary64 (- (log1p (- eps)) (log1p eps)))
double code(double eps) {
	return log1p(-eps) - log1p(eps);
}
public static double code(double eps) {
	return Math.log1p(-eps) - Math.log1p(eps);
}
def code(eps):
	return math.log1p(-eps) - math.log1p(eps)
function code(eps)
	return Float64(log1p(Float64(-eps)) - log1p(eps))
end
code[eps_] := N[(N[Log[1 + (-eps)], $MachinePrecision] - N[Log[1 + eps], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{log1p}\left(-\varepsilon\right) - \mathsf{log1p}\left(\varepsilon\right)
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-+.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{1 + \varepsilon}}\right) \]
    2. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\varepsilon + 1}}\right) \]
    3. flip-+N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\varepsilon - 1}}}\right) \]
    4. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\varepsilon + \left(\mathsf{neg}\left(1\right)\right)}}}\right) \]
    5. remove-double-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(\varepsilon\right)\right)\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)}}\right) \]
    6. distribute-neg-inN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\mathsf{neg}\left(\left(\left(\mathsf{neg}\left(\varepsilon\right)\right) + 1\right)\right)}}}\right) \]
    7. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 + \left(\mathsf{neg}\left(\varepsilon\right)\right)\right)}\right)}}\right) \]
    8. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 - \varepsilon\right)}\right)}}\right) \]
    9. lift--.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 - \varepsilon\right)}\right)}}\right) \]
    10. lower-/.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}}\right) \]
    11. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - \color{blue}{1}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    12. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\color{blue}{\varepsilon \cdot \varepsilon + \left(\mathsf{neg}\left(1\right)\right)}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    13. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon + \color{blue}{-1}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    14. lower-fma.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\color{blue}{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    15. neg-sub0N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{0 - \left(1 - \varepsilon\right)}}}\right) \]
    16. lift--.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{0 - \color{blue}{\left(1 - \varepsilon\right)}}}\right) \]
    17. associate--r-N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\left(0 - 1\right) + \varepsilon}}}\right) \]
    18. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{-1} + \varepsilon}}\right) \]
    19. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\varepsilon + -1}}}\right) \]
    20. lower-+.f648.0

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\varepsilon + -1}}}\right) \]
  4. Applied rewrites8.0%

    \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}}}\right) \]
  5. Step-by-step derivation
    1. lift-log.f64N/A

      \[\leadsto \color{blue}{\log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}}\right)} \]
    2. lift-/.f64N/A

      \[\leadsto \log \color{blue}{\left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}}\right)} \]
    3. frac-2negN/A

      \[\leadsto \log \color{blue}{\left(\frac{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}{\mathsf{neg}\left(\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}\right)}\right)} \]
    4. log-divN/A

      \[\leadsto \color{blue}{\log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}\right)\right)} \]
    5. lift-/.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\color{blue}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}}\right)\right) \]
    6. lift-fma.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\color{blue}{\varepsilon \cdot \varepsilon + -1}}{\varepsilon + -1}\right)\right) \]
    7. lift-*.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\color{blue}{\varepsilon \cdot \varepsilon} + -1}{\varepsilon + -1}\right)\right) \]
    8. metadata-evalN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\varepsilon \cdot \varepsilon + \color{blue}{\left(\mathsf{neg}\left(1\right)\right)}}{\varepsilon + -1}\right)\right) \]
    9. sub-negN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\color{blue}{\varepsilon \cdot \varepsilon - 1}}{\varepsilon + -1}\right)\right) \]
    10. lift-*.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\color{blue}{\varepsilon \cdot \varepsilon} - 1}{\varepsilon + -1}\right)\right) \]
    11. metadata-evalN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\varepsilon \cdot \varepsilon - \color{blue}{-1 \cdot -1}}{\varepsilon + -1}\right)\right) \]
    12. lift-+.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\frac{\varepsilon \cdot \varepsilon - -1 \cdot -1}{\color{blue}{\varepsilon + -1}}\right)\right) \]
    13. flip--N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\color{blue}{\left(\varepsilon - -1\right)}\right)\right) \]
    14. sub-negN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\color{blue}{\left(\varepsilon + \left(\mathsf{neg}\left(-1\right)\right)\right)}\right)\right) \]
    15. metadata-evalN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\left(\varepsilon + \color{blue}{1}\right)\right)\right) \]
    16. +-commutativeN/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\color{blue}{\left(1 + \varepsilon\right)}\right)\right) \]
    17. lift-+.f64N/A

      \[\leadsto \log \left(\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)\right) - \log \left(\mathsf{neg}\left(\color{blue}{\left(1 + \varepsilon\right)}\right)\right) \]
    18. log-divN/A

      \[\leadsto \color{blue}{\log \left(\frac{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}{\mathsf{neg}\left(\left(1 + \varepsilon\right)\right)}\right)} \]
  6. Applied rewrites100.0%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(-\varepsilon\right) - \mathsf{log1p}\left(\varepsilon\right)} \]
  7. Add Preprocessing
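As a quick sanity check (not part of the report), the rewritten form can be compared against the original at inputs where the original cancels; `math.log1p` is the Python counterpart of the `log1p` used above:

```python
import math

def naive(eps):
    # Original program: loses all accuracy once 1 +/- eps rounds to 1.0
    return math.log((1.0 - eps) / (1.0 + eps))

def rewritten(eps):
    # Alternative 1: log1p(-eps) - log1p(eps)
    return math.log1p(-eps) - math.log1p(eps)

for eps in (1e-3, 1e-9, 1e-17):
    print(eps, naive(eps), rewritten(eps))
# At eps = 1e-17 the naive form returns exactly 0.0, while the
# rewritten form returns approximately -2e-17.
```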

Alternative 2: 99.8% accurate, 3.0× speedup

\[\begin{array}{l} \\ \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.2857142857142857, -0.4\right), -0.6666666666666666\right), -2\right) \end{array} \]
(FPCore (eps)
 :precision binary64
 (*
  eps
  (fma
   eps
   (*
    eps
    (fma
     eps
     (* eps (fma (* eps eps) -0.2857142857142857 -0.4))
     -0.6666666666666666))
   -2.0)))
double code(double eps) {
	return eps * fma(eps, (eps * fma(eps, (eps * fma((eps * eps), -0.2857142857142857, -0.4)), -0.6666666666666666)), -2.0);
}
function code(eps)
	return Float64(eps * fma(eps, Float64(eps * fma(eps, Float64(eps * fma(Float64(eps * eps), -0.2857142857142857, -0.4)), -0.6666666666666666)), -2.0))
end
code[eps_] := N[(eps * N[(eps * N[(eps * N[(eps * N[(eps * N[(N[(eps * eps), $MachinePrecision] * -0.2857142857142857 + -0.4), $MachinePrecision]), $MachinePrecision] + -0.6666666666666666), $MachinePrecision]), $MachinePrecision] + -2.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.2857142857142857, -0.4\right), -0.6666666666666666\right), -2\right)
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in eps around 0

    \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right) - 2\right)} \]
  4. Step-by-step derivation
    1. lower-*.f64N/A

      \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right) - 2\right)} \]
    2. sub-negN/A

      \[\leadsto \varepsilon \cdot \color{blue}{\left({\varepsilon}^{2} \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right) + \left(\mathsf{neg}\left(2\right)\right)\right)} \]
    3. unpow2N/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(\varepsilon \cdot \varepsilon\right)} \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right) + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    4. associate-*l*N/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\varepsilon \cdot \left(\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right)\right)} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    5. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \left(\varepsilon \cdot \left(\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right)\right) + \color{blue}{-2}\right) \]
    6. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \color{blue}{\mathsf{fma}\left(\varepsilon, \varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{7} \cdot {\varepsilon}^{2} - \frac{2}{5}\right) - \frac{2}{3}\right), -2\right)} \]
  5. Applied rewrites99.6%

    \[\leadsto \color{blue}{\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.2857142857142857, -0.4\right), -0.6666666666666666\right), -2\right)} \]
  6. Add Preprocessing
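The nested fma chain above is a Horner-style evaluation of the odd Taylor series log((1-ε)/(1+ε)) = -2(ε + ε³/3 + ε⁵/5 + ε⁷/7) + O(ε⁹). A sketch of the same polynomial in plain Python, using ordinary multiplies and adds instead of fused ones (so slightly less accurate than the fma version, but with the same truncation error):

```python
import math

def poly(eps):
    # Horner form of -2*eps - (2/3)*eps**3 - (2/5)*eps**5 - (2/7)*eps**7,
    # mirroring Alternative 2 without fused multiply-add
    e2 = eps * eps
    return eps * (e2 * (e2 * (e2 * (-2.0 / 7.0) - 0.4) - 2.0 / 3.0) - 2.0)

eps = 0.01
ref = math.log1p(-eps) - math.log1p(eps)  # accurate reference
print(abs(poly(eps) - ref) / abs(ref))    # tiny: next dropped term is ~(2/9)*eps**9
```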

Alternative 3: 99.7% accurate, 3.6× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(\mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.4, -0.6666666666666666\right), \varepsilon \cdot \left(\varepsilon \cdot \varepsilon\right), \varepsilon \cdot -2\right) \end{array} \]
(FPCore (eps)
 :precision binary64
 (fma
  (fma (* eps eps) -0.4 -0.6666666666666666)
  (* eps (* eps eps))
  (* eps -2.0)))
double code(double eps) {
	return fma(fma((eps * eps), -0.4, -0.6666666666666666), (eps * (eps * eps)), (eps * -2.0));
}
function code(eps)
	return fma(fma(Float64(eps * eps), -0.4, -0.6666666666666666), Float64(eps * Float64(eps * eps)), Float64(eps * -2.0))
end
code[eps_] := N[(N[(N[(eps * eps), $MachinePrecision] * -0.4 + -0.6666666666666666), $MachinePrecision] * N[(eps * N[(eps * eps), $MachinePrecision]), $MachinePrecision] + N[(eps * -2.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(\mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.4, -0.6666666666666666\right), \varepsilon \cdot \left(\varepsilon \cdot \varepsilon\right), \varepsilon \cdot -2\right)
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-+.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{1 + \varepsilon}}\right) \]
    2. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\varepsilon + 1}}\right) \]
    3. flip-+N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\varepsilon - 1}}}\right) \]
    4. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\varepsilon + \left(\mathsf{neg}\left(1\right)\right)}}}\right) \]
    5. remove-double-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(\varepsilon\right)\right)\right)\right)} + \left(\mathsf{neg}\left(1\right)\right)}}\right) \]
    6. distribute-neg-inN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\color{blue}{\mathsf{neg}\left(\left(\left(\mathsf{neg}\left(\varepsilon\right)\right) + 1\right)\right)}}}\right) \]
    7. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 + \left(\mathsf{neg}\left(\varepsilon\right)\right)\right)}\right)}}\right) \]
    8. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 - \varepsilon\right)}\right)}}\right) \]
    9. lift--.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\color{blue}{\left(1 - \varepsilon\right)}\right)}}\right) \]
    10. lower-/.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\varepsilon \cdot \varepsilon - 1 \cdot 1}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}}\right) \]
    11. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon - \color{blue}{1}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    12. sub-negN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\color{blue}{\varepsilon \cdot \varepsilon + \left(\mathsf{neg}\left(1\right)\right)}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    13. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\varepsilon \cdot \varepsilon + \color{blue}{-1}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    14. lower-fma.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\color{blue}{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}}{\mathsf{neg}\left(\left(1 - \varepsilon\right)\right)}}\right) \]
    15. neg-sub0N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{0 - \left(1 - \varepsilon\right)}}}\right) \]
    16. lift--.f64N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{0 - \color{blue}{\left(1 - \varepsilon\right)}}}\right) \]
    17. associate--r-N/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\left(0 - 1\right) + \varepsilon}}}\right) \]
    18. metadata-evalN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{-1} + \varepsilon}}\right) \]
    19. +-commutativeN/A

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\varepsilon + -1}}}\right) \]
    20. lower-+.f648.0

      \[\leadsto \log \left(\frac{1 - \varepsilon}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\color{blue}{\varepsilon + -1}}}\right) \]
  4. Applied rewrites8.0%

    \[\leadsto \log \left(\frac{1 - \varepsilon}{\color{blue}{\frac{\mathsf{fma}\left(\varepsilon, \varepsilon, -1\right)}{\varepsilon + -1}}}\right) \]
  5. Taylor expanded in eps around 0

    \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) - 2\right)} \]
  6. Step-by-step derivation
    1. lower-*.f64N/A

      \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) - 2\right)} \]
    2. sub-negN/A

      \[\leadsto \varepsilon \cdot \color{blue}{\left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) + \left(\mathsf{neg}\left(2\right)\right)\right)} \]
    3. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) + \color{blue}{-2}\right) \]
    4. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \color{blue}{\mathsf{fma}\left({\varepsilon}^{2}, \frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}, -2\right)} \]
    5. unpow2N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\color{blue}{\varepsilon \cdot \varepsilon}, \frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}, -2\right) \]
    6. lower-*.f64N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\color{blue}{\varepsilon \cdot \varepsilon}, \frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}, -2\right) \]
    7. sub-negN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \color{blue}{\frac{-2}{5} \cdot {\varepsilon}^{2} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right)}, -2\right) \]
    8. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \color{blue}{{\varepsilon}^{2} \cdot \frac{-2}{5}} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right), -2\right) \]
    9. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, {\varepsilon}^{2} \cdot \frac{-2}{5} + \color{blue}{\frac{-2}{3}}, -2\right) \]
    10. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \color{blue}{\mathsf{fma}\left({\varepsilon}^{2}, \frac{-2}{5}, \frac{-2}{3}\right)}, -2\right) \]
    11. unpow2N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \mathsf{fma}\left(\color{blue}{\varepsilon \cdot \varepsilon}, \frac{-2}{5}, \frac{-2}{3}\right), -2\right) \]
    12. lower-*.f6499.6

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \mathsf{fma}\left(\color{blue}{\varepsilon \cdot \varepsilon}, -0.4, -0.6666666666666666\right), -2\right) \]
  7. Applied rewrites99.6%

    \[\leadsto \color{blue}{\varepsilon \cdot \mathsf{fma}\left(\varepsilon \cdot \varepsilon, \mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.4, -0.6666666666666666\right), -2\right)} \]
  8. Step-by-step derivation
    1. Applied rewrites99.6%

      \[\leadsto \mathsf{fma}\left(\mathsf{fma}\left(\varepsilon \cdot \varepsilon, -0.4, -0.6666666666666666\right), \color{blue}{\varepsilon \cdot \left(\varepsilon \cdot \varepsilon\right)}, \varepsilon \cdot -2\right) \]
    2. Add Preprocessing

Alternative 4: 99.7% accurate, 4.2× speedup

\[\begin{array}{l} \\ \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.4, -0.6666666666666666\right), -2\right) \end{array} \]
(FPCore (eps)
 :precision binary64
 (* eps (fma eps (* eps (fma eps (* eps -0.4) -0.6666666666666666)) -2.0)))
double code(double eps) {
	return eps * fma(eps, (eps * fma(eps, (eps * -0.4), -0.6666666666666666)), -2.0);
}
function code(eps)
	return Float64(eps * fma(eps, Float64(eps * fma(eps, Float64(eps * -0.4), -0.6666666666666666)), -2.0))
end
code[eps_] := N[(eps * N[(eps * N[(eps * N[(eps * N[(eps * -0.4), $MachinePrecision] + -0.6666666666666666), $MachinePrecision]), $MachinePrecision] + -2.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.4, -0.6666666666666666\right), -2\right)
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in eps around 0

    \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) - 2\right)} \]
  4. Step-by-step derivation
    1. lower-*.f64N/A

      \[\leadsto \color{blue}{\varepsilon \cdot \left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) - 2\right)} \]
    2. sub-negN/A

      \[\leadsto \varepsilon \cdot \color{blue}{\left({\varepsilon}^{2} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) + \left(\mathsf{neg}\left(2\right)\right)\right)} \]
    3. unpow2N/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(\varepsilon \cdot \varepsilon\right)} \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    4. associate-*l*N/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\varepsilon \cdot \left(\varepsilon \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right)\right)} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    5. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \left(\varepsilon \cdot \color{blue}{\left(\left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) \cdot \varepsilon\right)} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    6. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \left(\varepsilon \cdot \left(\left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) \cdot \varepsilon\right) + \color{blue}{-2}\right) \]
    7. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \color{blue}{\mathsf{fma}\left(\varepsilon, \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right) \cdot \varepsilon, -2\right)} \]
    8. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right)}, -2\right) \]
    9. lower-*.f64N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot \left(\frac{-2}{5} \cdot {\varepsilon}^{2} - \frac{2}{3}\right)}, -2\right) \]
    10. sub-negN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \color{blue}{\left(\frac{-2}{5} \cdot {\varepsilon}^{2} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right)\right)}, -2\right) \]
    11. unpow2N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \left(\frac{-2}{5} \cdot \color{blue}{\left(\varepsilon \cdot \varepsilon\right)} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right)\right), -2\right) \]
    12. associate-*r*N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \left(\color{blue}{\left(\frac{-2}{5} \cdot \varepsilon\right) \cdot \varepsilon} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right)\right), -2\right) \]
    13. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \left(\color{blue}{\varepsilon \cdot \left(\frac{-2}{5} \cdot \varepsilon\right)} + \left(\mathsf{neg}\left(\frac{2}{3}\right)\right)\right), -2\right) \]
    14. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \left(\varepsilon \cdot \left(\frac{-2}{5} \cdot \varepsilon\right) + \color{blue}{\frac{-2}{3}}\right), -2\right) \]
    15. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \color{blue}{\mathsf{fma}\left(\varepsilon, \frac{-2}{5} \cdot \varepsilon, \frac{-2}{3}\right)}, -2\right) \]
    16. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot \frac{-2}{5}}, \frac{-2}{3}\right), -2\right) \]
    17. lower-*.f6499.6

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot -0.4}, -0.6666666666666666\right), -2\right) \]
  5. Applied rewrites99.6%

    \[\leadsto \color{blue}{\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.4, -0.6666666666666666\right), -2\right)} \]
  6. Add Preprocessing

Alternative 5: 99.5% accurate, 6.9× speedup

\[\begin{array}{l} \\ \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.6666666666666666, -2\right) \end{array} \]
(FPCore (eps)
 :precision binary64
 (* eps (fma eps (* eps -0.6666666666666666) -2.0)))
double code(double eps) {
	return eps * fma(eps, (eps * -0.6666666666666666), -2.0);
}
function code(eps)
	return Float64(eps * fma(eps, Float64(eps * -0.6666666666666666), -2.0))
end
code[eps_] := N[(eps * N[(eps * N[(eps * -0.6666666666666666), $MachinePrecision] + -2.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.6666666666666666, -2\right)
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in eps around 0

    \[\leadsto \color{blue}{\varepsilon \cdot \left(\frac{-2}{3} \cdot {\varepsilon}^{2} - 2\right)} \]
  4. Step-by-step derivation
    1. lower-*.f64N/A

      \[\leadsto \color{blue}{\varepsilon \cdot \left(\frac{-2}{3} \cdot {\varepsilon}^{2} - 2\right)} \]
    2. sub-negN/A

      \[\leadsto \varepsilon \cdot \color{blue}{\left(\frac{-2}{3} \cdot {\varepsilon}^{2} + \left(\mathsf{neg}\left(2\right)\right)\right)} \]
    3. unpow2N/A

      \[\leadsto \varepsilon \cdot \left(\frac{-2}{3} \cdot \color{blue}{\left(\varepsilon \cdot \varepsilon\right)} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    4. associate-*r*N/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(\frac{-2}{3} \cdot \varepsilon\right) \cdot \varepsilon} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    5. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\varepsilon \cdot \left(\frac{-2}{3} \cdot \varepsilon\right)} + \left(\mathsf{neg}\left(2\right)\right)\right) \]
    6. metadata-evalN/A

      \[\leadsto \varepsilon \cdot \left(\varepsilon \cdot \left(\frac{-2}{3} \cdot \varepsilon\right) + \color{blue}{-2}\right) \]
    7. lower-fma.f64N/A

      \[\leadsto \varepsilon \cdot \color{blue}{\mathsf{fma}\left(\varepsilon, \frac{-2}{3} \cdot \varepsilon, -2\right)} \]
    8. *-commutativeN/A

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot \frac{-2}{3}}, -2\right) \]
    9. lower-*.f6499.5

      \[\leadsto \varepsilon \cdot \mathsf{fma}\left(\varepsilon, \color{blue}{\varepsilon \cdot -0.6666666666666666}, -2\right) \]
  5. Applied rewrites99.5%

    \[\leadsto \color{blue}{\varepsilon \cdot \mathsf{fma}\left(\varepsilon, \varepsilon \cdot -0.6666666666666666, -2\right)} \]
  6. Add Preprocessing

Alternative 6: 98.9% accurate, 19.7× speedup

\[\begin{array}{l} \\ \varepsilon \cdot -2 \end{array} \]
(FPCore (eps) :precision binary64 (* eps -2.0))
double code(double eps) {
	return eps * -2.0;
}
real(8) function code(eps)
    real(8), intent (in) :: eps
    code = eps * (-2.0d0)
end function
public static double code(double eps) {
	return eps * -2.0;
}
def code(eps):
	return eps * -2.0
function code(eps)
	return Float64(eps * -2.0)
end
function tmp = code(eps)
	tmp = eps * -2.0;
end
code[eps_] := N[(eps * -2.0), $MachinePrecision]
\begin{array}{l}

\\
\varepsilon \cdot -2
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in eps around 0

    \[\leadsto \color{blue}{-2 \cdot \varepsilon} \]
  4. Step-by-step derivation
    1. lower-*.f6499.1

      \[\leadsto \color{blue}{-2 \cdot \varepsilon} \]
  5. Applied rewrites99.1%

    \[\leadsto \color{blue}{-2 \cdot \varepsilon} \]
  6. Final simplification99.1%

    \[\leadsto \varepsilon \cdot -2 \]
  7. Add Preprocessing
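Alternative 6 keeps only the leading Taylor term, -2ε; the first dropped term, -(2/3)ε³, makes its relative error roughly ε²/3, which stays small over most of the sampled range |ε| < 1. A quick check in Python (the log1p form serves as the accurate reference; this is an illustration, not part of the report):

```python
import math

def linear(eps):
    # Alternative 6: keep only the leading Taylor term
    return eps * -2.0

for eps in (0.5, 0.1, 1e-4):
    ref = math.log1p(-eps) - math.log1p(eps)  # accurate reference
    rel = abs(linear(eps) - ref) / abs(ref)
    print(eps, rel)  # relative error shrinks roughly like eps**2 / 3
```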

Alternative 7: 5.3% accurate, 118.0× speedup

\[\begin{array}{l} \\ 0 \end{array} \]
(FPCore (eps) :precision binary64 0.0)
double code(double eps) {
	return 0.0;
}
real(8) function code(eps)
    real(8), intent (in) :: eps
    code = 0.0d0
end function
public static double code(double eps) {
	return 0.0;
}
def code(eps):
	return 0.0
function code(eps)
	return 0.0
end
function tmp = code(eps)
	tmp = 0.0;
end
code[eps_] := 0.0
\begin{array}{l}

\\
0
\end{array}
Derivation
  1. Initial program 8.2%

    \[\log \left(\frac{1 - \varepsilon}{1 + \varepsilon}\right) \]
  2. Add Preprocessing
  3. Applied rewrites5.5%

    \[\leadsto \color{blue}{0} \]
  4. Add Preprocessing

Developer Target 1: 100.0% accurate, 0.6× speedup

\[\begin{array}{l} \\ \mathsf{log1p}\left(-\varepsilon\right) - \mathsf{log1p}\left(\varepsilon\right) \end{array} \]
(FPCore (eps) :precision binary64 (- (log1p (- eps)) (log1p eps)))
double code(double eps) {
	return log1p(-eps) - log1p(eps);
}
public static double code(double eps) {
	return Math.log1p(-eps) - Math.log1p(eps);
}
def code(eps):
	return math.log1p(-eps) - math.log1p(eps)
function code(eps)
	return Float64(log1p(Float64(-eps)) - log1p(eps))
end
code[eps_] := N[(N[Log[1 + (-eps)], $MachinePrecision] - N[Log[1 + eps], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{log1p}\left(-\varepsilon\right) - \mathsf{log1p}\left(\varepsilon\right)
\end{array}
    

Reproduce

herbie shell --seed 2024227
(FPCore (eps)
  :name "logq (problem 3.4.3)"
  :precision binary64
  :pre (< (fabs eps) 1.0)

  :alt
  (! :herbie-platform default (- (log1p (- eps)) (log1p eps)))

  (log (/ (- 1.0 eps) (+ 1.0 eps))))