neg log

Percentage Accurate: 100.0% → 100.0%
Time: 9.3s
Alternatives: 9
Speedup: 1.0×

Specification

\[\begin{array}{l} \\ -\log \left(\frac{1}{x} - 1\right) \end{array} \]
(FPCore (x) :precision binary64 (- (log (- (/ 1.0 x) 1.0))))
double code(double x) {
	return -log(((1.0 / x) - 1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = -log(((1.0d0 / x) - 1.0d0))
end function
public static double code(double x) {
	return -Math.log(((1.0 / x) - 1.0));
}
def code(x):
	return -math.log(((1.0 / x) - 1.0))
function code(x)
	return Float64(-log(Float64(Float64(1.0 / x) - 1.0)))
end
function tmp = code(x)
	tmp = -log(((1.0 / x) - 1.0));
end
code[x_] := (-N[Log[N[(N[(1.0 / x), $MachinePrecision] - 1.0), $MachinePrecision]], $MachinePrecision])
\begin{array}{l}

\\
-\log \left(\frac{1}{x} - 1\right)
\end{array}
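For reference, the specification -log(1/x - 1) is algebraically the logit function log(x / (1 - x)) on (0, 1). A quick Python check (the helper names `spec` and `logit` are illustrative, not from the report):

```python
import math

def spec(x):
    # Original specification: -log(1/x - 1)
    return -math.log(1.0 / x - 1.0)

def logit(x):
    # Algebraically equal form on (0, 1): log(x / (1 - x))
    return math.log(x / (1.0 - x))

# The two forms agree to within rounding for inputs in (0, 1).
for x in (0.1, 0.25, 0.5, 0.9):
    assert math.isclose(spec(x), logit(x), rel_tol=1e-12)
```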

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, each blue circle shows an alternative, and the line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ -\log \left(\frac{1}{x} - 1\right) \end{array} \]
(FPCore (x) :precision binary64 (- (log (- (/ 1.0 x) 1.0))))
double code(double x) {
	return -log(((1.0 / x) - 1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = -log(((1.0d0 / x) - 1.0d0))
end function
public static double code(double x) {
	return -Math.log(((1.0 / x) - 1.0));
}
def code(x):
	return -math.log(((1.0 / x) - 1.0))
function code(x)
	return Float64(-log(Float64(Float64(1.0 / x) - 1.0)))
end
function tmp = code(x)
	tmp = -log(((1.0 / x) - 1.0));
end
code[x_] := (-N[Log[N[(N[(1.0 / x), $MachinePrecision] - 1.0), $MachinePrecision]], $MachinePrecision])
\begin{array}{l}

\\
-\log \left(\frac{1}{x} - 1\right)
\end{array}

Alternative 1: 100.0% accurate, 0.8× speedup

\[\begin{array}{l} \\ \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - x \cdot \left(x \cdot x\right)}\right) \end{array} \]
(FPCore (x)
 :precision binary64
 (log (/ (* (+ 1.0 x) (fma x x 1.0)) (- (/ 1.0 x) (* x (* x x))))))
double code(double x) {
	return log((((1.0 + x) * fma(x, x, 1.0)) / ((1.0 / x) - (x * (x * x)))));
}
function code(x)
	return log(Float64(Float64(Float64(1.0 + x) * fma(x, x, 1.0)) / Float64(Float64(1.0 / x) - Float64(x * Float64(x * x)))))
end
code[x_] := N[Log[N[(N[(N[(1.0 + x), $MachinePrecision] * N[(x * x + 1.0), $MachinePrecision]), $MachinePrecision] / N[(N[(1.0 / x), $MachinePrecision] - N[(x * N[(x * x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - x \cdot \left(x \cdot x\right)}\right)
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Applied egg-rr 26.5%

    \[\leadsto -\log \color{blue}{\left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\left(1 + \frac{1}{x}\right) \cdot \left(1 + \frac{1}{x \cdot x}\right)}\right)} \]
  4. Taylor expanded in x around 0

    \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\color{blue}{\frac{1 + x \cdot \left(1 + x \cdot \left(1 + x\right)\right)}{{x}^{3}}}}\right)\right) \]
  5. Step-by-step derivation
    1. /-lowering-/.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\color{blue}{\frac{1 + x \cdot \left(1 + x \cdot \left(1 + x\right)\right)}{{x}^{3}}}}\right)\right) \]
    2. distribute-rgt-in N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{1 + \color{blue}{\left(1 \cdot x + \left(x \cdot \left(1 + x\right)\right) \cdot x\right)}}{{x}^{3}}}\right)\right) \]
    3. *-lft-identity N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{1 + \left(\color{blue}{x} + \left(x \cdot \left(1 + x\right)\right) \cdot x\right)}{{x}^{3}}}\right)\right) \]
    4. associate-+r+ N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(1 + x\right) + \left(x \cdot \left(1 + x\right)\right) \cdot x}}{{x}^{3}}}\right)\right) \]
    5. *-rgt-identity N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(1 + x\right) \cdot 1} + \left(x \cdot \left(1 + x\right)\right) \cdot x}{{x}^{3}}}\right)\right) \]
    6. *-commutative N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(1 + x\right) \cdot 1 + \color{blue}{\left(\left(1 + x\right) \cdot x\right)} \cdot x}{{x}^{3}}}\right)\right) \]
    7. associate-*l* N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(1 + x\right) \cdot 1 + \color{blue}{\left(1 + x\right) \cdot \left(x \cdot x\right)}}{{x}^{3}}}\right)\right) \]
    8. unpow2 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(1 + x\right) \cdot 1 + \left(1 + x\right) \cdot \color{blue}{{x}^{2}}}{{x}^{3}}}\right)\right) \]
    9. distribute-lft-out N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(1 + x\right) \cdot \left(1 + {x}^{2}\right)}}{{x}^{3}}}\right)\right) \]
    10. +-commutative N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(1 + x\right) \cdot \color{blue}{\left({x}^{2} + 1\right)}}{{x}^{3}}}\right)\right) \]
    11. *-lowering-*.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(1 + x\right) \cdot \left({x}^{2} + 1\right)}}{{x}^{3}}}\right)\right) \]
    12. +-commutative N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(x + 1\right)} \cdot \left({x}^{2} + 1\right)}{{x}^{3}}}\right)\right) \]
    13. +-lowering-+.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\color{blue}{\left(x + 1\right)} \cdot \left({x}^{2} + 1\right)}{{x}^{3}}}\right)\right) \]
    14. unpow2 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \left(\color{blue}{x \cdot x} + 1\right)}{{x}^{3}}}\right)\right) \]
    15. accelerator-lowering-fma.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \color{blue}{\mathsf{fma}\left(x, x, 1\right)}}{{x}^{3}}}\right)\right) \]
    16. cube-mult N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{x \cdot \left(x \cdot x\right)}}}\right)\right) \]
    17. unpow2 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{x \cdot \color{blue}{{x}^{2}}}}\right)\right) \]
    18. *-lowering-*.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{x \cdot {x}^{2}}}}\right)\right) \]
    19. unpow2 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{x \cdot \color{blue}{\left(x \cdot x\right)}}}\right)\right) \]
    20. *-lowering-*.f64 26.5

      \[\leadsto -\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{x \cdot \color{blue}{\left(x \cdot x\right)}}}\right) \]
  6. Simplified 26.5%

    \[\leadsto -\log \left(\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\color{blue}{\frac{\left(x + 1\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{x \cdot \left(x \cdot x\right)}}}\right) \]
  7. Step-by-step derivation
    1. neg-log N/A

      \[\leadsto \color{blue}{\log \left(\frac{1}{\frac{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}{\frac{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}{x \cdot \left(x \cdot x\right)}}}\right)} \]
    2. clear-num N/A

      \[\leadsto \log \color{blue}{\left(\frac{\frac{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}{x \cdot \left(x \cdot x\right)}}{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}\right)} \]
    3. log-lowering-log.f64 N/A

      \[\leadsto \color{blue}{\log \left(\frac{\frac{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}{x \cdot \left(x \cdot x\right)}}{\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1}\right)} \]
    4. associate-/l/ N/A

      \[\leadsto \log \color{blue}{\left(\frac{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right)} \]
    5. /-lowering-/.f64 N/A

      \[\leadsto \log \color{blue}{\left(\frac{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right)} \]
    6. *-lowering-*.f64 N/A

      \[\leadsto \log \left(\frac{\color{blue}{\left(x + 1\right) \cdot \left(x \cdot x + 1\right)}}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right) \]
    7. +-commutative N/A

      \[\leadsto \log \left(\frac{\color{blue}{\left(1 + x\right)} \cdot \left(x \cdot x + 1\right)}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right) \]
    8. +-lowering-+.f64 N/A

      \[\leadsto \log \left(\frac{\color{blue}{\left(1 + x\right)} \cdot \left(x \cdot x + 1\right)}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right) \]
    9. accelerator-lowering-fma.f64 N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \color{blue}{\mathsf{fma}\left(x, x, 1\right)}}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right) \]
    10. *-lowering-*.f64 N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{\left(\frac{1}{\left(x \cdot x\right) \cdot \left(x \cdot x\right)} + -1\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}}\right) \]
  8. Applied egg-rr 26.5%

    \[\leadsto \color{blue}{\log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\left(-1 - \frac{-1}{x \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right) \cdot \left(x \cdot \left(x \cdot x\right)\right)}\right)} \]
  9. Taylor expanded in x around 0

    \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{\frac{1 + -1 \cdot {x}^{4}}{x}}}\right) \]
  10. Step-by-step derivation
    1. mul-1-neg N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1 + \color{blue}{\left(\mathsf{neg}\left({x}^{4}\right)\right)}}{x}}\right) \]
    2. unsub-neg N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{\color{blue}{1 - {x}^{4}}}{x}}\right) \]
    3. div-sub N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{\frac{1}{x} - \frac{{x}^{4}}{x}}}\right) \]
    4. metadata-eval N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{\color{blue}{\left(3 + 1\right)}}}{x}}\right) \]
    5. pow-plus N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{\color{blue}{{x}^{3} \cdot x}}{x}}\right) \]
    6. *-lft-identity N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \color{blue}{\left(1 \cdot x\right)}}{x}}\right) \]
    7. lft-mult-inverse N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \left(\color{blue}{\left(\frac{1}{{x}^{2}} \cdot {x}^{2}\right)} \cdot x\right)}{x}}\right) \]
    8. associate-*r* N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \color{blue}{\left(\frac{1}{{x}^{2}} \cdot \left({x}^{2} \cdot x\right)\right)}}{x}}\right) \]
    9. unpow2 N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \left(\frac{1}{{x}^{2}} \cdot \left(\color{blue}{\left(x \cdot x\right)} \cdot x\right)\right)}{x}}\right) \]
    10. unpow3 N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \left(\frac{1}{{x}^{2}} \cdot \color{blue}{{x}^{3}}\right)}{x}}\right) \]
    11. associate-*l/ N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \color{blue}{\frac{1 \cdot {x}^{3}}{{x}^{2}}}}{x}}\right) \]
    12. *-lft-identity N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot \frac{\color{blue}{{x}^{3}}}{{x}^{2}}}{x}}\right) \]
    13. associate-/l* N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{\color{blue}{\frac{{x}^{3} \cdot {x}^{3}}{{x}^{2}}}}{x}}\right) \]
    14. pow-sqr N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{\frac{\color{blue}{{x}^{\left(2 \cdot 3\right)}}}{{x}^{2}}}{x}}\right) \]
    15. metadata-eval N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{\frac{{x}^{\color{blue}{6}}}{{x}^{2}}}{x}}\right) \]
    16. associate-/l/ N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \color{blue}{\frac{{x}^{6}}{x \cdot {x}^{2}}}}\right) \]
    17. metadata-eval N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{\color{blue}{\left(2 \cdot 3\right)}}}{x \cdot {x}^{2}}}\right) \]
    18. pow-sqr N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{\color{blue}{{x}^{3} \cdot {x}^{3}}}{x \cdot {x}^{2}}}\right) \]
    19. unpow2 N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot {x}^{3}}{x \cdot \color{blue}{\left(x \cdot x\right)}}}\right) \]
    20. cube-mult N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \frac{{x}^{3} \cdot {x}^{3}}{\color{blue}{{x}^{3}}}}\right) \]
    21. associate-/l* N/A

      \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\frac{1}{x} - \color{blue}{{x}^{3} \cdot \frac{{x}^{3}}{{x}^{3}}}}\right) \]
  11. Simplified 100.0%

    \[\leadsto \log \left(\frac{\left(1 + x\right) \cdot \mathsf{fma}\left(x, x, 1\right)}{\color{blue}{\frac{1}{x} - x \cdot \left(x \cdot x\right)}}\right) \]
  12. Add Preprocessing
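The derivation above can be spot-checked numerically in Python. This is a sketch, not part of the report: fma(x, x, 1.0) is approximated here by a plain x * x + 1.0 (math.fma only exists from Python 3.13 onward), which changes the rounding but not the algebraic value.

```python
import math

def original(x):
    # The initial program: -log(1/x - 1)
    return -math.log(1.0 / x - 1.0)

def alternative1(x):
    # Alternative 1: log(((1 + x) * fma(x, x, 1)) / (1/x - x * (x * x))),
    # with the fma written as an unfused multiply-add.
    num = (1.0 + x) * (x * x + 1.0)
    den = 1.0 / x - x * (x * x)
    return math.log(num / den)

# Both forms evaluate log(x / (1 - x)), so they agree closely on (0, 1).
for x in (1e-3, 0.3, 0.7, 1.0 - 1e-3):
    assert math.isclose(original(x), alternative1(x), rel_tol=1e-9)
```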

Alternative 2: 7.3% accurate, 0.6× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := x \cdot \left(x \cdot x\right)\\ \mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\ \;\;\;\;x \cdot \left(x \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, t\_0 \cdot \left(t\_0 \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, \left(x \cdot x\right) \cdot -0.5, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (let* ((t_0 (* x (* x x))))
   (if (<= (- (log (+ (/ 1.0 x) -1.0))) -250.0)
     (* x (* x 0.5))
     (/
      (fma x (* x x) (* t_0 (* t_0 0.125)))
      (fma x (* (* x x) -0.5) (* (* x x) (* (* x x) 0.25)))))))
double code(double x) {
	double t_0 = x * (x * x);
	double tmp;
	if (-log(((1.0 / x) + -1.0)) <= -250.0) {
		tmp = x * (x * 0.5);
	} else {
		tmp = fma(x, (x * x), (t_0 * (t_0 * 0.125))) / fma(x, ((x * x) * -0.5), ((x * x) * ((x * x) * 0.25)));
	}
	return tmp;
}
function code(x)
	t_0 = Float64(x * Float64(x * x))
	tmp = 0.0
	if (Float64(-log(Float64(Float64(1.0 / x) + -1.0))) <= -250.0)
		tmp = Float64(x * Float64(x * 0.5));
	else
		tmp = Float64(fma(x, Float64(x * x), Float64(t_0 * Float64(t_0 * 0.125))) / fma(x, Float64(Float64(x * x) * -0.5), Float64(Float64(x * x) * Float64(Float64(x * x) * 0.25))));
	end
	return tmp
end
code[x_] := Block[{t$95$0 = N[(x * N[(x * x), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[(-N[Log[N[(N[(1.0 / x), $MachinePrecision] + -1.0), $MachinePrecision]], $MachinePrecision]), -250.0], N[(x * N[(x * 0.5), $MachinePrecision]), $MachinePrecision], N[(N[(x * N[(x * x), $MachinePrecision] + N[(t$95$0 * N[(t$95$0 * 0.125), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(x * N[(N[(x * x), $MachinePrecision] * -0.5), $MachinePrecision] + N[(N[(x * x), $MachinePrecision] * N[(N[(x * x), $MachinePrecision] * 0.25), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := x \cdot \left(x \cdot x\right)\\
\mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\
\;\;\;\;x \cdot \left(x \cdot 0.5\right)\\

\mathbf{else}:\\
\;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, t\_0 \cdot \left(t\_0 \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, \left(x \cdot x\right) \cdot -0.5, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (neg.f64 (log.f64 (-.f64 (/.f64 #s(literal 1 binary64) x) #s(literal 1 binary64)))) < -250

    1. Initial program 100.0%

      \[-\log \left(\frac{1}{x} - 1\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
    4. Step-by-step derivation
      1. cancel-sign-sub-inv N/A

        \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
      2. metadata-eval N/A

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
      3. *-lft-identity N/A

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
      4. accelerator-lowering-fma.f64 N/A

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
      5. +-commutative N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
      6. *-commutative N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
      7. accelerator-lowering-fma.f64 N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
      8. log-lowering-log.f64 100.0

        \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
    5. Simplified 100.0%

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
    6. Taylor expanded in x around inf

      \[\leadsto \color{blue}{\frac{1}{2} \cdot {x}^{2}} \]
    7. Step-by-step derivation
      1. unpow2 N/A

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(x \cdot x\right)} \]
      2. associate-*r* N/A

        \[\leadsto \color{blue}{\left(\frac{1}{2} \cdot x\right) \cdot x} \]
      3. *-commutative N/A

        \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
      4. *-lowering-*.f64 N/A

        \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
      5. *-commutative N/A

        \[\leadsto x \cdot \color{blue}{\left(x \cdot \frac{1}{2}\right)} \]
      6. *-lowering-*.f64 3.0

        \[\leadsto x \cdot \color{blue}{\left(x \cdot 0.5\right)} \]
    8. Simplified 3.0%

      \[\leadsto \color{blue}{x \cdot \left(x \cdot 0.5\right)} \]

    if -250 < (neg.f64 (log.f64 (-.f64 (/.f64 #s(literal 1 binary64) x) #s(literal 1 binary64))))

    1. Initial program 100.0%

      \[-\log \left(\frac{1}{x} - 1\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
    4. Step-by-step derivation
      1. cancel-sign-sub-inv N/A

        \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
      2. metadata-eval N/A

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
      3. *-lft-identity N/A

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
      4. accelerator-lowering-fma.f64 N/A

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
      5. +-commutative N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
      6. *-commutative N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
      7. accelerator-lowering-fma.f64 N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
      8. log-lowering-log.f64 97.8

        \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
    5. Simplified 97.8%

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
    6. Taylor expanded in x around inf

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{x}\right)} \]
    7. Step-by-step derivation
      1. distribute-lft-in N/A

        \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{2} + {x}^{2} \cdot \frac{1}{x}} \]
      2. unpow2 N/A

        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{2} + {x}^{2} \cdot \frac{1}{x} \]
      3. associate-*r* N/A

        \[\leadsto \color{blue}{x \cdot \left(x \cdot \frac{1}{2}\right)} + {x}^{2} \cdot \frac{1}{x} \]
      4. *-commutative N/A

        \[\leadsto x \cdot \color{blue}{\left(\frac{1}{2} \cdot x\right)} + {x}^{2} \cdot \frac{1}{x} \]
      5. unpow2 N/A

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{x} \]
      6. associate-*l* N/A

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{x \cdot \left(x \cdot \frac{1}{x}\right)} \]
      7. rgt-mult-inverse N/A

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + x \cdot \color{blue}{1} \]
      8. *-rgt-identity N/A

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{x} \]
      9. accelerator-lowering-fma.f64 N/A

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, \frac{1}{2} \cdot x, x\right)} \]
      10. *-commutative N/A

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}}, x\right) \]
      11. *-lowering-*.f64 1.9

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot 0.5}, x\right) \]
    8. Simplified 1.9%

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, x \cdot 0.5, x\right)} \]
    9. Step-by-step derivation
      1. flip3-+ N/A

        \[\leadsto \color{blue}{\frac{{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right)}^{3} + {x}^{3}}{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) + \left(x \cdot x - \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot x\right)}} \]
      2. /-lowering-/.f64 N/A

        \[\leadsto \color{blue}{\frac{{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right)}^{3} + {x}^{3}}{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) + \left(x \cdot x - \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot x\right)}} \]
    10. Applied egg-rr 1.9%

      \[\leadsto \color{blue}{\frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, x - x \cdot \left(x \cdot 0.5\right), \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)}} \]
    11. Taylor expanded in x around inf

      \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot \frac{1}{8}\right)\right)}{\mathsf{fma}\left(x, \color{blue}{\frac{-1}{2} \cdot {x}^{2}}, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot \frac{1}{4}\right)\right)} \]
    12. Step-by-step derivation
      1. *-commutative N/A

        \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot \frac{1}{8}\right)\right)}{\mathsf{fma}\left(x, \color{blue}{{x}^{2} \cdot \frac{-1}{2}}, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot \frac{1}{4}\right)\right)} \]
      2. *-lowering-*.f64 N/A

        \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot \frac{1}{8}\right)\right)}{\mathsf{fma}\left(x, \color{blue}{{x}^{2} \cdot \frac{-1}{2}}, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot \frac{1}{4}\right)\right)} \]
      3. unpow2 N/A

        \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot \frac{1}{8}\right)\right)}{\mathsf{fma}\left(x, \color{blue}{\left(x \cdot x\right)} \cdot \frac{-1}{2}, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot \frac{1}{4}\right)\right)} \]
      4. *-lowering-*.f64 14.9

        \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, \color{blue}{\left(x \cdot x\right)} \cdot -0.5, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)} \]
    13. Simplified 14.9%

      \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, \color{blue}{\left(x \cdot x\right) \cdot -0.5}, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 7.6%

    \[\leadsto \begin{array}{l} \mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\ \;\;\;\;x \cdot \left(x \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, \left(x \cdot x\right) \cdot -0.5, \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)}\\ \end{array} \]
  5. Add Preprocessing
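The report lists C, Julia, and Mathematica for this alternative but no Python. A hypothetical Python rendering of the C code above, with fma(a, b, c) written as a * b + c (math.fma needs Python 3.13+), would look like:

```python
import math

def alternative2(x):
    # Sketch of Alternative 2: a two-regime program. The fused multiply-adds
    # are approximated by unfused a * b + c, which keeps the structure but
    # loses the single-rounding behavior of fma.
    t_0 = x * (x * x)
    if -math.log(1.0 / x + -1.0) <= -250.0:
        # Regime for very small x
        return x * (x * 0.5)
    # General regime
    num = x * (x * x) + t_0 * (t_0 * 0.125)
    den = x * ((x * x) * -0.5) + (x * x) * ((x * x) * 0.25)
    return num / den
```

Per the report, this alternative is only 7.3% accurate, so the sketch is shown for structure rather than as a drop-in replacement.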

Alternative 3: 6.3% accurate, 0.6× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := x \cdot \left(x \cdot x\right)\\ \mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\ \;\;\;\;x \cdot \left(x \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, t\_0 \cdot \left(t\_0 \cdot 0.125\right)\right)}{\left(x \cdot t\_0\right) \cdot \left(0.25 + \frac{-0.5}{x}\right)}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (let* ((t_0 (* x (* x x))))
   (if (<= (- (log (+ (/ 1.0 x) -1.0))) -250.0)
     (* x (* x 0.5))
     (/
      (fma x (* x x) (* t_0 (* t_0 0.125)))
      (* (* x t_0) (+ 0.25 (/ -0.5 x)))))))
double code(double x) {
	double t_0 = x * (x * x);
	double tmp;
	if (-log(((1.0 / x) + -1.0)) <= -250.0) {
		tmp = x * (x * 0.5);
	} else {
		tmp = fma(x, (x * x), (t_0 * (t_0 * 0.125))) / ((x * t_0) * (0.25 + (-0.5 / x)));
	}
	return tmp;
}
function code(x)
	t_0 = Float64(x * Float64(x * x))
	tmp = 0.0
	if (Float64(-log(Float64(Float64(1.0 / x) + -1.0))) <= -250.0)
		tmp = Float64(x * Float64(x * 0.5));
	else
		tmp = Float64(fma(x, Float64(x * x), Float64(t_0 * Float64(t_0 * 0.125))) / Float64(Float64(x * t_0) * Float64(0.25 + Float64(-0.5 / x))));
	end
	return tmp
end
code[x_] := Block[{t$95$0 = N[(x * N[(x * x), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[(-N[Log[N[(N[(1.0 / x), $MachinePrecision] + -1.0), $MachinePrecision]], $MachinePrecision]), -250.0], N[(x * N[(x * 0.5), $MachinePrecision]), $MachinePrecision], N[(N[(x * N[(x * x), $MachinePrecision] + N[(t$95$0 * N[(t$95$0 * 0.125), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / N[(N[(x * t$95$0), $MachinePrecision] * N[(0.25 + N[(-0.5 / x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := x \cdot \left(x \cdot x\right)\\
\mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\
\;\;\;\;x \cdot \left(x \cdot 0.5\right)\\

\mathbf{else}:\\
\;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, t\_0 \cdot \left(t\_0 \cdot 0.125\right)\right)}{\left(x \cdot t\_0\right) \cdot \left(0.25 + \frac{-0.5}{x}\right)}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if (neg.f64 (log.f64 (-.f64 (/.f64 #s(literal 1 binary64) x) #s(literal 1 binary64)))) < -250

    1. Initial program 100.0%

      \[-\log \left(\frac{1}{x} - 1\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
    4. Step-by-step derivation
      1. cancel-sign-sub-inv N/A

        \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
      2. metadata-eval (N/A)

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
      3. *-lft-identity (N/A)

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
      4. accelerator-lowering-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
      5. +-commutative (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
      6. *-commutative (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
      7. accelerator-lowering-fma.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
      8. log-lowering-log.f64 (100.0%)

        \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
    5. Simplified (100.0%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
    6. Taylor expanded in x around inf

      \[\leadsto \color{blue}{\frac{1}{2} \cdot {x}^{2}} \]
    7. Step-by-step derivation
      1. unpow2 (N/A)

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(x \cdot x\right)} \]
      2. associate-*r* (N/A)

        \[\leadsto \color{blue}{\left(\frac{1}{2} \cdot x\right) \cdot x} \]
      3. *-commutative (N/A)

        \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
      4. *-lowering-*.f64 (N/A)

        \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
      5. *-commutative (N/A)

        \[\leadsto x \cdot \color{blue}{\left(x \cdot \frac{1}{2}\right)} \]
      6. *-lowering-*.f64 (3.0%)

        \[\leadsto x \cdot \color{blue}{\left(x \cdot 0.5\right)} \]
    8. Simplified (3.0%)

      \[\leadsto \color{blue}{x \cdot \left(x \cdot 0.5\right)} \]

    if -250 < (neg.f64 (log.f64 (-.f64 (/.f64 #s(literal 1 binary64) x) #s(literal 1 binary64))))

    1. Initial program 100.0%

      \[-\log \left(\frac{1}{x} - 1\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
    4. Step-by-step derivation
      1. cancel-sign-sub-inv (N/A)

        \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
      2. metadata-eval (N/A)

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
      3. *-lft-identity (N/A)

        \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
      4. accelerator-lowering-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
      5. +-commutative (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
      6. *-commutative (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
      7. accelerator-lowering-fma.f64 (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
      8. log-lowering-log.f64 (97.8%)

        \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
    5. Simplified (97.8%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
    6. Taylor expanded in x around inf

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{2} + \frac{1}{x}\right)} \]
    7. Step-by-step derivation
      1. distribute-lft-in (N/A)

        \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{2} + {x}^{2} \cdot \frac{1}{x}} \]
      2. unpow2 (N/A)

        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{2} + {x}^{2} \cdot \frac{1}{x} \]
      3. associate-*r* (N/A)

        \[\leadsto \color{blue}{x \cdot \left(x \cdot \frac{1}{2}\right)} + {x}^{2} \cdot \frac{1}{x} \]
      4. *-commutative (N/A)

        \[\leadsto x \cdot \color{blue}{\left(\frac{1}{2} \cdot x\right)} + {x}^{2} \cdot \frac{1}{x} \]
      5. unpow2 (N/A)

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{x} \]
      6. associate-*l* (N/A)

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{x \cdot \left(x \cdot \frac{1}{x}\right)} \]
      7. rgt-mult-inverse (N/A)

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + x \cdot \color{blue}{1} \]
      8. *-rgt-identity (N/A)

        \[\leadsto x \cdot \left(\frac{1}{2} \cdot x\right) + \color{blue}{x} \]
      9. accelerator-lowering-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left(x, \frac{1}{2} \cdot x, x\right)} \]
      10. *-commutative (N/A)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}}, x\right) \]
      11. *-lowering-*.f64 (1.9%)

        \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot 0.5}, x\right) \]
    8. Simplified (1.9%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, x \cdot 0.5, x\right)} \]
    9. Step-by-step derivation
      1. flip3-+ (N/A)

        \[\leadsto \color{blue}{\frac{{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right)}^{3} + {x}^{3}}{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) + \left(x \cdot x - \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot x\right)}} \]
      2. /-lowering-/.f64 (N/A)

        \[\leadsto \color{blue}{\frac{{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right)}^{3} + {x}^{3}}{\left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) + \left(x \cdot x - \left(x \cdot \left(x \cdot \frac{1}{2}\right)\right) \cdot x\right)}} \]
    10. Applied egg-rr (1.9%)

      \[\leadsto \color{blue}{\frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\mathsf{fma}\left(x, x - x \cdot \left(x \cdot 0.5\right), \left(x \cdot x\right) \cdot \left(\left(x \cdot x\right) \cdot 0.25\right)\right)}} \]
    11. Taylor expanded in x around inf

      \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot \frac{1}{8}\right)\right)}{\color{blue}{{x}^{4} \cdot \left(\frac{1}{4} - \frac{1}{2} \cdot \frac{1}{x}\right)}} \]
    12. Simplified (11.9%)

      \[\leadsto \frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\color{blue}{\left(x \cdot \left(x \cdot \left(x \cdot x\right)\right)\right) \cdot \left(0.25 + \frac{-0.5}{x}\right)}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (6.4%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;-\log \left(\frac{1}{x} + -1\right) \leq -250:\\ \;\;\;\;x \cdot \left(x \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(x, x \cdot x, \left(x \cdot \left(x \cdot x\right)\right) \cdot \left(\left(x \cdot \left(x \cdot x\right)\right) \cdot 0.125\right)\right)}{\left(x \cdot \left(x \cdot \left(x \cdot x\right)\right)\right) \cdot \left(0.25 + \frac{-0.5}{x}\right)}\\ \end{array} \]
  5. Add Preprocessing
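The recombined two-regime program above can be transcribed as a short Python sketch. This is an illustrative transcription, not part of the report's generated code: `a * b + c` stands in for `fma(a, b, c)` (Python's `math.fma` only exists from 3.13), so the else branch carries one extra rounding compared with the FPCore.

```python
import math

def two_regime(x):
    # Transcription of the recombined program above, for 0 < x < 1.
    # a * b + c approximates fma(a, b, c) with an extra rounding.
    t_0 = x * (x * x)
    if -math.log((1.0 / x) + -1.0) <= -250.0:
        return x * (x * 0.5)
    num = x * (x * x) + t_0 * (t_0 * 0.125)
    den = (x * t_0) * (0.25 + (-0.5 / x))
    return num / den
```

For extremely small inputs the branch condition selects the `x * (x * 0.5)` regime; everywhere else the rational form is used.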

Alternative 4: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ -\log \left(\frac{1}{x} + -1\right) \end{array} \]
(FPCore (x) :precision binary64 (- (log (+ (/ 1.0 x) -1.0))))
double code(double x) {
	return -log(((1.0 / x) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = -log(((1.0d0 / x) + (-1.0d0)))
end function
public static double code(double x) {
	return -Math.log(((1.0 / x) + -1.0));
}
def code(x):
	return -math.log(((1.0 / x) + -1.0))
function code(x)
	return Float64(-log(Float64(Float64(1.0 / x) + -1.0)))
end
function tmp = code(x)
	tmp = -log(((1.0 / x) + -1.0));
end
code[x_] := (-N[Log[N[(N[(1.0 / x), $MachinePrecision] + -1.0), $MachinePrecision]], $MachinePrecision])
\begin{array}{l}

\\
-\log \left(\frac{1}{x} + -1\right)
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Final simplification (100.0%)

    \[\leadsto -\log \left(\frac{1}{x} + -1\right) \]
  4. Add Preprocessing
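Alternative 4 only rewrites the subtraction of 1 as an addition of −1, which rounds identically in binary64, so accuracy is unchanged. A quick illustrative check (not part of the report):

```python
import math

def original(x):
    # original program: -log(1/x - 1)
    return -math.log((1.0 / x) - 1.0)

def alternative4(x):
    # Alternative 4: -log(1/x + -1); bit-identical to the original
    return -math.log((1.0 / x) + -1.0)

for x in (0.1, 0.25, 0.5, 0.75, 0.9):
    assert original(x) == alternative4(x)
```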

Alternative 5: 99.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right) \end{array} \]
(FPCore (x) :precision binary64 (fma x (fma x 0.5 1.0) (log x)))
double code(double x) {
	return fma(x, fma(x, 0.5, 1.0), log(x));
}
function code(x)
	return fma(x, fma(x, 0.5, 1.0), log(x))
end
code[x_] := N[(x * N[(x * 0.5 + 1.0), $MachinePrecision] + N[Log[x], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
  4. Step-by-step derivation
    1. cancel-sign-sub-inv (N/A)

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
    2. metadata-eval (N/A)

      \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
    3. *-lft-identity (N/A)

      \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
    4. accelerator-lowering-fma.f64 (N/A)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
    5. +-commutative (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
    6. *-commutative (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
    7. accelerator-lowering-fma.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
    8. log-lowering-log.f64 (99.2%)

      \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
  5. Simplified (99.2%)

    \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
  6. Add Preprocessing

Alternative 6: 99.2% accurate, 1.1× speedup

\[\begin{array}{l} \\ x + \log x \end{array} \]
(FPCore (x) :precision binary64 (+ x (log x)))
double code(double x) {
	return x + log(x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x + log(x)
end function
public static double code(double x) {
	return x + Math.log(x);
}
def code(x):
	return x + math.log(x)
function code(x)
	return Float64(x + log(x))
end
function tmp = code(x)
	tmp = x + log(x);
end
code[x_] := N[(x + N[Log[x], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
x + \log x
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x - -1 \cdot \log x} \]
  4. Step-by-step derivation
    1. sub-neg (N/A)

      \[\leadsto \color{blue}{x + \left(\mathsf{neg}\left(-1 \cdot \log x\right)\right)} \]
    2. mul-1-neg (N/A)

      \[\leadsto x + \left(\mathsf{neg}\left(\color{blue}{\left(\mathsf{neg}\left(\log x\right)\right)}\right)\right) \]
    3. remove-double-neg (N/A)

      \[\leadsto x + \color{blue}{\log x} \]
    4. +-lowering-+.f64 (N/A)

      \[\leadsto \color{blue}{x + \log x} \]
    5. log-lowering-log.f64 (98.9%)

      \[\leadsto x + \color{blue}{\log x} \]
  5. Simplified (98.9%)

    \[\leadsto \color{blue}{x + \log x} \]
  6. Add Preprocessing
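The Taylor expansion behind Alternative 6 says that for small x, −log(1/x − 1) ≈ x + log x, with the leftover error of order x²/2. An illustrative sanity check at one arbitrarily chosen small sample point:

```python
import math

x = 1e-8
exact = -math.log((1.0 / x) - 1.0)
approx = x + math.log(x)  # Alternative 6
# the residual error at this point is about x**2 / 2 ~ 5e-17,
# far below the chosen tolerance
assert math.isclose(exact, approx, rel_tol=1e-12)
```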

Alternative 7: 98.3% accurate, 1.2× speedup

\[\begin{array}{l} \\ \log x \end{array} \]
(FPCore (x) :precision binary64 (log x))
double code(double x) {
	return log(x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = log(x)
end function
public static double code(double x) {
	return Math.log(x);
}
def code(x):
	return math.log(x)
function code(x)
	return log(x)
end
function tmp = code(x)
	tmp = log(x);
end
code[x_] := N[Log[x], $MachinePrecision]
\begin{array}{l}

\\
\log x
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{\log x} \]
  4. Step-by-step derivation
    1. log-lowering-log.f64 (97.5%)

      \[\leadsto \color{blue}{\log x} \]
  5. Simplified (97.5%)

    \[\leadsto \color{blue}{\log x} \]
  6. Add Preprocessing
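Alternative 7 keeps only the leading Taylor term log x, dropping even the x term, so the absolute error near 0 is about x. An illustrative check at one arbitrarily chosen point:

```python
import math

x = 1e-12
exact = -math.log((1.0 / x) - 1.0)
# Alternative 7: just log(x); the dropped x term is ~1e-12 absolute,
# tiny relative to the result (~ -27.6)
assert math.isclose(exact, math.log(x), rel_tol=1e-9)
```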

Alternative 8: 2.7% accurate, 10.6× speedup

\[\begin{array}{l} \\ x \cdot \left(x \cdot 0.5\right) \end{array} \]
(FPCore (x) :precision binary64 (* x (* x 0.5)))
double code(double x) {
	return x * (x * 0.5);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x * (x * 0.5d0)
end function
public static double code(double x) {
	return x * (x * 0.5);
}
def code(x):
	return x * (x * 0.5)
function code(x)
	return Float64(x * Float64(x * 0.5))
end
function tmp = code(x)
	tmp = x * (x * 0.5);
end
code[x_] := N[(x * N[(x * 0.5), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
x \cdot \left(x \cdot 0.5\right)
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) - -1 \cdot \log x} \]
  4. Step-by-step derivation
    1. cancel-sign-sub-inv (N/A)

      \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \left(\mathsf{neg}\left(-1\right)\right) \cdot \log x} \]
    2. metadata-eval (N/A)

      \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{1} \cdot \log x \]
    3. *-lft-identity (N/A)

      \[\leadsto x \cdot \left(1 + \frac{1}{2} \cdot x\right) + \color{blue}{\log x} \]
    4. accelerator-lowering-fma.f64 (N/A)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x, 1 + \frac{1}{2} \cdot x, \log x\right)} \]
    5. +-commutative (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{\frac{1}{2} \cdot x + 1}, \log x\right) \]
    6. *-commutative (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{x \cdot \frac{1}{2}} + 1, \log x\right) \]
    7. accelerator-lowering-fma.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(x, \color{blue}{\mathsf{fma}\left(x, \frac{1}{2}, 1\right)}, \log x\right) \]
    8. log-lowering-log.f64 (99.2%)

      \[\leadsto \mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \color{blue}{\log x}\right) \]
  5. Simplified (99.2%)

    \[\leadsto \color{blue}{\mathsf{fma}\left(x, \mathsf{fma}\left(x, 0.5, 1\right), \log x\right)} \]
  6. Taylor expanded in x around inf

    \[\leadsto \color{blue}{\frac{1}{2} \cdot {x}^{2}} \]
  7. Step-by-step derivation
    1. unpow2 (N/A)

      \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(x \cdot x\right)} \]
    2. associate-*r* (N/A)

      \[\leadsto \color{blue}{\left(\frac{1}{2} \cdot x\right) \cdot x} \]
    3. *-commutative (N/A)

      \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
    4. *-lowering-*.f64 (N/A)

      \[\leadsto \color{blue}{x \cdot \left(\frac{1}{2} \cdot x\right)} \]
    5. *-commutative (N/A)

      \[\leadsto x \cdot \color{blue}{\left(x \cdot \frac{1}{2}\right)} \]
    6. *-lowering-*.f64 (2.7%)

      \[\leadsto x \cdot \color{blue}{\left(x \cdot 0.5\right)} \]
  8. Simplified (2.7%)

    \[\leadsto \color{blue}{x \cdot \left(x \cdot 0.5\right)} \]
  9. Add Preprocessing

Alternative 9: 2.3% accurate, 117.0× speedup

\[\begin{array}{l} \\ x \end{array} \]
(FPCore (x) :precision binary64 x)
double code(double x) {
	return x;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x
end function
public static double code(double x) {
	return x;
}
def code(x):
	return x
function code(x)
	return x
end
function tmp = code(x)
	tmp = x;
end
code[x_] := x
\begin{array}{l}

\\
x
\end{array}
Derivation
  1. Initial program 100.0%

    \[-\log \left(\frac{1}{x} - 1\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x - -1 \cdot \log x} \]
  4. Step-by-step derivation
    1. sub-neg (N/A)

      \[\leadsto \color{blue}{x + \left(\mathsf{neg}\left(-1 \cdot \log x\right)\right)} \]
    2. mul-1-neg (N/A)

      \[\leadsto x + \left(\mathsf{neg}\left(\color{blue}{\left(\mathsf{neg}\left(\log x\right)\right)}\right)\right) \]
    3. remove-double-neg (N/A)

      \[\leadsto x + \color{blue}{\log x} \]
    4. +-lowering-+.f64 (N/A)

      \[\leadsto \color{blue}{x + \log x} \]
    5. log-lowering-log.f64 (98.9%)

      \[\leadsto x + \color{blue}{\log x} \]
  5. Simplified (98.9%)

    \[\leadsto \color{blue}{x + \log x} \]
  6. Taylor expanded in x around inf

    \[\leadsto \color{blue}{x} \]
  7. Step-by-step derivation
    1. Simplified (2.3%)

      \[\leadsto \color{blue}{x} \]
    2. Add Preprocessing

Reproduce

herbie shell --seed 2024205
    (FPCore (x)
      :name "neg log"
      :precision binary64
      (- (log (- (/ 1.0 x) 1.0))))