expq2 (section 3.11)

Percentage Accurate: 37.9% → 100.0%
Time: 7.2s
Alternatives: 9
Speedup: 68.3×

Specification

Precondition: \[710 > x\]

\[\frac{e^{x}}{e^{x} - 1} \]
(FPCore (x) :precision binary64 (/ (exp x) (- (exp x) 1.0)))
double code(double x) {
	return exp(x) / (exp(x) - 1.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = exp(x) / (exp(x) - 1.0d0)
end function
public static double code(double x) {
	return Math.exp(x) / (Math.exp(x) - 1.0);
}
def code(x):
	return math.exp(x) / (math.exp(x) - 1.0)
function code(x)
	return Float64(exp(x) / Float64(exp(x) - 1.0))
end
function tmp = code(x)
	tmp = exp(x) / (exp(x) - 1.0);
end
code[x_] := N[(N[Exp[x], $MachinePrecision] / N[(N[Exp[x], $MachinePrecision] - 1.0), $MachinePrecision]), $MachinePrecision]
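Near \(x = 0\) the numerator approaches 1 while the denominator \(e^{x} - 1\) is computed by subtracting two nearly equal numbers, cancelling most of the significant digits. A minimal Python sketch of the effect (the helper names `naive` and `reference` are illustrative, not part of the report):

```python
import math

def naive(x):
    # Direct transcription of the specification: exp(x) / (exp(x) - 1.0).
    return math.exp(x) / (math.exp(x) - 1.0)

def reference(x):
    # For small x the Taylor series gives exp(x)/(exp(x)-1) = 1/x + 1/2 + x/12 - ...
    return 1.0 / x + 0.5 + x / 12.0

x = 1e-8
# At x = 1e-8, exp(x) agrees with 1.0 in about the first 8 significant
# digits, so the subtraction exp(x) - 1.0 discards roughly half the
# mantissa; the quotient inherits that error.
print(naive(x), reference(x))
```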

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (named in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. Each line shows an average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

The accuracy (vertical axis) and speedup (horizontal axis) of each alternative; up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 37.9% accurate, 1.0× speedup

\[\frac{e^{x}}{e^{x} - 1} \]
(FPCore (x) :precision binary64 (/ (exp x) (- (exp x) 1.0)))
double code(double x) {
	return exp(x) / (exp(x) - 1.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = exp(x) / (exp(x) - 1.0d0)
end function
public static double code(double x) {
	return Math.exp(x) / (Math.exp(x) - 1.0);
}
def code(x):
	return math.exp(x) / (math.exp(x) - 1.0)
function code(x)
	return Float64(exp(x) / Float64(exp(x) - 1.0))
end
function tmp = code(x)
	tmp = exp(x) / (exp(x) - 1.0);
end
code[x_] := N[(N[Exp[x], $MachinePrecision] / N[(N[Exp[x], $MachinePrecision] - 1.0), $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 2.0× speedup

\[\frac{-1}{\mathsf{expm1}\left(-x\right)} \]
(FPCore (x) :precision binary64 (/ -1.0 (expm1 (- x))))
double code(double x) {
	return -1.0 / expm1(-x);
}
public static double code(double x) {
	return -1.0 / Math.expm1(-x);
}
def code(x):
	return -1.0 / math.expm1(-x)
function code(x)
	return Float64(-1.0 / expm1(Float64(-x)))
end
code[x_] := N[(-1.0 / N[(Exp[(-x)] - 1), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (39.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (39.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (4.0%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (4.0%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (4.0%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (4.0%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (4.0%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (4.0%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (4.0%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (3.9%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (3.9%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (3.9%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (3.9%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (39.1%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (39.1%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (39.1%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (39.1%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
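The derivation's key step is expm1-define: math libraries provide expm1(t) = e^t − 1 evaluated without the cancellation that the literal subtraction incurs near t = 0. A quick Python check (a sketch; the function names are illustrative) that the rewritten form agrees with the original where both are well-conditioned, and tracks the Taylor series where the original breaks down:

```python
import math

def original(x):
    return math.exp(x) / (math.exp(x) - 1.0)

def alternative1(x):
    # -1 / expm1(-x): expm1 computes exp(t) - 1 accurately even when t is tiny.
    return -1.0 / math.expm1(-x)

# For moderate x both forms are well-conditioned and agree closely.
assert math.isclose(original(1.0), alternative1(1.0), rel_tol=1e-13)

# Near zero the true value is 1/x + 1/2 + x/12 - ...; the expm1 form tracks it.
x = 1e-10
series = 1.0 / x + 0.5 + x / 12.0
assert math.isclose(alternative1(x), series, rel_tol=1e-12)
```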

Alternative 2: 91.3% accurate, 12.1× speedup

\[\frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664 - 0.16666666666666666\right)\right) + -1\right)} \]
(FPCore (x)
 :precision binary64
 (/
  -1.0
  (*
   x
   (+
    (* x (+ 0.5 (* x (- (* x 0.041666666666666664) 0.16666666666666666))))
    -1.0))))
double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666)))) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((x * (0.5d0 + (x * ((x * 0.041666666666666664d0) - 0.16666666666666666d0)))) + (-1.0d0)))
end function
public static double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666)))) + -1.0));
}
def code(x):
	return -1.0 / (x * ((x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666)))) + -1.0))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(Float64(x * Float64(0.5 + Float64(x * Float64(Float64(x * 0.041666666666666664) - 0.16666666666666666)))) + -1.0)))
end
function tmp = code(x)
	tmp = -1.0 / (x * ((x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666)))) + -1.0));
end
code[x_] := N[(-1.0 / N[(x * N[(N[(x * N[(0.5 + N[(x * N[(N[(x * 0.041666666666666664), $MachinePrecision] - 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
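The constants in this alternative are recognizable: 0.5, 0.16666… = 1/6, and 0.041666… = 1/24 are the Taylor coefficients of \(\mathsf{expm1}(-x) = -x + x^2/2 - x^3/6 + x^4/24 - \cdots\), so the denominator is the degree-4 truncation of \(\mathsf{expm1}(-x)\) evaluated in Horner form. A Python sketch (assuming the same transcription style as the listings above) comparing the truncated denominator against math.expm1:

```python
import math

def poly_denominator(x):
    # Horner evaluation of x*(x*(0.5 + x*(x/24 - 1/6)) - 1)
    # = -x + x**2/2 - x**3/6 + x**4/24,
    # the degree-4 Taylor truncation of expm1(-x) around 0.
    return x * (x * (0.5 + x * (x * 0.041666666666666664
                                - 0.16666666666666666)) - 1.0)

for x in (1e-3, 1e-2, 1e-1):
    # The truncation error is O(x**5), so agreement degrades as |x| grows.
    assert math.isclose(poly_denominator(x), math.expm1(-x), rel_tol=x**4)
```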
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 93.6%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(0.041666666666666664 \cdot x - 0.16666666666666666\right)\right) - 1\right)}} \]
  6. Final simplification (93.6%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664 - 0.16666666666666666\right)\right) + -1\right)} \]
  7. Add Preprocessing

Alternative 3: 90.9% accurate, 13.7× speedup

\[\frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664\right)\right) + -1\right)} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ (* x (+ 0.5 (* x (* x 0.041666666666666664)))) -1.0))))
double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * (x * 0.041666666666666664)))) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((x * (0.5d0 + (x * (x * 0.041666666666666664d0)))) + (-1.0d0)))
end function
public static double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * (x * 0.041666666666666664)))) + -1.0));
}
def code(x):
	return -1.0 / (x * ((x * (0.5 + (x * (x * 0.041666666666666664)))) + -1.0))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(Float64(x * Float64(0.5 + Float64(x * Float64(x * 0.041666666666666664)))) + -1.0)))
end
function tmp = code(x)
	tmp = -1.0 / (x * ((x * (0.5 + (x * (x * 0.041666666666666664)))) + -1.0));
end
code[x_] := N[(-1.0 / N[(x * N[(N[(x * N[(0.5 + N[(x * N[(x * 0.041666666666666664), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 93.6%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(0.041666666666666664 \cdot x - 0.16666666666666666\right)\right) - 1\right)}} \]
  6. Taylor expanded in x around inf 93.3%

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(0.041666666666666664 \cdot x\right)}\right) - 1\right)} \]
  7. Step-by-step derivation
    1. *-commutative (93.3%)

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(x \cdot 0.041666666666666664\right)}\right) - 1\right)} \]
  8. Simplified (93.3%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(x \cdot 0.041666666666666664\right)}\right) - 1\right)} \]
  9. Final simplification (93.3%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664\right)\right) + -1\right)} \]
  10. Add Preprocessing

Alternative 4: 88.4% accurate, 15.8× speedup

\[\frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot -0.16666666666666666\right) + -1\right)} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ (* x (+ 0.5 (* x -0.16666666666666666))) -1.0))))
double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * -0.16666666666666666))) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((x * (0.5d0 + (x * (-0.16666666666666666d0)))) + (-1.0d0)))
end function
public static double code(double x) {
	return -1.0 / (x * ((x * (0.5 + (x * -0.16666666666666666))) + -1.0));
}
def code(x):
	return -1.0 / (x * ((x * (0.5 + (x * -0.16666666666666666))) + -1.0))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(Float64(x * Float64(0.5 + Float64(x * -0.16666666666666666))) + -1.0)))
end
function tmp = code(x)
	tmp = -1.0 / (x * ((x * (0.5 + (x * -0.16666666666666666))) + -1.0));
end
code[x_] := N[(-1.0 / N[(x * N[(N[(x * N[(0.5 + N[(x * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 88.5%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + -0.16666666666666666 \cdot x\right) - 1\right)}} \]
  6. Final simplification (88.5%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot -0.16666666666666666\right) + -1\right)} \]
  7. Add Preprocessing

Alternative 5: 87.5% accurate, 18.6× speedup

\[\frac{-1}{x \cdot \left(x \cdot \left(x \cdot -0.16666666666666666\right) + -1\right)} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ (* x (* x -0.16666666666666666)) -1.0))))
double code(double x) {
	return -1.0 / (x * ((x * (x * -0.16666666666666666)) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((x * (x * (-0.16666666666666666d0))) + (-1.0d0)))
end function
public static double code(double x) {
	return -1.0 / (x * ((x * (x * -0.16666666666666666)) + -1.0));
}
def code(x):
	return -1.0 / (x * ((x * (x * -0.16666666666666666)) + -1.0))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(Float64(x * Float64(x * -0.16666666666666666)) + -1.0)))
end
function tmp = code(x)
	tmp = -1.0 / (x * ((x * (x * -0.16666666666666666)) + -1.0));
end
code[x_] := N[(-1.0 / N[(x * N[(N[(x * N[(x * -0.16666666666666666), $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 93.6%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(0.041666666666666664 \cdot x - 0.16666666666666666\right)\right) - 1\right)}} \]
  6. Taylor expanded in x around 0 88.5%

    \[\leadsto \frac{-1}{x \cdot \left(\color{blue}{x \cdot \left(0.5 + -0.16666666666666666 \cdot x\right)} - 1\right)} \]
  7. Step-by-step derivation
    1. +-commutative (88.5%)

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(-0.16666666666666666 \cdot x + 0.5\right)} - 1\right)} \]
  8. Simplified (88.5%)

    \[\leadsto \frac{-1}{x \cdot \left(\color{blue}{x \cdot \left(-0.16666666666666666 \cdot x + 0.5\right)} - 1\right)} \]
  9. Taylor expanded in x around inf 88.1%

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(-0.16666666666666666 \cdot x\right)} - 1\right)} \]
  10. Step-by-step derivation
    1. *-commutative (88.1%)

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]
  11. Simplified (88.1%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]
  12. Final simplification (88.1%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(x \cdot -0.16666666666666666\right) + -1\right)} \]
  13. Add Preprocessing

Alternative 6: 83.0% accurate, 22.8× speedup

\[\frac{-1}{x \cdot \left(x \cdot 0.5 + -1\right)} \]
(FPCore (x) :precision binary64 (/ -1.0 (* x (+ (* x 0.5) -1.0))))
double code(double x) {
	return -1.0 / (x * ((x * 0.5) + -1.0));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((x * 0.5d0) + (-1.0d0)))
end function
public static double code(double x) {
	return -1.0 / (x * ((x * 0.5) + -1.0));
}
def code(x):
	return -1.0 / (x * ((x * 0.5) + -1.0))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(Float64(x * 0.5) + -1.0)))
end
function tmp = code(x)
	tmp = -1.0 / (x * ((x * 0.5) + -1.0));
end
code[x_] := N[(-1.0 / N[(x * N[(N[(x * 0.5), $MachinePrecision] + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 83.8%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(0.5 \cdot x - 1\right)}} \]
  6. Final simplification (83.8%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot 0.5 + -1\right)} \]
  7. Add Preprocessing
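Alternatives 4 through 6 keep successively fewer terms of the same series, trading accuracy for speed. A sketch (function names are illustrative) showing how the quadratic-denominator form of Alternative 6 behaves: its relative error grows roughly like x²/6, so it is excellent very near zero and poor at moderate x:

```python
import math

def alt6(x):
    # Alternative 6: -1 / (x * (x/2 - 1)), the quadratic truncation.
    return -1.0 / (x * (x * 0.5 - 1.0))

def accurate(x):
    # Alternative 1, used here as the reference.
    return -1.0 / math.expm1(-x)

# Very near zero the quadratic truncation is plenty accurate...
assert math.isclose(alt6(1e-4), accurate(1e-4), rel_tol=1e-6)

# ...but at x = 1 the dropped cubic and quartic terms matter.
assert not math.isclose(alt6(1.0), accurate(1.0), rel_tol=1e-3)
```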

Alternative 7: 66.5% accurate, 68.3× speedup

\[\frac{1}{x} \]
(FPCore (x) :precision binary64 (/ 1.0 x))
double code(double x) {
	return 1.0 / x;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0 / x
end function
public static double code(double x) {
	return 1.0 / x;
}
def code(x):
	return 1.0 / x
function code(x)
	return Float64(1.0 / x)
end
function tmp = code(x)
	tmp = 1.0 / x;
end
code[x_] := N[(1.0 / x), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 65.1%

    \[\leadsto \color{blue}{\frac{1}{x}} \]
  6. Add Preprocessing
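Alternative 7 keeps only the leading Laurent term: since the true value is 1/x + 1/2 + O(x) for small x, the dropped constant 1/2 gives 1/x a relative error of roughly x/2. A sketch (function names are illustrative) of why this crude form is still 66.5% accurate overall:

```python
import math

def alt7(x):
    # Alternative 7: just the leading 1/x term.
    return 1.0 / x

def accurate(x):
    # Alternative 1, used here as the reference.
    return -1.0 / math.expm1(-x)

# For tiny x the dropped 1/2 is negligible relative to 1/x...
assert math.isclose(alt7(1e-8), accurate(1e-8), rel_tol=1e-7)

# ...but at moderate x the approximation is off by tens of percent.
assert not math.isclose(alt7(0.5), accurate(0.5), rel_tol=0.01)
```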

Alternative 8: 3.3% accurate, 68.3× speedup

\[x \cdot 0.08333333333333333 \]
(FPCore (x) :precision binary64 (* x 0.08333333333333333))
double code(double x) {
	return x * 0.08333333333333333;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x * 0.08333333333333333d0
end function
public static double code(double x) {
	return x * 0.08333333333333333;
}
def code(x):
	return x * 0.08333333333333333
function code(x)
	return Float64(x * 0.08333333333333333)
end
function tmp = code(x)
	tmp = x * 0.08333333333333333;
end
code[x_] := N[(x * 0.08333333333333333), $MachinePrecision]
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation (18 steps, identical to the derivation of Alternative 1)
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 64.6%

    \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + 0.08333333333333333 \cdot x\right)}{x}} \]
  6. Step-by-step derivation
    1. *-commutative 64.6%

      \[\leadsto \frac{1 + x \cdot \left(0.5 + \color{blue}{x \cdot 0.08333333333333333}\right)}{x} \]
  7. Simplified 64.6%

    \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}} \]
  8. Taylor expanded in x around -inf 32.6%

    \[\leadsto \color{blue}{-1 \cdot \left(x \cdot \left(-1 \cdot \frac{0.5 + \frac{1}{x}}{x} - 0.08333333333333333\right)\right)} \]
  9. Step-by-step derivation
    1. mul-1-neg 32.6%

      \[\leadsto \color{blue}{-x \cdot \left(-1 \cdot \frac{0.5 + \frac{1}{x}}{x} - 0.08333333333333333\right)} \]
    2. distribute-rgt-neg-in 32.6%

      \[\leadsto \color{blue}{x \cdot \left(-\left(-1 \cdot \frac{0.5 + \frac{1}{x}}{x} - 0.08333333333333333\right)\right)} \]
    3. sub-neg 32.6%

      \[\leadsto x \cdot \left(-\color{blue}{\left(-1 \cdot \frac{0.5 + \frac{1}{x}}{x} + \left(-0.08333333333333333\right)\right)}\right) \]
    4. associate-*r/ 32.6%

      \[\leadsto x \cdot \left(-\left(\color{blue}{\frac{-1 \cdot \left(0.5 + \frac{1}{x}\right)}{x}} + \left(-0.08333333333333333\right)\right)\right) \]
    5. +-commutative 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{-1 \cdot \color{blue}{\left(\frac{1}{x} + 0.5\right)}}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    6. distribute-lft-in 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\color{blue}{-1 \cdot \frac{1}{x} + -1 \cdot 0.5}}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    7. neg-mul-1 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\color{blue}{\left(-\frac{1}{x}\right)} + -1 \cdot 0.5}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    8. distribute-neg-frac 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\color{blue}{\frac{-1}{x}} + -1 \cdot 0.5}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    9. metadata-eval 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\frac{\color{blue}{-1}}{x} + -1 \cdot 0.5}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    10. metadata-eval 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\frac{-1}{x} + \color{blue}{-0.5}}{x} + \left(-0.08333333333333333\right)\right)\right) \]
    11. metadata-eval 32.6%

      \[\leadsto x \cdot \left(-\left(\frac{\frac{-1}{x} + -0.5}{x} + \color{blue}{-0.08333333333333333}\right)\right) \]
  10. Simplified 32.6%

    \[\leadsto \color{blue}{x \cdot \left(-\left(\frac{\frac{-1}{x} + -0.5}{x} + -0.08333333333333333\right)\right)} \]
  11. Taylor expanded in x around inf 3.3%

    \[\leadsto x \cdot \color{blue}{0.08333333333333333} \]
  12. Add Preprocessing
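
The decisive step in the derivation above is expm1-define: for x near 0, computing exp(x) - 1.0 cancels catastrophically, while expm1 evaluates e^x - 1 directly without forming the intermediate 1 + x. A minimal sketch in Python (function names are ours, not from the report):

```python
import math

def naive(x):
    # Original program: e^x / (e^x - 1)
    return math.exp(x) / (math.exp(x) - 1.0)

def rewritten(x):
    # Herbie's result: -1 / expm1(-x), which avoids the cancellation
    return -1.0 / math.expm1(-x)

x = 1e-12
# Series reference: e^x/(e^x - 1) = 1/x + 1/2 + x/12 - ...; at x = 1e-12
# the x/12 term is far below double precision, so 1/x + 0.5 suffices.
ref = 1.0 / x + 0.5
print(abs(naive(x) - ref) / ref)      # large relative error: exp(x) - 1 cancels
print(abs(rewritten(x) - ref) / ref)  # near machine epsilon
```

The rewrite costs nothing extra at runtime, which is why it dominates the original on both axes of the accuracy-vs-speed plot.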

Alternative 9: 3.2% accurate, 205.0× speedup?

\[\begin{array}{l} \\ -1 \end{array} \]
(FPCore (x) :precision binary64 -1.0)
double code(double x) {
	return -1.0;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = -1.0d0
end function
public static double code(double x) {
	return -1.0;
}
def code(x):
	return -1.0
function code(x)
	return -1.0
end
function tmp = code(x)
	tmp = -1.0;
end
code[x_] := -1.0
\begin{array}{l}

\\
-1
\end{array}
Derivation
  1. Initial program 39.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg 39.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative 39.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse 4.0%

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg 4.0%

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out 4.0%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity 4.0%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in 4.0%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 4.0%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- 4.0%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 3.9%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* 3.9%

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity 3.9%

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ 3.9%

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse 39.1%

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 39.1%

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac 39.1%

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval 39.1%

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define 100.0%

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Applied egg-rr 0.3%

    \[\leadsto \color{blue}{\sqrt{\mathsf{expm1}\left(x\right)} \cdot {\left(-\sqrt{\mathsf{expm1}\left(x\right)}\right)}^{-1}} \]
  6. Step-by-step derivation
    1. unpow-1 0.3%

      \[\leadsto \sqrt{\mathsf{expm1}\left(x\right)} \cdot \color{blue}{\frac{1}{-\sqrt{\mathsf{expm1}\left(x\right)}}} \]
    2. associate-*r/ 0.3%

      \[\leadsto \color{blue}{\frac{\sqrt{\mathsf{expm1}\left(x\right)} \cdot 1}{-\sqrt{\mathsf{expm1}\left(x\right)}}} \]
    3. *-rgt-identity 0.3%

      \[\leadsto \frac{\color{blue}{\sqrt{\mathsf{expm1}\left(x\right)}}}{-\sqrt{\mathsf{expm1}\left(x\right)}} \]
    4. distribute-frac-neg2 0.3%

      \[\leadsto \color{blue}{-\frac{\sqrt{\mathsf{expm1}\left(x\right)}}{\sqrt{\mathsf{expm1}\left(x\right)}}} \]
    5. *-inverses 3.3%

      \[\leadsto -\color{blue}{1} \]
    6. metadata-eval 3.3%

      \[\leadsto \color{blue}{-1} \]
  7. Simplified 3.3%

    \[\leadsto \color{blue}{-1} \]
  8. Add Preprocessing
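
Alternative 9 collapses to the constant -1, which explains its 205× speedup and 3.2% accuracy: solving e^x/(e^x - 1) = -1 gives e^x = 1/2, so the constant matches the true function only near x = -ln 2, and is badly wrong elsewhere. A quick check (the helper name is ours):

```python
import math

def original(x):
    # Original program: e^x / (e^x - 1)
    return math.exp(x) / (math.exp(x) - 1.0)

# At x = -ln 2, e^x = 1/2 and (1/2)/(1/2 - 1) = -1, so the constant is exact.
assert math.isclose(original(-math.log(2.0)), -1.0, rel_tol=1e-9)
# Away from that point it is far off: original(1.0) = e/(e - 1) ≈ 1.58
assert abs(original(1.0) - (-1.0)) > 1.0
```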

Developer Target 1: 100.0% accurate, 2.0× speedup?

\[\begin{array}{l} \\ \frac{-1}{\mathsf{expm1}\left(-x\right)} \end{array} \]
(FPCore (x) :precision binary64 (/ (- 1.0) (expm1 (- x))))
double code(double x) {
	return -1.0 / expm1(-x);
}
public static double code(double x) {
	return -1.0 / Math.expm1(-x);
}
def code(x):
	return -1.0 / math.expm1(-x)
function code(x)
	return Float64(Float64(-1.0) / expm1(Float64(-x)))
end
code[x_] := N[((-1.0) / N[(Exp[(-x)] - 1), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{\mathsf{expm1}\left(-x\right)}
\end{array}
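
The developer target is the same expm1 form Herbie derives. Away from x = 0 the subtraction in exp(x) - 1.0 is well conditioned, so the two programs should agree to a few ulps there; the sample points below are ours, chosen inside the report's precondition x < 710:

```python
import math

def original(x):
    # Original program: e^x / (e^x - 1)
    return math.exp(x) / (math.exp(x) - 1.0)

def target(x):
    # Developer target: -1 / expm1(-x)
    return -1.0 / math.expm1(-x)

# Well-conditioned inputs: both forms agree closely; only near 0
# does the original lose accuracy (see the derivation above).
for x in (0.5, 1.0, 5.0, 20.0, 100.0, -3.0):
    assert math.isclose(original(x), target(x), rel_tol=1e-12)
```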

Reproduce

?
herbie shell --seed 2024157 
(FPCore (x)
  :name "expq2 (section 3.11)"
  :precision binary64
  :pre (> 710.0 x)

  :alt
  (! :herbie-platform default (/ (- 1) (expm1 (- x))))

  (/ (exp x) (- (exp x) 1.0)))