expq2 (section 3.11)

Percentage Accurate: 37.7% → 100.0%
Time: 7.2s
Alternatives: 14
Speedup: 68.3×

Specification

\[710 > x\]
\[\begin{array}{l} \\ \frac{e^{x}}{e^{x} - 1} \end{array} \]
(FPCore (x) :precision binary64 (/ (exp x) (- (exp x) 1.0)))
double code(double x) {
	return exp(x) / (exp(x) - 1.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = exp(x) / (exp(x) - 1.0d0)
end function
public static double code(double x) {
	return Math.exp(x) / (Math.exp(x) - 1.0);
}
def code(x):
	return math.exp(x) / (math.exp(x) - 1.0)
function code(x)
	return Float64(exp(x) / Float64(exp(x) - 1.0))
end
function tmp = code(x)
	tmp = exp(x) / (exp(x) - 1.0);
end
code[x_] := N[(N[Exp[x], $MachinePrecision] / N[(N[Exp[x], $MachinePrecision] - 1.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{x}}{e^{x} - 1}
\end{array}
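For x near 0, the denominator e^x - 1 suffers catastrophic cancellation: exp(x) rounds to a double very close to 1, and subtracting 1.0 leaves mostly rounding error, which is why the original program scores only 37.7%. A quick sketch (not part of the report) contrasting the original program with the expm1-based rewrite Herbie finds in Alternative 1 below:

```python
import math

def naive(x):
    # original program: exp(x) / (exp(x) - 1)
    return math.exp(x) / (math.exp(x) - 1.0)

def accurate(x):
    # Alternative 1: -1 / expm1(-x), no cancellation near 0
    return -1.0 / math.expm1(-x)

x = 1e-12
# true value is 1/x + 1/2 + O(x), i.e. about 1.0000000000005e12
print(naive(x))     # loses several significant digits to cancellation
print(accurate(x))  # accurate to machine precision
```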

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 14 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 37.7% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{e^{x}}{e^{x} - 1} \end{array} \]
(FPCore (x) :precision binary64 (/ (exp x) (- (exp x) 1.0)))
double code(double x) {
	return exp(x) / (exp(x) - 1.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = exp(x) / (exp(x) - 1.0d0)
end function
public static double code(double x) {
	return Math.exp(x) / (Math.exp(x) - 1.0);
}
def code(x):
	return math.exp(x) / (math.exp(x) - 1.0)
function code(x)
	return Float64(exp(x) / Float64(exp(x) - 1.0))
end
function tmp = code(x)
	tmp = exp(x) / (exp(x) - 1.0);
end
code[x_] := N[(N[Exp[x], $MachinePrecision] / N[(N[Exp[x], $MachinePrecision] - 1.0), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{e^{x}}{e^{x} - 1}
\end{array}

Alternative 1: 100.0% accurate, 2.0× speedup

\[\begin{array}{l} \\ \frac{-1}{\mathsf{expm1}\left(-x\right)} \end{array} \]
(FPCore (x) :precision binary64 (/ -1.0 (expm1 (- x))))
double code(double x) {
	return -1.0 / expm1(-x);
}
public static double code(double x) {
	return -1.0 / Math.expm1(-x);
}
def code(x):
	return -1.0 / math.expm1(-x)
function code(x)
	return Float64(-1.0 / expm1(Float64(-x)))
end
code[x_] := N[(-1.0 / N[(Exp[(-x)] - 1), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{\mathsf{expm1}\left(-x\right)}
\end{array}
Derivation
  1. Initial program 42.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse 6.3%

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg 6.2%

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 6.3%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* 6.3%

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity 6.3%

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ 6.3%

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse 42.2%

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 42.2%

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac 42.2%

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval 42.2%

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define 100.0%

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Add Preprocessing

Alternative 2: 99.2% accurate, 1.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;x \leq -3.9:\\ \;\;\;\;\frac{e^{x}}{x}\\ \mathbf{else}:\\ \;\;\;\;\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (if (<= x -3.9)
   (/ (exp x) x)
   (/ (+ 1.0 (* x (+ 0.5 (* x 0.08333333333333333)))) x)))
double code(double x) {
	double tmp;
	if (x <= -3.9) {
		tmp = exp(x) / x;
	} else {
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	}
	return tmp;
}
real(8) function code(x)
    real(8), intent (in) :: x
    real(8) :: tmp
    if (x <= (-3.9d0)) then
        tmp = exp(x) / x
    else
        tmp = (1.0d0 + (x * (0.5d0 + (x * 0.08333333333333333d0)))) / x
    end if
    code = tmp
end function
public static double code(double x) {
	double tmp;
	if (x <= -3.9) {
		tmp = Math.exp(x) / x;
	} else {
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	}
	return tmp;
}
def code(x):
	tmp = 0
	if x <= -3.9:
		tmp = math.exp(x) / x
	else:
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x
	return tmp
function code(x)
	tmp = 0.0
	if (x <= -3.9)
		tmp = Float64(exp(x) / x);
	else
		tmp = Float64(Float64(1.0 + Float64(x * Float64(0.5 + Float64(x * 0.08333333333333333)))) / x);
	end
	return tmp
end
function tmp_2 = code(x)
	tmp = 0.0;
	if (x <= -3.9)
		tmp = exp(x) / x;
	else
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	end
	tmp_2 = tmp;
end
code[x_] := If[LessEqual[x, -3.9], N[(N[Exp[x], $MachinePrecision] / x), $MachinePrecision], N[(N[(1.0 + N[(x * N[(0.5 + N[(x * 0.08333333333333333), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / x), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;x \leq -3.9:\\
\;\;\;\;\frac{e^{x}}{x}\\

\mathbf{else}:\\
\;\;\;\;\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if x < -3.89999999999999991

    1. Initial program 100.0%

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. expm1-define 100.0%

        \[\leadsto \frac{e^{x}}{\color{blue}{\mathsf{expm1}\left(x\right)}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{e^{x}}{\mathsf{expm1}\left(x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 99.1%

      \[\leadsto \frac{e^{x}}{\color{blue}{x}} \]

    if -3.89999999999999991 < x

    1. Initial program 9.3%

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. sub-neg 9.3%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
      2. +-commutative 9.3%

        \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
      3. rgt-mult-inverse 9.2%

        \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
      4. exp-neg 9.1%

        \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
      5. distribute-rgt-neg-out 9.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
      6. *-rgt-identity 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
      7. distribute-lft-in 9.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
      8. neg-sub0 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
      9. associate-+l- 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
      10. neg-sub0 9.3%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
      11. associate-/r* 9.3%

        \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
      12. *-rgt-identity 9.3%

        \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
      13. associate-*r/ 9.3%

        \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
      14. rgt-mult-inverse 9.3%

        \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
      15. distribute-frac-neg2 9.3%

        \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
      16. distribute-neg-frac 9.3%

        \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
      17. metadata-eval 9.3%

        \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
      18. expm1-define 100.0%

        \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 98.7%

      \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + 0.08333333333333333 \cdot x\right)}{x}} \]
    6. Step-by-step derivation
      1. *-commutative 98.7%

        \[\leadsto \frac{1 + x \cdot \left(0.5 + \color{blue}{x \cdot 0.08333333333333333}\right)}{x} \]
    7. Simplified 98.7%

      \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}} \]
  3. Recombined 2 regimes into one program.
  4. Add Preprocessing
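The else branch above is the truncated Laurent series of the specification around 0, e^x/(e^x - 1) = 1/x + 1/2 + x/12 + O(x^3), written in Horner form; the constant 0.08333333333333333 is 1/12. A sketch (not from the report) checking the series branch against the expm1 rewrite from Alternative 1:

```python
import math

def series_branch(x):
    # (1 + x*(1/2 + x*(1/12))) / x, the truncated Laurent series
    return (1.0 + x * (0.5 + x * 0.08333333333333333)) / x

def reference(x):
    # accurate rewrite from Alternative 1
    return -1.0 / math.expm1(-x)

for x in (1e-8, 1e-3, 0.1):
    rel = abs(series_branch(x) - reference(x)) / abs(reference(x))
    print(x, rel)  # error shrinks as x -> 0 (O(x^3) truncation)
```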

Alternative 3: 91.5% accurate, 12.1× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664 - 0.16666666666666666\right)\right)\right)} \end{array} \]
(FPCore (x)
 :precision binary64
 (/
  -1.0
  (*
   x
   (+
    -1.0
    (* x (+ 0.5 (* x (- (* x 0.041666666666666664) 0.16666666666666666))))))))
double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((-1.0d0) + (x * (0.5d0 + (x * ((x * 0.041666666666666664d0) - 0.16666666666666666d0))))))
end function
public static double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666))))));
}
def code(x):
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666))))))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * Float64(0.5 + Float64(x * Float64(Float64(x * 0.041666666666666664) - 0.16666666666666666)))))))
end
function tmp = code(x)
	tmp = -1.0 / (x * (-1.0 + (x * (0.5 + (x * ((x * 0.041666666666666664) - 0.16666666666666666))))));
end
code[x_] := N[(-1.0 / N[(x * N[(-1.0 + N[(x * N[(0.5 + N[(x * N[(N[(x * 0.041666666666666664), $MachinePrecision] - 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664 - 0.16666666666666666\right)\right)\right)}
\end{array}
Derivation
  1. Initial program 42.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse 6.3%

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg 6.2%

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 6.3%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* 6.3%

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity 6.3%

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ 6.3%

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse 42.2%

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 42.2%

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac 42.2%

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval 42.2%

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define 100.0%

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 90.3%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(0.041666666666666664 \cdot x - 0.16666666666666666\right)\right) - 1\right)}} \]
  6. Final simplification 90.3%

    \[\leadsto \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664 - 0.16666666666666666\right)\right)\right)} \]
  7. Add Preprocessing
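The denominator here is the degree-4 Taylor polynomial of e^{-x} - 1 in Horner form (0.041666666666666664 is 1/24), so this alternative is essentially -1/expm1(-x) with the library call replaced by a short polynomial, which is where the 12.1× speedup comes from, at some cost in accuracy away from 0. A sketch (not from the report) comparing the polynomial to math.expm1:

```python
import math

def alt3(x):
    # Alternative 3: -1 / p(x), where p(x) is the degree-4 Taylor
    # polynomial of expm1(-x) = -x + x^2/2 - x^3/6 + x^4/24 - ...
    return -1.0 / (x * (-1.0 + x * (0.5 + x * (x * 0.041666666666666664
                                               - 0.16666666666666666))))

x = 0.01
poly = x * (-1.0 + x * (0.5 + x * (x * 0.041666666666666664
                                   - 0.16666666666666666)))
# truncation error is about x^5/120, tiny for small |x|
print(abs(poly - math.expm1(-x)))
```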

Alternative 4: 89.1% accurate, 12.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;x \leq -3.8:\\ \;\;\;\;\frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (if (<= x -3.8)
   (/ -1.0 (* x (+ -1.0 (* x (* x -0.16666666666666666)))))
   (/ (+ 1.0 (* x (+ 0.5 (* x 0.08333333333333333)))) x)))
double code(double x) {
	double tmp;
	if (x <= -3.8) {
		tmp = -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
	} else {
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	}
	return tmp;
}
real(8) function code(x)
    real(8), intent (in) :: x
    real(8) :: tmp
    if (x <= (-3.8d0)) then
        tmp = (-1.0d0) / (x * ((-1.0d0) + (x * (x * (-0.16666666666666666d0)))))
    else
        tmp = (1.0d0 + (x * (0.5d0 + (x * 0.08333333333333333d0)))) / x
    end if
    code = tmp
end function
public static double code(double x) {
	double tmp;
	if (x <= -3.8) {
		tmp = -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
	} else {
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	}
	return tmp;
}
def code(x):
	tmp = 0
	if x <= -3.8:
		tmp = -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))))
	else:
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x
	return tmp
function code(x)
	tmp = 0.0
	if (x <= -3.8)
		tmp = Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * Float64(x * -0.16666666666666666)))));
	else
		tmp = Float64(Float64(1.0 + Float64(x * Float64(0.5 + Float64(x * 0.08333333333333333)))) / x);
	end
	return tmp
end
function tmp_2 = code(x)
	tmp = 0.0;
	if (x <= -3.8)
		tmp = -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
	else
		tmp = (1.0 + (x * (0.5 + (x * 0.08333333333333333)))) / x;
	end
	tmp_2 = tmp;
end
code[x_] := If[LessEqual[x, -3.8], N[(-1.0 / N[(x * N[(-1.0 + N[(x * N[(x * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(N[(1.0 + N[(x * N[(0.5 + N[(x * 0.08333333333333333), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] / x), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;x \leq -3.8:\\
\;\;\;\;\frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)}\\

\mathbf{else}:\\
\;\;\;\;\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if x < -3.7999999999999998

    1. Initial program 100.0%

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. sub-neg 100.0%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
      2. +-commutative 100.0%

        \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
      3. rgt-mult-inverse 1.1%

        \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
      4. exp-neg 1.1%

        \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
      5. distribute-rgt-neg-out 1.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
      6. *-rgt-identity 1.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
      7. distribute-lft-in 1.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
      8. neg-sub0 1.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
      9. associate-+l- 1.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
      10. neg-sub0 1.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
      11. associate-/r* 1.1%

        \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
      12. *-rgt-identity 1.1%

        \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
      13. associate-*r/ 1.1%

        \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
      14. rgt-mult-inverse 100.0%

        \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
      15. distribute-frac-neg2 100.0%

        \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
      16. distribute-neg-frac 100.0%

        \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
      17. metadata-eval 100.0%

        \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
      18. expm1-define 100.0%

        \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 66.6%

      \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + -0.16666666666666666 \cdot x\right) - 1\right)}} \]
    6. Taylor expanded in x around inf 66.6%

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(-0.16666666666666666 \cdot x\right)} - 1\right)} \]
    7. Step-by-step derivation
      1. *-commutative 66.6%

        \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]
    8. Simplified 66.6%

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]

    if -3.7999999999999998 < x

    1. Initial program 9.3%

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. sub-neg 9.3%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
      2. +-commutative 9.3%

        \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
      3. rgt-mult-inverse 9.2%

        \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
      4. exp-neg 9.1%

        \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
      5. distribute-rgt-neg-out 9.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
      6. *-rgt-identity 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
      7. distribute-lft-in 9.1%

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
      8. neg-sub0 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
      9. associate-+l- 9.1%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
      10. neg-sub0 9.3%

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
      11. associate-/r* 9.3%

        \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
      12. *-rgt-identity 9.3%

        \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
      13. associate-*r/ 9.3%

        \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
      14. rgt-mult-inverse 9.3%

        \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
      15. distribute-frac-neg2 9.3%

        \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
      16. distribute-neg-frac 9.3%

        \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
      17. metadata-eval 9.3%

        \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
      18. expm1-define 100.0%

        \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
    3. Simplified 100.0%

      \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 98.7%

      \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + 0.08333333333333333 \cdot x\right)}{x}} \]
    6. Step-by-step derivation
      1. *-commutative 98.7%

        \[\leadsto \frac{1 + x \cdot \left(0.5 + \color{blue}{x \cdot 0.08333333333333333}\right)}{x} \]
    7. Simplified 98.7%

      \[\leadsto \color{blue}{\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 87.0%

    \[\leadsto \begin{array}{l} \mathbf{if}\;x \leq -3.8:\\ \;\;\;\;\frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{1 + x \cdot \left(0.5 + x \cdot 0.08333333333333333\right)}{x}\\ \end{array} \]
  5. Add Preprocessing

Alternative 5: 91.1% accurate, 13.7× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664\right)\right)\right)} \end{array} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ -1.0 (* x (+ 0.5 (* x (* x 0.041666666666666664))))))))
double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * (x * 0.041666666666666664))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((-1.0d0) + (x * (0.5d0 + (x * (x * 0.041666666666666664d0))))))
end function
public static double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * (x * 0.041666666666666664))))));
}
def code(x):
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * (x * 0.041666666666666664))))))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * Float64(0.5 + Float64(x * Float64(x * 0.041666666666666664)))))))
end
function tmp = code(x)
	tmp = -1.0 / (x * (-1.0 + (x * (0.5 + (x * (x * 0.041666666666666664))))));
end
code[x_] := N[(-1.0 / N[(x * N[(-1.0 + N[(x * N[(0.5 + N[(x * N[(x * 0.041666666666666664), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664\right)\right)\right)}
\end{array}
Derivation
  1. Initial program 42.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse 6.3%

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg 6.2%

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 6.3%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* 6.3%

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity 6.3%

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ 6.3%

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse 42.2%

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 42.2%

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac 42.2%

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval 42.2%

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define 100.0%

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 90.3%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + x \cdot \left(0.041666666666666664 \cdot x - 0.16666666666666666\right)\right) - 1\right)}} \]
  6. Taylor expanded in x around inf 89.7%

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(0.041666666666666664 \cdot x\right)}\right) - 1\right)} \]
  7. Step-by-step derivation
    1. *-commutative 89.7%

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(x \cdot 0.041666666666666664\right)}\right) - 1\right)} \]
  8. Simplified 89.7%

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \left(0.5 + x \cdot \color{blue}{\left(x \cdot 0.041666666666666664\right)}\right) - 1\right)} \]
  9. Final simplification 89.7%

    \[\leadsto \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot \left(x \cdot 0.041666666666666664\right)\right)\right)} \]
  10. Add Preprocessing

Alternative 6: 88.9% accurate, 15.8× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot -0.16666666666666666\right)\right)} \end{array} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ -1.0 (* x (+ 0.5 (* x -0.16666666666666666)))))))
double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * -0.16666666666666666)))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((-1.0d0) + (x * (0.5d0 + (x * (-0.16666666666666666d0))))))
end function
public static double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * -0.16666666666666666)))));
}
def code(x):
	return -1.0 / (x * (-1.0 + (x * (0.5 + (x * -0.16666666666666666)))))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * Float64(0.5 + Float64(x * -0.16666666666666666))))))
end
function tmp = code(x)
	tmp = -1.0 / (x * (-1.0 + (x * (0.5 + (x * -0.16666666666666666)))));
end
code[x_] := N[(-1.0 / N[(x * N[(-1.0 + N[(x * N[(0.5 + N[(x * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot -0.16666666666666666\right)\right)}
\end{array}
Derivation
  1. Initial program 42.2%

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative 42.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse 6.3%

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg 6.2%

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in 6.2%

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- 6.2%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 6.3%

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* 6.3%

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity 6.3%

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ 6.3%

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse 42.2%

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 42.2%

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac 42.2%

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval 42.2%

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define 100.0%

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 86.7%

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + -0.16666666666666666 \cdot x\right) - 1\right)}} \]
  6. Final simplification 86.7%

    \[\leadsto \frac{-1}{x \cdot \left(-1 + x \cdot \left(0.5 + x \cdot -0.16666666666666666\right)\right)} \]
  7. Add Preprocessing

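The expm1-define step above is where the accuracy jumps from 42.2% to 100.0%: `expm1` evaluates \(e^{y} - 1\) directly, avoiding the catastrophic cancellation that the literal subtraction `exp(x) - 1.0` suffers near zero. A quick Python check of the two forms (a sketch, not part of the Herbie output):

```python
import math

def naive(x):
    # literal translation: exp(x) - 1.0 cancels badly for small x
    return math.exp(x) / (math.exp(x) - 1.0)

def rewritten(x):
    # the expm1 form reached in step 18 of the derivation
    return -1.0 / math.expm1(-x)

x = 1e-9
# for small x, exp(x)/(exp(x) - 1) = 1/x + 1/2 + x/12 - ...
reference = 1.0 / x + 0.5
print(abs(naive(x) - reference) / reference)      # large relative error
print(abs(rewritten(x) - reference) / reference)  # near machine epsilon
```
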
Alternative 7: 83.5% accurate, 17.1× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;x \leq -1.75:\\ \;\;\;\;\frac{-1}{x \cdot \left(x \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;0.5 + \frac{1}{x}\\ \end{array} \end{array} \]
(FPCore (x)
 :precision binary64
 (if (<= x -1.75) (/ -1.0 (* x (* x 0.5))) (+ 0.5 (/ 1.0 x))))
double code(double x) {
	double tmp;
	if (x <= -1.75) {
		tmp = -1.0 / (x * (x * 0.5));
	} else {
		tmp = 0.5 + (1.0 / x);
	}
	return tmp;
}
real(8) function code(x)
    real(8), intent (in) :: x
    real(8) :: tmp
    if (x <= (-1.75d0)) then
        tmp = (-1.0d0) / (x * (x * 0.5d0))
    else
        tmp = 0.5d0 + (1.0d0 / x)
    end if
    code = tmp
end function
public static double code(double x) {
	double tmp;
	if (x <= -1.75) {
		tmp = -1.0 / (x * (x * 0.5));
	} else {
		tmp = 0.5 + (1.0 / x);
	}
	return tmp;
}
def code(x):
	tmp = 0
	if x <= -1.75:
		tmp = -1.0 / (x * (x * 0.5))
	else:
		tmp = 0.5 + (1.0 / x)
	return tmp
function code(x)
	tmp = 0.0
	if (x <= -1.75)
		tmp = Float64(-1.0 / Float64(x * Float64(x * 0.5)));
	else
		tmp = Float64(0.5 + Float64(1.0 / x));
	end
	return tmp
end
function tmp_2 = code(x)
	tmp = 0.0;
	if (x <= -1.75)
		tmp = -1.0 / (x * (x * 0.5));
	else
		tmp = 0.5 + (1.0 / x);
	end
	tmp_2 = tmp;
end
code[x_] := If[LessEqual[x, -1.75], N[(-1.0 / N[(x * N[(x * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[(0.5 + N[(1.0 / x), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;x \leq -1.75:\\
\;\;\;\;\frac{-1}{x \cdot \left(x \cdot 0.5\right)}\\

\mathbf{else}:\\
\;\;\;\;0.5 + \frac{1}{x}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if x < -1.75

    1. Initial program (100.0%)

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. sub-neg (100.0%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
      2. +-commutative (100.0%)

        \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
      3. rgt-mult-inverse (1.1%)

        \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
      4. exp-neg (1.1%)

        \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
      5. distribute-rgt-neg-out (1.1%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
      6. *-rgt-identity (1.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
      7. distribute-lft-in (1.1%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
      8. neg-sub0 (1.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
      9. associate-+l- (1.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
      10. neg-sub0 (1.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
      11. associate-/r* (1.1%)

        \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
      12. *-rgt-identity (1.1%)

        \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
      13. associate-*r/ (1.1%)

        \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
      14. rgt-mult-inverse (100.0%)

        \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
      15. distribute-frac-neg2 (100.0%)

        \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
      16. distribute-neg-frac (100.0%)

        \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
      17. metadata-eval (100.0%)

        \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
      18. expm1-define (100.0%)

        \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
    3. Simplified (100.0%)

      \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 (50.3%)

      \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(0.5 \cdot x - 1\right)}} \]
    6. Taylor expanded in x around inf (50.3%)

      \[\leadsto \frac{-1}{x \cdot \color{blue}{\left(0.5 \cdot x\right)}} \]
    7. Step-by-step derivation
      1. *-commutative (50.3%)

        \[\leadsto \frac{-1}{x \cdot \color{blue}{\left(x \cdot 0.5\right)}} \]
    8. Simplified (50.3%)

      \[\leadsto \frac{-1}{x \cdot \color{blue}{\left(x \cdot 0.5\right)}} \]

    if -1.75 < x

    1. Initial program (9.3%)

      \[\frac{e^{x}}{e^{x} - 1} \]
    2. Step-by-step derivation
      1. sub-neg (9.3%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
      2. +-commutative (9.3%)

        \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
      3. rgt-mult-inverse (9.2%)

        \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
      4. exp-neg (9.1%)

        \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
      5. distribute-rgt-neg-out (9.1%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
      6. *-rgt-identity (9.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
      7. distribute-lft-in (9.1%)

        \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
      8. neg-sub0 (9.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
      9. associate-+l- (9.1%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
      10. neg-sub0 (9.3%)

        \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
      11. associate-/r* (9.3%)

        \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
      12. *-rgt-identity (9.3%)

        \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
      13. associate-*r/ (9.3%)

        \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
      14. rgt-mult-inverse (9.3%)

        \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
      15. distribute-frac-neg2 (9.3%)

        \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
      16. distribute-neg-frac (9.3%)

        \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
      17. metadata-eval (9.3%)

        \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
      18. expm1-define (100.0%)

        \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
    3. Simplified (100.0%)

      \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
    4. Add Preprocessing
    5. Taylor expanded in x around 0 (97.7%)

      \[\leadsto \color{blue}{\frac{1 + 0.5 \cdot x}{x}} \]
    6. Step-by-step derivation
      1. *-commutative (97.7%)

        \[\leadsto \frac{1 + \color{blue}{x \cdot 0.5}}{x} \]
    7. Simplified (97.7%)

      \[\leadsto \color{blue}{\frac{1 + x \cdot 0.5}{x}} \]
    8. Taylor expanded in x around inf (97.7%)

      \[\leadsto \color{blue}{0.5 + \frac{1}{x}} \]
    9. Step-by-step derivation
      1. +-commutative (97.7%)

        \[\leadsto \color{blue}{\frac{1}{x} + 0.5} \]
    10. Simplified (97.7%)

      \[\leadsto \color{blue}{\frac{1}{x} + 0.5} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (80.5%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;x \leq -1.75:\\ \;\;\;\;\frac{-1}{x \cdot \left(x \cdot 0.5\right)}\\ \mathbf{else}:\\ \;\;\;\;0.5 + \frac{1}{x}\\ \end{array} \]
  5. Add Preprocessing

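The two regimes above can be exercised directly. In this sketch (not part of the report; it assumes the expm1 form from the derivation as the reference value), the else branch 0.5 + 1/x is excellent near x = 0 but only rough elsewhere, consistent with the 83.5% average accuracy:

```python
import math

def alt7(x):
    # Alternative 7: Herbie's two-regime approximation
    if x <= -1.75:
        return -1.0 / (x * (x * 0.5))
    return 0.5 + 1.0 / x

def reference(x):
    # accurate rewrite of exp(x)/(exp(x) - 1), from the derivation
    return -1.0 / math.expm1(-x)

for x in (-2.0, 1e-3, 5.0):
    rel = abs(alt7(x) - reference(x)) / abs(reference(x))
    print(x, rel)
```
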
Alternative 8: 87.9% accurate, 18.6× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)} \end{array} \]
(FPCore (x)
 :precision binary64
 (/ -1.0 (* x (+ -1.0 (* x (* x -0.16666666666666666))))))
double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((-1.0d0) + (x * (x * (-0.16666666666666666d0)))))
end function
public static double code(double x) {
	return -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
}
def code(x):
	return -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * Float64(x * -0.16666666666666666)))))
end
function tmp = code(x)
	tmp = -1.0 / (x * (-1.0 + (x * (x * -0.16666666666666666))));
end
code[x_] := N[(-1.0 / N[(x * N[(-1.0 + N[(x * N[(x * -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)}
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (86.7%)

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(x \cdot \left(0.5 + -0.16666666666666666 \cdot x\right) - 1\right)}} \]
  6. Taylor expanded in x around inf (85.1%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(-0.16666666666666666 \cdot x\right)} - 1\right)} \]
  7. Step-by-step derivation
    1. *-commutative (85.1%)

      \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]
  8. Simplified (85.1%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot \color{blue}{\left(x \cdot -0.16666666666666666\right)} - 1\right)} \]
  9. Final simplification (85.1%)

    \[\leadsto \frac{-1}{x \cdot \left(-1 + x \cdot \left(x \cdot -0.16666666666666666\right)\right)} \]
  10. Add Preprocessing

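Alternative 8's second Taylor step drops the 0.5·x denominator term, so near x = 0 its output sits about 0.5 below the true value (which behaves like 1/x + 0.5 there); the relative error still shrinks as 1/x grows. A small check (a sketch, not part of the report; it assumes the expm1 form from the derivation as reference):

```python
import math

def alt8(x):
    # Alternative 8: cubic-denominator approximation
    return -1.0 / (x * (-1.0 + x * (x * -0.16666666666666666)))

def reference(x):
    # accurate rewrite of exp(x)/(exp(x) - 1), from the derivation
    return -1.0 / math.expm1(-x)

# The absolute error settles near 0.5 as x -> 0, while the relative
# error keeps shrinking because the true value grows like 1/x.
for x in (1e-1, 1e-3, 1e-6):
    print(x, abs(alt8(x) - reference(x)))
```
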
Alternative 9: 83.4% accurate, 22.8× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(x \cdot 0.5\right) - x} \end{array} \]
(FPCore (x) :precision binary64 (/ -1.0 (- (* x (* x 0.5)) x)))
double code(double x) {
	return -1.0 / ((x * (x * 0.5)) - x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / ((x * (x * 0.5d0)) - x)
end function
public static double code(double x) {
	return -1.0 / ((x * (x * 0.5)) - x);
}
def code(x):
	return -1.0 / ((x * (x * 0.5)) - x)
function code(x)
	return Float64(-1.0 / Float64(Float64(x * Float64(x * 0.5)) - x))
end
function tmp = code(x)
	tmp = -1.0 / ((x * (x * 0.5)) - x);
end
code[x_] := N[(-1.0 / N[(N[(x * N[(x * 0.5), $MachinePrecision]), $MachinePrecision] - x), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(x \cdot 0.5\right) - x}
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (80.4%)

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(0.5 \cdot x - 1\right)}} \]
  6. Step-by-step derivation
    1. sub-neg (80.4%)

      \[\leadsto \frac{-1}{x \cdot \color{blue}{\left(0.5 \cdot x + \left(-1\right)\right)}} \]
    2. metadata-eval (80.4%)

      \[\leadsto \frac{-1}{x \cdot \left(0.5 \cdot x + \color{blue}{-1}\right)} \]
    3. distribute-rgt-in (80.4%)

      \[\leadsto \frac{-1}{\color{blue}{\left(0.5 \cdot x\right) \cdot x + -1 \cdot x}} \]
    4. *-commutative (80.4%)

      \[\leadsto \frac{-1}{\color{blue}{\left(x \cdot 0.5\right)} \cdot x + -1 \cdot x} \]
    5. neg-mul-1 (80.4%)

      \[\leadsto \frac{-1}{\left(x \cdot 0.5\right) \cdot x + \color{blue}{\left(-x\right)}} \]
  7. Applied egg-rr (80.4%)

    \[\leadsto \frac{-1}{\color{blue}{\left(x \cdot 0.5\right) \cdot x + \left(-x\right)}} \]
  8. Final simplification (80.4%)

    \[\leadsto \frac{-1}{x \cdot \left(x \cdot 0.5\right) - x} \]
  9. Add Preprocessing

Alternative 10: 83.4% accurate, 22.8× speedup

\[\begin{array}{l} \\ \frac{-1}{x \cdot \left(-1 + x \cdot 0.5\right)} \end{array} \]
(FPCore (x) :precision binary64 (/ -1.0 (* x (+ -1.0 (* x 0.5)))))
double code(double x) {
	return -1.0 / (x * (-1.0 + (x * 0.5)));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (-1.0d0) / (x * ((-1.0d0) + (x * 0.5d0)))
end function
public static double code(double x) {
	return -1.0 / (x * (-1.0 + (x * 0.5)));
}
def code(x):
	return -1.0 / (x * (-1.0 + (x * 0.5)))
function code(x)
	return Float64(-1.0 / Float64(x * Float64(-1.0 + Float64(x * 0.5))))
end
function tmp = code(x)
	tmp = -1.0 / (x * (-1.0 + (x * 0.5)));
end
code[x_] := N[(-1.0 / N[(x * N[(-1.0 + N[(x * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{x \cdot \left(-1 + x \cdot 0.5\right)}
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (80.4%)

    \[\leadsto \frac{-1}{\color{blue}{x \cdot \left(0.5 \cdot x - 1\right)}} \]
  6. Final simplification (80.4%)

    \[\leadsto \frac{-1}{x \cdot \left(-1 + x \cdot 0.5\right)} \]
  7. Add Preprocessing

Alternative 11: 66.8% accurate, 41.0× speedup

\[\begin{array}{l} \\ 0.5 + \frac{1}{x} \end{array} \]
(FPCore (x) :precision binary64 (+ 0.5 (/ 1.0 x)))
double code(double x) {
	return 0.5 + (1.0 / x);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 + (1.0d0 / x)
end function
public static double code(double x) {
	return 0.5 + (1.0 / x);
}
def code(x):
	return 0.5 + (1.0 / x)
function code(x)
	return Float64(0.5 + Float64(1.0 / x))
end
function tmp = code(x)
	tmp = 0.5 + (1.0 / x);
end
code[x_] := N[(0.5 + N[(1.0 / x), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
0.5 + \frac{1}{x}
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (63.3%)

    \[\leadsto \color{blue}{\frac{1 + 0.5 \cdot x}{x}} \]
  6. Step-by-step derivation
    1. *-commutative (63.3%)

      \[\leadsto \frac{1 + \color{blue}{x \cdot 0.5}}{x} \]
  7. Simplified (63.3%)

    \[\leadsto \color{blue}{\frac{1 + x \cdot 0.5}{x}} \]
  8. Taylor expanded in x around inf (63.3%)

    \[\leadsto \color{blue}{0.5 + \frac{1}{x}} \]
  9. Step-by-step derivation
    1. +-commutative (63.3%)

      \[\leadsto \color{blue}{\frac{1}{x} + 0.5} \]
  10. Simplified (63.3%)

    \[\leadsto \color{blue}{\frac{1}{x} + 0.5} \]
  11. Final simplification (63.3%)

    \[\leadsto 0.5 + \frac{1}{x} \]
  12. Add Preprocessing

Alternative 12: 66.8% accurate, 68.3× speedup

\[\begin{array}{l} \\ \frac{1}{x} \end{array} \]
(FPCore (x) :precision binary64 (/ 1.0 x))
double code(double x) {
	return 1.0 / x;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0 / x
end function
public static double code(double x) {
	return 1.0 / x;
}
def code(x):
	return 1.0 / x
function code(x)
	return Float64(1.0 / x)
end
function tmp = code(x)
	tmp = 1.0 / x;
end
code[x_] := N[(1.0 / x), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{x}
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (62.9%)

    \[\leadsto \color{blue}{\frac{1}{x}} \]
  6. Add Preprocessing

Alternative 13: 3.4% accurate, 205.0× speedup

\[\begin{array}{l} \\ 1 \end{array} \]
(FPCore (x) :precision binary64 1.0)
double code(double x) {
	return 1.0;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 1.0d0
end function
public static double code(double x) {
	return 1.0;
}
def code(x):
	return 1.0
function code(x)
	return 1.0
end
function tmp = code(x)
	tmp = 1.0;
end
code[x_] := 1.0
\begin{array}{l}

\\
1
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. expm1-define (100.0%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\mathsf{expm1}\left(x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{e^{x}}{\mathsf{expm1}\left(x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (96.9%)

    \[\leadsto \frac{e^{x}}{\color{blue}{x}} \]
  6. Taylor expanded in x around 0 (62.1%)

    \[\leadsto \color{blue}{\frac{1 + x}{x}} \]
  7. Taylor expanded in x around inf (3.8%)

    \[\leadsto \color{blue}{1} \]
  8. Add Preprocessing

Alternative 14: 3.2% accurate, 205.0× speedup

\[\begin{array}{l} \\ 0.5 \end{array} \]
(FPCore (x) :precision binary64 0.5)
double code(double x) {
	return 0.5;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0
end function
public static double code(double x) {
	return 0.5;
}
def code(x):
	return 0.5
function code(x)
	return 0.5
end
function tmp = code(x)
	tmp = 0.5;
end
code[x_] := 0.5
\begin{array}{l}

\\
0.5
\end{array}
Derivation
  1. Initial program (42.2%)

    \[\frac{e^{x}}{e^{x} - 1} \]
  2. Step-by-step derivation
    1. sub-neg (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} + \left(-1\right)}} \]
    2. +-commutative (42.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{\left(-1\right) + e^{x}}} \]
    3. rgt-mult-inverse (6.3%)

      \[\leadsto \frac{e^{x}}{\left(-\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}\right) + e^{x}} \]
    4. exp-neg (6.2%)

      \[\leadsto \frac{e^{x}}{\left(-e^{x} \cdot \color{blue}{e^{-x}}\right) + e^{x}} \]
    5. distribute-rgt-neg-out (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(-e^{-x}\right)} + e^{x}} \]
    6. *-rgt-identity (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(-e^{-x}\right) + \color{blue}{e^{x} \cdot 1}} \]
    7. distribute-lft-in (6.2%)

      \[\leadsto \frac{e^{x}}{\color{blue}{e^{x} \cdot \left(\left(-e^{-x}\right) + 1\right)}} \]
    8. neg-sub0 (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \left(\color{blue}{\left(0 - e^{-x}\right)} + 1\right)} \]
    9. associate-+l- (6.2%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(0 - \left(e^{-x} - 1\right)\right)}} \]
    10. neg-sub0 (6.3%)

      \[\leadsto \frac{e^{x}}{e^{x} \cdot \color{blue}{\left(-\left(e^{-x} - 1\right)\right)}} \]
    11. associate-/r* (6.3%)

      \[\leadsto \color{blue}{\frac{\frac{e^{x}}{e^{x}}}{-\left(e^{-x} - 1\right)}} \]
    12. *-rgt-identity (6.3%)

      \[\leadsto \frac{\frac{\color{blue}{e^{x} \cdot 1}}{e^{x}}}{-\left(e^{-x} - 1\right)} \]
    13. associate-*r/ (6.3%)

      \[\leadsto \frac{\color{blue}{e^{x} \cdot \frac{1}{e^{x}}}}{-\left(e^{-x} - 1\right)} \]
    14. rgt-mult-inverse (42.2%)

      \[\leadsto \frac{\color{blue}{1}}{-\left(e^{-x} - 1\right)} \]
    15. distribute-frac-neg2 (42.2%)

      \[\leadsto \color{blue}{-\frac{1}{e^{-x} - 1}} \]
    16. distribute-neg-frac (42.2%)

      \[\leadsto \color{blue}{\frac{-1}{e^{-x} - 1}} \]
    17. metadata-eval (42.2%)

      \[\leadsto \frac{\color{blue}{-1}}{e^{-x} - 1} \]
    18. expm1-define (100.0%)

      \[\leadsto \frac{-1}{\color{blue}{\mathsf{expm1}\left(-x\right)}} \]
  3. Simplified (100.0%)

    \[\leadsto \color{blue}{\frac{-1}{\mathsf{expm1}\left(-x\right)}} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 (63.3%)

    \[\leadsto \color{blue}{\frac{1 + 0.5 \cdot x}{x}} \]
  6. Step-by-step derivation
    1. *-commutative (63.3%)

      \[\leadsto \frac{1 + \color{blue}{x \cdot 0.5}}{x} \]
  7. Simplified (63.3%)

    \[\leadsto \color{blue}{\frac{1 + x \cdot 0.5}{x}} \]
  8. Taylor expanded in x around inf (3.5%)

    \[\leadsto \color{blue}{0.5} \]
  9. Add Preprocessing

Developer Target 1: 100.0% accurate, 2.0× speedup

\[\begin{array}{l} \\ \frac{-1}{\mathsf{expm1}\left(-x\right)} \end{array} \]
(FPCore (x) :precision binary64 (/ (- 1.0) (expm1 (- x))))
double code(double x) {
	return -1.0 / expm1(-x);
}
public static double code(double x) {
	return -1.0 / Math.expm1(-x);
}
def code(x):
	return -1.0 / math.expm1(-x)
function code(x)
	return Float64(Float64(-1.0) / expm1(Float64(-x)))
end
code[x_] := N[((-1.0) / N[(Exp[(-x)] - 1), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{\mathsf{expm1}\left(-x\right)}
\end{array}

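The developer target is the well-conditioned form of this expression. A quick sanity check (a sketch, not part of the report): the target agrees closely with the direct formula where both are well-conditioned, and still tracks the small-x series 1/x + 1/2 + x/12 where the direct formula loses digits:

```python
import math

def direct(x):
    # the original program: exp(x) / (exp(x) - 1.0)
    return math.exp(x) / (math.exp(x) - 1.0)

def target(x):
    # the developer target: -1 / expm1(-x)
    return -1.0 / math.expm1(-x)

# Moderate arguments: both forms are well-conditioned and agree closely.
for x in (-20.0, -1.0, 1.0, 20.0):
    assert abs(target(x) - direct(x)) <= 1e-12 * abs(direct(x))

# Tiny argument: the target tracks the series 1/x + 1/2 + x/12 - ...
x = 1e-8
print(abs(target(x) - (1.0 / x + 0.5)))
```
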
Reproduce

herbie shell --seed 2024145 
(FPCore (x)
  :name "expq2 (section 3.11)"
  :precision binary64
  :pre (> 710.0 x)

  :alt
  (! :herbie-platform default (/ (- 1) (expm1 (- x))))

  (/ (exp x) (- (exp x) 1.0)))