Rust f64::atanh

Percentage Accurate: 100.0% → 100.0%
Time: 11.8s
Alternatives: 8
Speedup: 1.0×

Specification

\[ \tanh^{-1} x \]
(FPCore (x) :precision binary64 (atanh x))
double code(double x) {
	return atanh(x);
}
def code(x):
	return math.atanh(x)
function code(x)
	return atanh(x)
end
function tmp = code(x)
	tmp = atanh(x);
end
code[x_] := N[ArcTanh[x], $MachinePrecision]
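
The listing above covers the generated targets; since this report is about Rust's f64::atanh, the equivalent Rust form (a trivial sketch, not part of the generated output) is:

fn code(x: f64) -> f64 {
	x.atanh()
}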

Sampling outcomes in binary64 precision.

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the plot title); the vertical axis is accuracy, and higher is better. Red represents the original program, while blue represents Herbie's suggestion. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 8 alternatives:

Alternative  Accuracy  Speedup
1            100.0%    0.9×
2            100.0%    1.0×
3            100.0%    1.0×
4            99.8%     4.7×
5            99.7%     6.4×
6            99.6%     8.4×
7            99.6%     9.9×
8            99.1%     21.8×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, each blue circle shows an alternative, and the line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[ 0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
(FPCore (x) :precision binary64 (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
double code(double x) {
	return 0.5 * log1p(((2.0 * x) / (1.0 - x)));
}
public static double code(double x) {
	return 0.5 * Math.log1p(((2.0 * x) / (1.0 - x)));
}
def code(x):
	return 0.5 * math.log1p(((2.0 * x) / (1.0 - x)))
function code(x)
	return Float64(0.5 * log1p(Float64(Float64(2.0 * x) / Float64(1.0 - x))))
end
code[x_] := N[(0.5 * N[Log[1 + N[(N[(2.0 * x), $MachinePrecision] / N[(1.0 - x), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
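
The initial program is the identity atanh(x) = 0.5 * log((1 + x) / (1 - x)) routed through log1p, using (1 + x) / (1 - x) = 1 + 2x / (1 - x) so that accuracy is kept for x near 0. A Rust sketch of the same expression (function name illustrative):

fn code(x: f64) -> f64 {
	// ln_1p(t) computes ln(1 + t) accurately when t is small.
	0.5 * ((2.0 * x) / (1.0 - x)).ln_1p()
}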

Alternative 1: 100.0% accurate, 0.9× speedup

\[ 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x + \left(x \cdot x + 1\right)\right)\right) \]
(FPCore (x)
 :precision binary64
 (*
  0.5
  (log1p (* (/ (* x -2.0) (+ -1.0 (* x (* x x)))) (+ x (+ (* x x) 1.0))))))
double code(double x) {
	return 0.5 * log1p((((x * -2.0) / (-1.0 + (x * (x * x)))) * (x + ((x * x) + 1.0))));
}
public static double code(double x) {
	return 0.5 * Math.log1p((((x * -2.0) / (-1.0 + (x * (x * x)))) * (x + ((x * x) + 1.0))));
}
def code(x):
	return 0.5 * math.log1p((((x * -2.0) / (-1.0 + (x * (x * x)))) * (x + ((x * x) + 1.0))))
function code(x)
	return Float64(0.5 * log1p(Float64(Float64(Float64(x * -2.0) / Float64(-1.0 + Float64(x * Float64(x * x)))) * Float64(x + Float64(Float64(x * x) + 1.0)))))
end
code[x_] := N[(0.5 * N[Log[1 + N[(N[(N[(x * -2.0), $MachinePrecision] / N[(-1.0 + N[(x * N[(x * x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] * N[(x + N[(N[(x * x), $MachinePrecision] + 1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{-2}{x + -1} \cdot x}\right) \]
    2. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{-2 \cdot x}{x + -1}}\right) \]
    3. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{\color{blue}{\left(-2\right)} \cdot x}{x + -1}\right) \]
    4. distribute-lft-neg-in 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{\color{blue}{-2 \cdot x}}{x + -1}\right) \]
    5. flip3-+ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{-2 \cdot x}{\color{blue}{\frac{{x}^{3} + {-1}^{3}}{x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)}}}\right) \]
    6. associate-/r/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{-2 \cdot x}{{x}^{3} + {-1}^{3}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)}\right) \]
    7. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{-\color{blue}{x \cdot 2}}{{x}^{3} + {-1}^{3}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    8. distribute-rgt-neg-in 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{\color{blue}{x \cdot \left(-2\right)}}{{x}^{3} + {-1}^{3}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    9. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot \color{blue}{-2}}{{x}^{3} + {-1}^{3}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{{x}^{3} + \color{blue}{-1}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    11. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{\color{blue}{-1 + {x}^{3}}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    12. cube-mult 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + \color{blue}{x \cdot \left(x \cdot x\right)}} \cdot \left(x \cdot x + \left(-1 \cdot -1 - x \cdot -1\right)\right)\right) \]
    13. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x \cdot x + \left(\color{blue}{1} - x \cdot -1\right)\right)\right) \]
    14. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x \cdot x + \left(1 - \color{blue}{-1 \cdot x}\right)\right)\right) \]
    15. neg-mul-1 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x \cdot x + \left(1 - \color{blue}{\left(-x\right)}\right)\right)\right) \]
    16. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x \cdot x + \left(1 - \color{blue}{\left(0 - x\right)}\right)\right)\right) \]
  6. Applied egg-rr 100.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x \cdot x + \left(1 - \left(0 - x\right)\right)\right)}\right) \]
  7. Step-by-step derivation
    1. associate-+r- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \color{blue}{\left(\left(x \cdot x + 1\right) - \left(0 - x\right)\right)}\right) \]
    2. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \color{blue}{\left(\left(x \cdot x + 1\right) + \left(-\left(0 - x\right)\right)\right)}\right) \]
    3. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(\left(x \cdot x + 1\right) + \left(-\color{blue}{\left(-x\right)}\right)\right)\right) \]
    4. remove-double-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(\left(x \cdot x + 1\right) + \color{blue}{x}\right)\right) \]
  8. Applied egg-rr 100.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \color{blue}{\left(\left(x \cdot x + 1\right) + x\right)}\right) \]
  9. Final simplification 100.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot -2}{-1 + x \cdot \left(x \cdot x\right)} \cdot \left(x + \left(x \cdot x + 1\right)\right)\right) \]
  10. Add Preprocessing
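
The cubic denominator introduced in step 5 (flip3-+) comes from the sum-of-cubes identity, instantiated with a = x and b = -1:

\[ a + b = \frac{a^3 + b^3}{a^2 - a \cdot b + b^2} \]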

Alternative 2: 100.0% accurate, 1.0× speedup

\[ 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot 2}{1 - x}\right) \]
(FPCore (x) :precision binary64 (* 0.5 (log1p (/ (* x 2.0) (- 1.0 x)))))
double code(double x) {
	return 0.5 * log1p(((x * 2.0) / (1.0 - x)));
}
public static double code(double x) {
	return 0.5 * Math.log1p(((x * 2.0) / (1.0 - x)));
}
def code(x):
	return 0.5 * math.log1p(((x * 2.0) / (1.0 - x)))
function code(x)
	return Float64(0.5 * log1p(Float64(Float64(x * 2.0) / Float64(1.0 - x))))
end
code[x_] := N[(0.5 * N[Log[1 + N[(N[(x * 2.0), $MachinePrecision] / N[(1.0 - x), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Add Preprocessing
  3. Final simplification 100.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{x \cdot 2}{1 - x}\right) \]
  4. Add Preprocessing

Alternative 3: 100.0% accurate, 1.0× speedup

\[ 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right) \]
(FPCore (x) :precision binary64 (* 0.5 (log1p (* x (/ -2.0 (+ x -1.0))))))
double code(double x) {
	return 0.5 * log1p((x * (-2.0 / (x + -1.0))));
}
public static double code(double x) {
	return 0.5 * Math.log1p((x * (-2.0 / (x + -1.0))));
}
def code(x):
	return 0.5 * math.log1p((x * (-2.0 / (x + -1.0))))
function code(x)
	return Float64(0.5 * log1p(Float64(x * Float64(-2.0 / Float64(x + -1.0)))))
end
code[x_] := N[(0.5 * N[Log[1 + N[(x * N[(-2.0 / N[(x + -1.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Add Preprocessing

Alternative 4: 99.8% accurate, 4.7× speedup

\[ 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \left(x \cdot x\right) \cdot \left(0.4 + \left(x \cdot x\right) \cdot 0.2857142857142857\right)\right)\right)\right)\right) \]
(FPCore (x)
 :precision binary64
 (*
  0.5
  (*
   x
   (+
    2.0
    (*
     x
     (*
      x
      (+
       0.6666666666666666
       (* (* x x) (+ 0.4 (* (* x x) 0.2857142857142857))))))))))
double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + ((x * x) * (0.4 + ((x * x) * 0.2857142857142857))))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 * (x * (2.0d0 + (x * (x * (0.6666666666666666d0 + ((x * x) * (0.4d0 + ((x * x) * 0.2857142857142857d0))))))))
end function
public static double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + ((x * x) * (0.4 + ((x * x) * 0.2857142857142857))))))));
}
def code(x):
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + ((x * x) * (0.4 + ((x * x) * 0.2857142857142857))))))))
function code(x)
	return Float64(0.5 * Float64(x * Float64(2.0 + Float64(x * Float64(x * Float64(0.6666666666666666 + Float64(Float64(x * x) * Float64(0.4 + Float64(Float64(x * x) * 0.2857142857142857)))))))))
end
function tmp = code(x)
	tmp = 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + ((x * x) * (0.4 + ((x * x) * 0.2857142857142857))))))));
end
code[x_] := N[(0.5 * N[(x * N[(2.0 + N[(x * N[(x * N[(0.6666666666666666 + N[(N[(x * x), $MachinePrecision] * N[(0.4 + N[(N[(x * x), $MachinePrecision] * 0.2857142857142857), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 99.5%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + {x}^{2} \cdot \left(0.6666666666666666 + {x}^{2} \cdot \left(0.4 + 0.2857142857142857 \cdot {x}^{2}\right)\right)\right)\right)} \]
  6. Step-by-step derivation
    1. unpow2 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{\left(x \cdot x\right)} \cdot \left(0.6666666666666666 + {x}^{2} \cdot \left(0.4 + 0.2857142857142857 \cdot {x}^{2}\right)\right)\right)\right) \]
    2. associate-*l* 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{x \cdot \left(x \cdot \left(0.6666666666666666 + {x}^{2} \cdot \left(0.4 + 0.2857142857142857 \cdot {x}^{2}\right)\right)\right)}\right)\right) \]
    3. unpow2 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \color{blue}{\left(x \cdot x\right)} \cdot \left(0.4 + 0.2857142857142857 \cdot {x}^{2}\right)\right)\right)\right)\right) \]
    4. *-commutative 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \left(x \cdot x\right) \cdot \left(0.4 + \color{blue}{{x}^{2} \cdot 0.2857142857142857}\right)\right)\right)\right)\right) \]
    5. unpow2 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \left(x \cdot x\right) \cdot \left(0.4 + \color{blue}{\left(x \cdot x\right)} \cdot 0.2857142857142857\right)\right)\right)\right)\right) \]
  7. Simplified 99.5%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \left(x \cdot x\right) \cdot \left(0.4 + \left(x \cdot x\right) \cdot 0.2857142857142857\right)\right)\right)\right)\right)} \]
  8. Add Preprocessing
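
The constants in this alternative are 2/3, 2/5, and 2/7: since log1p(2x / (1 - x)) = log((1 + x) / (1 - x)) = 2 * atanh(x), the Taylor expansion in step 5 reduces the whole expression to the degree-7 truncation of the Maclaurin series

\[ \tanh^{-1} x = x + \frac{x^3}{3} + \frac{x^5}{5} + \frac{x^7}{7} + \cdots \]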

Alternative 5: 99.7% accurate, 6.4× speedup

\[ 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + x \cdot \left(x \cdot 0.4\right)\right)\right)\right)\right) \]
(FPCore (x)
 :precision binary64
 (* 0.5 (* x (+ 2.0 (* x (* x (+ 0.6666666666666666 (* x (* x 0.4)))))))))
double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + (x * (x * 0.4)))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 * (x * (2.0d0 + (x * (x * (0.6666666666666666d0 + (x * (x * 0.4d0)))))))
end function
public static double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + (x * (x * 0.4)))))));
}
def code(x):
	return 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + (x * (x * 0.4)))))))
function code(x)
	return Float64(0.5 * Float64(x * Float64(2.0 + Float64(x * Float64(x * Float64(0.6666666666666666 + Float64(x * Float64(x * 0.4))))))))
end
function tmp = code(x)
	tmp = 0.5 * (x * (2.0 + (x * (x * (0.6666666666666666 + (x * (x * 0.4)))))));
end
code[x_] := N[(0.5 * N[(x * N[(2.0 + N[(x * N[(x * N[(0.6666666666666666 + N[(x * N[(x * 0.4), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 99.5%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + {x}^{2} \cdot \left(0.6666666666666666 + 0.4 \cdot {x}^{2}\right)\right)\right)} \]
  6. Step-by-step derivation
    1. unpow2 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{\left(x \cdot x\right)} \cdot \left(0.6666666666666666 + 0.4 \cdot {x}^{2}\right)\right)\right) \]
    2. associate-*l* 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{x \cdot \left(x \cdot \left(0.6666666666666666 + 0.4 \cdot {x}^{2}\right)\right)}\right)\right) \]
    3. *-commutative 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \color{blue}{{x}^{2} \cdot 0.4}\right)\right)\right)\right) \]
    4. unpow2 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \color{blue}{\left(x \cdot x\right)} \cdot 0.4\right)\right)\right)\right) \]
    5. associate-*l* 99.5%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + \color{blue}{x \cdot \left(x \cdot 0.4\right)}\right)\right)\right)\right) \]
  7. Simplified 99.5%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + x \cdot \left(x \cdot \left(0.6666666666666666 + x \cdot \left(x \cdot 0.4\right)\right)\right)\right)\right)} \]
  8. Add Preprocessing

Alternative 6: 99.6% accurate, 8.4× speedup

\[ 0.5 \cdot \left(x \cdot 2 + x \cdot \left(\left(x \cdot x\right) \cdot 0.6666666666666666\right)\right) \]
(FPCore (x)
 :precision binary64
 (* 0.5 (+ (* x 2.0) (* x (* (* x x) 0.6666666666666666)))))
double code(double x) {
	return 0.5 * ((x * 2.0) + (x * ((x * x) * 0.6666666666666666)));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 * ((x * 2.0d0) + (x * ((x * x) * 0.6666666666666666d0)))
end function
public static double code(double x) {
	return 0.5 * ((x * 2.0) + (x * ((x * x) * 0.6666666666666666)));
}
def code(x):
	return 0.5 * ((x * 2.0) + (x * ((x * x) * 0.6666666666666666)))
function code(x)
	return Float64(0.5 * Float64(Float64(x * 2.0) + Float64(x * Float64(Float64(x * x) * 0.6666666666666666))))
end
function tmp = code(x)
	tmp = 0.5 * ((x * 2.0) + (x * ((x * x) * 0.6666666666666666)));
end
code[x_] := N[(0.5 * N[(N[(x * 2.0), $MachinePrecision] + N[(x * N[(N[(x * x), $MachinePrecision] * 0.6666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 99.2%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + 0.6666666666666666 \cdot {x}^{2}\right)\right)} \]
  6. Step-by-step derivation
    1. *-commutative 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{{x}^{2} \cdot 0.6666666666666666}\right)\right) \]
    2. unpow2 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{\left(x \cdot x\right)} \cdot 0.6666666666666666\right)\right) \]
    3. associate-*l* 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{x \cdot \left(x \cdot 0.6666666666666666\right)}\right)\right) \]
  7. Simplified 99.2%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + x \cdot \left(x \cdot 0.6666666666666666\right)\right)\right)} \]
  8. Step-by-step derivation
    1. distribute-lft-in 99.2%

      \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot 2 + x \cdot \left(x \cdot \left(x \cdot 0.6666666666666666\right)\right)\right)} \]
    2. associate-*r* 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot 2 + x \cdot \color{blue}{\left(\left(x \cdot x\right) \cdot 0.6666666666666666\right)}\right) \]
  9. Applied egg-rr 99.2%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot 2 + x \cdot \left(\left(x \cdot x\right) \cdot 0.6666666666666666\right)\right)} \]
  10. Add Preprocessing

Alternative 7: 99.6% accurate, 9.9× speedup

\[ 0.5 \cdot \left(x \cdot \left(2 + x \cdot \left(x \cdot 0.6666666666666666\right)\right)\right) \]
(FPCore (x)
 :precision binary64
 (* 0.5 (* x (+ 2.0 (* x (* x 0.6666666666666666))))))
double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * 0.6666666666666666))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 * (x * (2.0d0 + (x * (x * 0.6666666666666666d0))))
end function
public static double code(double x) {
	return 0.5 * (x * (2.0 + (x * (x * 0.6666666666666666))));
}
def code(x):
	return 0.5 * (x * (2.0 + (x * (x * 0.6666666666666666))))
function code(x)
	return Float64(0.5 * Float64(x * Float64(2.0 + Float64(x * Float64(x * 0.6666666666666666)))))
end
function tmp = code(x)
	tmp = 0.5 * (x * (2.0 + (x * (x * 0.6666666666666666))));
end
code[x_] := N[(0.5 * N[(x * N[(2.0 + N[(x * N[(x * 0.6666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 99.2%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + 0.6666666666666666 \cdot {x}^{2}\right)\right)} \]
  6. Step-by-step derivation
    1. *-commutative 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{{x}^{2} \cdot 0.6666666666666666}\right)\right) \]
    2. unpow2 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{\left(x \cdot x\right)} \cdot 0.6666666666666666\right)\right) \]
    3. associate-*l* 99.2%

      \[\leadsto 0.5 \cdot \left(x \cdot \left(2 + \color{blue}{x \cdot \left(x \cdot 0.6666666666666666\right)}\right)\right) \]
  7. Simplified 99.2%

    \[\leadsto 0.5 \cdot \color{blue}{\left(x \cdot \left(2 + x \cdot \left(x \cdot 0.6666666666666666\right)\right)\right)} \]
  8. Add Preprocessing
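
Alternatives 4 through 7 all leave the polynomial in Horner form, which maps directly onto fused multiply-adds. A hedged Rust sketch of this alternative using f64::mul_add (a variation on the generated code, not part of the report):

fn code(x: f64) -> f64 {
	// 0.5 * (x * (2 + x * (x * 2/3))), with the inner step fused.
	let x2 = x * x;
	0.5 * (x * x2.mul_add(0.6666666666666666, 2.0))
}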

Alternative 8: 99.1% accurate, 21.8× speedup

\[ 0.5 \cdot \left(x \cdot 2\right) \]
(FPCore (x) :precision binary64 (* 0.5 (* x 2.0)))
double code(double x) {
	return 0.5 * (x * 2.0);
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = 0.5d0 * (x * 2.0d0)
end function
public static double code(double x) {
	return 0.5 * (x * 2.0);
}
def code(x):
	return 0.5 * (x * 2.0)
function code(x)
	return Float64(0.5 * Float64(x * 2.0))
end
function tmp = code(x)
	tmp = 0.5 * (x * 2.0);
end
code[x_] := N[(0.5 * N[(x * 2.0), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-*l/ 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{1 - x} \cdot x}\right) \]
    2. *-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \frac{2}{1 - x}}\right) \]
    3. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{1 + \left(-x\right)}}\right) \]
    4. +-commutative 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(-x\right) + 1}}\right) \]
    5. neg-sub0 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{\left(0 - x\right)} + 1}\right) \]
    6. associate-+l- 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{0 - \left(x - 1\right)}}\right) \]
    7. sub0-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{2}{\color{blue}{-\left(x - 1\right)}}\right) \]
    8. distribute-frac-neg2 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\left(-\frac{2}{x - 1}\right)}\right) \]
    9. distribute-neg-frac 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \color{blue}{\frac{-2}{x - 1}}\right) \]
    10. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{\color{blue}{-2}}{x - 1}\right) \]
    11. sub-neg 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{\color{blue}{x + \left(-1\right)}}\right) \]
    12. metadata-eval 100.0%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + \color{blue}{-1}}\right) \]
  3. Simplified 100.0%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(x \cdot \frac{-2}{x + -1}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0 98.8%

    \[\leadsto 0.5 \cdot \color{blue}{\left(2 \cdot x\right)} \]
  6. Final simplification 98.8%

    \[\leadsto 0.5 \cdot \left(x \cdot 2\right) \]
  7. Add Preprocessing
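
Alternative 8 keeps only the leading term of the series, i.e. atanh(x) ≈ x. A small Rust harness (illustrative, not part of the report) comparing the initial program and this linear form against the standard library shows how the error grows with |x|:

fn main() {
	for &x in &[1e-3_f64, 0.1, 0.5, 0.9] {
		let exact = x.atanh();
		let log1p_form = 0.5 * ((2.0 * x) / (1.0 - x)).ln_1p();
		let linear = 0.5 * (x * 2.0);
		// Relative error of each rewriting against f64::atanh.
		println!(
			"x = {x}: log1p err = {:e}, linear err = {:e}",
			((log1p_form - exact) / exact).abs(),
			((linear - exact) / exact).abs()
		);
	}
}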

Reproduce

herbie shell --seed 2024107 
(FPCore (x)
  :name "Rust f64::atanh"
  :precision binary64
  (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))