Hyperbolic arc-(co)tangent

Percentage Accurate: 8.5% → 99.8%
Time: 8.9s
Alternatives: 4
Speedup: 111.0×

Specification

\[\begin{array}{l} \\ \frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \end{array} \]
(FPCore (x) :precision binary64 (* (/ 1.0 2.0) (log (/ (+ 1.0 x) (- 1.0 x)))))
double code(double x) {
	return (1.0 / 2.0) * log(((1.0 + x) / (1.0 - x)));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (1.0d0 / 2.0d0) * log(((1.0d0 + x) / (1.0d0 - x)))
end function
public static double code(double x) {
	return (1.0 / 2.0) * Math.log(((1.0 + x) / (1.0 - x)));
}
def code(x):
	return (1.0 / 2.0) * math.log(((1.0 + x) / (1.0 - x)))
function code(x)
	return Float64(Float64(1.0 / 2.0) * log(Float64(Float64(1.0 + x) / Float64(1.0 - x))))
end
function tmp = code(x)
	tmp = (1.0 / 2.0) * log(((1.0 + x) / (1.0 - x)));
end
code[x_] := N[(N[(1.0 / 2.0), $MachinePrecision] * N[Log[N[(N[(1.0 + x), $MachinePrecision] / N[(1.0 - x), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right)
\end{array}
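The accuracy problem is easy to reproduce before looking at Herbie's alternatives. The sketch below (plain Python, using the standard library's math.atanh as a reference; the test point is illustrative) shows that for small x, rounding 1 + x and 1 - x before taking the logarithm destroys most of the result's significant digits:

```python
import math

def naive_atanh(x):
    # The original program: (1/2) * log((1 + x) / (1 - x))
    return (1.0 / 2.0) * math.log((1.0 + x) / (1.0 - x))

x = 1e-12
reference = math.atanh(x)  # library arc-tanh, used here as the reference value
rel_err = abs(naive_atanh(x) - reference) / abs(reference)
print(f"x = {x}: relative error of the naive formula = {rel_err:.2e}")
```

Computing 1 + x keeps only the bits of x above about 2^-53, so the argument of the logarithm carries an absolute error near machine epsilon; relative to a result of size x, that error is enormous.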

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 4 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, each blue circle shows an alternative, and the line shows the best available speed-accuracy tradeoffs.

Initial Program: 8.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \end{array} \]
(FPCore (x) :precision binary64 (* (/ 1.0 2.0) (log (/ (+ 1.0 x) (- 1.0 x)))))
double code(double x) {
	return (1.0 / 2.0) * log(((1.0 + x) / (1.0 - x)));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = (1.0d0 / 2.0d0) * log(((1.0d0 + x) / (1.0d0 - x)))
end function
public static double code(double x) {
	return (1.0 / 2.0) * Math.log(((1.0 + x) / (1.0 - x)));
}
def code(x):
	return (1.0 / 2.0) * math.log(((1.0 + x) / (1.0 - x)))
function code(x)
	return Float64(Float64(1.0 / 2.0) * log(Float64(Float64(1.0 + x) / Float64(1.0 - x))))
end
function tmp = code(x)
	tmp = (1.0 / 2.0) * log(((1.0 + x) / (1.0 - x)));
end
code[x_] := N[(N[(1.0 / 2.0), $MachinePrecision] * N[Log[N[(N[(1.0 + x), $MachinePrecision] / N[(1.0 - x), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right)
\end{array}

Alternative 1: 99.8% accurate, 5.3× speedup

\[\begin{array}{l} \\ x \cdot \left(1 + \left(x \cdot x\right) \cdot \left(0.3333333333333333 + \left(x \cdot x\right) \cdot \left(0.2 + x \cdot \left(x \cdot 0.14285714285714285\right)\right)\right)\right) \end{array} \]
(FPCore (x)
 :precision binary64
 (*
  x
  (+
   1.0
   (*
    (* x x)
    (+
     0.3333333333333333
     (* (* x x) (+ 0.2 (* x (* x 0.14285714285714285)))))))))
double code(double x) {
	return x * (1.0 + ((x * x) * (0.3333333333333333 + ((x * x) * (0.2 + (x * (x * 0.14285714285714285)))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x * (1.0d0 + ((x * x) * (0.3333333333333333d0 + ((x * x) * (0.2d0 + (x * (x * 0.14285714285714285d0)))))))
end function
public static double code(double x) {
	return x * (1.0 + ((x * x) * (0.3333333333333333 + ((x * x) * (0.2 + (x * (x * 0.14285714285714285)))))));
}
def code(x):
	return x * (1.0 + ((x * x) * (0.3333333333333333 + ((x * x) * (0.2 + (x * (x * 0.14285714285714285)))))))
function code(x)
	return Float64(x * Float64(1.0 + Float64(Float64(x * x) * Float64(0.3333333333333333 + Float64(Float64(x * x) * Float64(0.2 + Float64(x * Float64(x * 0.14285714285714285))))))))
end
function tmp = code(x)
	tmp = x * (1.0 + ((x * x) * (0.3333333333333333 + ((x * x) * (0.2 + (x * (x * 0.14285714285714285)))))));
end
code[x_] := N[(x * N[(1.0 + N[(N[(x * x), $MachinePrecision] * N[(0.3333333333333333 + N[(N[(x * x), $MachinePrecision] * N[(0.2 + N[(x * N[(x * 0.14285714285714285), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
x \cdot \left(1 + \left(x \cdot x\right) \cdot \left(0.3333333333333333 + \left(x \cdot x\right) \cdot \left(0.2 + x \cdot \left(x \cdot 0.14285714285714285\right)\right)\right)\right)
\end{array}
Derivation
  1. Initial program 7.8%

    \[\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\left(\frac{1}{2}\right), \color{blue}{\log \left(\frac{1 + x}{1 - x}\right)}\right) \]
    2. metadata-eval (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \log \color{blue}{\left(\frac{1 + x}{1 - x}\right)}\right) \]
    3. log-lowering-log.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\left(\frac{1 + x}{1 - x}\right)\right)\right) \]
    4. /-lowering-/.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\left(1 + x\right), \left(1 - x\right)\right)\right)\right) \]
    5. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \left(1 - x\right)\right)\right)\right) \]
    6. --lowering--.f64 (7.8%)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \mathsf{-.f64}\left(1, x\right)\right)\right)\right) \]
  3. Simplified (7.8%)

    \[\leadsto \color{blue}{0.5 \cdot \log \left(\frac{1 + x}{1 - x}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{3} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)} \]
  6. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{3} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)}\right) \]
    2. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{3} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)}\right)\right) \]
    3. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\left({x}^{2}\right), \color{blue}{\left(\frac{1}{3} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)}\right)\right)\right) \]
    4. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\left(x \cdot x\right), \left(\color{blue}{\frac{1}{3}} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)\right)\right) \]
    5. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \left(\color{blue}{\frac{1}{3}} + {x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)\right)\right) \]
    6. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)\right)}\right)\right)\right)\right) \]
    7. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\left({x}^{2}\right), \color{blue}{\left(\frac{1}{5} + \frac{1}{7} \cdot {x}^{2}\right)}\right)\right)\right)\right)\right) \]
    8. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\left(x \cdot x\right), \left(\color{blue}{\frac{1}{5}} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)\right)\right)\right) \]
    9. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \left(\color{blue}{\frac{1}{5}} + \frac{1}{7} \cdot {x}^{2}\right)\right)\right)\right)\right)\right) \]
    10. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \color{blue}{\left(\frac{1}{7} \cdot {x}^{2}\right)}\right)\right)\right)\right)\right)\right) \]
    11. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \left({x}^{2} \cdot \color{blue}{\frac{1}{7}}\right)\right)\right)\right)\right)\right)\right) \]
    12. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \left(\left(x \cdot x\right) \cdot \frac{1}{7}\right)\right)\right)\right)\right)\right)\right) \]
    13. associate-*l* (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \left(x \cdot \color{blue}{\left(x \cdot \frac{1}{7}\right)}\right)\right)\right)\right)\right)\right)\right) \]
    14. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \mathsf{*.f64}\left(x, \color{blue}{\left(x \cdot \frac{1}{7}\right)}\right)\right)\right)\right)\right)\right)\right) \]
    15. *-lowering-*.f64 (99.8%)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(\mathsf{*.f64}\left(x, x\right), \mathsf{+.f64}\left(\frac{1}{5}, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \color{blue}{\frac{1}{7}}\right)\right)\right)\right)\right)\right)\right)\right) \]
  7. Simplified (99.8%)

    \[\leadsto \color{blue}{x \cdot \left(1 + \left(x \cdot x\right) \cdot \left(0.3333333333333333 + \left(x \cdot x\right) \cdot \left(0.2 + x \cdot \left(x \cdot 0.14285714285714285\right)\right)\right)\right)} \]
  8. Add Preprocessing
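Alternative 1 is the Taylor series of artanh truncated after the x^7 term, so its truncation error grows roughly like x^8/9 in relative terms. A quick Python sanity check against math.atanh (the tolerances here are rough estimates, not figures from the report) illustrates both the accuracy near 0 and the degradation at larger |x|:

```python
import math

def alt1(x):
    # Alternative 1: artanh(x) ~ x + x**3/3 + x**5/5 + x**7/7, nested form
    return x * (1.0 + (x * x) * (0.3333333333333333
                + (x * x) * (0.2 + x * (x * 0.14285714285714285))))

for x in (0.01, 0.1, 0.5):
    rel_err = abs(alt1(x) - math.atanh(x)) / abs(math.atanh(x))
    print(f"x = {x}: relative error = {rel_err:.2e}")
```

Near |x| = 1 the series diverges, so the 99.8% figure reflects Herbie's sampled input distribution, not uniform accuracy over the whole domain.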

Alternative 2: 99.7% accurate, 7.4× speedup

\[\begin{array}{l} \\ x \cdot \left(1 + x \cdot \left(x \cdot \left(0.3333333333333333 + x \cdot \left(x \cdot 0.2\right)\right)\right)\right) \end{array} \]
(FPCore (x)
 :precision binary64
 (* x (+ 1.0 (* x (* x (+ 0.3333333333333333 (* x (* x 0.2))))))))
double code(double x) {
	return x * (1.0 + (x * (x * (0.3333333333333333 + (x * (x * 0.2))))));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x * (1.0d0 + (x * (x * (0.3333333333333333d0 + (x * (x * 0.2d0))))))
end function
public static double code(double x) {
	return x * (1.0 + (x * (x * (0.3333333333333333 + (x * (x * 0.2))))));
}
def code(x):
	return x * (1.0 + (x * (x * (0.3333333333333333 + (x * (x * 0.2))))))
function code(x)
	return Float64(x * Float64(1.0 + Float64(x * Float64(x * Float64(0.3333333333333333 + Float64(x * Float64(x * 0.2)))))))
end
function tmp = code(x)
	tmp = x * (1.0 + (x * (x * (0.3333333333333333 + (x * (x * 0.2))))));
end
code[x_] := N[(x * N[(1.0 + N[(x * N[(x * N[(0.3333333333333333 + N[(x * N[(x * 0.2), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
x \cdot \left(1 + x \cdot \left(x \cdot \left(0.3333333333333333 + x \cdot \left(x \cdot 0.2\right)\right)\right)\right)
\end{array}
Derivation
  1. Initial program 7.8%

    \[\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\left(\frac{1}{2}\right), \color{blue}{\log \left(\frac{1 + x}{1 - x}\right)}\right) \]
    2. metadata-eval (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \log \color{blue}{\left(\frac{1 + x}{1 - x}\right)}\right) \]
    3. log-lowering-log.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\left(\frac{1 + x}{1 - x}\right)\right)\right) \]
    4. /-lowering-/.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\left(1 + x\right), \left(1 - x\right)\right)\right)\right) \]
    5. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \left(1 - x\right)\right)\right)\right) \]
    6. --lowering--.f64 (7.8%)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \mathsf{-.f64}\left(1, x\right)\right)\right)\right) \]
  3. Simplified (7.8%)

    \[\leadsto \color{blue}{0.5 \cdot \log \left(\frac{1 + x}{1 - x}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x \cdot \left(1 + {x}^{2} \cdot \left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)\right)} \]
  6. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \color{blue}{\left(1 + {x}^{2} \cdot \left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)\right)}\right) \]
    2. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \color{blue}{\left({x}^{2} \cdot \left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)\right)}\right)\right) \]
    3. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(\left(x \cdot x\right) \cdot \left(\color{blue}{\frac{1}{3}} + \frac{1}{5} \cdot {x}^{2}\right)\right)\right)\right) \]
    4. associate-*l* (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(x \cdot \color{blue}{\left(x \cdot \left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)\right)}\right)\right)\right) \]
    5. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(x \cdot \left(\left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right) \cdot \color{blue}{x}\right)\right)\right)\right) \]
    6. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \color{blue}{\left(\left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right) \cdot x\right)}\right)\right)\right) \]
    7. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \left(x \cdot \color{blue}{\left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)}\right)\right)\right)\right) \]
    8. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \color{blue}{\left(\frac{1}{3} + \frac{1}{5} \cdot {x}^{2}\right)}\right)\right)\right)\right) \]
    9. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \color{blue}{\left(\frac{1}{5} \cdot {x}^{2}\right)}\right)\right)\right)\right)\right) \]
    10. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \left({x}^{2} \cdot \color{blue}{\frac{1}{5}}\right)\right)\right)\right)\right)\right) \]
    11. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \left(\left(x \cdot x\right) \cdot \frac{1}{5}\right)\right)\right)\right)\right)\right) \]
    12. associate-*l* (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \left(x \cdot \color{blue}{\left(x \cdot \frac{1}{5}\right)}\right)\right)\right)\right)\right)\right) \]
    13. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(x, \color{blue}{\left(x \cdot \frac{1}{5}\right)}\right)\right)\right)\right)\right)\right) \]
    14. *-lowering-*.f64 (99.8%)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(\frac{1}{3}, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \color{blue}{\frac{1}{5}}\right)\right)\right)\right)\right)\right)\right) \]
  7. Simplified (99.8%)

    \[\leadsto \color{blue}{x \cdot \left(1 + x \cdot \left(x \cdot \left(0.3333333333333333 + x \cdot \left(x \cdot 0.2\right)\right)\right)\right)} \]
  8. Add Preprocessing
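Alternative 2 drops the x^7 term, trading a little accuracy (truncation error now roughly x^6/7 in relative terms) for a shallower chain of multiplications. A minimal Python check against math.atanh (the test point is illustrative):

```python
import math

def alt2(x):
    # Alternative 2: artanh(x) ~ x + x**3/3 + x**5/5
    return x * (1.0 + x * (x * (0.3333333333333333 + x * (x * 0.2))))

x = 0.1
rel_err = abs(alt2(x) - math.atanh(x)) / math.atanh(x)
print(f"x = {x}: relative error = {rel_err:.2e}")
```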

Alternative 3: 99.5% accurate, 12.3× speedup

\[\begin{array}{l} \\ x \cdot \left(1 + x \cdot \left(x \cdot 0.3333333333333333\right)\right) \end{array} \]
(FPCore (x) :precision binary64 (* x (+ 1.0 (* x (* x 0.3333333333333333)))))
double code(double x) {
	return x * (1.0 + (x * (x * 0.3333333333333333)));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x * (1.0d0 + (x * (x * 0.3333333333333333d0)))
end function
public static double code(double x) {
	return x * (1.0 + (x * (x * 0.3333333333333333)));
}
def code(x):
	return x * (1.0 + (x * (x * 0.3333333333333333)))
function code(x)
	return Float64(x * Float64(1.0 + Float64(x * Float64(x * 0.3333333333333333))))
end
function tmp = code(x)
	tmp = x * (1.0 + (x * (x * 0.3333333333333333)));
end
code[x_] := N[(x * N[(1.0 + N[(x * N[(x * 0.3333333333333333), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
x \cdot \left(1 + x \cdot \left(x \cdot 0.3333333333333333\right)\right)
\end{array}
Derivation
  1. Initial program 7.8%

    \[\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\left(\frac{1}{2}\right), \color{blue}{\log \left(\frac{1 + x}{1 - x}\right)}\right) \]
    2. metadata-eval (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \log \color{blue}{\left(\frac{1 + x}{1 - x}\right)}\right) \]
    3. log-lowering-log.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\left(\frac{1 + x}{1 - x}\right)\right)\right) \]
    4. /-lowering-/.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\left(1 + x\right), \left(1 - x\right)\right)\right)\right) \]
    5. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \left(1 - x\right)\right)\right)\right) \]
    6. --lowering--.f64 (7.8%)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \mathsf{-.f64}\left(1, x\right)\right)\right)\right) \]
  3. Simplified (7.8%)

    \[\leadsto \color{blue}{0.5 \cdot \log \left(\frac{1 + x}{1 - x}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x \cdot \left(1 + \frac{1}{3} \cdot {x}^{2}\right)} \]
  6. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \color{blue}{\left(1 + \frac{1}{3} \cdot {x}^{2}\right)}\right) \]
    2. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \color{blue}{\left(\frac{1}{3} \cdot {x}^{2}\right)}\right)\right) \]
    3. unpow2 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(\frac{1}{3} \cdot \left(x \cdot \color{blue}{x}\right)\right)\right)\right) \]
    4. associate-*r* (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(\left(\frac{1}{3} \cdot x\right) \cdot \color{blue}{x}\right)\right)\right) \]
    5. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \left(x \cdot \color{blue}{\left(\frac{1}{3} \cdot x\right)}\right)\right)\right) \]
    6. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \color{blue}{\left(\frac{1}{3} \cdot x\right)}\right)\right)\right) \]
    7. *-commutative (N/A)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \left(x \cdot \color{blue}{\frac{1}{3}}\right)\right)\right)\right) \]
    8. *-lowering-*.f64 (99.7%)

      \[\leadsto \mathsf{*.f64}\left(x, \mathsf{+.f64}\left(1, \mathsf{*.f64}\left(x, \mathsf{*.f64}\left(x, \color{blue}{\frac{1}{3}}\right)\right)\right)\right) \]
  7. Simplified (99.7%)

    \[\leadsto \color{blue}{x \cdot \left(1 + x \cdot \left(x \cdot 0.3333333333333333\right)\right)} \]
  8. Add Preprocessing
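Alternative 3 keeps only the cubic correction, so its relative truncation error grows like x^4/5. The Python sketch below (math.atanh as the reference; the test points are illustrative) shows it is excellent for small |x| but visibly wrong by x = 0.5:

```python
import math

def alt3(x):
    # Alternative 3: artanh(x) ~ x + x**3/3
    return x * (1.0 + x * (x * 0.3333333333333333))

for x in (0.05, 0.5):
    rel_err = abs(alt3(x) - math.atanh(x)) / abs(math.atanh(x))
    print(f"x = {x}: relative error = {rel_err:.2e}")
```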

Alternative 4: 99.0% accurate, 111.0× speedup

\[\begin{array}{l} \\ x \end{array} \]
(FPCore (x) :precision binary64 x)
double code(double x) {
	return x;
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = x
end function
public static double code(double x) {
	return x;
}
def code(x):
	return x
function code(x)
	return x
end
function tmp = code(x)
	tmp = x;
end
code[x_] := x
\begin{array}{l}

\\
x
\end{array}
Derivation
  1. Initial program 7.8%

    \[\frac{1}{2} \cdot \log \left(\frac{1 + x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\left(\frac{1}{2}\right), \color{blue}{\log \left(\frac{1 + x}{1 - x}\right)}\right) \]
    2. metadata-eval (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \log \color{blue}{\left(\frac{1 + x}{1 - x}\right)}\right) \]
    3. log-lowering-log.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\left(\frac{1 + x}{1 - x}\right)\right)\right) \]
    4. /-lowering-/.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\left(1 + x\right), \left(1 - x\right)\right)\right)\right) \]
    5. +-lowering-+.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \left(1 - x\right)\right)\right)\right) \]
    6. --lowering--.f64 (7.8%)

      \[\leadsto \mathsf{*.f64}\left(\frac{1}{2}, \mathsf{log.f64}\left(\mathsf{/.f64}\left(\mathsf{+.f64}\left(1, x\right), \mathsf{-.f64}\left(1, x\right)\right)\right)\right) \]
  3. Simplified (7.8%)

    \[\leadsto \color{blue}{0.5 \cdot \log \left(\frac{1 + x}{1 - x}\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in x around 0

    \[\leadsto \color{blue}{x} \]
  6. Step-by-step derivation
    1. Simplified (99.4%)

      \[\leadsto \color{blue}{x} \]
    2. Add Preprocessing
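Alternative 4 reduces to the identity because artanh(x) = x + x^3/3 + ..., and once x^2 drops below about (3/2)·2^-52 (roughly |x| < 1.8e-8) the x^3/3 correction falls under half an ulp of x, so x itself is already the correctly rounded double-precision result. A small Python check against math.atanh (the sample points are illustrative):

```python
import math

# Alternative 4: artanh(x) ~ x. For |x| small enough that x**3/3 is
# below half an ulp of x, returning x loses essentially nothing.
for x in (1e-9, 1e-12):
    print(f"x = {x}: atanh(x) - x = {math.atanh(x) - x:.3e}")
```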

    Reproduce

    herbie shell --seed 2024139 
    (FPCore (x)
      :name "Hyperbolic arc-(co)tangent"
      :precision binary64
      (* (/ 1.0 2.0) (log (/ (+ 1.0 x) (- 1.0 x)))))