Rust f32::atanh

Percentage Accurate: 99.8% → 99.8%
Time: 5.3s
Alternatives: 5
Speedup: 1.0×

Specification

\[\tanh^{-1} x \]
(FPCore (x) :precision binary32 (atanh x))
C:
float code(float x) {
	return atanhf(x);
}

Julia:
function code(x)
	return atanh(x)
end

MATLAB:
function tmp = code(x)
	tmp = atanh(x);
end
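The specification tabs cover C, Julia, and MATLAB; since the report targets Rust's f32::atanh, here is a minimal Rust sketch of the same specification (std's f32::atanh; not part of the original report):

Rust:
// Specification: the exact inverse hyperbolic tangent,
// via the standard-library method f32::atanh.
fn code(x: f32) -> f32 {
    x.atanh()
}

fn main() {
    println!("{}", code(0.5_f32)); // prints ≈ 0.5493061
}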

Sampling outcomes in binary32 precision.

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 5 alternatives:

Alternative | Accuracy | Speedup

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.8% accurate, 1.0× speedup

\[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
C:
float code(float x) {
	return 0.5f * log1pf(((2.0f * x) / (1.0f - x)));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(Float32(Float32(2.0) * x) / Float32(Float32(1.0) - x))))
end
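For reference, the initial program in Rust, a sketch assuming std's f32::ln_1p corresponds to C's log1pf:

Rust:
// Initial program: 0.5 * log1p(2x / (1 - x)).
// ln_1p computes ln(1 + y) accurately when y is near zero.
fn code(x: f32) -> f32 {
    0.5_f32 * (2.0_f32 * x / (1.0_f32 - x)).ln_1p()
}

fn main() {
    println!("{}", code(0.5_f32)); // ≈ atanh(0.5) ≈ 0.5493061
}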

Alternative 1: 99.8% accurate, 1.0× speedup

\[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
C:
float code(float x) {
	return 0.5f * log1pf(((2.0f * x) / (1.0f - x)));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(Float32(Float32(2.0) * x) / Float32(Float32(1.0) - x))))
end
Derivation
  1. Initial program 99.7%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Final simplification 99.7%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]

Alternative 2: 96.4% accurate, 1.0× speedup

\[0.5 \cdot \mathsf{log1p}\left(2 \cdot \left(x + x \cdot x\right)\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (* 2.0 (+ x (* x x))))))
C:
float code(float x) {
	return 0.5f * log1pf((2.0f * (x + (x * x))));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(Float32(2.0) * Float32(x + Float32(x * x)))))
end
Derivation
  1. Initial program 99.7%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-/l* 98.9%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{\frac{1 - x}{x}}}\right) \]
  3. Simplified 98.9%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1 - x}{x}}\right)} \]
  4. Taylor expanded in x around 0 94.5%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{2 \cdot {x}^{2} + 2 \cdot x}\right) \]
  5. Step-by-step derivation
    1. distribute-lft-out 94.5%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{2 \cdot \left({x}^{2} + x\right)}\right) \]
    2. unpow2 94.5%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(2 \cdot \left(\color{blue}{x \cdot x} + x\right)\right) \]
  6. Simplified 94.5%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{2 \cdot \left(x \cdot x + x\right)}\right) \]
  7. Final simplification 94.5%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(2 \cdot \left(x + x \cdot x\right)\right) \]
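The Taylor step follows from the geometric series: for small x,

\[\frac{2 \cdot x}{1 - x} = 2 \cdot x \cdot \left(1 + x + x^{2} + \cdots\right) \approx 2 \cdot x + 2 \cdot x^{2}, \]

which is why this variant degrades away from zero. A minimal Rust sketch of Alternative 2, assuming std's f32::ln_1p matches the report's log1p:

Rust:
// Alternative 2: 0.5 * log1p(2 * (x + x*x)).
// Second-order Taylor approximation of 2x/(1-x) around x = 0.
fn code(x: f32) -> f32 {
    0.5_f32 * (2.0_f32 * (x + x * x)).ln_1p()
}

fn main() {
    println!("{}", code(0.01_f32)); // close to atanh(0.01) for small x
}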

Alternative 3: 99.2% accurate, 1.0× speedup

\[0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1}{x} + -1}\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (/ 2.0 (+ (/ 1.0 x) -1.0)))))
C:
float code(float x) {
	return 0.5f * log1pf((2.0f / ((1.0f / x) + -1.0f)));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(Float32(2.0) / Float32(Float32(Float32(1.0) / x) + Float32(-1.0)))))
end
Derivation
  1. Initial program 99.7%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-/l* 98.9%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{\frac{1 - x}{x}}}\right) \]
  3. Simplified 98.9%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1 - x}{x}}\right)} \]
  4. Taylor expanded in x around 0 99.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{2}{\color{blue}{\frac{1}{x} - 1}}\right) \]
  5. Final simplification 99.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1}{x} + -1}\right) \]
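Alternative 3 reassociates the division so the argument is expressed through 1/x, which is algebraically equal to 2x/(1-x) where both are defined. A minimal Rust sketch under the same ln_1p assumption:

Rust:
// Alternative 3: 0.5 * log1p(2 / (1/x + -1)).
fn code(x: f32) -> f32 {
    0.5_f32 * (2.0_f32 / (1.0_f32 / x + -1.0_f32)).ln_1p()
}

fn main() {
    println!("{}", code(0.5_f32)); // ≈ 0.5493061
}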

Alternative 4: 92.6% accurate, 1.0× speedup

\[0.5 \cdot \mathsf{log1p}\left(x + x\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (+ x x))))
C:
float code(float x) {
	return 0.5f * log1pf((x + x));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(x + x)))
end
Derivation
  1. Initial program 99.7%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-/l* 98.9%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{\frac{1 - x}{x}}}\right) \]
  3. Simplified 98.9%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1 - x}{x}}\right)} \]
  4. Taylor expanded in x around 0 91.1%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{2 \cdot x}\right) \]
  5. Step-by-step derivation
    1. count-2 91.1%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x + x}\right) \]
  6. Simplified 91.1%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{x + x}\right) \]
  7. Final simplification 91.1%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(x + x\right) \]
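Alternative 4 keeps only the first-order term 2·x of the same expansion. A minimal Rust sketch, same ln_1p assumption:

Rust:
// Alternative 4: 0.5 * log1p(x + x).
// First-order approximation of 2x/(1-x); accurate only for small |x|.
fn code(x: f32) -> f32 {
    0.5_f32 * (x + x).ln_1p()
}

fn main() {
    println!("{}", code(0.01_f32)); // close to atanh(0.01) only for small |x|
}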

Alternative 5: -0.0% accurate, 1.1× speedup

\[0.5 \cdot \mathsf{log1p}\left(-2\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p -2.0)))
C:
float code(float x) {
	return 0.5f * log1pf(-2.0f);
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(-2.0)))
end
Derivation
  1. Initial program 99.7%

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Step-by-step derivation
    1. associate-/l* 98.9%

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\frac{2}{\frac{1 - x}{x}}}\right) \]
  3. Simplified 98.9%

    \[\leadsto \color{blue}{0.5 \cdot \mathsf{log1p}\left(\frac{2}{\frac{1 - x}{x}}\right)} \]
  4. Taylor expanded in x around inf -0.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{-2}\right) \]
  5. Final simplification -0.0%

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(-2\right) \]
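This alternative is degenerate: expanding around infinity collapses the log1p argument to the constant -2, and log1p(-2) = ln(-1) is NaN, so the program returns NaN on every input, which the -0.0% accuracy reflects. A quick Rust check, same ln_1p assumption:

Rust:
// Alternative 5 is a constant: log1p(-2) = ln(-1) = NaN.
fn code(_x: f32) -> f32 {
    0.5_f32 * (-2.0_f32).ln_1p()
}

fn main() {
    assert!(code(0.25).is_nan()); // NaN regardless of the input
}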

Reproduce

herbie shell --seed 2023182 
(FPCore (x)
  :name "Rust f32::atanh"
  :precision binary32
  (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
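Pasting the FPCore expression above into the shell started by that command should regenerate this result; note that seeded runs are generally only reproducible on the same Herbie version.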