Rust f32::atanh

Percentage Accurate: 92.5% → 98.4%
Time: 6.1s
Alternatives: 4
Speedup: 11.4×

Specification

\[ \tanh^{-1} x \]
(FPCore (x) :precision binary32 (atanh x))
C:
float code(float x) {
	return atanhf(x);
}

Julia:
function code(x)
	return atanh(x)
end

MATLAB:
function tmp = code(x)
	tmp = atanh(x);
end
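
For reference, the inverse hyperbolic tangent has the closed form and Taylor series

\[ \tanh^{-1} x = \frac{1}{2}\ln\frac{1+x}{1-x} = x + \frac{x^{3}}{3} + \frac{x^{5}}{5} + \cdots, \qquad |x| < 1, \]

which is where both the initial log1p program and the polynomial alternatives below come from.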

Sampling outcomes in binary32 precision.

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (named in the plot title); the vertical axis is accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The line is an average, while the dots are individual samples.

Accuracy vs Speed

Herbie found 4 alternatives:

Alternative   Accuracy   Speedup
1             98.4%      4.3×
2             98.3%      5.2×
3             96.7%      11.4×
4             7.7%       41.7×

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 92.5% accurate, 1.0× speedup

\[ 0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
(FPCore (x) :precision binary32 (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
C:
float code(float x) {
	return 0.5f * log1pf(((2.0f * x) / (1.0f - x)));
}

Julia:
function code(x)
	return Float32(Float32(0.5) * log1p(Float32(Float32(Float32(2.0) * x) / Float32(Float32(1.0) - x))))
end
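
The initial program uses the identity atanh(x) = 0.5 · log1p(2x/(1 − x)), which avoids forming 1 + x near zero and so preserves the low-order bits of x. A minimal Rust sketch of the same rewrite, checked against the standard library (the function name atanh_log1p is illustrative, not from the report):

// Numerically stable atanh via log1p, mirroring the initial program above:
// atanh(x) = 0.5 * ln((1 + x) / (1 - x)) = 0.5 * ln(1 + 2x / (1 - x))
fn atanh_log1p(x: f32) -> f32 {
    0.5 * (2.0 * x / (1.0 - x)).ln_1p()
}

fn main() {
    for &x in &[1e-8f32, 0.1, 0.5, 0.9] {
        // Compare against the standard library's f32::atanh.
        println!("x = {x}: log1p form = {}, std = {}", atanh_log1p(x), x.atanh());
    }
}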

Alternative 1: 98.4% accurate, 4.3× speedup

\[ \left(x \cdot 2 + \left(0.6666666666666666 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot 0.5 \]
(FPCore (x)
 :precision binary32
 (* (+ (* x 2.0) (* (* 0.6666666666666666 (* x x)) x)) 0.5))
C:
float code(float x) {
	return ((x * 2.0f) + ((0.6666666666666666f * (x * x)) * x)) * 0.5f;
}

Fortran:
real(4) function code(x)
    real(4), intent (in) :: x
    code = ((x * 2.0e0) + ((0.6666666666666666e0 * (x * x)) * x)) * 0.5e0
end function

Julia:
function code(x)
	return Float32(Float32(Float32(x * Float32(2.0)) + Float32(Float32(Float32(0.6666666666666666) * Float32(x * x)) * x)) * Float32(0.5))
end

MATLAB:
function tmp = code(x)
	tmp = ((x * single(2.0)) + ((single(0.6666666666666666) * (x * x)) * x)) * single(0.5);
end
Derivation
  1. Initial program (93.4%)

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \left(2 + x \cdot \left(2 + 2 \cdot x\right)\right)}\right) \]
  4. Step-by-step derivation
    1. *-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(2 + x \cdot \left(2 + 2 \cdot x\right)\right) \cdot x}\right) \]
    2. lower-*.f32 (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(2 + x \cdot \left(2 + 2 \cdot x\right)\right) \cdot x}\right) \]
    3. +-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(x \cdot \left(2 + 2 \cdot x\right) + 2\right)} \cdot x\right) \]
    4. *-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\left(\color{blue}{\left(2 + 2 \cdot x\right) \cdot x} + 2\right) \cdot x\right) \]
    5. lower-fma.f32 (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\mathsf{fma}\left(2 + 2 \cdot x, x, 2\right)} \cdot x\right) \]
    6. +-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\mathsf{fma}\left(\color{blue}{2 \cdot x + 2}, x, 2\right) \cdot x\right) \]
    7. lower-fma.f32 (96.7%)

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(2, x, 2\right)}, x, 2\right) \cdot x\right) \]
  5. Applied rewrites (96.7%)

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(2, x, 2\right), x, 2\right) \cdot x}\right) \]
  6. Step-by-step derivation
    1. Applied rewrites (96.4%)

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\mathsf{fma}\left(x \cdot 2 + 2, x, 2\right) \cdot x\right) \]
    2. Taylor expanded in x around 0

      \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(x \cdot \left(2 + \frac{2}{3} \cdot {x}^{2}\right)\right)} \]
    3. Step-by-step derivation
      1. *-commutative (N/A)

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(\left(2 + \frac{2}{3} \cdot {x}^{2}\right) \cdot x\right)} \]
      2. lower-*.f32 (N/A)

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(\left(2 + \frac{2}{3} \cdot {x}^{2}\right) \cdot x\right)} \]
      3. +-commutative (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\color{blue}{\left(\frac{2}{3} \cdot {x}^{2} + 2\right)} \cdot x\right) \]
      4. lower-fma.f32 (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\color{blue}{\mathsf{fma}\left(\frac{2}{3}, {x}^{2}, 2\right)} \cdot x\right) \]
      5. unpow2 (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\mathsf{fma}\left(\frac{2}{3}, \color{blue}{x \cdot x}, 2\right) \cdot x\right) \]
      6. lower-*.f32 (96.3%)

        \[\leadsto 0.5 \cdot \left(\mathsf{fma}\left(0.6666666666666666, \color{blue}{x \cdot x}, 2\right) \cdot x\right) \]
    4. Applied rewrites (95.6%)

      \[\leadsto 0.5 \cdot \color{blue}{\left(\mathsf{fma}\left(0.6666666666666666, x \cdot x, 2\right) \cdot x\right)} \]
    5. Step-by-step derivation
      1. Applied rewrites (98.2%)

        \[\leadsto 0.5 \cdot \left(\left(\left(x \cdot x\right) \cdot 0.6666666666666666\right) \cdot x + \color{blue}{x \cdot 2}\right) \]
      2. Final simplification (98.2%)

        \[\leadsto \left(x \cdot 2 + \left(0.6666666666666666 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot 0.5 \]
      3. Add Preprocessing
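
Simplifying Alternative 1's expression shows that it is exactly the degree-3 Taylor polynomial of atanh:

\[ \left(2x + \tfrac{2}{3}x^{3}\right) \cdot 0.5 = x + \frac{x^{3}}{3}, \qquad \tanh^{-1} x = x + \frac{x^{3}}{3} + \frac{x^{5}}{5} + \cdots \]

The truncation error is therefore about x^5/5, which is why this alternative is accurate near zero and degrades as |x| approaches 1.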

Alternative 2: 98.3% accurate, 5.2× speedup

\[ \left(\left(0.6666666666666666 \cdot \left(x \cdot x\right) + 2\right) \cdot x\right) \cdot 0.5 \]
(FPCore (x)
 :precision binary32
 (* (* (+ (* 0.6666666666666666 (* x x)) 2.0) x) 0.5))
C:
float code(float x) {
	return (((0.6666666666666666f * (x * x)) + 2.0f) * x) * 0.5f;
}

Fortran:
real(4) function code(x)
    real(4), intent (in) :: x
    code = (((0.6666666666666666e0 * (x * x)) + 2.0e0) * x) * 0.5e0
end function

Julia:
function code(x)
	return Float32(Float32(Float32(Float32(Float32(0.6666666666666666) * Float32(x * x)) + Float32(2.0)) * x) * Float32(0.5))
end

MATLAB:
function tmp = code(x)
	tmp = (((single(0.6666666666666666) * (x * x)) + single(2.0)) * x) * single(0.5);
end

Derivation
  1. Initial program (88.2%)

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{x \cdot \left(2 + x \cdot \left(2 + 2 \cdot x\right)\right)}\right) \]
  4. Step-by-step derivation
    1. *-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(2 + x \cdot \left(2 + 2 \cdot x\right)\right) \cdot x}\right) \]
    2. lower-*.f32 (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(2 + x \cdot \left(2 + 2 \cdot x\right)\right) \cdot x}\right) \]
    3. +-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\left(x \cdot \left(2 + 2 \cdot x\right) + 2\right)} \cdot x\right) \]
    4. *-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\left(\color{blue}{\left(2 + 2 \cdot x\right) \cdot x} + 2\right) \cdot x\right) \]
    5. lower-fma.f32 (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\color{blue}{\mathsf{fma}\left(2 + 2 \cdot x, x, 2\right)} \cdot x\right) \]
    6. +-commutative (N/A)

      \[\leadsto \frac{1}{2} \cdot \mathsf{log1p}\left(\mathsf{fma}\left(\color{blue}{2 \cdot x + 2}, x, 2\right) \cdot x\right) \]
    7. lower-fma.f32 (96.7%)

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(2, x, 2\right)}, x, 2\right) \cdot x\right) \]
  5. Applied rewrites (96.7%)

    \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(2, x, 2\right), x, 2\right) \cdot x}\right) \]
  6. Step-by-step derivation
    1. Applied rewrites (96.4%)

      \[\leadsto 0.5 \cdot \mathsf{log1p}\left(\mathsf{fma}\left(x \cdot 2 + 2, x, 2\right) \cdot x\right) \]
    2. Taylor expanded in x around 0

      \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(x \cdot \left(2 + \frac{2}{3} \cdot {x}^{2}\right)\right)} \]
    3. Step-by-step derivation
      1. *-commutative (N/A)

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(\left(2 + \frac{2}{3} \cdot {x}^{2}\right) \cdot x\right)} \]
      2. lower-*.f32 (N/A)

        \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(\left(2 + \frac{2}{3} \cdot {x}^{2}\right) \cdot x\right)} \]
      3. +-commutative (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\color{blue}{\left(\frac{2}{3} \cdot {x}^{2} + 2\right)} \cdot x\right) \]
      4. lower-fma.f32 (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\color{blue}{\mathsf{fma}\left(\frac{2}{3}, {x}^{2}, 2\right)} \cdot x\right) \]
      5. unpow2 (N/A)

        \[\leadsto \frac{1}{2} \cdot \left(\mathsf{fma}\left(\frac{2}{3}, \color{blue}{x \cdot x}, 2\right) \cdot x\right) \]
      6. lower-*.f32 (96.3%)

        \[\leadsto 0.5 \cdot \left(\mathsf{fma}\left(0.6666666666666666, \color{blue}{x \cdot x}, 2\right) \cdot x\right) \]
    4. Applied rewrites (95.6%)

      \[\leadsto 0.5 \cdot \color{blue}{\left(\mathsf{fma}\left(0.6666666666666666, x \cdot x, 2\right) \cdot x\right)} \]
    5. Step-by-step derivation
      1. Applied rewrites (98.1%)

        \[\leadsto 0.5 \cdot \left(\left(\left(x \cdot x\right) \cdot 0.6666666666666666 + 2\right) \cdot x\right) \]
      2. Final simplification (98.1%)

        \[\leadsto \left(\left(0.6666666666666666 \cdot \left(x \cdot x\right) + 2\right) \cdot x\right) \cdot 0.5 \]
      3. Add Preprocessing
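
Alternatives 1 and 2 evaluate the same cubic; Alternative 2 just uses a Horner-style grouping with one fewer multiplication, which accounts for its higher speedup. A minimal Rust sketch of the two groupings (the function names alt1 and alt2 are illustrative, not from the report):

// Alternative 1's grouping: (x*2 + (2/3 * (x*x)) * x) * 0.5 -- five multiplications
fn alt1(x: f32) -> f32 {
    (x * 2.0 + (0.6666666666666666 * (x * x)) * x) * 0.5
}

// Alternative 2's grouping: ((2/3 * (x*x) + 2) * x) * 0.5 -- four multiplications
fn alt2(x: f32) -> f32 {
    ((0.6666666666666666 * (x * x) + 2.0) * x) * 0.5
}

fn main() {
    for &x in &[0.01f32, 0.1, 0.3] {
        // Both groupings agree to within rounding and track atanh near zero.
        println!("x = {x}: alt1 = {}, alt2 = {}, atanh = {}", alt1(x), alt2(x), x.atanh());
    }
}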

Alternative 3: 96.7% accurate, 11.4× speedup

\[ \left(x \cdot 2\right) \cdot 0.5 \]
(FPCore (x) :precision binary32 (* (* x 2.0) 0.5))
C:
float code(float x) {
	return (x * 2.0f) * 0.5f;
}

Fortran:
real(4) function code(x)
    real(4), intent (in) :: x
    code = (x * 2.0e0) * 0.5e0
end function

Julia:
function code(x)
	return Float32(Float32(x * Float32(2.0)) * Float32(0.5))
end

MATLAB:
function tmp = code(x)
	tmp = (x * single(2.0)) * single(0.5);
end

Derivation
  1. Initial program (93.6%)

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \frac{1}{2} \cdot \color{blue}{\left(2 \cdot x\right)} \]
  4. Step-by-step derivation
    1. lower-*.f32 (96.7%)

      \[\leadsto 0.5 \cdot \color{blue}{\left(2 \cdot x\right)} \]
  5. Applied rewrites (96.7%)

    \[\leadsto 0.5 \cdot \color{blue}{\left(2 \cdot x\right)} \]
  6. Final simplification (96.7%)

    \[\leadsto \left(x \cdot 2\right) \cdot 0.5 \]
  7. Add Preprocessing
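
Alternative 3 is the one-term Taylor approximation in disguise: doubling and halving only adjust the exponent, so for any x in atanh's domain the expression computes x itself, the leading term of the series.

\[ \left(x \cdot 2\right) \cdot 0.5 = x \quad \text{(exact in binary32 for } \left|x\right| < 1\text{)}, \qquad \tanh^{-1} x = x + O\!\left(x^{3}\right) \]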

Alternative 4: 7.7% accurate, 41.7× speedup

\[ -x \]
(FPCore (x) :precision binary32 (- x))
C:
float code(float x) {
	return -x;
}

Fortran:
real(4) function code(x)
    real(4), intent (in) :: x
    code = -x
end function

Julia:
function code(x)
	return Float32(-x)
end

MATLAB:
function tmp = code(x)
	tmp = -x;
end

Derivation
  1. Initial program (89.3%)

    \[0.5 \cdot \mathsf{log1p}\left(\frac{2 \cdot x}{1 - x}\right) \]
  2. Add Preprocessing
  3. Applied rewrites (8.9%)

    \[\leadsto 0.5 \cdot \color{blue}{\left(\mathsf{log1p}\left(-8 \cdot {\left(\frac{x}{1 - x}\right)}^{3}\right) - \mathsf{log1p}\left(\frac{x}{1 - x} \cdot \mathsf{fma}\left(\frac{x}{1 - x}, 4, 2\right)\right)\right)} \]
  4. Taylor expanded in x around 0

    \[\leadsto \color{blue}{-1 \cdot x} \]
  5. Step-by-step derivation
    1. mul-1-neg (N/A)

      \[\leadsto \color{blue}{\mathsf{neg}\left(x\right)} \]
    2. lower-neg.f32 (8.0%)

      \[\leadsto \color{blue}{-x} \]
  6. Applied rewrites (8.0%)

    \[\leadsto \color{blue}{-x} \]
  7. Add Preprocessing

Reproduce

herbie shell --seed 2024295
(FPCore (x)
  :name "Rust f32::atanh"
  :precision binary32
  (* 0.5 (log1p (/ (* 2.0 x) (- 1.0 x)))))
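
Note that herbie shell reads FPCore expressions from standard input, so pasting the FPCore above at the prompt, with the seed fixed as shown, should reproduce the alternatives in this report.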