bug500, discussion (missed optimization)

Percentage Accurate: 52.2% → 96.8%
Time: 11.3s
Alternatives: 8
Speedup: 19.3×

Specification

\[\begin{array}{l} \\ \log \left(\frac{\sinh x}{x}\right) \end{array} \]
(FPCore (x) :precision binary64 (log (/ (sinh x) x)))
double code(double x) {
	return log((sinh(x) / x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = log((sinh(x) / x))
end function
public static double code(double x) {
	return Math.log((Math.sinh(x) / x));
}
def code(x):
	return math.log((math.sinh(x) / x))
function code(x)
	return log(Float64(sinh(x) / x))
end
function tmp = code(x)
	tmp = log((sinh(x) / x));
end
code[x_] := N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{\sinh x}{x}\right)
\end{array}
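
The original formulation loses accuracy for small |x|: sinh(x)/x rounds to exactly 1, so the outer log returns 0 even though the true value is roughly x*x/6. Below is a minimal C sketch of the effect; the test point 1e-8 and the one-term series are illustrative choices, not taken from this report.

#include <math.h>
#include <stdio.h>

/* Original formulation from the specification above. */
static double naive(double x) {
	return log(sinh(x) / x);
}

/* Leading term of the series: log(sinh(x)/x) ~ x*x/6 for small x. */
static double leading_term(double x) {
	return (x * x) / 6.0;
}

int main(void) {
	double x = 1e-8;
	printf("naive        = %.17g\n", naive(x));        /* 0 with a typical libm: sinh(x)/x rounds to 1 */
	printf("leading term = %.17g\n", leading_term(x)); /* about 1.67e-17, close to the true value */
	return 0;
}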

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 8 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 52.2% accurate, 1.0× speedup

\[\begin{array}{l} \\ \log \left(\frac{\sinh x}{x}\right) \end{array} \]
(FPCore (x) :precision binary64 (log (/ (sinh x) x)))
double code(double x) {
	return log((sinh(x) / x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = log((sinh(x) / x))
end function
public static double code(double x) {
	return Math.log((Math.sinh(x) / x));
}
def code(x):
	return math.log((math.sinh(x) / x))
function code(x)
	return log(Float64(sinh(x) / x))
end
function tmp = code(x)
	tmp = log((sinh(x) / x));
end
code[x_] := N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{\sinh x}{x}\right)
\end{array}

Alternative 1: 96.8% accurate, 4.2× speedup

\[\begin{array}{l} \\ \frac{x \cdot x}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}} \end{array} \]
(FPCore (x)
 :precision binary64
 (/
  (* x x)
  (/
   1.0
   (fma
    (fma (* x x) 0.0003527336860670194 -0.005555555555555556)
    (* x x)
    0.16666666666666666))))
double code(double x) {
	return (x * x) / (1.0 / fma(fma((x * x), 0.0003527336860670194, -0.005555555555555556), (x * x), 0.16666666666666666));
}
function code(x)
	return Float64(Float64(x * x) / Float64(1.0 / fma(fma(Float64(x * x), 0.0003527336860670194, -0.005555555555555556), Float64(x * x), 0.16666666666666666)))
end
code[x_] := N[(N[(x * x), $MachinePrecision] / N[(1.0 / N[(N[(N[(x * x), $MachinePrecision] * 0.0003527336860670194 + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{x \cdot x}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}}
\end{array}
Derivation
  1. Initial program 52.1%

    \[\log \left(\frac{\sinh x}{x}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
  4. Step-by-step derivation
    1. unpow2 N/A

      \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
    2. associate-*l* N/A

      \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
    3. *-commutative N/A

      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
    4. lower-*.f64 N/A

      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
    5. *-commutative N/A

      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
    6. lower-*.f64 N/A

      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
    7. +-commutative N/A

      \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
    8. *-commutative N/A

      \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
    9. lower-fma.f64 N/A

      \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
    10. sub-neg N/A

      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    11. metadata-eval N/A

      \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    12. lower-fma.f64 N/A

      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    13. unpow2 N/A

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    14. lower-*.f64 N/A

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    15. unpow2 N/A

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    16. lower-*.f64 98.7%

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
  5. Applied rewrites 98.7%

    \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
  6. Step-by-step derivation
    1. Applied rewrites 98.8%

      \[\leadsto \frac{x \cdot x}{\color{blue}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}}} \]
    2. Add Preprocessing
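
For reference, the constants in this derivation are truncations of the Maclaurin series produced by the Taylor-expansion step above; the x^8 coefficient, -1/37800, also appears in the developer target near the end of this report. A sketch of the expansion:

\[\log \left(\frac{\sinh x}{x}\right) = \frac{{x}^{2}}{6} - \frac{{x}^{4}}{180} + \frac{{x}^{6}}{2835} - \frac{{x}^{8}}{37800} + O\left({x}^{10}\right), \quad \frac{1}{6} \approx 0.16666666666666666, \; \frac{1}{180} \approx 0.005555555555555556, \; \frac{1}{2835} \approx 0.0003527336860670194 \]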

    Alternative 2: 96.8% accurate, 6.4× speedup

    \[\begin{array}{l} \\ \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x \end{array} \]
    (FPCore (x)
     :precision binary64
     (*
      (*
       (fma
        (fma 0.0003527336860670194 (* x x) -0.005555555555555556)
        (* x x)
        0.16666666666666666)
       x)
      x))
    double code(double x) {
    	return (fma(fma(0.0003527336860670194, (x * x), -0.005555555555555556), (x * x), 0.16666666666666666) * x) * x;
    }
    
    function code(x)
    	return Float64(Float64(fma(fma(0.0003527336860670194, Float64(x * x), -0.005555555555555556), Float64(x * x), 0.16666666666666666) * x) * x)
    end
    
    code[x_] := N[(N[(N[(N[(0.0003527336860670194 * N[(x * x), $MachinePrecision] + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]
    
    \begin{array}{l}
    
    \\
    \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x
    \end{array}
    
    Derivation
    1. Initial program 52.1%

      \[\log \left(\frac{\sinh x}{x}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
    4. Step-by-step derivation
      1. unpow2N/A

        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
      2. associate-*l*N/A

        \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
      3. *-commutativeN/A

        \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
      4. lower-*.f64N/A

        \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
      5. *-commutativeN/A

        \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
      6. lower-*.f64N/A

        \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
      7. +-commutativeN/A

        \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
      8. *-commutativeN/A

        \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
      9. lower-fma.f64N/A

        \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
      10. sub-negN/A

        \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      11. metadata-evalN/A

        \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      12. lower-fma.f64N/A

        \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      13. unpow2N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      14. lower-*.f64N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      15. unpow2N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      16. lower-*.f64 98.7%

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
    5. Applied rewrites 98.7%

      \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
    6. Add Preprocessing
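
Alternatives 1 and 2 rely on fma, which evaluates a·b + c with a single rounding. Below is a small, self-contained C illustration of how that differs from a separate multiply and add; the constants are a standard demonstration and have nothing to do with this benchmark. Whether fma is fast depends on hardware FMA support, so speedups are platform dependent.

#include <math.h>
#include <stdio.h>

int main(void) {
	double a = 134217729.0;           /* 2^27 + 1 */
	double b = 134217727.0;           /* 2^27 - 1 */
	double c = -18014398509481984.0;  /* -(2^54); a*b is exactly 2^54 - 1 */
	printf("a * b + c    = %.17g\n", a * b + c);    /* 0: a*b rounds to 2^54 before the add */
	printf("fma(a, b, c) = %.17g\n", fma(a, b, c)); /* -1: product kept exactly, single rounding */
	return 0;
}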

    Alternative 3: 96.8% accurate, 7.6× speedup

    \[\begin{array}{l} \\ \frac{x \cdot x}{\mathsf{fma}\left(0.2, x \cdot x, 6\right)} \end{array} \]
    (FPCore (x) :precision binary64 (/ (* x x) (fma 0.2 (* x x) 6.0)))
    double code(double x) {
    	return (x * x) / fma(0.2, (x * x), 6.0);
    }
    
    function code(x)
    	return Float64(Float64(x * x) / fma(0.2, Float64(x * x), 6.0))
    end
    
    code[x_] := N[(N[(x * x), $MachinePrecision] / N[(0.2 * N[(x * x), $MachinePrecision] + 6.0), $MachinePrecision]), $MachinePrecision]
    
    \begin{array}{l}
    
    \\
    \frac{x \cdot x}{\mathsf{fma}\left(0.2, x \cdot x, 6\right)}
    \end{array}
    
    Derivation
    1. Initial program 52.1%

      \[\log \left(\frac{\sinh x}{x}\right) \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
    4. Step-by-step derivation
      1. unpow2N/A

        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
      2. associate-*l*N/A

        \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
      3. *-commutativeN/A

        \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
      4. lower-*.f64N/A

        \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
      5. *-commutativeN/A

        \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
      6. lower-*.f64N/A

        \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
      7. +-commutativeN/A

        \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
      8. *-commutativeN/A

        \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
      9. lower-fma.f64N/A

        \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
      10. sub-negN/A

        \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      11. metadata-evalN/A

        \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      12. lower-fma.f64N/A

        \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      13. unpow2N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      14. lower-*.f64N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      15. unpow2N/A

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
      16. lower-*.f64 98.7%

        \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
    5. Applied rewrites 98.7%

      \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
    6. Step-by-step derivation
      1. Applied rewrites 98.8%

        \[\leadsto \frac{x \cdot x}{\color{blue}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}}} \]
      2. Taylor expanded in x around 0

        \[\leadsto \frac{x \cdot x}{6 + \color{blue}{\frac{1}{5} \cdot {x}^{2}}} \]
      3. Step-by-step derivation
        1. Applied rewrites 98.7%

          \[\leadsto \frac{x \cdot x}{\mathsf{fma}\left(0.2, \color{blue}{x \cdot x}, 6\right)} \]
        2. Add Preprocessing
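
Alternative 3's constants can be recovered by re-expanding the reciprocal of the truncated series around 0, which replaces the divide-by-a-reciprocal form of Alternative 1 with a single division; a brief sketch:

\[\frac{1}{\frac{1}{6} - \frac{{x}^{2}}{180} + O\left({x}^{4}\right)} = 6 \left(1 + \frac{{x}^{2}}{30} + O\left({x}^{4}\right)\right) = 6 + \frac{{x}^{2}}{5} + O\left({x}^{4}\right), \quad \text{so} \quad \log \left(\frac{\sinh x}{x}\right) \approx \frac{x \cdot x}{\mathsf{fma}\left(0.2, x \cdot x, 6\right)} \]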

        Alternative 4: 96.3% accurate, 7.9× speedup

        \[\begin{array}{l} \\ \mathsf{fma}\left(x, 0.16666666666666666, \left(-0.005555555555555556 \cdot x\right) \cdot \left(x \cdot x\right)\right) \cdot x \end{array} \]
        (FPCore (x)
         :precision binary64
         (* (fma x 0.16666666666666666 (* (* -0.005555555555555556 x) (* x x))) x))
        double code(double x) {
        	return fma(x, 0.16666666666666666, ((-0.005555555555555556 * x) * (x * x))) * x;
        }
        
        function code(x)
        	return Float64(fma(x, 0.16666666666666666, Float64(Float64(-0.005555555555555556 * x) * Float64(x * x))) * x)
        end
        
        code[x_] := N[(N[(x * 0.16666666666666666 + N[(N[(-0.005555555555555556 * x), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        \mathsf{fma}\left(x, 0.16666666666666666, \left(-0.005555555555555556 \cdot x\right) \cdot \left(x \cdot x\right)\right) \cdot x
        \end{array}
        
        Derivation
        1. Initial program 52.1%

          \[\log \left(\frac{\sinh x}{x}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in x around 0

          \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)} \]
        4. Step-by-step derivation
          1. unpow2N/A

            \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \]
          2. associate-*l*N/A

            \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right)} \]
          3. *-commutativeN/A

            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
          4. lower-*.f64N/A

            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
          5. *-commutativeN/A

            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
          6. lower-*.f64N/A

            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
          7. +-commutativeN/A

            \[\leadsto \left(\color{blue}{\left(\frac{-1}{180} \cdot {x}^{2} + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
          8. lower-fma.f64N/A

            \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{-1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
          9. unpow2N/A

            \[\leadsto \left(\mathsf{fma}\left(\frac{-1}{180}, \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          10. lower-*.f64 98.4%

            \[\leadsto \left(\mathsf{fma}\left(-0.005555555555555556, \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
        5. Applied rewrites 98.4%

          \[\leadsto \color{blue}{\left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
        6. Step-by-step derivation
          1. Applied rewrites 98.4%

            \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, {x}^{3} \cdot -0.005555555555555556\right) \cdot x \]
          2. Step-by-step derivation
            1. Applied rewrites 98.4%

              \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, \left(x \cdot x\right) \cdot \left(-0.005555555555555556 \cdot x\right)\right) \cdot x \]
            2. Final simplification 98.4%

              \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, \left(-0.005555555555555556 \cdot x\right) \cdot \left(x \cdot x\right)\right) \cdot x \]
            3. Add Preprocessing

            Alternative 5: 96.3% accurate, 9.6× speedup

            \[\begin{array}{l} \\ \left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x \end{array} \]
            (FPCore (x)
             :precision binary64
             (* (* (fma -0.005555555555555556 (* x x) 0.16666666666666666) x) x))
            double code(double x) {
            	return (fma(-0.005555555555555556, (x * x), 0.16666666666666666) * x) * x;
            }
            
            function code(x)
            	return Float64(Float64(fma(-0.005555555555555556, Float64(x * x), 0.16666666666666666) * x) * x)
            end
            
            code[x_] := N[(N[(N[(-0.005555555555555556 * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]
            
            \begin{array}{l}
            
            \\
            \left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x
            \end{array}
            
            Derivation
            1. Initial program 52.1%

              \[\log \left(\frac{\sinh x}{x}\right) \]
            2. Add Preprocessing
            3. Taylor expanded in x around 0

              \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)} \]
            4. Step-by-step derivation
              1. unpow2N/A

                \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \]
              2. associate-*l*N/A

                \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right)} \]
              3. *-commutativeN/A

                \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
              4. lower-*.f64N/A

                \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
              5. *-commutativeN/A

                \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
              6. lower-*.f64N/A

                \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
              7. +-commutativeN/A

                \[\leadsto \left(\color{blue}{\left(\frac{-1}{180} \cdot {x}^{2} + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
              8. lower-fma.f64N/A

                \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{-1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
              9. unpow2N/A

                \[\leadsto \left(\mathsf{fma}\left(\frac{-1}{180}, \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              10. lower-*.f64 98.4%

                \[\leadsto \left(\mathsf{fma}\left(-0.005555555555555556, \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
            5. Applied rewrites 98.4%

              \[\leadsto \color{blue}{\left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
            6. Add Preprocessing

            Alternative 6: 96.2% accurate, 12.5× speedup

            \[\begin{array}{l} \\ \frac{x}{6} \cdot x \end{array} \]
            (FPCore (x) :precision binary64 (* (/ x 6.0) x))
            double code(double x) {
            	return (x / 6.0) * x;
            }
            
            real(8) function code(x)
                real(8), intent (in) :: x
                code = (x / 6.0d0) * x
            end function
            
            public static double code(double x) {
            	return (x / 6.0) * x;
            }
            
            def code(x):
            	return (x / 6.0) * x
            
            function code(x)
            	return Float64(Float64(x / 6.0) * x)
            end
            
            function tmp = code(x)
            	tmp = (x / 6.0) * x;
            end
            
            code[x_] := N[(N[(x / 6.0), $MachinePrecision] * x), $MachinePrecision]
            
            \begin{array}{l}
            
            \\
            \frac{x}{6} \cdot x
            \end{array}
            
            Derivation
            1. Initial program 52.1%

              \[\log \left(\frac{\sinh x}{x}\right) \]
            2. Add Preprocessing
            3. Taylor expanded in x around 0

              \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
            4. Step-by-step derivation
              1. unpow2N/A

                \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
              2. associate-*l*N/A

                \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
              3. *-commutativeN/A

                \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
              4. lower-*.f64N/A

                \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
              5. *-commutativeN/A

                \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
              6. lower-*.f64N/A

                \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
              7. +-commutativeN/A

                \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
              8. *-commutativeN/A

                \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
              9. lower-fma.f64N/A

                \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
              10. sub-negN/A

                \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              11. metadata-evalN/A

                \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              12. lower-fma.f64N/A

                \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              13. unpow2N/A

                \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              14. lower-*.f64N/A

                \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              15. unpow2N/A

                \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
              16. lower-*.f64 98.7%

                \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
            5. Applied rewrites 98.7%

              \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
            6. Step-by-step derivation
              1. Applied rewrites 98.8%

                \[\leadsto \frac{x \cdot x}{\color{blue}{\frac{1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}}} \]
              2. Taylor expanded in x around 0

                \[\leadsto \frac{x \cdot x}{6} \]
              3. Step-by-step derivation
                1. Applied rewrites 98.1%

                  \[\leadsto \frac{x \cdot x}{6} \]
                2. Step-by-step derivation
                  1. Applied rewrites 98.1%

                    \[\leadsto \color{blue}{\frac{x}{6} \cdot x} \]
                  2. Add Preprocessing

                  Alternative 7: 96.2% accurate, 19.3× speedup

                  \[\begin{array}{l} \\ \left(0.16666666666666666 \cdot x\right) \cdot x \end{array} \]
                  (FPCore (x) :precision binary64 (* (* 0.16666666666666666 x) x))
                  double code(double x) {
                  	return (0.16666666666666666 * x) * x;
                  }
                  
                  real(8) function code(x)
                      real(8), intent (in) :: x
                      code = (0.16666666666666666d0 * x) * x
                  end function
                  
                  public static double code(double x) {
                  	return (0.16666666666666666 * x) * x;
                  }
                  
                  def code(x):
                  	return (0.16666666666666666 * x) * x
                  
                  function code(x)
                  	return Float64(Float64(0.16666666666666666 * x) * x)
                  end
                  
                  function tmp = code(x)
                  	tmp = (0.16666666666666666 * x) * x;
                  end
                  
                  code[x_] := N[(N[(0.16666666666666666 * x), $MachinePrecision] * x), $MachinePrecision]
                  
                  \begin{array}{l}
                  
                  \\
                  \left(0.16666666666666666 \cdot x\right) \cdot x
                  \end{array}
                  
                  Derivation
                  1. Initial program 52.1%

                    \[\log \left(\frac{\sinh x}{x}\right) \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                  4. Step-by-step derivation
                    1. unpow2N/A

                      \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                    2. associate-*l*N/A

                      \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                    3. *-commutativeN/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    4. lower-*.f64N/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    5. *-commutativeN/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    6. lower-*.f64N/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    7. +-commutativeN/A

                      \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    8. *-commutativeN/A

                      \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    9. lower-fma.f64N/A

                      \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    10. sub-negN/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    11. metadata-evalN/A

                      \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    12. lower-fma.f64N/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    13. unpow2N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    14. lower-*.f64N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    15. unpow2N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    16. lower-*.f64 98.7%

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                  5. Applied rewrites 98.7%

                    \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                  6. Taylor expanded in x around 0

                    \[\leadsto \left(\frac{1}{6} \cdot x\right) \cdot x \]
                  7. Step-by-step derivation
                    1. Applied rewrites 98.0%

                      \[\leadsto \left(0.16666666666666666 \cdot x\right) \cdot x \]
                    2. Add Preprocessing

                    Alternative 8: 96.1% accurate, 19.3× speedup

                    \[\begin{array}{l} \\ 0.16666666666666666 \cdot \left(x \cdot x\right) \end{array} \]
                    (FPCore (x) :precision binary64 (* 0.16666666666666666 (* x x)))
                    double code(double x) {
                    	return 0.16666666666666666 * (x * x);
                    }
                    
                    real(8) function code(x)
                        real(8), intent (in) :: x
                        code = 0.16666666666666666d0 * (x * x)
                    end function
                    
                    public static double code(double x) {
                    	return 0.16666666666666666 * (x * x);
                    }
                    
                    def code(x):
                    	return 0.16666666666666666 * (x * x)
                    
                    function code(x)
                    	return Float64(0.16666666666666666 * Float64(x * x))
                    end
                    
                    function tmp = code(x)
                    	tmp = 0.16666666666666666 * (x * x);
                    end
                    
                    code[x_] := N[(0.16666666666666666 * N[(x * x), $MachinePrecision]), $MachinePrecision]
                    
                    \begin{array}{l}
                    
                    \\
                    0.16666666666666666 \cdot \left(x \cdot x\right)
                    \end{array}
                    
                    Derivation
                    1. Initial program 52.1%

                      \[\log \left(\frac{\sinh x}{x}\right) \]
                    2. Add Preprocessing
                    3. Taylor expanded in x around 0

                      \[\leadsto \color{blue}{\frac{1}{6} \cdot {x}^{2}} \]
                    4. Step-by-step derivation
                      1. *-commutative N/A

                        \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{6}} \]
                      2. lower-*.f64 N/A

                        \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{6}} \]
                      3. unpow2 N/A

                        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{6} \]
                      4. lower-*.f64 98.0%

                        \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot 0.16666666666666666 \]
                    5. Applied rewrites 98.0%

                      \[\leadsto \color{blue}{\left(x \cdot x\right) \cdot 0.16666666666666666} \]
                    6. Final simplification 98.0%

                      \[\leadsto 0.16666666666666666 \cdot \left(x \cdot x\right) \]
                    7. Add Preprocessing
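
Alternatives 6 through 8 keep only the leading term of the series. For small x, the relative error of that truncation is roughly the ratio of the next term to the first:

\[\frac{\frac{{x}^{2}}{6} - \log \left(\frac{\sinh x}{x}\right)}{\log \left(\frac{\sinh x}{x}\right)} \approx \frac{{x}^{4}/180}{{x}^{2}/6} = \frac{{x}^{2}}{30} \]

For example, at |x| = 0.01 this contributes only about 3e-6 of relative error, while near |x| = 1 it already reaches a few percent.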

                    Developer Target 1: 97.7% accurate, 1.0× speedup

                    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\left|x\right| < 0.085:\\ \;\;\;\;\left(x \cdot x\right) \cdot \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-2.6455026455026456 \cdot 10^{-5}, x \cdot x, 0.0003527336860670194\right), x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)\\ \mathbf{else}:\\ \;\;\;\;\log \left(\frac{\sinh x}{x}\right)\\ \end{array} \end{array} \]
                    (FPCore (x)
                     :precision binary64
                     (if (< (fabs x) 0.085)
                       (*
                        (* x x)
                        (fma
                         (fma
                          (fma -2.6455026455026456e-5 (* x x) 0.0003527336860670194)
                          (* x x)
                          -0.005555555555555556)
                         (* x x)
                         0.16666666666666666))
                       (log (/ (sinh x) x))))
                    double code(double x) {
                    	double tmp;
                    	if (fabs(x) < 0.085) {
                    		tmp = (x * x) * fma(fma(fma(-2.6455026455026456e-5, (x * x), 0.0003527336860670194), (x * x), -0.005555555555555556), (x * x), 0.16666666666666666);
                    	} else {
                    		tmp = log((sinh(x) / x));
                    	}
                    	return tmp;
                    }
                    
                    function code(x)
                    	tmp = 0.0
                    	if (abs(x) < 0.085)
                    		tmp = Float64(Float64(x * x) * fma(fma(fma(-2.6455026455026456e-5, Float64(x * x), 0.0003527336860670194), Float64(x * x), -0.005555555555555556), Float64(x * x), 0.16666666666666666));
                    	else
                    		tmp = log(Float64(sinh(x) / x));
                    	end
                    	return tmp
                    end
                    
                    code[x_] := If[Less[N[Abs[x], $MachinePrecision], 0.085], N[(N[(x * x), $MachinePrecision] * N[(N[(N[(-2.6455026455026456e-5 * N[(x * x), $MachinePrecision] + 0.0003527336860670194), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision]), $MachinePrecision], N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]]
                    
                    \begin{array}{l}
                    
                    \\
                    \begin{array}{l}
                    \mathbf{if}\;\left|x\right| < 0.085:\\
                    \;\;\;\;\left(x \cdot x\right) \cdot \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-2.6455026455026456 \cdot 10^{-5}, x \cdot x, 0.0003527336860670194\right), x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)\\
                    
                    \mathbf{else}:\\
                    \;\;\;\;\log \left(\frac{\sinh x}{x}\right)\\
                    
                    
                    \end{array}
                    \end{array}
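
As a sanity check on the developer target's branch point, the C sketch below (assuming a standard math library; testing exactly at the threshold is my choice, not part of the report) evaluates both branches at x = 0.085 and prints them; they should agree closely, since the degree-8 polynomial is still very accurate there.

#include <math.h>
#include <stdio.h>

int main(void) {
	double x = 0.085;   /* the threshold used by the developer target */
	double x2 = x * x;
	/* Polynomial branch: x^2 * (1/6 - x^2/180 + x^4/2835 - x^6/37800), evaluated with fma. */
	double poly = x2 * fma(fma(fma(-2.6455026455026456e-5, x2, 0.0003527336860670194),
	                           x2, -0.005555555555555556),
	                       x2, 0.16666666666666666);
	/* Original formulation used on the other side of the branch. */
	double orig = log(sinh(x) / x);
	printf("polynomial branch = %.17g\n", poly);
	printf("original formula  = %.17g\n", orig);
	return 0;
}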
                    

                    Reproduce

                    herbie shell --seed 2024308 
                    (FPCore (x)
                      :name "bug500, discussion (missed optimization)"
                      :precision binary64
                    
                      :alt
                      (! :herbie-platform default (if (< (fabs x) 17/200) (let ((x2 (* x x))) (* x2 (fma (fma (fma -1/37800 x2 1/2835) x2 -1/180) x2 1/6))) (log (/ (sinh x) x))))
                    
                      (log (/ (sinh x) x)))
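
Beyond pasting the FPCore above into the interactive shell started by that command, it should also be possible to save it to a file and run Herbie in batch mode to regenerate a report; the file and directory names below are placeholders, and the exact options may differ between Herbie versions.

herbie report --seed 2024308 bug500.fpcore report/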