bug500, discussion (missed optimization)

Percentage Accurate: 53.4% → 97.2%
Time: 12.1s
Alternatives: 11
Speedup: 19.3×

Specification

\[\begin{array}{l} \\ \log \left(\frac{\sinh x}{x}\right) \end{array} \]
(FPCore (x) :precision binary64 (log (/ (sinh x) x)))
double code(double x) {
	return log((sinh(x) / x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = log((sinh(x) / x))
end function
public static double code(double x) {
	return Math.log((Math.sinh(x) / x));
}
def code(x):
	return math.log((math.sinh(x) / x))
function code(x)
	return log(Float64(sinh(x) / x))
end
function tmp = code(x)
	tmp = log((sinh(x) / x));
end
code[x_] := N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{\sinh x}{x}\right)
\end{array}

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is named in the plot title); the vertical axis is accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 11 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 53.4% accurate, 1.0× speedup

\[\begin{array}{l} \\ \log \left(\frac{\sinh x}{x}\right) \end{array} \]
(FPCore (x) :precision binary64 (log (/ (sinh x) x)))
double code(double x) {
	return log((sinh(x) / x));
}
real(8) function code(x)
    real(8), intent (in) :: x
    code = log((sinh(x) / x))
end function
public static double code(double x) {
	return Math.log((Math.sinh(x) / x));
}
def code(x):
	return math.log((math.sinh(x) / x))
function code(x)
	return log(Float64(sinh(x) / x))
end
function tmp = code(x)
	tmp = log((sinh(x) / x));
end
code[x_] := N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\log \left(\frac{\sinh x}{x}\right)
\end{array}

Alternative 1: 97.2% accurate, 1.4× speedup

\[\begin{array}{l} \\ \left(\mathsf{fma}\left({x}^{4}, 3.08641975308642 \cdot 10^{-5}, -0.027777777777777776\right) \cdot x\right) \cdot \frac{x}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)} \end{array} \]
(FPCore (x)
 :precision binary64
 (*
  (* (fma (pow x 4.0) 3.08641975308642e-5 -0.027777777777777776) x)
  (/
   x
   (fma
    (fma (* x x) 0.0003527336860670194 -0.005555555555555556)
    (* x x)
    -0.16666666666666666))))
double code(double x) {
	return (fma(pow(x, 4.0), 3.08641975308642e-5, -0.027777777777777776) * x) * (x / fma(fma((x * x), 0.0003527336860670194, -0.005555555555555556), (x * x), -0.16666666666666666));
}
function code(x)
	return Float64(Float64(fma((x ^ 4.0), 3.08641975308642e-5, -0.027777777777777776) * x) * Float64(x / fma(fma(Float64(x * x), 0.0003527336860670194, -0.005555555555555556), Float64(x * x), -0.16666666666666666)))
end
code[x_] := N[(N[(N[(N[Power[x, 4.0], $MachinePrecision] * 3.08641975308642e-5 + -0.027777777777777776), $MachinePrecision] * x), $MachinePrecision] * N[(x / N[(N[(N[(x * x), $MachinePrecision] * 0.0003527336860670194 + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\left(\mathsf{fma}\left({x}^{4}, 3.08641975308642 \cdot 10^{-5}, -0.027777777777777776\right) \cdot x\right) \cdot \frac{x}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)}
\end{array}
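
Numerically, Alternative 1 evaluates the degree-6 Taylor expansion as a ratio: x²(x⁴/32400 − 1/36) over (x⁴/2835 − x²/180 − 1/6), which tends to x²/6 as x → 0. The Python sketch below emulates each `fma(a, b, c)` as `a*b + c`; a real fused multiply-add skips the intermediate rounding but the structure is identical (`alt1` is an illustrative name):

```python
import math

def alt1(x):
    # Alternative 1 with fma(a, b, c) emulated as a*b + c.
    num = ((x ** 4) * 3.08641975308642e-5 + -0.027777777777777776) * x
    den = ((x * x) * 0.0003527336860670194 + -0.005555555555555556) * (x * x) + -0.16666666666666666
    return num * (x / den)

# Agrees with log(sinh(x)/x) at moderate x and stays accurate near 0,
# where the naive form underflows to 0.
```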
Derivation
  1. Initial program 52.1%

    \[\log \left(\frac{\sinh x}{x}\right) \]
  2. Add Preprocessing
  3. Taylor expanded in x around 0

    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
  4. Step-by-step derivation
    1. unpow2 (N/A)

      \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
    2. associate-*l* (N/A)

      \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
    3. *-commutative (N/A)

      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
    4. lower-*.f64 (N/A)

      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
    5. *-commutative (N/A)

      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
    6. lower-*.f64 (N/A)

      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
    7. +-commutative (N/A)

      \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
    8. *-commutative (N/A)

      \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
    9. lower-fma.f64 (N/A)

      \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
    10. sub-neg (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    11. metadata-eval (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    12. lower-fma.f64 (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    13. unpow2 (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    14. lower-*.f64 (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    15. unpow2 (N/A)

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
    16. lower-*.f64 (97.7%)

      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
  5. Applied rewrites (97.7%)

    \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
  6. Step-by-step derivation
    1. Applied rewrites (97.6%)

      \[\leadsto \frac{1}{\frac{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)}{\mathsf{fma}\left({\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right)\right)}^{2}, {x}^{4}, -0.027777777777777776\right) \cdot x}} \cdot x \]
    2. Taylor expanded in x around 0

      \[\leadsto \frac{1}{\frac{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, \frac{1}{2835}, \frac{-1}{180}\right), x \cdot x, \frac{-1}{6}\right)}{x \cdot \left(\frac{1}{32400} \cdot {x}^{4} - \frac{1}{36}\right)}} \cdot x \]
    3. Step-by-step derivation
      1. Applied rewrites (97.7%)

        \[\leadsto \frac{1}{\frac{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)}{\mathsf{fma}\left(3.08641975308642 \cdot 10^{-5}, {x}^{4}, -0.027777777777777776\right) \cdot x}} \cdot x \]
      2. Step-by-step derivation
        1. Applied rewrites (97.8%)

          \[\leadsto \color{blue}{\frac{x}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)} \cdot \left(\mathsf{fma}\left({x}^{4}, 3.08641975308642 \cdot 10^{-5}, -0.027777777777777776\right) \cdot x\right)} \]
        2. Final simplification (97.8%)

          \[\leadsto \left(\mathsf{fma}\left({x}^{4}, 3.08641975308642 \cdot 10^{-5}, -0.027777777777777776\right) \cdot x\right) \cdot \frac{x}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, -0.16666666666666666\right)} \]
        3. Add Preprocessing

Alternative 2: 97.1% accurate, 1.5× speedup

        \[\begin{array}{l} \\ \frac{x}{\frac{{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right) \cdot x, x, 0.16666666666666666\right)\right)}^{-1}}{x}} \end{array} \]
        (FPCore (x)
         :precision binary64
         (/
          x
          (/
           (pow
            (fma
             (* (fma 0.0003527336860670194 (* x x) -0.005555555555555556) x)
             x
             0.16666666666666666)
            -1.0)
           x)))
        double code(double x) {
        	return x / (pow(fma((fma(0.0003527336860670194, (x * x), -0.005555555555555556) * x), x, 0.16666666666666666), -1.0) / x);
        }
        
        function code(x)
        	return Float64(x / Float64((fma(Float64(fma(0.0003527336860670194, Float64(x * x), -0.005555555555555556) * x), x, 0.16666666666666666) ^ -1.0) / x))
        end
        
        code[x_] := N[(x / N[(N[Power[N[(N[(N[(0.0003527336860670194 * N[(x * x), $MachinePrecision] + -0.005555555555555556), $MachinePrecision] * x), $MachinePrecision] * x + 0.16666666666666666), $MachinePrecision], -1.0], $MachinePrecision] / x), $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        \frac{x}{\frac{{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right) \cdot x, x, 0.16666666666666666\right)\right)}^{-1}}{x}}
        \end{array}
        
        Derivation
        1. Initial program 52.1%

          \[\log \left(\frac{\sinh x}{x}\right) \]
        2. Add Preprocessing
        3. Taylor expanded in x around 0

          \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
        4. Step-by-step derivation
          1. unpow2 (N/A)

            \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
          2. associate-*l* (N/A)

            \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
          3. *-commutative (N/A)

            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
          4. lower-*.f64 (N/A)

            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
          5. *-commutative (N/A)

            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
          6. lower-*.f64 (N/A)

            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
          7. +-commutative (N/A)

            \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
          8. *-commutative (N/A)

            \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
          9. lower-fma.f64 (N/A)

            \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
          10. sub-neg (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          11. metadata-eval (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          12. lower-fma.f64 (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          13. unpow2 (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          14. lower-*.f64 (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          15. unpow2 (N/A)

            \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
          16. lower-*.f64 (97.7%)

            \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
        5. Applied rewrites (97.7%)

          \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
        6. Step-by-step derivation
          1. Applied rewrites (97.7%)

            \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
          2. Applied rewrites (97.8%)

            \[\leadsto \frac{x}{\color{blue}{\frac{{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right) \cdot x, x, 0.16666666666666666\right)\right)}^{-1}}{x}}} \]
          3. Add Preprocessing
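
Algebraically, Alternative 2 rewrites the polynomial form x²·P(x) as x / (P(x)⁻¹ / x); the value is unchanged up to a few extra roundings. A quick Python check (the names `p` and `alt2` are illustrative, with `fma(a, b, c)` again emulated as `a*b + c`):

```python
import math

def p(x):
    # The shared polynomial factor: x^4/2835 - x^2/180 + 1/6
    return ((0.0003527336860670194 * (x * x) + -0.005555555555555556) * x) * x + 0.16666666666666666

def alt2(x):
    # x / (p(x)^-1 / x) equals x * x * p(x), up to rounding
    return x / (p(x) ** -1.0 / x)
```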

Alternative 3: 97.1% accurate, 4.1× speedup

          \[\begin{array}{l} \\ \left(-x\right) \cdot \frac{x}{\frac{-1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}} \end{array} \]
          (FPCore (x)
           :precision binary64
           (*
            (- x)
            (/
             x
             (/
              -1.0
              (fma
               (fma (* x x) 0.0003527336860670194 -0.005555555555555556)
               (* x x)
               0.16666666666666666)))))
          double code(double x) {
          	return -x * (x / (-1.0 / fma(fma((x * x), 0.0003527336860670194, -0.005555555555555556), (x * x), 0.16666666666666666)));
          }
          
          function code(x)
          	return Float64(Float64(-x) * Float64(x / Float64(-1.0 / fma(fma(Float64(x * x), 0.0003527336860670194, -0.005555555555555556), Float64(x * x), 0.16666666666666666))))
          end
          
          code[x_] := N[((-x) * N[(x / N[(-1.0 / N[(N[(N[(x * x), $MachinePrecision] * 0.0003527336860670194 + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
          
          \begin{array}{l}
          
          \\
          \left(-x\right) \cdot \frac{x}{\frac{-1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}}
          \end{array}
          
          Derivation
          1. Initial program 52.1%

            \[\log \left(\frac{\sinh x}{x}\right) \]
          2. Add Preprocessing
          3. Taylor expanded in x around 0

            \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
          4. Step-by-step derivation
            1. unpow2 (N/A)

              \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
            2. associate-*l* (N/A)

              \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
            3. *-commutative (N/A)

              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
            4. lower-*.f64 (N/A)

              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
            5. *-commutative (N/A)

              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
            6. lower-*.f64 (N/A)

              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
            7. +-commutative (N/A)

              \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
            8. *-commutative (N/A)

              \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
            9. lower-fma.f64 (N/A)

              \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
            10. sub-neg (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            11. metadata-eval (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            12. lower-fma.f64 (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            13. unpow2 (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            14. lower-*.f64 (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            15. unpow2 (N/A)

              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
            16. lower-*.f64 (97.7%)

              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
          5. Applied rewrites (97.7%)

            \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
          6. Step-by-step derivation
            1. Applied rewrites (97.7%)

              \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
            2. Applied rewrites (97.8%)

              \[\leadsto \frac{x}{\color{blue}{\frac{{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right) \cdot x, x, 0.16666666666666666\right)\right)}^{-1}}{x}}} \]
            3. Step-by-step derivation
              1. Applied rewrites (97.8%)

                \[\leadsto \frac{x}{\frac{-1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}} \cdot \color{blue}{\left(-x\right)} \]
              2. Final simplification (97.8%)

                \[\leadsto \left(-x\right) \cdot \frac{x}{\frac{-1}{\mathsf{fma}\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)}} \]
              3. Add Preprocessing
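
Alternative 3 reaches the same polynomial through two sign flips that cancel: (−x) · (x / (−1/D(x))) = x²·D(x). A small Python check (`d_poly` and `alt3` are illustrative names; `fma(a, b, c)` emulated as `a*b + c`):

```python
import math

def d_poly(x):
    # x^4/2835 - x^2/180 + 1/6
    return ((x * x) * 0.0003527336860670194 + -0.005555555555555556) * (x * x) + 0.16666666666666666

def alt3(x):
    # (-x) * (x / (-1/d)) == x*x*d: the two negations cancel
    return (-x) * (x / (-1.0 / d_poly(x)))
```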

Alternative 4: 97.1% accurate, 5.6× speedup

              \[\begin{array}{l} \\ \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \end{array} \]
              (FPCore (x)
               :precision binary64
               (*
                (fma
                 (* (* (fma (* x x) 0.0003527336860670194 -0.005555555555555556) x) x)
                 x
                 (* 0.16666666666666666 x))
                x))
              double code(double x) {
              	return fma(((fma((x * x), 0.0003527336860670194, -0.005555555555555556) * x) * x), x, (0.16666666666666666 * x)) * x;
              }
              
              function code(x)
              	return Float64(fma(Float64(Float64(fma(Float64(x * x), 0.0003527336860670194, -0.005555555555555556) * x) * x), x, Float64(0.16666666666666666 * x)) * x)
              end
              
              code[x_] := N[(N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 0.0003527336860670194 + -0.005555555555555556), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * x + N[(0.16666666666666666 * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]
              
              \begin{array}{l}
              
              \\
              \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x
              \end{array}
              
              Derivation
              1. Initial program 52.1%

                \[\log \left(\frac{\sinh x}{x}\right) \]
              2. Add Preprocessing
              3. Taylor expanded in x around 0

                \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
              4. Step-by-step derivation
                1. unpow2 (N/A)

                  \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                2. associate-*l* (N/A)

                  \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                3. *-commutative (N/A)

                  \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                4. lower-*.f64 (N/A)

                  \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                5. *-commutative (N/A)

                  \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                6. lower-*.f64 (N/A)

                  \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                7. +-commutative (N/A)

                  \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                8. *-commutative (N/A)

                  \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                9. lower-fma.f64 (N/A)

                  \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                10. sub-neg (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                11. metadata-eval (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                12. lower-fma.f64 (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                13. unpow2 (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                14. lower-*.f64 (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                15. unpow2 (N/A)

                  \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                16. lower-*.f64 (97.7%)

                  \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
              5. Applied rewrites (97.7%)

                \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
              6. Step-by-step derivation
                1. Applied rewrites (97.7%)

                  \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
                2. Add Preprocessing
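
Expanded, Alternative 4 is the Taylor polynomial x²/6 + x⁴(x²/2835 − 1/180) in a Horner-with-fma arrangement with no divisions, which accounts for the larger speedup. A Python sketch with `fma(a, b, c)` emulated as `a*b + c` (`alt4` is an illustrative name):

```python
def alt4(x):
    inner = (x * x) * 0.0003527336860670194 + -0.005555555555555556
    # fma(((inner * x) * x), x, (1/6) * x) * x  ==  inner*x^4 + x^2/6
    return (((inner * x) * x) * x + 0.16666666666666666 * x) * x
```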

Alternative 5: 97.1% accurate, 5.6× speedup

                \[\begin{array}{l} \\ \mathsf{fma}\left(x, 0.16666666666666666, \left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x\right) \cdot x\right) \cdot x \end{array} \]
                (FPCore (x)
                 :precision binary64
                 (*
                  (fma
                   x
                   0.16666666666666666
                   (* (* (* (fma (* x x) 0.0003527336860670194 -0.005555555555555556) x) x) x))
                  x))
                double code(double x) {
                	return fma(x, 0.16666666666666666, (((fma((x * x), 0.0003527336860670194, -0.005555555555555556) * x) * x) * x)) * x;
                }
                
                function code(x)
                	return Float64(fma(x, 0.16666666666666666, Float64(Float64(Float64(fma(Float64(x * x), 0.0003527336860670194, -0.005555555555555556) * x) * x) * x)) * x)
                end
                
                code[x_] := N[(N[(x * 0.16666666666666666 + N[(N[(N[(N[(N[(x * x), $MachinePrecision] * 0.0003527336860670194 + -0.005555555555555556), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]
                
                \begin{array}{l}
                
                \\
                \mathsf{fma}\left(x, 0.16666666666666666, \left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x\right) \cdot x\right) \cdot x
                \end{array}
                
                Derivation
                1. Initial program 52.1%

                  \[\log \left(\frac{\sinh x}{x}\right) \]
                2. Add Preprocessing
                3. Taylor expanded in x around 0

                  \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                4. Step-by-step derivation
                  1. unpow2 (N/A)

                    \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                  2. associate-*l* (N/A)

                    \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                  3. *-commutative (N/A)

                    \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                  4. lower-*.f64 (N/A)

                    \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                  5. *-commutative (N/A)

                    \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                  6. lower-*.f64 (N/A)

                    \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                  7. +-commutative (N/A)

                    \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                  8. *-commutative (N/A)

                    \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  9. lower-fma.f64 (N/A)

                    \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                  10. sub-neg N/A

                    \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  11. metadata-eval N/A

                    \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  12. lower-fma.f64 N/A

                    \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  13. unpow2 N/A

                    \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  14. lower-*.f64 N/A

                    \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  15. unpow2 N/A

                    \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                  16. lower-*.f64 97.7%

                    \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                5. Applied rewrites 97.7%

                  \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                6. Step-by-step derivation
                  1. Applied rewrites 97.7%

                    \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, \left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x\right) \cdot x\right) \cdot x \]
                  2. Add Preprocessing
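
                As a cross-check on the coefficients above (an editorial note, not generated by Herbie): dividing the series for \(\sinh x\) by \(x\) gives

                \[\frac{\sinh x}{x} = 1 + \frac{x^2}{6} + \frac{x^4}{120} + \frac{x^6}{5040} + \cdots\]

                and applying \(\log(1+u) = u - \frac{u^2}{2} + \frac{u^3}{3} - \cdots\) with \(u = \frac{x^2}{6} + \frac{x^4}{120} + \frac{x^6}{5040} + \cdots\) yields

                \[\log \frac{\sinh x}{x} = \frac{x^2}{6} - \frac{x^4}{180} + \frac{x^6}{2835} - \cdots\]

                since \(\frac{1}{120} - \frac{1}{72} = \frac{-1}{180}\) and \(\frac{1}{5040} - \frac{1}{720} + \frac{1}{648} = \frac{1}{2835}\). These fractions match the decimal constants 0.16666666666666666, -0.005555555555555556, and 0.0003527336860670194 in the lowered expressions.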

                  Alternative 6: 97.1% accurate, 6.4× speedup

                  \[\begin{array}{l} \\ \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x \end{array} \]
                  (FPCore (x)
                   :precision binary64
                   (*
                    (*
                     (fma
                      (fma 0.0003527336860670194 (* x x) -0.005555555555555556)
                      (* x x)
                      0.16666666666666666)
                     x)
                    x))
                  double code(double x) {
                  	return (fma(fma(0.0003527336860670194, (x * x), -0.005555555555555556), (x * x), 0.16666666666666666) * x) * x;
                  }
                  
                  function code(x)
                  	return Float64(Float64(fma(fma(0.0003527336860670194, Float64(x * x), -0.005555555555555556), Float64(x * x), 0.16666666666666666) * x) * x)
                  end
                  
                  code[x_] := N[(N[(N[(N[(0.0003527336860670194 * N[(x * x), $MachinePrecision] + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]
                  
                  
                  Derivation
                  1. Initial program 52.1%

                    \[\log \left(\frac{\sinh x}{x}\right) \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                  4. Step-by-step derivation
                    1. unpow2 N/A

                      \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                    2. associate-*l* N/A

                      \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                    3. *-commutative N/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    4. lower-*.f64 N/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    5. *-commutative N/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    6. lower-*.f64 N/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    7. +-commutative N/A

                      \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    8. *-commutative N/A

                      \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    9. lower-fma.f64 N/A

                      \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    10. sub-neg N/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    11. metadata-eval N/A

                      \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    12. lower-fma.f64 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    13. unpow2 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    14. lower-*.f64 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    15. unpow2 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    16. lower-*.f64 97.7%

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                  5. Applied rewrites 97.7%

                    \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                  6. Add Preprocessing
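
As a quick numeric sanity check of this alternative (an editorial addition, not part of the generated report), the degree-6 polynomial tracks log(sinh x / x) closely near 0. Plain multiplies stand in for fma here, which only loosens the rounding slightly:

```python
import math

# Alternative 6's polynomial, with plain multiplies standing in for fma.
# Coefficients are 1/2835, -1/180, and 1/6 from the Taylor expansion above.
def poly6(x):
    x2 = x * x
    return ((0.0003527336860670194 * x2 - 0.005555555555555556) * x2
            + 0.16666666666666666) * x2

# The original program, accurate only away from 0.
def naive(x):
    return math.log(math.sinh(x) / x)

# At moderate x the two agree closely; the truncation error is O(x^8).
print(poly6(0.5), naive(0.5))
```

For very small x the polynomial keeps producing a sensible positive value, whereas the naive form loses all its significant bits once sinh(x)/x rounds to 1.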

                  Alternative 7: 96.7% accurate, 7.9× speedup

                  \[\begin{array}{l} \\ \mathsf{fma}\left(\left(-0.005555555555555556 \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \end{array} \]
                  (FPCore (x)
                   :precision binary64
                   (* (fma (* (* -0.005555555555555556 x) x) x (* 0.16666666666666666 x)) x))
                  double code(double x) {
                  	return fma(((-0.005555555555555556 * x) * x), x, (0.16666666666666666 * x)) * x;
                  }
                  
                  function code(x)
                  	return Float64(fma(Float64(Float64(-0.005555555555555556 * x) * x), x, Float64(0.16666666666666666 * x)) * x)
                  end
                  
                  code[x_] := N[(N[(N[(N[(-0.005555555555555556 * x), $MachinePrecision] * x), $MachinePrecision] * x + N[(0.16666666666666666 * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]
                  
                  
                  Derivation
                  1. Initial program 52.1%

                    \[\log \left(\frac{\sinh x}{x}\right) \]
                  2. Add Preprocessing
                  3. Taylor expanded in x around 0

                    \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                  4. Step-by-step derivation
                    1. unpow2 N/A

                      \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                    2. associate-*l* N/A

                      \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                    3. *-commutative N/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    4. lower-*.f64 N/A

                      \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                    5. *-commutative N/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    6. lower-*.f64 N/A

                      \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                    7. +-commutative N/A

                      \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    8. *-commutative N/A

                      \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    9. lower-fma.f64 N/A

                      \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                    10. sub-neg N/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    11. metadata-eval N/A

                      \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    12. lower-fma.f64 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    13. unpow2 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    14. lower-*.f64 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    15. unpow2 N/A

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                    16. lower-*.f64 97.7%

                      \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                  5. Applied rewrites 97.7%

                    \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                  6. Step-by-step derivation
                    1. Applied rewrites 97.7%

                      \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
                    2. Taylor expanded in x around 0

                      \[\leadsto \mathsf{fma}\left(\left(\frac{-1}{180} \cdot x\right) \cdot x, x, \frac{1}{6} \cdot x\right) \cdot x \]
                    3. Step-by-step derivation
                      1. Applied rewrites 97.4%

                        \[\leadsto \mathsf{fma}\left(\left(-0.005555555555555556 \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
                      2. Add Preprocessing
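
To see what the second Taylor truncation costs (an editorial check, with plain multiplies again standing in for fma), compare this degree-4 form against the degree-6 one and the reference at x = 0.5:

```python
import math

# Alternative 7 reduces to x^2 * (1/6 - x^2/180): the x^6 term is dropped.
def poly4(x):
    return ((-0.005555555555555556 * x) * x * x
            + 0.16666666666666666 * x) * x

# The degree-6 polynomial from alternative 6, for comparison.
def poly6(x):
    x2 = x * x
    return ((0.0003527336860670194 * x2 - 0.005555555555555556) * x2
            + 0.16666666666666666) * x2

ref = math.log(math.sinh(0.5) / 0.5)
# The degree-4 error is roughly x^6/2835 here, noticeably larger than
# the degree-6 error, consistent with the small accuracy drop reported.
print(abs(poly4(0.5) - ref), abs(poly6(0.5) - ref))
```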

                      Alternative 8: 96.7% accurate, 7.9× speedup

                      \[\begin{array}{l} \\ \mathsf{fma}\left(x, 0.16666666666666666, \left(-0.005555555555555556 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x \end{array} \]
                      (FPCore (x)
                       :precision binary64
                       (* (fma x 0.16666666666666666 (* (* -0.005555555555555556 (* x x)) x)) x))
                      double code(double x) {
                      	return fma(x, 0.16666666666666666, ((-0.005555555555555556 * (x * x)) * x)) * x;
                      }
                      
                      function code(x)
                      	return Float64(fma(x, 0.16666666666666666, Float64(Float64(-0.005555555555555556 * Float64(x * x)) * x)) * x)
                      end
                      
                      code[x_] := N[(N[(x * 0.16666666666666666 + N[(N[(-0.005555555555555556 * N[(x * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision] * x), $MachinePrecision]
                      
                      
                      Derivation
                      1. Initial program 52.1%

                        \[\log \left(\frac{\sinh x}{x}\right) \]
                      2. Add Preprocessing
                      3. Taylor expanded in x around 0

                        \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                      4. Step-by-step derivation
                        1. unpow2 N/A

                          \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                        2. associate-*l* N/A

                          \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                        3. *-commutative N/A

                          \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                        4. lower-*.f64 N/A

                          \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                        5. *-commutative N/A

                          \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                        6. lower-*.f64 N/A

                          \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                        7. +-commutative N/A

                          \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                        8. *-commutative N/A

                          \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        9. lower-fma.f64 N/A

                          \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                        10. sub-neg N/A

                          \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        11. metadata-eval N/A

                          \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        12. lower-fma.f64 N/A

                          \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        13. unpow2 N/A

                          \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        14. lower-*.f64 N/A

                          \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        15. unpow2 N/A

                          \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                        16. lower-*.f64 97.7%

                          \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                      5. Applied rewrites 97.7%

                        \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                      6. Step-by-step derivation
                        1. Applied rewrites 97.7%

                          \[\leadsto \mathsf{fma}\left(\left(\mathsf{fma}\left(x \cdot x, 0.0003527336860670194, -0.005555555555555556\right) \cdot x\right) \cdot x, x, 0.16666666666666666 \cdot x\right) \cdot x \]
                        2. Taylor expanded in x around 0

                          \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)} \]
                        3. Step-by-step derivation
                          1. unpow2 N/A

                            \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \]
                          2. associate-*l* N/A

                            \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right)} \]
                          3. *-commutative N/A

                            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
                          4. lower-*.f64 N/A

                            \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
                          5. *-commutative N/A

                            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
                          6. lower-*.f64 N/A

                            \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
                          7. +-commutative N/A

                            \[\leadsto \left(\color{blue}{\left(\frac{-1}{180} \cdot {x}^{2} + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                          8. *-commutative N/A

                            \[\leadsto \left(\left(\color{blue}{{x}^{2} \cdot \frac{-1}{180}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                          9. lower-fma.f64 N/A

                            \[\leadsto \left(\color{blue}{\mathsf{fma}\left({x}^{2}, \frac{-1}{180}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                          10. unpow2 N/A

                            \[\leadsto \left(\mathsf{fma}\left(\color{blue}{x \cdot x}, \frac{-1}{180}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                          11. lower-*.f64 97.4%

                            \[\leadsto \left(\mathsf{fma}\left(\color{blue}{x \cdot x}, -0.005555555555555556, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                        4. Applied rewrites 97.4%

                          \[\leadsto \color{blue}{\left(\mathsf{fma}\left(x \cdot x, -0.005555555555555556, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                        5. Step-by-step derivation
                          1. Applied rewrites 97.4%

                            \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, x \cdot \left(-0.005555555555555556 \cdot \left(x \cdot x\right)\right)\right) \cdot x \]
                          2. Final simplification 97.4%

                            \[\leadsto \mathsf{fma}\left(x, 0.16666666666666666, \left(-0.005555555555555556 \cdot \left(x \cdot x\right)\right) \cdot x\right) \cdot x \]
                          3. Add Preprocessing

                          Alternative 9: 96.7% accurate, 9.6× speedup

                          \[\begin{array}{l} \\ \left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x \end{array} \]
                          (FPCore (x)
                           :precision binary64
                           (* (* (fma -0.005555555555555556 (* x x) 0.16666666666666666) x) x))
                          double code(double x) {
                          	return (fma(-0.005555555555555556, (x * x), 0.16666666666666666) * x) * x;
                          }
                          
                          function code(x)
                          	return Float64(Float64(fma(-0.005555555555555556, Float64(x * x), 0.16666666666666666) * x) * x)
                          end
                          
                          code[x_] := N[(N[(N[(-0.005555555555555556 * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision]
                          
                          
                          Derivation
                          1. Initial program 52.1%

                            \[\log \left(\frac{\sinh x}{x}\right) \]
                          2. Add Preprocessing
                          3. Taylor expanded in x around 0

                            \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)} \]
                          4. Step-by-step derivation
                            1. unpow2 N/A

                              \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \]
                            2. associate-*l* N/A

                              \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right)} \]
                            3. *-commutative N/A

                              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
                            4. lower-*.f64 N/A

                              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right)\right) \cdot x} \]
                            5. *-commutative N/A

                              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
                            6. lower-*.f64 N/A

                              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + \frac{-1}{180} \cdot {x}^{2}\right) \cdot x\right)} \cdot x \]
                            7. +-commutative N/A

                              \[\leadsto \left(\color{blue}{\left(\frac{-1}{180} \cdot {x}^{2} + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                            8. lower-fma.f64 N/A

                              \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{-1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                            9. unpow2 N/A

                              \[\leadsto \left(\mathsf{fma}\left(\frac{-1}{180}, \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            10. lower-*.f64 97.4%

                              \[\leadsto \left(\mathsf{fma}\left(-0.005555555555555556, \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                          5. Applied rewrites 97.4%

                            \[\leadsto \color{blue}{\left(\mathsf{fma}\left(-0.005555555555555556, x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
                          6. Add Preprocessing
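
This alternative is just the Horner arrangement of x^2/6 - x^4/180. As a small editorial check (plain multiplies in place of fma), the Horner form and the written-out polynomial agree to the last few bits at moderate inputs:

```python
# Horner arrangement used by alternative 9 (fma replaced by * and +).
def horner(x):
    return ((-0.005555555555555556 * (x * x) + 0.16666666666666666) * x) * x

# The same polynomial written out directly.
def expanded(x):
    return x * x / 6.0 - x ** 4 / 180.0

# The two evaluations differ only by rounding in the last few bits.
print(horner(0.3), expanded(0.3))
```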

                          Alternative 10: 96.5% accurate, 19.3× speedup

                          \[\begin{array}{l} \\ \left(0.16666666666666666 \cdot x\right) \cdot x \end{array} \]
                          (FPCore (x) :precision binary64 (* (* 0.16666666666666666 x) x))
                          double code(double x) {
                          	return (0.16666666666666666 * x) * x;
                          }
                          
                          real(8) function code(x)
                              real(8), intent (in) :: x
                              code = (0.16666666666666666d0 * x) * x
                          end function
                          
                          public static double code(double x) {
                          	return (0.16666666666666666 * x) * x;
                          }
                          
                          def code(x):
                          	return (0.16666666666666666 * x) * x
                          
                          function code(x)
                          	return Float64(Float64(0.16666666666666666 * x) * x)
                          end
                          
                          function tmp = code(x)
                          	tmp = (0.16666666666666666 * x) * x;
                          end
                          
                          code[x_] := N[(N[(0.16666666666666666 * x), $MachinePrecision] * x), $MachinePrecision]
                          
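
The leading-term approximation trades most of the remaining accuracy for speed. A quick editorial check of its error against the reference: the absolute error is about x^4/180, i.e. under one percent relative at x = 0.5 and negligible for small x.

```python
import math

# Alternative 10 keeps only the leading x^2/6 term of the series.
def poly2(x):
    return (0.16666666666666666 * x) * x

ref = math.log(math.sinh(0.5) / 0.5)
# Error at x = 0.5 is roughly x^4/180, a few parts per thousand relative.
print(abs(poly2(0.5) - ref))
```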
                          
                          Derivation
                          1. Initial program 52.1%

                            \[\log \left(\frac{\sinh x}{x}\right) \]
                          2. Add Preprocessing
                          3. Taylor expanded in x around 0

                            \[\leadsto \color{blue}{{x}^{2} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)} \]
                          4. Step-by-step derivation
                            1. unpow2 N/A

                              \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \]
                            2. associate-*l* N/A

                              \[\leadsto \color{blue}{x \cdot \left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right)} \]
                            3. *-commutative N/A

                              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                            4. lower-*.f64 N/A

                              \[\leadsto \color{blue}{\left(x \cdot \left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right)\right) \cdot x} \]
                            5. *-commutative N/A

                              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                            6. lower-*.f64 N/A

                              \[\leadsto \color{blue}{\left(\left(\frac{1}{6} + {x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right)\right) \cdot x\right)} \cdot x \]
                            7. +-commutative N/A

                              \[\leadsto \left(\color{blue}{\left({x}^{2} \cdot \left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) + \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                            8. *-commutative N/A

                              \[\leadsto \left(\left(\color{blue}{\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}\right) \cdot {x}^{2}} + \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            9. lower-fma.f64 N/A

                              \[\leadsto \left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} - \frac{1}{180}, {x}^{2}, \frac{1}{6}\right)} \cdot x\right) \cdot x \]
                            10. sub-neg (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\frac{1}{2835} \cdot {x}^{2} + \left(\mathsf{neg}\left(\frac{1}{180}\right)\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            11. metadata-eval (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\frac{1}{2835} \cdot {x}^{2} + \color{blue}{\frac{-1}{180}}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            12. lower-fma.f64 (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(\frac{1}{2835}, {x}^{2}, \frac{-1}{180}\right)}, {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            13. unpow2 (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            14. lower-*.f64 (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, \color{blue}{x \cdot x}, \frac{-1}{180}\right), {x}^{2}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            15. unpow2 (N/A)

                              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2835}, x \cdot x, \frac{-1}{180}\right), \color{blue}{x \cdot x}, \frac{1}{6}\right) \cdot x\right) \cdot x \]
                            16. lower-*.f64 (97.7%)

                              \[\leadsto \left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), \color{blue}{x \cdot x}, 0.16666666666666666\right) \cdot x\right) \cdot x \]
                          5. Applied rewrites (97.7%)

                            \[\leadsto \color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.0003527336860670194, x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right) \cdot x\right) \cdot x} \]
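As a sanity check (not part of the Herbie output), the rewritten polynomial can be compared against the original expression at a small input. The Python sketch below emulates `fma` with a plain multiply-add, since `math.fma` only exists from Python 3.13; that loses the fused rounding, but is adequate for an agreement check:

```python
import math

def fma(a, b, c):
    # stand-in for a fused multiply-add; rounds twice, unlike real fma
    return a * b + c

def poly(x):
    # (fma(fma(1/2835, x*x, -1/180), x*x, 1/6) * x) * x
    x2 = x * x
    inner = fma(0.0003527336860670194, x2, -0.005555555555555556)
    return (fma(inner, x2, 0.16666666666666666) * x) * x

def direct(x):
    # original program: log(sinh(x) / x)
    return math.log(math.sinh(x) / x)

x = 0.01
rel = abs(poly(x) - direct(x)) / abs(direct(x))
print(rel)  # the two evaluations agree to many digits at x = 0.01
```

At x = 0.01 both forms are still accurate, so they should agree to roughly ten significant digits; the polynomial's advantage only shows up for much smaller inputs, where the direct formula cancels.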
                          6. Taylor expanded in x around 0

                            \[\leadsto \left(\frac{1}{6} \cdot x\right) \cdot x \]
                          7. Step-by-step derivation
                            1. Applied rewrites (97.1%)

                              \[\leadsto \left(0.16666666666666666 \cdot x\right) \cdot x \]
                            2. Add Preprocessing

                            Alternative 11: 96.4% accurate, 19.3× speedup

                            \[\begin{array}{l} \\ 0.16666666666666666 \cdot \left(x \cdot x\right) \end{array} \]
                            (FPCore (x) :precision binary64 (* 0.16666666666666666 (* x x)))
                            double code(double x) {
                            	return 0.16666666666666666 * (x * x);
                            }
                            
                            real(8) function code(x)
                                real(8), intent (in) :: x
                                code = 0.16666666666666666d0 * (x * x)
                            end function
                            
                            public static double code(double x) {
                            	return 0.16666666666666666 * (x * x);
                            }
                            
                            def code(x):
                            	return 0.16666666666666666 * (x * x)
                            
                            function code(x)
                            	return Float64(0.16666666666666666 * Float64(x * x))
                            end
                            
                            function tmp = code(x)
                            	tmp = 0.16666666666666666 * (x * x);
                            end
                            
                            code[x_] := N[(0.16666666666666666 * N[(x * x), $MachinePrecision]), $MachinePrecision]
                            
                            \begin{array}{l}
                            
                            \\
                            0.16666666666666666 \cdot \left(x \cdot x\right)
                            \end{array}
                            
                            Derivation
                            1. Initial program (52.1%)

                              \[\log \left(\frac{\sinh x}{x}\right) \]
                            2. Add Preprocessing
                            3. Taylor expanded in x around 0

                              \[\leadsto \color{blue}{\frac{1}{6} \cdot {x}^{2}} \]
                            4. Step-by-step derivation
                              1. *-commutative (N/A)

                                \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{6}} \]
                              2. lower-*.f64 (N/A)

                                \[\leadsto \color{blue}{{x}^{2} \cdot \frac{1}{6}} \]
                              3. unpow2 (N/A)

                                \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot \frac{1}{6} \]
                              4. lower-*.f64 (97.0%)

                                \[\leadsto \color{blue}{\left(x \cdot x\right)} \cdot 0.16666666666666666 \]
                            5. Applied rewrites (97.0%)

                              \[\leadsto \color{blue}{\left(x \cdot x\right) \cdot 0.16666666666666666} \]
                            6. Final simplification (97.0%)

                              \[\leadsto 0.16666666666666666 \cdot \left(x \cdot x\right) \]
                            7. Add Preprocessing
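To see why such a simple replacement helps (an observation, not part of the report): for tiny x, sinh(x)/x lands within an ulp of 1.0 in binary64, so the original program returns 0.0 or a stray multiple of the machine epsilon, while 0.16666…·x² keeps nearly full relative accuracy. A quick Python illustration (the exact value of `direct` depends on the libm's sinh rounding, hence the loose checks):

```python
import math

x = 1e-8
approx = 0.16666666666666666 * (x * x)  # Alternative 11
direct = math.log(math.sinh(x) / x)     # original program

# True value is x^2/6 - x^4/180 + ... ~= 1.67e-17. In binary64,
# sinh(x)/x rounds to within an ulp of 1.0, so the direct formula
# yields 0.0 (or a value near +/-1e-16), losing every digit.
print(approx, direct)
```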

                            Developer Target 1: 97.8% accurate, 1.0× speedup

                            \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\left|x\right| < 0.085:\\ \;\;\;\;\left(x \cdot x\right) \cdot \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-2.6455026455026456 \cdot 10^{-5}, x \cdot x, 0.0003527336860670194\right), x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)\\ \mathbf{else}:\\ \;\;\;\;\log \left(\frac{\sinh x}{x}\right)\\ \end{array} \end{array} \]
                            (FPCore (x)
                             :precision binary64
                             (if (< (fabs x) 0.085)
                               (*
                                (* x x)
                                (fma
                                 (fma
                                  (fma -2.6455026455026456e-5 (* x x) 0.0003527336860670194)
                                  (* x x)
                                  -0.005555555555555556)
                                 (* x x)
                                 0.16666666666666666))
                               (log (/ (sinh x) x))))
                            double code(double x) {
                            	double tmp;
                            	if (fabs(x) < 0.085) {
                            		tmp = (x * x) * fma(fma(fma(-2.6455026455026456e-5, (x * x), 0.0003527336860670194), (x * x), -0.005555555555555556), (x * x), 0.16666666666666666);
                            	} else {
                            		tmp = log((sinh(x) / x));
                            	}
                            	return tmp;
                            }
                            
                            function code(x)
                            	tmp = 0.0
                            	if (abs(x) < 0.085)
                            		tmp = Float64(Float64(x * x) * fma(fma(fma(-2.6455026455026456e-5, Float64(x * x), 0.0003527336860670194), Float64(x * x), -0.005555555555555556), Float64(x * x), 0.16666666666666666));
                            	else
                            		tmp = log(Float64(sinh(x) / x));
                            	end
                            	return tmp
                            end
                            
                            code[x_] := If[Less[N[Abs[x], $MachinePrecision], 0.085], N[(N[(x * x), $MachinePrecision] * N[(N[(N[(-2.6455026455026456e-5 * N[(x * x), $MachinePrecision] + 0.0003527336860670194), $MachinePrecision] * N[(x * x), $MachinePrecision] + -0.005555555555555556), $MachinePrecision] * N[(x * x), $MachinePrecision] + 0.16666666666666666), $MachinePrecision]), $MachinePrecision], N[Log[N[(N[Sinh[x], $MachinePrecision] / x), $MachinePrecision]], $MachinePrecision]]
                            
                            \begin{array}{l}
                            
                            \\
                            \begin{array}{l}
                            \mathbf{if}\;\left|x\right| < 0.085:\\
                            \;\;\;\;\left(x \cdot x\right) \cdot \mathsf{fma}\left(\mathsf{fma}\left(\mathsf{fma}\left(-2.6455026455026456 \cdot 10^{-5}, x \cdot x, 0.0003527336860670194\right), x \cdot x, -0.005555555555555556\right), x \cdot x, 0.16666666666666666\right)\\
                            
                            \mathbf{else}:\\
                            \;\;\;\;\log \left(\frac{\sinh x}{x}\right)\\
                            
                            
                            \end{array}
                            \end{array}
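One property worth spot-checking (again emulating `fma` with a plain multiply-add, since `math.fma` needs Python 3.13+): the two branches should agree closely at the |x| = 0.085 crossover, so the piecewise target is continuous to within rounding. A minimal sketch:

```python
import math

def fma(a, b, c):
    # stand-in for a fused multiply-add; rounds twice, unlike real fma
    return a * b + c

def poly_branch(x):
    # degree-8 Taylor polynomial branch from the developer target
    x2 = x * x
    return x2 * fma(fma(fma(-2.6455026455026456e-5, x2,
                            0.0003527336860670194), x2,
                        -0.005555555555555556), x2,
                    0.16666666666666666)

def direct_branch(x):
    # fallback branch: the original program
    return math.log(math.sinh(x) / x)

# At the threshold the truncation error of the polynomial and the
# cancellation error of the direct formula are both tiny, so the
# branches should match to roughly 1e-12 relative.
x = 0.085
rel = abs(poly_branch(x) - direct_branch(x)) / abs(direct_branch(x))
print(rel)
```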
                            

                            Reproduce

                            herbie shell --seed 2024331 
                            (FPCore (x)
                              :name "bug500, discussion (missed optimization)"
                              :precision binary64
                            
                              :alt
                              (! :herbie-platform default (if (< (fabs x) 17/200) (let ((x2 (* x x))) (* x2 (fma (fma (fma -1/37800 x2 1/2835) x2 -1/180) x2 1/6))) (log (/ (sinh x) x))))
                            
                              (log (/ (sinh x) x)))
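A small consistency check (not part of the report): the `:alt` clause writes the polynomial coefficients as exact rationals, and the decimal literals in the generated code above should be their nearest binary64 values. In Python:

```python
import math

# rationals from the :alt clause vs. decimal literals from the code
pairs = [
    (1 / 6,      0.16666666666666666),
    (-1 / 180,   -0.005555555555555556),
    (1 / 2835,   0.0003527336860670194),
    (-1 / 37800, -2.6455026455026456e-5),
]
# allow one-ulp slack in case a repr rounds differently
ok = all(abs(r - lit) <= math.ulp(abs(lit)) for r, lit in pairs)
print(ok)
```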