Data.Random.Distribution.Normal:normalF from random-fu-0.2.6.2

Percentage Accurate: 100.0% → 100.0%
Time: 8.4s
Alternatives: 8
Speedup: 1.0×

Specification

\[\begin{array}{l} \\ e^{\left(x \cdot y\right) \cdot y} \end{array} \]
(FPCore (x y) :precision binary64 (exp (* (* x y) y)))
double code(double x, double y) {
	return exp(((x * y) * y));
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = exp(((x * y) * y))
end function
public static double code(double x, double y) {
	return Math.exp(((x * y) * y));
}
def code(x, y):
	return math.exp(((x * y) * y))
function code(x, y)
	return exp(Float64(Float64(x * y) * y))
end
function tmp = code(x, y)
	tmp = exp(((x * y) * y));
end
code[x_, y_] := N[Exp[N[(N[(x * y), $MachinePrecision] * y), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
e^{\left(x \cdot y\right) \cdot y}
\end{array}
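The specification computes exp((x·y)·y) = exp(x·y²) in binary64. A quick sketch (not part of the report; the helper name `spec` is illustrative) shows the behavior, including overflow once the exponent exceeds log(DBL_MAX) ≈ 709.78:

```python
import math

# Sketch of the specification: exp((x*y)*y) in binary64.
def spec(x, y):
    return math.exp((x * y) * y)

# Exponent (2*3)*3 = 18, well inside the representable range.
assert math.isclose(spec(2.0, 3.0), math.exp(18.0), rel_tol=1e-15)
assert spec(700.0, 1.0) < math.inf   # still representable
try:
    spec(710.0, 1.0)                 # exp(710) > DBL_MAX
except OverflowError:
    pass                             # CPython raises rather than returning inf
```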

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (named in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; each can be toggled with the buttons below the plot. The line shows the average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 8 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{\left(x \cdot y\right) \cdot y} \end{array} \]
(FPCore (x y) :precision binary64 (exp (* (* x y) y)))
double code(double x, double y) {
	return exp(((x * y) * y));
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = exp(((x * y) * y))
end function
public static double code(double x, double y) {
	return Math.exp(((x * y) * y));
}
def code(x, y):
	return math.exp(((x * y) * y))
function code(x, y)
	return exp(Float64(Float64(x * y) * y))
end
function tmp = code(x, y)
	tmp = exp(((x * y) * y));
end
code[x_, y_] := N[Exp[N[(N[(x * y), $MachinePrecision] * y), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
e^{\left(x \cdot y\right) \cdot y}
\end{array}

Alternative 1: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ {\mathsf{E}\left(\right)}^{\left(\left(y \cdot x\right) \cdot y\right)} \end{array} \]
(FPCore (x y) :precision binary64 (pow (E) (* (* y x) y)))
\begin{array}{l}

\\
{\mathsf{E}\left(\right)}^{\left(\left(y \cdot x\right) \cdot y\right)}
\end{array}
Derivation
  1. Initial program 100.0%

    \[e^{\left(x \cdot y\right) \cdot y} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-exp.f64 (N/A)

      \[\leadsto \color{blue}{e^{\left(x \cdot y\right) \cdot y}} \]
    2. remove-double-neg (N/A)

      \[\leadsto e^{\color{blue}{\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)\right)}} \]
    3. sinh---cosh-rev (N/A)

      \[\leadsto \color{blue}{\cosh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
    4. cosh-neg-rev (N/A)

      \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) \]
    5. sinh-neg-rev (N/A)

      \[\leadsto \cosh \left(\left(x \cdot y\right) \cdot y\right) - \color{blue}{\left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
    6. lower--.f64 (N/A)

      \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
    7. lower-cosh.f64 (N/A)

      \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
    8. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\color{blue}{\left(x \cdot y\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
    9. *-commutative (N/A)

      \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
    10. lower-*.f64 (N/A)

      \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
    11. sinh-neg-rev (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
    12. lower-sinh.f64 (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
    13. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right) \cdot y}\right)\right) \]
    14. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right)} \cdot y\right)\right) \]
    15. associate-*l* (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{x \cdot \left(y \cdot y\right)}\right)\right) \]
    16. distribute-rgt-neg-in (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(x \cdot \left(\mathsf{neg}\left(y \cdot y\right)\right)\right)} \]
    17. *-commutative (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
    18. lower-*.f64 (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
    19. distribute-lft-neg-in (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
    20. lower-*.f64 (N/A)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
    21. lower-neg.f64 (77.2)

      \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\color{blue}{\left(-y\right)} \cdot y\right) \cdot x\right) \]
  4. Applied rewrites (77.2%)

    \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
  5. Step-by-step derivation
    1. lift--.f64 (N/A)

      \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
    2. lift-cosh.f64 (N/A)

      \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right)} - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    3. cosh-neg-rev (N/A)

      \[\leadsto \color{blue}{\cosh \left(\mathsf{neg}\left(\left(y \cdot x\right) \cdot y\right)\right)} - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    4. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\mathsf{neg}\left(\color{blue}{\left(y \cdot x\right) \cdot y}\right)\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    5. *-commutative (N/A)

      \[\leadsto \cosh \left(\mathsf{neg}\left(\color{blue}{y \cdot \left(y \cdot x\right)}\right)\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    6. distribute-lft-neg-in (N/A)

      \[\leadsto \cosh \color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot \left(y \cdot x\right)\right)} - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    7. lift-neg.f64 (N/A)

      \[\leadsto \cosh \left(\color{blue}{\left(-y\right)} \cdot \left(y \cdot x\right)\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    8. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\left(-y\right) \cdot \color{blue}{\left(y \cdot x\right)}\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    9. associate-*l* (N/A)

      \[\leadsto \cosh \color{blue}{\left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    10. lift-*.f64 (N/A)

      \[\leadsto \cosh \left(\color{blue}{\left(\left(-y\right) \cdot y\right)} \cdot x\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    11. lift-*.f64 (N/A)

      \[\leadsto \cosh \color{blue}{\left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) \]
    12. lift-sinh.f64 (N/A)

      \[\leadsto \cosh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right) - \color{blue}{\sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
    13. sinh---cosh-rev (N/A)

      \[\leadsto \color{blue}{e^{\mathsf{neg}\left(\left(\left(-y\right) \cdot y\right) \cdot x\right)}} \]
    14. lift-*.f64 (N/A)

      \[\leadsto e^{\mathsf{neg}\left(\color{blue}{\left(\left(-y\right) \cdot y\right) \cdot x}\right)} \]
    15. *-commutative (N/A)

      \[\leadsto e^{\mathsf{neg}\left(\color{blue}{x \cdot \left(\left(-y\right) \cdot y\right)}\right)} \]
    16. distribute-rgt-neg-in (N/A)

      \[\leadsto e^{\color{blue}{x \cdot \left(\mathsf{neg}\left(\left(-y\right) \cdot y\right)\right)}} \]
    17. lift-*.f64 (N/A)

      \[\leadsto e^{x \cdot \left(\mathsf{neg}\left(\color{blue}{\left(-y\right) \cdot y}\right)\right)} \]
    18. distribute-lft-neg-in (N/A)

      \[\leadsto e^{x \cdot \color{blue}{\left(\left(\mathsf{neg}\left(\left(-y\right)\right)\right) \cdot y\right)}} \]
    19. lift-neg.f64 (N/A)

      \[\leadsto e^{x \cdot \left(\left(\mathsf{neg}\left(\color{blue}{\left(\mathsf{neg}\left(y\right)\right)}\right)\right) \cdot y\right)} \]
    20. remove-double-neg (N/A)

      \[\leadsto e^{x \cdot \left(\color{blue}{y} \cdot y\right)} \]
    21. lift-*.f64 (N/A)

      \[\leadsto e^{x \cdot \color{blue}{\left(y \cdot y\right)}} \]
  6. Applied rewrites (100.0%)

    \[\leadsto \color{blue}{{\left(e^{1}\right)}^{\left(\left(y \cdot x\right) \cdot y\right)}} \]
  7. Step-by-step derivation
    1. lift-exp.f64 (N/A)

      \[\leadsto {\color{blue}{\left(e^{1}\right)}}^{\left(\left(y \cdot x\right) \cdot y\right)} \]
    2. exp-1-e (N/A)

      \[\leadsto {\color{blue}{\mathsf{E}\left(\right)}}^{\left(\left(y \cdot x\right) \cdot y\right)} \]
    3. lower-E.f64 (100.0)

      \[\leadsto {\color{blue}{\mathsf{E}\left(\right)}}^{\left(\left(y \cdot x\right) \cdot y\right)} \]
  8. Applied rewrites (100.0%)

    \[\leadsto {\color{blue}{\mathsf{E}\left(\right)}}^{\left(\left(y \cdot x\right) \cdot y\right)} \]
  9. Add Preprocessing
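Alternative 1 only commutes the exponent and spells e as a constant base, so it agrees with the original up to rounding; the intermediate cosh/sinh form from step 4 is also an exact identity, since cosh t − sinh(−t) = cosh t + sinh t = e^t. A numerical sketch (not part of Herbie's output; the helper names are illustrative):

```python
import math

def original(x, y):
    return math.exp((x * y) * y)

def step4_form(x, y):
    # cosh((y*x)*y) - sinh(((-y)*y)*x): mathematically exact, but the
    # subtraction cancels badly when the exponent is large and negative.
    return math.cosh((y * x) * y) - math.sinh(((-y) * y) * x)

def alternative1(x, y):
    # pow(E, (y*x)*y), as in Alternative 1's FPCore.
    return math.e ** ((y * x) * y)

for x, y in [(0.5, 1.2), (-0.3, 2.0), (1e-3, 10.0)]:
    assert math.isclose(original(x, y), step4_form(x, y), rel_tol=1e-9)
    assert math.isclose(original(x, y), alternative1(x, y), rel_tol=1e-12)
```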

Alternative 2: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{\left(x \cdot y\right) \cdot y} \end{array} \]
(FPCore (x y) :precision binary64 (exp (* (* x y) y)))
double code(double x, double y) {
	return exp(((x * y) * y));
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = exp(((x * y) * y))
end function
public static double code(double x, double y) {
	return Math.exp(((x * y) * y));
}
def code(x, y):
	return math.exp(((x * y) * y))
function code(x, y)
	return exp(Float64(Float64(x * y) * y))
end
function tmp = code(x, y)
	tmp = exp(((x * y) * y));
end
code[x_, y_] := N[Exp[N[(N[(x * y), $MachinePrecision] * y), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
e^{\left(x \cdot y\right) \cdot y}
\end{array}
Derivation
  1. Initial program 100.0%

    \[e^{\left(x \cdot y\right) \cdot y} \]
  2. Add Preprocessing
  3. Add Preprocessing

Alternative 3: 61.1% accurate, 2.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;y \leq 5 \cdot 10^{-111}:\\ \;\;\;\;1\\ \mathbf{elif}\;y \leq 1.85 \cdot 10^{+153}:\\ \;\;\;\;\mathsf{fma}\left(\left(\left(y \cdot y\right) \cdot \left(x \cdot x\right)\right) \cdot 0.5, y \cdot y, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(y \cdot y, x, 1\right)\\ \end{array} \end{array} \]
(FPCore (x y)
 :precision binary64
 (if (<= y 5e-111)
   1.0
   (if (<= y 1.85e+153)
     (fma (* (* (* y y) (* x x)) 0.5) (* y y) 1.0)
     (fma (* y y) x 1.0))))
double code(double x, double y) {
	double tmp;
	if (y <= 5e-111) {
		tmp = 1.0;
	} else if (y <= 1.85e+153) {
		tmp = fma((((y * y) * (x * x)) * 0.5), (y * y), 1.0);
	} else {
		tmp = fma((y * y), x, 1.0);
	}
	return tmp;
}
function code(x, y)
	tmp = 0.0
	if (y <= 5e-111)
		tmp = 1.0;
	elseif (y <= 1.85e+153)
		tmp = fma(Float64(Float64(Float64(y * y) * Float64(x * x)) * 0.5), Float64(y * y), 1.0);
	else
		tmp = fma(Float64(y * y), x, 1.0);
	end
	return tmp
end
code[x_, y_] := If[LessEqual[y, 5e-111], 1.0, If[LessEqual[y, 1.85e+153], N[(N[(N[(N[(y * y), $MachinePrecision] * N[(x * x), $MachinePrecision]), $MachinePrecision] * 0.5), $MachinePrecision] * N[(y * y), $MachinePrecision] + 1.0), $MachinePrecision], N[(N[(y * y), $MachinePrecision] * x + 1.0), $MachinePrecision]]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;y \leq 5 \cdot 10^{-111}:\\
\;\;\;\;1\\

\mathbf{elif}\;y \leq 1.85 \cdot 10^{+153}:\\
\;\;\;\;\mathsf{fma}\left(\left(\left(y \cdot y\right) \cdot \left(x \cdot x\right)\right) \cdot 0.5, y \cdot y, 1\right)\\

\mathbf{else}:\\
\;\;\;\;\mathsf{fma}\left(y \cdot y, x, 1\right)\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if y < 5.0000000000000003e-111

    1. Initial program 100.0%

      \[e^{\left(x \cdot y\right) \cdot y} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{1} \]
    4. Step-by-step derivation
      1. Applied rewrites (63.1%)

        \[\leadsto \color{blue}{1} \]

  3. if 5.0000000000000003e-111 < y < 1.8500000000000001e153

      1. Initial program 100.0%

        \[e^{\left(x \cdot y\right) \cdot y} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. lift-exp.f64 (N/A)

          \[\leadsto \color{blue}{e^{\left(x \cdot y\right) \cdot y}} \]
        2. remove-double-neg (N/A)

          \[\leadsto e^{\color{blue}{\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)\right)}} \]
        3. sinh---cosh-rev (N/A)

          \[\leadsto \color{blue}{\cosh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
        4. cosh-neg-rev (N/A)

          \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) \]
        5. sinh-neg-rev (N/A)

          \[\leadsto \cosh \left(\left(x \cdot y\right) \cdot y\right) - \color{blue}{\left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
        6. lower--.f64 (N/A)

          \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
        7. lower-cosh.f64 (N/A)

          \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
        8. lift-*.f64 (N/A)

          \[\leadsto \cosh \left(\color{blue}{\left(x \cdot y\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
        9. *-commutative (N/A)

          \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
        10. lower-*.f64 (N/A)

          \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
        11. sinh-neg-rev (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
        12. lower-sinh.f64 (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
        13. lift-*.f64 (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right) \cdot y}\right)\right) \]
        14. lift-*.f64 (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right)} \cdot y\right)\right) \]
        15. associate-*l* (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{x \cdot \left(y \cdot y\right)}\right)\right) \]
        16. distribute-rgt-neg-in (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(x \cdot \left(\mathsf{neg}\left(y \cdot y\right)\right)\right)} \]
        17. *-commutative (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
        18. lower-*.f64 (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
        19. distribute-lft-neg-in (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
        20. lower-*.f64 (N/A)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
        21. lower-neg.f64 (74.4)

          \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\color{blue}{\left(-y\right)} \cdot y\right) \cdot x\right) \]
      4. Applied rewrites (74.4%)

        \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
      5. Taylor expanded in y around 0

        \[\leadsto \color{blue}{1 + {y}^{2} \cdot \left({y}^{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \frac{1}{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \left(\frac{-1}{2} \cdot {x}^{2} + {x}^{2}\right)\right)\right) - \frac{1}{2} \cdot \left(-1 \cdot x - x\right)\right)} \]
      6. Applied rewrites (62.5%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(0.5 \cdot \left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x - \left(\left(-x\right) - x\right)\right), y \cdot y, 1\right)} \]
      7. Taylor expanded in x around inf

        \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot \left({x}^{2} \cdot {y}^{2}\right), \color{blue}{y} \cdot y, 1\right) \]
      8. Step-by-step derivation
        1. Applied rewrites (61.2%)

          \[\leadsto \mathsf{fma}\left(\left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x\right) \cdot 0.5, \color{blue}{y} \cdot y, 1\right) \]
        2. Step-by-step derivation
          1. Applied rewrites (60.9%)

            \[\leadsto \mathsf{fma}\left(\left(\left(y \cdot y\right) \cdot \left(x \cdot x\right)\right) \cdot 0.5, y \cdot y, 1\right) \]

  4. if 1.8500000000000001e153 < y

    1. Initial program 100.0%

      \[e^{\left(x \cdot y\right) \cdot y} \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{1 + x \cdot {y}^{2}} \]
    4. Step-by-step derivation
      1. +-commutative (N/A)

        \[\leadsto \color{blue}{x \cdot {y}^{2} + 1} \]
      2. *-commutative (N/A)

        \[\leadsto \color{blue}{{y}^{2} \cdot x} + 1 \]
      3. lower-fma.f64 (N/A)

        \[\leadsto \color{blue}{\mathsf{fma}\left({y}^{2}, x, 1\right)} \]
      4. unpow2 (N/A)

        \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
      5. lower-*.f64 (65.4)

        \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
    5. Applied rewrites (65.4%)

      \[\leadsto \color{blue}{\mathsf{fma}\left(y \cdot y, x, 1\right)} \]
  5. Recombined 3 regimes into one program.
  6. Final simplification (63.0%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;y \leq 5 \cdot 10^{-111}:\\ \;\;\;\;1\\ \mathbf{elif}\;y \leq 1.85 \cdot 10^{+153}:\\ \;\;\;\;\mathsf{fma}\left(\left(\left(y \cdot y\right) \cdot \left(x \cdot x\right)\right) \cdot 0.5, y \cdot y, 1\right)\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(y \cdot y, x, 1\right)\\ \end{array} \]
  7. Add Preprocessing
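For reference, here is a direct Python transcription of Alternative 3's three regimes (a sketch, not from the report; Python floats are binary64, and the fused multiply-add is emulated here with a plain multiply-add, which rounds twice where a real fma rounds once):

```python
def fma(a, b, c):
    # Emulation only: a hardware fma would round just once.
    return a * b + c

def alternative3(x, y):
    if y <= 5e-111:
        # Tiny y: exp(x*y^2) rounds to 1.
        return 1.0
    elif y <= 1.85e153:
        # Middle regime: 1 + 0.5 * x^2 * y^4 (the x*y^2 term was
        # dropped by the Taylor step, hence the reduced accuracy).
        return fma(((y * y) * (x * x)) * 0.5, y * y, 1.0)
    else:
        # Large-y regime: 1 + x * y^2.
        return fma(y * y, x, 1.0)

assert alternative3(0.5, 1e-300) == 1.0   # constant regime
assert alternative3(2.0, 1.0) == 3.0      # middle regime: 1 + 0.5*4*1
```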

Alternative 4: 71.6% accurate, 3.4× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(0.5 \cdot \left(\mathsf{fma}\left(y \cdot y, x, 2\right) \cdot x\right), y \cdot y, 1\right) \end{array} \]
(FPCore (x y)
 :precision binary64
 (fma (* 0.5 (* (fma (* y y) x 2.0) x)) (* y y) 1.0))
double code(double x, double y) {
	return fma((0.5 * (fma((y * y), x, 2.0) * x)), (y * y), 1.0);
}
function code(x, y)
	return fma(Float64(0.5 * Float64(fma(Float64(y * y), x, 2.0) * x)), Float64(y * y), 1.0)
end
code[x_, y_] := N[(N[(0.5 * N[(N[(N[(y * y), $MachinePrecision] * x + 2.0), $MachinePrecision] * x), $MachinePrecision]), $MachinePrecision] * N[(y * y), $MachinePrecision] + 1.0), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(0.5 \cdot \left(\mathsf{fma}\left(y \cdot y, x, 2\right) \cdot x\right), y \cdot y, 1\right)
\end{array}
        
Derivation
        1. Initial program 100.0%

          \[e^{\left(x \cdot y\right) \cdot y} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-exp.f64 (N/A)

            \[\leadsto \color{blue}{e^{\left(x \cdot y\right) \cdot y}} \]
          2. remove-double-neg (N/A)

            \[\leadsto e^{\color{blue}{\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)\right)}} \]
          3. sinh---cosh-rev (N/A)

            \[\leadsto \color{blue}{\cosh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
          4. cosh-neg-rev (N/A)

            \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) \]
          5. sinh-neg-rev (N/A)

            \[\leadsto \cosh \left(\left(x \cdot y\right) \cdot y\right) - \color{blue}{\left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
          6. lower--.f64 (N/A)

            \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
          7. lower-cosh.f64 (N/A)

            \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
          8. lift-*.f64 (N/A)

            \[\leadsto \cosh \left(\color{blue}{\left(x \cdot y\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
          9. *-commutative (N/A)

            \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
          10. lower-*.f64 (N/A)

            \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
          11. sinh-neg-rev (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
          12. lower-sinh.f64 (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
          13. lift-*.f64 (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right) \cdot y}\right)\right) \]
          14. lift-*.f64 (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right)} \cdot y\right)\right) \]
          15. associate-*l* (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{x \cdot \left(y \cdot y\right)}\right)\right) \]
          16. distribute-rgt-neg-in (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(x \cdot \left(\mathsf{neg}\left(y \cdot y\right)\right)\right)} \]
          17. *-commutative (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
          18. lower-*.f64 (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
          19. distribute-lft-neg-in (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
          20. lower-*.f64 (N/A)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
          21. lower-neg.f64 (77.2)

            \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\color{blue}{\left(-y\right)} \cdot y\right) \cdot x\right) \]
        4. Applied rewrites (77.2%)

          \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
        5. Taylor expanded in y around 0

          \[\leadsto \color{blue}{1 + {y}^{2} \cdot \left({y}^{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \frac{1}{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \left(\frac{-1}{2} \cdot {x}^{2} + {x}^{2}\right)\right)\right) - \frac{1}{2} \cdot \left(-1 \cdot x - x\right)\right)} \]
        6. Applied rewrites (73.1%)

          \[\leadsto \color{blue}{\mathsf{fma}\left(0.5 \cdot \left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x - \left(\left(-x\right) - x\right)\right), y \cdot y, 1\right)} \]
        7. Taylor expanded in x around 0

          \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot \left(x \cdot \left(2 + x \cdot {y}^{2}\right)\right), y \cdot y, 1\right) \]
        8. Step-by-step derivation
          1. Applied rewrites (73.1%)

            \[\leadsto \mathsf{fma}\left(0.5 \cdot \left(\mathsf{fma}\left(y \cdot y, x, 2\right) \cdot x\right), y \cdot y, 1\right) \]
          2. Final simplification (73.1%)

            \[\leadsto \mathsf{fma}\left(0.5 \cdot \left(\mathsf{fma}\left(y \cdot y, x, 2\right) \cdot x\right), y \cdot y, 1\right) \]
          3. Add Preprocessing
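As a check on the derivation above (not part of the report), expanding Alternative 4's nested fma recovers the degree-2 Taylor polynomial of the original in the exponent t = x·y²:

\[\mathsf{fma}\left(0.5 \cdot \left(\mathsf{fma}\left(y \cdot y, x, 2\right) \cdot x\right), y \cdot y, 1\right) = \frac{1}{2}\,x \left(x y^{2} + 2\right) y^{2} + 1 = 1 + x y^{2} + \frac{1}{2} \left(x y^{2}\right)^{2} \approx e^{x y^{2}}. \]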

Alternative 5: 71.1% accurate, 3.5× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(\left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x\right) \cdot 0.5, y \cdot y, 1\right) \end{array} \]
(FPCore (x y)
 :precision binary64
 (fma (* (* (* (* y y) x) x) 0.5) (* y y) 1.0))
double code(double x, double y) {
	return fma(((((y * y) * x) * x) * 0.5), (y * y), 1.0);
}
function code(x, y)
	return fma(Float64(Float64(Float64(Float64(y * y) * x) * x) * 0.5), Float64(y * y), 1.0)
end
code[x_, y_] := N[(N[(N[(N[(N[(y * y), $MachinePrecision] * x), $MachinePrecision] * x), $MachinePrecision] * 0.5), $MachinePrecision] * N[(y * y), $MachinePrecision] + 1.0), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(\left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x\right) \cdot 0.5, y \cdot y, 1\right)
\end{array}
          
Derivation
          1. Initial program 100.0%

            \[e^{\left(x \cdot y\right) \cdot y} \]
          2. Add Preprocessing
          3. Step-by-step derivation
            1. lift-exp.f64 (N/A)

              \[\leadsto \color{blue}{e^{\left(x \cdot y\right) \cdot y}} \]
            2. remove-double-neg (N/A)

              \[\leadsto e^{\color{blue}{\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)\right)}} \]
            3. sinh---cosh-rev (N/A)

              \[\leadsto \color{blue}{\cosh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
            4. cosh-neg-rev (N/A)

              \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right) \]
            5. sinh-neg-rev (N/A)

              \[\leadsto \cosh \left(\left(x \cdot y\right) \cdot y\right) - \color{blue}{\left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
            6. lower--.f64 (N/A)

              \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right)} \]
            7. lower-cosh.f64 (N/A)

              \[\leadsto \color{blue}{\cosh \left(\left(x \cdot y\right) \cdot y\right)} - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
            8. lift-*.f64 (N/A)

              \[\leadsto \cosh \left(\color{blue}{\left(x \cdot y\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
            9. *-commutative (N/A)

              \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
            10. lower-*.f64 (N/A)

              \[\leadsto \cosh \left(\color{blue}{\left(y \cdot x\right)} \cdot y\right) - \left(\mathsf{neg}\left(\sinh \left(\left(x \cdot y\right) \cdot y\right)\right)\right) \]
            11. sinh-neg-rev (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
            12. lower-sinh.f64 (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \color{blue}{\sinh \left(\mathsf{neg}\left(\left(x \cdot y\right) \cdot y\right)\right)} \]
            13. lift-*.f64 (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right) \cdot y}\right)\right) \]
            14. lift-*.f64 (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot y\right)} \cdot y\right)\right) \]
            15. associate-*l* (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\mathsf{neg}\left(\color{blue}{x \cdot \left(y \cdot y\right)}\right)\right) \]
            16. distribute-rgt-neg-in (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(x \cdot \left(\mathsf{neg}\left(y \cdot y\right)\right)\right)} \]
            17. *-commutative (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
            18. lower-*.f64 (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \color{blue}{\left(\left(\mathsf{neg}\left(y \cdot y\right)\right) \cdot x\right)} \]
            19. distribute-lft-neg-in (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
            20. lower-*.f64 (N/A)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\color{blue}{\left(\left(\mathsf{neg}\left(y\right)\right) \cdot y\right)} \cdot x\right) \]
            21. lower-neg.f64 (77.2%)

              \[\leadsto \cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\color{blue}{\left(-y\right)} \cdot y\right) \cdot x\right) \]
          4. Applied rewrites (77.2%)

            \[\leadsto \color{blue}{\cosh \left(\left(y \cdot x\right) \cdot y\right) - \sinh \left(\left(\left(-y\right) \cdot y\right) \cdot x\right)} \]
          5. Taylor expanded in y around 0

            \[\leadsto \color{blue}{1 + {y}^{2} \cdot \left({y}^{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \frac{1}{2} \cdot \left(\frac{1}{2} \cdot {x}^{2} - \left(\frac{-1}{2} \cdot {x}^{2} + {x}^{2}\right)\right)\right) - \frac{1}{2} \cdot \left(-1 \cdot x - x\right)\right)} \]
          6. Applied rewrites (73.1%)

            \[\leadsto \color{blue}{\mathsf{fma}\left(0.5 \cdot \left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x - \left(\left(-x\right) - x\right)\right), y \cdot y, 1\right)} \]
          7. Taylor expanded in x around inf

            \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot \left({x}^{2} \cdot {y}^{2}\right), \color{blue}{y} \cdot y, 1\right) \]
          8. Step-by-step derivation
            1. Applied rewrites (72.7%)

              \[\leadsto \mathsf{fma}\left(\left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x\right) \cdot 0.5, \color{blue}{y} \cdot y, 1\right) \]
            2. Final simplification (72.7%)

              \[\leadsto \mathsf{fma}\left(\left(\left(\left(y \cdot y\right) \cdot x\right) \cdot x\right) \cdot 0.5, y \cdot y, 1\right) \]
            3. Add Preprocessing
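The hyperbolic rewrites above rest on the identity e^t = cosh t + sinh t, which the negation steps rearrange into cosh t − (−sinh t) before pushing the negation inward. A quick numerical check of that identity (a sketch, not part of the report):

```python
import math

t = 0.37  # arbitrary test point
# exp(t) = cosh(t) + sinh(t); the derivation writes the sum as a
# subtraction of a negated sinh before distributing the negation.
lhs = math.cosh(t) - (-math.sinh(t))
rhs = math.exp(t)
print(abs(lhs - rhs))  # zero up to a rounding error of about an ulp
```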

            Alternative 6: 66.6% accurate, 9.3× speedup?

            \[\begin{array}{l} \\ \mathsf{fma}\left(y \cdot y, x, 1\right) \end{array} \]
            (FPCore (x y) :precision binary64 (fma (* y y) x 1.0))
            double code(double x, double y) {
            	return fma((y * y), x, 1.0);
            }
            
            function code(x, y)
            	return fma(Float64(y * y), x, 1.0)
            end
            
            code[x_, y_] := N[(N[(y * y), $MachinePrecision] * x + 1.0), $MachinePrecision]
            
            \begin{array}{l}
            
            \\
            \mathsf{fma}\left(y \cdot y, x, 1\right)
            \end{array}
            
            Derivation
            1. Initial program 100.0%

              \[e^{\left(x \cdot y\right) \cdot y} \]
            2. Add Preprocessing
            3. Taylor expanded in x around 0

              \[\leadsto \color{blue}{1 + x \cdot {y}^{2}} \]
            4. Step-by-step derivation
              1. +-commutative (N/A)

                \[\leadsto \color{blue}{x \cdot {y}^{2} + 1} \]
              2. *-commutative (N/A)

                \[\leadsto \color{blue}{{y}^{2} \cdot x} + 1 \]
              3. lower-fma.f64 (N/A)

                \[\leadsto \color{blue}{\mathsf{fma}\left({y}^{2}, x, 1\right)} \]
              4. unpow2 (N/A)

                \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
              5. lower-*.f64 (67.5%)

                \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
            5. Applied rewrites (67.5%)

              \[\leadsto \color{blue}{\mathsf{fma}\left(y \cdot y, x, 1\right)} \]
            6. Add Preprocessing
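Alternative 6 keeps only the first-order Taylor term: for t = x·y², exp(t) ≈ 1 + t, with error on the order of t²/2. A minimal Python sketch (not part of the report; a plain multiply-add stands in for fma, which would perform the multiply and add with a single rounding):

```python
import math

def original(x, y):
    # exp((x * y) * y): the initial program
    return math.exp((x * y) * y)

def alternative6(x, y):
    # fma(y*y, x, 1): first-order Taylor approximation of exp(x*y^2),
    # written here as a separately rounded multiply-add
    return (y * y) * x + 1.0

# For small x*y^2 the truncated series is very close to exp:
x, y = 1e-4, 1e-2          # x*y^2 = 1e-8
err = abs(original(x, y) - alternative6(x, y))
print(err)                 # on the order of (x*y^2)**2 / 2, i.e. ~5e-17
```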

            Alternative 7: 63.4% accurate, 9.3× speedup?

            \[\begin{array}{l} \\ \mathsf{fma}\left(y \cdot x, y, 1\right) \end{array} \]
            (FPCore (x y) :precision binary64 (fma (* y x) y 1.0))
            double code(double x, double y) {
            	return fma((y * x), y, 1.0);
            }
            
            function code(x, y)
            	return fma(Float64(y * x), y, 1.0)
            end
            
            code[x_, y_] := N[(N[(y * x), $MachinePrecision] * y + 1.0), $MachinePrecision]
            
            \begin{array}{l}
            
            \\
            \mathsf{fma}\left(y \cdot x, y, 1\right)
            \end{array}
            
            Derivation
            1. Initial program 100.0%

              \[e^{\left(x \cdot y\right) \cdot y} \]
            2. Add Preprocessing
            3. Taylor expanded in x around 0

              \[\leadsto \color{blue}{1 + x \cdot {y}^{2}} \]
            4. Step-by-step derivation
              1. +-commutative (N/A)

                \[\leadsto \color{blue}{x \cdot {y}^{2} + 1} \]
              2. *-commutative (N/A)

                \[\leadsto \color{blue}{{y}^{2} \cdot x} + 1 \]
              3. lower-fma.f64 (N/A)

                \[\leadsto \color{blue}{\mathsf{fma}\left({y}^{2}, x, 1\right)} \]
              4. unpow2 (N/A)

                \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
              5. lower-*.f64 (67.5%)

                \[\leadsto \mathsf{fma}\left(\color{blue}{y \cdot y}, x, 1\right) \]
            5. Applied rewrites (67.5%)

              \[\leadsto \color{blue}{\mathsf{fma}\left(y \cdot y, x, 1\right)} \]
            6. Step-by-step derivation
              1. Applied rewrites (65.3%)

                \[\leadsto \mathsf{fma}\left(y \cdot x, \color{blue}{y}, 1\right) \]
              2. Add Preprocessing
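Alternative 7 only reassociates the product inside the fma: (y·y)·x becomes (y·x)·y. The two are equal in exact arithmetic, but the multiplications round in a different order, which is why the reported accuracy shifts slightly. A sketch (again with a plain multiply-add standing in for fma):

```python
import math

def alt6(x, y):
    # fma(y*y, x, 1), approximated with a separately rounded multiply-add
    return (y * y) * x + 1.0

def alt7(x, y):
    # fma(y*x, y, 1): algebraically identical, different rounding order
    return (y * x) * y + 1.0

x, y = 0.3, 0.7
a, b = alt6(x, y), alt7(x, y)
print(a, b)  # agree to within an ulp or so, but not always bit-identical
```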

              Alternative 8: 50.8% accurate, 111.0× speedup?

              \[\begin{array}{l} \\ 1 \end{array} \]
              (FPCore (x y) :precision binary64 1.0)
              double code(double x, double y) {
              	return 1.0;
              }
              
              real(8) function code(x, y)
                  real(8), intent (in) :: x
                  real(8), intent (in) :: y
                  code = 1.0d0
              end function
              
              public static double code(double x, double y) {
              	return 1.0;
              }
              
              def code(x, y):
              	return 1.0
              
              function code(x, y)
              	return 1.0
              end
              
              function tmp = code(x, y)
              	tmp = 1.0;
              end
              
              code[x_, y_] := 1.0
              
              \begin{array}{l}
              
              \\
              1
              \end{array}
              
              Derivation
              1. Initial program 100.0%

                \[e^{\left(x \cdot y\right) \cdot y} \]
              2. Add Preprocessing
              3. Taylor expanded in x around 0

                \[\leadsto \color{blue}{1} \]
              4. Step-by-step derivation
                1. Applied rewrites (50.3%)

                  \[\leadsto \color{blue}{1} \]
                2. Add Preprocessing
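Alternative 8 collapses the whole expression to the constant 1. This is exact whenever |x·y²| is small enough: below roughly 2⁻⁵³ (about 1.1×10⁻¹⁶), the true value exp(x·y²) = 1 + x·y² + … lies closer to 1.0 than to any other binary64 number. A sketch illustrating the regime (not part of the report):

```python
import math

# Below roughly 2**-53, the true value exp(t) = 1 + t + ... is closer
# to 1.0 than to any other binary64 value, so returning the constant 1
# is the correctly rounded answer in this regime.
x, y = 1e-9, 1e-5
t = (x * y) * y        # t = 1e-19
print(math.exp(t))     # 1.0 on typical libms
```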

                Reproduce

                herbie shell --seed 2024337 
                (FPCore (x y)
                  :name "Data.Random.Distribution.Normal:normalF from random-fu-0.2.6.2"
                  :precision binary64
                  (exp (* (* x y) y)))