exp-w (used to crash)

Percentage Accurate: 99.5% → 99.4%
Time: 17.7s
Alternatives: 12
Speedup: 1.0×

Specification

\[ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
(FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
double code(double w, double l) {
	return exp(-w) * pow(l, exp(w));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = exp(-w) * (l ** exp(w))
end function
public static double code(double w, double l) {
	return Math.exp(-w) * Math.pow(l, Math.exp(w));
}
def code(w, l):
	return math.exp(-w) * math.pow(l, math.exp(w))
function code(w, l)
	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
end
function tmp = code(w, l)
	tmp = exp(-w) * (l ^ exp(w));
end
code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs Input Value

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 12 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.5% accurate, 1.0× speedup

\[ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
(FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
double code(double w, double l) {
	return exp(-w) * pow(l, exp(w));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = exp(-w) * (l ** exp(w))
end function
public static double code(double w, double l) {
	return Math.exp(-w) * Math.pow(l, Math.exp(w));
}
def code(w, l):
	return math.exp(-w) * math.pow(l, math.exp(w))
function code(w, l)
	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
end
function tmp = code(w, l)
	tmp = exp(-w) * (l ^ exp(w));
end
code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]

Alternative 1: 99.4% accurate, 0.4× speedup

\[ \frac{{\left({\ell}^{\left(\sqrt[3]{e^{w + w}}\right)}\right)}^{\left(\sqrt[3]{e^{w}}\right)}}{e^{w}} \]
(FPCore (w l)
 :precision binary64
 (/ (pow (pow l (cbrt (exp (+ w w)))) (cbrt (exp w))) (exp w)))
double code(double w, double l) {
	return pow(pow(l, cbrt(exp((w + w)))), cbrt(exp(w))) / exp(w);
}
public static double code(double w, double l) {
	return Math.pow(Math.pow(l, Math.cbrt(Math.exp((w + w)))), Math.cbrt(Math.exp(w))) / Math.exp(w);
}
function code(w, l)
	return Float64(((l ^ cbrt(exp(Float64(w + w)))) ^ cbrt(exp(w))) / exp(w))
end
code[w_, l_] := N[(N[Power[N[Power[l, N[Power[N[Exp[N[(w + w), $MachinePrecision]], $MachinePrecision], 1/3], $MachinePrecision]], $MachinePrecision], N[Power[N[Exp[w], $MachinePrecision], 1/3], $MachinePrecision]], $MachinePrecision] / N[Exp[w], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in l around inf 93.9%

    \[\leadsto \frac{\color{blue}{e^{-1 \cdot \left(e^{w} \cdot \log \left(\frac{1}{\ell}\right)\right)}}}{e^{w}} \]
  6. Step-by-step derivation
    1. add-cube-cbrt 93.9%

      \[\leadsto \frac{\color{blue}{\left(\sqrt[3]{e^{-1 \cdot \left(e^{w} \cdot \log \left(\frac{1}{\ell}\right)\right)}} \cdot \sqrt[3]{e^{-1 \cdot \left(e^{w} \cdot \log \left(\frac{1}{\ell}\right)\right)}}\right) \cdot \sqrt[3]{e^{-1 \cdot \left(e^{w} \cdot \log \left(\frac{1}{\ell}\right)\right)}}}}{e^{w}} \]
    2. pow3 93.9%

      \[\leadsto \frac{\color{blue}{{\left(\sqrt[3]{e^{-1 \cdot \left(e^{w} \cdot \log \left(\frac{1}{\ell}\right)\right)}}\right)}^{3}}}{e^{w}} \]
  7. Applied egg-rr 97.8%

    \[\leadsto \frac{\color{blue}{{\left(\sqrt[3]{{\ell}^{\left(e^{w}\right)}}\right)}^{3}}}{e^{w}} \]
  8. Step-by-step derivation
    1. rem-cube-cbrt 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{w}} \]
    2. add-cube-cbrt 98.9%

      \[\leadsto \frac{{\ell}^{\color{blue}{\left(\left(\sqrt[3]{e^{w}} \cdot \sqrt[3]{e^{w}}\right) \cdot \sqrt[3]{e^{w}}\right)}}}{e^{w}} \]
    3. pow-unpow 98.9%

      \[\leadsto \frac{\color{blue}{{\left({\ell}^{\left(\sqrt[3]{e^{w}} \cdot \sqrt[3]{e^{w}}\right)}\right)}^{\left(\sqrt[3]{e^{w}}\right)}}}{e^{w}} \]
    4. cbrt-unprod 99.0%

      \[\leadsto \frac{{\left({\ell}^{\color{blue}{\left(\sqrt[3]{e^{w} \cdot e^{w}}\right)}}\right)}^{\left(\sqrt[3]{e^{w}}\right)}}{e^{w}} \]
    5. prod-exp 99.0%

      \[\leadsto \frac{{\left({\ell}^{\left(\sqrt[3]{\color{blue}{e^{w + w}}}\right)}\right)}^{\left(\sqrt[3]{e^{w}}\right)}}{e^{w}} \]
  9. Applied egg-rr 99.0%

    \[\leadsto \frac{\color{blue}{{\left({\ell}^{\left(\sqrt[3]{e^{w + w}}\right)}\right)}^{\left(\sqrt[3]{e^{w}}\right)}}}{e^{w}} \]
  10. Add Preprocessing
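Because cbrt(e^(2w)) · cbrt(e^w) = e^(2w/3) · e^(w/3) = e^w, Alternative 1 is algebraically identical to the original program; the extra cube roots only change where rounding happens. A quick numerical sanity check (a sketch in Python; cbrt is spelled x ** (1/3), which is valid here because exp is always positive):

```python
import math

def cbrt(x):
    # fine for this sketch: exp() always returns a positive value
    return x ** (1.0 / 3.0)

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative1(w, l):
    return math.pow(math.pow(l, cbrt(math.exp(w + w))), cbrt(math.exp(w))) / math.exp(w)

# the two forms agree closely on ordinary inputs
for w, l in [(0.3, 2.0), (-1.2, 0.5), (1.0, 1.5)]:
    assert math.isclose(original(w, l), alternative1(w, l), rel_tol=1e-9)
```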

Alternative 2: 99.5% accurate, 1.0× speedup

\[ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
(FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
double code(double w, double l) {
	return exp(-w) * pow(l, exp(w));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = exp(-w) * (l ** exp(w))
end function
public static double code(double w, double l) {
	return Math.exp(-w) * Math.pow(l, Math.exp(w));
}
def code(w, l):
	return math.exp(-w) * math.pow(l, math.exp(w))
function code(w, l)
	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
end
function tmp = code(w, l)
	tmp = exp(-w) * (l ^ exp(w));
end
code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Add Preprocessing
  3. Add Preprocessing

Alternative 3: 99.5% accurate, 1.0× speedup

\[ \frac{{\ell}^{\left(e^{w}\right)}}{e^{w}} \]
(FPCore (w l) :precision binary64 (/ (pow l (exp w)) (exp w)))
double code(double w, double l) {
	return pow(l, exp(w)) / exp(w);
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = (l ** exp(w)) / exp(w)
end function
public static double code(double w, double l) {
	return Math.pow(l, Math.exp(w)) / Math.exp(w);
}
def code(w, l):
	return math.pow(l, math.exp(w)) / math.exp(w)
function code(w, l)
	return Float64((l ^ exp(w)) / exp(w))
end
function tmp = code(w, l)
	tmp = (l ^ exp(w)) / exp(w);
end
code[w_, l_] := N[(N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision] / N[Exp[w], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Add Preprocessing
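Alternative 3 applies only the exp-neg rewrite chain above: since e^(-w) = 1/e^w, the product e^(-w) · ℓ^(e^w) becomes the quotient ℓ^(e^w) / e^w. The two forms agree to within a few ulps (a sketch in Python):

```python
import math

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative3(w, l):
    # exp-neg: exp(-w) * x  ==  x / exp(w)
    return math.pow(l, math.exp(w)) / math.exp(w)

# only the final rounding differs between the two forms
for w, l in [(0.3, 2.0), (-1.2, 0.5), (2.0, 1.1)]:
    assert math.isclose(original(w, l), alternative3(w, l), rel_tol=1e-12)
```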

Alternative 4: 81.3% accurate, 2.8× speedup

\[\begin{array}{l} \mathbf{if}\;w \leq 0.057:\\ \;\;\;\;\ell + w \cdot \left(w \cdot \left(w \cdot \left(\left(\ell \cdot 0.5 - \ell\right) - \ell \cdot 0.6666666666666666\right) + \left(\ell - \ell \cdot 0.5\right)\right) - \ell\right)\\ \mathbf{else}:\\ \;\;\;\;\sqrt{\ell \cdot \ell}\\ \end{array} \]
(FPCore (w l)
 :precision binary64
 (if (<= w 0.057)
   (+
    l
    (*
     w
     (-
      (*
       w
       (+ (* w (- (- (* l 0.5) l) (* l 0.6666666666666666))) (- l (* l 0.5))))
      l)))
   (sqrt (* l l))))
double code(double w, double l) {
	double tmp;
	if (w <= 0.057) {
		tmp = l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
	} else {
		tmp = sqrt((l * l));
	}
	return tmp;
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    real(8) :: tmp
    if (w <= 0.057d0) then
        tmp = l + (w * ((w * ((w * (((l * 0.5d0) - l) - (l * 0.6666666666666666d0))) + (l - (l * 0.5d0)))) - l))
    else
        tmp = sqrt((l * l))
    end if
    code = tmp
end function
public static double code(double w, double l) {
	double tmp;
	if (w <= 0.057) {
		tmp = l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
	} else {
		tmp = Math.sqrt((l * l));
	}
	return tmp;
}
def code(w, l):
	tmp = 0
	if w <= 0.057:
		tmp = l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l))
	else:
		tmp = math.sqrt((l * l))
	return tmp
function code(w, l)
	tmp = 0.0
	if (w <= 0.057)
		tmp = Float64(l + Float64(w * Float64(Float64(w * Float64(Float64(w * Float64(Float64(Float64(l * 0.5) - l) - Float64(l * 0.6666666666666666))) + Float64(l - Float64(l * 0.5)))) - l)));
	else
		tmp = sqrt(Float64(l * l));
	end
	return tmp
end
function tmp_2 = code(w, l)
	tmp = 0.0;
	if (w <= 0.057)
		tmp = l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
	else
		tmp = sqrt((l * l));
	end
	tmp_2 = tmp;
end
code[w_, l_] := If[LessEqual[w, 0.057], N[(l + N[(w * N[(N[(w * N[(N[(w * N[(N[(N[(l * 0.5), $MachinePrecision] - l), $MachinePrecision] - N[(l * 0.6666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + N[(l - N[(l * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - l), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], N[Sqrt[N[(l * l), $MachinePrecision]], $MachinePrecision]]
Derivation
  1. Split input into 2 regimes
  2. if w < 0.0570000000000000021

    1. Initial program 99.7%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Step-by-step derivation
      1. exp-neg 99.7%

        \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. remove-double-neg 99.7%

        \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      3. associate-*l/ 99.7%

        \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
      4. *-lft-identity 99.7%

        \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
      5. remove-double-neg 99.7%

        \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
    3. Simplified 99.7%

      \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
    4. Add Preprocessing
    5. Taylor expanded in w around 0 98.5%

      \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
    6. Taylor expanded in w around 0 87.0%

      \[\leadsto \color{blue}{\ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right)} \]
    7. Step-by-step derivation
      1. *-commutative 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell \cdot -0.5} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      2. metadata-eval 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\ell \cdot \color{blue}{\left(-1 + 0.5\right)} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      3. distribute-rgt-out 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\left(-1 \cdot \ell + 0.5 \cdot \ell\right)} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      4. metadata-eval 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + \color{blue}{\left(-1 \cdot -0.5\right)} \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      5. associate-*r* 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + \color{blue}{-1 \cdot \left(-0.5 \cdot \ell\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      6. *-commutative 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \color{blue}{\left(\ell \cdot -0.5\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      7. metadata-eval 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \left(\ell \cdot \color{blue}{\left(-1 + 0.5\right)}\right)\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      8. distribute-rgt-out 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \color{blue}{\left(-1 \cdot \ell + 0.5 \cdot \ell\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      9. associate-+l+ 87.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(-1 \cdot \ell + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      10. add-sqr-sqrt 0.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{-1 \cdot \ell} \cdot \sqrt{-1 \cdot \ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      11. sqrt-unprod 71.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{\left(-1 \cdot \ell\right) \cdot \left(-1 \cdot \ell\right)}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      12. mul-1-neg 71.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\color{blue}{\left(-\ell\right)} \cdot \left(-1 \cdot \ell\right)} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      13. mul-1-neg 71.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\left(-\ell\right) \cdot \color{blue}{\left(-\ell\right)}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      14. sqr-neg 71.0%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\color{blue}{\ell \cdot \ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      15. sqrt-unprod 87.1%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{\ell} \cdot \sqrt{\ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      16. add-sqr-sqrt 87.1%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    8. Applied egg-rr 87.1%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(\ell + \ell \cdot -0.3333333333333333\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    9. Step-by-step derivation
      1. *-rgt-identity 87.1%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell \cdot 1} + \ell \cdot -0.3333333333333333\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      2. distribute-lft-out 87.1%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\ell \cdot \left(1 + -0.3333333333333333\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
      3. metadata-eval 87.1%

        \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \ell \cdot \color{blue}{0.6666666666666666}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    10. Simplified 87.1%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\ell \cdot 0.6666666666666666}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]

    if 0.0570000000000000021 < w

    1. Initial program 95.1%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. add-sqr-sqrt 95.1%

        \[\leadsto \color{blue}{\left(\sqrt{e^{-w}} \cdot \sqrt{e^{-w}}\right)} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. sqrt-unprod 95.1%

        \[\leadsto \color{blue}{\sqrt{e^{-w} \cdot e^{-w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      3. add-sqr-sqrt 0.0%

        \[\leadsto \sqrt{e^{\color{blue}{\sqrt{-w} \cdot \sqrt{-w}}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      4. sqrt-unprod 2.4%

        \[\leadsto \sqrt{e^{\color{blue}{\sqrt{\left(-w\right) \cdot \left(-w\right)}}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      5. sqr-neg 2.4%

        \[\leadsto \sqrt{e^{\sqrt{\color{blue}{w \cdot w}}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      6. sqrt-unprod 2.4%

        \[\leadsto \sqrt{e^{\color{blue}{\sqrt{w} \cdot \sqrt{w}}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      7. add-sqr-sqrt 2.4%

        \[\leadsto \sqrt{e^{\color{blue}{w}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      8. pow1 2.4%

        \[\leadsto \sqrt{\color{blue}{{\left(e^{w}\right)}^{1}} \cdot e^{-w}} \cdot {\ell}^{\left(e^{w}\right)} \]
      9. exp-neg 2.4%

        \[\leadsto \sqrt{{\left(e^{w}\right)}^{1} \cdot \color{blue}{\frac{1}{e^{w}}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      10. inv-pow 2.4%

        \[\leadsto \sqrt{{\left(e^{w}\right)}^{1} \cdot \color{blue}{{\left(e^{w}\right)}^{-1}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      11. pow-prod-up 100.0%

        \[\leadsto \sqrt{\color{blue}{{\left(e^{w}\right)}^{\left(1 + -1\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      12. metadata-eval 100.0%

        \[\leadsto \sqrt{{\left(e^{w}\right)}^{\color{blue}{0}}} \cdot {\ell}^{\left(e^{w}\right)} \]
      13. metadata-eval 100.0%

        \[\leadsto \sqrt{\color{blue}{1}} \cdot {\ell}^{\left(e^{w}\right)} \]
      14. metadata-eval 100.0%

        \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(e^{w}\right)} \]
      15. *-un-lft-identity 100.0%

        \[\leadsto \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
      16. add-sqr-sqrt 100.0%

        \[\leadsto {\ell}^{\left(e^{\color{blue}{\sqrt{w} \cdot \sqrt{w}}}\right)} \]
      17. sqrt-unprod 100.0%

        \[\leadsto {\ell}^{\left(e^{\color{blue}{\sqrt{w \cdot w}}}\right)} \]
      18. sqr-neg 100.0%

        \[\leadsto {\ell}^{\left(e^{\sqrt{\color{blue}{\left(-w\right) \cdot \left(-w\right)}}}\right)} \]
      19. sqrt-unprod 0.0%

        \[\leadsto {\ell}^{\left(e^{\color{blue}{\sqrt{-w} \cdot \sqrt{-w}}}\right)} \]
      20. add-sqr-sqrt 3.1%

        \[\leadsto {\ell}^{\left(e^{\color{blue}{-w}}\right)} \]
    4. Applied egg-rr 46.1%

      \[\leadsto \color{blue}{\sqrt{\ell \cdot \ell}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 80.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;w \leq 0.057:\\ \;\;\;\;\ell + w \cdot \left(w \cdot \left(w \cdot \left(\left(\ell \cdot 0.5 - \ell\right) - \ell \cdot 0.6666666666666666\right) + \left(\ell - \ell \cdot 0.5\right)\right) - \ell\right)\\ \mathbf{else}:\\ \;\;\;\;\sqrt{\ell \cdot \ell}\\ \end{array} \]
  5. Add Preprocessing
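The two regimes behave very differently: below the break w ≤ 0.057 the cubic from the Taylor expansion tracks the original closely, while above it the program collapses to √(ℓ·ℓ), i.e. |ℓ|, which is fast but only roughly accurate. A sketch of both branches in Python (the tolerances are illustrative, not Herbie's error measure):

```python
import math

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative4(w, l):
    if w <= 0.057:
        # cubic approximation from the Taylor expansion around w = 0
        return l + w * (w * (w * ((l * 0.5 - l) - l * 0.6666666666666666)
                             + (l - l * 0.5)) - l)
    # for larger w the program reduces to sqrt(l * l) = |l|
    return math.sqrt(l * l)

# near w = 0 the polynomial branch stays close to the original
assert math.isclose(alternative4(0.001, 2.0), original(0.001, 2.0), rel_tol=1e-2)
# the w > 0.057 branch is just |l|: fast, but a coarse estimate
assert alternative4(1.0, -3.0) == 3.0
```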

Alternative 5: 97.9% accurate, 2.9× speedup

\[ \ell \cdot e^{-w} \]
(FPCore (w l) :precision binary64 (* l (exp (- w))))
double code(double w, double l) {
	return l * exp(-w);
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l * exp(-w)
end function
public static double code(double w, double l) {
	return l * Math.exp(-w);
}
def code(w, l):
	return l * math.exp(-w)
function code(w, l)
	return Float64(l * exp(Float64(-w)))
end
function tmp = code(w, l)
	tmp = l * exp(-w);
end
code[w_, l_] := N[(l * N[Exp[(-w)], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Add Preprocessing
  3. Taylor expanded in w around 0 97.6%

    \[\leadsto e^{-w} \cdot \color{blue}{\ell} \]
  4. Final simplification 97.6%

    \[\leadsto \ell \cdot e^{-w} \]
  5. Add Preprocessing
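Alternative 5 keeps only the leading Taylor term: near w = 0 we have e^w ≈ 1, so ℓ^(e^w) ≈ ℓ and the whole expression reduces to ℓ · e^(-w). A sketch in Python showing where the approximation holds and where it breaks down:

```python
import math

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative5(w, l):
    # Taylor expansion in w around 0 replaces l ** exp(w) with l
    return l * math.exp(-w)

# close to the original for small w, where exp(w) is near 1
assert math.isclose(alternative5(0.001, 2.0), original(0.001, 2.0), rel_tol=1e-2)
# noticeably off once w is large enough that l ** exp(w) differs from l
assert not math.isclose(alternative5(1.0, 2.0), original(1.0, 2.0), rel_tol=0.1)
```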

Alternative 6: 97.9% accurate, 3.0× speedup

\[ \frac{\ell}{e^{w}} \]
(FPCore (w l) :precision binary64 (/ l (exp w)))
double code(double w, double l) {
	return l / exp(w);
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l / exp(w)
end function
public static double code(double w, double l) {
	return l / Math.exp(w);
}
def code(w, l):
	return l / math.exp(w)
function code(w, l)
	return Float64(l / exp(w))
end
function tmp = code(w, l)
	tmp = l / exp(w);
end
code[w_, l_] := N[(l / N[Exp[w], $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in w around 0 97.6%

    \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
  6. Add Preprocessing

Alternative 7: 74.8% accurate, 12.2× speedup

\[ \ell + w \cdot \left(w \cdot \left(w \cdot \left(\left(\ell \cdot 0.5 - \ell\right) - \ell \cdot 0.6666666666666666\right) + \left(\ell - \ell \cdot 0.5\right)\right) - \ell\right) \]
(FPCore (w l)
 :precision binary64
 (+
  l
  (*
   w
   (-
    (*
     w
     (+ (* w (- (- (* l 0.5) l) (* l 0.6666666666666666))) (- l (* l 0.5))))
    l))))
double code(double w, double l) {
	return l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l + (w * ((w * ((w * (((l * 0.5d0) - l) - (l * 0.6666666666666666d0))) + (l - (l * 0.5d0)))) - l))
end function
public static double code(double w, double l) {
	return l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
}
def code(w, l):
	return l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l))
function code(w, l)
	return Float64(l + Float64(w * Float64(Float64(w * Float64(Float64(w * Float64(Float64(Float64(l * 0.5) - l) - Float64(l * 0.6666666666666666))) + Float64(l - Float64(l * 0.5)))) - l)))
end
function tmp = code(w, l)
	tmp = l + (w * ((w * ((w * (((l * 0.5) - l) - (l * 0.6666666666666666))) + (l - (l * 0.5)))) - l));
end
code[w_, l_] := N[(l + N[(w * N[(N[(w * N[(N[(w * N[(N[(N[(l * 0.5), $MachinePrecision] - l), $MachinePrecision] - N[(l * 0.6666666666666666), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + N[(l - N[(l * 0.5), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - l), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in w around 0 97.6%

    \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
  6. Taylor expanded in w around 0 73.5%

    \[\leadsto \color{blue}{\ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right)} \]
  7. Step-by-step derivation
    1. *-commutative 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell \cdot -0.5} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    2. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\ell \cdot \color{blue}{\left(-1 + 0.5\right)} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    3. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\left(-1 \cdot \ell + 0.5 \cdot \ell\right)} + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    4. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + \color{blue}{\left(-1 \cdot -0.5\right)} \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    5. associate-*r* 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + \color{blue}{-1 \cdot \left(-0.5 \cdot \ell\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    6. *-commutative 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \color{blue}{\left(\ell \cdot -0.5\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    7. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \left(\ell \cdot \color{blue}{\left(-1 + 0.5\right)}\right)\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    8. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\left(-1 \cdot \ell + -1 \cdot \color{blue}{\left(-1 \cdot \ell + 0.5 \cdot \ell\right)}\right) + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    9. associate-+l+ 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(-1 \cdot \ell + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    10. add-sqr-sqrt 0.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{-1 \cdot \ell} \cdot \sqrt{-1 \cdot \ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    11. sqrt-unprod 60.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{\left(-1 \cdot \ell\right) \cdot \left(-1 \cdot \ell\right)}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    12. mul-1-neg 60.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\color{blue}{\left(-\ell\right)} \cdot \left(-1 \cdot \ell\right)} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    13. mul-1-neg 60.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\left(-\ell\right) \cdot \color{blue}{\left(-\ell\right)}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    14. sqr-neg 60.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\sqrt{\color{blue}{\ell \cdot \ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    15. sqrt-unprod 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\sqrt{\ell} \cdot \sqrt{\ell}} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    16. add-sqr-sqrt 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell} + \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + 0.16666666666666666 \cdot \ell\right)\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  8. Applied egg-rr 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(\ell + \ell \cdot -0.3333333333333333\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  9. Step-by-step derivation
    1. *-rgt-identity 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(\color{blue}{\ell \cdot 1} + \ell \cdot -0.3333333333333333\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    2. distribute-lft-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\ell \cdot \left(1 + -0.3333333333333333\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    3. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \ell \cdot \color{blue}{0.6666666666666666}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  10. Simplified 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\ell \cdot 0.6666666666666666}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  11. Final simplification 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(w \cdot \left(\left(\ell \cdot 0.5 - \ell\right) - \ell \cdot 0.6666666666666666\right) + \left(\ell - \ell \cdot 0.5\right)\right) - \ell\right) \]
  12. Add Preprocessing
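
As a quick numerical sanity check (not part of the Herbie report), the final simplification above can be compared against the original program at a small value of w; the sample point below is an arbitrary choice:

```python
import math

def original(w, l):
    # the report's initial program: exp(-w) * pow(l, exp(w))
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative7(w, l):
    # the "Final simplification" polynomial above
    return l + w * (w * (w * ((l * 0.5 - l) - l * 0.6666666666666666)
                         + (l - l * 0.5)) - l)

# Near w = 0 with l = 1 the polynomial tracks the original closely;
# w = 1e-3 and l = 1.0 are arbitrary sample values.
w, l = 1e-3, 1.0
rel_err = abs(alternative7(w, l) - original(w, l)) / abs(original(w, l))
```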

Alternative 8: 74.8% accurate, 16.1× speedup

\[\begin{array}{l} \\ \ell + w \cdot \left(w \cdot \left(\left(\ell - \ell \cdot 0.5\right) - w \cdot \left(\ell \cdot 0.8333333333333334\right)\right) - \ell\right) \end{array} \]
(FPCore (w l)
 :precision binary64
 (+ l (* w (- (* w (- (- l (* l 0.5)) (* w (* l 0.8333333333333334)))) l))))
double code(double w, double l) {
	return l + (w * ((w * ((l - (l * 0.5)) - (w * (l * 0.8333333333333334)))) - l));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l + (w * ((w * ((l - (l * 0.5d0)) - (w * (l * 0.8333333333333334d0)))) - l))
end function
public static double code(double w, double l) {
	return l + (w * ((w * ((l - (l * 0.5)) - (w * (l * 0.8333333333333334)))) - l));
}
def code(w, l):
	return l + (w * ((w * ((l - (l * 0.5)) - (w * (l * 0.8333333333333334)))) - l))
function code(w, l)
	return Float64(l + Float64(w * Float64(Float64(w * Float64(Float64(l - Float64(l * 0.5)) - Float64(w * Float64(l * 0.8333333333333334)))) - l)))
end
function tmp = code(w, l)
	tmp = l + (w * ((w * ((l - (l * 0.5)) - (w * (l * 0.8333333333333334)))) - l));
end
code[w_, l_] := N[(l + N[(w * N[(N[(w * N[(N[(l - N[(l * 0.5), $MachinePrecision]), $MachinePrecision] - N[(w * N[(l * 0.8333333333333334), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] - l), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\ell + w \cdot \left(w \cdot \left(\left(\ell - \ell \cdot 0.5\right) - w \cdot \left(\ell \cdot 0.8333333333333334\right)\right) - \ell\right)
\end{array}
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in w around 0 97.6%

    \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
  6. Taylor expanded in w around 0 73.5%

    \[\leadsto \color{blue}{\ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right)} \]
  7. Step-by-step derivation
    1. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\ell \cdot \left(-0.5 + 0.16666666666666666\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    2. add-sqr-sqrt 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(\sqrt{\ell} \cdot \sqrt{\ell}\right)} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    3. sqrt-unprod 55.3%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\sqrt{\ell \cdot \ell}} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    4. sqr-neg 55.3%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \sqrt{\color{blue}{\left(-\ell\right) \cdot \left(-\ell\right)}} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    5. mul-1-neg 55.3%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \sqrt{\color{blue}{\left(-1 \cdot \ell\right)} \cdot \left(-\ell\right)} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    6. mul-1-neg 55.3%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \sqrt{\left(-1 \cdot \ell\right) \cdot \color{blue}{\left(-1 \cdot \ell\right)}} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    7. sqrt-unprod 0.0%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(\sqrt{-1 \cdot \ell} \cdot \sqrt{-1 \cdot \ell}\right)} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    8. add-sqr-sqrt 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(-1 \cdot \ell\right)} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    9. mul-1-neg 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) + \color{blue}{\left(-\ell\right)} \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    10. cancel-sign-sub-inv 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\left(-1 \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right) - \ell \cdot \left(-0.5 + 0.16666666666666666\right)\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    11. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \color{blue}{\left(\ell \cdot \left(-1 + 0.5\right)\right)} - \ell \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    12. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(\ell \cdot \color{blue}{-0.5}\right) - \ell \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    13. *-commutative 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \color{blue}{\left(-0.5 \cdot \ell\right)} - \ell \cdot \left(-0.5 + 0.16666666666666666\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    14. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \left(-0.5 \cdot \ell\right) - \color{blue}{\left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    15. fma-neg 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\mathsf{fma}\left(-1, -0.5 \cdot \ell, -\left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    16. *-un-lft-identity 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \mathsf{fma}\left(-1, -0.5 \cdot \ell, -\color{blue}{1 \cdot \left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    17. *-commutative 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \mathsf{fma}\left(-1, -0.5 \cdot \ell, -\color{blue}{\left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right) \cdot 1}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    18. *-commutative 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \mathsf{fma}\left(-1, -0.5 \cdot \ell, -\color{blue}{1 \cdot \left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    19. *-un-lft-identity 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \mathsf{fma}\left(-1, -0.5 \cdot \ell, -\color{blue}{\left(-0.5 \cdot \ell + 0.16666666666666666 \cdot \ell\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    20. distribute-rgt-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \mathsf{fma}\left(-1, -0.5 \cdot \ell, -\color{blue}{\ell \cdot \left(-0.5 + 0.16666666666666666\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  8. Applied egg-rr 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\mathsf{fma}\left(-1, \ell \cdot -0.5, -\ell \cdot -0.3333333333333333\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  9. Step-by-step derivation
    1. fma-undefine 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\left(-1 \cdot \left(\ell \cdot -0.5\right) + \left(-\ell \cdot -0.3333333333333333\right)\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    2. neg-mul-1 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(\color{blue}{\left(-\ell \cdot -0.5\right)} + \left(-\ell \cdot -0.3333333333333333\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    3. distribute-neg-in 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\left(-\left(\ell \cdot -0.5 + \ell \cdot -0.3333333333333333\right)\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    4. distribute-lft-out 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(-\color{blue}{\ell \cdot \left(-0.5 + -0.3333333333333333\right)}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    5. distribute-rgt-neg-in 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\left(\ell \cdot \left(-\left(-0.5 + -0.3333333333333333\right)\right)\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    6. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(\ell \cdot \left(-\color{blue}{-0.8333333333333334}\right)\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
    7. metadata-eval 73.5%

      \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \left(\ell \cdot \color{blue}{0.8333333333333334}\right)\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  10. Simplified 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(-1 \cdot \left(w \cdot \color{blue}{\left(\ell \cdot 0.8333333333333334\right)}\right) - \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right) \]
  11. Final simplification 73.5%

    \[\leadsto \ell + w \cdot \left(w \cdot \left(\left(\ell - \ell \cdot 0.5\right) - w \cdot \left(\ell \cdot 0.8333333333333334\right)\right) - \ell\right) \]
  12. Add Preprocessing
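
A quick check (not generated by Herbie) shows the behavior behind the accuracy numbers: Alternative 8's cubic agrees with the original program for tiny w but degrades rapidly away from 0. The sample points below are arbitrary:

```python
import math

def original(w, l):
    # the report's initial program: exp(-w) * pow(l, exp(w))
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative8(w, l):
    return l + w * ((w * ((l - l * 0.5) - w * (l * 0.8333333333333334))) - l)

def rel_err(w, l):
    return abs(alternative8(w, l) - original(w, l)) / abs(original(w, l))

# Relative error at a tiny w versus a moderate w (sample values arbitrary).
small, large = rel_err(1e-3, 1.0), rel_err(1.0, 1.0)
```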

Alternative 9: 70.4% accurate, 23.5× speedup

\[\begin{array}{l} \\ \ell - w \cdot \left(\ell + w \cdot \left(\ell \cdot 0.5 - \ell\right)\right) \end{array} \]
(FPCore (w l) :precision binary64 (- l (* w (+ l (* w (- (* l 0.5) l))))))
double code(double w, double l) {
	return l - (w * (l + (w * ((l * 0.5) - l))));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l - (w * (l + (w * ((l * 0.5d0) - l))))
end function
public static double code(double w, double l) {
	return l - (w * (l + (w * ((l * 0.5) - l))));
}
def code(w, l):
	return l - (w * (l + (w * ((l * 0.5) - l))))
function code(w, l)
	return Float64(l - Float64(w * Float64(l + Float64(w * Float64(Float64(l * 0.5) - l)))))
end
function tmp = code(w, l)
	tmp = l - (w * (l + (w * ((l * 0.5) - l))));
end
code[w_, l_] := N[(l - N[(w * N[(l + N[(w * N[(N[(l * 0.5), $MachinePrecision] - l), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\ell - w \cdot \left(\ell + w \cdot \left(\ell \cdot 0.5 - \ell\right)\right)
\end{array}
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in w around 0 97.6%

    \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
  6. Taylor expanded in w around 0 69.4%

    \[\leadsto \color{blue}{\ell + w \cdot \left(-1 \cdot \left(w \cdot \left(-1 \cdot \ell + 0.5 \cdot \ell\right)\right) - \ell\right)} \]
  7. Final simplification 69.4%

    \[\leadsto \ell - w \cdot \left(\ell + w \cdot \left(\ell \cdot 0.5 - \ell\right)\right) \]
  8. Add Preprocessing
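
Step 5 of the derivation truncates the program to ℓ/e^w before the second Taylor expansion, so the final expression is the degree-2 Taylor polynomial of ℓ·e^{-w} in w. A small check (not part of the report; sample point arbitrary) confirms second-order agreement with that intermediate:

```python
import math

def alternative9(w, l):
    return l - w * (l + w * (l * 0.5 - l))

# Compare against the intermediate l / e^w from step 5 of the derivation;
# the remainder should be on the order of w**3 / 6.
w, l = 0.01, 2.0
intermediate = l * math.exp(-w)
rel_err = abs(alternative9(w, l) - intermediate) / abs(intermediate)
```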

Alternative 10: 63.8% accurate, 61.0× speedup

\[\begin{array}{l} \\ \ell - \ell \cdot w \end{array} \]
(FPCore (w l) :precision binary64 (- l (* l w)))
double code(double w, double l) {
	return l - (l * w);
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l - (l * w)
end function
public static double code(double w, double l) {
	return l - (l * w);
}
def code(w, l):
	return l - (l * w)
function code(w, l)
	return Float64(l - Float64(l * w))
end
function tmp = code(w, l)
	tmp = l - (l * w);
end
code[w_, l_] := N[(l - N[(l * w), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\ell - \ell \cdot w
\end{array}
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. remove-double-neg 99.0%

      \[\leadsto \frac{1}{e^{\color{blue}{-\left(-w\right)}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    3. associate-*l/ 99.0%

      \[\leadsto \color{blue}{\frac{1 \cdot {\ell}^{\left(e^{w}\right)}}{e^{-\left(-w\right)}}} \]
    4. *-lft-identity 99.0%

      \[\leadsto \frac{\color{blue}{{\ell}^{\left(e^{w}\right)}}}{e^{-\left(-w\right)}} \]
    5. remove-double-neg 99.0%

      \[\leadsto \frac{{\ell}^{\left(e^{w}\right)}}{e^{\color{blue}{w}}} \]
  3. Simplified 99.0%

    \[\leadsto \color{blue}{\frac{{\ell}^{\left(e^{w}\right)}}{e^{w}}} \]
  4. Add Preprocessing
  5. Taylor expanded in w around 0 97.6%

    \[\leadsto \frac{\color{blue}{\ell}}{e^{w}} \]
  6. Taylor expanded in w around 0 62.4%

    \[\leadsto \color{blue}{\ell + -1 \cdot \left(\ell \cdot w\right)} \]
  7. Step-by-step derivation
    1. mul-1-neg 62.4%

      \[\leadsto \ell + \color{blue}{\left(-\ell \cdot w\right)} \]
    2. *-commutative 62.4%

      \[\leadsto \ell + \left(-\color{blue}{w \cdot \ell}\right) \]
    3. unsub-neg 62.4%

      \[\leadsto \color{blue}{\ell - w \cdot \ell} \]
    4. *-commutative 62.4%

      \[\leadsto \ell - \color{blue}{\ell \cdot w} \]
  8. Simplified 62.4%

    \[\leadsto \color{blue}{\ell - \ell \cdot w} \]
  9. Add Preprocessing
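
A short check (not part of the report): with l = 1 the original reduces to exp(-w), so the error of this first-order form ℓ·(1 − w) shrinks roughly like w²/2 near 0. The value of w below is arbitrary:

```python
import math

def original(w, l):
    # the report's initial program: exp(-w) * pow(l, exp(w))
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative10(w, l):
    return l - l * w

# At l = 1 the original is exp(-w); the first-order form leaves a
# remainder of about w**2 / 2 (w = 0.01 chosen arbitrarily).
w = 0.01
rel_err = abs(alternative10(w, 1.0) - original(w, 1.0)) / original(w, 1.0)
```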

Alternative 11: 57.2% accurate, 305.0× speedup

\[\begin{array}{l} \\ \ell \end{array} \]
(FPCore (w l) :precision binary64 l)
double code(double w, double l) {
	return l;
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = l
end function
public static double code(double w, double l) {
	return l;
}
def code(w, l):
	return l
function code(w, l)
	return l
end
function tmp = code(w, l)
	tmp = l;
end
code[w_, l_] := l
\begin{array}{l}

\\
\ell
\end{array}
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Add Preprocessing
  3. Taylor expanded in w around 0 58.0%

    \[\leadsto \color{blue}{\ell} \]
  4. Add Preprocessing
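
This constant-time alternative is exact at w = 0, where exp(0) = 1 and l**1 = l, which is why simply returning l retains some accuracy for inputs near 0. A minimal check (not part of the report; l = 3.0 is an arbitrary sample):

```python
import math

def original(w, l):
    # the report's initial program: exp(-w) * pow(l, exp(w))
    return math.exp(-w) * math.pow(l, math.exp(w))

# At w = 0 the original collapses to l exactly: exp(-0) = 1, l ** exp(0) = l.
exact_at_zero = original(0.0, 3.0)
```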

Alternative 12: 4.4% accurate, 305.0× speedup

\[\begin{array}{l} \\ 1 \end{array} \]
(FPCore (w l) :precision binary64 1.0)
double code(double w, double l) {
	return 1.0;
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = 1.0d0
end function
public static double code(double w, double l) {
	return 1.0;
}
def code(w, l):
	return 1.0
function code(w, l)
	return 1.0
end
function tmp = code(w, l)
	tmp = 1.0;
end
code[w_, l_] := 1.0
\begin{array}{l}

\\
1
\end{array}
Derivation
  1. Initial program 99.0%

    \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. exp-neg 99.0%

      \[\leadsto \color{blue}{\frac{1}{e^{w}}} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. associate-/r/ 98.8%

      \[\leadsto \color{blue}{\frac{1}{\frac{e^{w}}{{\ell}^{\left(e^{w}\right)}}}} \]
  4. Applied egg-rr 57.8%

    \[\leadsto \color{blue}{\frac{1}{\frac{1}{\ell}}} \]
  5. Step-by-step derivation
    1. add-sqr-sqrt 57.5%

      \[\leadsto \frac{1}{\color{blue}{\sqrt{\frac{1}{\ell}} \cdot \sqrt{\frac{1}{\ell}}}} \]
    2. associate-/r* 57.5%

      \[\leadsto \color{blue}{\frac{\frac{1}{\sqrt{\frac{1}{\ell}}}}{\sqrt{\frac{1}{\ell}}}} \]
    3. metadata-eval 57.5%

      \[\leadsto \frac{\frac{\color{blue}{\sqrt{1}}}{\sqrt{\frac{1}{\ell}}}}{\sqrt{\frac{1}{\ell}}} \]
    4. sqrt-div 57.7%

      \[\leadsto \frac{\color{blue}{\sqrt{\frac{1}{\frac{1}{\ell}}}}}{\sqrt{\frac{1}{\ell}}} \]
    5. remove-double-div 57.7%

      \[\leadsto \frac{\sqrt{\color{blue}{\ell}}}{\sqrt{\frac{1}{\ell}}} \]
    6. /-rgt-identity 57.7%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{\color{blue}{\frac{\frac{1}{\ell}}{1}}}} \]
    7. /-rgt-identity 57.7%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{\color{blue}{\frac{1}{\ell}}}} \]
    8. add-exp-log 53.7%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{\color{blue}{e^{\log \left(\frac{1}{\ell}\right)}}}} \]
    9. add-sqr-sqrt 24.9%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\color{blue}{\sqrt{\log \left(\frac{1}{\ell}\right)} \cdot \sqrt{\log \left(\frac{1}{\ell}\right)}}}}} \]
    10. sqrt-unprod 27.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\color{blue}{\sqrt{\log \left(\frac{1}{\ell}\right) \cdot \log \left(\frac{1}{\ell}\right)}}}}} \]
    11. log-rec 27.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\sqrt{\color{blue}{\left(-\log \ell\right)} \cdot \log \left(\frac{1}{\ell}\right)}}}} \]
    12. log-rec 27.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\sqrt{\left(-\log \ell\right) \cdot \color{blue}{\left(-\log \ell\right)}}}}} \]
    13. sqr-neg 27.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\sqrt{\color{blue}{\log \ell \cdot \log \ell}}}}} \]
    14. sqrt-unprod 2.1%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\color{blue}{\sqrt{\log \ell} \cdot \sqrt{\log \ell}}}}} \]
    15. add-sqr-sqrt 4.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{e^{\color{blue}{\log \ell}}}} \]
    16. add-exp-log 4.4%

      \[\leadsto \frac{\sqrt{\ell}}{\sqrt{\color{blue}{\ell}}} \]
  6. Applied egg-rr 4.4%

    \[\leadsto \color{blue}{\frac{\sqrt{\ell}}{\sqrt{\ell}}} \]
  7. Step-by-step derivation
    1. *-inverses 4.4%

      \[\leadsto \color{blue}{1} \]
  8. Simplified 4.4%

    \[\leadsto \color{blue}{1} \]
  9. Add Preprocessing
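
The constant 1 is exact only where the original program itself evaluates to 1 (for example at w = 0, l = 1), which is consistent with the very low 4.4% accuracy. A minimal check (not part of the report; sample points arbitrary):

```python
import math

def original(w, l):
    # the report's initial program: exp(-w) * pow(l, exp(w))
    return math.exp(-w) * math.pow(l, math.exp(w))

# Exact at w = 0, l = 1: exp(0) * 1 ** 1 == 1. Away from such points the
# constant is far off (w = 0.5, l = 2.0 chosen arbitrarily).
at_special_point = original(0.0, 1.0)
err_elsewhere = abs(original(0.5, 2.0) - 1.0)
```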

Reproduce

herbie shell --seed 2024097 
(FPCore (w l)
  :name "exp-w (used to crash)"
  :precision binary64
  (* (exp (- w)) (pow l (exp w))))