exp-w (used to crash)

Percentage Accurate: 99.5% → 99.2%
Time: 16.7s
Alternatives: 12
Speedup: 1.0×

Specification

\[\begin{array}{l} \\ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \end{array} \]
(FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
double code(double w, double l) {
	return exp(-w) * pow(l, exp(w));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = exp(-w) * (l ** exp(w))
end function
public static double code(double w, double l) {
	return Math.exp(-w) * Math.pow(l, Math.exp(w));
}
def code(w, l):
	return math.exp(-w) * math.pow(l, math.exp(w))
function code(w, l)
	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
end
function tmp = code(w, l)
	tmp = exp(-w) * (l ^ exp(w));
end
code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
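All of the listings above implement the same product. One way to see why the rewrites below are available is that, for l > 0, the whole expression collapses to a single exponential, e^(log(l)·e^w − w). A minimal Python sketch of that identity (the sample points are arbitrary):

```python
import math

def spec(w, l):
    # Original program: e^-w * l^(e^w)
    return math.exp(-w) * math.pow(l, math.exp(w))

def exp_form(w, l):
    # For l > 0, l^(e^w) = e^(log(l) * e^w), so the product
    # collapses to a single exponential.
    return math.exp(math.log(l) * math.exp(w) - w)

for w, l in [(0.5, 2.0), (-1.0, 0.3), (2.0, 1.5)]:
    assert math.isclose(spec(w, l), exp_form(w, l), rel_tol=1e-12)
```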

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs Input Value

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is named in the plot title); the vertical axis shows accuracy, and higher is better. Red represents the original program, while blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 12 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \end{array} \]
(FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
double code(double w, double l) {
	return exp(-w) * pow(l, exp(w));
}
real(8) function code(w, l)
    real(8), intent (in) :: w
    real(8), intent (in) :: l
    code = exp(-w) * (l ** exp(w))
end function
public static double code(double w, double l) {
	return Math.exp(-w) * Math.pow(l, Math.exp(w));
}
def code(w, l):
	return math.exp(-w) * math.pow(l, math.exp(w))
function code(w, l)
	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
end
function tmp = code(w, l)
	tmp = exp(-w) * (l ^ exp(w));
end
code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]

Alternative 1: 99.2% accurate, 1.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;w \leq -1.12 \cdot 10^{-14}:\\ \;\;\;\;e^{\mathsf{fma}\left(\log \ell, e^{w}, -w\right)}\\ \mathbf{else}:\\ \;\;\;\;1 \cdot {\ell}^{\left(1 + w\right)}\\ \end{array} \end{array} \]
(FPCore (w l)
 :precision binary64
 (if (<= w -1.12e-14)
   (exp (fma (log l) (exp w) (- w)))
   (* 1.0 (pow l (+ 1.0 w)))))
double code(double w, double l) {
	double tmp;
	if (w <= -1.12e-14) {
		tmp = exp(fma(log(l), exp(w), -w));
	} else {
		tmp = 1.0 * pow(l, (1.0 + w));
	}
	return tmp;
}
function code(w, l)
	tmp = 0.0
	if (w <= -1.12e-14)
		tmp = exp(fma(log(l), exp(w), Float64(-w)));
	else
		tmp = Float64(1.0 * (l ^ Float64(1.0 + w)));
	end
	return tmp
end
code[w_, l_] := If[LessEqual[w, -1.12e-14], N[Exp[N[(N[Log[l], $MachinePrecision] * N[Exp[w], $MachinePrecision] + (-w)), $MachinePrecision]], $MachinePrecision], N[(1.0 * N[Power[l, N[(1.0 + w), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]]
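Alternative 1's branch structure can be sketched in plain Python. Note that `math.fma` only exists in recent Python versions, so the fused multiply-add is written out as `a*b + c` here; that keeps the algebraic shape of Herbie's output but not its single rounding. Sample points are arbitrary:

```python
import math

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

def alternative1(w, l):
    # Sketch of Herbie's Alternative 1; fma(a, b, c) emulated as a*b + c.
    if w <= -1.12e-14:
        return math.exp(math.log(l) * math.exp(w) + (-w))
    else:
        return 1.0 * math.pow(l, 1.0 + w)

# Away from the branch point the two branches track the original closely.
assert math.isclose(original(-2.0, 3.0), alternative1(-2.0, 3.0), rel_tol=1e-9)
assert math.isclose(original(1e-16, 0.7), alternative1(1e-16, 0.7), rel_tol=1e-6)
```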
Derivation
  1. Split input into 2 regimes
  2. if w < -1.12000000000000006e-14

    1. Initial program 99.8%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-*.f64 (N/A)

        \[\leadsto \color{blue}{e^{-w} \cdot {\ell}^{\left(e^{w}\right)}} \]
      2. *-commutative (N/A)

        \[\leadsto \color{blue}{{\ell}^{\left(e^{w}\right)} \cdot e^{-w}} \]
      3. lift-pow.f64 (N/A)

        \[\leadsto \color{blue}{{\ell}^{\left(e^{w}\right)}} \cdot e^{-w} \]
      4. pow-to-exp (N/A)

        \[\leadsto \color{blue}{e^{\log \ell \cdot e^{w}}} \cdot e^{-w} \]
      5. lift-exp.f64 (N/A)

        \[\leadsto e^{\log \ell \cdot e^{w}} \cdot \color{blue}{e^{-w}} \]
      6. prod-exp (N/A)

        \[\leadsto \color{blue}{e^{\log \ell \cdot e^{w} + \left(-w\right)}} \]
      7. lower-exp.f64 (N/A)

        \[\leadsto \color{blue}{e^{\log \ell \cdot e^{w} + \left(-w\right)}} \]
      8. lower-fma.f64 (N/A)

        \[\leadsto e^{\color{blue}{\mathsf{fma}\left(\log \ell, e^{w}, -w\right)}} \]
      9. lower-log.f64 (99.7%)

        \[\leadsto e^{\mathsf{fma}\left(\color{blue}{\log \ell}, e^{w}, -w\right)} \]
    4. Applied rewrites (99.7%)

      \[\leadsto \color{blue}{e^{\mathsf{fma}\left(\log \ell, e^{w}, -w\right)}} \]

    if -1.12000000000000006e-14 < w

    1. Initial program 98.7%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Taylor expanded in w around 0

      \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
    4. Step-by-step derivation
      1. lower-+.f64 (98.4%)

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
    5. Applied rewrites (98.4%)

      \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
    6. Taylor expanded in w around 0

      \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]
    7. Step-by-step derivation
      1. Applied rewrites (99.5%)

        \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]
    8. Recombined 2 regimes into one program.
    9. Add Preprocessing
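The Taylor step above replaces e^w by 1 + w in the exponent and drops the e^{-w} factor, so the two regime bodies should agree with the original near the split point w ≈ −1.12e-14. A small Python check of that claim (fma written out as a plain multiply-add; the value of l is arbitrary):

```python
import math

def original(w, l):
    return math.exp(-w) * math.pow(l, math.exp(w))

# The two regime bodies from Alternative 1:
def regime_neg(w, l):   # used for w <= -1.12e-14
    return math.exp(math.log(l) * math.exp(w) - w)

def regime_pos(w, l):   # used for w > -1.12e-14
    return math.pow(l, 1.0 + w)

l = 2.5
for w in (-1e-13, -1e-14, 1e-14, 1e-13):
    # Near w = 0 both bodies agree with the original to high accuracy.
    assert math.isclose(regime_neg(w, l), original(w, l), rel_tol=1e-9)
    assert math.isclose(regime_pos(w, l), original(w, l), rel_tol=1e-9)
```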

    Alternative 2: 97.5% accurate, 1.0× speedup

    \[\begin{array}{l} \\ e^{-w} \cdot {\left({\ell}^{-1}\right)}^{-1} \end{array} \]
    (FPCore (w l) :precision binary64 (* (exp (- w)) (pow (pow l -1.0) -1.0)))
    double code(double w, double l) {
    	return exp(-w) * pow(pow(l, -1.0), -1.0);
    }
    
    real(8) function code(w, l)
        real(8), intent (in) :: w
        real(8), intent (in) :: l
        code = exp(-w) * ((l ** (-1.0d0)) ** (-1.0d0))
    end function
    
    public static double code(double w, double l) {
    	return Math.exp(-w) * Math.pow(Math.pow(l, -1.0), -1.0);
    }
    
    def code(w, l):
    	return math.exp(-w) * math.pow(math.pow(l, -1.0), -1.0)
    
    function code(w, l)
    	return Float64(exp(Float64(-w)) * ((l ^ -1.0) ^ -1.0))
    end
    
    function tmp = code(w, l)
    	tmp = exp(-w) * ((l ^ -1.0) ^ -1.0);
    end
    
    code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[N[Power[l, -1.0], $MachinePrecision], -1.0], $MachinePrecision]), $MachinePrecision]
    
    
    Derivation
    1. Initial program 99.0%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-pow.f64 (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
      2. pow-to-exp (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
      3. sinh-+-cosh-rev (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
      4. flip-+ (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
      5. sinh-cosh (N/A)

        \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
      6. sinh---cosh-rev (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
      7. lower-/.f64 (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
      8. exp-neg (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
      9. pow-to-exp (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
      10. lift-pow.f64 (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
      11. lower-/.f64 (98.9%)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
    4. Applied rewrites (98.9%)

      \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
    5. Taylor expanded in w around 0

      \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{\ell}}} \]
    6. Step-by-step derivation
      1. lower-/.f64 (97.3%)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{\ell}}} \]
    7. Applied rewrites (97.3%)

      \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{\ell}}} \]
    8. Final simplification (97.3%)

      \[\leadsto e^{-w} \cdot {\left({\ell}^{-1}\right)}^{-1} \]
    9. Add Preprocessing

    Alternative 3: 99.5% accurate, 1.0× speedup

    \[\begin{array}{l} \\ e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \end{array} \]
    (FPCore (w l) :precision binary64 (* (exp (- w)) (pow l (exp w))))
    double code(double w, double l) {
    	return exp(-w) * pow(l, exp(w));
    }
    
    real(8) function code(w, l)
        real(8), intent (in) :: w
        real(8), intent (in) :: l
        code = exp(-w) * (l ** exp(w))
    end function
    
    public static double code(double w, double l) {
    	return Math.exp(-w) * Math.pow(l, Math.exp(w));
    }
    
    def code(w, l):
    	return math.exp(-w) * math.pow(l, math.exp(w))
    
    function code(w, l)
    	return Float64(exp(Float64(-w)) * (l ^ exp(w)))
    end
    
    function tmp = code(w, l)
    	tmp = exp(-w) * (l ^ exp(w));
    end
    
    code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] * N[Power[l, N[Exp[w], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
    
    
    Derivation
    1. Initial program 99.0%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Add Preprocessing

    Alternative 4: 97.5% accurate, 1.4× speedup

    \[\begin{array}{l} \\ \frac{e^{-w}}{{\ell}^{-1}} \end{array} \]
    (FPCore (w l) :precision binary64 (/ (exp (- w)) (pow l -1.0)))
    double code(double w, double l) {
    	return exp(-w) / pow(l, -1.0);
    }
    
    real(8) function code(w, l)
        real(8), intent (in) :: w
        real(8), intent (in) :: l
        code = exp(-w) / (l ** (-1.0d0))
    end function
    
    public static double code(double w, double l) {
    	return Math.exp(-w) / Math.pow(l, -1.0);
    }
    
    def code(w, l):
    	return math.exp(-w) / math.pow(l, -1.0)
    
    function code(w, l)
    	return Float64(exp(Float64(-w)) / (l ^ -1.0))
    end
    
    function tmp = code(w, l)
    	tmp = exp(-w) / (l ^ -1.0);
    end
    
    code[w_, l_] := N[(N[Exp[(-w)], $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]
    
    
    Derivation
    1. Initial program 99.0%

      \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift-pow.f64 (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
      2. pow-to-exp (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
      3. sinh-+-cosh-rev (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
      4. flip-+ (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
      5. sinh-cosh (N/A)

        \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
      6. sinh---cosh-rev (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
      7. lower-/.f64 (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
      8. exp-neg (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
      9. pow-to-exp (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
      10. lift-pow.f64 (N/A)

        \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
      11. lower-/.f64 (98.9%)

        \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
    4. Applied rewrites (98.9%)

      \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
    5. Step-by-step derivation
      1. lift-*.f64 (N/A)

        \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      2. lift-/.f64 (N/A)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      3. associate-*r/ (N/A)

        \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      4. *-rgt-identity (N/A)

        \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
      5. lower-/.f64 (98.9%)

        \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      6. lift-/.f64 (N/A)

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      7. lift-pow.f64 (N/A)

        \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
      8. pow-flip (N/A)

        \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
      9. lower-pow.f64 (N/A)

        \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
      10. lower-neg.f64 (98.9%)

        \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
    6. Applied rewrites (98.9%)

      \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
    7. Taylor expanded in w around 0

      \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
    8. Step-by-step derivation
      1. lower-/.f64 (97.3%)

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
    9. Applied rewrites (97.3%)

      \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
    10. Final simplification (97.3%)

      \[\leadsto \frac{e^{-w}}{{\ell}^{-1}} \]
    11. Add Preprocessing
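Algebraically, Alternative 4's e^{-w} / l^{-1} is just l · e^{-w}: the pow-flip step turns division by l^{-1} into multiplication by l, dropping the l^{e^w} coupling entirely. That is why it is only accurate where e^w is close to 1, i.e. for small |w|. A quick Python check (sample points are arbitrary, with |w| tiny):

```python
import math

def alternative4(w, l):
    # Alternative 4: exp(-w) / pow(l, -1.0)
    return math.exp(-w) / math.pow(l, -1.0)

for w, l in [(1e-9, 4.0), (-1e-9, 0.25)]:
    # Dividing by l^-1 is multiplying by l.
    assert math.isclose(alternative4(w, l), l * math.exp(-w), rel_tol=1e-12)
    # For tiny |w| it also tracks the original program closely.
    original = math.exp(-w) * math.pow(l, math.exp(w))
    assert math.isclose(alternative4(w, l), original, rel_tol=1e-6)
```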

    Alternative 5: 98.9% accurate, 2.2× speedup

    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\ell \leq 0.41:\\ \;\;\;\;\left(1 - w\right) \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right) \cdot {\ell}^{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, w, 1\right), w, 1\right)\right)}\\ \end{array} \end{array} \]
    (FPCore (w l)
     :precision binary64
     (if (<= l 0.41)
       (* (- 1.0 w) (pow l (+ 1.0 w)))
       (* (fma (- (* 0.5 w) 1.0) w 1.0) (pow l (fma (fma 0.5 w 1.0) w 1.0)))))
    double code(double w, double l) {
    	double tmp;
    	if (l <= 0.41) {
    		tmp = (1.0 - w) * pow(l, (1.0 + w));
    	} else {
    		tmp = fma(((0.5 * w) - 1.0), w, 1.0) * pow(l, fma(fma(0.5, w, 1.0), w, 1.0));
    	}
    	return tmp;
    }
    
    function code(w, l)
    	tmp = 0.0
    	if (l <= 0.41)
    		tmp = Float64(Float64(1.0 - w) * (l ^ Float64(1.0 + w)));
    	else
    		tmp = Float64(fma(Float64(Float64(0.5 * w) - 1.0), w, 1.0) * (l ^ fma(fma(0.5, w, 1.0), w, 1.0)));
    	end
    	return tmp
    end
    
    code[w_, l_] := If[LessEqual[l, 0.41], N[(N[(1.0 - w), $MachinePrecision] * N[Power[l, N[(1.0 + w), $MachinePrecision]], $MachinePrecision]), $MachinePrecision], N[(N[(N[(N[(0.5 * w), $MachinePrecision] - 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision] * N[Power[l, N[(N[(0.5 * w + 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]]
    
    
    Derivation
    1. Split input into 2 regimes
    2. if l < 0.409999999999999976

      1. Initial program 99.8%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Taylor expanded in w around 0

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      4. Step-by-step derivation
        1. lower-+.f64 (98.5%)

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      5. Applied rewrites (98.5%)

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      6. Taylor expanded in w around 0

        \[\leadsto \color{blue}{\left(1 + -1 \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      7. Step-by-step derivation
        1. fp-cancel-sign-sub-inv (N/A)

          \[\leadsto \color{blue}{\left(1 - \left(\mathsf{neg}\left(-1\right)\right) \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        2. metadata-eval (N/A)

          \[\leadsto \left(1 - \color{blue}{1} \cdot w\right) \cdot {\ell}^{\left(1 + w\right)} \]
        3. *-lft-identity (N/A)

          \[\leadsto \left(1 - \color{blue}{w}\right) \cdot {\ell}^{\left(1 + w\right)} \]
        4. lower--.f64 (98.5%)

          \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      8. Applied rewrites (98.5%)

        \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]

      if 0.409999999999999976 < l

      1. Initial program 97.9%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Taylor expanded in w around 0

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      4. Step-by-step derivation
        1. lower-+.f64 (59.2%)

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      5. Applied rewrites (59.2%)

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      6. Taylor expanded in w around 0

        \[\leadsto \color{blue}{\left(1 + w \cdot \left(\frac{1}{2} \cdot w - 1\right)\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      7. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \color{blue}{\left(w \cdot \left(\frac{1}{2} \cdot w - 1\right) + 1\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        2. *-commutative (N/A)

          \[\leadsto \left(\color{blue}{\left(\frac{1}{2} \cdot w - 1\right) \cdot w} + 1\right) \cdot {\ell}^{\left(1 + w\right)} \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \color{blue}{\mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        4. lower--.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\color{blue}{\frac{1}{2} \cdot w - 1}, w, 1\right) \cdot {\ell}^{\left(1 + w\right)} \]
        5. lower-*.f64 (61.3%)

          \[\leadsto \mathsf{fma}\left(\color{blue}{0.5 \cdot w} - 1, w, 1\right) \cdot {\ell}^{\left(1 + w\right)} \]
      8. Applied rewrites (61.3%)

        \[\leadsto \color{blue}{\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      9. Taylor expanded in w around 0

        \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right) \cdot {\ell}^{\color{blue}{\left(1 + w \cdot \left(1 + \frac{1}{2} \cdot w\right)\right)}} \]
      10. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right) \cdot {\ell}^{\color{blue}{\left(w \cdot \left(1 + \frac{1}{2} \cdot w\right) + 1\right)}} \]
        2. *-commutative (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right) \cdot {\ell}^{\left(\color{blue}{\left(1 + \frac{1}{2} \cdot w\right) \cdot w} + 1\right)} \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right) \cdot {\ell}^{\color{blue}{\left(\mathsf{fma}\left(1 + \frac{1}{2} \cdot w, w, 1\right)\right)}} \]
        4. +-commutative (N/A)

          \[\leadsto \mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right) \cdot {\ell}^{\left(\mathsf{fma}\left(\color{blue}{\frac{1}{2} \cdot w + 1}, w, 1\right)\right)} \]
        5. lower-fma.f64 (99.6%)

          \[\leadsto \mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right) \cdot {\ell}^{\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(0.5, w, 1\right)}, w, 1\right)\right)} \]
      11. Applied rewrites (99.6%)

        \[\leadsto \mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right) \cdot {\ell}^{\color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, w, 1\right), w, 1\right)\right)}} \]
    3. Recombined 2 regimes into one program.
    4. Add Preprocessing
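The fma expressions in Alternative 5's else branch are Horner-form Taylor polynomials: fma(0.5·w − 1, w, 1) is 1 − w + w²/2, the degree-2 expansion of e^{-w}, and fma(fma(0.5, w, 1), w, 1) is 1 + w + w²/2, the degree-2 expansion of e^w. A small Python check of the first (fma written out as a plain multiply-add; sample points are arbitrary):

```python
import math

def exp_neg_quadratic(w):
    # fma(0.5*w - 1, w, 1) written out: 1 - w + 0.5*w^2,
    # the degree-2 Taylor polynomial of exp(-w) about w = 0.
    return (0.5 * w - 1.0) * w + 1.0

for w in (1e-4, -1e-4, 1e-3):
    # Truncation error is O(w^3), so agreement degrades as |w| grows.
    assert math.isclose(exp_neg_quadratic(w), math.exp(-w), abs_tol=abs(w)**3)
```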

    Alternative 6: 95.9% accurate, 2.2× speedup

    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\ell \leq 0.41:\\ \;\;\;\;\left(1 - w\right) \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}\\ \end{array} \end{array} \]
    (FPCore (w l)
     :precision binary64
     (if (<= l 0.41)
       (* (- 1.0 w) (pow l (+ 1.0 w)))
       (/
        (fma (- (* (fma -0.16666666666666666 w 0.5) w) 1.0) w 1.0)
        (pow l -1.0))))
    double code(double w, double l) {
    	double tmp;
    	if (l <= 0.41) {
    		tmp = (1.0 - w) * pow(l, (1.0 + w));
    	} else {
    		tmp = fma(((fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / pow(l, -1.0);
    	}
    	return tmp;
    }
    
    function code(w, l)
    	tmp = 0.0
    	if (l <= 0.41)
    		tmp = Float64(Float64(1.0 - w) * (l ^ Float64(1.0 + w)));
    	else
    		tmp = Float64(fma(Float64(Float64(fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / (l ^ -1.0));
    	end
    	return tmp
    end
    
    code[w_, l_] := If[LessEqual[l, 0.41], N[(N[(1.0 - w), $MachinePrecision] * N[Power[l, N[(1.0 + w), $MachinePrecision]], $MachinePrecision]), $MachinePrecision], N[(N[(N[(N[(N[(-0.16666666666666666 * w + 0.5), $MachinePrecision] * w), $MachinePrecision] - 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]]
    
    
    Derivation
    1. Split input into 2 regimes
    2. if l < 0.409999999999999976

      1. Initial program 99.8%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Taylor expanded in w around 0

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      4. Step-by-step derivation
        1. lower-+.f64 (98.5%)

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      5. Applied rewrites (98.5%)

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      6. Taylor expanded in w around 0

        \[\leadsto \color{blue}{\left(1 + -1 \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      7. Step-by-step derivation
        1. fp-cancel-sign-sub-inv (N/A)

          \[\leadsto \color{blue}{\left(1 - \left(\mathsf{neg}\left(-1\right)\right) \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        2. metadata-eval (N/A)

          \[\leadsto \left(1 - \color{blue}{1} \cdot w\right) \cdot {\ell}^{\left(1 + w\right)} \]
        3. *-lft-identity (N/A)

          \[\leadsto \left(1 - \color{blue}{w}\right) \cdot {\ell}^{\left(1 + w\right)} \]
        4. lower--.f64 (98.5%)

          \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
      8. Applied rewrites (98.5%)

        \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]

      if 0.409999999999999976 < l

      1. Initial program 97.9%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. lift-pow.f64 (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
        2. pow-to-exp (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
        3. sinh-+-cosh-rev (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
        4. flip-+ (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
        5. sinh-cosh (N/A)

          \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
        6. sinh---cosh-rev (N/A)

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        7. lower-/.f64 (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        8. exp-neg (N/A)

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
        9. pow-to-exp (N/A)

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        10. lift-pow.f64 (N/A)

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        11. lower-/.f64 (97.8%)

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      4. Applied rewrites (97.8%)

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      5. Step-by-step derivation
        1. lift-*.f64 (N/A)

          \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        2. lift-/.f64 (N/A)

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        3. associate-*r/ (N/A)

          \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. *-rgt-identity (N/A)

          \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
        5. lower-/.f64 (97.8%)

          \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        6. lift-/.f64 (N/A)

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        7. lift-pow.f64 (N/A)

          \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        8. pow-flip (N/A)

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        9. lower-pow.f64 (N/A)

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        10. lower-neg.f64 (97.8%)

          \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
      6. Applied rewrites (97.8%)

        \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
      7. Taylor expanded in w around 0

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      8. Step-by-step derivation
        1. lower-/.f64 (97.0%)

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      9. Applied rewrites (97.0%)

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      10. Taylor expanded in w around 0

        \[\leadsto \frac{\color{blue}{1 + w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right)}}{\frac{1}{\ell}} \]
      11. Step-by-step derivation
        1. +-commutative (N/A)

          \[\leadsto \frac{\color{blue}{w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) + 1}}{\frac{1}{\ell}} \]
        2. *-commutative (N/A)

          \[\leadsto \frac{\color{blue}{\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) \cdot w} + 1}{\frac{1}{\ell}} \]
        3. lower-fma.f64 (N/A)

          \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1, w, 1\right)}}{\frac{1}{\ell}} \]
        4. lower--.f64 (N/A)

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1}, w, 1\right)}{\frac{1}{\ell}} \]
        5. *-commutative (N/A)

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
        6. lower-*.f64 (N/A)

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
        7. +-commutative (N/A)

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{-1}{6} \cdot w + \frac{1}{2}\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
        8. lower-fma.f64 (91.7%)

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
      12. Applied rewrites (91.7%)

        \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}}{\frac{1}{\ell}} \]
    3. Recombined 2 regimes into one program.
    4. Final simplification (95.7%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;\ell \leq 0.41:\\ \;\;\;\;\left(1 - w\right) \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}\\ \end{array} \]
    5. Add Preprocessing
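Alternative 6's else branch extends the series one term further: the nested fma numerator is 1 − w + w²/2 − w³/6, the degree-3 Taylor polynomial of e^{-w}, so its truncation error shrinks to O(w⁴). A Python check with the fma chain written out as plain multiply-adds (sample points are arbitrary):

```python
import math

def exp_neg_cubic(w):
    # fma(fma(-1/6, w, 0.5)*w - 1, w, 1) written out:
    # ((-w/6 + 0.5)*w - 1)*w + 1 = 1 - w + w^2/2 - w^3/6,
    # the degree-3 Taylor polynomial of exp(-w) about w = 0.
    return ((-0.16666666666666666 * w + 0.5) * w - 1.0) * w + 1.0

for w in (1e-3, -1e-3, 1e-2):
    # Truncation error is O(w^4).
    assert math.isclose(exp_neg_cubic(w), math.exp(-w), abs_tol=abs(w)**4)
```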

    Alternative 7: 95.7% accurate, 2.2× speedup

    \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\ell \leq 0.41:\\ \;\;\;\;1 \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}\\ \end{array} \end{array} \]
    (FPCore (w l)
     :precision binary64
     (if (<= l 0.41)
       (* 1.0 (pow l (+ 1.0 w)))
       (/
        (fma (- (* (fma -0.16666666666666666 w 0.5) w) 1.0) w 1.0)
        (pow l -1.0))))
    double code(double w, double l) {
    	double tmp;
    	if (l <= 0.41) {
    		tmp = 1.0 * pow(l, (1.0 + w));
    	} else {
    		tmp = fma(((fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / pow(l, -1.0);
    	}
    	return tmp;
    }
    
    function code(w, l)
    	tmp = 0.0
    	if (l <= 0.41)
    		tmp = Float64(1.0 * (l ^ Float64(1.0 + w)));
    	else
    		tmp = Float64(fma(Float64(Float64(fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / (l ^ -1.0));
    	end
    	return tmp
    end
    
    code[w_, l_] := If[LessEqual[l, 0.41], N[(1.0 * N[Power[l, N[(1.0 + w), $MachinePrecision]], $MachinePrecision]), $MachinePrecision], N[(N[(N[(N[(N[(-0.16666666666666666 * w + 0.5), $MachinePrecision] * w), $MachinePrecision] - 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]]
    
    \begin{array}{l}
    
    \\
    \begin{array}{l}
    \mathbf{if}\;\ell \leq 0.41:\\
    \;\;\;\;1 \cdot {\ell}^{\left(1 + w\right)}\\
    
    \mathbf{else}:\\
    \;\;\;\;\frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}\\
    
    
    \end{array}
    \end{array}
    
    Derivation
    1. Split input into 2 regimes
    2. if l < 0.409999999999999976

      1. Initial program 99.8%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Taylor expanded in w around 0

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      4. Step-by-step derivation
        1. lower-+.f64 98.5%

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      5. Applied rewrites 98.5%

        \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
      6. Taylor expanded in w around 0

        \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]
      7. Step-by-step derivation
        1. Applied rewrites 98.3%

          \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]

        if 0.409999999999999976 < l

        1. Initial program 97.9%

          \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
          2. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
          3. sinh-+-cosh-rev N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
          4. flip-+ N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
          5. sinh-cosh N/A

            \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
          6. sinh---cosh-rev N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          7. lower-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          8. exp-neg N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
          9. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          10. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          11. lower-/.f64 97.8%

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. Applied rewrites 97.8%

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        5. Step-by-step derivation
          1. lift-*.f64 N/A

            \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          2. lift-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          3. associate-*r/ N/A

            \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          4. *-rgt-identity N/A

            \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
          5. lower-/.f64 97.8%

            \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          6. lift-/.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          7. lift-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          8. pow-flip N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          9. lower-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          10. lower-neg.f64 97.8%

            \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
        6. Applied rewrites 97.8%

          \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
        7. Taylor expanded in w around 0

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        8. Step-by-step derivation
          1. lower-/.f64 97.0%

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        9. Applied rewrites 97.0%

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        10. Taylor expanded in w around 0

          \[\leadsto \frac{\color{blue}{1 + w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right)}}{\frac{1}{\ell}} \]
        11. Step-by-step derivation
          1. +-commutative N/A

            \[\leadsto \frac{\color{blue}{w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) + 1}}{\frac{1}{\ell}} \]
          2. *-commutative N/A

            \[\leadsto \frac{\color{blue}{\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) \cdot w} + 1}{\frac{1}{\ell}} \]
          3. lower-fma.f64 N/A

            \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1, w, 1\right)}}{\frac{1}{\ell}} \]
          4. lower--.f64 N/A

            \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1}, w, 1\right)}{\frac{1}{\ell}} \]
          5. *-commutative N/A

            \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
          6. lower-*.f64 N/A

            \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
          7. +-commutative N/A

            \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{-1}{6} \cdot w + \frac{1}{2}\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
          8. lower-fma.f64 91.7%

            \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
        12. Applied rewrites 91.7%

          \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}}{\frac{1}{\ell}} \]
      8. Recombined 2 regimes into one program.
      9. Final simplification 95.6%

        \[\leadsto \begin{array}{l} \mathbf{if}\;\ell \leq 0.41:\\ \;\;\;\;1 \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;\frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}\\ \end{array} \]
      10. Add Preprocessing
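      As a quick sanity check, independent of the report itself, Alternative 7 can be transcribed into plain Python and compared with the original program. This is only a sketch: Python's `math` module has no fused multiply-add before 3.13, so the `fma` calls are approximated with separate multiplies and adds, which rounds slightly differently than the C version.

      ```python
      import math

      def original(w, l):
          # initial program: e^-w * l^(e^w)
          return math.exp(-w) * math.pow(l, math.exp(w))

      def alternative7(w, l):
          # branch structure from the C code above; plain ops stand in for fma
          if l <= 0.41:
              return 1.0 * math.pow(l, 1.0 + w)
          # cubic Taylor polynomial of e^-w around w = 0
          p = ((-0.16666666666666666 * w + 0.5) * w - 1.0) * w + 1.0
          return p / math.pow(l, -1.0)

      # both branches agree with the original to well under 1% for small w
      for w, l in [(0.001, 0.2), (0.001, 2.0)]:
          assert abs(alternative7(w, l) - original(w, l)) < 1e-2 * original(w, l)
      ```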

      Alternative 8: 77.1% accurate, 2.3× speedup

      \[\begin{array}{l} \\ \frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}} \end{array} \]
      (FPCore (w l)
       :precision binary64
       (/ (fma (- (* (fma -0.16666666666666666 w 0.5) w) 1.0) w 1.0) (pow l -1.0)))
      double code(double w, double l) {
      	return fma(((fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / pow(l, -1.0);
      }
      
      function code(w, l)
      	return Float64(fma(Float64(Float64(fma(-0.16666666666666666, w, 0.5) * w) - 1.0), w, 1.0) / (l ^ -1.0))
      end
      
      code[w_, l_] := N[(N[(N[(N[(N[(-0.16666666666666666 * w + 0.5), $MachinePrecision] * w), $MachinePrecision] - 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]
      
      \begin{array}{l}
      
      \\
      \frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}}
      \end{array}
      
      Derivation
      1. Initial program 99.0%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. lift-pow.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
        2. pow-to-exp N/A

          \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
        3. sinh-+-cosh-rev N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
        4. flip-+ N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
        5. sinh-cosh N/A

          \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
        6. sinh---cosh-rev N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        7. lower-/.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        8. exp-neg N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
        9. pow-to-exp N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        10. lift-pow.f64 N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        11. lower-/.f64 98.9%

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      4. Applied rewrites 98.9%

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      5. Step-by-step derivation
        1. lift-*.f64 N/A

          \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        2. lift-/.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        3. associate-*r/ N/A

          \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. *-rgt-identity N/A

          \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
        5. lower-/.f64 98.9%

          \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        6. lift-/.f64 N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        7. lift-pow.f64 N/A

          \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        8. pow-flip N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        9. lower-pow.f64 N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        10. lower-neg.f64 98.9%

          \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
      6. Applied rewrites 98.9%

        \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
      7. Taylor expanded in w around 0

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      8. Step-by-step derivation
        1. lower-/.f64 97.3%

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      9. Applied rewrites 97.3%

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      10. Taylor expanded in w around 0

        \[\leadsto \frac{\color{blue}{1 + w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right)}}{\frac{1}{\ell}} \]
      11. Step-by-step derivation
        1. +-commutative N/A

          \[\leadsto \frac{\color{blue}{w \cdot \left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) + 1}}{\frac{1}{\ell}} \]
        2. *-commutative N/A

          \[\leadsto \frac{\color{blue}{\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1\right) \cdot w} + 1}{\frac{1}{\ell}} \]
        3. lower-fma.f64 N/A

          \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1, w, 1\right)}}{\frac{1}{\ell}} \]
        4. lower--.f64 N/A

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{w \cdot \left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) - 1}, w, 1\right)}{\frac{1}{\ell}} \]
        5. *-commutative N/A

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
        6. lower-*.f64 N/A

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{1}{2} + \frac{-1}{6} \cdot w\right) \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
        7. +-commutative N/A

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\left(\frac{-1}{6} \cdot w + \frac{1}{2}\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
        8. lower-fma.f64 77.5%

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right)} \cdot w - 1, w, 1\right)}{\frac{1}{\ell}} \]
      12. Applied rewrites 77.5%

        \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}}{\frac{1}{\ell}} \]
      13. Final simplification 77.5%

        \[\leadsto \frac{\mathsf{fma}\left(\mathsf{fma}\left(-0.16666666666666666, w, 0.5\right) \cdot w - 1, w, 1\right)}{{\ell}^{-1}} \]
      14. Add Preprocessing
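      A hedged Python transcription of Alternative 8 (again substituting plain multiplies for `fma`) shows why its overall accuracy drops to 77.1%: with the small-`l` branch removed, the polynomial-times-`l` form is only close when `w` is small, because the exponent correction `e^w - 1` on `l` is ignored entirely.

      ```python
      import math

      def original(w, l):
          # initial program: e^-w * l^(e^w)
          return math.exp(-w) * math.pow(l, math.exp(w))

      def alternative8(w, l):
          # fma(fma(-1/6, w, 1/2) * w - 1, w, 1) / l^-1, without a real fma
          p = ((-0.16666666666666666 * w + 0.5) * w - 1.0) * w + 1.0
          return p / math.pow(l, -1.0)

      # close for small w and moderate l ...
      assert abs(alternative8(0.001, 2.0) - original(0.001, 2.0)) < 1e-2 * original(0.001, 2.0)
      # ... but far off once w is large and l is tiny
      print(original(1.0, 1e-6), alternative8(1.0, 1e-6))
      ```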

      Alternative 9: 73.4% accurate, 2.4× speedup

      \[\begin{array}{l} \\ \frac{\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right)}{{\ell}^{-1}} \end{array} \]
      (FPCore (w l)
       :precision binary64
       (/ (fma (- (* 0.5 w) 1.0) w 1.0) (pow l -1.0)))
      double code(double w, double l) {
      	return fma(((0.5 * w) - 1.0), w, 1.0) / pow(l, -1.0);
      }
      
      function code(w, l)
      	return Float64(fma(Float64(Float64(0.5 * w) - 1.0), w, 1.0) / (l ^ -1.0))
      end
      
      code[w_, l_] := N[(N[(N[(N[(0.5 * w), $MachinePrecision] - 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]
      
      \begin{array}{l}
      
      \\
      \frac{\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right)}{{\ell}^{-1}}
      \end{array}
      
      Derivation
      1. Initial program 99.0%

        \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
      2. Add Preprocessing
      3. Step-by-step derivation
        1. lift-pow.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
        2. pow-to-exp N/A

          \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
        3. sinh-+-cosh-rev N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
        4. flip-+ N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
        5. sinh-cosh N/A

          \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
        6. sinh---cosh-rev N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        7. lower-/.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
        8. exp-neg N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
        9. pow-to-exp N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        10. lift-pow.f64 N/A

          \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        11. lower-/.f64 98.9%

          \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      4. Applied rewrites 98.9%

        \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
      5. Step-by-step derivation
        1. lift-*.f64 N/A

          \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        2. lift-/.f64 N/A

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        3. associate-*r/ N/A

          \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. *-rgt-identity N/A

          \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
        5. lower-/.f64 98.9%

          \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        6. lift-/.f64 N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        7. lift-pow.f64 N/A

          \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
        8. pow-flip N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        9. lower-pow.f64 N/A

          \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
        10. lower-neg.f64 98.9%

          \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
      6. Applied rewrites 98.9%

        \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
      7. Taylor expanded in w around 0

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      8. Step-by-step derivation
        1. lower-/.f64 97.3%

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      9. Applied rewrites 97.3%

        \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
      10. Taylor expanded in w around 0

        \[\leadsto \frac{\color{blue}{1 + w \cdot \left(\frac{1}{2} \cdot w - 1\right)}}{\frac{1}{\ell}} \]
      11. Step-by-step derivation
        1. +-commutative N/A

          \[\leadsto \frac{\color{blue}{w \cdot \left(\frac{1}{2} \cdot w - 1\right) + 1}}{\frac{1}{\ell}} \]
        2. *-commutative N/A

          \[\leadsto \frac{\color{blue}{\left(\frac{1}{2} \cdot w - 1\right) \cdot w} + 1}{\frac{1}{\ell}} \]
        3. lower-fma.f64 N/A

          \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(\frac{1}{2} \cdot w - 1, w, 1\right)}}{\frac{1}{\ell}} \]
        4. lower--.f64 N/A

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{\frac{1}{2} \cdot w - 1}, w, 1\right)}{\frac{1}{\ell}} \]
        5. lower-*.f64 73.9%

          \[\leadsto \frac{\mathsf{fma}\left(\color{blue}{0.5 \cdot w} - 1, w, 1\right)}{\frac{1}{\ell}} \]
      12. Applied rewrites 73.9%

        \[\leadsto \frac{\color{blue}{\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right)}}{\frac{1}{\ell}} \]
      13. Final simplification 73.9%

        \[\leadsto \frac{\mathsf{fma}\left(0.5 \cdot w - 1, w, 1\right)}{{\ell}^{-1}} \]
      14. Add Preprocessing
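      Alternative 9 is the same shape with the Taylor polynomial truncated one term earlier. A quick Python sketch (plain operations instead of `fma`, so rounding differs slightly from the C version) confirms it still tracks the original for small w:

      ```python
      import math

      def original(w, l):
          # initial program: e^-w * l^(e^w)
          return math.exp(-w) * math.pow(l, math.exp(w))

      def alternative9(w, l):
          # fma(0.5*w - 1, w, 1) / l^-1, i.e. (1 - w + w^2/2) * l
          return ((0.5 * w - 1.0) * w + 1.0) / math.pow(l, -1.0)

      assert abs(alternative9(0.001, 2.0) - original(0.001, 2.0)) < 1e-2 * original(0.001, 2.0)
      ```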

      Alternative 10: 98.3% accurate, 2.5× speedup

      \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;\ell \leq 8.5 \cdot 10^{-10}:\\ \;\;\;\;\left(1 - w\right) \cdot {\ell}^{\left(1 + w\right)}\\ \mathbf{else}:\\ \;\;\;\;1 \cdot {\ell}^{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, w, 1\right), w, 1\right)\right)}\\ \end{array} \end{array} \]
      (FPCore (w l)
       :precision binary64
       (if (<= l 8.5e-10)
         (* (- 1.0 w) (pow l (+ 1.0 w)))
         (* 1.0 (pow l (fma (fma 0.5 w 1.0) w 1.0)))))
      double code(double w, double l) {
      	double tmp;
      	if (l <= 8.5e-10) {
      		tmp = (1.0 - w) * pow(l, (1.0 + w));
      	} else {
      		tmp = 1.0 * pow(l, fma(fma(0.5, w, 1.0), w, 1.0));
      	}
      	return tmp;
      }
      
      function code(w, l)
      	tmp = 0.0
      	if (l <= 8.5e-10)
      		tmp = Float64(Float64(1.0 - w) * (l ^ Float64(1.0 + w)));
      	else
      		tmp = Float64(1.0 * (l ^ fma(fma(0.5, w, 1.0), w, 1.0)));
      	end
      	return tmp
      end
      
      code[w_, l_] := If[LessEqual[l, 8.5e-10], N[(N[(1.0 - w), $MachinePrecision] * N[Power[l, N[(1.0 + w), $MachinePrecision]], $MachinePrecision]), $MachinePrecision], N[(1.0 * N[Power[l, N[(N[(0.5 * w + 1.0), $MachinePrecision] * w + 1.0), $MachinePrecision]], $MachinePrecision]), $MachinePrecision]]
      
      \begin{array}{l}
      
      \\
      \begin{array}{l}
      \mathbf{if}\;\ell \leq 8.5 \cdot 10^{-10}:\\
      \;\;\;\;\left(1 - w\right) \cdot {\ell}^{\left(1 + w\right)}\\
      
      \mathbf{else}:\\
      \;\;\;\;1 \cdot {\ell}^{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, w, 1\right), w, 1\right)\right)}\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 2 regimes
      2. if l < 8.4999999999999996e-10

        1. Initial program 99.8%

          \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
        2. Add Preprocessing
        3. Taylor expanded in w around 0

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        4. Step-by-step derivation
          1. lower-+.f64 98.5%

            \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        5. Applied rewrites 98.5%

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        6. Taylor expanded in w around 0

          \[\leadsto \color{blue}{\left(1 + -1 \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        7. Step-by-step derivation
          1. fp-cancel-sign-sub-inv N/A

            \[\leadsto \color{blue}{\left(1 - \left(\mathsf{neg}\left(-1\right)\right) \cdot w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
          2. metadata-eval N/A

            \[\leadsto \left(1 - \color{blue}{1} \cdot w\right) \cdot {\ell}^{\left(1 + w\right)} \]
          3. *-lft-identity N/A

            \[\leadsto \left(1 - \color{blue}{w}\right) \cdot {\ell}^{\left(1 + w\right)} \]
          4. lower--.f64 98.5%

            \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]
        8. Applied rewrites 98.5%

          \[\leadsto \color{blue}{\left(1 - w\right)} \cdot {\ell}^{\left(1 + w\right)} \]

        if 8.4999999999999996e-10 < l

        1. Initial program 98.0%

          \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
        2. Add Preprocessing
        3. Taylor expanded in w around 0

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        4. Step-by-step derivation
          1. lower-+.f64 60.0%

            \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        5. Applied rewrites 60.0%

          \[\leadsto e^{-w} \cdot {\ell}^{\color{blue}{\left(1 + w\right)}} \]
        6. Taylor expanded in w around 0

          \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]
        7. Step-by-step derivation
          1. Applied rewrites 62.2%

            \[\leadsto \color{blue}{1} \cdot {\ell}^{\left(1 + w\right)} \]
          2. Taylor expanded in w around 0

            \[\leadsto 1 \cdot {\ell}^{\color{blue}{\left(1 + w \cdot \left(1 + \frac{1}{2} \cdot w\right)\right)}} \]
          3. Step-by-step derivation
            1. +-commutative N/A

              \[\leadsto 1 \cdot {\ell}^{\color{blue}{\left(w \cdot \left(1 + \frac{1}{2} \cdot w\right) + 1\right)}} \]
            2. *-commutative N/A

              \[\leadsto 1 \cdot {\ell}^{\left(\color{blue}{\left(1 + \frac{1}{2} \cdot w\right) \cdot w} + 1\right)} \]
            3. lower-fma.f64 N/A

              \[\leadsto 1 \cdot {\ell}^{\color{blue}{\left(\mathsf{fma}\left(1 + \frac{1}{2} \cdot w, w, 1\right)\right)}} \]
            4. +-commutative N/A

              \[\leadsto 1 \cdot {\ell}^{\left(\mathsf{fma}\left(\color{blue}{\frac{1}{2} \cdot w + 1}, w, 1\right)\right)} \]
            5. lower-fma.f64 99.2%

              \[\leadsto 1 \cdot {\ell}^{\left(\mathsf{fma}\left(\color{blue}{\mathsf{fma}\left(0.5, w, 1\right)}, w, 1\right)\right)} \]
          4. Applied rewrites 99.2%

            \[\leadsto 1 \cdot {\ell}^{\color{blue}{\left(\mathsf{fma}\left(\mathsf{fma}\left(0.5, w, 1\right), w, 1\right)\right)}} \]
        8. Recombined 2 regimes into one program.
        9. Add Preprocessing
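        Alternative 10 instead moves the Taylor expansion into the exponent. A rough Python check of both regimes (with the `fma` chain written as plain operations, so it is only a sketch of the C version):

        ```python
        import math

        def original(w, l):
            # initial program: e^-w * l^(e^w)
            return math.exp(-w) * math.pow(l, math.exp(w))

        def alternative10(w, l):
            if l <= 8.5e-10:
                return (1.0 - w) * math.pow(l, 1.0 + w)
            # l ^ (1 + w + 0.5*w^2): quadratic Taylor polynomial of e^w in the exponent
            return math.pow(l, (0.5 * w + 1.0) * w + 1.0)

        # exercise both the small-l branch and the general branch
        for w, l in [(0.001, 1e-12), (0.001, 2.0)]:
            assert abs(alternative10(w, l) - original(w, l)) < 1e-2 * original(w, l)
        ```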

        Alternative 11: 63.6% accurate, 2.7× speedup

        \[\begin{array}{l} \\ \frac{1 - w}{{\ell}^{-1}} \end{array} \]
        (FPCore (w l) :precision binary64 (/ (- 1.0 w) (pow l -1.0)))
        double code(double w, double l) {
        	return (1.0 - w) / pow(l, -1.0);
        }
        
        real(8) function code(w, l)
            real(8), intent (in) :: w
            real(8), intent (in) :: l
            code = (1.0d0 - w) / (l ** (-1.0d0))
        end function
        
        public static double code(double w, double l) {
        	return (1.0 - w) / Math.pow(l, -1.0);
        }
        
        def code(w, l):
        	return (1.0 - w) / math.pow(l, -1.0)
        
        function code(w, l)
        	return Float64(Float64(1.0 - w) / (l ^ -1.0))
        end
        
        function tmp = code(w, l)
        	tmp = (1.0 - w) / (l ^ -1.0);
        end
        
        code[w_, l_] := N[(N[(1.0 - w), $MachinePrecision] / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        \frac{1 - w}{{\ell}^{-1}}
        \end{array}
        
        Derivation
        1. Initial program 99.0%

          \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
          2. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
          3. sinh-+-cosh-rev N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
          4. flip-+ N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
          5. sinh-cosh N/A

            \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
          6. sinh---cosh-rev N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          7. lower-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          8. exp-neg N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
          9. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          10. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          11. lower-/.f64 98.9%

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. Applied rewrites 98.9%

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        5. Step-by-step derivation
          1. lift-*.f64 N/A

            \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          2. lift-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          3. associate-*r/ N/A

            \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          4. *-rgt-identity N/A

            \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
          5. lower-/.f64 98.9%

            \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          6. lift-/.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          7. lift-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          8. pow-flip N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          9. lower-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          10. lower-neg.f64 98.9%

            \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
        6. Applied rewrites 98.9%

          \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
        7. Taylor expanded in w around 0

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        8. Step-by-step derivation
          1. lower-/.f64 97.3%

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        9. Applied rewrites 97.3%

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        10. Taylor expanded in w around 0

          \[\leadsto \frac{\color{blue}{1 + -1 \cdot w}}{\frac{1}{\ell}} \]
        11. Step-by-step derivation
          1. fp-cancel-sign-sub-inv N/A

            \[\leadsto \frac{\color{blue}{1 - \left(\mathsf{neg}\left(-1\right)\right) \cdot w}}{\frac{1}{\ell}} \]
          2. metadata-eval N/A

            \[\leadsto \frac{1 - \color{blue}{1} \cdot w}{\frac{1}{\ell}} \]
          3. *-lft-identity N/A

            \[\leadsto \frac{1 - \color{blue}{w}}{\frac{1}{\ell}} \]
          4. lower--.f64 62.9%

            \[\leadsto \frac{\color{blue}{1 - w}}{\frac{1}{\ell}} \]
        12. Applied rewrites 62.9%

          \[\leadsto \frac{\color{blue}{1 - w}}{\frac{1}{\ell}} \]
        13. Final simplification 62.9%

          \[\leadsto \frac{1 - w}{{\ell}^{-1}} \]
        14. Add Preprocessing
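        Alternative 11 keeps only the linear term, so the whole expression collapses to (1 - w) * l. A short Python check (a sketch, not part of the report) confirms the agreement for small w:

        ```python
        import math

        def original(w, l):
            # initial program: e^-w * l^(e^w)
            return math.exp(-w) * math.pow(l, math.exp(w))

        def alternative11(w, l):
            # (1 - w) / l^-1, i.e. the first-order approximation (1 - w) * l
            return (1.0 - w) / math.pow(l, -1.0)

        assert abs(alternative11(0.001, 2.0) - original(0.001, 2.0)) < 1e-2 * original(0.001, 2.0)
        ```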

        Alternative 12: 57.0% accurate, 2.7× speedup

        \[\begin{array}{l} \\ \frac{1}{{\ell}^{-1}} \end{array} \]
        (FPCore (w l) :precision binary64 (/ 1.0 (pow l -1.0)))
        double code(double w, double l) {
        	return 1.0 / pow(l, -1.0);
        }
        
        real(8) function code(w, l)
            real(8), intent (in) :: w
            real(8), intent (in) :: l
            code = 1.0d0 / (l ** (-1.0d0))
        end function
        
        public static double code(double w, double l) {
        	return 1.0 / Math.pow(l, -1.0);
        }
        
        def code(w, l):
        	return 1.0 / math.pow(l, -1.0)
        
        function code(w, l)
        	return Float64(1.0 / (l ^ -1.0))
        end
        
        function tmp = code(w, l)
        	tmp = 1.0 / (l ^ -1.0);
        end
        
        code[w_, l_] := N[(1.0 / N[Power[l, -1.0], $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        \frac{1}{{\ell}^{-1}}
        \end{array}
        
        Derivation
        1. Initial program 99.0%

          \[e^{-w} \cdot {\ell}^{\left(e^{w}\right)} \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{{\ell}^{\left(e^{w}\right)}} \]
          2. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \color{blue}{e^{\log \ell \cdot e^{w}}} \]
          3. sinh-+-cosh-rev N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\left(\cosh \left(\log \ell \cdot e^{w}\right) + \sinh \left(\log \ell \cdot e^{w}\right)\right)} \]
          4. flip-+ N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{\cosh \left(\log \ell \cdot e^{w}\right) \cdot \cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right) \cdot \sinh \left(\log \ell \cdot e^{w}\right)}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)}} \]
          5. sinh-cosh N/A

            \[\leadsto e^{-w} \cdot \frac{\color{blue}{1}}{\cosh \left(\log \ell \cdot e^{w}\right) - \sinh \left(\log \ell \cdot e^{w}\right)} \]
          6. sinh---cosh-rev N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          7. lower-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{e^{\mathsf{neg}\left(\log \ell \cdot e^{w}\right)}}} \]
          8. exp-neg N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{e^{\log \ell \cdot e^{w}}}}} \]
          9. pow-to-exp N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          10. lift-pow.f64 N/A

            \[\leadsto e^{-w} \cdot \frac{1}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          11. lower-/.f64 98.9%

            \[\leadsto e^{-w} \cdot \frac{1}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
        4. Applied rewrites 98.9%

          \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
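Steps 2 through 6 above pass through the identity l^x = e^(x · ln l) (pow-to-exp) and hyperbolic identities such as cosh(x) + sinh(x) = e^x. A quick numerical spot check of both, assuming l > 0 (a sketch, not part of the Herbie report):

```python
import math

# pow-to-exp: l^x == e^(x * ln l), and cosh(x) + sinh(x) == e^x
for x, l in [(0.25, 2.0), (1.5, 0.5), (-0.75, 10.0)]:
    assert math.isclose(math.pow(l, x), math.exp(x * math.log(l)), rel_tol=1e-12)
    assert math.isclose(math.cosh(x) + math.sinh(x), math.exp(x), rel_tol=1e-12)
```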
        5. Step-by-step derivation
          1. lift-*.f64 N/A

            \[\leadsto \color{blue}{e^{-w} \cdot \frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          2. lift-/.f64 N/A

            \[\leadsto e^{-w} \cdot \color{blue}{\frac{1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          3. associate-*r/ N/A

            \[\leadsto \color{blue}{\frac{e^{-w} \cdot 1}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          4. *-rgt-identity N/A

            \[\leadsto \frac{\color{blue}{e^{-w}}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}} \]
          5. lower-/.f64 98.9%

            \[\leadsto \color{blue}{\frac{e^{-w}}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          6. lift-/.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{{\ell}^{\left(e^{w}\right)}}}} \]
          7. lift-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\frac{1}{\color{blue}{{\ell}^{\left(e^{w}\right)}}}} \]
          8. pow-flip N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          9. lower-pow.f64 N/A

            \[\leadsto \frac{e^{-w}}{\color{blue}{{\ell}^{\left(\mathsf{neg}\left(e^{w}\right)\right)}}} \]
          10. lower-neg.f64 98.9%

            \[\leadsto \frac{e^{-w}}{{\ell}^{\color{blue}{\left(-e^{w}\right)}}} \]
        6. Applied rewrites 98.9%

          \[\leadsto \color{blue}{\frac{e^{-w}}{{\ell}^{\left(-e^{w}\right)}}} \]
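The rewrites above rest on the pow-flip identity 1 / l^x = l^(-x), so the rewritten form should agree with the initial program to within rounding. A minimal Python spot check, assuming l > 0 (a sketch, not part of the Herbie report):

```python
import math

def rewritten(w, l):
    # e^(-w) / l^(-e^w), the form reached by the pow-flip rewrite above
    return math.exp(-w) / math.pow(l, -math.exp(w))

# Compare against the original e^(-w) * l^(e^w)
for w, l in [(0.5, 2.0), (-1.0, 3.0), (0.1, 0.25)]:
    orig = math.exp(-w) * math.pow(l, math.exp(w))
    assert math.isclose(rewritten(w, l), orig, rel_tol=1e-12)
```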
        7. Taylor expanded in w around 0

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        8. Step-by-step derivation
          1. lower-/.f64 97.3%

            \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        9. Applied rewrites 97.3%

          \[\leadsto \frac{e^{-w}}{\color{blue}{\frac{1}{\ell}}} \]
        10. Taylor expanded in w around 0

          \[\leadsto \frac{\color{blue}{1}}{\frac{1}{\ell}} \]
        11. Applied rewrites 55.9%

          \[\leadsto \frac{\color{blue}{1}}{\frac{1}{\ell}} \]
        12. Final simplification 55.9%

          \[\leadsto \frac{1}{{\ell}^{-1}} \]
        13. Add Preprocessing

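To see why Alternative 12 reaches only 57.0% accuracy, note that the second Taylor expansion drops the dependence on w entirely: in exact arithmetic 1 / ℓ^(-1) is just ℓ. A minimal Python sketch of the tradeoff (not part of the Herbie report; function names are illustrative):

```python
import math

def alt12(w, l):
    # Alternative 12: 1 / l^(-1); in exact arithmetic this is just l,
    # so the result no longer depends on w at all
    return 1.0 / math.pow(l, -1.0)

def original(w, l):
    # e^(-w) * l^(e^w), the initial program
    return math.exp(-w) * math.pow(l, math.exp(w))

# Near w = 0 the two agree closely...
assert math.isclose(alt12(1e-8, 3.0), original(1e-8, 3.0), rel_tol=1e-6)

# ...but away from 0 the dropped w dependence costs accuracy
assert abs(alt12(2.0, 3.0) - original(2.0, 3.0)) > 100.0
```

The flat expression is what buys the reported 2.7× speedup: no calls to exp remain.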
          Reproduce

          herbie shell --seed 2024337 
          (FPCore (w l)
            :name "exp-w (used to crash)"
            :precision binary64
            (* (exp (- w)) (pow l (exp w))))