ENA, Section 1.4, Exercise 4b, n=2

Percentage Accurate: 75.2% → 100.0%
Time: 5.1s
Alternatives: 4
Speedup: 29.6×

Specification

\[\left(-1000000000 \leq x \land x \leq 1000000000\right) \land \left(-1 \leq \varepsilon \land \varepsilon \leq 1\right)\]
\[{\left(x + \varepsilon\right)}^{2} - {x}^{2}\]
(FPCore (x eps) :precision binary64 (- (pow (+ x eps) 2.0) (pow x 2.0)))
double code(double x, double eps) {
	return pow((x + eps), 2.0) - pow(x, 2.0);
}
real(8) function code(x, eps)
    real(8), intent (in) :: x
    real(8), intent (in) :: eps
    code = ((x + eps) ** 2.0d0) - (x ** 2.0d0)
end function
public static double code(double x, double eps) {
	return Math.pow((x + eps), 2.0) - Math.pow(x, 2.0);
}
def code(x, eps):
	return math.pow((x + eps), 2.0) - math.pow(x, 2.0)
function code(x, eps)
	return Float64((Float64(x + eps) ^ 2.0) - (x ^ 2.0))
end
function tmp = code(x, eps)
	tmp = ((x + eps) ^ 2.0) - (x ^ 2.0);
end
code[x_, eps_] := N[(N[Power[N[(x + eps), $MachinePrecision], 2.0], $MachinePrecision] - N[Power[x, 2.0], $MachinePrecision]), $MachinePrecision]
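The subtraction above cancels catastrophically when |x| is large and |eps| is small: x + eps rounds to x in binary64, so the two squares agree exactly. A minimal Python sketch (inputs chosen for illustration):

```python
import math

def naive(x, eps):
    # Direct transcription of the original program.
    return math.pow(x + eps, 2.0) - math.pow(x, 2.0)

# eps = 1e-9 is far below half an ulp of x = 1e9, so x + eps rounds
# to x and the two squares cancel to exactly 0.0, even though the
# true value is 2*x*eps + eps**2, about 2.0.
x, eps = 1e9, 1e-9
print(naive(x, eps))             # 0.0
print(2 * x * eps + eps * eps)   # roughly 2.0
```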

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs Input Value

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is chosen in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; each can be toggled with the buttons below the plot. The line shows the average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 4 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 75.2% accurate, 1.0× speedup

\[{\left(x + \varepsilon\right)}^{2} - {x}^{2}\]
(FPCore (x eps) :precision binary64 (- (pow (+ x eps) 2.0) (pow x 2.0)))
double code(double x, double eps) {
	return pow((x + eps), 2.0) - pow(x, 2.0);
}
real(8) function code(x, eps)
    real(8), intent (in) :: x
    real(8), intent (in) :: eps
    code = ((x + eps) ** 2.0d0) - (x ** 2.0d0)
end function
public static double code(double x, double eps) {
	return Math.pow((x + eps), 2.0) - Math.pow(x, 2.0);
}
def code(x, eps):
	return math.pow((x + eps), 2.0) - math.pow(x, 2.0)
function code(x, eps)
	return Float64((Float64(x + eps) ^ 2.0) - (x ^ 2.0))
end
function tmp = code(x, eps)
	tmp = ((x + eps) ^ 2.0) - (x ^ 2.0);
end
code[x_, eps_] := N[(N[Power[N[(x + eps), $MachinePrecision], 2.0], $MachinePrecision] - N[Power[x, 2.0], $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 1.9× speedup

\[\mathsf{fma}\left(\varepsilon \cdot 2, x, \varepsilon \cdot \varepsilon\right)\]
(FPCore (x eps) :precision binary64 (fma (* eps 2.0) x (* eps eps)))
double code(double x, double eps) {
	return fma((eps * 2.0), x, (eps * eps));
}
function code(x, eps)
	return fma(Float64(eps * 2.0), x, Float64(eps * eps))
end
code[x_, eps_] := N[(N[(eps * 2.0), $MachinePrecision] * x + N[(eps * eps), $MachinePrecision]), $MachinePrecision]
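In Python, a fused multiply-add is only available as math.fma on Python 3.13+. The sketch below (not part of the report) falls back to the unfused form on older versions:

```python
import math

# math.fma exists only on Python 3.13+; older versions fall back to
# the unfused (and slightly less accurate) a*b + c.
fma = getattr(math, "fma", lambda a, b, c: a * b + c)

def alt1(x, eps):
    # Alternative 1: fma(eps*2, x, eps*eps) = 2*x*eps + eps**2.
    return fma(eps * 2.0, x, eps * eps)

# Where the naive form cancels to 0.0, this recovers the answer:
print(alt1(1e9, 1e-9))  # close to the true value, about 2.0
```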
Derivation
  1. Initial program 72.3%

    \[{\left(x + \varepsilon\right)}^{2} - {x}^{2} \]
  2. Step-by-step derivation
    1. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - {\color{blue}{x}}^{2} \]
    2. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - x \cdot \color{blue}{x} \]
    3. difference-of-squaresN/A

      \[\leadsto \left(\left(x + \varepsilon\right) + x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) - x\right)} \]
    4. *-commutativeN/A

      \[\leadsto \left(\left(x + \varepsilon\right) - x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) + x\right)} \]
    5. +-commutativeN/A

      \[\leadsto \left(\left(\varepsilon + x\right) - x\right) \cdot \left(\left(\color{blue}{x} + \varepsilon\right) + x\right) \]
    6. associate--l+N/A

      \[\leadsto \left(\varepsilon + \left(x - x\right)\right) \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    7. +-inversesN/A

      \[\leadsto \left(\varepsilon + 0\right) \cdot \left(\left(x + \color{blue}{\varepsilon}\right) + x\right) \]
    8. +-rgt-identityN/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    9. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(\left(x + \varepsilon\right) + x\right)}\right) \]
    10. +-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon + x\right) + x\right)\right) \]
    11. associate-+l+N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon + \color{blue}{\left(x + x\right)}\right)\right) \]
    12. --rgt-identityN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon - 0\right) + \left(\color{blue}{x} + x\right)\right)\right) \]
    13. associate-+l-N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \color{blue}{\left(0 - \left(x + x\right)\right)}\right)\right) \]
    14. neg-sub0N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \left(\mathsf{neg}\left(\left(x + x\right)\right)\right)\right)\right) \]
    15. --lowering--.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \color{blue}{\left(\mathsf{neg}\left(\left(x + x\right)\right)\right)}\right)\right) \]
    16. count-2N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(2 \cdot x\right)\right)\right)\right) \]
    17. *-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(x \cdot 2\right)\right)\right)\right) \]
    18. distribute-rgt-neg-inN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(x \cdot \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    19. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    20. metadata-eval100.0%

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, -2\right)\right)\right) \]
  3. Simplified100.0%

    \[\leadsto \color{blue}{\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. sub-negN/A

      \[\leadsto \varepsilon \cdot \left(\varepsilon + \color{blue}{\left(\mathsf{neg}\left(x \cdot -2\right)\right)}\right) \]
    2. distribute-rgt-inN/A

      \[\leadsto \varepsilon \cdot \varepsilon + \color{blue}{\left(\mathsf{neg}\left(x \cdot -2\right)\right) \cdot \varepsilon} \]
    3. distribute-lft-neg-outN/A

      \[\leadsto \varepsilon \cdot \varepsilon + \left(\mathsf{neg}\left(\left(x \cdot -2\right) \cdot \varepsilon\right)\right) \]
    4. +-lowering-+.f64N/A

      \[\leadsto \mathsf{+.f64}\left(\left(\varepsilon \cdot \varepsilon\right), \color{blue}{\left(\mathsf{neg}\left(\left(x \cdot -2\right) \cdot \varepsilon\right)\right)}\right) \]
    5. *-lowering-*.f64N/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(\mathsf{neg}\left(\color{blue}{\left(x \cdot -2\right) \cdot \varepsilon}\right)\right)\right) \]
    6. distribute-lft-neg-outN/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(\left(\mathsf{neg}\left(x \cdot -2\right)\right) \cdot \color{blue}{\varepsilon}\right)\right) \]
    7. *-commutativeN/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(\left(\mathsf{neg}\left(-2 \cdot x\right)\right) \cdot \varepsilon\right)\right) \]
    8. distribute-lft-neg-inN/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(\left(\left(\mathsf{neg}\left(-2\right)\right) \cdot x\right) \cdot \varepsilon\right)\right) \]
    9. metadata-evalN/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(\left(2 \cdot x\right) \cdot \varepsilon\right)\right) \]
    10. associate-*l*N/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \left(2 \cdot \color{blue}{\left(x \cdot \varepsilon\right)}\right)\right) \]
    11. *-lowering-*.f64N/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \mathsf{*.f64}\left(2, \color{blue}{\left(x \cdot \varepsilon\right)}\right)\right) \]
    12. *-commutativeN/A

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \mathsf{*.f64}\left(2, \left(\varepsilon \cdot \color{blue}{x}\right)\right)\right) \]
    13. *-lowering-*.f64100.0%

      \[\leadsto \mathsf{+.f64}\left(\mathsf{*.f64}\left(\varepsilon, \varepsilon\right), \mathsf{*.f64}\left(2, \mathsf{*.f64}\left(\varepsilon, \color{blue}{x}\right)\right)\right) \]
  6. Applied egg-rr100.0%

    \[\leadsto \color{blue}{\varepsilon \cdot \varepsilon + 2 \cdot \left(\varepsilon \cdot x\right)} \]
  7. Step-by-step derivation
    1. +-commutativeN/A

      \[\leadsto 2 \cdot \left(\varepsilon \cdot x\right) + \color{blue}{\varepsilon \cdot \varepsilon} \]
    2. associate-*r*N/A

      \[\leadsto \left(2 \cdot \varepsilon\right) \cdot x + \color{blue}{\varepsilon} \cdot \varepsilon \]
    3. fma-defineN/A

      \[\leadsto \mathsf{fma}\left(2 \cdot \varepsilon, \color{blue}{x}, \varepsilon \cdot \varepsilon\right) \]
    4. fma-lowering-fma.f64N/A

      \[\leadsto \mathsf{fma.f64}\left(\left(2 \cdot \varepsilon\right), \color{blue}{x}, \left(\varepsilon \cdot \varepsilon\right)\right) \]
    5. *-commutativeN/A

      \[\leadsto \mathsf{fma.f64}\left(\left(\varepsilon \cdot 2\right), x, \left(\varepsilon \cdot \varepsilon\right)\right) \]
    6. *-lowering-*.f64N/A

      \[\leadsto \mathsf{fma.f64}\left(\mathsf{*.f64}\left(\varepsilon, 2\right), x, \left(\varepsilon \cdot \varepsilon\right)\right) \]
    7. *-lowering-*.f64100.0%

      \[\leadsto \mathsf{fma.f64}\left(\mathsf{*.f64}\left(\varepsilon, 2\right), x, \mathsf{*.f64}\left(\varepsilon, \varepsilon\right)\right) \]
  8. Applied egg-rr100.0%

    \[\leadsto \color{blue}{\mathsf{fma}\left(\varepsilon \cdot 2, x, \varepsilon \cdot \varepsilon\right)} \]
  9. Add Preprocessing
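The endpoint of this derivation can be checked in exact rational arithmetic: the identity (x+ε)² − x² = 2xε + ε² is algebraic, so the rewrite changes only rounding behavior. A sketch using Python's fractions module:

```python
from fractions import Fraction

def original(x, eps):
    return (x + eps) ** 2 - x ** 2

def rewritten(x, eps):
    # Mirrors the argument order of fma(eps*2, x, eps*eps).
    return eps * 2 * x + eps * eps

# Exact equality over the rationals: any difference between the two
# programs in binary64 comes purely from rounding.
for x, eps in [(Fraction(10**9), Fraction(1, 10**9)),
               (Fraction(-3, 7), Fraction(1, 3))]:
    assert original(x, eps) == rewritten(x, eps)
```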

Alternative 2: 89.9% accurate, 13.8× speedup

\[\begin{array}{l} t_0 := \varepsilon \cdot \left(2 \cdot x\right)\\ \mathbf{if}\;x \leq -3.4 \cdot 10^{-130}:\\ \;\;\;\;t_0\\ \mathbf{elif}\;x \leq 7.5 \cdot 10^{-120}:\\ \;\;\;\;\varepsilon \cdot \varepsilon\\ \mathbf{else}:\\ \;\;\;\;t_0\\ \end{array} \]
(FPCore (x eps)
 :precision binary64
 (let* ((t_0 (* eps (* 2.0 x))))
   (if (<= x -3.4e-130) t_0 (if (<= x 7.5e-120) (* eps eps) t_0))))
double code(double x, double eps) {
	double t_0 = eps * (2.0 * x);
	double tmp;
	if (x <= -3.4e-130) {
		tmp = t_0;
	} else if (x <= 7.5e-120) {
		tmp = eps * eps;
	} else {
		tmp = t_0;
	}
	return tmp;
}
real(8) function code(x, eps)
    real(8), intent (in) :: x
    real(8), intent (in) :: eps
    real(8) :: t_0
    real(8) :: tmp
    t_0 = eps * (2.0d0 * x)
    if (x <= (-3.4d-130)) then
        tmp = t_0
    else if (x <= 7.5d-120) then
        tmp = eps * eps
    else
        tmp = t_0
    end if
    code = tmp
end function
public static double code(double x, double eps) {
	double t_0 = eps * (2.0 * x);
	double tmp;
	if (x <= -3.4e-130) {
		tmp = t_0;
	} else if (x <= 7.5e-120) {
		tmp = eps * eps;
	} else {
		tmp = t_0;
	}
	return tmp;
}
def code(x, eps):
	t_0 = eps * (2.0 * x)
	tmp = 0
	if x <= -3.4e-130:
		tmp = t_0
	elif x <= 7.5e-120:
		tmp = eps * eps
	else:
		tmp = t_0
	return tmp
function code(x, eps)
	t_0 = Float64(eps * Float64(2.0 * x))
	tmp = 0.0
	if (x <= -3.4e-130)
		tmp = t_0;
	elseif (x <= 7.5e-120)
		tmp = Float64(eps * eps);
	else
		tmp = t_0;
	end
	return tmp
end
function tmp_2 = code(x, eps)
	t_0 = eps * (2.0 * x);
	tmp = 0.0;
	if (x <= -3.4e-130)
		tmp = t_0;
	elseif (x <= 7.5e-120)
		tmp = eps * eps;
	else
		tmp = t_0;
	end
	tmp_2 = tmp;
end
code[x_, eps_] := Block[{t$95$0 = N[(eps * N[(2.0 * x), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[x, -3.4e-130], t$95$0, If[LessEqual[x, 7.5e-120], N[(eps * eps), $MachinePrecision], t$95$0]]]
Derivation
  1. Split input into 2 regimes
  2. if x < -3.40000000000000005e-130 or 7.5000000000000004e-120 < x

    1. Initial program 39.9%

      \[{\left(x + \varepsilon\right)}^{2} - {x}^{2} \]
    2. Step-by-step derivation
      1. unpow2N/A

        \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - {\color{blue}{x}}^{2} \]
      2. unpow2N/A

        \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - x \cdot \color{blue}{x} \]
      3. difference-of-squaresN/A

        \[\leadsto \left(\left(x + \varepsilon\right) + x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) - x\right)} \]
      4. *-commutativeN/A

        \[\leadsto \left(\left(x + \varepsilon\right) - x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) + x\right)} \]
      5. +-commutativeN/A

        \[\leadsto \left(\left(\varepsilon + x\right) - x\right) \cdot \left(\left(\color{blue}{x} + \varepsilon\right) + x\right) \]
      6. associate--l+N/A

        \[\leadsto \left(\varepsilon + \left(x - x\right)\right) \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
      7. +-inversesN/A

        \[\leadsto \left(\varepsilon + 0\right) \cdot \left(\left(x + \color{blue}{\varepsilon}\right) + x\right) \]
      8. +-rgt-identityN/A

        \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
      9. *-lowering-*.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(\left(x + \varepsilon\right) + x\right)}\right) \]
      10. +-commutativeN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon + x\right) + x\right)\right) \]
      11. associate-+l+N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon + \color{blue}{\left(x + x\right)}\right)\right) \]
      12. --rgt-identityN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon - 0\right) + \left(\color{blue}{x} + x\right)\right)\right) \]
      13. associate-+l-N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \color{blue}{\left(0 - \left(x + x\right)\right)}\right)\right) \]
      14. neg-sub0N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \left(\mathsf{neg}\left(\left(x + x\right)\right)\right)\right)\right) \]
      15. --lowering--.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \color{blue}{\left(\mathsf{neg}\left(\left(x + x\right)\right)\right)}\right)\right) \]
      16. count-2N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(2 \cdot x\right)\right)\right)\right) \]
      17. *-commutativeN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(x \cdot 2\right)\right)\right)\right) \]
      18. distribute-rgt-neg-inN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(x \cdot \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
      19. *-lowering-*.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
      20. metadata-eval100.0%

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, -2\right)\right)\right) \]
    3. Simplified100.0%

      \[\leadsto \color{blue}{\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)} \]
    4. Add Preprocessing
    5. Taylor expanded in eps around 0

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(2 \cdot x\right)}\right) \]
    6. Step-by-step derivation
      1. *-lowering-*.f6483.9%

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{*.f64}\left(2, \color{blue}{x}\right)\right) \]
    7. Simplified83.9%

      \[\leadsto \varepsilon \cdot \color{blue}{\left(2 \cdot x\right)} \]

    if -3.40000000000000005e-130 < x < 7.5000000000000004e-120

    1. Initial program 98.8%

      \[{\left(x + \varepsilon\right)}^{2} - {x}^{2} \]
    2. Step-by-step derivation
      1. unpow2N/A

        \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - {\color{blue}{x}}^{2} \]
      2. unpow2N/A

        \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - x \cdot \color{blue}{x} \]
      3. difference-of-squaresN/A

        \[\leadsto \left(\left(x + \varepsilon\right) + x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) - x\right)} \]
      4. *-commutativeN/A

        \[\leadsto \left(\left(x + \varepsilon\right) - x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) + x\right)} \]
      5. +-commutativeN/A

        \[\leadsto \left(\left(\varepsilon + x\right) - x\right) \cdot \left(\left(\color{blue}{x} + \varepsilon\right) + x\right) \]
      6. associate--l+N/A

        \[\leadsto \left(\varepsilon + \left(x - x\right)\right) \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
      7. +-inversesN/A

        \[\leadsto \left(\varepsilon + 0\right) \cdot \left(\left(x + \color{blue}{\varepsilon}\right) + x\right) \]
      8. +-rgt-identityN/A

        \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
      9. *-lowering-*.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(\left(x + \varepsilon\right) + x\right)}\right) \]
      10. +-commutativeN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon + x\right) + x\right)\right) \]
      11. associate-+l+N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon + \color{blue}{\left(x + x\right)}\right)\right) \]
      12. --rgt-identityN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon - 0\right) + \left(\color{blue}{x} + x\right)\right)\right) \]
      13. associate-+l-N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \color{blue}{\left(0 - \left(x + x\right)\right)}\right)\right) \]
      14. neg-sub0N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \left(\mathsf{neg}\left(\left(x + x\right)\right)\right)\right)\right) \]
      15. --lowering--.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \color{blue}{\left(\mathsf{neg}\left(\left(x + x\right)\right)\right)}\right)\right) \]
      16. count-2N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(2 \cdot x\right)\right)\right)\right) \]
      17. *-commutativeN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(x \cdot 2\right)\right)\right)\right) \]
      18. distribute-rgt-neg-inN/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(x \cdot \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
      19. *-lowering-*.f64N/A

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
      20. metadata-eval100.0%

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, -2\right)\right)\right) \]
    3. Simplified100.0%

      \[\leadsto \color{blue}{\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)} \]
    4. Add Preprocessing
    5. Taylor expanded in eps around inf

      \[\leadsto \color{blue}{{\varepsilon}^{2}} \]
    6. Step-by-step derivation
      1. unpow2N/A

        \[\leadsto \varepsilon \cdot \color{blue}{\varepsilon} \]
      2. *-lowering-*.f6498.0%

        \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\varepsilon}\right) \]
    7. Simplified98.0%

      \[\leadsto \color{blue}{\varepsilon \cdot \varepsilon} \]
  3. Recombined 2 regimes into one program.
  4. Add Preprocessing
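The regime split can be exercised directly: in the middle band, |2xε| underflows to insignificance next to ε², so the ε·ε branch carries essentially all of the value. A Python sketch of the branch structure, with inputs chosen for illustration:

```python
def alt2(x, eps):
    # Transcription of Alternative 2's branch structure.
    t_0 = eps * (2.0 * x)
    if x <= -3.4e-130:
        return t_0
    elif x <= 7.5e-120:
        return eps * eps
    else:
        return t_0

# Middle regime: the 2*x*eps term (~1e-125) is negligible next to eps**2.
print(alt2(1e-125, 0.5))   # 0.25
# Outer regime: t_0 dominates and eps**2 is the negligible term.
print(alt2(1e9, 1e-9))     # roughly 2.0
```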

Alternative 3: 100.0% accurate, 29.6× speedup

\[\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)\]
(FPCore (x eps) :precision binary64 (* eps (- eps (* x -2.0))))
double code(double x, double eps) {
	return eps * (eps - (x * -2.0));
}
real(8) function code(x, eps)
    real(8), intent (in) :: x
    real(8), intent (in) :: eps
    code = eps * (eps - (x * (-2.0d0)))
end function
public static double code(double x, double eps) {
	return eps * (eps - (x * -2.0));
}
def code(x, eps):
	return eps * (eps - (x * -2.0))
function code(x, eps)
	return Float64(eps * Float64(eps - Float64(x * -2.0)))
end
function tmp = code(x, eps)
	tmp = eps * (eps - (x * -2.0));
end
code[x_, eps_] := N[(eps * N[(eps - N[(x * -2.0), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
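The factored form avoids the cancellation of the original program entirely: ε and x·(−2) are both exact, so each of its three operations rounds only once. A brief comparison (a sketch; inputs chosen for illustration):

```python
def naive(x, eps):
    return (x + eps) ** 2 - x ** 2

def alt3(x, eps):
    # Alternative 3: eps * (eps + 2*x), written with x * -2.0 as above.
    return eps * (eps - x * -2.0)

x, eps = 1e9, 1e-9
print(naive(x, eps))  # 0.0: x + eps rounds to x and the squares cancel
print(alt3(x, eps))   # close to the true value, about 2.0
```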
Derivation
  1. Initial program 72.3%

    \[{\left(x + \varepsilon\right)}^{2} - {x}^{2} \]
  2. Step-by-step derivation
    1. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - {\color{blue}{x}}^{2} \]
    2. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - x \cdot \color{blue}{x} \]
    3. difference-of-squaresN/A

      \[\leadsto \left(\left(x + \varepsilon\right) + x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) - x\right)} \]
    4. *-commutativeN/A

      \[\leadsto \left(\left(x + \varepsilon\right) - x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) + x\right)} \]
    5. +-commutativeN/A

      \[\leadsto \left(\left(\varepsilon + x\right) - x\right) \cdot \left(\left(\color{blue}{x} + \varepsilon\right) + x\right) \]
    6. associate--l+N/A

      \[\leadsto \left(\varepsilon + \left(x - x\right)\right) \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    7. +-inversesN/A

      \[\leadsto \left(\varepsilon + 0\right) \cdot \left(\left(x + \color{blue}{\varepsilon}\right) + x\right) \]
    8. +-rgt-identityN/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    9. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(\left(x + \varepsilon\right) + x\right)}\right) \]
    10. +-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon + x\right) + x\right)\right) \]
    11. associate-+l+N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon + \color{blue}{\left(x + x\right)}\right)\right) \]
    12. --rgt-identityN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon - 0\right) + \left(\color{blue}{x} + x\right)\right)\right) \]
    13. associate-+l-N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \color{blue}{\left(0 - \left(x + x\right)\right)}\right)\right) \]
    14. neg-sub0N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \left(\mathsf{neg}\left(\left(x + x\right)\right)\right)\right)\right) \]
    15. --lowering--.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \color{blue}{\left(\mathsf{neg}\left(\left(x + x\right)\right)\right)}\right)\right) \]
    16. count-2N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(2 \cdot x\right)\right)\right)\right) \]
    17. *-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(x \cdot 2\right)\right)\right)\right) \]
    18. distribute-rgt-neg-inN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(x \cdot \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    19. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    20. metadata-eval100.0%

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, -2\right)\right)\right) \]
  3. Simplified100.0%

    \[\leadsto \color{blue}{\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)} \]
  4. Add Preprocessing
  5. Add Preprocessing

Alternative 4: 72.9% accurate, 69.0× speedup

\[\varepsilon \cdot \varepsilon\]
(FPCore (x eps) :precision binary64 (* eps eps))
double code(double x, double eps) {
	return eps * eps;
}
real(8) function code(x, eps)
    real(8), intent (in) :: x
    real(8), intent (in) :: eps
    code = eps * eps
end function
public static double code(double x, double eps) {
	return eps * eps;
}
def code(x, eps):
	return eps * eps
function code(x, eps)
	return Float64(eps * eps)
end
function tmp = code(x, eps)
	tmp = eps * eps;
end
code[x_, eps_] := N[(eps * eps), $MachinePrecision]
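This alternative simply drops the 2xε term, which is why it is both the fastest and the least accurate: it is only a good approximation when |x| is small relative to |ε|. A quick illustration (inputs chosen for the sketch):

```python
def alt4(x, eps):
    # Keep only the eps**2 term of 2*x*eps + eps**2.
    return eps * eps

print(alt4(1e-20, 0.5))  # 0.25: here 2*x*eps ~ 1e-20 is negligible
print(alt4(1e9, 1e-9))   # about 1e-18, but the true value is about 2.0
```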
Derivation
  1. Initial program 72.3%

    \[{\left(x + \varepsilon\right)}^{2} - {x}^{2} \]
  2. Step-by-step derivation
    1. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - {\color{blue}{x}}^{2} \]
    2. unpow2N/A

      \[\leadsto \left(x + \varepsilon\right) \cdot \left(x + \varepsilon\right) - x \cdot \color{blue}{x} \]
    3. difference-of-squaresN/A

      \[\leadsto \left(\left(x + \varepsilon\right) + x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) - x\right)} \]
    4. *-commutativeN/A

      \[\leadsto \left(\left(x + \varepsilon\right) - x\right) \cdot \color{blue}{\left(\left(x + \varepsilon\right) + x\right)} \]
    5. +-commutativeN/A

      \[\leadsto \left(\left(\varepsilon + x\right) - x\right) \cdot \left(\left(\color{blue}{x} + \varepsilon\right) + x\right) \]
    6. associate--l+N/A

      \[\leadsto \left(\varepsilon + \left(x - x\right)\right) \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    7. +-inversesN/A

      \[\leadsto \left(\varepsilon + 0\right) \cdot \left(\left(x + \color{blue}{\varepsilon}\right) + x\right) \]
    8. +-rgt-identityN/A

      \[\leadsto \varepsilon \cdot \left(\color{blue}{\left(x + \varepsilon\right)} + x\right) \]
    9. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\left(\left(x + \varepsilon\right) + x\right)}\right) \]
    10. +-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon + x\right) + x\right)\right) \]
    11. associate-+l+N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon + \color{blue}{\left(x + x\right)}\right)\right) \]
    12. --rgt-identityN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\left(\varepsilon - 0\right) + \left(\color{blue}{x} + x\right)\right)\right) \]
    13. associate-+l-N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \color{blue}{\left(0 - \left(x + x\right)\right)}\right)\right) \]
    14. neg-sub0N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \left(\varepsilon - \left(\mathsf{neg}\left(\left(x + x\right)\right)\right)\right)\right) \]
    15. --lowering--.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \color{blue}{\left(\mathsf{neg}\left(\left(x + x\right)\right)\right)}\right)\right) \]
    16. count-2N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(2 \cdot x\right)\right)\right)\right) \]
    17. *-commutativeN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(\mathsf{neg}\left(x \cdot 2\right)\right)\right)\right) \]
    18. distribute-rgt-neg-inN/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \left(x \cdot \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    19. *-lowering-*.f64N/A

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, \color{blue}{\left(\mathsf{neg}\left(2\right)\right)}\right)\right)\right) \]
    20. metadata-eval100.0%

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \mathsf{-.f64}\left(\varepsilon, \mathsf{*.f64}\left(x, -2\right)\right)\right) \]
  3. Simplified100.0%

    \[\leadsto \color{blue}{\varepsilon \cdot \left(\varepsilon - x \cdot -2\right)} \]
  4. Add Preprocessing
  5. Taylor expanded in eps around inf

    \[\leadsto \color{blue}{{\varepsilon}^{2}} \]
  6. Step-by-step derivation
    1. unpow2N/A

      \[\leadsto \varepsilon \cdot \color{blue}{\varepsilon} \]
    2. *-lowering-*.f6469.8%

      \[\leadsto \mathsf{*.f64}\left(\varepsilon, \color{blue}{\varepsilon}\right) \]
  7. Simplified69.8%

    \[\leadsto \color{blue}{\varepsilon \cdot \varepsilon} \]
  8. Add Preprocessing

Reproduce

herbie shell --seed 2024152 
(FPCore (x eps)
  :name "ENA, Section 1.4, Exercise 4b, n=2"
  :precision binary64
  :pre (and (and (<= -1000000000.0 x) (<= x 1000000000.0)) (and (<= -1.0 eps) (<= eps 1.0)))
  (- (pow (+ x eps) 2.0) (pow x 2.0)))
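Outside of Herbie, the rewrites can be spot-checked against an exact rational reference. The sketch below is not Herbie's metric (which measures bits of error over a weighted sample); it just reports the worst relative error of the original program and of Alternative 3 over uniform samples drawn from the precondition:

```python
import random
from fractions import Fraction

def naive(x, eps):
    return (x + eps) ** 2 - x ** 2

def alt3(x, eps):
    return eps * (eps - x * -2.0)

def exact(x, eps):
    # Binary64 inputs are exact rationals, so this reference is exact.
    xr, er = Fraction(x), Fraction(eps)
    return (xr + er) ** 2 - xr ** 2

def rel_err(approx, true):
    if true == 0:
        return abs(Fraction(approx))
    return abs((Fraction(approx) - true) / true)

random.seed(0)  # arbitrary seed, for reproducibility of the sketch
worst_naive = worst_alt3 = Fraction(0)
for _ in range(1000):
    x = random.uniform(-1e9, 1e9)    # matches the precondition on x
    eps = random.uniform(-1.0, 1.0)  # matches the precondition on eps
    t = exact(x, eps)
    worst_naive = max(worst_naive, rel_err(naive(x, eps), t))
    worst_alt3 = max(worst_alt3, rel_err(alt3(x, eps), t))

print(f"worst relative error: naive {float(worst_naive):.1e}, "
      f"alt3 {float(worst_alt3):.1e}")
```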