Numeric.SpecFunctions:log1p from math-functions-0.1.5.2, A

Average Accuracy: 99.8% → 99.9%
Time: 7.1s
Precision: binary64
Cost: 448


Initial program (first listing) and Herbie's improved program (second listing), in each language:

Math:

\[x \cdot \left(1 - x \cdot y\right) \]
\[x - x \cdot \left(x \cdot y\right) \]

FPCore:

(FPCore (x y) :precision binary64 (* x (- 1.0 (* x y))))
(FPCore (x y) :precision binary64 (- x (* x (* x y))))

C:

double code(double x, double y) {
	return x * (1.0 - (x * y));
}
double code(double x, double y) {
	return x - (x * (x * y));
}

Fortran:

real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = x * (1.0d0 - (x * y))
end function
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = x - (x * (x * y))
end function

Java:

public static double code(double x, double y) {
	return x * (1.0 - (x * y));
}
public static double code(double x, double y) {
	return x - (x * (x * y));
}

Python:

def code(x, y):
	return x * (1.0 - (x * y))
def code(x, y):
	return x - (x * (x * y))

Julia:

function code(x, y)
	return Float64(x * Float64(1.0 - Float64(x * y)))
end
function code(x, y)
	return Float64(x - Float64(x * Float64(x * y)))
end

MATLAB:

function tmp = code(x, y)
	tmp = x * (1.0 - (x * y));
end
function tmp = code(x, y)
	tmp = x - (x * (x * y));
end

Mathematica:

code[x_, y_] := N[(x * N[(1.0 - N[(x * y), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
code[x_, y_] := N[(x - N[(x * N[(x * y), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]

TeX:

x \cdot \left(1 - x \cdot y\right)
x - x \cdot \left(x \cdot y\right)
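The two programs compute the same real-valued expression but round differently. One way to compare them is to evaluate both in binary64 and measure each result against an exact rational reference; the sketch below uses Python's standard-library `Fraction` for the reference, with an arbitrarily chosen sample point:

```python
from fractions import Fraction

def original(x, y):
    # Input program: x * (1 - x*y), evaluated in binary64.
    return x * (1.0 - (x * y))

def rewritten(x, y):
    # Herbie's output: x - x*(x*y), evaluated in binary64.
    return x - (x * (x * y))

def exact(x, y):
    # Reference value computed in exact rational arithmetic.
    xr, yr = Fraction(x), Fraction(y)
    return xr * (1 - xr * yr)

def rel_error(approx, x, y):
    true = exact(x, y)
    if true == 0:
        return abs(Fraction(approx))
    return abs((Fraction(approx) - true) / true)

# Sample point chosen arbitrarily for illustration.
x, y = 1e-8, 2.5
print(float(rel_error(original(x, y), x, y)),
      float(rel_error(rewritten(x, y), x, y)))
```

Herbie reports average accuracy over many sampled points (99.8% vs. 99.9% here), so any single sample point only illustrates the measurement, not the overall ranking.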

Error


Derivation

  1. Initial program 99.8%

    \[x \cdot \left(1 - x \cdot y\right) \]
  2. Simplified 87.0%

    \[\leadsto \color{blue}{\mathsf{fma}\left(-y, x \cdot x, x\right)} \]
    Proof

    [Start] 99.8

    \[ x \cdot \left(1 - x \cdot y\right) \]

    sub-neg [=>] 99.8

    \[ x \cdot \color{blue}{\left(1 + \left(-x \cdot y\right)\right)} \]

    +-commutative [=>] 99.8

    \[ x \cdot \color{blue}{\left(\left(-x \cdot y\right) + 1\right)} \]

    distribute-rgt-in [=>] 99.9

    \[ \color{blue}{\left(-x \cdot y\right) \cdot x + 1 \cdot x} \]

    *-lft-identity [=>] 99.9

    \[ \left(-x \cdot y\right) \cdot x + \color{blue}{x} \]

    *-commutative [=>] 99.9

    \[ \left(-\color{blue}{y \cdot x}\right) \cdot x + x \]

    distribute-lft-neg-in [=>] 99.9

    \[ \color{blue}{\left(\left(-y\right) \cdot x\right)} \cdot x + x \]

    associate-*l* [=>] 87.0

    \[ \color{blue}{\left(-y\right) \cdot \left(x \cdot x\right)} + x \]

    fma-def [=>] 87.0

    \[ \color{blue}{\mathsf{fma}\left(-y, x \cdot x, x\right)} \]
  3. Taylor expanded in y around 0 87.0%

    \[\leadsto \color{blue}{-1 \cdot \left(y \cdot {x}^{2}\right) + x} \]
  4. Simplified 99.9%

    \[\leadsto \color{blue}{x - x \cdot \left(x \cdot y\right)} \]
    Proof

    [Start] 87.0

    \[ -1 \cdot \left(y \cdot {x}^{2}\right) + x \]

    +-commutative [=>] 87.0

    \[ \color{blue}{x + -1 \cdot \left(y \cdot {x}^{2}\right)} \]

    mul-1-neg [=>] 87.0

    \[ x + \color{blue}{\left(-y \cdot {x}^{2}\right)} \]

    unpow2 [=>] 87.0

    \[ x + \left(-y \cdot \color{blue}{\left(x \cdot x\right)}\right) \]

    sub-neg [<=] 87.0

    \[ \color{blue}{x - y \cdot \left(x \cdot x\right)} \]

    *-commutative [=>] 87.0

    \[ x - \color{blue}{\left(x \cdot x\right) \cdot y} \]

    associate-*l* [=>] 99.9

    \[ x - \color{blue}{x \cdot \left(x \cdot y\right)} \]
  5. Final simplification 99.9%

    \[\leadsto x - x \cdot \left(x \cdot y\right) \]
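Every rewrite rule in the derivation is a real-arithmetic identity, and the Taylor expansion in step 3 is exact here because the expression is already a polynomial in y, so the starting form, the fma form, and the final form agree exactly over the rationals. A quick sanity check of that claim, sketched with Python's standard-library `Fraction`:

```python
from fractions import Fraction
import random

def start(x, y):
    # Step 1: x * (1 - x*y)
    return x * (1 - x * y)

def fma_form(x, y):
    # Step 2: fma(-y, x*x, x), i.e. (-y)*(x*x) + x in real arithmetic
    return (-y) * (x * x) + x

def final(x, y):
    # Step 5: x - x*(x*y)
    return x - x * (x * y)

# Check the identity at random rational points; since Fraction arithmetic
# is exact, any disagreement would be a genuine algebraic difference.
random.seed(0)
for _ in range(100):
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**6))
    y = Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**6))
    assert start(x, y) == fma_form(x, y) == final(x, y)
print("all rewrites agree exactly")
```

The accuracy numbers change between steps only because each form rounds differently when evaluated in binary64, not because the real-valued function changes.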

Alternatives

Alternative 1
Accuracy 77.1%
Cost 649
\[\begin{array}{l} \mathbf{if}\;y \leq -1.1 \cdot 10^{+65} \lor \neg \left(y \leq 8.5 \cdot 10^{+88}\right):\\ \;\;\;\;x \cdot \left(x \cdot \left(-y\right)\right)\\ \mathbf{else}:\\ \;\;\;\;x\\ \end{array} \]
Alternative 2
Accuracy 99.8%
Cost 448
\[x \cdot \left(1 - x \cdot y\right) \]
Alternative 3
Accuracy 66.6%
Cost 64
\[x \]
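Alternative 1 is a branchy program: for extreme y the x·y term dominates the 1, so x·(1 - x·y) is close to x·(x·(-y)); elsewhere the x·y term is dropped and plain x is returned. A direct transcription into Python, for illustration only (the function name `alt1` is ours, not Herbie's):

```python
def alt1(x, y):
    # Transcription of Herbie's Alternative 1 (77.1% accuracy, cost 649).
    # `not (y <= ...)` rather than `y > ...` mirrors the FPCore condition,
    # which also routes NaN into the first branch.
    if y <= -1.1e65 or not (y <= 8.5e88):
        return x * (x * (-y))
    return x

print(alt1(2.0, 0.5), alt1(2.0, -1e70))
```

The y-only split explains the lower sampled accuracy: whether x·y is actually negligible depends on x as well, so the cheap `x` branch is wrong for inputs where x is large.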


Reproduce

herbie shell --seed 2023147 
(FPCore (x y)
  :name "Numeric.SpecFunctions:log1p from math-functions-0.1.5.2, A"
  :precision binary64
  (* x (- 1.0 (* x y))))