Beckmann Distribution sample, tan2theta, alphax == alphay

Average Accuracy: 55.8% → 99.0%
Time: 9.7s
Precision: binary32
Cost: 3424

Specification

Precondition:

\[\left(0.0001 \leq \alpha \land \alpha \leq 1\right) \land \left(2.328306437 \cdot 10^{-10} \leq u0 \land u0 \leq 1\right)\]

Initial program:

\[\left(\left(-\alpha\right) \cdot \alpha\right) \cdot \log \left(1 - u0\right) \]

Herbie's result:

\[\left(\alpha \cdot \left(-\alpha\right)\right) \cdot \mathsf{log1p}\left(-u0\right) \]
(FPCore (alpha u0)
 :precision binary32
 (* (* (- alpha) alpha) (log (- 1.0 u0))))
(FPCore (alpha u0)
 :precision binary32
 (* (* alpha (- alpha)) (log1p (- u0))))
float code(float alpha, float u0) {
	return (-alpha * alpha) * logf((1.0f - u0));
}
float code(float alpha, float u0) {
	return (alpha * -alpha) * log1pf(-u0);
}
function code(alpha, u0)
	return Float32(Float32(Float32(-alpha) * alpha) * log(Float32(Float32(1.0) - u0)))
end
function code(alpha, u0)
	return Float32(Float32(alpha * Float32(-alpha)) * log1p(Float32(-u0)))
end


Derivation

  1. Initial program 55.8%

    \[\left(\left(-\alpha\right) \cdot \alpha\right) \cdot \log \left(1 - u0\right) \]
  2. Applied egg-rr 99.0%

    \[\leadsto \left(\left(-\alpha\right) \cdot \alpha\right) \cdot \color{blue}{\left(0 + \mathsf{log1p}\left(-u0\right)\right)} \]
  3. Simplified 99.0%

    \[\leadsto \left(\left(-\alpha\right) \cdot \alpha\right) \cdot \color{blue}{\mathsf{log1p}\left(-u0\right)} \]
    Proof

    [Start] 99.0

    \[ \left(\left(-\alpha\right) \cdot \alpha\right) \cdot \left(0 + \mathsf{log1p}\left(-u0\right)\right) \]

    +-lft-identity [=>] 99.0

    \[ \left(\left(-\alpha\right) \cdot \alpha\right) \cdot \color{blue}{\mathsf{log1p}\left(-u0\right)} \]
  4. Final simplification 99.0%

    \[\leadsto \left(\alpha \cdot \left(-\alpha\right)\right) \cdot \mathsf{log1p}\left(-u0\right) \]
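Step 2 is where the accuracy jumps: rewriting \(\log\left(1 - u0\right)\) as \(\mathsf{log1p}\left(-u0\right)\) avoids the rounding of the subtraction \(1 - u0\). A sketch of the rounding argument, assuming binary32's unit roundoff \(2^{-24}\):

\[\mathrm{fl}\left(1 - u0\right) = \left(1 - u0\right)\left(1 + \delta\right), \qquad \left|\delta\right| \leq 2^{-24}\]

\[\log \mathrm{fl}\left(1 - u0\right) = \log \left(1 - u0\right) + \log \left(1 + \delta\right) \approx \log \left(1 - u0\right) + \delta\]

Since \(\log \left(1 - u0\right) \approx -u0\) for small \(u0\), the relative error is roughly \(\left|\delta\right| / u0\), which overwhelms the result as \(u0\) shrinks toward the precondition's lower bound \(2.328306437 \cdot 10^{-10}\). By contrast, \(\mathsf{log1p}\left(-u0\right)\) computes the same quantity with small relative error across the whole range.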

Alternatives

Alternative 1
Accuracy: 99.0%
Cost: 3424
\[\left(-\alpha\right) \cdot \left(\alpha \cdot \mathsf{log1p}\left(-u0\right)\right) \]
Alternative 2
Accuracy: 91.5%
Cost: 864
\[u0 \cdot \left(\alpha \cdot \alpha\right) + u0 \cdot \left(\left(\alpha \cdot \alpha\right) \cdot \frac{u0 \cdot \left(0.1111111111111111 \cdot \left(u0 \cdot u0\right) + -0.25\right)}{u0 \cdot 0.3333333333333333 + -0.5}\right) \]
Alternative 3
Accuracy: 91.5%
Cost: 864
\[\begin{array}{l} t_0 := u0 \cdot \left(\alpha \cdot \alpha\right)\\ t_0 + \left(\left(u0 \cdot \left(u0 \cdot 0.3333333333333333\right)\right) \cdot t_0 + t_0 \cdot \left(u0 \cdot 0.5\right)\right) \end{array} \]
Alternative 4
Accuracy: 91.5%
Cost: 672
\[u0 \cdot \left(\alpha \cdot \alpha\right) + u0 \cdot \left(\left(\alpha \cdot \alpha\right) \cdot \left(u0 \cdot \left(u0 \cdot 0.3333333333333333\right) + u0 \cdot 0.5\right)\right) \]
Alternative 5
Accuracy: 91.5%
Cost: 480
\[\left(\alpha \cdot \alpha\right) \cdot \left(u0 + \left(u0 \cdot u0\right) \cdot \left(u0 \cdot 0.3333333333333333 + 0.5\right)\right) \]
Alternative 6
Accuracy: 91.5%
Cost: 480
\[\alpha \cdot \left(\alpha \cdot \left(u0 + \left(u0 \cdot u0\right) \cdot \left(u0 \cdot 0.3333333333333333 + 0.5\right)\right)\right) \]
Alternative 7
Accuracy: 87.4%
Cost: 352
\[\left(\alpha \cdot \alpha\right) \cdot \left(u0 + 0.5 \cdot \left(u0 \cdot u0\right)\right) \]
Alternative 8
Accuracy: 87.3%
Cost: 352
\[\alpha \cdot \left(\alpha \cdot \left(u0 + u0 \cdot \left(u0 \cdot 0.5\right)\right)\right) \]
Alternative 9
Accuracy: 74.6%
Cost: 160
\[\alpha \cdot \left(\alpha \cdot u0\right) \]


Reproduce

herbie shell --seed 2023122 
(FPCore (alpha u0)
  :name "Beckmann Distribution sample, tan2theta, alphax == alphay"
  :precision binary32
  :pre (and (and (<= 0.0001 alpha) (<= alpha 1.0)) (and (<= 2.328306437e-10 u0) (<= u0 1.0)))
  (* (* (- alpha) alpha) (log (- 1.0 u0))))