Jmat.Real.lambertw, estimator

Percentage Accurate: 99.6% → 100.0%
Time: 2.4s
Precision: binary64
Cost: 13120

\[\log x - \log \log x \]
\[\mathsf{log1p}\left(\frac{x}{\log x} + -1\right) \]
(FPCore (x) :precision binary64 (- (log x) (log (log x))))
(FPCore (x) :precision binary64 (log1p (+ (/ x (log x)) -1.0)))
double code(double x) {
	return log(x) - log(log(x));
}
double code(double x) {
	return log1p(((x / log(x)) + -1.0));
}
public static double code(double x) {
	return Math.log(x) - Math.log(Math.log(x));
}
public static double code(double x) {
	return Math.log1p(((x / Math.log(x)) + -1.0));
}
def code(x):
	return math.log(x) - math.log(math.log(x))
def code(x):
	return math.log1p(((x / math.log(x)) + -1.0))
function code(x)
	return Float64(log(x) - log(log(x)))
end
function code(x)
	return log1p(Float64(Float64(x / log(x)) + -1.0))
end
code[x_] := N[(N[Log[x], $MachinePrecision] - N[Log[N[Log[x], $MachinePrecision]], $MachinePrecision]), $MachinePrecision]
code[x_] := N[Log[1 + N[(N[(x / N[Log[x], $MachinePrecision]), $MachinePrecision] + -1.0), $MachinePrecision]], $MachinePrecision]
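To see the two forms side by side in binary64, here is a minimal sketch (not part of the Herbie report; the sample points are arbitrary) that compares both against a higher-precision reference computed with Python's decimal module:

import math
from decimal import Decimal, getcontext

def original(x):
	# log(x) - log(log(x)), the initial program
	return math.log(x) - math.log(math.log(x))

def rewritten(x):
	# log1p(x/log(x) - 1), Herbie's suggestion
	return math.log1p((x / math.log(x)) + -1.0)

def reference(x):
	# Higher-precision reference: ln(x) - ln(ln(x)) at 50 digits,
	# rounded to binary64 at the end.
	getcontext().prec = 50
	lx = Decimal(x).ln()
	return float(lx - lx.ln())

for x in [1.0 + 2**-20, math.e, 10.0, 1e10, 1e300]:
	ref = reference(x)
	print(f"x={x:g}  original err={original(x) - ref:.3e}  rewritten err={rewritten(x) - ref:.3e}")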

Local Percentage Accuracy vs x

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is named in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Herbie found 3 alternatives:

(Table of each alternative's accuracy and speedup; see the Alternatives section below for details.)

Accuracy vs Speed

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Derivation

  1. Initial program 99.7%

    \[\log x - \log \log x \]
  2. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(\frac{x}{\log x} - 1\right)} \]
    Step-by-step derivation

    [Start] 99.7%

    \[ \log x - \log \log x \]

    log1p-expm1-u [=>] 99.7%

    \[ \color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\log x - \log \log x\right)\right)} \]

    expm1-udef [=>] 99.7%

    \[ \mathsf{log1p}\left(\color{blue}{e^{\log x - \log \log x} - 1}\right) \]

    diff-log [=>] 100.0%

    \[ \mathsf{log1p}\left(e^{\color{blue}{\log \left(\frac{x}{\log x}\right)}} - 1\right) \]

    add-exp-log [<=] 100.0%

    \[ \mathsf{log1p}\left(\color{blue}{\frac{x}{\log x}} - 1\right) \]
  3. Final simplification 100.0%

    \[\leadsto \mathsf{log1p}\left(\frac{x}{\log x} + -1\right) \]
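
In compact form, the derivation applies the identity

\[\log x - \log \log x = \log \left(\frac{x}{\log x}\right) = \log \left(1 + \left(\frac{x}{\log x} - 1\right)\right) = \mathsf{log1p}\left(\frac{x}{\log x} - 1\right), \]

which holds whenever \(\log x > 0\), i.e. for \(x > 1\), the domain on which the original expression is real.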

Alternatives

Alternative 1
Accuracy: 100.0%
Cost: 13120
\[\mathsf{log1p}\left(\frac{x}{\log x} + -1\right) \]
Alternative 2
Accuracy: 100.0%
Cost: 12992
\[\log \left(\frac{x}{\log x}\right) \]
Alternative 3
Accuracy: 0.6%
Cost: 6464
\[\mathsf{log1p}\left(-1\right) \]
Note that \(\mathsf{log1p}(-1) = \log 0 = -\infty\), so Alternative 3 is a constant: it is the cheapest option but accurate almost nowhere, hence the 0.6%.

Reproduce

herbie shell --seed 2023178 
(FPCore (x)
  :name "Jmat.Real.lambertw, estimator"
  :precision binary64
  (- (log x) (log (log x))))
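
One way to run this, as a sketch: assuming a POSIX shell, and relying on herbie shell reading FPCore expressions from standard input, the FPCore above can be fed to the command directly:

herbie shell --seed 2023178 <<'EOF'
(FPCore (x)
  :name "Jmat.Real.lambertw, estimator"
  :precision binary64
  (- (log x) (log (log x))))
EOF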