2log (problem 3.3.6)

Percentage Accurate: 24.0% → 99.3%
Time: 10.2s
Alternatives: 9
Speedup: 68.3×

Specification

\[N > 1 \land N < 10^{40}\]
\[\log \left(N + 1\right) - \log N \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]
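The specification subtracts two nearly equal logarithms: for large N, log(N + 1) and log(N) agree in almost all of their bits, so the subtraction cancels catastrophically. A minimal sketch (plain Python, not part of the generated report) showing the cancellation and two of the rewrites Herbie reaches below:

```python
import math

N = 1e15  # well inside the specification's range 1 < N < 1e40

# Naive form: both logs are ~34.54 and their true difference (~1e-15) is
# smaller than the spacing between doubles at that magnitude, so the
# subtraction cancels catastrophically.
naive = math.log(N + 1.0) - math.log(N)

# The difference equals log((N + 1)/N) = log1p(1/N), which avoids the
# cancellation entirely.
via_log1p = math.log1p(1.0 / N)

# Herbie's else-branch form; accurate for moderate N, but for huge N the
# ratio N/(N + 1) rounds to within a few ulps of 1, which is why Herbie
# switches to a series in that regime.
via_ratio = -math.log(N / (N + 1.0))

print(naive, via_log1p, via_ratio)
```

For N = 1e15 the naive form returns 0.0 or a stray ulp of log N, while both rewrites return approximately 1e-15.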

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs N

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the title); the vertical axis is accuracy, and higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 24.0% accurate, 1.0× speedup

\[\log \left(N + 1\right) - \log N \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]

Alternative 1: 99.3% accurate, 0.4× speedup

\[\begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 0.001:\\ \;\;\;\;\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= (- (log (+ N 1.0)) (log N)) 0.001)
   (+
    (/ 1.0 N)
    (-
     (/ 0.3333333333333333 (pow N 3.0))
     (+ (/ 0.5 (pow N 2.0)) (/ 0.25 (pow N 4.0)))))
   (- (log (/ N (+ N 1.0))))))
double code(double N) {
	double tmp;
	if ((log((N + 1.0)) - log(N)) <= 0.001) {
		tmp = (1.0 / N) + ((0.3333333333333333 / pow(N, 3.0)) - ((0.5 / pow(N, 2.0)) + (0.25 / pow(N, 4.0))));
	} else {
		tmp = -log((N / (N + 1.0)));
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if ((log((n + 1.0d0)) - log(n)) <= 0.001d0) then
        tmp = (1.0d0 / n) + ((0.3333333333333333d0 / (n ** 3.0d0)) - ((0.5d0 / (n ** 2.0d0)) + (0.25d0 / (n ** 4.0d0))))
    else
        tmp = -log((n / (n + 1.0d0)))
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if ((Math.log((N + 1.0)) - Math.log(N)) <= 0.001) {
		tmp = (1.0 / N) + ((0.3333333333333333 / Math.pow(N, 3.0)) - ((0.5 / Math.pow(N, 2.0)) + (0.25 / Math.pow(N, 4.0))));
	} else {
		tmp = -Math.log((N / (N + 1.0)));
	}
	return tmp;
}
def code(N):
	tmp = 0
	if (math.log((N + 1.0)) - math.log(N)) <= 0.001:
		tmp = (1.0 / N) + ((0.3333333333333333 / math.pow(N, 3.0)) - ((0.5 / math.pow(N, 2.0)) + (0.25 / math.pow(N, 4.0))))
	else:
		tmp = -math.log((N / (N + 1.0)))
	return tmp
function code(N)
	tmp = 0.0
	if (Float64(log(Float64(N + 1.0)) - log(N)) <= 0.001)
		tmp = Float64(Float64(1.0 / N) + Float64(Float64(0.3333333333333333 / (N ^ 3.0)) - Float64(Float64(0.5 / (N ^ 2.0)) + Float64(0.25 / (N ^ 4.0)))));
	else
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if ((log((N + 1.0)) - log(N)) <= 0.001)
		tmp = (1.0 / N) + ((0.3333333333333333 / (N ^ 3.0)) - ((0.5 / (N ^ 2.0)) + (0.25 / (N ^ 4.0))));
	else
		tmp = -log((N / (N + 1.0)));
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision], 0.001], N[(N[(1.0 / N), $MachinePrecision] + N[(N[(0.3333333333333333 / N[Power[N, 3.0], $MachinePrecision]), $MachinePrecision] - N[(N[(0.5 / N[Power[N, 2.0], $MachinePrecision]), $MachinePrecision] + N[(0.25 / N[Power[N, 4.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision])]
Derivation
  1. Split input into 2 regimes
  2. if (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N)) < 1e-3

    1. Initial program 17.9%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 17.9%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 17.9%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 17.9%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around inf 99.8%

      \[\leadsto \color{blue}{\left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \frac{1}{N}\right) - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)} \]
    6. Step-by-step derivation
      1. +-commutative 99.8%

        \[\leadsto \color{blue}{\left(\frac{1}{N} + 0.3333333333333333 \cdot \frac{1}{{N}^{3}}\right)} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right) \]
      2. associate--l+ 99.8%

        \[\leadsto \color{blue}{\frac{1}{N} + \left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right)} \]
      3. associate-*r/ 99.8%

        \[\leadsto \frac{1}{N} + \left(\color{blue}{\frac{0.3333333333333333 \cdot 1}{{N}^{3}}} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      4. metadata-eval 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{\color{blue}{0.3333333333333333}}{{N}^{3}} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      5. +-commutative 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \color{blue}{\left(0.5 \cdot \frac{1}{{N}^{2}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)}\right) \]
      6. associate-*r/ 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\color{blue}{\frac{0.5 \cdot 1}{{N}^{2}}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)\right) \]
      7. metadata-eval 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{\color{blue}{0.5}}{{N}^{2}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)\right) \]
      8. associate-*r/ 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{0.5}{{N}^{2}} + \color{blue}{\frac{0.25 \cdot 1}{{N}^{4}}}\right)\right) \]
      9. metadata-eval 99.8%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{0.5}{{N}^{2}} + \frac{\color{blue}{0.25}}{{N}^{4}}\right)\right) \]
    7. Simplified 99.8%

      \[\leadsto \color{blue}{\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)} \]

    if 1e-3 < (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N))

    1. Initial program 93.4%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 93.4%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 93.6%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 93.6%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log 93.7%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr 93.7%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Step-by-step derivation
      1. rem-exp-log 93.6%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
      2. log1p-undefine 93.4%

        \[\leadsto \color{blue}{\log \left(1 + N\right)} - \log N \]
      3. +-commutative 93.4%

        \[\leadsto \log \color{blue}{\left(N + 1\right)} - \log N \]
      4. diff-log 94.9%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num 94.6%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. log-rec 95.4%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    8. Applied egg-rr 95.4%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 0.001:\\ \;\;\;\;\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
  5. Add Preprocessing
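The series branch above is just the Taylor expansion of log(1 + 1/N): substituting x = 1/N into log(1 + x) = x - x²/2 + x³/3 - x⁴/4 + O(x⁵) yields exactly the coefficients 1, -0.5, 0.3333…, -0.25. A quick check (plain Python, not generated by Herbie) of the branch against math.log1p as a reference:

```python
import math

def series_branch(N):
    # Alternative 1's if-branch: the four-term Taylor expansion
    # log(1 + 1/N) = 1/N - 1/(2N^2) + 1/(3N^3) - 1/(4N^4) + O(1/N^5)
    return (1.0 / N) + (0.3333333333333333 / N**3 - (0.5 / N**2 + 0.25 / N**4))

# In the regime the branch guards (result <= 0.001, i.e. roughly N >= 1000),
# the relative truncation error is about 1/(5 N^4), so agreement is excellent.
for N in (1e3, 1e6, 1e12):
    ref = math.log1p(1.0 / N)  # accurate reference value
    print(N, abs(series_branch(N) - ref) / ref)
```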

Alternative 2: 99.3% accurate, 0.4× speedup

\[\begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 0.001:\\ \;\;\;\;\frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= (- (log (+ N 1.0)) (log N)) 0.001)
   (+
    (/ 0.3333333333333333 (pow N 3.0))
    (- (/ 1.0 N) (+ (/ 0.5 (pow N 2.0)) (/ 0.25 (pow N 4.0)))))
   (- (log (/ N (+ N 1.0))))))
double code(double N) {
	double tmp;
	if ((log((N + 1.0)) - log(N)) <= 0.001) {
		tmp = (0.3333333333333333 / pow(N, 3.0)) + ((1.0 / N) - ((0.5 / pow(N, 2.0)) + (0.25 / pow(N, 4.0))));
	} else {
		tmp = -log((N / (N + 1.0)));
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if ((log((n + 1.0d0)) - log(n)) <= 0.001d0) then
        tmp = (0.3333333333333333d0 / (n ** 3.0d0)) + ((1.0d0 / n) - ((0.5d0 / (n ** 2.0d0)) + (0.25d0 / (n ** 4.0d0))))
    else
        tmp = -log((n / (n + 1.0d0)))
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if ((Math.log((N + 1.0)) - Math.log(N)) <= 0.001) {
		tmp = (0.3333333333333333 / Math.pow(N, 3.0)) + ((1.0 / N) - ((0.5 / Math.pow(N, 2.0)) + (0.25 / Math.pow(N, 4.0))));
	} else {
		tmp = -Math.log((N / (N + 1.0)));
	}
	return tmp;
}
def code(N):
	tmp = 0
	if (math.log((N + 1.0)) - math.log(N)) <= 0.001:
		tmp = (0.3333333333333333 / math.pow(N, 3.0)) + ((1.0 / N) - ((0.5 / math.pow(N, 2.0)) + (0.25 / math.pow(N, 4.0))))
	else:
		tmp = -math.log((N / (N + 1.0)))
	return tmp
function code(N)
	tmp = 0.0
	if (Float64(log(Float64(N + 1.0)) - log(N)) <= 0.001)
		tmp = Float64(Float64(0.3333333333333333 / (N ^ 3.0)) + Float64(Float64(1.0 / N) - Float64(Float64(0.5 / (N ^ 2.0)) + Float64(0.25 / (N ^ 4.0)))));
	else
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if ((log((N + 1.0)) - log(N)) <= 0.001)
		tmp = (0.3333333333333333 / (N ^ 3.0)) + ((1.0 / N) - ((0.5 / (N ^ 2.0)) + (0.25 / (N ^ 4.0))));
	else
		tmp = -log((N / (N + 1.0)));
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision], 0.001], N[(N[(0.3333333333333333 / N[Power[N, 3.0], $MachinePrecision]), $MachinePrecision] + N[(N[(1.0 / N), $MachinePrecision] - N[(N[(0.5 / N[Power[N, 2.0], $MachinePrecision]), $MachinePrecision] + N[(0.25 / N[Power[N, 4.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision])]
Derivation
  1. Split input into 2 regimes
  2. if (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N)) < 1e-3

    1. Initial program 17.9%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 17.9%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 17.9%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 17.9%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around inf 99.8%

      \[\leadsto \color{blue}{\left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \frac{1}{N}\right) - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)} \]
    6. Step-by-step derivation
      1. associate--l+ 99.8%

        \[\leadsto \color{blue}{0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \left(\frac{1}{N} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right)} \]
      2. associate-*r/ 99.8%

        \[\leadsto \color{blue}{\frac{0.3333333333333333 \cdot 1}{{N}^{3}}} + \left(\frac{1}{N} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      3. metadata-eval 99.8%

        \[\leadsto \frac{\color{blue}{0.3333333333333333}}{{N}^{3}} + \left(\frac{1}{N} - \left(0.25 \cdot \frac{1}{{N}^{4}} + 0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      4. +-commutative 99.8%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \color{blue}{\left(0.5 \cdot \frac{1}{{N}^{2}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)}\right) \]
      5. associate-*r/ 99.8%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\color{blue}{\frac{0.5 \cdot 1}{{N}^{2}}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)\right) \]
      6. metadata-eval 99.8%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{\color{blue}{0.5}}{{N}^{2}} + 0.25 \cdot \frac{1}{{N}^{4}}\right)\right) \]
      7. associate-*r/ 99.8%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{0.5}{{N}^{2}} + \color{blue}{\frac{0.25 \cdot 1}{{N}^{4}}}\right)\right) \]
      8. metadata-eval 99.8%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{0.5}{{N}^{2}} + \frac{\color{blue}{0.25}}{{N}^{4}}\right)\right) \]
    7. Simplified 99.8%

      \[\leadsto \color{blue}{\frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)} \]

    if 1e-3 < (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N))

    1. Initial program 93.4%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 93.4%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 93.6%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 93.6%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log 93.7%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr 93.7%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Step-by-step derivation
      1. rem-exp-log 93.6%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
      2. log1p-undefine 93.4%

        \[\leadsto \color{blue}{\log \left(1 + N\right)} - \log N \]
      3. +-commutative 93.4%

        \[\leadsto \log \color{blue}{\left(N + 1\right)} - \log N \]
      4. diff-log 94.9%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num 94.6%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. log-rec 95.4%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    8. Applied egg-rr 95.4%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 0.001:\\ \;\;\;\;\frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \left(\frac{0.5}{{N}^{2}} + \frac{0.25}{{N}^{4}}\right)\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
  5. Add Preprocessing

Alternative 3: 98.9% accurate, 0.5× speedup

\[\begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 8 \cdot 10^{-5}:\\ \;\;\;\;\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \frac{-0.5}{{N}^{2}}\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= (- (log (+ N 1.0)) (log N)) 8e-5)
   (+ (/ 1.0 N) (+ (/ 0.3333333333333333 (pow N 3.0)) (/ -0.5 (pow N 2.0))))
   (- (log (/ N (+ N 1.0))))))
double code(double N) {
	double tmp;
	if ((log((N + 1.0)) - log(N)) <= 8e-5) {
		tmp = (1.0 / N) + ((0.3333333333333333 / pow(N, 3.0)) + (-0.5 / pow(N, 2.0)));
	} else {
		tmp = -log((N / (N + 1.0)));
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if ((log((n + 1.0d0)) - log(n)) <= 8d-5) then
        tmp = (1.0d0 / n) + ((0.3333333333333333d0 / (n ** 3.0d0)) + ((-0.5d0) / (n ** 2.0d0)))
    else
        tmp = -log((n / (n + 1.0d0)))
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if ((Math.log((N + 1.0)) - Math.log(N)) <= 8e-5) {
		tmp = (1.0 / N) + ((0.3333333333333333 / Math.pow(N, 3.0)) + (-0.5 / Math.pow(N, 2.0)));
	} else {
		tmp = -Math.log((N / (N + 1.0)));
	}
	return tmp;
}
def code(N):
	tmp = 0
	if (math.log((N + 1.0)) - math.log(N)) <= 8e-5:
		tmp = (1.0 / N) + ((0.3333333333333333 / math.pow(N, 3.0)) + (-0.5 / math.pow(N, 2.0)))
	else:
		tmp = -math.log((N / (N + 1.0)))
	return tmp
function code(N)
	tmp = 0.0
	if (Float64(log(Float64(N + 1.0)) - log(N)) <= 8e-5)
		tmp = Float64(Float64(1.0 / N) + Float64(Float64(0.3333333333333333 / (N ^ 3.0)) + Float64(-0.5 / (N ^ 2.0))));
	else
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if ((log((N + 1.0)) - log(N)) <= 8e-5)
		tmp = (1.0 / N) + ((0.3333333333333333 / (N ^ 3.0)) + (-0.5 / (N ^ 2.0)));
	else
		tmp = -log((N / (N + 1.0)));
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision], 8e-5], N[(N[(1.0 / N), $MachinePrecision] + N[(N[(0.3333333333333333 / N[Power[N, 3.0], $MachinePrecision]), $MachinePrecision] + N[(-0.5 / N[Power[N, 2.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision])]
Derivation
  1. Split input into 2 regimes
  2. if (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N)) < 8.00000000000000065e-5

    1. Initial program 16.8%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 16.8%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 16.9%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 16.9%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around inf 99.7%

      \[\leadsto \color{blue}{\left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \frac{1}{N}\right) - 0.5 \cdot \frac{1}{{N}^{2}}} \]
    6. Step-by-step derivation
      1. sub-neg 99.7%

        \[\leadsto \color{blue}{\left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \frac{1}{N}\right) + \left(-0.5 \cdot \frac{1}{{N}^{2}}\right)} \]
      2. +-commutative 99.7%

        \[\leadsto \color{blue}{\left(\frac{1}{N} + 0.3333333333333333 \cdot \frac{1}{{N}^{3}}\right)} + \left(-0.5 \cdot \frac{1}{{N}^{2}}\right) \]
      3. associate-+l+ 99.7%

        \[\leadsto \color{blue}{\frac{1}{N} + \left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \left(-0.5 \cdot \frac{1}{{N}^{2}}\right)\right)} \]
      4. associate-*r/ 99.7%

        \[\leadsto \frac{1}{N} + \left(\color{blue}{\frac{0.3333333333333333 \cdot 1}{{N}^{3}}} + \left(-0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      5. metadata-eval 99.7%

        \[\leadsto \frac{1}{N} + \left(\frac{\color{blue}{0.3333333333333333}}{{N}^{3}} + \left(-0.5 \cdot \frac{1}{{N}^{2}}\right)\right) \]
      6. associate-*r/ 99.7%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \left(-\color{blue}{\frac{0.5 \cdot 1}{{N}^{2}}}\right)\right) \]
      7. metadata-eval 99.7%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \left(-\frac{\color{blue}{0.5}}{{N}^{2}}\right)\right) \]
      8. distribute-neg-frac 99.7%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{-0.5}{{N}^{2}}}\right) \]
      9. metadata-eval 99.7%

        \[\leadsto \frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{-0.5}}{{N}^{2}}\right) \]
    7. Simplified 99.7%

      \[\leadsto \color{blue}{\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \frac{-0.5}{{N}^{2}}\right)} \]

    if 8.00000000000000065e-5 < (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N))

    1. Initial program 91.0%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 91.0%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 91.2%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 91.2%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log 91.3%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr 91.3%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Step-by-step derivation
      1. rem-exp-log 91.2%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
      2. log1p-undefine 91.0%

        \[\leadsto \color{blue}{\log \left(1 + N\right)} - \log N \]
      3. +-commutative 91.0%

        \[\leadsto \log \color{blue}{\left(N + 1\right)} - \log N \]
      4. diff-log 93.3%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num 93.0%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. log-rec 93.9%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    8. Applied egg-rr 93.9%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.1%

    \[\leadsto \begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 8 \cdot 10^{-5}:\\ \;\;\;\;\frac{1}{N} + \left(\frac{0.3333333333333333}{{N}^{3}} + \frac{-0.5}{{N}^{2}}\right)\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
  5. Add Preprocessing
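Alternative 3 drops the 1/(4N⁴) term from the series, leaving a relative truncation error of roughly 1/(4N³); to compensate, the regime boundary tightens from 10⁻³ to 8·10⁻⁵, so the series is only used for N ≳ 12500. A quick check (plain Python, not generated by Herbie) of how the dropped term behaves near each boundary:

```python
import math

def three_term(N):
    # Alternative 3's if-branch: 1/N - 1/(2N^2) + 1/(3N^3)
    return (1.0 / N) + (0.3333333333333333 / N**3 + -0.5 / N**2)

def rel_err(N):
    ref = math.log1p(1.0 / N)  # accurate reference value
    return abs(three_term(N) - ref) / ref

# Near the old boundary (result ~ 1e-3, N ~ 1e3) the dropped term costs
# roughly 2.5e-10 relative error; near the new boundary (result ~ 8e-5,
# N ~ 1.25e4) it is already down to about 1e-13.
print(rel_err(1.0e3), rel_err(1.25e4))
```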

Alternative 4: 99.0% accurate, 0.6× speedup

\[\begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 8 \cdot 10^{-5}:\\ \;\;\;\;\frac{0.3333333333333333}{{N}^{3}} + \frac{\frac{N + -0.5}{N}}{N}\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= (- (log (+ N 1.0)) (log N)) 8e-5)
   (+ (/ 0.3333333333333333 (pow N 3.0)) (/ (/ (+ N -0.5) N) N))
   (- (log (/ N (+ N 1.0))))))
double code(double N) {
	double tmp;
	if ((log((N + 1.0)) - log(N)) <= 8e-5) {
		tmp = (0.3333333333333333 / pow(N, 3.0)) + (((N + -0.5) / N) / N);
	} else {
		tmp = -log((N / (N + 1.0)));
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if ((log((n + 1.0d0)) - log(n)) <= 8d-5) then
        tmp = (0.3333333333333333d0 / (n ** 3.0d0)) + (((n + (-0.5d0)) / n) / n)
    else
        tmp = -log((n / (n + 1.0d0)))
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if ((Math.log((N + 1.0)) - Math.log(N)) <= 8e-5) {
		tmp = (0.3333333333333333 / Math.pow(N, 3.0)) + (((N + -0.5) / N) / N);
	} else {
		tmp = -Math.log((N / (N + 1.0)));
	}
	return tmp;
}
def code(N):
	tmp = 0
	if (math.log((N + 1.0)) - math.log(N)) <= 8e-5:
		tmp = (0.3333333333333333 / math.pow(N, 3.0)) + (((N + -0.5) / N) / N)
	else:
		tmp = -math.log((N / (N + 1.0)))
	return tmp
function code(N)
	tmp = 0.0
	if (Float64(log(Float64(N + 1.0)) - log(N)) <= 8e-5)
		tmp = Float64(Float64(0.3333333333333333 / (N ^ 3.0)) + Float64(Float64(Float64(N + -0.5) / N) / N));
	else
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if ((log((N + 1.0)) - log(N)) <= 8e-5)
		tmp = (0.3333333333333333 / (N ^ 3.0)) + (((N + -0.5) / N) / N);
	else
		tmp = -log((N / (N + 1.0)));
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision], 8e-5], N[(N[(0.3333333333333333 / N[Power[N, 3.0], $MachinePrecision]), $MachinePrecision] + N[(N[(N[(N + -0.5), $MachinePrecision] / N), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision])]
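Alternative 4 keeps the 1/(3N³) term but folds 1/N - 1/(2N²) into the single expression ((N - 0.5)/N)/N, replacing two reciprocal powers with nested divisions; like Alternative 3, it drops the 1/(4N⁴) term, so the two branch forms differ by roughly 1/(4N³) relative. A quick comparison (plain Python, not generated by Herbie) against Alternative 1's branch:

```python
import math

def alt4_branch(N):
    # Alternative 4's if-branch: 1/(3N^3) + ((N - 0.5)/N)/N
    return 0.3333333333333333 / N**3 + ((N + -0.5) / N) / N

def alt1_branch(N):
    # Alternative 1's if-branch, for comparison (keeps the 1/(4N^4) term).
    return (1.0 / N) + (0.3333333333333333 / N**3 - (0.5 / N**2 + 0.25 / N**4))

# Inside Alternative 4's regime (N >~ 12500) the dropped 1/(4N^4) term is
# negligible, so the two forms agree to about 1/(4N^3) relative or better.
for N in (2e4, 1e8):
    a, b = alt4_branch(N), alt1_branch(N)
    print(N, abs(a - b) / b)
```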
Derivation
  1. Split input into 2 regimes
  2. if (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N)) < 8.00000000000000065e-5

    1. Initial program 16.8%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 16.8%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 16.9%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 16.9%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around inf 99.7%

      \[\leadsto \color{blue}{\left(0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \frac{1}{N}\right) - 0.5 \cdot \frac{1}{{N}^{2}}} \]
    6. Step-by-step derivation
      1. associate--l+ 99.7%

        \[\leadsto \color{blue}{0.3333333333333333 \cdot \frac{1}{{N}^{3}} + \left(\frac{1}{N} - 0.5 \cdot \frac{1}{{N}^{2}}\right)} \]
      2. associate-*r/ 99.7%

        \[\leadsto \color{blue}{\frac{0.3333333333333333 \cdot 1}{{N}^{3}}} + \left(\frac{1}{N} - 0.5 \cdot \frac{1}{{N}^{2}}\right) \]
      3. metadata-eval 99.7%

        \[\leadsto \frac{\color{blue}{0.3333333333333333}}{{N}^{3}} + \left(\frac{1}{N} - 0.5 \cdot \frac{1}{{N}^{2}}\right) \]
      4. associate-*r/ 99.7%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \color{blue}{\frac{0.5 \cdot 1}{{N}^{2}}}\right) \]
      5. metadata-eval 99.7%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \frac{\color{blue}{0.5}}{{N}^{2}}\right) \]
    7. Simplified 99.7%

      \[\leadsto \color{blue}{\frac{0.3333333333333333}{{N}^{3}} + \left(\frac{1}{N} - \frac{0.5}{{N}^{2}}\right)} \]
    8. Step-by-step derivation
      1. frac-sub 99.4%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{1 \cdot {N}^{2} - N \cdot 0.5}{N \cdot {N}^{2}}} \]
      2. *-un-lft-identity 99.4%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{{N}^{2}} - N \cdot 0.5}{N \cdot {N}^{2}} \]
      3. unpow2 99.4%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{{N}^{2} - N \cdot 0.5}{N \cdot \color{blue}{\left(N \cdot N\right)}} \]
      4. cube-mult 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{{N}^{2} - N \cdot 0.5}{\color{blue}{{N}^{3}}} \]
    9. Applied egg-rr 99.3%

      \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{{N}^{2} - N \cdot 0.5}{{N}^{3}}} \]
    10. Step-by-step derivation
      1. unpow2 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{N \cdot N} - N \cdot 0.5}{{N}^{3}} \]
      2. distribute-lft-out-- 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{N \cdot \left(N - 0.5\right)}}{{N}^{3}} \]
    11. Simplified 99.3%

      \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{N \cdot \left(N - 0.5\right)}{{N}^{3}}} \]
    12. Step-by-step derivation
      1. *-un-lft-identity 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{1 \cdot \left(N \cdot \left(N - 0.5\right)\right)}}{{N}^{3}} \]
      2. cube-mult 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{1 \cdot \left(N \cdot \left(N - 0.5\right)\right)}{\color{blue}{N \cdot \left(N \cdot N\right)}} \]
      3. unpow2 99.3%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{1 \cdot \left(N \cdot \left(N - 0.5\right)\right)}{N \cdot \color{blue}{{N}^{2}}} \]
      4. times-frac 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{1}{N} \cdot \frac{N \cdot \left(N - 0.5\right)}{{N}^{2}}} \]
      5. sub-neg 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{1}{N} \cdot \frac{N \cdot \color{blue}{\left(N + \left(-0.5\right)\right)}}{{N}^{2}} \]
      6. metadata-eval 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{1}{N} \cdot \frac{N \cdot \left(N + \color{blue}{-0.5}\right)}{{N}^{2}} \]
    13. Applied egg-rr 99.6%

      \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{1}{N} \cdot \frac{N \cdot \left(N + -0.5\right)}{{N}^{2}}} \]
    14. Step-by-step derivation
      1. associate-*l/ 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{1 \cdot \frac{N \cdot \left(N + -0.5\right)}{{N}^{2}}}{N}} \]
      2. *-lft-identity 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{\frac{N \cdot \left(N + -0.5\right)}{{N}^{2}}}}{N} \]
      3. unpow2 99.6%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\frac{N \cdot \left(N + -0.5\right)}{\color{blue}{N \cdot N}}}{N} \]
      4. times-frac 99.7%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{\frac{N}{N} \cdot \frac{N + -0.5}{N}}}{N} \]
      5. *-inverses 99.7%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{\color{blue}{1} \cdot \frac{N + -0.5}{N}}{N} \]
      6. +-commutative 99.7%

        \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \frac{1 \cdot \frac{\color{blue}{-0.5 + N}}{N}}{N} \]
    15. Simplified 99.7%

      \[\leadsto \frac{0.3333333333333333}{{N}^{3}} + \color{blue}{\frac{1 \cdot \frac{-0.5 + N}{N}}{N}} \]

    if 8.00000000000000065e-5 < (-.f64 (log.f64 (+.f64 N 1)) (log.f64 N))

    1. Initial program 91.0%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative91.0%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define91.2%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified91.2%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log91.3%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr91.3%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Step-by-step derivation
      1. rem-exp-log91.2%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
      2. log1p-undefine91.0%

        \[\leadsto \color{blue}{\log \left(1 + N\right)} - \log N \]
      3. +-commutative91.0%

        \[\leadsto \log \color{blue}{\left(N + 1\right)} - \log N \]
      4. diff-log93.3%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num93.0%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. log-rec93.9%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    8. Applied egg-rr93.9%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification99.1%

    \[\leadsto \begin{array}{l} \mathbf{if}\;\log \left(N + 1\right) - \log N \leq 8 \cdot 10^{-5}:\\ \;\;\;\;\frac{0.3333333333333333}{{N}^{3}} + \frac{\frac{N + -0.5}{N}}{N}\\ \mathbf{else}:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \end{array} \]
  5. Add Preprocessing
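
The recombined program can be sketched directly in Python. This is a hedged sketch, not part of the report: the name `two_log` is ours, and `math.log1p(1.0 / N)` is used only as a well-conditioned reference. Note that the branch condition re-evaluates the naive expression, exactly as in the final simplification above.

```python
import math

def two_log(N):
    # Regime split from the final simplification: the naive difference is
    # tiny precisely when N is large, which is where the series applies.
    naive = math.log(N + 1.0) - math.log(N)
    if naive <= 8e-5:
        # Taylor series of log((N + 1) / N) about N = infinity:
        # 1/N - 1/(2 N^2) + 1/(3 N^3)
        return 0.3333333333333333 / N**3 + ((N + -0.5) / N) / N
    else:
        return -math.log(N / (N + 1.0))
```

For N = 2 the log branch matches `math.log1p(0.5)` (= log 1.5); for N = 1e8 the series branch tracks `math.log1p(1e-8)` to near machine precision.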

Alternative 5: 97.9% accurate, 1.8× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;N \leq 250000:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\ \end{array} \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= N 250000.0) (- (log (/ N (+ N 1.0)))) (+ (/ 1.0 N) (/ (/ -0.5 N) N))))
double code(double N) {
	double tmp;
	if (N <= 250000.0) {
		tmp = -log((N / (N + 1.0)));
	} else {
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if (n <= 250000.0d0) then
        tmp = -log((n / (n + 1.0d0)))
    else
        tmp = (1.0d0 / n) + (((-0.5d0) / n) / n)
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if (N <= 250000.0) {
		tmp = -Math.log((N / (N + 1.0)));
	} else {
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	}
	return tmp;
}
import math

def code(N):
	tmp = 0
	if N <= 250000.0:
		tmp = -math.log((N / (N + 1.0)))
	else:
		tmp = (1.0 / N) + ((-0.5 / N) / N)
	return tmp
function code(N)
	tmp = 0.0
	if (N <= 250000.0)
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	else
		tmp = Float64(Float64(1.0 / N) + Float64(Float64(-0.5 / N) / N));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if (N <= 250000.0)
		tmp = -log((N / (N + 1.0)));
	else
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N, 250000.0], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), N[(N[(1.0 / N), $MachinePrecision] + N[(N[(-0.5 / N), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;N \leq 250000:\\
\;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if N < 2.5e5

    1. Initial program 87.2%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative87.2%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define87.3%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified87.3%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log87.4%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr87.4%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Step-by-step derivation
      1. rem-exp-log87.3%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
      2. log1p-undefine87.2%

        \[\leadsto \color{blue}{\log \left(1 + N\right)} - \log N \]
      3. +-commutative87.2%

        \[\leadsto \log \color{blue}{\left(N + 1\right)} - \log N \]
      4. diff-log90.1%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num89.9%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. log-rec90.9%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    8. Applied egg-rr90.9%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]

    if 2.5e5 < N

    1. Initial program 15.0%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative15.0%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define15.1%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified15.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log15.1%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr15.1%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Taylor expanded in N around inf 94.5%

      \[\leadsto e^{\color{blue}{\left(\log \left(\frac{1}{N}\right) + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)}} \]
    8. Step-by-step derivation
      1. log-rec94.5%

        \[\leadsto e^{\left(\color{blue}{\left(-\log N\right)} + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      2. +-commutative94.5%

        \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} + \left(-\log N\right)\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      3. unsub-neg94.5%

        \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} - \log N\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      4. associate-*r/94.5%

        \[\leadsto e^{\left(\color{blue}{\frac{0.20833333333333334 \cdot 1}{{N}^{2}}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      5. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{\color{blue}{0.20833333333333334}}{{N}^{2}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      6. +-commutative94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \color{blue}{\left(0.5 \cdot \frac{1}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)}} \]
      7. associate-*r/94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\color{blue}{\frac{0.5 \cdot 1}{N}} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
      8. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{\color{blue}{0.5}}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
      9. associate-*r/94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \color{blue}{\frac{0.125 \cdot 1}{{N}^{3}}}\right)} \]
      10. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{\color{blue}{0.125}}{{N}^{3}}\right)} \]
    9. Simplified94.5%

      \[\leadsto e^{\color{blue}{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{0.125}{{N}^{3}}\right)}} \]
    10. Taylor expanded in N around inf 94.0%

      \[\leadsto \color{blue}{e^{--1 \cdot \log \left(\frac{1}{N}\right)} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
    11. Step-by-step derivation
      1. exp-neg94.0%

        \[\leadsto \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      2. mul-1-neg94.0%

        \[\leadsto \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      3. log-rec94.0%

        \[\leadsto \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      4. remove-double-neg94.0%

        \[\leadsto \frac{1}{e^{\color{blue}{\log N}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      5. rem-exp-log98.9%

        \[\leadsto \frac{1}{\color{blue}{N}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      6. associate-*r/98.9%

        \[\leadsto \frac{1}{N} + \color{blue}{\frac{-0.5 \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
      7. metadata-eval98.9%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{\left(-0.5\right)} \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      8. exp-neg98.9%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}}}{N} \]
      9. mul-1-neg98.9%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}}}{N} \]
      10. log-rec98.9%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}}}{N} \]
      11. remove-double-neg98.9%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{\log N}}}}{N} \]
      12. rem-exp-log98.9%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{\color{blue}{N}}}{N} \]
      13. distribute-lft-neg-in98.9%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{-0.5 \cdot \frac{1}{N}}}{N} \]
      14. associate-*r/98.9%

        \[\leadsto \frac{1}{N} + \frac{-\color{blue}{\frac{0.5 \cdot 1}{N}}}{N} \]
      15. metadata-eval98.9%

        \[\leadsto \frac{1}{N} + \frac{-\frac{\color{blue}{0.5}}{N}}{N} \]
      16. distribute-neg-frac98.9%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{\frac{-0.5}{N}}}{N} \]
      17. metadata-eval98.9%

        \[\leadsto \frac{1}{N} + \frac{\frac{\color{blue}{-0.5}}{N}}{N} \]
    12. Simplified98.9%

      \[\leadsto \color{blue}{\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification97.9%

    \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 250000:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\ \end{array} \]
  5. Add Preprocessing
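
Alternative 5's regime split can be exercised end-to-end in Python (a sketch; the function name `alt5` and the `math.log1p` baseline are our additions, not part of the report):

```python
import math

def alt5(N):
    # Rewritten log for N <= 2.5e5; two-term series about N = infinity beyond.
    if N <= 250000.0:
        return -math.log(N / (N + 1.0))
    return (1.0 / N) + ((-0.5 / N) / N)
```

At N = 1e9 the naive subtraction log(N + 1) - log N cancels most of its significant bits, while the series branch stays within a few ulps of `math.log1p(1.0 / N)`.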

Alternative 6: 97.7% accurate, 1.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;N \leq 240000:\\ \;\;\;\;\log \left(1 + \frac{1}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\ \end{array} \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= N 240000.0) (log (+ 1.0 (/ 1.0 N))) (+ (/ 1.0 N) (/ (/ -0.5 N) N))))
double code(double N) {
	double tmp;
	if (N <= 240000.0) {
		tmp = log((1.0 + (1.0 / N)));
	} else {
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if (n <= 240000.0d0) then
        tmp = log((1.0d0 + (1.0d0 / n)))
    else
        tmp = (1.0d0 / n) + (((-0.5d0) / n) / n)
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if (N <= 240000.0) {
		tmp = Math.log((1.0 + (1.0 / N)));
	} else {
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	}
	return tmp;
}
import math

def code(N):
	tmp = 0
	if N <= 240000.0:
		tmp = math.log((1.0 + (1.0 / N)))
	else:
		tmp = (1.0 / N) + ((-0.5 / N) / N)
	return tmp
function code(N)
	tmp = 0.0
	if (N <= 240000.0)
		tmp = log(Float64(1.0 + Float64(1.0 / N)));
	else
		tmp = Float64(Float64(1.0 / N) + Float64(Float64(-0.5 / N) / N));
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if (N <= 240000.0)
		tmp = log((1.0 + (1.0 / N)));
	else
		tmp = (1.0 / N) + ((-0.5 / N) / N);
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N, 240000.0], N[Log[N[(1.0 + N[(1.0 / N), $MachinePrecision]), $MachinePrecision]], $MachinePrecision], N[(N[(1.0 / N), $MachinePrecision] + N[(N[(-0.5 / N), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;N \leq 240000:\\
\;\;\;\;\log \left(1 + \frac{1}{N}\right)\\

\mathbf{else}:\\
\;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if N < 2.4e5

    1. Initial program 87.8%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative87.8%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define88.0%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified88.0%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-log-exp87.9%

        \[\leadsto \color{blue}{\log \left(e^{\mathsf{log1p}\left(N\right)}\right)} - \log N \]
      2. log1p-expm1-u87.9%

        \[\leadsto \log \left(e^{\mathsf{log1p}\left(N\right)}\right) - \color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\log N\right)\right)} \]
      3. log1p-undefine87.9%

        \[\leadsto \log \left(e^{\mathsf{log1p}\left(N\right)}\right) - \color{blue}{\log \left(1 + \mathsf{expm1}\left(\log N\right)\right)} \]
      4. diff-log87.9%

        \[\leadsto \color{blue}{\log \left(\frac{e^{\mathsf{log1p}\left(N\right)}}{1 + \mathsf{expm1}\left(\log N\right)}\right)} \]
      5. log1p-undefine87.9%

        \[\leadsto \log \left(\frac{e^{\color{blue}{\log \left(1 + N\right)}}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      6. rem-exp-log87.6%

        \[\leadsto \log \left(\frac{\color{blue}{1 + N}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      7. +-commutative87.6%

        \[\leadsto \log \left(\frac{\color{blue}{N + 1}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      8. add-exp-log87.6%

        \[\leadsto \log \left(\frac{N + 1}{\color{blue}{e^{\log \left(1 + \mathsf{expm1}\left(\log N\right)\right)}}}\right) \]
      9. log1p-undefine87.6%

        \[\leadsto \log \left(\frac{N + 1}{e^{\color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\log N\right)\right)}}}\right) \]
      10. log1p-expm1-u87.6%

        \[\leadsto \log \left(\frac{N + 1}{e^{\color{blue}{\log N}}}\right) \]
      11. add-exp-log90.6%

        \[\leadsto \log \left(\frac{N + 1}{\color{blue}{N}}\right) \]
    6. Applied egg-rr90.6%

      \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
    7. Taylor expanded in N around 0 90.7%

      \[\leadsto \log \color{blue}{\left(1 + \frac{1}{N}\right)} \]

    if 2.4e5 < N

    1. Initial program 15.3%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative15.3%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define15.3%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified15.3%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-exp-log15.3%

        \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    6. Applied egg-rr15.3%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
    7. Taylor expanded in N around inf 94.5%

      \[\leadsto e^{\color{blue}{\left(\log \left(\frac{1}{N}\right) + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)}} \]
    8. Step-by-step derivation
      1. log-rec94.5%

        \[\leadsto e^{\left(\color{blue}{\left(-\log N\right)} + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      2. +-commutative94.5%

        \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} + \left(-\log N\right)\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      3. unsub-neg94.5%

        \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} - \log N\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      4. associate-*r/94.5%

        \[\leadsto e^{\left(\color{blue}{\frac{0.20833333333333334 \cdot 1}{{N}^{2}}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      5. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{\color{blue}{0.20833333333333334}}{{N}^{2}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
      6. +-commutative94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \color{blue}{\left(0.5 \cdot \frac{1}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)}} \]
      7. associate-*r/94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\color{blue}{\frac{0.5 \cdot 1}{N}} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
      8. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{\color{blue}{0.5}}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
      9. associate-*r/94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \color{blue}{\frac{0.125 \cdot 1}{{N}^{3}}}\right)} \]
      10. metadata-eval94.5%

        \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{\color{blue}{0.125}}{{N}^{3}}\right)} \]
    9. Simplified94.5%

      \[\leadsto e^{\color{blue}{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{0.125}{{N}^{3}}\right)}} \]
    10. Taylor expanded in N around inf 93.9%

      \[\leadsto \color{blue}{e^{--1 \cdot \log \left(\frac{1}{N}\right)} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
    11. Step-by-step derivation
      1. exp-neg93.9%

        \[\leadsto \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      2. mul-1-neg93.9%

        \[\leadsto \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      3. log-rec93.9%

        \[\leadsto \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      4. remove-double-neg93.9%

        \[\leadsto \frac{1}{e^{\color{blue}{\log N}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      5. rem-exp-log98.8%

        \[\leadsto \frac{1}{\color{blue}{N}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      6. associate-*r/98.8%

        \[\leadsto \frac{1}{N} + \color{blue}{\frac{-0.5 \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
      7. metadata-eval98.8%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{\left(-0.5\right)} \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
      8. exp-neg98.8%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}}}{N} \]
      9. mul-1-neg98.8%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}}}{N} \]
      10. log-rec98.8%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}}}{N} \]
      11. remove-double-neg98.8%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{\log N}}}}{N} \]
      12. rem-exp-log98.8%

        \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{\color{blue}{N}}}{N} \]
      13. distribute-lft-neg-in98.8%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{-0.5 \cdot \frac{1}{N}}}{N} \]
      14. associate-*r/98.8%

        \[\leadsto \frac{1}{N} + \frac{-\color{blue}{\frac{0.5 \cdot 1}{N}}}{N} \]
      15. metadata-eval98.8%

        \[\leadsto \frac{1}{N} + \frac{-\frac{\color{blue}{0.5}}{N}}{N} \]
      16. distribute-neg-frac98.8%

        \[\leadsto \frac{1}{N} + \frac{\color{blue}{\frac{-0.5}{N}}}{N} \]
      17. metadata-eval98.8%

        \[\leadsto \frac{1}{N} + \frac{\frac{\color{blue}{-0.5}}{N}}{N} \]
    12. Simplified98.8%

      \[\leadsto \color{blue}{\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification97.8%

    \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 240000:\\ \;\;\;\;\log \left(1 + \frac{1}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}\\ \end{array} \]
  5. Add Preprocessing
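
Alternative 6 differs from Alternative 5 only in its first branch, which evaluates log(1 + 1/N) directly rather than -log(N/(N + 1)); a sketch with our naming:

```python
import math

def alt6(N):
    if N <= 240000.0:
        # For moderate N, 1/N is large enough that 1 + 1/N loses few bits.
        return math.log(1.0 + 1.0 / N)
    return (1.0 / N) + ((-0.5 / N) / N)
```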

Alternative 7: 92.2% accurate, 22.8× speedup

\[\begin{array}{l} \\ \frac{1}{N} + \frac{\frac{-0.5}{N}}{N} \end{array} \]
(FPCore (N) :precision binary64 (+ (/ 1.0 N) (/ (/ -0.5 N) N)))
double code(double N) {
	return (1.0 / N) + ((-0.5 / N) / N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = (1.0d0 / n) + (((-0.5d0) / n) / n)
end function
public static double code(double N) {
	return (1.0 / N) + ((-0.5 / N) / N);
}
import math

def code(N):
	return (1.0 / N) + ((-0.5 / N) / N)
function code(N)
	return Float64(Float64(1.0 / N) + Float64(Float64(-0.5 / N) / N))
end
function tmp = code(N)
	tmp = (1.0 / N) + ((-0.5 / N) / N);
end
code[N_] := N[(N[(1.0 / N), $MachinePrecision] + N[(N[(-0.5 / N), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}
\end{array}
Derivation
  1. Initial program 24.1%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative24.1%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define24.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified24.1%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. add-exp-log24.1%

      \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
  6. Applied egg-rr24.1%

    \[\leadsto \color{blue}{e^{\log \left(\mathsf{log1p}\left(N\right) - \log N\right)}} \]
  7. Taylor expanded in N around inf 91.1%

    \[\leadsto e^{\color{blue}{\left(\log \left(\frac{1}{N}\right) + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)}} \]
  8. Step-by-step derivation
    1. log-rec91.1%

      \[\leadsto e^{\left(\color{blue}{\left(-\log N\right)} + 0.20833333333333334 \cdot \frac{1}{{N}^{2}}\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
    2. +-commutative91.1%

      \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} + \left(-\log N\right)\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
    3. unsub-neg91.1%

      \[\leadsto e^{\color{blue}{\left(0.20833333333333334 \cdot \frac{1}{{N}^{2}} - \log N\right)} - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
    4. associate-*r/91.1%

      \[\leadsto e^{\left(\color{blue}{\frac{0.20833333333333334 \cdot 1}{{N}^{2}}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
    5. metadata-eval91.1%

      \[\leadsto e^{\left(\frac{\color{blue}{0.20833333333333334}}{{N}^{2}} - \log N\right) - \left(0.125 \cdot \frac{1}{{N}^{3}} + 0.5 \cdot \frac{1}{N}\right)} \]
    6. +-commutative91.1%

      \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \color{blue}{\left(0.5 \cdot \frac{1}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)}} \]
    7. associate-*r/91.1%

      \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\color{blue}{\frac{0.5 \cdot 1}{N}} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
    8. metadata-eval91.1%

      \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{\color{blue}{0.5}}{N} + 0.125 \cdot \frac{1}{{N}^{3}}\right)} \]
    9. associate-*r/91.1%

      \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \color{blue}{\frac{0.125 \cdot 1}{{N}^{3}}}\right)} \]
    10. metadata-eval91.1%

      \[\leadsto e^{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{\color{blue}{0.125}}{{N}^{3}}\right)} \]
  9. Simplified91.1%

    \[\leadsto e^{\color{blue}{\left(\frac{0.20833333333333334}{{N}^{2}} - \log N\right) - \left(\frac{0.5}{N} + \frac{0.125}{{N}^{3}}\right)}} \]
  10. Taylor expanded in N around inf 87.9%

    \[\leadsto \color{blue}{e^{--1 \cdot \log \left(\frac{1}{N}\right)} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
  11. Step-by-step derivation
    1. exp-neg87.9%

      \[\leadsto \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    2. mul-1-neg87.9%

      \[\leadsto \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    3. log-rec87.9%

      \[\leadsto \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    4. remove-double-neg87.9%

      \[\leadsto \frac{1}{e^{\color{blue}{\log N}}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    5. rem-exp-log92.2%

      \[\leadsto \frac{1}{\color{blue}{N}} + -0.5 \cdot \frac{e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    6. associate-*r/92.2%

      \[\leadsto \frac{1}{N} + \color{blue}{\frac{-0.5 \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N}} \]
    7. metadata-eval92.2%

      \[\leadsto \frac{1}{N} + \frac{\color{blue}{\left(-0.5\right)} \cdot e^{--1 \cdot \log \left(\frac{1}{N}\right)}}{N} \]
    8. exp-neg92.2%

      \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \color{blue}{\frac{1}{e^{-1 \cdot \log \left(\frac{1}{N}\right)}}}}{N} \]
    9. mul-1-neg92.2%

      \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{-\log \left(\frac{1}{N}\right)}}}}{N} \]
    10. log-rec92.2%

      \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{-\color{blue}{\left(-\log N\right)}}}}{N} \]
    11. remove-double-neg92.2%

      \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{e^{\color{blue}{\log N}}}}{N} \]
    12. rem-exp-log92.2%

      \[\leadsto \frac{1}{N} + \frac{\left(-0.5\right) \cdot \frac{1}{\color{blue}{N}}}{N} \]
    13. distribute-lft-neg-in92.2%

      \[\leadsto \frac{1}{N} + \frac{\color{blue}{-0.5 \cdot \frac{1}{N}}}{N} \]
    14. associate-*r/92.2%

      \[\leadsto \frac{1}{N} + \frac{-\color{blue}{\frac{0.5 \cdot 1}{N}}}{N} \]
    15. metadata-eval92.2%

      \[\leadsto \frac{1}{N} + \frac{-\frac{\color{blue}{0.5}}{N}}{N} \]
    16. distribute-neg-frac92.2%

      \[\leadsto \frac{1}{N} + \frac{\color{blue}{\frac{-0.5}{N}}}{N} \]
    17. metadata-eval92.2%

      \[\leadsto \frac{1}{N} + \frac{\frac{\color{blue}{-0.5}}{N}}{N} \]
  12. Simplified92.2%

    \[\leadsto \color{blue}{\frac{1}{N} + \frac{\frac{-0.5}{N}}{N}} \]
  13. Final simplification92.2%

    \[\leadsto \frac{1}{N} + \frac{\frac{-0.5}{N}}{N} \]
  14. Add Preprocessing
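
A quick sanity check shows where the series-only Alternative 7 gets its 92.2%: the two-term expansion is essentially exact for large N but visibly truncated for small N (sketch; naming ours):

```python
import math

def alt7(N):
    # Two-term series of log((N + 1) / N) about N = infinity.
    return (1.0 / N) + ((-0.5 / N) / N)

# alt7(2) = 0.5 - 0.125 = 0.375, versus log(3/2) ~= 0.4055: the dropped
# O(1/N^3) terms are visible at small N but fall below double precision
# once N is large.
```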

Alternative 8: 84.3% accurate, 68.3× speedup

\[\begin{array}{l} \\ \frac{1}{N} \end{array} \]
(FPCore (N) :precision binary64 (/ 1.0 N))
double code(double N) {
	return 1.0 / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = 1.0d0 / n
end function
public static double code(double N) {
	return 1.0 / N;
}
def code(N):
	return 1.0 / N
function code(N)
	return Float64(1.0 / N)
end
function tmp = code(N)
	tmp = 1.0 / N;
end
code[N_] := N[(1.0 / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{N}
\end{array}
Derivation
  1. Initial program 24.1%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative24.1%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define24.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified24.1%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around inf 84.2%

    \[\leadsto \color{blue}{\frac{1}{N}} \]
  6. Final simplification84.2%

    \[\leadsto \frac{1}{N} \]
  7. Add Preprocessing
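
The single-term Alternative 8 also drops the -1/(2N²) correction, and that dropped term predicts its error: roughly 1/(2N) in relative terms. A rough check, using `math.log1p` as a well-conditioned reference (our choice of baseline):

```python
import math

# At N = 1000 the dropped -1/(2 N^2) term contributes about
# 1/(2 N) = 5e-4 relative error to the bare 1/N approximation.
N = 1000.0
ref = math.log1p(1.0 / N)
rel_err = abs(1.0 / N - ref) / ref
```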

Alternative 9: 3.3% accurate, 205.0× speedup

\[\begin{array}{l} \\ 0 \end{array} \]
(FPCore (N) :precision binary64 0.0)
double code(double N) {
	return 0.0;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = 0.0d0
end function
public static double code(double N) {
	return 0.0;
}
def code(N):
	return 0.0
function code(N)
	return 0.0
end
function tmp = code(N)
	tmp = 0.0;
end
code[N_] := 0.0
\begin{array}{l}

\\
0
\end{array}
Derivation
  1. Initial program 24.1%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative24.1%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define24.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified24.1%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. sub-neg24.1%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) + \left(-\log N\right)} \]
    2. +-commutative24.1%

      \[\leadsto \color{blue}{\left(-\log N\right) + \mathsf{log1p}\left(N\right)} \]
    3. add-sqr-sqrt24.2%

      \[\leadsto \left(-\color{blue}{\sqrt{\log N} \cdot \sqrt{\log N}}\right) + \mathsf{log1p}\left(N\right) \]
    4. distribute-rgt-neg-in24.2%

      \[\leadsto \color{blue}{\sqrt{\log N} \cdot \left(-\sqrt{\log N}\right)} + \mathsf{log1p}\left(N\right) \]
    5. fma-define25.6%

      \[\leadsto \color{blue}{\mathsf{fma}\left(\sqrt{\log N}, -\sqrt{\log N}, \mathsf{log1p}\left(N\right)\right)} \]
  6. Applied egg-rr25.6%

    \[\leadsto \color{blue}{\mathsf{fma}\left(\sqrt{\log N}, -\sqrt{\log N}, \mathsf{log1p}\left(N\right)\right)} \]
  7. Taylor expanded in N around inf 3.3%

    \[\leadsto \color{blue}{\log \left(\frac{1}{N}\right) + -1 \cdot \log \left(\frac{1}{N}\right)} \]
  8. Step-by-step derivation
    1. log-rec3.3%

      \[\leadsto \color{blue}{\left(-\log N\right)} + -1 \cdot \log \left(\frac{1}{N}\right) \]
    2. log-rec3.3%

      \[\leadsto \left(-\log N\right) + -1 \cdot \color{blue}{\left(-\log N\right)} \]
    3. distribute-rgt1-in3.3%

      \[\leadsto \color{blue}{\left(-1 + 1\right) \cdot \left(-\log N\right)} \]
    4. metadata-eval3.3%

      \[\leadsto \color{blue}{0} \cdot \left(-\log N\right) \]
    5. mul0-lft3.3%

      \[\leadsto \color{blue}{0} \]
  9. Simplified3.3%

    \[\leadsto \color{blue}{0} \]
  10. Final simplification3.3%

    \[\leadsto 0 \]
  11. Add Preprocessing

Developer target: 99.8% accurate, 2.0× speedup

\[\begin{array}{l} \\ \mathsf{log1p}\left(\frac{1}{N}\right) \end{array} \]
(FPCore (N) :precision binary64 (log1p (/ 1.0 N)))
double code(double N) {
	return log1p((1.0 / N));
}
public static double code(double N) {
	return Math.log1p((1.0 / N));
}
import math

def code(N):
	return math.log1p((1.0 / N))
function code(N)
	return log1p(Float64(1.0 / N))
end
code[N_] := N[Log[1 + N[(1.0 / N), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\mathsf{log1p}\left(\frac{1}{N}\right)
\end{array}
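
The developer target composes two well-conditioned steps: 1/N is correctly rounded (exact to within half an ulp) for any N in the input range, and log1p avoids the cancellation that plain log(1 + x) suffers when x is tiny. A small check (the assertions are ours, not part of the report):

```python
import math

N = 1e12
target = math.log1p(1.0 / N)
# log1p(x) < x for x > 0; here the gap x^2/2 ~= 5e-25 is far above one
# ulp of 1e-12 (~2.2e-28), so the inequality is strict even in binary64.
```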

Reproduce

herbie shell --seed 2024044 
(FPCore (N)
  :name "2log (problem 3.3.6)"
  :precision binary64
  :pre (and (> N 1.0) (< N 1e+40))

  :herbie-target
  (log1p (/ 1.0 N))

  (- (log (+ N 1.0)) (log N)))