2log (problem 3.3.6)

Percentage Accurate: 23.9% → 99.7%
Time: 10.8s
Alternatives: 9
Speedup: 68.3×

Specification

\[N > 1 \land N < 10^{40}\]
\[\log \left(N + 1\right) - \log N \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
import math
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}
\log \left(N + 1\right) - \log N
\end{array}
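For large N, log(N + 1) and log N agree in nearly all of their significant digits, so the subtraction in the specification cancels catastrophically; this is why the initial program scores only 23.9%. A minimal sketch of the effect (the helper names are ours; `log1p_form` uses the standard identity log(N + 1) − log N = log1p(1/N)):

```python
import math

def naive(N):
    # direct transcription of the specification
    return math.log(N + 1.0) - math.log(N)

def log1p_form(N):
    # same quantity via log(N + 1) - log(N) = log1p(1/N)
    return math.log1p(1.0 / N)

# Near N = 1e15 the two logs agree to ~15 digits, so their difference
# keeps almost no significant bits; log1p(1/N) subtracts nothing and
# stays accurate (~1e-15 here).
N = 1e15
print("naive: ", naive(N))
print("log1p: ", log1p_form(N))
```

For moderate N the two agree to many digits; the gap opens as N grows and the logs converge.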

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs N

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 23.9% accurate, 1.0× speedup

\[\log \left(N + 1\right) - \log N \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
import math
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}
\log \left(N + 1\right) - \log N
\end{array}

Alternative 1: 99.7% accurate, 1.8× speedup

\[\mathsf{log1p}\left(\frac{\frac{-2 + \frac{-1}{N}}{N}}{-1 + \frac{-1 - N}{N}}\right) \]
(FPCore (N)
 :precision binary64
 (log1p (/ (/ (+ -2.0 (/ -1.0 N)) N) (+ -1.0 (/ (- -1.0 N) N)))))
double code(double N) {
	return log1p((((-2.0 + (-1.0 / N)) / N) / (-1.0 + ((-1.0 - N) / N))));
}
public static double code(double N) {
	return Math.log1p((((-2.0 + (-1.0 / N)) / N) / (-1.0 + ((-1.0 - N) / N))));
}
import math
def code(N):
	return math.log1p((((-2.0 + (-1.0 / N)) / N) / (-1.0 + ((-1.0 - N) / N))))
function code(N)
	return log1p(Float64(Float64(Float64(-2.0 + Float64(-1.0 / N)) / N) / Float64(-1.0 + Float64(Float64(-1.0 - N) / N))))
end
code[N_] := N[Log[1 + N[(N[(N[(-2.0 + N[(-1.0 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] / N[(-1.0 + N[(N[(-1.0 - N), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}
\mathsf{log1p}\left(\frac{\frac{-2 + \frac{-1}{N}}{N}}{-1 + \frac{-1 - N}{N}}\right)
\end{array}
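The argument of log1p above looks elaborate, but it collapses algebraically: the numerator is (−2 − 1/N)/N and the denominator is −2 − 1/N, so their quotient is exactly 1/N and the whole expression equals log1p(1/N). A quick numerical check of the transcribed alternative (a sketch; `alt1` is our name for it):

```python
import math

def alt1(N):
    # Alternative 1, transcribed from the listing above
    return math.log1p(((-2.0 + (-1.0 / N)) / N) / (-1.0 + ((-1.0 - N) / N)))

# Both numerator and denominator share the factor (-2 - 1/N),
# leaving 1/N, so the expression is log1p(1/N) in disguise.
for N in (2.0, 1e3, 1e10, 1e30):
    assert math.isclose(alt1(N), math.log1p(1.0 / N), rel_tol=1e-12)
```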
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Step-by-step derivation
    1. log1p-expm1-u 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\mathsf{log1p}\left(N\right) - \log N\right)\right)} \]
    2. expm1-undefine 24.5%

      \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{\mathsf{log1p}\left(N\right) - \log N} - 1}\right) \]
    3. exp-diff 24.5%

      \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{e^{\mathsf{log1p}\left(N\right)}}{e^{\log N}}} - 1\right) \]
    4. log1p-undefine 24.5%

      \[\leadsto \mathsf{log1p}\left(\frac{e^{\color{blue}{\log \left(1 + N\right)}}}{e^{\log N}} - 1\right) \]
    5. rem-exp-log 26.7%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{1 + N}}{e^{\log N}} - 1\right) \]
    6. add-exp-log 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{1 + N}{\color{blue}{N}} - 1\right) \]
    7. +-commutative 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{N + 1}}{N} - 1\right) \]
  6. Applied egg-rr 27.0%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(\frac{N + 1}{N} - 1\right)} \]
  7. Step-by-step derivation
    1. flip-- 27.0%

      \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{\frac{N + 1}{N} \cdot \frac{N + 1}{N} - 1 \cdot 1}{\frac{N + 1}{N} + 1}}\right) \]
    2. frac-2neg 27.0%

      \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{-\left(\frac{N + 1}{N} \cdot \frac{N + 1}{N} - 1 \cdot 1\right)}{-\left(\frac{N + 1}{N} + 1\right)}}\right) \]
    3. metadata-eval 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{-\left(\frac{N + 1}{N} \cdot \frac{N + 1}{N} - \color{blue}{1}\right)}{-\left(\frac{N + 1}{N} + 1\right)}\right) \]
    4. sub-neg 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{-\color{blue}{\left(\frac{N + 1}{N} \cdot \frac{N + 1}{N} + \left(-1\right)\right)}}{-\left(\frac{N + 1}{N} + 1\right)}\right) \]
    5. pow2 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{-\left(\color{blue}{{\left(\frac{N + 1}{N}\right)}^{2}} + \left(-1\right)\right)}{-\left(\frac{N + 1}{N} + 1\right)}\right) \]
    6. metadata-eval 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{-\left({\left(\frac{N + 1}{N}\right)}^{2} + \color{blue}{-1}\right)}{-\left(\frac{N + 1}{N} + 1\right)}\right) \]
    7. +-commutative 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{-\left({\left(\frac{N + 1}{N}\right)}^{2} + -1\right)}{-\color{blue}{\left(1 + \frac{N + 1}{N}\right)}}\right) \]
  8. Applied egg-rr 27.0%

    \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{-\left({\left(\frac{N + 1}{N}\right)}^{2} + -1\right)}{-\left(1 + \frac{N + 1}{N}\right)}}\right) \]
  9. Step-by-step derivation
    1. neg-sub0 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{0 - \left({\left(\frac{N + 1}{N}\right)}^{2} + -1\right)}}{-\left(1 + \frac{N + 1}{N}\right)}\right) \]
    2. +-commutative 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{0 - \color{blue}{\left(-1 + {\left(\frac{N + 1}{N}\right)}^{2}\right)}}{-\left(1 + \frac{N + 1}{N}\right)}\right) \]
    3. associate--r+ 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{\left(0 - -1\right) - {\left(\frac{N + 1}{N}\right)}^{2}}}{-\left(1 + \frac{N + 1}{N}\right)}\right) \]
    4. metadata-eval 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{1} - {\left(\frac{N + 1}{N}\right)}^{2}}{-\left(1 + \frac{N + 1}{N}\right)}\right) \]
    5. distribute-neg-in 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{1 - {\left(\frac{N + 1}{N}\right)}^{2}}{\color{blue}{\left(-1\right) + \left(-\frac{N + 1}{N}\right)}}\right) \]
    6. metadata-eval 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{1 - {\left(\frac{N + 1}{N}\right)}^{2}}{\color{blue}{-1} + \left(-\frac{N + 1}{N}\right)}\right) \]
    7. unsub-neg 27.0%

      \[\leadsto \mathsf{log1p}\left(\frac{1 - {\left(\frac{N + 1}{N}\right)}^{2}}{\color{blue}{-1 - \frac{N + 1}{N}}}\right) \]
  10. Simplified 27.0%

    \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{1 - {\left(\frac{N + 1}{N}\right)}^{2}}{-1 - \frac{N + 1}{N}}}\right) \]
  11. Taylor expanded in N around inf 99.6%

    \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{-1 \cdot \frac{2 + \frac{1}{N}}{N}}}{-1 - \frac{N + 1}{N}}\right) \]
  12. Step-by-step derivation
    1. associate-*r/ 99.6%

      \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{\frac{-1 \cdot \left(2 + \frac{1}{N}\right)}{N}}}{-1 - \frac{N + 1}{N}}\right) \]
    2. distribute-rgt-in 99.6%

      \[\leadsto \mathsf{log1p}\left(\frac{\frac{\color{blue}{2 \cdot -1 + \frac{1}{N} \cdot -1}}{N}}{-1 - \frac{N + 1}{N}}\right) \]
    3. metadata-eval 99.6%

      \[\leadsto \mathsf{log1p}\left(\frac{\frac{\color{blue}{-2} + \frac{1}{N} \cdot -1}{N}}{-1 - \frac{N + 1}{N}}\right) \]
    4. associate-*l/ 99.6%

      \[\leadsto \mathsf{log1p}\left(\frac{\frac{-2 + \color{blue}{\frac{1 \cdot -1}{N}}}{N}}{-1 - \frac{N + 1}{N}}\right) \]
    5. metadata-eval 99.6%

      \[\leadsto \mathsf{log1p}\left(\frac{\frac{-2 + \frac{\color{blue}{-1}}{N}}{N}}{-1 - \frac{N + 1}{N}}\right) \]
  13. Simplified 99.6%

    \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{\frac{-2 + \frac{-1}{N}}{N}}}{-1 - \frac{N + 1}{N}}\right) \]
  14. Final simplification 99.6%

    \[\leadsto \mathsf{log1p}\left(\frac{\frac{-2 + \frac{-1}{N}}{N}}{-1 + \frac{-1 - N}{N}}\right) \]
  15. Add Preprocessing
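The turning point of this derivation is step 11: everything up to the egg-rr result log1p((N + 1)/N − 1) still subtracts nearly equal quantities, since (N + 1)/N rounds to exactly 1 for large N, which is why accuracy stays near 27% until then. The Taylor expansion replaces that subtraction with a series in 1/N. A sketch contrasting the two stages (both transcribed from the derivation; the function names are ours):

```python
import math

def pre_taylor(N):
    # the step-6 form log1p((N + 1)/N - 1); the subtraction still cancels
    return math.log1p((N + 1.0) / N - 1.0)

def final(N):
    # the step-14 final program of Alternative 1
    return math.log1p(((-2.0 + (-1.0 / N)) / N) / (-1.0 + ((-1.0 - N) / N)))

# For N = 1e17, N + 1 rounds back to N, so (N + 1)/N - 1 is exactly 0
# and the pre-Taylor form returns 0; the final form still resolves 1/N.
N = 1e17
print(pre_taylor(N))  # 0.0
print(final(N))       # ~1e-17
```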

Alternative 2: 99.4% accurate, 1.8× speedup

\[\begin{array}{l} \mathbf{if}\;N \leq 1660:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= N 1660.0)
   (- (log (/ N (+ N 1.0))))
   (/
    (-
     (/
      (-
       -0.5
       (/
        (- (/ (- 0.25 (/ (+ 0.375 (/ -0.28125 N)) N)) N) 0.3333333333333333)
        N))
      N)
     -1.0)
    N)))
double code(double N) {
	double tmp;
	if (N <= 1660.0) {
		tmp = -log((N / (N + 1.0)));
	} else {
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if (n <= 1660.0d0) then
        tmp = -log((n / (n + 1.0d0)))
    else
        tmp = ((((-0.5d0) - ((((0.25d0 - ((0.375d0 + ((-0.28125d0) / n)) / n)) / n) - 0.3333333333333333d0) / n)) / n) - (-1.0d0)) / n
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if (N <= 1660.0) {
		tmp = -Math.log((N / (N + 1.0)));
	} else {
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	}
	return tmp;
}
import math
def code(N):
	tmp = 0
	if N <= 1660.0:
		tmp = -math.log((N / (N + 1.0)))
	else:
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N
	return tmp
function code(N)
	tmp = 0.0
	if (N <= 1660.0)
		tmp = Float64(-log(Float64(N / Float64(N + 1.0))));
	else
		tmp = Float64(Float64(Float64(Float64(-0.5 - Float64(Float64(Float64(Float64(0.25 - Float64(Float64(0.375 + Float64(-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N);
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if (N <= 1660.0)
		tmp = -log((N / (N + 1.0)));
	else
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N, 1660.0], (-N[Log[N[(N / N[(N + 1.0), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), N[(N[(N[(N[(-0.5 - N[(N[(N[(N[(0.25 - N[(N[(0.375 + N[(-0.28125 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - 0.3333333333333333), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - -1.0), $MachinePrecision] / N), $MachinePrecision]]
\begin{array}{l}
\mathbf{if}\;N \leq 1660:\\
\;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\
\mathbf{else}:\\
\;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\
\end{array}
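In the large-N branch of Alternative 2, the nested quotient is a truncated expansion of log(1 + 1/N) in powers of 1/N, evaluated Horner-style; it matches the series 1/N − 1/(2N²) + 1/(3N³) − 1/(4N⁴) + … through the 1/N⁴ term, and never subtracts two nearly equal logs. A sketch checking the transcribed branch against log1p well above the N ≤ 1660 split (`alt2_series` is our name):

```python
import math

def alt2_series(N):
    # large-N branch of Alternative 2, transcribed from the listing above;
    # agrees with the series for log(1 + 1/N) through the 1/N^4 term
    return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N

for N in (1e4, 1e8, 1e12):
    assert math.isclose(alt2_series(N), math.log1p(1.0 / N), rel_tol=1e-12)
```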
Derivation
  1. Split input into 2 regimes
  2. if N < 1660

    1. Initial program 91.4%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 91.4%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 91.4%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 91.4%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. log1p-expm1-u 91.4%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\mathsf{log1p}\left(N\right) - \log N\right)\right)} \]
      2. expm1-undefine 90.9%

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{\mathsf{log1p}\left(N\right) - \log N} - 1}\right) \]
      3. exp-diff 91.2%

        \[\leadsto \mathsf{log1p}\left(\color{blue}{\frac{e^{\mathsf{log1p}\left(N\right)}}{e^{\log N}}} - 1\right) \]
      4. log1p-undefine 91.2%

        \[\leadsto \mathsf{log1p}\left(\frac{e^{\color{blue}{\log \left(1 + N\right)}}}{e^{\log N}} - 1\right) \]
      5. rem-exp-log 91.8%

        \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{1 + N}}{e^{\log N}} - 1\right) \]
      6. add-exp-log 93.3%

        \[\leadsto \mathsf{log1p}\left(\frac{1 + N}{\color{blue}{N}} - 1\right) \]
      7. +-commutative 93.3%

        \[\leadsto \mathsf{log1p}\left(\frac{\color{blue}{N + 1}}{N} - 1\right) \]
    6. Applied egg-rr 93.3%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(\frac{N + 1}{N} - 1\right)} \]
    7. Step-by-step derivation
      1. add-exp-log 93.3%

        \[\leadsto \mathsf{log1p}\left(\color{blue}{e^{\log \left(\frac{N + 1}{N}\right)}} - 1\right) \]
      2. expm1-define 93.3%

        \[\leadsto \mathsf{log1p}\left(\color{blue}{\mathsf{expm1}\left(\log \left(\frac{N + 1}{N}\right)\right)}\right) \]
      3. log1p-expm1-u 93.2%

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      4. clear-num 92.9%

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      5. log-div 95.1%

        \[\leadsto \color{blue}{\log 1 - \log \left(\frac{N}{N + 1}\right)} \]
      6. metadata-eval 95.1%

        \[\leadsto \color{blue}{0} - \log \left(\frac{N}{N + 1}\right) \]
    8. Applied egg-rr 95.1%

      \[\leadsto \color{blue}{0 - \log \left(\frac{N}{N + 1}\right)} \]
    9. Step-by-step derivation
      1. neg-sub0 95.1%

        \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]
    10. Simplified 95.1%

      \[\leadsto \color{blue}{-\log \left(\frac{N}{N + 1}\right)} \]

    if 1660 < N

    1. Initial program 18.5%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 18.5%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 18.5%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 18.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around -inf 99.8%

      \[\leadsto \color{blue}{-1 \cdot \frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
    6. Step-by-step derivation
      1. mul-1-neg 99.8%

        \[\leadsto \color{blue}{-\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
      2. distribute-neg-frac2 99.8%

        \[\leadsto \color{blue}{\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{-N}} \]
    7. Simplified 99.8%

      \[\leadsto \color{blue}{\frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{-N}} \]
    8. Step-by-step derivation
      1. flip-+ 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
      2. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      3. cancel-sign-sub-inv 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\color{blue}{\left(0.3333333333333333 \cdot 0.3333333333333333 + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      4. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(\color{blue}{0.1111111111111111} + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      5. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\color{blue}{\frac{--0.25}{-N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      6. add-sqr-sqrt 0.0%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      7. sqrt-unprod 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{\left(-N\right) \cdot \left(-N\right)}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      8. sqr-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\sqrt{\color{blue}{N \cdot N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      9. sqrt-unprod 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{N} \cdot \sqrt{N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      10. add-sqr-sqrt 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      11. distribute-frac-neg2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{--0.25}{-N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      12. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25}{N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      13. frac-times 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25 \cdot -0.25}{N \cdot N}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      14. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{\color{blue}{0.0625}}{N \cdot N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      15. pow2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{\color{blue}{{N}^{2}}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      16. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\color{blue}{\left(0.3333333333333333 + \left(-\frac{-0.25}{N}\right)\right)}}}{N}}{N}}{-N} \]
      17. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\color{blue}{\frac{--0.25}{-N}}\right)\right)}}{N}}{N}}{-N} \]
      18. add-sqr-sqrt 0.0%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right)\right)}}{N}}{N}}{-N} \]
    9. Applied egg-rr 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
    10. Step-by-step derivation
      1. distribute-frac-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{-\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
      2. distribute-neg-frac2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\left(\frac{-0.25}{N} + -0.3333333333333333\right)}}}{N}}{N}}{-N} \]
      3. +-commutative 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\color{blue}{\left(-0.3333333333333333 + \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      4. distribute-neg-in 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{\left(--0.3333333333333333\right) + \left(-\frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      5. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333} + \left(-\frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      6. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
    11. Simplified 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
    12. Taylor expanded in N around -inf 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 + -1 \cdot \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
    13. Step-by-step derivation
      1. mul-1-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \color{blue}{\left(-\frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}\right)}}{N}}{N}}{-N} \]
      2. unsub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
      3. mul-1-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 + \color{blue}{\left(-\frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}\right)}}{N}}{N}}{N}}{-N} \]
      4. unsub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{\color{blue}{0.25 - \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}}{N}}{N}}{N}}{-N} \]
      5. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{\color{blue}{0.375 + \left(-0.28125 \cdot \frac{1}{N}\right)}}{N}}{N}}{N}}{N}}{-N} \]
      6. associate-*r/ 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\color{blue}{\frac{0.28125 \cdot 1}{N}}\right)}{N}}{N}}{N}}{N}}{-N} \]
      7. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\frac{\color{blue}{0.28125}}{N}\right)}{N}}{N}}{N}}{N}}{-N} \]
      8. distribute-neg-frac 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \color{blue}{\frac{-0.28125}{N}}}{N}}{N}}{N}}{N}}{-N} \]
      9. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{\color{blue}{-0.28125}}{N}}{N}}{N}}{N}}{N}}{-N} \]
    14. Simplified 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N}}}{N}}{N}}{-N} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 1660:\\ \;\;\;\;-\log \left(\frac{N}{N + 1}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\ \end{array} \]
  5. Add Preprocessing

Alternative 3: 99.4% accurate, 1.9× speedup

\[\begin{array}{l} \mathbf{if}\;N \leq 1500:\\ \;\;\;\;\log \left(\frac{N + 1}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\ \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= N 1500.0)
   (log (/ (+ N 1.0) N))
   (/
    (-
     (/
      (-
       -0.5
       (/
        (- (/ (- 0.25 (/ (+ 0.375 (/ -0.28125 N)) N)) N) 0.3333333333333333)
        N))
      N)
     -1.0)
    N)))
double code(double N) {
	double tmp;
	if (N <= 1500.0) {
		tmp = log(((N + 1.0) / N));
	} else {
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	}
	return tmp;
}
real(8) function code(n)
    real(8), intent (in) :: n
    real(8) :: tmp
    if (n <= 1500.0d0) then
        tmp = log(((n + 1.0d0) / n))
    else
        tmp = ((((-0.5d0) - ((((0.25d0 - ((0.375d0 + ((-0.28125d0) / n)) / n)) / n) - 0.3333333333333333d0) / n)) / n) - (-1.0d0)) / n
    end if
    code = tmp
end function
public static double code(double N) {
	double tmp;
	if (N <= 1500.0) {
		tmp = Math.log(((N + 1.0) / N));
	} else {
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	}
	return tmp;
}
import math
def code(N):
	tmp = 0
	if N <= 1500.0:
		tmp = math.log(((N + 1.0) / N))
	else:
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N
	return tmp
function code(N)
	tmp = 0.0
	if (N <= 1500.0)
		tmp = log(Float64(Float64(N + 1.0) / N));
	else
		tmp = Float64(Float64(Float64(Float64(-0.5 - Float64(Float64(Float64(Float64(0.25 - Float64(Float64(0.375 + Float64(-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N);
	end
	return tmp
end
function tmp_2 = code(N)
	tmp = 0.0;
	if (N <= 1500.0)
		tmp = log(((N + 1.0) / N));
	else
		tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
	end
	tmp_2 = tmp;
end
code[N_] := If[LessEqual[N, 1500.0], N[Log[N[(N[(N + 1.0), $MachinePrecision] / N), $MachinePrecision]], $MachinePrecision], N[(N[(N[(N[(-0.5 - N[(N[(N[(N[(0.25 - N[(N[(0.375 + N[(-0.28125 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - 0.3333333333333333), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - -1.0), $MachinePrecision] / N), $MachinePrecision]]
\begin{array}{l}
\mathbf{if}\;N \leq 1500:\\
\;\;\;\;\log \left(\frac{N + 1}{N}\right)\\
\mathbf{else}:\\
\;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\
\end{array}
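Alternative 3 differs from Alternative 2 only in its small-N branch (log((N + 1)/N) rather than −log(N/(N + 1))) and in the split point, 1500 rather than 1660. A quick continuity sketch of the transcribed program around the boundary (a sanity check of our transcription, not part of the report; `alt3` is our name):

```python
import math

def alt3(N):
    # Alternative 3, transcribed from the listing above
    if N <= 1500.0:
        return math.log((N + 1.0) / N)
    return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N

# Both regimes should track log1p(1/N) closely on either side of the split.
for N in (1499.0, 1500.0, 1501.0, 1e6):
    assert math.isclose(alt3(N), math.log1p(1.0 / N), rel_tol=1e-9)
```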
Derivation
  1. Split input into 2 regimes
  2. if N < 1500

    1. Initial program 91.5%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 91.5%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 91.5%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 91.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Step-by-step derivation
      1. add-log-exp 91.5%

        \[\leadsto \color{blue}{\log \left(e^{\mathsf{log1p}\left(N\right)}\right)} - \log N \]
      2. log1p-expm1-u 91.5%

        \[\leadsto \log \left(e^{\mathsf{log1p}\left(N\right)}\right) - \color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\log N\right)\right)} \]
      3. log1p-undefine 91.5%

        \[\leadsto \log \left(e^{\mathsf{log1p}\left(N\right)}\right) - \color{blue}{\log \left(1 + \mathsf{expm1}\left(\log N\right)\right)} \]
      4. diff-log 91.3%

        \[\leadsto \color{blue}{\log \left(\frac{e^{\mathsf{log1p}\left(N\right)}}{1 + \mathsf{expm1}\left(\log N\right)}\right)} \]
      5. log1p-undefine 91.3%

        \[\leadsto \log \left(\frac{e^{\color{blue}{\log \left(1 + N\right)}}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      6. rem-exp-log 92.0%

        \[\leadsto \log \left(\frac{\color{blue}{1 + N}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      7. +-commutative 92.0%

        \[\leadsto \log \left(\frac{\color{blue}{N + 1}}{1 + \mathsf{expm1}\left(\log N\right)}\right) \]
      8. add-exp-log 92.2%

        \[\leadsto \log \left(\frac{N + 1}{\color{blue}{e^{\log \left(1 + \mathsf{expm1}\left(\log N\right)\right)}}}\right) \]
      9. log1p-undefine 92.2%

        \[\leadsto \log \left(\frac{N + 1}{e^{\color{blue}{\mathsf{log1p}\left(\mathsf{expm1}\left(\log N\right)\right)}}}\right) \]
      10. log1p-expm1-u 92.2%

        \[\leadsto \log \left(\frac{N + 1}{e^{\color{blue}{\log N}}}\right) \]
      11. add-exp-log 93.6%

        \[\leadsto \log \left(\frac{N + 1}{\color{blue}{N}}\right) \]
    6. Applied egg-rr 93.6%

      \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]

    if 1500 < N

    1. Initial program 18.8%

      \[\log \left(N + 1\right) - \log N \]
    2. Step-by-step derivation
      1. +-commutative 18.8%

        \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
      2. log1p-define 18.8%

        \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
    3. Simplified 18.8%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
    4. Add Preprocessing
    5. Taylor expanded in N around -inf 99.8%

      \[\leadsto \color{blue}{-1 \cdot \frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
    6. Step-by-step derivation
      1. mul-1-neg 99.8%

        \[\leadsto \color{blue}{-\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
      2. distribute-neg-frac2 99.8%

        \[\leadsto \color{blue}{\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{-N}} \]
    7. Simplified 99.8%

      \[\leadsto \color{blue}{\frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{-N}} \]
    8. Step-by-step derivation
      1. flip-+ 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
      2. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      3. cancel-sign-sub-inv 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\color{blue}{\left(0.3333333333333333 \cdot 0.3333333333333333 + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      4. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(\color{blue}{0.1111111111111111} + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      5. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\color{blue}{\frac{--0.25}{-N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      6. add-sqr-sqrt 0.0%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      7. sqrt-unprod 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{\left(-N\right) \cdot \left(-N\right)}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      8. sqr-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\sqrt{\color{blue}{N \cdot N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      9. sqrt-unprod 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{N} \cdot \sqrt{N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      10. add-sqr-sqrt 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      11. distribute-frac-neg2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{--0.25}{-N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      12. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25}{N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      13. frac-times 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25 \cdot -0.25}{N \cdot N}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      14. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{\color{blue}{0.0625}}{N \cdot N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      15. pow2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{\color{blue}{{N}^{2}}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      16. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\color{blue}{\left(0.3333333333333333 + \left(-\frac{-0.25}{N}\right)\right)}}}{N}}{N}}{-N} \]
      17. frac-2neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\color{blue}{\frac{--0.25}{-N}}\right)\right)}}{N}}{N}}{-N} \]
      18. add-sqr-sqrt 0.0%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right)\right)}}{N}}{N}}{-N} \]
    9. Applied egg-rr 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
    10. Step-by-step derivation
      1. distribute-frac-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{-\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
      2. distribute-neg-frac2 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\left(\frac{-0.25}{N} + -0.3333333333333333\right)}}}{N}}{N}}{-N} \]
      3. +-commutative 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\color{blue}{\left(-0.3333333333333333 + \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      4. distribute-neg-in 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{\left(--0.3333333333333333\right) + \left(-\frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
      5. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333} + \left(-\frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
      6. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
    11. Simplified 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
    12. Taylor expanded in N around -inf 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 + -1 \cdot \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
    13. Step-by-step derivation
      1. mul-1-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \color{blue}{\left(-\frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}\right)}}{N}}{N}}{-N} \]
      2. unsub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
      3. mul-1-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 + \color{blue}{\left(-\frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}\right)}}{N}}{N}}{N}}{-N} \]
      4. unsub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{\color{blue}{0.25 - \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}}{N}}{N}}{N}}{-N} \]
      5. sub-neg 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{\color{blue}{0.375 + \left(-0.28125 \cdot \frac{1}{N}\right)}}{N}}{N}}{N}}{N}}{-N} \]
      6. associate-*r/ 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\color{blue}{\frac{0.28125 \cdot 1}{N}}\right)}{N}}{N}}{N}}{N}}{-N} \]
      7. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\frac{\color{blue}{0.28125}}{N}\right)}{N}}{N}}{N}}{N}}{-N} \]
      8. distribute-neg-frac 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \color{blue}{\frac{-0.28125}{N}}}{N}}{N}}{N}}{N}}{-N} \]
      9. metadata-eval 99.8%

        \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{\color{blue}{-0.28125}}{N}}{N}}{N}}{N}}{N}}{-N} \]
    14. Simplified 99.8%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N}}}{N}}{N}}{-N} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 99.3%

    \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 1500:\\ \;\;\;\;\log \left(\frac{N + 1}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}\\ \end{array} \]
  5. Add Preprocessing

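As a sanity check, the recombined two-regime program above can be transcribed into Python (the 1500 branch point and the series coefficients are taken from the final simplification; this is a hand-written sketch, not the report's generated code):

```python
import math

def split_regime(N):
    # For moderate N, log((N + 1) / N) is already accurate.
    if N <= 1500:
        return math.log((N + 1.0) / N)
    # For large N, use the truncated series for log(1 + 1/N) instead,
    # which avoids taking the log of a value very close to 1.
    inner = (0.25 - (0.375 + -0.28125 / N) / N) / N - 0.3333333333333333
    return ((-0.5 - inner / N) / N - -1.0) / N

# Both regimes should track the exact value log1p(1/N) closely.
for N in (2.0, 100.0, 1e6, 1e12):
    assert math.isclose(split_regime(N), math.log1p(1.0 / N), rel_tol=1e-9)
```

The branch condition is what "Recombined 2 regimes" refers to: each sub-expression is only accurate on part of the input range, so Herbie stitches them together with a threshold test.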
Alternative 4: 96.3% accurate, 8.9× speedup

\[\begin{array}{l} \\ \frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N} \end{array} \]
(FPCore (N)
 :precision binary64
 (/
  (-
   (/
    (-
     -0.5
     (/
      (- (/ (- 0.25 (/ (+ 0.375 (/ -0.28125 N)) N)) N) 0.3333333333333333)
      N))
    N)
   -1.0)
  N))
double code(double N) {
	return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = ((((-0.5d0) - ((((0.25d0 - ((0.375d0 + ((-0.28125d0) / n)) / n)) / n) - 0.3333333333333333d0) / n)) / n) - (-1.0d0)) / n
end function
public static double code(double N) {
	return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
}
def code(N):
	return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N
function code(N)
	return Float64(Float64(Float64(Float64(-0.5 - Float64(Float64(Float64(Float64(0.25 - Float64(Float64(0.375 + Float64(-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N)
end
function tmp = code(N)
	tmp = (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N) - 0.3333333333333333) / N)) / N) - -1.0) / N;
end
code[N_] := N[(N[(N[(N[(-0.5 - N[(N[(N[(N[(0.25 - N[(N[(0.375 + N[(-0.28125 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - 0.3333333333333333), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] - -1.0), $MachinePrecision] / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around -inf 96.2%

    \[\leadsto \color{blue}{-1 \cdot \frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
  6. Step-by-step derivation
    1. mul-1-neg 96.2%

      \[\leadsto \color{blue}{-\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
    2. distribute-neg-frac2 96.2%

      \[\leadsto \color{blue}{\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{-N}} \]
  7. Simplified 96.2%

    \[\leadsto \color{blue}{\frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{-N}} \]
  8. Step-by-step derivation
    1. flip-+ 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
    2. frac-2neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.3333333333333333 \cdot 0.3333333333333333 - \frac{-0.25}{N} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
    3. cancel-sign-sub-inv 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\color{blue}{\left(0.3333333333333333 \cdot 0.3333333333333333 + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    4. metadata-eval 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(\color{blue}{0.1111111111111111} + \left(-\frac{-0.25}{N}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    5. frac-2neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\color{blue}{\frac{--0.25}{-N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    6. add-sqr-sqrt 0.0%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    7. sqrt-unprod 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{\left(-N\right) \cdot \left(-N\right)}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    8. sqr-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\sqrt{\color{blue}{N \cdot N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    9. sqrt-unprod 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{\sqrt{N} \cdot \sqrt{N}}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    10. add-sqr-sqrt 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \left(-\frac{--0.25}{\color{blue}{N}}\right) \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    11. distribute-frac-neg2 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{--0.25}{-N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    12. frac-2neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25}{N}} \cdot \frac{-0.25}{N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    13. frac-times 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \color{blue}{\frac{-0.25 \cdot -0.25}{N \cdot N}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    14. metadata-eval 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{\color{blue}{0.0625}}{N \cdot N}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    15. pow2 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{\color{blue}{{N}^{2}}}\right)}{-\left(0.3333333333333333 - \frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    16. sub-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\color{blue}{\left(0.3333333333333333 + \left(-\frac{-0.25}{N}\right)\right)}}}{N}}{N}}{-N} \]
    17. frac-2neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\color{blue}{\frac{--0.25}{-N}}\right)\right)}}{N}}{N}}{-N} \]
    18. add-sqr-sqrt 0.0%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{-\left(0.3333333333333333 + \left(-\frac{--0.25}{\color{blue}{\sqrt{-N} \cdot \sqrt{-N}}}\right)\right)}}{N}}{N}}{-N} \]
  9. Applied egg-rr 96.2%

    \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{-\left(0.1111111111111111 + \frac{0.0625}{{N}^{2}}\right)}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
  10. Step-by-step derivation
    1. distribute-frac-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{-\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\frac{-0.25}{N} + -0.3333333333333333}}}{N}}{N}}{-N} \]
    2. distribute-neg-frac2 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\left(\frac{-0.25}{N} + -0.3333333333333333\right)}}}{N}}{N}}{-N} \]
    3. +-commutative 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{-\color{blue}{\left(-0.3333333333333333 + \frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
    4. distribute-neg-in 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{\left(--0.3333333333333333\right) + \left(-\frac{-0.25}{N}\right)}}}{N}}{N}}{-N} \]
    5. metadata-eval 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333} + \left(-\frac{-0.25}{N}\right)}}{N}}{N}}{-N} \]
    6. sub-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{\color{blue}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
  11. Simplified 96.2%

    \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{\frac{0.1111111111111111 + \frac{0.0625}{{N}^{2}}}{0.3333333333333333 - \frac{-0.25}{N}}}}{N}}{N}}{-N} \]
  12. Taylor expanded in N around -inf 96.2%

    \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 + -1 \cdot \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
  13. Step-by-step derivation
    1. mul-1-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \color{blue}{\left(-\frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}\right)}}{N}}{N}}{-N} \]
    2. unsub-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 + -1 \cdot \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}{N}}}{N}}{N}}{-N} \]
    3. mul-1-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 + \color{blue}{\left(-\frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}\right)}}{N}}{N}}{N}}{-N} \]
    4. unsub-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{\color{blue}{0.25 - \frac{0.375 - 0.28125 \cdot \frac{1}{N}}{N}}}{N}}{N}}{N}}{-N} \]
    5. sub-neg 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{\color{blue}{0.375 + \left(-0.28125 \cdot \frac{1}{N}\right)}}{N}}{N}}{N}}{N}}{-N} \]
    6. associate-*r/ 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\color{blue}{\frac{0.28125 \cdot 1}{N}}\right)}{N}}{N}}{N}}{N}}{-N} \]
    7. metadata-eval 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \left(-\frac{\color{blue}{0.28125}}{N}\right)}{N}}{N}}{N}}{N}}{-N} \]
    8. distribute-neg-frac 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \color{blue}{\frac{-0.28125}{N}}}{N}}{N}}{N}}{N}}{-N} \]
    9. metadata-eval 96.2%

      \[\leadsto \frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{\color{blue}{-0.28125}}{N}}{N}}{N}}{N}}{N}}{-N} \]
  14. Simplified 96.2%

    \[\leadsto \frac{-1 - \frac{-0.5 + \frac{\color{blue}{0.3333333333333333 - \frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N}}}{N}}{N}}{-N} \]
  15. Final simplification 96.2%

    \[\leadsto \frac{\frac{-0.5 - \frac{\frac{0.25 - \frac{0.375 + \frac{-0.28125}{N}}{N}}{N} - 0.3333333333333333}{N}}{N} - -1}{N} \]
  16. Add Preprocessing

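The final expression is a Horner-style truncation of the series log(1 + 1/N) = 1/N − 1/(2N²) + 1/(3N³) − 1/(4N⁴) + …, so its accuracy for large N can be checked against log1p directly (a hand-written check, not part of the report):

```python
import math

def alt4(N):
    # Alternative 4: nested truncated series for log(1 + 1/N)
    return (((-0.5 - ((((0.25 - ((0.375 + (-0.28125 / N)) / N)) / N)
            - 0.3333333333333333) / N)) / N) - -1.0) / N

# The truncation error shrinks like 1/N^5, so for large N the result
# agrees with log1p(1/N) essentially to double precision.
for N in (1e4, 1e6, 1e10):
    assert math.isclose(alt4(N), math.log1p(1.0 / N), rel_tol=1e-12)
```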
Alternative 5: 96.3% accurate, 13.7× speedup

\[\begin{array}{l} \\ \frac{1 + \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{N} \end{array} \]
(FPCore (N)
 :precision binary64
 (/ (+ 1.0 (/ (+ -0.5 (/ (+ 0.3333333333333333 (/ -0.25 N)) N)) N)) N))
double code(double N) {
	return (1.0 + ((-0.5 + ((0.3333333333333333 + (-0.25 / N)) / N)) / N)) / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = (1.0d0 + (((-0.5d0) + ((0.3333333333333333d0 + ((-0.25d0) / n)) / n)) / n)) / n
end function
public static double code(double N) {
	return (1.0 + ((-0.5 + ((0.3333333333333333 + (-0.25 / N)) / N)) / N)) / N;
}
def code(N):
	return (1.0 + ((-0.5 + ((0.3333333333333333 + (-0.25 / N)) / N)) / N)) / N
function code(N)
	return Float64(Float64(1.0 + Float64(Float64(-0.5 + Float64(Float64(0.3333333333333333 + Float64(-0.25 / N)) / N)) / N)) / N)
end
function tmp = code(N)
	tmp = (1.0 + ((-0.5 + ((0.3333333333333333 + (-0.25 / N)) / N)) / N)) / N;
end
code[N_] := N[(N[(1.0 + N[(N[(-0.5 + N[(N[(0.3333333333333333 + N[(-0.25 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{1 + \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{N}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around -inf 96.2%

    \[\leadsto \color{blue}{-1 \cdot \frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
  6. Step-by-step derivation
    1. mul-1-neg 96.2%

      \[\leadsto \color{blue}{-\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{N}} \]
    2. distribute-neg-frac2 96.2%

      \[\leadsto \color{blue}{\frac{-1 \cdot \frac{-1 \cdot \frac{0.25 \cdot \frac{1}{N} - 0.3333333333333333}{N} - 0.5}{N} - 1}{-N}} \]
  7. Simplified 96.2%

    \[\leadsto \color{blue}{\frac{-1 - \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{-N}} \]
  8. Taylor expanded in N around inf 96.2%

    \[\leadsto \color{blue}{\frac{\left(1 + \frac{0.3333333333333333}{{N}^{2}}\right) - \left(0.5 \cdot \frac{1}{N} + 0.25 \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
  9. Simplified 96.2%

    \[\leadsto \color{blue}{\frac{1 + \frac{\frac{0.3333333333333333 + \frac{-0.25}{N}}{N} + -0.5}{N}}{N}} \]
  10. Final simplification 96.2%

    \[\leadsto \frac{1 + \frac{-0.5 + \frac{0.3333333333333333 + \frac{-0.25}{N}}{N}}{N}}{N} \]
  11. Add Preprocessing

Alternative 6: 95.1% accurate, 18.6× speedup

\[\begin{array}{l} \\ \frac{1 + \frac{-0.5 + \frac{0.3333333333333333}{N}}{N}}{N} \end{array} \]
(FPCore (N)
 :precision binary64
 (/ (+ 1.0 (/ (+ -0.5 (/ 0.3333333333333333 N)) N)) N))
double code(double N) {
	return (1.0 + ((-0.5 + (0.3333333333333333 / N)) / N)) / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = (1.0d0 + (((-0.5d0) + (0.3333333333333333d0 / n)) / n)) / n
end function
public static double code(double N) {
	return (1.0 + ((-0.5 + (0.3333333333333333 / N)) / N)) / N;
}
def code(N):
	return (1.0 + ((-0.5 + (0.3333333333333333 / N)) / N)) / N
function code(N)
	return Float64(Float64(1.0 + Float64(Float64(-0.5 + Float64(0.3333333333333333 / N)) / N)) / N)
end
function tmp = code(N)
	tmp = (1.0 + ((-0.5 + (0.3333333333333333 / N)) / N)) / N;
end
code[N_] := N[(N[(1.0 + N[(N[(-0.5 + N[(0.3333333333333333 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{1 + \frac{-0.5 + \frac{0.3333333333333333}{N}}{N}}{N}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around inf 94.9%

    \[\leadsto \color{blue}{\frac{\left(1 + \frac{0.3333333333333333}{{N}^{2}}\right) - 0.5 \cdot \frac{1}{N}}{N}} \]
  6. Step-by-step derivation
    1. associate--l+ 95.0%

      \[\leadsto \frac{\color{blue}{1 + \left(\frac{0.3333333333333333}{{N}^{2}} - 0.5 \cdot \frac{1}{N}\right)}}{N} \]
    2. unpow2 95.0%

      \[\leadsto \frac{1 + \left(\frac{0.3333333333333333}{\color{blue}{N \cdot N}} - 0.5 \cdot \frac{1}{N}\right)}{N} \]
    3. associate-/r* 95.0%

      \[\leadsto \frac{1 + \left(\color{blue}{\frac{\frac{0.3333333333333333}{N}}{N}} - 0.5 \cdot \frac{1}{N}\right)}{N} \]
    4. metadata-eval 95.0%

      \[\leadsto \frac{1 + \left(\frac{\frac{\color{blue}{0.3333333333333333 \cdot 1}}{N}}{N} - 0.5 \cdot \frac{1}{N}\right)}{N} \]
    5. associate-*r/ 95.0%

      \[\leadsto \frac{1 + \left(\frac{\color{blue}{0.3333333333333333 \cdot \frac{1}{N}}}{N} - 0.5 \cdot \frac{1}{N}\right)}{N} \]
    6. associate-*r/ 95.0%

      \[\leadsto \frac{1 + \left(\frac{0.3333333333333333 \cdot \frac{1}{N}}{N} - \color{blue}{\frac{0.5 \cdot 1}{N}}\right)}{N} \]
    7. metadata-eval 95.0%

      \[\leadsto \frac{1 + \left(\frac{0.3333333333333333 \cdot \frac{1}{N}}{N} - \frac{\color{blue}{0.5}}{N}\right)}{N} \]
    8. div-sub 95.0%

      \[\leadsto \frac{1 + \color{blue}{\frac{0.3333333333333333 \cdot \frac{1}{N} - 0.5}{N}}}{N} \]
    9. sub-neg 95.0%

      \[\leadsto \frac{1 + \frac{\color{blue}{0.3333333333333333 \cdot \frac{1}{N} + \left(-0.5\right)}}{N}}{N} \]
    10. metadata-eval 95.0%

      \[\leadsto \frac{1 + \frac{0.3333333333333333 \cdot \frac{1}{N} + \color{blue}{-0.5}}{N}}{N} \]
    11. +-commutative 95.0%

      \[\leadsto \frac{1 + \frac{\color{blue}{-0.5 + 0.3333333333333333 \cdot \frac{1}{N}}}{N}}{N} \]
    12. associate-*r/ 95.0%

      \[\leadsto \frac{1 + \frac{-0.5 + \color{blue}{\frac{0.3333333333333333 \cdot 1}{N}}}{N}}{N} \]
    13. metadata-eval 95.0%

      \[\leadsto \frac{1 + \frac{-0.5 + \frac{\color{blue}{0.3333333333333333}}{N}}{N}}{N} \]
  7. Simplified 95.0%

    \[\leadsto \color{blue}{\frac{1 + \frac{-0.5 + \frac{0.3333333333333333}{N}}{N}}{N}} \]
  8. Add Preprocessing

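Dropping one more series term is what costs this variant accuracy relative to Alternative 5: the first omitted term is −1/(4N⁴), leaving a relative error of roughly 1/(4N³). A quick illustration (hand-written, not part of the report):

```python
import math

def alt6(N):
    # Alternative 6: three-term series 1/N - 1/(2N^2) + 1/(3N^3)
    return (1.0 + ((-0.5 + (0.3333333333333333 / N)) / N)) / N

# The relative error is about 1/(4N^3): roughly 2.5e-7 at N = 100,
# far better than 1/N alone but short of full double precision.
N = 100.0
rel = abs(alt6(N) - math.log1p(1.0 / N)) / math.log1p(1.0 / N)
assert 1e-8 < rel < 1e-6
```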
Alternative 7: 92.5% accurate, 22.8× speedup

\[\begin{array}{l} \\ \frac{-1}{\frac{N}{-1 - \frac{-0.5}{N}}} \end{array} \]
(FPCore (N) :precision binary64 (/ -1.0 (/ N (- -1.0 (/ -0.5 N)))))
double code(double N) {
	return -1.0 / (N / (-1.0 - (-0.5 / N)));
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = (-1.0d0) / (n / ((-1.0d0) - ((-0.5d0) / n)))
end function
public static double code(double N) {
	return -1.0 / (N / (-1.0 - (-0.5 / N)));
}
def code(N):
	return -1.0 / (N / (-1.0 - (-0.5 / N)))
function code(N)
	return Float64(-1.0 / Float64(N / Float64(-1.0 - Float64(-0.5 / N))))
end
function tmp = code(N)
	tmp = -1.0 / (N / (-1.0 - (-0.5 / N)));
end
code[N_] := N[(-1.0 / N[(N / N[(-1.0 - N[(-0.5 / N), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{-1}{\frac{N}{-1 - \frac{-0.5}{N}}}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around inf 92.3%

    \[\leadsto \color{blue}{\frac{1 - 0.5 \cdot \frac{1}{N}}{N}} \]
  6. Step-by-step derivation
    1. associate-*r/ 92.3%

      \[\leadsto \frac{1 - \color{blue}{\frac{0.5 \cdot 1}{N}}}{N} \]
    2. metadata-eval 92.3%

      \[\leadsto \frac{1 - \frac{\color{blue}{0.5}}{N}}{N} \]
  7. Simplified 92.3%

    \[\leadsto \color{blue}{\frac{1 - \frac{0.5}{N}}{N}} \]
  8. Step-by-step derivation
    1. clear-num 92.3%

      \[\leadsto \color{blue}{\frac{1}{\frac{N}{1 - \frac{0.5}{N}}}} \]
    2. inv-pow 92.3%

      \[\leadsto \color{blue}{{\left(\frac{N}{1 - \frac{0.5}{N}}\right)}^{-1}} \]
  9. Applied egg-rr 92.3%

    \[\leadsto \color{blue}{{\left(\frac{N}{1 - \frac{0.5}{N}}\right)}^{-1}} \]
  10. Step-by-step derivation
    1. unpow-1 92.3%

      \[\leadsto \color{blue}{\frac{1}{\frac{N}{1 - \frac{0.5}{N}}}} \]
    2. sub-neg 92.3%

      \[\leadsto \frac{1}{\frac{N}{\color{blue}{1 + \left(-\frac{0.5}{N}\right)}}} \]
    3. distribute-neg-frac 92.3%

      \[\leadsto \frac{1}{\frac{N}{1 + \color{blue}{\frac{-0.5}{N}}}} \]
    4. metadata-eval 92.3%

      \[\leadsto \frac{1}{\frac{N}{1 + \frac{\color{blue}{-0.5}}{N}}} \]
  11. Simplified 92.3%

    \[\leadsto \color{blue}{\frac{1}{\frac{N}{1 + \frac{-0.5}{N}}}} \]
  12. Final simplification 92.3%

    \[\leadsto \frac{-1}{\frac{N}{-1 - \frac{-0.5}{N}}} \]
  13. Add Preprocessing

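Alternative 7 is an algebraic rearrangement (the "clear-num" step) of the plain two-term series that appears below as Alternative 8; the two forms can be checked for agreement numerically (a hand-written check, not part of the report):

```python
import math

def alt7(N):
    # Alternative 7: the rearranged ("clear-num") form
    return -1.0 / (N / (-1.0 - (-0.5 / N)))

def two_term(N):
    # The plain two-term series (1 - 0.5/N) / N
    return (1.0 - 0.5 / N) / N

# Algebraically identical; in floating point the two forms differ by
# at most a few ulps because the divisions round differently.
for N in (3.0, 1e3, 1e8):
    assert math.isclose(alt7(N), two_term(N), rel_tol=1e-14)
```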
Alternative 8: 92.5% accurate, 29.3× speedup

\[\begin{array}{l} \\ \frac{1 - \frac{0.5}{N}}{N} \end{array} \]
(FPCore (N) :precision binary64 (/ (- 1.0 (/ 0.5 N)) N))
double code(double N) {
	return (1.0 - (0.5 / N)) / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = (1.0d0 - (0.5d0 / n)) / n
end function
public static double code(double N) {
	return (1.0 - (0.5 / N)) / N;
}
def code(N):
	return (1.0 - (0.5 / N)) / N
function code(N)
	return Float64(Float64(1.0 - Float64(0.5 / N)) / N)
end
function tmp = code(N)
	tmp = (1.0 - (0.5 / N)) / N;
end
code[N_] := N[(N[(1.0 - N[(0.5 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{1 - \frac{0.5}{N}}{N}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around inf 92.3%

    \[\leadsto \color{blue}{\frac{1 - 0.5 \cdot \frac{1}{N}}{N}} \]
  6. Step-by-step derivation
    1. associate-*r/ 92.3%

      \[\leadsto \frac{1 - \color{blue}{\frac{0.5 \cdot 1}{N}}}{N} \]
    2. metadata-eval 92.3%

      \[\leadsto \frac{1 - \frac{\color{blue}{0.5}}{N}}{N} \]
  7. Simplified 92.3%

    \[\leadsto \color{blue}{\frac{1 - \frac{0.5}{N}}{N}} \]
  8. Add Preprocessing

Alternative 9: 84.4% accurate, 68.3× speedup

\[\begin{array}{l} \\ \frac{1}{N} \end{array} \]
(FPCore (N) :precision binary64 (/ 1.0 N))
double code(double N) {
	return 1.0 / N;
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = 1.0d0 / n
end function
public static double code(double N) {
	return 1.0 / N;
}
def code(N):
	return 1.0 / N
function code(N)
	return Float64(1.0 / N)
end
function tmp = code(N)
	tmp = 1.0 / N;
end
code[N_] := N[(1.0 / N), $MachinePrecision]
\begin{array}{l}

\\
\frac{1}{N}
\end{array}
Derivation
  1. Initial program 24.5%

    \[\log \left(N + 1\right) - \log N \]
  2. Step-by-step derivation
    1. +-commutative 24.5%

      \[\leadsto \log \color{blue}{\left(1 + N\right)} - \log N \]
    2. log1p-define 24.5%

      \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right)} - \log N \]
  3. Simplified 24.5%

    \[\leadsto \color{blue}{\mathsf{log1p}\left(N\right) - \log N} \]
  4. Add Preprocessing
  5. Taylor expanded in N around inf 83.9%

    \[\leadsto \color{blue}{\frac{1}{N}} \]
  6. Add Preprocessing

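Keeping only the leading series term trades most of the remaining accuracy for speed: the relative error of 1/N against the exact value is about 1/(2N). A quick check (hand-written, not part of the report):

```python
import math

# 1/N is the leading term of log(1 + 1/N) = 1/N - 1/(2N^2) + ...,
# so its relative error is about 1/(2N): small for huge N, but far
# from full double precision, which is why this variant scores lowest.
for N in (1e2, 1e4, 1e6):
    rel = abs(1.0 / N - math.log1p(1.0 / N)) / math.log1p(1.0 / N)
    assert 0.1 / N < rel < 1.0 / N
```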
Developer target: 99.8% accurate, 2.0× speedup

\[\begin{array}{l} \\ \mathsf{log1p}\left(\frac{1}{N}\right) \end{array} \]
(FPCore (N) :precision binary64 (log1p (/ 1.0 N)))
double code(double N) {
	return log1p((1.0 / N));
}
public static double code(double N) {
	return Math.log1p((1.0 / N));
}
def code(N):
	return math.log1p((1.0 / N))
function code(N)
	return log1p(Float64(1.0 / N))
end
code[N_] := N[Log[1 + N[(1.0 / N), $MachinePrecision]], $MachinePrecision]
\begin{array}{l}

\\
\mathsf{log1p}\left(\frac{1}{N}\right)
\end{array}

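The target is accurate because it sidesteps the cancellation in the original program: for large N, log(N + 1) and log N agree in almost every bit, so their difference retains almost no correct digits, while log1p(1/N) computes the small result directly. A quick demonstration (hand-written, not part of the report):

```python
import math

N = 1e15
naive = math.log(N + 1.0) - math.log(N)  # catastrophic cancellation
target = math.log1p(1.0 / N)             # developer target

# log(N+1) and log(N) are both about 34.5 but differ by only ~1e-15,
# less than one ulp at that magnitude (~7e-15), so the subtraction
# yields either 0 or a full-ulp-sized value -- badly wrong either way.
assert abs(naive - target) > 0.5 * target
# log1p(1/N) itself is essentially exact (here ~1/N to first order).
assert math.isclose(target, 1.0 / N, rel_tol=1e-9)
```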
Reproduce

herbie shell --seed 2024089 
(FPCore (N)
  :name "2log (problem 3.3.6)"
  :precision binary64
  :pre (and (> N 1.0) (< N 1e+40))

  :alt
  (log1p (/ 1.0 N))

  (- (log (+ N 1.0)) (log N)))