2log (problem 3.3.6)

Percentage Accurate: 23.5% → 99.5%
Time: 7.9s
Alternatives: 7
Speedup: 5.2×

Specification

\[N > 1 \land N < 10^{+40}\]
\[\begin{array}{l} \\ \log \left(N + 1\right) - \log N \end{array} \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]
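
The poor accuracy of the original expression comes from cancellation: for large N, log(N + 1) and log(N) agree in almost every bit, so their difference loses most of its significant digits. Below is a minimal Python sketch of the effect (helper names and sample inputs are mine), using math.log1p(1/N) as an accurate reference since mathematically log(N + 1) - log(N) = log(1 + 1/N); exact printed values depend on the platform's libm.

import math

def naive(N):
    # Original program: log(N + 1) - log(N); cancels badly once N is large.
    return math.log(N + 1.0) - math.log(N)

def reference(N):
    # log(N + 1) - log(N) == log(1 + 1/N); log1p avoids the cancellation.
    return math.log1p(1.0 / N)

for N in [2.0, 1e3, 1e9, 1e15]:
    print(N, naive(N), reference(N))

# Near N = 1e15 the true value is about 1e-15, but the naive difference can only
# be a multiple of the double-precision spacing around log(1e15) (about 7e-15),
# so it typically comes out as 0.0 or with an error of several hundred percent.
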

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs N

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 7 alternatives:

Alternative      Accuracy   Speedup
Alternative 1    99.5%      1.4×
Alternative 2    99.4%      1.4×
Alternative 3    96.7%      1.4×
Alternative 4    96.5%      1.6×
Alternative 5    93.0%      2.0×
Alternative 6    84.6%      2.0×
Alternative 7    95.0%      5.2×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 23.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \log \left(N + 1\right) - \log N \end{array} \]
(FPCore (N) :precision binary64 (- (log (+ N 1.0)) (log N)))
double code(double N) {
	return log((N + 1.0)) - log(N);
}
real(8) function code(n)
    real(8), intent (in) :: n
    code = log((n + 1.0d0)) - log(n)
end function
public static double code(double N) {
	return Math.log((N + 1.0)) - Math.log(N);
}
def code(N):
	return math.log((N + 1.0)) - math.log(N)
function code(N)
	return Float64(log(Float64(N + 1.0)) - log(N))
end
function tmp = code(N)
	tmp = log((N + 1.0)) - log(N);
end
code[N_] := N[(N[Log[N[(N + 1.0), $MachinePrecision]], $MachinePrecision] - N[Log[N], $MachinePrecision]), $MachinePrecision]

Alternative 1: 99.5% accurate, 1.4× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;N \leq 940:\\ \;\;\;\;-\log \left(\frac{N}{1 + N}\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1}\\ \end{array} \end{array} \]
(FPCore (N)
 :precision binary64
 (if (<= N 940.0)
   (- (log (/ N (+ 1.0 N))))
   (pow
    (fma
     (/ (- 0.5 (/ (- 0.08333333333333333 (/ 0.041666666666666664 N)) N)) N)
     N
     N)
    -1.0)))
double code(double N) {
	double tmp;
	if (N <= 940.0) {
		tmp = -log((N / (1.0 + N)));
	} else {
		tmp = pow(fma(((0.5 - ((0.08333333333333333 - (0.041666666666666664 / N)) / N)) / N), N, N), -1.0);
	}
	return tmp;
}
function code(N)
	tmp = 0.0
	if (N <= 940.0)
		tmp = Float64(-log(Float64(N / Float64(1.0 + N))));
	else
		tmp = fma(Float64(Float64(0.5 - Float64(Float64(0.08333333333333333 - Float64(0.041666666666666664 / N)) / N)) / N), N, N) ^ -1.0;
	end
	return tmp
end
code[N_] := If[LessEqual[N, 940.0], (-N[Log[N[(N / N[(1.0 + N), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]), N[Power[N[(N[(N[(0.5 - N[(N[(0.08333333333333333 - N[(0.041666666666666664 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] * N + N), $MachinePrecision], -1.0], $MachinePrecision]]
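
As a sanity check, here is a Python sketch of Alternative 1 compared against math.log1p(1/N) (the reference choice and test inputs are mine). math.fma is only available in very recent Python versions, so the else branch below substitutes an ordinary multiply-add for the report's fma call, which perturbs only the last bits.

import math

def alt1(N):
    if N <= 940.0:
        return -math.log(N / (1.0 + N))
    # Series branch; the report writes fma(t, N, N), approximated here as t*N + N.
    t = (0.5 - (0.08333333333333333 - 0.041666666666666664 / N) / N) / N
    return (t * N + N) ** -1.0

for N in [1.5, 10.0, 940.0, 1e6, 1e20, 1e39]:
    ref = math.log1p(1.0 / N)
    print(N, alt1(N), abs(alt1(N) - ref) / ref)  # relative error stays small
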
Derivation
  1. Split input into 2 regimes
  2. if N < 940

    1. Initial program 90.8%

      \[\log \left(N + 1\right) - \log N \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. lift--.f64 (N/A)

        \[\leadsto \color{blue}{\log \left(N + 1\right) - \log N} \]
      2. lift-log.f64 (N/A)

        \[\leadsto \color{blue}{\log \left(N + 1\right)} - \log N \]
      3. lift-log.f64 (N/A)

        \[\leadsto \log \left(N + 1\right) - \color{blue}{\log N} \]
      4. diff-log (N/A)

        \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
      5. clear-num (N/A)

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{N}{N + 1}}\right)} \]
      6. clear-num (N/A)

        \[\leadsto \log \color{blue}{\left(\frac{1}{\frac{\frac{N}{N + 1}}{1}}\right)} \]
      7. log-rec (N/A)

        \[\leadsto \color{blue}{\mathsf{neg}\left(\log \left(\frac{\frac{N}{N + 1}}{1}\right)\right)} \]
      8. lower-neg.f64 (N/A)

        \[\leadsto \color{blue}{-\log \left(\frac{\frac{N}{N + 1}}{1}\right)} \]
      9. lower-log.f64 (N/A)

        \[\leadsto -\color{blue}{\log \left(\frac{\frac{N}{N + 1}}{1}\right)} \]
      10. lower-/.f64 (N/A)

        \[\leadsto -\log \color{blue}{\left(\frac{\frac{N}{N + 1}}{1}\right)} \]
      11. lower-/.f64 (94.7%)

        \[\leadsto -\log \left(\frac{\color{blue}{\frac{N}{N + 1}}}{1}\right) \]
      12. lift-+.f64 (N/A)

        \[\leadsto -\log \left(\frac{\frac{N}{\color{blue}{N + 1}}}{1}\right) \]
      13. +-commutative (N/A)

        \[\leadsto -\log \left(\frac{\frac{N}{\color{blue}{1 + N}}}{1}\right) \]
      14. lower-+.f64 (94.7%)

        \[\leadsto -\log \left(\frac{\frac{N}{\color{blue}{1 + N}}}{1}\right) \]
    4. Applied rewrites (94.7%)

      \[\leadsto \color{blue}{-\log \left(\frac{\frac{N}{1 + N}}{1}\right)} \]
    5. Step-by-step derivation
      1. lift-/.f64 (N/A)

        \[\leadsto -\log \color{blue}{\left(\frac{\frac{N}{1 + N}}{1}\right)} \]
      2. /-rgt-identity (94.7%)

        \[\leadsto -\log \color{blue}{\left(\frac{N}{1 + N}\right)} \]
    6. Applied rewrites (94.7%)

      \[\leadsto -\log \color{blue}{\left(\frac{N}{1 + N}\right)} \]

    if 940 < N

    1. Initial program 16.4%

      \[\log \left(N + 1\right) - \log N \]
    2. Add Preprocessing
    3. Taylor expanded in N around inf (the series behind this step is sketched after this derivation)

      \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \left(\frac{1}{2} \cdot \frac{1}{N} + \frac{1}{4} \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
    4. Applied rewrites (99.8%)

      \[\leadsto \color{blue}{\frac{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}{N}} \]
    5. Step-by-step derivation
      1. Applied rewrites (99.9%)

        \[\leadsto \frac{1}{\color{blue}{\frac{N}{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}}} \]
      2. Taylor expanded in N around -inf

        \[\leadsto \frac{1}{-1 \cdot \color{blue}{\left(N \cdot \left(-1 \cdot \frac{\frac{1}{2} + -1 \cdot \frac{\frac{1}{12} - \frac{1}{24} \cdot \frac{1}{N}}{N}}{N} - 1\right)\right)}} \]
      3. Step-by-step derivation
        1. Applied rewrites (99.8%)

          \[\leadsto \frac{1}{\left(-N\right) \cdot \color{blue}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, -1, -1\right)}} \]
        2. Step-by-step derivation
          1. Applied rewrites (99.9%)

            \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)} \]
        3. Recombined 2 regimes into one program.
          4. Final simplification (99.5%)

          \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 940:\\ \;\;\;\;-\log \left(\frac{N}{1 + N}\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1}\\ \end{array} \]
        5. Add Preprocessing
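
For reference, the Taylor expansions in this derivation all come from the identity log(N + 1) - log(N) = log(1 + 1/N). A sketch of the algebra behind the else branch (my own summary, not part of Herbie's output):

\[\log \left(N + 1\right) - \log N = \log \left(1 + \frac{1}{N}\right) = \frac{1}{N} - \frac{1}{2 N^{2}} + \frac{1}{3 N^{3}} - \frac{1}{4 N^{4}} + O\left(N^{-5}\right)\]

\[\frac{1}{N + \frac{1}{2} - \frac{1}{12 N} + \frac{1}{24 N^{2}}} = \frac{1}{N} - \frac{1}{2 N^{2}} + \frac{1}{3 N^{3}} - \frac{1}{4 N^{4}} + O\left(N^{-5}\right)\]

The denominator of the second form is exactly what the fma call computes: fma(t, N, N) = t·N + N with t = (0.5 - (1/12 - 1/(24·N))/N)/N, so the else branch reproduces the series through the 1/N^4 term using one reciprocal instead of a logarithm.
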

        Alternative 2: 99.4% accurate, 1.4× speedup

        \[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;N \leq 780:\\ \;\;\;\;\log \left(\frac{1 + N}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1}\\ \end{array} \end{array} \]
        (FPCore (N)
         :precision binary64
         (if (<= N 780.0)
           (log (/ (+ 1.0 N) N))
           (pow
            (fma
             (/ (- 0.5 (/ (- 0.08333333333333333 (/ 0.041666666666666664 N)) N)) N)
             N
             N)
            -1.0)))
        double code(double N) {
        	double tmp;
        	if (N <= 780.0) {
        		tmp = log(((1.0 + N) / N));
        	} else {
        		tmp = pow(fma(((0.5 - ((0.08333333333333333 - (0.041666666666666664 / N)) / N)) / N), N, N), -1.0);
        	}
        	return tmp;
        }
        
        function code(N)
        	tmp = 0.0
        	if (N <= 780.0)
        		tmp = log(Float64(Float64(1.0 + N) / N));
        	else
        		tmp = fma(Float64(Float64(0.5 - Float64(Float64(0.08333333333333333 - Float64(0.041666666666666664 / N)) / N)) / N), N, N) ^ -1.0;
        	end
        	return tmp
        end
        
        code[N_] := If[LessEqual[N, 780.0], N[Log[N[(N[(1.0 + N), $MachinePrecision] / N), $MachinePrecision]], $MachinePrecision], N[Power[N[(N[(N[(0.5 - N[(N[(0.08333333333333333 - N[(0.041666666666666664 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] * N + N), $MachinePrecision], -1.0], $MachinePrecision]]
        
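
Alternative 2 differs from Alternative 1 only in the log branch and its split point: log((1 + N)/N) and -log(N/(1 + N)) are mathematically equal but round the inner quotient in different directions. A small Python sketch of the two branch forms (the sample inputs are mine; the split points 780 and 940 come from the report):

import math

def branch_alt1(N):
    return -math.log(N / (1.0 + N))   # Alternative 1, used for N <= 940

def branch_alt2(N):
    return math.log((1.0 + N) / N)    # Alternative 2, used for N <= 780

for N in [1.5, 10.0, 100.0, 700.0]:
    print(N, branch_alt1(N), branch_alt2(N), math.log1p(1.0 / N))
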
        
        Derivation
        1. Split input into 2 regimes
        2. if N < 780

          1. Initial program 90.8%

            \[\log \left(N + 1\right) - \log N \]
          2. Add Preprocessing
          3. Step-by-step derivation
            1. lift--.f64 (N/A)

              \[\leadsto \color{blue}{\log \left(N + 1\right) - \log N} \]
            2. lift-log.f64 (N/A)

              \[\leadsto \color{blue}{\log \left(N + 1\right)} - \log N \]
            3. lift-log.f64 (N/A)

              \[\leadsto \log \left(N + 1\right) - \color{blue}{\log N} \]
            4. diff-log (N/A)

              \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
            5. lower-log.f64 (N/A)

              \[\leadsto \color{blue}{\log \left(\frac{N + 1}{N}\right)} \]
            6. lower-/.f64 (93.3%)

              \[\leadsto \log \color{blue}{\left(\frac{N + 1}{N}\right)} \]
            7. lift-+.f64 (N/A)

              \[\leadsto \log \left(\frac{\color{blue}{N + 1}}{N}\right) \]
            8. +-commutative (N/A)

              \[\leadsto \log \left(\frac{\color{blue}{1 + N}}{N}\right) \]
            9. lower-+.f64 (93.3%)

              \[\leadsto \log \left(\frac{\color{blue}{1 + N}}{N}\right) \]
          4. Applied rewrites (93.3%)

            \[\leadsto \color{blue}{\log \left(\frac{1 + N}{N}\right)} \]

          if 780 < N

          1. Initial program 16.4%

            \[\log \left(N + 1\right) - \log N \]
          2. Add Preprocessing
          3. Taylor expanded in N around inf

            \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \left(\frac{1}{2} \cdot \frac{1}{N} + \frac{1}{4} \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
          4. Applied rewrites (99.8%)

            \[\leadsto \color{blue}{\frac{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}{N}} \]
          5. Step-by-step derivation
            1. Applied rewrites (99.9%)

              \[\leadsto \frac{1}{\color{blue}{\frac{N}{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}}} \]
            2. Taylor expanded in N around -inf

              \[\leadsto \frac{1}{-1 \cdot \color{blue}{\left(N \cdot \left(-1 \cdot \frac{\frac{1}{2} + -1 \cdot \frac{\frac{1}{12} - \frac{1}{24} \cdot \frac{1}{N}}{N}}{N} - 1\right)\right)}} \]
            3. Step-by-step derivation
              1. Applied rewrites (99.8%)

                \[\leadsto \frac{1}{\left(-N\right) \cdot \color{blue}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, -1, -1\right)}} \]
              2. Step-by-step derivation
                1. Applied rewrites (99.9%)

                  \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)} \]
              3. Recombined 2 regimes into one program.
                4. Final simplification (99.4%)

                \[\leadsto \begin{array}{l} \mathbf{if}\;N \leq 780:\\ \;\;\;\;\log \left(\frac{1 + N}{N}\right)\\ \mathbf{else}:\\ \;\;\;\;{\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1}\\ \end{array} \]
              5. Add Preprocessing

              Alternative 3: 96.7% accurate, 1.4× speedup

              \[\begin{array}{l} \\ {\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1} \end{array} \]
              (FPCore (N)
               :precision binary64
               (pow
                (fma
                 (/ (- 0.5 (/ (- 0.08333333333333333 (/ 0.041666666666666664 N)) N)) N)
                 N
                 N)
                -1.0))
              double code(double N) {
              	return pow(fma(((0.5 - ((0.08333333333333333 - (0.041666666666666664 / N)) / N)) / N), N, N), -1.0);
              }
              
              function code(N)
              	return fma(Float64(Float64(0.5 - Float64(Float64(0.08333333333333333 - Float64(0.041666666666666664 / N)) / N)) / N), N, N) ^ -1.0
              end
              
              code[N_] := N[Power[N[(N[(N[(0.5 - N[(N[(0.08333333333333333 - N[(0.041666666666666664 / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision]), $MachinePrecision] / N), $MachinePrecision] * N + N), $MachinePrecision], -1.0], $MachinePrecision]
              
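
Alternative 3 drops the branch and always evaluates the series, which is why its overall accuracy (96.7%) trails the branched versions: the expansion is asymptotic in 1/N and degrades as N approaches 1. A Python sketch (an ordinary multiply-add stands in for the report's fma, and math.log1p is my choice of reference):

import math

def alt3(N):
    t = (0.5 - (0.08333333333333333 - 0.041666666666666664 / N) / N) / N
    return (t * N + N) ** -1.0

for N in [1.5, 5.0, 50.0, 1e3, 1e9]:
    ref = math.log1p(1.0 / N)
    print(N, alt3(N), abs(alt3(N) - ref) / ref)

# The relative error falls off roughly like 1/N**4 for large N, so it is visible
# near N = 1.5 and negligible once N is in the hundreds.
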
              
              Derivation
              1. Initial program 22.2%

                \[\log \left(N + 1\right) - \log N \]
              2. Add Preprocessing
              3. Taylor expanded in N around inf

                \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \left(\frac{1}{2} \cdot \frac{1}{N} + \frac{1}{4} \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
              4. Applied rewrites (96.7%)

                \[\leadsto \color{blue}{\frac{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}{N}} \]
              5. Step-by-step derivation
                1. Applied rewrites (96.7%)

                  \[\leadsto \frac{1}{\color{blue}{\frac{N}{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}}} \]
                2. Taylor expanded in N around -inf

                  \[\leadsto \frac{1}{-1 \cdot \color{blue}{\left(N \cdot \left(-1 \cdot \frac{\frac{1}{2} + -1 \cdot \frac{\frac{1}{12} - \frac{1}{24} \cdot \frac{1}{N}}{N}}{N} - 1\right)\right)}} \]
                3. Step-by-step derivation
                  1. Applied rewrites (97.0%)

                    \[\leadsto \frac{1}{\left(-N\right) \cdot \color{blue}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, -1, -1\right)}} \]
                  2. Step-by-step derivation
                    1. Applied rewrites (97.1%)

                      \[\leadsto \frac{1}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)} \]
                    2. Final simplification (97.1%)

                      \[\leadsto {\left(\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, N, N\right)\right)}^{-1} \]
                    3. Add Preprocessing

                    Alternative 4: 96.5% accurate, 1.6× speedup

                    \[\begin{array}{l} \\ {\left(\frac{\mathsf{fma}\left(\mathsf{fma}\left(0.5 + N, N, -0.08333333333333333\right), N, 0.041666666666666664\right)}{N \cdot N}\right)}^{-1} \end{array} \]
                    (FPCore (N)
                     :precision binary64
                     (pow
                      (/
                       (fma (fma (+ 0.5 N) N -0.08333333333333333) N 0.041666666666666664)
                       (* N N))
                      -1.0))
                    double code(double N) {
                    	return pow((fma(fma((0.5 + N), N, -0.08333333333333333), N, 0.041666666666666664) / (N * N)), -1.0);
                    }
                    
                    function code(N)
                    	return Float64(fma(fma(Float64(0.5 + N), N, -0.08333333333333333), N, 0.041666666666666664) / Float64(N * N)) ^ -1.0
                    end
                    
                    code[N_] := N[Power[N[(N[(N[(N[(0.5 + N), $MachinePrecision] * N + -0.08333333333333333), $MachinePrecision] * N + 0.041666666666666664), $MachinePrecision] / N[(N * N), $MachinePrecision]), $MachinePrecision], -1.0], $MachinePrecision]
                    
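
Alternative 4 is the same fourth-order approximation as Alternative 3, rearranged to clear the nested divisions: multiplying the denominator through by N² leaves a cubic polynomial that two chained fma calls can evaluate, divided by N·N. A sketch of the identity (my rearrangement, not from the report):

\[N + \frac{1}{2} - \frac{1}{12 N} + \frac{1}{24 N^{2}} = \frac{N^{3} + \frac{1}{2} N^{2} - \frac{1}{12} N + \frac{1}{24}}{N^{2}} = \frac{\mathsf{fma}\left(\mathsf{fma}\left(\frac{1}{2} + N, N, -\frac{1}{12}\right), N, \frac{1}{24}\right)}{N \cdot N}\]

So Alternatives 3 and 4 compute the same rational function; they differ only in rounding behavior and in the cost of the operations involved.
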
                    
                    Derivation
                    1. Initial program 22.2%

                      \[\log \left(N + 1\right) - \log N \]
                    2. Add Preprocessing
                    3. Taylor expanded in N around inf

                      \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \left(\frac{1}{2} \cdot \frac{1}{N} + \frac{1}{4} \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
                    4. Applied rewrites (96.7%)

                      \[\leadsto \color{blue}{\frac{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}{N}} \]
                    5. Step-by-step derivation
                      1. Applied rewrites (96.7%)

                        \[\leadsto \frac{1}{\color{blue}{\frac{N}{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}}} \]
                      2. Taylor expanded in N around -inf

                        \[\leadsto \frac{1}{-1 \cdot \color{blue}{\left(N \cdot \left(-1 \cdot \frac{\frac{1}{2} + -1 \cdot \frac{\frac{1}{12} - \frac{1}{24} \cdot \frac{1}{N}}{N}}{N} - 1\right)\right)}} \]
                      3. Step-by-step derivation
                        1. Applied rewrites (97.0%)

                          \[\leadsto \frac{1}{\left(-N\right) \cdot \color{blue}{\mathsf{fma}\left(\frac{0.5 - \frac{0.08333333333333333 - \frac{0.041666666666666664}{N}}{N}}{N}, -1, -1\right)}} \]
                        2. Taylor expanded in N around 0

                          \[\leadsto \frac{1}{\frac{\frac{1}{24} + N \cdot \left(N \cdot \left(\frac{1}{2} + N\right) - \frac{1}{12}\right)}{{N}^{\color{blue}{2}}}} \]
                        3. Step-by-step derivation
                          1. Applied rewrites (96.9%)

                            \[\leadsto \frac{1}{\frac{\mathsf{fma}\left(\mathsf{fma}\left(0.5 + N, N, -0.08333333333333333\right), N, 0.041666666666666664\right)}{N \cdot \color{blue}{N}}} \]
                          2. Final simplification (96.9%)

                            \[\leadsto {\left(\frac{\mathsf{fma}\left(\mathsf{fma}\left(0.5 + N, N, -0.08333333333333333\right), N, 0.041666666666666664\right)}{N \cdot N}\right)}^{-1} \]
                          3. Add Preprocessing

                          Alternative 5: 93.0% accurate, 2.0× speedup

                          \[\begin{array}{l} \\ {\left(0.5 + N\right)}^{-1} \end{array} \]
                          (FPCore (N) :precision binary64 (pow (+ 0.5 N) -1.0))
                          double code(double N) {
                          	return pow((0.5 + N), -1.0);
                          }
                          
                          real(8) function code(n)
                              real(8), intent (in) :: n
                              code = (0.5d0 + n) ** (-1.0d0)
                          end function
                          
                          public static double code(double N) {
                          	return Math.pow((0.5 + N), -1.0);
                          }
                          
                          def code(N):
                          	return math.pow((0.5 + N), -1.0)
                          
                          function code(N)
                          	return Float64(0.5 + N) ^ -1.0
                          end
                          
                          function tmp = code(N)
                          	tmp = (0.5 + N) ^ -1.0;
                          end
                          
                          code[N_] := N[Power[N[(0.5 + N), $MachinePrecision], -1.0], $MachinePrecision]
                          
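
Alternative 5 keeps only the leading correction, 1/(N + 1/2). Expanding it shows agreement with the true series through the 1/N² term, with the first discrepancy at 1/N³ (a rough estimate of my own, valid for large N):

\[\frac{1}{N + \frac{1}{2}} = \frac{1}{N} - \frac{1}{2 N^{2}} + \frac{1}{4 N^{3}} - \cdots, \qquad \log \left(1 + \frac{1}{N}\right) - \frac{1}{N + \frac{1}{2}} \approx \frac{1}{12 N^{3}}\]

That is a relative error of roughly 1/(12 N²), which explains why this very cheap form (2.0× speedup) loses accuracy at moderate N.
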
                          
                          Derivation
                          1. Initial program 22.2%

                            \[\log \left(N + 1\right) - \log N \]
                          2. Add Preprocessing
                          3. Taylor expanded in N around inf

                            \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \left(\frac{1}{2} \cdot \frac{1}{N} + \frac{1}{4} \cdot \frac{1}{{N}^{3}}\right)}{N}} \]
                          4. Applied rewrites (96.7%)

                            \[\leadsto \color{blue}{\frac{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}{N}} \]
                          5. Step-by-step derivation
                            1. Applied rewrites (96.7%)

                              \[\leadsto \frac{1}{\color{blue}{\frac{N}{\frac{-0.5 - \frac{\frac{0.25}{N} - 0.3333333333333333}{N}}{N} - -1}}} \]
                            2. Taylor expanded in N around inf

                              \[\leadsto \frac{1}{N \cdot \color{blue}{\left(1 + \frac{1}{2} \cdot \frac{1}{N}\right)}} \]
                            3. Step-by-step derivation
                              1. Applied rewrites (94.0%)

                                \[\leadsto \frac{1}{\left(\frac{0.5}{N} + 1\right) \cdot \color{blue}{N}} \]
                              2. Taylor expanded in N around 0

                                \[\leadsto \frac{1}{\frac{1}{2} + N} \]
                              3. Step-by-step derivation
                                1. Applied rewrites (94.1%)

                                  \[\leadsto \frac{1}{0.5 + N} \]
                                2. Final simplification (94.1%)

                                  \[\leadsto {\left(0.5 + N\right)}^{-1} \]
                                3. Add Preprocessing

                                Alternative 6: 84.6% accurate, 2.0× speedup

                                \[\begin{array}{l} \\ {N}^{-1} \end{array} \]
                                (FPCore (N) :precision binary64 (pow N -1.0))
                                double code(double N) {
                                	return pow(N, -1.0);
                                }
                                
                                real(8) function code(n)
                                    real(8), intent (in) :: n
                                    code = n ** (-1.0d0)
                                end function
                                
                                public static double code(double N) {
                                	return Math.pow(N, -1.0);
                                }
                                
                                def code(N):
                                	return math.pow(N, -1.0)
                                
                                function code(N)
                                	return N ^ -1.0
                                end
                                
                                function tmp = code(N)
                                	tmp = N ^ -1.0;
                                end
                                
                                code[N_] := N[Power[N, -1.0], $MachinePrecision]
                                
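
Alternative 6 keeps only the leading 1/N term; the dropped 1/(2N²) term makes the relative error roughly 1/(2N) (a rough estimate of my own, valid for large N), which is consistent with this being the least accurate alternative in the table:

\[\log \left(1 + \frac{1}{N}\right) - \frac{1}{N} \approx -\frac{1}{2 N^{2}}, \qquad \text{relative error} \approx \frac{1}{2 N}\]
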
                                
                                Derivation
                                1. Initial program 22.2%

                                  \[\log \left(N + 1\right) - \log N \]
                                2. Add Preprocessing
                                3. Taylor expanded in N around inf

                                  \[\leadsto \color{blue}{\frac{1}{N}} \]
                                4. Step-by-step derivation
                                  1. lower-/.f64 (85.9%)

                                    \[\leadsto \color{blue}{\frac{1}{N}} \]
                                5. Applied rewrites (85.9%)

                                  \[\leadsto \color{blue}{\frac{1}{N}} \]
                                6. Final simplification (85.9%)

                                  \[\leadsto {N}^{-1} \]
                                7. Add Preprocessing

                                Alternative 7: 95.0% accurate, 5.2× speedup

                                \[\begin{array}{l} \\ \frac{\frac{\frac{0.3333333333333333}{N} - 0.5}{N} - -1}{N} \end{array} \]
                                (FPCore (N)
                                 :precision binary64
                                 (/ (- (/ (- (/ 0.3333333333333333 N) 0.5) N) -1.0) N))
                                double code(double N) {
                                	return ((((0.3333333333333333 / N) - 0.5) / N) - -1.0) / N;
                                }
                                
                                real(8) function code(n)
                                    real(8), intent (in) :: n
                                    code = ((((0.3333333333333333d0 / n) - 0.5d0) / n) - (-1.0d0)) / n
                                end function
                                
                                public static double code(double N) {
                                	return ((((0.3333333333333333 / N) - 0.5) / N) - -1.0) / N;
                                }
                                
                                def code(N):
                                	return ((((0.3333333333333333 / N) - 0.5) / N) - -1.0) / N
                                
                                function code(N)
                                	return Float64(Float64(Float64(Float64(Float64(0.3333333333333333 / N) - 0.5) / N) - -1.0) / N)
                                end
                                
                                function tmp = code(N)
                                	tmp = ((((0.3333333333333333 / N) - 0.5) / N) - -1.0) / N;
                                end
                                
                                code[N_] := N[(N[(N[(N[(N[(0.3333333333333333 / N), $MachinePrecision] - 0.5), $MachinePrecision] / N), $MachinePrecision] - -1.0), $MachinePrecision] / N), $MachinePrecision]
                                
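
Alternative 7 evaluates a third-order series with plain divisions and no branch, pow, or fma, which is presumably where the 5.2× speedup comes from; like Alternative 3 it loses accuracy for small N. A Python sketch with math.log1p as my choice of reference:

import math

def alt7(N):
    # ((1/3 / N - 1/2) / N + 1) / N, written exactly as in the report
    return ((0.3333333333333333 / N - 0.5) / N - -1.0) / N

for N in [2.0, 10.0, 1e3, 1e12]:
    ref = math.log1p(1.0 / N)
    print(N, alt7(N), abs(alt7(N) - ref) / ref)
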
                                
                                Derivation
                                1. Initial program 22.2%

                                  \[\log \left(N + 1\right) - \log N \]
                                2. Add Preprocessing
                                3. Taylor expanded in N around inf

                                  \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \frac{1}{2} \cdot \frac{1}{N}}{N}} \]
                                4. Step-by-step derivation
                                  1. lower-/.f64 (N/A)

                                    \[\leadsto \color{blue}{\frac{\left(1 + \frac{\frac{1}{3}}{{N}^{2}}\right) - \frac{1}{2} \cdot \frac{1}{N}}{N}} \]
                                5. Applied rewrites (95.4%)

                                  \[\leadsto \color{blue}{\frac{\frac{\frac{0.3333333333333333}{N} - 0.5}{N} - -1}{N}} \]
                                6. Add Preprocessing

                                Developer Target 1: 96.1% accurate, 0.6× speedup

                                \[\begin{array}{l} \\ \left(\left(\frac{1}{N} + \frac{-1}{2 \cdot {N}^{2}}\right) + \frac{1}{3 \cdot {N}^{3}}\right) + \frac{-1}{4 \cdot {N}^{4}} \end{array} \]
                                (FPCore (N)
                                 :precision binary64
                                 (+
                                  (+ (+ (/ 1.0 N) (/ -1.0 (* 2.0 (pow N 2.0)))) (/ 1.0 (* 3.0 (pow N 3.0))))
                                  (/ -1.0 (* 4.0 (pow N 4.0)))))
                                double code(double N) {
                                	return (((1.0 / N) + (-1.0 / (2.0 * pow(N, 2.0)))) + (1.0 / (3.0 * pow(N, 3.0)))) + (-1.0 / (4.0 * pow(N, 4.0)));
                                }
                                
                                real(8) function code(n)
                                    real(8), intent (in) :: n
                                    code = (((1.0d0 / n) + ((-1.0d0) / (2.0d0 * (n ** 2.0d0)))) + (1.0d0 / (3.0d0 * (n ** 3.0d0)))) + ((-1.0d0) / (4.0d0 * (n ** 4.0d0)))
                                end function
                                
                                public static double code(double N) {
                                	return (((1.0 / N) + (-1.0 / (2.0 * Math.pow(N, 2.0)))) + (1.0 / (3.0 * Math.pow(N, 3.0)))) + (-1.0 / (4.0 * Math.pow(N, 4.0)));
                                }
                                
                                def code(N):
                                	return (((1.0 / N) + (-1.0 / (2.0 * math.pow(N, 2.0)))) + (1.0 / (3.0 * math.pow(N, 3.0)))) + (-1.0 / (4.0 * math.pow(N, 4.0)))
                                
                                function code(N)
                                	return Float64(Float64(Float64(Float64(1.0 / N) + Float64(-1.0 / Float64(2.0 * (N ^ 2.0)))) + Float64(1.0 / Float64(3.0 * (N ^ 3.0)))) + Float64(-1.0 / Float64(4.0 * (N ^ 4.0))))
                                end
                                
                                function tmp = code(N)
                                	tmp = (((1.0 / N) + (-1.0 / (2.0 * (N ^ 2.0)))) + (1.0 / (3.0 * (N ^ 3.0)))) + (-1.0 / (4.0 * (N ^ 4.0)));
                                end
                                
                                code[N_] := N[(N[(N[(N[(1.0 / N), $MachinePrecision] + N[(-1.0 / N[(2.0 * N[Power[N, 2.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + N[(1.0 / N[(3.0 * N[Power[N, 3.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision] + N[(-1.0 / N[(4.0 * N[Power[N, 4.0], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
                                
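
The developer target is the same asymptotic series written with explicit powers. The report rates it 96.1% accurate but only 0.6× the original's speed, presumably because of the three pow calls and four divisions. A Python sketch for comparison against math.log1p (reference choice and sample inputs are mine):

import math

def target(N):
    # 1/N - 1/(2 N^2) + 1/(3 N^3) - 1/(4 N^4), written as in the target FPCore
    return (1.0 / N + -1.0 / (2.0 * N**2.0)
            + 1.0 / (3.0 * N**3.0) + -1.0 / (4.0 * N**4.0))

for N in [2.0, 10.0, 1e3, 1e12]:
    ref = math.log1p(1.0 / N)
    print(N, target(N), abs(target(N) - ref) / ref)
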
                                

                                Reproduce

                                herbie shell --seed 2024327 
                                (FPCore (N)
                                  :name "2log (problem 3.3.6)"
                                  :precision binary64
                                  :pre (and (> N 1.0) (< N 1e+40))
                                
                                  :alt
                                  (! :herbie-platform default (+ (/ 1 N) (/ -1 (* 2 (pow N 2))) (/ 1 (* 3 (pow N 3))) (/ -1 (* 4 (pow N 4)))))
                                
                                  (- (log (+ N 1.0)) (log N)))