math.log10 on complex, imaginary part

Percentage Accurate: 98.7% → 99.8%
Time: 2.2s
Alternatives: 3
Speedup: 1.3×

Specification

\[\frac{\tan^{-1}_* \frac{im}{re}}{\log 10} \]
(FPCore (re im)
  :precision binary64
  (/ (atan2 im re) (log 10.0)))
double code(double re, double im) {
	return atan2(im, re) / log(10.0);
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = atan2(im, re) / log(10.0d0)
end function
public static double code(double re, double im) {
	return Math.atan2(im, re) / Math.log(10.0);
}
def code(re, im):
	return math.atan2(im, re) / math.log(10.0)
function code(re, im)
	return Float64(atan(im, re) / log(10.0))
end
function tmp = code(re, im)
	tmp = atan2(im, re) / log(10.0);
end
code[re_, im_] := N[(N[ArcTan[re, im], $MachinePrecision] / N[Log[10.0], $MachinePrecision]), $MachinePrecision]
\frac{\tan^{-1}_* \frac{im}{re}}{\log 10}
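
The specification is the imaginary part of the complex base-10 logarithm: for z = re + im·i, log10(z) = ln(z)/ln(10), and its imaginary part is arg(z)/ln(10) = atan2(im, re)/ln(10). A minimal Python sketch cross-checking the formula against the standard library (the variable names mirror the FPCore; agreement is only up to rounding):

import cmath
import math

def spec(re, im):
    # FPCore spec: (/ (atan2 im re) (log 10.0))
    return math.atan2(im, re) / math.log(10.0)

z = complex(-1.5, 2.0)
print(spec(z.real, z.imag))   # direct formula
print(cmath.log10(z).imag)    # library reference; should agree to within rounding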

Local Percentage Accuracy vs Input Value

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line shows the average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 3 alternatives:

Alternative      Accuracy   Speedup
Alternative 1    99.8%      1.0×
Alternative 2    98.7%      1.3×
Alternative 3    98.6%      1.3×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.7% accurate, 1.0× speedup

\[\frac{\tan^{-1}_* \frac{im}{re}}{\log 10} \]
(FPCore (re im)
  :precision binary64
  (/ (atan2 im re) (log 10.0)))
double code(double re, double im) {
	return atan2(im, re) / log(10.0);
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = atan2(im, re) / log(10.0d0)
end function
public static double code(double re, double im) {
	return Math.atan2(im, re) / Math.log(10.0);
}
def code(re, im):
	return math.atan2(im, re) / math.log(10.0)
function code(re, im)
	return Float64(atan(im, re) / log(10.0))
end
function tmp = code(re, im)
	tmp = atan2(im, re) / log(10.0);
end
code[re_, im_] := N[(N[ArcTan[re, im], $MachinePrecision] / N[Log[10.0], $MachinePrecision]), $MachinePrecision]
\frac{\tan^{-1}_* \frac{im}{re}}{\log 10}

Alternative 1: 99.8% accurate, 1.0× speedup

\[\frac{\tan^{-1}_* \frac{im}{re}}{-\log 0.1} \]
(FPCore (re im)
  :precision binary64
  (/ (atan2 im re) (- (log 0.1))))
double code(double re, double im) {
	return atan2(im, re) / -log(0.1);
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = atan2(im, re) / -log(0.1d0)
end function
public static double code(double re, double im) {
	return Math.atan2(im, re) / -Math.log(0.1);
}
def code(re, im):
	return math.atan2(im, re) / -math.log(0.1)
function code(re, im)
	return Float64(atan(im, re) / Float64(-log(0.1)))
end
function tmp = code(re, im)
	tmp = atan2(im, re) / -log(0.1);
end
code[re_, im_] := N[(N[ArcTan[re, im], $MachinePrecision] / (-N[Log[0.1], $MachinePrecision])), $MachinePrecision]
\frac{\tan^{-1}_* \frac{im}{re}}{-\log 0.1}
Derivation
  1. Initial program 98.7%

    \[\frac{\tan^{-1}_* \frac{im}{re}}{\log 10} \]
  2. Step-by-step derivation
    1. remove-double-neg N/A

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{\color{blue}{\mathsf{neg}\left(\mathsf{neg}\left(\log 10\right)\right)}} \]
    2. lower-neg.f64 N/A

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{\color{blue}{-\left(\mathsf{neg}\left(\log 10\right)\right)}} \]
    3. lift-log.f64 N/A

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{-\left(\mathsf{neg}\left(\color{blue}{\log 10}\right)\right)} \]
    4. neg-log N/A

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{-\color{blue}{\log \left(\frac{1}{10}\right)}} \]
    5. lower-log.f64 N/A

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{-\color{blue}{\log \left(\frac{1}{10}\right)}} \]
    6. metadata-eval 99.8%

      \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{-\log \color{blue}{0.1}} \]
  3. Applied rewrites 99.8%

    \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{\color{blue}{-\log 0.1}} \]
  4. Add Preprocessing
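
Why the rewrite helps: both log 10 and -log 0.1 approximate ln(10), but they need not round to the same double, and on Herbie's sampled inputs the -log 0.1 form measured as more accurate (98.7% → 99.8%). A sketch for comparing the two constants against a high-precision reference, using only the standard library (which constant wins depends on your platform's libm):

import math
from decimal import Decimal, getcontext

getcontext().prec = 30
true_ln10 = Decimal(10).ln()   # high-precision reference for ln(10)

for label, value in [("log(10.0)", math.log(10.0)),
                     ("-log(0.1)", -math.log(0.1))]:
    err = abs(Decimal(value) - true_ln10)   # Decimal(value) is exact
    print(f"{label:>10} = {value!r}   |error| = {err:.3E}")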

Alternative 2: 98.7% accurate, 1.3× speedup

\[\frac{\tan^{-1}_* \frac{im}{re}}{2.302585092994046} \]
(FPCore (re im)
  :precision binary64
  (/ (atan2 im re) 2.302585092994046))
double code(double re, double im) {
	return atan2(im, re) / 2.302585092994046;
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = atan2(im, re) / 2.302585092994046d0
end function
public static double code(double re, double im) {
	return Math.atan2(im, re) / 2.302585092994046;
}
def code(re, im):
	return math.atan2(im, re) / 2.302585092994046
function code(re, im)
	return Float64(atan(im, re) / 2.302585092994046)
end
function tmp = code(re, im)
	tmp = atan2(im, re) / 2.302585092994046;
end
code[re_, im_] := N[(N[ArcTan[im / re], $MachinePrecision] / 2.302585092994046), $MachinePrecision]
\frac{\tan^{-1}_* \frac{im}{re}}{2.302585092994046}
Derivation
  1. Initial program 98.7%

    \[\frac{\tan^{-1}_* \frac{im}{re}}{\log 10} \]
  2. Evaluated real constant 98.7%

    \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{\color{blue}{2.302585092994046}} \]
  3. Add Preprocessing
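
Alternative 2 folds the denominator into a literal, so no log call runs per invocation; that is the source of the 1.3× speedup. Accuracy is unchanged at 98.7% because the folded constant matches what log(10.0) produced on the report's machine. A quick check, assuming a libm whose log(10.0) is correctly rounded:

import math

LN10 = 2.302585092994046        # Alternative 2's folded constant
print(math.log(10.0) == LN10)   # typically True, but libm-dependent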

Alternative 3: 98.6% accurate, 1.3× speedup

\[0.43429448190325176 \cdot \tan^{-1}_* \frac{im}{re} \]
(FPCore (re im)
  :precision binary64
  (* 0.43429448190325176 (atan2 im re)))
double code(double re, double im) {
	return 0.43429448190325176 * atan2(im, re);
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = 0.43429448190325176d0 * atan2(im, re)
end function
public static double code(double re, double im) {
	return 0.43429448190325176 * Math.atan2(im, re);
}
def code(re, im):
	return 0.43429448190325176 * math.atan2(im, re)
function code(re, im)
	return Float64(0.43429448190325176 * atan(im, re))
end
function tmp = code(re, im)
	tmp = 0.43429448190325176 * atan2(im, re);
end
code[re_, im_] := N[(0.43429448190325176 * N[ArcTan[re, im], $MachinePrecision]), $MachinePrecision]
0.43429448190325176 \cdot \tan^{-1}_* \frac{im}{re}
Derivation
  1. Initial program 98.7%

    \[\frac{\tan^{-1}_* \frac{im}{re}}{\log 10} \]
  2. Evaluated real constant 98.7%

    \[\leadsto \frac{\tan^{-1}_* \frac{im}{re}}{\color{blue}{2.302585092994046}} \]
  3. Taylor expanded in re around 0

    \[\leadsto \color{blue}{\frac{1125899906842624}{2592480341699211} \cdot \tan^{-1}_* \frac{im}{re}} \]
  4. Step-by-step derivation
    1. lower-*.f64 N/A

      \[\leadsto \frac{1125899906842624}{2592480341699211} \cdot \color{blue}{\tan^{-1}_* \frac{im}{re}} \]
    2. lower-atan2.f64 98.6%

      \[\leadsto 0.43429448190325176 \cdot \tan^{-1}_* \frac{im}{\color{blue}{re}} \]
  5. Applied rewrites 98.6%

    \[\leadsto \color{blue}{0.43429448190325176 \cdot \tan^{-1}_* \frac{im}{re}} \]
  6. Add Preprocessing
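
Alternative 3 turns the division into a multiplication by the reciprocal: the Taylor step produces the exact rational 2^50 / 2592480341699211 ≈ 1/ln(10), which is then rounded to the literal 0.43429448190325176. This keeps the same 1.3× speedup as Alternative 2, but the extra rounding of the reciprocal costs a little accuracy (98.7% → 98.6%). A sketch verifying the constant and comparing the two forms:

from fractions import Fraction
import math

# Exact rational from the Taylor step: 2**50 over an odd integer
frac = Fraction(1125899906842624, 2592480341699211)
print(float(frac))   # per the derivation, rounds to 0.43429448190325176

R = 0.43429448190325176          # Herbie's reciprocal constant
x = math.atan2(2.0, -1.5)        # a sample angle
print(x / 2.302585092994046)     # Alternative 2: divide by ln(10)
print(R * x)                     # Alternative 3: multiply; may differ in the last ulp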

Reproduce

Run the command below, then paste the FPCore expression into the shell prompt; the --seed flag makes Herbie's input sampling deterministic, so it should reproduce the results above.
herbie shell --seed 2025313 -o setup:search
(FPCore (re im)
  :name "math.log10 on complex, imaginary part"
  :precision binary64
  (/ (atan2 im re) (log 10.0)))