math.log/1 on complex, real part

Percentage Accurate: 51.8% → 100.0%
Time: 1.6s
Alternatives: 2
Speedup: 1.5×

Specification

\[\log \left(\sqrt{re \cdot re + im \cdot im}\right) \]
(FPCore (re im)
  :precision binary64
  (log (sqrt (+ (* re re) (* im im)))))
double code(double re, double im) {
	return log(sqrt(((re * re) + (im * im))));
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = log(sqrt(((re * re) + (im * im))))
end function
public static double code(double re, double im) {
	return Math.log(Math.sqrt(((re * re) + (im * im))));
}
def code(re, im):
	return math.log(math.sqrt(((re * re) + (im * im))))
function code(re, im)
	return log(sqrt(Float64(Float64(re * re) + Float64(im * im))))
end
function tmp = code(re, im)
	tmp = log(sqrt(((re * re) + (im * im))));
end
code[re_, im_] := N[Log[N[Sqrt[N[(N[(re * re), $MachinePrecision] + N[(im * im), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]], $MachinePrecision]
\log \left(\sqrt{re \cdot re + im \cdot im}\right)
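
The specification squares its inputs before summing, so in binary64 it overflows once |re| or |im| exceeds about 1e154 (the square root of the largest double) and rounds both squares to zero below about 1e-154, even though the true logarithm is comfortably representable in either case. A minimal Python sketch of the two failure modes (the inputs are illustrative, not taken from the report's samples):

import math

def code(re, im):
    # Direct transcription of the specification above.
    return math.log(math.sqrt((re * re) + (im * im)))

# Overflow: re * re rounds to +inf, so the result is inf
# instead of log(1e200) ≈ 460.517.
print(code(1e200, 0.0))

# Underflow: both squares round to 0.0, so log(0.0) raises
# "math domain error" instead of returning ≈ -460.171.
try:
    code(1e-200, 1e-200)
except ValueError as err:
    print(err)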

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is chosen in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 2 alternatives:

Alternative      Accuracy   Speedup
Alternative 1    100.0%     0.8×
Alternative 2    99.2%      1.5×

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 51.8% accurate, 1.0× speedup

\[\log \left(\sqrt{re \cdot re + im \cdot im}\right) \]
(FPCore (re im)
  :precision binary64
  (log (sqrt (+ (* re re) (* im im)))))
double code(double re, double im) {
	return log(sqrt(((re * re) + (im * im))));
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = log(sqrt(((re * re) + (im * im))))
end function
public static double code(double re, double im) {
	return Math.log(Math.sqrt(((re * re) + (im * im))));
}
def code(re, im):
	return math.log(math.sqrt(((re * re) + (im * im))))
function code(re, im)
	return log(sqrt(Float64(Float64(re * re) + Float64(im * im))))
end
function tmp = code(re, im)
	tmp = log(sqrt(((re * re) + (im * im))));
end
code[re_, im_] := N[Log[N[Sqrt[N[(N[(re * re), $MachinePrecision] + N[(im * im), $MachinePrecision]), $MachinePrecision]], $MachinePrecision]], $MachinePrecision]
\log \left(\sqrt{re \cdot re + im \cdot im}\right)

Alternative 1: 100.0% accurate, 0.8× speedup

\[\log \left(\mathsf{hypot}\left(re, im\right)\right) \]
(FPCore (re im)
  :precision binary64
  (log (hypot re im)))
double code(double re, double im) {
	return log(hypot(re, im));
}
public static double code(double re, double im) {
	return Math.log(Math.hypot(re, im));
}
def code(re, im):
	return math.log(math.hypot(re, im))
function code(re, im)
	return log(hypot(re, im))
end
function tmp = code(re, im)
	tmp = log(hypot(re, im));
end
code[re_, im_] := N[Log[N[Sqrt[re ^ 2 + im ^ 2], $MachinePrecision]], $MachinePrecision]
\log \left(\mathsf{hypot}\left(re, im\right)\right)
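
hypot evaluates √(re · re + im · im) without forming the squares in working precision (library implementations typically rescale internally), so the extreme inputs that broke the original program now produce finite, accurate results. A quick Python check using the same illustrative values as in the specification section:

import math

print(math.log(math.hypot(1e200, 0.0)))      # ≈ 460.517, finite now
print(math.log(math.hypot(1e-200, 1e-200)))  # ≈ -460.171, no domain error
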
Derivation
  1. Initial program 51.8%

    \[\log \left(\sqrt{re \cdot re + im \cdot im}\right) \]
  2. Step-by-step derivation
    1. lift-sqrt.f64 N/A

      \[\leadsto \log \color{blue}{\left(\sqrt{re \cdot re + im \cdot im}\right)} \]
    2. sqrt-fabs-rev N/A

      \[\leadsto \log \color{blue}{\left(\left|\sqrt{re \cdot re + im \cdot im}\right|\right)} \]
    3. lift-sqrt.f64 N/A

      \[\leadsto \log \left(\left|\color{blue}{\sqrt{re \cdot re + im \cdot im}}\right|\right) \]
    4. rem-sqrt-square-rev N/A

      \[\leadsto \log \color{blue}{\left(\sqrt{\sqrt{re \cdot re + im \cdot im} \cdot \sqrt{re \cdot re + im \cdot im}}\right)} \]
    5. lift-sqrt.f64 N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{\sqrt{re \cdot re + im \cdot im}} \cdot \sqrt{re \cdot re + im \cdot im}}\right) \]
    6. lift-sqrt.f64 N/A

      \[\leadsto \log \left(\sqrt{\sqrt{re \cdot re + im \cdot im} \cdot \color{blue}{\sqrt{re \cdot re + im \cdot im}}}\right) \]
    7. rem-square-sqrt N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{re \cdot re + im \cdot im}}\right) \]
    8. lift-+.f64 N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{re \cdot re + im \cdot im}}\right) \]
    9. lift-*.f64 N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{im \cdot im}}\right) \]
    10. sqr-neg-rev N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{\left(\mathsf{neg}\left(im\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)}}\right) \]
    11. fp-cancel-sign-sub-inv N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{re \cdot re - \left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)}}\right) \]
    12. fp-cancel-sub-sign-inv N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{re \cdot re + \left(\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right)\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)}}\right) \]
    13. lift-*.f64 N/A

      \[\leadsto \log \left(\sqrt{\color{blue}{re \cdot re} + \left(\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right)\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)}\right) \]
    14. distribute-lft-neg-in N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)\right)\right)}}\right) \]
    15. distribute-rgt-neg-out N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{\left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right) \cdot \left(\mathsf{neg}\left(\left(\mathsf{neg}\left(im\right)\right)\right)\right)}}\right) \]
    16. sqr-neg-rev N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{\left(\mathsf{neg}\left(im\right)\right) \cdot \left(\mathsf{neg}\left(im\right)\right)}}\right) \]
    17. sqr-neg-rev N/A

      \[\leadsto \log \left(\sqrt{re \cdot re + \color{blue}{im \cdot im}}\right) \]
    18. lower-hypot.f64 100.0%

      \[\leadsto \log \color{blue}{\left(\mathsf{hypot}\left(re, im\right)\right)} \]
  3. Applied rewrites 100.0%

    \[\leadsto \log \color{blue}{\left(\mathsf{hypot}\left(re, im\right)\right)} \]
  4. Add Preprocessing
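
All eighteen steps are exact rewrites over the reals: taken together they establish the identity below, so the accuracy gain comes entirely from the final lower-hypot.f64 step, which evaluates the magnitude with the library's hypot routine instead of the naive square root of a sum of squares.

    \[\sqrt{re \cdot re + im \cdot im} = \mathsf{hypot}\left(re, im\right) \]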

Alternative 2: 99.2% accurate, 1.5× speedup

\[\log \left(\mathsf{max}\left(\left|re\right|, \left|im\right|\right)\right) \]
(FPCore (re im)
  :precision binary64
  (log (fmax (fabs re) (fabs im))))
double code(double re, double im) {
	return log(fmax(fabs(re), fabs(im)));
}
real(8) function code(re, im)
use fmin_fmax_functions
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = log(fmax(abs(re), abs(im)))
end function
public static double code(double re, double im) {
	return Math.log(Math.max(Math.abs(re), Math.abs(im)));
}
def code(re, im):
	return math.log(max(math.fabs(re), math.fabs(im)))
function code(re, im)
	return log(max(abs(re), abs(im)))
end
function tmp = code(re, im)
	tmp = log(max(abs(re), abs(im)));
end
code[re_, im_] := N[Log[N[Max[N[Abs[re], $MachinePrecision], N[Abs[im], $MachinePrecision]], $MachinePrecision]], $MachinePrecision]
\log \left(\mathsf{max}\left(\left|re\right|, \left|im\right|\right)\right)
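
This alternative keeps only the larger component of the magnitude. Since max(|re|, |im|) ≤ hypot(re, im) ≤ √2 · max(|re|, |im|), the logarithm is off by at most log √2 ≈ 0.3466, which costs little accuracy when the true result is large in magnitude. A short Python sketch of the worst case, re = im, where the gap is exactly log √2:

import math

re = im = 1.0
exact = math.log(math.hypot(re, im))      # log(sqrt(2)) ≈ 0.3466
approx = math.log(max(abs(re), abs(im)))  # log(1) = 0.0
print(exact - approx)                     # ≈ 0.3466, the worst-case gap
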
Derivation
  1. Initial program 51.8%

    \[\log \left(\sqrt{re \cdot re + im \cdot im}\right) \]
  2. Taylor expanded in im around inf

    \[\leadsto \color{blue}{-1 \cdot \log \left(\frac{1}{im}\right)} \]
  3. Step-by-step derivation
    1. lower-*.f64 N/A

      \[\leadsto -1 \cdot \color{blue}{\log \left(\frac{1}{im}\right)} \]
    2. lower-log.f64 N/A

      \[\leadsto -1 \cdot \log \left(\frac{1}{im}\right) \]
    3. lower-/.f64 28.2%

      \[\leadsto -1 \cdot \log \left(\frac{1}{im}\right) \]
  4. Applied rewrites 28.2%

    \[\leadsto \color{blue}{-1 \cdot \log \left(\frac{1}{im}\right)} \]
  5. Step-by-step derivation
    1. lift-*.f64 N/A

      \[\leadsto -1 \cdot \color{blue}{\log \left(\frac{1}{im}\right)} \]
    2. mul-1-neg N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{1}{im}\right)\right) \]
    3. lift-log.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{1}{im}\right)\right) \]
    4. lift-/.f64 N/A

      \[\leadsto \mathsf{neg}\left(\log \left(\frac{1}{im}\right)\right) \]
    5. log-rec N/A

      \[\leadsto \mathsf{neg}\left(\left(\mathsf{neg}\left(\log im\right)\right)\right) \]
    6. remove-double-neg N/A

      \[\leadsto \log im \]
    7. lower-log.f64 28.2%

      \[\leadsto \log im \]
  6. Applied rewrites 28.2%

    \[\leadsto \log im \]
  7. Add Preprocessing
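
The Taylor step can be read off directly: when |im| dominates |re|,

    \[\log \sqrt{re \cdot re + im \cdot im} = \log \left|im\right| + \frac{1}{2} \log \left(1 + \frac{re \cdot re}{im \cdot im}\right) \approx \log \left|im\right|, \]

and the remaining rewrites (log-rec, remove-double-neg) merely simplify -1 · log(1/im) to log im. The published alternative presumably symmetrizes this regime analysis into log(max(|re|, |im|)) so that whichever component dominates is the one kept.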

Reproduce

herbie shell --seed 2025313 -o setup:search
(FPCore (re im)
  :name "math.log/1 on complex, real part"
  :precision binary64
  (log (sqrt (+ (* re re) (* im im)))))