math.square on complex, imaginary part

Percentage Accurate: 100.0% → 100.0%
Time: 1.4s
Alternatives: 3
Speedup: 1.4×

Specification

\[\begin{array}{l} \\ re \cdot im + im \cdot re \end{array} \]
(FPCore im_sqr (re im) :precision binary64 (+ (* re im) (* im re)))
double im_sqr(double re, double im) {
	return (re * im) + (im * re);
}
real(8) function im_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    im_sqr = (re * im) + (im * re)
end function
public static double im_sqr(double re, double im) {
	return (re * im) + (im * re);
}
def im_sqr(re, im):
	return (re * im) + (im * re)
function im_sqr(re, im)
	return Float64(Float64(re * im) + Float64(im * re))
end
function tmp = im_sqr(re, im)
	tmp = (re * im) + (im * re);
end
im$95$sqr[re_, im_] := N[(N[(re * im), $MachinePrecision] + N[(im * re), $MachinePrecision]), $MachinePrecision]
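The specification computes the imaginary part of squaring the complex number re + im·i, since Im((re + im·i)²) = re·im + im·re = 2·re·im. A quick sanity check against Python's built-in complex arithmetic (the sample values here are arbitrary):

```python
# The imaginary part of (re + im*i)^2 is re*im + im*re.
def im_sqr(re, im):
    return (re * im) + (im * re)

# (3 + 4i)^2 = -7 + 24i, so the imaginary part is 24.
z = complex(3.0, 4.0)
assert im_sqr(z.real, z.imag) == (z * z).imag == 24.0
```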

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is chosen in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 3 alternatives:

Alternative    Accuracy    Speedup
Alternative 1  100.0%      1.4×
Alternative 2  100.0%      1.4×
Alternative 3  3.4%        7.0×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ re \cdot im + im \cdot re \end{array} \]
(FPCore im_sqr (re im) :precision binary64 (+ (* re im) (* im re)))
double im_sqr(double re, double im) {
	return (re * im) + (im * re);
}
real(8) function im_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    im_sqr = (re * im) + (im * re)
end function
public static double im_sqr(double re, double im) {
	return (re * im) + (im * re);
}
def im_sqr(re, im):
	return (re * im) + (im * re)
function im_sqr(re, im)
	return Float64(Float64(re * im) + Float64(im * re))
end
function tmp = im_sqr(re, im)
	tmp = (re * im) + (im * re);
end
im$95$sqr[re_, im_] := N[(N[(re * im), $MachinePrecision] + N[(im * re), $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 1.4× speedup

\[\begin{array}{l} [re, im] = \mathsf{sort}([re, im])\\ \\ \left(re \cdot 2\right) \cdot im \end{array} \]
NOTE: re and im should be sorted in increasing order before calling this function.
(FPCore im_sqr (re im) :precision binary64 (* (* re 2.0) im))
assert(re < im);
double im_sqr(double re, double im) {
	return (re * 2.0) * im;
}
NOTE: re and im should be sorted in increasing order before calling this function.
real(8) function im_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    im_sqr = (re * 2.0d0) * im
end function
assert re < im;
public static double im_sqr(double re, double im) {
	return (re * 2.0) * im;
}
[re, im] = sort([re, im])
def im_sqr(re, im):
	return (re * 2.0) * im
re, im = sort([re, im])
function im_sqr(re, im)
	return Float64(Float64(re * 2.0) * im)
end
re, im = num2cell(sort([re, im])){:}
function tmp = im_sqr(re, im)
	tmp = (re * 2.0) * im;
end
NOTE: re and im should be sorted in increasing order before calling this function.
im$95$sqr[re_, im_] := N[(N[(re * 2.0), $MachinePrecision] * im), $MachinePrecision]
Derivation
  1. Initial program (100.0%)

    \[re \cdot im + im \cdot re \]
  2. Step-by-step derivation
    1. *-commutative (100.0%)

      \[\leadsto re \cdot im + \color{blue}{re \cdot im} \]
    2. distribute-lft-in (99.6%)

      \[\leadsto \color{blue}{re \cdot \left(im + im\right)} \]
    3. count-2 (99.6%)

      \[\leadsto re \cdot \color{blue}{\left(2 \cdot im\right)} \]
    4. associate-*r* (99.6%)

      \[\leadsto \color{blue}{\left(re \cdot 2\right) \cdot im} \]
  3. Applied egg-rr (99.6%)

    \[\leadsto \color{blue}{\left(re \cdot 2\right) \cdot im} \]
  4. Final simplification (99.6%)

    \[\leadsto \left(re \cdot 2\right) \cdot im \]
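One way to see why this rewrite is a pure speedup: multiplying a binary64 value by 2.0 only adjusts the exponent and never rounds, so the alternative performs a single rounding (the final multiply). The original's sum of two identical products is also an exact doubling, so for typical inputs the two agree bit-for-bit. A small Python sketch (the sample values are arbitrary):

```python
def im_sqr_orig(re, im):
    return (re * im) + (im * re)

def im_sqr_alt1(re, im):
    # re * 2.0 is exact in binary64 (exponent bump), so only the
    # final multiplication rounds.
    return (re * 2.0) * im

re, im = 0.1, 0.3
assert re * 2.0 == 0.2                            # doubling is exact
assert im_sqr_alt1(re, im) == im_sqr_orig(re, im) # same bits, fewer ops
```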

Alternative 2: 100.0% accurate, 1.4× speedup

\[\begin{array}{l} [re, im] = \mathsf{sort}([re, im])\\ \\ re \cdot \left(im + im\right) \end{array} \]
NOTE: re and im should be sorted in increasing order before calling this function.
(FPCore im_sqr (re im) :precision binary64 (* re (+ im im)))
assert(re < im);
double im_sqr(double re, double im) {
	return re * (im + im);
}
NOTE: re and im should be sorted in increasing order before calling this function.
real(8) function im_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    im_sqr = re * (im + im)
end function
assert re < im;
public static double im_sqr(double re, double im) {
	return re * (im + im);
}
[re, im] = sort([re, im])
def im_sqr(re, im):
	return re * (im + im)
re, im = sort([re, im])
function im_sqr(re, im)
	return Float64(re * Float64(im + im))
end
re, im = num2cell(sort([re, im])){:}
function tmp = im_sqr(re, im)
	tmp = re * (im + im);
end
NOTE: re and im should be sorted in increasing order before calling this function.
im$95$sqr[re_, im_] := N[(re * N[(im + im), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program (100.0%)

    \[re \cdot im + im \cdot re \]
  2. Step-by-step derivation
    1. *-commutative (100.0%)

      \[\leadsto \color{blue}{im \cdot re} + im \cdot re \]
    2. distribute-rgt-out (99.6%)

      \[\leadsto \color{blue}{re \cdot \left(im + im\right)} \]
  3. Simplified (99.6%)

    \[\leadsto \color{blue}{re \cdot \left(im + im\right)} \]
  4. Final simplification (99.6%)

    \[\leadsto re \cdot \left(im + im\right) \]
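Alternative 2 uses the same fact from the addition side: im + im doubles a finite double exactly, so re * (im + im) also rounds only once and, for typical inputs, matches Alternative 1 bit-for-bit. A quick sketch (values are arbitrary):

```python
def im_sqr_alt2(re, im):
    # im + im never rounds for finite doubles, so only the
    # multiplication by re introduces rounding.
    return re * (im + im)

im = 0.3
assert im + im == 2.0 * im                  # doubling is exact

re = 0.1
assert im_sqr_alt2(re, im) == (re * 2.0) * im   # agrees with Alternative 1
```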

Alternative 3: 3.4% accurate, 7.0× speedup

\[\begin{array}{l} [re, im] = \mathsf{sort}([re, im])\\ \\ -2 \end{array} \]
NOTE: re and im should be sorted in increasing order before calling this function.
(FPCore im_sqr (re im) :precision binary64 -2.0)
assert(re < im);
double im_sqr(double re, double im) {
	return -2.0;
}
NOTE: re and im should be sorted in increasing order before calling this function.
real(8) function im_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    im_sqr = -2.0d0
end function
assert re < im;
public static double im_sqr(double re, double im) {
	return -2.0;
}
[re, im] = sort([re, im])
def im_sqr(re, im):
	return -2.0
re, im = sort([re, im])
function im_sqr(re, im)
	return -2.0
end
re, im = num2cell(sort([re, im])){:}
function tmp = im_sqr(re, im)
	tmp = -2.0;
end
NOTE: re and im should be sorted in increasing order before calling this function.
im$95$sqr[re_, im_] := -2.0
Derivation
  1. Initial program (100.0%)

    \[re \cdot im + im \cdot re \]
  2. Step-by-step derivation
    1. *-commutative (100.0%)

      \[\leadsto \color{blue}{im \cdot re} + im \cdot re \]
    2. distribute-rgt-out (99.6%)

      \[\leadsto \color{blue}{re \cdot \left(im + im\right)} \]
  3. Simplified (99.6%)

    \[\leadsto \color{blue}{re \cdot \left(im + im\right)} \]
  4. Step-by-step derivation
    1. expm1-log1p-u (70.8%)

      \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(re \cdot \left(im + im\right)\right)\right)} \]
    2. expm1-udef (35.0%)

      \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(re \cdot \left(im + im\right)\right)} - 1} \]
    3. log1p-udef (35.0%)

      \[\leadsto e^{\color{blue}{\log \left(1 + re \cdot \left(im + im\right)\right)}} - 1 \]
    4. add-exp-log (63.8%)

      \[\leadsto \color{blue}{\left(1 + re \cdot \left(im + im\right)\right)} - 1 \]
    5. distribute-rgt-in (64.2%)

      \[\leadsto \left(1 + \color{blue}{\left(im \cdot re + im \cdot re\right)}\right) - 1 \]
    6. flip-+ (0.0%)

      \[\leadsto \left(1 + \color{blue}{\frac{\left(im \cdot re\right) \cdot \left(im \cdot re\right) - \left(im \cdot re\right) \cdot \left(im \cdot re\right)}{im \cdot re - im \cdot re}}\right) - 1 \]
    7. +-inverses (0.0%)

      \[\leadsto \left(1 + \frac{\color{blue}{0}}{im \cdot re - im \cdot re}\right) - 1 \]
    8. +-inverses (0.0%)

      \[\leadsto \left(1 + \frac{0}{\color{blue}{0}}\right) - 1 \]
  5. Applied egg-rr (0.0%)

    \[\leadsto \color{blue}{\left(1 + \frac{0}{0}\right) - 1} \]
  6. Simplified (3.7%)

    \[\leadsto \color{blue}{-2} \]
  7. Final simplification (3.7%)

    \[\leadsto -2 \]
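For context on the low accuracy score: the constant −2 can only be close to the true value 2·re·im when that product happens to lie near −2, so the error is unbounded elsewhere. An illustrative check (sample values are arbitrary):

```python
def im_sqr_alt3(re, im):
    # The constant produced by Alternative 3; only near-correct on
    # inputs where 2*re*im happens to be close to -2.
    return -2.0

re, im = 3.0, 4.0
true_val = 2.0 * re * im                  # exact here: 24.0
assert im_sqr_alt3(re, im) == -2.0
assert abs(true_val - im_sqr_alt3(re, im)) == 26.0   # large absolute error
```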

Reproduce

herbie shell --seed 2023196 
(FPCore im_sqr (re im)
  :name "math.square on complex, imaginary part"
  :precision binary64
  (+ (* re im) (* im re)))