math.square on complex, real part

Percentage Accurate: 94.1% → 100.0%
Time: 4.4s
Alternatives: 4
Speedup: 1.0×

Specification

\[ re \cdot re - im \cdot im \]
(FPCore re_sqr (re im) :precision binary64 (- (* re re) (* im im)))
double re_sqr(double re, double im) {
	return (re * re) - (im * im);
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    re_sqr = (re * re) - (im * im)
end function
public static double re_sqr(double re, double im) {
	return (re * re) - (im * im);
}
def re_sqr(re, im):
	return (re * re) - (im * im)
function re_sqr(re, im)
	return Float64(Float64(re * re) - Float64(im * im))
end
function tmp = re_sqr(re, im)
	tmp = (re * re) - (im * im);
end
re$95$sqr[re_, im_] := N[(N[(re * re), $MachinePrecision] - N[(im * im), $MachinePrecision]), $MachinePrecision]
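
The headline accuracy numbers trace back to cancellation in the subtraction: when re and im are close in magnitude, re * re and im * im agree in most of their leading bits, and subtracting them wipes those bits out. The following Python sketch is not part of the report; it picks adjacent doubles purely to trigger the effect and compares the naive form against an exact rational reference:

import math
from fractions import Fraction

def re_sqr(re, im):
    # Naive form from the specification: re*re - im*im
    return (re * re) - (im * im)

# Adjacent doubles: re is the next double above im, so re*re and im*im
# agree in almost every bit and the subtraction cancels catastrophically.
im = 1.0 / 3.0
re = math.nextafter(im, math.inf)

naive = re_sqr(re, im)
exact = Fraction(re) ** 2 - Fraction(im) ** 2  # floats convert exactly

print(naive, float(exact))
print(float(abs(Fraction(naive) - exact) / exact))  # large relative error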

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs Input Value

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is named in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program, blue represents Herbie's suggestion. The lines show averages; the dots show individual samples.

Accuracy vs Speed

Herbie found 4 alternatives:

Alternative        Accuracy  Speedup
Initial program    94.1%     1.0×
Alternative 1      100.0%    1.0×
Alternative 2      80.7%     0.6×
Alternative 3      78.0%     0.6×
Alternative 4      53.3%     2.3×

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 94.1% accurate, 1.0× speedup

\[ re \cdot re - im \cdot im \]
(FPCore re_sqr (re im) :precision binary64 (- (* re re) (* im im)))
double re_sqr(double re, double im) {
	return (re * re) - (im * im);
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    re_sqr = (re * re) - (im * im)
end function
public static double re_sqr(double re, double im) {
	return (re * re) - (im * im);
}
def re_sqr(re, im):
	return (re * re) - (im * im)
function re_sqr(re, im)
	return Float64(Float64(re * re) - Float64(im * im))
end
function tmp = re_sqr(re, im)
	tmp = (re * re) - (im * im);
end
re$95$sqr[re_, im_] := N[(N[(re * re), $MachinePrecision] - N[(im * im), $MachinePrecision]), $MachinePrecision]

Alternative 1: 100.0% accurate, 1.0× speedup

\[ \left(re - im\right) \cdot \left(re + im\right) \]
(FPCore re_sqr (re im) :precision binary64 (* (- re im) (+ re im)))
double re_sqr(double re, double im) {
	return (re - im) * (re + im);
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    re_sqr = (re - im) * (re + im)
end function
public static double re_sqr(double re, double im) {
	return (re - im) * (re + im);
}
def re_sqr(re, im):
	return (re - im) * (re + im)
function re_sqr(re, im)
	return Float64(Float64(re - im) * Float64(re + im))
end
function tmp = re_sqr(re, im)
	tmp = (re - im) * (re + im);
end
re$95$sqr[re_, im_] := N[(N[(re - im), $MachinePrecision] * N[(re + im), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 93.3%

    \[re \cdot re - im \cdot im \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. difference-of-squares (N/A)

      \[\leadsto \left(re + im\right) \cdot \color{blue}{\left(re - im\right)} \]
    2. *-commutative (N/A)

      \[\leadsto \left(re - im\right) \cdot \color{blue}{\left(re + im\right)} \]
    3. *-lowering-*.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\left(re - im\right), \color{blue}{\left(re + im\right)}\right) \]
    4. --lowering--.f64 (N/A)

      \[\leadsto \mathsf{*.f64}\left(\mathsf{-.f64}\left(re, im\right), \left(\color{blue}{re} + im\right)\right) \]
    5. +-lowering-+.f64 (100.0%)

      \[\leadsto \mathsf{*.f64}\left(\mathsf{-.f64}\left(re, im\right), \mathsf{+.f64}\left(re, \color{blue}{im}\right)\right) \]
  4. Applied egg-rr (100.0%)

    \[\leadsto \color{blue}{\left(re - im\right) \cdot \left(re + im\right)} \]
  5. Add Preprocessing
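
The factored form fixes the cancellation for a simple reason: by the Sterbenz lemma, re - im is computed exactly whenever re and im are within a factor of two of each other, which is precisely the regime where the naive form cancels, so the subtraction happens before any rounding error enters. A small sketch (mine, not the report's, reusing the adversarial inputs from the specification example above):

import math
from fractions import Fraction

def re_sqr_naive(re, im):
    return (re * re) - (im * im)

def re_sqr_factored(re, im):
    # Alternative 1: the subtraction is exact here (Sterbenz lemma),
    # so only the addition and the multiply contribute rounding error.
    return (re - im) * (re + im)

im = 1.0 / 3.0
re = math.nextafter(im, math.inf)
exact = Fraction(re) ** 2 - Fraction(im) ** 2

for f in (re_sqr_naive, re_sqr_factored):
    err = abs(Fraction(f(re, im)) - exact) / exact
    print(f.__name__, f(re, im), float(err))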

Alternative 2: 80.7% accurate, 0.6× speedup

\[ \begin{array}{l} \mathbf{if}\;im \cdot im \leq 10^{+85}:\\ \;\;\;\;re \cdot re\\ \mathbf{else}:\\ \;\;\;\;im \cdot \left(re - im\right)\\ \end{array} \]
(FPCore re_sqr (re im)
 :precision binary64
 (if (<= (* im im) 1e+85) (* re re) (* im (- re im))))
double re_sqr(double re, double im) {
	double tmp;
	if ((im * im) <= 1e+85) {
		tmp = re * re;
	} else {
		tmp = im * (re - im);
	}
	return tmp;
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if ((im * im) <= 1d+85) then
        tmp = re * re
    else
        tmp = im * (re - im)
    end if
    re_sqr = tmp
end function
public static double re_sqr(double re, double im) {
	double tmp;
	if ((im * im) <= 1e+85) {
		tmp = re * re;
	} else {
		tmp = im * (re - im);
	}
	return tmp;
}
def re_sqr(re, im):
	tmp = 0
	if (im * im) <= 1e+85:
		tmp = re * re
	else:
		tmp = im * (re - im)
	return tmp
function re_sqr(re, im)
	tmp = 0.0
	if (Float64(im * im) <= 1e+85)
		tmp = Float64(re * re);
	else
		tmp = Float64(im * Float64(re - im));
	end
	return tmp
end
function tmp_2 = re_sqr(re, im)
	tmp = 0.0;
	if ((im * im) <= 1e+85)
		tmp = re * re;
	else
		tmp = im * (re - im);
	end
	tmp_2 = tmp;
end
re$95$sqr[re_, im_] := If[LessEqual[N[(im * im), $MachinePrecision], 1e+85], N[(re * re), $MachinePrecision], N[(im * N[(re - im), $MachinePrecision]), $MachinePrecision]]
Derivation
  1. Split input into 2 regimes
  2. if (*.f64 im im) < 1e85

    1. Initial program 100.0%

      \[re \cdot re - im \cdot im \]
    2. Add Preprocessing
    3. Taylor expanded in re around inf

      \[\leadsto \color{blue}{{re}^{2}} \]
    4. Step-by-step derivation
      1. unpow2 (N/A)

        \[\leadsto re \cdot \color{blue}{re} \]
      2. *-lowering-*.f64 (79.6%)

        \[\leadsto \mathsf{*.f64}\left(re, \color{blue}{re}\right) \]
    5. Simplified (79.6%)

      \[\leadsto \color{blue}{re \cdot re} \]

    if 1e85 < (*.f64 im im)

    1. Initial program 83.2%

      \[re \cdot re - im \cdot im \]
    2. Add Preprocessing
    3. Step-by-step derivation
      1. difference-of-squares (N/A)

        \[\leadsto \left(re + im\right) \cdot \color{blue}{\left(re - im\right)} \]
      2. *-commutative (N/A)

        \[\leadsto \left(re - im\right) \cdot \color{blue}{\left(re + im\right)} \]
      3. *-lowering-*.f64 (N/A)

        \[\leadsto \mathsf{*.f64}\left(\left(re - im\right), \color{blue}{\left(re + im\right)}\right) \]
      4. --lowering--.f64 (N/A)

        \[\leadsto \mathsf{*.f64}\left(\mathsf{-.f64}\left(re, im\right), \left(\color{blue}{re} + im\right)\right) \]
      5. +-lowering-+.f64 (100.0%)

        \[\leadsto \mathsf{*.f64}\left(\mathsf{-.f64}\left(re, im\right), \mathsf{+.f64}\left(re, \color{blue}{im}\right)\right) \]
    4. Applied egg-rr (100.0%)

      \[\leadsto \color{blue}{\left(re - im\right) \cdot \left(re + im\right)} \]
    5. Taylor expanded in re around 0

      \[\leadsto \mathsf{*.f64}\left(\mathsf{-.f64}\left(re, im\right), \color{blue}{im}\right) \]
    6. Step-by-step derivation
      1. Simplified (85.7%)

        \[\leadsto \left(re - im\right) \cdot \color{blue}{im} \]
    7. Recombined 2 regimes into one program.
    8. Final simplification (82.0%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;im \cdot im \leq 10^{+85}:\\ \;\;\;\;re \cdot re\\ \mathbf{else}:\\ \;\;\;\;im \cdot \left(re - im\right)\\ \end{array} \]
    9. Add Preprocessing
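
Both branches of this alternative are truncated Taylor series, and the same two expansions reappear in Alternatives 3 and 4. As a rough gloss (mine, not the report's):

\[ re \cdot re - im \cdot im = \left(re \cdot re\right) \cdot \left(1 - \frac{im \cdot im}{re \cdot re}\right) \approx re \cdot re \quad \text{when } \left|re\right| \gg \left|im\right| \]

\[ \left(re - im\right) \cdot \left(re + im\right) \approx \left(re - im\right) \cdot im \quad \text{when } \left|re\right| \ll \left|im\right| \text{, since } re + im \approx im \]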

Alternative 3: 78.0% accurate, 0.6× speedup

\[ \begin{array}{l} \mathbf{if}\;re \cdot re \leq 3.8 \cdot 10^{-49}:\\ \;\;\;\;0 - im \cdot im\\ \mathbf{else}:\\ \;\;\;\;re \cdot re\\ \end{array} \]
(FPCore re_sqr (re im)
 :precision binary64
 (if (<= (* re re) 3.8e-49) (- 0.0 (* im im)) (* re re)))
double re_sqr(double re, double im) {
	double tmp;
	if ((re * re) <= 3.8e-49) {
		tmp = 0.0 - (im * im);
	} else {
		tmp = re * re;
	}
	return tmp;
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if ((re * re) <= 3.8d-49) then
        tmp = 0.0d0 - (im * im)
    else
        tmp = re * re
    end if
    re_sqr = tmp
end function
public static double re_sqr(double re, double im) {
	double tmp;
	if ((re * re) <= 3.8e-49) {
		tmp = 0.0 - (im * im);
	} else {
		tmp = re * re;
	}
	return tmp;
}
def re_sqr(re, im):
	tmp = 0
	if (re * re) <= 3.8e-49:
		tmp = 0.0 - (im * im)
	else:
		tmp = re * re
	return tmp
function re_sqr(re, im)
	tmp = 0.0
	if (Float64(re * re) <= 3.8e-49)
		tmp = Float64(0.0 - Float64(im * im));
	else
		tmp = Float64(re * re);
	end
	return tmp
end
function tmp_2 = re_sqr(re, im)
	tmp = 0.0;
	if ((re * re) <= 3.8e-49)
		tmp = 0.0 - (im * im);
	else
		tmp = re * re;
	end
	tmp_2 = tmp;
end
re$95$sqr[re_, im_] := If[LessEqual[N[(re * re), $MachinePrecision], 3.8e-49], N[(0.0 - N[(im * im), $MachinePrecision]), $MachinePrecision], N[(re * re), $MachinePrecision]]
    
Derivation
  1. Split input into 2 regimes
  2. if (*.f64 re re) < 3.7999999999999997e-49

    1. Initial program 100.0%

      \[re \cdot re - im \cdot im \]
    2. Add Preprocessing
    3. Taylor expanded in re around 0

      \[\leadsto \color{blue}{-1 \cdot {im}^{2}} \]
    4. Step-by-step derivation
      1. mul-1-neg (N/A)

        \[\leadsto \mathsf{neg}\left({im}^{2}\right) \]
      2. neg-sub0 (N/A)

        \[\leadsto 0 - \color{blue}{{im}^{2}} \]
      3. --lowering--.f64 (N/A)

        \[\leadsto \mathsf{-.f64}\left(0, \color{blue}{\left({im}^{2}\right)}\right) \]
      4. unpow2 (N/A)

        \[\leadsto \mathsf{-.f64}\left(0, \left(im \cdot \color{blue}{im}\right)\right) \]
      5. *-lowering-*.f64 (86.4%)

        \[\leadsto \mathsf{-.f64}\left(0, \mathsf{*.f64}\left(im, \color{blue}{im}\right)\right) \]
    5. Simplified (86.4%)

      \[\leadsto \color{blue}{0 - im \cdot im} \]
    6. Step-by-step derivation
      1. sub0-neg (N/A)

        \[\leadsto \mathsf{neg}\left(im \cdot im\right) \]
      2. neg-lowering-neg.f64 (N/A)

        \[\leadsto \mathsf{neg.f64}\left(\left(im \cdot im\right)\right) \]
      3. *-lowering-*.f64 (86.4%)

        \[\leadsto \mathsf{neg.f64}\left(\mathsf{*.f64}\left(im, im\right)\right) \]
    7. Applied egg-rr (86.4%)

      \[\leadsto \color{blue}{-im \cdot im} \]

    if 3.7999999999999997e-49 < (*.f64 re re)

    1. Initial program 88.1%

      \[re \cdot re - im \cdot im \]
    2. Add Preprocessing
    3. Taylor expanded in re around inf

      \[\leadsto \color{blue}{{re}^{2}} \]
    4. Step-by-step derivation
      1. unpow2 (N/A)

        \[\leadsto re \cdot \color{blue}{re} \]
      2. *-lowering-*.f64 (78.3%)

        \[\leadsto \mathsf{*.f64}\left(re, \color{blue}{re}\right) \]
    5. Simplified (78.3%)

      \[\leadsto \color{blue}{re \cdot re} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification (81.9%)

    \[\leadsto \begin{array}{l} \mathbf{if}\;re \cdot re \leq 3.8 \cdot 10^{-49}:\\ \;\;\;\;0 - im \cdot im\\ \mathbf{else}:\\ \;\;\;\;re \cdot re\\ \end{array} \]
  5. Add Preprocessing
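
Because the regime test looks at re alone, each branch is only a good proxy when im happens to lie on the matching side, which is consistent with this alternative's lower overall accuracy. A quick Python probe (mine, with inputs picked for illustration):

def re_sqr_alt3(re, im):
    # Alternative 3: keep only the dominant squared term in each regime.
    if (re * re) <= 3.8e-49:
        return 0.0 - (im * im)
    else:
        return re * re

# im*im dominates and re*re is negligible: the branch matches the true value.
print(re_sqr_alt3(1e-30, 5.0))   # -25.0, true value is 1e-60 - 25.0
# re and im are comparable: the dropped im*im term is the whole answer.
print(re_sqr_alt3(1.0, 1.0))     # 1.0, true value is 0.0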

Alternative 4: 53.3% accurate, 2.3× speedup

\[ re \cdot re \]
(FPCore re_sqr (re im) :precision binary64 (* re re))
double re_sqr(double re, double im) {
	return re * re;
}
real(8) function re_sqr(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    re_sqr = re * re
end function
public static double re_sqr(double re, double im) {
	return re * re;
}
def re_sqr(re, im):
	return re * re
function re_sqr(re, im)
	return Float64(re * re)
end
function tmp = re_sqr(re, im)
	tmp = re * re;
end
re$95$sqr[re_, im_] := N[(re * re), $MachinePrecision]
    
Derivation
  1. Initial program 93.3%

    \[re \cdot re - im \cdot im \]
  2. Add Preprocessing
  3. Taylor expanded in re around inf

    \[\leadsto \color{blue}{{re}^{2}} \]
  4. Step-by-step derivation
    1. unpow2 (N/A)

      \[\leadsto re \cdot \color{blue}{re} \]
    2. *-lowering-*.f64 (58.1%)

      \[\leadsto \mathsf{*.f64}\left(re, \color{blue}{re}\right) \]
  5. Simplified (58.1%)

    \[\leadsto \color{blue}{re \cdot re} \]
  6. Add Preprocessing

Reproduce

To reproduce this result, paste the FPCore below into the Herbie shell:

herbie shell --seed 2024161
(FPCore re_sqr (re im)
  :name "math.square on complex, real part"
  :precision binary64
  (- (* re re) (* im im)))