math.sin on complex, real part

Percentage Accurate: 100.0% → 100.0%
Time: 8.9s
Alternatives: 11
Speedup: 1.5×

Specification

\[\begin{array}{l} \\ \left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \end{array} \]
(FPCore (re im)
 :precision binary64
 (* (* 0.5 (sin re)) (+ (exp (- 0.0 im)) (exp im))))
double code(double re, double im) {
	return (0.5 * sin(re)) * (exp((0.0 - im)) + exp(im));
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = (0.5d0 * sin(re)) * (exp((0.0d0 - im)) + exp(im))
end function
public static double code(double re, double im) {
	return (0.5 * Math.sin(re)) * (Math.exp((0.0 - im)) + Math.exp(im));
}
def code(re, im):
	return (0.5 * math.sin(re)) * (math.exp((0.0 - im)) + math.exp(im))
function code(re, im)
	return Float64(Float64(0.5 * sin(re)) * Float64(exp(Float64(0.0 - im)) + exp(im)))
end
function tmp = code(re, im)
	tmp = (0.5 * sin(re)) * (exp((0.0 - im)) + exp(im));
end
code[re_, im_] := N[(N[(0.5 * N[Sin[re], $MachinePrecision]), $MachinePrecision] * N[(N[Exp[N[(0.0 - im), $MachinePrecision]], $MachinePrecision] + N[Exp[im], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right)
\end{array}
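The specification is the real part of the complex sine, since sin(x + iy) = sin x cosh y + i cos x sinh y, and (e^(-im) + e^(im))/2 = cosh(im). A quick numerical check of that reading (a Python sketch, not part of the report; the sample point is arbitrary):

```python
import cmath
import math

def spec(re, im):
    # The specification: 0.5 * sin(re) * (exp(-im) + exp(im))
    return (0.5 * math.sin(re)) * (math.exp(0.0 - im) + math.exp(im))

# Compare against the real part of the complex sine at a sample point.
z = complex(0.7, 1.3)
assert abs(spec(z.real, z.imag) - cmath.sin(z).real) < 1e-12
```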

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (the variable is named in the title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line shows the average, while the dots show individual samples.

Accuracy vs Speed

Herbie found 11 alternatives:

Alternative  Accuracy  Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \end{array} \]
(FPCore (re im)
 :precision binary64
 (* (* 0.5 (sin re)) (+ (exp (- 0.0 im)) (exp im))))
double code(double re, double im) {
	return (0.5 * sin(re)) * (exp((0.0 - im)) + exp(im));
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = (0.5d0 * sin(re)) * (exp((0.0d0 - im)) + exp(im))
end function
public static double code(double re, double im) {
	return (0.5 * Math.sin(re)) * (Math.exp((0.0 - im)) + Math.exp(im));
}
def code(re, im):
	return (0.5 * math.sin(re)) * (math.exp((0.0 - im)) + math.exp(im))
function code(re, im)
	return Float64(Float64(0.5 * sin(re)) * Float64(exp(Float64(0.0 - im)) + exp(im)))
end
function tmp = code(re, im)
	tmp = (0.5 * sin(re)) * (exp((0.0 - im)) + exp(im));
end
code[re_, im_] := N[(N[(0.5 * N[Sin[re], $MachinePrecision]), $MachinePrecision] * N[(N[Exp[N[(0.0 - im), $MachinePrecision]], $MachinePrecision] + N[Exp[im], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right)
\end{array}

Alternative 1: 100.0% accurate, 1.5× speedup

\[\begin{array}{l} \\ \frac{\sin re}{\frac{1}{\cosh im}} \end{array} \]
(FPCore (re im) :precision binary64 (/ (sin re) (/ 1.0 (cosh im))))
double code(double re, double im) {
	return sin(re) / (1.0 / cosh(im));
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = sin(re) / (1.0d0 / cosh(im))
end function
public static double code(double re, double im) {
	return Math.sin(re) / (1.0 / Math.cosh(im));
}
def code(re, im):
	return math.sin(re) / (1.0 / math.cosh(im))
function code(re, im)
	return Float64(sin(re) / Float64(1.0 / cosh(im)))
end
function tmp = code(re, im)
	tmp = sin(re) / (1.0 / cosh(im));
end
code[re_, im_] := N[(N[Sin[re], $MachinePrecision] / N[(1.0 / N[Cosh[im], $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\frac{\sin re}{\frac{1}{\cosh im}}
\end{array}
Derivation
  1. Initial program 100.0%

    \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
  2. Step-by-step derivation
    1. +-commutative 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(e^{im} + e^{0 - im}\right)} \]
    2. sub0-neg 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \left(e^{im} + e^{\color{blue}{-im}}\right) \]
    3. cosh-undef 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
  3. Applied egg-rr 100.0%

    \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
  4. Step-by-step derivation
    1. add-log-exp 77.5%

      \[\leadsto \color{blue}{\log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    2. *-un-lft-identity 77.5%

      \[\leadsto \log \color{blue}{\left(1 \cdot e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    3. log-prod 77.5%

      \[\leadsto \color{blue}{\log 1 + \log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    4. metadata-eval 77.5%

      \[\leadsto \color{blue}{0} + \log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right) \]
    5. add-log-exp 100.0%

      \[\leadsto 0 + \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)} \]
    6. associate-*r* 100.0%

      \[\leadsto 0 + \color{blue}{\left(\left(0.5 \cdot \sin re\right) \cdot 2\right) \cdot \cosh im} \]
    7. *-commutative 100.0%

      \[\leadsto 0 + \color{blue}{\left(2 \cdot \left(0.5 \cdot \sin re\right)\right)} \cdot \cosh im \]
    8. associate-*r* 100.0%

      \[\leadsto 0 + \color{blue}{\left(\left(2 \cdot 0.5\right) \cdot \sin re\right)} \cdot \cosh im \]
    9. metadata-eval 100.0%

      \[\leadsto 0 + \left(\color{blue}{1} \cdot \sin re\right) \cdot \cosh im \]
    10. *-un-lft-identity 100.0%

      \[\leadsto 0 + \color{blue}{\sin re} \cdot \cosh im \]
  5. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{0 + \sin re \cdot \cosh im} \]
  6. Step-by-step derivation
    1. +-lft-identity 100.0%

      \[\leadsto \color{blue}{\sin re \cdot \cosh im} \]
  7. Simplified 100.0%

    \[\leadsto \color{blue}{\sin re \cdot \cosh im} \]
  8. Step-by-step derivation
    1. cosh-def 100.0%

      \[\leadsto \sin re \cdot \color{blue}{\frac{e^{im} + e^{-im}}{2}} \]
    2. cosh-undef 100.0%

      \[\leadsto \sin re \cdot \frac{\color{blue}{2 \cdot \cosh im}}{2} \]
    3. associate-*r/ 100.0%

      \[\leadsto \color{blue}{\frac{\sin re \cdot \left(2 \cdot \cosh im\right)}{2}} \]
    4. *-commutative 100.0%

      \[\leadsto \frac{\sin re \cdot \color{blue}{\left(\cosh im \cdot 2\right)}}{2} \]
  9. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{\frac{\sin re \cdot \left(\cosh im \cdot 2\right)}{2}} \]
  10. Step-by-step derivation
    1. associate-/l* 100.0%

      \[\leadsto \color{blue}{\frac{\sin re}{\frac{2}{\cosh im \cdot 2}}} \]
    2. *-commutative 100.0%

      \[\leadsto \frac{\sin re}{\frac{2}{\color{blue}{2 \cdot \cosh im}}} \]
    3. associate-/r* 100.0%

      \[\leadsto \frac{\sin re}{\color{blue}{\frac{\frac{2}{2}}{\cosh im}}} \]
    4. metadata-eval 100.0%

      \[\leadsto \frac{\sin re}{\frac{\color{blue}{1}}{\cosh im}} \]
  11. Simplified 100.0%

    \[\leadsto \color{blue}{\frac{\sin re}{\frac{1}{\cosh im}}} \]
  12. Final simplification 100.0%

    \[\leadsto \frac{\sin re}{\frac{1}{\cosh im}} \]
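Since dividing by a reciprocal is just multiplication, Alternative 1 agrees with the original program up to rounding. A small check at a few arbitrary sample points (a Python sketch, not part of the report):

```python
import math

def original(re, im):
    # The initial program: 0.5 * sin(re) * (exp(-im) + exp(im))
    return (0.5 * math.sin(re)) * (math.exp(0.0 - im) + math.exp(im))

def alternative_1(re, im):
    # Alternative 1: sin(re) / (1 / cosh(im))
    return math.sin(re) / (1.0 / math.cosh(im))

# A few arbitrary sample points; both forms agree to near machine precision.
for re, im in [(0.3, 0.5), (1.2, -2.0), (2.5, 4.0)]:
    assert math.isclose(original(re, im), alternative_1(re, im), rel_tol=1e-12)
```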

Alternative 2: 100.0% accurate, 1.5× speedup

\[\begin{array}{l} \\ \sin re \cdot \cosh im \end{array} \]
(FPCore (re im) :precision binary64 (* (sin re) (cosh im)))
double code(double re, double im) {
	return sin(re) * cosh(im);
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = sin(re) * cosh(im)
end function
public static double code(double re, double im) {
	return Math.sin(re) * Math.cosh(im);
}
def code(re, im):
	return math.sin(re) * math.cosh(im)
function code(re, im)
	return Float64(sin(re) * cosh(im))
end
function tmp = code(re, im)
	tmp = sin(re) * cosh(im);
end
code[re_, im_] := N[(N[Sin[re], $MachinePrecision] * N[Cosh[im], $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\sin re \cdot \cosh im
\end{array}
Derivation
  1. Initial program 100.0%

    \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
  2. Step-by-step derivation
    1. +-commutative 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(e^{im} + e^{0 - im}\right)} \]
    2. sub0-neg 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \left(e^{im} + e^{\color{blue}{-im}}\right) \]
    3. cosh-undef 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
  3. Applied egg-rr 100.0%

    \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
  4. Step-by-step derivation
    1. add-log-exp 77.5%

      \[\leadsto \color{blue}{\log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    2. *-un-lft-identity 77.5%

      \[\leadsto \log \color{blue}{\left(1 \cdot e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    3. log-prod 77.5%

      \[\leadsto \color{blue}{\log 1 + \log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right)} \]
    4. metadata-eval 77.5%

      \[\leadsto \color{blue}{0} + \log \left(e^{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)}\right) \]
    5. add-log-exp 100.0%

      \[\leadsto 0 + \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(2 \cdot \cosh im\right)} \]
    6. associate-*r* 100.0%

      \[\leadsto 0 + \color{blue}{\left(\left(0.5 \cdot \sin re\right) \cdot 2\right) \cdot \cosh im} \]
    7. *-commutative 100.0%

      \[\leadsto 0 + \color{blue}{\left(2 \cdot \left(0.5 \cdot \sin re\right)\right)} \cdot \cosh im \]
    8. associate-*r* 100.0%

      \[\leadsto 0 + \color{blue}{\left(\left(2 \cdot 0.5\right) \cdot \sin re\right)} \cdot \cosh im \]
    9. metadata-eval 100.0%

      \[\leadsto 0 + \left(\color{blue}{1} \cdot \sin re\right) \cdot \cosh im \]
    10. *-un-lft-identity 100.0%

      \[\leadsto 0 + \color{blue}{\sin re} \cdot \cosh im \]
  5. Applied egg-rr 100.0%

    \[\leadsto \color{blue}{0 + \sin re \cdot \cosh im} \]
  6. Step-by-step derivation
    1. +-lft-identity 100.0%

      \[\leadsto \color{blue}{\sin re \cdot \cosh im} \]
  7. Simplified 100.0%

    \[\leadsto \color{blue}{\sin re \cdot \cosh im} \]
  8. Final simplification 100.0%

    \[\leadsto \sin re \cdot \cosh im \]

Alternative 3: 92.5% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\ t_1 := re \cdot \cosh im\\ \mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\ \;\;\;\;t_0\\ \mathbf{elif}\;im \leq -18:\\ \;\;\;\;t_1\\ \mathbf{elif}\;im \leq 6.9 \cdot 10^{-18}:\\ \;\;\;\;\sin re\\ \mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\ \;\;\;\;t_1\\ \mathbf{else}:\\ \;\;\;\;t_0\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (let* ((t_0 (* (* im im) (* (sin re) 0.5))) (t_1 (* re (cosh im))))
   (if (<= im -1.35e+154)
     t_0
     (if (<= im -18.0)
       t_1
       (if (<= im 6.9e-18) (sin re) (if (<= im 1.35e+154) t_1 t_0))))))
double code(double re, double im) {
	double t_0 = (im * im) * (sin(re) * 0.5);
	double t_1 = re * cosh(im);
	double tmp;
	if (im <= -1.35e+154) {
		tmp = t_0;
	} else if (im <= -18.0) {
		tmp = t_1;
	} else if (im <= 6.9e-18) {
		tmp = sin(re);
	} else if (im <= 1.35e+154) {
		tmp = t_1;
	} else {
		tmp = t_0;
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: t_0
    real(8) :: t_1
    real(8) :: tmp
    t_0 = (im * im) * (sin(re) * 0.5d0)
    t_1 = re * cosh(im)
    if (im <= (-1.35d+154)) then
        tmp = t_0
    else if (im <= (-18.0d0)) then
        tmp = t_1
    else if (im <= 6.9d-18) then
        tmp = sin(re)
    else if (im <= 1.35d+154) then
        tmp = t_1
    else
        tmp = t_0
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double t_0 = (im * im) * (Math.sin(re) * 0.5);
	double t_1 = re * Math.cosh(im);
	double tmp;
	if (im <= -1.35e+154) {
		tmp = t_0;
	} else if (im <= -18.0) {
		tmp = t_1;
	} else if (im <= 6.9e-18) {
		tmp = Math.sin(re);
	} else if (im <= 1.35e+154) {
		tmp = t_1;
	} else {
		tmp = t_0;
	}
	return tmp;
}
def code(re, im):
	t_0 = (im * im) * (math.sin(re) * 0.5)
	t_1 = re * math.cosh(im)
	tmp = 0
	if im <= -1.35e+154:
		tmp = t_0
	elif im <= -18.0:
		tmp = t_1
	elif im <= 6.9e-18:
		tmp = math.sin(re)
	elif im <= 1.35e+154:
		tmp = t_1
	else:
		tmp = t_0
	return tmp
function code(re, im)
	t_0 = Float64(Float64(im * im) * Float64(sin(re) * 0.5))
	t_1 = Float64(re * cosh(im))
	tmp = 0.0
	if (im <= -1.35e+154)
		tmp = t_0;
	elseif (im <= -18.0)
		tmp = t_1;
	elseif (im <= 6.9e-18)
		tmp = sin(re);
	elseif (im <= 1.35e+154)
		tmp = t_1;
	else
		tmp = t_0;
	end
	return tmp
end
function tmp_2 = code(re, im)
	t_0 = (im * im) * (sin(re) * 0.5);
	t_1 = re * cosh(im);
	tmp = 0.0;
	if (im <= -1.35e+154)
		tmp = t_0;
	elseif (im <= -18.0)
		tmp = t_1;
	elseif (im <= 6.9e-18)
		tmp = sin(re);
	elseif (im <= 1.35e+154)
		tmp = t_1;
	else
		tmp = t_0;
	end
	tmp_2 = tmp;
end
code[re_, im_] := Block[{t$95$0 = N[(N[(im * im), $MachinePrecision] * N[(N[Sin[re], $MachinePrecision] * 0.5), $MachinePrecision]), $MachinePrecision]}, Block[{t$95$1 = N[(re * N[Cosh[im], $MachinePrecision]), $MachinePrecision]}, If[LessEqual[im, -1.35e+154], t$95$0, If[LessEqual[im, -18.0], t$95$1, If[LessEqual[im, 6.9e-18], N[Sin[re], $MachinePrecision], If[LessEqual[im, 1.35e+154], t$95$1, t$95$0]]]]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := \left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\
t_1 := re \cdot \cosh im\\
\mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\
\;\;\;\;t_0\\

\mathbf{elif}\;im \leq -18:\\
\;\;\;\;t_1\\

\mathbf{elif}\;im \leq 6.9 \cdot 10^{-18}:\\
\;\;\;\;\sin re\\

\mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\
\;\;\;\;t_1\\

\mathbf{else}:\\
\;\;\;\;t_0\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if im < -1.35000000000000003e154 or 1.35000000000000003e154 < im

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
    4. Taylor expanded in im around inf 100.0%

      \[\leadsto \color{blue}{0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
    5. Step-by-step derivation
      1. unpow2 100.0%

        \[\leadsto 0.5 \cdot \left(\sin re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
      2. associate-*r* 100.0%

        \[\leadsto \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(im \cdot im\right)} \]
      3. *-commutative 100.0%

        \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    6. Simplified 100.0%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]

    if -1.35000000000000003e154 < im < -18 or 6.9000000000000003e-18 < im < 1.35000000000000003e154

    1. Initial program 99.9%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Step-by-step derivation
      1. +-commutative 99.9%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(e^{im} + e^{0 - im}\right)} \]
      2. sub0-neg 99.9%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \left(e^{im} + e^{\color{blue}{-im}}\right) \]
      3. cosh-undef 99.9%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    3. Applied egg-rr 99.9%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    4. Taylor expanded in re around 0 75.5%

      \[\leadsto \left(0.5 \cdot \color{blue}{re}\right) \cdot \left(2 \cdot \cosh im\right) \]
    5. Step-by-step derivation
      1. expm1-log1p-u 45.9%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)\right)} \]
      2. expm1-udef 41.3%

        \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)} - 1} \]
      3. associate-*r* 41.3%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(0.5 \cdot re\right) \cdot 2\right) \cdot \cosh im}\right)} - 1 \]
      4. *-commutative 41.3%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(2 \cdot \left(0.5 \cdot re\right)\right)} \cdot \cosh im\right)} - 1 \]
      5. associate-*r* 41.3%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(2 \cdot 0.5\right) \cdot re\right)} \cdot \cosh im\right)} - 1 \]
      6. metadata-eval 41.3%

        \[\leadsto e^{\mathsf{log1p}\left(\left(\color{blue}{1} \cdot re\right) \cdot \cosh im\right)} - 1 \]
      7. *-un-lft-identity 41.3%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{re} \cdot \cosh im\right)} - 1 \]
    6. Applied egg-rr 41.3%

      \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(re \cdot \cosh im\right)} - 1} \]
    7. Step-by-step derivation
      1. expm1-def 45.9%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(re \cdot \cosh im\right)\right)} \]
      2. expm1-log1p 75.5%

        \[\leadsto \color{blue}{re \cdot \cosh im} \]
    8. Simplified 75.5%

      \[\leadsto \color{blue}{re \cdot \cosh im} \]

    if -18 < im < 6.9000000000000003e-18

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 98.6%

      \[\leadsto \color{blue}{\sin re} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 93.4%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\ \mathbf{elif}\;im \leq -18:\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{elif}\;im \leq 6.9 \cdot 10^{-18}:\\ \;\;\;\;\sin re\\ \mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{else}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\ \end{array} \]
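The middle-regime threshold reflects binary64 rounding: for |im| ≤ 6.9·10⁻¹⁸, cosh(im) = 1 + im²/2 + … differs from 1 by roughly im²/2 ≈ 2.4·10⁻³⁵, far below half an ulp of 1.0 (about 1.1·10⁻¹⁶), so cosh(im) rounds to exactly 1.0 and the whole expression collapses to sin(re). A sketch of that observation (sample values chosen for illustration):

```python
import math

# At the regime boundary, the im^2/2 term is ~2.4e-35, far below the
# half-ulp of 1.0 (~1.1e-16), so cosh(im) rounds to exactly 1.0.
im = 6.9e-18
assert math.cosh(im) == 1.0

# For a larger im the im^2/2 term (~5e-15) survives rounding.
assert math.cosh(1e-7) != 1.0
```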

Alternative 4: 93.2% accurate, 2.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \sin re \cdot 0.5\\ t_1 := \left(im \cdot im\right) \cdot t_0\\ t_2 := re \cdot \cosh im\\ \mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\ \;\;\;\;t_1\\ \mathbf{elif}\;im \leq -18:\\ \;\;\;\;t_2\\ \mathbf{elif}\;im \leq 0.112:\\ \;\;\;\;t_0 \cdot \left(im \cdot im + 2\right)\\ \mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\ \;\;\;\;t_2\\ \mathbf{else}:\\ \;\;\;\;t_1\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (let* ((t_0 (* (sin re) 0.5)) (t_1 (* (* im im) t_0)) (t_2 (* re (cosh im))))
   (if (<= im -1.35e+154)
     t_1
     (if (<= im -18.0)
       t_2
       (if (<= im 0.112)
         (* t_0 (+ (* im im) 2.0))
         (if (<= im 1.35e+154) t_2 t_1))))))
double code(double re, double im) {
	double t_0 = sin(re) * 0.5;
	double t_1 = (im * im) * t_0;
	double t_2 = re * cosh(im);
	double tmp;
	if (im <= -1.35e+154) {
		tmp = t_1;
	} else if (im <= -18.0) {
		tmp = t_2;
	} else if (im <= 0.112) {
		tmp = t_0 * ((im * im) + 2.0);
	} else if (im <= 1.35e+154) {
		tmp = t_2;
	} else {
		tmp = t_1;
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: t_0
    real(8) :: t_1
    real(8) :: t_2
    real(8) :: tmp
    t_0 = sin(re) * 0.5d0
    t_1 = (im * im) * t_0
    t_2 = re * cosh(im)
    if (im <= (-1.35d+154)) then
        tmp = t_1
    else if (im <= (-18.0d0)) then
        tmp = t_2
    else if (im <= 0.112d0) then
        tmp = t_0 * ((im * im) + 2.0d0)
    else if (im <= 1.35d+154) then
        tmp = t_2
    else
        tmp = t_1
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double t_0 = Math.sin(re) * 0.5;
	double t_1 = (im * im) * t_0;
	double t_2 = re * Math.cosh(im);
	double tmp;
	if (im <= -1.35e+154) {
		tmp = t_1;
	} else if (im <= -18.0) {
		tmp = t_2;
	} else if (im <= 0.112) {
		tmp = t_0 * ((im * im) + 2.0);
	} else if (im <= 1.35e+154) {
		tmp = t_2;
	} else {
		tmp = t_1;
	}
	return tmp;
}
def code(re, im):
	t_0 = math.sin(re) * 0.5
	t_1 = (im * im) * t_0
	t_2 = re * math.cosh(im)
	tmp = 0
	if im <= -1.35e+154:
		tmp = t_1
	elif im <= -18.0:
		tmp = t_2
	elif im <= 0.112:
		tmp = t_0 * ((im * im) + 2.0)
	elif im <= 1.35e+154:
		tmp = t_2
	else:
		tmp = t_1
	return tmp
function code(re, im)
	t_0 = Float64(sin(re) * 0.5)
	t_1 = Float64(Float64(im * im) * t_0)
	t_2 = Float64(re * cosh(im))
	tmp = 0.0
	if (im <= -1.35e+154)
		tmp = t_1;
	elseif (im <= -18.0)
		tmp = t_2;
	elseif (im <= 0.112)
		tmp = Float64(t_0 * Float64(Float64(im * im) + 2.0));
	elseif (im <= 1.35e+154)
		tmp = t_2;
	else
		tmp = t_1;
	end
	return tmp
end
function tmp_2 = code(re, im)
	t_0 = sin(re) * 0.5;
	t_1 = (im * im) * t_0;
	t_2 = re * cosh(im);
	tmp = 0.0;
	if (im <= -1.35e+154)
		tmp = t_1;
	elseif (im <= -18.0)
		tmp = t_2;
	elseif (im <= 0.112)
		tmp = t_0 * ((im * im) + 2.0);
	elseif (im <= 1.35e+154)
		tmp = t_2;
	else
		tmp = t_1;
	end
	tmp_2 = tmp;
end
code[re_, im_] := Block[{t$95$0 = N[(N[Sin[re], $MachinePrecision] * 0.5), $MachinePrecision]}, Block[{t$95$1 = N[(N[(im * im), $MachinePrecision] * t$95$0), $MachinePrecision]}, Block[{t$95$2 = N[(re * N[Cosh[im], $MachinePrecision]), $MachinePrecision]}, If[LessEqual[im, -1.35e+154], t$95$1, If[LessEqual[im, -18.0], t$95$2, If[LessEqual[im, 0.112], N[(t$95$0 * N[(N[(im * im), $MachinePrecision] + 2.0), $MachinePrecision]), $MachinePrecision], If[LessEqual[im, 1.35e+154], t$95$2, t$95$1]]]]]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := \sin re \cdot 0.5\\
t_1 := \left(im \cdot im\right) \cdot t_0\\
t_2 := re \cdot \cosh im\\
\mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\
\;\;\;\;t_1\\

\mathbf{elif}\;im \leq -18:\\
\;\;\;\;t_2\\

\mathbf{elif}\;im \leq 0.112:\\
\;\;\;\;t_0 \cdot \left(im \cdot im + 2\right)\\

\mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\
\;\;\;\;t_2\\

\mathbf{else}:\\
\;\;\;\;t_1\\


\end{array}
\end{array}
Derivation
  1. Split input into 3 regimes
  2. if im < -1.35000000000000003e154 or 1.35000000000000003e154 < im

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
    4. Taylor expanded in im around inf 100.0%

      \[\leadsto \color{blue}{0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
    5. Step-by-step derivation
      1. unpow2 100.0%

        \[\leadsto 0.5 \cdot \left(\sin re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
      2. associate-*r* 100.0%

        \[\leadsto \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(im \cdot im\right)} \]
      3. *-commutative 100.0%

        \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    6. Simplified 100.0%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]

    if -1.35000000000000003e154 < im < -18 or 0.112000000000000002 < im < 1.35000000000000003e154

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Step-by-step derivation
      1. +-commutative 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(e^{im} + e^{0 - im}\right)} \]
      2. sub0-neg 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \left(e^{im} + e^{\color{blue}{-im}}\right) \]
      3. cosh-undef 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    3. Applied egg-rr 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    4. Taylor expanded in re around 0 75.4%

      \[\leadsto \left(0.5 \cdot \color{blue}{re}\right) \cdot \left(2 \cdot \cosh im\right) \]
    5. Step-by-step derivation
      1. expm1-log1p-u 43.9%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)\right)} \]
      2. expm1-udef 43.9%

        \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)} - 1} \]
      3. associate-*r* 43.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(0.5 \cdot re\right) \cdot 2\right) \cdot \cosh im}\right)} - 1 \]
      4. *-commutative 43.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(2 \cdot \left(0.5 \cdot re\right)\right)} \cdot \cosh im\right)} - 1 \]
      5. associate-*r* 43.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(2 \cdot 0.5\right) \cdot re\right)} \cdot \cosh im\right)} - 1 \]
      6. metadata-eval 43.9%

        \[\leadsto e^{\mathsf{log1p}\left(\left(\color{blue}{1} \cdot re\right) \cdot \cosh im\right)} - 1 \]
      7. *-un-lft-identity 43.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{re} \cdot \cosh im\right)} - 1 \]
    6. Applied egg-rr 43.9%

      \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(re \cdot \cosh im\right)} - 1} \]
    7. Step-by-step derivation
      1. expm1-def 43.9%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(re \cdot \cosh im\right)\right)} \]
      2. expm1-log1p 75.4%

        \[\leadsto \color{blue}{re \cdot \cosh im} \]
    8. Simplified 75.4%

      \[\leadsto \color{blue}{re \cdot \cosh im} \]

    if -18 < im < 0.112000000000000002

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 98.6%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 98.6%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 93.8%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -1.35 \cdot 10^{+154}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\ \mathbf{elif}\;im \leq -18:\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{elif}\;im \leq 0.112:\\ \;\;\;\;\left(\sin re \cdot 0.5\right) \cdot \left(im \cdot im + 2\right)\\ \mathbf{elif}\;im \leq 1.35 \cdot 10^{+154}:\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{else}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(\sin re \cdot 0.5\right)\\ \end{array} \]
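In Alternative 4 the near-zero regime keeps the second-order term, using the Taylor expansion cosh(im) ≈ 1 + im²/2, i.e. (0.5 · sin re) · (2 + im²). A sketch of how close that is at a small im (sample values chosen for illustration, not from the report):

```python
import math

re, im = 1.0, 0.05
exact = math.sin(re) * math.cosh(im)
# Second-order Taylor form used in the |im| <= 0.112 regime.
taylor = (math.sin(re) * 0.5) * ((im * im) + 2.0)
# The truncation error is O(im^4 / 24), tiny for |im| <= 0.112.
assert math.isclose(exact, taylor, rel_tol=1e-6)
```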

Alternative 5: 86.2% accurate, 2.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;im \leq -18 \lor \neg \left(im \leq 6.9 \cdot 10^{-18}\right):\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{else}:\\ \;\;\;\;\sin re\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (if (or (<= im -18.0) (not (<= im 6.9e-18))) (* re (cosh im)) (sin re)))
double code(double re, double im) {
	double tmp;
	if ((im <= -18.0) || !(im <= 6.9e-18)) {
		tmp = re * cosh(im);
	} else {
		tmp = sin(re);
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if ((im <= (-18.0d0)) .or. (.not. (im <= 6.9d-18))) then
        tmp = re * cosh(im)
    else
        tmp = sin(re)
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double tmp;
	if ((im <= -18.0) || !(im <= 6.9e-18)) {
		tmp = re * Math.cosh(im);
	} else {
		tmp = Math.sin(re);
	}
	return tmp;
}
def code(re, im):
	tmp = 0
	if (im <= -18.0) or not (im <= 6.9e-18):
		tmp = re * math.cosh(im)
	else:
		tmp = math.sin(re)
	return tmp
function code(re, im)
	tmp = 0.0
	if ((im <= -18.0) || !(im <= 6.9e-18))
		tmp = Float64(re * cosh(im));
	else
		tmp = sin(re);
	end
	return tmp
end
function tmp_2 = code(re, im)
	tmp = 0.0;
	if ((im <= -18.0) || ~((im <= 6.9e-18)))
		tmp = re * cosh(im);
	else
		tmp = sin(re);
	end
	tmp_2 = tmp;
end
code[re_, im_] := If[Or[LessEqual[im, -18.0], N[Not[LessEqual[im, 6.9e-18]], $MachinePrecision]], N[(re * N[Cosh[im], $MachinePrecision]), $MachinePrecision], N[Sin[re], $MachinePrecision]]
\begin{array}{l}

\\
\begin{array}{l}
\mathbf{if}\;im \leq -18 \lor \neg \left(im \leq 6.9 \cdot 10^{-18}\right):\\
\;\;\;\;re \cdot \cosh im\\

\mathbf{else}:\\
\;\;\;\;\sin re\\


\end{array}
\end{array}
Derivation
  1. Split input into 2 regimes
  2. if im < -18 or 6.9000000000000003e-18 < im

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Step-by-step derivation
      1. +-commutative 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(e^{im} + e^{0 - im}\right)} \]
      2. sub0-neg 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \left(e^{im} + e^{\color{blue}{-im}}\right) \]
      3. cosh-undef 100.0%

        \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    3. Applied egg-rr 100.0%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 \cdot \cosh im\right)} \]
    4. Taylor expanded in re around 0 73.0%

      \[\leadsto \left(0.5 \cdot \color{blue}{re}\right) \cdot \left(2 \cdot \cosh im\right) \]
    5. Step-by-step derivation
      1. expm1-log1p-u 34.1%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)\right)} \]
      2. expm1-udef 31.9%

        \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(\left(0.5 \cdot re\right) \cdot \left(2 \cdot \cosh im\right)\right)} - 1} \]
      3. associate-*r* 31.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(0.5 \cdot re\right) \cdot 2\right) \cdot \cosh im}\right)} - 1 \]
      4. *-commutative 31.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(2 \cdot \left(0.5 \cdot re\right)\right)} \cdot \cosh im\right)} - 1 \]
      5. associate-*r* 31.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{\left(\left(2 \cdot 0.5\right) \cdot re\right)} \cdot \cosh im\right)} - 1 \]
      6. metadata-eval 31.9%

        \[\leadsto e^{\mathsf{log1p}\left(\left(\color{blue}{1} \cdot re\right) \cdot \cosh im\right)} - 1 \]
      7. *-un-lft-identity 31.9%

        \[\leadsto e^{\mathsf{log1p}\left(\color{blue}{re} \cdot \cosh im\right)} - 1 \]
    6. Applied egg-rr 31.9%

      \[\leadsto \color{blue}{e^{\mathsf{log1p}\left(re \cdot \cosh im\right)} - 1} \]
    7. Step-by-step derivation
      1. expm1-def 34.1%

        \[\leadsto \color{blue}{\mathsf{expm1}\left(\mathsf{log1p}\left(re \cdot \cosh im\right)\right)} \]
      2. expm1-log1p 73.0%

        \[\leadsto \color{blue}{re \cdot \cosh im} \]
    8. Simplified 73.0%

      \[\leadsto \color{blue}{re \cdot \cosh im} \]

    if -18 < im < 6.9000000000000003e-18

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 98.6%

      \[\leadsto \color{blue}{\sin re} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 86.0%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -18 \lor \neg \left(im \leq 6.9 \cdot 10^{-18}\right):\\ \;\;\;\;re \cdot \cosh im\\ \mathbf{else}:\\ \;\;\;\;\sin re\\ \end{array} \]
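The cosh rewrite earlier in this derivation is exact in real arithmetic: \(e^{-im} + e^{im} = 2\cosh(im)\), so before any Taylor truncation the original program equals \(\sin(re)\cdot\cosh(im)\). A quick numeric sanity check of that identity (a sketch of ours, not part of the Herbie output):

```python
import math

# Original program from the specification.
def original(re, im):
    return (0.5 * math.sin(re)) * (math.exp(0.0 - im) + math.exp(im))

# The cosh rewrite: exp(-im) + exp(im) == 2*cosh(im), so the whole
# expression collapses to sin(re) * cosh(im).
def cosh_form(re, im):
    return math.sin(re) * math.cosh(im)

for re, im in [(1.0, 0.5), (-2.0, 3.0), (0.1, -4.0)]:
    assert math.isclose(original(re, im), cosh_form(re, im), rel_tol=1e-12)
```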

Alternative 6: 71.2% accurate, 2.9× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;im \leq -2.7 \cdot 10^{+16}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(re \cdot 0.5\right)\\ \mathbf{elif}\;im \leq 6.9 \cdot 10^{-18}:\\ \;\;\;\;\sin re\\ \mathbf{else}:\\ \;\;\;\;re + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right)\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (if (<= im -2.7e+16)
   (* (* im im) (* re 0.5))
   (if (<= im 6.9e-18) (sin re) (+ re (* 0.5 (* re (* im im)))))))
double code(double re, double im) {
	double tmp;
	if (im <= -2.7e+16) {
		tmp = (im * im) * (re * 0.5);
	} else if (im <= 6.9e-18) {
		tmp = sin(re);
	} else {
		tmp = re + (0.5 * (re * (im * im)));
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if (im <= (-2.7d+16)) then
        tmp = (im * im) * (re * 0.5d0)
    else if (im <= 6.9d-18) then
        tmp = sin(re)
    else
        tmp = re + (0.5d0 * (re * (im * im)))
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double tmp;
	if (im <= -2.7e+16) {
		tmp = (im * im) * (re * 0.5);
	} else if (im <= 6.9e-18) {
		tmp = Math.sin(re);
	} else {
		tmp = re + (0.5 * (re * (im * im)));
	}
	return tmp;
}
def code(re, im):
	tmp = 0
	if im <= -2.7e+16:
		tmp = (im * im) * (re * 0.5)
	elif im <= 6.9e-18:
		tmp = math.sin(re)
	else:
		tmp = re + (0.5 * (re * (im * im)))
	return tmp
function code(re, im)
	tmp = 0.0
	if (im <= -2.7e+16)
		tmp = Float64(Float64(im * im) * Float64(re * 0.5));
	elseif (im <= 6.9e-18)
		tmp = sin(re);
	else
		tmp = Float64(re + Float64(0.5 * Float64(re * Float64(im * im))));
	end
	return tmp
end
function tmp_2 = code(re, im)
	tmp = 0.0;
	if (im <= -2.7e+16)
		tmp = (im * im) * (re * 0.5);
	elseif (im <= 6.9e-18)
		tmp = sin(re);
	else
		tmp = re + (0.5 * (re * (im * im)));
	end
	tmp_2 = tmp;
end
code[re_, im_] := If[LessEqual[im, -2.7e+16], N[(N[(im * im), $MachinePrecision] * N[(re * 0.5), $MachinePrecision]), $MachinePrecision], If[LessEqual[im, 6.9e-18], N[Sin[re], $MachinePrecision], N[(re + N[(0.5 * N[(re * N[(im * im), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]]]
Derivation
  1. Split input into 3 regimes
  2. if im < -2.7e16

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 55.6%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 55.6%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
    4. Taylor expanded in im around inf 55.6%

      \[\leadsto \color{blue}{0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
    5. Step-by-step derivation
      1. unpow2 55.6%

        \[\leadsto 0.5 \cdot \left(\sin re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
      2. associate-*r* 55.6%

        \[\leadsto \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(im \cdot im\right)} \]
      3. *-commutative 55.6%

        \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    6. Simplified 55.6%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    7. Taylor expanded in re around 0 47.7%

      \[\leadsto \color{blue}{0.5 \cdot \left(re \cdot {im}^{2}\right)} \]
    8. Step-by-step derivation
      1. associate-*r* 47.7%

        \[\leadsto \color{blue}{\left(0.5 \cdot re\right) \cdot {im}^{2}} \]
      2. *-commutative 47.7%

        \[\leadsto \color{blue}{{im}^{2} \cdot \left(0.5 \cdot re\right)} \]
      3. unpow2 47.7%

        \[\leadsto \color{blue}{\left(im \cdot im\right)} \cdot \left(0.5 \cdot re\right) \]
    9. Simplified 47.7%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot re\right)} \]

    if -2.7e16 < im < 6.9000000000000003e-18

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 97.2%

      \[\leadsto \color{blue}{\sin re} \]

    if 6.9000000000000003e-18 < im

    1. Initial program 99.9%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 59.6%

      \[\leadsto \color{blue}{\sin re + 0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
    3. Simplified 59.6%

      \[\leadsto \color{blue}{\sin re + 0.5 \cdot \left(\sin re \cdot \left(im \cdot im\right)\right)} \]
    4. Taylor expanded in re around 0 42.9%

      \[\leadsto \sin re + 0.5 \cdot \color{blue}{\left(re \cdot {im}^{2}\right)} \]
    5. Step-by-step derivation
      1. unpow2 42.9%

        \[\leadsto \sin re + 0.5 \cdot \left(re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
    6. Simplified 42.9%

      \[\leadsto \sin re + 0.5 \cdot \color{blue}{\left(re \cdot \left(im \cdot im\right)\right)} \]
    7. Taylor expanded in re around 0 42.9%

      \[\leadsto \color{blue}{re} + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right) \]
  3. Recombined 3 regimes into one program.
  4. Final simplification 72.2%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -2.7 \cdot 10^{+16}:\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(re \cdot 0.5\right)\\ \mathbf{elif}\;im \leq 6.9 \cdot 10^{-18}:\\ \;\;\;\;\sin re\\ \mathbf{else}:\\ \;\;\;\;re + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right)\\ \end{array} \]
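The "Taylor expanded in im around 0" steps above use \(e^{-im}+e^{im} = 2 + im^2 + O(im^4)\), so near \(im = 0\) the program is approximately \(\sin(re) + 0.5\cdot\sin(re)\cdot im^2\). A small numeric check of that truncation (our sketch, not Herbie output):

```python
import math

# Original program from the specification.
def original(re, im):
    return (0.5 * math.sin(re)) * (math.exp(0.0 - im) + math.exp(im))

# Second-order expansion in im (before sin(re) is further truncated to re).
def taylor2(re, im):
    return math.sin(re) + 0.5 * (math.sin(re) * (im * im))

# Truncation error is O(im**4), so agreement is very tight for small im.
re, im = 1.3, 1e-3
assert math.isclose(original(re, im), taylor2(re, im), rel_tol=1e-11)
```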

Alternative 7: 41.0% accurate, 27.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;im \leq -3.4 \cdot 10^{-7} \lor \neg \left(im \leq 0.0035\right):\\ \;\;\;\;0.5 \cdot \left(im \cdot \left(re \cdot im\right)\right)\\ \mathbf{else}:\\ \;\;\;\;re\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (if (or (<= im -3.4e-7) (not (<= im 0.0035))) (* 0.5 (* im (* re im))) re))
double code(double re, double im) {
	double tmp;
	if ((im <= -3.4e-7) || !(im <= 0.0035)) {
		tmp = 0.5 * (im * (re * im));
	} else {
		tmp = re;
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if ((im <= (-3.4d-7)) .or. (.not. (im <= 0.0035d0))) then
        tmp = 0.5d0 * (im * (re * im))
    else
        tmp = re
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double tmp;
	if ((im <= -3.4e-7) || !(im <= 0.0035)) {
		tmp = 0.5 * (im * (re * im));
	} else {
		tmp = re;
	}
	return tmp;
}
def code(re, im):
	tmp = 0
	if (im <= -3.4e-7) or not (im <= 0.0035):
		tmp = 0.5 * (im * (re * im))
	else:
		tmp = re
	return tmp
function code(re, im)
	tmp = 0.0
	if ((im <= -3.4e-7) || !(im <= 0.0035))
		tmp = Float64(0.5 * Float64(im * Float64(re * im)));
	else
		tmp = re;
	end
	return tmp
end
function tmp_2 = code(re, im)
	tmp = 0.0;
	if ((im <= -3.4e-7) || ~((im <= 0.0035)))
		tmp = 0.5 * (im * (re * im));
	else
		tmp = re;
	end
	tmp_2 = tmp;
end
code[re_, im_] := If[Or[LessEqual[im, -3.4e-7], N[Not[LessEqual[im, 0.0035]], $MachinePrecision]], N[(0.5 * N[(im * N[(re * im), $MachinePrecision]), $MachinePrecision]), $MachinePrecision], re]
Derivation
  1. Split input into 2 regimes
  2. if im < -3.39999999999999974e-7 or 0.00350000000000000007 < im

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 55.3%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 55.3%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
    4. Taylor expanded in re around 0 42.6%

      \[\leadsto \left(0.5 \cdot \color{blue}{re}\right) \cdot \left(2 + im \cdot im\right) \]
    5. Taylor expanded in im around inf 42.6%

      \[\leadsto \color{blue}{0.5 \cdot \left(re \cdot {im}^{2}\right)} \]
    6. Step-by-step derivation
      1. unpow2 42.6%

        \[\leadsto 0.5 \cdot \left(re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
      2. associate-*r* 33.7%

        \[\leadsto 0.5 \cdot \color{blue}{\left(\left(re \cdot im\right) \cdot im\right)} \]
      3. *-commutative 33.7%

        \[\leadsto 0.5 \cdot \left(\color{blue}{\left(im \cdot re\right)} \cdot im\right) \]
    7. Simplified 33.7%

      \[\leadsto \color{blue}{0.5 \cdot \left(\left(im \cdot re\right) \cdot im\right)} \]

    if -3.39999999999999974e-7 < im < 0.00350000000000000007

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 99.4%

      \[\leadsto \color{blue}{\sin re} \]
    3. Taylor expanded in re around 0 47.2%

      \[\leadsto \color{blue}{re} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 40.5%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -3.4 \cdot 10^{-7} \lor \neg \left(im \leq 0.0035\right):\\ \;\;\;\;0.5 \cdot \left(im \cdot \left(re \cdot im\right)\right)\\ \mathbf{else}:\\ \;\;\;\;re\\ \end{array} \]

Alternative 8: 46.9% accurate, 27.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} \mathbf{if}\;im \leq -3.4 \cdot 10^{-7} \lor \neg \left(im \leq 0.0035\right):\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(re \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;re\\ \end{array} \end{array} \]
(FPCore (re im)
 :precision binary64
 (if (or (<= im -3.4e-7) (not (<= im 0.0035))) (* (* im im) (* re 0.5)) re))
double code(double re, double im) {
	double tmp;
	if ((im <= -3.4e-7) || !(im <= 0.0035)) {
		tmp = (im * im) * (re * 0.5);
	} else {
		tmp = re;
	}
	return tmp;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    real(8) :: tmp
    if ((im <= (-3.4d-7)) .or. (.not. (im <= 0.0035d0))) then
        tmp = (im * im) * (re * 0.5d0)
    else
        tmp = re
    end if
    code = tmp
end function
public static double code(double re, double im) {
	double tmp;
	if ((im <= -3.4e-7) || !(im <= 0.0035)) {
		tmp = (im * im) * (re * 0.5);
	} else {
		tmp = re;
	}
	return tmp;
}
def code(re, im):
	tmp = 0
	if (im <= -3.4e-7) or not (im <= 0.0035):
		tmp = (im * im) * (re * 0.5)
	else:
		tmp = re
	return tmp
function code(re, im)
	tmp = 0.0
	if ((im <= -3.4e-7) || !(im <= 0.0035))
		tmp = Float64(Float64(im * im) * Float64(re * 0.5));
	else
		tmp = re;
	end
	return tmp
end
function tmp_2 = code(re, im)
	tmp = 0.0;
	if ((im <= -3.4e-7) || ~((im <= 0.0035)))
		tmp = (im * im) * (re * 0.5);
	else
		tmp = re;
	end
	tmp_2 = tmp;
end
code[re_, im_] := If[Or[LessEqual[im, -3.4e-7], N[Not[LessEqual[im, 0.0035]], $MachinePrecision]], N[(N[(im * im), $MachinePrecision] * N[(re * 0.5), $MachinePrecision]), $MachinePrecision], re]
Derivation
  1. Split input into 2 regimes
  2. if im < -3.39999999999999974e-7 or 0.00350000000000000007 < im

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 55.3%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
    3. Simplified 55.3%

      \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
    4. Taylor expanded in im around inf 54.0%

      \[\leadsto \color{blue}{0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
    5. Step-by-step derivation
      1. unpow2 54.0%

        \[\leadsto 0.5 \cdot \left(\sin re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
      2. associate-*r* 54.0%

        \[\leadsto \color{blue}{\left(0.5 \cdot \sin re\right) \cdot \left(im \cdot im\right)} \]
      3. *-commutative 54.0%

        \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    6. Simplified 54.0%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot \sin re\right)} \]
    7. Taylor expanded in re around 0 42.6%

      \[\leadsto \color{blue}{0.5 \cdot \left(re \cdot {im}^{2}\right)} \]
    8. Step-by-step derivation
      1. associate-*r* 42.6%

        \[\leadsto \color{blue}{\left(0.5 \cdot re\right) \cdot {im}^{2}} \]
      2. *-commutative 42.6%

        \[\leadsto \color{blue}{{im}^{2} \cdot \left(0.5 \cdot re\right)} \]
      3. unpow2 42.6%

        \[\leadsto \color{blue}{\left(im \cdot im\right)} \cdot \left(0.5 \cdot re\right) \]
    9. Simplified 42.6%

      \[\leadsto \color{blue}{\left(im \cdot im\right) \cdot \left(0.5 \cdot re\right)} \]

    if -3.39999999999999974e-7 < im < 0.00350000000000000007

    1. Initial program 100.0%

      \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
    2. Taylor expanded in im around 0 99.4%

      \[\leadsto \color{blue}{\sin re} \]
    3. Taylor expanded in re around 0 47.2%

      \[\leadsto \color{blue}{re} \]
  3. Recombined 2 regimes into one program.
  4. Final simplification 44.9%

    \[\leadsto \begin{array}{l} \mathbf{if}\;im \leq -3.4 \cdot 10^{-7} \lor \neg \left(im \leq 0.0035\right):\\ \;\;\;\;\left(im \cdot im\right) \cdot \left(re \cdot 0.5\right)\\ \mathbf{else}:\\ \;\;\;\;re\\ \end{array} \]

Alternative 9: 47.2% accurate, 34.3× speedup

\[\begin{array}{l} \\ \left(im \cdot im + 2\right) \cdot \left(re \cdot 0.5\right) \end{array} \]
(FPCore (re im) :precision binary64 (* (+ (* im im) 2.0) (* re 0.5)))
double code(double re, double im) {
	return ((im * im) + 2.0) * (re * 0.5);
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = ((im * im) + 2.0d0) * (re * 0.5d0)
end function
public static double code(double re, double im) {
	return ((im * im) + 2.0) * (re * 0.5);
}
def code(re, im):
	return ((im * im) + 2.0) * (re * 0.5)
function code(re, im)
	return Float64(Float64(Float64(im * im) + 2.0) * Float64(re * 0.5))
end
function tmp = code(re, im)
	tmp = ((im * im) + 2.0) * (re * 0.5);
end
code[re_, im_] := N[(N[(N[(im * im), $MachinePrecision] + 2.0), $MachinePrecision] * N[(re * 0.5), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
  2. Taylor expanded in im around 0 78.0%

    \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + {im}^{2}\right)} \]
  3. Simplified 78.0%

    \[\leadsto \left(0.5 \cdot \sin re\right) \cdot \color{blue}{\left(2 + im \cdot im\right)} \]
  4. Taylor expanded in re around 0 45.2%

    \[\leadsto \left(0.5 \cdot \color{blue}{re}\right) \cdot \left(2 + im \cdot im\right) \]
  5. Final simplification 45.2%

    \[\leadsto \left(im \cdot im + 2\right) \cdot \left(re \cdot 0.5\right) \]

Alternative 10: 47.2% accurate, 34.3× speedup

\[\begin{array}{l} \\ re + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right) \end{array} \]
(FPCore (re im) :precision binary64 (+ re (* 0.5 (* re (* im im)))))
double code(double re, double im) {
	return re + (0.5 * (re * (im * im)));
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = re + (0.5d0 * (re * (im * im)))
end function
public static double code(double re, double im) {
	return re + (0.5 * (re * (im * im)));
}
def code(re, im):
	return re + (0.5 * (re * (im * im)))
function code(re, im)
	return Float64(re + Float64(0.5 * Float64(re * Float64(im * im))))
end
function tmp = code(re, im)
	tmp = re + (0.5 * (re * (im * im)));
end
code[re_, im_] := N[(re + N[(0.5 * N[(re * N[(im * im), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
Derivation
  1. Initial program 100.0%

    \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
  2. Taylor expanded in im around 0 78.0%

    \[\leadsto \color{blue}{\sin re + 0.5 \cdot \left(\sin re \cdot {im}^{2}\right)} \]
  3. Simplified 78.0%

    \[\leadsto \color{blue}{\sin re + 0.5 \cdot \left(\sin re \cdot \left(im \cdot im\right)\right)} \]
  4. Taylor expanded in re around 0 66.1%

    \[\leadsto \sin re + 0.5 \cdot \color{blue}{\left(re \cdot {im}^{2}\right)} \]
  5. Step-by-step derivation
    1. unpow2 66.1%

      \[\leadsto \sin re + 0.5 \cdot \left(re \cdot \color{blue}{\left(im \cdot im\right)}\right) \]
  6. Simplified 66.1%

    \[\leadsto \sin re + 0.5 \cdot \color{blue}{\left(re \cdot \left(im \cdot im\right)\right)} \]
  7. Taylor expanded in re around 0 45.2%

    \[\leadsto \color{blue}{re} + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right) \]
  8. Final simplification 45.2%

    \[\leadsto re + 0.5 \cdot \left(re \cdot \left(im \cdot im\right)\right) \]
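Alternative 10 stacks two second-order truncations: \(e^{-im}+e^{im} \to 2 + im^2\) and \(\sin re \to re\). A quick check (our sketch, not Herbie output) that it tracks the original only while both \(|re|\) and \(|im|\) stay small:

```python
import math

# Original program from the specification.
def original(re, im):
    return (0.5 * math.sin(re)) * (math.exp(0.0 - im) + math.exp(im))

# Alternative 10 as given above: sin(re) -> re and cosh(im) -> 1 + im*im/2.
def alt10(re, im):
    return re + 0.5 * (re * (im * im))

# Tracks the original closely when both |re| and |im| are small ...
assert math.isclose(original(1e-4, 0.01), alt10(1e-4, 0.01), rel_tol=1e-6)

# ... but the cosh truncation shows once im grows: cosh(2) = 3.76 vs 3.0.
assert not math.isclose(original(1e-4, 2.0), alt10(1e-4, 2.0), rel_tol=0.1)
```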

Alternative 11: 26.2% accurate, 309.0× speedup

\[\begin{array}{l} \\ re \end{array} \]
(FPCore (re im) :precision binary64 re)
double code(double re, double im) {
	return re;
}
real(8) function code(re, im)
    real(8), intent (in) :: re
    real(8), intent (in) :: im
    code = re
end function
public static double code(double re, double im) {
	return re;
}
def code(re, im):
	return re
function code(re, im)
	return re
end
function tmp = code(re, im)
	tmp = re;
end
code[re_, im_] := re
Derivation
  1. Initial program 100.0%

    \[\left(0.5 \cdot \sin re\right) \cdot \left(e^{0 - im} + e^{im}\right) \]
  2. Taylor expanded in im around 0 52.4%

    \[\leadsto \color{blue}{\sin re} \]
  3. Taylor expanded in re around 0 25.2%

    \[\leadsto \color{blue}{re} \]
  4. Final simplification 25.2%

    \[\leadsto re \]
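Alternative 11 keeps only the small-angle approximation \(\sin re \approx re\), which is why it is the fastest and least accurate option: the relative error grows roughly like \(re^2/6\). A quick illustration (our sketch, not Herbie output):

```python
import math

# sin(re) ~= re is excellent near 0 (relative error ~ re**2 / 6) ...
for re in [1e-9, 1e-6, 1e-3]:
    assert math.isclose(math.sin(re), re, rel_tol=1e-6)

# ... but degrades quickly for moderate re (sin(1.0) ~= 0.841).
assert not math.isclose(math.sin(1.0), 1.0, rel_tol=1e-6)
```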

Reproduce

herbie shell --seed 2023187 
(FPCore (re im)
  :name "math.sin on complex, real part"
  :precision binary64
  (* (* 0.5 (sin re)) (+ (exp (- 0.0 im)) (exp im))))