Data.Colour.CIE:cieLAB from colour-2.3.3, A

Percentage Accurate: 99.7% → 99.8%
Time: 8.2s
Alternatives: 8
Speedup: 2.1×

Specification

\[\begin{array}{l} \\ \left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \end{array} \]
(FPCore (x y) :precision binary64 (* (* (- x (/ 16.0 116.0)) 3.0) y))
double code(double x, double y) {
	return ((x - (16.0 / 116.0)) * 3.0) * y;
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = ((x - (16.0d0 / 116.0d0)) * 3.0d0) * y
end function
public static double code(double x, double y) {
	return ((x - (16.0 / 116.0)) * 3.0) * y;
}
def code(x, y):
	return ((x - (16.0 / 116.0)) * 3.0) * y
function code(x, y)
	return Float64(Float64(Float64(x - Float64(16.0 / 116.0)) * 3.0) * y)
end
function tmp = code(x, y)
	tmp = ((x - (16.0 / 116.0)) * 3.0) * y;
end
code[x_, y_] := N[(N[(N[(x - N[(16.0 / 116.0), $MachinePrecision]), $MachinePrecision] * 3.0), $MachinePrecision] * y), $MachinePrecision]
\begin{array}{l}

\\
\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y
\end{array}

Sampling outcomes in binary64 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of one input variable (chosen in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program and blue represents Herbie's suggestion; the two can be toggled with the buttons below the plot. The lines show averages, and the dots show individual samples.

Accuracy vs Speed

Herbie found 8 alternatives:

[Table: accuracy and speedup for each alternative; each alternative is detailed individually below.]
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 99.7% accurate, 1.0× speedup

\[\begin{array}{l} \\ \left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \end{array} \]
(FPCore (x y) :precision binary64 (* (* (- x (/ 16.0 116.0)) 3.0) y))
double code(double x, double y) {
	return ((x - (16.0 / 116.0)) * 3.0) * y;
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    code = ((x - (16.0d0 / 116.0d0)) * 3.0d0) * y
end function
public static double code(double x, double y) {
	return ((x - (16.0 / 116.0)) * 3.0) * y;
}
def code(x, y):
	return ((x - (16.0 / 116.0)) * 3.0) * y
function code(x, y)
	return Float64(Float64(Float64(x - Float64(16.0 / 116.0)) * 3.0) * y)
end
function tmp = code(x, y)
	tmp = ((x - (16.0 / 116.0)) * 3.0) * y;
end
code[x_, y_] := N[(N[(N[(x - N[(16.0 / 116.0), $MachinePrecision]), $MachinePrecision] * 3.0), $MachinePrecision] * y), $MachinePrecision]
\begin{array}{l}

\\
\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y
\end{array}

Alternative 1: 99.8% accurate, 1.5× speedup

\[\begin{array}{l} \\ \mathsf{fma}\left(x \cdot y, 3, y \cdot -0.41379310344827586\right) \end{array} \]
(FPCore (x y) :precision binary64 (fma (* x y) 3.0 (* y -0.41379310344827586)))
double code(double x, double y) {
	return fma((x * y), 3.0, (y * -0.41379310344827586));
}
function code(x, y)
	return fma(Float64(x * y), 3.0, Float64(y * -0.41379310344827586))
end
code[x_, y_] := N[(N[(x * y), $MachinePrecision] * 3.0 + N[(y * -0.41379310344827586), $MachinePrecision]), $MachinePrecision]
\begin{array}{l}

\\
\mathsf{fma}\left(x \cdot y, 3, y \cdot -0.41379310344827586\right)
\end{array}
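
Since the C and Julia listings above call fma, note that in C it is declared in <math.h> and may require linking with -lm. The sketch below (sample inputs and compile command are illustrative, not part of this report) spot-checks Alternative 1 against the original program:

#include <math.h>
#include <stdio.h>

/* Original program from the specification. */
static double original(double x, double y) {
    return ((x - (16.0 / 116.0)) * 3.0) * y;
}

/* Alternative 1: fused multiply-add form. */
static double alternative1(double x, double y) {
    return fma(x * y, 3.0, y * -0.41379310344827586);
}

int main(void) {
    /* Illustrative sample points only; compile with e.g. cc -std=c99 -O2 check.c -lm */
    const double xs[] = {1e-6, 0.2, 1.0, 1e6};
    const double ys[] = {0.5, 3.0, -7.0, 1e-3};
    for (int i = 0; i < 4; i++) {
        printf("x=%g y=%g  original=%.17g  alt1=%.17g\n",
               xs[i], ys[i], original(xs[i], ys[i]), alternative1(xs[i], ys[i]));
    }
    return 0;
}
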
Derivation
  1. Initial program 99.4%

    \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
  2. Add Preprocessing
  3. Step-by-step derivation
    1. lift-/.f64 (N/A)

      \[\leadsto \left(\left(x - \color{blue}{\frac{16}{116}}\right) \cdot 3\right) \cdot y \]
    2. lift--.f64 (N/A)

      \[\leadsto \left(\color{blue}{\left(x - \frac{16}{116}\right)} \cdot 3\right) \cdot y \]
    3. associate-*l* (N/A)

      \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right) \cdot \left(3 \cdot y\right)} \]
    4. lower-*.f64 (N/A)

      \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right) \cdot \left(3 \cdot y\right)} \]
    5. lift--.f64 (N/A)

      \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right)} \cdot \left(3 \cdot y\right) \]
    6. sub-neg (N/A)

      \[\leadsto \color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot \left(3 \cdot y\right) \]
    7. lower-+.f64 (N/A)

      \[\leadsto \color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot \left(3 \cdot y\right) \]
    8. lift-/.f64 (N/A)

      \[\leadsto \left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{16}{116}}\right)\right)\right) \cdot \left(3 \cdot y\right) \]
    9. metadata-eval (N/A)

      \[\leadsto \left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{4}{29}}\right)\right)\right) \cdot \left(3 \cdot y\right) \]
    10. metadata-eval (N/A)

      \[\leadsto \left(x + \color{blue}{\frac{-4}{29}}\right) \cdot \left(3 \cdot y\right) \]
    11. lower-*.f64 (99.6%)

      \[\leadsto \left(x + -0.13793103448275862\right) \cdot \color{blue}{\left(3 \cdot y\right)} \]
  4. Applied egg-rr (99.6%)

    \[\leadsto \color{blue}{\left(x + -0.13793103448275862\right) \cdot \left(3 \cdot y\right)} \]
  5. Step-by-step derivation
    1. lift-+.f64 (N/A)

      \[\leadsto \color{blue}{\left(x + \frac{-4}{29}\right)} \cdot \left(3 \cdot y\right) \]
    2. associate-*r* (N/A)

      \[\leadsto \color{blue}{\left(\left(x + \frac{-4}{29}\right) \cdot 3\right) \cdot y} \]
    3. *-commutative (N/A)

      \[\leadsto \color{blue}{\left(3 \cdot \left(x + \frac{-4}{29}\right)\right)} \cdot y \]
    4. associate-*r* (N/A)

      \[\leadsto \color{blue}{3 \cdot \left(\left(x + \frac{-4}{29}\right) \cdot y\right)} \]
    5. *-commutative (N/A)

      \[\leadsto 3 \cdot \color{blue}{\left(y \cdot \left(x + \frac{-4}{29}\right)\right)} \]
    6. lift-+.f64 (N/A)

      \[\leadsto 3 \cdot \left(y \cdot \color{blue}{\left(x + \frac{-4}{29}\right)}\right) \]
    7. distribute-rgt-in (N/A)

      \[\leadsto 3 \cdot \color{blue}{\left(x \cdot y + \frac{-4}{29} \cdot y\right)} \]
    8. distribute-rgt-in (N/A)

      \[\leadsto \color{blue}{\left(x \cdot y\right) \cdot 3 + \left(\frac{-4}{29} \cdot y\right) \cdot 3} \]
    9. associate-*r* (N/A)

      \[\leadsto \left(x \cdot y\right) \cdot 3 + \color{blue}{\frac{-4}{29} \cdot \left(y \cdot 3\right)} \]
    10. *-commutative (N/A)

      \[\leadsto \left(x \cdot y\right) \cdot 3 + \frac{-4}{29} \cdot \color{blue}{\left(3 \cdot y\right)} \]
    11. associate-*r* (N/A)

      \[\leadsto \left(x \cdot y\right) \cdot 3 + \color{blue}{\left(\frac{-4}{29} \cdot 3\right) \cdot y} \]
    12. metadata-eval (N/A)

      \[\leadsto \left(x \cdot y\right) \cdot 3 + \color{blue}{\frac{-12}{29}} \cdot y \]
    13. *-commutative (N/A)

      \[\leadsto \left(x \cdot y\right) \cdot 3 + \color{blue}{y \cdot \frac{-12}{29}} \]
    14. lower-fma.f64 (N/A)

      \[\leadsto \color{blue}{\mathsf{fma}\left(x \cdot y, 3, y \cdot \frac{-12}{29}\right)} \]
    15. lower-*.f64 (N/A)

      \[\leadsto \mathsf{fma}\left(\color{blue}{x \cdot y}, 3, y \cdot \frac{-12}{29}\right) \]
    16. lower-*.f64 (99.7%)

      \[\leadsto \mathsf{fma}\left(x \cdot y, 3, \color{blue}{y \cdot -0.41379310344827586}\right) \]
  6. Applied egg-rr (99.7%)

    \[\leadsto \color{blue}{\mathsf{fma}\left(x \cdot y, 3, y \cdot -0.41379310344827586\right)} \]
  7. Add Preprocessing
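
A quick arithmetic check of the constants folded by the metadata-eval steps above (the program uses their negations):

\[\frac{16}{116} = \frac{4}{29} \approx 0.13793103448275862, \qquad 3 \cdot \frac{4}{29} = \frac{12}{29} \approx 0.41379310344827586 \]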

Alternative 2: 97.6% accurate, 0.5× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := x - \frac{16}{116}\\ \mathbf{if}\;t\_0 \leq -5000000:\\ \;\;\;\;x \cdot \left(y \cdot 3\right)\\ \mathbf{elif}\;t\_0 \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;y \cdot \left(x \cdot 3\right)\\ \end{array} \end{array} \]
(FPCore (x y)
 :precision binary64
 (let* ((t_0 (- x (/ 16.0 116.0))))
   (if (<= t_0 -5000000.0)
     (* x (* y 3.0))
     (if (<= t_0 -0.1) (* y -0.41379310344827586) (* y (* x 3.0))))))
double code(double x, double y) {
	double t_0 = x - (16.0 / 116.0);
	double tmp;
	if (t_0 <= -5000000.0) {
		tmp = x * (y * 3.0);
	} else if (t_0 <= -0.1) {
		tmp = y * -0.41379310344827586;
	} else {
		tmp = y * (x * 3.0);
	}
	return tmp;
}
real(8) function code(x, y)
    real(8), intent (in) :: x
    real(8), intent (in) :: y
    real(8) :: t_0
    real(8) :: tmp
    t_0 = x - (16.0d0 / 116.0d0)
    if (t_0 <= (-5000000.0d0)) then
        tmp = x * (y * 3.0d0)
    else if (t_0 <= (-0.1d0)) then
        tmp = y * (-0.41379310344827586d0)
    else
        tmp = y * (x * 3.0d0)
    end if
    code = tmp
end function
public static double code(double x, double y) {
	double t_0 = x - (16.0 / 116.0);
	double tmp;
	if (t_0 <= -5000000.0) {
		tmp = x * (y * 3.0);
	} else if (t_0 <= -0.1) {
		tmp = y * -0.41379310344827586;
	} else {
		tmp = y * (x * 3.0);
	}
	return tmp;
}
def code(x, y):
	t_0 = x - (16.0 / 116.0)
	tmp = 0
	if t_0 <= -5000000.0:
		tmp = x * (y * 3.0)
	elif t_0 <= -0.1:
		tmp = y * -0.41379310344827586
	else:
		tmp = y * (x * 3.0)
	return tmp
function code(x, y)
	t_0 = Float64(x - Float64(16.0 / 116.0))
	tmp = 0.0
	if (t_0 <= -5000000.0)
		tmp = Float64(x * Float64(y * 3.0));
	elseif (t_0 <= -0.1)
		tmp = Float64(y * -0.41379310344827586);
	else
		tmp = Float64(y * Float64(x * 3.0));
	end
	return tmp
end
function tmp_2 = code(x, y)
	t_0 = x - (16.0 / 116.0);
	tmp = 0.0;
	if (t_0 <= -5000000.0)
		tmp = x * (y * 3.0);
	elseif (t_0 <= -0.1)
		tmp = y * -0.41379310344827586;
	else
		tmp = y * (x * 3.0);
	end
	tmp_2 = tmp;
end
code[x_, y_] := Block[{t$95$0 = N[(x - N[(16.0 / 116.0), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[t$95$0, -5000000.0], N[(x * N[(y * 3.0), $MachinePrecision]), $MachinePrecision], If[LessEqual[t$95$0, -0.1], N[(y * -0.41379310344827586), $MachinePrecision], N[(y * N[(x * 3.0), $MachinePrecision]), $MachinePrecision]]]]
\begin{array}{l}

\\
\begin{array}{l}
t_0 := x - \frac{16}{116}\\
\mathbf{if}\;t\_0 \leq -5000000:\\
\;\;\;\;x \cdot \left(y \cdot 3\right)\\

\mathbf{elif}\;t\_0 \leq -0.1:\\
\;\;\;\;y \cdot -0.41379310344827586\\

\mathbf{else}:\\
\;\;\;\;y \cdot \left(x \cdot 3\right)\\


\end{array}
\end{array}
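
The branch thresholds (-5000000 and -0.1) come from the regime inference described in the derivation below. As a rough, self-contained check (sample points chosen here for illustration, not taken from the report), each regime can be exercised against the original program:

#include <stdio.h>

/* Original program from the specification. */
static double original(double x, double y) {
    return ((x - (16.0 / 116.0)) * 3.0) * y;
}

/* Alternative 2: regime-split form. */
static double alternative2(double x, double y) {
    double t_0 = x - (16.0 / 116.0);
    if (t_0 <= -5000000.0) return x * (y * 3.0);
    if (t_0 <= -0.1) return y * -0.41379310344827586;
    return y * (x * 3.0);
}

int main(void) {
    /* One illustrative point per regime: very negative x, x near 0, larger x. */
    const double pts[][2] = {{-1e9, 2.0}, {1e-9, 2.0}, {5.0, 2.0}};
    for (int i = 0; i < 3; i++) {
        double x = pts[i][0], y = pts[i][1];
        printf("x=%g y=%g  original=%.17g  alt2=%.17g\n",
               x, y, original(x, y), alternative2(x, y));
    }
    return 0;
}
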
Derivation
  1. Split input into 3 regimes
  2. if (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -5e6

    1. Initial program 98.1%

      \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
    2. Add Preprocessing
    3. Taylor expanded in x around inf

      \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
    4. Step-by-step derivation
      1. lower-*.f64 (96.1%)

        \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
    5. Simplified (96.1%)

      \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
    6. Step-by-step derivation
      1. associate-*l* (N/A)

        \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]
      2. *-commutative (N/A)

        \[\leadsto 3 \cdot \color{blue}{\left(y \cdot x\right)} \]
      3. associate-*r* (N/A)

        \[\leadsto \color{blue}{\left(3 \cdot y\right) \cdot x} \]
      4. lift-*.f64 (N/A)

        \[\leadsto \color{blue}{\left(3 \cdot y\right)} \cdot x \]
      5. lower-*.f64 (97.7%)

        \[\leadsto \color{blue}{\left(3 \cdot y\right) \cdot x} \]
    7. Applied egg-rr (97.7%)

      \[\leadsto \color{blue}{\left(3 \cdot y\right) \cdot x} \]

    if -5e6 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -0.10000000000000001

    1. Initial program 99.9%

      \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
    2. Add Preprocessing
    3. Taylor expanded in x around 0

      \[\leadsto \color{blue}{\frac{-12}{29}} \cdot y \]
    4. Step-by-step derivation
      1. Simplified (98.6%)

        \[\leadsto \color{blue}{-0.41379310344827586} \cdot y \]

      if -0.10000000000000001 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64)))

      1. Initial program 99.8%

        \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
      2. Add Preprocessing
      3. Taylor expanded in x around inf

        \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
      4. Step-by-step derivation
        1. lower-*.f64 (96.8%)

          \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
      5. Simplified (96.8%)

        \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
    5. Recombined 3 regimes into one program.
    6. Final simplification (97.9%)

      \[\leadsto \begin{array}{l} \mathbf{if}\;x - \frac{16}{116} \leq -5000000:\\ \;\;\;\;x \cdot \left(y \cdot 3\right)\\ \mathbf{elif}\;x - \frac{16}{116} \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;y \cdot \left(x \cdot 3\right)\\ \end{array} \]
    7. Add Preprocessing

    Alternative 3: 97.6% accurate, 0.5× speedup

    \[\begin{array}{l} \\ \begin{array}{l} t_0 := x - \frac{16}{116}\\ \mathbf{if}\;t\_0 \leq -5000000:\\ \;\;\;\;\left(x \cdot y\right) \cdot 3\\ \mathbf{elif}\;t\_0 \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;y \cdot \left(x \cdot 3\right)\\ \end{array} \end{array} \]
    (FPCore (x y)
     :precision binary64
     (let* ((t_0 (- x (/ 16.0 116.0))))
       (if (<= t_0 -5000000.0)
         (* (* x y) 3.0)
         (if (<= t_0 -0.1) (* y -0.41379310344827586) (* y (* x 3.0))))))
    double code(double x, double y) {
    	double t_0 = x - (16.0 / 116.0);
    	double tmp;
    	if (t_0 <= -5000000.0) {
    		tmp = (x * y) * 3.0;
    	} else if (t_0 <= -0.1) {
    		tmp = y * -0.41379310344827586;
    	} else {
    		tmp = y * (x * 3.0);
    	}
    	return tmp;
    }
    
    real(8) function code(x, y)
        real(8), intent (in) :: x
        real(8), intent (in) :: y
        real(8) :: t_0
        real(8) :: tmp
        t_0 = x - (16.0d0 / 116.0d0)
        if (t_0 <= (-5000000.0d0)) then
            tmp = (x * y) * 3.0d0
        else if (t_0 <= (-0.1d0)) then
            tmp = y * (-0.41379310344827586d0)
        else
            tmp = y * (x * 3.0d0)
        end if
        code = tmp
    end function
    
    public static double code(double x, double y) {
    	double t_0 = x - (16.0 / 116.0);
    	double tmp;
    	if (t_0 <= -5000000.0) {
    		tmp = (x * y) * 3.0;
    	} else if (t_0 <= -0.1) {
    		tmp = y * -0.41379310344827586;
    	} else {
    		tmp = y * (x * 3.0);
    	}
    	return tmp;
    }
    
    def code(x, y):
    	t_0 = x - (16.0 / 116.0)
    	tmp = 0
    	if t_0 <= -5000000.0:
    		tmp = (x * y) * 3.0
    	elif t_0 <= -0.1:
    		tmp = y * -0.41379310344827586
    	else:
    		tmp = y * (x * 3.0)
    	return tmp
    
    function code(x, y)
    	t_0 = Float64(x - Float64(16.0 / 116.0))
    	tmp = 0.0
    	if (t_0 <= -5000000.0)
    		tmp = Float64(Float64(x * y) * 3.0);
    	elseif (t_0 <= -0.1)
    		tmp = Float64(y * -0.41379310344827586);
    	else
    		tmp = Float64(y * Float64(x * 3.0));
    	end
    	return tmp
    end
    
    function tmp_2 = code(x, y)
    	t_0 = x - (16.0 / 116.0);
    	tmp = 0.0;
    	if (t_0 <= -5000000.0)
    		tmp = (x * y) * 3.0;
    	elseif (t_0 <= -0.1)
    		tmp = y * -0.41379310344827586;
    	else
    		tmp = y * (x * 3.0);
    	end
    	tmp_2 = tmp;
    end
    
    code[x_, y_] := Block[{t$95$0 = N[(x - N[(16.0 / 116.0), $MachinePrecision]), $MachinePrecision]}, If[LessEqual[t$95$0, -5000000.0], N[(N[(x * y), $MachinePrecision] * 3.0), $MachinePrecision], If[LessEqual[t$95$0, -0.1], N[(y * -0.41379310344827586), $MachinePrecision], N[(y * N[(x * 3.0), $MachinePrecision]), $MachinePrecision]]]]
    
    \begin{array}{l}
    
    \\
    \begin{array}{l}
    t_0 := x - \frac{16}{116}\\
    \mathbf{if}\;t\_0 \leq -5000000:\\
    \;\;\;\;\left(x \cdot y\right) \cdot 3\\
    
    \mathbf{elif}\;t\_0 \leq -0.1:\\
    \;\;\;\;y \cdot -0.41379310344827586\\
    
    \mathbf{else}:\\
    \;\;\;\;y \cdot \left(x \cdot 3\right)\\
    
    
    \end{array}
    \end{array}
    
    Derivation
    1. Split input into 3 regimes
    2. if (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -5e6

      1. Initial program 98.1%

        \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
      2. Add Preprocessing
      3. Taylor expanded in x around inf

        \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]
      4. Step-by-step derivation
        1. lower-*.f64 (N/A)

          \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]
        2. lower-*.f64 (97.6%)

          \[\leadsto 3 \cdot \color{blue}{\left(x \cdot y\right)} \]
      5. Simplified (97.6%)

        \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]

      if -5e6 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -0.10000000000000001

      1. Initial program 99.9%

        \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
      2. Add Preprocessing
      3. Taylor expanded in x around 0

        \[\leadsto \color{blue}{\frac{-12}{29}} \cdot y \]
      4. Step-by-step derivation
        1. Simplified (98.6%)

          \[\leadsto \color{blue}{-0.41379310344827586} \cdot y \]

        if -0.10000000000000001 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64)))

        1. Initial program 99.8%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Taylor expanded in x around inf

          \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
        4. Step-by-step derivation
          1. lower-*.f64 (96.8%)

            \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
        5. Simplified (96.8%)

          \[\leadsto \color{blue}{\left(3 \cdot x\right)} \cdot y \]
      5. Recombined 3 regimes into one program.
      6. Final simplification (97.9%)

        \[\leadsto \begin{array}{l} \mathbf{if}\;x - \frac{16}{116} \leq -5000000:\\ \;\;\;\;\left(x \cdot y\right) \cdot 3\\ \mathbf{elif}\;x - \frac{16}{116} \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;y \cdot \left(x \cdot 3\right)\\ \end{array} \]
      7. Add Preprocessing

      Alternative 4: 97.6% accurate, 0.5× speedup

      \[\begin{array}{l} \\ \begin{array}{l} t_0 := x - \frac{16}{116}\\ t_1 := \left(x \cdot y\right) \cdot 3\\ \mathbf{if}\;t\_0 \leq -5000000:\\ \;\;\;\;t\_1\\ \mathbf{elif}\;t\_0 \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;t\_1\\ \end{array} \end{array} \]
      (FPCore (x y)
       :precision binary64
       (let* ((t_0 (- x (/ 16.0 116.0))) (t_1 (* (* x y) 3.0)))
         (if (<= t_0 -5000000.0)
           t_1
           (if (<= t_0 -0.1) (* y -0.41379310344827586) t_1))))
      double code(double x, double y) {
      	double t_0 = x - (16.0 / 116.0);
      	double t_1 = (x * y) * 3.0;
      	double tmp;
      	if (t_0 <= -5000000.0) {
      		tmp = t_1;
      	} else if (t_0 <= -0.1) {
      		tmp = y * -0.41379310344827586;
      	} else {
      		tmp = t_1;
      	}
      	return tmp;
      }
      
      real(8) function code(x, y)
          real(8), intent (in) :: x
          real(8), intent (in) :: y
          real(8) :: t_0
          real(8) :: t_1
          real(8) :: tmp
          t_0 = x - (16.0d0 / 116.0d0)
          t_1 = (x * y) * 3.0d0
          if (t_0 <= (-5000000.0d0)) then
              tmp = t_1
          else if (t_0 <= (-0.1d0)) then
              tmp = y * (-0.41379310344827586d0)
          else
              tmp = t_1
          end if
          code = tmp
      end function
      
      public static double code(double x, double y) {
      	double t_0 = x - (16.0 / 116.0);
      	double t_1 = (x * y) * 3.0;
      	double tmp;
      	if (t_0 <= -5000000.0) {
      		tmp = t_1;
      	} else if (t_0 <= -0.1) {
      		tmp = y * -0.41379310344827586;
      	} else {
      		tmp = t_1;
      	}
      	return tmp;
      }
      
      def code(x, y):
      	t_0 = x - (16.0 / 116.0)
      	t_1 = (x * y) * 3.0
      	tmp = 0
      	if t_0 <= -5000000.0:
      		tmp = t_1
      	elif t_0 <= -0.1:
      		tmp = y * -0.41379310344827586
      	else:
      		tmp = t_1
      	return tmp
      
      function code(x, y)
      	t_0 = Float64(x - Float64(16.0 / 116.0))
      	t_1 = Float64(Float64(x * y) * 3.0)
      	tmp = 0.0
      	if (t_0 <= -5000000.0)
      		tmp = t_1;
      	elseif (t_0 <= -0.1)
      		tmp = Float64(y * -0.41379310344827586);
      	else
      		tmp = t_1;
      	end
      	return tmp
      end
      
      function tmp_2 = code(x, y)
      	t_0 = x - (16.0 / 116.0);
      	t_1 = (x * y) * 3.0;
      	tmp = 0.0;
      	if (t_0 <= -5000000.0)
      		tmp = t_1;
      	elseif (t_0 <= -0.1)
      		tmp = y * -0.41379310344827586;
      	else
      		tmp = t_1;
      	end
      	tmp_2 = tmp;
      end
      
      code[x_, y_] := Block[{t$95$0 = N[(x - N[(16.0 / 116.0), $MachinePrecision]), $MachinePrecision]}, Block[{t$95$1 = N[(N[(x * y), $MachinePrecision] * 3.0), $MachinePrecision]}, If[LessEqual[t$95$0, -5000000.0], t$95$1, If[LessEqual[t$95$0, -0.1], N[(y * -0.41379310344827586), $MachinePrecision], t$95$1]]]]
      
      \begin{array}{l}
      
      \\
      \begin{array}{l}
      t_0 := x - \frac{16}{116}\\
      t_1 := \left(x \cdot y\right) \cdot 3\\
      \mathbf{if}\;t\_0 \leq -5000000:\\
      \;\;\;\;t\_1\\
      
      \mathbf{elif}\;t\_0 \leq -0.1:\\
      \;\;\;\;y \cdot -0.41379310344827586\\
      
      \mathbf{else}:\\
      \;\;\;\;t\_1\\
      
      
      \end{array}
      \end{array}
      
      Derivation
      1. Split input into 2 regimes
      2. if (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -5e6 or -0.10000000000000001 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64)))

        1. Initial program 98.9%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Taylor expanded in x around inf

          \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]
        4. Step-by-step derivation
          1. lower-*.f64 (N/A)

            \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]
          2. lower-*.f64 (97.1%)

            \[\leadsto 3 \cdot \color{blue}{\left(x \cdot y\right)} \]
        5. Simplified (97.1%)

          \[\leadsto \color{blue}{3 \cdot \left(x \cdot y\right)} \]

        if -5e6 < (-.f64 x (/.f64 #s(literal 16 binary64) #s(literal 116 binary64))) < -0.10000000000000001

        1. Initial program 99.9%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Taylor expanded in x around 0

          \[\leadsto \color{blue}{\frac{-12}{29}} \cdot y \]
        4. Step-by-step derivation
          1. Simplified (98.6%)

            \[\leadsto \color{blue}{-0.41379310344827586} \cdot y \]
        5. Recombined 2 regimes into one program.
        6. Final simplification (97.9%)

          \[\leadsto \begin{array}{l} \mathbf{if}\;x - \frac{16}{116} \leq -5000000:\\ \;\;\;\;\left(x \cdot y\right) \cdot 3\\ \mathbf{elif}\;x - \frac{16}{116} \leq -0.1:\\ \;\;\;\;y \cdot -0.41379310344827586\\ \mathbf{else}:\\ \;\;\;\;\left(x \cdot y\right) \cdot 3\\ \end{array} \]
        7. Add Preprocessing

        Alternative 5: 99.6% accurate, 1.8× speedup

        \[\begin{array}{l} \\ \left(x + -0.13793103448275862\right) \cdot \left(y \cdot 3\right) \end{array} \]
        (FPCore (x y) :precision binary64 (* (+ x -0.13793103448275862) (* y 3.0)))
        double code(double x, double y) {
        	return (x + -0.13793103448275862) * (y * 3.0);
        }
        
        real(8) function code(x, y)
            real(8), intent (in) :: x
            real(8), intent (in) :: y
            code = (x + (-0.13793103448275862d0)) * (y * 3.0d0)
        end function
        
        public static double code(double x, double y) {
        	return (x + -0.13793103448275862) * (y * 3.0);
        }
        
        def code(x, y):
        	return (x + -0.13793103448275862) * (y * 3.0)
        
        function code(x, y)
        	return Float64(Float64(x + -0.13793103448275862) * Float64(y * 3.0))
        end
        
        function tmp = code(x, y)
        	tmp = (x + -0.13793103448275862) * (y * 3.0);
        end
        
        code[x_, y_] := N[(N[(x + -0.13793103448275862), $MachinePrecision] * N[(y * 3.0), $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        \left(x + -0.13793103448275862\right) \cdot \left(y \cdot 3\right)
        \end{array}
        
        Derivation
        1. Initial program 99.4%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 (N/A)

            \[\leadsto \left(\left(x - \color{blue}{\frac{16}{116}}\right) \cdot 3\right) \cdot y \]
          2. lift--.f64 (N/A)

            \[\leadsto \left(\color{blue}{\left(x - \frac{16}{116}\right)} \cdot 3\right) \cdot y \]
          3. associate-*l* (N/A)

            \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right) \cdot \left(3 \cdot y\right)} \]
          4. lower-*.f64 (N/A)

            \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right) \cdot \left(3 \cdot y\right)} \]
          5. lift--.f64 (N/A)

            \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right)} \cdot \left(3 \cdot y\right) \]
          6. sub-neg (N/A)

            \[\leadsto \color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot \left(3 \cdot y\right) \]
          7. lower-+.f64 (N/A)

            \[\leadsto \color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot \left(3 \cdot y\right) \]
          8. lift-/.f64 (N/A)

            \[\leadsto \left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{16}{116}}\right)\right)\right) \cdot \left(3 \cdot y\right) \]
          9. metadata-eval (N/A)

            \[\leadsto \left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{4}{29}}\right)\right)\right) \cdot \left(3 \cdot y\right) \]
          10. metadata-eval (N/A)

            \[\leadsto \left(x + \color{blue}{\frac{-4}{29}}\right) \cdot \left(3 \cdot y\right) \]
          11. lower-*.f64 (99.6%)

            \[\leadsto \left(x + -0.13793103448275862\right) \cdot \color{blue}{\left(3 \cdot y\right)} \]
        4. Applied egg-rr (99.6%)

          \[\leadsto \color{blue}{\left(x + -0.13793103448275862\right) \cdot \left(3 \cdot y\right)} \]
        5. Final simplification (99.6%)

          \[\leadsto \left(x + -0.13793103448275862\right) \cdot \left(y \cdot 3\right) \]
        6. Add Preprocessing

        Alternative 6: 99.6% accurate, 1.8× speedup

        \[\begin{array}{l} \\ 3 \cdot \left(y \cdot \left(x + -0.13793103448275862\right)\right) \end{array} \]
        (FPCore (x y) :precision binary64 (* 3.0 (* y (+ x -0.13793103448275862))))
        double code(double x, double y) {
        	return 3.0 * (y * (x + -0.13793103448275862));
        }
        
        real(8) function code(x, y)
            real(8), intent (in) :: x
            real(8), intent (in) :: y
            code = 3.0d0 * (y * (x + (-0.13793103448275862d0)))
        end function
        
        public static double code(double x, double y) {
        	return 3.0 * (y * (x + -0.13793103448275862));
        }
        
        def code(x, y):
        	return 3.0 * (y * (x + -0.13793103448275862))
        
        function code(x, y)
        	return Float64(3.0 * Float64(y * Float64(x + -0.13793103448275862)))
        end
        
        function tmp = code(x, y)
        	tmp = 3.0 * (y * (x + -0.13793103448275862));
        end
        
        code[x_, y_] := N[(3.0 * N[(y * N[(x + -0.13793103448275862), $MachinePrecision]), $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        3 \cdot \left(y \cdot \left(x + -0.13793103448275862\right)\right)
        \end{array}
        
        Derivation
        1. Initial program 99.4%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 (N/A)

            \[\leadsto \left(\left(x - \color{blue}{\frac{16}{116}}\right) \cdot 3\right) \cdot y \]
          2. lift--.f64 (N/A)

            \[\leadsto \left(\color{blue}{\left(x - \frac{16}{116}\right)} \cdot 3\right) \cdot y \]
          3. associate-*l* (N/A)

            \[\leadsto \color{blue}{\left(x - \frac{16}{116}\right) \cdot \left(3 \cdot y\right)} \]
          4. *-commutative (N/A)

            \[\leadsto \left(x - \frac{16}{116}\right) \cdot \color{blue}{\left(y \cdot 3\right)} \]
          5. associate-*r* (N/A)

            \[\leadsto \color{blue}{\left(\left(x - \frac{16}{116}\right) \cdot y\right) \cdot 3} \]
          6. lower-*.f64 (N/A)

            \[\leadsto \color{blue}{\left(\left(x - \frac{16}{116}\right) \cdot y\right) \cdot 3} \]
          7. lower-*.f64 (99.4%)

            \[\leadsto \color{blue}{\left(\left(x - \frac{16}{116}\right) \cdot y\right)} \cdot 3 \]
          8. lift--.f64 (N/A)

            \[\leadsto \left(\color{blue}{\left(x - \frac{16}{116}\right)} \cdot y\right) \cdot 3 \]
          9. sub-neg (N/A)

            \[\leadsto \left(\color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot y\right) \cdot 3 \]
          10. lower-+.f64 (N/A)

            \[\leadsto \left(\color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot y\right) \cdot 3 \]
          11. lift-/.f64 (N/A)

            \[\leadsto \left(\left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{16}{116}}\right)\right)\right) \cdot y\right) \cdot 3 \]
          12. metadata-eval (N/A)

            \[\leadsto \left(\left(x + \left(\mathsf{neg}\left(\color{blue}{\frac{4}{29}}\right)\right)\right) \cdot y\right) \cdot 3 \]
          13. metadata-eval (99.4%)

            \[\leadsto \left(\left(x + \color{blue}{-0.13793103448275862}\right) \cdot y\right) \cdot 3 \]
        4. Applied egg-rr (99.4%)

          \[\leadsto \color{blue}{\left(\left(x + -0.13793103448275862\right) \cdot y\right) \cdot 3} \]
        5. Final simplification (99.4%)

          \[\leadsto 3 \cdot \left(y \cdot \left(x + -0.13793103448275862\right)\right) \]
        6. Add Preprocessing

        Alternative 7: 99.7% accurate, 2.1× speedup

        \[\begin{array}{l} \\ y \cdot \mathsf{fma}\left(x, 3, -0.41379310344827586\right) \end{array} \]
        (FPCore (x y) :precision binary64 (* y (fma x 3.0 -0.41379310344827586)))
        double code(double x, double y) {
        	return y * fma(x, 3.0, -0.41379310344827586);
        }
        
        function code(x, y)
        	return Float64(y * fma(x, 3.0, -0.41379310344827586))
        end
        
        code[x_, y_] := N[(y * N[(x * 3.0 + -0.41379310344827586), $MachinePrecision]), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        y \cdot \mathsf{fma}\left(x, 3, -0.41379310344827586\right)
        \end{array}
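
        Relative to Alternative 1, this form factors y out of the fused operation, so it needs one fewer multiplication, which is one plausible reading of its larger reported speedup (2.1× versus 1.5×); the report itself does not state the reason. A minimal side-by-side sketch (assuming a C99 toolchain with <math.h>):

        #include <math.h>

        /* Alternative 1: two multiplies feeding the fma. */
        static double alt1(double x, double y) {
            return fma(x * y, 3.0, y * -0.41379310344827586);
        }

        /* Alternative 7: one fma, then a single multiply by y. */
        static double alt7(double x, double y) {
            return y * fma(x, 3.0, -0.41379310344827586);
        }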
        
        Derivation
        1. Initial program 99.4%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Step-by-step derivation
          1. lift-/.f64 (N/A)

            \[\leadsto \left(\left(x - \color{blue}{\frac{16}{116}}\right) \cdot 3\right) \cdot y \]
          2. lift--.f64 (N/A)

            \[\leadsto \left(\color{blue}{\left(x - \frac{16}{116}\right)} \cdot 3\right) \cdot y \]
          3. *-commutative (N/A)

            \[\leadsto \color{blue}{\left(3 \cdot \left(x - \frac{16}{116}\right)\right)} \cdot y \]
          4. lift--.f64 (N/A)

            \[\leadsto \left(3 \cdot \color{blue}{\left(x - \frac{16}{116}\right)}\right) \cdot y \]
          5. sub-neg (N/A)

            \[\leadsto \left(3 \cdot \color{blue}{\left(x + \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)}\right) \cdot y \]
          6. distribute-lft-in (N/A)

            \[\leadsto \color{blue}{\left(3 \cdot x + 3 \cdot \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot y \]
          7. *-commutative (N/A)

            \[\leadsto \left(\color{blue}{x \cdot 3} + 3 \cdot \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right) \cdot y \]
          8. lower-fma.f64 (N/A)

            \[\leadsto \color{blue}{\mathsf{fma}\left(x, 3, 3 \cdot \left(\mathsf{neg}\left(\frac{16}{116}\right)\right)\right)} \cdot y \]
          9. lift-/.f64 (N/A)

            \[\leadsto \mathsf{fma}\left(x, 3, 3 \cdot \left(\mathsf{neg}\left(\color{blue}{\frac{16}{116}}\right)\right)\right) \cdot y \]
          10. metadata-eval (N/A)

            \[\leadsto \mathsf{fma}\left(x, 3, 3 \cdot \left(\mathsf{neg}\left(\color{blue}{\frac{4}{29}}\right)\right)\right) \cdot y \]
          11. metadata-eval (N/A)

            \[\leadsto \mathsf{fma}\left(x, 3, 3 \cdot \color{blue}{\frac{-4}{29}}\right) \cdot y \]
          12. metadata-eval (99.4%)

            \[\leadsto \mathsf{fma}\left(x, 3, \color{blue}{-0.41379310344827586}\right) \cdot y \]
        4. Applied egg-rr (99.4%)

          \[\leadsto \color{blue}{\mathsf{fma}\left(x, 3, -0.41379310344827586\right)} \cdot y \]
        5. Final simplification (99.4%)

          \[\leadsto y \cdot \mathsf{fma}\left(x, 3, -0.41379310344827586\right) \]
        6. Add Preprocessing

        Alternative 8: 49.8% accurate, 4.2× speedup

        \[\begin{array}{l} \\ y \cdot -0.41379310344827586 \end{array} \]
        (FPCore (x y) :precision binary64 (* y -0.41379310344827586))
        double code(double x, double y) {
        	return y * -0.41379310344827586;
        }
        
        real(8) function code(x, y)
            real(8), intent (in) :: x
            real(8), intent (in) :: y
            code = y * (-0.41379310344827586d0)
        end function
        
        public static double code(double x, double y) {
        	return y * -0.41379310344827586;
        }
        
        def code(x, y):
        	return y * -0.41379310344827586
        
        function code(x, y)
        	return Float64(y * -0.41379310344827586)
        end
        
        function tmp = code(x, y)
        	tmp = y * -0.41379310344827586;
        end
        
        code[x_, y_] := N[(y * -0.41379310344827586), $MachinePrecision]
        
        \begin{array}{l}
        
        \\
        y \cdot -0.41379310344827586
        \end{array}
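
        As the derivation below notes, this alternative comes from a Taylor expansion around x = 0, so it drops the x term entirely and can only track the original when |x| is small relative to 16/116, which is consistent with its low overall accuracy. A small illustrative check (sample inputs are hypothetical, not taken from the report):

        #include <stdio.h>

        static double original(double x, double y) {
            return ((x - (16.0 / 116.0)) * 3.0) * y;
        }

        /* Alternative 8 ignores x entirely. */
        static double alternative8(double x, double y) {
            (void)x;  /* intentionally unused */
            return y * -0.41379310344827586;
        }

        int main(void) {
            /* Close when x is tiny, far off once x dominates the constant. */
            printf("x=1e-6: %.17g vs %.17g\n", original(1e-6, 2.0), alternative8(1e-6, 2.0));
            printf("x=1.0:  %.17g vs %.17g\n", original(1.0, 2.0), alternative8(1.0, 2.0));
            return 0;
        }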
        
        Derivation
        1. Initial program 99.4%

          \[\left(\left(x - \frac{16}{116}\right) \cdot 3\right) \cdot y \]
        2. Add Preprocessing
        3. Taylor expanded in x around 0

          \[\leadsto \color{blue}{\frac{-12}{29}} \cdot y \]
        4. Step-by-step derivation
          1. Simplified (50.8%)

            \[\leadsto \color{blue}{-0.41379310344827586} \cdot y \]
          2. Final simplification (50.8%)

            \[\leadsto y \cdot -0.41379310344827586 \]
          3. Add Preprocessing

          Developer Target 1: 99.7% accurate, 1.8× speedup

          \[\begin{array}{l} \\ y \cdot \left(x \cdot 3 - 0.41379310344827586\right) \end{array} \]
          (FPCore (x y) :precision binary64 (* y (- (* x 3.0) 0.41379310344827586)))
          double code(double x, double y) {
          	return y * ((x * 3.0) - 0.41379310344827586);
          }
          
          real(8) function code(x, y)
              real(8), intent (in) :: x
              real(8), intent (in) :: y
              code = y * ((x * 3.0d0) - 0.41379310344827586d0)
          end function
          
          public static double code(double x, double y) {
          	return y * ((x * 3.0) - 0.41379310344827586);
          }
          
          def code(x, y):
          	return y * ((x * 3.0) - 0.41379310344827586)
          
          function code(x, y)
          	return Float64(y * Float64(Float64(x * 3.0) - 0.41379310344827586))
          end
          
          function tmp = code(x, y)
          	tmp = y * ((x * 3.0) - 0.41379310344827586);
          end
          
          code[x_, y_] := N[(y * N[(N[(x * 3.0), $MachinePrecision] - 0.41379310344827586), $MachinePrecision]), $MachinePrecision]
          
          \begin{array}{l}
          
          \\
          y \cdot \left(x \cdot 3 - 0.41379310344827586\right)
          \end{array}
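
          The developer target has the same shape as Alternative 7; the only difference is that the multiply and subtract are left unfused rather than combined into an fma. A minimal sketch of the two side by side (assuming C99 and <math.h> for the fused version):

          #include <math.h>

          /* Developer target: separate multiply and subtract. */
          static double target(double x, double y) {
              return y * ((x * 3.0) - 0.41379310344827586);
          }

          /* Herbie's Alternative 7: same shape, with a fused multiply-add. */
          static double alt7(double x, double y) {
              return y * fma(x, 3.0, -0.41379310344827586);
          }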
          

          Reproduce

          herbie shell --seed 2024207 
          (FPCore (x y)
            :name "Data.Colour.CIE:cieLAB from colour-2.3.3, A"
            :precision binary64
          
            :alt
            (! :herbie-platform default (* y (- (* x 3) 20689655172413793/50000000000000000)))
          
            (* (* (- x (/ 16.0 116.0)) 3.0) y))
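
          One way to rerun this case, as the listing above suggests, is to start herbie shell --seed 2024207 and paste the FPCore expression at the prompt. Alternatively (an assumption, not stated in this report), saving the FPCore to a file and running Herbie's report command on it, for example herbie report --seed 2024207 input.fpcore output/ with placeholder file and directory names, should regenerate a page like this one given a matching Herbie version.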