fabs fraction 2

Percentage Accurate: 100.0% → 100.0%
Time: 3.3s
Alternatives: 3
Speedup: 1.9×

Specification

\[\begin{array}{l} \\ \frac{\left|a - b\right|}{2} \end{array} \]
(FPCore (a b) :precision binary64 (/ (fabs (- a b)) 2.0))
double code(double a, double b) {
	return fabs((a - b)) / 2.0;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = abs((a - b)) / 2.0d0
end function
public static double code(double a, double b) {
	return Math.abs((a - b)) / 2.0;
}
def code(a, b):
	return math.fabs((a - b)) / 2.0
function code(a, b)
	return Float64(abs(Float64(a - b)) / 2.0)
end
function tmp = code(a, b)
	tmp = abs((a - b)) / 2.0;
end
code[a_, b_] := N[(N[Abs[N[(a - b), $MachinePrecision]], $MachinePrecision] / 2.0), $MachinePrecision]
\begin{array}{l}

\\
\frac{\left|a - b\right|}{2}
\end{array}
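
A note on why the specification is already 100.0% accurate: the absolute value is exact, and dividing by 2.0 is an exact power-of-two scaling (exact unless the result is subnormal), so the only rounding comes from the subtraction. To first order, assuming no overflow and a normal result,

\[ \mathsf{fl}\left(\frac{\left|a - b\right|}{2}\right) = \frac{\left|\mathsf{fl}(a - b)\right|}{2} = \frac{\left|\left(a - b\right) \cdot \left(1 + \delta\right)\right|}{2}, \qquad \left|\delta\right| \le 2^{-53} \]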

Sampling outcomes in binary64 precision:

Local Percentage Accuracy vs an input variable

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 3 alternatives:

Alternative        Accuracy    Speedup
Alternative 1      100.0%      1.9×
Alternative 2      82.4%       1.4×
Alternative 3      50.4%       2.8×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 100.0% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{\left|a - b\right|}{2} \end{array} \]
(FPCore (a b) :precision binary64 (/ (fabs (- a b)) 2.0))
double code(double a, double b) {
	return fabs((a - b)) / 2.0;
}
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = abs((a - b)) / 2.0d0
end function
public static double code(double a, double b) {
	return Math.abs((a - b)) / 2.0;
}
def code(a, b):
	return math.fabs((a - b)) / 2.0
function code(a, b)
	return Float64(abs(Float64(a - b)) / 2.0)
end
function tmp = code(a, b)
	tmp = abs((a - b)) / 2.0;
end
code[a_, b_] := N[(N[Abs[N[(a - b), $MachinePrecision]], $MachinePrecision] / 2.0), $MachinePrecision]
\begin{array}{l}

\\
\frac{\left|a - b\right|}{2}
\end{array}

Alternative 1: 100.0% accurate, 1.9× speedup

\[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \left(b - a\right) \cdot 0.5 \end{array} \]
NOTE: a and b should be sorted in increasing order before calling this function.
(FPCore (a b) :precision binary64 (* (- b a) 0.5))
assert(a < b);
double code(double a, double b) {
	return (b - a) * 0.5;
}
NOTE: a and b should be sorted in increasing order before calling this function.
real(8) function code(a, b)
    real(8), intent (in) :: a
    real(8), intent (in) :: b
    code = (b - a) * 0.5d0
end function
assert a < b;
public static double code(double a, double b) {
	return (b - a) * 0.5;
}
[a, b] = sorted([a, b])
def code(a, b):
	return (b - a) * 0.5
a, b = sort([a, b])
function code(a, b)
	return Float64(Float64(b - a) * 0.5)
end
[a, b] = num2cell(sort([a, b])){:};
function tmp = code(a, b)
	tmp = (b - a) * 0.5;
end
NOTE: a and b should be sorted in increasing order before calling this function.
code[a_, b_] := N[(N[(b - a), $MachinePrecision] * 0.5), $MachinePrecision]
\begin{array}{l}
[a, b] = \mathsf{sort}([a, b])\\
\\
\left(b - a\right) \cdot 0.5
\end{array}
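
Since the preprocessing sorts the inputs so that a ≤ b, the absolute value can be dropped and the division by 2.0 replaced by a multiplication by 0.5; both are exact power-of-two scalings. A quick Python sanity check of that claim (the helper names original and alternative_1 are illustrative only):

import math
import random

def original(a, b):
    return math.fabs(a - b) / 2.0

def alternative_1(a, b):
    # assumes the preprocessing has already sorted the inputs, so a <= b
    return (b - a) * 0.5

for _ in range(100000):
    a, b = sorted(random.uniform(-1e300, 1e300) for _ in range(2))
    assert original(a, b) == alternative_1(a, b)

On sorted finite inputs the two functions agree bit for bit, which is why the accuracy stays at 100.0%; the speedup comes mainly from dropping the fabs call and turning the division into a multiplication.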
Derivation
  1. Initial program 100.0%

    \[\frac{\left|a - b\right|}{2} \]
  2. Add Preprocessing
  3. Taylor expanded in a around 0

    \[\leadsto \color{blue}{\frac{1}{2} \cdot \left|a - b\right|} \]
  4. Step-by-step derivation
    1. *-commutative [N/A]

      \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
    2. lower-*.f64 [N/A]

      \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
    3. *-lft-identity [N/A]

      \[\leadsto \left|a - \color{blue}{1 \cdot b}\right| \cdot \frac{1}{2} \]
    4. metadata-eval [N/A]

      \[\leadsto \left|a - \color{blue}{\left(\mathsf{neg}\left(-1\right)\right)} \cdot b\right| \cdot \frac{1}{2} \]
    5. fp-cancel-sign-sub-inv [N/A]

      \[\leadsto \left|\color{blue}{a + -1 \cdot b}\right| \cdot \frac{1}{2} \]
    6. lower-fabs.f64 [N/A]

      \[\leadsto \color{blue}{\left|a + -1 \cdot b\right|} \cdot \frac{1}{2} \]
    7. fp-cancel-sign-sub-inv [N/A]

      \[\leadsto \left|\color{blue}{a - \left(\mathsf{neg}\left(-1\right)\right) \cdot b}\right| \cdot \frac{1}{2} \]
    8. metadata-eval [N/A]

      \[\leadsto \left|a - \color{blue}{1} \cdot b\right| \cdot \frac{1}{2} \]
    9. *-lft-identity [N/A]

      \[\leadsto \left|a - \color{blue}{b}\right| \cdot \frac{1}{2} \]
    10. lower--.f64 [100.0]

      \[\leadsto \left|\color{blue}{a - b}\right| \cdot 0.5 \]
  5. Applied rewrites [100.0%]

    \[\leadsto \color{blue}{\left|a - b\right| \cdot 0.5} \]
  6. Step-by-step derivation
    1. Applied rewrites [46.8%]

      \[\leadsto \left(b - a\right) \cdot \color{blue}{0.5} \]
    2. Add Preprocessing
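
Together with the sorting preprocessing, the derivation above amounts to the identity

\[ a \le b \;\Longrightarrow\; \left|a - b\right| = b - a \;\Longrightarrow\; \frac{\left|a - b\right|}{2} = \left(b - a\right) \cdot 0.5 \]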

Alternative 2: 82.4% accurate, 1.4× speedup

    \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ \begin{array}{l} \mathbf{if}\;a \leq -3.55 \cdot 10^{-124}:\\ \;\;\;\;-0.5 \cdot a\\ \mathbf{else}:\\ \;\;\;\;0.5 \cdot b\\ \end{array} \end{array} \]
    NOTE: a and b should be sorted in increasing order before calling this function.
    (FPCore (a b) :precision binary64 (if (<= a -3.55e-124) (* -0.5 a) (* 0.5 b)))
    assert(a < b);
    double code(double a, double b) {
    	double tmp;
    	if (a <= -3.55e-124) {
    		tmp = -0.5 * a;
    	} else {
    		tmp = 0.5 * b;
    	}
    	return tmp;
    }
    
    NOTE: a and b should be sorted in increasing order before calling this function.
    real(8) function code(a, b)
        real(8), intent (in) :: a
        real(8), intent (in) :: b
        real(8) :: tmp
        if (a <= (-3.55d-124)) then
            tmp = (-0.5d0) * a
        else
            tmp = 0.5d0 * b
        end if
        code = tmp
    end function
    
    assert a < b;
    public static double code(double a, double b) {
    	double tmp;
    	if (a <= -3.55e-124) {
    		tmp = -0.5 * a;
    	} else {
    		tmp = 0.5 * b;
    	}
    	return tmp;
    }
    
    [a, b] = sorted([a, b])
    def code(a, b):
    	tmp = 0
    	if a <= -3.55e-124:
    		tmp = -0.5 * a
    	else:
    		tmp = 0.5 * b
    	return tmp
    
    a, b = sort([a, b])
    function code(a, b)
    	tmp = 0.0
    	if (a <= -3.55e-124)
    		tmp = Float64(-0.5 * a);
    	else
    		tmp = Float64(0.5 * b);
    	end
    	return tmp
    end
    
    [a, b] = num2cell(sort([a, b])){:};
    function tmp_2 = code(a, b)
    	tmp = 0.0;
    	if (a <= -3.55e-124)
    		tmp = -0.5 * a;
    	else
    		tmp = 0.5 * b;
    	end
    	tmp_2 = tmp;
    end
    
    NOTE: a and b should be sorted in increasing order before calling this function.
    code[a_, b_] := If[LessEqual[a, -3.55*^-124], N[(-0.5 * a), $MachinePrecision], N[(0.5 * b), $MachinePrecision]]
    
    \begin{array}{l}
    [a, b] = \mathsf{sort}([a, b])\\
    \\
    \begin{array}{l}
    \mathbf{if}\;a \leq -3.55 \cdot 10^{-124}:\\
    \;\;\;\;-0.5 \cdot a\\
    
    \mathbf{else}:\\
    \;\;\;\;0.5 \cdot b\\
    
    
    \end{array}
    \end{array}
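
To see why this branch trades accuracy for speed, compare it with the exact half-distance on inputs of very different magnitude and on inputs of similar magnitude. A small Python illustration (the helper names are illustrative only):

import math

def original(a, b):
    return math.fabs(a - b) / 2.0

def alternative_2(a, b):
    # assumes the preprocessing has already sorted the inputs, so a <= b
    return -0.5 * a if a <= -3.55e-124 else 0.5 * b

# Magnitudes differ wildly: dropping the smaller input barely matters.
print(original(-1e200, 3.0), alternative_2(-1e200, 3.0))   # 5e+199 5e+199

# Magnitudes are comparable: keeping only one input is far off.
print(original(-2.0, -1.0), alternative_2(-2.0, -1.0))     # 0.5 1.0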
    
    Derivation
    1. Split input into 2 regimes
    2. if a < -3.55000000000000019e-124

      1. Initial program 100.0%

        \[\frac{\left|a - b\right|}{2} \]
      2. Add Preprocessing
      3. Taylor expanded in a around 0

        \[\leadsto \color{blue}{\frac{1}{2} \cdot \left|a - b\right|} \]
      4. Step-by-step derivation
        1. *-commutative [N/A]

          \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
        2. lower-*.f64 [N/A]

          \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
        3. *-lft-identity [N/A]

          \[\leadsto \left|a - \color{blue}{1 \cdot b}\right| \cdot \frac{1}{2} \]
        4. metadata-eval [N/A]

          \[\leadsto \left|a - \color{blue}{\left(\mathsf{neg}\left(-1\right)\right)} \cdot b\right| \cdot \frac{1}{2} \]
        5. fp-cancel-sign-sub-inv [N/A]

          \[\leadsto \left|\color{blue}{a + -1 \cdot b}\right| \cdot \frac{1}{2} \]
        6. lower-fabs.f64 [N/A]

          \[\leadsto \color{blue}{\left|a + -1 \cdot b\right|} \cdot \frac{1}{2} \]
        7. fp-cancel-sign-sub-inv [N/A]

          \[\leadsto \left|\color{blue}{a - \left(\mathsf{neg}\left(-1\right)\right) \cdot b}\right| \cdot \frac{1}{2} \]
        8. metadata-eval [N/A]

          \[\leadsto \left|a - \color{blue}{1} \cdot b\right| \cdot \frac{1}{2} \]
        9. *-lft-identity [N/A]

          \[\leadsto \left|a - \color{blue}{b}\right| \cdot \frac{1}{2} \]
        10. lower--.f64 [100.0]

          \[\leadsto \left|\color{blue}{a - b}\right| \cdot 0.5 \]
      5. Applied rewrites [100.0%]

        \[\leadsto \color{blue}{\left|a - b\right| \cdot 0.5} \]
      6. Step-by-step derivation
        1. Applied rewrites [82.1%]

          \[\leadsto \left(b - a\right) \cdot \color{blue}{0.5} \]
        2. Taylor expanded in a around inf

          \[\leadsto \frac{-1}{2} \cdot \color{blue}{a} \]
        3. Step-by-step derivation
          1. Applied rewrites [67.9%]

            \[\leadsto -0.5 \cdot \color{blue}{a} \]

    3. if -3.55000000000000019e-124 < a

          1. Initial program 100.0%

            \[\frac{\left|a - b\right|}{2} \]
          2. Add Preprocessing
          3. Taylor expanded in a around 0

            \[\leadsto \color{blue}{\frac{1}{2} \cdot \left|a - b\right|} \]
          4. Step-by-step derivation
            1. *-commutative [N/A]

              \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
            2. lower-*.f64 [N/A]

              \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
            3. *-lft-identity [N/A]

              \[\leadsto \left|a - \color{blue}{1 \cdot b}\right| \cdot \frac{1}{2} \]
            4. metadata-eval [N/A]

              \[\leadsto \left|a - \color{blue}{\left(\mathsf{neg}\left(-1\right)\right)} \cdot b\right| \cdot \frac{1}{2} \]
            5. fp-cancel-sign-sub-inv [N/A]

              \[\leadsto \left|\color{blue}{a + -1 \cdot b}\right| \cdot \frac{1}{2} \]
            6. lower-fabs.f64 [N/A]

              \[\leadsto \color{blue}{\left|a + -1 \cdot b\right|} \cdot \frac{1}{2} \]
            7. fp-cancel-sign-sub-inv [N/A]

              \[\leadsto \left|\color{blue}{a - \left(\mathsf{neg}\left(-1\right)\right) \cdot b}\right| \cdot \frac{1}{2} \]
            8. metadata-eval [N/A]

              \[\leadsto \left|a - \color{blue}{1} \cdot b\right| \cdot \frac{1}{2} \]
            9. *-lft-identity [N/A]

              \[\leadsto \left|a - \color{blue}{b}\right| \cdot \frac{1}{2} \]
            10. lower--.f64 [100.0]

              \[\leadsto \left|\color{blue}{a - b}\right| \cdot 0.5 \]
          5. Applied rewrites [100.0%]

            \[\leadsto \color{blue}{\left|a - b\right| \cdot 0.5} \]
          6. Step-by-step derivation
            1. Applied rewrites [33.1%]

              \[\leadsto \left(b - a\right) \cdot \color{blue}{0.5} \]
            2. Taylor expanded in a around 0

              \[\leadsto \frac{1}{2} \cdot \color{blue}{b} \]
            3. Step-by-step derivation
              1. Applied rewrites [30.8%]

                \[\leadsto 0.5 \cdot \color{blue}{b} \]
    4. Recombined 2 regimes into one program.
    5. Add Preprocessing

Alternative 3: 50.4% accurate, 2.8× speedup

            \[\begin{array}{l} [a, b] = \mathsf{sort}([a, b])\\ \\ -0.5 \cdot a \end{array} \]
            NOTE: a and b should be sorted in increasing order before calling this function.
            (FPCore (a b) :precision binary64 (* -0.5 a))
            assert(a < b);
            double code(double a, double b) {
            	return -0.5 * a;
            }
            
            NOTE: a and b should be sorted in increasing order before calling this function.
            real(8) function code(a, b)
                real(8), intent (in) :: a
                real(8), intent (in) :: b
                code = (-0.5d0) * a
            end function
            
            assert a < b;
            public static double code(double a, double b) {
            	return -0.5 * a;
            }
            
            [a, b] = sorted([a, b])
            def code(a, b):
            	return -0.5 * a
            
            a, b = sort([a, b])
            function code(a, b)
            	return Float64(-0.5 * a)
            end
            
            [a, b] = num2cell(sort([a, b])){:};
            function tmp = code(a, b)
            	tmp = -0.5 * a;
            end
            
            NOTE: a and b should be sorted in increasing order before calling this function.
            code[a_, b_] := N[(-0.5 * a), $MachinePrecision]
            
            \begin{array}{l}
            [a, b] = \mathsf{sort}([a, b])\\
            \\
            -0.5 \cdot a
            \end{array}
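
This alternative keeps only the contribution of a, so it is only a reasonable approximation when the magnitude of b is negligible compared to that of a; roughly,

\[ \frac{\left|a - b\right|}{2} = \frac{b - a}{2} = -0.5 \cdot a \cdot \left(1 - \frac{b}{a}\right) \approx -0.5 \cdot a \qquad \text{when } a \le b \text{ and } \left|b\right| \ll \left|a\right| \]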
            
            Derivation
            1. Initial program 100.0%

              \[\frac{\left|a - b\right|}{2} \]
            2. Add Preprocessing
            3. Taylor expanded in a around 0

              \[\leadsto \color{blue}{\frac{1}{2} \cdot \left|a - b\right|} \]
            4. Step-by-step derivation
              1. *-commutative [N/A]

                \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
              2. lower-*.f64 [N/A]

                \[\leadsto \color{blue}{\left|a - b\right| \cdot \frac{1}{2}} \]
              3. *-lft-identity [N/A]

                \[\leadsto \left|a - \color{blue}{1 \cdot b}\right| \cdot \frac{1}{2} \]
              4. metadata-eval [N/A]

                \[\leadsto \left|a - \color{blue}{\left(\mathsf{neg}\left(-1\right)\right)} \cdot b\right| \cdot \frac{1}{2} \]
              5. fp-cancel-sign-sub-inv [N/A]

                \[\leadsto \left|\color{blue}{a + -1 \cdot b}\right| \cdot \frac{1}{2} \]
              6. lower-fabs.f64 [N/A]

                \[\leadsto \color{blue}{\left|a + -1 \cdot b\right|} \cdot \frac{1}{2} \]
              7. fp-cancel-sign-sub-inv [N/A]

                \[\leadsto \left|\color{blue}{a - \left(\mathsf{neg}\left(-1\right)\right) \cdot b}\right| \cdot \frac{1}{2} \]
              8. metadata-eval [N/A]

                \[\leadsto \left|a - \color{blue}{1} \cdot b\right| \cdot \frac{1}{2} \]
              9. *-lft-identity [N/A]

                \[\leadsto \left|a - \color{blue}{b}\right| \cdot \frac{1}{2} \]
              10. lower--.f64 [100.0]

                \[\leadsto \left|\color{blue}{a - b}\right| \cdot 0.5 \]
            5. Applied rewrites [100.0%]

              \[\leadsto \color{blue}{\left|a - b\right| \cdot 0.5} \]
            6. Step-by-step derivation
              1. Applied rewrites [46.8%]

                \[\leadsto \left(b - a\right) \cdot \color{blue}{0.5} \]
              2. Taylor expanded in a around inf

                \[\leadsto \frac{-1}{2} \cdot \color{blue}{a} \]
              3. Step-by-step derivation
                1. Applied rewrites [22.2%]

                  \[\leadsto -0.5 \cdot \color{blue}{a} \]
                2. Add Preprocessing

Reproduce

herbie shell --seed 2024329
(FPCore (a b)
  :name "fabs fraction 2"
  :precision binary64
  (/ (fabs (- a b)) 2.0))
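
Alternatively (a sketch that assumes a standard Herbie installation; the file name and output directory are arbitrary), the FPCore above can be saved to a file and turned into a report like this page with:

herbie report --seed 2024329 fabs-fraction-2.fpcore report-output/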