GTR1 distribution

Percentage Accurate: 98.5% → 98.5%
Time: 3.8s
Alternatives: 10
Speedup: 1.0×

Specification

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (- (* alpha alpha) 1.0)))
   (/
    t_0
    (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end
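The binary32 listings above can be sanity-checked against a double-precision reference. The sketch below (Python with NumPy; not part of the Herbie report) evaluates the initial program in both precisions and compares them at a sample point inside the precondition:

```python
import numpy as np

def gtr1_f32(cos_theta, alpha):
    # Initial program evaluated entirely in binary32, mirroring the C listing.
    c, a = np.float32(cos_theta), np.float32(alpha)
    t0 = a * a - np.float32(1.0)
    denom = (np.float32(np.pi) * np.log(a * a)) * (np.float32(1.0) + (t0 * c) * c)
    return t0 / denom

def gtr1_f64(cos_theta, alpha):
    # Same expression in binary64, used as the reference.
    t0 = alpha * alpha - 1.0
    return t0 / ((np.pi * np.log(alpha * alpha))
                 * (1.0 + (t0 * cos_theta) * cos_theta))
```

At well-conditioned points the two agree to roughly single-precision accuracy; Herbie's percentage-accuracy metric aggregates this kind of comparison over many sampled inputs.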

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is chosen in the title. The vertical axis is accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion. These can be toggled with buttons below the plot. The line is an average, while dots represent individual samples.

Accuracy vs Speed

Herbie found 10 alternatives:

The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.5% accurate, 1.0× speedup

\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (- (* alpha alpha) 1.0)))
   (/
    t_0
    (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end

Alternative 1: 98.5% accurate, 0.5× speedup

\[\frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \log \left({\alpha}^{2}\right)\right)}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (- (* alpha alpha) 1.0)
  (*
   PI
   (*
    (fma (* (- (pow alpha 2.0) 1.0) cosTheta) cosTheta 1.0)
    (log (pow alpha 2.0))))))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / (((float) M_PI) * (fmaf(((powf(alpha, 2.0f) - 1.0f) * cosTheta), cosTheta, 1.0f) * logf(powf(alpha, 2.0f))));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(pi) * Float32(fma(Float32(Float32((alpha ^ Float32(2.0)) - Float32(1.0)) * cosTheta), cosTheta, Float32(1.0)) * log((alpha ^ Float32(2.0))))))
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \log \left({\alpha}^{2}\right)\right)}} \]
  4. Add Preprocessing
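Alternative 1 only regroups the denominator and fuses the multiply-add; in exact arithmetic it computes the same value as the original program. A quick check (Python sketch, not part of the Herbie report; `fma` is emulated here as `x*y + z` in binary64, whereas a real `fmaf` rounds only once):

```python
import math

def original(cos_theta, alpha):
    # Initial program in binary64.
    t0 = alpha * alpha - 1.0
    return t0 / ((math.pi * math.log(alpha * alpha))
                 * (1.0 + (t0 * cos_theta) * cos_theta))

def alternative_1(cos_theta, alpha):
    # fma(x, y, z) emulated as x*y + z (the real-number meaning of fma).
    fma = lambda x, y, z: x * y + z
    inner = fma((alpha ** 2 - 1.0) * cos_theta, cos_theta, 1.0)
    return (alpha * alpha - 1.0) / (math.pi * (inner * math.log(alpha ** 2)))
```

The two agree to double-precision rounding error across the precondition domain; the accuracy gain from a true fused multiply-add only shows up in binary32.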

Alternative 2: 98.4% accurate, 0.6× speedup

\[\frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \left(\log \alpha \cdot 2\right)\right)}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (- (* alpha alpha) 1.0)
  (*
   PI
   (*
    (fma (* (- (pow alpha 2.0) 1.0) cosTheta) cosTheta 1.0)
    (* (log alpha) 2.0)))))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / (((float) M_PI) * (fmaf(((powf(alpha, 2.0f) - 1.0f) * cosTheta), cosTheta, 1.0f) * (logf(alpha) * 2.0f)));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(pi) * Float32(fma(Float32(Float32((alpha ^ Float32(2.0)) - Float32(1.0)) * cosTheta), cosTheta, Float32(1.0)) * Float32(log(alpha) * Float32(2.0)))))
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \log \left({\alpha}^{2}\right)\right)}} \]
  4. Applied rewrites 98.4%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \color{blue}{\left(\log \alpha \cdot 2\right)}\right)} \]
  5. Add Preprocessing
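The final rewrite in Alternative 2 replaces log(α²) with 2·log α, which is an exact identity for α > 0 in real arithmetic; the small accuracy change (98.5% → 98.4%) comes only from the two sides rounding differently in floating point. A sketch of the identity (Python, not part of the Herbie report):

```python
import math

def log_of_square(alpha):
    # Left-hand side: log(alpha * alpha), as in Alternative 1.
    return math.log(alpha * alpha)

def two_log(alpha):
    # Right-hand side: log(alpha) * 2, as in Alternative 2.
    return math.log(alpha) * 2.0
```

In binary floating point the two results can differ by a few ulps, which is enough to shift Herbie's accuracy estimate slightly.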

Alternative 3: 98.4% accurate, 0.6× speedup

\[\frac{\alpha \cdot \alpha - 1}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right) \cdot \mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (- (* alpha alpha) 1.0)
  (*
   (* (+ PI PI) (log alpha))
   (fma (* (- (pow alpha 2.0) 1.0) cosTheta) cosTheta 1.0))))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / (((((float) M_PI) + ((float) M_PI)) * logf(alpha)) * fmaf(((powf(alpha, 2.0f) - 1.0f) * cosTheta), cosTheta, 1.0f));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(Float32(Float32(pi) + Float32(pi)) * log(alpha)) * fma(Float32(Float32((alpha ^ Float32(2.0)) - Float32(1.0)) * cosTheta), cosTheta, Float32(1.0))))
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.4%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right)} \cdot \mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)} \]
  4. Add Preprocessing

Alternative 4: 98.5% accurate, 1.0× speedup

\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (- (* alpha alpha) 1.0)))
   (/
    t_0
    (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Add Preprocessing

Alternative 5: 97.6% accurate, 1.1× speedup

\[\frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(-cosTheta, cosTheta, 1\right) \cdot \left(\log \alpha \cdot 2\right)\right)}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (- (* alpha alpha) 1.0)
  (* PI (* (fma (- cosTheta) cosTheta 1.0) (* (log alpha) 2.0)))))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / (((float) M_PI) * (fmaf(-cosTheta, cosTheta, 1.0f) * (logf(alpha) * 2.0f)));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(pi) * Float32(fma(Float32(-cosTheta), cosTheta, Float32(1.0)) * Float32(log(alpha) * Float32(2.0)))))
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \log \left({\alpha}^{2}\right)\right)}} \]
  4. Applied rewrites 98.4%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \color{blue}{\left(\log \alpha \cdot 2\right)}\right)} \]
  5. Taylor expanded in alpha around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\color{blue}{-1 \cdot cosTheta}, cosTheta, 1\right) \cdot \left(\log \alpha \cdot 2\right)\right)} \]
  6. Applied rewrites 97.6%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\color{blue}{-cosTheta}, cosTheta, 1\right) \cdot \left(\log \alpha \cdot 2\right)\right)} \]
  7. Add Preprocessing
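The Taylor step in Alternative 5 replaces α² − 1 inside the fma with −1, so the denominator factor 1 + (α² − 1)·cosTheta² becomes 1 − cosTheta². A sketch (Python, not part of the Herbie report) showing that this is accurate for small α but introduces real error as α approaches 1:

```python
def exact_factor(cos_theta, alpha):
    # Denominator factor before the Taylor step.
    return 1.0 + ((alpha * alpha - 1.0) * cos_theta) * cos_theta

def taylor_factor(cos_theta, alpha):
    # After Taylor expansion in alpha around 0: alpha^2 - 1 -> -1.
    return 1.0 - cos_theta * cos_theta
```

At α = 0.0001 the two factors differ by about α²·cosTheta² ≈ 2.5e-9, while at α = 1 the truncation error is cosTheta² itself, which is why the overall accuracy drops to 97.6%.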

Alternative 6: 97.6% accurate, 1.1× speedup

\[\frac{\alpha \cdot \alpha - 1}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right) \cdot \mathsf{fma}\left(-cosTheta, cosTheta, 1\right)}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (- (* alpha alpha) 1.0)
  (* (* (+ PI PI) (log alpha)) (fma (- cosTheta) cosTheta 1.0))))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / (((((float) M_PI) + ((float) M_PI)) * logf(alpha)) * fmaf(-cosTheta, cosTheta, 1.0f));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(Float32(Float32(pi) + Float32(pi)) * log(alpha)) * fma(Float32(-cosTheta), cosTheta, Float32(1.0))))
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.4%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right)} \cdot \mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)} \]
  4. Taylor expanded in alpha around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right) \cdot \mathsf{fma}\left(\color{blue}{-1 \cdot cosTheta}, cosTheta, 1\right)} \]
  5. Applied rewrites 97.6%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\left(\pi + \pi\right) \cdot \log \alpha\right) \cdot \mathsf{fma}\left(\color{blue}{-cosTheta}, cosTheta, 1\right)} \]
  6. Add Preprocessing

Alternative 7: 95.2% accurate, 1.2× speedup

\[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot 1}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/ (- (* alpha alpha) 1.0) (* (* PI (log (* alpha alpha))) 1.0)))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / ((((float) M_PI) * logf((alpha * alpha))) * 1.0f);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(1.0)))
end
function tmp = code(cosTheta, alpha)
	tmp = ((alpha * alpha) - single(1.0)) / ((single(pi) * log((alpha * alpha))) * single(1.0));
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{1}} \]
  3. Applied rewrites 95.2%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{1}} \]
  4. Add Preprocessing
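Alternative 7 drops the cosTheta-dependent factor entirely (Taylor expansion of the denominator around cosTheta = 0), so it is exact at cosTheta = 0 and worst at cosTheta = 1, where the dropped factor equals α². A sketch (Python, not part of the Herbie report):

```python
import math

def original(cos_theta, alpha):
    # Initial program in binary64.
    t0 = alpha * alpha - 1.0
    return t0 / ((math.pi * math.log(alpha * alpha))
                 * (1.0 + (t0 * cos_theta) * cos_theta))

def alternative_7(cos_theta, alpha):
    # cosTheta no longer appears: the (1 + t0*c^2) factor was replaced by 1.
    return (alpha * alpha - 1.0) / ((math.pi * math.log(alpha * alpha)) * 1.0)
```

At cosTheta = 1 the dropped factor is 1 + (α² − 1) = α², so the original equals the truncated form divided by α²; for α = 0.5 that is a factor of 4.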

Alternative 8: 95.2% accurate, 1.2× speedup

\[\frac{\alpha \cdot \alpha - 1}{\left(\log \alpha \cdot \pi\right) \cdot 2}\]
(FPCore (cosTheta alpha)
 :precision binary32
 (/ (- (* alpha alpha) 1.0) (* (* (log alpha) PI) 2.0)))
float code(float cosTheta, float alpha) {
	return ((alpha * alpha) - 1.0f) / ((logf(alpha) * ((float) M_PI)) * 2.0f);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / Float32(Float32(log(alpha) * Float32(pi)) * Float32(2.0)))
end
function tmp = code(cosTheta, alpha)
	tmp = ((alpha * alpha) - single(1.0)) / ((log(alpha) * single(pi)) * single(2.0));
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \color{blue}{\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right)}} \]
  3. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \log \left({\alpha}^{2}\right)\right)}} \]
  4. Applied rewrites 98.4%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\pi \cdot \left(\mathsf{fma}\left(\left({\alpha}^{2} - 1\right) \cdot cosTheta, cosTheta, 1\right) \cdot \color{blue}{\left(\log \alpha \cdot 2\right)}\right)} \]
  5. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{2 \cdot \left(\mathsf{PI}\left(\right) \cdot \log \alpha\right)}} \]
  6. Applied rewrites 95.2%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\left(\log \alpha \cdot \pi\right) \cdot 2}} \]
  7. Add Preprocessing

Alternative 9: 65.2% accurate, 1.3× speedup

\[\frac{\frac{-0.5}{\pi}}{\log \alpha}\]
(FPCore (cosTheta alpha) :precision binary32 (/ (/ -0.5 PI) (log alpha)))
float code(float cosTheta, float alpha) {
	return (-0.5f / ((float) M_PI)) / logf(alpha);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-0.5) / Float32(pi)) / log(alpha))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-0.5) / single(pi)) / log(alpha);
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in cosTheta around 0

    \[\leadsto \color{blue}{\frac{{\alpha}^{2} - 1}{\mathsf{PI}\left(\right) \cdot \log \left({\alpha}^{2}\right)}} \]
  3. Applied rewrites 95.2%

    \[\leadsto \color{blue}{\frac{\frac{{\alpha}^{2} - 1}{\pi}}{\log \left({\alpha}^{2}\right)}} \]
  4. Taylor expanded in alpha around 0

    \[\leadsto \frac{\frac{-1}{2}}{\color{blue}{\mathsf{PI}\left(\right) \cdot \log \alpha}} \]
  5. Applied rewrites 65.1%

    \[\leadsto \frac{-0.5}{\color{blue}{\log \alpha \cdot \pi}} \]
  6. Applied rewrites 65.2%

    \[\leadsto \frac{\frac{-0.5}{\pi}}{\log \alpha} \]
  7. Add Preprocessing
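Alternatives 9 and 10 keep only the α → 0 limit: the numerator α² − 1 is truncated to −1 and the cosTheta factor to 1, leaving (−0.5/π)/log α. A sketch (Python, not part of the Herbie report) showing the limit is accurate for tiny α but far off near α = 1, which explains the 65% accuracy:

```python
import math

def original(cos_theta, alpha):
    # Initial program in binary64.
    t0 = alpha * alpha - 1.0
    return t0 / ((math.pi * math.log(alpha * alpha))
                 * (1.0 + (t0 * cos_theta) * cos_theta))

def alternative_9(cos_theta, alpha):
    # alpha^2 - 1 Taylor-truncated to -1 around alpha = 0; cosTheta drops out.
    return (-0.5 / math.pi) / math.log(alpha)
```

At cosTheta = 0 the ratio of the two is exactly 1 − α², so the relative error is α²: about 1e-6 at α = 0.001 but 81% at α = 0.9.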

Alternative 10: 65.1% accurate, 1.3× speedup

\[\frac{-0.5}{\log \alpha \cdot \pi}\]
(FPCore (cosTheta alpha) :precision binary32 (/ -0.5 (* (log alpha) PI)))
float code(float cosTheta, float alpha) {
	return -0.5f / (logf(alpha) * ((float) M_PI));
}
function code(cosTheta, alpha)
	return Float32(Float32(-0.5) / Float32(log(alpha) * Float32(pi)))
end
function tmp = code(cosTheta, alpha)
	tmp = single(-0.5) / (log(alpha) * single(pi));
end
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in cosTheta around 0

    \[\leadsto \color{blue}{\frac{{\alpha}^{2} - 1}{\mathsf{PI}\left(\right) \cdot \log \left({\alpha}^{2}\right)}} \]
  3. Applied rewrites 95.2%

    \[\leadsto \color{blue}{\frac{\frac{{\alpha}^{2} - 1}{\pi}}{\log \left({\alpha}^{2}\right)}} \]
  4. Taylor expanded in alpha around 0

    \[\leadsto \frac{\frac{-1}{2}}{\color{blue}{\mathsf{PI}\left(\right) \cdot \log \alpha}} \]
  5. Applied rewrites 65.1%

    \[\leadsto \frac{-0.5}{\color{blue}{\log \alpha \cdot \pi}} \]
  6. Add Preprocessing

Reproduce

herbie shell --seed 2025100 
(FPCore (cosTheta alpha)
  :name "GTR1 distribution"
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0)) (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (- (* alpha alpha) 1.0) (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* (- (* alpha alpha) 1.0) cosTheta) cosTheta)))))