GTR1 distribution

Percentage Accurate: 98.5% → 98.6%
Time: 41.9s
Alternatives: 8
Speedup: 0.8×

Specification

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t\_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (let* ((t_0 (- (* alpha alpha) 1.0)))
  (/
   t_0
   (*
    (* PI (log (* alpha alpha)))
    (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end
\begin{array}{l}
t_0 := \alpha \cdot \alpha - 1\\
\frac{t\_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)}
\end{array}

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable (the variable is chosen in the plot title); the vertical axis shows accuracy, where higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 8 alternatives:

Alternative   Accuracy   Speedup
1             98.6%      0.7×
2             98.5%      0.8×
3             98.4%      1.3×
4             97.7%      1.3×
5             95.2%      1.8×
6             65.5%      2.2×
7             65.5%      2.2×
8             65.5%      2.4×
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.5% accurate, 1.0× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t\_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (let* ((t_0 (- (* alpha alpha) 1.0)))
  (/
   t_0
   (*
    (* PI (log (* alpha alpha)))
    (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end
\begin{array}{l}
t_0 := \alpha \cdot \alpha - 1\\
\frac{t\_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)}
\end{array}

Alternative 1: 98.6% accurate, 0.7× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t\_0}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (let* ((t_0 (- (* alpha alpha) 1.0)))
  (/
   t_0
   (*
    (log (pow (* alpha alpha) PI))
    (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / (logf(powf((alpha * alpha), ((float) M_PI))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(log((Float32(alpha * alpha) ^ Float32(pi))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / (log(((alpha * alpha) ^ single(pi))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end
\begin{array}{l}
t_0 := \alpha \cdot \alpha - 1\\
\frac{t\_0}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right) \cdot \left(1 + \left(t\_0 \cdot cosTheta\right) \cdot cosTheta\right)}
\end{array}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.6%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  3. Add Preprocessing

Alternative 2: 98.5% accurate, 0.8× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} t_0 := \log \left(\alpha \cdot \alpha\right)\\ \frac{\alpha \cdot \alpha - 1}{\mathsf{fma}\left(\pi, t\_0, \left(\left(\left(\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot cosTheta\right) \cdot cosTheta\right) \cdot t\_0\right) \cdot \pi\right)} \end{array} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (let* ((t_0 (log (* alpha alpha))))
  (/
   (- (* alpha alpha) 1.0)
   (fma
    PI
    t_0
    (* (* (* (* (fma alpha alpha -1.0) cosTheta) cosTheta) t_0) PI)))))
float code(float cosTheta, float alpha) {
	float t_0 = logf((alpha * alpha));
	return ((alpha * alpha) - 1.0f) / fmaf(((float) M_PI), t_0, ((((fmaf(alpha, alpha, -1.0f) * cosTheta) * cosTheta) * t_0) * ((float) M_PI)));
}
function code(cosTheta, alpha)
	t_0 = log(Float32(alpha * alpha))
	return Float32(Float32(Float32(alpha * alpha) - Float32(1.0)) / fma(Float32(pi), t_0, Float32(Float32(Float32(Float32(fma(alpha, alpha, Float32(-1.0)) * cosTheta) * cosTheta) * t_0) * Float32(pi))))
end
\begin{array}{l}
t_0 := \log \left(\alpha \cdot \alpha\right)\\
\frac{\alpha \cdot \alpha - 1}{\mathsf{fma}\left(\pi, t\_0, \left(\left(\left(\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot cosTheta\right) \cdot cosTheta\right) \cdot t\_0\right) \cdot \pi\right)}
\end{array}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\mathsf{fma}\left(\pi, \log \left(\alpha \cdot \alpha\right), \left(\left(\left(\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot cosTheta\right) \cdot cosTheta\right) \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \pi\right)} \]
  3. Add Preprocessing

Alternative 3: 98.4% accurate, 1.3× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{\frac{0.5}{\log \alpha}}{\mathsf{fma}\left(cosTheta, cosTheta \cdot \pi, \frac{\pi}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/
 (/ 0.5 (log alpha))
 (fma cosTheta (* cosTheta PI) (/ PI (fma alpha alpha -1.0)))))
float code(float cosTheta, float alpha) {
	return (0.5f / logf(alpha)) / fmaf(cosTheta, (cosTheta * ((float) M_PI)), (((float) M_PI) / fmaf(alpha, alpha, -1.0f)));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(0.5) / log(alpha)) / fma(cosTheta, Float32(cosTheta * Float32(pi)), Float32(Float32(pi) / fma(alpha, alpha, Float32(-1.0)))))
end
\frac{\frac{0.5}{\log \alpha}}{\mathsf{fma}\left(cosTheta, cosTheta \cdot \pi, \frac{\pi}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \left(\pi \cdot \mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right)\right)} \]
  3. Applied rewrites 97.8%

    \[\leadsto \frac{1}{\mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right) \cdot \left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right)} \]
  4. Applied rewrites 97.9%

    \[\leadsto \frac{\frac{1}{\pi \cdot \mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)}}{\log \left(\alpha \cdot \alpha\right)} \]
  5. Applied rewrites 98.4%

    \[\leadsto \frac{\frac{0.5}{\log \alpha}}{\mathsf{fma}\left(cosTheta, cosTheta \cdot \pi, \frac{\pi}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)} \]
  6. Add Preprocessing

Alternative 4: 97.7% accurate, 1.3× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{0.31830987334251404}{\log \left(\alpha \cdot \alpha\right) \cdot \mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/
 0.31830987334251404
 (*
  (log (* alpha alpha))
  (fma cosTheta cosTheta (/ 1.0 (fma alpha alpha -1.0))))))
float code(float cosTheta, float alpha) {
	return 0.31830987334251404f / (logf((alpha * alpha)) * fmaf(cosTheta, cosTheta, (1.0f / fmaf(alpha, alpha, -1.0f))));
}
function code(cosTheta, alpha)
	return Float32(Float32(0.31830987334251404) / Float32(log(Float32(alpha * alpha)) * fma(cosTheta, cosTheta, Float32(Float32(1.0) / fma(alpha, alpha, Float32(-1.0))))))
end
\frac{0.31830987334251404}{\log \left(\alpha \cdot \alpha\right) \cdot \mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \left(\pi \cdot \mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right)\right)} \]
  3. Applied rewrites 97.8%

    \[\leadsto \frac{1}{\mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right) \cdot \left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right)} \]
  4. Applied rewrites 97.7%

    \[\leadsto \frac{\frac{1}{\pi}}{\log \left(\alpha \cdot \alpha\right) \cdot \mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)} \]
  5. Evaluated real constant 97.7%

    \[\leadsto \frac{0.31830987334251404}{\log \left(\alpha \cdot \alpha\right) \cdot \mathsf{fma}\left(cosTheta, cosTheta, \frac{1}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}\right)} \]
  6. Add Preprocessing

Alternative 5: 95.2% accurate, 1.8× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (fma alpha alpha -1.0) (* (* 2.0 (log alpha)) PI)))
float code(float cosTheta, float alpha) {
	return fmaf(alpha, alpha, -1.0f) / ((2.0f * logf(alpha)) * ((float) M_PI));
}
function code(cosTheta, alpha)
	return Float32(fma(alpha, alpha, Float32(-1.0)) / Float32(Float32(Float32(2.0) * log(alpha)) * Float32(pi)))
end
\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\left(2 \cdot \log \alpha\right) \cdot \pi}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \left(\pi \cdot \mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right)\right)} \]
  3. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \pi} \]
  4. Applied rewrites 95.2%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \pi} \]
  5. Applied rewrites 95.1%

    \[\leadsto \frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\log \left(\alpha \cdot \alpha\right) \cdot \pi} \]
  6. Taylor expanded in alpha around 0

    \[\leadsto \frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  7. Applied rewrites 95.2%

    \[\leadsto \frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  8. Add Preprocessing

Alternative 6: 65.5% accurate, 2.2× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{\frac{-1}{\pi}}{2 \cdot \log \alpha} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (/ -1.0 PI) (* 2.0 (log alpha))))
float code(float cosTheta, float alpha) {
	return (-1.0f / ((float) M_PI)) / (2.0f * logf(alpha));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-1.0) / Float32(pi)) / Float32(Float32(2.0) * log(alpha)))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-1.0) / single(pi)) / (single(2.0) * log(alpha));
end
\frac{\frac{-1}{\pi}}{2 \cdot \log \alpha}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.3%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right) \cdot \log \left(\alpha \cdot \alpha\right)} \]
  3. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \left({\alpha}^{2}\right)} \]
  4. Applied rewrites 95.0%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \left({\alpha}^{2}\right)} \]
  5. Applied rewrites 95.1%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{2 \cdot \log \alpha} \]
  6. Taylor expanded in alpha around 0

    \[\leadsto \frac{\frac{-1}{\pi}}{2 \cdot \log \alpha} \]
  7. Applied rewrites 65.5%

    \[\leadsto \frac{\frac{-1}{\pi}}{2 \cdot \log \alpha} \]
  8. Add Preprocessing

Alternative 7: 65.5% accurate, 2.2× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{\frac{-1}{\pi}}{\log \left(\alpha \cdot \alpha\right)} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (/ -1.0 PI) (log (* alpha alpha))))
float code(float cosTheta, float alpha) {
	return (-1.0f / ((float) M_PI)) / logf((alpha * alpha));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-1.0) / Float32(pi)) / log(Float32(alpha * alpha)))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-1.0) / single(pi)) / log((alpha * alpha));
end
\frac{\frac{-1}{\pi}}{\log \left(\alpha \cdot \alpha\right)}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.3%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right) \cdot \log \left(\alpha \cdot \alpha\right)} \]
  3. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \left({\alpha}^{2}\right)} \]
  4. Applied rewrites 95.0%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \left({\alpha}^{2}\right)} \]
  5. Applied rewrites 95.0%

    \[\leadsto \frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \left(\alpha \cdot \alpha\right)} \]
  6. Taylor expanded in alpha around 0

    \[\leadsto \frac{\frac{-1}{\pi}}{\log \left(\alpha \cdot \alpha\right)} \]
  7. Applied rewrites 65.5%

    \[\leadsto \frac{\frac{-1}{\pi}}{\log \left(\alpha \cdot \alpha\right)} \]
  8. Add Preprocessing

Alternative 8: 65.5% accurate, 2.4× speedup

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\frac{-1}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
(FPCore (cosTheta alpha)
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0))
     (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ -1.0 (* (* 2.0 (log alpha)) PI)))
float code(float cosTheta, float alpha) {
	return -1.0f / ((2.0f * logf(alpha)) * ((float) M_PI));
}
function code(cosTheta, alpha)
	return Float32(Float32(-1.0) / Float32(Float32(Float32(2.0) * log(alpha)) * Float32(pi)))
end
function tmp = code(cosTheta, alpha)
	tmp = single(-1.0) / ((single(2.0) * log(alpha)) * single(pi));
end
\frac{-1}{\left(2 \cdot \log \alpha\right) \cdot \pi}
Derivation
  1. Initial program 98.5%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Applied rewrites 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \left(\pi \cdot \mathsf{fma}\left(cosTheta \cdot cosTheta, \mathsf{fma}\left(\alpha, \alpha, -1\right), 1\right)\right)} \]
  3. Taylor expanded in cosTheta around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \pi} \]
  4. Applied rewrites 95.2%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\log \left(\alpha \cdot \alpha\right) \cdot \pi} \]
  5. Taylor expanded in alpha around 0

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  5. Applied rewrites 95.1%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  7. Taylor expanded in alpha around 0

    \[\leadsto \frac{-1}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  8. Applied rewrites 65.5%

    \[\leadsto \frac{-1}{\left(2 \cdot \log \alpha\right) \cdot \pi} \]
  9. Add Preprocessing

Reproduce

herbie shell --seed 2026089 +o generate:egglog
(FPCore (cosTheta alpha)
  :name "GTR1 distribution"
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0)) (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (- (* alpha alpha) 1.0) (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* (- (* alpha alpha) 1.0) cosTheta) cosTheta)))))