GTR1 distribution

Percentage Accurate: 98.5% → 98.5%
Time: 9.5s
Alternatives: 9
Speedup: 1.0×

Specification

\[\left(0 \leq cosTheta \land cosTheta \leq 1\right) \land \left(0.0001 \leq \alpha \land \alpha \leq 1\right)\]
\[\begin{array}{l} \\ \begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (- (* alpha alpha) 1.0)))
   (/
    t_0
    (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end

Sampling outcomes in binary32 precision:

Local Percentage Accuracy

The average percentage accuracy by input value. The horizontal axis shows the value of an input variable; the variable is selected in the plot title. The vertical axis shows accuracy; higher is better. Red represents the original program, while blue represents Herbie's suggestion; these can be toggled with the buttons below the plot. The line is an average, while the dots represent individual samples.

Accuracy vs Speed

Herbie found 9 alternatives:

Alternative | Accuracy | Speedup
The accuracy (vertical axis) and speed (horizontal axis) of each alternative. Up and to the right is better. The red square shows the initial program, and each blue circle shows an alternative. The line shows the best available speed-accuracy tradeoffs.

Initial Program: 98.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := \alpha \cdot \alpha - 1\\ \frac{t_0}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(t_0 \cdot cosTheta\right) \cdot cosTheta\right)} \end{array} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (- (* alpha alpha) 1.0)))
   (/
    t_0
    (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* t_0 cosTheta) cosTheta))))))
float code(float cosTheta, float alpha) {
	float t_0 = (alpha * alpha) - 1.0f;
	return t_0 / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f + ((t_0 * cosTheta) * cosTheta)));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(alpha * alpha) - Float32(1.0))
	return Float32(t_0 / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) + Float32(Float32(t_0 * cosTheta) * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	t_0 = (alpha * alpha) - single(1.0);
	tmp = t_0 / ((single(pi) * log((alpha * alpha))) * (single(1.0) + ((t_0 * cosTheta) * cosTheta)));
end

Alternative 1: 98.5% accurate, 0.5× speedup

\[\begin{array}{l} \\ \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot \left(cosTheta \cdot cosTheta\right) + 1} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (/ (/ (fma alpha alpha -1.0) PI) (* (log alpha) 2.0))
  (+ (* (fma alpha alpha -1.0) (* cosTheta cosTheta)) 1.0)))
float code(float cosTheta, float alpha) {
	return ((fmaf(alpha, alpha, -1.0f) / ((float) M_PI)) / (logf(alpha) * 2.0f)) / ((fmaf(alpha, alpha, -1.0f) * (cosTheta * cosTheta)) + 1.0f);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(fma(alpha, alpha, Float32(-1.0)) / Float32(pi)) / Float32(log(alpha) * Float32(2.0))) / Float32(Float32(fma(alpha, alpha, Float32(-1.0)) * Float32(cosTheta * cosTheta)) + Float32(1.0)))
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Step-by-step derivation
    1. associate-/r* 98.4%

      \[\leadsto \color{blue}{\frac{\frac{\alpha \cdot \alpha - 1}{\pi \cdot \log \left(\alpha \cdot \alpha\right)}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta}} \]
    2. difference-of-sqr-1 98.0%

      \[\leadsto \frac{\frac{\color{blue}{\left(\alpha + 1\right) \cdot \left(\alpha - 1\right)}}{\pi \cdot \log \left(\alpha \cdot \alpha\right)}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    3. *-commutative 98.0%

      \[\leadsto \frac{\frac{\color{blue}{\left(\alpha - 1\right) \cdot \left(\alpha + 1\right)}}{\pi \cdot \log \left(\alpha \cdot \alpha\right)}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    4. times-frac 98.1%

      \[\leadsto \frac{\color{blue}{\frac{\alpha - 1}{\pi} \cdot \frac{\alpha + 1}{\log \left(\alpha \cdot \alpha\right)}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    5. *-commutative 98.1%

      \[\leadsto \frac{\color{blue}{\frac{\alpha + 1}{\log \left(\alpha \cdot \alpha\right)} \cdot \frac{\alpha - 1}{\pi}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    6. times-frac 98.0%

      \[\leadsto \frac{\color{blue}{\frac{\left(\alpha + 1\right) \cdot \left(\alpha - 1\right)}{\log \left(\alpha \cdot \alpha\right) \cdot \pi}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    7. difference-of-sqr-1 98.4%

      \[\leadsto \frac{\frac{\color{blue}{\alpha \cdot \alpha - 1}}{\log \left(\alpha \cdot \alpha\right) \cdot \pi}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    8. associate-/l/ 98.5%

      \[\leadsto \frac{\color{blue}{\frac{\frac{\alpha \cdot \alpha - 1}{\pi}}{\log \left(\alpha \cdot \alpha\right)}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    9. log-prod 98.4%

      \[\leadsto \frac{\frac{\frac{\alpha \cdot \alpha - 1}{\pi}}{\color{blue}{\log \alpha + \log \alpha}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    10. count-2 98.4%

      \[\leadsto \frac{\frac{\frac{\alpha \cdot \alpha - 1}{\pi}}{\color{blue}{2 \cdot \log \alpha}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    11. *-commutative 98.4%

      \[\leadsto \frac{\frac{\frac{\alpha \cdot \alpha - 1}{\pi}}{\color{blue}{\log \alpha \cdot 2}}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    12. fma-neg 98.6%

      \[\leadsto \frac{\frac{\frac{\color{blue}{\mathsf{fma}\left(\alpha, \alpha, -1\right)}}{\pi}}{\log \alpha \cdot 2}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    13. metadata-eval 98.6%

      \[\leadsto \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, \color{blue}{-1}\right)}{\pi}}{\log \alpha \cdot 2}}{1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta} \]
    14. +-commutative 98.6%

      \[\leadsto \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\color{blue}{\left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta + 1}} \]
  3. Simplified 98.5%

    \[\leadsto \color{blue}{\frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\mathsf{fma}\left(\mathsf{fma}\left(\alpha, \alpha, -1\right), cosTheta \cdot cosTheta, 1\right)}} \]
  4. Step-by-step derivation
    1. fma-udef 98.5%

      \[\leadsto \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\color{blue}{\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot \left(cosTheta \cdot cosTheta\right) + 1}} \]
  5. Applied egg-rr 98.5%

    \[\leadsto \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\color{blue}{\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot \left(cosTheta \cdot cosTheta\right) + 1}} \]
  6. Final simplification 98.5%

    \[\leadsto \frac{\frac{\frac{\mathsf{fma}\left(\alpha, \alpha, -1\right)}{\pi}}{\log \alpha \cdot 2}}{\mathsf{fma}\left(\alpha, \alpha, -1\right) \cdot \left(cosTheta \cdot cosTheta\right) + 1} \]

Alternative 2: 98.7% accurate, 0.7× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := -1 + \alpha \cdot \alpha\\ \frac{t_0}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right) \cdot \left(1 + cosTheta \cdot \left(cosTheta \cdot t_0\right)\right)} \end{array} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (+ -1.0 (* alpha alpha))))
   (/
    t_0
    (* (log (pow (* alpha alpha) PI)) (+ 1.0 (* cosTheta (* cosTheta t_0)))))))
float code(float cosTheta, float alpha) {
	float t_0 = -1.0f + (alpha * alpha);
	return t_0 / (logf(powf((alpha * alpha), ((float) M_PI))) * (1.0f + (cosTheta * (cosTheta * t_0))));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(-1.0) + Float32(alpha * alpha))
	return Float32(t_0 / Float32(log((Float32(alpha * alpha) ^ Float32(pi))) * Float32(Float32(1.0) + Float32(cosTheta * Float32(cosTheta * t_0)))))
end
function tmp = code(cosTheta, alpha)
	t_0 = single(-1.0) + (alpha * alpha);
	tmp = t_0 / (log(((alpha * alpha) ^ single(pi))) * (single(1.0) + (cosTheta * (cosTheta * t_0))));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 98.3%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\left(2 \cdot \left(\log \alpha \cdot \pi\right)\right)} \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  3. Simplified 98.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\color{blue}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right)} \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  4. Final simplification 98.5%

    \[\leadsto \frac{-1 + \alpha \cdot \alpha}{\log \left({\left(\alpha \cdot \alpha\right)}^{\pi}\right) \cdot \left(1 + cosTheta \cdot \left(cosTheta \cdot \left(-1 + \alpha \cdot \alpha\right)\right)\right)} \]

Alternative 3: 98.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \begin{array}{l} t_0 := -1 + \alpha \cdot \alpha\\ \frac{t_0}{\left(1 + cosTheta \cdot \left(cosTheta \cdot t_0\right)\right) \cdot \left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right)} \end{array} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (let* ((t_0 (+ -1.0 (* alpha alpha))))
   (/
    t_0
    (* (+ 1.0 (* cosTheta (* cosTheta t_0))) (* PI (log (* alpha alpha)))))))
float code(float cosTheta, float alpha) {
	float t_0 = -1.0f + (alpha * alpha);
	return t_0 / ((1.0f + (cosTheta * (cosTheta * t_0))) * (((float) M_PI) * logf((alpha * alpha))));
}
function code(cosTheta, alpha)
	t_0 = Float32(Float32(-1.0) + Float32(alpha * alpha))
	return Float32(t_0 / Float32(Float32(Float32(1.0) + Float32(cosTheta * Float32(cosTheta * t_0))) * Float32(Float32(pi) * log(Float32(alpha * alpha)))))
end
function tmp = code(cosTheta, alpha)
	t_0 = single(-1.0) + (alpha * alpha);
	tmp = t_0 / ((single(1.0) + (cosTheta * (cosTheta * t_0))) * (single(pi) * log((alpha * alpha))));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Final simplification 98.4%

    \[\leadsto \frac{-1 + \alpha \cdot \alpha}{\left(1 + cosTheta \cdot \left(cosTheta \cdot \left(-1 + \alpha \cdot \alpha\right)\right)\right) \cdot \left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right)} \]

Alternative 4: 97.5% accurate, 1.0× speedup

\[\begin{array}{l} \\ \frac{-1 + \alpha \cdot \alpha}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 - cosTheta \cdot cosTheta\right)} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (/
  (+ -1.0 (* alpha alpha))
  (* (* PI (log (* alpha alpha))) (- 1.0 (* cosTheta cosTheta)))))
float code(float cosTheta, float alpha) {
	return (-1.0f + (alpha * alpha)) / ((((float) M_PI) * logf((alpha * alpha))) * (1.0f - (cosTheta * cosTheta)));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-1.0) + Float32(alpha * alpha)) / Float32(Float32(Float32(pi) * log(Float32(alpha * alpha))) * Float32(Float32(1.0) - Float32(cosTheta * cosTheta))))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-1.0) + (alpha * alpha)) / ((single(pi) * log((alpha * alpha))) * (single(1.0) - (cosTheta * cosTheta)));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 97.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \color{blue}{\left(-1 \cdot cosTheta\right)} \cdot cosTheta\right)} \]
  3. Simplified 97.5%

    \[\leadsto \frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \color{blue}{\left(-cosTheta\right)} \cdot cosTheta\right)} \]
  4. Final simplification 97.5%

    \[\leadsto \frac{-1 + \alpha \cdot \alpha}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 - cosTheta \cdot cosTheta\right)} \]

Alternative 5: 66.9% accurate, 1.1× speedup

\[\begin{array}{l} \\ \frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 - cosTheta \cdot cosTheta\right)\right)} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (/ -0.5 (* (log alpha) (* PI (- 1.0 (* cosTheta cosTheta))))))
float code(float cosTheta, float alpha) {
	return -0.5f / (logf(alpha) * (((float) M_PI) * (1.0f - (cosTheta * cosTheta))));
}
function code(cosTheta, alpha)
	return Float32(Float32(-0.5) / Float32(log(alpha) * Float32(Float32(pi) * Float32(Float32(1.0) - Float32(cosTheta * cosTheta)))))
end
function tmp = code(cosTheta, alpha)
	tmp = single(-0.5) / (log(alpha) * (single(pi) * (single(1.0) - (cosTheta * cosTheta))));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\left(1 + -1 \cdot {cosTheta}^{2}\right) \cdot \pi\right)}} \]
  3. Simplified 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 + cosTheta \cdot \left(-cosTheta\right)\right)\right)}} \]
  4. Taylor expanded in cosTheta around 0 63.4%

    \[\leadsto \frac{-0.5}{\log \alpha \cdot \color{blue}{\left(-1 \cdot \left({cosTheta}^{2} \cdot \pi\right) + \pi\right)}} \]
  5. Step-by-step derivation
    1. associate-*r* 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\color{blue}{\left(-1 \cdot {cosTheta}^{2}\right) \cdot \pi} + \pi\right)} \]
    2. distribute-lft1-in 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \color{blue}{\left(\left(-1 \cdot {cosTheta}^{2} + 1\right) \cdot \pi\right)}} \]
    3. +-commutative 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\color{blue}{\left(1 + -1 \cdot {cosTheta}^{2}\right)} \cdot \pi\right)} \]
    4. mul-1-neg 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 + \color{blue}{\left(-{cosTheta}^{2}\right)}\right) \cdot \pi\right)} \]
    5. unpow2 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 + \left(-\color{blue}{cosTheta \cdot cosTheta}\right)\right) \cdot \pi\right)} \]
    6. distribute-rgt-neg-out 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 + \color{blue}{cosTheta \cdot \left(-cosTheta\right)}\right) \cdot \pi\right)} \]
    7. distribute-rgt-neg-out 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 + \color{blue}{\left(-cosTheta \cdot cosTheta\right)}\right) \cdot \pi\right)} \]
    8. unpow2 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 + \left(-\color{blue}{{cosTheta}^{2}}\right)\right) \cdot \pi\right)} \]
    9. unsub-neg 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\color{blue}{\left(1 - {cosTheta}^{2}\right)} \cdot \pi\right)} \]
    10. unpow2 63.4%

      \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\left(1 - \color{blue}{cosTheta \cdot cosTheta}\right) \cdot \pi\right)} \]
  6. Simplified 63.4%

    \[\leadsto \frac{-0.5}{\log \alpha \cdot \color{blue}{\left(\left(1 - cosTheta \cdot cosTheta\right) \cdot \pi\right)}} \]
  7. Final simplification 63.4%

    \[\leadsto \frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 - cosTheta \cdot cosTheta\right)\right)} \]

Alternative 6: 95.4% accurate, 1.1× speedup

\[\begin{array}{l} \\ \frac{-1 + \alpha \cdot \alpha}{\pi \cdot \log \left(\alpha \cdot \alpha\right)} \end{array} \]
(FPCore (cosTheta alpha)
 :precision binary32
 (/ (+ -1.0 (* alpha alpha)) (* PI (log (* alpha alpha)))))
float code(float cosTheta, float alpha) {
	return (-1.0f + (alpha * alpha)) / (((float) M_PI) * logf((alpha * alpha)));
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-1.0) + Float32(alpha * alpha)) / Float32(Float32(pi) * log(Float32(alpha * alpha))))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-1.0) + (alpha * alpha)) / (single(pi) * log((alpha * alpha)));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in cosTheta around 0 95.0%

    \[\leadsto \color{blue}{\frac{{\alpha}^{2} - 1}{\log \left({\alpha}^{2}\right) \cdot \pi}} \]
  3. Step-by-step derivation
    1. pow2 95.0%

      \[\leadsto \frac{\color{blue}{\alpha \cdot \alpha} - 1}{\log \left({\alpha}^{2}\right) \cdot \pi} \]
    2. sub-neg 95.0%

      \[\leadsto \frac{\color{blue}{\alpha \cdot \alpha + \left(-1\right)}}{\log \left({\alpha}^{2}\right) \cdot \pi} \]
    3. metadata-eval 95.0%

      \[\leadsto \frac{\alpha \cdot \alpha + \color{blue}{-1}}{\log \left({\alpha}^{2}\right) \cdot \pi} \]
  4. Applied egg-rr 95.0%

    \[\leadsto \frac{\color{blue}{\alpha \cdot \alpha + -1}}{\log \left({\alpha}^{2}\right) \cdot \pi} \]
  5. Taylor expanded in alpha around 0 94.9%

    \[\leadsto \frac{\alpha \cdot \alpha + -1}{\color{blue}{\left(2 \cdot \log \alpha\right)} \cdot \pi} \]
  6. Step-by-step derivation
    1. count-2 94.9%

      \[\leadsto \frac{\alpha \cdot \alpha + -1}{\color{blue}{\left(\log \alpha + \log \alpha\right)} \cdot \pi} \]
    2. log-prod 95.0%

      \[\leadsto \frac{\alpha \cdot \alpha + -1}{\color{blue}{\log \left(\alpha \cdot \alpha\right)} \cdot \pi} \]
  7. Simplified 95.0%

    \[\leadsto \frac{\alpha \cdot \alpha + -1}{\color{blue}{\log \left(\alpha \cdot \alpha\right)} \cdot \pi} \]
  8. Final simplification 95.0%

    \[\leadsto \frac{-1 + \alpha \cdot \alpha}{\pi \cdot \log \left(\alpha \cdot \alpha\right)} \]

Alternative 7: 65.7% accurate, 1.1× speedup

\[\begin{array}{l} \\ \frac{\frac{0.5}{\pi}}{-\log \alpha} \end{array} \]
(FPCore (cosTheta alpha) :precision binary32 (/ (/ 0.5 PI) (- (log alpha))))
float code(float cosTheta, float alpha) {
	return (0.5f / ((float) M_PI)) / -logf(alpha);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(0.5) / Float32(pi)) / Float32(-log(alpha)))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(0.5) / single(pi)) / -log(alpha);
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\left(1 + -1 \cdot {cosTheta}^{2}\right) \cdot \pi\right)}} \]
  3. Simplified 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 + cosTheta \cdot \left(-cosTheta\right)\right)\right)}} \]
  4. Taylor expanded in cosTheta around 0 62.5%

    \[\leadsto \frac{-0.5}{\color{blue}{\log \alpha \cdot \pi}} \]
  5. Taylor expanded in alpha around inf 62.4%

    \[\leadsto \color{blue}{\frac{0.5}{\log \left(\frac{1}{\alpha}\right) \cdot \pi}} \]
  6. Step-by-step derivation
    1. *-commutative 62.4%

      \[\leadsto \frac{0.5}{\color{blue}{\pi \cdot \log \left(\frac{1}{\alpha}\right)}} \]
    2. associate-/r* 62.5%

      \[\leadsto \color{blue}{\frac{\frac{0.5}{\pi}}{\log \left(\frac{1}{\alpha}\right)}} \]
    3. log-rec 62.5%

      \[\leadsto \frac{\frac{0.5}{\pi}}{\color{blue}{-\log \alpha}} \]
  7. Simplified 62.5%

    \[\leadsto \color{blue}{\frac{\frac{0.5}{\pi}}{-\log \alpha}} \]
  8. Final simplification 62.5%

    \[\leadsto \frac{\frac{0.5}{\pi}}{-\log \alpha} \]

Alternative 8: 65.7% accurate, 1.1× speedup

\[\begin{array}{l} \\ \frac{-0.5}{\pi \cdot \log \alpha} \end{array} \]
(FPCore (cosTheta alpha) :precision binary32 (/ -0.5 (* PI (log alpha))))
float code(float cosTheta, float alpha) {
	return -0.5f / (((float) M_PI) * logf(alpha));
}
function code(cosTheta, alpha)
	return Float32(Float32(-0.5) / Float32(Float32(pi) * log(alpha)))
end
function tmp = code(cosTheta, alpha)
	tmp = single(-0.5) / (single(pi) * log(alpha));
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\left(1 + -1 \cdot {cosTheta}^{2}\right) \cdot \pi\right)}} \]
  3. Simplified 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 + cosTheta \cdot \left(-cosTheta\right)\right)\right)}} \]
  4. Taylor expanded in cosTheta around 0 62.5%

    \[\leadsto \frac{-0.5}{\color{blue}{\log \alpha \cdot \pi}} \]
  5. Final simplification 62.5%

    \[\leadsto \frac{-0.5}{\pi \cdot \log \alpha} \]

Alternative 9: 65.7% accurate, 1.1× speedup

\[\begin{array}{l} \\ \frac{\frac{-0.5}{\log \alpha}}{\pi} \end{array} \]
(FPCore (cosTheta alpha) :precision binary32 (/ (/ -0.5 (log alpha)) PI))
float code(float cosTheta, float alpha) {
	return (-0.5f / logf(alpha)) / ((float) M_PI);
}
function code(cosTheta, alpha)
	return Float32(Float32(Float32(-0.5) / log(alpha)) / Float32(pi))
end
function tmp = code(cosTheta, alpha)
	tmp = (single(-0.5) / log(alpha)) / single(pi);
end
Derivation
  1. Initial program 98.4%

    \[\frac{\alpha \cdot \alpha - 1}{\left(\pi \cdot \log \left(\alpha \cdot \alpha\right)\right) \cdot \left(1 + \left(\left(\alpha \cdot \alpha - 1\right) \cdot cosTheta\right) \cdot cosTheta\right)} \]
  2. Taylor expanded in alpha around 0 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\left(1 + -1 \cdot {cosTheta}^{2}\right) \cdot \pi\right)}} \]
  3. Simplified 63.4%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \left(\pi \cdot \left(1 + cosTheta \cdot \left(-cosTheta\right)\right)\right)}} \]
  4. Taylor expanded in cosTheta around 0 62.5%

    \[\leadsto \color{blue}{\frac{-0.5}{\log \alpha \cdot \pi}} \]
  5. Step-by-step derivation
    1. associate-/r* 62.5%

      \[\leadsto \color{blue}{\frac{\frac{-0.5}{\log \alpha}}{\pi}} \]
  6. Simplified 62.5%

    \[\leadsto \color{blue}{\frac{\frac{-0.5}{\log \alpha}}{\pi}} \]
  7. Final simplification 62.5%

    \[\leadsto \frac{\frac{-0.5}{\log \alpha}}{\pi} \]

Reproduce

herbie shell --seed 2023188 
(FPCore (cosTheta alpha)
  :name "GTR1 distribution"
  :precision binary32
  :pre (and (and (<= 0.0 cosTheta) (<= cosTheta 1.0)) (and (<= 0.0001 alpha) (<= alpha 1.0)))
  (/ (- (* alpha alpha) 1.0) (* (* PI (log (* alpha alpha))) (+ 1.0 (* (* (- (* alpha alpha) 1.0) cosTheta) cosTheta)))))