By profiling my code that uses `logerf(a::Float64, b::Float64)`, I realised that most of the time is spent converting to `BigFloat`. The reason is that this line compares the input values to the irrational constant `invsqrt2`. That comparison, in turn, is defined in terms of comparing `BigFloat`s, and it dwarfs the computation time of the rest of the function.
Actually, I wonder why the constant `invsqrt2` is used at all. I think the compiler can very well compile `inv(sqrt(oftype(a, 2)))` into a single statically known number.
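For illustration, a minimal sketch of what I mean (the helper names `invsqrt2_naive` and `below_threshold` are made up, not the package's actual code):

```julia
# Hypothetical helper: compute 1/sqrt(2) in the precision of the argument
# instead of comparing against the Irrational constant.
invsqrt2_naive(a::AbstractFloat) = inv(sqrt(oftype(a, 2)))

# The threshold comparison then stays entirely in Float64 arithmetic; for a
# Float64 argument the constant should be folded at compile time, which can
# be checked with @code_llvm below_threshold(0.3) (it should compare against
# a literal, with no BigFloat machinery involved).
below_threshold(a::Float64) = abs(a) <= invsqrt2_naive(a)
```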
I can't judge the numerical situation, though. Is it necessary, for some reason, to have the higher precision in this comparison?
EDIT: A quick benchmark using BenchmarkTools shows a 7x speedup when switching to the "naive" computation of `invsqrt2`.
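A sketch of such a benchmark, isolating just the threshold comparison (assuming the constant comes from IrrationalConstants.jl):

```julia
using BenchmarkTools
using IrrationalConstants: invsqrt2  # assumed source of the Irrational constant

# Comparison against the Irrational constant (the code path described above):
cmp_irrational(a) = abs(a) < invsqrt2

# Comparison against the "naive" Float64 computation proposed in this issue:
cmp_naive(a) = abs(a) < inv(sqrt(oftype(a, 2)))

x = 0.3
@btime cmp_irrational($x)
@btime cmp_naive($x)
```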