One relatively fast way to get high-precision eigenvalues is to first compute the eigenvalues in Float64 precision, and then use these as starting guesses for Newton iterations on det(A − λI) = 0 in the higher precision.
(This doesn't work if the eigenvalues are closer to one another than Float64 precision can distinguish, however.)
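Written out explicitly (my addition, just restating what the code below implements), the Newton update for a root of the characteristic polynomial is

```math
\lambda_{k+1} \;=\; \lambda_k \;-\; \frac{\det(A - \lambda_k I)}{\left.\frac{d}{d\lambda}\det(A - \lambda I)\right|_{\lambda_k}}
\;=\; \lambda_k \;+\; \left.\frac{\det(A + \mu I)}{\frac{d}{d\mu}\det(A + \mu I)}\right|_{\mu = -\lambda_k},
```

so each iteration only needs the ratio of the determinant to its derivative with respect to the shift, evaluated at µ = −λₖ. That ratio is exactly what `invlogdet′` below returns.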
The trick to doing fast Newton iterations is to do a Hessenberg reduction of the matrix first, since this only needs to be done once regardless of the shift, and we can exploit fast Hessenberg determinant algorithms from LinearAlgebra.jl.
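For reference (my addition, not from the original post): the O(n²) Hessenberg determinant itself already ships with LinearAlgebra as a specialized `det` method for the `UpperHessenberg` type; the code below differentiates that same recurrence with respect to the shift. A minimal illustration on a small random matrix:

```julia
using LinearAlgebra

H = UpperHessenberg(triu(randn(5, 5), -1))  # random 5×5 upper-Hessenberg matrix
det(H)          # O(n²) determinant, specialized for the Hessenberg structure
det(Matrix(H))  # same value via dense LU factorization, O(n³)
```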
Of course, the Hessenberg reduction itself is O(n³), just like `eigvals(A)`. But for moderate-sized non-symmetric BigFloat matrices I find that `hessenberg(A)` is still significantly faster than `eigvals(A)`, so doing only the Hessenberg step and then using it for Newton iterations can be a significant speedup, especially if you want only a few of the eigenvalues.
I'm not sure if this belongs in this package, but since coding up the Newton iterations is a little tricky, and QuadGK.jl only has the SymTridiagonal case, I figured I would post it here for posterity:
```julia
using LinearAlgebra
using LinearAlgebra: checksquare

# compute 1 / d(logdet(F+µI))/dµ = det(F+µI) / d(det(F+µI))/dµ, used
# for Newton steps to solve det(F+µI)=0.
function invlogdet′(F::UpperHessenberg, μ::Number)
    # derivative of Cahill (2003) algorithm from LinearAlgebra's hessenberg.jl
    checksquare(F)
    H = F.data
    m = size(H,1)
    m == 0 && return zero(zero(eltype(H)) + μ)
    determinant = H[1,1] + μ
    determinant′ = one(determinant)
    prevdeterminant = one(determinant)
    prevdeterminant′ = zero(determinant)
    m == 1 && return determinant / determinant′
    prods = Vector{typeof(determinant)}(undef, m-1) # temporary storage for partial products
    prods′ = zero(prods) # µ-derivatives of the partial products
    detabs = abs(determinant)
    # rescaling thresholds: √floatmax and √floatmin, so that one further
    # multiplication of in-range quantities cannot overflow or underflow
    big, small = sqrt(floatmax(detabs)), sqrt(floatmin(detabs))
    @inbounds for n = 2:m
        prods[n-1], prods′[n-1] = prevdeterminant, prevdeterminant′
        prevdeterminant, prevdeterminant′ = determinant, determinant′
        # product rule: update determinant′ with the old determinant
        # before the determinant itself is overwritten
        determinant′ = determinant′ * (diag = H[n,n] + μ) + determinant
        determinant *= diag
        h = H[n,n-1]
        # Laplace-expansion correction terms: prods[r] accumulates an earlier
        # leading-block determinant times a running product of subdiagonals,
        # and prods′[r] carries its derivative with respect to µ
        @simd for r = n-1:-2:2
            determinant -= H[r,n] * (prods[r] *= h) - H[r-1,n] * (prods[r-1] *= h)
            determinant′ -= H[r,n] * (prods′[r] *= h) - H[r-1,n] * (prods′[r-1] *= h)
        end
        if iseven(n)
            determinant -= H[1,n] * (prods[1] *= h)
            determinant′ -= H[1,n] * (prods′[1] *= h)
        end
        # rescale to avoid under/overflow
        detabs = abs(determinant)
        if detabs > big || floatmin(detabs) < detabs < small
            scale = inv(detabs)
            determinant *= scale
            determinant′ *= scale
            prevdeterminant *= scale
            prevdeterminant′ *= scale
            for r = 1:n-1
                prods[r] *= scale
                prods′[r] *= scale
            end
        end
    end
    return determinant / determinant′
end
# compute an eigenvalue of H by Newton's method, starting with λ_guess
function newtoneig(H::UpperHessenberg, λ_guess::Number; maxiter::Integer=100)
    λ = λ_guess + float(zero(eltype(H))) # promote guess to the precision of H
    ϵ = eps(abs(λ))
    for i = 1:maxiter
        dλ = invlogdet′(H, -λ)
        λ += dλ
        # quadratic convergence near a simple root: once |dλ|² is at the
        # level of roundoff in λ, the next correction would be negligible
        abs2(dλ) ≤ ϵ * abs(λ) && break
    end
    return λ
end
```

Example usage to compute one eigenvalue of a 100×100 BigFloat matrix:
```julia
julia> using GenericLinearAlgebra, BenchmarkTools

julia> Abig = randn(BigFloat, 100,100);

julia> λ_big = @btime eigvals($Abig);
  1.315 s (27837247 allocations: 2.46 GiB)

julia> Hbig = @btime UpperHessenberg(hessenberg(Abig).H.data);
  141.378 ms (3363701 allocations: 308.11 MiB)

julia> λ = eigvals(Float64.(Abig));

julia> λ_big_1 = @btime newtoneig(Hbig, λ[1]);
  6.550 ms (184604 allocations: 16.87 MiB)

julia> abs(λ_big_1 - λ_big[1]) / abs(λ_big_1)
2.786326310496859493258546447488132511825320457178930769427547477175253081732913e-76
```

Notice that it's about 10x faster than calling `eigvals(Abig)`, even including the `hessenberg` call.
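As a quick sanity check of the derivative code (my addition, not part of the benchmark above), `invlogdet′` can be compared against a central finite difference of det(H + µI) on a small random Hessenberg matrix:

```julia
using LinearAlgebra

# hypothetical test setup: small random upper-Hessenberg matrix and a trial shift
n, μ, δ = 8, 0.3, 1e-6
H = UpperHessenberg(triu(randn(n, n), -1))

f(s) = det(Matrix(H) + s*I)           # dense determinant; fine at this size
det′ = (f(μ + δ) - f(μ - δ)) / (2δ)   # finite difference ≈ d(det(H+µI))/dµ
@show invlogdet′(H, μ) ≈ f(μ) / det′  # should print true
```

And since `newtoneig` refines one eigenvalue at a time, refining all of the Float64 guesses is just a broadcast, e.g. `newtoneig.(Ref(Hbig), λ)` (subject to the caveat above about eigenvalues too close for Float64 to distinguish).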