Conversation

@nsiccha (Contributor) commented Dec 18, 2025

Things were too messy for my taste to merge as is or with small changes. Supersedes #439.

Comment on lines +17 to +19
- `premetric`: a function which, for a given posterior position `pos`, computes either
a) a symmetric, **positive definite** matrix acting as the position dependent Riemannian metric (if `metric_map = IdentityMap()`), or
b) a symmetric, **not necessarily positive definite** matrix acting as the position dependent Riemannian metric after being passed through the `metric_map` argument, which will have to ensure that its return value *is* positive definite (like `metric_map = SoftAbsMap(alpha)`),

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
- `premetric`: a function which, for a given posterior position `pos`, computes either
a) a symmetric, **positive definite** matrix acting as the position dependent Riemannian metric (if `metric_map = IdentityMap()`), or
b) a symmetric, **not necessarily positive definite** matrix acting as the position dependent Riemannian metric after being passed through the `metric_map` argument, which will have to ensure that its return value *is* positive definite (like `metric_map = SoftAbsMap(alpha)`),
- `premetric`: a function which, for a given posterior position `pos`, computes either
a) a symmetric, **positive definite** matrix acting as the position dependent Riemannian metric (if `metric_map = IdentityMap()`), or
b) a symmetric, **not necessarily positive definite** matrix acting as the position dependent Riemannian metric after being passed through the `metric_map` argument, which will have to ensure that its return value *is* positive definite (like `metric_map = SoftAbsMap(alpha)`),
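For illustration, a minimal sketch of how a `metric_map` like `SoftAbsMap(alpha)` can turn a merely symmetric premetric into a positive definite metric. This is not the package's actual implementation; the function name `softabs_metric` is hypothetical, and only the SoftAbs eigenvalue transform λ ↦ λ·coth(αλ) is taken from the literature (Betancourt, 2013):

```julia
using LinearAlgebra

# Hypothetical sketch: map a symmetric (possibly indefinite) premetric H to a
# positive definite matrix via the SoftAbs transform λ ↦ λ * coth(α * λ),
# a smooth approximation of |λ| bounded below by 1/α.
function softabs_metric(H::Symmetric, alpha::Real)
    λ, Q = eigen(H)                 # H = Q * Diagonal(λ) * Q'
    softλ = λ .* coth.(alpha .* λ)  # strictly positive eigenvalues
    return Symmetric(Q * Diagonal(softλ) * Q')
end

H = Symmetric([1.0 2.0; 2.0 -3.0])  # indefinite: not usable as a metric directly
G = softabs_metric(H, 1.0)
isposdef(G)                         # safe to use as a Riemannian metric
```

With `metric_map = IdentityMap()`, by contrast, the `premetric` itself must already return a positive definite matrix.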


ϵ = fwd ? step_size(lf) : -step_size(lf)
ϵ = ϵ'


[JuliaFormatter] reported by reviewdog 🐶

Suggested change

h::Hamiltonian{<:DenseRiemannianMetric{T,<:IdentityMap},<:GaussianKinetic},
θ::AbstractVector{T},
r::AbstractVector{T};
cache=nothing

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
cache=nothing
cache=nothing,

r::AbstractVector{T};
cache=nothing
) where {T}
cache = @something cache begin

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
cache = @something cache begin
cache = @something cache begin

Comment on lines +47 to +48
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + .5 * tr_product(inv_metric, metric_sensitivities[:, :, i])

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + .5 * tr_product(inv_metric, metric_sensitivities[:, :, i])
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] +
0.5 * tr_product(inv_metric, metric_sensitivities[:, :, i])
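The `0.5 * tr_product(inv_metric, metric_sensitivities[:, :, i])` terms come from differentiating the ½·log|G(θ)| part of the Riemannian Hamiltonian, giving ½·tr(G⁻¹ ∂G/∂θᵢ). As a sketch of what a `tr_product` helper could look like (the name follows the diff above, but this implementation is only an assumption about its behavior), the trace of a product can be computed as an O(n²) contraction without forming the O(n³) matrix product:

```julia
using LinearAlgebra

# Sketch: tr(A * B) == sum(transpose(A) .* B), since
# tr(A*B) = Σᵢⱼ Aᵢⱼ * Bⱼᵢ — no matrix product needed.
tr_product(A::AbstractMatrix, B::AbstractMatrix) = sum(transpose(A) .* B)

A = [1.0 2.0; 3.0 4.0]
B = [5.0 6.0; 7.0 8.0]
tr_product(A, B) ≈ tr(A * B)  # true; both equal 69.0
```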

Comment on lines +57 to +60
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
.5 * tr_product(cache.metric_sensitivities[:, :, i], inv_metric_r)
end
#! Eq (18) of Girolami & Calderhead (2011)
(; value, gradient) = ∂H∂θ(h, θ_full, r_half)
r_full = r_half - ϵ / 2 * gradient
# println("r_full: ", r_full)
# Tempering
#r = temper(lf, r, (i=i, is_half=false), n_steps)
# Create a new phase point by caching the logdensity and gradient
z = phasepoint(h, θ_full, r_full; ℓπ=DualValue(value, gradient))
# Update result
if FullTraj
res[i] = z
), cache

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
.5 * tr_product(cache.metric_sensitivities[:, :, i], inv_metric_r)
end
#! Eq (18) of Girolami & Calderhead (2011)
(; value, gradient) = ∂H∂θ(h, θ_full, r_half)
r_full = r_half - ϵ / 2 * gradient
# println("r_full: ", r_full)
# Tempering
#r = temper(lf, r, (i=i, is_half=false), n_steps)
# Create a new phase point by caching the logdensity and gradient
z = phasepoint(h, θ_full, r_full; ℓπ=DualValue(value, gradient))
# Update result
if FullTraj
res[i] = z
), cache
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
0.5 * tr_product(cache.metric_sensitivities[:, :, i], inv_metric_r)
end,
),
cache
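The `#! Eq (18) of Girolami & Calderhead (2011)` comment in the diff refers to the generalized leapfrog integrator for position dependent metrics. As a schematic, standalone sketch (not the PR's actual `step` implementation; for a truly position dependent kinetic energy the half steps become implicit and require fixed point iteration), one leapfrog step alternating momentum half steps with a position full step looks like:

```julia
# Schematic explicit leapfrog step; ∂H∂θ and ∂H∂r are the partial
# derivatives of the Hamiltonian H(θ, r), ϵ the step size.
function leapfrog_step(∂H∂θ, ∂H∂r, θ, r, ϵ)
    r_half = r .- ϵ / 2 .* ∂H∂θ(θ, r)               # momentum half step
    θ_full = θ .+ ϵ .* ∂H∂r(θ, r_half)              # position full step
    r_full = r_half .- ϵ / 2 .* ∂H∂θ(θ_full, r_half) # momentum half step
    return θ_full, r_full
end

# Example: standard normal target with Gaussian kinetic energy.
∂H∂θ(θ, r) = θ  # -∇ log p(θ) for p = N(0, I)
∂H∂r(θ, r) = r
θ_new, r_new = leapfrog_step(∂H∂θ, ∂H∂r, [1.0], [0.0], 0.1)
```

In the Riemannian case of the diff, the extra `tr_product` terms in `∂H∂θ` account for the θ-dependence of the metric.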

r::AbstractVector{T};
cache=nothing,
) where {T}
cache = @something cache begin

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
cache = @something cache begin
cache = @something cache begin

Comment on lines +96 to +97
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + .5 * tr_product(tmpm, premetric_sensitivities[:, :, i])

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + .5 * tr_product(tmpm, premetric_sensitivities[:, :, i])
rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + 0.5 * tr_product(tmpm, premetric_sensitivities[:, :, i])

rv1 = map(eachindex(log_density_gradient)) do i
-log_density_gradient[i] + .5 * tr_product(tmpm, premetric_sensitivities[:, :, i])
end
(;log_density, Q, softabsλ, tmpv, tmpm, rv1)

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
(;log_density, Q, softabsλ, tmpv, tmpm, rv1)
(; log_density, Q, softabsλ, tmpv, tmpm, rv1)

Comment on lines +106 to +109
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
.5 * tr_product(cache.tmpm, cache.premetric_sensitivities[:, :, i])
end
), cache

[JuliaFormatter] reported by reviewdog 🐶

Suggested change
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
.5 * tr_product(cache.tmpm, cache.premetric_sensitivities[:, :, i])
end
), cache
cache.rv1 .- Base.broadcasted(eachindex(cache.rv1)) do i
0.5 * tr_product(cache.tmpm, cache.premetric_sensitivities[:, :, i])
end,
),
cache

@github-actions (Contributor) commented:

AdvancedHMC.jl documentation for PR #484 is available at:
https://TuringLang.github.io/AdvancedHMC.jl/previews/PR484/
