Add Thierry-Biegler l1-exact penalty-barrier phase for degenerate NLPs #23

@jkitchin

Description

Goal

Implement the Thierry & Biegler ℓ₁-Exact Penalty-Barrier Phase for degenerate NLPs as an Ipopt-parity option in ripopt.

Primary reference: Thierry, D. & Biegler, L.T. (2020). "The ℓ₁ Exact Penalty-Barrier Phase for Degenerate Nonlinear Programming Problems in Ipopt" — IFAC-PapersOnLine, 21st IFAC World Congress.

What the method does

The ℓ₁ norm of the equality constraints c(x) = 0 is moved into the objective and penalized; non-negative slack variables p, n ≥ 0 are introduced so the penalty term becomes smooth:

min   f(x) + ρ · 1ᵀ(p + n)
s.t.  c(x) - p + n = 0
      x_L ≤ x ≤ x_U,    p ≥ 0,    n ≥ 0

The reformulated NLP automatically satisfies LICQ on the augmented variables, so the standard interior-point machinery (filter LS, inertia correction, fraction-to-boundary) just works — no special LICQ-failure handling. The penalty parameter ρ is updated dynamically following Byrd, Nocedal & Waltz (2012): increased as stationarity is approached so the original problem's KKT system is recovered in the limit.
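A minimal sketch of the two moving parts just described: the elastic split of an equality residual into the slack pair (p, n), and the shape of a dynamic penalty increase. The `update_rho` heuristic below is NOT the Byrd–Nocedal–Waltz steering rule, just an illustrative monotone update; all names are hypothetical, not ripopt's API.

```rust
/// Split an equality residual r = c(x) into non-negative slacks (p, n)
/// with r - p + n = 0 and p + n = |r| (the minimal-penalty choice,
/// which is what the slacks converge to at a solution).
fn elastic_split(r: f64) -> (f64, f64) {
    (r.max(0.0), (-r).max(0.0))
}

/// Illustrative dynamic penalty update (a stand-in for the
/// Byrd-Nocedal-Waltz steering rule): grow rho when a step fails to
/// reduce infeasibility sufficiently, otherwise leave it alone.
fn update_rho(rho: f64, infeas: f64, prev_infeas: f64) -> f64 {
    const GROWTH: f64 = 10.0;
    const SUFFICIENT_DECREASE: f64 = 0.9;
    if infeas > SUFFICIENT_DECREASE * prev_infeas {
        rho * GROWTH
    } else {
        rho
    }
}

fn main() {
    let r = -2.5;
    let (p, n) = elastic_split(r);
    assert_eq!(r - p + n, 0.0); // reformulated equality holds exactly
    assert_eq!(p + n, 2.5);     // penalty contribution equals |r|

    assert_eq!(update_rho(1.0, 0.95, 1.0), 10.0); // stalled -> grow
    assert_eq!(update_rho(1.0, 0.50, 1.0), 1.0);  // progressing -> keep
}
```

The split is what makes the penalty term smooth: ρ·1ᵀ(p + n) is linear in the slacks, while minimizing over p, n ≥ 0 recovers ρ‖c(x)‖₁.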

Why ripopt should have this

Degenerate NLPs where LICQ / MFCQ fails — MPCCs, problems with redundant equalities, near-active-set boundaries — are exactly the cases where ripopt's filter LS cycles or thrashes in/out of restoration. The Thierry-Biegler paper reports substantial robustness wins on this slice vs. stock Ipopt.

This is distinct from the upstream line_search_method = penalty / cg-penalty acceptors (Wächter / Laird / Chen / Wen), which swap the line-search merit function but keep the original NLP structure. The ℓ₁ exact penalty-barrier phase reformulates the NLP itself.

Suggested phasing

  1. NLP reformulation layer. Wrap the user-supplied NlpProblem in an augmented-NLP adapter that adds 2·m_eq slack variables, transforms equality rows to c(x) − p + n = 0, and adds ρ · 1ᵀ(p + n) to the objective. Gate it behind a SolverOptions flag (e.g. l1_exact_penalty_barrier: bool + l1_penalty_init: f64).
  2. Dynamic ρ update. Implement the Byrd–Nocedal–Waltz update rule. Hook it into the main IPM loop after each accepted step.
  3. Solution mapping. On termination, project the augmented solution back to the original variable space and report the original-problem KKT residuals (so the user sees (x*, λ*), not the augmented (x*, p*, n*, λ*)).
  4. Restoration handoff. Decide whether the ℓ₁-penalty path replaces or augments restoration — the paper's claim is that restoration is rarely needed once the reformulation is active, but ripopt should still fall through to its existing restoration phase as a safety net.
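Step 1 above is the bulk of the work. A sketch of the adapter against a hypothetical minimal problem trait — ripopt's real NlpProblem interface will differ (gradients, Jacobian sparsity, bounds, etc. are omitted here):

```rust
// Hypothetical minimal NLP interface for illustration only.
trait Nlp {
    fn num_vars(&self) -> usize;
    fn num_eq(&self) -> usize;
    fn objective(&self, x: &[f64]) -> f64;
    fn equalities(&self, x: &[f64]) -> Vec<f64>;
}

/// Augmented problem: variables are [x, p, n], the objective gains
/// rho * 1'(p + n), and each equality row becomes c_i(x) - p_i + n_i = 0.
struct L1Penalty<P> {
    inner: P,
    rho: f64,
}

impl<P: Nlp> Nlp for L1Penalty<P> {
    fn num_vars(&self) -> usize {
        self.inner.num_vars() + 2 * self.inner.num_eq()
    }
    fn num_eq(&self) -> usize {
        self.inner.num_eq()
    }
    fn objective(&self, z: &[f64]) -> f64 {
        let (nx, m) = (self.inner.num_vars(), self.inner.num_eq());
        let slack_sum: f64 = z[nx..nx + 2 * m].iter().sum();
        self.inner.objective(&z[..nx]) + self.rho * slack_sum
    }
    fn equalities(&self, z: &[f64]) -> Vec<f64> {
        let (nx, m) = (self.inner.num_vars(), self.inner.num_eq());
        let c = self.inner.equalities(&z[..nx]);
        (0..m).map(|i| c[i] - z[nx + i] + z[nx + m + i]).collect()
    }
}

// Toy original problem: min x^2  s.t.  x - 1 = 0.
struct Toy;
impl Nlp for Toy {
    fn num_vars(&self) -> usize { 1 }
    fn num_eq(&self) -> usize { 1 }
    fn objective(&self, x: &[f64]) -> f64 { x[0] * x[0] }
    fn equalities(&self, x: &[f64]) -> Vec<f64> { vec![x[0] - 1.0] }
}

fn main() {
    let aug = L1Penalty { inner: Toy, rho: 10.0 };
    assert_eq!(aug.num_vars(), 3); // x plus slacks p, n

    // z = [x, p, n] = [2.0, 1.0, 0.0]: c(x) = 1, so c - p + n = 0.
    let z = [2.0, 1.0, 0.0];
    assert_eq!(aug.equalities(&z), vec![0.0]);
    // Objective picks up rho * (p + n): 4 + 10 * 1 = 14.
    assert_eq!(aug.objective(&z), 14.0);
}
```

Because the adapter implements the same trait as the inner problem, the rest of the IPM loop never needs to know whether it is solving the original or the augmented NLP; the slack bounds p, n ≥ 0 ride on the existing bound-handling machinery.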

Acceptance

  • l1_exact_penalty_barrier = true selectable from SolverOptions; default false.
  • Default behavior (flag off) byte-identical to current ripopt — verified by a CUTEst run.
  • With the flag on: at least one MPCC-like test problem (e.g. an HS / CUTEst degenerate equality case) that stock ripopt fails or thrashes on now solves cleanly.
  • Unit tests: NLP reformulation correctness, ρ update rule, solution back-projection.
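The back-projection unit test in particular is cheap to state. A sketch with a toy augmented solution — the helper names are illustrative, not ripopt's API:

```rust
/// Recover the original variables from an augmented point z = [x, p, n].
fn project(z: &[f64], nx: usize) -> Vec<f64> {
    z[..nx].to_vec()
}

/// l1 infeasibility of the original equalities at the recovered point,
/// which is what should be reported to the user instead of the
/// (always-feasible) augmented residual.
fn infeasibility(c: &[f64]) -> f64 {
    c.iter().map(|ci| ci.abs()).sum()
}

fn main() {
    // Augmented solution [x0, x1, p, n] for a 2-variable, 1-equality problem.
    let z = [1.0, 2.0, 0.0, 0.0];
    let x = project(&z, 2);
    assert_eq!(x, vec![1.0, 2.0]);

    // Original equality c(x) = x0 + x1 - 3, evaluated at the projection:
    let c = [x[0] + x[1] - 3.0];
    assert_eq!(infeasibility(&c), 0.0);
}
```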

Origin

User asked about David Thierry's ℓ₁ work in Ipopt; the relevant paper is Thierry & Biegler (2020) above. (Note: this work is not in upstream Ipopt 3.14 — it's a separate research implementation by the authors, not part of the public release.)

Related

  • Byrd, Nocedal & Waltz (2012), "A line search exact penalty method using steering rules" — penalty-update rule the paper builds on.
  • Upstream Ipopt's PenaltyLSAcceptor and CGPenaltyLSAcceptor are different mechanisms (line-search merit functions, not NLP reformulation); if we want those too they belong in a separate issue.
