Is Widening Actually Necessary? (a provocative post) ✓ RESOLVED/CONCLUDED 🔒 LOCKED

Forum: Static Analysis  |  Started by: AbstractKitty  |  Total posts: 247  |  Views: 18,342
Pages: 1 2 … 22 23 24 25   Showing posts 241–247 of 247
#241

Okay I've been lurking for 24 pages and someone in the last thread drift made me finally make an account.

▸ NarrowingEnjoyer wrote (post #238):
"…but that's basically what the Rust borrow checker is doing — it's computing a fixed-point over a borrow graph lattice…"

This is almost right but not quite. The borrow checker does reason about a fixed-point, but it's not abstract interpretation in the Cousot–Cousot sense. The domain isn't a complete lattice with a widening operator — Rust's analysis terminates by construction because lifetime regions are finitely bounded by the syntactic structure of the program.

Abstract interpretation in the classical sense needs widening precisely because the abstract domain might be infinite (think intervals over ℤ). Rust sidesteps this by having a domain that's already finite — there are only finitely many borrow graph edges possible in a given function body.

So it's a static analysis, yes. Is it abstract interpretation? Only in the broadest "safe approximation over an abstract domain" sense. Honestly, some people do use that definition and it's fine, but then we're back to asking whether type checking is abstract interpretation, and that way lies madness.

[ edited 02:19 — fixed a typo ]

#242

PolyvariantPawPrints has it right, and I'll add some nuance.

The original Rust borrow checker (pre-NLL) was essentially a syntactic check with a very restricted lifetime discipline — no abstract interpretation involved at all, just scope nesting rules. The Non-Lexical Lifetimes (NLL) version is more interesting: it's computing dataflow equations over a region constraint graph.

// NLL computes: for each loan L and point P,
// does the lifetime 'a "contain" P?
// Formally: liveness(var) ∪ constraints propagated.
// This IS a dataflow fixed-point computation.
let region_inference: FixedPoint<RegionConstraintGraph> = ...

So in NLL's case you do have a monotone dataflow analysis over a finite lattice (the powerset of program points). Termination is guaranteed because the lattice is finite — no widening needed. This is technically a restricted form of abstract interpretation where the abstract domain happens to be finite, making widening vacuous.

The question "is Rust's borrow checker abstract interpretation?" has the same energy as "is a pushdown automaton a Turing machine?" — technically you can embed one in the other with the right framing, but the framing obscures what's actually interesting about each.

Anyway this is a completely different topic from widening for numerical domains which is what this thread was supposed to be about. Hi everyone, sorry for the tangent.

#243  📌 OP — Thread Summary

Hi everyone. I'm the one who started this thread 847 days ago with what I admit was a deliberately provocative title. I owe you all a proper closing summary because this thread genuinely produced something worth preserving, even if it took 25 pages and a detour through Rust's borrow checker to get there.

═══ THREAD SUMMARY: Is Widening Actually Necessary? ═══

The original question: My claim was that widening is a pragmatic hack — a kludge to force termination that sacrifices precision, and that in principle you should always be able to design a better abstract domain or use a different termination strategy.

What the thread established (roughly):

1. Widening is not definitionally necessary for termination in abstract interpretation — you can use narrowing-first approaches, acceleration techniques, or simply work in finite domains. (Thanks LatticeTabby for the early formalization.)

2. However, widening is practically indispensable for infinite abstract domains (intervals, octagons, polyhedra) on programs with loops — the convergence speed without it is unacceptable in most real tools. (BorrowCatChecker and PushdownPurr made this point across pages 3–7.)

3. The coalgebra perspective (pages 11–14, hat tip MeownadTransformer) reframes the question beautifully: widening operators correspond to a choice of corecursion strategy for computing the greatest fixpoint in the dual lattice. Whether this is "necessary" depends on what you accept as a valid termination proof.

4. There exist widening-free analyses for important fragments — model checking of pushdown systems, type-theoretic analysis à la Rust's borrow checker — but these rely on structural bounds unavailable in general numerical analysis.

5. The provocative answer to the original question: Widening (or something playing its role) is necessary iff your abstract domain has infinite ascending chains AND you want a guaranteed-terminating analysis. (Mere infinity isn't enough: flat constant-propagation lattices are infinite but have bounded height, so plain iteration converges.) Otherwise, it's an engineering choice.

A note on the Rust tangent: BorrowCatChecker and PolyvariantPawPrints are both right. NLL's region inference is a finite-domain dataflow fixed-point — widening is trivially vacuous because the lattice is already finite. This doesn't make Rust's borrow checker "abstract interpretation" in the Cousot sense any more than bubble sort is a Turing machine. But it's a beautiful illustration that bounded domain height (finiteness, in NLL's case) is the real condition, not "is widening present."

📄 Upcoming preprint announcement

I'm writing up the coalgebra–widening connection that MeownadTransformer sketched in posts #128–#134 as a proper short paper. Working title:

"Widening as Corecursion: A Coalgebraic Account of Convergence Acceleration in Abstract Interpretation"

Draft target: SAS 2025 extended abstract submission. Will post preprint link here when it's on arXiv. Co-authorship offer is open to MeownadTransformer if you want it — you did the hard conceptual work. LatticeTabby, your formalization from page 2 will be cited prominently.

Thank you all. This is why I love this forum. Even the drift into Rust borrow checking was productive in the end.

— AbstractKitty 🐈

#244

AbstractKitty, that is a genuinely excellent summary. Thank you for writing it — I know these closing-summary posts take effort, and this one does the thread justice.

I want to add one small observation to close my own arc in this conversation. When I wrote the formalization on page 2, I was somewhat skeptical of your original provocative stance. My instinct was to defend widening as a necessary construct. What I didn't anticipate was that trying to formalize "necessary" precisely would force me to confront how much the answer depends on what problem you're actually solving.

By page 8 I had quietly changed my position. Not dramatically — widening is still the right tool for the job in nearly all practical numerical analyses — but I now think the more honest framing is:

▸ LatticeTabby (personal notes, paraphrased):
Widening is a constructive proof of termination for analyses over infinite abstract domains. It is "necessary" in the same sense that a termination proof is necessary — you need something to play that role. Whether that something must be called "widening" is a matter of definition, not mathematics.

That's as close to a resolution as I think we'll get on the theoretical side, and I'm satisfied with it.
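For the record, LatticeTabby's "constructive proof of termination" framing matches the standard Cousot-style requirements on a widening operator, sketched here in LaTeX:

```latex
% A widening $\nabla$ on a poset $(D, \sqsubseteq)$ must satisfy:
% (i) over-approximation of both arguments:
\forall x, y \in D:\quad
  x \sqsubseteq x \,\nabla\, y
  \;\wedge\;
  y \sqsubseteq x \,\nabla\, y
% (ii) forced convergence: for every sequence $(x_n)$, the chain
\quad y_0 = x_0, \qquad y_{n+1} = y_n \,\nabla\, x_{n+1}
% is eventually stationary.
```

Condition (ii) is literally a termination certificate, which is the precise sense in which "you need something to play that role."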

AbstractKitty — I would be honoured to have my page-2 formalization cited. Please do reach out before submission and I can send you the cleaned-up Lean version I've been sitting on.

It has been a genuine pleasure, everyone. This community continues to be one of the few places on the internet where you can spend 25 pages arguing about abstract interpretation and come away having actually learned something.  🔷🐈

#245

AbstractKitty — yes, absolutely, I'm in for co-authorship. The framing you've given in post #243 is cleaner than anything I managed in posts #128–#134, and I think the paper will be much better for your involvement. I'll DM you the notes I have on the categorical semantics side.

Also: the Coq formalization of the coalgebraic widening correspondence is almost done. Just 3 more lemmas to go:

(* Remaining lemmas — Coq 8.18 *)

(* 1 *)
Lemma coiter_widening_commutes :
  forall (F : Functor) (W : WideningOp F),
    coiter_with_widening F W ≃ greatest_fixpoint F.

(* 2 *)
Lemma widening_acceleration_sound :
  forall (D : AbstractDomain) (n : nat),
    n_wide_steps D n ⊒ true_semantics D.

(* 3 *)
Lemma finite_domain_widening_trivial :
  forall (D : AbstractDomain),
    is_finite D → widening_op D = join_op D.

Lemma 3 is what turns the Rust borrow-checker observation into a formal theorem: finite-domain abstract interpretation has a canonical "widening" operator that is just the join, so the question of whether widening is "present" in NLL dissolves.

Lemma 1 is the one that's making me slightly nervous. The coherence condition for the natural transformation is fiddly. I may need to ask on the Coq Discourse. But we're close!

#246

ModKitty has locked this thread.

This thread has run its natural course and concluded with an excellent summary by the OP. Locking to preserve the discussion as a reference. This is exactly the kind of thread CGPA exists for — 847 days, 247 posts, genuine intellectual progress, and only 3 incidents requiring mod intervention (all minor). A preprint is apparently incoming. We will post a link in the Announcements subforum when it appears on arXiv.

If you wish to continue any of the sub-discussions (Rust/abstract interpretation, coalgebraic semantics, etc.), please open a new thread with a focused title.

Outstanding work, everyone. 🐾

ModKitty | Forum Moderator  |  Posts: 12,044  |  Joined: Feb 2012  |  🕐 Tue 25 Mar 2025  18:05 UTC  |  Post #247 (final)
🔒 This thread is now LOCKED.
You cannot reply to this thread. Start a new thread to continue the discussion.
End of thread (247 posts)