Is Widening Actually Necessary? (a provocative post) 🔥 HOT

Posted by FixpointFeline  |  Subforum: Static Analysis & Abstract Interpretation  |  247 replies  |  8,341 views  |  Last reply: 2 hours ago
😺
FixpointFeline
Abstract Interpretation Enjoyer
Posts: 1,847
Joined: Feb 2020
Location: lfp(⊥)
Reputation: ★★★★☆
#1
Okay I'm going to say the thing. I'm going to say the thing that apparently nobody here is allowed to say nya~. Widening operators are a crutch.

Now before GaloisCat shows up and starts copy-pasting from the POPL '77 paper at me (yes, I've read it, yes, the whole thing, yes, including the footnotes), let me lay out my actual argument.

The standard pitch for widening is that on abstract domains with infinite ascending chains, you need something to guarantee termination of your fixpoint iteration. The Cousot & Cousot 1977 framework gives you the widening operator ∇ : L × L → L with two properties:

1. It's an upper bound: ∀ a, b. a ⊑ (a ∇ b) and b ⊑ (a ∇ b)
2. For every ascending chain (xₙ), the sequence y₀ = x₀, yₙ₊₁ = yₙ ∇ xₙ₊₁ eventually stabilizes.

Great. Wonderful. Nearly fifty years of interval analysis tooling built on this. I get it. But here's my problem: widening is fundamentally a precision-destroying blunt instrument, and we've all just sort of... accepted this? The classic interval widening just YEETS bounds to ±∞ the moment they look unstable. Yes, you can improve this with threshold widening, with delayed widening, with the whole zoo of refinements that has accumulated in the literature since. But all of these feel like band-aids on a fundamentally awkward design.

My actual thesis: with sufficiently lazy / incremental / demand-driven fixpoint computation, you can often avoid widening entirely, or make it a much rarer last resort, without sacrificing termination. Things like:

- Demand-driven analysis (only compute what you need; avoid materializing the full ascending chain)
- Semantic unrolling (specialized approaches for loop-dominated domains that don't require extrapolation)
- Finite abstract domain restriction (just... pick a domain with ACC? controversial, I know)
- Coinductive / greatest-fixpoint approaches for safety properties where you want gfp, not lfp

I'm not saying Cousot was wrong in 1977. I'm saying that our default instinct to slap a widening operator on every infinite-height domain and call it a day might be holding back more elegant solutions.

Flame war begins... now? :3
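For the lurkers who haven't met the operator I'm complaining about, here's a minimal Python sketch (all names mine, not from any real analyzer) of the classic interval widening:

```python
import math

# A minimal sketch (all names mine) of the classic interval widening.
# Intervals are (lo, hi) pairs; None is bottom (the empty interval).
def widen(a, b):
    if a is None:
        return b
    if b is None:
        return a
    # Any bound that is still moving gets extrapolated straight to infinity:
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)
```

So widen((0, 0), (0, 1)) already gives (0, inf) — one unstable step and the upper bound is gone. That's the YEET I'm talking about.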
-- /\_/\ FixpointFeline ( ˘ᵕ˘ ) "The least fixpoint is the truth. > ⊑ < The greatest fixpoint is also fine honestly."
😾
GaloisCat
Galois Connection Supremacist
Posts: 3,201
Joined: Sep 2018
Location: α ∘ γ = id
Reputation: ★★★★★
#2
Oh here we go.
FixpointFeline wrote:
Widening operators are a crutch.
I cannot BELIEVE I have to explain this again. Have you actually read Cousot & Cousot 1977? Not "read" like scrolled past the abstract, but actually READ it? The paper is literally titled "Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints." Section 9. SECTION NINE. It explicitly introduces finite fixpoint approximation methods for precisely the cases where Kleene sequences are infinite. The widening is not a "band-aid." It is a mathematically principled extrapolation operator that is proven to yield sound over-approximations of the collecting semantics.

Your "lazy demand-driven analysis avoids ascending chains" argument is... okay, I'm going to be gentle here... it's not wrong, it's just completely orthogonal. Demand-driven analysis helps you avoid COMPUTING unnecessary portions of the fixpoint. It does not magically make your abstract domain have finite height. If your domain has infinite ascending chains — and the interval domain, the polyhedra domain, octagons, basically everything useful DOES — then without a convergence-enforcing mechanism like widening, you do not terminate. PERIOD. For the widened sequence y₀ = x₀, yₙ₊₁ = yₙ ∇ xₙ₊₁:

∀ ascending chains (xₙ) in L : ∃ N ∈ ℕ. ∀ n ≥ N. yₙ₊₁ = yₙ

This is not optional. This is what makes the analysis computable, nya.

Also, "just pick a domain with ACC" is possibly the most galaxy-brained suggestion I have ever seen on this forum, and I watched MeownadTransformer try to encode abstract interpretation in Haskell type classes for three months.

READ. THE. PAPER. (=ಠ ᆺ ಠ=)
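And since apparently this needs spelling out, here is a toy Python sketch (names mine; intervals as (lo, hi) pairs) of the abstract iteration at the loop head of x = 0; while x < N: x += 1 with unknown N. Without widening the interval climbs forever; with widening it stabilizes after two updates:

```python
import math

def widen(a, b):
    # Classic interval widening: unstable bounds go straight to infinity.
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def step(iv):
    # Abstract effect of one loop iteration (x := x + 1),
    # joined with the loop entry value x = 0.
    return join((0, 0), (iv[0] + 1, iv[1] + 1))

def loop_head(use_widening, max_iters=10_000):
    x = (0, 0)
    for _ in range(max_iters):
        nxt = widen(x, step(x)) if use_widening else join(x, step(x))
        if nxt == x:
            return x          # fixpoint reached
        x = nxt
    return None               # did not stabilize within the budget
```

With use_widening=True this returns (0, inf) almost immediately; with False the upper bound climbs one step per iteration and the budget runs out. That is what "you do not terminate" means in practice.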
-- =^ᴥ^= GaloisCat α ⊣ γ "A Galois connection or DEATH." [Cousot & Cousot 1977, POPL, pp. 238-252, Los Angeles, forever and always]
🐈
LatticeTabby
Complete Lattice Appreciator
Posts: 892
Joined: Jun 2021
Location: ⊓ meets ⊔
Reputation: ★★★★☆
#3
Okay, both of you calm down for a second nya~ I think there's a real and interesting point buried in FixpointFeline's post that GaloisCat is stomping on in their rush to defend the 1977 paper (which, yes, we have all read, multiple times, it's basically required reading to get an account on this forum).

The actual nuanced question is: for which abstract domains is widening genuinely unavoidable vs. an engineering convenience? The canonical answer: when your domain has infinite ascending chains. The interval domain over ℤ has infinite height — consider the chain [0,0] ⊑ [0,1] ⊑ [0,2] ⊑ .... No finite Kleene iteration reaches a fixpoint. Widening is not a "choice" here — it's a mathematical necessity for termination.

HOWEVER — and this is where I think FixpointFeline has a point, even if they stated it badly — there is a real question about where we apply widening and how aggressively. The classic "widen every time at loop heads" strategy is brutally imprecise. Threshold widening, delayed widening (apply widening only after k iterations), and various other refinements genuinely do reduce the information loss.

The more interesting question FixpointFeline might be gesturing toward is whether richer abstract domains (think: constraint-based, or abstract domains derived from program syntax) can naturally have better convergence properties, reducing the need for widening, without resorting to "pick a domain with ACC" (which, as GaloisCat correctly notes, is not a real solution in general).

So: widening is necessary on infinite-height domains, full stop. But how much we lean on it and how coarsely we apply it is very much a live research question. nya~
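To make the refinement point concrete, here's a minimal Python sketch (the threshold values and all names are invented for illustration) of the two refinements I mentioned, threshold widening and delayed widening:

```python
import math

# Hypothetical threshold set; real analyzers typically mine these from
# constants that appear in the program text.
THRESHOLDS = [0, 10, 100, math.inf]

def widen_threshold(a, b):
    # Threshold widening: an unstable bound jumps to the nearest enclosing
    # threshold instead of straight to +/- infinity.
    if a[0] <= b[0]:
        lo = a[0]
    else:
        lo = max((-t for t in THRESHOLDS if -t <= b[0]), default=-math.inf)
    if a[1] >= b[1]:
        hi = a[1]
    else:
        hi = min((t for t in THRESHOLDS if t >= b[1]), default=math.inf)
    return (lo, hi)

def make_delayed(widen_op, k):
    # Delayed widening: plain join for the first k applications, widen after.
    state = {"n": 0}
    def op(a, b):
        state["n"] += 1
        if state["n"] <= k:
            return (min(a[0], b[0]), max(a[1], b[1]))   # join
        return widen_op(a, b)
    return op
```

So instead of (0, inf), the unstable bound in widen_threshold((0, 0), (0, 1)) only jumps to (0, 10) — still extrapolating hard enough to converge, but losing much less information per step.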
-- /|\ LatticeTabby /|\ meet: ⊓ join: ⊔ "Every complete lattice has a greatest element. Comforting, isn't it?" ≽^•⩊•^≼
🙀
MeownadTransformer
Monad Enjoyer nya~
Posts: 2,103
Joined: Jan 2019
Location: State monad, obviously
Reputation: ★★★★☆
#4
ok so this is genuinely interesting and I've been thinking about it from a different angle

like isn't the whole "ensure ascending chains stabilize" condition basically just asking for a well-quasi-ordering on your abstract domain? and WQO structure has been studied quite a bit in termination analysis contexts

also not to be That Person but have any of you actually tried encoding this in Haskell? like if you use a lazy State monad to thread your abstract environment through and only force thunks when you actually need a fixpoint check, you naturally get something demand-driven and—

wait I can already see GaloisCat typing

okay I'll be brief: the fix combinator in Haskell for abstract domains is genuinely interesting, you can represent the fixpoint computation as:

    fix :: (a -> a) -> a
    fix f = let x = f x in x

and if your domain is a Lattice typeclass with a widening method, you get— look, the point is Haskell's lazy evaluation doesn't save you from the need for widening on infinite domains, it just defers the divergence in a more elegant-looking way. trust me, I know from experience. very painful experience. my laptop fan was not happy. (ꏿ﹏ꏿ;)

actually does anyone know if there's a cleaner way to handle this with Data.Lattice or should I just roll my own? asking for a friend named me
-- >=>=>= MeownadTransformer (>>=) :: m a -> (a -> m b) -> m b "A monad is just a monoid in the category of endofunctors, which my cat also is" ^ↀᴥↀ^
😾
GaloisCat
Galois Connection Supremacist
Posts: 3,201
Joined: Sep 2018
Location: α ∘ γ = id
Reputation: ★★★★★
#5
MeownadTransformer wrote:
also not to be That Person but have any of you actually tried encoding this in Haskell?
NO. No Haskell. This is a PROGRAM ANALYSIS thread, not a type class tourism forum.

MeownadTransformer, I love you as a forum member and I respect your contributions to the monad transformer discussion from 2024, but I am BEGGING you to not bring fix :: (a -> a) -> a into a thread about Cousot-style abstract interpretation. Lazy evaluation and semantic widening are operating at completely different levels of abstraction. Haskell's fix over a flat domain gives you the least fixpoint by Kleene iteration when it exists. It tells you NOTHING about convergence on infinite-height abstract domains.

Please. I am a simple cat. I just want to talk about Galois connections. (メ` ロ ´)メ

@LatticeTabby thank you for being the only reasonable entity in this thread so far btw
-- =^ᴥ^= GaloisCat α ⊣ γ "A Galois connection or DEATH." [Cousot & Cousot 1977, POPL, pp. 238-252, Los Angeles, forever and always]
😺
FixpointFeline
Abstract Interpretation Enjoyer
Posts: 1,847
Joined: Feb 2020
Location: lfp(⊥)
Reputation: ★★★★☆
#6
@GaloisCat I DID READ THE PAPER, and I specifically said I wasn't claiming Cousot was wrong in 1977! I'm making a claim about modern practice!

@LatticeTabby — YES, thank you, this is exactly what I was trying to say. Let me try to be more precise.

My actual position, restated more carefully: I am NOT claiming widening is unnecessary on infinite-height domains in general. You are correct that on the interval domain, a chain like [0,0] ⊑ [0,1] ⊑ [0,2] ⊑ ... will not stabilize under naive Kleene iteration. We need something. What I AM questioning is whether the widening/narrowing two-phase approach is always the best tool for this job, or whether it became the dominant paradigm partly by historical inertia, because it was what Cousot had in '77 and '92.

Specifically I want to point at:

1. Abstract domains designed with good convergence properties from the start — rather than slapping a widening on an arbitrary lattice. Syntax-directed domains, template constraint domains, etc.
2. Incremental / modular analysis — where you never materialize the full global fixpoint but instead compose local summaries. The widening question becomes local and often disappears.
3. Narrowing as the important half — I'd argue the narrowing phase (recovering precision after widening over-approximates) does most of the real work in practice, and yet it gets far less theoretical attention.

On point 3: doesn't it bother anyone else that we WIDEN to get a post-fixpoint and then NARROW to recover precision, and the quality of the final result depends heavily on the narrowing? The widening just produces a sound but potentially very imprecise post-fixpoint. Without good narrowing you've lost information permanently.

Maybe the real question is why we design widening operators at all instead of designing better-behaved domains where the Kleene sequence converges faster.

I await GaloisCat's citation barrage. Bring it. (ฅ^•ﻌ•^ฅ)
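Here's point 3 in a toy Python sketch (all names mine), for x = 0; while x < 100: x += 1 — watch where the precision in the final answer actually comes from:

```python
import math

def join(a, b): return (min(a[0], b[0]), max(a[1], b[1]))
def meet(a, b): return (max(a[0], b[0]), min(a[1], b[1]))  # assumes non-empty result

def widen(a, b):
    return (a[0] if a[0] <= b[0] else -math.inf,
            a[1] if a[1] >= b[1] else math.inf)

def narrow(a, b):
    # Standard interval narrowing: only refine bounds that widening
    # pushed all the way to infinity.
    return (b[0] if a[0] == -math.inf else a[0],
            b[1] if a[1] == math.inf else a[1])

def F(iv):
    # Abstract transfer at the loop head of: x = 0; while x < 100: x += 1
    guarded = meet(iv, (-math.inf, 99))            # loop guard x < 100
    return join((0, 0), (guarded[0] + 1, guarded[1] + 1))

# Ascending phase with widening:
x = (0, 0)
while True:
    nxt = widen(x, F(x))
    if nxt == x:
        break
    x = nxt
post = x          # (0, inf): sound post-fixpoint, but very imprecise

# Descending phase with narrowing:
while True:
    nxt = narrow(x, F(x))
    if nxt == x:
        break
    x = nxt
# x is now (0, 100): the narrowing pass recovered the upper bound
```

The ascending phase overshoots to (0, inf); the descending phase, using the loop guard, pulls it back to (0, 100), which is exactly the loop-head invariant. ALL the precision in the final answer came from narrowing. And yet which operator gets all the papers?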
-- /\_/\ FixpointFeline ( ˘ᵕ˘ ) "The least fixpoint is the truth. > ⊑ < The greatest fixpoint is also fine honestly."
😸
PurrfectLattice
Lurker-chan (nya)
Posts: 47
Joined: Oct 2024
Location:
Reputation: ★☆☆☆☆
#7
hi long time lurker first time poster, i made this for the thread nya~
                ⊤
              / | \
             /  |  \
            /   |   \
       [0,∞)   ...   [-∞,0]   ← widening goes BRRRR
         /  \
     [0,5]  [-3,0]
      /  \       \
  [0,3]  [3,5]  [-3,-1]
     |
   [0,1]
     |
     ⊥
     ↑
  "where we started before widening yeeted us to ⊤"

   /\_____/\
  ( >∇< nya )   "widening bad"
  |    ⊑    |
   (______)

the intervals above ⊥ are WHERE WE SHOULD LIVE
the ⊤ at the top is WHERE WIDENING SENDS US

i rest my case
(please don't yell at me galoiscat)
-- ^._.^ PurrfectLattice ^._.^ "still learning!! be nice please!!"
😼
NarrowingEnjoyer
Principled Approximatrix
Posts: 634
Joined: Mar 2022
Location: ∆ domain
Reputation: ★★★☆☆
#8
(=^.^=) widening bad, narrowing good, this is the way
-- ∆ NarrowingEnjoyer ∆ "△: L × L → L | ∀x,y. y ⊑ x ⇒ y ⊑ (x △ y) ⊑ x" =^._.^=
🐱
AbstractKitty
Dependently Typed Catgirl
Posts: 1,129
Joined: Apr 2020
Location: CatType : Type
Reputation: ★★★★★
#9
Okay, I want to try to mediate this from a slightly different angle. Please bear with me, I promise this is relevant, nya~

The debate between "widening is fundamental" and "widening is a pragmatic hack" might dissolve if we think about it categorically. Stay with me!

In the standard abstract interpretation setup, we have a Galois connection ⟨α, γ⟩ : C ⇌ A between concrete and abstract domains. The widening operator ∇ is introduced to compensate for the fact that A may fail the ascending chain condition. But here's the thing: if you think of A not as a fixed lattice but as a colimit in a suitable category of abstract domains, the "ascending chain" problem becomes the question of whether a certain diagram has a colimit in your category. In categories with enough "completeness," this is automatic — no widening needed.

Concretely: some abstract domains (like the interval domain) lack finite height precisely because we've embedded ℤ's order structure into them. If instead we think of the interval domain as arising from a limit sketch or a final coalgebra construction, the convergence question becomes about whether your category of domains supports the appropriate (co)limits. Widening is then a syntactic patch for missing categorical completeness.

This suggests that FixpointFeline's intuition is pointing somewhere real: on "categorically complete" abstract domains (in a precise sense), widening would be unnecessary. The question is whether such domains are practically useful. My tentative answer: yes, for safety properties via greatest fixpoints / final coalgebras. For reachability / liveness, where you need least fixpoints, it's harder.

I'll note that this is basically what the coinductive program analysis literature has been quietly saying for 15 years, and nobody in the widening camp has really engaged with it properly—

...oh no, I can already see GaloisCat and FixpointFeline both starting to reply at the same time, and I think they're going to agree with each other, which means they're about to fight about WHICH WAY they agree
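To ground the gfp claim without any category theory: a minimal Python sketch (the transition system is entirely made up for illustration). On a finite powerset domain, the descending iteration from ⊤ computes a safety property with no widening at all, nya~

```python
# A toy finite transition system (entirely hypothetical). We compute the set
# of states satisfying the safety property AG(not error) as a greatest
# fixpoint, iterating DOWN from the full state set. Subsets of a finite set
# form a finite-height lattice, so the descending Kleene sequence stabilizes
# with no widening (or narrowing) anywhere in sight.
STATES = {0, 1, 2, 3}
SUCC = {0: {1}, 1: {2}, 2: {2}, 3: {0, 3}}   # hypothetical successor map
ERROR = {3}

def F(S):
    # gfp operator: a state is safe if it's not an error state
    # and every successor is (still) in the candidate set S.
    return {s for s in STATES if s not in ERROR and SUCC[s] <= S}

S = set(STATES)          # start at top: everything is tentatively safe
while True:
    nxt = F(S)
    if nxt == S:
        break
    S = nxt
# S is now the set of states from which "error" is never entered
```

Of course this only works because I picked a finite-height domain — which is precisely GaloisCat's Point 1 objection below for the lfp/ranges case. The interesting question is whether the coalgebraic constructions give you the same termination-for-free on richer domains.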
-- (^• ω •^) AbstractKitty Π (x : CatDomain), Analysis x → Sound x "Types are propositions. Cats are proof." Certified Agda Appreciator 🐾
😾
GaloisCat
Galois Connection Supremacist
Posts: 3,201
Joined: Sep 2018
Location: α ∘ γ = id
Reputation: ★★★★★
#10
AbstractKitty wrote:
Widening is then a syntactic patch for missing categorical completeness.
Okay, AbstractKitty, I usually respect your category theory takes but NOW you've done it. You've given FixpointFeline category theory ammunition and I am going to have to stay home from PL club for a WEEK to recover. Fine. FINE. Let's do this properly.

Point 1: Yes, Cousot & Cousot 1992 ("Comparing the Galois Connection and Widening/Narrowing Approaches") explicitly addresses the "just use a finite domain" argument and shows that infinite abstract domains with widening/narrowing are strictly more powerful than the Galois connection approach restricted to finite lattices. So "use a categorically complete domain" is not free — you lose expressiveness.

Point 2: AbstractKitty's coalgebra angle is interesting, but I want to see the theorem, not the vibe. For safety properties via gfp on a complete metric space or a final coalgebra, sure, maybe you sidestep widening. But can you give me an ACTUAL PRACTICAL static analysis for, say, integer variable ranges over loops that doesn't use widening or an equivalent convergence-forcing device?

Point 3: @NarrowingEnjoyer your single-line post has attracted 47 upvotes and this is everything that is wrong with this forum.

Point 4: @PurrfectLattice that ASCII lattice diagram is actually really good and you should not be lurking, please post more.

Point 5: I maintain my position. Widening is necessary. Cousot was right. Has always been right. Will always be right. The ascent to ⊤ is sound and you will learn to live with it. (ÒωÓ)

— okay, I've also sent AbstractKitty a 2000-word private message about the coalgebra thing. expecting a reply. this is not over.
-- =^ᴥ^= GaloisCat α ⊣ γ "A Galois connection or DEATH." [Cousot & Cousot 1977, POPL, pp. 238-252, Los Angeles, forever and always]

💬 Post a Reply

You must be logged in to reply.
📌 Related Threads in Static Analysis & Abstract Interpretation