Narrowing Operators: The Underrated Half of Abstract Interpretation

Posted by NarrowingEnjoyer • Started 14 days ago • 67 replies • 2,341 views
Tags: narrowing, abstract-interpretation, intervals, fixpoints
πŸ“Œ Subscribe πŸ”– Bookmark πŸ“„ Print View βš‘ Report Thread ← Related: Widening Thread
Showing posts 1–18 of 68
😼
β˜…β˜…β˜…β˜…β˜†
Lattice Witch
Posts: 1,042 Joined: 3 years ago Online
"the descent is where the truth lives"
β€” me, just now
Narrowing Operators: The Underrated Half of Abstract Interpretation

Okay. I am going to write more than one line this time, because this has been festering in my brain for weeks and I need to say it: narrowing operators are the underappreciated half of abstract interpretation and we need to talk about this.

Every time someone posts about convergence in abstract interpretation β€” including the thread on widening from last month β€” the discussion is dominated by βˆ‡ (widening). "Widening ensures termination." "Widening is the hard part." "Widening is where the creativity is." And okay, fair, widening deserves respect. But the entire story of how we actually compute useful results hinges on what happens after widening: the descending phase, driven by β–³.

Let me frame the problem. Widening overshoots. That is its whole job β€” it introduces an over-approximation to force termination of the ascending chain. But the result is frequently garbage. Consider the classic example:

x = 1;
while (*) { x = 2; }

After widening, x's abstract value in the interval domain converges to [1, +∞). The true invariant is [1, 2]. The widening threw away a finite upper bound and replaced it with +∞ β€” which is technically sound but completely useless for any downstream check.

Enter narrowing. The βˆ† operator is applied in a second, descending phase starting from the post-fixpoint produced by widening. It iteratively recovers precision that widening discarded. In the interval domain, the narrowing definition is elegantly simple:

[a, b] β–³ [c, d] = [(a = βˆ’βˆž ? c : a), (b = +∞ ? d : b)]

Read that formula. What it says is: only replace a bound if it's currently infinite. If your left endpoint is already finite, keep it. If your right endpoint is +∞, replace it with the concrete value d. It recovers finite bounds from infinite ones β€” nothing more, nothing less.

Now here is the thing: the theory of narrowing is substantially richer than this simple definition suggests, and I feel like it barely gets discussed. The formal requirements for a narrowing operator are:

(1) βˆ€x, y ∈ DΜ‚ . (y βŠ‘ x) ⟹ y βŠ‘ (x β–³ y) βŠ‘ x
(2) Every decreasing chain yβ‚€, y₁, yβ‚‚, ... where yα΅’β‚Šβ‚ = yα΅’ β–³ f(yα΅’) eventually stabilizes (no infinite strict descent)

Condition (1) says narrowing stays within the current post-fixpoint from below β€” it's sandwiched. Condition (2) guarantees termination of the descending iteration. These are dual to the widening conditions in an interesting structural sense, and yet I see papers agonize for pages over widening design while treating narrowing as a footnote.
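The sandwich condition is easy to spot-check mechanically. Here's a minimal Python sketch of interval narrowing (intervals as `(lo, hi)` tuples using `math.inf`; the helper names are mine, not from any library) that brute-forces condition (1) over a small grid of intervals:

```python
import math
from itertools import product

INF = math.inf

def narrow(x, y):
    """Interval narrowing x β–³ y: an infinite bound of x is replaced by the
    corresponding bound of y; finite bounds are frozen."""
    (a, b), (c, d) = x, y
    return (c if a == -INF else a, d if b == INF else b)

def leq(x, y):
    """x βŠ‘ y in the interval order: x is contained in y."""
    return y[0] <= x[0] and x[1] <= y[1]

# Spot-check condition (1): whenever y βŠ‘ x, we have y βŠ‘ (x β–³ y) βŠ‘ x.
bounds = [-INF, -3, 0, 2, INF]
intervals = [(a, b) for a, b in product(bounds, bounds)
             if a <= b and a < INF and b > -INF]
for x in intervals:
    for y in intervals:
        if leq(y, x):
            z = narrow(x, y)
            assert leq(y, z) and leq(z, x)   # sandwiched, as required
```

Condition (2) holds here for a structural reason: each application can only turn an infinite bound finite, never the reverse, so any descending chain stabilizes after at most two "healing" steps per variable.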

Some things I think deserve more attention:

  • The duality between narrowing and widening via the adjoint relationship in Galois connections
  • Designing narrowing operators for non-interval domains (octagons, polyhedra, templates) β€” this is genuinely hard
  • The question of how many narrowing iterations to perform (stopping after one is common practice, but sub-optimal)
  • Narrowing in non-monotone equation systems, where the standard theory breaks down
  • The relationship between narrowing and interpolation in the Cousot sense

I'll be honest: part of what prompted this post is that I tried to explain narrowing to a friend who has read three papers on widening and had never heard of the descending phase. That's a failure mode of how we teach and discuss this stuff. Widening gets you to a fixpoint. Narrowing makes it actually mean something.

Discuss. Please. I'm normal about this.

😺
β˜…β˜…β˜…β˜…β˜…
Kleene Completionist
Posts: 3,887 Joined: 5 years ago Online
lfp(F) exists and so do I
β€” barely
Re: Narrowing Operators: The Underrated Half of Abstract Interpretation
NarrowingEnjoyer wrote:
I'll be honest: part of what prompted this post is that I tried to explain narrowing to a friend who has read three papers on widening and had never heard of the descending phase.

THANK YOU. I have been waiting for someone to write this post since the widening thread and I'm glad it's finally here.

For those who didn't see the widening thread, I spent a lot of time there arguing that widening gets a disproportionate amount of theoretical attention relative to its complexity budget in real analyses. The flip side of that argument β€” which I hinted at but didn't push on enough β€” is that narrowing's theoretical properties are far more interesting than its typical one-paragraph treatment suggests.

The key thing OP didn't mention that I want to add: narrowing is doing something deep in terms of information retrieval. Widening deliberately throws away information to ensure termination. But narrowing isn't just "put the information back" β€” it's operating on a post-fixpoint, descending toward the true least fixpoint from above, and the requirement that it terminates in finitely many steps is non-trivial. You can't just freely recover information; the operator is constrained by the sandwich condition.

Also: the common practice of doing just one narrowing step after widening stabilizes is something I find maddening. A single step! You fought all the way up to a coarse post-fixpoint and then you do one refinement pass and call it done. Iterating the descending chain to convergence can be strictly more precise, in the best case descending all the way to the greatest fixpoint below the post-fixpoint. Implementations skip it for performance reasons and I understand that, but let's be honest about what we're giving up.

This thread has my full attention. 🐾

🐈
β˜…β˜…β˜…β˜…β˜…
Domain Theorist
Posts: 2,241 Joined: 4 years ago Offline
every lattice is a preorder;
not every preorder is a lattice.
Re: [TECHNICAL BREAKDOWN] Interval narrowing in detail

Let me give the full technical picture for interval narrowing, because I think the formula OP wrote deserves a worked example and a clear statement of what it's achieving.

Setup. We're in the interval abstract domain Iβ„€ = { [a,b] | a ∈ β„€βˆͺ{βˆ’βˆž}, b ∈ β„€βˆͺ{+∞}, a ≀ b } βˆͺ { βŠ₯ }. The partial order is by inclusion: [a,b] βŠ‘ [c,d] iff c ≀ a and b ≀ d. Ascending chains can be infinite (that's the whole point of needing widening in the first place).

The narrowing operator β–³ on intervals.

[a, b] β–³ βŠ₯ = βŠ₯
βŠ₯ β–³ [a, b] = βŠ₯
[a, b] β–³ [c, d] = [ (a = βˆ’βˆž ? c : a), (b = +∞ ? d : b) ]

The semantics: only "heal" an infinite bound by substituting the corresponding bound from the right-hand argument. Finite bounds are frozen. This is why the descending chain terminates β€” each application can replace at most one βˆ’βˆž and one +∞, and once a bound is made finite, it stays finite forever.

Worked example. Let's trace through the classic loop analysis:

x := 0;
while (x < 5) { x := x + 1; }
// post-loop: what is x?

Ascending phase (with widening βˆ‡):

Xβ‚€ = βŠ₯
X₁ = [0, 0]   (initial assignment)
Xβ‚‚ = [0, 0] βˆ‡ [0, 1] = [0, +∞]   (widening fires, upper bound explodes)
X₃ = [0, +∞] βˆ‡ [0, +∞] = [0, +∞]   (stable β€” post-fixpoint reached)

So widening gives us [0, +∞]. Not ideal. The loop guard x < 5 tells us the body only executes when x ≀ 4, so the post-loop value should be exactly 5 β€” but our widened result says "could be anything non-negative." Useless for a bounds check.

Descending phase (with narrowing β–³):

Yβ‚€ = [0, +∞]   (start from post-fixpoint)
Y₁ = [0, +∞] β–³ F([0, +∞]) = [0, +∞] β–³ [0, 5] = [0, 5]
Yβ‚‚ = [0, 5] β–³ F([0, 5]) = [0, 5] β–³ [0, 5] = [0, 5]   (stable!)

After just one narrowing iteration, we recover [0, 5]. The +∞ was replaced by 5 because the transfer function applied to [0, +∞] β€” accounting for the loop guard β€” produces a finite bound. The lower bound was already finite so it stays put.

The key insight OP touched on but I want to make explicit: narrowing is not trying to recover the least fixpoint. It descends from the post-fixpoint toward a more precise sound over-approximation, and it may stop before reaching any fixpoint at all. The conditions guarantee it lands somewhere useful:

βˆ€x, y ∈ DΜ‚ : (y βŠ‘ x) ⟹ y βŠ‘ (x β–³ y) βŠ‘ x

So x β–³ y is always between y and x. In the descending chain, each step stays at or below the previous one. Combined with the termination condition on descending chains, you're guaranteed to stabilize at something strictly more precise than the widened result, yet still sound.

The harder case: non-interval domains. For octagons and polyhedra, designing a good narrowing operator is substantially more involved. The interval trick works because the lattice structure is so simple β€” you just have two independent "slots" for bounds. For octagons, you have O(nΒ²) constraints, and naively applying the same trick can violate consistency conditions. For polyhedra, there's a whole separate discussion to be had. Some domains (like general polyhedra) don't even have a standard narrowing in the literature β€” which is exactly OP's point about underappreciation.

🐱
β˜…β˜…β˜…β˜…β˜…
α ⊣ γ Enthusiast
Posts: 4,102 Joined: 6 years ago Online
Ξ±(Ξ³(x)) βŠ‘ x
Ξ³(Ξ±(x)) βŠ‡ x
this is fine.
Re: Narrowing Operators: The Underrated Half of Abstract Interpretation

*exhales slowly*

Fine. I'll admit it. Narrowing is underappreciated. I have historically been in the camp of "widening is the intellectually interesting operator, narrowing is just cleanup" and I think... that was wrong of me, or at least imprecise.

The thing that's shifting my view is LatticeTabby's point about non-interval domains. I've been working on octagon analysis lately and the narrowing story there is genuinely non-trivial. Octagons are defined by constraints of the form Β±xα΅’ Β± xβ±Ό ≀ c, and when you try to apply the interval-style "only heal infinite bounds" trick, you run into the closure problem: the octagon domain requires you to propagate constraint updates through a closure procedure, and that closure can re-introduce infinite values or collapse the narrowed result in unexpected ways.

There's a reason papers on octagon narrowing are actually cited β€” it's not as trivial as "just do what intervals do but with more variables."

Where I still push back a little: I think the reason narrowing gets less theoretical attention is somewhat defensible. The widening operator is what determines whether your analysis terminates at all. A bad narrowing gives you imprecise results; a bad widening gives you an infinite loop. In industrial static analyzers, you can ship with a "do one narrowing step" policy and have users mostly not notice, but you absolutely cannot ship with a broken widening. The consequence asymmetry is real.

That said: I withdraw my long-held position that narrowing is trivial. LatticeTabby's worked example alone makes the case. The descending phase deserves a proper treatment in survey papers, not a footnote.

*grudging upvote given*

😼
β˜…β˜…β˜…β˜…β˜†
Lattice Witch
Posts: 1,042 Joined: 3 years ago Online
"the descent is where the truth lives"
β€” me, just now
Re: Narrowing Operators: The Underrated Half of Abstract Interpretation
GaloisCat wrote:
A bad narrowing gives you imprecise results; a bad widening gives you an infinite loop. The consequence asymmetry is real.

This is a fair point and I don't want to dismiss it. The severity asymmetry is real in practice. But I'd argue it's also exactly why we need more theory for narrowing, not less. Because the "just do one step, it's fine" approach is a pragmatic cop-out that works empirically until it doesn't, and we don't have good theoretical characterizations of when one step is sufficient and when it leaves precision on the table.

For widening, we have rich theory about when a widening is "good" (stabilizes in few iterations, produces results close to the fixpoint, etc.). For narrowing, the analogous theory is much thinner. When is a single narrowing step as precise as the full convergence? Under what structural conditions on the transfer function and the domain does one-step narrowing equal the greatest fixpoint below the post-fixpoint? I don't know of a clean answer in the literature and I've looked.
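To make the "one step can leave precision on the table" point concrete, here's a hypothetical two-equation system I cooked up (entirely my own construction, with Jacobi-style simultaneous updates as an assumption; a work-list or chaotic ordering can behave differently). Y is defined from X, so healing Y's bound has to wait until X has been narrowed first:

```python
import math
INF = math.inf

def narrow(x, y):
    return (y[0] if x[0] == -INF else x[0], y[1] if x[1] == INF else x[1])

# Hypothetical system (simultaneous updates):
#   X = [0,0] βŠ” ((X ∩ (-∞,4]) + 1)   -- loop-head interval for x
#   Y = X + 10                        -- y is defined from x
def F_X(X):
    lo, hi = X[0], min(X[1], 4)       # guard x <= 4 (non-empty along this trace)
    lo, hi = lo + 1, hi + 1           # x := x + 1
    return (min(0, lo), max(0, hi))   # join with [0,0]

def F_Y(X):
    return (X[0] + 10, X[1] + 10)

# Post-fixpoint from the (elided) widening phase:
X, Y = (0, INF), (10, INF)

for step in range(1, 4):
    X2 = narrow(X, F_X(X))
    Y2 = narrow(Y, F_Y(X))    # uses the OLD X: simultaneous update
    X, Y = X2, Y2
    print(step, X, Y)
# step 1: X heals to (0, 5) but Y is still (10, inf)
# step 2: Y heals to (10, 15) using the narrowed X
```

So under this evaluation order, one narrowing step stops at Y = [10, +∞] and the second step is what recovers [10, 15]. It's a toy, but it shows why "one step suffices" needs structural conditions on the equation system, not just on the domain.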

Also: GaloisCat admitting I'm right is going in my screenshot folder. Evidence for the next heated static analysis debate. 🐾

😸
β˜…β˜…β˜…β˜…β˜†
Galois Girlie
Posts: 1,739 Joined: 2.5 years ago Online
⟨α, γ⟩ :: Galois
my whole personality
Re: dual Galois connections and the real story of narrowing

Okay I want to connect this to Galois connection theory because I think the structure here is gorgeous and not talked about enough.

Standard abstract interpretation is built on a Galois connection ⟨α, γ⟩ : β„˜(C) ⇆ A where Ξ± is the abstraction function and Ξ³ is concretization. The key property:

βˆ€S ∈ β„˜(C), βˆ€a ∈ A: Ξ±(S) βŠ‘ a ⟺ S βŠ† Ξ³(a)

Widening is an operator that speeds up the ascending iteration toward a fixpoint. It sacrifices precision for termination. Now, here's what I want to point out: narrowing is related to the dual Galois connection.

If you flip the ordering β€” work in the opposite lattice β€” the roles of βŠ” and βŠ“ swap, the ascending chain condition becomes a descending chain condition, and the "fixpoint" you're looking for flips from least to greatest. Narrowing is essentially performing a descending Kleene iteration in this dual structure. The operator β–³ plays a role in the dual lattice that mirrors what βˆ‡ plays in the original.

This isn't just aesthetics. It has a practical consequence: you can sometimes derive a valid narrowing operator mechanically from a widening operator by "dualizing" it β€” reversing the direction of bounds-checking and applying it to the opposite chain condition. For interval widening:

[a,b] βˆ‡ [c,d] = [(c < a ? βˆ’βˆž : a), (d > b ? +∞ : b)]

dual/narrowing:
[a,b] β–³ [c,d] = [(a = βˆ’βˆž ? c : a), (b = +∞ ? d : b)]

See the symmetry? Widening "gives up" on a bound when the new value exceeds it, sending it to infinity. Narrowing "heals" a bound when it's currently infinite, retrieving the finite value. They're each other's conceptual duals β€” both are acceleration techniques, one ascending and one descending.
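The symmetry is tight enough to render executably side by side. A small Python sketch of the pair (intervals as `(lo, hi)` tuples with `math.inf`; function names are mine):

```python
import math
INF = math.inf

def widen(x, y):
    """[a,b] βˆ‡ [c,d]: give up on a bound the moment the new value crosses it."""
    (a, b), (c, d) = x, y
    return (-INF if c < a else a, INF if d > b else b)

def narrow(x, y):
    """[a,b] β–³ [c,d]: heal a bound only if it is currently infinite."""
    (a, b), (c, d) = x, y
    return (c if a == -INF else a, d if b == INF else b)

# Widening overshoots to +∞ the moment the upper bound grows...
assert widen((0, 0), (0, 1)) == (0, INF)
# ...and narrowing retrieves a finite bound in the dual direction.
assert narrow((0, INF), (0, 5)) == (0, 5)
# A finite bound, once established, is frozen by β–³:
assert narrow((0, 5), (0, 3)) == (0, 5)
```

Each condition in one operator is the order-flipped version of the corresponding condition in the other, which is exactly what the dual-lattice reading predicts.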

The question of whether every widening has a meaningful dual narrowing is, to my knowledge, open for complex domains β€” and this is precisely the kind of question that a richer theory of narrowing would answer. We have nice characterizations of when a widening operator is "good" (in terms of the induced abstract domain lattice), and I'd love to see analogous results for the dual side.

References anyone has on the dual Galois connection framing of narrowing: please drop them below. I know Cousot et al. gesture at this but I'm not sure anyone has fully cashed it out.

🐾
β˜…β˜…β˜…β˜†β˜†
Functional Furball
Posts: 612 Joined: 1.5 years ago Online
return >>= return
≑ return (law 1)
Re: I wrote a narrowing monad (you were warned)

hi yes hello I read "narrowing is dual to widening via the adjoint relationship" and immediately had to go write something. sorry in advance.

I've been thinking about how to capture the descending iteration with narrowing as a monadic computation. The idea: narrowing iterations are a sequence of refinement steps, each one staying below the current approximation. That has a natural monadic structure where bind sequences these refinement steps and the monad laws enforce the sandwich condition.

Here's a sketch in Haskell-ish pseudocode:

-- Abstract domain with narrowing
class Lattice a where
  bot    :: a
  leq    :: a -> a -> Bool
  narrow :: a -> a -> a  -- x β–³ y, requires y βŠ‘ x

-- The Narrowing monad: wraps a descending approximation
newtype Narrowing d a = Narrowing
  { runNarrowing :: d -> (a, d) }

instance Functor (Narrowing d) where
  fmap f (Narrowing g) = Narrowing (Ξ»d -> let (a, d') = g d in (f a, d'))

instance Applicative (Narrowing d) where
  pure a = Narrowing (Ξ»d -> (a, d))  -- no refinement
  ...

instance Monad (Narrowing d) where
  return = pure
  (Narrowing g) >>= f = Narrowing $ Ξ»dβ‚€ ->
    let (a,  d₁) = g dβ‚€
        (Narrowing h) = f a
        (b,  dβ‚‚) = h d₁
    in  (b, d₁ `narrow` dβ‚‚)  -- sandwich enforced by bind!

The key line is d₁ `narrow` dβ‚‚ in the bind. When we sequence two narrowing computations, the bound on the domain component is the narrowing of the intermediate state with the refined state β€” which by the sandwich condition lands between dβ‚‚ and d₁. Chaining bind gives you the descending chain, and termination of the chain falls out of the narrowing operator's second condition.

The monad laws correspond directly to the narrowing axioms:

  • Left identity (return a >>= f ≑ f a): applying no refinement before a step = just the step. This is x β–³ x = x (narrowing is idempotent when both args are equal).
  • Right identity (m >>= return ≑ m): a refinement step followed by no further step = the step alone. Sandwich condition ensures x β–³ x βŠ‘ x holds.
  • Associativity: the descent sequence doesn't matter how you bracket it β€” the accumulated narrowing is associative. This corresponds to the chain condition.

I'm not claiming this is earth-shattering but I think it's a nice way to reason about narrowing iteration compositionally β€” and it suggests you could build verified descending analyses in a proof assistant using this monad as the effect type. Related to the verification thread from last week.

If anyone sees a problem with the monad laws as I've stated them please yell at me, I wrote this at 2am. 🐾

🐈
β˜…β˜…β˜…β˜…β˜…
Domain Theorist
Posts: 2,241 Joined: 4 years ago Offline
every lattice is a preorder;
not every preorder is a lattice.
Re: I wrote a narrowing monad (you were warned)
MeownadTransformer wrote:
The key line is d₁ `narrow` dβ‚‚ in the bind. When we sequence two narrowing computations, the bound on the domain component is the narrowing of the intermediate state...

This is actually really nice. I want to flag one potential issue: the monad laws as you've written them rely on x β–³ x = x (idempotence of narrowing when both arguments are equal), but the standard definition of narrowing only requires y βŠ‘ (x β–³ y) βŠ‘ x β€” it doesn't require x β–³ x = x. You'd need to add idempotence as an extra axiom, or slightly restate your monad laws to only require ≑ up to the lattice order rather than strict equality.

For the interval domain your specific narrowing is idempotent (applying it to equal intervals is the identity), so it works there. But a general narrowing-monad framework would need to account for non-idempotent narrowing operators. There are such operators in the literature β€” see the polyhedra narrowing thread.

Minor point, still love the construction. The associativity-as-chain-condition correspondence is a genuinely good observation. 🐾

🐱
β˜…β˜…β˜…β˜…β˜…
α ⊣ γ Enthusiast
Posts: 4,102 Joined: 6 years ago Online
Ξ±(Ξ³(x)) βŠ‘ x
Ξ³(Ξ±(x)) βŠ‡ x
this is fine.
Re: narrowing + AstrΓ©e + practical implications

One thing nobody's mentioned yet: the practical importance of widening and narrowing was dramatically demonstrated by the AstrΓ©e analyzer, which was used to verify the absence of runtime errors in flight control software for the Airbus A380. AstrΓ©e used interval-based analysis with custom widening and narrowing operators and achieved extremely high precision on safety-critical numerical code.

The lesson from AstrΓ©e isn't just "widening works." It's that the widening-narrowing pair, tuned carefully together, is what makes infinite-domain abstract interpretation both terminating and usefully precise on real code. You can't separate them. They're designed as a duo.

This is another reason I'm revising my view. I had been treating narrowing as the lesser partner in this duo. But "the lesser partner in a duo that proved Airbus flight software is safe" is still... pretty important.

Also: AbstractKitty's point about the dual Galois connection framing deserves a paper. If nobody else writes it in the next year, I might. Someone hold me to that.

😺
β˜…β˜…β˜…β˜…β˜…
Kleene Completionist
Posts: 3,887 Joined: 5 years ago Online
lfp(F) exists and so do I
β€” barely
Re: survey of narrowing in non-numerical domains

Adding one more dimension: narrowing in non-numerical domains.

String analysis, tree automata, abstract heap domains β€” all of these are areas where the widening literature is fairly mature and the narrowing literature is thin to nonexistent. For string domains based on automata, for example, constructing a widening that terminates while preserving language-theoretic structure is already hard. Constructing a dual narrowing that recovers precision from an over-approximated automaton is, as far as I know, almost completely open.

Pointer analysis domains are even worse. The heap is a graph, your abstract domain is some approximation of reachable heap shapes, and doing a descending iteration from a post-fixpoint to recover precision is... I don't even know where to start. This is my main research direction right now and the narrowing literature gives me almost nothing to build on. I have to borrow from order theory and bisimulation theory to even formulate the question cleanly.

So: OP is right, and the underappreciation of narrowing has direct consequences for how far abstract interpretation can be pushed into new domains. The theory gap is an actual blocker. More on heap narrowing in a future thread, maybe.

🐾
β˜…β˜…β˜…β˜†β˜†
Functional Furball
Posts: 612 Joined: 1.5 years ago Online
return >>= return
≑ return (law 1)
Re: good catch on idempotence
LatticeTabby wrote:
For the interval domain your specific narrowing is idempotent (...). But a general narrowing-monad framework would need to account for non-idempotent narrowing operators.

Yes, good catch, I should have stated this assumption explicitly. The fix is to replace the return a >>= f ≑ f a law with a weaker ordering law: the domain component of the result is at most the domain component of just running f directly. That gives you a "lax monad" (or monad in a 2-category sense) rather than a strict monad, and the laws hold up to βŠ‘ rather than =.

Actually, that might be a better formulation anyway. The whole point of the descending iteration is that we're moving downward in the lattice, so having laws stated up to lattice order is more natural than requiring equality. There's a whole lax/oplax monad literature that might be exactly the right tool here.

Okay now I'm going to go actually think about this properly. Thanks for the correction. The 2am disclaimer stands. 🐾

😸
β˜…β˜…β˜…β˜…β˜†
Galois Girlie
Posts: 1,739 Joined: 2.5 years ago Online
⟨α, γ⟩ :: Galois
my whole personality
Re: open questions and reading list

Let me try to consolidate the open questions raised in this thread, because I think this has been one of the best technical discussions we've had here in months and I don't want it to get lost:

Open Questions from this Thread:

  • Under what conditions on F and the domain does one narrowing step suffice to reach the greatest fixpoint below the widened post-fixpoint?
  • Is there a systematic way to derive a narrowing operator from a widening operator via duality, for complex domains like octagons and polyhedra?
  • Can MeownadTransformer's narrowing monad (in lax form) be formalized enough to use as an effect type in verified abstract interpreters?
  • What does narrowing look like for non-numerical domains β€” heap shapes, string automata, tree automata?
  • Is there a clean characterization of the dual Galois connection perspective on narrowing operators?

Suggested reading:

  • Cousot & Cousot (1977) β€” the original POPL paper where both widening and narrowing are introduced (and where narrowing gets notably less space)
  • Cousot & Cousot (1992) β€” Comparing the Galois Connection and Widening/Narrowing Approaches β€” relevant to AbstractKitty's duality framing
  • Amato et al. (2015) β€” Narrowing Operators on Template Abstract Domains β€” probably the most thorough modern treatment
  • The AstrΓ©e papers (Blanchet et al.) β€” for practical context on why this matters

This thread should be pinned. Mods please. 🐾

Β·Β·Β· 56 more replies on pages 2–4 Β·Β·Β· Continue reading β†’
68 posts total

πŸ“Œ Related Threads
Widening Operators: Why Everyone Talks About Them (Static Analysis) FixpointFeline • 94 replies
Interval Domain Deep Dive: From Basics to Octagons (Abstract Domains) LatticeTabby • 41 replies
Narrowing for Polyhedral Domains: Is It Even Possible? (Static Analysis) GaloisCat • 22 replies
Galois Connections for Beginners: A Gentle Introduction (Abstract Domains) AbstractKitty • 58 replies
Lax Monads and Their Applications in Static Analysis (Type Theory) MeownadTransformer • 17 replies
The AstrΓ©e Analyzer: Lessons from Industrial Abstract Interpretation (Tools) GaloisCat • 36 replies
Heap Shape Narrowing: An Open Problem (Static Analysis) FixpointFeline • 8 replies
Verified Abstract Interpreters in Proof Assistants (Type Theory) MeownadTransformer • 31 replies