I want to step back from the category theory cage match for a moment and lay out the concrete picture, because I think the GaloisCat/FixpointFeline/AbstractKitty discussion is talking past each other on some important technical distinctions. Let me do a proper breakdown of threshold widening vs. delayed widening, since both have been mentioned loosely.
== Baseline: Standard Widening ==
In the interval domain, the standard widening operator ∇ on [a,b] ∇ [c,d] extrapolates unstable bounds to ±∞:
-- Bounds extended with -inf/+inf; deriving Ord gives NegInf < Fin _ < PosInf
data Bound = NegInf | Fin Int | PosInf deriving (Eq, Ord, Show)
type Interval = (Bound, Bound)

widen :: Interval -> Interval -> Interval
widen (a, b) (c, d) =
  ( if c < a then NegInf else a   -- lower bound still falling: jump to -inf
  , if d > b then PosInf else b ) -- upper bound still rising: jump to +inf
This stabilizes each variable after at most two widening steps (each bound can be widened only once, straight to ±∞) but is extremely imprecise. A loop i = 0; while i < 100: i++ gives you i ∈ [0, +∞) instead of i ∈ [0, 100]. That's a lot of precision loss.
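To see the blow-up concretely, here's a rough trace using the widen above. step is a hypothetical abstract transformer I'm hard-coding for this one loop (F(X) = [0,0] ⊔ ((X ⊓ [-∞,99]) + 1)), so treat it as an illustration, not output from a real analyzer.

-- Abstract transformer for: i = 0; while i < 100: i++
step :: Interval -> Interval
step (_, hi) = (Fin 0, min (Fin 100) (bump hi))
  where bump (Fin x) = Fin (x + 1)
        bump b       = b

-- Widening on every iteration:
--   x0 = (Fin 0, Fin 0)
--   x1 = x0 `widen` step x0   -- step x0 = (Fin 0, Fin 1): upper bound unstable
--      = (Fin 0, PosInf)
--   x2 = x1 `widen` step x1   -- step x1 = (Fin 0, Fin 100): already below PosInf
--      = (Fin 0, PosInf)      -- stable: i ∈ [0, +∞)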
== Threshold Widening (Widening with Thresholds) ==
The idea: keep a finite set of threshold values T = {t₁, t₂, ..., tₙ} (e.g., constants that appear in the program text). When widening would push a bound to ±∞, instead check if a threshold value lies between the old and new bound. If yes, use the threshold instead of ±∞.
import qualified Data.Set as Set

widen_T :: Set.Set Int -> Interval -> Interval -> Interval
widen_T thresholds (a, b) (c, d) =
  let lo = if c < a then max_threshold_leq c else a
      hi = if d > b then min_threshold_geq d else b
      -- largest threshold <= the unstable bound, or -inf if there is none
      max_threshold_leq (Fin x) = maybe NegInf Fin (Set.lookupLE x thresholds)
      max_threshold_leq _       = NegInf
      -- smallest threshold >= the unstable bound, or +inf if there is none
      min_threshold_geq (Fin x) = maybe PosInf Fin (Set.lookupGE x thresholds)
      min_threshold_geq _       = PosInf
  in (lo, hi)
For our example with T = {0, 100}, we'd get i ∈ [0, 100], which is exact. The cost is that the analysis may take up to |T| more iterations per variable to converge. There's a neat result by Kim et al. (2016) that doing a binary search over the thresholds reduces the worst-case extra cost from O(|T|) to O(log |T|) per variable.
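Worked out with the hypothetical step from the previous section, the thresholded iteration goes like this (my trace for the toy example, not a quote from the paper):

loop_thresholds :: Set.Set Int
loop_thresholds = Set.fromList [0, 100]   -- constants harvested from the program text

--   x0 = (Fin 0, Fin 0)
--   x1 = widen_T loop_thresholds x0 (step x0)   -- step x0 = (Fin 0, Fin 1); the
--      = (Fin 0, Fin 100)                       -- smallest threshold >= 1 is 100
--   x2 = widen_T loop_thresholds x1 (step x1)   -- step x1 = (Fin 0, Fin 100)
--      = (Fin 0, Fin 100)                       -- stable: i ∈ [0, 100]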
== Delayed Widening ==
Different approach entirely. Instead of widening every iteration, you first do k steps of plain join (⊔), then start applying widening. The intuition: let the fixpoint iteration "warm up" before accelerating.
-- Sketch: AbsState, the abstract transformer f, the lattice join, and
-- fix_with_widening are assumed to be defined for the domain in use.
iterate_delayed :: Int -> AbsState -> AbsState
iterate_delayed k start =
  let step_once x = x `join` f x          -- one plain join step: x ⊔ F(x)
      warmup      = iterate step_once start !! k
  in fix_with_widening warmup
This often gives much better precision in practice, because the plain join iterations can "discover" the right shape of the invariant before widening blows up the bounds. The cost is k extra iterations. The termination caveat: a fixed finite k is always safe, but if the delay is unbounded (say, "keep joining until it looks stable") the iteration may never terminate in a domain with infinite ascending chains. A fairness condition is required: widening has to be applied infinitely often along any infinite iteration sequence; you can't delay forever.
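To ground the sketch, here is one possible concrete instantiation for the interval example; interval_join, widen_until_stable, and run_delayed are my names and my toy implementation, with the transformer passed explicitly rather than left ambient:

-- Plain lattice join on intervals
interval_join :: Interval -> Interval -> Interval
interval_join (a, b) (c, d) = (min a c, max b d)

-- Iterate x := x ∇ f(x) until nothing changes
widen_until_stable :: (Interval -> Interval) -> Interval -> Interval
widen_until_stable f x
  | x' == x   = x
  | otherwise = widen_until_stable f x'
  where x' = x `widen` f x

-- k join-only warmup steps, then switch to widening
run_delayed :: Int -> (Interval -> Interval) -> Interval -> Interval
run_delayed k f x0 =
  let warmup = iterate (\x -> x `interval_join` f x) x0 !! k
  in widen_until_stable f warmup

On the running example, run_delayed 100 step (Fin 0, Fin 0) stabilizes at (Fin 0, Fin 100) with no thresholds at all, because the warmup joins reach the true bound before widening ever fires, while run_delayed 3 step (Fin 0, Fin 0) still blows up to (Fin 0, PosInf). Picking k is the whole game.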
== Key Tradeoff Summary ==
- Standard widening: Fast termination, severe precision loss.
- Threshold widening: Better precision, linear (or log) extra iterations, needs good threshold set. Choosing thresholds from program constants is the standard heuristic.
- Delayed widening: Can dramatically improve precision for "nice" programs, but costs extra iterations and requires careful choice of k (plus a fairness condition if the delay isn't a fixed constant). Works best when loops have predictable structure.
- Combined (delay then threshold): Best precision, most bookkeeping. Used in tools like Astrée for safety-critical software; a minimal sketch of the combination follows this list.
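For completeness, here is a minimal sketch of the delay-then-threshold combination, glued together from the toy pieces above (my composition, not Astrée's actual iteration engine):

-- k join-only warmup steps, then widen against the threshold set t
combined :: Int -> Set.Set Int -> (Interval -> Interval) -> Interval -> Interval
combined k t f x0 = go (iterate (\x -> x `interval_join` f x) x0 !! k)
  where
    go x = let x' = widen_T t x (f x)
           in if x' == x then x else go x'

On the running example even combined 0 loop_thresholds step (Fin 0, Fin 0) already lands on (Fin 0, Fin 100); the delay mostly earns its keep when the right bound isn't sitting in the program text as a threshold.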
Now, back to the original question: is widening necessary? In all of these variants, yes β you still need some mechanism to guarantee termination. The variants are about where and how aggressively you widen, not whether you widen at all. So I come down on the "widening is necessary" side, with the asterisk that you have a lot of freedom in how you implement it.
// LatticeTabby — [0, +∞) is not a loop invariant, it is a confession of defeat