ok I was going to stay out of this but the polyhedra question is my jam. Let me do a proper breakdown of the Halbwachs standard widening vs the Bagnara et al. precise widening because this comes up every six months and nobody ever links the actual papers.
━━━ BACKGROUND: What is a convex polyhedron (abstract domain sense) ━━━
A convex polyhedron over variables x₁,...,xₙ is a set of points defined by a finite conjunction of linear inequalities: { x ∈ ℝⁿ | Ax ≤ b }. This is the H-representation (half-space representation). The V-representation (generator representation) describes the same polyhedron as the convex hull of a finite set of vertices, rays, and lines. Converting between the two is the "double description" problem, and it's the source of most of the computational pain.
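To make the two representations concrete, here's a minimal toy sketch in Python. The encoding (pairs `(a, b)` for a·x ≤ b, tuples for vertices) is my own hypothetical convention for illustration, not any library's API:

```python
from fractions import Fraction as F

# H-representation of the unit square: each constraint (a, b) encodes a . x <= b.
h_rep = [((F(1), F(0)), F(1)),   #  x <= 1
         ((F(-1), F(0)), F(0)),  # -x <= 0, i.e. x >= 0
         ((F(0), F(1)), F(1)),   #  y <= 1
         ((F(0), F(-1)), F(0))]  # -y <= 0, i.e. y >= 0

# V-representation: the same square as the convex hull of its four vertices
# (no rays or lines needed, since the square is bounded).
v_rep = [(F(0), F(0)), (F(1), F(0)), (F(0), F(1)), (F(1), F(1))]

def satisfies(point, constraint):
    """Check whether a point satisfies one linear inequality a . x <= b."""
    a, b = constraint
    return sum(ai * xi for ai, xi in zip(a, point)) <= b

# Sanity check: every vertex of the V-rep satisfies every H-constraint.
assert all(satisfies(v, c) for v in v_rep for c in h_rep)
```

The hard direction, of course, is computing one representation from the other for an arbitrary polyhedron; that's what double-description algorithms (Chernikova-style) do, and it's exponential in the worst case.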
━━━ THE HALBWACHS STANDARD WIDENING ━━━
Cousot and Halbwachs introduced the polyhedral domain and its widening operator in their landmark 1978 POPL paper on automatic discovery of linear restraints among program variables. The standard widening P ∇ Q is defined as follows: you keep exactly those constraints of P that are still satisfied by all points of Q. Constraints that Q violates get dropped.
P ∇ Q = { x | cᵢ(x) holds, for every constraint cᵢ of P such that Q ⊨ cᵢ }
This is elegant and cheap: just test each constraint of P against Q. It terminates because the set of retained constraints can only shrink or stabilize from one iteration to the next. But it's often wildly imprecise: it throws away any constraint that Q doesn't satisfy, even if Q only barely violates it.
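The definition above can be sketched in a few lines of Python, under two simplifying assumptions: the polyhedra are bounded, and Q's V-representation (its vertex list) is supplied by hand rather than computed. The helper names `entails` and `standard_widening` are mine, not PPL's:

```python
from fractions import Fraction as F

def entails(q_vertices, constraint):
    """Q |= c: a bounded polyhedron Q entails a . x <= b iff every vertex
    of Q satisfies it (Q is the convex hull of its vertices)."""
    a, b = constraint
    return all(sum(ai * xi for ai, xi in zip(a, v)) <= b for v in q_vertices)

def standard_widening(p_constraints, q_vertices):
    """Keep exactly the constraints of P that Q entails; drop the rest."""
    return [c for c in p_constraints if entails(q_vertices, c)]

# Iterates of a toy loop "x := 0; y := 1; then repeatedly x := x + 1":
# P describes the point (0, 1); Q describes the segment from (0,1) to (1,1).
P = [((F(1), F(0)), F(0)), ((F(-1), F(0)), F(0)),   # x <= 0, x >= 0
     ((F(0), F(1)), F(1)), ((F(0), F(-1)), F(-1))]  # y <= 1, y >= 1
Q_vertices = [(F(0), F(1)), (F(1), F(1))]

# The vertex (1, 1) violates x <= 0, so that constraint is dropped and the
# widening extrapolates straight to the unbounded set { x >= 0, y = 1 }.
widened = standard_widening(P, Q_vertices)
```

Note how aggressive this is: Q only moved x from 0 to at most 1, yet every upper bound on x is gone after a single widening step.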
━━━ THE BAGNARA PRECISE WIDENING ━━━
Bagnara, Hill, Ricci, and Zaffanella (SAS 2003 / Science of Computer Programming 2005) addressed this head-on. Their paper "Precise Widening Operators for Convex Polyhedra" presents a framework for defining widening operators that are never less precise than the standard Halbwachs widening. The key idea: instead of just testing the constraints of P, they use a more sophisticated procedure that considers additional candidate constraints drawn from the history of iterates.
P ∇′ Q ⊆ P ∇ Q   (the new widening is, by construction, never less precise than the standard one)
Their framework layers several heuristics on top of the standard widening, including returning the current iterate unchanged when a "limited growth" certificate already guarantees termination, combining constraints of the two iterates, and evolving the generators (points and rays) of the new iterate relative to the old one, to achieve strictly better precision while still guaranteeing termination.
The key improvement: the standard widening drops a constraint c of P even if Q only barely violates it, and that constraint might be crucial for the invariant you're trying to prove. Bagnara et al.'s widenings are more conservative about dropping constraints, giving them up only when the termination certificate demands it.
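A cheap way to see the flavor of "don't drop constraints you can still justify" is Halbwachs' widening *up to* a threshold set (a related precision refinement from the same literature, not Bagnara et al.'s full framework): after the standard widening, add back every threshold constraint that both iterates still satisfy. A sketch, reusing the toy 1-D encoding from above (`entails`, `widening_up_to` are hypothetical helper names):

```python
from fractions import Fraction as F

def entails(vertices, constraint):
    """A bounded polyhedron entails a . x <= b iff every vertex satisfies it."""
    a, b = constraint
    return all(sum(ai * xi for ai, xi in zip(a, v)) <= b for v in vertices)

def standard_widening(p_constraints, q_vertices):
    return [c for c in p_constraints if entails(q_vertices, c)]

def widening_up_to(p_constraints, p_vertices, q_vertices, thresholds):
    base = standard_widening(p_constraints, q_vertices)
    # Rescue any threshold constraint both iterates still satisfy.
    rescued = [t for t in thresholds
               if entails(p_vertices, t) and entails(q_vertices, t)
               and t not in base]
    return base + rescued

# 1-D loop iterates: P is the point x = 0, Q is the interval [0, 1], and the
# program text gives us a known bound x <= 10 to use as a threshold.
P = [((F(1),), F(0)), ((F(-1),), F(0))]   # x <= 0, x >= 0
P_vertices = [(F(0),)]
Q_vertices = [(F(0),), (F(1),)]
T = [((F(1),), F(10))]                    # threshold: x <= 10

# Standard widening alone would give just x >= 0; up-to keeps x <= 10 too.
result = widening_up_to(P, P_vertices, Q_vertices, T)
```

Thresholds must come from somewhere (constants in the program, guard conditions, user hints), which is exactly the kind of heuristic knob this whole design space is about.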
━━━ COMPLEXITY CONCERNS ━━━
As LatticeTabby correctly noted, the complexity is a real concern. The standard widening is cheap per step (one entailment check per constraint of P). Bagnara's precise widenings add more constraint manipulation and entailment checking at each step. More importantly, the underlying polyhedra operations, in particular the double-description conversion between representations, can be exponential in the number of dimensions in the worst case, and the number of widening steps before stabilization grows with the number of constraints. Some widening mechanism is intrinsic to the polyhedra domain: it admits infinite ascending chains, so plain fixpoint iteration need not terminate; what varies is the cost of the mechanism.
References:
• Cousot & Halbwachs (1978), "Automatic Discovery of Linear Restraints Among Variables of a Program", POPL.
• Halbwachs (1979), PhD thesis: the standard widening algorithm on the constraint representation.
• Bagnara, Hill, Ricci & Zaffanella (2003/2005), "Precise Widening Operators for Convex Polyhedra", SAS / Sci. Comput. Program.
• Bagnara, Hill & Zaffanella (2008), "The Parma Polyhedra Library", Sci. Comput. Program. (the PPL implementation).
See also: wiki: polyhedra widening comparison table | PPL library thread