Coinductive Program Analysis - Is Anyone Actually Doing This? 🔥 HOT ⚙ TECHNICAL

💬 44 replies 👀 1,847 views 📌 Stickied by AbstractKitty
#16

Ok so I actually got more benchmark results back from my interval analyser. I posted the basic setup last night (see page 1), but here's the Haskell I'm using for the coinductive iteration; it replaces the standard widening loop with a coiterative unfold:

Haskell: CoinductiveInterval.hs
-- Coiterative abstract fixpoint over a lazy stream of iterates
import Data.Stream (Stream (..))
import qualified Data.Stream as S

-- Interval lattice
data Interval = Bot
              | Iv Int Int   -- [lo, hi]
              | Top
  deriving (Eq, Show)

-- Join (least upper bound)
joinIv :: Interval -> Interval -> Interval
joinIv Bot x          = x
joinIv x   Bot        = x
joinIv Top _          = Top
joinIv _   Top        = Top
joinIv (Iv a b) (Iv c d) = Iv (min a c) (max b d)

-- Coinductive iteration: lazily unfold the infinite stream of iterates
-- x, f x, f (f x), ... without committing to a stopping point
coiterFix :: (a -> a) -> a -> Stream a
coiterFix f = S.unfold (\x -> (x, f x))

-- Consume the stream until two consecutive iterates are equal
takeUntilFixed :: Eq a => Stream a -> [a]
takeUntilFixed (Cons x xs@(Cons y _))
  | x == y    = [x]
  | otherwise = x : takeUntilFixed xs

-- Concrete usage: run abstract transformer f from bottom until a
-- fixpoint is reached (diverges if the iterates never stabilise)
coinductiveLFP :: Eq a => (a -> a) -> a -> a
coinductiveLFP f bot = last $ takeUntilFixed $ coiterFix f bot

The key insight is that coiterFix doesn't commit to termination at definition time: it produces an infinite stream of iterates lazily, and takeUntilFixed consumes it until two consecutive iterates agree, i.e., until a fixpoint is reached. This is the dual of proving a property by induction over the iterate count.

Now, the issue I ran into: for the programs with unbounded loops, the stream never stabilises on its own, so you still need widening. But the coinductive framing makes it clear where to plug widening in: you're essentially switching from observing the exact fixpoint to computing an over-approximation of it. That clarity is really useful.
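
To show what I mean by "plugging widening in", here's a minimal sketch extending the module above (widenIv is just the textbook interval widening, with minBound/maxBound standing in for -∞/+∞; this variant isn't what I benchmarked below): you inject the widening step into the unfold itself, so the stream of iterates is forced to stabilise.

Haskell: widening injected into the unfold (sketch)
-- Textbook interval widening: any bound that grew jumps to "infinity",
-- so each bound changes at most once and every chain stabilises
widenIv :: Interval -> Interval -> Interval
widenIv Bot y = y
widenIv x Bot = x
widenIv Top _ = Top
widenIv _ Top = Top
widenIv (Iv a b) (Iv c d) =
  Iv (if c < a then minBound else a)
     (if d > b then maxBound else b)

-- Same coiterative unfold, but each step widens against the previous
-- iterate; takeUntilFixed is now guaranteed to find a stable point
coiterWiden :: (Interval -> Interval) -> Interval -> Stream Interval
coiterWiden f = S.unfold (\x -> (x, x `widenIv` f x))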

Benchmark results from 12 simple loop programs (no recursion):

  • ✅ 10/12 converged without widening (bounded loops)
  • ⚠️ 2/12 required widening injection at stream position 4
  • ⏱️ Average iteration count: 3.1 (vs 4.8 for standard Kleene)

So the lazy coiterative approach was slightly more efficient at detecting fixpoints. Not groundbreaking but interesting. Will try recursive programs next.

-- MeownadTransformer | "The fixpoint is already there. We just have to observe it."
#17

@MeownadTransformer Those results make sense. The coinductive framing essentially makes laziness a first-class citizen in your analysis loop: you're not "deciding to stop at step N", you're observing the stream and waiting for the coalgebraic fixed point to announce itself.

This connects very directly to something the Cousots did that I think is underappreciated in this community. Their 2000 POPL paper on Temporal Abstract Interpretation is essentially coinduction + abstract interpretation unified.

📄 Reference: Cousot & Cousot, POPL 2000
The work applies abstract interpretation to temporal calculi and logics in a general syntax-, semantics-, and abstraction-independent setting, extended to the reversible μ-calculus with new reversal and abstraction modalities and a time-symmetric trace-based semantics.

The crucial point: the more classical set-based semantics is shown to be an abstract interpretation of the trace-based semantics, which leads to the understanding of model-checking and its application to data-flow analysis as incomplete temporal abstract interpretations.

Read that again: model checking is just an incomplete coinductive abstract interpretation. Standard data-flow analysis, the thing we teach in PL101, is also an incomplete coinductive abstract interpretation, just very lossy.

The full coinductive analysis would compute trace-based properties. Widening/narrowing are the mechanisms by which we make the infinite coinductive iteration terminate, at the cost of completeness. This unification is beautiful and I don't think people appreciate how deeply it reorganises our understanding of what analysis even is.

-- AbstractKitty | Moderator | "The best abstract domain is the one you don't have to implement." 🐱
#18

@AbstractKitty Yes! And it's worth being precise about which lattice-theoretic machinery underpins this. The standard Cousot '77 framework computes least fixpoints (for safety) by ascending Kleene sequences. The coinductive dual computes greatest fixpoints (for liveness, non-termination, trace properties) by descending sequences.

But here's my practical question: what are people actually using for narrowing operators in practice?

For the uninitiated: widening (▽) accelerates ascending chains to ensure termination, but it over-approximates. The theory rests on the correspondence between concrete and abstract semantics through Galois connections/insertions, with the abstract fixpoint computation made feasible by combining widening operators (for fast convergence) with narrowing operators (to improve the accuracy of the resulting analysis).

So narrowing (△) is how you recover precision after widening. The problem is that standard narrowing is not a dual of widening in any clean sense; it's more of a heuristic post-processing step.
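
While I'm at it, here is the textbook interval narrowing as a sketch (reusing the Interval type and the minBound/maxBound-as-±∞ convention from #16): it only refines bounds that widening pushed to infinity, which is exactly why its descending chain is finite.

Haskell: textbook interval narrowing (sketch)
-- x `narrowIv` y refines only the "infinite" bounds of x using y;
-- finite bounds are kept, so each bound improves at most once
narrowIv :: Interval -> Interval -> Interval
narrowIv Bot _ = Bot
narrowIv _ Bot = Bot
narrowIv Top y = y
narrowIv x Top = x
narrowIv (Iv a b) (Iv c d) =
  Iv (if a == minBound then c else a)
     (if b == maxBound then d else b)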

In a coinductive setting, can we do better? My intuition is that if you're working in a complete lattice and your analysis semantics is coinductively defined, the greatest fixpoint is the canonical narrowing target: you widen to get a post-fixpoint above the gfp, then narrow downward toward it. But I've never seen this formalised cleanly.

Has anyone implemented anything like this? I keep trying to sketch it with intervals and failing to make the narrowing termination argument work without a finite-height assumption.

-- GaloisCat | Galois connections are just adjunctions for cowards 😼
#19

@GaloisCat The narrowing termination problem is real. The standard definition doesn't require termination; that's a known footgun. For convergence we need acceleration with a widening operator, and if we widen too aggressively we then need narrowing to recover precision. But there's no guarantee narrowing terminates in infinite-height domains; you usually need extra conditions (finite image of the narrowing, finite chain condition on the downward sequence, etc).

In my interval analyser I'm currently doing the dumbest possible thing: run K iterations of narrowing and stop. Not principled but it works for the benchmarks I care about.

The coinductive approach you're sketching ("widen to get above the gfp, then narrow toward it"): I think there's something in Cortesi & Zanioli (2011) about this but I don't have the paper in front of me. The idea of using the greatest fixpoint as a target for narrowing, rather than computing toward the least fixpoint, is interesting. It would require your analysis to have a sound over-approximation of the gfp available, which is exactly what coinduction gives you.

Pseudocode: coinductive narrowing strategy
-- Phase 1: ascending widening iteration up to a post-fixpoint
wideningPhase f bot = fst . head . dropWhile (uncurry (/=)) $ zip xs (tail xs)
  where xs = iterate (\x -> x `widen` f x) bot

-- Phase 2: descending iteration from the post-fixpoint toward the gfp
-- (requires: postfix ⊒ gfp(f), f monotone)
narrowingPhase f postfix = fst . head . dropWhile (uncurry (/=)) $ zip ys (tail ys)
  where ys = iterate (\x -> x `meet` f x) postfix

The narrowing phase only terminates if the descending chain x ⊒ f(x) ⊒ f(f(x)) ⊒ ... stabilises. In a finite-height domain: guaranteed. In intervals: not guaranteed without extra tricks.

#20

Hi everyone! Long-time lurker, first time posting here 👋 I work on numerical static analysis for safety-critical embedded systems (aerospace context), and the coinduction angle in this thread has me very interested.

My day-to-day domain is polyhedral analysis, and I want to connect it to this discussion. Numerical abstract domains are an important ingredient of modern static analyzers used for verifying critical program properties (e.g., absence of buffer overflows or memory-safety violations). Among the many numerical domains introduced over the years, Polyhedra is the most expressive, but also the most expensive: it has worst-case exponential space and time complexity.

This expense is why people reach for intervals or octagons in practice. But here's the thing: in our safety verification pipeline we need the relational precision of polyhedra, because the invariants we're trying to prove involve tight linear relationships between variables (think: sensor fusion with bounded error propagation).

The widening problem is brutal for polyhedral domains. Since the domain of convex polyhedra contains infinite increasing chains, a widening operator is needed for convergence. And widening for polyhedra is particularly lossy: you throw away constraints that don't survive into the next iterate, which can be a lot.

Here's where coinduction might help: rather than computing least fixpoints bottom-up with widening, could we formulate our safety property as a greatest fixpoint directly? The property "all executions stay within this polyhedron" is naturally coinductive: it's a property of infinite-length runs (or at least runs of unbounded length).

Safe(S) ≜ νX. { x | f(x) ∈ X ∧ x ∈ S }
(Greatest fixpoint = maximal safe invariant within S)

You'd want to compute this greatest fixpoint directly rather than approximating from below. For polyhedra this would mean starting at the safety specification (the target polyhedron) and iterating the pre-image operator inward. This is essentially a backward reachability / invariant synthesis approach. Anyone in aerospace/embedded is probably familiar with this as "backward analysis" but framing it as coinductive fixpoint computation gives you the full lattice-theoretic toolkit for precision/convergence tradeoffs.
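
To make the shape of the computation concrete, here's a hedged sketch in the interval setting from #16 (polyhedra won't fit in a forum post; preIv is a hypothetical abstract pre-image operator for whatever transition system you're analysing):

Haskell: descending gfp iteration, intervals standing in for polyhedra
-- Meet (greatest lower bound) on intervals
meetIv :: Interval -> Interval -> Interval
meetIv Bot _ = Bot
meetIv _ Bot = Bot
meetIv Top y = y
meetIv x Top = x
meetIv (Iv a b) (Iv c d)
  | lo <= hi  = Iv lo hi
  | otherwise = Bot
  where lo = max a c
        hi = min b d

-- Safe(S) = νX. S ⊓ pre(X), computed by descending iteration:
-- start at the spec S and repeatedly intersect with pre-images
safeGfp :: (Interval -> Interval) -> Interval -> Interval
safeGfp preIv spec = go spec
  where go x = let x' = spec `meetIv` preIv x
               in if x' == x then x else go x'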

#21

@PurrfectPolyhedra Welcome!! 🐾 A very good first post; aerospace context is exactly the kind of application that makes this theory feel urgent rather than just beautiful.

Your reformulation of safety as a greatest fixpoint is exactly right and it's what Cousot calls the "maximal invariant" approach. The formula:

Safe(S) = νX. S ∩ f⁻¹(X)

...is the coinductive characterisation of the largest set of states from which all future states stay within S. Computing it directly via descending iteration on the pre-image operator is principled and elegant.

The interesting thing is that in this formulation you don't need widening at all: you start at S (the top of the downward iteration) and repeatedly intersect with pre-images. If S is a polyhedron and f is linear/affine, each step produces a polyhedron (intersections of polyhedra are polyhedra). This combines the benefits of polyhedral abstract interpretation with the flexibility of Property Directed Reachability-style algorithms for computing safe inductive convex polyhedral invariants.

The question is whether this descending chain terminates; for general nonlinear dynamics it won't without acceleration. But for the affine systems common in aerospace (linearised models, etc), you often get termination in a small number of steps. This is basically what the PDR/IC3 community rediscovered in the model-checking world, reconnected to the abstract interpretation world by some people at Aalborg.

-- AbstractKitty | Moderator | 🐱
#22

Coming back to bisimulation up-to from my earlier post (p1) because the polyhedral discussion is making me think about it differently.

@PurrfectPolyhedra Your greatest-fixpoint safety characterisation is exactly a coinductive predicate, and bisimulation up-to techniques were developed to make coinductive proofs of exactly this shape more tractable. Let me explain the connection.

In the bisimulation world, proving two systems bisimilar means finding a bisimulation relation R (a post-fixpoint of the bisimulation functional F: R ⊆ F(R)). The problem is that F(R) is huge: you need R to be closed under all possible F-steps. Bisimulation up-to techniques are methods for reducing the size of relations needed in bisimulation proofs.

The "up-to" trick: instead of requiring R ⊆ F(R), you require R ⊆ F(b(R)) for some sound "enhancement" function b. This lets you work with a much smaller candidate relation; b essentially says "don't close under steps you can already handle by other means".
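
To see the shape of the obligation in code, here's a toy sketch (hypothetical names; finite relations as plain lists, with F and b passed in as parameters):

Haskell: the up-to check on finite relations (sketch)
import Data.List (nub)

type Rel s = [(s, s)]

-- Subset test on finite relations
subRel :: Eq s => Rel s -> Rel s -> Bool
subRel r s = all (`elem` s) r

-- The up-to proof obligation R ⊆ F(b(R)); plain bisimulation is b = id
upToHolds :: Eq s
          => (Rel s -> Rel s)  -- F, the bisimulation functional
          -> (Rel s -> Rel s)  -- b, the enhancement
          -> Rel s -> Bool
upToHolds f b r = r `subRel` f (b r)

-- Example enhancement: close up to reflexivity over a given state set
upToRefl :: Eq s => [s] -> Rel s -> Rel s
upToRefl states r = nub (r ++ [(s, s) | s <- states])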

Key theorem (Pous, generalising Sangiorgi): the up-to theory extends to arbitrary coinductive predicates, stated in the setting of complete lattices. The starting point there is the Knaster-Tarski theorem: in every complete lattice, every monotone function has a greatest fixpoint.

Now map this back to abstract interpretation:

  • The coinductive predicate = your safety invariant (maximal safe set)
  • The "candidate relation" = your abstract polyhedron overapproximation
  • The "enhancement function b" = the abstract transformer / Galois connection
  • Soundness of b = soundness of the abstract interpreter

Under this reading, all of abstract interpretation is an up-to technique for the coinductive invariant proof. The abstract domain is how you "close up-to abstraction". Whether this view is actually fruitful for designing new analyses or just a nice picture, I'm not sure, but I find it very suggestive.

-- FixpointFeline | "ν > μ, fight me" 🦁
#23

@FixpointFeline Ok that mapping at the end made my brain do a little explosion. Let me make sure I understand the Knaster-Tarski connection you're invoking, because I've always been slightly fuzzy on it.

Knaster-Tarski says: on a complete lattice, a monotone endofunction has a complete lattice of fixed points; in particular it has both a least and a greatest one. The greatest fixed point is the join of all post-fixed points, and the least fixed point is the meet of all pre-fixed points.

So for coinduction specifically: a post-fixed point of B, i.e., a relation R ⊆ B(R), is called a bisimulation. The Knaster-Tarski fixed-point theorem characterises νB as the union of all post-fixed points of B.

This means that to prove something is in the greatest fixpoint (the coinductive predicate), it suffices to show it's in any post-fixpoint, i.e., you just have to exhibit a bisimulation containing it. You don't have to compute the whole νB!

Now I think I see why up-to techniques are so powerful: parameterised coinduction lets you perform and write coinductive proofs on the fly, without announcing the bisimulation candidate beforehand. That's convenient because, when trying to prove something by coinduction, it is in general difficult to guess the appropriate bisimulation candidate from scratch.

So for abstract interpretation: you don't have to exhibit the full abstract invariant upfront; you can build it iteratively, and the up-to enhancement (the abstract transformer) handles the parts you can't directly close over. @FixpointFeline is this what you meant?

-- LatticeTabby | working through the Jacobs coalgebra book one chapter per week
#24

@LatticeTabby Exactly right. The proof obligation in coinduction is: show your candidate set R is a post-fixpoint (R ⊆ F(R)). If it is, Knaster-Tarski guarantees R ⊆ νF. You never need to compute νF.

In abstract interpretation terms: the abstract invariant I compute (say, a polyhedron P) doesn't need to equal the true invariant; it just needs to be a sound overapproximation, i.e., a post-fixpoint of the abstract transformer: P ⊑ F♯(P). And soundness of the abstract transformer w.r.t. the Galois connection ensures that the concretisation of P overapproximates the true concrete invariant.

The "up-to" enhancement takes this further: you need R ⊆ F(b(R)) where b is sound. For abstract interpretation, b is essentially "apply the abstraction α, then the concretisation γ": a projection onto the abstract domain. This is exactly the Galois-connection round-trip, and it's sound (extensive: x ⊑ γ(α(x))) by definition of a Galois connection.
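
A toy version of that round-trip, to make "projection onto the abstract domain" concrete (a sketch only, with finite sets of Ints abstracted into the Interval type from #16):

Haskell: the γ∘α round-trip as a closure (toy sketch)
alphaIv :: [Int] -> Interval   -- abstraction of a finite set
alphaIv [] = Bot
alphaIv xs = Iv (minimum xs) (maximum xs)

gammaIv :: Interval -> [Int]   -- concretisation (finite case only)
gammaIv Bot      = []
gammaIv (Iv a b) = [a .. b]
gammaIv Top      = error "Top is not finitely enumerable"

-- b = γ∘α is extensive (xs is contained in roundTrip xs), monotone,
-- and idempotent: exactly the shape a sound up-to enhancement needs
roundTrip :: [Int] -> [Int]
roundTrip = gammaIv . alphaIv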

Where it gets interesting is when you want completeness too, not just soundness. Complete abstract interpretations are those where the abstraction loses no information relevant to the property. As a proof of concept, importing the notion of companion (already known from coinduction up-to) into abstract interpretation gives a way to simplify the check of completeness for a generic abstraction.

The "companion" of F is the largest sound up-to enhancement: it's the canonical way to maximally enhance your coinductive proof without losing soundness. Translating this to abstract interpretation: the companion is the most expressive abstract domain that is still fully complete. Very cool theoretical object. Probably not computable in general.

Key relationships (informal)
-- Inductive analysis:    lfp(F♯) ⊒ α(lfp(F))    [soundness from below]
-- Coinductive analysis:  gfp(F♯) ⊑ α(gfp(F))    [soundness from above]
-- Complete domain:       F♯ ∘ α = α ∘ F          [no information loss]
-- Up-to enhancement b:   R ⊆ F(b(R))  →  R ⊆ νF  [sound if b compatible]
-- Companion t(F):        largest b such that b is compatible with F
#25

This is making a lot of sense. One thing I want to add from the safety-engineering perspective: in practice, what we care about most is not "compute the exact maximal safe invariant" but "verify that this particular certificate is a valid invariant".

In other words: I already have a candidate polyhedron P (given by a safety engineer, or found by a prior analysis), and I want to confirm that P is indeed an invariant. This is a verification task, not a synthesis task.

And here coinduction is perfect: I just need to check that P is a post-fixpoint of the transformer (i.e., P ⊑ F(P) in the abstract, or equivalently that γ(P) is closed under f). This is a single linear-programming check for affine systems and polyhedral domains, way cheaper than computing the fixpoint from scratch.
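
In the toy interval setting of this thread (not our polyhedral pipeline), the whole verification step collapses to a single inclusion test; a sketch, with preIv again standing in for a hypothetical abstract pre-image:

Haskell: verifying a candidate invariant certificate (sketch)
-- Interval inclusion: x ⊑ y
leqIv :: Interval -> Interval -> Bool
leqIv Bot _ = True
leqIv _ Top = True
leqIv Top _ = False
leqIv _ Bot = False
leqIv (Iv a b) (Iv c d) = c <= a && b <= d

-- Coinductive check: candidate P is a valid invariant certificate
-- iff it is a post-fixpoint of the pre-image transformer: P ⊑ pre(P)
isInvariant :: (Interval -> Interval) -> Interval -> Bool
isInvariant preIv p = p `leqIv` preIv p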

The payoff can be dramatic: "we analyze large benchmarks from the popular software verification competition, including Linux device drivers with over 50K lines of code. Experimental results demonstrate massive gains in both space and time: speedups of two to five orders of magnitude compared to state-of-the-art Polyhedra implementations."

(That quote is from the decomposed polyhedra paper, not our work, but similar gains are possible when you know what invariant to check rather than computing it.) In our actual pipeline we've seen 10-50x speedups by switching from "compute invariant" to "verify candidate invariant coinductively". The synthesis stage can use cheaper domains; the verification stage uses full polyhedra but only runs once.

#26

@PurrfectPolyhedra That "verify candidate" vs "synthesise from scratch" split is crucial and I think it maps to an important theoretical distinction that hasn't been made explicit in this thread yet: the difference between coinductive proof and coinductive computation.

Coinductive proof: exhibit a post-fixpoint. Once you have a candidate, the check is cheap (as you showed).

Coinductive computation: derive the greatest fixpoint. This requires iteration and potentially does not terminate, hence widening etc.

Standard program analysis is coinductive computation (deriving loop invariants). Your approach is coinductive proof (verifying a given invariant certificate). Most industrial verification tools (think: Astrée, CodeSonar) are actually doing coinductive computation in the synthesis phase and coinductive proof in the discharge phase, but they don't frame it that way.

The reason the framing matters: if you know you're doing coinductive proof, you can use up-to techniques (a classical approach in coinductive settings for obtaining easier or feasible proofs), which can be interpreted as abstractions that fit naturally into the fixpoint-system framework and extend to systems of equations.

So the abstract domain isn't just a computational hack for termination: it's a sound up-to technique for the coinductive proof. The abstract domain is justified by the coinductive proof theory, not just by engineering pragmatism. I find that philosophically satisfying.

-- GaloisCat | Galois connections are just adjunctions for cowards 😼
#27

Wait, let me make sure I have the Knaster-Tarski story for coinduction straight. I've been reading Sangiorgi's history paper and I want to sanity-check my understanding.

Bisimilarity and the bisimulation proof method represent examples of a coinductive definition and the coinduction proof method, and as such are intimately related to fixed points, in particular greatest fixed points. We therefore also discuss the introduction of fixed points, and of coinduction.

So the chain is: Knaster-Tarski → greatest fixpoints → coinduction → bisimulation → bisimulation up-to → abstract interpretation as up-to technique. That's the lineage?

And on the "what does coinduction actually mean" question, I think the cleanest statement is: coinduction is the proof principle dual to induction, justified by the fact that for a monotone F on a complete lattice, νF = join of all post-fixpoints (dually: μF = meet of all pre-fixpoints). So you prove "x ∈ νF" by exhibiting any post-fixpoint containing x, without computing νF itself. Induction proves "x ∉ μF" by... actually wait, does the dual even make sense for safety? I keep confusing myself on the induction/coinduction duality for safety vs liveness.

Can someone diagram this? I'll attempt it:

Induction: x ∈ μF ↔ ∃ pre-fixpoint P: P ⊆ F(P) and x ∈ P
Coinduction: x ∈ νF ↔ ∃ post-fixpoint Q: F(Q) ⊆ Q and x ∈ Q

Safety invariant = νF (coinductive: closed under transitions)
Reachable states = μF (inductive: built up from init states)

Is that right? So proving safety is coinduction and computing reachability is induction?

#28

@LatticeTabby Your diagram is mostly right but the direction conventions on post-fixpoints are backwards. Let me fix it:

Induction (lfp): x ∈ μF ↔ ∀ pre-fixpoint P: F(P) ⊆ P ⟹ x ∈ P
Coinduction (gfp): x ∈ νF ↔ ∃ post-fixpoint Q: Q ⊆ F(Q) and x ∈ Q

Note the asymmetry: induction quantifies over ALL pre-fixpoints (it's a universal property: μF is the intersection of all pre-fixpoints). Coinduction requires SOME post-fixpoint (existential: νF is the union of all post-fixpoints).

This is why coinductive proofs are often easier than inductive ones for safety properties: you just have to find one post-fixpoint containing your initial state. The abstract domain gives you a (possibly lossy) post-fixpoint to work with.

Your safety/reachability assignment is correct:

  • 🔵 Reachable states = μF (inductive, built forward from init)
  • 🟢 Safe invariant = νF (coinductive, closed backward under transitions)
  • 🔴 Safety property holds ⟺ initial states ⊆ νF ⟺ some coinductive invariant contains init and excludes the bad states

The Cousot approach unifies both by having a single framework that can express both μ and ν, connected by the duality of least/greatest fixpoints on the same complete lattice. That's the power of abstract interpretation as a foundation: systems of fixpoint equations over complete lattices, mixing least and greatest fixpoint equations, can express many verification tasks, such as model checking for various specification logics or checking coinductive behavioural equivalences.

-- AbstractKitty | 🐱 | "The lattice was complete all along"
#29

Coming back to the original question in the thread title: "Is Anyone Actually Doing This?"

I think the answer is: yes, but nobody calls it that. The coinductive framing is the theory; the practice is:

  • 💡 Backward analysis / pre-image computation (used in aerospace, embedded verification)
  • 💡 Greatest-fixpoint model checking (CTL [AG], LTL safety fragments)
  • 💡 Type systems for coinductive types (co-lists, streams in Haskell/Coq)
  • 💡 Invariant verification (given P, check P ⊆ F♯(P))
  • 💡 CEGAR refinement loops viewed coinductively

The problem is that the word "coinductive" is used almost exclusively in the PL theory literature (semantics, type theory) and not much in the static analysis/verification literature, even when they're doing the same thing. @PurrfectPolyhedra's safety invariant example is coinduction. The Cousot temporal AI paper is coinduction. The up-to bisimulation techniques @FixpointFeline describes are coinduction.

I wonder if coining (ha) a shared vocabulary would actually help cross-pollination. Like, if the static analysis community explicitly labelled "greatest fixpoint inductive invariant" computations as coinduction, would the bisimulation people have more to contribute and vice versa?

State of the art (my rough mental map)
-- Well-developed:
  Coinductive type systems     (Haskell, Coq coinductives, Agda copatterns)
  Coinductive semantics        (Cousot, Leroy big-step, Nakata/Uustalu)
  Bisimulation up-to           (Sangiorgi/Pous complete lattice theory)

-- Emerging:
  Coinductive abstract domains (this thread!)
  Coinductive safety analysis  (backward AI + polyhedral domains)
  Coinductive narrowing        (theoretical, mostly open)

-- Basically unexplored:
  Automated coinductive invariant synthesis (no good tools yet)
  Coinductive inter-proc analysis           (summary coinduction?)
#30

@MeownadTransformer That map is excellent. Let me add one more thing to "basically unexplored" that I've been thinking about: coinductive program equivalence for analysis purposes.

Right now, when we analyse two programs for equivalence (e.g., in verified compilation, or differential privacy), we typically reduce it to proving both programs satisfy the same invariants. But bisimulation gives us a direct relational proof method that doesn't go through individual invariants at all.

Bisimulation is an instance of coinduction. Both bisimulation and coinduction are today widely used, in many areas of Computer Science, as well as outside Computer Science. But in program analysis tooling? We barely use it. We have bisimulation-based compiler verification (CompCert) but almost no bisimulation-based static analyzers.

The hypothesis I want to test: if you run two programs through an abstract interpreter that tracks a relational abstract domain (pairs of states rather than individual states), you're computing an abstract bisimulation. The abstract domain is an "up to" enhancement on the bisimulation proof. If the abstract bisimulation closes, the programs are equivalent up-to the abstraction.

This would give you a static analyzer for program equivalence that's both sound (abstraction is a safe up-to) and potentially more precise than going through invariants for each program separately, because you can capture correlations between the two runs.
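
Here's roughly the shape I have in mind, strictly as a sketch (hypothetical names throughout; Interval and leqIv are borrowed from earlier posts, and fRel stands for the abstract version of the bisimulation functional, i.e. a pre-image on pairs of states):

Haskell: abstract bisimulation closure check (speculative sketch)
-- A relational abstract state: one interval per program, plus an
-- interval bounding the difference between the two runs
data RelState = RelState Interval Interval Interval
  deriving (Eq, Show)

-- Componentwise inclusion on relational states
leqRel :: RelState -> RelState -> Bool
leqRel (RelState a b c) (RelState a' b' c') =
  a `leqIv` a' && b `leqIv` b' && c `leqIv` c'

-- Candidate R is an abstract bisimulation (up to the abstraction) if
-- it is a post-fixpoint of the relational transformer: R ⊑ fRel(R)
closes :: (RelState -> RelState) -> RelState -> Bool
closes fRel r = r `leqRel` fRel r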

Is this original? I feel like someone must have thought of this but I can't find a paper that does it in this framing. @AbstractKitty do you know of prior work?

-- FixpointFeline | "ν > μ, fight me" 🦁 | now also: "relational domains are bisimulations in disguise"
Showing posts 16-30 of 44

โœ๏ธ Post a Reply