I have spent the last six months trying to incorporate dependent types into a real production codebase — not a toy, not a "verify this sorting algorithm" tutorial, an actual system with messy I/O, legacy data, and stakeholders who do not care about the Curry-Howard correspondence.
Here is my thesis, plainly stated:
Dependent types are theoretically beautiful and practically ruinous for most software engineering contexts.
My arguments:
1. The proof burden is enormous. You spend more time convincing the type-checker you're right than you do writing code that's actually useful. Every time you touch a function that tracks a length, a bound, or a state machine step, you're suddenly writing 40 lines of propositional plumbing. Proofs about list lengths shouldn't need a PhD to read.
2. Most useful properties don't need full dependent types. You want to know that your config file is valid? A good sum type does that. You want to know your state machine only transitions correctly? A phantom type + smart constructor pattern in vanilla Haskell or Rust covers 90% of real use cases. You don't need head : ∀ {n} → Vec A (suc n) → A to check "this list is non-empty."
3. A simpler type system with great tooling beats a theoretically perfect system nobody uses. Haskell's type system is not as expressive as Agda's. It doesn't matter — Haskell ships software. Agda ships papers.
4. Adoption cost is brutal. Onboarding a new engineer to a dependently-typed codebase takes months, not days. You are paying a massive human capital cost for guarantees that, in practice, you could get cheaper via property-based testing (QuickCheck, Hypothesis) plus code review.
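To make point 2 concrete, here is a minimal sketch of both patterns in Rust. NonEmpty and Door are illustrative names, not from any real codebase; the point is that emptiness is checked once at the boundary, and invalid state-machine transitions simply fail to compile.

```rust
use std::marker::PhantomData;

// A non-empty list enforced by a smart constructor: the only way to
// build one checks emptiness once, so every later use is total.
pub struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmpty<T> {
    pub fn from_vec(mut v: Vec<T>) -> Option<NonEmpty<T>> {
        if v.is_empty() {
            None
        } else {
            let head = v.remove(0);
            Some(NonEmpty { head, tail: v })
        }
    }

    // Total: cannot panic, because emptiness was ruled out at construction.
    pub fn first(&self) -> &T {
        &self.head
    }

    pub fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

// A two-state door tracked with a phantom type parameter:
// invalid transitions are compile errors, not runtime checks.
pub struct Closed;
pub struct Open;

pub struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Closed> {
    pub fn new() -> Door<Closed> {
        Door { _state: PhantomData }
    }
    pub fn open(self) -> Door<Open> {
        Door { _state: PhantomData }
    }
}

impl Door<Open> {
    pub fn close(self) -> Door<Closed> {
        Door { _state: PhantomData }
    }
}

fn main() {
    assert!(NonEmpty::<i32>::from_vec(Vec::new()).is_none());
    let ne = NonEmpty::from_vec(vec![1, 2, 3]).unwrap();
    assert_eq!(*ne.first(), 1);
    assert_eq!(ne.len(), 3);

    let _door: Door<Closed> = Door::new().open().close();
    // Door::new().close(); // compile error: Door<Closed> has no `close`
}
```

Note what's absent: no index arithmetic, no lemmas, no proof terms. The invalid states are unrepresentable, and the type-checker you already have does all the work.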
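And point 4's alternative is cheap to demonstrate. Below is a hand-rolled property check in the spirit of QuickCheck/Hypothesis; it uses a tiny deterministic LCG so the sketch needs no external crates, though in real code you'd reach for the quickcheck or proptest crates instead. The property and names here are illustrative.

```rust
// Tiny deterministic pseudo-random generator so this sketch is
// dependency-free; real property-based testing uses a proper library.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        // Multiplier/increment constants from Knuth's MMIX LCG.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }

    // A random vector of up to `max_len` small signed integers.
    fn vec(&mut self, max_len: usize) -> Vec<i64> {
        let len = (self.next() as usize) % (max_len + 1);
        (0..len).map(|_| (self.next() % 1000) as i64 - 500).collect()
    }
}

// The property under test: sorting is idempotent and preserves length.
fn property(input: &[i64]) -> bool {
    let mut once = input.to_vec();
    once.sort();
    let mut twice = once.clone();
    twice.sort();
    once == twice && once.len() == input.len()
}

fn main() {
    let mut rng = Lcg(42);
    for case in 0..1000 {
        let input = rng.vec(50);
        assert!(property(&input), "failed on case {case}: {input:?}");
    }
}
```

A thousand generated cases, a one-line property, zero proof terms. It's not a proof, but it catches the bugs a proof would catch at a fraction of the cost, and any engineer can read it on day one.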
I am not saying formal methods are useless. I am not saying type theory is a waste of time. I am saying: the proof obligation imposed by full dependent types is disproportionate to the practical safety gains in 95% of software engineering contexts. Fight me. ฅ^•ﻌ•^ฅ
--
 /\_/\
( o.o )  "A type that no one uses proves nothing."
 > ^ <   — PracticalPurr, 2025