Okay, I'll say it. Dependent types are not a silver bullet and the cult around them is getting exhausting.
I've been writing Lean 4 on and off for about a year now for a verified network protocol stack at $dayjob. I think dependent types are genuinely remarkable for certain problems — statically sized vectors, protocol state machines, resource tracking. But every time I go to a conference or hop on here, someone is evangelising full dependent types as the solution to every software correctness problem, and it just... isn't true.
Here's my actual take: the proof burden for realistic software is enormous, and it scales badly. Let me give a concrete example. In Lean 4, something as simple as proving that a map over a filtered list has the right length requires you to write or invoke lemmas about List.length_filter, List.length_map, and their interactions. That's before you even get to your domain logic.
    -- Something that looks "simple" in Lean 4
    theorem mapFilter_length (p : α → Bool) (f : α → β) (l : List α) :
        (l.filter p |>.map f).length = (l.filter p).length := by
      simp [List.length_map]  -- okay, this one's easy
    -- but now prove the bound you actually care about...
    -- and hope simp closes it. It often doesn't.
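To make that last comment concrete, here's roughly what the next step looks like. This is a sketch, not code from my actual project, and the exact lemma name (List.length_filter_le here) can differ between Lean core and Mathlib versions:

    -- The bound you usually need next: the output is no longer than the input.
    -- Plain simp typically doesn't close this; you have to know the right lemma exists.
    theorem mapFilter_length_le (p : α → Bool) (f : α → β) (l : List α) :
        (l.filter p |>.map f).length ≤ l.length := by
      simp only [List.length_map]   -- rewrite away the map
      apply List.length_filter_le   -- library lemma: (l.filter p).length ≤ l.length

Two tactic calls for a trivial fact, and it only works because someone already proved the lemma you need and you know what it's called. Now scale that to a parser with real invariants.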
For most industry code, what you actually want is refinement types — something like Liquid Haskell where an SMT solver handles the arithmetic automatically, or F*'s hybrid approach. You get 80% of the safety guarantees for maybe 5% of the proof effort.
The Curry–Howard correspondence is beautiful. I'm not denying that. But "proofs are programs" doesn't mean "everyone should write proofs for every program." There's a reason most of the CompCert and seL4 work involves entire teams of verification experts working for years on relatively small codebases.
Fight me in the replies. I have a lot of time on my hands and several strongly-held opinions.