Okay, bumping this with a proper summary post since we're on page 43 and new folks keep asking the same questions. Let me do a quick recap of where the discussion has landed.
What are session types?
Session types are, at the highest level, a way to describe communication protocols as types: they extend the familiar notion of a data type from plain values to communicating systems.
So instead of just saying "this channel carries ints", you say "this channel first sends an int, then receives a bool, then closes." The type enforces the protocol.
Why do we need linearity?
For session typing to make sense in a programming language, we need linearity: each session endpoint must be used exactly once. This is, I'd say, the largest barrier to implementation.
If you could freely duplicate channel endpoints, two parts of the program could race to perform the same protocol step, and all your protocol guarantees fall apart immediately.
The three implementation strategies:
Static: Conformance to session types is fully checked at compile time. Any error (be it sending the wrong message, not completing a session, or duplicating an endpoint) will be reported before a program compiles.
Dynamic: Conformance to session types is checked at runtime. Session types are compiled into communicating finite-state machines, and messages are verified against these CFSMs. These approaches are very flexible, extending session types to dynamically-checked languages, and allowing things like assertions on data.
Hybrid: Sending messages in the right order is checked statically. Linearity is checked dynamically. This is a promising approach, with drop-in libraries available to be used in general-purpose languages today!
See also: static vs. dynamic session typing thread | CFSMs and their relation to session types