CBPV and Natural Deduction - Part 4. Polarized Logic

2023-09-16
/blog#logic#abstract machine#cbpv

This is the final part of my little study of polarized natural deduction by means of CBPV. The previous parts are "pt1. small steps", "pt2. sums and products", and "pt3. linear logic".

In this part, I will sum up why I find all of this interesting. This is going to be a bit more personal/opinionated/colored than the previous parts.

Proof-theoretical semantics

CBPV as polarized natural deduction helps develop an intuition for "proof-theoretical semantics."

It may be a subjective preference, but I consider intuition important. I believe that in the ideal world, intuition, "explanatory power" and teachability would carry a weight equal to or greater than the "new results" of research. This is clearly not the world we live in, but we can nevertheless aspire.

So let me draw a rough "baseline" of logic, computation and programming languages.

Focusing and uniform proofs

Now that we have established a (possibly subjective) baseline, let's talk about focusing. Focusing makes its appearance in logic programming: here, computation is not proof normalization but proof search.

Focusing, introduced by Andreoli for linear logic, is a structuring principle for proofs. It is a way to eliminate the redundancy that is introduced when passing from natural deduction to sequent calculus (Pfenning's lecture notes). Girard was aware of Andreoli's work and referenced it in "On the unity of logic", introducing "positive" and "negative" polarity as concepts.

The key idea is that if a rule of inference is invertible, it makes sense to apply it immediately and not look for other rules. Giving invertible rules priority over the others cuts down the large search space of proofs.
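For instance, the right rule for conjunction in the sequent calculus is invertible: its conclusion is derivable exactly when its premises are, so a search procedure can apply it eagerly without losing proofs.

$$\frac{\Gamma \vdash A \qquad \Gamma \vdash B}{\Gamma \vdash A \wedge B} \; {\wedge}R$$

By contrast, a rule like ${\vee}R$, which commits to one of the two disjuncts, is not invertible and may force backtracking.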

Polarized focusing goes further: in the context of proof search, it is about "combining runs of connectives that are positive or negative, with explicit coercions between runs. These coercions, written $\uparrow$ and $\downarrow$ are called shift operators." (Pfenning, ibid.) This is what we discussed in this series.
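To make the shape of this concrete, here is a minimal sketch of a polarized type grammar in Haskell (the datatype and constructor names are my own, not from any particular paper):

```haskell
-- Positive types: fully evaluated data, defined by their constructors.
data Pos
  = Sum Pos Pos     -- ⊕, tagged choice
  | Tensor Pos Pos  -- ⊗, eager pair
  | Down Neg        -- ↓N: a suspended computation, packaged as a value

-- Negative types: suspended computations, defined by how they are used.
data Neg
  = With Neg Neg    -- &, lazy choice offered to the environment
  | Fun Pos Neg     -- →, consumes a value and continues computing
  | Up Pos          -- ↑P: a computation that returns a value
```

A "run" of positive connectives stays inside Pos until a Down shifts over to Neg, and vice versa for Up: these two constructors are exactly the coercions between runs.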

What just happened?

We saw how call-by-push-value (CBPV) "separates" typed $\lambda$-calculus into value types and computation types. We obtain a type system which can be viewed as a natural deduction calculus. This view may seem slightly forced, since operations like $\mathtt{thunk}$ and $\mathtt{force}$ that control evaluation do not seem very logical. But there are compelling reasons to have them!
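To recall the separation concretely, here is a minimal sketch of CBPV term syntax in Haskell, roughly following Levy's grammar (the constructor names are my own):

```haskell
-- Values: passive data, inspected but never "run".
data Value
  = Var String
  | Thunk Comp             -- suspend a computation as a value
  | Pair Value Value
  | Inl Value
  | Inr Value

-- Computations: things that execute.
data Comp
  = Force Value            -- run a thunked computation
  | Return Value           -- a computation that merely returns a value
  | Bind Comp String Comp  -- run the first, bind its result, continue
  | Lam String Comp        -- functions are computations
  | App Comp Value         -- apply a computation to a value
```

Note how Thunk and Return are precisely the term-level faces of the shifts $\downarrow$ and $\uparrow$.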

If we consider the treatment of inference and assumptions in linear logic, the familiar operators $\vee, \wedge$ separate into additive versions $\oplus, \&$ as well as multiplicative versions ⅋ and $\otimes$.

Some of these types, $\oplus$ and $\otimes$, look like data type constructors, and we called these positive or value types. For the others, ⅋ and $\&$, evaluation does not proceed until the environment demands it. These are negative or computation types.
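As a rough Haskell analogy (not a faithful linear-logic encoding), the strictness of the fields captures the difference:

```haskell
-- & behaves like a lazy record of alternatives: the consumer projects
-- one component, and only that component is ever evaluated.
data With a b = With { pi1 :: a, pi2 :: b }

-- ⊗ behaves like an eager pair: both components are evaluated data.
data Tensor a b = Tensor !a !b
```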

There is more to say here (concurrency and "par", applications to memory management, category theory), but that will have to happen some other time.

Putting it all together

Positive types are characterized by their introduction rules, negative types by their elimination rules. The straightforward way to interpret this is to think of negative types as suspending evaluation ("lazy", unevaluated, "objects") and positive types as being fully evaluated ("structured data").

Having both of these in a single calculus, one that moreover corresponds to polarized natural deduction, is simply amazing. Surely, being able to encode various formal calculi that are either call-by-name or call-by-value is interesting. From a more practical perspective, it is now easy to see that formalizing "object-oriented style," which will decidedly involve something like records of unevaluated functions, will involve negative types.

To give a vague but hopefully illuminating idea: consider how a "vtable" is quite literally a record of function pointers, "waiting" for program execution to select one among them before proceeding with computation. And how representing an object involves carrying around a vtable.
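To sketch this in code: here is a hypothetical counter "object" in Haskell, where the lazy record plays the role of the vtable and nothing runs until the caller projects a method:

```haskell
-- A record of suspended computations; selecting a field is dispatch.
data Counter = Counter
  { get :: Int      -- observe the current state
  , inc :: Counter  -- "method" producing the next object
  }

mkCounter :: Int -> Counter
mkCounter n = Counter { get = n, inc = mkCounter (n + 1) }

-- Example: get (inc (inc (mkCounter 0))) evaluates to 2.
```

Because the fields are lazy, the infinite chain of inc is perfectly fine: only the projections the environment demands are ever computed, which is exactly the negative-type behavior described above.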

All this makes me think that when formalizing a programming language, it seems very attractive to translate the calculus to CBPV (or something like CBPV). Of course, this supposes that formalization is taking place; it looks like I will finally have some motivation to learn a proof assistant.

I hope you found this little series useful in developing intuition for linear logic as well as the role of "negative types."