I heard something from Rongji Kang in a private conversation last week. The views in this post might be controversial, so please read carefully and think critically. I will be very happy if you share any comments (email/Twitter DM/anything else XD).
Cubical boundary hell
In cubical type theory, we have:
- Cubes! Of course, we have cubes. Cubes may have constant faces.
- Pattern matching on HITs, where there are “boundary conditions”: when pattern matching over higher constructors, certain definitional equalities must hold.
As an example, when we implement addition for the rationals defined as a higher inductive type, we need to prove that addition preserves the path constructor (cancellation of common divisors). When we then prove commutativity of addition on the rationals, we need to show that the two proofs of path preservation are themselves equal (up to a path).
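The same obligation shows up, in a lower-dimensional form, when the rationals are built as a quotient rather than a HIT. Here is a sketch in Lean 4 (which has no cubical features; all names like `PreRat`, `requiv`, and `addPre` are made up for this illustration, and the routine algebra is left as `sorry`): defining addition via `Quot.lift` forces us to prove it respects the cancellation relation, which is the analogue of the path-preservation proof mentioned above.

```lean
-- Pre-rationals: a numerator and a denominator stored as den + 1 > 0.
structure PreRat where
  num : Int
  den : Nat

-- a/(b+1) ~ c/(d+1)  iff  a * (d+1) = c * (b+1)  (cancellation of common divisors).
def requiv (x y : PreRat) : Prop :=
  x.num * (y.den + 1) = y.num * (x.den + 1)

def Rat' := Quot requiv

-- Addition on representatives: a/b + c/d = (a*d + c*b)/(b*d).
def addPre (x y : PreRat) : PreRat :=
  ⟨x.num * (y.den + 1) + y.num * (x.den + 1),
   (x.den + 1) * (y.den + 1) - 1⟩

-- The proof obligation: addition respects `requiv` in its right argument.
-- This is the analogue of "addition preserves the cancellation path";
-- proving commutativity later requires relating two such proofs.
theorem addPre_respects_right (x : PreRat) :
    ∀ y z, requiv y z →
      Quot.mk requiv (addPre x y) = Quot.mk requiv (addPre x z) := by
  sorry  -- routine ring reasoning, omitted in this sketch

-- Only with that proof in hand does `Quot.lift` let us descend to Rat'.
def addRight (x : PreRat) : Rat' → Rat' :=
  Quot.lift (fun y => Quot.mk requiv (addPre x y))
            (addPre_respects_right x)
```

In the cubical HIT version the situation is one dimension higher: the obligation is stated with paths rather than propositional equalities, and comparing two such proofs means building a square between paths.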
We assume the reader is familiar with the above facts. A big problem arises when working with cubes of dimension ≥ 3: we can rotate a 3-dimensional cube horizontally and get four different cubes in the sense of definitional equality.
This happens mostly when the cube has only one nontrivial side specified and all the other sides are trivial. In this case we only care about one side, but we have to take care of all of them.
While humans think of these cubes as arbitrarily rotatable (thus “exactly the same”), it is possible to generate many equivalent but syntactically different cubes in a cubical type theory. To show that two such 3-dimensional cubes are equal, one needs to construct a 4-dimensional cube (a path between two 3-dimensional cubes), which we may build by composing a 5-dimensional open cube.
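As a toy illustration (plain Python, not actual cubical syntax; the face encoding and term names here are made up), we can model a cube's boundary as an assignment of a term to each of its six faces, and watch a quarter turn about the vertical axis carry the single nontrivial side around, yielding four syntactically different boundaries for what we think of as one cube:

```python
# Toy model: a 3-cube's boundary assigns a term to each of its six
# faces, keyed by (dimension variable, endpoint).  A "horizontal"
# rotation is a quarter turn about the k axis, permuting the i and j
# directions.  This is an informal illustration, not real cubical
# type theory syntax.

def rotate(faces):
    """One quarter turn about the k axis: the i-faces become j-faces,
    and the j-faces become i-faces with their endpoints flipped."""
    out = {}
    for (dim, end), term in faces.items():
        if dim == 'i':
            out[('j', end)] = term
        elif dim == 'j':
            out[('i', 1 - end)] = term
        else:  # the top and bottom (k) faces stay put
            out[('k', end)] = term
    return out

# A cube with a single interesting side 'p' and trivial faces elsewhere.
cube = {
    ('i', 0): 'p',    ('i', 1): 'refl',
    ('j', 0): 'refl', ('j', 1): 'refl',
    ('k', 0): 'refl', ('k', 1): 'refl',
}

# Apply the four quarter turns and collect each boundary we see.
seen = []
c = cube
for _ in range(4):
    seen.append(frozenset(c.items()))
    c = rotate(c)

print(len(set(seen)))  # prints 4: four syntactically distinct boundaries
```

All four boundaries describe "the same" cube to a human, but a proof assistant comparing them syntactically sees four different terms, and identifying any two of them requires building the higher-dimensional filler described above.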
This is very hard! We can easily fall into a “cubical boundary hell” if we do not track the orientation of these cubes carefully.
Frege and Cantor tried to create a “seemingly good” set theory, but Russell jumped out and showed that it was inconsistent. People admitted the fact and continued with a better (but more complicated) set theory.
Hilbert tried to create a complete and consistent axiomatic system for doing mathematics, but Gödel jumped out and showed it to be impossible. People admitted the fact and continued with various (potentially incomplete) proof theories and models.
One may wonder, what’s next?
It might be “inefficiency”.
Cubical type theory offers us a calculator for homotopy and homology problems, where simple constructions can be understood by a computer and calculated in a type theory with canonicity, but so far it computes slowly.
It may turn out to be impossible to implement a univalent type theory that efficiently computes univalence. This video might explain some details behind the scenes.
I would very much like to see the person who comes after Russell and Gödel :-D