What is a Type?
Let's start by investigating what a type is. According to Wikipedia, a type system can be defined as "a tractable syntactic framework for classifying phrases according to the kinds of values they compute". So the type of a phrase (whatever that is) is simply the set of values that can be computed from that phrase. In this post I will use the terms type and set interchangeably, as I consider them to be the same thing (mathematically this might not be quite correct).
The Type of 1
So, what is the type of the value 1? In Scala (and Java, C/C++, C#, etc.) the type of 1 is a 32-bit signed integer. This is strange, as the value 1 is a member of many sets (in fact, an infinite number of them), so why is this particular set chosen? Probably for historical reasons, and because it's a convenient type to do calculations with on most computers; mathematically, though, this type has no special meaning.
In Haskell the type of 1 is (Num t) => t. Confusingly, the type is defined in terms of a typeclass. This set includes a lot of values, including user-defined ones. Really, must Haskell resort to typeclasses to define the type of 1?
I would argue that there are only two reasonable types for the value 1: the set containing all values and the set containing only the value 1. The set containing all values is not useful in static type checking, so that leaves the set whose only element is the value 1, written {1} in set notation (also called a singleton set). The same reasoning applies to all values. I find it remarkable that pretty much all commonly used programming languages get this simple type wrong. How can one expect a type system to be useful if it can't even infer the correct type of the most basic expressions?
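To make this concrete, here is a sketch of what the singleton type could look like. The syntax below is real: the literal types available in Scala 2.13 and later behave exactly like this (although Scala still infers Int for a bare 1, which is precisely the complaint here):
val one : 1 = 1 // the type of one is the singleton set {1}
val n : Int = one // fine: every member of {1} is also an Int
// val m : 1 = 2 // rejected: 2 is not a member of {1}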
The Type of a Function
A (total) function is a relation that maps each element of one set, let's call it the input set, to exactly one element of another (or the same) set, let's call it the output set. This relation can be modelled as a set of pairs a -> b, where a is a member of the input set and b is a member of the output set. The additional restriction on the relation set is that for each a in the input set there must be exactly one pair a -> * in the set (where * represents any member of the output set).
So, let's apply this model to a simple function in Scala:
val foo = (b : Boolean) => if (b) 1 else "a"
What is the only reasonable type of foo? Well, the input set of foo is Boolean which is equal to {true, false}. The output set of foo is {1, "a"}. If the input value is true, foo will output the value 1 and if the input value is false, foo will output value "a". So we can write the type of foo as {true -> 1, false -> "a"}. In Scala the type of foo is Boolean => Any, which is a pretty useless type as Any is the set of all values (i.e. the type doesn't tell us anything about the value). In Haskell the corresponding definition won't even type check. Seriously, these are state of the art programming languages, is this the best we can do?
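To make the relation-set model concrete, here is foo's graph written out as plain data (just an illustration, not a replacement for foo; a Map is Scala's built-in finite set of pairs):
val fooGraph = Map(true -> 1, false -> "a") // the relation set {true -> 1, false -> "a"}
fooGraph(true) // 1
fooGraph(false) // "a"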
Subtyping
Well, what if we have:
val addOne : Int => Int
val x : {1} = 1
addOne(x) // Type error?
No, this should not be a type error. The first thing to notice is that the set Int contains the value 1 and thus {1} is a subset of Int. This means that it's perfectly safe to apply the function addOne on the input set {1} as we're guaranteed that the relation set addOne contains a mapping for each element in the input set. In type theory one would say that {1} is a subtype of Int.
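As it happens, this rule can be tried out directly: with the literal types of Scala 2.13 and later, the example above type checks exactly as argued. A small sketch:
val addOne : Int => Int = _ + 1
val x : 1 = 1 // the singleton type {1}, written in Scala's literal type syntax
addOne(x) // accepted: 1 is a subtype of Int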
Final Words
I've only skimmed the surface of how to apply the concept of sets and values to programming languages. The concept can be extended to ADTs, pattern matching, type classes, etc. I think there is a lot to gain from thinking about types this way, because there is something fundamentally broken in the way types are implemented in most programming languages. Do you agree?
I don't agree. Note that your Wikipedia definition stated that a type system is "tractable" and "syntactic". In other words, it's simple enough that we can practically implement it, and it works based on the syntax of the language. So we should expect that it is an abstraction. It is certainly possible to write more specific types and to have more expressive and accurate type systems, and there's always a tradeoff between specificity, user-friendliness, and ease of implementation.
But I think that this blog post has a more fundamental issue: I believe that you're confusing some specific representations of the number 1 with the mathematical concept of 1. There are various interesting models of 1-ness in a computer, depending on what we want to do with it. It should be just fine that these models inhabit different types, in the same way that I might represent myself one way in an employment database and another way as a user account in a blogging platform and a third way in a medical records system.
Generally, when we deal with a particular type, we're interested in the properties that all members of that type have in common. That way, we can write code that's generic. When we define datatypes in a programming language, we're modeling some domain to a particular degree of fidelity, balancing computational difficulty against accuracy.
To implement types like {1, "a"}, I'd recommend that you think about isomorphisms a bit. Your type containing only 1 is isomorphic to Unit. Likewise, your type containing 1 and "a" is isomorphic to Boolean. One way to get at what you're going for in a dependent type system is to interpret a finite set of n elements by assigning their meanings to unique elements of Fin n. I know you've spent a bit of time on the Idris list, so this should be accessible.
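For instance, here is the second isomorphism spelled out, as a sketch in Scala 3, where the union type 1 | "a" happens to be expressible:
def to(v : 1 | "a") : Boolean = v match {
  case 1 => true
  case "a" => false
}
def from(b : Boolean) : 1 | "a" = if (b) 1 else "a"
// to and from are mutually inverse, witnessing that {1, "a"} is isomorphic to Boolean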
WRT Haskell: the reason why the literal 1 has the type "(Num t) => t" is to allow for overloading. It allows one to write 1 for Float, Int, Integer, and user-defined types alike. It's just syntactic convenience: using a typeclass means that user-defined types can also have integer literals where that makes sense.
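A rough Scala parallel, to show the same trick outside Haskell (a sketch; Numeric and its one member are in the standard library of Scala 2.13+):
def one[T](implicit num : Numeric[T]) : T = num.one
val a : Int = one[Int] // 1
val b : Double = one[Double] // 1.0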
@David:
Thanks for your response. I really don't think it's user friendly for the type checker to throw away information that the developer has given, and also assume things that the developer hasn't specified.
I don't think I'm confusing the value 1 with its representation. The value 1 is constant and unique, regardless of the representation or domain. Really, the developer should not see or worry about what representation the compiler chooses for the value (except maybe for performance reasons). Exposing the representation in the type is IMHO a mistake.
I understand why Haskell has chosen the type Num t => t, but I still don't think it's correct. The value 1 is a member of many sets that are not numerical. One might as well choose the type Show t => t, for example.
I think the problem lies in the computation of types from (recursive) functions. In general, you want your typechecker to guarantee termination. What you propose is a Turing-complete system, a.k.a. dynamic typing. In other words: when a typecheck requires a full evaluation of the program, I do not need a typecheck at all.
@Unknown:
I'm not suggesting that recursive functions should be fully evaluated to infer their types. The type should definitely be inferable without evaluation.
Jesper,
You say "I really don't think it's user friendly for the type checker to throw away information that the developer has given, and also assume things that the developer hasn't specified."
However, this is what a type system _must_ do in order to be tractable and friendly. Try to work through the practicalities of the system you're proposing: if I make a function from {1, 17.8, "a"} -> {"b", 99}, then in order to call it, I'd need a type system that could _prove_ that my argument was a member of the domain. In practice, this will not be convenient.
What do you think the type of "def f(x) = if true then 17 else "a"" should be? Any => {"a"}? Any => {17, "a"}? What about the type of "def f(x) = { while (true) {}; 1}"? In order to do the kind of very precise tracking of values that you're interested in, one would have to solve the halting problem. Take a look at Rice's theorem.
And I do think that you're confusing 1 with its representation. The mathematical concept 1 is not the same as its representation in a computer, in precisely the same sense that I am not identical to my representation in a database. We need to care what representation is used because it affects our ability to reason about data.
@David:
ReplyDelete"I'd need a type system that could _prove_ that my argument was a member of the domain. In practice, this will not be convenient."
I don't see how that is different from current type systems. In order to be able to apply a function Int -> Int, the type system must prove that the argument is a member of Int.
The type for "def f(x) = if true then 17 else "a"" should be "Any => {17}" IMHO.
Regarding recursion: yes, it's not always possible to determine whether a function terminates, but there are lots of functions for which we can automatically determine termination (you should be familiar with the totality checker in Idris, for example). Whether and how to encode non-termination in the type is another topic, which I haven't fully analyzed yet. I think the Haskell way of adding a bottom value to every type is wrong and impractical. Idris adds an annotation outside the type. Maybe you have some other ideas about how this could be encoded?
I'm not following your reasoning about the representation of 1. Are you saying that there are many different conceptual interpretations of the symbol 1?
Proving that the argument is an element of Int is exactly why we have type systems at the level of abstraction that we do - they are decidable and (relatively) straightforward. I don't think that your proposed system would be decidable - but please work out the details and prove me wrong!
Certainly many analyses become easier when working in a total language. As far as encoding non-termination in the types, you can take a look at the partiality monad. But it's far from a solved problem - I don't think that as a community we know enough about programming with a mix of data and codata, for instance.
WRT 1: I'm saying that, like every other value in the computer, we work with a particular representation that has particular tradeoffs. This is true whether we use machine ints, inductively defined integers, IEEE floats, etc. We need to be cognizant of this, and 1 being a mathematical object doesn't free us from that.
Hmm, I really don't see why you would want to worry about the representation of 1 when you're writing a program. The only cases where I think the representation is important is when you're interfacing to external systems using FFI, serialization etc. and for performance reasons. The first case can be easily solved by switching to a well specified representation in the interface. The performance case is not that important if I can trust that the compiler will choose a good representation for my values. In the small number of cases where I really care about getting maximum performance I can explicitly specify which set to use. By not specifying the representation the compiler will have maximum freedom of choosing an efficient representation (and hopefully it can do a better job than the developer in most cases).
I care very much whether a machine int, a machine float, or an inductively-defined datatype represents 1. They have quite different semantics, even though they are modeling the same concept. It's not just a matter of efficiency - multiplication has a different meaning on machine floats than it has on machine ints, for instance. I don't really see how I should expect the computer to know which of these operations I intend.
Performance is a separate issue - if we don't get the correct answers, then it doesn't matter how fast our computers are!
I'm not trying to say that the language shouldn't do as much work for us as possible - just that I think you're vastly underestimating the difficulty of getting it to do certain kinds of things.
Yes, floats and wrapping ints are indeed special cases, but still I argue that those representations are only chosen for compatibility and performance reasons. They are only used because the hardware we run the program on has built-in support for them.
ReplyDelete"I would argue that there are only two reasonable types for the value 1: the set containing all values and the set containing only the value 1."
In the context of programming languages, the set of available types is a design decision, just like the set of available expressions. In fact, these language components should be designed to work together. Ideally, they should work for the programmer, but that's a question of how good the language designer is.
The reason Java has a type for 32-bit integer is that it also has arithmetic operations for 32-bit integers and storage for 32-bit integers. Change any of those three things to use 33-bit integers, or 31-bit integers, and the resulting language will be a total mess.
Likewise, Scala includes singleton types, because it blends with path-dependent types and with field override (val over val). Java doesn't have those features, so it doesn't need singleton types.
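For example, a small sketch: Scala writes the singleton type of a stable value a as a.type, and path-dependent types build on exactly this.
val a = "hello"
val b : a.type = a // a.type is the singleton type whose only member is a
// val c : a.type = "hello" // rejected: only the path a itself has type a.type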
@Lex:
Choosing a type based on the machine representation makes some sense for a systems language with a limited type system, like C, but for a high-level language with a powerful type system it makes less sense. On the contrary, I think there are severe drawbacks to that choice (which I try to show with this blog post). The compiler can still choose an efficient representation for the type in the compiled code.
Jesper,
The compiler can't choose an efficient representation willy-nilly when the semantics of the various representations vary in important ways. This is the case in int vs float situations, and when you get into more expressive type systems, then the careful design of your representations gets to be even more important.
Should the compiler infer a representation for people in an employment database? Should it infer a representation for parsed JSON that you get from some Web service? If not those, then why numbers?
@David:
Let's say that you define a person as a record type: "type Person = {name : String, age : Int}". In many languages (excluding system languages like C/C++) this is just a logical view of the actual data representation used in the program (i.e. the representation is not exposed to the programmer). I argue that it's a good thing to leave the actual memory layout, alignment, bit arrangement etc. of the data to the compiler. Defining the atomic building blocks of the type system is a task separate from defining the representation of the data in the running program. If that's not what you're arguing against, I'm afraid I don't understand your point.
I think I see where we are not properly communicating.
When I'm talking about representations, I'm talking about the representation in the universe of the language, not the way that the language runtime represents the data in terms of bits and pointers.
Your record type represents people in a particular way: as a record with a name and an age. Another representation might leave out the age but include some kind of government ID number or a list of skills or a salary or whatever.
Instances of these types exist in your programming language, but they are intended to model objects that exist outside of your programming language. This is what makes the program useful - it's not calculating nonsense.
In the same way, numbers as such are not part of our programming language. They exist externally, either as mental objects (if you're an Intuitionist) or in some special universe of forms (if you're a Platonist) or whatever. The point is that I have an idea of "1"-ness, and when I want to get useful work done with it, I capture it by defining a representation in my programming language that has the correct properties to get the job done. In the same sense that I might store a person's salary in an employee database and their reading habits in a library database. Just as my programming language doesn't have a notion of "David"-ness apart from the models we put in it, it also doesn't have a notion of "1"-ness apart from the models we put in it.
@David:
I think I see what you mean, but I don't agree. A value can have meaning by itself, and the properties for it can be defined separately from the value. Let's take a bigger example:
x = 1
y = 2
f : {1, 3} -> {2, 4}
g v f : a -> (a -> {2, 4}) -> {3, 5} = (f v) + 1
g x f
g y f // Type error
x and y are represented by their respective values (i.e. their types are {1} and {2}), and f defines a "property" of the values 1 and 3. What other representations do you need?
David, I think Jesper's reasoning on representations is legitimate, and can be formalized using small-step operational semantics. The problem is elsewhere — you're right on your argument about decidability.
Examples which use integers are not very helpful, because properties of integers are especially easy to decide: you can use integers in the type system and still decide many interesting properties, because decision procedures for (subsets of) arithmetic exist. I'm not familiar with the details, but Dependent ML has integers in its type language (though in a different form) and is still decidable. It seems to me you could encode all of Jesper's examples in Dependent ML, with a type "Int n" which contains only n as a value. The paper is "Dependent ML: An Approach to Practical Programming with Dependent Types".
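In Scala 3 terms, such an "Int n" is close to a literal type. A tiny sketch (onlyThree is a made-up name):
def onlyThree(x : 3) : Int = x + 1 // accepts exactly the value 3
onlyThree(3) // fine
// onlyThree(4) // type error: 4 does not have type 3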
General theorems about integers are not always decidable, but you need much more complicated examples to show that.
However, this stops as soon as you get more interesting stuff in your language.
I think the below part of the conversation is relevant independent of the discussion of representations. And unfortunately, I see no answer about that. This is a key point of the argument.
>>> "I'd need a type system that could _prove_ that my argument was a member of the domain. In practice, this will not be convenient."
>> I don't see how that is different from current type systems. In order to be able apply a function Int -> Int the type system must prove that the argument is a member of Int.
> Proving that the argument is an element of Int is exactly why we have type systems at the level of abstraction that we do - they are decidable and (relatively) straightforward. I don't think that your proposed system would be decidable - but please work out the details and prove me wrong!
That's correct. Because of that, in general in dependently-typed languages you need to manually prove what the typechecker can't prove itself.
An example is shown in "Why dependent types matter" (http://www.cs.nott.ac.uk/~txa/publ/ydtm.pdf). In Sec. 4.1, writing the reverse function on vectors (which track their length in the type) requires a proof that x + 0 = x. This proof is about integers and could be automated, but that's a distraction, since you often need more complicated proofs which can't be automated as easily.
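To give a feel for where that proof obligation comes from, here is a rough, untested Scala 3 sketch of such length-indexed vectors (Vec, VNil and VCons are hypothetical names):
import scala.compiletime.ops.int.+
enum Vec[N <: Int, +A] {
  case VNil extends Vec[0, Nothing]
  case VCons[N <: Int, A](head : A, tail : Vec[N, A]) extends Vec[N + 1, A]
}
// scala.compiletime.ops.int.+ reduces only for concrete lengths, so a reverse
// written against an abstract length N gets stuck on exactly the N + 0 = N step.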
Then I have another question:
> I would argue that there are only two reasonable types for the value 1: the set containing all values and the set containing only the value 1.
If those are the only types which exist, what's the type of x => x + 1? Is it 0 -> 1? Or 1 -> 2? Or the intersection of all those types together?
I forgot one thing though. There's a cool paper which uses dependent types for JavaScript:
http://goto.ucsd.edu/~ravi/research/oopsla12-djs.pdf
I think this happens to explore the direction you're discussing. Their type system is complicated enough that they resort to an SMT solver for the proofs the type system requires. Still, SMT solvers aren't magical.
Moreover, Jesper, you refer multiple times to type inference. Type inference for the languages we're discussing is much harder — the manual annotation overhead in that paper is 70%. There's space for more research of course, but more expressive type systems most of the time come with less powerful type inference.
Generally, while you don't yet have a worked out proposal, there's certainly space for improvement. But a lot of the questions you ask and the problems you complain about are well-known to experts in the field. If you want to learn about that, improve on the state of the art, and you're ready to work really hard, you might want to learn about type systems theory. If you then still think you can make things better, you should get into a PhD program, solve the problem, and convince scientists that what you have is better.