About the non-nullable types debate

北海茫月 2020-12-20 18:07

I keep hearing people talk about how non-nullable reference types would solve so many bugs and make programming so much easier. Even the creator of null calls it his billion-dollar mistake.

7 Answers
  •  春和景丽
    2020-12-20 18:20

    It's a little odd that the response marked "answer" in this thread actually highlights the problem with null in the first place, namely:

    I've also found that most of my NULL pointer errors revolve around forgetting to check the return value of the string.h functions, where NULL is used as an indicator.

    Wouldn't it be nice if the compiler could catch these kinds of errors at compile time, instead of runtime?

    If you've used an ML-like language (SML, OCaml, and F# to some extent) or Haskell, reference types are non-nullable. Instead, you represent a "null" value by wrapping it in an option type. In this way, you actually change the return type of a function if it can return null as a legal value. So, let's say I wanted to pull a user out of the database:

    let findUser username =
        // executeQuery and initUser are assumed data-access helpers;
        // username is bound to the @username parameter in the query
        let recordset = executeQuery "select * from users where username = @username" username
        if recordset.getCount() > 0 then
            let user = initUser recordset
            Some user
        else
            None
    

    findUser has the type val findUser : string -> user option, so the return type of the function actually tells you that it can return a null-like value. To consume the result, you need to handle both the Some and None cases:

    match findUser "Juliet Thunderwitch" with
    | Some x -> print_endline "Juliet exists in database"
    | None -> print_endline "Juliet not in database"
    

    If you don't handle both cases, the compiler flags the match as non-exhaustive (a warning that most projects promote to an error). So the type system guarantees that you'll never get a null-reference exception, and it guarantees that you always handle nulls. And if a function returns user, it's guaranteed to be an actual instance of an object. Awesomeness.
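    For instance, here's a minimal sketch (OCaml, reusing the findUser above) of what the compiler reports when the None case is forgotten:

    (* Omitting the None branch triggers Warning 8:
         "this pattern-matching is not exhaustive.
          Here is an example of a case that is not matched: None"
       and with -warn-error +8 the build fails outright. *)
    match findUser "Juliet Thunderwitch" with
    | Some x -> print_endline "Juliet exists in database"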

    Now we see the problem in the OP's sample code:

    class Class { ... }
    
    void main() {
        Class c = new Class(); // set to new Class() by default
        // ... ... ... code ...
        for(int i = 0; i < c.count; ++i) { ... }
    }
    

    Initialized and uninitialized objects have the same datatype, so you can't tell the difference between them. Occasionally, the null object pattern can be useful, but the code above demonstrates that the compiler has no way to determine whether you're using your types correctly.
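    To make that concrete, here is a minimal sketch of the same program in an option-typed style (OCaml; the clazz record and its count field are hypothetical stand-ins for the OP's Class):

    (* "Maybe uninitialized" (clazz option) is now a different type from
       "definitely initialized" (clazz), so the compiler forces a check
       before count can be read. *)
    type clazz = { count : int }

    let main (c : clazz option) =
        match c with
        | Some c -> for i = 0 to c.count - 1 do Printf.printf "%d\n" i done
        | None -> print_endline "c was never initialized"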
