Stripping out let in Haskell

Submitted by 余生长醉 on 2021-02-06 19:56:47

Question


I should probably first mention that I'm pretty new to Haskell. Is there a particular reason to keep the let expression in Haskell?

I know that Haskell got rid of the rec keyword that corresponds to the Y-combinator portion of a let statement that indicates it's recursive. Why didn't they get rid of the let statement altogether?

If they did, code would seem more iterative to some degree. For example, something like:

let y = 1+2
    z = 4+6
    in y+z

would just be:

y = 1+2
z = 4+6
y+z

Which is more readable and easier for someone new to functional programming to follow. The only reason I can think of to keep it around is something like this:

aaa = let y = 1+2
          z = 4+6
          in  y+z

Which would look like this without the let, which I think ends up being ambiguous grammar:

aaa = 
  y = 1+2
  z = 4+6
  y+z

But if Haskell didn't ignore whitespace, and code blocks/scope worked similar to Python, would it be able to remove the let?

Is there a stronger reason to keep around let?

Sorry if this question seems stupid, I'm just trying to understand more about why it's in there.


Answer 1:


Syntactically you can easily imagine a language without let. Immediately, we can produce this in Haskell by simply relying on where if we wanted. Beyond that, many other syntaxes are possible.


Semantically, you might think that let could translate away to something like this

let x = e in g      ==>    (\x -> g) e

and, indeed, at runtime these two expressions are identical (modulo recursive bindings, but those can be achieved with fix). Traditionally, however, let has special typing semantics (along with where and top-level name definitions, all of which are, effectively, syntactic sugar for let).


In particular, the Hindley-Milner type system which forms the foundation of Haskell has a notion of let-generalization. Intuitively, it concerns the situations where we upgrade functions to their most polymorphic form. In particular, if we have a function appearing in an expression somewhere with a type like

a -> b -> c

those variables, a, b, and c, may or may not already have meaning in that expression. In particular, they're assumed to be fixed yet unknown types. Compare that to the type

forall a b c. a -> b -> c

which includes the notion of polymorphism by stating, immediately, that even if there happen to be type variables a, b, and c available in the environment, these references are fresh.

This is an incredibly important step in the HM inference algorithm as it is how polymorphism is generated allowing HM to reach its more general types. Unfortunately, it's not possible to do this step whenever we please—it must be done at controlled points.

This is what let-generalization does: it says that types should be generalized to polymorphic types when they are let-bound to a particular name. Such generalization does not occur when they are merely passed into functions as arguments.


So, ultimately, you need a form of "let" in order to run the HM inference algorithm. Further, it cannot just be syntax sugar for function application despite them having equivalent runtime characteristics.

Syntactically, this "let" notion might be called let or where, or arise by the convention of top-level name binding (all three are available in Haskell). So long as it exists and is the primary method for generating bound names where people expect polymorphism, it will have the right behavior.
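A small sketch of the difference this makes (the names ok and bad are illustrative): a let-bound identity is generalized and may be used at two types, while the "equivalent" lambda form is rejected.

```haskell
-- The let-bound f is generalized to forall a. a -> a,
-- so one definition can be used at two different types:
ok :: (Bool, Char)
ok = let f x = x in (f True, f 'a')

-- A lambda-bound g is NOT generalized; this version is rejected
-- by the type checker, because g must stay monomorphic:
-- bad = (\g -> (g True, g 'a')) (\x -> x)
```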




Answer 2:


There are important reasons why Haskell and other functional languages use let. I'll try to describe them step by step:

Quantification of type variables

The Damas-Hindley-Milner type system used in Haskell and other functional languages allows polymorphic types, but the type quantifiers are allowed only in front of a given type expression. For example, if we write

const :: a -> b -> a
const x y = x

then the type of const is polymorphic, it is implicitly universally quantified as

∀a.∀b. a -> b -> a

and const can be specialized to any type that we obtain by substituting two type expressions for a and b.
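For illustration, two such specializations (const' mirrors the const above, renamed only to avoid clashing with the Prelude's const):

```haskell
const' :: a -> b -> a
const' x y = x

-- Each use site picks its own instantiation of a and b:
atCharInt :: Char
atCharInt = const' 'x' (5 :: Int)   -- a ~ Char, b ~ Int

atBoolString :: Bool
atBoolString = const' True "hello"  -- a ~ Bool, b ~ String
```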

However, the type system doesn't allow quantifiers inside type expressions, such as

(∀a. a -> a) -> (∀b. b -> b)

Such types are allowed in System F, but then type checking and type inference is undecidable, which means that the compiler wouldn't be able to infer types for us and we would have to explicitly annotate expressions with types.

(For a long time, the decidability of type checking in System F was an open question, sometimes described as "an embarrassing open problem", because undecidability had been proven for many other systems but not for this one, until Joe Wells proved it undecidable in 1994.)

(GHC allows you to enable such explicit inner quantifiers using the RankNTypes extension, but as mentioned, the types can't be inferred automatically.)
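A small sketch of such a higher-rank type under that extension (the name applyBoth is illustrative); the signature is mandatory, since GHC will not infer it:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A rank-2 type: the argument itself must be polymorphic.
applyBoth :: (forall a. a -> a) -> (Int, Bool)
applyBoth f = (f 1, f True)

demo :: (Int, Bool)
demo = applyBoth id
```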

Types of lambda abstractions

Consider the expression λx.M, or in Haskell notation \x -> M, where M is some term containing x. If the type of x is a and the type of M is b, then the type of the whole expression will be λx.M : a → b. Because of the above restriction, a must not contain ∀, therefore the type of x can't contain type quantifiers, it can't be polymorphic (or in other words it must be monomorphic).

Why lambda abstraction isn't enough

Consider this simple Haskell program:

i :: a -> a
i x = x

foo :: a -> a
foo = i i

Let's disregard for now that foo isn't very useful. The main point is that i in the definition of foo is instantiated at two different types. The first one

i :: (a -> a) -> (a -> a)

and the second one

i :: a -> a

Now if we try to convert this program into the pure lambda calculus syntax without let, we'd end up with

(λi.i i)(λx.x)

where the first part is the definition of foo and the second part is the definition of i. But this term will not type check. The problem is that i must have a monomorphic type (as described above), but we need it polymorphic so that we can instantiate i to the two different types.

Indeed, if you try to typecheck \i -> i i in Haskell, it will fail. There is no monomorphic type we can assign to i so that i i would typecheck.

let solves the problem

If we write let i x = x in i i, the situation is different. Unlike in the previous paragraph, there is no lambda here, no self-contained expression like λi.i i where we'd need a polymorphic type for the abstracted variable i. Therefore let can allow i to have a polymorphic type, in this case ∀a.a → a, and so i i typechecks.
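This can be checked directly in a source file; the let-bound i is generalized before i i is typechecked, so the two uses may be instantiated at different types:

```haskell
-- i is generalized to forall a. a -> a, so the outer i can be
-- instantiated at (a -> a) -> (a -> a) and the inner at a -> a:
foo :: a -> a
foo = let i x = x in i i
```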

Without let, if we compiled a Haskell program and converted it to a single lambda term, every function would have to be assigned a single monomorphic type! This would be pretty useless.

So let is an essential construct that allows polymorphism in languages based on Damas-Hindley-Milner type systems.




Answer 3:


The History of Haskell speaks a bit to the fact that Haskell long ago embraced a complex surface syntax:

It took some while to identify the stylistic choice as we have done here, but once we had done so, we engaged in furious debate about which style was “better.” An underlying assumption was that if possible there should be “just one way to do something,” so that, for example, having both let and where would be redundant and confusing.

In the end, we abandoned the underlying assumption, and provided full syntactic support for both styles. This may seem like a classic committee decision, but it is one that the present authors believe was a fine choice, and that we now regard as a strength of the language. Different constructs have different nuances, and real programmers do in practice employ both let and where, both guards and conditionals, both pattern-matching definitions and case expressions—not only in the same program but sometimes in the same function definition. It is certainly true that the additional syntactic sugar makes the language seem more elaborate, but it is a superficial sort of complexity, easily explained by purely syntactic transformations.




Answer 4:


This is not a stupid question. It is completely reasonable.

First, let/in bindings are syntactically unambiguous and can be rewritten in a simple mechanical way into lambdas.

Second, and because of this, let ... in ... is an expression: that is, it can be written wherever expressions are allowed. In contrast, your suggested syntax is more similar to where, which is bound to a surrounding syntactic construct, like the pattern matching line of a function definition.
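Because let ... in ... is an expression, it can sit inside an argument, a lambda body, or any other expression position, as in this small sketch (the names f and squaresPlusOne are illustrative):

```haskell
-- let in an inner expression position:
f :: Int -> Int
f x = 3 * (let y = x + 1 in y * y)

-- let inside the body of a lambda:
squaresPlusOne :: [Int]
squaresPlusOne = map (\x -> let s = x * x in s + 1) [1, 2, 3]
```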

One might also make an argument that your suggested syntax is too imperative in style, but this is certainly subjective.

You might prefer using where to let. Many Haskell developers do. It's a reasonable choice.




Answer 5:


There is a good reason why let is there:

  • let can be used within the do notation.
  • It can be used within list comprehensions.
  • It can conveniently be used within function definitions, as mentioned here.
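The first two uses might be sketched like this; in both positions let takes no in, and the bindings scope over the rest of the block or comprehension:

```haskell
-- let inside do notation:
greet :: IO ()
greet = do
  let name = "world"
  putStrLn ("hello, " ++ name)

-- let inside a list comprehension:
doubledEvens :: [Int]
doubledEvens = [d | x <- [1 .. 6], even x, let d = 2 * x]
```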

You give the following example as an alternative to let:

y = 1+2
z = 4+6
y+z

The above example is not valid Haskell at the top level (a bare expression like y+z is not a declaration), and y and z would also pollute the global namespace, which can be avoided using let.




Answer 6:


Part of the reason Haskell's let looks the way it does is the consistent way Haskell manages its indentation sensitivity. Every indentation-sensitive construct works the same way: first there's an introducing keyword (let, where, do, of); then the position of the next token determines the indentation level for this block; and subsequent lines that start at that same level are considered to be a new element in the block. That's why you can have

let a = 1
    b = 2
in a + b

or

let
  a = 1
  b = 2
in a + b

but not

let a = 1
  b = 2
in a + b

I think it might actually be possible to have keywordless indentation-based bindings without making the syntax technically ambiguous. But I think there is value in the current consistency, at least for the principle of least surprise. Once you see how one indentation-sensitive construct works, they all work the same. And as a bonus, they all have the same indentation-insensitive equivalent. This

keyword <element 1>
        <element 2>
        <element 3>

is always equivalent to

keyword { <element 1>; <element 2>; <element 3> }
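For instance, a minimal sketch of the two equivalent styles for a let block (the names sumLayout and sumExplicit are illustrative):

```haskell
-- Layout style: the column of the first binding after 'let'
-- fixes the indentation level of the block:
sumLayout :: Int
sumLayout = let a = 1
                b = 2
            in a + b

-- Explicit style: braces and semicolons, indentation-insensitive:
sumExplicit :: Int
sumExplicit = let { a = 1; b = 2 } in a + b
```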

In fact, as a mainly F# developer, this is something I envy from Haskell: F#'s indentation rules are more complex and not always consistent.



Source: https://stackoverflow.com/questions/23452343/stripping-out-let-in-haskell
