In almost all examples, a Y combinator in ML-style languages is written like this:
let rec y f x = f (y f) x
let factorial = y (fun f -> function 0 -> 1 | n -> n * f (n - 1))
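For example (this is OCaml syntax; the eta-expansion in y f x is what keeps y from looping under eager evaluation):

let () = assert (factorial 5 = 120)   (* 5! = 120 *)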
Case expressions and let bindings in ML derivatives are what make them Turing complete. I believe their type systems are based on System F rather than the simply typed lambda calculus, but the point is the same either way.
System F cannot assign a type to any fixed-point combinator; if it could, it would not be strongly normalizing.
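As a sketch of why: if you write the classic untyped fixed-point combinator directly in OCaml (whose core is, roughly, a restriction of System F), the type checker rejects it, because in the self-application x x the variable x would need an infinite type:

let y f = (fun x -> f (x x)) (fun x -> f (x x))
(* rejected by the occurs check: x would need the infinite type
   'a = 'a -> 'b; OCaml only admits such types with the -rectypes flag *)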
Strongly normalizing means that every reduction sequence terminates, so every expression has a normal form, where a normal form is an expression that cannot be reduced any further (and since the calculus is also confluent, that normal form is unique). This differs from the untyped lambda calculus, where every expression has at most one normal form but may have no normal form at all.
If a typed lambda calculus could construct a fixed-point operator in whatever way, it would become possible for an expression to have no normal form.
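The standard untyped example is the term Ω, which keeps reducing to itself and so never reaches a normal form:

(λx. x x) (λx. x x)  →  (λx. x x) (λx. x x)  →  ...

A typed fixed-point operator would let you build terms with the same behaviour.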
Another famous result, the undecidability of the Halting Problem, implies that strongly normalizing languages are not Turing complete: it says that for a Turing-complete language it is impossible to decide (which is different from proving it case by case) which of its programs halt on which inputs. If a language is strongly normalizing, it is decidable whether a program halts, namely: it always halts. Our algorithm to decide this is the program true; (it answers yes for every input).
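Spelled out as a sketch in OCaml, with string standing in for whatever representation of programs and inputs you like:

(* Trivial halting decider for a strongly normalizing language:
   every program halts on every input, so always answer true. *)
let always_halts (_program : string) (_input : string) = true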
To overcome this, ML derivatives extend System F with case expressions and let (rec) bindings. Functions can thus refer to themselves in their own definitions again, which makes these languages, strictly speaking, no lambda calculi at all any more: it is no longer possible to rely on anonymous functions alone for all computable functions. They can thus enter infinite loops again and regain their Turing completeness.
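A minimal OCaml illustration of what let rec buys (and costs): the first definition is accepted with the perfectly ordinary type 'a -> 'b, yet applying it never terminates, and factorial can now be written by direct self-reference instead of going through y.

let rec loop x = loop x   (* loop : 'a -> 'b, never returns when applied *)

let rec fact = function
  | 0 -> 1
  | n -> n * fact (n - 1)  (* fact : int -> int *)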