How does Swift's int literal to float inference work?

Submitted by *爱你&永不变心* on 2020-01-06 04:43:06

Question


Swift can infer integer literals to be Doubles or Floats:

let x: Float = 3

It even works with arithmetic. It will convert everything before doing the math, so this is also 3:

let y: Float = 5/2 + 0.5

But what are the actual rules for this? There are ambiguous situations, for example if the inference is for a parameter:

func foobar(_ x: Int) -> Int {
  return x
}

func foobar(_ x: Float) -> Float {
  return x
}

foobar(1/2)

In this case it infers the argument as an Int and returns 0, but if you delete the first function it switches to a Float and returns 0.5.

What are the rules? Where is it documented?

Even more annoying is when Swift could infer a Float but doesn't:

func foobar(_ x: Float) -> Float {
   return x
}

let x = 1/2
foobar(x) // Cannot convert value of type 'Int' to expected argument type 'Float'

Answer 1:


Literals don't have a type as such. The docs say,

If there isn’t suitable type information available, Swift infers that the literal’s type is one of the default literal types defined in the Swift standard library. The default types are Int for integer literals, Double for floating-point literals, String for string literals, and Bool for Boolean literals.

So unless the context explicitly calls for something other than Int, integer literals will be inferred as Int.
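A quick illustration of those defaults (a minimal sketch, not from the original answer):

let a = 42         // no context: integer literal defaults to Int
let b = 3.14       // no context: floating-point literal defaults to Double
let c: Float = 42  // context supplies Float, so the same literal becomes a Float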


See Lexical Structure - Literals for more information.




Answer 2:


There are two Swift behaviors at play here:

  1. Swift can infer an integer literal to be of type Float (or Double) when the context requires it.
  2. Each literal has a default type that is used when inference cannot settle on one. For integer literals the default is Int.

With only one function, rule 1 applies. Swift sees that a Float is needed, so it treats the division as Float division and the integer literals as Floats:

func foobar(_ x: Float) -> Float {
  return x
}
foobar(1/2) // 0.5

If you overload the function, rule 1 no longer works. The type is now ambiguous so it falls back to the default type of Int, which luckily matches one of the definitions:

func foobar(_ x: Int) -> Int {
  return x
}
func foobar(_ x: Float) -> Float {
  return x
}
foobar(1/2)  // 0

See what happens if you make it so the default no longer works. Neither rule applies so you get an error:

func foobar(_ x: Double) -> Double {
  return x
}
func foobar(_ x: Float) -> Float {
  return x
}
foobar(1/2)  // Ambiguous use of operator '/'
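One way to resolve that ambiguity (not part of the original answer, just a sketch) is to give the literal expression an explicit type, so inference no longer has to fall back on a default:

func foobar(_ x: Double) -> Double { return x }
func foobar(_ x: Float) -> Float { return x }

foobar(1/2 as Double)  // 0.5, calls the Double overload
foobar(1/2 as Float)   // 0.5, calls the Float overload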



Answer 3:


By default, passing 1/2 as the argument performs a calculation on two Int values, which evaluates to an Int result, so the first function is used.

To get a Float, at least one of the operands has to be a floating-point literal, e.g. 1.0/2, 1/2.0, or 1.0/2.0. That causes the second function to be called instead.
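For example, a sketch using the two overloads from the question:

func foobar(_ x: Int) -> Int { return x }
func foobar(_ x: Float) -> Float { return x }

foobar(1/2)    // default literal type Int matches the first overload, returns 0
foobar(1.0/2)  // 1.0 cannot be an Int literal, so the Float overload is used, returns 0.5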

In let x = 1/2, x is inferred to be of type Int because both 1 and 2 fall back to their default type, Int.

Swift will not infer a Float unless the context calls for one.
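For the last snippet in the question, an explicit type annotation on x is enough to make the literals (and the division) Float; a minimal sketch:

func foobar(_ x: Float) -> Float { return x }

let x: Float = 1/2  // both literals and the division are now Float
foobar(x)           // 0.5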



Source: https://stackoverflow.com/questions/52392373/how-does-swifts-int-literal-to-float-inference-work
