Question:
I am trying to change all the columns of a Spark DataFrame to double type, but I want to know if there is a better way of doing it than just looping over the columns and casting each one.
Answer 1:
With this dataframe:
df = spark.createDataFrame(
    [
        (1, 2),
        (2, 3),
    ],
    ["foo", "bar"],
)
df.show()
+---+---+
|foo|bar|
+---+---+
| 1| 2|
| 2| 3|
+---+---+
the for loop is probably the easiest and most natural solution.
from pyspark.sql import functions as F
for col in df.columns:
    df = df.withColumn(
        col,
        F.col(col).cast("double")
    )
df.show()
+---+---+
|foo|bar|
+---+---+
|1.0|2.0|
|2.0|3.0|
+---+---+
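If you want to confirm that the types really changed (the displayed values alone can be ambiguous), you can inspect the schema of the reassigned df; it should now print something like:
df.printSchema()
root
 |-- foo: double (nullable = true)
 |-- bar: double (nullable = true)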
Of course, you can also use a Python comprehension:
df.select(
    *(
        F.col(col).cast("double").alias(col)
        for col in df.columns
    )
).show()
+---+---+
|foo|bar|
+---+---+
|1.0|2.0|
|2.0|3.0|
+---+---+
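Note that select does not modify df in place, so assign the result if you want to keep the casted DataFrame (df_double here is just an illustrative name):
df_double = df.select(
    *(F.col(col).cast("double").alias(col) for col in df.columns)
)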
If you have a lot of columns, the second solution is a little bit better: it builds a single select projection instead of chaining one withColumn call per column, which keeps the query plan smaller.
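On Spark 3.3 and later there is also DataFrame.withColumns, which applies a whole dict of column expressions in a single call, so you avoid both the explicit loop and repeated withColumn calls. A minimal sketch, assuming a df with the original integer columns:
from pyspark.sql import functions as F

# cast every column to double in one pass (requires Spark >= 3.3)
df = df.withColumns({c: F.col(c).cast("double") for c in df.columns})
df.show()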
Source: https://stackoverflow.com/questions/54399188/how-to-change-all-columns-to-double-type-in-a-spark-dataframe