Spark Data Frames - Check if column is of type integer

Submitted by 扶醉桌前 on 2019-12-08 10:52:18

Question


I am trying to figure out what data type a column in my Spark DataFrame is, and then manipulate the column based on that deduction.

Here is what I have so far:

import pyspark
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('MyApp').getOrCreate()
df = spark.read.csv('Path To csv File',inferSchema=True,header=True)

for x in df.columns:
    if type(x) == 'integer':
        print(x+": inside if loop")

The print(x+": inside if loop") statement never seems to execute, but I am sure that several of the columns are of integer type. What am I missing here?


Answer 1:


You are iterating over the names of your columns, so type(x) will never equal "integer" (x is always a string).

You need to use pyspark.sql.DataFrame.dtypes:

# df.dtypes is a list of (column name, type string) pairs
for x, t in df.dtypes:
    if t == "int":
        print("{col} is integer type".format(col=x))

It can also be useful to look at the schema using df.printSchema().
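If you prefer working with the typed schema objects rather than type strings, a minimal sketch along the same lines (reusing the df from the question) can compare each field's dataType against pyspark.sql.types.IntegerType. Note that inferSchema may also produce LongType for columns containing larger values.

from pyspark.sql.types import IntegerType

# df.schema.fields is a list of StructField objects, each carrying
# a column name and a dataType that can be checked with isinstance
for field in df.schema.fields:
    if isinstance(field.dataType, IntegerType):
        print("{col} is integer type".format(col=field.name))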




Answer 2:


You can try:

dict(df.dtypes)['column name'] == 'int'

df.dtypes returns a list of tuples, and the easiest way to look up the type string for a particular column is to convert that list to a dict.
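For example, a small sketch of acting on the looked-up type (the column name 'age' and the doubling transformation are hypothetical, just for illustration):

from pyspark.sql import functions as F

types = dict(df.dtypes)

# 'age' is a hypothetical column; transform it only if it is an int column
if types.get('age') == 'int':
    df = df.withColumn('age', F.col('age') * 2)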




Answer 3:


Try:

if type(x) == int:

type(x) doesn't return 'integer'; for an integer value it returns the type int.
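For illustration, in plain Python:

type(5) == int        # True: type() returns the type object int
type(5) == 'integer'  # False: a type object never equals a string

Note, however, that in the question's loop x is a column name (a string), so this check alone won't find integer columns; see the dtypes-based answers above.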



Source: https://stackoverflow.com/questions/49784063/spark-data-frames-check-if-column-is-of-type-integer
