Converting Pandas dataframe into Spark dataframe error


Question


I'm trying to convert a pandas DataFrame into a Spark DataFrame. The first rows of the DataFrame look like this:

10000001,1,0,1,12:35,OK,10002,1,0,9,f,NA,24,24,0,3,9,0,0,1,1,0,0,4,543
10000001,2,0,1,12:36,OK,10002,1,0,9,f,NA,24,24,0,3,9,2,1,1,3,1,3,2,611
10000002,1,0,4,12:19,PA,10003,1,1,7,f,NA,74,74,0,2,15,2,0,2,3,1,2,2,691

Code:

import pandas as pd
from pyspark import SparkContext
from pyspark.sql import SQLContext

dataset = pd.read_csv("data/AS/test_v2.csv")
sc = SparkContext(conf=conf)  # conf is a SparkConf defined earlier
sqlCtx = SQLContext(sc)
sdf = sqlCtx.createDataFrame(dataset)

And I got an error:

TypeError: Can not merge type <class 'pyspark.sql.types.StringType'> and <class 'pyspark.sql.types.DoubleType'>

Answer 1:


You need to make sure your pandas DataFrame columns are appropriate for the types Spark is inferring. If your pandas DataFrame info lists something like:

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5062 entries, 0 to 5061
Data columns (total 51 columns):
SomeCol                    5062 non-null object
Col2                       5062 non-null object

and you're getting that error, try:

df[['SomeCol', 'Col2']] = df[['SomeCol', 'Col2']].astype(str)

Now, make sure str is actually the type you want those columns to be. Basically, when the underlying Java code tries to infer the type from a Python object, it uses a few observations and makes a guess. If that guess doesn't apply to all the data in the column(s) it's trying to convert from pandas to Spark, the conversion will fail. A sketch of this cleanup follows.
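A minimal sketch of that cleanup, using the CSV path and sqlCtx from the question (selecting the object-dtype columns is an assumption about where the mixed types live):

import pandas as pd

pdf = pd.read_csv("data/AS/test_v2.csv")

# Cast every object-dtype column to str so Spark infers a single StringType
# for it instead of guessing from a sample of rows.
obj_cols = pdf.select_dtypes(include="object").columns
pdf[obj_cols] = pdf[obj_cols].astype(str)

sdf = sqlCtx.createDataFrame(pdf)  # sqlCtx as created in the question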




Answer 2:


Type-related errors can be avoided by imposing a schema, as follows:

Note: a text file (test.csv) was created with the original data (as above), and hypothetical column names were inserted ("col1", "col2", ..., "col25").

import pyspark
from pyspark.sql import SparkSession
import pandas as pd

spark = SparkSession.builder.appName('pandasToSparkDF').getOrCreate()

pdDF = pd.read_csv("test.csv")

contents of the pandas data frame:

pdDF

col1    col2    col3    col4    col5    col6    col7    col8    col9    col10   ... col16   col17   col18   col19   col20   col21   col22   col23   col24   col25
0   10000001    1   0   1   12:35   OK  10002   1   0   9   ... 3   9   0   0   1   1   0   0   4   543
1   10000001    2   0   1   12:36   OK  10002   1   0   9   ... 3   9   2   1   1   3   1   3   2   611
2   10000002    1   0   4   12:19   PA  10003   1   1   7   ... 2   15  2   0   2   3   1   2   2   691

Next, create the schema:

from pyspark.sql.types import *

mySchema = StructType([StructField("Col1", LongType(), True),
                       StructField("Col2", IntegerType(), True),
                       StructField("Col3", IntegerType(), True),
                       StructField("Col4", IntegerType(), True),
                       StructField("Col5", StringType(), True),
                       StructField("Col6", StringType(), True),
                       StructField("Col7", IntegerType(), True),
                       StructField("Col8", IntegerType(), True),
                       StructField("Col9", IntegerType(), True),
                       StructField("Col10", IntegerType(), True),
                       StructField("Col11", StringType(), True),
                       StructField("Col12", StringType(), True),
                       StructField("Col13", IntegerType(), True),
                       StructField("Col14", IntegerType(), True),
                       StructField("Col15", IntegerType(), True),
                       StructField("Col16", IntegerType(), True),
                       StructField("Col17", IntegerType(), True),
                       StructField("Col18", IntegerType(), True),
                       StructField("Col19", IntegerType(), True),
                       StructField("Col20", IntegerType(), True),
                       StructField("Col21", IntegerType(), True),
                       StructField("Col22", IntegerType(), True),
                       StructField("Col23", IntegerType(), True),
                       StructField("Col24", IntegerType(), True),
                       StructField("Col25", IntegerType(), True)])

Note: the True argument means the column is nullable.

create the pyspark dataframe:

df = spark.createDataFrame(pdDF,schema=mySchema)

confirm that the result is a PySpark DataFrame:

type(df)

output:

pyspark.sql.dataframe.DataFrame
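To also check that the imposed column types took effect, you can print the schema; the output (abbreviated here) simply mirrors mySchema above:

df.printSchema()

root
 |-- Col1: long (nullable = true)
 |-- Col2: integer (nullable = true)
 ...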

Aside:

To address Kate's comment below: to impose a general (string) schema, you can do the following:

df=spark.createDataFrame(pdDF.astype(str)) 
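Note that with astype(str) every column arrives in Spark as a string, so numeric columns have to be cast back before doing arithmetic on them. A small sketch, using one of the hypothetical column names from above:

from pyspark.sql.functions import col

df = df.withColumn("col1", col("col1").cast("long"))  # string -> long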



Answer 3:


I wrote this helper; it worked for my 10 pandas DataFrames:

from pyspark.sql.types import *

# Auxiliary functions
def equivalent_type(f):
    # Map a pandas dtype to a Spark type; anything unrecognized becomes a string.
    if f == 'datetime64[ns]': return DateType()
    elif f == 'int64': return LongType()
    elif f == 'int32': return IntegerType()
    elif f == 'float64': return FloatType()
    else: return StringType()

def define_structure(string, format_type):
    try: typo = equivalent_type(format_type)
    except: typo = StringType()
    return StructField(string, typo)


# Given a pandas DataFrame, return a Spark DataFrame with an explicit schema.
def pandas_to_spark(pandas_df):
    columns = list(pandas_df.columns)
    types = list(pandas_df.dtypes)
    struct_list = []
    for column, typo in zip(columns, types):
        struct_list.append(define_structure(column, typo))
    p_schema = StructType(struct_list)
    return sqlContext.createDataFrame(pandas_df, p_schema)  # assumes an existing sqlContext

You can also see it in this gist.

With this, you just have to call spark_df = pandas_to_spark(pandas_df).
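A short usage sketch under the same assumptions (an active sqlContext; the column names and values are hypothetical):

import pandas as pd

pandas_df = pd.DataFrame({
    "user_id": [10000001, 10000002],   # int64   -> LongType
    "status":  ["OK", "PA"],           # object  -> StringType
    "score":   [543.0, 691.0],         # float64 -> FloatType
})

spark_df = pandas_to_spark(pandas_df)
spark_df.printSchema()
spark_df.show()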




Answer 4:


I have tried this with your data and it works:

%pyspark
# Zeppelin PySpark cell; sc is the SparkContext provided by the notebook
import pandas as pd
from pyspark.sql import SQLContext

print(sc)
df = pd.read_csv("test.csv")
print(type(df))
print(df)
sqlCtx = SQLContext(sc)
sqlCtx.createDataFrame(df).show()



Answer 5:


I received a similar error message once; in my case it was because my pandas DataFrame contained NULLs. I recommend trying to handle these in pandas before converting to Spark (this resolved the issue in my case).
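A minimal sketch of that cleanup, assuming the NULLs live in columns that should end up as strings (the file name, column name, and SparkSession are placeholders):

import pandas as pd

pdf = pd.read_csv("test.csv")

# Replace NaN with None so Spark sees proper nulls instead of mixed types,
# or fill string columns with an empty string if that suits the data better.
pdf = pdf.where(pd.notnull(pdf), None)
# pdf["some_col"] = pdf["some_col"].fillna("")   # hypothetical column name

sdf = spark.createDataFrame(pdf)   # assumes an existing SparkSession named spark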



Source: https://stackoverflow.com/questions/37513355/converting-pandas-dataframe-into-spark-dataframe-error
