Removing duplicate columns after a DF join in Spark


When you join two DFs with similar column names:

df = df1.join(df2, df1['id'] == df2['id'])

The join works fine, but you can't call the id column afterwards because it is ambiguous between the two DFs.
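
For illustration, a minimal reproduction of the problem (made-up data, assuming an active SparkSession named spark):

df1 = spark.createDataFrame([(1, 'x')], ['id', 'val1'])
df2 = spark.createDataFrame([(1, 'y')], ['id', 'val2'])

df = df1.join(df2, df1['id'] == df2['id'])
df.columns        # ['id', 'val1', 'id', 'val2'] -- 'id' appears twice
df.select('id')   # AnalysisException: Reference 'id' is ambiguous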

7 Answers
  • 2020-12-24 05:54

    After I've joined multiple tables together, I run them through a simple function that drops duplicate columns from the DF as it walks the column list from left to right, keeping the first occurrence. Alternatively, you could rename these columns instead.

    Where Names is a table with columns ['Id', 'Name', 'DateId', 'Description'] and Dates is a table with columns ['Id', 'Date', 'Description'], the columns Id and Description will be duplicated after being joined.

    Names = sparkSession.sql("SELECT * FROM Names")
    Dates = sparkSession.sql("SELECT * FROM Dates")
    NamesAndDates = Names.join(Dates, Names.DateId == Dates.Id, "inner")
    NamesAndDates = dropDupeDfCols(NamesAndDates)
    NamesAndDates.write.saveAsTable("...", format="parquet", mode="overwrite", path="...")
    

    Where dropDupeDfCols is defined as:

    def dropDupeDfCols(df):
        newcols = []  # unique column names, in first-seen order
        dupcols = []  # positional indexes of repeated names

        for i in range(len(df.columns)):
            if df.columns[i] not in newcols:
                newcols.append(df.columns[i])
            else:
                dupcols.append(i)

        # Temporarily rename every column to its positional index so the
        # duplicates can be dropped unambiguously, then restore the names.
        df = df.toDF(*[str(i) for i in range(len(df.columns))])
        for dupcol in dupcols:
            df = df.drop(str(dupcol))

        return df.toDF(*newcols)
    

    The resulting data frame will contain columns ['Id', 'Name', 'DateId', 'Description', 'Date'].
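
    For a quick check, here is a minimal sketch of the function in action, with made-up rows (sparkSession as above):

    Names = sparkSession.createDataFrame([(1, 'Alice', 10, 'x')],
                                         ['Id', 'Name', 'DateId', 'Description'])
    Dates = sparkSession.createDataFrame([(10, '2020-01-01', 'y')],
                                         ['Id', 'Date', 'Description'])
    joined = Names.join(Dates, Names.DateId == Dates.Id, 'inner')
    joined.columns
    # ['Id', 'Name', 'DateId', 'Description', 'Id', 'Date', 'Description']
    dropDupeDfCols(joined).columns
    # ['Id', 'Name', 'DateId', 'Description', 'Date']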

  • 2020-12-24 06:05

    With df.join(other, on, how): when on is a column name string, or a list of column name strings, the returned dataframe will prevent duplicate columns. When on is a join expression, the result will contain duplicate columns, which you can remove with .drop(df.a). Example:

    cond = [df.a == other.a, df.b == other.bb, df.c == other.ccc]
    # result will have duplicate column a
    result = df.join(other, cond, 'inner').drop(df.a)
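
    For illustration, a self-contained sketch of both behaviors, with made-up frames (assuming an active SparkSession named spark):

    df = spark.createDataFrame([(1, 2, 3)], ['a', 'b', 'c'])
    other = spark.createDataFrame([(1, 2, 3)], ['a', 'bb', 'ccc'])

    # Join expression: both sides' columns survive, so 'a' appears twice
    cond = [df.a == other.a, df.b == other.bb, df.c == other.ccc]
    df.join(other, cond, 'inner').columns
    # ['a', 'b', 'c', 'a', 'bb', 'ccc']

    # Dropping df.a removes the left-hand copy
    df.join(other, cond, 'inner').drop(df.a).columns
    # ['b', 'c', 'a', 'bb', 'ccc']

    # Column name string: the shared column is kept only once
    df.join(other, 'a', 'inner').columns
    # ['a', 'b', 'c', 'bb', 'ccc']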
    
  • 2020-12-24 06:06

    In my case I had a dataframe with multiple duplicate columns after joins, and I was trying to save that dataframe in CSV format, but due to the duplicate columns I was getting an error. I followed the steps below to drop the duplicate columns. The code is in Scala.

    1) Rename all the duplicate columns and make a new dataframe
    2) Make a separate list of all the renamed columns
    3) Make a new dataframe with all columns (including the renamed ones from step 1)
    4) Drop all the renamed columns

    import scala.collection.mutable
    import org.apache.spark.sql.DataFrame

    private def removeDuplicateColumns(dataFrame: DataFrame): DataFrame = {
      val allColumns: mutable.MutableList[String] = mutable.MutableList()
      val dup_Columns: mutable.MutableList[String] = mutable.MutableList()
      dataFrame.columns.foreach { (i: String) =>
        if (allColumns.contains(i)) {
          // Second (or later) occurrence: rename it with a "dup_" prefix
          allColumns += "dup_" + i
          dup_Columns += "dup_" + i
        } else {
          allColumns += i
        }
      }
      // Apply the renamed column list, then drop every renamed duplicate
      val columnSeq = allColumns.toSeq
      val df = dataFrame.toDF(columnSeq: _*)
      val unDF = df.drop(dup_Columns: _*)
      unDF
    }
    

    To call the above function, use the code below and pass in the dataframe that contains the duplicate columns:

    val uniColDF = removeDuplicateColumns(df)
    
  • 2020-12-24 06:08

    In PySpark, you can join on multiple columns as below; passing a list of column names keeps a single copy of each shared column:

    df = df1.join(df2, ['each', 'shared', 'col'], how='full')
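
    For illustration, a minimal sketch with made-up frames (assuming an active SparkSession named spark; 'each', 'shared' and 'col' stand in for your shared column names):

    df1 = spark.createDataFrame([(1, 2, 3, 'L')], ['each', 'shared', 'col', 'left_only'])
    df2 = spark.createDataFrame([(1, 2, 3, 'R')], ['each', 'shared', 'col', 'right_only'])
    df = df1.join(df2, ['each', 'shared', 'col'], how='full')
    df.columns
    # ['each', 'shared', 'col', 'left_only', 'right_only'] -- one copy of each join column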
    

    Original answer from: How to perform union on two DataFrames with different amounts of columns in spark?

  • 2020-12-24 06:17

    Assuming 'a' is a dataframe with column 'id' and 'b' is another dataframe with column 'id', I use the following two methods to remove duplicates:

    Method 1: Using a string join expression as opposed to a boolean expression. This automatically removes the duplicate column for you:

    a.join(b, 'id')
    

    Method 2: Renaming the column before the join and dropping it after

    b = b.withColumnRenamed('id', 'b_id')
    joinexpr = a['id'] == b['b_id']
    a.join(b, joinexpr).drop('b_id')
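
    An end-to-end check of both methods, with made-up frames (assuming an active SparkSession named spark):

    a = spark.createDataFrame([(1, 'foo')], ['id', 'name'])
    b = spark.createDataFrame([(1, 'bar')], ['id', 'role'])

    # Method 1: the shared column appears only once
    a.join(b, 'id').columns                  # ['id', 'name', 'role']

    # Method 2: rename, join on an expression, then drop the renamed copy
    b2 = b.withColumnRenamed('id', 'b_id')
    a.join(b2, a['id'] == b2['b_id']).drop('b_id').columns
    # ['id', 'name', 'role']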
    
  • 2020-12-24 06:18

    The code below works with Spark 1.6.0 and above.

    salespeople_df.show()
    +---+------+-----+
    |Num|  Name|Store|
    +---+------+-----+
    |  1| Henry|  100|
    |  2| Karen|  100|
    |  3|  Paul|  101|
    |  4| Jimmy|  102|
    |  5|Janice|  103|
    +---+------+-----+
    
    storeaddress_df.show()
    +-----+--------------------+
    |Store|             Address|
    +-----+--------------------+
    |  100|    64 E Illinos Ave|
    |  101|         74 Grand Pl|
    |  102|          2298 Hwy 7|
    |  103|No address available|
    +-----+--------------------+
    

    Assuming, in this example, that the shared column has the same name in both dataframes:

    joined = salespeople_df.join(storeaddress_df, ['Store'])
    joined.orderBy('Num', ascending=True).show()
    
    +-----+---+------+--------------------+
    |Store|Num|  Name|             Address|
    +-----+---+------+--------------------+
    |  100|  1| Henry|    64 E Illinos Ave|
    |  100|  2| Karen|    64 E Illinos Ave|
    |  101|  3|  Paul|         74 Grand Pl|
    |  102|  4| Jimmy|          2298 Hwy 7|
    |  103|  5|Janice|No address available|
    +-----+---+------+--------------------+
    

    Joining on the column name list prevents the duplication of the shared column.

    If you then want to remove a column, Num in this example, you can just use .drop('colname'):

    joined = joined.drop('Num')
    joined.show()
    
    +-----+------+--------------------+
    |Store|  Name|             Address|
    +-----+------+--------------------+
    |  103|Janice|No address available|
    |  100| Henry|    64 E Illinos Ave|
    |  100| Karen|    64 E Illinos Ave|
    |  101|  Paul|         74 Grand Pl|
    |  102| Jimmy|          2298 Hwy 7|
    +-----+------+--------------------+
    