Spark dataframe filter

Submitted 2019-12-04 19:24:14

Question


val df = sc.parallelize(Seq((1,"Emailab"), (2,"Phoneab"), (3, "Faxab"),(4,"Mail"),(5,"Other"),(6,"MSL12"),(7,"MSL"),(8,"HCP"),(9,"HCP12"))).toDF("c1","c2")

+---+-------+
| c1|     c2|
+---+-------+
|  1|Emailab|
|  2|Phoneab|
|  3|  Faxab|
|  4|   Mail|
|  5|  Other|
|  6|  MSL12|
|  7|    MSL|
|  8|    HCP|
|  9|  HCP12|
+---+-------+

I want to filter out records whose column 'c2' starts with either 'MSL' or 'HCP' (i.e. the first 3 characters match one of those prefixes).

So the output should be like below.

+---+-------+
| c1|     c2|
+---+-------+
|  1|Emailab|
|  2|Phoneab|
|  3|  Faxab|
|  4|   Mail|
|  5|  Other|
+---+-------+

Can anyone please help with this?

I know that df.filter($"c2".rlike("MSL")) selects the matching records, but how do I exclude them?

Version: Spark 1.6.2, Scala 2.10


Answer 1:


import org.apache.spark.sql.functions.{col, not, substring}

// substring positions are 1-based in Spark SQL:
// take the first 3 characters of c2 and exclude the listed prefixes
df.filter(not(substring(col("c2"), 1, 3).isin("MSL", "HCP")))
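An equivalent way to express the same exclusion, without computing a substring at all, is `Column.startsWith` — a sketch, assuming the standard `org.apache.spark.sql.functions` import and the `df` defined in the question:

```scala
import org.apache.spark.sql.functions.{col, not}

// Keep only rows whose c2 does NOT begin with either prefix.
val result = df.filter(
  not(col("c2").startsWith("MSL")) && not(col("c2").startsWith("HCP")))
result.show()
```

`startsWith` compares literally (no regex), which avoids both the 1-based `substring` indexing and accidental regex metacharacters.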



Answer 2:


This works too. Concise and very similar to SQL.

df.filter("c2 not like 'MSL%' and c2 not like 'HCP%'").show
+---+-------+
| c1|     c2|
+---+-------+
|  1|Emailab|
|  2|Phoneab|
|  3|  Faxab|
|  4|   Mail|
|  5|  Other|
+---+-------+
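The same predicate can also be run as plain SQL against a temporary view — a sketch, assuming the Spark 1.6-era `registerTempTable` and shell-provided `sqlContext` (in Spark 2.x, use `df.createOrReplaceTempView` and `spark.sql` instead):

```scala
// Register the DataFrame so it can be queried by name.
df.registerTempTable("t")  // Spark 2.x: df.createOrReplaceTempView("t")

val result = sqlContext.sql(
  "SELECT c1, c2 FROM t WHERE c2 NOT LIKE 'MSL%' AND c2 NOT LIKE 'HCP%'")
result.show()
```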



Answer 3:


I used the code below to filter rows from a DataFrame, and this worked for me (Spark 2.2).

// Note: in Spark 2.x, SparkSession is the preferred entry point;
// SQLContext still works but is kept here as in the original answer.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val data = sqlContext.read.format("csv").
          option("header", "true").
          option("delimiter", "|").
          option("inferSchema", "true").
          load("D:\\test.csv")

import sqlContext.implicits._
val filtered = data.filter($"dept" === "IT")

or, to exclude the matching rows:

val filtered = data.filter($"dept" =!= "IT")



Answer 4:


val df1 = df.filter(not(df("c2").rlike("MSL")) && not(df("c2").rlike("HCP")))

This worked.
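One caveat: `rlike` matches the pattern anywhere in the string, not just at the start. Anchoring the regex makes the "first 3 characters" intent explicit — a sketch using the `df` from the question:

```scala
import org.apache.spark.sql.functions.not

// "^" anchors the regex to the start of the string, so only true
// prefixes are excluded (a hypothetical value like "XMSL" would be kept).
val df1 = df.filter(not(df("c2").rlike("^MSL")) && not(df("c2").rlike("^HCP")))
df1.show()
```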



Source: https://stackoverflow.com/questions/42951905/spark-dataframe-filter
