How to check a Boolean condition from another DataFrame

Submitted by 大兔子大兔子 on 2020-01-06 07:06:10

Question


I have three DataFrames: the first is the base DF, the second is the behavior DF, and the third is the rule DF.

Base DF:
+---+----+------+
| ID|Name|Salary|
+---+----+------+
|  1|   A|   100|
|  2|   B|   200|
|  3|   C|   300|
|  4|   D|  1000|
|  5|   E|   500|
+---+----+------+

Behavior DF:
+----+---------+------+
|S.NO|Operation|Points|
+----+---------+------+
|   1|  a AND b|   100|
|   2|   a OR b|   200|
|   3|otherwise|     0|
+----+---------+------+

Rule DF:
+----+-----+------+------------+-----+
|RULE|Table|   col|   operation|value|
+----+-----+------+------------+-----+
|   a| Base|Salary|       equal| 1000|
|   b| Base|Salary|Greater Than|  500|
+----+-----+------+------------+-----+

I want to calculate the reward points for every person, add a column named Reward to the base DF, and check the conditions in the behavior DF: if a AND b is true, 100 points are assigned; if a OR b is true, 200 points are assigned; otherwise 0 points are assigned, where conditions a and b are defined in the Rule table.

Expected DF:
+---+----+------+------+
| ID|Name|Salary|Reward|
+---+----+------+------+
|  1|   A|   100|     0|
|  2|   B|   200|     0|
|  3|   C|   300|     0|
|  4|   D|  1000|   200|
|  5|   E|   500|     0|
+---+----+------+------+

Answer 1:


You can follow this approach:

I made slight changes to the Rule and Behavior DataFrames: operations are stored as operator symbols ("==") instead of strings ("equal"), and the Behavior operations are stored as expressions that can be evaluated directly.

Base = spark.createDataFrame([(1,'A',100),(2,'B',200),(3,'C',300),(4,'D',1000),(5,'E',500)],['ID','Name','Salary'])

Behavior = spark.createDataFrame([(1,'df.rule_a & df.rule_b',100),(2,'df.rule_a | df.rule_b',200),(3,'otherwise',0)],['SNo','Operation','Points'])

Rule = spark.createDataFrame([(1,'Base','Salary','==',1000),(2,'Base','Salary','>',500)],['RULE','Table','col','operation','value'])

Base.show()

#+---+----+------+
#| ID|Name|Salary|
#+---+----+------+
#|  1|   A|   100|
#|  2|   B|   200|
#|  3|   C|   300|
#|  4|   D|  1000|
#|  5|   E|   500|
#+---+----+------+

Behavior.show()

#+---+---------------------+------+
#|SNo|            Operation|Points|
#+---+---------------------+------+
#|  1|df.rule_a & df.rule_b|   100|
#|  2|df.rule_a | df.rule_b|   200|
#|  3|            otherwise|     0|
#+---+---------------------+------+

Rule.show()

#+----+-----+------+---------+-----+
#|RULE|Table|   col|operation|value|
#+----+-----+------+---------+-----+
#|   1| Base|Salary|       ==| 1000|
#|   2| Base|Salary|        >|  500|
#+----+-----+------+---------+-----+

Prepare the logic for the rules stored in the Rule DataFrame

For dynamically preparing the rules, you can run a for loop over the Rule DataFrame and pass the iteration number to the filter transformation and the rule variable; a sketch of such a loop follows the two manual definitions below.

from pyspark.sql.functions import col, concat, lit

# Builds the string "Base['Salary'] == 1000" from rule row 1
rule_a = Rule.filter("RULE == 1").select(concat(col("Table"), lit("['"), col("col"), lit("']"), lit(" "), col("operation"), lit(" "), col("value"))).collect()[0][0]

# Builds the string "Base['Salary'] > 500" from rule row 2
rule_b = Rule.filter("RULE == 2").select(concat(col("Table"), lit("['"), col("col"), lit("']"), lit(" "), col("operation"), lit(" "), col("value"))).collect()[0][0]
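
As mentioned above, the same rule strings can also be built dynamically. A minimal sketch of such a loop, assuming the modified Rule DataFrame from this answer (the rules dict is an illustrative name, not part of the original answer):

rules = {}
for row in Rule.collect():
    # e.g. rules[1] = "Base['Salary'] == 1000", rules[2] = "Base['Salary'] > 500"
    rules[row['RULE']] = "{0}['{1}'] {2} {3}".format(row['Table'], row['col'], row['operation'], row['value'])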

Add the boolean results of the rule evaluation to the DataFrame

# eval() turns each rule string into a Column expression, e.g. Base['Salary'] == 1000
df = Base.withColumn("rule_a", eval(rule_a)).withColumn("rule_b", eval(rule_b))

df.show()

#+---+----+------+------+------+
#| ID|Name|Salary|rule_a|rule_b|
#+---+----+------+------+------+
#|  1|   A|   100| false| false|
#|  2|   B|   200| false| false|
#|  3|   C|   300| false| false|
#|  4|   D|  1000|  true|  true|
#|  5|   E|   500| false| false|
#+---+----+------+------+------+

Store each behavior and its corresponding points from the Behavior DataFrame in variables

For dynamically preparing the variables, you can run a for loop over the Behavior DataFrame and pass the iteration number as a variable to the filter transformation and the column name; a sketch follows the manual assignments below.

# Pull each operation string and its points out of the Behavior DataFrame
behavior1 = Behavior.filter("SNo==1").select(col("Operation")).collect()[0][0]
behavior1_points = Behavior.filter("SNo==1").select(col("Points")).collect()[0][0]

behavior2 = Behavior.filter("SNo==2").select(col("Operation")).collect()[0][0]
behavior2_points = Behavior.filter("SNo==2").select(col("Points")).collect()[0][0]

behavior3 = Behavior.filter("SNo==3").select(col("Operation")).collect()[0][0]
behavior3_points = Behavior.filter("SNo==3").select(col("Points")).collect()[0][0]
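
Likewise, a minimal sketch that collects every operation and its points in one pass instead of filtering three times (the behaviors dict is an illustrative name, not part of the original answer):

behaviors = {row['SNo']: (row['Operation'], row['Points']) for row in Behavior.collect()}

# behaviors[1] == ('df.rule_a & df.rule_b', 100)
# behaviors[3] == ('otherwise', 0)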

Final solution

from pyspark.sql.functions import lit, when, col, greatest

# Evaluate each behavior expression, then take the greatest applicable points
df\
  .withColumn("b1", eval(behavior1))\
  .withColumn("b2", eval(behavior2))\
  .select('*',
          greatest(when(col('b1'), lit(behavior1_points)).otherwise(0),
                   when(col('b2'), lit(behavior2_points)).otherwise(0),
                   lit(behavior3_points)).alias('point'))\
  .drop('rule_a', 'rule_b', 'b1', 'b2')\
  .show()

#+---+----+------+-----+
#| ID|Name|Salary|point|
#+---+----+------+-----+
#|  1|   A|   100|    0|
#|  2|   B|   200|    0|
#|  3|   C|   300|    0|
#|  4|   D|  1000|  200|
#|  5|   E|   500|    0|
#+---+----+------+-----+
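
Note that the output column is named point here; to match the expected DF, alias the greatest(...) expression as 'Reward' instead, or call .withColumnRenamed('point', 'Reward') on the result.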


Source: https://stackoverflow.com/questions/56416679/how-to-check-the-boolean-condition-from-the-another-dataframe
