I'm trying to figure out the best way to get the largest value in a Spark dataframe column.
Consider the following example:
df = spark.createDataFrame([("Alice", 29), ("Bob", 45), ("Carol", 77)], ["name", "age"])
First, add the import line:
from pyspark.sql.functions import min, max
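Note that this import shadows Python's built-in min and max in your namespace. If that's a concern, a common alternative (my suggestion, not required for the examples below) is to import the functions module under an alias:

import pyspark.sql.functions as F
# F.min / F.max leave the built-ins untouched
df.agg(F.max("age")).show()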
df.agg(min("age")).show()
+--------+
|min(age)|
+--------+
| 29|
+--------+
df.agg(max("age")).show()
+--------+
|max(age)|
+--------+
| 77|
+--------+
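show() only prints the result. Since you want to get the largest value itself, one common pattern (a sketch, assuming the df defined above) is to collect the aggregated row and index into it:

# collect() returns a list of Row objects; the aggregation produces
# exactly one row with one column, so [0][0] extracts the value.
max_age = df.agg(max("age")).collect()[0][0]
print(max_age)  # 77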