Question
I'm new to Spark. I have a DataFrame that contains the results of some analysis. I converted that DataFrame to JSON so I could display it in a Flask app:
results = result.toJSON().collect()
An example entry from the resulting JSON is below. I then tried to run a for loop to pull out specific results:
{"userId":"1","systemId":"30","title":"interest"}
for i in results:
    print i["userId"]
This doesn't work at all, and I get errors such as: TypeError: expected string or buffer
I used json.dumps and json.loads, and still nothing; I keep getting errors such as string indices must be integers, as well as the error above.
I then tried this:
print i[0]
This gave me the first character of the JSON string, "{", instead of the first entry. I don't really know what to do; can anyone tell me where I'm going wrong?
Many Thanks.
Answer 1:
If the result of result.toJSON().collect() is a JSON-encoded string, then you would use json.loads() to convert it to a dict. The issue you're running into is that when you iterate over a dict with a for loop, you're given the keys of the dict. In your for loop, you're treating the key as if it's a dict, when in fact it is just a string. Try this:
import json

# toJSON() turns each row of the DataFrame into a JSON string;
# calling first() on the result will fetch the first row.
results = json.loads(result.toJSON().first())
for key in results:
    print results[key]
# To decode the entire DataFrame, iterate over the result
# of toJSON().
def print_rows(row):
    data = json.loads(row)
    for key in data:
        print "{key}:{value}".format(key=key, value=data[key])

results = result.toJSON()
results.foreach(print_rows)
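One caveat with foreach: it runs on the executors, so in a cluster the print output ends up in the executor logs rather than on the driver console. If you want the decoded rows back on the driver one at a time, a minimal sketch (my addition, not part of the original answer) using toLocalIterator() would be:

import json

# toLocalIterator() streams rows to the driver one partition at a
# time, so only a single partition has to fit in driver memory.
for row in result.toJSON().toLocalIterator():
    data = json.loads(row)
    for key in data:
        print("{0}:{1}".format(key, data[key]))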
EDIT: The issue is that collect returns a list, not a dict. I've updated the code. Always read the docs.
collect(): Return a list that contains all of the elements in this RDD.
Note: This method should only be used if the resulting array is expected to be small, as all the data is loaded into the driver's memory.
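Putting this together for the question's original loop: decode each element of the list that collect() returns before indexing into it. A minimal sketch, assuming each row still has the userId field from the question:

import json

# collect() returns a list of JSON strings, one per row, so each
# element has to be decoded before it can be indexed by key.
for row in result.toJSON().collect():
    data = json.loads(row)
    print(data["userId"])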
EDIT2: I can't emphasize this enough: always read the docs.
Answer 2:
>>> import json
>>> df = sqlContext.read.table("n1")
>>> df.show()
+-----+-------+----+---------------+-------+----+
| c1| c2| c3| c4| c5| c6|
+-----+-------+----+---------------+-------+----+
|00001|Content| 1|Content-article| |2018|
|00002|Content|null|Content-article|Content|2015|
+-----+-------+----+---------------+-------+----+
>>> results = df.toJSON().map(lambda j: json.loads(j)).collect()
>>> for i in results: print i["c1"], i["c6"]
...
00001 2018
00002 2015
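If the goal is just to get dicts on the driver, the round trip through JSON isn't strictly necessary: Row objects can be converted directly. A minimal sketch (not from the original answer) using Row.asDict():

# collect() returns a list of Row objects; asDict() converts
# each Row into a plain Python dict keyed by column name.
results = [row.asDict() for row in df.collect()]
for i in results:
    print("{0} {1}".format(i["c1"], i["c6"]))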
Answer 3:
Here is what worked for me:
import json

df_json = df.toJSON()
for row in df_json.collect():
    # each element is a JSON string
    print(row)

    # decode it into a JSON object (a dict);
    # some_key stands in for one of your column names
    line = json.loads(row)
    print(line[some_key])
Keep in mind that using .collect() is not advisable, since it pulls the entire distributed DataFrame onto the driver and defeats the purpose of using distributed data frames in the first place.
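If you only need a small sample on the driver, one way (my addition) to keep that caveat in check is to trim the data before collecting:

import json

# limit() trims the DataFrame on the cluster side, so only a
# handful of rows are serialized and shipped to the driver.
sample = [json.loads(row) for row in df.limit(10).toJSON().collect()]
for line in sample:
    print(line)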
Source: https://stackoverflow.com/questions/43232169/converting-a-dataframe-into-json-in-pyspark-and-then-selecting-desired-fields