Write a Spark Dataset to JSON with all keys in the schema, including null columns
Question: I am writing a dataset to JSON using:

    ds.coalesce(1).write.format("json").option("nullValue", null).save("project/src/test/resources")

For records that have columns with null values, the JSON document does not write those keys at all. Is there a way to force null-valued keys into the JSON output? I need this because I use this JSON to read it back onto another dataset (in a test case), and I cannot enforce a schema if some documents do not have all the keys in the case class (I am reading it by putting
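If you are on Spark 3.0 or later, one sketch of a fix is the JSON writer's `ignoreNullFields` option: by default the writer drops null fields, and setting it to `false` keeps every key in the schema with an explicit `null` value. The dataset name and output path below are placeholders taken from the question, not a tested setup:

```scala
// Sketch, assuming Spark 3.0+ where the JSON data source supports
// the ignoreNullFields option. With it set to "false", null columns
// are written as explicit `"key": null` entries instead of being
// omitted, so every document carries all keys in the schema.
ds.coalesce(1)
  .write
  .format("json")
  .option("ignoreNullFields", "false")
  .save("project/src/test/resources")
```

On older Spark versions this option does not exist, so a workaround there would be to serialize rows yourself with a JSON library before writing.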