Cannot save model using PySpark xgboost4j


Question


I have a small PySpark program that uses xgboost4j and xgboost4j-spark in order to train a given dataset in a spark dataframe form.

The training completes, but it seems I cannot save the model.

Current libraries versions:

  • PySpark 2.4.0
  • xgboost4j 0.90
  • xgboost4j-spark 0.90

Spark submit args:

    os.environ['PYSPARK_SUBMIT_ARGS'] = "--py-files dist/DNA-0.0.2-py3.6.egg " \
                                        "--jars dna/resources/xgboost4j-spark-0.90.jar," \
                                        "dna/resources/xgboost4j-0.90.jar pyspark-shell"

The training process is as follows:

def spark_xgboost_train(spark=None, models_path='', train_df=None):
    spark.sparkContext.addPyFile("dna/resources/xgboost4j-spark-0.90.jar")
    spark.sparkContext.addPyFile("dna/resources/xgboost4j-0.90.jar")
    spark.sparkContext.addPyFile('dna/resources/pyspark-xgboost_0.90_261ab52e07bec461c711d209b70428ab481db470.zip')

    import sparkxgb as sxgb
    from sparkxgb import XGBoostClassifier, XGBoostClassificationModel
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler

    # pre-process
    train_df = train_df.drop('url')
    train_df = train_df.na.fill(0)

    x = train_df.columns
    x.remove('label')

    vectorAssembler = VectorAssembler() \
        .setInputCols(x) \
        .setOutputCol("features")

    xgboost = XGBoostClassifier(
        featuresCol="features",
        labelCol="label",
        predictionCol="prediction",
    )

    pipeline = Pipeline().setStages([vectorAssembler])
    df = pipeline.fit(train_df).transform(train_df)
    model = xgboost.fit(df)

    # save
    model.write().overwrite().save(models_path + "model.dat")

The error I get:

Traceback (most recent call last):
  File "/storage/env/DNAtestenv/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/storage/env/DNAtestenv/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/elad/DNA/dna/__main__.py", line 360, in <module>
    main()
  File "/home/elad/DNA/dna/__main__.py", line 325, in main
    run_pipelines(config)
  File "/home/elad/DNA/dna/__main__.py", line 311, in run_pipelines
    objective=config['objective'], nthread=config['nthread'])
  File "/home/elad/DNA/dna/__main__.py", line 234, in train_model
    max_depth=max_depth, eta=eta, silent=silent, objective=objective, nthread=1)
  File "/home/elad/DNA/dna/model/xgboost_train.py", line 82, in spark_xgboost_train
    model.write().save(models_path + '/model.dat')
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/pyspark/ml/util.py", line 183, in save
    self._jwrite.save(path)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/storage/env/DNAtestenv/lib/python3.7/site-packages/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o484.save.
: java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1$$anonfun$3.apply(DefaultXGBoostParamsWriter.scala:73)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1$$anonfun$3.apply(DefaultXGBoostParamsWriter.scala:71)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.AbstractTraversable.map(Traversable.scala:104)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1.apply(DefaultXGBoostParamsWriter.scala:71)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$$anonfun$1.apply(DefaultXGBoostParamsWriter.scala:69)
    at scala.Option.getOrElse(Option.scala:121)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$.getMetadataToSave(DefaultXGBoostParamsWriter.scala:69)
    at ml.dmlc.xgboost4j.scala.spark.params.DefaultXGBoostParamsWriter$.saveMetadata(DefaultXGBoostParamsWriter.scala:51)
    at ml.dmlc.xgboost4j.scala.spark.XGBoostModel$XGBoostModelModelWriter.saveImpl(XGBoostModel.scala:371)
    at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:180)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:745)

What I would like to do is to save and load the model, like this:

    # save
    model.write().save(models_path + '/model.dat')

    # load
    model2 = sxgb.xgboost.XGBoostClassificationModel().load(models_path + '/model.dat')

I tried other xgboost4j versions as well (0.80, 0.72), but I can't seem to find the cause. I even tried reading the source code of the wrapper and of the jars, and I could not find anything.

Thanks in advance.


Answer 1:


After hours of research, I got it to work by adding the xgboost stage to the pipeline itself, so that fitting produces a PipelineModel rather than an XGBoost model.

I was able to save the PipelineModel and then load it just fine.

Here is what I changed:

    xgboost = XGBoostClassifier(
        featuresCol="features",
        labelCol="label",
        predictionCol="prediction",
    )

    pipeline = Pipeline().setStages([vectorAssembler, xgboost])
    model = pipeline.fit(train_df)

    # save
    model.write().overwrite().save(models_path + "/xgb_model.model")

    # load
    model2 = PipelineModel.load(models_path + "/xgb_model.model")
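
Once loaded, the PipelineModel can be used for scoring directly; the fitted XGBoost classifier sits inside it as the last stage. A minimal sketch, assuming a test DataFrame named test_df with the same feature columns as the training data:

    from pyspark.ml import PipelineModel

    loaded = PipelineModel.load(models_path + "/xgb_model.model")

    # transform() runs both stages: the VectorAssembler builds the "features"
    # column and the fitted XGBoost stage adds the "prediction" column.
    predictions = loaded.transform(test_df)  # test_df is an assumed DataFrame
    predictions.select("prediction").show(5)

    # The fitted XGBoost classification model, if needed on its own,
    # is the last stage of the loaded pipeline.
    xgb_stage = loaded.stages[-1]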


Source: https://stackoverflow.com/questions/60522529/cannot-save-model-using-pyspark-xgboost4j
