
Deploy Spark MLlib model with PMML InferenceService


  1. Install PySpark 3.0.x and pyspark2pmml
    pip install pyspark~=3.0.0
    pip install pyspark2pmml
  2. Get the JPMML-SparkML uber-JAR that matches your Spark version (e.g. jpmml-sparkml-executable-1.6.3.jar, used below)

Train a Spark MLlib model and export to PMML file

Launch pyspark with the --jars option to specify the location of the JPMML-SparkML uber-JAR:

pyspark --jars ./jpmml-sparkml-executable-1.6.3.jar

Fit a Spark ML pipeline:

from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import RFormula

df = spark.read.csv("Iris.csv", header = True, inferSchema = True)

formula = RFormula(formula = "Species ~ .")
classifier = DecisionTreeClassifier()
pipeline = Pipeline(stages = [formula, classifier])
pipelineModel = pipeline.fit(df)

Export the fitted pipeline to a PMML file:

from pyspark2pmml import PMMLBuilder

pmmlBuilder = PMMLBuilder(sc, df, pipelineModel)
pmmlBuilder.buildFile("DecisionTreeIris.pmml")
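Before uploading, it can be worth a quick sanity check that the export produced well-formed PMML, i.e. an XML document whose root element is PMML. A minimal stdlib sketch, with a toy fragment standing in for the real DecisionTreeIris.pmml:

```python
import xml.etree.ElementTree as ET

def looks_like_pmml(xml_text: str) -> bool:
    """Return True if the text parses as XML with a PMML root element."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    # PMML roots are namespaced, e.g. {http://www.dmg.org/PMML-4_4}PMML,
    # so compare only the local part of the tag name.
    return root.tag.split("}")[-1] == "PMML"

# Toy fragment standing in for the exported DecisionTreeIris.pmml:
sample = '<PMML xmlns="http://www.dmg.org/PMML-4_4" version="4.4"></PMML>'
print(looks_like_pmml(sample))  # → True
```

For the real file, pass `open("DecisionTreeIris.pmml").read()` instead of the toy fragment.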


Upload the DecisionTreeIris.pmml file to a GCS bucket. Note that the PMML server expects the model file to be named model.pmml:

gsutil cp ./DecisionTreeIris.pmml gs://$BUCKET_NAME/sparkpmml/model.pmml

Create the InferenceService with PMMLServer

Create the InferenceService with a pmml predictor and set the storageUri to the bucket location you uploaded to:

apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "spark-pmml"
spec:
  predictor:
    pmml:
      storageUri: gs://kfserving-examples/models/sparkpmml

Apply the InferenceService custom resource

kubectl apply -f spark_pmml.yaml

Expected Output

$ inferenceservice.serving.kserve.io/spark-pmml created

Wait for the InferenceService to be ready

kubectl wait --for=condition=Ready inferenceservice spark-pmml
inferenceservice.serving.kserve.io/spark-pmml condition met

Run a prediction

The first step is to determine the ingress IP and ports and set INGRESS_HOST and INGRESS_PORT. Then set MODEL_NAME to the InferenceService name and INPUT_PATH to your request payload file (prefixed with @ so curl reads the body from disk), and send a prediction request:

MODEL_NAME=spark-pmml
SERVICE_HOSTNAME=$(kubectl get inferenceservice spark-pmml -o jsonpath='{.status.url}' | cut -d "/" -f 3)
curl -v -H "Host: ${SERVICE_HOSTNAME}" http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/$MODEL_NAME:predict -d $INPUT_PATH
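The PMML server speaks the KServe v1 prediction protocol, so the body referenced by INPUT_PATH is a JSON document with an "instances" list of feature rows. A minimal sketch that writes such a file — the feature values and the pmml-input.json file name are assumptions for illustration, and the columns must match what the RFormula pipeline was trained on:

```python
import json

# Hypothetical Iris feature row (sepal/petal measurements) in the
# KServe v1 "instances" request format.
payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

# Write the request body that INPUT_PATH would point to
# (the file name here is an assumption for this sketch).
with open("pmml-input.json", "w") as f:
    json.dump(payload, f)
```

You could then run the curl command above with INPUT_PATH=@./pmml-input.json.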

Expected Output

* Connected to ( port 80 (#0)
> POST /v1/models/spark-pmml:predict HTTP/1.1
> Host:
> User-Agent: curl/7.73.0
> Accept: */*
> Content-Length: 45
> Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 45 out of 45 bytes
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-length: 39
< content-type: application/json; charset=UTF-8
< date: Sun, 07 Mar 2021 19:32:50 GMT
< server: istio-envoy
< x-envoy-upstream-service-time: 14
* Connection #0 to host left intact
{"predictions": [[1.0, 0.0, 1.0, 0.0]]}