Description
import org.apache.spark.ml.feature.Bucketizer

val contDF = spark.range(500).selectExpr("cast(id as double) as id")

val splits = Array(5.0, 10.0, 250.0, 500.0)
val bucketer = new Bucketizer()
  .setSplits(splits)
  .setInputCol("id")
  .setHandleInvalid("skip")
bucketer.transform(contDF).show()
You would expect this to skip the values that fall outside the buckets. However, it fails with:
Caused by: org.apache.spark.SparkException: Feature value 0.0 out of Bucketizer bounds [5.0, 500.0]. Check your features, or loosen the lower/upper bound constraints.
It seems strange that handleInvalid doesn't actually handle invalid inputs.
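For what it's worth, as far as I can tell handleInvalid only applies to NaN entries, not to values outside the splits range; out-of-range values seem to require infinite outer splits. A minimal sketch of that workaround (the splitsWithInf name and the "bucket" output column are just for illustration), assuming the goal is to bucket every row:

import org.apache.spark.ml.feature.Bucketizer

// Extend the splits with -Infinity/+Infinity so values below 5.0
// (like 0.0) and at or above 500.0 land in the outermost buckets
// instead of raising an out-of-bounds error.
val splitsWithInf = Array(Double.NegativeInfinity, 5.0, 10.0, 250.0, 500.0, Double.PositiveInfinity)
val bucketer = new Bucketizer()
  .setSplits(splitsWithInf)
  .setInputCol("id")
  .setOutputCol("bucket")
bucketer.transform(contDF).show()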
Thoughts anyone?