Commit a2ccb8a

Model 1 and Model 2 ParamMaps Missing
@dongjoon-hyun @HyukjinKwon Error in the PySpark example code: https://github.com/apache/spark/blob/master/examples/src/main/python/ml/estimator_transformer_param_example.py

The original Scala code says:

println("Model 2 was fit using parameters: " + model2.parent.extractParamMap)

The parent is lr. There is no method in PySpark for accessing the parent as is done in Scala, so the fix reads the parameters from the estimator (lr) directly. This code has been tested in Python and returns values consistent with Scala.
1 parent 49968de commit a2ccb8a

1 file changed: +2 -2 lines changed

examples/src/main/python/ml/estimator_transformer_param_example.py

Lines changed: 2 additions & 2 deletions
@@ -53,7 +53,7 @@
     # This prints the parameter (name: value) pairs, where names are unique IDs for this
     # LogisticRegression instance.
     print("Model 1 was fit using parameters: ")
-    print(model1.extractParamMap())
+    print(lr.extractParamMap())
 
     # We may alternatively specify parameters using a Python dictionary as a paramMap
     paramMap = {lr.maxIter: 20}
@@ -69,7 +69,7 @@
     # paramMapCombined overrides all parameters set earlier via lr.set* methods.
     model2 = lr.fit(training, paramMapCombined)
     print("Model 2 was fit using parameters: ")
-    print(model2.extractParamMap())
+    print(lr.extractParamMap(extra=paramMapCombined))
 
     # Prepare test data
     test = spark.createDataFrame([
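
For context, here is a minimal, self-contained sketch of the corrected pattern, adapted from the example file (the toy training data and parameter names mirror estimator_transformer_param_example.py; a local SparkSession and the appName "ParamMapSketch" are assumptions for illustration):

    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.linalg import Vectors
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ParamMapSketch").getOrCreate()

    # Toy training data, as in the example file.
    training = spark.createDataFrame([
        (1.0, Vectors.dense([0.0, 1.1, 0.1])),
        (0.0, Vectors.dense([2.0, 1.0, -1.0])),
        (0.0, Vectors.dense([2.0, 1.3, 1.0])),
        (1.0, Vectors.dense([0.0, 1.2, -0.5]))], ["label", "features"])

    lr = LogisticRegression(maxIter=10, regParam=0.01)
    model1 = lr.fit(training)

    # In Scala one would write model1.parent.extractParamMap; in PySpark the
    # parameters are read from the estimator (lr) instead.
    print("Model 1 was fit using parameters: ")
    print(lr.extractParamMap())

    # Override parameters by passing a paramMap (a plain dict) to fit().
    paramMapCombined = {lr.maxIter: 30, lr.regParam: 0.1,
                        lr.probabilityCol: "myProbability"}
    model2 = lr.fit(training, paramMapCombined)

    # extractParamMap(extra=...) merges the overrides, so the printout reflects
    # the parameters model2 was actually fit with.
    print("Model 2 was fit using parameters: ")
    print(lr.extractParamMap(extra=paramMapCombined))

    spark.stop()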
