Description
In Spark, you can do this:
// Scala
val a = sc.parallelize(List(1, 2, 3, 4), 4)
a.partitions.size
Please make this possible in PySpark too.
The available workaround is quite simple:
# Python
a = sc.parallelize([1, 2, 3, 4], 4)
a._jrdd.splits().size()
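One possible way to expose this natively, sketched below, is a small method on PySpark's RDD class that simply wraps the Java-side call used in the workaround. The method name getNumPartitions is only a suggestion here, not an existing PySpark API.
# Python (sketch; method name and placement are assumptions)
class RDD(object):
    # ... existing RDD methods ...
    def getNumPartitions(self):
        """Return the number of partitions in this RDD, mirroring Scala's rdd.partitions.size."""
        # Delegates to the underlying Java RDD, same as the workaround above.
        return self._jrdd.splits().size()
With such a method in place, the usage would match the Scala example:
# Python
a = sc.parallelize([1, 2, 3, 4], 4)
a.getNumPartitions()  # expected to return 4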