Conversation
db42583 to 1715b5f
```rust
) -> Result<Self, CometError> {
    let metrics_set = ExecutionPlanMetricsSet::default();
    let baseline_metrics = BaselineMetrics::new(&metrics_set, 0);
    let arrow_ffi_time = MetricBuilder::new(&metrics_set).subset_time("arrow_ffi_time", 0);
```
I see now that this isn't just FFI time. It is the cost of calling CometBatchIterator.next(), so it includes the cost of that method fetching the next input batch as well as the FFI export cost.
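The point above can be sketched as follows: the timer wraps the whole `next()` call, so it charges both the upstream batch production and the FFI export to one metric. This is a minimal sketch of the timing pattern only, not Comet's implementation; `Producer` and `Batch` are hypothetical stand-ins for `CometBatchIterator` and an Arrow batch.

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-ins for CometBatchIterator and an Arrow batch.
struct Batch(usize);

struct Producer {
    remaining: usize,
}

impl Producer {
    // In Comet, this call would fetch the next input batch *and* export it
    // over Arrow FFI; a timer around it therefore covers both costs.
    fn next(&mut self) -> Option<Batch> {
        if self.remaining == 0 {
            None
        } else {
            self.remaining -= 1;
            Some(Batch(self.remaining))
        }
    }
}

// Drain the producer, accumulating the wall-clock time spent inside next().
fn count_batches_timed(mut producer: Producer) -> (usize, Duration) {
    let mut elapsed = Duration::ZERO;
    let mut batches = 0;
    loop {
        let start = Instant::now();
        let batch = producer.next(); // timed region: the entire call
        elapsed += start.elapsed();
        match batch {
            Some(_) => batches += 1,
            None => break,
        }
    }
    (batches, elapsed)
}

fn main() {
    let (batches, elapsed) = count_batches_timed(Producer { remaining: 3 });
    println!("batches={batches}, time_in_next={elapsed:?}");
}
```

The same shape appears in DataFusion-style metrics, where a `Time` metric's scoped timer guards the call being measured.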
Moving this to draft for now while I think about this more
I'd be astonished if arrow_ffi has a substantial cost. It is, after all, zero-copy.
It is zero-copy for the data buffers, but the schema does get serialized with each batch. However, it does not look to be an issue after all.
> but the schema does get serialized with each batch
fair point
comphead left a comment
LGTM, thanks @andygrove.
One thing I'd like to mention: are you planning to keep this permanently, or to enable these internal metrics based on some Spark config key, so that resources are spent on metrics only when they are really needed?
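A config-gated metric of the kind suggested here could look like the sketch below. The struct and the idea of a driving config key are hypothetical (no such Comet/Spark key is named in this PR); the point is only that the timing overhead is skipped entirely when the flag is off.

```rust
use std::time::{Duration, Instant};

// Hypothetical metrics holder; `enabled` would be driven by a Spark config
// key, so detailed timing is only paid for when explicitly requested.
struct DetailedMetrics {
    enabled: bool,
    ffi_time: Duration,
}

impl DetailedMetrics {
    // Run `f`, charging its wall-clock time to ffi_time only when enabled.
    fn time<T>(&mut self, f: impl FnOnce() -> T) -> T {
        if !self.enabled {
            return f(); // metrics off: no Instant::now() calls at all
        }
        let start = Instant::now();
        let out = f();
        self.ffi_time += start.elapsed();
        out
    }
}

fn main() {
    let mut metrics = DetailedMetrics {
        enabled: true,
        ffi_time: Duration::ZERO,
    };
    let value = metrics.time(|| 40 + 2);
    println!("value={value}, recorded={:?}", metrics.ffi_time);
}
```

With `enabled: false`, the closure still runs but nothing is recorded, so the disabled path costs only a branch.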
Which issue does this PR close?
N/A
Rationale for this change
This is a subset of #1111, separated out to make reviews easier.
What changes are included in this PR?
Record time spent performing Arrow FFI to transfer batches between JVM and Rust code.
Note that these timings won't be fully exposed to Spark UI until we merge #1111.
How are these changes tested?