Create QA component-specific benchmarks #491
Conversation
Timing with torch Events instead of time.perf_counter gives pretty much the same times.
Baseline speeds (ms) when running question_answering_components.py:
- {'language model': 1610.3536376953125, …
- {'language model': 595.620849609375, …
- {'language model': 1586.070556640625, …
- {'language model': 596.1339111328125, …
Timoeller
left a comment
Love it.
One thing: we should add a timing mode and only do the synchronize() calls there, since in normal inference mode they might slow down computation.
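The suggestion above could be sketched roughly as follows: a timing helper that only calls a synchronization hook (e.g. `torch.cuda.synchronize`) when timing mode is enabled, so normal inference pays no overhead. The `timed_call` name and `synchronize_fn` parameter are hypothetical, not taken from the PR.

```python
import time


def timed_call(fn, *args, timing_mode=False, synchronize_fn=None, **kwargs):
    """Call fn and return (result, elapsed_ms).

    When timing_mode is True, synchronize_fn (e.g. torch.cuda.synchronize)
    is called before and after fn so pending asynchronous GPU work is
    included in the measurement. In normal inference mode no
    synchronization happens, so the extra calls cannot slow things down.
    """
    if timing_mode and synchronize_fn is not None:
        synchronize_fn()
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    if timing_mode and synchronize_fn is not None:
        synchronize_fn()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms


# Toy usage with a CPU workload and a no-op "synchronize" stand-in.
result, ms = timed_call(sum, range(1000), timing_mode=True,
                        synchronize_fn=lambda: None)
print(result)  # 499500
```

On a real GPU pipeline one would pass `torch.cuda.synchronize` as `synchronize_fn`; without it, `time.perf_counter` can stop before queued CUDA kernels have actually finished.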
Timoeller
left a comment
Looking good. Let's put the sample files into another folder, e.g. test/benchmarks/samples?
This adds a script and some supporting code that allow benchmarking preprocessing, language modelling, and prediction head processing separately.
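The per-component benchmarking described above could look roughly like this: a small accumulator that records wall-clock time per named stage and produces a dict like the baseline dump in the thread. The `ComponentTimer` class and the toy stage functions are illustrative assumptions, not the PR's actual code.

```python
import time
from collections import defaultdict


class ComponentTimer:
    """Accumulate wall-clock time (ms) per pipeline component.

    Hypothetical helper mirroring the idea of timing preprocessing,
    the language model, and the prediction head separately.
    """

    def __init__(self):
        self.totals_ms = defaultdict(float)

    def run(self, name, fn, *args, **kwargs):
        # Time one call of fn and add it to the component's total.
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.totals_ms[name] += (time.perf_counter() - start) * 1000.0
        return result


# Toy stand-ins for the real pipeline stages.
timer = ComponentTimer()
tokens = timer.run("preprocessing", lambda text: text.split(), "what is qa")
logits = timer.run("language model", lambda toks: [len(t) for t in toks], tokens)
answer = timer.run("prediction head", max, logits)

print(dict(timer.totals_ms))  # e.g. {'language model': 0.01, ...}
print(answer)
```

Printing `timer.totals_ms` after a run yields a per-component breakdown in the same shape as the `{'language model': ...}` baseline output quoted earlier in the thread.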