updates for leaderboard! #100
Conversation
…is.py

- Add baseline_file param to override default baseline path
- Add eval_results_dir param to override default runs directory
- Add output_file param to write results as JSON
- Return results dict from analyze_greedy_eval()
- All changes backward compatible (existing usage unchanged)
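A minimal sketch of what the new optional parameters could look like. The parameter names (`baseline_file`, `eval_results_dir`, `output_file`) and the returned results dict come from the PR description; the default paths, the function body, and the shape of the results dict are assumptions for illustration only.

```python
import json
from pathlib import Path
from typing import Optional


def analyze_greedy_eval(
    baseline_file: str = "results/baseline.json",  # assumed default path
    eval_results_dir: str = "runs",                # assumed default runs dir
    output_file: Optional[str] = None,             # JSON output is opt-in
) -> dict:
    """Analyze greedy-eval runs.

    All new parameters are optional with defaults matching the old
    behavior, so existing call sites keep working unchanged.
    """
    baseline_path = Path(baseline_file)
    # Hypothetical analysis: just count baseline entries and runs found.
    baseline = json.loads(baseline_path.read_text()) if baseline_path.exists() else {}
    num_runs = sum(1 for _ in Path(eval_results_dir).glob("*")) if Path(eval_results_dir).exists() else 0

    results = {
        "baseline_file": baseline_file,
        "eval_results_dir": eval_results_dir,
        "num_baseline_entries": len(baseline),
        "num_runs": num_runs,
    }

    # New: optionally persist the results dict as JSON.
    if output_file is not None:
        Path(output_file).write_text(json.dumps(results, indent=2))

    # New: return the results dict instead of only printing.
    return results
```

An external leaderboard repo could then call `analyze_greedy_eval(output_file="leaderboard.json")` and consume the written JSON, while in-repo callers with no arguments see no behavior change.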
Force-pushed from 153e0cb to 87ba3c2
This has just sort of become a PR that contains all the changes needed to work with the external leaderboard repo. A couple of things here:
LGTM, tysm for the thoughtful PR @pythonomar22
* Add optional path parameters and JSON output to benchmark_eval_analysis.py
  - Add baseline_file param to override default baseline path
  - Add eval_results_dir param to override default runs directory
  - Add output_file param to write results as JSON
  - Return results dict from analyze_greedy_eval()
  - All changes backward compatible (existing usage unchanged)
* h100 modal timing, and some changes
* lgtm; nit small annotation

Co-authored-by: Simon Guo <[email protected]>