The official benchmark suite for aeneas
- Version: 0.0.3 (aeneas 1.7.2)
- Date: 2017-03-04
- Developed by: ReadBeyond
- Lead Developer: Alberto Pettarin
- License: the GNU Affero General Public License Version 3 (AGPL v3)
- Contact: [email protected]
- Quick Links: Results - Code - aeneas
The goal of this project is to benchmark aeneas on different machines and under different I/O and parameter configurations.
1. Clone the repository from GitHub and switch to the `master` branch:

   ```
   $ git clone https://github.com/ReadBeyond/aeneas-benchmark
   $ cd aeneas-benchmark
   $ git checkout master
   ```

2. Enter the `benchmark` directory:

   ```
   $ cd benchmark
   ```

3. Rename the `ENVINFO.editme` file to `ENVINFO`, and edit it with your system info.

4. Run the benchmark suite (it requires several minutes to complete):

   ```
   $ python run_benchmark.py --all
   ```
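The setup steps above can be collected into a small helper script. This is only a sketch: the script name and the `EXECUTE` flag are our additions (by default the script just prints the commands, so you can review them before running anything):

```shell
#!/usr/bin/env bash
# benchmark_setup.sh -- sketch of the setup steps above.
# By default it only prints the commands (dry run); set EXECUTE=1
# to actually run them. The script name and the EXECUTE flag are
# illustrative additions, not part of the benchmark suite itself.
set -euo pipefail

EXECUTE="${EXECUTE:-0}"

run() {
    echo "+ $*"
    if [ "$EXECUTE" = "1" ]; then
        "$@"
    fi
}

run git clone https://github.com/ReadBeyond/aeneas-benchmark
run cd aeneas-benchmark
run git checkout master
run cd benchmark
run mv ENVINFO.editme ENVINFO        # then edit ENVINFO with your system info
run python run_benchmark.py --all    # takes several minutes to complete
```

Run it once without `EXECUTE=1` to inspect the commands, then rerun with `EXECUTE=1 bash benchmark_setup.sh` to execute them.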
If you want to contribute your benchmark, please contact us via email (the address is written at the beginning of this page). Then:

1. Run the benchmark (see above) on at least one machine.

2. If you run the benchmark on more than one machine, copy the `*.json` files from the other machines into the `benchmark/output` directory of the machine on which you want to build the static HTML files (e.g., your personal PC/laptop). Note that it is crucial for each machine to have its own (distinct) `ENVINFO` file.

3. Edit the last lines of the `sitebuilder/build_all.sh` script and the `output/index.html` file, depending on the actual machines you have. (TODO: automate this.)

4. Run:

   ```
   $ cd sitebuilder
   $ bash build_all.sh
   ```

   The static HTML files are created in the `output/` directory.
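The collect-and-build steps above can be sketched as a script as well. The `MACHINES` hostnames and the `EXECUTE` flag are hypothetical examples, and the paths assume the layout described above (run from the repository root); the edit to `build_all.sh` and `index.html` remains a manual step:

```shell
#!/usr/bin/env bash
# collect_and_build.sh -- sketch of the contribution steps above.
# By default it only prints the commands (dry run); set EXECUTE=1
# to actually run them. MACHINES and EXECUTE are hypothetical
# examples, not part of the suite.
set -euo pipefail

EXECUTE="${EXECUTE:-0}"
MACHINES="machine-a machine-b"   # hostnames of the other benchmark machines

run() {
    echo "+ $*"
    if [ "$EXECUTE" = "1" ]; then
        "$@"
    fi
}

# Copy the *.json results from the other machines into benchmark/output
# (each machine must have produced them with its own distinct ENVINFO).
for host in $MACHINES; do
    run scp "$host:aeneas-benchmark/benchmark/output/*.json" benchmark/output/
done

# Manual step, not automated here: edit sitebuilder/build_all.sh and
# output/index.html to match the actual machines you have.

# Build the static HTML files.
run cd sitebuilder
run bash build_all.sh
```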
aeneas-benchmark is released under the terms of the GNU Affero General Public License Version 3. See the LICENSE file for details.
Licenses for third party code and files included in aeneas-benchmark can be found in the licenses directory.
No copy rights were harmed in the making of this project.