Scaling laws are useful guides for developing language models, but there are still gaps
between current scaling studies and how language models are ultimately trained and evaluated. For instance, scaling
is usually studied in the compute-optimal training regime (i.e., the “Chinchilla optimal” regime); however, in
practice, models are often over-trained to reduce inference costs. Moreover, scaling laws mostly predict
next-token prediction loss, but ultimately models are compared on downstream task performance. In this paper, we
address both shortcomings. To do so, we create a testbed of 104 models with 0.011B to 6.9B parameters trained with
various numbers of tokens on three data distributions. First, we investigate scaling in the over-trained regime.
We fit scaling laws that extrapolate in both the number of model parameters and the ratio of training tokens to
parameters. This enables us to predict the validation loss of a 1.4B parameter, 900B token run (i.e., 32x
over-trained) and a 6.9B parameter, 138B token run — each from experiments that take 300x less compute.
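As a concrete illustration of what such a joint fit can look like, consider a Chinchilla-style saturating power law in the number of parameters N and the token multiplier M (the ratio of training tokens D to parameters, so D = M N); this particular parameterization is an assumption for illustration, not necessarily the exact form fitted in the paper:
\[
L(N, M) \;=\; E \;+\; A\,N^{-\alpha} \;+\; B\,(MN)^{-\beta},
\]
where E is an irreducible loss term and A, B, \alpha, \beta are fitted constants. Estimating these constants from small, cheap runs across several values of M then allows extrapolating validation loss to larger or more heavily over-trained models.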
Second, we relate the perplexity of a language model to its downstream task performance via a power law. We use
this law to predict top-1 error averaged over downstream tasks for the two aforementioned models using experiments
that take 20x less compute. Our experiments are available at
https://github.com/mlfoundations/scaling.
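To make the second step concrete, the sketch below fits a perplexity-to-error relation of the form Err(PP) = eps - k * PP^(-gamma); this functional form, the helper name downstream_error, and all numeric values are illustrative assumptions rather than the paper's actual parameterization or measurements.

```python
# Minimal sketch: fit a power law mapping validation perplexity to average
# downstream top-1 error, then evaluate it for a larger target model.
# Functional form and data are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

def downstream_error(ppl, eps, k, gamma):
    # Top-1 error approaches a ceiling eps for weak models and decays
    # as a power law in validation perplexity as models improve.
    return eps - k * ppl ** (-gamma)

# Hypothetical (perplexity, average top-1 error) pairs from small-scale runs.
ppl = np.array([22.0, 15.0, 11.0, 8.5, 7.0])
err = np.array([0.68, 0.62, 0.56, 0.51, 0.47])

(eps, k, gamma), _ = curve_fit(downstream_error, ppl, err, p0=[0.8, 1.0, 0.8])

# Evaluate the fitted law at the perplexity predicted for a larger,
# over-trained run (the value 5.5 is hypothetical).
print(downstream_error(5.5, eps, k, gamma))
```

Fitting the relation on small models and evaluating it at the perplexity predicted by the loss scaling law for a larger run mirrors the two-stage prediction described above.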