Errata: Mathematics for Machine Learning
Cambridge University Press, 2020
Marc Peter Deisenroth, A. Aldo Faisal, Cheng Soon Ong
December 4, 2019
In this document, we record typos and mistakes in our book Mathematics for Machine Learning,
published by Cambridge University Press (2020).
Page numbers refer to the pagination of the printed book; a PDF that matches the printed version
is available at https://mml-book.github.io/book/mml-book_printed.pdf. This pagination differs
from that of the up-to-date version of the book (which includes the changes described in this
document), available at https://mml-book.github.io/book/mml-book.pdf.
Chapter 1
Chapter 2
Chapter 3
• https://github.com/mml-book/mml-book.github.io/issues/438
p. 97, exercise 3.7: Let V be a vector space and π an endomorphism of V.
Chapter 4
• https://github.com/mml-book/mml-book.github.io/issues/436
p. 110: "shrinks" → "scales": $A_5 = \begin{bmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & 1 \end{bmatrix}$ is a shear-and-stretch mapping that scales space by 75% since
$|\det(A_5)| = \frac{3}{4}$. (See the worked determinant after this list.)
• https://github.com/mml-book/mml-book.github.io/issues/440
p. 133, Example 4.15, below (4.100b): "> 4" → "> 0.4": This first rank-1 approximation $A_1$ is insightful: it tells
us that Ali and Beatrix like science fiction movies, such as Star Wars and Bladerunner (entries
have values > 0.4), but fails to capture the ratings of the other movies by Chandra. (See the
sketch after this list.)
• https://github.com/mml-book/mml-book.github.io/issues/433
p. 136: "Kingma and Ba, 2014" → "Kingma and Welling, 2014": Therefore, the Cholesky decomposition
enables us to compute the reparametrization trick where we want to perform continuous
differentiation over random variables, e.g., in variational autoencoders (Jimenez Rezende et al.,
2014; Kingma and Welling, 2014). (See the reparametrization sketch after this list.)
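As a quick check of the p. 110 erratum above, the 75% scaling factor follows directly from the
determinant of the corrected matrix:
\[
|\det(A_5)| = \left|\det\begin{bmatrix} 1 & \tfrac{1}{2} \\ \tfrac{1}{2} & 1 \end{bmatrix}\right|
= \left| 1 \cdot 1 - \tfrac{1}{2} \cdot \tfrac{1}{2} \right| = \tfrac{3}{4},
\]
i.e., the mapping scales areas to 75% of their original size; it does not shrink them by 75%.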
1 https://mml-book.github.io/book/mml-book_printed.pdf
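For the p. 136 erratum, here is a minimal sketch of the reparametrization trick that the corrected
sentence refers to, assuming a multivariate Gaussian whose covariance is factored by the Cholesky
decomposition. It is a generic NumPy illustration with arbitrary values of mu and Sigma, not code
from the book.

import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])       # symmetric positive definite covariance

L = np.linalg.cholesky(Sigma)        # Sigma = L @ L.T, with L lower-triangular
eps = rng.standard_normal(2)         # eps ~ N(0, I), independent of mu and Sigma
x = mu + L @ eps                     # x ~ N(mu, Sigma)

print(x)

Because x is a deterministic function of mu and L, with all randomness isolated in eps, gradients
with respect to mu and Sigma can be propagated through the sample, which is what makes the trick
useful in variational autoencoders.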
Chapter 5
Chapter 6
Chapter 7
• https://github.com/mml-book/mml-book.github.io/issues/442
p. 241, Section 7.3.2, line 1: "objctive" → "objective".
Chapter 8
Chapter 9
Chapter 10
Chapter 11
Chapter 12
• https://github.com/mml-book/mml-book.github.io/issues/441
p. 371, caption of Figure 12.1: "[...] separates red crosses from blue dots" → "[...] separates orange crosses from blue discs".
• https://github.com/mml-book/mml-book.github.io/issues/441
p. 371, last paragraph: "red cross" → "orange cross".
• https://github.com/mml-book/mml-book.github.io/issues/441
p. 374, caption of Figure 12.3: "[...] separate red crosses from blue dots" → "[...] separate orange crosses from blue discs".
• p. 371, 2nd paragraph: "an objective function" → "a loss function": "In the SVM case, we start by
designing a loss function that is to be minimized on training data, following the principles of
empirical risk minimization (Section 8.2)." The now redundant follow-up sentence "This can also be
understood as designing a particular loss function." is removed.
• https://github.com/mml-book/mml-book.github.io/issues/444
p. 384, remark: "explaination" → "explanation".
• https://github.com/mml-book/mml-book.github.io/issues/441
p. 386, caption of Figure 12.9(b): "[...] (red) examples" → "[...] (orange) examples".
• https://github.com/mml-book/mml-book.github.io/issues/445
p. 386, equation (12.36): $\sum_{n=1}^{N} \alpha_n y_n$ → $-\sum_{n=1}^{N} \alpha_n y_n$ (negate the partial derivative). (See the derivation sketch below.)
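To see where the corrected sign in (12.36) comes from, here is a sketch of the standard
differentiation step, writing the soft-margin SVM Lagrangian in the usual form with the margin
constraints subtracted using multipliers $\alpha_n \geq 0$ and the slack constraints using
$\gamma_n \geq 0$ (this is the standard derivation, not a verbatim quote of the book's equations):
\[
\frac{\partial \mathfrak{L}}{\partial b}
= \frac{\partial}{\partial b}\left(\frac{1}{2}\lVert \boldsymbol{w} \rVert^2
+ C\sum_{n=1}^{N}\xi_n
- \sum_{n=1}^{N}\alpha_n\bigl(y_n(\langle \boldsymbol{w}, \boldsymbol{x}_n\rangle + b) - 1 + \xi_n\bigr)
- \sum_{n=1}^{N}\gamma_n\xi_n\right)
= -\sum_{n=1}^{N}\alpha_n y_n .
\]
Setting this partial derivative to zero still yields the condition $\sum_{n=1}^{N}\alpha_n y_n = 0$;
only the sign of the stated derivative changes.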