Calculating Days of the Week (In Your Head)

On what date does Easter fall this year?  What day of the week is your dad’s birthday?  What day of the week is July 4th this year?  Answering questions like these quickly, without consulting a calendar, seems to be a popular parlor trick these days.  This is a quick post describing the approach that I use to calculate the day of the week for a given date (i.e., year, month, and day).

Note that there are really two common applications here.  One is an algorithm for a computer.  In that case, I think the best approach is to generalize slightly, using the (modified) Julian day number or something similar to associate consecutive dates with consecutive integers.  That algorithm is well-documented.  The other application is a method for performing the calculation quickly in your head, which is a different problem, and is what I am focusing on here.

There is plenty of literature on this problem as well, the most commonly referenced methods being Conway’s Doomsday algorithm and Zeller’s congruence.  The Doomsday algorithm never really appealed to me, seeming to consist of a hierarchical set of special cases progressing from months to years to centuries.  And Zeller’s congruence is often presented with accurate but complicated arithmetic involving the month, which is fine for a computer, but which I certainly can’t do in my head.

The following approach is one that I remember, with some fondness, working out in the backseat of my parents’ station wagon while traveling to visit my grandparents.  I had found a book (from the public library, now lost to the mists of antiquity) that contained a perpetual calendar, a two-page table that involved a complex sequence of row/column intersections to determine the day of the week for a given date.  I was fascinated by this table, and in an effort to understand how it worked, I tried to turn that complex sequence of steps tracing through the table into a mathematical formula.  In hindsight, the result was effectively Zeller’s congruence, but with some slight modifications that I find to be simpler for computing in your head… although this is perhaps only because it is how I learned it.

The algorithm: Represent the year as 100c + y, 0 \leq y < 100, considering dates in January and February to be in the previous year.  Then given also the month m \in \left\{1, 2, ..., 12\right\} and day d \in \left\{1, 2, ..., 31\right\}, the day of the week w is given by:

w \equiv 5(c\bmod 4) + y + \lfloor\frac{y}{4}\rfloor + M_m + d \pmod{7}

M = (0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4)

where w=0 corresponds to Sunday, w=1 to Monday, etc.  For example, New Year’s Day 2012 will correspond to c=20, y=11, m=1, d=1, yielding w \equiv 0, a Sunday.  Note that this formula works for any non-negative century in the (proleptic) Gregorian calendar.

Don’t Have Faith In Science

From last week’s episode of The Big Bang Theory:

Sheldon: “Dr. Greene.  Question?”

Brian Greene: “Yes?”

Sheldon: “You’ve dedicated your life’s work to educating the general populace about complex scientific ideas.”

Brian Greene: “Yes, in part.”

Sheldon: “Have you ever considered trying to do something useful?”

Last week I read an interesting article titled, “Is Science Just a Matter of Faith?”  (I actually stumbled across it via Slashdot, where it very quickly generated well over one thousand comments.)  I disagree with much of the article… but I like the article anyway, because I think it raises some very useful and thought-provoking questions.  As usual, I recommend reading it first.

The main point of the article is the suggestion that some scientist-authors who write for a “general public” audience use religious connotation, particularly in book titles, as a means of “signalling that they have the authority to speak scientifically to the fundamental questions that formerly only religion had the authority to address.”  The author provides an array of book jackets, over half of which have the word “God” in their title, to make his point.  The idea is that if you mention or at least allude to religion, then non-scientists will be more likely to read, to believe, to have faith in, your explanations of how our world works.

I tend to dismiss the religious angle rather quickly.  In most cases, this is at worst a marketing hook on the part of the publishers, and not actual intent of persuasion on the part of the authors.  Consider Feynman’s The Meaning of It All, for example.  To infer religious authority even from just the title is a stretch for me.  Also, the book was published posthumously by Feynman’s daughter; the title has little relation to the titles of the three lectures of which the book is a compilation.  And certainly nowhere in any of his writings do I find even the suggestion that we should “believe Feynman because God told him.”

In short, I think “faith” is the wrong word to use here.  “Trust,” on the other hand, is more accurate, and less charged with unintended additional meaning… and here we have a valid and much more interesting question.  Science is complicated– how hard should scientists work to try to present those complicated ideas to an uncomplicated public?  When to give up and simply say, “Trust me; it works this way”?  Even within the scientific community, to what extent do we, or must we, trust and depend on the observations and theories of others?

In the situation of explaining complex science to a general audience, I think the answer had better be pretty simple: it is never acceptable to give up and say, “This is too complicated, so just trust me.”  To do so would be to miss the point of scientific debate in the first place.  To do so would suggest to the listener that, even at a single link in the long logical chain of reasoning, dogma is an acceptable foundation for understanding how our world works.

Striving to achieve this very difficult goal does not just benefit the non-scientific audience.  It benefits the scientific community as well, since it forces us to be clear, visual, rigorous, etc., which can often end up strengthening or even extending our understanding of what is going on.  In other words, you never really understand something until you teach it (I am not sure who first said this).

The challenge is that these are not just esoteric questions specific to particular scientific fields of interest.  Every human being wants answers to basic questions like: How did we come to be here?  Why are we here?  What, if anything, happens after?  What is the “meaning of it all”?  And the problem is that, when pressed to come up with answers, in the absence of anything better, humans are more than willing to make something up.  The resulting comfort with faulty, “missing link” chains of reasoning leads to things like religion, which is mostly harmless… but I suppose also to things like the Tea Party, which is rather less benign.

Contrary to Sheldon’s attitude, I think communication between scientists and the general public is indeed worthwhile and must continue.  It is important to provide accurate explanations of our current best understanding of the world– not just so that we will have the facts, so to speak, since the facts and theories themselves may not be particularly useful to John Q. Non-Scientist; but perhaps even more importantly, so that we can see what the reasoning looks like, which can then be re-applied everywhere, by everyone.

Analysis and a variant of “The Premo” card trick

As part of last week’s discussion of card shuffling, I mentioned “The Premo,” a card trick worked out by Charles Jordan almost 100 years ago, and analyzed by Bayer and Diaconis in 1992 (see the references at the end of this post).  This week I want to focus on how this trick works, and more importantly, how frequently it doesn’t work… and finally, to propose a variant that I think is a slightly more powerful yet still robust effect.

(Card games in general, and card magic in particular, are a great source of interesting problems in mathematics and computer science.  This isn’t the first time card tricks have come up here; see here for another of my favorite “mathematical” card tricks.)

To begin, following is a description of the original effect as described by Bayer-Diaconis (reference 1):

The performer removes a deck of cards from its case, hands it to a spectator and turns away from the spectators: “Give the deck a cut and a riffle shuffle. Give it another cut and another shuffle. Give it a final cut. I’m sure you’ll agree that no living human could know the name of the top card. Remove this card, note its value, and insert it into the pack. Give the pack a further cut, a final shuffle, and a final cut.” Now the performer takes back the pack, spreads it in a wide arc on the table, and, after staring intensely, names the selected card.

The trick is based on the observation that even after three riffle shuffles, the resulting arrangement of cards still has a lot of exploitable structure (assuming that the initial arrangement was known), and it is possible to identify a card that has been moved from its “place” in that structure.  I like this trick, not just because it is a pretty powerful effect, but also because it is relatively simple to learn… and because it is amenable to computer analysis, which makes for a great programming project for students.  (E.g., how do you model and implement riffle shuffling a deck?  Can you do it in place?  Etc.)
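As a starting point for such a student project, here is one way to model a GSR riffle shuffle, along with a count of rising sequences, the structure the trick exploits: after k riffle shuffles of a sorted deck there are at most 2^k rising sequences.  This is only a sketch of the model, not Jordan’s or Bayer-Diaconis’s actual procedure:

```python
import random

def gsr_riffle(deck):
    """One Gilbert-Shannon-Reeds riffle shuffle of a list of cards."""
    n = len(deck)
    # Cut at a Binomial(n, 1/2) position.
    k = sum(random.random() < 0.5 for _ in range(n))
    left, right = deck[:k], deck[k:]
    out = []
    # Drop the next card from a packet with probability
    # proportional to that packet's current size.
    while left or right:
        if random.randrange(len(left) + len(right)) < len(left):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def rising_sequences(deck):
    """Number of maximal rising sequences relative to sorted order 0..n-1."""
    pos = {card: i for i, card in enumerate(deck)}
    # Card c starts a new rising sequence if c-1 appears after it.
    return sum(1 for c in deck if c == 0 or pos[c - 1] > pos[c])

deck = list(range(52))
for _ in range(3):
    deck = gsr_riffle(deck)
# rising_sequences(deck) is now at most 2**3 = 8.
```

Each riffle at most doubles the number of rising sequences, so even after three shuffles the deck is very far from uniformly random, which is exactly the structure the performer reads when spreading the cards.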

With surprisingly little practice, you can learn to spot the selected card reasonably quickly, especially with some distracting patter while you’re looking.  I will leave you to consult the references for good explanations of how the trick works.  [Edit: I put together a description of the trick, with a visual example, on my web site here.]  I want to focus instead on the following interesting comment by Bayer and Diaconis:

The trick as described is not sure-fire… as expected, the trick is most successful when the card is moved after [my emphasis] the final shuffle. We programmed a computer to shuffle the cards [three] times according to the GSR distribution, cut the deck uniformly at random, move the top card to a binomially distributed position and then cut the deck again.

In other words, there is some probability that the trick will fail, and Bayer-Diaconis suggest that we can reduce this probability of failure by waiting to select a card until after the third and final shuffle.  If this were true, I think it would detract from the effect; the extra shuffle after re-inserting the selected card is what makes finding it seem that much more challenging.

Unfortunately, my initial computer simulations seemed to agree with this; the Bayer-Diaconis version worked about 84% of the time, while the original trick only worked about 43% of the time!  I thought something must surely be wrong with my model or my code.  Otherwise, who would bother with an effect that worked less than half the time?

Things got more interesting when I tracked down the following variant in a collection of Jordan’s tricks (reference 2):

The spectator cuts the deck, gives it a shuffle, cuts it again and then gives it another shuffle.  He removes the top quarter of the deck, takes the top card of the larger packet and replaces the top quarter back on top.  The card removed from the deck is looked at and replaced on top or bottom.  The deck is cut and then given another shuffle.

In this version, the final shuffle comes after the card is selected, but there are a couple of differences: (1) the selected card is moved from inside the pack to the top, not the other way around; and (2) the “distance” that the card is moved is explicitly specified, about one quarter of the deck.

What is interesting about this variant is that it also works about 84% of the time!  This led me to suspect that the critical variable in the trick was not whether the card is selected before or after the third and final shuffle, but how far in the deck the selected card is moved.

To verify this, I considered 256 different variations on the trick:

  1. Select the card before or after the final shuffle (2 options);
  2. Select the top card and insert it into the pack, or select a card from inside the pack and move it to the top (2 options);
  3. Before and after each of the four shuffle/select steps, choose whether to cut the deck or not (32 options);
  4. For the cuts of the deck, use a uniform or binomial distribution (2 options).

For each variant, I estimated the probability that the trick succeeds, as a function of the position in the deck where the selected card is inserted (or taken).  The result is the following figure:

[Figure: Probability of success of variants of the Premo card trick.]

Each variant falls into one of two categories, indicated by the single- and double-peaked curves above.  It turns out that only option (1) above matters: if the card is selected after the third shuffle, the trick behaves according to the single-peaked curve.  If the card is selected before the third shuffle, we get the double-peaked curve.

This explains the behavior observed in the Bayer-Diaconis version; by selecting an insertion position with a binomial distribution, most of the time we are near the relatively flat single peak, and the trick usually works.  Even more interesting, this also explains Jordan’s motivation for explicitly cutting one quarter of the pack in his version, since this is the first peak of the double-peaked curve.  I find this amazing; remember, Jordan came up with this almost a century ago.

So we see now that it is possible for the trick to succeed with high probability even when shuffling after selecting the card… but only as long as we are careful about exactly where the selected card is moved.  Jordan’s approach is to explicitly tell the spectator to cut to one quarter of the deck, to stay near the first of the two peaks in the curve above.  But it is worth observing that either of the two peaks is acceptable.  That is, viewing the deck cyclically, we just need to move the selected card about 13 cards forward or backward in the deck.

To that end, my wife suggested the following great twist on the trick, which I think hides the ruse more effectively, and still ensures a high probability of success without the magician ever having to touch the deck: the magician tells the spectator, “Cut the deck, give it a riffle shuffle, then cut, shuffle, and cut again.  Now cut the deck into two roughly equal packs and place both on the table.  Pick one of the two packs, and take the top card from that pack and look at and remember it.  Now insert your card into the center of either of the two packs, and place one pack on top of the other.  Give the deck another cut, another shuffle, and a final cut.”

I think the two-stage division into “roughly equal packs” and insertion “into the center (of half a deck)” seems less contrived than trying for exactly one quarter of the full deck.  And the reader can verify that, no matter how the spectator selects a pack, selects his card, or reassembles the deck, the end result is that the selected card has moved approximately 13 cards from its original position in the cycle.

References:

  1. Bayer, D. and Diaconis, P., Trailing the Dovetail Shuffle to its Lair. Annals of Applied Probability, 2(2):294-313, 1992.
  2. Fulves, Karl, Charles Jordan’s Best Card Tricks. Mineola: Dover Publications, 1992, pp. 118-119.