Last night’s Oscars proved to be quite the spectacle, with Neil Patrick Harris walking around in his underwear and everyone learning what John Legend’s and Common’s real names are. The results were interesting as well, with only a handful of upsets. Having taken it all in, I noticed a few things about the selections made by the sites I polled and about the effectiveness of the various prediction methods.
First, how did I perform? Well, I ended up winning 20 of the 24 categories (83%). That fared pretty well against the sites I polled: only one of the nine beat me, I tied with one other, and the rest all won fewer categories than I did, with Hollywood Reporter performing the worst at just 13 wins.
When I took a closer look at which sites performed well and which did not, one thing became immediately clear: the individuals and sites that used statistics vastly outperformed those that did not. For example, Ben Zauzmer won 21 categories (beating me by one) and GoldDerby’s predictions produced 20 wins (tying me). The other sites I polled averaged roughly 16.5 wins, about 18% worse than the stats-based predictors.
As I mentioned in my original article, the film that wins Best Picture also wins Best Director about 72% of the time. Interestingly, four of the sites I polled chose two different films for Best Picture and Best Director, which strongly suggests they were picking with their gut rather than with calculated probabilities; the sketch below shows why that split tends to cost you.
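To make that concrete, here is a minimal expected-value sketch. The 72% coupling is the only number taken from the article; the other probabilities are assumptions I invented purely for illustration.

```python
# Toy expected-value model. The 72% Picture/Director coupling is from
# the article; every other number here is a made-up assumption chosen
# only to illustrate the point.

P_COUPLED = 0.72        # historical P(Best Picture winner also wins Best Director)
p_fav_picture = 0.80    # assumed: the favorite's chance of winning Best Picture
p_alt_director = 0.15   # assumed: a second film's chance of winning Best Director
                        # without also winning Best Picture

# Matching picks: you win Director (roughly) whenever your film wins
# Picture and the historical coupling holds. This ignores the small
# chance the favorite wins Director without Picture, so it is a floor.
expected_matched = p_fav_picture + P_COUPLED * p_fav_picture

# Split picks: your Picture pick is unchanged, but your Director pick
# only pays off in the decoupled minority of years.
expected_split = p_fav_picture + p_alt_director

print(f"matched picks: {expected_matched:.2f} expected category wins")  # ~1.38
print(f"split picks:   {expected_split:.2f} expected category wins")    # ~0.95
```

Under these toy numbers, matching your two picks is worth roughly 1.38 expected wins versus 0.95 for splitting them, and that gap holds for any plausible inputs because the historical coupling is so strong.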
I made the mistake of going with my gut when I chose “Joanna” for Best Documentary Short Subject, even though “Crisis Hotline” was the decisive favorite. I picked “Joanna” because I had seen both films and simply felt it was the better one. Unfortunately, there is no correlation between which film I feel should win and which one actually wins, so it was a poor decision on my part. Fittingly, at the Oscar party I attended, that single pick where I strayed from the probabilistic approach cost me sole possession of first place; I tied instead of winning outright. I’ve learned my lesson as it pertains to selecting Oscar winners, and as a search engine marketer I was reminded that we cannot ignore what the data is telling us. A probabilistic approach can provide huge advantages when making key optimization decisions within your digital marketing campaigns.
One last thing I’ll mention: Ben Zauzmer, whom I cited earlier, made a very astute observation about the Best Original Screenplay category. He noticed that the WGA winner goes on to win the Oscar in this category 70% of the time, which would have made “The Grand Budapest Hotel” the favorite. However, “Birdman” had been ruled ineligible by the WGA, so it never had a chance to win that award. Instead of blindly believing the numbers, Zauzmer adjusted his model to account for the likelihood that “Birdman” would have won the WGA had it been eligible, which led him to correctly predict that “Birdman” would take Best Original Screenplay (a toy version of that adjustment is sketched below). As marketers, we are required to constantly synthesize, and sometimes question, the data to ensure we’re making decisions based on signals rather than noise (shout out to Nate Silver). This approach has shaped our campaign management strategies, and I’m hoping it will help you make better marketing decisions moving forward as well.
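For the curious, here is what that eligibility adjustment could look like in miniature. This is my own illustration, not Zauzmer’s actual model: the 70% WGA hit rate comes from his observation, while the 60% head-to-head figure is an assumption I made up.

```python
# Toy sketch of the eligibility adjustment, not Zauzmer's actual model.
# The 70% WGA hit rate is from the article; the 60% head-to-head figure
# is an invented assumption for illustration only.

P_WGA_SIGNAL = 0.70      # historical P(WGA winner goes on to win the Oscar)
p_birdman_wga = 0.60     # assumed: P("Birdman" would have won the WGA if eligible)

# Naive read: "The Grand Budapest Hotel" won the WGA, so hand it 70%.
naive_grand_budapest = P_WGA_SIGNAL

# Adjusted read: the 70% signal only applies in the hypothetical worlds
# where the WGA winner would still have won against the full Oscar field,
# so reallocate the rest of that mass to the ineligible frontrunner.
adj_grand_budapest = (1 - p_birdman_wga) * P_WGA_SIGNAL   # 0.28
adj_birdman = p_birdman_wga * P_WGA_SIGNAL                # 0.42

print(f"naive Grand Budapest:    {naive_grand_budapest:.0%}")
print(f"adjusted Grand Budapest: {adj_grand_budapest:.0%}")
print(f"adjusted Birdman:        {adj_birdman:.0%}")
```

Even with a modest assumption about how often “Birdman” would have beaten “The Grand Budapest Hotel” head-to-head, the adjustment flips the favorite, which is exactly the kind of signal-versus-noise judgment the raw historical rate would have missed.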