Big data has become a trusted friend of the golf world. In many cases — TrackMan, Strokes Gained, Pace of Play — stats help courses, equipment companies and Tour pros analyze trends and strategize for the future.
The art of projecting a tournament champion has also received the full-data treatment. Now it’s getting better.
Last summer The Economist launched EAGLE (Economist Advantage in Golf Likelihood Estimator), a program that featured hole-by-hole estimations of the win probability of every golfer in every major over the last 15 years. Originally conceived to contextualize Jordan Spieth’s epic collapse at the 2016 Masters, it was also a fresh way to illustrate a major championship’s moment-to-moment volatility. You can find our original report here, which includes some of the most shocking major finishes in recent memory. Check out the 2011 PGA Championship.
Exceptional visuals weren’t enough for The Economist. Over the last seven months, EAGLE’s three-person team further enhanced the system to create more accurate (and more personal) simulations. The endgame? To help fans understand exactly how golf’s biggest events are expected to play out. These improvements led to the launch of EAGLE 2, which data editor Dan Rosenheck described as “a whole new world.”
The original EAGLE system was based entirely on a player’s past performance in major championships and his world ranking. EAGLE 2 now uses those world rankings to gauge the relative skill of every tournament field and to rate each event’s strength against that of the majors.
At the MIT Sloan Sports Analytics Conference last weekend, the EAGLE team presented a case study in which Ernie Els scored worse than the field average in the first round of the 2006 Nedbank Challenge. Once field strength is considered, Els’ round, though worse than average that day, rated higher (on their “major” scale) than the better-than-average second round that journeyman Darryn Lloyd posted at the 2015 Wealth Design Invitational. The concept is simple: scoring worse against a great field can, at times, be a better performance than scoring well against a weak one.
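A back-of-the-envelope version of that adjustment fits in a few lines of Python. The scores and field-quality figures below are invented for illustration, and EAGLE’s actual formula is surely more sophisticated; this just shows the arithmetic of weighing strokes gained against field quality:

```python
def adjusted_score(score, field_avg, field_quality):
    """Performance in strokes, measured against a major-strength baseline.

    score:         the player's round
    field_avg:     the field's average score that day
    field_quality: strokes per round by which this field's average player
                   is better (+) or worse (-) than a major-championship field
    """
    strokes_gained_on_field = field_avg - score  # beat the field -> positive
    return strokes_gained_on_field + field_quality

# Invented numbers: a 73 against a strong field can outrate a 70 in a weak one.
els_like   = adjusted_score(score=73, field_avg=72, field_quality=+2.0)  # +1.0
lloyd_like = adjusted_score(score=70, field_avg=71, field_quality=-3.0)  # -2.0
```

In other words, the weak-field round loses on the major scale even though it beat its own field, which is exactly the Els-versus-Lloyd comparison above.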
A second adjustment to EAGLE 2 focused on the specific strengths of certain players. Jason Day is not the same golfer as Luke Donald; Day averaged 304 yards off the tee last year while Donald averaged 283. Using Donald’s score to par from a variety of distances over the years, EAGLE 2 asserts that Donald is better than the average player scoring-wise, but the gap between him and that hypothetical Average Pro is greater on short holes. On longer holes, Donald drifts closer to the mean.
Still with me?
EAGLE 2 has thus created Donald’s “personal track record,” which it uses to project the scores he would likely make on a given major-championship course. Since Donald thrives on shorter holes, he would project to finish better at, say, a 6,900-yard U.S. Open at Merion than at a 7,700-yard Open at Chambers Bay.
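One simple way to build such a track record (hypothetical, not EAGLE’s published method) is to record a player’s average strokes gained at a handful of hole lengths and interpolate between them. The `donald_like` profile below uses invented numbers for a short-but-accurate player whose edge shrinks as holes lengthen:

```python
def project_strokes_gained(track_record, yardage):
    """Linearly interpolate strokes gained per hole at a given yardage.

    track_record: list of (hole yardage, avg strokes gained on the field).
    """
    pts = sorted(track_record)
    if yardage <= pts[0][0]:
        return pts[0][1]           # shorter than anything on record
    if yardage >= pts[-1][0]:
        return pts[-1][1]          # longer than anything on record
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= yardage <= x1:
            return y0 + (y1 - y0) * (yardage - x0) / (x1 - x0)

# Invented profile: a big edge on short holes, drifting toward the
# field average (0.0) as the holes get longer.
donald_like = [(350, 0.10), (450, 0.05), (550, 0.01)]
```

Summing these projections across a course’s 18 holes is what separates a 6,900-yard layout from a 7,700-yard one for a player of this type.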
Finally, EAGLE 2 factors in the consistency (or lack thereof) of each player’s game. At MIT, the team pitted Mr. Consistency Jim Furyk against Mr. Volatile John Daly on the same chart. Though Big John has displayed a greater eagle-making ability than par-machine Furyk, Daly has also shown himself to be susceptible to double bogey or worse. (To put it mildly!) You can see below how Furyk’s plodding game tends to land him birdies, pars, bogeys and not much else. Daly, to no surprise, is far less steady. This data was then blended with the hole setups: if a par-3 is guarded by water, for example, players will make double bogey far more often than they would on a par-3 of the same distance without water.
All of this should make EAGLE 2 substantially more accurate than the original. Is it foolproof? Of course not. Predicting a field of 150 or more players in a game littered with randomness is often a fruitless endeavor, but that doesn’t mean we can’t come closer than ever before.
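EAGLE’s own machinery isn’t public, but the Monte Carlo logic behind any such field-wide projection looks roughly like this (player names, scores and spreads below are all invented): simulate the tournament many times, count who wins, and the win counts become probabilities.

```python
import random

def win_probabilities(field, n_sims=10_000, rng=None):
    """Estimate each player's win probability by simulating tournaments.

    field: dict of name -> (expected 72-hole score, standard deviation).
    A deliberately crude sketch: one normal draw per player per simulated
    tournament; lowest score wins, and ties split the win.
    """
    rng = rng or random.Random()
    wins = dict.fromkeys(field, 0.0)
    for _ in range(n_sims):
        scores = {name: rng.gauss(mu, sd) for name, (mu, sd) in field.items()}
        best = min(scores.values())
        leaders = [name for name, s in scores.items() if s == best]
        for name in leaders:
            wins[name] += 1.0 / len(leaders)
    return {name: w / n_sims for name, w in wins.items()}
```

Scale the field to 150-plus players and feed in hole-by-hole scores instead of one normal draw per tournament, and you have the skeleton of a live, hole-by-hole win-probability tracker.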
It all starts in four weeks at the Masters. EAGLE 2 will be ready, and we will be watching.