Saturday, September 24, 2005

POW-R-'ANKINGS explained

Who's the most powerful team in the NFL? It's a debate that begins long before the season starts and is renewed every week until February. Then comes the draft, the free agent signings and the cap cuts, and it starts all over again. Every publication, every website, every football guru has a preferred way of hashing out who's the strongest and who's the weakest. Sports Illustrated's Dr. Z draws up his Power Rankings every week (here's this week's) based on his analysis of team performance. They're entirely subjective. Very 19th century of him. USA Today Sports Weekly polls its writers college-football style, then ranks the 32 teams. That's an attempt to quantify opinion -- a scientific gloss on the back of a hunch. Jeff Sagarin and others of his ilk drop science all over, pouring sacks of statistics into a hopper that churns and burns and spits out rankings bathed in the sweet smell of objectivity.

I wanted to have my own rankings, with their own high-tech-sounding name, so I devised the exclusive Down and Distance POW-R-'ANKINGS. First I came up with the name. Catchy! Then I went fishing for a simple statistic that I could dress up and pass off as some sort of advanced analysis. My first thought was simply to rank teams by the number of points scored. But it'd very quickly be clear that I was cheating, and no one would take me seriously (anymore). So then I thought about point differential -- points scored minus points allowed. The team with the biggest differential in its favor would be the league's strongest; the team deepest in the hole would be the weakest. Simple, straightforward and actually a pretty good predictor of team strength. But it still wasn't sexy enough. So I made one little tweak ...

And damned if I didn't come up with a spookily accurate way of gauging team strength. The POW-R-'ANKINGS are determined by a simple relationship between the two most important stats a team has (beyond wins and losses). POW-R is defined as point differential as a percentage of the total points scored in a team's games. Take the number of points a team scores ("P.F." in the typical standings) and subtract the number of points that team allows ("P.A."). That's your point differential. (Teams that have allowed more points than they've scored obviously have a negative differential.) Now take that differential and divide it by the total number of points scored in all the team's games. That gives you what I call the POW-R rating. Here's the formula:

POW-R = (P.F. - P.A.) / (P.F. + P.A.)

The number you get is a percentage, expressed as a decimal. Last year's Patriots, for example, scored 437 points in the regular season and allowed 260. That's a differential of 177 points on a total of 697. Divide 177 by 697 for a POW-R rating of 0.25395, or 25.40%, which was tops in the league. Last year's 49ers, on the other hand, scored 259 and allowed 452, for a differential of -193 and a POW-R rating of -0.27145, or -27.15%, by far the worst in the league.
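
If you'd rather let a computer do the arithmetic, here's a quick sketch of the formula in Python. (The function name and layout are my own; this is just an illustration, not some official POW-R toolkit.)

def powr(points_for, points_against):
    # Point differential as a share of all the points scored in a team's games
    return (points_for - points_against) / (points_for + points_against)

# The two examples above:
print(powr(437, 260))   # 2004 Patriots: roughly 0.254, or about 25.4%
print(powr(259, 452))   # 2004 49ers: roughly -0.271, or about -27.1%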

Let's see how the rankings play out in context. I ran every team of the last 15 seasons through the formula, which involved hours hunched over a spreadsheet, which is what I was trying to avoid in the first place. These were the teams with the five highest POW-R ratings since 1990:

YEAR   TEAM       REC.   POW-R
1999   Rams       13-3   36.98%
1996   Packers    13-3   36.94%
1991   Redskins   14-2   36.81%
2000   Ravens     12-4   33.73%
1992   49ers      14-2   32.23%

Each of those teams except the 1992 49ers won the Super Bowl. Now, you can look at these numbers and say, "You don't need a mathematical formula to figure out that a 13-3 or 14-2 team is going to win the Super Bowl." Maybe not, but of the four Super Bowl champs in the chart above, only two had the best record in the league that year: the '91 Redskins, and the '96 Packers (who actually tied Denver for the best record but beat the Broncos head-to-head in the regular season). In fact, the team with the league's best record, after tiebreakers are taken into account, won the Super Bowl in only four of the past 15 years.

You could also point to the fact that three of the four Super Bowl winners in the chart led the league in scoring ('99 Rams, '96 Packers and '91 Redskins), and two led the league in scoring defense ('96 Packers and '00 Ravens). But in the past 15 years, the top-scoring team has won the Super Bowl just five times, and the team with the stingiest scoring defense has also won just five times. POW-R's reliance on point differential takes into account what we know about football: You need both offense and defense. A great offense puts fans in the seats, but without a competent defense, the team isn't going anywhere. (Ask Dick Vermeil. His '99 Rams had the No. 4 scoring defense. His perennially high-scoring Chiefs have been in the toilet defensively since before he arrived.) On the flip side, a great defense needs an effective offense -- or at the very least a mistake-free one. (The 1992 Saints, for example, gave up only 202 points, the fourth-lowest total in the past 15 years, but their offense was weak. They finished 12-4 but were one and done in the playoffs.) Point differential rewards the Bengals for scoring 58 points in a game last year, but it also punishes them for allowing the Browns to score 48 in the same game.

Just picking the team with the biggest point differential gives us the correct Super Bowl winner in eight of the past 15 seasons. That's a majority, and I could have stopped there and proclaimed myself the next Aaron Schatz. But by the time I had figured that out, I had so much time invested that I added the twist: dividing the differential by total points scored. There's a reason for this. In Week 1 of this season, two teams beat their opponents by 14 points: Detroit over Green Bay 17-3, and Cincinnati over Cleveland 27-13. Which win is stronger? I'd argue that it's Detroit's. Not all 14-point victories -- or 10-point or 3-point or 1-point victories -- are created equal. The Lions allowed only one score, a field goal; the Bengals allowed a touchdown and two FGs. There has to be a way to give the Lions a tad more credit than the Bengals. The POW-R formula does that. The Lions' 14-point differential is equal to 70% of the 20 points scored in their game. The Bengals' 14-point differential equals only 35% of the 40 points scored in theirs. Detroit thus gets a 35-percentage-point edge for Week 1.
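
Here's that Week 1 comparison run through the same sort of sketch as before. (The scores are the ones from the paragraph above; everything else is illustrative.)

def powr(points_for, points_against):
    return (points_for - points_against) / (points_for + points_against)

lions = powr(17, 3)       # Detroit 17, Green Bay 3: 14/20 = 0.70
bengals = powr(27, 13)    # Cincinnati 27, Cleveland 13: 14/40 = 0.35
print(round(lions - bengals, 2))   # 0.35 -- a 35-percentage-point edge for Detroit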

You'll notice that POW-R doesn't even try to consider several supposedly critical things:
  • Yardage gained or surrendered. Irrelevant. Only points matter.
  • Who earned the points -- offense or defense. Seven points on the board is seven points, regardless of whether they came on a long drive, an interception return, a kick return, or a short drive set up by a fumble.
  • Home vs. road performance. Good teams win on the road. Bad teams don't. Any questions?
  • Strength of opponent. I've discovered that the formula is self-correcting. Over the course of a season, each team settles to its proper level. That's why when I post my weekly results, I point out that the rankings get more accurate as the season progresses. (There's a quick sketch of that week-by-week recomputation right after this list.)
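
To see that self-correction at work, here's a rough sketch of how a team's rating gets recomputed every week: keep running totals of points scored and allowed, and rerun the formula after each game. (The sample scores are made up for illustration.)

def weekly_powr(game_scores):
    # Return the cumulative POW-R rating after each game,
    # given (points_for, points_against) pairs in season order
    pf_total, pa_total = 0, 0
    ratings = []
    for pf, pa in game_scores:
        pf_total += pf
        pa_total += pa
        ratings.append((pf_total - pa_total) / (pf_total + pa_total))
    return ratings

# A hypothetical team's first four games
print(weekly_powr([(17, 3), (10, 24), (31, 14), (20, 20)]))
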
So now you understand where the POW-R-'ANKINGS come from. Let's see how they stack up in predicting Super Bowl winners over the past 15 years against other measures we've discussed: won-lost record; most points scored (P.F.); fewest points allowed (P.A.); and widest point differential:


YEAR   BEST W-L   MOST P.F.   FEWEST P.A.   WIDEST DIFF.   TOP POW-R   S. BOWL CHAMP
2004   PIT        IND         PIT           NE             NE          NE
2003   NE         KC          NE            NE             NE          NE
2002   PHI*       KC          TB            TB             TB          TB
2001   STL        STL         CHI           STL            STL         NE
2000   TEN        STL         BAL           OAK            BAL         BAL
1999   JAX        STL         JAX           STL            STL         STL
1998   MIN        MIN         MIA           MIN            MIN         DEN
1997   KC*        DEN         KC            DEN            DEN         DEN
1996   GB*        GB          GB            GB             GB          GB
1995   KC         SF          KC            SF             SF          DAL
1994   SF         SF          CLE           SF             SF          SF
1993   BUF*       SF          NYG           SF             DAL         DAL
1992   SF         SF          NO            SF             SF          DAL
1991   WSH        WSH         NO            WSH            WSH         WSH
1990   SF         BUF         NYG           BUF            BUF         NYG
TOT.   4          5           5             8              10

(*Tiebreakers: 2002: PHI won conference tiebreaker over GB and TB. 1997: KC held strength-of-schedule advantage over SF and GB. 1996: GB beat DEN in regular season. 1993: BUF won conference tiebreaker over HOU and beat DAL in regular season.)

The POW-R-'ANKINGS predicted 10 of 15 Super Bowl winners (including the 1997 Broncos ... and you thought the Broncos upset the Packers). That's two-thirds, and that's about as good as you're going to get. No formula can predict every year's NFL champ, simply because of the any-given-Sunday nature of the league. Even the strongest teams will have one or two down games. Unfortunately for the 1998 Vikings (30.52% POW-R), one of those down games came against the Falcons (20.93%) in the NFC title game. Who was POW-R's No. 2 team for 1998? NFL champion Denver (23.70%).

With that in mind, here's a look at the five years in which POW-R failed to predict the champ:


YEAR   NO. 1 POW-R TEAM   POW-R SCORE   S. BOWL CHAMP   CHAMP'S POW-R   CHAMP'S NFL RANK
1990   BUF                23.88%        NYG             22.71%          2
1992   SF                 29.24%        DAL             25.46%          2
1995   SF                 27.83%        DAL             19.83%          2
1998   MIN                30.52%        DEN             23.70%          2
2001   STL                29.64%        NE              15.40%          7

Scoring at home? Fourteen out of 15 years, the Super Bowl champ was the No. 1 or 2 POW-R team.

This chart tells us a number of things. First, Brett Favre wasn't the only one who had trouble with the Cowboys in the 1990s. When Steve Young spent years talking about the monkey on his back, he was referring to the one wearing the star. Second, it makes perfect sense that the Bills-Giants Super Bowl after the 1990 season was the closest ever. And third, the 2001 Patriots' victory over the Rams was the biggest upset in Super Bowl history, Joe Namath be damned. Any way you slice the numbers from that year, the Rams were by far the superior team. Even if you count only the 14 regular season games Tom Brady started, or only the last 12, after Brady had settled down, the Patriots still were only the No. 4 POW-R team in the NFL in 2001. The numbers really reinforce how the Patriots' 2001 championship was a freak occurrence. Bob Kraft and Bill Belichick were building a champion, no doubt about it, but the plan was not to win it all in 2001. The Pats' 9-7 finish in 2002 wasn't so much a down year; their NFL title in 2001 was a totally unbelievable up year.

Earlier I gave the top five POW-R teams of the past 15 years, and we've spent a lot of time talking about winners. Just for farts and giggles, here are the worst POW-R teams of that same period:

YEAR   TEAM         REC.   POW-R
1991   Colts        1-15   -45.42%
2000   Browns       2-14   -44.48%
1990   Patriots     1-15   -42.26%
1992   Seahawks     2-14   -38.05%
1998   Eagles       3-13   -36.24%
2000   Cardinals    3-13   -35.68%
1990   Browns       3-13   -33.91%
1999   Browns       2-14   -33.64%
2003   Cardinals    4-12   -33.53%
1993   Colts        4-12   -33.33%
2000   Bengals      4-12   -31.99%
1991   Buccaneers   3-13   -29.45%
1992   Patriots     2-14   -27.82%
1991   Cardinals    4-12   -27.41%
2004   49ers        2-14   -27.15%
1993   Bengals      3-13   -26.09%
1998   Bengals      3-13   -25.56%
2002   Texans       4-12   -25.13%
1999   Saints       3-13   -25.07%
1991   Rams         3-13   -25.00%

A lot of these teams didn't even get the top draft pick the next season. But it's worth it to go 4-12 rather than 3-13!

We'll have some more fun with POW-R analysis in the future. But there's one other thing to talk about, and that's the centigrade scale. The raw percentages allow teams to be compared from year to year. Within a season, however, there's another way to rank the teams: on a scale of 0 to 100. We simply make the strongest team's POW-R rating equivalent to 100 and the weakest team's to 0, then scale every rating in between proportionally. The centigrade rankings can't be compared from year to year, but they give us a sense of how close your team is to the league's best and worst.
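
Here's a sketch of that conversion, assuming you already have every team's raw POW-R rating for the season. (The three sample ratings are invented.)

def centigrade(ratings):
    # Rescale raw POW-R ratings so the league's best team is 100 and the worst is 0
    low, high = min(ratings), max(ratings)
    return [100 * (r - low) / (high - low) for r in ratings]

# Best team, a middle-of-the-pack team, worst team
print(centigrade([0.30, 0.02, -0.35]))   # [100.0, about 56.9, 0.0]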

So there you have it: As good a ranking system as any. How's that for a ringing vote of confidence?
