
Formulating a New Statistic for Team Luck, Part I

Previously on this blog I completed a three-part series that looked critically at team PDO to understand whether it truly is a measure of “team luck”.  PDO is the sum of a team’s even strength save percentage and shooting percentage — the higher it is, the luckier the team is assumed to have been.  However, what I found was that PDO can be sustained by a team at either high or low levels — it does not tend towards the league average over time in the fashion expected by chance.  This effect was entirely due to team save percentage: save percentage can be sustained at high or low levels, which makes sense when you consider that teams have similar goaltending year after year.  Shooting percentage proved to be essentially random, and so is completely plausible as a component of luck.

These observations got me thinking about how best to reflect team luck: what improvements could be made to PDO?  I concentrated on save percentage, as shooting percentage wasn’t broken and didn’t need fixing.  How can we know how “lucky” a team’s goaltending is?  To me, the obvious answer was to consider the historical performance of the actual goaltenders each team employs and plays.

The first thing I needed to do was compile a list of all goaltenders who’ve played in the NHL by season since 2001, along with their even strength save percentages.  I downloaded this data from NHL.com and compiled a series of large tables covering the seasons from 2001-02 to 2012-13 for every goaltender who faced an even strength shot over that period.  The first table showed the total number of even strength shots each faced by season.  The second showed the total number of even strength goals they allowed by season, and the third showed their even strength save percentage by season.  I then created a fourth table that took a 5-year moving average of each goaltender’s performance by season.  To qualify, a goaltender needed to have faced at least 500 even strength shots over the previous 5 years (to cut down on small sample sizes).  If a goalie was new, or had not faced 500 shots in the last 5 years, I simply assigned the average ES save percentage of all new goalies in the NHL over the last 5 years (0.918 entering this season).
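For concreteness, here is a minimal Python sketch of that rolling calculation.  The data layout (a dictionary of shots faced and goals allowed per goalie per season, keyed by the season’s ending year) is my own assumption, and since the post doesn’t specify whether the moving average is shot-weighted, this version simply pools shots and goals over the window:

    # Rolling 5-year ES SV% with a 500-shot minimum (a sketch, not the author's spreadsheet).
    # Assumed layout: {goalie_name: {season_end_year: (es_shots_faced, es_goals_allowed)}}

    NEW_GOALIE_SV = 0.918  # average ES SV% of new goalies over the last 5 years (per the post)

    def five_year_sv(history, goalie, season, window=5, min_shots=500):
        """ES SV% pooled over the `window` seasons entering `season`; falls back
        to NEW_GOALIE_SV when fewer than `min_shots` shots were faced."""
        prior_seasons = [season - i for i in range(1, window + 1)]
        shots = sum(history.get(goalie, {}).get(s, (0, 0))[0] for s in prior_seasons)
        goals = sum(history.get(goalie, {}).get(s, (0, 0))[1] for s in prior_seasons)
        if shots < min_shots:
            return NEW_GOALIE_SV
        return 1 - goals / shots

    # e.g. five_year_sv(history, "Goalie X", 2013) pools the 2007-08 through 2011-12 seasons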

The next step was to compile what I called the Expected SV%, or what I would expect a team’s even strength save percentage to be for a season based on its goaltenders.  As an example, let’s consider this year.  Here’s the table showing how I calculated each team’s expected save percentage at even strength:

[Table: each team’s goaltenders this season, games started by each, their 5-year historical ES SV%, and the resulting expected team SV%]

First I found all goaltenders appearing this year by team, and how many games each had started for his team.  Then I looked up each of their 5-year historical even strength save percentages in my previously constructed table.  This SV% appears in the table above, two cells to the right of each goaltender (again, remember that 0.918 was used for new or little-used goaltenders).  The final step was to create a sumproduct team expected save percentage that weights each goalie’s historical save percentage by the number of games he started.  Weighting by number of games started made sense to me as a way to keep it simple.  If we were in the middle of the season, you’d simply find how many games each goaltender had started up to that point in the season, and multiply by that goalie’s historical save percentage *entering that season*.
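That weighted average is simple enough to write down directly.  A minimal sketch, assuming a hypothetical list of (games started, historical ES SV%) pairs for a team’s goalies:

    def expected_team_sv(goalies):
        """Games-started-weighted average of each goalie's historical ES SV%.
        `goalies` is a list of (games_started, historical_sv) pairs."""
        total_starts = sum(gs for gs, _ in goalies)
        return sum(gs * sv for gs, sv in goalies) / total_starts

    # Made-up example: a starter at .925 over 40 starts and a backup with no
    # qualifying history (assigned 0.918) over 8 starts.
    print(round(expected_team_sv([(40, 0.925), (8, 0.918)]), 4))  # 0.9238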

Now we can compile what I’m calling the Modified-PDO, or MPDO, for each team.

[Table: MPDO by team, showing expected SV%, actual SV%, SV% difference, SH% difference, total difference score, MPDO, and luck rank]

This data includes all 5-on-5 and 4-on-4 situations, and combines data compiled from Behind the Net and NHL.com.  I used Behind the Net to find shots for and against and goals for and against at even strength, then used NHL.com data to strip empty net goals out of Behind the Net’s totals.  A few of the empty net goals I stripped out must have been scored on the powerplay, and I could not identify those given the source constraints, so this data will not be perfectly accurate, but it will be extremely representative.

First you see the expected team save percentages based on the numbers from the earlier table, which weight the historical performance of each goaltender by how many games he started.  Then I found what each team’s actual even strength save percentage was.  Finally, I subtracted the expected SV% from the actual SV% to leave a residual performance — a positive number indicates the goalies did better than their historical performance would have suggested, while a negative number means they underperformed.

I then found team shooting percentages in the traditional way, finding how many shots each team took at even strength and how many of those turned into goals.  Then I subtracted the league average shooting percentage (near 0.079) to come up with a shooting percentage difference score, again with higher scores meaning luckier.

I then took a total difference score, which simply adds the SV% difference score to the SH% difference score — again, higher equals luckier.  To match the current PDO convention, I added one and multiplied by 1000 to create the familiar format we all seem to like.
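Put together, the whole calculation reduces to a few lines.  A minimal sketch with made-up inputs (the 0.079 league-average shooting percentage is the figure quoted above):

    LEAGUE_SH = 0.079  # approximate league-average ES shooting percentage quoted above

    def mpdo(actual_sv, expected_sv, actual_sh, league_sh=LEAGUE_SH):
        """MPDO = 1000 * (1 + (actual SV% - expected SV%) + (actual SH% - league SH%))."""
        sv_diff = actual_sv - expected_sv
        sh_diff = actual_sh - league_sh
        return 1000 * (1 + sv_diff + sh_diff)

    # Made-up example: goalies beat expectations by 0.005, shooters ran 0.010
    # above the league average.
    print(round(mpdo(actual_sv=0.925, expected_sv=0.920, actual_sh=0.089), 1))  # 1015.0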

I’ve added ‘lucky ranks’ on the far right, running from the luckiest team (1st) all the way down to the unluckiest (30th).  Toronto is the luckiest on the strength of an insanely good shooting percentage, while Chicago is 2nd because of equally lucky shooting and save percentages.  Edmonton ranks as the 13th luckiest team, with a moderately lucky save percentage and just a slight bit of bad luck in shooting percentage.

So, how does MPDO compare with PDO?

[Table: MPDO vs. traditional PDO by team, with the difference between the two]

This table compares MPDO to traditional PDO.  It shows which teams come out luckier by MPDO than by PDO (a positive difference) and which come out unluckier (a negative difference).  A team like Winnipeg is luckier by MPDO because, of course, the Jets play Ondrej Pavelec, and his expected save percentage was lower than the league average this season.  On the flip side, the Canucks’ MPDO is much lower than their PDO because their goaltending is known to be excellent and had a bit of a down season.

And that’s really the spirit of this: how boring is it for the Canucks to be near the league leaders in PDO season after season, especially considering the presupposition that their PDO will gravitate towards 1000 over time?  It doesn’t.  But MPDO does allow their score to approach 1000 — Luongo and Schneider can have seasons that are average for them and STILL above the league average.  That doesn’t mean they’re average goaltenders; it just means that this season could be considered right in line with expectations for them, even though it was 0.007 above the league-average SV%.

The next part in this series will examine whether this new MPDO displays the characteristics of truly reflecting chance, or luck, using mechanisms similar to those I used in my original three-part exploration of team percentages.  I can’t wait to find out what the evidence says.  No really, I haven’t done it yet — this stuff takes forever.

2 Comments

  1. Posted May 2, 2013 at 3:20 pm

    Nicely done! My guess is that your MPDO stat will regress more to the mean than regular PDO, but who knows.

    One other thought I have is maybe using Fenwick-For rather than SF is a better divisor to use when calculating “shooting percentage”. The problem with SF is that it doesn’t count posts or shots that barely missed, while the problem with Fenwick-For may be that wild or weak shots from far out that had no chance anyway are counted. What do you think; has someone already investigated this?

  2. Posted May 3, 2013 at 1:13 pm

    The theory makes sense, and it’s something I do myself.

    But just looking at that last table, I’m not convinced the data is proving out to be terribly interesting. The correlation between PDO and MPDO in that table is 0.98. I imagine you have a more thorough analysis than that in mind, and maybe it’ll turn something up, but at first glance I’m skeptical that corrections that small will prove to be worth the additional complexity.

    Anyway, if it does prove to be worth the effort, there’s a simple way it could probably be improved a bit. The goalies’ save percentage entering the season should probably be regressed to the mean a bit, as a function of number of games played. After all, you don’t want to list a team as being unlucky on the grounds that the guy who had a .935 save percentage in 20 games his rookie year didn’t keep that up in his sophomore year. (This would also let you get away from the 500-shot-minimum threshold and instead introduce a smooth, inclusive correction factor.)
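A minimal sketch of the shrinkage the second commenter describes; the prior weight k and the use of shots faced rather than games played are my own assumptions, not anything from the post or the comment:

    def regressed_sv(observed_sv, shots_faced, league_sv=0.918, k=1000):
        """Shrink an observed ES SV% towards the league average; the weight on
        the observation grows with sample size.  `k` is a made-up prior weight
        (in shots), not a fitted constant."""
        w = shots_faced / (shots_faced + k)
        return w * observed_sv + (1 - w) * league_sv

    # Example: a .935 over roughly 500 shots keeps only about a third of its
    # gap above .918.
    print(round(regressed_sv(0.935, 500), 4))  # 0.9237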
