© 2013 Michael Parkatti

Investigation into Team Percentages, Part II: Save Percentage

On Tuesday I wrote Part 1 of this series, looking into the sustainability of team shooting and save percentages, with a concentration on PDO (which combines both shooting and save percentage into an index that tends to 1000).  I found evidence that PDO is not a random variable that automatically regresses to 1000; in fact, many teams had more or fewer than their expected share of above- or below-average seasons, based on probability theory.  Starting today I will begin to break down these components in an attempt to find out what is causing this empirical non-regression towards long-run league averages, starting with team save percentage.

In my previous piece on PDO, I transformed each team’s PDO for each season into a normal cumulative distribution score between 0 and 1, or 0% and 100%.  Anything above 50% is considered above average for that season; anything below is considered below average.  I repeated this technique using only team save percentages at even strength (5 on 5), scaling each team’s season against how the rest of the league finished that season.
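For concreteness, here is a minimal sketch of that scaling step in Python.  The data frame and column names are hypothetical stand-ins (the original work may well have been done in a spreadsheet), and the z-score-to-normal-CDF mapping is my reading of the mechanics described above:

```python
import pandas as pd
from scipy.stats import norm

# Hypothetical example data: one row per team-season with 5-on-5 save percentage.
df = pd.DataFrame({
    "team":      ["BOS", "VAN", "TOR", "NYI"],
    "season":    ["2011-12"] * 4,
    "ev_sv_pct": [0.938, 0.930, 0.917, 0.915],
})

def to_cdf_score(sv_pct):
    """Standardize a season's save percentages against that season's league
    mean and standard deviation, then map the z-scores through the normal
    CDF to get a 0-1 score (0.5 = league average)."""
    z = (sv_pct - sv_pct.mean()) / sv_pct.std()
    return norm.cdf(z)

# Scale each team against the rest of the league within the same season.
df["sv_score"] = df.groupby("season")["ev_sv_pct"].transform(to_cdf_score)

# Long-run average for each franchise across all available seasons.
long_run = df.groupby("team")["sv_score"].mean()
print(long_run)
```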

[Table: team even-strength save percentage cumulative distribution scores, by season]

I found the results for each of the last 6 seasons (including the current partial season), and then took a simple average of all 6 cumulative scores.  Our initial expectation would be for these team averages to tend towards 50% over such a long time scale if save percentage were truly random.  However, it immediately becomes clear that certain teams have received consistently high save percentages, while others have received consistently low ones.  Let’s look at this graphically:

[Chart: 6-season average team save percentage score for all 30 NHL teams]

This graph represents the 6-season average team save percentage score for all 30 NHL teams.  A long-run league average score would be 0.50, and I’ve overlaid an arbitrary yellow band around +/- 10% of this mark to highlight teams close to the 50% average.  Incredibly, only 7 teams out of 30 are inside this yellow band, indicating a distinct level of sustainability in various franchises’ team save percentages.

Let’s have a look at the top 5 and bottom 5 teams in terms of their 6-year long-run averages:

[Table: top 5 and bottom 5 teams by 6-year average team save percentage score]

In all of these cases, the results should not be all that surprising to regular fans of the NHL.  Boston has had the highest sustained average at 86.6% — they also happen to have enjoyed two Vezina performances from Tim Thomas and very strong spot duty from Tuukka Rask otherwise.  Vancouver (at second highest) has benefited from years of quality goaltending from Roberto Luongo, one of the top 3 goaltenders in recent history in terms of sustained above-average save percentage.  The Rangers, 4th on this chart, have in Henrik Lundqvist a goaltender putting together an all-time string of elite goaltending seasons (0.010 or more above league average).  The Coyotes and Sharks have changed goaltenders over the years, but have gotten peak seasons out of goalies like Nabokov, Niemi, Bryzgalov, and Mike Smith.

On the opposite side of the coin, teams like Tampa Bay, the Islanders, the Leafs, the Avalanche, and Atlanta/Winnipeg have all gone through a rotating cast of merely average to terrible goaltenders, and all end up in the bottom 5 teams in terms of sustained team save percentage.

In any case, these teams certainly show that a team’s save percentage does not necessarily tend towards the league average.

In Tuesday’s post I also investigated the expected probabilities for teams to have 3 above-average seasons and 3 below-average seasons out of the 6 seasons in my dataset (and all other combinations of above/below average seasons), under the assumption that the true probability of a team having either type of season was 50%.  The expected probabilities I found still hold for this analysis of team save percentage, because we start with the same assumed 50% likelihood of having either an above- or below-average season.  I then found how many teams fit into each category, and compared them with my expected probabilities:

[Table: expected vs. actual percentage of teams by number of above-average seasons out of 6]

Comparing the actual percentage of teams in each category to the expected percentage shows large discrepancies.  It’s easier to see graphically:

[Chart: expected vs. actual percentage of teams by number of above-average seasons out of 6]

We would expect 31.3% of our teams to have 3 above-average seasons and 3 below-average seasons out of 6 seasons, if team save percentages were truly random.  Instead, only 13.3% of teams actually had that ratio of above to below average seasons.  Out of the 7 possible outcomes, the 3 above / 3 below scenario actually tied for 4th most likely, well below its expected position as the single most likely outcome.

So, where are all the other teams?  They are at the far left and right of this graph, showing a much higher than expected tendency to have a large majority of either good or bad team save percentage seasons.  Five teams in total had either all above-average or all below-average seasons, whereas the model expected only about one team in those combined categories.  Only about 22% of teams were expected to have 5 or 6 seasons on one side of average — in reality, almost 47% of teams fit in those categories.
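For reference, the expected figures quoted above fall straight out of the binomial distribution with n = 6 seasons and p = 0.5.  A quick sketch of the arithmetic:

```python
from math import comb

n, p = 6, 0.5  # 6 seasons, assumed 50% chance of an above-average season

# Probability of exactly k above-average seasons out of 6
probs = {k: comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)}

print(f"3 above / 3 below: {probs[3]:.1%}")        # 20/64 = 31.3%
tails = probs[0] + probs[1] + probs[5] + probs[6]
print(f"5 or 6 seasons on one side: {tails:.1%}")  # 14/64 = 21.9%, about 22%

# Expected number of teams (out of 30) with all 6 seasons on one side: ~0.94
print(f"Teams expected with all 6 on one side: {30 * (probs[0] + probs[6]):.2f}")
```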

The conclusion is clear: team save percentages do not regress to league average, even over the long term.  Some teams employ good goaltenders and sustain high percentages for long periods of time, while other teams employ a cast of substandard goaltenders, and sustain poor percentages.

Considering that save percentage is one of the components of PDO, I’d suggest that as a community we start to question the assumption that PDO necessarily regresses to 1000.  As I showed on Tuesday, it doesn’t, and this investigation into save percentages should help to show why.  PDO is commonly used as a proxy for ‘luck’ — but having a good goaltender on your team is obviously not luck; it’s a substantiated reality.

However, there may be something that tends to regress to league average over time, and I’ll explore that in the next part of this series…

EDIT: I forgot to add the last statistical test — I set up a regression using every team’s team save percentage normal distribution score between 0 and 1 as an explanatory variable for the next year’s score.  Basically, can I use one year’s team save percentage as a predictor of next year’s?  The equation found was:

Next year’s predicted score = 0.33 + (This year’s score) * 0.35

So if you were an average team at 0.5 this year, you’d expect to have a team save percentage score of, yep, 0.505.  If you had a terrible season and scored 0.00 (like this year’s Flames!), you’d expect to have a score of 0.33 next year.

This year’s score has a P-value of 0.000016, well under the 0.05 threshold — we can now reject the hypothesis that this year’s team save percentage has nothing to do with next year’s.  It clearly does, and is therefore not random.
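For anyone who wants to reproduce this kind of year-over-year test, here is a rough sketch of how the regression could be set up in Python with statsmodels.  The data below are randomly generated stand-ins for the real scores, so the coefficients it prints will not match the post’s figures:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical long-format data: one row per team-season with the 0-1
# save percentage score ('sv_score') computed as in the earlier sketch.
rng = np.random.default_rng(0)
scores = pd.DataFrame(
    [(f"T{i:02d}", season, rng.uniform())
     for i in range(30) for season in range(2008, 2014)],
    columns=["team", "season", "sv_score"],
)

# Pair each team-season with the same team's score the following season.
scores = scores.sort_values(["team", "season"])
scores["next_score"] = scores.groupby("team")["sv_score"].shift(-1)
pairs = scores.dropna(subset=["next_score"])

# Regress next year's score on this year's score.
X = sm.add_constant(pairs["sv_score"])
fit = sm.OLS(pairs["next_score"], X).fit()
print(fit.params)   # intercept and slope (the post reports roughly 0.33 and 0.35)
print(fit.pvalues)  # p-value on this year's score (the post reports ~0.000016)
```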

2 Comments

  1. Posted April 18, 2013 at 2:12 pm

    This brings into question the usefulness of PDO in determining how ‘lucky’ teams have been. We should maybe just focus on shooting percentage to estimate luck (which I guess may be the topic of the next post in this series)

  2. wheatnoil
    Posted April 18, 2013 at 6:41 pm

    Fascinating work. I would have expected two or three teams with “elite” goaltenders and “Steve Mason” to make up the extremes, but it appears even strength save percentage is more repeatable than one might have thought.

    Interestingly, as you mentioned, both San Jose and Phoenix have used different goaltenders over this period. So is it the goaltenders or the team defensive system that determines even-strength save percentage over time? I mean, the answer is likely both, but I wonder how much each contributes. If you put Luongo and Schneider on the Oilers and gave Khabibulin and Dubnyk to the Canucks, how close would the Oilers’ save percentage climb to the Canucks’ and how far would the Canucks’ fall to the Oilers’?

3 Trackbacks

  1. [...] through peaks and valleys, it can be sustained in a fashion more than probability would allow.  We’ve also seen that team save percentages are a large culprit for this behaviour — employing a good goalie is not something that should [...]

  2. [...] on this blog I’ve completed a three-part series that looked at team PDO critically to understand whether it truly is a measure of [...]

  3. [...] random over time.  I found that while team shooting percentage does seem to be random over time, team save percentage shows distinct evidence of not being random.  The influence from SV% was so strong that it made PDO statistically predictable over time [...]
