

Analytics Profile: Wrap-up

Addressing comments and questions

Photo: Boston Bruins at Tampa Bay Lightning, Stanley Cup Playoffs | Kim Klement-USA TODAY Sports

Now that we’ve gone through sixteen player profiles, our journey is complete. Unfortunately, Jake DeBrusk and Matt Grzelcyk didn’t meet the time-on-ice requirement for a profile, which is a shame, as they are among the brightest spots for the Bruins, at least from an analytics point of view.

Just as with evaluating players by the eye test, everyone views statistics differently. That’s part of the reason one-number stats can be so powerful. The grades from these profiles should not be taken as representative of the whole analytics community.

There were a few great questions and comments from readers during this series, and I wanted to address them in a more formal context for everyone to see. The first comes from jeffo20.

First, thanks, Shawn, for running this series. It’s been interesting to say the least, and I appreciate the work that you’ve put into it. I’ve been stewing over this since the start of the series, and do have a couple of questions. These are not so much about Krug, but about the team as a whole: From the start, we’ve seen that our best offensive guys at their positions, with the exception of Pasta, are not as good at creating quality shots as we think (though Marchand’s B in that category hardly seems like a bad grade). Yet these guys produce like crazy, and sure, they get a fair amount of power play points, but they do pretty well 5 on 5. Are they producing because of sheer volume, i.e. throw the puck toward the net and good things may happen? Are they producing because of great shooting percentage? This may be part of it, as Marchand in particular is a high percentage scorer. I’m also curious about how they stack up as a team against the League. Do the Bruins score on a higher percentage of “low danger” and “mid danger” shots than the rest of the League? And, finally, the Big Question: These are three year profiles, which cover a season and a half + of Claude, and a season and a half of Butch. What has changed, if anything, with the change in system that can be reflected in these numbers? Thank you!

This is a very complex question, but let’s start with the part about shot quality grades. The public data we have limits how well we can evaluate shot quality. I’ll address that in a minute, but from what we can conclude, high-danger shots are scarcer than people might imagine. Most fans know about the home plate area, but that’s not precise enough. In reality, the high-danger zone is a semi-circle that barely reaches the inner hash marks of the low slot. I haven’t had the chance to study this at the player level, but at the team level, teams that rely heavily on high-danger shots are more likely to fall below their expected shooting percentage.
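To make the semi-circle idea concrete, here is a minimal sketch of how such a zone might be checked. The coordinates assume the standard NHL convention (goal line at x = 89 feet, net centered at y = 0), but the 15-foot radius is an illustrative guess on my part, not a value from any published expected-goals model:

```python
import math

# Sketch of a "high-danger" check: a semi-circle in front of the net.
# NHL rink coordinates put the goal line at x = 89 (feet), with the net
# centered at y = 0. The 15 ft radius is an illustrative assumption,
# not a value taken from any real model.
GOAL_X, GOAL_Y = 89.0, 0.0
HIGH_DANGER_RADIUS = 15.0

def is_high_danger(x, y):
    """True if a shot location falls in the semi-circle in front of the net."""
    if x > GOAL_X:  # behind the goal line: not in the semi-circle
        return False
    dist = math.hypot(GOAL_X - x, GOAL_Y - y)
    return dist <= HIGH_DANGER_RADIUS

# A shot from the low slot qualifies; a point shot does not.
print(is_high_danger(80, 3))   # ~9.5 ft from the net -> True
print(is_high_danger(60, 0))   # 29 ft out -> False
```

A real model would grade shots on a continuous scale rather than a binary in/out check, but the geometry above is the basic idea behind why so few shots qualify as truly high-danger.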

Where public data most limits our evaluation of shot quality is pre-shot movement. We have some data on this from manual trackers. Focus on DZSA60 (danger-zone shot assists per 60 minutes) here: Marchand has been better than 75% of the league over the last four seasons in the rate at which he creates dangerous shot assists. Bergeron is about average.

When we try to evaluate shot quality, it’s really about the effect a player has outside of shooting talent. While I don’t doubt that the Bruins’ best players are above-average shooters, being able to raise your team’s expected shooting percentage should give you more consistency, and the ability to win games when you didn’t play your best.

As for how the team stacks up, the Bruins have been below average in offensive shot quality. Over the last three seasons, they rank 24th in expected unblocked shooting percentage and 18th in actual unblocked shooting percentage at 5v5. In Claude Julien’s last full season (2015-16), the Bruins were 28th in expected unblocked shooting percentage. That lack of shot quality under Julien, in my opinion, fed the frustrations that led to his eventual firing. Cassidy doesn’t have pixie dust, either: the Bruins finished just 21st in that metric last season and 16th in actual unblocked shooting percentage. Still, shot quality has certainly improved under Bruce Cassidy, and it is probably something he will focus on even more this season.
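For readers who want to see what these two metrics actually measure: expected shooting percentage is the sum of the expected-goal values assigned to a team’s unblocked shots, divided by the number of those shots, while actual shooting percentage simply counts goals. A minimal sketch, where the shot records and xG values are invented for illustration rather than taken from any real model:

```python
# Sketch: expected vs. actual unblocked shooting percentage.
# Each shot is a (xg_value, was_goal) pair. The xG values below are
# invented for illustration; a real model assigns them from shot
# location, shot type, pre-shot movement, and so on.

def shooting_percentages(shots):
    """Return (expected_sh_pct, actual_sh_pct) for a list of unblocked shots."""
    n = len(shots)
    expected = sum(xg for xg, _ in shots) / n
    actual = sum(1 for _, goal in shots if goal) / n
    return expected, actual

# Hypothetical sample: four unblocked shots, one of which went in.
sample = [(0.03, False), (0.12, False), (0.25, True), (0.06, False)]
x_pct, a_pct = shooting_percentages(sample)
print(x_pct)  # 0.115 -> the quality of the chances generated
print(a_pct)  # 0.25  -> what actually went in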

Seabass#8 addressed one of my favorite topics in regard to John Moore.

I disagree. Effectiveness is more important than skill. The hockey scrap heap is littered with guys who had tons of raw talent, but never put it all together. As we used to say back when I played, “He’s got all the tools, but no toolbox.” I hope I am wrong about Moore. Maybe he can be improved in the right situation. But I am skeptical. He’s been around long enough that we know what he brings. He’s a 7th D-man. If Cassidy sits Gryz for this guy…well, that will be disappointing to say the least.

While I agree with much of this, the skills John Moore possesses are a hot commodity. Outside of Torey Krug, Moore is arguably the best skater on the Bruins’ back end, and as the game shifts toward a fast-paced, transition style, his skating matters more than ever. The Bruins’ management and coaching staff believe they can put Moore in the right situation to succeed. Personally, I’m not a fan of this type of move, but there is logic behind it. At the end of the day, the Bruins will look smart or dumb, but that’s what happens when you make decisions.

Finally, joeboxford had a good question regarding Brandon Carlo.

He plays year one with Chara. Against the other teams’ best lines. He plays year two with Krug who was a turnover machine at times. How does the model possibly take those two data points and pop out a Brandon Carlo rating? Unless the coach is constantly mixing pairings and putting them against different O line pairings it seems like it’s very hard to isolate one skater’s impact. You would need to measure each shift against the opponent’s 5 players given your 4 teammates that are with you. Does the model do that? Seems like a ton of number crunching to get there.

In truth, it is very hard to completely isolate a player’s impact. Collinearity is probably the biggest issue in evaluating talent. Carlo spent 72% of his 5v5 minutes with Chara in his first season and 53% with Krug last season. Trying to separate him from Chara when he spent only about 400 minutes away from him in 2016-17 is difficult, and the same goes for last season, when he spent only about 500 minutes away from Krug.

We saw an increase in Carlo’s defensive play and a decrease in his offensive play. Is this due to his usage or his skill? In reality, the answer is a little of both. We can’t fully isolate a player’s impact, but that doesn’t mean we can’t come close. There is a certain level of uncertainty, especially with a young player like Carlo. To evaluate a player, you should use ALL of the resources available to you. As a fan, you’re free to evaluate a player however you choose, but organizations should be using a combination of analytics, scouting reports, coaching feedback, and more. Analytics are just one piece of the equation.
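The “ton of number crunching” joeboxford describes is essentially what RAPM-style on-ice models do: build one row per shift segment with an indicator column for every skater on the ice, then regress the outcome rate on all of them at once. Regularization (ridge regression) is what keeps heavily overlapping teammates like Carlo and Chara from producing wildly inflated estimates. A toy sketch with made-up shift data, not the actual model behind any of these grades:

```python
import numpy as np

# Toy RAPM-style sketch. Each row is a shift segment; each column is a
# skater (+1 if on the ice for the team, 0 if not). A full model also
# carries -1 columns for opponents plus context terms (zone starts,
# score state). All values below are invented for illustration.
# Columns: [Carlo, Chara, Krug]
X = np.array([
    [1, 1, 0],   # Carlo with Chara
    [1, 1, 0],
    [1, 0, 1],   # Carlo with Krug
    [1, 0, 1],
    [0, 1, 0],   # Chara without Carlo
    [0, 0, 1],   # Krug without Carlo
], dtype=float)
# Outcome per segment, e.g. expected-goal differential (invented).
y = np.array([0.4, 0.5, 0.1, 0.2, 0.3, 0.0])

def ridge(X, y, lam):
    """Closed-form ridge regression: (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

coefs = ridge(X, y, lam=1.0)
print(coefs)  # one shrunken estimate per skater
```

Because Carlo’s minutes overlap so heavily with Chara’s and Krug’s, the penalty term shrinks all three estimates toward zero instead of letting collinearity blow them up; that shrinkage is also why young players with few “away” minutes, like Carlo, carry extra uncertainty.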