Week-to-Week Adjustments
Calibrating your model’s week-to-week adjustments is where the money is made.
In this post I will:
Explain the approach I take to making weekly adjustments, and
Show how to identify value
Now that you’ve selected your preferred modeling technique and have your ratings dialed in for Week 1, where do you go after the games finish?
If you have a regression-based model, the Week 1 outcomes will likely update your ratings for Week 2 automatically. Sounds great, right? Not exactly. The Achilles’ heel of regression models is small-sample data. Calibrating a regression model against the previous season’s data may give you confidence in its predictive ability, but last season’s characteristics may differ from what plays out this season.
To overcome the small-sample problem, many regression models (including play-by-play models) rely heavily on pre-season ratings and slowly reduce that weight as more data is collected during the regular season. Of course, if you have little confidence in your Week 1 ratings, you have the opportunity to reassess after you see the on-field performance.
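For intuition, here is a minimal sketch of that shrinking pre-season weight, assuming a simple blend where the prior is “worth” a fixed number of games. The formula and the `prior_games` default are illustrative assumptions, not how any particular regression model actually does it.

```python
def blended_rating(preseason_rating: float,
                   observed_rating: float,
                   games_played: int,
                   prior_games: float = 6.0) -> float:
    """Blend a pre-season prior with in-season performance.

    `prior_games` says how many real games the pre-season rating is
    "worth"; its weight shrinks toward zero as games_played grows.
    Both the blend and the 6.0 default are illustrative assumptions.
    """
    w_prior = prior_games / (prior_games + games_played)
    return w_prior * preseason_rating + (1 - w_prior) * observed_rating
```

After Week 4, for example, a `prior_games` of 6.0 still gives the pre-season rating 60% of the weight; by Week 12 it has fallen to a third.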
One of the unique challenges of modeling the NFL season is that each team plays only 16 games. The small sample size and the lack of season-to-season continuity are one reason I chose not to build a regression model. And I don’t have the time or the resources to build a play-by-play model (though I’ve heard it argued that play-by-play models are more robust against small samples, since there are hundreds of plays across the league every week). For these reasons, and if nothing else because it was simple to build, I chose the power ratings method.
There is no single right answer for making reasonable week-to-week adjustments. What has worked well for me is simply looking at the final score and updating the ratings for the next week using a modified Brier score. In layman’s terms, I take a team’s projected pre-game win probability and compare it against the actual outcome (1 for a win, 0 for a loss). If the winning team had a 60% chance of winning pre-game and indeed won, they get a +0.4 upgrade to their power rating (1.0 minus 0.6). I also compare the margin of victory against the spread and adjust accordingly (positive if the team covered), as sketched below.
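Here is a rough sketch of that update in code. The structure (outcome minus pre-game win probability, plus a margin-versus-spread term) follows the description above, but the weights `k_result` and `k_margin` are illustrative placeholders, not the exact constants in my algorithm; tune them against your own backtests.

```python
def rating_update(pregame_win_prob: float,
                  won: bool,
                  actual_margin: float,
                  team_spread: float,
                  k_result: float = 1.0,
                  k_margin: float = 0.05) -> float:
    """One game's adjustment to a team's power rating.

    surprise: actual outcome (1 win, 0 loss) minus pre-game win
    probability. A 60% favorite that wins earns +0.4, matching the
    example in the text; a 60% favorite that loses earns -0.6.

    cover_margin: margin of victory relative to the spread, positive
    if the team covered. team_spread uses the usual convention
    (negative for favorites), so a -4 favorite winning by 7 covers by 3.

    k_result and k_margin are assumed weights for illustration.
    """
    surprise = (1.0 if won else 0.0) - pregame_win_prob
    cover_margin = actual_margin + team_spread
    return k_result * surprise + k_margin * cover_margin

# A 60% favorite laying 4 points that wins by 7:
# 1.0 * 0.4 + 0.05 * 3 = +0.55 to its rating.
print(rating_update(0.60, won=True, actual_margin=7, team_spread=-4.0))
```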
In summary, be purposeful about how you make week-to-week adjustments in your model; this has the biggest impact on your model’s performance. I’ve spent a lot of time refining my adjustment algorithm. Done effectively, it provides a great marker for identifying value against the spread.
————
All models are wrong. The good ones are helpful.
Identifying value—the application of the algorithm.
Recall our previous discussion about turning line spreads into implied probabilities. After you’ve made all of the adjustments you want for a specific game, your algorithm will produce an expected spread, say Green Bay Packers -4 vs. Arizona Cardinals. You know from a previous post that this implies GB has a win probability of about 64%. If your contest offers ARI +5.5, then against your expected spread the algorithm is implying 1.5 points of “value” in taking the 5.5 with the Cardinals. In this example, the 1.5 points of value are worth about 4% toward your probability of getting this pick correct.
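If you want to reproduce that conversion yourself, a common approximation (not necessarily the one from the earlier post) treats the final margin as normally distributed around the expected spread. The 13-point standard deviation below is an assumption; different conversion tables, including the ~64% figure above, will differ by a point or two.

```python
from statistics import NormalDist

def spread_to_win_prob(team_spread: float, margin_sd: float = 13.0) -> float:
    """Approximate win probability implied by a team's point spread.

    Negative spread means favorite. Models the final margin as
    Normal(-team_spread, margin_sd); margin_sd=13.0 is an assumption.
    """
    expected_margin = -team_spread
    return 1.0 - NormalDist(mu=expected_margin, sigma=margin_sd).cdf(0.0)

# GB projected at -4 by the model, while the contest offers ARI +5.5:
edge_points = 5.5 - 4.0            # 1.5 points of "value" on ARI
print(spread_to_win_prob(-4.0))    # ~0.62 under this approximation
```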
I want to take a quick timeout to explain why value is so important. Most sportsbooks charge roughly a 10% vig. For a standard ATS line, you’ll see the vig manifested as “-110” next to the spread, meaning a bettor needs to put down $110 to win $100. (N.B. If you ever see a line shaded to +105, you put down $100 to win $105.) With the standard vig (-110), a bettor needs to win more than 52.4% of their bets to be profitable. Assuming the game lines are essentially 50/50 outcomes (they’re not, but ignore that for now), bettors need to properly identify and capitalize on the lines that offer value.
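To make the 52.4% figure concrete, here is the break-even arithmetic in code; this is standard odds math, not anything specific to my model.

```python
def break_even_win_rate(american_odds: int) -> float:
    """Win rate needed to break even at a given American price.

    At -110 you risk 110 to win 100, so you must win
    110 / (110 + 100) = 52.38% of your bets just to break even.
    """
    if american_odds < 0:
        risk, win = -american_odds, 100
    else:
        risk, win = 100, american_odds
    return risk / (risk + win)

print(break_even_win_rate(-110))  # 0.5238 -> the 52.4% threshold
print(break_even_win_rate(105))   # 0.4878 at a +105 shaded line
```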
Using our GB-ARI example above, an algorithmic player should take the Cardinals +5.5.
We have now established the basic mechanics of the algorithm. We understand the differences between model types, we have a framework for making week-to-week adjustments, and we know how to apply the algorithm to identify value in the market.
All you have to do now is learn to accept the uncertainty of outcomes. You must trust the process.