Adjusted Ratings Explanation and Discussion

In the following essay I will explain the origin of the Adjusted Ratings, how they work, and why they are a better measure of scoring averages for College Golf than course rating and slope.

History:

Back when I got started with the NCAA Women in 1989, they had a system that used course rating (slope was still pretty new at the time). By 1992 that system had been abolished for the following reasons:

While they correctly required all golf courses to be rated based upon the yardage set up for that particular event, there are still differences in how people rate golf courses (if you think this isn't true, there is no need for you to read any further; and don't call me either).

Even with a well-rated golf course, the setup and/or the conditions can have a large impact on scoring.

The system was being abused as coaches found out which golf courses were rated harder than they should be (everyone wanted to go to those events) and which courses were rated easier than they played (these would be the events to avoid).

When this system was abolished we went to a new system in which actual head-to-head records became the most important criterion (and they still are today, as they should be). In the first few years we were not even allowed to release any scoring information to the selection committees.

In 1995 I remember looking at the results from a men's tournament in Hawaii. The field was so strong that the average score of the players in this field was about 1 over par. On the first day the winds were over 40 miles per hour and the average score was 85. On day two, under more normal conditions, the average score was around 73. It dawned on me that we could use the actual data to determine which rounds were flukes and which rounds were normal, so I created the Adjusted Ratings. After about a year and a half of testing and a lot of discussion, the NCAA started using Adjusted Ratings as a reliable indicator of scoring average. Surprisingly, very few minor modifications were made to the first model, and even after five years of testing and scrutiny the Adjusted Ratings have performed flawlessly, with the understanding that no system is perfect.

 

How do Adjusted Ratings work?

Every time we run a rating (a ranking of scoring averages) we also run the Adjusted Ratings side by side. It has gotten to the point where we run a rating almost every day, and sometimes more than once a day once the season is established (everyone has played). We start with the raw scoring average versus par and equalize every score to a par 72. Then we go into every round of every tournament and look at the scoring averages of all of the players in that field. We compare their scoring average as a group with what they actually averaged as a group in each round.

For example: if we went into an event where the average score of the field (based upon par 72) was 74 and the average score of the field for the first round was 76, we would adjust all scores down by 2 shots. If in the next round the average score of the field was 73, we would adjust all scores for that day up by 1 shot. The theory is that something caused those players (as a group) to shoot that average score on that day. We call that difference "conditions". We don't really care what those conditions are. They could be weather, they could be course setup, or they could be the ease or difficulty of the golf course. It is possible that the group, as a group, just had an abnormal day, but the law of large numbers works in our favor: an "abnormal day" for an entire field is quite unlikely. Of course there can be small fields. The adjustment takes that into consideration, as those results are weighted less heavily in calculating the Adjusted Ratings. One of the premises of the Adjusted Ratings is that it is better to under-adjust in situations where the data is less reliable than to over-adjust.
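To make the arithmetic concrete, here is a minimal sketch in Python of the per-round adjustment described above. The function name and the simple averaging are illustrative assumptions, and the weighting that damps adjustments for small fields is deliberately omitted; this is not the exact Golfstat calculation, just the basic idea.

```python
def round_adjustment(field_avg_to_par, round_scores, par):
    """Estimate the 'conditions' adjustment for one round, in strokes.

    field_avg_to_par: the field's raw season scoring average relative to par,
                      already equalized to a par 72 (e.g. 2.0 means the field
                      averages 74 on a par-72 basis).
    round_scores:     the actual scores the same players shot in this round.
    par:              par of the course for this round.
    """
    expected = 72 + field_avg_to_par                  # what this field "should" average
    actual = sum(round_scores) / len(round_scores)    # what the field actually averaged
    actual_par72 = actual - (par - 72)                # equalize the round to a par 72
    return expected - actual_par72                    # negative: adjust scores down; positive: up


# The worked example from the text: a field averaging 74 on a par-72 basis.
field_avg = 2.0
round_one = [76] * 60          # first-round field average of 76
round_two = [73] * 60          # second-round field average of 73

print(round_adjustment(field_avg, round_one, par=72))   # -2.0 -> adjust all scores down 2
print(round_adjustment(field_avg, round_two, par=72))   #  1.0 -> adjust all scores up 1
```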

The best part of the Adjusted Ratings system is that the adjustments are made in a dynamic fashion, meaning that every time we run an Adjusted Rating we take the latest raw scoring averages. Let's go back to our example and assume that the calculation was made in October. By April those same players in that field may now have an average score of 75. Therefore we would now adjust the first round down by 1 shot and the second round up by 2 shots. By doing this, the data gets more and more reliable, so that by the end of the season (the most critical time) we have identified those rounds of those tournaments in which "conditions" dictated scores that were not normal.
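Here is the same re-run spelled out with the numbers from the example; the figures are simply the ones used above, and the subtraction is the same illustrative arithmetic as in the earlier sketch.

```python
# Field averages for the two rounds of the October event (par-72 basis)
round_one_avg, round_two_avg = 76.0, 73.0

# October run: the field's raw season average is 74
print(74.0 - round_one_avg, 74.0 - round_two_avg)   # -2.0  1.0  (round 1 down 2, round 2 up 1)

# April run: the same players now average 75, so the same rounds re-adjust
print(75.0 - round_one_avg, 75.0 - round_two_avg)   # -1.0  2.0  (round 1 down 1, round 2 up 2)
```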

Why are Adjusted Ratings a good indicator of scoring for College Golf?

Because they are based upon actual results and completely devoid of opinion, the Adjusted Ratings cannot be abused. If you go to a golf course that is very easy, you pretty much know that your scores will be adjusted upwards; conversely, if you go to a very difficult golf course or play in horrible conditions, you have the comfort of knowing that those scores will be adjusted down.

The dynamic nature of always re-adjusting throughout the year lessens the already low chance that the scores are affected by an “abnormal day by a group of players”.

Course Rating and Slope cannot take into account weather conditions or course set up (both of which can drastically affect scores).

The Adjusted Ratings are totally objective in their approach (no opinions such as "the weather was horrible").

Strength of field has zero impact on Adjusted Ratings as it is the golf course that is actually being adjusted.

This discussion is not to bash the USGA Course Rating and Slope System. That is a fine system that works nicely for Handicapping where you take your best 10 of 20 scores. Rather this discussion follows the idea that for College Golf, Adjusted Ratings are a better indicator of scoring.

In closing, here is the example that I always use. If we rated the "systems" on a scale where 0 was completely imperfect and 10 was perfect, I would say that the following scores would be reasonable:

Raw Scores: 1

Slope and Course Rating: 2

Adjusted Ratings for NAIA: 4

Adjusted Ratings for NCAA Division III: 5

Adjusted Ratings for NCAA Division II: 6

Adjusted Ratings for NCAA Division I: 8

The first problem with what I have done here is that I have stereotyped by Division. The assumption is that Division I teams travel more and have a wider variety of opponents than Division II, and so on. While this assumption is correct for the Divisions as a whole, there are groups of teams within each Division that schedule more like one of the other Divisions (e.g., some Division I programs do not schedule a wide variety of teams, while some NAIA programs do). A wider variety of opponents has a positive impact on the Adjusted Ratings, but even at the lowest level of scheduling the ratings are more reliable than a straight Slope and Course Rating system, for the aforementioned reasons.

The bottom line is that better programs have a wider variety of opponents, and therefore the Adjusted Ratings work very well at the top end of the spectrum; while they do not work as well at the lower end, the performance is still better than straight Slope and Course Rating. The main purpose of Golfstat is to provide information for the various Divisions to determine the schools advancing to the postseason. The difference in reliability between Adjusted Ratings and Slope and Course Rating is considerable for the potential postseason teams.

This is not to say that we won't someday come up with something even better. I am not closed-minded about this, and we are always trying to improve what we have.

Sincerely,

Mark Laesch
Golfstat