Logarithmic Decay: How the NFL Draft is Like a Sprite

Anyone familiar with my articles and comments on Niners Nation is well aware of my beliefs about what good teams accomplish in the NFL draft (or what good fantasy football teams accomplish in a fantasy draft; or what good NCAA March Madness pool players accomplish in filling out their brackets). If you're not familiar, my philosophy basically has 2 components:

  1. Good teams avoid picking busts with their more crucial picks (e.g., 1st-rounders, NCAA Final Four teams)
  2. Good teams find diamonds in the rough with their less crucial picks (e.g., 2nd-rounders, NCAA Cinderellas)

As I presented prior to last year's draft, the Patriots exemplify the first component to a tee, whereas Bill Walsh's 49ers exemplified the second component.

Going into this year's draft, I wanted to put my philosophy to the test. After all, I formed these beliefs based on anecdotal evidence from my own fantasy football/NCAA pool experiences, general observations from my NFL fandom, and pattern recognition in last year's pre-draft posts. In other words, there's not really any science behind it. So, the goal this year was to blind myself with science, see whether I've been full of crap all these years, and prepare to apologize to anyone I've ever argued with vis-à-vis my beliefs.

Although a noble cause, one pretty significant problem arose when I sat down to actually do the testing: There don't seem to be any objective definitions of a "bust" or "diamond in the rough" as these terms relate to the NFL draft. Just a light perusal of SB Nation uncovers many subjective definitions of a "bust": this one from our own smileyman, this one from Sklz711 at Windy City Gridiron, and this one from Rafael Vela (via One.Cool.Customer) at Blogging the Boys. And that's just on SB Nation! Imagine how many there must be throughout the internets.

With respect to definitions of "diamond in the rough," there's even less out there from a subjective standpoint, and obviously nothing out there from an objective standpoint. Apparently, a diamond in the rough is to NFL fans what obscenity was to Justice Potter Stewart: we know it when we see it.

So, before I could test my own beliefs about good teams, busts, and diamonds in the rough, I had to invent the wheel, so to speak. Namely, I had to develop my own objective definitions of a bust and a diamond in the rough. Today, in Part 1 of this 3-part series, I present that analysis, as well as the definitions that arose from it.

After the jump, find out what the heck that title means...


If I were to grossly summarize all of the subjective definitions of "bust" and "diamond in the rough," I'd say the basic point is that a bust performs much worse than expected, whereas a diamond in the rough performs much better than expected. So, with that in mind, the question becomes: What's "expected?" That, after all, is the crux of the definitional dilemma. Well, statistical prediction is all about expectations, so this particular question lends itself well to an answer via stats.

To determine the "expected performance" of a draft pick, I looked at all NFL draft picks from 1994-2005. I chose 1994 because that's when the salary cap kicked in, and I chose 2005 because all of the draftees that year have now had the opportunity to play an average-length career. Using pro-football-reference's draft database, I gathered an amazing amount of data related to each draft pick, the most important/relevant of which are below:

  • Round selected
  • Pick selected
  • Position
  • Career length
  • Career approximate value (Career AV)

Of the 5 stats above, Career AV was my measure of performance. Granted, AV is flawed in several ways that I've discussed previously. But, as a very general measure of player value that can be used to compare players across positions, it's about the best/only thing out there these days. One minor modification I made to Career AV was that I divided it by career length to put everyone on a level playing field. Obviously, players with longer careers have more opportunities to accumulate value, so that has the potential to skew the data. Also, as many of the players in the data set are still playing, their "career length" for the purposes of my analysis was cut short through no fault of their own; and that definitely has the potential to skew the data. Therefore, by controlling for career length, I limited these potential biases as much as possible.

So, based on these considerations, the performance measure I'm looking at is technically called "Average Weighted Season AV." It's basically the average seasonal value for a player.* From now on, and for the sake of readability, I'm going to simply refer to this as "performance" because that's basically what it is.
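To make the measure concrete, here's a minimal sketch of how Average Weighted Season AV could be computed, assuming you have a hypothetical list of a player's Season AVs. The 100%/95%/90% weighting follows pro-football-reference's Career AV formula described in the footnote; the division by career length is the modification described above.

```python
def avg_weighted_season_av(season_avs):
    """Average Weighted Season AV: weight the best season at 100%,
    the 2nd-best at 95%, the 3rd-best at 90%, and so on (the
    pro-football-reference Career AV weighting), then divide by
    career length to level the playing field."""
    if not season_avs:
        return 0.0
    ranked = sorted(season_avs, reverse=True)  # best season first
    weighted = sum(av * max(1.0 - 0.05 * i, 0.0) for i, av in enumerate(ranked))
    return weighted / len(season_avs)

# Hypothetical player with Season AVs of 10, 8, and 6:
# weighted Career AV = 10*1.00 + 8*0.95 + 6*0.90 = 23.0
# performance = 23.0 / 3 ≈ 7.67
perf = avg_weighted_season_av([10, 8, 6])
```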


When you take the performance of each player selected from 1994-2005, and average these performances for each specific pick number, you get the following graph (click to enlarge):


The white trendline is what's called a logarithmic decay curve, which basically represents a situation in which there's initial rapid decline, followed by subsequent gradual decline that slows down even further with time...

Lesson time! Whether from hearing about carbon dating or the difficulties of nuclear waste disposal, I'm guessing most of you are familiar with logarithmic decay's mathematical doppelganger, exponential decay. And I'm sure pretty much any of you who have dabbled in the business/banking world, or have sat around on the weekends watching mold, are familiar with its mathematical cousin, exponential growth.

However, unless you're a meteorologist, geophysicist, or atmospheric scientist, you've almost certainly never heard of a real-world phenomenon that exhibits logarithmic decay. Indeed, as I occupy none of the above careers, I had never encountered one either; that is, until I consulted Google. Come to find out that one of the few examples of logarithmic decay occurs about 60 miles above the surface of the Earth; in what's called a sprite. Apparently, there's a mysterious, once-thought-of-as-imaginary phenomenon above thunderstorms wherein a burst of red light shoots out of the top of the storm. These bursts are called sprites (See picture at top of post). Researchers have found that light emission during a sprite decays logarithmically over time; hence, the title of this post. OK, back to the show!

...What's amazing to me is that this logarithmic trendline explains over 85% of the performance variation between pick numbers (See R-squared). When you consider that we're talking about 262 different pick numbers - and 2,992 individual draft picks - that's stunning.
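For the curious, fitting a logarithmic trendline of the form y = a·ln(x) + b (the same kind Excel draws) is just a linear fit against ln(pick number). Here's a sketch using made-up toy numbers, not the real draft data, to show how the coefficients and the R-squared come out:

```python
import numpy as np

def fit_log_trend(picks, performance):
    """Fit y = a*ln(x) + b and return (a, b, r_squared)."""
    x = np.log(np.asarray(picks, dtype=float))
    y = np.asarray(performance, dtype=float)
    a, b = np.polyfit(x, y, 1)          # linear fit in ln(x)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1 - ss_res / ss_tot

# Toy data that decays logarithmically (NOT the real draft numbers):
picks = [1, 33, 65, 129, 262]
perf = [8.0, 5.0, 4.0, 3.5, 3.0]
a, b, r2 = fit_log_trend(picks, perf)   # a is negative: performance decays
```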

So, the moral of the graph is that, as the NFL draft proceeds, there's a steep decline from the expected performance of the 1st pick to that of the 65th; but after that there's only a gradual decline over the next 200 picks.

Looking a little deeper, and without getting into too much detail (see here for the details in plain English), the logarithmic function displayed on the graph suggests that about two-thirds of expected performance has already "decayed" by Pick 65. This has 2 important implications for the development of my objective definitions:

  1. It's kind of silly to call any player a bust if he was selected after Pick 64. If he's only supposed to be - at most - 32% as good as the #1 pick, can a team/fan really be that upset if he doesn't even turn out to be that good?
  2. Within each round, the actual pick numbers matter a heck of a lot for expected performance from Picks 1-32, somewhat less for Picks 33-64, and hardly at all for the rest of the draft. This is shown in the graph by the steepness of the drop at various points along the curve. Therefore, when defining a bust or diamond in the rough, there's really no need to distinguish between picks within each round after Pick 64; whereas it is important to distinguish between Picks 1-64.
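To see where the "two-thirds decayed by Pick 65" figure comes from, here's a worked sketch. The coefficients below are hypothetical stand-ins (the real ones are on the graph), chosen only to illustrate the arithmetic: the fraction decayed by pick x is the drop from the pick-1 expectation, divided by the pick-1 expectation.

```python
import math

# Illustrative coefficients only; expected performance at pick x is
# y(x) = b + a*ln(x), with a < 0 so the curve decays. Since ln(1) = 0,
# the pick-1 expectation is just b.
a, b = -1.0, 6.26  # hypothetical values for illustration
y1 = b + a * math.log(1)
y65 = b + a * math.log(65)
decayed = (y1 - y65) / y1  # fraction of the pick-1 expectation lost by Pick 65
```

With these stand-in numbers, `decayed` works out to roughly two-thirds, matching the shape described above.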

For the non-logarithm-savvy among us, this latter point is also shown using plain ol' correlations: as the draft proceeds, the correlation between player performance and pick number within each round gets weaker and weaker (Rd 1 = -.281, Rd 2 = -.150, Rd 3 = -.132, etc.).
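Those within-round correlations are ordinary Pearson correlations computed on each round's picks separately. A minimal sketch, using toy data rather than the real draft database:

```python
import numpy as np

def within_round_correlation(picks, perfs, rounds, which_round):
    """Pearson correlation between pick number and performance,
    restricted to the picks in one round."""
    mask = np.asarray(rounds) == which_round
    x = np.asarray(picks, dtype=float)[mask]
    y = np.asarray(perfs, dtype=float)[mask]
    return np.corrcoef(x, y)[0, 1]

# Toy example: within Round 1, performance falls as pick number rises.
picks  = [1, 2, 3, 33, 34]
perfs  = [9.0, 8.0, 7.5, 5.0, 5.2]
rounds = [1, 1, 1, 2, 2]
r1 = within_round_correlation(picks, perfs, rounds, 1)  # strongly negative
```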


So we now know that (a) expected performance in the NFL draft follows a pattern of logarithmic decay from pick to pick, but (b) it's not that important to distinguish between different picks after the 2nd round. If (b) is the case, then it's useful to re-analyze the data by averaging the performance of all picks within a round, and seeing how expected performance decays from round to round. Below is a graph showing the result of this re-analysis (click to enlarge):


Amazingly, the logarithmic decay curve (aka the white trendline) explains the expected performance variation between rounds almost perfectly (99.4%; See R-squared)! Again, the fact that any relatively simple equation explains the NFL draft this well is downright stunning to me.

When comparing the 2 graphs, it's pretty easy to see why the round-by-round trendline fits better than the pick-by-pick trendline. Specifically, the former does a much better job of explaining the variation of expected performance at the top of the draft. If you look at the first graph, you can see that the expected performance of the first handful of picks is relatively underestimated, i.e., the peaks are below the trendline. In contrast, the second graph pretty much nails its top-of-the-draft estimates.

Although it's nice in and of itself to discover this consistent logarithmic decay pattern in the draft, the point here was to define "bust" and "diamond in the rough." So what does the round-by-round analysis mean for that purpose? Well, the equation you see in the graph predicts that about 50% of expected performance has already "decayed" by Round 3. This has 2 implications that correspond to the previous pick-by-pick implications:

  1. It's kind of silly to call any player a bust if he was selected after Round 2. If he's only supposed to be - at most - 50% as good as a 1st-rounder, can a team/fan really be that upset if he doesn't even turn out to be that good?
  2. It's now questionable as to whether it's necessary to distinguish between picks at the top of the draft when looking at expected performance.


OK, so I think I've made it pretty apparent that the expected performance of NFL draft picks follows a logarithmic decay pattern. Now it's time to look for any factors that might influence that overall pattern. The main one that comes to mind is position. Basically, career expectations are different for different positions, so it stands to reason that the trajectory of expected performance over the course of the draft for players of one position might be different than for players of a different position. At least that's the intuitive way of looking at it.

But does the data support the intuition? Below is a chart showing the average expected performance for each position in the draft except for Ks and Ps (click to enlarge):


Even just from an eyeball test, it doesn't appear like there's much of a difference between the top 5 positions, nor between the next 5 positions; but there's definitely a drop-off after WR. Well, if we compare these averages statistically by using a series of t-tests, it turns out that there's a statistically significant difference between the expected performance of a FB draft pick and that of a DE, T, LB, C, RB, G, DT, or DB draft pick. In addition, there's also a statistically significant difference between the expected performance of a TE draft pick and that of a DE, T, or LB draft pick.

In other words, among the 12 positions shown in the chart, only the last 2 are at a statistically significant disadvantage when taken in the NFL draft; but they're not even at a disadvantage with all of the remaining 10. Indeed, out of 66 possible comparisons, only the 11 I listed above were statistically significant. Considering the robustness of the pick-by-pick and round-by-round findings, this lack of evidence seems to suggest that rounds and picks are far more important than positions when defining a bust or diamond in the rough.
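For anyone who wants to replicate the position comparisons, a two-sample t statistic is the core of each test. Here's a sketch of Welch's version (which doesn't assume equal variances), run on hypothetical AV-per-season samples; the samples below are invented for illustration, not the actual FB and DE data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic for comparing the mean
    performance of two position groups."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical performance samples for two positions:
fb = [2.0, 2.5, 1.5, 2.2]
de = [5.0, 6.0, 4.5, 5.5]
t = welch_t(fb, de)  # a large negative t means FB picks underperform DE picks
```

In practice you'd compare the statistic against a t distribution (e.g., via scipy) to get the significance level; the 66 comparisons come from pairing each of the 12 positions with every other one (12 choose 2).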

There clearly are more factors to be considered besides positions, but I have limited space and time. Feel free to suggest more factors in the comments section. Also, just because I ruled out positions this time, it doesn't mean that's the end of all debate on the matter. More time and research might prove otherwise. 


The fruits of my labor have led to 2 relatively simple equations for predicting a draft pick's performance; one according to the specific pick at which the player was selected and the other according to the round he was selected. Reliably knowing what to expect from a given draft pick resolves the vast majority of subjectivity in the quest for an objective definition of "bust" and "diamond in the rough." All that's left is the part about how much above or below expectations a player would have to perform in order to be considered a bust or diamond in the rough, respectively.

The pattern that's emerged in my analyses is that there's somewhat of a cut-off point at the end of the 2nd round. Whether we're talking about specific picks or overall rounds, somewhere between one-half and two-thirds of expected performance has "decayed" by that point in the draft, depending on which equation you use. So, the first part of my definition for a "bust" is that he can only be a 1st- or 2nd-round pick; and the first part of my definition for "diamond in the rough" is that he cannot be a 1st- or 2nd- round pick.

Because of the remaining questions regarding whether specific pick numbers matter in the 1st or 2nd round, the second part of my definition for a "bust" relies on the pick-specific logarithmic decay equation, whereas my definition for a "diamond in the rough" relies on the round-specific equation.

Adding the two parts of each definition together we get the following:

  • An NFL draft bust is a player who was selected in the 1st or 2nd round and played 67% or more below the expected performance of his specific pick number.
  • An NFL draft diamond in the rough is a player who was selected after the 2nd round and played 200% or more above the expected performance of his specific round.
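The two definitions can be expressed as a simple classifier. The decay coefficients below are placeholders (the real ones come from the two trendlines in the graphs), but the thresholds match the definitions: "67% or more below" means performance at or under 33% of the pick's expectation, and "200% or more above" means performance at or over 3 times the round's expectation.

```python
import math

# Expected-performance equations; coefficients are hypothetical
# stand-ins for the fitted trendline values.
def expected_by_pick(pick, a=-1.0, b=6.26):
    return b + a * math.log(pick)

def expected_by_round(rnd, a=-2.0, b=7.0):
    return b + a * math.log(rnd)

def classify(pick, rnd, performance):
    """Bust: a Round 1-2 pick performing 67%+ below his pick's
    expectation. Diamond in the rough: a post-Round-2 pick performing
    200%+ above his round's expectation."""
    if rnd <= 2 and performance <= 0.33 * expected_by_pick(pick):
        return "bust"
    if rnd >= 3 and performance >= 3.0 * expected_by_round(rnd):
        return "diamond in the rough"
    return "neither"
```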

There you have it, objective definitions for both a "bust" and a "diamond in the rough."


One last thing I'll bring up about how the NFL draft is a real-world example of logarithmic decay. Here's an article I stumbled across by McDonald Mirabile that was published in The Sports Journal. Mirabile did an analysis showing that the amount of money an NFL team spends on their draft picks each season can be almost perfectly predicted by knowing (a) how many picks they had, and (b) the exact pick number of each of those picks. What struck me about this article was that it has a graph showing how rookie salaries change with each subsequent draft pick number. Here's the graph (reproduced from original article; click to enlarge):


Look familiar? Looks like logarithmic decay to me. So even with salaries, the NFL draft is like a sprite. Perhaps logarithmic decay will come to be known as Danny's First Law of the NFL Draft? OK, so ignoring that bit of narcissism, it still is quite interesting to me that both expected performance and expected salary seem to follow the same pattern. I'll leave it to the comments for you to figure out why that is really interesting vis-à-vis overpaying/underpaying draft picks.


Based on what I've presented in Part 1 of this series, here's what you should remember for Part 2:

  1. Draft pick performance seems to follow a reliable logarithmic decay pattern across picks and rounds.
  2. An NFL draft bust is a player who was selected in the 1st or 2nd round and played 67% or more below the expected performance of his specific pick number.
  3. An NFL draft diamond in the rough is a player who was selected after the 2nd round and played 200% or more above the expected performance of his specific round.

Tomorrow, Part 2, in which I'll identify the busts and finally test my theory about how good teams avoid them like the plague.

* On pro-football-reference, Career AV is not just a simple sum of a player's Season AVs. Rather, it's a weighted sum such that a player's best season is weighted @ 100%, his 2nd-best season is weighted @ 95%, his 3rd-best season is weighted @ 90%, and so on. That's why I say my performance measure is actually "Average Weighted Season AV."