A few months back, I unveiled my wide receiver model in a post cheekily titled “Another Receiver Model.” It had numerous issues, the foremost being its poor performance on the 2023 class. While that’s still a slight issue for my new version—let’s call it ARM 2.0—the updated model performs meaningfully better on that class.
Key Features
While a lot has changed from my original model, ARM 2.0 still relies heavily on where a player was taken in the NFL draft. The logic, of course, is simple: beyond telling you what the league thinks of a guy, a player who was drafted earlier is simply more likely to see the field.
This isn’t to say that draft capital totally determines a prospect’s fate; our model finds real value in other metrics, too. Note that the model utilizes other features not pictured above, which are excluded for redundancy reasons.1
Many of these features are self-explanatory, such as rushing attempt percent (RA%)—i.e., how many of a receiver’s touches were runs—and body mass index (BMI), which is essentially a weight-to-height ratio. The story for BMI is simple, with higher values generally being better. After all, you don’t want a slight breeze to knock your guy over.
RA% is slightly trickier to unpack. The ideal seems to be dabbling very slightly in running: we’re talking an RA% of one or two percent here, with less than a yard per game rushing. Barring that, you either want a guy to be a real rushing threat—think Deebo Samuel—or somebody who’s never even run the ball once.
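For the curious, RA% is a simple ratio. Here’s a quick sketch (the function name and inputs are my own illustration, not the model’s actual code):

```python
# Hypothetical sketch of the RA% calculation described above;
# not the model's actual implementation.

def rush_attempt_pct(rush_att: int, receptions: int) -> float:
    """Share of a receiver's touches (rushes + receptions) that were runs."""
    touches = rush_att + receptions
    return rush_att / touches if touches else 0.0

# A Deebo-style hybrid vs. the "dabbling" profile of one or two percent.
print(rush_attempt_pct(45, 80))  # heavy rushing usage
print(rush_attempt_pct(1, 70))   # barely dabbles in running
```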
BOA (20%), meanwhile, seems a bit less straightforward, but it’s just the age at which a player produced at least 20% of his team’s receiving yards and TDs. Naturally, the younger a player does this, the better: in most cases, you want your receivers to be wunderkinds, not late bloomers.
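To make the definition concrete, here’s a minimal sketch of a BOA (20%) calculation. The field names, and my assumption that a breakout season must clear the threshold in both yards and TDs, are illustrative guesses rather than the model’s actual rule:

```python
# Hypothetical BOA (20%) sketch: earliest age where a player hit 20% of
# his team's receiving yards AND receiving TDs. Field names are invented.

def breakout_age(seasons, threshold=0.20):
    """Return the youngest qualifying age, or None if no breakout."""
    qualifying = [
        s["age"]
        for s in seasons
        if s["rec_yds"] >= threshold * s["team_rec_yds"]
        and s["rec_tds"] >= threshold * s["team_rec_tds"]
    ]
    return min(qualifying) if qualifying else None

# Example: a receiver who broke out as a 19-year-old sophomore.
seasons = [
    {"age": 18, "rec_yds": 300, "team_rec_yds": 3000, "rec_tds": 2, "team_rec_tds": 25},
    {"age": 19, "rec_yds": 900, "team_rec_yds": 3200, "rec_tds": 8, "team_rec_tds": 28},
]
print(breakout_age(seasons))  # 19
```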
“Just right” usage
Below, we see the slightly more complicated case of target share. Though this example is smoothed out slightly, the overall picture is clear. It seems you want a prospect to reside in the “Goldilocks zone” between roughly a 15% and 20% target share.
Like with many of our other features, it’s possible this is picking up on other hidden effects, too. While a super-low target share is obviously bad, it’s not immediately clear why residing in the middle is superior to hogging most of your team’s targets.
Beyond small sample size concerns, my hunch is that if one guy is getting north of 25% of his team’s targets, the talent around him might be lacking. It’s also pretty likely TGT% is indicating whether a guy was a small-school player or not. After all, teams with only one NFL-level receiver tend to pepper him with targets, meaning his stats could be artificially inflated against weaker competition.
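One way to eyeball a “Goldilocks zone” like this is to bucket prospects by college target share and compare average NFL outcomes within each bucket. The sketch below uses invented data and field names purely for illustration:

```python
# Toy illustration of bucketing prospects by target share and averaging
# an NFL outcome per bucket. Data and field names are invented.
from collections import defaultdict

def bucketed_outcomes(prospects, width=0.05):
    """Average NFL points per game within 5%-wide target-share buckets."""
    buckets = defaultdict(list)
    for p in prospects:
        lo = round(int(p["tgt_share"] / width) * width, 2)
        buckets[lo].append(p["nfl_ppg"])
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

prospects = [
    {"tgt_share": 0.12, "nfl_ppg": 6.0},   # low usage
    {"tgt_share": 0.17, "nfl_ppg": 11.5},  # Goldilocks zone
    {"tgt_share": 0.18, "nfl_ppg": 10.5},  # Goldilocks zone
    {"tgt_share": 0.27, "nfl_ppg": 7.0},   # target hog
]
print(bucketed_outcomes(prospects))  # middle bucket comes out on top
```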
Model Performance
Now that we know how ARM 2.0 works, how does it actually perform? The whole impetus for updating the model was the previous version’s middling performance; is the new one meaningfully better?
The answer, it seems, is a resounding yes. There are myriad reasons for our model’s improvement, the foremost being a refined feature-vetting process and experimentation with new model architectures.2
What’s equally important, however, is that it’s still explainable. That is to say, while ARM 2.0 is still a bit of a black box, our small handful of input features means it’s relatively easy to demonstrate why it behaves the way it does. These features are still distinct (and useful) enough, though, that they keep us from relying too much on where a player went in the NFL draft.
ARM’s large training scope—dating back to 2009—also gives me confidence we aren’t just overfitting to recent developments. We’ve also got three out-of-sample classes from 2021 to 2023 to make sure our model performs well on unseen data. Together, these design considerations help ARM avoid being blindsided by any single draft class.
Finally, the biggest improvement of all is on the 2023 draft class. There’s still room to improve further, of course, which might come from having a full three years’ worth of data (like the rest of the classes do). Still, jumping from a worrisome R² of .3 to a just-fine .47 is a big leap, and enough to give me real confidence in this version of the model.
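For reference, R² measures how much of the variance in actual outcomes the model’s predictions explain: 1.0 is a perfect fit, while 0.0 is no better than always predicting the average. A minimal from-scratch version with toy numbers:

```python
# From-scratch R² (coefficient of determination), with toy numbers.

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # unexplained
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total variance
    return 1 - ss_res / ss_tot

# Predictions that track outcomes closely score near 1.
print(r_squared([10, 14, 8, 12], [9, 13, 9, 12]))  # 0.85
```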
The Results
Below are my new model’s projections for the 2025 receiver class. For each player, we have their three-year outlooks, as well as their relative risk and upside. (Note: for this table and the next two, click on the right arrow above the table to see more prospects.)
Already we see some big changes from my previous rankings, with Emeka Egbuka not only surpassing Travis Hunter as our top prospect, but practically dunking on the rest of the field. The model, of course, knows nothing about Egbuka’s stellar debut as a pro (I swear I didn’t rig it to like him, either). While I’d still take its bullishness with a grain of salt—some of my previous analysis was a tad cooler on Egbuka—it’s hard not to be excited by his early play.
Egbuka surpassing golden-boy Travis Hunter—the number two overall pick—is doubly impressive, really, given the heavy weight our model puts on draft capital. That Texas product Matthew Golden nearly matches Hunter’s projection—while also leapfrogging earlier pick Tetairoa McMillan—is noteworthy as well.
What’s most surprising, though, is Kyle Williams leapfrogging multiple second-rounders. In fact, he’s one of two players outside the first round who ARM sees as having “high” upside. The other, of course, is Luther Burden, which tracks: he has all the talent in the world, but comes with character concerns as well.
What about player comps? While ARM 2.0 produced some tantalizing upside comps for the 2025 class, some players have scary downside. Even though Egbuka has a high floor, his profile is still similar enough to Laquon Treadwell’s that the model comps the two.
This does, of course, warrant a massive heap of salt: nobody’s confusing Egbuka for the lumbering Treadwell, after all. The same goes for Tetairoa McMillan and DeVonta Smith, two physical profiles about as opposite as you could find.
Still, there are some gems here. The comps for Tre Harris are uncanny: his upside comp is fellow Ole Miss legend A.J. Brown, while his downside comp is straight-line speedster Andy Isabella. It’s fitting, frankly, given Harris seems destined to either end up as an everyday “X” receiver or a vertical-only threat.
Evening the field
OK, you might be wondering, but how would our model look if we didn’t account for draft capital? It’s arguably a pointless hypothetical, given how heavily draft position affects ARM’s predictions. Still, it’s easy to see that draft capital makes ARM, by and large, higher on top-100 picks and bearish on everyone else. By removing it from the equation, we can theoretically even the playing field.
Unfortunately for me, the idiot who drafted Jalen Royals in multiple leagues on the strength of his stellar advanced stats, ARM really just hates the guy. Also noteworthy: ARM 2.0 is no longer letting guys coast by on draft slot alone. While Jayden Higgins and Pat Bryant still get big boosts for being top-100 picks, if we remove draft capital from the picture, they’re among our worst prospects.
Again, this is a pretty messy experiment, built on assuming everybody was taken with the 50th pick in the draft. Still, it’s illuminating to see what potential sleepers might be flying under the radar. Tez Johnson, for example, shines here; my previous analysis also pegged him as a statistical superstar, his worryingly small frame notwithstanding.
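The experiment above boils down to re-scoring every prospect with his draft-pick feature pinned to 50. Here’s a sketch of that counterfactual; the toy model and field names are invented for illustration and have nothing to do with ARM’s actual internals:

```python
# Sketch of the "even the field" counterfactual: re-score everyone as if
# he were the 50th pick. The toy model and field names are invented.

def neutral_capital_scores(predict, prospects):
    """Re-run any fitted model's predict function with draft_pick pinned to 50."""
    leveled = [dict(p, draft_pick=50) for p in prospects]
    return [predict(p) for p in leveled]

# Toy model: earlier picks score higher, plus a bonus for a young breakout.
def toy_predict(p):
    return 100 - 0.5 * p["draft_pick"] + 5 * (21 - p["boa"])

prospects = [
    {"name": "A", "draft_pick": 10, "boa": 20},   # early pick, later breakout
    {"name": "B", "draft_pick": 120, "boa": 18},  # late pick, early breakout
]
print(neutral_capital_scores(toy_predict, prospects))  # B leapfrogs A
```

Note how the late-round breakout star overtakes the early pick once capital is leveled—exactly the kind of reshuffling the table above shows.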
Parting thoughts
What, then, should be the takeaways from this article? The answer’s a bit tricky, since there are few truisms we can derive from any single stat.
I’ll still try, though. If you get anything out of this article, it should be these ideas:
The younger a guy breaks out, the better
You want a guy to be sturdy (high BMI)
Too few—or too many—targets is a red flag
Most important of all, though: for dynasty evaluation, the conversation starts and ends with draft capital. No matter how good a guy is, no matter how much the stats like him, the later he’s taken, the smaller his margin for error.
Does this mean that you should’ve drafted, say, Isaac TeSlaa over Jalen Royals based on draft capital alone? Not necessarily; consensus big boards had Royals rated firmly as a top-75 prospect, while TeSlaa didn’t even crack the top 150.
Still, when a model as robust as ARM 2.0 thinks TeSlaa is that much better a bet than Royals, you should take notice. This isn’t to say that you should take a guy like TeSlaa a lot earlier than consensus just because a model says he’s a winner, or completely blacklist somebody it’s down on like Royals.
Rather, the name of the game is still value, and if a model like ARM says a guy is underpriced, that’s worth acting on. The obvious use case is thus post-draft waivers, where everybody’s looking to pick up their favorite sleepers. It’s there that ARM 2.0 shines, expressing its affinity for guys like Jimmy Horn Jr. (who it likes more than many fourth-rounders).
In short, if the model strongly deviates from consensus on a player, I’d take notice. Does it mean it’s time to panic sell Jalen Royals, or trade the farm for Kyle Williams? Not necessarily. Hopefully, though, it can serve as a tool for you to find undervalued guys the next time you try and flip a vet for some prospects.
More specifically, there’s real collinearity between the excluded features and the ones shown: receptions, for example, bears a decent (though not overwhelming) relation to target share, while best-season rushing yards per game is extremely redundant with RA%.
Multiple model architectures were tested, with a tree-based model (scikit-learn’s HistGradientBoostingRegressor) prevailing. Features were culled from PFR and the ever-useful Pahowdy spreadsheet, itself aggregating various PFF-supplied features. The final feature set was reached after a near-exhaustive search for the best combination, weeding out low-performing features as I went along.