Posted on July 24, 2014
This essay, by Corey Whichard, won first place in the Co-Main Event Podcast’s second annual White Elephant Essay Contest, in the persuasive essay category.
“The UFC always has the fallback to where if some really bad shit happens, it can just have Dana White yell at us about it … Bellator doesn’t really have that. … It doesn’t have that figurehead who is endowed with the confidence to think that he can just make us believe whatever.”
– Chad Dundas, 5/12/14, Episode 103
“But, you know, there’s a lot of weird stuff going on with those rankings … It seems like if the UFC wanted to make those rankings into a thing that we could all take seriously, they would have to have some rules …”
– Ben Fowlkes, 5/12/14, Episode 103
In today’s MMA landscape, the UFC’s capricious abuse of its own ranking system is symptomatic of a much more serious threat to the overall health of MMA. That is, the UFC has too much control over how the sport is presented, and it often uses this control to benefit its own financial agenda at the expense of the sport’s integrity. If MMA is ever going to attain the kind of “sport for sport’s sake” legitimacy that attends football (or even tennis), an important first step is to develop a meaningful ranking system based on objective standards of athletic accomplishment. In this essay, I describe a method for creating such a system and demonstrate its validity.
One way to generate a standardized MMA ranking system involves drawing on techniques used in a sub-field of sociology called “social network analysis.” The basic idea is to model the structure of a social group by mapping out the relationships between individual group members (Borgatti, Everett, and Johnson 2013). It helps to think about this visually. For instance, picture all of the fighters in the UFC’s Welterweight division as large dots drawn on a piece of paper. Now imagine that there are lines linking certain dots together, where each line represents a fight, and each linked pair of dots represents fighters who have competed against each other. Using information from Sherdog.com to construct a win-loss matrix for all Welterweights employed by the UFC circa September 2013, I actually diagrammed the 170-pound division with a program called UCINet. [See Figure 1; Georges St. Pierre is the red dot.]
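The win-loss matrix described above can be sketched in a few lines of code. The fighter names and fight results below are hypothetical placeholders for illustration, not the actual September 2013 dataset.

```python
import numpy as np

# Illustrative roster and results -- not the real September 2013 data.
fighters = ["GSP", "Condit", "Hendricks", "Kampmann"]
idx = {name: i for i, name in enumerate(fighters)}

# Each (winner, loser) pair is one fight result taken from the fighters' records.
fights = [("GSP", "Condit"), ("Hendricks", "Kampmann"), ("Condit", "Kampmann")]

# Directed adjacency matrix: A[i, j] = 1 means fighter i beat fighter j.
# This is the "dots and lines" picture in matrix form.
A = np.zeros((len(fighters), len(fighters)))
for winner, loser in fights:
    A[idx[winner], idx[loser]] = 1
```

A program like UCINet consumes exactly this kind of matrix to draw the network diagram.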
Once the network structure has been mapped out, it is possible to rank the fighters by calculating each fighter’s “Beta-centrality.” Beta-centrality assigns each fighter a score based on the number of opponents in the network that he has beaten; it then adjusts that score based on the position of those opponents in the network, which is itself based on the position of the opponents they have beaten, and so on. The process counts all opponents directly tied to the fighter, and all opponents indirectly tied to the fighter within 10 fights, though opponents who are “farther” away contribute less and less to the fighter’s score. Thus, when Jake Shields beat Martin Kampmann, his Beta-centrality score got a bump for the direct victory, but it also got a smaller bump for Kampmann’s win over Paulo Thiago, and an even smaller bump for Thiago’s win over Mike Swick, and so on. This kind of recursive calculation is prohibitively difficult to perform by hand, though relatively simple with the right computer program.
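The recursive scoring just described can be sketched as a truncated power series over the win matrix. The beta weight and the three-fighter example here are illustrative assumptions, not values from the actual analysis.

```python
import numpy as np

def beta_centrality(A, beta=0.5, max_steps=10):
    """Bonacich-style Beta-centrality, truncated at max_steps fights out.

    A[i, j] = 1 means fighter i beat fighter j.  A direct win counts
    fully; a win by the beaten opponent counts beta as much; a win two
    steps removed counts beta**2, and so on, so "farther" opponents
    contribute less and less.  (For the untruncated series, beta must
    be smaller than 1 over the largest eigenvalue of A.)
    """
    n = A.shape[0]
    ones = np.ones(n)
    score = np.zeros(n)
    walk = A.astype(float)
    for k in range(max_steps):
        score += (beta ** k) * (walk @ ones)
        walk = walk @ A  # advance to wins one fight "farther" away
    return score

# Toy chain: GSP beat Condit, Condit beat Kampmann.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
scores = beta_centrality(A, beta=0.5)
# GSP gets 1 for the direct win plus 0.5 for Condit's win over Kampmann.
```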
In plain English, a ranking system based on Beta-centrality means that the “best” fighter in the division is not simply the one with the most UFC victories, but the one with the most victories over the most accomplished fighters in his division. Unlike the current ranking system, where the criteria for evaluating a fighter’s accomplishments rest largely on human opinion, a system based on Beta-centrality has the advantage of standardization. The relevant concept here is prestige: the notion that a person’s prominence in a group exists only as an emergent quality of their relations to other group members. If you can empirically measure a person’s relationships to others in a group (using, say, a win/loss record), then you can empirically measure their relative position in that group. I used these techniques to generate a top-ten list of the Welterweights described above [see Table 1]. Keep in mind that this ranking technique does not (yet) account for wins against fighters who were not employed by the UFC during September 2013, that it does not account for periods of inactivity (as long as a fighter was employed, his record was counted), and that it does not award “style” points for impressive wins. Even so, it is notable that five of the ten names on my top-ten list also appear (in a different order) on Bloody Elbow’s September 2013 Welterweight meta-rankings (Wade 2013). This overlap provides suggestive evidence that the Beta-centrality rank is at least somewhat accurate. However, I ran one more test to verify the ranking technique’s validity.
Table 1. Beta-Centrality Ranking for UFC Welterweights

Rank  Fighter Name         Prestige Score
  1   Georges St. Pierre   5.43
  2   Matt Hughes          3.16
  3   BJ Penn              2.39
  4   Martin Kampmann      2.22
  5   Johny Hendricks      2.02
  6   Carlos Condit        2.00
  7   Thiago Alves         1.87
  8   Jake Ellenberger     1.80
  9   Matt Serra           1.79
 10   Rick Story           1.63
Highly ranked fighters are highly successful fighters. If this ranking system is valid, then a fighter’s rank should be strongly related to other factors associated with professional success, such as financial compensation. It is reasonable to assume that the amount of show money a fighter receives is a decent approximation of how much the UFC values that fighter. There are aberrations, such as Nate Diaz receiving just $15K in show money for UFC on Fox 7 (mixedmartialarts.com 2013), but the overall pattern holds true. For the group of Welterweights described above, I recorded the amount of show money (in thousands) that each received for his most recent fight. I also recorded each fighter’s Beta-centrality (“prestige”) score. Because the amount of show money each fighter makes will be influenced by other factors, I also gathered data on how long each fighter had been employed by the UFC, their number of UFC victories, and the number of performance-based bonuses they had received. [Descriptive statistics for these variables can be found in Table 2.]
I then entered all of this information into Stata 11 (StataCorp 2009), a computer program designed to model statistical relationships between multiple variables. I used a statistical technique known as “Ordinary Least Squares” (OLS) regression to examine the correlation between Beta-centrality and show money, while simultaneously accounting for the influence of UFC wins, tenure, and bonuses. [See Table 3 for results.] Here’s how to read the table of results: the “b-coefficient” value estimates the correlation between the variable and “show money,” the “standard error” value represents the degree of imprecision, and the asterisks flag the “p-value,” which is the probability of observing a correlation at least this strong if no real relationship existed. For instance, a p-value of 0.05 means there is only a 5% chance that an effect this large would appear by random chance alone. To interpret the correlation, you read the b-coefficient as “a one-unit change in the predictor variable produces an X-unit change in the outcome variable.” Alright, so here’s what all this complicated shit really means: after accounting for the number of years a Welterweight has been employed by the UFC, how many wins they have, and how many bonuses they’ve won, each 1-point increase in the Welterweight’s prestige score translates to an additional $25K in show money, give or take about $4K. The p-value is below 0.001, meaning there is less than a 0.1% chance that a correlation this strong would appear by random chance alone. In other words, Beta-centrality is powerfully correlated with financial success: the highest-ranked fighters make the most show money, and the lowest-ranked fighters make the least. This supports the notion that Beta-centrality is a legitimate way to rank fighters.
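The regression setup can be sketched in a few lines without Stata. The data below are synthetic placeholders with a $25K-per-prestige-point effect deliberately built in (the actual fighter salary data are not reproduced here); the point is only to show what OLS estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # roughly the size of a divisional roster

# Synthetic stand-ins for the real variables -- not the actual data.
prestige = rng.uniform(0.5, 5.5, n)
tenure = rng.uniform(1, 10, n)
wins = rng.integers(1, 15, n).astype(float)
bonuses = rng.integers(0, 5, n).astype(float)

# Build show money (in $K) with a known $25K prestige effect plus noise.
show_money = 25 * prestige + 3 * tenure + 2 * wins + 5 * bonuses + rng.normal(0, 10, n)

# OLS: stack an intercept column with the predictors and solve for b.
X = np.column_stack([np.ones(n), prestige, tenure, wins, bonuses])
b, *_ = np.linalg.lstsq(X, show_money, rcond=None)
print(f"estimated prestige coefficient: {b[1]:.1f}  (true value built in: 25)")
```

Because the $25K effect was planted in the synthetic data, the recovered coefficient lands near 25, which is exactly the kind of estimate the b-coefficient column in Table 3 reports.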
With enough manpower, it is theoretically possible to use every professional fighter’s win/loss record from Sherdog.com to create one enormous MMA combat network. When fighters change promotions, it would be possible to treat them as “bridges” (Granovetter 1983) between the ranking structures of different organizations—a prospect that is increasingly plausible, given the UFC’s recent habit of releasing fairly high-profile fighters. In brief, using fancy mathematical techniques, it is totally possible to create an objective ranking system for MMA fighters. I propose that the implementation of such a system would go a long way toward elevating MMA’s status as a legitimate sport, and would wrest a core piece of the greater MMA narrative out from between Mr. White’s teeth.
Borgatti, Stephen P., Martin G. Everett, and Jeffrey C. Johnson. Analyzing Social Networks. Los Angeles: Sage, 2013. Print.
Dundas, Chad, and Ben Fowlkes. CoMainEvent.com. “Co-Main Event Podcast Episode 103.” 12 May 2014. Web. Accessed on 17 May 2014.
Granovetter, Mark. “The Strength of Weak Ties: A Network Theory Revisited.” Sociological Theory 1 (1983): 201–233.
MixedMartialArts.com. “UFC on FOX 7 salaries + bonuses to Brown, Mein, Romero, Thomson.” 21 April 2013. Web. <http://www.mixedmartialarts.com/news/436808/UFC-on-FOX-7-salaries–bonuses-to-Brown-Mein-Romero-Thomson/>. Accessed on 17 May 2014.
StataCorp. Stata Statistical Software: Release 11. College Station, TX: StataCorp LP, 2009.
Wade, Richard. BloodyElbow.com. “Bloody Elbow September 2013 Meta-Rankings: Welterweight.” SB Nation. 4 October 2013. Web. Accessed on 17 May 2014.