Beane Counting: How To Grade an MLB General Manager

Although many refer to baseball's current generation as the "steroid era," medically enhanced physiques are far from the only thing that has changed in the game. Pitcher wins, losses, and saves, batting average, RBI, errors, and bunting all seem to be on the way out in favor of adjusted ERA, BABIP, VORP, range factor, and pitch counts. This newer era has been shaped by the work of sabermetrics: trying to measure objectively what leads to wins and losses, challenging the conventional wisdom, and often upending it.

The book Moneyball and the success of small-market clubs like the Oakland A's in the early 2000s had a lot to do with this changing of the guard, and one consequence is that general managers are now in the public eye in a way never before seen in baseball history.

For example, nearly every baseball fan would associate Billy Beane with the Athletics more quickly than they would most of the players on the current A's roster, yet few could name offhand the executives behind the great teams of the past. Does anyone associate Bob Howsam with the Big Red Machine?

Because of this, a great irony of the sabermetric era has emerged: great strides have been made in measuring players objectively, but the man who assembled the team gets a free pass from that same analysis. In my experience, arguments among my peers about the best GMs in baseball tend to consist of hyperbolic statements like "X is the worst GM ever," or they inflate one bad transaction into the whole case against a man: "He traded X, Y, and Z for W! That's an awful trade."

Discussion like this seems to get us no closer to an answer. Some articles have made an attempt at it, but the flaws in each seem to undermine the results.

David Gassko made an attempt in 2005 with his article Ranking the General Managers at The Hardball Times, basing his rankings on three factors: how well a team is built for its park, how much the team wins relative to its payroll, and how much the team gained or lost at the trade deadline.

His rankings covered only the 2004 season, so every factor was measured over that single year; since that was the stated scope of the article, this is understandable.

However, the problems with the formula are significant. Building a team for its home park may not be a positive factor at all. I can understand why it was included, since it seems like sound strategy, but when the measurement amounts to playing better at home than on the road, it can be read as a detriment just as easily as a strength.

The midseason-transactions idea is interesting and has some merit, but both of those factors are weighted as heavily as the team's bang for its buck. The latter should take precedence over the other two, especially the home-field factor, and that imbalance makes his results hard to accept.

Forbes published a study in 2007 that measured the best GMs in sports, with the baseball GMs ranked here; it factored in improvement over a predecessor and payroll relative to the league.

Aside from the problem of making this a sports-wide exercise without adjusting for the winning and payroll conditions of each specific league, the idea of comparing a GM only to his predecessor has flaws of its own. Any GM with a historically awful predecessor will rank highly even after a merely mediocre job, and any GM with a historically great predecessor will rank as mediocre even if he maintains the team's success.

Also, winning is weighted twice as heavily as payroll. I don't object to that idea in theory, but no reasoning is given for why that proportion was chosen.

Among the factors these studies use, the only variables that can truly be relied on are the success of the team and its relative payroll. These two also allow comparisons across different years, since payrolls can be adjusted.

This idea is common in subjective analysis; another frequent claim when measuring GMs is "He won X games with a $Y million payroll in year Z." But without a baseline for how many games a given payroll should win in a given year, no conclusion can be drawn from such a statement. Until that question is answered, there is no logical starting point.

To establish this baseline, I built a database of team payrolls and Pythagorean winning percentages for every team for which I had payroll information available (I used USA Today's database, found here).
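
For reference, Pythagorean winning percentage is Bill James's estimate of a team's expected winning percentage from its runs scored (RS) and runs allowed (RA); in its classic form:

W% = RS^2 / (RS^2 + RA^2)

Pythagorean records are generally preferred for this kind of work because they strip out some of the luck involved in close games.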

To adjust for the different average payrolls of each year, I found the Z-score of each payroll, that is, how many standard deviations it sits from the mean spending of that season. Running a regression on those variables yields an equation for what should be expected from a team at a given payroll:

Y = 0.0263(X) + 0.5004

This equation says that each standard deviation of payroll should raise or lower team winning percentage by about .026, a little over four wins in a 162-game season. The regression's correlation coefficient is only about 0.17; with roughly 600 team seasons of data that is statistically significant, but it also indicates that a large payroll is far from enough to win games (sorry, critics of Brian Cashman).
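
To make the mechanics concrete, here is a minimal Python sketch of the Z-score and regression steps, assuming the payroll data is already loaded as (year, payroll, Pythagorean winning percentage) rows; the sample rows below are placeholders rather than the actual database.

```python
# Minimal sketch of the baseline regression. The two sample rows are
# placeholders; the real database has one row per team season since 1988.
from statistics import mean, stdev

import numpy as np

seasons = [
    (2001, 33_810_750, 0.624),   # illustrative payroll / Pythagorean W%
    (2001, 112_287_143, 0.589),
    # ... one (year, payroll, pythag_wpct) row per team season
]

# Group payrolls by year so each payroll is compared to that season's
# spending rather than to a pooled all-years average.
payrolls_by_year = {}
for year, payroll, _ in seasons:
    payrolls_by_year.setdefault(year, []).append(payroll)

# Z-score each payroll: standard deviations above or below that year's mean.
z_scores, win_pcts = [], []
for year, payroll, wpct in seasons:
    yr = payrolls_by_year[year]
    z_scores.append((payroll - mean(yr)) / stdev(yr))
    win_pcts.append(wpct)

# Least-squares line: expected winning percentage as a function of payroll Z.
slope, intercept = np.polyfit(z_scores, win_pcts, 1)
r = np.corrcoef(z_scores, win_pcts)[0, 1]
print(f"expected W% = {slope:.4f}(Z) + {intercept:.4f}, r = {r:.2f}")
```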

The middle 95 percent of payrolls fall between Z-scores of -2 and 2, which, according to the formula, means an average GM should win about 72.5 games at an extremely low payroll and about 89.6 games at a relatively high one. (The only team that regularly spends outside this range is the Yankees.)
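
Plugging the endpoints into the equation makes the arithmetic explicit:

Z = -2: 162 × (0.5004 - 2 × 0.0263) ≈ 72.5 wins
Z = +2: 162 × (0.5004 + 2 × 0.0263) ≈ 89.6 wins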

This is an interesting conclusion: 73 wins on minimal financial resources looks like a better-than-average job, just as 90 wins on a huge payroll looks like a mild disappointment. It reinforces what the correlation coefficient already suggested, that money is somewhat overrated in baseball.


Applying this to GMs is a matter of comparing how each team did against the baseline (in math terms, finding the residual from the equation for each season). I took this a step further by converting those residuals into more readable curved test grades: 75 is an average job, a GM will almost never score above 100 or below 50, and even grades above 90 or below 60 are extremely difficult to achieve.
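
A sketch of one way to compute such a curve is below; the 10-points-per-standard-deviation scale is an assumption chosen to match that description, not necessarily the exact conversion behind the tables.

```python
from statistics import mean, stdev

def curved_grades(residuals, center=75, points_per_sd=10):
    """Convert each season's residual (actual minus expected winning
    percentage) into a curved test grade centered at 75.

    The 10-points-per-standard-deviation scale is an assumed value that
    reproduces the stated behavior (grades rarely above 100 or below 50);
    the exact curve behind the tables may differ.
    """
    mu, sigma = mean(residuals), stdev(residuals)
    return [center + points_per_sd * (r - mu) / sigma for r in residuals]

# A season two standard deviations above expectation grades near 95;
# two below grades near 55.
```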

Here are the rankings for the 10 best seasons and 10 worst seasons for a GM since 1988:

Team Year Grade GM

Ten best seasons:
OAK 2001 100.4 Billy Beane
SEA 2001 98.7 Pat Gillick
WAS 1994 97.7 Kevin Malone
HOU 1998 97.2 Gerry Hunsicker
CLE 1995 94.5 John Hart
LAA 2002 94.1 Bill Stoneman
NYY 1998 94.0 Brian Cashman
ATL 1998 92.9 John Schuerholz
OAK 2002 92.6 Billy Beane
ATL 1993 92.5 John Schuerholz

Ten worst seasons:
TOR 1994 57.5 Pat Gillick
NYY 1990 57.1 Harding Peterson
DET 1996 56.5 Randy Smith
DET 1995 56.2 Joe Klein
FLA 1998 55.5 Dave Dombrowski
DET 1989 55.0 Bill Lajoie
BAL 1988 53.1 Roland Hemond
DET 2002 50.8 Randy Smith
ARZ 2004 50.4 Joe Garagiola Jr.
DET 2003 49.3 Dave Dombrowski

And here are the rankings for the 10 best and worst GMs (with a minimum of five seasons since 1988), found by averaging all the scores for each of their seasons:

GM (Min 5 Seasons) Average Grade

Ten best:
Billy Beane 84.30
John Schuerholz 83.00
Gerry Hunsicker 80.45
Theo Epstein 80.20
Mark Shapiro 79.75
Bill Stoneman 79.61
Kevin Malone 79.01
J.P. Ricciardi 78.95
Pat Gillick 78.95
Ron Schueler 78.60

Ten worst:
Cam Bonifay 72.02
Herk Robinson 71.71
Steve Phillips 71.51
Bob Gebhard 71.33
Dave Littlefield 70.97
Bill Bavasi 69.79
Ed Lynch 69.70
Randy Smith 68.21
Allard Baird 65.93
Chuck LaMar 64.73
