The philosophy behind performance funding formulas


In Accountability by Mia Murphy December 2, 2019

This fall in the Michigan Legislature, the House and Senate Appropriations Subcommittees on Higher Education are holding informational hearings while they consider revisions to the state’s funding formula used for its 15 public universities. MASU is slated to testify on Dec. 10th, and other interested groups are offering their input on the existing formula. From my vantage point, the two most important considerations to keep in mind are that:

  • The peer-reviewed research shows that performance-based funding (PBF) doesn’t have much of a positive effect at all; and
  • If we’re going to go to the trouble of examining PBF, then all universities need to be at the table and actually listened to. That didn’t happen last time.

Notably, when scholars of higher education point out that the research shows the inefficacy of PBF, deep-pocketed proponents funded by the Gates Foundation or Lumina Foundation often strike back. But one thing that often gets left out of these debates is any real discussion of what the intent or purpose of performance funding actually is.

I served in the State Budget Office from 2008 to 2015, where I led the executive branch’s implementation of performance funding at the direction of Governor Rick Snyder. Within SBO, we as staff were very interested in the philosophical reasons for including each draft metric we brought to the Governor, and what hypothetical changes in performance each would incentivize. Neither the House nor Senate subcommittee leadership supported the governor’s proposal for PBF as presented in the 2013 Executive Recommendation, nor were universities openly consulted for their input during executive or legislative development. An eventual standoff between the two chambers on their versions of the formula meant that the performance formula that eventually passed was a compromise born of a 3 AM legislative negotiation, not a coherent top-to-bottom design.


The real hazard associated with PBF, I would argue, arises when policymakers believe that state funding for higher education allocated via a formula is free of bias. Any and every iteration of a formula embodies a set of value assumptions. But what are these values? In my research back in 2011, I found that there are really only two strategic choices a state can make when designing allocations for performance-based funding. A state can:

  • Reward excellence; or
  • Support improvement.

The problem with those two goals is that they are inherently in tension with each other.

Consider the following two universities:

  • University A has a graduation rate of 95%, and reliably graduates this percentage of students year after year. This university would perform well in a system that rewards high performance.
  • University B had an initial graduation rate of 50%, but slowly moved it up to 65% over time. This university would be well rewarded in a system that funded improvement in metrics.

If University A were placed in a system that funded improvement, though, it would “top out.” There is a realistic ceiling to graduation rates in an entirely voluntary enrollment system. Even if University A had an unheard-of 100% graduation rate, a perfect graduation rate would receive no new funding under a growth model. Similarly, University B would be in a bind if the state funded universities by looking only at absolute performance. A fifteen-percentage-point increase in graduation rates would be a national success story for the university, its faculty, and most importantly, its students. But the much improved 65% rate would still be swamped by University A’s 95% rate, and so funding would not flow to University B. Clearly, both of these universities have claims to be doing well, but which method can fund both accordingly?
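The tension between the two approaches can be sketched in a few lines of code. This is purely illustrative: the scoring rules and numbers below are my own simplified assumptions, not Michigan’s actual formula.

```python
# Illustrative sketch of the two strategic choices, using the two
# hypothetical universities from the text. Scoring rules are assumptions.

def excellence_score(current_rate):
    """Reward absolute performance: score is the current graduation rate."""
    return current_rate

def improvement_score(prior_rate, current_rate):
    """Reward growth: score is the percentage-point gain, floored at zero."""
    return max(current_rate - prior_rate, 0)

# University A: steady 95% graduation rate, year after year.
# University B: improved from 50% to 65% over time.
a_excellence  = excellence_score(95)        # 95
b_excellence  = excellence_score(65)        # 65
a_improvement = improvement_score(95, 95)   # 0  -- "topped out"
b_improvement = improvement_score(50, 65)   # 15
```

Under the excellence rule, University A dominates (95 vs. 65); under the improvement rule, University B dominates (15 vs. 0). Neither single rule funds both universities for what each is genuinely doing well.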


The second philosophical quandary that I believe has been overlooked is whether performance funding is supposed to be:

  • A reward for those universities doing well; or
  • A punishment for those not performing as the Legislature would like.

This carrot vs. stick debate is intrinsic to the entire PBF system. Should universities receive additional state funding if they’re already performing well? This is the reward model, and such a university could then take the extra funding and build upon what has proven to work. The logic behind the other option is that the Legislature presumes a university will notice that it has received less money than other universities and take action to change its outcomes. The reality, though, is that it takes money to make money: universities must invest in the resources needed to drive up institutional outcomes. Now consider a University C that has low outcomes for whatever reason, be it impoverished students, state budget cuts, or anything else, and receives state funding based on a set of performance metrics. Improvement in student outcomes requires investment in teaching, financial aid, and wraparound student success services. It’s a catch-22: funding under PBF models requires either already-high performance or a pre-existing trajectory of improvement, yet jump-starting improvement requires investment. Whether top-level excellence or earnest improvement is rewarded by a formula makes no difference to a university that can’t afford sufficient investment in student success programs to begin with.

Finally, the logic behind all these arguments presumes that universities aren’t paying attention to their outcomes and need a formula to drive improvement. That is a patently false assumption. Michigan’s public universities have one goal above all else, and that’s student success. It’s both a moral imperative to do the utmost for students and, frankly, a much bigger economic driver than the PBF formula. Almost eighty percent of operating dollars come from students and their families; the State of Michigan chips in only about 20%. The public universities of Michigan received $3.9 million in new metric-scored funding in FY2020, and another $3.9 million across the board that is still working to erase the deficit from the 15% cuts universities were dealt in FY2012. $3.9 million for the entire sector on a $7.1 billion university general fund base (tuition plus state appropriations) is a 0.05% increase. The slicing and dicing of metrics for such a small impact seems hardly worth the heartburn.
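The back-of-the-envelope arithmetic above is easy to verify. A quick sketch, using the figures quoted in the text (in millions of dollars):

```python
# Figures from the text, in millions of dollars.
metric_scored_increase = 3.9   # new metric-scored funding, FY2020
general_fund_base = 7_100      # sector-wide base: tuition + state appropriations

# Share of the base represented by the new performance funding.
increase_pct = metric_scored_increase / general_fund_base * 100
print(f"{increase_pct:.2f}%")  # prints "0.05%"
```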


Nonetheless, we have a formula here in Michigan, and previously serving state lawmakers have already made choices about what should be in it. The PBF formula we have is a conglomeration of metrics and scoring rubrics that attempt to both reward excellence and support improvement (plus greater enrollment). It is an appreciated attempt by the legislative architects of the formula to recognize the tension inherent in the two goals of performance funding. But the practical effect is to essentially counteract one goal with the other, creating a complex muddle without drastic institutional winners or losers. At the end of the day, creating vast, unpredictable swings between existing base funding and new funding is in nobody’s interest. This isn’t unique to universities. One need only look as far as the state’s own community college funding formula to see that it, too, mixes across-the-board funding, rewards for excellence, support for improvement, and enrollment-based funding. The result, after hours of computations and calculations, is a formula that generally distributes funding in roughly equal proportions to each college. Between the minuscule amount of additional funding added to the pot and the contradictory metrics, you can see why the research isn’t optimistic about PBF.


If given the choice, some of the state universities might scrap the entire formula, and others might preserve it entirely as-is. The official (and only) position of MASU is that all universities must be at the table and have their voices considered if the formula is to be revised (or continued). But whether or not the formula is reconsidered, it’s worth reminding policymakers that the formula as it exists is not neutral. It has already made assumptions about what to fund and how to fund it. Again, it was born of political compromise, and its development wasn’t entirely open.

MASU is pleased to serve as a consultative resource to policymakers any time they wish to reconsider the formula, though I think two questions are much more useful than how to slice and dice a formula:

  • Should we use a formula at all; and
  • How much funding is the state willing to invest in public higher education, regardless of method?

And that’s really it. What matters isn’t so much the formula, but the low level of investment that goes to Michigan public universities, formula or not. Governor Whitmer’s recommended increase of 3% for universities in FY2020 was deeply appreciated. It’s a vote of confidence that higher education is on the right track, and particularly meaningful when state revenues are otherwise constrained by tax credit after tax cut. But legislative actions whittled that down to just a 0.5% increase. With almost a billion dollars in lost state funding since FY2002, 0.5% is a drop in a very deep bucket, and formula adjustments won’t change that.

Bob Murphy is the Director of University Relations and Policy at the Michigan Association of State Universities.

@BobWMurphy     @MASUmichigan