As indicated earlier, one of our basic problems with the Department of Education’s proposed GE Rule is not that the Department wants to set standards to identify poorly performing programs. To the contrary, we fully support the Department on the need to identify programs that do not provide value to students. Instead, our problem is with the metrics that ED has developed to measure a program’s success in preparing students for good jobs. The metrics that ED has fashioned fail to account for certain important factors while measuring other factors that are irrelevant to how well an educational program prepares students for gainful employment.

One of the most obvious and traditional ways to measure whether a program delivers value for its students is to consider its completion rates. Yet, graduation rates don’t factor into the Department’s approach. Instead, ED has chosen to develop new and complex financial tests, specifically the debt-to-earnings rates and program cohort default rates (pCDRs), as the “litmus tests” under the GE Rule.

It’s hard to understand the omission of graduation rates as a critical metric within the GE Rule. Program completion is a “must have” for entry into a number of fields, and we all know that prospective students are keenly interested in how many students successfully get through the programs offered at specific schools they are considering attending. The completion rate is a rather straightforward way for students to assess the likelihood that a program will prepare them for – and help them become gainfully employed within – their intended field. It’s odd, to say the least, that the Department would simply ignore this measure.

A USA TODAY analysis showed that there are a number of colleges where default rates actually exceed graduation rates – yet, amazingly, the Department shows no interest in this striking comparison, which depends on graduation rates. How is this serving students’ best interests? We will leave it to others to consider why the Department is omitting completion rates from the GE Rule.

Critics of proprietary schools often tout community colleges as the alternative for underrepresented student populations. Maybe they should double-check the completion rates of community colleges before they trot out this comparison. Far too many taxpayer-supported community colleges have completion rates of less than one percent. Yes, that means that fewer than one in every 100 entering students completes his or her program on time. Is it really good policy to broadly assume that community colleges are better equipped than proprietary colleges to serve these students, and then design a rule that avoids a metric those colleges find challenging?