imagine there’s new metrics (it’s easy if you try)

Academia has become obsessed with metrics. Institutions jostle for the “top” positions in international rankings, departments are evaluated nationally to identify the “best”, and individuals are lined up against one another to find the “leaders”.

Let’s take the international rankings (eg THE, QS, Shanghai Jiao Tong) for example. These were established, apparently, to help students and staff identify the highest quality universities. The rankings would allow people to make informed decisions about where to study, teach and conduct research. It follows, then, that a higher rank will mean more students, especially international students, and this in turn means more money coming into the business, er, university.

Indicators used to calculate these rankings include things like academic reputation, research income, number of publications, number of citations to these papers, industry income, and the ratios of faculty:student, international:local student, international:local staff, and doctoral:bachelors student. Data are gathered from detailed surveys sent to academics, employers, and universities, as well as from companies that specialise in providing research citation data (Thomson Reuters, Scopus etc).

There are two things that are troubling about these indicators. The first is that there are no upper limits where there probably should be. Considerable effort is expended by institutions to increase each indicator with the aim of getting a top spot. If we extrapolate, without applying upper bounds, what could be the consequences of behaviours driven by these metrics? Hmm, let’s see. If the number of PhD students is a key indicator and there is no upper bound, this might lead to an oversupply of PhD graduates. In turn, this might result in PhD students becoming disenchanted with academia during the course of their studies. If large numbers of PhD graduates are being produced, it’s likely that a considerable proportion of early career researchers will be unable to find positions in academia. The pressure on PhD students to focus on producing research papers would leave them little time or opportunity to “explore (other) aspects relevant to their future career options”. This situation might produce a generation of highly intelligent, highly qualified PhD students and early career researchers who feel like failures.

The second thing that troubles me is that international rankings are meant to identify the best workplaces, yet none of the rankings evaluate important indicators like job satisfaction, work-life balance and equal opportunity. Taken to the extreme, the research quality indicators might drive behaviour that leads academics to work ridiculous hours, take no time off during the year, never celebrate their successes, and expect the same work ethic from their teams. In this scenario, leaders of the largest research teams would thrive, because they would produce more results than smaller groups, and we might see the rise of academic Ponzi schemes (BTW the individual development plan linked in that blog is worth checking out).

With funding cuts to research and a growing number of large teams led by senior researchers, we might also see grant funding success fall to record lows and junior researchers miss out on funding. Those who work best in collaborative, cooperative settings will become disenfranchised and demoralised in the hypercompetitive environment that develops. Researchers might consider cutting corners, and the academic pipeline will likely leak first with those who have significant domestic and caring responsibilities. Perhaps we might observe an increase in mental health issues among academics.

Hmmm, does this set of circumstances seem familiar to anyone else?

While there are good reasons to evaluate research quality and impact, it is inevitable that bad things will happen when no checks are placed on how the loftiest research heights are attained. If the goal of international rankings is truly to identify the best places to study and work, then new metrics are needed to identify institutions that combine achieving research and teaching greatness with offering the “best” diversity in their professoriate, boasting the “top” work-life balance, and supporting “leaders” who train research students for positions outside academia.

With these thoughts in mind, I’ve dreamed up a few new metrics to use alongside the more traditional ones. Maybe this combination might lead to rankings that identify the most successful, most highly productive higher education training grounds and workplaces that are also best at supporting career aspirations and mental, emotional and physical well-being.

1. The no-asshole rule

A few weeks ago at the SAGE Forum in Canberra, I heard about the no-asshole rule. We’ve all met them. We’ve all had to work with them. According to Bob Sutton, who wrote the book on the no-asshole rule, assholes are defined by two characteristics:

  • after encountering the person, people feel oppressed, humiliated or otherwise worse about themselves
  • the person targets less powerful people

If these characteristics apply, there is your asshole (figuratively speaking). Bob outlined the dirty dozen* actions that assholes use on a regular basis, and described an asshole scoring system (levels 0–3) and a management metric. To be eligible to participate in the new international rankings, universities must teach the no-asshole rule to freshers, employ no level 3 assholes and tolerate no level 3 asshole behaviour by staff or students (see, for one of too many recent examples, the Dalhousie “gentleman” student dentists). One asshole incident without consequences means the whole institution gets the big red dislike button.

2. H-index

No, not that one. This is the Happiness index. Bhutan has one: it’s called gross national happiness. Staff and students at universities will be surveyed each year to measure their happiness in the workplace. Someone else has already done the hard part by working out the questions for Bhutan, though no doubt we’ll have to add a few new ones for academic happiness (eg fairness in allocation of – and appropriate recognition of – teaching and service roles, simplicity of the university travel approval system, availability of high quality coffee (or tea and biccies in my case) etc). An essential indicator of happiness will be the annual leave ratio (ALR). This is a simple calculation:

ALR = ALT/ALA

where ALT is the combined number of days of annual leave taken by all staff (unprompted by HR) for the previous year, and ALA is the combined number of days of annual leave available to all staff each year (with a minimum of 20 days per staff member). Universities where staff take all of their available annual leave would rate highest, with an ALR of 1.
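
For the spreadsheet-inclined, here’s a minimal sketch of the ALR calculation in Python. The field names and figures are entirely my own invention (no HR system exports anything this tidy):

```python
# Sketch: the Annual Leave Ratio (ALR) for one institution over one year.
# Field names and figures are illustrative assumptions, not a real HR export.

def annual_leave_ratio(staff_records, min_entitlement=20):
    """staff_records: list of dicts with 'leave_taken' and
    'leave_available' in days for the previous year."""
    alt = sum(s["leave_taken"] for s in staff_records)
    # Apply the floor of 20 available days per staff member.
    ala = sum(max(s["leave_available"], min_entitlement) for s in staff_records)
    return alt / ala if ala else 0.0

# Example: two staff members, one of whom left 5 days untaken.
staff = [
    {"leave_taken": 20, "leave_available": 20},
    {"leave_taken": 15, "leave_available": 20},
]
print(round(annual_leave_ratio(staff), 2))  # 0.88
```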

3. F-index

The F-index is about fairness. Despite published pay scales, men are paid more than women in the upper echelons of academia for doing the same job. Because “loadings”. In my perfect world, to be eligible to participate in international rankings, universities would make public the average pay for men and women in leadership (professoriate and above) by posting the data on the front page of their website every Jan 1. The F-index is calculated as follows:

F = W/M

where W is the average salary for women in leadership positions and M is the average salary for men in leadership positions. Universities with the highest ratio (and thus the smallest gender pay gap) would rank highest on international rankings. It would be a fun experiment to see how long it would take for this ranking indicator to reverse the gender pay gap in academia. Looks like the University of Sydney will skyrocket on this measure: the VC indicated at the “Women at Sydney” event in late 2014 that remedying the gender pay disparity is a strategic objective for the university in 2015.
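
A toy calculation shows the arithmetic. The salaries below are invented for illustration; real inputs would come from those Jan 1 disclosures:

```python
# Sketch: the F-index from average leadership salaries.
# Salary figures are invented purely to show the arithmetic.
from statistics import mean

women = [180_000, 195_000, 170_000]  # hypothetical professorial salaries (women)
men = [210_000, 230_000, 200_000]    # hypothetical professorial salaries (men)

f_index = mean(women) / mean(men)
print(round(f_index, 2))  # 0.85 -- F = 1 would mean pay parity
```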

4. D-index

D is for diversity. Diversity is good. Differences in perspectives and methods of approaching problems lead to better outcomes. In Australia, our universities are populated by people of diverse culture, gender, age and socio-economic status. Yet our leaders are mostly male and mostly white. The D-index measures how well the leadership teams at universities (professoriate and above) reflect the diversity of the broader university (all staff and students). For example, let’s take gender. Plenty of studies show that having more women in the workplace, especially in leadership positions, is not only the right thing to do, it’s the smart thing to do, because it’s good for business. Yet gender equity in leadership positions remains at dismally low levels (<20%) across the board, while male CEOs dig in their heels over quotas. To counter the entrenched system, the D-index for gender (Dg) will be evaluated:

Dg = GB(lead)/GB(all)

where GB(lead) is the gender balance, or proportion of women, in leadership positions (usually <0.2) and GB(all) is the proportion of women across all staff and students (usually >0.5). Most universities would have Dg values of 0.4 or below. Those that score the maximum Dg value of 1 would zip up to the top of university rankings. If the D-index were implemented, would we see an exponential rise in women, POC, Asian and Indigenous people in leadership positions? I’d sure like to find out.
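
A quick sketch of the Dg calculation, using the typical proportions just mentioned; capping the score at the maximum value of 1 is my reading of the scoring above:

```python
# Sketch: the D-index for gender (Dg).
def d_index(prop_women_lead, prop_women_all):
    """Ratio of women's share of leadership to their share of the
    whole university; capped at the maximum value of 1."""
    return min(prop_women_lead / prop_women_all, 1.0)

# Typical figures from the post: <0.2 in leadership, >0.5 overall.
print(round(d_index(0.18, 0.55), 2))  # 0.33
```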

5. K-index

K is for kids. One issue that crops up again and again is that primary caring responsibilities often fall to women, with a consequent reduction in their academic competitiveness (unless their track record is considered, fairly, relative to opportunity). So problematic is this issue that some women choose to forgo having children to remain competitive in their careers. Why do we make it so difficult for the smartest women to reproduce? Wouldn’t it be good for the world, and for universities, if we made it easier? At the same time, male academics want to spend more time caring for their kids, but face stigma and a lack of support from colleagues and bosses when they take time off for parental duties. Why do we make it so difficult for the brightest men to participate in the most important work of all? Wouldn’t it be good for the world, and for universities, if we made it easier? Germany dealt with this specific problem by adding an extra two months to the standard 12 months of paid parental leave when both parents take time off to care for their children. The K-index celebrates the birth of children to academics:

K = (A+3B+C+D)/E

where A is the number of days of parental leave taken by women over the past year, B is the number of days of parental leave taken by men over the past year, C is the number of childcare places on campus, D is the number of parenting rooms on campus and E is the total number of staff and students on campus. On this metric, those universities that best support and encourage families will rocket to the top of international rankings, and most likely will have to turn away large numbers of outstanding students and academics.
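
And a sketch of the K-index, with all counts invented for a hypothetical mid-sized university. The 3× weighting on B comes straight from the formula above, presumably to reward institutions where men actually take parental leave:

```python
# Sketch: the K-index. All counts are invented for illustration.
def k_index(a, b, c, d, e):
    """a: days of parental leave taken by women in the past year
    b: days of parental leave taken by men (weighted 3x below,
       presumably to reward institutions where men take leave)
    c: childcare places on campus
    d: parenting rooms on campus
    e: total number of staff and students on campus"""
    return (a + 3 * b + c + d) / e

# A hypothetical mid-sized university:
print(round(k_index(a=4000, b=1500, c=150, d=30, e=30000), 2))  # 0.29
```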

endnote

You may say I’m a dreamer (but I’m not the only one). I’m also something of a realist: no doubt, if these indices were implemented, game-playing would follow, with unintended consequences. Nevertheless, it’s been interesting to think about university metrics that might drive new, perhaps more socially just, workplace behaviours. Maybe I’ll dream up some indicators along the same lines for ranking individual academics in the next post….

 

*Bob Sutton’s dirty dozen

  1. Personal insults
  2. Invading one’s “personal territory”
  3. Uninvited physical contact
  4. Threats and intimidation, both verbal and nonverbal
  5. “Sarcastic jokes” and “teasing” used as insult delivery systems
  6. Withering e-mail flames
  7. Status slaps intended to humiliate their victims
  8. Public shaming or “status degradation” rituals
  9. Rude interruptions
  10. Two-faced attacks
  11. Dirty looks
  12. Treating people as if they are invisible

Thoughts on “imagine there’s new metrics (it’s easy if you try)”

  1. This was a thought-provoking post, and on reflection, I think that you identify one of the major issues up front with the comment about ‘upper bounds’. Very often the easy thing to measure is the maximum, when what we should be aiming at is the optimum. ‘More’ is equated with ‘better’, but we all know that isn’t the case. Places like EMBL and Janelia Farm have limits on group sizes for a reason, but cash-limited universities see graduate students as ‘income streams with the potential for growth’, and at that point the interests of the institution and the individuals within it begin to part ways.

     I like the new indices, and I think that getting the K-index right is probably fundamental to correcting the F-index and the D-index over the longer term.

  2. Wow, Jenny – you are at your best! I spent 15 minutes sending your post to all my friends, including lots of men and some male decision makers.

  3. Beautifully expressed, Jenny, and a real thought-provoker. Excellent ideas here; I hope you get heaps of readers and, more importantly, that people in power have a good long look at their strategies with these ideas in mind.


  4. The number of academics with kids is likely also an indicator of how well people feel. If I recall correctly, that number for Germany is 22%. The radio presenter added: in a zoo, that would be seen as a reason for concern about how the animals are kept.

