how to measure a professor

“Many of those personal qualities that we hold dear … are exceedingly difficult to assess. And so, unfortunately, we are apt to measure what we can, and eventually come to value what is measured over what is left unmeasured. The shift is subtle and occurs gradually.” So wrote Robert Glaser of the US National Academy of Education in 1987.

Those words – written about the standardised tests used in American schools in the 1980s – ring true today for the way we assess academics. The things we tend to measure, because they are easy to measure, are things like publication numbers, impact factors, H-index (regrettably not the Happiness index), citations and grant income. And we tend to value most those who have big grants and papers in big-name journals. Are we “driving out the very people we need to retain: those who are interested in science as an end in itself…”? Is the current “Impact factor mania (that) benefits a few” forcing academics to participate in a “winner-takes-all economics of science”? Is the “tournament” competition model ruining science by adversely affecting research integrity and creativity? Have we fallen into the trap Glaser warned of: do we now value what we can measure at the cost of losing what is actually most valuable?

Inspired by Glaser, education policy researcher Gerald Bracey generated a list:

Personal Qualities NOT measured by Standardised Tests

creativity, critical thinking, resilience, motivation, persistence, curiosity, question-asking, humour, endurance, reliability, enthusiasm, civic-mindedness, self-awareness, self-discipline, empathy, leadership, compassion, courage, sense of beauty, sense of wonder, resourcefulness, spontaneity, humility

Do metrics for academics assess these qualities? In some respects, they do. Publications and grant success require a level of creativity, critical thinking, motivation, persistence, curiosity, question-asking and enthusiasm. But at best they are a proxy measure. And there are deeper issues. Counting grant income as well as scientific publications – well, that’s double-dipping. What’s more, current metrics completely ignore many key responsibilities expected of academics. Committee work. Conference organisation. Reviewing. Mentoring. Outreach. I’ve been fortunate to work with fantastic supervisors and collaborators – people I trust, respect and like – but that’s certainly not everyone’s experience in academia. How do we ensure that academics with integrity, empathy, humility and compassion – as well as leadership, critical thinking and creativity – are rated highest and valued most of all if these personal qualities are never assessed or incentivised? To my mind, the best metrics would (1) enable a fair assessment relative to opportunity, (2) assess more of the duties expected of academics and (3) report on the personal qualities we hold dear in the people we want to work with.

To address point 1, the metrics for those who have made it – full professors – ought to be different from those we use to assess academics still in the pipeline.

How might we measure a professor? Well, let’s imagine a few new metrics …

Publication Efficiency. Currently we focus heavily on three metrics: publication quantity, publication quality and grant income – and “more is better”. Professors are expected to secure competitive grants, attract junior researchers (many bringing in their own competitive fellowships) and train scholarship-funded students. The more dollars pulled in (grants, scholarships, fellowships), the more people in the team, and hence the more outputs generated. But large teams are not necessarily better. How productive has the team leader been with those funds? Using the publication efficiency (PE) metric, publication metrics are weighted by income:

PE = PO/RI

where PO is a measure of publication output over the past 5 years (e.g. POc could be total citations over the past 5 years, POn the number of publications over the past 5 years, and so on) and RI is the total research income over the past 7 years (that is, the certified total dollar value of all grants, scholarships and fellowships to all team members over that time). Seven years is chosen for the research income aggregate, rather than five, because it takes time to turn funding into scientific publications. The higher the PE, the better.
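As a rough sketch, the PE calculation might look like the following (the function name, argument names and the per-million-dollar scaling in the example are my own illustrative assumptions, not part of any established system):

```python
def publication_efficiency(publication_output_5yr, research_income_7yr):
    """PE = PO / RI.

    publication_output_5yr: a publication-output measure over the past
    5 years (e.g. POc, total citations, or POn, number of papers).
    research_income_7yr: certified total dollar value of all grants,
    scholarships and fellowships to all team members over 7 years.
    """
    if research_income_7yr <= 0:
        raise ValueError("research income must be positive")
    return publication_output_5yr / research_income_7yr

# Example: 800 citations over 5 years, from $2M of income over 7 years.
pe = publication_efficiency(800, 2_000_000)
print(f"{pe * 1_000_000:.0f} citations per $M")  # 400 citations per $M
```

Note the ranking this produces: a leaner team that generates the same output from half the funding scores twice the PE.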

Sponsorship Index. One of the most important roles a professor can take on is training the next generation of research leaders. Trouble is, the way we rank and assess academics leads to a hypercompetitive environment. Take for example publications, the major currency of academia. The senior author position on papers is highly coveted because it identifies the intellectual leader of the research. Future grant success (= future survival) for senior academics requires senior author papers – and the more the better. A well-established professor, leading a large group, traveling extensively and with a large admin/committee/teaching load, relies on mid-career researchers within the team to generate ideas, direct the day-to-day research, train students, analyse results, write the papers. Yet the way the system works at present, the professor needs to take the senior author positions on papers. This is justified because the work was done in the professor’s lab, using equipment or protocols they established and using grant money they brought in to cover the salaries of the team members. The sponsorship index, SI, changes the incentives. It rewards professors for supporting mid-career researchers in a team:

SI = (SAS+2M+4A) / N

where N is the total number of papers from the team in the past 5 years, SAS is the number of papers over that time for which senior authorship was shared between the professor and a team member, M is the number of papers where the professor was middle author and a team member was senior author, and A is the number of papers where a team member is senior author and the professor is gratefully thanked in the acknowledgements (rather than included in the author list). Requiring that a professor maximise their sponsorship index places greater emphasis on selflessness, and in turn this will help support the career development of the next generation of academics.
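A minimal sketch of the SI formula (names are my own; the weights 1, 2 and 4 come straight from the definition above):

```python
def sponsorship_index(sas, middle, ack, n):
    """SI = (SAS + 2*M + 4*A) / N.

    sas:    papers with senior authorship shared professor/team member
    middle: papers with the professor as middle author, team member senior
    ack:    papers where the professor appears only in the acknowledgements
    n:      total papers from the team over the past 5 years
    """
    if n <= 0:
        raise ValueError("team must have at least one paper")
    return (sas + 2 * middle + 4 * ack) / n

# Example: of 20 team papers, 4 shared senior authorship, 3 had the
# professor as middle author, and 2 thanked them only in the acknowledgements.
print(sponsorship_index(sas=4, middle=3, ack=2, n=20))  # 0.9
```

The weighting rewards stepping back furthest: a paper where the professor is only acknowledged contributes four times as much to the SI as one with shared senior authorship.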

Good Mentorship Score. Following on directly from sponsorship is mentorship. Using current metrics, “whether you are the best or worst mentor is irrelevant”. But it’s hardly irrelevant to potential team members and colleagues. How can a PhD student or postdoc find out whether a professor is a person they can rely on to help them achieve their career goals (whatever those may be)? Horror stories abound of professors who treat team members appallingly – toxic academic mentors. Sadly, despite university policies that prohibit these behaviours, it’s usually the victims who suffer most. People in positions of power above the professor may not be aware of the problem (asshole behaviour is usually directed downwards), or may have an inkling but decide the grant income and papers generated by the professor are too valuable to risk losing. So how to address this? My solution – get references. From former team members. HR can provide a random selection of 10 diverse former team members (i.e. male/female, PhDs/postdocs, different ethnicities). These referees then use a 5-point scale, where 1 is strongly disagree and 5 is strongly agree, to rate the professor against various statements. You know the sort of thing: “My ideas for developing my research were respected and valued”, “I felt included and appreciated as a team member”, “My goals as a researcher and a person were supported”, “The professor was someone I respected and trusted and want to be like”, “I was confident to speak to the professor about issues that arose regarding my work-life balance”, “I was encouraged to explore career options outside the traditional academic path”. Perhaps we should also poll mid-career colleagues in the same school – for example, “The professor actively helps more junior colleagues develop their career”, “The professor takes on a fair and equitable teaching and committee workload”, and “The professor is a positive and encouraging role model”.
To generate the good mentorship score (GMS), the scores are averaged across all questions and all reviewers. The GMS can then be used in discussions at performance reviews and considered in a mentoring component of track record assessments for grants and fellowships.
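The averaging step is simple enough to sketch directly (a hypothetical illustration; the statements and scores here are made up):

```python
def good_mentorship_score(ratings):
    """Average 1-5 Likert ratings across all referees and all statements.

    ratings: one list per referee, each containing that referee's score
    (1 = strongly disagree, 5 = strongly agree) for every statement.
    """
    scores = [s for referee in ratings for s in referee]
    if not scores:
        raise ValueError("at least one rating is required")
    return sum(scores) / len(scores)

# Two (hypothetical) former team members each rating three statements.
print(good_mentorship_score([[5, 4, 4], [3, 4, 4]]))  # 4.0
```

Averaging over a pooled list of all scores, rather than per referee first, weights every statement equally; if referees answered different numbers of questions you might prefer a mean of per-referee means instead.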

Civic-Mindedness Tally. Academics are expected to do much more than research and teaching – though it is research and (to a lesser extent) teaching that are assessed, measured to the nth degree, and valued most highly. The other things we do – contributing to department/institute committees, professional societies, conference organisation, peer review and community outreach – are difficult to measure, so they tend not to be measured or assessed, and therefore are not valued highly. The civic-mindedness tally (CMT) ensures that outstanding professorial citizens, who give their time for the good of society, are recognised for their altruistic contributions. The CMT is simply a sum, for each year over the past 5 years, of each certified committee, representative role, organisational appointment, grant review panel, editorial responsibility (see also academic karma for a new take on valuing peer review), science communication activity and community engagement – and yes, I think that should include blog posts 🙂 .
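The tally itself is just a count of certified contributions per year; a sketch, with an entirely made-up record:

```python
def civic_mindedness_tally(contributions_by_year):
    """CMT: total number of certified contributions over the assessment
    window (intended to be the past 5 years).

    contributions_by_year: dict mapping year -> list of certified items
    (committees, representative roles, conference organisation, grant
    review panels, editorial duties, outreach activities, blog posts ...).
    """
    return sum(len(items) for items in contributions_by_year.values())

# A (made-up) two-year snapshot of a professor's service record.
record = {
    2014: ["faculty committee", "grant review panel", "outreach talk"],
    2015: ["journal editorial board", "conference organiser"],
}
print(civic_mindedness_tally(record))  # 5
```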

I know, it’s too simplistic. But it’s better than nothing, which is what we do now. On its own, a high CMT won’t lead to *favourite* status for a professor. But in combination with current metrics, and the metrics described above, it should do wonders for improving the Happiness Index of institutions.

There you have it. That’s my philosophy for how we should measure a professor. It’s only a start, and no doubt there are many things that could be improved or are still missing (for other ideas see roadmap to academia beyond quantity and is competition ruining science?). So now, over to you: what measures do you think should be implemented to assess the qualities that really matter in our professoriate?

14 thoughts on “how to measure a professor”

  1. I don’t like this idea that keeps coming up that senior professors should just give away last authorship or not be on the paper at all. It’s a dishonest representation of the work. It implies that the last author was entirely independent in managing this work, which isn’t the case if you’re working in someone else’s lab.

    • It shouldn’t be dishonest. Actually, it’s meant to address dishonest authorship on papers. Perhaps read through the link to The Conversation article in that section to find out more. I’ve taken myself off papers, or asked others to act as senior authors (they write the papers, coordinate input, submit and respond to editorial queries – with advice from me if they need it), on several occasions where someone in my lab has come up with an idea and I’ve encouraged them to pursue it. I’ve offered advice where possible, but to me it’s dishonest in those cases to claim authorship, especially senior authorship. In Australia, new research leaders have to show independence to get funding. They can’t do that if all their work is seen as being intellectually led by someone else. The idea is not to discount what professors do, but to help them support junior researchers in developing their own projects, and to value that support in some way. In Australia, grant-supported postdocs can spend up to 20% of their time working on other projects – the idea of an SI is to incentivise support of junior researchers.

  2. Thanks for a great post. I think it is ambitious and obviously difficult to change the entrenched metrics for measuring academic output, but that does not mean we should not try. Your sponsorship index and good mentorship score are particularly interesting ideas, but I suspect they would be the most challenging to implement – not just for the reasons you give, such as the pervasive culture in academia, but also for reasons of human psychology around success and publicity. Although if, as you say, we can create the right incentives for professors to encourage and support mid-level academics, then we may be able to effect a profound shift in attitude and practice.

    • Thanks Mick. I’m told by more senior academics that if you find a problem with something in academia, it’s no good simply highlighting the problem – you need to come up with solutions. I realise – as you do – that these “metrics” are unlikely to be embraced, but it’s a way of pointing out a problem and being proactive about addressing it.

  3. I think we can all find problems with metrics – we measure what is measurable, not necessarily what is of value. I think you’re right to say that if you think there is a problem with current metrics, you should propose a solution. Maybe by putting that vision out there, others who have also thought about this – and may have gone further along in putting that vision into practice – can be supported by knowing they’re not alone.

  4. I argued in my own tenure case that I had a high publication efficiency (small grants, lots of papers) and that I did things that favoured future high PE (theory). But this actually worked against me: for a given number of papers published, a smaller number of grant dollars (and so, higher PE) is actually seen as a negative, not a positive. Particularly in the US (where I was at the time), the indirect costs funding structure incentivizes a _low_ PE, not a high one! I completely agree with you that this is foolish and wrong, but there it is…

  5. (from twitter)

    Here is what I was trying to say about the PE metric on twitter. At least in Germany, MSc students aren’t paid and often publish; PhD students are not supposed to secure funding for themselves – their advisor is the one who should pay their salary from their own funding (I said on twitter that it comes from the uni, but I was confusing the contract with the funding); and the same goes for post-docs – although they do sometimes apply for external funding, this is not the norm.

    What I’m thinking about is a phenomenon that already happens in some groups. A PhD in Europe starts with the equivalent education of a 2nd/3rd-year PhD in the US (because we have already done a 1-2 year MSc program), and takes 3 years. Students are expected to publish at least one paper per year, and this often comes in their contract. After 3 years, they should in principle defend their thesis and go looking for a post-doc position with a higher salary. What can happen, especially if encouraged by a metric like that, is that instead of hiring the PhD student as a post-doc (or someone who has just finished and has the same experience), the professor can just drag a PhD out for longer, and have essentially the same labour for less money. For the same grant income, you could have a larger team by hiring only PhDs and accepting MSc students, and publish more than a more diverse team would, discouraging career progression.

    I think in this scenario a metric that takes into account grant income and the number of team members, in a way that favours professors with diverse teams, would be fairer, but I couldn’t come up with one. That, or encouraging post-docs to also look for grants, since this is something they will have to do as professors anyway.

  6. Pingback: Recommended Reading | January & February 2015 | Cindy E Hauser

  7. Pingback: but what can I do? | cubistcrystal

  8. Pingback: Metrics for Mums: tips for writing about a track record with career interruptions | emily nicholson

  9. Pingback: merit and demerit | cubistcrystal

  10. Small thing. Simplistic = oversimplified (not simplified)
    ‘Too simplistic’ = over-oversimplified?

    Thoughtful, clearly written and _moral_ post – thanks for creating it.
