how to measure a professor

“Many of those personal qualities that we hold dear … are exceedingly difficult to assess. And so, unfortunately, we are apt to measure what we can, and eventually come to value what is measured over what is left unmeasured. The shift is subtle and occurs gradually”. So wrote Robert Glaser of the USA National Academy of Education in 1987.

Those words – written about the standardised tests used in American schools in the 1980s – ring so true today for the way we assess academics. The things we tend to measure, because they are easy to measure, are things like publication numbers, impact factors, H-index (regrettably not the Happiness index), citations, grant income. And we tend to value most those who have big grants and papers in big name journals. Are we “driving out the very people we need to retain: those who are interested in science as an end in itself…”? Is the current “Impact factor mania (that) benefits a few” forcing academics to participate in a “winner-takes-all economics of science”? Is the “tournament” competition model ruining science by adversely affecting research integrity and creativity? Have we fallen into the trap Glaser warned of: do we now value what we can measure at the cost of losing what is actually most valuable?

Inspired by Glaser, education policy researcher Gerald Bracey generated a list:

Personal Qualities NOT measured by Standardised Tests

creativity, critical thinking, resilience, motivation, persistence, curiosity, question-asking, humour, endurance, reliability, enthusiasm, civic-mindedness, self-awareness, self-discipline, empathy, leadership, compassion, courage, sense of beauty, sense of wonder, resourcefulness, spontaneity, humility

Do metrics for academics assess these qualities? In some respects, they do. Publications and grant success require a level of creativity, critical thinking, motivation, persistence, curiosity, question-asking and enthusiasm. But at best they are a proxy measure. And there are deeper issues. Counting grant income as well as scientific publications is double-dipping: income is an input to research and publications are the output, so rewarding both counts the same work twice. What’s more, current metrics completely ignore many key responsibilities expected of academics. Committee work. Conference organisation. Reviewing. Mentoring. Outreach. I’ve been fortunate to work with fantastic supervisors and collaborators – people I trust, respect and like – but that’s certainly not everyone’s experience in academia. How do we ensure that academics with integrity, empathy, humility and compassion – as well as leadership, critical thinking and creativity – are rated highest and valued most of all if these personal qualities are not assessed or incentivised? In my mind, the best metrics would (1) enable a fair assessment relative to opportunity, (2) assess more of the duties expected of academics and (3) report on the personal qualities we hold dear in people we want to work with.

To address point 1, the metrics for those who have made it – full professors – ought to be different from those we use to assess academics still in the pipeline.

How might we measure a professor? Well, let’s imagine a few new metrics…

Publication Efficiency. Currently we focus heavily on three metrics: publication quantity, publication quality and grant income – and “more is better”. Professors are expected to secure competitive grants, attract junior researchers (many bringing in their own competitive fellowships) and train scholarship-funded students. The more dollars pulled in (grants, scholarships, fellowships), the more people in the team, and hence the more outputs generated. But large teams are not necessarily better. How productive has the team leader been with those funds? The publication efficiency (PE) metric weights publication output by research income:

PE = PO/RI

where PO is a measure of publication output over the past 5 years (e.g. POc could be total citations over the past 5 years, POn the number of publications over the past 5 years, and so on) and RI is the total research income over the past 7 years (that is, the certified total dollar value of all grants, all scholarships and all fellowships to all team members over that time). Seven years is chosen for the research income aggregate, rather than five, because it takes time to turn funding into scientific publications. The higher the PE, the better.
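To make the arithmetic concrete, here is a minimal sketch in Python. The function name and the example figures are my own illustration, not part of any established assessment system:

```python
def publication_efficiency(publication_output, research_income):
    """PE = PO / RI.

    publication_output -- a chosen output measure over the past 5 years,
                          e.g. total citations (POc) or publication count (POn)
    research_income    -- certified total dollar value of all grants, scholarships
                          and fellowships to all team members over the past 7 years
    """
    if research_income <= 0:
        raise ValueError("research income must be a positive dollar amount")
    return publication_output / research_income

# Hypothetical example: 40 papers over 5 years on $2.5M of income over 7 years
pe = publication_efficiency(40, 2_500_000)  # -> 1.6e-05 papers per dollar
```

Whichever output measure is chosen (citations, papers, or something else), the point is the same: the denominator stops “more dollars” from automatically meaning “better professor”.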

Sponsorship Index. One of the most important roles a professor can take on is training the next generation of research leaders. Trouble is, the way we rank and assess academics leads to a hypercompetitive environment. Take for example publications, the major currency of academia. The senior author position on papers is highly coveted because it identifies the intellectual leader of the research. Future grant success (= future survival) for senior academics requires senior author papers – and the more the better. A well-established professor, leading a large group, traveling extensively and with a large admin/committee/teaching load, relies on mid-career researchers within the team to generate ideas, direct the day-to-day research, train students, analyse results, write the papers. Yet the way the system works at present, the professor needs to take the senior author positions on papers. This is justified because the work was done in the professor’s lab, using equipment or protocols they established and using grant money they brought in to cover the salaries of the team members. The sponsorship index, SI, changes the incentives. It rewards professors for supporting mid-career researchers in a team:

SI = (SAS+2M+4A) / N

where N is the total number of papers from the team in the past 5 years, SAS is the number of papers over that time for which senior authorship was shared between the professor and a team member, M is the number of papers where the professor was a middle author and a team member was senior author, and A is the number of papers where a team member is senior author and the professor is gratefully thanked in the acknowledgements (and not by inclusion in the author list). Requiring that a professor maximise their sponsorship index would place greater emphasis on selflessness, which in turn would help ensure the career development of the next generation of academics.
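As a minimal sketch (again in Python, with invented counts), the SI calculation might look like this:

```python
def sponsorship_index(shared_senior, middle_author, acknowledged_only, total_papers):
    """SI = (SAS + 2M + 4A) / N over the past 5 years.

    shared_senior     -- SAS: papers with senior authorship shared between the
                         professor and a team member
    middle_author     -- M: papers with the professor as a middle author and a
                         team member as senior author
    acknowledged_only -- A: papers with a team member as senior author and the
                         professor thanked only in the acknowledgements
    total_papers      -- N: all papers from the team in the past 5 years
    """
    if total_papers <= 0:
        raise ValueError("the team must have published at least one paper")
    return (shared_senior + 2 * middle_author + 4 * acknowledged_only) / total_papers

# Hypothetical example: 30 team papers, of which 6 shared senior authorship,
# 4 had the professor as a middle author and 2 thanked them only in the acknowledgements
si = sponsorship_index(6, 4, 2, 30)  # -> (6 + 8 + 8) / 30 ≈ 0.73
```

The weighting (1, 2, 4) is deliberate: the further the professor steps back from the senior author slot, the more credit the index gives them.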

Good Mentorship Score. Following on directly from sponsorship is mentorship. Using current metrics “whether you are the best or worst mentor is irrelevant”. But it’s hardly irrelevant to potential team members and colleagues. How can a PhD student or postdoc find out if a professor is a person they can rely on to help them achieve their career goals (whatever they may be)? Horror stories abound of professors who treat team members appallingly – toxic academic mentors. Sadly, despite university policies that prohibit these behaviours, it’s usually the victims who suffer most. People in positions of power above the professor may not be aware of the problem (asshole behaviour is usually directed downwards), or may have an inkling but decide the grant income and papers generated by the professor are too valuable to risk losing.

So how to address this? My solution – get references. From former team members. HR can provide a random selection of 10 diverse former team members (i.e. male/female, PhDs/postdocs, different ethnicities). These referees then use a 5-point scale, where 1 is strongly disagree and 5 is strongly agree, to rate the professor against various statements. You know the sort of thing: “My ideas for developing my research were respected and valued”, “I felt included and appreciated as a team member”, “My goals as a researcher and a person were supported”, “The professor was someone I respected and trusted and wanted to be like”, “I was confident to speak to the professor about issues that arose regarding my work-life balance”, “I was encouraged to explore career options outside the traditional academic path”. Perhaps we should also poll mid-career colleagues in the same school – for example “The professor actively helps more junior colleagues develop their career”, “The professor takes on a fair and equitable teaching and committee workload”, and “The professor is a positive and encouraging role model”. To generate the good mentorship score (GMS), the scores are averaged across all questions and all reviewers. The GMS can then be used in discussions at performance reviews and considered in a mentoring component of track record assessments for grants and fellowships.
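A minimal sketch of the averaging step, assuming each referee returns one 1–5 rating per statement (the data here are invented for illustration):

```python
from statistics import mean

def good_mentorship_score(responses):
    """GMS: mean rating across all referees and all statements.

    responses -- list of per-referee rating lists (1 = strongly disagree,
                 5 = strongly agree), one rating per statement
    """
    all_ratings = [rating for referee in responses for rating in referee]
    if not all_ratings:
        raise ValueError("at least one rating is required")
    if any(not 1 <= rating <= 5 for rating in all_ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return mean(all_ratings)

# Hypothetical example: three referees, four statements each
gms = good_mentorship_score([
    [5, 4, 5, 4],
    [3, 4, 4, 5],
    [5, 5, 4, 4],
])  # -> 4.33 (to 2 d.p.)
```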

Civic-Mindedness Tally. Academics are expected to do much more than research and teaching – though it is research and (to a lesser extent) teaching that are assessed, measured to the nth degree, and valued most highly. Those other things we do – contributing to department/institute committees, professional societies, conference organisation, peer review and community outreach – are difficult to measure, so they tend not to be measured or assessed and therefore are not valued highly. The civic-mindedness tally (CMT) ensures that outstanding professorial citizens, who give their time for the good of society, are recognised for their altruistic contributions. The CMT is simply a count, for each of the past 5 years, of every certified committee membership, representative role, organisational appointment, grant review panel, editorial responsibility (see also academic karma for a new take on valuing peer review), science communication and community engagement activity – and yes, I think that should include blog posts 🙂 .
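As a minimal sketch of the tally (in Python, with an invented service record):

```python
def civic_mindedness_tally(contributions_by_year):
    """CMT: total count of certified service contributions over the past 5 years.

    contributions_by_year -- mapping of year -> list of certified contributions
                             (committees, review panels, editorial roles,
                              conference organisation, outreach, blog posts, ...)
    """
    return sum(len(items) for items in contributions_by_year.values())

# Hypothetical example
cmt = civic_mindedness_tally({
    2021: ["faculty promotions committee", "grant review panel"],
    2022: ["journal editorial board", "science week outreach talk", "blog post"],
    2023: ["conference programme chair"],
})  # -> 6
```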

I know, it’s too simplistic. But it’s better than nothing, which is what we do now. On its own, a high CMT won’t lead to *favourite* status for a professor. But in combination with current metrics, and the metrics described above, it should do wonders for improving the Happiness Index of institutions.

There you have it. That’s my philosophy for how we should measure a professor. It’s only a start, and no doubt there are many things that could be improved or are still missing (for other ideas see roadmap to academia beyond quantity and is competition ruining science?). So now, over to you: what measures do you think should be implemented to assess the qualities that really matter in our professoriate?