Measures of Academic Performance … the Good and the Bad

Nature, in addition to adding PLoS-like commenting across its entire journal, has published an opinion piece by Julia Lane on the long-discussed need for better metrics of academic productivity. She starts off:

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
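As an aside, the Hirsch index Lane mentions has a simple definition: a researcher has index h if h of their papers each have at least h citations. A minimal sketch of the computation (illustrative only; not tied to any particular citation database):

```python
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts.

    The h-index is the largest h such that h papers each have
    at least h citations.
    """
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # sorted descending, so no later paper can
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```

Note how the measure rewards accumulation: an h-index can only grow over time, which is one concrete reason it favours older researchers, as the excerpt points out.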

Better metrics will require cleaner, more easily accessible data not scattered among proprietary data sets … and data more appropriate for gauging performance. For example, “MESUR (Metrics from Scholarly Usage of Resources, http://www.mesur.org), a project funded by the Andrew W. Mellon Foundation and the National Science Foundation, records details such as how often articles are searched and queried, and how long readers spend on them.” In suggesting other alternatives, she recognizes that:

Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes and even the production of YouTube videos. Many of these are more up-to-date measures of activity than citations. Knowledge transmission differs from field to field: physicists more commonly use preprint servers; computer scientists rely on working papers; others favour conference talks or books. Perhaps publications in these different media should be weighted differently in different fields.

Of course, a major factor in hiring and especially P&T decisions will continue to be grant funding, which in turn requires a solid publication record, no matter what productivity metric is used. One wonders if and when the US will be forced to adopt measures (which could themselves serve as a different sort of new metric) recently implemented in the UK to curb grant submissions by “repeatedly unsuccessful” PIs. Imagine how fun that discussion would be with your Chair come evaluation time….

5 Comments

  1. BB said

    Several of my papers don’t show up in PubMed (they are in medicinal chemistry journals). Any metric that doesn’t cover every database has inherent inaccuracies.

    Funding rates are higher in the UK. I’ve served on a couple of their study sections, and the grant program information included funding rates of ~50%. Being “repeatedly unsuccessful” in that system is hardly comparable to being repeatedly unsuccessful in ours, with NIH funding rates dipping into single digits.

  2. antipodean said

    BB: They’ve just told you that the rates are dipping.

    Also, NIH, whilst tight on a percentage basis, has three rounds per year. So you’d probably want to compare annual success rates across countries with different round frequencies. NIH apps get turned over more quickly, which might be part of the reason for the apparently horribly low rates compared with other places.

    • drugmonkey said

      NIH’s “success rate” counts revisions and original submissions as one application if they fall in the same fiscal year. It adds needless complexity to the analysis…

  3. BB said

    Antipodean: pardon me, who told me rates are dipping?

    The US DoD, which has only one round of apps per year, has a less than 20% funding rate on average across all programs; that is still much lower than in the UK and France (unless the UK and French guidelines lie to reviewers).

  4. CD0 said

    The funding criteria are completely different anyway. Having worked in 3 European countries and in the US, I can say that in Europe, how old you are, how many people you personally know, and even who your father was still matter as much as what you propose to do or your productivity.

    Fortunately, funding problems in the US can be solved with money and political vision.
