Nature, in addition to adding PLoS-like commenting to its entire journal, has an opinion piece by Julia Lane on the long-discussed need for better metrics for measuring academic productivity. She starts off:
Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.
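For concreteness, the Hirsch index mentioned above is simple arithmetic over a researcher's per-paper citation counts: the largest h such that the researcher has at least h papers with at least h citations each. A minimal sketch (the citation counts below are invented for illustration):

```python
def h_index(citations):
    """Hirsch index: the largest h such that there are
    at least h papers with at least h citations each."""
    h = 0
    # Walk the counts from most-cited to least-cited; h grows
    # as long as the i-th paper has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical record: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note how the index rewards a sustained body of cited work over a single highly cited paper, which is part of why it favors older researchers, as the piece points out.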
Better metrics will require clean, more easily accessible data not scattered among proprietary data sets … and more appropriate data for gauging performance. For example, “MESUR (Metrics from Scholarly Usage of Resources, http://www.mesur.org), a project funded by the Andrew W. Mellon Foundation and the National Science Foundation, records details such as how often articles are being searched and queried, and how long readers spend on them.” In suggesting other alternatives, she recognizes that:
Knowledge creation is a complex process, so perhaps alternative measures of creativity and productivity should be included in scientific metrics, such as the filing of patents, the creation of prototypes and even the production of YouTube videos. Many of these are more up-to-date measures of activity than citations. Knowledge transmission differs from field to field: physicists more commonly use preprint servers; computer scientists rely on working papers; others favour conference talks or books. Perhaps publications in these different media should be weighted differently in different fields.
Of course, a major factor in hiring and especially in P&T decisions will continue to be grant funding, which in turn requires a solid publication record, no matter what productivity metric is used. One wonders if and when the US will be forced to adopt measures (possibly serving as a different sort of new metric) recently implemented in the UK to curb grant submissions by “repeatedly unsuccessful” PIs. Imagine how fun that discussion would be with your Chair come evaluation time….