
Push to quantify social impact of science goes global

Julia Lane oversaw the introduction of the STAR METRICS program in the US. vr.se

Australia is preparing to join a worldwide push to map the wider social returns on investments in science.

This week Professor Julia Lane, who developed and led the US National Science Foundation's Science of Science & Innovation Policy program, visited Canberra to discuss her experience in charge of the STAR METRICS project to document the outcomes of science research for the American public.

Here she explains how STAR METRICS began, how far it has developed, and what other countries are doing to introduce more science to their science policy.

STAR METRICS is one of the first comprehensive attempts to evaluate the impact of government-funded scientific research beyond the world of peer-reviewed publications. How did it start and how far has it evolved?

I’d say “evaluating impact” is putting it a little strongly. What we’re doing is building a data infrastructure. But to answer your question, it all started with Jack Marburger, who was the science advisor to then President George W. Bush. He was the equivalent of [Australia’s chief scientist] Ian Chubb, I guess. In 2005 he made a major speech to the American Association for the Advancement of Science in which he pointed out that there was no science to science policy. He said that countries, and the US in particular, are making massive investments in science, and they really don’t know what the results are.

He said there was no empirical basis to which he could point, the way there was in other parts of government. In health care we had some sense of how individuals worked their way through to the eventual outputs and outcomes. In education and in the workforce, too. But we didn’t have a science to our science policy. It was all driven by anecdote. It was a pretty strong statement, because he said we basically don’t know what we’re doing.

So as a result of that, two things were set in place. One was that the White House formed an inter-agency group of the 17 major science agencies, which I ended up co-chairing. And the National Science Foundation (NSF) put together an investigatory research program, called the Science of Science and Innovation Policy (SciSIP).

So the inter-agency group did a roadmap to figure out whether what Jack had said was right. They looked at the literature and the practice of the 17 agencies, and they came out with a roadmap in 2008. That roadmap said, in strong language for a White House report, that the data infrastructure was inadequate for decision making.

At the same time, at NSF, the researchers were looking at science policy, which was a weak and marginalised field - the challenge was to get good researchers to ask the big policy questions, such as: What are we investing in? What are the emerging trends? What are the results of science investments? The researchers were very clear that the biggest impediment was the lack of a data infrastructure. So this was coming both from the Federal Government and the research community.

You can see it in other countries as well. What is essentially happening is that countries trying to look at the results of science have ended up bean counting, and doing it in a very manual and burdensome way - counting the number of academic publications. How on earth anyone thinks that is the way you track the results of science is beyond me.

So things were pretty rotten in the state of science policy.

Scientists have mapped the human genome. How hard can it be to map the wider social and economic value of their own research? Flickr/SLU Madrid Campus

In 2009, when Obama came in, there was a stimulus package. And the science agencies went up on the Hill and argued that there needed to be investments made in science because it would generate economic growth and jobs. The challenge was that they had no systematic way of providing evidence for that claim. And that’s how STAR METRICS evolved.

First of all, the universities and the research communities said, “Hey wait a minute, we don’t just want to be defined by economic growth. That’s not what science is about. We want to talk about the science outputs, the social benefits, the workforce benefits.”

So the first part was: can we document how many people are supported by science funding, and what they are doing? And then secondly, we want to tell a bunch of stories, and build a common empirical data infrastructure so that the community can tell that story.

For many years, the impact of scientific research has been measured by citations and the impact factors of journals in which the research was published. But as you said, STAR METRICS also takes into account social, workforce and economic indicators in an empirical way. Can you give an example of exactly how this works? In other words, how can you empirically measure the social impact of a scientific invention?

That’s obviously a really key question, and that is precisely what the NSF program is developing answers to. About $US60 million has been spent getting the research community to answer questions like that. And let me make it clear - that’s a deep and complex question, and I’m not even going to pretend that we are close to answering it, nor do I think we’ll ever answer it. There’ll be an ongoing set of answers as science keeps evolving.

But we have been showing things with the data that’s been built. If you look at the SciSIP website you can see how people like Lynne Zucker and Michael Darby are starting to show the economic impacts. People like Jeff Furman and Fiona Murray have shown how investments in bio-banks have increased knowledge of, and access to, mice. People like Pierre Azoulay have convincingly documented that funding individual researchers - star researchers, star scientists - can have a higher impact than funding individual projects. You’ve got people like Dan Sarewitz, who’s done work on public value mapping. Jerald Hage, the sociologist at the University of Maryland, has developed different ways of describing the results of NIH investments, in terms of metrics that depend on the type of intervention that’s been done. Fred Block at UC Davis has looked at the results of DOE investments. So that’s too deep a question to answer glibly, but the community is pushing forward and developing the data, models and tools to answer it.

How can scientists measure the returns on innovations such as the World Wide Web, conceived by Tim Berners-Lee in 1989? Flickr/campuspartybrasil

We also have the Application Program Interface (API), which will permit the community to contribute their own insights once it’s been built. One of the principles that we built this on is that the approach should be to build an infrastructure that’s open and transparent. So the API points to the different sources of data that are being used. It points to NSF data, but it will be built out to point to US Department of Agriculture data and Environmental Protection Agency data, obviously while preserving privacy. The notion here is that as the data infrastructure gets built out, it will point to more and more data sets. Then it’s up to the community. If that’s not what the community wants, then let’s make it open so people can add in additional voices.

Think of it as being a bit like the iPhone. When it first came out there were 10 or 15 apps, but Apple made available widgets that enabled the community to write all kinds of apps. So that’s what will happen here, rather than the Federal Government trying to figure out what needs to be done.
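To make the idea concrete, here is a minimal sketch, in Python, of how a researcher might pull records from an open funding-data API of the kind Lane describes. The endpoint URL, query parameters and response fields are hypothetical, invented purely for illustration; they are not the actual STAR METRICS interface.

```python
# A minimal sketch of querying a hypothetical open science-funding API.
# The base URL, parameters and response fields below are invented for
# illustration; they are not the real STAR METRICS endpoints.
import requests

BASE_URL = "https://api.example.gov/awards"  # hypothetical endpoint


def fetch_awards(agency: str, fiscal_year: int, limit: int = 25) -> list:
    """Fetch award records for one agency and fiscal year."""
    response = requests.get(
        BASE_URL,
        params={"agency": agency, "fy": fiscal_year, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()  # fail loudly on HTTP errors
    return response.json()["results"]


if __name__ == "__main__":
    # For example: how many people does each award report as supported?
    for award in fetch_awards("NSF", 2011):
        print(award["award_id"], award.get("people_supported", "n/a"))
```

The point of such an interface is exactly the iPhone analogy above: once the data are exposed openly, it is the community, not the funding agency, that decides what gets built on top of them.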

You advised the EU that case studies and anecdotal evidence are insufficient, and you’ve also called on the EU, US, Brazil, Japan and others to collaborate on measuring impact. Is there a global movement building, and which countries are leading it?

I think so, yes. This is not my problem, it’s our problem. It’s a global science challenge. What worries me is that what you measure is what you get. If you count publications, then you’re going to get a million publications. It’s like in the Stalinist system - when you had a quota system and you had to produce nails by weight, you got one big nail. If you had a quota system of nails by quantity, you got thousands of tiny little nails. In the capitalist system, if you go to the hardware store, you get a whole wide range of nails. I think we have to respond to the taxpayer request that we document the results of science investments, but we have to be able to do it in a way that preserves and fosters the integrity of science. That’s what science agencies ought to be focusing on.

As a global community, if we think hard about these problems, we can solve them. That’s what scientists do. We put a man on the moon. We mapped the human genome. You can’t tell me we can’t solve a problem like this.

The Japanese have established a program to advance the science of science and innovation. The Brazilians were way ahead of us anyway with their Lattes program. We were really impressed with Brazil. I would argue that part of the reason they’ve been so successful in science and technology policy is they’ve been a hell of a lot smarter about the way they’ve done their investments.

Certainly in Europe, participants from the European Commission and the US and Japan met in Bellagio, Italy, at the Rockefeller Foundation Bellagio Centre and issued the Bellagio Statement. So we’re trying, that’s all I can say.

Recently the open access movement has gathered momentum, with a boycott of Elsevier and an announcement by the British government that all British research will eventually be accessible for free. In Australia, our two major funding bodies, the Australian Research Council and the National Health and Medical Research Council, are split, for want of a better word: the former encourages open access publishing, but the latter is preparing to mandate it. Do you have a view on this?

Absolutely. I’m an economist, and I think it’s important to understand that publishers do have a business model and they do contribute value. I want to make clear that it’s not a one-sided issue. At the same time, I think that in order for the results of science investment to have an impact, they have to be made broadly accessible. Now I think there needs to be a carefully thought-through business model that provides enough incentives for all the key parties to make that happen.

Open access is a critical part of that, but it’s not necessarily the same thing as free access. Absolutely the current model is broken. You can’t have an arrangement where the information is locked away, as it currently is. But building an open access model which has the right incentives in place is critical.
