
Responsible Innovation in a Messy World: An Interview with Jack Stilgoe

Science and technology are transforming both our societies and the environment, for better and for worse. The increased focus on societal and environmental challenges has led to a new debate on what has been called responsible research and innovation. Per Koch of Forskningspolitikk talked to Dr. Jack Stilgoe, Senior Lecturer in Social Studies of Science at University College London, about the role of research and innovation in society, and what this means for policy.

Per Koch, Forskningspolitikk

Stilgoe is an international expert on science policy research and science policy practice, with a special interest in the governance of emerging technologies.

We started our conversation with some reflections on the social contract between research and innovation on the one hand and society on the other. I started out by presenting two simplistic versions of this contract: one being «Give us freedom and we will give you good stuff» and the other «Give us freedom and we will give you the truth».

I would like to start with a strange question: Have you seen the TV series Westworld? I am asking because popular literature and entertainment might sometimes tell us something essential about the world. In the Westworld theme park, the main problem is not the robots, or the artificial intelligence (A.I.), but the cruelty of the human beings abusing the robots. The robots are, in some respects, more human than the humans. It is a story about us, as human beings, relating to technology. Is the need for responsible research and innovation something we can address in a rational and meaningful manner, or have we created some kind of socio-technological system that has a life of its own?

The debate you are getting at is an old debate about what you may call autonomous technology. There was a very interesting book by a guy called Kevin Kelly, one of the founders of Wired magazine – interesting because it was so wrong. It was called What Technology Wants. It was all about technology as a living, breathing, autonomous system.

It stated that technology has desires of its own, and attempts by human beings to interrupt those desires were going to fail. Technology was equated with progress, and you can’t argue against progress, so therefore the job of humans was to adapt to this system. It was a sort of naked exposition of technological determinism in which technology is seen as the driving force of history.

Jack Stilgoe presenting the concept of responsible research and innovation at a seminar at the Research Council of Norway. (Photo: P. Koch)

In the acknowledgements to that book, Kevin Kelly acknowledges the work of philosopher Langdon Winner, who wrote a book called Autonomous Technology in 1977. This is an amazing book, which talks about Frankenstein, and the ways in which society has worried about technological change.

But Langdon Winner is not making the point that technology is inevitably autonomous and all we have to do is adapt to it. He is asking the question: «How does it come to be that technology appears to be out of control, and what does that mean for how we can control technology?»

I believe that technologies are powerful, in that they give some humans power. But that power isn’t universal. It allows some humans to exert power over others. With that power should come responsibility, but that responsibility doesn’t often get appreciated.

Science fiction is a form of engagement that allows you to question whether that future is a desirable one and think about other alternative futures. Programmes like Westworld allow us to ask questions about the controllability of technology.

I would say that technologies are out of control only if we choose them to be out of control. This is largely an American story where technology is unfettered, which means that powerful people are in control of technology.

But if you go back to the social contract again: «Give us freedom and we will give you good stuff.» Many university scientists love that paradigm because it gives them freedom to do whatever they want. Or at least within certain frameworks. And for business people the parallel would be the freedom of the market. You let the market decide.

You say that we can actually control some of this. But the counterargument would be that this world is so complex; there are so many dimensions to it all, and you can definitely not predict the future. So you might as well go with the flow. The Chinese will do it or the South Koreans will do it, so you might as well do it because you can’t stop it.

This also makes it easier to be a policymaker. You report on socioeconomic effects and that’s it. Reality is simply too messy. 

Yes, it is messy but, you know, it’s no more or less messy than any other form of social and political activity. So when you say go with the flow, you have to presume that there is a technologically determinist starting point for that flow. I would say, as a social scientist: «Well, whose flow?» Who is saying that this is the direction of travel? You just need to think about some alternatives that have technological dimensions, like, for example: How are we going to generate our energy in the future?

Now, going with the flow if you’re the oil industry means that we’ll just carry on extracting and burning, and that’s the inevitable trajectory. Or you could say: What might the alternative futures be and how might we control technology in that direction? So how might we develop renewables more quickly and more sustainably? So that’s a different flow. So you already start to see that there are choices to be made; that there are multiple futures and multiple pathways. And what looks inevitable is actually a set of choices.

I mean in part we trick ourselves because – in looking back at the history of technology – it’s very easy to see a linear trajectory of progress because we forget all the roads not taken. We forget all the mistakes. So the history of technology looks linear and it looks inevitable. So we presume that there is just a longing that’s taking us forward into the future but it’s actually because we’ve ignored all the choices that were made in the past.

And it’s one reason why historians of technology are really valuable here, because they allow those choices to become visible again. So when a policymaker says we should just go with the flow, that we should adapt: «Everybody else is doing A.I. Therefore we have to do artificial intelligence as well!»

That’s basically a lack of imagination, because it’s a failure to bother to see what the alternatives might be and to ask: What does a responsible approach to A.I. look like? What does a desirable future involving A.I. look like? What does A.I. look like if we allow different people to control its trajectory, rather than just the three or four companies who control A.I. at the moment?

It’s an extraordinary concentration of power. If you’re a policymaker, one of the questions you should be asking is how we can diversify that system, in the same way as we’d want to diversify, you know, media or any other form of uncompetitive market. There should be some right-wing arguments for pluralizing that debate as well.

Android from Westworld. Programmes like Westworld allow us to ask questions about the controllability of technology. (Photo: John P. Johnson/HBO)

And in this respect you think that even the nation state can make a difference with its policies, rules and regulations?

There are examples of nation states making a difference.

I have mentioned the British approach to the regulation of in vitro fertilization. Britain decided to take a different approach in part because the first IVF baby was born in Britain. It said no, we need to think about this.

This is more than just a set of clinical interventions. It’s also a deep ethical debate, and we need to have that ethical debate. It’s democratically problematic. There are questions of who should own it and who should control it. And we need to talk about those things.

So nation states can be hugely influential in that regard.

I was involved in an OECD project looking at science and technology collaboration to meet global challenges. One of the huge challenges for that project was to imagine types of international collaboration that actually could influence the direction of research and technology. One obvious example would be to say: Well, the EU has a role here. But you’re pretty critical of what the EU is doing in this respect.

Well, I think everybody that’s close to the EU is critical of the EU, because, as with any other form of government, it needs scrutiny. I support the European project. But I see that the EU has a big problem because of its detachment from its citizens. It has a sort of double democratic deficit, which is an issue.

In terms of criticising the EU, I think it urgently needs to find ways to strengthen the connection between science and technology (which is becoming a more important part of the European project) and European citizens. In the past this was done through a sort of collaboration with civil society, organised through the SwafS (Science with and for Society) programme that I’ve been involved with.

Now, because of the proposal to get rid of that stuff, I worry that European science and innovation is going to become more technocratically governed, and not more democratic, which seems to be going in the wrong direction.

So even if the EU is going to change its policy towards global challenges, which they have in some respects …

Well, they say they have. I mean, there’s a real question of whether the money will follow the priorities. A lot of research funders use grand challenges as a way to justify what they do rather than as a way to direct what they do.

I think the open question with the European Commission is, as it takes grand challenges and the possibility of mission-driven innovation seriously, what actually changes in terms of the choices they make about the science and technology that they fund. That’s the question. Not just how they talk about the stuff that they’re doing and how important it is for sustainability or any of these other big grand challenges.

Because in the past the action has not followed the words, for national governments or for the European Commission. That’s what I’m looking forward to seeing: whether they’re genuinely interested in delivering on it.

A lot of people are interested in delivering on global challenges in Norway as well, but they face problems, partly due to the institutional arrangements, the way you measure impact, the existence of complex «wicked problems» and so on. How can we carry out such learning processes, policy-making processes, processes where scientists and society interact in practice? It’s not easy.

No, it’s not easy. And if it were easy we would have done it. If you presume that these things are easy, you can get into a style of thinking that you might call «technological fix» thinking, which presumes that problems can be sorted out cleanly and easily.

So one of my interests in the self-driving cars debate is the sense that this technology will solve all sorts of problems that have proven intractable, problems to do with congestion, sustainability and road safety.

These problems will be solved because we’ll get robots to do the driving instead of humans. But those problems are more or less wicked problems, in that different people disagree on causes and symptoms. Different people disagree on whether or not those things are solvable. And so the analysis from wicked problems would be that they demand different forms of collaboration: rather than saying these can be solved with easy interventions, it says there’s a process of collectively tackling these problems together, which means that we need to rethink who’s involved in those sorts of things.

So rather than presuming that one group should be given power to come up with a solution, we’re instead saying: No, there’s a process of collective learning, of understanding the nature of those problems, negotiating the nature of those problems and tackling them together.

So for a research council, for example, that means that you need to do something genuinely interdisciplinary, and I think a lot of research councils have got that message already. They’ve understood it. If you look at, for example, how research councils in developed countries tackle an issue like food security, there are more joined-up programmes tackling food security as a genuine interdisciplinary problem. Climate is now regarded as a genuinely interdisciplinary problem, rather than just a matter of understanding it using physical models.

The next time something else comes along, something like A.I. presenting itself as a solution to various problems, we need to relearn some of those lessons about wicked problems and realize that no, if something seems too good to be true, it probably is.

Go to part 2 of this interview with Jack Stilgoe.

Main photo: The androids in the TV series Westworld represent the threat of runaway innovation. Here is one played by Norwegian actress Ingrid Bolsø Berdal. (Photo: John P. Johnson/HBO)