Risks and opportunities of new data for innovation policy
On Thursday I’ll be in London for an event run by Nesta, the OECD, and the European Commission on “New data for innovation policy: from explorations to impact”. I haven’t received the agenda yet and I don’t know who else is going, so I don’t know what to expect. But I’m looking forward to it.
Before these events I tend to write down what I’m hoping to achieve. The best events are ones where you achieve something completely different to what you expected, but being prepared does help a bit. And since I believe in working in the open, I often publish my expectations (though usually dressed up as something else).
The current state of new data for innovation policy
The first thing I want to find out is what new data exists for innovation policy. I have a decent idea already, thanks to the work of Marion Maisonobe in Paris, Max Nathan and Rebecca Riley at the What Works Centre for Local Economic Growth and City-REDI in Birmingham, excellent open datasets such as Sirène published by the French government, the great work by Microsoft’s Academic Knowledge and LinkedIn teams in Seattle, very interesting new work by the ONS Data Science Campus in Newport, and great work by Nesta in London, including their innovation policy lab Y Lab in Cardiff.
I’m very interested to learn what the OECD, the European Commission, and other attendees are doing, and to fill in the gaps in my picture of what’s happening. And of course I want to understand how the work I’ve been doing in Manchester and Leeds with The Data City fits in, so that we can learn where we need to improve and share where we’re ahead.
What might soon be possible
This leads naturally to my second expectation for the day. I want to learn what isn’t possible now, but might be soon. There are things that my team and I are trying that are proving much harder than we expected, and other things that have proven much easier.
As with almost all innovation, the detail of turning “new data for innovation policy” into working implementations is a long way from the theoretical research.
Why we’re doing this
Last but not least, I’d like to get a sense of why the people working on these problems are doing it.
I have written before about my belief in data as an essential element of sovereignty. It’s why Finland keeps a cannon outside its national statistics office, and why Scotland runs its census separately from the rest of the UK despite that census being nearly identical.
I have also written about my desire to use new data sources as part of “inventing a democratic vocabulary to free us from the reign of experts”. (Before the inevitable pro-expert abuse, I propose the creation of more and more diverse experts, not fewer).
One of my biggest fears about new data for innovation policy is similar to my worries about new data in economic policy analysis. I’ve written before about the role of new data in evaluating policy choices, and why I’m worried that if we set standards too high, we’ll exclude all but the best-funded research, typically done by central organisations and irrelevant to most people.
Many in our society are increasingly worried about the centralisation of the economy, accelerated by our shift from manufacturing industries to knowledge industries. Nesta, for example, held an event this week sharing their fear that “High-tech, creative, ‘knowledge’ businesses drive growth, but most people and places are cut off from the knowledge economy”.
New data in innovation policy has the potential to be hugely decentralising. It should enable local governments and small consultancies within the UK to rival or even surpass the analytic power of central government and national institutions. It should enable small-scale analysis, and experimentation. And indeed we are already doing that in Manchester, Birmingham, Bradford, and Leeds, with some excellent first results.
But all of these possible advantages are at risk from well-intentioned reactions from the centre and from the existing organisations with power.
Every standard or regulation applied to methods based on new data imposes a considerable cost on smaller governments and innovators. Costs that larger governments and existing institutions can easily absorb will kill smaller competitors. I rarely see groups such as Nesta, the OECD, the European Commission, and the UK Government worry about this.
So I want to listen to those people talk about their motivations for working with new data for innovation policy. And I want to understand how they’re balancing:
- The desire for comparability with the need for customised local solutions.
- The desire for rigour and repeatability with the need for low-budget solutions.
- The desire for independent and impartial expertise with the need for local democratic control and the fact that no analysis is independent or impartial.
So what happened?
The event has happened, I have caught up on sleep, and now I can share what happened.
First of all, it was an excellent event, well worth my time.
The talks by national and international organisations at the meeting convinced me that they are seriously investigating new data methods, but that they desperately need help from people like me and the companies I’ve co-founded to actually use them.
Kuansan Wang at Microsoft further convinced me that Microsoft is the biggest power for good in this area. Having grown up with a completely different Microsoft, I find this a fantastic change.
I picked up some very useful new ideas from Pierre-Alexandre Balland at Utrecht and MIT on the importance of place in innovation policy. Specifically, his talk gave me ideas about what politicians and their experts mean when they say “place-based innovation policy”, and how we convert that into arguments and methods that can inform policy. I’ll be back in London on the 5th of April to try to drag UKRI into the future based on some of his ideas.
I got a deeper understanding of Jan Kinne’s work in Germany on “Predicting Innovative Firms Using Web Mining and Deep Learning”. We have already collected most of the data needed to repeat this work in the UK, and I’ll try to do so in the coming months.
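To give a flavour of the web-mining approach: the idea is to treat the text on a firm’s website as a signal of how innovative the firm is. The sketch below is a toy illustration only, not Kinne’s method; the real work crawls firm websites at scale and uses a trained deep-learning classifier, whereas here a simple, hypothetical keyword score stands in for the model.

```python
# Toy sketch of the web-mining idea: score a firm's website text for
# innovation-related language. A real implementation crawls sites and
# applies a trained deep-learning classifier; this keyword score is a
# hypothetical stand-in, for illustration only.
import re
from collections import Counter

# Hypothetical term list; a real model learns its features from data.
INNOVATION_TERMS = {"patent", "prototype", "innovation", "research", "technology"}

def innovation_score(page_text: str) -> float:
    """Fraction of tokens on a firm's web page matching innovation terms."""
    tokens = re.findall(r"[a-z]+", page_text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[term] for term in INNOVATION_TERMS)
    return hits / len(tokens)

# Fabricated example pages, purely for illustration.
pages = {
    "firm_a": "We hold a patent portfolio and run an in-house prototype lab.",
    "firm_b": "Family bakery serving fresh bread since 1952.",
}
scores = {name: innovation_score(text) for name, text in pages.items()}
print(scores["firm_a"] > scores["firm_b"])  # prints True
```

The interesting design question, which the real research addresses and this sketch does not, is how to validate such scores against ground truth, for example survey-based measures of firm innovation.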
The only big negative moment for me was summed up right at the end of the day. A significant part of the audience seemed to truly believe that current political disruption in much of Europe and America is caused by a public rejection of expertise and that the answer to that is to somehow force policy to use expertise more.
I find this dangerously delusional. The experts in a room in London are good, but not especially good. They seem to genuinely believe their own myth of impartiality, and therefore see the public’s rejection of their recommendations as a rejection of expertise rather than a rejection of their bias. Their plans to reconnect with the public amount to doing more of what they already do, and doing it better, rather than the fundamental change and decentralisation of expertise that they could lead.
I remain, back in Manchester and Leeds after the event, even more convinced of the need to invent a democratic vocabulary to free us from the reign of experts. I will keep working on it.
New data and AI give us the first good chance in decades of winning and strengthening public trust in expert-informed policy. That trust will not be won by the current set of experts and their current institutions.