With companies and governments now amassing an unprecedented amount of highly sensitive data, they clearly have to be more vigilant about how they use it and how they protect it. Businesses and researchers alike are increasingly aware that the security and privacy of their data and users must be safeguarded, and that they are accountable for the repercussions of any breaches. OpenAI, for example, states that its mission is to build safe AI and ensure AI’s benefits are as widely and evenly distributed as possible, advancing digital intelligence in the way that is most likely to benefit humanity as a whole (OpenAI, 2018). Additionally, Google has stated that it ‘will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.’ At the Deep Learning Summit in London this September 20-21, Andrea Renda from the Centre for European Policy Studies will join the Regulation and Global Policy panel, and we caught up with Andrea in advance of the summit to hear a bit more about policy in AI.

Give me a brief overview of your background - what came first, policy or AI?

Personally speaking, AI came first, when I was a kid and developed a passion for sci-fi books with great insights by authors such as Isaac Asimov or Philip K. Dick. But professionally, policy came long before AI: as an expert in economic analysis of law, I started working on assessing the impacts of policy almost two decades ago. Then I started working on competition law and regulation in high-tech markets, already during the years of the US Microsoft antitrust case. The rest came naturally: I grew up together with that industry and expanded my research interests into the fields the Internet was gradually permeating.

Tell me about your role and current work at the Centre for European Policy Studies

In Brussels, I work at CEPS as a Senior Fellow in charge of Global Governance, Regulation, Innovation and the Digital Economy. But half of my time I am in Bruges, at the College of Europe, where I hold the Chair in Digital Innovation. In both settings, I draw on various social sciences to the benefit of policy-oriented research. At CEPS, I also run expert groups on new technologies such as blockchain and AI. I lead a small team of very competent researchers, some older, some younger than me, and we have only one agenda: to increase the quality of public policy in the EU, including on AI.

There’s a large amount of data used in AI and deep learning - what policies are currently in place to help ensure privacy and security are maintained?

The big pieces of policy available in Europe are the GDPR, which sets a global standard on data protection; the e-Privacy Directive, which looks at data protection online; the Network and Information Security Directive, which set up a large web of breach notifications, increasing the amount of information sharing on security in the EU; and the upcoming Cybersecurity Act, which should lead to a certification scheme for cyber-resilience in Europe. Are they enough? Frankly, I believe we need a central agency for cybersecurity at the EU level, and rules for privacy by design, which could come from the forthcoming Guidelines on the Ethical Development of AI, currently under preparation.

How can these technology and policy solutions help build a safer future with AI?

Technology is an “enabler”, and can possibly lead to both positive and negative results. It can lead to better privacy protection, but also to Big Brother. It can help cyberattackers, but also cyber defence. It is a pity that hackers and state-sponsored botnets typically take up new technologies faster than governments and companies do. We need to change that; otherwise, AI will make the world less safe. We need to build a way of interacting with technology that is more mature than the current one: we still look at AI as something that is done to us, rather than as a way to augment our capabilities and massively improve quality of life. Also, we are nowhere in the debate on how to link AI to sustainable development, and in particular to the reduction of inequality and poverty: if we don’t address this problem, AI may just exacerbate the already worrying trend towards eroding social cohesion. Needless to say, the same applies to fake news and the democratic debate.

In this context, the EU would like to become a global “norm leader” in AI, but it needs to do the right things, at the right time, in the right sequence. Setting principles alone is not going to be very useful: they have to be promoted through policymaking, procurement, trade agreements, research funding and industrial policy. I don’t know if the current state of the EU is fit for this purpose. Sovereignism and protectionism, together with weak governance overall, are going to hamper the EU in its attempt to lead, or at least compete, in this space.

How can we ensure AI is used for good across multiple industries?

Governments have to put limits on AI. It’s always been like this with dual-use technologies. Nuclear energy technology can do both very good and very bad things, so we regulate to ban the bad ones; the same applies to chemical plants, which can produce weapons that are internationally banned; and the same should happen with AI and autonomous weapons. Other than this, in most industries AI will develop without the need for heavy state promotion: but robust competition rules, data sharing and portability obligations, and policies oriented towards the diffusion and uptake of AI-enabled technologies (e.g. on skills) are important ingredients in this mix. If enforceable and meaningful, guidelines and standards for ethical AI can also help, especially if backed by market incentives (e.g. large governments procuring only ethically certified AI).

What is privacy engineering and why does it matter?

It is an emerging discipline that looks at possible ways to embed significant levels of privacy protection in processes, products, and organisations, in a context in which governing information flows is extremely difficult. It can be seen as an evolution of the “privacy by design” concept, which had a narrower scope; and it has a strong link to risk management practices, with a specific focus on privacy and confidentiality protection. Privacy engineering matters since only technology (better if backed by legal rules) will be able to address the issue of quickly spreading personally identifiable information, as well as the problem of giving end users control over their data. Technical solutions such as zero-knowledge proofs and practical private computation can lead to very important breakthroughs in reconciling privacy protection with the promise of big data and data-hungry AI. In the future, AI may also become less data-hungry, but this is difficult to predict at this stage, and may depend on how much less demanding technologies are promoted and encouraged by legislation and R&D spending.
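To make the idea of privacy engineering a little more concrete, here is a minimal illustrative sketch (not part of the interview, and only one of many possible techniques) of the Laplace mechanism from differential privacy in Python: it releases an aggregate statistic while mathematically limiting what can be inferred about any individual record. The function name `private_count` and the chosen epsilon value are hypothetical, for illustration only.

```python
import numpy as np

def private_count(values, epsilon: float = 0.5) -> float:
    """Return a differentially private count of `values`.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so adding Laplace noise with
    scale 1/epsilon satisfies epsilon-differential privacy.
    """
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: publish how many users opted in, without
# letting the published figure pin down any single individual.
opted_in = ["alice", "bob", "carol", "dave"]
print(private_count(opted_in, epsilon=0.5))
```

Smaller values of epsilon add more noise and therefore give a stronger privacy guarantee, at the cost of a less accurate published statistic.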

What’s next for you at the Centre for European Policy Studies?

I am drafting the report of the CEPS Task Force on AI, an exciting multi-stakeholder group of more than 50 people. This report is meant to contribute to the AI policy debate, especially in Europe, where a High-Level Group on AI has recently been set up (of which I am a member) to draft Ethical Guidelines and Policy Recommendations. At the end of the year, I will start a new, extremely fascinating project, TRIGGER, recently funded by the EU, which will touch upon complex decisions in global governance, including emerging technologies such as AI. Lots of good prospects for my research, and I am very committed to continuing this stream of work.

What are you most looking forward to about the RE•WORK Summit?

I am looking for a non-conventional, courageous, even eccentric debate on how we can be positively disruptive in AI, and on what governments can do to help this process. There are countless conferences on AI every week around the globe: I chose the RE•WORK Summit in the hope that it will provide me with a fresh view on what is missing, even more than what is present, in the current debate on AI. I am sure it will be a great event!