This article is taken from the white paper Privacy and Security in AI. Download your free copy here. Contributing authors in this extract are: Andrew McStay, Bangor University; Clint Wheelock, Tractica; Jeremy Marvel, NIST.
Privacy and security. Two buzzwords we hear constantly, but why all the concern, and what do we actually mean by these frequently and sometimes loosely used terms?
The recurring suggestion that privacy might be dropped from the human equation misunderstands what privacy is. It is best understood by turning away from screens and recognising that privacy plays a fundamental role in daily interactions: in what is considered taboo, in intimacy, in the confidences we share with others, in how we arrange domestic and other spaces, and in where we store thoughts and things of value.
Privacy is not a thing but an ethical protocol that governs interaction, relationships and behaviour in given situations.
Beyond its centrality in governing human interaction, and relationships with machines and organizations, the ethical components of privacy are also important. Drawing on liberal politics and philosophy of the 17th and 18th centuries, privacy is constructed out of notions such as freedom, consent, autonomy, self-determination, dignity and non-interference from unwanted others.
Privacy is personally, collectively, politically and ethically important.
As artificial intelligence (AI) and machine learning (ML) are increasingly applied to intimate dimensions of human life, they invoke a clear need to respect dignity and see people as ends rather than means. If this guiding principle were borne in mind and put into practice, there would be scope to have the best of AI/ML and less of the dystopian.
Isn’t privacy another word for security?
In the domain of data protection, privacy can be seen as a parental principle to security. While privacy certainly has to do with keeping personal and sensitive information safe and secure (be these hand-written letters, encrypted financial information or a nation’s health data), the ethical building blocks of privacy cause it to be more than the prevention of access by unwanted outsiders.
One way to think about the relationship is that security is about means, while privacy is about ends. For example, encryption is a security practice, but privacy is the right and freedom of people to communicate without fear of interception. Admittedly, privacy- and ethics-by-design blur the privacy/security and ends/means distinction, but the argument that privacy is parental to security is sound.
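To make the means/ends distinction concrete, here is a toy sketch of encryption as a security mechanism, using a minimal XOR one-time pad in Python (illustrative only; real systems rely on vetted cryptographic libraries). The cipher is the means; the freedom to exchange the message without fear of interception is the privacy end it serves.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte.

    With a truly random key as long as the message, used once,
    this is the one-time pad. Applying it twice with the same
    key recovers the original, so encryption and decryption are
    the same operation.
    """
    assert len(key) == len(data), "one-time pad needs a key as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # cryptographically strong random key

ciphertext = xor_cipher(message, key)        # the security "means"
recovered = xor_cipher(ciphertext, key)      # the intended recipient decrypts
assert recovered == message
```

The design point mirrors the paragraph above: nothing in the code speaks of rights or dignity; it merely enforces confidentiality. Whether that confidentiality serves a privacy end depends on who holds the key and on whose behalf.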
Throughout the paper, we refer to the terms security and privacy as follows:
SECURITY: Protecting against individuals or organisations using your personal information against you, e.g. health records, personal finances, social security details
PRIVACY: Individuals' ability to seclude themselves from the public eye.
Is privacy about hiding things away?
Sometimes yes, but mostly no. Privacy should not be equated with seclusion and hiding. People are willing to share the most intimate of details about themselves under the right conditions. This means it isn’t a paradox to say that one can be highly open about one’s life yet enjoy privacy. It’s about respecting norms, conventions and choices, but perhaps foremost, dignity.
This means privacy is dynamic: what people will share is subject to regional, circumstantial and historical change. What doesn’t change is respect for human rights, dignity and agreed principles of data sharing with others (be these friends or organizations).
How are AI and ML altering privacy?
AI will continue to shape human life in profound ways. To give an example, Amazon’s Rekognition gives all sorts of organizations the capability to recognize and analyze objects, people, text, scenes and activities in images and video. Computer vision has scope for social good, but it also invokes seismic privacy questions. Foremost in the case of Amazon’s service are societal questions about the desirability of automated mass visual surveillance and the loss of public anonymity. The significance of this is an undemocratic transfer of power from citizens to governments. While the debate about the balance between liberty and security is old, AI raises concerns about a significant increase in power over society.
Privacy, for whom?
AI applications are increasingly at the heart of modern institutions. As hidden algorithms shape important parts of our lives (loans, health predictions, employment chances, prison sentencing), it has become hard for citizens to challenge faulty decisions. Of course, human decision-makers consciously and unconsciously make bad decisions too, but those decisions are more easily challenged. A key, unenviable challenge for services using AI for decision-making is to explain the terms of a decision and provide scope for meaningful redress. Given that AI will make determinations about life chances and opportunities, this transparency is important.