As AI continues to progress and businesses across the globe benefit from its capabilities, it’s important to ensure that the technology is harnessed for good, to create a better, fairer society. AI systems already outperform humans at certain tasks such as image recognition, data analysis and problem-solving. These advances raise a wealth of ethical questions surrounding biases that could appear in the data, security issues, and the potential consequences if systems are hacked or used irresponsibly. There are several ‘guidelines’ for ethical AI practice, covering areas such as how data is handled and the processes developers should follow when creating a product, but there are still grey areas which are a cause for concern.
At the Deep Learning for Robotics Summit and the AI in Industrial Automation Summit in San Francisco this June 28 - 29, we will be hosting panel discussions and breakout sessions focusing on these issues. David Gunkel, an award-winning educator and scholar specializing in the ethics of new and emerging technology, will be joining us on the 'AI in Social Responsibility Panel' and the 'Ethics Panel'. We're really excited to hear David's opinions on these topics and how they're impacting the industry, so we found out some more about his work in advance of the summits.
What’s your background, and how did you begin your work in ethics and AI? What came first?
I am a philosopher by training, and I practice a particular brand of philosophy that is often called “the philosophy of technology.” I am, therefore, principally concerned with unearthing and evaluating the underlying ideas and concepts that are operating behind the scenes of technological innovations—what we might call the “operating systems” of tech innovation and development. But this does not mean that I am some ivory tower egghead who spends his days staring at his navel and thinking “deep thoughts.” I have been a hands-on thinker from the beginning. I supported myself through graduate school by developing interactive media applications for an international architect-engineering group in Chicago. I teach web and app development as part of my standard teaching load at Northern Illinois University. And I am proficient with several languages necessary for developing neural networks and machine learning systems. In my mind, what is most needed right now are philosophers who are not afraid to get their hands dirty by writing code and making things, and developers who are able to formulate and pursue the important and difficult philosophical questions. I therefore try to occupy that liminal zone situated in between what C. P. Snow called the “two cultures.”
Tell us a bit more about your current work.
As a philosopher, I was (and remain) very interested in the exclusions that are an inescapable part of any moral/legal system. One of the enduring concerns of ethics is distinguishing between “who” is deserving of moral/legal consideration and “what” is not. Basically all moral/legal systems need to define who counts as a legitimate subject and what does not. Initially, who counted typically meant other (white) men. Everything else was considered his property. Fortunately for these others, the practice of ethics has developed in such a way that it continually challenges its own restrictions and comes to encompass what had been previously marginalized or left out—women, children, foreigners, animals, and even the environment. Currently, I believe, we stand on the verge of another fundamental challenge to moral thinking. This challenge comes from the autonomous, intelligent machines of our own making, and it puts in question many deep-seated assumptions about who or what constitutes a moral/legal subject. And the way we address and respond to this challenge is going to have a profound effect on how we understand ourselves, our place in the world, and our responsibilities to the other entities encountered here.
What do you think are the main concerns in ethics and AI? Can the problems be avoided? What are the main challenges with ethics in robotics and why do they exist?
Whether we recognize it as such or not, we are in the midst of a robot invasion. The machines are now everywhere and doing virtually everything. We chat with them online, we play with them in digital games, we collaborate with them at work, and we rely on their capabilities to manage many aspects of our increasingly complex data-driven lives. Consequently, the “robot invasion” is not something that will transpire as we have imagined it in our science fiction, with a marauding army of evil-minded androids descending from the heavens. It is an already occurring event, with machines of various configurations and capabilities coming to take up positions in our world through a slow but steady incursion. It looks less like Battlestar Galactica and more like the Fall of Rome. As these various mechanisms take up increasingly influential positions in contemporary culture—positions where they are not necessarily just tools or instruments of human action but a kind of interactive social entity in their own right—we will need to ask ourselves some rather interesting but difficult questions. At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates? When, if ever, would it make sense to say, “It’s the robot’s fault”? Conversely, when might a robot, an intelligent artifact, or other socially interactive mechanism be due some level of social standing or respect? When, in other words, would it no longer be considered nonsense to inquire about the rights of robots? Although considerable effort has already been expended on the question of AI, robots, and responsibility, the other question, the question of rights and legal status, remains conspicuously absent or at least marginalized. In fact, for most people, “the notion of robots having rights is unthinkable,” as David Levy has asserted. For this reason, my work mainly focuses on this other question: the question concerning the moral and legal standing of AI and robots. My newest book, Robot Rights (MIT Press, 2018), grapples directly with these matters.
How do you think we can ensure AI in robotics is used for a positive impact?
Answers to this question typically target the technological artefact and address its design, development, and deployment. But there is another aspect and artefact that needs to be considered and dealt with here, and that is law and ethics. Our moral/legal systems (for better or worse) operationalize a rather restrictive ontology that divides the world into one of two kinds of entities: persons and property. In the face of advancements in AI and robotics, the question that will need to be asked and answered is this: What are robots? Are they just property that can be used and even abused without further consideration? Or are they (or should they be considered) a kind of “person” with rights and responsibilities before the law? We are now at a tipping point where things could go one way or the other. The European Union, for instance, has recently been asked to consider whether robots should be legally defined as mere tools to be utilized and regulated like other technologies or as “electronic persons” for the sake of dealing with the matter of liability and taxation. The problem is multifaceted. On the one hand, we need to come up with ways to fit emerging technology to existing legal and moral categories and terminology. On the other hand, we will need to hold open the possibility that these entities might not fit either category and therefore will require some new third alternative that does not (at least at this time) even have a name. So the problem is this: Our technologies advance at something approaching light speed, while our laws slowly evolve at pen-and-paper speed. Ensuring positive impact means figuring out a way to contend with this difference.
There’s so much discussion around the privacy and security issues that come along with the application of AI systems. Do you think this is a real concern, and what should we do to make sure systems are safe?
Privacy is definitely a concern, and now it is on everyone’s radar as a result of what happened with Facebook and Cambridge Analytica. But addressing this matter effectively will, I believe, require a coordinated effort on three fronts. First, service providers and manufacturers of devices need to compose and operate with clearly written Terms of Service (ToS) and End User Licensing Agreements (EULAs) that are transparent about what kind of data is collected, why it is harvested in the first place, and how it is and/or can be used. Unfortunately, many of these documents suck, as Senator Kennedy of Louisiana told Mark Zuckerberg during the recent hearings before the US Congress. They are poorly written, inconsistently applied, and difficult to read without a law degree. Second, users need to know and fully appreciate what they are getting into. Too many of us simply ignore the ToS/EULA, click “agree,” and then share our information without a second thought as to what is being given away, at what price, and with what consequences. There is a pressing need for what we used to call “media literacy,” and this fundamental training—essentially the skills and knowledge necessary to thrive in a world of increasing machinic intervention and involvement—needs to begin at an early age. Finally, mediating between users and providers there must be an outside third party that can ensure a level playing field and redress existing imbalances in power. This has typically been the purview of governments, and some form of regulation will be necessary to ensure that the rules of the privacy game are and remain fair, equitable, and just.
Do we need a chief ethics officer? Do we need a ‘common standard’ for AI ethics or does that depend on individual points of view?
Yes and no. Appointing a “chief ethics officer” may be the right step but in the wrong direction. It risks consolidating too much power and responsibility in the hands of one individual or office. What is needed is not a one-stop-shop “ethics czar” but an institutional commitment to ethics across the enterprise, overseen and administered by an interdisciplinary group of experts, like the institutional review boards (IRBs) that have been in operation at universities, hospitals, and research institutions for decades. The moral/legal opportunities and challenges that arise in the wake of emerging technology are complex and multifaceted. We have a better chance of identifying potential problems, devising workable solutions, and anticipating adverse side-effects when there are diverse perspectives and approaches in the mix. For this reason, a “common standard,” although a tempting solution, is less valuable and viable than productive differences and conflicting viewpoints. There is a good reason that the Supreme Court of the United States and the high courts of many other nations across the globe consist of panels of judges coming from diverse backgrounds and often espousing conflicting ideologies. There is no one right way to address and resolve these important questions. Dialogue, debate, and even conflict are a necessary part of the process.
Join RE•WORK at the Deep Learning for Robotics Summit and the AI in Industrial Automation Summit in San Francisco this June 28 - 29 to learn more from David and other leading experts.