Artificial Intelligence is quickly becoming mainstream, ingrained in the fabric of our lives and acting on our behalf: helping us get things done faster and more efficiently, giving us deeper insights, maybe even helping us be happier and healthier. AI is taking on tasks that were traditionally done by humans, from acting as our personal assistants and hiring our next co-worker to driving our cars and assisting with our healthcare. But today’s AI has high IQ and no EQ, no emotional intelligence. We’re forging a new kind of partnership with technology, a new social contract based on mutual trust. In this talk, Dr. el Kaliouby will discuss the five tenets of this new social contract, including how to build AI that has empathy, the ethical considerations of AI, and the importance of guarding against data and algorithmic bias.
A pioneer in artificial emotional intelligence (Emotion AI), Rana el Kaliouby, PhD, is co-founder and CEO of Affectiva, the category-defining MIT spinoff. She is now paving the way for Human Perception AI: software that can detect all things human, from nuanced emotions and complex cognitive states to behaviors, activities and the objects people use. Rana will be presenting at the Deep Learning Summit in Boston this May 23-24, and in advance of the summit we caught up with her to hear more about her journey in AI as well as her current work.
How did you start your work in AI and emotional intelligence?
I got a scholarship to Cambridge University to pursue my PhD. I moved there from Cairo, Egypt, leaving my family behind. I quickly realized that I was spending more time with my computer than with other people, and there were days when I was really homesick. I’d be chatting with my family, but they had no idea how I was feeling -- all of the nuances and richness of my feelings disappeared in cyberspace. Then I had this “a-ha” moment: what would it take to get our technologies and devices to understand us the same way people do?
From that point on, I started imagining what it would take to build an emotionally intelligent machine. I read the book “Affective Computing” by Dr. Rosalind Picard, and eventually went on to meet Roz while doing my post-doc at MIT. Together, we did research to see how this technology -- what we now call “Emotion AI” -- could help people on the autism spectrum. But we quickly realized that there was a lot of commercial interest in the technology, and potential for it in a range of industries, from automotive and advertising to social robotics and more, prompting us to spin out of MIT and co-found Affectiva.
Why do you think it’s important for machines to be able to feel, and where will we see this benefiting the average individual in society?
AI is taking on many new roles in society -- becoming our coworker, serving as a virtual assistant in our homes, operating our cars and more. But today, AI is people-blind. Sure, it can “see” us, but it’s completely unaware of how we’re feeling and how we are really reacting. And as we interact with technology, all of the nuances and richness of our feelings disappear in cyberspace. That’s a big barrier when it comes to building trust and communication -- not only between people and AI, but with other people, too. How can we trust in technology that strips us of what makes us human?
AI needs to be able to understand all things human, and truly relate to us, if we’re going to trust it. This is especially important as AI takes on new roles in society, and our interactions with AI become more relational and personal.
But trust is a two-way street. AI needs to be able to trust people too -- to trust that we will be good co-workers, make the right decisions and use the technology ethically. This is only possible if AI can understand the emotions and expressions that are core to who we are.
In turn, I believe that our interactions with technology -- and ultimately other people -- will be more meaningful, relational, productive and fulfilling.
What are some of the ethical concerns that come along with your work, and how are you ensuring your work is used for social good?
As with any technology, AI is neutral -- it’s what you do with it that counts. Given the highly personal nature of our technology in particular, relating as it does to human emotions and states of being, there is potential for it to be misused. I worry that, without action from us now, human-centric AI could one day be used to manipulate people.
At Affectiva, we’ve considered these ethical issues from day one. They’ve guided the business decisions we’ve made, including which use cases to pursue and which to stay away from, such as surveillance. We also require opt-in and consent for use of our technology. Ethics and diversity have also guided our approach to data collection. We’ve amassed the world’s largest emotion data repository, with more than 7.7 million faces analyzed from 87 countries -- this helps ensure our data is diverse and our AI is not biased. And I’m proud to say our team is incredibly diverse -- diverse in education, cultural upbringing, age and gender -- and that they bring a diversity of perspectives to the table.
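To make the bias point concrete, here is a minimal, hypothetical sketch in Python of one common way to audit a model for algorithmic bias: comparing its accuracy across demographic subgroups of a labeled evaluation set. The function, labels and data here are illustrative assumptions, not Affectiva’s tooling.

```python
# Hypothetical sketch of a per-subgroup fairness check: a large accuracy gap
# between groups flags potential bias. This is NOT Affectiva's actual tooling.
from collections import defaultdict

def accuracy_by_group(examples, predict):
    """examples: iterable of (features, label, group); predict: features -> label."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for features, label, group in examples:
        total[group] += 1
        if predict(features) == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy usage: a dummy model that predicts "smile" whenever intensity > 0.5.
examples = [
    ((0.9,), "smile", "group_a"), ((0.2,), "neutral", "group_a"),
    ((0.7,), "smile", "group_b"), ((0.6,), "neutral", "group_b"),
]
scores = accuracy_by_group(examples, lambda f: "smile" if f[0] > 0.5 else "neutral")
print(scores)  # {'group_a': 1.0, 'group_b': 0.5} -- the gap would warrant investigation
```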
It’s not just about the ethical development and deployment of AI at Affectiva, but across the industry at large. We want to engage in conversations about fair, accountable, ethical and transparent AI, and as such, we are part of organizations like the Partnership on AI that help outline standards for ethical AI. I believe this is a critical responsibility for us as the technology becomes a larger part of society and our lives.
What other industries can benefit from your work?
Automotive is one area where we’re seeing a lot of potential and interest in our technology. Specifically, Human Perception AI can significantly improve road safety and the occupant experience, both in the cars we drive today and in the future of automated mobility. With Human Perception AI, cars can be built with advanced driver state monitoring solutions that detect signs of dangerous driving behavior, such as drowsy or distracted driving. The car can then alert the driver or intervene, as in the sketch below. This is especially important in semi-autonomous cars, as the vehicle needs to be able to determine whether a human driver is ready to take back control of the wheel at a moment’s notice.
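To illustrate the alert-or-intervene control flow described above, here is a minimal, hypothetical sketch in Python. The signal names, thresholds and responses are assumptions for illustration only, not Affectiva’s product or API; a real system would derive these scores from in-cabin camera analysis.

```python
# Hypothetical escalation logic for a driver state monitoring system.
# Scores are assumed to come from an upstream perception model.
from dataclasses import dataclass

@dataclass
class DriverState:
    drowsiness: float   # 0.0 (fully alert) .. 1.0 (asleep)
    distraction: float  # 0.0 (eyes on road) .. 1.0 (fully distracted)

def respond(state: DriverState) -> str:
    """Escalate from a gentle alert to intervention as risk rises."""
    if state.drowsiness > 0.9 or state.distraction > 0.9:
        return "intervene"         # e.g. slow down, pull over safely
    if state.drowsiness > 0.6:
        return "alert_drowsy"      # e.g. audio chime, seat vibration
    if state.distraction > 0.6:
        return "alert_distracted"  # e.g. "eyes on the road" prompt
    return "ok"

print(respond(DriverState(drowsiness=0.7, distraction=0.2)))  # alert_drowsy
```

The same check could gate a semi-autonomous handoff: the vehicle only returns control when `respond` reports "ok".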
As automated mobility comes to fruition, the driving experience will no longer be about humans needing to drive a car. It’ll become all about the occupant experience. With Human Perception AI, cars will be able to optimize the ride based on who’s in the vehicle, what their mood is, how they’re interacting with each other and the in-cabin system, and more, ultimately creating the most comfortable, safe and relaxing ride possible.
Automotive OEMs and Tier 1 suppliers are seeing this potential, and the way mobility is shifting, and we’re having a lot of interesting conversations and partnerships with the automotive industry as a result.
You’re presenting on Humanizing AI at the Deep Learning Summit in Boston. Can you give us a brief overview of this?
In my talk, I plan to discuss the new social contract between people and AI that I mentioned above. There are a few tenets to this so-called social contract: it’s reciprocal, respects data, requires diverse teams, and is ethical. And at its core, it’s built on mutual trust between people and AI.
I’ll leave the details for my talk, but at its core, it’s all about taking a human-centric approach to AI. We can’t forget about the people behind AI, or those impacted by it. Without a human-centric approach, AI will fall short of its potential, so it’s only right that we consider the human before the artificial. This applies to the teams working on AI, the use cases we pursue for AI, the data fueling AI, and the way the technology is deployed -- all of which I will touch on at the Deep Learning Summit. I look forward to seeing how other speakers address these issues, and to what the audience has to say in this ongoing dialogue.
As a female pioneer of emotion recognition, what do you think can be done to encourage more women and diverse backgrounds into AI?
This is an area I’m really passionate about as a female, Muslim-American scientist and entrepreneur. I think it’s really important to start with encouraging girls to pursue STEM at a young age. There are a lot of organizations like littleBits and Girls Who Code that are making amazing strides in getting girls into STEM.
I also believe that it’s up to women in the field to be mentors and advocates for the next generation. We need to help them cultivate networks, and to give them the opportunities and funding that are unfortunately harder to come by for women and people from diverse backgrounds. We also need to support their personal growth, for example through financial literacy and opportunities for public speaking.
I recently wrote an Inc. article on this topic for International Women’s Day; read it if you’d like to see more of my thoughts.
What’s one piece of advice you’d give someone starting out in the field?
I’d tell young people starting out -- and young women in particular -- to trust themselves. Looking back, I realize that this is advice I could have used myself. When I was younger, I felt like I needed to check off every box, or be perfectly qualified, before raising my hand. This was the case a few years ago when I had the opportunity to step into the CEO role at Affectiva. I was my own harshest critic, but once I started to trust myself, it became easier to convince others that I was up for the challenge.
It’s well-documented that women are more likely to hold themselves to extremely high expectations, but I’d encourage them to take risks and believe in themselves.
Where can we keep up to date with your work?
Follow me on LinkedIn, Twitter (@kaliouby), Facebook and Instagram (@ranaelkaliouby), or check out my site: https://go.affectiva.com/rana-el-kaliouby
You can also follow along with what Affectiva is doing on our blog and on Twitter (@Affectiva), or by subscribing to our newsletter.