Ethical AI needs a multidisciplinary approach

Mar 25, 2021



In a report co-sponsored by LG Electronics and Element AI, experts from around the world, each with a different vantage point within the AI ecosystem, were interviewed on the topic of human-centric design, or, as the report puts it, Artificial Intelligence Experience (AIX) design. And as with most discussions about the future of AI, the AIX Exchange delves into the topic of ethics.


However, the report doesn’t attempt to propose guidelines or even stake out a hard position on the topic. Instead, it collects the diverse perspectives of researchers, CEOs, consumer advocates, designers and policymakers to better frame the challenges facing the industry today. For the kind of AI that is finding its way into our homes, offices, cars, and coat pockets, ethical AI can't be seen as merely an optional part of the development process, but as the natural result of a more multidisciplinary approach, with us everyday humans at the heart of its design.


The media has been keen to highlight AI’s failings of late. From the upscaling algorithm PULSE, which turned a pixelated photo of Barack Obama into the face of a white man, to facial recognition systems that claim to predict whether someone is a criminal, AI is often painted as a powerful tool in the hands of children. What isn’t always understood is that it isn’t the technology that is biased, but the humans who build and train it.


As Turing Award-winning AI researcher Yoshua Bengio describes it in his interview: “Human centric means to take into consideration the human aspect of how the tools are going to be used, for what purpose, and what's the consequence for humans who use the tool. And it's important because those tools are becoming more and more powerful. The more powerful the tool is, the more we need to be careful about how to use it.”


The report explores the topic of ethics more thoroughly through a specific consumer lens. It addresses how AI should be developed inclusively, takes into consideration the differing values of individuals and cultures, and raises questions about responsibility for privacy and security. The discussion is organized into five key subthemes:


Inclusivity examines the importance of representation throughout the development and application of AI systems and devices. It looks at diversity, not just in terms of skin tone, but the diversity of thought that comes from a broad range of cultures, beliefs and experiences.


Values looks at the risks of consolidation in the industry, the underlying values of the people who develop and deploy the technology, and the difference between bias built into a system by its programmers and bias learned from end-user interaction.


Governance asks "What is the role of governments when technology’s rate of development outpaces the speed of politics?" How, then, is governance built into the system and managed by all stakeholders, including end-users?


Data Privacy reflects on issues of data ownership and transparency, but also asks what happens when the AI systems that manage our most basic needs are switched off.


Purpose looks at intent and the need for all stakeholders to be able to answer the question "why" when building AI systems and products for end-users. If the technology isn’t answering a human need, then is it just stealing our data?


Throughout the AIX Exchange, the message is that AI must take a practical, human-centric approach that considers what end-users believe is ethical, dangerous, or valuable. But the report also recognizes that we all have a lot to learn - the industry, the AI, and ourselves.


“When it comes to responsible and ethical AI, we are all on a learning journey,” says Sri Shivananda, Senior Vice President and Chief Technology Officer at PayPal. “The industry has a better understanding now of the power of what AI can do to experiences and products. And at the same time, it has just started to experience what can actually go wrong with the process as well.”


“What we are all doing is taking from those first experiences, understanding the obligations that we have to the customers, the obligations we have to communities, and the obligation we have to the whole planet to make sure that we actually put guardrails around what AI can do. And collaborate across the industry to create new standards and best practices around how AI should be implemented and then adhering to that code of ethics.”


Check out the full report at and watch the interviews at