Transparency throughout the AI ecosystem

Apr 1, 2021



To build trust between consumers and the organizations offering AI-enabled products and services, transparency must become a critical lens through which AI developers, policymakers, and end users approach Artificial Intelligence Experience (AIX) design. According to AIX Exchange: The Future of AI and Human Experience, a recent report from LG Electronics and Element AI that examines the challenges facing human-centric AI, we must answer important questions about explainability, purpose, and data management if we are to succeed.


The report brought together 12 of the world’s leading AI experts to discuss the themes of Ethics, Transparency, Public Perception, User Experience, Context, and Relationships. On the theme of Transparency, the report examined five critical areas that will affect not only how consumers adopt AI-enabled products and services, but also how companies and policymakers address the technology’s transformative power.



Explainability

Companies must open up the “black box” to let consumers and regulators know what’s going on under the hood – how decisions are made and in what context.

“Data is the raw fuel on which AI runs. The relationship between a customer and a company is based on the trust they build over time,” says Sri Shivananda, SVP, CTO, PayPal. “When a customer can trust the platform or the company that is delivering experiences based on AI, they begin to implicitly trust the AI behind the experiences that are put in front of them.”

Explainability helps place humans in the decision-making process so they can further refine and optimize their AI experience.



As AI becomes more ubiquitous, organizations must craft outward messaging that outlines the clear benefits of their AI product, explains how it achieves those benefits, and ensures consumers feel part of the process by letting them “teach” the AI to better understand them.

“When it comes to building AI-based experiences for our customers, all of us should think of the trust with the customer as the final line not to cross,” adds Shivananda. “Trust must be demonstrated through everything that a customer sees about the company – the core value system, how we execute, how we treat them when they call us, how we make it right when something goes wrong – as long as it is all centered around the customer.”



Purpose

As AI advances, it will become more ubiquitous while constantly learning to seamlessly add value to a user’s life. It will become critical to be transparent and openly communicate the “purpose” of an AI-enabled product or service, so that consumers can assess whether the AI is “successful” – or whether the assigned purpose is even the right one for them.

“How do we maintain our human rights, but also what I call our right to be human?” asks Dr. Christina Colclough from the Why Not Lab in her interview for the AIX Exchange. “How do we avoid the commodification of people, so they're not just seen as numerous data points and algorithmic influences, but the human you are – with your beauties, your bad sides, your good sides? How do you remain relevant and wanted and prioritized in this very digitalized world?”

Data Privacy

Data privacy and security is one of the most pressing issues in business today. For decades, consumers have struck a bargain with emerging technology companies – we will give up our personal data for free access to your apps or services, and do so happily. But this bargain has come with its own set of risks, as privacy breaches and the mishandling of personal information for profit are a common sight in today’s headlines.

Will consumers simply shrug if their AI-equipped homes are hacked, wreaking havoc on them and their families? Likely not. The prospect of having your “personal space” compromised – be it home, car, or workplace – will act as a serious barrier to adoption.



Reliability

Trust in a company’s offerings – and by extension the company itself – also depends on meeting consumers’ expectations that a product be reliable, intuitive, and simple to operate. The evolution of AI-enabled products and services introduces a new challenge: if AI is to be truly ubiquitous and useful in our lives, it needs massive amounts of personal data and a backend system that accurately pulls together and analyzes that data.

Conversational AI should, in theory, create a more personal 1:1 experience than the way we currently interact with technology. Bad design, minimal transparency, and communication that ignores the user’s demographics, region, and culture make it evident that the organization hasn’t taken adequate time to anticipate its customers’ needs and wants. As a result, consumers will likely turn away, or provide substandard data that limits the AI experience.



For centuries, humans have been the masters of the tools created to make them more productive at work and happier at home. Today, we are living in a moment in history where this relationship is changing. Machines now have the capability of being a much more dynamic part of our lives.

To get there, AI products and systems need more of our personal data than any technology before them. Consumer trust in a company will be key, and that trust will come from strong transparency. At the highest levels, companies will need to have a very honest conversation about what they do, what they don’t do, and why. Consumers will need to understand the philosophy behind a company’s actions, its frame of reference when developing the algorithms it is asking us to trust, and what recourse they have when things go wrong.

An AIX design process that brings developers together with policymakers – and especially end users – is the first step toward greater transparency and long-term trust in the AI industry.

Read the report’s full analysis on Transparency, and watch the expert interviews, in the AIX Exchange report.