
Can we really trust AI?

Artificial Intelligence (AI) faces a big challenge before the broad masses adopt the new technology, and that challenge is spelled t-r-u-s-t. Research shows that for a new technology to be broadly adopted, trust in it is often as important as the technology itself. An important piece of the puzzle in increasing general confidence in a new technology is improving knowledge of how it works.

Trust and transparency in AI

"No trust, no use" is a common expression in the AI world. It means that if we cannot trust an AI system and are not entirely sure what risks and consequences it may entail, we should be careful, and perhaps even wait to use it until we have more information. In other words, a little hesitation can actually be healthy. A lot of research continues to be done on how the adoption of AI is closely linked to trust and transparency.

Trustworthy AI

Trustworthy AI is a research area within AI that studies the trust aspect. It is based on the idea that AI will reach its full potential only when trust can be created at every stage of its life cycle - from design to development, implementation, and use. In fact, the EU has developed ethical guidelines for Trustworthy AI. Some of these guidelines are:

Human agency and oversight

AI systems should empower people, allow them to make informed decisions, and promote their fundamental rights. At the same time, it must be possible to oversee the AI system, which can be achieved through, among other things, a human-in-the-loop approach.

Technical robustness and safety

AI systems must be resilient and secure. They must have a fallback plan in case something goes wrong. They must also be accurate, reliable, and reproducible in order to minimize and prevent harm, even when it is unintentional.


Transparency

The business models for data, systems, and AI should be transparent, which can be achieved via various solutions that improve traceability. AI systems and their decisions should be explained in a way that is tailored to the recipient. People must also be aware that they are interacting with an AI system and be informed about the system's capabilities and limitations.

Explainable AI

When talking about trustworthy AI, explainable AI is also often mentioned. Explainable AI is a set of processes and methods that help users understand and trust the results of AI and machine learning. As AI models become more advanced, we humans find it harder to understand and interpret how an algorithm arrived at a particular result. AI models and their decisions are therefore sometimes described as "black boxes", a term that refers to the difficulty of understanding their inner workings. To increase transparency, it is important in each individual case to understand how the AI system made a certain decision and which factors were decisive in the decision-making process. Explainable AI can also help developers ensure that a system works as intended and complies with existing regulations.
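One common explainability technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which factors actually drive its decisions. The sketch below is a minimal, self-contained illustration of the idea; the "model", the feature names, and the data are all invented for this example, not taken from any real system.

```python
import random

# Hypothetical "black box" model: we only call predict(), we never look
# inside. (Secretly, it weights income and debt and ignores age.)
def predict(row):
    income, age, debt = row
    return 1 if income - 0.5 * debt > 50 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Drop in accuracy when one feature's values are shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, shuffled)]
    return baseline - accuracy(permuted, labels)

# Toy data whose labels follow the model's true rule
rng = random.Random(42)
rows = [(rng.uniform(0, 100), rng.uniform(18, 80), rng.uniform(0, 60))
        for _ in range(500)]
labels = [1 if income - 0.5 * debt > 50 else 0
          for income, age, debt in rows]

for name, idx in [("income", 0), ("age", 1), ("debt", 2)]:
    print(name, round(permutation_importance(rows, labels, idx), 3))
```

Shuffling "income" destroys accuracy while shuffling "age" changes nothing, exposing which factors the black box actually relies on. Libraries such as scikit-learn and SHAP offer production-grade versions of this kind of analysis.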

Algorithm aversion - when lack of trust gets out of hand

Even though "no trust, no use" is a watchword when it comes to AI, an ingrained lack of trust can sometimes be counterproductive. Studies show, for example, that people prefer to rely on decisions made by humans rather than decisions based on algorithms, even when the algorithm consistently outperforms humans. The phenomenon is called algorithm aversion and involves an irrational skepticism toward algorithmic decisions.

Algorithm aversion can be costly, both for the individual organization and for society at large. A study conducted by KPMG found, for example, that two thirds of business leaders ignore insights from data analysis when they contradict their intuition on strategically important business decisions.

AI is already part of our everyday lives

Although it is not always obvious that AI is working in the background, AI is already a big part of our everyday lives. For example, AI helps filter spam out of your inbox, steer your robotic lawnmower, and unlock your phone with face recognition. By paying attention to everything AI already handles well in our everyday lives, we can also more easily relate to AI and the benefits the new technology brings.

Human-in-the-loop AI and human collaboration

Contrary to what various Hollywood movies suggest, AI is not omniscient and cannot learn new things entirely on its own, without a human involved. In fact, most current applications of AI depend on data being sorted and processed in the right way and into the right format; otherwise there is a great risk that the AI generates the wrong answer. It is therefore crucial to involve a human-in-the-loop, that is, a human who guides the AI system. With a person assigned to train, test, and fine-tune the system, more reliable results are achieved. When AI and humans collaborate via a human-in-the-loop, it becomes easier to maximize the strengths of both the system and your staff.
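A common way to wire up a human-in-the-loop is confidence routing: the model handles the cases it is sure about, and hands the uncertain ones to a person, whose answers can later be fed back as training data. The sketch below is one minimal way to express that pattern; the threshold, function names, and toy spam model are all illustrative assumptions, not any specific product's API.

```python
# Assumed cutoff; in practice this is tuned per use case.
CONFIDENCE_THRESHOLD = 0.8

def classify(model_fn, human_review_fn, item):
    """Route low-confidence predictions to a human reviewer."""
    label, confidence = model_fn(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "auto"
    # Low confidence: hand the case to a person. Their answer can
    # also be logged as new training data for later fine-tuning.
    return human_review_fn(item), "human"

# Stand-ins for a real model and a real reviewer (illustrative only)
def toy_model(item):
    return ("spam", 0.95) if "prize" in item else ("ham", 0.6)

def toy_reviewer(item):
    return "spam" if "click" in item else "ham"

print(classify(toy_model, toy_reviewer, "win a prize now"))  # handled automatically
print(classify(toy_model, toy_reviewer, "click this link"))  # escalated to a human
```

The same shape scales up: swap the toy functions for a real model endpoint and a review queue, and the routing logic stays the same.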

How you can increase trust in AI internally

If you are going to start working with AI, you should also work strategically to increase internal knowledge of and trust in AI. Here are some examples of how you can demystify AI within your organization:

1. Invest in competence development within AI

Increase basic knowledge of AI among your employees, especially the end users on your staff, and clarify how your investment in AI will benefit them. For example, you can bring in experts to talk about AI in a concrete and understandable way. Alternatively, you might invite your employees to take an online course on the basics of AI - or, of course, do both!

2. Context and transparency

If you are already working with AI, you can increase trust internally by being transparent about the AI model's results and putting them in context. Explain the model and highlight the factors that led the AI to its result. When users understand how the AI arrives at its results and the value the technology adds, they are more likely to trust the outcome and be positive about the use of AI within the organization.

3. Regular feedback for increased commitment

Help end users feel more involved by creating a simple feedback process. When users can give feedback on the AI system's results, they also become involved in improving the model's accuracy going forward.

AI is a small part of a larger puzzle

Even though there is a lot of talk about AI, it is good to know that AI is often a small but powerful part of a much bigger solution. Typically, a whole ecosystem of services works together, with AI as one part of the larger whole. By working strategically to create trust in AI internally, you stay one step ahead of your competitors in implementing AI successfully.
