
Paving the Road Toward Explainable & Responsible AI

Earnix Team

February 10, 2022

Reuven Shnaps, PhD, Chief Analytics Officer at Earnix, reflects on the importance of ethics in AI as the industry continues to grow.

According to a new McKinsey study, “The data-driven enterprise of 2025,” most employees will increasingly use data to optimize nearly every aspect of their work. The year 2025 is just around the corner, and we are already witnessing a surge in the availability of vast and varied new sources of data. To make use of this volume of data, especially unstructured or high-frequency time-series data, organizations need to rely on modern computing technology and advanced ML/AI algorithms.

Unlocking “black box” algorithms

We can already see hundreds of millions, even billions, of people using and benefiting from AI/ML-based technologies and applications in their daily activities: internet searches, navigation, health technology, autonomous vehicles, and assistants like Siri and Alexa, to name a few.

The reliance on such “black box” algorithms also carries risks and raises ethical questions around transparency, “fairness,” and whether these algorithms are used responsibly.

Technologies on their own are neither “evil” nor “good”; their effect depends on how we use them. The new ML algorithms and AI applications are no different in this sense from past technological advancements. The question we need to ask ourselves is how to strike the right balance between the great benefits and potential these technologies and algorithms can yield and the risks that go along with them.

Are “black box” algorithms a problem?

Google, Amazon, Netflix, and many others are prime examples of companies that are highly innovative in their use of advanced ML/AI algorithms. They seem to base and run almost every aspect of their business on them. Netflix, for example, bases personalized movie recommendations, page layouts, and network routing decisions on data analyzed by advanced ML/AI algorithms.

Often the inner logic of these advanced algorithms appears to be a magical “black box.” The mere fact that we don’t understand the drivers behind a certain decision, or how the “magic” happens, is not necessarily a problem. Whether it is depends on the context: the type of application or decision, how it is going to be used, and for what purpose.

For example, in the case of Netflix, what are the consequences of making a wrong movie recommendation? Or what if there is a “bias” in some of the decisions Netflix makes? That is of course an unwanted outcome that should be avoided to begin with, but in this context we can probably be more forgiving and simply do better the next time.

Explainable & responsible AI

Clearly for many industries, and specifically for regulated industries like insurance and banking, the need for governance, transparency, and explainability is key. We are witnessing a surge in the use of AI/ML algorithms to automate manual processes and decisions of great significance to the livelihood of consumers. Patient triage, disease diagnostics, identity authentication, credit decisioning, selection of job applicants, and claims settlement are just a few examples. These and many more represent decisions where the consequence of making the wrong recommendation, or introducing “bias” or “unfair” treatment into the decision process, can have a significant impact on consumers or businesses; one simple way such treatment can be checked for in practice is sketched below.
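
As an illustration, the minimal sketch below computes a demographic parity gap, the difference in approval rates between two groups, on synthetic decisions. The data, the decision rule, and the group labels are all illustrative assumptions, not any real lending or underwriting policy.

```python
# A minimal sketch of one fairness check (demographic parity):
# compare approval rates across a protected group.
# All data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)               # 0/1 protected attribute
approved = rng.random(1000) < (0.5 + 0.1 * group)   # synthetic approval decisions

rate_a = approved[group == 0].mean()
rate_b = approved[group == 1].mean()
parity_gap = abs(rate_a - rate_b)

print(f"approval rate, group A:  {rate_a:.2f}")
print(f"approval rate, group B:  {rate_b:.2f}")
print(f"demographic parity gap:  {parity_gap:.2f}")
```

In practice, regulated businesses often combine simple group-level checks like this with rules of thumb such as the four-fifths rule applied to the ratio of approval rates, alongside more sophisticated fairness metrics.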

This has led to the emergence of new research fields: Explainable and Responsible Artificial Intelligence. Experts are developing tools that enable us to peek inside the black box and unravel at least some of the magic. For businesses to trust and adopt “black box” AI, there needs to be a mechanism that gives stakeholders and business professionals the ability to interpret complex AI decision-making processes and ensure they abide by regulatory demands. Consumers can also benefit, by being able to understand, and possibly influence, the key drivers behind crucial decisions.
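
One widely used tool of this kind is the open-source SHAP library. The minimal sketch below, built on an illustrative model and a public dataset chosen only to keep the example self-contained, shows how SHAP can attribute a single prediction of an opaque ensemble model to its input features.

```python
# A minimal sketch of explaining one "black box" prediction with SHAP.
# The model and dataset are illustrative assumptions.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ensemble model on a public dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer attributes each prediction to individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])  # explain a single case

# Rank the features that drove this particular prediction.
contributions = sorted(
    zip(data.feature_names, shap_values[0]),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, value in contributions[:3]:
    print(f"{name}: {value:+.2f}")
```

Each SHAP value estimates how much a feature pushed this specific prediction above or below the model’s average output, which is exactly the kind of per-decision reasoning a regulator, auditor, or consumer might ask for.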

Explainability is the bridge that makes complicated AI more understandable and transparent. We should not fear these new algorithms and technological advancements, nor try to limit their power, but instead use them in a smart and responsible manner to generate more value for our society.