
Dr. Reuven Shnaps, Chief Analytics Officer at Earnix – Interview Series

April 11, 2021

Dr. Reuven Shnaps is the Chief Analytics Officer at Earnix, a leading provider of mission-critical systems for global insurers and banks.

What initially attracted you to data science and AI?

I have always had a fascination with math, data, and their potential to solve business challenges.  Throughout my academic studies and career, I have sought out opportunities to learn about statistics, economics, and how to apply these fields to understand consumer behavior. I admire data scientists, modern statisticians and econometricians, who have the unique ability to analyze vast amounts of data and address real-world business problems. I have dedicated my career to data science and combining traditional statistical methodologies, emerging technologies, new machine learning (ML) algorithms, and the latest artificial intelligence (AI) applications to create business solutions that deliver long-term value for our customers.

What does Earnix do?

Earnix is a global provider of software solutions that empower insurers and banks to deliver faster, smarter, and safer rates, prices, and personalized product offerings to consumers. Our system is powered by AI and ML, and includes a wide array of analytical modeling tools, applications, and advanced algorithms. Recently, Earnix made the list of “Insurtechs to watch in 2021” in the U.S. and was recognized as a market leader in predictive analytics by CB Insights, an analysis and research company in the technology sector. Earnix leverages innovative technology to help insurers and banks meet consumer needs in real time.

Earnix recently wrote an article for us on the importance of explainability in AI. How important do you believe explainability in AI is?

Explainability is a trending topic in AI and data analytics. It affects companies across industries, whether they use AI or not.

Most companies experience a trade-off between their level of control over AI and its effectiveness. For businesses to trust and adopt “black box” AI, there needs to be a mechanism that gives experts and stakeholders the ability to interpret complex AI decision-making processes and ensure adherence to regulatory demands. Consumers, too, can benefit by being able to understand, and potentially act on, the key drivers behind pricing, credit, or underwriting decisions. Explainability is the bridge that makes complicated AI more understandable and transparent. With the ability to translate advanced ML models, analytics professionals do not have to give up more advanced algorithms because of a lack of interpretability. Explainability minimizes the trade-off between control and value while maximizing the benefits of AI.
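To make this concrete, here is a minimal sketch of one common explainability technique, permutation importance, applied to a black-box model. The feature names and the synthetic pricing target are purely illustrative assumptions, not Earnix's actual system or data: the idea is simply that shuffling one feature at a time and measuring the drop in model accuracy reveals which inputs drive a prediction, without needing to open up the model internals.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical example: a synthetic pricing dataset where the target
# is driven mostly by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Train an opaque ("black box") model.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure how much
# the model's score degrades, attributing influence to each input.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Illustrative feature names (assumed for the sketch).
for name, imp in zip(["driver_age", "vehicle_value", "region_code"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

An analyst or regulator reading this output can see which factors dominate a rate or underwriting decision, which is exactly the kind of transparency the "bridge" described above provides.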
