
Elena Levi: “AI should never be the starting point. The problem should be”

2026/03/18 15:48

By 2025–2026, artificial intelligence had ceased to be just a trendy technology and had become a subject of deep disappointment and reassessment. According to a report from Campus Technology, despite the widespread adoption of AI, almost half of organizations face a “trust gap”: they claim full confidence in their systems yet lack the actual ability to explain, justify, and control model outputs, which significantly limits the impact of implementation and yields weaker economic results (campustechnology.com).

Against this backdrop, many AI initiatives fail to deliver the expected value, prompting organizations to reassess their investments and strategies. The focus is shifting from the technology itself to the ability to build products whose automated decisions are understandable, explainable, and reliable in real-world conditions, where mistakes are costly for both users and the business.


Elena Levi, Director of Product at Payoneer, has been building and scaling AI-powered products in high-risk domains for over ten years. Her experience spans combating large-scale ad fraud at AppsFlyer, launching analytical AI tools under privacy constraints, and developing products for financial and marketing decisions where trust and transparency are paramount.

In these systems, the winners are those who understand not just the algorithms but also the decision the product supports, who can explain it to users, and who balance automation with responsibility. That is what the following conversation is about: how to build AI products that earn trust and turn technology into real solutions rather than just a showcase.

“AI is not magic”

You have more than 15 years of experience with data and products, and in recent years with AI solutions in high-risk areas—from anti-fraud at AppsFlyer to predictive analytics after iOS 14. AI is everywhere today, yet many AI products still fail to create real value. Why do you think this happens?

AI should never be the starting point. The problem should be. AI is a wonderful technology, but at the end of the day, it’s just technology. When you build a product, the most important question is: what problem are we actually solving? Only then should we talk about how we will solve it.

What I often see now is that companies start from the wrong point. They say, “Let’s do something with AI,” because it’s trendy or because investors expect it. As a result, instead of starting from a real user pain, they end up with a beautiful idea and then try to attach a problem to it. Usually, this ends with a flashy demo and zero real impact on the user’s life.

Moreover, AI is not magic. Users don’t fully trust it yet, and honestly, they have reasons. If a system is unreliable, opaque, or poorly tested, people simply won’t make real decisions based on it. When AI is treated as a shortcut rather than a responsibility, the product almost always fails.

“Technology can work, but the product may not”

You’ve led products where clients’ budgets depended on them, and a model error could cost a business millions of dollars. How often, in your experience, are AI product failures caused not by technical limitations but by product decisions?

Technology can work, but the product may not. Much more often than is generally acknowledged. In most cases, the technology could have been effective, but the product decisions surrounding it were flawed: a weak or insignificant problem, the wrong user, inflated expectations, sloppy UX, no plan for when the AI fails.

I rarely saw a failure because the “model wasn’t smart enough.” What I constantly saw were failures where the product solved something users didn’t really care about, created risk without enough trust, or didn’t communicate uncertainty. Often, automation was forced where people still needed control. In the end, the technology exists, but there is no value.

You’ve scaled AI products at AppsFlyer and Voyantis—from anti-fraud to predictive analytics under signal loss. What, in your view, distinguishes AI products that users actually adopt from those that remain experiments?

First, an immediately understandable value for the client. The user should instantly know why it matters and what problem it solves.

Second, trust, which is built gradually. Successful products don’t promise magic. They honestly show confidence levels, let users verify decisions, and gradually prove usefulness. It’s like checking the weather forecast: you test it first, then just grab an umbrella without thinking.
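That point about honestly showing confidence levels has a testable counterpart: calibration. If a product displays 90% confidence, the 90% bucket should be right about nine times out of ten. Here is a minimal Python sketch of such a check, assuming you already have predicted probabilities and observed outcomes; all names and numbers are illustrative, not a reference to any product mentioned in this conversation.

import numpy as np

def calibration_report(probs, outcomes, bins=5):
    # Group predictions by stated confidence and compare with observed reality.
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    idx = np.minimum((probs * bins).astype(int), bins - 1)  # bucket per prediction
    for b in range(bins):
        mask = idx == b
        if mask.any():
            print(f"stated ~{probs[mask].mean():.0%}, observed {outcomes[mask].mean():.0%} (n={mask.sum()})")

# Toy example: outcomes drawn at exactly the stated rate, i.e. a well-calibrated model.
rng = np.random.default_rng(1)
p = rng.uniform(0.5, 1.0, 10_000)
y = (rng.uniform(size=10_000) < p).astype(float)
calibration_report(p, y)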

A solid data foundation is also crucial: clean, diverse, constantly monitored data. “Garbage in, garbage out” is still true. Plus responsibility: bias checks, drift monitoring, and understanding the consequences of decisions.
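To make “drift monitoring” concrete, one common check is the population stability index (PSI), which compares a feature’s live distribution against the distribution the model was trained on. Below is a minimal Python sketch, assuming numeric features; the ~0.2 alert threshold is a widespread rule of thumb, not a reference to any specific system discussed here.

import numpy as np

def population_stability_index(expected, actual, bins=10):
    # Bin edges come from the reference (training-time) sample.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_counts, _ = np.histogram(np.clip(expected, edges[0], edges[-1]), edges)
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)
    e_frac = np.clip(e_counts / len(expected), 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# PSI above roughly 0.2 is commonly read as significant drift.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)  # feature at training time
live = rng.normal(0.4, 1.2, 50_000)   # shifted live traffic
print(population_stability_index(train, live))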

And of course, UX. You can have a brilliant model, but if interacting with it scares or overwhelms the user, the product won’t take off. Especially problematic are products where AI is just “sprinkled on top”—random chatbots or automation not integrated into workflows. Users feel it instantly.

“We don’t trust what we don’t understand”

You have worked for years with systems that make decisions instead of or with humans—from anti-fraud algorithms to budget allocation predictions. Why does the “black box” face such resistance both from clients and within companies?

Because we don’t trust what we don’t understand, especially when the risk is real.

I like the example of self-driving cars. Statistically, they’re already safer than humans, yet most of us still get nervous riding in one. Not because of the numbers, but because of the sense of lost control.

The same happens with AI black boxes. When a system gives a recommendation without explanation, questions immediately arise: why, based on what, and what if it’s wrong? Within companies, this intensifies: sales can’t explain it to clients, the product team can’t defend the decision, and leadership doesn’t know who owns the risk. Resistance here is not about technology but about psychology.

You’ve implemented AI in products with high business responsibility. What role does trust play in success or failure?

Trust is the product.

You can have the most accurate model in the world, but if people don’t trust it, it’s useless. At AppsFlyer and Voyantis, clients rarely asked, “Is this 93% or 95% accurate?” They asked, “Can I allocate the budget based on this?”

Trust develops when you are honest about limitations, don’t sell the illusion of absolute confidence, and let users test the system themselves. You can’t impose it; you can only earn it.

“Good AI UX doesn’t dump data, it guides decisions”

You recently served as a judge for a prestigious event, AITEX Summit, alongside experts in analytics and AI product development. What did this experience show you about the current state of the industry, and why was it important for you personally to join such a jury?

It was both an honor and a responsibility. AITEX brings together practitioners who have actually shipped AI/ML analytics products, so being invited to judge alongside them means your market judgment on what is truly production-ready is trusted. What impressed me most were not just sophisticated models, but solutions that connected clean data, solid modeling, and a credible go-to-market story into one coherent product.

Personally, I see roles like this as a way to give back to the community by raising the bar for what we call a “real” analytics product. As judges we were expected not only to score, but to explain our reasoning and help teams understand how to move from prototypes to commercially viable, decision-driving products. When you help define these standards together with other experienced leaders, you influence how the next wave of AI and analytics solutions will be built and evaluated.

“AI can recommend, but responsibility always remains with humans”

After iOS 14, your teams essentially replaced lost data with predictions—a huge risk. How did you approach AI implementation in these conditions?

With great humility and endless testing.

We couldn’t just tell clients, “Now it’s a prediction, just trust us.” We ran experiments, A/B tests, lift tests, iterated constantly, and changed direction several times. We invested in product discovery to understand where users were ready to trust the system and where they weren’t.

At Voyantis, we did the same. We let clients start small, test, see real effects, and only then scale. You don’t sell confidence—you build it gradually.
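For readers who haven’t run one, a lift test at its simplest compares a treated group against a holdout that never saw the model-driven change. The Python sketch below shows that comparison with a two-proportion z-test; it illustrates the general technique, not the actual methodology used at Voyantis or AppsFlyer, and the numbers are invented.

from math import sqrt
from statistics import NormalDist

def lift_test(conv_treated, n_treated, conv_holdout, n_holdout):
    # Relative lift of the treated group's conversion rate over the holdout's.
    p_t = conv_treated / n_treated
    p_h = conv_holdout / n_holdout
    lift = (p_t - p_h) / p_h
    # Two-proportion z-test: is the difference more than noise?
    p_pool = (conv_treated + conv_holdout) / (n_treated + n_holdout)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_treated + 1 / n_holdout))
    z = (p_t - p_h) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return lift, p_value

# e.g. 540 of 10,000 treated users converted vs 480 of 10,000 held out
lift, p = lift_test(540, 10_000, 480, 10_000)
print(f"lift = {lift:.1%}, p = {p:.3f}")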

Considering your experience, how should product leaders think about responsibility when AI affects major business decisions?

You can’t shift the blame to the model.

AI can recommend, but responsibility always remains with humans. The product leader must define in advance who makes the final decision, when human oversight is needed, and what happens if the system fails.
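One concrete way to encode that kind of predefined ownership is a routing policy that automates only the confident extremes, escalates everything ambiguous to a human, and falls back to a manual process when the system itself is degraded. A minimal Python sketch follows; the thresholds and names are hypothetical, not a description of any product discussed here.

from dataclasses import dataclass

# Illustrative thresholds; real values should come from risk analysis.
AUTO_APPROVE_ABOVE = 0.95
AUTO_REJECT_BELOW = 0.10

@dataclass
class Decision:
    action: str  # "auto_approve" | "auto_reject" | "human_review"
    reason: str

def route(score: float, model_healthy: bool) -> Decision:
    # Defined in advance: what happens if the system fails.
    if not model_healthy:
        return Decision("human_review", "model degraded, fall back to manual process")
    if score >= AUTO_APPROVE_ABOVE:
        return Decision("auto_approve", f"confidence {score:.2f} above threshold")
    if score <= AUTO_REJECT_BELOW:
        return Decision("auto_reject", f"confidence {score:.2f} below floor")
    # The ambiguous middle: AI recommends, a human makes the final call.
    return Decision("human_review", f"ambiguous confidence {score:.2f}")

print(route(0.97, model_healthy=True))
print(route(0.60, model_healthy=True))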

Another crucial point is honesty. If your presentation screams “AI, AI, AI,” but internally it’s just a set of rules, you’ve already undermined trust. Responsibility starts with not pretending to be wizards.

You not only build AI products at major tech companies but also teach product management, speak at conferences, mentor women in the industry, and actively discuss the future of the PM profession on podcasts and in professional communities.

You are one of the esteemed mentors and part of the Experts Panel in the Senior PM Path for Give & Tech, where the new generation of product managers present their projects. How has this work influenced your understanding of where the profession is heading, especially amid rapid AI development?

When you work with products and people at the same time, you see clearly what has changed over the past years. AI has become much more accessible, and today the technology itself is no longer rare. That’s why my focus has shifted from models to people, from accuracy to trust and responsibility. A PM’s value is no longer in “adding AI” but in understanding where it’s justified and what decisions it actually helps make.

In teaching and mentoring, I often see the fear of “falling behind”—not using AI or not being technical enough. I always say the opposite: a PM’s job is not to master every model but to be an expert in decision-making, risks, and consequences. That’s exactly what I aim to teach.

“If AI is the first thing that catches your eye, you’ve already lost”

You’ve gone from analyst to product director and now speak a lot about the future of the PM profession in the AI era, especially as part of the TopProds association, where podcasts with distinguished members are one of the most prominent features. How has your perspective on AI products changed over the years?

Previously, just having a model felt like an achievement. Today, models are accessible to everyone. The question is no longer whether we can build it, but whether we should and whether it will be trusted.

My focus has shifted from accuracy to decision-making, from capabilities to responsibility, and from technology to people. Technology has matured. Product thinking had to mature even faster.

Looking five years ahead, what should product leaders change about their approach to AI to truly win?

AI no longer impresses. Winners won’t be those with the flashiest demos or the most complex models.

If AI is the first thing that catches your eye, you’ve already lost. The best AI is invisible: trusted, reliable, seamlessly integrated into the workflow. You need to build products people love, and AI that just works.
