
by Didier Bonnet, Achim Plueckebaum Published April 8, 2025 in Artificial Intelligence • 12 min read • Audio available
With AI fever in overdrive, everyone is searching for winning AI use cases – business applications that provide competitive insights or productivity breakthroughs to improve performance. However, the process feels like solving a giant jigsaw puzzle without a picture on the box. A lot of trial and error is needed, probably along with substantial investments in technology and capabilities.
Defining what constitutes a use case is far from clear. A business executive told us it was “an industry-specific application of AI tools to increase efficiency or improve revenue.” In contrast, a technology vendor described it as “a proven application of our AI technology and competencies that has been successfully deployed in several customer environments.” Perspective is important.
Our research has led us to conclude that a good AI use case results from a “matching exercise” where value is found at the intersection of datasets and business problems/opportunities. That’s not easy. Many companies we interviewed still struggle with data quality, readiness, and aggregation. Business problems, for their part, are hard to describe: their content and context are specific, and both change over time. We often observed a “language gap” between business executives and their data science counterparts. When designing AI use cases, language matters.
So, what’s the answer to designing good AI use cases? The matching process between a dataset and a business problem/opportunity is rarely a one-off. It is highly iterative, progresses with learning, and takes time. There are four imperatives to follow when designing an AI use case. They may not guarantee that your AI implementation will be 100% successful, but at least at the design stage of your use case, they will help you avoid some common pitfalls.
As we said, language matters, and we found overlaps in how organizations define the types of AI initiatives they undertake. AI initiatives differ in length, complexity, level of uncertainty/risk, and outcome, so it pays to be clear at the outset.
AI experiments are small-scale, time-bound activities to test a hypothesis or explore a specific question. The goal is to validate (or invalidate) the original assumption without heavy investment. An example could be testing a machine-learning algorithm to gauge whether it can detect fraud patterns in historical data. If positive, the outcome of an experiment should be to proceed to more structured initiatives such as a proof of concept or a pilot.
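To make this concrete, here is a minimal sketch of such an experiment, assuming a historical transaction extract with hypothetical column names and using scikit-learn’s IsolationForest as a stand-in for whichever algorithm is under test:

```python
# Sketch of an AI experiment: can an off-the-shelf anomaly detector surface
# fraud-like patterns in historical data? Column names, the is_fraud label,
# and the 5% contamination rate are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.read_csv("historical_transactions.csv")
features = transactions[["amount", "hour_of_day", "merchant_category_id"]]

model = IsolationForest(contamination=0.05, random_state=42)
transactions["flagged"] = model.fit_predict(features) == -1  # -1 marks anomalies

# The experiment's question: do flagged transactions overlap with known fraud
# labels often enough to justify a proof of concept?
known_fraud = transactions["is_fraud"] == 1
precision = (transactions["flagged"] & known_fraud).sum() / transactions["flagged"].sum()
print(f"Precision against historical fraud labels: {precision:.2%}")
```

The point is the scale: a few days of work on an existing extract, with a single number as the go/no-go signal.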
AI proofs of concept (POCs) or pilots are focused initiatives to prove the feasibility of an AI application under controlled conditions. They require more time than an experiment and usually involve a subset of real data and testing with operational systems (for example, testing that an AI chatbot can accurately answer customer support queries using a small dataset). If technical feasibility is proven, the next step is usually to validate performance, usability, and scalability before putting the system into operation.
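For the chatbot example, a POC-level check might look like the sketch below; the answer() stub and the curated question-answer pairs are placeholders for whatever system and dataset the pilot actually exercises:

```python
# Sketch of a POC check: does the chatbot answer a small, curated set of
# customer-support queries acceptably? answer() stands in for the real system.
def answer(query: str) -> str:
    # Placeholder: replace with a call to the chatbot under test.
    return "Use the reset link we emailed you; refunds are accepted within 30 days."

qa_pairs = [  # small, hand-curated evaluation set (illustrative)
    ("How do I reset my password?", "reset link"),
    ("What is your refund policy?", "30 days"),
]

def evaluate(threshold: float = 0.8) -> bool:
    hits = sum(expected.lower() in answer(q).lower() for q, expected in qa_pairs)
    accuracy = hits / len(qa_pairs)
    print(f"Accuracy on curated queries: {accuracy:.0%}")
    return accuracy >= threshold  # go/no-go signal for scaling the pilot

print("Proceed" if evaluate() else "Revisit the use case")
```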
AI projects are structured and well-defined efforts that follow a clear methodology (e.g., agile development). They take months or even years, depending on their complexity. An example would be an industrial company deciding to roll out a company-wide AI-driven predictive maintenance system.
AI use cases are specific scenarios or problems where AI is applied to validate business-oriented opportunities and create value from AI deployment. Use cases are the starting point and guide the direction of experiments, pilots, and POCs. They provide the context and criteria against which these initiatives are designed and evaluated. Developing successful AI initiatives is a highly iterative process. Use cases guide the matching exercise between a dataset and a business problem/opportunity and usually lead to an experiment. Experiments test the hypotheses that underpin the use case. Once validated, experiments lead to POCs and, in turn, successful POCs lead to scaled pilots. Pilots inform the broader deployment strategy, and the successful ones become full-blown AI projects operationally deployed across the enterprise. The successful outcome of a use case is, therefore, a full-scale AI project.
The business context should drive the development of an AI use case – for example, when a new, potentially transformative technology becomes available (e.g., GenAI) or when a larger business case is required to justify a potentially costly transformation. The business needs a measurable validation of outcomes from a limited scope to secure funding. Use cases help define where the real value pools lie in the organization and steer the implementation of AI strategies.
In our research, we found that successful AI use cases display specific characteristics, including:

- an iterative matching of the business challenge with relevant datasets;
- clear hypotheses, tested within a defined timeline;
- measurable milestones and KPIs, with explicit go/no-go decision points; and
- a senior executive champion accountable for the outcome and for advocating broader adoption.
For example, in our interviews, a pharma industry executive described a machine-learning use case aimed at optimizing clinical trial site selection. A machine-learning model was developed using historical trial data, patient demographics, and site performance metrics, iteratively matching the business challenge with relevant datasets. The project objective was to validate hypotheses about reducing site selection errors through retrospective and prospective testing over a six-month timeline. Clear milestones and KPIs, such as recruitment speed improvements and adherence to timelines, ensured measurable outcomes, with a midpoint decision determining whether to move to a scaled project. The initiative was championed by a senior executive accountable for trial success, ensuring alignment and advocacy for broader adoption of the use case if successful. The iterative, hypothesis-driven approach demonstrated the potential of this AI use case to deliver significant value to the organization and to be incorporated into operations company-wide.
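A rough sketch of the modeling half of that use case, under stated assumptions (hypothetical site-level features, recruitment speed as the target, and a gradient-boosted model as one plausible choice):

```python
# Sketch of the site-selection idea: learn recruitment speed from historical
# site data, then rank candidate sites. All column names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.read_csv("historical_trial_sites.csv")
features = ["patient_pool_size", "past_enrollment_rate", "staff_count"]

model = GradientBoostingRegressor(random_state=0)
model.fit(history[features], history["patients_recruited_per_month"])

candidates = pd.read_csv("candidate_sites.csv")
candidates["predicted_recruitment"] = model.predict(candidates[features])

# The KPI check from the article: at the six-month midpoint review, did the
# predicted top sites beat the status quo on recruitment speed?
print(candidates.sort_values("predicted_recruitment", ascending=False).head(10))
```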
Common wisdom dictates that use cases should start with a business problem/opportunity and work back to the data required to solve it. With AI, it’s more “chicken and egg”: sometimes you start with a business problem/opportunity and sometimes with a dataset. A common and elastic technology backbone is important but should never be the starting point. The content of a business problem/opportunity is often narrow, and the context matters. Good business problem definitions need to be specific, relevant, objective, and quantifiable (since, with AI, data will be at the core of the solution).
For example, a healthcare executive described a use case where the early business problem was defined as: “We want to leverage AI to make our hospital admission process more efficient.” The executive admitted that such a definition was unlikely to get the company far as it had no specific problem area, context, success metrics, or indication of the data source. The company iterated on the definition and restated it as: “We want to lower the rate of patient readmission by identifying individuals at high risk of returning within 30 days and ensuring proper follow-up care with the objective of reducing readmission rates by 10% and improving patient outcome.” This redefinition started a fruitful matching exercise. The team began by looking at electronic health records, patient demographics, treatment plans, and historical readmission data, and then applied an AI model on top of those datasets.
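A minimal sketch of the technical half of that redefined use case, assuming a flattened EHR extract with hypothetical column names and a 30-day readmission label:

```python
# Sketch of the readmission use case: score each discharged patient's risk of
# returning within 30 days. Column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

ehr = pd.read_csv("ehr_extract.csv")  # EHR, demographics, treatment history
X = ehr[["age", "num_prior_admissions", "length_of_stay", "num_medications"]]
y = ehr["readmitted_within_30_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"AUC: {roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]):.3f}")

# The business half: route high-risk patients to follow-up care, aiming at the
# stated 10% readmission-reduction target. The threshold is set with clinicians.
ehr["risk"] = model.predict_proba(X)[:, 1]
follow_up_list = ehr[ehr["risk"] > 0.7]
```

Note how every element of the restated problem – the 30-day window, the 10% target, the follow-up action – maps onto a concrete piece of the sketch.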
Existing or accessible datasets can also be a good starting point. When AI is applied to them, useful patterns can be uncovered that lead to assumptions or insights about a business problem/opportunity.
For instance, a credit card company was looking at potential applications of AI in credit card fraud. The company applied unsupervised machine learning to large volumes of transaction logs without a pre-defined question (context). The AI system uncovered a cluster of transactions originating from different merchant categories and regions that consistently showed suspicious timing anomalies and unusual card usage sequences (pattern identification). The pattern suggested the potential existence of a coordinated fraud ring operating across multiple merchants (hypothesis/insight). From this data insight, the company was able to define a use case and develop a targeted fraud detection model to proactively flag and block these sophisticated attack vectors.
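The pattern-identification step might look like the following sketch, which assumes scaled timing and usage features; DBSCAN is one of many clustering choices and stands in for whatever the company actually used:

```python
# Sketch of pattern identification without a pre-defined question: cluster
# transaction behavior, then inspect dense groups that cut across merchant
# categories and regions. Feature and column names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

logs = pd.read_csv("transaction_logs.csv")
timing = logs[["seconds_since_prev_txn", "txns_last_hour", "amount"]]
scaled = StandardScaler().fit_transform(timing)

logs["cluster"] = DBSCAN(eps=0.5, min_samples=50).fit_predict(scaled)  # -1 = noise

# A tight-timing cluster spanning many merchant categories and regions is the
# kind of anomaly that suggested a coordinated fraud ring in the article.
for cid, group in logs[logs["cluster"] >= 0].groupby("cluster"):
    n_cat = group["merchant_category"].nunique()
    n_reg = group["region"].nunique()
    print(f"cluster {cid}: {len(group)} txns, {n_cat} categories, {n_reg} regions")
```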
Unfortunately, matching datasets and business problems/opportunities does not work like matching individuals on a dating site. Both the business side and the data side have dynamic characteristics and evolve. Datasets are not static: they exhibit complementarities, where the value of the data increases when meshed with other data with complementary attributes (e.g., the weather conditions in which a machine is used). Equally, business problems/opportunities evolve as economic conditions, markets, competition, and customer needs and behaviors change (e.g., increasingly health-conscious consumers seeking organic foods, transparency in sourcing, nutritional data, and sustainable production methods).
In addition, both datasets and business problems/opportunities have knowns and unknowns that will need to be identified. For example, a dataset covers a specific timeframe, but data patterns might change if we look further into our historical archives. A business opportunity might be based on today’s privacy regulatory environment, but regulatory changes may affect its feasibility.
So, under those circumstances, how do we start the matching exercise?
First, the business problem and the data need to be assessed and qualified, regardless of the starting point. The key criteria for a business problem/opportunity are its feasibility (Can we deliver the outcome?) and its impact (Will the outcome substantially affect performance?). The key criteria for the data side are findability (Can we find the reliable data needed to effectively inform the decision/action?) and accessibility (Can we economically access the data?). Second, you will need to iterate to properly qualify your matching dimensions. For example, could the feasibility of a business problem be improved if we were to change or adapt our workflow? Or could we find proxy or public data that would inform the decision with a sufficiently high confidence level?
A deep understanding of your matching dimensions is critical to setting up the use case and will increase your chances of success.
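One way to make that qualification concrete is a simple scoring grid over the four criteria. The grid below is illustrative – the 1-to-5 scale, equal weighting, and candidate names are assumptions, not part of the authors’ method:

```python
# Illustrative scoring grid for the matching exercise. The four criteria come
# from the article; the 1-5 scale and equal weighting are assumptions.
from dataclasses import dataclass

@dataclass
class UseCaseCandidate:
    name: str
    feasibility: int    # Can we deliver the outcome? (1-5)
    impact: int         # Will the outcome substantially affect performance? (1-5)
    findability: int    # Can we find reliable data to inform the decision? (1-5)
    accessibility: int  # Can we economically access the data? (1-5)

    def score(self) -> float:
        return (self.feasibility + self.impact + self.findability + self.accessibility) / 4

candidates = [
    UseCaseCandidate("Predictive maintenance", feasibility=4, impact=5, findability=3, accessibility=4),
    UseCaseCandidate("Support chatbot", feasibility=5, impact=3, findability=5, accessibility=5),
]

# Re-score after each iteration: a workflow change may raise feasibility, and
# proxy or public data may raise findability.
for c in sorted(candidates, key=lambda c: c.score(), reverse=True):
    print(f"{c.name}: {c.score():.2f}")
```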
Once you’re clear on the data and the business dimensions that will define your AI use case, you can start matching and iterating. Don’t have business teams define business problems/opportunities and then pass them on to the data science team (or vice versa). Build joint teams from the start. Pairing data science with deep domain and process knowledge drives better results.
The matching exercise is built around three phases:
Use cases should be essential components of your AI strategy. The successful ones will increase your certainty about where the real AI-driven business value lies in your business. Alone, however, they will not drive ROI. The business case is realized when the application is scaled, not during the use-case and pilot phases. Early on, you should ask the following to ensure the transition from a successful use case to a transformative project:
Most organizations run multiple use cases in parallel. This is fine if it follows certain rules and has clear internal governance. Focus your AI strategy on a portfolio of use cases enabled by a common technology and data backbone.
When scalability is treated early as a core objective rather than an afterthought, AI use cases can transition from exploration to production and pave the way for AI deployment that will deliver meaningful business value over the long run.
Business problems/opportunities and datasets can be a marriage made in heaven for your AI strategy. Successful use cases are at the heart of finding business value. But, as we’ve seen, it is a long, iterative, organizationally complex, and structured journey. Like Spotify’s recommendation engine, the more your “organizational algorithm” exercises the use-case iteration muscle, the faster and better the matching will be. Leadership, business transformation, and accountability matter to success. To paraphrase Steve Jobs: “If you look closely, most AI overnight successes took a long time.”
Didier Bonnet is Professor of Strategy and Digital Transformation at IMD and program co-director for Digital Transformation in Practice (DTIP). He also teaches strategy and digital transformation in several open programs such as Leading Digital Business Transformation (LDBT), Digital Execution (DE) and Digital Transformation for Boards (DTB). He has more than 30 years’ experience in strategy development and business transformation for a range of global clients.