
Successful Machine Learning Models Start with the Right Questions


The machine learning market is expected to reach $96.7 billion by 2025, growing at an annual rate of 43.8% from 2019 to 2025. The growth is attributed to increasing data generation and rising demand for forecasting future outcomes. The most popular use cases target revenue growth and cost reduction, but fraud detection is becoming a dominant use case, with almost 50% of enterprises looking to implement a solution.

Despite the projected growth, only a small percentage of machine learning (ML) projects deliver successful business outcomes. According to Gartner, 85% of machine learning projects fail, and only 20% of the models that are deployed produce analytic insights that deliver business outcomes. Yet businesses continue to invest in the technology. A McKinsey report found that:

Companies, for example, are still spending disproportionate time cleaning and integrating data, not following standard protocols to build AI tools, or running “shiny object” analyses not tied to business value.

While focusing on data and protocols can increase the chances of a successful implementation, only tying a project to a business strategy can ensure a positive outcome. Failing to ask the right questions at the start of a project can lead to:

  • Solving the wrong problem

  • Creating a deployment gap

  • Forgetting about culture

Let's look at how asking the wrong questions can impact results.

Solving the Wrong Problem

Nothing dooms a project like a vague or unrealistic goal. A general objective such as reducing operating costs or increasing revenue doesn't help much when scoping a project; there are numerous ways to accomplish either one. Without a clear directive, a project team may focus on areas that management never intended. As a result, the final product doesn't meet corporate objectives and is considered a failure.

Asking an organization only what it wants from an ML project can also lead to poor outcomes. Suppose a company wants to improve its sales forecasting with an ML model. It's clear what the company wants and how it wants to accomplish it. What isn't clear is why. Without a clear why, team members assume the company wants to increase revenue, and nothing ties the modeling effort to a business strategy.

Forecasting sales for a company built on volume sales requires different criteria than for one that depends on relationship sales. Companies that depend on relationships, for example, usually have long sales cycles, and their customers may purchase add-on products or recurring services that affect revenue. A traditional forecast model could overlook the recurring services and some of the add-on purchases.

What the company needed was a model of its revenue streams. The problem wasn't getting a better handle on new sales but assessing how well it was maximizing its existing customer base. Unless the right questions are asked, a project can solve the wrong problem.
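
To make the distinction concrete, here is a minimal sketch of the gap between a new-sales forecast and a revenue-stream view. The customer names and figures are invented purely for illustration, and the article does not prescribe any particular breakdown; the point is only that a model keyed to new sales ignores revenue the existing base already generates.

# Illustrative, made-up quarterly figures for three existing customers.
customers = [
    {"name": "Customer A", "new_licenses": 10_000, "add_ons": 2_500, "recurring_services": 6_000},
    {"name": "Customer B", "new_licenses": 0,      "add_ons": 1_200, "recurring_services": 9_000},
    {"name": "Customer C", "new_licenses": 4_000,  "add_ons": 0,     "recurring_services": 3_500},
]

# A traditional forecast keyed to new sales alone.
new_sales_only = sum(c["new_licenses"] for c in customers)

# A revenue-stream view also counts add-ons and recurring services.
all_streams = sum(
    c["new_licenses"] + c["add_ons"] + c["recurring_services"] for c in customers
)

print(f"New-sales forecast:  ${new_sales_only:,}")
print(f"Revenue-stream view: ${all_streams:,}")
print(f"Revenue overlooked:  ${all_streams - new_sales_only:,}")

With these invented numbers, a new-sales forecast covers $14,000 of a $36,200 quarter, leaving $22,200 of recurring and add-on revenue outside the model entirely.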

Creating a Deployment Gap

Asking the wrong question means that important stakeholders may be excluded. For example, a company's customer support department has difficulty scheduling the right number of agents for the workload. Some days there aren't enough agents, which increases wait time; other times, agents wait for calls to come in. The company needs a way to forecast the number of agents to schedule.

The process seems straightforward, so management tasks the IT department with developing a model to forecast and schedule agents. IT starts to build a model to answer the question of how to schedule employees to eliminate idle agents. The developers look at historical data to determine peak call times and the average length of a support call. On that basis, the data scientists build algorithms to create a forecast model. After months of development and testing, the model is placed into production.
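
To make the scenario concrete, here is a minimal sketch of the staffing calculation such a model might boil down to. The article does not specify the modeling approach; this example assumes a standard Erlang C queueing formula, and the call volume, handle time, and service-level target are illustrative only.

import math

def erlang_c(agents: int, offered_load: float) -> float:
    """Probability an arriving call has to wait (Erlang C), where offered_load
    is calls per hour times average handle time in hours (Erlangs)."""
    if agents <= offered_load:
        return 1.0  # queue is unstable; effectively every call waits
    term = 1.0        # running value of offered_load**k / k!
    series_sum = 1.0  # k = 0 term of the denominator series
    for k in range(1, agents):
        term *= offered_load / k
        series_sum += term
    term *= offered_load / agents  # now offered_load**agents / agents!
    numerator = term * agents / (agents - offered_load)
    return numerator / (series_sum + numerator)

def agents_needed(calls_per_hour: float, avg_handle_min: float,
                  answer_within_sec: float = 30.0, service_level: float = 0.8) -> int:
    """Smallest head count that answers the target share of calls in time."""
    aht_hours = avg_handle_min / 60.0
    load = calls_per_hour * aht_hours
    n = max(1, math.ceil(load))
    while True:
        p_wait = erlang_c(n, load)
        achieved = 1.0 - p_wait * math.exp(-(n - load) * (answer_within_sec / 3600.0) / aht_hours)
        if achieved >= service_level:
            return n
        n += 1

# Illustrative peak hour: 120 calls averaging 6 minutes each,
# targeting 80% of calls answered within 30 seconds.
print(agents_needed(calls_per_hour=120, avg_handle_min=6))

A production model would layer historical analysis and call-volume forecasting on top of a calculation like this; the sketch only shows how peak call volume and average call length translate into a schedule-ready head count.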

The company is pleased with the initial results. Wait times are virtually eliminated, and agents are rarely idle. Within months of deployment, though, the forecasting stops running smoothly. Complaints come into IT. The technical support staff tries to identify the problem but struggles. Why? No one asked how the model would be supported.

The modelers were careful to include algorithms to address changes in the number and length of support calls. They diligently tested every use case, but they didn't involve the people who would support the model once it was in production. The developers were laser-focused on delivering the forecast model.

What if the project had started with a why question rather than a how or what? Instead of asking for a forecast model for scheduling, the company needed to start with why it wanted the model. The business value may seem obvious (reduce operational costs), but for this company the business value had a caveat: the model needed to reduce operational costs without hurting customer satisfaction ratings.

If the right question had been asked, the project team would have included IT support. Instead, the project created a deployment gap. A knowledgeable support staff is essential to keeping technology working; without one, a company struggles to deliver on its business strategy.

Forgetting About Culture

Underlying any project is the company's culture. Some businesses are wary of new technology, while others can't wait for the next "shiny object." Most companies fall somewhere on the spectrum of technology acceptance, and even organizations that view tech positively admit it can be disruptive. ML projects carry an added constraint: not everyone trusts the results. That's why a question about employee acceptance needs to be asked at the start of any ML project.

Ensuring a technical project's success requires asking questions far beyond the technical scope. If employees resist the technology, the model's usefulness is curtailed. Developers can create the model, deploy it successfully, and deliver valid results, but what happens if no one trusts those results? When a model produces outcomes that contradict what employees believe they know, they may continue to operate as if the model did not exist.

For example, most companies use sales forecast models in some form. They may live in spreadsheets or databases, but they exist. If the sales team doesn't trust the technology, it will keep using its trusted methods, negating the value of the ML model. Asking the right question can reduce that resistance: if sales participates in the project, data scientists can explain how the results are produced, and the team becomes more willing to use the model.

Asking the Right Question

The right questions can make a difference in modeling outcomes. They can align technology and business strategies to deliver value. They can identify the key performance indicators (KPIs) to use when evaluating a model's success. Whether it is creating a deployment gap or solving the wrong problem, asking the wrong question can lead to unusable results. That's why it's crucial that organizations work with experienced professionals who understand the right questions to ask.

Contact F33 today to discuss how to operationalize ML in your organization and start with the right approach.
