
POC: Build at Scale or Experiment First?

A proof of concept (POC) is a miniature project to demonstrate the value of a concept before fully committing to it. It is common practice to use this type of experiment to explore possible ways forward, discover pitfalls, and convince your organisation that you are on the right track. Unsurprisingly, there are many similar and related concepts in software development and data science, such as “AI experiments” and “minimum viable products”, but these terms will not be explored in-depth in this blog post. Instead, we’ll focus on when to commit to a POC and when it would be better to build at scale from the start.


When Are POCs Worth Your While, and When Are They a Waste of Time?


As with most things, POC projects are common for a reason. They are great tools for testing novel ideas and minimising the risk of spending lots of effort and money on something that turns out to be completely useless. If you come up with an idea that you are sure none of your competitors, or even companies in other sectors, have thought of, it is reasonable to test it out before committing to it.


A good POC or discovery project begins with an idea that addresses a real business need, paired with a technically sound solution that could be transformed into something permanent. In the right organisation, even a failed POC can be a source of insight and save time, and a successful one can be like turning on the light in a dark room.


If you are developing something that has never been done before, it might not be wise to commit to a full-scale setup for something that might not even work. But for an average company struggling to become data-driven, the chances are that the issues that arise aren't exactly cutting-edge problems. Someone has probably solved almost exactly the same problem before and uploaded a conference talk about it to YouTube several years ago. In this case, there is no need to reinvent the wheel; the best way forward is to simply decide which type of wheel you would like, and in what size.


…and When it’s Somewhere in Between


A few years ago, before machine learning hit the mainstream through giants such as Google, Spotify, Amazon, and Facebook, a lot of time and effort was spent judging whether all these new machine learning tools were worth looking into, and what they could actually be used for.


Nowadays, the situation is completely different. The value of advanced analytics is well established. In many industries, such as retail or financial services, it is a race between competitors to pick the low-hanging fruit of data science. In these industries, conducting a POC just to see if you agree with the consensus is not optimal.


If you, like many today, find yourself in the situation where you want to build something that a tech giant would consider standard but which has not yet been done in your local market or industry, then it can be hard to decide how to proceed. Whilst the technical solution might seem straightforward, there are always situation-specific details that need to be ironed out. For instance, maybe the model you are trying to use works well for selling TVs in an online store, but it might need some tweaks in the modelling approach if you want to use it for products with substantially different purchase patterns, such as groceries. These types of problems, along with setting up the flow of data, are the most common roadblocks in this kind of project. 


In such a situation, the best approach is often a combination of building for scale and running small experiments on the side to guide the system design. Knowing how to design your system and what to investigate can be tricky unless you have the right competence and experience on your team. With that experience in place, you will be in a better position to weigh the risk of wasting time on smaller experiments against the risk of investing heavily in something that ultimately turns out to be a bad idea.


To summarise the discussion of whether or not to conduct POCs: it is always a good idea to run experiments, but not necessarily in a way that slows you down. There is almost always something to figure out, and if it isn't a technical problem, it is more likely to concern the business and the organisation around the modelling initiative than the modelling itself.




Bridges Between Teams


Suppose you are conducting a POC in your company, and different parts of the organisation have different expectations and different ideas of what the purpose of the experiment is. In that case, it is very difficult for the teams involved to feel comfortable and cooperate. For instance, if you have an offer recommendation model optimising for redemption rate, but you measure its success on spend, the results might look good initially. Eventually, though, the consequences of measuring something other than what the model is trained for will limit progress and add confusion.
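As a hypothetical sketch of this mismatch (the numbers, groups, and function names below are invented purely for illustration), the same offer assignments can be scored against two different KPIs, and a model tuned for redemption rate may win on that metric while losing on spend:

```python
# Invented example data: (offer_accepted, spend) per customer.
offers = [
    # Customers who received the model's targeted offers.
    (True, 12.0), (True, 8.5), (True, 10.0), (False, 0.0), (True, 9.0),
]
baseline = [
    # A control group that received generic offers.
    (True, 40.0), (False, 0.0), (False, 0.0), (True, 55.0), (False, 0.0),
]

def redemption_rate(group):
    # Fraction of customers who accepted their offer.
    return sum(1 for accepted, _ in group if accepted) / len(group)

def avg_spend(group):
    # Average spend per customer in the group.
    return sum(spend for _, spend in group) / len(group)

print(f"model    redemption={redemption_rate(offers):.0%}  spend={avg_spend(offers):.2f}")
print(f"control  redemption={redemption_rate(baseline):.0%}  spend={avg_spend(baseline):.2f}")
# The model wins on redemption rate (80% vs 40%) but loses on average
# spend (7.90 vs 19.00) -- which KPI you agree to optimise decides
# whether this POC looks like a success or a failure.
```

The point is not the arithmetic but the governance: until both teams agree on the KPI, the same experiment supports opposite conclusions.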


In such a situation, workshopping together and discussing under informal conditions can be a good start. Reaching an agreement on what to measure should be the highest priority. In our experience, regular workshops to discuss goals and KPIs are a great way to build the shared understanding that makes work more enjoyable and productive throughout the whole process, even after leaving the POC stage.


For a business person who doesn't necessarily understand the full technical details of a model, it can be reassuring to know what a particular metric measures and to see the progress the data science team is making. The data scientists, in turn, will not need to explain the minutiae of their work to get a stakeholder's approval, because their solutions can prove their worth in a practical test against agreed-upon metrics.


When to Avoid and When to Embrace a POC


Let us conclude by summarising when to conduct a POC and when not to:


  • Avoid a POC: If you are trying to establish the way forward for your company's technical capabilities, skip the unnecessary POC; build a minimal version in the right system with the right data from the beginning, and plan continued development based on the outcomes and learnings.
  • Embrace a POC: If you are doing something novel that might not work, run it as an ad-hoc experiment and worry about productionising it later.


A POC that incorporates the whole business case can be an eye-opener and can do a lot to demonstrate the value of committing to a certain project. All is well if the setup is sound, the KPIs are well defined, and the path forward is clear. If, on the other hand, the purpose of the experiment is unclear, don't go ahead with it. Instead, focus on something where the purpose is clear, or spend more time concretising the experiment.

