Originally published on Journalift.org
Chances are that all of you reading this have a stack of Post-it notes close by. We all use them, we love them in different colours, they are almost part of our office decor. But that little piece of paper with adhesive on it is known in the literature as a "productive failure", an unintended by-product of a different process.
So, the story goes something like this. In the late 1960s, a scientist was trying to develop a very strong adhesive for industrial use but accidentally created the opposite: a glue that stuck lightly and peeled off cleanly. The material met neither the need nor the specifications, as it did not solve the original problem, and it was shelved as a failed attempt. The turning point came when a colleague recognised that this "useless" adhesive solved a different problem: temporary notes that could stick, be repositioned, and not damage surfaces.
Through internal experimentation, sampling, and market tests, the concept was refined into Post-it Notes, which then became a highly profitable global product line and an enduring symbol of everyday workplace creativity.
This is not only a story about failure and turning it into an opportunity, although it can be that. In a broader sense, it is a story about what happens when things do not go according to plan.
Anyone who has gone through a planned validation process of any sort knows the benefit of the approach and its underlying structure: it is a fast way to test whether we have a product or service that the market would judge as valuable. Based on available resources and data, we make assumptions about customers, the product or service, the market, adoption speed, channels and so on. Usually, those assumptions look really pretty on colourful Post-it Notes.
The actual testing then reveals the extent to which we were right in interpreting the available data. And to those unaccustomed to the process (and in love with their idea, because they have worked so hard on it), any deviation from expectation can be odd, to say the least. But let's unpack the role of "failure", or "deviation from expectation", as part of the validation process.
This case surfaces three core ideas for any idea validation program:
- Failure is often a mismatch between problem and solution, not an intrinsic flaw in the idea. Context matters.
- The signal in the failure only becomes visible if the organisation keeps the experiment “alive” long enough to reinterpret it. Do not give up. Reframe. Repurpose.
- Structured validation mechanisms (experiments, sampling, feedback loops) are what convert serendipitous failures into validated opportunities. Learn. Always. And from everything.
Why failure is built into validation
Idea validation is not a one-time gate but a series of hypothesis tests that are probabilistic by nature. Each test is designed to disconfirm or refine assumptions about customers, value propositions, channels, and revenue models, so a high rate of “failure” at the level of hypotheses is not only expected but desirable.
From a learning perspective, a failed test is a high-quality data point: it narrows the solution space, clarifies what customers do not value, and often reveals hidden constraints such as pricing sensitivity, adoption friction, or channel misfit. This is precisely what happened with the weak adhesive from the beginning of the story. Once the original “strong glue” hypothesis failed, the team was free to explore an entirely different use case centered on reversibility rather than strength.
Photo by Brett Jordan on Unsplash
Research on startups shows that a significant share of ventures fail because there was “no market need,” meaning teams built solutions before properly testing whether the underlying problem mattered to customers. In a well-designed validation program, these painful outcomes are pulled forward into cheap, early experiments where assumptions are exposed and invalidated quickly, reducing the risk of large-scale business failure later.
Case in point: Testing TotalCast with European newsrooms
TotalCast is an AI-powered media automation platform designed to help European local media do more with less by streamlining the entire newsroom workflow from monitoring to multi-channel publishing.
Built by practitioners with over 15 years of experience in local journalism, the platform combines artificial intelligence and no-code tools to automate routine tasks such as intelligent news monitoring, AI-assisted content adaptation, radio news production with voice cloning, video optimisation, and distribution across multiple platforms—while keeping human editors firmly in control of final decisions.
After demonstrating up to a 70% reduction in content processing time and fully automated radio news production in a Ukrainian newsroom, the team set out to test whether this modular solution, and in particular its radio automation and voice-cloning capabilities, would be relevant, acceptable, and commercially viable for local media outlets across diverse European markets.
The team had recognised a problem: journalists spend far too much time on administrative tasks that add to their cognitive load and take away from the creativity and dynamism that are a recognised part of the journalistic profession. They even calculated the average number of hours a newsroom team of five wastes on tasks that could easily be handed over to an AI-enabled solution. And they were ready to offer just that: an MVP that could help newsrooms understand the magnitude of the issue and set them on a path back to the true nature of journalism: real-life issues and topics.
With incredible discipline and dedication to the process, the team set out to test their assumptions in three distinct European markets: Italy, the Czech Republic and Austria.
The core hypothesis that failed for this team was that European local radio outlets, particularly in Austria, would quickly recognise the value of an AI-powered automation and voice-cloning platform and actively engage with testing once the MVP was offered. The team assumed that the strong results achieved in a Ukrainian newsroom (including significant time savings and successful radio automation) would translate into comparable enthusiasm and adoption intent in new markets, provided that potential clients were informed about the product and given access to try it.
The validation process, however, revealed a very different reality: while there was a good level of enthusiasm in Italy and some interest in the Czech Republic, Austrian outlets showed little to no response at all, directly challenging the belief that demand and readiness were uniform across European markets.
A second, linked hypothesis that did not hold was that newsroom decision-makers would view AI-based automation primarily as a practical efficiency tool rather than a source of fear or resistance. The team expected editors and journalists to focus on time savings, the freeing up of creative capacity, and the accessibility of no-code AI, but validation surfaced concerns around regulation, job impact, and attachment to existing ways of working that significantly slowed or blocked engagement.
An important learning for the team was that “no response is a response”: the silence from some markets and the mixed reactions from editors indicated not only a go-to-market challenge but also the need to revisit assumptions about technology readiness, trust, and perceived risk, and to pivot the positioning and market focus rather than pushing the original hypothesis to completion at all costs.
Types of “failure” in idea validation
In an idea validation program, “failure” is not a single event but shows up at multiple levels:
- Problem–solution failure: Customers confirm the problem but reject the proposed way of solving it (e.g., a feature set they find clunky or irrelevant).
- Market–need failure: Discovery work reveals that the problem is not important or frequent enough to justify a dedicated solution, echoing why many startups collapse for lack of real demand.
- Business model failure: Early financial and behavioral signals show that customers will not pay enough, often seen in post-mortems of failed ventures with flawed models.
- Execution failure: The idea is sound, but validation reveals delivery constraints such as high operating costs, scalability limits, or regulatory barriers.
Framing these outcomes as distinct forms of information allows teams to tag and store failure data in a structured way. Over multiple cycles, the validation program builds an institutional memory of what has been tried, what failed at which level, and what patterns recur across projects, which can then be mined for new opportunities.
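To make the idea of "tagging and storing failure data" concrete, here is a minimal sketch of what a structured failure-log entry could look like. It is purely illustrative: the field names, enums, and the example record are assumptions for this article, not part of any programme's or TotalCast's actual tooling.

```python
# Illustrative sketch of a structured "failure log" entry (all names are assumptions).
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class FailureLevel(Enum):
    # The four failure types described in the list above
    PROBLEM_SOLUTION = "problem-solution"
    MARKET_NEED = "market-need"
    BUSINESS_MODEL = "business-model"
    EXECUTION = "execution"


class Decision(Enum):
    # The three post-failure moves discussed later in the article
    KILL = "kill"
    PIVOT = "pivot"
    REFRAME = "reframe"


@dataclass
class FailureRecord:
    hypothesis: str              # the assumption that was tested
    failure_level: FailureLevel  # which failure type the test exposed
    evidence: str                # what the experiment actually showed
    decision: Decision           # the post-failure move the team chose
    logged_on: date = field(default_factory=date.today)


# Hypothetical example, loosely based on the case study above.
failure_log = [
    FailureRecord(
        hypothesis="Austrian local radio will actively test the MVP once it is offered",
        failure_level=FailureLevel.MARKET_NEED,
        evidence="Very limited to no response from Austrian outlets",
        decision=Decision.REFRAME,
    )
]
```

Kept over multiple cycles, a log like this is what turns individual disappointments into the institutional memory the paragraph above describes: something a team can query for recurring patterns rather than relying on anecdote.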
From failure to pivot and discovery
Many well-known products emerged when teams treated failure signals as prompts to pivot rather than to stop. Some now-dominant products such as Slack and Netflix evolved from earlier concepts that did not initially succeed, with founders using validation data from failed versions to reorient toward what users actually valued.
In this sense, failure is a navigation tool inside the validation program. A negative result on one hypothesis calls for one of three possible moves:
- Kill: Stop investing in the idea if repeated tests show no viable problem/solution fit.
- Pivot: Change one or more elements (segment, value proposition, channel, or model) based on specific rejection patterns in the data.
- Reframe: Look for adjacent use cases or segments where the “failed” asset (technology, process, content) might be uniquely valuable, as with the repurposed adhesive behind Post-it Notes.
What distinguishes a mature idea validation program is not fewer failures but better post-failure decisions: the discipline to sunset unviable ideas early and the imagination to recontextualise promising but misapplied assets.
So, what happened with our TotalCast team? They are treating one market's "No" as a "Not Yet" type of problem. Giving up is a viable option only if you have enough valid data points to actually justify it. Until that is the case, reframing and pivoting it is.
Author: Samira Nuhanović
The Validation Booster programme is implemented by Thomson Media as part of Media Innovation Europe, led by the Vienna-based International Press Institute (IPI). The consortium brings together Thomson Media (TM), The Fix Foundation (TFF), and the Balkan Investigative Reporting Network (BIRN). The programme is co-funded by the European Union.

