Most organisations have more data than they know what to do with. Storage is cheap, instrumentation is pervasive, and the tooling to collect and aggregate data has become broadly accessible. The scarcity is not in the data itself — it is in the capacity to turn that data into decisions. The gap between "we have data on this" and "we made a better decision because of that data" is wider than most organisations acknowledge, and the bottleneck almost never sits where people assume it does.

The Data-to-Decision Gap

The assumption embedded in most data investment is that more data, better tools, and more dashboards will lead to better decisions. Sometimes this is true. More often, it produces a situation where data exists, reports are generated, and decisions are made — but the connection between the three is looser than it appears. Decisions get made on intuition and political dynamics, with data cited as post-hoc justification rather than genuine input. The data infrastructure exists; the culture of using it does not.

Closing this gap requires attention to the process that connects collection to decision — and specifically to the stages in that process where things reliably go wrong.

The Five Stages: Where Things Go Wrong

The journey from raw data to actionable insight has five stages: collection, cleaning, analysis, communication, and the decision itself with its feedback loop. Each has characteristic failure modes, taken up in the sections that follow: collecting everything in the hope that some of it proves useful later, cleaning treated as an afterthought, analysis that is technically correct but answers a question nobody needs answered, charts that decorate rather than communicate, and decisions whose outcomes are never compared with what the analysis predicted.

"The bottleneck in most organisations is not the volume of data. It is the structured, disciplined process of asking the right question, cleaning what you have, and communicating the answer clearly."

The Role of Domain Knowledge

Technical skill in data analysis is necessary but not sufficient. Domain knowledge — understanding the context in which data was generated, the meaning of the variables, the plausibility of the results, and the decisions they need to inform — is what distinguishes analysis that can be acted on from analysis that is technically correct but organisationally inert.

This is why the best analysts in applied contexts are rarely pure technicians. They have built enough familiarity with the domain to know when a result is surprising and worth investigating, when it is implausible and likely reflects a data error, and when it confirms something already known without adding new information. That contextual judgement cannot be automated, and it is not developed quickly.
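The judgement itself cannot be automated, but some of the routine checks it motivates can be made explicit. As a hypothetical sketch (the field names and thresholds below are invented for illustration, not a real schema), plausibility rules might be written down as code that flags suspect values for human review rather than letting them pass silently:

```python
# Hypothetical sketch: turning domain knowledge into explicit plausibility
# checks. Field names and thresholds are illustrative only.

def plausibility_flags(record: dict) -> list[str]:
    """Return human-readable flags for values a domain expert would question."""
    flags = []

    # A negative order value is more likely a data error (a refund logged in
    # the wrong table, say) than a genuine observation.
    if record.get("order_value", 0) < 0:
        flags.append(f"negative order_value: {record['order_value']}")

    # Zero delivery days is technically valid but surprising enough to check
    # before it feeds an average.
    if record.get("delivery_days") == 0:
        flags.append("delivery_days of 0: same-day delivery or missing data?")

    # Values far outside the historically observed range often signal a unit
    # error (pounds vs. pence) rather than a true outlier.
    if record.get("order_value", 0) > 100_000:
        flags.append(f"order_value outside plausible range: {record['order_value']}")

    return flags


suspect = {"order_value": -4999, "delivery_days": 0}
for flag in plausibility_flags(suspect):
    print("REVIEW:", flag)
```

The value is less in the checks themselves than in making the domain assumptions behind them explicit and reviewable.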

Visualisation vs. Summary Statistics

A persistent debate in data communication is when to use visualisation and when to use summary statistics. The useful answer is that they serve different purposes and work best in combination. Summary statistics are efficient when the audience needs to know a specific value (what is the average, what is the range, how does this quarter compare to last?). Visualisation is stronger when the goal is to show a pattern, a relationship, or a distribution that a table would obscure.
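Anscombe's quartet is the classic demonstration of the second point: four datasets constructed so that their means, variances, and correlations nearly coincide while their scatter plots look nothing alike. A short sketch using only Python's standard library (statistics.correlation requires Python 3.10+) makes the point:

```python
# Anscombe's quartet: four x/y datasets engineered so the usual summary
# statistics nearly coincide while the plotted patterns differ completely.
import statistics

x_123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x_123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x_123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x_123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    r = statistics.correlation(x, y)  # Pearson correlation (Python 3.10+)
    print(f"{name}: mean_y={statistics.mean(y):.2f} "
          f"var_y={statistics.variance(y):.2f} r={r:.3f}")

# All four print mean_y ~ 7.50, var_y ~ 4.13, r ~ 0.816, yet plotted they are
# a clean line, a curve, a line with one outlier, and a vertical strip of
# points plus one point far to the right.
```

Only a plot distinguishes the four, which is precisely the case where visualisation earns its place over a table.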

The failure mode is using visualisation as decoration — producing charts because they look sophisticated rather than because they communicate something that a table cannot. A well-chosen chart that reveals a genuine pattern is far more useful than an elaborate dashboard that generates noise. The question to ask for any visualisation: what would a reader miss if this were a table instead? If the honest answer is "nothing much", use the table.

Closing the Loop

The final and most neglected stage is the feedback loop: does the decision get made, and does it produce the outcome the analysis predicted? In most organisations, analysts hand off their work and never learn whether it was used, whether the recommended action was taken, or whether the outcome matched the model's expectation. This disconnection is expensive — it means errors in assumptions or method go uncorrected, and improvements in the analytical process cannot happen systematically.

Closing the loop requires deliberate structure: a mechanism to track whether insights led to decisions, and whether those decisions produced the expected results. This feeds back into better data collection — because when you know what decisions your data needs to support, you can design collection around those needs rather than collecting everything and hoping something is useful later.
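What that structure looks like will vary by organisation, but even a minimal one forces the right questions. As a hypothetical sketch (the fields and example record below are illustrative, not a prescribed schema), a decision log might pair each analysis with its predicted and observed outcome:

```python
# Hypothetical sketch of a decision log: one record per analysis, tracking
# whether anyone acted on it and whether the predicted outcome materialised.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    analysis: str                         # what was analysed
    recommendation: str                   # what the analysis recommended
    predicted_outcome: str                # what the analysis said would happen
    decision_taken: bool = False          # did the recommendation lead to action?
    observed_outcome: str | None = None   # what actually happened, once known
    review_date: date | None = None       # when to compare prediction to reality

    def loop_closed(self) -> bool:
        """Closed only when action was taken and its outcome was recorded,
        so prediction and reality can be compared."""
        return self.decision_taken and self.observed_outcome is not None


log = [
    DecisionRecord(
        analysis="Q3 churn driver analysis",
        recommendation="Prioritise onboarding fixes over retention discounts",
        predicted_outcome="Churn falls within two quarters",
        decision_taken=True,
        review_date=date(2026, 3, 31),
    ),
]

open_loops = [r for r in log if not r.loop_closed()]
print(f"{len(open_loops)} of {len(log)} analyses have an unclosed loop")
```

Even something this simple makes the disconnection visible: a count of unclosed loops is a direct measure of how much analytical work is handed off and never heard from again.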

Want to build a more rigorous data-to-decision process?

We work with organisations to identify where analysis is breaking down and design the processes and capabilities needed to turn data into reliable decisions.

Contact us