
Organizations waste millions on dashboards, models, and automation each year. If a data project does not improve human effectiveness, it has failed. As practitioners, we have seen success only when technology serves people, and failure whenever people are expected to serve the machine.
This failure is reflected in the broader landscape. Gartner projects that by 2027, 80% of data and analytics governance initiatives will fail because they lack a real or even manufactured crisis to anchor them. Meanwhile, headlines on mass layoffs have intensified concerns that automation is being used to replace people rather than empower them. These dynamics follow a familiar pattern: leaders pursue automation and scale, teams inherit layers of complexity, and frontline workers retreat to spreadsheets when new systems fail to address the everyday questions that matter most.
The question then becomes: "Why do so many projects stall?" The answer lies in a fundamental confusion between building and using. Teams produce dashboards by the dozen, measure everything, and mistake quantity for quality. Employees are drowning in numbers and starving for usable insight. Industry reports repeatedly show that the majority of data science projects never reach production or fail to deliver value when they do. Those are not technology failures alone. They are human ones.
Recognizing this, our own rollouts begin with one blunt question: "What will this tool allow a real person to do differently tomorrow?" If the answer is merely "automate this task," we push back. Automation should free people to do more valuable work, not simply replace them or bury them in notifications. The most useful data projects change decisions and behavior; they help people ask better questions and take different actions.
That is why the center of any data initiative must be the people closest to the work. Not "data teams" in the abstract, but the field technician, the plant manager, the shift supervisor, and the account executive. Different roles need different data, and these needs must be vertically integrated so insight flows up and context flows down. Metrics are worthless unless they inform action for a specific user. Designing for that person first ensures that their information ties meaningfully to the rest of the organization.
Of course, even well-intentioned projects can go astray. Leaders often fall into three traps: measuring everything, delivering data in the wrong way, or gaming the numbers to make key performance indicators look better. Each trap produces noise, disengagement, and worse decisions. When dashboards are divorced from meetings, workflows, and shared accountability, they become wallpaper: pretty but inert.
Beyond inefficiency, there is also a moral dimension. Data curated to hide poor performance or manipulated to protect metrics destroys trust. Leaders should foster a culture where unfavorable data is surfaced and used to improve processes. People do not need perfect data; they need the right data, the few metrics that drive outcomes, presented at the point where decisions are made. Embedding those signals into weekly team habits and decision points is where real value is realized.
Still, some critics might argue that speed and scale require top-down systems and heavy automation. They may point to headcount pressures and claim that fewer people must do more. While automation is indeed part of modern work, blunt cuts made without regard for institutional knowledge create brittle organizations. Companies that cut too deeply often end up rehiring to restore lost context and capacity, evidence that efficiency without resilience is unsustainable.
And what if a rollout has already failed? Here, persistence in the wrong direction is dangerous. Doubling down on the same plan is rarely the answer. Failure is a diagnostic tool: it tells you that you worked on the wrong problem, built for the wrong user, or aimed too far into the future. The fix is incremental. Identify one concrete pain point, work with the people who feel it, and deliver immediate relief that proves value. Adoption scales only after the tool proves itself in small, human-sized steps. These small wins rebuild credibility, and each success restores confidence that data can serve people rather than burden them.
Once trust is established, the challenge becomes turning those few meaningful signals into practical tools that shape daily work. Meeting teams where they are means embedding the right signals directly into their workflows and decision points. When systems are designed to make people more effective, adoption follows naturally, and technology becomes a force that amplifies human judgment.
Ultimately, if you want data projects to succeed, stop building for the pipeline and start building for the person holding the pipe. Define success as human effectiveness. Our challenge to leaders is to invest in the people who use the tools, not only the tools. Do that, and your next "big" data project will be judged not by how many dashboards it produced but by how many better decisions it helped people make.
About the Authors
Joe Malucchi, co-founder of Quail Group, is focused on operational optimization and analytics-enabled product thinking. He helps teams turn big ideas into clear next steps, blending strategy with data to drive measurable improvements. Malucchi's approach emphasizes clarity and progress, enabling organizations to move confidently from vision to execution.
Zar Sewell is the co-founder of Quail Group, with 30 years of experience in change management and organizational performance. She has led Fortune 500 teams across industries, from strategy through execution, using methodologies like Prosci and Six Sigma. Fluent in three languages and seasoned in multicultural environments, Sewell is known for building strong relationships and communication patterns that make change stick.