Consulting engagements often begin without one thing being explicitly agreed: the metric that is expected to move.
In practice, the scope gets filled with activities: workshops, interviews, documentation, stakeholder sessions. These are treated as progress because they are visible and structured.
What is usually not fixed is the relationship between those activities and a measurable shift in the business: revenue, cycle time, conversion, cost-to-serve, error rate, or some other measure that defines before and after in operational terms.
Without that anchor, two things happen in parallel:
- First, delivery becomes self-validating. If sessions were run and documents were produced, the work is considered "done". The internal logic is completeness of activity, not change in performance.
- Second, decision-making loses pressure. If nothing is tied to movement in a metric, there is no hard threshold for saying: this intervention worked, or it didn’t. Everything remains interpretable.
This creates a specific failure mode. The organisation can complete a consulting engagement, implement parts of the recommendations, and still see no shift in underlying performance, yet there is no structural requirement to declare that outcome clearly.
Instead, the result is absorbed into reporting language: improved alignment, increased visibility, better understanding of the process. These statements describe perception of progress, not system change.
The end state is simple. Work has been executed. Activity is documented. But the business is left without a verifiable answer to whether anything material moved.