Galang specializes in good governance. This article was published in the Opinion Section, Yellow Pad Column of BusinessWorld, July 16, 2007 edition, page S1/4.
While googling the Web, I chanced upon an old article written by management writer Art Kleiner, the “writing coach” of Peter Senge. It was about two US professors of management quarreling, for over a decade, over a “vexing business question” hinted at in the title of his piece, “What are the measures that matter?” It’s about measuring performance by numbers.
Until this piece, I didn’t know it was a question vexing business as well. Now I think it must be an issue in any field where performance measurement is in place. Like in the development community where I come from.
In government, performance management was introduced about two decades ago as “management by results” along with a package of reforms (under the label “new public management”) that occasioned initiatives to “reinvent government.”
The development community, meanwhile, has preferred a label of its own, “results-based management.” It’s the system of choice for planning and evaluating performance required by most foreign funding agencies, or “donors,” for development projects implemented in recipient countries. Here’s one approach I know, and where the issues may lie.
You start with planning. Performance planning is really an exercise in making assumptions, said or unsaid. In the results-based management approach, for example, you agree on the ultimate “Strategic Objective” (Late Outcome in logic modeling), resting on your assumption that this has the most favorable impact on, say, the community.
Next, you assume a set of “Intermediate Results” (or Mid Outcome) in place, on the belief that these are your best steps to reaching your strategic objective. You get to them, in turn, through a lower layer of “Early Results”, the lowest in your hierarchy. This completes your “Results Framework.”
Closer to the ground, you assume a bigger set of tangible outputs as stepping stones to your early results. Then, finally, for each of the outputs, you list down the tasks or activities you assume you need to do to produce them.
In a word, you’re assuming a simple cause-and-effect relationship among your chosen results, outputs and activities: If this…Then this. But you are not assuming you’re in a vacuum, so you also assume that certain critical conditions obtain for all your assumptions to hold.
Once you’re finally done, experts will tell you you’ve just developed your project’s “theory of change.”
The scheme establishes accountability by making units and people responsible for meeting assigned targets. Targets are numerical proxies for otherwise abstract objectives, results or outcomes. In setting them, you aim to make performance countable, and therefore measurable. The mantra is, “If you can’t measure it, you can’t manage it.”
Managing it, however, makes for a highly risky enterprise, though practice varies. In some cases, project management is contracted out to for-profit or non-profit organizations (normally based in the donor country), including NGOs. In that case, the results framework comes to them ready-made.
It will come as part of the call for bids or a request for proposals to which vying entities respond with their most promising promise (or technical proposal). To the winner, it comes as part of the contract, to keep its winning promise.
Now, the risks. Firstly, you’re committing to achieve real targets resting on clouds of nested assumptions, which you played no part in developing, to begin with. Secondly, the assumed causal relationships follow a linear logic, ignoring feedback loops, which oversimplifies a complex, dynamic world.
Thirdly, project reality is almost always an open system, sensitive to elements and changes around it: How far can the planners see into the future to make assumptions about this, at what degree of detail?
And fourthly, the risks stem less from theory-building than from management believing that the assumptions hold true, now and forever. In other words, would management agree to relax the assumptions, if need be, at any point along the way?
If you fail to manage the risks, evaluation time should come as the scariest moment of your professional life, especially if management confuses evaluation with an exercise in fault-finding, with “coercive accountability.”
In an ideal world, evaluation finds its use in enriching project learning and informing policy and action. It aims to know “what works better for whom in what circumstances, and why.” It looks beyond quantities and gives due place to qualities. But I said “ideal world.”
The fact is, adherence to quantitative measures follows an established tradition that traces its roots to the beginnings of modern science itself. Some now call it the “science of quantities.” It’s science speaking the language of math, as pioneered by Galileo, thus requiring analysis to confine itself to measurable elements. What you cannot measure, you make measurable.
Since then, wrote one psychiatrist, “hardly anything has changed our world more than the obsession of scientists with measurement and quantification.” While the method has served science fruitfully well over the centuries, it was at the expense of the “unmeasurables.” Out goes quality!
Measurement, by itself, is not bad. But, said Lewis Thomas, “There is no doubt about it: measurement works when the instruments work, and when you have a fairly clear idea of what it is that is being measured, and when you know what to do with the numbers when they tumble out.”