|by Kent Aitken|
Last week's post was about how organizational language, culture, and processes encourage the oversimplification of both problems and solutions (see: Boundaryless Problems and the End of the Elevator Pitch). That oversimplification makes it easy to ignore the context of problems, and hard to appreciate the indirect or long-term benefits of any action.
(I've written in the past about how this hampers innovation and restricts collaboration, and proposed strategies to overcome it. Further back, I wrote a deeper dive about its adverse effects. I'll stop hammering on this theme soon.)
But after writing the post, I kept wondering whether the idea was remotely useful to on-the-ground public servants. So, we tend to oversimplify things. Is that a necessary shortcut, especially given the competing demands on our time? Or is it a lens that can help improve our planning? I'm not sure.
There are a few possible scenarios for this "ecosystem of problems and solutions" lens:
1. It's false
2. It's true, but useless
3. It's true, but only useful in some situations
4. It's true, but requires a particular response to be useful
No plan survives contact with the enemy
I want to dig into 2 and 4, starting with 2: it's true, but useless. Yes, there's an ideal state, in which we tackle a given problem exactly the way we should. But day-to-day, there are multiple problems, approaches, and solutions competing for our time and attention (see: Idealism and Pragmatism for Organizations). Maybe an 80% effort is less than ideal for a given problem, but best for the portfolio of problems we're facing.
In his book The Longer I'm Prime Minister, Paul Wells led us to an interesting possibility for 4: it's true, but requires a particular response to be useful. He pointed to Charles Lindblom's The Science of Muddling Through, the long story short of which is that yes, public policy is impossibly complex (zero hyperbole), so the only way of understanding one's own preferences is to choose a direction and run with it. It's ten pages long, and I highly, highly recommend reading it - first for the above, and second to note that the language of "complex public policy problems" is not a new phenomenon born of modern global finance, terrorism, or digital interconnectedness. It fits just as easily in that 1959 paper.
Lindblom's take would be that there's much merit in experiential knowledge and in large-scale experiments in the form of jurisdiction-wide policy changes. Skip the theories, frameworks, and mutually agreed-upon goals. If you think big enough, everything can be an experiment (e.g., the 10-year tax breaks in the US).
Oversight and Results
However, I think that Lindblom's solution is insufficient. One, governments have a certain responsibility towards fairness, even at the cost of efficiency, and the human impacts of experiments cannot be ignored. For instance, in both the UK and Greece, social scientists have linked austerity policies with increased suicide rates (see: this post on the importance of good public policy). And in the age of transparency, governments cannot just make backroom trades of fairness for effectiveness (see: The Social Contract).
Two, Lindblom suggests oversight in the form of multiple actors with competing interests: watchdog groups, lobbyists, and other responsibility centres within government. However, as Yves Morieux has pointed out, when someone has multiple people lobbying them, the marginal cost of ignoring any particular one of them is pretty low. Worse, none of those lobbying are paid to lobby for good systems overall; they're paid to adamantly recommend the solution that maximizes the variable they represent.
So where does this leave us? If I'm to be believed, both rational planning and experimentation-plus-oversight are flawed approaches to public policy, which is not particularly inspiring. But I'll let that hang for today and pick it up again shortly.