|by Kent Aitken|
The last couple of posts have been working through a question about how organizations plan. Long story short:
- We've institutionalized oversimplification (a symptom being the "Elevator Pitch")
- Correcting that oversimplification is likely an unrealistic goal
- The solution (according to Charles Lindblom) is letting policy decision-makers throw rational planning out the window and "try stuff"
- To avoid unfairness, policy proposals will be closely watched by an ecosystem of stakeholders: groups responsible for related goals in government, lobbyists, NGOs, and think tanks
On the surface, this sounds like the zeitgeist: experimentation, innovation, and collaboration. However, the "try stuff" here refers to large-scale national policy, not pilots: "trying stuff" on, say, tuition subsidies has a massive impact on people's lives. And the role of that "ecosystem of stakeholders" isn't collaboration: it's recommendation, or advice.
In the last post, I linked to Yves Morieux, who breaks down the economics of multi-stakeholder decision-making: one person owns the decision, but not the inputs required to make it. He paints a portrait of a car manufacturer, in which the lead designer must satisfy the organization's experts in noise reduction, fuel efficiency, repairability, safety, and much more. It's easy to imagine how fuel efficiency and safety could be at odds: do you make a car lightweight, or an urban armored personnel carrier? So we have a designer whose bonus (but not base salary) depends on performance against competing goals set by 26 different people. That makes their incentive to care about any individual one of those goals very close to zero.
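The arithmetic behind that near-zero incentive can be sketched in a few lines. The numbers here are illustrative assumptions (the bonus share is hypothetical; only the 26 goals comes from Morieux's example):

```python
# Hypothetical illustration: how much of total compensation actually
# rides on any single goal when a bonus is split across many of them.
# The 15% bonus share is an assumption for illustration, not a fact
# from Morieux; the 26 goals figure is from his car-designer example.

def per_goal_stake(bonus_share: float, num_goals: int) -> float:
    """Fraction of total compensation tied to any one goal,
    assuming the bonus is weighted evenly across goals."""
    return bonus_share / num_goals

# Suppose 15% of pay is performance-based, spread across 26 goals:
stake = per_goal_stake(0.15, 26)
print(f"{stake:.2%} of total pay per goal")  # roughly 0.58%
```

Under those assumptions, less than one percent of the designer's pay hinges on any single expert's goal, which is why ignoring any one of them is nearly costless.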
Recommendation-based systems do nothing to address the asymmetry between the incentives of those involved. Put simply: recommenders don't get paid to contribute to the best outcome. They primarily get paid to promote the variable they represent, as loudly and vociferously as possible. They do not get paid to look for compromises, concede when others make valid arguments, or even to develop long-term relationships and credibility. In the above example, the safety expert's concern is chiefly to understand the optimal outcome from a safety perspective; it's the designer's job to worry about how to square that with fuel efficiency.
Why is this? It's partially innocent bias: people care about what they know about. I'm sure far more than 50% of the population thinks their expertise is of above-average importance. But more than that, it's that the people who hold recommenders to account are themselves a step removed from the decision space, and likewise rely on oversimplified elevator pitches for setting goals. It doesn't help that recommenders rarely receive any feedback about the results of their role in the decision.
Recommendations exist in a partial vacuum, whereas decisions exist in an ecosystem.
So what's the solution? Morieux proposes six elements (paraphrasing):
- Ensure that players in the ecosystem understand what the others do
- Reinforce integrators
- Remove layers
- Increase the quantity of power so that you can empower everybody to use their judgment
- Create feedback loops that expose people to the consequences of their actions
- Increase reciprocity, by removing the buffers that make [people] self-sufficient
In other words: make people meaningfully responsible for the outcomes of their work, make people responsible for collaboration, and make sure they can see and understand the ecosystem.
No amount of communication or planning can solve this issue entirely - the change has to come in how power is distributed and used.