Wednesday, April 22, 2015

Innovation and Rigour


by Kent Aitken


There's a general rule in academic research: you know you're coming to the end of your literature review when you've already read all the references in the articles you're reading. The second line of defence is validation by an advisor who has been studying the field for decades.

We don't have that luxury in public administration, and particularly in public sector innovation. There's no such systematic record of experiential and practitioner knowledge. How do we know when we know enough about a skill, project, or field?


The four stages of competence

You may be familiar with the four stages of competence model for a given skill. The idea is that we start incompetent, but don't truly know it. We know so little about the skill that we can't even meaningfully assess our own ability: (1) unconscious incompetence. As we learn more, we realize how little we know, reaching (2) conscious incompetence. Eventually we become adept and know it, the level of (3) conscious competence. When we master something, we can do it on autopilot, without really thinking: (4) unconscious competence.

I wrote last week that, in general, people are probably unconsciously incompetent at facilitating online collaboration (see: The Promise of Online Collaboration). That isn't trivial: if someone has a bad experience collaborating with a group, they'll disengage and not return (just as they would after a poorly designed meeting or conference). So where we fit in this competence rubric for a given project matters. We're making decisions in the public trust. How do we know when we're prepared to do so? When should we move forward rather than checking signals with others or conducting additional research?


The dark side of experimentation

Experimenting is good. Experimenting without truly knowing what we're experimenting on is not: it leads people to skimp on the equivalent of the literature review (What's been done before? Who can I talk to for advice?), skip setting markers to know if they're on the right track, and fail to critically assess and share the knowledge gained (see: Standardizing Innovation).

It's tempting to use a baseline of zero: "Before we did X, nothing was happening; then we did X and something happened: ergo, success." But unconscious incompetence applies to both implementation and measurement. Falsely declaring success creates complacency (see: Pilot Projects and Problems). It slows the move towards competence. And it represents underdelivery.


Get meta about innovation

So what's the equivalent of academic rigour for public sector innovation? Could we work out useful heuristic questions to ask ourselves? Something like:
  • Who are the leaders in this field? What are they doing? Can I talk to them?
  • Could I give an hour-long presentation on this field tomorrow?
  • Where would this project fit in the Cynefin framework for problem complexity?
  • What will this project impact? (The organization's future prospects? Groups of stakeholders? How many people?) Is the level of rigour in designing the project commensurate with its potential impacts?
I don't know what would work, but I'd love to hear ideas. Because every time we try something that hasn't been done before and succeed, the culture needle moves a tiny bit from risk-averse to innovative. Every time we fail, it wiggles back. We owe it to ourselves, our colleagues, the next generation of driven public servants, and to Canadians to be thoughtful and purposeful. But we also owe it to them to avoid the dark side of being unnecessarily rigorous: one project's reckless abandon could be another's costly analysis paralysis.

This is what it means to get meta about innovation: how do we move from asking ourselves "Is this a good idea?" to "How do I know this is a good idea?"
