Friday, February 10, 2017

Public sector innovation


by Kent Aitken


This post is a response to an email from a provincial government colleague, who asked for frameworks and models for organizational innovation. I, perhaps, got carried away; it's a little lengthy for a blog post (here's a print-friendly version). I'd like to consider it an evergreen draft for the time being, as I'm sure I'm missing major chunks or have the blinders on for parts; I welcome comments on the working document. - Kent

Introduction, and innovation defined


It’s important to make sure that everyone involved in your conversation about innovation is using roughly the same meaning; innovation gets overused to the point where it’s amorphous and meaningless. For this document, we mean change or novelty - so change to existing concepts, or new concepts - that gets implemented at scale and that creates value. Here’s NESTA’s model:


Source: NESTA

Two takeaways from the above: 1) good ideas that flop during testing aren’t innovations, but they are part of the innovation process; 2) “making the case” comes after testing.

It’s unhelpful to think of innovation as a skill, or of people as innovative. Innovation is a discipline that can be practiced, and some people practice it - by inclination, affinity, or opportunity - more than others, and to greater effect. It is better understood as a process than an event.

At the core of innovation is the idea of problem definition: know what problem you’re trying to solve. (Put differently, by Lincoln: “If I had five minutes to chop down a tree, I'd spend the first three sharpening my axe.”) This applies both to individual efforts towards innovation, and the meta-level question of innovation in government.

Turning to that problem: is the question fostering innovation? Encouraging innovation? Allowing innovation? Facilitating innovation? Supporting innovation? A combination of all of the above? Even a seemingly innocuous change in what verb we’re using reflects very different mental models about how innovation happens, and what stops it.

The discourse about public sector innovation jumps too quickly from problem to solution, skipping past the question "Why aren't we innovating more now?". (For starters, sometimes you actually just don't need it.) When you do, the solution is rarely as simple as connecting ideas to senior executives. Surface-level fixes to perceived innovation gaps are what lead to Dragon's Dens to pitch "innovative ideas," under-supported innovation labs, hackathons to "just see what people come up with," and crowdsourcing for the sake of testing the idea of crowdsourcing.

The need for innovation


The concept that “government needs innovation” is a second-order condition, not a starting point. In reality, it’s that - sometimes, in context - particular programs are better served by innovation than a continuation of previous activities. In aggregate, this can turn into “government needs more innovation.” But the question can’t be “How do we innovate more?”, it has to be “How do we create a system whereby the status quo and valuable alternatives are on a level playing field, such that we can test and prove new ideas as reliably as we accept old ones?”

To back up slightly, let’s consider an arc of innovation that is both an analogy and a predecessor, that of telecommunications. We’ve gone from letter-writing to printing presses, telegraphs, telephones, the internet, and now to low-cost ubiquitous mobile connections. Every combination of one-to-one, one-to-a-select-few, one-to-many, public forums, with every combination of attributed or anonymous, for every combination of formats, all at a vanishingly small cost.

But here's the key: at one point, to communicate long-distance you had one option: handwriting a letter. Later, you had two: handwriting a letter, or paying to have something reproduced many times on a printing press. You didn't have to rely on a letter when it wasn't the best option. As more and more options became available, you could match your communications goal more precisely to different ways to achieve it.

Likewise, now we have a wider range of policy development approaches and policy instruments, which means there’s a greater chance that we can match the right approach to the right situation - if we can deploy those approaches without artificial barriers.

Some common approaches


We shouldn’t slip into mental models that restrict us to an established catalog of “innovations.” However, it might be useful to consider the toolkit that public sector organizations are often drawing on in efforts to innovate. Some of these are focused on inputs, some on outputs (that is, how we approach decisions on what to do, versus what we actually do).


  • Foresight: systematic exploration of a range of plausible futures for a field, technology, or policy area, often used in environmental scanning.
  • “Open” approaches: citizen and stakeholder engagement in service and policy design. This can be online deliberation, argument mapping, citizen’s panels, facilitated sessions, roundtables, or dozens of other methods. See: People and Participation, Designing Public Participation, and Dialogue by Design.
  • Participatory budgeting: overlaps with citizen engagement, but actually sets aside portions of government budgets for citizens to decide on. Usually comes with a lot of work to create a fair and inclusive process, including web platforms to help people explore, debate, and vote on options, and to consider trade-offs and competing views.
  • Crowdsourcing: overlaps with citizen engagement, but let’s think of crowdsourcing as aiming for light inputs (ideas, concerns, suggestions, edits, votes) from many people. See: Crowdsourcing as a new tool.
  • Citizen science: creating platforms (toolkits, web platforms, games, physical infrastructure) that allow citizen inputs to government data collection: e.g., water and pH levels, photographs from standardized perspectives, star field mapping, protein folding. Here’s a cool example: Water Rangers.
  • Open data: releasing data created and collected by government to allow for third-party uses: social, economic, and academic research; platforms for access to government services and data;  business intelligence, etc.
  • Hackathons: collaborative problem-solving sessions, typically with technological solutions (but not always), that bring people together to define a problem then prototype and test minimum viable solutions, usually within 48 hours. I helped organize and run this one.
  • Behavioural insights: generating and testing hypotheses from the behavioural psychology and behavioural economics literature to examine and optimize citizens’ interactions with governments (e.g., different language on letters from tax organizations leads to different response rates and times). Often paired with A/B testing - trialling two products or approaches in parallel - to get authoritative data on which worked better. Worth it to skip straight past articles to the classic book on the topic.
  • Impact-based delivery models: can be partnership models, procurement, grants and contributions, or other funding models. Governments are increasingly exploring ways to get away from defining strict requirements for contracts, partnerships, or products up front and instead dispensing funds based on measurable impact, with the approach left up to the third party. E.g., the UK Social Value Act, social finance, pay-for-performance, or many public-private partnership governance models.
  • Challenge prizes: posting monetary prizes for hard-to-solve problems that can be attempted by any individual or group. Groups attempt to solve problems of their own volition, with no guarantee of compensation unless they succeed. Often looking for technological or research proofs-of-concept, which governments can then purchase or pursue. Challenge.gov and the X-prize are the common examples.
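
The A/B testing mentioned under behavioural insights can be sketched with a standard two-proportion z-test. This is an illustrative sketch only - the response counts below are hypothetical, not from any real tax-letter trial:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: did variant B outperform variant A?

    Returns the z statistic; |z| > 1.96 indicates a difference
    significant at the 5% level (two-sided).
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled response rate under the null hypothesis of no difference
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 1,200 of 10,000 recipients of the standard
# letter responded, vs. 1,350 of 10,000 for the reworded letter.
z = two_proportion_z(1200, 10000, 1350, 10000)
print(round(z, 2))  # → 3.18: well past 1.96, so the difference is significant
```

In practice a behavioural insights unit would also pre-register the hypothesis and check effect size, not just significance, but the core comparison is this simple.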

That said: thinking in the above terms can be a trap. The process of deciding on action can often be boring. Good problem definition and ideas analysis can lead to re-using old solutions, or tweaking them only a little, and this has to be okay.

If a government policy team spends time making observations and conducting exit interviews with citizens using a particular service (an activity that could be equally called citizen engagement, behavioural insights, or design thinking) and makes minor policy changes based on that insight, is that innovation? If it creates value and they can scale it, sure.

On the flip side, for any private sector leap forward or emerging technology, there’s always someone waiting to say “How might we use [artificial intelligence/big data/data analytics/platform models/driverless cars/virtual reality/quantum computing] in government?” It’s worth musing about, but will only be worthwhile when that thinking happens to connect with a genuine problem - in a real-world program, policy, or service area - that needs solving.

Which means that none of the above approaches can be used in a vacuum: they need the connection to mandates, and the people who can understand and implement them. It’s tempting to think that any of the above might be the perfect approach in a given situation, but unless the problem owner can and should deliver on them, it isn’t. That governments exist to deliver on mandates from elected officials is bizarrely easy to forget.

It’s worth noting that many of the listed approaches are, in essence, just different ways to gain information and insight into stakeholders’ needs - that is, they can be routes to better problem definition and different ideas. They are ways to understand, and hopefully manage, complexity.

Internal innovation campaigns

On that note, a quick thought on internal innovation campaigns: that is, asking employees for ideas on how to improve the workplace or the organization's outputs. This is where there’s more control, less risk, and more room to play. However, if you open the floodgates and receive 1,000 ideas, you probably have to reject 995+ of them. You only have so much time and executive willpower, and some ideas will just be unfeasible. Consider only asking for ideas when management is genuinely ready to make change, and consider scoping out a theme area.

Many ideas will be predictable: Google's 20% time, skip-level meetings, development opportunities, 360 degree reviews, use [technology] for [activity], bring-your-own-device, buy [technology] for employees, telework, etc. If you’re interested in porting an established process into your organization, consider instead engaging employees on the details and implementation.

Some companies take internal innovation campaigns very seriously, and show more promise than my skeptical take. See: Why governments would never deploy Adobe’s Kickbox and why maybe they should.

Design thinking

Design thinking gets its own section, because if you solve for design thinking, you often end up solving for much of the above. It’s a structured process that helps organizations work through the problem definition, their own capacity, and the right approach - which might be innovative, or it might be boring.

These five steps are at the core of design thinking:

Source: IDEO

  1. Empathize: understand stakeholders and their needs, mindsets, challenges, and attitudes. Common techniques include interviews, observing people interacting with services, or group dialogue models. People can’t comprehend how powerful this is - and how much they were assuming - until they actually do this themselves.
  2. Define: once again, a focus on defining your problem really, really well.
  3. Ideate: generate ideas for how to solve the problem. Allowing different stakeholders into this process, having different perspectives mix, and using facilitation techniques to create space for creative thinking have all been shown to generate more novel ideas than simply asking for suggestions.
  4. Prototype: make anything (e.g., paper mock-ups of service interactions, lego buildings, rough websites, draft policies) that people can touch, explore, and react to. The act of building will help problem owners and idea generators understand how something will look, feel, and work in practice...
  5. ...but not as much as testing it will. Ideas from the previous stage will invariably be laden with assumptions, and when users - real, honest-to-goodness end users - start interacting with even a rough design, assumptions will be revealed, which will create opportunities to correct them. A sample size of five testers will reveal most of the critical failure points in, for example, a web interaction.
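
The “sample size of five” claim traces back to the Nielsen-Landauer model of usability testing, which estimates the share of problems found by n testers as 1 - (1 - p)^n. A quick sketch, assuming Nielsen’s commonly cited average per-tester detection rate of 31% (real rates vary by product and test design):

```python
def problems_found(n_testers, p_detect=0.31):
    """Expected share of usability problems uncovered by n testers,
    per the Nielsen-Landauer model: 1 - (1 - p)^n.

    p_detect=0.31 is Nielsen's oft-cited average per-tester detection
    rate; treat it as an assumption, not a constant of nature.
    """
    return 1 - (1 - p_detect) ** n_testers

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
# 1 tester finds ~31% of problems, 5 find ~84%, 10 find ~98% -
# hence the advice to run several small rounds of testing rather
# than one large one.
```

The practical upshot: test early with five users, fix what you find, and test again, rather than saving up for one big study.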

Design is also a discipline to practice, not a skill or an easily replicable set of steps. There are many, many ways to get the social science wrong at the empathize stage, generating false results, if you don’t know what you’re doing. Work with experienced designers while building your own capacity.

A related model is the “double diamond”, below. It describes the process of opening the conversation up to allow many considerations into the problem space, looking at many possible stakeholders and issues, before narrowing the focus down into a tight problem definition. Once that is done, the conversation opens back up to allow many possible solutions to surface, before deciding on one or some and taking steps to define and implement them.

Source: Rachel Reynard

Organizational models and strategies

It’s very different to talk about innovation in a localized, contextualized way (e.g., within a policy/program/service unit) and in an organizational, cross-government way. Individual units will innovate constantly, and the efforts may never bubble up to the government-wide radar. In many cases, the change won’t be called “innovative” - it’ll be born out of incremental change, financial pressures, or an opportunity to simply improve the program and it’ll just get called “better.”

The classic piece on innovation in private sector organizations found significant differences depending on where “innovation units” were positioned within the organizational structure. The short version: the wrong design and governance decisions can nearly guarantee failure.

However, organizational-level strategies can support individual, contextualized innovations in a few ways, including:

  1. Identifying and removing barriers
  2. Creating safe space for experimentation
  3. Providing training, tools, and resources

Barriers to new approaches

Get used to the idea of problem DNA. Every barrier and obstacle isn’t a homogeneous element; each is a unique combination of policy, risk, culture, understanding, time constraints, opportunity costs, process, communications, and other building blocks. Solving for one element rarely solves the problem.

Let’s return to the idea that, with new options for policy development and implementation, there’s a greater chance to match the right approach to the right situation. However, rules, policies, and processes were often designed before these options became available, so there are often barriers to their use, including hiring, mobility, contracting, procurement, and training. Creating flexibilities or exception processes in these systems is the blanket, one-size-fits-all approach to freeing up space to innovate. Systematically taking steps to understand, and adjust for, the impact of these forces on how employees work is more effective - albeit more time-consuming and more demanding of management commitment.

Safe spaces and innovation labs

An increasingly common approach is to create “innovation labs,” which are theoretically safe spaces that can bring people together to define problems and experiment with solutions.

For a proper, peer-reviewed definition:

“An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges.”

These can be within organizations, at arms-length, or purely external as partners. They tend to be centres of expertise for design thinking, prototyping, facilitation, and other useful process skills. It is not that they are full of smart people (though they tend to be), it is that they are full of people with toolkits for organizational learning and for helping groups of people reveal and explore their collective wisdom and knowledge.

Labs tend to partner with business units once a need for change or exploration is identified. They essentially act as a combination of host, consultant, and partner for a policy/program/service development, implementation, or evaluation journey.

In governments, they tend to have dotted lines to senior executives to help create staffing, spending, and governance exemptions to allow for more agile, free-flowing operations. Which is important, as partnerships and collaboration within and outside the organization tend to define such labs.

NESTA has an innovation lab, MindLab is one of the longest-standing examples, and the MaRS Solutions Lab in Toronto has a helpful article on Social innovation labs: Top tips and common pitfalls. (There are many others, in and out of government. E.g., Alberta's CoLab, New Brunswick's NouLAB.)

Anecdotally, working with external labs (i.e., those outside of government) appears more likely to allow for the expertise, safe space, and transparency required. Expect that work with such labs will take a lot of time. The good news is it’s the appropriate amount of time for meaningful change, and the simpler solutions you would have arrived at through a half-hearted exploration would be incomplete, misleading, and ultimately less effective.

Central hubs, expertise, and resources

Organization-wide innovation strategies can also take steps to ensure the on-demand availability of training (e.g., on citizen engagement, facilitation, or data science), resources (including people and money), or expertise and advice.

Many skills central to the common toolkit of approaches are disciplines in and of themselves. They’re too specialized to fit into many permanent teams, and they’re called on at irregular intervals. For such skills, governments are creating central centres of expertise to house permanent specialist staff, who act as common resources to the rest of government: e.g., public engagement offices or behavioural insights units. The Government of Canada has a hub for a variety of approaches. They might operate on cost recovery or as free resources, but with criteria for choosing the highest-impact projects to work with. If such specialists are creating value above and beyond their cost, capacity can be added over time (e.g., the UK Nudge unit).

Communities of practice are also useful for building capacity for skills for which external professional networks are few and far between. Ideally, managing these communities should be part of someone’s job, if not their full-time job. A well-run community creates connective tissue between experiments and pockets of knowledge scattered across organizations, which is particularly important for evolving, emerging fields of practice.

On levers

What levers for change do you have? Is your organization willing to make substantial changes to, for example, HR or contracting policy for the sake of an innovation agenda? It’s helpful to think through two lenses:

  1. What’s the best we can possibly do, given our current parameters?
  2. What possibilities would be available with certain strategic structural changes?

Levers for change may include:

  • Laws, policies, and regulations
  • Performance management frameworks
  • Executive/political commitment and champions
  • Internal and external communications
  • Training
  • Toolkits and resources
  • Hiring (or alternative models like short-term tours, fellowships, exchanges)
  • Procurement
  • Contracting
  • Organizational design, including the labs and hubs from above
  • Changes to processes (e.g., mandatory fields in spending proposals)
  • Common-use programs (e.g., centrally-led open data portals, citizen engagement tools, challenge prize platforms, web platforms that any department can use)

What to do next


    1. Make sure the most important stakeholders in your organization are talking about the same thing when they say “innovation.”
    2. Explore the current barriers, and don’t settle for surface-level answers (e.g., “risk aversion,” the most common, is a symptom, not a cause: what’s behind it?).
      1. Risk symptoms are most easily solved by 1) crisis or scrutiny that suddenly makes novel approaches more palatable than a publicly failing status quo, or 2) commitment and willpower, most often stemming from the political layer. For smaller-scale risks, commitment and willpower from senior executives can suffice. That is, they have to personally evangelize and clear obstacles for the change. Repeatedly and consistently.
    3. Agree on parameters: resources, level of commitment, available levers for change.
    4. Identify, or more likely enlist or partner with, people with deep expertise.
      1. Take a moment to consider how your organization knows expertise when it sees it, particularly for new, rapidly evolving, un-professionalized skills (e.g., there are no equivalents to Certified Accountants for public sector innovation).
      2. Find a role for the people in-house who want to be a part, and make sure it is neither above their heads nor meaningless (considering oneself innovative doesn’t mean one can lead this work; however, anyone who’s been pushing for these approaches will feel left out if external expertise is parachuted in).
    5. Create criteria for project intake and create ways for people to find you.
      1. There’s danger here, commonly referred to as “solutions looking for problems.” Ideally the innovation model is that everyone is focusing on their mandates, but with a light layer of constant learning, environmental awareness, and knowledge of the organization-wide innovation capacity for when it’s needed and would solve the problem better.
    6. Read and talk to people: there are tons of great resources out there. The key is to learn how to find credible sources that best match your current context, and to triangulate between a few for each concept. Supplement with rapid evidence assessments.
    7. Write the future in pencil: experiment at both the project level and the meta, innovation-in-government level, learn, and change. If your “innovative project” has to be a success - that is, a project success, not an organizational learning success -  you’ve sold it wrong and your organization is thinking about it wrong.
