
Friday, February 10, 2017

Public sector innovation


by Kent Aitken


This post is a response to an email from a provincial government colleague, who asked for frameworks and models for organizational innovation. I, perhaps, got carried away; it's a little lengthy for a blog post (here's a print-friendly version). I'd like to consider it an evergreen draft for the time being, as I'm sure I'm missing major chunks or have the blinders on for parts; I welcome comments on the working document. - Kent

Introduction, and innovation defined


It’s important to make sure that everyone involved in your conversation about innovation is using roughly the same meaning; innovation gets overused to the point where it’s amorphous and meaningless. For this document, we mean change or novelty - so change to existing concepts, or new concepts - that gets implemented at scale and that creates value. Here’s NESTA’s model:


Source: NESTA

Two takeaways from the above: 1) good ideas that flop during testing aren’t innovations, but they are part of the innovation process; 2) “making the case” comes after testing.

It’s unhelpful to think of innovation as a skill, or of people as innovative. Innovation is a discipline that can be practiced, and some people practice it - by inclination, affinity, or opportunity - more than others, and to greater effect. It is better understood as a process than an event.

At the core of innovation is the idea of problem definition: know what problem you’re trying to solve. (Or, as the saying often attributed to Lincoln goes: “If I had five minutes to chop down a tree, I'd spend the first three sharpening my axe.”) This applies both to individual efforts towards innovation and to the meta-level question of innovation in government.

Turning to that problem: is the question fostering innovation? Encouraging innovation? Allowing innovation? Facilitating innovation? Supporting innovation? A combination of all of the above? Even a seemingly innocuous change in what verb we’re using reflects very different mental models about how innovation happens, and what stops it.

The discourse about public sector innovation jumps too quickly from problem to solution, skipping past the question “Why aren’t we innovating more now?” (For starters, sometimes you actually just don't need innovation.) When you do need it, the solution is rarely as simple as connecting ideas to senior executives. Surface-level fixes to perceived innovation gaps are what lead to Dragon’s Den-style pitches of “innovative ideas,” under-supported innovation labs, hackathons to “just see what people come up with,” and crowdsourcing for the sake of testing the idea of crowdsourcing.

The need for innovation


The concept that “government needs innovation” is a second-order condition, not a starting point. In reality, it’s that - sometimes, in context - particular programs are better served by innovation than a continuation of previous activities. In aggregate, this can turn into “government needs more innovation.” But the question can’t be “How do we innovate more?”, it has to be “How do we create a system whereby the status quo and valuable alternatives are on a level playing field, such that we can test and prove new ideas as reliably as we accept old ones?”

To back up slightly, let’s consider an arc of innovation that is both an analogy and a predecessor, that of telecommunications. We’ve gone from letter-writing to printing presses, telegraphs, telephones, the internet, and now to low-cost ubiquitous mobile connections. Every combination of one-to-one, one-to-a-select-few, one-to-many, public forums, with every combination of attributed or anonymous, for every combination of formats, all at a vanishingly small cost.

But here's the key: at one point, to communicate long-distance you had one option: handwriting a letter. Later, you had two: handwriting a letter, or paying to have something reproduced many times on a printing press. You no longer had to rely on a letter when it wasn't the best option. As more and more options became available, you could match your communications goal more precisely to different ways to achieve it.

Likewise, now we have a wider range of policy development approaches and policy instruments, which means there’s a greater chance that we can match the right approach to the right situation - if we can deploy those approaches without artificial barriers.

Some common approaches


We shouldn’t slip into mental models that restrict us to an established catalog of “innovations.” However, it might be useful to consider the toolkit that public sector organizations often draw on in efforts to innovate. Some of these approaches focus on inputs, some on outputs (that is, how we approach decisions on what to do, versus what we actually do).


  • Foresight: systematic exploration of a range of plausible futures for a field, technology, or policy area; often used in environmental scanning.
  • “Open” approaches: citizen and stakeholder engagement in service and policy design. This can be online deliberation, argument mapping, citizens’ panels, facilitated sessions, roundtables, or dozens of other methods. See: People and Participation, Designing Public Participation, and Dialogue by Design.
  • Participatory budgeting: overlaps with citizen engagement, but actually sets aside portions of government budgets for the citizens to decide on. Usually comes with a lot of work to create a fair and inclusive process, including web platforms to help people explore, debate, and vote on options, and to consider trade-offs and competing views.
  • Crowdsourcing: overlaps with citizen engagement, but let’s think of crowdsourcing as aiming for light inputs (ideas, concerns, suggestions, edits, votes) from many people. See: Crowdsourcing as a new tool.
  • Citizen science: creating platforms (toolkits, web platforms, games, physical infrastructure) that allow citizen inputs to government data collection: e.g., water and pH levels, photographs from standardized perspectives, star field mapping, protein folding. Here’s a cool example: Water Rangers.
  • Open data: releasing data created and collected by government to allow for third-party uses: social, economic, and academic research; platforms for access to government services and data; business intelligence, etc.
  • Hackathons: collaborative problem-solving sessions, typically (but not always) aimed at technological solutions, that bring people together to define a problem then prototype and test minimum viable solutions, usually within 48 hours. I helped organize and run this one.
  • Behavioural insights: generating and testing hypotheses from the behavioural psychology and behavioural economics literature to examine and optimize citizens’ interactions with governments (e.g., different language on letters from tax organizations leads to different response rates and times). Often paired with A/B testing of two products/approaches to get authoritative data on which worked better. It’s worth skipping straight past articles to the classic book on the topic.
  • Impact-based delivery models: can be partnership models, procurement, grants and contributions, or other funding models. Governments are increasingly exploring ways to get away from defining strict requirements for contracts, partnerships, or products up front, instead dispensing funds based on measurable impact, with the approach left up to the third party. E.g., the UK Social Value Act, social finance, pay-for-performance, or many public-private partnership governance models.
  • Challenge prizes: posting monetary prizes for hard-to-solve problems that can be attempted by any individual or group. Groups attempt solutions of their own volition, with no guarantee of compensation unless they succeed. These often seek technological or research proofs-of-concept, which governments can then purchase or pursue. Challenge.gov and the XPRIZE are the common examples.
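The A/B testing mentioned under behavioural insights can be as simple as comparing response rates between two letter variants. Here is a minimal sketch using a two-proportion z-test; the numbers are entirely hypothetical, and in practice you would lean on a statistics library rather than hand-rolling the math:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two response rates
    (normal approximation with a pooled proportion)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical trial: letter A gets 340/2000 responses, letter B 290/2000
z, p = two_proportion_z(340, 2000, 290, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the difference clears the conventional 0.05 significance threshold; the point is that the comparison, not intuition, decides which letter wins.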

That said: thinking in the above terms can be a trap. The process of deciding on action can often be boring. Good problem definition and idea analysis can lead to re-using old solutions, or tweaking them only a little, and this has to be okay.

If a government policy team spends time making observations and conducting exit interviews with citizens using a particular service (an activity that could be equally called citizen engagement, behavioural insights, or design thinking) and makes minor policy changes based on that insight, is that innovation? If it creates value and they can scale it, sure.
On the flip side, for any private sector leap forward or emerging technology, there’s always someone waiting to say “How might we use [artificial intelligence/big data/data analytics/platform models/driverless cars/virtual reality/quantum computing] in government?” It’s worth musing about, but will only be worthwhile when that thinking happens to connect with a genuine problem - in a real-world program, policy, or service area - that needs solving.

Which means that none of the above approaches can be used in a vacuum: they need the connection to mandates, and the people who can understand and implement them. It’s tempting to think that any of the above might be the perfect approach in a given situation, but unless the problem owner can and should deliver on them, it isn’t. It is bizarrely easy to forget that governments exist to deliver on mandates from elected officials.

It’s worth noting that many of the listed approaches are, in essence, just different ways to gain information and insight into stakeholders’ needs - that is, they can be routes to better problem definition and different ideas. They are ways to understand, and hopefully manage, complexity.

Internal innovation campaigns

On that note, a quick thought on internal innovation campaigns: that is, asking employees for ideas on how to improve the workplace or the organization's outputs. This is where there’s more control, less risk, and more room to play. However, if you open the floodgates and receive 1,000 ideas, you probably have to reject 995+ of them. You only have so much time and executive willpower, and some ideas will simply be infeasible. Consider only asking for ideas when management is genuinely ready to make change, and consider scoping out a theme area.

Many ideas will be predictable: Google's 20% time, skip-level meetings, development opportunities, 360-degree reviews, use [technology] for [activity], bring-your-own-device, buy [technology] for employees, telework, etc. If you’re interested in porting an established process into your organization, consider instead engaging employees on the details and implementation.

Some companies take internal innovation campaigns very seriously, and show more promise than my skeptical take suggests. See: Why governments would never deploy Adobe’s Kickbox and why maybe they should.

Design thinking

Design thinking gets its own section, because if you solve for design thinking, you often end up solving for much of the above. It’s a structured process that helps organizations work through the problem definition, their own capacity, and the right approach - which might be innovative, or it might be boring.

These five steps are at the core of design thinking:

Source: IDEO

  1. Empathize: understand stakeholders and their needs, mindsets, challenges, and attitudes. Common techniques include interviews, observing people interacting with services, or group dialogue models. People can’t comprehend how powerful this is - and how much they were assuming - until they actually do this themselves.
  2. Define: once again, a focus on defining your problem really, really well.
  3. Ideate: generate ideas for how to solve the problem. Allowing different stakeholders into this process, having different perspectives mix, and using facilitation techniques to create space for creative thinking are all proven to generate more novel ideas than simply asking for them.
  4. Prototype: make anything (e.g., paper mock-ups of service interactions, lego buildings, rough websites, draft policies) that people can touch, explore, and react to. The act of building will help problem owners and idea generators understand how something will look, feel, and work in practice...
  5. ...but not as much as testing it will. Ideas from the previous stage will invariably be laden with assumptions, and when users - real, honest-to-goodness end users - start interacting with even a rough design, assumptions will be revealed, which will create opportunities to correct them. A sample size of five testers will reveal most of the critical failure points in, for example, a web interaction.
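The five-tester figure in step 5 traces back to Jakob Nielsen and Tom Landauer's usability research, which models the share of problems a test uncovers with n testers as 1 - (1 - p)^n, where each tester finds roughly p ≈ 31% of problems on average. A quick illustration (the 31% is their published average, not a universal constant):

```python
# Expected share of usability problems uncovered by n testers, per
# Nielsen & Landauer's model: found(n) = 1 - (1 - p)**n, where p is
# the average per-tester problem-discovery rate (~0.31 in their studies).
def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 15):
    print(f"{n:2d} testers -> {problems_found(n):.0%}")
```

Five testers land around 84% of problems under this model, which is why small, cheap rounds of testing punch far above their weight; additional testers past that point mostly re-discover known issues.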

Design is also a discipline to practice, not a skill or an easily replicable set of steps. There are many, many ways to get the social science wrong at the empathize stage, generating false results, if you don’t know what you’re doing. Work with experienced designers while building your own capacity.

A related model is the “double diamond”, below. It describes the process of opening the conversation up to allow many considerations into the problem space, looking at many possible stakeholders and issues, before narrowing the focus down into a tight problem definition. Once that is done, the conversation opens back up to allow many possible solutions to surface, before deciding on one or some and taking steps to define and implement them.

Source: Rachel Reynard

Organizational models and strategies

It’s very different to talk about innovation in a localized, contextualized way (e.g., within a policy/program/service unit) and in an organizational, cross-government way. Individual units will innovate constantly, and the efforts may never bubble up to the government-wide radar. In many cases, the change won’t be called “innovative” - it’ll be born out of incremental change, financial pressures, or an opportunity to simply improve the program and it’ll just get called “better.”

The classic piece on innovation in private sector organizations found significant differences depending on where “innovation units” were positioned within the organizational structure. The long story short is that the wrong design and governance decisions can nearly guarantee failure.

However, organizational-level strategies can support individual, contextualized innovations in a few ways, including:

  1. Identifying and removing barriers
  2. Creating safe space for experimentation
  3. Providing training, tools, and resources

Barriers to new approaches

Get used to the idea of problem DNA. Every barrier and obstacle isn’t a homogeneous element; it’s a unique combination of policy, risk, culture, understanding, time constraints, opportunity costs, process, communications, and other building blocks. Solving for one element rarely solves the problem.

Let’s return to the idea that, with new options for policy development and implementation, there’s a greater chance to match the right approach to the right situation. However, rules, policies, and processes were often designed before these options became available, so there are often barriers to their use, including hiring, mobility, contracting, procurement, and training. Creating flexibilities or exception processes in these systems is the blanket, one-size-fits-all approach to freeing up space to innovate. However, systematically taking steps to understand, and adjust for, the impact of these forces on how employees work is more effective - albeit more time-consuming and it requires more management commitment.

Safe spaces and innovation labs

An increasingly common approach is to create “innovation labs,” which are theoretically safe spaces that can bring people together to define problems and experiment with solutions.

For a proper, peer-reviewed definition:

“An innovation lab is a semi-autonomous organization that engages diverse participants—on a long-term basis—in open collaboration for the purpose of creating, elaborating, and prototyping radical solutions to pre-identified systemic challenges.”

These can be within organizations, at arms-length, or purely external as partners. They tend to be centres of expertise for design thinking, prototyping, facilitation, and other useful process skills. It is not that they are full of smart people (though they tend to be), it is that they are full of people with toolkits for organizational learning and for helping groups of people reveal and explore their collective wisdom and knowledge.

Labs tend to partner with business units once a need for change or exploration is identified. They essentially act as a combination of host, consultant, and partner for a policy/program/service development, implementation, or evaluation journey.

In governments, they tend to have dotted lines to senior executives to help create staffing, spending, and governance exemptions to allow for more agile, free-flowing operations. Which is important, as partnerships and collaboration within and outside the organization tend to define such labs.

NESTA has an innovation lab, MindLab is one of the longest-standing examples, and the MaRS Solutions Lab in Toronto has a helpful article on Social innovation labs: Top tips and common pitfalls. (There are many others, in and out of government. E.g., Alberta's CoLab, New Brunswick's NouLAB.)

Anecdotally, working with external labs (i.e., those outside of government) appears more likely to allow for the expertise, safe space, and transparency required. Expect that work with such labs will take a lot of time. The good news is it’s the appropriate amount of time for meaningful change, and the simpler solutions you would have arrived at through a half-hearted exploration would be incomplete, misleading, and ultimately less effective.

Central hubs, expertise, and resources

Organization-wide innovation strategies can also take steps to ensure the on-demand availability of training (e.g., on citizen engagement, facilitation, or data science), resources (including people and money), or expertise and advice.

Many skills central to the common toolkit of approaches are disciplines in and of themselves. They’re too specialized to fit into many permanent teams, and they’re called on at irregular intervals. For such skills, governments are creating central centres of expertise to house permanent specialist staff, who act as common resources to the rest of government: e.g., public engagement offices or behavioural insights units. The Government of Canada has a hub for a variety of approaches. They might operate on cost recovery or as free resources, but with criteria for choosing the most high-impact projects to work on. If such specialists are creating value above and beyond their cost, capacity can be added over time (e.g., the UK Nudge Unit).

Communities of practice are also useful for building capacity for skills for which external professional networks are few and far between. Ideally, managing these communities should be part of someone’s job, if not their full-time job. Such communities create connective tissue between experiments and pockets of knowledge scattered across organizations, which is particularly important for evolving, emerging fields of practice.

On levers

What levers for change do you have? Is your organization willing to make substantial changes to, for example, HR or contracting policy for the sake of an innovation agenda? It’s helpful to think through two lenses:

  1. What’s the best we can possibly do, given our current parameters?
  2. What possibilities would be available with certain strategic structural changes?

Levers for change may include:

  • Laws, policies, and regulations
  • Performance management frameworks
  • Executive/political commitment and champions
  • Internal and external communications
  • Training
  • Toolkits and resources
  • Hiring (or alternative models like short-term tours, fellowships, exchanges)
  • Procurement
  • Contracting
  • Organizational design, including the labs and hubs from above
  • Changes to processes (e.g., mandatory fields in spending proposals)
  • Common-use programs (e.g., centrally-led open data portals, citizen engagement tools, challenge prize platforms, web platforms that any department can use)

What to do next


    1. Make sure the most important stakeholders in your organization are talking about the same thing when they say “innovation.”
    2. Explore the current barriers, and don’t settle for surface-level answers (e.g., “risk aversion,” the most common, is a symptom, not a cause: what’s behind it?).
      1. Risk symptoms are most easily solved by 1) crisis or scrutiny that suddenly makes novel approaches more palatable than a publicly failing status quo, or 2) commitment and willpower, most often stemming from the political layer. For smaller-scale risks, commitment and willpower from senior executives can suffice. That is, they have to personally evangelize and clear obstacles for the change. Repeatedly and consistently.
    3. Agree on parameters: resources, level of commitment, available levers for change.
    4. Identify, or more likely enlist or partner with, people with deep expertise.
      1. Take a moment to consider how your organization knows expertise when it sees it, particularly for new, rapidly evolving, un-professionalized skills (e.g., there are no equivalents to Certified Accountants for public sector innovation).
      2. Find a role for the people in-house who want to be a part of it, and make sure it is neither above their heads nor meaningless (considering oneself innovative doesn’t mean one can lead this work; however, anyone who’s been pushing for these approaches will feel left out if external expertise is parachuted in).
    5. Create criteria for project intake and create ways for people to find you.
      1. There’s danger here, commonly referred to as “solutions looking for problems.” Ideally the innovation model is that everyone is focusing on their mandates, but with a light layer of constant learning, environmental awareness, and knowledge of the organization-wide innovation capacity for when it’s needed and would solve the problem better.
    6. Read and talk to people: there are tons of great resources out there. The key is to learn how to find credible sources that best match your current context, and to triangulate between a few for each concept. Supplement with rapid evidence assessments.
    7. Write the future in pencil: experiment at both the project level and the meta, innovation-in-government level, learn, and change. If your “innovative project” has to be a success - that is, a project success, not an organizational learning success - you’ve sold it wrong and your organization is thinking about it wrong.


    Friday, February 12, 2016

    Ten Enabling Conditions for Innovation Labs


    by Nick Charney

    We've written a lot about innovation and innovation labs in the past (see: The Future of Innovation Labs: Accelerating Social Movements or Convening Solutions Ecosystems; Innovation Strategies: Centralized and Dedicated or Distributed and Open; On Prioritizing Policy Innovation: Wicked or Tame Problems?; Why Governments Would Never Deploy Adobe's Kickbox and Why Maybe They Should; etc.), but now, having worked inside one for the last month, I can say with a degree of certainty that the biggest differentiating factor between a lab environment and a traditional government office (at least in my experience) is the underlying ethos of hope and experimentation. Both have been sadly absent from most of the places I've worked previously.

    Now, that's not to say I haven't worked with good people (I have) or done good work (I have), but even in environments where I've been able to do good work with good people, I haven't felt the same sense of hope or willingness to experiment that I have in my current role. Maybe I've drunk the Kool-Aid, but the energy is nothing short of contagious.

    Surely there are a number of factors that help enable this kind of environment, and I doubt all labs are the same. That said, I'll try to name a few of the enabling conditions that I've observed thus far (in no particular order):

    1. People who are curiously optimistic, outcome focused and willing to fail in the pursuit of creating new public value streams
    2. Leadership that is willing (mindset) and able (skillset) to act as first line of defence against the "no-machine"
    3. Senior Management that is accessible, audibly champions experimentation, and puts their weight behind it through their actions
    4. Just enough hierarchy to get things through the system but not so much as to weigh any person or project down
    5. Direct ownership over projects / files (and all that comes with it)
    6. No duplication of work
    7. Trust as proxy for formal checks and balances
    8. Rigorous time management and firm commitment to deadlines
    9. Open and honest communication among team members
    10. Flexible working arrangements for everyone at all levels
    I'm sure there's more, but that's as good a place as any to start.

    Friday, January 22, 2016

    Innovation Strategies: Centralized and Dedicated or Distributed and Open?


    by Nick Charney

    A while back I was in London speaking at Nesta's Global Labworks Conference. The panel examined the differences between centralized and dedicated approaches versus distributed and open approaches. What follows is a piece I originally wrote for my fellow panelists in the lead-up to the event; it roughly represents my contribution to the panel discussion. It's worth noting that I was asked to join the panel as a critic of labs, to spur debate.

    On Comparing Approaches

    While Westminster parliamentary democracies are widely credited with a high capacity to keep pace with the times, the realities of digital society are increasingly putting those claims to the test. Digital technologies and governance challenges are colliding, the landscape is shifting, and new questions are arising. Concurrently, policy-makers, civil servants, and citizens are increasingly looking for innovative ways to smooth out the governance landscape. But how governing institutions can best innovate in an era where no one owns information, power is dispersed, and authority and accountability are being constantly re-conceived is still a mystery. While there has been a recent surge in the popularity of innovation labs – centralized innovation spaces and dedicated teams tackling specific problems – it is still too early to judge their success or failure relative to the status quo or any other alternative innovation strategies.

    The rise of innovation labs is hardly attributable to any single set of drivers but the consensus – at least in government circles – is that traditional bureaucratic hierarchies are anathema to innovation. This is why leadership is willing to circumvent hierarchies to better accomplish their goals, why civil servants embrace flattening communication technologies to reach across their reporting structures, and why citizens co-create solutions to public problems outside them. But the irony here is that at its core, centralizing the innovation function in a dedicated innovation lab, could be considered an inherently bureaucratic approach to problem solving. After all, labs may centralize rather than diffuse the innovation function, create new institutional costs, situate those costs firmly within a subsection of the hierarchy, and reinforce the status quo of situational power structures where access and information are the ultimate sources of influence. As a result, labs are vulnerable to the same bureaucratic pressures that slow innovative forces in the rest of the organization. It may be argued that they are inherently exclusive and prestigious because not everyone can work in the lab — that would after all undermine its very essence. To some degree this can be useful and positive as a way of raising the profile of the lab’s work and using the skills and knowledge of the best and brightest to contribute to its success. But it also introduces potential challenges, like the risk of attracting those who are more concerned with career progress than mission success. In addition, labs tend to be task specific and thus more focused on building and diffusing a particular innovation or series of innovations rather than building enterprise wide capacity for innovation. Importantly, they may also take the responsibility for innovation out of the hierarchy and consolidate it into a single place alongside it. 
Centralization may not only send a strong signal about where innovation happens in the organization and where it doesn't but may also make the lab's output vulnerable if there is insufficient receptivity to the innovation within the hierarchy at the point of re-integration (e.g. death by a thousand cuts). Finally, establishing formal innovation labs can make governing institutions vulnerable to the sunk cost fallacy. Bureaucracies are often criticized for throwing good money after bad and may be unwilling or unlikely to walk away from established labs even if they are failing. Ironically, the sunk cost fallacy – the bureaucracy's unwillingness to walk away from structures that produce sub-optimal outcomes – is likely one of the contributing factors to the rise of innovation labs themselves. As a result of all this, lab practitioners may have to contend with lower than expected risk tolerance, less experimentation, lower rates of abandonment, and a push (or pull) towards early and easy wins. This suggests that centralized labs alone cannot yield the successes we demand of them. A receptive organizational culture will certainly be a critical success factor. This brings us to the second innovation strategy I want to consider.

    By comparison, distributed and open approaches such as Adobe’s Kickbox diffuse the innovation function, avoid additional institutional costs, invoke minimal hierarchy and create incentives for wider collaboration (See: Why Governments Would Never Deploy Adobe's Kickbox and Why Maybe They Should). As a result, these approaches aren’t as vulnerable to the bureaucratic pressures that typically slow innovation. Because it is universally accessible (or at least accessible to broad swaths at a given time) the distributed and open approach is more likely to attract more diverse talent and people genuinely looking to try something new. Open innovation approaches also send a clear signal about the need for, and the ability to, innovate anywhere in the organization and ameliorate the culture by building a more widespread organizational understanding of, and capacity for, innovation. Because open approaches allow innovation to bubble up from within, a given innovation that arises within this culture is more likely to be accepted by those around it who are critical to its success. Finally, distributed and open approaches to innovation allow governing institutions to walk away from sunk costs more easily. The staged methodology of something like Kickbox is not only built to promote good ideas through the innovation cycle but also provide frequent off-ramps for ideas and early concepts that ought to be abandoned while avoiding either great expense or consequence. As a result, innovators in open systems are less likely to have to contend with issues of risk intolerance and can engage in more genuine experimentation; they can walk away as required and test hypotheses that would otherwise go untested.

    That said, the true test of innovation strategies, be they centralized and dedicated or open and distributed, isn't what goes into their deployment but rather what results from it. Too often bureaucracies put their best and brightest to work on matters of process rather than substance; given the magnitude of the challenges, any innovation strategy that puts more smart people next to hard problems is worth pursuing. Furthermore, while the two approaches may seem dichotomous, they need not be mutually exclusive. Innovation isn't black or white but rather an exploration of all the shades in between. In fact, organizations may wish to pursue both strategies concurrently, incent a little friendly competition, and – to the degree possible – A/B test the results against the baseline of the status quo. What they are likely to find is that centralized and dedicated approaches produce sustaining innovations (meaning they will help governing institutions do the things they already do faster, better, and cheaper), whereas open and distributed approaches – like Adobe's Kickbox – produce disruptive innovations (meaning they will help governing institutions do different things in fundamentally different ways). Ensuring the continued health of our Westminster systems likely requires some mix of concurrent centralized and distributed innovation strategies that allows for a serendipitous blend of sustaining and disruptive innovations born of both dedicated and open systems. Knowing which strategy to pursue, when, and for what purpose is therefore invaluable intelligence for governing institutions – intelligence that is on the verge of being within reach.

    Wednesday, October 14, 2015

    Innovation is Information

    by Kent Aitken

    In August, the Clerk of the Privy Council delivered a speech titled “A National Dialogue On Policy Innovation.” Elsewhere, #policyinnovation is one of the most used hashtags by Canadian public servants. It’s somewhat of a hot topic right now. But what is "policy innovation" in the first place? 

    For starters, it could refer to "new and interesting ways of developing policy." Or, "new and interesting policy." (See: On Prioritizing Policy Innovation) We tend to use both versions almost interchangeably, but this post tilts towards the former usage. I’ve heard the term used to refer to crowdsourcing and challenge prizes, deep dives into technological and social trends, improvements to government services, behavioural economics, and much more.

    But within that nebulous concept, I think there's a central core to the entire idea that may be a useful way to think about how we gather and understand evidence, and how we make and implement decisions. It's all about information.

    More options mean more precise application


    To back up slightly, let’s consider another arc of innovation that is both an analogy and a predecessor, that of telecommunications. We’ve gone from letter-writing to printing presses, telegraphs, telephones, the internet, and now to low-cost ubiquitous mobile connections. Every combination of one-to-one, one-to-a-select-few, one-to-many, public forums, with every combination of attributed or anonymous, for every combination of formats, all at a vanishingly small cost.

    But here's the key: at one point, to communicate long-distance you had exactly one option: handwriting a letter. Later, you had two: handwriting a letter, or paying to have something reproduced many times on a printing press. Suddenly, you didn't have to rely on a letter when it wasn't the best option. As more and more options became available, you could match your communications goal more precisely to different ways of achieving it.

    Likewise, we now have a wider range of policy development approaches and policy instruments, which means there’s a greater chance that we can match the right approach to the right situation. We have that wider range partially because we get inventive over time, but far more so because policy development and implementation is often communication, and so we’re simply piggybacking on telecommunications advances.

    The information


    Which isn’t much of an insight, I recognize. Yes, the internet opens up options for how government does things. But if we start to think of policy innovation as communication, instead of as merely enabled by communication, it starts to shed light on what we’re really trying to accomplish, and where “innovative” approaches fit among more “traditional” ones. (I use both terms in quotation marks lightly.)

    Basically, the approaches that get pegged as "policy innovation" often boil down to two key actions:

    • transferring information between people
    • arranging information for people

    It’s the crux of crowdsourcing, policy or service jams, innovation labs, open data, design thinking, challenge prizes, and citizen engagement approaches like consultations, townhalls, and social media chats. Someone has information that policymakers can use: ideas, problems, slogans, lived experience, or academic expertise (see: The Policy Innovator's Dilemma). Then it’s a matter of finding the best way to access it, which is a question of format. You just have to learn the formats. Similarly, once you've crossed the threshold and learned a new telecommunications approach (case in point might be parents and grandparents on Facebook), it becomes part of a passive mental algorithm that takes a need or goal and instantly knows how best to accomplish it.

    Talk of policy innovation tends to go hand-in-hand with the idea that policy issues increasingly cross jurisdictional or societal boundaries, and are a part of an increasingly complex environment (see: Complexity is a Measurement Problem or On Wicked Problems). Which is where arranging information becomes invaluable.

    Let's say you get ten informed stakeholders of a given policy question in a room and ask each for their concerns. Each reveals a different way of looking at the issue, exposing its complexity and pointing out legitimate pitfalls for policy options. The problem is that by the time the tenth stakeholder has spoken, you've forgotten the concerns of the first five, so it's impossible to understand all ten in context. It's Miller's Law: human beings can only hold seven things, plus or minus two, in working memory. Which is where techniques like journey mapping, system mapping, and sticky-noting everything become crucial for policy. They're the policy landscape equivalent of doing long division on paper so you can remember everything in play - what we might call mental scaffolding.

    Many approaches include both transferring and arranging information. For instance, a public consultation might include a call for ideas with a voting mechanism that creates a ranking, signaling importance. Some deliberation platforms include argument mapping systems that use algorithms to arrange the discussions for participants, almost like Amazon bringing complementary products to the forefront. ("Are you outraged at your government about X? Many people outraged about X are also outraged about Y, perhaps you should consider lambasting them on that topic too.")

    In other cases, governments can (and should) map out what they already know about a given policy issue to get it out of working memory and focus on change drivers and relationships between forces. This will become increasingly important if we truly want to get out of siloed policy-making, find hard-to-see connections between once-distinct policy areas, and genuinely understand entire systems. Our governance model was built for a world we falsely believed was simpler than it was, and within that we're running into our own cognitive limits. We literally cannot hold all the elements of a complex policy issue in our heads without some kind of mental scaffolding, be it tools, other people, or paper.

    Metadata

    Two notes on metadata, or information about information (for example, DSLR cameras automatically embed the date, aperture, shutter speed, ISO, and more in image files).

    First, some approaches that get lumped in with policy innovation don't fit perfectly with the transferring and arranging information categories. Behavioural economics, for instance (and its service delivery cousin of user testing), seems more like creating new information through research. But viewed from a policy lens, I'd suggest it's actually more like metadata.

    Let's say government wants to maximize the rate of tax returns, so it tweaks the language in letters to taxpayers to see which framing resonates with people. Here's the UK example:

    "...replacing the sentence “Nine out of 10 people in the UK pay their tax on time” with “The great majority of people in [the taxpayer’s local area] pay their tax on time” increased the proportion of people who paid their income tax before the deadline."

    The core policy instrument here is a law, and the letter sent to taxpayers is supporting education about the importance of filing tax returns. In this case, the information is in the letter. The behavioural economics piece is metadata about that information: how many, and which, people acted upon the information they received. It's still really about transferring information between people, which puts tools like behavioural economics and data analytics in this common framework and may help practitioners navigate between possible approaches.

    Second, there's a meta-level to the idea of transferring and arranging information that changes the value of different approaches and formats. We might call it "conspicuous innovation" or "conspicuous engagement." Basically, the transfer and arrangement of information is not the only goal achieved by these approaches - someone emailing a policymaker a vital piece of information for a policy question is worth less than that same person posting it publicly during an official consultation. The metadata for that piece of publicly posted information includes the number of views from other people, the signals about government's attitude towards governance and transparency, and the future value to others. 

    So what?

    The "policy innovation" toolkit centers around two actions: transferring information between people and arranging information for people. Past this common core, it's often a question of forums and formats (increasingly, though not uniquely, a question of how we transfer information from non-governmental actors - with exceptions, of course). So what?

    One, I think it's worthwhile to examine what binds the idea of policy innovation together, to refine our working concept of the term.

    Two, thinking in these terms highlights what we're actually trying to accomplish through these approaches, and might make it easier to choose between them.

    Three, putting them in historical context puts the perceived risk in perspective. I mean two things here: first, that policy innovation closely parallels our personal experience with telecommunications advances: more options allow more niche approaches, and eventually they become routine. Second, that if some of these approaches are, at a fundamental level, analogous to things government has been doing for ages, they seem less daunting. For instance, there are dozens of consultations ongoing at http://www1.canada.ca/consultingcanadians at any given time. It's just a different way of transferring information between people and policymakers.



    Thank you to Blaise Hebert and Nick Charney for super interesting conversations on this topic.

    Also, two recent posts from Melissa that are good general fodder here: What Innovation Feels Like, Part 1: Fear; and Part 2: Lack of Trust