Friday, June 5, 2015

Building Trust and Transparency Through Failure


by Melissa Tullio

A couple of weeks ago, I attended a communications conference. The theme was "Think, Think, Nudge, Nudge": how we might start applying behavioural insights techniques in our work to deliver more impactful, meaningful messaging to our audiences (not to mention take a more evidence-based approach to how we do things). Ontario's own behavioural insights team was brought in to present how they've used randomized controlled trials to improve outcomes for various Ontario ministries, and speakers from ideas42 provided a couple of hands-on workshops for us to learn some of the methods.

The keynote speaker was Michael I. Norton from Harvard Business School, who delivered an amusing and thought-provoking talk full of examples illustrating how people react in the randomized controlled trials he's run. An example that really stuck with me was the results from a prototype of Boston's Citizens Connect app. It's kind of a "fix your street" for Boston, where citizens can report potholes, graffiti, and other broken things in the city. The data is uploaded to a map that shows the status of each submission: the prototype used red flags for open tickets, yellow for recently opened ones, and blue for closed tickets.
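If you think of each submission as a little record with a status, the colour coding is just a mapping from status to flag colour. Here's a minimal sketch in Python of what that might look like; the field names and values are my own guesses for illustration, not the real app's schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical statuses -- names are mine, not Citizens Connect's actual API.
class Status(Enum):
    OPEN = "open"
    RECENTLY_OPENED = "recently_opened"
    CLOSED = "closed"

# How the prototype's map legend coloured each status, as described in the talk.
FLAG_COLOUR = {
    Status.OPEN: "red",
    Status.RECENTLY_OPENED: "yellow",
    Status.CLOSED: "blue",
}

@dataclass
class Ticket:
    description: str
    latitude: float
    longitude: float
    status: Status

    @property
    def flag_colour(self) -> str:
        return FLAG_COLOUR[self.status]

pothole = Ticket("Pothole on Main St", 42.36, -71.06, Status.OPEN)
print(pothole.flag_colour)  # -> red
```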

The test worked like this. They showed three versions of the app to people. The first didn't provide any illustration that the city was taking action on the open tickets - it was probably just a form for people to submit problems to the city. The second showed only the closed tickets, which cast the city in a positive light (but wasn't necessarily the most honest). The third showed all the open and closed tickets (the open tickets greatly outnumbered the closed ones, which might leave a negative impression of the city). A key insight from the test was that even the map showing all of the open, red tickets was received more positively than the version that showed nothing at all.
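To make the shape of this kind of test concrete, here's a minimal sketch in Python of a three-arm trial like the one described: participants are randomly assigned to one version of the app, and the average impression ratings are compared across arms. The condition names and numbers are entirely hypothetical stand-ins, not Norton's actual study design or results.

```python
import random
from statistics import mean

# Three hypothetical arms, named by me for illustration.
CONDITIONS = ["form_only", "closed_only", "open_and_closed"]

def assign_condition() -> str:
    """Random assignment is what lets us compare the arms fairly."""
    return random.choice(CONDITIONS)

# Assign a toy cohort and show the split across conditions.
cohort = [assign_condition() for _ in range(12)]
print({c: cohort.count(c) for c in CONDITIONS})

# Toy impression ratings (1-7 scale) standing in for collected responses.
ratings = {
    "form_only":       [3.1, 2.8, 3.4, 3.0],
    "closed_only":     [4.2, 4.5, 3.9, 4.1],
    "open_and_closed": [4.0, 3.8, 4.4, 4.2],
}

# Summarize each arm: the comparison of means is the heart of the trial.
for condition, scores in ratings.items():
    print(f"{condition}: n={len(scores)}, mean impression = {mean(scores):.2f}")
```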

What's going on here?

Michael's research on consumer behaviour shows that people like to see the work. Transparency in the processes we use to deliver a service builds credibility and trust in the institutions we rely on. As a consumer, this seems pretty intuitive to me. It feels a lot better to see the person behind the glass preparing the burger with fresh ingredients than to order it through an intercom and pick it up a few minutes later like some human vending machine.

In a previous blog post (See: Open Gov, Values and the Social Contract), I talked about values, and mentioned how we can't say, as government, that we're open if we don't then follow through and act in an open way. Ryan left a really interesting comment on that post: "not everything can (or dare I say, should) be 100% open 100% of the time." I agree with this; transparency isn't about revealing everything, but about revealing what we can while protecting citizens' privacy (we're not a burger joint). And, as the behavioural insights evidence shows, it's also about revealing, when we're able to, both the bad and the good outcomes of what we're trying (e.g., the map showing the red flags as well as the blue ones).

Vulnerability as a Value

This all left me thinking about a value we're not so great at demonstrating in government: vulnerability. I'm willing to bet that every public servant working inside government today has felt risk aversion from colleagues or superiors in some form. We're afraid to fail, and even more afraid to show that we've failed. We cringe at the thought of ending up as a headline in the morning paper because of a mistake we made (I literally just got goosebumps thinking about this).

But what if the spaces we built inside government supported experimentation? Kent's recent post proposes that we're already experimenting, and I think that's true. The part we're missing is transparency: showing people that we don't have all the answers, and that we need their help to figure things out.

What if we let users and citizens into the experiments? What if we had spaces to try things out before launching programs or policies that might fail anyway, no matter how much thought we've put into avoiding failure? And what if one of the guiding values for playing in those spaces were vulnerability: demonstrating to ourselves, and to the people who rely on us for services, that failure is OK, as long as we learn from it and build something better?

Is it reasonable to believe that transparently demonstrating that we're good at failure can build trust with the people we deliver services to? And if you agree, how might we move the culture towards embracing vulnerability as a value?
