
Friday, June 5, 2015

Building Trust and Transparency Through Failure


by Melissa Tullio

A couple of weeks ago, I attended a communications conference. The theme was "Think, Think, Nudge, Nudge": how we might start applying some behavioural insights techniques in our work to deliver more impact and meaning in our messaging to audiences (not to mention using a more evidence-based approach to do things). Ontario's own behavioural insights team was brought in to present how they've used randomized controlled trials to improve outcomes for various Ontario ministries, and speakers from ideas42 provided a couple of hands-on workshops for us to learn some of the methods.

The keynote speaker was Michael I. Norton from Harvard Business School, who delivered an amusing and thought-provoking talk full of examples illustrating how people react in the randomized controlled trials he's run. An example that really stuck with me was the results from a prototype of Boston's Citizens Connect app. It's kind of a "fix your street" for Boston, where citizens can report potholes, graffiti, and other broken things in the city. The data is uploaded to a map, which shows the status of each of the submissions - the prototype had red flags for open tickets, yellow for recently opened, and blue for closed tickets.

The test worked like this. They showed three versions of the app to people. The first didn't provide an illustration showing that the city was taking action on the open tickets - it was probably just a form for people to submit problems to the city. The second showed only the closed tickets, which cast the city in a positive light (but wasn't necessarily the most honest). The third showed all the open tickets and closed tickets (the open tickets greatly outnumbered the closed tickets, which may leave a negative impression of the city). A key insight that came out of the test was that even the map showing all of the open, red tickets was received more positively by people than the version that showed nothing at all.
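The structure of a three-arm test like this one can be sketched in a few lines of Python: randomly assign participants to a variant, collect a perception score, and compare group averages. To be clear, everything below is hypothetical - the variant names, scores, and effect sizes are invented for illustration and are not the actual study's data.

```python
import random
from statistics import mean

def simulate_score(variant: str) -> int:
    """Return a hypothetical 1-7 perception score, skewed by variant.

    The base rates here are invented: they merely encode the direction
    of the finding described above, not its real magnitude.
    """
    base = {"no_map": 3, "closed_only": 5, "all_tickets": 4}[variant]
    return max(1, min(7, base + random.choice([-1, 0, 0, 1])))

def run_trial(n_per_arm: int = 500, seed: int = 42) -> dict:
    """Randomly sample participants into three arms and average their scores."""
    random.seed(seed)
    variants = ["no_map", "closed_only", "all_tickets"]
    scores = {v: [simulate_score(v) for _ in range(n_per_arm)] for v in variants}
    return {v: mean(s) for v, s in scores.items()}

results = run_trial()
# The insight from the talk: even the map full of open, red tickets
# is received more positively than showing no map at all.
assert results["all_tickets"] > results["no_map"]
```

The point of the sketch is only that the comparison is between randomly assigned groups, which is what lets the researchers attribute the difference in perception to the map design itself.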

What's going on here?

Michael's research on consumer behaviour shows that people like to see the work. Transparency in the processes that we're using to deliver a service builds credibility and trust in the institutions we rely on. As a consumer, this seems pretty intuitive to me. It feels a lot better seeing the person behind the glass preparing the burger with fresh ingredients than to order it through an intercom and pick it up a few minutes later like some human vending machine.

In a previous blog post (See: Open Gov, Values and the Social Contract), I talked about values, and mentioned how we can't say, as government, that we're open, if we don't then follow through and act in an open way. Ryan left a really interesting comment on that post: "not everything can (or dare I say, should) be 100% open 100% of the time." I agree with this; transparency isn't about revealing everything, but revealing what we can in the context of protecting citizens' privacy (we're not a burger joint). And, as behavioural insights evidence shows us, it's also about revealing, when we're able to reveal things, both the bad and the good outcomes of the things we're trying (e.g., the map showing the red flags as well as the blue ones).

Vulnerability as a Value

This all left me thinking about a value we're not so great at demonstrating in government: vulnerability. I'm willing to bet that every public servant working inside government today has felt risk aversion from colleagues or superiors in some form. We're afraid to fail, and even more afraid to show that we've failed. We cringe at the thought of ending up as a headline in the morning paper because of a mistake we made (I literally just got goosebumps thinking about this).

But what if the spaces we built inside government supported experimentation? Kent's recent post proposes that we're already experimenting, and I think it's true. The part we're missing is transparency - showing people that we don't have all the answers, and we need their help to figure stuff out.

What if we let users/citizens into the experiments? What if we had spaces to try stuff out before launching programs/policies that might fail anyway, regardless of how much thought we've put into them to avoid failure? And what if one of the guiding values for playing in those spaces were vulnerability - demonstrating to ourselves, and people who rely on us for services, that failure is OK, as long as we learn and build something better from it?

Is it reasonable to believe that transparently demonstrating to people that we're good at failure can build trust between us and the people we deliver services to? And if you agree, how might we move the culture towards embracing vulnerability as a value?

Wednesday, June 3, 2015

You're Experimenting Right Now


by Kent Aitken



You may be familiar with the trolley problem in ethics:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?

The latter option is better from a purely utilitarian perspective, but many people lean towards the first option, which absolves them of participation or responsibility in the outcome.

The current public administration zeitgeist would more likely point to behavioural economics to prove the power of defaults to provide a level of comfort and confidence in people’s decisions (see: How Nudges Work for Government). But the point stands: we are unduly comfortable with the status quo, and we wrongly absolve ourselves of culpability for outcomes generated through the status quo. In the trolley problem, doing nothing is a big decision*.


Duty and Responsibility

Let's say you're debating a pivot in your career. You've been working in a field for a few years, and you're considering trying something new. It could just be a different job within your organization, or it could be throwing everything out the window, including yourself, and taking a leap of faith. Everything along that spectrum comes with uncertainty, discomfort, and perhaps a degree of fear.

Or, let's say you're a politician debating a policy change: it could be a minor adjustment or something major like mandatory voting or a guaranteed basic income, both of which have been proposed in Canada lately. It's not the sort of thing you can pilot in a vacuum; you have to change the way things work to gauge how people react. Like the career pivot, it's uncertain, uncomfortable, and scary.

Who knows how such experiments will work out? Will they be worth the risk? It's impossible to say with 100% certainty, which is the nature of experiments.

It’s tempting to think that those changes are experiments, whereas the course we are on is not. But the status quo is not a valueless, neutral starting point. It’s an experiment. It represents a plethora of design decisions, all of which influence how people behave and make decisions. And you are — we all are — complicit. As Richard Thaler, co-author of Nudge has put it, “There’s no avoiding nudging. Like in a cafeteria: You have to arrange the food somehow. You can’t arrange it at random. That would be a chaotic cafeteria.”

You’re Experimenting Right Now

You've never gone through a career on your current track before. No one ever has, in today's particular environment. Are you in digital media, for instance? Exactly zero people have ever put in a 30-year career in that field.

Likewise for policy. Canada has never entered the 21st century before; our policies have never stood up in the economic, demographic, or technological context they're about to face.

You're experimenting right now. We all are. And we have to weigh the costs and benefits of both the changes we’re considering and the track we’re already on.

* If you find yourself finding holes and rationalizations, Michael Sandel will cure that in his amazing lecture on ethics.

Tuesday, August 12, 2014

The New Nature of Process


by Kent Aitken


In a recent report on the next frontier of digital technology, Accenture created a model of the long history of challenges that have faced management.

http://www.accenture.com/us-en/Pages/insight-looking-digital-being-digital-impact-technology-future-work.aspx 


In short: the industrial era was characterized by a transition from individual craftsmen and artisans to large-scale processes, and this transition was enabled by repeatability. A worker didn't need to know how the factory ran to screw part A into part B, and if he left, a replacement could be trained incredibly quickly. This was the age of Taylorism, of precision and measurability permitted by process and structure.
 
Throughout the last century we’ve transitioned into an economy far more based on knowledge work (see Deloitte’s assessment, below), which meant the industrial management style ran into a crisis of rigidity, the solution to which was adaptive processes. Judgment, discretion, if-then statements, case management.

http://dupress.com/articles/the-future-of-the-federal-workforce/ 

However, for senior executives ultimately managing a variety of adaptive processes, the problem then became one of complexity. There’s too much going on, it’s too hard to understand, and the performance reports that are so useful for widgets-per-second are far less revealing.
 
Accenture suggests that the solution to complexity is in digital. Specifically, "smart digital processes", which would feed decision makers key information exactly when they need it. My response is: maybe? In some cases? It seems the more plausible answer is a return to process - which is happening all around us, albeit with a crucial difference from the Taylorism of old.
 
 
Process in the Knowledge Economy
 
There's a common thread among the emerging approaches to governance. In his equation for today's public policy, Nick highlighted several, including design thinking, behavioural economics, and public sentiment. We could add the field of facilitation, the practice of public participation, and innovation labs to the mix. All of which are hugely reliant on defined processes.
 
The key difference is that the interim goal of the process of old was to remove the need for learning, whereas the process of today is designed to maximize the speed of learning. At the end of this post there are some links to example process kits: if-then guides to, essentially, helping humans understand other humans and the systems they live in.

The end goal is the same: scalability and repeatability. In this case, it’s repeatably, reliably solving unpredictable, emerging, or complex problems. We’re on the same arc as the first graph, but for a completely different organizational paradigm.
 
So the challenge for management becomes a new, grander problem of complexity. Where executives have been struggling to manage adaptive processes via industrial-inspired organizational designs, they’re going to be overwhelmed by managing a variety of learning processes without significant changes in management style. In some cases the if-then flow will be impossibly complicated, and in others it’ll need to be thrown out the window. A single node in a hierarchy will never be able to understand each process, only the principles behind them.


What's in it for Us?
 
We need to do it. It’s where the performance gains in a complex environment will come from. I’ll exapt an HBR article about how our personal learning curves regularly plateau. Here’s the graph, with learning on the Y axis and time on the X axis:

http://blogs.hbr.org/2012/09/throw-your-life-a-curve/ 

Success comes from knowing when to jump to the next learning curve, which is incredibly hard at the outset but maximizes the speed of progress.

Embracing this learning curve will be cost-effective in two ways. First, there’s evidence that consensus-building through learning processes costs less in the long term than making and defending decisions (which will apply to both internal management and policy/program governance). Second, in the latter part of that learning curve we’ll reach a level of sophistication that allows economies of scale:
  • We’ll be able to reliably pull from a menu of processes and adjust to new situations, rather than starting near scratch every time
  • We’ll be able to recognize when we can leave these learning processes to citizens, businesses, and NGOs, and govern accordingly
  • We’ll be able to share and teach approaches broadly
Returning to Accenture’s claim, organizations have run into a problem of complexity. Particularly for governments, however, I don’t buy their claim that the answer is in smart digital. Instead, I think we have to recognize that in many ways we’re back at the beginning, worried about scale and repeatable processes. Just very different processes.
 


Example process kits:

http://www.involve.org.uk/blog/2005/12/12/people-and-participation/
http://labcraft.co/
http://stepupbc.ca/explore-your-career-increase-collaboration/idea-navigators
http://www.mindtools.com

Friday, March 21, 2014

More thoughts on the Copernicus formula

by Nick Charney

A while back I presented a model demonstrating what I consider to be the future of public policy (See: Blending Sentiment, Data Analytics, Design Thinking, and Behavioural Economics). Kent later observed that the model could in fact describe the more encompassing idea of governance writ large (See: Building Distributed Capacity). At first I agreed with his observation but it's something I've been quietly reflecting on a lot lately and the more I think about it, the more I get the sense that what I've put forward is more precisely a formula that informs governance. Or perhaps more rightly, could inform a particular way of "doing" governance, because governance is – as Kent himself recently noted (See: People Act, Technology Helps) – what people do.

Recapping Copernicus

If you didn't catch the original post (again, see: Blending Sentiment, Data Analytics, Design Thinking, and Behavioural Economics) here's the TL;DR recap of the formula:

(Public Sentiment + Data Analytics) / (Design Thinking + Behavioural Economics) = Future of Evidence Based Policy

It's a back-to-basics model that argues that the sum of what the public wants (sentiment) and what the evidence suggests is possible (data) is best achieved through policy interventions that are highly contextualized and can be empirically tested, tweaked, and maximized (design thinking + behavioural economics) while simultaneously creating new data to support or refute it and facing real-time and constantly shifting public scrutiny.

Naming Copernicus

I chose to name the formula Copernicus for the following reasons:
  • it speaks to the fact that the formula represents a significant reorientation in the field of policy development and execution; 
  • it implies the amount of effort that will be required to overcome the inertia that is inherent in the current frame of reference; and
  • it conveys the sense that once the formula becomes the new frame of reference the old frame is no longer tenable.
You may have noticed that I wrote "once the formula becomes the new frame" and not "if the formula becomes the new frame"; I did so subconsciously, noticed, paused, reflected, and kept it as is because my gut feeling is that it is only a matter of time before the formula's elements become as ubiquitous as the social media that we used to talk about in similar veins.

Copernicus is a means

It's a frame that helps you lean into the hard work of figuring out the variables. What do people want? What does the evidence suggest is possible?

It's a frame that helps you lean even further into the harder work of structuring the execution. What policy levers are most likely to work? How do you design the interaction? How do you build adaptability into the prototype?

It's a frame that helps decision makers gather rich information points and brings them to a series of decision points.

Copernicus is not an end

What I'm trying to get at is the fact that the formula isn't a panacea of simplification but a lens through which to better understand complexity. It doesn't tell you how to weigh the variables against one another, or what choice(s) to make, but rather it helps identify that which you ought to consider when doing so.

To be honest, I was planning on writing a series of posts elaborating each of the formula's elements but every time I sit down to do so I get lost in the complexity of each of them. In short, I'm still learning, thinking them through, running them up against real world examples. I still plan on doing so, but I need to dedicate more time to think it all through.

To this end, I'm considering convening a small discussion to test the model against recent policy choices made by different organizations (e.g. Canada Post's decision to end home delivery) to see precisely how it could help me both understand and explain a policy choice if I were in the position to make one. If this is a thought exercise you're interested in participating in, drop me a line; I'd be happy to run through it with you.

Wednesday, January 29, 2014

Building Distributed Capacity

By Kent Aitken

Last week Nick laid out a model that blends public sentiment, data analytics, design thinking, and behavioural economics as the future of evidence-based policy (see: basically, that was the title). The opportunity cost of inaction, here, is far greater than the immediate financial investments required. The only disagreement I can muster is that I'd actually call it the future of governance, writ large.


But, we're in an era of intense scrutiny. Governments are no longer entirely opaque entities, and spending can be held not just to account but to undue pressure. And that pressure is greatest when spending doesn't lead to immediate and obvious public benefits, which is the case for pursuing the future as described above.


However, there are examples of governments spending money on complex investments - those that are long-term, hard-to-measure, and with widely distributed benefits. It's largely because there are strong communities that envision the long term that are bellowing for these investments, creating crucial pressure and accountability.


And these investments line up with the model Nick proposed. For public sentiment, the U.K. is building capacity through organizations like Sciencewise, dedicated to helping government consult with citizens on science and technology policy. For design thinking, there are a handful of examples, established to help policy makers apply techniques in their work. In the behavioural economics field, the U.K. is again the leader with the Behavioural Insights Team, and the U.S. appointed Cass Sunstein to a key role to make progress there. For data analytics? I welcome examples. But there is good news in the technology space, as on Monday a bill was proposed in the U.S. that would codify the national Chief Technology Officer role and establish a Digital Government Office.


These are all wise investments, the success of which can only be measured in the long-term and at the macro scale. None of those investments solve an easily definable problem; rather, they create a distributed capacity, a system for more reliable problem-solving.



So where do we go from here?

At the highest level, it's a question of ensuring that we can make important investments in complex solutions. Where the counterfactual is the key question, and the opportunity cost of inaction far outweighs immediate financial costs. And with closely watching stakeholders that can be hard to convince.


More concretely? There's a group of brilliant and dedicated public servants pursuing capacity-building for design thinking close to home. This is both a discrete capacity and a way to improve virtually every decision-making process, so I think this will go a long way towards better results. Design thinking is properly merciless in testing and discarding sub-optimal solutions.


But data analytics, behavioural economics, and understanding public sentiment require their own skillsets. And I think (and have for some time) that the opportunity cost of not exploring capacity-building in these areas is too great to be ignored.

Friday, January 24, 2014

Blending Public Sentiment, Data Analytics, Design Thinking and Behavioural Economics

by Nick Charney

The Thinker by Darwin Bell
Last year I wrote a lengthy piece that argued that understanding the future of evidence based policy meant understanding the confluence of big data and social media (See: Big Data, Social Media and the Long Tail of Public Policy). Today I want to further qualify my statements, and refine my conceptual model to reflect some of my more recent thinking.


Project Copernicus

To be fair the conceptual model – which I've decided to nickname Project Copernicus (See: Towards Copernicus if you don't get the reference) – is very much a moving target; and while it ebbs and flows as I come into contact with new (to me) thinking, it's very much about leaning into the hard stuff (See: Lean into it) and "building a better telescope" (See: Complexity is a Measurement Problem).


To recap quickly and push forward

At the outset of the aforementioned piece I offered up a TL;DR summation that was essentially:

Social Media + Big Data Analytics = Future of Public Policy

And I feel that refining that statement is as good a place to start as any; here's my latest thinking:

(Public Sentiment + Data Analytics) / (Design Thinking + Behavioural Economics) = Future of Evidence Based Policy

In a sense it's a rather simple, back-to-basics model that argues that the sum of what the public wants (sentiment) and what the evidence suggests is possible (data) is best achieved through policy interventions that are highly contextualized and can be empirically tested, tweaked, and maximized (design thinking + behavioural economics) while simultaneously creating new data to support or refute it and facing real-time and constantly shifting public scrutiny.


I have a number of reasons for nuancing the model
  • Public Sentiment is broader than social media and it is incumbent on policy makers to be as inclusive as possible when incorporating sentiment. Focusing on social media ignores issues of the digital divide and unduly privileges those with greater digital literacy. This may be one of the reasons that the Deputy Minister's Committee on Social Media and Policy Development was recast as the Deputy Minister's Committee on Policy Innovation; social media may be innovative but it doesn't necessarily follow that innovative ideas flow from social media.
  • Data Analytics is broader than Big Data and includes both linked data and open data. These don't necessarily always fall into the category of big data on their own but will play an important role as more and more data sources start to rub up against each other. 
  • Design Thinking combines empathy for the context of a problem, creativity in the generation of insights and solutions, and rationality to analyze and fit solutions to the particular context
  • Behavioural Economics brings sentiment, analytics, and design to ground by emphasizing what people actually do when faced with a given situation (rather than what we think they ought to do)
  • Evidence Based is an important qualifier and cannot be narrowly construed as relating to only one of the variables on the left side of the equation; evidence comes in many forms and it is up to policy makers and elected officials to determine how to weigh the different sources of evidence (variables in the equation above) against each other in a given set of circumstances.

On Savvy Policy Makers

Savvy policy makers (and for that matter, elected officials) are likely the ones able (and willing) to chart their policy directions against this type of model; the ones who can say with confidence:
"Here is what we've heard from the public, here is what the evidence supports, and here is the policy intervention we have determined to be the most efficacious. However, it is one we will continue to refine over time, as it creates new data and is forced to stand up to real-world public scrutiny"
When was the last time you heard someone qualify a policy position with that kind of preamble?

Wednesday, September 11, 2013

How Nudges Work for Government (and Might Work Against Blueprint 2020)


by Kent Aitken


In the last few weeks some people took notice of the Behavioural Insights Team in the U.K. government, which sparked debate about behavioural economics and the "nudge" approach to public policy and social outcomes. It seemed that there were many misconceptions about what nudges are. So a brief primer, then what I believe they could mean to Blueprint 2020.


Nudges


"Nudge theory... argues that positive reinforcement and indirect suggestions to try to achieve non-forced compliance can influence the motives, incentives and decision making of groups and individuals alike, at least as effectively – if not more effectively - than direct instruction, legislation, or enforcement."

An example of a nudge, in contrast to alternatives: recently, New York City tried to stop sales of massive cups of soda on the basis that it was bad for both citizens and society (in health and health care spending, respectively). Here are some options:


  • Banning the sale of large sodas, which was their attempted approach, would be regulation.
  • Raising taxes on large sodas would be an economic incentive.
  • Running ads promoting healthy lifestyles and noting the health risks of sugary soda would be education.
A nudge, on the other hand, would have been something like changing the range of cup sizes so that the biggest cups seem more excessive. Tim Horton's recently did exactly this in reverse, by shifting their cup sizes a notch. If the XL coffee is called a large, it sends a signal that it's more "normal" to drink that volume of coffee.

Nudges can work in conjunction with other policy levers, and can be surprisingly potent. For instance, a study showed that a sign showing speeders a frowning face caused people to slow down more than showing their speed and the associated fine.


A Misunderstood Policy Instrument...

The misconceptions arise from the goals nudges tend to be assigned to. Activities that are almost universally regarded as bad (e.g., theft) are typically covered by direct means, such as laws. Very few people are willing to argue that laws preventing theft unduly restrict personal freedoms. However, activities such as smoking cigarettes are trickier. Here, people invoke the principle that they have the right to make informed decisions about their lives, even if unhealthy. Yet back when this debate was ongoing in Canada, it was estimated that cigarettes cost about four times as much in health care costs as they raised in tax revenue. So less smoking was good for Canada, on the whole.

There are many social outcomes worth pursuing that exist in a gray area for government intervention, and this is when nudges tend to be the best policy instrument. So nudges get maligned as paternalism, big government, and the nanny state. But in reality, nudges are about the implementation method. What constitutes appropriate social outcomes is a completely different question. In considering the utility of nudges, we may as well assume that societal goals are already established, and we're instead at the point of selecting policy levers.


...With an Important Role to Play

So I see this emerging field simply as the recognition that information alone is not necessarily sufficient for people to make decisions that are in their, or society's, best interests. 

This is because humans respond (and wildly) to their environment. It's a fascinating evolutionary quirk for socializing: we instinctively match others' postures, gestures, and even accents to build familiarity, taking cues on what constitutes normal behaviour. However, we've exapted (adapted traits for very different purposes) some shortcuts and rules of thumb in decision-making. The well-known example is the opt-in/opt-out framework for organ donations. Countries get ~90% donor rates with an opt-out model ("check here to be removed from the organ donor list"), and more like ~10-20% with an opt-in model ("check here to be included on the organ donor list"). It's largely because the default choice sends a signal about what is normal.

And there are many such examples. The U.K. Behavioural Insights Team simply acknowledges this, and sets about designing policy instruments for such a world. Their core function isn't deciding what society should be doing; it's taking the scientific method and applying it to the complex world of policy. Developing hypotheses, testing them, and adjusting approaches as necessary (see: Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials).
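At its simplest, the "test, learn, adapt" cycle described here is a two-arm randomized trial followed by a comparison of outcome rates. A minimal sketch of that comparison, using a standard two-proportion z-test computed from scratch; the scenario and all counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-proportion z-test: did arm B outperform arm A?

    Returns the z statistic and a two-sided p-value, using the
    pooled-proportion standard error.
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical trial: a standard reminder letter (arm A) versus a
# redrafted "nudge" letter (arm B), with response counts invented
# for illustration.
z, p = two_proportion_z(success_a=300, n_a=1000, success_b=360, n_b=1000)
```

With these invented counts the difference (30% vs. 36% response) comes out statistically significant, which is exactly the kind of evidence that would justify adopting the new letter and moving on to the next hypothesis in the cycle.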

In my view, nudges are scarcely controversial. Basically, if you conducted user testing on policy instruments and tweaked for maximum effect, sometimes you'd get education, sometimes laws, sometimes incentives, and sometimes these bizarre indirect methods that we're currently calling nudges. But it's not paternalism; it's simply a question of what policy instruments work, and work cost-effectively. So I absolutely think we should be exploring this field in earnest.


But Wait, You Mentioned Blueprint 2020

If the premises behind nudges are valid, I think it's important to consider how our organizations' standards, defaults, and procedures (in the parlance, "choice architecture") are affecting our decisions. Since June we've been having this wide-ranging conversation about the future of the public service, and the difficulty of meaningful change is a common theme (see: Where Good Ideas Go to Die and Moving Public Service Mountains, Part I).

And adding to the many possible reasons, what if, even when we have direction or policy cover for positive progress, we're continuously stacking the deck on the side of the status quo?

An example (which I overuse, but it's easy to explain and so I beg forgiveness): let's say from workflow and policy perspectives, a desk-bound worker and a mobile worker are theoretically equivalent options. The information about mobile efficacy is available, the forms for securing the equipment exist, and both are permissible arrangements. However, when on day one at a job you're assigned a desk, a desktop computer, a landline, and no VPN, what signal does that send to both manager and employee about what is normal and what is aberrant?

Do such procedural barriers become cognitive biases? I would suggest the answer is "yes, and massively."

So when we're looking at moving the public service towards our ideal for 2020, we should be ruthless in examining the environment in which we work. Ideals can't simply be possible if forces are nudging in the opposite direction. Ideals have to seem like the standard.

They have to seem downright normal.


Making the Vision a Reality

The U.K. provides another concept to borrow and remix (both internally and externally) - contestable policy. We consult on policy in development; why not solicit feedback on existing policy and process, to see if it works as intended, or if the environment to which it applies has changed?

And we have the tools. This could happen today. Copy and paste into our GC-wide platforms that happen to have discussion threads built in, and just ask: does this still work the way we thought it would?

It’s the same approach as the Behavioural Insights Team: the scientific method, applied to government.