
5 Things About Online Public Engagement

Friday, August 19, 2016
by Nick Charney

Back in November I wrote a post entitled "Thinking, fast and slow about online public engagement". Today I'm going to push that thinking a little deeper, provide some examples, and generally expand the premise and reasoning behind the original piece. In so doing I will undoubtedly re-cover some of the same ground, so the original isn't mandatory reading. Oh, and heads up: this is a long read.

Thinking, Fast and Slow about Online Public Engagement

Daniel Kahneman's Thinking, Fast and Slow is from the increasingly popular field of behavioural economics. It was widely read in government circles in Canada and elsewhere, so if you haven't read it yet, you might consider picking it up. If that's not your speed you could sit down for an hour and watch the video below or just read my quick explanation underneath it.



At its core, Kahneman's thesis is that the human mind is made up of two different and competing (metaphorical) systems: System 1 (fast) and System 2 (slow).

System 1 operates automatically, intuitively, involuntarily, and effortlessly — like when we drive, recognize facial expressions, or remember our name.

System 2 requires conscious effort: deliberating, solving problems, reasoning, computing, focusing, concentrating, considering other data, and not jumping to quick conclusions — like when we evaluate a trade-off (cost-benefit analysis) or fill out a complicated form.

The problem — according to Kahneman — isn't that people have two systems of thinking, but that they often rely on one system in situations when they should be using the other.

So, what happens if we apply Kahneman's fast and slow thinking to the realm of online public engagement, where -- presumably -- we want citizens to be deliberate and considerate problem solvers?

First, let's look at the technology of participation

Nearly all of the popular (or would-be popular) technology providers out there are relentlessly focused on design as a means to make things as easy and as intuitive to use as possible, to make things fast, to reduce friction. That's because most online product and/or service providers want 'conversions' (e.g. they want you to take a specific action, such as click, share or purchase). In fact, there's a whole field called Conversion Rate Optimization (CRO); here's how the ever popular Shopify describes it (h/t Jason Pearman for the link):

"Your store needs to be designed with your customers in mind.

While boosting your traffic can generate more sales, it’s just as important to focus on turning your current traffic into paying customers.

At every step of your customers’ purchasing journeys, there are new opportunities for you to make their paths shorter, easier, and more enjoyable. Through rigorous experimentation and analysis, you can fine-tune your website to push people closer to making a purchase. This process is called Conversion Rate Optimization or CRO.

Conversion Rate Optimization is a technique for increasing the percentage of your website traffic that makes a purchase, also known as a conversion.
And, on a much smaller scale, conversions are happening all the time leading up to that moment, too.

For instance, a conversion on your homepage might mean having a visitor click through to a product. A conversion on a product page might mean a customer clicking ‘Add to Cart’. Conversions can be entirely dependent on the purpose that a specific part of your website serves.

To optimize your online store for conversions, both big and small, you need to be constantly testing each and every aspect of your website."
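The arithmetic behind Shopify's definition is simple, and seeing it laid out makes the later point about design incentives concrete. Here's a minimal sketch of how a conversion rate and the "lift" from a lower-friction page variant get computed; the numbers are invented for illustration and don't come from Shopify:

```python
# Illustrative sketch of the arithmetic behind Conversion Rate Optimization (CRO).
# All figures below are invented for demonstration purposes.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who took the desired action (click, share, purchase)."""
    return conversions / visitors

# A/B test: the existing 'control' page vs. a lower-friction 'variant' page.
control = conversion_rate(conversions=40, visitors=2000)   # 0.02  (2.0%)
variant = conversion_rate(conversions=58, visitors=2000)   # 0.029 (2.9%)

# Relative improvement ("lift") of the variant over the control.
lift = (variant - control) / control

print(f"control={control:.1%} variant={variant:.1%} lift={lift:.0%}")
```

Even a fraction of a percentage point of lift translates directly into revenue at scale, which is why providers test "each and every aspect" of a page; the same relentless friction-reduction is what primes users for fast thinking.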
To be clear, there is nothing inherently wrong with the logic of using design to reduce friction and increase conversions if that is your ultimate goal (with perhaps the exception of Dark Patterns, user interfaces which are designed to trick people); businesses need to make money and conversions generate revenue. It's obvious that these firms want you to buy, like, or share as effortlessly as possible. It's why they use browser cookies to keep you logged into their network, why they store your credit card and shipping information, why they offer delightful mobile experiences and single click checkouts. It is clear that the vast majority of product and/or service providers purposefully deploy design online in a way that primes users for system 1 thinking; in many cases their entire business model depends on it. Thus it shouldn't come as a surprise that the dominant design discourse is one of ease of use (i.e. ease of conversion), because the discourse itself is predominantly driven by the product and/or service providers themselves. From a public administration perspective, this is inherently problematic for a couple of reasons:
  • Private sector product and/or service leaders set digital experience expectations for citizens in the public domain
  • Governments follow private sector leaders and design accordingly, hoping to meet citizen expectations
  • Ease of use (system 1: fast thinking) may be congruent with some of government's objectives (e.g. reach and amplification) but not others (e.g. deliberate feedback on crunchy policy issues), which may require a more conscious effort (system 2: slow thinking)
Essentially my point here is a little bit of Marshall McLuhan's "the medium is the message" and/or, if you prefer, "we shape our tools and thereafter our tools shape us". In other words, if we use fast tools for online consultations then we ought to expect loose answers. Despite what anyone else will tell you, Twitter is not a good medium for in-depth, meaningful, and sustained conversation. Sure, it can be bent to suit that purpose from time to time, but it certainly wasn't designed for it. Twitter chats are a great example of fast rather than slow thinking; the medium (Twitter) shapes the message. Participants have to be brief, reactive, and quick if they are to be a part of the conversation as it happens.



Second, let's look at language of participation

I recently read an interesting and related piece in the Atlantic entitled "The Decay of Twitter" that made a number of related arguments (and is worth reading in its entirety). The article discusses the work of Walter Ong (a student of the aforementioned McLuhan) and his scholarship on the transition of human society from orality to literacy: "from sharing stories and ideas through spoken language alone, to sharing them through writing, text and printed media". Ong's work catalogued the differences between these two cultures, noting that orality treats words as sound and action, emphasizes memory and redundancy, and stays close to real life experience, while literacy treats words as something that can be looked up, abstracted, and analyzed. Ong's work perfectly illustrates how online communications channels such as Twitter shape the nature of the discourse that happens on them.

As a starting point I would argue that the differences between Ong's conceptualization of orality and literacy are congruent with the differences between Kahneman's fast and slow systems; the similarities between their analytical frames are apparent. The article goes on to discuss at length the idea that the decay of Twitter has a lot to do with the notion that it blurs the distinction between orality and literacy, and thus blends the lightweight nature of ephemeral conversation with the permanence of the declarative/analyzable nature of the Internet.

This blending is where all the faux-societal outrage comes from. It's why a single errant tweet can sink a brand, destroy a career, or make the entire Internet mad for the day:
"In other words, on Twitter, people say things that they think of as ephemeral and chatty. Their utterances are then treated as unequivocal political statements by people outside the conversation. Because there’s a kind of sensationalistic value in interpreting someone’s chattiness in partisan terms, tweets “are taken up as magnum opi to be leapt upon and eviscerated, not only by ideological opponents or threatened employers but by in-network peers.” Anthropologists who study digital spaces have diagnosed that a common problem of online communication is “context collapse.” This plays with the oral-literate distinction: When you speak face-to-face, you’re always judging what you’re saying by the reaction of the person you’re speaking to. But when you write (or make a video or a podcast) online, what you’re saying can go anywhere, get read by anyone, and suddenly your words are finding audiences you never imagined you were speaking to."
The article goes on to argue not just that conceptions of orality and literacy are blending online, but also the public and the private, the personal and the professional, and the subjective and the objective -- to which I would add Kahneman's fast and slow. This becomes especially troubling when we realize that the communications technologies we rely on for engagement and consultation are actively creating a disconnect between what people say and how it is ultimately interpreted or understood (e.g. context collapse).


Third, let's look at the politics of participation

There was a lot of coverage on the confluence of Brexit and the social media ecosystem. It was an inflection point in how much we ought to trust algorithms to decide what makes it into our information diet. The Guardian ran a particularly interesting piece entitled "The truth about Brexit didn't stand a chance in the online bubble" which argued that in the current media landscape the burden of being thoughtful and seeking out opposing views on a particular political issue (in this case Brexit) falls predominantly on consumers rather than producers of media and/or social networks. It opens:
"In the quaint steam age of Mark Twain it was the case, as the writer allegedly noted, that: “A lie can travel halfway round the world while the truth is putting on its shoes”. Owing to significant changes in the media landscape since 1900, the same lie can now circumnavigate the globe, get a million followers on Snapchat and reverse 60 years of political progress while the truth is snoozing in a Xanax-induced coma, eyeshade on, earplugs in.

Modern truth is not just outpaced by fiction, it can be bypassed altogether as part of a sound political strategy or as a central requirement of a media business plan. In an illuminating exchange with the Guardian last week, Arron Banks, the wealthy donor partly responsible for the Brexit campaign, explained leave’s media strategy thus: “The remain campaign featured fact, fact, fact, fact, fact. It just doesn’t work. You have got to connect with people emotionally. It’s the Trump success."
Again, this is classic fast/slow, orality/literacy playing out online. Success in the political realm -- in the case of Trump and Brexit -- isn't about being slow or literate (or factually correct); it is about being fast and oral (or sensational). The article continues:
"Politics however is just exploiting an information ecosystem designed for the dissemination of material which gives us feelings rather than information."
And concludes:

"If we tolerate a political system which abandons facts and a media ecosystem which does not filter for truth, then this places a heavy burden on “users” to actively gather and interrogate information from all sides - to understand how they might be affected by the consequences of actions, and to know the origin of information and the integrity of the channels through which it reaches them."
It is clear that the media landscape currently favours fast/oral and thus hyper-partisanship over thoughtful discourse. Expecting citizens to exert more control over their media environment and actively slow themselves down in this environment is unrealistic. Anyone who has ever read the comments on an Ottawa Citizen piece about the government (regardless of political stripe) knows this to be true.

Fourth, let's look at the broader implications of this type of engagement on society

What if being reliant on technologies that prime the wrong system, falling victim to the hybridization of orality and literacy online, and the political exploitation of both, is only half the challenge? The half immediately in front of us. What if the net result of those two things coming together has a broader and longer-term impact on society? What if it is eroding the very idea of civic participation by over-simplifying the complex task of participating in governance? To wit -- from In the Clutches of Algorithms (also worth reading in full):
"Apple, with a reputation for simplifying large technological problems, making them manageable for most people. In other words, the company’s software masks the complexity of a task. But rather than helping us understand the task, this kind of simplification helps us ignore the task and instead understand the device. I now communicate with my phone as a surrogate for adjusting the temperature and flipping light switches. Modern living now applies the same obedience principle too often seen in classrooms: Our devices now teach us not how to do things but rather how to comply with their interfaces. We are, as Seymour Papert warns, not programming the machines, but instead being programmed by them." 
If you apply the same logic to governance as applied to Apple above, then the question becomes: have we allowed our pre-digital understanding of public consultation to be re-programmed (re-imagined, re-understood, re-simplified) in the digital age? And if so, what have we lost in the process, and what are the long term implications of that loss on our governing institutions? Who -- if anyone -- is looking at these questions? The closest corollary I can find is Rushkoff's Program or Be Programmed, which makes the case that if you don't understand how the program works then you are basically beholden to it (i.e. you are being programmed by it); another logical extension of McLuhan.

Moreover, what this does is make it incredibly hard to shift the normative discourse to one that is more thoughtful and civically minded. You simply can't introduce slow issues into fast environments and expect meaningful discourse. A normative fast culture is also anathema to the very discussion of fast versus slow, because in order to understand the latter you need to actively engage in it for a moment. In other words, you need to slow down to understand how slowing down could work. The fast pace of the internet is running headlong into the slow pace of governance, and while speeding some things up is important (e.g. current service delivery), speeding up others could be counterproductive (e.g. designing future services) if that speed causes them to miss the mark.

Fifth, let's consider what slower, more literate online public engagement could look like

The technology has to be different. It needs to prime people for a slow/literate process rather than a fast/oral one. That means it's likely not something that is already mainstream like Facebook, Twitter, or Instagram. It is also likely that these companies will not be the birthplace of slower, more deliberative technologies. It will also need to crack the anonymity nut, which exacerbates all of what I have outlined above by removing accountability and the consequences that flow therefrom (Placespeak is a good example in this regard).

The context needs to be clearly articulated. People need background information. They need white papers, videos explaining the problem, and links to additional information. Moreover, the process (and the technology) needs to nudge them into consumption and reflection before it solicits their input.

The questions need to be well articulated, specific, directed, and perhaps even technical and/or exclusionary. The truth of the matter is that for any given engagement the proponent likely doesn't want everyone's input but rather a highly specific subset of it. Failing to narrow the scope of the engagement means receiving input that needs to be 'looked at' (which has a cost) but ultimately goes unconsidered.

All in all

I think the field is relatively new, poorly understood, and littered with varying degrees of amateurs. There are a lot of interconnected pieces and insights from complementary fields (I've strung together but a handful in a cursory way above) that have yet to gel. When this finally happens we will start to have a better sense of how to execute more sophisticated online public engagement, produce better outcomes, and ultimately create more public value and improve our system of governance.

Oh and in case you managed to read your way down this far -- yes, I am perfectly aware that I engaged your fast system with a click-bait title. It was deliberate and hopefully the irony wasn't lost on you.

Cheers

From innovation to business-as-usual

Wednesday, August 17, 2016
by Kent Aitken


Last week the Mowat Centre released a report called Creating a High-Performing Canadian Civil Service against a Backdrop of Disruptive Change. The author, Mark Jarvis, pegs "innovation" as one of the six characteristics of a high-performing civil service. We'll borrow his words and define it as "the capacity and skill to develop new approaches to policy development and service delivery...  to meet the changing circumstances facing government."

He also included this: "While it may not be desirable for all civil servants to innovate in their individual roles, innovation needs to be a core competency of the civil service."

I'm worried that there's a disconnect between what we might call macro-level innovation - that is, the idea of government writ large experimenting with new approaches - and the experience at the individual level. 

At the individual ideas level, we talk about crowdsourcing, ideas markets, and open innovation (MIT Press is dropping a book called Free Innovation later this year). Our co-op students are pitching to a "Dragon's Den" for interesting ideas conjured up from the working level later this week.

There is a lot of power and potential in open innovation systems, whereby ideas and experiments can spread and anyone can add to them or pick them up and run with them. 

But here's the disconnect: time and effort are not free; they're not even discretionary. Public servants have a duty to spend their time on the activities they've been directed to accomplish. (Although I do think we should often take a broader view of this; see: Short-term Thinking and Why Communication Can't Defeat Silos.)

Ideas and approaches succeed when they become embedded in the right place, and the people that can make them happen are the same people who need the approach to problem-solving in the first place. The lifespan of experiments and proofs-of-concept is determined by whether or not they ever get folded into mandates and business-as-usual.

And for any given good idea, there might be one person or team well-suited to making it a reality. Maybe a few, and in some cases maybe one in every department.

Next, even if the idea and that person or team do come into contact, there's another set of variables. Does that team have the resources? The time? The expertise? Have they just committed to a contradictory idea? (For instance, many otherwise "good" IT ideas are unfeasible when alternative, long-term IT decisions have already been made.)

In a true open innovation ecosystem, there's a much bigger critical mass of actors who can latch on to a given idea. And they can freely choose to spend their time on it. Government innovation, where it requires the exercise of government resources or even represents the choice of one approach over another, removes the "free" and "open" elements (see: On Somewhat Simpler Taxonomies). Duties, authorities, and mandates suddenly become fundamental.

Which is not a showstopper. But for those interested in government doing things differently, it means the conversation has to get into how ideas, once surfaced, are resourced and supported. Far more so than in the external collaboration ecosystems that serve as reference models.

The transparency antidote to risk

Friday, August 12, 2016
by Kent Aitken

A couple years ago my team worked on a project that got some negative attention. I'll go light on the details, but we took some criticism and my teammates got worked up, looking for ways to defend the outcomes. I had the opposite response: now we had some ammunition to do things better in the future.

Across the public sector, the meme seems to be that people are scared of transparency (e.g., open data excuse bingo). Which is, at first blush, understandable: it might mean more scrutiny, more criticism, a loss of control over the conversation.

However, there's another meme that bureaucracies are risk-averse and that they stick to the status quo. Transparency about public programs and services reveals the risk in the way things are now, and gives decision-makers an even playing field to make decisions about change.

Here's an example. Edmonton has had a public dashboard of public policies and services for years: transit ridership, 311 call response times, growth in small and medium businesses, etc. And some are in the yellow and red, not meeting the benchmarks set.


No one signs up for public service to deliver a service that doesn't meet the public's needs. Having the performance freely available on a struggling program means that the program managers will have to explain why things are the way they are, but it also means that the program will receive the support it needs to improve - or change. It's short term stress for the sake of being a part of something worthwhile in the long run.

And the public gets a better shot at getting what they need, sooner - both in terms of information and public outcomes. Where there's little transparency about current performance, potential changes from the status quo get disproportionately scrutinized. And given that even long-running activities - e.g., how governments have been doing IT - are still experiments with uncertain long-term outcomes, we need to put those activities on an equal footing with their alternatives.


On Vacation

Friday, July 22, 2016

by Nick Charney

Hey Everyone -- thanks for taking the time, just wanted to let you know I'm on vacation for the next two weeks. See you on the other side.

Cheers

Impossible Conversations: Smart Citizens, Smarter State

Friday, July 15, 2016

Smart Citizens, Smarter State makes the case that governments are woefully under-using the knowledge, expertise, and talents of their citizens - and that bridging that gap is “the future of governing.”

Author Beth Simone Noveck has been at the forefront of changing trends in government and governance for a while now. She’s the Director of NYU’s Governance Lab, and led the U.S. Open Government initiative as Deputy Chief Technology Officer of the White House from 2009-2011.

Having been at the forefront, she draws on a variety of case studies and examples of how government got to novel solutions through more engagement with the public, which covered a broad spectrum of models:

  • Crowdsourcing: gathering ideas, suggestions, or feedback on ideas from a broad base of participants
  • Citizen engagement on policy-making: involving people in decision-making through online townhalls, discussions, surveys, polls, or other approaches
  • Collaborative coding: getting a variety of people to work together to build open-source software
  • Citizen science: creating technological platforms through which people can contribute to research, like mapping star fields, taking photos of sea level rise, or measuring pH in a river system
  • Public challenges: posting a monetary prize for the first person or organization to design a solution to meet a particular technical, scientific, social, or governance challenge

To add to those models (some of which were covered in her 2009 book Wiki Government), Noveck lays out the concept of “technologies of expertise,” which go beyond the platforms that allow governments to communicate back-and-forth digitally with citizens and into models that help government officials not only find specific types of experts, but also find proof and demonstration that those people actually do possess the expertise they claim to. For instance, Github is a website that hosts code repositories, but it also tracks users’ actions and how much they contribute to different coding projects. Or, this proof can be offered through reputation (e.g., eBay ratings), certification (e.g., online education programs), or badging (e.g., Codecademy).
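To make the "technologies of expertise" idea a little more tangible, here is a toy sketch of how observable contribution signals might be blended into a rankable expertise score. Every name, field, weight, and number below is my own invention for illustration; this is not Noveck's model or any real platform's API:

```python
# Toy illustration of a "technologies of expertise" signal: ranking people by
# observable contribution records rather than self-declared credentials.
# All names, fields, and weights here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    commits: int    # e.g. merged contributions on a code-hosting platform
    reviews: int    # e.g. peer reviews of others' work
    rating: float   # e.g. a peer-reputation score, 0.0 to 5.0

def expertise_score(c: Contributor) -> float:
    """Blend demonstrated activity with peer reputation (arbitrary weights)."""
    return c.commits * 1.0 + c.reviews * 2.0 + c.rating * 10.0

people = [
    Contributor("alice", commits=150, reviews=40, rating=4.5),  # score 275.0
    Contributor("bob", commits=200, reviews=5, rating=3.0),     # score 240.0
]

# Rank candidates by demonstrated expertise, most credible first.
ranked = sorted(people, key=expertise_score, reverse=True)
print([p.name for p in ranked])  # ['alice', 'bob']
```

The interesting design choice is in the weights: privileging peer review and reputation over raw volume is one crude way of demanding demonstration over mere activity, which is precisely the gap Noveck argues these platforms fill.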

So is this the future of governance?

Nick Charney

Overall, I enjoyed Noveck’s book, though there were a few chapters that were more US-centric and thus less germane to the Canadian experience (but one can hardly hold that against her). Here are a few insights from the book (including page numbers) that I think are worth sharing, followed by my commentary.

  1. We have no good mental models for what smarter institutions look like -- i.e. the next iteration of governance (page 30), which leads me to conclude that we are slowly negotiating what those look like through ongoing societal discourse within and about the nature of democracy in a globalized world. Indeed many of these conversations have permeated our political discourse as of late (e.g. US election, Brexit, North American Leaders Summit, etc).
  2. There’s a lot of discussion about the role that professionalism plays in crowdsourcing in Noveck’s book (page 48) and I’d like to sit her down with Cult of the Amateur author Andrew Keen (See: New Thoughts from an Early Adopter) to discuss it at length.
  3. There’s some discussion about “the spoils of new, middle class jobs for thousands of people” (page 69), which to my mind simply signals that any change in governance will have huge economic implications as those changes cascade through our complex and interrelated systems, and that the stakes are obviously high and interests are vested.
  4. Noveck also talks at length about the “We the People” website (pages 77-79), which simply confirmed for me that we have no good models for engaging people in policy making in large numbers, that the level of sophistication is still relatively low, and that our technological systems are too fast for slow and thoughtful deliberation (See: Thinking, Fast and Slow About Online Public Engagement).
  5. I found the concept of “thick” versus “thin” engagement (page 82) to be interesting and likely helpful in framing discussions with others (e.g. do you want thick or thin engagement on this, or do you have enough dedicated resources for thick engagement?).

Finally, if you read anything from the book, read pages 144-145. It's insightful and aligns with how the policy innovation and experimentation winds are blowing in Ottawa right now. Here’s a snippet:

“Although there is a lot of writing about improving the use of science in policymaking, Nudge work included, there is still little attention to the science of policymaking itself and only the whispers of an effort to test empirically new ways of deciding and of solving problems. Even if we vary where and how and at what level of government we restrict abortion or legalize prostitution or change the design of the forms used to enroll people in social programs, we do not often question the underlying assumptions about how we make these policies in the first place…

Examining the choice between policies arguably helps to spur a culture of experimentation, but it does not automatically lead to an assessment of the ways in which we actually make policy. To be clear, the Nudge Units often test choices between policies, or sometimes test the impact of changing default rules, such as how and when to enroll people in a social service. Typically however, they do not experiment with approaches to making policy. They do not examine how governance innovations, such as open data, prize-backed challenges, crowdsourcing, or expert networking translate into results in real people’s lives …

The first step toward implementing smarter governance, therefore, is to develop an agenda for research and experimentation.”


Kent Aitken



I want to start by doubling down on Nick’s fourth point, with some nods to his first. Large-scale digital citizen engagement is still in its Myspace phase, and we’ll need to keep designing and experimenting to get to models and platforms that work really well. No one's really cracked the online deliberative dialogue nut. Here's a solid, if dated, overview, and here's MIT's work, but the long story short is: polarization around views, in-person power dynamics being replicated online, the order and timing of comments influencing debates, and people sometimes just being lousy.

That said, the exceptions prove the rule for intentional design: for crowdsourcing, some of the really successful case studies are incredibly specific to a particular activity (e.g., spotting supernovae or folding proteins).

During our book club discussion on this one we spent a lot of time on questions about the value of citizen engagement, and the perceived success of governments at both soliciting and using feedback and ideas. My reaction tends to be along these lines: of course it will be of benefit, in many cases, to work with people outside of government to solve public problems, whether that means niche experts, demographically representative samples, or broad public engagement. The question is knowing when to do it, and being able to design the approach such that it’s worthwhile for everyone involved. Subtitles of books like this tend to be hyperbolic (probably well beyond the authors’ claims), so I’d nuance the line to “part of the future of governing.”

Lastly, there’s one area where I tend to be very skeptical, which is the “technologies of expertise” that Noveck refers to: that is, platforms that are designed to surface experts and make them findable by governments. I suspect that if you dig beneath the connections made through platforms that allow people to use, and publicly prove, expertise (e.g., Github, Stack Overflow, Wikipedia, etc.) the lines will map to the interpersonal social networks of the people using those platforms. This argument could be a little chicken-and-egg (when the relationships start virtually over shared interests), but the social context isn’t going away.

From an impartiality perspective, this is a tad dangerous because it means it’s not a perfectly level playing field, but the relationship between government and citizens is going to grow through the aggregate relationships between citizens and government officials (see: Why Government Social Media Isn't Social (and the comments)). The Smart Citizens we're talking about will engage with government when government starts to get to know them.