Few UX designers enter the field because they hope to manipulate users’ political opinions. And yet here we are.
02/26/2021
Techworker
Few software engineers choose to enter the workforce so that they can deal with the messy, complicated questions of ethics. Few UX designers enter the field because they hope to manipulate users’ political opinions. Few founders create tech companies for the opportunity to testify in front of Congress.
Tech ethicists are the exceptions to these rules. They make up the small body of technologists who are willing to grapple head-on with issues like disinformation, algorithmic bias, and political manipulation – the very issues that make their colleagues squirm. They speak out about the problems they find — even when solving those problems hurts their employers’ bottom line.
“One of the things that drew me to computer science was that I could code and it seemed somehow detached from the problems of the real world,” says Joy Buolamwini at the outset of Coded Bias, a film released in November that examines the risks of racially biased machine learning. In the film (and a widely circulated TED Talk) she describes a project she worked on as a graduate student in the MIT Media Lab called the Aspire Mirror. The idea was simple: a mirror with a webcam that would detect faces and project digital masks on top of them. When she tried to use the mirror on her own Black face, she realized that the camera didn’t recognize her, though it readily detected the light-skinned faces that stood in front of it.
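The mirror relied on off-the-shelf face detection, and the failure mode is instructive. What follows is a minimal sketch, not Buolamwini’s code, of the kind of detection loop such a project depends on, using OpenCV’s stock Haar-cascade model as a stand-in: when the pretrained detector misses a face, it simply returns nothing, and the mask is never drawn. There is no error message, only silence.

```python
# Minimal sketch of a webcam face-detection loop (illustrative only; not the
# Aspire Mirror's actual code). Requires the opencv-python package.
import cv2

# Pretrained frontal-face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

camera = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # The failure Buolamwini describes shows up here as an empty result:
    # no exception is raised, the loop below just never runs for the missed face.
    for (x, y, w, h) in faces:
        # Stand-in for projecting a digital mask onto the detected face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("mirror", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```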
Buolamwini went on to found the Algorithmic Justice League and has become an influential leader in the field of AI ethics. Since founding AJL, she has sounded the alarm on Homeland Security’s use of facial recognition technology, testified before Congress about biased algorithms built by companies like Microsoft, IBM, and Amazon, and hosted numerous talks, exhibitions, and educational seminars to build solidarity among like-minded technologists.
Her work and that of other tech ethicists breaks an enduring mold: tech culture does not generally encourage problem-solving approaches which prioritize ethics. Instead, it favors the data-centric, quantitative reasoning believed to generate rapid commercial growth. To adapt to tech ethicists’ critiques, companies often must break ties with the very business model responsible for their success.
SHIFTING SCALES
“The ethos of tech has been built on order, structure, and scalability, whereas now, it’s going headfirst into messy areas dealing with speech and politics,” says David Polgar, founder of the nonprofit All Tech is Human. The organization is a hub for tech ethicists: it leads panels, releases reports, and interviews the field’s practitioners in an effort to build solidarity. All Tech is Human provides a platform for interested individuals to consolidate their ideas into real plans of action.
Take, for example, the ongoing debate about how best to curb the spread of online misinformation. While several companies have begun to label misinformation on their platforms, a chorus of ethicists has pointed to the shortcomings of these efforts and proposed more effective measures. A core problem they identify is that misinformation labels often conform to existing templates and style guides that are ill-suited to the task.
Conventional “design principles” are deployed to make it easier for users to intuitively navigate a platform and to subtly encourage them to engage with a product again and again. If misinformation labels are to cure some of the ills of online misinformation (a big if!), the particulars of just how they appear on the screen make all the difference. Though Facebook and Twitter have recently made their notification-style labels harder to dismiss, in most respects both companies have chosen to maintain their brand aesthetic rather than employ design approaches that stand out, cause friction, and grab attention.
Alex Tamayo, a freelance UX product designer based in the Bay Area, says that text-heavy fact-checking labels like the ones used by Facebook and Twitter “don’t deviate much from the established guidelines that they have published already” for other labels on the platform. False posts on Facebook, he says, are either given a small text-based label and links to reputable fact-checking articles, or washed over with transparent grey and given a similar text-based label on top of the media. In both instances, the labels rely on the same fonts, colors, and symbols used elsewhere on Facebook.
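To make “templated” concrete, here is a hypothetical sketch of how such a label might be represented internally. The class, fields, and defaults are invented for illustration and do not reflect Facebook’s or Twitter’s actual systems; the point is that both treatments Tamayo describes draw from the same house style rather than from anything designed to interrupt the user.

```python
# Hypothetical model of a templated misinformation label (illustrative only).
from dataclasses import dataclass, field
from typing import List


@dataclass
class LabelTemplate:
    text: str                       # templated copy, e.g. "False information"
    font: str = "platform-default"  # same type used for other UI text
    color: str = "#65676B"          # muted grey, consistent with the brand palette
    overlay_media: bool = False     # transparent grey wash over photos/videos
    fact_check_links: List[str] = field(default_factory=list)


# The two treatments Tamayo describes, both drawn from one style guide:
inline_label = LabelTemplate(
    text="False information. Checked by independent fact-checkers.",
    fact_check_links=["https://example.org/fact-check"],  # placeholder URL
)

media_overlay = LabelTemplate(
    text="False information",
    overlay_media=True,
)
```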
Word choice follows a templated pattern as well. John Moore-Williams, a UX content strategist currently at Google, says he’s seen examples in the industry that go so far as to specify when writers should use a verb or a noun, though many templates are less restrictive and focus on calls to action or specific information to be communicated. Plain language, he points out, with its emphasis on brevity and the active voice, is also standard, generating a similar tone of voice across most UX content no matter who wrote it.
In both visual design and word choice, a user’s emotional reaction is still accounted for: whether a piece of UX is likely to trigger strong emotional responses is thoroughly considered. But those responses are weighed against business interests, like keeping users engaged for sustained periods, rather than the best interests of the users themselves.
In most cases, then, misinformation labels appear more effective than they really are. With them, companies like Facebook are “trying really hard to look like they’re doing the right thing but then making it easy for people to not do the right thing,” Tamayo summarizes.
MOVING FAST AND BREAKING THINGS
The quest for scalability does more than blunt these fixes; it is what caused such a rampant misinformation problem in the first place. Scalable, frictionless UX encourages thoughtless user activity: if users impulsively interact with a piece of content without processing or validating the information in it, it’s at least partly because the platforms are designed to encourage just this sort of engagement.
“The very reason we had to employ so much moderation to take down misinformation is because you had so much misinformation spread in the very beginning,” says Polgar.
In this way, social media newsfeeds are not, in and of themselves, human-centric: when they are used as intended, they have a negative effect on human psychology and on society as a whole. Yet this is how technologists have been taught to create technology. It’s rare to find an engineering program that encourages students to take genuinely thought-provoking and challenging social science classes, for example, or a coding bootcamp that integrates ethics. It’s hard to blame technologists for a culture that tells students math and science are a way to escape the ambiguities of history and literature.
In contrast to the narrow ways technologists have been trained to harness human psychology through the technology they create, entire disciplines committed to understanding human emotion, interaction, and psychology could be tapped for expertise if the cause of stemming misinformation were embraced in good faith. In her book Weapons of Math Destruction, Cathy O’Neil describes how math has been used as a shield for unethical practices: when technology is not made with an interdisciplinary approach, there are inevitable ethical gaps. “The math-powered applications powering the data economy were based on choices made by fallible human beings,” she writes. “Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed [our] lives.”
Emily Saltz is a researcher dedicated to narrowing those gaps. She’s a fellow at the Partnership on AI, an organization that conducts in-depth research on algorithmic biases, organizes discussions among researchers, practitioners, and companies, and provides educational materials for the general public. She says that one-size-fits-all solutions are ill-equipped to deal with some of the politically and emotionally complicated cases they are applied to. Her team is “trying to dig more into what does that actually look like in individual examples — like a post that was rated partly false on Facebook.”
The most common measure of success in misinformation moderation is how much sharing of a post decreases after flagging. Facebook, for instance, touted last spring that it had “displayed warnings on about 40 million posts related to COVID-19 on Facebook,” and that “when people saw those warning labels, 95% of the time they did not go on to view the original content.”
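To be concrete about what such figures measure, here is a hypothetical sketch of the arithmetic; the function names and input numbers are invented, and only the ratios mirror the kind of metrics platforms report.

```python
# Hypothetical versions of the two headline metrics (illustrative numbers only).

def click_through_rate(label_impressions: int, click_throughs: int) -> float:
    """Fraction of label views where the user went on to see the original content."""
    return click_throughs / label_impressions


def share_reduction(shares_before: int, shares_after: int) -> float:
    """How much sharing of a post dropped after it was flagged."""
    return 1 - shares_after / shares_before


# "95% did not go on to view the original content" is just one minus this rate.
rate = click_through_rate(label_impressions=1_000_000, click_throughs=50_000)
print(f"{1 - rate:.0%} did not click through")  # -> 95%

# The metric says nothing about whether anyone changed their mind, feared
# retribution, or simply reposted a screenshot of the same claim elsewhere.
print(f"{share_reduction(shares_before=10_000, shares_after=1_500):.0%} fewer shares")  # -> 85%
```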
But this overlooks the nuance involved in why such quantifiable decreases occur. Do people actually change their minds or simply fear censorship or retribution from the platform? It is also rare for platforms to publish the error rates of these labels — and multiple reports have shown that truthful information can be inadvertently targeted while genuine misinformation spreads freely.
PATERNALISTIC, BIASED, PUNITIVE
Efforts at moderating the spread of misinformation provide yet another example of how quantitative approaches fall short. At the same time, qualitative approaches require more manpower, time, and money to execute. For a lot of tech companies, it’s not worth the cost or, worse, the self-inflicted PR damage that might follow from admitting they’ve been avoiding these solutions for years. “There’s a lot of examples where clearly the decisions aren’t completely making sense from a human level. But [they’re] trying to approach it at a level of scale, and kind of getting as much as [they] can,” says Saltz.
Approaching misinformation from an exclusively quantitative perspective isn’t just insufficient. These quantitative approaches created, and continue to amplify, the polarization that encourages people to post and share salacious misinformation in the first place. In Saltz’s latest research, for example, she demonstrates that misinformation labels themselves can create strong emotional reactions that not only make users upset but radicalize them. Misinformation labels can feel “paternalistic, biased, and punitive,” Saltz said, turning people away from the concept of platform moderation and toward rhetoric that accuses platforms of anti-conservative bias, for example.
COMPLEMENTARY GOODS
Though quick scalability is more profitable in the short term, multidisciplinary research and interventions are key to a product’s long-term success, or so argue the contributors to The Business Case for AI Ethics, a report from All Tech is Human. Achieving this interdisciplinary approach requires involving people from an array of disciplines: integrating ethicists and social scientists throughout a company’s hierarchy. “If you have those types of people and they’re siloed away, and the product design has already happened, things might already be baked in,” said All Tech is Human’s Polgar, referring to things like emotionally polarizing language and design, or racial and gendered biases.
For the interdisciplinary approach to succeed, the ‘non-technical’ experts need both access and responsibility. Without them, their research often becomes little more than a marketing ploy: a way for companies to show they’re trying to do better without actually making the necessary changes that would sacrifice profit.
Take the case of Timnit Gebru, a high-profile AI ethicist hired by Google to assess ethical risks in the company’s algorithms. In the fall of last year, she was tasked with creating a human language processing tool “consistent with [Google’s] AI principles.” But after seeing a draft of the project, Google fired Gebru while she was on vacation and demanded that a paper detailing her findings be retracted. Employees on Gebru’s team also told the Washington Post that while Google publicly touted Gebru’s work, it had long been ignoring her advice and keeping her team separate from others who could implement her findings.
While researchers within the companies are limited in their ability to critique their employers, outside researchers lack the necessary information to provide genuinely useful suggestions. Saltz points out that “right now, we don’t have enough detail to try to replicate or even try out different interventions.”
In June, a Partnership on AI research team that included Saltz published 12 principles for designers to more effectively and ethically label manipulated media, offering specific suggestions about animation, linking to other information, language choice, and more. The effectiveness of such a publication, however, is limited: who, after all, is reading and digesting this information? Tech ethicists and journalists reporting on tech ethicists, or the engineers creating such labels at the companies themselves and the executives they report to?
A culture shift that favors a more interdisciplinary approach has to ultimately penetrate tech companies themselves. An industry ethos built on quantitative approaches forecloses effective qualitative solutions. In the struggle to counter the harms of misinformation, the tech industry is an ambivalent Dr. Frankenstein asked to destroy his defining creation.
Techworker Link: https://techworker.com/2021/02/26/can-ux-design-really-fix-techs-misinformation-problem/