If Tech Fails to Design for the Most Vulnerable, It Fails Us All
What do Russian protesters have in common with Twitter users freaked out about Elon Musk reading their DMs and people worried about the criminalization of abortion? All of them would be better protected if the companies that build these technologies followed a more robust set of design practices.
Let’s back up. Last month, Russian police coerced protesters into unlocking their phones to search for evidence of dissent, leading to arrests and fines. What’s worse is that Telegram, one of the main chat-based apps used in Russia, is vulnerable to these searches. Even just having the Telegram app on a personal device might imply that its owner doesn’t support the Kremlin’s war. But the builders of Telegram have failed to design the app with considerations for personal safety in high-risk environments, and not just in the Russian context. Telegram can thus be weaponized against its users.
Likewise, amid the back-and-forth over Elon Musk’s plan to buy Twitter, many people who use the platform have expressed concern over his bid to foreground algorithmic content moderation and make other design changes on the whim of his $44 billion fancy. Bringing in recommendations from someone with no framework for assessing risk and harm to highly marginalized people leads to proclamations like “authenticating all humans.” This appears to be a push to remove online anonymity, something I’ve written about very personally. It is ill thought through, harmful to those most at risk, and backed by no actual methodology or evidence. Beyond these vague calls for change, Musk’s previous actions, combined with the existing harms of Twitter’s current structures, make it clear that we’re heading toward further impacts on marginalized groups, such as Black and POC Twitter users and trans folks.
Meanwhile, the lack of safety infrastructure is hitting home hard in the US since the leak of the Supreme Court’s draft opinion in Dobbs v. Jackson, which shows that the protections provided under Roe v. Wade are under mortal threat. With the projected criminalization of those seeking or providing abortion services, it has become more and more apparent that the tools and technologies most used for accessing vital health care data are insecure and dangerous.
The same steps could protect users in all these contexts. If the builders of these tools had designed their apps by focusing on safety in high-risk environments—for persons who are often seen as the more “extreme” or “edge” cases and therefore ignored—the weaponization that users fear would not be possible, or at the very least they would have tools to manage their risk.
The reality is that making better, safer, less harmful tech requires design based on the lived realities of those who are most marginalized. These “edge cases” are often dismissed as outside the scope of a typical user’s likely experience, yet they are powerful indicators for understanding the flaws in our technologies. This is why I refer to these cases—of the people, groups, and communities who are most impacted and least supported—as “decentered.” The decentered are the most marginalized and often the most criminalized. By understanding and establishing who is most impacted by distinct social, political, and legal frameworks, we can understand who is most likely to be a victim of the weaponization of certain technologies. And, as an added benefit, technology that recenters these extremes will always generalize to the broader user base.
From 2016 to early this year, I led a research project at the human rights organization Article 19, in conjunction with local organizations in Iran, Lebanon, and Egypt and with support from international experts. We explored the lived experiences of queer persons who faced police persecution as a result of using specific personal technologies. Take the experience of a queer Syrian refugee in Lebanon who is stopped at a police or army checkpoint and asked for papers. Their phone is arbitrarily searched. The officer sees the icon for a queer app, Grindr, and determines that the person is queer. Other areas of the refugee’s phone are then checked, revealing what is deemed “queer content.” The refugee is taken in for further interrogation and subjected to verbal and physical abuse. They now face sentencing under Article 534 of the Lebanese Penal Code: potential imprisonment, fines, and/or the revocation of their immigration status in Lebanon. This is one case among many.
But what if that logo were hidden, so that an app indicating an individual’s sexuality was not readily visible to the officer searching the phone, while still letting the individual keep the app and their connection to other queer people? Based on this research and a collaboration with the Guardian Project, Grindr worked to implement a stealth mode in its product.
The company also implemented our other recommendations with similar success. Changes such as the Discreet App Icon let users make the app appear as a common utility, such as a calendar or calculator, so that in an initial police search they could avoid the risk of being outed by the content or visuals of the apps they own. While this feature was created solely in response to extreme cases, such as that of the queer Syrian refugee, it proved popular with users globally. Indeed, it became so popular that it went from being fully available only in “high risk” countries to being available internationally for free in 2020, along with the popular PIN feature that was also introduced under this project. This was the first time a dating app had taken such radical security measures for its users; many of Grindr’s competitors followed suit.
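To make the mechanism concrete: one common way to build this kind of icon switching on Android (not necessarily Grindr’s actual implementation, which isn’t described here) is to declare alternate launcher entry points in the app manifest and toggle them at runtime. The alias names below are hypothetical; this is only a minimal sketch of the general technique.

```kotlin
// Hypothetical sketch of a "discreet icon" toggle on Android.
// Assumes AndroidManifest.xml declares two launcher aliases pointing at the
// same main activity: ".MainAlias" with the real icon and label (enabled by
// default), and ".CalculatorAlias" with a calculator icon and label
// (disabled by default). Only one alias is enabled at a time, so the home
// screen shows only one icon.

import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager

fun setDiscreetIcon(context: Context, discreet: Boolean) {
    val pm = context.packageManager
    val pkg = context.packageName
    val realAlias = ComponentName(pkg, "$pkg.MainAlias")
    val disguisedAlias = ComponentName(pkg, "$pkg.CalculatorAlias")

    // Enable the alias the launcher should display...
    pm.setComponentEnabledSetting(
        if (discreet) disguisedAlias else realAlias,
        PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
        PackageManager.DONT_KILL_APP
    )
    // ...and disable the other one.
    pm.setComponentEnabledSetting(
        if (discreet) realAlias else disguisedAlias,
        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
        PackageManager.DONT_KILL_APP
    )
}
```

The design point is that the disguise lives at the operating-system level, in what a cursory phone search can see, rather than requiring the user to delete the app and lose their community.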