Algorithms are everywhere. We rely on algorithms to tell us what movies to watch, which clothes to wear, and even whom we should date. Algorithms help us make decisions, and this generally makes our lives more convenient. But as consumers, we often aren't privy to what goes into an algorithm. Unless you're on the inside, an algorithm is more or less a black box.
At Helm, we’re building products that support our partners’ work on issue advocacy, community organizing, and efforts that strengthen our democracy. We believe data and algorithms are an important part of the solution. Political organizations now use algorithms to determine who is targeted with campaign advertising, where canvassers knock on doors, who receives texts or phone calls, and who gets surveyed. But we also know that algorithms are biased, which means they can also be part of the problem, especially with regard to political polarization (see here).
So, we believe we have a responsibility to actively work to demystify our own black boxes for our partners and users. This will be a long process and we don’t know exactly what it will look like. If you’re interested in tackling problems like this and have ideas, our team is hiring.
But to start, we are committed to communicating the strengths and limitations of our data and algorithms, so that our users and the public can make informed decisions in their organizing work. Ultimately, we believe data transparency is a key component of building healthy communities and strengthening democracy.
To get to the point of transparency we have a lot of listening, thinking, and understanding to do. Right now, here are some of the questions we’re asking ourselves:
How can we minimize the harm algorithms impose on already marginalized communities?
How have other organizations communicated about data and algorithms in ways that resonate with people?
What are the best practices for drawing inferences from imperfect and incomplete data?
What data are missing, and what does that reveal about bias and inequity in the US?
How can we ensure that our data tell humanizing stories of the people they represent?
Why It Matters
When we rely on algorithms to help make decisions, we often overlook the ways in which values and biases affect outcomes. In some cases, this is okay; there are too many takeout restaurants to choose from, so we offload the decision to an algorithm. And no matter what it tells us, we still get dinner.
In our political system, organizers and advocates are working to change minds and change laws. They’re building consensus and understanding. They’re supporting ballot initiatives that aim to enact more equitable policies and push for a more participatory democracy. This work is hard. It’s time consuming. It’s expensive. Algorithms play an important role in making this work feasible, sustainable, and impactful.
But given that algorithms are just as subjective as the people who build them, we have to be particularly careful about how they’re used in political organizing. Left unchecked, they can and will exacerbate inequalities and make societal problems worse by cloaking critical decisions in an illusion of objectivity. As a society, we are only beginning to address this problem.
Now more than ever, digital products are marketed as ‘data-driven’ solutions to everyday problems. It’s tempting to think that data-driven decisions are somehow more “objective” than decisions made by individuals. After all, facts are facts and data are data. Yet this comfort is illusory: an algorithm is only as good as its supporting data, and the data feeding decision-making algorithms are not objective reflections of ground truth. Instead, data and algorithms reflect the biases and blind spots of the people who built them.
For example, political organizations rely on turnout models to figure out who receives “get out the vote” (GOTV) messaging. But these models are built on historical voter data, which excludes many people, and those who are excluded tend to be the people who have been historically marginalized, perpetuating a cycle of exclusion and misunderstanding. Moreover, historical data is inherently backward-looking; to build an equitable future, we need data and tools that look forward.
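The exclusion described above can be sketched in a few lines. This is a toy illustration with made-up data and a deliberately naive scoring rule, not Helm's actual models: voters absent from the historical file never receive a score, so they can never be targeted, no matter how likely they are to vote.

```python
# Hypothetical example: a turnout "model" built only on historical voter-file
# records silently drops anyone without a history.

historical_voter_file = {
    # voter_id: past turnout history (1 = voted, 0 = did not vote)
    "alice": [1, 1, 0, 1],
    "bob":   [0, 1, 1, 1],
}

# Newly registered voters have no history, so they are absent from the file.
new_registrants = ["carol", "dan"]

def turnout_score(voter_id):
    """Naive score: average of past turnout.

    Returns None when the voter has no history -- which is exactly
    where the exclusion creeps in.
    """
    history = historical_voter_file.get(voter_id)
    if history is None:
        return None
    return sum(history) / len(history)

# The campaign targets only voters whose score clears a threshold.
everyone = list(historical_voter_file) + new_registrants
targets = [v for v in everyone
           if (s := turnout_score(v)) is not None and s >= 0.5]

print(targets)  # carol and dan never appear, regardless of their actual likelihood to vote
```

A real turnout model is far more sophisticated, but the structural problem is the same: whoever is missing from the training data is invisible to the algorithm's output.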
Whereas an individual’s biases are easier to spot, algorithmic bias hides behind numbers and equations. As Ruha Benjamin says in Race After Technology, “we are told that how tech sees ‘difference’ is a more objective reflection of reality than if a mere human produced the same results.” In this way, algorithmic bias can be even more pernicious than an individual human’s bias: algorithms are still subject to all of the messiness that affects human decision making, yet they carry a veneer of objectivity because they create distance between the decided and the decision. And because algorithms make it difficult to point to an individual exhibiting bias, it’s harder to hold people to account.
At Helm, data and algorithms permeate the tools we’re building: whether to help advocacy organizations make decisions when building nationwide campaigns or to empower individuals mobilizing their neighbors. We certainly don’t have all the answers, but we’re committed to being a part of the solution. Here’s what we’re thinking:
First, we are scientists, engineers, designers, and campaigners, but we are all people. We know that the tools and algorithms we build will ultimately reflect our values (we previously shared a few of our beliefs here). We cannot hide behind a false notion of unbiased technology; instead, our team and culture need to reflect our diverse democracy and the equitable world we’re working so hard to build.
Second, we want to empower our users to make informed decisions, and we do this by acknowledging the strengths and weaknesses of our data and algorithms. We will do this in a manner that is accessible to people regardless of their formal training in data science. Fortunately, we are not alone in tackling this. We take great inspiration from Gebru’s Datasheets for Datasets and Mitchell’s Model Cards for Model Reporting, and we hope that one day our approach will inspire others.
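To make that concrete, here is a hypothetical, deliberately minimal "datasheet" entry, loosely inspired by the Datasheets for Datasets idea. The field names and contents are illustrative assumptions, not Helm's actual schema, but they show how known limitations of a dataset can be surfaced in plain language rather than buried in documentation.

```python
# A hypothetical datasheet entry for a voter-file dataset. All field names
# and values here are illustrative, not an actual Helm schema.

voter_file_datasheet = {
    "motivation": "Support GOTV targeting for partner organizations.",
    "composition": "Registered voters only; unregistered adults are absent.",
    "collection": "Aggregated from state voter files, updated periodically.",
    "known_gaps": [
        "Recently registered voters may lack turnout history.",
        "Historically marginalized groups are underrepresented.",
    ],
    "recommended_use": "Pair model scores with human review before outreach.",
}

def summarize(datasheet):
    """Render the dataset's known limitations as a short plain-language list,
    readable by users without data-science training."""
    lines = [f"- {gap}" for gap in datasheet["known_gaps"]]
    return "Known limitations:\n" + "\n".join(lines)

print(summarize(voter_file_datasheet))
```

The point of the sketch is the design choice, not the code: limitations travel with the data itself, so every downstream user sees them by default.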
Third, we build products and processes to scale. So we want to make this explicit: We don’t believe discrimination is a problem we can “solve” with algorithms or automation. While technology can certainly play a role, we believe we cannot tackle issues of racism, sexism, and discrimination without including humans-in-the-loop, and critically, including humans from groups who have been most adversely impacted by discriminatory algorithms. We need to be thinking about the ethical ramifications of the data, tools, and algorithms we put out into the world at each step of the process.
We’re not perfect. We’re going to make mistakes, and we’re going to grow from those mistakes. Ultimately, our goal is to build digital tools that let people build and express their power. We do this, in part, by hiring folks who instinctively take a step back to think about the role of bias, fairness, equity, and ethics in data.
If you are interested in working to build toward a more participatory democracy through using a values-driven data science approach, please reach out to email@example.com!
Ben Calvin is the VP of Data Science and Erika Weisz, PhD, is the Director of Research at Helm.