
Algorithms for a more equitable future

Political organizations now use algorithms to determine who is targeted with campaign advertising, where canvassers knock on doors, who receives texts or phone calls, and who is surveyed. But we also know that algorithms carry bias, so they can be part of the problem, especially with regard to political polarization.


At Helm, we’re building products that support issue advocacy and community organizing, and we believe data and algorithms are an important part of those products. So, we believe we have a responsibility to actively work to demystify our own black boxes for our product users. This will be a long process, and we don’t know exactly what it will look like.


But to start, we have a lot of listening, thinking, and understanding to do.

Here are some of the questions we ask ourselves:

  1. How can we minimize the harm algorithms impose on already marginalized communities?

  2. How have organizations communicated about data and algorithms in ways that resonate with people?

  3. What are the best practices for drawing inferences from imperfect and incomplete data?

  4. What data are missing, and what does that reveal about bias and inequity in the United States?

  5. How can we ensure that our data tell humanizing stories of the people they represent?


These questions matter

When we rely on algorithms to help make decisions, we often overlook the ways in which values and biases affect outcomes. In some cases, this is okay; there are too many takeout restaurants to choose from, so we offload the decision to an algorithm. And no matter what it tells us, we still get dinner.


In our political system, organizers and advocates are working to change minds and change laws. They’re building consensus and understanding. They’re supporting ballot initiatives that aim to enact more equitable policies. This work is hard. It’s time consuming. It’s expensive. Algorithms play an important role in making this work feasible, sustainable, and worthwhile.


Since algorithms are just as subjective as the people who build them, we have to be particularly careful about how they’re developed and used in political organizing.

Left unchecked, algorithms can and will exacerbate inequalities and make societal problems worse by cloaking critical decisions in an illusion of objectivity. As a society, we are only beginning to address this problem.


More and more, digital products are marketed as “data-driven” solutions to everyday problems. It’s tempting to think that data-driven decisions are somehow more objective than decisions made by humans. After all, facts are facts, and data are data.


Yet this is an illusory comfort: an algorithm is only as good as the data that feed it. And the data feeding decision-making algorithms are not objective reflections of ground truth. Instead, data and algorithms reflect the biases and blind spots of the people who built them.


For example, political organizations rely on turnout models to figure out who receives “get out the vote” (GOTV) messaging. But these algorithms are built using historical voter data, which exclude many people. And many of the folks who are excluded have been historically marginalized, perpetuating a system of exclusion and misunderstanding.
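To make that feedback loop concrete, here is a minimal, hypothetical sketch (not our actual model) of how a turnout score trained only on voter-file history ends up penalizing anyone with no recorded history. The data, feature names, and model choice are invented for illustration.

```python
# Hypothetical illustration: a turnout model trained only on people who
# already appear in a historical voter file. Data and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Voter-file records: how many past elections each person voted in, and
# whether they voted in the last general election (1 = voted).
past_elections_voted = np.array([[4], [3], [0], [1], [5], [0], [2]])
voted_last_general   = np.array([ 1,   1,   0,   0,   1,   0,   1 ])

model = LogisticRegression().fit(past_elections_voted, voted_last_general)

# Someone missing from the file (newly registered, recently moved, or
# simply never contacted) enters the model with zero recorded history...
new_voter = np.array([[0]])
print(model.predict_proba(new_voter)[0, 1])  # low predicted turnout

# ...so GOTV outreach skips them, they remain absent from the data, and
# the next model is trained on the same gap. The exclusion compounds.
```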

Moreover, historical data are inherently a look backward; to build an equitable future, we need data and tools that look forward.


While an individual’s biases are often easier to spot, algorithmic bias hides behind numbers and equations. As Ruha Benjamin writes in Race After Technology, “We are told that how tech sees ‘difference’ is a more objective reflection of reality than if a mere human produced the same results.”


In this way, algorithmic bias can be even more pernicious than an individual human’s bias: algorithms are subject to all of the messiness that affects human decision making, yet they carry a veneer of objectivity because they create distance between the decision and the people who make it. And because algorithms make it difficult to point to an individual exhibiting bias, it’s harder to hold anyone to account.


Our commitment

At Helm, data and algorithms permeate the tools we’re building, whether those tools help advocacy organizations make decisions about nationwide campaigns or empower individuals mobilizing their neighbors. We certainly don’t have all the answers, but we’re committed to being a part of the solution.


Here’s what we’re thinking:


  1. We are scientists, engineers, designers, and campaigners, but we are all people. We know that the tools and algorithms we build will ultimately reflect our values. We cannot hide behind a false notion of unbiased technology; instead, our team and culture need to reflect America’s diverse democracy and the equitable future we’re working toward. (We’re hiring!)

  2. We want to empower our users to make informed decisions, and we do this by acknowledging the strengths and weaknesses of our data and algorithms. We will do this in a manner that is accessible to people regardless of their formal training in data science. Fortunately, we are not alone in tackling this: we take great inspiration from Timnit Gebru’s Datasheets for Datasets and Margaret Mitchell’s Model Cards for Model Reporting (a rough sketch of that kind of documentation appears after this list).

  3. We don’t believe discrimination is a problem we can solve with algorithms or automation. While technology can certainly play a role, we believe we cannot tackle issues of racism, sexism, and discrimination without including humans and, most critically, including humans from groups who have been most adversely impacted by discriminatory algorithms.
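To give a flavor of what we mean by documenting strengths and weaknesses, here is a minimal, hypothetical sketch of a model-card-style record in Python. The fields loosely follow the sections proposed in Model Cards for Model Reporting; the specific names and example values are our own illustrative choices, not a real Helm model.

```python
# Hypothetical sketch of model-card-style documentation. Field names
# loosely follow Mitchell et al., "Model Cards for Model Reporting";
# the values are invented examples, not a real Helm model.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str
    known_gaps: list[str] = field(default_factory=list)  # who is missing from the data
    evaluation_notes: str = ""
    ethical_considerations: str = ""

example_card = ModelCard(
    model_name="turnout-score-v0 (illustrative)",
    intended_use="Prioritize GOTV outreach; not for eligibility or enforcement decisions.",
    training_data="State voter files, 2016-2020 general elections.",
    known_gaps=[
        "Newly registered voters have no history and tend to score low.",
        "People who moved between states may be double-counted or dropped.",
    ],
    evaluation_notes="Report accuracy overall and broken out by age, race, and geography.",
    ethical_considerations="A low score must not be read as disinterest in voting.",
)
```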


We need to be thinking about the ethical ramifications of the data, tools, and algorithms we put out into the world at each step of the process.


The bottom line

We’re not perfect. We’re going to make mistakes, and we’re going to grow from those mistakes. We are committed to communicating the strengths and limitations of our data and algorithms so that our users can make informed decisions about their organizing work.


We believe data transparency is a key component of building healthy communities. Ultimately, our goal is to develop data and tools that equip organizers to power civic participation for a more equitable future.


We do this, in part, by hiring folks who take a step back to think about the role of bias, fairness, equity, and ethics in data.


