At the End of the Data
Last week, Robert Williams was enjoying a day at home in Detroit with his family when police arrived and arrested him on his own front lawn, in front of his wife and kids, because he “fit the description” of a suspect who had stolen watches from a local boutique. The arrest (which was thankfully nonviolent) was triggered by a false match from the facial recognition software used by the Detroit Police Department: a blurry photo from the crime scene was fed into the system and partially matched to Williams’s driver’s license photo. Even though the results came with a warning that the photo was too blurry to support a reliable match, Williams was humiliated in front of his family and held in custody for 30 hours without a word of explanation as to why he had been arrested. But here’s the worst part: this isn’t rare.
For years, concern has been growing over facial recognition software and its use by police. In the wake of George Floyd’s death, Amazon, IBM and Microsoft have all halted police use of their facial recognition software after acknowledging that these systems contain biases that disproportionately put people with darker skin tones in jail. That’s cute, but can we talk about why the solution they chose was to take away the tool rather than to fix it?
The reality is our lives are influenced by algorithms every day. The content you see as you scroll through Instagram, the likelihood of getting an interview after uploading your resume, your wait time on a customer service line, and who gets forcibly dragged off the plane when the flight is overbooked—all decided by algorithms. This practice is not inherently bad; algorithms are designed with the wholehearted intention to make our lives easier and more customized, to free up human minds to think bigger, and—perhaps most ironically—to remove some subjectivity in our decision-making. The effects we’re seeing now don’t point to a problem with facial recognition in particular or with technology as a whole. They point to the fact that algorithms are made by humans in our own image, with our own biases. Just as Black Mirror foreshadowed, the computers have learned our bad habits.
For over four years, Amazon used an AI recruiting tool that systematically favored male candidates over female ones.
Predictive policing software uses records of past criminal activity to predict future crime, putting biased, unfair reporting practices on autopilot (the sketch just after this list shows that feedback loop in miniature).
Up until very recently, if you typed into Google “Feminists are,” “Black people are” or “Jews are,” you’d see some pretty scary auto-complete options.
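To make the predictive policing point concrete, here is a minimal sketch of that feedback loop. Everything in it is made up for illustration: two neighborhoods with identical true crime rates, where one simply starts with more recorded incidents because it was patrolled more heavily in the past. This is a hypothetical toy, not any vendor’s actual algorithm.

```python
import random

random.seed(42)

# Hypothetical toy model: two neighborhoods with IDENTICAL true crime
# rates, but neighborhood A starts with more recorded incidents because
# it was patrolled more heavily in the past.
true_crime_rate = {"A": 0.10, "B": 0.10}
recorded = {"A": 60, "B": 40}  # the biased historical data
PATROLS_PER_DAY = 10

for day in range(365):
    for _ in range(PATROLS_PER_DAY):
        # "Predictive" allocation: send the patrol where past records
        # say the crime is.
        total = recorded["A"] + recorded["B"]
        hood = "A" if random.random() < recorded["A"] / total else "B"
        # A patrol can only record crime where it is actually looking.
        if random.random() < true_crime_rate[hood]:
            recorded[hood] += 1

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Share of recorded crime in neighborhood A after a year: {share_a:.0%}")
# The two neighborhoods are identical in reality, yet nothing in this
# loop ever corrects the initial 60/40 skew: more records -> more
# patrols -> more records. The bias runs on autopilot.
```

Nothing in the loop ever questions the original skew, so the software faithfully launders yesterday’s patrol patterns into tomorrow’s “predictions.”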
Just like anything else in life, you get out what you put in. And when you have biased people feeding machines biased data sets, you get results that don’t diminish our human error but instead cement it and deploy it on a massive scale.
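Here is what “biased data in, biased model out” looks like in miniature. This is a hypothetical sketch (the candidates, features and labels are invented, and it assumes scikit-learn is installed): a toy résumé screener trained on historical hiring decisions that favored one school. The model never sees anyone’s intent; it just learns to reproduce the pattern.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [years_of_experience, attended_school_X].
# Historical hiring favored School X graduates regardless of experience,
# so the labels (1 = hired) encode that bias.
X = [[2, 1], [5, 1], [3, 1], [4, 1],
     [2, 0], [5, 0], [3, 0], [4, 0]]
y = [1, 1, 1, 1,
     0, 0, 0, 0]

screener = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in school:
print(screener.predict([[4, 1], [4, 0]]))  # -> [1 0]
# The "objective" model has faithfully learned the old prejudice and
# will now apply it to every résumé it sees, at scale.
```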
The good news is, the people who brought us these tools are actively racing to fix the “bugs,” and we’re making some headway on what it will take to address these problems with policy. We need regulation that oversees algorithms, audits them for fairness and gives us transparency into how they’re built and how they affect us. If we’re lucky, the future may bring an array of solutions, from annual reports by big tech companies to a rating system to labels on every piece of AI, much like the ingredient labels on our food products today. Unfortunately, this all takes a literal act of Congress, and we can’t wait for that.
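What would “auditing an algorithm for fairness” actually check? Here is one minimal, hypothetical sketch (the data, group names and metrics are illustrative, not a regulatory standard): it compares how often each group gets flagged at all, and how often innocent members of each group get flagged, which is exactly the failure mode behind false facial recognition matches.

```python
from collections import defaultdict

def audit(decisions):
    """decisions: list of (group, predicted_positive, actually_positive).
    Toy audit: compares selection rates and false positive rates across
    groups. Real audits are far more involved."""
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "fp": 0, "neg": 0})
    for group, pred, actual in decisions:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += pred
        if not actual:
            s["neg"] += 1
            s["fp"] += pred  # flagged but innocent: a false positive
    for group, s in sorted(stats.items()):
        selection_rate = s["flagged"] / s["n"]
        fpr = s["fp"] / s["neg"] if s["neg"] else 0.0
        print(f"{group}: selection rate {selection_rate:.0%}, "
              f"false positive rate {fpr:.0%}")

# Made-up example: a system that flags group B's innocent members more often.
audit([
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
])
# A: selection rate 40%, false positive rate 0%
# B: selection rate 80%, false positive rate 67%
```

An “ingredient label” for AI could be little more than a standardized, public version of a report like this one, published with every release.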
What we can do is diversify the group of people building the internet. It’s not a coincidence that the majority of professionals in STEM and in tech are white and male; even Amazon’s own 2019 diversity report showed that its managers were 60% white and 73% male. This isn’t meant as an accusation; we are all human beings who only know what we know. When women and people of color take their place at the table in tech, they not only make AI products better but also get a fair share of the profits.
The thing is, we’ve seen this before. We allowed an exclusive boys’ club to design our built environment, and what we got out of it was redlining, ghettos, partisan and racial gerrymandering, and pretty abysmal public restrooms. We got a system that really only works for one kind of person and makes everyone else feel like they don’t quite fit.
We can’t repeat these mistakes online. Diversity is necessary in all industries, but it’s hyper-critical in tech, because algorithms already decide what happens to us in some really important scenarios, and that list is about to grow.
So encourage the little girls in your life to pursue science. Support organizations like Black Girls Code, the i.am.angel Foundation and NSBE’s SEEK program. Ask your police department to stop using flawed AI tools or, at the very least, to stop misapplying or over-relying on the information they get from them.