When politicians and business leaders talk about using technology to streamline service provision to the poor, they conjure up an efficient, values-free process. Helping homeless people might work like Airbnb, with people who need shelter matched to available beds and subsidized apartments. Assistance for the poor might bypass cumbersome application forms and waiting times. Risk assessments could identify children at risk of abuse before the abuse happens.
Technology can be a neutral tool. But, as Virginia Eubanks describes in “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” the computerized tools applied to social service provision are built with the institutional biases endemic in our society, starting with the idea that poverty is the fault of poor people and that a goal of our welfare systems is to make sure that nobody gets aid who doesn’t deserve it, even if that means denying aid to people who do. Eubanks calls the use of technology to evaluate and track poor people the “digital poorhouse,” consistent with efforts throughout U.S. history to distinguish between the “deserving” and “undeserving” poor.
"Automating Inequality" by Virginia Eubanks
Eubanks describes three examples of the application of high technology to social services in the United States. The first, Indiana’s experiment with automated determination of welfare eligibility, can only be characterized as a disaster. A major goal was to reduce the number of people on welfare, and, by design, the system reduced individual caseworkers’ contact with welfare clients, making it harder for caseworkers to become advocates for the people they served.
Once in operation, the system automatically denied assistance to people who made minor errors in their applications; the results were so clearly inhumane — sometimes denying medical benefits to very sick people — that the experiment was partially abandoned, but not before it had helped achieve one of the main goals: “When the governor signed the contract with IBM in 2006, 38 percent of poor families with children were receiving cash benefits from TANF. By 2014, the number had dropped to 8 percent.”
Eubanks next examines Los Angeles’ use of a comprehensive database to match homeless people with appropriate housing services. She argues that in a situation where housing needs far exceed supply (sound familiar?), the system became a form of cost-benefit triage. The people who got help were either the neediest (because leaving them on the street would cost more in the long run) or those who needed only a small financial intervention to get housing. In other words, resources went to the worst-off and the best-off, leaving vast numbers in the middle unhelped.
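To make that triage logic concrete, here is a deliberately simplified sketch. The score cutoffs, program names and applicant scores below are invented for illustration; this is not the actual coordinated-entry software Eubanks describes.

```python
# Illustrative only: invented thresholds and scores, not the real
# Los Angeles coordinated-entry system described in the book.

def assign_intervention(vulnerability_score: int) -> str:
    """Match a homeless applicant to a program from a single score.

    High scorers get scarce permanent supportive housing (cheaper than
    leaving them on the street); low scorers get a small one-time subsidy.
    Everyone in between matches no program and stays on the waitlist.
    """
    if vulnerability_score >= 8:   # "worst-off": costly to leave unhoused
        return "permanent supportive housing"
    if vulnerability_score <= 3:   # "best-off": a small payment suffices
        return "one-time rental assistance"
    return "waitlist"              # the unhelped middle

applicants = {"A": 10, "B": 6, "C": 2}
for name, score in applicants.items():
    print(name, "->", assign_intervention(score))
# A -> permanent supportive housing
# B -> waitlist
# C -> one-time rental assistance
```

The point of the sketch is that nothing in the logic is malicious; the exclusion of the middle falls directly out of optimizing scarce resources by cost-benefit score.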
At the same time, Eubanks points out, people seeking services were expected to answer very personal questions about their mental health and medical history. This amounted to a major invasion of privacy, linking medical records, criminal histories and social service reports in a database that could be accessed inappropriately, including by the police.
The third example is a software program, the Allegheny Family Screening Tool, developed for a Pennsylvania county’s Office of Children, Youth and Families (CYF). The tool was designed to rank families in the county’s database according to their likely risk of child neglect and abuse. As Eubanks discovered, the model disproportionately gives high-risk scores to certain families based on their use of social services, and it generates racially biased results. Because its data comes from public programs, the model never evaluates middle-class and wealthy families, who typically don’t access public social services.
As Eubanks points out, the model’s predictive accuracy was only 76 percent, which meant that in roughly one out of four cases its assessment was wrong about whether children in a particular family were at risk. The actual use of the model was fairly benign, in the sense that caseworkers were expected to use their own judgment about whether to heed the model’s scores. But it also set up a surveillance tool that could easily be abused by a different county administration or in a different political climate.
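The one-in-four figure is simple arithmetic, but a minimal simulation makes the scale concrete. The numbers here are invented, assuming only that each assessment is independently correct 76 percent of the time; this is not real Allegheny County data.

```python
# Illustrative arithmetic only: invented numbers, not real AFST data.
# Assumes each risk assessment is independently correct 76% of the time.
import random

random.seed(1)
ACCURACY = 0.76
FAMILIES = 1000

wrong = sum(1 for _ in range(FAMILIES) if random.random() > ACCURACY)
print(f"Misclassified: {wrong} of {FAMILIES} families")
# Prints roughly 240: at 76 percent accuracy, about one family in four
# is either wrongly flagged as at-risk or wrongly cleared.
```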
“Automating Inequality” is engrossing in its descriptions of how technology is used to track, diagnose and stigmatize the poor. Eubanks argues that the use of technology in this way is neither value-free nor for the benefit of poor people. One test she proposes for databases and models like these is to ask whether they would be tolerated if the information were collected from middle-class people. She also suggests a kind of analog to the “Hippocratic oath” that software developers could take before building such a model.
While Eubanks could have done more in her examples to call out where the stories reinforce the points she makes in her concluding chapters, the book definitely makes the case that automating social services does little, if anything, to help poor people, while making it easier to deny them help.
Courtesy of Street Roots’ sister paper, Real Change News in Seattle.
Street Roots is an award-winning, nonprofit, weekly newspaper focusing on economic, environmental and social justice issues. Our newspaper is sold in Portland, Oregon, by people experiencing homelessness and/or extreme poverty as a means of earning an income with dignity. Learn more about Street Roots.