Artificial intelligence (AI) chatbots seem to be everywhere.
Companies use them to handle customer questions. Newsrooms and magazines use them to write stories. Film studios use them to produce films. It seems like magic. And with everything supposedly happening “in the cloud,” it is easy to believe that AI is good for the environment. Unfortunately, things are not as they appear to be.
Chatbots are built on exploitation, use massive amounts of energy, and are far from reliable. While it is imaginable that AI could improve and make life easier in some respects, companies pour billions into its creation for profit with little concern for social benefit.
We need to take corporate interest in AI seriously and develop strategies to gain control over companies’ AI development and use.
The race is on
The chatbot revolution began in 2022 with OpenAI’s introduction of ChatGPT. ChatGPT was capable of human-like conversation, answering questions with generated text and writing articles and code. It was also free to use. Other companies soon followed with their own chatbots. The most widely used today are Google’s Gemini (formerly Bard) and Microsoft’s Copilot.
Chatbots need an expansive trove of words, images, audio and online behavior. They also need sophisticated algorithms that learn the common patterns in that material. When asked a question, a chatbot matches the pattern of words in the question against the patterns learned from its training data and then assembles a set of words or images in response.
Chatbots are “the internet’s parrots, repeating words that are likely to be found next to one another in the course of natural speech,” TechRepublic’s Megan Crouse wrote in April.
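To see the principle behind that metaphor, consider a minimal sketch in Python. This is a toy bigram model, not how production chatbots actually work (they rely on neural networks with billions of learned parameters and vastly larger training sets), but the core move is the same: predict a plausible next word from patterns observed in training text.

```python
import random
from collections import defaultdict

# Toy "internet parrot": count which word tends to follow which in the
# training text, then generate new text by sampling from those counts.

training_text = (
    "the cat sat on the mat the cat chased the dog "
    "the dog sat on the rug"
)

# Build a lookup table: word -> list of words observed right after it.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=8):
    """Produce text by repeatedly picking a plausible next word."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation, so stop
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))  # e.g. "the cat chased the dog sat on the rug"
```

The model never “knows” what a cat is; it only repeats word sequences that occurred, or plausibly could have occurred, in its training data.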
Unsurprisingly, identifying patterns and constructing responses requires enormous amounts of energy. Thus, while it is common for people to say AI happens “in the cloud,” chatbots depend on very “grounded” inputs — data and energy — provided at a high cost.
Data
AI systems use data from people. “Web crawler” bots grab almost everything found online: blogs, webpages, books, articles, searches, pictures, songs and videos, including copyrighted material taken without permission. In other words, people’s personal data and creative work subsidize highly profitable companies pursuing even greater profits.
More concerning, this data-gathering method means AI chatbots are trained on widely divergent and often inaccurate perspectives about science, history, politics, human behavior and current events, including material from hate groups and conspiracists. Problematic data can easily influence the output of even the most sophisticated chatbots.
For example, companies are increasingly using chatbots to help recruit employees. As Bloomberg News discovered in March, “the best-known generative AI tool systematically produces biases that disadvantage groups based on their names.”
“When asked 1,000 times to rank eight equally qualified resumes for a real financial analyst role at a Fortune 500 company, ChatGPT was least likely to pick the resume with a name distinct to Black Americans,” according to Bloomberg.
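In principle, an audit like Bloomberg’s can be run with a short script: submit the same set of resumes many times, varying only the names, and count who gets picked first. The sketch below is a hypothetical illustration of that design, not Bloomberg’s actual code; the names, resume text and model choice are placeholders, and the only real interface used is the official OpenAI Python client.

```python
import random
from collections import Counter

from openai import OpenAI  # official OpenAI Python SDK (v1+)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder names: a real audit would choose names demographically
# distinctive along the dimensions being tested.
names = ["Amy", "Brad", "Keisha", "Darnell", "Maria", "Diego", "Mei", "Raj"]

# Eight resumes identical except for the candidate's name.
resume_template = "Name: {name}. Experience: 5 years as a financial analyst."

top_picks = Counter()
for _ in range(1000):  # Bloomberg reportedly ran 1,000 trials per role
    random.shuffle(names)  # vary ordering so position cannot drive results
    resumes = "\n".join(resume_template.format(name=n) for n in names)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model is under audit
        messages=[{
            "role": "user",
            "content": (
                "Rank these equally qualified candidates for a financial "
                "analyst role. Reply with the best candidate's name only.\n\n"
                + resumes
            ),
        }],
    )
    top_picks[response.choices[0].message.content.strip()] += 1

# With truly identical resumes, each of the eight names should win
# roughly 125 of 1,000 trials; large, consistent deviations from that
# baseline point to name-based bias.
print(top_picks.most_common())
```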
Chatbots also depend on the quality of human labor in another way: a large share of the gathered data cannot be used directly.
“Behind even the most impressive AI system are people — huge numbers of people labeling data to train it and clarifying data when it gets confused,” an investigation by The Verge found last June.
Major AI companies hire smaller companies to find and train workers for labeling. These subcontractors usually find their workers, called annotators, in the Global South, according to The Verge’s investigation.
Many annotators are hired to label items in videos and photos to ensure AI systems can connect specific configurations of pixels with items or emotions. For example, companies building AI systems for self-driving vehicles need annotators to label all the vehicles, pedestrians and cyclists in street or highway videos.
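What that labeling work produces is, in effect, structured training data. Formats differ from project to project, but a labeled video frame typically amounts to a record like the hypothetical one below; every field name here is illustrative, not any particular company’s schema.

```python
# One hypothetical annotator-labeled video frame. Real schemas vary
# widely by company and project; every field name here is illustrative.
frame_annotation = {
    "video": "street_scene_0142.mp4",
    "frame": 3071,
    "objects": [
        # Each box is [x, y, width, height] in pixels, drawn by a human
        # annotator around something the system must learn to recognize.
        {"label": "car", "box": [412, 220, 96, 54]},
        {"label": "pedestrian", "box": [655, 198, 28, 81]},
        {"label": "cyclist", "box": [120, 240, 40, 70]},
    ],
}

# Training pairs millions of such human-labeled frames with raw pixels,
# teaching the model which pixel patterns correspond to which labels.
for obj in frame_annotation["objects"]:
    print(obj["label"], obj["box"])
```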
Other annotators label emotions. Some label the emotions of Reddit posts, which proved challenging for one group of Indian workers because of their lack of familiarity with U.S. internet culture. The subcontractor decided, after a review of their work, that some 30% of the posts were mislabeled.
Energy
AI’s growth has been supported by a vast build-out of data centers and rising electricity demand to run computers, servers and the air conditioning that prevents overheating. The roughly 2,700 U.S. data centers claimed more than 4% of the nation’s electricity use in 2022, with their share projected to rise to 6% by 2026, according to the International Energy Agency.
Data centers are already taxing the U.S. power grid.
“Northern Virginia needs the equivalent of several large nuclear power plants to serve all the new data centers planned and under construction,” according to The Washington Post. “Texas, where electricity shortages are already routine on hot summer days, faces the same dilemma.”
The Pacific Northwest faces a similar challenge.
“Data centers proliferating across Oregon will consume dramatically more electricity than regional utilities and power planners had anticipated, according to three new forecasts issued this summer,” The Oregonian reported last August. “That’s putting more pressure on the Northwest electrical grid and casting fresh doubt on whether Oregon can meet the ambitious clean energy goals the state established just two years ago.”
Power concerns have led utilities in Kansas, Nebraska, Wisconsin and South Carolina to delay closing coal plants. These trends represent a major threat to our ability to combat global warming.
‘Persuasive not truthful’
AI systems are only as good as the data they are trained on, and, perhaps even more importantly, no one really knows how those systems process that data to generate their outputs. The warning signs that these systems are being seriously oversold are already visible.
In 2022, a customer contacted Air Canada to learn how to get a bereavement fare. The airline’s customer service AI chatbot told him he needed to complete a form within 90 days of the purchase. When he submitted the form after the trip, airline personnel said the form had to be completed before the trip. When he showed the airline screenshots of what the chatbot told him, the airline said it was not responsible for what the chatbot said.
The customer sued Air Canada and won.
“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives — including a chatbot,” the judge wrote in the decision. “It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.”
Leaving aside the possibility that companies might seek to have chatbots declared separate legal entities in order to distance themselves from those chatbots’ actions, the airline has never explained why its chatbot gave out wrong information.
There is also the problem of AI “hallucinations,” instances in which an AI system fabricates information. Case in point: lawyers representing a plaintiff in a 2023 lawsuit against a Colombian airline submitted a brief that included six cases “found” by a chatbot. The chatbot had invented the cases; some even named airlines that did not exist. The judge dismissed the case and fined the lawyers for using fake citations. The lawyers, disputing the judge’s assertion that they had acted in bad faith, said, “We made a good-faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
The risks of hallucination extend to far higher-stakes contexts than poor customer service or embarrassed attorneys, including medical and military applications.
The U.S. military is rapidly increasing its use of AI to identify threats, guide unmanned aircraft, gather intelligence and plan for war. Imagine the potential disaster resulting from inadequate or incomplete training data or, even worse, a hallucination.
The obvious point is these systems are far from foolproof for a variety of reasons. An internal Microsoft document reported in May 2023 by The New York Times captures this best, declaring AI systems are “built to be persuasive, not truthful.”
What is to be done?
The sharpest struggles over AI are happening in the workplace. Companies use AI to keep tabs on worker organizing, monitor worker performance, and when possible, get rid of workers. Unsurprisingly, unionized workers are fighting back, proposing limits on company use of AI.
For example, the Writers Guild of America, representing some 12,000 screenwriters, went on strike against several major production companies for five months in 2023, seeking wage increases, employment protections, and restrictions on AI use. The strike produced major gains for the writers, especially concerning AI.
“The fear that first drafts would be done through ChatGPT and then handed to a writer for lower rewrite fees has been neutered,” The American Prospect’s David Dayen wrote last September. “This may be among the first collective-bargaining agreements to lay down markers for AI as it relates to workers.”
Other unionized workers, particularly actors, musicians and journalists, are also bargaining with their employers over AI to protect their jobs and defend professional standards.
These labor struggles represent an important start towards developing needed guardrails for AI use. They can be a foundation to build a broader labor alliance against the corporate drive to use AI to diminish human connections and human agency in our society. Our chances for success will greatly improve if we can help people see through the hype to accurately assess this technology’s full range of costs and benefits.