Tech companies are working overtime selling artificial intelligence, or AI, as the gateway to a future of plenty. And to this point they have been successful in capturing investor money and government support, making their already wealthy owners wealthier. However, that success doesn’t change the fact that AI systems have already largely exhausted their potential. More concerning, the uncritical and rapidly increasing adoption of these systems by schools, businesses, the media, government and the military represents a serious threat to our collective well-being. We need to push back, and push back hard, against this big tech offensive.
The big con
According to tech leaders like Elon Musk, we are only years away from building sentient computers that can think, feel and behave like humans. In conversation with the United Arab Emirates Minister of State for Artificial Intelligence, Musk made clear that the real payoff from AI will be humanoid robots. Business Insider offered a report on the conversation, which included the following:
“You can produce any product, provide any service,” Musk said of humanoid robots. “There’s really no limit to the economy at that point. You can make anything.”

“Will money even be meaningful? I don’t know; it might not be,” he said, adding that robots could create a “universal high-income situation” because anyone will have the ability to make as many goods and services as they want.
Musk recently rebranded Tesla as an AI robotics company and, in a January earnings call, said the company will soon be building thousands of robots, which he expects will earn it “north of $10 trillion in revenue.” Tesla is not the only company pursuing this strategy. According to Bloomberg, “Apple and Meta are set to go toe-to-toe” in competing to build “AI-powered humanoid robots.”
Getting real
The 2022 release of ChatGPT by OpenAI marked the start of mass public engagement with AI. It was free and easy to use. And while it remains the most widely used chatbot, other companies, including Tesla, Amazon, Meta, Google and Microsoft, have launched their own competing products. Although these chatbots can perform a variety of tasks, there is nothing “intelligent” about them, and they do not represent a meaningful step toward the creation of humanoid robots with the ability to think, learn and solve problems on their own.
Existing AI systems rely on large-scale pattern recognition. They are trained on massive amounts of data, mostly scraped from the web, and use sophisticated algorithms to organize that material according to common patterns of use. When prompted with a question or a request for information, a chatbot identifies related material in its training data and assembles the set of words or images that, based on probabilities, “best” satisfies the inquiry. In other words, chatbots do not “think” or “reason”; they predict.
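To make that mechanic concrete, here is a deliberately tiny sketch in Python. The word table and its probabilities are invented for illustration, and a real chatbot learns billions of statistical parameters rather than using a hand-written lookup table, but the basic move is the same: sample a statistically likely continuation rather than reason about the question.

import random

# Invented bigram "model": given the current word, the probability of each
# possible next word. Every word and number here is made up for illustration.
NEXT_WORD_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "economy": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"barked": 0.7, "sat": 0.3},
    "economy": {"grew": 0.5, "crashed": 0.5},
    "sat":     {"down": 1.0},
}

def generate(prompt_word, length=4):
    """Extend a prompt by repeatedly sampling a probable next word."""
    words = [prompt_word]
    for _ in range(length):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no learned pattern to follow, so stop
            break
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down" -- plausible, not reasoned

Nothing in this loop understands what a cat or an economy is; it only tracks which words tend to follow which. Scale the table up to billions of parameters and the output becomes fluent and plausible, which is precisely why it can sound authoritative while being wrong.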
Since competing companies have access to different data and algorithms, their chatbots can offer different responses to the same prompt. All of them, however, suffer from many of the same weaknesses. One is that because they rely heavily on data scraped from the web, they cannot help but absorb material that is discriminatory and biased. As a result, chatbots often produce problematic responses. A case in point: AI-powered resume screening programs have been found to disproportionately select resumes tied to white-associated names. Another is that no one has yet been able to determine precisely how a chatbot uses its data to make predictions. Thus, no one has devised a way to stop chatbots from periodically “hallucinating,” that is, seeing nonexistent patterns or relationships, which leads them to give nonsensical responses.
The BBC recently tested the ability of the leading chatbots to summarize news stories by giving them access to the BBC website and then asking them to answer 100 questions about the news. Here is how Deborah Turness, CEO of BBC News and Current Affairs, describes the results:
“The team found ‘significant issues’ with just over half of the answers generated by the assistants,” Turness said. “The AI assistants introduced clear factual errors into around a fifth of answers they said had come from BBC material. And where AI assistants included ‘quotations’ from BBC articles, more than one in 10 had either been altered, or didn’t exist in the article. Part of the problem appears to be that AI assistants do not discern between facts and opinion in news coverage; do not make a distinction between current and archive material; and tend to inject opinions into their answers.”
This is certainly not a record that inspires confidence.
No light at the end of the tunnel
Tech companies argue that these problems can be overcome by increasing the amount of training data as well as the number of parameters chatbots use to process information. That is why they are racing to build new systems that rely on ever more expensive chips and ever bigger data centers. However, recent studies on the performance of these new systems suggest that this is not a winning strategy.
For example, Lexin Zhou, the co-author of one of those studies, notes that “the newest LLMs (large language models) might appear impressive and be able to solve some very sophisticated tasks, but they’re unreliable in various aspects.” Moreover, “the trend does not seem to show clear improvements, but the opposite.” The reason, says Zhou, is that the changes tend to reduce the likelihood that the new systems will acknowledge uncertainty or ignorance.
The resulting dangers are obvious. As Lucy Cheke, a University of Cambridge professor of experimental psychology, explains, “Individuals are putting increasing trust in systems that mostly produce correct information, but mix in just enough plausible-but-wrong information to cause real problems. This becomes particularly problematic as people more and more rely on these systems to answer complex questions to which they would not be in a position to spot an incorrect answer.” Using these systems to provide mental health counseling or medical advice, teach our students or control weapons systems is a disaster waiting to happen.
Some perspective
Tech leaders confidently assert that AI will lead to revolutionary changes, boosting productivity and well-being, and that if we want to reap the expected rewards we need to get out of their way. But what can we really expect?
We can learn a lot from the economic consequences of the late 1990s tech boom, which included the growing popularity and mass adoption of computers, the internet and email. At the time, this pivotal period was said to mark the beginning of the Information Age and a future of endless economic expansion. Yet almost every major post-adoption economic indicator, with the exception of profitability, has trended down.
It is true that these technologies and the many companies and products they spawned have changed how we work and live, but the economic consequences have been far from “revolutionary,” if by that we mean significantly improving the lives of most people. And given the limitations of AI systems, it is hard to imagine that their use will prove more beneficial. Of course, that isn’t really the main point. Tech companies are pushing their AI systems because they stand to make a lot of money if they succeed in getting them widely adopted.
The fightback
In exchange for their promised future of “quasi-infinite products and services,” tech companies are demanding that we help finance — through tax credits, zoning changes and investment subsidies — the massive buildout of the energy- and water-hogging data centers needed to run their AI systems.
There is no win in this for us — in fact, Bloomberg News reports that Microsoft’s own research into AI use found that, “The more participants trusted AI for certain tasks, the less they practiced those skills themselves, such as writing, analysis and critical evaluations. As a result, they self-reported an atrophying of skills in those areas. Several respondents said they started to doubt their abilities to perform tasks such as verifying grammar in text or composing legal letters, which led them to automatically accept whatever generative AI gave them.”
And who will get blamed when the quality of work deteriorates or hallucinations cause serious mistakes? You can bet it won’t be the AI systems that cost millions of dollars.
So, what is to be done? At the risk of stating the obvious: We need to challenge the overblown claims of leading tech companies and demand that the media stop treating their press releases as hard news. We need to resist the building of ever bigger data centers and the energy systems required to run them. We need to fight to restrict the use of AI systems in our social institutions, especially to guard against the destructive consequences of discriminatory algorithms. We need to organize in workplaces to ensure that workers have a voice in the design and use of any proposed AI system. And we must always ensure that humans have the ability to review and, when necessary, override AI decisions.