“‘Things have been going wrong for a lot longer than you think,’ Alden said. ‘It’s just that they’ve finally gone wrong enough for you to notice.’” —Caught in Crystal, Patricia C. Wrede
The other day someone was showing me one of those ChatGPT artificial intelligence things that can write a letter for you given a few simple parameters. I don’t need an AI to write a letter for me, but the point was how well it performed the task: AI is learning so fast.
I snapped a little bit at the person (unfairly; it’s not their fault that I saw streams of threads about this on Twitter a few weeks ago and got heartily sick of it) because my immediate response was irritation at the ubiquitous, fascinated, admiring glee with which many people are lauding these chat AIs’ effectiveness. “You know what would be awesome?” I said. “It would be awesome if all the people falling all over themselves about the potentials of AI would direct that energy for a while to the potentials of human beings. How many kids’ minds would outstrip ChatGPT in months if we directed all that research and investment money to making sure they’re fed and get enough sleep and have safe, secure homes and neighborhoods? What about human potential?”
It wasn’t the technology that got me; it was the attention. How much of it these developments get, as if yet another technology will completely change the paradigms of the dominant human society and how we relate to one another; whereas the actual relating we do, and the actual suffering that happens both separate from and caused by many of these same technologies, is ignored.
One of my favorite childhood books was Norton Juster’s The Phantom Tollbooth. I’m not sure it made its way out of the U.S., or even how many people in the U.S. have read it—though it was pretty popular—but if you haven’t, I think it holds up relatively well, with some caveats. Being both a math person and a word person (and someone who’s easily entertained), I enjoy the cities Dictionopolis and Digitopolis equally.
But my favorite city concept in the book is Reality, which became invisible because people stopped looking at it:
“One day someone discovered that if you walked as fast as possible and looked at nothing but your shoes you would arrive at your destination much more quickly. Soon everyone was doing it. They all rushed down the avenues and hurried along the boulevards seeing nothing of the wonders and beauties of their city as they went.”
Milo remembered the many times he’d done the very same thing; and, as hard as he tried, there were even things on his own street that he couldn’t remember. . . .
“Because nobody cared, the city slowly began to disappear. Day by day the buildings grew fainter and fainter, and the streets faded away, until at last it was entirely invisible. There was nothing to see at all.”
“Hasn’t anyone told them?” asked Milo.
“It doesn’t do any good,” Alec replied, “for they can never see what they’re in too much of a hurry to look for.”
Attention is a fascinating thing, and despite being frequently mentioned in criticisms of digital technology, I don’t think it gets nearly enough attention of its own. There’s a reason our attention plays such a big role in tech companies’ profits, and why news media dove so hard into clickbait (to get our attention). And why so many of us feel frequent urges to smash our smartphones with all their mental demands.
Attention means something. It’s not passive. Attention, and what we give ours to, has real-world effects.
The person showing me the AI’s letter-writing skill happened to be my spouse—who’s worked in cybersecurity and data protection for decades now; it’s part of his job to follow tech’s cutting edge and its inherent risks—so he took my brief annoyed digital tech rant in stride. He’s heard it before.
I’m not anti-tech. I wrote about this directly in A Walking Life. The fear that “we might become cyborgs someday” slightly exasperates me. We’ve been something like cyborgs since the first hominins picked up tools. Spoken language is a technology. So is visual art, and reading, and so are glasses and shoes and decorative jewelry and walking sticks. We interact with objects we find or make in ways that make them part of us all the time, and have done so probably since before Homo sapiens were even a species.
This doesn’t, though, mean that all technology is good for humans, much less the rest of life, or that every development serves to make our lives better. The Luddites, as I wrote about in the book, didn’t smash knitting frames because they hated technology. They smashed them because factory owners, chasing far greater profit, were replacing skilled human operators with machines or with less-skilled operators of those machines, leaving people without work and therefore without ways to feed and support themselves and their families. And churning out shoddy materials in the process.
I got into the subject of Luddites and technology because A Walking Life, being about walking, by extension was also about cars, car-centric culture, and how much destruction they’ve caused. The history of resistance to ceding our roads and streets to cars—which was immense in the early 1900s—has been wiped from our collective imaginations, convincing us that this is the world we wanted.
With digital technology, we’re in the very first steps of going down a similar path. I give attention to AI not because I think it’s cool, but because its development and implementation are integrated—often invisibly—with private property, ownership, and any hope of an equitable future.
Attention is not enough to control technology’s impact on our lives. It never was and never will be. But it is a necessary element, and we only have so much of it on any given day. What could change, what needs could be met, if a classroom of hungry kids and the beauty of every starlit night got the same attention given to every twitch of AI development? If every story about digital tech were centered first on the fact that no technology is going to solve our problems on its own?
Our attention plays a role in where we go from here, and where we end up. Giving attention to the wrong stories allows those in power to conceal a fundamental question: Does this technology serve us, or do we serve it?
Some stuff to read, listen to, or watch:
Speaking of attention and car-centric culture, this interview on the War on Cars podcast about environmental psychologist Dr. Ian Walker’s study of what car culture has done to us psychologically was really interesting. Especially when he pointed out that even people who don’t drive give “car behavior” a pass.
And speaking of tech, The Markup did an analysis of how tech companies largely gutted a Right to Repair bill in New York State: “These particular TechNet edits all have a common theme—ensuring that manufacturers retain control over the market for the repair of their products.” (I was thinking about the Right to Repair this morning as I pulled out a pair of jeans I bought less than six months ago that I already need to patch. What if I were only allowed to use fabric and thread and methods approved by the clothing company that made the jeans? Or had to send them to a certified technician?)
Kate Wagner writing in The Baffler about architecture firms’ complicity in the travesty that is NEOM, also known as “The Line,” the 105-mile-long completely enclosed and automated city being envisioned in the desert of Saudi Arabia: “The year-round ski resorts, indoor shopping malls, and bespoke manses of NEOM will be built on the backs of human suffering. They actively harm the world, not improve it. This is obvious to anyone with a conscience.”
This episode of Building Local Power, the podcast from the Institute for Local Self-Reliance, isn’t that long but packs in a lot of information about the role of monopolies in inflation: “In 2021 these markups skyrocketed, way beyond the cost of whatever a company might need to acquire in order to sell a thing. . . . It’s the highest level on record, and the largest one-year increase on record. . . . This is straight corporate profiteering.” (I particularly appreciated the few minutes of focus on the loss of pay and worker rights in the trucking industry starting in the 1980s.)
Iraq war veteran and author of Learning to Die in the Anthropocene Roy Scranton writing in Emergence magazine on what people mean when they say, “the end of the world.” “We can hardly make sense of it without attending to the realization that the world has already ended, over and over, for countless peoples and epochs.”
My head’s been lingering in bleak places recently, so I’m immensely grateful to Sherri Spelic for sending me this hour-long conversation between Ross Gay and Clint Smith about Gay’s recent book Inciting Joy: “You scratch a little bit and everyone’s broken-hearted.” (There was so much to love in this conversation. One thing that really stuck with me was his point that there’s a lot of discussion about epigenetic trauma, but none about epigenetic joy. That literally never occurred to me.)
And an interview on the podcast Storytelling Animals with Stan Rushworth and Dahr Jamail, co-editors of one of the more important books I read last year, We Are the Middle of Forever: Indigenous Voices from Turtle Island on the Changing Earth. (The podcast link is to a YouTube audio file, but Storytelling Animals seems to be on all podcast platforms.)
Just got around to this piece in my inbox! (I'm a little behind.)
I would highly recommend this EconTalk interview, which considers how a predictive text language model's success depends largely on the predictability and unoriginality of the humans who wrote the material that trained it. (A mouthful, I know. https://www.econtalk.org/ian-leslie-on-being-human-in-the-age-of-ai/)
The previous GPT-3 language model of a few years ago was amazing, in the sense that you could feed it ridiculous queries and get hilarious results. "How many turduckens can I fit in my mouth?" It was fascinating in what it could do, but clearly limited by what it couldn't do.
The latest model is truly incredible. I had a really interesting conversation with it about the benefits and challenges of reading the Iliad. I asked it to compose a short, Star Wars-themed Christmas story. The r/ChatGPT subreddit is full of really interesting interactions that probe its capabilities and limitations. I don't think it's incredible because I think it is actually "Artificial Intelligence." I don't think it's about to take over the world, à la The Matrix.
But, it is going to change the world the way the calculator has. And the locomotive. And social media. Insidiously. With real economic benefits and human liabilities, as you astutely point out. Just as math students at some point become merely calculator operators, I think many language students may become language model operators. It is a sad reality. The plow shapes the field, and it shapes the plowman.
I think it's okay to be in awe of the power of the locomotive, recognize the economic benefits we get from it, and be critical of the way trains have concentrated wealth and violated the land. The more we can understand this beast, the better prepared we can be for the future. The future will belong, more and more, to those who can think. That will be the scarcest and most valuable commodity.
Love this piece. I am reminded of something I heard Dr. Iain McGilchrist utter during a conversation I listened to a couple of months ago:
"Attention is a moral act because it changes what is actually there in the world for us to find. It changes us."
We must take care in how we attend to the world. It matters.