Since relocating to Substack, I’ve been intrigued by its audio options. Some writers read what they’ve written, augmenting posts with YouTuber-style intros, an option I’m likely to experiment with seeing as I love to talk and already read my writing aloud to myself before doing anything public-facing with it. Others leave the vocal interpretation of their work to the robots.
I listen to both. The robots often can’t figure out where the emphasis goes in a sentence, indeed in some single words. Also, non-Anglo names present an insurmountable obstacle. Titans of French cinema, gird your loins. I would shudder to think how they pronounce mine, except I was all shuddered out by the age of ten when it came to that particular curse.
A few weeks ago Google offered me a beta test of a just-rolled-out AI counterpart, and I thought why not? When a product is free, the product is you, but the blurb said “try at no cost.” Several sign-up steps later, having in fact racked up a time cost and no meaningful result, I abandoned the effort. I haven’t yet found an AI that I burned to test-run.
People ask me about AI and ML with the assumption that I have skin in the game, as the tech supposedly threatens the creative roles I hold. I maintain my early stance that human beings periodically freak out over the new advancement of the day and then, so far without fail, learn to collaborate with it as opposed to being replaced by it (horses and buggies are still real). And yet I’ve taken to saying I hope AI takes my job. I hope it takes all our jobs. I hope it renders jobs obsolete, and that we are ultimately able to assign every conceivable task to a designated robot, leaving us the space to fulfill the potential of our souls. Nothing to earn but friendship. Nothing to develop but family. Nothing to pursue but art. What did we do when we were all forced inside? Where did we turn when the chips were down?

But add in the people at the helm of all this and it isn’t so simple. Rebecca Solnit writes in the London Review of Books, “Tech billionaires often seem more interested in surviving the apocalypse than preventing it.” She goes on to talk about the conviction cherished by this class—exacerbated by isolating forces like the pandemic and the very products and processes they themselves propagate, many of which are having highly tangible, apocalypse-prophesying consequences—that “they are the good guys, the people with solutions, sometimes the victims, but never the perpetrators of problems.”
I reiterate: I’m not bothered by the idea of these technologies taking over much of our work, even down to self-driving cars (which, Solnit also notes, need significantly more practice). What bothers me is the takeover by the companies that engineer this tech, a takeover whose effects are already apparent. Their thirst for profit knows no bounds; there’s no human interest they won’t crush, no institution they won’t challenge and weaken, in that endeavor.
And they disrespect our work by mining it for their agents’ gain with neither shame nor permission. Again, I don’t even object to my data being used—‘processed,’ to use the ultra-sanitized term—in the quest for a smarter internet. If I’m going to be an internet user (beneficiary?), that’s the bargain I make. I do have preferences as to the context. Becoming an adult turned me, inexplicably, into a survey person: I will answer almost every survey sent me by a business I’ve patronized, even when I don’t stand to win a prize or be somehow rewarded. They don’t take much time, there’s no obligation to go into detail (writing product reviews isn’t really my style) or reveal personal info, and they help marketing teams to develop a better picture of their buyer personas and the algorithm to learn more about the demographics of its users. (I DID TAKE A DIGITAL MARKETING COURSE THANK YOU VERY MUCH.)
The insidious trend we contend with today is the free-for-all plundering of our online homes by their architects, and our only defense—not even a sure one at that—is to expressly voice our opposition. Meta is gearing up to harvest all its users’ data for the purposes of training generative AI. For anyone who wants to opt out of this use, the time is now—prior to Wednesday the 26th. This link redirects to a Substack post that guides you through how to opt out, because Meta naturally buries the lede and sets up as many hurdles as possible. It’s in the interest of every tech giant to do so, to plow onward with the shoddy silence-means-consent model.
Silence, for the record, never means consent. Whoever takes it as such is trying to get away with something. Don’t let them.
If artificial intelligence is so intelligent, I should think its trainers would be encouraging it to distinguish between mundane footprints it can learn from and works of transcendental humanity it should let be for humanity to enjoy. But how do we go about classifying such works? And isn’t it incumbent on us to share those works on the internet in the hopes of reaching other humans? How do we distinguish between facets of our online identities? And if we don’t, or can’t, can we expect a robot to? There has to be a line somewhere. Doesn’t there?
Suffice it to say we have at least as much to learn as the robots do in navigating this brave, but sometimes also not-so-brave, new world. Until we sort it out: careful the things you say, AI will listen.