Typos and humanity
Over the weekend, I read a chilling line in a marketing-related newsletter[^1]:
A typo is no longer just a typo, it is a signal the writer is not using AI.
The context: the author was arguing that there's no excuse for typos—which he seems to obliquely posit as a form of humanity—because now you can just ask large language models (LLMs) like GPT-4 to edit your work. (Which is what the newsletter writer did to produce his "typo-free" writing. More on that later.)
So I suppose writing with less humanity is good, actually? Because it's more technically "correct"?
As someone who has made a life of reading and writing—both for fun and as part of my career—this was a disturbing thing to read. To me, writing and reading have always been about connecting with other people as humans. So, to me, every attempt to strip humanity out of writing nullifies its very purpose.
Even in the case of journal writing, writing is about connecting with your own humanity. I am a big believer in morning pages, something that was popularized by Julia Cameron's book The Artist's Way (though the concept existed before that). Morning pages are a way to dump your thoughts into writing so you can move on with your day without being weighed down by your worries and anxieties.
Cameron doesn't even consider morning pages "writing," because they're not intended to be shared. Instead, they're a sort of preparation for the day ahead, a way to get your mind in order. As such, they are a deeply human practice, almost like meditation. They are a way of connecting with our humanity, the flawed parts of ourselves that are full of bad ideas, worries, and imperfections. Because they are not meant to be shared and should be written as quickly as possible, they are full of grammar and spelling mistakes and unclear sentences. The whole point is that they are unedited stream of consciousness.
Of course, writing that is meant for other people to consume should be more polished. The purpose isn't a brain dump; the purpose is communication.
To that end, sentences should make sense. Grammar and spelling rules should generally be followed, at least to the point of ensuring that the writing is legible.
But everyone has their own idiosyncrasies, their own voice. And that doesn't need to be smoothed out and filed down into technically correct but personality-less prose.
Writing as a path to thinking
In a masterful article for The New Yorker, Ted Chiang gives one of the best and most easily understood explanations of what large language models (what we typically call AI) are actually doing and how they function. He also talks about the importance of writing. Just because LLMs can write quickly and easily, in generally correct grammar, doesn't mean that LLMs should do all of our writing for us.
Chiang writes about how it is necessary for humans to write in order to discover their own ideas—even if when they begin writing, their work is unoriginal and derivative:
If students never have to write essays that we have all read before, they will never gain the skills needed to write something that we have never read.
It's important for students to learn how to articulate their ideas, not to prove that they've learned the information, but to develop their own original thoughts. Whether someone is a student or not, they likely develop their original ideas through writing. If humans aren't creating our own original writing or learning how to write, our ability to think creatively could atrophy.
"AI" as a tool, "AI" as an authority
I'm not one to say that "AI" tools should never be used during the writing process.
After all, I use Microsoft Word's spelling and grammar check daily.
I also wrote the rough draft of this blog post using Nuance's Dragon dictation software, which uses "machine learning" to better understand what the user is saying and translate it into text.
Right now, I'm writing this on Nuance's mobile Dragon Everywhere software, which certainly leaves a lot to be desired in terms of accuracy. (Though it's still better than Google's free voice typing, which I also use daily.)
It certainly isn't like "machine learning" is infallible when it comes to writing. In fact, many of the typos that make their way into my writing are introduced by Dragon or Google voice typing not understanding my accent.
Perhaps an AI booster might say my accent (which that now-paywalled New York Times dialect quiz claimed is a mixture of North Texas, western Louisiana, and Oklahoma City) is the problem, not the software.
On days when I dictate a lot for work, I notice that my accent shifts slightly to become more "comprehensible" to the software, even though the desktop version of Dragon is supposed to adapt to my accent, not the other way around. There's a whole 'nother essay I could write about how the software we use tries to polish away our culture and histories (I'm Cajun and grew up in North Texas) and homogenize us into something that computers can best understand.
Writing in "AI voice"
In my day job (which involves editing other people's work), I've started to be able to spot when people have fed their fiction through "AI" editors like ProWritingAid.
It's a bit hard to articulate what the AI voice is; I'm just beginning to develop an eye for it.
But it looks like overly efficient sentences that seem to be missing something. They've been rephrased to be the most grammatically correct that they can be, on a technical level, but they often read as if they're incorrect.
It's an uncanny valley for writing, something that looks and seems human at first glance, but there is . . . something . . . missing. It's too efficient. Not everything needs to be over-optimized. At a certain point, perfectly correct prose stops sounding human.
Limits and uses of AI editing
As someone who works in Microsoft Word for hours every day, I frequently see Word's spelling and grammar check try to make corrections that are flat-out wrong. Sometimes the suggested changes are grammatically correct but awkward in practice. Often, however, the software misunderstands a grammar rule, and its changes, if implemented, would make sentences incomprehensible, turning well-crafted prose into gibberish.
I always feel unaccountably pleased when the computer makes these mistakes. They feel like a confirmation of my own humanity, somehow.
Microsoft Word's spell check and even tools like ProWritingAid and Grammarly have their place. Not every piece of writing calls for a human editor. (For example, my blog posts don't get edited by anyone other than me. I rely on the basic spell check in my markdown editor and then send my writing out to the world.)
Also, not everyone has the money or time for a human editor, and it can be incredibly helpful to have a piece of software that can help polish the rough edges of your writing (especially if you're writing in a language that you're less familiar with).
All this is to say that machine learning, LLMs, AI—or whatever you want to call it—can be useful as a tool. But we should beware of letting it shape our expression and redefine our voices. Or circumscribe our thoughts.
To me, there's a big difference between having an AI catch your spelling and grammar errors vs. having it rewrite and rephrase your sentences to "improve" your writing (or having it write a first draft which you then edit and expand upon).
The sentence about typos has a typo
I loathe grammar nitpicking, but because it's directly relevant to this conversation: Ironically, the sentence that inspired this post technically contains a typo (a comma splice).
A typo is no longer just a typo, it is a signal the writer is not using AI.
If you wanted to be grammatically correct, you might write it as "A typo is no longer just a typo; it is a signal the writer is not using AI." or "A typo is no longer just a typo. It is a signal the writer is not using AI."
But I guess because the AI didn't catch it, it isn't a real typo? That raises an interesting question. As people rely more and more on machine-based editors, will that change how we think of grammar and writing? Will some things that are technically "correct" be considered incorrect, and vice versa?
Like I mentioned, despite being someone who knows grammar rules intimately, I despise dogmatic editing and have no patience for people who are pedantic about grammar.
And, to be honest, I don't mind comma splices and similar "errors" in casual online writing (including my own).
Because I edit things for a living, I feel confident exercising editorial judgment and deciding that some typos are fine. In publishing, there is a common phrase, "stet for voice." It's an instruction to ignore a correction because doing so will preserve the voice of the author or character, and it is more important to keep that voice alive than it is to be grammatically correct.
We read to learn, to connect with others, and to go on adventures. We don't read because we relish samples of perfectly grammatical, efficient prose.
Junkspace and the internet as a dead mall
In January, I read a tweet about how the internet now resembles a dead mall.
Google search barely works, links older than 10 years probably broken, even websites that survived unusable popping up subscription/cookie approval notifications, YouTube/Facebook/Twitter/IG all on the decline, entire internet got that dying mall vibe
I haven't been able to shake that comparison. I have also been haunted by the 2001 Rem Koolhaas essay "Junkspace," which I've been reading and rereading since November. The essay, which is ostensibly about the slick, commercial spaces and malls that popped up in the late 20th century, is uncanny in its accurate description of the dead mall of the internet.
In an internet made up of five websites, each full of screenshots of the other four, how can we not feel like we're wandering through a dead mall or junkspace (which Koolhaas described as having no walls, only partitions)?
Add to that the impersonal bullshit texts that LLMs and LLM-powered editors help people churn out, and it's easy to feel like you're walking through the echoing corridors of an empty shopping mall. Occasionally, something catches your eye, and you turn your head to greet another human, only to be met by an animatronic mannequin that can talk almost like a human—but not quite.
That's how it feels to search Google and come up with a bunch of SEO content-mill, LLM-generated articles that mean nothing but rank in the algorithm because they've followed all the rules. The mall is dead and full of ghosts. And not even the fun, interesting kind of ghost.
Koolhaas calls junkspace a body double of space, which feels suspiciously like the internet (or, worse, the metaverse that tech ghouls keep trying to make happen). In junkspace, vision is limited, expectations are low, and people are less earnest. Sound familiar?
I'll certainly talk more about junkspace in future blog posts, but I can't stop thinking of parallels between the polished perfection of commercial junkspaces and the writing and editing churned out by LLMs.
If our mistakes make us human, I'm perfectly happy to make mistakes. To me, that is preferable to communicating with robotic precision and filing down all of the things that give my writing a unique voice (even when those things make my writing "worse").
[^1]: I'm not including a link to the original because I'm not trying to put anyone on blast or critique any one individual's views, necessarily. This is more about a larger trend that I'm seeing in the discourse about "AI" and humanity.