I recently saw a thread on Mastodon about nagging notifications from mobile applications, specifically ones that don't carry any useful information and simply remind you of the app's existence on your phone. Thanks to Glyph for sharing this thread.
The thread made me think about the future of notifications in light of past many-to-many attention-demanding technologies, specifically phone calls.
My parents had an active landline until only a few years ago, so I can attest that the value-to-noise ratio on the landline was zero. The process of arriving there involved three steps:
At some point, >90% of incoming calls were not welcome.
Voicemail was added to avoid the aforementioned calls. This removed the "attention-right-now" ability of phone calls.
Congratulations! Phone calls are now useless compared to text messages.
My parents knew that anyone who called was a "solicitor" or a scammer, so they let every call go to voicemail and listened at the end of the day. My brother and I stopped calling the landline because they never picked up; we called and texted our parents on their mobile phones because those actually got responses. The network had destroyed its own utility, and most people moved on.
This isn't exclusive to landlines either; it happens on mobile phones, too. For example, my phone does not ring unless you're in my contacts list. Unknown phone numbers are sent straight to voicemail. Without this filtering, the network has an extremely low value-to-noise ratio, and it's hard to imagine how many scam victims and human lifetimes would be saved if this were the default for mobile phones. Alas.
Phones used to be a way to get someone's attention in the moment and have a conversation if the other party was available. "Getting someone's attention" was the whole point. The lack of authentication and filtering controls meant that the many-to-many attention-grabbing network was destined for this fate. Now notifications are in a very similar boat, and I am "calling" (heh) that one day notifications will either be a thing of the past (i.e., most people disable them) or become more configurable and filterable to match user expectations.
If you look at my phone, almost every application that generates notifications is either disabled or a messaging application like iMessage, Signal, or Discord. Apple does not make this easy at all: you need to click into each application individually and disable notifications (flashback to having to do the same to disable Apple Intelligence, coming from a "design-oriented" company).
So what would voicemail for notifications look like? All notifications received over the course of the day delivered as a sort of "newsletter" at a time of your choosing? Maybe the ability to "sinkhole" notifications that match, or don't match, a pattern? I want to see more APIs and experiments around notification management, so that applications could be installed to manage notifications regardless of whatever bad UI Apple forces on its users.
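To make the idea concrete, here's a toy sketch in Python. It's purely hypothetical: no mobile OS exposes notification hooks like this today, and the rule shapes (an instant allow-list for messaging apps, sinkhole patterns, a daily digest) are just the ideas floated above.

```python
# Hypothetical "voicemail for notifications" policy. Nothing here maps to
# a real mobile OS API; it only sketches the rules described above.
from dataclasses import dataclass, field
from datetime import time
import re

@dataclass
class Notification:
    app: str
    text: str

@dataclass
class NotificationPolicy:
    allow_instant: set = field(default_factory=set)   # messaging apps ring through
    sinkhole: list = field(default_factory=list)      # regexes to drop outright
    digest_at: time = time(18, 0)                     # everything else batches here
    held: list = field(default_factory=list)

    def deliver(self, n: Notification) -> str:
        if any(re.search(p, n.text, re.I) for p in self.sinkhole):
            return "dropped"                          # nag matched a sinkhole pattern
        if n.app in self.allow_instant:
            return "shown now"                        # real human communication
        self.held.append(n)                           # wait for the daily digest
        return f"held for {self.digest_at:%H:%M} digest"

policy = NotificationPolicy(
    allow_instant={"Signal", "iMessage"},
    sinkhole=[r"we miss you", r"flash sale"],
)
print(policy.deliver(Notification("ShoppingApp", "Flash Sale! 50% off")))  # dropped
print(policy.deliver(Notification("Signal", "hey, dinner tonight?")))      # shown now
print(policy.deliver(Notification("NewsApp", "Your daily briefing")))      # held
```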
I would love to see a mobile operating system manufacturer or app store actually embrace this future for notifications: the days of spamming users' pockets against their will are not long for this world. These notifications do nothing for a company and only teach users to disable them. Will the utility of this "network" be preserved, or will it continue down the road of users steadily resenting their time, friendships, and attention being abused for a fractional ROI?
This little thought exercise made me even more excited for "Other Networks: A Radical Technology Sourcebook" by Lori Emerson, which can be pre-ordered on bookshop.org. If you're reading this book (or others like it), reach out!
On a brisk January evening this year, I was speeding down I–295 in northeast Florida, under a full moon, to visit my dad’s brain. As I drove past shadowy cypress swamps, sinewy river estuaries, and gaudy-hued billboards of condominiums with waterslides and red umbrellas boasting, “Best place to live in Florida,” I was aware of the strangeness of my visit. Most people pay respects to their loved ones at memorials and grave sites, but I was intensely driven to check in on the last remaining physical part of my dad, immortalized in what seemed like the world’s most macabre library.
Michael DeTure, a professor of neuroscience, stepped out of a golf cart to meet me. “Welcome to the bunker. Just 8,000 of your quietest friends in here,” he said in a melodic southern drawl, grinning in a way that told me he’s made this joke before. The bunker is a nondescript warehouse on the Mayo Clinic’s Jacksonville, Florida, campus that houses its brain bank.
DeTure opened the warehouse door, and I was met with a blast of cold air. In the back of the warehouse sat rows of buzzing white freezers. DeTure pointed to the freezer where my dad’s brain sat in a drawer in a plastic bag with his name written on it in black Sharpie pen. I welled up with tears and a feeling of intense fear. The room suddenly felt too cold, too sterile, too bright, and my head started to spin. I wanted to run away from this place.
And then my brain escaped for me. I saw my dad on a beach on Cape Cod in 1977. He was in a bathing suit, shirtless, lying on a towel. I was 7 years old and snuggled up to him to protect myself from the wind. He was reading aloud to my mom and me from Evelyn Waugh’s novel, A Handful of Dust, whose title is from T.S. Eliot’s poem, “The Waste Land”: “I will show you fear in a handful of dust.” He was reading the part about Tony Last, an English gentleman, being imprisoned by an eccentric recluse who forces him to read Dickens endlessly.
The memory calmed me down. The brain in a freezer bag, after all, was not my dad. My dad was Christopher Lehmann-Haupt, a New York Times book critic who spent decades, from the 1960s to the ’90s, shaping literary culture with an authority that made authors both revere and fear him. At home, my mom, Natalie Robins, a poet and author, called him “Crit,” her pet name for the tough critic who was most tough on himself, reading books late into the night, clinging to a glass of Christian Brothers brandy, reclined in a broken swivel chair.
As we walked through the freezer room, I continued to think about my dad’s brain, nestled on a shelf among other frozen brains, and my fear gave way to a strange mix of wonder and sadness. The once-active neurons that fired electrical and chemical signals from axons to dendrites inside the crevices and folds of the left frontal lobe, home to the language center, were now quiet. The memories produced in the hippocampus were now frozen. These parts worked in concert to convert my dad’s experiences into the language of more than 4,000 essays and book reviews, two novels, and a memoir about his life as a New York Yankees fan.
Beginning when he was in his 70s, my dad loyally volunteered for a longitudinal study on the aging brain run by the Albert Einstein College of Medicine. He would drive from home in the Riverdale neighborhood of the Bronx to a nearby lab at Montefiore Hospital every few months to take written tests and participate in interviews to evaluate his memory and mental agility. The purpose was to contribute to the body of scientific data on neurodegenerative diseases, such as dementia and Alzheimer’s disease, which Nature Medicine recently predicted could double in the United States over the next 35 years to 1 million new cases annually. My dad became so inspired by the study that he agreed to donate his brain for further research when he died.
I came to the brain bank because I wanted to learn how his brain was contributing to science. And I came for my own reasons. I have always wondered about my dad’s mind, how it worked, why it told stories, and how our family stories going back generations shaped his life. What I learned was emotional and startling, particularly when the neuropathologist explained to me what my dad’s brain revealed about how he died. My visit helped me find the deeper connection with my dad that I had yearned for growing up.
As my dad was taking tests about his brain at the Albert Einstein College of Medicine, he was writing a lot, working on a memoir about his years at The New York Times and the Jewish side of his family that he knew little about. He was calling it Re-View.
He had been in therapy in the last decade of his life, working out family trauma related to his parents’ tumultuous divorce during the time his father moved the family to Berlin after World War II. My grandfather, Hellmut Lehmann-Haupt, was an expert on rare books and the Gutenberg printing press. After the war, he took an assignment from the Monuments, Fine Arts, and Archives program, which the Allied powers had set up to hunt for artistic and cultural artifacts that had been stolen by the Nazis. My dad wrote that “my father’s assignment was to nurture and to revive the artists that Hitler and his henchmen had trod down for being ‘degenerate,’ their characterization of any imagery that deviated from their narrow totalitarian ideal of the human form.”
My dad also wrote about his relationships with many authors of his era: William F. Buckley Jr., Robert Caro, John Cheever, E.L. Doctorow, Joan Didion, Nora Ephron, Philip Roth, Lillian Hellman, and critic Lionel Trilling. He wrote that he and my mom Natalie “got so close to Bill and Pat Buckley that once when they were squabbling, which they did very often, Natalie was able to take their hands and say, ‘Now, children.’ About our being friends with both Hellman and Trilling, mortal enemies, Mailer once remarked to Natalie, ‘Someday, after you die, especially if you go to Hell, you’ll have to choose between those two.’”
FATHER AND DAUGHTER: The author and her dad, Christopher Lehmann-Haupt, an influential book critic, at home in New York City in 1972. “I have always wondered about my dad’s mind,” Rachel Lehmann-Haupt writes today. Photo by Nancy Crampton.
My dad was in relatively good health for his age, but he spent many years writing while chain-smoking unfiltered Camel cigarettes. At 45, he had a silent heart attack, so he quit smoking, began taking statins, and took up jogging around the neighborhood.
When he turned 80, he would occasionally give me updates on his brain study, I think mainly to assure me that he didn’t have dementia. He was the kind of parent who never wanted to become a burden to his children. I mostly listened because I hoped hearing about the study would help me get to know him better. We were close, yet he was emotionally distant like many men of his generation. Getting his attention always felt like a struggle. He seemed to soften in those years and enjoy life more. He even gave up drinking brandy, and one day, I saw him doing deep knee bends in the kitchen. He loved being a grandfather and wanted to stay healthy to extend his time with us.
Then, one evening, in late October of 2018, at the age of 84, he suffered a stroke. My mom called me from an ambulance rushing from their Riverdale home to the Columbia Presbyterian emergency room on West 168th Street. Through her tears, she described how he had made a stir fry for dinner and then returned to his study to write. An hour later, she came downstairs to show him a photo of my son on her phone and found him unconscious and paralyzed in front of his computer.
In the emergency room, the doctors discovered that soon after he sat down at his desk, a blood clot had been released from his heart and traveled up a cerebral artery into his brain, stopping blood flow. They rushed him into endovascular surgery to clear the clot, but when he woke up, he could no longer speak. When the nurses handed him a pen, his hand could no longer form letters. It was a poetic and cruel twist of fate for a man whose life was shaped by words and stories. I didn’t know what was worse: his impending death or that we couldn’t talk about what was going through his mind. “Be logical,” he always told me when I went to him for advice, but logic was lost on me now. I just longed for his comforting words.
Over 10 days, our family watched him fade away, and soon we made the collective decision to turn off the machines that were keeping his heart pumping. My dad had made it clear in his medical directive that he didn’t want “death staved off” if he could no longer live independently. In those hours when my mother, brother, and I were making that decision, I didn’t want to let go, but then I placed a pen in his hand to see if he could still hold it, and he couldn’t. His literary brain had lost language and could no longer produce the words that drove his life and defined so much of his identity. I knew then it was time to say goodbye.
I left the cold brain bank and piled into a golf cart with DeTure and Dennis Dickson, the neuropathologist who performed the autopsy of my dad’s brain. We zipped past modern glass research buildings, many of which contain a vast art collection that includes a chandelier of 170 glass-blown flowers by Dale Chihuly and a triptych of animal paintings by Andy Warhol called “Endangered Species.” There were bronze nudes on lawns donated by the wealthy families of former patients, and a human-made lake dotted with palm trees and punctuated by a fountain.
Dickson and I sat in his small office just down the hall from the dissection lab. In a modest, adroit Midwestern manner, Dickson looked at me. “It’s rare for a loved one to come to visit a brain. Maybe we get one a year, usually a benefactor who donated.” He walked me through how my dad’s brain got to this unusual library. In New York, a pathologist removed his brain; half was fixed in the chemical formalin, the other half was frozen, and both were flown down to Florida in a box.
The formalin-fixed half of his brain was set in a refrigerator to be dissected and sampled in the “brain-cutting conference.” Dickson cut standardized sections from the major brain areas while Ph.D. students read out the donors’ cause of death. A neurohistologist—a specialist who studies the microscopic anatomy and connections of the brain—embedded the tissue sample into paraffin wax and applied pink and purple dyes to make slides resembling fibrous stained-glass windows. These slides are used to look for specific pathologies under a microscope and digitized in a scanner so researchers can view the tissue on a large scale on a computer screen to analyze the cells. Brains are de-identified so scientists don’t know who the donor was.
Dickson told me that the report found no signs of Alzheimer’s disease. That jibed with what Richard Lipman, the head investigator of the Einstein study, had told me about my dad. “Our staff thought he was amazing, articulate, and kind,” Lipman said. “I have rarely seen people at age 84 with verbal memories that good.” It was hypertension and atherosclerotic disease that caused the blood clot to blast through his brain. I thought about his ashtrays filled with smashed cigarette butts, his volcanic reactions to stress, and love of fatty foods.
Dickson said my dad had two contiguous strokes in the temporal and frontal lobes, the location of the control centers of language. The second stroke destroyed almost the entire temporal lobe, including the medial temporal lobe structures such as the amygdala, where emotions get processed, and the hippocampus, where memories form. The temporal and frontal lobes also hold the language control centers, Broca’s and Wernicke’s areas. “The affected regions are very much in the business of expressive and receptive language, both understanding what’s said to you and being able to formulate semantics with meaningful sentences,” Dickson said.
Because my dad could understand us in his final 10 days in the hospital but couldn’t speak or read, the doctors concluded that he had Broca’s aphasia. In 1861, the French surgeon and anthropologist Pierre Paul Broca studied patients with language deficits and aphasias. He discovered the role of the left frontal lobe in speech production. It was the first time a scientist showed that the two hemispheres of the brain, though seemingly symmetrical, have different functions. He famously said, “Nous parlons avec l’hémisphère gauche,” meaning “We speak with the left hemisphere.”
I was curious whether, when he was dying, even though he couldn’t express himself, my dad’s memories were still in there. Dickson said he couldn’t speak to that. Later, I spoke to John Crary, professor of neuroscience, artificial intelligence and human health at the Icahn School of Medicine at Mount Sinai, who runs its brain bank. He told me that while aphasia can impact the ability to access semantic memory and create words, that memory itself is generally still present. Although my dad could not express his thoughts and memories, I felt relieved knowing they were still there in the last 10 days we were all together.
There was something else I had to ask Dickson in his office. A few months after my dad died, my mom received his autopsy report. It was written in medical language that laypersons can only puzzle through. But one passage did jump out. It said several sections of my dad’s brain had lesions that resulted not from the stroke but from “foreign material.” I had to accept that my dad was gone, but I always wondered about that material, how it got there, and whether it was connected in any way to his death.
Dickson told me he suspected the foreign material was polymer granules that sheared off a catheter that doctors had inserted into my dad’s arteries to assess the damage to his heart. As blood flowed to the brain, the polymer material “occluded” and “distended” tiny vessels in the brain, blocking blood flow and oxygen. While the initial stroke was likely caused by a blood clot due to atherosclerosis, the subsequent stroke may have been caused by the brain lesions that formed in response to the polymer material.
“Your dad,” Dickson said, “got the worst of both worlds. He got the stroke, and then the treatment for the stroke made it even worse.”
A FOREIGN DISCOVERY: At his neuropathology lab at the Mayo Clinic, Dennis Dickson dissects a brain. When Dickson performed the autopsy on the author’s father’s brain, he found it contained a damaging “foreign material.” Photo courtesy of the Mayo Clinic.
I didn’t press Dickson further on the polymer reaction because I had other questions ready for him. But later I thought a lot about it. I obtained my dad’s medical records and found no mention of the polymers. I told my mom. “I was shocked that no one ever told us about it,” she said. “It was a piece of information I should have known, and I should have been told. I certainly would not have reacted quietly. I would have been a lot louder.”
I learned, though, that the doctors who treated my dad likely didn’t know about the polymer leakage from the catheter. Except for a handful of medical journal articles and newspaper stories, the problem of polymers shearing off catheters into patients’ blood and organs is practically unknown.
Tyler Hickey, an assistant professor of medicine and pathology at the University of Toronto, who wrote one of those journal articles, published in 2019, told me that most medical personnel are indeed unaware of the problem. During procedures, he added, they can’t see the microscopic particles. “They get scattered into little areas in the brain—it’s death by a thousand cuts.” Hickey’s study of 110 patients who had catheter treatments within 90 days of their deaths showed that polymer materials in their bodies caused death in nine of them.
Hickey was kind enough to look at my dad’s autopsy report for me. “The polymer leakage certainly may have exacerbated the initial problem the physicians were trying to treat,” he told me. “But it’s impossible to know if your dad would have lived without the polymer material. Unfortunately, the leakage is just a quality of these catheter designs.” That was cold comfort to hear.
As Dickson and I continued to talk in his office, he told me the polymer contamination in my dad’s brain made it “one of a kind” among the thousands of brains in the Mayo Clinic brain bank. It was a VIP in brain research in another way. Because there was no trace of dementia in his brain, it would be used as a healthy control. Control brains are used in research to measure brain changes that are related to diseases like Alzheimer’s and vascular dementia, and to better understand advancements in rehabilitation for stroke patients who have lost the ability to communicate. “We don’t get enough healthy brains to use as normal controls,” Dickson said, expressing his gratitude that my dad had donated. His brain tissue can now be checked out like a book from stacks of shelves to answer so many scientific questions.
After I said goodbye to Dickson, I waited in the Mayo Clinic lobby for my taxi to the airport. I watched a father push his elderly mother in a wheelchair while her grandson cuddled on her lap. I thought about my own aging brain and wondered whether my dad’s love of words and story helped keep his brain strong. Studies have revealed that our most human qualities, our connections and collective stories, can protect our memories and keep our brains strong as we age. Sharing personal stories or fictional narratives can improve the neuroplasticity of neural networks and encourage new neural connections. It can also boost neurotransmitters such as dopamine and serotonin, which enhance mood and motivation. No doubt this emerging science helps explain why my dad continued to excel in memory tests until the end.
In the taxi, I called my mom, who is 86 and still writing poetry and short stories every day, to tell her about the visit. “What he did for the world is so beautiful and magical,” I told her. “His brain is continuing, life goes on indefinitely with this research.” She became quiet. “Well, when you’re done with this, you and your brother still have to deal with the ashes. It’s not nothing. It’s a different thing.”
My dad’s ashes sit inside a bookshelf in the Riverdale home where I grew up. He was an avid fly fisherman, and his final wish was for us to spread his ashes in a trout stream where he used to fish. In the New York Times obituary of my dad, Robert McFadden quoted from my dad’s review of the anthology The Habits of Rivers, edited by Ted Leeson: “The wonder of these dozens of pieces is that they offer a lesson in the art of living,” he wrote. “Or, more precisely, in the art of getting outside yourself, which is essential to sane living.”
Reading these words made me realize that this visit to his brain was a way to see the bigger picture of my dad. He saw himself as part of a larger corpus of connected stories. This approach to life was punctuated by the words of his college buddy, the novelist Norman Rush, author of Mortals, who spoke at his memorial.
“Chris was an engaged man, with a citizenly approach to those tasks and institutions that help sustain enlightenment, idealism, and human decency,” Rush said. “He was generous with his mortal life.” Rush named my dad’s connections to his high school board and the James Joyce Society, and he mentioned my dad’s commitment to the “long-term aging study that involved detailed record-keeping and irksome visits to the doctors managing the project. This was citizen Christopher.”
Growing up, I often felt invisible next to these pursuits as he worked so many hours in his home study, intensely focused on other writers’ words and his own to support our family. Now that I’m older and a parent, I see this effort better: his choices to get outside of himself. He showed up. He said yes to it all. He, indeed, was generous with his mortal life. The gift of his brain to science has given me a way to see my dad as a man just trying to get through all the stresses, pains, and baggage—and the joys of enjoying good food and a little too much brandy—that we all accumulate in our bodies. Mostly, I now know that I share the DNA of his brain, and the rest of the story is unfolding through me, giving me the connection with him I had always yearned for.
In the 17th century, the philosopher René Descartes proposed that humans have a body and a soul, the soul existing outside the body, immaterial and indestructible. He called the non-physical stuff of thinking res cogitans and everything else, the physical that runs through our nerves, res extensa. My family has not yet had the ceremony to send my dad off down the river, so this story feels, in a way, unfinished.
But if there’s one thing I came to realize more deeply than ever before about my father, it’s this: His words and the wisdom in them will always live in archives that can be called up with fingers, and his brain will live on in the network of brains pushing the boundaries of science, and his ashes—maybe his soul—will be part of nature, forever flowing.
When I grade students’ assignments, I sometimes see answers like this:
Utilizing Euler angles for rotation representation could have the following possible downsides:
Gimbal lock: In certain positions, orientations can reach a singularity, which prevents them from continuously rotating without a sudden change in the coordinate values.
Numeric instability: Using Euler angles could cause numeric computations to be less precise, which can add up and produce inaccuracies if used often.
Non-unique coordinates: Another downside of Euler angles is that some rotations do not have a unique representation in Euler angles, particularly at singularities.
The downsides of Euler angles make them difficult to utilize in robotics.
It’s important to note that very few implementations employ Euler angles for robotics.
Instead, one could use rotation matrices or quaternions to facilitate more efficient rotation representation.
[Not a student’s real answer, but my handmade synthesis of the style and content of many answers]
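(For what it’s worth, the gimbal-lock claim in this boilerplate is at least true, and it’s easy to check concretely. A minimal sketch, assuming SciPy is available: at 90 degrees of pitch in the intrinsic Z-Y-X convention, two different Euler triples describe the same rotation.)

```python
# Gimbal lock, concretely: at pitch = 90 degrees in the intrinsic Z-Y-X
# convention, yaw and roll collapse into one degree of freedom, so
# distinct Euler triples yield the same rotation.
import numpy as np
from scipy.spatial.transform import Rotation as R

r1 = R.from_euler("ZYX", [30, 90, 0], degrees=True)   # yaw=30, pitch=90, roll=0
r2 = R.from_euler("ZYX", [0, 90, -30], degrees=True)  # yaw=0,  pitch=90, roll=-30

print(np.allclose(r1.as_matrix(), r2.as_matrix()))    # True: same rotation
```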
You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT.
They are invariably verbose, interminably waffly, and insipidly fixated on the bullet-points-with-bold style.
The prose rarely rises above a sixth-grade book report, constantly repeating the prompt, presumably to prove that they’re staying on topic.
As an instructor, I am always saddened to read this.
The ChatGPT rhetorical style is distinctive enough that I can catch it, but not so distinctive as to be worth passing along to an honor council.
Even if I did, I’m not sure the marginal gains in the integrity of the class would be worth the hours spent litigating the issue.
I write this article as a plea to everyone: not just my students, but the blog posters and Reddit commenters and weak-accept paper authors and Reviewer 2.
Don’t let a computer write for you!
I say this not for reasons of intellectual honesty, or for the spirit of fairness.
I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.
For the rest of this piece, I’ll briefly examine some guesses as to why people write with large language models so often, and argue that there’s no good reason to use one for creative expression.
Why do people do this?
I’m not much of a generative-model user myself, but I know many people who heavily rely upon them.
From my own experience, I see a few reasons why people use such models to speak for them.
It doesn’t matter.
I think this belief is most common in classroom settings.
A typical belief among students is that classes are a series of hurdles to be overcome; at the end of this obstacle course, they shall receive a degree as testament to their completion of these assignments.
I think this is also the source of increasing language model use in paper reviews.
Many researchers consider reviewing ancillary to their already-burdensome jobs; some feel they cannot spare time to write a good review and so pass the work along to a language model.
The model produces better work.
Some of my peers believe that large language models produce strictly better writing than they could produce on their own.
Anecdotally, this phenomenon seems more common among English-as-a-second-language speakers.
I also see it a lot with first-time programmers, for whom programming is a set of mysterious incantations to be memorized and recited.
I think this is also the cause of language model use in some forms of academic writing: it differs from the prior case with paper reviews in that, presumably, the authors believe that their paper matters, but don’t believe they can produce sufficient writing.
There’s skin in the game.
This last cause is least common among individuals, but probably accounts for the overwhelming majority of language pollution on the Internet.
Examples of skin-in-the-game writing include astroturfing, customer service chatbots, and the rambling prologues found in online baking recipes.
This writing is never meant to be read by a human and does not carry any authorial intent at all.
For this essay, I’m primarily interested in the motivations of private individuals, so I’ll avoid discussing this much; however, I have included it for the sake of completeness.
Why do we write, anyway?
I believe that the main reason a human should write is to communicate original thoughts.
To be clear, I don’t believe that these thoughts need to be special or academic.
Your vacation, your dog, and your favorite color are all fair game.
However, these thoughts should be yours: there’s no point in wasting ink to communicate someone else’s thoughts.
In that sense, using a language model to write is worse than plagiarism.
When copying another person’s words, one doesn’t communicate their own original thoughts, but at least they are communicating a human’s thoughts.
A language model, by construction, has no original thoughts of its own; publishing its output is a pointless exercise.
Returning to our reasons for using a language model, we can now examine them once more with this definition in mind.
If it’s not worth doing, it’s not worth doing well
The model output in the doesn’t-matter category falls into two classes for me: the stuff that actually doesn’t matter and the stuff that actually does.
I’ll start with the things that don’t matter.
When someone comments under a Reddit post with a computer-generated summary of the original text, I honestly believe that everyone in the world would be better off had they not done so.
Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless.
In essence, writing such a comment wastes everyone’s time.
This is the case for all of the disposable uses of a model.
Meanwhile, there are uses which seem disposable at surface level but which in practice are not so disposable (the actually-does-matter category).
I should hope that the purpose of a class writing exercise is not to create an artifact of text but to force the student to think; a language model produces the former, not the latter.
For paper reviewers, it’s worse: a half-assed review will produce little more than make-work for the original authors and tell the editor nothing they didn’t already know.
If it’s worth doing, it’s worth doing badly
I’ll now cover the opposite case: my peers who see generative models as superior to their own output.
I see this most often in professional communication, typically to produce fluff or fix the tone of their original prompts.
Every single time, the model obscures the original meaning and adds layers of superfluous nonsense to even the simplest of ideas.
If you’re lucky, it at least won’t be wrong, but most often the model will completely fabricate critical details of the original writing and produce something completely incomprehensible.
No matter how bad any original human’s writing is, I can (hopefully?) trust that they have some kind of internal understanding to share; with a language model, there is no such luck.
I have a little more sympathy for programmers, but the long-term results are more insidious.
You might recall Peter Naur’s Programming as Theory Building: writing a sufficiently complex program requires not only the artifact of code (that is, the program source), but a theory of the program, in which an individual must fully understand the logical structure behind the code.
Vibe coding, that is, writing programs almost exclusively by language-model generation, produces an artifact with no theory behind it.
The result is simple: with no theory, the produced code is practically useless.
In Naur’s terms, such a program is dead; in our case, it’s stillborn.
It should come as no surprise that nearly every vibe-coded app on the Internet struggles with security issues; look no further than the vibe-coded recipe app that leaks its OpenAI keys.
Every time one generates code by prompt, they create a new stillborn program; vibe coding is the art of stitching together their corpses into Frankenstein’s monster.
An annotated exercise
As an experiment, I copied the thesis statement of this essay into Google Gemini and prompted it to complete the essay.
The complete prompt and output appear below.
Prompt:
The following is an introduction to a short essay which I have written. Complete the essay by writing 2 short paragraphs.
I write this article as a plea to everyone: not just my students, but the blog posters and Reddit commenters and weak-accept paper authors and Reviewer 2.
Don’t let a computer write for you!
I say this not for reasons of intellectual honesty, or for the spirit of fairness.
I say this because I believe that your original thoughts are far more interesting, meaningful, and valuable than whatever a large language model can transform them into.
For the rest of this piece, I’ll briefly examine some guesses as to why people write with large language models so often, and argue that there’s no good reason to use one for creative expression.
Output:
It’s fascinating to consider why so many are turning to large language models for writing. Perhaps it stems from a desire for efficiency, a wish to quickly generate text without the perceived effort of crafting each sentence. The allure of instant output can be strong, especially in a world that often prioritizes speed. For students facing deadlines, or blog posters needing content, the temptation to shortcut the writing process might feel overwhelming. There could also be a sense of insecurity, a feeling that one’s own writing isn’t “good enough” and that a language model can somehow elevate it. This might be particularly true for those navigating the often critical world of academic reviews or the sometimes harsh landscape of online commentary.
However, this reliance on artificial intelligence fundamentally misunderstands the very essence of creative expression. Your individual voice, shaped by your unique experiences, perspectives, and even your struggles to articulate an idea, is what makes your writing resonate. A language model, trained on vast datasets of existing text, can only ever produce a pastiche, a reflection of what has already been said. It lacks the spark of genuine insight, the unexpected turn of phrase that comes from wrestling with a concept until it becomes your own. To surrender your voice to an algorithm is to diminish the potential for true connection and the profound satisfaction of communicating your own original thoughts to the world.
The resulting text is, put plainly, boring.
In its 234-word output, the model produces merely a lengthy rehash of the same prompt.
Tonally, it provides no more than sweeping and pointless generality, and any points earned for its Brobdingnagian vocabulary are immediately lost to its total lack of taste in using it.
For example, consider the following sentence:
Perhaps it stems from a desire for efficiency, a wish to quickly generate text without the perceived effort of crafting each sentence.
Gemini has produced a big sentence for a small thought.
I can trivially cut two-thirds of it and remove nothing of substance:
Perhaps it stems from a desire for efficiency.
With some care, I can trim it a little more.
Perhaps people do it for efficiency.
So, in short, a language model is great for making nonsense, and not so great for anything else.
Just show me the prompt
I now circle back to my main point: I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt.
The resulting output has less substance than the prompt and lacks any human vision in its creation.
The whole point of making creative work is to share one’s own experience - if there’s no experience to share, why bother?
If it’s not worth writing, it’s not worth reading.
Low-quality image placeholders (LQIPs) have a solid place in web page design. There are many different solutions, but the main gotcha is that generating them tends to lean on JavaScript, require lengthy chunks of not-particularly-human-readable code, or involve other tradeoffs. [Lean] came up with an elegant, minimal solution in pure CSS to create LQIPs.
Here’s how it works: all required data is packed into a single CSS integer, which is decoded directly in CSS (no JavaScript needed) to dynamically generate an image that renders immediately. Another benefit is that, with no need for wrappers or long strings of data, this method avoids cluttering the HTML. The code is little more than a line like <img src="…" style="--lqip:567213">, which is certainly tidy, as well as a welcome boon to those who hand-edit files.
The trick with generating LQIPs from scratch is getting an output that isn’t hard on the eyes or otherwise jarring in its composition. [Lean] experimented until settling on an encoding method that reliably delivered smooth color gradients and balance.
This method therefore turns a single integer into a perfectly-serviceable LQIP, using only CSS. There’s even a separate tool [Lean] created to compress any given image into the integer format used (so the result will look like a blurred version of the original image). It’s true that the results look very blurred but the code is clean, minimal, and the technique is easily implemented. You can see it in action in [Lean]’s interactive LQIP gallery.
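To get a feel for the packing side, here's a toy encoder in Python (assuming the Pillow library). The bit layout is invented for illustration, 6 bits of base luma plus four 4-bit cells; [Lean]'s real encoding stores color components and more cells, and differs in its details.

```python
# Toy LQIP encoder: squeeze a grayscale 2x2 sketch of an image into one
# integer. Illustrative only; this is NOT [Lean]'s actual bit layout.
from PIL import Image

def encode_lqip(path: str) -> int:
    im = Image.open(path).convert("L").resize((2, 2))  # tiny luma grid
    pixels = list(im.getdata())
    packed = (sum(pixels) // 4) >> 2                   # 6 bits: average luma
    for px in pixels:                                  # 4 cells x 4 bits each
        packed = (packed << 4) | (px >> 4)
    return packed  # ends up in <img style="--lqip:...">

print(encode_lqip("photo.jpg"))  # hypothetical input file
```

The decoding half, in the real technique, happens entirely in CSS, where math functions like mod() can unpack bit fields back into gradient stops.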
CSS has a lot of capability baked into it, and it’s capable of much more than just styling and lining up elements. How about trigonometric functions in CSS? Or from the other direction, check out implementing a CSS (and HTML) renderer on an ESP32.
In recent months, SentinelOne has observed and defended against a spectrum of attacks from financially motivated crimeware to tailored campaigns by advanced nation-state actors.
These incidents were real intrusion attempts against a U.S.-based cybersecurity company by adversaries, but incidents such as these are neither new nor unique to SentinelOne.
Recent adversaries have included:
DPRK IT workers posing as job applicants
ransomware operators probing for ways to access/abuse our platform
Chinese state-sponsored actors targeting organizations aligned with our business and customer base
This report highlights a rarely-discussed but crucially important attack surface: security vendors themselves.
Overview
At SentinelOne, defending against real-world threats isn’t just part of the job, it’s the reality of operating as a cybersecurity company in today’s landscape. We don’t just study attacks, we experience them firsthand, levied against us. Our teams face the same threats we help others prepare for, and that proximity to the front lines shapes how we think, and how we operate. Real-world attacks against our own environment serve as constant pressure tests, reinforcing what works, revealing what doesn’t, and driving continuous improvement across our products and operations. When you’re a high-value target for some of the most capable and persistent adversaries out there, nothing less will do.
Talking about being targeted is uncomfortable for any organization. For cybersecurity vendors, it’s practically taboo. But the truth is that security vendors sit at an interesting intersection of access, responsibility, and attacker ire that makes us prime targets for a variety of threat actors, and the stakes couldn’t be higher. When adversaries compromise a security company, they don’t just breach a single environment—they potentially gain insight into how thousands of environments and millions of endpoints are protected.
In the past several months alone, we’ve observed and defended against a spectrum of attacks ranging from financially motivated crimeware to tailored campaigns by advanced nation-state actors. They were real intrusion attempts targeting a U.S.-based cybersecurity company — launched by adversaries actively looking for an advantage, access, or leverage. Adversaries included DPRK IT workers posing as job applicants, ransomware operators probing for ways to access/abuse our platform, and Chinese state-sponsored actors targeting organizations aligned with our business and customer base.
We are certainly not the only ones facing these threats. In the spirit of furthering collective defenses and encouraging further collaboration, we’re pulling back the curtain to share some of what we’ve seen, why it matters, and what it tells us about the evolving threat landscape—not just for us, but for every company building and relying on modern security technology.
DPRK IT Workers Seeking Inside Jobs
One of the more prolific and persistent adversary campaigns we’ve tracked in recent years involves widespread attempts by DPRK-affiliated IT workers to secure remote employment within Western tech companies, including SentinelOne. Early reports drew attention to these efforts, and our own analysis revealed further logistical infrastructure to launder illicit funds via Chinese intermediary organizations. However, neither gave a sense of the staggering volume of ongoing infiltration attempts. This vector far outpaces any other insider threat vector we monitor.
These actors are not just applying blindly — they are refining their process, leveraging stolen or fabricated personas, and adapting their outreach tactics to mirror legitimate job seekers in increasingly convincing ways. Our team has tracked roughly 360 fake personas and over 1,000 job applications linked to DPRK IT worker operations applying for roles at SentinelOne — even including brazen attempts to secure positions on the SentinelLabs intelligence engineering team itself.
Public reporting of DPRK IT workers applying to threat intelligence positions
Engagement and Adversary Interaction
Instead of staying passive, we made a deliberate choice towards intelligence-driven engagement. In coordination with our talent acquisition teams, we developed workflows to identify and interact with suspected DPRK applicants during the early phases of their outreach. This collaboration was key. By embedding lightweight vetting signals and monitoring directly into recruiting processes—without overburdening hiring teams—we were able to surface anomalous patterns tied to DPRK-affiliated personas and pipe them directly into our Vertex Synapse intelligence platform for analyst review.
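As a rough illustration of what "lightweight vetting signals" can look like, consider the sketch below. The field names and heuristics are hypothetical, generic signals of the kind discussed in public reporting, not SentinelOne's actual vetting logic.

```python
# Hypothetical applicant-vetting signals; not SentinelOne's real logic.
# Non-empty output would be enriched and queued for analyst review.
REUSED_RESUME_HASHES: set = set()  # hashes of resume text seen on prior fake personas

def applicant_signals(app: dict) -> list:
    signals = []
    if app.get("phone_is_voip"):
        signals.append("VoIP-only phone number")
    if app.get("resume_hash") in REUSED_RESUME_HASHES:
        signals.append("resume text reused across personas")
    if app.get("claimed_location") != app.get("ip_geolocation"):
        signals.append("claimed location mismatches network location")
    return signals

print(applicant_signals({
    "phone_is_voip": True,
    "resume_hash": "abc123",
    "claimed_location": "Austin, TX",
    "ip_geolocation": "somewhere else entirely",
}))
```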
Our attempted interactions offered rare insights into the craftiness and persistence of these infiltration campaigns — particularly the ways in which adversaries adapt to the friction they encounter.
Inbound DPRK referral request to strategic employees
The attackers are honing their craft beyond the job application and recruitment process. An operation of this scale and nature requires a different kind of backend infrastructure, such as a sprawling network of front companies to enable further laundering and logistics.
DPRK IT Worker Front Company Network (November 2024)
Helping Hiring Teams Help Us
A key takeaway from this investigation was the value of intentionally creating inroads and sharing threat context with teams not normally keyed into investigations. Rather than cluelessness, we encountered an intuitive understanding of the situation: recruiters had already been filtering out and reporting ‘fake applicants’ within their own processes.
We brought campaign-level understanding that was combined with tactical insights from our talent team. The payoff was immediate. Recruiters began spotting patterns on their own, driving an increase in early-stage escalation of suspicious profiles. They became an active partner that continues to flag new sightings from the frontlines. In turn, we are codifying these insights into automated systems that flag, filter, enrich, and proactively block these campaigns to lower the burden on our recruiters and hiring managers, and reduce the risk of infiltration.
Make cross‑functional collaboration standard operating procedure: equip frontline business units—from recruiting to sales—with shared threat context and clear escalation paths so they can surface anomalies early without slowing the business. Codifying insights with automation will consistently bring bi-directional benefits.
The DPRK IT worker threat is a uniquely complex challenge — one where meaningful progress depends on collaboration between the security research community and public sector partners.
Ransomware Group Capability Development
Financially motivated threat actors frequently target enterprise security platforms—products designed to keep them from making money—for direct access. SentinelOne, like our peers, is no exception. While uncomfortable, this is a reality the industry faces continually and should handle with both transparency and urgency.
Forum post offering security product access
Privileged access to administrative interfaces or agent installers for endpoint security products provides tangible advantages for adversaries seeking to advance their operations. Console access can be used to disable protections, manipulate configurations, or suppress detections. Direct, unmonitored access to the endpoint agent offers opportunities to test malware efficacy, explore bypass or tampering techniques, and suppress forensic visibility critical for investigations. In the wrong hands, these capabilities represent a significant threat to both the integrity of security products and the environments they protect.
This isn’t a new tactic. Various high-profile criminal groups have long specialized in social engineering campaigns to gain access to core security tools and infrastructure—ranging from EDR platforms (including SentinelOne and Microsoft Defender) to IAM and VPN providers such as Okta. Their goal: expand footholds, disable defenses, and obstruct detection long enough to profit.
Recent leaks related to Black Basta further underscore this trend. The group’s operators were observed testing across multiple endpoint security platforms—including SentinelOne, CrowdStrike, Carbon Black, and Palo Alto Networks—before launching attacks, suggesting a systematic effort to evaluate and evade security tools prior to deployment.
Black Basta leak excerpts
Economy/Ecosystem
There is an increasingly mature and active underground economy built around the buying, selling, and renting of access to enterprise security tools. For the right price, aspiring threat actors continually attempt to obtain time-bound or persistent access to our EDR platform and administrative consoles. Well-known cybercrime forums are filled with vendors openly advertising such access—and just as many buyers actively seeking it. This includes long-established forums like XSS[.]is, Exploit[.]in and RAMP.
That said, more of this activity has been moving to confidential messaging platforms as well (Telegram, Discord, Signal). For example, Telegram bots are used to automate the trading of this access, and Signal is often used by threat actors to discuss nuance, targeting, and initial access operations.
This supply-and-demand dynamic is not only robust but also accelerating. Entire service offerings have emerged around this ecosystem, including “EDR Testing-as-a-Service,” where actors can discreetly evaluate malware against various endpoint protection platforms.
Proposed Private EDR testing service
While these testing services may not grant direct access to full-featured EDR consoles or agents, they do provide attackers with semi-private environments to fine-tune malicious payloads without the threat of exposure—dramatically improving the odds of success in real-world attacks.
Prospective buyer for EDR installs
Access isn’t always bought, however. Threat actors frequently harvest legitimate credentials from infostealer logs—a common and low-cost method of acquiring privileged access to enterprise environments. In cases where existing customers reuse credentials, this can translate into a threat actor also gaining access to security tools. In more targeted operations, actors have also turned to bribery, offering significant sums to employees willing to sell out their account access.
These insider threats are not hypothetical. For instance, some groups have been observed offering upwards of $20,000 to employees at targeted companies in exchange for insider assistance—an approach openly discussed in the same dark web forums where compromised credentials and access are routinely traded.
On the defensive side, this requires constant monitoring and maintenance. Situational awareness has to be prioritized in order to maintain platform integrity and protect our legitimate customers. Our research teams constantly monitor for this style of abuse and access ‘leakage’, focusing on anomalous console access and site-token usage, and take the necessary actions to revoke these access vectors. This prevents threat actors from fully interacting with the wider platform and essentially orphans leaked agent installs, limiting the agent’s usefulness in the hands of the threat actor.
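For a flavor of what that monitoring can look like, here is a deliberately simplified sketch. Event fields, history shape, and thresholds are all hypothetical; real telemetry pipelines are far more involved.

```python
# Hypothetical console-access anomaly checks; any flag would trigger
# review and, if confirmed, revocation of the associated tokens.
def flag_console_login(event: dict, history: dict) -> list:
    flags = []
    if event.get("asn") not in history.get("known_asns", set()):
        flags.append("login from previously unseen network")
    if event.get("country") not in history.get("known_countries", set()):
        flags.append("login from new country")
    if event.get("site_token_tenants", 1) > 1:
        flags.append("site token observed across multiple tenants")
    return flags

print(flag_console_login(
    {"asn": "AS0000", "country": "XX", "site_token_tenants": 3},
    {"known_asns": {"AS1111"}, "known_countries": {"US"}},
))
```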
Nitrogen — Threat Operators ‘Leveling Up’
Some ransomware operations are now bypassing the underground market altogether—opting instead for more tailored, concentrated-effort impersonation campaigns to gain access to security tools. This approach is epitomized by the Nitrogen ransomware group.
Nitrogen is believed to be operated by a well-funded Russian national with ties to earlier groups like Maze and Snatch. Rather than purchasing illicit access, Nitrogen impersonates real companies—spinning up lookalike domains, spoofed email addresses, and cloned infrastructure to convincingly pose as legitimate businesses. Nitrogen then purchases official licenses for EDR and other security products under these false pretenses.
This kind of social engineering is executed with precision. Nitrogen typically targets small, lightly vetted resellers—keeping interactions minimal and relying on resellers’ inconsistent KYC (Know Your Customer) practices to slip through the cracks.
These impersonation tactics introduce a new layer of complexity for defenders. If a threat actor successfully acquires legitimate licenses from a real vendor, they can weaponize the product to test, evade, and potentially disable protections—without ever having to engage with criminal markets.
This highlights a growing challenge for the security industry: reseller diligence and KYC enforcement are clearly part of the threat surface. When those controls are weak or absent, adversaries like Nitrogen gain powerful new ways to elevate their campaigns—often at a lower cost and lower risk than the black market.
Lessons Learned and Internal Collaboration
One of the most impactful lessons from tracking adversaries targeting our platform has been the value of deep, early collaboration across internal teams — particularly those not traditionally pulled into threat response efforts. For example, by proactively engaging with our reseller operations and customer success teams, we can surface valuable signals on questionable license requests, reseller behavior anomalies, and business inconsistencies that could have otherwise gone unnoticed.
By creating shared playbooks, embedding lightweight threat context, and establishing clear escalation paths, reactive processes turn into proactive signal sources. Now, suspicious licensing activity—especially when paired with evasive behaviors or mismatched domain metadata—can surface much earlier in the workflow.
To scale this effort, we increasingly lean into automation. By codifying threat patterns—such as domain registration heuristics, behavioral metadata mismatches, and reseller inconsistencies—organizations can automate enrichment and risk-scoring for incoming licensing requests. This can then be used to dynamically filter, flag, and in some cases, auto-block high-risk activity before it reaches onboarding.
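A minimal sketch of that kind of risk scoring follows. The fields, weights, and threshold are hypothetical, meant only to show how domain and reseller signals can combine into an automatable decision.

```python
# Hypothetical license-request risk scoring; not actual detection logic.
from dataclasses import dataclass
from datetime import date

@dataclass
class LicenseRequest:
    company_domain: str
    contact_email_domain: str
    domain_registered: date
    reseller_kyc_verified: bool

def risk_score(req: LicenseRequest, today: date) -> int:
    score = 0
    if (today - req.domain_registered).days < 90:
        score += 40   # freshly registered lookalike domains are a classic tell
    if req.contact_email_domain != req.company_domain:
        score += 30   # metadata mismatch suggests a spoofed identity
    if not req.reseller_kyc_verified:
        score += 30   # weak reseller KYC is part of the threat surface
    return score      # e.g., >= 70 could auto-hold for analyst review

req = LicenseRequest("acme-corp.net", "acme-corp.com", date(2025, 3, 1), False)
print(risk_score(req, date(2025, 4, 1)))  # 100: hold before onboarding
```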
The growing trend of adversaries exploiting sales processes—whether through impersonation, social engineering, or brute-force credential use—means security vendors must treat every access vector, including commercial and operational pipelines, as part of the attack surface. Making cross-functional threat awareness standard operating procedure and integrating detection logic at the edge of business systems is essential.
We’re continuing to improve this work in quiet ways. And while we won’t share all of our detection logic here (for obvious reasons), we encourage others in the industry to pursue similar internal partnerships. Sales and support teams may already be seeing signs of abuse; security teams just need to give them the lens to recognize it.
Chinese State-Sponsored Adversaries
One notable set of activity, occurring over the previous months, involved reconnaissance attempts against SentinelOne’s infrastructure and specific high-value organizations we defend. We first became aware of this threat cluster during a 2024 intrusion conducted against an organization previously providing hardware logistics services for SentinelOne employees. We refer to this cluster of activity as PurpleHaze; it has technical overlaps with multiple publicly reported Chinese APTs.
The PurpleHaze Activity Cluster
Over the course of months, SentinelLABS observed the threat actor conduct many intrusions, including one into a South Asian government-supporting entity that provides IT solutions and infrastructure across multiple sectors. This activity involved extensive infrastructure, some of which we associate with an operational relay box (ORB) network, and a Windows backdoor that we track as GoReShell. The backdoor is implemented in the Go programming language and uses functionality from the open-source reverse_ssh tool to establish reverse SSH connections to attacker-controlled endpoints.
SentinelLABS collectively tracks these activities under the PurpleHaze moniker. We assess with high confidence that PurpleHaze is a China-nexus actor, loosely linked to APT15 (also known as Nylon Typhoon, among various outdated aliases). This adversary is known for its global targeting of critical infrastructure sectors, such as telecommunications, information technology, and government organizations – victimology that aligns with our multiple encounters with PurpleHaze.
We track the ORB network infrastructure observed in the attack against the South Asian government organization as being operated from China and actively used by several suspected Chinese cyberespionage actors, including APT15. The use of ORB networks is a growing trend among these threat groups, since they can be rapidly expanded to create a dynamic and evolving infrastructure that makes tracking cyberespionage operations and their attribution challenging. Additionally, GoReShell malware and its variations, including the deployment mechanism used on compromised machines and its obfuscation techniques, have been observed exclusively in intrusions that we attribute with high confidence to China-nexus actors.
ShadowPad Intrusions
In June 2024, approximately four months prior to PurpleHaze targeting SentinelOne, SentinelLABS observed threat actor activity targeting the same South Asian government entity that was also targeted in October 2024. Among the retrieved artifacts, we identified samples of ShadowPad, a modular backdoor platform used by multiple suspected China-nexus threat actors to conduct cyberespionage. Recent ShadowPad activity has also included the deployment of ransomware, though the motive remains unclear — whether for financial gain or as a means of distraction, misattribution, or removal of evidence.
The ShadowPad samples we retrieved were obfuscated using ScatterBrain, an evolution of the ScatterBee obfuscation mechanism. Our industry partner, the Google Threat Intelligence Group (GTIG), has also observed ScatterBrain-obfuscated ShadowPad samples since 2022 and attributes them to clusters associated with the suspected Chinese APT actor APT41; see GTIG’s public reporting on APT41’s use of ScatterBrain.
Investigations continue into the specific actor overlap between the June 2024 ShadowPad intrusions and the later PurpleHaze activity. We do not rule out the involvement of the same threat cluster, particularly given the extensive sharing of malware, infrastructure, and operational practices among Chinese threat groups, as well as the possibility of access transfer between different actors.
Based on private telemetry, we identified a large set of victim organizations compromised using ScatterBrain-obfuscated ShadowPad. Between July 2024 and March 2025, this malware was used in intrusions at over 70 organizations globally, spanning sectors such as manufacturing, government, finance, telecommunications, and research. We assess that the threat actor gained an initial foothold in the majority of these organizations by exploiting an n-day vulnerability in Check Point gateway devices, which aligns with previous research on ShadowPad intrusions involving the deployment of ransomware.
Among the victims, we identified the previously mentioned IT services and logistics organization that was, at the time, responsible for managing hardware logistics for SentinelOne employees. Victim organizations were promptly notified, and the intrusions were swiftly investigated. It remains unclear whether the perpetrators’ focus was solely on the compromised organization itself or whether they intended to extend their reach to its client organizations as well.
A detailed investigation into SentinelOne’s infrastructure, software, and hardware assets found no evidence of secondary compromise. Nevertheless, this case underscores the fragility of the larger supplier ecosystem that organizations depend upon and the persistent threat posed by suspected Chinese threat actors, who continuously seek to establish strategic footholds to potentially compromise downstream entities.
SentinelLABS will share a detailed public release on this topic in due course, providing further technical information on these activities, including observed TTPs, malware, and infrastructure.
Lessons Learned While Hardening Our Operational Ecosystem
Our analysis of the PurpleHaze cluster, and more specifically of the indirect risk introduced via compromised third-party service providers, has reinforced several key lessons about operational security and supply chain monitoring. Even though our own infrastructure remained untouched, the targeting of an external service provider previously involved in our business logistics surfaced important considerations.
One immediate reminder is the necessity of maintaining real-time awareness not only of internal assets but also of adjacent service providers, particularly those with past or current access to sensitive employee devices or logistical information. When incidents occur near your supply chain, don’t wait for confirmation of compromise: proactively trigger internal reviews of asset inventories, procurement workflows, OS images and onboarding deployment scripts, and segmentation policies to quickly identify exposure pathways and reduce downstream risk.
This leads to several defense recommendations:
Distribute Threat Intelligence Across Operational Stakeholders: Organizations should proactively share campaign-level threat intelligence with business units beyond the traditional security organization, particularly those managing vendor relationships, logistics, and physical operations. Doing so enables faster detection of overlap with compromised third parties and supports early reassessment of exposure through external partners.
Integrate Threat Context Into Asset Attribution Workflows: Infrastructure and IT teams should collaborate with threat intelligence functions to embed threat-aware metadata into asset inventories. This enables more responsive scoping during incident response and enhances the ability to trace supply chain touchpoints that may be at risk (a minimal sketch of the idea follows this list).
Expand Supply Chain Threat Modeling: Organizations should refine their threat modeling processes to explicitly account for upstream supply chain threats, especially those posed by nation-state actors with a history of leveraging contractors, vendors, or logistics partners as indirect access vectors. Tailoring models to include adversary-specific tradecraft enables earlier identification of unconventional intrusion pathways.
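To illustrate the second recommendation, here is a minimal Go sketch of threat-aware asset metadata: a hypothetical inventory record tagged with its supplier and intel-derived labels, plus a scoping helper an incident responder could run when new reporting names a supplier. The schema and field names are invented for illustration; real deployments would extend an existing CMDB rather than a toy struct.

```go
package main

import "fmt"

// Asset is a hypothetical inventory record enriched with threat-aware
// metadata; the fields are illustrative, not a real CMDB schema.
type Asset struct {
	Hostname   string
	Owner      string
	Supplier   string   // vendor or logistics partner that touched this asset
	ThreatTags []string // intel-derived labels, e.g. "supplier-incident:2024-06"
}

// scopeBySupplier returns the assets touched by a supplier named in new
// threat reporting, so IR can scope exposure in minutes rather than days.
func scopeBySupplier(inventory []Asset, supplier string) []Asset {
	var hits []Asset
	for _, a := range inventory {
		if a.Supplier == supplier {
			hits = append(hits, a)
		}
	}
	return hits
}

func main() {
	inv := []Asset{
		{Hostname: "lt-0042", Owner: "jdoe", Supplier: "AcmeLogistics", ThreatTags: []string{"supplier-incident:2024-06"}},
		{Hostname: "lt-0099", Owner: "asmith", Supplier: "OtherVendor"},
	}
	for _, a := range scopeBySupplier(inv, "AcmeLogistics") {
		fmt.Printf("review %s (owner %s, tags %v)\n", a.Hostname, a.Owner, a.ThreatTags)
	}
}
```

The payoff is speed: when a logistics partner appears in threat reporting, the affected hosts fall out of a single query instead of a multi-day inventory hunt.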
While attribution continues to evolve and victim impact remains diverse, one thing is clear: well-resourced threat actors are increasingly leaning on indirect routes into enterprise environments. Investigations like this help us sharpen our defenses—not just around traditional digital perimeters but around the full operational footprint of our organization.
The Strategic Value of Cyber Threat Intelligence
Threat intelligence has evolved from a niche function into an essential pillar of enterprise defense, particularly for private sector organizations operating in the security space. As threat actors increasingly target security vendors for insider access, abuse of legitimate channels, and supply chain infiltration, the role of CTI in anticipating and disrupting these tactics has become more critical than ever.
One of the most tangible examples of this value is in internal talent acquisition and insider threat defense. Intelligence has become a frontline asset in identifying attempts by North Korean IT workers and other state-backed operatives to embed themselves in organizations under false pretenses. By flagging suspicious applicant patterns, cross-referencing alias histories, and tracking known tradecraft, CTI teams help hiring managers and HR avoid potential insider incidents before they start.
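As a sketch of what that cross-referencing can look like, the hypothetical Go snippet below checks identifiers from a job application against a corpus of aliases previously tied to fraudulent-worker campaigns. The corpus, identifiers, and cluster names are all invented for illustration; a real CTI team would source them from tracked campaigns.

```go
package main

import "fmt"

// aliasHits cross-references identifiers from a job application (emails,
// phone numbers, repo handles) against a corpus of aliases previously tied
// to fraudulent-worker tradecraft, mapping each hit to its tracked cluster.
func aliasHits(applicant []string, knownAliases map[string]string) map[string]string {
	hits := map[string]string{}
	for _, id := range applicant {
		if cluster, ok := knownAliases[id]; ok {
			hits[id] = cluster
		}
	}
	return hits
}

func main() {
	// Invented corpus entry and applicant identifiers, for illustration only.
	corpus := map[string]string{"dev.kim.1988@example.com": "ITW-cluster-A"}
	app := []string{"dev.kim.1988@example.com", "+1-555-0100"}
	if h := aliasHits(app, corpus); len(h) > 0 {
		fmt.Println("escalate to insider-threat review:", h)
	}
}
```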
Our CTI capabilities must also directly support sales and channel operations. As criminal groups increasingly impersonate legitimate businesses to acquire security products through trusted resellers, intelligence plays a key role in verifying customer legitimacy and identifying anomalous purchase behaviors. By integrating intelligence insights into pre-sale vetting workflows, we add a crucial layer of protection that helps ensure adversaries cannot simply “buy” their way into our technology stack.
Internally, threat intelligence informs and enhances how we defend our own technology and supply chain against highly targeted APT activity. From understanding how adversaries reverse-engineer our software to uncovering which parts of our technology stack they seek to compromise, CTI enables proactive hardening, smarter telemetry prioritization, and meaningful collaboration with product and engineering teams. In essence, intelligence acts as an early-warning system and a strategic guide—ensuring our defenses stay one step ahead of evolving threats.
Across every team, whether HR, Sales, Engineering, or Security, cyber threat intelligence is no longer a backroom function. It’s embedded in the fabric of how we defend, operate, and grow as a business.