This was an incredibly clarifying read—thank you. Your framing of the "incipient nihilism of low expectations" really stuck with me. It feels like we’ve not only hollowed out the moral substance of our public roles but have come to expect and even design for that hollowness. And when the system assumes bad faith, good faith becomes almost subversive.
The Jay Weatherill quote captured it perfectly—each group caricaturing the other, until mutual respect is eroded and aspiration feels naive. I especially appreciated your contrast between external goods (metrics, image, outcomes) and internal goods (character, judgment, practice). That insight alone reframes what it means to “do good work” in any profession.
Grateful for this piece. It’s the kind of reflection that reorients.
I was shared this article – it was interesting at its core, but I was quite disappointed to hear you could not write it yourself without ChatGPT, and I'm not a fan of the bland, generic, likely-AI art.
There are artists who can draw well in the style of Ghibli, much more so than the generic, lifeless, bland style of ChatGPT.
And on writing, and how AI turns people into write-nots, Paul Graham put it well:
"Almost all pressure to write has dissipated. You can have AI do it for you, both in school and at work.
"The result will be a world divided into writes and write-nots. There will still be some people who can write. Some of us like it. But the middle ground between those who are good at writing and those who can't write at all will disappear. Instead of good writers, ok writers, and people who can't write, there will just be good writers and people who can't write.
"Is that so bad? Isn't it common for skills to disappear when technology makes them obsolete? There aren't many blacksmiths left, and it doesn't seem to be a problem.
"Yes, it's bad. The reason is something I mentioned earlier: writing is thinking. In fact there's a kind of thinking that can only be done by writing. You can't make this point better than Leslie Lamport did:
"'If you're thinking without writing, you only think you're thinking.'
"So a world divided into writes and write-nots is more dangerous than it sounds. It will be a world of thinks and think-nots. I know which half I want to be in, and I bet you do too.
"This situation is not unprecedented. In preindustrial times most people's jobs made them strong. Now if you want to be strong, you work out. So there are still strong people, but only those who choose to be.
"It will be the same with writing. There will still be smart people, but only those who choose to be."
Thanks, Trúc,
I'm a big admirer of Paul Graham, so thanks for drawing his comments to my attention. Even if I agreed with his observations, they're painted on a very big canvas, prognosticating about how things will look when all this has worked itself through. I doubt they're intended to suggest that it's a bad thing to work with AI to find out how it can help one's writing.
In next week's substack I'm planning to extract this piece by Venkatesh Rao, which is no less serious but, I think, more gregarious, more expansive.
"[G]etting a brilliant 2000-word essay out of a single prompt is of course as rare as a traditional first-draft being publishable with no edits. Usually I do have to do some iterative work with the LLM to get it there, and that’s the part that’s creative fun. In general, an AI-assisted essay requires about the same amount of high-level thinking effort for me as an unassisted essay, but gets done about 3x-5x faster, since the writing part can be mostly automated. It generally takes me an hour or two to produce an assisted essay. An unassisted one of similar length is usually 7-10 hours.
But I’m not really interested in production efficiency. That’s a side effect that has the (possibly unfortunate for some of you information-overload-anxious types) effect of increasing production volume. I’m interested primarily in enjoying the writing process more, and in different ways.
I’ve been doing unassisted writing for a couple of decades now, and this new process feels like a jolt of creative freshness in my writing life. It’s made writing playful and fun again.
I’ve also really enjoyed having AI assisted comprehension in tackling difficult books, such as last month’s book-club pick, Frances Yates’ Giordano Bruno and the Hermetic Tradition. I don’t know that I’d have been able to tackle it unassisted.
On the writing side, when I have a productive prompting session, not only does the output feel information dense for the audience, it feels information dense for me.
An example of this kind of essay is one I posted last week, on a memory-access-boundary understanding of what intelligence is. This was an essay I generated that I got value out of reading. And it didn’t feel like a simple case of “thinking through writing.” There’s stuff in here contributed by ChatGPT that I didn’t know or realize even subconsciously, even though I’ve been consulting for 13 years in the semiconductor industry."
https://contraptions.venkateshrao.com/p/terms-of-centaur-service
Interesting to hear. I guess we have very different preferences.
I really enjoy the writing process, whether it takes 2-10 hours to labour over an article on my Ghost political blog, and I enjoy parsing difficult sentences in both English political and creative literature, as well as when learning other languages that make heavy use of conjugations and declensions (like Latin and Russian).
I really do not enjoy the process of working with ChatGPT. I like constructing every sentence to pack in all the relevant information I want to get across, and I like manually choosing the most appropriate word for each use (considering both its connotations and which degree is most incisive for the point I want to make). What you describe as your favourite parts of working with ChatGPT encompasses everything that I find incredibly dull and lifeless about it.
I find it quite easy to make my sentences information-dense without requiring ChatGPT when writing about politics. (When writing, I often write sentences that cannot fit into a single tweet.)
I have been quite disappointed when trialling LLM-generated summaries of things such as news articles and opinion pieces – they destroy the writer's nuance and regurgitate a bland version whose points lose all of the writer's character and punch.
Do you find it comfortable to read an author such as the linguist Noam Chomsky, or is that also difficult for you without LLM assistance? For me it's much, much faster to process complex and dense information myself while reading than to consult an LLM along the way, which would take much longer.
On the subject of drawing, a few months ago, I submitted a political cartoon "The Australian Dream" to independent dissident Australian news site Pearls & Irritations which got published. I am primarily a writer but I practised drawing enough to be able to make that cartoon myself – without needing any assistance from AI.
https://www.lethanhtruc.com/cartoon-the-australian-dream/
I doubt, for example, that someone who is only used to generating visual depictions with prompts in ChatGPT would have considered working the various visual jokes into that image across different levels. Those visual ideas came from my absorbing a lot of political cartoons by the Australian artist George Burchett (son of pioneering Australian journalist Wilfred Burchett) and understanding the different ways a relationship or idea can be depicted visually. In that image, I chose a map as a visual metaphor and depicted Australians' wish for proximity to the rest of the Anglosphere as geographical proximity.
My point is that it's one thing to use an LLM for drawing (or for writing), but using it doesn't actually build up your knowledge of how to wield composition, visual metaphors and more to get your point across, the way an artist who has practised drawing knows how to use them really well.
For me, writing is not about the production volume or efficiency. It's getting a point across that only I can articulate from my own observations, life experiences and learnings. It would feel weird for me to get another person or a bot to do it for me.
The most heartfelt writing and the most concise, punchy, well-articulated points I have seen still consistently come only from people who fully say and write things themselves (e.g. Glenn Greenwald, Uhuru & The Burning Spear and more). THAT's the kind of thing I'm looking for – people with something so important and quintessential to share and express, and who so sincerely believe in what they say, that they have fully learnt the skills to keep trying to express their viewpoints personally, the best that they possibly can. They, too, would find it weird to get another person or an algorithm to write it for them.
Glenn Greenwald is so skilled at it that he can consistently deliver incredibly informative, concise, funny and punchy takes on politics day in and day out – he has never needed AI to do it. Greenwald says he never gets bored of his job, having worked in journalism for decades.
The fact that you're admitting here that you've grown bored of writing strengthens my skepticism even more than your use of AI itself does – it says to me that you've run out of personally important things to say.
If you had things that you loved thinking about and still believed were important to write about, you most likely would not have gotten bored of writing about them. That's the core of what makes me uninterested in subscribing. It suggests that you have run out of having something that you fully personally believe is important to get across to the world.
If you don't have anything that's vitally important to you to write about, why would it be important to me? If you don't believe in it, at most you could try to seem clever and impressive to other people by keeping up a consistent writing volume, which does not interest me. I'm not looking for people who write something every week for the sake of dishing something up to seem interesting. I'm looking for people with heartfelt things to say – whether intermittent writers (people who write every few months) or people who so believe in what they're saying and doing that they do it all the time. I felt it necessary to express this because I do subscribe to people who write rarely but with meaning, over people who pump out a lot of material that they aren't deeply affected by.
The reason I expressed my disappointment is that I was about to subscribe to your newsletter. But upon finding out this article was not fully written by you, I hesitated. If there are more articles that you write fully yourself, I will be much more encouraged to subscribe.
Thanks Trúc. Why not see what you find? I think you're being puritanical. We all need to find out how AI can help us, and the terms on which it can and cannot. That's a terrible thing to think one knows in advance. As in all things, we need to keep an eye on whether those we tentatively trust are proving themselves worthy of that trust. I don't think there are any formulas.
And I'd be sorry to see a fellow cartoonist go ;)
http://clubtroppo.com.au/2010/09/30/the-life-you-could-be-leading-the-threats-and-extraordinary-possibilities-of-web-2-0/
But of course it's up to you :)
Wowzas.
This is close to perfect for me right now :)
This is seriously good, NG. I want to (and will) distribute it far and wide.
Thanks Trúc,
You've persuaded me you shouldn't subscribe.
Just on your arguments: you seem to think that if I quote someone, I am adopting their opinion as a kind of credo. Either that or you didn't notice the quote marks. When I quote Venkatesh Rao saying "I’m interested primarily in enjoying the writing process more, and in different ways," it's a stretch to say he's bored with his standard routine – it's a kind of 'verballing' of him. And anyway, that's verballing HIM. It's a further remove again to verbal me as 'bored'.
I think you misunderstand what he's saying about how AI can help you read. He's talking about a decades-old history book. You can read as wonderfully as you like, as joyfully as you like. If you're wondering whether some claim made on p. 47 is still current scholarship, your reading won't help you. AI will. If you're wondering where Giordano Bruno's mother's family came from, reading the original document might not help. AI might.
Anyway, here we are at the dawn of a new epoch and you've got it all worked out, so I'd say you're set. Onward and upward I say.
Great post. One thing, though: I would not be too hard on the "Enlightenment" these days. "MacIntyre’s central thesis is that the Enlightenment severed ethics from tradition and teleology, leaving a fragmented moral landscape." I haven't read MacIntyre's book, but I would not see Immanuel Kant and a "fragmented moral landscape" as the same thing.
If you don't get what he's saying, I guess you'd need to read what he has to say then.
You're right — I should probably read the book. It seems very interesting, and I enjoyed reading your post. I just found it a bit strange to see the Enlightenment — someone like Kant, for instance — associated with 'diminished moral gravity'.
He's writing about our society, our culture. Not individual philosophers.
And it's one of my favourite books of philosophy - which I wrote up here.
https://clubtroppo.com.au/2024/06/17/alasdair-macintyre-on-how-ethically-lost-we-are/
Superb. A world run by "managers" is a dystopia and we are in it.
I like this a lot. I don't think there is a way back to character; I think the way forward is to something like an archetype, where the social and the personal are harmonious and the feeling is organic – where someone is an archetype in some sense.
MacIntyre doesn't offer guidance about what to do with the fact that the disenchantment with office holders is based on evidence and experience.