That this kind of writing puts a great number of us off is not important to many who seek their fortune in this industry.
I hear the cry: "it's my own words, the LLM just assisted me." Yes, we have to write prompts.
I'll let an LLM update code documentation or even write a README for my project, but I'll edit that to ensure it doesn't express opinions or say things like "This is designed to help make code easier to maintain" - because that's a rationale the LLM just made up.
I use LLMs to proofread text I publish on my blog. I just shared my current prompt for that here: https://simonwillison.net/guides/agentic-engineering-pattern...
I'm not ashamed to admit that LLMs, even ones from 2 years ago, could communicate ideas much better than I can, especially for a general audience.
It’s like everything else AI can do - it looks fine at a glance, or to the inexperienced, but collapses under scrutiny. (By your own admission you’re not a great communicator… so how can you tell?)
Thankfully we don't have to know how to write well to enjoy a well-written book.
A lot of the time, the inability to express an idea clearly hints at some problem with the underlying idea, or in one's conceptualisation of that idea. Writing is a fantastic way to grapple with those issues, and iron out better and clearer iterations of ideas (or one's understanding thereof).
An LLM, on the other hand, will happily spit out a coherent piece of writing defending any nonsense idea you throw at it. Nothing is learnt, nothing is gained from such "writing" - for either the author or the audience.
It doesn't come naturally to the more introverted type of person who cares about the object-level problem rather than what anyone else may know or doubt, I'll admit. But slapping LLMs on it is not a great solution.