An observation: the too-little-text problem that Twitter has historically faced and the too-much-text problem created by ChatGPT are two sides of the same coin.
With the popularization of generative neural networks and large language models, we've opened quite the Pandora's box, haven't we? We can now generate content faster than we can consume it, and it's relatively high-quality text, too. We can now generate bullshit with abandon.
However, even with systems like ChatGPT, one of the key problems we face is that people are fundamentally lazy. Even with a model that spews out lots of text, a lazy prompt won't get you good results. And whether eloquent or not, lazy communication is rarely helpful, not to mention easy to spot.
Lazy communication can be a lack of communication. It can also be the opposite: using a lot of words to say nothing. Concise messaging is key, and it cannot be achieved without first deciding exactly what it is you wish to say. You must be thoughtful.
The problem with social networks that limit message length (as Twitter historically has) is that an individual's complex opinions cannot be accurately expressed within the character limit. It's simple "takes" that often end up getting the most user interaction for this very reason. The outrage machine comes alive when an oversimplified opinion is expressed.
Bullshitting about a subject has always been an issue as well. It's an art: how much can you say without actually saying anything? Quite a lot, it turns out. With the proliferation of LLMs, this problem will likely grow as big as Twitter's "too little context" problem has become, if it hasn't already.
Here's a cool tip: to verify whether someone is actually competent, ask them to explain a subject as if a layperson had asked. If, given some time, no simple analogies come to mind, there's reason to believe the person doesn't comprehensively understand the subject matter at hand.