A recently published post by the science fiction writer Robin Sloan (Is It Okay?, published 11th February 2025) ignited some examination and debate in my little corner of the web. The post asks whether it’s ethical, from an individual moral standpoint, to use an LLM (Large Language Model, such as Claude or GPT-4). Robin raises some important points about the trade-offs that come with LLMs, depending on their application.
If their primary application is to produce writing and other media that crowds out human composition, human production: no, it’s not okay.
He also offers an alternative view, one that supposes LLMs will pave the way for “super science”, a common claim of AI advocates.
If super science is a possibility — if, say, Claude 13 can help deliver cures to a host of diseases — then, you know what? Yes, it is okay, all of it. I’m not sure what kind of person could insist that the maintenance of a media status quo trumps the eradication of, say, most cancers. Couldn’t be me. Fine, wreck the arts as we know them. We’ll invent new ones.
Here’s where I disagree with Robin’s reasoning: AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion it could feasibly constitute a legitimate application of AI.
LLMs are not this. They synthesise text, which is not the same thing as data. That’s especially true when they are trained on the entire internet, which we all know includes a lot of incorrect, discriminatory and dangerous information. As Baldur Bjarnason points out:
There is no path from language modelling to super-science.
I don’t believe LLMs are entirely without utility. The company I work for designs and trains both AI models for use in industrial processes and LLMs, but they are different things. In one application we (and by “we”, I mean my far cleverer colleagues) deploy models for analysing performance data from wind turbines to produce insights related to power output, deterioration and part failure, enabling operators to plan maintenance and optimise power generation. Here, AI has the potential to help drive down costs and maximise clean energy production. It’s still early days, and it remains to be seen whether this kind of technology will be widely deployed and beneficial at a large scale, but this, to my mind, edges marginally towards the scientific potential that Robin refers to (while being a long way from, say, curing cancer). It’s not an LLM.
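To make the distinction concrete, here’s a rough sketch of the kind of model I mean: a statistical anomaly detector over turbine sensor readings, with nothing language-based about it. To be clear, the data, feature names and choice of algorithm below are entirely invented for illustration; they don’t describe the systems my colleagues actually build.

```python
# Hypothetical sketch: flag anomalous turbine behaviour from sensor data.
# The dataset is synthetic and the feature names are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate hourly readings from one turbine: wind speed (m/s) and power output (kW).
wind_speed = rng.uniform(3, 25, size=1000)
power_output = np.clip(wind_speed, 0, 12) ** 3 * 1.5 + rng.normal(0, 50, size=1000)

# Inject a handful of "deteriorating" readings: normal wind, unusually low power.
power_output[::100] *= 0.4

readings = pd.DataFrame({"wind_speed": wind_speed, "power_output": power_output})

# An unsupervised anomaly detector learns what "normal" operation looks like
# and scores each reading against it.
model = IsolationForest(contamination=0.02, random_state=42)
readings["anomaly"] = model.fit_predict(readings[["wind_speed", "power_output"]])

# Readings flagged -1 are candidates for maintenance investigation.
print(readings[readings["anomaly"] == -1].head())
```

The point isn’t the specific algorithm; it’s that the model works on numbers from sensors, not on text scraped from the web.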
On the other hand, we do train LLMs for other applications, such as gleaning relevant information from thousands of disparate documents, which would be impossible to trawl through manually, and presenting the findings in a user-friendly way. These are not general-purpose LLMs designed to regurgitate information from the entire internet, but models built on a set of highly specific training data relevant to the industry in which they’re applied.
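For the curious, here’s a similarly hypothetical sketch of the retrieval half of that kind of system: ranking a handful of in-memory maintenance notes against a question. Everything here (the documents, the query, the use of plain TF-IDF) is my own invention for illustration; in a real system the top-ranked passages would be handed to a domain-specific model to summarise.

```python
# Hypothetical sketch: surface the most relevant passages from a document set.
# The documents and query are invented; a real system would index thousands of
# files and pass the top matches to a domain-specific language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Gearbox oil temperature exceeded threshold during high wind conditions.",
    "Blade inspection report: minor leading-edge erosion on turbine 7.",
    "Scheduled maintenance completed; pitch system firmware updated.",
    "Power output below expected curve, suspected yaw misalignment.",
]

query = "Which turbines show signs of reduced power output?"

# Represent documents and the query as TF-IDF vectors, then rank by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])
scores = cosine_similarity(query_vector, doc_vectors).ravel()

# Show the best-matching passages, most relevant first.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```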
Both of these applications are interesting and potentially useful. But they are not the same. An LLM as described above, while useful, shouldn’t invent new information. It processes the text that already exists, not the science behind it, and if it appears to offer up something new, that should be met with the utmost scrutiny. And it remains to be seen whether these applications (and others like them) will be worth the extraordinary amount of energy and resources that AI demands.
By using ChatGPT to write your essay, code or email, you are not contributing to “super science”. LLMs cannot do that. Maybe you’ll conclude that using an LLM is still worth it to make you more productive in writing code, or whatever. (And yes, I have Thoughts on this.) But once we take “super science” out of the equation, it seems to me there aren’t a whole lot of positives left.