Saving us from artificial intelligence

TL;DR – Artificial intelligence (AI) ain’t what it used to be. Back in the day, the notion of AI meant more than algorithms and language models trained on data sets; it encompassed the idea of machine sentience. Now, it seems it is simply used to refer to various tools, such as ChatGPT and Midjourney. These tools have no intelligence, artificial or otherwise.


If you’ve been following developments in technology over the last few months, you can’t fail to have noticed the advent of the buzz phrase artificial intelligence. There are tools that can take a text prompt and respond with a realistic face, a painting in the style of a well-known artist, or a Shakespearean poem.

Some of these tools, such as ChatGPT, aren’t AI at all; they’re simply generative language models, which, as I have mentioned several times in recent weeks, can translate and edit your text and convert computer code from one language into another. Given a suitable prompt, they can even generate working computer code, or write news articles, essays, even research papers.

There are concerns in many quarters that people will use such tools to cheat at school, in their jobs, and to generate digital content that is not strictly authentic. There is also the question of who owns the copyright to content generated by AI.

I would suggest that educators, publishers, gallery owners, and others need to catch up with the technology quickly. It is here, it is now. It is not going away. We all need to adapt to these new tools and recognise that we cannot ignore them, just as we could not ignore the invention of the world wide web back in the late 1980s and its introduction to the world in the early 1990s. (See also the internet before it, television, radio, the telegraph, the printing press, cave paintings, the hand axe.)

At the moment, a keen eye can quite readily detect AI output, but the tools are being refined and becoming more sophisticated with every iteration. We need to address the concerns about plagiarism and copyright, especially in education and research, but also in the creative industries and other realms of human endeavour.

New tools that recognise AI output are needed, and these are in development. But, as with any new technology, we will continue to use it, and society will ultimately accept that some content is generated this way rather than handwritten off-the-cuff with pen and ink.

It is likely that search engines will be quick to incorporate technology that recognises AI content on websites and perhaps allows the search engines to lower the rank of such content or otherwise penalise it. Similar tools will become available to educators, just as plagiarism-checking software was developed to reveal where students had lifted content from a website without citation. Conversely, the AI developers could incorporate systems into their tools that “watermark” the output in some way, so that it might look authentic, but a quick scan of the text or whatever would reveal the watermark and betray the user who claimed the content as their own.
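To make the watermarking idea concrete, here is a toy Python sketch of one naive approach: hiding a marker made of zero-width characters inside otherwise normal-looking text. The marker sequence and function names are purely illustrative, not any real vendor’s scheme, and a genuine AI-text watermark would be far subtler than this.

```python
# Toy illustration of text "watermarking": hide a marker using zero-width
# Unicode characters that are invisible to a reader but trivially detectable
# by a quick programmatic scan. The marker sequence is arbitrary.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space, non-joiner, space

def watermark(text: str) -> str:
    """Append the invisible marker to a piece of generated text."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Reveal whether the marker is present in the text."""
    return text.endswith(ZW_MARK)

stamped = watermark("This paragraph was machine-generated.")
print(stamped == "This paragraph was machine-generated.")  # False: marker added
print(is_watermarked(stamped))  # True
print(is_watermarked("This paragraph was typed by hand."))  # False
```

In practice a scheme this simple would be stripped by any copy-paste through a plain-text editor; real proposals bias the model’s word choices statistically instead, but the detection principle is the same.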

On that point, at the moment, I’d see AI tools like ChatGPT and Stable Diffusion as being text manipulation tools. One has to craft a very specific prompt to generate the particular output one receives. A different but similar prompt will not get the same output from the AI. The creativity, the authenticity, is in crafting one’s prompt and then in the further processing, editing, and manipulation of that output to make your final product.

Of course, it is then down to the creative to decide whether to declare the tools they used to generate their content. Does every writer declare which word processor they use, which search engines and databases they trawled for information? Does every photographer mention which photo editor they used to adjust curves and levels and to crop their photos? No, they generally don’t, unless they’re offering advice or a tutorial. Of course, photos and documents can carry metadata (the EXIF data in a digital photo, for instance), and so perhaps a similar chunk of metadata could be incorporated into AI output so that interested third parties could check the processes used to create the content. The metadata could include the original prompt, the specific AI tools used, and details of subsequent edits made by the creative.
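Such a provenance record could look something like the following Python sketch, in the spirit of EXIF data. All the field names, the tool name, and the values are made up for illustration; a real scheme would need an agreed standard format.

```python
import json

# Hypothetical provenance record an AI tool could attach to its output,
# analogous to the EXIF block in a digital photo. Every value here is
# illustrative, not from any real tool.
provenance = {
    "prompt": "A Shakespearean sonnet about the telegraph",
    "tool": "ExampleGenerator v1.0",
    "generated": "2023-01-15T10:00:00Z",
    "edits": ["trimmed final couplet", "corrected meter in line 3"],
}

# Serialise the record so it can travel alongside the content...
record = json.dumps(provenance, indent=2)

# ...and an interested third party can later parse it to check how the
# content was created and what the human changed afterwards.
checked = json.loads(record)
print(checked["tool"])          # ExampleGenerator v1.0
print(len(checked["edits"]))    # 2
```

A JSON blob like this could live in a sidecar file or an embedded metadata chunk, much as EXIF lives inside a JPEG.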

Now, the question remains…did I use AI to generate the content above…or was it all off-the-cuff and handwritten with pen and ink?

Spoiler alert: It’s entirely original. It was written off-the-cuff and on-the-fly, but on a laptop keyboard in WordPress on the sciencebase.com site.