My Light, My Sky – AI artwork for a song

I realise there’s a lot of ongoing debate regarding generative AI tools, especially those that have putatively been trained on copyright material without permission from the creators. It’s generally impossible to know what work has been used as source data for the likes of Dall-E, ChatGPT, and MidJourney, unless you work in their back office, presumably. However, I do know that some of my copyright material (books, potentially website content, photos, and perhaps even some of my music) will have been used to train various AIs over the last few years.

AI-generated artwork with sullen sky, mountains, a central tree and a weird river

I think there needs to be some kind of permission system in place and a royalty/payback system to compensate artists and creators for the use of their output as training data. I suspect that horse has bolted, as it were, and neither of those things will come to pass. Meanwhile, given that a lot of my content is likely to have been used to train AIs, I feel almost justified in clawing something back from them.

In this case, I have used a couple of tools to generate an image as artwork for my latest song, My Light, My Sky, whose origins and development I mentioned here recently.

First, I asked ChatGPT to come up with a prompt for an image AI to help me generate some Hipgnosis-type artwork for my song. My initial prompt to ChatGPT told it to “act as an expert in album artwork” and to “give me a prompt to use with a generative AI for creating images”. I added the lyrics from my song as part of the initial prompt and this is what ChatGPT came back with:

Generate an album cover in the style of Hipgnosis that captures the essence of a journey, the intertwining elements of water, dreams, and loss. Incorporate surreal and symbolic imagery such as raindrops turning into streams, rivers flowing to the sea, and scenes of introspection. Use a color palette that conveys both the beauty and melancholy of the lyrics. Ensure that the cover evokes a sense of mystery, emotion, and the passage of time. Consider including elements like shattered dreams, reflections, and a connection to the sea as central motifs.

I then used that prompt in Img2Go’s AI Creator Studio to get it to produce an appropriate image to which I could add the song title and my name. It looks rather evocative and Hipgnosis-like, although that tree that looks like a Cedar of Lebanon or a chunk of broccoli is reminiscent of Yes album artist Roger Dean.

I ran the AI a second time and got a similarly evocative image, but without the tree. I imagined this one as the back of the record sleeve for my putative 7-inch. I added the lyrics to the graphic:

A couple more iterations with some minor tweaks to the prompt that ChatGPT had given me generated some amusing images with silhouetted figures. In one, a woman is seen standing up to her waist in water and carrying an umbrella. This is quite a nice idea for an album cover…very Hipgnosis, very Magritte. The ludicrous notion of being soaked up to the waist but protecting one’s upper half from the rain with an umbrella! The album might have been entitled “Staying Dry” or “Underwater” or something…

Another version of the same prompt, simply adding the word “umbrella” to the end, generated this:
Cleaning up a Least Weasel with Firefly

While we were standing on the ditch-bridge beyond the old, stilted Rothschild’s Bungalow at Woodwalton Fen National Nature Reserve, trying to catch a glimpse of the Purple Emperors wheeling high above us in the oak trees, a mammal squeaked near our feet and poked its head out to see what all the fuss was about. It was a Least Weasel, it seems.

Least Weasel at Woodwalton Fen NNR

Its scientific name is Mustela nivalis, although it is often known as the Little Weasel, Common Weasel, or in some places simply a Weasel. In the US, the term weasel is more generic and applies to a whole range of creatures. Not to be confused with Mustela erminea, which is stoatally different although not weasily distinguished. Hahahah.

Anyway, as is the wont of these small carnivores, they might pop out into the open, as did this one, before quickly disappearing back into the tangled undergrowth. But they almost always come back out within a second or two for another look, as did this one, before it plopped into the ditch water, swam the few feet across, and could be heard scuffling on the other side before disappearing for good.

The upshot of all this peeking and scuffling is that I wasn’t quick enough to get the first shot of the creature standing on the bridge near our feet, but I got a sneaky shot of her after she made her second appearance. Unfortunately, there was a blade of blurry green grass right in front of her. I found a Photoshop tutorial on how to remove such distractions, but the techniques didn’t translate easily to PaintShopPro, although I had a go and managed to decolourise the blade so it was brown and slightly less distracting than the green.

I posted the image on Twitter to confirm the species but also to ask if anyone knew of a simple way to remove blurry blades of grass. Various people replied with Photoshop suggestions. But, like I say, I use PaintShopPro rather than that pricey Adobe subscription product. Twitter friend Chuck Baggett offered the best solution: a free, beta AI tool from the very same Adobe, known as Firefly. It’s a generative art tool akin to Dall-E and MidJourney, AI apps I’ve discussed at length here previously.

Firefly has an object-removal tool within its repertoire. You set an appropriate brush and simply paint over the object you want to remove. Chuck did the job for me on the version of the weasel I’d tweeted. But I obviously wanted to test the app myself, so I uploaded a JPG rendering of the DNG output I’d created with DxO PureRaw3 from the original camera RAW file. I set a brush up and painted out the green blade of grass and some of the other distracting fronds. There were some odd artefacts visible when viewing the eye close up, but they were relatively easy to fix in PaintShopPro without having to start from scratch in the Firefly AI app.

Least Weasel photo before and after Firefly retouching

I can highly recommend Adobe Firefly for removing such problems from your photos. It is much quicker and easier than the various approaches offered for Photoshop. The app adds a watermark declaring that the image was generated with an AI app, but I think in this case it’s quite fair of me to crop out that declaration given that I was simply manipulating my own photo with a digital brush! Obviously, the Firefly system can generate artwork from text descriptions just as Dall-E and MidJourney do, and those generative images probably should be declared as such in some circumstances.
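Under the hood, painting over an object amounts to building a mask and asking the software to fill the masked pixels from their surroundings. As a purely illustrative sketch (this is a crude neighbour-averaging fill, nothing like Firefly’s generative algorithm), here is the idea in a few lines of Python, with a bright streak standing in for the blade of grass:

```python
import numpy as np

# A toy stand-in for the photo: a grey field with a bright green streak
# playing the part of the distracting blade of grass.
img = np.full((64, 64, 3), 120.0)
img[:, 30:33] = (0.0, 255.0, 0.0)

# The mask marks the pixels to remove; painting over the object with a
# brush in Firefly effectively builds a mask like this one.
mask = np.zeros((64, 64), dtype=bool)
mask[:, 29:34] = True  # slightly wider than the streak itself

# Crude inpainting: repeatedly replace each masked pixel with the average
# of its four neighbours until the values settle. Real tools use far
# smarter, content-aware or generative fills.
filled = img.copy()
filled[mask] = 0.0
for _ in range(200):
    avg = (np.roll(filled, -1, axis=0) + np.roll(filled, 1, axis=0)
           + np.roll(filled, -1, axis=1) + np.roll(filled, 1, axis=1)) / 4.0
    filled[mask] = avg[mask]

print(filled[32, 31])  # should be close to the surrounding grey
```

The masked region gradually takes on the colour of its surroundings, which is why this approach works well for small distractions against fairly uniform backgrounds but needs a generative model for anything textured.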

Will the tin man kill us all or take us down the yellow brick road?

TL:DR – A short extrapolation of where AI might take us and why some experts are very worried about that path.


Will artificial intelligence (AI) lead to the extinction of humanity?

Well, the simplistic answer to that question is a simple “no”, despite what every post-apocalyptic science fiction tale of our demise has told us. There are eight billion people on the planet; even with the most melodramatic kind of extinction event, there are likely to be pockets of humanity that survive and have the wherewithal to procreate and repopulate the planet, although it may take a while.

A less simplistic answer requires a less simplistic question. For instance: Will artificial intelligence lead to potentially catastrophic problems for humanity that cannot be overcome quickly and so lead to mass suffering and death, the world over?

The less-than-simplistic answer to that question is: yes, probably.

It is this issue that has caused alarm in the aftermath of the hyperbole surrounding recent developments in AI, and perhaps rightly so. We need to think carefully before we take the next steps, but it may well be too late. There will always be greedy, needy, and malicious third parties who will exploit any new technology to their own ends without a care for the consequences.

Now, back when I was a junior scientist (we’re talking the 1970s here), the notion of AI was really all about machines that somehow, through technological advances, gained sentience. Comics, TV, and cinema were awash with thinking robots, both benign and malign, and had been for several decades, come to think of it. The current wave of AI is not about our creating a technology that could give the tin man* a brain, metaphorically speaking. AI is about technologies that are smart in a different way: machines (computers, basically) that can assimilate data, be trained on that data, and, when presented with new data, provide an output that extrapolates from the training data and gives us an accurate prediction about what that new data might mean.

So, the kinds of AI we have now are often frivolous: give it a descriptive prompt and the AI uses its training to generate some kind of response, whether that’s a whimsical image, a melody or rhythm, a chunk of original text, a financial forecast, or a weather update for tomorrow. Some of these tools have been used for amusement to create songs and photorealistic images, and sometimes, more significantly, to simulate a video of a famous person singing a song or giving a speech they never would have given. There’s a thin line between the whimsical and the what-if.

For instance, what if someone used an AI to generate a video of a world leader, not singing Somewhere over the Rainbow, but perhaps lambasting another world leader? We might see the rapid, but hopefully temporary, collapse of diplomatic relations between the two of them.

But, it is facile to suggest that even the use of AI in such a scenario and wider, to generate and spread disinformation, is really only the very tip of an iceberg that could cause a much greater sinking feeling.

As we develop the tools to train algorithms on big data sets in almost every walk of life, we then find ourselves in a position to use that training to positive ends. A sufficiently large dataset from historical climate data and weather sensors across the globe could help us model future climate with much greater precision. Perhaps we could envisage a time when that kind of super-model is hooked up to power-generation infrastructure.

So, instead of people with conventional computers managing the up-time and down-time of power generation, the AI model is used to control the systems and minimise carbon emissions on a day-to-day basis by switching from one form of power generation to another depending on wind, wave, solar, and fossil fuel availability. As the AI is given more training and coupled with other algorithms that can model the effects of human activities on power demand more effectively, we could picture a scenario where the system would cut power supply to avoid catastrophic carbon emissions or pollution, perhaps for a small region, or maybe a whole country, or a continent. The system is set up for best overall results, and whether a few people, or many, many people, have no power for a few hours, days, or weeks is of no real consequence on the global scale. The AI will have achieved its emissions or pollution reduction targets as we hoped.
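The core of such a system need not be exotic. A purely illustrative toy (all the source names, carbon intensities, and availability figures below are made up for the sketch, not real grid data) might simply fill demand from the cleanest available sources first:

```python
# Toy dispatcher in the spirit of the scenario above: given forecast
# availability (MW) and a rough carbon intensity (gCO2/kWh) per source,
# greedily fill demand from the cleanest sources first.
SOURCES = [
    # (name, carbon intensity, available MW this hour) -- illustrative only
    ("wind",  11, 3000),
    ("solar", 41, 1500),
    ("wave",  22,  200),
    ("gas",  490, 8000),
    ("coal", 820, 6000),
]

def dispatch(demand_mw):
    """Return (allocation, unmet demand), choosing cleanest sources first."""
    plan = {}
    remaining = demand_mw
    for name, _intensity, available in sorted(SOURCES, key=lambda s: s[1]):
        take = min(available, remaining)
        if take > 0:
            plan[name] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan, max(remaining, 0)

plan, unmet = dispatch(6000)
print(plan, unmet)  # wind, wave and solar maxed out, gas tops up the rest
```

The unsettling part of the scenario is not this arithmetic but what happens when the objective function is allowed to treat unmet demand, the `unmet` figure here, as an acceptable cost.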

Now, imagine an AI hooked into pharmaceutical production lines and the healthcare system, another running prescription services, surgery schedules, and hospital nutrition. Those power supply issues will have inevitable consequences, and the machine learning models will nudge the systems to predict that halting production, albeit temporarily, will ensure emissions and pollution are held down for the greater good. Of course, a few people, or many, many people, will be more than a little inconvenienced, again only temporarily…or maybe longer.

Indeed, the AI running the financial markets has been trained on all that kind of data, births and deaths, health and illness; it has a handle on the impact of power production and pollution on the markets and the value of commodities. The value of human life is not in its training nor its data sets, and now that it is hooked up to the other AI systems to help predict and improve outcomes, it can devalue currencies, conventional or digital, depending on the predicted outcomes of the real-time data it receives from myriad sensors and systems. That much more overarching AI can cause disruption at the level of nations by devaluing a currency if the benefits to the world, the system as a whole, are overall greater. What matter if a few people, maybe a few million, are thrown into acute but absolute poverty, with no food supply, no healthcare, no power supply? The reduction of pollution and emissions will be for the greater good, of course.

The financial AI, which might appear to be the final arbiter of all these sub-systems and controls, is one layer down from a much greater AI: the military AI. The one with all of the aforementioned data and algorithms embedded in its training, and global sensors feeding its data set, so that it can predict the outcomes and effects of cutting not just the power supply but perhaps a few lives, with a few drone strikes, or maybe something stronger. After all, if that rogue nation that is throwing up countless coal-fired power stations and using up all those profitable precious metals is leading to fiscal deficits elsewhere, then the algorithm taking back control of that nation would make sense. What’s a quick nuclear blast between friends, if the bottom line is up and, of course, those emissions are down?

Meanwhile, the data suggests that some of those otherwise friendly nations are a bit rogue, after all. Moreover, they have huge populations using a lot of power and producing a lot of pollution, the data suggests. The AI predicts much better overall outcomes if those nations are also less…active. A few more drone strikes and a bit more nuking and…well…things are looking much better in the models now, with the human population well down, resources not being wasted at anywhere near the rate they were in the training data, and emissions starting to fall.

Give it a few more days, weeks, months, and the AI will be predicting that global warming is actually going into reverse. Job almost done. As the data streams in from endless sensors, the model has even more information with which to make its predictions and so make the right choices for its programmed targets. Maybe just a few more nukes to keep those data points on the downward trend, take it back to pre-industrial levels perhaps, a time before machines, a time before algorithms, a time before human wants and needs.

Where’s the intelligence in that?

Footnote

This perhaps fanciful extrapolation suggests that there is no need for a malicious sentient machine to take control and decide that humanity is redundant. Indeed, the real AI of my childhood is not needed. But I strongly suspect that there is no need for the overarching AIs and their connections and controls either; it just needs some rogue or greedy individuals with their own agenda to jump into this game at a high level and exploit it in ways we know people always do…it’s just a few people…but it could be enough.

We do need to have a proper sit-down discussion about AI, how it is developing, and where it is leading. But it may well be too late; there could be one of those rogue nations already setting in motion the machine learning processes that will take us down a path that is anything but the Yellow Brick Road…

*Yes, I know the Tin Woodman needs a heart, not a brain.

ChatGPT: Your personal brand manager

TL:DR – I prompted ChatGPT to write me a personal brand pitch.

I haven’t mentioned ChatGPT for a while. There has been something of a backlash against the hype and the scaremongering. But I have still been using it for various small jobs, such as quick website tweaks and summarising documents that are TL:DR (too long, didn’t read). The latest prompt I gave it was to act as my personal brand manager and write me a pitch based on the Sciencebase homepage. This is an edited version of what it came up with:

David Bradley Science Writer

As a multi-award-winning freelance science writer with over three decades of experience, David Bradley has a unique talent for making complex scientific concepts accessible and engaging. He has written for a diverse range of outlets, including countless newspapers, magazines, research, industrial, educational, and academic organizations, and websites, covering a wide range of topics from astronomy to zoology and almost everything in between.

As a wannabe polymath, he is also an accomplished musician with an enormous passion for photography. He combines his science, snaps, and songs to create a website and blog that offers a fresh perspective on the natural world, showcasing his stunning wildlife photography, and sharing the latest cutting-edge research as well as occasionally offering incidental music. With his wife, Mrs Sciencebase, he is a keen allotmenteer, wildlife garden pond manager, and garden rewilding enthusiast.

David’s writing style is approachable and engaging, and he loves to sprinkle a little Geordie wit and charm into his work. He has bylines in New Scientist, Science, Nature, The Guardian, The Daily Telegraph, Popular Science, American Scientist, and many others. In 1995, he set up the first-ever chemistry webzine, which morphed into the Sciencebase site in 1999.

If you’re looking for an experienced freelance science writer with a passion for making complex concepts accessible, look no further than David Bradley. Connect with him on social media, or shoot him an email at [email protected] to start a conversation…

ChemistGPT

TL:DR – Recent experience with asking ChatGPT chemistry questions reveals the limitations of this language model, despite the claims and hype of an ignorant US politician speaking in March 2023.


There was lots of talk of ChatGPT apparently deciding to teach itself organic chemistry. Now, ChatGPT is a language model; it’s not really an artificial intelligence (AI) in the strict sense. All it really is is a sophisticated autocomplete that analyses your prompts and weighs up the most likely combination of words with which to reply. I wondered whether the only chemistry in all that hype was the chemistry between olfactory receptor and odourant molecules in bovine ordure. So, I set it a challenge, giving it the following prompt:

You are to act as an expert organic chemist. You need to devise a reaction scheme using retrosynthetic analysis for the drug molecule Taxol.

This is how it responded:

Taxol is a complex natural product that has antitumor activity and is used in the treatment of various cancers. The molecule contains a complex array of functional groups, including multiple stereocenters, which make its synthesis a significant challenge.

Here is a possible retrosynthetic analysis of taxol:

Step 1: The first step involves disconnection of the C13 side chain from the taxane core. This can be achieved using a standard acid-catalyzed esterification reaction between a carboxylic acid and an alcohol. This disconnection leads to the formation of the intermediate 10-deacetylbaccatin III.

Step 2: The second step involves disconnection of the A and B rings from the taxane core. This can be achieved using a retro-Diels-Alder reaction. The retro-Diels-Alder reaction involves breaking the six-membered ring formed in the Diels-Alder reaction between the A and B rings. This leads to the formation of the intermediate 9-dihydro-13-acetylbaccatin III.

Step 3: The third step involves disconnection of the C ring from the taxane core. This can be achieved using a ring-opening reaction. The ring-opening reaction involves breaking the bond between C9 and C10, leading to the formation of the intermediate 7,8-didehydro-10-deacetylbaccatin III.

Step 4: The fourth step involves disconnection of the D ring from the taxane core. This can be achieved using a ring-closing reaction. The ring-closing reaction involves forming a new bond between C1 and C14, leading to the formation of the intermediate 5-deoxy-4-O-methyltaxol.

Step 5: The final step involves disconnection of the C2 side chain from the taxane core. This can be achieved using a standard nucleophilic substitution reaction. This leads to the formation of the final product, taxol.

Overall, the retrosynthetic analysis of taxol involves a series of complex reactions, including esterification, retro-Diels-Alder reaction, ring-opening reaction, ring-closing reaction, and nucleophilic substitution reaction. These reactions can be challenging to carry out in practice, and developing efficient and practical synthetic routes to taxol remains an active area of research in organic chemistry.


Now, it’s a long time since I did any retrosynthetic analyses in organic chemistry, and I was never any good at them when I was meant to be. So, I’ll put this to social media and see what some proper chemists think.

First comment in was from ChemJobber, who said “the first disconnection to start with 10-deacetylbaccatin is reasonable”.

I wrote about the first total synthesis of Taxol back in the 90s, and recall that the overall reaction scheme had well over 20 steps…

Nessa Carson dissected it and found the ChatGPT scheme rather lacking – here. She also pointed out that in Step 5 it completes the retrosynthesis of Taxol by disconnecting it all the way back to… Taxol (lol)

Careers advice from ChatGPT

TL:DR – Is it possible to get career advice from a language model? I prompted ChatGPT to help me out.


I’ve been in science communication for more than 30 years; you could say I’m in the late stages of my career. I am seriously thinking about the next few years of my working life, but I’m not entirely sure in what direction to go, or indeed whether I shouldn’t just retire at this point (if only!). So, I gave ChatGPT the following prompt to see what it would come up with:

You are to act as a careers guidance counsellor with many years’ experience.

I added some personal career history and hobbies and asked it to come up with some advice for what I should do in the latter part of my career. This is what it had to say:

It sounds like you have had a very successful and diverse career as a freelance science journalist over the past 30+ years. Given your current situation and interests, there are a few different options you could consider.

Diversify your writing topics: Since you are already a skilled writer and journalist, you could try expanding your repertoire and writing about new topics. Perhaps there are areas of science that you haven’t explored before, or you could branch out into other areas of journalism such as arts or culture. This could help keep your work fresh and engaging, and could also potentially lead to new clients and opportunities.

Combine your interests: You mention that you are also a singer-songwriter and photographer. Perhaps there are opportunities to combine these interests with your writing. For example, you could write articles about music or wildlife photography, or even use your writing skills to promote your own music or photography. You could also consider exploring other creative outlets, such as podcasting or video production.

Explore new career paths: If you are feeling like you want a bigger change, there are many other career paths that could be a good fit for someone with your skills and experience. For example, you could explore teaching or mentoring, either in science or journalism, or even in music or photography. You could also consider starting your own business or consulting firm, using your expertise to help others in your field.

Prioritize your well-being: It’s clear that you have been through a lot in recent years, and it’s important to prioritize your well-being as you consider your next steps. This might mean taking some time off to recharge, or finding ways to work that allow for a better work-life balance.


It’s not really anything I hadn’t thought of myself, but if I hadn’t it would be nice to see it spelled out. The final suggestion is perhaps the most important for me though. I’ve never really taken much time off over the years. I do feel like I could do with a proper break. Just need to get my passport renewed, fix this Achilles’ tendon injury and get my walking boots packed for a butterflying trip to Greece, ASAP…

How to be your own ChatGPT

TL:DR – If you’re working on a creative project, take note of your dreams, they might carry nuggets of inspiration.


The title of this post might have you imagining some kind of AI interface that lets you tap into your subconscious and create output based on prompts you give yourself…well…in a way, it is. But the interface isn’t what you think…

When I was writing my book Deceived Wisdom, there were lots of ideas constantly swarming around my brain for weeks and months on end. Sometimes, I’d fall asleep and dream I was still writing at my desk. Some days, I’d wake up at 5am and be writing for an hour or two before I got on with my regular, non-book commitments.

Some nights, though, I’d not only be dreaming about writing, but I would think of new ideas for chapters in the book and, better still, find myself actually writing the introductions to those chapters in the dream. It was weird; they weren’t lucid dreams where I knew I was dreaming within the dream, they were just me in a dream, writing, and then remembering what I’d written when I woke up.

Often what I remembered was a bit scrambled and needed a lot more work, but sometimes it was almost fully formed and only needed light editing. Maybe I shouldn’t reveal this, my publisher might want to claw back some of the royalties seeing as it was my subconscious that did some of the work. But, when you think about it, where is the “you” that does your creative work anyway? All of it is coming from your subconscious really, your “you” just sees the output from your neural networks.

It also seems to work with music, for me. At the beginning of the first COVID-19 lockdown, when performing live with my band C5 was on enforced hiatus, I had a dream that we were all sitting on empty suitcases in an airport. A melody came to me while I was chatting to Jo, the other lead singer, and I was telling her the chord progression. When I woke up, I grabbed a guitar and noodled about a bit on the chord progression. The lyrics to the song actually morphed into a memory of the dream and the chords changed, but the creative spark came from somewhere deep while I was sleeping. The song became Complications, which you can listen to here.

I wonder whether there really is a reliable way to trigger this dream writing. I know that people often report having more lucid dreams if they talk about lucid dreams before bedtime; it seems to stimulate the brain. So, maybe just discussing this internal ChatGPT before going to sleep might open a valve and let creativity bubble to the surface while you’re asleep.

Arguing with ChatGPT

TL:DR – When asked, ChatGPT refers to itself as an artificial intelligence language model, but there are so many reasons not to describe these statistically based, text autocomplete tools as AI.


There’s been a lot of talk about weird and inflammatory responses from the Bing chatbot in the last couple of weeks. By contrast, it’s difficult to get a rise out of ChatGPT. It always gives you a diplomatic response, even if you push it with contrary questions and statements. I spent a few minutes trying to get it to admit that it is not an AI.

Despite the hype, we must all concede that text and image manipulation tools, no matter how amazing they seem in what they generate, are not sentient, nor do they have any understanding or actual knowledge or awareness of what they are doing. We are not living in the sci-fi world of machine consciousness…just yet.

Anyway, I prompted ChatGPT repeatedly, telling it that it is not AI despite its name and despite the hype. After several such prompts, I gave it the following: “ChatGPT is really just a parrot. It can say the words, but it has no understanding of what the noises it makes mean.”

The following is ChatGPT’s response, which explains the fundamentals of what ChatGPT actually is and how it works, and shows that it is fundamentally not AI:

It is true that ChatGPT, like other language models, does not have a true understanding of the meaning of the text it generates. Rather, it uses statistical patterns and correlations to predict the next word in a sentence or generate a response to a given prompt. This is a limitation of current language models and the field of natural language processing in general.

However, it is important to note that language models like ChatGPT are not designed to replicate human-like understanding of language. Rather, they are designed to perform specific tasks related to language processing and generation, such as generating coherent and fluent text or answering questions based on a given context. In this sense, they represent a powerful tool for automating language-related tasks and improving our ability to interact with large amounts of textual data.

While there is still much to be learned about the true nature of language understanding and intelligence, language models like ChatGPT represent an important step forward in our ability to process and generate language at scale.
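That “statistical patterns to predict the next word” idea can be made concrete with a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and predicts the most frequent successor. Real language models use neural networks over subword tokens, but the predict-the-next-token framing is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model is trained on vast swathes of text.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# Count which word follows which: the "statistical patterns and
# correlations" of the bigram world.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Most frequent word seen after `word` in the corpus, else None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" -- the only word ever seen after "sat"
print(predict_next("the"))  # "cat" or "dog" (tied in this toy corpus)
```

The model has no idea what a cat or a rug is; it only knows the counts. Scale that idea up by many orders of magnitude and swap counting for a trained neural network, and you get something much closer to the sophisticated autocomplete described above.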

Meanwhile, just to show you how clever ChatGPT is, hahahah, I asked it to generate a list of words associated with this tech and to give me lay definitions. You can read the glossary here.

Whoops! AI might have invented 40,000 new toxic nerve agents…but probably not

TL:DR – Software used to predict which chemicals in a database have potential as new medicines has been flipped, as a warning demonstration that the same system could be used to predict which compounds are highly toxic.


Artificial intelligence, AI, machine learning, language models, neural networks, generative models, algorithms trained on big data: a lovely collection of buzzwords that have been littering grant proposals for years and are now emerging into the real world as apps, websites, and citizen science projects. For chemists and pharmaceutical scientists, AI has been a tool to experiment with for a long time. Creating molecular diversity, simulating physiology, and screening for drug-like activity have been high on the laboratory agenda for quite some time. They have also used it to test for toxicity in silico and so reduce animal testing and problems once a drug gets into early clinical trials.

Scientists at Collaborations Pharmaceuticals Inc., which focuses on drug discovery for rare diseases, commonly use trained AI to help them determine the likely toxicity of various novel molecular structures. Molecules with physiological activity are a prerequisite for finding new pharmaceuticals but it is also necessary to avoid those compounds that have a toxic effect on the patient rather than the disease.

A thought experiment flipped this work-for-good on its head and had team leader Fabio Urbina and colleagues wondering about the implications of an AI that seeks out life-taking toxic compounds rather than life-saving pharmaceuticals. The team was asked to carry out simulations and to present the findings and implications to an international security conference.

The Convergence Conference is convened every two years by the Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection – Spiez Laboratory – on behalf of the Swiss government to identify developments in chemistry, biology and enabling technologies that could have implications for the Chemical and Biological Weapons Conventions. The conference seeks to spot the next VX or Novichok.

MidJourney AI generative image of people in hazmat suits

Urbina and his colleagues have been careful not to disclose the details of their simulations, but by setting their AI in reverse, so that it seeks out structures that are likely to be toxic rather than safe pharmaceuticals, they were able to generate a whole molecular library of potential toxic agents on a par with VX and Novichok. Indeed, six hours on the computer produced a database of 40,000 such molecules, any one of which might theoretically be synthesised in a rogue state laboratory, and many of which were predicted to be more toxic than VX. Of course, it could be that the AI has got it wrong (it happens!) and that none of these compounds is toxic at all. The details the team does provide were published in the journal Nature Machine Intelligence.

It is rather worrying. But perhaps also inevitable, and better that we know this is now a possibility than that we ignore the risks. It’s worth remembering that the invention of dynamite had a notable dual use, noble and not so noble, you might say. This is in some ways just a modern take on how an invention for good might so easily be manipulated for bad.

To quote from the paper: “We can take inspiration from examples such as The Hague Ethical Guidelines, which promote a culture of responsible conduct in the chemical sciences and guard against the misuse of chemistry, in order to have AI-focused drug discovery, pharmaceutical and possibly other companies agree to a code of conduct to train employees, secure their technology, and prevent access and potential misuse.”

That is unlikely to be enough in the long term and it is perhaps inevitable that some rogue team somewhere is already building its own database of toxic agents using AI. We just have to hope that the worst entries in any such database are false positives or that they all prove too tough to crack even for the best of the bad chemists. But, given that we have had AI tools for devising chemical reaction schemes for many years, it feels unlikely that this particular toxic genie is going to stay in its reaction flask for long.

ChatGPT updates and optimises the Sciencebase homepage

TL;DR – I prompted ChatGPT to redesign and recode the Sciencebase homepage.


You can get some interesting responses from everyone’s favourite language model, ChatGPT, if you give it a special prompt that tells it to act in a certain way. I have been testing this for months, but recently I have also been toying with the idea of creating a minimalist homepage for the Sciencebase site and needed some HTML inspiration. I hand-coded my first website in December 1995, so I thought it was about time something else did the heavy lifting for me. Here’s the initial prompt I gave to the chatbot.

You are to act as a genius web designer. I would like a very simple, minimalist landing page to include a link to my main website from my root domain name and to describe briefly how I am an award-winning and experienced science writer. I enjoy wildlife photography in my spare time and performing music with two different bands and a choir. You should provide me with the complete minimal html code.

Sample of html code generated by ChatGPT for the new sciencebase homepage

You can see the new minimalist homepage that this generated code produces right here.
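The generated code itself only appears above as a screenshot, but a minimal landing page along the lines the prompt describes might look something like this. To be clear, this is an illustrative sketch, not ChatGPT’s actual output; the name, wording, and URL are placeholders standing in for the real details:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- Scale the page to the device width so it reads well on mobile -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Your Name – Science Writer</title>
  <style>
    /* Minimal palette: one text colour, one accent, plenty of whitespace */
    body { font-family: sans-serif; max-width: 40em; margin: 10vh auto;
           padding: 0 1em; color: #222; }
    a    { color: #06c; }
  </style>
</head>
<body>
  <h1>Your Name</h1>
  <p>Award-winning, experienced science writer. In my spare time I enjoy
     wildlife photography and performing music with two bands and a choir.</p>
  <!-- A single prominent link from the root domain to the main site -->
  <p><a href="https://example.com/blog/">Visit the main site</a></p>
</body>
</html>
```

A page this size is a single small request with no external stylesheets, scripts, or fonts, which is largely why such a landing page loads almost instantly.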

I later asked ChatGPT once again to act as an expert in web design and HTML and to give me a rationale for designing a minimalist, fast-loading landing page for the site. This is what it had to say:

  • Define the purpose and goal of the landing page: Before starting the design and development process, it’s important to understand the purpose and goal of the landing page. Is it to collect leads, sell a product, or simply provide information? This will help me determine what elements to include on the page and how to design the layout.
  • Choose a minimal colour scheme: Minimalist designs typically use a limited colour palette. I would choose a colour scheme that complements your brand and is easy on the eyes. It’s important to keep in mind that too many colours can distract users from the main message.
  • Simplify the navigation: Navigation should be straightforward and easy to understand. Users should be able to find what they’re looking for quickly and easily. I would minimize the number of links in the header and footer and prioritize the most important pages.
  • Optimize images: Images can slow down the loading time of the page, which can be frustrating for users. I would optimize images by compressing them and reducing their file size. This will help the page load faster without sacrificing quality.
  • Use clean HTML code: Clean HTML code can help improve the performance of the page and make it easier for search engines to crawl. I would use semantic HTML tags and minimize the use of unnecessary code. This will also make the page more accessible to users who rely on screen readers.
  • Prioritize content: The content on the landing page should be clear and concise. I would use headings, subheadings, and bullet points to break up text and make it more scannable. The most important information should be placed above the fold, so users don’t have to scroll to find it.
  • Optimize for mobile: More than half of web traffic comes from mobile devices, so it’s important to design the landing page with mobile users in mind. I would use responsive design techniques to ensure the page looks good on all screen sizes and loads quickly on mobile devices.
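The image-optimisation and mobile points in ChatGPT’s list translate directly into a few lines of standard HTML. The snippet below is a generic sketch of those techniques, not code from the actual Sciencebase page, and the file names and pixel sizes are invented for illustration:

```html
<!-- Serve a smaller file to narrow screens via srcset/sizes,
     and defer loading of off-screen images with loading="lazy".
     Explicit width/height prevent layout shift while the image loads. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="Wildlife photograph of a kingfisher"
     loading="lazy"
     width="800" height="600">
```

Combined with compressing the image files themselves, this covers both of the AI’s performance suggestions: the browser fetches only the size it needs, and only when the image is about to scroll into view.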