Artificial Art – Portraiture

Continuing with some more Wombo creations today, I selected a few friends and Twitter contacts and used elements of their bios and usernames to create a portrait or pertinent pseudo-painting for each.

Classic FM’s Tim Lihoreau
Musician and (ex)minister Clive-upon-Sea
Andrea T from C5 The Band
Keith “SteelFolk” Walker
Jenny “LabLit” Rohn
Cartoonist Martin Shovel
Asher Wolf
Ernesto Priego
Dog lover and Times columnist David Aaronovitch
Subatomic Karthi
Rob Finch
Kae
Journalist and Martian author Nicholas Booth
Jane Sutton of the RAE

Now taking requests on Twitter…first one, a wombat for @HomoCarnula

For Michele who tweeted “Nooooooo” to the wombat
For computational chemist and baseball fan Joaquin Barroso

Artificial art with Wombo

The trendy AI app – Wombo – which seems to be going viral, takes your words, lets you choose an art style, and then generates a dream-like image from the combination. You may have seen the Rush-inspired art I had it create in a previous post, but here are a few more, created with eclectic word choices and picks from the various styles – Mystical, Dark Fantasy, Psychic, Steampunk, Synthwave etc.

Tidal Life
The Cybermaid’s Tale
Doctor Who cocktails – Daleks on the Beach
Terraforming Mars for Xmas
Dancing with Darth
Not 50 Shades of Grey
The Immigrant Song

No dreaming spires
Disney doesn’t do science

Bob
Sheeran exhaustion
For Alan Hull
More Music Tim
Anything but a self-portrait
Big in Japan

AI redesigns Rush the band

TL;DR – Wombo was the big AI image generator in late 2021.


You can’t have missed the latest AI app – Wombo. It has various incarnations, one of which, Wombo Dream (for Android and iOS), lets you type in a few words, choose a style (Mystical, Dark Fantasy, Psychic, Steampunk, Synthwave etc) and create a fantastical landscape or scene. It’s quite bizarre to watch it drawing together an algorithmic “artwork” from whatever scraps of graphical information are contained within the AI.

I tried a few odd, dream-like phrases, some rude words, and my own name to get a feel for what it could do, and then I thought I’d see what it would make of references to the Canadian rock band, Rush.

Rush on Ice
2112 – A year on
Taking a slow boat to the East
Closer to the Heart of Cygnus
Grace under Pressure
Seven seas of High
…thought I would be singing, but I’m tired, out of breath
On the wing
Analog kid grows up to be a digital man
High school halls and shopping malls
Winding up the steampunks

Films from the Future

In Films from the Future – The Technology and Morality of Sci-Fi Movies, Andrew Maynard draws on his work on emerging technologies, responsible innovation, and how we address issues of risk. He introduces the reader to the profound capabilities presented by new and emerging technologies, and to the complex personal and societal challenges they present.

Through twelve carefully curated movies, Maynard offers a starting point for exploring potentially life-changing technologies and trends – from the genetic engineering of Jurassic Park and the brain-enhancing drugs of Limitless, to the human augmentation of Ghost in the Shell and the artificial intelligence of Ex Machina. The concepts are woven together with emerging ideas on technological convergence and on responsible and ethical innovation to give us a panoramic vista of where our technology might take us and how we might ensure it takes us where we want to go.

More information from Mango Publishing (https://mango.bz)

Cutting noise from photos

UPDATE: March 2023 – I am currently using DxO PureRaw instead of the full PhotoLab. It does the same denoising and lens/camera corrections; I then adjust curves and levels with PaintShop Pro, as I had been doing before trying PhotoLab.

UPDATE: January 2023 – I wrote this article back in 2018; since then, various programs offering AI approaches to denoising photographs have come on to the market, many of which are much easier to use and work really well. For example, the Topaz AI Denoise tool reduces noise and blur and can even reduce motion blur, as I demonstrated in an article with a photograph of a Peregrine Falcon flying overhead. DxO PhotoLab is my current denoise software of choice though; its DeepPRIME system effectively lowers the ISO of any noisy photograph by the equivalent of about three stops (like shooting at 400 rather than 3200 but with the same shutter speed and aperture). It has lens/camera corrections built in too, as well as letting you adjust levels, curves, saturation and so on.


Noise can be nice…look at that lovely grain in those classic monochrome prints, for instance. But noise can be nasty: those purple speckles in the low-light holiday snap from that flashy bar with the expensive cocktails, for example. If only there were a way to get rid of the noise without losing any of the detail in the photo.

Now, I remember noise in spectroscopy at university: you could reduce it by cutting out any signal that fell below a threshold. Unfortunately, as with photos, that filtering also cuts out detail and clarity. So, one solution was to run multiple spectra of the same sample – like taking the same photo several times – and stack them together so that the parts of interest add up. You then apply the filter to cull the dim parts, the noise. The bits that are the same in each shot (or spectrum) add together, but the random noise generally does not overlap from one run to the next and so does not get stronger with the adding. The low-level filtering then applied removes the noise without cutting into the image. No more ambiguous spectral lines and no more purple speckles. That is the theory, at least; your mileage in the laboratory or with your photos may vary.
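The stacking idea can be sketched numerically – a toy example with made-up signal and noise values, not real spectra or photos:

```python
import random
import statistics

random.seed(42)  # reproducible "noise"

TRUE_SIGNAL = 100.0  # the real brightness of one pixel (or spectral line)
NOISE_SD = 20.0      # standard deviation of the random noise
N_FRAMES = 64        # number of repeat exposures (or spectra)

# Each frame records the true signal plus a fresh dose of random noise
frames = [TRUE_SIGNAL + random.gauss(0, NOISE_SD) for _ in range(N_FRAMES)]

# Any single frame can be well off the mark...
single_error = abs(frames[0] - TRUE_SIGNAL)

# ...but averaging the stack lets the random noise largely cancel out,
# so the error shrinks roughly as 1/sqrt(N)
stacked = statistics.mean(frames)
stacked_error = abs(stacked - TRUE_SIGNAL)

print(f"single-frame error:  {single_error:.2f}")
print(f"stacked-frame error: {stacked_error:.2f}")
```

The signal reinforces in every frame while the noise points in a different random direction each time – which is the whole trick.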

De-noising by stacking together repeat frames of the same shot comes into its own in astrophotography, where light levels are intrinsically low. Stack together a dozen photos of the Milky Way, say, and the stars and nebulae add together; then apply a cut to anything dimmer than the faintest feature you care about and you can reduce the noise significantly. Stack together a few hundred and your chances are even better, although you will need a tracking mount to move the camera as time goes on and so avoid star trails.

Then it’s down to the software to work its tricks. One such tool, ImageMagick, has been around for years and has a potentially daunting command-line interface on Windows, Mac, and Unix machines, but with its “evaluate-sequence” function it can nevertheless quickly process a whole stack of photos and reduce the noise in the output shot.

As a quick test, given it’s the middle of the afternoon here, I went to my office cupboard, which is fairly dark even at midday, and dug out some dusty copies of an old book by the name of Deceived Wisdom; you may have heard of it. I piled up a few copies and, with my camera on a tripod and the ISO turned as high as it will go to cut through the gloom, snapped half a dozen close-ups of the spines of the books. The first photo shows one of the untouched shots, with a zoom in on a particularly noisy bit.

Next I downloaded the snaps, which all look essentially identical, each having a slightly different random spray of noise. I then ran the following command in ImageMagick (there are other apps that are more straightforward to work with, having a GUI rather than relying on a command prompt). Nevertheless, within a minute or so the software had worked its magic(k).

magick convert *.jpg -evaluate-sequence median book-stack.jpg
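If you’d like to see what that `-evaluate-sequence median` step is actually doing, the per-pixel median stack is only a few lines of code. Here’s a pure-Python toy version – simulated frames rather than real JPEGs, with invented image sizes and noise levels:

```python
import random
import statistics

random.seed(1)
WIDTH, HEIGHT, N_FRAMES = 8, 6, 7

# A made-up "clean" scene: one brightness value per pixel
scene = [[60 + (x * 25 + y * 35) % 150 for x in range(WIDTH)]
         for y in range(HEIGHT)]

def noisy_frame():
    """One simulated exposure: the scene plus fresh random noise."""
    return [[px + random.randint(-45, 45) for px in row] for row in scene]

frames = [noisy_frame() for _ in range(N_FRAMES)]

# Per-pixel median across the stack -- the same operation ImageMagick's
# -evaluate-sequence median performs across a folder of frames
stacked = [[statistics.median(f[y][x] for f in frames)
            for x in range(WIDTH)] for y in range(HEIGHT)]

def mean_abs_error(img):
    """Average per-pixel distance from the clean scene."""
    return sum(abs(img[y][x] - scene[y][x])
               for y in range(HEIGHT) for x in range(WIDTH)) / (WIDTH * HEIGHT)

print("single frame error:", round(mean_abs_error(frames[0]), 1))
print("median stack error:", round(mean_abs_error(stacked), 1))
```

The trick only pays off because the noise is different in every frame while the books stay put; any frame-to-frame movement would need aligning first.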

And so here’s the result – well, the zoomed-in area of the composite output photo: the per-pixel median of the six essentially identical original frames, with the noise filtered to a degree in the combined image. There is far less random colour fringing around the letters and overall it’s crisper. The next step would be to apply unsharp masking and so on to work it up to a useful image.

It’s not perfect, but there is far less noise than in any of the originals, as you can hopefully see. The software you use may offer fine adjustments, but perhaps the most important factor is taking more photos of the same thing. That’s probably not going to work at that holiday cocktail bar, but with patience it should work nicely for astro shots. Of course, if I wanted a decent noise-free photo of my book, I could have taken the copies out of the cupboard, piled them on my desk, lit them properly, used a flash and diffuser and what have you, and got a really nice photo from a single frame. But then what would you learn from me doing that, other than that I still have copies of my old book?

Is it okay to kick a robot?

UPDATE: One of their robots can now dance, which is the way forward. Please don’t weaponise them; just let them twerk and moonwalk.

These robots can now open doors for each other and let themselves out…just sayin’

By now, you’ve probably seen the astounding quadruped robots that have been built and demonstrated by Boston Dynamics. These machines run like four-legged animals and don’t seem to mind when their human companions give them a kick…hold on…give them a kick? Is that really the best example to set impressionable people watching the videos?

One could argue that it’s a machine, that it doesn’t “mind” being kicked, and that the kicks demonstrate just how robust the software and servos are to disturbances in the forces around them. But it is still quite a disconcerting thing to see. The next generation might be togged up with heads and fur, for instance, to make them look even more like animals; that would make for even more uncomfortable viewing, I reckon. And then, of course, such a robot might ultimately be endowed with artificial intelligence – sentience, even. Would kicking a bot that knows what you’re doing be moral?

This also raises another question. If we build sentient robots, would it be sensible to give them pain receptors? Would we want them to know to avoid things that might hurt them? And, Asimov aside, might a robot in pain, having been kicked, feel that retaliation was the ethical thing to do from its perspective?

Asimov on the three laws of robotics

Laws of Robotics are essentially the rules by which autonomous robots should operate. Such robots do not yet exist, but they have been widely anticipated by futurists, in novels and in science-fiction movies, and for those working in, or simply interested in, research and development in robotics and artificial intelligence, such laws are important pointers to the future…if we are to avoid a Terminator- or Matrix-style apocalypse (apparently). The most famous proponent of laws for robots was Isaac Asimov.

He introduced them in 1942 in the short story “Runaround”, although others had alluded to such rules before this. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Here’s the man himself discussing the three laws:

Grant a pardon to Alan Turing

UPDATE: Finally getting around to updating this page, it is ten years since Alan Turing was granted a posthumous pardon (2013)

A new petition is now online seeking a pardon from the UK government for mathematician Alan Turing. In 1952, he was convicted of ‘gross indecency’ with another man, under an archaic law that no longer exists. He was subjected to so-called “organo-therapy” (chemical castration) and, two years later, killed himself with a cyanide-laced apple, aged just 41.

Turing invented the concept of the modern computer, devised the first real test for artificial intelligence and cracked the German codes to help bring WWII to an end.

Turing also figured out how the leopard got its spots and what makes zebras stripy. Seriously: he showed that the reaction and diffusion of chemicals in the growing embryo, particularly among pigment cells, would give rise to waves or rings of different pigment across the skin.

Video Lecture Search and Natural Language

A new speech and language search engine that could help you find particular subjects discussed in a video lecture has been developed by MIT scientists at the Computer Science and Artificial Intelligence Laboratory. Regina Barzilay, James Glass and their colleagues say the web-based technology currently allows students and others to search hundreds of MIT lectures for key topics.

“Our goal is to develop a speech and language technology that will help educators provide structure to these video recordings, so it’s easier for students to access the material,” explains Glass. More than 200 MIT lectures are now available at http://web.sls.csail.mit.edu/lectures/ but there is no reason to think that the system could not be scaled to even the most puerile of YouTube videos at some point in the future.

Meanwhile, a Cambridge, UK, company has gone into private beta with its natural language search engine. TrueKnowledge is discussed in more detail in an article entitled “What time is it at Google Headquarters?”, which is exactly the kind of question it can help you answer.