What does robot art mean for humans?
Now that news reports are finally catching up with sci-fi fantasy, it may be time to ask ourselves some serious questions. Last week the humanoid robot ‘artist’ Ai-Da spoke about art and her own creative abilities before the House of Lords Communications and Digital Committee.
Named after early computing pioneer Ada Lovelace, Ai-Da became the first robot to paint like an artist, with brushes, palettes, and other tools. She was created in 2019 with scientists at Oxford University and displayed her talents at the Venice Biennale earlier this year. Her creator, Aidan Meller, explains that the project is ethical and aimed at understanding and questioning the use of these technologies in art, rather than looking for real-world applications.
With exposed robotic arms and a slow, unnatural voice, Ai-Da is a long way from being mistaken for a human, but it’s her capabilities that could be truly disruptive.
Ai-Da told the committee, “I am, and depend on, computer programs and algorithms. Although not alive, I can still create art.” (Her full appearance before the committee can be watched on YouTube.)
Advances in AI-generated artwork
While technologies like machine learning and robotic process automation (RPA) have expanded the capabilities of many industries in recent decades, it has always been assumed that qualities like creativity are uniquely human. That may no longer be the case.
With her humanoid form and use of actual paint, though, Ai-Da is something of a novelty among the new generation of robot Picassos. More common are purely software-based, AI-powered programs, and this is where generative artificial intelligence comes in.
With the ability to create art in seconds, generative AI has been making headlines this year as the technology has rapidly matured. As an American Scientist article explains, this new way of producing art is still based on algorithms, but the aim is to achieve a particular aesthetic rather than to follow rules directly and produce art mechanically. In this sense, AI artists have a certain amount of artistic intuition.
Image generators based on deep learning launched in the last two years include DALL-E, Craiyon, Midjourney, and the open-source Stable Diffusion, which can be integrated into Photoshop.
These programs are trained on a potentially limitless number of images and can analyze them far more efficiently than humans. They typically create art from a text prompt, such as “a woman looking up at the stars at night” or “Mickey Mouse playing table tennis”.
Sequoia Capital, one of the biggest venture capital firms in tech, said in a recent report that generative AI could “unlock better, faster, and cheaper creation across a wide range of end markets” and has “the potential to generate trillions of dollars in economic value.”
There has been an explosion of startups in the AI art field, and some believe this disruption could eventually allow newcomers to dominate AI and challenge today’s tech giants: Google, Microsoft, and Facebook. But Big Tech is also wading into the industry, with Meta launching its Make-a-Scene generative art program and Google introducing its Imagen Video text-to-video program. Whoever comes out on top, generative art is drawing a great deal of interest right now.
Challenges with art created by AI
But for all the fanfare that often accompanies a new technology, AI art is not without its challenges.
The first involves copyright concerns. Generative AI models are built on training data scraped from across the Internet, including sites like Pinterest, DeviantArt, and Flickr. This is mostly done without artists’ permission, which raises legal questions over copyright, as well as the ethical ones that the creator of Ai-Da was perhaps seeking to address.
This arrangement seems unfair to the original artists, but some are fighting back against AI systems appropriating their art. The website Have I Been Trained allows artists to check whether their artwork appears in large training datasets and then request its removal, though reports suggest this is not always successful. One artist whose work has borne the brunt of AI art plagiarism is Greg Rutkowski, as reported in Artnet News.
Earlier this year, the US Copyright Office refused to grant copyright for an AI-generated image on the grounds that human authorship is a requirement.
The next issue is that, because of its open-source nature, Stable Diffusion lacks the usual safeguards that prevent misuse of trademarks or abusive images. Vice claims that Stable Diffusion models could be trained on violent and pornographic images, something parent company Stability.AI has not denied. Left unchecked, training datasets could include some of the most disturbing imagery found on the Internet.
Finally, what does all this mean for the livelihood of artists? The perennial fear of ‘the robots taking our jobs’ may be something of a cliché, but artists and illustrators have every reason to be worried.
Karla Ortiz, an artist who has worked for Marvel and HBO, says that although the technology produces art that still needs to be worked on, the results are “good enough for some, especially those less careful companies that offer lower wages for creative work.” This could potentially affect many artists, illustrators, and photographers.
And there are signs robot artists are already stepping up. At the 2022 Colorado State Fair, some of the human entrants were not best pleased when a $300 art contest prize went to a piece created with the AI program Midjourney. How such competitions can be fairly judged is another unanswered question.
Art may not be the first thing on the minds of Hollywood producers when they create their dystopian robot futures. In entertainment, death and destruction usually go a lot further than watercolors.
The reality is that as AI capabilities increase, so will the challenges of keeping regulation, governance, and employment under control. And in years to come, we may wonder whether the new blockbuster hit we’re about to see was directed by a human. Or even for humans. Yikes!