Creative AI: Software writing software and the broader challenges of computational creativity
We've covered a lot of ground in this series. We went from algorithmic music to procedurally-generated games (and an AI game developer), then on to computers writing stories, robots painting portraits and abstract art, and machines constructing buildings like the craftsmen of old. Now, in this final part of our deep dive into the world of computational creativity, we turn to the underlying ideas and the future challenges that face the field as a whole.
The next big thing for computational creativity may well be software that writes software. "We've all kind of avoided it," says Goldsmiths College and Falmouth University professor Simon Colton, whose artificial intelligence project – The Painting Fool – we saw in the entry on the robots that would be painters. "The one thing that everyone in the field can do is program, but we're doing it by proxy. We've got it to create art or got it to create poems or create games – although of course games are software – but we haven't addressed the question of how do people creatively write code."
A common criticism of creative AI agents is that they are just an extension of the people who created them – any creativity The Painting Fool exhibits is therefore seen to have root with Simon Colton. "Imagine if software was taught to rewrite itself entirely," Colton posits, "and then did that 10 times over. It's very difficult to say that you wrote a single line of it anymore. You just trained it with generic techniques, which it can use to alter its own code."
That self-modifying software could then go and create artifacts in whatever its domain is: games, cooking recipes, paintings, music, architecture, stories, or whatever else. Colton is starting his work on this track in games. He is writing an AI game designer inspired by Michael Cook's ANGELINA but with the twist that it will be able to not only create games from scratch but also modify its own code, remaking itself again and again as it improves its game-creating capacity.
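To make the idea of software altering its own code with "generic techniques" concrete, here is a toy sketch (entirely my own illustration, not Colton's system): a program holds its own source as data, randomly mutates a numeric constant via the syntax tree, and keeps any rewrite that scores at least as well, hill-climbing toward a goal it was never explicitly programmed to reach.

```python
import ast
import random

def mutate(source: str) -> str:
    """Randomly perturb one numeric constant in a program's source code."""
    tree = ast.parse(source)
    nums = [n for n in ast.walk(tree)
            if isinstance(n, ast.Constant) and isinstance(n.value, (int, float))]
    if nums:
        node = random.choice(nums)
        node.value = node.value + random.choice([-1, 1])
    return ast.unparse(tree)

# The "software" being evolved: a trivial program whose constant starts wrong.
source = "def guess():\n    return 0\n"

def fitness(src: str) -> int:
    """Reward programs whose guess() lands close to a target value (5)."""
    namespace = {}
    exec(src, namespace)
    return -abs(namespace["guess"]() - 5)

# Generic rewrite loop: mutate the source, keep improvements.
best = source
for _ in range(200):
    candidate = mutate(best)
    if fitness(candidate) >= fitness(best):
        best = candidate
```

After a few hundred iterations the surviving source code returns 5 even though no human wrote that line; scale the same loop up from numeric constants to whole functions and you get the flavor of what Colton is describing.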
The natural concern that many outside the field may have at this point is control: it's great that the software can rewrite itself to better – or differently – produce something that requires creativity, but what if it learns too much? How do you control it? How can you be sure that this innocent starting point won't lead somehow to AI-led Armageddon as the software gets more and more powerful and wields that power with ever greater creativity until it manipulates the human race into non-existence?

Colton and the other computational creativity researchers and practitioners we spoke to for this series think such fears are unfounded and perhaps even damaging. Artificial intelligence poses no imminent threat, they say, voicing disappointment that technologists such as Elon Musk and Bill Gates and scientists such as Stephen Hawking have publicly stated otherwise. Science fiction is full of cautionary tales, sure, and AI has made great strides over the past few decades, but it remains grossly unintelligent.
"Every time you sit down and actually work with artificial intelligence you become aware of the limitations of what we're able to do and what we know how to do," says Georgia Tech associate professor Mark Riedl, who spoke to us about his story and game generation projects. "Oftentimes scientists like myself like to extol the virtues of what we're working on without talking about the limitations and the boundaries that we have."
Those limitations are stark. Artificial intelligence agents are currently highly focused. They mostly exist to accomplish one task – telling stories, for instance, or perhaps serving personalized ads to Facebook users – and have no idea how to do anything else. Colton notes that it's "painfully difficult" to get AI to accomplish even the most unintelligent of intelligent tasks. And he thinks the idea of a singularity – some sudden moment where AI goes from dumb to hyper-intelligent – is misguided.
"It's not like the physical sciences," he explains. "There are no breakthroughs in AI. Certainly not in the 15 years that I've been going to conferences. There are things which come to the fore and become very successful, and AI is full of success, but there's no moment where we split the atom, put a man on the moon, cure a disease. That just doesn't happen. Things are incrementally, slowly, carefully done."
Game-developing AI ANGELINA creator Michael Cook puts a different spin on it. "Driving a car from A to B is not the hard part," he notes. "The hard part is having a chat with the passenger on the way there." The first of those we can now do; the second not so much.
Fears of meltdown come and go with the seasons, but the more persistent fear surrounding creative AI is what it means for individual people – especially those whose jobs could conceivably be replaced by these artificially-intelligent creators. University of Sydney researcher Rob Saunders has encountered his share of hostility. "I've had people be very angry when I've shown them simulations of agents doing things," he says. "They'd be very angry one day and the next day they'd decided what I was doing was a performance or something and they'd made it safe for themselves, so it was no longer a threat to them."
This type of rationalization stems from the fact that creative AI confronts the very core of what has traditionally separated us humans from everything else. Algorithmic music pioneer David Cope spent decades fighting such fears. (To recap, Cope's algorithms generate compositions that mimic either his style or that of great composers. See the video below for an example.) He thought that as computers developed, people would see the potential of creative machines and hop on board, but as recently as five years ago he had seen no evidence of this happening. Quite the contrary: for years he met ridicule and fear even among many of his contemporaries in artificial intelligence.
Cope believes that part of the problem is arrogance. "We think we're so special that only we can do certain things like create," he says. "We should realize that there are sunsets that are being created out there that are far superior to the best that we could produce. We could do amazing things if we'd just give up a little bit of our ego and attempt to not pit us against our own creations – our own computers – but embrace the two of us together and in one way or another continue to grow."
He sees that beginning to change, though, as a generation that grew up with computers all around them matures, and he's over the moon that algorithms are becoming commonplace in augmenting human creativity.
But some fears still remain. "Robots and computers will take our jobs," people cry, forgetting, as Colton is quick to point out, that the vast majority of people don't enjoy their jobs. The underlying issue is not that AI agents might "replace" certain kinds of workers, but that those workers need to earn a living. History suggests this is unlikely to be a long-term problem: each major transition, from the agricultural revolution through the industrial one to the information age, has brought with it new ways for people to earn a living.
Artists themselves are unlikely to be replaced, in any case. "When a new [batch of] Royal College of Art graduates come out onto the scene, is their first task to find an older artist and say 'I'm sorry, I'm replacing you'? It's not like that," says Colton. Computers will on the one hand provide cheap art, much like IKEA offers low-cost furniture (except perhaps now with mass personalization rather than mass production), and on the other they will support the artists themselves.
As Cope says, "it's not an either-or situation – either human or machine." He believes it's rather a matter of both-and: both human creators and artificial ones. And the human creators are already benefiting from the talents of their artificially-intelligent tools – take game developers who use procedural content generation to produce cities, trees, or, as in upcoming game No Man's Sky, an entire universe of worlds.
Boom or bust
Computational creativity researchers have fears of their own. Colton worries that a Mark Zuckerberg type might come along and corner the market, just like Facebook did for social media. "The dusty old professors doing social media work putting together prototypes won't be remembered by history as the founding fathers of social media – Zuckerberg and so on will be," Colton says. "Which is fair enough, but I don't want that to happen in computational creativity."

His fears are reasonable. The field is increasingly attracting bigger players with deeper pockets. Not long after we covered Automated Insights in the storytelling entry of this series, the artificial intelligence startup – which specializes in transforming raw data into stories – was acquired by a large private equity firm. And IBM has recently been pouring resources into computational creativity, using its Watson supercomputer to help algorithmically generate a recipe book.
"If they're going for it I think that someone is going to come along and not only make millions or billions but also become the face of computational creativity," says Colton. "That would wind me up like nothing else."
Money is crucial to the field's future. Colton notes that at Falmouth University his team has funding for the next four years, but at Goldsmiths 10 of the 12 computational creativity researchers could be out of a job in 18 months if they don't find new funding. And with the recent fear mongering in the media, he worries that artificial intelligence will lose external funding support.
If people see it as a danger to public health, AI researchers won't get funded; but conversely, if people think it's actually stupid and hundreds of years away from becoming dangerous, then they may also cut funding. AI has boom and bust cycles, Colton says, and despite the positive progress of recent research there's a real chance of another bust right around the corner.
Every domain in the computational creativity field grapples with the problem of definitions. Riedl explains the predicament with a few simple questions: "What does it mean for a computer to be creative? Can you judge whether a computer is being creative? Random is creation. Where do you draw the line between random creativity and something that's considered what humans would think of as creative?"

The answers can vary from one person to the next. For Cope, creativity "is the ability to associate two things which heretofore have not been considered particularly or usefully relatable or associate-able." But Riedl points out that such a definition overlooks more pragmatic kinds of creativity, such as finding a new route home when there's road construction or – as Saunders has just started exploring with his robot craftsmen idea – figuring out how to fit unpredictable, low-grade materials to a high-level brief.
Colton is trying to work around this issue of non-agreed-upon definitions of creativity. He conceived of eight behaviors that software has to exhibit in order to avoid easy criticism of being non-creative. They are skill, appreciation, imagination, learning, innovation, accountability, subjectivity, and intentionality.
Skill is obvious. Appreciation relates to an ability to step back (albeit figuratively in the case of software) and appreciate what has been produced. Imagination is thinking of a world that does not exist. Learning is about improving or changing approach based on feedback. Innovation is coming up with a new process, which software that modifies itself to write new code would accomplish. Accountability is getting the software to tell you why and how it's done something, which Colton argues is important because software faces a "humanity gap" – a kind of lack of good faith because it is not human. Subjectivity is forming prejudices and aesthetic preferences. And intentionality is a matter of having a goal that the software sets for itself.
Colton made intentionality the focus of The Painting Fool in 2013 ahead of its You Can't Know My Mind exhibition. The software painted portraits of attendees according to its current mood, which was based on sentiment analysis on the previous 10 articles it had read on the Guardian website. If its mood was anything other than foul (in which case it told people to go away), the software instructed subjects to show a particular expression. It then did its painting and analyzed the finished product through a machine vision technique. It provided commentary on the process and result and self-assessed how successful it was in achieving the goal it set for itself.
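The mood-setting step of that pipeline can be sketched in a few lines. This is a toy illustration only – not The Painting Fool's actual code – with an invented sentiment lexicon and invented thresholds: score each recently read article with a word list, average the scores, and map the average to a mood that gates the painting session.

```python
# Tiny invented sentiment lexicon (the real system used proper
# sentiment analysis on Guardian articles).
POSITIVE = {"peace", "win", "joy", "rescue", "breakthrough"}
NEGATIVE = {"war", "death", "crisis", "disaster", "fear"}

def sentiment(text: str) -> int:
    """Count positive minus negative words in one article."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mood(articles: list[str]) -> str:
    """Average sentiment over recent articles, mapped to a mood."""
    avg = sum(sentiment(a) for a in articles) / len(articles)
    if avg < -0.5:
        return "foul"        # in a foul mood, the painter turns sitters away
    if avg < 0.5:
        return "reflective"
    return "upbeat"
```

Feed it ten grim headlines and the mood comes out "foul"; ten cheerful ones and it comes out "upbeat" – the point being that the software's behavior toward its sitter is driven by state it acquired on its own, not by a human operator.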
For Colton, The Painting Fool was acting intentionally here. Its choices were accountably unpredictable – in other words, unpredictable but in no way random – and he could have had no way of knowing in advance what would happen because the number of possible outcomes was much too great.
Saunders believes that it's not enough for machines to evaluate their own creativity or to replicate or mimic human creativity. He suggests that we focus too much on the artifact that comes out of the process. "But any creator will tell you that the artifact is just the end of the path they took," he says. That, together with the ties creativity has to the physical characteristics of the creator, pushes him toward robotics. "For me the idea of even trying to create a machine that's going to be creative in the same way as a human is a lost cause."
Robots and computers should do what they're good at just as humans should focus on their strengths, in other words. Robots can be stronger and more precise than humans, and computers can crunch gigabytes of complex data in seconds. Colton gives an example of a poet wanting to draw on the Twitter zeitgeist, suggesting they could sample maybe a thousand tweets in a day. "Software could sample a hundred million in a day and find exactly the right tweets to express some news item in a poetic form."
Or you could look at his new game-developing AI project, which will be used to experiment with an idea similar to Snapchat. "It would be soul-destroying – heartbreaking – to write a game ourselves to be played by one person and if they don't complete it in 10 seconds it's gone forever," Colton says. But if a computer could generate that game in a matter of seconds then it's no big deal.
Creative AI opens up these kinds of possibilities, where the previously unfeasible or impractical becomes trivial and the whole world can enjoy personalized and/or one-of-a-kind artifacts – be they games, songs, poems, short stories, houses, fantasy sports team reports, paintings, or whatever else a creative agent could conceivably produce.
So we've got software writing software, AI taught to exhibit characteristics associated with human creative endeavors, aesthetic (self-)evaluation, further commercialization of AI-created games, music, and art, and creative robots learning to use their electronic muscle to best effect. What else lies ahead for computational creativity?

Colton mentions ideation, the central tenet of the European Commission's What-if Machine – a project that he coordinates. My cat walks in and yells at me just as he brings this up, so his example is "what if there's a cat that's so loud that its meowing bursts the eardrums of people around it?" It's a fictional idea that could easily be the premise for a cartoon or short story, and it's exactly the sort of thing the What-if Machine will do.
Once it reliably generates ideas with cultural potential, the team will tie it into other computational creativity projects, starting with ANGELINA and Colton's new game-generation software. His example this time is "what if there was an old dog who couldn't run anymore, which he used to do for fun, so instead he decided to ride a horse?" The What-if Machine generated that idea, and Colton imagines it being put in a video game: "You've got this knackered old dog who can't move very much, so when you press left and right he barely moves. And you've got to get him on top of this horse and he keeps falling off," he continues. "You can write game mechanics around that [or rather ANGELINA might]."
Cook thinks culture and meaning is the next big challenge. "We skip over this so often and lately I'm just gripped by it, every day," he says. "We constantly find ways to weasel out of getting AI to understand that water is wet, that darkness is scary, that the color red signifies love, that people apologize when they bump into each other, and so on. We need software to truly understand the real world, so it can manipulate it intelligently and understand the consequences of doing so."
And in the meantime, Colton suggests, we have to find ways around the humanity gap. "People have to get used to the idea of software creating things," he says. "If we start calling things c-poems or c-music or c-games to point out that they're computer generated then people immediately change their expectations of humanity [within them]."