"I have to wonder though, if you actually ate this soup with a little fish in it. The soup is so full of flavor that there wasn’t even a single taste.”
“We ate this with a whole bunch of it.” Hermione pointed out. “We’re all eating this with a fish in it. It must be pretty good.”
“I think so,” Harry agreed. “I have tried it with oyster sizzlers, with lobster, with shrimp and on lobster tails. It is very good.”
As you’ve spotted, this isn’t an extract from a new and inexplicably bad J. K. Rowling novel, but rather an attempt at Harry Potter fan fiction written by an artificial intelligence.
Its penchant for the culinary is down to a fundamental quirk of neural networks. This particular AI is an instance of GPT-2, which specializes in generating text. It was trained on Potter fan fiction by Janelle Shane, but before that, she trained it to create recipes. Ever since, when trained on a new task, it has had a tendency to quickly bring the topic of conversation back to food.
In her recent book, You Look Like a Thing and I Love You, Shane explains that this phenomenon has to do with what’s known as catastrophic forgetting. A small neural network trained on a new task quickly forgets what it learned from previous ones. Meanwhile, larger neural nets that can retain knowledge from old tasks tend to do a bad job of knowing which learning to draw from. It’s one reason why today’s practical neural networks tend to be trained on only one thing: they’re just more reliable that way.
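To make the idea concrete, here is a minimal sketch of catastrophic forgetting, assuming nothing more than numpy: a tiny network is fitted to one invented task, then retrained on a second, and its error on the first task is measured before and after. The tasks, network size and training settings are all illustrative, not drawn from Shane’s actual experiments.

```python
# A minimal sketch of catastrophic forgetting with a tiny numpy network.
# Everything here (tasks, sizes, learning rate) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.5, (1, 16))   # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1))   # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def train(x, y, steps=3000, lr=0.05):
    """Plain gradient descent on mean squared error."""
    global W1, b1, W2, b2
    for _ in range(steps):
        pred, h = forward(x)
        err = pred - y                      # dL/dpred (up to a constant)
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)    # backprop through tanh
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

def mse(x, y):
    pred, _ = forward(x)
    return float(((pred - y) ** 2).mean())

x = np.linspace(-3, 3, 200).reshape(-1, 1)
task_a, task_b = np.sin(x), np.cos(x)       # two stand-in "tasks"

train(x, task_a)
print("task A error after learning A:", mse(x, task_a))  # low
train(x, task_b)                                          # now train only on B...
print("task A error after learning B:", mse(x, task_a))  # much higher: A forgotten
```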
Shane documents these and other weirdly delightful idiosyncrasies of the neural networks she trains on her blog, AI Weirdness. Her favorite experiments are those that fuse neural nets with human creativity and the real world: experiments like the knitting project SkyKnit and the crochet project HAT3000, where patterns designed by an AI were, after a little debugging, brought to life by humans. The results include wonderful, organic-seeming creations that bear little resemblance to the patterns the AIs were trained on.
Shane is no hobbyist. She holds a PhD in electrical engineering and a master’s in physics, and her interest in AI began when, as an undergraduate at Michigan State University, she attended a lecture on evolutionary algorithms by Erik Goodman.
"If you were to go back in time and attend that lecture of his, you would recognize instantly the spirit of my entire outlook on AI," Shane explains on a voice call to New Atlas.
From there, Shane worked on genetic algorithms with Goodman as a freshman researcher, before tying that work into research on laser pulses. She’s retained her sense of wonder at the "unintentionally ingenious" nature of AI ever since. But it was the best part of a decade later that Shane began experimenting with, and blogging about, recipes created by neural network. "I was mostly entertaining myself, and wanting somewhere to put these experiments so maybe I could show a couple of friends as well," Shane explains (a word of caution: don’t sample early iterations of AI-generated recipes yourself).
Neural networks are loosely modeled on biological brains, though the ones Shane trains have about as many neurons as a worm. The most powerful neural networks are comparable to the brains of honey bees, though it’s been estimated that neural networks could achieve the complexity of human brains by 2050.
But that doesn’t mean we should expect human intelligence in the AIs of tomorrow. Neural networks don’t have actual neurons, just units of software which are, Shane writes, "able to perform very simple math," and much less complex than neurons in a human brain. If artificial neural networks have an advantage over biological brains, it’s that they can concentrate wholly on one task at a time.
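For a sense of just how simple that math is, here is a hypothetical single "neuron" in a few lines of Python; the inputs, weights and bias are arbitrary placeholders, not values from any real network:

```python
# One artificial "neuron": the only math it performs is a weighted sum
# followed by a simple squashing function. The numbers below are made up.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation, squashes to (0, 1)

# Example: three inputs, arbitrary weights. Training a network amounts to
# nudging numbers like these until the outputs look right.
print(neuron([0.5, 0.1, 0.9], [1.2, -0.7, 0.3], bias=0.1))
```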
Reading the book, it’s hard not to ascribe human qualities and failings to the AIs, laziness especially. Often, they’ll take the path of least resistance toward a solution. In the case of a Tetris-playing AI programmed not to lose, that may take the shape of simply hitting pause when things go south, rather than doing the hard work of getting good at Tetris. Yep, they can seem oddly relatable at times.
"One example I’m really fond of is this tendency to choose to fall down instead of walk," Shane adds. "Given absolute freedom to design a body-type and the goal of getting to some distant spot, AI will over and over again discover that it can turn itself into a tall tower, fall over, and land at its goal. It’s way easier than walking or crawling or anything else." It’s a technique Shane borrowed at a recent TED robot-building workshop competition – and promptly won.
Shane describes these behaviors as at once diabolical, sinister and completely innocent, but the problem of shortcuts can be exacerbated when AIs operate in a simulation rather than the real world. There, they can exploit "glitches in the Matrix," as it were.
"There are AIs that hack their simulation physics," Shane explains. "They have less human and more microbial nature, the way microbes will harvest energy from anything they find in their environment, whether it’s rocks, or heat, or chemicals, or sunlight. You find the same tendencies in very simple simulated organisms. They’ll harvest energy from math errors or slight inaccuracies in the simulation."
A major theme of the book is the gap between AI as it is and AI as it’s portrayed in science fiction. Unlike the human-like general intelligences of speculative writing, the neural networks of today perform best when posed narrow, well-defined tasks.
"We need to be aware of how limited these algorithms are. You get tendencies of people who are selling them, building them or buying them who think these algorithms will save them from silly mistakes or will act ethically. That was one of my motivations for writing this book. We’ve got all these stories about science fiction AI, but we don’t have as many stories about the AIs that we actually have today. The super-smart, human-level AI is a lot more familiar in many ways because it’s the AI that we see in science fiction. And it’s still called AI, even though it’s as different from today’s AI as a human is from a Roomba."
The book also tackles the larger and thornier ethical questions of the use of AI. While AI isn’t always portrayed in the best light, a theme of the book is that, often, an AI’s failings reflect those of its human creators.
Take AIs trained to recommend job applicants by learning from past applicants and the hiring decisions eventually made about them. The problem is that, even when those systems are billed as removing bias from the equation, the AI is essentially just predicting who would have been hired based on historical data. So, if past hiring decisions were biased towards white men, it’s quite likely the AI will be too. In 2018, New Atlas reported on how facial recognition software can amplify bias, as well as IBM’s efforts to counter inherent bias in AI.
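A minimal sketch shows how this happens, using synthetic data and scikit-learn; the penalty applied to the disfavored group, and all the numbers here, are invented for illustration:

```python
# How "predict the hiring decision" inherits historical bias. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)            # skill distributed equally in both groups
group = rng.integers(0, 2, n)          # 0 = historically favored, 1 = not

# Historical decisions: skill mattered, but group 1 faced an extra penalty.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, differing only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 1 scores noticeably lower
```

Note that simply deleting the group column wouldn’t necessarily fix a real system, since other features can act as proxies for group membership.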
"There are uses of AI that I really don’t think can be used responsibly," Shane says. "One of these is gender-recognition: the idea that you can get a picture of somebody’s gender by looking at them, and all the possible uses you could think of for such an algorithm. All of these are inherently harmful, inherently exclusionary, and I think gender-recognition is one of these kinds of algorithms that just shouldn’t be built and can never work. You can ask the question 'can fire act ethically?' No, it just does what it does. You have to know what it will do and how to control it. We have to check to make sure they’re doing what they’re supposed to do. Never trust a neural net."
And yet Shane is optimistic for the future of AI, particularly in the realm of creativity. "There’s a lot of interesting stuff that I can’t wait to play with. I’d love to see a movie with AI-generated graphics or see some more games like AI Dungeon that are using AI to expand the gameplay and make things more flexible. There’s so much creative possibility. I’ve been trying my hand at writing some short stories that deal with different sides of AI or types of AI that are more like today’s AI."
One story, styled as a New York Times op-ed from the future, is about self-driving scooters which turn feral thanks to their evolutionary programming.
"One of the things I really wanted to do was make sure it featured a narrow AI that was not more complicated than what we have today. In fact in my story the scooters are developed in 2020. We pretty much have the technology to do that now. The science fiction part is then imagining what would happen if they were just left to evolve."
You Look Like a Thing and I Love You is available now. It’s simultaneously an excellent primer on AI and a funny, absorbing read in its own right. Shane’s experiments with neural networks continue.