From weapons to works of art: The year in artificial intelligence
Make no mistake, we are in the middle of a profoundly significant revolution: a shift into the age of artificial intelligence. And while the advances from year to year may seem small and piecemeal, when we look back in 10 or 20 years with the benefit of hindsight, these incremental moments will form key planks in the narrative we craft about how our future world came to be.
Late last year Stephen Hawking suggested the creation of artificial intelligence could be the biggest event in the history of our civilization, "… or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."
2018 was a fascinating year for AI. We experienced a variety of milestone moments, such as the first auction of AI art, and a large assortment of exciting developments that perfectly encapsulate Hawking's prophetic lack of certainty regarding humanity's future relationship with artificial intelligence. In 2018 we paved the way for a future where AI could heal us, harm us, or even teach us.
The AI weapon debate
Military uses for artificial intelligence, particularly in the field of autonomous weaponry, seem almost inevitable at this stage. We've seen glimpses of development over the past few years, but events in 2018 brought the controversial ethical debate into the foreground.
In February, South Korea revealed the launch of a new facility joining the country's top research university, KAIST, with Hanwha Systems, the country's leading defense company. The explicitly stated goal of the collaboration was to develop AI-based military technologies.
The announcement essentially amounted to the government saying, "screw it, we know that you know we're already doing this, so why bother hiding it." The Future of Life Institute, a coalition of leading AI researchers, was not pleased with the revelation and organized a major academic boycott of the university. By April the president of KAIST had capitulated, agreeing the university would not develop lethal autonomous weapons.
A few months later, the Future of Life Institute took matters further, getting 160 AI-related companies and organizations, and more than 2,400 researchers and engineers, to pledge they would not "participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons." It was a landmark moment that will either stifle the development of autonomous military weapons or simply push those working on these technologies back into the shadows.
The AI doctor will see you soon
There is absolutely no doubt AI systems can sift through massive troves of data faster than any human. With the growing volume and complexity of medical data, these systems are proving stunningly effective at recognizing patterns that no human would be able to detect. This year has revealed several incredible algorithmic advances that point to a future where AI systems will be able to effectively diagnose, and treat, a person's medical conditions just by looking at their records.
One remarkable study published this year suggested an AI algorithm, trained to evaluate a variety of diagnostic data, could be effective at predicting whether a person is at the very earliest stages of cognitive decline, and if they are likely to significantly deteriorate over the following five years. Another system was trained to predict the social outcomes of patients with depression or psychosis, and in early experiments offered better predictions than human experts.
Other scientists are developing AI systems to crunch through our immense repositories of clinical research, helping to uncover missing links between already published papers or dangerous, undiscovered drug combinations. It's not hard to imagine AI assistants soon appearing in every aspect of medicine, from the earliest stages of clinical research to the bedside of patients, offering doctors diagnostic advice.
The AI teacher is watching
More than any other country in the world, China is racing to integrate AI systems into its entire social ecosystem, and 2018 showed the populous nation has already started testing an assortment of striking education-enhancing technologies. In May it was revealed that a high school in eastern China was testing a new facial recognition system designed to analyze student engagement in the classroom in real time. The "intelligent classroom behavior management system" scans the room every 30 seconds, logging both the behavior of the students and their facial expressions.
An even more provocative story came out of China around the same time, revealing the existence of a decade-old machine-learning system being used to automatically grade student essays. An estimated 60,000 schools are currently testing the technology, which can reportedly match a human marker's grade up to 92 percent of the time. One researcher working on the project ominously remarked, "It has evolved continuously and become so complex, we no longer know for sure what it was thinking and how it made a judgment."
The first AI art auction
AI art has been quietly bubbling away for several years now, with many engineers and computer scientists working on ways to imitate human creativity. However, 2018 marked a major milestone when an AI-generated artwork was put up for auction at a major auction house for the very first time. Expected to fetch between US$7,000 and $10,000, the piece stunned the art world when bidding pushed the final sale price to an astounding $432,500.
The artwork itself quickly became mired in controversy, with many in the AI world suggesting it was produced using algorithms previously developed by other scientists. This raised entirely new questions over what constitutes originality when a work of art is produced by an algorithm. Can you own the creative output of an algorithm? Whether this was good art, or even art in any sense of the word, is a philosophical argument sure to be debated for years to come. What we can be sure of is that this AI-generated work will not be the last machine-made portrait to hit the high-end art market.
The AI advertisement writer
IBM's Watson supercomputer was fed 15 years' worth of TV ads that won Cannes Lions International awards for creativity, alongside human emotional response data and company brand guidelines. The goal was to get Watson to spit out a script for a luxury car advertisement. The resulting script, brought to the screen with the aid of an Oscar-winning filmmaker, turned out to be one of the most coherent AI-generated pieces of media we've seen to date.
Advertising, more than music, movies, art or entertainment, is the perfect incubator for this kind of technology. It's already massively data-driven, for one, and it's one of the few forms of "creative" expression designed explicitly to produce a measurable effect on the decision making of its audience. So, while we've seen less-than-stellar attempts by AI to make a mark in the world of film and television, mastering TV ads seems to be where the technology currently excels.
The AI film auteur
In absolute contrast to the Lexus advertisement, Zone Out is a blurry, incoherent, and entirely bonkers short film that nevertheless represents a truly stunning achievement in AI artistic creation. Instead of merely having an AI pen the screenplay for a film or ad, the goal here was to let an AI system generate an entire film, including cutting the whole thing together by face-swapping human actors onto old public domain footage.
The duo behind this crazy project were responsible for another incredible AI production called Sunspring back in 2016. This time the AI system took the green-screened Sunspring actors, chewed up a massive volume of public domain movies, and then created a bizarre short film in just 48 hours. The result is not exactly refined but it offers a great indication of what could be done in a decade or two once this kind of technology improves.
AI versus Shakespeare
Shakespearean sonnets are often held up as a pinnacle of English verse, creating rhythmic rhyming patterns within a strict set of formal parameters. So, of course, a team of researchers set out to create AI-generated sonnets that could rival those of The Bard.
The system was trained on 2,600 sonnets and, while the final results were convincing enough to fool laypeople, an English literature professor was not so easily deceived, suggesting the AI's output could be readily distinguished from human writing by its lower emotional impact. Below are two sonnet quatrains, one written by Shakespeare and the other by the AI. Can you tell the difference? (The answer is at the end of this article.)
full many a glorious morning have I seen
flatter the mountain-tops with sovereign eye
kissing with golden face the meadows green
gilding pale streams with heavenly alchemy
or with a giddy circle mark'd the sight
which, swift and flaming, with disorder'd light
glaring and madly forward in the moon
to shrink into a bubble burst on down
The AI voice that sounded too human
Voice assistants, such as Siri and Alexa, are evolving pretty quickly, but this year Google inadvertently sparked an ethical debate by presenting a new AI system called Duplex. The system interacts with Google Assistant, and can essentially engage in simple conversational tasks via phone calls to businesses, such as scheduling a hair salon appointment, or making a reservation at a restaurant.
The hallmark of Google Duplex is the system's ability to conduct natural-sounding conversations. The system is programmed to respond quickly and to incorporate what Google refers to as "speech disfluencies": subtly calibrated "hmm"s and "uh"s that make it sound like a real person rather than the rigid, mechanical computer voices we are generally used to.
Instead of admiration, the general public's response was shock and concern. People debated whether it was ethical for an AI system to pretend to be human, or whether there should be rules stipulating these new systems identify themselves as artificial when communicating with a human. Natural voice interactions with computers were, until this year, just an abstract sci-fi concept depicted in movies where people converse with AI systems such as HAL in 2001: A Space Odyssey. In 2018, for the first time, we had to grapple with the actual reality of this development and ask ourselves what happens when we can't tell if it is a human or an AI on the other end of the line.
AI or Shakespeare?: The first quatrain was written by Shakespeare and the second was penned by the AI system.