Entertainment litigator James Sammataro explains why non-human creation shouldn't be dismissed as merely monkey business.
Monkeys cannot create copyrightable works. This is an actual rule. Seriously.
In 2011, British photographer David Slater was photographing a troop of macaques when Naruto, a six-year-old, smiled into Slater’s lens, pressed the shutter button, and captured a now-famous toothy selfie.
The lawsuit seeks to establish that Naruto should own his selfie, just as any human being owns a selfie they take.
— PETA (@peta) May 23, 2017
After going viral and popping social media metrics rivaling Ellen DeGeneres’ Oscar-selfie, the photo was posted on Wikipedia. Slater fired off a cease-and-desist letter, but Wikipedia refused to take down the photo because a monkey, not a human, created it. PETA jumped into the fray and sued Slater to establish Naruto’s ownership of his selfie. After entertaining some giggle-inducing, “monkey-see, monkey-do” briefs, a federal court held that Congress did not intend to extend copyright protection to works created by animals. A recent settlement – in which Slater will donate a portion of future royalties to conservation charities – mooted the Ninth Circuit appeal. Nonetheless, this seemingly frivolous lawsuit has significant consequences.
Following the “Monkey Selfie” decision, the United States Copyright Office amended the eligibility requirements in its Compendium. “Photograph[s] taken by a monkey” were expressly excluded. The Copyright Office also declared ineligible “works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Copyright requires a human touch.
This means musical works purely created by artificial intelligence (“AI”) are not protected by copyright law. Sounds like far-fetched sci-fi? Not to the music industry. Algorithmic music is not new: it is fifty years in the making. AI has been steadily infiltrating the music industry’s core creative processes and now teaches computers how to create without human intervention. One tech pundit recently predicted that AI’s progress would lead to the music industry’s next “Napster moment.” To assess this possibility, consider some of the AI forces already animating today’s music.
Amper Music, a “composer in a computer,” enables collaborations between artists and computers. It recently released “Break Free,” a duet with Internet personality and singer Taryn Southern. Amper developed the harmony, chords and sequences based on Southern’s suggested inputs, which human producers then fine-tuned.
AI is not just assisting the creative process. It is independently creating music. Jukedeck uses neural network AI technology to produce customized, royalty-free tracks. A user selects variables such as mood (energetic, melancholic), style (modern, classical, piano, synthesizers), tempo (beats per minute) and length. The selections enter Jukedeck’s MAKE cloud, which then precipitates a tailored track in mere seconds. Even when identical parameters are selected again and again, Jukedeck’s AI rains unique, complete musical works each time. The user can then preview the song, accept it, modify the selection, or request a new creation. Because there may or may not be some “creative input or intervention from a human author” after Jukedeck’s AI generates the music, only a case-by-case analysis could discern whether any given song generated by Jukedeck qualifies for copyright protection. At some point along the continuum from merely aiding to independently creating music, Jukedeck produces works that may be as uncopyrightable as Naruto’s Monkey Selfie.
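The parameters-in, track-out flow described above can be sketched in a few lines of Python. To be clear, this is a purely hypothetical toy, not Jukedeck’s actual API: the names `TrackRequest`, `NOTE_POOLS` and `generate_track` are invented for illustration, and real systems use trained generative models rather than random note selection. The sketch’s one legally relevant point is that identical inputs can still yield different outputs, because each call draws fresh random choices with no human intervention.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class TrackRequest:
    """Hypothetical stand-in for the user's selections."""
    mood: str        # e.g. "energetic", "melancholic"
    style: str       # e.g. "piano", "synthesizers"
    tempo_bpm: int   # beats per minute
    length_sec: int  # desired track length

# Toy note pools per style; a real generator would use a trained model.
NOTE_POOLS = {
    "piano": ["C4", "D4", "E4", "G4", "A4"],
    "synthesizers": ["C3", "Eb3", "F3", "G3", "Bb3"],
}

def generate_track(request, rng=None):
    """Return a list of notes sized to the requested tempo and length.

    The same TrackRequest can produce a different track on every call,
    because the note choices are drawn randomly -- i.e. "automatically
    without any creative input or intervention from a human author."
    """
    rng = rng or random.Random()
    beats = max(1, request.length_sec * request.tempo_bpm // 60)
    pool = NOTE_POOLS.get(request.style, NOTE_POOLS["piano"])
    return [rng.choice(pool) for _ in range(beats)]
```

Passing an explicit seeded `random.Random` makes a run reproducible; omitting it models the service behavior where resubmitting identical parameters still rains down a new track each time.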
Optimists view Jukedeck as a benefit to the music industry. Jukedeck summons science to reduce the cost of creating musical content. And because its core audience is YouTubers desiring personalized soundtracks for their videos, Jukedeck dissuades the kind of infringement endemic to YouTube.
Pessimists view Jukedeck as a harbinger of catastrophe. To back the 400-plus hours of video uploaded to YouTube every minute, Jukedeck asks only for a one-time charge (as low as $0.99 for individuals/$21.99 for large companies). These nominal fees eliminate the need for licenses, and have the potential to render stock music libraries and human musicians obsolete.
Realists sniff at Jukedeck, Amper and similar AI creations, rightly characterizing the current offerings as customized, affordable 21st century elevator music. But, can AI be taught to produce revenue-generating hits? Though even the most fervent AI supporters concede that the technology is a long way from producing a perennial hit like Toto’s “Africa” or a concept record like Pink Floyd’s “The Wall,” current software suggests that the era for realizing such potentialities has arrived.
Australian start-up Popgun (co-founded by Twitter music executive Stephen Phillips) utilizes deep-learning AI, which is capable of yielding exponentially more nuanced compositions than the background sounds traditionally produced by machine-learning AI. Popgun’s first project is Alice, an AI that plays piano with humans. Alice was inspired by Google’s AlphaGo project, which famously taught a computer in 16 months how to best the world’s top-ranked player in Go (an ancient Chinese board game more complex than chess). Alice just started playing the piano in April 2017. It learns like a child, listening to thousands of songs and observing how experienced pianists play. After only a few months, Alice can already listen to human keystrokes and reply with suggested notes, resulting in collaborations and duets between AI and humans. Imagine what Alice will learn if it is able to ingest every known sonic footprint, or spend one-on-one time with Kanye or Pharrell. Skeptical of Alice’s potential? Recall the scoffing that bubbled and then popped after Deep Blue beat Kasparov, arguably the greatest chess grandmaster in history. That stunning AI victory happened over twenty years ago, and AI is significantly better today. At this rate, it is only a matter of time before Alice can out-Britney Britney, and a race for artists’ AI rights ensues.
Still incredulous? Follow the money. There’s an AI-driven gold rush. Google Brain introduced Magenta with the stated objective of determining whether computers can produce compelling artistic music. Sony and Warner have invested in Techstars Music, an AI music incubator. Sony’s separate Computer Science Laboratory unveiled the Flow Machine project to analyze thousands of differing scores (from ABBA to Zappa) and educate the computer to compose its own catchy pop tunes. Moodagent, IBM Watson and Gaana are also processing voluminous catalogues into big data to ascertain the science behind the music and musical preferences. AI is seeing beyond traditional genre divides, and plotting the dots between artist and audience. Spotify uses AI to create “Discover Weekly” playlists that are replacing the industry’s “golden ears” with more accurate, personalized recommendations. Increasingly ubiquitous smart voice assistants, Alexa and Siri, are already delivering a frictionless music experience.
Musical Moneyball has arrived, and just in the nick of time. Declining revenues, decreased marketing spends, and smaller A&R budgets necessitate minimizing risk, smarter resource allocation, and a higher hit rate in introducing new artists. AI’s continued evolution will aid album promotion (targeted chatbots), brand building (focused user engagement), and concert ticket sales (postal code analytics, “verified fan” initiatives and fan devotion metrics).
Fully realized AI will arrive with what futurists have dubbed the “singularity,” the moment when AI learning tumbles into a runaway reaction of self-improvement cycles that yields a superintelligence surpassing all human intelligence. Here, one imagines individuals empowered to compose their own life’s soundtrack with supersmart phones and wearable technology, to create and play real-time musical scores drawn in accord with the listener’s external environment or internal biorhythms, or both. One company, AI Music, is already working on “shape changing” existing songs to match the listening context (such as the user’s walking pace) and remixing them on the fly to achieve harmonious states of being.
AI is the future of the music industry. While it may be a while before AI-generated music tops the charts, AI is already curating music, breaking artists, changing consumption, and influencing listenership. It is not too early to check your contracts and negotiate AI rights. And do not be afraid to let computers make your music. Just be sure to secure your rights by imparting some human “creative input or intervention” to such digital ditties to avoid simian-like exclusion.