Kurt Schwitters’ sound poem Ursonate is a forty-minute stream of invented nonsense language, written over the course of a decade beginning in 1922. It is an ur-text for his own brand of Dadaism, which he named Merz (after the Dadaists refused him for being too concerned with aesthetics), and is classically structured in four movements: Erster Teil (First Part), Largo, Scherzo, Presto. It is notoriously difficult to learn, and some experimental vocalists consider it a rite of passage – among them composer Jennifer Walshe, who made a documentary on the piece for BBC Radio 4 and has also “translated” it into Irish. Here, Walshe has delegated the challenge of performing Schwitters’ piece to machine learning.
There is much proselytising about AI and its many effects and affects filling our feeds right now, along with infinite opinion pieces bemoaning the incoming AI apocalypse. In the sonic realm, Walshe is one of the few who has sunk her teeth into the vanguard literature, as an artist curious and engaged, one who recognises that a technology is never inherently good or bad. We are, if you didn’t know already, fully entangled. Listening to this version of Ursonate, it is in the moments of a cappella singing that I feel the full horror of the uncanny valley’s own Mariana Trench, though in some passages the voice is so indistinguishable from a real one that it climbs back up the slope. The language of Ursonate, in only resembling language, not assembling meaning, further confuses this sense: do the Germanic syllables of Ursonate feel uncannily like language, or is the AI making tracks that are uncannily like different genres?
However, as we should be asking with all AI generated cultural artefacts: What Were The Prompts? My guesses are: something to do with contemporary EBM, death metal, early music, the overwrought crescendos of musical theatre, Animal Collective, Fall Out Boy, Leonard Cohen (this one is 100% ready for release, give me the whole album), maybe Nine Inch Nails, possibly Enya (if not, a missed opportunity), and something that might have originated as an instruction for the machine to devour the back catalogue of Diamanda Galás. I also heard the AI wrangling with Jessie J, Ariana Grande and Nicki Minaj’s ‘Bang Bang’, although that might be a personal haunting that does not register with you, because when our ears are scrying for prompts in material like this, it’s as much about the machine as our memories. The overall effect is of being in a late night bar in a foreign city where you don’t speak the language, and someone’s put a very random Spotify playlist on.
There’s a lot to unpack if you so desire: about authorship, about genre, about pop and the avant-garde, about AI and readymades. What I find most interesting are the questions it raises about how the listening ear jumps for associations – listening as simile for things that ‘sound like…’. I make the comparisons above based on what? On timbre or pacing, tone or structure? How did I come to learn these associations, anyway?
Ursonate may be notoriously difficult for a human performer to learn – but not for an algorithm, if the instructions are delivered well enough. Is Ursonate%24 another way for Walshe to perform, where performer becomes prompt writer? Or is she wilfully undermining her own mastery of this difficult piece, in handing over its articulation to the machine? Is it any good? I’m not sure. Is it interesting? Absolutely.