When University of Auckland computer scientist Professor Michael Witbrock’s dad took him to see 2001: A Space Odyssey as a child, the robot HAL didn’t scare him off. It hooked him.
Following a grim 1970s summer selling boyswear in a Dunedin department store (“Don’t ever do that: people don’t leave things on the shelves well organised!”), the young Witbrock had earned enough to buy an early computer, an Ohio Scientific Superboard 2, and start tinkering.
Then, while doing a PhD at Carnegie Mellon in the US, he and a group of friends started playing around with something they called ‘genetic art’.
“An image is just a bunch of pixels, so if you work out, given the x and y coordinates, what colour and intensity those pixels should be, you can produce an image.
“Some of the images were quite striking; swooping pastel forms with crystalline structures in them. In the range of things you might think were artistic.”
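The formula-to-pixel idea Witbrock describes can be sketched in a few lines. The snippet below is an illustrative toy, not the group’s actual genetic-art code: each pixel’s brightness is computed from a fixed mathematical function of its (x, y) coordinates, and the result is written out as a plain-text PGM image.

```python
import math

# Toy 'genetic art': every pixel's intensity comes straight from a
# mathematical formula over its (x, y) coordinates. The formula here is
# an arbitrary example, not one the Carnegie Mellon students used.
WIDTH, HEIGHT = 64, 64

def intensity(x, y):
    """Map pixel coordinates to a brightness value in 0..255."""
    u, v = x / WIDTH, y / HEIGHT          # normalise to 0..1
    value = math.sin(10 * u) * math.cos(7 * v) + math.sin(5 * (u + v))
    # value lies in roughly [-2, 2]; rescale to the 0..255 range
    return int((value + 2) / 4 * 255)

image = [[intensity(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

# Write a plain-text PGM file that most image viewers can open
with open("genetic_art.pgm", "w") as f:
    f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n")
    for row in image:
        f.write(" ".join(map(str, row)) + "\n")
```

Swapping in a different formula (or ‘breeding’ formulas together, as genetic-art systems did) yields a completely different image from the same handful of lines.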
Fast forward three decades, and what Witbrock and his friends were doing with AI art in the 1980s isn’t so very different from what happens with present-day artificial intelligence image generation systems, Witbrock says. It’s just that where those early students used tidy mathematical formulas, AI now uses enormous neural networks trained on vast quantities of data.
Still, the core questions have barely shifted: Is the output art? Can it be original? And who owns it?
Alex Sims, a professor in the University’s Commercial Law department specialising in intellectual property, has little time for ‘AI slop’, the term for low-quality AI-generated images.
But her research highlights a real tension – not all AI-generated work is slop.
Sims points to a graphic novel called Zarya of the Dawn, the brainchild of New York artist Kristina Kashtanova, who used the image generation tool Midjourney to create the pictures.
“They spent literally days on each image. Initially they said ‘Create me whatever’, and then used a series of prompts to finally get to something creative.”
The US Copyright Office initially granted protection for the graphic novel when it didn’t know the images were AI-generated. When it found out, it reversed the decision; US law states images must have a human author to be protected.
That’s not the same in New Zealand, where computer-generated work can be protected. Sims says deciding where the line is between ‘creative’ and ‘non-creative’ isn’t black and white. Photographs weren’t protected by copyright law for decades, she says – people argued photographers hadn’t created anything, they were just recording nature.
And creative people have always – knowingly or unknowingly – drawn on influences from the past.
“All of us, artists, everybody, are copying from what we know.”
There are a number of big legal cases about companies training their AI on original creative work wending their way slowly through the court system – Disney and Universal versus Midjourney, Getty Images versus Stability, and more. They may eventually provide more certainty.

In the meantime, Sims’ view is there should be some protection for AI-generated work, “but the protection should be a lot lower”.
“So they’re not protected for as long, and you don’t get what we call ‘moral rights’ – the right to object to use of your work in certain situations.”
She also says artists should be forced to be up-front about their work being AI-generated, including disclosing that in the metadata for the image.
The paradox of AI creativity
Here’s another conundrum: you might think generative AI makes humans redundant, but the truth is the robots can’t do it alone.
It’s sometimes called the paradox of AI creativity, or the paradox of digital decay, and it’s something University of Auckland senior law lecturer Dr Joshua Yuvaraj examines in his research.
“The way AI works is you put in a prompt: ‘Give me a picture of a black cat’, and the AI has been trained on millions, billions, trillions of data points, images, words to statistically predict what it thinks you mean when you say ‘black cat’. And it’ll give you a picture of a black cat.
“The paradox is AI needs authentic original human works to be trained on. If it is trained on AI-generated works, eventually it leads to what computer scientists call ‘model collapse’.”
Take the AI-generated black cat.

“If I re-feed that image into the AI and train it on that, and we keep doing that, eventually the picture of a black cat that’s produced is so distorted it looks nothing like a black cat. But the AI thinks that’s what a black cat is.”
If we don’t have human output to train AI, then AI will become redundant, Yuvaraj says.
“So if you look on Seek and other job sites, you have jobs being advertised for people who create stuff to train AI. It’s actually becoming an industry.”
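The feedback loop Yuvaraj describes can be sketched numerically. The snippet below is a hypothetical toy, not any real system’s training code: a ‘model’ that summarises its training data as a bell curve, and mildly favours typical samples when generating (a common trait of generative models), loses a little diversity every time it is retrained on its own output, until almost nothing is left.

```python
import random
import statistics

# Toy 'model collapse': fit a distribution to the data, generate new data
# from the fit, retrain on that output, and repeat. A slight bias toward
# typical samples shrinks the data's diversity generation after generation.
# This is an illustrative sketch, not an actual AI training loop.
random.seed(0)

def fit(data):
    """'Train': summarise the data as a mean and standard deviation."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, std, n):
    """'Generate': sample, keeping only fairly typical outputs (within 1 std)."""
    out = []
    while len(out) < n:
        x = random.gauss(mean, std)
        if abs(x - mean) <= std:
            out.append(x)
    return out

data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # the 'human' data
stds = []
for generation in range(10):
    mean, std = fit(data)
    stds.append(std)
    data = generate(mean, std, 1000)  # retrain on model output only

# The spread of the data shrinks every generation: 'collapse'
print([round(s, 3) for s in stds])
```

After ten generations the standard deviation has fallen to a tiny fraction of its starting value: the model ends up certain, and wrong, about what the data looks like, just as the repeatedly re-fed black cat ends up looking nothing like a cat.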
So much (human) labour
Here’s another irony: it takes an astonishing amount of human hard work to get artificial intelligence to create an original piece of music.
At least, if the experience of Dr David Chisholm is anything to go by.
Chisholm is a composer and head of the University of Auckland’s School of Music, and last year started a project with some expert colleagues in Chile to train generative AI on his own work. The aim: to create a new piece of music – in the style of David Chisholm.

Surely it couldn’t be that hard, he thought. Isn’t one of the biggest beefs creative artists have about AI – the subject of those legal battles – that algorithms are making work that appears to be an artist’s original?
Chisholm couldn’t have been more wrong. Making just five minutes of original music using AI was long and laborious. And after all that, the first iteration didn’t sound anything like a Chisholm creation.
To get a performable piece, the composer had to sit down and do much of the work himself. It took months.
“I thought it would be so easy – I’d just transcribe it [from the AI]. Instead it was so much work, so incredibly laborious.”