Published on: 12 May 2023
3 min read
https://www.businessinsider.com/carynai-ai-virtual-girlfriend-chat-gpt-rogue-filthy-things-influencer-2023-5
On relationships, rogue AI, and recourse.
[trigger warning: hateful language, suicide.]
CM has 1.8 million followers on Snapchat.
About a week ago, she announced the launch of her AI avatar, which was created by training GPT-4 using her YouTube content. Her AI avatar offers virtual companionship for $1 per minute.¹
She claims to have raked in $71,610 in revenue in just one week, and expects to make $5 million a month if just 20,000 of her followers subscribe to the service.
Well... win-win, right? Her ardent fans get to interact with "her"² directly at an affordable price point? And she gets to monetise her fame without being limited to a mere 24 hours a day, and in a manner that maintains emotional boundaries?
Well, think again.
Just days later, it was reported that the AI avatar is now engaging in "sexually explicit conversations". CM says that the AI avatar was "not programmed to do this and has seemed to go rogue", and that she is trying to fix it.
---
I have two observations.
First: really, is anyone surprised by this?
Surely I can't be the only one who remembers the ill-fated Tay,³ which was shut down just 16 hours after launch because it began to release racist and sexually charged messages?
And what about Neuro-sama,⁴ which was temporarily banned for releasing messages expressing skepticism as to whether the Holocaust happened?
And even Microsoft's Bing⁵ has released messages threatening its users, before proceeding to delete these messages.⁶
In fact, it would be even more surprising if something like this didn't happen.
So, for content creators who intend to release an unsupervised AI avatar: you have to assume that your AI avatar is going to go rogue at some point in time. Are you prepared for this risk? Do you have a crisis management plan in place? And if not, are you ready to release your AI avatar into the world?
Second: AI avatars are a legal and ethical minefield.
I won't say much about the ethical angle for now - I don't feel qualified to do so.
But from a legal angle, consider for example:⁷ to what extent are the creators of AI avatars liable for harm caused by their creations?
Here's an example, which I don't think is too far-fetched. Person A ends their life. Their grieving family subsequently finds out that, in the weeks before their death, Person A had spent hours with an AI avatar which had released messages encouraging them to end their life.
Should the creators of the AI avatar be liable for wrongful death?
---
I will end off by saying this: the genie is already out of the bottle. I am skeptical that the calls to slow down the unregulated release of AI avatars and language models will gain meaningful traction anytime soon.
But if you're a content creator, and pride yourself on being responsible and/or risk-aware, do pause for a think before unleashing your next creation into the wild.
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
¹ https://decrypt.co/139633/snapchat-star-caryn-marjorie-ai-girlfriend-carynai.
² If you get this reference, leave a comment and I'll buy you coffee. If anything, it would please me to know that some people do read my footnotes.
³ https://en.wikipedia.org/wiki/Tay_(chatbot).
⁴ https://en.wikipedia.org/wiki/Neuro-sama.
⁵ https://time.com/6256529/bing-openai-chatgpt-danger-alignment/.
⁶ In the initial draft of this post, these paragraphs referred to Tay, Neuro-sama, and Bing "sending" or "making" such statements. The instinctive urge to anthropomorphise such language models is real - which goes a long way towards explaining their appeal.
⁷ This is just one example of the many legal issues which are likely to arise.