Published on: 14 Jun 2023
OpenAI CEO Sam Altman addressed concerns about the rapid roll-out of ChatGPT in a dialogue at Singapore Management University on June 13. (ST PHOTO: KEVIN LIM)
A follow-up to the previous post, on the OpenAI world tour's stop in Singapore.
The Straits Times [https://lnkd.in/ge-6F75n] and The Business Times [https://lnkd.in/g6mYkKPc] have both reported, with far greater accuracy, Altman's responses to the questions I posed.
---
ST:
'Society will be deprived of the time to “co-evolve” with artificial intelligence (AI) if the technology is developed in secret, said the man behind ChatGPT, OpenAI chief executive Sam Altman.
Addressing concerns about the rapid roll-out of the chatbot, which has surpassed 100 million users and brought generative AI tools into the mainstream, he added that making the technology public is necessary to understand how AI can be used and guided to help society.
...
...A report by the authorities here highlighted disinformation, the lack of accountability and criminal use of AI among key risks in the sector.
While acknowledging these concerns, Mr Altman said the harms caused by AI models remain manageable in their current scale. “We want to minimise them as much as possible, but we realised that no matter how much testing (we do)... people will use things in ways that we didn’t think about. That is the case with any new technology.”
Releasing ChatGPT through gradual upgrades lets society adapt, he said.
“You can’t learn everything in a lab,” said Mr Altman. “If you don’t deploy this along the way, and you just go build an #AGI in secret in a lab and drop it on the world, society doesn’t get the time to co-evolve.
“The fact that the world is having this conversation now, well before we get into AGI, is really important. It wouldn’t have been very effective without us deploying it.”'
---
BT:
'IN ORDER to mitigate harm that could be posed by artificial intelligence (AI) on society, ChatGPT had to be released to the broader public so that it could be further improved, said Sam Altman, chief executive of OpenAI, the company behind the application.
“We want to minimise (the harm) as much as possible, but we realised that no matter how much testing and red teaming and auditing… people will use things in ways that we didn’t think about – and that is the case with any new technology,” he said at a fireside chat held at the Singapore Management University on Tuesday (Jun 13) as part of the AI research company’s global tour.
“We really believe that iterative deployment is the only way to do this because if you don’t deploy along the way and you just go build an artificial general intelligence in secret in the lab, and then you drop it on the world all at once, society doesn’t get that time to co-evolve,” he added.'
Disclaimer:
The content of this article is intended for informational and educational purposes only and does not constitute legal advice.
No footnotes this time!