Artificial intelligence could lead to some very bad outcomes, according to Sam Altman, CEO of OpenAI, the company behind ChatGPT. Altman said society must be very cautious and that the technology comes with real dangers.
Altman told ABC News on Thursday that caution was essential: "I think it doesn't work to do all this in a lab. You have to get these products out into the world and make contact with reality. Make the mistakes while the stakes are low. I think people should be happy that we are a little bit scared of this."

ChatGPT, an AI-powered language model, has created a sensation for its ability to generate human-like text responses to a given prompt. It can answer complex questions, compose poetry, write essays, and converse on a variety of topics. Some are amazed by its ability to respond better than humans; others fear its misuse.
Asked what the worst possible outcome could be, Altman said there was "a set of very bad outcomes." He said he was particularly worried that these models could be used for large-scale disinformation, and that as they get better at writing computer code, they could be used for offensive cyber-attacks. At the same time, he said, the technology could be the greatest humanity has yet developed.

Altman's warning came a few days after OpenAI released GPT-4, the latest version of its language AI model. Shortly after the launch, Brett Winton, Chief Futurist at ARK Invest, called GPT-4's performance on human benchmarks "rather remarkable": where GPT-3.5 scored in the 10th percentile on the bar exam, GPT-4 hits the 90th percentile. On BC Calculus, he said, it got the equivalent of a 3, good for college credit at 75% of colleges, sharing a graph comparing the performance of GPT-3.5 with GPT-4.
Tesla CEO Elon Musk, an early investor in OpenAI, asked: "What will be left for us humans to do?" In February of this year, Musk warned that AI was one of the biggest risks to the future of civilization, telling the World Government Summit in Dubai, UAE, that it is both positive and negative and has "great, great promise, great capability." In December of last year, Musk said the lack of regulatory oversight of AI was a major problem: "I've been calling for AI safety regulation for over a decade!"

Brad Smith, Vice Chairman and President of Microsoft, which uses the GPT-4 language model in its Bing search engine, has highlighted key developments to expect in the future of generative AI. Speaking at the India Today Conclave 2023, Smith said AI models will get better and more powerful in their ability to reason.
He said these models will progress from large language models to multimodal models, able to understand not just words but also images, sound, and video, and to produce content in a variety of forms. First and foremost, the models are going to get better, Smith said, and being better means they will be more powerful in their ability to reason.