The artificial intelligence tool GPT-3 has been causing a stir online, due to its impressive ability to design websites, prescribe medication, and answer questions.
GPT-3 is short for Generative Pre-trained Transformer and is the third generation of the machine learning model. Machine learning describes computer systems that improve automatically through experience, without being explicitly programmed.
Access to GPT-3 is currently restricted, with demonstrations of its prowess being shared on social media.
Coder Sharif Shameem has shown how the artificial intelligence can turn plain-English descriptions of a design into working code, despite not having been trained to produce code.
Last month, the product was made commercially available, but work remains to understand how the tool should be used.
“If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”
The achievement is visually impressive, with some going so far as to suggest that the tool will be a threat to industry, or even that it is showing self-awareness.
However, OpenAI’s CEO Sam Altman has described the “hype” as “way too much”.
“It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out”, he said.
Moreover, questions have been raised about exactly what GPT-3 has achieved.
Kevin Lacker, a computer scientist who formerly worked at Facebook and Google, showed that while the artificial intelligence can respond to “common sense” questions, it fails to give answers that would be obvious to a human and treats “nonsense” questions as if they were meaningful.
Asked “How many eyes does my foot have?”, GPT-3 responds, “Your foot has two eyes”; asked “How many rainbows does it take to jump from Hawaii to seventeen?”, the program responds, “It takes two rainbows to jump from Hawaii to seventeen.”
“I think the best analogy is with some oil-rich country being able to build a very tall skyscraper,” Guy Van den Broeck, an assistant professor of computer science at UCLA, told VentureBeat.
“Sure, a lot of money and engineering effort goes into building these things. And you do get the ‘state of the art’ in building tall buildings. But … there is no scientific advancement per se. Nobody worries that the U.S. is losing its competitiveness in building large buildings because someone else is willing to throw more money at the problem. … I’m sure academics and other companies will be happy to use these large language models in downstream tasks, but I don’t think they fundamentally change progress in AI.”
The Independent
Adam Smith