Recent advances in machine-learning systems have led to both exciting and unnerving technologies: personal-assistant bots, email spam filtering, and search-engine algorithms are just a few omnipresent examples of technology made possible through these systems. Deepfakes (deep-learning fakes), or algorithm-generated synthetic media, constitute one example of a still-emerging and tremendously consequential development in machine learning.
WIRED recently called AI-generated text “the scariest deepfake of all,” turning heads toward one of the most powerful text generators out there: artificial-intelligence research lab OpenAI’s Generative Pre-trained Transformer (GPT-3) language model. GPT-3 is an autoregressive language model that uses its deep-learning experience to produce human-like text. Put simply, GPT-3 is directed to study the statistical patterns in a dataset of about a trillion words collected from the web and digitized books. It then uses its digest of that massive corpus to respond to text prompts by generating new text with similar statistical patterns, endowing it with the ability to compose news articles, satire, and even poetry. GPT-3’s creators designed the AI to learn language patterns and immediately saw it scoring exceptionally well on reading-comprehension tests. But when OpenAI researchers configured the system to generate strikingly human-like text, they began to imagine how these generative capabilities could be used for harmful purposes. Previously, OpenAI had often released full code with its publications on new models. This time, GPT-3’s creators decided to hide the underlying code from the public, not wanting to disseminate the full model or the millions of web pages used to train the system.
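The autoregressive loop described above, studying which words tend to follow which, then extending a prompt one word at a time, can be sketched with a deliberately tiny bigram model. This is only an illustrative stand-in: GPT-3 uses a transformer neural network trained on roughly a trillion words, not simple word-pair counts, and the corpus and function names below are invented for the example.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record, for each word, the words observed to follow it.

    This is the toy analogue of "studying statistical patterns"
    in a training corpus.
    """
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Autoregressively extend the prompt: each new word is sampled
    from the words seen to follow the previous one, so the output
    mimics the statistical patterns of the training text."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation; stop generating
        out.append(rng.choice(candidates))
    return " ".join(out)
```

Trained on even a few sentences, `generate(model, "the")` will produce text whose word-to-word transitions all occurred somewhere in the corpus, which is the same in-principle behavior, at a vastly smaller scale, that lets GPT-3 compose plausible articles and poetry.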