What is GPT-3?


Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a for-profit San Francisco-based artificial intelligence research laboratory.

GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020 and was in beta testing as of July 2020, is part of a trend in natural language processing (NLP) systems toward pre-trained language representations. Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020 with a capacity of 17 billion parameters, less than 10 percent of GPT-3's.
The quality of the text generated by GPT-3 is so high that it can be difficult to distinguish from text written by a human, which has both benefits and risks. Thirty-one OpenAI researchers and engineers presented the original May 28, 2020 paper introducing GPT-3. In it, they warned of GPT-3's potential dangers and called for research to mitigate the risk. David Chalmers, an Australian philosopher, described GPT-3 as "one of the most interesting and important AI systems ever produced."
Microsoft announced on September 22, 2020, that it had licensed "exclusive" use of GPT-3; others can still use the public API to receive output, but only Microsoft has control of the source code.
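
For users of that public API, generating text amounts to a single call. Below is a minimal sketch using the openai Python package as it worked during the 2020 beta; the engine name, prompt, and parameter values are illustrative assumptions, and the API key is a placeholder to be replaced with a real one.

import openai

openai.api_key = "sk-..."  # placeholder; substitute your own key

# "davinci" was the largest GPT-3 engine exposed through the beta API.
response = openai.Completion.create(
    engine="davinci",
    prompt="Write a one-sentence summary of neural networks:",
    max_tokens=60,      # cap on the length of the generated completion
    temperature=0.7,    # higher values produce more varied text
)

print(response["choices"][0]["text"])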

What is OpenAI?

OpenAI is an artificial intelligence research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company is considered a competitor to DeepMind and conducts research in the field of artificial intelligence (AI) with the goal of promoting and developing friendly AI in a way that benefits humanity.

Generative models

GPT-3 is a generative model: rather than classifying existing text, it learns a probability distribution over sequences of words and produces new text by repeatedly predicting the next word, which is what "autoregressive" means in practice.
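
A minimal sketch of that autoregressive loop in Python, where model.predict_next is a hypothetical stand-in for a real model's next-token prediction, not OpenAI's actual interface:

def generate(model, prompt_tokens, num_tokens):
    # Autoregressive generation: each new token is predicted from
    # everything generated so far, then appended to the context.
    tokens = list(prompt_tokens)
    for _ in range(num_tokens):
        next_token = model.predict_next(tokens)  # hypothetical model API
        tokens.append(next_token)
    return tokens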


What is an artificial neural network?

Artificial neural networks (ANNs), usually called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains.

An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit a signal to other neurons.

An artificial neuron that receives a signal processes it and can, in turn, signal the neurons connected to it. The "signal" at a connection is a real number, and the output of each neuron is computed by some non-linear function of the sum of its inputs. The connections are called edges. Neurons and edges typically have a weight that adjusts as learning proceeds; the weight increases or decreases the strength of the signal at a connection. Neurons may have a threshold such that a signal is sent only if the aggregate signal crosses that threshold. Typically, neurons are aggregated into layers, and different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
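
To make this concrete, here is a minimal sketch of a single artificial neuron and a tiny two-layer network in Python with NumPy. The weights, biases, and input values are made-up illustration numbers, and the logistic sigmoid is just one common choice of non-linear activation:

import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of incoming signals plus a bias term,
    # passed through a non-linear activation (logistic sigmoid).
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Tiny feedforward network: 3 inputs -> 2 hidden neurons -> 1 output.
x = np.array([0.5, -1.2, 0.3])            # signals entering the input layer

W1 = np.array([[0.2, -0.4, 0.1],          # edge weights into hidden neuron 1
               [0.7,  0.3, -0.5]])        # edge weights into hidden neuron 2
b1 = np.array([0.0, 0.1])
hidden = np.array([neuron(x, w, b) for w, b in zip(W1, b1)])

W2 = np.array([0.6, -0.8])                # edge weights into the output neuron
b2 = 0.05
output = neuron(hidden, W2, b2)

print(output)  # a single real number in (0, 1)

In a trained network these weights would not be fixed by hand; they would be adjusted by a learning algorithm such as gradient descent, which is what is meant above by weights that adjust as learning proceeds.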