AI Generative Cheat Sheet
Large Language Models are networks trained using massive amounts of text and data. Source: picture alliance/Getty Images
The arrival in late 2022 of the ChatGPT chatbot represented a milestone in artificial intelligence that took decades to
reach. Scientists were experimenting with “computer vision” and giving machines the ability to “read” as far back as
the 1960s. Today it’s possible to imagine a computer performing many human tasks better than people can. Whether
you’re worried about being replaced by a robot, or just intrigued by the possibilities, here are some frequently used
AI buzzwords and what they mean.
Machine Learning
ML is the process of gradually improving algorithms — sets of instructions to achieve a specific outcome — by exposing
them to large amounts of data. By reviewing lots of “inputs” and “outputs,” a computer can “learn” without
necessarily having to be trained on the specifics of the job at hand. Take the iPhone photo app. Initially, it doesn’t
know what you look like. But once you start tagging your face in photos taken over many years and in a variety of environments, the app learns to recognize you.
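Here's a minimal sketch of that idea in Python, using the scikit-learn library and invented numbers in place of real photos. It is not how Apple's app is built; it just shows a model learning from tagged inputs and outputs.

    # Toy illustration of learning from inputs and outputs: a classifier is shown
    # labeled examples, then makes predictions about data it hasn't seen.
    from sklearn.tree import DecisionTreeClassifier

    # Each "photo" is reduced to two made-up numbers (stand-ins for face measurements).
    inputs = [[0.9, 0.8], [0.85, 0.75], [0.2, 0.3], [0.1, 0.4]]
    labels = ["you", "you", "someone else", "someone else"]  # the tags you provide

    model = DecisionTreeClassifier()
    model.fit(inputs, labels)              # "learning" from the tagged examples

    print(model.predict([[0.88, 0.79]]))   # a new photo-like input: likely ['you']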
Chatbots
These products can hold conversations with people on topics ranging from historical trivia to new food recipes. Early
examples are the tools that service providers use on their “Contact Us” pages as a first resource for customers needing
help. It’s expected that chatbots such as OpenAI’s ChatGPT and Google’s Bard will improve rapidly as a result of
recent advances in AI and transform how we search the internet.
Generative AI
This refers to the production of works — pictures, essays, poetry, sea shanties — from simple questions or commands.
It encompasses the likes of OpenAI’s DALL-E, which can create elaborate and detailed imagery in seconds, and
Google’s MusicLM, which generates music from text descriptions. Generative AI creates a new work after being
trained on vast quantities of pre-existing material. It’s led to some lawsuits from copyright holders who complain that
their own work has been ripped off.
Neural Networks
This is a type of AI in which a computer is programmed to learn in very roughly the same way a human brain does:
through trial and error. Success or failure influences future attempts and adaptations, just as a young brain learns to
map neural pathways based on what the child’s been taught. The process can involve millions of attempts to achieve
proficiency.
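As a rough sketch of that trial-and-error loop, the toy Python program below trains a single artificial neuron to reproduce the logical OR function. Every wrong guess nudges the weights slightly, and after many attempts the answers come out right. The starting weights and learning rate are arbitrary choices for illustration.

    # A single neuron learning by trial and error (the perceptron rule).
    import random

    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    bias = 0.0
    rate = 0.1

    for _ in range(1000):                                   # many attempts
        x, target = random.choice(examples)
        output = 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0
        error = target - output                             # success or failure...
        w[0] += rate * error * x[0]                         # ...shapes the next attempt
        w[1] += rate * error * x[1]
        bias += rate * error

    # After training, the neuron's answers should match the OR table: 0, 1, 1, 1.
    print([1 if w[0] * a + w[1] * b + bias > 0 else 0 for (a, b), _ in examples])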
Large Language Models
These are very large neural networks that are trained using massive amounts of text and data, including e-books, news articles and Wikipedia pages. With billions of adjustable parameters tuned during that training, LLMs are the backbone of natural language processing systems that can recognize, summarize, translate, predict and generate text.
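The "predict the next word" part can be sketched with a toy Python program that simply counts which word follows which in a scrap of text. Real LLMs replace the counting with billions of learned parameters and far more context, but the objective is the same: guess what comes next.

    # Toy next-word prediction from simple counts (a far cry from a real LLM).
    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept on the rug"
    words = text.split()

    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1           # record what follows each word

    def predict_next(word):
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))   # 'cat' (it follows "the" most often in this sample)
    print(predict_next("sat"))   # 'on'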
GPT
A generative pre-trained transformer is a type of LLM. “Transformer” refers to a system that can take strings of inputs
and process them all together rather than in isolation, so that context and word order can be captured. This is
important in language translation. For instance: “Her dog, Poppy, ate in the kitchen” could be translated into the
French equivalent of “Poppy ate her dog in the kitchen” without appropriate attention being paid to order, syntax and
meaning.
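Below is a bare-bones Python sketch of the attention step inside a transformer, with the usual query, key and value projections left out and random toy vectors standing in for real word embeddings. The point is that every word's representation is updated using a weighted mix of all the other words at once, which is how context and word order get captured together.

    # Simplified self-attention: each token attends to every other token.
    import numpy as np

    tokens = ["Her", "dog", "Poppy", "ate"]
    x = np.random.rand(len(tokens), 4)       # one toy vector per token

    scores = x @ x.T                         # how strongly each pair of tokens relates
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax per row
    contextualized = weights @ x             # each token now blends in the whole sentence

    for token, row in zip(tokens, weights):
        print(token, np.round(row, 2))       # who pays attention to whom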
Hallucination
When an AI like ChatGPT makes something up that sounds convincing but is entirely fabricated, it’s called a
hallucination. It’s the result of a system not having the correct answer to a question but nonetheless still knowing
what a good answer would sound like and presenting it as fact. There’s concern that AI’s inability to say “I don’t
know” when asked something will lead to costly mistakes, dangerous misunderstandings and a proliferation of
misinformation.
Sentient AI
Most researchers agree that a sentient, conscious AI, one that’s able to perceive and reflect on the world around it, is
years from becoming reality. While AI displays human-like abilities, the machines don’t yet “understand” what they’re
doing or saying. They are just finding patterns in the vast amounts of information generated by human beings and
generating mathematical formulas that dictate how they respond to prompts. And it may be hard to know when
sentience has arrived, as there’s still no broad agreement on what consciousness is.
Emergent Behaviors
As large language models reached a certain scale, they began to display abilities that appear to have emerged from
nowhere, in the sense that they were neither intended nor expected by their trainers. Some examples include
generating executable computer code, telling strange stories and identifying movies from a string of emoji clues.
Prompt Engineering
The accuracy and usefulness of a large language model's responses depend to a large extent on the quality of the prompts it is given. Prompt engineers fine-tune natural-language instructions to produce consistent, high-quality outputs using minimal computing power. These skills are in high demand.
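As an illustration, here are a vague prompt and an engineered one for the same request, written in Python. The ask_model function is a hypothetical placeholder for whichever chatbot or API is being used; only the prompts matter here.

    # Prompt engineering in miniature: same request, two levels of precision.
    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in your chatbot or API client here")  # placeholder

    vague_prompt = "Tell me about electric cars."

    engineered_prompt = (
        "You are a patient teacher. In exactly three bullet points, each under "
        "20 words, explain to a 12-year-old how electric cars differ from "
        "gasoline cars. Avoid jargon."
    )

    # The engineered prompt pins down audience, format, length and tone,
    # which tends to produce more consistent, usable answers.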