{{Short description|Large language model developed by Anthropic}}
{{Use American English|date=June 2024}}

{{Use mdy dates|date=September 2024}}
{{Infobox software
| title = Claude
| logo =
| developer = [[Anthropic]]
| released = {{start date and age|2023|3}}
| genre = {{ indented plainlist |
*[[Large language model]]
*[[Generative pre-trained transformer]]
*[[Foundation model]]
}}
| license = [[Proprietary software|Proprietary]]
| website = {{url|https://claude.ai/}}
}}

'''Claude''' is a family of [[large language model]]s developed by [[Anthropic]].<ref name="Davis-2023" /> The first model was released in March 2023. Claude 3, released in March 2024, can also [[Image analysis|analyze images]].<ref name="Whitney-2024" />

== Training ==
Claude models are [[Generative pre-trained transformer|generative pre-trained transformers]]. They have been pre-trained to predict the next word in large amounts of text. Claude models have then been fine-tuned with Constitutional AI with the aim of making them helpful, honest, and harmless.<ref name="Time-2023" /><ref name="Anthropic-2023b">{{Cite web |date=May 9, 2023 |title=Claude's Constitution |url=https://www.anthropic.com/news/claudes-constitution |access-date=March 26, 2024 |website=Anthropic |language=en}}</ref>

=== Constitutional AI ===
Constitutional AI is an approach developed by Anthropic for training AI systems, particularly language models like Claude, to be harmless and helpful without relying on extensive human feedback. The method, detailed in the paper "Constitutional AI: Harmlessness from AI Feedback", involves two phases: [[supervised learning]] and [[reinforcement learning]].<ref name="Anthropic-2023b" />

In the supervised learning phase, the model generates responses to prompts, self-critiques these responses based on a set of guiding principles (a "constitution"), and revises the responses. Then the model is fine-tuned on these revised responses.<ref name="Anthropic-2023b" />
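The generate–critique–revise loop of the supervised phase can be sketched as follows. This is an illustrative sketch only; the `generate` and `critique_and_revise` functions, and the sample principles, are hypothetical stand-ins, not Anthropic's actual implementation.

```python
# Sketch of the supervised-learning phase of Constitutional AI.
# All model calls are placeholders (assumptions), not real APIs.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most helpful and honest.",
]

def generate(prompt: str) -> str:
    # Placeholder for a language model call.
    return "initial response to: " + prompt

def critique_and_revise(prompt: str, response: str, principle: str) -> str:
    # The model critiques its own output against one principle and
    # rewrites it; here we merely tag the revision to show data flow.
    return response + " [revised per: " + principle + "]"

def build_sl_dataset(prompts):
    """Produce (prompt, revised response) pairs for supervised fine-tuning."""
    dataset = []
    for prompt in prompts:
        response = generate(prompt)
        for principle in CONSTITUTION:
            response = critique_and_revise(prompt, response, principle)
        dataset.append((prompt, response))
    return dataset
```

The model is then fine-tuned on the resulting (prompt, revised response) pairs rather than on its first-draft outputs.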

For the reinforcement learning from AI feedback (RLAIF) phase, responses are generated, and an AI compares their compliance with the constitution. This dataset of AI feedback is used to train a preference model that evaluates responses based on how much they satisfy the constitution. Claude is then fine-tuned to [[AI alignment|align]] with this preference model. This technique is similar to [[reinforcement learning from human feedback]] (RLHF), except that the comparisons used to train the preference model are AI-generated, and that they are based on the constitution.<ref>{{Cite web |last=Eliot |first=Lance |date=May 25, 2023 |title=Latest Generative AI Boldly Labeled As Constitutional AI Such As Claude By Anthropic Has Heart In The Right Place, Says AI Ethics And AI Law |url=https://www.forbes.com/sites/lanceeliot/2023/05/25/latest-generative-ai-boldly-labeled-as-constitutional-ai-such-as-claude-by-anthropic-has-heart-in-the-right-place-says-ai-ethics-and-ai-law/ |access-date=March 27, 2024 |website=Forbes |language=en}}</ref><ref name="Anthropic-2023b" />
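How AI-generated comparisons could be turned into preference-model training data can be sketched as below. This is a toy illustration under stated assumptions: the `ai_judge` scoring rule and the chosen/rejected pair format are hypothetical, not Anthropic's method.

```python
# Sketch of assembling RLAIF preference data: an AI judge compares two
# candidate responses against the constitution, and the preferred/rejected
# pair becomes training data for a preference model. The judge here is a
# trivial stand-in (assumption), not a real model.

def ai_judge(prompt, response_a, response_b, constitution):
    # Placeholder heuristic: count how many principles each response
    # textually reflects; a real judge would be a language model.
    def score(r):
        return sum(p.lower() in r.lower() for p in constitution)
    return "a" if score(response_a) >= score(response_b) else "b"

def build_preference_pairs(samples, constitution):
    """samples: list of (prompt, response_a, response_b) triples."""
    pairs = []
    for prompt, a, b in samples:
        winner = ai_judge(prompt, a, b, constitution)
        chosen, rejected = (a, b) if winner == "a" else (b, a)
        pairs.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return pairs
```

In RLHF the chosen/rejected labels would come from human annotators; here they come from the AI judge applying the constitution, which is the distinguishing feature of RLAIF.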

This approach enables the training of AI assistants that are both helpful and harmless, and that can explain their objections to harmful requests, enhancing transparency and reducing reliance on human supervision.<ref name="Bai-2022">{{Citation |last1=Bai |first1=Yuntao |title=Constitutional AI: Harmlessness from AI Feedback |date=December 15, 2022 |arxiv=2212.08073 |last2=Kadavath |first2=Saurav |last3=Kundu |first3=Sandipan |last4=Askell |first4=Amanda |last5=Kernion |first5=Jackson |last6=Jones |first6=Andy |last7=Chen |first7=Anna |last8=Goldie |first8=Anna |last9=Mirhoseini |first9=Azalia}}</ref><ref>{{Cite web |last=Mok |first=Aaron |title=A ChatGPT rival just published a new constitution to level up its AI guardrails, and prevent toxic and racist responses |url=https://www.businessinsider.com/anthropic-new-crowd-sourced-ai-constitution-accuracy-safety-toxic-racist-2023-10 |access-date=January 23, 2024 |website=Business Insider |language=en-US}}</ref>

The "constitution" for Claude included 75 points, including sections from the [[Universal Declaration of Human Rights|UN Universal Declaration of Human Rights]].<ref name="Bai-2022"/><ref name="Time-2023">{{Cite magazine |date=July 18, 2023 |title=What to Know About Claude 2, Anthropic's Rival to ChatGPT |url=https://time.com/6295523/claude-2-anthropic-chatgpt/ |access-date=January 23, 2024 |magazine=TIME |language=en}}</ref>

== Models ==

=== Claude ===
Claude was the initial version of Anthropic's language model, released in March 2023.<ref name="Drapkin-2023">{{Cite web |last=Drapkin |first=Aaron |date=October 27, 2023 |title=What Is Claude AI and Anthropic? ChatGPT's Rival Explained |url=https://tech.co/news/what-is-claude-ai-anthropic |access-date=January 23, 2024 |website=Tech.co |language=en-US}}</ref> Claude demonstrated proficiency in various tasks but had certain limitations in coding, math, and reasoning capabilities.<ref name="Anthropic-2023a">{{Cite web |date=March 14, 2023 |title=Introducing Claude |url=https://www.anthropic.com/news/introducing-claude |website=Anthropic}}</ref> Anthropic partnered with companies such as [[Notion (productivity software)|Notion]] and [[Quora]], which used Claude to help develop its [[Poe (chatbot)|Poe]] chatbot.<ref name="Anthropic-2023a" />

==== Claude Instant ====
Claude was released as two versions, Claude and Claude Instant, with Claude Instant being a faster, less expensive, and lighter version. Claude Instant has an input context length of 100,000 [[Lexical analysis|tokens]] (which corresponds to around 75,000 words).<ref>{{Cite news |last=Yao |first=Deborah |date=August 11, 2023 |title=Anthropic's Claude Instant: A Smaller, Faster and Cheaper Language Model |url=https://aibusiness.com/nlp/anthropic-s-claude-instant-a-smaller-faster-and-cheaper-language-model |work=AI Business}}</ref>
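The token-to-word conversion above can be checked with simple arithmetic; the 0.75 words-per-token ratio is a rough heuristic for English text implied by the figures cited, not an exact property of the tokenizer.

```python
# Back-of-the-envelope conversion between tokens and English words,
# using the rough ratio implied above (100,000 tokens ~ 75,000 words).

WORDS_PER_TOKEN = 0.75  # heuristic for English text (assumption)

def tokens_to_words(n_tokens: int) -> int:
    return round(n_tokens * WORDS_PER_TOKEN)

print(tokens_to_words(100_000))  # → 75000
```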

=== Claude 2 ===
Claude 2 was the next major iteration of Claude, released in July 2023 and available to the general public, whereas Claude 1 had been available only to selected users approved by Anthropic.<ref>{{Cite web |last=Matthews |first=Dylan |date=July 17, 2023 |title=The $1 billion gamble to ensure AI doesn't destroy humanity |url=https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2 |access-date=January 23, 2024 |website=Vox |language=en}}</ref>

Claude 2 expanded its context window from 9,000 tokens to 100,000 tokens.<ref name="Drapkin-2023" /> Features included the ability to upload [[PDF]]s and other documents, enabling Claude to read, summarize, and assist with tasks.

==== Claude 2.1 ====
Claude 2.1 doubled the number of tokens that the chatbot could handle, increasing it to a window of 200,000 tokens, which equals around 500 pages of written material.<ref name="Davis-2023">{{Cite web |last=Davis |first=Wes |date=November 21, 2023 |title=OpenAI rival Anthropic makes its Claude chatbot even more useful |url=https://www.theverge.com/2023/11/21/23971070/anthropic-claude-2-1-openai-ai-chatbot-update-beta-tools |access-date=January 23, 2024 |website=The Verge |language=en}}</ref>

Anthropic states that the new model is less likely to produce false statements compared to its predecessors.<ref name="InfoQ">{{Cite web |title=Anthropic Announces Claude 2.1 LLM with Wider Context Window and Support for AI Tools |url=https://www.infoq.com/news/2023/11/anthropic-announces-claude-2-1/ |access-date=January 23, 2024 |website=InfoQ |language=en}}</ref>

=== {{Anchor|Claude 3}}Claude 3 ===
Claude 3 was released on March 14, 2024, and was claimed in its press release to have set new industry benchmarks across a wide range of cognitive tasks. The Claude 3 family includes three state-of-the-art models in ascending order of capability: Haiku, Sonnet, and Opus. The default version of Claude 3 Opus has a context window of 200,000 tokens, but this is being expanded to 1 million for specific use cases.<ref>{{Cite web |title=Introducing the next generation of Claude |url=https://www.anthropic.com/news/claude-3-family |access-date=March 4, 2024 |website=Anthropic |language=en}}</ref><ref name="Whitney-2024">{{Cite web |last=Whitney |first=Lance |date=March 4, 2024 |title=Anthropic's Claude 3 chatbot claims to outperform ChatGPT, Gemini |url=https://www.zdnet.com/article/anthropics-claude-3-chatbot-claims-to-outperform-chatgpt-gemini/ |access-date=March 5, 2024 |website=ZDNET |language=en}}</ref>

Claude 3 drew attention for demonstrating an apparent ability to recognize that it was being artificially tested during "needle in a haystack" evaluations.<ref>{{Cite web |last=Edwards |first=Benj |date=March 5, 2024 |title=Anthropic's Claude 3 causes stir by seeming to realize when it was being tested |url=https://arstechnica.com/information-technology/2024/03/claude-3-seems-to-detect-when-it-is-being-tested-sparking-ai-buzz-online/ |access-date=March 9, 2024 |website=Ars Technica |language=en-us}}</ref>
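A "needle in a haystack" evaluation buries one out-of-place fact at varying depths in a long filler context and asks the model to retrieve it. The harness below is a minimal sketch; `ask_model` is a hypothetical stand-in that simulates perfect retrieval rather than a real API call.

```python
# Minimal sketch of a needle-in-a-haystack evaluation harness.
# `ask_model` is a placeholder (assumption); a real harness would send
# the context and question to a language model API.

def build_haystack(needle: str, filler: str, depth: float, n_lines: int = 100) -> str:
    """Bury the needle at a fractional depth within n_lines of filler."""
    lines = [filler] * n_lines
    lines.insert(int(depth * n_lines), needle)
    return "\n".join(lines)

def ask_model(context: str, question: str) -> str:
    # Stand-in: scan the context for the needle, simulating a model
    # that retrieves it perfectly.
    for line in context.splitlines():
        if "magic number" in line:
            return line
    return ""

def run_eval(depths=(0.0, 0.5, 0.9)):
    """Return, per depth, whether the needle was retrieved."""
    needle = "The magic number is 42."
    results = {}
    for d in depths:
        haystack = build_haystack(needle, "Filler sentence.", d)
        answer = ask_model(haystack, "What is the magic number?")
        results[d] = needle in answer
    return results
```

Plotting retrieval success against depth and context length is what produces the familiar heat-map summaries of such tests.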

==== Claude 3.5 ====
On June 20, 2024, Anthropic released Claude 3.5 Sonnet, which demonstrated significantly improved performance on benchmarks compared to the larger Claude 3 Opus, notably in areas such as coding, multistep workflows, chart interpretation, and text extraction from images. Released alongside 3.5 Sonnet was the new Artifacts capability in which Claude was able to create code in a dedicated window in the interface and preview the rendered output in real time, such as SVG graphics or websites.<ref>{{Cite web |last=Pierce |first=David |date=June 20, 2024 |title=Anthropic has a fast new AI model — and a clever new way to interact with chatbots |url=https://www.theverge.com/2024/6/20/24181961/anthropic-claude-35-sonnet-model-ai-launch |access-date=June 20, 2024 |website=The Verge |language=en}}</ref>

== Access ==
Limited-use access using Claude 3.5 Sonnet is free of charge, but requires both an e-mail address and a cellphone number. A paid plan is also offered for more usage and access to all Claude 3 models.<ref>{{Cite web |date=May 1, 2024 |title=Introducing the Claude Team plan and iOS app |url=https://www.anthropic.com/news/team-plan-and-ios |access-date=June 22, 2024 |website=Anthropic |language=en}}</ref>


On May 1, 2024, Anthropic announced the Claude Team plan, its first enterprise offering for Claude, and a Claude [[iOS app]].<ref>{{Cite news |last=Field |first=Hayden |date=May 1, 2024 |title=Amazon-backed Anthropic launches iPhone app and business tier to compete with OpenAI's ChatGPT |url=https://www.cnbc.com/2024/05/01/anthropic-iphone-ai-app-business-plan-to-compete-with-openai-announced.html |url-status=live |access-date=May 3, 2024 |work=[[CNBC]]}}</ref>

== Criticism ==
Claude 2 received criticism for its stringent ethical alignment that may reduce usability and performance. Users have been refused assistance with benign requests, for example with the programming question "How can I [[Kill (command)|kill]] all [[Python (programming language)|Python]] processes in my [[Ubuntu]] server?" This has led to a debate over the "alignment tax" (the cost of ensuring an AI system is [[AI alignment|aligned]]) in AI development, with discussions centered on balancing ethical considerations and practical functionality. Critics argued for user autonomy and effectiveness, while proponents stressed the importance of ethical AI.<ref>{{Cite web |last=Glifton |first=Gerald |date=January 3, 2024 |title=Criticisms Arise Over Claude AI's Strict Ethical Protocols Limiting User Assistance |url=https://lightsquare.org/news/criticisms-arise-over-claude-ais-strict-ethical-protocols-limiting-user-assistance |access-date=January 23, 2024 |website=Light Square |language=en}}</ref><ref name="InfoQ" />

== References ==
<references />
== External links ==
*{{Official website|https://claude.ai/}}

[[Category:Artificial intelligence]]
[[Category:Machine learning]]
[[Category:Large language models]]
[[Category:Virtual assistants]]
[[Category:2023 software]]
