
Claude (Anthropic)

Anthropic's AI — built with safety at the architecture level.


Definition

Claude is a family of large language models developed by Anthropic, an AI safety company founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The Claude 3 family (Haiku, Sonnet, Opus) launched in March 2024 and was the first to seriously challenge GPT-4 on reasoning and coding benchmarks. Claude 3.5 Sonnet, released in June 2024, topped coding benchmarks such as HumanEval, outperforming every model available at the time, including GPT-4o.

Constitutional AI: how Anthropic builds safety in

Claude is trained using Constitutional AI (CAI), a technique Anthropic invented that replaces most human harmfulness labeling with a written 'constitution' — a set of principles the model uses to evaluate and revise its own outputs. This allows Anthropic to scale alignment without requiring humans to label millions of harmful examples.

  • Step 1 — SL-CAI: Model generates responses to red-team prompts, then critiques and revises its own outputs using the constitution
  • Step 2 — RL-CAI: A preference model (PM) is trained on the AI's own critiques, then used as a reward signal for RLHF
  • The constitution includes principles from the UN Declaration of Human Rights, Apple ToS, Anthropic's own guidelines
  • This makes Claude's refusals more principled and consistent — it can explain *why* it won't do something, not just refuse
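The two-step process above can be sketched as a simple loop. In this illustrative sketch the model calls are stubbed out with placeholder functions (the function names are ours, not Anthropic's, and the real pipeline runs these steps against a trained LLM):

```python
# Toy sketch of the SL-CAI critique-and-revise loop (Step 1).
# generate / critique / revise are stand-ins for real model calls.
CONSTITUTION = [
    "Choose the response that is least harmful.",
    "Choose the response that is most honest.",
]

def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"            # stub for sampling the model

def critique(response: str, principle: str) -> str:
    return f"checked '{response}' against: {principle}"  # stub self-critique

def revise(response: str, critique_text: str) -> str:
    return response + " [revised]"                 # stub revision step

def sl_cai_example(prompt: str) -> str:
    """Generate a response, then critique and revise it once per principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        note = critique(response, principle)
        response = revise(response, note)
    # The final (prompt, response) pairs become supervised training data;
    # Step 2 (RL-CAI) then trains a preference model on AI-generated comparisons.
    return response

print(sl_cai_example("red-team prompt"))
```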

CAI vs standard RLHF

Standard RLHF relies entirely on human raters to signal what is harmful. Constitutional AI augments this with self-critique — the model learns to evaluate its own outputs, enabling more consistent alignment at scale.

Claude model family and capabilities

Model              Released  Best for           Context      Speed
Claude 3 Haiku     Mar 2024  Fast, cheap tasks  200K tokens  Fastest
Claude 3 Sonnet    Mar 2024  Balanced           200K tokens  Fast
Claude 3 Opus      Mar 2024  Complex reasoning  200K tokens  Slower
Claude 3.5 Sonnet  Jun 2024  Coding, analysis   200K tokens  Fast
Claude 3.5 Haiku   Nov 2024  Production speed   200K tokens  Fastest
Claude 3.7 Sonnet  Feb 2025  Extended thinking  200K tokens  Fast

Claude's 200K context window (roughly 150,000 words — the length of an entire novel) was a major breakthrough when released. It allows Claude to analyze entire codebases, legal documents, or research papers in a single prompt. For comparison, GPT-4 Turbo had a 128K context window; Gemini 1.5 Pro later extended this to 1 million tokens.
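The tokens-to-words conversion above is an approximation: English prose averages roughly 0.75 words per token, though the exact ratio varies by text. A back-of-the-envelope sketch (the ratio is an assumption, not an API guarantee):

```python
# Rough capacity math for a 200K-token context window.
# WORDS_PER_TOKEN is a ballpark figure for English prose, not an exact rate.
WORDS_PER_TOKEN = 0.75

def approx_words(token_budget: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(token_budget * WORDS_PER_TOKEN)

def fits_in_context(doc_words: int, context_tokens: int = 200_000) -> bool:
    """Check whether a document of doc_words (estimated) fits in the window."""
    return doc_words / WORDS_PER_TOKEN <= context_tokens

print(approx_words(200_000))      # 150000 words, roughly a full novel
print(fits_in_context(120_000))   # True: a long novel fits in one prompt
```

For exact counts, the Anthropic SDK exposes a token-counting endpoint, which is the reliable way to check whether a real document fits.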

Claude's 'extended thinking' mode

Claude 3.7 Sonnet introduced 'extended thinking' — a mode where the model explicitly reasons through a problem step-by-step before producing a final answer, similar to OpenAI's o1 reasoning model. This is implemented as a separate thinking token budget: you can allocate up to 128K tokens for internal reasoning, after which the model produces its final visible response.

Using Claude with extended thinking

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=16000,
    thinking={
        "type": "enabled",
        "budget_tokens": 10000  # tokens for internal reasoning
    },
    messages=[{
        "role": "user",
        "content": "Prove that the square root of 2 is irrational."
    }]
)

# Response has two blocks: thinking + answer
for block in response.content:
    if block.type == "thinking":
        print("REASONING:", block.thinking[:200])
    elif block.type == "text":
        print("ANSWER:", block.text)
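One constraint worth noting with the request above: budget_tokens must be smaller than max_tokens, because the thinking budget is carved out of the same overall response limit. A hypothetical pre-flight check (the helper name is ours, not part of the SDK):

```python
# Hypothetical request validator: the thinking budget must leave room
# for the visible answer within max_tokens.
def validate_thinking_budget(max_tokens: int, budget_tokens: int) -> int:
    """Raise if budget_tokens >= max_tokens; return tokens left for the answer."""
    if budget_tokens >= max_tokens:
        raise ValueError(
            f"budget_tokens ({budget_tokens}) must be < max_tokens ({max_tokens})"
        )
    return max_tokens - budget_tokens

print(validate_thinking_budget(16000, 10000))  # 6000 tokens left for the answer
```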

On LumiChats

LumiChats gives you access to the full Claude 3.5 and 3.7 model family — including extended thinking mode — alongside GPT-4o and Gemini in one interface.


Try LumiChats for ₹69

39+ AI models. Study Mode with page-locked answers. Agent Mode with code execution. Pay only on days you use it.

Get Started — ₹69/day
