by Anthropic
Claude 3 Opus is the most powerful model in the Claude 3 family, released by Anthropic in March 2024. It excels in complex reasoning, advanced coding, nuanced understanding, and multimodal tasks (e.g., analyzing images, documents, and charts). The model is optimized for high-stakes, high-impact applications and demonstrates state-of-the-art performance on benchmarks like MMLU, GPQA, and human preference evaluations. Claude 3 Opus was trained on a proprietary mix of publicly available internet data (up to August 2023), private datasets, and internally generated data. It underwent RLHF (Reinforcement Learning from Human Feedback) and Constitutional AI fine-tuning to align with principles like helpfulness, honesty, and harmlessness. The model is multimodal (text and vision) and multilingual. Safety evaluations show strong alignment, with low rates of harmful outputs and high resistance to jailbreaks. It was deployed under AI Safety Level 2 (ASL-2) protections.
Built by Anthropic in the US. Your data passes through them.
We watch for changes to their terms so you don't have to.
One thing to think about before letting it touch your work
The kinds of information this AI typically takes in
The people and company behind this AI
Where the company is officially based.
What this maker has officially told EU regulators about how their AI works.
These four sections are required by the EU AI Act. Auditors and compliance teams use them; feel free to skim.
If something goes wrong inside the EU, this is who to contact.
Stamps and certificates that say the AI passed EU checks.
Which EU countries this AI is sold in.
What the maker has said publicly about how the AI works.
Review recommended before use
Other companies this maker works with that may end up handling your data.
The legal documents — terms, privacy, security — that spell out the rules.
Discover EU-based alternatives for this AI application.