© 2026 Anove International B.V. All rights reserved.

Listing: AI Model
Provider: Anthropic
Origin: US
Updated: February 10, 2026 · Added: February 10, 2026

Claude 3.7 Sonnet

by Anthropic

AI Model · Low Confidence

Hybrid reasoning model in the Claude 3 family. Claude 3.7 Sonnet was trained on a proprietary mix of publicly available information from the Internet, non-public data from third parties, data provided by data-labeling services and paid contractors, and data Anthropic generates internally. Although trained on publicly available internet data through November 2024, Claude 3.7 Sonnet's knowledge cut-off date is the end of October 2024. Anthropic employed several data cleaning and filtering methods, including deduplication and classification. The Claude 3 suite of models has not been trained on any user prompt or output data submitted by users or customers, including free users, Claude Pro users, and API customers.

When Anthropic's general-purpose crawler obtains data by crawling public web pages, Anthropic follows industry practices with respect to the robots.txt instructions that website operators use to indicate whether they permit crawling of the content on their sites. In accordance with its policies, Anthropic's general-purpose crawler does not access password-protected or sign-in pages or bypass CAPTCHA controls, and Anthropic conducts diligence on the data it uses. Anthropic operates its general-purpose crawling system transparently, so website operators can easily identify Anthropic's visits and signal their preferences to Anthropic.

Claude was trained with a focus on being helpful, harmless, and honest. Training techniques include pretraining on large, diverse data to acquire language capabilities through methods like word prediction, as well as human-feedback techniques that elicit helpful, harmless, and honest responses. Anthropic used a technique called Constitutional AI to align Claude with human values during reinforcement learning, by explicitly specifying rules and principles based on sources like the UN Declaration of Human Rights.
Starting with Claude 3.5 Sonnet, Anthropic added an additional principle to Claude's constitution to encourage respect for disability rights, sourced from Anthropic's research on Collective Constitutional AI. Some of the human-feedback data used to fine-tune Claude was made public alongside Anthropic's RLHF and red-teaming research. Once its models are fully trained, Anthropic runs a suite of safety evaluations. Its Safeguards team also runs continuous classifiers to monitor prompts and outputs for harmful use cases that violate its Acceptable Use Policy (AUP).
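The robots.txt handling described above follows the standard Robots Exclusion Protocol. A minimal sketch of how any compliant crawler checks a URL before fetching it, using Python's standard-library `urllib.robotparser` (the rules, URLs, and the use of `ClaudeBot` as the user-agent string here are illustrative; this is not Anthropic's actual crawler code):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules, invented for this example: the
# operator blocks ClaudeBot from /private/ but allows everything else.
ROBOTS_TXT = """\
User-agent: ClaudeBot
Disallow: /private/

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler checks each URL against the rules before fetching.
print(parser.can_fetch("ClaudeBot", "https://example.com/articles/1"))  # True
print(parser.can_fetch("ClaudeBot", "https://example.com/private/x"))   # False
```

In practice a crawler would load the live file with `parser.set_url(...)` and `parser.read()`; parsing an inline string keeps the sketch self-contained.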

Visit Website
Transparency Score
40%
Limited transparency
Scale: 0 · 25 · 50 · 75 · 100
  • Supply Chain: 20/20
  • Compliance: 10/20
  • Policy: 10/25
  • Technical: 0/25
  • Ethical & Operational: 0/10
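The overall score is straightforward arithmetic over the category breakdown: points earned divided by points possible. A minimal sketch with the values copied from the breakdown above (this illustrates the arithmetic only, not Anove's internal scoring code):

```python
# Category scores as (earned, maximum), taken from the breakdown above.
categories = {
    "Supply Chain":          (20, 20),
    "Compliance":            (10, 20),
    "Policy":                (10, 25),
    "Technical":             (0, 25),
    "Ethical & Operational": (0, 10),
}

earned = sum(score for score, _ in categories.values())
possible = sum(maximum for _, maximum in categories.values())
overall = 100 * earned // possible  # percentage of possible points

print(f"{overall}%")  # 40%
```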
02 · Behind the product

Vendor Information

Complete information about the vendor/provider of this AI application

Company Name
Anthropic
View all products →
Legal Entity
Anthropic PBC
Trading as: Anthropic
Website
anthropic.com
Email
press@anthropic.com
Registered Address
548 Market Street, San Francisco, CA, 94105, US
On the map

Where they operate

Registered headquarters of the provider entity.

Headquarters
03 · Regulation · EU AI Act

EU AI Act Provider Information

Disclosure published under EU AI Act articles covering provider identity, conformity, and transparency obligations.

Verification Status: Unverified
EU Authorized Representative
Anthropic Ireland, Limited
6th Floor South Bank House, Barrow Street, Dublin, Leinster, IE
Compliance Documents
  • CE Marking: Pending
  • Instructions for Use
Available in EU Member States
Transparency disclosures
04 · Risk profile


Potential Risks

1 consideration identified

Review recommended before use

High: 1
These considerations are automatically identified based on publicly available information about the vendor and AI catalog data. Actual risks may vary based on your specific use case and implementation.
05 · Network

Supply Chain Network

Visual representation of the vendor's digital supply chain relationships

06 · Paperwork

Policies & Documents

Legal, privacy, and compliance documentation

Compliance
  • Anthropic's Responsible Scaling Policy
AI Transparency
  • Compliance Framework SB53: Anthropic's compliance framework for SB53

Compliance Certification
  • Other
Other
  • European Union
07 · EU-based

EU Alternatives

Discover EU-based alternatives for this AI application.

01 · Mistral Medium · Mistral AI (AI Model)
02 · Codestral · Mistral AI (AI Model)
03 · Magistral Small 1.0 · Mistral AI (AI Model)
04 · Mistral AI · Mistral (AI Model)
Browse all EU alternatives