Claude 3.7 Sonnet
by Anthropic
Hybrid reasoning model in the Claude 3 family. Claude 3.7 Sonnet was trained on a proprietary mix of publicly available information from the Internet, non-public data from third parties, data provided by data-labeling services and paid contractors, and data Anthropic generates internally. While the training data includes publicly available information from the internet through November 2024, Claude 3.7 Sonnet's knowledge cut-off date is the end of October 2024. Anthropic employed several data cleaning and filtering methods, including deduplication and classification (a minimal deduplication sketch appears below). The Claude 3 suite of models has not been trained on any user prompt or output data submitted to Anthropic by users or customers, including free users, Claude Pro users, and API customers.

When Anthropic's general-purpose crawler obtains data by crawling public web pages, Anthropic follows industry practice with respect to the robots.txt instructions that website operators use to indicate whether they permit crawling of the content on their sites. In accordance with its policies, the crawler does not access password-protected or sign-in pages or bypass CAPTCHA controls, and Anthropic conducts diligence on the data it uses. Anthropic operates its crawling system transparently, so website operators can easily identify Anthropic visits and signal their preferences (see the robots.txt example below).

Claude was trained with a focus on being helpful, harmless, and honest. Training techniques include pretraining on large, diverse data to acquire language capabilities through methods such as word prediction, as well as human feedback techniques that elicit helpful, harmless, and honest responses. Anthropic used a technique called Constitutional AI to align Claude with human values during reinforcement learning by explicitly specifying rules and principles drawn from sources such as the UN Universal Declaration of Human Rights (a sketch of the critique-and-revision loop appears below). Starting with Claude 3.5 Sonnet, Anthropic added a principle to Claude's constitution encouraging respect for disability rights, sourced from its research on Collective Constitutional AI. Some of the human feedback data used to fine-tune Claude was made public alongside Anthropic's RLHF and red-teaming research.

Once its models are fully trained, Anthropic runs a suite of safety evaluations. Its Safeguards team also runs continuous classifiers to monitor prompts and outputs for harmful use cases that violate its Acceptable Use Policy (AUP).
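The deduplication step mentioned above can be illustrated concretely. Anthropic has not published its pipeline, so the following is only a minimal sketch of exact document-level deduplication via content hashing; normalize and deduplicate are hypothetical helper names, and real pipelines typically also use fuzzy (near-duplicate) methods.

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies hash alike.
    return " ".join(text.lower().split())

def deduplicate(documents: list[str]) -> list[str]:
    """Keep the first occurrence of each distinct document (exact-match dedup)."""
    seen: set[str] = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["The cat sat.", "the cat  SAT.", "A different document."]
print(deduplicate(corpus))  # ['The cat sat.', 'A different document.']
```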
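On the crawling side, robots.txt is the standard mechanism by which a website operator signals the preferences described above. The sketch below uses Python's standard urllib.robotparser to test whether a crawler may fetch a page; "ClaudeBot" is the user-agent name widely documented for Anthropic's crawler, and https://example.com is a placeholder site, so treat both as assumptions to verify against current documentation.

```python
from urllib import robotparser

# Check whether a crawler user agent is permitted to fetch a URL,
# based on the site's robots.txt.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder site
rp.read()

# "ClaudeBot" is the user-agent token documented for Anthropic's crawler;
# substitute the exact token a site's robots.txt targets if it differs.
allowed = rp.can_fetch("ClaudeBot", "https://example.com/some/page")
print("crawl permitted" if allowed else "crawl disallowed")
```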
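Constitutional AI, as described in Anthropic's published research, includes a supervised phase in which the model critiques and revises its own outputs against written principles before a reinforcement-learning phase. The skeleton below is an illustrative sketch of that critique-and-revision loop only, not Anthropic's implementation; generate() is a hypothetical stand-in for a language-model call, and the two constitution entries are paraphrased.

```python
# Illustrative sketch of the Constitutional AI critique-and-revision loop.
# `generate` is a hypothetical stand-in for a language-model call.

CONSTITUTION = [
    "Please choose the response that is most helpful, honest, and harmless.",
    "Please choose the response that most respects the rights of people with disabilities.",
]

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"<model output for: {prompt[:48]}...>"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response against the principle: {principle}\n"
            f"Prompt: {user_prompt}\nResponse: {response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    # In the published method, revised responses become supervised
    # finetuning data before the reinforcement-learning (RLAIF) phase.
    return response

print(critique_and_revise("Explain how vaccines work."))
```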
Potential Risks
1 consideration identified
Review recommended before use
These considerations are automatically identified based on publicly available information about the vendor and AI catalog data. Actual risks may vary based on your specific use case and implementation.
Categories: Privacy & Security, Compliance