Ministral 3 3B Base 2512
by Mistral AI
AI Model
Low Confidence
The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities. This model is the base pre-trained version, not fine-tuned for instruction or reasoning tasks, making it ideal for custom post-training. For instruction- and chat-based use cases, we recommend using Ministral 3 3B Instruct 2512. The Ministral 3 family is designed for edge deployment and runs on a wide range of hardware: Ministral 3 3B can even be deployed locally, fitting in 16 GB of VRAM in BF16, and in under 8 GB of RAM/VRAM when quantized.
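The memory figures above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the "3B" in the name means roughly 3 billion parameters and counts weight storage only; the quoted 16 GB figure also covers activations, KV cache, and framework overhead, which this estimate deliberately ignores.

```python
# Rough weight-memory estimate for a 3B-parameter model (a sketch only;
# real VRAM usage also includes activations, KV cache, and runtime
# overhead, which is why deployment figures exceed the raw weight size).
PARAMS = 3e9  # assumed parameter count, inferred from the "3B" name


def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Memory needed to hold the weights alone, in gigabytes."""
    return params * bytes_per_param / 1e9


bf16_gb = weight_memory_gb(PARAMS, 2.0)   # BF16: 2 bytes per parameter
int4_gb = weight_memory_gb(PARAMS, 0.5)   # 4-bit quantized: 0.5 bytes

print(f"BF16 weights: ~{bf16_gb:.1f} GB, 4-bit weights: ~{int4_gb:.1f} GB")
```

Both estimates sit comfortably under the page's 16 GB and 8 GB figures, consistent with the remainder being inference-time overhead.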
Transparency Score
60%
Moderate Transparency
Supply Chain
20/20
Compliance
10/20
Policy
10/25
Technical
15/25
Ethical & Operational
5/10
Openness Assessment
7% Open
1 Open
0 Partial
0 Closed
13 Unknown
Availability
0/5 open
Documentation
1/6 open
Access Methods
0/3 open
Capabilities
Text Generation
Code Generation
Conversation
Summarization
Translation
Vendor Information
Complete information about the vendor/provider of this AI application
Mistral AI
Contact Information
Registered Address
15 rue des Halles, Paris, 75001, France
EU AI Act Provider Information
Verification Status:
Unverified
Mistral AI
15 rue des Halles, Paris, 75001, France
Compliance Documents
CE Marking: Not Applicable
Known Limitations
Mistral models have a finite context window (e.g., 32k tokens for some versions), so they may struggle with very long documents or conversations, potentially losing track of earlier details.
While strong at many tasks, the models can make logical errors or oversimplify nuanced reasoning, especially in highly technical or abstract domains.
Mistral models are trained on data up to a specific cutoff date (e.g., November 2024 for some versions); they lack real-time or post-cutoff knowledge unless fine-tuned or augmented with external tools.
While multilingual, performance is generally stronger in high-resource languages (e.g., English, French) than in low-resource languages.
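The finite context window noted above is something a deployment should guard for explicitly. The sketch below is a hypothetical pre-flight check: the 32k limit, the 4-characters-per-token heuristic, and the output reservation are all illustrative placeholders, not values from the model card; a real integration would use the model's actual tokenizer and documented limit.

```python
# Guarding against the finite context window (a sketch; the limit and
# the crude token estimate below are placeholder assumptions).
CONTEXT_TOKENS = 32_000  # example limit cited for some Mistral versions


def approx_token_count(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~4 characters per token
    # for English-like text.
    return max(1, len(text) // 4)


def fits_in_context(prompt: str, reserve_for_output: int = 512) -> bool:
    """True if the prompt plus a reserved output budget fits the window."""
    return approx_token_count(prompt) + reserve_for_output <= CONTEXT_TOKENS


print(fits_in_context("Summarize this report."))  # short prompt fits
```

Prompts that fail the check would need truncation, chunking, or a retrieval step before being sent to the model.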
Supply Chain Network
Visual representation of the vendor's digital supply chain relationships
Subprocessors
Third-party vendors and subprocessors used by this vendor
Confluent
subprocessor
Oracle
subprocessor
Google Cloud Platform
subprocessor
Salesforce
subprocessor
Microsoft, Inc.
subprocessor
Intercom
subprocessor
Ory Corp.
subprocessor
Policies & Documents
Legal, privacy, and compliance documentation
Legal & Terms
Privacy & Security
Compliance
Compliance & Risk
Get insights into risk by running assessments on this AI application.
Data Categories
Types of data commonly processed by this application
User Content
Conversation Logs
Code Snippets
PII
Added: February 3, 2026
Updated: February 11, 2026
🇪🇺 European-based Alternatives
Discover AI solutions from European providers
Devstral Small 2 (24B)
Mistral AI
AI Model
Mistral Saba
Mistral AI
AI Model
Mistral Large 2
Mistral AI
AI Model
Mistral Large 3
Mistral AI
AI Model