Frequently Asked Questions
Answers to common questions about using ExplAIn and managing AI applications with confidence.
What is ExplAIn?
ExplAIn is an AI tracker and search bar. It is a comprehensive, curated database of AI tools and models. It provides transparency scores based on criteria like compliance, ethical use, and trustworthiness. Our goal is to help users understand the risks and benefits of AI tools, so they can make informed decisions about which ones to use (if at all), and how to use them responsibly.
Who built ExplAIn?
ExplAIn is developed by Anove, a Dutch-French startup specializing in AI governance and compliance. Based in The Hague, our team combines expertise in law, technology, and ethics to build tools that support responsible AI adoption.
Why was ExplAIn built?
ExplAIn was built to promote AI literacy and transparency and empower users with knowledge. We aim to foster a culture of Responsible AI by making it easier to assess and compare tools. We believe everyone should have access to clear, unbiased information about AI tools so they can choose wisely and use them safely.
How transparent is the AI you use? ExplAIn helps you find out.
Every day, you rely on AI tools. But how much do you really know about them? Who built them? Where are they based? Do they follow the rules? Are they using your data without your knowledge?
ExplAIn gives you answers. We evaluate AI tools and models for transparency, so you can make informed choices. With ExplAIn, you'll discover:
- Who's behind the tool? (Company, location, and reputation)
- Do they play by the rules? (Compliance with regulations like GDPR and the EU AI Act)
- Who else may have access to your data? (Subprocessors and third parties)
- What's missing? (Gaps in policies, unclear training data, or hidden risks)
Too many AI tools lack transparency, leaving you in the dark about privacy, security, and ethics. ExplAIn shines a light on what matters, so you can use AI with confidence.
Database and Updates
Is the ExplAIn database updated?
Yes. We actively monitor AI tools and models to keep our assessments current. New tools are added regularly based on user suggestions and market developments. Minor updates are made daily, while major updates are announced on our website. Have a suggestion or notice something missing? Let us know via our chat; we're always happy to improve.
How many models and tools are in ExplAIn?
As of January 2026, we are tracking 215+ AI models and 161+ AI applications. The numbers grow weekly, so check back often!
Assessments and Methodology
How does ExplAIn assess AI tools?
Our Anove compliance team evaluates tools using five key transparency criteria: supply chain transparency (20%), compliance transparency (20%), policy transparency (25%), technical transparency (25%), and ethical and operational transparency (10%). We are happy to share more details about our methodology if you’re interested.
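For illustration, here is a minimal sketch of how a composite score could be computed from these five weighted criteria. The per-criterion sub-scores, the 0-100 scale, and the simple weighted-average aggregation are assumptions made for the example, not our exact methodology:

```python
# Illustrative only: assumes each criterion is scored 0-100 and that the
# composite is a simple weighted average of the five criteria.
WEIGHTS = {
    "supply_chain": 0.20,
    "compliance": 0.20,
    "policy": 0.25,
    "technical": 0.25,
    "ethical_operational": 0.10,
}

def transparency_score(sub_scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-100) into one weighted score."""
    return sum(WEIGHTS[criterion] * score for criterion, score in sub_scores.items())

# Example: a tool that is strong on policy but weak on supply chain transparency.
example = {
    "supply_chain": 40,
    "compliance": 70,
    "policy": 90,
    "technical": 80,
    "ethical_operational": 60,
}
print(transparency_score(example))  # 70.5
```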
Why do different models from the same provider have different scores in ExplAIn?
Providers often offer multiple models, such as Lite vs. Pro versions or models fine-tuned for specific tasks. We assess each model individually because their capabilities, risks, and compliance levels can vary significantly.
I think you made a mistake in your assessment of an AI model or tool. How can it be corrected?
We welcome feedback. If you spot an error, contact us via chat with details, and we’ll review it promptly.
I think ExplAIn is biased.
We strive for objectivity and welcome critical feedback. If you suspect bias, tell us why; we’ll review your concerns and explain our reasoning. While we may not always agree, we’re committed to fairness and continuous improvement.
Usage and Permissions
Can I use ExplAIn for commercial purposes?
Our free tool is designed for personal and professional use, such as research or decision-making. Commercial exploitation, such as reselling data or integrating our scores into paid products, requires permission. For advanced features or commercial use, reach out via chat; we'd love to discuss partnerships.
How can my organization use ExplAIn?
Organizations can use ExplAIn to help them decide which AI system to use, audit their AI inventory, set governance policies, train teams on responsible AI use, and stay updated on new tools and compliance changes. Do you need enterprise features? Contact us to learn more.
Am I complying with all EU Regulations by using ExplAIn?
ExplAIn is designed to support your compliance efforts, but using it alone does not guarantee full compliance with all EU regulations.
ExplAIn helps you evaluate whether an AI tool aligns with key principles of the EU AI Act (e.g., risk classification, transparency obligations) and GDPR (e.g., data protection, accountability). By comparing tools, you can select those that are more likely to meet regulatory requirements. Organizations can use ExplAIn to audit their AI inventory, identify high-risk tools, and align their usage with internal policies, laws, and regulations.
ExplAIn does not cover additional GDPR steps, such as implementing technical and organizational measures (TOMs) to protect personal data, maintaining a Register of Processing Activities (RoPA), conducting Data Protection Impact Assessments (DPIAs) for high-risk processing, or ensuring lawful bases for data processing (e.g., consent, contractual necessity). These additional features are available for a fee in our AI Management System (AIMS).
Depending on the risk level of the AI system you use (e.g., prohibited, high-risk, limited-risk), you may need to comply with other obligations, such as conducting conformity assessments, registering high-risk systems in the EU database, and meeting documentation, logging, and human oversight requirements. These requirements are fully covered by our AI Management System (AIMS). For more information, visit AI Management.
Some industries (e.g., healthcare, finance) must comply with additional regulations such as the Medical Devices Regulation (MDR), the electronic Identification, Authentication and Trust Services (eIDAS) Regulation, the Digital Services Act (DSA), the Digital Operational Resilience Act (DORA), or the Network and Information Systems Directive 2 (NIS 2), which ExplAIn does not address.
Steps toward full compliance are:
- Use ExplAIn as a starting point: filter tools by transparency scores and compliance criteria to shortlist safer options.
- Consult experts: for high-risk AI systems or complex use cases, work with legal or compliance professionals to ensure all obligations are met.
- Stay informed: track regulatory changes and adjust your practices accordingly.
- Keep records: document your AI tool assessments, risk mitigation measures, and compliance actions (see the sketch after this list).
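As an illustration of the record-keeping step, here is a minimal, hypothetical sketch of how an organization might log its AI tool assessments for an audit trail. The field names and the JSON-lines file are assumptions for the example, not a prescribed or official format:

```python
# Hypothetical record-keeping sketch: the fields and the JSON-lines log file
# are illustrative assumptions, not a required or official format.
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIToolAssessment:
    tool: str
    assessed_on: str           # ISO date of the assessment
    transparency_score: float  # e.g., a score taken from ExplAIn
    risk_level: str            # e.g., "minimal", "limited", "high"
    mitigations: list[str]     # risk mitigation measures taken
    approved: bool             # outcome of the internal review

def log_assessment(record: AIToolAssessment, path: str = "ai_assessments.jsonl") -> None:
    """Append one assessment as a JSON line, building an audit trail over time."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_assessment(AIToolAssessment(
    tool="Example Chat Assistant",
    assessed_on=date.today().isoformat(),
    transparency_score=70.5,
    risk_level="limited",
    mitigations=["no personal data in prompts", "outputs fact-checked"],
    approved=True,
))
```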
We have developed AIMS, an automated solution for full compliance with the EU AI Act, which you can read more about here.
Trust and Risk Management
I discovered I am using an AI tool with a low trust score. What should I do?
Switch to a higher-scoring alternative by filtering tools by use case in ExplAIn. If you continue using the tool, avoid sharing confidential or personal data, and delete your data periodically if possible. In any case, always fact-check outputs before acting on them.
Is my data safe if I share it with an AI tool?
Your data's safety depends on the tool's policies and practices. ExplAIn's transparency scores can help you evaluate risks, but we recommend always reviewing the tool's privacy policy and compliance (e.g., with GDPR), avoiding sharing sensitive data with low-trust tools, and using tools that let you delete your data or opt out of data training.
How can I delete my history in AI tools?
Policies vary by tool. Before using any AI tool, check if it offers a data deletion feature, how long data is retained, and whether your data is used for training.
Is my data safe if I use ExplAIn?
We do not collect or store your personal data, and you don't need to create an account or log in to consult our database. You can explore our AI tool assessments and transparency scores anonymously.
We only use cookies to ensure the website functions properly, enhance security, and improve user experience. These cookies are used in accordance with our Cookie Policy. You can revisit your consent preferences at any time. Your privacy is our priority. If you have any questions, feel free to reach out!
Responsible AI and Safety
What is Responsible AI?
Responsible AI means designing, developing, and deploying AI in ways that are ethical, transparent, accountable, and safe. At Anove, we help organizations put these principles into practice.
Is AI dangerous?
AI is a tool and not inherently dangerous, but misuse can cause harm. Risks include bias, misinformation, privacy violations, and legal exposure. Our mission is to help users spot risks and use AI safely and ethically.
What are the risks of AI?
While AI boosts productivity, it comes with various risks, such as:
- AI trained on biased data can perpetuate or amplify discrimination; cases have been documented in hiring, lending, and law enforcement;
- Output may contain hallucinations (incorrect, misleading, or fabricated information);
- AI systems can be hacked, manipulated (e.g., adversarial attacks), or exploited to spread malware or phishing scams;
- AI tools may collect, analyze, or leak sensitive data without consent, violating privacy rights (e.g., facial recognition, data scraping);
- AI-generated content (e.g., deepfakes, synthetic media) can spread disinformation, manipulate public opinion, or damage reputations;
- AI outputs may infringe on copyrights or proprietary data, creating legal exposure for users (plagiarism);
- Automation can disrupt workforce stability without proper reskilling programs.
Therefore, always assess tools before use and monitor outputs critically.
Feedback and Contributions
How can I contribute to ExplAIn?
You can contribute by suggesting tools we are missing, sharing feedback on assessments, or partnering with us to integrate ExplAIn into your workflows.
Who can I contact for more questions?
You can reach us via chat or email.