(Almost) Everything SMEs Need to Know about the EU AI Act
By Mark Sutter
2 August 2026. Mark this date!
For companies operating with exposure to EU markets, 2 August represents a watershed moment in the regulation of AI. The EU AI Act is the world's first comprehensive horizontal AI law, introduced to ensure that AI is safe, trustworthy, and respects European values.
What does it mean for your business? If you operate in, or sell into, the EU, chances are this regulation applies to your organisation and you must comply.
This article breaks down the key points of the AI Act, who needs to worry, the traps where businesses tend to underestimate the work required, and how to comply.
The Compliance "Goldilocks" Zone
If you don't have a dedicated risk function, someone in your company needs to pick up the work of getting you compliant with the EU AI Act. The problem is that they probably already have a day job, and this new regulation is not their area of expertise. Yet most companies we speak to won't need a full-time AI Safety Officer (for now).
So your organisation needs an AI governance specialist, but only part-time: someone experienced enough to handle the intricacies of AI regulation who also understands how the business operates. That's the fractional model 3PEAT offers. It's not a compromise; it's the smarter way to handle compliance while you're still building out your AI capability. Stay lean, but stay covered.
The Great Misconception: "We Are Too Small for This"
This is the number one trap we see. Businesses look at their 15-person team and assume they are exempt. They are not.
The Act is not based on employee count or revenue. It is based purely on the impact of the AI system on European citizens.
If you fall into any of these categories, the Act applies to you, regardless of size:
- Developers (Providers): You build the AI system.
- Users (Deployers): You use a high-risk AI system for professional purposes (e.g., using a third-party AI to screen job applications).
- Global reach: Your company is in New York or Mumbai, but your AI system's output is used within the EU (extra-territorial reach).
Small businesses and startups do get some breaks: they pay lower fines, get priority access to regulatory 'sandboxes' (test environments), and have conformity assessment fees scaled to their size. But the core requirements must still be met.
Cheat Sheet: Compliance in Three (Risk) Buckets
The EU AI Act doesn't treat all AI the same. It uses a risk-based approach, and the entire law asks one simple question: what is your AI being used for?
| Risk Level | What It Is | Your Compliance Action |
|---|---|---|
| 1. Prohibited | AI that poses an ‘unacceptable risk’. Think social scoring (like Black Mirror), real-time biometric surveillance in public, or manipulative AI. | STOP. These systems are banned in the EU. |
| 2. High-Risk | AI used in critical infrastructure, medical devices, law enforcement, credit scoring, or HR (hiring/firing). | HEAVY LIFT. You face strict rules: data quality standards, human oversight, pre-market audits, and fundamental rights assessments. |
| 3. Limited/Minimal | Chatbots, spam filters, recommendation engines, or games. | LIGHT TOUCH. Most just need transparency (e.g., telling a user "You are talking to an AI"). |
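The three buckets above amount to a simple triage: start from the use-case, not the technology. Here is a purely illustrative sketch of that triage logic in Python. This is not legal advice and not part of the Act: the category lists are simplified examples lifted from the table above, and real classification requires a proper assessment against the Act's annexes.

```python
# Illustrative only: a simplified triage of AI use-cases into the Act's
# three risk buckets. The keyword sets below are examples from this
# article, not the Act's actual (much longer) legal definitions.

PROHIBITED = {
    "social scoring",
    "real-time public biometric surveillance",
    "manipulative ai",
}
HIGH_RISK = {
    "critical infrastructure",
    "medical device",
    "law enforcement",
    "credit scoring",
    "hiring",
    "firing",
}

def triage(use_case: str) -> str:
    """Return the risk bucket and headline compliance action for a use-case."""
    uc = use_case.strip().lower()
    if uc in PROHIBITED:
        return "Prohibited - STOP: banned in the EU"
    if uc in HIGH_RISK:
        return "High-Risk - heavy lift: data quality, human oversight, audits"
    return "Limited/Minimal - light touch: transparency obligations"

print(triage("hiring"))   # lands in the High-Risk bucket
print(triage("chatbot"))  # lands in the Limited/Minimal bucket
```

Note the default: anything not explicitly prohibited or high-risk falls into the light-touch bucket, which mirrors how the Act's risk pyramid works in practice.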
Where Businesses Underestimate the Challenge
If you think this is just an IT problem, think again. Here is where companies commonly get caught out:
1. Underestimating what counts as "High-Risk"
You may not be making killer robots, but are you using an AI tool to rank job applicants? Using AI to predict whether someone will default on a loan? Using it for predictive maintenance in a hospital? These are all classified as High-Risk. That moves compliance from a checkbox exercise to producing evidence: documentation, a working governance framework, and audit readiness.
2. Ignoring the AI Supply Chain (The 'Deployer' Problem)
This is perhaps the biggest hidden pitfall. Many companies assume that if they buy an AI tool (like a hiring algorithm) from a Vendor, the Vendor handles compliance.
Nope. Under the AI Act, you are a 'Deployer', and you are responsible for how you use that high-risk system. You must ensure proper data governance and human oversight, and you must monitor the system's performance in your specific context. The responsibility is yours!
3. Thinking You Have Time
The Act began its phased rollout in August 2024. The bans on 'Prohibited AI' took effect in February 2025. Most of the complex rules for high-risk systems apply from 2 August 2026. That puts a corporate AI policy firmly in the 'start deciding on resources now' phase.
Strategic Peace of Mind
The EU AI Act is not a suggestion. If your business builds or relies on high-risk AI, you need to fundamentally shift how you operate, and the way you achieve that is through governance.
What most people misunderstand about having an AI Safety Officer is that it's not just about avoiding fines. It's about your team actually feeling confident enough to move fast. When someone credible has looked at your setup and said “you are good to go”, people stop second-guessing every decision.
Working with 3PEAT, you work with someone with no hidden agenda. We set you up, tell you the truth about where your risks actually are, and show you how to manage them. With penalties reaching up to €35 million or 7% of total worldwide turnover, that is a small investment in peace of mind.
You can also find and read the full text of the regulation, Regulation (EU) 2024/1689, on EUR-Lex.
Ready to create your own AI Framework?
Use our guided framework builder to list your AI systems, classify risk, and generate a practical governance framework your team can implement immediately.
Create your own AI Framework