What will change in August
In August 2026, the bulk of the European Union's Artificial Intelligence Act becomes applicable. It is the first comprehensive AI legislation in the world.
The European Commission released its final implementation guidelines on February 2, 2026, focusing in particular on Article 6, which sets the classification rules for high-risk AI systems. From August, every company whose AI systems fall under the law will have to meet specific obligations.
What's different now? February 2026 marks the transition from legal theory to operational triage. Companies that waited to see what would happen now have six months to prepare.
The law classifies AI systems into four risk categories: unacceptable, high, limited, and minimal. Each category carries different requirements. Most SMEs using tools like ChatGPT, virtual assistants, or data-analysis software will fall into the limited or minimal risk categories - but that doesn't mean zero obligations.
What it means for your business
First, breathe. If you're a Portuguese SME using common AI tools, you're probably not in the "high risk" category. High-risk systems include:
- AI used in recruitment for automated decisions
- AI to approve financial credit
- AI in medical devices
- AI in critical safety systems
If you use ChatGPT to answer emails, Copilot to write code, or data analysis tools, you're in the limited or minimal risk category.
But be careful: even in these categories there are obligations. The main one is transparency. You must:
- Inform customers when they are interacting with AI (you can't pretend it's human)
- Document which AI systems you use and for what purpose
- Monitor for problems or errors after deployment
- Designate a person responsible for AI issues
If you don't comply, there are fines. For most violations they can reach €15 million or 3% of worldwide annual turnover, and up to €35 million or 7% for prohibited practices (for SMEs, the lower of the two amounts applies). The EU is not playing around with this.
The positive side? The law also creates a single market for AI in Europe. If you comply in Portugal (the rules are EU-wide, applied directly), you can expand to other EU countries without additional obstacles.
How to prepare until August
You have six months. It's enough time if you start now. Here's the practical plan:
March 2026 - Inventory (2 weeks)
- List all AI tools you use
- For each one, note: supplier, purpose, what data it accesses, who uses it
- Identify if it interacts directly with customers
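The inventory above can live in something as simple as a spreadsheet or a short script. Here is a minimal sketch in Python; the tool names and field names are illustrative suggestions, not anything prescribed by the law:

```python
import csv

# Illustrative inventory: one record per AI tool in use.
# The field names are our own suggestion, not mandated by the AI Act.
inventory = [
    {"tool": "ChatGPT", "supplier": "OpenAI",
     "purpose": "drafting customer emails",
     "data_accessed": "customer messages", "users": "support team",
     "customer_facing": True},
    {"tool": "GitHub Copilot", "supplier": "GitHub",
     "purpose": "code assistance",
     "data_accessed": "source code", "users": "developers",
     "customer_facing": False},
]

# Write the register to a CSV you can hand to a lawyer or auditor.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=inventory[0].keys())
    writer.writeheader()
    writer.writerows(inventory)

# Quick view of which systems interact directly with customers
# (these are the ones that will need transparency notices).
customer_facing = [r["tool"] for r in inventory if r["customer_facing"]]
print(customer_facing)
```

A spreadsheet does the same job; the point is that the register exists, is current, and answers the questions in the list above.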
April 2026 - Classification (2 weeks)
- For each system, determine risk category
- Most will be "limited" or "minimal"
- If in doubt about high risk, consult a lawyer or consultant
May 2026 - Documentation (1 month)
- Create simple document per system:
  * What it does
  * What data it uses
  * Who is responsible
  * How to monitor problems
- You don't need 50 pages; you need clarity
June 2026 - Transparency (2 weeks)
- If AI interacts with customers, add notices:
  * "This response was generated with AI assistance"
  * "This chatbot is automated"
- Update privacy policy if needed
July 2026 - Monitoring (2 weeks)
- Define process to report AI problems
- Train employees on new rules
- Run an internal test: does everyone know what to do?
August 2026 - Final review
- Verify all documentation is accessible
- Confirm transparency notices are active
- Ensure there's a designated responsible person
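The August review can even be automated against your inventory. A minimal sketch, assuming records shaped like the inventory suggested above (the field names and checks are our own illustration, not a legal standard):

```python
# Fields we choose to require in each system's record (illustrative).
REQUIRED_FIELDS = ["purpose", "responsible_person", "monitoring_process"]

def compliance_gaps(system: dict) -> list:
    """Return the checks a system still fails; an empty list means it passes.

    The checks mirror the August review: documentation fields present,
    a responsible person named, and a transparency notice wherever the
    system is customer-facing.
    """
    gaps = [f"missing {field}" for field in REQUIRED_FIELDS
            if not system.get(field)]
    if system.get("customer_facing") and not system.get("transparency_notice"):
        gaps.append("customer-facing system without a transparency notice")
    return gaps

# Example record with two gaps: no responsible person, no notice.
chatbot = {"tool": "support chatbot", "purpose": "answer FAQs",
           "responsible_person": None,
           "monitoring_process": "weekly log review",
           "customer_facing": True, "transparency_notice": False}

print(compliance_gaps(chatbot))
```

Run over every record in the register, this gives you a gap list to close before August instead of a surprise during an inspection.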
Keep all documentation. If there's an inspection, you'll need to prove compliance.
Mistakes to avoid
Based on what we're seeing in February 2026, there are three common mistakes:
Mistake 1: Ignoring it until August
Some companies think "there are still six months to go". Then August arrives with nothing prepared, and they panic. Documentation takes time. Changes to websites and processes take time. Start now.
Mistake 2: Thinking it doesn't apply to you
"We're small, this is only for big companies." Wrong. The law applies to every company using AI in the EU, regardless of size. What changes is the risk category, not the applicability.
Mistake 3: Excessive documentation
The opposite also happens: companies produce 100 pages of documentation nobody reads. The law asks for clarity and accessibility, not volume. Five clear pages everyone understands beat 50 technical pages left in a drawer.
Extra tip: if you use AI suppliers (such as OpenAI for ChatGPT), ask them for their compliance documentation. They also have to comply with the law. Don't assume everything is handled - verify.
Our advice
The EU AI Act is serious, but complying with it is not impossible for SMEs. The secret is to start early and be systematic.
If you're still deciding whether to implement AI, this law shouldn't be a barrier. The obligations for limited/minimal risk are reasonable and make sense (transparency, documentation, monitoring). These are practices you should have anyway.
If you already use AI without structure, this is the time to organize. August seems far, but between preparing documentation, making technical changes and training teams, six months pass quickly.
Line Consulting AI is preparing a compliance checklist specific to Portuguese SMEs, adapted to the sectors we serve (healthcare, legal, accounting, logistics, hospitality, retail, construction). If you need help navigating this, we're available.
One final important note: Portugal is part of the European Union, and this law is an EU regulation, directly applicable - no separate Portuguese law is needed. What applies in Brussels in August applies in Lisbon the same day.
Six months to be compliant. Start this week.