
AI Regulation and the EU AI Act: What Leaders Need to Know to Govern AI Responsibly

Written by Dr Zöe Webster


I hear a lot of people talk about the speed at which AI technology is advancing, but I hear far less about the pace of change in AI regulation – the means by which we manage and govern AI when it is applied in our organisations.

AI’s Pace of Change vs. Regulation

Though AI seems to be advancing by leaps and bounds on an almost daily basis, the big questions leaders should be considering in governing the technology do not change as quickly. There is no need to be concerned with keeping up with every new development – most of them won’t make a massive difference to your bottom line, certainly not in the short term. What matters are the questions you should be asking to ensure AI is used responsibly and effectively in your organisation. These questions are largely agnostic to the particular flavour of AI you may be using, which means they can future-proof the way you handle AI.


The EU AI Act: Why It Matters

The EU AI Act entered into force in August 2024, with its full requirements expected to apply by August 2027. It applies where an organisation provides AI-powered products or services to EU citizens, and it places obligations on certain organisations that align with some of the most important governance questions, including:

  • How are AI risks identified, mitigated and managed?
  • What should you be communicating to others about your use of AI?
  • Where does accountability for AI in your organisation sit?

Opportunities and Risks of AI Adoption

No technology is inherently good or bad, and AI comes with both opportunities and risks.

Opportunities include:

  • Reducing manual effort through AI-powered automation
  • Discovering patterns in data to target customers more effectively
  • Generating content for communications, software development, and critical administration


Risks include:

  • Discrimination against individuals or groups
  • Increased likelihood of data breaches or cybersecurity vulnerabilities
  • Undermining ESG commitments


Good Practice Beyond Compliance

Whether or not the AI you are planning to develop and/or deploy is high risk, the associated obligations represent elements of good practice. These practices are not only responsible but also help to maximise the opportunities from the technology.

For high-risk applications, an organisation must demonstrate it is managing the related risks and putting mitigations in place. For example:

  • Providing clear guidance and policy for employees on acceptable AI use
  • Ensuring senior managers maintain oversight and compliance

Transparency and Trust in AI Use

Another big question to consider is: what should you be sharing with others about your use of AI?

  • What would your customers, colleagues, investors and wider stakeholders want or need to know to trust that AI is being used effectively and responsibly?
  • Are customers aware when they are interacting with AI, e.g. through a chatbot?
  • Do you log and test AI systems before deployment?
  • Do you publish clear statements about where AI is embedded in your products and services?

This level of transparency not only builds trust but also helps organisations spot new opportunities for AI adoption.

Keeping Humans in the Loop

The third big question focuses on where the human is in the loop – ensuring people have oversight and accountability of where and how AI is used.

AI is not magic and, like humans, it is not perfect. This makes it important to ensure AI outputs can be checked before they are used in automation or decision-making. That is harder when AI is customer-facing, but it is worth considering when and how humans should step in.
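
The idea of checking outputs before they are acted on can be sketched as a simple routing gate. This is an illustrative example only, not a prescribed mechanism: the `AIOutput` type, the confidence field, and the 0.8 threshold are all assumptions for the sake of the sketch.

```python
# Illustrative sketch (not from the article): route low-confidence AI
# outputs to a human reviewer before they drive an automated decision.
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route_output(output: AIOutput, threshold: float = 0.8) -> str:
    """Return 'auto' to use the output directly, or 'human_review'
    to hold it for a person to check first."""
    if output.confidence >= threshold:
        return "auto"
    return "human_review"

# The high-confidence output proceeds automatically; the borderline
# one is held for a person to review.
routes = [route_output(AIOutput("Approve refund", 0.95)),
          route_output(AIOutput("Decline claim", 0.55))]
```

In practice the gating signal might be a business rule, a risk score, or the type of decision rather than model confidence; the point is that the hand-off to a human is an explicit, testable step.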

Are key people in your organisation AI-literate – do they understand both the power and limitations of AI, and how to deploy it safely and effectively?

Conclusion

We’ll see more change in the AI regulation space over the coming weeks, months and years, but keeping the big questions in mind will help future-proof your AI innovations.

At Neueda, we help engineering leaders and L&D professionals build AI skills, frameworks, and confidence to adopt AI effectively through specialised enterprise AI training. If you’d like to explore how to embed responsible AI practices in your organisation, get in touch with us.

Want to learn more about our bespoke AI learning solutions?

Get in touch

Set up a call with our experts today to learn more about our AI learning solutions

