I hear a lot of people talk about the speed at which AI technology is advancing, but I hear less about the pace of change in AI regulation – the means by which we can manage and govern AI when it is applied in our organisations.
AI’s Pace of Change vs. Regulation
Though AI seems to be developing by leaps and bounds on a daily basis, the big questions leaders should consider in governing the technology change far more slowly. There is no need to keep up with every new development – most won’t make a massive difference to your bottom line, certainly not in the short term. What matters are the questions you should be asking to ensure AI is used responsibly and effectively in your organisation. These questions are largely agnostic to the particular flavour of AI you may be using, which means they can future-proof the way you handle AI.
The EU AI Act: Why It Matters
The EU AI Act entered into force in August 2024, with its full requirements expected to apply by August 2027. It applies wherever an organisation provides AI-powered products or services to people in the EU, and it places obligations on organisations that align with some of the most important governance questions, including:
How are AI risks identified, mitigated and managed?
What should you be communicating to others about your use of AI?
Where does accountability for AI in your organisation sit?
Opportunities and Risks of AI Adoption
No technology is inherently good or bad, and AI comes with both opportunities and risks.
Opportunities include:
Reducing manual effort through AI-powered automation
Discovering patterns in data to target customers more effectively
Generating content for communications, software development, and critical administrative tasks
Risks include:
Discrimination against individuals or groups
Increased likelihood of data breaches or cybersecurity vulnerabilities
Undermining ESG commitments
Good Practice Beyond Compliance
Whether or not the AI you plan to develop or deploy is classed as high risk, the associated obligations represent good practice. Following them is not only responsible but also helps maximise the opportunities the technology offers.
For high-risk applications, an organisation must demonstrate it is managing the related risks and putting mitigations in place. For example:
Providing clear guidance and policy for employees on acceptable AI use
Ensuring senior managers maintain oversight and compliance
Transparency and Trust in AI Use
Another big question to consider is: what should you be sharing with others about your use of AI?
What would your customers, colleagues, investors and wider stakeholders want or need to know to trust that AI is being used effectively and responsibly?
Are customers aware when they are interacting with AI, e.g. through a chatbot?
Do you log and test AI systems before deployment?
Do you publish clear statements about where AI is embedded in your products and services?
This level of transparency not only builds trust but also helps organisations spot new opportunities for AI adoption.
Keeping Humans in the Loop
The third big question focuses on where the human is in the loop – ensuring people have oversight of, and accountability for, where and how AI is used.
AI is not magic and, like humans, it is not perfect. This creates an opportunity to check AI outputs before they are used in automation or decision-making. Doing so is more challenging when AI is customer-facing, but it is important to consider when and how humans should step in.
Are key people in your organisation AI-literate – do they understand both the power and limitations of AI, and how to deploy it safely and effectively?
Conclusion
We’ll see more change in the AI regulation space over the coming weeks, months and years – but keeping the big questions in mind will help future-proof your AI innovations.
At Neueda, we help engineering leaders and L&D professionals build AI skills, frameworks, and confidence to adopt AI effectively through specialised enterprise AI training. If you’d like to explore how to embed responsible AI practices in your organisation, get in touch with us.
Want to learn more about our bespoke AI learning solutions?