The European Union’s Artificial Intelligence Act (AI Act) marks a significant step towards regulating the use of AI, ensuring that ethical and legal frameworks are in place to protect citizens and businesses alike. With AI-powered technologies rapidly transforming industries, companies need to start preparing now for compliance. In particular, sectors that handle personal data, including employee portraits, will face new responsibilities under the AI Act, making it essential to understand the changes ahead.
Background on the AI Act
The AI Act is a pioneering regulation proposed by the European Commission in 2021, aimed at creating a unified framework for AI governance across the EU. It has been designed with a dual focus: to promote AI innovation while protecting fundamental rights. The Act categorises AI systems into risk levels – minimal, limited, high, and unacceptable – allowing regulators to tailor obligations depending on a system’s impact.
Historically, AI has been seen as an unregulated frontier, and the vision behind the AI Act is to change that by providing a structured approach to ensure safety, transparency, and accountability. It mirrors the goals set by GDPR but extends them into the realm of machine learning and automated decision-making. The aim is to build trust and fairness in AI applications across industries.
Since its proposal, the AI Act has undergone several rounds of consultations and amendments. Final adoption is expected in 2024, with full implementation rolling out over the following years, giving companies time to adjust their operations and ensure compliance.
By 2026, the Act’s requirements are expected to be fully enforced, and non-compliance will result in severe penalties.
Penalties for non-compliance
Like GDPR, the AI Act sets forth substantial fines for companies that fail to comply. These penalties can reach up to 6% of global annual turnover, which makes compliance a critical issue for companies of all sizes. The severity of the penalties is designed to deter misuse of AI, especially in high-risk sectors.
AI governance – as crucial as cybersecurity and data protection
As AI continues to revolutionize how companies function, governance over these technologies will become as important as cybersecurity or data protection. The days of adopting AI tools without oversight are ending. In the same way companies adapted to GDPR, they must now turn their attention to ensuring responsible AI practices.
For companies like Eikonice, whose SaaS product automates employee portraits and already ensures GDPR compliance for our clients, the task now extends to staying compliant with the new AI Act.
We add an extra layer to meet our clients’ AI Act requirements: transparency in the AI algorithms used, clear documentation of how data – in our case, portraits – is processed and adjusted, and guarantees that AI decisions, such as cropping and adjusting photos, are free from bias.
The same applies to our use of face recognition: when our GDPR compliance feature deletes pictures from the image bank in which former employees appear, that processing must also meet the Act’s requirements.
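To illustrate the idea behind such a deletion workflow, here is a minimal, purely hypothetical sketch. It is not Eikonice’s actual implementation: the function names, the similarity threshold, and the toy vectors are all assumptions, and in practice the face embeddings would come from a dedicated recognition model, with a human reviewing every match before deletion.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_images_for_deletion(image_embeddings, departed_embeddings, threshold=0.9):
    """Return IDs of image-bank photos whose face embedding closely matches
    any departed employee, flagging them for deletion review (not auto-delete)."""
    flagged = []
    for image_id, embedding in image_embeddings.items():
        if any(cosine_similarity(embedding, d) >= threshold
               for d in departed_embeddings):
            flagged.append(image_id)
    return flagged

# Toy example: img_001 matches the departed employee's embedding, img_002 does not.
image_embeddings = {"img_001": [0.9, 0.1, 0.0], "img_002": [0.0, 1.0, 0.0]}
departed = [[1.0, 0.0, 0.0]]
print(flag_images_for_deletion(image_embeddings, departed))  # ['img_001']
```

Keeping a human in the loop on the flagged list, and logging each decision, is what turns a matching script like this into the kind of documented, accountable process the AI Act expects.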
How Eikonice secures compliance with the AI Act
In the case of employee portraits, companies will need to ensure that the AI processes they use comply with both GDPR and the AI Act. This means ensuring that portraits are handled ethically, with no risk of bias in processing and with the full consent of employees.
The EU’s AI Act is set to change the landscape for how AI technologies are regulated, and companies must act now to ensure they remain compliant. For businesses that handle personal data, such as those using AI for employee portraits, the stakes are even higher. By using Eikonice, your AI governance is secured: your company will not only avoid penalties but also build trust with clients and employees by taking GDPR and AI regulations seriously.