The new AI Act and its significance
The new EU AI Act came into force on 1 August 2024 and brings with it far-reaching obligations for companies. Article 4 of the Act in particular obliges providers and operators of AI systems to ensure that their staff have sufficient AI literacy. This obligation applies from 2 February 2025 and requires specific training and documentation measures.
Companies must act now to ensure compliance and avoid sanctions. But what does this mean in concrete terms? What steps need to be taken now?
Obligation to Provide AI Skills: Key Steps for Companies
According to Art. 4 of the EU AI Act, companies must ensure that their employees have sufficient AI skills. Employee training is a key component of this. This includes:
- Determining the need for training
  - Documentation of the AI systems used and their purposes
  - Identifying employees who work with these systems
- Carrying out training courses
  - Imparting basic knowledge about AI systems and how they work
  - Training on risks, ethical issues and the legal framework
  - Regular training and certification programmes
- Creation of internal guidelines
  - Development of clear guidelines for the safe use of AI
  - Implementation of internal compliance standards
- Individual briefings
  - Specific training for employees with direct contact to AI systems
  - Practice-orientated learning and simulation-based training
- Documentation and evidence
  - Maintenance of training records
  - Regular updates and adjustments to training concepts
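The documentation duties above (an inventory of AI systems, their purposes and users, plus training records) can be sketched as a minimal data structure. This is an illustrative assumption, not a prescribed format; all names (`AISystem`, `TrainingRecord`, `employees_needing_training`) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystem:
    """One entry in the company's AI inventory (Art. 4 documentation)."""
    name: str
    purpose: str
    users: list[str] = field(default_factory=list)  # employees working with it

@dataclass
class TrainingRecord:
    """Evidence that an employee completed an AI training course."""
    employee: str
    topic: str
    completed_on: date

def employees_needing_training(systems: list[AISystem],
                               records: list[TrainingRecord]) -> set[str]:
    """Identify employees who use an AI system but have no training record."""
    trained = {r.employee for r in records}
    users = {u for s in systems for u in s.users}
    return users - trained

systems = [AISystem("SpamFilter", "email triage", users=["alice", "bob"])]
records = [TrainingRecord("alice", "Safe use of AI", date(2025, 1, 15))]
print(employees_needing_training(systems, records))  # {'bob'}
```

Keeping the inventory and the training log in one place makes it straightforward to show auditors which employees still need training.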
Deadlines of the AI Act: When do companies have to act?
The AI Act’s implementation follows a phased approach. The first key deadline, 2 February 2025, introduces the following obligations:
- Prohibited AI systems that pose an unacceptable risk may no longer be used.
- Providers and operators must ensure AI literacy.
- Companies must carry out a risk assessment of the AI systems used. This also applies to general AI models such as ChatGPT, Copilot or AI-based spam filters.
- Transparency obligations: Users must be able to recognise that they are interacting with an AI system. Content generated with AI should be labelled accordingly (e.g. with watermarks or references to AI-generated content).
- Check and optimise technical infrastructure: Companies must ensure that their IT systems are suitable for the use of AI.
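The transparency obligation above can be illustrated with a trivial labelling helper that prepends a plain-text notice to AI-generated content. The notice wording and the function name are assumptions for illustration only; the AI Act does not prescribe a specific label format.

```python
# Illustrative sketch of the transparency obligation: mark AI-generated
# text so users can recognise it. Wording and helper name are assumptions.
AI_NOTICE = "[AI-generated content] "

def label_ai_output(text: str) -> str:
    """Prepend a disclosure notice; avoid double labelling."""
    if text.startswith(AI_NOTICE):
        return text
    return AI_NOTICE + text

print(label_ai_output("Your parcel will arrive tomorrow."))
```

For images or audio, the analogous step would be an embedded watermark rather than a text prefix.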
Further deadlines:
- 2 August 2025: Applicability of the provisions for general-purpose AI models (e.g. foundation models such as ChatGPT)
- 2 August 2026: General applicability of the AI Act, including the transparency obligations
- 2 August 2027: Applicability of the AI Act to high-risk AI systems
Enforcement and Sanctions
Member States are obliged to impose effective, proportionate and dissuasive sanctions. Violations of the AI Act could lead to significant penalties in the future. The use of AI systems is already associated with the risk of fines under data protection law. However, companies should not only consider compliance aspects, but also recognise the economic benefits of a well-trained workforce.
Lack of a supervisory authority: To date, there is still no official supervisory authority for the AI Act in Germany. This is expected to be established by 2 August 2025. According to the Federal Network Agency, companies are responsible for ensuring AI expertise until then.
Controls and implementation: There are currently no binding national implementation regulations. Companies must nevertheless be prepared for the fact that violations of due diligence obligations may become relevant. If damage is caused by untrained personnel, the company can be held liable.
Classification and risk assessment of AI systems
Another key element of the AI Act is the categorisation of the AI systems used according to risk levels. Companies must:
- Categorise the AI systems used in accordance with the requirements of the AI Act.
- Exclude prohibited AI use in accordance with Art. 5 of the AI Act (e.g. subliminal influence or social judgement of individuals).
- Clarify whether the company is acting as a provider or operator of an AI system, as providers have more extensive obligations.
- Implement obligations for high-risk AI systems, including registration in an EU database, conformity assessment procedures and human oversight.
- Implement relevant data protection and IT security measures to protect personal data and minimise legal risks.
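The categorisation step above can be sketched with the four risk tiers of the AI Act. The toy rule below is an assumption for illustration only; a real risk assessment is a legal, case-by-case analysis, and the example inputs (`practice`, `high_risk`, `interacts_with_users`) are hypothetical simplifications.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers of the EU AI Act (simplified labels)."""
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high-risk (conformity assessment, EU database, human oversight)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

# Two examples of practices prohibited under Art. 5 (non-exhaustive).
PROHIBITED_PRACTICES = {"subliminal manipulation", "social scoring"}

def classify(practice: str, high_risk: bool, interacts_with_users: bool) -> RiskLevel:
    """Toy classification rule; real assessments require legal review."""
    if practice in PROHIBITED_PRACTICES:
        return RiskLevel.UNACCEPTABLE
    if high_risk:
        return RiskLevel.HIGH
    if interacts_with_users:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

print(classify("social scoring", False, False).value)  # prohibited (Art. 5)
```

The point of the sketch is the triage order: first exclude prohibited practices, then check for high-risk use, then apply transparency duties to user-facing systems.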
Practical implementation aids for the AI Act
Companies should leverage professional training programmes and compliance strategies, such as those offered by //PRIMA:
- Free download: Guideline on the use of AI models in companies, taking the AI Act into account
- AI training ‘Safe use of AI’
Conclusion: Act now to ensure compliance
The EU AI Act establishes far-reaching compliance obligations for companies with immediate effect. The consistent introduction of tailored training programmes and precise internal guidelines forms the basis for sustainable and legally compliant AI integration into company processes.
Further information on the topic of AI literacy
Book your personal consultation now!