Artificial Intelligence in Corporate Systems: Who Is Responsible for AI Mistakes and How Will the AI Act Affect HR Departments?
Properly implemented artificial intelligence systems can improve the efficiency, speed, and quality of work in many companies. However, employers should bear in mind that they will be responsible for any mistakes made by AI systems. The risks of using AI systems can be mitigated by well-designed internal guidelines, which should include not only a list of approved and prohibited AI tools but also a requirement for human oversight of AI outputs. Employers should also keep European legislation in mind, in particular the Artificial Intelligence Act (AI Act).

AI tools are becoming increasingly common in companies across various industries. When correctly implemented, AI systems can indeed significantly boost efficiency, quality, and speed. However, this technology brings not only security risks but also legal and reputational ones. If artificial intelligence causes harm, the responsibility typically falls on the employer.
If the employer is the one who introduced the AI system as a tool for employees to use, they bear the primary responsibility, just as they are responsible for the actions of their employees toward third parties. The provider of the AI system may also be liable if the mistake resulted from a defect in the system itself. In such cases, the employer may seek recourse against the provider, but they will likely have to prove that the system was implemented carefully, that the manufacturer's recommendations were followed, and that the results were tested and overseen.
High-Risk AI Systems Under the AI Act: Be Cautious with Recruitment Systems
Employers should take the European Artificial Intelligence Act (AI Act) into account when setting rules for AI systems and when implementing these tools in the workplace. Although the legislation is taking effect gradually and some requirements will not apply to employers for roughly another two years, preparation for these changes should not be underestimated. The AI Act will have a significant impact on the HR sector. It classifies AI systems by risk level, and high-risk systems include recruitment and candidate-selection systems as well as systems that assess candidates.
For high-risk systems, the AI Act makes risk assessment, human oversight, and transparency about how the system's algorithms reach decisions mandatory. Employers will have to adjust their internal processes in time to comply with these requirements. They can start preparing now by taking an inventory of the AI tools and systems they use and identifying how many of them will fall into the high-risk category, as sketched below. Potential penalties are substantial: the EU sets fines ranging from 1% to 7% of annual turnover depending on the violation, with the possibility of lower penalties for small and medium-sized businesses and startups.
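By way of illustration, such an inventory could start as a simple structured list. The following minimal Python sketch uses hypothetical tool names and a simplified risk label assigned during the inventory; the actual classification must of course follow the AI Act itself.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str        # hypothetical tool name, for illustration only
    purpose: str     # what the tool is used for
    risk_level: str  # simplified category assigned during the inventory

# Hypothetical inventory entries.
inventory = [
    AITool("cv-screening-assistant", "ranking job applicants", "high"),
    AITool("meeting-transcriber", "transcribing internal calls", "limited"),
    AITool("spam-filter", "filtering inbound e-mail", "minimal"),
]

# Tools in the high-risk category will need risk assessment,
# human oversight, and transparency measures under the AI Act.
high_risk = [tool for tool in inventory if tool.risk_level == "high"]
print(f"{len(high_risk)} of {len(inventory)} tools fall into the high-risk category:")
for tool in high_risk:
    print(f" - {tool.name}: {tool.purpose}")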
Risks Can Be Mitigated by Properly Set Internal Guidelines
Given the above, any employer who decides to implement AI tools and systems in corporate processes should establish clear rules for their use. These rules should become part of internal processes, ideally in the form of an internal directive. The specific rules may vary depending on the employer's activities, but a common core of rules will apply to most companies.
At a minimum, the directive should define its scope (whether it applies only to employees or also to freelancers and suppliers), a list of approved and prohibited AI tools, an approval process for new tools, risk identification, data-handling rules with an emphasis on GDPR compliance, a description of penalties, and rules for checks and revisions, with particular focus on human oversight of all outputs generated by artificial intelligence.
Simply adopting the directive is not enough; it is crucial to clearly communicate the rules to employees, explain why adhering to them matters, and provide appropriate training where necessary. In addition, a procedure for reporting security incidents should be established, and employees should have access to the lists of approved tools, ideally in a format that can be updated flexibly, as sketched below.
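As a purely illustrative sketch, assuming a hypothetical file name and structure, the approved-tools list could be kept in a simple machine-readable file that HR or IT can update without reissuing the directive itself.

import json
from pathlib import Path

# Hypothetical file name and structure; adapt to the company's own conventions.
APPROVED_TOOLS_FILE = Path("approved_ai_tools.json")

example_list = {
    "last_updated": "2025-01-01",  # placeholder date
    "approved": ["meeting-transcriber", "spam-filter"],
    "prohibited": ["unvetted-public-chatbot"],
}

# Publish the current version so employees and internal systems can read it.
APPROVED_TOOLS_FILE.write_text(json.dumps(example_list, indent=2), encoding="utf-8")

# Check a tool against the list before use.
current = json.loads(APPROVED_TOOLS_FILE.read_text(encoding="utf-8"))
tool = "meeting-transcriber"
status = "approved" if tool in current["approved"] else "not approved"
print(f"{tool} is {status} for use")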
Taken from HRNews, 18.12.2025

