
Mar 13, 2026

AI in Human Resources: What Obligations Do Romanian Employers Have under the AI Act before August 2026

Do you use Artificial Intelligence in HR processes? If so, you are likely already within the scope of the strictest European regulatory framework currently in force.

Over the past few years, many companies in Romania have integrated Artificial Intelligence tools into recruitment and human resources management processes — from automated CV screening tools and pre-screening chatbots to video-interview platforms with behavioural analysis features or performance evaluation systems. Some use specialised tools; others simply rely on tools such as ChatGPT or Microsoft Copilot as assistants in selection processes.

What many employers do not realise is that a significant number of these uses may fall within the category of “high-risk AI systems” under Regulation (EU) 2024/1689, known as the AI Act, and that the main compliance deadline is approaching rapidly: 2 August 2026.

Non-compliance may trigger fines of up to EUR 35 million or 7% of worldwide annual turnover — penalties that, in some cases, even exceed those available under the General Data Protection Regulation (GDPR).




What is the AI Act

The AI Act is the world’s first comprehensive legal framework specifically designed to regulate Artificial Intelligence. It entered into force on 1 August 2024 and is directly applicable in all European Union Member States, including Romania, without the need for separate national transposition legislation.

The Regulation has extraterritorial reach: it applies to any company, irrespective of where it is established, whenever AI systems are used for decisions affecting persons located in the European Union. A Romanian employer dealing with candidates or employees in the EU is therefore, as a rule, within scope.

The AI Act is being phased in several stages. For the purposes of this article, the following dates are particularly relevant:

  • 2 February 2025 — the rules on prohibited AI practices and the obligation relating to AI literacy become applicable;
  • 2 August 2025 — the rules applicable to General-Purpose AI models (for example ChatGPT, Claude, Gemini, etc.) become applicable;
  • 2 August 2026 — the AI Act becomes largely fully applicable: the full set of obligations for high-risk AI systems (including those used in HR) takes effect; the fines provided for under the AI Act become applicable; and the supervisory and enforcement framework provided by the AI Act becomes broadly operational.

In other words, some obligations under the AI Act already apply, while most of the remaining framework will become applicable in less than five months.




Why the use of AI in recruitment is classified as “high-risk”

The AI Act adopts a risk-based approach: the greater the potential impact on individuals’ fundamental rights, the stricter the regulatory requirements.

Under Article 6(2) and Annex III of the AI Act, AI systems used in the area of employment, workers management and access to self-employment are, in principle, classified as high-risk AI systems. More specifically, this category includes:

  • AI systems used for recruitment and selection — including targeted job advertising, CV screening, the sorting or ranking of candidates, and the assessment of candidates in interviews;
  • AI systems that influence the terms of the employment relationship — including promotions, task allocation, performance monitoring and dismissal decisions;
  • AI systems that monitor and evaluate employees during the performance of their work.

The rationale is straightforward: a hiring or dismissal decision taken (or materially influenced) by an algorithm can have a profound impact on an individual’s fundamental rights, including the right to work, the right to non-discrimination and human dignity. The AI Act therefore treats these systems with the same level of seriousness as AI used in critical infrastructure or biometric systems.




The key rule: what matters is not who developed the AI system, but how you use it

This is probably the most important practical takeaway from this article — and one that surprises many employers.

The AI Act draws a fundamental distinction between:

  • Provider — the company that develops the AI system and places it on the market (for example OpenAI, Microsoft or the supplier of recruitment software such as LinkedIn Recruiter);
  • Deployer — any company or person using an AI system in the course of its professional activities.

An employer using an AI tool in HR is generally a deployer — and therefore subject to concrete obligations, even if it did not itself develop the software used in the recruitment process.

A few practical examples help clarify where the line is drawn:

An employee in a company’s HR department uploads 20 CVs into ChatGPT and asks the model to rank candidates by reference to the job description and then recommend which candidates should proceed in the recruitment process and which should not. Through that prompt and that use case, the company may activate a high-risk use, even though it has not developed any proprietary AI system of its own.

Another relevant example is the use of “Candidate Fit Score” tools in LinkedIn Recruiter: if the recruiter filters candidates based on the score generated by the algorithm and does not manually review the profiles of excluded candidates, the decision not to contact a candidate is materially influenced by an AI system.

The same framework also applies in the relationship between employer and existing employees. A company using an AI system to monitor employee productivity by measuring, for example, the number of actions performed within a time period, response times or the level of activity on a computer, and then using that system’s output as a basis for periodic performance reviews or dismissal decisions, is operating a high-risk AI system. It does not matter that the final decision is formally signed off by a manager: if the algorithmic report materially underpins that decision, the system is high-risk and the compliance obligations apply in full.

The practical rule is the following: if the output of an AI system narrows, filters or orders the options available to the human decision-maker — in other words, the recruiter or manager no longer sees the full set of candidates or information, but only what the algorithm has filtered or ranked — that use is, in all likelihood, high-risk, regardless of whether the system was developed internally or purchased from a third party. In practical terms, if some candidates are automatically excluded without a human ever reviewing their CVs, and the recruiter acts on a score generated by the system without carrying out an independent assessment, the situation will very likely qualify as high-risk.




What concrete obligations does the employer (deployer) have from August 2026

1. Prior information to employees and their representatives

Before putting a high-risk AI system into use in the workplace, the employer must inform the affected employees and their representatives that they will be subject to the use of that system. This obligation applies to existing employees and must be complied with before implementation, not after.

2. Information of the persons subject to AI-assisted decisions

An employer using a high-risk AI system to take, or assist in taking, decisions relating to natural persons must inform those persons that they are subject to the use of such a system. In the recruitment context, this obligation concerns candidates — they must know that a high-risk AI system was involved in the process affecting them.

3. Effective human oversight

The employer must designate persons with the necessary competence, training and authority to ensure human oversight of the system. It is not enough for a human merely to see the output and click “confirm” — oversight must be genuine, with a real possibility to intervene or disregard the system’s recommendation.

4. Data governance

To the extent the employer exercises control over the data entered into the system — CVs, job descriptions, selection criteria — it must ensure that such data are relevant and sufficiently representative in order to reduce the risk of discriminatory or erroneous outcomes.

5. Right to an explanation

Persons affected by a decision taken with the assistance of a high-risk AI system have the right to request and receive a clear explanation of the role played by the system in the decision-making process. The employer must be in a position to provide that explanation, which in practice presupposes that the persons operating the system understand how it works and how it influences the final decision.

6. Ongoing monitoring

The employer must monitor the operation of the system in accordance with the provider’s instructions. If it identifies a risk or a serious incident, it must notify both the provider and the competent national market surveillance authority.

7. Retention of logs

Logs automatically generated by the AI system must be retained for a period of at least six months, to the extent that such logs are under the deployer’s control.

8. DPIA (Data Protection Impact Assessment)

Where the AI system processes personal data — which will typically be the case in any recruitment process — the employer must carry out a Data Protection Impact Assessment (DPIA) in accordance with Article 35 GDPR, where the conditions for such assessment are met. The AI Act expressly requires that this assessment take into account, among other things, the documentation made available by the provider.




What is already prohibited since February 2025

Certain AI uses are already entirely prohibited under the AI Act, with effect from 2 February 2025. In the context addressed by this article, and of particular relevance to HR teams and departments, the prohibited practices include:

  • Emotion recognition systems in the workplace or in interviews (for example platforms claiming to “read” candidates’ emotional states from facial expressions);
  • Biometric categorisation used to infer sensitive characteristics (political opinions, religion, sexual orientation, etc.);
  • Subliminal or manipulative techniques influencing individuals’ decisions without their knowledge.

If your company is currently using such tools, it is already in breach of the AI Act.




Employees’ use of AI tools

A frequently asked question: “If employees use ChatGPT on their own initiative, without an internal company policy, is the employer still a deployer?”

The legal answer has not yet been definitively settled, but the overall direction is relatively clear. The AI Act defines a deployer as the entity using an AI system “under its authority”, and the prevailing view in legal writing tends to treat professional use as use attributable to the employer, even in the absence of a formal internal policy. In other words, the fact that an employee uses ChatGPT on their own initiative does not automatically exempt the company from responsibility under the AI Act.

The European Commission has not yet issued guidance expressly clarifying this boundary. Guidance on deployers’ obligations has been announced for the second quarter of 2026, so, for the time being, companies are navigating this issue without official clarification.

Against that background, a cautious approach is advisable and this issue should not be left unanswered internally. A clear internal policy on employees’ use of AI, setting out which tools are permitted, under what conditions and subject to what limitations, serves two purposes: (1) it reduces legal risk; and (2) it helps establish the company’s position before the authorities in the event of an audit.




Applicable sanctions

The AI Act establishes a three-tier sanctions regime, enforced by the competent national authorities:

  • Level 1 — use of AI systems prohibited under Article 5: fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. This is the most serious infringement category;
  • Level 2 — non-compliance with deployer obligations under Article 26: fines of up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher;
  • Level 3 — provision of incorrect, incomplete or misleading information to national authorities: fines of up to EUR 7.5 million or 1% of worldwide annual turnover.

For small and medium-sized enterprises (SMEs) and start-ups, Article 99(6) provides for a more favourable regime: the lower of the fixed amount and the percentage of turnover applies, rather than the higher one.

Beyond administrative fines, an employer using high-risk AI systems in HR without complying with the relevant obligations is also exposed to adjacent legal risks: cumulative liability under the GDPR in the event of associated personal data infringements, as well as the possibility of discrimination claims brought by affected candidates or employees.




Conclusion

The use of AI in recruitment and HR is no longer merely a question of technology or operational efficiency. Some obligations are already in force — the prohibitions under Article 5 and the AI literacy obligation have applied since February 2025. From August 2026, the full compliance framework becomes broadly applicable, including the sanctions for failure to comply with the obligations that employers bear in their capacity as deployers.

Companies that begin their compliance process now still have enough time to complete the essential steps: mapping the AI systems in use, classifying the relevant use cases, and implementing the necessary internal policies and training. Companies that postpone implementation risk building an incorrect or incomplete compliance process or, worse, facing an audit by the authorities before they are ready.




If you are not sure whether the AI tools used in your HR processes fall within the scope of the AI Act, or which obligations apply to you in your capacity as a deployer, the Gorici Legal team can assist you with a compliance assessment tailored to your company’s specific circumstances.

Contact us →




This article is for general informational purposes only and does not constitute legal advice. For an analysis tailored to your specific circumstances, we recommend that you contact the Gorici Legal team.

© Gorici Legal | All rights reserved