The study clarified the difficulties in establishing causation between this harm and the use of AI systems. These difficulties can, however, be overcome if the harm is conceptualised as a procedural harm. This justifies a focus on procedural guarantees (i.e. the quality of the decision-making process, timeliness, effectiveness, independence, involvement of affected individuals, and clarity of the reasoning behind decisions). Where an AI system is involved, these guarantees must still be observed, in particular the involvement of affected individuals and the clarity of the decisions that concern them.
The study also stressed certain characteristics inherent in the refugee status determination procedure, and highlighted how AI-supported decision-making poses distinctive problems for applying the legal standards that govern the assessment of protection needs.
Key conclusions and recommendations
- A major issue is that asylum decision-makers typically lack the means to verify whether their decisions (e.g. granting or rejecting protection) were correct. This absence of feedback means there is no reliable test data against which to evaluate AI systems, either during development or after deployment. Moreover, the historical data used to develop such a system may not be relevant for predicting future risks in applicants' countries of origin.
- The report's final insight is that the new technologies themselves can change the practice of asylum law. Such a change seems possible given the increased importance of data, the way data are selected, and the role of programmers in designing the algorithms.
- This in turn implies a shift away from the discretion of individual decision-makers and towards discretion exercised in the design of the systems themselves.
About the author
Vladislava Stoyanova is an Associate Professor of Public International Law at the Faculty of Law at Lund University. Her research focuses on the intersection of human rights, migration law, and EU law, with a particular orientation towards human rights.
The report was published on 5 March 2026.
Picture: Igor Omilaev via Unsplash.