Crossing the (AI) Border: The Use of Bots in Canada’s Immigration System

What’s the human risk of using non-human decision making for a nationwide immigration and refugee system? That’s the question the International Human Rights Program (IHRP) is asking. The IHRP and the Citizen Lab, both based at the University of Toronto, recently collaborated to publish “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System.” Since 2014, Canada has been experimenting with “predictive analytics to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications.” This year, Canada has reportedly been turning to the private sector for an “Artificial Intelligence Solution in immigration decision-making and assessments, including in Humanitarian and Compassionate applications and Pre-Removal Risk Assessments.”

For the report’s authors, this move toward automation poses a significant risk to human rights and international law. From privacy to equality, freedom from discrimination, freedom of expression and movement, and personal security, the decisions an automated system produces could disrupt and infringe on basic principles once assumed to be available to all. The IHRP and the Citizen Lab are also deeply concerned about the disproportionate risk to refugees. These individuals, whose complex circumstances unfold against a violence-torn international backdrop, arguably require more than an automated evaluation. At the very least, the argument rests on procedural fairness and the appropriate standard of review. At most, and especially for refugees, an automated decision could mean the difference between life and death.

Based on its interdisciplinary critical analysis, the report makes seven specific recommendations for the Government of Canada, excerpted below:

  1. Publish a complete and detailed report, to be maintained on an ongoing basis, of all automated decision systems currently in use within Canada’s immigration and refugee system, including detailed and specific information about each system.
  2. Freeze all efforts to procure, develop, or adopt any new automated decision system technology until existing systems fully comply with a government-wide Standard or Directive governing the responsible use of these technologies.
  3. Adopt a binding, government-wide Standard or Directive for the use of automated decision systems, which should apply to all new automated decision systems as well as those currently in use by the federal government.
  4. Establish an independent, arm’s-length body with the power to engage in all aspects of oversight and review of all use of automated decision systems by the federal government.
  5. Create a rational, transparent, and public methodology for determining the types of administrative processes and systems which are appropriate for the experimental use of automated decision system technologies, and which are not.
  6. Commit to making complete source code for all federal government automated decision systems—regardless of whether they are developed internally or by the private sector—public and open source by default, subject only to limited exceptions for reasons of privacy and national security.
  7. Launch a federal Task Force that brings key government stakeholders alongside academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.

If you’d like to learn more about artificial intelligence, human rights, and international law, ask one of our librarians for assistance!