The UK’s online passport application service went live in 2016. It uses an AI-powered facial recognition feature to determine whether user-uploaded photos meet the requirements for use as a passport photo, and it rejects photos that miss the mark. Since its launch, many black users have reported problems with the system that white users don’t appear to have, including its failure to recognize that their eyes are open or their mouths are closed.

Users can override the AI’s rejection and submit their images anyway, but they’re warned that their application could be delayed or denied if there’s a problem with the photo. White users can rely on the AI to catch these issues before they submit; everyone else has to hope for the best.

This is the very definition of privilege-based racism. It’s a government-sponsored virtual priority lane for white people. And, according to a Freedom of Information request by the advocacy organization medConfidential, the Home Office was well aware of this before the system was ever deployed. Per a report from New Scientist writer Adam Vaughan, the Home Office responded to the documents by stating it was aware of the problem but felt it was acceptable to use the system anyway.

AI is incredibly good at being racist because racism is systemic: small, hard-to-see correlations among seemingly diverse data combine to reproduce the bias of whatever system generated that data. Given nearly any problem that can be solved to the benefit of white people or to the detriment of everyone else, AI will reflect the same bias intrinsic in the data it’s fed. What the UK’s government has figured out, however, is how to exploit AI’s inherent bias to ensure that white people receive special privileges.

The UK is letting the entire world know what its priorities are.
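The threshold mechanism behind this kind of disparate impact is easy to demonstrate. Below is a minimal, hypothetical sketch in Python: the group labels, the single "photo quality" feature, and the score distributions are all invented for illustration and have nothing to do with the actual Home Office system. The point is only that a checker tuned to a majority group’s data will falsely reject far more photos from a group whose feature distribution differs, even when every photo is valid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical photo checker: each valid photo gets a single quality score
# (e.g. a contrast statistic), and photos scoring below a threshold are
# rejected. The score distributions differ by group -- invented numbers.
n = 10_000
group_a = rng.normal(loc=0.70, scale=0.10, size=n)  # majority-group scores
group_b = rng.normal(loc=0.55, scale=0.10, size=n)  # minority-group scores

# Threshold tuned so only ~2% of the *majority* group's valid photos fail.
threshold = np.quantile(group_a, 0.02)

# Every photo in both groups is valid, so any rejection is a false rejection.
false_reject_a = np.mean(group_a < threshold)  # ~2% by construction
false_reject_b = np.mean(group_b < threshold)  # much higher, same valid photos

print(f"threshold tuned on group A: {threshold:.3f}")
print(f"false rejection rate, group A: {false_reject_a:.1%}")
print(f"false rejection rate, group B: {false_reject_b:.1%}")
```

Run as-is, the second group’s false rejection rate comes out roughly an order of magnitude higher than the 2% the threshold was tuned for. Nobody had to write a racist rule; tuning to majority-group data was enough, which is exactly the shape of the complaints black applicants reported.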