Chinese authorities are using a vast, secretive system of advanced facial recognition technology to surveil and control the Uighurs, a largely Muslim minority, according to a new report.

Facial recognition technology, which has come under fire from racial justice advocates and tech workers in the United States, makes it easy to target and profile communities by race and gender.

Based on interviews with five people who have direct knowledge of the systems, along with a review of databases used by the police, government procurement documents, and advertising materials distributed by the AI companies making the systems, The New York Times uncovered the first known example of a government intentionally using artificial intelligence for racial profiling.


Chinese authorities had already targeted the Uighurs in the western region of Xinjiang with surveillance tools, including DNA tracking, but the newly revealed systems allow officials to monitor the largely Muslim minority across as many as 16 provinces and regions in China.

According to the Times, a group of new startups is catering to the authoritarian country's appetite for surveillance and control.

"Take the most risky application of this technology, and chances are good someone is going to try it," Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law, told the Times. "If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity."

An Uighur woman uses an electric-powered scooter to fetch school children as they ride past a picture showing China's President Xi Jinping joining hands with a group of Uighur elders at the Unity New Village in Hotan, in western China's Xinjiang region. (AP Photo/Andy Wong, File)

The debate in America over the use of AI has mostly centered on the biases of the people designing the technology. An internal recruiting tool developed at Amazon, for example, was found to systematically downgrade resumes submitted by women. Amazon workers and some lawmakers have called on the company to stop marketing and selling its AI software to police departments in the U.S. over fears of racial bias and potential misuse.


The surveillance technology being used in China is also big business, according to the Times, which reports that four of the companies behind the AI are each valued at more than $1 billion. Given the profits at stake, advocates worry that the kind of systems deployed in China could find their way into other countries.

"I don't think it's overblown to treat this as an existential threat to democracy," Jonathan Frankle, an AI researcher at the Massachusetts Institute of Technology, told the Times. "Once a country adopts a model in this heavy authoritarian mode, it's using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into."