Job Cuts and Surveillance
The Department of Government Efficiency (DOGE) is using artificial intelligence to monitor federal employees’ communications, seeking out any whiff of disloyalty. It is not the first time the department has turned A.I. on the federal workforce, and the trend is worrying. According to one report, DOGE slashed around 222,000 jobs in March alone.
- The cuts are hitting hardest in areas where the U.S. can least afford to fall behind, such as artificial intelligence and semiconductor development.
- The use of A.I. in surveillance raises classic pitfalls such as bias and privacy violations.
- There is a risk of sensitive data being fed to private actors, which could lead to biased or inaccurate systems making decisions that affect real lives.
Surveillance Isn’t Just About Cameras or Keywords
Surveillance isn’t just about cameras or keywords anymore. It’s about who processes the signals, who owns the models, and who decides what matters.
The federal government is supposed to set standards, not outsource them. Public trust in A.I. will weaken if people believe decisions are made by opaque systems outside democratic control, and using A.I. to surveil the workforce sets a precedent that erodes that trust further.
What’s at Stake?
- The National Science Foundation (NSF) recently laid off more than 150 employees, and internal reports suggest even deeper cuts are coming.
- The NSF funds critical A.I. and semiconductor research across universities and public institutions, which supports everything from foundational machine learning models to chip architecture innovation.
- The White House is also proposing a two-thirds budget cut to the NSF, which would wipe out the very base that supports American competitiveness in A.I.
- Nearly 500 NIST employees are on the chopping block, including most of the teams responsible for the CHIPS Act’s incentive programs and R&D strategies.
Is DOGE Feeding Confidential Public Data to the Private Sector?
DOGE’s involvement raises a critical concern about confidentiality. The department has quietly gained sweeping access to federal records and agency data sets, which is a risk multiplier. A.I. tools are combing this data to identify functions for automation, meaning the administration is now letting private actors process sensitive information about government operations, public services and regulatory workflows.
The move shifts public data into private hands without clear policy guardrails. It also opens the door to biased or inaccurate systems making decisions that affect real lives. Algorithms don’t replace accountability. There is no transparency around what data DOGE uses, which models it deploys, or how agencies validate the outputs. Federal workers are being terminated based on A.I. recommendations. The logic, weightings and assumptions of those models are not available to the public. That’s a governance failure.
The Need for Better A.I. Models
Better, more reliable A.I. models are needed to meet the specific challenges and standards required in public service.
Surveillance without rules, oversight, or even basic transparency doesn’t make a government efficient; it just breeds fear. And when artificial intelligence is used to monitor loyalty or flag words like “diversity,” we’re not streamlining the government; we’re gutting trust in it. Federal workers shouldn’t have to wonder whether they’re being watched for doing their jobs or for saying the wrong thing in a meeting.