Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.
Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”
MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”
“AI is the future,” Kemper assured the MTA’s safety committee.
So far, the MTA insists this isn’t about watching you, per se. It’s about watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”
And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.
The Governor’s Favorite Security Blanket
This push didn’t hatch in a vacuum. It’s part of Governor Kathy Hochul’s continuing love affair with surveillance. Since taking office, she’s gone full Minority Report on the MTA, installing cameras on every platform and train car. Kemper reports that about 40 percent of platform cameras are monitored in real time, which counts as an achievement if your goal is to recreate 1984 as a regional transit initiative.
But that’s not enough. Now they’re coming for conductor cabs, too. Because apparently, the guy driving the train might be plotting something.
The justification? Public safety, of course. That reliable blank check for every civil liberties withdrawal.
The Algorithm Will See You Now
There’s a strange and growing faith among modern bureaucrats that algorithms are inherently wiser than humans. That they’re immune to the same messy flaws that plague beat cops and dispatchers and mayors. But AI isn’t some omniscient subway psychic. It’s a mess of code and assumptions, trained on biased data and sold with slick PowerPoint slides by tech consultants who wouldn’t last five minutes in a crowded Bronx-bound 4 train.
Then there’s the federal pressure. US Transportation Secretary Sean Duffy threatened to yank federal funding unless the agency coughed up a crime-fighting strategy. And when Washington says jump, the MTA asks if it should wear a bodycam while doing it.
So the MTA submitted a plan: basically a warmed-over casserole of ideas it was already cooking, only now with more jargon and AI glitter sprinkled on top.
You’re the Suspect Now
The whole thing slots nicely into a global trend where governments outsource paranoia to machines. From South Korea’s “Dejaview” to the UK’s facial recognition fails to China’s social credit panopticon, the race is on to see who can algorithmically spot thoughtcrime first. The problem? Machines are stupid. And worse, they learn from us.
Which means whatever patterns these systems detect will reflect the same blind spots we already have, just faster, colder, and with a plausible deniability clause buried in a vendor contract.
And while the MTA crows about safer commutes, the reality is that this is about control. About managing perception. About being able to say, “We did something,” even if that something is turning the world’s most famous public transit system into a failed sci-fi pilot.
So go ahead. Pace nervously on the platform. Shift your weight too many times. Scratch your head while frowning. In the New York subway system of tomorrow, that might be all it takes to get flagged as a threat.