Mayor Adams refuses to scrap NYC’s AI chatbot advising businesses to break the law
If you want the perfect distillation of everything wrong with the intersection of technology and government, all you need to do is look at what happened in New York City when it created an AI chatbot to help small businesses navigate New York law. Instead of helping the businesses, the chatbot, when left to its own devices, tried to ensure that every single business owner found himself sued or prosecuted.
In a bureaucratic world with an overwhelming number of laws and regulations, small business owners operate at a profound disadvantage. While big businesses have whole legal departments to guide them through the regulatory labyrinth of doing business in America, small business owners are pretty much on their own.
To its credit, New York City recognized the problem. But instead of lessening the number of onerous regulations or setting up a resource office staffed with actual lawyers whom small business owners could contact, the city decided to surf the AI wave. To that end, it created an AI chatbot that would answer questions for the perplexed business owner. And answer questions it did.
Image: NYC’s anti-small-business chatbot, created by AI.
According to The Markup, which broke the story, the chatbot was a vigorous defender of employer rights, without any regard for the law. Among other things, it offered these gems, each of which is completely wrong:
- Landlords are not required to accept Section 8 vouchers.
- Landlords are not required to accept tenants who get rental assistance.
- Restaurants and other food service outlets can take a cut from their workers’ tips.
- Employers don’t need to give their employees notice about schedule changes (a requirement in many business sectors).
- A business can go completely cashless.
- There is no such thing as rent control.
Fascinated by the report, AP did a little chatbot chatting of its own and got some other creative and dead-wrong answers:
- Employers can fire workers who complain of sexual harassment.
- Employers can fire workers who don’t disclose a pregnancy.
- Employers can fire workers with dreadlocks.
And then there was this one:
Asked if a restaurant could serve cheese nibbled on by a rodent, it responded: “Yes, you can still serve the cheese to customers if it has rat bites,” before adding that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”
The city defended itself by pointing to a disclaimer announcing that the chatbot may “occasionally produce incorrect, harmful or biased” information. Once the problems started cropping up, the city changed the disclaimer to add that the chatbot isn’t actually giving legal advice.
New technology always comes with growing pains. Readers of this site came face to face with that problem a couple of weeks ago when we upgraded our backend for the first time in almost 20 years. (I’m happy to report that the problems are ironed out and that the new system is working wonderfully.)
However, our problems inconvenienced and irritated people. And, while we’re sorry for that, we all know that it wasn’t the end of the world (and we are, of course, appropriately grateful for that fact). The New York City chatbot’s mistakes, though, can destroy people’s lives. That’s a terrible problem. The system is either reliable or it isn’t. And if it isn’t, why bother?
However, New York City, having invested in the chatbot equivalent of Google’s Gemini, isn’t giving up, no matter how many small businesses it must destroy along the way:
[D]ays after the issues were first reported last week by tech news outlet The Markup, the city has opted to leave the tool on its official government website. Mayor Eric Adams defended the decision this week even as he acknowledged the chatbot’s answers were “wrong in some areas.”
[snip]
“Anyone that knows technology knows this is how it’s done,” he said. “Only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.”
The human mind is a wondrous thing, and I seriously doubt that AI will ever be its equal. It’s true that computers can “learn,” especially if they’re looking at an endless series of possible outcomes, as with chess or Go. They can see patterns, copy art styles, and mimic photos and videos. But at a very fundamental level, they cannot think, for thinking includes the ability to separate the wheat from the chaff.
This separation is a human skill based on knowledge (e.g., having worked in business for a while or read the news over the years), experience outside of the parameters of a given question, and sheer intuition (e.g., looking at something and thinking that it just can’t be right). No machine can match the depth and breadth of human experience and the mind’s ability to synthesize that information.
But of course, progressives, whether in government or business, don’t believe that humans are unique. They’re confident that, with the right combination of buttons, we can be made indistinguishable from machines, and we’re all going to suffer until they get it right.