

Personally, I feel that the hate for AI is misplaced (mostly; I do get that there is a lot of nuance around people's feelings on training data sourcing, etc.). Partially because it's such a wide catch-all term, but mostly, by far, because all of the problems with AI are actually problems with the underlying crony capitalism in charge of its development right now.
Every problem like AI "lacking empathy" comes down to the people using it: either they don't care to keep it out of places where it fails at that goal, or they are explicitly using it to strip people of their humanity, which is itself an act that inherently lacks empathy.
If you take away the horrible business motivations, I think it's pretty undeniable that AI is, and will be, a great technology for a lot of purposes, and not for a lot of the ones it's used for now (like the persistent idea that all UI can be replaced so that programmers won't be needed for specific apps, and other such uses).
Obviously we can't just separate the two, but I think it's important to think about, especially regarding regulation. That's because I believe big AI is currently practically begging to be regulated in a way that makes the moat to create useful AI so large that no useful open source general purpose AI tools can exist without corporate backing. That, I think, is one of their end goals, along with making it far more expensive to become a competitor.
That being said, this has gotten a little out of hand, since the discussion was about software in general. On that front, and regarding AI, I do believe empathy can be built in, and built correctly, a computer system could have a lot more empathy than most human beings, who typically act with meaningful empathy only towards people they personally identify with, which leads to practices that reinforce awful systemic discrimination.
As for the Flock example, I think it's almost certain they got in with some backroom deals, and in a fairer world… where those still exist somehow, the police department would have a contract with stipulations about what happens after false identifications. The police officers also would not be traumatizing people over stolen property in the first place.
That is all to say: I think that when software is blamed, what should often actually be blamed are the business goals that led to that software's creation and the people behind them. Software is, after all, the automation of its owners' will.

This is why people who are gung-ho about policing AI need to slow their roll.
If they got their way, they don't realize it would actually be what the big AI companies have wanted and been begging for all along.
They want AI to stay centralized and impossible to enter as a field.
This is why they eventually want to lose the copyright battles: so that only they will have the funds to afford making usable AI in the future (this of course refers to the types of AI that require training material of that variety).
What that means is there would be no competitive open source, self-hostable options, and we'd all be stuck sharing all our information through the servers of three US companies or two Chinese companies while paying out the ass to do so.
What we actually want is sanity, where it's the end product that is evaluated against copyright.
For a company selling AI services, you could maybe argue that the service itself is the product, but then what of an open source model? Is it delivering a service?
I think it should stay as it is: if you make something that violates copyright, then you get challenged, not your tools.