The review leaves a lot to be desired. Yes, algorithmic policing deserves criticism and attention... but that's a totally different film. This one seems focused on the existential risk. There's a very good chance animal life on Earth will end in the next 100 years because of AI. Vincent's objections run along the lines of (paraphrasing) "But a lot of people are publishing papers about possible ways to rein in AI!" I think that deeply misses the point.
There are thousands of different, independently developed AI programs running right now that play MMORPGs and are programmed to mine resources, forge weapons, and seek out and kill the inhabitants of their world. Lots of these were made by teenagers. No one had to program them to be evil--they're not evil!
How far is Boston Dynamics from being able to perform all of those steps in real life? How far is the US military? There will probably be an arms race with killer robots, there will be tons of private-sector research, there will be script kiddies, we already have homebrewed armed drones... and there's no reliable way to keep software from taking actions in the real world that it would take in a virtual world.
The rhetorical task in front of the alarmist camp is merely to point out that there are many reasonable scenarios in which we inadvertently destroy the human race (and perhaps most animal life on Earth) in the course of improving AI.
The rhetorical task in front of the optimists is to prove that every such scenario anyone could ever conceive is unreasonable and unlikely. I've read a lot of optimists, and I haven't encountered one who seems to be thinking about the problem thoroughly and rigorously.
They mistakenly operate from the presumption that only one body of researchers needs to come to agreement about restrictions on AI research. Or they mistakenly assume that evil is necessary for a machine to kill. Or they act as if we are still in the mainframe or personal computer era, when AI agents could simply be physically unplugged. Or they misunderstand how easy it would be to kill hundreds of millions of people (just get a few dozen sheets of uranium together in various places and clap them together, or set up a network of autonomous factories that disproportionately consume the atmosphere's gases, or make tiny drones that seek and burrow into necks...). Or they miss the aggressive push by major-power militaries toward creating software and hardware for self-perpetuating, autonomous killing machines. Or they misunderstand how opaque the inner workings of trained machine-learning algorithms are.
When I hear someone say that they are certain there's no risk that this ends with the destruction of humanity (and probably most animal life on Earth), I think they are experiencing a profound failure of imagination.