I find the assertion that machines don't have morality entirely true, at least as things currently stand. However, this doesn't mean they can't possess morality, seeing as morality is, in essence, a set of internalized rules and guidelines. It's not as if the principles of human thought and behavior (and, by extension, the thought patterns that lead to moral, ethical behavior) are unknowable. All people operate on a basis of cause and effect, with the differences in their predictable responses being what we call personality. And then, once we get all that down to algorithms, we have AI, possibly the friendly kind.
I would agree that I wouldn't want anything simpler than that performing law enforcement, since it probably wouldn't have an adequate grasp of nuance and context to fulfill its functions well. But if we could have an artificially sapient, humanlike robotic police officer with a programmable personality and the sort of efficiency that let the robot arm from the video defeat a master swordsman with probably far less practice than he had, why not? I suspect it would be no more fallible in that role than the average police officer. Perhaps even less so.
Of course, we'll probably get unmanned police drones that shoot people a little too often before such a thing is possible, but you know. A man can dream.
The concept of a universal ethical calculus is quite old and widely known, but somewhat problematic.
The main problem is that it doesn't work.
If it doesn't work, it requires improvement, doesn't it?
And of course it's not going to actually work if you put randomly assigned values into a formula you pulled out of your ass (see: the Drake equation). However, imagine we did have an algorithm that produces sapience, and could therefore work with behavior on a fundamental, mathematical level, figuring out what produces acceptable humanlike behavior. Now that's a position from which an ethical calculus could plausibly be derived.
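To put a number on how arbitrary that gets, here's a quick Python sketch of the Drake equation with two sets of plugged-in values I made up on the spot (none of them are canonical estimates); the answer swings by something like thirteen orders of magnitude depending on what you feel like assuming:

```python
# Rough sketch: the Drake equation, N = R* * fp * ne * fl * fi * fc * L,
# evaluated with two arbitrary sets of assumed values to show how wildly
# the result swings depending on what you plug in.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder "optimistic" assumptions (not canonical figures).
optimistic = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2, L=1_000_000)

# Placeholder "pessimistic" assumptions (also invented).
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.001, f_i=0.001, f_c=0.01, L=100)

print(f"optimistic:  {optimistic:,.0f} civilizations")
print(f"pessimistic: {pessimistic:g} civilizations")
```

Same formula, wildly different universes, which is exactly the problem with bolting numbers onto a calculus nobody has grounded yet.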
BUT PLEASURE IS THE ULTIMATE GOAL OF MY LIFE
This actually describes my thoughts on the matter of pleasure as well, by the by. Pleasure (as in, a state of mind that creates positive emotions, just to make sure we don't get into a discussion about what words mean) is the ultimate goal of everyone's life. What probably complicates things is that a single, solitary form of pleasure pursued to the exclusion of all others doesn't always equate to actual pleasure, on account of the nervous system adapting to it. For instance, chemical highs lose their kick after a long time of having the exact same kind, while the pleasure of agency dulls if you take your life in a fundamentally unpleasant, self-destructive direction. Tending toward extremes decreases the pleasure gained, while a variety of pleasures in life makes each of them that much more efficient at providing happiness - a state of overall pleasure gain.
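If you want to see the adaptation bit as a toy model rather than prose, here's a rough Python sketch; the decay and recovery constants are made-up assumptions, only there to show the shape of the effect:

```python
# Toy model of hedonic adaptation: each repeat of the same pleasure
# yields less than the last, and sensitivity slowly recovers while
# that pleasure is rested. All constants are arbitrary assumptions.

DECAY = 0.6      # fraction of sensitivity left after indulging once
RECOVERY = 0.1   # sensitivity regained per step while resting

def total_pleasure(schedule, kinds):
    sensitivity = {k: 1.0 for k in kinds}   # 1.0 = fully novel
    total = 0.0
    for choice in schedule:
        total += sensitivity[choice]         # enjoy it at current sensitivity
        sensitivity[choice] *= DECAY         # adapt to the chosen pleasure
        for k in kinds:                      # the rested pleasures recover a bit
            if k != choice:
                sensitivity[k] = min(1.0, sensitivity[k] + RECOVERY)
    return total

kinds = ["food", "music", "games"]
same = ["food"] * 12        # the exact same kind, over and over
varied = kinds * 4          # rotating between pleasures

print("same thing 12 times:", round(total_pleasure(same, kinds), 2))
print("varied 12 times:    ", round(total_pleasure(varied, kinds), 2))
```

The repeated schedule flatlines pretty quickly, while rotating keeps each pleasure near full strength, which is all the "variety is more efficient" claim amounts to here.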
To complicate matters further, while people do choose their course of action based on the expected pleasure it will bring, their projection of pleasure gain can often be wrong (based on incorrect assumptions or made with flawed reasoning). Furthermore, they can project themselves more broadly than as a self-contained entity, identifying with concepts, communities and other people - hence the idea of self-sacrifice. They can even project themselves beyond their own deaths with the idea of an afterlife.
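And the "projection can be wrong" part in the same toy terms (numbers invented purely for illustration): a chooser maximizing predicted pleasure, fed a skewed prediction, reliably picks the worse option:

```python
# Toy illustration: the choice is made on *predicted* pleasure, but the
# outcome depends on *actual* pleasure, so a skewed prediction leads
# the chooser astray. All numbers are invented for the example.

predicted = {"binge the same high again": 0.9, "try something new": 0.6}
actual    = {"binge the same high again": 0.3, "try something new": 0.7}

choice = max(predicted, key=predicted.get)   # what the person picks
best   = max(actual, key=actual.get)         # what would actually pay off

print("chosen:", choice, "-> actual pleasure", actual[choice])
print("better:", best,   "-> actual pleasure", actual[best])
```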