

Smart, quasi-autonomous robots and machines are replacing humans in workplaces all over the world. They learn fast, work hard, and certainly complain less. Intelligent technologies are increasingly delivering greater value for less money. But "better than human" comes with its own managerial challenges. Brilliant, hard-working machines will require job reviews every bit as much as lazy and toxic humans. When sophisticated robots - Fidelity robo-advisers, Uber autonomous cars, Watson's medical diagnoses - behave in ways that make customers nervous, how do they get the feedback essential to improvement? Who - or what - is accountable? What happens when these algorithmic ensembles underperform? Who retrains "machine learning" underachievers? Microsoft's recent Tay debacle is a perfect example of what happens when you don't take machine learning "training" seriously enough. Effective executives understand that the future of productivity and customer loyalty depends as much on motivating and managing their machines as on inspiring their people. Put bluntly, executives who can't get their robots to do a better job may lose their own. Empowering smart machines to - pun intended - live up to their potential may well become the essential new 21st-century leadership skill.

"This goes to the heart of intelligent systems design," asserts Jerry Kaplan, author of Humans Need Not Apply. Kaplan, a longtime Silicon Valley serial entrepreneur and investor, flatly dismisses efforts to humanize or "managerialize" smart machines as "excessive and gratuitous anthropomorphism." "The natural extension of longstanding historical progress is 'automation,' not a re-creation of the human mind and condition." "Look," he adds, "you don't sit these machines down and make them feel bad." Yet Kaplan also acknowledges that customer-facing machine learning systems increasingly integrate emotional affect, not just algorithmic optimization, into their behavior.

For example, just as with human drivers, impatient passengers in Uber autonomous vehicles might want their car to drive faster and more aggressively than usual. They may want their car to "retaliate" if they're cut off by other vehicles. Unresponsive autonomous vehicles - like unresponsive human drivers - would get poorer ratings if they didn't meaningfully react to passenger requests.

"Aggressive" driving by responsive autonomous vehicles may be as important to customer service success as aggressive driving by humans. Precisely how much discretion should autonomous cars have in driving aggressively to help a passenger catch a flight? As long as customer desires are legal, ethical, and safe, smart machines could be as appropriately responsive as smart humans. "Why would you treat them differently?" asks Tom Mitchell, the E. Fredkin University Professor and Chair of Machine Learning at Carnegie Mellon.
