Humans like to think they are in charge. We are convinced that because we have our hands on the steering wheel we’re the ones driving the car… right up until our GPS sends us into a lake. Despite a long-standing belief that people don’t want computers telling them what to do, the evidence shows that we’re perfectly capable not just of relying on algorithms (as with GPS) but even of sometimes preferring a computer’s advice to a human’s. Whether we realize it or not, we are welcoming our robot overlords in not-so-obvious ways that have encouraging implications for anyone in the business of helping brands communicate with the public.

The terms “computer,” “algorithm,” “artificial intelligence,” and even “robots” all mean roughly the same thing here, because your ordinary consumer can’t explain the difference. To the average person, it’s all just one big futuristic mishmash of everything from now-ordinary devices such as GPS to relatively newfangled tech such as Siri and Alexa to a wall-mounted food replicator on Star Trek: The Next Generation (“Computer: tea, Earl Grey, hot.”).

The conventional wisdom that people trust human judgment over cold calculations dates from the post-war, three-martini-lunch era, when businessmen just assumed they were smarter than algorithms. And that egotism, according to Jennifer Logg, Julia Minson, and Don Moore in a recent Harvard Business Review article, “morphed into the received wisdom that people will not trust and use advice from an algorithm.” Supposedly we have our guards up against the machines, which is part of the reason Alexa isn’t called R2-D2.

The problem with conventional wisdom is that it’s usually heavy on convention and light on wisdom. A good example is the dogma that we won’t trust machines to make decisions for us, which feeds a loop: the story gets framed as a binary choice, a saga of being replaced by machines, us or them. Meanwhile, science keeps turning out studies showing that people do trust machines to make decisions when the machine helps, rather than replaces, humans – yielding “man bites dog” headlines about how humans actually trust algorithms.

This shouldn’t be a huge surprise. In fact, the idea that humans are more comfortable using technology not as a choice between man and machine but as a complement to humanity should be intuitive to anyone who has ever entered an elevator, pressed a button, and expected to end up safely on the correct floor. We have been using technology as a force multiplier for humanity since someone learned which rocks to bang together to make fire.

In their Harvard Business Review article, Logg, Minson, and Moore found that people weren’t initially willing to choose an algorithm’s judgment over their own. But those who had already predicted the outcomes of political and business events were willing to adjust their guesses based on algorithmic suggestions. This not only improved their accuracy; they also made more accurate predictions than the political and business experts who refused the algorithms’ help.

The willingness to accept help – but not replacement – from algorithms was confirmed by University of Chicago assistant professor Berkeley Dietvorst, who found that people were more likely to use algorithms when they could tweak them. “If you give the person any freedom to apply their own judgment through small adjustments, they’re much more willing,” he wrote.

It should not be too terribly surprising that humans are willing to cede independent judgment to a higher power. After all, the original black box algorithm goes by many names, is often worshipped on the weekends, and has been credited in at least one popular telling with creating the universe in seven days. Humans have long ceded decision making to forces unknown.

But algorithms are not divine or infallible. They’re just better than we are at some things, especially making predictions. Dietvorst, who estimates that algorithms are up to 15 percent better than we are at predicting stuff, says that random error and “aleatory uncertainty” impede Alexa’s resolute march toward perfection. And, keep in mind, humans at some point wrote the code that created these algorithms. Humanity, and all its imperfections and prejudices, is baked in. We might someday be replaced by Skynet, but, right now, Skynet is tethered by the imperfections of its creators.

Distinguishing between when humans won’t rely on algorithms (when they feel replaced) and when they will (when computers enhance their abilities) shows where advances in AI might work best in communications.

Journalism is showing some promising applications. Lehigh University has an introductory course in artificial intelligence for journalism students, and Bloomberg is using automated technology to “assist reporters in churning out thousands of articles on company earnings reports each quarter.” And, in a case of what-they-don’t-know-won’t-hurt-them, AI is already producing journalism on minor league baseball for The Associated Press, high school football for The Washington Post, and earthquakes for The Los Angeles Times, freeing human writers to report and write stories requiring a human touch.

Where the public seems less willing to put up with artificial intelligence is in customer service. A Harris Poll found that a slight majority (52 percent) of consumers get frustrated when forced to interact with an AI-powered chatbot with no option to talk to a real person, and an additional 18 percent feel angry. More promising are the companies using AI to help their customer service staff with automated task management, schedule optimization, and labor forecasting – in other words, augmenting, not replacing.

All of which is why people are now best off thinking of algorithms as an additive, not a replacement. We’ve gotten so used to plugging directions into a GPS whenever we go anywhere that we have stopped thinking – literally. According to a 2017 study, overreliance on GPS is linked to atrophy of the hippocampus, which is associated with increased risk of PTSD and Alzheimer’s disease. The study’s authors recommended using your own brain to find your way once in a while. Because, as it turns out, algorithms aren’t supposed to replace us, just augment us, regardless of how comfortable we are taking driving directions from a computer-generated voice.