When people have questions, they often ask Google. They expect high-quality, accurate answers. Late last year, it emerged that the top answer Google gave to “Did the Holocaust happen?” linked to a neo-Nazi, white supremacist, Holocaust-denying website.

The ensuing outcry included people buying Google ads for the U.S. Holocaust Memorial Museum so that it would appear near the top of the results as well. After initial resistance, Google tweaked its algorithm – but only enough to push the false, prejudiced information somewhat farther down in the results.

These responses, however, miss a crucial element of the interplay between the tactics of Holocaust deniers (and conspiracy theorists more broadly) and Google’s search algorithm. Google wants to answer questions, and is often very good at it. But when the question itself has a hidden or implicit agenda, like expressing doubt about historical facts, the urge to answer that question shifts from a strength to a weakness.

Sowing doubts about the historical record is the bread and butter of Holocaust denial and of conspiracy theories more broadly. The sites pushing these doubts claim to be innocently curious, “just asking questions” about historical events and widely held beliefs. They are, of course, much more nefarious, seeking to spread anti-Semitism and right-wing hate.

As a scholar of political sociology and the Holocaust, I see clearly that sites intentionally presenting misinformation and propaganda are preying upon Google’s eagerness to answer questions. These sites, peddling what is sometimes called “fake news,” capitalize on people’s tendency to ask those questions directly on Google. This is one important example of the real-world effects of how algorithms are written. Human programmers need to be aware that there can be actual social consequences when they write what can seem like dry, straightforward code.