Advertising to the Affirmative Audience

Early in the semester, my advertising classes and I talk about the relationship between the intended audience and the brand doing the advertising. The idea is that you’re not always trying to change people’s minds; sometimes you’re trying to excite people who are already fans of the brand.

It’s called advertising to an Affirmative Audience.

It seems on the surface to be a little silly: Why spend money to get the attention of people who already like you and what you sell?

Advertising to the Affirmative Audience helps keep them aware of new products (a new iPhone or PlayStation, for example). It also encourages them to feel good about their continued use of a product or service (say, a subscription like Netflix or Spotify).

It’s a way to get people to reaffirm their enthusiasm, so that they don’t lose interest, and so that they maintain that positive relationship with the brand.

Which is why I want to take a quick moment to talk about foreign election interference.

A possible relationship between disinformation campaigns and advertising strategy

While scanning the comments on a New York Times article about an intelligence briefing on Russia’s intent to interfere with the 2020 election, I came across a commenter questioning how disinformation could possibly sway anyone’s vote.

Okay, I’ll bite.

Let’s say the point of election interference isn’t about changing people’s minds from one party to another. Let’s take the perspective that the goal is to preach to the choir.

When advertising to an Affirmative Audience that already agrees with you, you strengthen their connection to your message.

The more a message connects with an audience, the more it becomes a part of their personal identity.

You can refute a loosely held belief with some effort, but if a person sees multiple messages reinforcing a misconception, those messages entrench the idea in their mind.

It’s part of how our minds work. The more connections we have in our brains to a central idea, the more likely we are to believe that central idea.

It’s unlikely a dank meme would turn a Democratic voter into a Republican voter, or vice versa.

But if a message targeted at a person already predisposed toward one party’s positions gets repeated over and over, it becomes easier for them to believe.

If you see a piece of disinformation presented enough times, from what seems like a variety of different sources, it starts to seem more believable (psychologists call this the illusory truth effect). And if it confirms something you already loosely believed, you’re even more likely to take it as the truth.

The more convinced a person becomes that a false statement is true, the less impact contradictory information will have on their opinion, especially when the contradicting, accurate information doesn’t fit their previously held beliefs.

“But this still doesn’t explain how disinformation could affect an election.”

Take somebody who isn’t a regular voter and give them a steady stream of disinformation that supports one candidate while affirming their previously held beliefs, and now you have someone who will show up on election day to cast their ballot.

Take somebody who has an interest in a candidate, present them regularly with disinformation supporting that interest, and they’ll be less likely to believe contradictory information that shows the candidate in a negative light.

A disinformation campaign doesn’t need to change minds to be effective. It might be more effective if it works to prevent people from changing their minds.