Russia's Election Interference Is Digital Marketing 101:
antoine-roquentin:
Thus far, the media coverage of Mueller’s indictment has fixated on how all this could have happened, and probed whether the Trump campaign was involved. The answers to these questions will all emerge in time. The more troubling question is why it was so easy to make fools out of so many Americans.
Consider two things. First: While the Russians created fake accounts to pose as Americans on social media and buy ads, the technologies they deployed are all commonplace in the digital-marketing industry—this was no 007-style spycraft. Second: These days, Americans live in divisive, partisan information environments, chock-full of incendiary rhetoric. They have very low standards for the sources they accept as accurate, and they aren’t great at parsing fact from fiction on the Internet. Even “digital natives”—young people most at home in an online information environment—have proven inept at judging credibility. In other words, when the Russians set out to poison American politics, they were pushing on an open door.
How does a ready-made toolbox for digital manipulation already exist? For that, we have the digital-advertising industry to thank.
In a recent study on the digital-advertising industry that we published with New America and Harvard’s Shorenstein Center, we analyzed how the tools of digital marketing can be readily repurposed by agents of disinformation. The basic idea is for advertisers to micro-target digital advertising at very specific demographic slices of social-media users to see how they respond. A disinformation operator could test hundreds of different messages, often aimed at thousands of different permutations of demographic groups on the advertising platforms of the most widely used social-media companies.
For example: A political advertiser (or communicator) might test a message about immigration in different cities across the country, or it might compare responses to that message based on age, income, ethnicity, education level, or political preference. Because digital-media companies like Facebook collect vast amounts of data on their users, advertisers can parse based on age, income, ethnicity, political affiliation, location, education level, and many other consumer preferences that indicate political interests. Once the ad buys indicate what messages get the biggest response from particular groups, the operator can organize its entire social-media campaign to reach those people and build out bigger and bigger audiences.
This is digital marketing 101. Start with a product to sell and test a variety of messages until the best one rises to the surface.
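To make that test-and-select loop concrete, here is a minimal Python sketch of the workflow: enumerate message and audience permutations, run a cheap test buy for each, and rank the results. The message names, audience slices, and click rates below are invented for illustration; a real operator would read these numbers back from an ad platform’s reporting tools rather than simulating them.

```python
import itertools
import random

# Hypothetical message variants and audience slices (invented for illustration).
messages = ["immigration_msg_a", "immigration_msg_b", "immigration_msg_c"]
segments = list(itertools.product(
    ["18-29", "30-49", "50+"],          # age bracket
    ["urban", "suburban", "rural"],     # location type
    ["left-leaning", "right-leaning"],  # inferred political preference
))

def test_buy_click_rate(message, segment):
    """Stand-in for the click-through rate an ad platform would report
    after a small test buy; here the number is simply simulated."""
    return random.Random("|".join((message,) + segment)).uniform(0.005, 0.05)

# Run a cheap "test buy" for every message x segment permutation,
# then rank to see where each message resonates most.
results = sorted(
    ((msg, seg, test_buy_click_rate(msg, seg)) for msg in messages for seg in segments),
    key=lambda r: r[2],
    reverse=True,
)

# The top combinations are where the bulk of the real budget would go.
for msg, seg, ctr in results[:5]:
    print(f"{msg} -> {'/'.join(seg)}: simulated CTR {ctr:.1%}")
```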
In the election-interference case, the “products” for Russian trolls were divisive political messages about issues like, say, religion. But just as with any other product, the ads ginning up fear and outrage about Islam in America benefited from Google and Facebook’s machine-learning algorithms, which scan vast amounts of data and conduct tests on multitudes of political messages to determine the best way to find and engage an audience. Everybody makes more money if the ads work well—that is to say, if people click on them. The economic interests of advertisers and social media companies are essentially aligned. And while Facebook, Google, and Twitter are now taking steps to identify and block ads purchased by foreign agents and shut down these attempts to push fabricated news, the underlying machine of the ad tech market will, theoretically, accelerate users’ consumption of all but the most egregious content.
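As a rough sketch of that click-optimization dynamic (not Facebook’s or Google’s actual delivery system, only the simplest toy version of “show more of whatever gets clicked”), an epsilon-greedy loop in Python might look like this; the ad names and click probabilities are made up:

```python
import random

# A toy epsilon-greedy optimizer: mostly show the ad with the best observed
# click rate, occasionally explore the others.
def serve_impressions(ads, true_click_prob, rounds=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    shown = {ad: 0 for ad in ads}
    clicked = {ad: 0 for ad in ads}
    for _ in range(rounds):
        if rng.random() < epsilon or not any(shown.values()):
            ad = rng.choice(ads)  # explore a random creative
        else:
            # exploit: pick the creative with the highest observed CTR so far
            ad = max(ads, key=lambda a: clicked[a] / shown[a] if shown[a] else 0.0)
        shown[ad] += 1
        clicked[ad] += rng.random() < true_click_prob[ad]  # simulated click
    return shown, clicked

# Invented click probabilities: the most inflammatory creative happens to draw
# the most clicks, so the optimizer routes most impressions toward it.
ads = ["mild_policy_ad", "pointed_ad", "outrage_ad"]
true_click_prob = {"mild_policy_ad": 0.01, "pointed_ad": 0.02, "outrage_ad": 0.05}
shown, clicked = serve_impressions(ads, true_click_prob)
for ad in ads:
    print(f"{ad}: {shown[ad]} impressions, {clicked[ad]} clicks")
```

After ten thousand simulated impressions, most of the traffic has drifted to the most provocative creative, simply because it gets clicked the most.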
When political advertisers—including purveyors of disinformation—get into the mix, the economics of audience segmentation and micro-targeted advertising start to produce what is known as a “negative externality” in the market, or an unintended outcome that harms the public. The system naturally organizes people into homogenous groups and feeds them more of what they want—typically, information that reinforces their pre-existing beliefs—and then ups the sensation-factor in order to hold people’s interest for longer stretches of time.
A recent analysis of YouTube, for instance, showed that the videos in the “next up” queue were fed by an algorithm that prioritized keeping eyeballs glued on videos. The results predictably fed users content that matched previous preferences, or, failing that, just increased the level of sensationalism. In the wake of the Las Vegas shooting, users who watched at least one YouTube video questioning whether the shooting actually happened were then recommended more videos of the same sort—a dangerous example of how social-media algorithms can perpetuate and promote propaganda.
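To illustrate that dynamic (and emphatically not as YouTube’s real ranking code), here is a toy Python ranker that scores candidate videos by overlap with the viewer’s watch history plus an invented “sensationalism” weight standing in for predicted watch time; a viewer who watched one conspiracy video gets recommended more of the same.

```python
# A toy "next up" ranker, not any platform's actual algorithm: candidates are
# scored by (a) topic overlap with what the user already watched and (b) an
# invented "sensationalism" weight standing in for predicted watch time.
def rank_next_up(watch_history_topics, candidates):
    def score(video):
        overlap = len(set(video["topics"]) & set(watch_history_topics))
        return overlap * 2.0 + video["sensationalism"]
    return sorted(candidates, key=score, reverse=True)

# Invented example: a user who watched one conspiracy video about the shooting.
history = ["las_vegas_shooting", "conspiracy"]
candidates = [
    {"title": "Local news recap",       "topics": ["las_vegas_shooting"],               "sensationalism": 0.2},
    {"title": "What really happened?!", "topics": ["las_vegas_shooting", "conspiracy"], "sensationalism": 0.9},
    {"title": "Crisis actors EXPOSED",  "topics": ["conspiracy"],                       "sensationalism": 1.0},
    {"title": "Gardening basics",       "topics": ["gardening"],                        "sensationalism": 0.1},
]
for video in rank_next_up(history, candidates):
    print(video["title"])
```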
Today, even though hundreds of millions of people get their news and information from Google, Facebook, and Twitter, they are fragmented and polarized into a variety of isolated communities, ranging from the staunchly conservative to the hard left. In such an information environment, it’s common for everyday users of social media to circulate incendiary content from dubious sources. So when the Russians inject streams of content suggesting that NATO is showering chemicals across Poland or that a Ukrainian policeman proudly donned a Nazi uniform, it doesn’t seem so extraordinary for most of the audience.
it’s virtually guaranteed that the vast majority of the world’s nations undertake these tactics. but would the people getting mad about russia doing this get mad about the uk? qatar? bolivia? what about corporations that spend on political advertising, both directly and through proxies? where is the line drawn? which countries and corporations have a green light to manipulate us? that’s why i can’t believe the focus on russia is simply because media people care about the manipulation of american politics by different actors. rather, it seems like a targeted campaign to raise fear over russia and stoke the embers of a new cold war, which would inevitably involve massive weapons and intelligence spending, among other things.
#you know what fucking sucks about this whole debate#people have completely forgotten the cambridge analytica story#and honestly if there is a case to be made for unethical election meddling#its the campaign disseminated via cambridge analytica#with funding that is…not going to be scrutinized#and using a level of private surveillance that is…also not going to be scrutinized#but apparently its ttly okay when yr govt does it to you#thats the thing that really gets to me about all this #uspol (tags via @chamerionwrites)