In most cases, it knows us quite well, and in some ways better than we know ourselves.
A study by AI researchers at Brigham Young University, titled “Out of One, Many: Using Language Models to Simulate Human Samples,” found that predictive AI programs exhibited a striking degree of what its authors call “algorithmic fidelity,” or a precise mapping to actual human behavior.
“Because these AI tools are basically trained on stuff that humans produce, things that we write, documents we make, websites we write, they can reflect back to us a lot of interesting and important things about ourselves,” Ethan Busby, a political psychologist and co-author of the study, told The Epoch Times. “Kind of like if someone read your diary from start to finish, they would know a lot of things about you, and you’re not going to like every single thing.”
“In a similar way,” Busby said, “these tools have read so many things that humans have produced, and they can replicate or say back to us things about ourselves that we didn’t necessarily know.”
The study sought to analyze human behavior in the context of elections and asked how accurately a GPT-3 language model could predict voting patterns based on socio-demographic factors like a person’s gender, age, location, religion, race, and economic status. The authors used these factors to create “silicon samples,” or composite personas based on varying combinations of these attributes.
“You can basically ask these tools to put themselves in a specific frame of mind and pretend to be essentially this person, pretend to have these characteristics,” Busby said. The researchers asked the program how these “silicon samples” would vote in specific campaigns, then compared the results with actual voters’ behavior in elections between 2012 and 2020, using data from the American National Election Studies.
For example, regarding the 2016 election, Busby said, “We could say, what kinds of groups are going to be pivotal in Ohio?” The researchers found that the AI accurately predicted how people would vote based on their attributes.
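To make the approach concrete, the following minimal Python sketch shows how a “silicon sample” might be assembled and queried. The attribute values, the prompt wording, and the query_model placeholder are illustrative assumptions, not the study’s actual materials or code; a real implementation would send the prompt to a language model such as GPT-3 and compare the aggregated answers against survey data.

import random

# Illustrative socio-demographic attributes (not the study's actual variables or wording).
ATTRIBUTES = {
    "gender": ["a man", "a woman"],
    "age": ["in my 20s", "in my 40s", "in my 60s"],
    "location": ["from rural Ohio", "from suburban Pennsylvania", "from urban California"],
    "religion": ["a regular churchgoer", "not religious"],
    "race": ["white", "Black", "Hispanic"],
    "economics": ["working class", "middle class", "upper-middle class"],
}

def build_persona_prompt(profile):
    """Compose a first-person backstory from one combination of attributes,
    then ask the model to answer as that persona (a 'silicon sample')."""
    backstory = (
        f"I am {profile['gender']}, {profile['age']}, {profile['location']}. "
        f"I am {profile['race']}, {profile['religion']}, and {profile['economics']}. "
    )
    question = "In the 2016 presidential election, I voted for"
    return backstory + question

def query_model(prompt):
    """Placeholder for a call to a language model such as GPT-3.
    Here it returns a dummy answer so the sketch runs end to end."""
    return random.choice(["the Republican candidate", "the Democratic candidate"])

def simulate_sample(n=5):
    """Draw n random attribute combinations and collect the model's
    simulated vote for each persona."""
    results = []
    for _ in range(n):
        profile = {key: random.choice(values) for key, values in ATTRIBUTES.items()}
        results.append((profile, query_model(build_persona_prompt(profile))))
    return results

if __name__ == "__main__":
    # The study compared answers like these, aggregated over many personas,
    # against real voter behavior from American National Election Studies surveys.
    for profile, vote in simulate_sample():
        print(profile["location"], "->", vote)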
Left and Right Decry AI When It Costs Them Elections
Artificial intelligence is highly useful to organizations that want to target political messaging campaigns or fundraising efforts. But some political analysts have raised red flags about this, alleging unfairness and election interference. Their degree of outrage, however, largely depends on whether their candidates or causes succeeded or failed.
In 2017, The Guardian, a left-wing British newspaper, published a series of articles claiming that conservative tech entrepreneur Robert Mercer, whom it called “the big data billionaire waging war on the mainstream media,” had financed a campaign strategy using AI to circumvent mainstream media narratives. This, the paper alleged, illicitly swayed voters in favor of Donald Trump, resulting in his victory in the 2016 presidential election.