As artificial intelligence technology advances daily, scientists and researchers have been looking into the risks and benefits AI carries in this year's election. While AI can allow bad actors to misinform the public or compromise security, Mike Kirby, a leadership member of the University of Utah's Responsible AI Initiative and a professor in the School of Computing, said he thinks AI can be viewed as a tool rather than a risk.
The RAI is currently speaking with community members, including state leaders, lawyers and psychologists, to gather as much data and input as it can on how to use AI most effectively.
According to a U report, the Responsible AI Initiative, funded with $100 million, aims to use advanced AI technology responsibly to tackle societal issues; its current subtopics are the environment, education and healthcare.
While elections are not currently a subtopic of the initiative, Kirby said it could be in the future.
Kirby said the media currently portrays AI either as a dystopian mechanism that will end the world or as a utopian supertool, and the RAI lies in the middle of these polarized views.
“We don’t take a dystopian or a utopian view,” he said. “We try to take a measured view, a healthy optimistically measured view.”
However, while they are optimistic, Kirby clarified they do not operate under “blind optimism.”
The RAI looks for the positives of AI and determines how to use them as tools, while understanding that these opportunities come with future challenges.
When applying this research to the U.S. election system, Kirby said that while the technology can be used to harm elections, the same technology can be used to counteract that harm.
Anomaly detection is one example. Kirby said AI has ways of “sifting through data at rates that [humans] can’t” and can “look for patterns that are anomalous and should be investigated.”
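To make the idea concrete, below is a minimal, hypothetical sketch of what such anomaly detection could look like in Python, using scikit-learn's off-the-shelf IsolationForest on invented precinct-level figures. Neither the data nor the model choice comes from the RAI or from election officials; the sketch only illustrates the pattern Kirby describes, surfacing unusual records for a human to investigate.

```python
# Hypothetical sketch of anomaly detection over election-style data.
# The numbers are invented and IsolationForest is an off-the-shelf
# assumption, not the RAI's actual method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated per-precinct features: turnout rate and share of mail-in ballots.
normal = rng.normal(loc=[0.55, 0.30], scale=[0.05, 0.05], size=(500, 2))
odd = np.array([[0.98, 0.02], [0.10, 0.95]])  # two implausible precincts
precincts = np.vstack([normal, odd])

# IsolationForest isolates points that are easy to separate from the rest;
# such points receive a prediction of -1 (anomalous) rather than 1 (normal).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(precincts)

for idx in np.where(labels == -1)[0]:
    print(f"Precinct {idx} looks anomalous and should be investigated: "
          f"turnout={precincts[idx, 0]:.2f}, mail_share={precincts[idx, 1]:.2f}")
```

The model does not judge intent; it only flags the records that are easiest to separate from the bulk of the data, leaving the actual investigation to human auditors.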
Kirby disagrees with the opinion that AI is “bad.” Even considering how AI has been used for “deepfakes” and for spreading disinformation to voters, Kirby said AI should not be treated as an entity that has a choice: bad actors use AI with negative intentions.
The use of AI for disinformation is “encouraging a vigilance on the part of us as consumers,” he said. “Just understanding the fact that [we] need to be mindful of this.”
The International Federation of Library Associations and Institutions published an infographic on how to spot fake news. Its guidelines include considering the source of the information, checking the sources it provides, checking the date of publication and examining one's own biases.
U Political Science Professor Josh McCrain said AI is not a concern when considering election security, adding that election infrastructure is “extremely secure” and that concerns about its integrity are raised with “bad intentions and bad faith” by people when an election does not turn out in their favor.
“These are really secure elections,” he said. “And anybody suggesting otherwise has political motivations.”
McCrain said the main concern is deepfakes. Because there is currently no legislation on deepfakes, it is up to social media platforms, which the government does not regulate, to address them on their own.
“That is definitely something that can be exploited by bad actors,” McCrain said.
Deepfakes have been around for years; however, as technology advances, they are expected to become even more prominent. Deepfakes can include fake videos of politicians saying things they haven’t said, which could ultimately sway voters with disinformation.
More recently, in January, a robocall imitating President Joe Biden went out to New Hampshire Democrats, telling them not to vote in the Jan. 23 presidential primary.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday,” the call said, according to NBC News.
“Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated,” the New Hampshire attorney general’s office said in a statement.
Solving the issue of deepfakes and disinformation is not as simple as detecting the anomalies left by bad-actor interference.
“What we don’t want is the mechanisms that we create to try to squash disinformation to be those mechanisms that squash the voice of freedom that’s needed,” said Kirby.
The RAI also does not want these mechanisms to remove factual information.
Kirby said he appreciates the challenge.
“This is the amazing thing about our liberal democracies,” he said.
Andre Montoya • Feb 28, 2024 at 1:36 pm
Great story, Libbey! AI is unfortunately here to stay and it seems we’ll all have to learn to be responsible in spotting it.
John Hedberg • Feb 29, 2024 at 4:26 am
“Spotting it”? 😊 When I was 4, my mother had to drive me across a state highway intersection with no traffic lights to get me to kindergarten. This involved crossing traffic from one direction, pausing on the median, and then crossing traffic oncoming from the other direction as soon as the way was clear.
This feat was complicated by the fact that the speed limit was 55, and this intersection was just over the top of a rise, so Mom couldn’t see oncoming traffic more than ~200 feet away on that 2nd side, since the rise blocked her line-of-sight.
One day, it was raining a typical Massachusetts coastal rain (raining like ‘cats & dogs’), and we made the first half of the crossing to the median, where we waited for traffic to clear long enough to cross the remainder. Unbeknownst to us, a semi-trucker gunning uphill over the top of the rise lost his traction and began hydroplaning straight toward our station wagon on the median.
My mother saw him coming (unable to regain traction and stop his drift) for several seconds before he actually hit, but there was nothing she could do to avoid the collision: backing up would put her into 55 MPH traffic behind us, and going forward would do the same thing with traffic moving in the opposite direction, so there was literally nowhere to go and nothing to do. She only had time to turn to me in the back seat and yell “HOLD ON!” as the truck hit, folding the station wagon and pushing us dozens of yards down the grassy median, before both the semi’s momentum and our former car finally came to a steaming & very muddy rest! 😜😅 I was a little late to pre-K that day~!
The point here regarding AI is that by the time we actually “spot” the oncoming AI catastrophe hydroplaning over the top of the hill, it will already be moving too fast for civilization to avoid being hit, and we’re not going to have time to do anything to stop it from folding us in half, except (maybe) yell, “Hold on!” and hope we survive modern society getting squarely T-boned~! 😂😂
But… at least ‘responsible’ people may have a moment to tell us they care about us while the oncoming headlights brighten up the falling rain~ 🙂
‘Nuff said? Kindly,
John Hedberg • Feb 26, 2024 at 2:06 pm
Or you could just re-title this piece: “Responsible U Leaders Fight to Maintain Their Own Mass Formation Psychosis!”
(shout out to Mattias Desmet~) 🙂👨‍🎓
John Hedberg • Feb 26, 2024 at 8:08 am
“U Political Science Professor Josh McCrain said AI is not a concern when considering election security, adding that election infrastructure is ‘extremely secure’ and that concerns about its integrity are raised with ‘bad intentions and bad faith’ by people when an election does not turn out in their favor.”
Machines, like AI, can be programmed to do anything a user wants them to do, so stating that election infrastructure is “extremely secure” only means it’s secure until someone reprograms it not to be. The fastest way to program a lower-tech device is using AI (how long does an update take on your phone?), so unless we return to paper ballots hand-counted by human beings, nothing is secure that AI can’t make insecure in minutes or less. This is why the DOD has an entire cyberwarfare division, so perhaps Mr. McCrain is being (more than) a little naïve~! (In fact, his statement is so ludicrous, I’m surprised he actually made it without cringing as he listened to himself say something so clearly nonsensical, almost like disinformation itself 😎😋).
This is why paper/hardcopy books are so important, when AI can literally rewrite, hide, and erase anything cloud-based to say whatever it’s programmed to say in moments, and human researchers will never know it’s been changed unless they have a paper/hard copy to compare it to. This means even science can be turned into propaganda (which the CCP, North Korea, and others are already known to do), since people (contrary to common sense) seem to trust online sources using anonymously funded algorithms, as if they’re more trustworthy than observations and data going back centuries, observations and data which AI can simply suppress, erase, or rewrite between key-clicks on your keyboard if they’re electronically retrieved. “Extremely secure”! 😜🤪😂
AI is like a snowball at the top of a ski slope. Once it starts rolling downhill, it gathers mass and momentum and gets harder and harder to deflect or control. The books and movies that address and explore a self-contained technology getting out of control are so numerous, there’s nothing more I can say that hasn’t been said, kind of like mixing alcohol with hormones or with driving. We already know where this will lead us, but we still take that first drink thinking, “I know my limits now, and now that I know, nothing bad is going to happen this time.”
Famous last words~! 😂