Colorado Lawmakers Addressing AI Deepfakes in Politics | Westword

Are Colorado Legislators Doing Enough to Address AI Deepfakes?

Legislators in Colorado are trying to mitigate deepfakes with bills like House Bill 24-1147. Unsplash/Priscilla Du Preez
Nowadays, nobody is fully safe from artificial intelligence.

From Donald Trump to Taylor Swift and even President Joe Biden, people from all walks of life have been targeted in "deepfake" videos and audio clips made by AI in recent months and years. Political deepfakes are especially popular right now, with elections across the globe being manipulated by AI-generated content. Now legislation is being put forth as a result.

In Colorado, House Bill 24-1147, known as the "Candidate Election Deepfake Disclosures" bill, is set to be introduced in the Senate next week after advancing through the House earlier this year. The measure calls for the regulation and public disclosure of AI-generated content used in communications about candidates for elective office, and creates a private cause of action allowing candidates to sue.

The problem, however, has become much bigger — and darker — than just politics, according to experts.

"There's another really scary kind of long-term doomsday scenario that people are talking less about," says Ilana Beller, organizing manager of the democracy team for Public Citizen, a consumer advocacy group that tracks state legislation data on deepfakes in elections.

"With more and more deepfakes being circulated, I believe that people will become more and more skeptical of the content and information that they are seeing on the internet," she tells Westword. "We're already living in a society where there's a lot of mistrust of media and of content from elections, and to create an environment in which lots of fraudulent deepfakes are flying around and people aren't sure how to tell what is real and what is fake is potentially really damaging to the social trust in our society. And on top of that, this environment of mistrust and misinformation could potentially provide cover for an actual bad actor to deny real content that depicts them actually doing something bad."

Legislators in Colorado are trying to mitigate the deepfake issue with bills like HB-1147. But many people, like Beller and Andrew Brandt — a cybercrime investigator who lives in Boulder and works for Sophos X-Ops, a U.K.-based cybersecurity firm — are worried it's not enough.

"Yes, this bill is a great start, and I'm very grateful to the sponsors of this bill who recognize the potential harm generative AI poses to political candidates," Brandt told a state House committee on February 26 while testifying on his own behalf in support of HB-1147. "But this bill does not go far enough. It does not protect everyone in the state of Colorado. So I hope you will pass this, I hope the governor signs it into law, and then I hope this committee and the original [sponsors] return with another bill that puts some guardrails on the AI industry and protects the rest of us from hyper-accelerated fraud and abuse."

Pointing to a deepfake incident in New Hampshire, in which AI-generated robocalls mimicking President Biden's voice urged people to skip the presidential primary, Brandt also detailed the harm that AI audio and video can have "not just on elections, but our human ability to discern truth from fiction." He noted how election disinformation can be created in minutes, spreading harm "around the world before the truth can even tie its shoes."

And it's become a problem that isn't just affecting political candidates.

Other Examples of Deepfake Deception

"In my own work, we are already seeing generative AI used to craft convincing frauds," Brandt told the House committee while detailing a crime known as "pig butchering."

"Criminals [who] befriend lonely people online and eventually convince them to invest their life savings in fraudulent investment schemes have recently adopted generative AI, because many of the perpetrators live outside of the U.S. and don't speak English as their first language," he described.

In February, a finance staffer of a large corporation in Hong Kong was tricked into transferring more than $25 million to accounts controlled by fraudsters using AI tools to create realistic and real-time video and audio of the company's chief financial officer on a Zoom meeting.

Another important problem that keeps getting swept under the rug but "is worth touching on," according to Beller, is how deepfakes are being utilized to create intimate and inappropriate images or videos of young people. "Intimate deepfakes are a really huge issue right now," Beller says. "We're really on the precipice of the rampant use of this technology."

Legislators across the country have been putting forth bills to address both intimate and political deepfakes. In New Jersey, high-schoolers have become frequent victims, leading to last year's introduction of the Preventing Deepfakes of Intimate Images Act in Congress. The legislation would prohibit non-consensual disclosure of digitally altered intimate images and make sharing them a crime, while also creating a private right of action for victims to sue.

Beller says lawmakers who introduce politically focused AI bills often get bipartisan support, which then gives them the confidence and momentum they need to introduce similar legislation related to intimate deepfakes, like the Preventing Deepfakes of Intimate Images Act. Similar bills could be coming to Colorado, but probably not this year.

"It is certainly something we are interested in addressing," says Representative Junie Joseph, a Democrat representing Boulder and one of the sponsors for HB-1147. "Our deepfake bill is more concerned with the issue of election validity and interference, but the two issues are very closely related. Georgia, Hawaii, Texas and Virginia already have laws on the books that criminalize nonconsensual deepfake porn. California and Illinois have given victims the right to sue those who create images using their likenesses. It is a problem that increases in severity as this technology continues to break ground, but we are engaging with experts and thinking creatively to find ways to protect people’s privacy and security."

Why Just Politics?

Representative Brianna Titone, a Democrat from Golden and another one of the sponsors of HB-1147, tells Westword that legislators decided to keep the bill's language narrow because they didn't want to run into First Amendment complications with political speech, which has been "treated differently in the courts."

Titone admits that she, too, is worried that the bill is not enough — even in the political sense.

"I'm still worried about what we can't stop, and that's a lot," she says. "Social media is treated differently under federal law. Moderation is difficult for the platforms, as well, especially when something goes viral and is altered to avoid being picked up."

According to Titone, HB-1147 will only stop the "honest actors" in deepfaking.

"For that, take this example: if you want to prevent an 'honest actor' from stealing things from your car, lock the door. The 'dishonest actor' will break in regardless of whether you've locked it. So we're stopping the paid advertisements that campaigns and independent expenditures would buy to mislead voters about candidates they support or oppose."

Ruthie Barko, of TechNet, a tech industry trade group, testified in support of amending the bill at the February 26 committee hearing, arguing that policing AI will take legislative "uniformity" across the board.

"We are, overall, looking at how the disclosures between states would align," she said, noting how most states that have legislation in place on this topic or are currently pushing for bills to pass are using the definition of "synthetic media" rather than "deepfakes."

According to Barko, the broader term is an advantage because it allows states to apply such bills not just to content created with AI, but to any digitally manipulated campaign content that may be deemed misleading.

Colorado Broadcasters Association CEO Justin Sasso also wanted to amend the bill, telling lawmakers that the CBA is concerned about broadcasters' limited ability to identify altered content or manipulated metadata in material provided to a broadcaster or media entity.

"There are no known resources for identifying manipulation of provided content, nor a way to identify if metadata has been mapped to hide those changes," Sasso said. "Broadcasters are required by federal regulations to maintain lots of publicly available paperwork the moment a campaign contacts a station, even if only to contact about advertising. Placing the impossible task of identifying manipulated metadata, deepfake or artificially created content, on broadcasters will require precious and limited resources, and technology that has yet to be invented."

Sasso told the House committee that the CBA ultimately feels the bill does not go far enough to exempt broadcasters from false claims of altered content or exposure to "frivolous" lawsuits that are often made to stall a campaign.

He, too, asked that the bill be changed to mirror legislation in other states that places the responsibility and liability with the advertiser, while exempting the broadcaster and other media platforms used by that broadcaster for disseminating the communication. The most recently amended version of the bill text shows exemptions for broadcasters and media, but the CBA position still hasn't changed, according to the Colorado Secretary of State's Office.