January 04, 2024
🎉 Welcome back, Pro readers! Keep scrolling for our dive into the biggest issues you'll need to track in 2024.
- What else are you keeping an eye on in tech policy this year? Click reply to let us know.
1 big thing: AI issues to watch in 2024
Illustration: Brendan Lynch/Axios
There's a lot to tackle in Congress and across the federal government around artificial intelligence this year, Ashley writes.
Why it matters: How the U.S. government uses and regulates AI will set the course for the rest of the country, and lawmakers have a ton of work to do to make good on the goals of last year's Senate AI Insight Forums.
Yes, but: Let's ease into the year — Congress isn't even back in town yet. Here are the AI issues we're watching on the Hill and across the federal government:
Senate leaders moving ahead on AI bills: We expect the action to be in committees, marking up and passing bills based on discussions at the AI Insight Forums.
- We anticipate a lot of work will be redone or duplicated at the committee level. Expect disagreements over which bills to try to pass first, whether any individual issue merits a standalone bill, and whether AI legislation should move as part of a larger package.
- The Senate also has to figure out where the House is on AI and whether they can find agreement. House Speaker Mike Johnson, focused on the southern border, hasn't exactly said AI is a priority for him.
AI + copyright: The New York Times' lawsuit against OpenAI last month is sure to reignite this conversation, which already had a lot of momentum.
- So far, AI companies' use of copyrighted material has operated in a sort of "ask for forgiveness, not permission" mode.
- That's likely to change, at least to an extent, and it's an easy target for lawmakers because protecting the copyright of original works is not a new debate.
AI + elections: The 2024 election is quickly approaching, and lawmakers are not going to want to see themselves deepfaked into oblivion by election ads.
- More broadly, democracy watchers are arguing that AI could tip an already precarious electorate into not believing anything it sees.
- The Federal Election Commission could act on AI in elections and make new rules, but that's not guaranteed.
- Senate Majority Leader Chuck Schumer has said an AI and elections bill will be prioritized this year.
- We'll be watching, but history shows that bills having to do with elections and social media or tech usually result in industry self-governance instead (the Honest Ads Act, anyone?).
AI + the federal government: The government is slowly complying with parts of President Biden's AI executive order.
- In late December, the Office of Personnel Management issued a memorandum on government-wide hiring authorities per the EO, allowing for more flexibility to hire AI experts for positions supporting implementation.
- Getting AI talent to work for the government instead of the much more lucrative private sector is a major hurdle, so we'll be watching to see who joins the feds and how quickly the government can staff up to carry out the EO.
- It's also likely that Congress will want a say in how the government uses AI.
2. Everything else we're watching in 2024
Illustration: Natalie Peeples/Axios
The states, courts and Europe are where we're likely to see definitive action on tech policy this year, Ashley reports.
Here's what we're paying attention to so far.
AI bills in the states: In 2023, nearly 200 AI-related bills were introduced in state legislatures, but only 14 became law, Axios' Ryan Heath reported in September.
- Bills are focusing on deepfakes, government uses of AI and impact assessments. California Gov. Gavin Newsom issued an EO on AI.
- In 2024, industry is expecting a lot more legislation, David Edmonson, TechNet senior vice president of state policy and government relations, told Ashley. Keep an eye on California, Connecticut, New York, Colorado and Washington.
- Edmonson said: "It's inevitable that there will be AI bills that cross the finish line, and we want to be productive partners working with policymakers."
Online speech fights at SCOTUS: This term, the Supreme Court will hear a pair of cases, NetChoice v. Paxton and Moody v. NetChoice, regarding laws in Texas and Florida preventing tech platforms from restricting political posts and accounts.
- Tech platforms are watching nervously as longtime GOP grievances about how social media makes and enforces its content rules reach the country's highest court.
- We're also watching Murthy v. Missouri, in which SCOTUS will hear arguments over whether Biden administration officials violated the First Amendment in their communications with social media companies like Meta about COVID and election content.
Antitrust cases: U.S. v. Google, weighing whether Google used its dominance in the online search space to illegally edge out rivals, is expected to wrap up this year, with closing arguments in May.
- We'll also be watching as the DOJ's lawsuit over Google's dominance in online advertising likely heads to trial in March.
- The FTC will continue its antitrust case against Amazon for alleged unfair dominance in the online marketplace sector, likely to be an uphill battle.
Kids' online safety: In addition to ongoing work in Congress, states, attorneys general and class-action plaintiffs are all trying to hold social media platforms accountable for alleged harm to children and teens.
- A number of class-action lawsuits have been brought by families and school districts against tech companies, in addition to AG cases.
- Meta, for one, is fighting both an attempt by the FTC to prevent the company from monetizing teens' data and a state AG suit accusing the company of purposely addicting kids to its products.
Europe: As usual, tech companies have their hands full when it comes to complying with European rules.
- This year the Digital Markets Act and the Digital Services Act will be enforced and the AI Act will continue to move forward; other liability regimes are on the horizon.
- We'll be watching to see whether any companies pull out of Europe entirely and if companies that change user experiences in Europe to comply with laws will do the same elsewhere across the globe.
3. Exclusive: Deep dive on AI and watermarking
Photo illustration: Jonathan Raa/NurPhoto via Getty Images
Policymakers are tossing around watermarking as a key way to help people identify AI-generated content, but a new paper says lawmakers should consider a broader set of authentication techniques as they develop legislation.
Driving the news: The paper from the Information Technology Industry Council shared exclusively with Ashley argues that a combination of methods will be most helpful for consumers to navigate AI content, and that clear standards for authentication are needed for the industry.
- Watermarking — embedding a "signal" in a text or an image to identify it as AI-generated — is just one way to tackle the problem. The paper argues there are other useful ways to show a piece of content's roots, and that they need to be part of the conversation as lawmakers develop new rules.
Other methods include:
- Provenance, which traces "signals" in a dataset, such as when it was created and modified.
- Metadata auditing, or checking things like editing history, time stamps and device information, which could be useful for copyright concerns, per the paper.
- Human authentication, which would have experts decide whether something has been AI-generated. It would be slower and less reliable but useful in certain cases.
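To make the embedded "signal" idea concrete, here is a toy sketch in Python. The zero-width-character marker is a made-up example for illustration only, not any real standard or vendor scheme; production watermarks for AI text typically work statistically, by biasing which tokens a model picks, rather than by appending hidden characters.

```python
# Toy illustration of the "signal" idea behind watermarking:
# hide an invisible marker in generated text, then check for it later.
ZW_MARK = "\u200b\u200c\u200b"  # hypothetical zero-width marker, not a real standard

def watermark(text: str) -> str:
    """Append the invisible marker to generated text."""
    return text + ZW_MARK

def is_watermarked(text: str) -> bool:
    """Check whether the marker is present."""
    return text.endswith(ZW_MARK)

stamped = watermark("A generated paragraph.")
print(is_watermarked(stamped))                  # the signal is detectable
print(is_watermarked("A human-written note."))  # plain text has no signal
```

The catch, and a reason the paper urges multiple methods: a marker like this survives only as long as no one strips it, which is why provenance records and metadata auditing matter alongside watermarks.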
Yes, but: In some cases, marking whether something has been AI-generated may not be necessary at all.
What they're saying: "There's a strong need to examine which use cases and instances where watermarking makes sense, because lower risk applications might not need that sort of disclosure, such as changing the filter or lighting mode on a smartphone's camera," Courtney Lang, ITI's vice president of policy, trust and data, told Ashley.
What's next: Lang said it's too early to tell what type of content authentication method would be best for educating consumers about AI-generated content in elections.
4. Catch me up: Kids' safety and CHIPS cash
Illustration: Shoshana Gordon/Axios
📱 Kids' safety: NTIA will host the first in a series of public listening sessions on Jan. 18 on kids' online safety.
- NTIA is co-leading a White House task force to develop policy recommendations and voluntary guidance on kids' mental health, safety and privacy online.
- It has received more than 500 comments on the topic so far.
💰 CHIPS money: The Biden administration today announced an agreement to award $162 million in federal grants to Microchip Technology.
- This is the second allocation from the CHIPS and Science Act's $52 billion pot of money to boost the development and manufacturing of semiconductors in the U.S.
- The White House in December announced the first grant of $35 million would be given to BAE Systems to upgrade its Nashua, N.H., factory.
✅ Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.