
Tech

Blocking The Internet Archive Won’t Stop AI, But It Will Erase The Web’s Historical Record


from the willingly-burning-libraries dept

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months, The New York Times has begun blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record historians and journalists have relied on for decades. Other newspapers, including The Guardian, appear to be following suit.
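For readers unfamiliar with the mechanism: robots.txt is the web’s long-standing, voluntary opt-out protocol, and it already let a publisher exclude a single crawler without blocking anyone else. Here is a hypothetical sketch (the Wayback Machine has historically honored the ia_archiver token, but treat these names as illustrative, not as any publisher’s actual file):

```text
# robots.txt — hypothetical sketch of a targeted, declarative opt-out
User-agent: ia_archiver   # token historically associated with the Internet Archive
Disallow: /               # ask this one crawler to stay out entirely

User-agent: *             # every other crawler
Allow: /                  # remains welcome
```

The measures described above go beyond this voluntary protocol, which is part of what makes them a blunter instrument.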

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.


The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Archiving and Search Are Legal 

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 


The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 

Republished from the EFF’s Deeplinks blog.


Filed Under: ai, archives, copyright, culture, fair use, history

Companies: internet archive, ny times, the guardian



Aetheon raises $1.2M to translate lived experiences into job-ready skills


From left, Aetheon co-founders: Gina Jeneroux, Marie Gill, and Mark Wayman. (LinkedIn and Aetheon Photos)

Aetheon, a new startup that helps job candidates map their real-world capabilities to work opportunities, has raised $1.24 million as part of its seed round.

Founded last year, the company is building what it calls a “skills operating system” aimed at helping workers — particularly military veterans and recent graduates — translate their real-world experience into language employers can use.

“At a high level, we’re investing in becoming the trusted infrastructure layer for how skills are understood, validated, and mobilized in a rapidly changing workforce,” said co-founder and CEO Marie Gill.

The company aims to solve a problem that’s gotten worse in the age of AI-generated resumes: how do employers evaluate what candidates can actually do? Aetheon’s platform ingests data from more than 100 occupational sources and maps it against a proprietary taxonomy of more than 300 skills, generating verified profiles that workers own and can carry across job opportunities.

Aetheon is pre-revenue and is focusing on paid pilots for veteran, higher-ed, and employer populations. Gill said the company is seeing demand from both sides of the market — individuals who want clearer visibility into their skills, and organizations looking for better signal in a noisy hiring landscape.


Gill, who is based in the Seattle region, was an exec at Executive Networks, Concertus, and Modifi. She also leads the Green Apron Alliance of Starbucks alumni.

Her co-founders are Gina Jeneroux, a 37-year veteran of BMO Financial Group, and longtime entrepreneur and product leader Mark Wayman.

The team plans to use the funding to launch its beta, expand pilot programs with employers, nonprofits, and public-sector partners, and build out its underlying data and intelligence layer.

The company’s investors include Blue Ash Ventures, along with a France-based strategic investor and two senior HR leaders in Hong Kong.


Bungie scores an unexpected success with ‘Marathon’ revival


(Bungie screenshot)

By all indications, Bungie’s revival of its Marathon franchise should not have worked out. Despite its CEO’s departure, an indefinite delay, several controversies, and a saturated target genre, Marathon came out earlier this month and has become one of this year’s unexpected successes.

Marathon, developed by Bellevue, Wash.-based Bungie (Halo 2, Destiny), is a multiplayer online shooter and a follow-up to Bungie’s classic Marathon trilogy on the Mac. Originally announced in 2023, Marathon is also a competitive, player-vs-player “game as a service” (GaaS, or simply live-service), which is meant to be consistently updated so it can be played indefinitely.

That was the first warning sign. As a GaaS, Marathon was up against heavy competition from the moment it debuted, both from other online shooters such as Fortnite and Call of Duty and other “forever games” like World of Warcraft and Dead by Daylight.

A successful GaaS can be a license to print money for its publisher, which has led to many game studios adopting the model in the last few years. Bungie itself was purchased by Sony Entertainment in 2022 as part of a plan by Sony to shift its internal game development to emphasize GaaS, owing largely to Bungie’s expertise running the Destiny series.

However, that same widespread publisher interest has flooded the market, especially in the last few years. The problem with a game that’s meant to last forever is that once it gets its hooks into a player, it’s rare for them to switch away from it, due to time investments, community ties, and — let’s face it — the sunk cost fallacy. Many live-service games are even designed to reward players who consistently log in every day, so a player who uses some of their finite leisure time to check out a competitor’s product can actively harm their overall experience.


As a result, anyone who wants to launch any kind of GaaS (or really, any video game at all) in 2026 has an uphill battle ahead of them in order to find an audience. They not only have to reach interested consumers, but they often have to implicitly convince them to stop playing something else.

If you’re trying to market a “hero shooter,” for example, you have to be aware that almost all of your prospective players are already heavily invested in Overwatch, Marvel Rivals, or Valorant. It’s not enough to offer them a good game. You have to give them a reason to switch.

It’s a tall order. Even major publishers working with famous licenses have had difficulty getting into this market sector, which has created a bloodbath. There’s already an entire virtual graveyard for recently discontinued live-service games, featuring releases such as Anthem, Multiversus, Rumbleverse, and most recently Highguard, which was infamously shut down less than 50 days after its launch in late January.

It didn’t help that Marathon in particular kept racking up warning signs. It was indefinitely delayed last summer, which followed several waves of layoffs at Bungie; longtime CEO Pete Parsons departed the company in August 2025; Marathon’s publisher Sony abruptly abandoned another GaaS, Concord, in October 2024, which seemed to suggest it was backing off of its bets on live-service gaming; and there was a controversy, since resolved, regarding visuals used in Marathon that had been stolen from a Scottish freelance artist. It initially looked like Marathon was headed for disaster.

(Bungie screenshot)

Instead, Marathon has taken off. At time of writing, it has a Very Positive rating on Steam with over 33,500 simultaneous players, as well as a respectable 79 on Metacritic. Against the odds, Bungie appears to have a solid hit on its hands.

Marathon is a revival of one of Bungie’s earliest franchises. The first three Marathon games were among the few major exclusives for the Mac back in the ‘90s, and can be seen as spiritual precursors to Halo: Combat Evolved. (Both series are first-person shooters about a cyborg in power armor following an AI’s orders while they fight aliens. The finer strokes are different, but there’s some connective tissue.)

2026’s Marathon is an interquel set 99 years after the events of the first game, on the planet Tau Ceti IV. It’s been several hundred years since the UESC Marathon left Earth’s solar system on a mission to establish an offworld colony and subsequently vanished. In 2893, Earth finally receives a distress signal from the ship.

Earth reacts by sending a squad of “runners,” humans who’ve digitized their minds and can download them into cybernetic shells, to Tau Ceti IV. Once there, the runners are thrown into an ongoing struggle between UESC forces, alien invaders, rogue AIs, and each other. Each runner is a wild card who can opt to work for multiple factions, both onworld and off.

Marathon, as a game, is what’s often called an “extraction shooter.” Players team up in groups of one to three to infiltrate various locations throughout Tau Ceti IV and must take on both computer-controlled and human enemies in order to grab whatever they can find. If you’re able to survive your mission and successfully evacuate the area, you can keep what you’ve found and use those salvaged resources to improve your equipment for your next run.


That gives Marathon, and other extraction shooters such as Escape from Tarkov, a unique tension compared to more typical PvP action games. Your survival actually matters, as opposed to another shooter where you might die six times in a good match, and you have something to lose.

(Bungie screenshot)

Marathon combines that with strange dreamlike visuals that are reminiscent of ‘90s cyberpunk, particularly Ghost in the Shell. Tau Ceti’s abandoned facilities are all colorful mazes, full of strange sights and narrow corridors, and all your fellow runners are barely humanoid robots. The whole game has a feel like it’s set inside a half-corrupted archive of experimental digital artwork, all the way down to its font choices and complicated menu structure. It’s a deliberate blend of the 1990s’ vision of the future with cutting-edge 2026 graphics, and looks like nothing else that’s currently on store shelves.

That also means that it’s got a couple of different learning curves. After spending a weekend with the game, I don’t feel like I’ve got a handle on it yet, either as a shooter or as an audiovisual experience. Marathon’s menus are a deliberate riot, and while its basic mechanics will be comfortably familiar if you’ve played other recent extraction shooters, it’s a little harder to navigate them than it needs to be.

For right now, my biggest takeaway from Marathon is that it’s beaten the odds. I wouldn’t have guessed at this time last year that Marathon would have a successful launch, between Bungie’s issues and current market forces, but it seems like there’s still at least a little room for this kind of FPS in the modern market.


Anthropic Supply-Chain-Risk Designation Halted by Judge


Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.

“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”

Anthropic and the Pentagon did not immediately respond to requests to comment on the ruling.

The Department of Defense, which under Trump calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted. Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to put usage restrictions on its technology that the Trump administration found unnecessary.


The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic’s sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally “cripple” and “punish” Anthropic.

Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued. “It does not bar any defendant from taking any lawful action that would have been available to it” on that date, she wrote. “For example, this order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.”

The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain-risk designation as the basis.

The immediate impact is unclear because Lin’s order won’t take effect for a week. And a federal appeals court in Washington, DC, has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military.


But Anthropic could use Lin’s ruling to demonstrate to some customers concerned about working with an industry pariah that the law may be on its side in the long run. Lin has not set a schedule to make a final ruling.


Landmark case finds Meta, YouTube addictive to children


‘These verdicts mark an unsurprising breaking point,’ said Forrester VP research director Mike Proulx.

A landmark legal case has found that Meta’s and YouTube’s platforms are designed to be addictive to children. A day earlier, Meta lost a child safety lawsuit, which found that its platforms’ design features enable child sexual exploitation.

The mounting legal challenges are being heralded by some as Big Tech’s ‘Big Tobacco moment’, intended to address some of the harm caused by social media platforms to their youngest users.

A jury in Los Angeles deliberated the case across nine days and concluded that Meta and YouTube are liable to pay the 20-year-old plaintiff behind the lawsuit a total of $6m in damages. Meta has been assigned 70pc of the financial responsibility, and YouTube 30pc.


Half of each company’s penalties will be used to compensate the plaintiff’s losses, including for mental health support, while the other half is for punitive damages to punish the companies.

Kaley GM’s lawsuit, filed in 2022, also included TikTok and Snapchat; however, both of them have since settled outside of court.

The young plaintiff said she began using YouTube at the age of six, and Instagram at nine. One day, she spent 16 hours on Instagram, she said. The plaintiff blamed the platforms for inflicting harm, including depression and body dysmorphia.

Her lawsuit is one of thousands currently pending, which together could deliver serious financial damages to the companies involved and help change the legal landscape social media platforms function under.


Meta and Google said they disagreed with the verdict. Google said it plans to appeal, while Meta said it is evaluating its legal options.

“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site,” Google added.

Attempts have been made in recent years to bolster child safety on social media, including a controversial underage social media ban which took effect in Australia, and is currently being debated in several European countries. Platforms are also beginning to self-police.

‘Traditional’ social media aside, the advent of generative AI tools has added to the difficulty of protecting users online, as seen with Grok, where users can prompt the chatbot to undress people in pictures and videos.


“These verdicts mark an unsurprising breaking point. Negative sentiment toward social media has been building for years, and now it’s finally boiled over,” said Forrester’s VP research director Mike Proulx.

“This problem sits at the intersection of social media companies’ platform responsibility, years of government regulatory inaction, and the role parents and educators play in helping kids build healthier digital habits.

“These verdicts aren’t just about social media’s past. They’re a dire warning about how we handle the next wave of technology.”


Senators Elizabeth Warren and Josh Hawley Push for Data Center Energy Transparency


In a bipartisan team-up, Democratic Sen. Elizabeth Warren and Republican Sen. Josh Hawley are demanding more transparency regarding the energy use of data centers.

The pair sent a letter to the Energy Information Administration on Thursday, urging the EIA to “establish a mandatory annual reporting requirement for data centers.” Wired reported the news earlier. 

Data centers have become a major topic of debate, as tech giants like Amazon Web Services, Google, Meta and Microsoft continue to buy massive amounts of land to house artificial intelligence data centers. While some landowners are taking the payouts, others — like a Kentucky woman and her mother, who turned down $26 million to sell their land — are holding out because of their opposition to data centers.


The interested buyer in Kentucky remains anonymous, but the landowner told WLKY they were described as a “major artificial intelligence company.”

Not only do data centers need large plots of land for their infrastructure, but they also require substantial water and electricity to operate. The exact amounts are not always known, which is why the senators are urging the change. 

The collected information would help with grid planning and “will support policymaking to prevent large companies from increasing electricity costs for American families,” Warren and Hawley’s letter stated in part. 

BloombergNEF reports that by 2035, the energy demand for data centers will more than double.


On Wednesday, Rep. Alexandria Ocasio-Cortez and Sen. Bernie Sanders introduced a bill to pause all data center construction until the government enacts safeguards. 

“AI and robotics are creating the most sweeping technological revolution in the history of humanity,” Sanders said in a statement. “The scale, scope and speed of that change is unprecedented. Congress is way behind where it should be in understanding the nature of this revolution and its impacts.” 


Reddit cracks down on bots with new labels and human verification



The move comes just weeks after social aggregator Digg, which once aimed to rival Reddit, shut down its app, citing an inability to control a surge of bots. Reddit, by contrast, appears determined to tackle the problem head-on.

Apple Discontinues Mac Pro – Slashdot


Apple has discontinued the Mac Pro and says it has no plans for future models. “The ‘buy’ page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed,” reports 9to5Mac. From the report: The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.


Wikipedia cracks down on the use of AI in article writing


As AI makes inroads into the worlds of editorial and media, websites are scrambling to establish ground rules for its usage. This week, Wikipedia banned the use of AI-generated text by its editors — although it stopped short of banning AI outright from the site’s editorial processes.

In a recent policy change, the site now states that “the use of LLMs to generate or rewrite article content is prohibited.” This new language updates and clarifies previous, vaguer language that stated that LLMs “should not be used to generate new Wikipedia articles from scratch.”

AI in Wikipedia articles has become a contentious issue among the site’s sprawling, volunteer-driven community of editors. 404 Media reports that the new policy, which was put to a vote by the site’s editors, garnered majority support — 40 to 2.

That said, the new policy still makes room for continued AI use in some editorial processes.


“Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the new policy states. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”


AI smart glasses win $1.4 million prize as dementia care shifts toward assistive tech and everyday independence tools



  • CrossSense AI smart glasses gain recognition as funding flows into dementia support tools
  • $1.4 million prize reflects growing reliance on technology in cognitive care strategies
  • Early results suggest benefits, but long-term clinical effectiveness is yet to be confirmed

The Longitude Prize on Dementia has awarded £1 million (roughly $1.4 million) to a smart-glasses system designed to support people living with dementia.

Backed by Alzheimer’s Society and Innovate UK, the prize is a major incentive for practical innovation rather than theoretical research.


America’s Self-Proclaimed Free Speech Warrior, Brendan Carr, Gets A Letter Documenting His First Amendment Violations


from the censorial-dipshit dept

For years, certain folks on the left kept insisting they wanted to bring back the Fairness Doctrine — the old FCC policy that required broadcasters to present “both sides” of controversial issues. Many of us in the tech policy world kept explaining why that was a terrible idea, one ripe for abuse and fundamentally at odds with the First Amendment. The FCC itself repealed the Doctrine back in 1987, partly because it found that compelling broadcasters to present multiple views actually reduced the quality and volume of coverage on important issues — the exact opposite of what it was supposed to do. The requirement to air “both sides” of a controversial story was the kind of burden that just made the broadcast media less willing… to cover controversial stories at all.

Well, congratulations to everyone who wanted to reanimate that corpse. FCC Chairman Brendan Carr is doing something remarkably similar — except he’s only using it in one direction (the other problem with the Fairness Doctrine: it depends entirely on the enforcers), to punish outlets that report things the Trump administration doesn’t like, while conveniently leaving alone outlets that parrot the administration’s preferred narratives.

We’ve been covering Carr’s censorial ambitions for a while now. When Trump picked Carr to chair the FCC, we noted that despite all the “free speech warrior” branding from the administration and the credulous political press that repeated it, Carr had made it abundantly clear he wanted to be America’s top censor. And he’s delivered on that promise with remarkable enthusiasm — going after CBS over “60 Minutes”, threatening ABC over Jimmy Kimmel’s jokes, and most recently threatening to revoke broadcast licenses of outlets that accurately report on the disastrous war in Iran.

Now, a broad coalition of more than 80 legal scholars, former FCC officials, and civil society organizations — organized by TechFreedom and signed by groups ranging from the ACLU to EFF to the Knight First Amendment Institute to the Institute for Free Speech — has sent a formal letter to Carr laying out, in meticulous legal detail, exactly how his threats violate the First Amendment. I’m proud to note that our think tank, the Copia Institute, is among the signatories, and this was a very easy decision.


The letter is direct about what Carr is doing:

We write concerning your abuse of the “public interest” standard as a weapon against viewpoints you and President Donald Trump do not like. You assert that “[b]roadcasters … are running hoaxes and news distortions – also known as the fake news” in a retweet of a President Donald Trump’s complaint that The Wall Street Journal and The New York Times were the “Fake News Media” because of headlines he alleged were misleading. You threatened that broadcasters who engaged in similar reporting would “lose their licenses” if they do not “correct course before their license renewals come up.” The next day, the President threatened broadcasters and programmers with “Charges for TREASON for the dissemination of false information!”

It’s kind of incredible how much of this is absolutely batshit crazy and simply could never have been imagined under any other presidential administration. The President of the United States threatened news outlets with treason charges — which carry the death penalty — for reporting things he didn’t like. And the FCC Chairman who spent years claiming to be a “free speech” absolutist, rather than defending the press from this kind of authoritarian nonsense, was the one who teed it up.

The letter does an excellent job of explaining why Carr’s reliance on the vague and essentially dormant “news distortion” policy is legally bankrupt. There’s an important distinction here that Carr is deliberately blurring: the FCC has an actual, codified Broadcast Hoax Rule that is extremely narrow and specific — it applies only when a broadcaster knowingly broadcasts false information about a crime or catastrophe, where it’s foreseeable that it will cause substantial public harm, and it actually does cause such harm. The FCC has applied it rarely, and typically only in cases involving the outright fabrication of news events like staged kidnappings.

That’s a world apart from what Carr is doing, which is invoking the far vaguer “news distortion” policy to go after headlines the president finds insufficiently flattering. As the letter notes:


[Y]our unsupported claim that unnamed broadcasters are engaged in unspecified “hoaxes,” combined with your invocation of the news distortion policy is plainly unconstitutional: it aims to do something the Supreme Court has forbidden—correcting bias or balancing speech—while its vagueness makes good-faith compliance impossible and invites arbitrary enforcement.

On that Supreme Court point, the letter cites Moody v. NetChoice (you remember: the Supreme Court case over Florida’s social media content moderation law). Recall, this is the very same Court that many expected would be friendly to conservative arguments about tech platforms supposedly “censoring” conservatives, but instead it made it crystal clear that the government has no business trying to reshape private editorial decisions:

In Moody v. Netchoice (2024), the Supreme Court rejected government efforts “to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks is biased.” “On the spectrum of dangers to free expression,” Moody said, “there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.”

The letter also draws on NRA v. Vullo, another unanimous Supreme Court decision which we cite often, which held that “a government official cannot do indirectly what she is barred from doing directly: A government official cannot coerce a private party to punish or suppress disfavored speech on her behalf.” That’s a pretty precise description of what Carr is doing when he posts threats on social media about license renewals while his boss muses about treason prosecutions.

The most damning part of the letter is the receipts on Carr’s own hypocrisy. Back in 2019, Carr himself tweeted: “The FCC does not have a roving mandate to police speech in the name of the ‘public interest.’”

As the letter dryly observes, if the law were as “clear” as Carr now claims, why did he insist the FCC needed to “start a rulemaking” on it?

If, as you now claim, the “law is clear,” you would not have needed to suggest in 2024, that “we should start a rulemaking to take a look at what [the public interest standard] means.” In fact, the “public interest” standard becomes less clear each time you invoke it.

The letter also points out that Carr’s former colleague and mentor Ajit Pai knows how messed up all this is:

Chairman Ajit Pai, your Republican predecessor, could “hardly think of an action more chilling of free speech than the federal government investigating a broadcast station because of disagreement with its news coverage or promotion of that coverage.” You have launched a flurry of such investigations.

And the letter documents that the chilling effect is already working:

Commissioner Anna Gomez has “heard from broadcasters who are telling their reporters to be careful about the way they cover this administration.”

Even Trump-supporting Republican officials like Ted Cruz have had enough of Brendan Carr’s censorial bullshit:

Sen. Ted Cruz (R-TX) understood that this is a “mafioso” tactic “right out of ‘Goodfellas,’” essentially: “‘nice bar you have here, it’d be a shame if something happened to it.’”

The fact that Ted Cruz of all people can see this for what it is should tell you something.

The signatories on this letter are worth noting. Beyond the civil society organizations, you’ve got former FCC officials from both parties, more than fifty First Amendment and communications law scholars from institutions ranging from Harvard to Stanford to Emory, and journalism scholars from across the country. There are people signed onto this letter who don’t agree with each other on much at all.

But on Brendan Carr’s censorship campaign, they all agree — because this really has nothing to do with partisan politics. This is about whether you believe the Constitution means what it says — or whether the First Amendment is just a talking point to wave around when it’s politically convenient and discard when it gets in the way. The same people who spent years fundraising off claims that Biden officials sending cranky emails about COVID misinformation represented an existential threat to free speech are now openly wielding license revocation and treason charges to dictate editorial content.

Look, we know Carr won’t do a damn thing in response to this letter. If anything, he’ll just screenshot parts and post it on X as proof that he’s upsetting the right people. That’s his whole game — the trolling, the culture war posturing, the audition tape for whatever higher office he’s eyeing. He doesn’t actually have to revoke any licenses (and likely couldn’t survive the legal challenge if he tried). The mere threat is the point, because, as the letter explains, the FCC can exercise “regulation by the lifted eyebrow” and hang a “Sword of Damocles” over each broadcaster’s head.

But highlighting the record still matters. When future scholars look back at this period and try to understand how a sitting FCC Chairman openly abandoned the First Amendment in service of a President who thinks “treason” is a synonym for “journalism I don’t like,” the documentation will be there.

And the breadth of the coalition sending this message matters too. This many scholars, former officials, and organizations — many of whom disagree vehemently on plenty of other issues — all looked at what Carr is doing and arrived at the same conclusion: this is unconstitutional, it’s dangerous, and someone needs to say so clearly and publicly, even if the person doing it couldn’t care less.

The letter closes with a quote from the Supreme Court that fits this moment uncomfortably well, drawn from West Virginia State Board of Education v. Barnette, decided in 1943 when the country faced actual existential threats:

“[T]here is ‘one fixed star in our constitutional constellation: that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.’”

Brendan Carr has decided he can ignore all that and censor at will. He’ll likely ignore this letter too. But unlike Carr, the record doesn’t forget.

Filed Under: 1st amendment, brendan carr, fairness doctrine, fcc, free speech, jawboning, news distortion, public interest
