
U.S. broadband households pay for networks while high-traffic streaming and AI platforms contribute almost nothing to infrastructure costs

  • US households contribute monthly fees while platforms still impose substantial network infrastructure burdens
  • Broadband cost recovery does not reflect actual traffic or usage patterns
  • Heavy users in the electricity and airline sectors pay proportionally for demand

Broadband networks in the United States operate under a cost model that does not align with actual usage: households generate substantial revenue for major internet platforms while also contributing to the Universal Service Fund, which supports rural connectivity, schools, libraries, and healthcare facilities.

A typical US broadband household contributes roughly $9 per month to this fund, yet the largest traffic generators impose substantial infrastructure burdens without proportional contributions.


MAGA Suddenly Quiet About Overseas Influence Now That Larry Ellison’s Warner Bros Bid Has Saudi, Chinese Backing

from the 100%-bad-faith dept

You might recall that during the great mass TikTok hyperventilation of 2021-2025, there was no limit of face-fanning by Republicans like Brendan Carr about overseas involvement in social media. Carr was so particular on this subject that he scuttled an FCC program aimed at shoring up “smart” home device security standards because one of the testing labs (unsurprisingly) did business in China, where this stuff is made.

Fast forward to this year, and Carr curiously has zero problems with significant Saudi and Chinese investments in Larry Ellison and Paramount’s efforts to acquire Warner Brothers. Though Carr’s actual regulatory oversight of the deal is limited given the lack of public broadcast licenses involved, he took to CNBC anyway to insist the massive $111 billion deal should likely fly through regulatory approval:

“If there’s any FCC role at all, it’ll be a pretty minimal role. And I think this is a good deal, and I think it should get through pretty quickly,” Carr added.

Carr told CNBC that Netflix “would have a very difficult path” getting regulatory approval, adding that Paramount’s was “a lot cleaner, does not raise at all the same types of concerns.”

“I think there’s some real consumer benefits that can emerge from it,” he added.

Carr’s (and Republicans’ more generally) gushing excitement comes despite the fact that significant structural overlap between Paramount and Warner Brothers will mean far more layoffs than we would have seen under the originally proposed Netflix-Warner Brothers tie-up. Those layoffs will likely be much worse than in past Warner deals due to the absolutely massive debt involved.

This is before you even get to Larry Ellison’s obvious quest to build autocratic-friendly state television, the likes of which coddles authoritarianism and, in countries like Russia and Hungary, ultimately led to the total decimation of serious truth-to-power journalism.

Then there’s the $24 billion in combined funding for the Paramount deal from Middle Eastern sovereign wealth funds, including Saudi Arabia’s Public Investment Fund (PIF), as well as the recent announcement that Chinese company Tencent is weighing a significant investment. Before his deal was scuttled, Netflix CEO Ted Sarandos was pretty pointed about this being a problem:

“Before pulling out of the deal, Netflix co-CEO Ted Sarandos – speaking to the BBC in London on the morning after the recent BAFTA Film Awards – called the Gulf sovereign funds backing Paramount’s bid a “bad idea,” noting that they are from “a part of the world that is not very big on the First Amendment.” 

“It seems very odd to me with the level of investment that we’re talking about that they’d have no influence or editorial control over media in another country,” Sarandos added.

If you recall the multi-year right wing hysteria campaign about TikTok, it was fixated on the idea that having any overseas involvement in U.S. media was a doomsday scenario (they were not subtle or flexible on this point). Of course Trumpism immediately proceeded (with bumbling Democrat help) to “fix” this problem by offloading the company to Trump’s technofascist friends, while still maintaining a significant investment presence by the Chinese.

When Netflix was planning to buy Warner Brothers, Republicans engaged in no limit of face-fanning, featuring threats of “investigations” by Republican Attorneys General, and a phony Trump DOJ “investigation” into the antitrust concerns raised by the deal. But when a technofascist ally oligarch wants to own a major media property, with Saudi and Chinese help, all of that mysteriously disappears.

It’s almost as if Trump Republicans have no coherent ideology beyond their own power and unchecked wealth accumulation, and all of their posturing on issues like antitrust and national security, routinely propped up by a lazy press, is as hollow as a Dollar Store fake chocolate Easter bunny.

Filed Under: brendan carr, china, consolidation, journalism, larry ellison, media, mergers, national security, saudi, saudi arabia, soft power

Companies: netflix, paramount, warner bros. discovery


Starlink’s V2 satellites will deliver 5G speeds from space

Starlink is preparing a major upgrade to its satellite network, with next-generation V2 satellites promising what the company calls “5G speeds from space.”

According to Starlink, the new satellites will deliver up to 100 times the data density of the current V1 generation, which could transform how satellite connectivity performs on everyday phones.

So far, satellite-to-phone services have focused more on coverage than speed. Current Starlink mobile connectivity is largely limited to basic messaging and light data use, particularly in areas without reliable cellular coverage. The V2 upgrade aims to push that much further, bringing significantly higher bandwidth and faster data speeds.

One of the more notable changes is compatibility. Starlink says the upcoming system will work with hundreds of existing LTE smartphones, allowing devices to connect directly to satellites without special antennas or hardware. In practice, the satellites will act like cell towers in low Earth orbit, letting phones maintain a connection even when conventional networks aren’t available.

SpaceX plans to launch up to 15,000 V2 satellites as part of the broader Starlink constellation expansion. Early testing of the upgraded network is expected to begin around early 2027, although the full performance gains will depend on how quickly the larger constellation can be deployed.

In the meantime, the company has already begun launching V2 Mini satellites. These are designed to bridge the gap between the current generation and the full V2 rollout.

Starlink is also working with mobile operators to make the system more seamless. Partnerships, including one with T-Mobile in the US, aim to allow phones to switch between satellite and terrestrial networks without noticeable interruptions.

If the network scales as planned, Starlink suggests peak speeds of around 150Mbps per user could eventually be achievable. That would be a major leap for satellite connectivity. Historically, satellite networks have lagged far behind conventional cellular networks in both speed and capacity.

For now, though, much of that promise depends on the pace of satellite launches and how quickly the full V2 constellation becomes operational. Until then, Starlink’s satellite-to-phone service, previously known as Direct to Cell and now branded Starlink Mobile, will continue offering more basic connectivity where traditional coverage is limited.


Today’s NYT Mini Crossword Answers for March 11

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Need some help with today’s Mini Crossword? I thought it was a bit tricky. 1-Down is one of those old-fashioned comic-book sounds that I had to remember how to spell correctly. Read on for all the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

The completed NYT Mini Crossword puzzle for March 11, 2026.

NYT/Screenshot by CNET

Mini across clues and answers

1A clue: Study of the human mind, informally
Answer: PSYCH

6A clue: Common fixture in a gym bathroom
Answer: SCALE

7A clue: Kinda boring
Answer: HOHUM

8A clue: Like a commenter without a username, for short
Answer: ANON

9A clue: “All good between us?”
Answer: WEOK

Mini down clues and answers

1D clue: Old-fashioned “Yeah, right!”
Answer: PSHAW

2D clue: Coffeehouse pastry
Answer: SCONE

3D clue: Google alternative
Answer: YAHOO

4D clue: Sound of a dull thump
Answer: CLUNK

5D clue: Line on the bottom of a pant leg
Answer: HEM


Daily Deal: The 2026 Complete Firewall Admin Bundle

from the good-deals-on-cool-stuff dept

Transform your future in cybersecurity with 7 courses on next‑level packet control, secure architecture, and cloud‑ready defenses inside the 2026 Complete Firewall Admin Bundle. Courses cover IT fundamentals, topics to help you prepare for the CompTIA Server+ and CCNA exams, and more. It’s on sale for $25.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal


'X-Plane 12' flight simulator to take advantage of Nvidia CloudXR 6 in visionOS 26.4

Apple Vision Pro owners will be able to take to the skies in the “X-Plane 12” flight simulator thanks to visionOS 26.4, which includes support for CloudXR.

Apple Vision Pro to get more gaming support via CloudXR

One of the main complaints surrounding Apple Vision Pro is its lack of a broad app ecosystem or VR game library. The latter could be addressed thanks to visionOS 26.4, and developers are already taking notice.

A post on the X-Plane blog shares that X-Plane 12 will launch on Apple Vision Pro “later this spring.” The post suggests it will launch alongside the release of visionOS 26.4, which includes support for the needed Nvidia CloudXR 6.0.

Office.eu and the hope for a digitally sovereign Europe

‘For many years, Europe has relied on American software and therefore created a certain risk of dependency’, said Office.eu’s CEO.

Much has happened over the past few years for Europe to decide to tighten its reins on data in the region.

Examples include numerous comments from US president Donald Trump threatening action against the EU for its supposed discriminatory actions against US Big Tech, the possibility of the US government compelling US tech companies to share EU data, not to mention the very same Big Tech companies dominating the region’s market while also violating its laws.

Microsoft 365 was found to have been tracking EU school students’ data illegally, Apple was found to be preventing EU users from exercising choice, and Meta was forcing users to pay or consent to its data tracking.

Last November, around 900 policymakers and other stakeholders gathered in Berlin to discuss ways to make Europe more technologically resilient, and less dependent on the US.

France and Germany took centre stage, announcing a joint taskforce on digital sovereignty, aiming for a more competitive and sovereign Europe.

“The Digital Sovereignty Summit sends a clear signal – Europe has what it takes to lead the digital age,” commented French president Emmanuel Macron at the time. “Europe is stepping up to accelerate the development of European innovation, to uphold strong data protection and to call for fair market conditions.”

The idea of digital sovereignty has finally taken centre stage in Europe, after years of the region conducting its business and administration on infrastructure made elsewhere.

It’s unsurprising then that a wholly European-owned alternative to Microsoft Office and Google Workspace, called Office.eu, made its debut last week, promising to enable organisations to regain control over their data and digital operations.

“We have seen more and more how essential it is to become cloud-independent and to rely on software that is built around European values,” said Maarten Roelfs, the CEO of Office.eu. The Hague-headquartered company was founded in 2024 and started operating early this year.

“For many years, Europe has relied on American software and therefore created a certain risk of dependency, but we have also given away the control over our own data. Office.eu proves that we now have a strong European alternative, with sovereignty, privacy and transparency at its core.”

Office.eu runs entirely on European data centres. Built on an open-source foundation, the platform offers a cloud drive, tools such as email, spreadsheets, presentations and a calendar, and video-conferencing services.

Early access already has nearly 15,000 applicants, Office.eu told SiliconRepublic.com. A phased European rollout is planned for the second quarter.

Office.eu’s Office Suite is not the first of its kind, however. Infomaniak, which describes itself as an “ethical cloud with no bullshit”, launched ‘kSuite’ back in 2022, with a Microsoft Teams alternative called ‘kChat’, and the option to use Microsoft Office with documents hosted in Switzerland.

OpenDesk, created by the German Centre for Digital Sovereignty, offers similar tools for use by the German administration. There are others as well, such as NextCloud, which offers self-hosted file storage and a chat function.

“The Rubicon has been crossed. American tech firms can no longer offer assurances to European companies that their data sovereignty will be protected,” Roelfs told Stories of Purpose.


Tony Hoare, Turing Award-Winning Computer Scientist Behind QuickSort, Dies At 92

Tony Hoare, the Turing Award-winning pioneer who created the Quicksort algorithm, developed Hoare logic, and advanced theories of concurrency and structured programming, has died at age 92.
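
For readers who have never looked under the hood, here is a minimal sketch of quicksort using Hoare’s original partition scheme, written in Python purely as an illustration (it is not drawn from Hoare’s papers or from any of the coverage):

# Minimal illustrative sketch of quicksort with Hoare's partition scheme.
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)       # left part includes the split point
        quicksort(a, p + 1, hi)   # right part starts just after it
    return a

def hoare_partition(a, lo, hi):
    pivot = a[lo]                 # pivot taken from the low end
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:       # scan right past elements below the pivot
            i += 1
        j -= 1
        while a[j] > pivot:       # scan left past elements above the pivot
            j -= 1
        if i >= j:
            return j              # pointers crossed: j marks the split
        a[i], a[j] = a[j], a[i]   # swap the two out-of-place elements

print(quicksort([5, 3, 8, 1, 9, 2]))  # -> [1, 2, 3, 5, 8, 9]

The characteristic trick is that the two index pointers scan toward each other and swap out-of-place elements, so the array is partitioned in place with relatively few exchanges.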

News of his passing was shared today in a blog post. The site I Programmer also commemorated Hoare in a post highlighting his contributions to computer science and the lasting impact of his work. Personal accounts have been shared on Hacker News and Reddit.

Many Slashdotters may know Hoare for his aphorism regarding software design: “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.”


Nvidia and Thinking Machines Lab draw up multi-year chip deal

Nvidia has also made a ‘significant’ investment in Thinking Machines to support its long-term growth.

In a new multi-year partnership announced today (10 March), Thinking Machines Lab will be using Nvidia’s systems to train its AI frontier models. Deployment on Nvidia’s Vera Rubin platform is set to begin early next year.

Mira Murati launched her AI start-up Thinking Machines in early 2025, less than six months after quitting as OpenAI’s chief technology officer. By July 2025, the start-up had already hit $12bn in value after a $2bn seed round, all before it released its first product.

The start-up’s flagship product ‘Tinker’ is a training API for fine-tuning open-source models, designed for researchers and developers. Tinker can be used for a broad range of models, including Llama-3.2B, Qwen3.5, DeepSeek-V3.1 and Kimi-K2.

“Thinking Machines has brought together a world-class team to advance the frontier of AI,” said Nvidia CEO and founder Jensen Huang.

Huang has previously said that 1GW of AI data centre capacity costs up to $50bn, which would place the latest deal’s value at several billion dollars. The company has also made a “significant” investment in Thinking Machines to support its long-term growth.

“Nvidia’s technology is the foundation on which the entire field is built,” said Murati, Thinking Machines’ CEO. “This partnership accelerates our capacity to build AI that people can shape and make their own, as it shapes human potential in turn.”

This is the latest in a long line of investments Nvidia has made to fuel the AI race. The $4.4trn chipmaking giant participated in a freshly announced $1.03bn seed round into Advanced Machine Intelligence, supported Nscale in its $2bn Series C raise, and pumped $30bn into OpenAI (possibly its last such investment before OpenAI goes public).

Last month, Meta promised Nvidia billions of dollars in a multi-year partnership to support its data centre build-out. Meanwhile, Nvidia has been looking ahead to ensure AI-ready 6G infrastructure through a new partnership with telecommunications giants from around the globe.


Trump Administration Won’t Rule Out Further Action Against Anthropic

At Anthropic’s first court hearing challenging sanctions imposed by the Trump administration, the AI tech startup asked the government to commit that it wouldn’t levy additional penalties on the company. That didn’t happen.

“I am not prepared to offer any commitments on that issue,” James Harlow, a Justice Department attorney, told US district judge Rita Lin over video conference on Tuesday.

In fact, the government is gearing up to take another step designed to sideline the company from doing business with federal agencies. President Trump is currently finalizing an executive order that would formally ban usage of Anthropic tools across the government, according to a person at the White House familiar with the matter but not authorized to discuss it. Axios first reported on the plan.

Tuesday’s hearing stemmed from one of the two federal lawsuits Anthropic filed against the Trump administration on Monday, alleging that the government unconstitutionally designated it a supply-chain risk and turned it into a tech industry pariah. Billions of dollars in revenue for Anthropic is now at risk, with current customers and prospective ones dropping out of deals and demanding new terms, according to the company.

Anthropic is seeking a preliminary court order suspending the risk designation and barring the administration from taking further punitive measures against the company.

The court appearance on Tuesday was to decide on the schedule for a preliminary hearing, and Anthropic is eager for it to happen soon to prevent further harm to its business. Michael Mongan, an attorney for Anthropic at WilmerHale, told Lin he was less concerned about delaying it until April if the Trump administration could commit to not taking additional action. “The actions of defendants are causing irreparable injuries, and those injuries are mounting day by day,” Mongan said.

After Harlow declined, Lin moved up the date of the hearing to March 24 in San Francisco, though that timeline was still later than Anthropic wanted. “The case is quite consequential from both sides, and I want to make sure I’m deciding on an expedited record but also a full record,” the judge said.

Scheduling in the other case, which is in Washington, DC, is on hold while Anthropic pursues an administrative appeal to the Department of Defense, which is expected to fail on Wednesday.

The months-long dispute between the Pentagon and Anthropic began when the AI startup refused to sign off on its current technologies being used by the military for any lawful purpose, which it fears could include broad surveillance of Americans and the launch of missiles without human supervision. The Defense Department contends usage decisions are its prerogative.

Several attorneys with expertise in government contracts and the US Constitution believe the administration’s action against Anthropic continues a pattern of abusing the law to punish perceived political enemies, including universities, media companies, and law firms (such as WilmerHale, the firm representing Anthropic). The experts believe Anthropic should prevail, but the challenge will be overcoming the deference that courts often give to national security arguments from the government, especially during times of war.

“If this is a one-off, you might give the president some deference,” says Harold Hongju Koh, a Yale Law School professor who worked in the Barack Obama presidential administration and has written about the Anthropic case. “But now, it’s just unmistakable that this is just the latest in a chain of events related to a punitive presidency.”

David Super, a Georgetown University Law Center professor who studies the Constitution, says the provisions the Defense Department used to sanction Anthropic were designed to protect the country from potential sabotage by its enemies.


Human Problems: It’s Not Always The Technology’s Fault

from the it-prevents-us-from-fixing-societal-issues dept

We have met the enemy and he is us.

When a teenage boy in Orlando started texting Character.AI’s chatbot, it began as an innocent use of a new tool. Sewell Setzer III customized the chatbot to have the Game of Thrones-inspired persona of Daenerys Targaryen, the series’ prominent dragon-riding queen. In the months that followed, the boy developed a romantic connection with the chatbot. One night, he messaged the bot: “What if I told you I could come home right now?” The bot sent back, “[P]lease do, my sweet king.” Setzer was only fourteen years old when he died by suicide later that evening.

Setzer’s death is a tragedy. Like many parents in the wake of suicide, Setzer’s mother is left searching for answers and accountability. Suicide often leaves behind a painful void, filled with questions that rarely yield satisfying explanations.

In her search, Setzer’s mother sued the chatbot’s developer, Character Technologies, alleging that its chatbot caused her son’s death. The complaint describes the bot as a “defective” and “inherently dangerous” technology, and accuses the company of having “engineered Setzer’s harmful dependency on their products.” She is not alone. Three other families have brought similar suits against Character Technologies, and another has sued OpenAI, alleging the chatbots harmed their children.

Framing suicide and other harms as technology problems—as much of the current discourse around chatbots suggests—obscures underlying societal conditions and can undermine effective interventions. In effect, what are often described as “tech problems” are, more accurately, the result of human decisions, norms, and policies. They are, at their core, human problems. 

Historical Framing of Tech and Media in Creating and Sustaining Societal Problems

This is just the latest vintage whine, rebottled yet another time. Humanity has long sought to condemn new technologies and media for problems of the day. When the printing press made literature available to the masses, church and state condemned publications for causing immorality. Rock ‘n’ Roll and comic books were blamed for juvenile delinquency. Later, it was heavy metal and role-playing games. The advent of video games supposedly led to increased violence by adolescent boys.

The desire to hold technology companies responsible for human harms, however, has its immediate antecedent in social media. Over the past decade, users have sued social media platforms for offline violence committed by people they met online, failing to prevent cyberbullying, and hosting user-generated content that allegedly radicalized extremists. 

Like in Setzer’s case, parents have also sued social media companies after the deaths of their children, arguing that design choices, engagement mechanics, and algorithmic targeting played a role. Indeed, this is the question at the heart of the wave of “social media addiction” litigation now being tried.

AI is just the latest technological scapegoat to which we seek to ascribe fault. It’s easier to hold technology responsible for our problems, especially when the technology is as uncanny as generative AI. We’re afraid of robots, perhaps not because of any harm they cause us, but because they show us how much we, as humanity, can harm ourselves. We would rather fault the technology du jour than confront the harder truths underneath. 

Death by Suicide as a Case Study

To put this into context, consider the allegations about the Character.AI chatbot and Setzer’s suicide. Suicide is a complex, deeply human problem. Among youth and young adults, it stands as the second leading cause of death. Suicide has no single cause. Public health experts have long recognized that risk emerges from a convergence of individual, relational, communal, and societal factors. These can include long-term effects of childhood trauma, substance abuse, social isolation, relationship loss, economic instability, and discrimination. On the surface, these may look like personal struggles, but they’re really the fallout of systemic failure. 

Access to lethal means compounds the risk of self-harm and suicide. In particular, the presence of firearms in the home has remained strongly associated with higher youth suicide rates. 

These systemic failures tend to hit teens the hardest. Studies consistently show that young people are facing rising rates of mental health challenges, especially due to and following the COVID-19 pandemic. This is compounded by chronically underfunded school counseling programs, inaccessible mental health care, and inconsistent support for youth in crisis. LGBTQ+ youth, in particular, bear the brunt, facing higher rates of bullying, depression, and suicidal ideation, all while increasingly being targeted by state policies that strip away protections and deny their identities. 

We don’t and can’t know for sure why Setzer or anyone else died by suicide. Tragically, teenage suicide is common. Indeed, it’s the subject of many songs. There’s no mechanism to definitively determine how Setzer and other victims felt when they started using Character.AI. However, as we likely all remember from our own lives, teenage years can be trying. As we mature physically and mentally, it can be difficult to express and accept ourselves. Other children can be cruel. Hormones can lead us to lash out in anger and withdraw into ourselves. 

In Setzer’s case, the complaint and public reporting indicate that he exhibited other signs and conditions commonly associated with elevated suicide risk, including anxiety and depression, withdrawal from teachers and peers, chronic lateness, significant sleep deprivation, and access to a firearm in the home. His interactions with fictional characters on the Character.AI service may suggest unmet emotional needs or a search for understanding and connection. At different points, he described a character as resembling a father figure and spoke about feelings of loneliness and a lack of romantic connection—experiences that are not uncommon for adolescents, particularly during periods of heightened vulnerability. According to the complaint, Setzer also raised the topic of suicide in earlier conversations with the chatbot, and those exchanges were promptly halted by the system. 

The uncomfortable truth about suicide is that it has existed as long as there have been people – sometimes for reasons we can understand, and often for reasons we never will. We are terrified that people die by suicide, not only because it is difficult to comprehend, but because the forces that drive someone there can feel disturbingly familiar.

Parents like Setzer’s can’t fix systemic governmental and societal failures. What feels more immediate and actionable is holding the technology companies accountable when their services appear to enable or amplify harm. It is far easier to fixate on the medium through which people express suicidal thoughts rather than ask where those thoughts came from or why they felt like the only option.

Legal Analysis of Faulting Tech

Legal doctrine appears to recognize that holding the technology responsible for these systemic failures is not viable. For example, because suicide is shaped by so many overlapping factors, tort claims against AI companies for causing a teen’s death—while understandable in their urgency—are, doctrinally speaking, a stretch. 

Under traditional tort principles, providers of generative AI systems and social media services are unlikely to bear legal responsibility in these cases. Claims based on intentional torts, such as battery, generally fail because providers of online services do not act with the intent to cause—or even to contribute to—physical harm. Therefore, Plaintiffs more commonly turn to negligence theories.

Negligence, however, requires more than just harm in fact. It demands both factual causation and proximate (i.e., legal) causation. In some situations, an online service or generative AI model might satisfy a but-for test because the harm would not have occurred without the service. But that is not sufficient. 

Proximate cause—what the law treats as a legally meaningful connection between conduct and injury—is where most of these claims falter. In many cases, particularly those involving such numerous and complex factors as suicide, the link between a provider’s conduct and the ultimate injury is typically too attenuated to meet this standard. 

Services such as social media and AI chatbots are typically designed as broad, general-purpose tools. The content potentially implicated in these cases comes from other users’ behaviors, personalized interactions, or the user’s own actions. Even where excessive technology use—including social media—has been associated with elevated rates of suicidal ideation among youth and young adults, research has not established a direct causal link. As a result, courts are generally reluctant to find the technology service to be the legal cause of death.

The Broader Ramifications of a Myopic Focus on Tech

Beyond legal error, focusing solely on technology obscures the path to real solutions. When we frame fundamentally human problems as technological ones, we deflect attention from the underlying conditions that lead to these tragedies and make it more likely they will recur. 

This framing guides policymakers and advocates toward seemingly easy, surface-level technological fixes such as imposing age-verification requirements, mandating disclosures about content moderation, or curbing algorithmic feeds. True, technology companies can—and should—consider how to help mitigate real-world harms. Yet these proposed interventions rest on the assumption that technology is the primary culprit, even though research increasingly shows that, in the right contexts, technology can actually help those in crisis. 

The appeal of reducing complex social issues to matters of redesigning or banning technology is understandable. Technology problems can feel tractable. They suggest clear targets and concrete fixes. 

What this logic ignores, however, is that the pre-technology status quo for many public health crises has long been dismal. The better question, then, isn’t whether technology causes harm, but whether it deepens an already broken baseline—or simply reflects it.

Technology, including generative AI, often acts less as a cause than a mirror. Our digital spaces often reflect the offline world, including its ills. 

Today, children face more pressure to excel at school and attend the best universities, even while job prospects stagnate and inflation soars. They have lost access to the kinds of public and community spaces that once offered structure, connection, and care. Libraries operate with reduced hours. Budget cuts have decimated after-school programs. Parks are monitored and restricted for loitering. Community centers that shuttered during the pandemic have never reopened. In many ways, technology—and social media in particular—has stepped in as a makeshift third space for teens. Yet rather than address the erosion of offline support, policymakers are now working to dismantle these digital communities too.

If human distress reflects deteriorating real-world social infrastructure, then optimizing digital services cannot restore stasis. Technological interventions address a symptom while the deeper human cancer persists.

A Pragmatic Path Forward

The path forward requires resisting the impulse to treat fundamentally human problems as technological ones. When new technologies appear alongside harm, the harder and more necessary questions are not simply how to regulate the tool, but what human choices produced the conditions in which harm emerged, which institutions failed or fell short, and what values should guide our response. These questions are more difficult—and often more uncomfortable—because they turn our attention inward, toward ourselves, rather than external and more convenient actors.

Instead of focusing our energies on systematically regulating platforms, we should direct our efforts toward these human problems. For suicide, public health experts point to a wide range of evidence-based strategies for preventing and mitigating risk factors. These include strengthening economic supports such as household financial stability and housing security; creating safer environments by reducing at-risk individuals’ access to lethal means; fostering healthy organizational policies and cultures; and improving access to healthcare by expanding insurance coverage for mental health services and increasing provider availability and remote access in underserved areas. Experts also emphasize the importance of promoting social connection and teaching problem-solving skills that can help individuals navigate periods of acute distress.

These and other socioeconomic reforms are not easy solutions. They aren’t just a matter of adjusting algorithms or restricting platform features. They demand uncomfortable conversations about how we structure work, education, and community life. They require sustained political commitment and resource allocation. Yet if we can achieve these results, we will create a better world than one derived from mere technological fixes.

In short, technology doesn’t cause suicide. It doesn’t cause the host of human problems of which it is often accused. Sadly, those problems have always been with us.

But technology, used wisely, could help us mitigate these problems. Through processing massive amounts of data, for example, AI can detect patterns that elude us humans. This alone could help reveal early warning signs or surface new protective factors. AI chatbots, in particular, could help us identify teens who are at risk and create opportunities to intervene.

But that kind of progress demands that we take responsibility for these problems. We must acknowledge that our governments, societies, communities, and even we ourselves may have normalized and contributed to these harmful conditions. We may discover there’s no rhyme or reason to why teenagers die by suicide. But we may uncover that teen suicide isn’t random at all. It may stem from something we’ve unwittingly ignored, or perhaps built into the world.

That possibility is far more unsettling than the idea of dangerous technology. It’s the idea that the danger might be us.

Kevin Frazier directs the AI Innovation and Law Program at the University of Texas School of Law and is a Senior Fellow at the Abundance Institute. Brian L. Frye is a Spears-Gilbert Professor of Law at the University of Kentucky J. David Rosenberg College of Law. Michael P. Goodyear is an Associate Professor at New York Law School. Jess Miers is an Assistant Professor of Law at The University of Akron School of Law.

Filed Under: ai, blame, moral panic, negligence, proximate cause, suicide, tech, tort law
