Tech

Ron Wyden Is Begging His Colleagues To Stop Trying To Hand Trump A Censorship Weapon

from the how-many-times-are-we-going-to-do-this? dept

We’ve been writing about Section 230 for a very long time. We’ve written about why it matters, why the people attacking it are wrong, and why most of the proposed “reforms” would make the internet dramatically worse for everyone except the already powerful. And for just about as long as we’ve been doing that, Senator Ron Wyden—who co-authored Section 230 three decades ago—has been doing the same thing, often as a lonely voice in a Senate full of colleagues who either don’t understand the law or are actively trying to destroy it.

The Communications Decency Act just turned 30, and Wyden marked the occasion with an op-ed in MSNow that lays out, clearly and forcefully, why Section 230 matters more right now than it has in years. And the piece is a must-read, because it highlights something that should be blindingly obvious to Democrats in Congress but apparently remains invisible to far too many of them: gutting Section 230 while Donald Trump is president would be handing him the pen to rewrite the rules of online speech.

As President Donald Trump and his administration wage war against free speech, it is vital that Americans have a free and open internet where they can criticize the government, share personal health information and simply live their lives without government censorship and repression. For those of us who value the ability for regular people to speak and be heard online, preserving Section 230 is one of the most consequential ways to prevent Trump and the cabal of MAGA billionaires from controlling everything Americans see and read.

You’d think this would be uncontroversial among Democrats. You’d think that watching the Trump administration wage open war on free expression—retaliating against media companies, threatening platforms, unleashing threats from federal agencies on critics—would make it crystal clear that now is not the time to blow up the legal framework that protects people’s ability to speak freely online.

And yet…

Senator Dick Durbin, a Democrat, is still co-sponsoring legislation with Lindsey Graham to repeal Section 230 entirely within two years. This is beyond absurd. A senior Democratic senator is actively working to hand this administration the ability to reshape online speech liability from scratch.

In what universe does that end well?

If you need a refresher on what these senators are proposing to gut, Wyden lays it out plainly:

Section 230 is a simple law: In effect, it says the person who creates a post is the one responsible for it. Without it, goodbye retweets and reskeets, Reddit mods, Wikipedia editors and the people curating feeds on Bluesky. The ability to rapidly reshare information online is only possible because of the law.

That’s it. That’s what they want to hand Trump the power to rewrite.

And it gets worse.

Wyden highlights a category of proposal that perfectly encapsulates why building government censorship tools “for the right reasons” always backfires:

Other proposals include repealing Section 230 for posts the Health and Human Services secretary decides are medical misinformation. This was introduced in 2021 in response to the proliferation of COVID-19 misinformation, but today it would essentially give HHS Secretary Robert F. Kennedy the power to silence critics of his anti-vaccine agenda.

You might recall this one if you’re a regular Techdirt reader. The bill, introduced by Democratic Senators Amy Klobuchar and Ben Ray Lujan, was one we called out as dangerous (and unconstitutional) back in 2021, and we reminded Senators Klobuchar and Lujan of this when RFK Jr. was first nominated to head HHS.

As Wyden notes, a bill written and supported by Democrats to combat COVID misinformation by “reforming” Section 230 would, had it become law, now hand Robert F. Kennedy Jr.—the most prominent anti-vaccine activist in American public life—the authority to define what constitutes medical “misinformation” online.

The person who has spent decades spreading conspiracy theories about vaccines would get to decide which health speech is acceptable on the internet. This is exactly the kind of scenario that people like us (and Wyden) have been warning about for years: the regulatory environment you create to fight the speech you don’t like today will be wielded by the people you trust least tomorrow.

Democrats like Durbin, Klobuchar, and Blumenthal spent years convinced that weakening Section 230 would force Big Tech to clean up its act. The counterargument—made by Wyden, by us, by basically everyone who actually read the law—was always the same: any power you create to shape online speech rules will eventually be used by people whose priorities look nothing like yours. That day has arrived. Those same Democrats are somehow still pushing the same bills.

So what would actually happen if they got their way? Nothing good. Wyden points to how Americans have been using platforms to document what’s actually happening with immigration enforcement:

Americans have used WhatsApp, Signal, Bluesky and TikTok to document violent, lawless activities by Immigration Customs Enforcement and Customs and Border Protection across the country. While corporate news organizations like CBS News have buried stories about Trump administration immigration abuses and are increasingly pushing disingenuous “both sides” reporting, regular Americans have helped to change public opinion with their first-hand videos of government-sanctioned violence that have spread across the internet. 

That was possible because of Section 230. Take it away and you would see ICE agents bring bad faith lawsuits against those platforms, perhaps claiming that Meta helped incite anti-ICE protests or defamed them by carrying posts alleging excessive force. To understand what would be possible, just look at how police departments and Big Oil have used civil suits to try and silence their biggest critics.

This is the part that the “repeal 230” crowd never seems to grapple with. Without Section 230, the platforms hosting that content become legally vulnerable for the content their users post. And the people with the deepest pockets and the most to hide—government agencies, corporations, the powerful—are exactly the ones who would use that vulnerability to silence critics through litigation. We’ve talked about this for years. It wouldn’t be Big Tech that suffers from a 230 repeal. Big Tech can afford armies of lawyers. The people who get crushed are the small platforms, the community forums, the individual users who share and reshare information that the powerful would prefer stayed hidden.

Wyden drives this home with another relevant example:

Or look at the Jeffrey Epstein case. It took dogged journalism by the Miami Herald and activism from Epstein’s victims to keep the story alive. But without Section 230, anyone who merely shared a story or allegation about Epstein and his associates on their social media could be sued by Epstein’s deep-pocketed pals, along with the site that hosted those posts.

He also takes a moment to push back on the persistent myth that Section 230 gives Big Tech blanket immunity to do whatever it wants—a myth that has fueled much of the bipartisan rage against the law:

Critics of Section 230 often misunderstand it. The statute only protects companies when a lawsuit tries to treat a company as the speaker of the post they find offensive or harmful. 

However, courts can and have held companies liable for their own speech and business practices. For example, Amazon has tried and failed to use Section 230 to avoid lawsuits about dangerous batteries it helped sell. Meta also tried and failed to use Section 230 to dodge responsibility for helping place discriminatory ads. And Big Tech is going to trial, after a California state court found that Section 230 did not protect certain social media design features.

(Wyden’s right that 230 isn’t the blanket immunity its critics claim—though where courts have drawn those lines remains hotly contested, and some of us would argue several of these rulings created more problems than they solved. In fact, the fallout from some of those rulings actually serves to show why Section 230 is so important.)

Either way, none of this should be new information, given how many times it’s been litigated and explained. But apparently it bears repeating every single time this debate comes up, because the same wrong arguments keep getting trotted out by the same people who refuse to read 26 words of statute.

Wyden closes with a warning that should be required reading for every legislator contemplating a 230 “reform” bill:

Opening up Section 230, especially right now, while Trump is president, would give him the pen to rewrite online speech rules. Given his administration’s attacks on free speech, I think that would be disastrous.

It says something profoundly depressing about the state of Congress that the guy who wrote the law 30 years ago is still the one who understands it best, and that he has to keep explaining it to colleagues who should know better. Wyden has been right about this from the start. He was right when Republicans attacked Section 230 because they wanted to force platforms to carry their content. He was right when Democrats attacked it because they wanted to force platforms to remove content they didn’t like. And he’s right now, when tearing it down would hand the most speech-hostile administration in modern memory the tools to reshape online expression however it sees fit.

Happy 30th birthday, Section 230. Here’s hoping your co-author can keep his colleagues from smothering you in your sleep.

Filed Under: amy klobuchar, dick durbin, donald trump, intermediary liability, ron wyden, section 230

Tech

OpenAI Amends Pentagon Deal As Sam Altman Admits It Looks ‘Sloppy’

OpenAI is amending its Pentagon contract after CEO Sam Altman acknowledged it appeared “opportunistic and sloppy.” On Monday night, Altman said the company would explicitly restrict its technology from being used by intelligence agencies and for mass domestic surveillance. The Guardian reports: OpenAI, which has more than 900 million users of ChatGPT, made the deal almost immediately after the Pentagon’s existing AI contractor, Anthropic, was dropped. […] The deal prompted an online backlash against OpenAI, with users of X and Reddit encouraging a “delete ChatGPT” campaign. One post read: “You’re now training a war machine. Let’s see proof of cancellation.”

In a message to employees reposted on X, the OpenAI CEO said the original deal announced on Friday had been struck too quickly after Anthropic was dropped. “We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.” Upon announcing the deal, OpenAI had said the contract had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.”

[…] However, observers including OpenAI’s former head of policy research, Miles Brundage, have queried how OpenAI has managed to secure a deal that assuages ethical concerns Anthropic believed were insurmountable. Posting on X, he wrote: “OpenAI employees’ default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them.” Brundage added: “To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics.”

In his X post, he also wrote that he would “rather go to jail” than follow an unconstitutional order from the government. “We want to work through democratic processes,” Brundage wrote. “It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty.”

Tech

Apple announces updated MacBooks with M5 Pro and Max chips

The new MacBook Pro will be offered in 14-inch and 16-inch configurations, both featuring Liquid Retina XDR displays with up to 1600 nits of peak HDR brightness. Storage capacity starts at 1 TB for the M5 Pro model and 2 TB for the M5 Max variant.

Tech

A Possible US Government iPhone-Hacking Toolkit Is Now in the Hands of Foreign Spies and Criminals

Google notes that Apple patched vulnerabilities used by Coruna in the latest version of its mobile operating system, iOS 26, so its exploitation techniques are only confirmed to work against iOS 13 through 17.2.1. The toolkit targets vulnerabilities in Apple’s WebKit framework for browsers, so Safari users on those older versions of iOS would be vulnerable, but there are no confirmed techniques in the toolkit for targeting Chrome users. Google also notes that Coruna checks whether an iOS device has Apple’s most stringent security setting, known as Lockdown Mode, enabled, and doesn’t attempt to hack it if so.

Despite those limitations, iVerify says Coruna likely infected tens of thousands of phones. The company consulted with a partner that has access to network traffic and counted visits to a command-and-control server for the cybercriminal version of Coruna infecting Chinese-language websites. The volume of those connections suggests, iVerify says, that roughly 42,000 devices may have already been hacked with the toolkit in the for-profit campaign alone.

Just how many other victims Coruna may have hit, including Ukrainians who visited websites infected with the code by the suspected Russian espionage operation, remains unclear. Google declined to comment beyond its published report. Apple did not immediately provide comment on Google or iVerify’s findings.

A Single, Very Professional Author

In iVerify’s analysis of the cybercriminal version of Coruna—it didn’t have access to any of the earlier versions—the company found that the code appeared to have been altered to plant malware on target devices designed to drain cryptocurrency from crypto wallets as well as steal photos and, in some cases, emails. Those additions, however, were “poorly written” compared to the underlying Coruna toolkit, according to iVerify chief product officer Spencer Parker, which he found to be impressively polished and modular.

“My God, these things are very professionally written,” Parker says of the exploits included in Coruna, suggesting that the cruder malware was added by the cybercriminals who later obtained that code.

As for the code modules that suggest Coruna’s origins as a US government toolkit, iVerify’s Cole notes one alternative explanation: it’s possible that Coruna’s overlap with the Operation Triangulation malware that Russia pinned on US hackers stems from Triangulation’s components being picked up and repurposed after they were discovered. But Cole argues that’s unlikely. Many components of Coruna have never been seen before, he points out, and the whole toolkit appears to have been created by a “single author,” as he puts it.

“The framework holds together very well,” says Cole, who previously worked at the NSA, but notes that he’s been out of the government for more than a decade and isn’t basing any findings on his own outdated knowledge of US hacking tools. “It looks like it was written as a whole. It doesn’t look like it was pieced together.”

If Coruna is, in fact, a US hacking toolkit gone rogue, just how it got into foreign and criminal hands remains a mystery. But Cole points to the industry of brokers that may pay tens of millions of dollars for zero-day hacking techniques that they can resell for espionage, cybercrime, or cyberwar. Notably, Peter Williams, an executive of US government contractor Trenchant, was sentenced this month to seven years in prison for selling hacking tools to the Russian zero-day broker Operation Zero from 2022 to 2025. Williams’ sentencing memo notes that Trenchant sold hacking tools to the US intelligence community as well as others in the “Five Eyes” group of English-speaking governments—the US, UK, Australia, Canada and New Zealand—though it’s not clear what specific tools he sold or what devices they targeted.

“These zero-day and exploit brokers tend to be unscrupulous,” says Cole. “They sell to the highest bidder and they double dip. Many don’t have exclusivity arrangements. That’s very likely what happened here.”

“One of these tools ended up in the hands of a non-Western exploit broker, and they sold it to whoever was willing to pay,” Cole concludes. “The genie is out of the bottle.”


Tech

Big screen, real OLED, huge discount: this LG deal is easy to like

A 77-inch OLED under $1,500 is the kind of TV deal that doesn’t need much explanation. It’s just a legitimately big discount on a TV people actually want. The LG 77-inch B5 Series OLED (2025) is down to $1,499.99, which is $1,500 off its $2,999.99 list price. That’s a major cut on a current-model OLED, not some random old stock set that only looks good because the sticker dropped.

And at this size, OLED really starts to feel worth it. You’re not just getting a nicer panel. You’re getting the kind of screen that can anchor a room, whether you care more about movies, sports, or gaming.

What you’re getting

The LG OLED77B5PUA is a 77-inch 4K OLED smart TV running webOS. It comes with LG’s Alpha 8 AI Processor Gen 2, plus support for Dolby Vision, HDR10, HLG, and Dolby Atmos.

It’s also better set up for gaming than a lot of TVs in this price range. The B5 has a native 120Hz refresh rate, VRR, NVIDIA G-Sync compatibility, AMD FreeSync Premium, and four HDMI 2.1 ports. That’s the kind of spec sheet that makes it easy to pair with a PS5, Xbox, and a gaming PC without compromise.

Why it’s worth it

The main reason this deal works is pretty simple: 77-inch OLED TVs still aren’t cheap, and this one just dropped into a price range where it feels much more reasonable.

The B-series has always made sense for buyers who want the core OLED experience without paying extra for a higher-tier model. You’re still getting the deep black levels, the strong contrast, the wide viewing angles, and the smooth gaming features that make OLED stand out. What you’re not doing is paying a lot more just to move up the lineup.

At $1,499.99, this is the kind of TV that makes a real upgrade feel obvious. If you’re coming from a smaller LED set, the jump in both size and picture quality is going to be immediately noticeable.

The bottom line

If you’ve been waiting for a big OLED to hit a more reasonable price, this is it. The LG 77-inch B5 Series OLED gives you the stuff that actually matters: 4K OLED picture quality, Dolby Vision, a native 120Hz panel, and four HDMI 2.1 ports. The current discount makes it one of the better 77-inch TV deals out there right now.


Tech

With Teens Comfortable Confiding in AI, Should Schools Embrace It for Mental Health Care?

The alert came around 7 p.m.

Brittani Phillips checked her phone. A middle school counselor in Putnam County, Florida, Phillips receives messages from an artificial intelligence-enabled therapy platform that students use during nonschool hours. It flags when a student may be at risk for harming themself or others based on what the student types into a chat.

Phillips saw that this was a “severe” alert for an eighth grader.

So, Phillips spent her evening on the phone with the student’s mom, probing her to figure out what was going on and how vulnerable the student was. Phillips also called the police, she says, noting that she tells students that the chats are confidential until they can’t be.

That was last school year, in the spring.

“He’s alive and well. He’s in ninth grade this year,” Phillips says. She believes that the interaction built trust between her and the family. When the student passes her in the hall now, he makes a point to greet her, she adds.

Navigating budget shortfalls and limited mental health staff, Interlachen Jr.-Sr. High School, where Phillips works, is using an AI platform to vet students’ mental health needs.

Phillips’ district has used Alongside, an automated student monitoring system, for three years. It’s an example of the growing category of tools that are marketed to K-12 schools for similar purposes, with at least 9 companies getting funding deals since 2022.

Alongside says its tool is used by more than 200 schools around the US and argues that its platform offers better services than typical telehealth options because it has a social and emotional skill-building chat tool — where students talk through their problems with a llama called Kiwi that tries to teach them resilience — and its AI-generated content is monitored by clinicians. The system offers resource-strapped schools, especially in rural areas, access to critical mental health resources, company representatives say.

AI is a major component of the Trump administration’s national education agenda. Yet, some parents, educators and, increasingly, lawmakers, are wary of increasing teens’ time in front of screens. States have also started restricting the use of AI in telehealth.

Many experts and families also worry that students attach to AI too strongly. Even as a recent national survey found that 20 percent of high schoolers have used AI romantically or know someone who has, there’s significant interest in keeping students from emotionally connecting with bots. That even includes a proposed federal law that would force AI companies to remind students that chatbots aren’t real people.

Still, in her job, Phillips says the tool her school uses is exceptional at putting out the “small fires.” With around 360 middle schoolers to support, having this tool to hand-hold them through the breakups and other routine problems they face allows her to focus her time with students nearing crisis. Plus, students sometimes find it easier to turn to AI for dealing with emotional problems, she says.

On the Digital Couch

Student nervousness plays into why they are comfortable confiding in these technologies, school counselors say.

Speaking with a mental health professional can be intimidating, especially for adolescents, says Sarah Caliboso-Soto, a licensed clinical social worker who serves as the assistant director of clinical programs at the USC Suzanne Dworak-Peck School of Social Work and the clinical director for the Trauma Recovery Center and Telebehavioral Health at USC.

There’s a generational component as well. For students who’ve grown up encountering chat interfaces through social media and websites, AI interfaces can feel familiar. And kids today find that it’s easier to text than call someone on the phone, says Linda Charmaraman, director of the Youth, Media & Wellbeing Research Lab at Wellesley Centers for Women.

Using AI to work through emotions also allows students to avoid watching facial expressions, which they may worry will carry judgment, she adds. Also, chatbots are available at times when a human might not be, without the hassle of having to make an appointment, Charmaraman says.

“It’s almost more natural than interacting with another human being,” Caliboso-Soto says.

In her work with a telehealth clinic, Caliboso-Soto has seen a rise in crisis text lines and chat lines. The clinic doesn’t use AI of any kind, she says, but it often gets approached by companies looking to get AI into the therapy sessions as notetakers.

It’s not necessarily bad in Caliboso-Soto’s opinion. For resource-strapped schools, AI can be used “as a first line of defense,” regularly checking in with students and pointing them in the right direction when they need more help, she says.

The starting price for a school to use Alongside’s services is about $10 per student per year, according to the company. Larger districts usually receive volume-based discounts.

But Caliboso-Soto worries about using AI as a substitute counselor. It lacks the discernment that clinicians provide when interacting with students, she notes. While large language models can be trained to notice symptoms in text, they cannot see or hear what a human clinician can when interacting with a student (the inflections of the voice, the movements of the body), nor can they reliably catch subtle observations or behaviors. “You can’t replace human connection, human judgment,” she adds.

While AI can speed up the diagnostic process or free up time for school counselors, it’s crucial not to overly rely on it for mental health, says Charmaraman. The technology can miss some of the nuances that a human counselor would catch, and it can give students unrealistic positive reinforcement. Schools need to adopt a holistic approach that includes families and caregivers, she argues.

Plus, if a school is increasingly using AI intervention to filter serious cases, it’s worth paying attention to whether students are having less frequent contact with clinically-trained humans, Caliboso-Soto says.

For its part, Alongside representatives say that the platform is not meant as a replacement for human therapy. The app is a stepping stone to seeking help from adults, says Ava Shropshire, a junior at Washington University who serves as a youth adviser for Alongside. She argues that the app makes mental health and social-emotional learning feel more normal for students and can lead them to seek out human help.

Still, some students think it’s at best a Band-Aid.

Social Accountability

“Can you think of another time in history when people have been so lonely, when our communities have been so weak?” asks Sam Hiner, executive director of The Young People’s Alliance, a North Carolina-based organization that lobbies for more youth participation in politics and policymaking.

During a time of economic upheaval, technology and social media have manipulated and isolated students from one another, and that’s led to a deep yearning for community and belonging, Hiner says.

Students will get it wherever they can, even if that’s through ChatGPT, he adds.

The Young People’s Alliance released a framework for regulating AI that allows for some therapeutic uses of the technology.

But in general, the organization is striving to rebuild the human community and is set against use of AI when it threatens to replace human companionship, Hiner says. “That’s a critical aspect of therapy and of living a fulfilled life and having social connection and having mental well-being,” he adds.

So for Hiner, the main concern is what’s called a “parasocial relationship,” when students develop a one-sided emotional attachment, especially when the technology enters schools for therapeutic purposes. It might be valuable to have an AI that can provide feedback or conduct analysis, even on mental health, but Hiner says that the AI should not hint or convey that it has its own emotional state — for instance, saying “I’m proud of you” to a student user — because that encourages attachment.

Even though platforms often claim to decrease loneliness, they don’t really measure whether people are more connected and are more set up to live fulfilled, connected, happy lives in the long term, says Hiner: “All [tech platforms are] measuring is whether this bot is serving as an effective crutch for the immediate feelings of loneliness that they’re experiencing.”

What advocates want to prevent is these bots fueling the loss of social skills because they pull people away from relationships with other people, where they have social accountability, Hiner says.

Pushing Boundaries

Privacy experts note that these chatbots do not generally carry the same privacy protections as conversations with a licensed therapist. And at a time when concerns about student privacy and encounters with the police run high, use of these tools raises “messy” privacy concerns, even when supervised by people with clinical training, a privacy law expert says.

Both the company and Phillips, the counselor in Putnam County, stress that, to work, these systems need human oversight. Phillips feels like this tool is an improvement over other monitoring tools the district has used, which point students toward in-school discipline rather than mental health help.

This school year, Phillips had logged 19 “severe” alerts from the AI health tool as of February (out of 393 active users). The company doesn’t break the incidents out by student, so some of the same students are behind multiple of those 19 “severe” alerts, Phillips notes.

Phillips has learned, in using the tool, that it takes a human to perceive teenage humor, too.

That’s because some alerts aren’t genuine. On occasion, middle school students — usually boys — will test the boundaries of this technology, Phillips says. They type “my uncle touches me” or “my mom beat me with a pole” into the chat to test whether Phillips will follow up on it.

These boys are just trying to see if anyone is listening, to test whether anyone cares, she says. Sometimes, they just find it funny.

When she pulls them aside to discuss it, she can observe their body language, and whether it changes, which might suggest that the comment was real. If it was a joke, they often become apologetic. When a student doesn’t seem remorseful, Phillips will call and let the parents know what happened. But even in these cases, Phillips feels she has more options than provided by other monitoring systems, which would refer the student to in-school suspension.

Because Phillips is keeping her eye on the interactions, the students also learn to trust that she’s actually monitoring the system, she adds.

And, she says, the number of boys who do test the system in that way goes down every year.


Tech

A rewritable hard drive made of DNA? Researchers say it's possible

According to Li-Qun Gu, DNA is an extremely compact, stable package of information. Natural DNA strands encode the biological blueprints of all life on Earth, but Mizzou researchers are exploring ways to repurpose molecular biology as a digital storage medium.

Tech

Shareholder lobs lawsuit at Apple execs, claims they engaged in anticompetitive behavior

Apple’s leadership is now being accused of knowingly steering the company through years of antitrust risk to maintain App Store dominance.

Apple executives sued in shareholder derivative suit

In late February, one of Apple’s shareholders filed a derivative lawsuit against the Cupertino-based tech giant’s top executives. The suit says directors and top executives — including Tim Cook — breached their fiduciary duties to Apple.
According to Bloomberg Law, it alleges that the parties in question allowed and furthered “monopolistic conduct” for more than a decade. The case was brought by a retirement fund.

Forget the Nintendo Switch 2, MWC 2026 is full of brilliant mobile gaming devices, including the Red Magic Astra


It’s getting harder to deny that gaming tablets are on the rise. These aren’t the best tablets ever made, but they offer lots of processing power in a portable form, without all the bells and whistles of a standard slate. Think a gaming phone, but bigger. And I’ve just found a new love.

I’m on the ground at MWC 2026, a mobile tech conference that’s played host to a fair few gaming gadgets. I’ve seen the pint-sized Lenovo Legion Tab (Gen 5) and the Nubia Neo 5 GT gaming phone, and both seemed great. I’ve also seen plenty of other powerful devices, like the Xiaomi 17 Ultra, Samsung Galaxy S26 Ultra, and the understated Honor Magic 8 Pro. But what’s really caught my eye is the Red Magic Astra.

What you get from a maxed-out MacBook Pro, MacBook Air, or Mac mini


Forget the “starts from” prices: here’s what you could spend on Macs from the Mac mini to the new MacBook Pro, and what you get for your money.

A Mac mini on a MacBook Air’s screen, and both on a MacBook Pro’s display

With the launch of the new MacBook Pro and MacBook Air, Apple isn’t just presenting users with a choice of models. It’s also, as ever, providing a range of options for each device.
If you don’t want to overspend, you need to know what you could be getting, and enough information to decide exactly what that is worth to you.

Apple’s MacBook Pro Rockets Ahead With M5 Pro and M5 Max Supercharged Speed and On-Device Intelligence


Apple just revealed its latest MacBook Pros, powered by the new M5 Pro and M5 Max chips. Engineers crafted the M5 Pro and M5 Max around a new Fusion Architecture, in which separate dies combine into one seamless system-on-a-chip optimized for AI workloads alongside traditional computing demands.



This beast is powered by an 18-core CPU: six of its cores are, according to Apple, the fastest single cores ever shipped, while the other 12 are designed to handle the long haul of multiple threads running at the same time. In CPU-intensive jobs, performance has increased by a solid 30% over the previous generation.


The graphics side of things has also seen a significant improvement, with each core now having its own dedicated Neural Accelerator. The M5 Pro has a memory bandwidth of 307GB/s, which doubles to 614GB/s on the M5 Max. The M5 Pro can have up to 64GB of unified memory, while the M5 Max can have up to 128GB. Apple claims up to four times the AI performance of the previous generation and eight times faster overall than the outdated M1-era machines.



Developers will appreciate the improvement: large language model prompts run four times faster than on a MacBook Pro with an M4 Pro or Max chip, and image generation is 8 times faster than on the older M1 Pro and Max machines. Graphics users will be pleased to know that graphics workloads have improved by 50% over the M4 counterparts. In other tests, 3D rendering in Redshift was 1.4 times faster on the M5 Pro, while video effects in DaVinci Resolve were three times faster on the M5 Max.

Storage performance has been improved significantly, with read and write rates of up to 14.5GB/s, which is twice as fast as previous models. If you require tons of capacity, you’ll be glad to know that base configurations have increased: the M5 Pro starts at 1TB and the M5 Max at 2TB, all without additional expense.
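A quick back-of-the-envelope calculation shows what that doubled throughput means in practice. This sketch assumes sustained sequential reads at the quoted peak rates, which real-world workloads won't always hit:

```python
# Rough copy-time estimate at a given sustained throughput.
# 14.5 GB/s is the quoted peak for the new models; the previous
# generation's figure is taken as half that, per the "twice as fast" claim.
def seconds_to_read(size_gb: float, rate_gb_per_s: float) -> float:
    """Time in seconds to read size_gb gigabytes at rate_gb_per_s."""
    return size_gb / rate_gb_per_s

new_gen = seconds_to_read(100, 14.5)   # ~6.9 s for a 100 GB project
old_gen = seconds_to_read(100, 7.25)   # ~13.8 s on the previous models
print(f"new: {new_gen:.1f}s, old: {old_gen:.1f}s")
```

For a 100 GB video project, that's the difference between roughly seven seconds and fourteen, a gap you'd feel on every scrub and export.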

Battery life looks very respectable, lasting up to 24 hours on a charge. Charging is speedy, reaching 50% in 30 minutes if you use a 96W or greater charger. The Liquid Retina XDR display still offers 1600 nits of peak HDR brightness and 1000 nits for SDR content, with an optional nano-texture finish to reduce glare. You get the same 12MP Center Stage camera with Desk View, studio-quality mics, and a six-speaker system with Spatial Audio for a great media experience.

Connectivity options include three Thunderbolt 5 ports, HDMI with up to 8K output, an SDXC slot, and, of course, MagSafe 3. The M5 Pro can drive two external high-resolution displays, while the M5 Max can support four. Apple’s N1 wireless processor, with Wi-Fi 7 and Bluetooth 6, provides faster, more stable wireless connectivity.

As you might have expected, costs have risen to match the increased power and storage. The 14-inch MacBook Pro with M5 Pro costs $2,199 ($2,049 education pricing), while the 16-inch version starts at $2,699 ($2,499 education). M5 Max models, on the other hand, begin at $3,599 for the 14-inch model and $3,899 for the 16-inch model ($3,299 and $3,599 education, respectively). Both space black and silver finishes are offered. Pre-orders will be accepted starting March 4, with delivery beginning March 11.


Copyright © 2025