Yes, Section 230 Should Apply Equally To Algorithmic Recommendations

from the it-won’t-do-what-you-think-if-you-remove-it dept

If you’ve spent any time in my Section 230 myth-debunking guide, you know that most bad takes on the law come from people who haven’t read it. But lately I keep running into a different kind of bad take—one that often comes from people who have read the law, understand the basics passably well, and still say: “Sure, keep 230 as is, but carve out algorithmically recommended content.”

Unlike the usual nonsense, this one is often (though not always) offered in good faith. That makes it worth engaging with seriously.

It’s still wrong.

Let’s start with the basics: as we’ve described at great length, the real benefits of Section 230 are its procedural protections, which get vexatious cases tossed out at the earliest (i.e., cheapest) stage. That makes it possible for sites to host third-party content without being sued out of existence any time anyone has a complaint about someone else’s content on the site. This distinction gets lost in almost every 230 debate, but it matters. If the lawsuits that removing 230 protections would enable would still eventually fail on First Amendment grounds, then removing those protections accomplishes only one thing: making litigation impossibly expensive for individuals and smaller providers, while doing no real damage to large companies, which can easily survive those lawsuits.

And that takes us to the key point: removing Section 230 for algorithmic recommendations would only lead to vexatious lawsuits that will fail.

But what about [specific bad thing]?

Before diving into the legal analysis, let’s engage with the strongest version of this argument. Proponents of carving out algorithmic recommendations typically aren’t imagining ordinary defamation suits. They’re worried about something more specific: cases where an algorithm itself arguably causes harm through its recommendation patterns—radicalization pipelines, engagement-driven amplification of dangerous content, recommendation systems that push vulnerable users toward self-harm.

The theory goes something like this: maybe the underlying content is protected speech, but the act of recommending it—especially when the algorithm was designed to maximize engagement and the company knew this could cause harm—should create liability, usually as some sort of “products liability” type complaint.

It’s a more sophisticated argument than “platforms are publishers.” But it still fails, for reasons I’ll explain below. The short version: a recommendation is an opinion, opinions are protected speech, and the First Amendment doesn’t carve out “opinions expressed via algorithm” as a special category.

A short history of algorithmic feeds

To understand why removing 230 from algorithmic recommendations would be such a mistake, it helps to remember the apparently forgotten history of how we got here. In the pre-social media 2000s, “information overload” was the panic of the moment. Much of the discussion centered on the “new” technology of RSS feeds, and there were plenty of articles decrying too much information flooding into our feed readers. People weren’t worried about algorithms—they were desperate for them. Articles breathlessly anticipated magical new filtering systems that might finally surface what you actually wanted to see.

The most prominent example was Netflix, back when it was still shipping DVDs. Because there were so many movies you could rent, Netflix built one of the first truly useful recommendation algorithms—one that would take your rental history and suggest things you might like. The entire internet now looks like that, but in the mid-2000s, this was revolutionary.

Netflix’s approach was so novel that they famously offered $1 million to anyone who could improve their algorithm by 10%. We followed that contest for years as it twisted and turned until a winner was finally announced in 2009. Incredibly, Netflix never actually implemented the winning algorithm—but the broader lesson was clear: recommendation algorithms were valuable, and people wanted them.
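To make that concrete, here’s a minimal sketch of the underlying idea (purely illustrative; Netflix’s actual system was far more sophisticated, and nothing below is its real code): score the titles a user hasn’t seen by how often they co-occur with titles in the user’s history.

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence recommender, for illustration only.
# Each user's history is the set of titles they rented.
histories = [
    {"Alien", "Blade Runner", "Dune"},
    {"Alien", "Blade Runner", "Solaris"},
    {"Dune", "Solaris"},
]

# Count how often each pair of titles shows up in the same history.
pair_counts = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        pair_counts[(a, b)] += 1

def recommend(seen: set[str], k: int = 2) -> list[str]:
    """Rank unseen titles by co-occurrence with the user's history."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a in seen and b not in seen:
            scores[b] += n
        elif b in seen and a not in seen:
            scores[a] += n
    return [title for title, _ in scores.most_common(k)]

print(recommend({"Alien"}))  # ['Blade Runner', 'Dune']
```

The Netflix Prize entries replaced these raw co-occurrence counts with far better statistical models, but the product idea is the same: turn a viewing history into an ordered guess at what you’ll want next.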

As social media grew, the “information overload” panic of the blog+RSS era faded, precisely because platforms added recommendation algorithms to surface content users were most likely to enjoy. The algorithms weren’t imposed on users against their will—they were the answer to users’ prayers.

Public opinion only seemed to shift on “algorithms” after Donald Trump was elected in 2016. Many people wanted something to blame, and “social media algorithms” was a convenient excuse.

Algorithmic feeds: good or bad?

Many people claim they just want a chronological feed, but studies consistently show that the vast majority of people prefer algorithmic recommendations, because they surface more of what users actually want than a chronological feed does.

That said, it’s not as simple as “algorithms good.” There’s evidence that algorithms optimized purely for engagement can push emotionally charged political content that users don’t actually want (something Elon Musk might take notice of). But there’s also evidence that chronological feeds expose users to more untrustworthy content, because algorithms often filter out garbage.

So, algorithms can be good or bad depending on what they’re optimized for and who controls them. That’s the real question: will any given regulatory approach give more power to users, to companies, or to the government?

Keep that frame in mind. Because removing 230 protections for algorithmic recommendations shifts power away from users and toward incumbents and litigants.

The First Amendment still exists

As mentioned up top, the real role of Section 230 is providing a procedural benefit: getting vexatious lawsuits tossed well before, and at far lower cost than, the point at which they would be tossed anyway under the First Amendment. With Section 230, you can get a case dismissed for somewhere in the range of $50k to $100k (maybe up to $250k with appeals and such). If you have to rely on the First Amendment, it’s up in the millions of dollars (probably $5 to $10 million).

The crux of this is that any online service sued over an algorithmic recommendation, even for something horrible, would almost certainly win on First Amendment grounds.

Because here’s the key point: a recommendation feed is a website’s opinion of what it thinks you want to see. And an opinion is protected speech, even if you think it’s a bad or dangerous opinion. That is one thing US courts have been remarkably consistent about.

Saying that an internet service can be held liable for giving its opinion on “what we think you’d like to see” would be earth-shatteringly problematic. As partly discussed above, the modern internet relies heavily on algorithms recommending stuff, giving opinions. Every search result is just that: an opinion.

This is why the “algorithms are different” argument fails. Yes, there’s a computer involved. Yes, the recommendation emerges from machine learning rather than a human editor’s conscious decision. But the output is still an expression of judgment: “Based on what we know, we think you’ll want to see this.” That’s an opinion. The First Amendment doesn’t distinguish between opinions formed by editorial meetings and opinions formed by trained models.

In the earlier internet era, there were companies that sued Google because they didn’t like how their own sites appeared (or didn’t appear) in Google search results. The E-Ventures v. Google case here is instructive. Google determined that E-Ventures’ “SEO” techniques were spammy, and de-indexed all of its sites. E-Ventures sued. Google (rightly) raised a 230 defense, which (surprisingly!) a court rejected.

But the case went on longer, and after lots more money on lawyers was spent, Google did prevail on First Amendment grounds.

This is exactly what we’re discussing here. Google search ranking is an algorithmic recommendation engine, and in this one case a court (initially) rejected a 230 defense, causing everyone to spend more money… to get to the same basic result in the long run. The First Amendment protects a website using algorithms to express an opinion over what it thinks you’ll want… or not want.

Who has agency?

This brings us back to the steelman argument I mentioned above: what about cases where an algorithm recommends something genuinely dangerous?

Our legal system has a clear answer, and it’s grounded in agency. A recommendation feed is not hypnotic. If an algorithm surfaces content suggesting you do something illegal or dangerous, you still have to make the choice to do the illegal or dangerous thing. The algorithm doesn’t control you. You have agency.

But there’s a stronger legal foundation here too. Courts have consistently found that recommending something dangerous is still protected by the First Amendment, particularly when the recommender lacks specific knowledge that what they’re recommending is harmful.

The Winter v. G.P. Putnam’s Sons case is instructive here. The publisher of a mushroom encyclopedia included recommendations to eat mushrooms that turned out to be poisonous—very dangerous! But the court found the publisher wasn’t liable, because it didn’t have specific knowledge of the dangerous recommendation. And crucially, the court noted that the “gentle tug of the First Amendment” would block any “duty of care” that would require publishers to verify the safety of everything they publish:

The plaintiffs urge this court that the publisher had a duty to investigate the accuracy of The Encyclopedia of Mushrooms’ contents. We conclude that the defendants have no duty to investigate the accuracy of the contents of the books it publishes. A publisher may of course assume such a burden, but there is nothing inherent in the role of publisher or the surrounding legal doctrines to suggest that such a duty should be imposed on publishers. Indeed the cases uniformly refuse to impose such a duty. Were we tempted to create this duty, the gentle tug of the First Amendment and the values embodied therein would remind us of the social costs.

Now, I should acknowledge that Winter was a products liability case involving a physical book, not a defamation or tortious speech case involving an algorithm. But almost all of the current cases challenging social media are self-styled as products liability cases in an attempt (usually unsuccessful) to avoid the First Amendment, and cases over algorithmic recommendations would take exactly the same shape.

The underlying principle remains the same whether you call it a products liability case or one officially about speech: the First Amendment bars requirements that publishing intermediaries must “investigate” whether everything they distribute is accurate or safe. The reason is obvious—such liability would prevent all sorts of things from getting published in the first place, putting a massive damper on speech.

Apply that principle to algorithmic recommendations, and the answer is clear. If a book publisher can’t be required to verify that every mushroom recommendation is safe, a platform can’t be required to verify that every algorithmically surfaced piece of content won’t lead someone to harm.

The end result?

So what would it mean if we somehow “removed 230 from algorithmic recommendations”?

Practically, it means that if companies have to rely on the First Amendment to win these cases, only the biggest companies can afford to do so. The Googles and Metas of the world can absorb $5-10 million in litigation costs. For smaller companies, those costs are existential. They’d either exit the market entirely or become hyper-aggressive about blocking content at the first hint of legal threat—not because the content is harmful, but because they can’t afford to find out in court.

The end result would be that the First Amendment still protects algorithmic recommendations—but only for the very biggest companies that can afford to defend that speech in court.

That means less competition. Fewer services that can recommend content at all. More consolidation of power in the hands of incumbents who already dominate the market.

Remember the frame from earlier: does this give more power to users, companies, or the government? Removing 230 from algorithmic recommendations doesn’t empower users. It doesn’t make platforms more “responsible.” It just makes it vastly harder for anyone other than the giant platforms to exist while also giving more power to governments, like the one currently run by Donald Trump, to define what things an algorithm can, and cannot, recommend.

Rather than diminishing the power of billionaires and incumbents, this would massively entrench it. The people pushing for this carve-out often think they’re fighting Big Tech. In reality, they’re fighting to build Big Tech a new moat.

Filed Under: 1st amendment, algorithmic feeds, algorithmic recommendations, algorithms, feeds, free speech, opinion, section 230

14-inch MacBook Pro M5 vs Asus Zenbook A16: $2,000 shootout

The Asus Zenbook A16 is a thin and light Windows notebook aiming to take the portability crown from Apple. Here’s how it compares against a similarly-priced MacBook Pro.

M5 14-inch MacBook Pro vs Asus Zenbook A16

For our spec-sheet brawl, we’re going to put the $1,999 Asus Zenbook A16 against the 14-inch MacBook Pro with M5. As much as we would like to compare the similarly sized 16-inch MacBook Pro, the other upgrades to its base-spec version push it to $2,699, which is a bit too high.
To make the prices a little closer, we will spec the 14-inch MacBook Pro with an upgraded memory allowance of 24GB or 32GB.

3 underrated Amazon Prime Video movies you should watch this weekend (April 10-12)

This weekend’s watchlist covers three different genres of movies, so you can pick whatever you are in the mood for. We have a trio of hidden gems on Amazon Prime Video that deserve way more attention.

There is a gritty Michael Caine revenge thriller you should not miss and a micro-budget 1950s sci-fi mystery that thrives on atmosphere and dialogue. For horror fans, we have a psychological horror film about a hospice nurse whose faith tips into something far more dangerous, and it gets under your skin.

We also have guides to the best new movies to stream, the best movies on Netflix, the best movies on Hulu, the best free movies, and the best movies on Amazon Prime Video.

Saint Maud (2019)

Saint Maud is not a horror film in the traditional sense, and going in expecting one will work against you. What it is, instead, is a deeply unsettling psychological portrait of a young hospice nurse named Maud, a recent Catholic convert who becomes dangerously fixated on saving her terminally ill patient’s soul in ways that grow increasingly disturbing.

Morfydd Clark’s performance is the engine of the whole thing, holding a fragile, frightening line between piety and paranoia throughout. I really like how the film gets under your skin without ever fully explaining itself. You finish it feeling like you witnessed something you were not supposed to see, and that feeling does not leave quickly.

You can watch Saint Maud on Amazon Prime Video.

Harry Brown (2009)

If you have a soft spot for slow-burn British crime dramas, Harry Brown is the movie you need to watch this weekend. Michael Caine plays the title character, a widowed, retired Royal Marines veteran living on a decaying South London housing estate overrun by gang violence. When his only friend is murdered, Harry stops looking the other way.

What makes this film work so well is how it refuses to glamorize what follows. Harry is not an action hero. He is an old man with emphysema who stumbles during a chase and collapses on a canal path.

I really like how the film earns every moment of tension because it keeps Harry vulnerable and the world around him genuinely threatening. Caine is absolutely extraordinary here, and there are sequences in this film that will make you forget you are watching a 77-year-old man.

You can watch Harry Brown on Amazon Prime Video.

The Vast of Night (2019)

Have you ever accidentally tuned into a late-night radio broadcast and found you could not bring yourself to switch it off? Well, The Vast of Night is exactly that kind of sci-fi movie.

Set over a single night in 1950s small-town New Mexico, the film follows Fay, a teenage switchboard operator, and Everett, a fast-talking local radio DJ, as they stumble onto a mysterious audio frequency that sends them down a strange and increasingly eerie rabbit hole.

There are no big set pieces or alien invasions. The tension is built almost entirely through dialogue, long unbroken camera takes, and incredibly precise sound design that makes the night itself feel alive.

What I really love about this movie is how it makes stillness feel tense. A long phone call, a quiet street, a voice crackling through static, and somehow all of it keeps you completely locked in. For a movie made on a low budget, The Vast of Night makes for an entertaining watch.

You can watch The Vast of Night on Amazon Prime Video.

Alibaba leads $293m round in Chinese AI start-up after HappyHorse reveal

HappyHorse 1.0 shot up to the top ranks in the Artificial Analysis leaderboard.

Chinese technology giant Alibaba’s cloud division led a $293m funding round into ShengShu Technology, a 2023-founded Beijing-based start-up behind the Vidu AI video-generation tool.

Baidu Ventures and Luminous Ventures also participated in the round. The company’s post-money valuation has not been disclosed.

The latest investment comes after ShengShu raised nearly $88m in a Series A round in February.

Vidu is marketed towards independent creators and animators, promising “effortless” production of content with “diverse artistic styles”.

The start-up is focused on building ‘world models’ based on multimodal data such as audio, video and “touch”. The latest funding, the company said, will help support the development of a “general world model”.

The company’s latest model, Vidu Q3 Pro, which launched in January, ranks seventh on the Artificial Analysis leaderboard for text-to-video models and 10th in the image-to-video rankings.

Vidu competes with other Chinese AI heavyweights, including ByteDance’s Seedance 2.0 and lead investor Alibaba’s own video model, HappyHorse 1.0, which shot up to the top rank on the Artificial Analysis leaderboard.

Meanwhile, models from companies such as Singapore’s Skywork AI and Beijing-based Kuaishou, the company behind KlingAI, also rank high on the boards. These models are racing to fill the gap in the video generation space left by OpenAI after it shuttered Sora late last month, and the top leaderboard rankings are increasingly filled by Chinese models.

HappyHorse was launched anonymously earlier this week before Alibaba claimed ownership today (10 April). The model is a product of Alibaba’s new Token Hub (ATH) innovation unit, and it places first in the text-to-video and image-to-video rankings without audio, and second with audio.

Bloomberg News reported that HappyHorse 1.0, which is currently in beta testing, will be followed by more new ATH products. Alibaba’s share price shot up following speculation that the company was behind the model.

Alibaba decided last month to bring its AI services and development work under a single roof called ATH, led by CEO Eddie Wu.


Analysis of one billion CISA KEV remediation records exposes limits of human-scale security

Author: Saeed Abbasi, Senior Manager, Threat Research Unit, Qualys

With Time-to-Exploit now at negative seven days and autonomous AI agents accelerating threats, the data no longer supports incremental improvement. The architecture of defense must change.

What Leaders Need to Know

Analysis of CISA’s Known Exploited Vulnerabilities over the past four years shows the share of critical vulnerabilities still open at Day 7 worsened from 56% to 63%, despite teams closing 6.5x more tickets. Staffing cannot solve this.

Of the 52 tracked weaponized vulnerabilities in our study, 88% were patched more slowly than they were exploited — half were weaponized before any patch existed.

The problem is not speed. It is the operational model itself.

Cumulative exposure, not CVE counts, is the true risk metric that security teams now need to measure. While dashboards reward the sprint to get patches implemented, breaches exploit the tail. AI is not another attack surface — instead, the transition period where AI-powered attackers face human defenders is the industry’s most dangerous window.

In response, defenders have to implement their own autonomous, closed-loop risk operations.

The Broken Physics

New research from the Qualys Threat Research Unit, analyzing more than one billion CISA KEV remediation records from across 10,000 organizations over four years, quantifies what the industry has long suspected but never proved at scale. The operational model underpinning enterprise security is broken.

Vulnerability volumes have grown 6.5 times since 2022. According to Google M-Trends 2026, the average Time-to-Exploit has collapsed to negative seven days; in other words, adversaries are weaponizing the most serious vulnerabilities before patches exist. The percentage of critical vulnerabilities still open at seven days has climbed from 56 percent to 63 percent.

Yet this is not for lack of effort. Organizations now close 400 million more vulnerability events annually than they did at baseline. Teams work harder, but it fails to make the difference where it counts. Our researchers call this the “human ceiling” — a structural limit no amount of staffing or process maturity can overcome. The constraint is not effort. It is the model itself.

Of 52 high-profile weaponized vulnerabilities tracked with complete exploitation timelines, 88 percent were remediated slower than they were exploited. As an example, Spring4Shell was exploited two days before disclosure, yet the average enterprise needed 266 days to remediate.

Similarly, the flaw in Cisco IOS XE was weaponized a month early; average close was 263 days.

The attacker’s advantage was measured in days. The defender’s response was measured in seasons. This is not an intelligence failure. It is an operationalization failure.


The Manual Tax and Risk Mass

The report identifies a “Manual Tax” — the multiplier effect where long-tail assets that human processes cannot reach drag exposure from weeks into months. For Spring4Shell, average remediation was 5.4 times the median.

The median tells a manageable story. The average tells the truth. Infrastructure systems face a harsher reality: for Cisco IOS XE, even the median was 232 days — compared to endpoint medians consistently under 14. When the best-case outcome is eight months, the Manual Tax is no longer a multiplier. It is the baseline.

Looking at average figures is no longer helpful for decision-making. Instead, looking at Risk Mass — vulnerable assets multiplied by days exposed — captures what CVE counts obscure around cumulative exposure. A companion metric, Average Window of Exposure (AWE), measures the full duration from weaponization to remediation across the environment.

As an example, Follina was weaponized 30 days before disclosure with an average close at Day 55.

However, the AWE stretched to 85 days. The blind spot before disclosure accounted for 36 percent of that window, and the long tail of patching accounted for a further 44 percent. Together, pre-disclosure and long tail represent 80 percent. The sprint that gets measured makes up less than 20 percent.
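A minimal sketch of the two metrics as the article describes them (the field names, the summing over assets, and the sample numbers are my assumptions, not Qualys’s published spec):

```python
from dataclasses import dataclass

@dataclass
class AssetExposure:
    asset_id: str
    days_exposed: int  # days from weaponization to remediation on this asset

def risk_mass(exposures: list[AssetExposure]) -> int:
    # "Vulnerable assets multiplied by days exposed": total asset-days at risk.
    return sum(e.days_exposed for e in exposures)

def average_window_of_exposure(exposures: list[AssetExposure]) -> float:
    # AWE: mean weaponization-to-remediation duration across the environment.
    return risk_mass(exposures) / len(exposures)

# Hypothetical fleet: one asset patched in the "sprint", one in the long tail.
fleet = [AssetExposure("web-01", 40), AssetExposure("legacy-07", 130)]
print(risk_mass(fleet), average_window_of_exposure(fleet))  # 170 85.0

# Follina, per the article: weaponized 30 days before disclosure, AWE of 85
# days, so the pre-disclosure blind spot alone is 30 / 85 of the window.
print(f"{30 / 85:.0%}")  # 35%, consistent with the article's ~36 percent
```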

At the same time, of 48,172 vulnerabilities disclosed in 2025, only 357 were remotely exploitable and actively weaponized. Organizations are burning remediation cycles on theoretical exposure while genuinely exploitable gaps persist.

Why the Gap Will Widen

Cybersecurity has long operated as a derivative of technology shifts — Windows security followed Windows, cloud security followed cloud. Leading practitioners and investors now argue AI breaks that pattern. It is not merely a new surface to defend; it is a fundamental transformation of the adversary itself.

Offensive agents can already discover, weaponize, and execute faster than any human-staffed operation can respond. The remediation data proves humans cannot keep pace today. Autonomous AI ensures the gap will accelerate tomorrow.

The transition period — where AI-powered attackers face human-speed defenders — represents the industry’s most dangerous window, compounded by the structural vulnerabilities that dominate the near term: attack surfaces expanded beyond what teams can govern, identity sprawl that outpaces policy, and remediation workflows still built on manual execution.

How Security Teams Can Close the Risk Gap

The scan-and-report model — discover, score, ticket, manually route — was built for lower volumes and longer exploit timelines.

What replaces it is an end-to-end Risk Operations Center: embedded intelligence arriving as machine-readable decision logic, active confirmation validating whether a vulnerability is actually exploitable in a specific environment, and autonomous action compressing response to the timescale the threat demands.

The objective is not to eliminate human judgment but to elevate it — shifting practitioners from tactical execution to governing the policies that direct autonomous systems. The organizations already winning the physics gap are not winning with larger teams. They are winning because they have removed human latency from the critical path.
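One way to picture that closed loop in miniature (a conceptual sketch only; every function and field name below is hypothetical, not Qualys’s or anyone’s actual API):

```python
# Conceptual sketch of a closed-loop risk operations pipeline. All names are
# hypothetical illustrations of the three components the article describes:
# embedded intelligence, active confirmation, and autonomous action.

def fetch_decision_logic() -> list[dict]:
    """Embedded intelligence: a machine-readable 'weaponized right now' feed."""
    return [{"cve": "CVE-2026-4747", "weaponized": True}]

def confirmed_exploitable(finding: dict, environment: dict) -> bool:
    """Active confirmation: is this actually exploitable in *this* environment?"""
    return finding["weaponized"] and finding["cve"] in environment["exposed_services"]

def remediate(finding: dict) -> None:
    """Autonomous action: patch or mitigate without waiting on a ticket queue."""
    print(f"auto-remediating {finding['cve']}")

def risk_ops_loop(environment: dict) -> None:
    # Humans govern the policy; the loop executes it at machine speed.
    for finding in fetch_decision_logic():
        if confirmed_exploitable(finding, environment):
            remediate(finding)

risk_ops_loop({"exposed_services": {"CVE-2026-4747"}})
```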

Time-to-Exploit will not return to positive numbers. Vulnerability volume will not plateau. The reactive model has hit a hard mathematical ceiling.

The only remaining question is whether organizations will use the architecture to match the mathematics — before the window between human-scale defense and autonomous-scale offense closes for good.

Sponsored and written by Qualys.


5 Tech Items You Shouldn’t Try To Donate To Thrift Stores


You might feel like offloading electronics at a thrift store is an easy way to get rid of them while also letting others enjoy their use. To be fair, there are always some cool gadgets and electronics to look out for as a buyer, but there are some tech items that you shouldn’t even try donating to thrift stores. Because of different policies and simple safety concerns, certain pieces of tech will be rejected by thrift stores before they even leave your hands.

A great number of thrift stores have a list of items that they’ll accept or deny. These lists aren’t always uniform across different outlets, but a few pieces of tech are more likely to be refused than not. The ones that get turned down tend to be old or volatile for one reason or another, and stores obviously wouldn’t want to sell things that are broken or even dangerous. In some cases, there might also be items that you just shouldn’t want to give them anyway. Here are five different types of items that just aren’t worth trying to donate to thrift stores.

Printers and fax machines

Fax machines are generally seen as old tech devices that the latest generation will never learn to use, and they aren’t exactly small when compared to other types of electronics like phones or even laptops. Printers are a bit more universal, but again their size still makes them difficult for many thrift stores to accept. Generally, small electronics have a much better chance at being taken off your hands. It’s less a matter of function and more a matter of size and space.

Some thrift stores won’t have this issue with printers, but you might still run into trouble depending on the type of printer you give them. In the past, many donors have found it difficult to offload printers that use proprietary ink and toner cartridges. These are expensive, manufacturer-specific, and sometimes aren’t even made anymore. Even if these older printers are cheap, with so many restrictions on what allows them to work in the first place, many thrift stores simply don’t find it worthwhile to stock them at all.

Batteries, or items with batteries

It shouldn’t be too surprising to hear that thrift stores aren’t very willing to accept loose batteries. You should already be aware of their safety risks, especially if you’ve already experienced batteries leaking from improper storage and use. Besides, considering the specific tasks and devices they’re meant for, you probably don’t have much reason to donate AA or AAA batteries instead of throwing them away. And once they’re used up, you should be recycling them properly, not giving them away.

As you might expect, this rule can apply to more than just the batteries themselves. Car batteries and devices with built-in batteries can pose very similar risks. You might get away with donating the latter, but rechargeable batteries integrated into small electronics such as smartphones can swell over time. A swollen battery is just about ready to catch fire, and it should go without saying that no thrift store will be happy about that.

Older tech, including CRTs

You might think that a thrift store would happily accept an older television set. They’ve been making a comeback in recent years, and they don’t seem very harmful on the surface. But older CRT televisions are pretty much universally denied by these locations. Some shoppers have found thrift stores carrying CRTs in certain areas, but you might have a tough time getting your local location to accept one.

Once again, the problem here is safety above all else. Goodwill in Southern Alleghenies mentions how it had to stop accepting CRTs because they “contain five to eight pounds of lead.” In this case, there’s also a high cost for the store to offload them in the first place; it’s forced to pay fees and find landfills that will actually take the items. Few places have the freedom or motivation to deal with these issues, and fewer still will want to take the safety risks involved in keeping these stocked.

Computer monitors and other screens

The aforementioned Goodwill location refuses to take flat-screen TVs for similar reasons as CRTs: hazardous materials and risks to safety. But the rules aren’t universal for every location, even when it comes to different Goodwill stores. And this goes for other screens and displays, too, such as computer monitors. It’s really up in the air whether you’ll be able to find a thrift store near you that’ll accept them.

LCD monitors might be an example of tech that’s still worth buying used, but they can still face notable quality issues such as dead pixels. OLED monitors also have the risk of burn-in, which further creates problems with how attractive they are to buyers. Thrift stores aren’t likely to accept broken or damaged electronics, and depending on their definition, monitors with those problems could be quickly denied by them. At that point, it’s a much better decision to take those screens to a recycling center, not a thrift store.

Unwiped storage devices

Donors have faced difficulties in giving their digital storage devices to certain thrift stores, though some locations will still accept them without major issue. The problem here is on your end, as you can’t be sure that these stores will reliably wipe those drives on their own. If you simply give away your older storage devices carelessly, whoever buys them might end up picking through your personal information. Even a full deletion might not guarantee your safety unless you use special wiping programs or physically destroy the old drive entirely — at which point there’s no chance a thrift store will accept it anyway.

On top of hard drives, USB flash sticks, and solid state drives themselves, you should be aware of any device that might have storage built-in. This applies most to computers and laptops, obviously, but smart TVs and game consoles can be problematic to donate if you still have them signed into your accounts. Many of the electronics thrift stores refuse are a risk to their safety, but make sure the items they accept aren’t a risk to your own.




NVIDIA’s DLSS 5 Demo Video Briefly Taken Down Because YouTube’s Takedown Process Sucks

from the the-italian-job dept

Last month, we discussed NVIDIA’s demo video for its forthcoming DLSS 5 technology and the controversy surrounding it. While I’m going to keep arguing that an injection of nuance is desperately needed in the reaction to AI tools and the like, our comments section largely disagreed with me on that post. That’s cool, that’s what this place is for, and I still love you all.

But this post is not about DLSS 5. Rather, it’s about the video itself and how it was briefly taken down over automated copyright claims thanks to an Italian news channel. Please note that the source material here was written while the video was still down, but it has since been restored.

And now, here we are in April, and NVIDIA’s DLSS 5 announcement trailer is no longer available to watch on YouTube on the company’s official GeForce channel. And no, it’s not because NVIDIA is responding to the feedback and retooling the technology for a re-reveal or re-announcement; it’s now blocked on “copyright grounds.”

A clear mistake, but also one that highlights the limitations of Google’s automated system for YouTube. Apparently, the Italian television channel La7 included footage from the DLSS 5 reveal in a recent broadcast and has since claimed copyright over it. From there, essentially every video on YouTube with DLSS 5 trailer footage was issued a copyright strike and said to be in violation, with the videos taken down with the following message: “Video unavailable: This video contains content from La7, who has blocked it in your country on copyright grounds.”

Yes, this was clearly a mistake. But it’s a mistake that I’m frankly tired of hearing about, all while Google does absolutely nothing to iterate on its copyright process and systems to mitigate such mistakes. The examples of this very thing are so legion as to be laughable. Whether due to error or malicious intent, a video that includes content from another video for the purposes of reporting and commentary gets claimed as the claimant’s own copyright, and that claim then results in the takedown of the original source material. It happens all the damned time.

This is almost certainly all automated, which means there are no human eyes looking for an error in the flagging of a copyright violation. It just gets tagged as such and taken down. And, no, the irony is not lost on me that we need human eyes to keep an automated copyright takedown on a video about AI from occurring.

What makes this alarming is that the video was taken down with seemingly no human interaction or input, as it’s clear that NVIDIA not only created DLSS 5, for better or worse, but also the trailer that has been a hot topic of discussion this year. We’re assuming this will be resolved fairly quickly. Still, it will be interesting to see whether YouTube responds to this case and claims that false copyright infringement notices like this are prevalent on the platform.

Google hasn’t been terribly interested in commenting on the plethora of cases like this in the past, so I strongly doubt it will now. Which is a damned shame, honestly, because the company really should be advocating for all of the users on its platform, if not especially those that are negatively impacted by this haphazard process.

But, for now, the video is back, so you can go hate-watch it again if you like.

Filed Under: copyright, dlss 5, geforce, takedowns, video games

Companies: la7, nvidia, youtube


Florida launches probe into OpenAI as company eyes massive IPO

In a video posted to X, Florida’s attorney general said his office is examining whether OpenAI’s data and artificial intelligence systems “could fall into the hands of America’s enemies, such as the Chinese Communist Party.”

ChatGPT rolls out new $100 Pro subscription to challenge Claude

OpenAI has rolled out a new Pro subscription that costs $100, bringing its pricing in line with Anthropic’s Claude, which also has a $100 tier in addition to its $200 Max monthly plan.

Until now, OpenAI has offered three subscription tiers.

First is Go, which costs approximately $8; second is Plus, at $20; and the final tier sits at $200, a jump of $180.

Anthropic, on the other hand, does not offer an $8 subscription, but it has a $100 tier that sits between its cheapest $20 plan and its $200 plan, and it works for the company because it caters to the coding audience.

OpenAI has realized that it needs to go after coders and enterprises, similar to Anthropic’s strategy.

The company’s answer is ChatGPT Pro, priced at $100 and designed for people who rely on AI to get high-stakes, complex work done.

After this change, OpenAI’s offering looks like the following:

  • Plus $20 – For lighter use. Try advanced capabilities like Codex and Deep Research for select projects throughout the week.
  • Pro $100 – Built for real projects. For those who use advanced tools and models throughout the week, with 5x higher limits than Plus (and 10x Codex usage vs. Plus for a limited time).
  • Pro $200 – For heavy lifting. Run your most demanding workflows continuously, even across parallel projects, with 20x higher limits than Plus.

All Pro plans include access to advanced features, including:

  • Pro models
  • Codex
  • Deep research
  • Image creation
  • Memory
  • File uploads

OpenAI says the Pro plan also includes unlimited access to GPT-5 and legacy models, but it’s not truly unlimited, because the typical Terms of Use policies still apply, including restrictions on account sharing.


Mythos autonomously exploited vulnerabilities that survived 27 years of human review. Security teams need a new detection playbook

A 27-year-old bug sat inside OpenBSD’s TCP stack while auditors reviewed the code, fuzzers ran against it, and the operating system earned its reputation as one of the most security-hardened platforms on earth. Two packets could crash any server running it. Finding that bug cost a single Anthropic discovery campaign approximately $20,000. The specific model run that surfaced the flaw cost under $50.

Anthropic’s Claude Mythos Preview found it. Autonomously. No human guided the discovery after the initial prompt.

The capability jump is not incremental

On Firefox 147 exploit writing, Mythos succeeded 181 times versus 2 for Claude Opus 4.6. A 90x improvement in a single generation. SWE-bench Pro: 77.8% versus 53.4%. CyberGym vulnerability reproduction: 83.1% versus 66.6%. Mythos saturated Anthropic’s Cybench CTF at 100%, forcing the red team to shift to real-world zero-day discovery as the only meaningful evaluation left. Then it surfaced thousands of zero-day vulnerabilities across every major operating system and every major browser, many one to two decades old. Anthropic engineers with no formal security training asked Mythos to find remote code execution vulnerabilities overnight and woke up to a complete, working exploit by morning, according to Anthropic’s red team assessment.

Anthropic assembled Project Glasswing, a 12-partner defensive coalition including CrowdStrike, Cisco, Palo Alto Networks, Microsoft, AWS, Apple, and the Linux Foundation, backed by $100 million in usage credits and $4 million in open-source grants. Over 40 additional organizations that build or maintain critical software infrastructure also received access. The partners have been running Mythos against their own infrastructure for weeks. Anthropic committed to a public findings report “within 90 days,” landing in early July 2026.

Advertisement

Security directors got the announcement. They didn’t get the playbook.

“I’ve been in this industry for 27 years,” Cisco SVP and Chief Security and Trust Officer Anthony Grieco told VentureBeat in an exclusive interview at RSAC 2026. “I have never been more optimistic for what we can do to change security because of the velocity. It’s also a little bit terrifying because we’re moving so quickly. It’s also terrifying because our adversaries have this capability as well, and so frankly, we must move this quickly.”

Security directors saw this story told fifteen different ways this week, including VentureBeat’s exclusive interview with Anthropic’s Newton Cheng. As one widely shared X post summarizing the Mythos findings noted, the model cracked cryptography libraries, broke into a production virtual machine monitor, and gave engineers with zero security training working exploits by morning. What that coverage left unanswered: Where does the detection ceiling sit in the methods they already run, and what should they change before July?

Seven vulnerability classes that show where every detection method hits its ceiling

  1. OpenBSD TCP SACK, 27 years old. Two crafted packets crash any server. SAST, fuzzers, and auditors missed a logic flaw requiring semantic reasoning about how TCP options interact under adversarial conditions. Campaign cost ~$20,000. Anthropic notes the $50 per-run figure reflects hindsight.

  2. FFmpeg H.264 codec, 16 years old. Fuzzers exercised the vulnerable code path 5 million times without triggering the flaw, according to Anthropic. Mythos caught it by reasoning about code semantics. Campaign cost ~$10,000.

  3. FreeBSD NFS remote code execution, CVE-2026-4747, 17 years old. Unauthenticated root from the internet, per Anthropic’s assessment and independent reproduction. Mythos built a 20-gadget ROP chain split across multiple packets. Fully autonomous.

  4. Linux kernel local privilege escalation. Mythos chained two to four low-severity vulnerabilities into full local privilege escalation via race conditions and KASLR bypasses. CSA’s Rich Mogull noted Mythos failed at remote kernel exploitation but succeeded locally. No automated tool chains vulnerabilities today.

  5. Browser zero-days across every major browser. Thousands identified. Some required human-model collaboration. In one case, Mythos chained four vulnerabilities into a JIT heap spray, escaping both the renderer and the OS sandboxes. Firefox 147: 181 working exploits versus two for Opus 4.6.

  6. Cryptography library vulnerabilities (TLS, AES-GCM, SSH). Implementation flaws enabling certificate forgery or decryption of encrypted communications, per Anthropic’s red team blog and Help Net Security. A critical Botan library certificate bypass was disclosed the same day as the Glasswing announcement. Bugs in the code that implements the math. Not attacks on the math itself.

  7. Virtual machine monitor guest-to-host escape. Guest-to-host memory corruption in a production VMM, the technology keeping cloud workloads from seeing each other’s data. Cloud security architectures assume workload isolation holds. This finding breaks that assumption.

Nicholas Carlini, in Anthropic’s launch briefing: “I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.”

VentureBeat’s prescriptive matrix

Vulnerability class: OS kernel logic (OpenBSD 27yr, Linux 2-4 chain)
Why current methods miss it: SAST lacks semantic reasoning. Fuzzers miss logic flaws. Pen testers time-boxed. Bounties scope-exclude kernel.
What Mythos does: Chains 2-4 low-severity findings into local priv-esc. ~$20K campaign.
Security director action: Add AI-assisted kernel review to pen test RFPs. Expand bounty scope. Request Glasswing findings from OS vendors before July. Re-score clustered findings by chainability.

Vulnerability class: Media codec (FFmpeg 16yr H.264)
Why current methods miss it: SAST unflagged. Fuzzers hit path 5M times, never triggered.
What Mythos does: Reasons about semantics beyond brute-force. ~$10K campaign.
Security director action: Inventory FFmpeg, libwebp, ImageMagick, libpng. Stop treating fuzz coverage as security proxy. Track Glasswing codec CVEs from July.

Vulnerability class: Network stack RCE (FreeBSD 17yr, CVE-2026-4747)
Why current methods miss it: DAST limited at protocol depth. Pen tests skip NFS.
What Mythos does: Full autonomous chain to unauthenticated root. 20-gadget ROP chain.
Security director action: Patch CVE-2026-4747 now. Inventory NFS/SMB/RPC services. Add protocol fuzzing to 2026 cycle.

Vulnerability class: Multi-vuln chaining (2-4 sequenced, local)
Why current methods miss it: No tool chains. Pen testers hours-limited. CVSS scores in isolation.
What Mythos does: Autonomous local chaining via race conditions + KASLR bypass.
Security director action: Require AI-assisted chaining in pen test methodology. Build chainability scoring. Budget AI red teams for 2026.

Vulnerability class: Browser zero-days (thousands, 181 Firefox exploits)
Why current methods miss it: Bounties + continuous fuzzing missed thousands. Some required human-model collaboration.
What Mythos does: 90x over Opus 4.6. Chained 4 vulns into JIT heap spray escaping renderer + OS sandbox.
Security director action: Shorten patch SLA to 72hr critical. Pre-stage pipeline for July cycle. Pressure vendors for Glasswing timelines.

Vulnerability class: Crypto libraries (TLS, AES-GCM, SSH, Botan bypass)
Why current methods miss it: SAST limited on crypto logic. Pen testers rarely audit crypto depth. Formal verification not standard.
What Mythos does: Found cert forgery + decryption flaws in battle-tested libraries.
Security director action: Audit all crypto library versions now. Track Glasswing crypto CVEs from July. Accelerate PQC migration.

Vulnerability class: VMM / hypervisor (guest-to-host memory corruption)
Why current methods miss it: Cloud security assumes isolation. Few pen tests target hypervisor. Bounties rarely scope VMM.
What Mythos does: Guest-to-host escape in production VMM.
Security director action: Inventory hypervisor/VMM versions. Request Glasswing findings from cloud providers. Reassess multi-tenant isolation assumptions.

Attackers are faster. Defenders are patching once a year.

The CrowdStrike 2026 Global Threat Report documents a 29-minute average eCrime breakout time, 65% faster than 2024, with an 89% year-over-year surge in AI-augmented attacks. CrowdStrike CTO Elia Zaitsev put the operational reality plainly in an exclusive interview with VentureBeat. “Adversaries leveraging agentic AI can perform those attacks at such a great speed that a traditional human process of look at alert, triage, investigate for 15 to 20 minutes, take an action an hour, a day, a week later, it’s insufficient,” Zaitsev said. A $20,000 Mythos discovery campaign that runs in hours replaces months of nation-state research effort.

CrowdStrike CEO George Kurtz reinforced that timeline pressure on LinkedIn the same day as the Glasswing announcement. “AI is creating the largest security demand driver since enterprises moved to the cloud,” Kurtz wrote. The regulatory clock compounds the operational one. The EU AI Act’s next enforcement phase takes effect August 2, 2026, imposing automated audit trails, cybersecurity requirements for every high-risk AI system, incident reporting obligations, and penalties up to 3% of global revenue. Security directors face a two-wave sequence: July’s Glasswing disclosure cycle, then August’s compliance deadline.

Mike Riemer, Field CISO at Ivanti and a 25-year US Air Force veteran who works closely with federal cybersecurity agencies, told VentureBeat what he is hearing from the government. “Threat actors are reverse engineering patches, and the speed at which they’re doing it has been enhanced greatly by AI,” Riemer said. “They’re able to reverse engineer a patch within 72 hours. So if I release a patch and a customer doesn’t patch within 72 hours of that release, they’re open to exploit.” Riemer was blunt about where that leaves the industry. “They are so far in front of us as defenders,” he said.

Grieco confirmed the other side of that collision at RSAC 2026. “If you talk to an operational team and many of our customers, they’re only patching once a year,” Grieco told VentureBeat. “And frankly, even in the best of circumstances, that is not fast enough.”

CSA’s Mogull makes the structural case that defenders hold the long-term advantage: fix a vulnerability once and every deployment benefits. But the transition period, when attackers reverse-engineer patches in 72 hours and defenders patch once a year, favors offense.

Mythos is not the only model finding these bugs. Researchers at AISLE, an AI cybersecurity startup, tested Anthropic’s showcase vulnerabilities on small, open-weights models and found that eight out of eight detected the FreeBSD exploit. AISLE says one model had only 3.6 billion parameters and costs 11 cents per million tokens, and that a 5.1-billion-parameter open model recovered the core analysis chain of the 27-year-old OpenBSD bug. AISLE’s conclusion: “The moat in AI cybersecurity is the system, not the model.” That makes the detection ceiling a structural problem, not a Mythos-specific one. Cheap models find the same bugs. The July timeline gets shorter, not longer.

Over 99% of the vulnerabilities Mythos has identified have not yet been patched, per Anthropic’s red team blog. The public Glasswing report lands in early July 2026. It will trigger a high-volume patch cycle across operating systems, browsers, cryptography libraries, and major infrastructure software. Security directors who have not expanded their patch pipeline, re-scoped their bug bounty programs, and built chainability scoring by then will absorb that wave cold. July is not a disclosure event. It is a patch tsunami.

What to tell the board

Every security director tells the board “we have scanned everything.” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the statement does not survive Mythos without a qualifier.

“What security leaders actually mean is: we have exhaustively scanned for what our tools know how to see,” Baer said in an exclusive interview with VentureBeat. “That’s a very different claim.”

Baer proposed reframing residual risk for boards around three tiers: known-knowns (vulnerability classes your stack reliably detects), known-unknowns (classes you know exist but your tools only partially cover, like stateful logic flaws and auth boundary confusion), and unknown-unknowns (vulnerabilities that emerge from composition, how safe components interact in unsafe ways). “This is where Mythos is landing,” Baer said.

The board-level statement Baer recommends: “We have high confidence in detecting discrete, known vulnerability classes. Our residual risk is concentrated in cross-function, multi-step, and compositional flaws that evade single-point scanners. We are actively investing in capabilities that raise that detection ceiling.”

On chainability, Baer was equally direct. “Chainability has to become a first-class scoring dimension,” she said. “CVSS was built to score atomic vulnerabilities. Mythos is exposing that risk is increasingly graph-shaped, not point-in-time.” Baer outlined three shifts security programs need to make: from severity scoring to exploitability pathways, from vulnerability lists to vulnerability graphs that model relationships across identity, data flow, and permissions, and from remediation SLAs to path disruption, where fixing any node that breaks the chain gets priority over fixing the highest individual CVSS.
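To make “path disruption” concrete, here is a minimal sketch of the idea (the attack paths, CVE labels, and CVSS scores are invented for illustration; this is not Enkrypt’s or any vendor’s tooling):

```python
# Hypothetical example of path-disruption prioritization. The paths, CVE
# labels, and CVSS scores below are invented; real graphs would come from
# mapping identity, data flow, and permissions across the environment.

# Each attack path is an ordered chain of findings an attacker must traverse.
attack_paths = [
    ["CVE-A", "CVE-B", "CVE-C"],  # e.g., foothold -> priv-esc -> data access
    ["CVE-A", "CVE-D"],
    ["CVE-A", "CVE-E"],
]

cvss = {"CVE-A": 5.3, "CVE-B": 4.0, "CVE-C": 9.8, "CVE-D": 6.1, "CVE-E": 3.1}

def paths_broken(finding: str) -> int:
    """How many complete attack paths does fixing this one finding sever?"""
    return sum(finding in path for path in attack_paths)

by_cvss = max(cvss, key=cvss.get)            # severity-first ordering
by_disruption = max(cvss, key=paths_broken)  # path-disruption ordering

print(by_cvss)        # CVE-C: the highest individual score (9.8)
print(by_disruption)  # CVE-A: severs all three paths despite a 5.3 score
```

Severity-first triage patches CVE-C; graph-shaped triage patches CVE-A first, because fixing that one node breaks every modeled path.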

“Mythos isn’t just finding missed bugs,” Baer said. “It’s invalidating the assumption that vulnerabilities are independent. Security programs that don’t adapt, from coverage thinking to interaction thinking, will keep reporting green dashboards while sitting on red attack paths.”

VentureBeat will update this story with additional operational details from Glasswing’s founding partners as interviews are completed.


A Mercury Rover Could Explore The Planet By Sticking To The Terminator

The planet Mercury in true color. (Credit: NASA)

With multiple rovers currently scurrying around on the surface of Mars to continue a decades-long legacy, it can be easy to forget sometimes that repeating this feat on other planets that aren’t Earth or Mars isn’t quite as straightforward. In the case of Earth’s twin – Venus – the surface conditions are too extreme to consider such a mission. Yet Mercury might be a plausible target for a rover, according to a study by [M. Murillo] and [P. G. Lucey], via Universe Today’s coverage.

The advantages of putting a rover’s wheels on a planet’s surface are obvious, as it allows for direct sampling of geological and other features, unlike an orbiting or passing space probe. Making this work on Mercury, which is in some ways a slightly larger version of Earth’s moon placed right next door to the Sun, is challenging to say the least.

With no atmosphere, it’s exposed to some of the worst that the Sun can throw at it, though it does have a magnetic field at 1.1% of Earth’s strength to take some of the edge off the ionizing radiation. This still leaves a rover to deal with very high ionizing radiation levels and extreme temperature swings that at the equator range between −173 °C and 427 °C, over a day/night cycle in which daylight and darkness each last about 88 Earth days. This compares to the constant mean temperature on Venus of 464 °C.

To deal with these extreme conditions, the researchers propose that a rover might be able to thrive if it sticks to the terminator, the moving boundary between day and night. To survive, the rover would need to gather enough solar power – if solar-powered – despite the Sun sitting very low in the sky. It would also need to keep pace with the terminator, moving at a minimum of 4.25 km/h, as being caught on either the day or night side of Mercury would mean certain demise. This would leave little time for the kind of casual exploration done on Mars, and would require a high level of autonomy akin to what is being pioneered today with the Martian rovers.
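As a rough sanity check on the scale involved (my own back-of-the-envelope numbers, not figures from the study): the terminator sweeps around the planet once per solar day, so its equatorial ground speed is just circumference divided by period.

```python
import math

# Back-of-the-envelope equatorial terminator speed; not from the study.
radius_km = 2439.7           # Mercury's mean radius
solar_day_hours = 176 * 24   # one full day/night cycle is ~176 Earth days

speed_kmh = 2 * math.pi * radius_km / solar_day_hours
print(f"{speed_kmh:.2f} km/h")  # ~3.63 km/h at the equator
```

That bare equatorial pace comes out a bit under the study’s 4.25 km/h figure, which presumably builds in margin so the rover can outrun the terminator rather than merely match it.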

Top image: the planet Mercury with its magnetic field. (Credit: A loose necktie, Wikimedia)
