News Publishers Are Now Blocking The Internet Archive, And We May All Regret It


from the our-digital-history dept

Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.

This is a mistake we’re going to regret for generations.

Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:

When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.

Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.


The Times has gone even further:

The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.

“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
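For context, the mechanism the Times is describing is the decades-old robots.txt convention: a plain text file served from a site’s root that tells crawlers which paths they may fetch. A hard block of this kind amounts to two lines. The sketch below is illustrative only, using the bot name from the reporting; it is not the Times’s actual file:

```
# robots.txt, served from the site root (illustrative, not the Times's file)
User-agent: archive.org_bot
Disallow: /
```

Note that robots.txt is purely advisory; it only stops crawlers that choose to honor it, which well-behaved archival bots do.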

I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.

But blocking the Internet Archive isn’t going to stop AI training. What it will do is ensure that significant chunks of our journalistic record and historical cultural context simply… disappear.


And that’s bad.

The Internet Archive is the most famous nonprofit digital library, and has been operating for nearly three decades. It isn’t some fly-by-night operation looking to profit off publisher content. It’s trying to preserve the historical record of the internet—which is way more fragile than most people comprehend. When websites disappear—and they disappear constantly—the Wayback Machine is often the only place that content still exists. Researchers, historians, journalists, and ordinary citizens rely on it to understand what actually happened, what was actually said, what the world actually looked like at a given moment.

In a digital era when few things end up printed on paper, the Internet Archive’s efforts to permanently preserve our digital culture are essential infrastructure for anyone who cares about historical memory.

And now we’re telling them they can’t preserve the work of our most trusted publications.


Think about what this could mean in practice. Future historians trying to understand 2025 will have access to archived versions of random blogs, sketchy content farms, and conspiracy sites—but not The New York Times. Not The Guardian. Not the publications that we consider the most reliable record of what’s happening in the world. We’re creating a historical record that’s systematically biased against quality journalism.

Yes, I’m sure some will argue that the NY Times and The Guardian will never go away. Tell that to the readers of the Rocky Mountain News, which published for nearly 150 years before shutting down in 2009, or to the 2,100+ newspapers that have closed since 2004. Institutions—even big, prominent, established ones—don’t necessarily last.

As one computer scientist quoted in the Nieman piece put it:

“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”

That’s exactly right. In our rush to punish AI companies, we’re destroying public goods that serve everyone.


The most frustrating bit of all of this: The Guardian admits they haven’t actually documented AI companies scraping their content through the Wayback Machine. This is purely precautionary and theoretical. They’re breaking historical preservation based on a hypothetical threat:

The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes.

And, of course, as one of the “good guys” of the internet, the Internet Archive is willing to do exactly what these publishers want. They’ve always been good about removing content or not scraping content that people don’t want in the archive. Sometimes to a fault. But you can never (legitimately) accuse them of malicious archiving (even if music labels and book publishers have).

Either way, we’re sacrificing the historical record not because of proven harm, but because publishers are worried about what might happen. That’s a hell of a tradeoff.

This isn’t even new, of course. Last year, Reddit announced it would block the Internet Archive from archiving its forums—decades of human conversation and cultural history—because Reddit wanted to monetize that content through AI licensing deals. The reasoning was the same: can’t let the Wayback Machine become a backdoor for AI companies to access content Reddit is now selling. But once you start going down that path, it leads to bad places.


The Nieman piece notes that, in the case of USA Today/Gannett, it appears that there was a company-wide decision to tell the Internet Archive to get lost:

In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.

Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.

Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”

A Gannett spokesperson told Nieman Lab that it was about “safeguarding our intellectual property,” but that’s nonsense. The whole point of libraries and archives is to preserve such content, and they’ve always preserved materials that were protected by copyright law. The claim that they have to be blocked to safeguard such content is both technologically and historically illiterate.
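Checks like the ones in the Nieman study are easy to reproduce. Python’s standard urllib.robotparser module can ask whether a named crawler is allowed to fetch a page; in the minimal sketch below, the bot names come from the reporting and the site list is purely illustrative:

```python
# Check which archive crawlers a site's robots.txt disallows.
# Bot names are from the Nieman reporting; the site list is illustrative.
from urllib.robotparser import RobotFileParser

BOTS = ["archive.org_bot", "ia_archiver-web.archive.org"]
SITES = ["https://www.desmoinesregister.com", "https://www.usatoday.com"]

for site in SITES:
    parser = RobotFileParser(site + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    for bot in BOTS:
        verdict = "allowed" if parser.can_fetch(bot, site + "/") else "disallowed"
        print(f"{site}: {bot} is {verdict}")
```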


And here’s the extra irony: blocking these crawlers may not even serve publishers’ long-term interests. As I noted in my earlier piece, as more search becomes AI-mediated (whether you like it or not), being absent from training datasets increasingly means being absent from results. It’s a bit crazy to think about how much effort publishers put into “search engine optimization” over the years, only to now block the crawlers that feed the systems a growing number of people are using for search. Publishers blocking archival crawlers aren’t just sacrificing the historical record—they may be making themselves invisible in the systems that increasingly determine how people discover content in the first place.

The Internet Archive’s founder, Brewster Kahle, has been trying to sound the alarm:

“If publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”

But that warning doesn’t seem to be getting through. The panic about AI has become so intense that people are willing to sacrifice core internet infrastructure to address it.

What makes this particularly frustrating is that the internet’s openness was never supposed to have asterisks. The fundamental promise wasn’t “publish something and it’s accessible to all, except for technologies we decide we don’t like.” It was just… open. You put something on the public web, people can access it. That simplicity is what made the web transformative.


Now we’re carving out exceptions based on who might access content and what they might do with it. And once you start making those exceptions, where do they end? If the Internet Archive can be blocked because AI companies might use it, what about research databases? What about accessibility tools that help visually impaired users? What about the next technology we haven’t invented yet?

This is a real concern. People say “oh well, blocking machines is different from blocking humans,” but that’s exactly why I mention assistive tech for the visually impaired. Machines accessing content are frequently tools that help humans—including me. I use an AI tool to help fact check my articles, and part of that process involves feeding it the source links. But increasingly, the tool tells me it can’t access those articles to verify whether my coverage accurately reflects them.

I don’t have a clean answer here. Publishers genuinely need to find sustainable business models, and watching their work get ingested by AI systems without compensation is a legitimate grievance—especially when you see how much traffic some of these (usually less scrupulous) crawlers dump on sites. But the solution can’t be to break the historical record of the internet. It can’t be to ensure that our most trusted sources of information are the ones that disappear from archives while the least trustworthy ones remain.

We need to find ways to address AI training concerns that don’t require us to abandon the principle of an open, preservable web. Because right now, we’re building a future where historians, researchers, and citizens can’t access the journalism that documented our era. And that’s not a tradeoff any of us should be comfortable with.


Filed Under: ai, archives, culture, libraries, scanning, scraping

Companies: internet archive, ny times, the guardian, usa today


Microsoft links Classic Outlook issue to email delivery problems


Microsoft is investigating a known issue that prevents some Classic Outlook users from sending emails via Outlook.com.

Affected users see warnings that their messages haven’t reached some intended recipients. The problem occurs most often when the Outlook.com account used to send email belongs to an Outlook profile linked to another Exchange account.

“This message could not be sent. Try sending the message again later or contact your network administrator,” the non-delivery report (NDR) error displayed when sending or replying to emails reads.

“You do not have the permission to send the message on behalf of the specified user. Error is [0x80070005-0x0004dc-0x000524].”


Microsoft added that another condition that may trigger these errors is that the sender’s account has an Exchange Online mail contact with the same SMTP address.

Classic Outlook non-delivery report (NDR) error (Microsoft)

While it investigates the issue and works on a fix, the Outlook team has shared several workarounds that may help affected customers temporarily mitigate it.

Microsoft recommends removing the M365 account Address Book so that the Outlook client does not check it when sending emails, or hiding the Outlook.com contact from the Microsoft 365 account’s Global Address List (GAL).

Other alternatives include creating a new classic Outlook profile that includes only the account receiving NDR errors, and using the New Outlook client or Outlook.com on the web to send email from the affected account.

Over the last two weeks, Microsoft fixed two other known issues, including one that caused Classic Outlook to crash when enabling the Microsoft Teams Meeting Add-in and another that triggered 0x800CCC0F and 0x80070057 errors when synchronizing Gmail and Yahoo accounts.


Microsoft is also investigating known bugs that cause “Can’t connect to the server” errors when creating groups if Exchange Web Services (EWS) is enabled for the tenant, and that make the mouse pointer disappear for some users in Classic Outlook, OneNote, and other Microsoft 365 apps.


Rapid Snow Melt-Off In American West Stuns Scientists


Scientists say extreme March heat caused an unusually rapid collapse of snowpack across the American West that’s leaving major basins at record or near-record lows. “This year is on a whole other level,” said Dr Russ Schumacher, a Colorado State University climatologist. “Seeing this year so far below any of the other years we have data for is very concerning.” The Guardian reports: […] The issue is extremely widespread. Data from a branch of the US Department of Agriculture (USDA), which logs averages based on levels between 1991 and 2020, shows states across the south-west and intermountain west with eye-popping lows. The Great Basin had only 16% of average on Monday and the lower Colorado region, which includes most of Arizona and parts of Nevada, was at 10%. The Rio Grande, which covers parts of New Mexico, Texas and Colorado, was at 8%. “This year has the potential of being way worse than any of the years we have analogues for in the past,” Schumacher said.

Even with near-normal precipitation across most of the west, every major river basin across the region was grappling with snow drought when March began, according to federal analysts. Roughly 91% of stations reported below-median snow water equivalent, according to the last federal snow drought update compiled on March 8. Water managers and climate experts had been hopeful for a March miracle — a strong cold storm that could set the region on the right track. Instead, a blistering heatwave unlike any recorded for this time of year baked the region and spurred a rapid melt-off. “March is often a big month for snowstorms,” Schumacher said. “Instead of getting snow we would normally expect we got this unprecedented, way-off-the-scale warmth.”

More than 1,500 monthly high temperature records were broken in March and hundreds more tied. The event was “likely among the most statistically anomalous extreme heat events ever observed in the American south-west,” climate scientist Daniel Swain said in an analysis posted this week. “Beyond the conspicuous ‘weirdness’ of it all,” Swain added, “the most consequential impact of our record-shattering March heat will likely be the decimation of the water year 2025-26 snowpack across nearly all of the American west.” Calling the toll left by the heat “nothing short of shocking,” Swain noted that California was tied for its worst mountain snowpack value on record. While the highest elevations are still coated in white, “lower slopes are now completely bare nearly statewide.”


If Elden Ring Were Turned Into a ’90s-Inspired Saturday Morning Cartoon, This Is What It Would Look Like

Studio 64 Bits worked tirelessly for four months to hand-draw every single frame, introducing Elden Ring to the wild world of ’90s Saturday morning television. The result feels like a bizarre parallel universe in which the game airs alongside Thundercats and He-Man.



Ranni is right there at the start, cradling a double-necked electric guitar slung lazily across her shoulders, and as she begins noodling on the opening chords, the camera sweeps across these blasted landscapes, full of familiar faces from both the base game and the Shadow of the Erdtree expansion. We see Malenia charging ahead with blades flashing, Blaidd standing tall and loyal by her side, and Messmer looming large as the true new threat. Meanwhile, Miquella and Radahn share a dramatic moment, and Melina, Rykard, Mohg, Varre, Midra, Placidusax, Astel, Maliketh, and the Elden Beast all flash across the screen in quick, splashy bursts. Each character receives the usual cartoon makeover, complete with bold outlines, vivid colors, and exaggerated posturing that transforms their conflicts into heroic showdowns rather than gloomy, terrible struggles.

The animation sticks to the technical limits of ’90s TV production, which means that the colors don’t deviate much from a fairly limited palette, the lines have just a hint of that creaky hand-drawn cel look to them, and the transitions snap along with the same crisp timing you’d see on shows from that era. The music builds to a precise pitch, culminating in a driving rock song that sounds like it might have come directly from the opening titles of Jayce and the Wheeled Warriors or Captain Planet. Every second creates the impression that an entire series could follow, with the Tarnished rushing across the continent to repair the Elden Ring and face off against these legendary monsters in one thrilling episode after the next.

After the main intro concludes, the film transitions to a brief commercial in which Elden Ring appears to have recently been released for the old SNES. A chipper narration promises additional dungeons, secrets, and a fate that is entirely in the player’s control. The tagline twists the old Nintendo slogan into ‘Now you’re playing with power, rune power’, and the spot concludes with a quick little bumper that parodies the DIC logo from countless old ’90s cartoons. It connects neatly to their last full-length SNES remake, making the entire package feel like one continuous block of faux Saturday morning programming.


I skipped Meta’s AI glasses, but they’ve finally fixed a fundamental problem for millions of others like me


Smart glasses have always had a basic problem for people like me. They looked cool in demos, sounded futuristic in press releases, and usually came with the same quiet catch. If you already wear glasses every day, you are expected to work around them. This meant adding prescription lenses later, settling for a fit that is not quite right, or treating the whole thing as a novelty instead of something you would actually wear throughout the day.

This is what makes Meta’s latest announcement more exciting. The company just unveiled its first prescription-optimized AI glasses, the Ray-Ban Meta Blayzer Optics (Gen 2) and Ray-Ban Meta Scriber Optics (Gen 2), and they are explicitly designed around people who rely on prescription eyewear all day.

Meta says they support nearly all prescriptions, start at $499 in the US, and will be available at optical retailers beginning April 14.

For me, that is the first time Meta’s glasses story has felt less like wearable hype and more like something I could actually live with.


Prescription wearers don’t have to do extra work

Billions of people around the world use glasses or contacts for vision correction, and Meta itself notes that many Ray-Ban Meta and Oakley owners already add prescription lenses to existing models. But “can be added later” is not the same thing as “built for you from the start.”

The new prescription-first push feels more thoughtful. Meta says that these new models were designed for all-day comfort and include features like overextension hinges, interchangeable nose pads, and optician-adjustable temple tips. These may sound like dry spec-sheet details, but if you actually wear glasses every day, they are exactly the kind of thing that decides whether something stays on your face for the next eight hours or gets thrown into a case after 20 minutes.

Balancing act between ‘gadget’ and ‘eyewear’

Meta is not just launching two new frame styles and calling it a day. It is trying to make AI glasses feel like a normal category of eyewear rather than a niche device for early adopters. These new prescription-optimized frames aren’t alone, as Meta also announced more frame and lens options across Ray-Ban Meta and Oakley Meta glasses.

There are also new software features: hands-free nutrition tracking, WhatsApp summaries and recall through Meta AI, and Neural Handwriting support expanding to iMessage. All of this makes these new glasses feel more natural for daily use. The tech itself is only half the story. The real breakthrough is when you don’t need to accommodate the hardware.

And if you already wear prescription glasses, that threshold is even higher. A smartwatch can be optional. Glasses are not.

This is the first Meta glasses move that feels genuinely practical

This is basically why I think these new Meta glasses matter more than they might seem to on paper. The usual wearable pitch is about features: AI tricks, cameras, and convenience. But for prescription wearers like me, the first question is whether I would actually want to wear them all day instead of my normal glasses.

And for a change, Meta seems to be answering that question directly.


Yes, the concerns don’t disappear: smart glasses still carry privacy baggage and a hefty price tag. They also haven’t proved that their AI features are useful often enough to justify becoming part of your daily routine. But this launch clears a much more fundamental barrier than people give it credit for.

And for someone who already owns prescription Wayfarers and knows how much difference proper eyewear fit makes, Meta’s new AI glasses suddenly feel a lot more attractive.


Every 3D Printable Film Camera, In One Place


For those of us who hack old cameras, the 3D printer has undoubtedly been a boon. High-precision, or at least consistently precise, lightproof enclosures can be easily made and reproduced for others. As a result there are quite a few printable cameras out there, and we’ve featured our share here. We didn’t realize just how many there are, though, until we saw the work of [Sebastian], who has gathered together every one he can find in a glorious catalog of homemade photographic construction.

As a snapshot of the world of homemade cameras it’s refreshing to see such a wide range of designs. There are pinholes aplenty, as well as cameras using lenses from scavenged point-and-shoots through 35mm SLRs, medium format, and even one using a Micro Four Thirds compact digital camera lens. For film there’s 35mm and 120 as well as large format, but we’re pleased to see a few instant cameras in there. Some of the models in the list are paid-for designs but most of them are free, so you probably won’t need any encouragement to make yourself a camera!

Unless we missed something, there are no movie cameras in the list. With 35mm and 16mm designs out there, we hope some of them eventually make it in.


ChatGPT comes to Apple CarPlay, but only if you are willing to talk to a robot

Published

on


  • ChatGPT arrives on Apple CarPlay update for iOS 26.4
  • Update adds support for “voice-based conversational apps”
  • Interaction is limited to voice prompts only

We reported on a big Apple update in February of this year with the release of the new iOS 26.4 public beta.

The headline news was the inclusion of third-party, voice-controlled AI chatbots on CarPlay for the first time, allowing drivers to make the most of AI assistants beyond those that come as part and parcel of many modern cars.


OpenClaw has 500,000 instances and no enterprise kill switch


“Your AI? It’s my AI now.” The line came from Etay Maor, VP of Threat Intelligence at Cato Networks, in an exclusive interview with VentureBeat at RSAC 2026 — and it describes exactly what happened to a U.K. CEO whose OpenClaw instance ended up for sale on BreachForums. Maor’s argument is that the industry handed AI agents the kind of autonomy it would never extend to a human employee, discarding zero trust, least privilege, and assume-breach in the process.

The proof arrived on BreachForums three weeks before Maor’s interview. On February 22, a threat actor using the handle “fluffyduck” posted a listing advertising root shell access to the CEO’s computer for $25,000 in Monero or Litecoin. The shell was not the selling point. The CEO’s OpenClaw AI personal assistant was. The buyer would get every conversation the CEO had with the AI, the company’s full production database, Telegram bot tokens, Trading 212 API keys, and personal details the CEO disclosed to the assistant about family and finances. The threat actor noted the CEO was actively interacting with OpenClaw in real time, making the listing a live intelligence feed rather than a static data dump.

Cato CTRL senior security researcher Vitaly Simonovich documented the listing on February 25. The CEO’s OpenClaw instance stored everything in plain-text Markdown files under ~/.openclaw/workspace/ with no encryption at rest. The threat actor didn’t need to exfiltrate anything; the CEO had already assembled it. When the security team discovered the breach, there was no native enterprise kill switch, no management console, and no way to inventory how many other instances were running across the organization.

OpenClaw runs locally with direct access to the host machine’s file system, network connections, browser sessions, and installed applications. The coverage to date has tracked its velocity, but what it hasn’t mapped is the threat surface. The four vendors who used RSAC 2026 to ship responses still haven’t produced the one control enterprises need most: a native kill switch.


The threat surface by the numbers

| Metric | Number | Source |
| --- | --- | --- |
| Internet-facing instances | ~500,000 (March 24 live check) | Etay Maor, Cato Networks (exclusive RSAC 2026 interview) |
| Exposed instances with security risks | 30,000+ observed during scan window | Bitsight |
| Exploitable via known RCE | 15,200 instances | SecurityScorecard |
| High-severity CVEs | 3 (highest CVSS: 8.8) | NVD (24763, 25157, 25253) |
| Malicious skills on ClawHub | 341 in Koi audit (335 from ClawHavoc); 824 by mid-Feb | Koi |
| ClawHub skills with critical flaws | 13.4% of 3,984 analyzed | Snyk |
| API tokens exposed (Moltbook) | 1.5 million | Wiz |

Maor ran a live Censys check during an exclusive VentureBeat interview at RSAC 2026. “The first week it came out, there were about 6,300 instances. Last week, I checked: 230,000 instances. Let’s check now… almost half a million. Almost doubled in one week,” Maor said. Three high-severity CVEs define the attack surface: CVE-2026-24763 (CVSS 8.8, command injection via Docker PATH handling), CVE-2026-25157 (CVSS 7.7, OS command injection), and CVE-2026-25253 (CVSS 8.8, token exfiltration to full gateway compromise). All three CVEs have been patched, but OpenClaw has no enterprise management plane, no centralized patching mechanism, and no fleet-wide kill switch. Individual administrators must update each instance manually, and most have not.

The defender-side telemetry is just as alarming. CrowdStrike’s Falcon sensors already detect more than 1,800 distinct AI applications across its customer fleet — from ChatGPT to Copilot to OpenClaw — generating around 160 million unique instances on enterprise endpoints. ClawHavoc, a malicious skill distributed through the ClawHub marketplace, became the primary case study in the OWASP Agentic Skills Top 10. CrowdStrike CEO George Kurtz flagged it in his RSAC 2026 keynote as the first major supply chain attack on an AI agent ecosystem.

AI agents got root access. Security got nothing.

Maor framed the visibility failure through the OODA loop (observe, orient, decide, act) during the RSAC 2026 interview. Most organizations are failing at the first step: security teams can’t see which AI tools are running on their networks, which means the productivity tools employees bring in quietly become shadow AI that attackers exploit. The BreachForums listing proved the end state. The CEO’s OpenClaw instance became a centralized intelligence hub with SSO sessions, credential stores, and communication history aggregated into one location. “The CEO’s assistant can be your assistant if you buy access to this computer,” Maor told VentureBeat. “It’s an assistant for the attacker.”

Ghost agents amplify the exposure. Organizations adopt AI tools, run a pilot, lose interest, and move on — leaving agents running with credentials intact. “We need an HR view of agents. Onboarding, monitoring, offboarding. If there’s no business justification? Removal,” Maor told VentureBeat. “We’re not left with any ghost agents on our network, because that’s already happening.”


Cisco moved toward an OpenClaw kill switch

Cisco President and Chief Product Officer Jeetu Patel framed the stakes during an exclusive VentureBeat interview at RSAC 2026. “I think of them more like teenagers. They’re supremely intelligent, but they have no fear of consequence,” Patel said of AI agents. “The difference between delegating and trusted delegating of tasks to an agent … one of them leads to bankruptcy. The other one leads to market dominance.”

Cisco launched three free, open-source security tools for OpenClaw at RSAC 2026. DefenseClaw packages Skills Scanner, MCP Scanner, AI BoM, and CodeGuard into a single open-source framework running inside NVIDIA’s OpenShell runtime, which NVIDIA launched at GTC the week before RSAC. “Every single time you actually activate an agent in an Open Shell container, you can now automatically instantiate all the security services that we have built through Defense Claw,” Patel told VentureBeat. AI Defense Explorer Edition is a free, self-serve version of Cisco’s algorithmic red-teaming engine, testing any AI model or agent for prompt injection and jailbreaks across more than 200 risk subcategories. The LLM Security Leaderboard ranks foundation models by adversarial resilience rather than performance benchmarks. Cisco also shipped Duo Agentic Identity to register agents as identity objects with time-bound permissions, Identity Intelligence to discover shadow agents through network monitoring, and the Agent Runtime SDK to embed policy enforcement at build time.

Palo Alto made agentic endpoints a security category of their own

Palo Alto Networks CEO Nikesh Arora characterized OpenClaw-class tools as creating a new supply chain running through unregulated, unsecured marketplaces during an exclusive March 18 pre-RSA briefing with VentureBeat. Koi found 341 malicious skills on ClawHub in its initial audit, with the total growing to 824 as the registry expanded. Snyk found 13.4% of analyzed skills contained critical security flaws. Palo Alto Networks built Prisma AIRS 3.0 around a new agentic registry that requires every agent to be logged before operating, with credential validation, MCP gateway traffic control, agent red-teaming, and runtime monitoring for memory poisoning. The pending Koi acquisition adds supply chain visibility specifically for agentic endpoints.

Cato CTRL delivered the adversarial proof

Cato Networks’ threat intelligence arm Cato CTRL presented two sessions at RSAC 2026. The 2026 Cato CTRL Threat Report, published separately, includes a proof-of-concept “Living Off AI” attack targeting Atlassian’s MCP and Jira Service Management. Maor’s research provides the independent adversarial validation that vendor product announcements cannot deliver on their own. The platform vendors are building governance for sanctioned agents. Cato CTRL documented what happens when the unsanctioned agent on the CEO’s laptop gets sold on the dark web.


Monday morning action list

Regardless of vendor stack, four controls apply immediately: bind OpenClaw to localhost only and block external port exposure, enforce application allowlisting through MDM to prevent unauthorized installations, rotate every credential on machines where OpenClaw has been running, and apply least-privilege access to any account an AI agent has touched.

  1. Discover the install base. CrowdStrike’s Falcon sensor, Cato’s SASE platform, and Cisco Identity Intelligence all detect shadow AI. For teams without premium tooling, query endpoints for the ~/.openclaw/ directory using native EDR or MDM file-search policies. If the enterprise has no endpoint visibility at all, run Shodan and Censys queries against corporate IP ranges. (A minimal scripted sweep is sketched after this list.)

  2. Patch or isolate. Check every discovered instance against CVE-2026-24763, CVE-2026-25157, and CVE-2026-25253. Instances that cannot be patched should be network-isolated. There is no fleet-wide patching mechanism.

  3. Audit skill installations. Review installed skills against Cisco’s Skills Scanner or the Snyk and Koi research. Any skill from an unverified source should be removed immediately.

  4. Enforce DLP and ZTNA controls. Cato’s ZTNA controls restrict unapproved AI applications. Cisco Secure Access SSE enforces policy on MCP tool calls. Palo Alto’s Prisma Access Browser controls data flow at the browser layer.

  5. Kill ghost agents. Build a registry of every AI agent running. Document business justification, human owner, credentials held, and systems accessed. Revoke credentials for agents with no justification. Repeat weekly.

  6. Deploy DefenseClaw for sanctioned use. Run OpenClaw inside NVIDIA’s OpenShell runtime with Cisco’s DefenseClaw to scan skills, verify MCP servers, and instrument runtime behavior automatically.

  7. Red-team before deploying. Use Cisco AI Defense Explorer Edition (free) or Palo Alto Networks’ agent red-teaming in Prisma AIRS 3.0. Test the workflow, not just the model.
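For step 1, teams without premium tooling can script the sweep themselves. This minimal sketch checks the local machine for the ~/.openclaw/ workspace directory named in the reporting and probes a list of hosts to see whether an instance answers from off-box; the port number is an assumed placeholder (no default is documented above) and the host list is illustrative:

```python
# Minimal discovery sweep for step 1 (a sketch, not a product).
# The ~/.openclaw/ path comes from the reporting above; the port is an
# assumed placeholder -- substitute whatever your instances actually bind.
import os
import socket

ASSUMED_PORT = 18789  # hypothetical; not documented in the article

def has_local_workspace() -> bool:
    """Does this endpoint carry an OpenClaw workspace directory?"""
    return os.path.isdir(os.path.expanduser("~/.openclaw"))

def reachable(host: str, port: int = ASSUMED_PORT, timeout: float = 2.0) -> bool:
    """Probe from a *different* machine: an instance bound to localhost
    only should refuse or time out this connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("local workspace present:", has_local_workspace())
    for host in ("10.0.0.12", "10.0.0.15"):  # illustrative corporate range
        print(host, "reachable" if reachable(host) else "not reachable")
```

A successful connection from off-box is exactly the exposure the bind-to-localhost control above is meant to eliminate.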

The OWASP Agentic Skills Top 10, published using ClawHavoc as its primary case study, provides a standards-grade framework for evaluating these risks. Four vendors shipped responses at RSAC 2026. None of them is a native enterprise kill switch for unsanctioned OpenClaw deployments. Until one exists, the Monday morning action list above is the closest thing to one.


Larger, More Spacious 2027 Kia Seltos Debuts With New Hybrid Variant


Kia’s combustion-powered Seltos has grown up and glowed up with more space and bigger tech inside.



Legora just hit $100 million in revenue. It took 18 months.


Eighteen months ago, Legora was a Stockholm startup with a handful of law-firm clients and roughly $1 million in annual recurring revenue. On Tuesday, the company told Business Insider that it has crossed $100 million in ARR, a milestone that in enterprise software typically takes the better part of a decade. Max Junestrand, Legora’s 26-year-old cofounder and chief executive, framed the number as a reflection of demand rather than salesmanship. “This is a reflection of how quickly our customers are pushing the industry forward,” he said in a statement. “They’re redefining how legal work gets done, and AI is becoming the core infrastructure for the profession.”

The claim, if verified independently, would place Legora among the fastest-growing software companies in European history and firmly establish it as the most serious challenger to Harvey, the San Francisco-based legal AI company that currently leads the market. Harvey, which was last valued at $11 billion after raising $200 million in late March, said it had crossed $200 million in ARR and now serves more than 100,000 lawyers across 1,300 organisations. Legora’s customer base has grown to more than 1,000 firms and legal teams, according to the company, up from around 800 at the time of its Series D financing in early March.

The revenue figure helps explain a valuation that, until now, looked difficult to justify on the numbers alone. Legora raised $550 million in a Series D round led by Accel on 10 March, with the round pricing the company at $5.55 billion. At the time, its publicly disclosed ARR was approximately $23 million, putting the valuation at a staggering 240 times revenue. If the company was already running closer to $100 million, the multiple drops to roughly 55 times, still aggressive but within the range investors have accepted for high-growth vertical AI businesses. Among the backers in that round were Benchmark, Bessemer Venture Partners, General Catalyst, ICONIQ, Redpoint Ventures, Menlo Ventures, Salesforce Ventures, Bain Capital, and Y Combinator, which backed Legora in its Winter 2024 batch. Total funding now stands at $816 million.
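The multiple arithmetic is easy to verify; a quick sketch using only the figures reported above:

```python
# Revenue multiples implied by the reported figures.
valuation = 5.55e9      # Series D price, USD
arr_at_round = 23e6     # publicly disclosed ARR at the time of the round
arr_claimed = 100e6     # the newly reported ARR

print(f"{valuation / arr_at_round:.0f}x")  # ~241x, the "roughly 240 times"
print(f"{valuation / arr_claimed:.1f}x")   # 55.5x, the "roughly 55 times"
```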

Junestrand’s biography reads like a case study in the argument that Europe should bet bigger on young founders. He was 23 when he started Legora with Sigge Labor, the company’s chief technology officer, and August Erséus, having previously competed in professional gaming, studied machine learning and business at KTH and the Stockholm School of Economics simultaneously, and worked at McKinsey. None of the three founders had practised law. The company grew out of Labor’s early pandemic-era prototype of software that could automate simpler legal tasks, though the state of large language models at the time limited what it could do. When the models improved, Legora launched.


The product now covers the full arc of legal work that firms have historically staffed with junior associates: tearing through data rooms during due diligence, comparing contracts clause by clause, drafting briefs, and running multi-document reviews. In November 2025, the company launched Portal, a platform designed to let law firms productise their expertise and deliver it to in-house legal teams through custom AI workflows and intelligent document sharing. Design partners on Portal include Linklaters, Cleary Gottlieb, Goodwin, Deloitte, and Bird & Bird, with general availability scheduled for the first quarter of 2026. On 12 March, days after closing the Series D, Legora acquired Walter AI, a Canadian startup building agentic legal workflows, integrating its nine-person team into Stockholm and establishing a Toronto office under Walter’s former chief customer officer.

Legora’s trajectory has been shaped by a legal industry that appears to have moved past the question of whether AI belongs in the practice of law. A Thomson Reuters survey published in January found that law-firm spending on technology and knowledge-management tools grew 9.7 and 10.5 per cent respectively in 2025, the fastest real growth the sector has recorded. Separate research suggests that 55 per cent of lawyers are already using AI in some form. The global legal AI market, estimated at between $2.7 billion and $5.6 billion depending on whose definition of “legal AI” you accept, is projected to grow at compound annual rates of 17 to 22 per cent through the end of the decade.


The competitive landscape is narrowing. Harvey and Legora have emerged as the two dominant platforms for large law firms, with most other entrants either acquired, consolidated, or relegated to niche applications. Harvey’s advantage is depth of penetration: its tools are embedded in the daily workflows of some of the world’s largest firms, including those in the Am Law 100 and Magic Circle. Legora’s advantage is breadth and speed. The company expanded from 250 firms in May 2025 to more than 1,000 in less than a year, growing its headcount from 40 to more than 400 and opening offices in London, New York, and Sydney alongside its Stockholm headquarters. It became the fastest Y Combinator company to reach unicorn status, hitting $1.8 billion in its Series C in October 2025, just 13 months after its YC batch.

That speed carries risks. Legora’s valuation has roughly tripled every five months since its Series B, a pace that leaves almost no margin for a revenue deceleration. The $5.55 billion price assumes the company will continue scaling at rates that would place it in the top fraction of enterprise software businesses globally. If the $100 million ARR figure is accurate and growth sustains through 2026, Legora’s investors will look prescient. If the legal market’s appetite for AI tools plateaus, or if firms begin consolidating around a single platform rather than running both Harvey and Legora in parallel, the arithmetic gets considerably harder.

The broader question is what the legal AI boom tells us about the acceleration of AI adoption across professional services. Law was supposed to be resistant to automation: high-stakes, relationship-driven, riddled with jurisdictional complexity, and governed by professional regulators who move slowly. Instead, it has become one of the fastest-adopting verticals in enterprise AI, driven partly by the economics of the billable hour, which makes the value of time savings immediately and precisely quantifiable. A tool that lets a junior associate complete a document review in two hours instead of ten does not merely improve productivity. It changes the unit economics of the firm.
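To see why that is a change in unit economics rather than a simple speedup, it helps to run the ten-hours-to-two example as numbers. The billable rate below is an assumed illustration, not a figure from the piece:

```python
# Back-of-the-envelope math for the document-review example above.
# The $400/hour billable rate is an assumed illustration.
rate = 400                        # USD per associate hour (assumed)
hours_before, hours_after = 10, 2

print("billed per review before:", rate * hours_before)  # 4000
print("billed per review after: ", rate * hours_after)   # 800
# Under pure hourly billing the firm bills 5x less per review, while the
# same associate can clear 5x as many reviews per week. The firm must
# either reprice the work or absorb the difference -- that repricing
# decision is the shift in unit economics, not a plain productivity gain.
print(40 // hours_before, "->", 40 // hours_after, "reviews per 40-hour week")
```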

The European deep-tech paradox, in which the continent produces world-class research but struggles to build world-scale companies, finds an unusual counter-example in Legora. A Swedish company, built by founders in their mid-twenties with no legal background, has raised more than $800 million, grown to $100 million in ARR in a year and a half, and is now competing head-to-head with a Silicon Valley rival that has the backing of Sequoia, GIC, and the OpenAI ecosystem. Whether Legora can sustain that trajectory, or whether the extraordinary growth rates of 2025 and early 2026 represent a peak rather than a baseline, will depend on something no language model can yet predict: whether the lawyers who adopted these tools in a rush of enthusiasm will still be paying for them in three years’ time.


For now, the numbers suggest they will. The legal profession, it turns out, has been waiting for someone to automate the parts of the job that nobody enjoyed doing in the first place. Legora and Harvey are both betting that the parts nobody enjoyed doing also happen to be the parts that generated most of the revenue. That tension, between efficiency and economics, between the promise of AI and the structures it disrupts, is the real story behind the $100 million milestone. The software works. The question is what the profession looks like once everyone is using it.


NASA Launches Artemis II Astronauts Around the Moon


NASA’s Artemis II mission has launched four astronauts around the moon and back, marking humanity’s first crewed lunar voyage in 53 years and the first test flight of NASA’s Orion capsule and Space Launch System (SLS) with people on board. Five minutes into the flight, Commander Reid Wiseman saw the team’s target: “We have a beautiful moonrise, we’re headed right at it,” he said from the capsule. The Associated Press reports: Artemis II set sail from the same Florida launch site that sent Apollo’s explorers to the moon so long ago. The handful still alive cheered this next generation’s grand adventure as the Space Launch System rocket thundered into the early evening sky, a nearly full moon beckoning some 248,000 miles (400,000 kilometers) away.

Artemis II commander Reid Wiseman led the charge into space with “Let’s go to the moon!” accompanied by pilot Victor Glover, Christina Koch and Canada’s Jeremy Hansen. It was the most diverse lunar crew ever with the first woman, person of color and non-U.S. citizen riding in NASA’s new Orion capsule.

Carrying three Americans and one Canadian, the 32-story rocket rose from NASA’s Kennedy Space Center where tens of thousands gathered to witness the dawn of this new era. Crowds also jammed the surrounding roads and beaches, reminiscent of the Apollo moonshots in the 1960s and ’70s. It is NASA’s biggest step yet toward establishing a permanent lunar presence. Visit NASA’s Artemis II Launch Day blog for the latest updates.

Developing…

