
Tech

Rule-Breaking Black Hole Growing At 13x the Cosmic ‘Speed Limit’ Challenges Theories


“A surprisingly ravenous black hole from the dawn of the universe is breaking two big rules,” reports Live Science. “It’s not only exceeding the ‘speed limit’ of black hole growth but also generating extreme X-ray and radio wave emissions — two features that are not predicted to coexist…”

“How is this rule-breaking behavior even possible? In a paper published Jan. 21 in The Astrophysical Journal, an international team of researchers observed ID830 in multiple wavelengths to find an answer….”


As a black hole attracts gas and dust, the material accumulates in a swirling accretion disk. Gravity pulls the material from the disk into the black hole, but the infalling material generates radiation pressure that pushes outward and prevents more stuff from falling in. As a result, black holes are muzzled by a self-regulating process called the Eddington limit… Its X-ray brightness suggests that ID830 is accreting mass at about 13 times the Eddington limit, due to a sudden burst of inflowing gas that may have occurred as ID830 shredded and engulfed a celestial body that wandered too close. “For a supermassive black hole (SMBH) as massive as ID830, this would require not a normal (main-sequence) star, but a more massive giant star or a huge gas cloud,” study co-author Sakiko Obuchi, an observational astronomer at Waseda University in Tokyo, told Live Science via email. Such super-Eddington phases may be incredibly brief, as “this transitional phase is expected to last for roughly 300 years,” Obuchi added.
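
For reference, the Eddington limit mentioned above has a standard textbook form (general physics background rather than a figure from the new paper): it is the luminosity at which outward radiation pressure on infalling ionized hydrogen balances the black hole's gravity,

\[ L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.3 \times 10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}, \]

so a source radiating at roughly 13 times this value for its mass is, by definition, accreting well above that self-regulating ceiling.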

ID830 also simultaneously displays radio and X-ray emissions. These two features are not expected to coexist, especially because super-Eddington accretion is thought to suppress such emissions. “This unexpected combination hints at physical mechanisms not yet fully captured by current models of extreme accretion and jet launching,” the researchers said in a statement. So while ID830 is launching massive radio jets, its X-ray emissions appear to originate from a structure called a corona, produced as intense magnetic fields from the accretion disk create a thin but turbulent billion-degree cloud of turbocharged particles. These particles orbit the black hole at nearly the speed of light, in what NASA calls “one of the most extreme physical environments in the universe.” Altogether, ID830’s rule-breaking behaviors suggest that it is in a rare transitional phase of excessive consumption — and excretion. This incredible feeding burst has energized both its jets and its corona, making ID830 shine brightly across multiple wavelengths as it spews out excess radiation.

Additionally, based on UV-brightness analysis, quasars like ID830 may be unexpectedly common, the researchers said. Models predict that only around 10% of quasars have spectacular radio jets, but these energetic objects could be significantly more abundant in the early universe than previously suggested. Most importantly, ID830 also shows how SMBHs can regulate galaxy growth in the early universe. As a black hole gobbles matter at super-Eddington rates, the energy from its resultant emissions can heat and disperse matter throughout the interstellar medium — the gas between stars — to suppress star formation. As a result, ancient SMBHs like ID830 may have grown massive at the expense of their host galaxies.

Tech

I’ve tested every iPhone since the iPhone 12, and Ceramic Shield 2 is the first iPhone glass I fully trust


Marketing is one thing, but reality is quite another. Like many of us, I won’t forget the claimed “durable” microtwill of FineWoven, the shaky initial launch of Apple Maps, or the infamous butterfly keyboard that was supposedly four times more stable. Remember the promise of AirPower? Of course you don’t.

It’s worth celebrating when the real-world experience does actually live up to the hype, then. And that’s the case with Apple’s Ceramic Shield 2, the tech giant’s latest and unquestionably greatest iPhone glass.

Tech

IBM settles its DEI lawsuit with the DOJ for $17 million


IBM has agreed to settle the US Department of Justice’s accusations that the company violated civil rights laws with its DEI practices. According to a press release from the DOJ, IBM will pay more than $17 million to resolve allegations of taking “race, color, national origin, or sex” into account when making employment decisions. This settlement is the latest development in a longstanding effort from the Trump administration to end DEI programs, which was kick-started by an executive order in early 2025.

IBM denied any wrongdoing and said the settlement wasn’t an admission of liability, while the US government said this conclusion wasn’t a concession that its claims weren’t well founded, according to the settlement agreement. According to the DOJ, IBM had violated the Civil Rights Act of 1964 with practices that included altering “interview criteria based on race or sex,” developing “race and sex demographic goals for business units,” using “a diversity modifier that tied bonus compensation to achieving demographic targets” and more.

An IBM spokesperson told Engadget in an email that the company “is pleased to have resolved this matter,” adding that “our workforce strategy is driven by a single principle: having the right people with the right skills that our clients depend on.”

According to Todd Blanche, the agency’s deputy attorney general, this action is one of the first resolutions to come out of the Civil Rights Fraud Initiative, which was launched in May 2025. IBM isn’t the only company to alter its policies: both T-Mobile and Meta agreed to end their DEI initiatives last year.

Tech

What’s Your Favorite Kind Of Hack?


Talking with [Tom Nardi] on the podcast this week, he mentioned his favorite kind of hack: the community-developed open-source firmware that can be flashed into a commercial product that has crappy firmware, thus saving it. The example, just for the record, is the CrossPoint open e-book reader firmware that turns a mediocre cheap e-book into something that you can do anything you want with. Very nice!

And that got me thinking about “kinds of hacks” in general. Do we have a classification scheme for the hacks that we see here on Hackaday? For instance, the obvious precursor to many of Tom’s favorite hacks is the breaking-into-the-locked-firmware hack, where a device that didn’t want you loading your own firmware on it is convinced to let you do so. Junk-hacking is probably also a category of its own, where instead of finding your prey on AliExpress, you find it on eBay, or in the alleyway. And the save-it-from-the-landfill repair and renovation hacks are close relatives.

The doing-too-much-with-too-little hacks are maybe my personal favorite. I just love to see when someone manages to get DOOM running in Linux on a computer made with only 8-pin microcontrollers. Because of the nature of the game, these often also include a handful of abusing-a-component-to-do-something-it’s-not-meant-to-do hacks. Heck, we even had a challenge for exactly those kinds of hacks.

Then there are fine-art hacks, where the aesthetic outcome is as important as the technical one, and game hacks, where fun is the end result.

What other broad categories of hacks are we missing? And which is your favorite?

Tech

AI And Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin


from the yikes dept

First, some of the good news: certain AI models—currently Anthropic’s Mythos, but surely others are well on their way if they haven’t already arrived—turn out to be really good at finding cybersecurity vulnerabilities. As Anthropic itself reported:

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.

That’s quite the tool, if it can help find vulnerabilities so that they can be patched.

But it’s also quite the tool to help find vulnerabilities so that they can be exploited. Like so many tools, including technological tools, whether they are good or bad depends entirely on how they are used. A hammer is a really helpful tool for building things, but it also smashes windows. And with this news, AI now has the capability for some really destructive uses.

To try to prevent them, Anthropic is working with some of the largest tech companies in the world, letting them use a preview of its model on their own software to help QA it and proactively patch vulnerabilities. As Casey Newton reports:

Anthropic announced Mythos alongside Project Glasswing, an initiative with more than 40 of the world’s biggest tech companies that will see Anthropic grant early access to the model to find and patch vulnerabilities across many of the world’s most important systems. Launch partners in the coalition include Apple, Google, Microsoft, Cisco and Broadcom.

They’ll be tasked with scanning and patching their own systems along with the critical open-source systems that modern digital infrastructure depends on. Anthropic is giving participants $100 million in usage credits for Mythos, and donating another $4 million to open-source security efforts.

This sounds like a great program. It also should be noted that the Mythos model is not consumer-grade AI; it takes expensive, dedicated infrastructure to run, which means that, at least for the moment, there’s not an imminent danger of it being misused. But trouble is nevertheless brewing, and someday it will be here, which raises certain questions, like:

(A) What about other AI models, which will inevitably be similarly powerful? What if they are produced by less ethical companies, who would have no compunction about rogue actors using their systems in destructive ways that Project Glasswing won’t have intercepted?

(B) And what about every single legacy technology system in use, which Project Glasswing is unlikely to be able to retroactively fix? Large, resourced companies may be able to weather the oncoming storm, but what about your local dentist’s office? Or a hospital? Municipal IT systems? Networked technology is everywhere, and these smaller businesses and institutions are likely to have older, unpatched technology and fewer resources to update and secure it, or to deal with the consequences of a hack, which can be devastating for the business or the people they serve.

On the other hand, there does seem to be one other bit of good news with this revelation: governments, including that of the United States, have often engaged in the dubious practice of hoarding zero-days, collecting information about vulnerabilities and keeping it secret so that they could exploit those vulnerabilities themselves against an adversary. For those unfamiliar, “zero-day” refers to a vulnerability that has yet to be disclosed, which is why it’s on “day zero,” before the first day of its being a known vulnerability that could be fixed.

Mythos’s capabilities would seem to obviate this strategy, because suddenly a stash of unknown vulnerabilities isn’t really going to stay secret when anyone using the model can find them. Mythos’s existence changes the balance of interests: the stronger national security play for the government is now to disclose any discovered vulnerability to the vendor as soon as possible, so that it can be patched and the nation’s systems made more secure. Arguably that was always the better national security play, but now there is no upside to keeping vulnerabilities secret, because it must be presumed that adversaries will be able to find and exploit them. They’ll have the tools.

With these AI models we’re going to need to presume that everyone is going to have the tools to know about every vulnerability. Up to now there has been at least the illusion of some security, because vulnerabilities couldn’t be exploited if no one knew about them, and finding vulnerabilities is hard. But now that it will be easy, the risk to the nation’s cybersecurity is greater than we have ever before contended with.

It is also not really a great harbinger that we know about Mythos because… a copy of the software got leaked. Only the software was leaked, not the models it uses to tune its “reasoning,” which means anyone now trying to build their own Mythos is still missing an important piece if they want to mimic its full capabilities, but they would have a lot to work with. Which is probably why Anthropic has been sending DMCA takedown notices to have the leaked software removed from the Internet.

But doing so raises a related issue: the role of copyright law when it comes to “vibe coding,” or having an AI system write the software rather than a programmer, just by instructing it on what to do. It’s especially important in light of the cybersecurity concerns software always raises (including vibe-coded software, since we have to trust that what’s produced does not have vulnerabilities). Copyright requires a human author, which raises the question: can software written by an AI be copyrightable? The answer would appear to be no, unless there was a great deal of creative effort on the part of a human being to instruct the AI or modify the output. But as Ed Lee chronicled, per Anthropic itself, even its own software (“pretty much 100%”) is being written by AI. And if that’s the case, then Anthropic has no business sending takedown notices for its software, because DMCA takedown notices are only for demanding the removal of copyrighted works, which, it would appear, Anthropic’s own code is not.

But maybe it’s better if software stops being subject to copyright. “Vibe coding” is becoming increasingly efficient, to the point that there is likely no need for copyright to incentivize software authorship. Instead, what public policy really needs to emphasize is that whatever software is produced is secure software. But in many ways copyright obstructs that goal: through its lengthy terms, which mean that while a copyright holder might no longer be maintaining its older software, no one else can maintain and patch it either without potentially infringing the software’s copyright; through its privileged secrecy (unusually for copyright, when it comes to software you don’t actually have to disclose all the code to register a copyright in it); and through other powers to lock out security research, like Section 1201 of the DMCA, when such efforts aren’t specifically supported by the developer (assuming the developer supports any security testing at all, since right now there aren’t necessarily incentives to make them care about it). Instead, public policy has given developers the ability, as with copyright, to escape oversight of the security of their software products, even as those products end up embedded in more and more of our lives.

It’s time to change that focus and get copyright out of the way of making software security our top policy priority.

And fast.

Filed Under: ai, claude, claude code, copyright, cybersecurity, mythos, project glasswing, vulnerabilities, zero days

Companies: anthropic

Tech

‘Crimson Desert’ Is a Cat Dad Simulator


Step into the shoes of the strongest, goodest boy in a game that is beautiful, baffling, and impossible to put down.

Tech

‘Moon joy!’ Artemis 2’s crew sets a distance record, documents lunar far side and heads back toward Earth


NASA’s Artemis 2 crew captured an iconic “Earthset” picture, showing Earth dipping beneath the lunar horizon. (NASA Photo)

Four astronauts today became the first humans to make a trip around the moon since the Apollo era — and added new pages to history books for the Artemis era.

The Artemis 2 crew reached a maximum distance of 252,756 miles from Earth, surpassing the distance record for human travel that was set during the Apollo 13 mission in 1970 by more than 4,000 miles.

NASA astronaut Christina Koch marked the occasion in a radio transmission from NASA’s Orion space capsule, named Integrity. “We most importantly choose this moment to challenge this generation and the next to make sure this record is not long-lived,” she said.

Koch made history as the first woman to travel beyond Earth orbit. One of her crewmates, NASA pilot Victor Glover, is the first Black astronaut to take a moon trip, and Canadian astronaut Jeremy Hansen is the first non-U.S. astronaut to do so.

The main purpose of the 10-day Artemis 2 mission is to serve as an initial crewed test flight for the Orion spacecraft, which traced a similar round-the-moon course during the uncrewed Artemis 1 mission in 2022. A successful Artemis 2 mission will prepare the way for a lunar lander test flight in Earth orbit as early as next year, potentially followed in 2028 by the first crewed moon landing since Apollo.

Seattle-area tech workers have played a role in getting Orion off the ground — and bringing it back home. L3Harris’ Aerojet Rocketdyne facility in Redmond worked on the spacecraft’s main engine and some of its thrusters, while Karman Space Systems’ Mukilteo facility provided mechanisms for Orion’s parachute deployment system and emergency hatch release system.

Artemis 2’s flight plan took advantage of orbital mechanics and a precisely timed firing of Orion’s main engine to send the astronauts on a free-return trip around the moon and back. The moon’s gravitational pull caused Orion to make a crucial U-turn around the far side, at a minimum distance of 4,067 miles from the lunar surface, and then slingshot back toward Earth.

A scientific swing around the moon

Scientists enlisted the astronauts to make up-close geological observations of the lunar surface during the flyby. Because the Artemis astronauts had a wider perspective on the moon than Apollo astronauts did five decades ago, they could see parts of the far side that had gone unseen directly by human eyes (although they’ve been well-documented by robotic probes).

NASA’s mission commander, Reid Wiseman, found it difficult to break away from moongazing to discuss his observations over a radio link with Kelsey Young, Artemis 2’s lunar science lead. “You’re pulling me away from the moon right now, so let’s go,” he told Young.

Back at Mission Control in Houston, Young took it all in good stride. “I have to say that ‘moon joy’ is the new term that’s already become our team’s new motto,” she told Wiseman.

The astronauts focused on features of scientific interest — including Orientale Basin and Hertzsprung Basin, two multi-ring impact craters that document different geological eras on the far side. They noted subtle shades of green and brown on the mostly gray moonscape. They also took a close look at the south polar region, which is the target for the Artemis program’s first crewed landing.

“The view of the south pole is quite amazing,” Glover said.

Koch marveled over the bright young craters that stood out on the lunar surface. “What it really looks like is a lampshade with tiny pinpricks, and the light is shining through,” she said. “They’re so bright compared to the rest of the moon.”

Emotional moments

Hansen told Mission Control that the astronauts were proposing new names for two craters they spotted on the surface below. “Integrity” was chosen as the name for one of the craters, in honor of the crew’s spacecraft. The other crater was dubbed “Carroll,” in honor of Wiseman’s wife, who died in 2020. After Hansen spelled out Carroll’s name, the astronauts came together to give Wiseman a comforting hug.

That wasn’t the flyby’s only emotional moment. Koch said she felt an “overwhelming sense of being moved by looking at the moon” and comparing it with Earth. Her description of the feeling was similar to astronauts’ accounts of a phenomenon known as the Overview Effect.

“Everything we need, the Earth provides,” she said, “and that is in itself something of a miracle, and one that you can’t truly know until you’ve had the perspective of the other.”

Just before Orion was due to pass behind the moon for a temporary blackout, Glover took the opportunity to refer to the Christian commandment to love your neighbor as yourself. “As we prepare to go out of radio communication, we’re still able to feel your love from Earth. And to all of you down there on Earth, and around Earth, we love you from the moon,” he said. “We will see you on the other side.”

About 40 minutes later, Orion emerged from the other side of the moon, and communication was restored. “It is so great to hear from Earth again,” Koch told Mission Control.

“We will explore, we will build ships, we will visit again, we will construct science outposts, we will drive rovers, we will do radio astronomy, we will found companies, we will bolster industry, we will inspire,” Koch said. “But ultimately, we will always choose Earth.”

Earthset, Earthrise and an eclipse

The behind-the-moon turnaround provided the crew with opportunities to capture images of Earthset and Earthrise — and marked the beginning of Orion’s homeward journey. Back at Mission Control, the support team turned their double-sided mission patches around to change the focus of the patch’s design from the moon to Earth.

But the workday wasn’t yet finished: For the grand finale, the astronauts donned protective glasses and watched as the sun passed behind the moon to create an unearthly kind of solar eclipse. As the sun sank beneath the lunar horizon, they captured pictures of the solar corona.

Glover reported that the corona created a bright halo “almost around the entire moon,” with the lunar surface illuminated ever so faintly by Earth’s reflected light. “It is quite an impressive sight,” he said. “Earthshine is very distinct, and it creates quite an impressive visual illusion. Wow, it’s amazing.”

The sun’s re-emergence from behind the moon marked the end of today’s seven-hour lunar observation session. “I can’t say enough how much science we’ve already learned, and how much inspiration you’ve provided to our entire team, the lunar science community and the entire world with what you were able to bring today,” Young told the crew. “You really brought the moon closer today, and we can’t thank you enough.”

High-resolution images and reports about the observations are due to be downlinked and distributed in the days ahead. Planetary scientists will be poring over the data long after Orion and its crew make their scheduled splashdown in the Pacific Ocean on Friday.

After the flyby, President Donald Trump congratulated the crew over an audio link and called them “modern-day pioneers.”

“Today you’ve made history and made all America really proud,” he said. “No astronaut has been to the moon since the days of the Apollo program. … At long last, America is back.”

Tech

Amazon sued by YouTubers for allegedly scraping their content to train AI video tool


Amazon’s headquarters campus in Seattle. (GeekWire Photo / Kurt Schlosser)

A trio of YouTube producers filed a class action lawsuit against Amazon alleging the tech giant illegally used content from the video platform to train and improve its Nova Reel generative AI model.

The suit, filed Friday in U.S. District Court for the Western District of Washington in Seattle, describes how Amazon allegedly used datasets earmarked only for academic use, circumvented YouTube’s copyright protection measures, and scraped video content. KING5 first reported on the suit.

“In a world where Defendant and others can circumvent technological protections to exploit copyrighted works without authorization with impunity, creators will be less likely to make their creations available on YouTube and other similar platforms, for fear of losing all control of them,” the plaintiffs state in their suit. “The world will be poorer for it.”

Plaintiffs are seeking damages, restitution and injunctive relief, claiming Amazon violated the Digital Millennium Copyright Act.

An Amazon spokesperson declined to comment on the matter, citing ongoing litigation.

Amazon released its Nova foundation models in 2024 via AWS Bedrock. The Nova Reel model can take text prompts and images and turn them into short videos, with features including watermarking.

According to the suit, Amazon deployed automated download tools paired with virtual machines that cycled through IP addresses to avoid being blocked, enabling the unauthorized extraction of data from millions of videos.

The named plaintiffs include:

  • Ted Entertainment, Inc. (TEI), a California-based media company owned by Ethan and Hila Klein, with more than 5,800 videos on YouTube and a combined total of more than 4 billion views. TEI channels include h3h3 Productions and H3 Podcast Highlights.
  • Matt Fisher, a California-based YouTuber who runs the MrShortGame Golf channel that provides instructional videos and has more than 500,000 subscribers.
  • Golfholics, a golf-focused YouTube channel with more than 130,000 subscribers and millions of views.

The suit argues the plaintiffs have no way to recover intellectual property already used to train Amazon’s models. “Once AI ingests content, that content is stored in its neural network and not capable of deletion or retraction,” it states.

Dozens of similar cases are working their way through courts nationwide. Among them: the New York Times’ lawsuit against OpenAI and Microsoft, a class action by authors against Microsoft, and a suit from musicians with YouTube content against Google.

Separate lawsuits against Anthropic and music-generation startup Suno over the alleged unauthorized use of books and music in AI training have since settled. A case brought by authors against Meta was dismissed.

Tech

How Quiet Failures Are Redefining AI Reliability


In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.

Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.

This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.

When Systems Fail Without Breaking

Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.

Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.

But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.

From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.

The Limits of Traditional Observability

One reason quiet failures are difficult to detect is that traditional monitoring measures the wrong signals. Operational dashboards track uptime, latency, and error rates: the core elements of modern observability. These metrics are well suited for transactional applications, where requests are processed independently and correctness can often be verified immediately.

Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return technically valid but contextually inappropriate information. A planning agent may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.

None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.

Why Autonomy Changes Failure

The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic and externally initiated by a user, scheduler, or external trigger.

Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.

In these systems, correctness depends less on whether any single component works, and more on coordination across time.

Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.

A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small mistakes can compound. A step that is locally reasonable can still push the system further off course.
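
As a back-of-the-envelope illustration of that compounding (the numbers here are invented for the example, not drawn from any measured system), consider a pipeline in which each step inherits the previous step's output and adds a small, consistent bias:

```python
# Hypothetical illustration: each step is "only" 1% off, yet because every
# decision feeds the next, the end-to-end result drifts far from the target.
target = 100.0
value = target
for step in range(50):
    value *= 1.01  # a small, locally reasonable per-step deviation

print(f"after 50 steps: {value:.1f} vs intended {target:.1f}")
# after 50 steps: 164.5 vs intended 100.0 -- roughly 64% drift from compounding
```

No single step would trip an error budget on its own; only the trajectory reveals the problem.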

Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.

The Missing Layer: Behavioral Control

When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.

Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.

Engineers in industrial domains have long relied on supervisory control systems. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.

Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: shifts in outputs, inconsistent handling of similar inputs, or changes in how multi-step tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.
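
As a minimal sketch of what tracking such a signal might look like (the signal choice, window size, and threshold below are assumptions for illustration, not part of any real monitoring product), one could watch the average age of the sources an assistant cites and flag drift when a rolling window creeps past a bound:

```python
from collections import deque
from datetime import datetime, timezone
from statistics import mean

class CitationFreshnessMonitor:
    """Flags behavioral drift when an assistant's cited sources grow stale.

    Illustrative sketch only: the signal, window size, and threshold are
    hypothetical choices, not values from any real system.
    """

    def __init__(self, window: int = 50, max_avg_age_days: float = 30.0):
        self.ages = deque(maxlen=window)          # rolling window of source ages (days)
        self.max_avg_age_days = max_avg_age_days  # acceptable average staleness

    def record(self, source_dates: list[datetime]) -> None:
        """Record the publication dates of the sources cited in one response."""
        now = datetime.now(timezone.utc)
        for published in source_dates:
            self.ages.append((now - published).days)

    def drifting(self) -> bool:
        """True once the rolling average source age exceeds the bound."""
        return bool(self.ages) and mean(self.ages) > self.max_avg_age_days

# Usage: feed the monitor after each response; alert when drifting() flips to True.
monitor = CitationFreshnessMonitor()
monitor.record([datetime(2025, 1, 10, tzinfo=timezone.utc)])
if monitor.drifting():
    print("behavioral drift: summaries are leaning on stale sources")
```

Every individual response can still look healthy; it is the trend across responses that carries the signal.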

Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.
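
A supervisory layer of this kind can be sketched as a gate that every proposed action passes through before execution; in the sketch below, the action shape, the "impact" score, and the thresholds are all illustrative assumptions rather than a reference design:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"        # execute as proposed
    ESCALATE = "escalate"  # route to a human or a stricter review path first
    BLOCK = "block"        # do not execute

@dataclass
class ProposedAction:
    kind: str           # e.g. "publish_summary" (hypothetical action type)
    impact: float       # estimated blast radius, 0.0 to 1.0 (invented score)
    confidence: float   # the agent's own confidence in the action, 0.0 to 1.0

class Supervisor:
    """Checks each proposed action against operating bounds before it runs.

    Illustrative sketch: the thresholds and the notion of an impact score
    are assumptions for this example, not part of any real framework.
    """

    def __init__(self, max_autonomous_impact: float = 0.3, min_confidence: float = 0.6):
        self.max_autonomous_impact = max_autonomous_impact
        self.min_confidence = min_confidence

    def review(self, action: ProposedAction) -> Verdict:
        if action.confidence < self.min_confidence:
            return Verdict.BLOCK       # too uncertain to act autonomously
        if action.impact > self.max_autonomous_impact:
            return Verdict.ESCALATE    # high impact: require confirmation
        return Verdict.ALLOW

# Usage: the agent proposes, the supervisor decides, and only allowed actions run.
supervisor = Supervisor()
action = ProposedAction(kind="publish_summary", impact=0.1, confidence=0.9)
if supervisor.review(action) is Verdict.ALLOW:
    pass  # hand the action to the executor
```

The point is architectural rather than algorithmic: the check sits outside the agent, so the agent's own reasoning cannot silently waive it.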

Together, these approaches turn reliability into an active process. Systems don’t just run, they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.

A Shift in Engineering Thinking

Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.

As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.

Tech

AI is making us faster, more productive, and worse at thinking


AI is everywhere, the pressure to adopt it is relentless, and the evidence that it’s making us smarter is getting thinner by the quarter.

On New Year’s Day 2026, a programmer named Steve Yegge launched an open-source platform called Gas Town. It lets users orchestrate swarms of AI coding agents simultaneously, assembling software at speeds no single human could match.

One of the first people to try it described the experience in terms that had nothing to do with productivity. “There’s really too much going on for you to comprehend reasonably,” he wrote. “I had a palpable sense of stress watching it.”

That sentence should be pinned to the wall of every executive suite, every venture capital boardroom, and every CES keynote stage where the word “intelligence” is thrown around like confetti. Because something strange is happening in the relationship between humans and the technology we keep calling intelligent.

The machines are getting faster. The humans interacting with them are getting more exhausted, more anxious, and, by several measures, less capable of the one thing intelligence was supposed to enhance: thinking clearly.

The pressure to adopt AI is now so pervasive that it has developed its own vocabulary of coercion.

You need to have AI.

You need to use AI.

You need to buy AI.

Your competitors are already using it.

Your children will fall behind without it.

The language does not come from engineers quietly solving problems. It comes from earnings calls, product launches, and LinkedIn posts written with the manic energy of people who have confused selling a product with describing reality.

In January 2026, at the World Economic Forum in Davos, Microsoft CEO Satya Nadella offered a phrase so revealing it deserves to be studied as a cultural artefact. He warned that AI risked losing its “social permission” to consume vast quantities of energy unless it started delivering tangible benefits to people’s lives.

The framing was striking: not a question of whether the technology works, but of whether the public can be kept on board while the industry figures out if it does. Nadella called AI a “cognitive amplifier,” offering “access to infinite minds.”

A month later, a Circana survey of US consumers found that 35 per cent of them did not want AI on their devices at all. The top reason was not confusion or technophobia. It was simpler than that. They said they did not need it.

The gap between the rhetoric and the evidence has become difficult to ignore. In March 2026, Goldman Sachs published an analysis of fourth-quarter earnings data and found, in the words of senior economist Ronnie Walker, “no meaningful relationship between productivity and AI adoption at the economy-wide level.”

The bank noted that a record 70 per cent of S&P 500 management teams had discussed AI on their earnings calls. Only 10 per cent had quantified its impact on specific use cases. One per cent had quantified its impact on earnings. Meanwhile, the five largest US technology companies were collectively expected to spend $667 billion on AI infrastructure in 2026, a 62 per cent increase over the previous year.

The National Bureau of Economic Research described the situation as a “productivity paradox”: perceived gains larger than measured ones.

There are real productivity improvements, but they are strikingly narrow. Goldman found a median gain of around 30 per cent in two specific areas: customer support and software development. Outside those domains, the evidence for broad improvement was, in the bank’s assessment, essentially absent. The promised revolution, for now, is happening in two rooms of a very large house.

What is happening in those rooms, though, is worth examining closely, because even where AI delivers, something else appears to be breaking.

In February 2026, researchers at UC Berkeley’s Haas School of Business published findings from an eight-month study embedded at a 200-person US technology firm. They found that AI did not reduce workloads. It intensified them. Tasks got faster, so expectations rose. Expectations rose, so the scope expanded. Scope expanded, so workers took on responsibilities that had previously belonged to other roles. Product managers began writing code. Researchers took on engineering work. Role boundaries dissolved because the tools made it feel possible, and then the exhaustion arrived.

I got tired just writing it.

The researchers identified a cycle they called “workload creep”: a gradual accumulation of tasks that goes unnoticed until cognitive fatigue degrades the quality of every decision.

Harvard Business Review gave the phenomenon a blunter name: “AI brain fry.” A Boston Consulting Group study of nearly 1,500 US workers found that 14 per cent of those using AI tools requiring significant oversight reported experiencing it, a distinct form of mental fog characterised by difficulty focusing, slower decision-making, and headaches after extended AI interaction.

The workers most affected were not the sceptics or the laggards. They were the enthusiastic adopters, the ones who had done exactly what every keynote told them to do.

The distribution of this exhaustion is not random. Sixty-two per cent of associates and 61 per cent of entry-level workers reported AI-related burnout, according to the Harvard Business Review study.

Among C-suite executives, the figure dropped to 38 per cent. The pattern is consistent with what anyone who has spent time in an organisation could have predicted: the people who make the strategic decisions about AI adoption are not the people who manage its outputs, clean up its errors, and switch between its tools eight hours a day.

All of this raises a question that the industry would prefer to skip over: what, exactly, do we mean when we use the word “intelligence”?

The term “artificial intelligence” was coined in 1956 at a workshop at Dartmouth College, and it has been doing a particular kind of ideological work ever since. By naming the field after a human quality, its founders made a move that was as much marketing as science. It invited us to see computation as cognition, pattern-matching as understanding, speed as wisdom.

Every time a product is described as “intelligent,” it borrows from the emotional weight of a word that, for most of human history, meant something like the capacity for judgement, reflection, and the ability to sit with uncertainty long enough to think clearly about it.

That is not what these systems do. What they do, often brilliantly, is statistical prediction at an extraordinary scale. They recognise patterns in data, generate plausible continuations of sequences, and optimise for objectives defined by their designers.

This is genuinely useful. It is not intelligence in the sense that any philosopher, psychologist, or, for that matter, any thoughtful person on the street would recognise. The slippage between the two meanings is not accidental. It is the engine of the entire commercial project.

Here is the deepest irony: in the rush to surround ourselves with artificial intelligence, we appear to be eroding the conditions under which actual human intelligence operates. Intelligence, the real kind, requires things that the AI economy is systematically destroying: uninterrupted attention, tolerance for ambiguity, the willingness to sit with a problem before reaching for a solution, and the cognitive space to doubt, reconsider, and change one’s mind.

Researchers at the London School of Economics argued in a February 2026 paper that the manufactured urgency around AI narrows the space for democratic deliberation itself, collapsing the future into a single inevitability and leaving no room for the slow, uncertain, distinctly human process of deciding together what we actually want.

There is something almost comic about the situation.

We have built machines that can process language, generate images, and write code at superhuman speed, and the people using them are reporting mental fog, difficulty concentrating, and a growing inability to think.

A senior engineering manager cited in the BCG study described juggling multiple AI tools to weigh technical decisions, generate drafts, and summarise information. The constant switching and verification created what he called “mental clutter.” His effort had shifted from solving the core problem to managing the tools.

Not everyone is compliant. A third of consumers have looked at the AI being pushed into their phones and laptops and said, plainly, no. Workers whose organisations value work-life balance report 28 per cent lower AI fatigue, according to BCG’s research, which suggests the problem is less about the technology itself than about the culture of compulsive adoption wrapped around it.

The question is not whether AI is useful. In certain applications, it clearly is. The question is whether the frenzy surrounding it, the relentless pressure to adopt, integrate, and accelerate, is making us smarter or just making us more compliant.

Sixty-seven billion dollars in quarterly investment. Record mentions on earnings calls. Entire conferences dedicated to the word “intelligence.”

And in a January survey, the most common reason a human being gave for not wanting any of it was four words long: I do not need it. That sentence, quiet and unimpressed, may be the most intelligent thing anyone has said about AI in years. The question now is whether we still have the attention span to hear it.

Tech

Nvidia-backed SiFive hits $3.65 billion valuation for open AI chips


SiFive, a company founded in 2015 by the UC Berkeley engineers who created an open source chip design, has landed a $400 million oversubscribed round that values the company at $3.65 billion.

This deal is interesting for a bunch of reasons. For one, SiFive’s designs are built on RISC-V, an open instruction set architecture, not on Intel’s x86 or Arm, the two major CPU architectures that currently feed Nvidia’s GPU-powered AI empire.

Also, Nvidia was an investor in this round, alongside a long list of VCs, private equity firms, and hedge funds. The round was led by Atreides Management, founded by Gavin Baker, a former bigwig investor at Fidelity. (Atreides was also an investor in Cerebras Systems’ $1 billion round.) Other investors in the round include Apollo Global Management, D1 Capital Partners, Point72 Turion, T. Rowe Price, Sutter Hill Ventures, and others.

SiFive’s business model is like Arm’s was in years gone by: it licenses its chip designs to customers who modify them for their own needs, and it does not sell the chips themselves. (In March, Arm changed its model when it launched the first chip it has manufactured itself, an AI chip developed with Meta, with customers including OpenAI, Cerebras, and Cloudflare.)

SiFive stands in rarefied air with chip designs that are open, not proprietary, as well as neutral, not reliant on specific customers. In fact, SiFive hasn’t raised since March 2022, Pitchbook estimates, when it brought in $175 million led by Coatue Management at a pre-money valuation of $2.33 billion. Intel Capital, Qualcomm Ventures, and Aramco Ventures were part of that round.

RISC-V has been, until recently, better known as an architecture for smaller uses, like embedded systems. But with this cash and Nvidia’s attention, SiFive is moving into CPUs for AI data centers. SiFive’s designs will work with Nvidia’s CUDA software and its NVLink Fusion, a rack server system that lets different CPUs plug into Nvidia’s “AI factory.”

In other words, as rivals Intel and AMD seek to compete with Nvidia’s GPU, Nvidia is backing an 11-year-old startup that can design CPUs on an open and completely alternate technology.
