
Tech

How Quiet Failures Are Redefining AI Reliability


In late-stage testing of a distributed AI platform, engineers sometimes encounter a perplexing situation: every monitoring dashboard reads “healthy,” yet users report that the system’s decisions are slowly becoming wrong.

Engineers are trained to recognize failure in familiar ways: a service crashes, a sensor stops responding, a constraint violation triggers a shutdown. Something breaks, and the system tells you. But a growing class of software failures looks very different. The system keeps running, logs appear normal, and monitoring dashboards stay green. Yet the system’s behavior quietly drifts away from what it was designed to do.

This pattern is becoming more common as autonomy spreads across software systems. Quiet failure is emerging as one of the defining engineering challenges of autonomous systems because correctness now depends on coordination, timing, and feedback across entire systems.

When Systems Fail Without Breaking

Consider a hypothetical enterprise AI assistant designed to summarize regulatory updates for financial analysts. The system retrieves documents from internal repositories, synthesizes them using a language model, and distributes summaries across internal channels.


Technically, everything works. The system retrieves valid documents, generates coherent summaries, and delivers them without issue.

But over time, something slips. Maybe an updated document repository isn’t added to the retrieval pipeline. The assistant keeps producing summaries that are coherent and internally consistent, but they’re increasingly based on obsolete information. Nothing crashes, no alerts fire, every component behaves as designed. The problem is that the overall result is wrong.

From the outside, the system looks operational. From the perspective of the organization relying on it, the system is quietly failing.

The Limits of Traditional Observability

One reason quiet failures are difficult to detect is that traditional systems measure the wrong signals. Operational dashboards track uptime, latency, and error rates, the core elements of modern observability. These metrics are well-suited for transactional applications where requests are processed independently, and correctness can often be verified immediately.


Autonomous systems behave differently. Many AI-driven systems operate through continuous reasoning loops, where each decision influences subsequent actions. Correctness emerges not from a single computation but from sequences of interactions across components and over time. A retrieval system may return information that is technically valid but contextually inappropriate. A planning agent may generate steps that are locally reasonable but globally unsafe. A distributed decision system may execute correct actions in the wrong order.

None of these conditions necessarily produces errors. From the perspective of conventional observability, the system appears healthy. From the perspective of its intended purpose, it may already be failing.

Why Autonomy Changes Failure

The deeper issue is architectural. Traditional software systems were built around discrete operations: a request arrives, the system processes it, and the result is returned. Control is episodic, initiated by a user, a scheduler, or an external trigger.

Autonomous systems change that structure. Instead of responding to individual requests, they observe, reason, and act continuously. AI agents maintain context across interactions. Infrastructure systems adjust resources in real time. Automated workflows trigger additional actions without human input.
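
To make the structural difference concrete, here is a minimal sketch of the continuous observe-reason-act loop (illustrative Python; the function names are placeholders, not any particular framework's API):

```python
from typing import Any, Callable

def control_loop(
    observe: Callable[[], Any],
    reason: Callable[[Any], Any],
    act: Callable[[Any], None],
    max_steps: int = 1000,
) -> None:
    """Unlike a request/response handler, this loop never 'returns a
    result': each action mutates the environment, and the next
    observation already reflects that mutation, so small errors feed
    back into future decisions instead of staying isolated."""
    for _ in range(max_steps):
        state = observe()        # read the (possibly self-modified) world
        decision = reason(state)
        act(decision)            # side effect: changes what observe() sees next
```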


In these systems, correctness depends less on whether any single component works, and more on coordination across time.

Distributed-systems engineers have long wrestled with issues of coordination. But this is coordination of a new kind. It’s no longer about things like keeping data consistent across services. It’s about ensuring that a stream of decisions—made by models, reasoning engines, planning algorithms, and tools, all operating with partial context—adds up to the right outcome.

A modern AI system may evaluate thousands of signals, generate candidate actions, and execute them across a distributed infrastructure. Each action changes the environment in which the next decision is made. Under these conditions, small mistakes can compound. A step that is locally reasonable can still push the system further off course.
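
A stylized way to see the compounding (a back-of-envelope model, not a measured figure): if each step multiplies the system's deviation from its intended trajectory by a factor of $(1+\varepsilon)$, then after $n$ steps the deviation grows to

$$(1+\varepsilon)^n \approx e^{\varepsilon n}, \qquad \text{e.g. } \varepsilon = 0.01,\ n = 100 \;\Rightarrow\; (1.01)^{100} \approx 2.7,$$

so a 1 percent per-step drift nearly triples the deviation within a hundred decisions, without any single step ever looking alarming.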

Engineers are beginning to confront what might be called behavioral reliability: whether an autonomous system’s actions remain aligned with its intended purpose over time.


The Missing Layer: Behavioral Control

When organizations encounter quiet failures, the initial instinct is to improve monitoring: deeper logs, better tracing, more analytics. Observability is essential, but it only shows that the behavior has already diverged—it doesn’t correct it.

Quiet failures require something different: the ability to shape system behavior while it is still unfolding. In other words, autonomous systems increasingly need control architectures, not just monitoring.

Engineers in industrial domains have long relied on supervisory control systems. These are software layers that continuously evaluate a system’s status and intervene when behavior drifts outside safe bounds. Aircraft flight-control systems, power-grid operations, and large manufacturing plants all rely on such supervisory loops. Software systems historically avoided them because most applications didn’t need them. Autonomous systems increasingly do.

Behavioral monitoring in AI systems focuses on whether actions remain aligned with intended purpose, not just whether components are functioning. Instead of relying only on metrics such as latency or error rates, engineers look for signs of behavior drift: shifts in outputs, inconsistent handling of similar inputs, or changes in how multi-step tasks are carried out. An AI assistant that begins citing outdated sources, or an automated system that takes corrective actions more often than expected, may signal that the system is no longer using the right information to make decisions. In practice, this means tracking outcomes and patterns of behavior over time.
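
As a minimal sketch of what this could look like in code (illustrative Python; the class and thresholds are assumptions, not a standard tool), one common pattern is to bucket outputs into categories, such as which repository a cited source came from, and compare a rolling window of recent outputs against a baseline using the population stability index (PSI):

```python
import math
from collections import Counter, deque

class DriftMonitor:
    """Flags behavior drift by comparing a rolling window of output
    labels against a baseline distribution using the population
    stability index (PSI)."""

    def __init__(self, baseline_labels, window_size=500, threshold=0.2):
        self.baseline = self._distribution(baseline_labels)
        self.window = deque(maxlen=window_size)
        self.threshold = threshold  # PSI > 0.2 is a common drift heuristic

    @staticmethod
    def _distribution(labels, floor=1e-6):
        counts = Counter(labels)
        total = sum(counts.values())
        return {k: max(v / total, floor) for k, v in counts.items()}

    def observe(self, label) -> float:
        """Record one output label (e.g., the repository a cited
        document came from) and return the current PSI."""
        self.window.append(label)
        current = self._distribution(self.window)
        psi = 0.0
        for k in set(self.baseline) | set(current):
            expected = self.baseline.get(k, 1e-6)
            actual = current.get(k, 1e-6)
            psi += (actual - expected) * math.log(actual / expected)
        return psi

    def is_drifting(self, label) -> bool:
        return self.observe(label) > self.threshold
```

For the hypothetical regulatory-summary assistant above, a climbing PSI over cited-source buckets would surface the stale-repository failure long before a latency or error-rate alert ever fired.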


Supervisory control builds on these signals by intervening while the system is running. A supervisory layer checks whether ongoing actions remain within acceptable bounds and can respond by delaying or blocking actions, limiting the system to safer operating modes, or routing decisions for review. In more advanced setups, it can adjust behavior in real time—for example, by restricting data access, tightening constraints on outputs, or requiring extra confirmation for high-impact actions.
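
A minimal sketch of such a layer (illustrative Python; the Action fields and thresholds are invented for the example, not drawn from any specific product):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    ESCALATE = auto()   # route to human review
    BLOCK = auto()

@dataclass
class Action:
    kind: str           # e.g., "publish_summary", "scale_cluster"
    impact: float       # estimated blast radius, 0..1
    confidence: float   # the agent's own confidence, 0..1

class Supervisor:
    """Sits between the agent and the world: every proposed action
    passes through check() before it is allowed to execute."""

    def __init__(self, max_impact=0.7, review_impact=0.4, min_confidence=0.5):
        self.max_impact = max_impact          # hard safety bound
        self.review_impact = review_impact    # above this, require a human
        self.min_confidence = min_confidence  # below this, don't act alone

    def check(self, action: Action) -> Verdict:
        if action.impact > self.max_impact:
            return Verdict.BLOCK      # outside acceptable operating bounds
        if action.confidence < self.min_confidence:
            return Verdict.ESCALATE   # too uncertain to act autonomously
        if action.impact > self.review_impact:
            return Verdict.ESCALATE   # high-impact: extra confirmation
        return Verdict.ALLOW

# Example: a high-impact action is held for review rather than executed.
verdict = Supervisor().check(Action("scale_cluster", impact=0.6, confidence=0.9))
print(verdict)  # Verdict.ESCALATE
```

In a real deployment the thresholds would themselves be tuned and audited; the point is the shape of the layer, an interception point between decision and effect, rather than the specific numbers.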

Together, these approaches turn reliability into an active process. Systems don’t just run, they are continuously checked and steered. Quiet failures may still occur, but they can be detected earlier and corrected while the system is operating.

A Shift in Engineering Thinking

Preventing quiet failures requires a shift in how engineers think about reliability: from ensuring components work correctly to ensuring system behavior stays aligned over time. Rather than assuming that correct behavior will emerge automatically from component design, engineers must increasingly treat behavior as something that needs active supervision.

As AI systems become more autonomous, this shift will likely spread across many domains of computing, including cloud infrastructure, robotics, and large-scale decision systems. The hardest engineering challenge may no longer be building systems that work, but ensuring that they continue to do the right thing over time.


Tech

The Gamblers Behind One of Chess’s Weirdest Unsolved Cheating Mysteries Have Been Unmasked


The modern era of cheating in chess began on a Thursday in July 1993, when a man with shoulder-length dreadlocks walked into the World Open tournament in Philadelphia and registered as John von Neumann. Both the hair and the name were phony.

The real von Neumann was a prominent mathematician and computer scientist who died in 1957. The fake von Neumann had a suspicious buzzing bulge in his pocket, fought a grandmaster to a draw, then fled before anyone could work out who he was.

A Boston Globe columnist called it “one of the strangest cheating episodes in chess history.” Chess.com recorded the “Von Neumann incident” as “the earliest known case of a potential computer cheater.”

This was decades before chess pros started getting expelled from tournaments for using smartphones, and a lifetime before the recent buzzing anal beads scandal. (Google it, but not at work.) It was years ahead of Garry Kasparov’s defeat by IBM’s Deep Blue, in an era when humans still imagined themselves to be smarter than machines. The identity of the man with the dreadlocks has remained one of the game’s most enduring mysteries. Until now.


I stumbled across the culprits while researching Lucky Devils, my new book about gamblers using science and technology to win at blackjack, poker, roulette and, on this occasion, chess. The following excerpt is based on my interviews with the gamblers involved and the tournament’s organizers and participants, as well as contemporaneous reports. Wherever possible, details have been independently verified.


Rob Reitzen packed light for the flight from Los Angeles to Philadelphia. He had to. His suitcase was stuffed with computer equipment, switches, wires, and buzzers. Sitting next to him on the plane was his best friend John Wayne, known to everyone in their crew of professional gamblers as “the Duke,” after his Hollywood namesake.

It was June 1993, just before the start of the World Open chess tournament, hosted by the City of Brotherly Love. Reitzen and Wayne both fancied themselves as players. It was how they’d first met. The Duke had posted a flyer, inviting challenges against “John Wayne, chess champion and arm-wrestling champion.” Reitzen had responded and found himself sitting opposite a Black ex-soldier with a megawatt smile, beginning a relationship built on competitive pranks.

Their real calling, though, was gambling—specifically the high-tech kind. Reitzen, a dyslexic savant with a mop of curly hair permanently concealed under a baseball cap, earned a living with wearable gadgets. He’d used an adapted Zilog Z80 microprocessor, about the size of a pack of cards, to process the shifting possibilities in blackjack, then developed a similar device to do the same in California’s poker rooms. For a while, Reitzen and Wayne used a system with a tiny camera inside a player’s belt buckle. Outside, in a truck with a communications dish bolted to the side, teammates could pause its footage, zoom in, and see the blackjack dealer’s hidden card for a split second as it was placed face down on the felt. Was it cheating? Probably. But the profits spoke louder than any ethical doubts they might have had.


Since such machines were banned in casinos, they had to be concealed carefully. Reitzen and his players sent information to the computers using toe switches built into their shoes and received instructions back from a vibrating box hidden in the crotch.

On arrival in Philadelphia, the Duke wired himself up, putting on a pair of headphones to secure his wig. He wore one of their blackjack processors, modified to communicate with Reitzen, who would station himself, out of sight, in front of a bank of monitors in their hotel room running his homemade chess software. The two friends looked at each other, Reitzen grinning. This was it—their shot at chess immortality.

On the entry form, Wayne wrote the name John von Neumann. “As in … the father of game theory?” a skeptical official asked. Wayne nodded. The official raised an eyebrow, then put Wayne into the draw.


Tech

Inertia moves to commercialize one of the world’s most elaborate science experiments


Fusion power startup Inertia Enterprises said on Tuesday that it has signed three agreements with the Lawrence Livermore National Laboratory (LLNL) to help bring the laser-based fusion reactor pioneered at the Californian lab to market.

The deals could give Inertia a boost over rival startups. The National Ignition Facility (NIF) at LLNL is so far the only experiment to prove that controlled fusion reactions could produce more power than they require to ignite. Inertia burst onto the scene in February with a $450 million Series A, making it one of the best capitalized startups in the industry.

Inertia and LLNL are working on a type of fusion called inertial confinement, which generates fusion conditions by compressing a fuel pellet using some external force, unlike other approaches that use powerful magnetic fields to confine plasmas until atoms fuse.

At the NIF, 192 laser beams are fired into a large vacuum chamber so that they converge on a small gold cylinder called a hohlraum, which contains a diamond-coated fuel pellet. When the lasers hit the hohlraum, it is vaporized and emits X-rays that blast the BB-sized fuel pellet inside. The diamond coating is transformed into a plasma, which expands to compress the deuterium-tritium fuel.


If that doesn’t sound exotic enough, keep in mind that all of this needs to happen several times per second if the technology is ever going to produce power for the grid.

The laser-driven reactor design was first theorized in the 1960s as a safer way to research thermonuclear weapons, though scientists also recognized its potential for power production. Construction on the NIF began in 1997, and it took 25 years to reach the breakeven point where a fusion reaction released more power than needed to kick it off.
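
For a sense of scale (using the widely reported figures from NIF's December 2022 ignition shot), the target gain at breakeven is

$$Q = \frac{E_{\text{fusion}}}{E_{\text{laser}}} \approx \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.5.$$

That ratio counts only the laser energy delivered to the target; firing the shot drew roughly 300 MJ from the grid, which is precisely the gap that more efficient lasers are meant to close.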

Several startups, including Inertia, Xcimer, Focused Energy and First Light, are attempting to turn the concept into commercial-scale power plants. Because NIF's lasers are based on old technology, the hope is that newer lasers will be more efficient, lowering the energy required to ignite each fusion reaction and making it easier for a commercial-scale power plant to turn a profit.


The agreements between Inertia and LLNL cover two strategic partnership projects and one cooperative research and development agreement. The organizations say they will work together to develop more advanced lasers and improve the fuel targets, with an eye toward better performance and manufacturing. Inertia is also licensing almost 200 patents from the lab.


It was perhaps inevitable that Inertia and LLNL would continue to work together. Annie Kritcher, the co-founder and chief scientist of Inertia, helped design the successful experiment at NIF that achieved scientific breakeven. The 2022 CHIPS and Science Act paved the way for her to found a company while retaining her position at LLNL.


Tech

Audio Reactive LED Strips Are Hard


Back in 2017, Hackaday featured an audio reactive LED strip project from [Scott Lawson] that has, over the years, become an extremely popular choice for the party animals among us. We're fascinated to read his retrospective analysis of the project, in which he looks at how it works in detail and explains why, for all its success, he's still not satisfied with it.

Sound-to-light systems have been a staple of electronics for many decades, and have progressed from simple volume-based flashers and sequencers to complex DSP-driven affairs like his project. It's particularly interesting to be reminded that the designer of such a system is interfacing with human perception rather than simply making a pretty light show; in that context, it matters more to understand how humans perceive sound and light than to simply dump a visualization to the LEDs. We get an introduction to some of the techniques used in speech recognition, since our brains are optimized to recognize activity in the speech frequency range, and to how humans register light intensity.
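
On the light side, for example, perceived brightness is roughly logarithmic in intensity, which is why LED drivers typically apply gamma correction rather than mapping audio energy linearly to PWM duty cycle. A minimal sketch (illustrative Python, not [Scott Lawson]'s actual code):

```python
def gamma_correct(level: float, gamma: float = 2.2) -> int:
    """Map a perceptual brightness level in [0, 1] to an 8-bit PWM value.
    Driving LEDs linearly makes the mid-range look washed out, because
    the eye compresses differences between high intensities."""
    level = min(max(level, 0.0), 1.0)
    return round((level ** gamma) * 255)

# A linear ramp in perceived brightness is strongly nonlinear in PWM terms:
for level in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(level, gamma_correct(level))  # 2, 12, 55, 135, 255
```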

For all this sophistication and the impressive results it produces, though, he's not ready to call it complete. Making it work well with all musical genres is a challenge, as is that elusive human foot-tapping factor. He talks about using a neural network trained on accelerometer data from people listening to music, which can only be described as an exciting prospect. We genuinely look forward to seeing future versions of this project. Meanwhile, if you're curious, you can head back to 2017 and see our original coverage.



Tech

What 2025 taught us about the importance of resilience in retail


When it rains, it pours.

That phrase defined retail cybersecurity in 2025. What began as isolated incidents quickly became prolonged, intense disruptions, exposing just how interconnected — and fragile — modern retail operations really are.

Nadir Izrael

CTO and Co-Founder at Armis.


Tech

14K+ jobs cut, with PMETs hit hard


Singapore recorded a notable rise in retrenchments in 2025, with overall job cuts climbing to 14,490 for the year—an increase from 12,930 retrenchments in 2024.

On Mar 20, the Ministry of Manpower (MOM) released its latest quarterly Labour Market Report, revealing updated figures on retrenchments and broader employment trends.

The data showed that the incidence of retrenchment rose to 6.3 per 1,000 employees, up from 5.9 per 1,000 the year before.
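
As a back-of-envelope check on what the incidence measure implies (derived from the report's own figures, assuming the rate is computed over all employees covered):

$$\text{incidence} = \frac{\text{retrenchments}}{\text{employees}} \times 1000 \;\Rightarrow\; \text{employees} \approx \frac{14{,}490}{6.3} \times 1000 \approx 2.3\ \text{million}.$$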

And within this broader trend, white-collar workers have experienced disproportionate pressure.


PMETs are increasingly on the chopping block

Professional, managerial, executive, and technician (PMET) retrenchments have shown a steeper incline compared to the overall workforce.

In 2025, the incidence of retrenchment for this group rose to 10.1 per 1,000 resident PMETs—above the pre-recessionary average—from 8.6 per 1,000 in 2024.

The layoffs have been largely concentrated in three sectors:

  • Financial Services: Banking and insurance firms have cut headcount as market conditions tighten
  • Information and Communications: Tech and telecom companies are restructuring in response to changing demands
  • Professional Services: Consulting, legal, and accounting firms have undergone notable workforce adjustments

For this specific labour market report, MOM examined trends in PMET roles to assess concerns around AI-driven job disruptions.

While the evidence does not point conclusively to broad-based displacement, there are signs of restructuring that warrant continued monitoring.


Total employment continued to grow

If you’re working in a PMET role, these trends may naturally raise concerns. However, the broader data suggest that this is not necessarily a contraction in demand for these jobs.

The same sectors that saw the highest PMET layoffs also had relatively high PMET job vacancies in Dec 2025, with a combined total of 14,600, up from 13,900 in the year-ago period.

Data on the number of job vacancies are rounded to the nearest 100.

According to MOM, the overlap between higher retrenchments and higher PMET vacancies in these sectors suggests ongoing restructuring and skills transition, where some jobs are being displaced as firms restructure, while hiring continues for others.

For the full year of 2025, total employment grew by 55,500, up from 44,500 in 2024. Of this, resident employment grew by 11,600, driven largely by financial services as well as health and social services.

In 2026, resident employment is expected to grow at a similar or slightly slower pace, said MOM.




Tech

Critical flaw in wolfSSL library enables forged certificate use



A critical vulnerability in the wolfSSL SSL/TLS library can weaken security via improper verification of the hash algorithm or its size when checking Elliptic Curve Digital Signature Algorithm (ECDSA) signatures.

Researchers warn that an attacker could exploit the issue to force a target device or application to accept forged certificates for malicious servers or connections.

wolfSSL is a lightweight TLS/SSL implementation written in C, designed for embedded systems, IoT devices, industrial control systems, routers, appliances, sensors, automotive systems, and even aerospace or military equipment.


According to the project’s website, wolfSSL is used in more than 5 billion applications and devices worldwide.

The vulnerability, discovered by Nicholas Carlini of Anthropic and tracked as CVE-2026-5194, is a cryptographic validation flaw that affects multiple signature algorithms in wolfSSL, allowing improperly weak digests to be accepted during certificate verification.


The issue impacts multiple algorithms, including ECDSA/ECC, DSA, ML-DSA, Ed25519, and Ed448. For builds that have both ECC and EdDSA or ML-DSA active, it is recommended to upgrade to the latest wolfSSL release.

CVE-2026-5194 was addressed in wolfSSL version 5.9.1, released on April 8.

“Missing hash/digest size and OID checks allow digests smaller than allowed when verifying ECDSA certificates, or smaller than is appropriate for the relevant key type, to be accepted by signature verification functions,” reads the security advisory.

“This could lead to reduced security of ECDSA certificate-based authentication if the public CA [certificate authority] key used is also known.”


According to Lukasz Olejnik, independent security researcher and consultant, exploiting CVE-2026-5194 could trick applications or devices using a vulnerable wolfSSL version to “accept a forged digital identity as genuine, trusting a malicious server, file, or connection it should have rejected.”

An attacker can exploit this weakness by supplying a forged certificate with a smaller digest than cryptographically appropriate, so the system accepts a signature that is easier to falsify or reproduce.
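
To illustrate the class of check that was missing (a simplified sketch in Python rather than wolfSSL's actual C code, with invented helper names; real implementations perform this inside the signature verification routine), the verifier should confirm that the digest's length matches the hash algorithm declared in the certificate before any signature math runs:

```python
import hashlib

# Expected digest sizes, in bytes, for common signature hash algorithms.
EXPECTED_DIGEST_SIZE = {
    "sha256": 32,
    "sha384": 48,
    "sha512": 64,
}

def check_digest(declared_hash: str, digest: bytes) -> bytes:
    """Reject digests whose size does not match the declared algorithm.
    Skipping this check (the essence of CVE-2026-5194) lets an attacker
    substitute a shorter digest that is far easier to forge."""
    expected = EXPECTED_DIGEST_SIZE.get(declared_hash)
    if expected is None:
        raise ValueError(f"unsupported hash algorithm: {declared_hash}")
    if len(digest) != expected:
        raise ValueError(
            f"digest is {len(digest)} bytes, expected {expected} "
            f"for {declared_hash}"
        )
    return digest

# A truncated SHA-256 digest must fail before any ECDSA math runs:
good = hashlib.sha256(b"certificate-to-be-signed").digest()
check_digest("sha256", good)          # passes
try:
    check_digest("sha256", good[:8])  # 8-byte digest: trivially forgeable
except ValueError as e:
    print("rejected:", e)
```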

While the vulnerability impacts the core signature verification routine, there may be prerequisites and deployment-specific conditions that limit exploitation.

System administrators managing environments that do not use upstream wolfSSL releases but instead rely on Linux distribution packages, vendor firmware, and embedded SDKs should seek downstream vendor advisories for better clarity.


For example, Red Hat’s advisory, which assigns the flaw a maximum severity rating, states that MariaDB is not affected because it uses OpenSSL rather than wolfSSL for cryptographic operations.

Organizations using wolfSSL are advised to review their deployments and apply the security updates promptly to ensure certificate validation remains secure.


Tech

Microsoft is officially killing its Outlook Lite app next month


Microsoft is shutting down Outlook Lite on May 26, the company confirmed to TechCrunch on Monday. Launched in 2022, Outlook Lite is a lightweight version of the regular Outlook app, designed for Android phones with limited storage and regions with slower internet connections. 

The app had already been scheduled for retirement — Microsoft announced last year that the app would be removed from the Google Play Store in October 2025. Now the company has confirmed that the app will lose functionality for existing users next month.

The news was first reported by Neowin.

“To continue enjoying a secure and feature-rich email experience, we recommend switching to Outlook Mobile,” Microsoft says on an Outlook Lite support page.


Outlook Lite users will be able to access their existing email, calendar items, and attachments by signing into Outlook Mobile. Users will also be directed to the Google Play Store to download the standard Outlook app.


Tech

5 Of The Best-Looking Mid-Engined Sports Cars We’ve Ever Seen






If you talk to sports car fans and enthusiasts, you’ll probably hear differing opinions about which is better: front-engined or mid-engined sports cars. Both have their own pros and cons and unique driving characteristics, and the layouts will also impart a certain look, most evident in the radical exterior change the Chevrolet Corvette underwent after its switch from a front-engine to a mid-engine platform.

With their engines mounted behind the cabin, mid-engined sports cars have a distinct profile that often brings to mind high-end, exotic European supercars. It's a look that survives even when scaled down to less expensive, more mainstream-oriented cars, resulting in some beautiful vehicles. So with that in mind, we've rounded up five of what we think are the best-looking, most handsomely designed mid-engined sports cars of the modern era.

Now, there are countless beautiful (and incredibly expensive) mid-engined exotics that we could include on a list like this, but we’ve left some of the obvious choices out to keep things interesting. Thus, no exotic Ferraris and Lamborghinis here. Even so, we have a diverse mix of machinery that includes mid-engined offerings from Japan, the United States, and Europe, with engines ranging from modest, low-power four-cylinders to fire-breathing V8s.


Toyota MR2 (second generation)

Along with the front-engined Mazda MX-5 Miata, the Toyota MR2 is one of the most popular lightweight sports cars to come out of Japan. The MR2 debuted in the early 1980s and was built over three distinct generations before being discontinued in the mid-2000s. Each generation of the MR2 has its own personality and following, but from a design and performance standpoint, it’s the second generation that represents the MR2 at its peak. 

The second-generation MR2 debuted in Japan in 1989 and was on sale around the world shortly after. With its wider profile, its flip-up headlights, and distinct side vents, the second-generation car had a more aggressive look that, to some eyes, looks a lot like a scaled-down version of the Ferrari 348. The second-gen MR2 also had the performance to back up its look. Thanks to the 3S-GTE engine under the hood, Car and Driver got the MR2 Turbo to 60 mph in just under six seconds — very impressive by early ’90s standards.


To do all this at a relatively affordable price — $20,000 or so for the Turbo in 1990 — shows just how powerful Toyota was during this time. Today, along with the Supra it shared showrooms with, the second-gen MR2 is considered one of the most desirable Toyotas of its time, and especially in turbocharged form, one of the most desirable Japanese sports cars of the ’90s.


2004-2006 Ford GT

When Ford designers started working on the automaker’s mid-2000s Ford GT revival, they had a pretty big head start in creating a beautiful car. That’s because the design of the Ford GT was heavily inspired by the attractive and legendary Ford GT40 race car of the 1960s. Still, retro design isn’t always as easy as it looks, and it doesn’t take much for retro cars to veer into the tacky, but the GT’s designers absolutely aced their mission.

The modern road-going Ford GT is a much larger car than the GT40 it’s based on, but the lines are so good that you don’t realize that until you actually see the two cars side by side. The GT’s attractiveness carries over to the interior as well, with a wonderfully executed modern interpretation of 1960s design. Of course, it also doesn’t hurt that it’s got a mid-mounted supercharged 5.4-liter V8 mated to a manual transmission. 

Because the initial design was executed so well, the 2000s Ford GT has never felt dated in the way other cars from its era might. Design-wise, it almost feels like a remastered car from the ’60s rather than a product of the 2000s. All of these are reasons why, despite only being a little over 20 years old, the value of the mid-2000s Ford GT has climbed tremendously, with the car now becoming a highly desirable modern classic in its own right.


Lotus Elise

A sports car’s appealing design need not be tied to its physical size or amount of horsepower. Case in point: the Lotus Elise. The Elise is considered one of the purest sports cars of the modern era, with a platform and design that stretches back to the mid-1990s. While some could argue that the Elise isn’t a traditionally beautiful sports car, much of the Elise’s beauty comes from its focus on simplicity. The Elise evolved significantly between its mid-’90s debut and the end of its production run in 2021, but the car never strayed from its mission of delivering lightness and response over all else. 

The later variants of the Elise sold in North America use modestly powered Toyota four-cylinder engines, with the Elise’s light weight meaning it didn’t need massive amounts of horsepower to offer a fast and highly enjoyable sports car experience — part of why drivers love this car. Design-wise, the Elise is all about compact minimalism, and its svelte body lines and distinct round tail lights helped give the Elise its signature look.

Its attractive looks and go-kart-like handling are just a couple of the reasons why both the Elise and its closely-related counterpart, the Lotus Exige, have emerged as genuine modern classics. With its focus only on the essentials, the Elise is the antidote to the high-horsepower, overweight, and often overstyled modern performance car.


Alpine A110

Like the retro-styled Ford GT, the Alpine A110 is a modern, mid-engined sports car that might technically be cheating with its good looks. That's because, like the Ford, the A110 is a modern reinterpretation of an iconic 1960s design — and one that happens to be done very well.

The modern Alpine A110 (which is built by Renault) debuted in the late 2010s to wide acclaim as a rival to the Porsche Cayman. Boasting a mid-mounted turbocharged four-cylinder engine and a low curb weight, the A110 took its design inspiration from the original, rear-engined Alpine A110 of the ’60s and ’70s. Among the styling traits that carried over to the new A110 are the original’s quad front headlights and wrap-around rear window.


To this point, the biggest problem with the A110 is that, like other French models, it's not offered in North America. In fact, it might just be the coolest modern performance car that's not currently sold here. There have been rumors and serious speculation that the A110 will eventually make its way to the United States, although we don't yet know whether it will arrive as a gasoline model or as a next-generation electric Alpine sports car.


Honda/Acura NSX

Sometimes a sports car is a hit from the moment it debuts; other times, it ages nicely and becomes a favorite for a new generation of enthusiasts. In the case of the highly unique Honda (or Acura) NSX, it’s both. When the NSX first debuted in 1989, the car was a game-changer. It wasn’t just an impressive Japanese sports car; instead, it was a bona fide, homegrown Japanese exotic laced with Honda’s racing DNA.

Thanks to design choices like an all-aluminum construction and a mid-mounted, naturally aspirated VTEC V6 engine, the NSX had the performance and feel of a Ferrari — but in a more affordable and more reliable package that could be serviced at your local Honda or Acura dealer. In comparison tests, it edged out its more established performance car competitors. Design-wise, the original NSX was somewhat restrained, but its clean lines have aged extremely well, making it a favorite even among those born too late to experience its original run. 

When new, the NSX had a relatively affordable price tag for what it delivered, but values have climbed substantially in recent years, with certain examples crossing the $300,000 mark at auction. While many subsequent Japanese sports cars have eclipsed the original NSX’s performance benchmarks, its aura is still unmatched.


Tech

Meta is building an AI version of Mark Zuckerberg


The photorealistic digital character is trained on Zuckerberg’s mannerisms, tone, and his own thinking on company strategy. He is personally involved in testing it. The effort, described by four people familiar with the matter, is separate from a ‘CEO agent’ that handles tasks for Zuckerberg directly.


Meta is building a photorealistic, AI-powered version of Mark Zuckerberg that can interact with employees in his place, the Financial Times reported on Monday, citing four people familiar with the matter.

The character is being developed by Meta's Superintelligence Labs and is trained on Zuckerberg's mannerisms, tone, and publicly available statements, as well as his own thinking on company strategy, so that employees, in the words of one person familiar with the project, 'might feel more connected to the founder through interactions with it.'

Zuckerberg is personally involved in training and testing the animated version of himself.


The effort is at an early stage and is separate from a different project, first reported by the Wall Street Journal, in which Meta is building a ‘CEO agent’ designed to help Zuckerberg himself retrieve information faster, a tool that assists him rather than stands in for him.


The AI character project is part of a broader push within Meta’s Superintelligence Labs to develop lifelike, AI-driven digital figures capable of real-time conversation. The technical challenge is substantial: achieving realism and preventing perceptible delays in conversation requires enormous computing power.

The project reflects a significant escalation of Zuckerberg’s own involvement in Meta’s AI work. According to people familiar with the matter, he has been spending five to ten hours a week writing code on various AI projects and attending technical engineering review sessions, an unusual level of hands-on engagement for a CEO running a $1.6 trillion company.

He has committed publicly to developing what he calls ‘personal superintelligence’ as Meta works to close the gap with OpenAI and Google. On a January earnings call, he said Meta was ‘elevating individual contributors and flattening teams’ through AI-native tooling.

Meta has a history with AI characters. In September 2023 it launched a range of celebrity-based chatbots, among them personas modelled on Snoop Dogg, Tom Brady, Kendall Jenner, and Naomi Osaka, all of whom licensed their likenesses, but these were discontinued in the summer of 2024 after failing to gain meaningful traction.


Meta then opened an AI Studio allowing users and creators to build their own AI characters, but ran into controversy when users began generating sexually explicit personas. Since January, Meta has restricted teenager access to AI characters. Zuckerberg’s interest in the format was reportedly sharpened by the success of AI companion startup Character.AI, particularly with younger users.

Meta is not the only company exploring AI versions of its leadership. Uber CEO Dara Khosrowshahi said during a podcast interview earlier this year that his employees had built an AI clone of him.

But the Zuckerberg project has a different scale and institutional purpose: it is being designed as a mechanism for a $1.6 trillion company’s 79,000 employees to feel a sense of connection to a founder who is, by any measure, difficult to reach.


Tech

Bremont Is Sending a Watch to the Moon’s Surface


A multifaceted decahedral black ceramic bezel and sandwich-style three-piece case—a reworking of Bremont’s signature Trip-Tick construction—house a chronometer-rated automatic chronograph movement made by Sellita, with a 62-hour power reserve.

The watch will be a passenger aboard the FLIP rover, due to launch as part of Astrobotic’s Griffin Mission One (Griffin-1), expected to land at the lunar south pole at some point in the second half of this year.

It’s a one-way mission: The rover will remain permanently on the lunar surface, with the watch ticking away as it roams the landscape. FLIP’s objectives include reaching elevated positions on the lunar terrain, gathering data on lunar dust accumulation, testing dust-mitigation coatings, and surviving a two-week lunar night in hibernation (which would be a first for a US rover).

In terms of serious timekeeping data for Bremont, the mission is frankly symbolic. The watch will be positioned vertically in a specially designed housing within the FLIP’s chassis, between its front wheels. Only the watch head, weighing 107 grams, is included, glued in place using a specialist composite, its face visible to FLIP’s HD cameras. But the hibernatory periods will mean the watch (whose mechanical movement is driven in normal circumstances by the motion of the wearer’s arm) will stop running once its 62-hour power reserve runs down.
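
The mismatch between the reserve and the night is stark (simple arithmetic from the figures above):

$$14\ \text{days} \times 24\ \tfrac{\text{h}}{\text{day}} = 336\ \text{h} \gg 62\ \text{h},$$

so the movement will sit dormant for most of each hibernation period.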


When the FLIP is on the move again, its motion should—in theory—jolt the mechanism into action once more. Despite the gravitational pull that’s a sixth of the Earth’s, the acceleration, pitches, and tilts of the rover should swing the winding rotor, if with less torque and efficiency than on Earth.
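
The physics behind that caveat is first-order simple: the gravitational torque available to swing the winding rotor scales directly with local gravity (a rough estimate that ignores the rover's own accelerations, which add back some winding energy):

$$\tau = m\,g\,r, \qquad g_{\text{moon}} \approx \tfrac{1}{6}\,g_{\text{earth}} \;\Rightarrow\; \tau_{\text{moon}} \approx \tfrac{1}{6}\,\tau_{\text{earth}},$$

where $m$ is the rotor's mass and $r$ the distance from its pivot to its center of mass.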

“My guess is that the watch will function from time to time, but for short periods,” Cerrato says. “We will learn along the way. But that’s what is exciting—it projects us into a thinking process that is absolutely out of the box. Just the fact of having it there is inspiring.” However, there is little doubt that Bremont will, just like other brands with any ties to the cosmos, mine its new space connection for all it is worth.

FLIP itself, which weighs just 1,058 pounds and carries a mix of commercial and government payloads, four HD cameras, and a deployable solar array, is fundamentally a technology demonstrator for Flexible Logistics and Exploration (FLEX), Astrolab's much larger SUV-sized rover destined to support NASA's Artemis program. Astrolab developed the FLIP from scratch after the VIPER, NASA's equivalent vehicle for which the Griffin-1 mission was contracted, was put on pause in 2024, leaving Astrobotic seeking a stand-in in short order. Astrolab, which signed the contract within a month of hearing about the opportunity in the fall of 2024, took the FLIP from blank sheet to finished rover in roughly a year.

Its standout feature is its hyper-deformable wheels, minutely structured from silicone, composite, and stainless steel, which create a soft, enlarged contact surface with the terrain. “It’s like if you’re off-roading in a Jeep or Land Rover where you let some air out of the tires to go softer and spread the load over a larger area,” explains Astrolab’s founder, Jaret Matthews. While the moon’s nighttime temperatures of around -200 degrees Celsius (around -328 Fahrenheit) would cause conventional rubber tires to become glass-like and shatter, Astrolab’s solution is intended to keep the rover from sinking into the unconsolidated lunar dust—or regolith—that covers the environment.
