AI chatbots have made it surprisingly easy to talk about anything, and that includes some of the heaviest topics imaginable. That openness has always been a double-edged sword. OpenAI is now taking a step to address that, with a new feature that brings a trusted person into the picture when things get serious.
The company is rolling out a new feature called Trusted Contact, and it is starting to appear in ChatGPT settings for adult users. It lets users name one person who can be alerted if ChatGPT detects a serious self-harm concern.
How does Trusted Contact work?
Setting up a Trusted Contact is optional, but if you do, the person you nominate must be at least 18 years old, or 19 in South Korea. Once you name someone, they get an invitation explaining what the role actually means, and they have one week to accept it before the feature goes live. If they decline, you can pick someone else.
The alert system itself is not automatic. If ChatGPT’s systems flag a conversation as potentially concerning, the chatbot first tells the user that their Trusted Contact may be notified, and it also nudges the user to reach out directly with some suggested conversation starters. A small team of specially trained human reviewers then steps in to assess the situation. Only if they confirm a serious risk does the contact actually get notified, via email, text, or in-app notification. The alert does not share chat transcripts or conversation details. It simply says that self-harm came up in a potentially concerning way and asks the contact to check in. OpenAI says it aims to complete that human review in under one hour.
Why is OpenAI adding this now?
Trusted Contact is part of a broader set of safety features on the platform. OpenAI previously added features that let parents receive alerts when a linked teen account shows signs of distress; Trusted Contact is the adult-facing extension of the same system. It was reportedly developed with input from clinicians, researchers, and mental health organizations, including the American Psychological Association.
All that said, it is worth mentioning that Trusted Contact does not replace crisis hotlines, emergency services, or professional mental health care. ChatGPT will still direct users toward those resources when needed. Users can remove or change their Trusted Contact at any time, and contacts can remove themselves whenever they want.
The reality of the matter is that ChatGPT is being used for some deeply personal conversations, whether OpenAI planned for that or not. Adding a feature like Trusted Contact is a move in the right direction, and also an admission that a chatbot can only do so much.
Brian Barrett: Grok would just say that it’s sick.
Zoë Schiffer: Grok mitigating the fight between the mom and the person who’s yelling at her about her baby.
Leah Feiger: I really, really feel for these workers, and I really, really feel for all of these customers that were stranded. Spirit in so many ways, like something that we love to make fun of just a little bit, like you take Spirit when you have to, but also it was actually available and it worked and it wasn’t nearly as expensive as anyone else. It’s kind of sad, especially when I look at the shrinking airline industry in the US, when I look over at Europe and I’m like, “You guys have so many low-cost carriers.” And especially with all of the deals, everything back and forth between JetBlue and Spirit that got squashed, it was just a little bit sad to see that happen.
Brian Barrett: And Leah, when you say stranded, I want to be clear, that’s literal. I think some of these employees, they were not in their home cities when Spirit shut down. So they had to rely on other airlines offering them a jump seat or a travel pass to get home. Fortunately, it’s apparently a very communal industry. Other airlines helped them out. Other airlines are offering preferential employment interviews to Spirit Airlines employees. But can you imagine, I’m in London right now, and if WIRED shut down and I had to find another way home. I mean, I’d be OK, but—
Leah Feiger: No, but it would also just be ridiculous. This is wild. I think of that 30 Rock episode when Liz Lemon is like, “Oh yeah, this is my flight.” And they’re like, “Sorry, we’re out of flights now. We just make popcorn,” which was incredible to see, but that’s so real.
Brian Barrett: I think from a consumer level, if you were going to book tickets for the summer, do it soon because now it’s a supply and demand thing, right? A whole airline is gone. That’s a lot of seats that aren’t there, so there’s more scarcity. Prices are going up basically at the worst possible time for people like myself who are thinking about planning some time for summer travel with, again, two kids.
Zoë Schiffer: Coming up after the break, we’ll be getting into the news of the hantavirus outbreak on a cruise ship. Should we be concerned, or are we panicking for no reason? We’ll find out.
Leah Feiger: So in recent days, there have been more and more headlines about a hantavirus outbreak on the MV Hondius, a Dutch-flagged cruise ship. The cruise departed from the southern tip of Argentina over a month ago, making stops in Antarctica and on the island of Saint Helena, among other places. The trouble started when a man began showing symptoms like a fever and a headache, which eventually became a respiratory illness. He died on board, and a few weeks later, his wife did as well. She was later confirmed to have hantavirus too. As of this week, seven cases have been confirmed, and the ship is currently carrying 147 passengers and crew. To help us understand what on earth is going on, we are joined by WIRED staff writer Emily Mullin.
But Microsoft executives had reservations about sending additional funding to OpenAI as far back as 2018 when it was just a small nonprofit research lab, according to emails between more than a dozen Microsoft executives, including CEO Satya Nadella, shown in a federal court on Thursday during the Musk v. Altman trial.
The emails show how Microsoft, at the time, wavered over what has since been held up as one of the most successful corporate partnerships in tech history. Several Microsoft executives said in the emails their visits to OpenAI did not indicate any imminent breakthroughs in developing artificial general intelligence. In 2017, much of OpenAI’s work was focused on building AI systems that could play video games, which showed early signs of success. But OpenAI needed five times more computing power than it had originally secured from Microsoft to continue the project.
Microsoft worried that not providing support could push OpenAI into the arms of Amazon, the world’s dominant cloud computing provider at the time. Roughly 18 months after the emails were sent, Microsoft announced a landmark $1 billion investment in OpenAI after the lab created a for-profit arm that provided the tech giant with the potential to generate a return of $20 billion.
Microsoft declined to comment.
Elon Musk’s attorneys introduced the emails to show Microsoft’s evolving relationship with OpenAI. After Musk reached out to Nadella, Microsoft in 2016 agreed to provide $60 million worth of cloud computing services to OpenAI at a steep discount. OpenAI consumed the services twice as fast as expected.
The email chain kicked off on August 11, 2017, with Nadella reaching out to OpenAI CEO Sam Altman to congratulate the lab on winning a video game competition using AI to mimic a human player. Ten days later, Altman responded seeking $300 million worth of Microsoft Azure cloud computing services.
“We could figure how to fund some of it but not that much,” Altman wrote, apparently seeking a financial handout and engineering help. “I think it will be the most impressive thing yet in the history of AI.”
Three days later, Nadella asked four lieutenants for their input on how to respond. Microsoft’s AI team saw “no value in engaging,” according to a response from Jason Zander, Microsoft’s executive vice president, that also documented how other teams felt. Its research team thought its own work was “more advanced,” while the public relations team didn’t like the idea of supporting a group pushing the idea of “machines beating humans.” Ultimately, Zander suggested that Azure would benefit from associating with Musk and Altman but that he wouldn’t want to “take a complete bath,” or large financial hit, in doing so.
A subsequent analysis showed that Microsoft stood to lose about $150 million over several years if it provided the services Altman wanted, according to one email. “Unless he can help us draw a more direct networking effect with OpenAI -> Microsoft business value, we will wind up having to pass,” Zander wrote.
The thread went dark for several months, but was revived on January 10, 2018, with an email to Nadella from Brett Tanzer—who signed off his emails with “Brettt”—then a director on the Azure cloud unit. Altman had told Tanzer that OpenAI could license its gaming AI to Microsoft’s Xbox video game division in exchange for “$35-50 million in Azure Credits.” But Xbox couldn’t commit that much money. Microsoft planned to tell Altman there would be no more discounts after that March, per Tanzer’s email.
AeroKoi set out to answer a simple question: could a desktop 3D printer produce train whistles that capture the exact chords once carried across fields and towns by steam engines? After months of steady work, the answer arrived loud and clear through shop air at 120 pounds per square inch.
Rail lovers still feel a sentimental tug from such noises, as steam engines carried multi-note whistles that signaled their arrival from afar, with the train itself still invisible on the horizon. Modern diesels have much simpler horns, but for many people, the originals remain the gold standard, richer, more alive, and somehow more memorable. AeroKoi began with a small setup and quickly developed expertise. The early prototypes were crude, with PVC pipe wrapped around the printed parts and air forced via a nozzle. Unfortunately, the tones came out weak and strange, lacking the deep resonance he desired. Direct airflow proved to be the main issue. Real whistles work a little differently, allowing the air to build up in a bowl-shaped chamber before directing it out through a super-narrow slit and into the bell.
That small idea actually opened the door to advancement. Every new iteration included a correct bowl, fine-tuned the slit width, and altered the distance between the bowl and the bell edge. He cut the four-inch-diameter whistles into vertical pieces that stacked nicely in a regular printer bed. For the majority of the experiments, simple PLA was used, with a carbon fiber blend added for stiffness where it was most needed. Layer height remained constant at 0.2 millimeters, with six walls and 25% infill to prevent the sections from collapsing with each blast of pressure.
As the plastic started rolling off the reel, the designs became more sophisticated. An early six-chime model sounded slightly better, but not quite right. The larger bells required greater airflow, so the threaded inlets progressed from quarter-inch fittings to half-inch and finally full-inch NPT ball valves for much smoother control. He included spacers between the parts so he could adjust the distance between the bowl and the lip without reprinting the entire whistle. Low notes now have a little extra internal room to help them carry farther, matching the original illustrations.
Today, two final whistles are available for anyone to download and print. One is a straight copy of a Santa Fe Railroad six-chime whistle, while the other is a Northern Pacific five-chime replica. Both are intended to be printed in pieces, assembled with simple fittings, and connected to a compressor, where they sound clean, lovely chords. The Santa Fe one feels unusually complete; all of the notes fit together perfectly, with none of the shrieking or rattling that plagued earlier prints. [Source]
Things are heating up in a single datacenter, but not in a good way
Amazon Web Services is working to address a power outage that has created “impairments” to services served from the notorious US-EAST-1 region.
A May 7 incident report time-stamped 5:25 PM PDT (00:25 UTC Friday) states that AWS spotted problems in the use1-az4 availability zone of the US-EAST-1 Region. A subsequent update states “EC2 instances and EBS volumes hosted on impacted hardware are affected by the loss of power during the thermal event.”
An update time-stamped 6:47 PM PDT reveals: “We continue to work towards mitigating the increased temperatures to its normal levels,” but warns: “Other AWS services that depend on the affected EC2 instances and EBS volumes in this Availability Zone may also experience impairments.”
At 8:06 PM PDT Amazon said it was “actively working to restore temperatures to normal levels … though progress is slower than originally anticipated.”
The cloudy concern said it made “incremental progress to restore cooling systems” but users of EC2 Instances, EBS Volumes, and other services are “experiencing elevated error rates and latencies for some workflows.”
AWS has also shifted traffic away from the stricken AZ, and suggested companies shift workloads into other US-EAST-1 availability zones.
Good luck getting that done because the update admits “Customers may experience longer than usual provisioning times.”
AWS execs have told The Register the region isn’t inherently more fragile than other parts of the Amazonian cloud, but often runs things at bigger scale than elsewhere and therefore imposes extra stress on services.
The Register will update this story as the situation evolves. ®
Amazon CEO Andy Jassy at AWS re:Invent in 2025. (GeekWire File Photo)
Generative AI bear Gary Marcus called the AI capex boom the “greatest capital misallocation in history.” Goldman Sachs analyst Eric Sheridan reaches the opposite conclusion in his “AI in a Bubble?” research package. Sheridan argues that this is not a hope-and-hype cycle like 1999 but a scale and monetization cycle, with tangible revenue growth and extraordinary market momentum.
So, who’s right? Jobs, pensions, and trillions of stock-market dollars are at stake, with implications for all of us.
I focus on Amazon Web Services (AWS) as the most informative window into the broader conundrum: it is the largest of the cloud businesses, the one with the cleanest revenue disclosure, and the one whose CEO has put the most specific quantitative defense on the table.
The chart below previews where this analysis lands: three plausible curves for AWS revenue, all consistent with the data through Q1 2026, each implying a different return on the $200 billion Amazon plans to spend this year. The disagreement between bulls and bears is essentially a disagreement about which curve materializes.
The bulls argue that hyperscalers fund this build-out from cash flow rather than debt, which makes the AI capex boom different from the historical telecom and railway bubbles. Indeed, AWS grew 28% last quarter, its fastest pace in 15 quarters, validating that enterprise demand for AI compute is real and accelerating.
Beyond the curve question, the bears point to financial fragilities that run independently of demand: Oracle’s leverage, Amazon’s sharp pivot to debt funding, and the circular customer-financing arrangements that tie hyperscaler revenue to a small number of model labs whose own revenue depends on capital markets staying open. In essence, the bear case is that the financial structure is changing, the demand assumptions are fragile, and being too aggressive is courting financial disaster.
Returning to our chart, the structure of the disagreement becomes concrete. The bull case assumes that the recent acceleration in AWS growth is the new normal and that growth rates keep climbing — producing roughly $66 billion in quarterly revenue by Q4 2027 and AWS-quality returns on the $200 billion capex.
The bear case assumes the recent acceleration was a catch-up move and that sequential dollar additions stabilize around the current $2 billion per quarter — producing roughly $52 billion in quarterly revenue and acceptable but disappointing returns.
The catastrophe case is below the bear case: AI workload demand actually reverses, and the GPU layer no longer earns enough revenue to recover its cost. The gap between the bull and bear cases is not whether the capex pays off but how well it does.
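The arithmetic behind these two curves can be sketched in a few lines. Note that the ~$38 billion Q1 2026 starting point below is my assumption, chosen so that the two paths land on the article's ~$66 billion and ~$52 billion Q4 2027 figures; it is not a disclosed number.

```python
start = 38.0     # assumed AWS quarterly revenue, $B, as of Q1 2026
quarters = 7     # Q2 2026 through Q4 2027

# Bear case: sequential dollar additions stabilize at ~$2B per quarter.
bear = start + 2.0 * quarters

# Bull case: compound growth at whatever per-quarter rate reaches $66B.
target = 66.0
rate = (target / start) ** (1 / quarters) - 1   # implied per-quarter growth
bull = start * (1 + rate) ** quarters

print(f"bear Q4 2027: ${bear:.0f}B, bull Q4 2027: ${bull:.0f}B, "
      f"implied bull growth: {rate:.1%}/quarter")
```

Under these assumptions, the bull case implies AWS must compound at roughly 8% per quarter for seven straight quarters, while the bear case asks only for flat dollar additions. The gap between the two endpoints is what the market is currently unable to price.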
By 2002, 85 to 95% of the fiber laid in the 1990s remained dark, and roughly $2 trillion in market value had been wiped out. Demand eventually arrived — YouTube, streaming, the cloud — but it arrived a decade later, and the people who built it out lost their shirts. The relevant question for AI is not whether demand exists, which it plainly does, but whether it is growing fast enough to absorb $700 billion in annual capex.
The data that resolves the disagreement is roughly 12 months away and will arrive in the regular cadence of quarterly earnings. By Q1 2027, the divergence between the bull and bear paths becomes visible in the AWS data: at that point, AWS quarterly revenue will be either accelerating toward the high $40 billions, tracking flat against the low $40 billions, or showing the first signs of inflecting downward.
None of those outcomes is currently disprovable from the trajectory through Q1 2026, which is why the hyperscalers can keep raising debt and the market keeps buying it. Anyone telling you they are certain which curve will materialize is selling something.
As for me, I just bought a 12-month supply of popcorn.
[Editor’s note: GeekWire publishes guest opinion pieces representing a range of perspectives. The views expressed are those of the author.]
Anthropic on Tuesday unveiled a suite of updates to its Claude Managed Agents platform at its second annual Code with Claude developer conference in San Francisco, introducing a new capability called “dreaming” that lets AI agents learn from their own past sessions and improve over time — a step toward the kind of self-correcting, self-improving AI systems that enterprises have demanded before trusting agents with production workloads.
The company also moved two previously experimental features — outcomes and multi-agent orchestration — from research preview into public beta, making them broadly available to developers building on the Claude platform. Together, the three features address what Anthropic says are the hardest problems in running AI agents at scale: keeping them accurate, helping them learn, and preventing them from becoming bottlenecks on complex, multi-step work.
Early adopters are already reporting significant results. Legal AI company Harvey saw task completion rates increase roughly 6x after implementing dreaming. Medical document review company Wisedocs cut its document review time by 50% using outcomes. And Netflix is now processing logs from hundreds of builds simultaneously using multi-agent orchestration.
The announcements come at a moment of extraordinary momentum for Anthropic. CEO Dario Amodei disclosed during a fireside chat at the conference that the company’s growth has outpaced even its own aggressive internal projections.
In the first quarter of 2026, Anthropic saw what Amodei described as 80x annualized growth in revenue and usage — far exceeding the 10x annual growth the company had planned for. API volume on the Claude platform is up nearly 70x year over year, and the average developer using Claude Code now spends 20 hours per week working with the tool.
“We tried to plan very well for a world of 10x growth per year,” Amodei said. “And yet we saw 80x. And so that is the reason we have had difficulties with compute.”
Anthropic’s actual growth in the first quarter of 2026 far outpaced its internal plan. The company had projected 10x annual growth; annualized revenue and usage grew 80x instead. (Image Credit: Michael Nunez / VentureBeat)
How Anthropic’s dreaming feature teaches AI agents to learn from their own history
Dreaming is the most novel of the three features and the one Anthropic is most eager to distinguish from conventional memory systems. While the company launched agent memory earlier this year — allowing Claude to retain preferences and context within and across individual sessions — dreaming works at a higher level of abstraction. It is a scheduled process that reviews an agent’s past sessions and memory stores, extracts patterns across them, and curates those memories so agents improve over time. It surfaces insights that no single agent session could see on its own: recurring mistakes, workflows that multiple agents converge on independently, and preferences shared across a team of agents.
Alex Albert, who leads research product management at Anthropic, explained the concept in an interview at the conference. He described dreaming as analogous to how people within organizations create skills after working through a task. “They might do a workflow with Claude, and at the end of that workflow, after they’ve iterated and zigzagged a little bit, they want to record that path from A to B,” Albert said. “A very similar thing is happening with dreaming — instead of you manually creating the skill from your experience working with Claude, the model is doing it, so it has that same context for a future session.”
Crucially, dreaming does not modify the underlying model weights. “We’re not changing the model itself through dreaming — it’s not doing updates to the weights or anything like that,” Albert said. Instead, the agent writes learnings as plain-text notes and structured “playbooks” that future sessions can reference, making the entire process observable and auditable by humans. When asked about the trust implications of agents consolidating their own knowledge, Albert acknowledged that “there is a level of trust that you need to place” but noted that all memories are inspectable and that smarter models are getting progressively better at managing this process. “They’re learning to write better notes for their future self,” he said.
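Anthropic has not published the internal mechanics of dreaming, but the behavior described above — a scheduled pass over past session logs that promotes recurring observations into plain-text playbook notes, without touching model weights — can be sketched roughly like this. All names and the log format here are hypothetical, not Anthropic's API:

```python
from collections import Counter

def dream(session_logs: list[list[str]], min_sessions: int = 2) -> list[str]:
    """Hypothetical sketch of a 'dreaming' pass: review past sessions,
    keep only observations that recur across several of them, and write
    the survivors out as plain-text playbook notes that a future session
    can read. Output is human-auditable text, not a weight update."""
    # Count how many *distinct* sessions each observation appeared in.
    seen_in = Counter()
    for log in session_logs:
        for note in set(log):
            seen_in[note] += 1
    # Promote cross-session patterns to playbook entries.
    return [f"PLAYBOOK: {note} (seen in {n} sessions)"
            for note, n in seen_in.items() if n >= min_sessions]

# Example: three sessions, one recurring mistake worth remembering.
logs = [
    ["retry failed on flaky API", "forgot to validate input"],
    ["forgot to validate input", "used wrong date format"],
    ["forgot to validate input"],
]
playbook = dream(logs)
print(playbook)
```

The point of the sketch is the inspection property Albert emphasizes: because the output is ordinary text, a human can audit exactly what the agent has "learned" before any future session consumes it.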
A live demo showed AI agents improving overnight without human guidance
During the keynote, the Anthropic team demonstrated all three features live on stage using a fictional aerospace startup called “Lumara” that needed to autonomously land drones on the moon for resource mining. The team configured a multi-agent system with three specialists — a commander agent responsible for overall mission success, a detector agent that identified high-quality landing sites, and a navigator agent that handled safe drone flight and landing — and defined a success rubric requiring soft landings, clear ground, and enough fuel reserves for a return trip to Earth.
An initial simulation across six hypothetical landing sites produced strong but imperfect results. To improve, the presenters triggered a dreaming session directly from the Claude Developer Console. Overnight, the dreaming agent reviewed all past simulation sessions and wrote a detailed descent playbook — a comprehensive set of heuristics drawn from patterns across multiple mission runs. When the team ran a new simulation the following morning with the dreaming-derived playbook in memory, the results improved meaningfully on the sites that had previously underperformed.
“All we had to do was just have Caitlin press a button,” said Angela Jiang, Head of Product for the Claude Platform, referring to her colleague on stage. “All dreaming.”
The demo illustrated how the three features compose together in practice. Multi-agent orchestration split the complex task across specialists with independent context windows. Outcomes provided the rubric against which a separate grader agent evaluated each run. And dreaming extracted lessons across those runs to improve future performance — forming what Anthropic describes as a continuous improvement loop that requires no human intervention between iterations.
Why Anthropic built a separate ‘grader’ agent to check Claude’s own work
The outcomes feature, now in public beta, gives developers a way to define what success looks like using a rubric — a structural framework, a presentation standard, a brand voice, or any other set of criteria — and then lets the agent iterate toward that standard autonomously. What makes outcomes architecturally distinctive is its separation of concerns. When an agent completes its work, a separate grader agent evaluates the output against the developer-defined rubric in its own independent context window. Because the grader operates in a fresh context, it is not influenced by the working agent’s reasoning or accumulated biases from the session.
When the grader identifies gaps between the output and the rubric, it pinpoints specifically what needs to change, and the working agent takes another pass. This loop continues until the rubric criteria are met — without a human needing to review each attempt.
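The loop described above — a worker drafts, a fresh-context grader checks the draft against a rubric, and the worker revises until every criterion passes — can be sketched as follows. The `grade` and revision functions here are stand-ins for model calls, not the real Outcomes API:

```python
def grade(output: str, rubric: list[str]) -> list[str]:
    """Fresh-context grader: evaluates the output against the rubric
    only, with no visibility into the worker's reasoning. Returns the
    list of unmet criteria (empty means the rubric is satisfied)."""
    return [criterion for criterion in rubric if criterion not in output]

def iterate_to_rubric(draft: str, rubric: list[str], max_passes: int = 5) -> str:
    """Worker/grader loop: keep revising until every criterion is met,
    with a pass cap so the loop always terminates."""
    output = draft
    for _ in range(max_passes):
        gaps = grade(output, rubric)
        if not gaps:
            return output  # rubric satisfied, no human review needed
        # Stand-in for the working agent's revision pass: address each gap.
        output += " " + " ".join(gaps)
    return output

rubric = ["executive summary", "risk section", "next steps"]
final = iterate_to_rubric("Draft report with executive summary.", rubric)
print(final)
```

The key design choice the sketch preserves is the separation of concerns: the grader sees only the finished output and the rubric, so it cannot inherit the working agent's accumulated biases.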
Albert described Anthropic’s broader verification strategy as employing “more test time compute, more models thinking about a problem for longer, to check over the work of another.” He acknowledged that having a model check its own work raises reasonable questions, but said a fresh context window reviewing completed work consistently outperforms asking the same long-running thread to identify its own bugs. “You will get higher success if you give that output to a fresh Claude and say, ‘what bugs do you see?’” he said. “There is still something to the attention” that degrades over very long sessions — a limitation he said Anthropic is actively working to fix in future models.
The approach mirrors strategies already in use at GitHub. Mario Rodriguez, Chief Product Officer at GitHub, described during a separate talk at the conference how Copilot uses a similar advisor pattern with Claude models — pairing a smaller, cheaper model as an executor with a larger model as a mentor. When the smaller model encounters a problem beyond its capability, it calls the larger model for guidance, then continues executing on its own. Rodriguez said the approach delivers near-Opus-level intelligence at significantly lower cost, and that GitHub inserts critique models at three specific points in the coding workflow: after drafting a plan, after a complex implementation, and after writing tests but before running them.
Parallel AI agents can now tackle tasks too complex for a single model thread
Multi-agent orchestration, the third feature moving to public beta, allows a lead agent to decompose a large task into subtasks and delegate each one to a specialist agent — each with its own model, system prompt, tools, and independent context window. Every step in the process is traceable in the Claude Console, showing which agent did what, in what order, and why.
The design gives each sub-agent an isolated context, which Anthropic says produces better results than having a single agent attempt to hold all the complexity in one thread. “Each sub-agent has its own independent thread and context window,” the keynote presenters explained. “This is very intentional — we found that by splitting the work and then merging the results, we get better outcomes.”
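The pattern the presenters describe — a lead agent decomposing a task, fanning subtasks out to specialists with isolated contexts, then merging the results into a traceable record — looks roughly like this in plain Python, with threads standing in for sub-agents. The specialist names and task split are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Invented specialist "agents": each receives only its own subtask
# (an isolated context) and never sees the other agents' work.
SPECIALISTS = {
    "commander": lambda task: f"mission plan for {task}",
    "detector":  lambda task: f"3 candidate sites for {task}",
    "navigator": lambda task: f"descent profile for {task}",
}

def orchestrate(task: str) -> dict[str, str]:
    """Lead agent: fan subtasks out in parallel, then merge the results
    into one record keyed by specialist, so every step stays traceable."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, task)
                   for name, fn in SPECIALISTS.items()}
        return {name: future.result() for name, future in futures.items()}

results = orchestrate("lunar landing")
for name, result in results.items():
    print(f"{name}: {result}")
```

In the real platform each specialist would be a model call with its own system prompt, tools, and context window; the structural point is that only the merged results, not each specialist's working context, flow back to the lead agent.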
Albert offered his own heuristic for when multi-agent architectures make sense versus sticking with a single thread. “Parallel agents are better for investigation,” he said — situations where there is a lot of context that will ultimately be discarded. “If you’re trying to answer a specific question, you don’t need all the search results from the areas where it didn’t find the answer. You just need the answer.” He described spinning up disposable sub-agents for specific retrieval tasks and bringing only the result back to the main thread. Increasingly, he said, the model itself will decide when to parallelize. “In the future, you won’t really care if it’s one agent or multi-agent or whatever’s happening. You just have a Claude that you’re talking to, and it will deploy the right architecture automatically.”
Anthropic’s bigger bet: closing the gap between AI capabilities and real-world adoption
The three features arrive as part of a broader platform push that Anthropic framed throughout the conference as closing “the gap between what AI can do and what it’s actually doing for people.” Ami Vora, Anthropic’s Chief Product Officer, set the theme in her opening keynote, noting that while model capabilities are advancing on an exponential curve, most organizations are still adopting AI on a linear path.
Dianne Penn, who leads product for Anthropic’s research team, described the company’s measure of progress as “task horizon” — how long an AI agent can work autonomously while improving the quality of its deliverables. “This time last year, models could work for minutes,” she said. “Now, most of us have agents running for hours on end. Tomorrow, we’ll have agents that are proactive, always on, and know what to work on without losing the frame.”
The event also included several infrastructure announcements designed to help developers keep pace. Anthropic said it is doubling its five-hour rate limits for Pro, Max, Team, and Enterprise plans, and raising API rate limits considerably. The company announced a partnership with SpaceX to use the full capacity of its Colossus data center to expand compute availability — a direct response to the demand crunch Amodei described.
All three features are built into Claude Managed Agents, which launched in public beta on April 8 as an opinionated harness that bundles best practices including memory, tool integration, and action handling. Anthropic says teams using Managed Agents have shipped 10x faster than those building their own agent infrastructure from scratch. Albert described the platform using an operating system analogy: “With managed agents, you don’t need to think about all the technicalities of how you set up the surrounding system,” he said. “You’re building an application for Macs — you don’t want to go have to re-implement every detail of macOS.”
What dreaming, outcomes, and multi-agent orchestration mean for the future of enterprise AI
The competitive implications are significant. As AI agent platforms from OpenAI, Google, and others compete for developer adoption, Anthropic is betting that production reliability — not just raw model intelligence — will determine which platform wins enterprise budgets. The dreaming feature in particular stakes out new territory: while other platforms offer memory and tool use, the idea of agents systematically reviewing their own histories to extract reusable knowledge goes further toward the kind of continuously improving systems that enterprises need before delegating high-stakes work.
The conference showcased companies already operating at that scale. Mercado Libre, Latin America’s largest e-commerce platform, has 23,000 engineers running Claude Code, has reviewed more than 500,000 pull requests with human oversight, and is aiming for 90% autonomous coding by the third quarter of this year. Shopify has deployed Claude Code across not just engineering but design, product, and data science teams.
But it was Dario Amodei who articulated the most expansive vision for where all of this leads. He described a progression from single agents to multiple agents to whole organizational intelligence — from “a team of smart people in a room” to what he called “a country of geniuses in the data center.” And he reiterated a prediction he made roughly a year ago: that 2026 would see the first billion-dollar company run by a single person. “Hasn’t quite happened yet,” he said. “But we’ve got seven more months.”
Dreaming is available now in research preview. Outcomes and multi-agent orchestration are in public beta and available to all developers on the Claude platform. Whether seven months is enough time for a solo founder to build a billion-dollar business remains an open question — but after Tuesday, they have a few more tools to try.
The Australian Cyber Security Centre (ACSC) is warning organizations of an ongoing malware campaign using the ClickFix social engineering technique to distribute the Vidar Stealer info-stealing malware.
ClickFix is a social engineering attack technique that tricks users into executing malicious commands, usually through fake CAPTCHA or browser verification prompts displayed on compromised or malicious websites.
The attack tricks users into executing PowerShell commands that bypass security controls and deliver malware, most often info-stealers.
Australian organizations and infrastructure entities are being targeted in attacks that involve compromised WordPress websites that redirect to malicious payloads.
Users visiting these websites are shown a fake Cloudflare verification or CAPTCHA prompt that instructs them to copy and manually execute a malicious PowerShell command on their system, which leads to a Vidar Stealer infection.
“The Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) has observed ClickFix-associated activity leveraging WordPress-hosted infrastructure to distribute the Vidar Stealer malware,” reads the agency’s advisory.
Vidar Stealer is an information-stealing malware family and malware-as-a-service (MaaS) operation that emerged in late 2018.
It gradually became a popular choice among cybercriminals for its cost-effectiveness, ease of deployment, and broad data theft capabilities. It targets browser passwords, cookies, cryptocurrency wallets, autofill information, and system details.
ACSC notes that Vidar deletes its executable after launching on the infected device and then operates from system memory, reducing forensic artifacts.
It retrieves a command-and-control (C2) address via “dead-drop” URLs using public services like Telegram bots and Steam profiles, a tactic that has been widely used in the past but which remains effective.
ACSC recommends that organizations restrict PowerShell execution and implement application allow-listing to reduce the risk from these attacks.
WordPress site administrators are also advised to apply available security updates for themes and plugins, and to remove any unused ones from their sites.
ACSC’s security bulletin provides indicators of compromise (IoCs) for these attacks, allowing organizations to set up defenses or detect intrusions.
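As a rough illustration of how published IoCs can be operationalised, the sketch below sweeps log lines for known indicator strings. The indicator values and log format here are invented for the example; real indicators come from the ACSC bulletin.

```python
# Sketch: sweep log lines for known bad indicators (domains, URLs,
# hashes). Matching is case-insensitive substring search, which is
# crude but enough to illustrate the idea. The sample indicators used
# in testing are made up; real IoCs come from the ACSC bulletin.

def find_ioc_hits(log_lines, indicators):
    """Return (line_number, indicator) pairs for every log line that
    contains a known indicator."""
    lowered = [ioc.lower() for ioc in indicators]
    hits = []
    for n, line in enumerate(log_lines, start=1):
        line_l = line.lower()
        for ioc in lowered:
            if ioc in line_l:
                hits.append((n, ioc))
    return hits
```

In practice this kind of matching would run inside a SIEM or EDR platform rather than a script, but the principle is the same: load the bulletin's indicators and alert on any match.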
The official announcement came on Wednesday via a dedicated Nintendo Direct livestream. Star Fox (2026) will launch on June 25 on the Switch 2, marking at least the fifth time the classic title has been remade or remastered by Nintendo over the past three decades.
The Watch Fit 5 Pro builds on Huawei’s most likeable smartwatch and continues to strike a good balance between a fitness and health tracker, a sports watch, and a smartwatch, all wrapped up in a pretty sleek package.
Attractive and very comfortable design
Bigger display doesn’t feel huge on the wrist
Fun addition of mini-workouts
Not a radical upgrade on the Fit 4 Pro
Android users will enjoy stronger smartwatch support
Huawei Health app is still full of bloatware
Key Features
Review Price: £249
Big, bright display
The Huawei Watch Fit 5 Pro gets a large 1.92-inch AMOLED screen with sapphire glass and excellent outdoor visibility.
Smarter fitness tracking
With dual-band GPS, mini-workouts and richer cycling, golf and swimming insights, it’s a strong all-round fitness watch.
Impressive battery life
The Huawei Watch Fit 5 Pro lasts up to a week in typical use, with fast charging for quick top-ups.
Introduction
The Huawei Watch Fit 5 Pro sees Huawei commit to offering its most affordable smartwatch in a version that gets you a few more features for a bit more cash. The best way to think of this smartwatch is as Huawei’s cut-price answer to the Apple Watch Ultra or Samsung Galaxy Watch Ultra.
For the Fit 5 Pro, Huawei has added a bigger and brighter screen alongside new fitness and smartwatch smarts that should appeal to those not just looking for an outdoor adventure companion.
I was a fan of the Watch Fit 4 Pro, so I hoped Huawei didn’t undo the good work it did with its predecessor. I’m happy to say that the Watch Fit 5 Pro is still a mid-range smartwatch with plenty to like.
Design and screen
Features a larger AMOLED screen
Screen is now brighter than Watch Fit 4 Pro
Suitable for recreational diving up to 40 metres
Huawei has stuck with largely the same design as the last Pro, which means another smartwatch with an Apple Watch-aping look. That said, it’s different enough to have its own identity on the wrist.
Image Credit (Trusted Reviews)
You’ll be glancing down at a 44.5mm case made from aluminium, matched up with a titanium bezel. Huawei has also launched a ceramic version, which gets you more in the way of protection against general wear and scratches.
Whichever model you go for, you’ll find two physical buttons on the right side of the case. The twisting crown lets you scroll through data and menu screens when you don’t want to swipe on the display to do it instead.
It’s mainly the display where things have changed. Huawei has moved from a 1.82-inch display to a larger 1.92-inch display with 480 x 408 resolution. While it’s technically bigger, it doesn’t actually dramatically impact the size of the watch. This is good news for anyone who liked the size of the Fit 4 Pro and was worried the Fit 5 Pro might be too big for smaller wrists.
It’s a very sharp, crisp display to look at, with a glossy finish that helps features like watch faces really pop on screen. The peak brightness is the same 3,000 nits as the 4 Pro, with sapphire glass in place to offer some premium protection against scratches.
That case is partnered up with a woven strap that’s been very comfortable to wear. Huawei states it’s a very breathable strap, and I’d be inclined to agree. Removing it is also easily done thanks to a lug-style connector. It’s much less fiddly than the daintier pin mechanisms you find on other Huawei smartwatches.
One of the biggest differences between the Fit 5 Pro and standard Fit 5 is around waterproofing. The Pro carries both a 5ATM rating and complies with the EN13319 standard for diving accessories. The former makes it suitable for activities like swimming, and the latter for recreational dives up to 40 metres. It’s rare to find that level of protection on a smartwatch that sits below £250.
Performance and software
Compatible with Android and iOS
Some features missing for iPhone users
Added NFC payments through Curve Pay
One of the biggest compromises you’ll need to make with wearing the Fit 5 Pro is that you’re not going to get everything that Huawei offers in its higher-end smartwatches. It doesn’t support LTE connectivity, while iPhone users miss out on the ability to act on notifications or access the full Huawei AppGallery store to download all available apps.
This is a smartwatch that Android users will get the most out of. That’s not to say using an iPhone with it is severely limited; you can still make use of the added NFC payment support for instance, once you’ve also downloaded the Curve Pay phone app to get things all set up. You can also add music to the Pro’s storage to listen to offline when you don’t want to stream music from your phone. There’s still plenty you can do.
Some of the more staple smartwatch features impress, like the array of watch faces you have to choose from. That includes some more fun animated ones you can interact with or slap virtual stickers onto. It’s a great watch to view detailed weather forecasts on, and there’s a useful voice recorder feature included among other basic yet useful features.
As I said, you don’t get the best Huawei has to offer. What you do get is slick software on the watch, and maybe not so much off it in the bloatware-riddled Huawei Health app. There’s enough that’s included to ensure the Fit 5 Pro does a solid job when you’re not putting its health and fitness tracking features to use.
Tracking and features
New mini-workouts
Richer tracking for cycling, golf and swimming
New Workout Service to boost third-party app support
This smartwatch has “Fit” in its name, and that’s what it mainly wants to do: keep you fit and healthy. It’s promising to do that in a variety of ways, whether that’s simple ways like keeping on top of daily step goals, tracking workouts, helping you warm up or letting you keep a close eye on your heart health.
The biggest updates on this front lie first with Huawei’s new mini-workouts. These can be found in the courses and plans section of the watch, and also activated through a very cute panda watch face. This watch face springs to life when you’ve been inactive for a period, prompting you to tackle bite-sized workouts that involve simple movements like side stretches or doing some seated dips.
It’s a really well put-together feature that won’t necessarily only appeal to users looking for simple ways to stay active. It also serves as a great reminder to keep moving in different ways throughout the day when you’ve been sitting for a long time.
If you’re already pretty active and looking for a watch that can track a multitude of activities and sports, this watch can do that as well. Huawei has looked to bolster support for core sports like golf, swimming and cycling. Cyclists can now benefit from features Huawei recently added to its Watch GT 6 Pro, including virtual cycling power estimates and FTP (Functional Threshold Power) measurements for assessing cycling fitness.
It’s good to see swimming support get upgraded with extra insights into training load and recovery. Like the Fit 4 Pro, it’s a pretty accomplished watch for sports tracking. Whether it’s the dual-band GPS performance, the breadth of sports supported or the fact you get features like free offline maps, it’s got the performance to back up the impressive array of features.
You’re also getting a pretty rich suite of health tracking features as well. Along with measuring heart rate and SpO2 levels continuously, there’s also the ability to use the onboard ECG sensor to check for signs of atrial fibrillation. You can also monitor for signs of arterial stiffness and use the optical heart rate sensor to analyse and detect arrhythmia.
These features have regulatory clearance in a host of countries and territories, which holds those measurements to clinical-grade standards. When I tried the heart measurements, the readings were similar to those from a pulse oximeter and the ECG-equipped Apple Watch Ultra 3.
The Fit 5 Pro also does a good job with daily activity tracking and sleep monitoring. I’ve been wearing it alongside an Oura Ring 4 to see how sleep data compares. For metrics like sleep duration, sleep stage breakdowns and recognising times I’d fallen asleep and woken up, the Fit 5 Pro generally posted similar data. It’s a similar story for daily step counts. You also get to see some nice animations when you hit or whizz past your goal on the watch.
Battery life
Up to 10 days of battery life
Up to 4 days in always-on display mode
25 hours of GPS battery life
Huawei has changed things on the battery technology front, moving to a high-silicon battery whose chief benefit is the longevity you’ll enjoy when using the onboard GPS.
That’s because general battery numbers remain the same as the Watch Fit 4 Pro. That’s up to 10 days of battery life, which drops to 7 days when using more of the health and fitness monitoring support. If you keep the screen on at all times, that number drops to 4 days.
I’d say those numbers pretty much ring true to my experience. This watch can comfortably last a week if you’re not keeping the screen on and don’t have it set very bright. I found that, in general, daily battery drop was around 10%, and just a few percent overnight.
In terms of GPS battery life, Huawei claims the Fit 5 Pro can deliver up to 25 hours. I found that an hour of GPS use saw the battery drop by 5%. That works out to around 20 hours. So that’s short of those claims, but still not a bad showing.
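That estimate is a simple proportion, which can be sanity-checked in a couple of lines:

```python
# Back-of-the-envelope GPS battery estimate from an observed drain
# rate: a 5% drop per hour of GPS use implies roughly 100 / 5 = 20
# hours of total GPS runtime.

def estimated_gps_hours(percent_drop_per_hour):
    """Extrapolate total GPS hours from the observed hourly drain."""
    return 100 / percent_drop_per_hour

hours = estimated_gps_hours(5)  # drain rate observed in this review
```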
Charging is done via a proprietary charging cradle, which can charge the Fit 5 Pro fully in an hour. Like the Fit 4 Pro, you can drop it onto that charger for 10 minutes, and it’ll get you enough battery to get you through a day of usual smartwatch use. It wasn’t a watch I got frustrated with, as far as the battery performance is concerned.
Should you buy it?
You want a relatively affordable smartwatch with a great mix of features
The Huawei Watch Fit 5 Pro gives you a lot for your money and, crucially, delivers performance that makes it a great-value buy.
You want Huawei’s best watch for smartwatch features
As with previous Watch Fits, you will need to accept that you won’t get all of the available Huawei smartwatch features. You’ll have to spend more to get those.
Final Thoughts
The Huawei Watch Fit 5 Pro might not be Huawei’s most premium smartwatch, but it’s arguably its most likeable.
It looks and feels great to wear, with a blend of smartwatch, fitness, and health features that’s just right for the price, though I wouldn’t say it’s a radical upgrade from the Fit 4 Pro, so Fit 4 Pro owners need not rush out to upgrade.
If you like the idea of having a bigger screen and particularly those new mini-workout features and generally more accessible fitness features, there’s plenty to like about this Huawei smartwatch to make it a smart buy. For more options, take a look at our selection of the best smartwatches and best fitness trackers.
How We Test
We thoroughly test every smartwatch we review. We use industry-standard testing to properly compare features, and we use the watch as our primary device throughout the review period. We’ll always tell you what we find, and we never, ever, accept money to review a product.
Worn as our main tracker during the testing period
Heart rate data compared against dedicated heart rate devices
FAQs
Can the Huawei Watch Fit 5 Pro connect to Strava?
Yes, you can connect the Huawei Watch Fit 5 Pro to Strava by enabling the connection in the data sharing and authorisation settings on the Huawei Health smartphone app.
Can you reply to messages on the Huawei Watch Fit 5 Pro?
Yes, you can reply to messages on the Huawei Watch Fit 5 Pro if you have the watch paired to an Android phone.
Fezz Audio is not some boutique tube brand trying to sell Americans a misty-eyed postcard from Eastern Europe. Designed and manufactured in Poland, the new Fezz Audio Luna Integrated Amplifier arrives in the U.S. through Bluebird Music Distribution as part of a much bigger story: the rise of serious Polish and Eastern European hi-fi brands that are no longer asking for a seat at the table. They’re building the table, wiring it properly, and probably using better transformers while they’re at it.
The Luna is a modern EL34-based tube integrated amplifier with selectable Ultralinear and Triode modes, modular expansion options, HT and Sub Out connectivity, and remote control support, which is not exactly your uncle’s dusty tube amp that needs three candles, a prayer, and a forgiving loudspeaker to behave. It is now shipping in the U.S. at $3,495, which puts Fezz Audio in a very interesting position for listeners who want real tube amplification with modern system flexibility, without pretending that 1962 was the peak of civilization — although it was a very good year for music and cinema.
The Luna is available in Big Calm, Black Ice, Burning Red, EverGreen, Moonlight, Republika, and Sunlight finishes, and several of them are far more striking in person than the spec sheet suggests. EIC Ian White has seen some of Fezz’s finishes firsthand, and apparently nobody in Poland got the memo that former Soviet Bloc colors were supposed to be drab, beige, and emotionally unavailable.
Fezz Audio Luna Integrated Amplifier in Sunlight Finish
Toroidal Transformer Technology
At the core of the Fezz Luna is one of the company’s key engineering strengths: toroidal output transformers developed in-house by Toroidy, Fezz Audio’s sister company. That matters because most tube amplifiers still rely on conventional EI-core output transformers, making Fezz’s approach less common and very much part of its identity.
The claimed benefits are lower noise, reduced electromagnetic interference, wider bandwidth, and better control. In practical terms, the goal is not to strip away the warmth people expect from tubes, but to tighten the presentation with cleaner edges, quicker transients, and firmer bass. Tubes with discipline. Poland apparently did not come here to make syrup.
Dual Sonic Character
The Luna provides users the flexibility to tailor sound through selectable operating modes:
Triode Mode – A more intimate, harmonically rich presentation with classic tube warmth
Ultralinear Mode – Greater power, dynamic impact, and control
This dual approach allows the amplifier to adapt more easily to different speakers, recordings, and personal preferences. The Luna effectively provides two distinct sonic profiles within a single design.
Amplification
The Luna employs classic EL34 push-pull circuit topology, delivering 40 watts per channel in ultralinear mode and 20 watts per channel in triode mode. Users can easily switch between modes, choosing between the harmonic richness and intimacy of triode operation or the greater dynamics and authority of ultralinear performance. A robust, well-filtered power supply using toroidal transformers ensures stability and consistent operation across a wide range of loudspeakers.
Modular Design
Recognizing the needs of modern listeners, the Luna features a modular expansion system that allows users to integrate additional functionality directly into the amplifier. Optional modules include:
This add-on approach ensures the amplifier remains relevant as system requirements evolve, reducing the need for external components.
Connectivity & Control
Unlike many traditional tube amplifiers, the Luna is designed to integrate easily into contemporary audio systems.
Features include:
Home Theater Bypass
Subwoofer Output
Remote Control Operation
This connection and operational flexibility allows the Luna to serve not only as a high-performance amplifier but also as the centerpiece of a complete audio system.
No Compromise Product Engineering
With its in-house transformer foundation and tighter control over production, Fezz Audio has a real engineering story to tell at this price point. The Luna is not just another tube integrated amplifier in a nice chassis with a glowing glass sales pitch. Its use of Toroidy toroidal output transformers, Polish manufacturing, and modern connectivity give it a more distinctive position in a crowded integrated amplifier market.
The Luna is still a tube amplifier, so expectations should be grounded in what that means: tone, texture, dimensionality, and a more tactile presentation. But Fezz is also aiming for better control, lower noise, and more system flexibility than many traditional tube designs offer. For listeners who want tube character without giving up modern usability, the Luna looks like a smart and credible option. Eastern Europe is no longer knocking. It brought its own soldering iron.
“Fezz Audio has created something truly special with the Luna,” said Jay Rein, president of Bluebird Music. “Its combination of toroidal transformer technology, classic tube topology, and modern usability delivers a level of performance and versatility that stands out in its class.”
Fezz Audio Luna Integrated Amplifier in Black Ice Finish
The Bottom Line
Fezz Audio may not be the loudest Polish hi-fi brand in the U.S. market, but it is one of the more interesting ones, and the Luna Integrated Amplifier gives Bluebird Music another credible piece of Eastern European tube artillery to work with. Between the Equinox Tube DAC with Lampizator Technology, the Evolution series amplifiers, and now the refreshed Luna Vacuum Tube Integrated Amplifier, Fezz is building a real identity around Polish manufacturing, in-house transformer expertise, and tube gear that feels modern without pretending valves were invented last Thursday.
What makes the Luna different is its use of Toroidy toroidal output transformers, its selectable operating modes, and a level of production control that many tube brands at this price do not have. At roughly $3,500, it is not inexpensive, but in the vacuum tube integrated amplifier category, it is not wildly out of bounds either.
The misses are pretty clear. The optional MM phono stage really should have been included, especially in an amplifier aimed at listeners who are likely spinning records. Tubes and vinyl belong together. Charging extra for that feels a little like selling pierogi and billing separately for the sour cream. A built-in headphone amplifier also would have made the Luna more useful for late-night listening and smaller dedicated systems.
The Luna is best suited for listeners who already understand the appeal of tube amplification and want a modern integrated amp for a dedicated two-channel room. It also makes sense for someone with a serious home theater setup elsewhere who wants a separate music-first system with some warmth, texture, and Polish engineering muscle. Add the phono stage if vinyl is part of the plan. And Bluebird Music should absolutely keep bringing more Fezz Audio products into the U.S. market, because this is the kind of brand that makes the category more interesting.
Price & Availability
Fezz Audio’s Luna Integrated Amplifier is shipping in the U.S. through the Bluebird Music Dealer Network for $3,495.
Although not confirmed, it is estimated that each add-on module is priced at about $300.