Uber is one step closer to going airborne. On Wednesday, the company previewed its air taxi booking service ahead of an expected launch in Dubai later this year. The inaugural Uber Air program will let travelers book Joby Aviation’s electric air taxis through a familiar process in the Uber app.
The experience of booking an air taxi will be much like reserving a four-wheeled Uber. In the app, after entering your destination, Uber Air will appear as an option for eligible routes. The Uber app will book a flight and an Uber Black to pick you up and drop you off at a Joby “vertiport.”
The process of booking a flying taxi will be instantly familiar. (Uber)
Joby’s air taxis, built exclusively for city travel, can accommodate up to four passengers and luggage. (Uber says size and weight guidelines will be announced closer to launch.) The interior is about the size of an SUV and has “comfortable seating” with panoramic windows. They can travel up to 200 mph and have a range of up to 100 miles. Four battery packs and a triple-redundant flight computer are onboard for safety purposes.
The air taxis aren’t (yet) autonomous and will each have a human pilot onboard. That would at least suggest high prices. After all, pilots aren’t nearly as cheap as Uber’s legion of independent-contractor drivers. But the company insists its air taxi rides will somehow cost about as much as an Uber Black trip.
Joby’s air taxis have “panoramic” windows with a view of the city below. (Joby)
Dubai is only the beginning of the companies’ plans. The US-based Joby says it’s in the final stage of FAA type certification and hopes to launch service in New York and Los Angeles. Globally, it’s targeting the UK and Japan as well.
As for how realistic a US launch is anytime soon, well, that’s up for debate. On one hand, President Trump signed executive orders last year that would create a pilot program to test such aircraft. But safety and cost considerations may require a grounding of expectations.
The aircraft requires a human pilot, at least in these early stages. (Joby)
In November, Robert Ditchey, a Los Angeles-based aviation expert and test pilot, told NBC News that he didn’t think air taxi service “was ever going to happen” in American cities. “They’re dangerous,” he warned. “We have had helicopters fail and crash on top of buildings in Los Angeles. We’ve had helicopters fail at takeoff and landing in airports. They’re dangerous not from a fire point of view but in terms of landing on top of people and buildings.” In addition, he warned that air taxis can’t be developed in sufficient numbers to make them economically viable “unless they are subsidized by a government.”
Uber and Joby have partnered since 2019. In 2021, Joby bought the Uber Elevate ride-hailing division, which essentially integrated the companies’ services. Last year, Joby acquired Blade Air Mobility’s passenger business, which could open the door to eventually electrifying Blade’s routes.
The video below shows one of Joby’s air taxis taking a test flight in Dubai.
Is China picking back up the open source AI baton?
Z.ai, also known as Zhipu AI, a Chinese AI startup best known for its powerful, open source GLM family of models, unveiled GLM-5.1 today under a permissive MIT License, allowing enterprises to download, customize, and use it for commercial purposes. The weights are available on Hugging Face.
The new GLM-5.1 is designed to work autonomously for up to eight hours on a single task, a shift the company frames as moving from “vibe coding” to agentic engineering.
The release represents a pivotal moment in the evolution of artificial intelligence. While competitors have focused on increasing reasoning tokens for better logic, Z.ai is optimizing for productive horizons.
GLM-5.1 is a 754-billion parameter Mixture-of-Experts model engineered to maintain goal alignment over extended execution traces that span thousands of tool calls.
“agents could do about 20 steps by the end of last year,” wrote z.ai leader Lou on X. “glm-5.1 can do 1,700 rn. autonomous work time may be the most important curve after scaling laws. glm-5.1 will be the first point on that curve that the open-source community can verify with their own hands. hope y’all like it^^”
In a market increasingly crowded with fast models, Z.ai is betting on the marathon runner. The company, which listed on the Hong Kong Stock Exchange in early 2026 with a market capitalization of $52.83 billion, is using this release to cement its position as the leading independent developer of large language models in the region.
Technology: the staircase pattern of optimization
GLM-5.1’s core technological breakthrough isn’t just its scale, though its 754 billion parameters and 202,752-token context window are formidable, but its ability to avoid the plateau effect seen in previous models.
In traditional agentic workflows, a model typically applies a few familiar techniques for quick initial gains and then stalls. Giving it more time or more tool calls usually results in diminishing returns or strategy drift.
Z.ai research demonstrates that GLM-5.1 operates via what they call a staircase pattern, characterized by periods of incremental tuning within a fixed strategy punctuated by structural changes that shift the performance frontier.
In Scenario 1 of their technical report, the model was tasked with optimizing a high-performance vector database, a challenge known as VectorDBBench.
VectorDBBench graphic from z.ai for GLM-5.1. Credit: z.ai
The model is provided with a Rust skeleton and empty implementation stubs, then uses tool-call-based agents to edit code, compile, test, and profile. While previous state-of-the-art results from models like Claude Opus 4.6 reached a performance ceiling of 3,547 queries per second, GLM-5.1 ran through 655 iterations and over 6,000 tool calls. The optimization trajectory was not linear but punctuated by structural breakthroughs.
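The edit-compile-test-profile cycle described here can be sketched as a generic tool-call loop. This is a toy skeleton, not Z.ai’s actual harness; every callable is a hypothetical stand-in:

```python
def agent_loop(propose_edit, apply_edit, compile_and_test, profile, budget=6000):
    """Toy skeleton of an agentic optimization loop: the model proposes an
    edit, the harness applies, compiles, and tests it, and the profiler
    score feeds back into the next proposal. All callables are stand-ins."""
    best_score, history = 0.0, []
    for _ in range(budget):
        edit = propose_edit(history)
        if edit is None:          # the model decides it is done
            break
        apply_edit(edit)
        ok = compile_and_test()
        score = profile() if ok else 0.0
        history.append((edit, ok, score))
        best_score = max(best_score, score)
    return best_score, history
```

The point of keeping `history` is that the agent’s next proposal can condition on every earlier experiment, which is what makes the long-horizon behavior described below possible.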
At iteration 90, the model shifted from full-corpus scanning to IVF cluster probing with f16 vector compression, which reduced per-vector bandwidth from 512 bytes to 256 bytes and jumped performance to 6,400 queries per second.
By iteration 240, it autonomously introduced a two-stage pipeline involving u8 prescoring and f16 reranking, reaching 13,400 queries per second. Ultimately, the model identified and cleared six structural bottlenecks, including hierarchical routing via super-clusters and quantized routing using centroid scoring via VNNI. These efforts culminated in a final result of 21,500 queries per second, roughly six times the best result achieved in a single 50-turn session.
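The u8-prescore / f16-rerank idea is straightforward to illustrate. The sketch below is not from the technical report; it is a minimal NumPy illustration of the general technique, in which a cheap integer pass builds a shortlist at a quarter of the f32 bandwidth and a half-precision pass reranks only the survivors:

```python
import numpy as np

def two_stage_search(query, corpus_f32, top_k=10, shortlist=100):
    """Minimal sketch of a two-stage pipeline: u8 prescoring to build a
    shortlist, then f16 reranking. Illustrative only, not Z.ai's code."""
    # Offline step (normally precomputed): quantize the corpus.
    lo, hi = corpus_f32.min(), corpus_f32.max()
    scale = 255.0 / (hi - lo + 1e-12)
    corpus_u8 = ((corpus_f32 - lo) * scale).astype(np.uint8)
    corpus_f16 = corpus_f32.astype(np.float16)

    # Stage 1: integer prescoring at a quarter of the f32 bandwidth.
    # Quantization is lossy, so the shortlist must be large enough to
    # absorb ranking noise.
    q_u8 = ((query - lo) * scale).astype(np.uint8)
    coarse = corpus_u8.astype(np.int32) @ q_u8.astype(np.int32)
    candidates = np.argpartition(-coarse, shortlist)[:shortlist]

    # Stage 2: f16 reranking over the shortlist only.
    fine = corpus_f16[candidates] @ query.astype(np.float16)
    return candidates[np.argsort(-fine)[:top_k]]
```

The design trade-off is the one the benchmark numbers reflect: the coarse pass moves far fewer bytes per vector, and the expensive precision is spent only where it can change the final ranking.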
This demonstrates a model that functions as its own research and development department, breaking complex problems down and running experiments with real precision.
The model also managed complex execution tightening, lowering scheduling overhead and improving cache locality. During the optimization of the Approximate Nearest Neighbor search, the model proactively removed nested parallelism in favor of a redesign using per-query single-threading and outer concurrency.
When the model encountered iterations where recall fell below the 95 percent threshold, it diagnosed the failure, adjusted its parameters, and implemented parameter compensation to recover the necessary accuracy. This level of autonomous correction is what separates GLM-5.1 from models that simply generate code without testing it in a live environment.
Kernelbench: pushing the machine learning frontier
The model’s endurance was further tested in KernelBench Level 3, which requires end-to-end optimization of complete machine learning architectures like MobileNet, VGG, MiniGPT, and Mamba.
In this setting, the goal is to produce a faster GPU kernel than the reference PyTorch implementation while maintaining identical outputs. Each of the 50 problems runs in an isolated Docker container with one H100 GPU and is limited to 1,200 tool-use turns. Correctness and performance are evaluated against a PyTorch eager baseline in separate CUDA contexts.
The results highlight a significant performance gap between GLM-5.1 and its predecessors. While the original GLM-5 improved quickly but leveled off early at a 2.6x speedup, GLM-5.1 sustained its optimization efforts far longer. It eventually delivered a 3.6x geometric mean speedup across 50 problems, continuing to make useful progress well past 1,000 tool-use turns.
Although Claude Opus 4.6 remains the leader in this specific benchmark at 4.2x, GLM-5.1 has meaningfully extended the productive horizon for open-source models.
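For reference, the 3.6x and 4.2x figures are geometric means of per-problem speedups, the standard way to aggregate ratios, since gains and regressions should cancel multiplicatively rather than additively:

```python
import math

def geomean_speedup(speedups):
    """Geometric mean of per-problem speedups (baseline_time / kernel_time).
    Preferred over the arithmetic mean for ratios: a 2x gain and a 0.5x
    regression average out to exactly 1x."""
    assert all(s > 0 for s in speedups)
    return math.exp(sum(math.log(s) for s in speedups) / len(speedups))
```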
This capability is not simply about having a longer context window; it requires the model to maintain goal alignment over extended execution, reducing strategy drift, error accumulation, and ineffective trial and error. One of the key breakthroughs is the ability to form an autonomous experiment, analyze, and optimize loop, where the model can proactively run benchmarks, identify bottlenecks, adjust strategies, and continuously improve results through iterative refinement.
All solutions generated during this process were independently audited for benchmark exploitation, ensuring the optimizations did not rely on specific benchmark behaviors but worked with arbitrary new inputs while keeping computation on the default CUDA stream.
Product strategy: subscription and subsidies
GLM-5.1 is positioned as an engineering-grade tool rather than a consumer chatbot. To support this, Z.ai has integrated it into a comprehensive Coding Plan ecosystem designed to compete directly with high-end developer tools.
The product offering is divided into three subscription tiers, all of which include free Model Context Protocol tools for vision analysis, web search, web reader, and document reading.
The Lite tier, at $27 per quarter, is positioned for lightweight workloads and offers three times the usage of a comparable Claude Pro plan. The Pro tier, at $81 per quarter, is designed for complex workloads, offering five times the Lite plan’s usage and 40 to 60 percent faster execution.
The Max tier at $216 per quarter is aimed at advanced developers with high-volume needs, ensuring guaranteed performance during peak hours.
For those using the API directly or through platforms like OpenRouter or Requesty, Z.ai has priced GLM-5.1 at $1.40 per million input tokens and $4.40 per million output tokens, with a discounted rate of $0.26 per million for cached input tokens.
Notably, the model consumes quota at three times the standard rate during peak hours, which are defined as 14:00 to 18:00 Beijing Time daily, though a limited-time promotion through April 2026 allows off-peak usage to be billed at a standard 1x rate. Complementing the flagship is the recently debuted GLM-5 Turbo.
While 5.1 is the marathon runner, Turbo is the sprinter, proprietary and optimized for fast inference and tasks like tool use and persistent automation.
At $1.20 per million input tokens and $4 per million output tokens, it is more expensive than the base GLM-5 but cheaper than the new GLM-5.1, positioning it as a commercially attractive option for high-speed, supervised agent runs.
The model is also packaged for local deployment, supporting inference frameworks including vLLM, SGLang, and xLLM. Comprehensive deployment instructions are available at the official GitHub repository, allowing developers to run the 754 billion parameter MoE model on their own infrastructure.
For enterprise teams, the model includes advanced reasoning capabilities that can be accessed via a thinking parameter in API requests, allowing the model to show its step-by-step internal reasoning process before providing a final answer.
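Z.ai’s API has historically followed the OpenAI-style chat-completions shape, so a request with the reasoning toggle the article describes would look roughly like the payload below. The exact field name and schema for this release are assumptions; the official API docs are authoritative:

```python
def build_request(prompt, thinking=True, model="glm-5.1"):
    """Sketch of a chat-completion payload with the reasoning toggle
    described in the article. The `thinking` field shape mirrors Z.ai's
    earlier OpenAI-style API, but is an assumption for this release."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }
```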
Benchmarks: a new global standard
The performance data for GLM-5.1 suggests it has leapfrogged several established Western models in coding and engineering tasks.
SWE-Bench Pro benchmark comparison chart showing GLM-5.1 leading other major models. Credit: z.ai
On SWE-Bench Pro, which evaluates a model’s ability to resolve real-world GitHub issues using an instruction prompt and a 200,000 token context window, GLM-5.1 achieved a score of 58.4. For context, this outperforms GPT-5.4 at 57.7, Claude Opus 4.6 at 57.3, and Gemini 3.1 Pro at 54.2.
Beyond standardized coding tests, the model showed significant gains in reasoning and agentic benchmarks. It scored 63.5 on Terminal-Bench 2.0 when evaluated with the Terminus-2 framework and reached 66.5 when paired with the Claude Code harness.
On CyberGym, it achieved a 68.7 score based on a single-run pass over 1,507 tasks, demonstrating a nearly 20-point lead over the previous GLM-5 model. The model also performed strongly on the MCP-Atlas public set with a score of 71.8 and achieved a 70.6 on the T3-Bench.
In the reasoning domain, it scored 31.0 on Humanity’s Last Exam, which jumped to 52.3 when the model was allowed to use external tools. On the AIME 2026 math competition benchmark, it reached 95.3, while scoring 86.2 on GPQA-Diamond for expert-level science reasoning.
The most impressive anecdotal benchmark was the Scenario 3 test: building a Linux-style desktop environment from scratch in eight hours.
Unlike previous models that might produce a basic taskbar and a placeholder window before declaring the task complete, GLM-5.1 autonomously filled out a file browser, terminal, text editor, system monitor, and even functional games.
It iteratively polished the styling and interaction logic until it had delivered a visually consistent, functional web application. This serves as a concrete example of what becomes possible when a model is given the time and the capability to keep refining its own work.
Licensing and the open segue
The licensing of these two models tells a larger story about the current state of the global AI market. GLM-5.1 has been released under the MIT License, with its model weights made publicly available on Hugging Face and ModelScope.
This follows Z.ai’s historical strategy of using open-source releases to build developer goodwill and ecosystem reach. However, GLM-5 Turbo remains proprietary and closed-source. This reflects a growing trend among leading AI labs toward a hybrid model: using open-source models for broad distribution while keeping execution-optimized variants behind a paywall.
Industry analysts note that this shift arrives amidst a rebalancing in the Chinese market, where heavyweights like Alibaba are also beginning to segment their proprietary work from their open releases.
Z.ai CEO Zhang Peng appears to be navigating this by ensuring that while the flagship’s core intelligence is open to the community, the high-speed execution infrastructure remains a revenue-driving asset.
The company is not explicitly promising to open-source GLM-5 Turbo itself, but says the findings will be folded into future open releases. This segmented strategy helps drive adoption while allowing the company to build a sustainable business model around its most commercially relevant work.
Community and user reactions: crushing a week’s work
The developer community response to the GLM-5.1 release has been overwhelmingly focused on the model’s reliability in production-grade environments.
User reviews suggest a high degree of trust in the model’s autonomy.
One developer noted that GLM-5.1 shocked them with how good it is, stating it seems to do what they want more reliably than other models with less reworking of prompts needed. Another developer mentioned that the model’s overall workflow from planning to project execution performs excellently, allowing them to confidently entrust it with complex tasks.
Specific case studies from users highlight significant efficiency gains.
A user from Crypto Economy News reported that a task involving preprocessing code, feature selection logic, and hyperparameter tuning solutions, which originally would have taken a week, was completed in just two days. Since getting the GLM Coding plan, other developers have noted being able to operate more freely and focus on core development without worrying about resource shortages hindering progress.
On social media, the launch announcement generated over 46,000 views in its first hour, with users captivated by the eight-hour autonomous claim. The sentiment among early adopters is that Z.ai has successfully moved past the hallucination-heavy era of AI into a period where models can be trusted to optimize themselves through repeated iteration.
The ability to build four applications rapidly through correct prompting and structured planning has been cited by multiple users as a game-changing development for individual developers.
The implications of long-horizon work
The release of GLM-5.1 suggests that the next frontier of AI competition will not be measured in tokens per second, but in autonomous duration.
If a model can work for eight hours without human intervention, it fundamentally changes the software development lifecycle.
However, Z.ai acknowledges that this is only the beginning. Significant challenges remain, such as developing reliable self-evaluation for tasks where no numeric metric exists to optimize against.
Escaping local optima earlier when incremental tuning stops paying off is another major hurdle, as is maintaining coherence over execution traces that span thousands of tool calls.
For now, Z.ai has put down a marker. With GLM-5.1, it has delivered a model that doesn’t just answer questions, but finishes projects. The model is already compatible with a wide range of developer tools including Claude Code, OpenCode, Kilo Code, Roo Code, Cline, and Droid.
For developers and enterprises, the question is no longer, “what can I ask this AI?” but “what can I assign to it for the next eight hours?”
The focus of the industry is clearly shifting toward systems that can reliably execute multi-step work with less supervision. This transition to agentic engineering marks a new phase in the deployment of artificial intelligence within the global economy.
Cisco CEO Chuck Robbins says he’s already exploring how to send data centers to space
OpenAI’s Sam Altman sees it as a “pipe dream,” Elon Musk is optimistic
Space-bound data centers would tackle a lot of the current issues
Cisco CEO Chuck Robbins has revealed his company execs are already discussing plans to put data centers in space.
Robbins clearly backs the idea, noting that space could remove some of Earth’s key constraints like power, cooling and land availability. Abundant solar energy and fewer community objections are among the highlights (though a different type of objection would likely occur).
And Robbins isn’t the only person with influence over data centers who believes this: “Sam Altman is one who says, ‘I don’t think they should be in their backyards’,” he told Nilay Patel of The Verge.
Cisco is actively exploring putting data centers in space
Although Altman may be sceptical of locating data centers in space, SpaceX’s Elon Musk is a major supporter. When asked whether he would believe Altman, who claims space-bound data centers are a “pipe dream,” or Elon Musk, Robbins stated: “I wouldn’t bet against Elon.”
These campuses are generally seen as noisy, energy-intensive operations that are especially unpopular locally. Hyperscalers face growing public opposition and concerns over environmental impacts, yet soaring usage keeps forcing new buildouts.
However, Cisco is still figuring out some of the technical challenges relating to temperature, atmospheric conditions and launch logistics.
There’s also growing demand for data sovereignty, and with infrastructure design shifting from global systems to localized deployments, it’s unclear at best how space-based data centers would fit in.
As for the next steps, there are clearly a lot of them. “We’re in the early stages of just making sure the atmospheric issues, the temperatures, all of those things are taken into consideration,” the CEO stated, noting “we don’t even know everything we need to do yet.”
“Absolutely,” Robbins concluded when asked whether we should put data centers in space.
Tired of solicitors knocking on your front door trying to sell you junk while you’re relaxing? You can grab the Google Nest Doorbell, our favorite video doorbell, from Amazon for just $140, a $40 discount from its usual price, and turn them away without getting off the couch. This attractive and elegant video doorbell has a variety of smart features, full Google integration, and hooks up to a powered source so you never have to charge the battery. I have the wireless version at home and found it extremely useful for spotting packages, talking to neighbors, or just snooping on my house while I’m away.
Photograph: Julian Chokkattu
Google
Nest Doorbell (Wired, 3rd Gen)
The video quality is excellent, with a huge 166-degree field of view that easily captures both your front yard and any packages that might be sitting on the ground close to the door. If you have other Nest displays, like the Nest Hub, they’ll show video alerts, and you can even turn on automatic picture-in-picture on your Google TV. When people speak to the doorbell the quality is nice and crisp, and you can even talk to delivery drivers or friends who stop by when you aren’t home.
You don’t need a subscription to use the basic video capture and doorbell features on the Nest Doorbell, but there is an upgraded plan available that adds a longer video history as well as more advanced detection features. While it hasn’t been the most consistent for me, it attempts to differentiate between familiar and unfamiliar faces, so it doesn’t bother pinging my phone when it sees me getting home. Depending on which plan you choose, you can get up to 60 days of video history, so I’ve been able to look back weeks to find packages or spot if something happened to my neighbor’s car.
For the $40 discount on the wired Google Nest Doorbell, head over to Amazon to grab one in Snow, Hazel, or Linen. If you aren’t sure about the Nest Doorbell, or you aren’t invested in the Google Home ecosystem, we have a full roundup of the best video doorbells from brands like Google, Arlo, and Eufy.
In short: Joby Aviation and Air Space Intelligence have announced a partnership to integrate AI-driven airspace management into U.S. electric air taxi operations, using ASI’s Flyways AI platform to model high-density eVTOL traffic before commercial flights begin later this year.
The electric air taxi race has long centred on the aircraft itself: wing count, battery range, noise footprint. Now, with Joby Aviation weeks away from completing FAA type certification and the White House’s eVTOL Integration Pilot Programme clearing the way for early commercial operations across 10 U.S. states, the harder question is finally being asked out loud. The skies may be ready for one or two electric air taxis. They are almost certainly not ready for hundreds of them, all manoeuvring simultaneously through the same congested corridors above Manhattan, Miami, and Dallas. Joby and Air Space Intelligence (ASI) announced on 7 April 2026 that they intend to fix that, before it becomes a problem.
The partnership tasks the two companies with accelerating the integration of advanced air mobility into the U.S. National Airspace System (NAS), using ASI’s AI-powered Flyways platform as the core coordination layer. Joint demonstrations, including live operational exercises, are expected before the end of 2026, a timeline that aligns directly with Joby’s own commercial launch ambitions.
A new operating system for the sky
ASI, founded in Boston in 2018 and backed by a $34 million Series B led by Andreessen Horowitz in December 2023, has spent years solving a version of this problem for conventional aviation. Its flagship PRESCIENCE platform provides a four-dimensional digital twin of the operating environment, ingesting live traffic data, weather feeds, and demand forecasts to simulate airspace conditions hours in advance. Flyways AI, ASI’s commercial product layer built on PRESCIENCE, translates those simulations into decision-ready recommendations for air traffic controllers, allowing them to proactively reroute flows before congestion sets in rather than reacting after the fact.
Alaska Airlines and the U.S. Department of Defense are among ASI’s confirmed customers. The company’s existing work with legacy aviation gives it a dataset and a regulatory credibility that most newer entrants in the advanced air mobility space cannot easily replicate. Applying that platform to eVTOL is, in ASI’s framing, a natural extension. “Scaling advanced air mobility requires more than new aircraft,” said Bernard Asare, President of Civil Aviation at Air Space Intelligence. “It requires a new operating system for the airspace. Our Flyways AI platform gives operators and controllers the predictive awareness to coordinate high-density operations proactively, not reactively. This partnership brings that same capability to eVTOL operations from day one.”
What Joby brings to the table
Joby’s contribution is operational experience and institutional relationships that no software company can substitute. The Santa Cruz-based manufacturer has conducted more than 1,000 test flights of its S4 aircraft, completed Stage 4 of the FAA’s five-stage type certification process, and, in March 2026, was selected to participate in five projects under the White House-backed eVTOL Integration Pilot Programme, giving it the legal pathway to begin passenger operations in states including New York, Florida, Texas, North Carolina, and Utah before full certification is granted.
Joby has also built a commercial ecosystem that few of its rivals can match: a partnership with Delta Air Lines that includes vertiport infrastructure at JFK and LAX, a $250 million strategic investment from Toyota, a 25-site vertiport deal with Metropolis, and an active Dubai operation that represents the company’s first revenue-generating international market. Its SuperPilot autonomy stack, developed with Nvidia’s IGX Thor platform, is designed to progressively reduce cockpit dependency as regulatory confidence grows, part of a broader AI infrastructure build-out that mirrors a year of rapid enterprise AI expansion across sectors.
“America has long set the global standard for aviation, and modernising our airspace is key to maintaining that leadership,” said Greg Bowles, Chief Policy Officer at Joby Aviation. “By combining Joby’s operational capabilities with ASI’s advanced AI-driven Flyways platform, we’re helping build the intelligent infrastructure needed to integrate electric air taxis seamlessly into the NAS.”
The BNATCS window
The timing is not accidental. The FAA’s Brand New Air Traffic Control System (BNATCS) is now under active development, a $32.5 billion overhaul of the U.S.’s ageing telecommunications, radar, and automation infrastructure. Congress has committed $12.5 billion, with a further $20 billion still required. Peraton has been named as system integrator. The programme will introduce 5,170 new high-speed network connections across fibre, satellite, and wireless, and is expected to include automated decision-support tools specifically designed for the influx of new traffic categories, including drones and eVTOLs, that current systems were never built to handle.
The Joby-ASI partnership positions both companies to influence how those tools are designed. By running live operational exercises with Flyways AI ahead of the BNATCS rollout, the two companies will be able to generate real-world data on how AI-mediated coordination performs alongside human controllers. That data is precisely what the FAA needs to define the standards that will govern every eVTOL operator in the country. Joby and ASI are, in other words, not merely preparing their own operations; they are helping to write the rulebook. This kind of infrastructure investment at scale echoes broader AI infrastructure deals reshaping technology’s physical footprint, with companies moving quickly to own the foundational layers before standards harden.
The governance gap eVTOL must cross
The challenge ASI is addressing sits at the intersection of aviation safety and AI governance, an area that regulators globally are still working to define. Autonomous or AI-assisted systems operating in safety-critical environments require a level of explainability and auditability that most machine learning architectures were not originally designed to provide. PRESCIENCE’s 4D simulation approach, which generates human-interpretable lookahead scenarios rather than black-box outputs, is partly a product of this regulatory reality. Making AI legible to air traffic controllers is not a nice-to-have; it is a certification prerequisite. The broader question of governed AI in high-stakes environments is one the entire industry is grappling with, and the Joby-ASI model may offer a template.
What sets this partnership apart from earlier eVTOL airspace initiatives, which tended to focus on unmanned traffic management (UTM) for drones rather than manned commercial aircraft, is the integration of existing air traffic control workflows. Flyways AI is not a parallel system that operates alongside the NAS; it is designed to slot into the controller’s existing interface, augmenting rather than replacing human judgement. That design philosophy may prove decisive as the FAA works to define what AI assistance in the cockpit and in the tower is, and is not, permitted to do.
What comes next
Both companies have indicated that live operational exercises will begin in 2026, though neither has specified which markets or corridors will be used for the initial demonstrations. Given Joby’s eIPP designations, New York and Florida are the likeliest candidates. The exercises are expected to produce data that can be submitted to the FAA as part of the ongoing NAS integration process, contributing to the regulatory record that will define how all future eVTOL operators handle airspace coordination at scale.
The partnership carries no disclosed financial terms. It is framed as a technical and operational collaboration, with both companies sharing data and co-developing protocols rather than exchanging capital. Whether that structure changes as the relationship matures will depend in part on how quickly Joby’s commercial operations scale, and how central Flyways AI becomes to running them. The question that defined much of last year’s AI conversation, whether AI tools can move from demonstration to durable operational infrastructure, is about to be tested in one of the most demanding environments imaginable: the U.S. National Airspace System, at altitude, with passengers on board.
The aircraft are almost ready. The question now is whether the sky itself can keep up.
Taken on its own terms there’s a whole lot to like about the KEF Muo and not a great deal to take issue with. But nothing happens in isolation – and the little shortcomings this speaker demonstrates mean it’s under threat from some better-rounded alternatives…
Insightful, rhythmically positive sound of impressive scale
Impressive all-round specification
Extremely well-made and -finished
Midrange reproduction is relatively blunt and approaching strident
Since 2016 the company has enjoyed an enviable strike-rate where its new products are concerned – so does the 2nd Gen Muo chalk up another hit?
Design
Its dimensions, relatively light weight and very promising IP rating would tend to indicate the KEF Muo is a go-anywhere, do-anything kind of Bluetooth speaker. And it’s true, it’s built to survive in any realistic environment and to be no kind of hindrance when it comes to getting there or coming back again.
But bear in mind the majority of the Muo is built from smooth, tactile and exquisitely finished aluminium. The sort of material, in fact, that it’s not especially difficult to mark or scratch or even dent. So if you do intend to take your speaker with you into the Great Outdoors, be aware that there are devices that lend themselves much more readily to being slung into a backpack and bounced around in there than this one.
And you’ll want to keep it pristine, because in any of the available finishes the Muo (to my eyes, at least) looks the business. I wouldn’t necessarily choose the Midnight Black of my review sample, but I’d happily take any of the Silver Dusk, Moss Green, Blue Aura, Cocoa Brown or Orange Moon alternatives.
There are some physical controls integrated into the rubber end-cap at the top of the speaker – they cover power on/off and volume up/down, and there’s a multifunction button that takes care of skip forwards/backwards, play/pause and answer/end/reject call (the mic that turns this into a speakerphone features noise- and echo-cancellation technology). There’s also a button to initiate Bluetooth pairing at the rear of the speaker – it’s just next to the USB-C slot.
Features
Bluetooth 5.4 with aptX Adaptive
40 watts of Class D power
Auracast-enabled
There are a couple of ways of getting audio information on board the Muo. The USB-C slot at the rear of the cabinet can be used for data transfer as well as charging the battery, and wireless connectivity is dealt with by Bluetooth 5.4 that’s compatible with the SBC, AAC and aptX Adaptive codecs. These options can deal with 16-bit/48kHz and 24-bit/48kHz resolutions respectively.
And there are further connectivity options. The Muo is Auracast-enabled, so can be part of an extremely expansive system as long as it’s partnered correctly. Two Muo (Muos?) can form a stereo pair. And both Microsoft Swift Pair and Google Fast Pair are available, too.
Once the digital audio information is on board, it’s delivered by a two-driver array powered by a total of 40 Class D watts. A 20mm tweeter takes up 10 of those watts, while the other 30 are taken by a 117mm x 58mm racetrack mid/bass driver that features the company’s P-Flex technology – this arrangement, says KEF, results in a frequency response of 43Hz – 20kHz.
There’s an accelerometer built into the Muo which allows it to detect its orientation and adjust its sound output accordingly. In portrait position, the tweeter is above the mid/bass driver; put the speaker into landscape orientation (it is fitted with four small rubber feet for this purpose) and obviously the drivers are now side-by-side.
You can also exert control over the Muo by using the KEF Connect app. In this guise it deals only with input selection and volume control, but it does at least give access to five EQ presets and an indication of battery life too.
Battery life is quoted at 24 hours from a single charge (at moderate volume levels, naturally), and should the worst happen you can go from flat to full in around two hours via the USB-C input. A quick 15-minute burst should be enough to get another three hours of playback (again, provided you’re not going for it where volume levels are concerned).
Sound Quality
Nicely shaped and varied low-frequency response
Sizeable and detailed presentation
Can sound slightly strident, especially through the midrange
For a relatively compact speaker in physical terms, the sound the Muo makes is anything but discreet. No matter if you give it a bog-standard 320kbps MP3 file of Private Life by Grace Jones to deal with or a bigger 24-bit/44.1kHz FLAC file of By Storm’s Dead Weight, the KEF sounds big and spacious, and delivers a presentation that easily escapes the confines of its cabinet.
It extracts and reveals plenty of detail, both broad and fine, at every stage of the frequency range – which goes a long way to convincing you, as the listener, that you’re getting a full account of what’s going on.
Down at the bottom end there’s a lot of information regarding texture made available, and bass sounds are nicely shaped and controlled too – so as well as an impressive amount of variation at the low end, rhythms are expressed with genuine positivity. It’s a similar story at the opposite end, inasmuch as treble sounds have shape and substance to go along with a fair amount of bite – and harmonic variation is apparent at every turn.
As well as the more understated dynamics of harmonic fluctuations, the Muo is also quite adept at dealing with the big dynamic variations that come when a recording ramps up the volume or the intensity. It has no problem tracking changes in attack, and maintains the distance between quiet and loud even if you’re listening quite loud in the first place.
Turning the volume up doesn’t alter the evenness of the frequency response or harm the natural, neutral tonality the speaker demonstrates at either end of the frequency range, either.
In the midrange, though, things aren’t quite so clear-cut. There’s still an admirable amount of detail available, and the transition from the midrange to the stuff going on either side of it is smoothly and naturalistically achieved.
But there’s not a huge amount in common where tonality is concerned – the way the KEF hands over the midrange in general, and voices in particular, isn’t in absolute sympathy with the bass or treble reproduction. There’s a mild abrasiveness to the tonality here, which can result in voices becoming slightly strident or, in extremis, actually rather hard-edged and unyielding.
Should you buy it?
Buy it if: You value the look and the feel of your Bluetooth speaker as much as you value the sound
Don’t buy it if: You’re after the best sound
Don’t buy it if: You’re after an entirely even-handed and uncoloured account of your music
Final Thoughts
KEF has been out of the Bluetooth speaker conversation for quite a while – but the quality of the products it has launched since it last had a Bluetooth speaker in its line-up made me very optimistic about the new Muo’s chances.
I’m in no doubt that it’s one of the more covetable and more desirable designs around – but the question of whether it sounds like £249-worth is not quite so straightforward to answer, especially not if you’ve heard the Bang & Olufsen A1 3rd Gen in action…
How We Test
I listen to the Muo on my desk, in the kitchen, and in the garden (during those few moments when it isn’t raining sideways around here). I connect it wirelessly to an Apple iPhone 14 Pro, and to a FiiO M15S which allows the use of the aptX codec.
I also hard-wire it to an Apple MacBook Pro (running Colibro software) using its USB-C slot.
FAQs
Is this a hi-res speaker?
Kind of, sort of – aptX Adaptive can operate at a lossy 24-bit/48kHz and the USB-C slot can deal with 16-bit/48kHz
Hackers are exploiting a maximum-severity vulnerability, tracked as CVE-2025-59528, in the open-source platform Flowise for building custom LLM apps and agentic systems to execute arbitrary code.
The flaw allows injecting JavaScript code without any security checks and was publicly disclosed last September, with the warning that successful exploitation leads to command execution and file system access.
The problem is with the Flowise CustomMCP node allowing configuration settings to connect to an external Model Context Protocol (MCP) server and unsafely evaluating the mcpServerConfig input from the user. During this process, it can execute JavaScript without first validating its safety.
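The vulnerability class is straightforward to illustrate. The sketch below is a hypothetical Python analogue, not actual Flowise code: evaluating a user-supplied configuration string as code executes whatever the attacker embeds in it, while parsing it strictly as JSON accepts only data.

```python
import json

def load_config_unsafe(config_str: str) -> dict:
    # DANGEROUS: eval() executes any expression embedded in the input,
    # analogous to evaluating the mcpServerConfig value as JavaScript.
    return eval(config_str)

def load_config_safe(config_str: str) -> dict:
    # json.loads() only accepts data, never executable code.
    return json.loads(config_str)

benign = '{"url": "https://example.com/mcp"}'
assert load_config_safe(benign) == {"url": "https://example.com/mcp"}

# A malicious "config" that runs arbitrary code when evaluated:
malicious = '__import__("os").getcwd() and {"url": "evil"}'
assert load_config_unsafe(malicious) == {"url": "evil"}  # os call already ran

try:
    load_config_safe(malicious)  # rejected: not valid JSON
except json.JSONDecodeError:
    pass
```

The fix for this class of bug is always the same: treat configuration as data and validate it against a schema, rather than handing it to an evaluator.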
The developer addressed the issue in Flowise version 3.0.6. The latest current version is 3.1.1, released two weeks ago.
Flowise is an open-source, low-code platform for building AI agents and LLM-based workflows. It provides a drag-and-drop interface that lets users connect components into pipelines powering chatbots, automation, and AI systems.
It is used by a broad range of users, including developers working in AI prototyping, non-technical users working with no-code toolsets, and companies that operate customer support chatbots and knowledge-based assistants.
Caitlin Condon, security researcher at vulnerability intelligence company VulnCheck, announced on LinkedIn that exploitation of CVE-2025-59528 has been detected by their Canary network.
“Early this morning, VulnCheck’s Canary network began detecting first-time exploitation of CVE-2025-59528, a CVSS-10 arbitrary JavaScript code injection vulnerability in Flowise, an open-source AI development platform,” Condon warned.
Although the activity appears limited at this time, originating from a single Starlink IP, the researchers warned that there are between 12,000 and 15,000 Flowise instances exposed online right now.
However, it is unclear what percentage of those are vulnerable Flowise servers.
Condon notes that the observed activity related to CVE-2025-59528 occurs in addition to CVE-2025-8943 and CVE-2025-26319, which also impact Flowise and for which active exploitation in the wild has been observed.
Currently, VulnCheck provides exploit samples, network signatures, and YARA rules only to its customers.
Users of Flowise are advised to upgrade to version 3.1.1, or at least 3.0.6, as soon as possible. They should also consider removing their instances from the public internet if external access is not needed.
The Berkeley biotech is backing a Nature-published approach that recreates the embryonic environment where blood stem cells first form, rather than reprogramming aged cells chemically or genetically. Its lead programme targets bone marrow transplant in blood cancers and has received FDA Orphan Drug Designation.
HexemBio has publicly launched with a $10.4 million seed round led by Draper Associates, with participation from SOSV, Seraphim, and other investors. The Berkeley and New York-based company is developing what it describes as the first blood stem cell rejuvenation therapy, built around a platform called the Synthetic Human Yolk Sac.
Rather than editing or chemically reprogramming aged haematopoietic stem cells, the technology temporarily places a patient’s own cells into a recreated version of the developmental environment where blood stem cells first emerge in the embryo, then returns them via standard IV infusion.
Haematopoietic stem cells sit deep in the bone marrow and give rise to every blood and immune cell in the human body. Their decline with age is linked to weakened immunity, chronic inflammation, and increased susceptibility to conditions including blood cancers and neurodegeneration.
Previous attempts to reverse this decline have typically involved transcription-factor reprogramming, cytokine treatments, or gene editing – approaches that can push cells into unstable states or carry safety risks that HexemBio says its method sidesteps.
The Synthetic Human Yolk Sac recreates the microenvironment that generates the body’s first blood stem cells during early embryonic development. Foundational work supporting the platform was published in Nature in February 2024, by a team led by Mo Ebrahimkhani at the University of Pittsburgh, with Samira Kiani and Joshua Hislop among the authors. All three are now co-founders of HexemBio.
The company’s lead clinical programme targets bone marrow transplant in patients with blood cancers including acute myeloid leukaemia and acute lymphoblastic leukaemia.
HexemBio received FDA Orphan Drug Designation for this indication in July 2025 and completed its FDA Pre-IND meeting in January 2026. First-in-human trials are targeted for 2027.
Regulatory strategy focuses on bone marrow transplant outcomes because ageing itself is not currently recognised as a regulatory indication, a constraint that has shaped how several longevity-adjacent biotechs have structured their early clinical programmes.
The founding team spans MIT, UC Berkeley, Harvard, and Y Combinator. Gabriel Levesque Tremblay, a former YC founder and UC Berkeley postdoc, serves as CEO. Samira Kiani, a Presidential Early Career Award recipient who trained at MIT, is CTO.
Mo Ebrahimkhani, the inventor of the underlying technology and a pioneer in synthetic developmental biology, is CSO. Joshua Hislop, whose doctoral work contributed directly to the Nature publication, leads the company’s AI platform, which includes proprietary tools called YolkGPT and YolkScore. Samet Yildirim, a former YC founder with drug development experience at Boehringer Ingelheim, is chief business officer.
The advisory board includes Robert S. Langer, Institute Professor at MIT and co-founder of Moderna, who called the approach “fundamentally different from transcription-factor reprogramming or gene editing” and said the early data were “extremely compelling.”
Further advisors include Peter Barton Hutt, former chief counsel of the FDA and current Moderna board member; Joanne Kurtzberg of Duke University, one of the leading bone marrow transplant clinicians in the US; David Harris, founder of the first public cord blood bank in the United States; Felipe Sierra, former director of the Division of Aging Biology at the NIH; Jens Nielsen, CEO of the BioInnovation Institute; and George Church, professor of genetics at Harvard Medical School and co-founder of Colossal Biosciences.
Seed funding will be used to complete IND-enabling studies and GMP manufacturing ahead of the 2027 trial target.
An anonymous reader quotes a report from the New York Times: President Javier Milei of Argentina promoted a cryptocurrency last year that quickly skyrocketed in value then cratered just as fast, costing investors millions of dollars and setting off a scandal and an investigation. Mr. Milei said he was simply highlighting a private venture and had no connection to the digital coin called $Libra. New evidence is now raising questions about his assertion. Phone logs from a federal investigation by Argentine prosecutors into the coin’s collapse show seven phone calls between Mr. Milei and one of the entrepreneurs behind the cryptocurrency on the night in 2025 when Mr. Milei posted about $Libra on X. The contents of the calls, which took place before and after Mr. Milei’s post, are not known.
But the phone logs — which were obtained by The New York Times and first reported by a local cable news channel, C5N — suggest a greater degree of communication between Mr. Milei and the entrepreneurs who launched the token than what the president has publicly acknowledged. Newly uncovered messages also suggest Mr. Milei received regular payments from one of the entrepreneurs while he was a congressman. Mr. Milei has not publicly commented on the call logs and other documents, and he did not respond to a request for comment. He is named as a person of interest in the federal prosecutor’s continuing investigation into the digital coin, according to court documents reviewed by The Times, but has not been formally charged with any crime. The latest revelations have revived a scandal that threatens the very foundation of a president who was elected in 2023 by attacking a political class he called corrupt.
Google is sharpening its focus on mental health safety with a key update to its Gemini platform, introducing a “one-touch” crisis support feature designed to connect users with real-world help faster. The move is part of a broader push to ensure AI tools act responsibly in sensitive situations, especially when users may be experiencing distress.
At the core of this update is a redesigned safety mechanism that activates when Gemini detects signals of potential mental health crises, including self-harm or suicidal thoughts. Instead of continuing a standard AI conversation, the system shifts toward immediate intervention. Users are presented with a simplified interface that allows them to instantly reach out to professional support through calls, texts, live chat, or official crisis hotline websites.
What makes this approach notable is its persistence
Once the one-touch interface is triggered, access to crisis support remains visible throughout the conversation, ensuring users are continually encouraged to seek human help rather than relying solely on AI-generated responses. The design prioritizes urgency and ease of access, reducing friction at moments when quick action can be critical.
This update reflects a growing recognition that AI must do more than provide information – it must actively guide users toward safe outcomes. Google says the system has been developed in collaboration with clinical experts, ensuring that responses are structured to encourage help-seeking behavior without reinforcing harmful thoughts or actions.
Importantly, Gemini is also being trained to avoid validating dangerous beliefs or behaviors
Instead, it aims to gently redirect users, distinguish between subjective feelings and objective reality, and prioritize connections to real-world resources. This balance between responsiveness and restraint is central to the platform’s evolving safety framework.
The significance of this feature lies in its potential real-world impact. With over one billion people globally affected by mental health challenges, digital tools like Gemini are increasingly becoming the first points of contact during vulnerable moments. By embedding a one-touch pathway to professional support, Google is attempting to bridge the gap between online interaction and offline care.
For users, this means faster, more direct access to help when it matters most. The update reduces the burden of searching for resources and ensures that support options are presented clearly and immediately.
Looking ahead, Google plans to continue refining these guardrails through ongoing research, testing, and collaboration with mental health professionals. As AI becomes more integrated into everyday life, features like one-touch crisis support could play a crucial role in shaping how technology responds to human vulnerability – prioritizing safety, accountability, and real-world connection over convenience alone.
What we think
Google’s AI mental health features feel like a step in the right direction, especially with tools that quickly guide users toward real-world help. The one-touch crisis support and improved responses show a clear intent to prioritize safety over engagement.
But there’s an inherent limitation here – AI can assist, but it cannot replace human empathy, clinical judgment, or long-term care. For someone in distress, a well-timed prompt helps, but it’s not a solution. These tools work best as bridges, not endpoints. The real challenge is ensuring users don’t stop at AI interaction and actually reach professional support when it truly matters.
An international operation from law enforcement authorities in partnership with private companies has disrupted FrostArmada, an APT28 campaign hijacking local traffic from MikroTik and TP-Link routers to steal Microsoft account credentials.
The Russian threat group APT28, also tracked as Fancy Bear, Sofacy, Forest Blizzard, Strontium, Storm-2754, and Sednit, has been linked to Russia’s General Staff Main Intelligence Directorate (GRU) 85th Main Special Service Center (GTsSS) military unit 26165.
In the FrostArmada attacks, the hackers compromised mainly small office/home office (SOHO) routers and altered the domain name system (DNS) settings to point to virtual private servers (VPS) under their control, which acted as DNS resolvers.
This allowed APT28 to intercept authentication traffic to targeted domains and steal Microsoft logins and OAuth tokens.
At its peak in December 2025, FrostArmada infected 18,000 devices across 120 countries, primarily targeting government agencies, law enforcement, IT and hosting providers, and organizations operating their own servers.
Microsoft, whose services were targeted by this campaign, worked together with Black Lotus Labs (BLL), Lumen’s threat research and operations division, to map the malicious activity and identify victims.
With support from the FBI, the U.S. Department of Justice, and the Polish government, the offending infrastructure has been taken offline.
FrostArmada activity
The attackers targeted internet-exposed routers, primarily MikroTik and TP-Link, as well as some firewall products from Nethesis and older Fortinet models.
Once compromised, the devices communicated with the attackers’ infrastructure and received DNS configuration changes that redirected traffic to malicious VPS nodes.
The new DNS settings were automatically pushed to internal devices via the Dynamic Host Configuration Protocol (DHCP).
When clients queried authentication-related domains the threat actor targeted, the DNS server returned the attacker’s IP instead of the real one, redirecting victims to an adversary-in-the-middle (AitM) proxy.
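One coarse defensive check follows directly from this mechanism: if the DHCP-assigned resolvers on a network are not on an allowlist of known-good DNS servers, the router's settings may have been tampered with. A minimal sketch of that check — the resolver addresses below are illustrative placeholders, not real recommendations:

```python
# Flag DHCP-assigned DNS servers that are not on a known-good allowlist.
# The addresses here are documentation-range placeholders; substitute
# your organisation's actual resolvers.
KNOWN_GOOD_RESOLVERS = {"192.0.2.53", "198.51.100.53"}

def rogue_resolvers(dhcp_assigned: list[str]) -> list[str]:
    """Return any DHCP-assigned resolver not on the allowlist."""
    return [ip for ip in dhcp_assigned if ip not in KNOWN_GOOD_RESOLVERS]

# One assigned resolver is unexpected and warrants investigation:
assert rogue_resolvers(["192.0.2.53", "203.0.113.7"]) == ["203.0.113.7"]
assert rogue_resolvers(["192.0.2.53"]) == []
```

In practice such a check would run on managed endpoints or be driven from network monitoring; the point is that a router compromised this way betrays itself through the resolver addresses it hands out.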
DNS request redirection at the router level Source: Black Lotus Labs
The only visible sign of fraud for the victim would have been a warning for an invalid TLS certificate, which could have easily been dismissed. However, ignoring the alert gave the threat actor access to the victim’s unencrypted internet communication.
“The actor essentially ran a proxy service as the AitM that the end user was directed to via DNS,” Lumen’s Black Lotus Labs researchers explain.
“The only sign of this attack would be a pop-up warning about connecting to an untrusted source because of the ‘break and inspect’ configuration.”
“If warnings were present and ignored or clicked through, the actor proxied requests to the legitimate services, collecting the data at the midpoint and collecting data associated with the targeted account by passing the valid OAuth token.”
In some cases, though, the hackers spoofed DNS responses for certain domains, thus forcing affected endpoints to connect to the attack infrastructures, Microsoft says in a report today.
Lumen reports that FrostArmada operated in two distinct clusters, one called the ‘Expansion team’ dedicated to device compromise and botnet growth, and the second handling the AitM and credential collection operations.
Overview of the Expansion branch operations Source: Black Lotus Labs
The researchers report that FrostArmada activity increased sharply following an August 2025 report from the National Cyber Security Centre (NCSC) in the UK describing a Forest Blizzard toolset that targeted Microsoft account credentials and tokens.
Microsoft confirmed that APT28 carried out AitM attacks against domains associated with the Microsoft 365 service, as subdomains for Microsoft Outlook on the web have also been targeted.
Additionally, the company observed this activity on servers belonging to three government organizations in Africa that were not hosted on Microsoft infrastructure. In those attacks, “Forest Blizzard intercepted DNS requests and conducted follow-on collection.”
Black Lotus Labs also observed the threat actor targeting entities with on-premise email servers and “a small number of government organizations” in North Africa, Central America, and Southeast Asia.
The researchers note that “there was also a connection to a national identity platform in one European country.”
In a report today, the UK agency says that the AitM activity impacted both browser sessions and desktop applications, and that the DNS hijacking is believed to have been opportunistic in nature, building a large pool of potential targets and then filtering for those of interest.
Black Lotus Labs has published a small set of indicators of compromise for the VPS servers used during the FrostArmada campaign:
IP address         First Seen          Last Seen
64.120.31[.]96     May 19, 2025        March 31, 2026
79.141.160[.]78    July 19, 2025       March 31, 2026
23.106.120[.]119   July 19, 2025       March 31, 2026
79.141.173[.]211   July 19, 2025       March 31, 2026
185.117.89[.]32    September 9, 2025   September 9, 2025
185.237.166[.]55   December 30, 2025   December 30, 2025
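The addresses above are published defanged, with `[.]` substituted for dots so they can't be accidentally clicked or resolved. To compare them against connection logs they first need to be refanged. A small helper, assuming only this one defanging convention is in play:

```python
def refang(indicator: str) -> str:
    """Convert defanged '[.]' notation back to a plain IP or domain."""
    return indicator.replace("[.]", ".")

iocs = ["64.120.31[.]96", "79.141.160[.]78", "23.106.120[.]119"]
refanged = {refang(i) for i in iocs}

# Check observed connection destinations against the IoC list.
observed = {"64.120.31.96", "8.8.8.8"}
hits = observed & refanged
assert hits == {"64.120.31.96"}
```

Real IoC feeds use several defanging styles (`hxxp://`, `(.)`, and so on), so production tooling typically normalises all of them before matching.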
The researchers note that defenders should implement certificate pinning for corporate devices (laptops, mobile phones) managed via an MDM solution; pinning generates an error whenever the attacker tries to intercept and inspect traffic on their VPS infrastructure.
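The pinning decision itself reduces to a hash comparison: hash the public key the server presents and refuse the connection if it doesn't match a pinned value. The sketch below uses placeholder byte strings in place of real SPKI structures; actual deployments pin via MDM profiles or TLS library configuration rather than hand-rolled code.

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64-encoded SHA-256 digest of the (placeholder) public key bytes,
    the format used by HPKP-style pin sets."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def connection_allowed(presented_spki: bytes, pinned: set[str]) -> bool:
    # An AitM proxy terminating TLS must present its own key, so its
    # pin will not match and the connection is refused.
    return spki_pin(presented_spki) in pinned

legit_key = b"---legitimate-server-public-key-bytes---"
attacker_key = b"---attacker-proxy-public-key-bytes---"
pins = {spki_pin(legit_key)}

assert connection_allowed(legit_key, pins) is True
assert connection_allowed(attacker_key, pins) is False
```

This is why pinning defeats the FrostArmada setup even when a user clicks through a certificate warning: the mismatch is enforced in code, not left to the user's judgement.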
Another recommendation is to minimize the attack surface through patching, limiting exposure on the public web, and removing all end-of-life equipment.
Microsoft and the NCSC also provide a list of IoCs and protection guidance to help defenders identify and prevent DNS hijacking attacks.