Meta has introduced a new “live chats” feature to Threads, enabling people on the platform to participate in real-time conversations about live events they’re interested in. Live chats can be hosted within Threads communities, the topic-specific social spaces that Meta introduced last year.
The new feature sounds a bit like Threads’ take on Instagram’s broadcast channels, but the latter only allows for one-way messaging. Live chats can be hosted by select creators, including Community Champions — users highly engaged within specific communities — and media personalities. Once a chat is launched or scheduled, the host chooses who is invited to contribute and can then share the link publicly.
You can post photos, videos, links and emoji reactions as well as text-based messages. If you’re unable to send messages in a live chat that is at capacity, you can still watch it, react to others’ messages and vote in polls. Live chats remain open to view after they’ve ended, and you don’t need to be part of a community to join.
Meta is debuting its new social feature in the NBAThreads Community during the Playoffs, with Malika Andrews, Rachel Nichols, Trysta Krick, David Rushing and Lexis Mickens named as hosts. Live chats will appear at the top of the NBAThreads Community feed, and can also be shared in a post that might appear on your main feed in Threads. You’ll also see a red ring around a host’s profile photo when they’re live.
Meta says live chats will gradually be rolled out to more communities on Threads, with features like co-hosting, lock screen widgets and the ability to quote and share messages from a chat on your feed coming soon.
Meta has been steadily expanding its X rival’s features since it launched in 2023. It started small with features such as topic tags (note: not hashtags), before rolling out communities last year. It also introduced long-form text posts and just gave Threads a long-overdue facelift on the web. Back in October, the company said that its text-based social media platform now has 150 million daily users.
The Cambridge Audio MSX Series is a capable mix of satellite speakers and a subwoofer that provides an immersive sound with tight bass, a surprisingly wide soundstage and rich mids for such a compact set of units that can be placed virtually anywhere. The treble can feel a little smooth, though, and once you add a streamer and amp, it can get a little dearer than some active units.
Versatile and stylish looks
Surprisingly weighty bass
Immersive for such small units
Treble could do with more bite
Can be quite expensive once you add a streamer and amp
Key Features
Versatile placement
The small size of the main units and sub means they can be placed in areas where other, more conventional speakers may not fit.
2.1 system
This Cambridge system includes both a set of stereo speakers and its own subwoofer to provide a more rounded feel.
Introduction
The Cambridge Audio MSX20 and MSX Sub 200 combo represents an intriguing proposition in the brand’s rather hefty hi-fi catalogue.
These products are essentially rehashes of the older Minx series of compact speakers and subwoofers, designed to be versatile and affordable without compromising on audio quality, so they can be placed in more challenging environments where ‘normal’ speakers couldn’t go.
In that regard, it’s quite a unique option, not least for the price – the MSX20 speakers cost £99 each, with the MSX Sub 200 an additional £299.
That doesn’t account for an amp to power them, or a streamer for a complete system, although it’s still an interesting alternative to powered choices such as the Klipsch ProMedia Lumina 2.1, the dinky Kanto Uki, or even the Cambridge Audio L/R S, if you’re tight on space or have a unique setup you want to add audio to.
I’ve been putting this combo through its paces for the last couple of weeks to see how well it performs on my sideboard.
Design
Surprisingly compact
Redesign provides a more modern look
Discreet colour choices
What immediately surprised me about this system was how tiny everything is – the MSX20 speakers are just 155mm high, 79mm wide and 97mm deep, meaning they can be placed virtually anywhere and take up little space on my sideboard compared with larger speakers.
The MSX Sub 200 is the smaller of the two subwoofers Cambridge offers (the larger and beefier MSX Sub 300 is also available), but I was still quite surprised at how small it is compared to other subwoofers I’ve seen in sets with active speakers.
The satellite speakers can work either on a table on their own, as I had them, or raised up on dedicated desk stands – wall mounting is also an option, with hardware included in the box. Cambridge even says you can stack them on top of each other if you want to, although having them separate will be better for stereo immersion.
The MSX20 is available in either black or white, as is the MSX Sub 200, meaning they can carry a discreet look to blend into your space. I don’t mind this, although it is a shame they don’t also come in a silver finish to match Cambridge’s other hardware for a more unified look – I appreciate that’s a little nitpicky, though.
On the whole, I appreciate the little redesign these new models have undergone compared with the older Minx variants, with a new Cambridge logo and a redesigned grille on the front of both the satellite speakers and the sub, bringing them closer to Cambridge’s current portfolio.
Connectivity
Passive speakers connect by banana plugs
Subwoofer has RCA line-in and line-out
Best paired with a streamer and amp
The MSX20 is a passive speaker, and can connect to any amp or AV receiver using the terminals on the rear; these unscrew to reveal slots for 4mm banana plugs, which I chose to use.
The sub houses more connectivity on its rear panel, with RCA input and output options plus a power cable. The input handles a streamer, for instance, while the output feeds the amplifier in this setup.
For my testing, these were both Cambridge products, with the affordable MXN10 streamer (in pre-amp mode) and the MXW70 power amplifier, which is where the MSX20 speakers were plugged into.
With the MXN10 in tow, this system can work with the likes of Tidal Connect, Spotify Connect, Deezer, Qobuz, internet radio and a DLNA server via Cambridge’s Streammagic app, plus it can handle Bluetooth 5.0, Google Cast and AirPlay 2. It’s also Roon Ready, which is where I spent most of my time with this system.
Physical connections include the RCA line output, plus a coaxial out, optical out, USB-A port and wired Ethernet.
The MXW70 provides 70 watts of power into 8 ohms with the MSX20, with Hypex NCore Class D amplification that’s tuned by Cambridge’s engineers. Connectivity here includes unbalanced RCA, a pair of XLR ports, a 12V trigger in, loudspeaker connections that accept 4mm banana plugs and a power cable.
If you’re focused on space-saving, I think this half-width combo works well with the MSX20 and MSX Sub 200, although with the streamer at £349 and the MXW70 at £499, it does increase the cost of the overall system.
A more affordable streaming amplifier, such as the WiiM Amp Pro, WiiM Amp Ultra or Eversolo Play, can cut costs and reduce the number of boxes you need, depending on the physical constraints of your space.
Sound Quality
Strong bass from subwoofer
Forward mids and excellent width
Treble can sometimes feel a little lost
As much as the outside of this unit has changed, the core of the MSX20 isn’t too different to the Minx satellites that preceded it. This means they benefit from Cambridge’s fourth-gen Balanced Mode Radiator, or BMR, tech, which is designed to provide balanced and engaging results from wherever you are in a room.
On their own, these speakers only cover the mids and treble, as they only go down to 120Hz, with the MSX Sub 200 dialled in to handle anything below that. With the subwoofer in tow, I was pleasantly surprised by the amount of bass on offer from such a small set of units.
Of course, it works best when you use the dials on the rear to set crossover, the desired phase and volume of the pounding bass, but once I’d set that up, it was set-and-forget as far as I was concerned.
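Conceptually, what those dials control can be sketched in a few lines of code. The snippet below is a toy first-order crossover in Python, not Cambridge’s actual filter design (which is analogue hardware with its own slopes); it simply illustrates how a split around 120Hz routes lows to the sub while the satellites get the rest.

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Smoothing coefficient for a first-order low-pass filter.
    return 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def crossover_split(samples, cutoff_hz=120.0, sample_rate=48000):
    """Split a mono signal at the crossover point: lows feed the
    sub, and the remainder feeds the satellite speakers."""
    a = one_pole_coeff(cutoff_hz, sample_rate)
    low_state = 0.0
    sub_feed, satellite_feed = [], []
    for x in samples:
        low_state += a * (x - low_state)      # first-order low-pass
        sub_feed.append(low_state)            # content below ~120Hz
        satellite_feed.append(x - low_state)  # everything above
    return sub_feed, satellite_feed
```

Summing the two feeds reconstructs the original signal exactly, which is the behaviour you are aiming for when you tune the sub’s crossover and level by ear.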
A good example of this was Off The Wall from Michael Jackson, with its pounding groove from the subwoofer demonstrating good extension and a tight feel, while the satellite speakers handled vocals and the rest of the frequency range. Both here and with Earth, Wind & Fire’s Let’s Groove, the MSX Sub 200 felt unified with the frequency response of the overall system, rather than feeling like a thud in the corner that doesn’t contribute much.
Steven Wilson’s Luminol features some relentless bass grooves in the opening few minutes alongside a vicious drum groove and hints of guitar work that can be quite difficult for some systems to deal with. The MSX20 and MSX Sub 200 combo impressed me here with the power and strength of the bass, although it didn’t overpower the punch of the drums and guitar work.
The entire system provides a sound with good width and depth, as demonstrated with Luminol and Peter Gabriel’s That Voice Again in my testing. This particular cut from So features a pounding bass, rich vocal and a lot of detailed cymbal work that can sometimes be lost with other systems, which isn’t the case here.
I felt the mid-range that the MSX20 provided was rich, as demonstrated with James Taylor’s October Road; his vocals and a warm acoustic guitar sit right up front in the mix, with the ensemble built around it, which was demonstrated wonderfully. In Gloria Estefan’s Get On Your Feet, her vocals sit back in the mix against percussion and electric guitar work, although each was given space and room to breathe.
The one area I was a bit disappointed by was that the top end was quite smoothed over, lacking bite and detail, and sometimes felt a little lost against the low-end and the mid-range. For instance, in Lock All The Doors from Noel Gallagher’s High Flying Birds, the cymbal and percussion hits lacked a bit of presence, feeling a little lost against his vocals, while the competing percussion intro in Steely Dan’s Do It Again had good separation but lacked a bit of punch compared with other systems.
Should you buy it?
A compact and versatile system
The MSX20 and MSX Sub 200 combo works well if you’re after speakers and a sub that can be placed virtually anywhere in a room, although you will need to budget for a streamer and amp for a complete system.
This system feels a little lacking in the top-end, though, missing a certain bite and sharpness compared with other systems.
Final Thoughts
The Cambridge Audio MSX20 & MSX Sub 200 is a capable mix of satellite speakers and a subwoofer that provides an immersive sound with tight bass, a surprisingly wide soundstage and rich mids for such a compact set of units that can be placed virtually anywhere. The treble can feel a little smooth, though, and once you add a streamer and amp, it can get a little dearer than some active units.
The main package here is comparable in price to the active Klipsch ProMedia Lumina 2.1, which offers better compatibility with desktop systems with USB-C and the like, and requires less effort to set up.
With this in mind, I think the MSX20 and MSX Sub 200 offer a better overall sound, with better handling of the low-end alongside an immersive sound. For a more affordable and easy-to-use passive system, this combo works rather well, although sometimes you can’t beat the simplicity and versatility afforded by the Cambridge Audio L/R S for a similar price.
How We Test
We test every speaker setup we review thoroughly over an extended period of time. We use industry-standard tests to compare features properly. We’ll always tell you what we find. We never, ever, accept money to review a product.
Tested over several weeks
Tested with real world use
FAQs
Does the Cambridge Audio MSX20 and MSX Sub 200 system have a subwoofer?
Yes, the MSX Sub 200 is the subwoofer in this system, partnering the MSX20 satellite speakers.
Does the Cambridge Audio MSX20 and MSX Sub 200 system have a control app?
On its own, no. The Cambridge Audio MSX20 and MSX Sub 200 system consists purely of passive speakers and a subwoofer that need wiring to other components. If you use a streamer, such as Cambridge’s own MXN10, you’ll get the brand’s Streammagic app to send audio to the speakers and sub.
Full Specs
Cambridge Audio MSX Series Review
UK RRP
£497
USA RRP
$657
Manufacturer
Cambridge Audio
Size (Dimensions)
210 x 232 x 220mm
Weight
6.5kg
Release Date
2026
First Reviewed Date
10/05/2026
Driver (s)
Two BMR drivers (main units), 6.5-inch active woofer and 2x passive radiators (subwoofer)
Ports
Banana plugs/terminals (main units), RCA input and output (subwoofer)
The JP4x4 is a new take on two of the original Renault 4s: the Plein Air version, built in 1969 for open-air fun, and the JP4 from 1981, which seemed to channel carefree days by the sea. The name JP4 is derived from Journée à la Plage, which translates to “a day at the beach.” The new name JP4x4 incorporates the four-wheel drive feature, which is self-explanatory.
On May 18, visitors to the 2026 Roland-Garros French Open will get their first look at the vehicle, which joins three previous concepts built on the same electric Renault 4 E-Tech platform, each of which explored new ways to use the compact hatchback. This most recent version focuses squarely on leisure and light adventure. Emerald green paint covers the bodywork in a somewhat iridescent tint that resembles the colors offered on the classic 4L in the 1970s. Bright orange fills the interior, creating a sharp, cheerful contrast that draws the eye from all sides. Half-doors replace the traditional five-door layout, stopping just short of the B-pillar to enable simple entry and exit. There are no side windows or canvas roof, so the cabin is always open to the breeze.
The openwork roof is made up of a cross-shaped structure that provides enough stiffness while allowing plenty of sky to be visible. The same frame supports a surfboard strapped securely on top. At the back, the tailgate folds flat like the side of a pickup truck, transforming the cargo area into a simple loading platform. Skateboards fit nicely into the free area behind the seats, ready for whatever happens next.
The dashboard and digital screens are carried over from the production car, but Renault added a passenger grab handle for rougher terrain and a floating center console to keep the space airy. The seats replicate the distinctive bucket style of 1970s Renault models, complete with integrated headrests that resemble wrapped Egyptian mummies, and are covered in mixed fabrics combining a crepe base with diagonal mesh sections for a sporty yet comfortable feel. Orange accents appear on the door panels and around the console, tying everything together.
The JP4x4 is mechanically similar to last year’s Savane 4×4 concept, with a second electric motor driving the rear wheels to give the vehicle permanent all-wheel drive instead of the front-wheel-drive setup found on the standard Renault 4 E-Tech. Ground clearance rises by 15 millimeters, and each track widens by 10 millimeters for better stability. The 18-inch wheels wear a fresh design inspired by the original JP4, wrapped in Goodyear UltraGrip Performance+ tires in a 225/55 size. The wheelbase remains at 2,624 millimeters, unchanged from the production vehicle.
Renault built the entire package for sandy beaches, stony pathways, and unpaved treks where extra traction is critical. The combination of raised height, wider stance, and all-wheel drive gives the car a capable feel without making it a serious off-road vehicle. Nobody expects this particular vehicle to hit showrooms. Instead, it serves as a showcase for the electric Renault 4 platform’s versatility.
A recent Figure AI tech showcase depicts two F.03 humanoid robots walking into a clean but lived-in environment. One robot goes straight to a coat thrown on a bed and hangs it neatly on a wall hook. At the same time, the second robot closes a laptop on the desk and places a pair of headphones back onto their stand. They keep progressing without pausing, each catching up on what the other has previously accomplished. When they approach the unmade bed, they naturally split off, one on each side, and begin manipulating the sheets and comforter together until everything is level and smooth.
People have seen robots doing laundry and stacking boxes before, but this time, not one but TWO machines went through a whole sequence of everyday jobs in the same room at the same time, which is not bad for a minute and a half of work. The list of tasks included opening doors, pushing a chair under the desk, closing a book, emptying a small trash can, and generally tidying up the joint so it looked ready for the next day.
Each F.03 stands about five feet eight inches tall, walks on two legs, and has arms with five-fingered hands; its head houses stereo cameras that feed live video straight into its central brain, with no need for external sensors or additional computers. The trick is a piece of software called Helix-02, which Figure built as a single policy that takes images from the cameras and a simple goal and translates them into a continuous succession of joint movements, with no further planners or coding required.
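Figure hasn’t published Helix-02’s internals, so the sketch below is only a hypothetical illustration of the pattern described: observations (camera frames plus a goal) go in, joint movements come out, re-evaluated every tick. The stub policy and the joint count are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

# Made-up joint count for illustration; the F.03's real figure
# isn't stated in the article.
N_JOINTS = 8

@dataclass
class Observation:
    camera_pixels: List[float]  # stand-in for stereo camera frames
    goal: str                   # e.g. "make the bed"

# A "policy" in this sketch is just: observation -> joint deltas.
Policy = Callable[[Observation], List[float]]

def stub_policy(obs: Observation) -> List[float]:
    # Placeholder for a learned vision-to-action model:
    # nudge every joint a tiny fixed amount per tick.
    return [0.01] * N_JOINTS

def control_loop(policy: Policy, obs_stream, joints: List[float]) -> List[float]:
    """Re-run the policy on every fresh observation, so behaviour
    adapts on the fly (say, to a slipping comforter) without a
    separate planner or robot-to-robot messaging."""
    for obs in obs_stream:
        deltas = policy(obs)
        joints = [j + d for j, d in zip(joints, deltas)]
    return joints
```

The point of the closed loop is that coordination emerges from each robot reacting to what it sees, rather than from explicit communication.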
Engineers basically gave the system thousands of hours of practice through simulated runs, threw in some real-world examples from earlier tests of the robots doing grocery shopping and kitchen cleaning, then added new data showing the robots working together in a room. The model simply learned the new pattern, with no need for new code.
When the comforter gets bunched up and begins to slide out from under the robots, the one on the left tilts its head slightly, and the one on the right notices and adjusts its grip just in time to catch it. Because they don’t send any messages to each other, everything happens on the fly, each robot watching the other’s body language and adjusting its own plan as it goes.
Engineers will tell you that objects that can change shape, such as a blanket, pose a significant challenge. The policy predicts when the shape will change and adjusts accordingly, all in a fraction of a second. It also keeps the robots steady as they reach, stride and turn; it was interesting to see one of them stand on one leg to press a pedal while the other just kept walking.
MUFG, Mizuho, and SMFG would be the first Japanese institutions added to Anthropic’s restricted Project Glasswing rollout, a source familiar with the matter told Reuters
Japan’s three megabanks are set to gain access to Claude Mythos, Anthropic’s vulnerability-hunting AI model, within roughly two weeks, a source familiar with the matter told Reuters on Tuesday.
It would be the first time a Japanese company has been granted entry to the restricted preview, which has so far been confined to Anthropic’s American and a handful of European partners.
Mitsubishi UFJ Financial Group, Mizuho Financial Group, and Sumitomo Mitsui Financial Group were informed of the move during meetings in Tokyo this week with US Treasury Secretary Scott Bessent. The three lenders are expected to be onboarded by the end of May.
Mythos has been treated by regulators and chief executives as a category-shifting event since Anthropic disclosed its existence earlier this month.
The model has discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and every major web browser, and in internal testing it wrote working exploits, including chains that escape both renderer and operating-system sandboxes in a browser.
Mozilla last week shipped Firefox 150 with fixes for 271 vulnerabilities found by Mythos in a single evaluation pass.
Anthropic has not released the model publicly. Instead, it has run a controlled rollout under what it calls Project Glasswing, with 12 named launch partners, including AWS, Apple, Cisco, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks, and around 40 further institutions granted access on a case-by-case basis.
Tokyo is moving in parallel. Finance Minister Satsuki Katayama announced the formation of a 36-entity public-private working group on Mythos-class risks, comprising the country’s major banks, the Bank of Japan, and the Japanese units of Anthropic and OpenAI.
The group is chaired by Mizuho’s chief information security officer and is charged with identifying exposures, implementing defensive measures, and drafting contingency plans for what would amount to a co-ordinated patching push across the Japanese financial system.
For the three banks involved, the immediate question is operational. Mythos under Glasswing terms is delivered with restrictions on output disclosure, with the model used to find vulnerabilities in a partner’s own systems and to draft remediation, not to publish exploits.
The Mozilla case offers a template: 271 vulnerabilities patched in a single Firefox release after a Mythos sweep, with the model’s findings handed back to Mozilla engineers under non-disclosure rather than published.
The geopolitical layer is unusually visible. Bessent’s role in conveying the access decision in Tokyo aligns Mythos rollout with US Treasury statecraft rather than with Anthropic’s commercial channel, an arrangement that has drawn complaints from European capitals.
Eurozone finance ministers raised the issue at an Ecofin meeting last week, where no EU government had access to the model while the White House was reported to be blocking further expansion of the partner list.
Industry views on Mythos remain split. Some cybersecurity researchers have argued that the vulnerabilities Mythos surfaced are reachable through clever orchestration of public models, and that the bigger story is the rate of improvement of frontier AI in offensive cyber, not Mythos itself.
Others, including Anthropic chief executive Dario Amodei, have described the moment as a “cyber moment of danger” that justifies the access controls.
Anthropic and the three Japanese banks did not immediately respond to requests for comment, according to the Reuters source’s account.
Lady Gaga’s “Mayhem Requiem” filmed live performance will stream on Thursday, May 14, via Apple Music Live and at select AMC theaters across the United States.
At 11:00 p.m. Eastern / 8:00 p.m. Pacific, Lady Gaga fans can head to the Apple Music app on their iPhone, iPad, Mac, Apple TV, or in-browser at music.apple.com to tune into an exclusive stream of the Mayhem Requiem filmed live performance. In addition to streaming on the app, 15 select AMC theaters across the U.S. will show the performance at the same time.
The premiere is free for anyone to watch; no Apple Music subscription is required. However, Apple Music subscribers will be able to watch the performance on demand after the event is over.
Apple Music describes the event:
“The opera house from Lady Gaga’s MAYHEM Ball has been reduced to rubble— and now it’s time for MAYHEM Requiem, a celebration and musical reimagining of her sixth album.”
It’s worth noting that the filmed live performance isn’t actually live, either. It was recorded on January 14 at Los Angeles’ Walter Theater.
A live album of all songs mastered in spatial audio will be available on Apple Music. Fans can unlock bonus content, like wallpapers and Apple Watch faces, through the Shazam app by identifying any Lady Gaga song.
A week ahead of the Google I/O event, during the Android Show stream, Google teased some iPhone-friendly Android features that we already knew about.
Google I/O 2026 is taking place on May 19 and 20, and the search giant is warming up for its biggest presentation of the year. To prepare its users for that event, it held a smaller presentation on Tuesday about Android. The Android Show I/O Edition 2026 was a 40-minute prerecorded stream, introducing a number of changes to Google’s ecosystem. There was obviously a lot of Google, Chromebook, and Android-specific content, but also some that was Apple-related in nature.
The organisation plans to use the investment to accelerate the application of its AI model at scale.
AI-powered drug design and development company Isomorphic Labs has announced the raising of $2.1bn in Series B funding. The round was led by Thrive Capital and includes participation from existing backers Alphabet and GV alongside new investors MGX, Temasek, CapitalG and the UK Sovereign AI Fund.
Founded in 2021 and led by CEO Demis Hassabis and its president Max Jaderberg, Isomorphic Labs is headquartered in London and has additional premises in Cambridge, Massachusetts and Lausanne, Switzerland. The company, which is a spin-off from Google DeepMind, an AI research lab acquired by Alphabet in 2014, aims to address the challenges of drug discovery using AI technology.
Isomorphic Labs intends to put the recently raised funds towards the continued development and deployment of its AI drug design engine (IsoDDE) and the acceleration and expansion of its pipeline of therapeutic programmes. Additionally, the funding will support current hiring targets.
Commenting on the announcement Ruth Porat, the president and chief investment officer at Alphabet and Google said, “The application of AI in healthcare offers a profound opportunity.
“Isomorphic Labs has already made extraordinary progress in harnessing AI to accelerate drug discovery, and we are excited by this momentum and the early promise of the technology platform. This trajectory is encouraging, and this funding will be used to accelerate the work and bring important interventions to market with greater speed.”
Jaderberg added, “This milestone is built on the strength of our AI drug design engine, which has already proven its worth across our internal programmes by hitting key milestones and identifying viable candidates with unprecedented speed.
“Our drug design engine works, and it’s giving us a repeatable way to design new medicines for a wide range of diseases, building a future of medicine that was previously out of reach.”
Reportedly, Isomorphic expects to run its first clinical trials by the end of 2026, a delay from the CEO’s earlier target of having AI-designed drugs in trials by the end of 2025.
In late April, Alphabet was among the large-scale organisations posting positive quarterly reports. Alphabet beat revenue expectations for the past quarter, led by its growing cloud business, which rose 63pc to hit $20bn. Consolidated revenue grew 22pc to nearly $110bn.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
It’s possible that Hackaday readers include the world’s largest community of people who have designed their own CPU. We have featured many here, but not so many of them have gone on to power an everyday project. Step forward [Baltazar Studios], then, with a scientific calculator sporting a self-designed CPU on an FPGA.
The calculator itself is nice enough, with a smart 3D printed case, an OLED display which almost evokes a VFD, and very well made buttons. But it’s the CPU which is of most interest, because while it follows a conventional Harvard architecture with a 12-bit instruction set, it works with 4-bit nibbles. This choice follows one used by HP in their calculator designs, seemingly because it can be optimised for the binary coded decimal which the calculator uses.
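The BCD connection is worth unpacking: each decimal digit fits exactly in one 4-bit nibble, so a nibble-wide data path can add calculator numbers one digit at a time with a decimal adjust, much as HP’s calculator hardware did. Here is a rough Python sketch of the idea (illustrative only, not [Baltazar Studios]’ actual instruction set):

```python
def bcd_add(a_digits, b_digits):
    """Add two numbers stored as lists of decimal digits
    (least-significant digit first), one 4-bit nibble at a time.
    Each BCD digit occupies exactly one nibble, which is why a
    4-bit data path suits a calculator so well."""
    n = max(len(a_digits), len(b_digits))
    a = a_digits + [0] * (n - len(a_digits))
    b = b_digits + [0] * (n - len(b_digits))
    out, carry = [], 0
    for da, db in zip(a, b):
        s = da + db + carry
        if s > 9:            # decimal adjust, as BCD hardware does
            s -= 10
            carry = 1
        else:
            carry = 0
        out.append(s & 0xF)  # each result digit is one nibble
    if carry:
        out.append(1)
    return out
```

For example, adding 99 and 1 digit-by-digit produces two carries and a new leading digit, exactly the ripple a nibble-serial ALU would perform.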
With calculators being yet another app on our smartphones or computers, there seems to be less use for standalone calculators outside of education in 2026. But if you are a calculator user there’s nothing like a calculator you made yourself, and with a CPU of your own design it has few equals. We like this project almost as much as we like the Flapulator!
Artificial intelligence didn’t roll out slowly. In fact, at times it feels like it landed all at once.
In just a few years, systems that began as internal experiments are now embedded in customer support, fraud detection, software development, and even IT infrastructure operations.
AI is now part of the operational backbone of modern enterprises.
But there’s a problem.
Anand Kashyap
CEO and co-founder, Fortanix.
While AI capabilities have advanced, the way we secure them hasn’t kept up.
Most organizations are still applying traditional security models to a fundamentally different kind of workload, and it’s leaving a critical gap at runtime, or the exact moment when AI systems do their work.
The Illusion of Coverage
For years, enterprise security has focused on two primary states of data: when it’s stored and when it’s moving. That means encryption for data at rest and in transit, with identity and access controls for both.
These controls still matter. But there’s a third state that’s far more complex and far less protected: data in use.
When an AI model runs, sensitive data is actively processed in memory. Model weights, which are often the most valuable intellectual property an organization owns, are loaded into memory. Prompts, responses and contextual data are generated and transformed in real time.
In most environments, all of that becomes visible to the underlying system. The uncomfortable reality is that even well-secured environments can expose their most valuable assets at the moment they’re being used.
Where AI Security Actually Breaks
When security teams investigate AI-related risks, the root cause rarely traces back to perimeter defenses. The issues tend to emerge deeper in the lifecycle across three key phases:
1. Training: When data quietly leaks into models. Training pipelines span storage systems, shared compute environments, orchestration layers and debugging tools. They can be messy: data moves constantly, intermediate artifacts are created and cached, and logs accumulate quickly.
In this environment, sensitive information might surface in unexpected places. Models themselves may unintentionally retain elements of the sensitive data they were trained on. And model weights, which encapsulate that learning, are often handled more casually than they should be.
This all creates a subtle but serious risk where exposure doesn’t always come from a direct attack. Sometimes it comes from normal development practices.
2. Inference: An overlooked exposure layer. Once a model is deployed, attention shifts to inference, or the point at which inputs become outputs.
On the surface, it looks simple. But in practice, inference workflows involve multiple streams of sensitive data, including user prompts and queries, generated responses, internal enterprise data retrieved to ground outputs, and the model itself.
Much of this data is processed through monitoring tools, logging systems and debugging pipelines, often in plaintext.
Even without a breach, sensitive information can be exposed through routine operations. Troubleshooting dashboards might capture more than intended, or logs could persist longer than expected. Shared infrastructure also introduces more potential for leakage.
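One common mitigation for the plaintext-logging problem is to scrub or hash sensitive fields before they ever reach a logging or monitoring pipeline. A minimal sketch, assuming a simple PII-pattern approach (the field names and regexes here are illustrative, not a standard — real deployments tune patterns to their own data):

```python
import hashlib
import re

# Illustrative patterns only; a production scrubber would need far more.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace obvious PII with placeholders before the text is logged."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = SSN_RE.sub("[SSN]", text)
    return text

def log_inference(prompt: str, response: str) -> dict:
    """Build a log record carrying redacted text plus a hash of the
    original prompt, so requests can still be correlated across
    systems without storing the raw content."""
    return {
        "prompt": redact(prompt),
        "response": redact(response),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }

record = log_inference("My email is jane@example.com", "Noted.")
assert record["prompt"] == "My email is [EMAIL]"
```

The hash preserves the ability to deduplicate and trace a request through troubleshooting dashboards without the dashboards ever holding the plaintext prompt.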
Inference security isn’t only about blocking access. It’s about controlling what happens during execution, and most organizations aren’t doing that yet.
3. Runtime: The blind spot in modern security. The most critical and least protected phase is runtime. This is where models actually execute, encrypted data is decrypted, and model weights exist in memory. And it’s precisely where traditional security models fall short.
Even in environments with strong identity management controls and encryption policies, runtime assumes a certain level of trust in the underlying system. If that system is compromised, or even simply misconfigured, the protections around it don’t matter because keys are still released, workloads still run, and sensitive assets are still exposed.
This is why runtime is currently the weakest link, and why it has emerged as the true security boundary for AI systems.
Why the Problem Becomes Worse at Scale
As organizations expand their use of AI tools, the risks don’t just increase. They multiply. AI workloads are rarely isolated. They more commonly run across distributed environments, shared accelerators, and multi-tenant infrastructure. They interact with internal systems and external services, and they operate continuously, not intermittently.
This creates a compounding effect:
1. More data flowing through more systems.
2. More models deployed across more environments.
3. More opportunities for exposure during execution.
At the same time, the value of what’s being processed is going way up. Proprietary models are becoming core business assets, and sensitive enterprise data is being used to fine-tune outputs and drive decisions.
In this context, a single weak point at runtime becomes a major systemic risk.
Top Priority: Rethinking Trust in AI Systems
The core issue isn’t a lack of security tools. It’s a mismatch in assumptions when it comes to trusting the infrastructure AI runs on.
With traditional security, the assumption has always been that once a workload is inside a trusted environment, it can be relied upon to behave securely. But AI changes this because these systems are dynamic. They process sensitive data continuously, rely on complex stacks that are difficult to fully validate, and often run in environments that organizations don’t fully control.
In other words, crossing the perimeter isn’t the hard part anymore. Staying secure after crossing it is.
To address this, security needs to move closer to the workload itself. So, instead of focusing only on protecting access to systems, organizations need to protect what happens inside them, particularly during execution. That means:
1. Ensuring that data remains protected even while it’s being processed,
2. Preventing unauthorized access to model weights during runtime,
3. Verifying that workloads are running in trusted environments before allowing them to execute.
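The three requirements above are roughly what attestation-gated key release implements in Confidential Computing stacks: a key service hands over decryption keys only after the workload proves it is the expected code running in a genuine protected environment. A toy sketch of that flow — HMAC with a shared secret stands in for real hardware attestation, which uses CPU-signed quotes, so treat every name here as hypothetical:

```python
import hashlib
import hmac
import secrets

# Toy stand-ins: real attestation relies on CPU-signed quotes
# (e.g. from SGX, SEV-SNP, or TDX), not a shared HMAC secret.
HARDWARE_SECRET = secrets.token_bytes(32)   # known only to "hardware"
TRUSTED_WORKLOAD_HASH = hashlib.sha256(b"model-server-v1.2").hexdigest()
MODEL_KEY = secrets.token_bytes(32)         # protects the model weights

def make_quote(workload_code: bytes):
    """What the protected environment does: measure the workload,
    then sign the measurement so it cannot be forged."""
    measurement = hashlib.sha256(workload_code).hexdigest()
    signature = hmac.new(HARDWARE_SECRET, measurement.encode(),
                         hashlib.sha256).hexdigest()
    return measurement, signature

def release_key(measurement: str, signature: str):
    """What the key service does: verify the quote is genuine, then
    check the measurement against an allow-list before releasing
    the model decryption key."""
    expected_sig = hmac.new(HARDWARE_SECRET, measurement.encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected_sig):
        return None                      # quote forged or tampered
    if measurement != TRUSTED_WORKLOAD_HASH:
        return None                      # unapproved code: no key
    return MODEL_KEY

# The trusted workload gets the key; anything else is refused.
assert release_key(*make_quote(b"model-server-v1.2")) == MODEL_KEY
assert release_key(*make_quote(b"tampered-binary")) is None
```

The important property is that the key never leaves the service on the basis of network location or identity alone — only in exchange for cryptographic proof of what is running.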
This is where approaches like Confidential Computing and hardware-based isolation are making a difference. By creating protected execution environments and tying access to cryptographic verification, the industry is moving security from assumption-based trust to proof-based trust.
In simple terms: don’t trust the system. Make it prove it’s secure.
Security Has Moved to the Moment of Use
For years, organizations have invested in securing where data lives and how it moves. But with AI, the most important moment is when the model runs, and data, logic and decision-making converge in real time.
That’s where the real risks are, and that’s where security needs to be focused.
The organizations that recognize this shift early will set themselves up to scale AI safely. Those that don’t may find that their most advanced systems, built on an outdated trust model, are highly vulnerable.
In modern AI, security isn’t defined by the perimeter. It’s defined by what happens inside it.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit