Between the Nixie tube era of the 50s and 60s and the advent of the multi-digit vacuum fluorescent displays (VFDs) common in 80s and 90s consumer technology, there was a brief period in the early 70s when single-digit VFDs were commonplace. Superficially these devices look like Nixie tubes, but they have a number of advantages, including lower voltage requirements, lower power consumption, and lower cost. [maurycyz] recently found a number of them salvaged from old calculators and used them to build a retro-themed clock.
[maurycyz] couldn't find datasheets for these displays, but was able to reverse-engineer each of the digits. As with vacuum tubes, there is a heater with a few ohms of resistance, and from there each segment of a digit can be deduced by probing the 13 signal wires. These are analog devices in some respects, so a lot of experimentation went into finding the optimal conditions for driving the displays. A quartz crystal handles timekeeping, with an AVR128DA28 microcontroller chosen to control the digits, using seven pins as segment drivers and four as grid drivers. Each digit uses around 0.14 watts, so with all four digits on, the clock consumes a little over half a watt. A simple wood enclosure rounds out the build.
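The firmware isn't detailed in the write-up, but seven segment-driver pins paired with four grid-driver pins implies classic display multiplexing: enable one grid at a time while the shared segment bus carries that digit's pattern, and cycle quickly enough that all four digits appear continuously lit. Here's a rough sketch in Python (the segment encodings are standard seven-segment patterns, not taken from the build, which would run C on the AVR):

```python
# Sketch of display multiplexing for four single-digit VFDs: seven
# shared segment lines, one grid line per digit, one grid enabled at a
# time. Cycled at a few hundred hertz, all digits appear lit at once.

# Standard seven-segment patterns (bits g..a) for digits 0-9.
SEGMENTS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def scan_frame(digits):
    """Return (grid_mask, segment_mask) pairs for one refresh cycle."""
    frame = []
    for position, value in enumerate(digits):
        grid_mask = 1 << position       # enable exactly one grid line
        frame.append((grid_mask, SEGMENTS[value]))
    return frame

frame = scan_frame([1, 2, 3, 4])        # one pass of displaying "12:34"

# Power check against the article's figures: ~0.14 W per lit digit,
# so roughly 0.56 W with all four digits on.
total_watts = 4 * 0.14
```

On the real hardware, each (grid, segment) pair would be latched onto the driver pins for a fraction of a millisecond before moving on to the next digit.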
Suntory Toki Whisky and Technics have partnered on Toki-O Nights, a free and ticketed event series running across London, Edinburgh, and Manchester from June to December 2026 that draws on Tokyo’s kissaten listening bar culture as its central reference point.
The series launches on 3 June at Spiritland Kings Cross before spreading across nine venues throughout the rest of the year, with events scheduled on two Wednesdays per month at spaces including Archive & Myth, Bar Shrimp, Caley Bar, Equal Parts, Jazu, Mad Cats, Mitsu, Spiritland, and The Listening Room.
Technics is curating the musical programme for each event, with vinyl-led DJ sets spanning electronic, jazz, and soul styles performed on the brand’s SL-1200GR2 turntables. Artists including Mari Kimura, Nina Yamada, and Zag Erlat of My Analog Journal are confirmed across the series.
Select venues will also host a dedicated listening station, offering a more focused audio environment separate from the main event floor where guests can sit with a Toki Highball and engage with the music at closer range.
(Image credit: Technics)
Beyond the free Wednesday DJ nights, a limited number of ticketed sessions will run alongside the main programme, covering workshops on Japanese vinyl culture and Q&A sessions with Technics audio specialists, giving attendees a closer look at the hardware and craft behind each event.
The kissaten format underpinning the series traces its roots to mid-twentieth century Japan, where dedicated listening bars gave patrons access to high-end audio at a time when personal hi-fi ownership remained broadly inaccessible.
The Toki Japanese Highball serves as the signature drink across all events, with each venue also offering its own riff on the format using Suntory Toki Whisky as the base, meaning the drinks menu will shift in character from venue to venue across the series.
Toki-O Nights is free to attend, though the organisers recommend booking a table directly through each venue given expected demand, with full event details and ticketed session information available on the Toki-O Highball website.
The future of football is being decided six floors underground at FIFA’s headquarters in Zurich.
Past the meditation suite made from Afghan onyx and the congress room that could pass for the United Nations Assembly Hall, Lenovo technology is being infused into every layer of the beautiful game in time for this summer’s World Cup.
I was the only UK journalist on a rare behind-the-scenes foray into FIFA’s inner sanctum, where offside calls, tactical analysis, and the fans’ view of the action are being overhauled by AI ahead of kick-off. Here’s what’s coming.
The VAR makeover
(Image credit: Future / James Day)
VAR’s grey stick figures are finished. All 1,200 players at this year’s tournament are being individually 3D scanned before a ball is kicked to produce a photorealistic digital twin — accurate to the millimeter — for faster, more precise offside calls.
Under the current system, a 6ft 5in Erling Haaland and a 5ft 7in Lionel Messi appear the same height. “Our mind is leading us to think: if it doesn’t look real, it’s probably not that adherent to the context,” says Dr. Valerio Rizzo, the Lenovo neuroscientist who built the system.
“For the referee, they are human beings, and their brain is like one of the fans. They see that scene. They don’t perceive the reality of that illustration, and maybe they can be biased as well.”
Rizzo opens a presentation with a reference to Permutation City, Greg Egan’s 1994 novel about digitizing human beings into a simulation. “This is like something that nowadays seems all the time closer and closer,” he says.
The AI-enabled avatars are created with 3D Gaussian Splatting, where photographs are converted into clouds of trainable particles whose position, color, and rotation are optimized until they match reality exactly.
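The optimize-until-it-matches loop at the heart of Gaussian splatting is easier to grasp with a toy analogue. The sketch below fits a handful of 1D Gaussian "particles" to a reference signal by gradient descent; the real system tunes the position, colour, rotation, and scale of millions of 3D Gaussians against photographs. All parameters and step sizes here are invented for illustration:

```python
import math

# Toy 1D analogue of Gaussian splatting. A "scene" is a few Gaussian
# particles (amplitude, centre, width); rendering sums their
# contributions at sample points; optimization nudges the parameters
# until the render matches a reference.

def render(particles, xs):
    """Sum each particle's Gaussian contribution at every sample point."""
    return [sum(a * math.exp(-((x - c) ** 2) / (2 * w * w))
                for a, c, w in particles) for x in xs]

def loss(particles, xs, target):
    """Squared error between the current render and the reference."""
    return sum((r - t) ** 2 for r, t in zip(render(particles, xs), target))

xs = [i / 10 for i in range(100)]
reference = render([(1.0, 3.0, 0.5), (0.6, 7.0, 0.8)], xs)

# Start from deliberately wrong parameters and descend the loss with
# finite-difference gradients.
particles = [[0.8, 2.5, 0.5], [0.5, 7.5, 0.8]]
lr, eps = 0.01, 1e-4
initial = loss(particles, xs, reference)
for _ in range(300):
    base = loss(particles, xs, reference)
    grads = []
    for i in range(len(particles)):
        g = []
        for j in range(3):
            bumped = [row[:] for row in particles]
            bumped[i][j] += eps
            g.append((loss(bumped, xs, reference) - base) / eps)
        grads.append(g)
    for p, g in zip(particles, grads):
        for j in range(3):
            p[j] -= lr * g[j]
final = loss(particles, xs, reference)
```

After a few hundred steps the particles drift back toward the reference parameters, which is the same principle, vastly scaled up, behind matching millions of 3D particles to a player's photographs.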
The photo capture stage of 3D Gaussian Splatting(Image credit: Future / James Day)
Dr. Valerio Rizzo at FIFA HQ in Zurich(Image credit: Future / James Day)
With three million data points per player and sub-centimeter accuracy, “It’s not a default puppet, it’s not a random shape,” says Rizzo. “It’s the actual body of the player. The chest size is the chest size. The foot length is the foot length of the player.”
When I ask how much more accurate offside decisions will be, his answer is characteristically direct: “It’s more accurate, it’s more precise, it’s more realistic. You can make up your own assumption.”
A segmentation AI strips clothing from the body post-scan so jersey colour, squad number and boots can all be changed without reworking the underlying geometry. Hair is captured as individual particle strands rather than a single mesh. Micro-movements, such as fingers, are fixed algorithmically.
Me, in FIFA’s photo capture booth (Image credit: Future / James Day)
Then comes my turn. The photo capture system is part airport scanner, part Noughties music video, when every rapper wanted to be filmed with a fisheye lens. Covers go over my feet, and I step into a cylindrical white booth plastered floor to ceiling in what look like giant QR codes. Arms out, middle finger pointing downward to a height marker, it’s impossible not to feel like Cristiano Ronaldo.
Then 36 4K cameras fire simultaneously, and it’s over in seconds.
Around 20 minutes later, I show up as a ghostly white mesh. Hit ‘texture’, and my face, my kit, even my tattoos are rendered with unnerving accuracy, like someone has put a mini me in a glass case.
Every World Cup player will go through this same process during their mandatory FIFA media day, with 28 portable rigs travelling between all 48 team base camps from June 4 to June 13.
ChatGPT for coaches
FIFA’s generative AI tactics tool, Football AI Pro (Image credit: Future / James Day)
From the scanning booth, I move to a conference suite full of screens for a first look at Football AI Pro, a generative AI tactics tool giving every competing nation the same analytical capability.
All 48 nations, from Germany to Curaçao (which has a population of just 156,000 and is the smallest nation ever to qualify for the World Cup), get it free.
It starts like ChatGPT. You type a direct question in plain English, Spanish, Arabic, or Chinese, and it responds like a human. Then you go deeper, and it plays like Football Manager on steroids.
FIFA’s generative AI tactics tool, Football AI Pro (Image credit: Future / James Day)
The live demonstration uses the PSG vs. Chelsea match from the Club World Cup. A single question returns nine attempts on goal, with heatmaps, pass maps, 3D reconstructions from the goalkeeper’s perspective, and downloadable coaching clips, all generated in seconds.
“In elite football,” says Alvaro Perez, Lenovo’s senior product manager for the system, “the difference between a question and a decision is often the difference between winning or losing.”
Lenovo’s Alvaro Perez speaking at FIFA HQ in Zurich (Image credit: Future / James Day)
The tool is built on “FIFA’s Football Language”, a knowledge graph standardizing every event in a match. “The big teams can come with an army of analysts, and analysis takes a lot of time and effort,” says Perez. “FIFA wants to democratize things so the federations with fewer resources get the same insights.”
Once enough World Cup match data exists, Football AI Pro can even analyse penalty takers and goalkeepers ahead of a shootout.
Despite similarities to large language models like ChatGPT or Claude, Lenovo claims the football-specific knowledge has been built from scratch with FIFA, though the system does use some underlying LLM architecture from as-yet-unnamed external providers. “If there is no solid answer,” says Perez, “then it will reply: sorry, we cannot find the right data to provide this type of information.” It will not guess.
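A knowledge graph that standardizes every match event is what lets one plain-English question work across any match. A heavily simplified sketch of the idea (the event names, fields, and function below are invented for illustration; FIFA's actual "Football Language" is not public):

```python
# Hypothetical sketch of a standardized event store: every match event
# is normalized to a shared vocabulary, so the same query resolves the
# same way regardless of which match or team it targets.
events = [
    {"match": "PSG vs Chelsea", "team": "PSG",     "type": "shot", "on_target": True},
    {"match": "PSG vs Chelsea", "team": "PSG",     "type": "pass", "on_target": None},
    {"match": "PSG vs Chelsea", "team": "Chelsea", "type": "shot", "on_target": False},
]

def attempts_on_goal(events, match):
    """Count shot events for a match: the structured query a question
    like 'how many attempts on goal?' would ultimately resolve to."""
    return sum(1 for e in events
               if e["match"] == match and e["type"] == "shot")
```

The refusal behaviour Perez describes maps naturally onto this model: if a question can't be resolved to events that exist in the store, the system reports that no data is available rather than guessing.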
The machine behind it all
FIFA HQ in Zurich, Switzerland(Image credit: Future / James Day)
Supporting all of this is the most complex technology deployment in sporting history.
Over 17,000 Lenovo devices and 30,000 total assets are pre-configured at hubs in North Carolina, Toronto, and Mexico City before being deployed to 16 stadiums and every FIFA venue across three countries. Open the laptop, and the right application is already there.
“Think of them as an empty shell,” says Myles Spittle, Lenovo’s services delivery lead for FIFA. “You might get a 10-minute window at a loading bay. There are security dogs, there’s a whole host of things to consider.” NSA and Secret Service protocols apply if the President attends.
Lenovo’s Myles Spittle speaking at FIFA HQ in Zurich (Image credit: Future / James Day)
A Technical Command Centre in Miami with a 60-foot LED wall monitors everything simultaneously. After the final whistle, engineers have five days to decommission everything, followed by two weeks’ leave. The stress level, says Spittle, makes that break from work non-negotiable.
For the first time, viewers will also get a genuinely immersive, stabilized first-person view from on the pitch in real time.
Referee View uses the same gyroscopic stabilization found on F1 helmet cams and is processed live in under two seconds for broadcasts worldwide. The players had better behave themselves.
As for FIFA’s tech partner, they say ‘form is temporary, class is permanent’ in football, but Lenovo won’t have the luxury of having an off day. “The World Cup doesn’t get delayed by two weeks,” says Spittle. “You either deliver, or you don’t. And don’t isn’t an option.”
Working from home has its own perils. Pets can be demanding, your back aches from hours at a desk, or you simply forget to move. There are a few apps that nudge you to move around or warn that you’re not sitting in an ideal position, but they’re easy to dismiss.
I’ve spent the better part of a decade at a home desk, iterating on the setup as I go — gaming chair, lumbar support, the works. None of it guarantees good posture.
Then I came across Isa, a desk device from German startup Deep Care that takes a different approach entirely. It tracks posture, hydration, light, sound, and movement. And it does all of it without a camera or an internet connection, which, in an era of always-on surveillance, is a meaningful differentiator.
Here’s how it works and what’s inside. Isa has a 5.5-inch IPS HD screen and looks like a table clock. It is powered by USB-C; the company supplies a power unit with it, but you can use any of your existing chargers too, as it has a power consumption rating of roughly 2.45W.
The key sensor for the device is the Time-of-Flight (ToF) 3D depth sensor on the front — the same technology used in facial recognition and some smartphone cameras — that tracks posture and movement. It also enables beta features, such as counting the number of times you’ve had water or other liquids. The company said that the sensor works in the range of 0.15 meters to 1.8 meters. That means if the device is sitting on your desk, it can measure your movement, even when you stand up and move about. It also packs several other sensors: a ToF 1D sensor, a gyroscope, a barometer, a light sensor, a sound level sensor, a CO₂/VoC sensor, and a temperature and humidity sensor.
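Deep Care hasn't published its detection logic, but a depth-only presence and movement classifier over the stated 0.15 m to 1.8 m range might look roughly like this (the threshold and function names are invented for illustration):

```python
# Hypothetical sketch of classifying a stream of ToF distance readings
# into presence/movement states. Thresholds are illustrative, not
# Deep Care's actual values.
SENSOR_MIN, SENSOR_MAX = 0.15, 1.80    # metres, per the stated ToF range

def classify(readings, movement_threshold=0.05):
    """Classify a window of distance readings (in metres).

    Returns 'away' if nothing is in range, 'moving' if the measured
    distance varies beyond the threshold, otherwise 'stationary'.
    """
    in_range = [r for r in readings if SENSOR_MIN <= r <= SENSOR_MAX]
    if not in_range:
        return "away"
    if max(in_range) - min(in_range) > movement_threshold:
        return "moving"
    return "stationary"
```

A model this simple also hints at the failure modes described later: any stationary object parked in range is indistinguishable from a person sitting still.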
Image Credits: Deep Care
Getting started is straightforward — the device asks for a few details about you and your work routine. I found it strange that there was no option to set the device to India time (or any other Asian time zone). The company said Isa currently supports only EU and US time zones. Fair enough for now — but broader time zone support, or even a simple world clock, feels like a basic expectation for a desk device.
On the screen, Isa displays your posture with a squircle (a rounded square) ring that fills or empties based on how well you’re sitting, while a water-tank-style widget tracks your drinking. If you are not sitting in the correct posture, the indicator will turn yellow. The Apple Watch-style ring is a surprisingly effective nudge — when I see yellow or red, I straighten up almost instinctively.
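The ring behaviour suggests a simple score-to-colour mapping underneath. The score scale and thresholds below are purely hypothetical (Deep Care hasn't published its scoring), but they illustrate the kind of nudge logic at work:

```python
# Hypothetical mapping from a posture score (0-100, invented scale)
# to the ring colour shown on screen.
def ring_state(posture_score):
    if posture_score >= 80:
        return "green"      # sitting well, ring fills
    if posture_score >= 50:
        return "yellow"     # starting to slouch, gentle nudge
    return "red"            # poor posture, time to straighten up
```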
The device vibrates to alert you if you’ve been slouching for too long, and I’m okay with that kind of mild shaming. That alert also indicates if you are leaning too far forward or back and helps you correct your stance.
Image Credits: Ivan Mehta
A similar widget tracks movement, and if you have been stationary for a while, Isa suggests you get up, with on-device guided exercises to follow. When you return to your desk after a break, the movement tracker resets.
Deep Care chose not to include a camera, which helps with privacy, but that comes with trade-offs.
Image Credits: Ivan Mehta
If a bottle or some other object sits between you and the sensor, it may read that as a person and log you as stationary. Pets or housemates passing by can trigger the sensor, too. Isa usually figures out that you’ve stepped away and goes to a digital clock display, but I would have liked a manual button to tell it I’m not at the desk so it stops tracking.
Because of the sensor-only approach, the device occasionally told me I’d been stationary for too long when I’d been sitting for under half an hour. These are minor inconveniences. On balance, the device made me check my posture more often than I used to, and the exercise suggestions are truly useful.
Image Credits: Ivan Mehta
To process all these features, the device uses a quad-core 2 GHz processor. It can connect to Wi-Fi for software updates, but you can turn Wi-Fi off at any time.
Deep Care was founded by three former Bosch employees and initially sold Isa directly to businesses. It recently expanded to consumers — a shift that signals confidence in the retail market for workplace wellness hardware, and a test of whether a subscription model layered onto premium hardware can find a mainstream audience.
Isa is priced at €299 ($354) with two subscription tiers. The core plan (€4.99 per month) gives you access to posture tracking, healthy sitting habit tracking, drinking habit detection, and the exercise library. The Pro plan (€7.99 per month) adds tracking of light, noise, and CO₂ levels for a healthier working environment.
The company plans to use Isa’s sensor suite to venture into mental health-related tracking. It claims that by using signals like posture, head movement, and chest movement, the device can measure breathing patterns. Paired with environmental data like noise, light, and CO₂ levels, the company wants to introduce a stress-related score.
Even if you skip the mental health features, Isa is a solid device for anyone serious about posture and movement. It isn’t cheap, and the subscription adds to the long-term cost. But if you or someone you know works from home and has been meaning to do something about their desk habits, it’s one of the more thoughtful options out there.
It’s been a while since a horror series grabbed my attention, like really grabbed my attention. We’re living in an era where genre programming feels plentiful, yet formulaic — where the algorithm can overpower originality. That’s worth saying, because I’ve discovered a new horror show that, despite its familiar-feeling aesthetic, feels fresh and original and demands my complete attention.
I’m talking about Widow’s Bay on Apple TV, and if this is the first you’ve heard of the series, the best way I can describe it is to ask, what if Parks and Recreation was created by Stephen King? If that question stopped you in your tracks, then you’re going to want to read what I have to say.
This is a show that blends the small-town sensibilities of The Andy Griffith Show with David Lynch’s Twin Peaks. It’s quaint like the beach scenes from Jaws; it’s terrifying like the shark scenes from, well, Jaws.
Bold statement, incoming: It’s the best new horror series on TV, and there’s nothing else on quite like it.
Widow’s Bay follows Tom Loftis (Matthew Rhys), the mayor of a struggling coastal town, who works tirelessly to make it the next Martha’s Vineyard. No matter how hard he tries, though, the fishing village just can’t measure up to the iconic tourist attraction. Aside from the conflict and complications that come with working a municipal job such as this, Tom’s drive to successfully revamp the town is overshadowed by local legends of monsters, boogeymen and other such omens stemming from a centuries-long curse.
To delve deeper into these details would be to unleash major story spoilers and, since the series is still airing — new episodes hit Apple TV every Wednesday — I’d prefer not to ruin the experience for you. What I will say, though, is that Widow’s Bay should be a bigger part of the conversation. It’s a bona fide sleeper hit, and audiences should wake up and take notice.
If I were to categorize Widow’s Bay, I’d say it is a horror-comedy. But not in the overt, blood-spattered, wisecracking manner of most horror-comedies. There’s a Twin Peaks/Picket Fences quality to the show that allows the humor to jump out and surprise you in the most unexpected places.
Kate O’Flynn, Matthew Rhys and Stephen Root star in Widow’s Bay on Apple TV. (Image credit: Apple TV)
While the comedy isn’t really laugh-out-loud funny — it’s way more peculiar and quirky than anything — there have been a few moments where I’ve cackled uncontrollably at the stuff playing out on screen. You can tell there’s a deep understanding of the horror genre and its tropes from those behind the scenes making this show, which leads to smart choices and moments that feel like inside-baseball winks at the audience.
Widow’s Bay is in on the joke, and that’s what makes it so good.
The Apple TV series hails from creator and writer Katie Dippold, who cut her teeth on Parks and Recreation, which makes complete sense when you dip into this show. She’s enlisted directors like genre faves Ti West and Hiro Murai to contribute their visual sensibilities to the mix.
When it comes down to it, though, the real standout element of Widow’s Bay is its cast. Matthew Rhys, who showcased his insidious side in Netflix’s The Beast in Me last year, flips expectations and leans into some big underdog energy as the town’s mayor. The comedy that arises from his bewilderment isn’t overt because his internal conflict stems from deep-seated pain and the denial that accompanies it. This combination, along with his drive to make the town better, is the right formula to make the viewer root for him and go on this wild ride.
Stephen Root is a pleasure to watch as Wyck, the hardened fisherman who carries the history of the island on his back. I mentioned Jaws earlier, and several elements throughout the series honor the classic film. Root’s performance is one of them as he dives into the Quint-like quirks that drive Wyck, and he’s so good here that it’d be worth watching the series just for him.
Kate O’Flynn stars in Widow’s Bay on Apple TV. (Image credit: Apple TV)
That said, it’s Kate O’Flynn’s Patricia who steals the show. The awkward town hall assistant is the energetic middle ground between Tom and Wyck, and her work in the series is a star turn. Patricia has layers beneath her grumpy exterior that command the screen — whether she’s hosting a Wiccan death party, running for her life in the middle of the night or holding a shotgun to a monster’s burnt ashes.
Oh, and there are monsters. Widow’s Bay has an assortment of creepy threats, from ghosts and killer clowns to an undead pilgrim and the murderous boogeyman I alluded to above.
That sentence might make it sound like the show just throws an assortment of scary monsters at the screen to see what sticks. Let’s be real: there are moments when it feels that way, but the series sprinkles its lore throughout the episodes, pointing to a deeper curse that has plagued this island for centuries.
Widow’s Bay is an amalgamation of so many genre elements and references to other things that, in the wrong hands, it could easily come off as formulaic. But it isn’t. This is a show that feels familiar but remains fresh. It’s scary like Stephen King at his best; it’s creepy like a ghost story at a campout. Through it all, it’s a surprisingly fun ride.
A BCG survey of 625 CEOs and board members found that 61% of chief executives believe their boards are rushing AI transformation. Three-quarters of board members rate their AI knowledge as adequate, but nearly 40% of CEOs disagree, and more than half say hype is distorting boardroom judgment.
Sixty-one per cent of chief executives say their boards are pushing AI transformation too fast, according to a global survey of 625 leaders published by Boston Consulting Group. The research, titled Split Decisions, polled 351 CEOs and 274 board members at companies with at least $100 million in annual revenue and found a consistent pattern: boards and CEOs agree that AI matters, but disagree on how quickly it should be deployed, how well boards understand it, and how much of a CEO’s job now depends on delivering returns from it.
The findings land at a moment when AI FOMO has become a dominant force in corporate strategy. More than half of the CEOs surveyed said that hype around artificial intelligence is distorting their boards’ judgment, and nearly 40 per cent said their boards lack an informed view of how AI is reshaping growth strategy. One in three said their board overestimates the human capabilities that AI can replace.
The confidence gap
The survey’s most striking finding is the disconnect between how board members rate their own AI knowledge and how their CEOs rate it. Three-quarters of board members said their AI understanding is on par with or ahead of their peers. CEOs were far less impressed. The implication is that many boards are making consequential decisions about AI strategy on the basis of knowledge their chief executives consider inadequate.
BCG’s Julie Bedard, a managing director and partner, said the gap can be closed if CEOs take direct responsibility for board education. Rather than delegating AI briefings to a chief technology officer or an outside consultant, she argued, CEOs should personally lead upskilling sessions that demonstrate what current tools can and cannot do, and should frame AI in terms that distinguish between tasks where the technology substitutes for humans and tasks where it complements them.
That distinction is more important than it sounds. Boards that treat AI as a wholesale replacement for human labour are likely to push for faster, broader deployment than the technology can support. Boards that understand AI as a complement to human work are more likely to approve investments that are scoped to realistic outcomes. The survey suggests that too many boards are in the first camp, and that the consequences of FOMO-driven investment decisions in AI are becoming harder to ignore.
The accountability mismatch
The survey also exposed a gap in how CEOs and boards perceive accountability for AI results. CEOs estimated that 35 per cent of their performance evaluation now depends on delivering AI-related returns on investment. Board members put the figure at 27 per cent. The eight-percentage-point difference suggests that CEOs feel more pressure to show AI results than their boards realise they are applying.
This matters because it shapes behaviour. A CEO who believes more than a third of their evaluation hinges on AI outcomes has a strong incentive to prioritise AI projects, even if those projects are premature or poorly scoped. A board that believes the figure is lower may not understand why its CEO is resisting calls to move faster, or may underestimate the operational risk of accelerating deployment to meet perceived expectations.
Judith Wallenstein, BCG’s managing director and senior partner who leads its global CEO Advisory practice, said CEOs need to bring their boards along on the same learning journey they have taken, but compressed and focused on building genuine understanding rather than surface-level awareness. The engineering and operational realities of AI deployment are considerably messier than the boardroom presentations that often precede investment decisions.
What the survey does not say
It is worth noting what the research does not cover. The survey does not measure whether the CEOs who say their boards are rushing are themselves correct in their caution, or whether some boards are right to push harder. It is possible that in certain industries, faster AI adoption is exactly the right strategy and that CEO resistance reflects organisational inertia rather than sound judgment. The data captures a perception gap, not a verdict on who is right.
The survey also does not break down results by industry, geography, or company size beyond the $100 million revenue threshold, which limits the conclusions that can be drawn about specific sectors. A board pushing AI transformation at a financial services firm faces a very different risk profile from a board doing the same at a manufacturing company, and the survey treats both identically.
What the research does establish is that the most senior leaders at large companies are not aligned on the most consequential technology investment of the current era. Approximately 80 per cent of both CEOs and board members agreed that prospective board candidates should be required to demonstrate a measurable understanding of how AI can reshape their industry, a finding that suggests both groups recognise the knowledge gap even if they disagree on its severity.
The harder question
The deeper issue the survey raises is whether traditional board governance is suited to decisions about AI at all. Boards typically meet a handful of times per year, rely on management presentations for information, and are composed of members whose primary expertise may lie in finance, regulation, or sector-specific operations rather than technology. That structure worked well when the pace of technological change allowed for quarterly deliberation. It is less clear that it works when the questions that matter most about AI require technical fluency that most board members do not have.
BCG’s recommendation, that CEOs should personally educate their boards, is practical but also reveals the problem. If the chief executive is the primary source of a board’s AI understanding, the board’s ability to independently evaluate the CEO’s AI strategy is compromised. The survey does not propose a solution to this structural tension, but it does make the tension visible.
For companies trying to scale AI in 2026, the message is that alignment at the top is not optional. Boards that push too fast risk approving projects that fail to deliver returns. CEOs that move too slowly risk losing competitive ground. And for both groups, the temptation to let AI substitute for clear thinking rather than support it is a risk that no survey can fully quantify.
A recent poll from Gallup shows 70 percent of Americans oppose a data center in their local area, including 48 percent who are strongly opposed. That 70 percent figure is tied to several concerns, environmental questions and quality of life chief among them, and it’s up 18 points (!) in just two months, since Gallup last asked the same question in March.
Nonetheless, data centers keep going up at a rate that is nothing short of astonishing.
According to one estimate, more than 4,000 data centers have already been built across the country. More than 2,000 more are currently under construction.
That alone shows just how quickly artificial intelligence, workforce automation, and the data centers that power these new technologies are becoming one of the can’t-miss issues in our current political landscape. And still, President Donald Trump and the White House have seemingly chosen to stand aside on AI regulation.
On the Democratic side, it’s an open question what comes next. Politicians like Sen. Bernie Sanders (I-VT) have called for a nationwide moratorium on data centers in order to institute more consumer protections. Others, like Sen. Ruben Gallego (D-AZ), are less definitive: He told me recently that artificial intelligence is a “necessary evil” of our modern age, and building data centers is part of that equation.
With all that uncertainty, producer Kasia Broussalian and I decided to sort through the mess ourselves. We headed to Vineland, a city in southern New Jersey where a new data center is under construction.
We talked to homeowners who live near the data center and a Democrat running on an AI reform platform, and went to a town hall to hear from community members who wanted to voice their concerns. One person brought up rising electricity bills, while another said the data center has made it impossible for her to sell her home. Many had a general anxiety about the global rise of AI.
However, the most universal complaint was not technically about artificial intelligence at all. It was about a political process that residents said did not include them. At the town hall, people said they were shocked by the data center’s initial construction, and want more transparency about relationships between elected officials and these big tech companies.
They also urged politicians to act proactively, rather than waiting for a crisis before imposing regulation. It wasn’t just that they didn’t like the data center itself: They were upset at how it seemed like a physical manifestation of whose interests are prioritized in politics.
Read on for what some of those town hall attendees had to say, lightly edited for length and clarity. As always, there’s much more in the full show, so listen to America, Actually wherever you get your podcasts or watch it on Vox’s YouTube channel.
How many of you right now feel like you got information about the data center before the construction started?
Can someone raise their hand and just tell me what their biggest concern was once they started hearing about it?
Angela Bardoe, Cumberland County, New Jersey, resident: Well, when I saw it, I thought it was the ugliest thing I’ve ever seen. So, that part of East Island is beautiful farmland — was beautiful farmland — but then of course I’ve thought about a lot of my friends that live out that way and how it was going to impact their everyday life.
Most people live there because they love the farmland.
Now I know about the structure, and I know about the energy concerns. I wanted to ask about AI generally: How many of you would say that your concerns about this data center are tied to larger concerns, and some anxiety, about AI?
Fred Barsuglia, Clayton, New Jersey, resident: The internet brought us the best of the world and the worst in the world. AI is going to do the same thing. It’s already begun. I scroll through Facebook and there’s AI all over the place. Some of it’s cute little bunnies and cats, but a lot of the other stuff, you know, is bad.
Again, our government is very slow to react. There has to be some regulations.
Where would you now put this on your scale of issues?
There’s so much happening right now, whether it be the war in Iran or tariffs or just everything in general. I wonder where data centers, and this specific local reality, map onto your list of priorities.
Angela: I would say most of the topics fall into two categories. Is it benefiting people, or is it benefiting the elite and the money that’s going into their pockets? We see people trading before the war’s announced and they’re benefiting from it. And I just find it all very disgusting.
Louise Thigpen, Cumberland County resident: They’re gambling.
Angela: Yeah. I mean, they’re gambling insider information.
I hear what you’re saying.
On one hand, there’s a way of thinking about this that sorts politics into one bucket or another, but you’re saying it actually feels like, in general, they’re not responding to you, the regular person, and that’s across a lot of issues.
Angela: Well, yes. That’s how I see it.
Fred: I feel the same way. It’s because everything relates from the top down and what we’re getting from the top has spread all the way to the local level.
Louise: And it isn’t good.
Thank you all for entertaining our questions. It’s illuminating to hear the way these issues are connected for people. And I think just this general sentiment that folks feel unheard.
Louise: And we don’t feel that way. We are that way.
The Russian hacker group Secret Blizzard has developed its long-running Kazuar backdoor into a modular peer-to-peer (P2P) botnet designed for long-term persistence, stealth, and data collection.
Secret Blizzard, whose activity overlaps that of Turla, Uroburos, and Venomous Bear, has been associated with the Russian intelligence service (FSB) and is known for targeting government and diplomatic organizations, defense-related entities, and critical systems across Europe, Asia, and Ukraine.
The Kazuar malware has been documented since 2017, and researchers found that its code lineage goes as far back as 2005. Its activity has been linked to the Turla espionage group working for the FSB.
Microsoft researchers analyzed a recent variant of Kazuar and observed that the malware now operates using three distinct modules: kernel, bridge, and worker.
The Kernel module is the central coordinator that manages tasks, controls other modules, elects a leader, and orchestrates communications and data flow across the botnet.
The leader is essentially one infected system within a compromised environment or network segment, which communicates with the command-and-control (C2) server, receives tasks, and forwards them internally to the other infected systems.
Non-leader systems enter “silent” mode and don’t communicate directly with the C2. This results in better stealth and reduced detection surface.
“The Kernel leader is the one elected Kernel module that communicates with the Bridge module on behalf of the other Kernel modules, reducing visibility by avoiding large volumes of external traffic from multiple infected hosts,” explains Microsoft.
The leader-election process is internal and autonomous, based on metrics such as each host’s uptime, reboot count, and interruption count.
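Microsoft does not publish the exact algorithm, but a deterministic comparison over those counters is the natural shape for such an election. A minimal, hypothetical Python sketch (all names invented, assuming each node shares its counters with its peers, and assuming longer uptime and fewer reboots/interruptions make a better candidate):

```python
from dataclasses import dataclass

@dataclass
class NodeStats:
    node_id: str
    uptime_seconds: int      # how long this host has been running
    reboot_count: int        # fewer reboots suggests a more stable host
    interruption_count: int  # fewer connectivity interruptions is better

def elect_leader(nodes: list[NodeStats]) -> str:
    """Pick the most stable node: longest uptime, then fewest reboots,
    then fewest interruptions, with node_id as a deterministic tiebreak."""
    best = max(
        nodes,
        key=lambda n: (n.uptime_seconds, -n.reboot_count,
                       -n.interruption_count, n.node_id),
    )
    return best.node_id
```

Because every node evaluates the same ordered key over the same shared data, each one independently arrives at the same leader without any extra coordination traffic.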
The Bridge module acts as the external communications proxy that relays traffic between the elected Kernel leader and the remote C2 infrastructure using protocols like HTTP, WebSockets, or Exchange Web Services (EWS).
Kazuar’s internal communications diagram Source: Microsoft
Internal communications rely on IPC (inter-process communication), including Windows Messaging, Mailslots, and named pipes, blending well with normal operational noise. The messages are AES-encrypted and serialized with Google Protocol Buffers (Protobuf).
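The report does not include the wire format, but length-prefixed framing is the standard pattern for message-oriented traffic over pipes and mailslots. A simplified stdlib-only sketch of that framing (JSON standing in for Protobuf, and the AES layer omitted; this is an illustration of the pattern, not Kazuar’s actual format):

```python
import json
import struct

def pack_message(msg: dict) -> bytes:
    """Serialize a task message and prefix it with a 4-byte big-endian
    length, the usual framing for pipe/mailslot transports."""
    body = json.dumps(msg).encode()
    return struct.pack(">I", len(body)) + body

def unpack_message(buf: bytes) -> dict:
    """Read the length prefix, then decode exactly that many bytes."""
    (length,) = struct.unpack(">I", buf[:4])
    return json.loads(buf[4:4 + length].decode())
```

In the real malware, the serialized body would additionally be AES-encrypted before framing and decrypted after, so on the wire it blends into opaque binary traffic.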
The Worker module performs the actual espionage operations, such as:
keylogging
capturing screenshots
harvesting data from the filesystem
performing system and network reconnaissance
collecting email/MAPI data (including Outlook downloads)
monitoring windows
stealing recent files
The collected data is encrypted, staged locally, and later exfiltrated through the Bridge module.
Types of system info Kazuar collects Source: Microsoft
Microsoft underlines Kazuar’s versatility: the malware now supports 150 configuration options that let operators enable or disable specific security bypasses, schedule tasks, control the timing of data theft and the size of exfiltration chunks, perform process injection, manage tasks and command execution, and more.
Regarding the security bypass options, Kazuar now offers Antimalware Scan Interface (AMSI) bypass, Event Tracing for Windows (ETW) bypass, and Windows Lockdown Policy (WLDP) bypass.
Secret Blizzard typically seeks long-term persistence on target systems for intelligence collection, exfiltrating documents and email content of political importance.
Microsoft recommends that companies focus their defense on behavioral detection rather than static signatures, as Kazuar’s modular and highly configurable nature makes the threat particularly evasive.
OpenAI has signed deals with fintech startups, tech giants and even Disney, but it’s breaking new ground by announcing a “world’s first partnership” with the country of Malta. In a post on its website, OpenAI said that it would provide ChatGPT Plus for one year to every Maltese resident or citizen.
“Malta is the first country to launch a partnership of this scale because we refuse to let our citizens stay behind in the digital age,” Silvio Schembri, Malta’s minister for Economy, Enterprise and Strategic Projects, said in a statement. “We are putting our people at the very forefront of global change.”
Malta’s approximately 574,250 residents will have to complete a course developed by the University of Malta before activating the ChatGPT Plus subscription, which costs $20 a month in the US. The course teaches the basics of AI, but also how to use the technology responsibly, whether at home or at work. Interested residents will also need an active EU eID account to claim the subscription. According to OpenAI, the first phase of the program will launch this month, with the Malta Digital Innovation Authority managing distribution to eligible participants. OpenAI added that the program will scale up as more Maltese residents, and Maltese citizens abroad, complete the course.
While OpenAI kicks off a new program in Malta, it’s pausing its Stargate data center plans in the UK. The project was designed to help the UK build out AI infrastructure, but OpenAI attributed the stoppage to high energy costs and regulatory issues.
Rivian’s founder is running three companies with $12.3B raised. Mind Robotics just hit $1B at a $3.4B valuation.
RJ Scaringe has raised more than $12.3 billion across three startups, and the pace is accelerating. The Rivian founder and CEO, who holds a doctorate in mechanical engineering from MIT, is now simultaneously running an electric vehicle manufacturer, an autonomous micromobility company, and an industrial AI robotics startup, each attracting capital at a speed that would be remarkable for any single venture.
The latest data point arrived this week when Mind Robotics, Scaringe’s industrial robotics company, closed a $400 million round led by Kleiner Perkins, bringing its total funding to more than $1 billion and its valuation to $3.4 billion. The venture arms of Volkswagen and Salesforce also participated. Mind Robotics was founded in 2025, initially as an internal Rivian project called “Project Synapse,” and has raised $115 million in seed funding, $500 million in a Series A in March, and now $400 million more in under two months. The company is building AI-powered robots designed to handle the dexterous, reasoning-intensive manufacturing tasks that conventional factory automation cannot, using Rivian’s own production lines as a live training environment.
Scaringe’s second venture, Also, is an electric micromobility company spun out of Rivian in 2025. It has raised more than $300 million, including a $200 million Series C led by Greenoaks in March that valued the company at over $1 billion. DoorDash invested alongside a multi-year commercial agreement to deploy Also’s purpose-built autonomous small EVs for last-mile delivery. The company’s product lineup includes a $3,500 e-bike and a four-wheeled cargo EV designed to fit in a bike lane.
The overwhelming majority of the $12.3 billion, more than $11 billion, went into Rivian itself, most of it between 2018 and the company’s blockbuster IPO in November 2021. Rivian was founded in 2009 as Mainstream Motors and operated in near-obscurity for nearly a decade before revealing its R1T truck and R1S SUV prototypes at the 2018 Los Angeles Auto Show. The money followed quickly: Amazon led a $700 million round in early 2019, Ford invested $500 million, and by the end of that year Rivian had closed four funding rounds. A $2.5 billion raise in July 2020 and a $2.65 billion raise six months later preceded the IPO, which generated nearly $12 billion in gross proceeds at $78 per share and briefly valued the company at more than $100 billion.
Today, Rivian’s market capitalisation stands at approximately $18.2 billion, a significant decline that reflects the broader struggles of the EV sector. But the company continues to attract major partnerships. Volkswagen has overtaken Amazon as Rivian’s largest shareholder through a $5.8 billion software joint venture, and Uber struck a deal worth up to $1.25 billion for up to 50,000 autonomous Rivian R2 robotaxis across 25 cities by 2031.
What makes Scaringe unusual is not just the quantity of capital but the breadth. Supersized seed rounds have become more common in recent years, but they have generally gone to defence tech or AI startups founded by former OpenAI or Anthropic employees, not to electric micromobility or industrial robotics. Eclipse, one of Scaringe’s biggest backers and a lead investor in both Also and Mind Robotics, credits his combination of engineering depth and product instinct. Jiten Behl, partner at Eclipse and a former Rivian executive, described Scaringe’s ability to communicate a vision without overselling as “an art.”
The comparison to other serial entrepreneurs who have raised billions across multiple ventures (Elon Musk, Sam Altman, Palmer Luckey) is inevitable but imprecise. Multiple investors told TechCrunch that Scaringe’s distinguishing quality is the absence of self-promotion. “It’s not about him,” one insider said. “When you talk to him, he has enthusiasm about the product that is completely external.” Joe Fath, also at Eclipse, noted that Scaringe “has the rare combination of being a truly great engineer while also having an exceptional instinct for product design,” a pairing he described as “incredibly uncommon.”
The question that follows from $12.3 billion across three companies, all run by the same person, is whether Scaringe can sustain the pace. He travels between Palo Alto, Irvine, Rivian’s factory in Normal, Illinois, and a second factory under construction in Georgia. Mind Robotics is scaling rapidly, Also is preparing to deliver its first US products in 2026, and Rivian is ramping the R2 SUV while navigating a hostile tariff environment that has seen at least a dozen EV models cancelled or paused this year.
The industrial robotics market is attracting capital at an extraordinary rate, with companies from 1X to Unitree to Foundation Industries all raising hundreds of millions for physical AI systems. Mind Robotics’ pitch, that it has access to a live high-volume factory floor for training data, gives it a structural advantage most competitors lack. Whether that advantage translates into a durable business depends on execution at a scale that even Scaringe has not yet attempted.
Behl framed the question differently. “The big question is, how much can he do?” he said. “That’s a question that already assumes he’s reaching his limit. The thing is, he doesn’t look at it that way.”
Four chainable OpenClaw flaws dubbed “Claw Chain” let attackers weaponise the agent’s own sandbox. Patches are live.
Cybersecurity researchers at Cyera have disclosed four vulnerabilities in OpenClaw that, when chained together, allow an attacker to steal sensitive data, escalate privileges, and establish persistent control over a compromised host. The flaws, collectively dubbed “Claw Chain,” affect OpenClaw’s OpenShell managed sandbox backend and its MCP loopback runtime. All four have been patched in OpenClaw version 2026.4.22.
The attack chain works in four stages. First, a malicious plugin, prompt injection, or compromised external input gains code execution inside the OpenShell sandbox. Second, two of the vulnerabilities, CVE-2026-44113 and CVE-2026-44115, are exploited to expose credentials, secrets, and sensitive files. Third, CVE-2026-44118 is used to obtain owner-level control of the agent runtime by exploiting an improperly validated ownership flag. Fourth, CVE-2026-44112, the most severe of the four with a CVSS score of 9.6, is used to plant backdoors, modify configuration, and establish persistence outside the sandbox.
The most architecturally interesting flaw is CVE-2026-44118, which stems from OpenClaw trusting a client-controlled flag called senderIsOwner without validating it against the authenticated session. Any non-owner loopback client could impersonate an owner and gain control over gateway configuration, cron scheduling, and execution environment management. The fix, according to OpenClaw’s advisory, involves issuing separate owner and non-owner bearer tokens, with senderIsOwner now derived exclusively from the authenticating token rather than from a spoofable header.
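In pseudocode terms, the difference between the flawed and patched behaviour looks something like this (a hypothetical Python sketch, not OpenClaw’s actual code; token values are invented):

```python
# Tokens issued at session setup: one kind for the owner, one for clients.
OWNER_TOKENS = {"tok-owner-abc"}    # hypothetical owner bearer token
CLIENT_TOKENS = {"tok-client-xyz"}  # hypothetical non-owner bearer token

def is_owner_vulnerable(request_headers: dict) -> bool:
    """Pre-patch pattern: trust a client-supplied flag.
    Any client can send senderIsOwner=true and be believed."""
    return request_headers.get("senderIsOwner") == "true"

def is_owner(request_headers: dict) -> bool:
    """Patched pattern: derive ownership from which bearer token
    actually authenticated the request, never from a spoofable flag."""
    auth = request_headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    return token in OWNER_TOKENS
```

The general lesson is an old one: authorization attributes must be bound to the authenticated credential, never echoed back from client-controlled input.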
The two TOCTOU (time-of-check/time-of-use) race conditions, CVE-2026-44112 and CVE-2026-44113, allow attackers to bypass sandbox restrictions and redirect file writes or reads outside the intended mount root. CVE-2026-44115 exploits an incomplete allowlist by embedding shell expansion tokens inside a heredoc body, enabling execution of commands that would otherwise be blocked at runtime.
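A TOCTOU bug arises when a path is validated at one moment and used at a later one, leaving a window in which an attacker can swap in a symlink or rename a directory. A defensive sketch of the general mitigation (hypothetical, not OpenClaw’s actual patch): resolve the path first, verify containment against the resolved root, then open the already-resolved path with O_NOFOLLOW where the platform supports it:

```python
import os

def safe_write(mount_root: str, relative_path: str, data: bytes) -> None:
    """Write data under mount_root, rejecting paths that resolve
    outside it. Checking the *resolved* path and opening with
    O_NOFOLLOW narrows the check-to-use window for symlink swaps."""
    root = os.path.realpath(mount_root)
    target = os.path.realpath(os.path.join(root, relative_path))
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"path escapes sandbox root: {relative_path}")
    # O_NOFOLLOW refuses to open a symlink as the final path component.
    flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_NOFOLLOW", 0)
    fd = os.open(target, flags, 0o600)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)
```

This narrows rather than fully eliminates the race on every platform; fully robust sandboxing typically also uses directory file descriptors (`openat`-style APIs) so each path component is resolved relative to an already-opened handle.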
What makes Claw Chain particularly concerning is that each step looks like normal agent behaviour to traditional security controls. “By weaponizing the agent’s own privileges, an adversary moves through data access, privilege escalation, and persistence, using the agent as their hands inside the environment,” Cyera said. The approach broadens the blast radius while making detection significantly harder, because the malicious actions are indistinguishable from the legitimate operations the agent is designed to perform.
This is not the first time OpenClaw’s security has come under scrutiny. In January, a critical remote code execution vulnerability (CVE-2026-25253) allowed any website a user visited to silently connect to the agent’s local server through an unvalidated WebSocket, chaining a cross-site hijack into full code execution. A Koi Security audit of ClawHub, OpenClaw’s skill marketplace, found 341 malicious entries out of 2,857 available skills, with attacks designed to steal credentials, open reverse shells, and hijack agents for cryptocurrency mining.
Nvidia addressed some of these structural security concerns in March with NemoClaw, an enterprise layer that adds sandbox orchestration, privacy guardrails, and security hardening on top of OpenClaw. The product was built in partnership with Cisco, CrowdStrike, Google, and Microsoft Security. But NemoClaw operates at the infrastructure level, not the application level, and the Claw Chain vulnerabilities sit inside OpenClaw’s own sandbox implementation, meaning even NemoClaw-hardened deployments would have been affected before the patch.
The scale of the exposure is significant. OpenClaw has more than 3.2 million users, is integrated with ChatGPT subscriptions through OpenAI, and has been adopted as an enterprise platform by Nvidia (NemoClaw) and Tencent (ClawPro). A significant portion of the installed base is running older, unpatched versions, and attackers have been targeting known vulnerabilities in versions prior to 2026.1.30 since at least February.
Security researcher Vladimir Tokarev has been credited with discovering and reporting the issues. Users are advised to update to version 2026.4.22 immediately. The broader lesson is one the AI agent industry has been slow to internalise: when an autonomous agent has access to files, credentials, APIs, and network resources, compromising the agent is functionally equivalent to compromising the user. Traditional perimeter security was not designed for a world in which the most privileged entity inside the environment is software that executes instructions from external sources.
Claw Chain is unlikely to be the last vulnerability disclosure of this kind. It may, however, be the one that forces the industry to treat AI agent security with the same rigour it applies to operating systems and cloud infrastructure, rather than as an afterthought bolted onto a product that was never designed to be this important.