As wireless communications evolve from static spectrum allocations toward dynamic, shared access models, RF coexistence has become a critical engineering challenge. Over 30 billion connected devices now compete for finite spectrum resources. The 2.4 GHz ISM band alone hosts Wi-Fi, Bluetooth, ZigBee, and many other overlapping protocols. Meanwhile, high-value spectrum auctions such as FCC Auction 107 have placed 5G transmitters adjacent to safety-critical systems like aircraft radar altimeters and GPS receivers. These incumbent systems were designed before adjacent-band interference was a major concern. Standards like ANSI C63.27, tiered sharing frameworks like CBRS, and cognitive radio systems using AI and software-defined radios offer practical paths forward. This guide examines these coexistence challenges, reviews real-world interference case studies, and outlines the test architectures needed to evaluate RF device performance under realistic operational conditions.
VueBuds, a prototype developed by University of Washington researchers who have embedded a rice-grain-sized camera into each earbud of a standard pair of Sony wireless earbuds. (UW Photo)
Wireless earbuds seemingly sprang out of nowhere. Popularized by Apple’s AirPods, they were suddenly everywhere — on the subway, in the grocery store, in the ears of the person sitting across from you — until somewhere along the way, they became the thing nearly everyone wears without a second thought.
Could that popularity make earbuds better than smart glasses for AI? That is the bet behind VueBuds, a prototype developed by University of Washington researchers who have embedded a rice-grain-sized camera into each earbud of a standard pair of Sony wireless earbuds. The result is a visual AI assistant hiding in plain sight: look at a can of food and ask how many calories it has, hold up an unfamiliar kitchen tool and get an answer in about a second.
The system processes images on-device and responds through a connected AI model — no cloud required, no images stored.
The UW team believes it is the first to embed cameras directly in commercial wireless earbuds.
The earbuds don’t remember anything, but the people around you might not know that. That tension sits at the heart of what the UW team built and raises a question the researchers take seriously: what are the social norms when cameras are embedded in objects nobody thinks of as cameras?
The team’s answer is to lean hard on minimizing data collection. Images are processed and discarded; nothing is saved. But the system offers no outward signal to bystanders that a camera is present, which the researchers acknowledge is an open challenge rather than a solved one.
For technology like this to earn trust, Maruchi Kim, lead researcher and UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, argued that privacy can’t be an afterthought.
“We don’t support saving the images,” Kim said. “It’s mainly just to bridge the interaction between a person and having access to AI on the go, especially in hands-free scenarios.”
The team’s other central argument is about form factor — and it’s a pointed challenge to Meta, which has spent years and hundreds of millions of dollars trying to make camera glasses a mainstream product.
The UW team’s position is that smart glasses will never fully shed their social baggage: the memory of Google Glass, the discomfort of being watched, the visible signal that the wearer has opted into something most people haven’t. Earbuds carry none of that history.
“From the get-go, we didn’t want to be associated with that,” Kim said.
Getting cameras into earbuds required solving a power problem first. Cameras consume far more energy than microphones, so the team opted for a low-power sensor that captures roughly one frame per second in black and white — slow by video standards, but fast enough for the question-and-answer style of interaction the researchers had in mind.
The cameras are angled five to 10 degrees outward, providing a 98- to 108-degree field of view, and images from both earbuds are stitched into a single frame before processing, cutting response time to about one second.
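For a rough sense of what that stitching step involves, here is a minimal Python sketch. It assumes hypothetical per-ear grayscale captures, and horizontal concatenation stands in for the real warp-and-blend, so a single combined frame can go to the vision model in one pass. None of these names or shapes come from the UW code.

```python
# Illustrative sketch (not the UW implementation): combine per-ear grayscale
# frames into one wide frame before a single vision-model call.
import numpy as np

def stitch_frames(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Place the two ~1 fps grayscale captures side by side.

    Real stitching would warp and blend the overlapping 98-108 degree views;
    plain horizontal concatenation stands in for that step here.
    """
    h = min(left.shape[0], right.shape[0])          # crop to a shared height
    return np.hstack([left[:h, :], right[:h, :]])   # one frame, one model call

# Toy usage: two fake 240x320 8-bit captures, one per earbud.
left = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
right = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
combined = stitch_frames(left, right)
print(combined.shape)  # (240, 640) -- a single stitched frame
```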
The applications range from the practical to the significant. The system can read text on food packaging, identify objects, and translate written Korean. But for people with low vision or cataracts, the implications run deeper.
The team received more than a dozen emails from people with visual impairments describing what they’d use it for: understanding facial expressions, reading books, watching television — tasks that existing AI tools can’t easily support in a hands-free, ambient way.
Kim sees another underserved group in the workforce. Electricians, plumbers, and workers in industrial settings often can’t pause to pull out a phone mid-task — a pipe fitting wedged in place, a live wire that needs both hands.
For those workers, a voice-queryable visual assistant that doesn’t require touching a screen is the difference between having access to AI and not having it at all.
“There’s a lot of blue collar work where those people aren’t really able to harness the benefits of recent AI advances,” Kim said. “They can’t just whip out their phones and take a photo.”
The hands-free framing extends broadly: surgeons, cooks, anyone who has ever tried to follow a recipe with wet hands.
The system remains experimental and isn’t available for purchase. Shyam Gollakota, a professor in the Allen School and the project’s senior researcher, said interest from technology companies has been significant, and camera-equipped earbuds could reach consumers within a few years.
On cost, Gollakota is optimistic. The camera sensor itself could run under a dollar at the component level, he said — meaning that at the scale of a major consumer electronics manufacturer, the price premium over standard earbuds would likely be modest.
Gollakota also cited a $10 figure as a more conservative estimate of that premium at smaller production volumes.
“What we do at the universities is show that you can solve technical problems,” Gollakota said. “Then we show a path for these companies and other people to say that this is actually possible.”
What’s left of CBS News recently landed an interview with Israeli Prime Minister Benjamin Netanyahu. It’s a bit of a doozy (transcript, video). There’s a part where Netanyahu tries to blame foreign social media bot farms for the rise in people disgusted by his government’s carpet bombing of children. There’s a part where he pretends to not actually want billions in U.S. taxpayer dollars.
And there’s this part where he likens himself to Churchill and makes some strange comments about Hitler:
PRIME MINISTER BENJAMIN NETANYAHU: They implant themselves among civilians, you know, so that they have civilian casualties and they can put it on the tube or in your cell phone. So, yes, I mean, I don’t know how to fight it. I mean, Churchill, without cell phones and without digital campaigns and farm bots was labeled a warmonger in the 1930s because he said, “You have to stand up to Hitler.”
MAJOR GARRETT: Hitler, right.
PRIME MINISTER BENJAMIN NETANYAHU: And they accused him of being a warmonger. And Hitler didn’t even say “death to America, death to Britain,” you know. I– I think he might have planned it, but he didn’t say it. And still they accused him of that.
The interviewer, Major Garrett, spends absolutely no serious time pushing back against the claims Netanyahu makes, or meaningfully addressing the indisputable evidence that the Israeli government has engaged in widespread genocidal war crimes on the U.S. taxpayer dime. When Netanyahu tries to dismiss the massive civilian casualties in Gaza, Iran, and Lebanon as minor and innocent mistakes, Garrett has no response.
Garrett doesn’t normally work for 60 Minutes. He was brought on board from elsewhere within CBS because Netanyahu specifically asked for him. According to Oliver Darcy’s excellent media newsletter Status, 60 Minutes correspondent Lesley Stahl was trying to land the interview with Netanyahu when CBS News editor-in-chief Bari Weiss intervened and shuffled the interview over to Garrett, causing (more) internal anger:
“But behind the scenes, Status has learned that famed “60 Minutes” correspondent Lesley Stahl had also been gunning for the interview but was upstaged by CBS News boss Bari Weiss, who booked Netanyahu herself and handed the interview to Garrett, who is notably not a “60 Minutes” correspondent. The move sparked hostility and amplified the already strained relationship between Weiss and the reporting team at the iconic newsmagazine.”
There’s been a mass exodus at CBS for months as actual journalists bristle at the obvious shift toward soggy corporatist agitprop under Weiss. While Weiss was hired to modernize CBS and make autocratic billionaire ass kissing exciting, viral, and good for ratings, the whole experiment has been a monumental failure so far, with CBS News recently seeing its lowest ratings in a quarter century.
Weiss rose to prominence at her weird little troll blog The Free Press, which obviously hasn’t translated well to running a television network. Case in point: Weiss’ preferred new CBS News anchor, Tony Dokoupil, is having to broadcast the network’s coverage of Trump’s China visit from Taiwan because Weiss and friends failed to secure his visa in time for the trip. This mirrors other competency issues, like Weiss making last-minute unapproved changes to teleprompter text that screw up broadcasts.
Beyond the clownish nature of it all, it remains an open question who this sort of stuff is actually for (beyond the extremely rich people endlessly trying to control information flow). Despite having a massive fortune, new CBS owner David Ellison seems incapable of creating propaganda people actually want to watch, and even the target audience — center-right bigots with impaired critical thinking faculties — isn’t tuning in, because it has a universe of other terrible (but far more entertaining) choices.
Like Jeff Bezos’ sad and desperate effort to repurpose the Washington Post into what now feels like a satirical billionaire-coddling rag, all the money in the world can’t seem to produce class warfare agitprop actual human beings want to consume. Almost as if the behaviors of the global authoritarian extraction class are starting to reach a point where they’re simply too heinous and ham-handed to spin.
Microsoft is introducing a new capability that will allow it to remotely roll back problematic Windows drivers delivered through Windows Update.
Called Cloud-Initiated Driver Recovery, the new feature will remove the need for hardware partners or end users to manually fix driver issues once drivers have been distributed to devices. The recovery process is entirely managed by Microsoft, with no partner-side actions required, and will only be initiated for Windows drivers rejected due to quality issues during shiproom evaluation.
Under the current system, if a driver distributed through Windows Update has quality issues, the hardware partner must submit a replacement, or users must manually uninstall the faulty driver, which can leave devices using subpar drivers for a long time.
With Cloud-Initiated Driver Recovery, Microsoft can directly trigger a rollback to a previous, stable driver version (or the next best version available on Windows Update) without requiring new software or actions from hardware partners.
“Today, when a driver published through Windows Update is identified after distribution to have quality issues, the remediation path relies on the hardware partner to submit an updated driver — or on end users to manually uninstall the problematic driver themselves. This creates a gap where devices may remain on a low-quality driver for an extended period,” Microsoft said.
“With Cloud-Initiated Driver Recovery, Microsoft can now trigger a recovery action directly from the Hardware Dev Center (HDC) Driver Shiproom, rolling back a problematic driver to the previously known-good version via the Windows Update pipeline. This is handled through coordinated updates to the PnP driver stack and the driver flighting and publishing services.”
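A small, purely illustrative Python sketch of the selection rule those paragraphs describe: roll back to the newest known-good version older than the faulty driver, and skip recovery entirely when no shiproom-approved candidate exists (matching the note below). All field names and version numbers here are invented, not Microsoft's actual data model.

```python
# Hypothetical model of the rollback choice: prefer the previously installed
# known-good driver, else the next best version available on Windows Update.
from dataclasses import dataclass

@dataclass
class Driver:
    version: tuple        # e.g. (31, 0, 101, 4502); tuples compare elementwise
    known_good: bool      # passed shiproom quality evaluation

def pick_recovery_target(available: list[Driver], faulty: tuple) -> Driver | None:
    # Only shiproom-approved versions older than the faulty one are candidates.
    candidates = [d for d in available if d.version < faulty and d.known_good]
    if not candidates:
        return None       # no approved driver located: recovery is not attempted
    return max(candidates, key=lambda d: d.version)  # newest known-good wins

catalog = [
    Driver((31, 0, 101, 4502), True),
    Driver((31, 0, 101, 4577), True),
    Driver((31, 0, 101, 4601), False),  # the rejected driver
]
target = pick_recovery_target(catalog, faulty=(31, 0, 101, 4601))
print(target.version)  # (31, 0, 101, 4577)
```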
The company also noted that:
Devices where a Driver Shiproom-approved driver cannot be located will not attempt Cloud-Initiated Driver Recovery.
Recovery is delivered through the existing Windows Update infrastructure — no new client agent or partner tooling is required.
The new Windows Update feature is being tested between May and August 2026 and will begin rolling back drivers rejected during Flighting or Gradual Rollout starting in September 2026.
Last week, at WinHEC 2026 (the Windows Hardware Engineering Conference) in Taipei, Microsoft unveiled a Driver Quality Initiative (DQI) to raise driver quality, reliability, and security across the Windows ecosystem, in coordination with OEM, silicon, and hardware partners.
“In the months ahead, we will keep investing in the fundamentals that matter most to customers: reliability, security, performance, compatibility and quality,” Microsoft said. “We’ll also keep collaborating with OEMs, silicon partners, IHVs, ODMs and the broader hardware ecosystem through the Windows Resiliency Initiative, the new Driver Quality Initiative and the work we do together every day.”
In June 2025, Microsoft also announced plans to periodically remove legacy drivers from the Windows Update catalog to mitigate compatibility issues and security risks.
The round, led by Schroders Capital, follows January’s acquisition of Berlin-based StackFuel and 50% year-on-year revenue growth. Customers include the AA, Babcock, and Capita.
Multiverse, the London-based AI- and tech-upskilling platform founded by Euan Blair, said on Friday it had raised $70m in primary funding led by Schroders Capital, at a $2.1bn valuation.
Existing investors, including General Catalyst, Lightspeed, D1 Capital, Index Ventures, Bond and StepStone Group, joined the round.
The new valuation marks a $400m step up from the company’s $1.7bn Series D in 2022. Multiverse said revenue grew 50% year-on-year for the third consecutive year of accelerating growth, and the company reported its first cash-positive quarter, from January to March 2026. All employees are being offered equity in connection with the raise.
Multiverse is positioning the new round behind a category pitch rather than a product one. Chief executive Euan Blair told the company’s blog the firm wants to become ‘Europe’s AI adoption platform’, sitting between the businesses buying AI tools and the workforces meant to use them.
In his framing, the missing layer of the AI stack is not another model or another agent runtime; it is the workforce capable of operating them.
The European push has a foothold already. In January Multiverse completed the acquisition of StackFuel, a Berlin-based data and AI training provider with corporate customers including Mercedes-Benz, IAV and Telefónica, and a stated goal of training 100,000 German workers in AI skills.
StackFuel reports a 92% programme completion rate. Its founders, Leo Marose and Stefan Berntheisel, joined the senior leadership of the combined entity.
On the company’s own numbers, Multiverse has delivered more than £2bn in verified ROI for over 1,000 employers to date, including Babcock, the AA, Capita and Addison Lee.
Atlas, its AI coaching platform, tripled daily active users in the past year. Partnerships have moved upmarket on the tools side, with Microsoft, Palantir and Databricks now named as platform partners.
Multiverse is leaning into a familiar enterprise complaint: AI budgets are up, AI returns are uneven. The company cites BCG’s 2026 AI Radar, which reports corporate AI spend has doubled since last year, and notes that ‘trailblazer’ adopters invest about twice as much in workforce upskilling as ‘follower’ peers.
CEOs surveyed by Multiverse named skills gaps as the second-largest barrier to AI adoption, behind only regulation and ahead of data quality.
The raise comes with a political signal, of the sort British scale-ups now actively seek. Chancellor of the Exchequer Rachel Reeves said in a statement provided by the company that the UK government wants Britain to achieve the fastest rate of AI adoption in the G7, and called Multiverse ‘a fantastic example of a British company helping turn that ambition into reality’. The investment, Reeves added, would ‘support its expansion across Europe’.
Multiverse’s thesis cuts across a louder one in the enterprise AI conversation. Where companies including Klarna have frozen hiring on the argument that AI tools let them do more with fewer people, Multiverse is selling employers on the opposite trade: that the value of an AI deployment is determined by how well the existing workforce can operate it.
The new round is, in effect, a $70m bet that the latter view is the one enterprise buyers will be writing cheques against next.
Multiverse did not name the banks involved or disclose run-rate revenue. The company said the funding would be used to accelerate European expansion, with no further geographic breakdown.
The next time you refuel, you may notice several different grades of gasoline for sale. You are probably familiar with regular unleaded and premium unleaded. But there is another grade in between, often branded “Plus” or sold under another name.
A gasoline’s octane rating is typically shown by a large number on the pump. According to the U.S. Energy Information Administration, 87 octane is regular, mid-grade gas is 89 or 90 octane, and premium is between 91 and 94. What the different octane numbers show is each fuel’s resistance to knocking, with higher octane numbers indicating a higher knock resistance.
Knocking coming from your engine, also called premature detonation, means your fuel is not burning evenly in one or more cylinders, which can have numerous causes and will result in serious damage if not fixed promptly. While engines with higher performance levels tend to require premium fuel that is less likely to knock, average, lower-stressed engines should be fine with regular gas. The 89 octane mid-grade falls in the middle.
The origin story of 89 octane unleaded starts in 1975, when the EPA began phasing out leaded gasoline and the transition to unleaded got underway, long before top tier gas existed. Because unleaded gas requires more processing than leaded gas, unleaded premium cost more than its leaded equivalent. Gas station operators introduced a mid-grade option as a cheaper alternative, marketing this 89-octane gas for less than premium but with higher octane than regular. It never really caught on, currently amounting to eight percent of total gasoline sales. No vehicles made today require 89-octane fuel.
Is there ever a reason to use 89 octane gas instead of 87 octane regular?
If your vehicle runs fine on regular, without any pinging or knocking sounds, and you are maintaining it according to the manufacturer’s recommended service intervals, there is nothing to do. Continue using regular in your car; you don’t have a problem, and with gas prices recently hitting a four-year high, there is no reason to pay more at the pump. If you are in doubt, check your owner’s manual for the fuel recommended by the manufacturer.
But let’s say that your engine pings or knocks when using regular gas. While this can be caused by gas that is too low in octane, it can also be a result of a build-up of carbon deposits in your engine’s combustion chambers, worn spark plugs, an air/fuel mixture that’s too lean, bad spark timing, or an engine that is overheating. It’s probably a good idea to make a date with your mechanic, so that you can rule out all of the non-fuel-related issues first. Otherwise, you may be paying more for that higher octane fuel without seeing any benefits, which is a fuel myth you really should stop believing.
In case you are wondering about how 89-octane, a type of gasoline that makes up a measly eight percent of the market and is not required in any production vehicles, can still be offered at the pumps, there’s a little trick to how it is made. The 89-octane gas is made by blending high-octane premium with low-octane regular, which can usually be done right at the pump. So, whoever wants it is free to buy it.
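The blend ratio behind that trick is simple arithmetic if you assume octane numbers mix roughly linearly, which is a common first-order approximation rather than an exact chemical rule. A quick Python check:

```python
# Worked example of the pump blend, assuming octane ratings mix linearly
# (a first-order approximation; real blending is slightly nonlinear).
def premium_fraction(target: float, regular: float, premium: float) -> float:
    """Fraction of premium needed so the blend hits the target octane."""
    return (target - regular) / (premium - regular)

frac = premium_fraction(target=89, regular=87, premium=93)
print(f"{frac:.0%} premium, {1 - frac:.0%} regular")  # 33% premium, 67% regular
```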
Nvidia is known for making some of the best graphics cards, and these days, a lot of them end up powering AI-related workloads instead of games. Graphics processing units (GPUs) are the very foundation of the data centers that make AI possible. The funny thing is that the circle of AI life goes on and on, as Nvidia now also uses AI to help create new chips, which later end up in GPUs. A recent interview revealed that this leads to benefits like faster chip design, fewer man-hours used on certain tasks, and even new, innovative, sometimes odd ways to approach existing problems.
In a discussion with Google’s Jeff Dean at the 2026 GPU Technology Conference (GTC), Bill Dally, Nvidia’s chief scientist and senior vice president of research, revealed that the chipmaker is trying to introduce AI at every step of GPU design. The headliner is, undoubtedly, the fact that Nvidia used AI to save so much time and money during one stage of the process.
Whenever a new semiconductor process is introduced (essentially the process node that Nvidia then builds its GPU around), the company needs to port its standard cell library to it, which adds up to around 2,500 to 3,000 cells. Completing this task used to take eight people around 10 months.
Nvidia then designed NVCell, a program that completes this time-consuming task in one night on just one GPU. There seems to be no catch here, as Dally clarified that the results were better than what human engineers produced.
Nvidia uses AI for more than just porting cell libraries
It’s unclear how long it took for Nvidia to develop the reinforcement learning-based NVCell, but it does seem to be paying off. Not only is Nvidia saving precious time, but it’s also making it easier to move on to a new process when the next generation of GPUs comes around. While replacing eight engineers with one GPU seemed like the biggest win Nvidia had to share, Dally also listed a couple more ways in which the company is leaning into AI in its design process.
The next tool is PrefixRL, and to spare you a highly technical explainer, its job is to explore alternative designs for arithmetic circuits such as adders. The software generates candidate designs through trial and error, grading its own attempts so it can learn from them. Dally said that the tool comes up with all sorts of odd ideas, but at the end of the day, they’re 20-30% better than human designs.
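To make that trial-and-error loop concrete, here is a toy Python illustration of the generate, grade, and keep cycle, using a simple hill climb over a bit vector as a stand-in. It mirrors the shape of the process Dally describes, not anything from Nvidia's actual PrefixRL internals.

```python
# Toy illustration of the try-score-learn loop: propose a tweak to a candidate
# "design," grade it, and keep improvements. A real system would use
# reinforcement learning over circuit structures, not a random bit vector.
import random

random.seed(0)

def cost(design: list[int]) -> int:
    # Invented stand-in for an area/delay metric: fewer active bits is "better".
    return sum(design)

design = [random.randint(0, 1) for _ in range(32)]
best = cost(design)
for _ in range(1000):
    candidate = design.copy()
    candidate[random.randrange(len(candidate))] ^= 1   # propose a small change
    if cost(candidate) <= best:                        # grade the attempt
        design, best = candidate, cost(candidate)      # keep what scores well
print(best)  # converges toward 0 on this toy objective
```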
Nvidia also uses AI to free up some time that its senior engineers had to spend helping more junior colleagues. Its internal large language models (LLMs), Chip Nemo and Bug Nemo, were trained on Nvidia’s proprietary database and codebase, so they know everything there is to know about the way Nvidia builds and designs GPUs. Armed with that knowledge, those LLMs can help junior engineers and explain complex concepts in an approachable manner. On the consumer side, Nvidia recently revealed Alpamayo, bringing AI models to self-driving cars.
Few pieces of technology capture attention quite like a device that launched an era. In September 2008 T-Mobile teamed up with Google to reveal the G1, a phone built from the ground up to run Android software. Available starting that October for customers on a two-year plan, it arrived at stores priced at $179 ($277 today) and immediately drew crowds eager to try something different.
When users slid the screen open, they discovered a full QWERTY keyboard stashed away beneath it, a lifesaver for hammering out emails or jotting quick notes without hunting for tiny on-screen buttons. The 3.2-inch capacitive touchscreen sat above the keyboard, displaying 320 x 480 pixel images and menus that came to life with a tap or swipe of the finger.
A little trackball sat just below the screen, making it easy to scroll through long lists or zoom into maps, tasks that were difficult on the glass alone. Nearby buttons handled the typical navigation, while a dedicated search key quickly returned Google results. Power came from a Qualcomm MSM7201A processor running at 528MHz, supported by 192MB of memory. In the box, buyers would find a 1GB microSD card already installed and ready to go, with room to swap in an 8GB one if necessary.
The 3.15MP rear camera took clear and detailed shots, and the autofocus quickly locked onto whatever you were trying to photograph. Despite the lack of a flash, daily photographs typically looked far better than you’d expect for a phone of this period. The detachable 1150 milliamp-hour battery pack provided around five hours of conversation time and more than five days on standby. It only took a few seconds to swap it out for a spare, and you were back up and running without having to find a charger.
The G1 launched with new software, the first version of Android, and did an excellent job of connecting with Google services from the start. Your Gmail would send new messages instantly, and Maps would show streets in sharp clarity and even spin to mirror the real world using a built-in compass. YouTube videos would play smoothly over Wi-Fi or a carrier data connection, and the new Android Market allowed you to download additional apps directly to your smartphone.
Connectivity-wise, the phone had all the bases covered, as it functioned on GSM networks, could do 3G speeds when available, easily snagged Wi-Fi networks, quickly paired with Bluetooth headsets, and provided accurate turn-by-turn directions via GPS. It weighed 158 grams and was just over half an inch thick, so it fit comfortably inside a pocket even with the sliding mechanism. You might choose between black, white, or brown to match your unique taste.
Pershing Square has taken a new position in Microsoft, with the size to be disclosed in a 13F filing later on Friday. The stock is down roughly 16% year-to-date.
Bill Ackman has bought Microsoft. Pershing Square’s chief executive said on X on Friday morning that the fund had taken a new position in the software company after its recent share-price decline, with the size to be disclosed in a regulatory filing later in the day.
Ackman’s stated rationale was that the market is mispricing the enterprise franchise rather than the AI one. Investors have underestimated Microsoft’s software ‘given its deeply embedded role across enterprises and highly attractive price-value proposition’, he wrote, framing the position as a quality-compounder bet on the installed base rather than a directional call on Azure capex.
The timing is the substance of the trade. Microsoft shares are down roughly 16% year-to-date and have traded near $413 since late April, when chief financial officer Amy Hood used the company’s fiscal Q3 results to raise full-year capital expenditure guidance to about $190bn, well above the roughly $155bn analysts had penciled in.
The results themselves were a beat. Azure grew 40%, the AI run-rate hit $37bn, and total revenue cleared $82.9bn. The stock fell anyway, on what one widely circulated buy-side note called ‘the $190bn capex plan that repriced AI’.
Ackman has already run this play once this year. Pershing Square disclosed a new stake in Meta in February, three weeks after Meta’s own capex-driven sell-off, with Ackman describing the position at the time as a ‘deeply discounted valuation’.
The Microsoft entry follows the same shape: a megacap dragged lower by an AI-spending guide, framed by Ackman as an opportunity to buy a high-quality franchise at a temporarily de-rated multiple.
Funds running more than $100m are required to file Form 13F disclosures of US-listed positions within 45 days of quarter-end, which makes Friday a heavy day for hedge-fund reading.
Pershing Square’s last 13F, covering the December quarter, showed eleven positions and roughly $16bn in disclosed US holdings concentrated in Brookfield, Uber, Amazon, Alphabet and Meta. Microsoft did not appear. Today’s filing will show whether the firm has trimmed any existing names to fund the new one or sized it from cash.
The trade also lands inside a wider AI-infrastructure debate. Hyperscalers have committed more than $650bn to AI capex across 2026, on the combined Q1 numbers from Microsoft, Alphabet, Amazon, Meta and Apple, and the market is now pricing the question of when, or whether, that spend converts cleanly into operating earnings.
Ackman is, in effect, arguing that Microsoft’s existing Office, Windows and Azure book of business is enough to clear the bar, separately from the AI optionality.
Microsoft’s deep integration of OpenAI’s models across Copilot, Azure and the developer stack has been the dominant narrative in the company’s repricing over the past three years (TNW has tracked the arc). The capex bill is the cost of holding that lead. Ackman’s bet is that the enterprise software business underneath it is being given less credit than it should.
Pershing Square has not disclosed the size of the position or the average purchase price. The 13F filing is expected later on Friday US time.
Formerly known as WhiteHat, Multiverse was last valued at $1.7bn in 2022 following a $220m raise.
London-based edtech Multiverse has raised $70m in a round led by Schroders Capital, to accelerate its European expansion. The round places the company at a valuation of $2.1bn.
Founded by British businessperson Euan Blair, the son of former UK prime minister Tony Blair, Multiverse also drew support in the round from existing investors including General Catalyst, Lightspeed Venture Partners, D1 Capital Partners, Index Ventures, Bond and StepStone Group.
Multiverse said that it wants to ensure that “AI benefits the workplace, rather than displacing it”. It was last valued at $1.7bn in 2022 following a $220m raise.
Multiverse, which was founded under the name WhiteHat in 2016, offers personalised upskilling programmes to support technological adoption. It has an AI coaching platform called Atlas.
To date, Multiverse has delivered more than £2bn worth of benefits to more than 1,000 employers, it said. Its clientele includes the likes of Microsoft, Palantir and Databricks. Atlas has tripled daily active users over the last year, the company added.
Alongside the raise, all Multiverse employees, regardless of seniority, have been offered equity and a long-term stake in the company, Multiverse said.
“There are companies who desperately need the benefits AI can bring. There are AI companies. What has been missing is the layer that bridges the two,” said CEO Blair.
“This investment marks the moment Multiverse defines that category, and takes it across Europe. Getting outcomes from AI and unlocking productivity is not just a technology problem. It is a people problem. We exist to solve it.”
UK Chancellor of the Exchequer Rachel Reeves, pointing to the government’s ambition for the UK to achieve the fastest rate of AI adoption in the G7, added: “Multiverse is a fantastic example of a British company helping turn that ambition into reality. This investment will support its expansion across Europe, strengthening a UK firm that is competing globally and equipping people with the skills to make AI work in practice.”
Eben Upton says AI could put young people off tech jobs
This could hurt the economy due to a shortage of engineers
Some are overhyping the capabilities of AI tools and technology
Raspberry Pi founder Eben Upton has warned AI is making people less likely to pursue tech jobs, and could therefore hurt the economy of the future.
Speaking to the BBC’s Big Boss Interview podcast, Upton said that the technology could “distort people’s choices in ways that make that skill shortage worse and not better.”
Upton added there is a level of overestimation of what AI chatbots can do, adding that this could “undo a lot of the good work that’s been done, not just by Raspberry Pi, but by a lot of other organisations.”
Upton founded Raspberry Pi in 2012 to provide an engaging way for young people to get involved in computing and programming.
Those who built a foundational understanding of technology during their education, and who would then expect to deepen that knowledge in the workplace, have found that the positions they would typically apply for are shrinking. Work that a decade ago would have been done by an entry-level employee is instead handed off to an AI tool.
This in turn creates a self-feeding problem: how do you replace senior staff members that retire or move jobs if there isn’t a pool of talent to pick from?
Upton also voiced concern for parents worried about the direction their children’s education could take. “You read in the paper: ‘What guidance should you give your child about what GCSEs to choose in the context of an AI future?’ We have no data to inform a rational decision on that.”
“The answer is: wait five years, wait 10 years, and then maybe we might know something,” Upton added.
When asked if these problems could hurt the economy, Upton responded, “Absolutely. We need a supply of engineers.”