Six months after public release, Liquid Glass remains as controversial as ever. Apple may be considering some mitigations in iOS 27 and beyond, but this is the future — especially if we get touchscreen Macs.
Maybe Liquid Glass is controversial on Macs and iPhones, but you’d still buy one of these Apple Park models if you could — image credit: Apple
It’s not at all true that everyone hates Apple’s Liquid Glass redesign. What is true is that right now, it’s in flux and being changed. There was never any question that it would stay. It was always going to evolve, just as Apple’s iOS 7 so famously and controversially did over many years. Continue Reading on AppleInsider | Discuss on our Forums
Microsoft is working to address an ongoing Exchange Online outage that is preventing customers from accessing their mailboxes and calendars.
“We’re investigating reports of some users experiencing issues when accessing their Exchange Online mailbox via one or more connection methods,” Microsoft said when it acknowledged the issue at 06:42 UTC.
As Microsoft explained in a Microsoft 365 admin center update under EX1253275, Outlook on the web, Outlook desktop, Exchange ActiveSync, and other Exchange Online connection protocols are all affected by this outage.
While the company said that “telemetry continues to show the issue is no longer occurring for affected users” and that its engineers are “continuing to monitor service health to assess whether any additional actions are required to ensure sustained recovery,” customers are still reporting issues accessing their email.
Right before publishing, the Office.com web portal was down and displayed the message “We are sorry, something went wrong. Please try refreshing the page in a few minutes.”
Office.com down (BleepingComputer)
Microsoft is also investigating a separate outage affecting the Microsoft 365 Copilot web sign‑in page and the Copilot web clients at office.com/chat, m365.cloud.microsoft, m365.cloud.microsoft/chat, and copilot.cloud.microsoft.
Customers who need to use Microsoft Copilot are advised to use one of the application-based Microsoft Copilot services, including the Microsoft Copilot desktop app, Copilot in Microsoft Teams, or Copilot in Office apps.
“We’ve identified that a section of service infrastructure is not processing traffic efficiently. We’re making configuration changes to remediate impact,” the company said in an admin center service alert (MO1253428).
Global PC shipments set for sharp decline as component shortages intensify
Memory and storage prices surge, forcing vendors to rethink PC strategies
Budget computers face the steepest shipment losses amid tightening component supply
Anyone planning to purchase a new work PC in the coming months may encounter shrinking availability as supply pressures deepen across the industry, experts have warned.
Research from Omdia indicates global shipments of desktops, notebooks, workstations, and even some mini PC designs could drop sharply in 2026.
The projected drop stems from shortages of memory and storage, core components in all of these devices.
Rising component costs threaten global PC supply
The Omdia report estimates that worldwide PC shipments will fall by 12% to roughly 245 million units, as component prices, especially for memory and storage, are expected to surge by at least 60% during the first quarter of 2026.
Since early 2025, the cost of mainstream memory and storage configurations has already increased by between $90 and $165, which has placed pressure on manufacturers to raise prices or adjust configurations.
Desktops are expected to decline by roughly 10% to 53.2 million units shipped, while laptops could drop by 12% to around 192.2 million units.
Vendors now face difficult trade-offs as supply tightens and manufacturing costs continue rising.
Omdia says the impact falls hardest on low-priced computers: systems priced below $500 could see shipments decline by 28% to about 62.1 million units in 2026.
Analysts say this segment has less flexibility to absorb price increases without affecting demand.
“For lower-priced products, there is less margin room to absorb rising costs, and consumers in this segment are typically more sensitive to price fluctuations,” said Omdia Principal Analyst Ben Yeh.
“In addition, lower-price-band products often rely on lower-capacity, previous-generation components and receive lower allocation priority while facing the hurdle of some suppliers discontinuing production.”
By contrast, higher-priced systems above $900 appear more resilient, and “some consumers and IT decision makers will accept higher price points to meet essential needs.”
However, Yeh cautioned that the movement toward higher price bands “does not necessarily represent improved product configurations.”
The outlook for 2026 points to a difficult year for the global PC market as component shortages and rising costs continue to influence production and pricing decisions.
Market performance will depend largely on how vendors manage production, pricing, and component allocation, leaving the outlook uncertain.
Popular Photoshop alternative GIMP has been updated to feature non-destructive Link and Vector Layers, an upgraded MyPaint Brush tool, and expanded file format support including SVG export. The update also brings UI and stability improvements.
Personal audio is no longer some gateway drug into traditional hi-fi. It is the drug. The energy, crowds, and money are here. And judging by the number of legacy high-end brands still trying to figure out how to get into the category, the window may already be closing. The companies already in the pool are doing very well. The ones still standing on the deck trying to appeal to Mrs. Wheeler might want to find a towel. Fast.
Now that CanJam NYC 2026 is in the rearview mirror, that reality feels even clearer.
What unfolded over the weekend inside that packed hotel ballroom wasn’t just another headphone meet. It was a very visible reminder that personal audio has become one of the most dynamic and crowded corners of the entire hi-fi industry.
This wasn’t a niche gathering of a few hundred die-hards swapping cables and arguing about burn-in. The lines outside the ballroom doors early Saturday morning told a different story. Hundreds of people were already queued up before the doors opened, waiting for the chance to hear the latest headphones, IEMs, DACs, and amplifiers. That kind of turnout doesn’t happen unless the category has real momentum. When thousands show up, it’s time to accept that we’re living in two very different worlds right now in the high-end audio segment.
Credit where it’s due.
The team behind the CanJam Global series knows exactly what it’s doing. Jude Mansilla and Ethan Opolion have spent more than a decade turning what began as a relatively small headphone gathering into some of the most focused and consistently packed audio events anywhere on the calendar.
Not every stop along the way has been a home run. That’s the reality of an ever-growing event series. But the level of interest in personal audio has never been higher, and a large portion of that momentum can be traced directly back to the ecosystem built around CanJam Global and Head-Fi.
After spending two days walking those rooms in New York, the conclusion feels unavoidable.
Personal audio isn’t the gateway to hi-fi anymore.
For a lot of people walking through those doors, it is hi-fi.
The Good
The enthusiasm at CanJam NYC 2026 was impossible to miss. Packed rooms. Long listening lines. People carrying Pelican cases full of IEMs like they were transporting crown jewels. Personal audio might be the most obsessive corner of the hobby right now, but it’s also one of the most energized.
That said, a little personal space wouldn’t be the worst thing in the world.
There’s a certain breed of attendee who believes the correct way to audition a $3,000 pair of headphones is to hover six inches behind the person already listening. Close enough to fog up the back of your neck. As if the pressure alone will somehow convince you to wrap it up. Five minutes into a track and they’re shifting their weight like TSA agents who missed lunch. Relax. The headphones will still be here when I’m done.
My Saturday started well before any of that.
5 a.m. alarm. Quick shower and shave. Irish Spring for the win. Walk the dog in the dark while staring down the neighborhood red fox that has apparently decided my dog looks like breakfast. Bad decision. Tyrion would tear this furry idiot apart and turn it into a winter jacket.
Bagel run. Coffee run. Daven. Walk to the train station.
Thanks to the ongoing Portal Bridge construction project, NJ Transit has been doing its best to remind those of us living south of Newark that patience is a virtue. My commute from the Jersey Shore into Penn Station clocked in at about 100 minutes. Plenty of time to stare out the window, drink mediocre coffee, read the news from Israel, and think about headphones.
All of this before 7 a.m. on what might have been the nicest Saturday morning in the tri-state in two months.
Because the winter that just crawled out of this region wasn’t some mild seasonal inconvenience. We’re talking snow measured in feet, weeks of frozen tundra that would have looked perfectly at home on Hoth, and winds ripping off the Atlantic like they were personally offended that anyone still lived along the Jersey Shore.
And judging by the crowd already gathering outside the ballroom doors when I arrived, I wasn’t the only one willing to wake up early for this show.
What showed up in New York was serious hardware. Flagship headphones pushing well past the $5,000 mark. Electrostatics that require dedicated energizers. DACs and portable amplifiers with engineering that would have been considered state of the art in traditional two-channel systems not that long ago. The performance ceiling keeps moving higher, and the companies building this gear clearly understand that their audience is paying attention.
Then there were the IEM tables.
An insane number of them.
Everywhere you looked there were cases full of wired in-ear monitors. Universal fits. Custom demos. Multi-driver designs that look like miniature spacecraft. Some companies had entire tables dedicated just to different variations of their flagship IEM lines. It wasn’t just a handful of boutique builders either—established brands and newer players were all leaning heavily into the category.
If anyone still thinks wired IEMs are some niche side hustle inside personal audio, they probably didn’t walk the floor at CanJam this weekend.
And the crowds kept coming.
One of the most impressive things about CanJam NYC 2026 was the constant flow of people moving through the show. Not just a big opening rush and then a slow taper. Waves. All day.
Which is even more remarkable when you consider the location. The hotel sits right in the middle of Times Square, arguably one of the most chaotic intersections of humanity on the planet. Outside the doors you’ve got tourists staring at LED billboards the size of aircraft carriers, Elmos asking for tips, and someone selling $12 hot dogs that probably violate several international treaties.
And inside?
Thousands of people quietly listening to headphones.
Only in New York could you walk out of a room filled with $6,000 electrostatic headphones and immediately get run over by a guy in a Spider-Man costume holding a margarita the size of a fire extinguisher and chanting for the Ayatollah.
The Bad
Not everything about CanJam NYC 2026 was perfect.
The biggest absence was impossible to ignore.
CanJam Honcho Ethan Opolion wasn’t there.
Ethan was stuck at home in Israel because of the ongoing war between America, Israel, and Iran, which made international travel impossible. That’s a tough break for someone who has been a constant presence at these shows since the beginning. In fact, this was the first CanJam he’s missed since the series launched nearly a decade ago.
Considering how much work Ethan and Jude Mansilla put into organizing these events, his absence was definitely felt.
There were also a few notable no-shows.
Focal and Naim didn’t make an appearance this year. That raised a few eyebrows at first, but we now know the reason. They were in the middle of the Barco acquisition, which likely pushed a headphone show in Times Square a little further down the priority list.
Still, their absence was noticeable.
Another surprise was the lack of a presence from Headphones.com, which has become one of the most influential retailers in the personal audio space.
A couple of companies also showed up with smaller footprints than usual.
Dan Clark Audio and Schiit Audio both had scaled-back tables compared to previous years. That was a bit of a bummer. Both companies usually bring a larger spread of gear and draw a steady crowd throughout the day.
And while there was plenty of excellent equipment on display, I’ll be honest about something else.
I didn’t walk away feeling like I had witnessed a lot of earth-shattering innovation.
That doesn’t mean the show lacked interesting products. Far from it.
Companies like Grell, iFi, Chord, Meze Audio, Grado, and HiFiMAN all had new gear worth hearing. Some of it sounded fantastic. Some of it pushed design ideas forward in smaller, incremental ways.
But nothing made me want to sell my firstborn son to the Dothraki as payment and spend the rest of my days beyond the Wall trapped in a frozen hut with Cersei.
And this is only the first part of the “bad.” There’s more.
More of the Bad
Another thing that stood out to me this year had less to do with gear and more to do with the crowd itself.
Having attended CanJams before there even were CanJams, back when they were basically Head-Fi meets, I’ve been genuinely impressed by how the demographics have evolved over the years.
Those early events were…let’s call them special.
Picture small headphone gatherings in dingy hotel mini-ballrooms. The kind of rooms where nobody would ever admit to having their Bar Mitzvah, Bat Mitzvah, or Communion, mostly because the catering trays looked like they came from the “we have glass for a reason” Chinese restaurant next door. A lot of folding tables. Extension cords everywhere. And a crowd that skewed heavily toward white and Asian single or married men who could spend three hours debating driver metallurgy without coming up for oxygen.
Fast forward to the past few CanJam NYC events and the shift has been pretty noticeable.
Still plenty of guys. This is hi-fi after all. But there were far more women in attendance. Women of different ages, backgrounds, and cultures. Some attending with partners. Some clearly there on their own. Young professionals who can absolutely afford this gear. Couples sharing listening sessions. Music lovers who were just as curious about the newest IEM or headphone amplifier as anyone else in the room.
The crowd this year seemed to tilt back toward the more traditional male-heavy demographic. Plenty of White, Asian, African American, Indian, and Hispanic attendees, but overwhelmingly men of all shapes and sizes. Perhaps even a few too many of the white-haired older audiophiles who make hi-fi shows so inspiring about the future of the hobby.
CanJam NYC 2026
There was still a lot of youth in the room, which is great. That’s been one of the more encouraging developments at recent CanJam events. Younger listeners discovering personal audio, building systems, and actually caring about sound quality instead of whatever algorithm Spotify decides to shove into their ears that day.
But when the old guard starts showing up in droves, and that includes the older generation of hi-fi journalists born before the Nixon administration…some even before Kennedy, you have to wonder what it really means.
Is that growth?
Or just the same crowd discovering a new category of gear to argue about?
There were women at the show. Absolutely.
But having attended the past four NYC CanJams, this one definitely felt like it had fewer women in attendance than recent years. Whether that’s just a one-year anomaly or something else entirely is hard to say.
Still, it was noticeable.
Something else has been quietly happening alongside the explosion in personal audio: a revival of physical media.
Not in some ironic, retro, nostalgia-cosplay kind of way. Not because a handful of aging collectors refuse to move on. What I’m observing, both at shows like CanJam and out in the real world, is something broader.
It’s multi-generational.
Young listeners discovering vinyl for the first time. Film fans hunting down UHD 4K Blu-rays because streaming services keep editing or removing the movies they want to watch. Readers buying physical books because staring at another glowing rectangle after ten hours of work feels like a punishment, not a reward.
And for some of us, it never really left.
I’ve been buying and collecting books, movies, and music for more than 50 years. It’s not a hobby. It’s part of how my brain works.
Just how important?
I turned 56 last week.
My family offered to buy me hockey tickets as a birthday gift. Given how the Devils and Rangers have been performing lately, tickets are suddenly a lot easier to come by as the NHL season winds down. Normally that would have been an easy “yes.” I’ve played hockey most of my life. I still follow the league obsessively. My brain stores NHL statistics the way other people remember their kids’ birthdays.
But I said no.
What did I want instead?
Books. Movies. Music.
Better than a game. Better than a new watch. Better than just about anything else they could have wrapped in a box.
Because physical media does something that streaming never will.
It stimulates the brain.
The ritual of picking a record. Pulling a disc off the shelf. Opening a book and feeling the paper between your fingers. Your mind engages differently. The senses fire in ways that a scrolling app menu simply can’t replicate.
Streaming is convenient. I use it every day.
But it doesn’t stimulate my brain, or frankly my loins, the way physical music does.
And before someone says it, yes, eReaders have their place. I know that better than most. I helped Barnes & Noble launch the Nook between 2009 and 2011. The technology solved real problems and made reading more accessible for a lot of people.
But nothing compares to a physical book.
Not ever.
To sit and read next to the most glorious blonde space princess in the galaxy. To fall asleep watching a movie together pulled from the collection. That’s the dream. Always has been.
And judging by what I’m seeing in record stores, bookstores, from boutique film labels, and the listening habits of younger audiences discovering this stuff for the first time…
For more than two decades, digital businesses have relied on a simple assumption: When someone interacts with a website, that activity reflects a human making a conscious choice. Clicks are treated as signals of interest. Time on page is assumed to indicate engagement. Movement through a funnel is interpreted as intent. Entire growth strategies, marketing budgets, and product decisions have been built on this premise.
Today, that assumption is quietly beginning to erode.
As AI-powered tools increasingly interact with the web on behalf of users, many of the signals organizations depend on are becoming harder to interpret. The data itself is still accurate — pages are viewed, buttons are clicked, actions are recorded — but the meaning behind those actions is changing. This shift isn’t theoretical or limited to edge cases. It’s already influencing how leaders read dashboards, forecast demand, and evaluate performance.
The challenge ahead isn’t stopping AI-driven interactions. It’s learning how to interpret digital behavior in a world where human and automated activity increasingly overlap.
A changing assumption about web traffic
For decades, the foundation of the internet rested on a quiet, human-centric model. Behind every scroll, form submission, or purchase flow was a person acting out of curiosity, need, or intent. Analytics platforms evolved to capture these behaviors. Security systems focused on separating “legitimate users” from clearly scripted automation. Even digital advertising economics assumed that engagement equaled human attention.
Over the last few years, that model has begun to shift. Advances in large language models (LLMs), browser automation, and AI-driven agents have made it possible for software systems to navigate the web in ways that feel fluid and context-aware. Pages are explored, options are compared, workflows are completed — often without obvious signs of automation.
This doesn’t mean the web is becoming less human. Instead, it’s becoming more hybrid. AI systems are increasingly embedded in everyday workflows, acting as research assistants, comparison tools, or task completers on behalf of people. As a result, the line between a human interacting directly with a site and software acting for them is becoming less distinct.
The challenge isn’t automation itself. It’s the ambiguity this overlap introduces into the signals businesses rely on.
What do we mean by AI-generated traffic?
When people hear “automated traffic,” they often think of the bots of the past — rigid scripts that followed predefined paths and broke the moment an interface changed. Those systems were repetitive, predictable, and relatively easy to identify.
AI-generated traffic is different.
Modern AI agents combine machine learning (ML) with automated browsing capabilities. They can interpret page layouts, adapt to interface changes, and complete multi-step tasks. In many cases, language models guide decision-making, allowing these systems to adjust behavior based on context rather than fixed rules. The result is interaction that appears far more natural than earlier automation.
Importantly, this kind of traffic is not inherently problematic. Automation has long played a productive role on the web, from search indexing and accessibility tools to testing frameworks and integrations. Newer AI agents simply extend this evolution — helping users summarize content, compare products, or gather information across multiple sites.
The issue is not intent, but interpretation. When AI agents interact with a site successfully on behalf of users, traditional engagement metrics may no longer reflect the same meaning they once did.
Why AI-generated traffic is becoming harder to distinguish
Historically, detecting automated activity relied on spotting technical irregularities. Systems flagged behavior that moved too fast, followed perfectly consistent paths, or lacked standard browser features. Automation exposed “tells” that made classification straightforward.
AI-driven systems change this dynamic. They operate through standard browsers. They pause, scroll, and navigate non-linearly. They vary timing and interaction sequences. Because these agents are designed to interact with the web as it was built — for humans — their behavior increasingly blends into normal usage patterns.
As a result, the challenge shifts from identifying errors to interpreting behavior. The question becomes less about whether an interaction is automated and more about how it unfolds over time. Many of the signals that once separated humans from software are converging, making binary classification less effective.
When engagement stops meaning what we think
Consider a common e-commerce scenario.
A retail team notices a sustained increase in product views and “add to cart” actions. Historically, this would be a clear signal of growing demand, prompting increased ad spend or inventory expansion.
Now imagine that a portion of this activity is generated by AI agents performing price monitoring or product comparison on behalf of users. The interactions occurred. The metrics are accurate. But the underlying intent is different. The funnel no longer represents a straightforward path toward purchase.
Nothing is “wrong” with the data — but the meaning has shifted.
Similar patterns are appearing across industries:
Digital publishers see spikes in article engagement without corresponding ad revenue.
SaaS companies observe heavy feature exploration with limited conversion.
Travel platforms record increased search activity that doesn’t translate into bookings.
In each case, organizations risk optimizing for activity rather than value.
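To make that distortion concrete, here is a toy sketch (all numbers and labels invented for illustration) of how agent-driven "add to cart" events can dilute a funnel metric even though every recorded event is accurate:

```python
# Hypothetical sessions: each tuple is (source, added_to_cart, purchased).
# Price-monitoring agents add items to the cart but never buy.
sessions = [
    ("human", True, True),
    ("human", True, False),
    ("human", False, False),
    ("agent", True, False),
    ("agent", True, False),
    ("agent", True, False),
]

# Cart-to-purchase rate over all traffic, humans and agents mixed.
carted = [s for s in sessions if s[1]]
naive_rate = sum(s[2] for s in carted) / len(carted)

# The same metric restricted to human sessions.
human_carted = [s for s in carted if s[0] == "human"]
human_rate = sum(s[2] for s in human_carted) / len(human_carted)

print(f"cart->purchase, all traffic: {naive_rate:.0%}")   # 20%
print(f"cart->purchase, humans only: {human_rate:.0%}")   # 50%
```

The recorded events are all real, but the blended rate understates human purchase intent by more than half, which is exactly the kind of gap that can mislead ad-spend and inventory decisions.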
Why this is a data and analytics problem
At its core, AI-generated traffic introduces ambiguity into the assumptions underlying analytics and modeling. Many systems assume that observed behavior maps cleanly to human intent. When automated interactions are mixed into datasets, that assumption weakens.
Behavioral data may now include:
Exploration without purchase intent
Research-driven navigation
Task completion without conversion
Repeated patterns driven by automation goals
For analytics teams, this introduces noise into labels, weakens proxy metrics, and increases the risk of feedback loops. Models trained on mixed signals may learn to optimize for volume rather than outcomes that matter to the business.
This doesn’t invalidate analytics. It raises the bar for interpretation.
Data integrity in a machine-to-machine world
As behavioral data increasingly feeds ML systems that shape user experience, the composition of that data matters. If a growing share of interactions comes from automated agents, platforms may begin to optimize for machine navigation rather than human experience.
Over time, this can subtly reshape the web. Interfaces may become efficient for extraction and summarization while losing the irregularities that make them intuitive or engaging for people. Preserving a meaningful human signal requires moving beyond raw volume and focusing on interaction context.
From exclusion to interpretation
For years, the default response to automation was exclusion. CAPTCHAs, rate limits, and static thresholds worked well when automated behavior was clearly distinct.
That approach is becoming less effective. AI-driven agents often provide real value to users, and blanket blocking can degrade user experience without improving outcomes. As a result, many organizations are shifting from exclusion toward interpretation.
Rather than asking how to keep automation out, teams are asking how to understand different types of traffic and respond appropriately — serving purpose-aligned experiences without assuming a single definition of legitimacy.
Behavioral context as a complementary signal
One promising approach is focusing on behavioral context. Instead of centering analysis on identity, systems examine how interactions unfold over time.
Human behavior is inconsistent and inefficient. People hesitate, backtrack, and explore unpredictably. Automated agents, even when adaptive, tend to exhibit a more structured internal logic. By observing navigation flow, timing variability, and interaction sequencing, teams can infer intent probabilistically rather than categorically.
This allows organizations to remain open while gaining a more nuanced understanding of activity.
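As a rough illustration of the idea, the sketch below (a toy under stated assumptions, not a production detector) scores sessions by the variability of the gaps between events. The assumption is that near-uniform pacing is one weak, probabilistic hint of scripted navigation, to be combined with other signals rather than used alone:

```python
from statistics import mean, stdev

def timing_variability(event_times):
    """Coefficient of variation of the gaps between successive events.

    Higher values mean more irregular, human-like pacing; values near
    zero mean metronome-like pacing.
    """
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0
    return stdev(gaps) / mean(gaps)

# Invented timestamps (seconds): a human-like session with hesitation
# and long pauses, versus an agent-like session with constant pacing.
human = [0.0, 2.1, 9.8, 11.0, 25.4, 26.0]
agent = [0.0, 1.5, 3.0, 4.5, 6.0, 7.5]

print(f"human variability: {timing_variability(human):.2f}")
print(f"agent variability: {timing_variability(agent):.2f}")  # 0.00
```

In practice a signal like this would be aggregated over many sessions and weighed alongside navigation flow and interaction sequencing, yielding a probability of automation rather than a hard label.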
Ethics, privacy, and responsible interpretation
As analysis becomes more sophisticated, ethical boundaries become more important. Understanding interaction patterns is not the same as tracking individuals.
The most resilient approaches rely on aggregated, anonymized signals and transparent practices. The goal is to protect platform integrity while respecting user expectations. Trust remains a foundational requirement, not an afterthought.
The future: A spectrum of agency
Looking ahead, web interactions increasingly fall along a spectrum: on one end, humans browsing directly; in the middle, users assisted by AI tools; on the other end, agents acting independently on a user’s behalf.
This evolution reflects a maturing digital ecosystem. It also demands a shift in how success is measured. Simple counts of clicks or visits are no longer sufficient. Value must be assessed in context.
What business leaders should focus on now
AI-generated traffic is not a problem to eliminate — it’s a reality to understand.
Leaders who adapt successfully will:
Reevaluate how engagement metrics are interpreted
Separate activity from intent in analytics reviews
Invest in contextual and probabilistic measurement approaches
Preserve data quality as AI participation grows
Treat trust and privacy as design principles
The web has evolved before, and it will evolve again. The question is whether organizations are prepared to evolve how they read the signals it produces.
Shashwat Jain is a senior software engineer at Amazon.
From copilots and chatbots to advanced analytics and automation, AI systems are now embedded in how organizations operate and compete. Yet as adoption accelerates, a less visible issue is coming sharply into focus: energy.
Enrique Lizaso
Co-founder and CEO at Multiverse Computing.
Data centers are already major consumers of electricity, and AI is pushing demand even higher. Power consumption linked to AI workloads is projected to grow by around 15% per year, far outpacing growth across other sectors.
Training and running large language models (LLMs) requires enormous computational resources, and every additional layer of complexity translates directly into higher energy use.
This trajectory raises a critical question for the future of AI: how long can innovation continue on a path that depends on ever-increasing power consumption?
Power constraints are shaping AI’s future
The AI industry has spent the past decade chasing scale. Larger models, more parameters and bigger datasets have driven impressive gains in performance. At the same time, the cost of delivering those gains has risen sharply.
Electricity prices, grid capacity and data center availability are no longer background considerations. They are becoming limiting factors. In many regions, access to sufficient power is now a strategic constraint, shaping where AI infrastructure can be built and who can afford to use it.
For businesses, this creates growing tension. Advanced AI promises efficiency and competitive advantage, yet the operational costs of running large models can be prohibitive. For governments and regulators, the challenge is even broader: balancing AI-led economic growth with sustainability targets and grid resilience.
Without changes in how AI systems are built and deployed, energy demand risks slowing progress at exactly the moment when momentum is strongest.
Cost-effective AI is essential for wider adoption
The conversation around democratizing AI often focuses on access to tools or models. In practice, affordability plays an equally important role. If advanced AI remains expensive to run, its benefits will concentrate in the hands of a few large organizations with the deepest pockets and the most robust infrastructure.
Most companies do not need the largest possible model available. They need systems that deliver reliable results at a predictable cost. That applies just as much to public sector organizations, manufacturers and mid-sized enterprises as it does to startups.
Energy-efficient AI lowers the barrier to entry. Reduced power requirements mean lower operational costs, simpler deployment and fewer infrastructure constraints. For data centers, this translates into more efficient use of existing capacity, reduced cooling demands and less need for constant expansion.
Optimized models allow organizations to do more with the infrastructure they already have, easing pressure on energy supply while improving overall economics.
Efficiency also enables new deployment models. Smaller, compressed AI systems can run locally on devices such as smartphones, laptops, vehicles and even home or industrial appliances.
By bringing intelligence closer to where data is generated, organizations can reduce latency, improve reliability and limit dependence on centralized cloud infrastructure. For many use cases, this is a practical advantage as well as a sustainability win.
Smaller models can still deliver strong results
There is a widespread assumption that cutting down models inevitably means sacrificing accuracy. Advances in model optimization are challenging that idea.
Techniques such as compression and pruning allow LLMs to be significantly reduced in size while preserving performance on real-world tasks.
This allows organizations to deploy efficient AI models in environments where large-scale systems would be impractical or uneconomical, without sacrificing the performance required for enterprise applications.
The impact is dramatic. Compressed models can be up to 95% smaller, requiring far less memory and compute. That reduction translates directly into lower energy consumption and faster inference, while maintaining the level of accuracy organizations expect.
This approach shifts the emphasis from brute-force scaling to intelligent design. Rather than treating size as a proxy for quality, it prioritizes efficiency, precision and real-world applicability.
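To make the idea concrete, here is a minimal sketch of one such technique, magnitude-based weight pruning, using NumPy. This is an illustrative textbook method, not the specific optimization pipeline any vendor uses, and the 95% sparsity level simply mirrors the size reduction mentioned above:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to drop
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    # Keep only weights strictly above the threshold; zero the rest.
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))          # a toy weight matrix
w_pruned = magnitude_prune(w, sparsity=0.95)
kept = np.count_nonzero(w_pruned) / w.size
print(f"Fraction of weights kept: {kept:.3f}")
```

In practice, production systems combine pruning with quantization, distillation, and retraining to recover accuracy, but the principle is the same: most of a large model's parameters contribute little to its outputs and can be removed.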
Sustainability and competitiveness go hand in hand
As AI becomes a core part of digital infrastructure, its environmental footprint will increasingly matter. Businesses are under pressure to meet ESG commitments, and customers are paying closer attention to how digital services are delivered. Governments, meanwhile, are assessing how AI fits into long-term energy planning.
Energy-efficient AI aligns with all of these priorities. Lower power consumption reduces emissions, eases strain on grids and improves the economics of deployment. It also makes AI more resilient, less dependent on scarce resources and better suited to global scale.
The shift toward efficiency does not require slowing innovation. On the contrary, it creates room for growth by removing one of the most significant constraints facing the industry.
Building the next phase of AI
The next chapter of AI will be shaped less by how large models can become and more by how effectively they can be deployed. Progress depends on systems that are powerful, practical and sustainable.
Achieving that balance requires collaboration across the ecosystem – from researchers developing leaner architectures to organizations rethinking how and where AI is deployed. It also calls for a broader definition of innovation, one that values efficiency alongside raw performance.
AI has the potential to transform industries, improve productivity and address complex global challenges. Ensuring that transformation remains accessible and sustainable will determine how widely those benefits are shared.
Solving AI’s energy challenge is part of that work. Done well, it opens the door to a future where advanced intelligence is not limited by power consumption, but enabled by smarter design.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Tic-Tac-Toe is a relatively simple game, and one of the few which has effectively been solved for perfect play. The nature of the game made it possible for [Joost van Velzen] to create a LEGO machine that can play the game properly in an entirely mechanical fashion.
The build features no electronics to speak of. Instead, it uses 52 mechanical logic gates and 204 bits of mechanical memory to understand and process the game state and respond with appropriate moves in turn. There are some limitations to the build, however—the game state always begins with the machine taking the center square. Furthermore, the initial move must always be played on one of two squares—given the nature of the game though, this doesn’t really make a difference.
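For readers curious about what "solved for perfect play" means, here is a minimal software analogue: a memoized minimax search over the full game tree. This is an illustrative sketch, not a model of the LEGO machine's gate-level logic:

```python
# Minimax over the full Tic-Tac-Toe game tree. With perfect play from
# both sides, the game always ends in a draw (score 0).
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board: str):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def score(board: str, player: str) -> int:
    """+1 if `player` to move can force a win, 0 draw, -1 forced loss."""
    if winner(board) is not None:
        return -1  # the previous player just completed a line
    if '.' not in board:
        return 0   # full board, no winner: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -score(child, other))
    return best

result = score('.' * 9, 'X')
print('Perfect play result for X:', result)  # 0 means a draw
```

The machine's fixed center opening is consistent with this: opening in the center is among the optimal first moves, so hard-wiring it sacrifices nothing.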
It’s also worth heading over to the Flickr page for the project just to appreciate the aesthetics of the build. It’s styled in the fashion of an 18th-century automaton or similar. It’s also been shared on LEGO Ideas, where it has gained quite a following.
If you’ve ever wanted to think about computing in a mechanical sense, this build is a great example of how it can be done. We often see some fun LEGO machines around these parts, from massive parts sorters to somewhat-functional typewriters.
The new Digg experiment is ending in failure. The beta version of the rebuilt social sharing portal has already been shut down – a “difficult” decision that forced the company to significantly downsize its development team. Building new internet projects in 2026 is a completely different experience, Digg…
Exclusivity is a part of luxury, but luxury automakers have found that a more egalitarian approach is good for the bottom line. The Audi Q3 is one of numerous pint-sized crossover SUVs (as well as a few sedans and coupes) from luxury brands that appeal to budget-conscious new car shoppers and thus represent a major opportunity for cynicism. A customer who won’t look past the brand name can end up with an underwhelming car built down to a price. What’s a reputation when image-conscious shoppers don’t know (or care about) the difference?
The redesigned 2026 Audi Q3 naturally incorporates elements from other new or recently-updated Audi models like the Q5 and Q6 e-tron crossovers. But Audi is also dispensing with the bovine excrement and offering a single well-equipped version (with a few options) that’s more expensive than before, but also offers more of everything and ensures customers will get more than just a prestige badge.
It looks more like an SUV than before
Stephen Edelstein/SlashGear
This third-generation Q3 retains the Volkswagen Group MQB architecture used by the outgoing model, as well as the Audi A3 sedan and most of VW’s U.S. lineup, but clothes it in something different. Audi put it on a whey and protein diet, creating more visual bulk in order to make the Q3 look less like a lifted hatchback and more like a traditional SUV. It also looks like a mini version of the Audi Q5, which fits perfectly with its aspirational mission.
That doesn’t mean the new Q3 is nice to look at. The tall front end and body sides left a lot of space that needed to be filled with fussy styling details. The headlights and LED daytime running lights (with three programmable styles) are stacked atop pillar-like air intakes that are mostly blanked off (there’s a small secondary radiator intake below the passenger-side headlight, and slats to direct air around the outside of the wheel wells to reduce drag).
Things are better in profile view, where sheetmetal character lines nicely break up the body-side surfaces and a longer hood makes the front end look less stubby than before. A strip of textured plastic clutters the back end, but is necessary to camouflage the girth of the rear bumper. Optional OLED taillights have programmable styles like the DRLs, but that required a split arrangement similar to that of the 2026 Audi A6 to meet federal regulations.
Form meets function
The interior is perfectly calibrated for the price point. Standard wood dashboard trim provides a nice bit of contrast, while the dash itself has a distinctive concave shape. The overall aesthetic is clean but a bit sterile, like a new apartment in a freshly gentrified neighborhood, the kind of place many Q3 owners will likely call home. Even the plastics are presentable, although there’s an unfortunate moat of piano black around the cupholders, where it’s likely to get stained by coffee drips.
Designers also made sure aesthetics didn’t get in the way of functionality. The door handles are placed high and are easy to grab, a simple detail many other automakers nonetheless miss. The doors also have seemingly endless space for water bottles, and the wireless phone charger on the center console is now standard equipment. To free up more space, Audi also replaced the shifter and the wiper/turn signal stalks with small tabs on either side of the steering wheel. They don’t take much getting used to, although it is easy to accidentally brush the touchpads on the wheel spokes when using them.
The new Q3 isn’t much bigger than the old version: Audi increased cargo space by 5.3 cubic feet with rear seats up and 2.0 cubic feet with the rear seats folded, to 29.0 and 50.0 cubic feet, respectively. That gives the Audi more cargo space than rivals with the rear seats in place. Headroom and legroom aren’t as remarkable, but are respectable.
Important tech updates
As the least-expensive Audi SUV, one might expect the Q3 to have a watered-down version of the tech seen in its pricier siblings. But in some respects, the Q3 is a step ahead.
The standard touchscreen grows from 10.25 inches to 12.8 inches, while the digital instrument cluster grows from 8.8 inches to 11.9 inches. They’re arranged in the same Digital Stage style as other recent Audi models, meaning in an oversized housing that curves around the driver’s seat. The instrument cluster in particular is much smaller than the bezel around it. That looks chintzy, a word that also describes the sound of the optional 12-speaker Sonos audio system.
However, the Q3 launches with an updated interface that isn’t available on other Audi models yet. This returns the map view to the gauge cluster, and populates both the cluster and touchscreen with large gray icons that are easier to read than the previous version. The underlying Android-based software still provides snappy responses, while incorporating wireless Apple CarPlay and Android Auto.
The voice recognition system is augmented with ChatGPT, allowing the car to answer trivia questions, although when we asked it to tell us a joke it demurred (perhaps because this is a German car). It was also able to make recommendations, although we were unable to confirm the quality of the Mexican food at the place the system suggested due to time constraints. More mundane tasks, such as adjusting the temperature and seat heaters, were easily handled.
It’s quicker and more powerful than before
Like the vehicle architecture, the engine carries over and will be familiar to fans of Audi and its parent Volkswagen brand. It’s the EA888 (here in Evo 4 spec) 2.0-liter turbocharged inline-four. Versions of this engine are used in everything from the VW GTI hot hatchback to the Atlas midsize SUV. In the 2026 Q3, it gets a substantial power bump of 27 horsepower and 22 pound-feet of torque, bringing the totals up to 255 hp and 273 lb-ft.
That extra power means, according to Audi, the Q3 will now do zero to 60 mph in 5.5 seconds instead of 7.1 seconds. That’s quicker than a BMW X1 xDrive28i or Mercedes-Benz GLA 250 4Matic—the base all-wheel drive versions of the Audi’s German rivals—where the opposite was the case previously. And it leaves other subcompact crossovers behind as well. BMW and Mercedes offer quicker versions, though, whereas the Q3 is only available in a single spec for now. Audi engineered RS Q3 performance versions of the previous two generations, but not for the U.S.
Naturally for Audi, all-wheel drive is standard. That’s far from a given in the segment, despite the premium brands playing in it. However, this redesign swaps the eight-speed automatic transmission for a seven-speed dual-clutch unit, like that used in the Q5 and A5. And as in those models, it’s not very well calibrated. There’s significant hesitation at throttle tip-in, and a bit of clunkiness after that.
But more power doesn’t equal more fun
The transmission wasn’t the only thing that needed a bit more fine-tuning. Audi fitted the latest Q3 with its “progressive” steering rack, which is supposed to quicken up responses as the wheel is turned more. But that effect wasn’t apparent when maneuvering on urban streets. Combined with the slow-reacting transmission, it made this subcompact crossover feel clumsy in an area where it should excel. Outward visibility is at least better than in bigger traditional SUVs.
Unshackled from stop-and-go traffic, the Q3 felt more composed, and indeed refined enough to justify its prestige badge. It may share components with humble Volkswagen models, but the comfortable ride (even on optional 20-inch wheels) shows just how well-engineered those components are. Audi also added acoustic laminated glass to the front-door windows for 2026, lowering the amount of wind noise significantly. The crashing of the 20-inch wheels against surface imperfections was still apparent, but that could be cured with smaller wheels.
What’s missing is excitement. Audi dialed up the power, but it didn’t do the same with anything else. The Q3 goes around corners without embarrassing itself, and it has autobahn-worthy solidity at highway speeds. But this isn’t a car that encourages you to take the long way home. Most shoppers probably aren’t looking for that from their entry-level crossover, even if Audi laid the foundation for something sportier with the Q3’s added power.
2026 Audi Q3 verdict
Audi’s decision to offer the 2026 Q3 in one well-equipped spec means it’s more expensive than before, but it’s still a good value. The base price of $44,995 is $3,900 higher than the outgoing model, but Audi claims that $3,699 worth of previously-optional equipment is now standard. Factor in the added power, cargo space, and screen size, and the price increase is (for once) justified.
Several option packages are available, which can boost the price to $51,790 for a fully-loaded model. That’s not an enormous step up from the starting price, showing that Audi really did cover the bases with standard equipment. Many of the added features are design-related or, like the lackluster Sonos audio system, don’t dramatically change the experience. The Q3 is also still priced close to all-wheel drive versions of its BMW X1 and Mercedes GLA-Class rivals, as well as the Volvo XC40. The Acura ADX and Lexus UX are cheaper, but also less appealing.
The Q3 won’t make these rivals obsolete, but it will deliver what customers attracted to its four-ring badge should expect. Its refined driving experience and long list of features provide solid reasons to choose a Q3 over something more mainstream, even if its handling and styling aren’t as stirring as they should be. It could be better but, by delivering a true budget-friendly dose of luxury, it’s probably already better than it needed to be.
Microsoft has removed the Samsung Galaxy Connect app from the Microsoft Store because it was causing issues on specific Samsung Galaxy Book 4 and desktop models running Windows 11.
This comes after the company said on Friday that it was investigating reports of app failures and users losing access to their C:\ drive on some Windows 11 systems.
“Users might encounter the error, ‘C:\ is not accessible – Access denied,’ which prevents access to files and blocks the launch of some applications including Outlook, Office apps, web browsers, system utilities and Quick Assist,” Microsoft explained.
The known issue impacts a wide range of Samsung Galaxy Book 4 and Samsung Desktop models running Windows 11, including NP750XGJ, NP750XGL, NP754XGJ, NP754XFG, NP754XGK, DM500SGA, DM500TDA, DM500TGA, and DM501SGA.
On affected devices, users have been experiencing problems launching apps, accessing files, or performing administrative tasks, and, in some cases, issues elevating privileges, uninstalling updates, or collecting logs due to permission failures.
Following a joint investigation with Samsung, Microsoft has attributed these issues to the Samsung Galaxy Connect app (used for screen mirroring, file sharing, and data transfer between Galaxy devices and Windows PCs) and temporarily removed it from the Microsoft Store.
“The affected Samsung Galaxy Connect application was temporarily removed from the Microsoft Store to prevent further installations,” Microsoft said.
“Samsung has republished a stable previous version of the application to stop recurrence on additional devices. Recovery options for devices already impacted remain limited, and Samsung continues to evaluate remediation approaches with Microsoft’s support.”
Microsoft and Samsung have not yet provided a workaround and are still working on a fix for affected Windows 11 devices. Impacted users are advised to contact Samsung for device-specific assistance.
On Friday, Microsoft also released an out-of-band (OOB) update to fix a security issue in the Routing and Remote Access Service (RRAS) management tool affecting Windows 11 Enterprise devices that receive hotpatch updates instead of regular Patch Tuesday cumulative updates.