
Tech

7 Laptop Docking Stations to Unlock the Full Desktop Experience (2026)


Other Laptop Docking Stations to Consider

We test a lot of laptop docking stations and, quite frankly, most of them are … fine. They’re fine! We get into the nitty-gritty for specific use cases to find the best, but that leaves a bunch of devices that are great options even if they don’t make our top picks. Here’s a selection of some of our favorites, past favorites, or just alternatives to our picks above.


Satechi Triple 4K Docking Station

Photograph: Eric Ravenscraft

Satechi Triple 4K Docking Station for $300: Satechi’s Triple 4K Docking Station supports three monitors, and while the first display output is HDMI-only, the other two can be connected via either HDMI or DisplayPort. Each display supports up to 4K resolution at 60 Hz, which is more than enough for most office or media work, though don’t expect the high frame rates you might want for gaming. —Eric Ravenscraft

Plugable USB-C Dual HDMI Display Dock for $120: Sometimes, all you need is a quick and easy way to plug your laptop into a couple of monitors—preferably without spending hundreds of dollars. This dock supports two monitors via HDMI and includes a healthy array of ports for the rest of your accessories. So while there are certainly more up-to-date options out there, this is an affordable way to get some basic connectivity.


Kensington Triple Video Mobile Dock for $83: A mobile docking station might sound like a contradiction, but in the case of the tiny Kensington Triple Video Mobile Dock, it makes a lot of sense. Using the included two HDMI ports and the DisplayPort, this little device can power three 1080p displays or two 4K displays—all at 60 Hz. It also has a USB-C port with 85 watts of pass-through charging, which is enough to charge most laptops. The downside is that it only supports a single 4K monitor on MacBooks, as the dual 4K support is only for Windows devices.


Sonnet Echo 13 Thunderbolt 5 Dock

Courtesy of Luke Larsen

Sonnet Echo 13 Thunderbolt 5 Dock for $440: Sonnet’s Echo 13 was one of the first Thunderbolt 5 docks on the market. As it turns out, it’s also one of the most distinctive offerings out there, packing just about every port imaginable as well as an integrated M.2 storage slot with a Kingston SSD inside. While handy, I don’t like that the drive isn’t user-accessible. The cheap plastic chassis is disappointing for the price, too.


Ivanky FusionDock Max 1 Thunderbolt 4 Docking Station for $380: It’s hard to overstate how excessively luxurious this dock is. It’s specifically for MacBook Pro users and can tackle up to four 6K screens, something only recent MacBook Pros support. The Ivanky FusionDock Max 1 accomplishes this via four USB-C Thunderbolt 4 ports, each capable of 40 Gbps data transfer speeds. If you’re building the beefiest media workstation you can for the most powerful MacBook Pros on the market, this is it. Just put it all on the company card, because it’s expensive. —Eric Ravenscraft

Ugreen Revodok Max 213 Thunderbolt 4 for $228: Few people need an 8K display—or multiple 4K displays—but those who do know how difficult it can be to find gear that supports their needs. Fortunately, the Revodok Max 213 from Ugreen fits that bill. The DisplayPort 1.4 port can handle up to an 8K display at 30 Hz. It also comes with a Thunderbolt 4 upstream port that runs to your laptop, and, more importantly, a pair of downstream Thunderbolt 4 ports, which is another rarity among the docks I’ve tested. If you need to transfer a ton of media from various sources into one machine, connected to seriously high-res displays, this is the dock that can handle it all. —Eric Ravenscraft


Do You Need a Docking Station or a USB Hub?

This is the big question you’ll want to answer before moving forward. Chances are you know if you need a full-on docking station rather than just a USB hub, but I’ll explain the differences in case you’re on the fence. A simple USB hub will handle most people’s needs by expanding your laptop’s potentially very limited selection of ports. If you own a MacBook Air, for example, a USB hub functions as a multiport adapter to get you HDMI, USB-A, and more. Hubs are intended to be portable, and many even include HDMI to connect an external display.

A laptop docking station does quite a lot more. These devices are meant to stay put on a desk, with all your monitors and accessories plugged into them, so you can access your entire workstation setup through a single USB-C cable. Because of that, they require significantly more power, are often bundled with a large power brick, and tend to be quite expensive. So while both accessories connect your laptop to more ports, they serve two different functions.

There are now lots of docks and hubs that blur the lines, offering multi-monitor support in a very small package. These can be useful, but a full docking station will still give you the fastest transfer speeds, the most ports, and better external display support all through a single cable.


What Ports Should Your Docking Station Have?

Figuring out the right connections you need for your setup can be daunting, and the confusing, arcane USB terminology only makes it worse. You can check out our explainer on parsing USB terms here. For the short version, here are the basics you should keep in mind:

Check your ports’ speeds, and don’t rely on version numbers. For a lot of confusing reasons, ports labeled as USB 3.0, 3.1, and 3.2 can all have the same speed or wildly different speeds. For this reason, docking station manufacturers have recently started opting to add speeds (usually written like “5 Gbps”) directly onto individual ports. Use the faster ports for transferring data, and slower ports for things like your keyboard and mouse.
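Those printed speeds translate directly into transfer times. Here’s a rough back-of-the-envelope sketch (the numbers ignore protocol overhead, so real-world transfers will be somewhat slower):

```python
def transfer_time_seconds(size_gb: float, link_gbps: float) -> float:
    """Time to move size_gb gigabytes over a link running at link_gbps.

    Ignores protocol overhead, so real-world transfers take longer.
    """
    return (size_gb * 8) / link_gbps  # 1 byte = 8 bits

# A 50 GB video library over common USB link speeds:
for label, gbps in [("USB 2.0 (480 Mbps)", 0.48),
                    ("USB 5 Gbps", 5),
                    ("USB 10 Gbps", 10),
                    ("Thunderbolt 4 (40 Gbps)", 40)]:
    print(f"{label}: ~{transfer_time_seconds(50, gbps):.0f} seconds")
```

In other words, the same 50 GB that takes nearly 14 minutes over a USB 2.0 port moves in well under a minute on a 10 Gbps port, which is why external storage belongs on the fast ports.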

Thunderbolt is best for lightning-fast data transfers or high-res displays. Thunderbolt is like a supercharged version of USB, and it even uses the same USB-C connector. Thunderbolt ports can move massive amounts of data, which makes them ideal for transferring uncompressed video files, as well as for driving 4K (or even 8K) displays or lower-resolution monitors at extra-high refresh rates.
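To see why displays eat bandwidth so quickly, you can do the math yourself. This sketch counts raw uncompressed pixel data only; it ignores blanking intervals and Display Stream Compression, so real links behave a bit differently:

```python
def display_bandwidth_gbps(width: int, height: int,
                           refresh_hz: int, bits_per_pixel: int = 24) -> float:
    """Raw uncompressed video bandwidth in gigabits per second."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K @ 60 Hz:  {display_bandwidth_gbps(3840, 2160, 60):.1f} Gbps")   # ~11.9
print(f"4K @ 144 Hz: {display_bandwidth_gbps(3840, 2160, 144):.1f} Gbps")  # ~28.7
print(f"8K @ 30 Hz:  {display_bandwidth_gbps(7680, 4320, 30):.1f} Gbps")   # ~23.9
```

A single uncompressed 4K 144 Hz stream would eat most of a 40 Gbps Thunderbolt 4 link on its own, which is why high-refresh and 8K setups push people toward Thunderbolt in the first place.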


Keep in mind your power needs. Most laptop docking stations will have some form of power connector and USB Power Delivery (or USB-PD) that can send power through to your laptop. You’ll also sometimes see this referred to as “pass-through charging.” Most devices you connect will require their own power as well, especially if you want to connect monitors or charge your phone and tablet. If you plan to connect a lot of power-hungry devices, make sure your docking station can handle your needs.
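A quick way to sanity-check that is to add up the draw. Docks divide their total budget between the upstream laptop port and the downstream accessory ports in model-specific ways, so treat this as a rough totals check; all the wattages below are illustrative guesses, not measurements from any particular dock:

```python
def power_headroom_watts(dock_budget_w: float, laptop_w: float,
                         accessories_w: dict) -> float:
    """Watts left over after the laptop and accessories draw from the dock.

    All numbers are illustrative; check your own devices' ratings.
    """
    return dock_budget_w - laptop_w - sum(accessories_w.values())

headroom = power_headroom_watts(
    dock_budget_w=100,                      # dock's advertised total output
    laptop_w=65,                            # typical ultrabook charge rate
    accessories_w={"phone": 15, "portable SSD": 5},
)
print(f"{headroom} W of headroom left")     # 15 W of headroom left
```

If the headroom goes negative, something will charge slowly or not at all, so budget for your hungriest devices.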

Upstream and downstream ports. You’ll often see USB ports labeled as either upstream or downstream. A dock’s upstream port is the one that connects to the host (your laptop or PC), while its downstream ports are the ones you plug peripherals into, such as external drives, keyboards, and mice.

All the docks in our recommendations are compatible with both Mac and Windows unless otherwise noted. But there are lots of hubs and docks out there that have certain limitations on Mac, such as only supporting mirroring mode with dual monitors. That’s not a problem on Windows.

Lower-end Macs also limit the number of external displays you can connect. There is a way around this if you use a dock that supports DisplayLink: its software creates a “virtual GPU” that tricks the system into allowing additional displays, so you can drive more monitors than a MacBook Air, for example, typically allows. In my experience, however, the performance can be shoddy, and you may run into issues with latency.


Is Thunderbolt 5 Worth It?

The first Thunderbolt 5-capable PCs, docks, and accessories came out in 2024. Thunderbolt 5 can handle three 4K displays at 144 Hz (or two 4K displays at 240 Hz) and can deliver up to 240 watts of power, a dramatic jump from Thunderbolt 4’s 100 watts. That lets you fully juice up more powerful devices, such as gaming laptops or the 16-inch M4 Max MacBook Pro.

Thunderbolt 5 docks are all backward compatible, so there’s no worry about leaving the peripherals you currently own behind. As in many scenarios, buying the latest spec is often worth it to avoid having to upgrade later. However, the adoption of Thunderbolt 5 has been slower than I’d hoped, and if you don’t have a Thunderbolt 5 laptop to connect to, you won’t get the full benefit the standard offers.

While there’s a breadth of Thunderbolt 5 docks out in the world (many of which you’ll see on this list), the biggest disappointment has been the lack of Thunderbolt 5 accessories released over the past year. It’s still very difficult, for example, to find a Thunderbolt 5 SSD that can take advantage of those faster speeds. Thunderbolt 5 docks are sometimes only marginally more expensive than previous-generation options, so they’re often worth buying for the improved display support or higher power delivery. For example, CalDigit’s TS5 is only $20 more than the TS4.



Tech

Cognizant TriZetto breach exposes health data of 3.4 million patients



TriZetto Provider Solutions, a healthcare IT company that develops software and services used by health insurers and healthcare providers, has suffered a data breach that exposed the sensitive information of over 3.4 million people.

The firm, which has been operating under the Cognizant umbrella since 2014, disclosed that it detected suspicious activity on a web portal on October 2, 2025, and launched an investigation with the help of external cybersecurity experts.

The investigation revealed that unauthorized access began nearly a year before, on November 19, 2024.

During the exposure period, the threat actors accessed records relating to insurance eligibility verification transactions, which are part of the process providers use to confirm a patient’s insurance coverage before treatment.

Advertisement

The types of data that have been exposed vary per individual, and may include one or more of the following:

  • Full names
  • Physical address
  • Date of birth
  • Social Security number
  • Health insurance member number
  • Medicare beneficiary identifier
  • Provider name
  • Health insurer name
  • Demographic, health, and insurance information

Affected providers were alerted on December 9, 2025, but customer notification started in early February 2026. According to a filing submitted today with Maine’s Attorney General, the number of exposed individuals is 3,433,965.

TriZetto says that no payment card, bank account, or other financial information was exposed in this incident.

Also, the company is not aware of any cases where cybercriminals have attempted to misuse this information.

TriZetto says it has taken steps to strengthen cybersecurity on its systems and informed law enforcement authorities of the incident.

Advertisement

Notification recipients are offered free 12-month coverage of credit monitoring and identity protection services from Kroll to help mitigate risks arising from compromised data.

BleepingComputer has contacted TriZetto to learn more about the nature of the security breach and why the firm delayed notifying consumers for several months, but we had not received a response by publication time.

No ransomware groups have taken responsibility for the attack yet, and no data leaks linked to TriZetto have appeared on underground forums.

Cognizant itself was rumored to have suffered a Maze ransomware breach in 2020. In June 2025, Clorox sued the IT firm for gross negligence after it allegedly let Scattered Spider operatives into its network following a social engineering attack in September 2023.


Tech

The remake of one of the best Assassin’s Creed games is actually happening


Ubisoft has finally confirmed what Assassin’s Creed fans have suspected for years: a remake of Assassin’s Creed IV: Black Flag is officially in the works.

The company revealed the project, titled Assassin’s Creed: Black Flag Resynced, in a new blog post outlining the future of the long-running series.

We don’t know much about the game yet, but initial reports suggest that Resynced will be a full remake rather than a simple remaster, with upgraded visuals and gameplay improvements, bringing one of the best AC games into the modern age.

It’s also suggested that new story content will be added to flesh out the world around Edward Kenway’s life – at the expense of the modern day gameplay, which has apparently been removed from the remake altogether. It’ll be interesting to see how this all works, given how the original game weaved parts of both storylines into the ending.


We’ve known for quite some time that Ubisoft has been thinking about breathing new life into the 2013 game, and the project was more or less confirmed when the name surfaced on a European ratings board listing late last year.


We don’t yet have a release date for the game, but we know that an unannounced game was due to arrive before the end of the current financial year. Of course, Ubisoft delayed seven games earlier this year – and Black Flag is expected to be one of them.

Whether or not we see the game before the end of 2026 remains to be seen, but for now we’ll keep our “spyglass on the horizon”.


Tech

Fully charged: Meet the local leader energizing the Pacific Northwest battery boom


Grayson Shor, far right, at a recent Pacific Northwest Battery Collaborative meetup at a Seattle brewery on Capitol Hill. Shor launched the organization to help the sector build connections. (PNWBC Photo)

Grayson Shor, founder and executive director of the Pacific Northwest Battery Collaborative, is the driving force that’s uniting and energizing the region’s battery community.

The collaborative’s launch in October 2024 was so popular it ran out of chairs and the group now caps RSVPs because venues keep maxing out. The nonprofit has hosted 1,400 attendees at 17 different events in Washington, Oregon and online. Shor’s latest project is helping create a battery-focused mini-series he describes as a hybrid between Anthony Bourdain’s “Parts Unknown” and “Cosmos.”

Who knew that energy storage devices could generate so much enthusiasm?

“Batteries are sexy right now,” Shor said.

Batteries are making electric vehicle adoption more attractive as they’ve become increasingly powerful and quicker to recharge. They’re ubiquitous given the pervasive use of phones and consumer electronics. And as electricity demand is spiking thanks to data centers and other energy users, they’re a relatively quick, affordable way to add more power to the grid.


“We are installing more grid batteries in 2025 than the total amount that existed globally just two years ago,” Shor said. “This isn’t just growth, it’s a total reimagining of how our economy is powered.”

A battery ecosystem emerges

Part of the crowd at the Pacific Northwest Battery Collaborative launch party, with founder Grayson Shor in the front row in a tie. (PNWBC Photo)

Shor has spent nearly a decade working on sustainability, circular economy and battery-related issues for organizations ranging from the U.S. Department of State to Amazon to startups. When the former diplomat landed in Seattle from the other Washington more than two years ago, he was impressed by the region’s battery sector.

That included startups in electric aviation, alternative chemistries such as sodium batteries, and next-generation silicon battery materials, plus R&D resources and support at the University of Washington’s Clean Energy Institute.

But he realized the industry lacked the connections to bring together companies, academics, entrepreneurs and investors, and set out to address it. The sector welcomes his efforts.

“I’ve paid attention to folks trying to knit together community, and for the Northwest battery innovation and application ecosystem, Grayson Shor has been an unrelenting force seeking to build and amplify our unique strengths,” said Dan Schwartz, founding director of the Clean Energy Institute.


Tom Gurski, founder of the plug-in hybrid vehicle startup Blue Dot Motorworks, has attended the group’s functions. “In a region famous for introverted personalities, their events and happy hours are invaluable for breaking down silos and getting people to connect,” Gurski said.

Beyond building community, Shor is lobbying for support for local and state policies that promote the industry and get more batteries deployed in the state. The energy storage devices have important societal benefits, he said, including better electrical grid performance and helping meet power needs during peak demand.

‘The Battery Life’

Shor speaking at a Pacific Northwest Battery Collaborative event in Seattle during 2025 PNW Climate Week. (PNWBC Photo)

Shor is also the co-founder and chief product officer for Buckstop, an “urban mining” startup helping recover critical minerals from waste electronics. He also volunteers as the policy and government affairs director for the Volta Foundation, the world’s largest battery industry association.

And there’s the TV series, called “The Battery Life.” Crews recently spent three days in the Seattle area filming the first episode, visiting the battery materials company Group14 Technologies and interviewing startups at the UW’s Clean Energy Test Beds.

“We’re doing walks through factories. We’re meeting with the CEOs and the inventors, diving deep into their technology,” Shor said. But the series also has “the ‘Carl Sagan vibe,’” he added, explaining “how does this technology actually impact humanity, and why does it matter to the average person?”


Additional episodes will be shot in Portland and Vancouver, B.C. The plan is to air the series later this year at energy events in Oregon and Las Vegas, plus other area venues.

Future Pacific Northwest Battery Collaborative plans include a job fair and fundraising gala. Shor also envisions a convention where the entrepreneurs and innovators could set up booths to show off their technologies. The ideas keep coming.

“This is playing my little role in trying to tackle climate change, to try to advance the energy transition,” he said. “It helps with equity, it helps with economic opportunity … It makes me happy.”
 


Tech

The World’s Smallest Marble Clock With Pick And Place Arm


Clocks come in many styles and sizes, with perhaps the most visually pleasing ones involving marbles. Watching these little spheres obey gravity and form clearly readable numbers on a clock has strong mesmerizing qualities. If you’re not into really big marble clocks, or cannot quite find the space for a desk-sized clock, then the tiny marble clock by [Jens] may be an option.

While he totally loved the massive marble clock that [Ivan Miranda] built, it’s a huge contraption that’s hard to justify as a permanent installation. His take on the concept thus makes it as small as possible by using a pick-and-place style arm to place the marbles instead. Although the marbles don’t do a lot of rolling this way, it’s decidedly quieter, replacing the rumbling and click-clacking of marbles with the smooth motion of a robotic arm.

Another benefit of this clock is that it’s cheap to make, with a price tag of less than $23. A big part of this is the use of cheap SG90 micro servos, and a permanent magnet along with a mechanism that pushes the marble off said magnet. Perhaps the biggest issue with this clock is that the arm somewhat obscures the time while it’s moving around, but it’s definitely another interesting addition to the gallery of marble clocks.


We have previously seen such clocks built out of wood and brass as well as 3D-printed using pendulum mechanisms, which can be made pretty compact as well, albeit with a more analog vibe.

Thanks to [Hari] for the tip.


Tech

Scenario Modeling and Array Design for Non-Terrestrial Networks (NTNs)



Non-terrestrial networks (NTNs) using low earth orbit (LEO) satellites present unique technical challenges, from managing large satellite constellations to ensuring reliable communication links. In this webinar, we’ll explore how to address these complexities using comprehensive modeling and simulation techniques. Discover how to model and analyze satellite orbits, onboard antennas and arrays, transmitter power amplifiers (PAs), signal propagation channels, and the RF and digital receiver segments—all within an integrated workflow. Learn the importance of including every link component to achieve accurate, reliable system performance.

Highlights include:

  • Modeling large satellite constellations
  • Analyzing and visualizing time-varying visibility and link closure
  • Using graphical apps for antenna analysis and RF component design
  • Modeling PAs and digital predistortion
  • Simulating interference effects in communication links

Click ‘Watch Now’ to explore this webinar.


Tech

Utah’s Proposal To Tax Online Pornography Is A Civil Liberties Disaster Waiting To Happen


from the bad-ideas-stupider-ideas dept

Republican lawmakers in Utah have long been on the cutting edge of shitty policymaking when it comes to regulating the internet. The latest chapter in that legacy is a proposed tax on porn and adult content purchased in the state’s digital space.

Originally proposed by a pair of Republican lawmakers in the Utah state legislature earlier this year, Senate Bill (SB) 73 would levy a so-called “material harmful to minors” tax at 2 percent on revenues generated by the sale of online porn (it was originally 7 percent). Having been amended and passed through the state Senate with considerable support, SB 73 is on track to clear the hurdles of the House of Representatives and be signed into law by Gov. Spencer Cox, a Republican and staunch anti-pornography activist like the bill’s sponsors. 

This activism from Gov. Cox and the sponsors of porn tax bill—Republican state Sen. Calvin R. Musselman and state Rep. Steve Eliason—could presage a far more corrosive and expansive campaign against civil liberties and key freedom of expression protections that cover sexually-related speech. 

First off, SB 73 would fund a variety of efforts for Utah’s state government. Such efforts benefiting from the funds under the proposal would include enforcement efforts for the state’s social media and pornography age verification laws. 


But the bill goes further, especially after several rounds of amendment in the Senate and the House to address web traffic sourced from virtual private networks (VPNs) and other proxies. The bill would make it illegal to circumvent content blocks that platforms implement due to local age verification laws, punishable by a bevy of civil penalties. Even more extreme, a provision in the bill would also make it illegal for websites covered by age verification laws (e.g., a porn site) to offer Utah-based users information about using VPNs to securely get around any content blocks.

Consider the following language in the current form of Senate Bill 73 regarding VPN “facilitation”:

“A commercial entity that operates a website that contains a substantial portion of material harmful to minors may not facilitate or encourage the use of a virtual private network, proxy server, or other means to circumvent age verification requirements, including by providing: (a) instructions on how to use a virtual private network or proxy server to access the website; or (b) means for individuals in this state to circumvent geofencing or blocking.”

This goes far beyond anything I’ve seen in recent legislative trends in state legislatures controlled by conservative GOP politicians. The bill is similar to a law on the books in Alabama, which imposes a 10 percent tax on all porn websites in that state’s digital space, paired with an extra set of legal requirements for adult performers to have notarized consent forms that contradict existing federal record-keeping laws.

Utah’s bill doesn’t go that far on the concerns of records, but it certainly conjures up civil liberties concerns. Aside from the glaring privacy concerns related to age verification tech, Utah has no right to restrict the communications of a private company to its customers. This goes double for attempts to supersede interstate commerce on a category of products and services that are lawful. And don’t forget the dimensions of the porn tax. SB 73’s approach is expansive and blatantly violates the First Amendment rights of millions of people, not just those who live within the state boundaries of Utah. 


The tax is a textbook “sin tax,” the kind a jurisdiction would levy on something like alcohol, tobacco, or gambling. But the difference between buying a six-pack of beer and wanking off alone in your home is that buying that beer from the liquor store isn’t necessarily considered expressive in nature. Producing, selling, and consuming pornography are matters of protected sexual speech so long as nothing illegal or criminal occurs. Porn taxes like the one proposed in SB 73 explicitly outline “covered entities” to include all entities that sell adult content through clip sales, subscriptions, and fan sites. Their total Utah sales revenues are then taxed at the 2 percent levy and paid to the state each year.

This might be an incidental bump in the road for many of the larger platforms, like Pornhub or OnlyFans, but this type of policymaking is a vindictive ploy to make operating a small and medium business in this space excruciatingly harder. I do see the Utah bill passing this legislative session, which would lead to a potential legal standoff in a federal courthouse. But I am not holding my breath for anything more beyond that. 

Michael McGrady covers the tech and legal sides of the online porn business.

Filed Under: 1st amendment, free speech, porn tax, sin tax, state laws, utah, vpns


Tech

Artificial Muscles, Boston Dynamics, and More Videos


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

The functional replication and actuation of complex structures inspired by nature is a longstanding goal for humanity. Creating such complex structures combining soft and rigid features and actuating them with artificial muscles would further our understanding of natural kinematic structures. We printed a biomimetic hand in a single print process comprised of a rigid skeleton, soft joint capsules, tendons, and printed touch sensors.

[ Paper ] via [ SRL ]


Two Boston Dynamics product managers talk about their favorite classic BD robots, and then I talk about mine.

And this is Boston Dynamics’ LittleDog, doing legged locomotion research 16 or so years ago in what I’m pretty sure is Katie Byl’s lab at UCSB.


[ Boston Dynamics ]

This is our latest work on a trajectory planning method for floating-base articulated robots, enabling global path searching in complex and cluttered environments.

[ DRAGON Lab ]

Thanks, Moju!


OmniPlanner is a unified solution for exploration and inspection path planning (as well as target reach) across aerial, ground, and underwater robots. It has been verified through extensive simulations and a multitude of field tests, including in underground mines, ballast water tanks, forests, university buildings, and submarine bunkers.

[ NTNU ]

Thanks, Kostas!

In the ARISE project, the FZI Research Center for Information Technology and its international partners ETH Zurich, University of Zurich, University of Bern, and University of Basel took a major step toward future lunar missions by testing cooperative autonomous multi-robot teams under outdoor conditions.


[ FZI ]

Welcome to the future, where there are no other humans.

[ Zhejiang Humanoid ]


This is our latest work on robotic fish, and is also the first underwater robot of DRAGON Lab.

[ DRAGON Lab ]

Thanks, Moju!

Watch this one simple trick to make humanoid robots cheaper and safer!


[ Zhejiang Humanoid ]

‘Gugusse and the Automaton’ is an 1897 French film by Georges Méliès featuring a humanoid robot in nearly as realistic a way as some of the humanoid promo videos we’ve seen lately.


[ Library of Congress ] via [ Gizmodo ]

At Agility, we create automated solutions for the hardest work. We’re incredibly proud of how far we’ve come, and can’t wait to show you what’s next.

[ Agility ]


[ Humanoids Summit ]

Anca Dragan is no stranger to Waymo. She worked with us for six years while also at UC Berkeley and now, Google DeepMind. Her focus on making AI safer helped Waymo as it launched commercially. In this final episode of our season, Anca describes how her work enables AI agents to work fluently with people, based on human goals and values.

[ Waymo Podcast ]

This UPenn GRASP SFI Seminar is by Junyao Shi, on “Unlocking Generalist Robots with Human Data and Foundation Models.”


Building general-purpose robots remains fundamentally constrained by data scarcity and labor-intensive engineering. Unlike vision and language, robotics lacks large, diverse datasets spanning tasks, environments, and embodiments, limiting both scalability and generalization. This talk explores how human data and foundation models trained at scale can help overcome these bottlenecks.

[ UPenn ]

From Your Site Articles

Related Articles Around the Web


Tech

The Xbox isn’t ending, but it needs these 3 changes to return to glory


If you’ve spent any time following gaming news in early 2026, you might think the end of Xbox is right around the corner. Between reports of a 32% year-over-year drop in hardware revenue, the sudden departure of longtime Xbox boss Phil Spencer, and wild speculation that Microsoft might pivot the entire gaming division toward AI, the internet has been flooded with dramatic takes about the “death of Xbox.”

But the eulogies are premature. Despite the noise, Xbox still sits on one of the most powerful portfolios in gaming, including Halo, Forza, Gears of War, Call of Duty, Minecraft, and more. Microsoft also has the financial backing, infrastructure, and studio network to remain a major player for decades. The real issue isn’t survival, but identity.

You see, for several years, Xbox leadership pushed an ambitious idea that “every screen is an Xbox.” The strategy expanded the brand through cloud gaming, PC integration, and Game Pass across multiple platforms. While that approach broadened reach, it also created confusion about what Xbox actually is. Now, under the new leadership of Microsoft Gaming CEO Asha Sharma, the company appears to be acknowledging that confusion and attempting a course correction.

Sharma recently confirmed Project Helix, the codename for Xbox’s next-generation hardware, promising a device that will “lead in performance and play your Xbox and PC games.” That announcement alone signals a shift in direction. Xbox isn’t ending, but it is entering a critical rebuilding phase. And if the company wants to return to its former glory, experts and players alike largely agree that three major changes are essential.

1. Nail the execution of Project Helix

One of the biggest challenges Xbox faces today is simple: many players aren’t sure why they should buy an Xbox console anymore.


If the same games appear on PC, and sometimes even on rival platforms, what makes the Xbox console special? That’s where Project Helix could become the most important product Microsoft has released in years. Rumored for a 2027 launch, Helix is expected to be a hybrid system, essentially a powerful AMD-powered console running a “console-ized” version of Windows. The promise is compelling: the simplicity of a traditional console combined with the flexibility of a gaming PC.

Imagine a device that boots straight into a controller-friendly interface but also lets players access platforms like Steam or Epic from the living room. If done right, Helix could blur the line between PC and console in a way no competitor currently offers. But execution will determine everything. Helix must never feel like a desktop computer awkwardly connected to a TV. Instead, it needs to launch into a seamless controller-first experience, like the “Xbox Full Screen Experience” we saw on the ROG Xbox Ally, preserving the plug-and-play simplicity that console players expect.

If Microsoft can successfully merge the PC and console ecosystems without sacrificing ease of use, Helix won’t just save Xbox hardware; it could redefine what a console is. Yes, it’s likely going to be expensive, with rumors suggesting a price tag that could cross the $1,000 mark. But Xbox could still justify that premium if it delivers on the other two pillars that matter just as much.

2. Let the studios deliver the games

The second major fix is both obvious and unavoidable: Xbox needs more great games, more consistently.


Over the past decade, Microsoft has spent nearly $100 billion acquiring studios, including Bethesda and Activision Blizzard. On paper, that gives Xbox one of the strongest first-party lineups in gaming history. Yet the results have been uneven. Franchises like Halo, Gears of War, and Forza, once the backbone of the platform, have seen long development gaps. Meanwhile, studio closures, layoffs, and shifting corporate priorities have created uncertainty inside Microsoft’s gaming division.

Adding insult to injury, when Sharma took over, some players worried that her background in AI-driven tech companies might push Xbox toward algorithm-generated content. Thankfully, she has quickly pushed back on that idea, stating that Microsoft will not “chase short-term efficiency or flood our ecosystem with soulless AI slop.” Now the company needs to prove it.

Microsoft now owns some of the most talented developers in the world. What they need most is stability. Fewer shifting mandates, fewer corporate interruptions, and enough time to create the kind of system-defining games that drive entire console generations. Because ultimately, subscriptions and hardware don’t sell themselves. Great games do. The upcoming Forza Horizon 6 is already generating plenty of buzz and appears well on track to be a major success. However, Microsoft will need a steady stream of titles, especially strong exclusives, if it hopes to match the kind of consistent first-party momentum Sony has built on the PlayStation side.

3. Rebuild the culture around Xbox

Finally, there’s one part of the Xbox experience that often gets overlooked: the community culture. For many fans, the Xbox 360 era still feels like the golden age of the platform. Profiles felt personal, avatars actually mattered, and the dashboard felt like a social space where gamers could hang out. It wasn’t just a storefront pushing subscriptions and ads.

Over time, much of that personality has disappeared. Today, the Xbox dashboard is often criticized for feeling cluttered with Game Pass promotions and advertisements. Across communities like Reddit, ResetEra, and Xbox Insider forums, the message from players is clear: bring back the personality. Fans want things like dynamic themes, meaningful achievement rewards, deeper avatar integration, and more ways to personalize the UI so the console feels like their space again.

Players are also asking Xbox to double down on something it once did better than anyone else: game preservation. The Backward Compatibility program was hugely popular, and with Activision Blizzard now under Microsoft’s umbrella, fans want to see classic titles return. If Xbox can become the place where decades of gaming history remain playable on modern hardware, it could turn preservation into one of its biggest strengths.

The road back

Long story short, Xbox isn’t going anywhere anytime soon. The brand still holds enormous influence in the gaming industry, backed by Microsoft’s resources and a massive network of studios and services. However, the platform is at a turning point.

For Xbox to truly thrive again, the solution isn’t chasing every new trend. It’s about focusing on the basics: delivering great games consistently, launching a strong next-generation hardware platform, and reconnecting with the community that built the brand. If Microsoft gets these fundamentals right, the “Xbox is dying” narrative could quickly fade, and the next chapter of Xbox might end up being its most exciting yet.


MSI unveils a lobster-like PC with a 13.3-inch touchscreen, RTX 5080X, and a quirky design that defies all conventions



  • MSI MEG Vision X AI 13.3-inch touchscreen doubles as a monitoring hub for creatives and professionals
  • GPU selection dictates performance for gaming, rendering, and professional workloads alike
  • Lobster-like chassis combines expandability with unconventional aesthetics

MSI has launched the MEG Vision X AI series, a barebones all-in-one PC that combines high-end gaming hardware with a strikingly unconventional design.

The system is a full-size tower measuring 299.3mm wide, 502.7mm deep, and 423.4mm tall, weighing approximately 18.3kg, with a PS3-esque appendage and protrusions that suggest both function and a distinctive aesthetic.


New KV cache compaction technique cuts LLM memory 50x without accuracy loss


Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the area where the model’s working memory is stored.

A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.

While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.

The memory bottleneck of the KV cache

Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted word, the model stores a mathematical representation of every previous token it has processed, also known as the key and value pairs. This critical working memory is known as the KV cache.


The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”

In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.
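As a rough back-of-the-envelope illustration of that scaling (the model dimensions below are hypothetical, chosen to resemble a 70B-class model with grouped-query attention; the article does not specify any):

```python
# Rough KV cache size: 2 (keys + values) x layers x kv_heads x head_dim
# x sequence length x bytes per element.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 70B-class model: 80 layers, 8 grouped-query KV heads,
# head dimension 128, fp16 (2 bytes per element).
per_token = kv_cache_bytes(80, 8, 128, seq_len=1)
print(per_token)              # 327,680 bytes, i.e. 320 KiB per cached token

full_context = kv_cache_bytes(80, 8, 128, seq_len=128_000)
print(full_context / 2**30)   # ~39 GiB of cache for a single 128k-token request
```

At these assumed dimensions, one long-context request alone can exceed the memory of a single accelerator, which is why the cache caps concurrency and batch size.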

To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.

Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.
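The “drop the oldest context” baseline can be sketched in a few lines; the point is that eviction is silent and unrecoverable:

```python
from collections import deque

# Naive "drop the oldest" eviction: keep only the most recent `max_tokens`
# cache entries. Older keys/values are discarded entirely, which is why
# the model forgets early context once the limit is reached.
class SlidingWindowCache:
    def __init__(self, max_tokens):
        self.kv = deque(maxlen=max_tokens)  # each entry: (key, value) for one token

    def append(self, key, value):
        self.kv.append((key, value))        # oldest entry silently evicted when full

cache = SlidingWindowCache(max_tokens=4)
for t in range(6):
    cache.append(f"k{t}", f"v{t}")
print([k for k, _ in cache.kv])  # ['k2', 'k3', 'k4', 'k5'] — tokens 0 and 1 are gone
```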


Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.

How attention matching compresses without the cost

Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.

The researchers realized that to perfectly mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” which is the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight that a token has relative to everything else in the model’s working memory. If the compressed memory can match these two properties, it will behave exactly like the massive, original memory, even when new, unpredictable user prompts are added later. 
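A toy single-head sketch of those two quantities, using generic scaled dot-product attention (the paper’s exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 16                      # head dimension, number of cached tokens
K = rng.normal(size=(n, d))       # cached keys
V = rng.normal(size=(n, d))       # cached values
q = rng.normal(size=d)            # a query from a future decoding step

# Unnormalized attention weight of each cached token for this query.
w = np.exp(K @ q / np.sqrt(d))

# "Attention mass": the total unnormalized weight the cache contributes,
# i.e. the softmax denominator. A compacted cache must preserve this so it
# keeps the right weight relative to new tokens appended later.
mass = w.sum()

# "Attention output": the information the head actually reads out of memory.
output = (w / mass) @ V

print(output.shape)  # a d-dimensional vector the compacted cache must reproduce
```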

“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.

Attention matching

Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.

With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.

This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
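A minimal sketch of that pipeline, assuming generic scaled dot-product attention and using plain ordinary least squares; the scalar bias term and chunked compaction described above are omitted, and `compact_kv` and its parameters are illustrative names, not the authors’ API:

```python
import numpy as np

def compact_kv(K, V, Q_ref, m, d_scale):
    """Simplified attention-matching compaction sketch.

    K, V   : (n, d) cached keys and values
    Q_ref  : (r, d) reference queries, a proxy for future queries
    m      : number of keys to retain (m < n)
    """
    W = np.exp(Q_ref @ K.T / d_scale)                 # (r, n) unnormalized attention
    # 1. Keep the m keys receiving the most attention mass from the references.
    keep = np.argsort(W.sum(axis=0))[-m:]
    K_c = K[keep]
    # 2. Targets: the attention outputs the full cache would have produced.
    targets = (W / W.sum(axis=1, keepdims=True)) @ V  # (r, d)
    # 3. Attention the compacted cache assigns to the same reference queries.
    W_c = np.exp(Q_ref @ K_c.T / d_scale)
    A_c = W_c / W_c.sum(axis=1, keepdims=True)        # (r, m)
    # 4. Solve A_c @ V_c ~= targets by ordinary least squares, so each retained
    #    value absorbs the contribution of the evicted tokens.
    V_c, *_ = np.linalg.lstsq(A_c, targets, rcond=None)
    return K_c, V_c

rng = np.random.default_rng(0)
K, V = rng.normal(size=(64, 8)), rng.normal(size=(64, 8))
Q_ref = rng.normal(size=(32, 8))
K_c, V_c = compact_kv(K, V, Q_ref, m=16, d_scale=np.sqrt(8))  # 4x compaction
```

Because step 4 is a closed-form solve rather than gradient descent, the whole procedure runs in a single pass over the cache.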

Attention matching in action

To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.

The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing accuracy, while taking only seconds to process the documents. Achieving that same level of quality previously required hours of intensive GPU computation per context with Cartridges.


Attention Matching with Qwen-3 (source: arXiv)

When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all. 

Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”

The researchers also explored what happens in cases where absolute precision isn’t necessary but extreme memory savings are. They ran Attention Matching on top of a standard text summary. This combined approach achieved 200x compression. It successfully matched the accuracy of standard summarization alone, but with a very small memory footprint.


One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.
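The control flow of that loop can be sketched as follows; the cache and the 50 percent compaction here are stand-ins (a plain list and simple downsampling), not a real inference engine or the Attention Matching routine:

```python
# Toy sketch of online compaction under a hard memory cap: generate until the
# cache hits the limit, shrink the working memory by ~50%, keep generating.
def run_with_memory_cap(total_steps, cache_limit):
    cache = []            # stand-in KV cache: one entry per generated token
    compactions = 0
    for step in range(total_steps):
        cache.append(step)
        if len(cache) >= cache_limit:
            cache = cache[::2]        # stand-in for 50% compaction mid-thought
            compactions += 1
    return compactions, len(cache)

print(run_with_memory_cap(total_steps=1000, cache_limit=256))  # → (6, 232)
```

In this toy run, the cap is hit six times over 1,000 steps, mirroring the paper’s scenario of a cache shrunk repeatedly mid-reasoning while generation continues uninterrupted.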

There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.

The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models. 

The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.


Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction to shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”
