Tech

New KV cache compaction technique cuts LLM memory 50x without accuracy loss

Enterprise AI applications that handle large documents or long-horizon tasks face a severe memory bottleneck. As the context grows longer, so does the KV cache, the data structure that holds the model’s working memory.

A new technique developed by researchers at MIT addresses this challenge with a fast compression method for the KV cache. The technique, called Attention Matching, manages to compact the context by up to 50x with very little loss in quality.

While it is not the only memory compaction technique available, Attention Matching stands out for its execution speed and impressive information-preserving capabilities.

The memory bottleneck of the KV cache

Large language models generate their responses sequentially, one token at a time. To avoid recalculating the entire conversation history from scratch for every predicted token, the model stores a mathematical representation of every previous token it has processed, known as key and value pairs. This critical working memory is known as the KV cache.
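The mechanics are easy to sketch: each decoding step appends one key/value pair to the cache and then attends over everything stored so far, so memory grows linearly with sequence length. A minimal NumPy illustration (not any production implementation):

```python
import numpy as np

def attention_step(query, k_cache, v_cache, new_k, new_v):
    """One decoding step: append this token's key/value pair, then
    attend over the entire cache accumulated so far."""
    k_cache.append(new_k)
    v_cache.append(new_v)
    K = np.stack(k_cache)                      # (seq_len, d)
    V = np.stack(v_cache)                      # (seq_len, d)
    scores = K @ query / np.sqrt(len(query))   # one score per cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax attention weights
    return weights @ V                         # attention output

rng = np.random.default_rng(0)
d, k_cache, v_cache = 8, [], []
for _ in range(100):                           # decode 100 tokens
    out = attention_step(rng.normal(size=d), k_cache, v_cache,
                         rng.normal(size=d), rng.normal(size=d))

print(len(k_cache), len(v_cache))  # 100 100: one entry per token, never freed
```

Nothing is ever evicted, which is exactly why long contexts translate directly into gigabytes of cache.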

The KV cache scales with conversation length because the model is forced to retain these keys and values for all previous tokens in a given interaction. This consumes expensive hardware resources. “In practice, KV cache memory is the biggest bottleneck to serving models at ultra-long context,” Adam Zweiger, co-author of the paper, told VentureBeat. “It caps concurrency, forces smaller batches, and/or requires more aggressive offloading.”

In modern enterprise use cases, such as analyzing massive legal contracts, maintaining multi-session customer dialogues, or running autonomous coding agents, the KV cache can balloon to many gigabytes of memory for a single user request.

To solve this massive bottleneck, the AI industry has tried several strategies, but these methods fall short when deployed in enterprise environments where extreme compression is necessary. A class of technical fixes includes optimizing the KV cache by either evicting tokens the model deems less important or merging similar tokens into a single representation. These techniques work for mild compression but “degrade rapidly at high reduction ratios,” according to the authors.

Real-world applications often rely on simpler techniques, with the most common approach being to simply drop the older context once the memory limit is reached. But this approach causes the model to lose older information as the context grows long. Another alternative is context summarization, where the system pauses, writes a short text summary of the older context, and replaces the original memory with that summary. While this is an industry standard, summarization is highly lossy and heavily damages downstream performance because it might remove pertinent information from the context.
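The drop-oldest workaround amounts to nothing more than a fixed-size window over the token stream, a sketch:

```python
from collections import deque

def sliding_window(tokens, max_len):
    """The common "drop the oldest context" workaround: a fixed-size
    window that silently discards the earliest entries once full."""
    cache = deque(maxlen=max_len)   # old entries fall off the left
    for t in tokens:
        cache.append(t)
    return list(cache)

print(sliding_window(range(10), 4))  # [6, 7, 8, 9]: tokens 0-5 are gone
```

The discarded tokens are unrecoverable, which is why this approach fails whenever an early detail matters later.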

Recent research has proven that it is technically possible to highly compress this memory using a method called Cartridges. However, this approach requires training latent KV cache models through slow, end-to-end mathematical optimization. This gradient-based training can take several hours on expensive GPUs just to compress a single context, making it completely unviable for real-time enterprise applications.

How attention matching compresses without the cost

Attention Matching achieves high-level compaction ratios and quality while being orders of magnitude faster than gradient-based optimization. It bypasses the slow training process through clever mathematical tricks.

The researchers realized that to faithfully mimic how an AI interacts with its memory, they need to preserve two mathematical properties when compressing the original key and value vectors into a smaller footprint. The first is the “attention output,” the actual information the AI extracts when it queries its memory. The second is the “attention mass,” which acts as the mathematical weight a token carries relative to everything else in the model’s working memory. If the compressed memory matches these two properties, it behaves almost identically to the massive, original memory, even when new, unpredictable user prompts are added later.
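For a single attention head, the two quantities can be written out directly (an illustrative NumPy sketch, not the paper's code):

```python
import numpy as np

def attention_stats(Q, K, V):
    """For each query: the attention output (what the head reads out of
    the cache) and the attention mass (the total unnormalized weight the
    cache receives). Attention Matching tries to preserve both."""
    logits = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(logits)                  # unnormalized attention weights
    mass = w.sum(axis=1)                # attention mass per query
    output = (w / mass[:, None]) @ V    # softmax-weighted readout
    return output, mass

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))             # 4 probe queries, head dim 8
K, V = rng.normal(size=(100, 8)), rng.normal(size=(100, 8))
out_full, mass_full = attention_stats(Q, K, V)
# A compacted (K', V') succeeds if it reproduces out_full and mass_full
# for representative queries, despite holding far fewer entries.
```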

“Attention Matching is, in some ways, the ‘correct’ objective for doing latent context compaction in that it directly targets preserving the behavior of each attention head after compaction,” Zweiger said. While token-dropping and related heuristics can work, explicitly matching attention behavior simply leads to better results.


Before compressing the memory, the system generates a small set of “reference queries” that act as a proxy for the types of internal searches the model is likely to perform when reasoning about the specific context. If the compressed memory can accurately answer these reference queries, it will very likely succeed at answering the user’s actual questions later. The authors suggest various methods for generating these reference queries, including appending a hidden prompt to the document telling the model to repeat the previous context, known as the “repeat-prefill” technique. They also suggest a “self-study” approach where the model is prompted to perform a few quick synthetic tasks on the document, such as aggregating all key facts or structuring dates and numbers into a JSON format.

With these queries in hand, the system picks a set of keys to preserve in the compacted KV cache based on signals like the highest attention value. It then uses the keys and reference queries to calculate the matching values along with a scalar bias term. This bias ensures that pertinent information is preserved, allowing each retained key to represent the mass of many removed keys.

This formulation makes it possible to fit the values with simple algebraic techniques, such as ordinary least squares and nonnegative least squares, entirely avoiding compute-heavy gradient-based optimization. This is what makes Attention Matching super fast in comparison to optimization-heavy compaction methods. The researchers also apply chunked compaction, processing contiguous chunks of the input independently and concatenating them, to further improve performance on long contexts.
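A simplified sketch of that fitting step, with the attention-mass bias terms omitted for brevity (function names and shapes here are illustrative, not the released code):

```python
import numpy as np

def fit_compacted_values(Q, K_full, V_full, keep_idx):
    """Keep a subset of keys, then solve an ordinary-least-squares
    problem so the compacted cache reproduces the full cache's
    attention outputs on the reference queries Q."""
    d = K_full.shape[1]
    # Targets: attention outputs of the reference queries on the FULL cache.
    logits = Q @ K_full.T / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    target = (w / w.sum(axis=1, keepdims=True)) @ V_full
    # Attention weights of the same queries over the RETAINED keys only.
    K_c = K_full[keep_idx]
    logits_c = Q @ K_c.T / np.sqrt(d)
    w_c = np.exp(logits_c - logits_c.max(axis=1, keepdims=True))
    A = w_c / w_c.sum(axis=1, keepdims=True)
    # One linear solve (A @ V_c ~= target) instead of gradient descent.
    V_c, *_ = np.linalg.lstsq(A, target, rcond=None)
    return K_c, V_c

rng = np.random.default_rng(0)
K_full, V_full = rng.normal(size=(200, 16)), rng.normal(size=(200, 16))
Q = rng.normal(size=(32, 16))                     # reference queries
K_c, V_c = fit_compacted_values(Q, K_full, V_full, np.arange(0, 200, 20))
print(K_c.shape, V_c.shape)  # (10, 16) (10, 16): a 20x smaller cache
```

The expensive part of gradient-based approaches is replaced by a single closed-form solve, which is where the speedup comes from.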

Attention matching in action

To understand how this method performs in the real world, the researchers ran a series of stress tests using popular open-source models like Llama 3.1 and Qwen-3 on two distinct types of enterprise datasets. The first was QuALITY, a standard reading comprehension benchmark using 5,000 to 8,000-word documents. The second, representing a true enterprise challenge, was LongHealth, a highly dense, 60,000-token dataset containing the complex medical records of multiple patients.

The key finding was the ability of Attention Matching to compact the model’s KV cache by 50x without reducing the accuracy, while taking only seconds to process the documents. To achieve that same level of quality previously, Cartridges required hours of intensive GPU computation per context.


Attention Matching with Qwen-3 (source: arXiv)

When dealing with the dense medical records, standard industry workarounds completely collapsed. The researchers noted that when they tried to use standard text summarization on these patient records, the model’s accuracy dropped so low that it matched the “no-context” baseline, meaning the AI performed as if it had not read the document at all. 

Attention Matching drastically outperforms summarization, but enterprise architects will need to dial down the compression ratio for dense tasks compared to simpler reading comprehension tests. As Zweiger explains, “The main practical tradeoff is that if you are trying to preserve nearly everything in-context on highly information-dense tasks, you generally need a milder compaction ratio to retain strong accuracy.”

The researchers also explored what happens in cases where absolute precision isn’t necessary but extreme memory savings are. Running Attention Matching on top of a standard text summary, the combined approach achieved 200x compression while matching the accuracy of summarization alone, at a fraction of the memory footprint.

One of the interesting experiments for enterprise workflows was testing online compaction, though they note that this is a proof of concept and has not been tested rigorously in production environments. The researchers tested the model on the advanced AIME math reasoning test. They forced the AI to solve a problem with a strictly capped physical memory limit. Whenever the model’s memory filled up, the system paused, instantly compressed its working memory by 50 percent using Attention Matching, and let it continue thinking. Even after hitting the memory wall and having its KV cache shrunk up to six consecutive times mid-thought, the model successfully solved the math problems. Its performance matched a model that had been given massive, unlimited memory.
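The control flow is simple to illustrate with a toy loop (the cache and the compaction step below are stand-ins; the real system fits compressed keys and values with Attention Matching rather than truncating):

```python
class ToyCache:
    def __init__(self):
        self.entries = []

def compact(cache):
    # Stand-in for Attention Matching: keep the most recent half.
    cache.entries = cache.entries[len(cache.entries) // 2:]

def decode_with_budget(n_tokens, max_cache=64):
    """Decode until the cache hits its cap, shrink it by 50 percent,
    and continue; count how many times compaction fires."""
    cache, compactions = ToyCache(), 0
    for t in range(n_tokens):
        if len(cache.entries) >= max_cache:
            compact(cache)
            compactions += 1
        cache.entries.append(t)     # one new KV entry per decoded token
    return compactions

print(decode_with_budget(256))  # 6: the cache never exceeds 64 entries
```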

There are caveats to consider. At a 50x compression ratio, Attention Matching is the clear winner in balancing speed and quality. However, if an enterprise attempts to push compression to extreme 100x limits on highly complex data, the slower, gradient-based Cartridges method actually outperforms it.

The researchers have released the code for Attention Matching. However, they note that this is not currently a simple plug-and-play software update. “I think latent compaction is best considered a model-layer technique,” Zweiger notes. “While it can be applied on top of any existing model, it requires access to model weights.” This means enterprises relying entirely on closed APIs cannot implement this themselves; they need open-weight models. 

The authors note that integrating this latent-space KV compaction into existing, highly optimized commercial inference engines still requires significant effort. Modern AI infrastructure uses complex tricks like prefix caching and variable-length memory packing to keep servers running efficiently, and seamlessly weaving this new compaction technique into those existing systems will take dedicated engineering work. However, there are immediate enterprise applications. “We believe compaction after ingestion is a promising use case, where large tool call outputs or long documents are compacted right after being processed,” Zweiger said.

Ultimately, the shift toward mechanical, latent-space compaction aligns with the future product roadmaps of major AI players, Zweiger argues. “We are seeing compaction shift from something enterprises implement themselves into something model providers ship,” Zweiger said. “This is even more true for latent compaction, where access to model weights is needed. For example, OpenAI now exposes a black-box compaction endpoint that returns an opaque object rather than a plain-text summary.”


GoPro Lit Hero Review: a tiny action cam, with too many compromises


GoPro Lit Hero: two-minute review

GoPro is a name that’s synonymous with the action cam market, with the brand having largely been responsible for the explosion in popularity of such cameras over the past two decades. The brand has come a long way since its first Hero camera, a 35mm film-compatible wearable model released in 2004.


LangChain’s CEO argues that better models alone won’t get your AI agent to production

As models get smarter and more capable, the “harnesses” around them must also evolve.

This “harness engineering” is an extension of context engineering, says LangChain co-founder and CEO Harrison Chase in a new VentureBeat Beyond the Pilot podcast episode. Whereas traditional AI harnesses have tended to constrain models from running in loops and calling tools, harnesses specifically built for AI agents allow them to interact more independently and effectively perform long-running tasks.

Chase also weighed in on OpenAI’s acquisition of OpenClaw, arguing that its viral success came down to a willingness to “let it rip” in ways that no major lab would — and questioning whether the acquisition actually gets OpenAI closer to a safe enterprise version of the product.

“The trend in harnesses is to actually give the large language model (LLM) itself more control over context engineering, letting it decide what it sees and what it doesn’t see,” Chase says. “Now, this idea of a long-running, more autonomous assistant is viable.”

Tracking progress and maintaining coherence

While the concept of allowing LLMs to run in a loop and call tools seems relatively simple, it’s difficult to pull off reliably, Chase noted. For a while, models were “below the threshold of usefulness” and simply couldn’t run in a loop, so devs used graphs and wrote chains to get around that. Chase pointed to AutoGPT — once the fastest-growing GitHub project ever — as a cautionary example: same architecture as today’s top agents, but the models weren’t good enough yet to run reliably in a loop, so it faded fast.

But as LLMs keep improving, teams can construct environments where models can run in loops and plan over longer horizons, and they can continually improve these harnesses. Previously, “you couldn’t really make improvements to the harness because you couldn’t actually run the model in a harness,” Chase said.

LangChain’s answer to this is Deep Agents, a customizable general-purpose harness.

Built on LangChain and LangGraph, it has planning capabilities, a virtual filesystem, context and token management, code execution, and skills and memory functions. Further, it can delegate tasks to subagents; these are specialized with different tools and configurations and can work in parallel. Context is also isolated, meaning subagent work doesn’t clutter the main agent’s context, and large subtask context is compressed into a single result for token efficiency.

All of these agents have access to file systems, Chase explained, and can essentially create to-do lists that they can execute on and track over time.

“When it goes on to the next step, and it goes on to step two or step three or step four out of a 200-step process, it has a way to track its progress and keep that coherence,” Chase said. “It comes down to letting the LLM write its thoughts down as it goes along, essentially.”
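The pattern of persisting a plan and checking items off between steps can be sketched in a few lines (a toy illustration, not Deep Agents' actual filesystem API):

```python
import json, pathlib, tempfile

todo_path = pathlib.Path(tempfile.mkdtemp()) / "todo.json"

def init_plan(steps):
    """Write the agent's to-do list to its (virtual) filesystem."""
    todo_path.write_text(json.dumps([{"step": s, "done": False} for s in steps]))

def complete_next():
    """Mark the next pending step done and report overall progress,
    so coherence survives however many steps the process takes."""
    plan = json.loads(todo_path.read_text())
    for item in plan:
        if not item["done"]:
            item["done"] = True
            break
    todo_path.write_text(json.dumps(plan))
    return sum(i["done"] for i in plan), len(plan)

init_plan(["read spec", "write code", "run tests"])
complete_next()
print(complete_next())  # (2, 3): two of three steps done
```

Because the state lives outside the context window, the plan survives even if intermediate reasoning is compacted away.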

He emphasized that harnesses should be designed so that models can maintain coherence over longer tasks, and be “amenable” to models deciding to compact context at points they determine are “advantageous.”

Also, giving agents access to code interpreters and Bash tools increases flexibility. And, providing agents with skills as opposed to just tools loaded up front allows them to load information when they need it. “So rather than hard code everything into one big system prompt,” Chase explained, “you could have a smaller system prompt, ‘This is the core foundation, but if I need to do X, let me read the skill for X. If I need to do Y, let me read the skill for Y.’”
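That lazy-loading idea reduces to assembling the prompt on demand (the skill names and contents below are invented for illustration):

```python
# Hypothetical skill registry; in a real harness each skill would live
# on disk and be read only when the task calls for it.
SKILLS = {
    "sql":   "How to write safe, parameterized SQL queries...",
    "plots": "How to produce charts from tabular data...",
    "email": "House style for drafting customer emails...",
}

CORE_PROMPT = "You are a general-purpose agent. Load a skill when a task needs one."

def build_prompt(needed):
    """Splice in only the skills the current task requires instead of
    hard-coding everything into one big system prompt."""
    loaded = [f"## Skill: {name}\n{SKILLS[name]}" for name in needed if name in SKILLS]
    return "\n\n".join([CORE_PROMPT, *loaded])

print("sql" in build_prompt(["sql"]), "sql" in build_prompt([]))  # True False
```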

Essentially, context engineering is a “really fancy” way of saying: What is the LLM seeing? Because that’s different from what developers see, he noted. When human devs can analyze agent traces, they can put themselves in the AI’s “mindset” and answer questions like: What is the system prompt? How is it created? Is it static or is it populated? What tools does the agent have? When it makes a tool call, and gets a response back, how is that presented?

“When agents mess up, they mess up because they don’t have the right context; when they succeed, they succeed because they have the right context,” Chase said. “I think of context engineering as bringing the right information in the right format to the LLM at the right time.”

Listen to the podcast to hear more about:

  • How LangChain built its stack: LangGraph as the core pillar, LangChain at the center, Deep Agents on top.

  • Why code sandboxes will be the next big thing.

  • How a different type of UX will evolve as agents run at longer intervals (or continuously).

  • Why traces and observability are core to building an agent that actually works.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.


Iceland Foods Finally Surrenders In Trademark Fight With Iceland, The Country

from the who’s-the-moron-in-a-hurry-here dept

The ten-year war over Iceland is over and Iceland has come out the victor.

If you don’t know what I’m talking about, be prepared to listen to a whole bunch of stupid. In 2016, we wrote about Iceland Foods, a UK grocer, which had somehow convinced the EU to give it a trademark for “Iceland” and which then went about bullying other companies and opposing trademarks for any that included the name of that country. One of the entities that Iceland Foods found itself in a trademark opposition with was Iceland, as in the country, when it attempted to trademark “Inspired by Iceland.” The Icelandic government didn’t take too kindly to that appropriation of its own name and petitioned to cancel the Iceland Foods trademark, which is exactly what happened. Rather than put an end to this absurdity, Iceland Foods appealed that decision, lost, then appealed it again, lost again, appealed a third time, only to lose there as well.

From there, Iceland Foods had but one final option for appealing all of these perfectly sane rulings, which would be to take this before the Court of Justice of the EU. And, while that would obviously be crazy, everything I’d seen to date led me to believe the grocer would do just that.

But sanity seems to finally be on the menu, I guess. Iceland Foods has publicly announced that it is ending the fight and surrendering.

Executive chairman Richard Walker revealed the supermarket would drop the legal dispute, which centres on the right to use the phrase Iceland in the EU, following its third legal loss in July 2025.

Iceland had one fourth and final route of appeal, via the Court of Justice of the European Union, but Walker told the Financial Times it would instead use the “couple of hundred grand” it would save in legal fees to give a “rapprochement discount” to Icelandic shoppers.

Yeah, that’s how this should have been approached from the jump, folks. And this actually goes back even further, where this broad, geographic trademark by a private entity consisting of the name of a sovereign nation never should have been granted a trademark to begin with.

But that’s all over now. Iceland Foods’ trademark is invalidated. Iceland once more is free from being bullied over its own name, as would be other companies from the island nation. Iceland Foods can keep on operating as it always has, sans the ability to bully others with this ridiculous mark. Walker himself said as much, in a very frustrating manner.

“We lost for a third time. We’re going to throw in the towel,” Walker told the FT. “It’s actually fine — we don’t have to change our name.”

Exactly. You never had to. That was never in question. The only question is whether you got to keep your laughable trademark and bully others over it.

Instead, the grocer wasted everyone’s time, and who knows how much of its own money, trying to wage this silly war.

Filed Under: cjeu, iceland, iceland iceland iceland, trademark, uk

Companies: iceland foods


Reverse Engineering The PROM For The SGI O2

The SGI O2 was SGI’s last-ditch attempt at a low-end MIPS-based workstation back in 1996, and correspondingly didn’t use the hottest parts of the time, nor did it offer much of an upgrade path. None of which is a concern to hobbyists who are more than happy to work around any hardware and software limitations to e.g. install much faster CPUs. While quite a few CPU upgrades were possible with just some BGA chip reworking skills, installing the 900 MHz RM7900 would require some PROM hacking, which [mattst88] recently took a crack at.

The initial work on upgrading SGI O2 systems was done in the early 2000s, with [Joe Page] and [Ian Mapleson] running into the issue that these higher frequency MIPS CPUs required a custom IP32 PROM image, for which they figured that they’d need either SGI’s help or do some tricky reverse-engineering. Since SGI is no longer around, [mattst88] decided to take up the torch.

After downloading a 512 kB binary dump of the last version of the O2’s PROM, he set to work reverse-engineering it, starting by disassembling the file. A big part of understanding MIPS PROM code is understanding how the MIPS architecture works, including its boot process, so much of what followed was a crash course on the subject.

With that knowledge it was much easier to properly direct the Capstone disassembler and begin the arduous process of making sense of the blob of data and code. The resulting source files now reassemble into bit-identical ROM files, which makes it likely that modifying it to support different CPUs is now possible with just a bit more work.
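To give a flavor of the manual work involved: MIPS32 instructions are fixed 32-bit words with regular bit fields, which is what makes a full PROM disassembly tractable once the encoding is understood. A self-contained example decoding one well-known I-type instruction:

```python
def decode_mips_itype(word):
    """Split a MIPS32 I-type instruction word into its bit fields."""
    return {
        "opcode": (word >> 26) & 0x3F,   # bits 31-26
        "rs":     (word >> 21) & 0x1F,   # source register
        "rt":     (word >> 16) & 0x1F,   # target register
        "imm":    word & 0xFFFF,         # 16-bit immediate
    }

# 0x27BDFFE0 is the classic function prologue `addiu $sp, $sp, -32`.
fields = decode_mips_itype(0x27BDFFE0)
print(fields)  # opcode 9 (ADDIU), rs/rt 29 ($sp), imm 0xFFE0 (-32)
```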

For those who want to play along, [mattst88] has made his ip32prom-decompiler project available on GitHub.

Thanks to [adistuder] for the tip.


Top image: Silicon Graphics 1600SW LCD display and O2 workstation. (Source: Wikimedia)


Apple is adding a warning against AI music content

Apple Music is introducing a new way to flag AI-generated music. However, it’s relying on the music industry itself to disclose it.

As reported by Music Business Worldwide, the streaming service has launched Transparency Tags, a new metadata system that allows record labels and distributors to mark when artificial intelligence has been used in different parts of a release.

The tags can be applied immediately. Eventually, they will become a requirement when partners deliver new content to the platform.

Rather than analysing songs itself, Apple is placing the responsibility on the supply chain. Labels and distributors will decide whether a track or release qualifies as AI-generated. They will apply the tags during the delivery process – this is similar to how genres or credits are currently submitted.

The system covers four areas of a release. Artwork tags flag when AI is used to create album artwork or other visuals. Track tags indicate that AI helped generate the sound recording itself. Composition tags apply when lyrics or other songwriting elements are created using AI. Meanwhile, Music Video tags identify AI-generated visuals tied to releases.
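Apple has not published the delivery schema, but conceptually a release-level payload covering those four areas might look like this (the field names are hypothetical):

```python
release_ai_tags = {
    "artwork":     {"ai_generated": True},    # cover art or other visuals
    "track":       {"ai_generated": False},   # the sound recording itself
    "composition": {"ai_generated": True},    # lyrics / songwriting elements
    "music_video": {"ai_generated": False},   # visuals tied to the release
}

def flagged_areas(tags):
    """The parts of a release a storefront would need to label."""
    return sorted(area for area, t in tags.items() if t["ai_generated"])

print(flagged_areas(release_ai_tags))  # ['artwork', 'composition']
```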

Apple says the goal is to give the industry better visibility into how generative AI is being used in music production. In a note to industry partners, the company described the tags as a “first step” toward building clearer policies and best practices around AI-created content.

The approach stands in contrast to how some rivals are tackling the issue. Streaming platform Deezer, for example, has built its own AI detection system. It scans uploads automatically rather than relying on labels to self-report.

That difference matters given how quickly AI-generated music is growing. Deezer said earlier this year that it now receives more than 60,000 fully AI-generated tracks every day. Synthetic music now accounts for roughly 39% of all uploads to the platform.

The company also claims most of that content is tied to streaming fraud rather than artistic experimentation. According to Deezer, up to 85% of streams on AI-generated tracks were fraudulent in 2025. Those plays were removed from the royalty pool.

Apple’s Transparency Tags don’t currently include a visible enforcement mechanism or verification system. This means the accuracy of the labels will largely depend on the honesty of the distributors supplying the music.

For now, though, Apple’s move signals that AI disclosure is quickly becoming the next battleground for music streaming platforms.


UL and IMR to design Ireland’s first 3D-printed liquid rocket engine

The partnership news comes with official acceptance into the prestigious UK-based Race2Space 2026 International Propulsion competition.

The University of Limerick (UL) Aeronautical Society High-Powered Rocketry Team (ULAS HiPR) has announced a partnership with UL and Irish Manufacturing Research (IMR) to design and produce the first additive manufactured (3D-printed) liquid rocket engine in the Republic of Ireland, called the Lúin of Celtchar.

The engine is a high-performance 2 kilonewton, water-cooled, IPA/nitrous oxide bi-propellant system, which has been designed entirely by the ULAS HiPR student team and is now being manufactured at IMR’s Advanced Manufacturing Lab in Mullingar using metal additive manufacturing. It will be returned to UL for precision machining and assembly. 

Established in 2022, ULAS HiPR has more than 100 members and is a combination of students from a range of disciplines, such as aeronautical, mechanical, software and design engineering – all of whom have an interest in designing, manufacturing and launching powerful rockets. 

The team has enjoyed some success having represented Ireland internationally at prestigious competitions, including Mach-24 and Euroc, the European Rocketry Challenge. Alongside the announcement of the partnership, ULAS HiPR has also officially been accepted into the UK-based Race2Space 2026 International Propulsion competition.

This is, according to ULAS HiPR, “a major milestone in advancing Irish student-led space propulsion capabilities”.

Speaking on the announcement, Jay Looney, the co-head of ULAS HiPR, said: “The acceptance of our project to Race2Space marks a defining moment not only for ULAS HiPR, but for Ireland’s student space community. 

“The selection of the first additively manufactured liquid rocket engine in the Republic of Ireland into the competition validates the technical ambition of our student team, and the strength of collaboration between Irish university students with industry. It demonstrates that world-class propulsion innovation can now be designed, manufactured and tested entirely here in Ireland.”

Mark Hartnett, a design for manufacturing senior technologist at IMR, added: “At IMR, supporting ambitious student teams like ULAS HiPR reflects our commitment to strengthening Ireland’s advanced manufacturing ecosystem and enabling the next generation of aerospace innovators. 

“These are vital platforms for advancing cutting-edge technologies and building Ireland’s future engineering capability, and this ULAS HiPR propulsion project demonstrates how emerging technologies can move rapidly from concept to high-performance hardware.”

In late February, Silicon Republic attended the official launch of Ireland’s first European Space Agency Phi-Lab, which is headquartered at IMR in Mullingar and run in collaboration with the AMBER Centre at Trinity College Dublin.

One of 10 European Phi-Labs, it is designed to be Ireland’s national platform for space technology development and to anchor the country’s ambitions within Europe and the world’s rapidly expanding space economy.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.


Humanity Heating Planet Faster Than Ever Before, Study Finds

An anonymous reader shares a report from The Guardian: Humanity is heating the planet faster than ever before, a study has found. Climate breakdown is occurring more rapidly with the heating rate almost doubling, according to research that excludes the effect of natural factors behind the latest scorching temperatures. It found global heating accelerated from a steady rate of less than 0.2C per decade between 1970 and 2015 to about 0.35C per decade over the past 10 years. The rate is higher than scientists have seen since they started systematically taking the Earth’s temperature in 1880.

“If the warming rate of the past 10 years continues, it would lead to a long-term exceedance of the 1.5C (2.7F) limit of the Paris agreement before 2030,” said Stefan Rahmstorf, a scientist at the Potsdam Institute for Climate Impact Research and co-author of the study. […] The researchers applied a noise-reduction method to filter out the estimated effect of nonhuman factors in five major datasets that scientists have compiled to gauge the Earth’s temperature. In each of them, they found an acceleration in global heating emerged in 2013 or 2014. The findings have been published in the journal Geophysical Research Letters.
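The 2030 projection follows from straightforward arithmetic (the mid-2020s baseline below is an assumed, commonly cited figure of roughly 1.3C above pre-industrial, not a number from the study):

```python
rate_recent = 0.35 / 10        # C per year over the past decade
baseline_2024 = 1.3            # assumed warming above pre-industrial, mid-2020s
years_to_2030 = 6

projected = baseline_2024 + rate_recent * years_to_2030
print(round(projected, 2))  # 1.51: past the 1.5C Paris limit before 2030
```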


X is testing a new ad format that connects posts with products

X is testing a new ad format that inserts a recommendation directly underneath a post that references the company or its products. The initial test, spotted by an X user in Europe, displayed a suggestion to “Get Starlink” beneath a post from a user that said Starlink’s satellite service works great in Portugal. The link, when clicked, directed users to Starlink’s website.

X head of product Nikita Bier confirmed the test, responding, “Trying to make an ad product that isn’t an ad.”

The Starlink ad is not visible to all users at this time, but the placeholder where the ad sits is.

If you visit X user @levelsio’s post from March 6 (screenshotted below in case of deletion), you’ll see an outlined box beneath the text of his post. This box currently showcases a random X post, unless you’re in the market where the ad test is live.

In places where the ad displays, several commenters noticed the new addition, with one asking, “lmao, did you add this Starlink button?”

In the thread, Bier also responded to a suggestion that X should allow affiliate links in this ad slot by saying, “No, then people will lie. I want to trust recommendations on here.”

Image Credits: Screenshot from X

X could not be immediately reached for comment. TechCrunch will update the article if the company responds.

The test follows news earlier this week that the company is rolling out “Paid Partnership” labels for creators. The labels can be applied to posts to comply with regulations around social media advertising, instead of requiring creators to use a hashtag like “ad” or “paid partnership.”

If creators’ sponsored posts were to be combined with an embedded link to an advertiser like the one being tested, X could potentially attract more marketers to the platform. That could boost creators’ use of the app, allowing it to better compete against larger social networks favored by creators, like Instagram, YouTube, and TikTok.

X has been chasing creator content for some time — even before it was called “X” and before it was owned by Elon Musk. Yet the app has never quite found its footing in this space. So far, the company has rolled out a number of creator products, including payouts for viral content, ad-revenue sharing, creator subscriptions, and more.

The company this week also revamped its Creator Subscriptions offering with a number of new features, including the ability to monetize individual threads.

In addition, X announced Friday that the integrated chatbot Grok is now capable of reading X’s long-form content, known as Articles. This feature, too, is underutilized, as creators who publish lengthy written text tend to prefer doing so through their own websites or newsletters.


Haier’s new Couture Care Collection will stop you from going to the dry cleaners

Haier has introduced the Couture Care Collection, a two-product premium fabric care range comprising a stacked Laundry Centre and a wardrobe-style Clothes Drying Closet.

It’s a little boujee, but the collection is focused on offering complete fabric care for your clothes, rather than just traditional wash and dry functionality.

That’s because the Couture Care Collection 11 Laundry Centre combines washing and drying in a space-saving stacked format, with AI-powered Smart Link technology automatically syncing wash and dry cycles based on load type and fabric composition. The I-Refresh Pro steam function handles lightly worn garments without running a full wash cycle, while an Ultra Fresh Air system keeps laundry fresh for up to 12 hours after the cycle ends.

The Ultra Reverse Drum and Flexy Air technology apparently reduce tangling and creasing during the drying phase, which honestly sounds like a lifesaver given how crinkled my clothes look when I remove them from the dryer at home – although that serves me right for not looking at the best tumble dryers before buying.

Most interesting of all, though, is the Clothes Drying Closet, which looks like a wardrobe but can dry delicate fabrics, shoes, and accessories. If you’re used to running back and forth to the dry cleaners every week, this might be the home gadget you’ve been looking for.

Quick refresh cycles run from around ten minutes for lightly worn clothing, while a combination of steam, UV, and plasma technology sanitises up to 99.99% of bacteria.

Both products connect to Haier’s hOn app for remote control, cycle customisation, and notifications, with pricing and availability for the Couture Care Collection expected to be confirmed closer to the product’s retail launch.


MacBook Neo proves Apple can build a $599 laptop without cheapening the Mac

Apple’s industrial design chief says the MacBook Neo was created to bring the Mac into a much lower price tier without sacrificing the materials and design language associated with Apple laptops.

MacBook Neo

Apple vice president of industrial design Molly Anderson said in a rare March 6 solo interview that the MacBook Neo retains its MacBook identity despite its $599 starting price. Apple introduced the MacBook Neo on March 4 as its most affordable Mac laptop.

The MacBook Neo uses the A18 Pro processor instead of the Apple Silicon M-series chips found in other Macs. Apple is targeting students and first-time Mac buyers who might otherwise choose inexpensive Windows laptops or Chromebooks.