Tech
Predator: Badlands Slays on 4K UHD Disc: Review
As the only director to have helmed more than a single Predator film (previously the direct-to-streaming Prey and animated anthology Killer of Killers), Dan Trachtenberg has become the de facto shepherd of the franchise, and that’s a good thing. With Badlands, he takes audiences in a new direction, revealing some deep backstory to the “Yautja” culture and even creating sympathy for the story’s underdog protagonist.
We open on their homeworld where Dek, the runt of his clan, struggles to prove his worth, particularly in the eyes of his cold, fanatical father. Barely escaping execution, Dek blasts off to the “death planet” of Genna, desperate to kill the ultimate prey and earn his prized camouflage cloak once and for all. Immediately we discover how this place earned its nickname, and he only survives his first few moments with the help of a sassy robot, a “synthetic” named Thia (Elle Fanning). They form an unconventional bond and begin a shared quest, fraught with danger and more than a few twists big and small.

Predator: Badlands was shot digitally and yielded a native 4K master filled with extensive computer-generated visual effects that are photo-realistic and imperceptibly blended, such that the adventure plays out like we’re viewing it all through a 2.39:1 window. The use of color is occasionally quite clever, such as when a bleak, oppressive sky is streaked with vermilion. Emissive highlights stand out as well, particularly when the headlights of high-tech vehicles slice through the darkness.
Enemies approach and attack from all directions, and the hand-to-hand fights are some of the fiercest I’ve seen in a minute, bringing us some wonderfully hyperactive panning across the surrounds. Bass impact is outstanding, from rocket blasts to explosions to individual weapon hits, with the testosterone meter seldom dipping below 10. Among the more subtle flourishes, a distinctive inorganic effect is applied to the synthetics’ voices, at different levels as the mood commands. Worth noting, the Dolby Atmos track is exclusive to the 4K disc, as the included HD Blu-ray disc tops out at DTS-HD Master Audio 7.1.

Trachtenberg anchors the commentary, joined by his producer, director of photography, and stunt coordinator. The track also appears on the HD Blu-ray, where the assorted featurettes do a fantastic job of breaking down the unique challenges of such an original production. In addition, there are half a dozen deleted, alternate and extended scenes, some rendered as detailed pre-visualizations, totaling almost half an hour, with their own optional commentary. The two-disc (plus digital copy) set is also being offered as a SteelBook ($76.99 at Amazon).
Fans will be quick to note that this is not the first Predator movie that tips its cap to the long-running Alien franchise, although when the script isn’t giving us a nifty wink it can be a bit derivative of a variety of sources. Overall, Badlands is a fierce and revitalized entry in the franchise that absolutely dominates in a home theater setting.
Movie Details
- STUDIO: Fox/Disney
- FORMAT: Ultra HD 4K Blu-ray (February 17, 2026)
- THEATRICAL RELEASE YEAR: 2025
- ASPECT RATIO: 2.39:1
- HDR FORMATS: Dolby Vision, HDR10
- AUDIO FORMAT: Dolby Atmos with TrueHD 7.1 core
- LENGTH: 107 mins.
- MPAA RATING: PG-13
- DIRECTOR: Dan Trachtenberg
- STARRING: Elle Fanning, Dimitrius Schuster-Koloamatangi, Rohinal Nayaran, Chris Terhune, Mike Homik, Stefan Grube
Our Ratings
★★★★★★★★★★ Picture
★★★★★★★★★★ Sound
★★★★★★★★★★ Extras
Microsoft blocks the word 'Microslop' in Copilot Discord, and the server melts down
Trouble began when users discovered that Discord messages containing the word “Microslop,” a mocking nickname for the company’s AI-heavy direction, were automatically blocked. Those attempting to post the word received a notice saying their message included a “prohibited phrase.” Screenshots spread quickly across social media, pushing what might have been a…
Samsung Galaxy Buds4 and Buds4 Pro Add AI Live Translation, Adaptive EQ, and 360 Audio
Samsung is not playing it safe in 2026. With the launch of the Samsung Galaxy Buds4 and Buds4 Pro, the company is taking a direct shot at the top of the premium wireless earbud market and squarely targeting both Apple and Sony.
The new Galaxy Buds4 series combines improved sound performance with a more refined industrial design built around Samsung’s signature blade form. That shape was developed using hundreds of millions of global ear data points and more than 10,000 simulations to create a fit that feels more natural and secure. This is not a cosmetic tweak. Samsung is leaning into computational modeling and ergonomic data to improve comfort, stability, and long-term wear.
The earbuds now feature smaller heads for a tighter seal, a stabilized blade with a premium metal finish, and an engraved pinch control area that makes it easier to find and adjust settings without guesswork. It is a focused evolution designed to elevate both usability and perceived quality.

Samsung is clearly aiming higher. The question is whether better fit, smarter processing, and upgraded audio features are enough to disrupt the two companies that currently define this space. The Galaxy Buds4 and Buds4 Pro look ready for the fight. But do they deliver enough to change the outcome?
“Samsung understands that a truly premium audio experience combines technical sound quality with how that sound feels throughout a user’s day,” said Ikhyun Cho, Corporate Vice President of the Mobile Enhancement R&D Team within the Mobile eXperience Business at Samsung Electronics. “With the Galaxy Buds4 series, our design philosophy was uncompromising. We focused on delivering all day comfort without sacrificing audio performance because those are what consumers value most. We engineered our most powerful hi-fi audio and our most secure ergonomic fit to enhance one another, delivering the best listening experience we have ever created.”

Ear Wearing Style Engineered by Data
Galaxy Buds4 Pro and Buds4 offer two distinct design approaches to suit different listening preferences.
The Galaxy Buds4 Pro features a traditional in-ear design built to maximize sound isolation, performance, and advanced functionality.
The Galaxy Buds4 adopts an open-ear design focused on everyday comfort and a more natural, user-friendly listening experience. Both models are available in multiple color options, giving customers the flexibility to match their personal style.
Transparent Clamshell Case Puts the Design on Display
The Buds4 series introduces a new transparent clamshell-style case that simplifies storage and charging while showcasing the refined blade design for a more distinctive look on the go. The Galaxy Buds4 Pro case features a 530 mAh battery and measures 51 x 28.3 x 51 mm, with a total case weight of 44.3 grams.
The Galaxy Buds4 case includes a 515 mAh battery in the same 51 x 28.3 x 51 mm footprint, weighing slightly more at 45.1 grams.

Buds4 Pro Enhancements
The Galaxy Buds4 Pro features a wider woofer paired with enhanced Active Noise Cancellation and an upgraded Adaptive Equalizer. Together, these technologies are designed to deliver more accurate sound while intelligently responding to real world listening conditions.
The enhanced ANC system reduces everything from heavy transit noise to everyday background distractions, helping create a more immersive listening experience that adjusts as your environment changes.
Targeted design updates, including intuitive hands free controls and deeper AI integration, reinforce Samsung’s focus on earbuds built for how people actually listen throughout the day.
The Galaxy Buds4 Pro uses a two-way driver system positioned along the upper portion of the metal housing to optimize Active Noise Cancellation performance while reducing interference from wind and other external factors.
It also introduces a newly engineered wider woofer that makes more efficient use of internal space. By expanding the vibration area and minimizing the speaker edge, Samsung increases the effective speaker surface by nearly 20 percent compared to the previous generation without compromising comfort or wearability.
Combined with the dedicated tweeter, the Galaxy Buds4 Pro delivers immersive audio with cleaner bass and more refined treble response. The system supports 24-bit/96kHz playback, bringing listeners closer to the original recording with higher resolution detail and greater dynamic range.
These hardware upgrades allow the earbuds to reproduce everything from the soaring resonance of violins to the deep, textured weight of double bass notes, resolving nuances that were more difficult to capture in earlier generations.
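To put the 24-bit/96kHz figures in perspective, here is a quick sketch of what those numbers mean in generic PCM terms (standard audio math, not Samsung-specific engineering): each bit of linear PCM adds roughly 6 dB of dynamic range, and the Nyquist theorem caps representable frequencies at half the sample rate.

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of linear PCM: 20*log10(2^bits), ~6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(f"16-bit: {dynamic_range_db(16):.1f} dB")   # CD quality, ~96 dB
print(f"24-bit: {dynamic_range_db(24):.1f} dB")   # hi-res, ~144 dB

# Nyquist limit: a 96 kHz sample rate can represent frequencies up to 48 kHz,
# well beyond the ~20 kHz ceiling of human hearing.
print(f"Nyquist limit at 96 kHz: {96_000 // 2} Hz")
```

The extra headroom matters less for loudness than for resolving quiet detail, which is the "greater dynamic range" the spec sheet is pointing at.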
Clearer Calls Without the Tunnel Effect
For phone calls, the Super Clear Call feature on both the Galaxy Buds4 Pro and Buds4 uses super wideband call technology along with machine learning based noise reduction and voice enhancement. This system delivers up to twice the bandwidth of conventional Bluetooth calls, improving clarity and vocal presence.
Whether at a packed baseball game, in a busy restaurant, or at a noisy playground, the technology is designed to keep voices sounding natural and intelligible, closer to a face-to-face conversation.
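The "twice the bandwidth" claim lines up with how Bluetooth voice modes are commonly tiered: classic wideband calls sample at 16 kHz, super-wideband at 32 kHz, and Nyquist says each captures voice content up to half its sample rate. A small sketch (the codec pairings are typical industry conventions, not confirmed details of Samsung's implementation):

```python
# Typical Bluetooth call modes and their sample rates (Hz). The codec names
# are common industry pairings, assumed here for illustration only.
call_modes = {
    "narrowband (CVSD)": 8_000,
    "wideband (mSBC)": 16_000,
    "super-wideband (LC3 SWB)": 32_000,
}

for name, rate in call_modes.items():
    # Nyquist: usable voice bandwidth is half the sample rate.
    print(f"{name}: voice content up to ~{rate // 2:,} Hz")
```

Doubling the captured bandwidth restores the upper harmonics of speech, which is why super-wideband calls sound closer to in-person conversation.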
For Samsung Galaxy Phone Users
For Galaxy Phone users, the Buds4 series provides features that enhance the Galaxy Phone/Earbud ecosystem experience.
Users can activate AI agents including Bixby, Google Gemini, and Perplexity using hands free voice controls, allowing them to stay aware of their surroundings while managing their audio experience.

The Galaxy Buds4 Pro enables direct access to supported AI features without reaching for a phone, making AI assistance easier to incorporate into everyday routines.
Galaxy AI Live Translate and Interpreter are supported in up to 22 languages on compatible Samsung Galaxy devices, including the Galaxy S26 series, when signed in with a Samsung account. Some languages require additional downloads.
For Galaxy ecosystem users, setup is streamlined. Opening the charging case triggers a quick connection prompt on compatible Galaxy phones or tablets, eliminating the need to install the Galaxy Wearable app. From the Buds shortcut menu or Quick Panel, users can immediately adjust volume, customize EQ settings, and manage controls for a more personalized listening experience.
The Galaxy Buds4 Pro also introduces Head Gesture controls for managing calls and interacting with Bixby, enabling additional hands free functionality. Combined with voice commands, these gesture based controls allow users to handle everyday tasks without reaching for their device, helping keep daily routines fluid and uninterrupted.
Galaxy Buds4 Pro vs Galaxy Buds4 Key Differences

Earbud Design: The Galaxy Buds4 uses an open-type design without silicone tips, while the Galaxy Buds4 Pro features a sealed in-ear design with silicone ear tips for improved noise isolation and a more secure fit.
Speaker Drivers: The Buds4 Pro employs a two-way driver system with an 11 mm woofer and 5.5 mm tweeter for deeper bass and more detailed highs. The Buds4 uses a single 11 mm driver in a one-way configuration.
Active Noise Cancellation: The Pro model includes Adaptive ANC 2.0 with more precise, multi-level, and responsive noise cancellation compared to the standard Buds4.
Battery Life: The Buds4 Pro delivers approximately six hours of playback with ANC enabled, about one hour longer than the Buds4, which offers up to five hours with ANC on.
Water and Dust Resistance: The Buds4 Pro carries an IP57 rating for greater resistance to water and dust, while the Buds4 is rated IP54.
Microphones and AI Features: Both models support voice commands and AI functionality, but the Buds4 Pro includes upgraded microphones designed to improve call clarity in louder environments.
Comparison

| Feature | Buds4 Pro | Buds4 | Buds3 FE |
| --- | --- | --- | --- |
| Product Type | Wireless Earbuds | Wireless Earbuds | Wireless Earbuds |
| Price | $279.99 | $179.99 | $149.99 |
| Ear Fit | In-Ear | Open Ear | In-Ear |
| Driver Configuration | Enhanced 2-way with 11 mm woofer and 5.5 mm tweeter | 1-way with 11 mm driver | 1-way with 11 mm driver |
| Number of Mics | 6 | 6 | 6 |
| Ambient Sound | Yes | Yes | Yes |
| Adaptive Noise Control | Yes | Yes | Not indicated |
| Active Noise Cancellation | Yes | Yes | Yes |
| Adaptive ANC | Yes | Yes | Not indicated |
| Voice Detect | Yes | Yes | Not indicated |
| 360 Audio | Yes | Yes | Yes |
| Head Tracking | Yes | Yes | No |
| Siren Detect | Yes | No | No |
| Adaptive EQ | Yes | Yes | Not indicated |
| Super Wideband | Yes | Yes | Not indicated |
| Bluetooth Version | v6.1 | v6.1 | v5.4 |
| Bluetooth Profiles | A2DP, AVRCP, HFP, PBP, TMAP | A2DP, AVRCP, HFP, PBP, TMAP | A2DP, AVRCP, HFP |
| LE Audio | Yes | Yes | Not indicated |
| Auracast | Yes | Yes | Yes |
| Auto Switch (Android only) | Yes | Yes | Yes |
| Sensors | Accelerometer, Gyro, Hall, Pressure, Proximity, Touch, VPU (Voice Pickup Unit) | Accelerometer, Gyro, Hall, Pressure, Proximity, Touch, VPU (Voice Pickup Unit) | Hall, Pressure, Proximity, Touch |
| Additional Features | Samsung Find, Bixby Voice Wake-up, Neck Stretch, Voice Command | Samsung Find, Bixby Voice Wake-up, Neck Stretch | Samsung Find, Bixby Voice Wake-up |
| Dust/Water Resistance (IP Rating) | IP57 | IP54 | IP54 |
| Usage Time (ANC On) | Talk: up to 4.5 hrs (total 20 hrs); Music: up to 6 hrs (total 26 hrs) | Talk: up to 3.5 hrs (total 18 hrs); Music: up to 5 hrs (total 24 hrs) | Talk: up to 4 hrs (total 18 hrs); Music: up to 6 hrs (total 24 hrs) |
| Usage Time (ANC Off) | Talk: up to 5 hrs (total 22 hrs); Music: up to 7 hrs (total 30 hrs) | Talk: up to 4 hrs (total 20 hrs); Music: up to 6 hrs (total 30 hrs) | Talk: up to 4 hrs (total 18 hrs); Music: up to 8.5 hrs (total 30 hrs) |
| Earbud Battery Capacity | 61 mAh | 45 mAh | 53 mAh |
| Case Battery Capacity | 530 mAh | 515 mAh | 515 mAh |
| Earbud Dimensions (HWD) | 30.9 x 18.1 x 19.6 mm | 30.5 x 18.3 x 19.3 mm | 21.1 x 18.0 x 33.8 mm |
| Earbud Weight | 5.1 g | 4.6 g | 5 g |
| Case Dimensions (HWD) | 51 x 28.3 x 51 mm | 51 x 28.3 x 51 mm | 48.7 x 58.9 x 24.4 mm |
| Case Weight | 44.3 g | 45.1 g | 41.8 g |
| Colors | White, Black, Pink, Gold | White, Black | Gray, Black |

The Bottom Line
The Samsung Galaxy Buds4 and Buds4 Pro are clearly designed to tighten Samsung’s grip on its own ecosystem while taking aim at premium rivals from Apple and Sony. What makes them stand out is the combination of data-driven ergonomic design, a refined blade aesthetic, upgraded driver architecture on the Pro model, and deeper AI integration that goes well beyond simple voice commands.
Features like Adaptive EQ, enhanced ANC, 24-bit/96kHz support on the Pro, head gesture controls, and Galaxy AI-powered Live Translate and Interpreter give Samsung users a tightly integrated, forward-looking experience.
These earbuds are best suited for Galaxy phone owners who want seamless setup, native control through the Quick Panel, automatic device switching, and full access to Samsung’s AI and audio processing features. If you own a newer Galaxy device such as the S26 series, the experience is cohesive and clearly optimized.
The limitation is obvious. While both models will pair with an iPhone over Bluetooth, key Samsung specific features including Adaptive EQ, 360 Audio, automatic switching, and high resolution playback are not available, and audio is capped at 16-bit/44.1kHz.
There is also no Galaxy Wearable app for iPhone. In other words, these are built first and foremost for the Galaxy ecosystem. Outside of it, they are still solid wireless earbuds. Inside it, they are operating at full strength.
Price & Availability
General availability and shipping are expected to begin on March 11, 2026.
The ‘European’ Jolla Phone Is an Anti-Big-Tech Smartphone
“There are Chinese components as well—we are totally open about it—but the key is that, as we compile the software ourselves and install it in Finland, we protect the integrity of the product,” Pienimäki says.
What makes Sailfish OS unique among competitors like GrapheneOS and e/OS is that it’s not based on the Android Open Source Project but on Linux. That means it has no ties to Google and no need to be “deGoogled,” giving users a greater sense of sovereignty over the software (and now the hardware). Still, it’s able to run Android apps, though the implementation isn’t perfect. Another common criticism is that it’s not as secure as options like GrapheneOS, where every app is sandboxed.
There’s a good chance some Android apps on Sailfish OS will run into issues, which is why the startup wizard asks if you want to install services like MicroG, open-source software that can run Google services on devices without the Google Play Store. That makes the Jolla Phone an easier on-ramp for folks coming from traditional smartphones without a technical background. You don’t even need to create a Sailfish OS account to use it.
Jolla’s effort is hardly the first to push the anti–Big Tech narrative. A wave of other hardware and software companies offer a deGoogled experience, whether that’s Murena from France and its e/OS privacy-friendly operating system or the Canadian GrapheneOS, which just announced a partnership with Motorola. At CES earlier this year, the Swiss company Punkt also teamed up with ApostrophyOS to deploy its software on the new MC03 smartphone. Jolla is following a broader European trend of reducing reliance on US companies, like how French officials ditched Zoom for French-made video conference software earlier this year.
The Phone
A common problem with these niche smartphones is that they inevitably end up costing a lot of money for the specs. Take the Light Phone III, for example, a fairly low-tech anti-smartphone that doesn’t enjoy the benefits of economies of scale, resulting in an outlandish $699 price. The Jolla Phone is in a similar boat, though the specs-to-value ratio is a little more respectable.
It’s powered by a midrange MediaTek Dimensity 7100 5G chip with 8 GB of RAM, 256 GB of storage, plus a microSD card slot and dual-SIM tray. There’s a 6.36-inch 1080p AMOLED screen, the two main cameras, and a 32-megapixel selfie shooter. The 5,500-mAh battery cell is fairly large considering the phone’s size, though the phone’s connectivity is a little dated, stuck with Wi-Fi 6 and Bluetooth 5.4.
Uniquely, the Jolla Phone brings back “The Other Half” functional rear covers from the original. These swappable back covers have pogo pins that interface with the phone, allowing people to create unique accessories like a second display on the back of the phone or even a keyboard attachment. There’s an Innovation Program where the community can cocreate functional covers together and 3D-print them. And yes, a removable rear cover means the Jolla Phone’s battery is user-replaceable.
Android gets patches for Qualcomm zero-day exploited in attacks
Google has released security updates to patch 129 Android security vulnerabilities, including an actively exploited zero-day flaw in a Qualcomm display component.
“There are indications that CVE-2026-21385 may be under limited, targeted exploitation,” the company said on Monday in its March 2026 Android Security Bulletin.
While Google didn’t provide any further information on the attacks currently targeting this vulnerability, Qualcomm revealed in a separate security advisory issued on February 3 that the flaw is an integer overflow or wraparound in the Graphics subcomponent that local attackers can exploit to trigger memory corruption.
Qualcomm says it was alerted to this high-severity vulnerability on December 18, and it notified customers on February 2. According to its February advisory, which has yet to flag CVE-2026-21385 as exploited in attacks, the security flaw affects 235 Qualcomm chipsets.
With this month’s Android security updates, Google fixed 10 critical security vulnerabilities in the System, Framework, and Kernel components that attackers could exploit to gain remote code execution, elevate privileges, or trigger denial-of-service conditions.
“The most severe of these issues is a critical security vulnerability in the System component that could lead to remote code execution with no additional execution privileges needed. User interaction is not needed for exploitation,” Google said.
Google issued two sets of patches: the 2026-03-01 and 2026-03-05 security patch levels. The latter bundles all fixes from the first batch, as well as patches for closed-source third-party and kernel subcomponents, which may not apply to all Android devices.
While Google Pixel devices receive security updates immediately, other vendors often take longer to test and tweak them for specific hardware configurations.
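To see which of the two patch levels a given device has actually received, you can query the build property it reports (a sketch; assumes `adb` is installed and a device with USB debugging enabled is attached):

```shell
# Query the monthly security patch level the connected device reports.
adb shell getprop ro.build.version.security_patch

# A device with only the first batch applied reports 2026-03-01;
# one with the complete fix set reports 2026-03-05.
```

Because the levels are dates, a simple string comparison tells you whether a device carries the full set of fixes.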
Google and Qualcomm spokespersons were not immediately available for comment when contacted by BleepingComputer earlier today regarding the CVE-2026-21385 attacks and their targets.
Google released patches for two other high-severity zero-day vulnerabilities (CVE-2025-48633 and CVE-2025-48572) in December, both of which were also tagged as “under limited, targeted exploitation.”
AI Proof Verification: Gauss Tackles 24D
When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s ability to assist with mathematical research.
“These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work.
In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in n-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best.
In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E8 is the best 8-dimensional packing, and soon after proved with collaborators that another sphere packing called the Leech lattice is best in 24 dimensions. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including error-correcting codes used by smartphones and space probes.
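The optimal densities in these solved dimensions have exact closed forms, which makes the results easy to sanity-check numerically. The formulas below are the standard published values, computed here purely for illustration (the snippet is not from the research itself):

```python
from math import pi, factorial, sqrt

# Optimal sphere-packing densities (fraction of space filled) in the
# dimensions where the problem is solved; formulas are standard results.
densities = {
    2: pi / (2 * sqrt(3)),        # hexagonal (honeycomb) circle packing
    3: pi / (3 * sqrt(2)),        # pyramid (FCC) stacking, Kepler conjecture
    8: pi**4 / 384,               # E8 lattice (Viazovska, 2016)
    24: pi**12 / factorial(12),   # Leech lattice (Viazovska et al., 2016)
}

for dim, d in sorted(densities.items()):
    print(f"{dim:>2}D optimal density: {d:.6f}")
```

Note how sharply the achievable density collapses as the dimension grows: from about 91 percent of the plane in 2D down to roughly 0.2 percent of 24-dimensional space, which is part of why the high-dimensional cases resisted proof for so long.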
The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much progress has been made in AI-assisted formal proof verification.
Serendipity leads to formalization project
A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate Sidharth Hariharan and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs.
“Formal verification of a proof is like a rubber stamp,” Fowl says. “It’s a kind of bona fide certification that you know your statements of reasoning are correct.”
Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the Formalising Sphere Packing in Lean project was born. Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer.
A collaboration bringing in experts Bhavik Mehta (Imperial College London), Christopher Birkbeck (University of East Anglia, England), Seewoo Lee (University of California, Berkeley), and others, the project involved writing a human-readable “blueprint” mapping the 8-dimensional proof’s various constituents and tracking which of them had and had not been formalized and/or proven, then proving and formalizing the missing elements in Lean.
“We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc. for the first time.”
The AI speedup
Math, Inc. is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains Jesse Han, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.”
Math, Inc. first hit the headlines when it announced that Gauss had completed a Lean formalization of the strong prime number theorem (PNT) in three weeks last summer, a task that Fields Medalist Terence Tao and Alex Kontorovich had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project.
“They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. A proportion of these sorrys were shared with the project team and merged into their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”
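For readers unfamiliar with Lean, `sorry` is a built-in placeholder that closes any goal while flagging the file as incomplete, so a large blueprint can compile long before every step is proven. A toy illustration (a hypothetical example, not code from the sphere-packing repository):

```lean
-- `sorry` lets this theorem statement compile while its proof is pending;
-- Lean tracks it as an outstanding obligation.
theorem add_comm_todo (a b : Nat) : a + b = b + a := by
  sorry

-- "Finishing a sorry" means replacing the placeholder with a real proof:
theorem add_comm_done (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```

Counting and discharging the remaining `sorry`s is therefore a natural way to measure progress on a formalization, which is what made Gauss’s "30 sorrys" contribution easy to quantify.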
From 8 to 24 dimensions
But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.”
Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days.
“When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. “But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.”
Hariharan working on sphere-packing proof verification as the sun sets behind Carnegie Mellon’s Hamerschlag Hall. Photo: Sidharth Hariharan
The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks.
There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought online surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.
Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI.
But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”
The Analogue Pocket will be back in stock this week, but there’s a tariff-related price increase
The Analogue Pocket handheld retro console has proven to be extremely popular, as initial runs have sold out. The company just announced the system will be back in stock, along with the dock accessory. Preorders open up on March 4 at 11AM ET, with shipments going out this June.
That’s the good news. The bad news? The little console is getting slapped with a price increase. It’s shooting up to $240 from the recent price of $220, with the company blaming President Trump’s never-ending tariffs. The device is assembled in China, and Trump imposed new tariffs just after the Supreme Court struck down the old ones. We love to suddenly pay more for gadgets that have been out in the wild for nearly six years, don’t we folks?
For the uninitiated, the Pocket isn’t an emulation machine. It plays actual Game Boy, Game Boy Color and Game Boy Advance cartridges. It also integrates with various Game Boy accessories, like the camera and printer. The console can even handle Game Gear, TurboGrafx-16 and Atari Lynx games, but those require separate adapters.
We praised the Analogue Pocket in our review, calling it “a clever little thing” that is sure to light up the nostalgia center of your brain. It even made one of our best-of lists, which is notable given the competition included stuff like the Steam Deck.
In any event, this drop will likely sell out quickly. We recommend parking a browser on the company website just prior to 11AM ET if you can stomach the new price.
The few new things in Apple’s midrange tablet
The iPad Air, the middle child in Apple’s tablet lineup, has been upgraded to the M4 chip with increased RAM and… Well, there’s not a whole lot else if I’m being honest. At the very least, the new iPad Air M4 models remain at the same price as the iPad Air M3, with the 11-inch version starting at $599 and the 13-inch at $799. I would give Apple more credit if it had increased the starting storage or added literally anything else.
If you put them side by side, you might not be able to tell the difference, but this upgrade would benefit creatives and professionals more than anything. There’s a significant performance bump from the M3 to the M4, and the increased RAM is doing a lot of work, especially if you’re taking advantage of Apple Intelligence.
If you’re using an M1-powered iPad Air or something even older, though, the new iPad Air M4 should be a compelling upgrade. Pre-orders start at 9:15AM ET on March 4, with the units arriving a week later. We expect full reviews will be published by then. But in the meantime, let’s dive into what the performance gains might look like and what we’re missing out on in this year’s iteration of the iPad Air.
iPad Air M4 vs. iPad Air M3: Performance and battery life
The most significant difference between the two iPad Air generations is their chipsets. The latest iPad Air launches with the M4 chip versus its predecessor’s M3 chip, and it gets a bump in RAM from 8GB to 12GB.
I don’t give much fanfare to incremental chip increases because the performance gain is usually minimal. However, the M4 is up to 30 percent faster than the M3, according to Apple. That might be noticeable to even casual users, especially as the years go on and iPadOS becomes more demanding. For power users, it’ll mean more demanding work like video editing will be noticeably quicker.
For those in need of the fastest internet speeds, the new iPad Air is also equipped with Apple’s N1 chip, which enables Wi-Fi 7 and Bluetooth 6, the latest connectivity technology. However, I really don’t imagine the average user needing up to 46 gigabits per second of throughput compared to the iPad Air M3’s 9.6 Gbps on Wi-Fi 6. If you do, you’re in the tax bracket for an iPad Pro.
Now, despite the increase in speeds, the battery life between the M4 and M3 models remains the same. Apple claims all four models get up to 10 hours of battery life surfing the web on Wi-Fi or watching video (up to 9 hours on cellular). No complaints here.
iPad Air M4 vs. iPad Air M3: Design, display, audio and cameras
For better or worse, we’re not getting any changes in any of these departments, which is why I’m lumping them together.
The iPad Air comes in blue, purple, beige and gray. The 11-inch option measures 9.74 x 7.02 x 0.24 inches and the 13-inch comes in at 11.04 x 8.46 x 0.24 inches. As their names suggest, they’re both rather light at 1.02 pounds (1.01 pounds for the M3 model) and 1.36 pounds, respectively. My only wish is that we’d gotten new colors that pop a bit more.
Then there are the displays. All four iPad Air models sport a Liquid Retina LED display at 264 ppi. The 11-inch supports a 2,360 x 1,640 resolution with a peak brightness of 500 nits, while the 13-inch offers a 2,732 x 2,048 resolution at 600 nits. It would’ve been nice to see an OLED or even Mini-LED panel make its way to the iPad Air, which could’ve made the screen more vivid and vibrant. But it’s more disappointing that we’re still stuck at 60Hz, unlike the Pro models, whose 120Hz refresh rate makes for a smoother visual experience.
Both products feature landscape stereo speakers. The iPad Air M3’s audio quality couldn’t live up to the iPad Pro, so I doubt the M4 model will.
You won’t catch me taking photos with an iPad, but for those of you who do, the iPad Air M4 features the same 12MP cameras on the front and back as its predecessor.
iPadOS 26, Apple Intelligence and Apple accessories
Nothing huge is happening to iPadOS or the Apple accessories in the iPad Air refresh. The revamped Magic Keyboard from last year still works with these new models, as does the Apple Pencil Pro. iPadOS 26, released last fall, was a major update but will still be familiar enough to anyone who has used an iPad before. The new iPad Air M4 is getting a significant boost in AI processing speeds, though, thanks to its new chip and 50 percent increase in RAM. However, unless you’re an AI power user, you probably won’t notice a difference there.
All that said, if your love language is spreadsheets, the full specs are helpfully laid out below:
iPad Air M4 vs. iPad Air M3: Specs at a glance
| Spec | iPad Air M4 | iPad Air M3 |
| --- | --- | --- |
| Price | $599 (11-inch), $799 (13-inch) | $599 (11-inch), $799 (13-inch) |
| Processor | M4 | M3 |
| Display | 11-inch: Liquid Retina LED, 2,360 x 1,640 at 264 ppi; 13-inch: Liquid Retina LED, 2,732 x 2,048 at 264 ppi | 11-inch: Liquid Retina LED, 2,360 x 1,640 at 264 ppi; 13-inch: Liquid Retina LED, 2,732 x 2,048 at 264 ppi |
| RAM | 12GB | 8GB |
| Storage | 128GB, 256GB, 512GB, 1TB | 128GB, 256GB, 512GB, 1TB |
| Battery | Up to 10 hours (Wi-Fi), 9 hours (cellular model) | Up to 10 hours (Wi-Fi), 9 hours (cellular model) |
| Cameras | 12MP Wide (rear), 12MP Center Stage (front) | 12MP Wide (rear), 12MP Center Stage (front) |
| Apple accessories | Apple Pencil Pro, Apple Pencil, Magic Keyboard Folio | Apple Pencil Pro, Apple Pencil, Magic Keyboard Folio |
| Dimensions | 11-inch: 9.74 x 7.02 x 0.24 inches; 13-inch: 11.04 x 8.46 x 0.24 inches | 11-inch: 9.74 x 7.02 x 0.24 inches; 13-inch: 11.04 x 8.46 x 0.24 inches |
| Weight | 11-inch: 1.02 pounds; 13-inch: 1.36 pounds | 11-inch: 1.01 pounds; 13-inch: 1.36 pounds |
Tech
Superagers’ ‘Secret Ingredient’ May Be the Growth of New Brain Cells
alternative_right shares a report from ScienceAlert: According to a study of 38 adult human brains donated to science, superagers — people who retain exceptional memory as they age — have roughly twice as many immature neurons as their peers who age more typically. Moreover, people with Alzheimer’s disease show a marked reduction in neurogenesis compared to a normal baseline. […]
Led by researchers at the University of Illinois Chicago, the team set out to examine a variety of postmortem hippocampal tissue samples to see if they could identify markers of neurogenesis — and if different groups had any notable differences. The brain samples were donated from five groups: eight healthy young adults, aged between 20 and 40; eight healthy agers, aged between 60 and 93; six superagers, aged between 86 and 100; six individuals with preclinical Alzheimer’s pathology, aged between 80 and 94; and 10 individuals with an Alzheimer’s diagnosis, aged between 70 and 93. The young healthy adult brain tissue was first analyzed to establish the neurogenesis pathways in the adult brain. Then, they analyzed 355,997 individual cell nuclei isolated from the hippocampus, searching for three different stages of cell development: Stem cells, which can develop into neurons; neuroblasts, which are stem cells in the process of that development; and immature neurons, on the verge of functionality. The results were striking.
“Superagers had twice the neurogenesis of the other healthy older adults,” [says neuroscientist Orly Lazarov of the University of Illinois Chicago]. “Something in their brains enables them to maintain a superior memory. I believe hippocampal neurogenesis is the secret ingredient, and the data support that.” That’s an interesting result on its own, but the data from the individuals with preclinical Alzheimer’s pathology and Alzheimer’s diagnoses is where the real meat of the study sits. In the preclinical group, subtle molecular changes hinted that the system supporting new neuron growth was beginning to falter. In the Alzheimer’s group, a clear drop in immature neurons was evident. A genetic analysis of the nuclei also showed that superager neural cells have increased gene activity linked to stronger synaptic connections, greater plasticity, and brain-derived neurotrophic factor, a critical protein for neural survival, growth, and maintenance. Taken together, these three things can be interpreted as resilience. The research has been published in the journal Nature.
Tech
What Is That Mysterious Metallic Device US Chief Design Officer Joe Gebbia Is Using?
Joe Gebbia, cofounder of Airbnb and the US chief design officer appointed by President Trump, was spotted in San Francisco today using a mysterious metallic device. In a social media post on X viewed more than 500,000 times, a man who looks like Gebbia sits with an espresso at a coffee shop. He’s wearing metallic buds that bisect his ears, with a matching clamshell-shaped disc in front of him on the counter.
After the video was posted Monday morning, social media users were quick to suggest that this could be some kind of prototype from OpenAI’s upcoming line of hardware devices designed in partnership with famed Apple designer Jony Ive. An OpenAI spokesperson declined to comment on the potential Gebbia video after WIRED reached out. Gebbia also did not respond to a request for comment.
The device Gebbia appears to be wearing looks quite similar to the hardware seen in a fake OpenAI ad that was widely circulated on Reddit and social media in February. That video seemingly showed Pillion actor Alexander Skarsgård interacting with an AI device that had a similar-looking pair of earbuds and a circular disc. At the time, OpenAI denounced the widely seen video as not real. “Fake news,” wrote OpenAI President Greg Brockman at the time, responding to a social media post.
The earbuds seen in the video of Gebbia on Monday also look quite similar in shape to the Huawei FreeClip 2, a pair of open earbuds released earlier this year. However, the clamshell seen on the coffee counter next to Gebbia is different from Huawei’s most recent headphone case. It would also be quite surprising if a government official were seen using Huawei tech, considering the Chinese company is effectively banned from selling its phones in the US due to security concerns.
WIRED’s audio experts say he’s most likely wearing open earbuds, as Gebbia’s pair share some similarities with Soundcore’s AeroClips or Sony’s LinkBuds Clip, though the cases for those buds don’t match what’s on the table in front of Gebbia. WIRED also ran the photo and video through software that attempts to identify AI-generated outputs and other deepfakes. The detection software, from a company called Hive, says the odds are low that this imagery of Gebbia was generated by AI. Still, AI detectors are not always reliable and can return false results. It’s possible that the entire post could be a synthetic hoax.
Could this be some kind of soft launch teaser for OpenAI’s hardware? The timing of this trickle-out would make sense, since the company may ship devices to consumers sometime early in 2027. Still, OpenAI denied any involvement with the previous pseudo-ad for the metallic AI hardware, with its shiny earbuds and matching disc.
Tech
Alibaba’s small, open source Qwen3.5-9B beats OpenAI’s gpt-oss-120B and can run on standard laptops
Despite political turmoil in the U.S. AI sector, AI advances in China are continuing apace without a hitch.
Earlier today, the Qwen Team at e-commerce giant Alibaba, an AI research group focused on developing and releasing a growing family of powerful open source language and multimodal models, unveiled its newest batch, the Qwen3.5 Small Model Series, which consists of:
- Qwen3.5-0.8B & 2B: Two models, both optimized for “tiny” and “fast” performance, intended for prototyping and deployment on edge devices where battery life is paramount.
- Qwen3.5-4B: A strong multimodal base for lightweight agents, natively supporting a 262,144-token context window.
- Qwen3.5-9B: A compact reasoning model that outperforms OpenAI’s open source gpt-oss-120B, its 13.5x larger U.S. rival, on key third-party benchmarks including multilingual knowledge and graduate-level reasoning.
To put this into perspective, these are among the smallest general purpose models shipped by any lab recently, comparable to MIT offshoot LiquidAI’s LFM2 series (which also weigh in at a few hundred million to a few billion parameters) rather than the estimated trillion-plus parameters (model settings) reportedly used by the flagship models from OpenAI, Anthropic, and Google’s Gemini series.
The weights for the models are available right now globally under Apache 2.0 licenses — perfect for enterprise and commercial use, including customization as needed — on Hugging Face and ModelScope.
The technology: hybrid efficiency and native multimodality
The technical foundation of the Qwen3.5 small series is a departure from standard Transformer architectures. Alibaba has moved toward an Efficient Hybrid Architecture that combines Gated Delta Networks (a form of linear attention) with sparse Mixture-of-Experts (MoE).
This hybrid approach addresses the “memory wall” that typically limits small models; by using Gated Delta Networks, the models achieve higher throughput and significantly lower latency during inference.
Furthermore, these models are natively multimodal. Unlike previous generations that “bolted on” a vision encoder to a text model, Qwen3.5 was trained using early fusion on multimodal tokens. This allows the 4B and 9B models to exhibit a level of visual understanding—such as reading UI elements or counting objects in a video—that previously required models ten times their size.
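As a rough sketch of what that recurrence looks like, here is a minimal gated delta-rule update in plain NumPy; the shapes, per-step scalar gates, and function names are illustrative assumptions, not Alibaba's actual implementation:

```python
import numpy as np

def gated_delta_step(S, k, v, alpha, beta):
    """One gated delta-rule update of a linear-attention state.

    S     : (d_k, d_v) running key-value memory matrix
    k, v  : current key and value vectors
    alpha : gate in (0, 1] that decays old memory (illustrative scalar)
    beta  : write strength for the new key-value association
    """
    S = alpha * S                         # forget old associations
    pred = S.T @ k                        # value currently bound to this key
    S = S + beta * np.outer(k, v - pred)  # delta rule: correct toward v
    return S

def gated_delta_attention(keys, values, queries, alphas, betas):
    """Run the recurrence over a sequence: O(length) time, fixed-size state."""
    S = np.zeros((keys.shape[1], values.shape[1]))
    outputs = []
    for k, v, q, a, b in zip(keys, values, queries, alphas, betas):
        S = gated_delta_step(S, k, v, a, b)
        outputs.append(S.T @ q)           # read the memory with the query
    return np.stack(outputs)
```

Because the state S stays a fixed d_k x d_v matrix regardless of sequence length, memory use is constant where softmax attention's KV cache would grow linearly, which is the root of the throughput and latency gains described above.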
Benchmarking the “small” series: performance that defies scale
Newly released benchmark data illustrates just how aggressively these compact models are competing with—and often exceeding—much larger industry standards. The Qwen3.5-9B and Qwen3.5-4B variants demonstrate a cross-generational leap in efficiency, particularly in multimodal and reasoning tasks.
Multimodal dominance: In the MMMU-Pro visual reasoning benchmark, Qwen3.5-9B achieved a score of 70.1, outperforming Gemini 2.5 Flash-Lite (59.7) and even the specialized Qwen3-VL-30B-A3B (63.0).
Graduate-level reasoning: On the GPQA Diamond benchmark, the 9B model reached a score of 81.7, surpassing gpt-oss-120b (80.1), a model with over ten times its parameter count.
Video understanding: The series shows elite performance in video reasoning. On the Video-MME (with subtitles) benchmark, Qwen3.5-9B scored 84.5 and the 4B scored 83.5, significantly leading over Gemini 2.5 Flash-Lite (74.6).
Mathematical prowess: In the HMMT Feb 2025 (Harvard-MIT mathematics tournament) evaluation, the 9B model scored 83.2, while the 4B variant scored 74.0, proving that high-level STEM reasoning no longer requires massive compute clusters.
Document and multilingual knowledge: The 9B variant leads the pack in document recognition on OmniDocBench v1.5 with a score of 87.7. Meanwhile, it maintains a top-tier multilingual presence on MMMLU with a score of 81.2, outperforming gpt-oss-120b (78.2).
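Tallying only the head-to-head scores quoted above where both models are cited, the 9B model's margins over gpt-oss-120b can be summarized in a few lines (the numbers below are copied from this article, not independently verified):

```python
# Scores quoted in this article: (Qwen3.5-9B, gpt-oss-120b)
reported = {
    "GPQA Diamond": (81.7, 80.1),
    "MMMLU": (81.2, 78.2),
}

def margins(scores):
    """Per-benchmark margin of the small model over the large one."""
    return {name: round(small - big, 1) for name, (small, big) in scores.items()}

print(margins(reported))  # modest but consistent wins despite roughly 13x fewer parameters
```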
Community reactions: “more intelligence, less compute”
Coming on the heels of last week's release of the already small, powerful open source Qwen3.5-Medium, which can run on a single GPU, the announcement of the Qwen3.5 Small Model Series, with its even smaller footprint and processing requirements, sparked immediate interest among developers focused on “local-first” AI.
“More intelligence, less compute” resonated with users seeking alternatives to cloud-based models.
AI and tech educator Paul Couvert of Blueshell AI captured the industry’s shock regarding this efficiency leap.
“How is this even possible?!” Couvert wrote on X. “Qwen has released 4 new models and the 4B version is almost as capable as the previous 80B A3B one. And the 9B is as good as GPT OSS 120b while being 13x smaller!”
Couvert’s analysis highlights the practical implications of these architectural gains:
- “They can run on any laptop”
- “0.8B and 2B for your phone”
- “Offline and open source”
As developer Karan Kendre of Kargul Studio put it: “these models [can run] locally on my M1 MacBook Air for free.”
This sentiment of “amazing” accessibility is echoed across the developer ecosystem. One user noted that a 4B model serving as a “strong multimodal base” is a “game changer for mobile devs” who need screen-reading capabilities without high CPU overhead.
Indeed, Hugging Face developer Xenova noted that the new Qwen3.5 Small Model series can even run directly in a user’s web browser and handle sophisticated, previously compute-heavy operations like video analysis.
Researchers also praised the release of Base models alongside the Instruct versions, noting that it provides essential support for “real-world industrial innovation.”
The release of Base models is particularly valued by enterprise and research teams because it provides a “blank slate” that hasn’t been biased by a specific set of RLHF (Reinforcement Learning from Human Feedback) or SFT (Supervised Fine-Tuning) data, which can often lead to “refusals” or specific conversational styles that are difficult to undo.
Now, with the Base models, those interested in customizing the model for specific tasks and purposes have an easier starting point, as they can apply their own instruction tuning and post-training without having to strip away Alibaba’s.
Licensing: a win for the open ecosystem
Alibaba has released the weights and configuration files for the Qwen3.5 series under the Apache 2.0 license. This permissive license allows for commercial use, modification, and distribution without royalty payments, removing the “vendor lock-in” associated with proprietary APIs.
- Commercial use: Developers can integrate models into commercial products royalty-free.
- Modification: Teams can fine-tune (SFT) or apply RLHF to create specialized versions.
- Distribution: Models can be redistributed in local-first AI applications like Ollama.
Contextualizing the news: why small matters so much right now
The release of the Qwen3.5 Small Series arrives at a moment of “Agentic Realignment.” We have moved past simple chatbots; the goal now is autonomy. An autonomous agent must “think” (reason), “see” (multimodality), and “act” (tool use). While doing this with trillion-parameter models is prohibitively expensive, a local Qwen3.5-9B can perform these loops for a fraction of the cost.
By scaling Reinforcement Learning (RL) across million-agent environments, Alibaba has endowed these small models with “human-aligned judgment,” allowing them to handle multi-step objectives like organizing a desktop or reverse-engineering gameplay footage into code. Whether it is a 0.8B model running on a smartphone or a 9B model powering a coding terminal, the Qwen3.5 series is effectively democratizing the “agentic era.”
The Qwen3.5 series’ shift from “chatbots” to “native multimodal agents” transforms how enterprises can distribute intelligence. By moving sophisticated reasoning to the “edge”—individual devices and local servers—organizations can automate tasks that previously required expensive cloud APIs or high-latency processing.
Strategic enterprise applications and considerations
The 0.8B to 9B models are re-engineered for efficiency, utilizing a hybrid architecture that activates only the parts of the network needed for each task.
- Visual Workflow Automation: Using “pixel-level grounding,” these models can navigate desktop or mobile UIs, fill out forms, and organize files based on natural language instructions.
- Complex Document Parsing: With scores exceeding 90% on document understanding benchmarks, they can replace separate OCR and layout parsing pipelines to extract structured data from diverse forms and charts.
- Autonomous Coding & Refactoring: Enterprises can feed entire repositories (up to 400,000 lines of code) into the 1M context window for production-ready refactors or automated debugging.
- Real-Time Edge Analysis: The 0.8B and 2B models are designed for mobile devices, enabling offline video summarization (up to 60 seconds at 8 FPS) and spatial reasoning without taxing battery life.
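To make that last budget concrete: 60 seconds at 8 FPS is a fixed pool of frames that an on-device pipeline can subsample. The helpers below are a hypothetical illustration of that arithmetic, not a documented Qwen API:

```python
def edge_frame_count(duration_s: float, fps: float) -> int:
    """Total frames a clip yields at the stated capture rate."""
    return int(duration_s * fps)

def uniform_sample(total_frames: int, budget: int) -> list:
    """Evenly spaced frame indices when a clip exceeds the model's frame budget."""
    if total_frames <= budget:
        return list(range(total_frames))
    step = total_frames / budget
    return [int(i * step) for i in range(budget)]

# 60 seconds at 8 FPS is 480 frames; a tight on-device budget might keep 32 of them.
picked = uniform_sample(edge_frame_count(60, 8), 32)
```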
The table below outlines which enterprise functions stand to gain the most from local, small-model deployment.
| Function | Primary Benefit | Key Use Case |
| --- | --- | --- |
| Software Engineering | Local Code Intelligence | Repository-wide refactoring and terminal-based agentic coding. |
| Operations & IT | Secure Automation | Automating multi-step system settings and file management tasks locally. |
| Product & UX | Edge Interaction | Integrating native multimodal reasoning directly into mobile/desktop apps. |
| Data & Analytics | Efficient Extraction | High-fidelity OCR and structured data extraction from complex visual reports. |
While these models are highly capable, their small scale and “agentic” nature introduce specific operational “flags” that teams must monitor.
- The Hallucination Cascade: In multi-step “agentic” workflows, a small error in an early step can lead to a “cascade” of failures where the agent pursues an incorrect or nonsensical plan.
- Debugging vs. Greenfield Coding: While these models excel at writing new “greenfield” code, they can struggle with debugging or modifying existing, complex legacy systems.
- Memory and VRAM Demands: Even “small” models (like the 9B) require significant VRAM for high-throughput inference; the “memory footprint” remains high because the total parameter count still occupies GPU space.
- Regulatory & Data Residency: Using models from a China-based provider may raise data residency questions in certain jurisdictions, though the Apache 2.0 open-weight version allows for hosting on “sovereign” local clouds.
Enterprises should prioritize “verifiable” tasks—such as coding, math, or instruction following—where the output can be automatically checked against predefined rules to prevent “reward hacking” or silent failures.
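One concrete shape such a “verifiable task” check can take, sketched here as an assumption rather than anything Qwen ships, is an independent recomputation gate that accepts a model's arithmetic answer only when it matches a safe evaluation of the expression:

```python
import ast
import operator

# Safe evaluator for simple arithmetic: walks the AST instead of calling eval().
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _ev(node):
    if isinstance(node, ast.Expression):
        return _ev(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_ev(node.left), _ev(node.right))
    raise ValueError("unsupported expression")

def verify_claim(expression: str, claimed: float, tol: float = 1e-9) -> bool:
    """Accept a model's answer only if it matches an independent recomputation."""
    return abs(_ev(ast.parse(expression, mode="eval")) - claimed) < tol
```

Gating each agent step on a rule-based check like this is one way to catch silent failures before they cascade into a bad multi-step plan.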
-
Politics4 days agoITV enters Gaza with IDF amid ongoing genocide
-
Fashion4 days agoWeekend Open Thread: Iris Top
-
Tech2 days agoUnihertz’s Titan 2 Elite Arrives Just as Physical Keyboards Refuse to Fade Away
-
Business7 days agoTrue Citrus debuts functional drink mix collection
-
Sports3 days ago
The Vikings Need a Duck
-
NewsBeat5 days agoCuba says its forces have killed four on US-registered speedboat | World News
-
NewsBeat3 days agoDubai flights cancelled as Brit told airspace closed ’10 minutes after boarding’
-
Politics1 hour agoAlan Cumming Brands Baftas Ceremony A ‘Triggering S**tshow’
-
Tech7 days agoUnsurprisingly, Apple's board gets what it wants in 2026 shareholder meeting
-
NewsBeat5 days agoManchester Central Mosque issues statement as it imposes new measures ‘with immediate effect’ after armed men enter
-
NewsBeat3 days agoThe empty pub on busy Cambridge road that has been boarded up for years
-
NewsBeat2 days ago‘Significant’ damage to boarded-up Horden house after fire
-
NewsBeat3 days agoAbusive parents will now be treated like sex offenders and placed on a ‘child cruelty register’ | News UK
-
NewsBeat6 days agoPolice latest as search for missing woman enters day nine
-
Entertainment1 day agoBaby Gear Guide: Strollers, Car Seats
-
Business5 days agoDiscord Pushes Implementation of Global Age Checks to Second Half of 2026
-
Business5 days agoOnly 4% of women globally reside in countries that offer almost complete legal equality
-
Tech4 days agoNASA Reveals Identity of Astronaut Who Suffered Medical Incident Aboard ISS
-
Crypto World6 days agoEntering new markets without increasing payment costs
-
Politics2 days ago
FIFA hypocrisy after Israel murder over 400 Palestinian footballers

