Cursor has reportedly surpassed $2B in annualized revenue

The AI coding assistant Cursor has surpassed $2 billion in annualized revenue, according to a Bloomberg source. This individual says the four-year-old startup saw its revenue run rate double over the past three months.
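A run rate doubling in three months implies steep compound growth. A quick back-of-the-envelope sketch (using a round, illustrative starting figure, not Cursor's actual monthly numbers) makes the pace concrete:

```python
# Back-of-the-envelope: if an annualized run rate doubles over three months,
# the implied compound monthly growth rate is 2**(1/3) - 1, about 26%.
monthly_growth = 2 ** (1 / 3) - 1
print(f"implied monthly growth: {monthly_growth:.1%}")  # ~26.0%

# Illustrative trajectory from a hypothetical $1B ARR to $2B:
arr = 1.0
for month in range(1, 4):
    arr *= 1 + monthly_growth
    print(f"month {month}: ${arr:.2f}B")
```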

The disclosure appears timed to counter a recent wave of skepticism. Last week, tweets went viral questioning whether Cursor’s momentum was stalling, citing high-profile defections by individual developers to competing tools — particularly Anthropic’s Claude Code.

Founded in 2022, Cursor initially sold its product primarily to individual developers. Over the last year, however, it has focused more on landing large corporate buyers, which now account for approximately 60% of revenue, according to Bloomberg.

While some individual developers and smaller startups have switched from Cursor to Claude Code, which is seen as more competitively priced, that attrition appears to be offset by higher-spending corporate customers, who tend to stick around longer.

Beyond Claude Code, OpenAI’s coding tool Codex is also competing for share in the rapidly growing market for AI-assisted software development. Other startups in the space include Replit, Cognition, and Lovable.

Cursor was last valued at $29.3 billion when it raised a $2.3 billion funding round co-led by Accel and Coatue in November.

Cursor did not immediately respond to our request for comment.

The ‘European’ Jolla Phone Is an Anti-Big-Tech Smartphone

“There are Chinese components as well—we are totally open about it—but the key is that, as we compile the software ourselves and install it in Finland, we protect the integrity of the product,” Pienimäki says.

What makes Sailfish OS unique among competitors like GrapheneOS and e/OS is that it’s not based on the Android Open Source Project but directly on Linux. That means it has no ties to Google and no need to “deGoogle” the software, giving a greater sense of sovereignty over the software (and now the hardware). Still, it’s able to run Android apps, though the implementation isn’t perfect. Another common criticism is that it’s not as secure as options like GrapheneOS, where every app is sandboxed.

There’s a good chance some Android apps on Sailfish OS will run into issues, which is why in the startup wizard the phone will ask if you want to install services like MicroG—open source software that can run Google services on devices that don’t have the Google Play Store, making it an easier on-ramp for folks coming from traditional smartphones without a technical background. You don’t even need to create a Sailfish OS account to use the Jolla Phone.

Jolla’s effort is hardly the first to push the anti–Big Tech narrative. A wave of other hardware and software companies offer a deGoogled experience, whether that’s Murena from France and its e/OS privacy-friendly operating system or the Canadian GrapheneOS, which just announced a partnership with Motorola. At CES earlier this year, the Swiss company Punkt also teamed up with ApostrophyOS to deploy its software on the new MC03 smartphone. Jolla is following a broader European trend of reducing reliance on US companies, like how French officials ditched Zoom for French-made video conference software earlier this year.

The Phone

A common problem with these niche smartphones is that they inevitably end up costing a lot of money for the specs. Take the Light Phone III, for example, a fairly low-tech anti-smartphone that doesn’t enjoy the benefits of economies of scale, resulting in an outlandish $699 price. The Jolla Phone is in a similar boat, though the specs-to-value ratio is a little more respectable.

It’s powered by a midrange MediaTek Dimensity 7100 5G chip with 8 GB of RAM, 256 GB of storage, plus a microSD card slot and dual-SIM tray. There’s a 6.36-inch 1080p AMOLED screen, the two main cameras, and a 32-megapixel selfie shooter. The 5,500-mAh battery cell is fairly large considering the phone’s size, though the phone’s connectivity is a little dated, stuck with Wi-Fi 6 and Bluetooth 5.4.

Uniquely, the Jolla Phone brings back “The Other Half” functional rear covers from the original. These swappable back covers have pogo pins that interface with the phone, allowing people to create unique accessories like a second display on the back of the phone or even a keyboard attachment. There’s an Innovation Program where the community can cocreate functional covers together and 3D-print them. And yes, a removable rear cover means the Jolla Phone’s battery is user-replaceable.

Android gets patches for Qualcomm zero-day exploited in attacks

Google has released security updates to patch 129 Android security vulnerabilities, including an actively exploited zero-day flaw in a Qualcomm display component.

“There are indications that CVE-2026-21385 may be under limited, targeted exploitation,” the company said on Monday in its March 2026 Android Security Bulletin.

While Google didn’t provide any further information on the attacks currently targeting this vulnerability, Qualcomm revealed in a separate security advisory issued on February 3 that the flaw is an integer overflow or wraparound in the Graphics subcomponent that local attackers can exploit to trigger memory corruption.

Qualcomm says it was alerted to this high-severity vulnerability on December 18, and it notified customers on February 2. According to its February advisory, which has yet to flag CVE-2026-21385 as exploited in attacks, the security flaw affects 235 Qualcomm chipsets.

With this month’s Android security updates, Google fixed 10 critical security vulnerabilities in the System, Framework, and Kernel components that attackers could exploit to gain remote code execution, elevate privileges, or trigger denial-of-service conditions.

“The most severe of these issues is a critical security vulnerability in the System component that could lead to remote code execution with no additional execution privileges needed. User interaction is not needed for exploitation,” Google said.

Google issued two sets of patches: the 2026-03-01 and 2026-03-05 security patch levels. The latter bundles all fixes from the first batch, as well as patches for closed-source third-party and kernel subcomponents, which may not apply to all Android devices.

While Google Pixel devices receive security updates immediately, other vendors often take longer to test and tweak them for specific hardware configurations.

Google and Qualcomm spokespersons were not immediately available for comment when contacted by BleepingComputer earlier today regarding the CVE-2026-21385 attacks and their targets.

Google released patches for two other high-severity zero-day vulnerabilities (CVE-2025-48633 and CVE-2025-48572) in December, both of which were also tagged as “under limited, targeted exploitation.”

AI Proof Verification: Gauss Tackles 24D

When Ukrainian mathematician Maryna Viazovska received a Fields Medal—widely regarded as the Nobel Prize for mathematics—in July 2022, it was big news. Not only was she the second woman to accept the honor in the award’s 86-year history, but she collected the medal just months after her country had been invaded by Russia. Nearly four years later, Viazovska is making waves again. Today, in a collaboration between humans and AI, Viazovska’s proofs have been formally verified, signaling rapid progress in AI’s ability to assist with mathematical research.

“These new results seem very, very impressive, and definitely signal some rapid progress in this direction,” says AI-reasoning expert and Princeton University postdoc Liam Fowl, who was not involved in the work.

In her Fields Medal–winning research, Viazovska had tackled two versions of the sphere-packing problem, which asks: How densely can identical circles, spheres, et cetera, be packed in n-dimensional space? In two dimensions, the honeycomb is the best solution. In three dimensions, spheres stacked in a pyramid are optimal. But after that, it becomes exceedingly difficult to find the best solution, and to prove that it is in fact the best.

In 2016, Viazovska solved the problem in two cases. By using powerful mathematical functions known as (quasi-)modular forms, she proved that a symmetric arrangement known as E8 is the best 8-dimensional packing, and soon after proved with collaborators that another sphere packing called the Leech lattice is best in 24 dimensions. Though seemingly abstract, this result has potential to help solve everyday problems related to dense sphere packing, including error-correcting codes used by smartphones and space probes.
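The optimal densities themselves have simple closed forms, and the fraction of space filled drops sharply with dimension. A short sketch using the standard published values (the n = 8 and n = 24 entries are the ones Viazovska’s proofs established as optimal):

```python
import math

# Optimal sphere-packing density (fraction of space filled) by dimension.
densities = {
    2: math.pi / (2 * math.sqrt(3)),        # hexagonal packing, ~0.9069
    3: math.pi / (3 * math.sqrt(2)),        # pyramid (FCC) stacking, ~0.7405
    8: math.pi ** 4 / 384,                  # E8 lattice, ~0.2537
    24: math.pi ** 12 / math.factorial(12), # Leech lattice, ~0.00193
}
for dim, rho in densities.items():
    print(f"n = {dim:2d}: optimal density = {rho:.6f}")
```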

The proofs were verified by the mathematical community and deemed correct, leading to the Fields Medal recognition. But formal verification—the ability of a proof to be verified by a computer—is another beast altogether. Since 2022, much progress has been made in AI-assisted formal proof verification.

Serendipity leads to formalization project

A few years later, a chance meeting in Lausanne, Switzerland, between third-year undergraduate Sidharth Hariharan and Viazovska would reignite her interest in sphere-packing proofs. Though still very early in his career, Hariharan was already becoming adept at formalizing proofs.

“Formal verification of a proof is like a rubber stamp,” Fowl says. “It’s a kind of bona fide certification that you know your statements of reasoning are correct.”

Hariharan told Viazovska how he had been using the process of formalizing proofs to learn and really understand mathematical concepts. In response, Viazovska expressed an interest in formalizing her proofs, largely out of curiosity. From this, in March 2024 the Formalising Sphere Packing in Lean project was born. Lean is a popular programming language and “proof assistant” that allows mathematicians to write proofs that are then verified for absolute correctness by a computer.

A collaboration bringing in experts Bhavik Mehta (Imperial College London), Christopher Birkbeck (University of East Anglia, England), Seewoo Lee (University of California, Berkeley), and others, the project involved writing a human-readable “blueprint” that could be used to map the 8-dimensional proof’s various constituents and which of them had and had not been formalized and/or proven, and then proving and formalizing those missing elements in Lean.

“We had been building the project’s repository for about 15 months when we enabled public access in June 2025,” recalls Hariharan, now a first-year Ph.D. student at Carnegie Mellon University. “Then, in late October we heard from Math, Inc. for the first time.”

The AI speedup

Math, Inc. is a startup developing Gauss, an AI specifically designed to automatically formalize proofs. “It’s a particular kind of language model called a reasoning agent that’s meant to interleave both traditional natural-language reasoning and fully formalized reasoning,” explains Jesse Han, Math, Inc. CEO and cofounder. “So it’s able to conduct literature searches, call up tools, and use a computer to write down Lean code, take notes, spin up verification tooling, run the Lean compiler, et cetera.”

Math, Inc. first hit the headlines when it announced that Gauss had completed a Lean formalization of the strong prime number theorem (PNT) in three weeks last summer, a task that Fields Medalist Terence Tao and Alex Kontorovich had been working on. Similarly, Math, Inc. contacted Hariharan and colleagues to say that Gauss had proven several facts related to their sphere-packing project.

“They told us that they had finished 30 ‘sorrys,’ which meant that they proved 30 intermediate facts that we wanted proved,” explains Hariharan. A proportion of these sorrys were shared with the project team and merged with their own work. “One of them helped us identify a typo in our project, which we then fixed,” adds Hariharan. “So it was a pretty fruitful collaboration.”
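In Lean, such an unproven intermediate fact is marked with the `sorry` keyword, which the compiler tracks as an outstanding gap in the proof. A toy illustration (not taken from the sphere-packing repository):

```lean
-- A Lean 4 theorem whose proof is left as a gap; closing `sorry`s like
-- this one, at much greater scale, is what Gauss was asked to do.
theorem n_add_zero (n : Nat) : n + 0 = n := by
  sorry  -- an unproven intermediate fact (here, `rfl` would close it)
```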

From 8 to 24 dimensions

But then, radio silence followed. Math, Inc. appeared to lose interest. However, while Hariharan and colleagues continued their labor of love, Math, Inc. was building a new and improved version of Gauss. “We made a research breakthrough sometime mid-January that produced a much stronger version of Gauss,” says Han. “This new version reproduced our three-week PNT result in two to three days.”

Days later, the new Gauss was steered back to the sphere-packing formalization. Working from the invaluable preexisting blueprint and work that Hariharan and collaborators had shared, Gauss not only autoformalized the 8-dimensional case, but also found and fixed a typo in the published paper, all in the space of five days.

“When they reached out to us in late January saying that they finished it, to put it very mildly, we were very surprised,” says Hariharan. “But at the end of the day, this is technology that we’re very excited about, because it has the capability to do great things and to assist mathematicians in remarkable ways.”

Hariharan was working on sphere-packing proof verification as the sun was setting behind Carnegie Mellon’s Hamerschlag Hall. Photo: Sidharth Hariharan

The 8-dimensional sphere-packing proof formalization alone, announced on February 23, represents a watershed moment for autoformalization and AI–human collaboration. But today, Math, Inc. revealed an even more impressive accomplishment: Gauss has autoformalized Viazovska’s 24-dimensional sphere-packing proof—all 200,000+ lines of code of it—in just two weeks.

There are commonalities between the 8- and 24-dimensional cases in terms of the foundational theory and overall architecture of the proof, meaning some of the code from the 8-dimensional case could be refactored and reused. However, Gauss had no preexisting blueprint to work from this time. “And it was actually significantly more involved than the 8-dimensional case, because there was a lot of missing background material that had to be brought on line surrounding many of the properties of the Leech lattice, in particular its uniqueness,” explains Han.

Though the 24-dimensional case was an automated effort, both Han and Hariharan acknowledge the many contributions from humans that laid the foundations for this achievement, regarding it as a collaborative endeavor overall between humans and AI.

But for Han, it represents even more: the beginning of a revolutionary transformation in mathematics, where extremely large-scale formalizations are commonplace. “A programmer used to be someone who punched holes into cards, but then the act of programming became separated from whatever material substrate was used for recording programs,” he concludes. “I think the end result of technology like this will be to free mathematicians to do what they do best, which is to dream of new mathematical worlds.”

The Analogue Pocket will be back in stock this week, but there’s a tariff-related price increase

The Analogue Pocket handheld retro console has proven to be extremely popular, as initial runs have sold out. The company just announced that the system will be back in stock this week, along with the dock accessory. Preorders open up on March 4 at 11AM ET, with shipments going out this June.

That’s the good news. The bad news? The little console is getting slapped with a price increase, shooting up to $240 from the recent price of $220, with the company blaming President Trump’s neverending tariffs. The device is assembled in China, and Trump rolled out new tariffs just after the Supreme Court struck down the old ones. We love to suddenly pay more for gadgets that have been out in the wild for nearly six years, don’t we folks?

For the uninitiated, the Pocket isn’t an emulation machine. It plays actual Game Boy, Game Boy Color and Game Boy Advance cartridges. It also integrates with various Game Boy accessories, like the camera and printer. The console can even handle Game Gear, TurboGrafx-16 and Atari Lynx games, but those require separate adapters.

We praised the Analogue Pocket in our review, calling it “a clever little thing” that is sure to light up the nostalgia center of your brain. It even made one of our best-of lists, which is notable given the competition included stuff like the Steam Deck.

In any event, this drop will likely sell out quickly. We recommend parking a browser on the company website just prior to 11AM ET if you can stomach the new price.

Predator: Badlands Slays on 4K UHD Disc: Review

As the only director to have helmed more than a single Predator film (previously the direct-to-streaming Prey and animated anthology Killer of Killers), Dan Trachtenberg has become the de facto shepherd of the franchise, and that’s a good thing. With Badlands, he takes audiences in a new direction, revealing some deep backstory to the “Yautja” culture and even creating sympathy for the story’s underdog protagonist.

We open on their homeworld where Dek, the runt of his clan, struggles to prove his worth, particularly in the eyes of his cold, fanatical father. Barely escaping execution, Dek blasts off to the “death planet” of Genna, desperate to kill the ultimate prey and earn his prized camouflage cloak once and for all. Immediately we discover how this place earned its nickname, and he only survives his first few moments with the help of a sassy robot, a “synthetic” named Thia (Elle Fanning). They form an unconventional bond and begin a shared quest, fraught with danger and more than a few twists big and small.

Predator: Badlands was shot digitally and yielded a native 4K master filled with extensive computer-generated visual effects that are photo-realistic and imperceptibly blended, such that the adventure plays out like we’re viewing it all through a 2.39:1 window. The use of color is occasionally quite clever, such as when a bleak, oppressive sky is streaked with vermilion. Emissive highlights stand out as well, particularly when the headlights of high-tech vehicles slice through the darkness.

Enemies approach and attack from all directions, and the hand-to-hand fights are some of the fiercest I’ve seen in a minute, bringing us some wonderfully hyperactive panning across the surrounds. Bass impact is outstanding, from rocket blasts to explosions to individual weapon hits, with the testosterone meter seldom dipping below 10. Among the more subtle flourishes, a distinctive inorganic effect is applied to the synthetics’ voices, at different levels as the mood commands. Worth noting, the Dolby Atmos track is exclusive to the 4K disc, as the included HD Blu-ray disc tops out at DTS-HD Master Audio 7.1.

Predator: Badlands – Limited Edition Steelbook

Trachtenberg anchors the commentary, joined by his producer, director of photography and stunt coordinator. The track also appears on the HD Blu-ray, where the assorted featurettes do a fantastic job of breaking down the unique challenges of such an original production. In addition, there are half a dozen deleted, alternate and extended scenes, some rendered as detailed pre-visualizations, totaling almost half an hour, with their own optional commentary. The two-disc (plus digital copy) set is also being offered as a SteelBook ($76.99 at Amazon).

Fans will be quick to note that this is not the first Predator movie that tips its cap to the long-running Alien franchise, although when the script isn’t giving us a nifty wink it can be a bit derivative of a variety of sources. Overall, Badlands is a fierce and revitalized entry in the franchise that absolutely dominates in a home theater setting.

Movie Details

  • STUDIO: Fox/Disney
  • FORMAT: Ultra HD 4K Blu-ray (February 17, 2026)
  • THEATRICAL RELEASE YEAR: 2025
  • ASPECT RATIO: 2.39:1
  • HDR FORMATS: Dolby Vision, HDR10
  • AUDIO FORMAT: Dolby Atmos with TrueHD 7.1 core
  • LENGTH: 107 mins.
  • MPAA RATING: PG-13
  • DIRECTOR: Dan Trachtenberg
  • STARRING: Elle Fanning, Dimitrius Schuster-Koloamatangi, Rohinal Nayaran, Chris Terhune, Mike Homik, Stefan Grube

Our Ratings

★★★★★★★★★★ Picture

★★★★★★★★★★ Sound

★★★★★★★★★★ Extras

The few new things in Apple’s midrange tablet

The iPad Air, the middle child in Apple’s tablet lineup, has been upgraded to the M4 chip with increased RAM and… Well, there’s not a whole lot else if I’m being honest. At the very least, the new iPad Air M4 models remain at the same price as the iPad Air M3, with the 11-inch version starting at $599 and the 13-inch at $799. I would give Apple more credit if it had increased the starting storage or added literally anything else.

If you put them side by side, you might not be able to tell the difference, but this upgrade would benefit creatives and professionals more than anything. There’s a significant performance bump from the M3 to the M4, and the increased RAM is doing a lot of work, especially if you’re taking advantage of Apple Intelligence.

If you’re using an M1-powered iPad Air or something even older, though, the new iPad Air M4 should be a compelling upgrade. Pre-orders start at 9:15AM ET on March 4, with the units arriving a week later. We expect full reviews will be published by then. But in the meantime, let’s dive into what the performance gains might look like and what we’re missing out on in this year’s iteration of the iPad Air.

iPad Air M4 vs. iPad Air M3: Performance and battery life

The most significant difference between the two iPad Air generations is their chipsets. The latest iPad Air launches with the M4 chip versus its predecessor’s M3 chip, and it gets a bump in RAM from 8GB to 12GB.

I don’t give much fanfare to incremental chip increases because the performance gain is usually minimal. However, the M4 is up to 30 percent faster than the M3, according to Apple. That might be noticeable to even casual users, especially as the years go on and iPadOS becomes more demanding. For power users, it’ll mean more demanding work like video editing will be noticeably quicker.

For those in need of the fastest internet speeds, the new iPad Air is also equipped with Apple’s N1 chip, which enables Wi-Fi 7 and Bluetooth 6, the latest connectivity technology. However, I really don’t imagine the average user needing up to 46 gigabits per second compared to the iPad Air M3’s 9.6 Gbps on Wi-Fi 6. If you do, you’re in the tax bracket for an iPad Pro.

Now, despite the increase in speeds, the battery life between the M4 and M3 models remains the same. Apple claims all four models get up to 10 hours of battery life surfing the web on Wi-Fi or watching video (up to 9 hours on cellular). No complaints here.

iPad Air M4 vs. iPad Air M3: Design, display, audio and cameras

For better or worse, we’re not getting any changes in any of these departments, which is why I’m lumping them together.

The iPad Air comes in blue, purple, beige and gray. The 11-inch option measures 9.74 x 7.02 x 0.24 inches and the 13-inch comes in at 11.04 x 8.46 x 0.24 inches. As their names suggest, they’re both rather light, at 1.01 pounds (1.02 pounds for the M4) and 1.36 pounds, respectively. My only wish is that we’d gotten new colors that popped a bit more.

Then there are the displays. All four versions of the iPad Air sport a Liquid Retina LED display at 264 ppi. The 11-inch supports a 2,360 x 1,640 resolution with a peak brightness of 500 nits, while the 13-inch offers a 2,732 x 2,048 resolution at 600 nits. It would’ve been nice to see an OLED or even Mini-LED panel make its way to the iPad Air, which could’ve made the screen more vivid and vibrant. But it’s more disappointing that we’re stuck at 60Hz unlike the Pro models that offer 120Hz, making their visual experience smoother.

Both products feature landscape stereo speakers. The iPad Air M3’s audio quality couldn’t live up to the iPad Pro, so I doubt the M4 model will.

You won’t catch me taking photos with an iPad, but for those of you who do, the iPad Air M4 features the same 12MP cameras on the front and back as its predecessor.

iPadOS 26, Apple Intelligence and Apple accessories

Nothing huge is happening to iPadOS or the Apple accessories in the iPad Air refresh. The revamped Magic Keyboard from last year still works with these new models, as does the Apple Pencil Pro. iPadOS 26, released last fall, was a major update but will still be familiar enough to anyone who has used an iPad before. The new iPad Air M4 is getting a significant boost in AI processing speeds, though, thanks to its new chip and 50 percent increase in RAM. However, unless you’re an AI power user, you probably won’t notice a difference there.

All that said, if your love language is spreadsheets, the full specs are helpfully laid out below:

iPad Air M4 vs. iPad Air M3: Specs at a glance

  • Price (both models): $599 (11-inch), $799 (13-inch)
  • Processor: M4 vs. M3
  • Display (both models): 11-inch: Liquid Retina, 2,360 x 1,640 LED at 264 ppi; 13-inch: Liquid Retina, 2,732 x 2,048 LED at 264 ppi
  • RAM: 12GB (M4) vs. 8GB (M3)
  • Storage (both models): 128GB, 256GB, 512GB, 1TB
  • Battery (both models): Up to 10 hours (Wi-Fi), 9 hours (cellular)
  • Cameras (both models): 12MP Wide (rear), 12MP Center Stage (front)
  • Apple accessories (both models): Apple Pencil Pro, Apple Pencil, Magic Keyboard Folio
  • Dimensions (both models): 11-inch: 9.74 x 7.02 x 0.24 inches; 13-inch: 11.04 x 8.46 x 0.24 inches
  • Weight: 11-inch: 1.02 pounds (M4), 1.01 pounds (M3); 13-inch: 1.36 pounds (both)

Superagers’ ‘Secret Ingredient’ May Be the Growth of New Brain Cells

alternative_right shares a report from ScienceAlert: According to a study of 38 adult human brains donated to science, superagers — people who retain exceptional memory as they age — have roughly twice as many immature neurons as their peers who age more typically. Moreover, people with Alzheimer’s disease show a marked reduction in neurogenesis compared to a normal baseline. […]

Led by researchers at the University of Illinois Chicago, the team set out to examine a variety of postmortem hippocampal tissue samples to see if they could identify markers of neurogenesis — and if different groups had any notable differences. The brain samples were donated from five groups: eight healthy young adults, aged between 20 and 40; eight healthy agers, aged between 60 and 93; six superagers, aged between 86 and 100; six individuals with preclinical Alzheimer’s pathology, aged between 80 and 94; and 10 individuals with an Alzheimer’s diagnosis, aged between 70 and 93. The young healthy adult brain tissue was first analyzed to establish the neurogenesis pathways in the adult brain. Then, they analyzed 355,997 individual cell nuclei isolated from the hippocampus, searching for three different stages of cell development: Stem cells, which can develop into neurons; neuroblasts, which are stem cells in the process of that development; and immature neurons, on the verge of functionality. The results were striking.

“Superagers had twice the neurogenesis of the other healthy older adults,” [says neuroscientist Orly Lazarov of the University of Illinois Chicago]. “Something in their brains enables them to maintain a superior memory. I believe hippocampal neurogenesis is the secret ingredient, and the data support that.” That’s an interesting result on its own, but the data from the individuals with preclinical Alzheimer’s pathology and Alzheimer’s diagnoses is where the real meat of the study sits. In the preclinical group, subtle molecular changes hinted that the system supporting new neuron growth was beginning to falter. In the Alzheimer’s group, a clear drop in immature neurons was evident. A genetic analysis of the nuclei also showed that superager neural cells have increased gene activity linked to stronger synaptic connections, greater plasticity, and brain-derived neurotrophic factor, a critical protein for neural survival, growth, and maintenance. Taken together, these three things can be interpreted as resilience. The research has been published in the journal Nature.

What Is That Mysterious Metallic Device US Chief Design Officer Joe Gebbia Is Using?

Published

on

Joe Gebbia, cofounder of Airbnb and the US chief design officer appointed by President Trump, was spotted in San Francisco today using a mysterious metallic device. In a social media post on X viewed more than 500,000 times, a man who looks like Gebbia sits with an espresso at a coffee shop. He’s wearing metallic buds that bisect his ears, with a matching clamshell-shaped disc in front of him on the counter.

After the video was posted Monday morning, social media users were quick to suggest that this could be some kind of prototype from OpenAI’s upcoming line of hardware devices designed in partnership with famed Apple designer Jony Ive. An OpenAI spokesperson declined to comment on the potential Gebbia video after WIRED reached out. Gebbia also did not respond to a request for comment.

The device Gebbia appears to be wearing looks quite similar to the hardware seen in a fake OpenAI ad that was widely circulated on Reddit and social media in February. That video seemingly showed Pillion actor Alexander Skarsgård interacting with an AI device that had a similar-looking pair of earbuds and a circular disc. At the time, OpenAI denounced the widely seen video as not real. “Fake news,” wrote OpenAI President Greg Brockman at the time, responding to a social media post.

The earbuds seen in the video of Gebbia on Monday also look quite similar in shape to the Huawei FreeClip 2, a pair of open earbuds released earlier this year. However, the clamshell seen on the coffee counter next to Gebbia is different from Huawei’s most recent headphone case. It would also be quite surprising if a government official were seen using Huawei tech, considering the Chinese company is effectively banned from selling its phones in the US due to security concerns.

WIRED’s audio experts say he’s most likely wearing open earbuds, as Gebbia’s pair share some similarities with Soundcore’s AeroClips or Sony’s LinkBuds Clip, though the cases for those buds don’t match what’s on the table in front of Gebbia. WIRED also ran the photo and video through software that attempts to identify AI-generated outputs and other deepfakes. The detection software, from a company called Hive, says the odds are low that this imagery of Gebbia was generated by AI. Still, AI detectors are not always reliable and can include false outputs. It’s possible that the entire post could be a synthetic hoax.

Could this be some kind of soft launch teaser for OpenAI’s hardware? The timing of this trickle-out would make sense, since the company may ship devices to consumers sometime early in 2027. Still, OpenAI denied any involvement with the previous pseudo-ad for the metallic AI hardware, with its shiny earbuds and matching disc.


Alibaba’s small, open source Qwen3.5-9B beats OpenAI’s gpt-oss-120B and can run on standard laptops


Despite political turmoil in the U.S. AI sector, AI advances in China are continuing apace.

Earlier today, Alibaba’s Qwen Team — the AI researchers at the e-commerce giant responsible for its growing family of powerful open source language and multimodal models — unveiled its newest batch, the Qwen3.5 Small Model Series, which consists of:

  • Qwen3.5-0.8B & 2B: Two models optimized for “tiny” and “fast” performance, intended for prototyping and for deployment on edge devices where battery life is paramount.

  • Qwen3.5-4B: A strong multimodal base for lightweight agents, natively supporting a 262,144 token context window.

  • Qwen3.5-9B: A compact reasoning model that outperforms OpenAI’s open source gpt-oss-120B — a U.S. rival 13.5x its size — on key third-party benchmarks, including multilingual knowledge and graduate-level reasoning.

To put this into perspective, these models are among the smallest general-purpose models recently shipped by any lab. They are comparable to MIT offshoot LiquidAI’s LFM2 series, which also ranges from several hundred million to a few billion parameters, rather than to the estimated trillion-plus parameters (model settings) reportedly used for the flagship models from OpenAI, Anthropic, and Google’s Gemini series.

The weights for the models are available right now worldwide under the Apache 2.0 license — suitable for enterprise and commercial use, including customization as needed — on Hugging Face and ModelScope.


The technology: hybrid efficiency and native multimodality

The technical foundation of the Qwen3.5 small series is a departure from standard Transformer architectures. Alibaba has moved toward an Efficient Hybrid Architecture that combines Gated Delta Networks (a form of linear attention) with sparse Mixture-of-Experts (MoE).

This hybrid approach addresses the “memory wall” that typically limits small models; by using Gated Delta Networks, the models achieve higher throughput and significantly lower latency during inference.
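Alibaba hasn’t published implementation code alongside this description, but the mechanics of a gated delta rule can be sketched. The toy NumPy version below (our illustration, not Qwen’s implementation) shows the key property: a fixed-size state matrix replaces the KV cache that grows with every token in standard attention.

```python
import numpy as np

def gated_delta_step(S, k, v, q, alpha=0.95, beta=0.5):
    """One token of a toy gated delta rule (a form of linear attention).

    S is a fixed-size (d, d) state matrix standing in for a growing KV
    cache. alpha is a decay gate (forget old state); beta is a write
    strength (how hard to correct the state toward v).
    """
    S = alpha * S                      # gated decay of old memory
    error = v - S @ k                  # delta rule: prediction error for key k
    S = S + beta * np.outer(error, k)  # correct the state toward v
    y = S @ q                          # read out with the query
    return S, y

d = 8
rng = np.random.default_rng(0)
S = np.zeros((d, d))
for _ in range(1000):                  # 1,000 tokens, constant memory
    k, v, q = rng.normal(size=(3, d))
    k = k / np.linalg.norm(k)          # unit-norm keys keep the update stable
    S, y = gated_delta_step(S, k, v, q)

print(S.shape)  # (8, 8) regardless of sequence length
```

Because per-token compute and state size are both O(d²), throughput stays flat as the context grows, which is where the latency gains described above come from.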

Furthermore, these models are natively multimodal. Unlike previous generations that “bolted on” a vision encoder to a text model, Qwen3.5 was trained using early fusion on multimodal tokens. This allows the 4B and 9B models to exhibit a level of visual understanding—such as reading UI elements or counting objects in a video—that previously required models ten times their size.

Benchmarking the “small” series: performance that defies scale

Newly released benchmark data illustrates just how aggressively these compact models are competing with—and often exceeding—much larger industry standards. The Qwen3.5-9B and Qwen3.5-4B variants demonstrate a cross-generational leap in efficiency, particularly in multimodal and reasoning tasks.


Qwen3.5 Small Models Series benchmarks against other similarly-sized/classed models. Credit: Alibaba Qwen

Multimodal dominance: In the MMMU-Pro visual reasoning benchmark, Qwen3.5-9B achieved a score of 70.1, outperforming Gemini 2.5 Flash-Lite (59.7) and even the specialized Qwen3-VL-30B-A3B (63.0).

Graduate-level reasoning: On the GPQA Diamond benchmark, the 9B model reached a score of 81.7, surpassing gpt-oss-120b (80.1), a model with over ten times its parameter count.

Video understanding: The series shows elite performance in video reasoning. On the Video-MME (with subtitles) benchmark, Qwen3.5-9B scored 84.5 and the 4B scored 83.5, significantly leading over Gemini 2.5 Flash-Lite (74.6).


Mathematical prowess: In the HMMT Feb 2025 (Harvard-MIT mathematics tournament) evaluation, the 9B model scored 83.2, while the 4B variant scored 74.0, proving that high-level STEM reasoning no longer requires massive compute clusters.

Document and multilingual knowledge: The 9B variant leads the pack in document recognition on OmniDocBench v1.5 with a score of 87.7. Meanwhile, it maintains a top-tier multilingual presence on MMMLU with a score of 81.2, outperforming gpt-oss-120b (78.2).

Community reactions: “more intelligence, less compute”

Coming on the heels of last week’s release of Qwen3.5-Medium — an already compact, powerful open source model capable of running on a single GPU — the announcement of the Qwen3.5 Small Models Series, with its even smaller footprint and processing requirements, sparked immediate interest among developers focused on “local-first” AI.

The tagline “more intelligence, less compute” resonated with users seeking alternatives to cloud-based models.


AI and tech educator Paul Couvert of Blueshell AI captured the industry’s shock regarding this efficiency leap.

“How is this even possible?!” Couvert wrote on X. “Qwen has released 4 new models and the 4B version is almost as capable as the previous 80B A3B one. And the 9B is as good as GPT OSS 120b while being 13x smaller!”

Couvert’s analysis highlights the practical implications of these architectural gains:

  • “They can run on any laptop”

  • “0.8B and 2B for your phone”

  • “Offline and open source”

As developer Karan Kendre of Kargul Studio put it: “these models [can run] locally on my M1 MacBook Air for free.”


This sentiment of “amazing” accessibility is echoed across the developer ecosystem. One user noted that a 4B model serving as a “strong multimodal base” is a “game changer for mobile devs” who need screen-reading capabilities without high CPU overhead.

Indeed, Hugging Face developer Xenova noted that the new Qwen3.5 Small Model series can even run directly in a user’s web browser and perform sophisticated, previously compute-heavy operations such as video analysis.

Researchers also praised the release of Base models alongside the Instruct versions, noting that it provides essential support for “real-world industrial innovation.”

The release of Base models is particularly valued by enterprise and research teams because it provides a “blank slate” that hasn’t been biased by a specific set of RLHF (Reinforcement Learning from Human Feedback) or SFT (Supervised Fine-Tuning) data, which can often lead to “refusals” or specific conversational styles that are difficult to undo.


Now, with the Base models, those interested in customizing the model for specific tasks and purposes have an easier starting point: they can apply their own instruction tuning and post-training without having to strip away Alibaba’s.

Licensing: a win for the open ecosystem

Alibaba has released the weights and configuration files for the Qwen3.5 series under the Apache 2.0 license. This permissive license allows for commercial use, modification, and distribution without royalty payments, removing the “vendor lock-in” associated with proprietary APIs.

  • Commercial use: Developers can integrate models into commercial products royalty-free.

  • Modification: Teams can fine-tune (SFT) or apply RLHF to create specialized versions.

  • Distribution: Models can be redistributed in local-first AI applications like Ollama.

Contextualizing the news: why small matters so much right now

The release of the Qwen3.5 Small Series arrives at a moment of “Agentic Realignment.” We have moved past simple chatbots; the goal now is autonomy. An autonomous agent must “think” (reason), “see” (multimodality), and “act” (tool use). While doing this with trillion-parameter models is prohibitively expensive, a local Qwen3.5-9B can perform these loops for a fraction of the cost.

By scaling Reinforcement Learning (RL) across million-agent environments, Alibaba has endowed these small models with “human-aligned judgment,” allowing them to handle multi-step objectives like organizing a desktop or reverse-engineering gameplay footage into code. Whether it is a 0.8B model running on a smartphone or a 9B model powering a coding terminal, the Qwen3.5 series is effectively democratizing the “agentic era.”


The Qwen3.5 series’ shift from “chatbots” to “native multimodal agents” transforms how enterprises can distribute intelligence. By moving sophisticated reasoning to the “edge” — individual devices and local servers — organizations can automate tasks that previously required expensive cloud APIs or high-latency processing.

Strategic enterprise applications and considerations

The 0.8B to 9B models are engineered for efficiency, utilizing a hybrid architecture that activates only the necessary parts of the network for each task.

  • Visual Workflow Automation: Using “pixel-level grounding,” these models can navigate desktop or mobile UIs, fill out forms, and organize files based on natural language instructions.

  • Complex Document Parsing: With scores exceeding 90% on document understanding benchmarks, they can replace separate OCR and layout parsing pipelines to extract structured data from diverse forms and charts.

  • Autonomous Coding & Refactoring: Enterprises can feed entire repositories (up to 400,000 lines of code) into the 1M context window for production-ready refactors or automated debugging.

  • Real-Time Edge Analysis: The 0.8B and 2B models are designed for mobile devices, enabling offline video summarization (up to 60 seconds at 8 FPS) and spatial reasoning without taxing battery life.
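As a sanity check on the repository figure above — and assuming a rough rule of thumb of ~2.5 tokens per line of source code (our assumption, not Alibaba’s) — 400,000 lines lands right at a 1M-token window. A small helper can run the same estimate against a real codebase:

```python
import os

TOKENS_PER_LINE = 2.5        # rough rule of thumb for source code (our assumption)
CONTEXT_WINDOW = 1_000_000   # the 1M-token window cited above

def repo_fits(root, exts=(".py", ".js", ".ts", ".go", ".java")):
    """Estimate whether a repository's source code fits in the context window."""
    total_lines = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                with open(os.path.join(dirpath, name), errors="ignore") as f:
                    total_lines += sum(1 for _ in f)
    return total_lines * TOKENS_PER_LINE <= CONTEXT_WINDOW

# 400,000 lines * ~2.5 tokens/line = ~1,000,000 tokens: right at the limit.
print(int(400_000 * TOKENS_PER_LINE))  # 1000000
```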

The table below outlines which enterprise functions stand to gain the most from local, small-model deployment.

| Function | Primary Benefit | Key Use Case |
| --- | --- | --- |
| Software Engineering | Local Code Intelligence | Repository-wide refactoring and terminal-based agentic coding. |
| Operations & IT | Secure Automation | Automating multi-step system settings and file management tasks locally. |
| Product & UX | Edge Interaction | Integrating native multimodal reasoning directly into mobile/desktop apps. |
| Data & Analytics | Efficient Extraction | High-fidelity OCR and structured data extraction from complex visual reports. |

While these models are highly capable, their small scale and “agentic” nature introduce specific operational “flags” that teams must monitor.

  • The Hallucination Cascade: In multi-step “agentic” workflows, a small error in an early step can lead to a “cascade” of failures where the agent pursues an incorrect or nonsensical plan.

  • Debugging vs. Greenfield Coding: While these models excel at writing new “greenfield” code, they can struggle with debugging or modifying existing, complex legacy systems.

  • Memory and VRAM Demands: Even “small” models (like the 9B) require significant VRAM for high-throughput inference; the “memory footprint” remains high because the total parameter count still occupies GPU space.

  • Regulatory & Data Residency: Using models from a China-based provider may raise data residency questions in certain jurisdictions, though the Apache 2.0 open-weight version allows for hosting on “sovereign” local clouds.
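The VRAM point above is easy to quantify with back-of-the-envelope arithmetic (weights only; the KV cache and activations come on top, and the figures below are our estimates, not vendor numbers):

```python
def weight_vram_gb(params_billions, bytes_per_param):
    """Rough VRAM needed just to hold the weights (no KV cache, no activations)."""
    return params_billions * 1e9 * bytes_per_param / (1024 ** 3)

# Even a "small" 9B model wants roughly 17 GB at FP16 before any runtime
# overhead, which is why quantized builds dominate laptop deployments.
for fmt, nbytes in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"Qwen3.5-9B @ {fmt}: {weight_vram_gb(9, nbytes):.1f} GB")
```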

Enterprises should prioritize “verifiable” tasks—such as coding, math, or instruction following—where the output can be automatically checked against predefined rules to prevent “reward hacking” or silent failures.
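The “verifiable task” pattern described above is straightforward to implement: accept the model’s output only when an automatic check passes. A minimal sketch, with the function name and parsing rules our own:

```python
def passes_check(model_output, expected, tol=1e-6):
    """Accept an answer only if it parses and matches a predefined rule.

    Unparseable or wrong outputs become hard rejections, so silent failures
    and reward hacking surface immediately instead of propagating through
    a multi-step agent loop.
    """
    try:
        value = float(model_output.strip())
    except ValueError:
        return False  # unparseable output is a rejection, not a guess
    return abs(value - expected) <= tol

print(passes_check("42", 42.0))        # True
print(passes_check("about 42", 42.0))  # False: fails the hard check
```

On rejection, an agent loop would typically retry, switch strategies, or escalate to a human rather than proceed with an unverified answer.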



OpenAI will amend Defense Department deal to prevent mass surveillance in the US


OpenAI’s Sam Altman said the company will amend its deal with the Defense Department (or the Department of War) to explicitly prohibit the use of its AI system for mass surveillance of Americans. Altman published on X an internal memo previously sent to employees, telling them that the company will tweak the agreement to add language making that point especially clear. Specifically, it says:

“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.

For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

Altman also claimed in the memo that the department affirmed OpenAI’s services will not be used by intelligence agencies, including the NSA, without a modification to the contract. He added that if he received what he believed was an unconstitutional order, he would rather go to jail than follow it.


In addition, the OpenAI CEO has admitted in the memo that the company shouldn’t have rushed to get the deal out on Friday, February 27, since the issues were “super complex and demand clear communication.” Altman explained that the company was “trying to de-escalate things and avoid a much worse outcome” but it “looked opportunistic” in the end. If you’ll recall, OpenAI announced the partnership shortly after President Trump ordered all US government agencies to stop using Claude and any other Anthropic services. To note, Anthropic started working with the US government in 2024.

The Defense Department and Secretary Pete Hegseth had been pressuring Anthropic to remove its AI’s guardrails so that it could be used for all “lawful” purposes, including mass surveillance and the development of fully autonomous weapons. Anthropic refused to bow to Hegseth’s demands, saying in a statement that “no amount of intimidation or punishment” will change its “position on mass domestic surveillance or fully autonomous weapons.” Trump issued the order as a result. The Defense Department had also taken the first steps to designate Anthropic as a “supply chain risk,” a label typically reserved for Chinese companies believed to be working with their country’s government.

Altman said that in his conversations with US officials, he reiterated that Anthropic shouldn’t be designated as a supply chain risk and that he hoped the Defense Department would offer it the same deal OpenAI agreed to. In an AMA session on X over the weekend, Altman clarified that he didn’t know the details of Anthropic’s agreement and how it differed from the one OpenAI signed. But if it had been the same, he thought Anthropic should have agreed to it.

After the news broke about OpenAI’s deal, Anthropic’s Claude app climbed to the number one spot on the App Store’s Top Free Apps leaderboard, beating out both ChatGPT and Google Gemini. Anthropic, capitalizing on Claude’s sudden popularity, launched a memory import tool to make switching to its chatbot from another company’s easier. Meanwhile, uninstalls of ChatGPT jumped by 295 percent day-over-day, according to Sensor Tower.

