To work around those rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:
Before: “The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain.”
After: “The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics.”
Claude will read that and do its best as a pattern-matching machine to create an output that matches the context of the conversation or task at hand.
An example of why AI writing detection fails
Even with such a confident set of rules crafted by Wikipedia editors, we’ve previously written about why AI writing detectors don’t work reliably: There is nothing inherently unique about human writing that reliably differentiates it from LLM writing.
One reason is that even though most AI language models tend toward certain types of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes it’s very difficult, as OpenAI found in its yearslong struggle against the em dash.)
Also, humans can write in chatbot-like ways. For example, this article likely contains some “AI-written traits” that trigger AI detectors even though it was written by a professional writer—especially if we use even a single em dash—because most LLMs picked up writing techniques from examples of professional writing scraped from the web.
Along those lines, the Wikipedia guide has a caveat worth noting: While the list points out some obvious tells of, say, unaltered ChatGPT usage, it’s still composed of observations, not ironclad rules. A 2025 preprint cited on the page found that heavy users of large language models correctly spot AI-generated articles about 90 percent of the time. That sounds great until you consider the other 10 percent: an error rate that high means some human-written work will inevitably get flagged as AI, which is enough to throw out quality writing in pursuit of detecting AI slop.
Taking a step back, that probably means AI detection work might need to go deeper than flagging particular phrasing and delve (see what I did there?) more into the substantive factual content of the work itself.
Although it dates back to the early days of the Marconi Company in the 1920s, the Franklin oscillator has remained a relatively obscure circuit, its memory mostly kept alive by ham radio operators who prize its high stability at higher frequencies. At the core of the circuit is an LC tank circuit, a fact which [nobcha] used to build quite a precise LC meter.
The meter is built around two parts: the Franklin oscillator, which resonates at a frequency defined by its inductance and capacitance, and an Arduino which counts the frequency of the signal. In operation, the Arduino measures the frequency of the original LC circuit, then measures again after another element (capacitor or inductor) has been added to the circuit. By measuring how much the resonant frequency changes, it’s possible to determine the value of the new element.
Before operation, the meter must be calibrated with a known reference capacitor to determine the values of the base LC circuit. In one iteration of the design, this was done automatically using a relay, while in a later version a manual switch connects the reference capacitor. Because the meter measures frequency differences and not absolute values, it minimizes parasitic effects. In testing, it was capable of measuring inductances as low as 0.1 µH.
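The write-up doesn’t publish the firmware’s exact math, but the frequency-shift method it describes follows directly from the resonance formula f = 1/(2π√(LC)). Here’s a minimal Python sketch of the calibration and measurement steps (function names and the parallel/series placement assumptions are illustrative, not from [nobcha]’s code):

```python
import math

def calibrate(f0, f1, c_ref):
    """Derive the unknown base L and C of the tank from two readings:
    f0 with the bare tank, f1 with a known reference capacitor c_ref
    switched in parallel. Since f = 1/(2*pi*sqrt(L*C)), we have
    (f0/f1)^2 = (C + c_ref)/C, which solves for C, then L."""
    c = c_ref / ((f0 / f1) ** 2 - 1)
    l = 1 / ((2 * math.pi * f0) ** 2 * c)
    return l, c

def unknown_capacitance(f0, f2, c):
    """Capacitor under test placed in parallel with the tank:
    the frequency drop reveals how much capacitance was added."""
    return c * ((f0 / f2) ** 2 - 1)

def unknown_inductance(f0, f2, l):
    """Inductor under test placed in series with the tank coil."""
    return l * ((f0 / f2) ** 2 - 1)
```

Because both formulas depend only on the ratio f0/f2, constant parasitics that are present in both measurements largely cancel out, which is the property the article credits for the meter’s precision.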
— Steven VanRoekel, a longtime former Microsoft leader and U.S. chief information officer under President Obama, is now CEO of Earth Species Project (ESP). The non-profit research lab is using artificial intelligence to better understand animal communication in creatures from carrion crows to beluga whales.
VanRoekel, who is based in Bend, Ore., said his career has focused on driving impact at scale, and that ESP is poised for big breakthroughs.
AI can “unlock the mysteries of our planet, especially around animal communication,” he said in an ESP blog. “Once we begin unlocking that mystery, we could see shifts on the scale of Copernican or Galilean moments in history: new science, new understanding, and perhaps most importantly, new relationships with our planet.”
Krzysztof Duleba. (LinkedIn Photo)
— Krzysztof Duleba joined LinkedIn’s Bellevue, Wash., office as a distinguished engineer in its infrastructure program. Duleba has spent his career at Google, working there for 18 years in roles across search, ads, maps, AI and cloud. In separate posts on LinkedIn, Duleba shared his career journey.
“Eighteen years ago, a kid from rural Poland walked into Google with no idea what he was getting into. He walked out a very different engineer, a father of three, and — he hopes — a better person,” Duleba wrote in announcing his Google departure.
And regarding his new role: “LinkedIn is in the middle of a major infrastructure transformation, and the timing matters. I consider getting reliability economics right during this window, before agentic development fully hits, the difference between drowning in the AI wave and catching it.”
Dennis Stansbury. (LinkedIn Photo)
— London-based Dennis Stansbury is resigning from Amazon after more than 18 years. He has held a variety of leadership roles in European offices, most recently serving as a principal product manager for Prime Video and Amazon MGM Studios in the United Kingdom.
“I started in Seattle in March 2008, shortly after Kindle launched but before Prime Video or Alexa were likely even ideas,” Stansbury said on LinkedIn, adding that he’s going “to take some time off and put more thought into what’s next.”
Miranda Chen. (LinkedIn Photo)
— After nearly 14 years at Amazon, Miranda Chen is leaving her role as a director and technical advisor for leaders in worldwide corporate and business development. Chen, who is based in the San Francisco Bay Area, did not indicate her next move.
“I first started working for Amazon at A9, a Bay Area subsidiary, where we could review the key metrics for our entire offsite advertising business in a single weekly meeting,” she said on LinkedIn. “Now we have Amazon offices worldwide and Amazon Ads is a meaningfully large business.”
— Scott Lawson, Amazon director of Global Real Estate and Facilities (GREF) design and construction, is leaving his role. Seattle-based Lawson has been with Amazon for nearly nine years. He was previously with Clark Construction Group working on developments nationwide. Lawson hinted on LinkedIn that information on his “next chapter” would be coming soon.
Danielle Decatur. (LinkedIn Photo)
— Danielle Decatur is vice president of community engagement and communications for Cloverleaf Infrastructure, a startup based in Seattle and Houston that’s coordinating between landowners and power providers to offer ready-to-build sites tailored for data centers.
“I’ll be dedicated to enabling data center infrastructure that works for and directly benefits communities,” Decatur said on LinkedIn. The sector is facing pushback over concerns about energy prices and environmental impacts of the facilities.
Decatur was previously at Microsoft for more than 14 years, working most recently as director of energy and sustainability. Cloverleaf co-founder Brian Janous is Microsoft’s former vice president of energy. Earlier in her career, Decatur served with the U.S. Air Force and with FEMA.
Bradford Snow. (LinkedIn Photo)
— Augmodo named Bradford Snow as chief technology officer. The Seattle startup is developing wearable tech for retail store employees and Snow will focus on Augmodo’s technical vision and innovation strategy.
Snow joined the company from Axon, which sells Taser devices and body cameras. His career also includes leadership roles at multiple tech giants, where he worked on AR and VR devices at Meta; Amazon’s Alexa AI and health and wellness wearables; and HoloLens initiatives at Microsoft.
Abhishek Mathur. (LinkedIn Photo)
— Abhishek Mathur is now chief technology and product officer for ServiceTitan, a California software giant building an agentic operating system to serve trades such as plumbing, electrical and roofing by automating workflows and supporting technicians in the field.
“This sector remains one of the largest untapped opportunities for technology to drive meaningful impact,” Mathur said on LinkedIn.
Mathur, who is based in the Seattle area, has held engineering leadership roles at Meta and was at Microsoft for more than 11 years. He was most recently at Figma as senior VP of engineering.
Anush Kumar. (LinkedIn Photo)
— Anush Kumar is now founder and CEO of Intelligent Systems, a Bellevue, Wash.-based startup that aims to “transform operational workflows” with AI tools.
“We’re on a mission to help enterprises stop piloting and start producing,” Kumar said in a LinkedIn post that includes links to five articles explaining the team’s approach.
Kumar was previously head of product for agentic automation at Atlassian. Other past roles include VP of technology at Expedia Group, senior VP of product at Zendesk, and director roles at Oracle and Avanade. His first tech role was lead product manager at Microsoft.
— Chris Cappello joined Provn as vice president of marketing. Cappello has worked in multiple marketing roles for companies including WE Communications, Marina Maher Communications and M-Squared. He and Provn CEO Nikesh Parekh both worked earlier in their careers at HouseValues, which rebranded as Market Leader.
Provn, a new Seattle startup, wants companies to scrap the traditional resume and replace it with portfolios of real work and challenge-based assessments.
— Fred Hutch Cancer Center appointed two new leaders. Dr. Mazyar Shadman and Vyshak Venur were named as deputy chief medical officers, effective April 1. Shadman will serve as deputy CMO for classical hematology, hematologic malignancies, transplant and immunotherapy, while Venur will serve as deputy CMO for solid tumor and acute care services.
And two Fred Hutch researchers received endowed chairs: Dr. Soheil Meshinchi, a global leader in treatments for acute myeloid leukemia, was awarded the Dylan Burke Endowed Chair in Immunotherapy; and Holly Harris received the inaugural Bus Family Endowed Chair in recognition for her work in prevention, early detection and precision oncology for uterine, ovarian and breast cancers.
— Seattle’s Marianne Bichsel, former VP of external affairs at Comcast, has launched Engaged Public Affairs, a PR and policy firm advising “leaders at the intersection of government, public trust, and corporate responsibility.” Bichsel’s co-founders are Julie Anderson, who has served in city and Washington state government, and Natasha Jones, a longtime leader in King County government.
— Theodora, a Seattle-area wine recommendation app, appointed Lindsey Singhavi as its founding marketing lead.
— In case you missed it, GeekWire took deeper dives into these recent notable tech moves (in no particular order, except maybe the first item):
Some FDM filaments are pretty brittle even if properly dried and stored, especially those which contain carbon fiber (CF) or similar additives like glass fiber (GF). This poses a problem in that these filaments can snap even within the PTFE tube as they’re being guided towards the extruder. Here a community theory is that having an actively heated chamber can help prevent this scenario, but is it actually true? [Dr. Igor Gaspar] of the My Tech Fun YouTube channel gave this myth a try to either confirm or bust it.
The comments suggested that heating the chamber to 65°C will help, but there’s little information online to support this claim. To test it, a heated chamber was used along with a bending rig to see at which angle the filament would snap. In total five different filaments from three manufacturers (Polymaker, Qidi and YXPolyer) were tested, including Qidi’s PET-GF and PAHT-GF as the sole non-CF (glass fiber) filaments.
A big question is how long exactly the filament will spend inside the heated chamber after making its way from the spool, which would be about 2.5 minutes with a 500 mm tube. For the test, five minutes was used to give the best possible result. Despite this, the results show that, even keeping the standard deviation in mind, heating actually seems to make the filaments more brittle.
Considering that in general CF seems to simply weaken the polymer matrix after printing, this finding adds to the question of whether these CF and GF-infused filaments make any sense at all.
Major wireless carriers: A necessary evil if you travel a lot, have a family, or are just interested in coverage that’s reliably consistent and widespread. AT&T is the third-largest provider in the US (first for 5G), with the largest coverage map. I’ve had various AT&T plans for more than a decade, first for just myself and now for my whole family, even though I only get one cell bar at my house and have to stand in one 5-square-foot patch of yard to make a phone call. And have lost entire days of my life to fighting unexpected random charges and upsells. (Verizon is somehow worse.) But anyway! AT&T is fine, it has all the latest phones, and there are some legitimately good perks, like no roaming in Canada or Mexico with select plans. If you know you’re going to have to go with one of the big guys, don’t sign up without checking out the below discounts first.
Save on AT&T Prepaid Phone Plans With the Latest Deals
An AT&T prepaid phone plan is one of the easiest ways to save big on your future phone bills. AT&T has a wide selection of prepaid phone plans, including 5G prepaid plans and multi-month long-term plans. For as low as $25 per month, you’ll get unlimited talk, text, and data. Plus, all AT&T prepaid plans include AT&T ActiveArmor mobile security, and are eligible for an eSIM or SIM card for as little as $0.99.
Get the new Samsung Galaxy S26 Ultra for $0
We on the WIRED Reviews team love the Samsung Galaxy S26 Ultra. We rated it a high 8/10 because of its built-in privacy display. We also loved the horizon lock to capture super steady video footage. Plus, it has excellent performance, great battery life, and a reliable quad-camera system. And right now, you can get a Samsung Galaxy S26 Ultra for free with an eligible trade-in in any condition (a Galaxy S24+, Z Fold5, or newer is required).
Save Over $600 a Year With AT&T Fiber
AT&T Fiber claims to be the fastest internet network in America. You can find out for yourself (for less) with this new deal. When first-time customers sign up for Fiber now, they’ll get 1-gigabit speeds for only $37 per month. That’s over $600 in savings per year!
Are There AT&T Promos for Existing Customers?
But I already have AT&T, you might be saying—new deals never apply to me. Do you have AT&T internet, though? If, like me, you have AT&T for your phone plan and Xfinity or CenturyLink for internet, did you know you can save 20% off your AT&T bill every single month if you bundle your internet service with unlimited wireless? This applies to both current phone customers and current internet customers who don’t have both plans.
AT&T wants to reward you for your loyalty: when you sign up for AT&T Fiber and eligible wireless plans, you can get up to $150 in AT&T Visa Reward Cards. Be sure to check out the AT&T deals page for more details on that offer, along with other great ways to save.
Not a new customer; not in a place to bundle; and not a teacher/first responder, in the military, or a student? All is not lost on the discount front. You can save over $800 a year on AT&T Wireless when you bundle four unlimited wireless plans with your current internet plan. (Savings based on 20% discount on four voice lines with eligible internet service, plus $10/month discount with eligible AutoPay & paperless bill, which starts within two bills.)
Don’t need four unlimited wireless plans? Check out if your employer offers a discount—a couple of mine have in the past, and you can save $10 per month per line on the unlimited plan. Check here to see if your workplace qualifies.
You can also get a discount on a new phone with an eligible trade-in, but the best deal yet may be the fact AT&T lets you try its wireless free for 30 days. Keep your current service and phone number while trying out AT&T’s network from your device—no catches or commitments. You don’t even need a credit card. It’s a great way to see if you get good service where you’ll be using the phone most.
Save More With AT&T Family Plans
If you want multiple people on your phone line or are adding a line for your child’s first phone, an AT&T Family Plan is one of the most cost effective ways to make the change. With AT&T family plans, you can mix and match any of AT&T’s unlimited plans to get great deals and serious discounts on any smartphone for each family member. Depending on what you choose, plans start at only $36 per month, per line (for 4 lines).
Choose AT&T for the Best Internet for Gaming
If you’re a big gamer, you’re going to want fast, reliable internet that’ll provide clear, bright graphics and responsive audio without laggy gameplay. AT&T Fiber with All-Fi promises to give gamers everything they look for in a service, with super fast speeds (up to 5 GIG) and tons of bandwidth for fast uploads and downloads.
A lawsuit from music streaming app Musi claimed Apple had removed its app over unsubstantiated copyright complaints, but the suit has now been dismissed with prejudice.
Musi loses its lawsuit over App Store removal
Apps are removed from the App Store for many reasons, some less clear than others. However, a judge just ruled that Apple can remove an app from the App Store “with or without cause.” It’s a significant win for Apple that sets a precedent for future potential lawsuits. US District Judge Eumi Lee didn’t just rule in Apple’s favor; she tore Musi’s case apart on multiple levels.
An anonymous reader quotes a report from the Wall Street Journal: A battle of insults and threats has broken out between the tech world and Wall Street. What’s got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy — and way cheaper — alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now “Bloomberg is cooked,” some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. […]
The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is “laughable,” said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal). “It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution,” he wrote. […] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it’s rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay “a really good foundation for a financial application. And that really has not been possible before.”
Others aren’t so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic’s Claude. “It was laughable at best, horrific at worst,” he said. Perplexity chief business officer Dmitry Shevelenko acknowledged there are some aspects of the terminal that can’t be replicated with vibe coding, including some of Bloomberg’s proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as would the terminal’s data security, reliability, and robust support system. “I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy,” said Lemire. His message to the techies? “There’s nothing that you can vibe code in a weekend or even like over the course of a year that’s going to come anywhere close.”
The generative AI era began for most people with the launch of OpenAI’s ChatGPT in late 2022, but the underlying technology — the “Transformer” neural network architecture that allows AI models to weigh the importance of different words in a sentence (or pixels in an image) differently and train on information in parallel — dates back to Google’s seminal 2017 paper “Attention Is All You Need.”
Yet while Transformers deliver unparalleled model quality and have underpinned most of the major generative AI models used today, they are computationally gluttonous, burdened by quadratic compute and linear memory demands that make large-scale inference an expensive, often prohibitive, endeavor. Hence the desire by some researchers to improve on them, which led to a new architecture, Mamba, in 2023; it has since been included in hybrid Mamba-Transformer models like Nvidia’s Nemotron 3 Super.
Now, the same researchers behind the original Mamba architecture, including leaders Albert Gu of Carnegie Mellon and Tri Dao of Princeton, have released the latest version of their architecture, Mamba-3, as a language model under a permissive Apache 2.0 open source license, making it immediately available to developers, including enterprises for commercial purposes. A technical paper has also been published on arXiv.org.
This model signals a paradigm shift from training efficiency to an “inference-first” design. As Gu noted in the official announcement, while Mamba-2 focused on breaking pretraining bottlenecks, Mamba-3 aims to solve the “cold GPU” problem: the reality that during decoding, modern hardware often remains idle, waiting for memory movement rather than performing computation.
Perplexity (no, not the company) and the newfound efficiency of Mamba 3
Mamba, including Mamba 3, is a type of State Space Model (SSM).
An SSM is effectively a high-speed “summary machine” for AI. While many popular models (like the ones behind ChatGPT) have to re-examine every single word they’ve already seen to understand what comes next—which gets slower and more expensive the longer the conversation lasts—an SSM maintains a compact, ever-changing internal state. This state is essentially a digital “mental snapshot” of the entire history of the data.
As new information flows in, the model simply updates this snapshot instead of re-reading everything from the beginning. This allows the AI to process massive amounts of information, like entire libraries of books or long strands of DNA, with incredible speed and much lower memory requirements.
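The “snapshot” update described above can be sketched in a few lines. This is a generic diagonal linear SSM recurrence for illustration only, not Mamba-3’s actual kernels; the dimensions and constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, seq_len = 16, 1000

# Fixed per-channel dynamics (a diagonal A keeps each update O(d_state))
A = 0.95 * np.ones(d_state)          # decay: how fast old context fades
B = rng.standard_normal(d_state)     # how each input perturbs the state
C = rng.standard_normal(d_state)     # how the state is read out

h = np.zeros(d_state)                # the "mental snapshot" of all history
outputs = []
for t in range(seq_len):
    x_t = rng.standard_normal()      # next token's (scalar) feature
    h = A * h + B * x_t              # update the snapshot in place
    outputs.append(C @ h)            # read out a prediction

# Memory stays at d_state floats no matter how long the sequence runs,
# unlike a Transformer's KV cache, which grows linearly with seq_len.
```

The per-step cost and memory here are constant in sequence length, which is exactly the property the article contrasts with the Transformer’s re-read-everything approach.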
To appreciate the leap Mamba-3 represents, one must first understand perplexity, the primary metric used in the research to measure model quality.
In the context of language modeling, perplexity is a measure of how “surprised” a model is by new data.
Think of a model as a professional gambler. If a model has high perplexity, it is unsure where to place its bets; it sees many possible next words as equally likely.
A lower perplexity score indicates that the model is more “certain”—it has a better grasp of the underlying patterns of human language. For AI builders, perplexity serves as a high-fidelity proxy for intelligence.
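Concretely, perplexity is just the exponential of the average negative log-probability the model assigned to the tokens that actually occurred. A minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability the
    model assigned to each token that actually occurred."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that always gives the true next token probability 1/2 is, on
# average, hedging between 2 equally likely options:
print(perplexity([0.5, 0.5, 0.5]))   # 2.0

# A sharper model that assigns 0.9 to each true token is less surprised:
print(perplexity([0.9, 0.9, 0.9]))   # ~1.11
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were choosing uniformly among N options, which is why lower is better.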
The breakthrough reported in the Mamba-3 research is that it achieves comparable perplexity to its predecessor, Mamba-2, while using only half the state size. This means a model can be just as smart while being twice as efficient to run.
A new philosophy
Mamba 3 architecture diagram. Credit: Tri Dao
The philosophy guiding Mamba-3 is a fundamental shift in how we think about AI “intelligence” versus the speed of the hardware it runs on. While the previous generation, Mamba-2, was designed to be trained at record-breaking speeds, Mamba-3 is an “inference-first” architecture — inference referring to the way AI models are served to end users, through websites like ChatGPT or Google Gemini, or through application programming interfaces (APIs).
Mamba 3’s primary goal is to maximize every second the computer chip (GPU) is active, ensuring that the model is thinking as hard as possible without making the user wait for an answer.
In the world of language models, every point of accuracy is hard-won. At the 1.5-billion-parameter scale, the most advanced “MIMO” variant of Mamba-3 achieved a 57.6% average accuracy across benchmarks, representing a 2.2-percentage-point leap over the industry-standard Transformer.
Mamba 3 benchmark comparison chart. Credit: Aakash Lahoti, Kevin Y. Li, Berlin Chen, Caitlin Wang, Aviv Bick, J. Zico Kolter, Tri Dao, Albert Gu
While a two-point jump might sound modest, it actually represents a nearly 4% relative increase in language modeling capability compared to the Transformer baseline. Even more impressively, as alluded to above, Mamba-3 can match the predictive quality of its predecessor while using only half the internal “state size,” effectively delivering the same level of intelligence with significantly less memory lag.
For years, efficient alternatives to Transformers suffered from a “logic gap”—they often failed at simple reasoning tasks, like keeping track of patterns or solving basic arithmetic, because their internal math was too rigid. Mamba-3 solves this by introducing complex-valued states.
This mathematical upgrade acts like an internal compass, allowing the model to represent “rotational” logic. By using this “rotary” approach, Mamba-3 can near-perfectly solve logic puzzles and state-tracking tasks that its predecessors could only guess at, finally bringing the reasoning power of linear models on par with the most advanced systems.
The final piece of the puzzle is how Mamba-3 interacts with physical hardware. Most AI models today are “memory-bound,” meaning the computer chip spends most of its time idle, waiting for data to move from memory to the processor.
Mamba-3 introduces a Multi-Input, Multi-Output (MIMO) formulation that fundamentally changes this dynamic. By performing up to four times more mathematical operations in parallel during each step, Mamba-3 utilizes that previously “idle” power. This allows the model to do significantly more “thinking” for every word it generates without increasing the actual time a user spends waiting for a response. More on these below.
Three new technological leaps
The appeal of linear models has always been their constant memory requirements and linear compute scaling.
However, as the Mamba 3 authors point out, there is “no free lunch”. By fixing the state size to ensure efficiency, these models are forced to compress all historical context into a single representation—the exact opposite of a Transformer’s ever-growing KV cache. Mamba-3 pulls three specific levers to make that fixed state do more work.
1. Exponential-Trapezoidal Discretization
State Space Models are fundamentally continuous-time systems that must be “discretized” to handle the discrete sequences of digital data.
Previous iterations relied on “Exponential-Euler” discretization—a heuristic that provided only a first-order approximation of the system.
Mamba-3 introduces a generalized trapezoidal rule, providing second-order accurate approximation. This isn’t just a mathematical refinement; it induces an “implicit convolution” within the core recurrence.
By combining this with explicit B and C bias terms, the researchers were able to remove the short causal convolution that has been a staple of recurrent architectures for years.
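The order-of-accuracy difference between the two rules is easy to see on a scalar system dh/dt = a·h, where the exact one-step propagator is e^(a·Δt). This toy comparison only illustrates the first-order-versus-second-order claim; it is not the paper’s derivation:

```python
import math

a = -1.0  # stable scalar dynamics: dh/dt = a*h

for dt in (0.1, 0.05):
    exact = math.exp(a * dt)                      # true one-step propagator
    euler = 1 + a * dt                            # first-order (Euler-style)
    trap = (1 + a * dt / 2) / (1 - a * dt / 2)    # second-order (trapezoidal)
    print(dt, abs(euler - exact), abs(trap - exact))

# Halving dt cuts the Euler error ~4x (local error O(dt^2)) but the
# trapezoidal error ~8x (O(dt^3)): one extra order of accuracy per step.
```

A more accurate propagator per step means the discrete recurrence tracks the underlying continuous system more faithfully at the same step size, which is the mechanism behind the “second-order accurate” claim.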
2. Complex-Valued SSMs and the “RoPE Trick”
One of the most persistent criticisms of linear models has been their inability to solve simple state-tracking tasks, such as determining the parity of a bit sequence.
This failure stems from restricting the transition matrix to real numbers, which prevents the model from representing “rotational” dynamics. Mamba-3 overcomes this by viewing the underlying SSM as complex-valued.
Using what the team calls the “RoPE trick,” they demonstrate that a complex-valued state update is mathematically equivalent to a data-dependent rotary embedding (RoPE) applied to the input and output projections.
This allows Mamba-3 to solve synthetic reasoning tasks that were impossible for Mamba-2.
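The parity example can be illustrated with a single complex state: rotating by π for every “1” bit tracks the running parity exactly. This toy is meant only to show why rotational (complex) dynamics matter; Mamba-3’s actual data-dependent rotary update is considerably more involved:

```python
import cmath

def parity_via_rotation(bits):
    """Track parity of a bit stream with one complex state. A '1'
    rotates the state by pi (multiply by e^{i*pi} = -1); a '0' leaves
    it alone. The state's sign then encodes the running parity —
    exactly the rotational dynamic that a transition matrix restricted
    to non-negative reals cannot express."""
    z = 1 + 0j
    for b in bits:
        z *= cmath.exp(1j * cmath.pi) if b else 1
    return 0 if z.real > 0 else 1

print(parity_via_rotation([1, 0, 1, 1]))  # 1 (three ones: odd parity)
print(parity_via_rotation([1, 1]))        # 0 (even parity)
```

A real-valued decay can only shrink or grow the state monotonically, so no amount of training recovers this behavior; a rotation makes it a two-line solution.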
3. MIMO: Boosting Arithmetic Intensity
The most significant leap in inference efficiency comes from the transition from Single-Input, Single-Output (SISO) to Multi-Input, Multi-Output (MIMO) SSMs.
In a standard SSM, the state update is an outer-product operation that is heavily memory-bound. By switching to a matrix-multiplication-based state update, Mamba-3 increases the “arithmetic intensity” of the model—the ratio of FLOPs to memory traffic.
This allows the model to perform more computation during the memory-bound decoding phase. Essentially, Mamba-3 utilizes the “idle” compute cores of the GPU to increase model power for “free,” maintaining the same decoding speed as its simpler predecessors.
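The arithmetic-intensity argument can be made concrete with a back-of-the-envelope count of FLOPs versus state traffic. The dimensions and fp16 assumption below are illustrative, not the paper’s exact kernel accounting:

```python
def arithmetic_intensity(n, d, r, bytes_per_el=2):
    """FLOPs per byte of state traffic for one decoding step of an SSM
    whose state is an n x d matrix (fp16 here). r = 1 is the classic
    SISO outer-product update; r > 1 is MIMO, which replaces it with an
    (n x r) @ (r x d) matmul over the same state."""
    flops = 2 * n * r * d                  # multiply-accumulate count
    traffic = 2 * n * d * bytes_per_el     # read + write the state once
    return flops / traffic

print(arithmetic_intensity(128, 64, r=1))  # 0.5 FLOPs/byte: memory-bound
print(arithmetic_intensity(128, 64, r=4))  # 2.0: 4x more math, same traffic
```

Because the memory traffic term is unchanged while the FLOP count scales with r, the extra computation rides along on bandwidth the chip was already spending, which is the sense in which the added model power is “free.”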
What Mamba 3 means for enterprises and AI builders
For enterprises, Mamba-3 represents a strategic shift in the total cost of ownership (TCO) for AI deployments.
Cost vs. Performance: At matched parameter counts, Mamba-3 (MIMO) matches the perplexity of Mamba-2 while using half the state size. For enterprise deployment, this effectively doubles the inference throughput for the same hardware footprint.
Agentic Workflows: As organizations move toward parallel, agentic workflows (like automated coding or real-time customer service agents), the demand for low-latency generation increases exponentially. Mamba-3 is designed specifically to prevent GPU hardware from sitting “cold” during these tasks.
The Hybrid Advantage: The researchers predict that the future of enterprise AI lies in hybrid models. By interleaving Mamba-3 with self-attention, organizations can combine the efficient “memory” of SSMs with the precise “database” storage of Transformers.
Availability, licensing, and usage
Mamba-3 is not merely a theoretical research paper; it is a fully realized, open-source release available for immediate use, with model code published on GitHub.
The project is released under the Apache-2.0 License. This is a permissive, business-friendly license that allows for free usage, modification, and commercial distribution without requiring the disclosure of proprietary source code.
This release is good for developers building long-context applications, real-time reasoning agents, or those seeking to reduce GPU costs in high-volume production environments.
Leading the State Space Models (SSM) revolution
The release was met with enthusiasm on social media, particularly regarding the “student-led” nature of the project. Gu, whose X/Twitter bio describes him as “leading the ssm revolution,” gave full credit to the student leads, including Aakash Lahoti and Kevin Y. Li, writing:
“We’re quite happy with the final model design! The three core methodological changes are inspired by (imo) some elegant math and methods.”
As agentic workflows push inference demand “through the roof,” the arrival of Mamba-3 suggests that the future of AI may not just be about having the biggest model, but about having the most efficient one.
Mamba-3 has successfully re-aligned the SSM with the realities of modern hardware, proving that even in the age of the Transformer, the principles of classical control theory still have a vital role to play.
Mark of I Make Games chose to rebuild Diablo 2 from the ground up in Unreal Engine 5, but with one major difference: the entire game is played in first person. A clean heads-up display sits at the bottom of the screen, displaying your current location, an experience bar that ticks upward as you fight monsters, skill slots, glowing potion icons, and a stamina meter that drains anytime you push yourself too far.
Mark has been adding spells to the mix as well, with Fireball letting you watch the projectile arc through the air and detonate on impact, and Teleport doing exactly what it sounds like, making your character vanish and reappear somewhere else in the blink of an eye.
There’s also sliding, which allows you to glide down slopes or across slippery floors to maintain speed, because you never know when you’ll need to escape quickly. Climbing allows you to scale narrow ledges or sneak into concealed routes, which is ideal for continued exploration. Meanwhile, dismemberment is already at work on the evil guys, so when you smack them hard enough, their pitiful limbs just fly off.
Teleport, of course, allows you to simply walk through walls for a variety of nefarious purposes, and then there’s Whirlwind, the hapless barbarian spinning around in circles with blades out, mowing down all comers. Lightning is the other new ability, which fires bolts back and forth between targets with impressive visual effects to keep you on your toes. Both were slightly tweaked to ensure proper timing. Mark does use a few pre-made character meshes to save time, but for everything else, he browses the Unreal Marketplace like a kid in a candy store.
During testing, you can switch the camera to third person for a brief look, but Mark prefers to keep the focus on the first-person experience. Visual effects will have to wait until things are a little more established. For now, Mark is showing off new regions and powers on his channel one at a time, a gradual but steady trickle of progress, and his followers are already getting antsy; who knows, maybe one day they’ll get to check it out for themselves. [Source]
Horizon Worlds, Meta’s first pass at a metaverse, will be inaccessible via virtual reality headset after June 15, 2026. The company shared plans to separate Horizon Worlds from the Quest VR platform and focus exclusively on the smartphone version of the app in February, and now in a new post on its community forums, Meta has detailed when the VR version of Horizon Worlds will be deprecated.
By March 31, Meta says, individual Horizon Worlds and Events will no longer be listed in the Quest Store, and headset owners will be unable to visit worlds like “Horizon Central, Events Arena, Kaiju and Bobber Bay.” Then, after June 15, the app will be removed from Quest headsets and worlds will be completely unavailable to visit in VR. From that point on, the easiest place to visit Horizon Worlds will be the Meta Horizon app for iOS and Android.
Additionally, Hyperscape Capture, a recently added beta feature that allows Quest headset owners to capture, share and visit each other in detailed 3D scans of real-life locations, is also being removed from Horizon Worlds. Meta says users will still be able to capture and view Hyperscapes, “but sharing, inviting, and co-experiencing Hyperscapes with others will no longer be supported.”
While Meta’s original blog detailing its 2026 VR strategy left open the possibility that a committed Quest owner might still be able to access some part of Meta’s original VR metaverse, that apparently was never the company’s plan. Meta saw enough “positive momentum” focusing on supporting the mobile version of Horizon Worlds in 2025 that it made sense to completely abandon the VR one in 2026. While that seems to run contrary to Meta’s positioning as a “metaverse company,” it does reflect where the company is spending the most money and seeing the most (relative) success: AI and smart glasses.
An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.
Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.
But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.” Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam,” uses that are destroying discoverability on the internet and causing cascading societal and economic harms. “Anthropic’s research continues a time-honored tradition by AI companies who want to highlight the ‘good’ uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for,” argues Koebler. “Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth…”
“This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media,” writes Koebler, in closing. “We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.”