
Tech

Is Anthropic ‘nerfing’ Claude? Users increasingly report performance degradation as leaders push back


A growing number of developers and AI power users are taking to social media to accuse Anthropic of degrading the performance of Claude Opus 4.6 and Claude Code — intentionally or as an outcome of compute limits — arguing that the company’s flagship coding model feels less capable, less reliable and more wasteful with tokens than it did just weeks ago.

The complaints have spread quickly on GitHub, X and Reddit over the past several weeks, with several high-reach posts alleging that Claude has become worse at sustained reasoning, more likely to abandon tasks midway through, and more prone to hallucinations or contradictions.

Some users have framed the issue as “AI shrinkflation” — the idea that customers are paying the same price for a weaker product.

Others have gone further, suggesting Anthropic may be throttling or otherwise tuning Claude downward during periods of heavy demand.


Those claims remain unproven, and Anthropic employees have publicly denied that the company degrades models to manage capacity. At the same time, Anthropic has acknowledged real changes to usage limits and reasoning defaults in recent weeks, which has made the broader debate more combustible.

VentureBeat has reached out to Anthropic for further clarification on the recent accusations, including whether any recent changes to reasoning defaults, context handling, throttling behavior, inference parameters or benchmark methodology could help explain the spike in complaints.

We have also asked how Anthropic explains the recent benchmark-related claims and whether it plans to publish additional data that could reassure customers. An Anthropic spokesperson did not address the questions individually, instead referring us to X posts by Claude Code creator Boris Cherny and Claude Code team member Thariq Shihipar regarding Opus 4.6 performance and usage limits, respectively. Both X posts are also referenced and linked below.

Viral user complaints, including from an AMD Senior Director, argue Claude has become less capable

One of the most detailed public complaints originated as a GitHub issue filed on April 2, 2026 by Stella Laurenzo, whose LinkedIn profile identifies her as a Senior Director in AMD’s AI group.


In that post, Laurenzo wrote that Claude Code had regressed to the point that it could not be trusted for complex engineering work, then backed that claim with a sprawling analysis of 6,852 Claude Code session files, 17,871 thinking blocks and 234,760 tool calls.

The complaint argued that, starting in February, Claude’s estimated reasoning depth fell sharply while signs of poorer performance rose alongside it, including more premature stopping, more “simplest fix” behavior, more reasoning loops, and a measurable shift from research-first behavior to edit-first behavior.
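Laurenzo’s methodology, as described, amounts to aggregating statistics over thousands of local session logs and watching how they move over time. A minimal sketch of that kind of analysis might look like the following; the JSONL layout, the `"thinking"` record type and the field names here are assumptions for illustration, not Anthropic’s documented session format.

```python
import json
import statistics
from collections import defaultdict
from datetime import datetime
from pathlib import Path


def weekly_median_thinking_tokens(log_dir: str) -> dict:
    """Group per-record 'thinking' token counts by ISO week and
    return the median for each week, in chronological order.

    Assumes one JSON object per line, with hypothetical fields:
      type: record kind (we keep only "thinking" records)
      timestamp: ISO-8601 string
      tokens: token count for the record
    """
    buckets = defaultdict(list)
    for path in Path(log_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            rec = json.loads(line)
            if rec.get("type") != "thinking":  # assumed record type
                continue
            week = datetime.fromisoformat(rec["timestamp"]).strftime("%Y-W%W")
            buckets[week].append(rec["tokens"])  # assumed field name
    return {week: statistics.median(vals) for week, vals in sorted(buckets.items())}
```

A sharp week-over-week drop in a metric like this is the shape of evidence the GitHub issue presented; whether the drop reflects the model or the harness is exactly what the two sides dispute.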

The post’s broader point was that for advanced engineering workflows, extended reasoning is not a luxury but part of what makes the model usable in the first place.

That GitHub thread then escaped into the broader social media conversation: X users including @Hesamation posted screenshots of Laurenzo’s GitHub post on April 11, turning it into an even more viral talking point.


That amplification mattered because it gave the wider “Claude is getting worse” narrative something more concrete than anecdotal frustration: a long, data-heavy post from a senior AI leader at a major chip company arguing that the regression was visible in logs, tool-use patterns and user corrections, not just gut feeling.

Anthropic’s public response focused on separating perceived changes from actual model degradation. In a pinned follow-up on the same GitHub issue posted a week ago, Claude Code lead Boris Cherny thanked Laurenzo for the care and depth of the analysis but disputed its main conclusion.

Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood.

He also said two other product changes likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort, or effort level 85, as the default for Opus 4.6, which he said Anthropic viewed as the best balance across intelligence, latency and cost for most users.


Cherny added that users who want more extended reasoning can manually switch effort higher by typing /effort high in Claude Code terminal sessions.

That exchange gets at the core of the controversy. Critics like Laurenzo argue that Claude’s behavior in demanding coding workflows has plainly worsened and point to logs and usage patterns as evidence.

Anthropic, by contrast, is not saying nothing changed. It is saying the biggest recent changes were product and interface choices that affect what users see and how much effort the system expends by default, not a secret downgrade of the underlying model. That distinction may be technically important, but for power users who feel the product is delivering worse results, it is not necessarily a satisfying one.

External coverage from TechRadar and PC Gamer further amplified Laurenzo’s post and the larger wave of agreement from some power users.


Another viral post on X from developer Om Patel on April 7 made the same argument in even more direct terms, claiming that someone had “actually measured” how much “dumber” Claude had gotten and summarizing the result as a 67% drop.

That post helped popularize the “AI shrinkflation” label and pushed the controversy beyond hard-core Claude Code users into the broader AI discourse on X.

These claims have resonated because they map closely onto what many frustrated users say they are seeing in practice: more unfinished tasks, more backtracking, more token burn and a stronger sense that Claude is less willing to reason deeply through complicated coding jobs than it was earlier this year.

Benchmark posts turned anecdotal frustration into a public controversy

The loudest benchmark-based claim came from BridgeMind, which runs the BridgeBench hallucination benchmark. On April 12, the account posted that Claude Opus 4.6 had fallen from 83.3% accuracy and a No. 2 ranking in an earlier result to 68.3% accuracy and No. 10 in a new retest, calling that proof that “Claude Opus 4.6 is nerfed.”


That post spread widely and became one of the main anchors for the broader public case that Anthropic had degraded the model.

Other users also circulated benchmark-related or test-based posts suggesting that Opus 4.6 was underperforming versus Opus 4.5 in practical coding tasks.

Still other posts pointed to TerminalBench-related results as supposed evidence that the model’s behavior had changed in certain harnesses or product contexts.

The effect was cumulative: benchmark screenshots, side-by-side tests and anecdotal frustration all began reinforcing one another in public.


That matters because benchmark claims tend to travel farther than more subjective complaints. A developer saying a model “feels worse” is one thing. A screenshot showing a ranking drop from No. 2 to No. 10, or a dramatic percentage swing in accuracy, gives the appearance of hard proof, even when the underlying comparison may be more complicated.

Critics of the benchmark claims say the evidence is weaker than it looks

The most important rebuttal to the BridgeBench claim did not come from Anthropic. It came from Paul Calcraft, an outside software and AI researcher on X, who argued that the viral comparison was misleading because the earlier Opus 4.6 result was based on only six tasks while the later one was based on 30.

In his words, it was a “DIFFERENT BENCHMARK.” He also said that on the six tasks the two runs shared in common, Claude’s score moved only modestly, from 87.6% previously to 85.4% in the later run, and that the bigger swing appeared to come mostly from a single fabrication result without repeats. He characterized that as something that could easily fall within ordinary statistical noise.
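Calcraft’s statistical-noise point is easy to verify with a back-of-the-envelope calculation. The sketch below uses the standard error of a proportion, sqrt(p(1-p)/n); it is illustrative only, and assumes each benchmark task is scored pass/fail, which may not match BridgeBench’s actual scoring.

```python
import math


def stderr(p: float, n: int) -> float:
    """One standard error of an observed accuracy p over n pass/fail tasks."""
    return math.sqrt(p * (1 - p) / n)


# At 85% accuracy, a 6-task run has a standard error of roughly 15
# percentage points; a 30-task run, roughly 6.5. Swings of several
# points between such runs fit comfortably inside ordinary noise.
for n in (6, 30, 300):
    se = stderr(0.85, n)
    print(f"n={n}: accuracy 85% +/- {100 * se:.1f} pts (one standard error)")
```

On this rough model, the 87.6%-to-85.4% move on the six shared tasks is far smaller than one standard error, which is the substance of the “different benchmark, ordinary noise” rebuttal.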

That outside rebuttal matters because it undercuts one of the cleanest and most viral claims in circulation. It does not prove users are wrong to think something has changed. But it does suggest that at least some of the benchmark evidence now driving the story may be overstated, poorly normalized or not directly comparable.


Even the BridgeBench post itself drew a community note to similar effect. The note said the two benchmark runs covered different scopes — six tasks in one case and 30 in the other — and that the common-task subset showed only a minor change. That does not make the later result meaningless, but it weakens the strongest version of the “BridgeBench proved it” argument.

This is now a key feature of the controversy: the claims are not all equally strong. Some are grounded in first-hand user experience. Some point to real product changes. Some rely on benchmark comparisons that may not be apples-to-apples. And some depend on inferences about hidden system behavior that users outside Anthropic cannot directly verify.

Earlier capacity limits gave users a reason to suspect more changes under the hood

The current backlash also lands in the shadow of a real, confirmed Anthropic policy change from late March. On March 26, Anthropic technical staffer Thariq Shihipar posted that, “To manage growing demand for Claude,” the company was adjusting how 5-hour session limits work for Free, Pro and Max subscribers during peak hours, while keeping weekly limits unchanged.

He added that during weekdays from 5 a.m. to 11 a.m. Pacific time, users would move through their 5-hour session limits faster than before. In follow-up posts, he said Anthropic had landed efficiency wins to offset some of the impact, but that roughly 7% of users would hit session limits they would not have hit before, particularly on Pro tiers.


In an email on March 27, 2026, Anthropic told VentureBeat that Team and Enterprise customers were not affected by those changes, and that the shift was not dynamically optimized per user but instead applied to the peak-hour window the company had publicly described. Anthropic also said it was continuing to invest in scaling capacity.

Those comments were about session limits, not model downgrades. But they are important context, because they establish two things that users now keep connecting in public: first, Anthropic has been dealing with surging demand; second, it has already changed how usage is rationed during busy periods. That does not prove Anthropic reduced model quality. It does help explain why so many users are primed to believe something else may also have changed.

Prompt caching and TTL

A separate, more recent GitHub issue broadens the dispute beyond model quality and into pricing and quota behavior. In issue #46829, user seanGSISG argued that Claude Code’s prompt-cache time-to-live, or TTL, appeared to shift from a one-hour setting back to a five-minute setting in early March, based on analysis of nearly 120,000 API calls drawn from Claude Code session logs across two machines.

The complaint argues that this change drove meaningful increases in cache-creation costs and quota burn, especially for long-running coding sessions where cached context expires quickly and must be rebuilt. The author claims that this helps explain why some subscription users began hitting usage limits they had not previously encountered.


What makes this issue notable is that Anthropic did not flatly deny that something changed. In a reply on the thread, Jarred Sumner said the March 6 change was real and intentional, but rejected the framing that it was a regression. He said Claude Code uses different cache durations for different request types, and that one-hour cache is not always cheaper because one-hour writes cost more up front and only save money when the same cached context is reused enough times to justify it.
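Sumner’s reuse argument can be made concrete with a quick cost sketch. The multipliers below (1.25x the base input-token price for a five-minute cache write, 2x for a one-hour write, 0.1x for a cache read) reflect Anthropic’s published prompt-caching price ratios, but treat them as assumptions here; the break-even logic, not the exact numbers, is the point.

```python
# Relative to the base input-token price:
FIVE_MIN_WRITE = 1.25  # 5-minute cache write premium
ONE_HOUR_WRITE = 2.0   # 1-hour cache write premium
CACHE_READ = 0.1       # cost of reading already-cached tokens


def cost_5m(turns: int) -> float:
    """Worst case for the 5-minute TTL: gaps between turns exceed
    5 minutes, so the cache expires and every turn pays the write premium."""
    return turns * FIVE_MIN_WRITE


def cost_1h(turns: int) -> float:
    """1-hour TTL: one up-front write, then cheap reads on later turns."""
    return ONE_HOUR_WRITE + (turns - 1) * CACHE_READ


for n in (1, 2, 5, 10):
    cheaper = "1h" if cost_1h(n) < cost_5m(n) else "5m"
    print(f"{n} turns: 5m={cost_5m(n):.2f} 1h={cost_1h(n):.2f} -> {cheaper} cheaper")
```

On these numbers, a single-turn session is cheaper with the five-minute TTL, but the one-hour write pays for itself by the second reuse when turns are spaced more than five minutes apart, which is why “which TTL is cheaper” genuinely depends on session shape, as Sumner argued.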

In his telling, the change was part of ongoing cache optimization work, not a silent downgrade, and the pre–March 6 behavior described in the issue “wasn’t the intended steady state.”

The thread later drew a more detailed response from Anthropic’s Cherny, who described one-hour caching as “nuanced” and said the company has been testing heuristics to improve cache hit rates, token usage and latency for subscribers. Cherny said Anthropic keeps five-minute cache for many queries, including subagents that are rarely resumed, and said turning off telemetry also disables experiment gates, which can cause Claude Code to fall back to a five-minute default in some cases.

He added that Anthropic plans to expose environment variables that let users force one-hour or five-minute cache behavior directly. Together, those replies do not validate the issue author’s claim that Anthropic silently made Claude Code more expensive overall, but they do confirm that Anthropic has been actively experimenting with cache behavior behind the scenes during the same period users began complaining more loudly about quota burn and changing product behavior.


Anthropic says user-facing changes, not secret degradation, explain much of the uproar

Anthropic-affiliated employees have publicly pushed back on the broadest accusations. In one widely circulated reply on X, Cherny responded to claims that Anthropic had secretly nerfed Claude Code by writing, “This is false.”

He said Claude Code had been defaulted to medium effort in response to user feedback that Claude was consuming too many tokens, and that the change had been disclosed both in the changelog and in a dialog shown to users when they opened Claude Code.

That response is notable because it concedes a meaningful product change while rejecting the more conspiratorial interpretation of it. Anthropic is not saying nothing changed. It is saying that what changed was disclosed and was aimed at balancing token use, not secretly reducing model quality.

Public documentation also supports the fact that effort defaults have been in motion. Claude Code’s changelog says that on April 7, Anthropic changed the default effort level from medium to high for API-key users as well as Bedrock, Vertex, Foundry, Team and Enterprise users.


That suggests Anthropic has actively been tuning these settings across different segments, which could plausibly affect user perceptions even if the core model weights are unchanged.

Shihipar has also directly denied the broader demand-management accusation. In a reply on X posted April 11, he said Anthropic does not “degrade” its models to better serve demand. He also said that changes to thinking summaries affected how some users were measuring Claude’s “thinking,” and that the company had not found evidence backing the strongest qualitative claims now spreading online.

The real issue may be trust as much as model quality

What is clear is that a trust gap has opened between Anthropic and some of its most demanding users.

For developers who rely on Claude Code all day, subtle shifts in visible thinking output, effort defaults, token burn, latency tradeoffs or usage caps can feel indistinguishable from a weaker model.


That is true whether the root cause is a product setting, a UI change, an inference-policy tweak, capacity pressure or a genuine quality regression.

It also means both sides of the fight may be talking past each other. Users are describing what they experience: more friction, more failures and less confidence. Anthropic is responding in product terms: effort defaults, hidden thinking summaries, changelog disclosures, and denials that demand pressure is causing secret model degradation.

Those are not necessarily incompatible descriptions. A model can feel worse to users even if the company believes it has not “nerfed” the underlying model in the way critics allege. But coming at a time when Anthropic’s chief rival OpenAI has recently pivoted and put more resources behind Codex, its competing, enterprise- and vibe-coding-focused product, even offering a new, more mid-range ChatGPT subscription in an effort to boost usage of the tool, it is certainly not the kind of publicity that stands to benefit Anthropic or its customer retention.

At the same time, the public evidence remains mixed. Some of the most viral claims have come from developers with detailed logs and strong opinions based on repeated use. Some of the benchmark evidence has been challenged by outside observers on methodological grounds. And Anthropic’s own recent changes to limits and settings ensure that this debate is happening against a backdrop of real adjustments, not pure rumor.


Emergency Bolt-Action Launcher For EpiPens


Imagine you and your friend are enjoying a nice sunny day, and BAM — they start to have a severe allergic reaction to who knows what. You have an EpiPen, but your friend is on the other side of a field! The solution? Obviously [Emily The Engineer] has only one option: build an entire EpiPen launcher!

Starting off the life-saving project, [Emily] prototyped with a 3D printed blank and a simple solenoid-controlled glorified potato cannon. This proved effective, as one would expect of such a project after successful tests on a human subject. However, there was one simple problem: what if you missed your initial shot?

To guard against a failed first shot, a bolt-action magazine was retrofitted onto the device. Additionally, an air compressor in a mobile backpack carrier allows for repeated use on the go. Official testing was done on ballistic gel before a “war game” scenario played out involving an anaphylactic friend. As one would assume, this went perfectly, ignoring the time delay of having to wait for the compressor to build up enough pressure…


Anyways, even if you won’t be using this EpiPen launcher anytime soon, there are some actual DIY medical miracles you can look into! Something that’s a tad less insane to hack together than an EpiPen gun would be a splint. That is exactly what you can learn about here!


Intel reveals secret sauce to keep gaming laptops running quieter and cooler


If you’ve ever played video games on a laptop that sounded like a small aircraft trying to take off, Intel has heard you (and your laptop). The company’s Chinese division has launched “AI Quiet Plus,” a new certification and optimization program for gaming laptops (via VideoCardz). 

As the name suggests, the feature uses artificial intelligence to dramatically reduce fan noise and surface heat while maintaining performance. 

How does AI Quiet Plus actually work?

It might be a bit confusing at first, since AI Quiet Plus isn’t a chip or a software update that you can download on the go. As mentioned earlier, it’s a certification standard that OEM partners must meet to carry the label.

The program uses the Neural Processing Unit (NPU) built into Intel’s Core Ultra 200HX Plus processors to monitor temperature, workload, power consumption, and fan speed in real-time. 

Rather than running the cooling fan at maximum speed a few minutes into a game (when the motherboard starts to heat up a bit), the system is claimed to read gaming conditions intelligently and ramp up cooling only when it is actually required.


What does this mean for everyday gamers?

OEMs meeting the new standard must meet more stringent targets across acoustics, keyboard and chassis temperatures, and battery efficiency. The technology builds directly on Intel China’s “AI Quiet Gaming Laptop” initiative. 

For everyday gamers, AI Quiet Plus should translate to less disturbance and annoyance from the rocket engines in the laptop, less heat for your wrists should you hop onto an urgent email thread in the middle of a gaming session, and longer battery life between charging sessions.

The first laptops certified under this program are expected to reach the market by the end of 2026. These would include laptops from brands like Asus, MSI, Lenovo, and Acer. For now, the program is tied to the Core Ultra 200HX Plus chips, which came out in March 2026. 


5 Car Shows Worth Checking Out In Spring 2026






For a certain kind of person, cars are more than transportation; they’re a genuine obsession, a culture, and a way of life. Whether you’re a seasoned collector hunting for your next garage queen, a buyer who wants to see a few new models in person before committing, or someone who simply appreciates the engineering and design that goes into a well-built machine, a car show has something to offer everyone.

Spring is when the regional calendar fills up across the country, the weather cooperates, and the variety of cars on display gets genuinely impressive. The energy at a well-run show is hard to replicate — you get lifted trucks parked next to vintage Porsches parked next to whatever someone built in their garage over winter, all in the same field. Some people prefer smaller, more informal events, while others want the hustle and bustle of a major car show.

As such, the options in 2026 are plentiful. Whether it’s a Saturday morning cruise-in, a full-blown concours, or a week-long festival with drag racing and swap meets, these events are where car memories are made. For a deeper dive into what makes these events worth the trip, here are five car shows worth checking out in spring 2026.


Goodguys 11th Griot’s Garage North Carolina Nationals | April 17–18 | Raleigh, NC

The Goodguys circuit is one of the most well-established hot rod and custom car show series in the country with more than 70,000 members across the globe. The Goodguys 11th Griot’s Garage North Carolina Nationals presented by Grundy Insurance brings classic cars, custom trucks, hot rods, and family fun to the North Carolina State Fairgrounds on April 17 and 18. The show floor features over 1,500 of the Southeast’s finest 1999-and-older hot rods.

It also includes trucks, customs, muscle cars, and classics, alongside Goodguys AutoCross Racing action, a swap meet, a Cars 4 Sale Corral, vendor midway, and live music. The top awards of the weekend are handed out on Saturday, including the coveted Builder’s Choice Top 10 by Goolsby Customs. Goodguys, which bills itself as the world’s largest hot-rodding association, runs 15 events across the country annually, and the upcoming Raleigh stop is one of the more accessible ones for enthusiasts on the East Coast.


General admission (GA) runs from $10 to $30, while member tickets range from $10 to $24. If that sounds interesting to you and you want to book your tickets, here are 10 common hot rod terms to learn so you don’t sound completely lost when you get there.


Old Town Festival of Speed & Style | May 17 | Alexandria, VA

Alexandria, Virginia plays host to one of the more distinctive car events on the East Coast calendar this spring. The Old Town Festival of Speed & Style is back for its seventh year, and is being held on May 17, 2026. Now, if you are a fan of a famous Italian prancing horse brand, this is the place to be since this year’s edition puts a spotlight on Ferrari, with examples ranging from the 1950s all the way through to current production.

This means that you might even be able to see some of the best-looking Ferraris of all time like the F40 or the 288 GTO up close. Such a sight is not to be missed, which is partly why the attendance is expected to surpass 40,000, with judging across 11 award categories wrapping up at noon. What separates this event from a standard car show is the fashion component — models are styled specifically to complement a selection of the cars on display, merging automotive and haute couture in a way few shows attempt.

If you want to feel fancy and important next to a bunch of classic Ferraris, this year’s event will kick off on May 16 with the High-Octane Ball gala, where visitors are expected to wear formal white, black, and red. As this is a fancy event, tickets are priced accordingly, between $125 and $250. For those content just to have their cars present, applications cost $125 and are open until April 30.


JDM Fest | July 10–11 | Mirabel, Quebec, Canada

Although lusting after hot rods and classic Ferraris is a great way to spend a weekend, what if you are a die-hard Japanese domestic market (JDM) enthusiast who does not care about either? Well, you are in luck. For anyone whose taste runs toward Japanese car culture, JDM Fest at ICAR Route 66 in Mirabel, Quebec on July 10-11 is the event to have on the radar. First held in 2011, the event draws fans of both modified and factory-original Japanese vehicles.

Think Toyota, Honda, Nissan, Mazda, and Subaru. If a long winter without seeing an R34 in the flesh has you down bad, ICAR Route 66 is the perfect antidote, as it’s Canada’s largest Japanese car show. Last year, it pulled in over 13,000 spectators, and the programming goes well beyond a static display. A Show N Shine Top 100 competition awards trophies across several classes, while dedicated zones exist for right-hand-drive JDM exclusives, club gatherings, and exhibition vehicles that don’t compete.

On the track side, drift competition runs across the entire weekend and a Drag Shootout offers prize money in the street class. Kids 12 and under enter free, while GA tickets range from CA$48 to CA$50. Here are 35 affordable JDM cars we recommend, so you can shortlist a few once you arrive and finally make your JDM dream a reality.


Greenwich Concours d’Elegance | May 30–31 | Greenwich, CT

With JDMs out of the way, it’s time to go back to the refined end of the car collector world. If that sounds like something up your alley, the Greenwich Concours d’Elegance in southwestern Connecticut at the end of May is worth marking on the calendar. Now in its 30th anniversary year, the event is backed by Hagerty and has established itself as one of the most important concours events in the Northeast.

According to Greenwich Concours, this year will host Paul Russell, a preservationist and European collector car specialist who spent two years and more than 9,500 hours restoring Ralph Lauren’s 1938 Bugatti Type 57SC Atlantic – the most expensive car in Lauren’s collection. The weekend runs across two distinct events — Saturday’s Concours de Sport centers on high-performance machinery, while Sunday’s Concours d’Elegance is judged on historical importance and design distinction.

Expert panels, live restoration demonstrations, and direct access to car owners and fellow enthusiasts are woven throughout the program. A two-day pass is currently priced at $110, but separate events can reach up to $225. If you want to experience the event without paying big bucks, GA for the Concours d’Elegance is priced at $60, while GA for the Greenwich Concours de Sport is also $60.


Hagerty Cars & Caffeine at Indianapolis Motor Speedway | June 20–21 | Indianapolis, IN

Few car shows can claim a setting as iconic as the one behind this next entry. The Hagerty Cars & Caffeine Car Show arrives at Indianapolis Motor Speedway — nicknamed The Brickyard — on June 20-21, pairing a full show field with live racing on one of the most storied tracks in the world. 

The show welcomes all classic, vintage, collector, muscle, modern, and exotic cars and motorcycles, with car clubs encouraged to attend. On the racing side, the weekend features 850-hp Trans Am cars, classic Porsches, Alfa Romeos, Corvettes, and even historic Formula 1 and Indy cars. Here is a detailed explanation of all of the differences between F1 and Indy racing so you can spot the details in the flesh. 


Show car registrants receive a pair of two-day passes and access to a guided paddock walking tour on Sunday, putting attendees directly alongside the race machinery and the people behind it. Ticket prices are provided upon registration, while discounts are available for Hagerty Drivers Club members, military members, veterans, and first responders.


How we made the list

Although most people reading this article share the love and passion for cars, not everyone likes the same ones. Different generations grew up liking different vehicles, and the idea of this article was to include a wide variety of car shows with multiple separate events in order to satisfy most people. Whether it be fashion and cars, straight-up hot-rodding, exotic track toys, drifting JDMs, muscle cars, historic F1 and Indy cars, drag races, motorcycles, Corvettes, 850 horsepower Trans Ams, or the very top-end of collector Ferrari cars, we listed as many as we could.

To make sure our list provides some of the best venues to visit, we dug through countless event programs, media coverage, past events, important cars, attendance numbers, and distinct legacies to make sure every experience is unique and worth the trip. All events are situated in North America.




Character.AI Will Use AI to Let You Play a Character in Your Favorite Book


I can rip through a book or three a week, depending on my schedule. I just truly feel like there’s nothing better than becoming immersed in the plot and, specifically, diving into the minds of the characters. I’ve always gravitated toward fantasy books, with their mesmeric world-building and action-packed scenes. Like many people, I literally ached to be at Hogwarts when I was reading the Harry Potter series for the first time. With the progress of artificial intelligence, we’re getting closer to a reality where we can dive into those favorite stories. 

A new feature for Character.AI, launched Thursday, is an AI-powered role-play experience that lets you play as a character inside a classic work of literature. Called Books, the feature will launch with more than 20 public-domain titles, including Alice in Wonderland, Pride and Prejudice, Dracula, Frankenstein, Romeo and Juliet, and The Great Gatsby.

This new feature moves beyond Character.AI’s standard interactive open-ended chatting to an immersive storytelling experience and narrative-driven entertainment. 


Character.AI seems to be taking a step back from just offering open-ended chat functionality, which can be risky for emotionally vulnerable users or teens. Google and Character.AI both settled lawsuits earlier this year related to claims that these AI chatbots caused emotional harm to minors and led children to die by suicide. Last year, Character.AI made sweeping policy and safety changes, barring anyone under the age of 18 from using the platform for open-ended chats.

Read also: Character.AI: What to Know About the Role-Playing AI Tool and Its New Video Features

You can step into a story, talk to its characters and play as one.

How Books works 

To get started, you can log into the Character.AI website and decide if you want to interact with Story Modes or Alternate Universe Remixes.

In Story Modes, you can choose to role-play as a specific character from a book or as a character of your own from Character.AI. You can then follow the original plot of the story with Book arc mode or create your own narrative with Go off script mode. Character.AI plans to launch more modes soon, such as TapTale, a more guided mode that offers prewritten prompts you can choose from to drive the narrative.


Screenshot: You can play the White Rabbit from Alice in Wonderland and interact with characters like Alice and the Duchess. (Image: Character.AI)

You can also completely remix the universe of classic literature with Alternate Universe Remixes. In this mode, you can set Alice in Wonderland in space or make Elizabeth Bennet a modern rom-com heroine. 

Books is now available for Character.AI Plus subscribers on both mobile and web as part of Character.AI Labs.

vivo X300 Ultra India Launch Expected in May: Specs, Price, Features


After launching in China, the Vivo X300 Ultra is now expected to go global on April 24. Vivo hasn't officially confirmed the timeline, but Europe is expected to get the phone first, with an India launch likely to follow soon after. The phone aims to stand out mainly for its camera capabilities.

Key Specifications of Vivo X300 Ultra

Image: the X300 Ultra telephoto kit.

From a specifications standpoint, the Vivo X300 Ultra will take the X300 Pro's features to the next level. It will come with a 6.82-inch 2K LTPO OLED display with a 144Hz refresh rate, and it may pack a 7,000mAh battery and Qualcomm's Snapdragon 8 Elite Gen 5 chipset. Solid as these specs are, they are not the primary reason this phone stands out.

One of the standout features of the Vivo X300 Ultra is its camera system. The device is said to be equipped with two 200MP cameras, one serving as the main camera and the other as a periscope telephoto. In addition, the device will have a 50MP ultrawide camera and a 50MP selfie camera. A separate teleconverter module will be available for this device, enabling users to capture high-zoom images.

India Launch Timeline and Availability


The Vivo X300 Ultra is expected to launch in India around May, with sources suggesting a May 7 release date, though the official date has yet to be confirmed. The phone has already been showcased in India, with a recent picture showing it in the hands of Indian cricketer Shreyas Iyer.

Expected Price and Market Positioning

The upcoming Vivo X300 Ultra will likely carry a high price, particularly in international markets. In China, Vivo sells the 16GB + 1TB variant for about Rs 1.3 lakh, but in Europe, the company could price it at around EUR 1,900 (roughly Rs 2.08 lakh). At that price, it would cost more than rivals like the iPhone 17 Pro Max and Samsung Galaxy S26 Ultra.

Ex-PlayStation boss says Microsoft is ‘trying so hard to will’ Xbox Game Pass ‘into health’ and suggests ‘a clarifying post mortem would do the entire industry some good’



  • Former PlayStation exec Shawn Layden has suggested Xbox can’t save Game Pass
  • Layden said Xbox is “trying so hard to will this into health” despite the current issues
  • This comes after a leaked memo from Xbox CEO Asha Sharma revealed plans to “evolve” the “expensive” service

Former PlayStation executive Shawn Layden has suggested Microsoft can’t save Xbox Game Pass, following comments made by Xbox CEO Asha Sharma about wanting to “evolve” the service.

Earlier this week, a leaked memo to employees, obtained and reported on by The Verge, showed Sharma admitting that the subscription service is now too expensive for members as she outlined a brief plan for how she aims to change things, writing, “Game Pass is central to gaming value on Xbox. It’s also clear that the current model isn’t the final one.”


The fintech that pivoted because of Kanye West just hit a $1.4B valuation


Slash, the vertical banking platform built by two college dropouts, has raised a $100M Series C backed by Khosla Ventures and Ribbit Capital. The company’s valuation has nearly quadrupled since its May 2025 Series B, the latest leg of a comeback story that began when its core market evaporated overnight.


Slash, the San Francisco-based vertical banking platform, has raised $100 million in a Series C round at a $1.4 billion valuation, backed by Khosla Ventures and Ribbit Capital, Bloomberg reported on Wednesday.

The raise marks a sharp acceleration in the company’s trajectory: less than a year ago, Slash closed its Series B at a $370 million valuation. The new round values the company at nearly four times that figure.

Slash was founded by Victor Cardenas, a Stanford dropout, and Kevin Bai, who left the University of Waterloo, and its origin story is one of the more unusual in recent fintech. The pair initially built banking services for sneaker resellers, a niche that took off quickly.


Then Kanye West made a series of antisemitic public statements in late 2022. Adidas terminated its partnership with the rapper, collapsing the Yeezy market that was the backbone of the sneaker reselling economy.

According to Cardenas, Slash’s revenue fell by 80% almost overnight. The company had raised $19 million and built a team around a market that had suddenly ceased to exist.


Rather than fold, Cardenas and Bai pivoted to a broader thesis: vertical banking for online businesses. Instead of competing horizontally against Ramp, Mercury, and Brex (platforms that serve businesses across all industries), Slash builds tailored financial products for specific sectors.

The first post-pivot target was performance marketing firms, which run digital advertising campaigns on behalf of e-commerce companies.

A key pain point: these firms needed to create distinct accounts within their banking system for each end client to track prepayment and spending separately. Slash built that. By the time of its Series B announcement in May 2025, Cardenas told Fortune that more than 1% of all Facebook ads are bought with a Slash-issued card.
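That per-client separation is essentially a sub-ledger. A minimal sketch of the idea (hypothetical names and structure, not Slash's actual data model or API):

```python
from collections import defaultdict

class ClientSubLedger:
    """Tracks prepayments and ad spend per end client, so one agency
    account holds a separate balance for each of its customers.
    Hypothetical illustration, not Slash's actual data model."""

    def __init__(self):
        self.balances = defaultdict(int)  # client_id -> balance in cents

    def prepay(self, client_id: str, amount_cents: int) -> None:
        """A client's prepayment lands in its own sub-account."""
        self.balances[client_id] += amount_cents

    def spend(self, client_id: str, amount_cents: int) -> None:
        """Ad spend draws down only that client's sub-account."""
        if self.balances[client_id] < amount_cents:
            raise ValueError(f"insufficient prepaid funds for {client_id}")
        self.balances[client_id] -= amount_cents

ledger = ClientSubLedger()
ledger.prepay("acme", 50_000)   # client Acme prepays $500
ledger.spend("acme", 20_000)    # $200 of ad spend billed to Acme
print(ledger.balances["acme"])  # 30000
```

The point is that one agency bank account behaves like many: each end client's prepaid funds and spend stay isolated, which is the bookkeeping these firms were doing by hand.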

The pivot worked. Slash now serves verticals including web3, e-commerce, agencies, contractors, affiliate marketers, healthcare suppliers, online travel agencies, and wholesalers, alongside a stablecoin payments product and treasury and working capital tools.


The company’s product suite has expanded considerably since the early days of virtual debit cards for teenagers. Current offerings include corporate cards, business banking, stablecoin payments, treasury management, working capital, global USD accounts, invoicing, and a platform layer with multi-entity support, accounting integration, expense management, global payments, an API, analytics, and AI agents.

The platform is built on Column, a chartered bank co-founded by a Plaid executive and designed from the ground up to serve tech-forward fintech companies. Cardenas has credited that relationship with helping Slash navigate the turbulence that hit the fintech middleware sector when Synapse, a major banking-as-a-service intermediary, collapsed.

Khosla Ventures has a long record of early fintech bets that have paid off at scale: the firm was an early investor in Stripe, Affirm, and Ramp. Ribbit Capital specialises exclusively in financial services and has backed Robinhood, Coinbase, and Credit Karma.

Their joint involvement in this round signals a conviction that Slash's vertical model (winning in niche after niche rather than fighting for share across a horizontal market) has the structural advantages to compound into something large.


In the Series B announcement, Cardenas articulated the longer-term ambition: “If we continue solving these niche, vertical, specific financial workflows for businesses across different industries, then we can sneakily become one of the largest commercial credit card issuers in the country.”

Spektr raises $20M Series A to bring AI agents to financial compliance


The Copenhagen fintech has built a platform of specialised AI agents that handle KYC and KYB work (document reviews, ownership mapping, risk rationales) in minutes rather than hours. NEA led the Series A; Northzone, Seedcamp, and PSV Tech participated.


Spektr, a Copenhagen-based startup building AI infrastructure for financial compliance, has raised $20 million in a Series A round led by NEA, with continued participation from existing investors Northzone, Seedcamp, and PSV Tech.

The round brings total funding to just under $26 million, and will be used to expand Spektr’s engineering team, accelerate adoption among banks and large financial institutions, and open offices in London and New York. 

Spektr’s pitch is built on a specific frustration: that despite years of investment in compliance technology, most KYC and KYB work is still done by analysts manually.


The typical compliance review at a bank involves searching company registries, cross-referencing documents from multiple sources, mapping beneficial ownership structures, and writing risk rationales by hand: work that looks the same every day, is difficult to audit consistently, and scales poorly as regulatory volume increases.

Most of the tools built to address this have focused on workflow management and data aggregation, which reduces friction but does not eliminate the underlying analytical labour.

Spektr’s response is a platform of specialised AI agents that perform the analytical work itself: researching companies, verifying business activity, interpreting documents from multiple sources, and generating structured risk assessments, with compliance teams reviewing and approving results rather than producing them from scratch.

According to the company, work that previously took an analyst hours completes in minutes. Financial institutions can design their own onboarding and monitoring workflows and deploy networks of these agents within them, turning manual analyst-driven processes into automated operations that can run at the scale of a large bank’s customer portfolio.

The platform handles both onboarding and ongoing monitoring, covering KYC, KYB, source-of-funds checks, document review, and false-positive reduction across the compliance lifecycle.
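The agents-do-the-work, humans-approve pattern described above can be sketched in a few lines. Everything here is a hypothetical stand-in (rule-based placeholders, invented function names), not Spektr's actual platform:

```python
def research_company(name: str) -> dict:
    """Stand-in for an agent querying registries and parsing filings."""
    # A real agent would hit company registries and document sources.
    return {"name": name, "registered": True, "owners": ["Jane Doe"]}

def assess_risk(profile: dict) -> dict:
    """Stand-in for an agent producing a structured risk assessment."""
    score = 0 if profile["registered"] and profile["owners"] else 80
    return {"company": profile["name"], "score": score,
            "rationale": "registered entity; beneficial owners mapped"}

def review_queue(assessments: list, threshold: int = 50) -> list:
    """Only flagged cases reach a human analyst; the rest auto-approve."""
    return [a for a in assessments if a["score"] >= threshold]

assessments = [assess_risk(research_company(n))
               for n in ["Acme Ltd", "Beta GmbH"]]
needs_human = review_queue(assessments)
print(len(assessments), len(needs_human))  # 2 0
```

The point of the pattern is the last step: the analyst's queue shrinks to the cases the agents could not clear, which is where the claimed hours-to-minutes speedup comes from.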


NEA partner Luke Pappas, who led the investment, told Crunchbase News he believes Spektr wins through “taste” and deep domain expertise in a market where AI can mass-produce functionality.

The company’s customers include Pleo, Santander Leasing, Mercuryo, Phantom, and Monta, as well as what the company describes as major US marketplace clients.

Pappas characterised Spektr’s differentiation as the ability to “coexist with existing solutions” while providing orchestration for compliance teams that are not yet ready to consolidate onto a single vendor.

CEO and co-founder Mikkel Skarnager described the core problem in the company’s announcement:


“Compliance technology has mostly focused on workflow and data collection. But the real bottleneck has always been the work itself, analysts researching companies, interpreting information, and documenting decisions.”

The company was seeded in February 2024 and has grown to 45 employees, with the new capital earmarked to scale engineering capacity for the more complex technical requirements of serving Tier 1 banks and large fintechs.

The Wall Street Journal Wonders Why There Are Suddenly So Many Sleazy Fees


from the dumb-questions,-asked-unseriously dept

I cut my teeth as a telecom reporter, so I spent a lot of time writing about how broadband monopolies and cable TV giants rip off consumers with sleazy, misleading fees. I also spent a lot of that time writing about how lobbying and regulatory capture have ensured that big companies see no meaningful penalties should they falsely advertise one price, then sock you with a bunch of spurious surcharges.

The Biden administration, for its faults, at least tried to tackle some of this. The Biden FTC considered new and popular rules outlawing “junk fees”. The Biden FCC also implemented rules that didn’t ban sleazy fees (unfortunately), but forced broadband ISPs to clearly list them out at the point of sale (something recently dismantled by the Trump administration).

The Trump administration (and its courts) has taken an absolute hatchet to U.S. consumer protection and regulatory autonomy, ensuring that the problem of predatory fees is much worse across every sector you interface with. So it was funny to see Wall Street Journal reporters recently openly wondering why there are so many shitty fees all of a sudden (non-paywalled alternative):

“An extra 3% for paying with a credit card. A 5% involuntary contribution to a restaurant’s employee wellness fund. $25 a month in addition to rent for trash collection.  

Consumers already weary of rising inflation are now contending with a new crop of costs that are hidden in plain sight. New fees or surcharges are popping up everywhere as companies search for ways to recoup their own rising costs while blaming outside pressures.”


The WSJ reporters and editors decided to cover soaring sleazy fees, but at no point in the article do they mention (even in passing) that Trump has dismantled most of the (already fleeting) efforts to rein in such predation. Or that the Trump Supreme Court has issued numerous rulings effectively making it almost impossible for regulators to fine corporations or hold them accountable for bad behavior.

The article mentions that the Trump FTC did grudgingly implement the Biden-era plan to ban junk fees, but they don’t think it’s worth mentioning that the Trump administration refuses to enforce it:

“The Federal Trade Commission banned drip pricing in short-term lodging and live-event ticketing in 2025, citing research showing that consumers were manipulated by low initial prices even when the full cost was eventually disclosed.”

They also don’t think it’s worth mentioning that the worst offenders of this kind of stuff, like Ticketmaster, were recently let off the hook by the Trump FTC via a piddly settlement (that left states, which had partnered with the FTC legally, high and dry). They’ve chosen to cover consumer protection, but not really. Not with any sort of interest in full, contextual reality.

While this particular instance is the Wall Street Journal, you’ll notice this same habit across most of corporate media. They’re dedicated to an alternate reality where Trump isn’t historically corrupt, and the regulators you’ve historically trusted to be at least semi-present to police the worst offenses are still dutifully on the beat protecting the public interest.


It’s of course a reflection of ownership bias seeping into editorial (most media owners are affluent Conservatives or Libertarians who like tax cuts, rubber stamped merger approvals, and mindless deregulation). But it’s also a form of weird normalization bias, where the reporters assume that because regulators have always been there (with natural partisan ebb and flow) they’ll always be there.

But they’re not there anymore. The damage will likely be deadly and permanent, impacting far more than just shitty, sneaky fees. And the press is doing a terrible job informing the public of that fact.

This is particularly amusing because the Wall Street Journal’s own reporting recently highlighted how even the semi-consistent folks within MAGA who sometimes supported things like functional antitrust reform have been easily ousted by lobbyists. Yet the reporters exploring “why are we getting ripped off more than ever by predatory corporations” aren’t willing to make the obvious connection.

Filed Under: antitrust, consumer protection, corruption, fees, ftc, hidden fees, junk fees, regulations, surcharges, trump


New ATHR vishing platform uses AI voice agents for automated attacks


A new cybercrime platform called ATHR can harvest credentials via largely automated voice phishing attacks that use either AI agents or human operators for the social engineering phase.

The malicious operation is advertised on underground forums for $4,000 plus a 10% commission on profits, and can steal login data for multiple services, including Google, Microsoft, and Coinbase.

Automation covers every stage of the telephone-oriented attack delivery (TOAD) chain, from luring targets over email to conducting voice-based social engineering and harvesting account credentials.


ATHR attack chain

According to researchers at cloud email security company Abnormal, ATHR is a complete phishing/vishing attack generator that offers brand-specific email templates, per-target customization, and spoofing mechanisms to make it appear as if the message originates from a trusted sender.

At the time of their analysis, the researchers observed that ATHR supported eight online services: Google, Microsoft, Coinbase, Binance, Gemini, Crypto.com, Yahoo, and AOL.


The attack starts with the victim receiving an email crafted to pass casual verification and even technical authentication checks.

“The lure is typically a fake security alert or account notification – something urgent enough to prompt a phone call but generic enough to avoid triggering content-based filters,” Abnormal notes in a report today.

Calling the phone number in the email routes the victim through Asterisk and WebRTC to AI voice agents driven by carefully crafted prompts that guide the victim through the data theft process.

The agents follow a multi-step script simulating a security incident. For Google accounts, they replicate the account recovery and verification process, using preset prompts that shape their tone, approach, persona, and behavior to mimic professional support staff.

Screenshot: ATHR's AI agent script builder tool. (Source: Abnormal)

The purpose of the fake recovery process is to extract a six-digit verification code that allows the attacker to gain access to the victim’s account.

Although ATHR does offer the option to route the call to a human operator, the ability to use an AI agent is what sets it apart.

ATHR’s dashboard gives operators control over the entire process and real-time data for each attack per target.

Through the ATHR panel, they control email distribution, handle calls, and manage phishing operations, monitoring outcomes in real time and receiving logs containing the stolen data.

Screenshot: ATHR's main dashboard. (Source: Abnormal)

Researchers at Abnormal warn that ATHR significantly reduces the manual effort for the operator and provides threat actors with an integrated platform that can handle all stages of a TOAD attack without the need to configure individual components.

This allows less technical attackers with no infrastructure to deploy automated vishing attacks from start to finish.


“The shift from a fragmented, manually intensive operation to a productized, largely automated one means TOAD attacks no longer require large teams or specialized infrastructure,” Abnormal warns.

With the rise of ATHR-like cybercrime platforms, the researchers expect vishing attacks to become more frequent and more difficult to distinguish from legitimate communications.

Defending against such attacks requires a different approach, since the lure emails carry no reliable indicators, are customized to authenticate correctly, and appear as valid notifications.

However, detection is possible by analyzing the behavioral patterns of communication between a sender and a recipient, and by identifying whether similar lures containing a phone number reached the organization within a short time frame.


Abnormal researchers say that modeling normal communication behavior across the organization can help AI-powered detection flag anomalies before targets make a call.
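The clustering heuristic the researchers describe can be sketched as a simple sliding-window check: several phone-number-bearing emails from senders with no prior history, landing close together, suggest a coordinated TOAD campaign. The message schema and thresholds below are illustrative assumptions, not Abnormal's actual detection logic:

```python
import re

# Loose pattern for phone numbers embedded in lure emails
PHONE_RE = re.compile(r"\+?\d[\d\-\s().]{7,}\d")

def flag_toad_campaign(messages, window_minutes=60, min_hits=3):
    """Return True if at least `min_hits` emails containing a phone
    number, each from a sender with no prior history, arrived within
    `window_minutes` of each other.

    `messages` is a list of dicts with 'body', 'ts' (minutes) and
    'known_sender' (a hypothetical schema for illustration).
    """
    suspects = sorted(
        m["ts"] for m in messages
        if not m["known_sender"] and PHONE_RE.search(m["body"])
    )
    # Slide a window over the suspect timestamps.
    for i in range(len(suspects) - min_hits + 1):
        if suspects[i + min_hits - 1] - suspects[i] <= window_minutes:
            return True
    return False
```

Three unknown-sender "security alert" emails with callback numbers inside an hour would trip the flag, while a lone invoice that happens to contain a phone number would not.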

