

GitHub freezes new Copilot sign-ups as agentic AI breaks the economics


Agentic coding workflows are now routinely generating costs that exceed what users pay per month. GitHub’s response, pausing new sign-ups for Pro, Pro+, and Student plans and tightening usage caps, signals that the era of unlimited AI assistance at fixed prices is ending.


GitHub has paused new sign-ups for its Copilot Pro, Pro+, and Student plans and tightened usage limits across all individual tiers, citing a fundamental mismatch between how developers now use the product and the infrastructure it was built to support.

The company’s VP of product, Joe Binder, said in a blog post that agentic coding workflows (long-running, parallelised sessions in which AI agents and subagents tackle complex problems autonomously over extended periods) are now routinely consuming more compute than users pay for in a month.

“It’s now common for a handful of requests to incur costs that exceed the plan price,” Binder wrote.


The change, effective 20 April, leaves Copilot Free as the only plan still accepting new individual sign-ups. Existing users retain access to their current plans and can upgrade between tiers, but GitHub has given no timeline for resuming new subscriptions.

Pro and Pro+ subscribers who contact GitHub support between 20 April and 20 May can cancel and receive a refund, with no charge for April.

The usage changes that accompany the pause are structured to push heavier users towards the pricier Pro+ tier. GitHub is tightening both session and weekly token limits on individual plans: caps that govern how many tokens a user can consume in a given time window, separate from the premium request entitlements that determine model access.

A user can have premium requests remaining and still hit a usage limit because the two systems operate independently. Pro+, at $39 per month, now offers more than five times the limits of the $10-per-month Pro plan.
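The interplay between those two gates can be illustrated with a toy model. Everything below is a hypothetical sketch (the `Entitlements` type and `can_run` function are invented for illustration, not GitHub's actual API); it only shows how two independent checks can block a request even when one still has headroom:

```python
from dataclasses import dataclass

@dataclass
class Entitlements:
    # Two independent gates, mirroring the article's description:
    # premium requests govern access to premium models, while the
    # weekly token cap governs total consumption.
    premium_requests_left: int
    weekly_tokens_left: int

def can_run(e: Entitlements, tokens_needed: int, premium: bool) -> bool:
    if premium and e.premium_requests_left <= 0:
        return False  # blocked by the premium-request gate
    if tokens_needed > e.weekly_tokens_left:
        return False  # blocked by the token cap, independently
    return True

# Premium requests remain, yet one long agentic session still
# trips the weekly token limit:
e = Entitlements(premium_requests_left=50, weekly_tokens_left=100_000)
print(can_run(e, tokens_needed=250_000, premium=True))  # False
```

Because the checks never consult each other, either limit alone is enough to stop a session, which is exactly the situation the article describes.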


Usage warnings are being added to VS Code and the Copilot CLI so developers can see approaching limits before hitting them mid-workflow.

Model access is also being restructured. Opus, Anthropic’s heaviest and most capable model family, is being removed from the Pro plan entirely.

Opus 4.7 remains available on Pro+. Opus 4.5 and 4.6, previously announced for removal from Pro+, are being removed from that tier as well. The pattern is straightforward: the most compute-intensive models are migrating exclusively to the most expensive individual tier.

The economics behind the move are unusually candid for a Microsoft product announcement. Copilot was originally designed for code completion: short, stateless suggestions that consume modest compute per interaction.


Agentic coding, by contrast, involves sessions that can run for hours, spawn multiple parallel threads, and generate token volumes that bear no resemblance to the autocomplete interactions that shaped the original pricing structure.

GitHub’s own Copilot features, including the /fleet command for parallel workflows, are now listed among the behaviours GitHub is asking its own users to limit.

This is not the first sign of strain. The week before the sign-up pause, GitHub had already suspended Copilot Pro free trials due to abuse, a narrower measure that hinted at the broader capacity pressure to come.

And the sign-up pause itself arrives at a politically awkward moment for GitHub with its developer user base.


In late March, the platform drew significant backlash after developers discovered that Copilot had been inserting promotional “tips” (including an advertisement for the productivity app Raycast) into pull requests, in some cases appearing as if written by the developer rather than the AI.

The feature was disabled the same day, with GitHub’s VP of developer relations, Martin Woodward, saying the behaviour had become “icky” after Copilot’s reach was extended to pull requests it hadn’t created.

GitHub described it as a programming logic issue, not an advertising strategy. More than 11,000 pull requests were affected before the rollback.

The broader pattern, analysts say, is structural. Charlie Dai, vice president and principal analyst at Forrester, said the move shows how agent-driven coding is shifting workloads towards longer-running and parallel sessions that create higher and less predictable compute demand.


“Cost structures built for lightweight assistance no longer hold,” Dai said, “and this puts pressure on GPU capacity, reliability, and unit economics.” He added that similar usage restrictions from major model providers suggest capacity rationing is likely to become a feature of the industry as agentic development becomes routine.

For enterprise engineering leaders, Dai said the episode is a reminder to evaluate AI coding tools as metered infrastructure rather than unlimited productivity layers.

Faisal Kawoosa, founder and chief analyst at Techarc, said the dynamic is a familiar one. “First you give users access to a tool with relatively open usage, and then gradually start defining limits as adoption grows,” he said.

Kawoosa added that the next step is likely to be more differentiated plans that create clearer monetisation opportunities, noting that GitHub’s depth of integration into developer workflows gives it unusual leverage: “a developer can live without an email ID, but not a GitHub account.”


Whether competitors, including Claude Code, Cursor, and Codeium, can move quickly enough to absorb frustrated Copilot users before GitHub recalibrates its pricing structure is the open question the market is now watching.



AI research lab NeoCognition lands $40M seed to build agents that learn like humans


Investors are aggressively courting AI researchers to build startups that can make AI more reliable and efficient.

Yu Su, an Ohio State professor leading an AI agent lab, said he initially resisted the pressure from VCs to commercialize his work. He finally took the leap last year and spun out his work into a startup when he saw that foundational model advances could make agents truly personalized.

NeoCognition, a startup Su describes as a research lab developing self-learning AI agents, has just emerged from stealth with $40 million in seed funding. The round was co-led by Cambium Capital and Walden Catalyst Ventures, with participation from Vista Equity Partners and angels, including Intel CEO Lip-Bu Tan and Databricks co-founder Ion Stoica.

“Today’s agents are generalists,” Su told TechCrunch. “Every time you ask them to do a task, you take a leap of faith.”


According to Su, the issue lies in a lack of consistency. Current agents, whether from Claude Code, OpenClaw or Perplexity’s computer tools, successfully complete tasks as intended only about 50% of the time, he said.

Since agents are still so unreliable, they are not ready to be trusted, independent workers, Su told TechCrunch. NeoCognition intends to change that by developing an agent system that can self-learn to become an expert in any domain, similar to how humans learn.

Su argues that while human intelligence is broad, its real power is our ability to specialize. When we enter a new environment or profession, we can rapidly master its unique rules, relationships, and consequences.


NeoCognition is building agents to mirror this exact approach.


“For humans, our continued learning process is essentially the process of building a world model for any profession, any environment,” Su said. “We believe for agents to become experts, they need to learn autonomously to build a model of any given micro world.”

Su views this capacity for rapid specialization as the critical missing link to getting AI to work reliably on its own.

While it is possible to train agents for autonomous tasks, they must be custom-engineered for a specific vertical. NeoCognition is different because it’s building agents that are generalists capable of self-learning and specializing in any domain.

NeoCognition intends to sell its agent systems primarily to enterprises, including established SaaS companies, which can use them to build agent-workers or to enhance existing product offerings.


Su highlighted that an investment from Vista Equity Partners is especially valuable for this reason. As one of the largest private equity firms in the software space, Vista can provide NeoCognition with direct access to a vast portfolio of companies looking to modernize their products with AI.

NeoCognition currently has about 15 employees, the majority of whom hold PhDs.



Meta To Start Capturing Employee Mouse Movements, Keystrokes For AI Training Data


Reuters reports that Meta plans to start collecting U.S.-based employees’ mouse movements, clicks, keystrokes, and occasional screen snapshots to train AI agents that can better learn how humans use computers. The tool, called Model Capability Initiative (MCI), will reportedly “not be used for performance assessments or any other purpose besides model training and that safeguards were in place to protect ‘sensitive content.'” From the report: Meta CTO Andrew Bosworth told employees in a separate memo shared on Monday that the company would step up internal data collection as part of those “AI for Work” efforts, now re-branded as Agent Transformation Accelerator (ATA). “The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve,” Bosworth said. The aim, he added, was for agents to “automatically see where we felt the need to intervene so they can be better next time.” Bosworth did not explicitly spell out how those agents would be trained, but said Meta would be “rigorous” about “building up data and evals for all the types of interactions we have as we go about our work.”

Meta spokesperson Andy Stone acknowledged that the MCI data would be among the inputs. […] “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus,” said Stone.


Audio-Technica at AXPONA 2026: Japan’s Quiet Giant Steps Into the Spotlight


In a headphone market split between legacy brands that barely change and boutique players that release new models at a relentless pace, it’s easy to overlook the ones playing a longer, quieter game. Shure has built a reputation on consistency. Campfire Audio operates at the other extreme. Most brands fall somewhere in between.

That leaves a gap, and that’s where Audio-Technica operates. The Tokyo-based manufacturer doesn’t chase headlines or flood the market, but it brings decades of experience across both professional and consumer audio. At AXPONA 2026, that approach stands out. For newer enthusiasts, it’s a reminder that some of the most established names in personal audio are not always the loudest.

Why the Japanese Audio Brand Still Matters

Founded in Tokyo by Hideo Matsushita, Audio-Technica set out to make high quality audio accessible to a wider audience. The company began with phono cartridges in 1962 and expanded steadily into headphones, microphones, turntables, and wireless systems for broadcast and live sound. That pro side of the business still matters, even if it doesn’t always get the attention.

Today, Audio-Technica is one of the largest audio companies in Japan. Outside of cartridges, though, it can still fly under the radar for a lot of listeners. At the show, that low profile was hard to miss. Instead of setting up in the Ear Gear section where the headphone crowd tends to gather, the brand took a series of smaller rooms off the main path. Easy to walk past if you weren’t looking for them.

Audio-Technica NARUKAMI HPA-KG NARU Tube Headphone Amplifier at AXPONA 2026

That’s a shame, because the setup was one of the more complete at the show. Visitors could move from cartridge demos to a full spread of headphones, covering everything from entry-level models to the flagship end of the spectrum, including the $108,000 NARUKAMI HPA-KG NARU Tube Headphone Amplifier and its matching headphones.

Audio-Technica offers a headphone lineup that can stand alongside Sony, Beyerdynamic, and Sennheiser, with models that cover a wide range of prices and use cases. That includes everything from entry level wired designs to high-end open and closed back headphones, along with more niche offerings like wireless in-ear models tied to Star Wars characters. It is a broad catalog, but it rarely gets presented as aggressively as its competitors.


At AXPONA 2026, that range was on full display. I spent time with the flagship ATH-ADX7000 ($3,499 at Crutchfield), along with several of the step down open back models, and moved over to the closed back side with the Narukami system and the ATH-AWKG ($4,499 at Amazon), plus a few sub-flagship options.

The ADX7000 was not new territory. We have already reviewed it favorably, and both Editor-in-Chief Ian White and Editor-at-Large Chris Boylan placed it among their top three headphones from CanJam NYC 2026. That context matters because it frames the rest of the lineup. The flagship is not just competitive. It sets the tone for everything below it.

The house tuning of Audio-Technica headphones leans a bit brighter than what you typically get from Beyerdynamic or Sennheiser, with a noticeable lift in the presence region. Vocals come forward, strings have a bit more bite, and that works especially well with string quartets, concertos, vocal tracks, or a cappella arrangements. It’s not trying to sound polite. It’s trying to keep things engaging.


The upside is consistency. That same tuning shows up from the top of the line down to the entry level models. As you move up, you get more resolution, better control, and a cleaner presentation, but the core voicing doesn’t shift. The idea that Hideo Matsushita started with is still intact. You’re not relearning the sound every time you move up the ladder.

For those who haven’t spent time with the brand, the ATH-AD500X (open-back) and ATH-A550Z (closed-back) are easy entry points at around $150. They won’t match the technical performance of the flagships, but they give you a clear sense of what Audio-Technica is aiming for without asking for a major commitment.


I was also able to speak with a representative from Audio-Technica about reviewing the newly released X-series models, which push the price of their open-back headphones down to as little as $59. That’s a meaningful shift for a brand that has traditionally started higher up the ladder. It opens the door for a lot more people to hear what that house sound is about without much risk.


I’m looking forward to spending time with those. The Audio-Technica models I already have get a lot of use with classical and jazz, and they offer a different perspective compared to the darker tuning you get from some of the established German brands. It’s not better or worse. Just a different take that a lot of listeners might find more engaging. The plan is to start with the X-series and work up the line so readers can see how that tuning evolves as the price climbs.


At the other end of the spectrum sits the NARUKAMI HPA-KG NARU Tube Headphone Amplifier and matching headphones. Only two units are currently in North America, which raises an obvious question. Were the other 23 already sold? At $108,000, in under two years, that would be quite a statement. Audio-Technica spent a decade developing that system and went through 11 prototypes before bringing it to market. It’s hard to justify on paper, but that’s not really the point. The design, build quality, and sonic performance are about as far as this category can be pushed right now.

And in the context of AXPONA 2026, it almost felt reasonable. There were plenty of speaker systems in the building that cost a lot more. Getting more time with it would require another trip to the show. I’m not expecting a loaner to show up anytime soon, but there’s no harm in asking.


2026 Green Powered Challenge: A Low Power Distraction Free Writing Tool


Distraction free writing tools are a reaction to the bells and whistles of the modern desktop computer, allowing the user to simply pick up the device and write. The etyper from [Quackieduckie] is one such example, packing an e-paper screen into a minimalist case.

These devices are most often made using a microcontroller such as an ESP32, so it’s interesting to note that this one uses a full-fat computer — if an Orange Pi Zero 2W can be described as “full-fat”, anyway. There’s an Armbian image for it with the software pre-configured, and also mention of a Raspberry Pi port. It works with wired USB-C keyboards, and files can be retrieved via Bluetooth. It doesn’t look as though there’s a framebuffer or other more general driver for the display, so it’s likely you won’t be using this as a general-purpose machine, but maybe that’s not the point. We like it, though maybe it’s not a daily driver.

This hack is part of our 2026 Green Powered Challenge. You’ve just got time to get your own entry in, so get a move on!


Microsoft's full-screen Xbox experience is now available to Windows 11 Insiders



Microsoft recently announced a new Canary build within the Windows Insider Program. While not particularly groundbreaking in terms of features, Windows 11 Canary Build 29570.1000 does include a potentially interesting change for gaming scenarios. The new preview release finally brings Xbox mode to Windows Insider testers, allowing users to try…

Companies are hoarding expensive AI GPUs and leaving most of that costly compute power sitting idle while bills quietly spiral upward



  • Most AI GPUs run at shockingly low utilization across production systems
  • Companies are paying for twenty times more GPU capacity than needed
  • Overprovisioning is rising sharply instead of improving year after year

Companies across the tech industry are racing to buy massive amounts of AI infrastructure, but most of it does barely any useful work at all.

A report from Cast AI, based on tens of thousands of Kubernetes clusters across AWS, Azure, and GCP, found that average GPU utilization sits at just 5%.
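The 5% average utilization figure implies the "twenty times" overprovisioning claimed in the bullet points above. A quick back-of-the-envelope helper (the function is mine, not from the Cast AI report) makes the arithmetic explicit:

```python
def overprovision_factor(avg_utilization: float) -> float:
    # If fleet-wide GPU utilization averages `avg_utilization`, the
    # fleet holds 1 / avg_utilization units of paid capacity for every
    # unit of useful work actually performed.
    if not 0 < avg_utilization <= 1:
        raise ValueError("utilization must be a fraction in (0, 1]")
    return 1 / avg_utilization

# Cast AI's 5% average implies roughly 20x the capacity actually used.
print(overprovision_factor(0.05))  # 20.0
```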


CISA flags new SD-WAN flaw as actively exploited in attacks



The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has given government agencies four days to secure their systems against another Catalyst SD-WAN Manager vulnerability it flagged as actively exploited in attacks.

Catalyst SD-WAN Manager (formerly known as vManage) is network management software that lets admins monitor and manage up to 6,000 Catalyst SD-WAN devices from a single dashboard.

Cisco patched this information disclosure vulnerability (CVE-2026-20133) in late February, saying that it allows unauthenticated remote attackers to access sensitive information on unpatched devices.


“This vulnerability is due to insufficient file system access restrictions. An attacker could exploit this vulnerability by accessing the API of an affected system,” Cisco said at the time. “A successful exploit could allow the attacker to read sensitive information on the underlying operating system.”

One week later, the company revealed that two other security flaws it had patched the same day (CVE-2026-20128 and CVE-2026-20122) were being exploited in the wild.


Federal agencies ordered to patch by Friday

On Monday, CISA added CVE-2026-20133 to its Known Exploited Vulnerabilities (KEV) Catalog, “based on evidence of active exploitation,” and ordered Federal Civilian Executive Branch (FCEB) agencies to secure their networks by Friday, April 24.

“Please adhere to CISA’s guidelines to assess exposure and mitigate risks associated with Cisco SD-WAN devices as outlined in CISA’s Emergency Directive 26-03 and CISA’s Hunt & Hardening Guidance for Cisco SD-WAN Devices,” CISA said. “Adhere to the applicable BOD 22-01 guidance for cloud services or discontinue use of the product if mitigations are not available.”

Cisco has yet to confirm the U.S. cybersecurity agency’s report that the flaw is being exploited in attacks, with its security advisory still saying that its Product Security Incident Response Team (PSIRT) is “not aware of any public announcements or malicious use of the vulnerabilities that are described in CVE-2026-20133.”

In February, Cisco also tagged a critical authentication bypass vulnerability (CVE-2026-20127) as exploited in zero-day attacks that had enabled threat actors to add malicious rogue peers to targeted networks since at least 2023.


More recently, in early March, the company released security updates to address two maximum-severity vulnerabilities in its Secure Firewall Management Center (FMC) software that can allow attackers to gain root access to the underlying operating system and execute arbitrary Java code with root privileges.

Over the last several years, CISA has tagged 91 Cisco vulnerabilities as exploited in the wild, six of which have been used by various ransomware operations.




Geekom A6 mini PC is a powerful beast with $100 off right now


If desk space is tight but you still want solid performance for everyday work, a mini PC can be a great solution. I’ve found a terrific deal on the Geekom A6, now down to $549 (was $649) at Amazon.

That $100 discount brings it down to the same price as on the Geekom website, though buying direct there also gets you a free $69 case.

In our tests, Geekom mini PCs have proved very strong, and in our rave review we found the A6 “packs in an impressive amount of power” and noted, “when it comes to performance it really is a cut above many other mini PCs of this size.” We also praised the “quality of the build and the style of the design which make it one of the best-looking mini PCs out there.”


UK probes Telegram, teen chat sites over CSAM sharing concerns



Ofcom, the United Kingdom’s independent communications regulator, has launched an investigation into Telegram based on evidence suggesting it’s being used to share child sexual abuse material (CSAM).

The investigation was launched under the UK’s Online Safety Act to examine whether the social media and instant messaging (IM) service is complying with its illegal content safety duties, which require it to prevent CSAM from being shared.

Ofcom says it received evidence regarding the alleged presence and sharing of CSAM on Telegram from the Canadian Centre for Child Protection, and that it had also conducted its own assessment of the platform.


“In light of this, we have decided to open an investigation to examine whether Telegram has failed, or is failing, to comply with its duties in relation to illegal content,” Ofcom said.

However, Telegram denied Ofcom’s accusations, saying that it “virtually eliminated the public spread of CSAM” on its platform since 2018.


“We are surprised by this investigation and concerned that it may be part of a broader attack on online platforms that defend freedom of speech and the right to privacy,” Telegram said.

Ofcom has also launched formal investigations into two teen chat sites (Teen Chat and Chat Avenue) over concerns that predators are using them to groom children and to check if the two services are taking all required steps to assess and mitigate these risks.

The UK’s independent online safety watchdog is also probing X under the UK’s Online Safety Act over nonconsensual sexually explicit content generated using the Grok AI chatbot account.

If it identifies compliance failures, Ofcom can impose fines of up to £18 million or 10% of qualifying worldwide revenue (whichever is greater). Additionally, in serious cases of non-compliance, it can request a court order effectively banning the offending platform in the United Kingdom.


“In the most serious cases of non-compliance, and where appropriate given risks of harm to individuals in the UK, we can seek a court order to require third parties to take action to disrupt the business of the provider,” Ofcom noted.

“This may require third parties (such as providers of payment or advertising services, or Internet Service Providers) to withdraw services from, or block access to, a regulated service in the UK.”
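The fine cap Ofcom describes is a simple maximum of two figures. A hypothetical sketch (the function name and sample revenue figures are illustrative only, using the thresholds quoted above):

```python
def max_osa_fine_gbp(qualifying_worldwide_revenue: float) -> float:
    # Online Safety Act cap: GBP 18 million or 10% of qualifying
    # worldwide revenue, whichever is greater.
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue)

# For GBP 100m in qualifying revenue the 18m floor applies;
# at GBP 1bn, the 10% share dominates.
print(max_osa_fine_gbp(100_000_000))    # 18000000
print(max_osa_fine_gbp(1_000_000_000))  # 100000000.0
```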




Contrary to popular superstition, AES 128 is just fine in a post-quantum world


On Monday, cryptographer Filippo Valsorda finally channeled years’ worth of frustration, fueled by a widely held misunderstanding, into a blog post titled “Quantum Computers Are Not a Threat to 128-bit Symmetric Keys.”

“There’s a common misconception that quantum computers will ‘halve’ the security of symmetric keys, requiring 256-bit keys for 128 bits of security,” he wrote. “That is not an accurate interpretation of the speedup offered by quantum algorithms, it’s not reflected in any compliance mandate, and risks diverting energy and attention from actually necessary post-quantum transition work.”

That’s the easy part of the argument. The much harder part is the math and physics that explain it. At its highest level, it comes down to a fundamental difference in the way a brute-force search works on classical computers versus the way it works using Grover’s algorithm. Classical computers can perform multiple searches simultaneously, a capability that allows large tasks to be broken into smaller pieces to complete the overall job faster. Grover’s algorithm, by contrast, requires a long-running serial computation, where each search is done one at a time.

“What makes Grover special is that as you parallelize it, its advantage over non-quantum algorithms gets smaller,” Valsorda said in an interview. He continued:


Imagine it with small numbers. Let’s say there are 256 possible combinations to a lock. A normal attack would take 256 tries. You decide that’s too long, so you get three friends and you each do 64 tries. That’s the classical parallelization. With Grover you could in theory do √256 = 16 tries in a row, but if that’s still too long and you again ask three friends for help, each has to do √(256/4) = 8 tries.

So in total you do 8 × 4 = 32 tries, which is more than the 16 you would have done alone! Asking for help to parallelize the attack made the attack slower overall, which is not the case for classical attacks.

Of course the numbers are way larger, but if we apply any reasonable constraint on the attacker (like having to finish a run in 10 years), the total work becomes much more than 2^64.

Also, 2^64 was never the right number, because that pretends you can do AES as a single operation on a single qubit. This is somewhat orthogonal. The combination of these two observations turns the actual cost into 2^104, give or take, which is well beyond the threshold for security.
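Valsorda’s lock arithmetic can be replayed with a short script. The function names are mine, and the model deliberately ignores constant factors; it only captures how Grover’s square-root advantage shrinks as the search is parallelized:

```python
import math

def classical_tries_per_machine(n_keys: int, machines: int) -> float:
    # Classical brute force parallelizes perfectly: each machine
    # searches an equal slice of the keyspace.
    return n_keys / machines

def grover_tries_per_machine(n_keys: int, machines: int) -> float:
    # Grover is a long serial computation: each machine needs about
    # sqrt(slice) iterations, so adding machines only helps by a
    # factor of sqrt(machines), not machines.
    return math.sqrt(n_keys / machines)

n = 256
# Alone, Grover wins: 16 iterations versus 256 classical tries.
print(classical_tries_per_machine(n, 1), grover_tries_per_machine(n, 1))
# With four machines, each Grover machine still needs 8 iterations,
# so total quantum work is 4 * 8 = 32 -- worse than the 16 done alone.
print(4 * grover_tries_per_machine(n, 4))  # 32.0
```

The total quantum work across p machines is p·√(N/p) = √(N·p), which grows with p, while total classical work stays fixed at N. That is the asymmetry the quoted example is describing.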

Sophie Schmieg, a senior cryptography engineer at Google, explained it this way:
