Anthropic’s Claude Opus 4.6 brings 1M token context and ‘agent teams’ to take on OpenAI’s Codex

Anthropic on Thursday released Claude Opus 4.6, a major upgrade to its flagship artificial intelligence model that the company says plans more carefully, sustains longer autonomous workflows, and outperforms competitors including OpenAI’s GPT-5.2 on key enterprise benchmarks — a release that arrives at a tumultuous moment for the AI industry and global software markets.

The launch comes just three days after OpenAI released its own Codex desktop application in a direct challenge to Anthropic’s Claude Code momentum, and amid a $285 billion rout in software and services stocks that investors attribute partly to fears that Anthropic’s AI tools could disrupt established enterprise software businesses.

For the first time, Anthropic’s Opus-class models will feature a 1 million token context window, allowing the AI to process and reason across vastly more information than previous versions. The company also introduced “agent teams” in Claude Code — a research preview feature that enables multiple AI agents to work simultaneously on different aspects of a coding project, coordinating autonomously.

“We’re focused on building the most capable, reliable, and safe AI systems,” an Anthropic spokesperson told VentureBeat about the announcements. “Opus 4.6 is even better at planning, helping solve the most complex coding tasks. And the new agent teams feature means users can split work across multiple agents — one on the frontend, one on the API, one on the migration — each owning its piece and coordinating directly with the others.”

Why OpenAI and Anthropic are locked in an all-out war for enterprise developers

The release intensifies an already fierce competition between Anthropic and OpenAI, the two most valuable privately held AI companies in the world. OpenAI on Monday released a new desktop application for its Codex artificial intelligence coding system, a tool the company says transforms software development from a collaborative exercise with a single AI assistant into something more akin to managing a team of autonomous workers.

AI coding assistants have exploded in popularity over the last year, and OpenAI said more than 1 million developers have used Codex in the past month. The new Codex app is part of OpenAI’s ongoing effort to lure users and market share away from rivals like Anthropic and Cursor.

The timing of Anthropic’s release — just 72 hours after OpenAI’s Codex launch — underscores the breakneck pace of competition in AI development tools. OpenAI faces intensifying competition from Anthropic, which posted the largest share increase of any frontier lab since May 2025, according to a recent Andreessen Horowitz survey. Forty-four percent of enterprises now use Anthropic in production, driven by rapid capability gains in software development since late 2024. The desktop launch is a strategic counter to Claude Code’s momentum.

According to Anthropic’s announcement, Opus 4.6 achieves the highest score on Terminal-Bench 2.0, an agentic coding evaluation, and leads all other frontier models on Humanity’s Last Exam, a complex multi-discipline reasoning test. On GDPval-AA — a benchmark measuring performance on economically valuable knowledge work tasks in finance, legal and other domains — Opus 4.6 outperforms OpenAI’s GPT-5.2 by approximately 144 ELO points, which translates to obtaining a higher score approximately 70% of the time.
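The quoted ELO-to-win-rate conversion can be sanity-checked with the standard logistic expected-score formula. The snippet below is a quick arithmetic check of that claim, not Anthropic's evaluation code:

```python
def elo_win_probability(elo_diff: float) -> float:
    """Expected win rate for a model rated elo_diff points higher,
    per the standard ELO logistic formula."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# A 144-point ELO advantage works out to winning roughly 70% of head-to-head comparisons.
print(round(elo_win_probability(144), 2))  # → 0.7
```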

Claude Opus 4.6 leads or matches competitors across most benchmark categories, according to Anthropic’s internal testing. The model showed particular strength in agentic tasks, office work and novel problem-solving. (Source: Anthropic)

Inside Claude Code’s $1 billion revenue milestone and growing enterprise footprint

The stakes are substantial. Asked about Claude Code’s financial performance, the Anthropic spokesperson noted that in November, the company announced that Claude Code reached $1 billion in run rate revenue only six months after becoming generally available in May 2025.

The spokesperson highlighted major enterprise deployments: “Claude Code is used by Uber across teams like software engineering, data science, finance, and trust and safety; wall-to-wall deployment across Salesforce’s global engineering org; tens of thousands of devs at Accenture; and companies across industries like Spotify, Rakuten, Snowflake, Novo Nordisk, and Ramp.”

That enterprise traction has translated into skyrocketing valuations. Earlier this month, Anthropic signed a term sheet for a $10 billion funding round at a $350 billion valuation. Bloomberg reported that Anthropic is simultaneously working on a tender offer that would allow employees to sell shares at that valuation, offering liquidity to staffers who have watched the company’s worth multiply since its 2021 founding.

How Opus 4.6 solves the ‘context rot’ problem that has plagued AI models

One of Opus 4.6’s most significant technical improvements addresses what the AI industry calls “context rot” — the degradation of model performance as conversations grow longer. Anthropic says Opus 4.6 scores 76% on MRCR v2, a needle-in-a-haystack benchmark testing a model’s ability to retrieve information hidden in vast amounts of text, compared to just 18.5% for Sonnet 4.5.

“This is a qualitative shift in how much context a model can actually use while maintaining peak performance,” the company said in its announcement.

The model also supports outputs of up to 128,000 tokens — enough to complete substantial coding tasks or documents without breaking them into multiple requests.

For developers, Anthropic is introducing several new API features alongside the model: adaptive thinking, which allows Claude to decide when deeper reasoning would be helpful rather than requiring a binary on-off choice; four effort levels (low, medium, high, max) to control intelligence, speed and cost tradeoffs; and context compaction, a beta feature that automatically summarizes older context to enable longer-running tasks.

Opus 4.6 dramatically outperformed its predecessor on tests measuring how well models retrieve information buried in long documents — a key capability for enterprise coding and research tasks. (Source: Anthropic)

Anthropic’s delicate balancing act: Building powerful AI agents without losing control

Anthropic, which has built its brand around AI safety research, emphasized that Opus 4.6 maintains alignment with its predecessors despite its enhanced capabilities. On the company’s automated behavior audit measuring misaligned behaviors such as deception, sycophancy, and cooperation with misuse, Opus 4.6 “showed a low rate” of problematic responses while also achieving “the lowest rate of over-refusals — where the model fails to answer benign queries — of any recent Claude model.”

When asked how Anthropic thinks about safety guardrails as Claude becomes more agentic, particularly with multiple agents coordinating autonomously, the spokesperson pointed to the company’s published framework: “Agents have tremendous potential for positive impacts in work but it’s important that agents continue to be safe, reliable, and trustworthy. We outlined our framework for developing safe and trustworthy agents last year which shares core principles developers should consider when building agents.”

The company said it has developed six new cybersecurity probes to detect potentially harmful uses of the model’s enhanced capabilities, and is using Opus 4.6 to help find and patch vulnerabilities in open-source software as part of defensive cybersecurity efforts.

Anthropic says its newest model exhibits the lowest rate of problematic behaviors — including deception and sycophancy — of any Claude version tested, even as capabilities have increased. (Source: Anthropic)

Sam Altman vs. Dario Amodei: The Super Bowl ad battle that exposed AI’s deepest divisions

The rivalry between Anthropic and OpenAI has spilled into consumer marketing in dramatic fashion. Both companies will feature prominently during Sunday’s Super Bowl. Anthropic is airing commercials that mock OpenAI’s decision to begin testing advertisements in ChatGPT, with the tagline: “Ads are coming to AI. But not to Claude.”

OpenAI CEO Sam Altman responded by calling the ads “funny” but “clearly dishonest,” posting on X that his company would “obviously never run ads in the way Anthropic depicts them” and that “Anthropic wants to control what people do with AI” while serving “an expensive product to rich people.”

The exchange highlights a fundamental strategic divergence: OpenAI has moved to monetize its massive free user base through advertising, while Anthropic has focused almost exclusively on enterprise sales and premium subscriptions.

The $285 billion stock selloff that revealed Wall Street’s AI anxiety

The launch occurs against a backdrop of historic market volatility in software stocks. A new AI automation tool from Anthropic PBC sparked a $285 billion rout in stocks across the software, financial services and asset management sectors on Tuesday as investors raced to dump shares with even the slightest exposure. A Goldman Sachs basket of US software stocks sank 6%, its biggest one-day decline since April’s tariff-fueled selloff.

The trigger was Anthropic’s launch on Friday of plug-ins for its Claude Cowork agent, which enable automated tasks across legal, sales, marketing and data analysis — a sign of the AI industry’s growing push into industries that can unlock the lucrative enterprise revenue needed to fund massive investments in the technology.

Thomson Reuters plunged 15.83% on Tuesday, its biggest single-day drop on record, and Legalzoom.com sank 19.68%. European legal software providers including RELX, owner of LexisNexis, and Wolters Kluwer experienced their worst single-day performances in decades.

Not everyone agrees the selloff is warranted. Nvidia CEO Jensen Huang said on Tuesday that fears AI would replace software and related tools were “illogical” and “time will prove itself.” Mark Murphy, head of U.S. enterprise software research at JPMorgan, said in a Reuters report it “feels like an illogical leap” to say a new plug-in from an LLM would “replace every layer of mission-critical enterprise software.”

What Claude’s new PowerPoint integration means for Microsoft’s AI strategy

Among the more notable product announcements: Anthropic is releasing Claude in PowerPoint in research preview, allowing users to create presentations using the same AI capabilities that power Claude’s document and spreadsheet work. The integration puts Claude directly inside a core Microsoft product — an unusual arrangement given Microsoft’s 27% stake in OpenAI.

The Anthropic spokesperson framed the move pragmatically in an interview with VentureBeat: “Microsoft has an official add-in marketplace for Office products with multiple add-ins available to help people with slide creation and iteration. Any developer can build a plugin for Excel or PowerPoint. We’re participating in that ecosystem to bring Claude into PowerPoint. This is about participating in the ecosystem and giving users the ability to work with the tools that they want, in the programs they want.”

Claude’s new PowerPoint integration, shown here analyzing a market research slide, places Anthropic’s AI directly inside a flagship Microsoft product — despite Microsoft’s major investment in rival OpenAI. (Source: Anthropic)

The data behind enterprise AI adoption: Who’s winning and who’s losing ground

Data from a16z’s recent enterprise AI survey suggests both Anthropic and OpenAI face an increasingly competitive landscape. While OpenAI remains the most widely used AI provider in the enterprise, with approximately 77% of surveyed companies using it in production in January 2026, Anthropic’s adoption is rising rapidly — from near-zero in March 2024 to approximately 40% using it in production by January 2026.

The survey data also shows that 75% of Anthropic’s enterprise customers are using it in production, with 89% either testing or in production — figures that exceed OpenAI’s corresponding rates of 46% in production and 73% testing or in production among its own customer base.

Enterprise spending on AI continues to accelerate. Average enterprise LLM spend reached $7 million in 2025, up 180% from $2.5 million in 2024, with projections suggesting $11.6 million in 2026 — a 65% increase year-over-year.

OpenAI remains the dominant AI provider in enterprise settings, but Anthropic’s share has surged from near zero in early 2024 to roughly 40 percent of companies using it in production by January 2026. (Source: Andreessen Horowitz survey, January 2026)

Pricing, availability, and what developers need to know about Claude Opus 4.6

Opus 4.6 is available immediately on claude.ai, the Claude API, and major cloud platforms. Developers can access it via claude-opus-4-6 through the API. Pricing remains unchanged at $5 per million input tokens and $25 per million output tokens, with premium pricing of $10/$37.50 for prompts exceeding 200,000 tokens using the 1 million token context window.
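At those rates, a per-request cost estimate is simple arithmetic. The sketch below hardcodes the prices published in this announcement and assumes, for simplicity, that the premium rate applies to the entire request once the prompt exceeds 200,000 tokens; the function name is illustrative, not part of any SDK:

```python
def opus_46_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate Claude Opus 4.6 API cost from the published per-million-token
    rates: $5 in / $25 out standard, $10 / $37.50 for prompts over 200K tokens."""
    if input_tokens > 200_000:          # long-context premium tier
        in_rate, out_rate = 10.00, 37.50
    else:                               # standard tier
        in_rate, out_rate = 5.00, 25.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# e.g. a 300K-token prompt with a 20K-token reply
print(round(opus_46_cost_usd(300_000, 20_000), 2))  # → 3.75
```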

For users who find Opus 4.6 “overthinking” simpler tasks — a characteristic Anthropic acknowledges can add cost and latency — the company recommends adjusting the effort parameter from its default high setting to medium.

The recommendation captures something essential about where the AI industry now stands. These models have grown so capable that their creators must now teach customers how to make them think less. Whether that represents a breakthrough or a warning sign depends entirely on which side of the disruption you’re standing on — and whether you remembered to sell your software stocks before Tuesday.


VPN logging: what data does your VPN need to collect?

Virtual Private Networks (VPNs) promise to hide your online activities from prying eyes, but still need to gather some information to work properly.

Understanding exactly what data a VPN collects – and why – can help you decide whether a VPN service truly protects your privacy or simply adds another unwanted layer of surveillance.

From activity logs to the different policy types, we’ll walk you through the typical categories of logs a VPN provider might keep. We’ll explain what a “no-logs” VPN really means, highlight when a VPN’s data collection becomes too risky, and provide you with some practical tips for picking a trustworthy VPN provider.

The most trustworthy VPNs will only log what’s absolutely necessary, but what does that include? (Image credit: Getty Images)


Engineers: Translate Complexity Into Clarity

This article is crossposted from IEEE Spectrum’s careers newsletter. Sign up now to get insider tips, expert advice, and practical strategies, written in partnership with tech career development company Parsity and delivered to your inbox for free!

Engineers Aren’t Bad at Communication. They’re Just Speaking to the Wrong Audience.

There’s a persistent myth that engineers are bad communicators. In my experience, that’s not true.

Engineers are often excellent communicators—inside their domain. We’re precise. We’re logical. We structure arguments clearly. We define terms. We reason from constraints.

The breakdown happens when the audience changes.

We’re used to speaking in highly technical language, surrounded by people who share our vocabulary. In that environment, shorthand and jargon are efficient. But outside that bubble, when talking to executives, product managers, marketing teams, or customers, that same precision can be confusing.

The problem isn’t that we can’t communicate. It’s that we forget to translate.

If you’ve ever explained a critical issue or error to a non-technical stakeholder, you’ve probably experienced this: You give a technically accurate explanation. They leave either more confused than before, or more alarmed than necessary.

Suddenly you’re spending more time clarifying your explanation than fixing the issue.

Under pressure, we default to what we know best—technical detail. But detail without context creates cognitive overload. The listener can’t tell what matters, what’s normal, and what’s dangerous.

That’s when the “engineers can’t communicate” narrative shows up.

In reality, we just skipped the translation step.

The Writing Shortcut

One of the simplest ways to improve your written communication today: run your explanation through an AI model and ask, “Would this make sense to a non-technical audience? Where would someone get confused?”

You can also say:

  • “Rewrite this for an executive audience.”
  • “What analogy would help explain this?”
  • “Simplify this without losing accuracy.”

Large language models are particularly good at identifying jargon and offering alternative framings. They’re essentially translation assistants.

Analogies are especially powerful. If you’re explaining system latency, compare it to traffic congestion. If you’re describing technical debt, compare it to skipping maintenance on a house. If you’re explaining distributed systems, try using supply chain examples.

The goal isn’t to “dumb it down.” It’s to map the unfamiliar onto something familiar.

Before sending an email or report, ask yourself:

  • Does this audience need to understand the mechanism, or just impact?
  • Does this explanation help them make a decision?
  • Have I defined terms they might not know?

Translation When Speaking

When speaking—especially in meetings or presentations—most engineers have one predictable habit: We speak too fast.

Nerves speed us up. Speed causes filler words. Filler words dilute authority.

To prevent that, follow a simple rule: Speak 10 to 15 percent slower than feels natural.

Slowing down cuts down the number of times you say “um” and “uh”, gives you time to think, makes you sound more confident, and gives the listener time to process.

Another rule: Say only what the audience needs to move forward.

Explain just enough for the person to make a decision. If you overload someone with implementation details when they only need tradeoffs, you’ve made their job harder.

The Real Skill

The key skill in communication is audience awareness.

The same engineer who can clearly explain a concurrency bug to a peer can absolutely explain system risk to an executive. The difference is framing, vocabulary, and context. Not intelligence.

In the age of AI, where code generation is increasingly commoditized, the ability to translate complexity into clarity is becoming a defining advantage.

Engineers aren’t bad communicators. We just have to remember that outside our bubble, translation is part of the job.

—Brian

Robert Goddard launched the first liquid-fueled rocket 100 years ago, but his legacy still has relevant lessons for today’s engineers. Although Goddard’s headstrong confidence in his ideas helped bring about the breakthrough, it later became an obstacle in what systems engineer Guru Madhavan calls “the alpha trap.” Madhavan writes: “We love to celebrate the lone genius, yet we depend on teams to bring the flame of genius to the people.”

Read more here.

For Communications of the ACM, two Microsoft engineers propose a model for software engineering in the age of AI: Making the growth of early-in-career developers an explicit organizational goal. Without hiring early-career workers, the profession’s talent pipeline will eventually dry up. So, they argue, companies must hire them and develop talent, even if that comes with a short-term dip in productivity.

Read more here.

Looking for a job? Last year, IEEE Industry Engagement hosted its first virtual career fair to connect recruiters and young professionals. Several more career fairs are now planned, including two upcoming regional events and a global career fair in June. At these fairs, you can participate in interactive sessions, chat with recruiters, and experience video interviews.

Read more here.



Demonstrating Gray Codes With Industrial Display

Many people base huge swaths of their lives on foundational philosophical texts, yet few have read them in their entirety. The one that springs to the forefront of many of our minds is The Art of Computer Programming by Donald Knuth. Full of many clever and outright revolutionary algorithms and new ways of thinking about how computers work, [Attoparsec] has been attempting to read this tome from cover to cover, and has found some interesting tidbits. One of those is the various algorithms around Gray Codes, and he built this device as a visual aid.

Gray codes, otherwise known as reflected binary, are a way of ordering an arbitrarily large set of binary values so that only one bit changes between any two adjacent values. They are most commonly used in devices like rotary encoders, where they provide better assurance that the position of a shaft is read correctly. To demonstrate this in a more visual way, [Attoparsec] hooked up an industrial signal light, normally used for communicating the status of machinery in a factory, and programmed it to display the various codes. A standard binary counter is used as a reference, and the device can also display standard Gray code as well as a number of other algorithms used for solving similar problems.
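The one-bit-change property is easy to verify in a few lines. This is a generic sketch of the standard binary-reflected construction, not the code actually running on [Attoparsec]'s display:

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of n: XOR n with itself shifted right by one."""
    return n ^ (n >> 1)

# First eight codes: 000 001 011 010 110 111 101 100
codes = [gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])

# Adjacent codes (including the wrap-around) differ in exactly one bit, which is
# why a rotary encoder can never misread a transition by more than one step.
assert all(bin(codes[i] ^ codes[(i + 1) % 8]).count("1") == 1 for i in range(8))
```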

[Attoparsec] built this as an interactive display for the Open Sauce festival in San Francisco. To that end it needed to be fairly rugged, so he built it out of old industrial equipment, which is also a fitting theme for the light itself. There’s a speed controller and an emergency stop button as well, which add to the motif. For a deeper dive on Gray codes and their uses, take a look at this feature from a few years back.


Pepper acquires YC-backed Alima to bring AI to food distribution catalogues

Pepper, a New York-based technology platform for independent food distributors, has acquired Alima, a Y Combinator-backed startup that built ordering and procurement software for small food distributors in Latin America. The deal, announced on Tuesday with no disclosed financial terms, brings Alima’s two cofounders into Pepper’s leadership team and extends the company’s push into AI-driven product content and data infrastructure for an industry that still runs largely on phone calls, faxes, and personal relationships.

Jorge Vizcayno, Alima’s chief executive, will lead Pepper’s product content platform and data infrastructure, which uses AI to match and enrich product catalogues at scale. Blanca Espinosa, Alima’s chief marketing officer and cofounder, will head customer implementation, applying AI tooling to the onboarding process that has historically been one of the most friction-heavy parts of selling software to food distributors.

Two companies, one thesis

The acquisition is small in isolation but revealing in what it says about where vertical software for food distribution is heading. Pepper and Alima were built on the same premise: that independent food distributors, who collectively account for more than two-thirds of food distribution in North America and handle over $1.4 trillion in annual sales, are woefully underserved by technology.

Alima, founded in 2021, tackled the problem from the Latin American side, where the gap is even wider. More than 85 per cent of B2B food suppliers and distributors in the region lack digital sales capabilities, according to the company’s own estimates. Alima built an ordering platform for small and mid-sized distributors, focusing initially on fresh produce procurement in Mexico. The company went through Y Combinator’s Winter 2022 batch and raised $1.5 million in seed funding from Soma Capital, YC, The Dorm Room Fund, and angel investors.


Pepper, meanwhile, has grown into a broader platform covering ordering, sales and marketing, accounts receivable, and embedded payments for US-based food distributors. The company has raised $99 million across three rounds, most recently a $50 million Series C in February led by Lead Edge Capital, with participation from ICONIQ, Index Ventures, Greylock, Harmony Partners, and Interplay. It now serves more than 500 distributors representing approximately $30 billion in annual gross merchandise volume.

The AI angle

The strategic logic of the deal centres on product content, the sprawling, fragmented catalogues that food distributors must manage across thousands of SKUs from hundreds of suppliers. In food distribution, product data is notoriously messy: item descriptions vary between suppliers, packaging formats differ by region, and pricing changes frequently. Pepper has been building AI systems to match and enrich this data automatically, and Vizcayno’s experience building similar infrastructure for Latin American distributors makes the acquisition a talent and technology play as much as a market expansion one.

Espinosa’s role is equally telling. Customer implementation, the process of getting a distributor onto a new technology platform, is where many vertical SaaS companies lose deals. Distributors often have limited technical staff, legacy systems that resist integration, and operations that cannot afford downtime during a migration. Pepper is betting that AI-assisted onboarding can compress what has traditionally been a months-long process, and Espinosa’s background in customer acquisition at Alima positions her to lead that effort.

This is Pepper’s second acquisition in seven months. In August 2025, it acquired Kimelo, a distribution toolset that included a restaurant supply ordering app. The pace suggests Pepper is consolidating a fragmented market of small vertical tools into a single platform, a playbook familiar from other industries but still relatively early in food distribution.

A $1.4 trillion market, still on paper

The broader context is that food distribution technology remains in its early innings despite its enormous addressable market. Independent distributors are the backbone of the food supply chain, connecting farms and manufacturers to the restaurants, grocery stores, and institutions that feed people. Yet the industry’s technology adoption lags far behind comparable sectors like logistics, retail, and financial services.

Pepper’s investor list, which includes Index Ventures and Greylock, signals that serious venture capital is flowing into the space. The $50 million Series C in February valued the company at an undisclosed figure but positioned it as the category leader in a market where no dominant platform has yet emerged. The Alima acquisition adds Latin American domain expertise and a bilingual founding team to a company that will likely need to expand beyond the US to justify its funding trajectory.

For Alima’s founders, the framing is pragmatic. Vizcayno described the acquisition as the most honest continuation of Alima’s journey. Whether that honesty reflects strategic alignment or the practical reality that a $1.5 million seed-stage startup in a difficult Latin American market found a faster path to impact inside a better-funded platform is, ultimately, the same thing said two different ways.


A new app wants to cure loneliness by getting people off their phones and into the same room

A startup called Friending has launched a social platform built around a premise that sounds almost quaint in 2026: helping people make friends by meeting in person. The app, based in Raleigh, North Carolina, connects users by shared interests and geographic proximity, then deliberately limits chat functionality to push them toward face-to-face meetings rather than prolonged online conversations. Every user is verified through a third-party identity service, and the platform can confirm when two users’ phones are physically near each other, a feature designed to validate that meetings actually happen.

The timing is deliberate. In 2023, US Surgeon General Vivek Murthy issued an 82-page advisory declaring loneliness and social isolation a public health epidemic, finding that lacking social connection carries health risks comparable to smoking up to 15 cigarettes per day. Social isolation increases the risk of premature death by 29 per cent, heart disease by 29 per cent, and stroke by 32 per cent. Among older adults, chronic loneliness raises the risk of dementia by approximately 50 per cent. Half of American adults reported experiencing loneliness even before the pandemic.

Friending is far from the first app to try to address this. Bumble BFF launched in 2016 and saw a 16 per cent increase in time spent on its parent platform after adding the feature. Peanut, which connects mothers, has raised $17 million. Yubo, aimed at young adults, has raised $65.7 million. The friendship app category as a whole has attracted more than $84 million in venture capital. Yet none of these platforms has achieved the scale or cultural penetration of dating apps, which suggests either that the market is harder to crack or that the product designs have not yet found the right formula.

What Friending does differently

Friending’s distinguishing feature is its insistence on brevity in online interaction. Where most social platforms optimise for engagement time, measuring success by how long users stay on their screens, Friending treats extended chat as a failure state. The app is designed so that the valuable action is not the conversation but the meeting that follows it. The proximity verification feature, which registers when two users’ phones are physically close, serves as both a safety mechanism and a behavioural nudge: it confirms the meeting happened and reinforces the platform’s core proposition.


The identity verification layer is worth noting in a market where catfishing and fake profiles have eroded trust across social platforms. Friending uses a third-party verification system, though the company has not disclosed which provider it uses or what level of identity confirmation is required.

Gabor Kadas, the company’s founder, has described the app as a response to a paradox he experienced personally: moving between countries and accumulating thousands of online connections while feeling increasingly isolated. The company is currently raising venture capital to fund development and expansion, though it has not disclosed the size of the round or any committed investors.

The harder question

The challenge for any friendship app is not getting people to download it but getting them to use it more than once. Dating apps benefit from a powerful, specific motivation: the desire for romantic connection is urgent enough to overcome the friction of meeting strangers. Friendship is different. The need is real but diffuse, and the social cost of admitting you need an app to make friends remains higher than the cost of admitting you need one to find a date.

There is also the question of whether limiting online interaction actually helps. Research from the New York Academy of Sciences suggests that the relationship between social media and loneliness depends on the type of platform and the nature of the engagement. Active participation, such as responding to posts and sending messages, is associated with reduced loneliness. Passive use, such as scrolling without interacting, is not. By restricting chat, Friending may be removing one of the mechanisms through which users build the comfort and trust necessary to meet a stranger in person.

None of this means the idea lacks merit. The Surgeon General’s advisory was not a passing observation; it was a formal declaration that the country’s social fabric is fraying in ways that produce measurable harm. If Friending can convert even a fraction of the lonely half of America into regular users, it will have found something the larger platforms have not. The question is whether an app that asks people to put down their phones is fighting the problem or fighting human nature.

Source link

Continue Reading

Tech

Samsung 2026 Mini LED TVs: Full Pricing, Features and 4K Lineup Details

Published

on

Samsung is expanding its already crowded TV lineup for 2026 with a new range of Mini LED 4K UHD models, alongside an updated Neo QLED series that pushes further into premium territory. The strategy is familiar but effective: take the core advantages of Mini LED backlighting (better contrast control, higher brightness, and more precise local dimming) and pair them with a deeper layer of AI-driven processing and smart platform refinements.

There’s a lot to unpack across both categories, so we’re keeping this focused. This article breaks down Samsung’s 2026 Mini LED 4K lineup, where the company is clearly trying to hit the sweet spot between performance and price; the Neo QLED models, which lean more heavily into flagship features and higher-end positioning, are covered separately.

What Are Samsung Mini LED TVs?

Samsung’s Mini LED TVs are still LCD-based displays, but they use a more advanced form of full-array LED backlighting. The difference comes down to scale: the LEDs are significantly smaller, which allows for far more precise local dimming and better control of light across the screen—especially when rendering bright objects against darker backgrounds.

When paired with HDR formats like HDR10+, this improved backlight control translates into higher peak brightness, better contrast, and expanded color volume. In practical terms, that means a more dynamic and accurate picture without abandoning the proven strengths of LCD technology.
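Samsung does not publish its dimming algorithms, but the scale argument can be made concrete with a toy model (not Samsung's actual processing): divide the frame into backlight zones and drive each zone's LED to the brightest pixel it must display. Smaller zones let dark areas stay dark right next to bright highlights.

```python
# Toy model of full-array local dimming (illustrative only).
# Each backlight zone is driven to the brightest pixel it has to show.

def zone_levels(frame, zone_w, zone_h):
    """frame: 2D list of pixel luminance values in 0.0-1.0.
    Returns the backlight level for each zone_w x zone_h zone."""
    rows, cols = len(frame), len(frame[0])
    levels = []
    for zr in range(0, rows, zone_h):
        row_levels = []
        for zc in range(0, cols, zone_w):
            zone = [frame[r][c]
                    for r in range(zr, min(zr + zone_h, rows))
                    for c in range(zc, min(zc + zone_w, cols))]
            row_levels.append(max(zone))  # LED tracks the brightest pixel
        levels.append(row_levels)
    return levels

# A single bright highlight on an otherwise black 4x4 frame:
frame = [[0.0] * 4 for _ in range(4)]
frame[0][0] = 1.0

# One coarse zone covering the whole frame: the highlight forces the
# entire backlight up, washing out every black pixel.
print(zone_levels(frame, 4, 4))   # [[1.0]]

# Finer 2x2 zones: only the zone containing the highlight lights up.
print(zone_levels(frame, 2, 2))   # [[1.0, 0.0], [0.0, 0.0]]
```

Real TVs add temporal smoothing and halo compensation on top of this, but the basic trade-off is the one the toy model shows: more, smaller zones mean less stray light around bright objects.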

Samsung 2026 Mini LED TV Lineup 

For 2026, Samsung is introducing two Mini LED TV series so far: the M80H and M70H. Both models feature 4K UHD resolution, the Tizen smart TV platform, gaming-focused features, and Samsung’s Vision AI Companion for enhanced picture and usability.

M80H

The M80H series is available in screen sizes from 55 to 85 inches, while the M70H series spans a broader range from 43 to 85 inches. Between the two, there’s enough flexibility to match just about any viewing distance or room size without forcing a compromise on features.

M70H

Key Features

Both series are built to deliver a strong 4K UHD viewing experience, with AI-driven processing handling upscaling and scene optimization. The M80H uses Samsung’s NQ4 AI Gen2 Processor, while the M70H relies on the Mini LED Processor 4K, with both designed to enhance clarity and detail on a scene-by-scene basis.

Samsung’s Real Depth Enhancer is included on both models, improving foreground definition and helping key on-screen elements stand out more clearly. The M80H adds AI Customization Mode, which learns your preferred picture settings by genre during setup and then automatically adjusts image quality based on what you’re watching.

For audio, the M80H also includes Active Voice Amplifier, which boosts dialogue and important sound effects to improve clarity—especially useful when background noise tries to steal the scene. 

Also, with Q Symphony, the M80H and M70H can be combined with compatible Samsung soundbars and Wi-Fi speakers to operate as a single, coordinated sound system rather than isolated components.

Gaming support in both series includes Samsung’s Gaming Hub, Cloud Gaming, and VRR (Variable Refresh Rate). The M80H additionally provides AI Auto Game Mode, Gaming Motion Plus, and AMD FreeSync Premium Pro support.

Samsung’s Vision AI experience is included in both Mini LED TV series. Anchored by the Perplexity TV App, Vision AI on TVs goes beyond simple voice commands or video enhancements by combining AI audio/video processing, Bixby voice control, Tizen Smart TV integration, and Knox Security into a single, seamless ecosystem.

Comparison

There’s a fair amount of feature overlap between the M80H and M70H models, but key differences remain. We’ve included a detailed comparison chart below to make it easier to see where they separate.

Samsung Model: M80H | M70H
Product Type: Mini-LED TV | Mini-LED TV
Screen Sizes (diagonal inches): 55, 65, 75, 85 | 43, 50, 55, 65, 75, 85
Price (M80H): 55" $699.99, 65" $799.99, 75" $1,199.99, 85" $1,799.99
Price (M70H): 43" $349.99, 50" $399.99, 55" $449.99, 65" $529.99, 75" $729.99, 85" $1,199.99
Refresh Rate: 144Hz (VRR support) | 60Hz
Lighting Technology: Mini LED | Mini LED
Display Resolution: 4K (3840 x 2160) | 4K (3840 x 2160)
Anti Reflection: — | —
Dimming Technology: Supreme Mini LED Dimming | Supreme Mini LED Dimming
Processor: NQ4 AI Gen2 Processor | Mini LED Processor 4K
Upscaling: 4K AI Upscaling | 4K Upscaling
Variable Refresh Rate (VRR): Yes | Yes
Motion Handling: Motion Xcelerator 144Hz | Motion Xcelerator
DLG (Dual Line Gate): 240Hz | 120Hz (55"–85"), N/A (43"–50")
Contrast Enhancer: Real Depth Enhancer | Yes
Color: Pure Spectrum Color | Pure Spectrum Color
Color Booster Pro: Yes | —
HDR (High Dynamic Range): Mini LED HDR | Mini LED HDR
HDR10+: Yes | Yes
Auto HDR Remastering: Yes | —
Adaptive Picture: AI Customization | —
Supersize Picture Enhancer: — | —
TV Depth: 3" | 3"
Front Color: Black | Titan Black
Stand Type: Basic Feet | Basic Feet
Stand Color: Titan Gray | Black
Adjustable Stand: 75-inch model only
Wi-Fi: Yes (Wi-Fi 6E) | Yes (Wi-Fi 6E)
Bluetooth Version: 5.3 | 5.3
HDMI Inputs: 3 | 3
HDMI Maximum Input Rate: 4K 144Hz (HDMI 1/2/3) | 4K 60Hz (HDMI 1/2/3)
HDMI Audio Return Channel: eARC | eARC
HDMI-CEC: Yes | Yes
USB Ports: 1 x USB-A | 1 x USB-A
Ethernet (LAN): Yes | Yes
Digital Audio Out (Optical): — | —
RF Connection: Yes | Yes
RS-232C Input: — | —
Samsung Vision AI: Yes | Yes
Gaming Support (M80H): Gaming Hub; Cloud Gaming (Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, Boosteroid); ALLM (Auto Low Latency Mode); HGIG; AI Auto Game Mode; Gaming Motion Plus; Super Ultra Wide Game View; Game Bar; Mini Map Zoom; AMD FreeSync Premium Pro; Hue Sync
Gaming Support (M70H): Gaming Hub; Cloud Gaming (Xbox, NVIDIA GeForce Now, Luna, Blacknut, Antstream, Boosteroid); ALLM (Auto Low Latency Mode); HGIG
TV Art Features: Art Mode N/A, Art Store Yes | Art Mode N/A, Art Store Yes
Operating System: One UI Tizen | One UI Tizen
Free Ad-Supported TV: Samsung TV Plus | Samsung TV Plus
Smart Home Connectivity: SmartThings, Matter, IoT sensor functionality | Quick Remote only
Smart Assistants (Built-In): Bixby, Alexa | Bixby, Alexa
Smart Assistants (Works With): Google Assistant | Google Assistant
Far-Field Voice Interaction: Yes | —
Web Browser: Yes | Yes
Samsung Health: Yes | Yes
Multi-Device Experience: Mobile to TV, Sound Mirroring, Wireless TV On, TV-initiated mirroring | Mobile to TV, Sound Mirroring, Wireless TV On
Multi-View: Up to 2 videos | —
Buds Auto Switch: Yes | —
Works with Apple AirPlay: Yes | Yes
Works with Google Cast: Yes | Yes
Daily+: Yes | —
Now Brief: Yes (Voice/User Detection) | —
Workout Tracker: Yes | —
Audio (M80H): 2-channel speaker system, 20W output power, Object Tracking Sound (OTS) Lite, Q-Symphony, Active Voice Amplifier (AVA) Pro, Adaptive Sound Plus
Audio (M70H): 2-channel speaker system, 20W output power, Object Tracking Sound (OTS) Lite, Q-Symphony
Karaoke Mic: Yes | Yes
Multi-Control: Yes | —
Storage Share: Yes | —
Security: Knox Vault N/A, Knox Security Yes | Knox Vault N/A, Knox Security Yes
Remote Control: Bluetooth Simple Remote TM2280A (batteries included) | IR Simple Remote TM2240A (batteries included)

The Bottom Line

Samsung’s 2026 Mini LED lineup sits in a very calculated middle ground. You’re getting the core benefit that actually matters: Mini LED backlighting for better contrast, brightness control, and more consistent HDR performance, without paying Neo QLED prices. Add in Tizen, Vision AI, and solid gaming support, and these don’t feel stripped down in daily use. For a lot of buyers, this is where the real value is.

What’s missing is just as important. No Quantum Dot layer means color accuracy and color volume won’t match Samsung’s Neo QLED models, and you’re not getting the full processing and refinement stack reserved for the higher tier. These are for buyers who want a meaningful step up from basic LED TVs without drifting into premium pricing. If you’re chasing reference-level performance, keep walking. If you want a well-equipped 4K Mini LED TV that covers the essentials and then some, this is the safer—and smarter—place to land.

Availability & Pricing

Samsung’s 2026 Mini LED 4K TVs are available now:

M80H Series

M70H Series

For more information: Samsung Product Page

Source link

Continue Reading

Tech

Reddit may ask you to prove you’re human as it cracks down on bot accounts

Published

on

Reddit is stepping up its fight against bots, and now your account could be asked to prove it is human if the platform detects fishy behaviour.

Reddit CEO Steve Huffman says these checks will be rare, but they are meant to protect what makes Reddit work in the first place – real people talking to real people.

As AI-generated content spreads, Reddit admits it is getting harder to tell who is behind a post. So instead of broad crackdowns, it is focusing on suspicious behavior and adding clearer signals across the platform.

How Reddit plans to separate humans from bots

If Reddit detects signs of automation or unusual behavior, it may trigger a human verification check. This could involve simple actions like passkeys or Face ID that confirm a human is present.

In some cases, third-party biometric systems like Sam Altman’s World ID may be used. The platform may also use government-issued IDs in regions where laws require them. However, Reddit says that your identity will stay separate from your account.

The company is also standardizing labels for automated accounts. Approved bots will carry an [APP] tag, making it obvious you are interacting with software. Developers will need to register their tools to get this label, which adds a layer of transparency.

What does this mean for your Reddit experience?

Since Reddit says this is not a sitewide verification system, most users might never be asked to prove anything. Even when such checks take place, the focus will be on confirming a human exists, not identifying who that person is.

At the same time, the platform will continue removing harmful bots at scale, already taking down around 100,000 accounts daily. It is also improving reporting tools so users can flag suspicious activity more easily.

Reddit is not banning AI-written posts outright, but it is drawing a firm line. For now, the platform cares less about how content is written and more about who is behind it.

Source link

Continue Reading

Tech

New Torg Grabber infostealer malware targets 728 crypto wallets

Published

on

A new info-stealing malware called Torg Grabber is stealing sensitive data from 850 browser extensions, more than 700 of them for cryptocurrency wallets.

Initial access is obtained through the ClickFix technique by hijacking the clipboard and tricking the user into executing a malicious PowerShell command.

According to researchers at cybersecurity company Gen Digital, Torg Grabber is actively developed, with 334 unique samples compiled in three months (between December 2025 and February 2026) and new command-and-control (C2) servers registered every week.

Apart from cryptocurrency wallets, Torg Grabber steals data from 103 password managers and two-factor authentication tools, and 19 note-taking apps.

Rapid evolution

In a technical report this week, Gen Digital researchers say that Torg Grabber’s initial builds used first a Telegram-based channel and later a custom, encrypted TCP protocol for data exfiltration.

On December 18, 2025, the two mechanisms were abandoned in favor of an HTTPS connection routed through Cloudflare infrastructure. The method supports chunked data uploads and payload delivery.

Torg Grabber’s development timeline (Source: Gen Digital)

The malware features several anti-analysis mechanisms, multi-layered obfuscation, and uses direct syscalls and reflective loading for evasion, running the final payload entirely in memory.

On December 22, 2025, Torg Grabber added App-Bound Encryption (ABE) bypass to beat Chrome’s (and Brave’s, Edge’s, Vivaldi’s, and Opera’s) cookie protection system, like many other information stealers.

However, the researchers also discovered a standalone tool called Underground, used for extracting browser data.

It injects a DLL reflectively into the browser to access Chrome’s COM Elevation Service and extract the master encryption key, a method also recently seen in VoidStealer.

Extensive data theft capabilities

Gen Digital found that Torg Grabber targets 25 Chromium-based browsers and 8 Firefox variants, trying to steal credentials, cookies, and autofill data.

Of the 850 browser extensions it targets, 728 are for cryptocurrency wallets, covering “essentially every crypto wallet ever conceived by human optimism.”

“The marquee names are all there – MetaMask, Phantom, TrustWallet, Coinbase, Binance, Exodus, TronLink, Ronin, OKX, Keplr, Rabby, Sui, Solflare,” the researchers say.

“But the list doesn’t stop at the big names. It keeps going, deep into the long tail, past projects with install counts you could fit in a phone booth.”

Apart from wallets, the malware also targets a large list of 103 extensions for passwords, tokens, and authenticators: LastPass, 1Password, Bitwarden, KeePass, NordPass, Dashlane, ProtonPass, Enpass, Psono, Pleasant Password Server, heylogin, 2FAAuth, GAuth, TOTP Authenticator, and Akamai MFA.

Torg Grabber also targets information from Discord, Telegram, Steam, VPN apps, FTP apps, email clients, password managers, and desktop cryptocurrency wallet apps.

The malware can also profile the host, create a hardware fingerprint, document installed software (including 24 antivirus tools), take screenshots of the user’s desktop, and steal files from the Desktop/Documents folders.

Also notable is its capability to execute shellcode on the compromised device, delivered in ChaCha-encrypted zlib-compressed form from the C2.
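Gen Digital has not published the malware's keys or exact parameters. As an illustration of that delivery scheme only, here is a self-contained sketch of the wrapping order it implies: compress then encrypt on the sending side, decrypt then decompress on the receiving side. The ChaCha20 implementation follows RFC 8439; the key, nonce, and payload bytes below are dummy stand-ins, not artifacts from the malware.

```python
import struct
import zlib

def _qr(s, a, b, c, d):
    """ChaCha quarter round on four 32-bit state words."""
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] ^= s[a]; s[d] = ((s[d] << 16) | (s[d] >> 16)) & 0xffffffff
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] ^= s[c]; s[b] = ((s[b] << 12) | (s[b] >> 20)) & 0xffffffff
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] ^= s[a]; s[d] = ((s[d] << 8) | (s[d] >> 24)) & 0xffffffff
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] ^= s[c]; s[b] = ((s[b] << 7) | (s[b] >> 25)) & 0xffffffff

def _block(key, counter, nonce):
    """One 64-byte ChaCha20 keystream block (RFC 8439 state layout)."""
    state = list(struct.unpack('<16I',
        b'expand 32-byte k' + key + struct.pack('<I', counter) + nonce))
    ws = state[:]
    for _ in range(10):  # 20 rounds = 10 column+diagonal double rounds
        _qr(ws, 0, 4, 8, 12); _qr(ws, 1, 5, 9, 13); _qr(ws, 2, 6, 10, 14); _qr(ws, 3, 7, 11, 15)
        _qr(ws, 0, 5, 10, 15); _qr(ws, 1, 6, 11, 12); _qr(ws, 2, 7, 8, 13); _qr(ws, 3, 4, 9, 14)
    return struct.pack('<16I', *[(w + s) & 0xffffffff for w, s in zip(ws, state)])

def chacha20(key, nonce, data, counter=1):
    """Stream cipher: the same call encrypts and decrypts
    (32-byte key, 12-byte nonce)."""
    out = bytearray()
    for i in range(0, len(data), 64):
        ks = _block(key, counter + i // 64, nonce)
        out += bytes(x ^ y for x, y in zip(data[i:i + 64], ks))
    return bytes(out)

# Sender: compress, then encrypt. Receiver: decrypt, then decompress.
key, nonce = b'\x01' * 32, b'\x02' * 12      # dummy values
payload = b'\x90' * 64 + b'\xcc'             # stand-in payload bytes
wire = chacha20(key, nonce, zlib.compress(payload))
recovered = zlib.decompress(chacha20(key, nonce, wire))
print(recovered == payload)  # True
```

The order matters: compressing before encrypting works because plaintext is compressible, while ciphertext is statistically random and would not shrink.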

Gen Digital cautions that Torg Grabber continues to develop rapidly, registering new C2 domains weekly, and that its operator base is expanding, with 40 tags documented by the time of analysis.

Source link

Continue Reading

Tech

Singapore’s 1st driverless public bus is here

Published

on

Trials will start in Marina Bay and one-north

The first of six autonomous public buses has reached Singapore and will be tested on routes 400 in Marina Bay and 191 in one-north from the second half of 2026 as part of a three-year pilot programme.

In a Facebook video on Mar 25, the Singapore Land Transport Authority (LTA) said that the six buses will be rigorously tested to ensure that they meet all safety and operating requirements before they hit the roads.

When ready, the self-driving public buses will operate alongside existing manned buses, allowing LTA to maintain routes with lower ridership or launch new services that are currently difficult to introduce due to manpower constraints.

In its Facebook video, LTA offered a glimpse of the driverless bus interior. (Screengrab from LTA)

The video from LTA revealed a 16-seater bus with features that resemble those of existing public buses. It also showed a space designated for a wheelchair.

Cameras and sensors are seen mounted on the front, rear and top of the autonomous bus, providing operators with a 360-degree view of the surroundings.

LTA noted that further preparation is needed before testing begins.

The tests will include LTA’s closed-circuit assessment, consisting of basic manoeuvres and safe passenger boarding and alighting at all designated stops.

Service 400 connects Marina Bay and Shenton Way, stopping at Marina Bay Cruise Centre, Gardens by the Bay, Shenton Way and Downtown MRT stations.

Service 191, meanwhile, loops through one-north, with stops at Buona Vista bus terminal, one-north MRT, and Buona Vista MRT.

Following this deployment, LTA may procure up to 14 additional autonomous buses and expand the pilot to more public bus services.

LTA first teased the launch of driverless public buses last October, when it awarded the pilot deployment contract to a consortium of MKX Technologies Pte Ltd, Zhidao Network Technology (Beijing) Co. Ltd and BYD (Singapore) Pte Ltd, for a contract sum of around S$8.14 million.

The consortium will also work with the Singapore Bus Academy to train existing bus captains to take on new roles as safety operators, so that they are equipped to operate the autonomous buses competently and confidently.

Featured Image Credit: LTA

Source link

Continue Reading

Tech

Canada’s Immigration Rejected Applicant Based On AI-Invented Job Duties

Published

on

New submitter haroldbasset writes: Canada’s Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department’s AI assistant had invented that work experience. She has been working in Canada as a health scientist — she has a Ph.D. in the immunology of aging — but the AI genius instead described her as “wiring and assembling control circuits, building control and robot panels, programming and troubleshooting.” “It’s believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals,” reports the Toronto Star. “The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision.”

The applicant’s lawyer said he was shocked at “how any human being could make this decision.” “Somehow, it hallucinated my client’s job description,” he said. “I would love to see what the officer saw. Something seriously went wrong here.”

The applicant’s refusal came just as Canada’s Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.

Source link

Continue Reading

Trending

Copyright © 2025