Tech

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws


System76 isn’t the only one criticizing new age-verification laws. The blog 9to5Linux published an “informal” look at other discussions in various Linux communities.

Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. “Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response,” said Jon Seager, VP Engineering at Canonical. “The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical.”
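The proposal names only the interface; no method set has been published. As a purely hypothetical illustration of what such a D-Bus introspection fragment might look like (the method name and signature below are invented, not part of the proposal):

```xml
<!-- Hypothetical sketch: only the interface name comes from the
     mailing-list proposal; the method and its signature are invented
     here for illustration. -->
<node>
  <interface name="org.freedesktop.AgeVerification1">
    <!-- Could report whether the session user has declared an adult
         age bracket, leaving the verification mechanism to the distro. -->
    <method name="IsAdult">
      <arg type="b" name="is_adult" direction="out"/>
    </method>
  </interface>
</node>
```

The appeal of a D-Bus interface is exactly what Rainbolt describes: applications can query it if it exists, and each distribution decides whether and how to implement it.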

Similar talks are underway in the Fedora and Linux Mint communities in case California's Digital Age Assurance Act and similar laws from other states and countries come into force. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.
Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization “has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet.”

And there’s another problem. “Many of these mandates imagine technology that does not currently exist.”
Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.


These burdens fall particularly heavily on developers who aren’t at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users’ and developers’ right to free expression, their digital liberties, privacy, and ability to create and use open platforms…

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.



Tech

Enterprise agentic AI requires a process layer most companies haven’t built


Presented by Celonis


85% of enterprises want to become agentic within three years — yet 76% admit their operations can’t support it. According to the Celonis 2026 Process Optimization Report, based on a survey of more than 1,600 global business leaders, organizations are aggressively pursuing AI-driven transformation. Yet most acknowledge that the foundational work — modernizing workflows, reducing process friction, and building operational resilience — remains unfinished. The ambition is clear. The infrastructure to execute on it is not.

To act autonomously and effectively, AI agents need optimized, AI-ready processes and the process data and operational context that only comes from process intelligence. Without that, they’re guessing. And 82% of decision-makers believe AI will fail to deliver return on investment (ROI) if it doesn’t understand how the business runs.

“The scale of the opportunity is truly remarkable: 89% of leaders see AI as their biggest competitive opportunity,” says Patrick Thompson, global SVP of customer transformation. “That’s not a marginal finding. What’s interesting is the shift in the framing. Leaders are confident that AI will transform operations. The question now is how to fuel their ambitions with the right AI enablers.”


Explaining the gap between ambition and reality

Right now, 85% of teams are using gen AI tools for everyday tasks, so the “will this work?” question is largely settled. The real question has shifted to: “Why isn’t it working the way we need it to?” And that’s a much harder problem, because it’s structural. It’s siloed teams. Systems that don’t talk to each other. AI that looks impressive in a demo but falters once it’s dropped into a real enterprise environment. That’s the wall companies are hitting.

So, despite the overwhelming ambition, only 19% of organizations use multi-agent systems today. It all comes down to an operational readiness problem, Thompson says.

“Nine in ten leaders are already using or exploring multi-agent systems, so the will is absolutely there, but ambition without infrastructure doesn’t get you very far,” he explains.

Until now, process has largely been a "good enough" problem, because processes that are messy and disconnected can still produce results, just inefficient and opaque ones. As long as the business is growing, there has been no burning need to fix them. AI changed the calculus. If 82% of leaders believe AI can only deliver ROI with proper business context, then sub-optimal processes aren't just an operational inconvenience; they're actively blocking an AI strategy. Suddenly, process optimization isn't a background IT project but a prerequisite for competing.


“This is where structural modernization becomes critical,” he says. “Organizations that have invested in modernizing their data, systems, and processes are in a far stronger position to enable AI at scale.”

The other AI stopper: Lack of business context

AI will not be able to provide the strongest ROI possible until it understands the operational context of the business. That includes how KPIs are defined and calculated, any unique internal policies and procedures, how the organization is structured, and where the real decision authority sits.

This knowledge is usually trapped in different departments that have developed their own languages and systems over time. They don’t naturally share a common understanding. Bringing AI into that environment is something like dropping someone into a conversation that’s been going on for years, without any of the backstory.

Process intelligence becomes the connective layer — a shared operational language that grounds AI decisions in how the business actually runs.


Why AI adoption is also a change management problem

The AI adoption challenge is less a technology problem and more of a change-management and operating-model problem than many leaders want to admit, because technology problems feel easier to solve. The data shows that only 6% of leaders cite resistance to change as a hurdle. The real blockers are siloed teams (54%) and a lack of coordination between departments (44%). And 93% of process and operations leaders explicitly state that process optimization is as much about people and culture as it is about tools and technology.

“When companies come to us looking for a technology fix, part of our job is helping them see that the operating model has to evolve alongside the tooling,” Thompson says. “You can’t bolt AI onto a broken process and expect it to work. True enterprise modernization means redesigning how teams, systems, and decisions connect, and AI only works when that modernization happens first.”

Making process optimization a strategic advantage

How do you make process optimization a strategic advantage, rather than another operational project? Connect it directly to outcomes that executives care about. When processes work, they go beyond IT metrics, directly affecting board-level concerns. A full 63% of leaders use process optimization to proactively manage risks, while 58% see faster decision-making.

Plus, the economic and geopolitical environment right now makes agility a survival skill. Look at the supply chain industry, where 66% already view process optimization as a critical business-wide initiative.


“That’s the mindset shift we’re trying to catalyze across the rest of the organization,” Thompson says. “It’s not maintenance work. It’s what lets you move fast when the world changes, and right now the world is moving constantly.”

Closing the readiness gap in enterprise agentic AI

To succeed, organizations must close the readiness gap, and they need to be honest about where they're starting from, Thompson says.

“The biggest risk I see is companies continuing to layer AI on top of fragmented, opaque processes and then wondering why they’re not getting results,” he says. “Moving from static, traditional tools to real process intelligence, where you have live visibility into how your operations actually run, that’s the foundational shift that makes agentic AI viable.”

Without it, agents get deployed in the wrong places, can’t be integrated with existing systems, and organizations end up with expensive pilots that don’t scale. The call to action is clear: stop starting with tools and start with operational visibility.


“The leaders who will win in the agentic era aren’t necessarily the ones with the most sophisticated AI,” he says. “They’re the ones who’ve done the hard work of building a shared, accurate picture of their operations. Process intelligence is the starting point. It’s what enables enterprise modernization in practice, creating the operational clarity AI needs to deliver real ROI. Master your processes, give AI the context it needs, and then you can actually deploy it somewhere it will deliver.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.



Tech

Nscale raises $2bn Series C at $14.6bn valuation


The UK hyperscaler has now raised over $4.5bn across equity rounds in less than six months, and says the round is the largest Series C ever closed in Europe. That claim deserves scrutiny.


When Josh Payne founded Nscale, the world's appetite for GPU compute had not yet tipped into anything approaching panic. That was 2024. By March 2026, his barely two-year-old company has closed a $2 billion Series C, carries a $14.6 billion valuation, and has recruited three of the most recognisable names in global technology and politics to its board.

The question is no longer whether Nscale can raise money. It is whether the infrastructure it is racing to build will be ready before the market moves on.

Nscale announced the round today, led jointly by Aker ASA, the Norwegian industrial conglomerate that also led its $1.1 billion Series B in September 2025, and 8090 Industries, a Dallas-based industrial technology fund co-founded by Rayyan Islam.



Additional investors in the round include Astra Capital Management, Citadel, Dell, Jane Street, Lenovo, Linden Advisors, Nokia, NVIDIA, and Point72. Goldman Sachs and J.P. Morgan acted as joint placement agents, and the raise is inclusive of the pre-Series C SAFE that Nscale closed in October 2025.

The company says the round is the largest Series C ever completed in Europe.


The infrastructure play

Nscale’s proposition is vertically integrated AI infrastructure: GPU compute, networking, data services, and orchestration software, delivered from its own and colocated data centres across Europe, North America, and Asia. The pitch is that the bottleneck in the AI economy is not demand (everyone wants compute) but the ability to deploy capacity reliably and at scale.

Nscale’s data centres are designed from first principles to handle GPU-dense workloads rather than retrofitting facilities built for traditional cloud computing.

The company has moved quickly. Since its Series B in September 2025, it has signed a $1.4 billion delayed-draw term loan backed by GPUs, which it announced in February 2026, and has secured large-scale contracts with Microsoft, including plans for a facility in Texas targeting 104,000 NVIDIA GB300 GPUs.

Its data centre footprint spans Norway, the UK, Portugal, Iceland, and the US, with its Norwegian presence anchored by the Glomfjord and Narvik sites. In July 2025, it announced the Stargate Norway project alongside Aker and OpenAI, targeting 100,000 NVIDIA GPUs by the end of 2026.


Alongside the fundraising, Nscale has also resolved a structural question that had been hanging over the Norway operations. The Aker–Nscale joint venture, announced in July 2025, will be wound into Nscale as a wholly owned entity.

Aker remains a leading shareholder, its CEO, Øyvind Eriksen, continues to sit on the board, and the company says all existing projects under the joint venture remain fully operational. The practical effect is to put delivery and governance under a single roof.

“This step strengthens execution by putting delivery and governance under one roof, while keeping continuity for the people and projects already underway,” Eriksen said in a statement. “We have full confidence in Nscale’s ability to deliver responsibly in Norway over the long term.”

The new board

The three board appointments announced today are striking in different ways. Sheryl Sandberg, the former Meta COO who stepped down from the company’s board in 2024, is the co-founder of Sandberg Bernthal Venture Partners, an early-stage fund she has been building since 2021.


Her addition brings operational credibility from a company that scaled to hundreds of billions in revenue during her tenure, and, notably, deep expertise in the advertising and data infrastructure that underpins modern AI products.

Susan Decker, former president of Yahoo and CEO of the university community platform Raftr, brings financial acumen and a long record of corporate governance, including serving as lead director of Berkshire Hathaway.

Her Berkshire role gives Nscale a board member with rare experience overseeing a conglomerate that owns businesses across energy, infrastructure, and financial services, the sectors Nscale is increasingly operating in.

Nick Clegg is the most overtly political appointment. The former UK Deputy Prime Minister and Meta President of Global Affairs joined Hiro Capital as a General Partner in December 2025, where he focuses on spatial computing and AI investment across Europe.


He joins Nscale’s board, bringing a combination of European regulatory fluency, Meta-era experience of AI governance debates, and political networks that could prove valuable as Nscale pursues sovereign AI mandates and government contracts across the UK and EU.

Payne, speaking in the press release, framed the round as more than a fundraiser. “Nscale is leading this buildout,” he said, describing the company’s ambition as building “the foundation that the market sits on, the engine of superintelligence.” The language is bullish even by AI infrastructure standards.

The harder questions

Nscale has raised over $4.5 billion in equity rounds since its Series B in September 2025. That velocity would be remarkable for any company; for one incorporated only in 2024, it is extraordinary. What it also means is that the gap between capital raised and assets deployed is wide and growing.

Building the infrastructure that Nscale has committed to, across multiple continents, at GPU densities that require bespoke facility design, is an execution problem of considerable complexity.


The company’s own published data centre pipeline and the Microsoft contract details that have been reported suggest it is making real progress. But significant infrastructure projects routinely fall behind schedule, and Nscale has not yet had a delivery cycle long enough to fully validate its operational model at the scale it is now targeting.

The $2 billion will be used to accelerate global deployments, expand engineering and operations teams, and strengthen the platform. Nscale’s IPO ambitions, which CEO Payne has previously flagged for as early as 2026, add another variable.

Whether markets are ready to absorb a listing from a company this young, at this valuation, will depend on whether the compute economy continues to grow at the pace of the last two years, and whether Nscale can demonstrate that it is not just a capital vehicle but an operator.

The board hires suggest the company knows the next phase is about governance and credibility as much as fundraising. Whether Sandberg, Decker, and Clegg can help deliver that remains to be seen.



Tech

Hackers abuse .arpa DNS and IPv6 to evade phishing defenses


Threat actors are abusing the special-use “.arpa” domain and IPv6 reverse DNS in phishing campaigns that more easily evade domain reputation checks and email security gateways.

The .arpa domain is a special top-level domain reserved for internet infrastructure rather than normal websites. It is used for reverse DNS lookups, which allow systems to map an IP address back to a hostname.

IPv4 reverse lookups use the in-addr.arpa domain, while IPv6 uses ip6.arpa. In these lookups, DNS queries a hostname derived from the IP address, written in reverse order and appended to one of these domains.
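Because the reverse name is derived mechanically from the address, it can be computed without any DNS query at all; for example, Python's standard `ipaddress` module exposes it directly:

```python
import ipaddress

# IPv4: the four octets are reversed and appended to in-addr.arpa
print(ipaddress.ip_address("192.178.50.36").reverse_pointer)
# → 36.50.178.192.in-addr.arpa

# IPv6: all 32 hex nibbles are reversed, dot-separated, and appended to ip6.arpa
print(ipaddress.ip_address("2607:f8b0:4008:802::2004").reverse_pointer)
# → 4.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.8.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa
```

A resolver then issues a PTR query for that derived name, as the dig output below shows.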


For example, www.google.com has the IP addresses 192.178.50.36 (IPv4) and 2607:f8b0:4008:802::2004 (IPv6). Querying Google’s IP of 192.178.50.36 via the dig tool resolves to an in-addr.arpa hostname and ultimately a regular hostname:


; <<>> DiG 9.18.39-0ubuntu0.24.04.2-Ubuntu <<>> -x 192.178.50.36
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59754
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;36.50.178.192.in-addr.arpa.    IN      PTR

;; ANSWER SECTION:
36.50.178.192.in-addr.arpa. 1386 IN     PTR     lcmiaa-aa-in-f4.1e100.net.

;; Query time: 7 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Mar 06 13:57:31 EST 2026
;; MSG SIZE  rcvd: 94

Querying Google’s IPv6 address of 2607:f8b0:4008:802::2004 shows that it first resolves to an ip6.arpa hostname and then a regular hostname, as shown below.


; <<>> DiG 9.18.39-0ubuntu0.24.04.2-Ubuntu <<>> -x 2607:f8b0:4008:802::2004
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31116
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;4.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.8.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa. IN PTR

;; ANSWER SECTION:
4.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.8.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa. 78544 IN PTR tzmiaa-af-in-x04.1e100.net.
4.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.2.0.8.0.8.0.0.4.0.b.8.f.7.0.6.2.ip6.arpa. 78544 IN PTR mia07s48-in-x04.1e100.net.

;; Query time: 10 msec
;; SERVER: 127.0.0.1#53(127.0.0.1) (UDP)
;; WHEN: Fri Mar 06 13:58:43 EST 2026
;; MSG SIZE  rcvd: 171

Phishing campaign abuses ip6.arpa domains

A phishing campaign observed by Infoblox uses the ip6.arpa reverse DNS TLD, which normally maps IPv6 addresses back to hostnames using PTR records.

However, attackers found that if they reserve their own IPv6 address space, they can abuse the reverse DNS zone for the IP range by configuring additional DNS records for phishing sites.


In normal DNS functionality, reverse DNS domains are used for PTR records, which allow systems to determine the hostname associated with a queried IP address.

However, attackers discovered that once they gained control over the DNS zone for an IPv6 range, some DNS management platforms allowed them to configure other record types that can be abused for phishing attacks.

“We have seen threat actors abuse Hurricane Electric and Cloudflare to create these records—both of which have good reputations that actors leverage—and we confirmed that some other DNS providers also allow these configurations,” explains Infoblox.

“Our tests were not exhaustive, but we notified the providers where we discovered a gap. Figure 2 depicts the process the threat actor used to create the domain used in the phishing emails.”


To set up the infrastructure, the attackers first obtained a block of IPv6 addresses via IPv6 tunneling services.

Infoblox’s overview of how the .arpa TLD is abused in phishing emails
Source: Infoblox

After gaining control of the address space, the attackers then generate reverse DNS hostnames from the IPv6 address range using randomly generated subdomains that are difficult to detect or block.

Instead of configuring PTR records as expected, the attackers create A records that point those reverse DNS domains to infrastructure hosting phishing sites.

The phishing emails in this campaign use lures that promise a prize, a survey reward, or an account notification. The lures are embedded in the emails as images linked to a reverse IPv6 DNS record, such as “d.d.e.0.6.3.0.0.0.7.4.0.1.0.0.2.ip6.arpa,” rather than a regular hostname; because the link sits behind an image, the target never sees the strange .arpa hostname.

Phishing email lures
Source: Infoblox

When a victim clicks the phishing email image, the device resolves the .arpa hostname through the attacker-controlled name servers via its DNS provider.

HTML showing image and link using .arpa hostnames
Source: Infoblox

In some cases, the authoritative name servers were hosted by Cloudflare, and the reverse DNS domains resolved to Cloudflare IP addresses, hiding the location of the backend phishing infrastructure.

After clicking the image, victims are redirected through a traffic distribution system (TDS) that determines whether they are a valid target, commonly based on device type, IP address, web referers, and other criteria. If the visitor passes validation, they are redirected to a phishing site. Otherwise, they are sent to a legitimate website.


Infoblox says the phishing links are short-lived, only active for a few days. After the links expire, they redirect users to domain errors or other legitimate sites.

The researchers believe this is done to make it harder for security researchers to analyze and investigate the phishing campaign.

Furthermore, as the ‘.arpa’ domain is reserved for internet infrastructure, it does not include data normally found in registered domains, such as WHOIS info, domain age, or contact information. This makes it harder for email gateways and security tools to detect malicious domains.
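That absence of registration metadata suggests a blunt but cheap heuristic for mail filters, since legitimate email almost never links to .arpa hostnames. A minimal sketch (illustrative only, not Infoblox's detection method): flag any link whose hostname falls under the special-use .arpa TLD.

```python
from urllib.parse import urlparse

def links_to_arpa(url: str) -> bool:
    """Flag URLs whose hostname falls under the special-use .arpa TLD."""
    host = urlparse(url).hostname or ""
    return host.lower().rstrip(".").endswith(".arpa")

print(links_to_arpa("https://d.d.e.0.6.3.0.0.0.7.4.0.1.0.0.2.ip6.arpa/win"))  # True
print(links_to_arpa("https://www.example.com/win"))  # False
```

A real gateway would combine a check like this with the reputation and content signals it already uses, but the point stands: an .arpa hostname in a marketing-style email is itself a strong anomaly signal.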

The researchers also observed the phishing campaign using other techniques, such as hijacking dangling CNAME records and subdomain shadowing, allowing the attackers to push phishing content through subdomains linked to legitimate organizations.


“We found over 100 instances where the threat actor used hijacked CNAMEs of well-known government agencies, universities, telecommunication companies, media organizations, and retailers,” explained Infoblox.

By weaponizing trusted reverse DNS features used by security tools, attackers can generate phishing URLs that bypass traditional detection methods.

As always, the best way to avoid phishing attacks like these is to avoid clicking on unexpected links in emails and instead visit services directly through their official websites.



Tech

OpenAI’s robotics hardware lead resigns following deal with the Department of Defense


OpenAI‘s robotics hardware lead is out. Caitlin Kalinowski, who oversaw hardware within the robotics division of OpenAI, posted on X that she was resigning from her role, while criticizing the company’s haste in partnering with the Department of Defense without investigating proper guardrails. OpenAI told Engadget that there are no plans to replace Kalinowski.

Kalinowski, who previously worked at Meta before leaving to join OpenAI in late 2024, wrote on X that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Responding to another post, the former OpenAI exec explained that “the announcement was rushed without the guardrails defined,” adding that it was a “governance concern first and foremost.”

OpenAI confirmed Kalinowski’s resignation and said in a statement to Engadget that the company understands people have “strong views” about these issues and will continue to engage in discussions with relevant parties. In the same statement, the company pushed back on the concerns Kalinowski raised.

“We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons,” the OpenAI statement read.


Kalinowski’s resignation may be the most high-profile fallout from OpenAI’s decision to sign a deal with the Department of Defense. The decision came just after Anthropic refused to lift certain AI guardrails around mass surveillance and fully autonomous weapons. However, even OpenAI’s CEO, Sam Altman, said that he would amend the deal with the Department of Defense to prohibit spying on Americans.

Correction, March 8 2026, 10:30AM ET: This story has been updated to correct Kalinowski’s role at OpenAI to “robotics hardware lead” instead of “head of robotics.”


Tech

Top Benefits of Implementing Salesforce Agentforce for Enterprise Businesses


Most enterprise teams struggle with systems that were built to support strategy and processes but can no longer keep up. As a result, sales teams are stuck doing manual data entry, service teams are responding to backlogs, and operations managers are pulling reports from three different platforms just to get a picture of past incidents. These inefficiencies compound: deals slow down, customers get inconsistent experiences, and the more a business grows, the harder these issues become to manage. Agentforce implementation services address this directly.

Using these services allows enterprises to deploy AI agents that act inside the Salesforce environment, not just surfacing recommendations but executing tasks. In this blog, we’ll cover 7 practical benefits of Agentforce for enterprise businesses, and then discuss a few steps to help you identify the right Agentforce implementation services partner.

Driving Enterprise Impact: 7 Benefits of Agentforce Implementation Services

Here are 7 transformative advantages of Salesforce Agentforce implementation services:

1. Automated Operational Workflows


Even well-designed automation still depends on humans to start it; for instance, an agent logs a call, or a manager approves the next step. Each of those handoffs introduces delay, but Agentforce eliminates that dependency. AI agents execute tasks like updating records, routing escalations, and triggering follow-on steps the moment conditions are met, not when a team member is available to act. At enterprise scale, removing that delay across thousands of daily interactions is not an incremental gain. It’s a structural advantage over organizations still relying on human-initiated workflows to drive execution.

2. Human-Centered Service Allocation

Enterprise service operations face persistent capacity problems. Query volume grows faster than hiring capacity, and every routine request handled by a trained agent is time that could have been spent resolving a genuinely complex case. Agentforce reallocates that effort: routine queries are addressed immediately and accurately, without manual searching, and resolutions are applied or recommended in real time. Businesses that engage Salesforce Agentforce consulting to configure the agents to their specific service workflows consistently report faster resolution times and lower escalation rates.

3. Standardized Sales Execution


Large sales organizations face a consistent challenge: performance varies not because of product or pricing, but because individual execution varies. One territory delivers results because of a disciplined manager while another underperforms because follow-up processes are inconsistently applied. With the help of Agentforce, you can standardize execution at the system level. 

High-intent accounts are identified and flagged before opportunities lapse, and relevant account context is surfaced at the appropriate stage of the deal. In effect, the best performer’s process becomes the default process for everyone. When applied across the sales ecosystem, that standardization brings tangible business outcomes.

4. Maximized Existing Salesforce ROI

Enterprises’ accumulated Salesforce investments carry substantial sunk costs in licenses, customizations, and integrations. Often, part of that infrastructure remains underutilized: data-collection fields that are rarely reviewed, or reports that are generated but never reach decision-makers in time to be actionable. Agentforce does not require building a new system on top of existing ones; it activates what is already in place. Engaging a certified Salesforce implementation partner to handle the configuration correctly from the outset prevents the accumulation of a second layer of technical debt. The ROI case extends beyond Agentforce itself and helps a business extract full value from the broader Salesforce platform it has already committed to.


5. Scalable Capacity Efficiency

It’s simple math: more customers require more service capacity, more pipelines require more sales coverage, and in addition, more operational activity requires more administrative overhead. Agentforce interrupts that relationship as AI agents absorb increased volume without a corresponding increase in cost, and without the onboarding time or training requirements. This enables enterprises to manage high-demand periods by expanding operational capacity without restructuring staff or systems.

6. Real-Time Actionable Data

If leaders don’t get insights in real time, they often make decisions based on outdated information, and by the time a problem appears in a report, the window to respond promptly has usually closed. But with Agentforce, you can process data as it moves through the system and uncover relevant indicators in real time: shifts in pipeline health, service-queue build-up, and performance variances, without requiring anyone to prepare a report in advance.


Therefore, when you opt for Salesforce consulting services, you can design a reporting architecture that makes this visibility consistent across business units and geographies, ensuring leadership acts on current information rather than a delayed representation of it.

7. Built-In Compliance Assurance

In regulated industries such as financial services, healthcare, and insurance, compliance is an ongoing operational responsibility. Regulators require a clear account of what occurred, when it occurred, and under whose authority. When producing those answers depends on manually reconstructing a process after the fact, the burden in time, resources, and risk is considerable. Agentforce captures every agent action by default: each decision made, each workflow initiated, and each record modified is logged as part of normal system operation.

This makes it possible for organizations to establish compliance boundaries within agent operations and enforce those parameters uniformly, without a separate maintenance effort. For businesses in regulated industries, this turns reactive documentation into a built-in responsibility and limits compliance risk.
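The audit pattern described here, every action logged with actor, timestamp, and arguments as a side effect of normal operation, can be sketched generically. This is an illustration of the technique only, not Salesforce's actual API; the `audited` decorator and `audit_log` store are hypothetical:

```python
import functools
import json
from datetime import datetime, timezone

audit_log = []  # in a real system this would be an append-only, tamper-evident store

def audited(action):
    """Record who did what, when, and with which arguments, before doing it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, actor, **kwargs):
            audit_log.append({
                "action": action,
                "actor": actor,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "arguments": kwargs,
            })
            return fn(*args, actor=actor, **kwargs)
        return wrapper
    return decorator

@audited("update_record")
def update_record(record_id, *, actor, status):
    # Stand-in for a real record modification.
    return {"id": record_id, "status": status}

update_record("case-42", actor="agent:support-1", status="closed")
print(json.dumps(audit_log[0], indent=2))
```

Because logging happens in the wrapper rather than in each function body, the compliance boundary is enforced uniformly: no agent action reaches the underlying operation without first leaving a record.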

How to Find the Right Agentforce Implementation Services Partner

While assessing certified Salesforce implementation partners:

  • Look for credentials in Agentforce and the Salesforce clouds relevant to your operations.
  • Favor enterprise experience: multi-cloud expertise, large data volumes, and complex rollouts.
  • Expect proper discovery that covers your workflows, integration requirements, data structure, and where the real operational friction sits, not just the features you asked about.
  • Probe the depth of a partner’s Salesforce consulting services before committing; a poor implementation costs far more in rework than the initial savings are worth.
  • Confirm the partner also offers ongoing support, internal team training, and optimization services.

Closing Remarks

For enterprises, Agentforce is not a feature upgrade; it represents a structural change in how work gets done inside Salesforce: less waiting, less manual effort, and more consistent execution across teams that are already stretched. Whether that plays out in practice is determined by the implementation. The right Agentforce implementation services partner brings the technical credentials, enterprise experience, and operational discipline to make sure the system works the way the business actually runs.

Seattle’s newest early stage fund makes a bet on vertical AI startups

TheFounderVC team, from left: Paul Longhenry, Mia Lewin, and Shail Kaveti. Not pictured: Jay Bartot. (TFVC Photo)

TheFounderVC (TFVC), a new early stage fund based in Seattle and San Francisco, announced the public launch Tuesday of its inaugural $5 million fund.

The firm is focused on “vertical AI” — startups building industry-specific products on top of increasingly powerful AI models. It’s led by a team that includes:

  • Mia Lewin, founding partner based in Seattle, previously launched three startups including StyleGenome, which was acquired by Wayfair.
  • Paul Longhenry, partner based in San Francisco, a longtime investor and former exec at Tapjoy, Pinpoint Predictive, and Bolt.
  • Shail Kaveti, partner based in San Francisco, former Wayfair exec and Amazon senior manager, angel investor in Perplexity.
  • Jay Bartot, founding CTO based in Seattle, founded multiple startups in the Seattle area and was previously managing director at Madrona Venture Labs.

In a LinkedIn post, Lewin said the biggest opportunities in AI are in applications that combine structural data advantages with workflow-native products and highly personalized user experiences. “We back visionary founders who combine deep domain expertise with an AI-native vision to build category leaders of tomorrow,” she wrote.

TFVC invests at the pre-seed and seed stage. It plans to make 25 to 30 investments, with initial check sizes of $100,000 to $250,000. TFVC said it has about 60 limited partners.

The firm invests across the U.S., but is deeply tied to Seattle: five of the fund’s seven portfolio companies have at least one founder based in Seattle. Those include Potato, which is automating science experiments; Liminary, an AI-powered knowledge storage company; Planette, which helps businesses plan for weather and climate risks; and Ridge AI, a data analytics dashboard startup.

Its portfolio also includes fashion AI startup Daydream and home-buying company Catchouse.

Schiit Asgard X Headphone Amp Packs Mjolnir Tech and Continuity A Power for Under $550 at CanJam NYC 2026

At CanJam NYC 2026, Schiit Audio kept a lower profile than in past years, but the Texas-based manufacturer still had something worth hearing. The company has now fully relocated operations from California to facilities in San Antonio and Corpus Christi, and judging by what we heard at the show, the move hasn’t slowed development one bit.

Front and center was the Schiit Asgard X headphone amplifier, a modular desktop amp that pulls technology directly from Schiit’s flagship Mjolnir headphone amplifier. The new model introduces Schiit’s Continuity A output stage and supports an optional internal DAC card that adds digital control through the company’s Forkbeard control system. The demo unit on the table included the DAC module and was paired with the Grado HP100 SE headphones we reviewed in 2025, making it one of the more interesting desktop headphone rigs on the show floor.

The result is a mid-tier amplifier that looks familiar on the outside but carries more serious tech under the hood. With trickle-down circuitry from the Mjolnir platform, app-based control, and modular expandability, the Asgard X feels less like a routine update and more like Schiit raiding its own vault for parts. And judging by the crowd around the table, New York City still appreciates a little well-engineered Schiit.

Schiit Audio Asgard X Class A Headphone Amp/DAC Silver Angle

Asgard X: Class A Power and a Little More Useful Schiit

Base price starts at $399, which gets you the amplifier and preamp functionality. Add the Mesh DAC card for $150, and the Asgard X turns into a compact all-in-one desktop rig with digital input and app control through Schiit’s Forkbeard control system. That’s where things get more interesting.

The DAC card introduces Schiit’s Mesh digital architecture, a custom filter design optimized in both the time and frequency domains rather than chasing the usual marketing buzzwords. The bigger change for day-to-day use is Forkbeard. Through the app you can control volume, balance, loudness, phase, and NOS mode, and even adjust a full three-band parametric EQ. In other words, the kind of controls people usually beg for once they realize their desktop stack requires three remotes and a flashlight.

Schiit Asgard X Rear

Power output is more than adequate for most headphones:

  • 3.4W RMS at 16 ohms
  • 2.8W RMS at 32 ohms
  • 1.9W RMS at 50 ohms
  • 380mW RMS at 300 ohms
  • 200mW RMS at 600 ohms

Digital input is handled through Schiit’s Unison USB interface, supporting sample rates up to 384 kHz. No DSD. No MQA. We can hear Jason and Mike laughing all the way from Times Square.

No smoke machines, no Thor cosplay. Just a modular desktop amp with plenty of power, a DAC option that actually adds functionality, and enough control to keep both the Brooklyn headphone crowd and the Texas engineers reasonably happy.

grado-hp100-se-headphones-schiit
Grado Signature HP100 SE Headphones with Schiit Asgard X at CanJam NYC 2026.

Listening to the Asgard X: Class A Power, No Funny Schiit

Right off the bat, it was clear the Schiit Asgard X headphone amplifier had more than enough power and headroom to keep the Grado HP100 SE headphones fully under control. It didn’t try to goose the top end with extra sparkle, but where it really impressed was from the bass through the lower midrange. Black Sabbath and AC/DC had real weight and drive, while Deadmau5 and Kraftwerk showed just how well the amp handles pacing and rhythmic energy.

The treble could use a bit more air on some recordings, but the sense of space and impact made up for it. Percussion had real snap, kick drums landed with authority, and the overall timing kept everything moving forward with purpose. It’s the kind of presentation that makes you stop analyzing after a few tracks and just keep listening.

The Grado HP100 SE headphones have always handled vocals well, and that remained true here, though I did find myself wishing for a little more illumination at the top. Some higher notes came across slightly muted on certain tracks, but I’ll take that over a presentation that turns hard or brittle after a few minutes. Your mileage may vary depending on the recording, but in this setup the balance leaned toward smooth and listenable rather than aggressively detailed.

The Bottom Line

The Schiit Asgard X headphone amplifier is aimed squarely at listeners who want a powerful Class A desktop amp without turning their desk into a stack of separate components. At $399, it works well as a straightforward headphone amp and preamp, and the optional Mesh DAC card adds modern convenience through Schiit’s Forkbeard control system without complicating the design.

With solid power, modular flexibility, and a sound that favors weight, pacing, and long listening sessions over flashy treble, the Asgard X makes the most sense for desktop headphone listeners who value control and usability over chasing the last ounce of analytical detail.

For more information: schiit.com

Fascinating Look Back at the Compaq Presario 4402 from 1996, a Time When Compaq Put the Computer Inside the Monitor

Compaq Presario 4402 All-in-One Computer 1996
In 1996, families looking for a home computer had the same old problem: a cluttered desk with different boxes, cables strung out everywhere, and setting it all up felt like launching a small rocket. Compaq responded with the Presario 4402, a stylish (for the time) all-in-one system that combined all of the necessary components into a single, large package.



Compaq introduced the Presario 4402 in mid-1996 as part of an effort to simplify home computing. It cost roughly $1,999, nearly $4,144 in today’s dollars, for a system that critics described as one of the few truly all-in-one packages available at the time. The design packed a 15-inch display (13.8 inches viewable) on top of the computer’s internals, all squeezed into one enormous beige case. It was a mammoth: 16 inches wide, 14.1 inches deep, and 15.2 inches tall, weighing a whopping 43 pounds.


Compaq Presario 4402 All-in-One Computer 1996
An Intel Pentium CPU ran at a respectable 133 MHz on a 66 MHz system bus, with RAM starting at 16 MB of EDO and expandable to 128 MB using 60 ns modules. Storage came from a 1.6 GB hard drive, with a 6x CD-ROM drive for software installation and audio disc playback and a 1.44 MB floppy drive for file transfers. Built-in speakers provided stereo sound, and a 33.6 kbps modem enabled dial-up access to the early internet and other services. Windows 95 came preinstalled, along with a Quick Restore CD to help fix problems if the system went wrong.

Compaq Presario 4402 All-in-One Computer 1996
You could control the CD player from the front panel, inserting an audio disc and adjusting the volume without having to boot the computer. The system also functioned as a speakerphone and answering machine, keeping your desk clutter-free, and a matching remote let you control several functions from across the room. The software bundle featured Microsoft Works for word processing and spreadsheets, Netscape Navigator for web browsing, and Compton’s Interactive Encyclopedia 1996 for quick lookups. Buyers could also choose among CompuServe, America Online, and GNN for internet access.

Compaq Presario 4402 All-in-One Computer 1996
Critics at the time praised the Presario 4402 for delivering solid performance without requiring too many compromises. One reviewer stated that it was ideal for writing papers, playing games, or looking up information online, particularly in a small college dorm or family room with little space. The main disadvantage of the built-in display was that you couldn’t simply replace the screen when it was time to upgrade; you had to replace the entire system. Internal expansion was still possible, however, thanks to a riser card that provided two ISA slots and one PCI slot for additions like enhanced graphics.

My 8-year-old daughter was struggling with math until we discovered this app

If you say the word ‘Duolingo’, people think of language learning. With over 40 supported languages and an engaging learning workflow, it’s no wonder that the app is currently the most downloaded education platform globally, nearing the historic 1 billion install milestone.

But did you know that language lessons are not the only string to its impressive bow? In October 2022, the company launched a standalone math app, before integrating the module directly into its main app a year later.

Google made Gmail and Drive easier for AI agents to use

A new command-line tool published to GitHub consolidates Workspace’s sprawling APIs into a single interface. It also signals how seriously the company is taking the agentic AI moment.


The tool, whose documentation describes it as “one CLI for all of Google Workspace, built for humans and AI agents,” is called gws. It provides unified command-line access to Gmail, Google Drive, Calendar, Docs, Sheets, Slides, Chat, and most other Workspace services.

But the more revealing detail is buried in the instructions: the documentation includes a dedicated integration guide for OpenClaw, the open-source AI agent that went viral in late January and has since become something of a Rorschach test for where agentic AI is headed.

Google’s decision to name-check OpenClaw in its documentation, even documentation for a project Google labels an unsupported sample, is not something companies do by accident.

Why a command-line tool matters for AI agents

Before gws, an AI agent that wanted to search a Gmail inbox, pull a file from Drive, and update a Calendar event had to navigate three separate APIs, each with its own authentication flows, rate limits, and response formats. The process worked, but as PCWorld described it, it was “a royal pain.”

The new tool collapses that into a single interface. Every operation produces structured JSON output, the format AI agents can parse reliably without the ambiguity that can derail graphical interfaces. Authentication is handled once via OAuth, then inherited by any agent that calls the tool.
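The value of structured output is easy to see in miniature. In the sketch below, the JSON payload is invented for illustration (it is not the actual gws output format); the point is that an agent extracts exactly what it needs with no screen-scraping heuristics:

```python
import json

# A hypothetical JSON payload of the kind a CLI built for agents might emit.
raw_output = """
{
  "kind": "drive#fileList",
  "files": [
    {"id": "abc123", "name": "Q3 report.pdf", "mimeType": "application/pdf"},
    {"id": "def456", "name": "notes.txt", "mimeType": "text/plain"}
  ]
}
"""

# Parsing is deterministic: field names, not pixel positions or prose.
payload = json.loads(raw_output)
pdfs = [f["name"] for f in payload["files"] if f["mimeType"] == "application/pdf"]
print(pdfs)  # ['Q3 report.pdf']
```

Contrast this with driving a graphical interface, where the same extraction depends on layout and labels that can shift between releases.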

The architecture has one particularly elegant feature: gws does not ship a static list of commands. Instead, it reads Google’s own Discovery Service at runtime and builds its entire command surface dynamically. When Google adds a new API endpoint, the tool picks it up automatically.

There is no version to update, no stale documentation to wrestle with. For agents designed to work across long time horizons, that self-updating quality is not a minor convenience; it is a meaningful reliability guarantee.
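The general technique, deriving the command surface from a machine-readable service description instead of hard-coding it, can be sketched as follows. The toy document below only borrows the `resources`/`methods` nesting of the Discovery format; real Google Discovery documents are far richer, and this is not gws's actual implementation:

```python
# A toy discovery document: service name, resources, and their methods.
discovery_doc = {
    "name": "gmail",
    "resources": {
        "messages": {
            "methods": {
                "list": {"httpMethod": "GET", "path": "messages"},
                "get": {"httpMethod": "GET", "path": "messages/{id}"},
            }
        }
    },
}

def build_commands(doc):
    """Flatten a discovery document into 'service resource method' commands."""
    commands = {}
    for resource, spec in doc.get("resources", {}).items():
        for method, meta in spec.get("methods", {}).items():
            commands[f"{doc['name']} {resource} {method}"] = meta
    return commands

commands = build_commands(discovery_doc)
print(sorted(commands))  # ['gmail messages get', 'gmail messages list']
```

Because the commands are computed from the document at runtime, adding a new method to the service description makes a new command appear with no tool update, which is the self-updating property described above.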

The repository also includes more than 100 pre-built “agent skills” covering common Workspace workflows: uploading files to Drive with automatic metadata, appending data to Sheets, scheduling Calendar events, forwarding Gmail attachments, and dozens of similar operations.

These are the discrete, composable building blocks that agent frameworks like OpenClaw are designed to chain together.

The OpenClaw connection

OpenClaw’s story has moved fast. The project was published in November 2025 by Peter Steinberger, an Austrian software developer, under the name Clawdbot, a name that drew a trademark complaint from Anthropic.

After a brief stint as Moltbot, it settled on OpenClaw in late January 2026. Within weeks, users had created 1.5 million agents using the platform; the GitHub repository accumulated nearly 200,000 stars. OpenClaw’s premise is simple enough to fit on a business card: AI that actually does things.

On 14 February, Sam Altman announced that Steinberger was joining OpenAI to lead the next generation of personal agents. OpenClaw would move into an independent open-source foundation that OpenAI would support. “The lobster is taking over the world,” Steinberger wrote in his farewell post. “My next mission is to build an agent that even my mum can use.”

Google’s Workspace CLI landing in the middle of that story, with OpenClaw integration instructions in the documentation, three weeks after Steinberger joined OpenAI, is the kind of timing that does not look accidental. Whether it reflects a deliberate competitive response, a coincidental release, or simply developers at Google shipping something that was already in progress is not confirmed.

What is clear is that a major platform company has now built infrastructure specifically to make its apps more useful to the open-source agent ecosystem whose architect OpenAI just hired.

MCP and the broader picture

Beyond OpenClaw, gws also functions as a Model Context Protocol server. MCP is the open standard for how AI agents communicate with external tools, originally developed by Anthropic and now adopted across the industry. Running gws mcp exposes Workspace APIs as structured tools that any MCP-compatible client (Claude Desktop, VS Code with AI extensions, or Google’s own Gemini CLI) can call natively.

That MCP support is significant because it means the tool is not merely an OpenClaw utility. It is infrastructure for the entire class of AI agents that is converging on MCP as a standard. Google is, in effect, making Workspace a first-class citizen in the emerging agent ecosystem, regardless of which model or framework is doing the work.
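At the heart of MCP's tool support is a simple idea: each tool is advertised as a JSON descriptor with a name, a description, and a JSON Schema for its inputs, which clients retrieve via a tools/list request. A minimal hand-written descriptor, invented here for illustration (not one of gws's actual tools), looks like this:

```python
import json

# A minimal MCP-style tool descriptor: name, human-readable description,
# and a JSON Schema ("inputSchema") describing the tool's parameters.
tool = {
    "name": "drive_search",
    "description": "Search Google Drive files by query string.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search terms"},
            "limit": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    },
}

# A server would return a list of such descriptors; any MCP-compatible
# client can discover and call the tool from this structure alone.
wire = json.dumps({"tools": [tool]})
print(json.loads(wire)["tools"][0]["name"])  # drive_search
```

Because the descriptor carries its own schema, the client needs no prior knowledge of the tool; that is what makes the same server usable from Claude Desktop, an IDE extension, or a CLI agent alike.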

One important caveat: Google’s documentation explicitly notes that gws is “not an officially supported Google product.” It is published as a developer sample, meaning there are no guarantees of stability, security, or ongoing maintenance at the level of a production service. For individual developers and experimenters, that is a manageable risk.

For enterprises considering deploying AI agents against live Workspace data, it is a meaningful limitation, particularly given the ongoing concerns about OpenClaw’s security model, which a Cisco research team found vulnerable to data exfiltration and prompt injection via malicious third-party skills.

What Google is signalling

Addy Osmani, Director of Google Cloud AI, has framed his team’s focus as building infrastructure for agentic systems: those capable of generating command-line inputs and managing structured outputs across complex workflows. The Workspace CLI fits that vision directly.

The broader pattern is legible. Microsoft has Copilot Tasks. OpenAI now has the architect of OpenClaw. Google has its own Gemini agent stack, and now a CLI that makes its most widely used productivity suite readable by any agent that speaks JSON and MCP.

The competition for where enterprise AI agents live and what data they can reach is accelerating, and the battleground increasingly looks like the infrastructure beneath the applications, not the applications themselves.

For now, gws is a GitHub repository with a caveat. But the 14,000 stars it accumulated before most journalists noticed suggest that developers who build agents for a living already understand what it means.

Copyright © 2025