Tech

The enterprise AI land grab is on. Glean is building the layer beneath the interface.

The battle for enterprise AI is heating up. Microsoft is bundling Copilot into Office. Google is pushing Gemini into Workspace. OpenAI and Anthropic are selling directly to enterprises. Every SaaS vendor now ships an AI assistant. 

In the scramble for the interface, Glean is betting on something less visible: becoming the intelligence layer beneath it. 

Seven years ago, Glean set out to be the Google for enterprise — an AI-powered search tool designed to index and search across a company’s SaaS tool library, from Slack to Jira, Google Drive to Salesforce. Today, the company’s strategy has shifted from building a better enterprise chatbot to becoming the connective tissue between models and enterprise systems.


“The layer we built initially – a good search product – required us to deeply understand people and how they work and what their preferences are,” Glean co-founder and CEO Arvind Jain told TechCrunch on last week’s episode of Equity, which we recorded at Web Summit Qatar. “All of that is now becoming foundational in terms of building high-quality agents.”

He says that while large language models are powerful, they’re also generic. 

“The AI models themselves don’t really understand anything about your business,” Jain said. “They don’t know who the different people are, they don’t know what kind of work you do, what kind of products you build. So you have to connect the reasoning and generative power of the models with the context inside your company.”

Glean’s pitch is that it already maps that context and can sit between the model and the enterprise data. 


The Glean Assistant is often the entry point for customers — a familiar chat interface powered by a mix of leading proprietary (e.g., ChatGPT, Gemini, Claude) and open-source models, grounded in the company’s internal data. But what keeps customers, Jain argues, is everything underneath it.


First is model access. Rather than forcing companies to commit to a single LLM provider, Glean acts as the abstraction layer, allowing enterprises to switch between or combine models as capabilities evolve. That’s why Jain says he doesn’t see OpenAI, Anthropic, or Google as competition, but rather as partners. 

“Our product gets better because we’re able to leverage the innovation that they are making in the market,” Jain said. 
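Glean hasn’t published the internals of this layer, but the general idea of a model-abstraction layer can be sketched in a few lines. Everything here (the provider names, the routing rules, the interfaces) is a hypothetical illustration, not Glean’s actual design:

```python
# Hypothetical sketch of a model-abstraction layer: calling code targets one
# interface, while the provider behind it can be swapped or combined.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelProvider:
    name: str
    generate: Callable[[str], str]  # prompt -> completion


class ModelRouter:
    """Route requests to whichever registered provider fits the task."""

    def __init__(self) -> None:
        self.providers: Dict[str, ModelProvider] = {}
        self.routes: Dict[str, str] = {}  # task type -> provider name

    def register(self, provider: ModelProvider) -> None:
        self.providers[provider.name] = provider

    def set_route(self, task: str, provider_name: str) -> None:
        self.routes[task] = provider_name

    def complete(self, task: str, prompt: str) -> str:
        # Swapping or combining models becomes a config change, not a rewrite.
        name = self.routes.get(task, "default")
        return self.providers[name].generate(prompt)


router = ModelRouter()
router.register(ModelProvider("default", lambda p: f"[generic] {p}"))
router.register(ModelProvider("reasoning-model", lambda p: f"[reasoned] {p}"))
router.set_route("analysis", "reasoning-model")

print(router.complete("analysis", "Summarize Q3 churn"))  # routed to reasoning-model
print(router.complete("chat", "Hello"))                   # falls back to default
```

The point of the indirection is the one Jain makes: as provider capabilities evolve, the enterprise re-points a route rather than re-integrating a vendor.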


Second are the connectors. Glean integrates deeply with systems like Slack, Jira, Salesforce, and Google Drive to map how information flows across them and enable agents to act inside those tools. 

And third, and perhaps most important, is governance. 

“You need to build a permissions-aware governance layer and retrieval layer that is able to bring the right information, but knowing who’s asking that question so that it filters the information based on their access rights,” Jain said. 

In large organizations, that layer can be the difference between piloting AI and deploying it at scale. Enterprises can’t simply load all their internal data into a model and sort out governance later, says Jain.
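As a rough illustration of what a permissions-aware retrieval layer does, here is a minimal sketch: documents are filtered by the requester’s access rights before anything reaches the model. The data model (documents carrying access-control lists, users carrying group memberships) is an assumption for illustration, not Glean’s actual schema:

```python
# Illustrative permissions-aware retrieval: the permission check happens at
# retrieval time, per request, so the model only ever sees allowed documents.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: Set[str]  # ACL mirrored from the source system


@dataclass
class User:
    user_id: str
    groups: Set[str] = field(default_factory=set)


def retrieve(query: str, corpus: List[Document], user: User) -> List[Document]:
    """Return matching documents that the asking user is allowed to see."""
    matches = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in matches if d.allowed_groups & user.groups]


corpus = [
    Document("d1", "Q3 revenue forecast", {"finance"}),
    Document("d2", "Q3 engineering roadmap", {"engineering"}),
]
engineer = User("alice", {"engineering"})

visible = retrieve("q3", corpus, engineer)
print([d.doc_id for d in visible])  # prints ['d2']: the finance doc is filtered out
```

A real system would mirror ACLs continuously from each connected SaaS tool; the key design choice shown here is filtering before generation, not after.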


Also critical is ensuring the models don’t hallucinate. Jain says Glean’s system verifies model outputs against source documents, generates line-by-line citations, and ensures that responses respect existing access rights.

The question is whether that middle layer survives as platform giants push deeper into the stack. Microsoft and Google already control much of the enterprise workflow surface area, and they’re hungry for more. If Copilot or Gemini can access the same internal systems with the same permissions, does a standalone intelligence layer still matter?

Jain argues enterprises don’t want to be locked into a single model or productivity suite, and would opt for a neutral infrastructure layer over a vertically integrated assistant.

Investors have bought into that thesis. Glean raised a $150 million Series F in June 2025, nearly doubling its valuation to $7.2 billion. Unlike the frontier AI labs, Glean doesn’t need massive compute budgets.


“We have a very healthy, fast-growing business,” Jain said.


Anthropic and the Pentagon are reportedly arguing over Claude usage

The Pentagon is pushing AI companies to allow the U.S. military to use their technology for “all lawful purposes,” but Anthropic is pushing back, according to a new report in Axios.

The government is reportedly making the same demand to OpenAI, Google, and xAI. An anonymous Trump administration official told Axios that one of those companies has agreed, while the other two have supposedly shown some flexibility.

Anthropic, meanwhile, has reportedly been the most resistant. In response, the Pentagon is apparently threatening to pull the plug on its $200 million contract with the AI company.

In January, the Wall Street Journal reported that there was significant disagreement between Anthropic and Defense Department officials over how its Claude models could be used. The WSJ subsequently said that Claude was used in the U.S. military’s operation to capture then-Venezuelan President Nicolás Maduro.


Anthropic did not immediately respond to TechCrunch’s request for comment.

A company spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War” but is instead “focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance.”


How to Watch Netflix’s ‘America’s Next Top Model’ Docuseries

A new three-part Netflix docuseries will cover the chaos and complicated legacy of the hit reality series, America’s Next Top Model. 

ANTM premiered in 2003 and ran for 24 seasons, helping launch the careers of contestants like Eva Marcille, Lio Tipton and Yaya DaCosta. Netflix’s synopsis describes the new doc as the definitive chronicle of the modeling competition, which “became a pop-culture juggernaut defined by explosive drama, public meltdowns and controversies that still fuel viral moments today.” 


Former contestants, judges and producers — including host and creator Tyra Banks — took part in Netflix’s series, Reality Check, which begins streaming soon.

When to watch Reality Check: Inside America’s Next Top Model on Netflix

Netflix will drop its three-episode doc on the modeling competition series in the early morning hours on Monday, Feb. 16 (3 a.m. ET, to be exact).

Like many other streaming services, Netflix’s cheapest tier is ad-supported, and you can opt for a pricier tier to avoid commercials. You can subscribe to Standard with ads for $8 per month, Standard for $18 per month or Premium for $25 per month.


For ad-free streaming and access to every title Netflix offers, you should opt for the streamer’s Standard or Premium tiers. The Standard with ads tier comes with some limits on what you can watch due to licensing restrictions. Netflix’s website lets you compare the simultaneous streams, downloads and extra member slots you get with each tier.


Solid-State EV Batteries Just Got One Step Closer To American Roads

Just about every modern electric vehicle on American roads is powered by one of three battery types: lithium iron phosphate (the most common, also known as LFP), nickel manganese cobalt (NMC), and nickel cobalt aluminum (NCA). Each is a relatively mature, well-understood chemistry with its own advantages — LFP batteries are cheap and stable, whereas NCA batteries are energy-dense and powerful. But EVs have only really been commonplace on American roads for the past two decades or so, a short span measured against the internal combustion engine’s history of almost 140 years. Technology advances at an ever-increasing pace, and we may be on the precipice of the next evolution — at least on American roads.

Enter the solid-state battery, a pioneering technology that promises to combine the benefits of all of the aforementioned chemistries in a single package: high performance, excellent energy density, a potentially long service life, and thermal stability. It comes at a steep cost, though — one that Karma Automotive appears willing to pay. In February 2026, Karma Automotive announced plans to ship the first mass-production vehicle stateside powered by solid-state batteries, equipped with Factorial FEST batteries.


Karma Automotive is the only American ultra-luxury manufacturer offering a diverse portfolio of vehicles, a specialized firm dedicated to producing EVs deep into six-figure USD territory. The company currently fields six distinct models, but only one will receive the solid-state battery at first: the Kaveya super coupe, scheduled for a 2027 debut. Let’s dive in and explore more about the car and solid-state batteries, along with what the technology promises to accomplish.


How solid-state batteries work

First things first: What is a solid-state battery, and how does it differ from most other EV battery types? In short, the typical EV battery cell has two electrodes, the cathode and the anode — positive and negative, respectively. Ions shuttle between them through an electrolyte, like relay runners, as the cell charges and discharges. There are several types of these batteries, the most common of which is lithium-ion, but they all use a liquid or gel-like electrolyte. Solid-state batteries, or SSBs for short, use a solid electrolyte instead, providing a more stable and energy-dense approach to power storage.

There are several variants of SSBs in development; the one Karma Automotive is testing is technically a quasi-solid-state battery. Produced by Factorial Energy, the quasi-SSB design combines thermal stability (quasi-SSBs are inherently far less flammable than standard lithium-ion batteries) with high energy density, which Factorial says translates to double the range. The company’s website cites range figures of at least 500 miles for next-generation EVs while weighing roughly one third less than a typical 90 kWh battery pack. Factorial also lists its Solstice SSB as a candidate for future EVs alongside the FEST quasi-SSB.
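Factorial’s weight claim is easy to sanity-check with back-of-the-envelope arithmetic: for a fixed 90 kWh pack, mass scales inversely with gravimetric energy density. The Wh/kg figures below are illustrative assumptions, not published Factorial specs:

```python
# Rough check of the "one third less weight" claim for a fixed 90 kWh pack:
# mass (kg) = capacity (Wh) / energy density (Wh/kg).
def pack_mass_kg(capacity_kwh: float, density_wh_per_kg: float) -> float:
    return capacity_kwh * 1000 / density_wh_per_kg

conventional = pack_mass_kg(90, 160)  # assumed typical pack-level Li-ion density
quasi_ssb = pack_mass_kg(90, 240)     # assumed ~50% denser quasi-SSB pack

savings = 1 - quasi_ssb / conventional
print(f"{conventional:.0f} kg vs {quasi_ssb:.0f} kg, {savings:.0%} lighter")
```

With these assumed numbers, a pack that is 50% more energy-dense comes out about a third lighter at the same capacity, which is consistent with the claim.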

With standard battery technology largely matured, the current consensus is that SSBs represent the next technological leap for batteries. Putting them in cars promises lighter vehicles with longer range, greater battery longevity, and more power. However, because the technology is still emerging as far as EVs go, costs remain prohibitive for regular mass-production cars in the United States, so you still can’t buy a U.S.-sold EV with one — yet.


The Karma Kaveya

As for the car itself, the Karma Kaveya is a sleek, ultra-modern super coupe designed with a high-end grand tourer aesthetic. The name “Kaveya” is Sanskrit for “power in motion,” a theme reflected in the promised statistics — Karma claims the coupe can hit 0-60 mph in less than 3 seconds and exceed 180 mph, thanks to its 1,000 hp powertrain. All of that is speculative for now, of course — especially given the emergent nature of the battery it houses.

According to the official figures on Karma’s website, the Kaveya carries a 120 kWh high-voltage battery powering a combined 1,270 lb-ft of available torque, with a 10-80% charging time of about 45 minutes. That contrasts with an earlier estimate from Stellantis, which announced a partnership with Factorial back in April 2025 to put the batteries in Dodge demonstration vehicles to promote SSB technology; its figures listed an estimated charging time of 18 minutes from 15-90%.
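A quoted charging window implies an average charging power, which is just energy added divided by time. A quick sketch using the Kaveya’s quoted figures (the capacity and window come from the article; the calculation itself is generic):

```python
# Average charging power implied by a state-of-charge window and a duration.
def avg_charge_power_kw(capacity_kwh: float, soc_from: float, soc_to: float,
                        minutes: float) -> float:
    energy_added_kwh = capacity_kwh * (soc_to - soc_from)
    return energy_added_kwh / (minutes / 60)

# Karma Kaveya: 120 kWh pack, 10-80% in about 45 minutes
print(f"{avg_charge_power_kw(120, 0.10, 0.80, 45):.0f} kW average")
```

That works out to roughly 112 kW sustained, a useful figure for comparing the two quoted charging claims on equal terms (the Stellantis figure can’t be converted the same way without knowing that demo pack’s capacity).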


Whatever the battery’s exact performance, it will likely exceed that of even the most advanced mass-production standard packs, albeit at a steep cost. But Karma isn’t in the business of cheap vehicles, so it’s a model that suits the company well. With the Kaveya representing the current cutting edge of EV technology, Karma looks poised to leave a definitive mark in the ongoing electric arms race no matter what happens.




Nvidia, Groq and the limestone race to real-time AI: Why enterprises win or lose here

From miles away across the desert, the Great Pyramid looks like perfect, smooth geometry — a sleek triangle pointing to the stars. Stand at the base, however, and the illusion of smoothness vanishes. You see massive, jagged blocks of limestone. It is not a slope; it is a staircase.

Remember this the next time you hear futurists talking about exponential growth.

Intel co-founder Gordon Moore is famously quoted as saying in 1965 that the transistor count on a microchip would double every year (the origin of Moore’s Law). Another Intel executive, David House, later revised this to “compute power doubling every 18 months.” For a while, Intel’s CPUs were the poster child of this law. That is, until the growth in CPU performance flattened out like a block of limestone.

If you zoom out, though, the next limestone block was already there — the growth in compute merely shifted from CPUs to the world of GPUs. Jensen Huang, Nvidia’s CEO, played a long game and came out a strong winner, building his own stepping stones initially with gaming, then computer vision, and recently, generative AI.


The illusion of smooth growth

Technology growth is full of sprints and plateaus, and gen AI is not immune. The current wave is driven by the transformer architecture. To quote Anthropic co-founder and CEO Dario Amodei: “The exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”

But just as the CPU plateaued and GPUs took the lead, we are seeing signs that LLM growth is shifting paradigms again. For example, late in 2024, DeepSeek surprised the world by training a world-class model on an impossibly small budget, in part by using the mixture-of-experts (MoE) technique.

Do you remember where you recently saw this technique mentioned? Nvidia’s Rubin press release: The technology includes “…the latest generations of Nvidia NVLink interconnect technology… to accelerate agentic AI, advanced reasoning and massive-scale MoE model inference at up to 10x lower cost per token.”

Jensen knows that achieving that coveted exponential growth in compute doesn’t come from pure brute force anymore. Sometimes you need to shift the architecture entirely to place the next stepping stone.


The latency crisis: Where Groq fits in

This long introduction brings us to Groq.

The biggest gains in AI reasoning capabilities in 2025 were driven by “inference time compute” — or, in lay terms, “letting the model think for a longer period of time.” But time is money. Consumers and businesses do not like waiting.

Groq comes into play here with its lightning-speed inference. If you bring together the architectural efficiency of models like DeepSeek and the sheer throughput of Groq, you get frontier intelligence at your fingertips. By executing inference faster, you can “out-reason” competitive models, offering a “smarter” system to customers without the penalty of lag.

From universal chip to inference optimization

For the last decade, the GPU has been the universal hammer for every AI nail. You use H100s to train the model; you use H100s (or trimmed-down versions) to run the model. But as models shift toward “System 2” thinking — where the AI reasons, self-corrects and iterates before answering — the computational workload changes.


Training requires massive parallel brute force. Inference, especially for reasoning models, requires fast sequential processing: the hardware must generate tokens quickly enough to sustain complex chains of thought without the user waiting minutes for an answer. Groq’s LPU (Language Processing Unit) architecture removes the memory-bandwidth bottleneck that plagues GPUs during small-batch inference, delivering lightning-fast token generation.

The engine for the next wave of growth

For the C-suite, this potential convergence solves the “thinking time” latency crisis. Consider the expectations from AI agents: We want them to autonomously book flights, code entire apps and research legal precedent. To do this reliably, a model might need to generate 10,000 internal “thought tokens” to verify its own work before it outputs a single word to the user.

  • On a standard GPU: 10,000 thought tokens might take 20 to 40 seconds. The user gets bored and leaves.

  • On Groq: That same chain of thought happens in less than 2 seconds.
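Taken at face value, those figures imply a large throughput gap. The arithmetic is simple tokens-per-second division (the token counts and timings are the article’s illustrative estimates, not benchmarks):

```python
# Implied token throughput from the figures above: tokens divided by seconds.
def tokens_per_second(tokens: int, seconds: float) -> float:
    return tokens / seconds

gpu_slow = tokens_per_second(10_000, 40)  # slow end of the GPU estimate
gpu_fast = tokens_per_second(10_000, 20)  # fast end of the GPU estimate
groq = tokens_per_second(10_000, 2)       # the Groq estimate

print(f"GPU: {gpu_slow:.0f}-{gpu_fast:.0f} tok/s, Groq: {groq:.0f} tok/s, "
      f"~{groq / gpu_fast:.0f}x faster than the best GPU case")
```

Even against the fast end of the GPU estimate, the claimed gap is an order of magnitude, which is the crux of the latency argument.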

If Nvidia integrates Groq’s technology, they solve the “waiting for the robot to think” problem. They preserve the magic of AI. Just as they moved from rendering pixels (gaming) to rendering intelligence (gen AI), they would now move to rendering reasoning in real-time.

Furthermore, this creates a formidable software moat. Groq’s biggest hurdle has always been the software stack; Nvidia’s biggest asset is CUDA. If Nvidia wraps its ecosystem around Groq’s hardware, they effectively dig a moat so wide that competitors cannot cross it. They would offer the universal platform: The best environment to train and the most efficient environment to run (Groq/LPU).


Consider what happens when you couple that raw inference power with a next-generation open-source model (like the rumored DeepSeek 4): You get an offering that would rival today’s frontier models in cost, performance and speed. That opens up opportunities for Nvidia, from directly entering the inference business with its own cloud offering to continuing to power a roster of customers that is itself growing exponentially.

The next step on the pyramid

Returning to our opening metaphor: The “exponential” growth of AI is not a smooth line of raw FLOPs; it is a staircase of bottlenecks being smashed.

  • Block 1: We couldn’t calculate fast enough. Solution: The GPU.

  • Block 2: We couldn’t train deep enough. Solution: Transformer architecture.

  • Block 3: We can’t “think” fast enough. Solution: Groq’s LPU.

Jensen Huang has never been afraid to cannibalize his own product lines to own the future. By validating Groq, Nvidia wouldn’t just be buying a faster chip; they would be bringing next-generation intelligence to the masses.

Andrew Filev, founder and CEO of Zencoder




Hideki Sato, known as the father of Sega hardware, has reportedly died

Hideki Sato, who led the design of Sega’s beloved consoles of the ’80s and ’90s, died on Friday, according to the Japanese gaming site Beep21. He was 77. Sato worked at Sega from 1971 until the early 2000s, but he’s best known for his involvement in the development of the Sega arcade games and home consoles that defined many late Gen X and early millennial childhoods, from the SG-1000 through the Genesis, Saturn and Dreamcast.

Sato went on to serve as Sega’s president from 2001 to 2003. In the post announcing his death, Beep21, which interviewed Sato numerous times over the years, wrote (translated from Japanese), “He was truly a great figure who shaped Japanese gaming history and captivated Sega fans all around the world. The excitement and pioneering spirit of that era will remain forever in the hearts and memories of countless fans, for all eternity.” Sato’s passing comes just a few months after that of Sega co-founder David Rosen, who died in December at age 95. 


OpenClaw creator Peter Steinberger joins OpenAI

Peter Steinberger, who created the AI personal assistant now known as OpenClaw, has joined OpenAI.

Previously known as Clawdbot, then Moltbot, OpenClaw achieved viral popularity over the past few weeks with its promise to be the “AI that actually does things,” whether that’s managing your calendar, booking flights, or even joining a social network full of other AI assistants. (The name changed the first time after Anthropic threatened legal action over its similarity to Claude, then changed again because Steinberger liked the new name better.)

In a blog post announcing his decision to join OpenAI, the Austrian developer said that while he might have been able to turn OpenClaw into a huge company, “It’s not really exciting for me.”

“What I want is to change the world, not build a large company[,] and teaming up with OpenAI is the fastest way to bring this to everyone,” Steinberger said.


OpenAI CEO Sam Altman posted on X that in his new role, Steinberger will “drive the next generation of personal agents.” As for OpenClaw, Altman said it will “live in a foundation as an open source project that OpenAI will continue to support.”


Researchers turn Edison's 1879 light bulb into a mini graphene reactor

Graphene is a two-dimensional lattice of carbon atoms arranged in a hexagonal pattern, renowned for its exceptional electrical conductivity, thermal transport, and mechanical strength. Turbostratic graphene is a stacked variant in which the layers are rotated and misaligned, weakening interlayer coupling and making the material easier to process at scale.

Software Development On The Nintendo Famicom In Family BASIC

Back in the 1980s, your options for writing your own code and games were rather more limited than today. This also mostly depended on what home computer you could get your hands on, which was a market that — at least in Japan — Nintendo was very happy to slide into with their ‘Nintendo Family Computer’, or ‘Famicom’ for short. With the available peripherals, including a tape deck and keyboard, you could actually create a fairly decent home computer, as demonstrated by [Throaty Mumbo] in a recent video.

After a lengthy unboxing of the new-in-box components, we move on to the highlight of the show, the HVC-007 Family BASIC package, which includes a cartridge and the keyboard. The latter of these connects to the Famicom’s expansion port. Inside the package, you also find a big Family BASIC manual that includes sprites and code to copy. Of course, everything is in Japanese, so [Throaty] had to wrestle his way through the translations.

The cassette tape is used to save programs, and the BASIC package also includes a tape with the Sample 3 application, which is used in the video to demonstrate loading software from tape on the Famicom. Although [Throaty] unfortunately didn’t sit down to type in the code for the sample listings in the manual, the video provides an interesting glimpse at the all-Nintendo family computer that the rest of the world never got to enjoy.


Google Docs can turn long documents into audio summaries in latest Workspace update

The new feature will roll out across Google Workspace over the next two weeks. It will appear under Tools > Audio > Listen to document summary, where users can trigger a small media player to control playback. The summaries, typically under three minutes, draw on information from multiple document tabs…

Longtime NPR host David Greene sues Google over NotebookLM voice

David Greene, the longtime host of NPR’s “Morning Edition,” is suing Google, alleging that the male podcast voice in the company’s NotebookLM tool is based on Greene, according to The Washington Post.

Greene said that after friends, family members, and coworkers began emailing him about the resemblance, he became convinced that the voice was replicating his cadence, intonation, and use of filler words like “uh.”

“My voice is, like, the most important part of who I am,” said Greene, who currently hosts the KCRW show “Left, Right, & Center.”

Among other features, Google’s NotebookLM allows users to generate a podcast with AI hosts. A company spokesperson told the Post that the voice used in this product is unrelated to Greene’s: “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”


This isn’t the first dispute over AI voices resembling real people. In one notable example, OpenAI removed a ChatGPT voice after actress Scarlett Johansson complained that it was an imitation of her own.

