Google is making it dramatically easier to sign in to apps without OTP or link hassles

If you’ve ever signed up for an app and then spent the next five minutes hunting for a six-digit code buried in your inbox, you know how painful the process is. I especially despise the magic sign-in links that websites send, as they sometimes fail to work if my default browser isn’t Google Chrome.

Thankfully, Google is fixing that with a new verified email credential for Android, and it’s a genuinely smart solution. 

So what’s actually broken with OTPs?

The humble OTP has been the backbone of email verification forever, but it comes with real problems. You leave the app, open your inbox, find the email, copy the code, and come back. 

It’s a long process that not only hurts consumers but also app developers. The number of steps required may cause a user to leave the app mid-sign-up, meaning the app loses potential users before they even try it.

iOS tackled this by letting users sign in directly with their Apple account. More recently, it also added autofill for OTPs arriving by email, just as Android supports OTP autofill from messages.

Now, Google is also creating a seamless signup process that doesn’t require users to jump between apps. 

How does the new system work?

Google now issues a cryptographically verified email credential to Android devices. When an app needs to confirm your email address, it can request that credential through the Credential Manager API.

A small prompt appears on screen showing what information is being requested. You tap to confirm, and the app gets your verified email. No switching apps, no codes, no delay.
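As a toy illustration of the pattern, the sketch below mimics issue-then-verify in Python: a trusted issuer signs the email claim once, and the app’s backend checks that signature instead of emailing a code. Everything here is an assumption for illustration; Google’s real credential uses public-key cryptography rather than the shared HMAC key in this sketch.

```python
# Toy illustration of a verified email credential: the issuer binds the
# email to a signature, and the app verifies it with no inbox round trip.
# The key, field names, and shared-HMAC design are assumptions for
# illustration only, not Google's actual credential format.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical signing key

def issue_credential(email: str) -> dict:
    """What the platform does once: sign the email claim."""
    payload = json.dumps({"email": email}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"email": email, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """What the app's backend does: recompute and compare the signature."""
    payload = json.dumps({"email": cred["email"]}, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("user@example.com")
print(verify_credential(cred))   # True
cred["email"] = "attacker@example.com"
print(verify_credential(cred))   # False
```

Tampering with any field invalidates the signature, which is what lets the app trust the email address without sending anything to the inbox.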

Google recommends pairing this with passkey creation, so the first sign-up becomes the last time a user has to do anything manual. 

The same can also be used for account recovery and re-authentication of sensitive actions, including setting changes, updating profile details, and more. 

Advertisement

The best part is that the new feature supports Android 9 and later devices, so you don’t need the best new Android smartphones to enjoy this quality-of-life improvement.

Are there any restrictions?

There are a few restrictions. The feature currently works only with regular consumer Google Accounts, not Workspace accounts. It also only works with Gmail accounts, and not with third-party email accounts that you might have used to create your Google account.

OpenAI’s new image model reasons before it draws

The new model reasons about composition, searches the web for context, generates up to eight coherent images from one prompt, and renders text in non-Latin scripts with near-flawless accuracy. It also took the number one spot on the Image Arena leaderboard within 12 hours of launch, by the largest margin ever recorded.


Two years ago, asking ChatGPT to generate a visual was like commissioning a poster from a sleep-deprived intern with a glue stick and a head injury. You’d ask for a clean design and get “leftovers creativity” splashed across the image, plus three new words that looked like they’d been invented during a minor software malfunction.

The images looked AI-generated in the way that has become a cultural shorthand for uncanny: almost right, conspicuously wrong, and instantly recognisable as synthetic.

The leap matters. Text rendering has been the persistent, embarrassing weakness of AI image generators since DALL-E first turned heads in January 2021, a model we covered at the time as a fascinating curiosity.

Images 2.0 claims approximately 99% accuracy in text rendering across any language and script, including Japanese, Korean, Chinese, Hindi, and Bengali. If that figure holds in independent testing, it closes the gap between “impressive AI demo” and “tool a graphic designer would actually use for production work.”

The architectural change that makes the model different, not just better, is what OpenAI calls “thinking capabilities.” Images 2.0 is the company’s first image model to integrate its O-series reasoning architecture.

Before generating a pixel, the model researches the prompt, plans the composition, reasons about spatial relationships between elements, and can search the web for real-time context.

It is, in OpenAI’s framing, not a rendering tool but a “visual thought partner.”

This is my cat transformed into a comic strip with ChatGPT.

In practice, this manifests in two access modes. Instant mode ships to all ChatGPT users, including free-tier accounts, and delivers the core quality improvements: better text, sharper editing, richer layouts.

Thinking mode, which enables web search, multi-image batching, and output verification, is restricted to Plus ($20/month), Pro ($200/month), Business, and Enterprise subscribers.

The distinction is commercially significant. The reasoning capabilities, where most of the quality premium lives, sit behind the paywall. Free users get better images; paying users get images the model has thought about.

The multi-image capability is the feature most likely to change professional workflows. A single prompt can now produce up to eight images that maintain character and object continuity across the set.

That means a designer can generate a family of social media assets, a children’s book sequence, or a series of storyboard frames from one instruction, with consistent visual identity throughout.

Previously, each image had to be prompted individually and stitched together manually. For marketing teams and content creators, that is a meaningful reduction in production friction.

The integration into Codex, OpenAI’s coding environment, is the strategically loaded move. Developers and designers can now generate UI mockups, prototypes, and visual assets inside the same agentic workspace they use for code, slides, and browser automation, using a single ChatGPT subscription.

The image model is no longer a standalone product; it is a capability embedded in OpenAI’s broader platform, competing not just with Midjourney and Google’s Nano Banana 2 on quality but with Canva and Figma on workflow integration.

The benchmark performance is striking. Within 12 hours of launch, Images 2.0 took the number one spot on the Image Arena leaderboard across every category, with a score of 1,512, a +242-point lead over the second-place model, Google’s Nano Banana 2. That is the largest lead ever recorded on the leaderboard.

For most of 2026, OpenAI and Google had been trading the top position within a tight margin; Images 2.0 broke away decisively. 

DALL-E 2 and DALL-E 3 are being deprecated and retired on 12 May 2026. GPT-Image-1.5, released in December 2025 as an intermediate upgrade, remains accessible via the API for legacy integrations but is no longer the default model.

OpenAI did not disclose the architecture of Images 2.0, describing it only as a “generalist model” or “GPT for images” and declining to specify whether it uses a diffusion, autoregressive, or hybrid approach. The API model identifier is gpt-image-2; the API is expected to open to developers in early May 2026.

Token-based pricing is $8 per million tokens for image input, $2 for cached input, and $30 for image output, with per-image costs typically ranging from $0.04 to $0.35 depending on prompt complexity and resolution. Output resolution reaches up to 2K.
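To make the arithmetic concrete, here is a small sketch of a per-image cost from those token rates. The token counts themselves are hypothetical, since OpenAI quotes only the $0.04–$0.35 per-image range.

```python
# Per-image cost estimate from the published token rates:
# $8/M input, $2/M cached input, $30/M output.
INPUT_RATE = 8 / 1_000_000    # dollars per input token
CACHED_RATE = 2 / 1_000_000   # dollars per cached input token
OUTPUT_RATE = 30 / 1_000_000  # dollars per output token

def image_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimated dollar cost for one generation."""
    return (input_tokens * INPUT_RATE
            + cached_tokens * CACHED_RATE
            + output_tokens * OUTPUT_RATE)

# Hypothetical mid-complexity image: ~500 prompt tokens, ~4,000 output tokens.
cost = image_cost(500, 4_000)
print(round(cost, 4))  # 0.124, inside the quoted $0.04-$0.35 range
```

Output tokens dominate the bill at these rates, which is why resolution and prompt complexity drive the per-image spread.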

The knowledge cutoff is December 2025, which introduces a practical boundary: the model cannot accurately render events, people, or products that emerged after that date without supplementing its internal knowledge with live web search.

The model’s safety architecture includes content filtering, C2PA metadata for provenance, and what OpenAI described in the press briefing as ongoing monitoring, a point the company was notably emphatic about, given the growing regulatory scrutiny of synthetic media and the use of AI image generators in deepfakes, scams, and non-consensual imagery.

The most consequential question Images 2.0 raises is not about quality. The technical gap between AI-generated and human-created imagery has been narrowing for years; this model narrows it further.

The question is about what happens when the tool is no longer a novelty but infrastructure, when image generation is a default capability of every coding environment, every chat interface, and every enterprise productivity suite, and when the distinction between “designed by a person” and “generated by a prompt” becomes something only metadata can verify.

OpenAI, for its part, appears to be betting that the answer is scale: more images, faster, better, cheaper, everywhere. When we first covered DALL-E five years ago, the model’s outputs were fascinating oddities. Now they are production assets.

The era in which AI-generated images were obviously AI-generated is over. What comes next depends on whether the guardrails can keep pace with the capability.

These New Smart Glasses From Ex-OnePlus Engineers Have a Hidden Cost

Lots of smart glasses have AI bots inside them now. The one in L’Atitude 52°N’s glasses is called Goya, named after Francisco Goya, the famous Spanish artist who painted renowned masterpieces of romanticism.

CEO and founder Gary Chen, who has worked on wearable devices for companies like Oppo, OnePlus, and HTC, says his company’s glasses are focused on travelers, with AI features that act like a tour guide and talk about all the paintings in famous museums.

“Basically, you can say, ‘Hey, Goya, what is the story about Mona Lisa?’” Chen says. “You can ask anything and, with your permission, they will take a photo to analyze what’s in front of you.”

I ask if you could quiz it about perhaps the most famous Goya painting, the terrifying, Gothic horror-esque image of Saturn devouring his own son.

“Yes, yes,” Chen says, “It can also give you some recommendations about restaurants.”

Berlin-based L’Atitude 52°N is a new player in the smart glasses space, selling its first pairs on Kickstarter in September 2025, where the campaign surpassed its funding goal and raised more than $400,000. There have been some bumps since then, as shipments were delayed from an originally announced release date in February 2026, and one model in development was scrapped outright. Now, L’Atitude 52°N has announced an official release date for its smart glasses.

Preorders for one model, called Berlin, start on May 19. The glasses actually go on sale on May 26. This might be a disappointment for Kickstarter backers, as the most recent official update from the campaign came in March and said shipping would begin on April 15 for Berlin units and June 7 for the second model, called Milan. L’Atitude 52°N still hasn’t set an official launch date for the Milan, except to say that it will be “arriving in the second quarter of 2026.”

The Berlin glasses cost $399. Add another $50 for the photochromatic lenses. There is one very big catch: The AI features enabled on the device will only work for 12 months, which L’Atitude 52°N calls an “AI feature trial.” After that, customers have to pay for a subscription service, or will be limited to the base features, like playing music and capturing media.

How much will that subscription service cost? Chen says he doesn’t know.

Leica Chairman says ‘a true Leica sensor’ is coming, and image quality may never be the same


  • Leica announced a strategic partnership with Chinese sensor maker Gpixel
  • Recent Leica cameras use Sony-made sensor tech
  • We can expect a new bespoke sensor for future models, possibly the rumored M12

There has been plenty going on behind the scenes at Leica recently, not least of which potentially selling a controlling stake of the business for approximately $1.2 billion to Chinese and Swedish investors. Now, it looks set to change the bedrock of the tech inside its digital cameras.

In January, Leica’s Chairman of the Supervisory Board, Dr Andreas Kaufmann, revealed that “we’re also developing our own sensor again” when discussing future development of the Leica M-system. He made the comment on the German-language ‘Leica Enthusiast Podcast’, according to a report by PetaPixel.

Turkey wants to ban social media for kids under 15

The Turkish parliament has voted through a bill that would ban all children under the age of 15 from using social media. As part of the legislation, social media platforms would be required to enforce age-verification measures on their apps, provide parental control tools, and react more quickly to harmful content being posted.

As reported by AP, lawmakers passed the bill in the wake of two deadly school shootings in Turkey, after which police detained 162 people accused of sharing footage of the tragedies online.

Turkey’s President Recep Tayyip Erdogan now has 15 days to sign the bill for it to become law. He reportedly described social media platforms as “cesspools” in a televised address to the nation.

As well as the major social media platforms, AP reports that online gaming companies would also have to implement their own restrictions on minors, with potential punishments including bandwidth reductions and financial penalties.

This isn’t the first time Turkey has locked horns with social media and online gaming platforms. Instagram was blocked in the country before, back in 2024, over a dispute relating to the posting of Hamas-related content. Access was restored around a week later, but in the same period Turkey also blocked Roblox over reports of sexual content deemed exploitative of children. At the time, a Turkish official also named the “promotion of homosexuality” as one reason for the ban.

Turkey has also temporarily banned Twitter (now called X) on several occasions, most notably after 2023’s devastating earthquakes, though it was not clear at the time why the government moved to block the platform.

The country’s lawmakers moving to ban under-15s from accessing social media is part of an emerging trend in Europe and across the globe. Several other countries have recently introduced similar legislation of their own, following Australia becoming the first country in the world to ban children under 16 from social media last year. The UK has since moved toward tighter restrictions too.

Lume Cube Edge Light Go Review (2026): Versatile, Portable

The base of the lamp has two slider buttons. One toggle adjusts the warmth, from cold white light all the way to red. The other adjusts the intensity, from ultra-bright down to a glareless glow. Hard taps on each button skip ahead, while holding a toggle down on one side or the other adjusts the light settings quite slowly, slowly enough that at first I sometimes questioned whether anything was happening.

The maximum brightness is 1,000 lumens—the approximate intensity of a 75-watt incandescent bulb. At this brightness, the battery lasts about five hours. At a lower intensity, this can extend to as long as a dozen hours.

Red Shift

Photograph: Matthew Korfhage

There’s an added feature I have come to appreciate at night, which is the red-light mode. There’s little evidence that blue light from your little smartphone is keeping you awake at night. But numerous studies do show that blue light wavelengths can affect melatonin levels and thus your body’s circadian rhythm, while red light doesn’t do this.

Red light therapy is, of course, the province of TikTok as much as science—a field where wild exaggerations live alongside legitimate uses and benefits. For every sleep study showing that red light is superior to blue light when it comes to melatonin levels, there’s another showing that red light is associated with “negative emotions” before bed.

So I can only offer my own experience, which is that Edge Light Go’s red reading light offers me a pleasant liminal space between awake time and sleepy time, one not offered by a basic nightstand lamp. It allows me to sort of bask in a darkroom space that still lets me see and read, and drift off a little easier.

If I fall asleep, the light has an automatic 25-minute shut-off, which means I won’t do what I far too often do, which is drift off while reading and then wake up, alarmed, to a room filled with bright light in the middle of the night.

Caveats and Quirks

Photograph: Matthew Korfhage

That said, for all its portability, the Edge Light Go’s base isn’t heavy enough to stop the lamp from tipping over if I bend it forward from its lowest hinge. This can be an annoyance when trying to use the lamp as a reading light from a bedside table or the arm of a couch.

Volvo’s parent company just made a sleek $14,300 electric sedan that will elude US buyers

Volvo’s parent company has launched a new electric sedan in China that hits a familiar sore spot for US car shoppers.

The Geely Galaxy A7 EV pairs a clean, mainstream shape with a claimed 550km of CLTC range, and it enters the market at a price that still looks strikingly low by Western EV standards. It also appears set to stay far away from US dealerships.

That low headline number needs some caution though. Car News China reports a cheaper entry point, but the launched EV trims are higher, starting at 112,800 yuan, or about $16,530, and rising to 119,800 yuan. That is still aggressive pricing for a sedan of this size, just not quite the jaw-dropping bargain the earliest figure implied.

Cheap price, messy rollout

Once you get past the pricing confusion, the core package looks solid. The A7 EV uses a 58.05 kWh LFP battery and a front-mounted 160 kW motor, with Geely claiming 550km on the CLTC cycle. Reports say there’s a smaller-battery version, so there’s still some room for Geely to clear up how broad the lineup may become.

The rest of the car sounds more mature than bargain-bin. The EV gets restrained exterior styling, a 14.6-inch touchscreen, a digital instrument display, and an interior layout that reads like normal family transport instead of a stripped-down cost cutter. That matters because the real appeal here is not novelty. It is normality at a low price.

Why this one won’t reach you

For American readers, the frustrating part is how familiar this story has become.

China keeps producing lower-cost EVs that look usable and complete, while the US market rarely gets anything close to this price in a new electric sedan.

Nothing tied to the A7 EV points to a US launch, so this one looks like another car Americans will only watch from afar.

What happens before this goes on sale

The next question is whether the EV version can help revive momentum for the wider Galaxy A7 line.

Geely delivered 15,230 A7 units in China in the first quarter of 2026, but that total was down 59.4% from the prior quarter.

If this EV lands with buyers, it will matter as more than a fresh trim. It will show how quickly China’s lower-cost EV market is sharpening up.

OpenAI unveils Workspace Agents, a successor to custom GPTs for enterprises that can plug directly into Slack, Salesforce and more

OpenAI introduced a new paradigm and product today that is likely to have huge implications for enterprises seeking to adopt and control fleets of AI agent workers.

Called “Workspace Agents,” OpenAI’s new offering essentially allows users on its ChatGPT Business ($20 per user per month) and variably priced Enterprise, Edu and Teachers subscription plans to design or select from pre-existing agent templates that can take on work tasks across third-party apps and data sources including Slack, Google Drive, Microsoft apps, Salesforce, Notion, Atlassian Rovo, and other popular enterprise applications.

Put simply: these agents can be created and accessed from ChatGPT, but users can also add them to third-party apps like Slack, communicate with them across disparate channels, and ask them to use information from the channel they’re in as well as other third-party tools and apps. The agents will then go off and do the work: drafting emails to the whole team or selected members, pulling data, and building presentations.

Human users can trust that the agent will manage all this complexity and complete the task as requested, even if the user who requested it leaves.

It’s the end of “babysitting” agents and the start of letting them go off and get shit done for your business — according to your defined business processes and permissions, of course.

The product experience appears centered on the Agents tab in the ChatGPT sidebar, where teams can discover and manage shared agents.

This functions as a kind of team directory: a place where agents built by coworkers can be reused across a workspace. The broader idea is that AI becomes less of an individual productivity trick and more of a shared organizational resource.

In this sense, OpenAI is targeting one of office work’s oldest pain points: the handoff between people, systems, and steps in a process.

OpenAI says workspace agents will be free for the next two weeks, until May 6, 2026, after which credit-based pricing will begin. The company also says more capabilities are on the way, including new triggers to start work automatically, better dashboards, more ways for agents to take action across business tools, and support for workspace agents in its AI code generation app, Codex.

For more information on how to get started building and using them, OpenAI points users to its online academy page and its help desk documentation.

The Codex backbone

The most significant shift in this announcement is the move away from purely session-based interaction. Workspace agents are powered by Codex — the cloud-based, partially open-source AI coding harness that OpenAI has been aggressively expanding in 2026 — which gives them access to a workspace for files, code, tools, and memory.

OpenAI says the agents can do far more than answer a prompt. They can write or run code, use connected apps, remember what they have learned, and continue work across multiple steps.

That description lines up closely with the capabilities OpenAI shipped into Codex just six days ago, including background computer use, more than 90 new plugins spanning tools like Atlassian Rovo, CircleCI, GitLab, Microsoft Suite, Neon by Databricks, and Render, plus image generation, persistent memory, and the ability to schedule future work and wake up on its own to continue across days or weeks.

Workspace agents inherit that plumbing. When one pulls a Friday metrics report, it is effectively spinning up a Codex cloud session with the right tools attached, running code to fetch and transform data, rendering charts, writing the narrative, and persisting what it learned for next week.

When that same agent is deployed to a Slack channel, it is a Codex instance listening for mentions and threading its work back in.

This is the technical decision enterprise buyers should focus on. Building an agent on a code-execution substrate rather than a pure LLM-call-and-response loop is what gives workspace agents the ability to do real work — transforming a CSV, reconciling two systems of record, generating a chart that is actually correct — rather than describing what the work would look like.

Persistence and scheduling

In earlier AI assistant models, progress paused when the user stopped interacting. Workspace agents change that by running in the cloud and supporting long-running workflows. Teams can also set them to run on a schedule.

That means a recurring reporting agent can pull data on a set cadence, generate charts and summaries, and share the results with a team without anyone manually kicking off the process.
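The cadence check behind such a recurring agent reduces to a few lines. The function and argument names below are illustrative, not OpenAI’s scheduling API.

```python
# Minimal sketch of a "run on a schedule" check for a recurring agent.
# Nothing here is OpenAI's API; it only illustrates the cadence logic.
from datetime import date, timedelta

def should_run(last_run, today, cadence=timedelta(days=7)):
    """True when the agent has never run, or its next run is due."""
    return last_run is None or today - last_run >= cadence

print(should_run(None, date(2026, 4, 24)))               # True: first run
print(should_run(date(2026, 4, 20), date(2026, 4, 24)))  # False: only 4 days elapsed
print(should_run(date(2026, 4, 17), date(2026, 4, 24)))  # True: a full week has passed
```

In the hosted version, this kind of check runs in the cloud, so the report ships whether or not anyone is at their desk.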

Here at VentureBeat, we analyze story traffic and user return rate on a weekly basis — exactly the kind of recurring, multi-step, multi-source task that could theoretically be automated with a single workspace agent. Any enterprise with a weekly reporting rhythm pulling from dynamic data sources is likely to find a use for these agents.

Agents also retain memory across runs. OpenAI says they can be guided and corrected in conversation, so they improve the more a team uses them.

Over time they start to reflect how a team actually works — its processes, its standards, its preferred ways of handling recurring jobs — which is a meaningfully different proposition from the static instruction-set GPTs that preceded them.

The integrated ecosystem

OpenAI’s claim is that agents should gather information and take action where work already happens, rather than forcing teams into a separate interface. That point becomes clearest in the Slack examples. OpenAI’s launch materials show a product-feedback agent operating inside a channel named #user-insights, answering a question about recent mobile-app feedback with a themed summary pulled from multiple sources.

The company’s demo lineup walks through a sample team directory of agents: Spark for lead qualification and follow-up, Slate for software-request review, Tally for metrics reporting, Scout for product feedback routing, Trove for third-party vendor risk, and Angle for marketing and web content.

OpenAI first-party workspace agents. Credit: OpenAI

OpenAI also shared more functional examples its own teams use internally — a Software Reviewer that checks employee requests against approved-tools policy and files IT tickets; an accounting agent that prepares parts of month-end close including journal entries, balance-sheet reconciliations, and variance analysis, with workpapers containing underlying inputs and control totals for review; and a Slack agent used by the product team that answers employee questions, links relevant documentation, and files tickets when it surfaces a new issue.

In a sense, it is a continuation of the philosophy OpenAI espoused for individuals with last week’s Codex desktop release: the agent joins the workflow where work is already happening, draws in context from the surrounding apps, takes action where permitted, and keeps moving.

From GPTs to a broader agent push

Workspace agents are not a standalone launch. They sit inside a roughly 12-month arc in which OpenAI has been systematically rebuilding ChatGPT, the API, and the developer platform around agents.

Workspace agents are explicitly positioned by OpenAI as an evolution of its custom GPTs, introduced in late 2023, which gave users a way to create customized versions of ChatGPT for particular roles and use cases.

However, OpenAI now says it is deprecating the custom GPT standard for organizations at a yet-to-be-determined future date, and will require Business, Enterprise, Edu and Teachers users to rebuild their GPTs as workspace agents.

Individuals who have made custom GPTs can continue using them for the foreseeable future, according to our sources at the company.

In October 2025, OpenAI introduced AgentKit, a developer-focused suite that includes Agent Builder, a Connector Registry, and ChatKit for building, deploying, and optimizing agents.

In February 2026, it introduced Frontier, an enterprise platform focused on helping organizations manage AI coworkers with shared business context, execution environments, evaluation, and permissions.

Workspace agents arrive as the no-code, in-product entry point that sits on top of that stack — even if OpenAI does not explicitly describe the architectural relationship in its materials.

The subtext across all three launches is the same: OpenAI has decided that the future of ChatGPT-for-work is fleets of permissioned agents, not single chat windows — and that GPTs, its first attempt at letting businesses customize ChatGPT, were not enough.

Governance and enterprise safeguards

Because workspace agents can act across business systems, OpenAI puts heavy emphasis on governance. Admins can control who is allowed to build, run, and publish agents, and which tools, apps, and actions those agents can reach.

The role-based controls are more granular than the ones most custom-GPT rollouts ever had: admins can toggle, per role, whether members can browse and run agents, whether they can build them, whether they can publish to the workspace directory, and — separately — whether they can publish agents that authenticate using personal credentials.

That last setting is the risky case, and OpenAI explicitly recommends keeping it narrowly scoped.

Authentication itself comes in two flavors, and the choice has real consequences. In end-user account mode, each person who runs the agent authenticates with their own credentials, so the agent only ever sees what that individual is allowed to see.

In agent-owned account mode, the agent uses a single shared connection so users don’t have to authenticate at run time. OpenAI’s documentation strongly recommends service accounts rather than personal accounts for the shared case, and flags the data-exfiltration risk of publishing an agent that authenticates as its creator.
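The difference between the two modes can be sketched as a simple credential-resolution rule. The mode names and dictionary shapes below are assumptions for illustration, not OpenAI’s actual schema.

```python
# Toy illustration of the two authentication modes described above.
def resolve_credentials(agent, user):
    """Pick which credentials an agent run should use."""
    if agent["auth_mode"] == "end_user":
        # Each runner authenticates as themselves, so the agent can
        # only ever see what this particular user is allowed to see.
        return user["credentials"]
    # Agent-owned mode: one shared service-account connection,
    # regardless of who triggers the run.
    return agent["service_account"]

agent = {"auth_mode": "agent_owned", "service_account": "svc-reporting"}
user = {"credentials": "alice-oauth-token"}
print(resolve_credentials(agent, user))  # svc-reporting
```

The exfiltration risk OpenAI flags follows directly from the second branch: every runner inherits whatever the shared account can see.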

Write actions — sending email, editing a spreadsheet, posting a message, filing a ticket — default to Always ask, requiring human approval before the agent executes.

Builders can relax specific actions to “Never ask” or configure a custom approval policy, but the default posture is human-in-the-loop.
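That default-plus-overrides posture can be sketched as a tiny policy lookup. The policy shape and action names are illustrative assumptions, not OpenAI’s actual configuration format.

```python
# Sketch of the default "Always ask" posture with per-action overrides.
# Policy shape and action names are illustrative, not OpenAI's format.
APPROVAL_POLICY = {
    "default": "always_ask",
    "overrides": {"post_slack_message": "never_ask"},
}

def needs_human_approval(action, policy=APPROVAL_POLICY):
    """Write actions require approval unless explicitly relaxed."""
    return policy["overrides"].get(action, policy["default"]) == "always_ask"

print(needs_human_approval("send_email"))          # True: falls back to the default
print(needs_human_approval("post_slack_message"))  # False: explicitly relaxed
```

Keeping the fallback at "always_ask" means any action a builder forgets to configure stays human-in-the-loop by default.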

OpenAI also claims built-in safeguards against prompt-injection attacks, where malicious content in a document or web page tries to hijack an agent. The claim is welcome but not yet proven in the wild.

For organizations that want deeper visibility, OpenAI says its Compliance API surfaces every agent’s configuration, updates, and run history.

Admins can suspend agents on the fly, and OpenAI says an admin-console view of every agent built across the organization, with usage patterns and connected data sources, is coming soon.

Two caveats worth flagging for security-sensitive buyers: workspace agents are off by default at launch for ChatGPT Enterprise workspaces pending admin enablement, and they are not available at all to Enterprise customers using Enterprise Key Management (EKM).

Analytics and early customer signal

OpenAI also ships an analytics dashboard aimed at helping teams understand how their agents are being used. Screenshots in the launch materials show measures like total runs, unique users, and an activity feed of recent runs, including one by a user named Ethan Rowe completing a run in a #b2b-sales channel.

The mockup detail supports OpenAI’s broader point: the company wants organizations to measure not just whether agents exist, but whether they are being used.

The clearest early-adopter signal in the launch itself comes from Rippling. Ankur Bhatt, who leads AI Engineering at the HR platform, says workspace agents shortened the traditional development cycle enough that a sales consultant was able to build a sales agent without an engineering team. “It researches accounts, summarizes Gong calls, and posts deal briefs directly into the team’s Slack room,” Bhatt says. “What used to take reps 5–6 hours a week now runs automatically in the background on every deal.”

OpenAI’s announcement names SoftBank Corp., Better Mortgage, BBVA, and Hibob as additional early testers.

The era of the digital coworker

Workspace agents do not land in a vacuum. They land in the middle of a broader OpenAI push — through AgentKit, through Frontier, through the Codex overhaul — to make agents more persistent, more connected, and more useful inside real organizational workflows.

They also land in a deeply crowded field: Microsoft Copilot Studio is wired into the Microsoft 365 base, Google is pushing Agentspace, Salesforce has rebuilt itself as agent infrastructure with Agentforce, and Anthropic recently introduced Claude Managed Agents. These are all different flavors of the same idea: agents that cut across your apps and tools, take actions on a schedule, and retain some degree of memory, context, permissions, and policy.


But this launch matters because it turns OpenAI’s strategy into something concrete for the teams already paying for ChatGPT, and because it quietly retires the product those teams were most recently told to standardize on.

If workspace agents live up to the pitch — shared, reusable, scheduled, permissioned coworkers that follow approved processes and keep work moving when their human is offline — it would mark a meaningful change in what workplace software does. Less passive software waiting for input, more active systems helping teams coordinate, execute, and move faster together.

The era of the digital coworker has begun. And, on OpenAI’s plans at least, the era of the custom GPT is ending.


‘It ultimately made people realize that music was worth paying for’: Spotify’s Sten Garmark on how the streaming giant created an entirely new business model, and its mission to convince users that ‘there was something better than free’


Over the past couple of decades we’ve witnessed a whirlwind of cultural changes in the music industry, but also major changes in terms of how we find and listen to music. And there’s arguably one entity that has contributed to these shifts more than any other: Spotify — which was founded 20 years ago today (April 23). Feel old yet? I sure do.

For many music lovers out there, myself included, Spotify was their introduction to music streaming, and over the last 20 years it’s climbed to the top of the ladder, amassing over 750 million users and cementing its position as one of the best music streaming services — and in the eyes of many, the daddy of them all.


AI galaxy hunters are adding to the global GPU crunch


NASA announced that it will launch the Nancy Grace Roman Space Telescope into orbit in September 2026, eight months ahead of schedule. The new space telescope is expected to deliver 20,000 terabytes of data to astronomers over the course of its life.

That will add to the 57 gigabytes of breathtaking imagery downlinked daily from the James Webb Space Telescope, which began its work in 2021, and the start of a survey later this year by the Vera C. Rubin Observatory in the mountains of Chile, which is expected to gather 20 terabytes of data each night.

For comparison, the Hubble Space Telescope, once the gold standard, delivers just 1 to 2 gigabytes of sensor readings each day. It's been a while since all those readings were pored over by hand, but like everyone else with a pile of data, astronomers are now turning to GPUs to solve their problems.
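A quick back-of-the-envelope comparison, using the figures above, shows just how steep the step change in data volume is:

```python
# Daily data volumes from the article, normalized to gigabytes.
GB_PER_TB = 1000

webb_gb_per_day = 57                 # James Webb Space Telescope
rubin_gb_per_day = 20 * GB_PER_TB    # Vera C. Rubin Observatory: 20 TB per night
hubble_gb_per_day = 2                # Hubble, at the upper end of 1-2 GB/day

print(rubin_gb_per_day / webb_gb_per_day)    # Rubin is roughly 350x Webb's daily volume
print(rubin_gb_per_day / hubble_gb_per_day)  # and 10,000x Hubble's
```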

Brant Robertson, a UC Santa Cruz astrophysicist, has had a front-row seat to this step change in science while supporting or using data from these missions. Robertson has spent the past 15 years working with Nvidia to apply GPUs to the problems of understanding space, first through advanced simulations testing theories about supernova explosions, and now developing the tools to analyze a torrent of data from the newest observatories.


“There’s been this evolution [from] looking at a few objects, to doing CPU-based analyses on large scales of the data set, to then doing GPU-accelerated versions of those same analyses,” he told TechCrunch.

Robertson and then-graduate student Ryan Hausen developed a deep learning model called Morpheus that can pore over large data sets and identify galaxies. Their early AI analysis of Webb data identified a surprising number of disc galaxies of a specific type and added a new wrinkle to theories about the development of our universe.

Now Morpheus is changing with the times: Robertson is switching its architecture from convolutional neural networks to the transformers behind the rise of large language models. That will let the model analyze several times the area it can currently handle, speeding up its work.
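To illustrate the architectural point (the sizes and tokenization here are generic vision-transformer style, not Morpheus's actual configuration), a transformer ingests an image tile as a flat sequence of fixed-size patch tokens, which is what lets a single pass cover a wider field than a sliding CNN window:

```python
# Split a 2D image into non-overlapping patch "tokens", ViT-style.
# Pure-Python sketch; real pipelines would use NumPy or a tensor library.
def to_patches(image, patch):
    h, w = len(image), len(image[0])
    assert h % patch == 0 and w % patch == 0, "tile must divide evenly into patches"
    patches = []
    for pr in range(0, h, patch):          # walk patch rows
        for pc in range(0, w, patch):      # walk patch columns
            patches.append([image[r][c]
                            for r in range(pr, pr + patch)
                            for c in range(pc, pc + patch)])
    return patches

tile = [[0] * 256 for _ in range(256)]     # one 256x256 sky tile
tokens = to_patches(tile, 16)              # 16x16-pixel patches -> transformer tokens
print(len(tokens), len(tokens[0]))         # 256 tokens of 256 pixels each
```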


Robertson is also working on generative AI models trained on space telescope data to improve the quality of observations collected by ground telescopes, which are distorted by Earth’s atmosphere. Despite advances in rocketry, it’s still hard to get an 8 meter mirror into orbit, so using software to improve Rubin’s observations is the next best thing.


But he’s still feeling the pressure of global demand for GPU access. Robertson has used National Science Foundation funding to build a GPU cluster at UC Santa Cruz, but it is becoming outdated even as more researchers want to apply compute-intensive techniques to their work. The Trump administration proposed cutting the NSF’s budget by 50% in its current budget request.

“People want to do these AI, ML analyses, and GPUs are really the way to do that,” Robertson said. “You have to be entrepreneurial…especially when you’re working kind of at the edge of where the technology is. Universities are very risk averse because they just have constrained resources, so you have to go out and show them that, ‘look, this is where we’re going as a field.’”



Making RAM At Home In Your Own Semiconductor Fab


There’s little point in setting up your own shed-based clean room for semiconductor purposes if you don’t try to do something practical with it. Something like responding to the RAMpocalypse by trying to make your own RAM, for example.

Testing the DRAM cells. (Credit: Dr. Semiconductor, YouTube)

After all, what could be so hard about etching the same repeating structures over and over? In a recent video, [Dr. Semiconductor]’s experience doing exactly this is detailed, with actual DRAM resulting at the end.

We covered the construction of the clean room shed previously, which should provide at least the basic conditions to produce semiconductors without worrying about contaminating dies. From here the process is reminiscent of etching PCBs, with a prepared surface coated with photoresist. Using UV exposure through a mask, the pattern is developed into the photoresist, and from there it is etched into the wafer’s surface.

With the patterns formed, the next step is doping of the silicon in order to create the active structures, i.e. the transistors and capacitors. Doping can be done in a variety of ways, with ion implantation being the industry standard method, but a bit too expensive and bulky for a shed fab. Instead, a spin-on-glass method was used. After this the remaining functional structures can be built up.

If anyone was expecting to see a DDR5 DRAM die pop out at the end, they’re bound to be disappointed. The target here was to create a 5×4 array of DRAM cells, for a dizzying 20 bits. Still, the fact that it’s possible to DIY DRAM like this at home is already pretty awesome, with clearly plenty of room to push it towards and past 1990s fabrication nodes.


Although the produced DRAM cells have fairly leaky capacitors, they’re good enough for their purpose, and the plan is to scale up to a large DRAM array from here. Whether the DRAM control logic will also be implemented in hardware like this remains to be seen, but the video’s ending makes it clear that the goal is to attach it to a PC somehow.
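As a toy model of why leaky capacitors matter (all numbers here are invented for illustration; real DRAM retention behavior is more complex), a cell’s stored charge decays roughly exponentially, so a leakier cell must be refreshed more often to keep a stored "1" above the sense threshold:

```python
import math

def charge_after(t_ms: float, tau_ms: float) -> float:
    """Fraction of full charge remaining after t_ms, with decay constant tau_ms."""
    return math.exp(-t_ms / tau_ms)

def max_refresh_interval(tau_ms: float, threshold: float = 0.5) -> float:
    """Longest wait before the cell's charge drops below the sense threshold."""
    return -tau_ms * math.log(threshold)

# A leakier capacitor (smaller tau) forces a proportionally shorter refresh
# interval: tau of 4 ms must refresh 16x as often as tau of 64 ms.
print(max_refresh_interval(tau_ms=64.0))  # healthy cell
print(max_refresh_interval(tau_ms=4.0))   # leaky DIY cell
```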
