The base of the lamp has two slider buttons. One toggle adjusts the warmth, from cold white light all the way to red; the other adjusts the intensity, from ultra-bright down to a glareless glow. Hard taps on each button skip ahead, while holding a toggle down on one side or the other adjusts the light settings quite slowly, so slowly that I sometimes question whether anything is happening at all.
The maximum brightness is 1,000 lumens—the approximate intensity of a 75-watt incandescent bulb. At this brightness, the battery lasts about five hours. At a lower intensity, this can extend to as long as a dozen hours.
Red light therapy is, of course, the province of TikTok as much as science—a field where wild exaggerations live alongside legitimate uses and benefits. For every sleep study showing that red light is superior to blue light when it comes to melatonin levels, there’s another showing that red light is associated with “negative emotions” before bed.
So I can only offer my own experience, which is that Edge Light Go’s red reading light offers me a pleasant liminal space between awake time and sleepy time, one not offered by a basic nightstand lamp. It allows me to sort of bask in a darkroom space that still lets me see and read, and drift off a little easier.
If I fall asleep, the light has an automatic 25-minute shut-off, which means I won’t do what I far too often do, which is drift off while reading and then wake up, alarmed, to a room filled with bright light in the middle of the night.
Caveats and Quirks
Photograph: Matthew Korfhage
This said, for all the virtues of portability, the Edge Light Go’s base is not heavy enough to stop the lamp from tipping over when I bend it forward from its lowest hinge. This can be an annoyance when trying to use the lamp as a reading light from a bedside table or the arm of a couch.
Summary: The Pentagon has narrowed its Advanced Nuclear Power for Installations (ANPI) programme from eight companies to three, advancing microreactor deployment at Buckley Space Force Base (Colorado) and Malmstrom Air Force Base (Montana) by 2030. The original eight vendors included BWXT, Oklo, X-energy, Kairos Power, Radiant, General Atomics, Westinghouse, and Antares. The commercially owned reactor model, backed by Executive Order 14299 and $125 million in Congressional funding, addresses military grid vulnerability while serving as a proving ground for reactors that could also power AI data centres.
The Pentagon has narrowed the field for its programme to install microreactors at US Air Force bases, selecting three companies from an original pool of eight to advance toward deployment, Bloomberg reported on Tuesday. The down-selection is the most concrete step yet in the Advanced Nuclear Power for Installations programme, known as ANPI, a joint effort between the Defense Innovation Unit, the Air Force, and the Army that aims to make military bases energy-independent by replacing their reliance on a civilian power grid that is increasingly vulnerable to cyberattacks, extreme weather, and the cascading demands of AI-driven energy consumption.
The programme began in April 2025, when the DIU selected eight companies to develop microreactor proposals: Antares Nuclear Energy, BWXT Advanced Technologies, General Atomics Electromagnetic Systems, Kairos Power, Oklo, Radiant Industries, Westinghouse Electric Company, and X-energy. Each was tasked with designing commercially owned and operated reactors that could be built on military land, licensed through the Nuclear Regulatory Commission, and maintained by the vendor throughout their operational life. The military would buy the electricity without owning the reactor, a model designed to accelerate deployment by sidestepping the decades-long procurement cycles that have historically paralysed defence infrastructure projects.
Why Air Force bases need their own power plants
The Department of Defense consumes more than 30 terawatt-hours of electricity annually across more than 500 installations, making it the single largest energy consumer in the US government. The overwhelming majority of that power comes from the civilian grid. That dependence is now treated as a strategic vulnerability. Cyberattacks on US energy infrastructure have increased by roughly 70% in recent years. The grid itself is under growing strain from data centre construction, with the International Energy Agency projecting that data centre electricity consumption will exceed 1,000 terawatt-hours globally by the end of 2026. Military bases that host missile fields, space surveillance operations, and nuclear command infrastructure cannot afford to compete with AI training clusters for grid capacity.
Two Air Force installations have been selected as the first deployment sites. Buckley Space Force Base in Aurora, Colorado, hosts the Aerospace Data Facility, one of the Department of Defense’s primary satellite ground stations. Malmstrom Air Force Base in Great Falls, Montana, oversees 150 Minuteman III intercontinental ballistic missiles spread across 13,800 square miles of Montana prairie. Both bases require uninterrupted power for operations that are, by definition, existential. Nancy Balkus, the Deputy Assistant Secretary of the Air Force for operational energy, has said that energy security at these installations is not an efficiency question but a readiness question. The target is operational microreactors at both sites by 2030.
Microreactors are nuclear fission reactors that typically produce between one and 20 megawatts of electrical power, small enough to fit on a few truck trailers and large enough to power a military base or a small data centre. They use advanced fuel forms, most commonly TRISO (tristructural isotropic) particles encased in ceramic and graphite shells that can withstand extreme temperatures without melting down. Several of the ANPI candidates use high-assay low-enriched uranium, or HALEU, which is enriched to between 5% and 20% uranium-235, higher than conventional reactor fuel but well below weapons grade.
The designs vary significantly. BWXT’s Project Pele, developed separately for the Army, is a 1.5-megawatt transportable reactor that completed initial testing at Idaho National Laboratory and uses TRISO fuel with a gas-cooled design. In February 2026, the Pentagon airlifted a five-megawatt microreactor prototype from California to Utah, the first military nuclear airlift, demonstrating the transportability that makes these systems attractive for expeditionary and remote base operations. Oklo, whose chairman is OpenAI chief executive Sam Altman, designs a compact fast reactor called the Aurora that uses metallic fuel and targets both military and commercial applications. X-energy, which went public with Amazon’s backing, is developing the Xe-100, an 80-megawatt high-temperature gas-cooled reactor that uses TRISO-X fuel pebbles. Kairos Power is building a fluoride salt-cooled reactor. Radiant Industries, founded by former SpaceX engineers, is developing a portable one-megawatt reactor designed for rapid deployment.
Only NuScale Power has received full design certification from the Nuclear Regulatory Commission for a small modular reactor, but NuScale’s design is a 77-megawatt light-water reactor, far larger than what ANPI requires. The ANPI programme’s commercially owned model means that vendors will need to secure their own NRC licences for reactors sited on military land, a regulatory path that has not been tested at this scale. The Atomic Energy Act provides a military exemption for reactors operated by the armed forces, but the ANPI model explicitly uses commercial operators, which means NRC jurisdiction applies.
The policy architecture
The programme sits within a broader policy push that has acquired unusual bipartisan momentum. Executive Order 14299 explicitly links nuclear power to AI infrastructure at military installations, directing federal agencies to accelerate the siting and permitting of advanced reactors. The ADVANCE Act, signed into law with an 82-to-14 Senate vote, streamlines NRC licensing for advanced reactor designs. Congress has appropriated $125 million for military microreactor development. The Army’s separate Project Janus programme is evaluating nine additional bases for microreactor deployment.
The convergence of military energy security and commercial AI infrastructure is not coincidental. The Department of Energy has identified 16 federal sites, many adjacent to existing nuclear facilities, as candidates for data centre construction. Nuclear-powered AI data centres are attracting dedicated venture capital, with Valar Atomics raising $450 million at a $2 billion valuation to build small modular reactors purpose-built for AI workloads. The same microreactors that power a missile field in Montana could, in a commercially licensed configuration, power an AI training cluster in Texas. The ANPI programme is a military procurement initiative, but it is also a proving ground for the reactors that the technology industry hopes will solve its energy problem.
What stands in the way
The 2030 deployment target is ambitious by nuclear standards. No advanced microreactor design has completed NRC licensing. HALEU fuel supply remains constrained, with Centrus Energy as the only domestic commercial producer and Russia historically the dominant global supplier, a dependency that sanctions have complicated. Community opposition to nuclear facilities, even small ones on existing military bases, has slowed previous projects. The cost economics of microreactors at the one-to-20-megawatt scale remain unproven in commercial operation, though the commercially owned model shifts that financial risk from the Department of Defense to the vendors.
The nuclear waste question also persists. Microreactors produce far less spent fuel than conventional power plants, but the United States still lacks a permanent repository for any nuclear waste. Advanced fuel forms like TRISO are more proliferation-resistant and easier to store than conventional spent fuel rods, but “easier” is relative in an industry where waste management has been a political impossibility for four decades.
The broader debate over nuclear power and AI has tended to focus on fusion, the technology that is always 20 years away, or on gigawatt-scale conventional plants that take a decade to build. Microreactors occupy a different niche: small enough to be manufactured in a factory rather than constructed on site, simple enough to operate with minimal staffing, and modular enough to scale by adding units rather than building larger. The military is betting that this niche is real. The down-selection from eight companies to three means the Pentagon has now seen enough proposals to decide which designs are credible and which are not. The three that remain have roughly four years to prove that a nuclear reactor can be as reliable, and as unremarkable, as the diesel generators that military bases currently keep for backup power. If they succeed, the implications extend well beyond the fence line of an Air Force base.
With countries banning social media for kids left and right, Meta is trying different things to convince parents that its platforms are safe for teens. In its latest effort, the company will start showing parents the topics their teens have discussed with Meta AI over the previous seven days.
“Parents will be able to see the topics their teen has been asking Meta AI about in [Facebook, Messenger or Instagram] over the past week,” Meta explained in a blog post. “Topics can range from School, Entertainment, and Lifestyle to Travel, Writing, and Health and Wellbeing, among others.”
For parents overseeing Meta’s teen accounts, the feature will appear in a new Insights tab within supervision, both in-app and on web. Parents can tap on a topic to see the different categories within each: for instance, sub-categories within Lifestyle include fashion, food and holidays, while fitness, physical health and mental health are part of the Health and Wellbeing topic.
Meta also worked with the Cyberbullying Research Center to develop what it calls “conversation starters”: open-ended questions parents can use to talk with their teens about their experience with AI. The resource explains what each question is designed to address, and can be found on the Family Center website or through a link in the new Insights tab.
Finally, Meta revealed more detail about its AI Wellbeing Expert Council, which will provide “ongoing input on our AI experience for teens.” It will be made up of three existing advisory groups as well as new members with special expertise in responsible and ethical AI, who are affiliated with the National Council for Suicide Prevention and multiple universities. It’s worth noting that Meta has a separate oversight board that deals with subjects ranging from AI to moderation.
Offloading moderation chores to busy parents appears to be par for the course for Meta these days. The company has recently cut back on the use of third-party vendors that help with content moderation, shifting responsibility instead to advanced AI systems, according to recent reports.
The dangers of AI for teens are among the reasons countries like Spain have banned social media platforms for kids. One of the most recent and tragic cases was in Canada, where a teen was given specific details by OpenAI’s ChatGPT about how to carry out a school shooting. Another such case is under investigation in Florida, and AI chatbots have been implicated in multiple teen suicides as well.
In the US, the National Suicide Prevention Lifeline is 1-800-273-8255 or you can simply dial 988. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). Wikipedia maintains a list of crisis lines for people outside of those countries.
The new model reasons about composition, searches the web for context, generates up to eight coherent images from one prompt, and renders text in non-Latin scripts with near-flawless accuracy. It also took the number one spot on the Image Arena leaderboard within 12 hours of launch, by the largest margin ever recorded.
Two years ago, asking ChatGPT to generate a visual was like commissioning a poster from a sleep-deprived intern with a glue stick and a head injury. You’d ask for a clean design and get “leftovers creativity” splashed across the image, plus three new words that looked like they’d been invented during a minor software malfunction.
The images looked AI-generated in the way that has become a cultural shorthand for uncanny: almost right, conspicuously wrong, and instantly recognisable as synthetic.
The leap matters. Text rendering has been the persistent, embarrassing weakness of AI image generators since DALL-E first turned heads in January 2021, a model we covered at the time as a fascinating curiosity.
Images 2.0 claims approximately 99% accuracy in text rendering across any language and script, including Japanese, Korean, Chinese, Hindi, and Bengali. If that figure holds in independent testing, it closes the gap between “impressive AI demo” and “tool a graphic designer would actually use for production work.”
The architectural change that makes the model different, not just better, is what OpenAI calls “thinking capabilities.” Images 2.0 is the company’s first image model to integrate its o-series reasoning architecture.
Before generating a pixel, the model researches the prompt, plans the composition, reasons about spatial relationships between elements, and can search the web for real-time context.
It is, in OpenAI’s framing, not a rendering tool but a “visual thought partner.”
This is my cat transformed into a comic strip with ChatGPT.
In practice, this manifests in two access modes. Instant mode ships to all ChatGPT users, including free-tier accounts, and delivers the core quality improvements: better text, sharper editing, richer layouts.
Thinking mode, which enables web search, multi-image batching, and output verification, is restricted to Plus ($20/month), Pro ($200/month), Business, and Enterprise subscribers.
The distinction is commercially significant. The reasoning capabilities, where most of the quality premium lives, sit behind the paywall. Free users get better images; paying users get images the model has thought about.
The multi-image capability is the feature most likely to change professional workflows. A single prompt can now produce up to eight images that maintain character and object continuity across the set.
That means a designer can generate a family of social media assets, a children’s book sequence, or a series of storyboard frames from one instruction, with consistent visual identity throughout.
Previously, each image had to be prompted individually and stitched together manually. For marketing teams and content creators, that is a meaningful reduction in production friction.
The integration into Codex, OpenAI’s coding environment, is the strategically loaded move. Developers and designers can now generate UI mockups, prototypes, and visual assets inside the same agentic workspace they use for code, slides, and browser automation, using a single ChatGPT subscription.
The image model is no longer a standalone product; it is a capability embedded in OpenAI’s broader platform, competing not just with Midjourney and Google’s Nano Banana 2 on quality but with Canva and Figma on workflow integration.
The benchmark performance is striking. Within 12 hours of launch, Images 2.0 took the number one spot on the Image Arena leaderboard across every category, with a score of 1,512, a +242-point lead over the second-place model, Google’s Nano Banana 2. That is the largest lead ever recorded on the leaderboard.
For most of 2026, OpenAI and Google had been trading the top position within a tight margin; Images 2.0 broke away decisively.
DALL-E 2 and DALL-E 3 are being deprecated and retired on 12 May 2026. GPT-Image-1.5, released in December 2025 as an intermediate upgrade, remains accessible via the API for legacy integrations but is no longer the default model.
OpenAI did not disclose the architecture of Images 2.0, describing it only as a “generalist model” or “GPT for images” and declining to specify whether it uses a diffusion, autoregressive, or hybrid approach. The API model identifier is gpt-image-2; the API is expected to open to developers in early May 2026.
Token-based pricing is $8 per million tokens for image input, $2 for cached input, and $30 for image output, with per-image costs typically ranging from $0.04 to $0.35 depending on prompt complexity and resolution. Output resolution reaches up to 2K.
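To make the token-based pricing concrete, here is a minimal sketch of how those rates translate into a per-image dollar cost. Only the per-million-token rates come from the announcement; the token counts in the example are hypothetical illustrations, since OpenAI has not published how many tokens a typical prompt or rendered image consumes.

```python
# Quoted rates for the gpt-image-2 API, in USD per million tokens.
RATE_INPUT = 8.00    # image/prompt input
RATE_CACHED = 2.00   # cached input
RATE_OUTPUT = 30.00  # image output

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the dollar cost of one generation from its token counts."""
    return (input_tokens * RATE_INPUT
            + cached_tokens * RATE_CACHED
            + output_tokens * RATE_OUTPUT) / 1_000_000

# Hypothetical example: a 500-token prompt producing an image that
# serializes to 8,000 output tokens.
cost = estimate_cost(input_tokens=500, output_tokens=8_000)
print(f"${cost:.3f}")  # → $0.244, inside the quoted $0.04–$0.35 range
```

Because output tokens dominate the rate card, per-image cost under this scheme is driven almost entirely by output size, which is consistent with the quoted spread depending on resolution and prompt complexity.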
The knowledge cutoff is December 2025, which introduces a practical boundary: the model cannot accurately render events, people, or products that emerged after that date without supplementing its internal knowledge with live web search.
The model’s safety architecture includes content filtering, C2PA metadata for provenance, and what OpenAI described in the press briefing as ongoing monitoring, a point the company was notably emphatic about, given the growing regulatory scrutiny of synthetic media and the use of AI image generators in deepfakes, scams, and non-consensual imagery.
The most consequential question Images 2.0 raises is not about quality. The technical gap between AI-generated and human-created imagery has been narrowing for years; this model narrows it further.
The question is about what happens when the tool is no longer a novelty but infrastructure, when image generation is a default capability of every coding environment, every chat interface, and every enterprise productivity suite, and when the distinction between “designed by a person” and “generated by a prompt” becomes something only metadata can verify.
OpenAI, for its part, appears to be betting that the answer is scale: more images, faster, better, cheaper, everywhere. When we first covered DALL-E five years ago, the model’s outputs were fascinating oddities. Now they are production assets.
The era in which AI-generated images were obviously AI-generated is over. What comes next depends on whether the guardrails can keep pace with the capability.
Lots of smart glasses have AI bots inside them now. The one in L’Atitude 52°N’s glasses is called Goya, named after Francisco Goya, the famous Spanish artist who painted renowned masterpieces of romanticism.
CEO and founder Gary Chen, who has worked on wearable devices for companies like Oppo, OnePlus, and HTC, says his company’s glasses are focused on travelers, with AI features that act like a tour guide and talk about all the paintings in famous museums.
“Basically, you can say, ‘Hey, Goya, what is the story about Mona Lisa?’” Chen says. “You can ask anything and, with your permission, they will take a photo to analyze what’s in front of you.”
I ask if you could quiz it about perhaps the most famous Goya painting, the terrifying, Gothic horror-esque image of Saturn devouring his own son.
“Yes, yes,” Chen says. “It can also give you some recommendations about restaurants.”
Berlin-based L’Atitude 52°N is a new player in the smart glasses space, selling its first pairs on Kickstarter in September 2025, where the campaign surpassed its funding goal and raised more than $400,000. There have been some bumps since then, as shipments were delayed from an originally announced release date in February 2026, and one model in development was scrapped outright. Now, L’Atitude 52°N has announced an official release date for its smart glasses.
Preorders for one model, called Berlin, start on May 19. The glasses actually go on sale on May 26. This might be a disappointment for Kickstarter backers, as the most recent official update from the campaign came in March and said shipping would begin on April 15 for Berlin units and June 7 for the second model, called Milan. L’Atitude 52°N still hasn’t set an official launch date for the Milan, except to say that it will be “arriving in the second quarter of 2026.”
The Berlin glasses cost $399. Add another $50 for the photochromic lenses. There is one very big catch: The AI features enabled on the device will only work for 12 months, which L’Atitude 52°N calls an “AI feature trial.” After that, customers have to pay for a subscription service, or will be limited to the base features, like playing music and capturing media.
How much will that subscription service cost? Chen says he doesn’t know.
In January, Leica’s Chairman of the Supervisory Board, Dr Andreas Kaufmann, revealed that “we’re also developing our own sensor again” when referring to future development of the Leica M-system. He made the comment in the German-language ‘Leica Enthusiast Podcast’, according to a report by PetaPixel.
We now have a clearer picture of how that might work, following Leica’s announcement this week that it’s in a strategic partnership with Gpixel — a prominent Chinese sensor manufacturer with BSI, stacked and global-shutter full-frame sensors in its product portfolio.
This means Leica could ditch the ‘off-the-shelf’ Sony sensor tech used by its latest cameras, such as the full-frame 61MP unit found in the M11 series, in favor of its own tailor-made type for future models.
The first Leica M11 rangefinder was released in January 2022, but the series still feels fresh considering that the M11-P and the screen-less M11-D models have followed in 2023 and 2024 respectively, and last year’s polarizing M-EV1 uses the same Sony sensor too.
That hasn’t stopped Leica M12 rumors from circulating for quite some time though, which is to be expected given the original M11 was released over four years ago.
This most recent announcement is only going to pour fuel onto the Leica M12 rumor fire. Dr Kaufmann said, “I am really happy and proud that our long-term cooperation with Gpixel will result in a true Leica sensor, incorporating the best ingredients of engineering between Wetzlar, Antwerp and Changchun”, referring to Leica and Gpixel HQs, as well as Gpixel’s Europe base in Belgium.
What does the future look like for the next generation of Leica cameras?
Leica’s M11 series of rangefinders use a 61MP full-frame Sony-made sensor. The next generation could feature a bespoke sensor instead, which could have a big impact on image quality (Image credit: Future)
Gpixel is a much smaller player than Sony in the sensor-making game, but it has an impressive portfolio nonetheless.
You can forget looking at Gpixel’s portfolio for clues, though, because Leica says the next true Leica sensor for next-generation cameras will be an entirely new creation: “The partnership focuses on jointly engineering a bespoke image sensor optimized for Leica’s rigorous imaging standards, enabling unprecedented levels of image quality, dynamic range, color fidelity, and low-light performance across future Leica products.”
There’s no indication of how far down the line the development of this new sensor is, when we can expect to see it in a camera, or which future cameras will be the first to enjoy it. There’s speculation based on the current Leica camera lineup and which models are due an upgrade, though, chief among them being a successor to the Leica M11.
Beyond a potential Leica M12, there could be a new high-performance Leica SL series mirrorless camera, or even an entirely new model given Leica’s recent form — the M-EV1 was the first ‘rangefinder’ in the 70-year history of Leica’s M-mount to feature an EVF.
Casting a wider glance, this is another situation where a Chinese company could have a big impact in the photo industry, following huge strides by lens makers such as Viltrox, and leading smartphones such as the Oppo Find X9 Ultra – the most versatile camera phone available today.
What’s the next Leica camera you’d like to see, and what sort of image quality improvements would you like the new sensor to deliver? Let me know in the comments below!
The Turkish parliament has voted through a bill that would ban all children under the age of 15 from using social media. As part of the legislation, social media platforms would be required to enforce age-verification measures on their apps, provide parental control tools, and react more quickly to harmful content being posted.
As the Associated Press reports, lawmakers passed the bill in the wake of two deadly school shootings in Turkey, after which police detained 162 people accused of sharing footage of the tragedies online.
Turkey’s President Recep Tayyip Erdogan now has 15 days to sign the bill for it to become law. He reportedly described social media platforms as “cesspools” in a televised address to the nation.
As well as the major social media platforms, AP reports that online gaming companies would also have to implement their own restrictions on minors, with potential punishments including bandwidth reductions and financial penalties.
This isn’t the first time Turkey has locked horns with social media and online gaming platforms. Instagram was blocked in the country before, back in 2024, over a dispute about the posting of Hamas-related content. Access was restored around a week later, but in the same period Turkey also blocked Roblox over reports of sexual content accused of being exploitative of children. At the time, a Turkish official also named the “promotion of homosexuality” as one reason for the ban.
Turkey has also temporarily banned Twitter (now called X) on several occasions, most notably after 2023’s devastating earthquakes, though it was not clear at the time why the government moved to block the platform.
The country’s lawmakers moving to ban under-15s from accessing social media is part of an emerging trend in Europe and across the globe. Several other countries have recently introduced similar legislation of their own, following Australia becoming the first country in the world to ban children under 16 from social media last year. The UK has since brought in tighter restrictions too.
Volvo’s parent company has launched a new electric sedan in China that hits a familiar sore spot for US car shoppers.
The Geely Galaxy A7 EV pairs a clean, mainstream shape with a claimed 550km of CLTC range, and it enters the market at a price that still looks strikingly low by Western EV standards. It also appears set to stay far away from US dealerships.
That low headline number needs some caution, though. Car News China reports a cheaper entry point, but the EV trims actually launched start higher, at 112,800 yuan, or about $16,530, rising to 119,800 yuan. That is still aggressive pricing for a sedan of this size, just not quite the jaw-dropping bargain the earliest figure implied.
Cheap price, messy rollout
Once you get past the pricing confusion, the core package looks solid. The A7 EV uses a 58.05 kWh LFP battery and a front-mounted 160 kW motor, with Geely claiming 550km on the CLTC cycle. Reports say there’s a smaller-battery version, so there’s still some room for Geely to clear up how broad the lineup may become.
The rest of the car sounds more mature than bargain-bin. The EV gets restrained exterior styling, a 14.6-inch touchscreen, a digital instrument display, and an interior layout that reads like normal family transport instead of a stripped-down cost cutter. That matters because the real appeal here is not novelty. It is normality at a low price.
Why this one won’t reach you
For American readers, the frustrating part is how familiar this story has become.
China keeps producing lower-cost EVs that look usable and complete, while the US market rarely gets anything close to this price in a new electric sedan.
Nothing tied to the A7 EV points to a US launch, so this one looks like another car Americans will only watch from afar.
What happens before this goes on sale
The next question is whether the EV version can help revive momentum for the wider Galaxy A7 line.
Geely delivered 15,230 A7 units in China in the first quarter of 2026, but that total was down 59.4% from the prior quarter.
If this EV lands with buyers, it will matter as more than a fresh trim. It will show how quickly China’s lower-cost EV market is sharpening up.
If you’ve ever signed up for an app and then spent the next five minutes hunting for a six-digit code buried in your inbox, you know how painful the process is. I especially despise the magic sign-up link that websites send, as they sometimes fail to work if my default browser isn’t Google Chrome.
Thankfully, Google is fixing that with a new verified email credential for Android, and it’s a genuinely smart solution.
So what’s actually broken with OTPs?
The humble OTP has been the backbone of email verification forever, but it comes with real problems. You leave the app, open your inbox, find the email, copy the code, and come back.
It’s a long process that not only hurts consumers but also app developers. The number of steps required may cause a user to leave the app mid-sign-up, meaning the app loses potential users before they even try it.
iOS addressed this by letting users sign in directly with their Apple account. More recently, it also added a feature to autofill OTPs from emails, just as Android supports OTP autofill from messages.
Now, Google is also creating a seamless signup process that doesn’t require users to jump between apps.
How does the new system work?
Google now issues a cryptographically verified email credential directly to Android devices. When an app needs to confirm your email, it can request that credential through the Credential Manager API.
A small prompt appears on screen showing what information is being requested. You tap to confirm, and the app gets your verified email. No switching apps, no codes, no delay.
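Google hasn't published the article's underlying request schema here, but based on the existing androidx Credential Manager APIs, the flow an app developer wires up might look roughly like this sketch. The Credential Manager calls are real androidx classes; the request JSON payload and the function name are placeholder assumptions, not the actual verified-email schema:

```kotlin
// Illustrative sketch only. The Credential Manager classes below exist in
// androidx.credentials; the request JSON contents are hypothetical.
import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.DigitalCredential
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetDigitalCredentialOption

suspend fun requestVerifiedEmail(context: Context): String? {
    val credentialManager = CredentialManager.create(context)

    // Hypothetical payload: the real verified-email request format may differ.
    val requestJson = """{"requests":[{"protocol":"openid4vp","data":{}}]}"""

    val request = GetCredentialRequest(
        credentialOptions = listOf(GetDigitalCredentialOption(requestJson))
    )

    // Android shows the confirmation prompt described above; once the user
    // taps to approve, the app receives the signed credential.
    val response = credentialManager.getCredential(context, request)
    val credential = response.credential as? DigitalCredential
    return credential?.credentialJson // carries the verified email claim
}
```

The point of the design is visible even in the sketch: the app never sees an inbox or a code, only a signed assertion handed over by the platform after an on-screen confirmation.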
Google recommends pairing this with passkey creation, so the first sign-up becomes the last time a user has to do anything manual.
The same credential can also be used for account recovery and for re-authenticating sensitive actions, including settings changes, profile updates, and more.
The best part is that the new feature supports Android 9 and later devices, so you don’t need the best new Android smartphones to enjoy this quality-of-life improvement.
Are there any restrictions?
There are a few restrictions. The feature currently works only with regular consumer Google Accounts, not Workspace accounts. It also only works with Gmail accounts, and not with third-party email accounts that you might have used to create your Google account.
OpenAI introduced a new paradigm and product today that is likely to have huge implications for enterprises seeking to adopt and control fleets of AI agent workers.
Called “Workspace Agents,” OpenAI’s new offering essentially allows users on its ChatGPT Business ($20 per user per month) and variably priced Enterprise, Edu and Teachers subscription plans to design or select from pre-existing agent templates that can take on work tasks across third-party apps and data sources including Slack, Google Drive, Microsoft apps, Salesforce, Notion, Atlassian Rovo, and other popular enterprise applications.
Put simply: these agents can be created and accessed from ChatGPT, but users can also add them to third-party apps like Slack, communicate with them across disparate channels, and ask them to use information from the channel they’re in and from other third-party tools and apps — and the agents will go off and do work like drafting emails to the entire team or selected members, pulling data, and making presentations.
Human users can trust that the agent will manage all this complexity and complete the task as requested, even if the user who requested it leaves.
It’s the end of “babysitting” agents and the start of letting them go off and get shit done for your business — according to your defined business processes and permissions, of course.
The product experience appears centered on the Agents tab in the ChatGPT sidebar, where teams can discover and manage shared agents.
This functions as a kind of team directory: a place where agents built by coworkers can be reused across a workspace. The broader idea is that AI becomes less of an individual productivity trick and more of a shared organizational resource.
In this sense, OpenAI is targeting one of office work’s oldest pain points: the handoff between people, systems, and steps in a process.
OpenAI says workspace agents will be free for the next two weeks, until May 6, 2026, after which credit-based pricing will begin. The company also says more capabilities are on the way, including new triggers to start work automatically, better dashboards, more ways for agents to take action across business tools, and support for workspace agents in its AI code generation app, Codex.
The most significant shift in this announcement is the move away from purely session-based interaction. Workspace agents are powered by Codex — the cloud-based, partially open-source AI coding harness that OpenAI has been aggressively expanding in 2026 — which gives them access to a workspace for files, code, tools, and memory.
OpenAI says the agents can do far more than answer a prompt. They can write or run code, use connected apps, remember what they have learned, and continue work across multiple steps.
That description lines up closely with the capabilities OpenAI shipped into Codex just six days ago, including background computer use, more than 90 new plugins spanning tools like Atlassian Rovo, CircleCI, GitLab, Microsoft Suite, Neon by Databricks, and Render, plus image generation, persistent memory, and the ability to schedule future work and wake up on its own to continue across days or weeks.
Workspace agents inherit that plumbing. When one pulls a Friday metrics report, it is effectively spinning up a Codex cloud session with the right tools attached, running code to fetch and transform data, rendering charts, writing the narrative, and persisting what it learned for next week.
When that same agent is deployed to a Slack channel, it is a Codex instance listening for mentions and threading its work back in.
This is the technical decision enterprise buyers should focus on. Building an agent on a code-execution substrate rather than a pure LLM-call-and-response loop is what gives workspace agents the ability to do real work — transforming a CSV, reconciling two systems of record, generating a chart that is actually correct — rather than describing what the work would look like.
Persistence and scheduling
In earlier AI assistant models, progress paused when the user stopped interacting. Workspace agents change that by running in the cloud and supporting long-running workflows. Teams can also set them to run on a schedule.
That means a recurring reporting agent can pull data on a set cadence, generate charts and summaries, and share the results with a team without anyone manually kicking off the process.
Here at VentureBeat, we analyze story traffic and user return rate on a weekly basis — exactly the kind of recurring, multi-step, multi-source task that could theoretically be automated with a single workspace agent. Any enterprise with a weekly reporting rhythm pulling from dynamic data sources is likely to find a use for these agents.
Agents also retain memory across runs. OpenAI says they can be guided and corrected in conversation, so they improve the more a team uses them.
Over time they start to reflect how a team actually works — its processes, its standards, its preferred ways of handling recurring jobs — which is a meaningfully different proposition from the static instruction-set GPTs that preceded them.
The integrated ecosystem
OpenAI’s claim is that agents should gather information and take action where work already happens, rather than forcing teams into a separate interface. That point becomes clearest in the Slack examples. OpenAI’s launch materials show a product-feedback agent operating inside a channel named #user-insights, answering a question about recent mobile-app feedback with a themed summary pulled from multiple sources.
The company’s demo lineup walks through a sample team directory of agents: Spark for lead qualification and follow-up, Slate for software-request review, Tally for metrics reporting, Scout for product feedback routing, Trove for third-party vendor risk, and Angle for marketing and web content.
OpenAI also shared more functional examples its own teams use internally — a Software Reviewer that checks employee requests against approved-tools policy and files IT tickets; an accounting agent that prepares parts of month-end close including journal entries, balance-sheet reconciliations, and variance analysis, with workpapers containing underlying inputs and control totals for review; and a Slack agent used by the product team that answers employee questions, links relevant documentation, and files tickets when it surfaces a new issue.
In a sense, it is a continuation of the philosophy OpenAI espoused for individuals with last week’s Codex desktop release: the agent joins the workflow where work is already happening, draws in context from the surrounding apps, takes action where permitted, and keeps moving.
From GPTs to a broader agent push
Workspace agents are not a standalone launch. They sit inside a roughly 12-month arc in which OpenAI has been systematically rebuilding ChatGPT, the API, and the developer platform around agents.
Workspace agents are explicitly positioned by OpenAI as an evolution of its custom GPTs, introduced in late 2023, which gave users a way to create customized versions of ChatGPT for particular roles and use cases.
However, OpenAI now says it will deprecate the custom GPT standard for organizations at a yet-to-be-determined future date, and will require Business, Enterprise, Edu and Teachers users to convert their GPTs into workspace agents.
Individuals who have made custom GPTs can continue using them for the foreseeable future, according to our sources at the company.
In October 2025, OpenAI introduced AgentKit, a developer-focused suite that includes Agent Builder, a Connector Registry, and ChatKit for building, deploying, and optimizing agents.
In February 2026, it introduced Frontier, an enterprise platform focused on helping organizations manage AI coworkers with shared business context, execution environments, evaluation, and permissions.
Workspace agents arrive as the no-code, in-product entry point that sits on top of that stack — even if OpenAI does not explicitly describe the architectural relationship in its materials.
The subtext across all three launches is the same: OpenAI has decided that the future of ChatGPT-for-work is fleets of permissioned agents, not single chat windows — and that GPTs, its first attempt at letting businesses customize ChatGPT, were not enough.
Governance and enterprise safeguards
Because workspace agents can act across business systems, OpenAI puts heavy emphasis on governance. Admins can control who is allowed to build, run, and publish agents, and which tools, apps, and actions those agents can reach.
The role-based controls are more granular than the ones most custom-GPT rollouts ever had: admins can toggle, per role, whether members can browse and run agents, whether they can build them, whether they can publish to the workspace directory, and — separately — whether they can publish agents that authenticate using personal credentials.
That last setting is the risky case, and OpenAI explicitly recommends keeping it narrowly scoped.
Authentication itself comes in two flavors, and the choice has real consequences. In end-user account mode, each person who runs the agent authenticates with their own credentials, so the agent only ever sees what that individual is allowed to see.
In agent-owned account mode, the agent uses a single shared connection so users don’t have to authenticate at run time. OpenAI’s documentation strongly recommends service accounts rather than personal accounts for the shared case, and flags the data-exfiltration risk of publishing an agent that authenticates as its creator.
Write actions — sending email, editing a spreadsheet, posting a message, filing a ticket — default to “Always ask,” requiring human approval before the agent executes.
Builders can relax specific actions to “Never ask” or configure a custom approval policy, but the default posture is human-in-the-loop.
OpenAI also claims built-in safeguards against prompt-injection attacks, where malicious content in a document or web page tries to hijack an agent. The claim is welcome but not yet proven in the wild.
For organizations that want deeper visibility, OpenAI says its Compliance API surfaces every agent’s configuration, updates, and run history.
Admins can suspend agents on the fly, and OpenAI says an admin-console view of every agent built across the organization, with usage patterns and connected data sources, is coming soon.
Two caveats worth flagging for security-sensitive buyers: workspace agents are off by default at launch for ChatGPT Enterprise workspaces pending admin enablement, and they are not available at all to Enterprise customers using Enterprise Key Management (EKM).
Analytics and early customer signal
OpenAI also ships an analytics dashboard aimed at helping teams understand how their agents are being used. Screenshots in the launch materials show measures like total runs, unique users, and an activity feed of recent runs, including one by a user named Ethan Rowe completing a run in a #b2b-sales channel.
The mockup detail supports OpenAI’s broader point: the company wants organizations to measure not just whether agents exist, but whether they are being used.
The clearest early-adopter signal in the launch itself comes from Rippling. Ankur Bhatt, who leads AI Engineering at the HR platform, says workspace agents shortened the traditional development cycle enough that a sales consultant was able to build a sales agent without an engineering team. “It researches accounts, summarizes Gong calls, and posts deal briefs directly into the team’s Slack room,” Bhatt says. “What used to take reps 5–6 hours a week now runs automatically in the background on every deal.”
OpenAI’s announcement names SoftBank Corp., Better Mortgage, BBVA, and Hibob as additional early testers.
The era of the digital coworker
Workspace agents do not land in a vacuum. They land in the middle of a broader OpenAI push — through AgentKit, through Frontier, through the Codex overhaul — to make agents more persistent, more connected, and more useful inside real organizational workflows.
They also land in a deeply crowded field: Microsoft Copilot Studio is wired into the Microsoft 365 base, Google is pushing Agentspace, Salesforce has rebuilt itself as agent infrastructure with Agentforce, and Anthropic recently introduced Claude Managed Agents — all different flavors of the same idea: agents that cut across your apps and tools, take repeated actions on schedules, and retain some degree of memory, context, permissions, and policies.
But this launch matters because it turns OpenAI’s strategy into something concrete for the teams already paying for ChatGPT, and because it quietly retires the product those teams were most recently told to standardize on.
If workspace agents live up to the pitch — shared, reusable, scheduled, permissioned coworkers that follow approved processes and keep work moving when their human is offline — it would mark a meaningful change in what workplace software does. Less passive software waiting for input, more active systems helping teams coordinate, execute, and move faster together.
The era of the digital coworker has begun. And, on OpenAI’s plans at least, the era of the custom GPT is ending.
‘It ultimately made people realize that music was worth paying for’: Spotify’s Sten Garmark on how the streaming giant created an entirely new business model, and its mission to convince users that ‘there was something better than free’
Over the past couple of decades we’ve witnessed a whirlwind of cultural changes in the music industry, but also major changes in terms of how we find and listen to music. And there’s arguably one entity that has contributed to these shifts more than any other: Spotify — which was founded 20 years ago today (April 23). Feel old yet? I sure do.
For many music lovers out there, myself included, Spotify was their introduction to music streaming, and over the last 20 years it’s climbed to the top of the ladder, amassing over 750 million users and cementing its position as one of the best music streaming services — and in the eyes of many, the daddy of them all.
However, it’s likely that few of today’s users know much about the company’s early days. Someone who knows more than most is Sten Garmark, Spotify’s Global Head of Consumer Experience, who’s been integral to its evolution since 2011.
To celebrate this milestone, Garmark and I sat down for an in-depth discussion, reflecting on Spotify’s impact on music over the last 20 years and on what it took to craft a strong global brand, and reminiscing about its most iconic product features.
Let’s go back to the beginning
Daniel Ek (left) founded Spotify with Martin Lorentzon (not pictured) in April 2006 (Image credit: Getty Images / Kevin Mazur)
As with so many of today’s tech behemoths, from Apple to Amazon, Spotify had to start small, and Garmark remembers the unpredictable nature of the industry at the time it was founded. “The music industry was in free-fall, and it was kind of a dire time,” he tells me. “So the challenge in the beginning was to turn this around.”
In an age where piracy was rife, this became the catalyst for founders Daniel Ek and Martin Lorentzon to kick-start what was then a small business. “Daniel and Martin’s wild idea was trying to compete with piracy by forming partnerships with the industry, with labels and publishers, to create an entirely new business model for music to convince users that, actually, there was something better than free,” Garmark continues. “It ultimately also made people realize that music was worth paying for.”
These were the days before modern smartphones really took off — the first iPhone was unveiled in January 2007 — which Garmark believes is key to understanding Spotify’s early struggles to scale its business.
“This was a world of PCs and the iPod,” he says. “The key innovation in the beginning was the ‘freemium’ model, and us believing that music was worth paying for, which most people didn’t at the time. Freemium was the way to get there, to give away a really fantastic free product, and we did that on the PC.” However, when the smartphone revolution kicked off, this opened up a gap in the market, and Spotify immediately aimed for it.
Garmark has played a major role in developing Spotify’s mobile experience, and he says of those early years, before he came on board: “After the iPhone launched, the company said ‘oh, go mobile’, then you pay a subscription. So that was the business model, mobile just took off and PC sales weren’t the thing, so we had to totally reinvent ourselves to really reimagine how the whole business model worked.”
It all started with the playlist
Over the years Spotify has been building on its playlist feature, now offering tools such as Prompted Playlists, Blends, AI Playlists, and more (Image credit: Future)
One area where Spotify continues to have an advantage over its rivals is its range of addictive product features, which have themselves contributed to cultural shifts in music consumption and discovery. Beloved tools like Spotify Wrapped and Discover Weekly have hooked subscribers and reeled them in, but for Garmark, there’s one feature that’s truly iconic: the playlist.
“It’s such a basic feature that it almost feels like you just assume it’s there,” he says. “If you go back to before Spotify, a playlist was something you had on your own MP3, but you couldn’t share it. I think [the playlist] is the core innovation, because we built so much on top of it.”
When you think about it, the basic foundation of the playlist has served as the basis for pretty much all of Spotify’s unique product features over the last 20 years. It paved the way for Collaborative playlists and Prompted playlists, and even for fun tools like Daylist, but Spotify also prides itself on its editorial playlists, curated collections of songs put together by Spotify employees who Garmark refers to as “the taste-makers”. But not all of Spotify’s ideas for new features have seen the light of day.
“There are so many”, says Garmark. “There are two ways to do product development. You look at what other people do and copy that. The other thing is you imagine things that you’d want in the world, but which don’t exist. We are firmly the latter. It’s our job to make bets and come up with ideas and believe that, if we build this thing, people are going to love it. They might not always be asking for it, but once they’ve experienced it, they’re never going to look back.”
Music discovery is forever
(Image credit: Future)
There’s no doubt that the playlist has been a powerful asset to Spotify’s user experience, but I’d argue that the algorithm has been a monumental development, shaping how we find new music, and also how it finds us. The issue of tastes becoming homogenized remains a concern for many and is still a hot topic — but Garmark believes otherwise.
“I think it’s totally the other way around,” he tells me. “You have a new user come in on Spotify, and if they’re a little bit older, they tend to just listen to the music that they fell in love with in their formative years. Then things get frozen in time, and that’s what people tend to listen to for their whole life. But it also means that they fall out of love with music in a way, and discovery is so important to kind of keep falling in love with music over again.
“The algorithms that we have really enable that. What we see with our users coming in as we follow them on their journeys, is that the variation of what they listen to isn’t wrong. When they come in they explore more, and they get to discover more, and it never ends. It’s this wonderful machinery of just discovering more wonderful, talented artists that people just keep falling in love with.”
Despite Spotify’s vast list of achievements over the years, in terms of both innovating and growing its user base, it feels like, for Garmark, this is just the beginning, and my senses tell me that it has a lot more tricks up its sleeve for the next 20 years and beyond. And sure enough, Garmark leaves me with a tease: “At Spotify, we are full of ideas.”