
Lume Cube Edge Light Go Review (2026): Versatile, Portable

The base of the lamp has two slider toggles. One adjusts the warmth, from cold white light all the way to red; the other adjusts the intensity, from ultra-bright down to a glareless glow. A hard tap on either toggle skips ahead, while holding it down shifts the setting slowly, slowly enough that at first I sometimes questioned whether anything was happening.

The maximum brightness is 1,000 lumens—the approximate intensity of a 75-watt incandescent bulb. At this brightness, the battery lasts about five hours. At a lower intensity, this can extend to as long as a dozen hours.

Red Shift

Photograph: Matthew Korfhage

There’s an added feature I have come to appreciate at night, which is the red-light mode. There’s little evidence that blue light from your little smartphone is keeping you awake at night. But numerous studies do show that blue light wavelengths can affect melatonin levels and thus your body’s circadian rhythm, while red light doesn’t do this.

Red light therapy is, of course, the province of TikTok as much as science—a field where wild exaggerations live alongside legitimate uses and benefits. For every sleep study showing that red light is superior to blue light when it comes to melatonin levels, there’s another showing that red light is associated with “negative emotions” before bed.

So I can only offer my own experience, which is that Edge Light Go’s red reading light offers me a pleasant liminal space between awake time and sleepy time, one not offered by a basic nightstand lamp. It allows me to sort of bask in a darkroom space that still lets me see and read, and drift off a little easier.

If I fall asleep, the light has an automatic 25-minute shut-off, which means I won’t do what I far too often do, which is drift off while reading and then wake up, alarmed, to a room filled with bright light in the middle of the night.

Caveats and Quirks

Photograph: Matthew Korfhage

This said, for all its portable virtues, the Edge Light Go’s base isn’t heavy enough to keep the lamp from tipping over when I bend it forward from its lowest hinge. That can be an annoyance when using the lamp as a reading light on a bedside table or the arm of a couch.


Pentagon selects three microreactor companies for Air Force bases as military nuclear programme advances toward 2030

Summary: The Pentagon has narrowed its Advanced Nuclear Power for Installations (ANPI) programme from eight companies to three, advancing microreactor deployment at Buckley Space Force Base (Colorado) and Malmstrom Air Force Base (Montana) by 2030. The original eight vendors were BWXT, Oklo, X-energy, Kairos Power, Radiant, General Atomics, Westinghouse, and Antares. The commercially owned reactor model, backed by Executive Order 14299 and $125 million in Congressional funding, addresses military grid vulnerability while serving as a proving ground for reactors that could also power AI data centres.

The Pentagon has narrowed the field for its programme to install microreactors at US Air Force bases, selecting three companies from an original pool of eight to advance toward deployment, Bloomberg reported on Tuesday. The down-selection is the most concrete step yet in the Advanced Nuclear Power for Installations programme, known as ANPI, a joint effort between the Defense Innovation Unit, the Air Force, and the Army that aims to make military bases energy-independent by replacing their reliance on a civilian power grid that is increasingly vulnerable to cyberattacks, extreme weather, and the cascading demands of AI-driven energy consumption.

The programme began in April 2025, when the DIU selected eight companies to develop microreactor proposals: Antares Nuclear Energy, BWXT Advanced Technologies, General Atomics Electromagnetic Systems, Kairos Power, Oklo, Radiant Industries, Westinghouse Electric Company, and X-energy. Each was tasked with designing commercially owned and operated reactors that could be built on military land, licensed through the Nuclear Regulatory Commission, and maintained by the vendor throughout their operational life. The military would buy the electricity without owning the reactor, a model designed to accelerate deployment by sidestepping the decades-long procurement cycles that have historically paralysed defence infrastructure projects.

Why Air Force bases need their own power plants

The Department of Defense consumes more than 30 terawatt-hours of electricity annually across more than 500 installations, making it the single largest energy consumer in the US government. The overwhelming majority of that power comes from the civilian grid. That dependence is now treated as a strategic vulnerability. Cyberattacks on US energy infrastructure have increased by roughly 70% in recent years. The grid itself is under growing strain from data centre construction, with the International Energy Agency projecting that data centre electricity consumption will exceed 1,000 terawatt-hours globally by the end of 2026. Military bases that host missile fields, space surveillance operations, and nuclear command infrastructure cannot afford to compete with AI training clusters for grid capacity.

Two Air Force installations have been selected as the first deployment sites. Buckley Space Force Base in Aurora, Colorado, hosts the Aerospace Data Facility, one of the Department of Defense’s primary satellite ground stations. Malmstrom Air Force Base in Great Falls, Montana, oversees 150 Minuteman III intercontinental ballistic missiles spread across 13,800 square miles of Montana prairie. Both bases require uninterrupted power for operations that are, by definition, existential. Nancy Balkus, the Deputy Assistant Secretary of the Air Force for operational energy, has said that energy security at these installations is not an efficiency question but a readiness question. The target is operational microreactors at both sites by 2030.

The technology

Microreactors are nuclear fission reactors that typically produce between one and 20 megawatts of electrical power, small enough to fit on a few truck trailers and large enough to power a military base or a small data centre. They use advanced fuel forms, most commonly TRISO (tristructural isotropic) particles encased in ceramic and graphite shells that can withstand extreme temperatures without melting down. Several of the ANPI candidates use high-assay low-enriched uranium, or HALEU, which is enriched to between 5% and 20% uranium-235, higher than conventional reactor fuel but well below weapons grade.

The designs vary significantly. BWXT’s Project Pele, developed separately for the Army, is a 1.5-megawatt transportable reactor that completed initial testing at Idaho National Laboratory and uses TRISO fuel with a gas-cooled design. In February 2026, the Pentagon airlifted a five-megawatt microreactor prototype from California to Utah, the first military nuclear airlift, demonstrating the transportability that makes these systems attractive for expeditionary and remote base operations. Oklo, whose chairman is OpenAI chief executive Sam Altman, designs a compact fast reactor called the Aurora that uses metallic fuel and targets both military and commercial applications. X-energy, which went public with Amazon’s backing, is developing the Xe-100, an 80-megawatt high-temperature gas-cooled reactor that uses TRISO-X fuel pebbles. Kairos Power is building a fluoride salt-cooled reactor. Radiant Industries, founded by former SpaceX engineers, is developing a portable one-megawatt reactor designed for rapid deployment.

Only NuScale Power has received full design certification from the Nuclear Regulatory Commission for a small modular reactor, but NuScale’s design is a 77-megawatt light-water reactor, far larger than what ANPI requires. The ANPI programme’s commercially owned model means that vendors will need to secure their own NRC licences for reactors sited on military land, a regulatory path that has not been tested at this scale. The Atomic Energy Act provides a military exemption for reactors operated by the armed forces, but the ANPI model explicitly uses commercial operators, which means NRC jurisdiction applies.

The policy architecture

The programme sits within a broader policy push that has acquired unusual bipartisan momentum. Executive Order 14299 explicitly links nuclear power to AI infrastructure at military installations, directing federal agencies to accelerate the siting and permitting of advanced reactors. The ADVANCE Act, signed into law with an 82-to-14 Senate vote, streamlines NRC licensing for advanced reactor designs. Congress has appropriated $125 million for military microreactor development. The Army’s separate Project Janus programme is evaluating nine additional bases for microreactor deployment.

The convergence of military energy security and commercial AI infrastructure is not coincidental. The Department of Energy has identified 16 federal sites, many adjacent to existing nuclear facilities, as candidates for data centre construction. Nuclear-powered AI data centres are attracting dedicated venture capital, with Valar Atomics raising $450 million at a $2 billion valuation to build small modular reactors purpose-built for AI workloads. The same microreactors that power a missile field in Montana could, in a commercially licensed configuration, power an AI training cluster in Texas. The ANPI programme is a military procurement initiative, but it is also a proving ground for the reactors that the technology industry hopes will solve its energy problem.

What stands in the way

The 2030 deployment target is ambitious by nuclear standards. No advanced microreactor design has completed NRC licensing. HALEU fuel supply remains constrained, with Centrus Energy as the only domestic commercial producer and Russia historically the dominant global supplier, a dependency that sanctions have complicated. Community opposition to nuclear facilities, even small ones on existing military bases, has slowed previous projects. The cost economics of microreactors at the one-to-20-megawatt scale remain unproven in commercial operation, though the commercially owned model shifts that financial risk from the Department of Defense to the vendors.

The nuclear waste question also persists. Microreactors produce far less spent fuel than conventional power plants, but the United States still lacks a permanent repository for any nuclear waste. Advanced fuel forms like TRISO are more proliferation-resistant and easier to store than conventional spent fuel rods, but “easier” is relative in an industry where waste management has been a political impossibility for four decades.

The broader debate over nuclear power and AI has tended to focus on fusion, the technology that is always 20 years away, or on gigawatt-scale conventional plants that take a decade to build. Microreactors occupy a different niche: small enough to be manufactured in a factory rather than constructed on site, simple enough to operate with minimal staffing, and modular enough to scale by adding units rather than building larger. The military is betting that this niche is real. The down-selection from eight companies to three means the Pentagon has now seen enough proposals to decide which designs are credible and which are not. The three that remain have roughly four years to prove that a nuclear reactor can be as reliable, and as unremarkable, as the diesel generators that military bases currently keep for backup power. If they succeed, the implications extend well beyond the fence line of an Air Force base.


Meta will show parents the topics of their teens’ AI conversations

With countries banning social media for kids left and right, Meta is trying different things to convince parents that its platforms are safe for teens. In its latest effort, the company will start showing parents the topics their teens have discussed with Meta AI over the previous seven days.

“Parents will be able to see the topics their teen has been asking Meta AI about in [Facebook, Messenger or Instagram] over the past week,” Meta explained in a blog post. “Topics can range from School, Entertainment, and Lifestyle to Travel, Writing, and Health and Wellbeing, among others.”

For parents overseeing Meta’s teen accounts, the feature will appear in a new Insights tab within supervision, both in-app and on web. Parents can tap on a topic to see the different categories within each: for instance, sub-categories within Lifestyle include fashion, food and holidays, while fitness, physical health and mental health are part of the Health and Wellbeing topic.

Meta will allow parents to look at the conversation topics kids use when talking to an AI

Meta

Meta also worked with the Cyberbullying Research Center to develop what it calls “conversation starters”: open-ended prompts parents can use to talk with their teens about their experience with AI. Meta provides detail about what the questions are designed to address, and they can be found on the Family Center website or through a link in the new Insights tab.

Finally, Meta revealed more detail about its AI Wellbeing Expert Council, which will provide “ongoing input on our AI experience for teens.” It will be made up of three existing advisory groups as well as new members with special expertise in responsible and ethical AI, who are affiliated with the National Council of Suicide Prevention and multiple universities. It’s worth noting that Meta has a separate oversight board that deals with subjects ranging from AI to moderation.

Offloading moderation chores to busy parents appears to be par for the course for Meta these days. The company has recently cut back on the use of third-party vendors that help with content moderation, shifting responsibility instead to advanced AI systems, according to recent reports.

The dangers of AI for teens are among the reasons countries like Spain have moved to ban social media platforms for kids. One of the most recent and tragic cases was in Canada, where OpenAI’s ChatGPT provided a teen with specific details about how to carry out a school shooting. Another such case is under investigation in Florida, and AI chatbots have been implicated in multiple teen suicides as well.

In the US, the National Suicide Prevention Lifeline is 1-800-273-8255 or you can simply dial 988. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). Wikipedia maintains a list of crisis lines for people outside of those countries.


OpenAI’s new image model reasons before it draws

The new model reasons about composition, searches the web for context, generates up to eight coherent images from one prompt, and renders text in non-Latin scripts with near-flawless accuracy. It also took the number one spot on the Image Arena leaderboard within 12 hours of launch, by the largest margin ever recorded.


Two years ago, asking ChatGPT to generate a visual was like commissioning a poster from a sleep-deprived intern with a glue stick and a head injury. You’d ask for a clean design and get “leftovers creativity” splashed across the image, plus three new words that looked like they’d been invented during a minor software malfunction.

The images looked AI-generated in the way that has become a cultural shorthand for uncanny: almost right, conspicuously wrong, and instantly recognisable as synthetic.

The leap matters. Text rendering has been the persistent, embarrassing weakness of AI image generators since DALL-E first turned heads in January 2021, a model we covered at the time as a fascinating curiosity.

Images 2.0 claims approximately 99% accuracy in text rendering across any language and script, including Japanese, Korean, Chinese, Hindi, and Bengali. If that figure holds in independent testing, it closes the gap between “impressive AI demo” and “tool a graphic designer would actually use for production work.”

The architectural change that makes the model different, not just better, is what OpenAI calls “thinking capabilities.” Images 2.0 is the company’s first image model to integrate its o-series reasoning architecture.

Before generating a pixel, the model researches the prompt, plans the composition, reasons about spatial relationships between elements, and can search the web for real-time context.

It is, in OpenAI’s framing, not a rendering tool but a “visual thought partner.”

This is my cat transformed into a comic strip with ChatGPT.

In practice, this manifests in two access modes. Instant mode ships to all ChatGPT users, including free-tier accounts, and delivers the core quality improvements: better text, sharper editing, richer layouts.

Thinking mode, which enables web search, multi-image batching, and output verification, is restricted to Plus ($20/month), Pro ($200/month), Business, and Enterprise subscribers.

The distinction is commercially significant. The reasoning capabilities, where most of the quality premium lives, sit behind the paywall. Free users get better images; paying users get images the model has thought about.

The multi-image capability is the feature most likely to change professional workflows. A single prompt can now produce up to eight images that maintain character and object continuity across the set.

That means a designer can generate a family of social media assets, a children’s book sequence, or a series of storyboard frames from one instruction, with consistent visual identity throughout.

Previously, each image had to be prompted individually and stitched together manually. For marketing teams and content creators, that is a meaningful reduction in production friction.

The integration into Codex, OpenAI’s coding environment, is the strategically loaded move. Developers and designers can now generate UI mockups, prototypes, and visual assets inside the same agentic workspace they use for code, slides, and browser automation, using a single ChatGPT subscription.

The image model is no longer a standalone product; it is a capability embedded in OpenAI’s broader platform, competing not just with Midjourney and Google’s Nano Banana 2 on quality but with Canva and Figma on workflow integration.

The benchmark performance is striking. Within 12 hours of launch, Images 2.0 took the number one spot on the Image Arena leaderboard across every category, with a score of 1,512, a +242-point lead over the second-place model, Google’s Nano Banana 2. That is the largest lead ever recorded on the leaderboard.

For most of 2026, OpenAI and Google had been trading the top position within a tight margin; Images 2.0 broke away decisively. 

DALL-E 2 and DALL-E 3 are being deprecated and retired on 12 May 2026. GPT-Image-1.5, released in December 2025 as an intermediate upgrade, remains accessible via the API for legacy integrations but is no longer the default model.

OpenAI did not disclose the architecture of Images 2.0, describing it only as a “generalist model” or “GPT for images” and declining to specify whether it uses a diffusion, autoregressive, or hybrid approach. The API model identifier is gpt-image-2; the API is expected to open to developers in early May 2026.

Token-based pricing is $8 per million tokens for image input, $2 for cached input, and $30 for image output, with per-image costs typically ranging from $0.04 to $0.35 depending on prompt complexity and resolution. Output resolution reaches up to 2K.
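At those rates, per-image cost is simple arithmetic. The following is a minimal Python sketch using the prices quoted above; the token counts in the example request are hypothetical round numbers for illustration, not figures published by OpenAI.

```python
# Rates quoted above, in dollars per million tokens.
RATE_INPUT = 8.00    # image input
RATE_CACHED = 2.00   # cached input
RATE_OUTPUT = 30.00  # image output

def image_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one generation request."""
    return (input_tokens * RATE_INPUT
            + cached_tokens * RATE_CACHED
            + output_tokens * RATE_OUTPUT) / 1_000_000

# Hypothetical request: a text-only prompt producing one mid-resolution
# image that bills as ~4,000 output tokens.
cost = image_cost(input_tokens=0, cached_tokens=0, output_tokens=4_000)
print(f"${cost:.2f}")  # $0.12
```

A result of $0.12 sits comfortably inside the $0.04 to $0.35 per-image range the pricing implies; the spread comes almost entirely from how many output tokens a given resolution and prompt complexity consume.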

The knowledge cutoff is December 2025, which introduces a practical boundary: the model cannot accurately render events, people, or products that emerged after that date without supplementing its internal knowledge with live web search.

The model’s safety architecture includes content filtering, C2PA metadata for provenance, and what OpenAI described in the press briefing as ongoing monitoring, a point the company was notably emphatic about, given the growing regulatory scrutiny of synthetic media and the use of AI image generators in deepfakes, scams, and non-consensual imagery.

The most consequential question Images 2.0 raises is not about quality. The technical gap between AI-generated and human-created imagery has been narrowing for years; this model narrows it further.

The question is about what happens when the tool is no longer a novelty but infrastructure, when image generation is a default capability of every coding environment, every chat interface, and every enterprise productivity suite, and when the distinction between “designed by a person” and “generated by a prompt” becomes something only metadata can verify.

OpenAI, for its part, appears to be betting that the answer is scale: more images, faster, better, cheaper, everywhere. When we first covered DALL-E five years ago, the model’s outputs were fascinating oddities. Now they are production assets.

The era in which AI-generated images were obviously AI-generated is over. What comes next depends on whether the guardrails can keep pace with the capability.


These New Smart Glasses From Ex-OnePlus Engineers Have a Hidden Cost

Lots of smart glasses have AI bots inside them now. The one in L’Atitude 52°N’s glasses is called Goya, after Francisco Goya, the famous Spanish Romantic painter.

CEO and founder Gary Chen, who has worked on wearable devices for companies like Oppo, OnePlus, and HTC, says his company’s glasses are focused on travelers, with AI features that act like a tour guide and talk about all the paintings in famous museums.

“Basically, you can say, ‘Hey, Goya, what is the story about Mona Lisa?’” Chen says. “You can ask anything and, with your permission, they will take a photo to analyze what’s in front of you.”

I ask if you could quiz it about perhaps the most famous Goya painting, the terrifying, Gothic horror-esque image of Saturn devouring his own son.

“Yes, yes,” Chen says, “It can also give you some recommendations about restaurants.”

Berlin-based L’Atitude 52°N is a new player in the smart glasses space, selling its first pairs on Kickstarter in September 2025, where the campaign surpassed its funding goal and raised more than $400,000. There have been some bumps since then, as shipments were delayed from an originally announced release date in February 2026, and one model in development was scrapped outright. Now, L’Atitude 52°N has announced an official release date for its smart glasses.

Preorders for one model, called Berlin, start on May 19. The glasses actually go on sale on May 26. This might be a disappointment for Kickstarter backers, as the most recent official update from the campaign came in March and said shipping would begin on April 15 for Berlin units and June 7 for the second model, called Milan. L’Atitude 52°N still hasn’t set an official launch date for the Milan, except to say that it will be “arriving in the second quarter of 2026.”

The Berlin glasses cost $399. Add another $50 for photochromic lenses. There is one very big catch: The AI features enabled on the device will only work for 12 months, which L’Atitude 52°N calls an “AI feature trial.” After that, customers have to pay for a subscription service or be limited to the base features, like playing music and capturing media.

How much will that subscription service cost? Chen says he doesn’t know.


Leica Chairman says ‘a true Leica sensor’ is coming, and image quality may never be the same


  • Leica announced a strategic partnership with Chinese sensor maker Gpixel
  • Recent Leica cameras use Sony-made sensor tech
  • We can expect a new bespoke sensor for future models, possibly the rumored M12

There has been plenty going on behind the scenes at Leica recently, not least the potential sale of a controlling stake in the business, for approximately $1.2 billion, to Chinese and Swedish investors. Now, it looks set to change the bedrock of the tech inside its digital cameras.

In January, Leica’s Chairman of the Supervisory Board, Dr Andreas Kaufmann, revealed that “we’re also developing our own sensor again” when referring to future development of the Leica M-system. He made the comment in the German-language ‘Leica Enthusiast Podcast’, according to a report by PetaPixel.


Turkey wants to ban social media for kids under 15

The Turkish parliament has voted through a bill that would ban all children under the age of 15 from using social media. As part of the legislation, social media platforms would be required to enforce age-verification measures on their apps, provide parental control tools, and react more quickly to harmful content being posted.

Lawmakers reportedly passed the bill in the wake of two deadly school shootings in Turkey, after which police detained 162 people accused of sharing footage of the tragedies online.

Turkey’s President Recep Tayyip Erdogan now has 15 days to accept the bill in order for it to become law, after reportedly saying social media platforms had become “cesspools” in a televised address to the nation.

As well as the major social media platforms, AP reports that online gaming companies would also have to implement their own restrictions on minors, with potential punishments including bandwidth reductions and financial penalties.

This isn’t the first time Turkey has locked horns with social media and online gaming platforms. Instagram was blocked in the country before, back in 2024, over a dispute relating to the posting of Hamas-related content. Access was restored around a week later, but in the same period Turkey also blocked Roblox over reports of inappropriate sexual content said to be exploitative of children. At the time, a Turkish official also named the “promotion of homosexuality” as one reason for the ban.

Turkey has also temporarily banned Twitter (now called X) on several occasions, most notably after 2023’s devastating earthquakes, though it was not clear at the time why the government may have moved to block the social media platform.

The country’s lawmakers moving to ban under-15s from accessing social media is part of an emerging trend in Europe and across the globe. Several other countries have recently introduced similar legislation of their own, following Australia becoming the first country in the world to ban children under 16 from social media last year. The UK, too, has since moved toward tighter restrictions.


Volvo’s parent company just made a sleek $14,300 electric sedan that will elude US buyers

Volvo’s parent company has launched a new electric sedan in China that hits a familiar sore spot for US car shoppers.

The Geely Galaxy A7 EV pairs a clean, mainstream shape with a claimed 550km of CLTC range, and it enters the market at a price that still looks strikingly low by Western EV standards. It also appears set to stay far away from US dealerships.

That low headline number needs some caution though. Car News China reports a cheaper entry point, but the launched EV trims are higher, starting at 112,800 yuan, or about $16,530, and rising to 119,800 yuan. That is still aggressive pricing for a sedan of this size, just not quite the jaw-dropping bargain the earliest figure implied.

Cheap price, messy rollout

Once you get past the pricing confusion, the core package looks solid. The A7 EV uses a 58.05 kWh LFP battery and a front-mounted 160 kW motor, with Geely claiming 550km on the CLTC cycle. Reports say there’s a smaller-battery version, so there’s still some room for Geely to clear up how broad the lineup may become.

The rest of the car sounds more mature than bargain-bin. The EV gets restrained exterior styling, a 14.6-inch touchscreen, a digital instrument display, and an interior layout that reads like normal family transport instead of a stripped-down cost cutter. That matters because the real appeal here is not novelty. It is normality at a low price.

Why this one won’t reach you

For American readers, the frustrating part is how familiar this story has become.

China keeps producing lower-cost EVs that look usable and complete, while the US market rarely gets anything close to this price in a new electric sedan.

Nothing tied to the A7 EV points to a US launch, so this one looks like another car Americans will only watch from afar.

What happens before this goes on sale

The next question is whether the EV version can help revive momentum for the wider Galaxy A7 line.

Geely delivered 15,230 A7 units in China in the first quarter of 2026, but that total was down 59.4% from the prior quarter.

If this EV lands with buyers, it will matter as more than a fresh trim. It will show how quickly China’s lower-cost EV market is sharpening up.


Google is making it dramatically easier to sign in to apps without OTP or link hassles

If you’ve ever signed up for an app and then spent the next five minutes hunting for a six-digit code buried in your inbox, you know how painful the process is. I especially despise the magic sign-up link that websites send, as they sometimes fail to work if my default browser isn’t Google Chrome.

Thankfully, Google is fixing that with a new verified email credential for Android, and it’s a genuinely smart solution. 

So what’s actually broken with OTPs?

The humble OTP has been the backbone of email verification forever, but it comes with real problems. You leave the app, open your inbox, find the email, copy the code, and come back. 

It’s a long process that not only hurts consumers but also app developers. The number of steps required may cause a user to leave the app mid-sign-up, meaning the app loses potential users before they even try it.

iOS fixed this issue by directly allowing users to sign in via Apple account. Recently, it also added a feature to autofill OTPs from emails, just as Android supports OTP autofill from messages. 

Now, Google is also creating a seamless signup process that doesn’t require users to jump between apps. 

How does the new system work?

Google now issues a cryptographically verified email credential directly to Android devices. When an app needs to confirm your email, it can pull that credential through the Credential Manager API.

A small prompt appears on screen showing what information is being requested. You tap to confirm, and the app gets your verified email. No switching apps, no codes, no delay.

Google recommends pairing this with passkey creation, so the first sign-up becomes the last time a user has to do anything manual. 

The same can also be used for account recovery and re-authentication of sensitive actions, including setting changes, updating profile details, and more. 

The best part is that the new feature supports Android 9 and later devices, so you don’t need the best new Android smartphones to enjoy this quality-of-life improvement.

Are there any restrictions?

There are a few restrictions. The feature currently works only with regular consumer Google Accounts, not Workspace accounts. It also only works with Gmail accounts, and not with third-party email accounts that you might have used to create your Google account.


OpenAI unveils Workspace Agents, a successor to custom GPTs for enterprises that can plug directly into Slack, Salesforce and more


OpenAI introduced a new paradigm and product today that is likely to have huge implications for enterprises seeking to adopt and control fleets of AI agent workers.

Called “Workspace Agents,” OpenAI’s new offering essentially allows users on its ChatGPT Business ($20 per user per month) and variably priced Enterprise, Edu and Teachers subscription plans to design or select from pre-existing agent templates that can take on work tasks across third-party apps and data sources including Slack, Google Drive, Microsoft apps, Salesforce, Notion, Atlassian Rovo, and other popular enterprise applications.

Put simply: these agents can be created and accessed from ChatGPT, but users can also add them to third-party apps like Slack, communicate with them across disparate channels, and ask them to use information from the channel they're in and from other third-party tools and apps. The agents will then go off and do work such as drafting emails to the entire team or selected members, pulling data, and making presentations.


Human users can trust that the agent will manage all this complexity and complete the task as requested, even if the user who requested it leaves.

It’s the end of “babysitting” agents and the start of letting them go off and get shit done for your business — according to your defined business processes and permissions, of course.

The product experience appears centered on the Agents tab in the ChatGPT sidebar, where teams can discover and manage shared agents.

This functions as a kind of team directory: a place where agents built by coworkers can be reused across a workspace. The broader idea is that AI becomes less of an individual productivity trick and more of a shared organizational resource.


In this sense, OpenAI is targeting one of office work’s oldest pain points: the handoff between people, systems, and steps in a process.

OpenAI says workspace agents will be free for the next two weeks, until May 6, 2026, after which credit-based pricing will begin. The company also says more capabilities are on the way, including new triggers to start work automatically, better dashboards, more ways for agents to take action across business tools, and support for workspace agents in its AI code generation app, Codex.

For more information on how to get started building and using them, OpenAI points to its online academy page and its help desk documentation.

The Codex backbone

The most significant shift in this announcement is the move away from purely session-based interaction. Workspace agents are powered by Codex — the cloud-based, partially open-source AI coding harness that OpenAI has been aggressively expanding in 2026 — which gives them access to a workspace for files, code, tools, and memory.


OpenAI says the agents can do far more than answer a prompt. They can write or run code, use connected apps, remember what they have learned, and continue work across multiple steps.

That description lines up closely with the capabilities OpenAI shipped into Codex just six days ago, including background computer use, more than 90 new plugins spanning tools like Atlassian Rovo, CircleCI, GitLab, Microsoft Suite, Neon by Databricks, and Render, plus image generation, persistent memory, and the ability to schedule future work and wake up on its own to continue across days or weeks.

Workspace agents inherit that plumbing. When one pulls a Friday metrics report, it is effectively spinning up a Codex cloud session with the right tools attached, running code to fetch and transform data, rendering charts, writing the narrative, and persisting what it learned for next week.

When that same agent is deployed to a Slack channel, it is a Codex instance listening for mentions and threading its work back in.


This is the technical decision enterprise buyers should focus on. Building an agent on a code-execution substrate rather than a pure LLM-call-and-response loop is what gives workspace agents the ability to do real work — transforming a CSV, reconciling two systems of record, generating a chart that is actually correct — rather than describing what the work would look like.

Persistence and scheduling

In earlier AI assistant models, progress paused when the user stopped interacting. Workspace agents change that by running in the cloud and supporting long-running workflows. Teams can also set them to run on a schedule.

That means a recurring reporting agent can pull data on a set cadence, generate charts and summaries, and share the results with a team without anyone manually kicking off the process.
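The mechanics of such a recurring agent can be reduced to a small loop: check whether the cadence is due, do the work, and persist what was learned for next time. The sketch below is a toy illustration of that pattern (the class, its memory layout, and the metrics source are all hypothetical, not OpenAI's implementation):

```python
import datetime

class ReportingAgent:
    """Toy recurring agent: runs when its cadence is due and carries
    state between runs (here, the last run time and a run counter)."""

    def __init__(self, cadence_days: int = 7):
        self.cadence = datetime.timedelta(days=cadence_days)
        self.memory = {"last_run": None, "runs": 0}  # persists across runs

    def is_due(self, now: datetime.datetime) -> bool:
        last = self.memory["last_run"]
        return last is None or now - last >= self.cadence

    def run(self, now: datetime.datetime, fetch_metrics):
        """One scheduled tick: skip if not due, else fetch, summarize, remember."""
        if not self.is_due(now):
            return None
        metrics = fetch_metrics()  # stand-in for pulling from a connected data source
        self.memory["last_run"] = now
        self.memory["runs"] += 1
        return f"Weekly report #{self.memory['runs']}: {metrics['visits']} visits"
```

What OpenAI layers on top of this basic loop is the heavy part: the Codex execution environment, connected-app access, and delivery of the result back into a channel like Slack.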

Here at VentureBeat, we analyze story traffic and user return rate on a weekly basis — exactly the kind of recurring, multi-step, multi-source task that could theoretically be automated with a single workspace agent. Any enterprise with a weekly reporting rhythm pulling from dynamic data sources is likely to find a use for these agents.


Agents also retain memory across runs. OpenAI says they can be guided and corrected in conversation, so they improve the more a team uses them.

Over time they start to reflect how a team actually works — its processes, its standards, its preferred ways of handling recurring jobs — which is a meaningfully different proposition from the static instruction-set GPTs that preceded them.

The integrated ecosystem

OpenAI’s claim is that agents should gather information and take action where work already happens, rather than forcing teams into a separate interface. That point becomes clearest in the Slack examples. OpenAI’s launch materials show a product-feedback agent operating inside a channel named #user-insights, answering a question about recent mobile-app feedback with a themed summary pulled from multiple sources.

The company’s demo lineup walks through a sample team directory of agents: Spark for lead qualification and follow-up, Slate for software-request review, Tally for metrics reporting, Scout for product feedback routing, Trove for third-party vendor risk, and Angle for marketing and web content.


OpenAI first-party workspace agents. Credit: OpenAI

OpenAI also shared more functional examples its own teams use internally — a Software Reviewer that checks employee requests against approved-tools policy and files IT tickets; an accounting agent that prepares parts of month-end close including journal entries, balance-sheet reconciliations, and variance analysis, with workpapers containing underlying inputs and control totals for review; and a Slack agent used by the product team that answers employee questions, links relevant documentation, and files tickets when it surfaces a new issue.

In a sense, it is a continuation of the philosophy OpenAI espoused for individuals with last week’s Codex desktop release: the agent joins the workflow where work is already happening, draws in context from the surrounding apps, takes action where permitted, and keeps moving.

From GPTs to a broader agent push

Workspace agents are not a standalone launch. They sit inside a roughly 12-month arc in which OpenAI has been systematically rebuilding ChatGPT, the API, and the developer platform around agents.


Workspace agents are explicitly positioned by OpenAI as an evolution of its custom GPTs, introduced in late 2023, which gave users a way to create customized versions of ChatGPT for particular roles and use cases.

However, OpenAI now says it will deprecate custom GPTs for organizations at a yet-to-be-determined future date, and will require Business, Enterprise, Edu and Teachers users to convert their GPTs into workspace agents.

Individuals who have made custom GPTs can continue using them for the foreseeable future, according to our sources at the company.

In October 2025, OpenAI introduced AgentKit, a developer-focused suite that includes Agent Builder, a Connector Registry, and ChatKit for building, deploying, and optimizing agents.


In February 2026, it introduced Frontier, an enterprise platform focused on helping organizations manage AI coworkers with shared business context, execution environments, evaluation, and permissions.

Workspace agents arrive as the no-code, in-product entry point that sits on top of that stack — even if OpenAI does not explicitly describe the architectural relationship in its materials.

The subtext across all three launches is the same: OpenAI has decided that the future of ChatGPT-for-work is fleets of permissioned agents, not single chat windows — and that GPTs, its first attempt at letting businesses customize ChatGPT, were not enough.

Governance and enterprise safeguards

Because workspace agents can act across business systems, OpenAI puts heavy emphasis on governance. Admins can control who is allowed to build, run, and publish agents, and which tools, apps, and actions those agents can reach.


The role-based controls are more granular than the ones most custom-GPT rollouts ever had: admins can toggle, per role, whether members can browse and run agents, whether they can build them, whether they can publish to the workspace directory, and — separately — whether they can publish agents that authenticate using personal credentials.

That last setting is the risky case, and OpenAI explicitly recommends keeping it narrowly scoped.

Authentication itself comes in two flavors, and the choice has real consequences. In end-user account mode, each person who runs the agent authenticates with their own credentials, so the agent only ever sees what that individual is allowed to see.

In agent-owned account mode, the agent uses a single shared connection so users don’t have to authenticate at run time. OpenAI’s documentation strongly recommends service accounts rather than personal accounts for the shared case, and flags the data-exfiltration risk of publishing an agent that authenticates as its creator.
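The practical difference between the two modes comes down to whose credentials a run resolves to. A minimal sketch of that decision, assuming illustrative names (this is not OpenAI's API):

```python
def resolve_connection(agent_config: dict, runner: str) -> str:
    """Pick which credentials a run uses, per the two auth modes described
    in OpenAI's documentation. Field names here are illustrative."""
    if agent_config["auth_mode"] == "end_user":
        # Each runner authenticates themselves: the agent only ever
        # sees what that individual is allowed to see.
        return f"token-for:{runner}"
    # Agent-owned: one shared connection for all runners. OpenAI recommends
    # a service account here, never the creator's personal login, because a
    # published agent acting as its creator is a data-exfiltration risk.
    return f"token-for:{agent_config['service_account']}"
```

The end-user mode trades convenience for containment; the agent-owned mode does the reverse, which is why the docs push service accounts for the shared case.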


Write actions — sending email, editing a spreadsheet, posting a message, filing a ticket — default to Always ask, requiring human approval before the agent executes.

Builders can relax specific actions to “Never ask” or configure a custom approval policy, but the default posture is human-in-the-loop.
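That default posture is easy to express as a policy check: anything that writes requires approval unless a builder has explicitly relaxed it. The sketch below is a conceptual illustration with made-up action names, not OpenAI's configuration schema:

```python
READ_ACTIONS = {"search", "fetch", "summarize"}  # illustrative read-only actions

def needs_approval(action: str, policy: dict) -> bool:
    """Write actions default to 'always ask'; builders may relax specific
    actions to 'never ask'. Policy keys and values are illustrative."""
    if action in READ_ACTIONS:
        return False  # reads run without prompting a human
    mode = policy.get(action, "always_ask")  # default posture for writes
    return mode != "never_ask"
```

The key property is that safety is the fallback: an action a builder never thought about still lands on the human-in-the-loop path.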

OpenAI also claims built-in safeguards against prompt-injection attacks, where malicious content in a document or web page tries to hijack an agent. The claim is welcome but not yet proven in the wild.

For organizations that want deeper visibility, OpenAI says its Compliance API surfaces every agent’s configuration, updates, and run history.


Admins can suspend agents on the fly, and OpenAI says an admin-console view of every agent built across the organization, with usage patterns and connected data sources, is coming soon.

Two caveats worth flagging for security-sensitive buyers: workspace agents are off by default at launch for ChatGPT Enterprise workspaces pending admin enablement, and they are not available at all to Enterprise customers using Enterprise Key Management (EKM).

Analytics and early customer signal

OpenAI also ships an analytics dashboard aimed at helping teams understand how their agents are being used. Screenshots in the launch materials show measures like total runs, unique users, and an activity feed of recent runs, including one by a user named Ethan Rowe completing a run in a #b2b-sales channel.

The mockup detail supports OpenAI’s broader point: the company wants organizations to measure not just whether agents exist, but whether they are being used.


The clearest early-adopter signal in the launch itself comes from Rippling. Ankur Bhatt, who leads AI Engineering at the HR platform, says workspace agents shortened the traditional development cycle enough that a sales consultant was able to build a sales agent without an engineering team. “It researches accounts, summarizes Gong calls, and posts deal briefs directly into the team’s Slack room,” Bhatt says. “What used to take reps 5–6 hours a week now runs automatically in the background on every deal.”

OpenAI’s announcement names SoftBank Corp., Better Mortgage, BBVA, and Hibob as additional early testers.

The era of the digital coworker

Workspace agents do not land in a vacuum. They land in the middle of a broader OpenAI push — through AgentKit, through Frontier, through the Codex overhaul — to make agents more persistent, more connected, and more useful inside real organizational workflows.

They also land in a deeply crowded field. Microsoft Copilot Studio is wired into the Microsoft 365 base, Google is pushing Agentspace, Salesforce has rebuilt itself as agent infrastructure with Agentforce, and Anthropic recently introduced Claude Managed Agents. All are different flavors of the same idea: agents that cut across your apps and tools, take actions on a schedule, and retain some degree of memory, context, permissions, and policy.


But this launch matters because it turns OpenAI’s strategy into something concrete for the teams already paying for ChatGPT, and because it quietly retires the product those teams were most recently told to standardize on.

If workspace agents live up to the pitch — shared, reusable, scheduled, permissioned coworkers that follow approved processes and keep work moving when their human is offline — it would mark a meaningful change in what workplace software does. Less passive software waiting for input, more active systems helping teams coordinate, execute, and move faster together.

The era of the digital coworker has begun. And, on OpenAI’s plans at least, the era of the custom GPT is ending.


‘It ultimately made people realize that music was worth paying for’: Spotify’s Sten Garmark on how the streaming giant created an entirely new business model, and its mission to convince users that ‘there was something better than free’


Over the past couple of decades we’ve witnessed a whirlwind of cultural changes in the music industry, but also major changes in terms of how we find and listen to music. And there’s arguably one entity that has contributed to these shifts more than any other: Spotify — which was founded 20 years ago today (April 23). Feel old yet? I sure do.

For many music lovers out there, myself included, Spotify was their introduction to music streaming, and over the last 20 years it’s climbed to the top of the ladder, amassing over 750 million users and cementing its position as one of the best music streaming services — and in the eyes of many, the daddy of them all.
