Light-years ahead of the competition

Verdict

An incredible update, Alexa+ throws away the stilted conversations of old and introduces a voice assistant that simply understands what you’re asking. Better at general responses, able to pull information from emails and charts, and capable of building smart home routines quickly, Alexa+ is light-years ahead of the competition. It can be a bit over-friendly at times and quite verbose, but the beauty of this system is that you can tell Alexa+ what you do and don’t like to get it working the way you want. The only real downside is that the local business search is terrible, but everything else is so much better that it’s got me talking more and using my phone less.

Pros

  • Understands context and learns your preferences

  • You can use natural language

  • Capable of building complicated routines

  • Can pull information from emails, photos and documents

Cons

  • Responses can be long-winded until tweaked

  • Local business directory isn’t very good

Key Features

  • Free for now

    No cost during the Early Access service, then bundled with Amazon Prime.

  • Works with most Echo speakers

    All speakers bar some first-generation models support Alexa+.


Introduction

Amazon Alexa was a magical concept at launch. It finally felt as though the future that Star Trek promised us was here, with a personal assistant you could talk to. As good as it was, clunky interactions via ‘Alexa Speak’ and several limitations ended up with Alexa (and its competition) feeling slightly more niche. Amazon Alexa+ fixes that.

The GenAI-powered voice assistant is miles ahead of the original, and miles ahead of the competition. I wrote about my initial thoughts after a week with the service, describing what Alexa+ was good at (and what it needed to improve), but I’ve had more time with the system, so read my full review to find out why it’s the best.


Availability and compatibility

To get Alexa+, you need to sign up for the Early Access programme, which is currently by invitation. The quickest way to jump up the queue is to get a new Alexa device, but eventually, the system will be rolled out to everyone. Most devices (bar a few first-generation ones) are supported, although the new versions have dedicated chips that help with the processing.


My guide on how to enable Alexa+ goes into more detail on how to get the system, and which devices are compatible. Alexa+ is currently free during Early Access; after that, it will be bundled with Amazon Prime, or you can pay £19.99 a month for the service on its own. Clearly, Amazon Prime is a much better way of getting it.

General conversations and information

  • Understands general speech
  • Much better at context
  • Can be over-friendly

Although Alexa+ works in the same way as standard Alexa (you say, “Alexa”, followed by your request), the new system operates in a completely different way. Gone is the need for ‘Alexa Speak’ (saying things in a specific way to get Alexa to understand), replaced with natural conversation. And the replies are more natural, too, with Alexa understanding and building context as it goes.

More natural conversations make it a lot easier to talk to Alexa. Sure, I can have the standard interaction, such as “Alexa, weather”, to find out what the upcoming weather is like. But, I can also ask, “When’s a good day to have a BBQ?” or, “Is the weather going to be consistently nice this week?” and not only does Alexa understand, but it gives sensible answers.

Amazon Alexa Plus BBQ
Image Credit (Trusted Reviews)


Alexa+ is also much better at context. By default, it stays listening after a reply, so you can follow up with another question. But even if you go silent, you can pick the thread back up at any point. In my example about the BBQ, I followed up 20 minutes later with, “Alexa, and what about next week?”

That’s limited context, but Alexa+ also builds up information about you, both explicitly (when you first get the service, it asks some basic questions, such as what your favourite type of music is) and through inference, such as by learning that you like a specific football team.


That’s surprisingly powerful. Once Alexa+, for example, has learned which of your friends are vegetarian, it will adjust the recipe ideas it suggests if any of them are coming around. 

Alexa+ can also moderate its responses, adjusting emotion based on the news it gives you. As a Spurs fan, Alexa’s replies about the latest game are usually said with a slightly sad voice, although a recent win came with a more excited response.

It feels much more natural to talk to Alexa+ than to Alexa, although there are still some oddities. Sometimes, when a reply consists of multiple sentences, the pause between each is a little off. Rather than flowing naturally, a second sentence starts abruptly, almost before the first sentence has finished. It sounds a little like Alexa+ is interrupting itself.

In terms of replies, it helps that the new service can retrieve information from a wider range of sources. All too often with standard Alexa (and pretty much all of the time with Siri), I’d hit the limit of capability, with questions that can’t be answered.


That doesn’t happen often at all with Alexa+. I asked if the recent tube strikes were going ahead, and Alexa+ told me they were, when they’d start, when they’d end and how trains could be affected after people started to return to work.

Generally, if you want to know the answer to something, Alexa+ can give you the answer in a way that makes sense.

But, it does need tweaking. Asking about the tube strike, Alexa+ was almost excited to tell me it was going ahead, and needed reminding that this was bad news and to tone it down.


Similarly, I don’t like some of the out-of-the-box responses, as they feel forced. Ask about football, and Alexa likes to say ‘mate’ a lot, a bit like it’s been programmed by watching poorly-written football beer adverts. I told Alexa+ not to call me ‘mate’ and it has stopped.

Often, Alexa+ can be too verbose, trying to be chatty, but in an unconvincing and slightly odd way. I’ve told it to be brief and to-the-point with answers, and it’s much better.

All of that’s important, as Alexa+ can be tweaked: what you get at the start and what you get weeks later are quite different. Just remember to keep tweaking and feeding back to get Alexa+ to behave the way that you want it to.


For all the tweaking that you can do, there are some issues and obstacles that can’t be overcome. Asking Alexa+ about Spurs on my Echo Show 11 puts the response on screen, along with some extra information that runs across the bottom. Only, the snippets of information are about the San Antonio Spurs, which isn’t very helpful.

Amazon Alexa Plus snippets showing wrong information
Image Credit (Trusted Reviews)

Local business search is also, quite frankly, rubbish. Amazon says it’s working on it, and boy, does it need to. “Alexa, what’s the nearest French restaurant?” I asked. The answer was Le Marmiton, Wanstead. Not only is that restaurant an eight-minute walk from my house (so not the closest one), but, the main issue is that Le Marmiton shut down in 2023.

On my Echo Show, the response says, “It’s open today from 5pm to 9pm”, but the snippet below from TripAdvisor clearly shows that the restaurant is closed today (and, in fact, forever).

Amazon Alexa Plus local businesses
Image Credit (Trusted Reviews)

Restaurants that still exist and are on OpenTable can be booked via voice. It’s a neat system that makes Alexa+ do the hard work of finding a table for the number of people you want, at the time you want. One limitation is that you can’t book restaurants that require a credit card, although this is being worked on, too.


Documents, calendars and more

  • Can read documents and create tasks and calendar entries
  • Work accounts not supported

AI is very good at understanding structured data. That, combined with email and calendar management, means that Alexa+ can be a kind of personal assistant. Well, provided you don’t pay for your email. I use hosted Exchange for my personal email, but this type of account isn’t supported, nor is a Google Workspace account. That’s a little annoying, as it means creating a free Gmail or similar account for the time being.

What’s very good is that, via the app or by sending emails to [email protected] from a registered email address (set via the Alexa app), Alexa+ can pull out information and create reminders and calendar appointments.


After I sent an email with a PDF containing information on my daughter’s upcoming DofE expedition, the Alexa app pinged a few minutes later to tell me it had found some appointments and tasks, all spot-on and all ready to go straight into the calendar.

Trying to go through the appallingly formatted term dates page on my daughter’s school website is a nightmare, but I used the Alexa app to take a photo of it, and it quickly worked out when the inset days and holidays were, letting me add them to my calendar. Cleverly, as we’re partway through a school year, Alexa+ ignored everything that’s already passed.

With full email support, I can see Alexa+ becoming core to managing everything – please, Amazon, hurry up and add work accounts!


Smart home control

  • Smarter responses
  • Can build Routines using voice

The same old basic commands work, such as turning on lights or setting them to a specific temperature. But Alexa+ is also smarter and much easier to interact with.

Tell Alexa+ that it’s cold, and it will boost the heating around you. With standard Alexa, I’d need to ask what the temperature was and then ask again to set the heating to a temperature above that. Tell Alexa+ that it’s dark, and it can turn the lights on for you. 

Thanks to better language processing, Alexa+ mostly understands what I want it to do, and I don’t have to phrase requests in a specific way.

Alexa+ can also build routines for you, via voice, which can be tweaked and edited in the app. I find that it’s often faster to do things this way, rather than the old app-based one.

More complicated commands can also be turned into one-time Routines. “Alexa, turn off the office lights in 10 minutes’ time” does just that.


It’s possible to string together a series of commands for a one-time run, too, such as turning the lights on, waiting for 10 minutes, and then turning them off again. These commands can often go a bit wrong.

“Alexa, set Dave’s office heater to 25° and then after five minutes turn it off,” I said. Alexa+ then created a routine that did what I’d described, only the command to trigger the routine was the exact, lengthy phrase that I’d said above. That’s clearly not what I wanted, but Alexa+ is getting there, making the complex much easier.

Amazon Alexa Plus Routines
Image Credit (Trusted Reviews)


Should you buy it?


You want a smarter assistant

Much more powerful than its competition, Alexa+ is the voice assistant to use.


You don’t want a smart assistant

There’s no real comparison between this and its rivals, so only avoid if you don’t want a smart assistant.


Final Thoughts

I’d practically stopped using standard Alexa for anything more than basic requests: timers, turning a specific light on, or setting an alarm clock. Alexa+ changes that.

Yes, there are areas that need improvement (Routines and local business search are two good examples), but the general interactions are so much better. When I want the answer to a question, I now tend to ask Alexa+ rather than digging out my phone – and it helps me avoid getting trapped in doomscrolling along the way. The ability to tweak Alexa+ to understand your preferences and learn your context makes it even more powerful.

There’s simply no competition at the moment: when it comes to everyday life, general questions and smart home control, Alexa+ is far ahead of its rivals.


FAQs

Do you have to pay for Alexa+?

Alexa+ is free during Early Access, but it will eventually be bundled with Amazon Prime, or it will alternatively cost £19.99 a month.

Can I get Alexa+ now?

The short answer is yes, but you have to sign up for the invite, and people who have bought a more recent Echo speaker will get priority access.


Phones Get Stolen At TSA Checkpoints More Often Than You Probably Realized

If you’re a frequent flyer, going through airport security checks can feel like a mind-numbing chore standing between you and your destination. You almost go through it on autopilot, letting the airport powers-that-be handle it all. Somewhere in that fugue state, you may forget that you could make it past every checkpoint minus a phone. And yet, it’s a very real possibility.

In fact, according to the TSA, somewhere between 90,000 and 100,000 items get left behind at security checkpoints on a monthly basis. Obviously, not all of those are phones, but plenty are. Now, TSA’s electronics rules do cover what’s allowed in checked bags versus carry-ons, but those guidelines don’t really help once a device is sitting unattended in a tray.

Travel + Leisure spoke to one flyer who learned about airport theft the hard way. She was sprinting between gates to catch a connecting flight when she dropped her phone in a bin and kept moving without it. The realization only hit her once the plane started to take off. She pinged the airport from her laptop while still airborne but no reply came back. Eventually, the airport admitted it had never turned up.


Forgetfulness isn’t the only way you can end up phoneless, though. A viral TikTok from a flyer, picked up by the New York Post, has a TSA agent telling her that placing your phone bare in the bin is “the fastest way to get it stolen.” Phones, she relayed, are what agents see vanishing the most.


How to keep your phone safe at the airport

The most basic way to keep your phone safe is rather unglamorous. Just tuck it into a zipped pocket inside your bag before the bag goes anywhere near the scanner belt. That way, anyone with sticky fingers has to fumble through a zipper while standing next to a bunch of TSA agents, which cuts the appeal.

For travelers who have TSA PreCheck (typically US citizens), things get easier. Small electronics can stay inside carry-on bags during screening, which means there’s no reason to leave a phone in a tray to begin with. Unfortunately, this doesn’t apply to most regular passengers, since some TSA checkpoints make you remove your laptop and other bulky devices regardless.

The Travel + Leisure report also features another flyer who once had her camera lifted at an Indonesian security checkpoint. She now sticks to a deliberate loading order. She starts off with her first bin for whatever she cares about least, often just a jacket or a scarf. Then comes the carry-on. Electronics and anything actually worth stealing get loaded last of all.

And if a bag is pulled aside for extra screening, experts suggest asking the agents to gather your other belongings so you can keep them at the inspection table. If a device does still vanish, notify the nearest TSA officer.

If you can’t track it down immediately, the TSA holds recovered items for at least 30 days and lets you file a claim through its website. Try to be quick with that, though, as anything not picked up within that holding period gets its memory wiped or destroyed outright to keep personal data from leaking. After submission, an acknowledgement letter with a control number tends to land roughly four to six weeks later.

IronSource founders raise $60M at $500M valuation for Zyg, an agentic AI platform that automates e-commerce advertising

TL;DR

Zyg, the agentic e-commerce platform built by five IronSource co-founders, raised $60 million at a $500 million valuation led by Accel, just two months after emerging from stealth with a $58 million seed round. The company automates advertising, retention, support, and inventory forecasting for DTC sellers using AI agents that operate autonomously on platforms like Meta. The structural irony is that the team that built IronSource’s ad tech infrastructure for human media buyers is now building agents designed to replace them.

The founders of IronSource spent a decade building tools that helped mobile app developers monetise their products through advertising. They sold that company to Unity for $4.4 billion in 2022, watched Unity dismantle the ad network they had built, left in 2024, and have now returned with a company whose premise is that the entire category of work IronSource supported, the human management of digital advertising campaigns, can be automated by AI agents. Zyg, their new startup, raised $60 million at a $500 million valuation on Tuesday, led by Accel, with participation from Bessemer Venture Partners and Lightspeed Venture Partners. The company came out of stealth two months ago with a $58 million seed round. In eight weeks it has raised $118 million at a half-billion-dollar valuation without a single public customer case study. The bet is not that AI can assist e-commerce advertising. The bet is that AI agents can replace the people who run it.


The thesis

Zyg describes itself as an agentic operating system for e-commerce scale. The platform automates business functions that direct-to-consumer sellers currently manage through a combination of human operators, fragmented software tools, and advertising agencies: campaign creation and optimisation on Meta and other platforms, customer retention, support, and inventory forecasting. Chief Executive Officer Omer Kaplan told Bloomberg that Zyg’s agents are already running advertising campaigns on Meta’s platforms and are “doing the vast majority of the activity themselves.” The company’s customer base includes businesses with between $2 million and $15 million in annual revenue, the segment of the e-commerce market large enough to need sophisticated advertising but too small to afford the teams that run it.

The irony is structural. IronSource built the infrastructure that app developers used to acquire users and monetise through advertising. The company’s success depended on the existence of a large class of professionals, media buyers, growth managers, and performance marketers, who spent their days optimising campaigns inside platforms like Meta, Google, and the ad networks IronSource itself operated. Zyg’s premise is that those professionals are now a cost centre that AI agents can eliminate. The same team that built the tools human ad buyers used is now building the agents designed to make those humans unnecessary. It is not a pivot. It is a succession.

The market


Zyg is entering a category that barely existed twelve months ago and is now attracting hundreds of millions in capital. Hightouch raised $150 million at a $2.75 billion valuation last week to build an agentic marketing platform for enterprises, with Goldman Sachs and Bain Capital leading the round. Shopify has launched Agentic Storefronts that let merchants sell products inside ChatGPT, Perplexity, and Microsoft Copilot. Meta itself is moving toward fully automated advertising, where an advertiser inputs a business URL and Meta’s AI handles creative generation, audience targeting, budget allocation, and performance optimisation without human intervention. AI marketplaces are reshaping how advertising is created and distributed, collapsing the distance between an advertiser’s intent and a campaign’s execution to something approaching zero.

The competitive landscape suggests that Zyg’s timing is right but its window is narrow. When Meta completes its own automation of the advertising workflow, the question becomes what value a third-party platform adds on top of a system that already runs itself. Zyg’s answer is that Meta optimises for Meta. A direct-to-consumer brand needs agents that optimise across advertising, retention, support, and inventory simultaneously, making decisions that account for the full business rather than a single channel’s performance metrics. That cross-functional integration is what separates an agentic operating system from an automated ad tool, and it is what justifies the ambition of the valuation.

The speed

A $500 million valuation two months after stealth is not normal, even in 2026’s funding environment. But the velocity reflects a specific dynamic in AI venture capital: repeat founders with a demonstrated exit command valuations that bear no relationship to current revenue. VAST Data raised $1 billion at a $30 billion valuation as AI infrastructure demand accelerated, and the broader funding environment saw $297 billion flow into startups in Q1 2026 alone, with AI capturing 80 per cent of the total. In this market, a team that built and sold a company for $4.4 billion, that understands advertising infrastructure at a technical level, and that is applying that understanding to the single largest category of AI agent deployment, is exactly the profile that commands pre-revenue valuations at scale.

Accel, which led the round, raised a $5 billion fund in April specifically to back AI companies. The firm’s investment in Zyg is consistent with its thesis that the returns from AI will come not from foundational model companies but from vertical platforms that deploy agents in specific industries. Google is turning Chrome into an agentic workplace tool with autonomous browsing capabilities, and every major platform is building agent infrastructure. The venture bet on Zyg is that e-commerce advertising is a vertical where domain expertise, the IronSource team’s specific understanding of how campaigns work, provides an advantage that general-purpose agent platforms cannot replicate.


The talent

Zyg was founded by five of the original IronSource founders: Tomer Bar-Zeev as chairman, Omer Kaplan as CEO, Assaf Ben Ami as CFO and COO, alongside Nadav Ashkenazy and Daniel Shinar. The team also includes cybersecurity and AI specialists from Unit 81, the Israeli military’s elite technology unit. The funding will be primarily used to hire AI talent in Israel, a market where the competition for researchers and engineers has intensified as global companies and domestic startups chase the same pool of specialists. Meta’s raid of the Thinking Machines Lab founders, reportedly including a $1.5 billion engineer, illustrates the premium the industry places on concentrated AI talent. Israeli startups raised $15.6 billion in 2025, with AI-focused companies commanding the majority of capital, and the talent war is the primary constraint on how fast companies like Zyg can build.

The Unit 81 connection is relevant beyond credentials. Building agents that autonomously manage advertising campaigns, handle customer data, and make inventory decisions requires the kind of security architecture that military intelligence backgrounds produce. An agent that runs ad campaigns is also an agent with access to business-critical systems, customer information, and financial data. The governance challenge, how to let an agent operate autonomously while preventing it from making catastrophic errors, is as much a security problem as an AI problem, and Zyg’s founding team is constructed to address both.

The question

Agentic AI is entering specific verticals from construction to logistics to legal services, and in each category the same question applies: does the agent platform become the new operating layer for the industry, or does the incumbent platform absorb the agent functionality into its own product? In e-commerce advertising, the incumbents are Meta, Google, Amazon, and Shopify, each of which is building AI automation directly into its platform. Meta’s Advantage+ suite already handles creative generation and targeting for 8 million advertisers. Google’s Performance Max automates campaign creation across all Google surfaces. Shopify’s AI agents manage everything from SEO to email to ad buying.

Zyg’s wager is that the multi-platform problem is unsolvable from inside any single platform. A DTC brand selling on Shopify, advertising on Meta and Google, retaining customers through email and SMS, and forecasting inventory across seasonal demand curves needs an agent that understands the business as a system, not a collection of channels. That is the same insight that made IronSource valuable: app developers needed a monetisation layer that worked across ad networks, not inside any single one. The founders are running the same play, one abstraction layer higher. The difference is that the previous abstraction layer helped humans manage complexity. This one is designed to eliminate the need for the humans entirely. Whether that works at the scale of the DTC market, for the thousands of mid-sized brands that cannot afford engineering teams but generate enough revenue to justify AI-powered operations, will determine whether Zyg’s $500 million valuation was prescient or premature. The founders have two months of post-stealth existence and $118 million in capital to find out.

The Math You Need To Start Understanding LLMs

Once you peel back the hype and mysticism, large language models (LLMs) are a fascinating application of statistical models, effectively what you get when you dial a basic auto-complete model up to eleven. Analyzing a mind-boggling amount of text and producing meaningful auto-completion results involves quite a bit of math, and a recent three-part article series by [Giles] goes through the basics of inference, the prediction step performed with a trained model.

The text is encoded as token IDs in the LLM’s vector space, each token being a text fragment that has some probability of following another, such as cats being likely to be found on desks, as in the photo above by [Giles]. During inference, the model produces a vector of scores over those IDs, and in successive steps a sentence can be pieced together from them. These so-called logits are detailed in the first article in the series, with the second article focusing on vocabulary space and embeddings, as well as the matrix operations used for inference.
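To make the logits step concrete, here is a minimal Python sketch (not code from [Giles]’s articles) showing how a vector of raw scores over a toy vocabulary is turned into probabilities with a softmax and then into the next token ID; the vocabulary, the logit values and the greedy selection are all assumptions made purely for illustration.

import math

# Toy vocabulary: token ID -> text fragment (invented for this example).
vocab = {0: "the", 1: "cat", 2: "sat", 3: "on", 4: "desk"}

# Pretend the model has just produced these raw scores (logits),
# one per token ID, for whichever token should come next.
logits = [0.2, 1.5, -0.3, 0.9, 2.4]

def softmax(scores):
    # Subtract the maximum for numerical stability, exponentiate, normalise.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# Greedy decoding: pick the highest-probability token ID.
next_id = max(range(len(probs)), key=lambda i: probs[i])

for token_id, p in enumerate(probs):
    print(f"{token_id} ({vocab[token_id]!r}): {p:.3f}")
print("next token:", vocab[next_id])

Feed the chosen token back in, ask for fresh logits, and repeat: that loop is the “successive steps” by which a sentence gets pieced together.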

Finally, the third article puts all of this together and looks at transformers, a crucial part of the GPT (generative pretrained transformer) LLM architecture. Of note is the attention mechanism, which takes GPTs beyond being merely glorified auto-complete systems by adding pattern matching. Here we can see how the statistical model of the LLM is used to generate a rather plausible output, at which point the human has to ask themselves to what extent they feel it is correct.
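If you want to see what the attention mechanism looks like in matrix terms, the short NumPy sketch below implements standard scaled dot-product attention; the tiny matrix sizes and random values are made up for illustration and are not drawn from the article series.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores say how much each query position should look at each key position.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # A softmax over the key dimension turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value vectors.
    return weights @ V, weights

# Three token positions with four-dimensional embeddings (arbitrary toy numbers).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))

output, weights = scaled_dot_product_attention(Q, K, V)
print("attention weights:", weights.round(3))
print("output:", output.round(3))

In a real transformer, Q, K and V are produced from the token embeddings by learned weight matrices, and many such attention “heads” run in parallel inside each layer.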


Of course, a lot more goes into making LLMs and GPTs performant, such as key-value caches that massively speed up inference.

Coinbase cuts 14pc of jobs to save costs and embrace AI

Last year, Coinbase Europe was fined nearly €21.5m for failing to monitor transactions.

Coinbase is making 14pc of its workforce redundant to cut costs and adopt AI. According to recent company filings, the layoffs will affect around 700 employees.

The company employs around 150 people in Ireland; however, it is unclear how many of them will be affected by this move. Coinbase did not provide SiliconRepublic.com with details when queried.

Restructuring expenses are expected to cost up to $60m. Company shares were up nearly 4pc in pre-market trading at the time of publication.


In a post on X, Coinbase co-founder and CEO Brian Armstrong said that the company is “volatile from quarter to quarter”.

“While we’ve managed through that cyclicality many times before and come out stronger on the other side, we’re currently in a down market and need to adjust our cost structure now so that we emerge from this period leaner, faster and more efficient for our next phase of growth.”

AI is “changing how we work”, Armstrong said. The workforce adjustments are expected to make the company “lean, fast, and AI-native”, he added.

He is part of a growing list of company leaders choosing a slimmer, more AI-powered workforce. In recent months, Meta cut 8,000 jobs; Block, 4,000; Oracle, about 10,000; Amazon, 30,000; Atlassian, 10pc of its workforce; and Snap, about 16pc – with the trend largely attributed to changing technology at the workplace.


A joint report recently published by Ireland’s Economic and Social Research Institute and the Department of Finance has found that AI adoption in Ireland is likely to lead to job losses and increased income inequality in the “short to medium term”.

“The biggest risk now is not taking action,” Armstrong continued in his post, calling this an “inflection point, not just for Coinbase, but for every company”.

With a smaller workforce, Coinbase plans to “[concentrate] around AI-native talent” who can manage fleets of AI agents. The company is also introducing “one-person teams” who can manage engineering, design and product management.

Last November, the Central Bank of Ireland fined Coinbase Europe nearly €21.5m for breaching its obligations to monitor money laundering and terrorist financing.



Bose Lifestyle Ultra Soundbar, Speaker Let You Control With Your Favorite Music App

Bose has unveiled its Lifestyle Collection of audio products, which include a new soundbar, smart speaker and subwoofer that now use Google Cast or Apple AirPlay instead of a proprietary app.

The range includes the Bose Lifestyle Ultra Speaker ($299), the Bose Lifestyle Ultra Soundbar ($1,099) and the “most powerful” Bose Lifestyle Ultra Subwoofer ($899), all of which are available to preorder now.

See also: Best Soundbars of 2026


The range is not backward compatible with the company’s existing systems, though Bose says you can connect the company’s subs (or any other sub you choose) to the Sub Out port.

The Lifestyle Ultra Soundbar, which replaces the existing Smart Ultra Soundbar, features Dolby Atmos compatibility as well as the Alexa Plus virtual assistant. The speaker uses two of the company’s proprietary PhaseGuides to fire sound out of the unit, in addition to six full-range drivers (two up-firing and four front-facing) and a center tweeter. It has a glass-and-fabric look reminiscent of the previous soundbar.


The $299 Bose Lifestyle Ultra Speaker is available in a choice of colors.


Ty Pendlebury/CNET

The “smart” Lifestyle Ultra speaker is a single-channel speaker with a height driver that uses the company’s “direct reflecting” technology to add presence, though it’s not an Atmos height channel on its own. If you use the Ultra as a rear speaker, though, the height driver will act as an Atmos channel. The Ultra includes a 3.5mm input so users can connect a source component like a turntable to it. If you have two Ultras, you can pair them as a stereo system.

Unlike previous Bose systems, the Bose Lifestyle system doesn’t have a proprietary control app; the company expects users to use Apple AirPlay, Spotify Connect or Google Cast to manage the speakers. The advantage is that you can now group non-Bose speakers, and you no longer need to control your music with a megalithic Sonos-style app. The Bose app is now used for setup only, and your phone’s onboard microphone replaces the clunky wired headset previously used to calibrate the system.

Why would Bose ditch a music control app? Look to its competitor Sonos. Two years ago, Sonos owners complained of multiple issues with its all-in-one app — after the introduction of the Ace headphones — and this ultimately led to the departure of its longtime CEO, Patrick Spence. While Sonos and Bose helped invent the multiroom speaker systems we know today, the world has moved on from a single app that controls everything, and most people just use the streaming app they’re most comfortable with.

Soundbar underneath a TV

David Carnoy/CNET

Ears on

I heard the three speakers with both music and home theater content, and while Bose says the soundbar has the best bass the company has produced, it didn’t compare to my memories of the Sonos Arc Ultra — a speaker you can use without a subwoofer. Based on the demos I heard, the sub added much-needed bass to movies.

Bose’s Ultra speaker was fine, but I found the height effect subtle, and it will depend on your room’s layout. The Ultra didn’t have the punchiness I’ve heard in the (cheaper) Sonos Era 100, but I look forward to hearing it against its competition.

Sound quality aside, I think the lack of interoperability with existing Bose speakers will be of most concern to potential buyers. We shall see.

Eight Sleep’s New Pregnancy Mode Adjusts Your Bed Temperature So You Don’t Have To

If you’ve ever been pregnant (or know someone who has), you know that sleep gets harder as the pregnancy progresses. One minute you’re freezing, the next you’re waking up to night sweats. Most likely, your sleep environment hasn’t changed, but your body has.

Eight Sleep, the company behind the Pod smart mattress cover and sleep tracker, just launched Pregnancy Mode, an AI-powered system built for the physiological changes brought on by pregnancy and postpartum. It’s available now as a free app update for all Eight Sleep members with a paid Standard, Enhanced or Elite subscription ($17 to $33 per month).

When you activate Pregnancy Mode within the app, the system uses your pre-pregnancy temperature baseline, your last menstrual period date and your estimated due date to automatically calculate weekly temperature adjustments across all sleep stages.

Eight Sleep Pod 5 Couple

Eight Sleep

Eight Sleep’s dataset of over 100 pregnant members and 50,000 nights of sleep data found that in the early stages of pregnancy, your body tends to sleep warmer (about 0.4 to 0.7 degrees Fahrenheit) than your pre-pregnancy baseline temperature. The study also found that by the third trimester, it’s the complete opposite — pregnant people were setting their Pods nearly 3.5 degrees cooler than normal. Pregnancy Mode also tracks that curve and continues to adjust eight weeks postpartum.
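Eight Sleep hasn’t published the exact adjustment algorithm, but the figures above sketch its general shape. The following Python snippet is purely illustrative: it assumes a linear taper between the reported early-pregnancy warm offset (+0.4 to +0.7 degrees Fahrenheit), the roughly 3.5-degree-cooler third-trimester setting, and a return to baseline across the eight postpartum weeks; the function name, week boundaries and interpolation are all invented for the example.

def pregnancy_temp_offset(week, baseline_f=0.0):
    """Illustrative weekly bed-temperature offset (degrees F) vs. pre-pregnancy baseline.

    week: weeks since last menstrual period; a 40-week term and an
    8-week postpartum taper are assumed for this sketch.
    """
    EARLY_WARM = 0.55      # midpoint of the reported +0.4 to +0.7 F early-pregnancy range
    LATE_COOL = -3.5       # roughly 3.5 F cooler by the third trimester
    TERM, POSTPARTUM = 40, 8

    if week <= 13:                      # first trimester: sleep slightly warmer
        offset = EARLY_WARM
    elif week <= TERM:                  # taper from warm to cool by term (assumed linear)
        frac = (week - 13) / (TERM - 13)
        offset = EARLY_WARM + frac * (LATE_COOL - EARLY_WARM)
    elif week <= TERM + POSTPARTUM:     # postpartum: ease back toward baseline
        frac = (week - TERM) / POSTPARTUM
        offset = LATE_COOL * (1 - frac)
    else:
        offset = 0.0
    return baseline_f + offset

for w in (6, 20, 36, 44, 48):
    print(f"week {w}: {pregnancy_temp_offset(w):+.1f} F vs. baseline")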

The feature includes a dedicated Pregnancy Insights card in the app, where you can see weekly biometric summaries of your heart rate, heart rate variability, respiratory rate, sleep stages and snoring. It’ll also compare these metrics to your pre-pregnancy scores. 

Baby development milestones and prenatal visit reminders are part of this new feature, too. Partners sharing the bed will get their own weekly insights on what to expect and how to help their pregnant significant other.

Pregnancy Mode is now available in the Eight Sleep app on iOS and Android.

Bose Brings Back Its ‘Lifestyle’ Branding With New Speakers for the Home

Bose has three new speakers to spice up your home listening. The company’s new “Lifestyle Collection”—designed with a snazzy fabric-wrapped grille and gentle curves—includes the Lifestyle Ultra Speaker, Lifestyle Ultra Subwoofer, and Lifestyle Ultra Soundbar. All of them can be connected to multiple units and third-party speakers via AirPlay and Google Cast for a better multi-room audio experience.

These audio products mark a “reentering” into the home speaker space for the company, bringing back the iconic Lifestyle lineup that originally debuted in 1990—known for simplicity and ease of use—which Bose subsequently discontinued in 2022.

To no surprise, Bose says the Ultra Soundbar is the “best soundbar we have ever made,” and that the Ultra Speaker might even be one of the company’s best in its storied history. The wireless speaker starts at $299, with a $349 limited-edition model in Driftwood Sand; the soundbar costs $1,099, and the subwoofer is $899. They’re available for preorder now and go on sale May 15.


Bose Luxury Ultra Speaker in Driftwood Sand.


Courtesy of Bose

These Wi-Fi-enabled speakers support AirPlay, Google Cast, Spotify Connect, and, uniquely, are the first to integrate with Alexa+ (in the US only), allowing you to ask Amazon’s chatbot to play music through the speakers via voice commands. There’s also Bluetooth support, and even an auxiliary input for connecting the Ultra Speaker to a turntable.

You can group two Lifestyle Ultra Speakers into a stereo system in the Bose app, or group them all together for a home theater system. Sadly, if you hoped to use them as a surround system with your existing Bose soundbar, you’re out of luck: the company says the new range is only backward compatible with the Bass Module 700, and even then the module can only be connected to the new Lifestyle Ultra Soundbar with a wire. For multi-room audio, the company has passed those grouping duties to the Google Home app for Google Cast technology, or Apple’s AirPlay for iOS users. Speaking of the app, there’s a redesigned onboarding process that purportedly makes setting up all of these speakers a breeze.

On the audio front, the Ultra Speaker notably features an upward-firing driver for Dolby Atmos–like spatial audio, along with two front-facing drivers. (It doesn’t seem to support Dolby Atmos Music at this time.) The company is also touting its CleanBass technology, which pairs Bose’s QuietPort acoustic opening with the woofer for deep sound that performs better than its size suggests, though we’ll have to hear it for ourselves to see if it lives up to Bose’s claims.

Schools are using VR headsets to relieve student stress and fix attention issues

Secondary schools in the London borough of Sutton are using VR headsets to help students manage exam stress, ADHD, and difficult home lives. The headsets are made by tech firm Phase Space, and the schools are running the pilot alongside the local NHS mental health trust.

As reported by the Guardian, the seven-minute VR program gives students a quick mental reset, either during a scheduled slot or when they need to step away from class because anxiety has taken over. 

According to Zillah Watson, the program’s co-creator and visiting professor at University College London, nine out of 10 participants have shown improvement, leading to a decrease in stress and anxiety. 

Does seven minutes in VR actually make a difference?

Watson, who is also the former head of VR at the BBC, designed the program specifically for overwhelmed and anxious students. She says it has led to improvements in attendance, behavior, and reductions in exam-related anxiety. 


Sixteen-year-old Lora Wilson described her experience: the program starts in an empty room where the light slowly fades until you feel transported somewhere else entirely. “Exams terrified me. They don’t scare me as much anymore,” she said.

What do teachers think?

Aelisha Needham, vice-principal at Ark Academy in north London, says they mostly use the headsets in the mornings, when students arrive anxious after disruptions at home or changes to their usual school routine. Since introducing the headsets, the school has seen fewer students being asked to leave lessons.

“Students are a lot calmer,” she said. Students now proactively ask to use the program when they feel overwhelmed, rather than simply walking out of class.

This is a really novel idea and one of the best applications of VR technology in recent times. Currently, it’s being tested in 15 schools. If the impact can be replicated over longer durations and across hundreds of schools, VR headsets could become a low-cost, effective way to support struggling students before things escalate.

OpenAI turns its sold-out GPT-5.5 party into a monthlong Codex giveaway for 8,000 developers

OpenAI on Monday began emailing more than 8,000 developers who applied for its invite-only GPT-5.5 party with a surprise consolation prize: a tenfold increase in Codex rate limits on their personal ChatGPT accounts, effective immediately and lasting through June 5.

“We had over 8,000 people express interest in just 24 hours, and while we wish our office was big enough to welcome everyone, we weren’t able to make space for every person who applied,” the company wrote in the email, which VentureBeat obtained. “As a small token of appreciation, we’ve 10x’ed your Codex rate limits until June 5th on your personal ChatGPT account.”

The gift is not limited to the lucky few who scored invitations to the party itself. Everyone who raised their hand — whether they were accepted, waitlisted, or turned away — received the rate limit boost, according to the email and confirmed by multiple recipients on social media.

CEO Sam Altman telegraphed the move on X shortly before inboxes started lighting up. “We are gonna do something nice for everyone who applied for the GPT-5.5 party and that we didn’t have space for,” he wrote. “Hope you enjoy!” The post amassed more than 521,000 views within hours.


What a month of supercharged Codex access actually means for developers

The practical implications are huge. Codex, OpenAI’s AI-powered coding agent, operates under daily usage caps that vary by subscription tier. A tenfold increase to those caps gives developers dramatically more room to prototype, debug, and ship code using GPT-5.5 — which OpenAI says matches GPT-5.4’s per-token latency while performing at a higher level of intelligence and using significantly fewer tokens to complete tasks.

The 31-day window is generous enough to reshape habits. By flooding thousands of developers with expanded access during a critical adoption period, OpenAI is effectively subsidizing the kind of deep, sustained usage that turns a curious trial into a daily dependency. It is a bet that once developers experience Codex at full throttle, they won’t want to go back — and that when the limits reset on June 5, a meaningful number will upgrade their subscriptions to preserve the workflow they’ve built.


An email sent to developers who applied for OpenAI’s invite-only GPT-5.5 party in San Francisco. Applicants who didn’t receive an invite were offered 10x Codex rate limits on their personal ChatGPT accounts through June 5 as “a small token of appreciation.” More than 8,000 people expressed interest within 24 hours, according to the email. (Image: Screenshot provided to VentureBeat)

The developer community responded with a mix of glee and regret. “I’m literally not taking my Codex hat off for the month,” one developer declared on X. Others kicked themselves for not signing up. “That’s the last time I don’t sign up just because I’m not in SF,” one wrote.


Several users raised a question OpenAI has yet to answer publicly: does the boost stack with the existing Pro $200 tier’s 20x multiplier? One user reported that OpenAI support said no — users get whichever limit is higher, not a combined total. “The key question isn’t whether the 10x boost is only for party applicants,” they wrote. “It’s whether it stacks with Pro.”

OpenAI did not immediately respond to a request for comment on whether the boost stacks with Pro-tier limits.

Inside the low-key meetup that an AI planned for itself

The rate limit gift is a sidecar to the main event: “GPT-5.5 on 5/5,” an invite-only gathering running tonight from 5:55 p.m. to 8:55 p.m. PDT at an undisclosed San Francisco venue. OpenAI billed the evening as “a low-key meetup with Sam and the team behind GPT-5.5,” promising food, drinks, community, giveaways, and swag — not a product announcement. Even the address remained secret until invitations were confirmed — a touch of exclusivity that generated its own buzz.

In a detail that doubles as a product demo, Altman revealed that GPT-5.5 itself planned the party. The model proposed the May 5 date, suggested that human developers give the toasts rather than the AI, and recommended setting up a suggestion box for the next-generation model. Altman described this as “weird emergent behavior.” Registrations closed shortly after opening due to overwhelming demand, with Codex handling the selection process.


Altman also extended an unlikely invitation. He publicly asked Elon Musk to attend, saying, “He can come if he wants… the world needs more love.” The gesture arrives amid Musk’s ongoing lawsuit against OpenAI seeking up to $150 billion in damages — a fact that makes the invitation read less like diplomacy and more like performance art.

Anthropic’s competing reception turns a scheduling overlap into a Silicon Valley spectacle

Here is where the story gets interesting. VentureBeat has confirmed that Anthropic is hosting its very own invite-only event in San Francisco on Tuesday evening — a “Media VIP Welcome Reception” at nearly identical times to OpenAI’s party. The reception serves as a warm-up for Anthropic’s Code with Claude developer conference, the company’s second annual gathering focused on its API, CLI tools, and Model Context Protocol (MCP). The conference proper takes place tomorrow.

The scheduling overlap is difficult to dismiss as coincidence. Both companies are hosting developer-focused events on the same evening, in the same city, targeting many of the same people. Whether this was deliberate counter-programming or genuine coincidence, the optics neatly capture where things stand in the industry’s most consequential rivalry.

Anthropic’s conference will feature its executive and product teams discussing Claude Code, agent implementation strategies, and the product roadmap — all squarely aimed at the same developer audience that just received a month of free Codex upgrades from OpenAI.


How Anthropic overtook OpenAI in revenue — and what it means for the coding wars

The dueling cocktail hours are a social manifestation of a far more consequential battle playing out in revenue, developer adoption, and investor confidence — one that has tilted sharply in Anthropic’s favor.

According to Counterpoint Research data, Anthropic surpassed OpenAI for the first time in global LLM revenue market share in Q1 2026, capturing 31.4% compared to OpenAI’s 29%. But the headline near-tie obscures a dramatic structural divergence. Counterpoint estimates Anthropic achieved that share with roughly 134 million monthly active users, compared to approximately 900 million for OpenAI — yielding average monthly revenue per active user of $16.20 for Anthropic versus $2.20 for OpenAI. OpenAI commands massive scale; Anthropic extracts roughly seven times more revenue per user. That gap is the central tension in this rivalry.


Anthropic led all large language model providers in revenue during the first quarter of 2026, claiming 31.4 percent of a $20.7 billion global market — narrowly edging out OpenAI, which held 29 percent despite having nearly seven times as many users. (Source: Counterpoint Research)
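As a back-of-the-envelope check on those per-user figures, here is a small Python sketch using only the numbers quoted above; the quarterly-to-monthly conversion (dividing by three) is an assumption about how Counterpoint derived its per-user averages, not something the firm has confirmed.

# Rough sanity check of the revenue-per-user comparison (figures from the article).
market_q1_usd = 20.7e9          # global LLM revenue, Q1 2026

players = {
    "Anthropic": {"share": 0.314, "mau": 134e6},
    "OpenAI":    {"share": 0.290, "mau": 900e6},
}

for name, p in players.items():
    quarterly_revenue = market_q1_usd * p["share"]
    # Assumption: average monthly revenue per active user = quarterly revenue / 3 / MAU.
    arpu_monthly = quarterly_revenue / 3 / p["mau"]
    print(f"{name}: ~${quarterly_revenue / 1e9:.1f}B in Q1, ~${arpu_monthly:.2f} per user per month")

Run it and the output lands on roughly $16 per user per month for Anthropic and about $2.20 for OpenAI, matching the Counterpoint figures cited above.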

The enterprise shift has been building for over a year. Menlo Ventures — whose portfolio includes Anthropic — estimates the company now captures 40% of enterprise LLM spend, up from 24% the prior year and 12% in 2023, while OpenAI’s share fell to 27% from 50% over the same period. Anthropic has maintained an almost unparalleled 18 months atop the LLM leaderboards for coding, starting with Claude Sonnet 3.5 in June 2024. That dominance in code — AI’s first true killer app — has become the on-ramp to broader enterprise adoption and the engine behind Anthropic’s revenue acceleration.


The top-line numbers tell the rest of the story. Anthropic said earlier this month that its annualized revenue has topped $30 billion, up from $9 billion at the end of 2025, with more than 1,000 business customers now spending over $1 million annually — a figure the company says has more than doubled since February.

Sources familiar with Anthropic’s financials told TechCrunch the run rate is currently closer to $40 billion, driven largely by demand for Claude Code and Cowork. OpenAI, meanwhile, topped $25 billion in annualized revenue as of February, according to Reuters — but the Wall Street Journal reported that the company has recently missed its own projections for user growth and revenue, with CFO Sarah Friar warning colleagues that if growth doesn’t accelerate, the company could face difficulty funding future compute agreements.

The momentum has carried into fundraising at a pace that could redraw the industry’s power map. Anthropic raised $30 billion at a valuation of $380 billion in February. Bloomberg reported last week that the company has begun weighing a fresh funding round that would value it at more than $900 billion, potentially leapfrogging OpenAI as the world’s most valuable AI startup. OpenAI was valued at $852 billion in late March after closing a record-breaking $122 billion funding round. If Anthropic proceeds at the terms described, the company would not only more than double its valuation but would also surpass OpenAI — a reversal that seemed unthinkable six months ago.

Two parties, two visions, and one city at the center of the AI industry’s defining rivalry

For the 8,000-plus developers who applied for the GPT-5.5 party, the immediate value is straightforward: a full month of dramatically expanded Codex usage, free of charge, during a period when both companies are shipping at a breakneck pace. For the industry, the signal is harder to miss. The two most valuable private companies in the world are competing for developer loyalty with a combination of free perks, invite-only parties, celebrity CEO engagement, and multi-billion-dollar enterprise ventures — all within the same 24-hour window, in the same seven-square-mile city.


The broader stakes extend well beyond cocktail napkins and rate limits. Both companies are barreling toward potential IPOs. Both are courting the same Wall Street backers for enterprise joint ventures. Both are racing to define how the next generation of software gets built — and by whom. The developers caught between them are, for the moment, the beneficiaries of a spending war that shows no sign of cooling.

Tonight in San Francisco, the Anthropic reception starts at 5pm. The OpenAI party starts at 5:55pm. VentureBeat will be at both. And somewhere between the two venues, 8,000 developers who couldn’t get into either room will be burning through their new rate limits — building the future with whichever model they opened first.


Michael Nunez is an editor at VentureBeat covering artificial intelligence. He is attending both the Anthropic Code with Claude Media VIP Welcome Reception and the OpenAI GPT-5.5 launch party tonight in San Francisco.

This story is developing and will be updated.

Camera Slider: Build Instead Of Buy Goes Awry

[TheHyperFix] had a problem. He’d spied a brilliant camera slider, but didn’t want to lay out big money to acquire it. The natural solution? Build one! Only, life is seldom so straightforward.

The plan was straightforward – take an old, broken 3D printer and repurpose its parts to make a camera slider instead. The build started with an aluminium extrusion, some V-slot wheels, and a 3D-printed platform to hold the camera. Moving the platform was done via a belt drive, using the stepper motors and some software to tell the original printer controller what to do.

Unfortunately, the early experiments failed when the controller blew up under load. An Arduino was subbed in with a CNC shield, which got things back on track, and [TheHyperFix] had a somewhat functional slider with relatively jerky movement. A tough iterative design process ensued to work out problems with bearings and the Arduino’s pulse limit, among others.
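Jerky movement like this is typical of driving steppers at a constant pulse rate with no acceleration ramp. As a purely illustrative sketch (not [TheHyperFix]’s actual firmware), the Python below computes a trapezoidal speed profile, the kind of ramping that stepper libraries such as AccelStepper or GRBL apply so a load eases into and out of each move; all the numbers are arbitrary example values.

def trapezoidal_profile(total_steps, max_rate, accel):
    """Step rates (steps/s) for each step of a move: ramp up, cruise, ramp down.

    total_steps: length of the move in motor steps
    max_rate:    top speed in steps per second
    accel:       acceleration in steps per second squared
    """
    # Steps needed to reach max_rate from standstill: v^2 = 2*a*s  ->  s = v^2 / (2a)
    ramp_steps = min(int(max_rate ** 2 / (2 * accel)), total_steps // 2)
    rates = []
    for i in range(total_steps):
        if i < ramp_steps:                       # accelerating
            v = (2 * accel * (i + 1)) ** 0.5
        elif i >= total_steps - ramp_steps:      # decelerating
            v = (2 * accel * (total_steps - i)) ** 0.5
        else:                                    # cruising
            v = max_rate
        rates.append(min(v, max_rate))
    return rates

# Example: a 2,000-step move, 1,000 steps/s top speed, 2,000 steps/s^2 acceleration.
profile = trapezoidal_profile(2000, 1000, 2000)
print(f"start {profile[0]:.0f}, peak {max(profile):.0f}, end {profile[-1]:.0f} steps/s")

The fast end of such a profile is also where software-generated step pulses on a small microcontroller start to run out of headroom, which is the sort of pulse-rate limit mentioned above.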


As it stands, the slider is semi-functional, but it’s not quite well behaved enough to use for professional shooting. Still, for a first attempt at electronics prototyping, we think [TheHyperFix] did a pretty solid job. It might not be all there yet, but it’s well on the way, and a great deal was learned in the process.

If you’re trying to build a camera slider in a hurry, you might like to try recreating one of the builds we’ve featured before. Video after the break.
