Don’t mistake the Steam Controller for a PC controller. Even though its main function is to play PC games, Valve’s new gamepad communicates with Steam, and only Steam. This is not a general controller for your PC, Android or iOS devices, and it’s certainly not compatible with any console on the market today, unless you count the handheld Steam Deck. In order to play a game with the Steam Controller, you have to boot it up through Steam. (More on this later.)
Valve’s end goal for the Steam Controller is compatibility with the Steam Machine, a console that doesn’t yet have a public release date or price point. The Steam Machine will support 4K gaming at 60 fps with FSR, it’ll come with 512GB or 2TB of SSD storage, and it’ll work with the Steam Frame VR headset, as will the Controller. The new Steam Machine was supposed to drop early this year, fulfilling a long-promised dream of PC gaming by moving your entire Steam library to the couch in a compact but powerful box. Due to the memory shortages plaguing the tech industry, the Machine and Frame aren’t here yet, so the Steam Controller is the first step in Valve’s hardware takeover of living room territory. It’s due to come out on May 4, priced at $99.
The Steam Controller represents roughly 13 years of R&D, from the first iteration announced in 2013 through the Steam Deck’s debut in 2022 to today, and that long refinement period clearly paid off.
The Steam Controller is a sturdy and sleek gamepad that stands up to the competition. It’s for Valve diehards, trackpad fanatics and anyone whose main gaming hub is Steam.
Pros
Well-balanced and solidly built
Precise TMR thumbsticks
Trackpads and gyros add flexibility
Long battery life
Cons
It’s built for Steam, for better or worse
Some features won’t be useful until the Steam Frame is out
The Steam Controller is a tidy chonker of a gamepad with a broad, Duke-like face holding two square trackpads beneath the standard analog sticks and face buttons. Despite its extra girth, the Steam Controller feels light, slim and balanced, even in my smaller-than-average hands. The grips are slender and have four circular rear buttons, two per side, that are super satisfying to click even when they don’t do anything in-game. The bumpers, triggers, D-pad and face buttons are shiny black plastic, and all of the controller’s edges are rounded, allowing for a smooth glide between the bumpers and triggers especially. The trackpads don’t get in the way when you don’t need them, but in use, they’re incredibly sensitive and kind of mesmerizing. They look and feel just like the trackpads on the Steam Deck, following the trails of your thumbs with miniature popping bubbles.
The Steam Controller uses tunnel magnetoresistance (TMR) joysticks, a leveled-up version of Hall effect sticks that offers greater precision and long-term stability with virtually no risk of drift. After a few days of use across a range of game genres, including competitive first-person shooters, they’ve proven to be reliable and accurate. In terms of stick precision and feel, I find the Steam Controller comparable to the Razer Wolverine V3 Pro, my PC gamepad of choice. I otherwise much prefer the swappability, rubberized microswitches and crisp clickiness of Razer’s gamepad — but the Wolverine also costs about $100 more and doesn’t come with trackpad capabilities, so we’ll call it a wash.
Sam Rutherford for Engadget
One of the neatest aspects of the Steam Controller is its charging and connection puck, which plugs into your PC or Steam Deck through a USB cable and enables stable wireless play. The puck snaps onto the belly of the controller for charging, and when you hover the gamepad’s connection point over it, it jumps up and latches on like a cute little sucker fish. I don’t know if this behavior is an intentional selling point, but it certainly is for me. The Steam Controller also connects to devices via Bluetooth or with a cable, and in all configurations it’s performed without issue for me. Of course, Bluetooth mode has the highest latency, so that’s mainly for phones and Steam Link play. The puck can support two Steam Controllers at once. Swapping between puck and Bluetooth mode is a simple matter of holding the right bumper and A or B, respectively, when you turn the controller on.
Pressing the power button with the Steam logo wakes up the gamepad, and pressing it twice when you’re connected to a PC launches Steam in Big Picture mode. The Steam Controller feels like a natural extension of Valve’s storefront, and with its matte black finish and bubbled edges, it’ll be familiar to anyone who’s fallen in love with a Steam Deck these past few years.
I tested out the controller on my PC with Steam games and non-Steam games (added to my Steam library first, of course — seriously, more on that later), and in my living room with my Steam Deck acting as a makeshift, low-powered Steam Machine. On PC I played The Seance of Blake Manor, Creature Kitchen and Overwatch, and on Steam Deck I played Blake Manor, Demonschool and Balatro. Whether connected with Bluetooth, the puck or USB, the Steam Controller provided seamless play and no noticeable latency. The distance from my couch to the puck nestled behind my Steam Deck is about eight feet, and I didn’t feel a frame drop while cosplaying as a Steam Machine owner. I also never ran into battery issues, but that’s not shocking considering Valve’s claim that the gamepad has more than 35 hours on a single charge. In my testing, the battery barely registered a drop after multiple hours of playtime, and I was happy to snap on the charging puck whenever I wanted to set the controller down.
Valve notes that battery life may be lower when playing with the Steam Frame: the Steam Controller has infrared LEDs for headset tracking, which will drain the battery a little faster, along with gyroscopic sensors that some VR games will have you waving around. Since the Steam Frame isn’t out yet, I wasn’t able to test some of the controller’s more interesting features.
Even against players using a keyboard and mouse in competitive Overwatch matches, I won games and earned awards, passing my personal ultimate test of a controller’s capabilities. When it comes to Overwatch, I’m mostly comparing the Steam Controller to Sony’s DualSense, and it feels surprisingly similar. I enjoy the Steam Controller’s smooth slide between the bumpers and triggers, though its haptic feedback is more subtle than the DualSense’s, lacking in the analog sticks particularly. Much like with the Steam Deck, I haven’t found a consistent use case for the trackpads on the Steam Controller, but I appreciate their inclusion, the accessibility factor, and the fact that they aren’t otherwise intrusive. Now, just add a Playdate crank and I’m really sold.
The Steam Controller is a clear and unmistakable signal that Valve is joining the console wars, and perhaps by patient and diligent design, it’s appearing at a vulnerable time. Xbox is fumbling the current generation and attempting to redefine its place in the console market amid a significant leadership shakeup, while Sony and Nintendo are carrying on with standard hardware upgrade cycles in a landscape that’s based less on platform exclusivity every day. Right now there’s room for a robust PC-based storefront to stake its claim on couch gaming, and voila, here’s Valve with the Steam Machine and Steam Controller.
Similarly to the way Valve used Half-Life 2 to get people to download Steam in 2004, the Steam Controller pushes players to fully consolidate their PC libraries in its own ecosystem. You’ll have to add games that use their own launchers, like Overwatch, Valorant, Minecraft and Fortnite, to your Steam library before you can play them using Valve’s controller. This is a small inconvenience, since it takes just a few clicks to add a non-Steam game to your profile.
(Welcome to later.) However, I don’t enjoy doing it. As I was browsing through files to add Overwatch to my Steam library, I couldn’t help thinking that it would have been pretty easy for Valve to add a switch that would let the Steam Controller communicate with any PC game. Maybe it’s a touch of oppositional defiant disorder, but I despise being coerced into behaviors that are designed to serve a corporation’s market control over my own workflow, especially in my personal spaces.
Now more than ever, I value my ability to choose — which businesses I work with, where I store my software, how I play — and the Steam launcher requirement is another small expansion of Valve’s incredible power in the PC games industry. It’s too easy to say, “most of my games are already on Steam, no big deal,” and use the Controller as an excuse to consolidate them all on Valve’s launcher. Suddenly, Steam is where you begin and end every gaming session, rather than just most of them. Obviously and especially with the coming rollout of the Steam Machine, this is the reality Valve wants: a rich industry utterly reliant on its platform of DRM, shitty revenue splits and random opaque censorship. It’s the situation Microsoft, Apple or Epic would also want for themselves, but the main difference is that this future is actually within reach for Valve, and the Steam Controller is a tiny part of the plan. If willing and unforced support of a monopoly makes you bristle as well, feel free to stick with 8BitDo.
Truly though, I get it. The Steam Controller doesn’t come with a PC switch because it’s not a PC controller. It’s for controlling Steam, a service that’s become synonymous with PC and handheld gaming, and is now creeping onto the living-room scene. The Steam Controller is designed to follow you everywhere Steam is, for all your gaming needs across every screen forever and always — and there is something soothing about that idea in a Brave New World Soma kind of way. A PC controller? That’s far too limited, from Valve’s perspective.
Encroaching corporate dystopia aside, the Steam Controller is a sturdy and sleek gamepad that stands up to the competition. It’s for Valve diehards, trackpad fanatics and anyone whose main gaming hub is Steam. Which, to be clear, is a massive market that’s only poised to grow.
Scientists in Switzerland have developed a prototype camera capable of capturing clear three-dimensional images of neutrinos, particles so elusive they often earn the label “ghost particles.” Neutrinos arrive in huge numbers from the sun and other sources throughout space, yet they interact so rarely with ordinary matter that trillions pass through a person every second without any effect.
For years, detecting them required large subterranean containers filled with specific liquids, or massive arrays of sensors buried deep in the ground, to capture the exceedingly rare occasions when one collides with an atom. That works, but it’s expensive and doesn’t let scientists track the particles’ paths very well. A team of researchers from ETH Zurich and EPFL, however, has recently developed a new device called PLATON that promises a far easier way of doing things.
The idea is to use a solid block of a special material called a scintillator, which emits tiny flashes of light whenever a particle passes through it. Attached to that block is a specially designed camera with a grid of tiny lenses and a sensor that can pick up individual photons of light, along with the exact time each one arrives.
When a neutrino interacts with the block, it produces a brief burst of energy that is converted into light, and the camera captures the resulting pattern of flashes from multiple angles at once. Software then uses the timing information and sophisticated pattern recognition to build a comprehensive 3D representation of the particle’s route. Previous detectors sliced the scintillator into thousands of small pieces and threaded them with fibers to determine where particles went, which made them large, expensive and unwieldy. PLATON lets the camera do the heavy lifting.
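The core trick, inferring a position from the relative timing of light arrivals at many sensors, can be sketched in a few lines. This toy Python example is purely illustrative: the sensor layout, grid search and units are all invented here, and the real PLATON software uses far more sophisticated single-photon pattern recognition. For each candidate point, it checks how consistently a single emission time explains every sensor's arrival time.

```python
import itertools
import math

SPEED = 1.0  # propagation speed of light in the medium, arbitrary units


def localize(sensors, arrival_times, step=0.05, extent=2.0):
    """Grid-search the emission point that best explains photon arrival times.

    For a candidate point p, each sensor i should see light at
    t_i = t0 + dist(p, sensor_i) / SPEED for one unknown t0, so we score
    p by how consistent the implied t0 values are across sensors.
    """
    best_point, best_score = None, float("inf")
    steps = int(extent / step)
    for ix, iy in itertools.product(range(-steps, steps + 1), repeat=2):
        p = (ix * step, iy * step)
        implied_t0 = [
            t - math.dist(p, s) / SPEED for s, t in zip(sensors, arrival_times)
        ]
        mean_t0 = sum(implied_t0) / len(implied_t0)
        score = sum((t - mean_t0) ** 2 for t in implied_t0)
        if score < best_score:
            best_point, best_score = p, score
    return best_point


# Synthetic event: light emitted at (0.4, -0.3) at time t0 = 1.0,
# seen by four sensors at the corners of the block.
true_point, t0 = (0.4, -0.3), 1.0
sensors = [(-1.5, -1.5), (1.5, -1.5), (-1.5, 1.5), (1.5, 1.5)]
times = [t0 + math.dist(true_point, s) / SPEED for s in sensors]
print(localize(sensors, times))  # recovers a point near (0.4, -0.3)
```

The sub-millimeter accuracy the researchers report comes from the same principle applied to real photon timing data, just with far denser sampling and a full track rather than a single point.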
In laboratory experiments, the new device performed admirably, reconstructing tracks from electrons emitted by a known radioactive source. Simulations based on genuine neutrino interactions indicated that it could pinpoint interaction points with an accuracy of roughly a fifth of a millimeter. According to team member Davide Sgalaberna, this simplifies the process of building particle detectors while also providing a high level of 3D precision.
This technology opens a host of new possibilities for future studies, enabling researchers to investigate neutrinos more efficiently, and it could be valuable elsewhere too: medical imaging, for one, depends on accurate readings from inside materials. Of course, there is still a lot of work to be done before it can be scaled up to the size required for large science projects, but for now, this prototype looks promising. As the technology advances and becomes more accessible, it has the potential to reveal a great deal more about these fundamental particles and how they help form the cosmos.
A Chinese national accused of carrying out cyberespionage operations for China’s intelligence services has been extradited from Italy to the United States to face criminal charges.
According to a DOJ announcement, Xu Zewei is alleged to be a contract hacker for China’s Ministry of State Security (MSS) who conducted breaches between February 2020 and June 2021 as part of a coordinated intelligence-gathering campaign.
Xu was previously arrested in Milan, Italy, in 2025 at the request of U.S. authorities for his alleged ties to the Silk Typhoon hacking group.
The indictment links Xu to a series of attacks attributed to the Chinese Silk Typhoon hacking group, also known as Hafnium, which exploited vulnerabilities in internet-facing systems to gain initial access to victim networks. Once inside, the attackers performed reconnaissance, deployed malware, and stole data.
The DOJ says Xu was involved in intrusions targeting COVID-19 research organizations, where the attackers allegedly sought to obtain data on vaccines, treatments, and testing.
U.S. authorities also allege that Xu and his co-conspirators exploited Microsoft Exchange Server zero-day vulnerabilities beginning in late 2020 as part of a widespread campaign to compromise email servers and gain access to victim networks.
After breaching vulnerable Exchange servers, attackers deployed web shells that allowed them to access mailboxes, move laterally within networks, and exfiltrate data. The widespread exploitation led to global incidents impacting thousands of organizations before patches were fully available.
Prosecutors say Xu and his co-defendant operated as contracted hackers under the direction of MSS officials.
“According to court documents, officers of the PRC’s Ministry of State Security’s (MSS) Shanghai State Security Bureau (SSSB) directed Xu to conduct this hacking,” the DOJ said.
“When Xu conducted the computer intrusions, he allegedly worked for a company named Shanghai Powerock Network Co., Ltd. (Powerock),” the announcement adds, describing it as one of many firms used to carry out hacking operations on behalf of the Chinese government.
Xu is expected to appear in federal court, where he faces multiple counts related to computer intrusions and conspiracy.
As its name suggests, generative AI is designed to generate material in response to prompts, drawing on statistical patterns learned from huge quantities of training data. But it can also draw on those patterns to analyze other texts, and that’s a widely used application too. Writing in The Argument, Kelsey Piper encountered an interesting variant of that approach:
Recently, Anthropic released a new version of Claude, Opus 4.7. I did what I usually do when a new AI model is released by Google, OpenAI, or Anthropic and ran a bunch of tests on it to see what it can do. One of those tests is to paste in some text from unpublished drafts of mine and ask it to guess the author.
…
From only the above text [not shown here], 125 words, Claude Opus 4.7 informed me that the likeliest author is Kelsey Piper. This is an Opus 4.7-specific power; ChatGPT guessed Yglesias, and Gemini guessed Scott Alexander. I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode.
As Piper admits:
this is far from an impossible feat of style identification — a lot of my writing is public on the internet, and this is clearly the start of a political column, narrowing the possible authors down dramatically.
She went on to input less obvious material, for example an “unpublished draft of a school progress report in a completely different register,” which Claude again attributed to her.
An unpublished fantasy novel produced a similar result, although:
in that case it took more like 500 words for Claude to inform me that it’s the work of Kelsey Piper (whereas ChatGPT flattered me by guessing that I’m real fantasy novelist K.J. Parker).
And finally, “a college application essay I wrote 15 years ago, when my prose style was vastly worse and frankly embarrassing to reread”:
“Kelsey Piper,” said Claude, and in this case, also ChatGPT.
Piper comments:
Right now, today’s AI tools probably can be used to deanonymize any writer who has a large public corpus of writing under their real name and also writes anonymously, unless they have been extremely careful, for years, to make sure that nothing written under their secondary account has the stylistic fingerprints of their primary one. Many academics and industry researchers, for instance, have reported being identified from a draft or in the middle of a chat.
And she concludes:
Whatever goods anonymity ever offered us, we will have to do without them. I don’t want the anonymous posters to all go away and for everyone to frantically delete all their old internet presence before it surfaces, but more than anything, I don’t want them to be surprised.
Those links to other cases of unpublished material being recognized by AI show that Piper’s experience was not a one-off, although the results remain in the realm of anecdata. But even if imperfect, the ability of generative AI to carry out this kind of analysis quickly and often accurately represents an important new option for the well-established field of stylometry. Wikipedia explains:
Stylometry may be used to unmask pseudonymous or anonymous authors, or to reveal some information about the author short of a full identification. Authors may use adversarial stylometry to resist this identification by eliminating their own stylistic characteristics without changing the meaningful content of their communications. It can defeat analyses that do not account for its possibility, but the ultimate effectiveness of stylometry in an adversarial environment is uncertain: stylometric identification may not be reliable, but nor can non-identification be guaranteed; adversarial stylometry’s practice itself may be detectable.
The limitations of stylometry were demonstrated in John Carreyrou’s attempt to reveal the true identity of Bitcoin’s pseudonymous creator, Satoshi Nakamoto, published in The New York Times a few weeks ago. Carreyrou concluded that various real-world coincidences plus linguistic evidence indicated that Bitcoin was created by the 55-year-old British computer scientist Adam Back, something Back denies. Carreyrou’s attempts to use computerized stylometry (not the AI services Piper drew on) were unsatisfactory, and he eventually adopted a more hands-on approach to text analysis, looking at Satoshi’s vocabulary, grammatical and hyphenation mistakes, and use of British spellings.
Despite Carreyrou’s lack of success, stylometric analysis by generative AI is likely to become more common in many disciplines for the simple reason it is so quick, easy and cheap to carry out. Even if its results are unreliable, people may find it useful as a stimulus for further investigations. And as we know, the fact that generative AI systems can churn out nonsense hasn’t stopped hundreds of millions of people from using and trusting them anyway.
Online trading platform Robinhood’s account creation process was exploited by threat actors to inject phishing messages into legitimate emails, tricking users into believing their accounts had suspicious activity.
Starting last night, Robinhood customers began receiving “Your recent login to Robinhood” emails stating that an “Unrecognized Device Linked to Your Account” was detected, containing unusual IP addresses and partial phone numbers.
“We detected a login attempt from a device that is not recognized,” reads the phishing email. “If this was not you, please review your account activity immediately to secure your account.”
Included in the email was a button titled “Review Activity Now,” which led to a phishing site at robinhood[.]casevaultreview[.]com that has since been taken down.
However, screenshots on Reddit indicate that the site was likely used to try to steal Robinhood credentials.
What made the emails convincing is that they came from the legitimate Robinhood email address noreply@robinhood.com and passed SPF and DKIM email security checks.
Attackers abused Robinhood to generate phishing emails by exploiting a flaw in the company’s onboarding process that allowed them to inject arbitrary HTML into its account confirmation emails.
BleepingComputer confirmed that when a new Robinhood account is registered, the company automatically sends a “Your recent login to Robinhood” email to the associated address, containing the registration time, IP address, device information, and approximate location.
To inject the phishing message, threat actors modified their device metadata fields to include embedded HTML, which Robinhood did not properly sanitize.
This HTML was then injected into the Device: field of the account creation email, causing it to render as a fake “Unrecognized Device Linked to Your Account” message.
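As an illustration of the class of bug being described, here is a hypothetical sketch in Python; the template, field name and payload are invented, not Robinhood's actual code. Raw string interpolation lets attacker-controlled metadata become live markup, while escaping the value first renders it as inert text.

```python
import html

# Illustrative email template with an unsanitized metadata slot.
TEMPLATE = "<p>Device: {device}</p>"


def render_unsafe(device):
    # Vulnerable: raw interpolation lets attackers inject arbitrary markup.
    return TEMPLATE.format(device=device)


def render_safe(device):
    # Fix: escape untrusted input before it reaches the HTML body.
    return TEMPLATE.format(device=html.escape(device))


payload = 'Chrome</p><a href="https://phish.example/review">Review Activity Now</a><p>'
print(render_unsafe(payload))  # attacker markup renders as a live link
print(render_safe(payload))    # markup arrives as inert, escaped text
```

The general lesson is old but evergreen: any user-supplied value that ends up in an HTML context, including transactional email bodies, must be escaped or stripped.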
The attackers also used Gmail’s dot aliasing behavior, where adding periods to an address does not change its destination, allowing them to register accounts using variations of real email addresses while still delivering the messages to the intended recipients.
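Gmail's dot-aliasing rule is simple enough to sketch: dots in the local part are ignored for delivery, so a defender deduplicating sign-ups can normalize addresses first. This helper is illustrative only; it handles just the Gmail-specific dot rule and ignores plus-aliases and other providers' conventions.

```python
def normalize_gmail(address):
    """Collapse dot aliases for Gmail addresses; leave other domains alone."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"


print(normalize_gmail("j.o.h.n.doe@gmail.com"))  # johndoe@gmail.com
print(normalize_gmail("john.doe@example.com"))   # unchanged: not a Gmail domain
```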
As a result, recipients received what appeared to be a standard login alert, but with an embedded phishing section warning of “unrecognized activity” and urging them to review their account.
Robinhood confirmed the incident in a statement posted to X.
“On Sunday evening, some customers received a falsified email from noreply@robinhood.com with the subject line ‘Your recent login to Robinhood,’” posted Robinhood.
“This phishing attempt was made possible by an abuse of the account creation flow. It was not a breach of our systems or customer accounts, and personal information and funds were not impacted.”
BleepingComputer has confirmed that Robinhood has fixed this flaw by removing the Device: field that was previously abused from their account creation emails.
Robinhood advises users who received the message to delete it and avoid clicking any links.
OpenAI is developing a smartphone where AI agents replace apps, with Qualcomm and MediaTek jointly designing the custom processor and Luxshare exclusively manufacturing, according to Ming-Chi Kuo. The analyst projects 300-400 million annual shipments, targeting mass production in 2028. Qualcomm surged 13% on the report. The supply chain is credible (Luxshare builds AirPods; Qualcomm powers 75% of the Galaxy S26 line), but OpenAI has never shipped hardware, and every previous AI device (Humane Pin, Rabbit R1) has failed. This is OpenAI’s second hardware track alongside the Jony Ive project.
OpenAI is developing a smartphone built around AI agents rather than apps, with Qualcomm and MediaTek jointly designing the custom processor and Luxshare Precision Industry co-designing and exclusively manufacturing the device, according to Ming-Chi Kuo, the TF International Securities analyst whose Apple supply-chain intelligence has made him the most closely followed hardware analyst in the industry. Kuo projects 300 to 400 million annual shipments if the device succeeds, a figure that would exceed Apple’s iPhone volumes and place the phone in direct competition with the two companies that control roughly 40% of the global smartphone market. Specifications and the supplier list are expected to be finalised by late 2026 or the first quarter of 2027, with mass production targeted for 2028. Qualcomm’s shares surged as much as 13% in premarket trading on the report. None of the three companies, Qualcomm, OpenAI, or MediaTek, confirmed the partnership. This is an analyst report, not an announcement, but the supply chain Kuo describes is not speculative. It is the supply chain that already builds the devices you own.
The concept
The phone Kuo describes is not a smartphone with an AI assistant. It is a device where the AI agent is the interface and the app is obsolete. Instead of downloading applications and navigating screens, users would interact with agents that handle tasks directly: ordering transport, booking restaurants, managing email, conducting research, writing messages. The architecture would process lighter tasks on-device, including context awareness, memory management, and smaller AI models, while offloading complex inference to the cloud. The device would maintain what Kuo calls “full real-time state,” continuously capturing a user’s location, activity, communication, and environmental context to feed the agents. This is the vision Qualcomm CEO Cristiano Amon has been articulating throughout 2026: that AI agents will replace the mobile operating system and apps as the primary interaction layer, and that the hardware must be designed from scratch to support continuous, power-efficient AI inference rather than retrofitting existing chipsets with neural processing units bolted on.
The concept is separate from OpenAI’s other hardware project with Jony Ive, the former Apple design chief whose company io is developing a non-phone device, reportedly a smart speaker with a camera first, then glasses, a lamp, and earbuds, with the first product expected in early 2027. OpenAI is pursuing two parallel hardware strategies: a device that reimagines what a personal computer looks like without a screen, and a device that keeps the phone form factor but replaces everything that runs on it. Apple is testing AI smart glasses with a custom chip, cameras, and Siri powered by a Gemini model, targeting 2027. The question of whether AI lives in your phone, on your face, or in a speaker on your counter is being answered simultaneously by every major technology company, each with a different bet. OpenAI is betting on all of them at once.
The credibility of the report rests on the supply chain, not the concept. Luxshare Precision Industry is a major Apple supplier that assembles AirPods, Apple Watch components, and an increasing share of iPhones. Qualcomm’s Snapdragon 8 Elite Gen 5 powers 75% of Samsung’s Galaxy S26 series and has, for the first time, overtaken Apple in raw multi-core and GPU performance. MediaTek’s Dimensity 9500 matches Qualcomm and Apple in CPU performance at lower cost with better efficiency. These are not the suppliers of a concept phone. They are the suppliers of phones that ship in the hundreds of millions. Qualcomm’s acquisition of Edge Impulse, an edge AI developer platform, in 2025 signalled the company’s strategic commitment to on-device AI inference across device categories. The Snapdragon 8 Elite Gen 5’s Hexagon NPU delivers 37% faster AI processing than its predecessor, supports agentic AI that learns from user behaviour, and includes a personal knowledge graph and continuous context awareness through an upgraded sensing hub. Qualcomm is also reportedly building custom 3D DRAM specifically optimised for AI workloads on mobile devices. The silicon for the phone Kuo describes does not need to be invented. The components exist. The question is whether the software paradigm works.
The financial context matters. Qualcomm’s stock was trading at $149.84 before the report, down from a 52-week high of $205.95, with earnings growth declining 46.9% and gross margins down to 55.1%. The company reports earnings on April 29, two days after the Kuo report. In February, Bloomberg reported that Qualcomm gave a “tepid forecast in sign of shaky phone market.” An OpenAI partnership would represent a new revenue stream in a market where Qualcomm’s traditional business, supplying modems and processors to phone manufacturers, is under pressure from Apple’s efforts to develop its own modem chips and MediaTek’s encroachment on the premium Android segment. Qualcomm would be helping build a device designed to challenge the iPhone while continuing to supply Apple with modem chips through at least 2027, a business relationship that embodies the contradictions of the semiconductor supply chain.
The graveyard
The AI device category has produced more failures than products. The Humane AI Pin, a $699 wearable with a laser projector that beamed information onto the user’s palm, was permanently bricked on February 28, 2025, when HP acquired Humane’s remnants for $116 million and shut down the servers. The Rabbit R1, a $199 “large action model” device, attracted 100,000 pre-orders but retained only 5,000 active users after five months, a 95% abandonment rate. Its founder admitted the device launched too early. Both failed for the same reason: they created new form factors that solved no problem the smartphone did not already solve, at price points that demanded the user carry a second device. The OpenAI phone takes a fundamentally different approach. It is not an additional device. It is a replacement for the device 4.7 billion people already carry, in the same form factor, with the same basic capabilities, but with a radically different interaction model. Whether that is enough to avoid the graveyard depends on whether agents can do what apps do, better, faster, and without the friction of learning a new paradigm.
AI is already reshaping the mobile app ecosystem, with “vibe-coded” applications flooding the App Store in such volume that Apple has had to crack down on submissions. The EU is preparing to force Google to open Android to rival AI assistants, including ChatGPT and Claude, under the Digital Markets Act, requiring equal system-level access for voice activation and deep integration. The smartphone’s software layer is already in flux. Samsung’s Galaxy S26 runs a triple AI engine with Gemini, Perplexity, and Bixby. Google’s Pixel 10 hands off multi-step tasks to background AI agents. Apple Intelligence processes queries on-device with an emphasis on privacy. Every major phone manufacturer is moving toward AI-first experiences, but all of them are constrained by backward compatibility with billions of existing apps and the operating systems that run them. OpenAI’s advantage, if the phone materialises, is that it has no legacy. It can design a clean-slate interaction model without worrying about whether Instagram’s notification system works or whether the banking app renders correctly. The disadvantage is that users may not want a clean slate. They may want their apps and an AI assistant that works around them, which is what Samsung, Google, and Apple already offer.
The question
Kuo’s projection of 300 to 400 million annual shipments would make the OpenAI phone one of the most successful consumer electronics products in history. For context, Apple ships roughly 230 million iPhones per year. Samsung ships approximately 220 million Galaxy phones. A new entrant reaching those volumes has no precedent in the smartphone era. The projection reflects the scale of OpenAI’s ambition, not a reasonable base case for a first-generation device from a company that has never manufactured hardware, sold through carriers, managed warranty claims, or operated a supply chain at consumer scale. The Jony Ive device carries the same risk: a company whose expertise is in large language models attempting to become a consumer electronics manufacturer, a transition that requires competencies in industrial design, supply chain management, retail distribution, and after-sales service that OpenAI does not have and cannot acquire by hiring one designer, however talented.
The 2028 timeline gives OpenAI two years to finalise specifications, secure component supply, build manufacturing capacity, develop the agent-first software platform, negotiate carrier partnerships, establish retail distribution, and convince hundreds of millions of consumers to abandon their iPhones and Galaxy phones for a device built by a company that has never shipped hardware. The Humane AI Pin took longer than that and shipped a device that lasted nine months before being permanently disabled. The ambition is extraordinary. The supply chain is credible. The concept addresses a genuine architectural limitation of current smartphones, which were designed around apps in 2007 and have not fundamentally changed since. But the distance between a credible supply chain report and a shipping product that displaces the iPhone is the distance between a thesis and a business, and every company in the AI device graveyard had a thesis too.
The pet food aisle has never been more crowded, which is exactly why Hillary Coles says she was skeptical when Atomic Labs came calling.
“I had the same reaction you did,” Coles told me on a call Monday afternoon, a day before her new company, Golden Child, opened for business. “Surely that can’t be what people need.”
Coles co-founded Hims & Hers with Andrew Dudum, Jack Abraham, and Joe Spector back in 2016 and spent seven years there overseeing brand, physical products, and consumer strategy before taking a year and a half off to have her children. She describes herself as “a consumer person first” who happened to land in healthcare. Dog food wasn’t “on the bingo card,” as she put it.
The pitch that won her over was rooted less in dog food specifically than in a methodology. Atomic, the startup studio founded by Abraham, runs what it calls “painted door tests” — lightweight experiments designed to reveal what consumers will actually do, not just what they say they want. When Atomic ran those tests in the pet food space, interest was clear. The team then studied 11,000 reviews of existing fresh dog food products and found recurring complaints: inconvenience, dogs getting sick, food that felt like a chore to prepare and serve. “We started to peel the onion,” Coles said.
What they found, she and her co-founder Quentin Lacornerie argue, is an industry that hasn’t meaningfully innovated in about 12 years — a claim that strains credulity, given how crowded the premium and human-grade segment has become — but one they say is borne out by those 11,000 customer reviews, which show persistent complaints about existing fresh food options even as the humans feeding their dogs have dramatically raised their expectations.
Lacornerie, who was part of the founding team at Hims & Hers and spent years spearheading its personalized growth strategy, says there are lots of parallels to the early days of that company. “Wellness has eclipsed Big Pharma by 4x in market cap,” he noted. Pet parents who take collagen for joint health, read ingredient labels, and track their own nutrition increasingly want the same rigor applied to what goes in their dog’s bowl.
Golden Child is launching with two “five-star” products sold direct-to-consumer for now: a fresh frozen meal system and, more intriguingly, a “drizzle” — a shelf-stable liquid topper that can be added to whatever a dog is already eating, whether that’s Golden Child’s own food, kibble, or something else. The drizzle retails for $19.95 a bottle. The meal system starts at $3 a day and is sold primarily on subscription, though a starter box is available for people who want to ease into the relationship.
The drizzle is the more novel idea and, presumably, the higher-margin one. I asked Coles whether the company had considered just focusing on that product. “Like all entrepreneurs, we have a lot of opportunities to build out worlds,” she answered. “This is just the first inning.”
The food itself is made in the U.S. across multiple manufacturing facilities, using human-grade supply chains — a harder thing to establish than it sounds, said Lacornerie. The recipes were developed by a PhD in animal nutrition; Megan Sparkle, one of only roughly 80 board-certified veterinary nutritionists in the country; and (naturally) a classically trained chef, one who has worked with Ina Garten and Guy Fieri, says Lacornerie.
The company also developed what it’s calling a “protein block,” a way of delivering chicken and beef with an enhanced amino acid profile that standard meat cuts alone don’t provide, says Coles.
Golden Child is announcing $37 million in total funding today as it comes out of stealth — a seed round and a Series A led by Redpoint Ventures, with Atomic and A-Star also participating. That’s a meaningful amount for a company selling dog food, but Lacornerie says that doing it right requires actual experts who don’t just dial it in. Indeed, the nutritionists and chef are among the company’s 12 employees, on staff rather than serving as advisors.
The brand name is broad by design. When I asked whether Golden Child might eventually expand into shampoos, travel gear, even some form of veterinary access — getting medication for a dog is its own particular bureaucratic headache — Coles didn’t deny it. “There’s a lot of interest and excitement from pet parents to involve their dogs in all aspects of their life,” she said. The goal, eventually, is to earn a place as a household brand, not just a food company.
Atomic has had notable successes along with some stumbles. Hims & Hers, now 10 years old, is a publicly traded company with a nearly $7 billion market cap. OpenStore, the e-commerce roll-up co-founded in 2021 by Abraham and venture investor Keith Rabois, tells a different story: after years of splashy coverage and more than $150 million in venture funding, it recently shuttered.
The build-up to Deborah’s (Jean Smart) solo Madison Square Garden show is in full swing… and this week, we’re getting a double helping of Hacks.
After foiling her own original plan to complete the EGOT (Emmy, Grammy, Oscar, Tony) awards clean-sweep in the premiere of Hacks season 5, Deborah has decided to make her big comeback by selling out a show at Madison Square Garden.
There’s just one hitch: legally, she’s not allowed to perform at all under the terms of the injunction against her — something Ava (Hannah Einbinder) is scrupulously trying to sidestep.
So what’s Deborah’s next step? And when are Hacks season 5 episodes 4 and 5 dropping on HBO Max?
What time can I watch Hacks season 5 episodes 4 and 5 on HBO Max?
For US viewers, Hacks season 5 episodes 4 and 5 will drop together on Thursday, April 30, at 9 pm ET.
Internationally, these are the timings to look out for:
US – 6pm PT / 9pm ET
Canada – 6pm PT / 9pm ET
India – Friday, May 1 at 6:30am IST
Singapore – Friday, May 1 at 9am SGT
Australia – Friday, May 1 at 11am AEST
New Zealand – Friday, May 1 at 1pm NZST
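All of these local times derive from the single 9 pm Eastern air time. As a quick sketch, they can be reproduced with Python’s `zoneinfo` (the year used here is an assumption for illustration, since the article doesn’t state one):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Air time: Thursday, April 30 at 9 pm US Eastern (year assumed for illustration)
air_time = datetime(2026, 4, 30, 21, 0, tzinfo=ZoneInfo("America/New_York"))

# Convert the one canonical air time into a few local markets
for label, tz in [
    ("US Pacific", "America/Los_Angeles"),
    ("India", "Asia/Kolkata"),
    ("Singapore", "Asia/Singapore"),
]:
    local = air_time.astimezone(ZoneInfo(tz))
    print(f"{label}: {local:%A %I:%M %p}")
```

Because `zoneinfo` applies daylight-saving rules automatically, the same conversion works regardless of which side of a DST boundary the air date falls on.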
In the UK, Hacks season 5 has been airing on Sky and NOW TV since April 17, running a week behind the US.
There’s currently no confirmation on when it will come to HBO Max UK.
When do new episodes of Hacks season 5 come out?
Da gals. (Image credit: HBO)
According to HBO Max, Hacks season 5 will consist of 10 episodes, with new episodes every week (except April 30 and May 7, when two episodes will drop).
Oriveti, based in Hong Kong, is expanding its lineup with the Dynabird, a $99 in-ear monitor and one of the first models under its new “bleqk” sub-brand, which stands for “Basic Line Exquisite Quality Kept.” The Oriveti Dynabird follows the recently reviewed Purecaster and takes a more stripped-down approach, pairing an all-metal shell with a minimalist design and a clear focus on value. Oriveti has built a steady presence in the midrange segment with IEMs that emphasize balanced tuning and solid construction, competing with Moondrop, FiiO, and DUNU for listeners who want strong performance without stepping into flagship pricing.
About My Preferences: This review is a subjective assessment and is therefore tinged by my personal preferences. While I try to mitigate this as much as possible during my review process, I’d be lying if I said my biases are completely erased. So for you, my readers, keep this in mind: I prefer solid sub bass, textured mid bass, a slightly warm midrange, and extended treble, with mild sensitivity to higher frequencies.
Testing equipment and standards can be found here.
Oriveti Dynabird IEM
Oriveti Dynabird Key Specs:
Driver: 1 x 9.2mm dynamic driver with beryllium coating
Impedance: 16 ohms
Sensitivity: 108 ±3 dB/mW at 1000 Hz
Frequency Response: 20 Hz to 20 kHz
Distortion: 0.08%
Cable: 0.78 mm 2 pin detachable cable
Termination: Gold plated 3.5 mm stereo plug
Shell: CNC machined aluminum
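For readers curious how the impedance and sensitivity figures above translate into loudness, the standard conversion is P = V²/R, then SPL = sensitivity + 10·log₁₀(P in mW). A minimal sketch (the 0.5 Vrms source voltage is an assumed example, not an Oriveti figure):

```python
import math

def spl_at_voltage(v_rms, impedance_ohm=16.0, sens_db_per_mw=108.0):
    """Estimate SPL in dB from source voltage, using P = V^2 / R.

    Defaults match the Dynabird's spec sheet (16 ohms, 108 dB/mW).
    """
    power_mw = (v_rms ** 2) / impedance_ohm * 1000.0  # watts -> milliwatts
    return sens_db_per_mw + 10.0 * math.log10(power_mw)

# e.g. a hypothetical 0.5 Vrms source into the Dynabird's 16-ohm load
print(round(spl_at_voltage(0.5), 1))
```

The takeaway is the usual one for low-impedance, high-sensitivity IEMs: even modest dongle-level voltage drives the Dynabird well past safe listening levels, so no dedicated amplifier is needed.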
Build
The Dynabird features metal shells with detachable 2-pin cables. The tops of the shells host the 2-pin sockets, which sit firmly in place.
The Dynabird’s cable is fairly thick, soft, and doesn’t retain memory. Strain relief is used generously, which inspires confidence in long term durability. It only comes with a 3.5 mm termination, so if you need USB-C or 4.4 mm you’ll have to go aftermarket.
Comfort
Comfort is a metric that relies heavily on factors influenced by your individual ear anatomy. Mileage will vary.
I found the Dynabird to be of average comfort. It didn’t particularly offend my ears, but if I wasn’t careful with how I positioned it, its angular metal shells could easily create irritation over time. I was able to get a decent passive seal, but would have really liked to use foam eartips to better secure it in place.
Accessories
Inside the box you’ll find:
1x Semi-hard carrying case
1x 2-pin 3.5mm cable
7x Pairs silicone eartips
For $99, this is an acceptable, if somewhat underwhelming, accessory package. The included carrying case is spacious enough to store the IEMs, spare eartips, and even a compact dongle DAC. It is also well padded, so it will protect the IEMs from drops and knocks alike. The included eartips get the job done, but don’t seal perfectly in my ears. I quite liked the wide-bore silicone eartips and found that they helped open up the bottom end of the Dynabird’s frequency response. I’d have liked to see Oriveti include a pair or two of foam eartips, as that would have greatly improved the ergonomics for me.
Listening
The Dynabird has a mild V-shaped sound signature. Sub-bass and mid-bass are both lifted, with slightly more emphasis on the mid-bass. There is some sub-bass roll-off that begins around 50 Hz, so it has punch, but not the deepest foundation.
The midrange has a touch of warmth, which gives vocals a reasonable amount of body. The upper-mids are pushed forward and peak around 2 to 3 kHz without going completely off the rails. From there, the Dynabird moves into a bright and active treble, with additional peaks above 8 kHz and 12 kHz. That gives it air and perceived detail, but also some upper-midrange and upper-treble grain.
Compared with the Purecaster, the Dynabird is bassier, but it is not especially warm or rich. There is still some sterility to its presentation, made more obvious by the forward midrange. Vocals and instruments can sound a little artificial or thin depending on the recording, and I did find myself skipping tracks when the Dynabird clashed with a song’s mastering or tonal balance.
The Dynabird is not offensive, but it does need refinement. As a single dynamic-driver IEM, it feels like Oriveti could have pulled more weight from the low-end to better balance the brightness and energy in the upper-mids and lower-treble. The technical ability is clearly there. The tuning just needs more seasoning and less lab coat.
Comparisons
Comparisons are selected solely based on what I think is interesting. If you would like me to add more comparisons, feel free to make a request in the comments below.
KBear SR-8
The SR-8 is a four-driver hybrid IEM with resin shells and metal nozzles. The Dynabird costs the same as the SR-8 and has nicer-feeling metal shells. The SR-8 features removable 2-pin cables and comes with a fixed 3.5mm termination. The Dynabird has a thicker, but simpler-looking cable that also terminates in 3.5mm. Both IEMs come with decent accessories, though I like the case and eartips that come with the SR-8 a little more.
Sonically, the SR-8 is more U-shaped than the Dynabird. The Dynabird delivers less sub-bass and slightly less mid-bass impact, giving it a leaner lower register across the board. Its mid-bass is tighter and more precise, though, occasionally generating a greater sensation of tactility than the SR-8. The SR-8 leans warm, but the Dynabird’s mids are warmer still. The SR-8 has less upper-midrange emphasis and avoids the occasional graininess the Dynabird shows on high-pitched vocals. It also has less lower-treble and slightly less upper-treble presence than the Dynabird. Overall, the Dynabird is brighter and grainier-sounding, while the SR-8 demonstrates a more natural airiness.
Between the two IEMs, I’m going with the SR-8. Its smoother timbre and less dramatic upper register make for much easier listening. Its bass is comparatively lifted, giving it a deeper and more substantial presence in bass-heavy genres. That greater flexibility and tonal completeness make it the more appealing choice, even considering the Dynabird’s superior construction and material choices.
Kefine Klean SV
The Kefine Klean SV is a single dynamic-driver IEM with metal shells and swappable tuning nozzles. It costs $55 and includes a detachable cable with your choice of a 3.5mm, 4.4mm, or USB-C termination. The Dynabird costs roughly twice what the Klean SV costs, coming in at $99. The Klean SV comes with similar-quality accessories, though its cable is a bit thinner and its case is a bit smaller.
Compared to the Klean SV with its black nozzles, the Dynabird has a warmer midrange and a slightly forward lower-treble. The Dynabird has a broadly more emphasized upper-treble, though the Klean SV does lean a little harder into the 10 kHz to 12 kHz range. The Klean SV has less forward mid-bass and sub-bass, though it demonstrates similar bass extension. The Dynabird matches the Klean SV’s technical capabilities in the lower register, delivering sufficient tactility and control to render bass-bound textures.
The Dynabird, while warmer and bassier than the Klean SV, lacks a bit of its finesse. The Klean SV, as forward as its upper register is, still feels a bit more cohesive and put-together than the Dynabird. The Klean SV’s smoother delivery of treble and vocal detail also makes it a less tiring companion for longer listening sessions. For those reasons, I’m going with the Klean SV.
Juzear Defiant
The Juzear Defiant is a $99 hybrid IEM featuring resin shells and metal nozzles. It has a modular detachable 2-pin cable similar in thickness to the Dynabird’s. The Defiant comes with a similarly useful case, but a wider and higher-quality selection of eartips. It is also lighter and more ergonomic than the Dynabird, delivering greater comfort and isolation during longer listening sessions.
Sonically, the Dynabird has a thicker midrange, but a softer, less precise mid-bass than the Defiant. The two extend similarly low, though the Dynabird picks up a little additional weight below 50Hz. The Defiant has a less pushed-up upper midrange, giving a more natural and cohesive vocal and instrumental presence. The Dynabird has greater upper-treble presence, giving it a brighter, bloomier disposition, along with some grain that the Defiant’s less aggressive upper-treble avoids.
The Dynabird, while built better than the Defiant, is my second choice here. The Defiant’s more natural tuning is far more flexible across mastering styles and disparate genres. That, combined with its more robust accessory package, makes it the more appealing IEM out of the box.
The Bottom Line
The Dynabird gets the fundamentals right but stumbles where it matters most. Build quality is strong, the metal shells feel durable, and the driver has the technical ability to deliver a solid performance. At $99, it’s clearly positioned as a value play, and the intent is obvious.
The issue is tuning. The elevated upper-mids and energetic lower-treble can push vocals and instruments too far forward, sometimes sounding thin or unnatural depending on the recording. There is enough mid-bass punch to keep things engaging, but not enough low-end weight to balance out the brightness.
This is best suited for listeners who prefer a mid-forward, brighter presentation and prioritize clarity over warmth. If you are sensitive to treble or looking for a more natural, fuller sound, there are better options in this price range. The Dynabird shows promise, but it needs more refinement to stand out in a very competitive field.
Pros:
Well-built metal shells that feel durable and look the part
Practical carrying case that actually earns its keep
Mid-bass has punch and decent texture
Vocals cut through clearly and remain intelligible
Works across a wide range of genres without falling apart
$99 pricing keeps it within reach
Cons:
Sub-bass rolls off earlier than it should
No foam tips included, which feels like a miss at this price
Ergonomics are average at best
Upper mids can come across grainy
Vocals and instruments are pushed forward in a way that can sound unnatural
Treble can bloom at times and draw too much attention
The latest brain-computer interface could help people recover from severe depression. Motif Neurotech announced Monday that the US Food and Drug Administration has approved a human study to trial the company’s blueberry-sized brain implant that sits in the skull and delivers electrical stimulation to treat depression.
The Houston-based startup, founded in 2022, is part of a budding industry pursuing technology to read and interpret brain signals. While other companies exploring similar technology, like Elon Musk’s Neuralink, Paradromics, and Synchron, are developing devices to enable paralyzed people to communicate and use computers, Motif is aiming to ease depression in people who have not benefited from medication.
The company’s device is implanted in the skull just above the dura, the brain’s protective membrane. It targets the central executive network, a part of the brain that is responsible for high-level cognitive functions and is underactive in major depressive disorder. The implant emits specific patterns of stimulation to turn “on” this network.
Motif’s device would allow patients to receive therapeutic brain stimulation at home. “Through frequent electrical stimulation, we think we can drive that neuroplasticity that creates stronger connectivity within the central executive network for patients with depression, so that they can get out of bed in the morning, call their friends, go to the gym,” says Jacob Robinson, Motif’s cofounder and CEO.
Courtesy of Motif
Electrical stimulation has been used for decades to treat depression, and Motif’s approach is just the latest iteration. Electroconvulsive or “shock” therapy began in the 1930s and is still used today in cases where patients don’t benefit from antidepressants. Deep brain stimulation, which involves surgically implanting electrodes into the brain, is occasionally used experimentally but is not FDA approved. A much milder form of stimulation known as transcranial magnetic stimulation, or TMS, was approved in 2008. While it can be highly effective, it typically requires a lengthy regimen of five sessions a week for six weeks.
A study from 2021 found that during a 12-month period in the United States, nearly 9 million adults were undergoing treatment for major depressive disorder, and of those, almost 3 million were considered to have treatment-resistant depression, when symptoms do not improve after at least two, and often more, antidepressant medications.
Motif’s device can be implanted in a 20-minute outpatient procedure without the need for brain surgery. It’s powered by wireless magnetoelectric technology that Robinson developed while at Rice University and is charged with a baseball cap that patients will wear when receiving the stimulation.
Progress may be slow, but it’s still progress. While I’ve been talking about the importance of video game preservation as a function of our own overall cultural preservation, very few people out there are actually trying to do something about it. Among those doers are Ross Scott and others involved in the Stop Killing Games movement. Scott, a YouTuber, started this whole thing in 2024 and really got it rolling on a second attempt in 2025. In that short period, the movement managed to secure allies in the EU and British governments, run a successful signature campaign to get the EU to open the discussion on legislative and enforcement remedies, and get that hearing on the schedule.
Now that Stop Killing Games has the status of a proposed law, it faces real legislative scrutiny. The movement brought its fight against digital obsolescence to the European Parliament this month, its first genuine political foothold. The hearing, organized by Ross Scott and Moritz Katzner, aimed to expose the harmful industry practice of companies disabling online games entirely. The movement argues that publishers who stop supporting products they sold at retail are engaging in false advertising that violates consumer rights.
During the proceeding, advocates for the proposed legislation laid out an organized framework for lawmakers. Its central requirement would oblige software firms either to build offline functionality into their products or to open-source their server code when a game reaches end of life. Scott and Katzner maintained that these products are vital cultural heritage items that consumers own through their property rights. Committee members heard evidence that abrupt game shutdowns take away users’ investments of money and time while failing to provide any proper remedy.
As a more direct reminder, below are the articulated goals of the movement.
Games sold must be left in a functional state
Games sold must require no further connection to the publisher or affiliated parties to function
The above also applies to games that have sold microtransactions to customers
The above cannot be superseded by end user license agreements
The hearing itself included witness testimony from consumer rights groups in the EU, which is really important. While cultural preservation clearly remains a primary goal of the movement, that goal was cleverly wrapped within claims that there are already laws on the books designed to protect customer rights and property when purchased that many game publishers appear to be pretty clearly violating. Within the hearing itself it was also revealed that the movement has gained even further support from other politicians and advocacy groups within the EU.
It was, by all accounts, a really positive hearing for those of us who care about game preservation. But we do need to temper our expectations as to the timeline for what comes next, because the EU is a big ol’ bureaucracy and this is all going to take a great deal of time.
Despite positive feedback from committee leaders, advocates caution the gaming community not to expect instant changes to policy. Moritz Katzner explained that the hearing was an effective platform to present their case, but it is only the first step in a lengthy administrative procedure. The campaign achieved its primary objective by bringing the subject into official political debate; it now needs to navigate the EU’s legislative machinery to convert these consumer rights violations into legal protections enforced across Europe.
And that may, or likely will, take years. But it’s a fight worth sticking out, if you care at all about art preservation and the rights of the public to retain ownership of the things they’ve paid for. And, frankly, if you care about the public domain, which you damned well should.
I’m going to keep coming back to this point, because I think it’s pretty much unassailable. If the purpose of the limited monopoly our copyright system grants to a publisher of art is to benefit the public, both through the creation of more art and through those creations eventually entering the public domain for everyone’s benefit, then video games designed so that publishers can disappear them on a whim break the copyright bargain. It goes unrecognized too often that if a work of art, including a video game, isn’t guaranteed to end up in the public domain eventually, it shouldn’t be granted a copyright in the first place.
But, for now, it’s nice to see the Stop Killing Games movement having taken the first legislative step. All that’s left now is a whole lot of waiting, advocacy, and combat to be done with adverse lobbying dollars.