Tech

Why Sierra the Supercomputer Had to Die

Supercomputers can be measured in several ways, but the vital statistic is their ability to perform floating-point operations per second, or flops. Flopping as fast as possible is what makes you successful. At her peak, Sierra could hit 94.64 petaflops—94.64 quadrillion floating-point operations—per second. El Capitan, at 1.809 exaflops, is about 19 times faster. In late 2025, he was officially declared the world’s fastest supercomputer. Sierra’s juice, Neely says, was no longer worth the squeeze.
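The roughly 19x figure follows directly from the two peak ratings (1 exaflop = 1,000 petaflops); a quick back-of-the-envelope check:

```python
# Sanity-check the speedup quoted above, using the article's peak figures.
SIERRA_PETAFLOPS = 94.64       # Sierra's peak: 94.64 quadrillion flops/s
EL_CAPITAN_EXAFLOPS = 1.809    # El Capitan's peak: 1.809 quintillion flops/s

# Convert El Capitan's rating to petaflops, then compare.
speedup = EL_CAPITAN_EXAFLOPS * 1000 / SIERRA_PETAFLOPS
print(f"El Capitan is ~{speedup:.1f}x faster than Sierra")  # ~19.1x
```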

There was no big red button, no giant lever, that turned Sierra off. Someone could’ve just cut the cords, sure, but that’s not the recommended procedure. First, Sierra’s user scientists were warned, via email, to save their work. Then a DNR was formally instituted—no new parts.

The decommissioning proceeded in phases, starting with the compute nodes and the rack switches—management nodes are last, since they’re needed until the very end. The process involves running scripts that, digitally, shut the computer down, and then hard power switches are flipped off too. There’s also a dehydration. When she was alive, Sierra could get quite hot, so the lab recirculated thousands of gallons of water per minute, funneled through veiny pipes that came up from under her floorboards. As she approached death, that water had to be drained. It was tested by safety staff first, to ensure it was an environmentally healthy pH.

Some of the pipes that kept Sierra cool.

Photograph: Balazs Gardi

Tech

New York sues Valve for promoting illegal gambling via game loot boxes

New York Attorney General Letitia James sued video game developer and publisher Valve Corporation for using game loot boxes to facilitate illegal gambling activities among children and teenagers.

Valve operates Steam, one of the largest digital game distribution services in the world, offering access to thousands of games for millions of users worldwide. At the time this article was published, Steam was reporting over 29 million players online, with nearly 7.5 million playing a game.

Attorney General James said the gaming giant is violating the state’s gambling laws by offering players the opportunity to win random virtual prizes that can be exchanged for real money, in a process described as being similar to a slot machine.

“Illegal gambling can be harmful and lead to serious addiction problems, especially for our young people,” said James. “Valve has made billions of dollars by letting children and adults alike illegally gamble for the chance to win valuable virtual prizes. These features are addictive, harmful, and illegal, and my office is suing to stop Valve’s illegal conduct and protect New Yorkers.”

The lawsuit targets loot boxes in Counter-Strike 2, Team Fortress 2, and Dota 2 that award players with random items, such as weapon skins or character accessories. However, the odds of winning rare items are allegedly deliberately skewed by Valve to make them far more valuable, leading the total value of market items to balloon to an estimated $4.3 billion as of March 2025, according to Attorney General James.

Some individual items (such as AK-47 skins) have even fetched prices of over $1 million, making Steam accounts a frequent target for hackers and scammers.

The lawsuit also highlights the potential harm to children, as they may be drawn into loot box purchases to win rare items and boost social status within gaming communities. “Children who are introduced to gambling are four times more likely to develop a gambling problem later in life than those who are not,” according to research cited in the Wednesday press release.

Attorney General James has asked the court to permanently bar Valve from operating loot box features in the state, to require the company to return all profits generated by the practice, and to impose fines for the alleged violations.

In January 2025, Genshin Impact developer Cognosphere (aka Hoyoverse) agreed to pay $20 million to settle a U.S. Federal Trade Commission (FTC) lawsuit over unfair marketing of loot boxes to minors, obscuring their actual costs, and misleading players about the odds of winning prizes.

BleepingComputer reached out to a Valve spokesperson for comment, but a response was not immediately available.

Tech

ASUS 2026 Creator Series Launched: ProArt GoPro Edition, ROG Flow Z13-KJP, TUF A14

ASUS gaming laptops have been the cream of the crop for some time now, as evidenced by my review of last year’s Zephyrus G14 and Strix G16. Building on that momentum, the Taiwanese laptop maker has announced its 2026 Creator lineup in India under the campaign “Built for Originals.” The new portfolio expands ASUS’s AI-powered laptop range with three key models: the ProArt GoPro Edition (PX13), the limited-edition ROG Flow Z13-KJP in collaboration with KOJIMA PRODUCTIONS, and the 2026 TUF Gaming A14. Here’s everything you need to know about them.

ProArt GoPro Edition (PX13)

Image of the ProArt GoPro Edition (PX13) laptop bundle

The ProArt GoPro Edition (PX13) is made for people who need serious editing performance on the move. It runs on the AMD Ryzen AI Max+ 395 processor with up to 50 TOPS NPU and supports up to 128GB LPDDR5X memory. Up front, you get a 13-inch 3K touchscreen display with 100% DCI-P3 color coverage and Pantone validation, along with stylus support. ASUS has also added a dedicated GoPro hotkey for faster editing workflows, along with creative tools like ASUS DialPad, StoryCube AI for media organization, and MuseTree for AI-assisted idea mapping.

Weighing just 1.39 kg and featuring a 360-degree hinge, the PX13 is clearly built for portability. Buyers also get a hard-shell carry case and modular memory foam packaging designed to securely store GoPro accessories. As part of the launch, ASUS is offering a GoPro MAX2 bundle (worth ₹62,500) at a 35% discount for ProArt GoPro Edition buyers. The bundle includes the GoPro MAX2 360 camera, extension pole, two Enduro batteries, and a 64GB SanDisk microSD card.

ROG Flow Z13-KJP

Back design of the Asus ROG Flow Z13-KJP

The ROG Flow Z13-KJP is a limited-edition 2-in-1 gaming device created in collaboration with KOJIMA PRODUCTIONS and artist Yoji Shinkawa. Inspired by the Ludens sci-fi aesthetic, the device features CNC-milled aluminium, carbon fiber detailing, custom keycaps, laser-etched vents, and themed packaging. Under the hood, it packs an AMD Ryzen AI Max+ 395 processor with Radeon 8060S graphics and up to 128GB of LPDDR5X unified memory. The 13.4-inch 2.5K ROG Nebula display supports a 180Hz refresh rate, 3ms response time, 100% DCI-P3 color gamut, and 500 nits brightness.

Advanced vapor chamber cooling and dual Arc Flow fans should help sustain performance in its compact chassis. Buyers will also receive a complimentary PC game code for DEATH STRANDING 2: ON THE BEACH, redeemable via Armoury Crate. Pre-orders begin at 12 PM on February 26, 2026, with availability starting March 4. ASUS is also offering a 2-year warranty extension and 3 years of accidental damage protection (worth up to ₹27,299) for ₹1 as a pre-order benefit. The ROG Flow Z13-KJP starts at ₹3,79,990.

TUF Gaming A14 (2026)

Asus TUF Gaming A14 2026 image

The 2026 TUF Gaming A14 is positioned as a portable performance machine for entry- to mid-level creators and gamers. It is powered by the AMD Ryzen AI Max+ 392 processor with Radeon 8060S graphics and integrates a 12-core Zen 5 CPU with RDNA 3.5 graphics. It comes with 32GB LPDDR5X unified memory and a 1TB PCIe 4.0 SSD, with an additional 2280 NVMe slot supporting up to 2TB expansion. The 73Wh battery supports fast charging and Type-C charging for better portability.

Despite its power, the laptop weighs just 1.48 kg and measures 1.69 cm thin. The 14-inch 2.5K display features a 165Hz refresh rate, 3ms response time, 100% sRGB coverage, and AMD FreeSync Premium support. The device is MIL-STD-810H certified and includes ports such as USB 4 and a microSD card reader. The TUF Gaming A14 starts at ₹1,79,990 and will be available via ASUS Exclusive Stores, ASUS eShop, Flipkart, Amazon, Croma, Reliance Digital, and other retail outlets.

Pricing & Availability

Model | Starting Price | Availability
ROG Flow Z13-KJP | ₹3,79,990 | Pre-orders from February 26, 2026; on shelf from March 4, 2026
ProArt GoPro Edition (PX13) | ₹3,34,990 | Available from February 26, 2026
TUF Gaming A14 (2026) | ₹1,79,990 | Available from February 26, 2026

Tech

ChatGPT sucks at being a real robot

This story was originally published in The Highlight, Vox’s member-exclusive magazine.

There’s something sad about seeing a humanoid robot lying on the floor. Without any electricity, these bipedal machines can’t stand up, so if they’re powered down and not hanging from a winch, they’re sprawled out on the floor, staring up at you, helpless.

That’s how I met Atlas a couple of months ago. I’d seen the robot on YouTube a hundred times, running obstacle courses and doing backflips. Then I saw it on the floor of a lab at MIT. It was just lying there. The contrast is jarring, if only because humanoid robots have become so much more capable and ubiquitous since Atlas got famous on YouTube.

Across town at Boston Dynamics, the company that makes Atlas, a newer version of the humanoid robot had learned not only to walk but also to drop things and pick them back up instinctively, thanks to a single artificial intelligence model that controls its movement. Some of these next-generation Atlas robots will soon be working on factory floors — and may venture further. Thanks in part to AI, general-purpose humanoids of all types seem inevitable.

“In Shenzhen, you can already see them walking down the street every once in a while,” Russ Tedrake told me back at MIT. “You’ll start seeing them in your life in places that are probably dull, dirty, and dangerous.”

Tedrake runs the Robot Locomotion Group at the MIT Computer Science and Artificial Intelligence Lab, also known as CSAIL, and he co-led the project that produced the latest AI-powered Atlas. Walking was once the hard thing for robots to learn, but not anymore. Tedrake’s group has shifted focus from teaching robots how to move to helping them understand and interact with the world through software, namely AI. They’re not the only ones.

In the United States, venture capital investment in robotics startups grew from $42.6 million in 2020 to nearly $2.8 billion in 2025. Morgan Stanley predicts the cumulative global sales of humanoids will reach 900,000 in 2030 and explode to more than 1 billion by 2050, the vast majority of which will be for industrial and commercial purposes. Some believe these robots will ultimately replace human labor, ushering in a new global economic order. After all, we designed the world for humans, so humanoids should be able to navigate it with ease and do what we do.

an illustration of one nervous person and three robots all transporting brown boxes together in a line

Janik Söllner for Vox

They won’t all be factory workers, if certain startups get their way. A company called 1X Technologies has started taking preorders for its $20,000 home robot, Neo, which wears clothes, does dishes, and fetches snacks from the fridge. Figure AI introduced its Figure 03 humanoid robot, which also does chores. Sunday Robotics said it would have fully autonomous robots making coffee in beta testers’ homes next year.

So far, we’ve seen a lot of demos of these AI-powered home robots and promises from the industrial humanoid makers, but not much in the way of a new global economic order. Demos of home robots, like the 1X Neo, have relied on human operators, making these automatons, in practice, more like puppets. Reports suggest that Figure AI and Apptronik have only one or two robots on manufacturing floors at any given time, usually doing menial tasks. That’s a proof of concept, not a threat to the human workforce.

You can think of all these robots as the physical embodiment of AI, or just embodied AI. This is what happens when you put AI into a physical system, enabling it to interact with the real world. Whether that’s in the form of a humanoid robot or an autonomous car, it’s the next frontier for hardware and, arguably, technological progress writ large.

Embodied AI is already transforming how farming works, how we move goods around the world, and what’s possible in surgical theaters. We might be just one or two breakthroughs away from walking, talking, thinking machines that can work alongside us, unlocking a whole new realm of possibilities. “Might” is the key word there.

“If we’re looking for robots that will work side by side with us in the next couple of years, I don’t think it will be humanoids,” Daniela Rus, director of CSAIL, told me not long after I left Tedrake’s lab. “Humanoids are really complicated, and we have to make them better. And in order to make them better, we have to make AI better.”

So to understand the gap between the hype around humanoids and the technology’s real promise, you have to know what AI can and can’t do for robots. You also, unfortunately, have to try to understand what Elon Musk has been up to at Tesla for the past five years.

It’s still embarrassing to watch the part of the Tesla AI Day presentation in 2021 when a human person dressed in a robot costume appears on stage dancing to dubstep music. Musk eventually stops the dance and announces that Tesla, “a robotics company,” will have a prototype of a general-purpose humanoid robot, now known as Optimus, the following year. Not many people believed him, and now, years later, Tesla still has not delivered a fully functional Optimus. Never afraid to make a prediction, Musk told audiences at Davos in January 2026 that Tesla’s robot will go on sale next year.

“People took him seriously because he had a great track record,” said Ken Goldberg, a roboticist at the University of California-Berkeley and co-founder of Ambi Robotics. “I think people were inspired by that.”

You can imagine why people got excited, though. With the Optimus robot, Elon Musk promised to eliminate poverty and offer shareholders “infinite” profits. He said engineers could effectively translate Tesla’s self-driving car technology into software that could power autonomous robots that could work in factories or help around the house. It’s a version of the same vision humanoid robotics startups are chasing today, albeit colored by several years of Musk’s unfulfilled promises.

We now know that Optimus struggles with a lot of the same problems as other attempts at general-purpose humanoids. It often requires humans to remotely operate it, and it struggles with dexterity and precision. The 1X Neo, likewise, needed a human’s help to open a refrigerator door and collapsed onto the floor in a demo for a New York Times journalist last year. The hardware seems capable enough. Optimus can dance, and Neo can fold clothes, albeit a bit clumsily. But they don’t yet understand physics. They don’t know how to plan or to improvise. They certainly can’t think.

“People in general get too excited by the idea of the robot and not the reality,” said Rodney Brooks, co-founder of iRobot, makers of the Roomba robot vacuum. Brooks, a former CSAIL director, has written extensively and skeptically about humanoid robots.

Clearly, there’s a gap between what’s happening in research labs and what’s being deployed in the real world. Some of the optimism around humanoids is based on good science, though. In 2023, Tedrake coauthored a landmark paper with Tony Zhao, co-founder and CEO of Sunday Robotics, that outlined a novel method for training robots to move like humans. It involves humans performing the task wearing sensor-laden gloves that send data to an AI model that enables the robot to figure out how to do those tasks. This complemented work Tedrake was doing at the Toyota Research Institute that used the same kinds of methods AI models use to generate images to generate robot behavior. You’ve heard of large language models, or LLMs. Tedrake calls these large behavior models, or LBMs.

It makes sense. By watching humans do things over and over, these AI models collect enough data to generate new behaviors that can adapt to changing environments. Folding laundry is a popular example of a task that requires nimble hands and better brains. If a robot picks up a shirt and the fabric flops down in an unexpected way, it needs to figure out how to handle that uncertainty. You can’t simply program it to know what to do when there are so many variables. You can, however, teach it to learn.
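A toy sketch of that learn-by-watching idea (purely illustrative, a nearest-neighbor stand-in rather than the diffusion-based method the researchers actually use, with made-up synthetic demos): record (observation, action) pairs from human demonstrations, then have the robot reuse the action from the most similar recorded situation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": observation = 2D position of a shirt corner,
# action = the 2D gripper move a human made in that situation
# (here, roughly "nudge it halfway toward the center").
demo_obs = rng.uniform(-1, 1, size=(2000, 2))
demo_act = -0.5 * demo_obs + rng.normal(0, 0.01, size=(2000, 2))

def policy(obs):
    """Nearest-neighbor policy: copy the demonstrated action from the closest recorded state."""
    i = np.argmin(np.linalg.norm(demo_obs - obs, axis=1))
    return demo_act[i]

# A state never seen in the demos still gets a sensible action
# (roughly [-0.4, 0.15] for this input).
print(policy(np.array([0.8, -0.3])))
```

Real systems replace the lookup with a learned model that generalizes far beyond the recorded states, but the data flow, from human demonstration to robot action, is the same.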

That’s what makes the lemonade demo so impressive. Some of Rus’s students at CSAIL have been teaching a humanoid robot named Ruby to make lemonade — something that you might want a robot butler to do one day — by wearing sensors that measure not only the movements but the forces involved. It’s a combination of delicate movements, like pouring sugar, and strong ones, like lifting a jug of water. I watched Ruby do this without spilling a drop. It hadn’t been programmed to make lemonade. It had learned.

The real challenge is getting this method to scale. One way is simply to brute-force it: Employ thousands of humans to perform basic tasks, like folding laundry, to build foundation models for the physical world. Foundation models are large AI models, trained on massive datasets, that can be adapted to specific tasks like generating text, images, or, in this case, robot behavior. You can also get humans to teleoperate countless robots in order to train these models. These so-called arm farms already exist in warehouses in Eastern Europe, and they’re about as dystopian as they sound.

Another option is YouTube. There are a lot of how-to videos on YouTube, and some researchers think that feeding them all into an AI model will provide enough data to give robots a better understanding of how the world works. These two-dimensional videos are obviously limited, if only because they can’t tell us anything about the physics of the objects in the frame. The same goes for synthetic data, which involves a computer rapidly and repeatedly carrying out a task in a simulation. The upside here, of course, is more data, more quickly. The downside is that the data isn’t as good, especially when it comes to physical forces like friction and torque, which also happen to be the most important for robot dexterity.

“Physics is a tough task to master,” Brooks said. “And if you have a robot, which is not good with physics, in the presence of people, it doesn’t end well.”

an illustration of a robot butler tripping up some stairs. Food and drinks fly everywhere.

Janik Söllner for Vox

That’s not even taking into account the many other bottlenecks facing robotics right now. While components have gotten cheaper — you can buy a humanoid robot right now for less than $6,000, compared to the $75,000 it cost to buy Boston Dynamics’ small, four-legged robot Spot five years ago — batteries represent a major bottleneck for robotics, limiting the run time of most humanoids to two to four hours.

Then you have the problem with processing power. The AI models that can make humanoids more human require massive amounts of compute. If that’s done in the cloud, you’ve got latency issues, preventing the robot from reacting in real time. And inevitably, to tie a lot of other constraints into a tidy bundle, the AI is just not good enough.

If you trace the history of AI and the history of robotics back to their origins, you’ll see a braided line. The two technologies have intersected time and again since the birth of the term “artificial intelligence” at a Dartmouth research workshop in the summer of 1956. Then, half a century later, things started heating up on the AI front, when advances in machine learning and powerful processors called GPUs — the things that have now made Nvidia a $5 trillion company — ushered in the era of deep learning. I’m about to throw a few technical terms at you, so bear with me.

Machine learning is a type of AI. It’s when algorithms look for patterns in data and make decisions without being explicitly trained to do so. Deep learning takes it to another level with the help of a machine learning model called a neural network. You can think of a neural network, a concept that’s even older than AI, as a system loosely modeled on the human brain that’s made up of lots of artificial neurons that do math problems. Deep learning uses multilayered neural networks to learn from huge data sets and to make decisions and predictions. Among other accomplishments, neural networks have revolutionized computer vision to improve perception in robots.
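In code, a “neuron doing a math problem” is just a weighted sum followed by a simple nonlinearity, and a deep network is layers of them stacked. A minimal sketch with random, untrained weights (training would adjust them from data; the input values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, w, b):
    # Each row of w is one neuron: a weighted sum of its inputs, then ReLU
    # (the neuron "fires" only when the sum is positive).
    return np.maximum(0.0, w @ x + b)

x = np.array([0.2, -0.7, 1.5])                 # e.g. three pixel intensities
w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 4 neurons reading 3 inputs
w2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2: 2 neurons reading those 4

output = layer(layer(x, w1, b1), w2, b2)
print(output.shape)  # (2,)
```

Production networks have millions or billions of these neurons instead of six, but every layer is doing this same arithmetic.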

There are different architectures for neural networks that can do different things, like recognize images or generate text. One is called a transformer. The “GPT” in ChatGPT stands for “generative pre-trained transformer,” which is a type of large language model, or LLM, that powers many generative AI chatbots. While you’d think LLMs would be good at making robots think, they really aren’t. Then there are diffusion models, which are often used for image generation and, more recently, making robots appear to think. The framework that Tedrake and his coauthors described in their 2023 research into using generative AI to train robots is based on diffusion.

Three things stand out in this very limited explanation of how AI and robots get along. One is that deep learning requires a massive amount of processing power and, as a result, a huge amount of energy. Another is that the latest AI models work with the help of stacks of neural networks whose millions or even billions of artificial neurons do their magic in mysterious and usually inefficient ways. The third is that, while LLMs are good at language, and diffusion models are good at images, we don’t have any models that are good enough at physics to send a 200-pound robot marching into a crowd to shake hands and make friends.

As Josh Tenenbaum, a computational cognitive scientist at MIT, explained to me recently, an LLM can make it easier to talk to a robot, but it’s hardly capable of being the robot’s brains. “You could imagine a system where there’s a language model, there’s a chatbot, you want to talk to your robot,” Tenenbaum said. “Under the hood, what’s actually going on should be something much more like our own brains and minds or other animals, not just humans in terms of how it’s embodied and deals with the world.”

So we need better AI for robots, if not in general. Scientists at CSAIL have been working on a couple of physics-inspired and brain-like technologies they’re calling liquid neural networks and linear optical networks. They both fall into the category of state-space models, which are emerging as an alternative or rival to transformer-based models. Whereas transformer-based models look at all available data to identify what’s important, state-space models are much more efficient, as they maintain a summary of the world that gets updated as new data comes in. It’s closer to how the human brain works.
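The “summary that gets updated” can be sketched in a few lines. The matrices below are arbitrary placeholders (real state-space models, and liquid networks, learn these dynamics, often with input-dependent behavior):

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])       # how the old summary decays and mixes over time
B = np.array([[1.0], [0.5]])     # how a new observation enters the summary

h = np.zeros((2, 1))             # fixed-size state: the running "summary of the world"
for u in [1.0, 0.0, -0.5, 2.0]:  # a stream of incoming observations
    h = A @ h + B * u            # constant work per step, no replaying of history

print(h.ravel())  # approximately [2.3625, 1.056]
```

The contrast with a transformer is the point: attention re-reads the entire input history on every step, while this update touches only the small state h, however long the stream gets.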

To be perfectly honest, I’d never heard of state-space models until Rus, the CSAIL director, told me about them when we chatted in her office a few weeks ago. She pulled up a video to illustrate the difference between a liquid neural network and a traditional model used for self-driving cars. In it, you can see how the traditional model focuses its attention on everything but the road, while the newer state-space model only looks at the road. If I’m riding in that car, by the way, I want the AI that’s watching the road.

“And instead of a hundred thousand neurons,” Rus says, referring to the traditional neural network, “I have only 19.” And here’s where it gets really compelling. She added, “And because I have only 19, I can actually figure out how these neurons fire and what the correlation is between these neurons and the action of the car.”

You may have already heard that we don’t really know how AI works. If newer approaches bring us a little bit closer to comprehension, it certainly seems worth taking them seriously, especially if we’re talking about the kinds of brains we’ll put in humanoid robots.

When a humanoid robot loses power, when electricity stops flowing to the motors that keep it upright, it collapses into a heap of heavy metal parts. This can happen for any number of reasons. Maybe it’s a bug in the code or a lost wifi connection. And when they’re on, humanoids are full of energy as their joints fight gravity or stand ready to bend. If you imagine being on the wrong side of that incredible mechanical power, it’s easy to doubt this technology.

Some companies that make humanoid robots also admit that they’re not very useful yet. They’re too unreliable to help out around the house, and they’re not efficient enough to be helpful in factories. Furthermore, most of the money being spent developing robots is being spent on making them safe around people. When it comes to deploying robots that can contribute to productivity, that can participate in the economy, it makes a lot more sense to make them highly specialized and not human-shaped.

The embodied AI that will transform the world in the near future is what’s already out there. In fact, it’s what’s been out there for years. Early self-driving cars date back to the 1980s, when Ernst Dickmanns put a vision-guided Mercedes van on the streets of Munich. Researchers from Carnegie Mellon University got a minivan to drive itself across the United States in 1995. Now, decades later, Waymo is operating its robotaxi service in a half-dozen American cities, and the company says its AI-powered cars actually make the roads safer for everyone.

Then there are the Roombas of the world, the robots that are designed to do one thing and keep getting better at it. You can include the vast array of increasingly intelligent manufacturing and warehouse robots in this camp too. By 2027, the year Elon Musk is on track to miss his deadline to start selling Optimus humanoids to the public, Amazon will reportedly replace more than 600,000 jobs with robots. These would probably be boring robots, but they’re safe and effective.

Science fiction promised us humanoids, however. Pick an era in human history, in fact, and someone was dreaming about an automaton that could move like us, talk like us, and do all our dirty work. Replicants, androids, the Mechanical Turk — all these humanoid fantasies imagined an intelligent synthetic self.

Reality gave us package-toting platforms on wheels roving around Amazon warehouses and sensor-heavy self-driving cars clogging San Francisco streets. In time, even the skeptics think that humanoids will be possible. Probably not in five years, but maybe in 50, we’ll get artificially intelligent companions who can walk alongside us. They’ll take baby steps.

“Good robots are going to be clumsy at first, and you have to find applications where it’s okay for the robot to make mistakes and then recover,” Tedrake said. “Let’s not do open-heart surgery right away with these things. This is more like folding laundry.”

Tech

Designing for Precision: CAD Tips for Micro-Scale 3D Printing

Micro-scale 3D printing demands a fundamentally different approach to CAD design compared to traditional macro-scale work. With feature sizes smaller than a strand of hair and tolerances measured in single-digit microns, the margin for error is virtually zero. Engineers working in medical devices, electronics, photonics, and microfluidics need to rethink how they handle tolerances, geometry, wall thickness, and support structures when designing at this scale. This whitepaper walks through practical, field-tested tips — from setting appropriate tolerances and reinforcing thin walls to designing functional microfluidic channels and choosing the right materials — so you can reduce failed prints, shorten iteration cycles, and move from concept to validated prototype with confidence.

 


Tech

Tech Moves: Zillow names CPO; AWS leader retires; Microsoft hires AI expert from Apple

Published

on

Zillow Group’s new senior leadership team members, from left: Christopher Roberts, Jon Lim and Marissa Brooks. (Zillow Photos)

Zillow Group announced three promotions to its senior leadership team.

  • After nearly two decades with Zillow, Christopher Roberts is now chief product officer. Roberts helped build Zillow Rentals, which the company touts as the No. 1 platform among renters. His Seattle tech career started at Expedia as a senior vice president of engineering.
  • Jon Lim is moving from VP of product management to SVP of Rentals Product & Business Operations. Prior to Zillow, Lim worked in technical product management roles at Amazon for more than five years.
  • Marissa Brooks is now SVP of corporate affairs, having previously served as VP of communications. Brooks, who works from Scottsdale, Ariz., joined Zillow in 2017.

Earlier this month, Zillow reported its revenue grew 16% last year. Its quarterly revenue, which came in at $654 million, was at the upper end of Zillow’s guidance and slightly higher than investors’ projections.

Jeffrey Kratz. (LinkedIn Photo)

Jeffrey Kratz is retiring from Amazon Web Services after more than 13 years. He’s leaving the role of vice president of Worldwide Public Sector Industry international sales. Throughout his tenure at AWS, Kratz worked with public sector customers, whom he described on LinkedIn as “making the world a better place.”

Kratz previously was employed at crosstown rival Microsoft for two decades where he held a variety of leadership roles in enterprise and public sector sales.

“Now it’s time to recharge, take Luna-the-pup on leisurely walks, spend quality time with Beverly, Andrew, family, and friends,” Kratz wrote, adding that he would work on his golf swing, volunteering and “spending more time with Boards in areas I am passionate about.”

— In another Amazon departure, David Luan, who led the company’s San Francisco-based AGI Lab and oversaw one of its most important agentic AI initiatives, is leaving for an undisclosed new gig. Luan announced his exit on LinkedIn, saying he will leave at the end of the week. He joined Amazon through an acqui-hire deal targeting leaders at the startup Adept. More details are in this GeekWire story.

Manasa Hari. (LinkedIn Photo)

Microsoft nabbed Manasa Hari from Apple to join its California-based AI Super Intelligence program as a partner.

“I’ll be supporting to build the infrastructure for human-centric AI systems that are safe, useful, and aligned with human needs. Inspired by Mustafa Suleyman’s mission to build AI that amplifies human potential, I’m excited about its broad impact on enterprise,” Hari said on LinkedIn.


Hari was previously head of product and program at Apple’s AIML Machine Learning Platform. She also serves on San Francisco State University’s Big Data Advisory Board, which provides input on course curriculum.

Craig Cincotta has moved to chief of staff for Microsoft’s Xbox division. He previously was a general manager of communications for cloud and AI. Cincotta has been with the Redmond, Wash.-based tech giant for more than 17 years over two stretches of employment.

The company last week announced that Asha Sharma is taking the helm of Xbox and Microsoft Gaming, succeeding 38-year Microsoft veteran Phil Spencer. Cincotta and Sharma previously worked together at Seattle-based Porch.

Julie Keef. (LinkedIn Photo)

Julie Keef is leaving her role as VP of product at Redfin, the Seattle real estate platform that was acquired nearly a year ago by Rocket Companies. Keef joined Redfin in 2016 as the first hire on what would become the company’s content marketing team. She was promoted seven times before reaching the VP role, in which she oversaw a team of 50.

“We grew Redfin to the 3rd most visited real estate site, and held on to that spot despite competitors outspending us 5 to 1 on tech and advertising. And we had fun doing it. Even as the housing market turned and investment was hard to come by, the rabid squirrel spirit of Redfin persisted,” Keef said on LinkedIn.


Keef did not disclose her next pursuit.

Ravi Doddivaripall. (BusinessWire Photo)

— Seattle’s DexCare named Ravi Doddivaripall as chief technology officer. Doddivaripall joins the company from XY Retail and has more than 25 years of senior platform and engineering experience. He is based in the San Francisco Bay Area.

DexCare’s software platform helps healthcare providers manage their system’s capacity and schedule appointments. The startup launched at Providence, spinning out from the healthcare network’s digital innovation group in 2021.

“Ravi brings the architectural depth and platform experience to accelerate what we’ve built to help more health systems treat more patients with the resources they already have,” said Matt Blosl, CEO of DexCare, in a statement.

Kelly Brooks. (LinkedIn Photo)

Kelly Brooks is now VP of sales for Read AI, a Seattle startup that sells enterprise productivity software tools using generative AI. Brooks joins from HubSpot where she worked for nearly nine years.

On LinkedIn, Brooks said she was attracted to the company after using its technology.


“I saw immediate value from trialing the product, and got excited by the ways Read improves the transfer and access of information through organizations — perennial challenges I tackled as Chief of Staff at HubSpot,” Brooks wrote. “Inspired, I reached out to [CEO] David Shim to make a connection. The rest is history… or at least a story for another day :)”

— Serial entrepreneur and ShiftAI podcast host Boaz Ashkenazy is now senior director of AI infrastructure for Redapt, a Woodinville, Wash.-based IT company.

Ashkenazy is also co-founder of the legal tech startup Clause and co-founder and CEO of Augmented AI Labs, which builds and tests AI products. Ashkenazy additionally serves on the board of trustees for the Seattle Metropolitan Chamber of Commerce.

Jerome Johnson. (LinkedIn Photo)

Jerome Johnson has a new leadership role at Amazon Web Services, serving as director of its professional services business for U.S. federal, defense and aerospace customers. Johnson, who is based in Arlington, Virginia, has been with AWS for more than 12 years. His previous role was director of solutions architecture for national security and defense customers.

“While my focus expands from architecture leadership to business and delivery leadership, the mission remains the same: Serving customers by helping them solve their hardest problems with AWS,” Johnson wrote on LinkedIn.


Jill Angelo is the new board chair of Special Olympics Washington. Angelo is the founder and past CEO of Gennev, a company billed as the first virtual menopause care provider in the U.S. The business was acquired by Unified Women’s Healthcare, where she served as president until last year.

Angelo is also currently VP of women’s health and commercial partnerships at the wellness startup Oura.

Frieda Chan has left her role as manager of innovation development at the University of Washington’s CoMotion, the institution’s collaborative entrepreneurial hub. Chan is now director of business development at Yale Ventures.

Yoodli shared that Tom Craven is now the enterprise sales leader for the Seattle-based AI roleplay startup.


William Bal is now VP of growth for EdgeRunner AI, a Seattle-based defense technology company that raised $12 million last year.

Tech

Factor Offers High Protein Meal Delivery Options (2026)

I should probably add the disclaimer that I like to cook, was a professional chef for many years, and my family of five rarely eats anything other than home cooked meals. But I get it. Many people are looking for a way to eat healthier in the midst of busy schedules, and maybe have never learned how to cook, or want to follow some specific diet like keto that requires a lot of research, planning, and effort.

In those situations I can see the appeal of a solution like Factor. Dial in what you want, it shows up, you microwave it, eat, and you’re on your way without caving and ordering pizza for the third time this week.

While Factor’s meals are generally enjoyable and reasonably tasty—for whatever reason, the dishes leaning toward Mexican food seemed better than the rest—there’s just no denying that eating food out of a segmented plastic tray is, um, uninspiring. At the very least, put your heated results on a real plate. It’ll taste better that way. Trust me, there’s a reason your plate is carefully arranged when it reaches your table at a fancy restaurant. Aesthetics matter.


Photograph: Scott Gilbertson

Factor’s proteins, especially the meats, were the highlight of most of the meals. Options I tried included a meatball and pasta dish with green beans, a bunless burger, shrimp pasta with some zucchini, a faux grits meal (cauliflower grits), and a chicken taco bowl. In every case, the protein was quite tasty; the sauces were a mixed bag, while the vegetables fared less well in the whole cook-it, pack-it, ship-it, reheat-it process. The green beans in particular were what I’d call “grim,” rather than the “vibrant and fresh” I suspect Factor was going for.


But you need to step back from the aesthetic experience and remember the context in which these meals exist. This is not fine dining or even a home cooked meal, but a healthy alternative to frozen microwavable meals high in artificial ingredients and often with unnecessary added sugars. When you remember that, Factor starts to look not only better, but downright appealing.

Tech

Hackers Expose The Massive Surveillance Stack Hiding Inside Your “Age Verification” Check

from the the-failure-is-the-system dept

We’ve been saying this for years now, and we’re going to keep saying it until the message finally sinks in: mandatory age verification creates massive, centralized honeypots of sensitive biometric data that will inevitably be breached. Every single time. And every single time it happens, the politicians who mandated these systems and the companies that built them act shocked—shocked!—that collecting enormous databases of government IDs, facial scans, and biometric data from millions of people turns out to be a security nightmare.

Well, here we go again.

A couple weeks ago, Discord announced it would launch “teen-by-default” settings for its global audience, meaning all users would be shunted into a restricted experience unless they verified their age through biometric scanning. The internet, predictably, was not thrilled. But while many users were busy venting their frustration, a group of security researchers decided to do something more useful: they took a look under the hood at Persona, one of the companies Discord was using for verification (specifically for users in the UK).

What they found, according to The Rage, was exactly what we would predict:


Together with two other researchers, they set out to look into Persona, the San Francisco-based startup that’s used by Discord for biometric identity verification – and found a Persona frontend exposed to the open internet on a US government authorized server.

In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.

Let me say that again: 2,456 publicly accessible files sitting on a government-authorized server, exposed to the open internet. Files that revealed a system performing not a simple age check, but a ton of potentially intrusive checks:

Once a user verifies their identity with Persona, the software performs 269 distinct verification checks and scours the internet and government sources for potential matches, such as by matching your face to politically exposed persons (PEPs), and generating risk and similarity scores for each individual. IP addresses, browser fingerprints, device fingerprints, government ID numbers, phone numbers, names, faces, and even selfie backgrounds are analyzed and retained for up to three years.

The information the software evaluates on the images themselves includes “Selfie Suspicious Entity Detection,” a “Selfie Age Inconsistency Comparison,” similar background detection, which appears to be matched to other users in the database, and a “Selfie Pose Repeated Detection,” which seems to be used to determine whether you are using the same pose as in previous pictures.

This was the same company checking whether a teenager should be allowed to use voice chat on a gaming platform.


Beyond offering simple services to estimate your age, Persona’s exposed code compares your selfie to watchlist photos using facial recognition, screens you against 14 categories of adverse media from mentions of terrorism to espionage, and tags reports with codenames from active intelligence programs consisting of public-private partnerships to combat online child exploitative material, cannabis trafficking, fentanyl trafficking, romance fraud, money laundering, and illegal wildlife trade.

So you wanted to verify you’re old enough to use voice chat, and now there’s a permanent risk score somewhere documenting whether you might be involved in illegal wildlife trafficking.

What could go wrong?

As the researchers put it to The Rage:

“The internet was supposed to be the great equalizer. Information wants to be free, the network interprets censorship as damage and routes around it, all that beautiful optimism. And for a minute it was true.”

[….]


“The state wants to see everything. The corporations want to see everything. And they’ve learned to work together.”

Discord, to its credit, has now said it will not be proceeding with Persona for identity verification. And to be fair, Discord and similar internet companies are in an impossible position here—facing mounting regulatory pressure in multiple jurisdictions to verify ages while being handed a market of vendors who keep turning out to be security nightmares. But this is part of a pattern that should be deeply familiar by now.

Just last year, Discord’s previous third-party age verification partner suffered a breach that exposed 70,000 government ID photos, which were then held for ransom. Discord said it stopped using that vendor. Then it moved to Persona, which was already raising concerns due to connections to Peter Thiel. Now Persona’s frontend is found wide open on a government-authorized server, and Discord is dropping them too.

See the pattern? Discord keeps swapping vendors like someone frantically rotating buckets under a leaking roof, apparently hoping the next bucket won’t have a hole in it. But the problem was never the bucket. The problem is the hole in the roof — the never-ending stream of age-verification government mandates.

And this brings us to the bigger, more important point that almost nobody in the “protect the children” policy crowd seems willing to engage with honestly. Every single time you mandate age verification, you are mandating the creation of a centralized database of extraordinarily sensitive personal information. Government IDs. Biometric facial data. The kind of data that, once breached, cannot be “changed” like a password. You get one face. You get one government ID number. When those leak—and they will leak—the damage is permanent.


Even the IEEE Spectrum Magazine is now publishing articles that detail how age verification undermines any effort to protect children by putting their privacy at risk.

These systems fail in predictable ways.

False positives are common. Platforms identify as minors adults with youthful faces, or adults who are sharing family devices, or have otherwise unusual usage. They lock accounts, sometimes for days. False negatives also persist. Teenagers learn quickly how to evade checks by borrowing IDs, cycling accounts, or using VPNs.

The appeal process itself creates new privacy risks. Platforms must store biometric data, ID images, and verification logs long enough to defend their decisions to regulators. So if an adult who is tired of submitting selfies to verify their age finally uploads an ID, the system must now secure that stored ID. Each retained record becomes a potential breach target.

Scale that experience across millions of users, and you bake the privacy risk into how platforms work.


We have been cataloging these breaches for years. In 2024, Australia greenlit an age verification pilot, and hours later a mandated verification database for bars was breached. That same year, another ID verification service was breached, exposing private info collected on behalf of Uber, TikTok, and more. Then came the Discord vendor breach last year. And now Persona.

This keeps happening because it has to keep happening. It’s the inevitable result of a system designed to aggregate the exact kind of data that attackers most want to steal. Computer scientists and privacy experts have been sounding this alarm for years.

And what makes this even more galling is that these age verification systems don’t even accomplish what they claim to accomplish.

Take Australia’s infamous ban on social media for under-16s, the poster child for this approach. It’s been a complete failure on its own terms: plenty of kids have already figured out ways around the ban, while those who can’t—particularly kids with disabilities who relied on social platforms for community—are being actively harmed by their exclusion. As the security researcher who helped discover the Persona leak, Celeste, told The Rage:


“Normies won’t be able to bypass these,” while less benevolent people “will always find ways to exploit your system.”

So we’ve built a system that fails to keep out the people it’s supposedly targeting, while successfully creating permanent biometric dossiers on millions of law-abiding users. Not great!

Meanwhile, what’s happening at the legislative level is perhaps even more cynical. Governments around the world are pushing harder and harder for mandatory age verification online. And as these mandates create a captive market worth billions of dollars, a whole ecosystem of venture-backed “identity-as-a-service” startups has sprung up to serve it. Persona, valued at $2 billion and backed by Peter Thiel’s investment network, is just one of many. These companies make grand promises about privacy-preserving verification, get contracts with major platforms, and then — whoops — leave 2,456 files exposed on a government server.

And, of course, these very firms are now lobbying for stricter age verification mandates. They’ve positioned themselves as protectors of children while actively working to expand the legal requirements that guarantee their revenue stream.

Lawmakers mandate an impossible task, VC-backed startups pop up to sell a “solution,” those startups then lobby for even stricter mandates to protect their market, and the cycle repeats.


“Child safety” has simply become the marketing department for a rent-seeking surveillance industry.

As long as the law demands that these biometric gates exist, the “security” of the data they collect will always be a secondary concern to “compliance” with the mandate. Companies will keep rotating through vendors, each one promising that their system is the one that won’t leak, right up until it does. And the age verification industry will keep lobbying for stricter laws, because every new mandate is another guaranteed revenue stream.

The researchers who exposed Persona’s frontend hope their findings will serve as a wake-up call. Given the track record, it probably won’t be. Discord dropping Persona changes nothing—the next vendor will collect the same data, make the same promises, and eventually suffer the same breach. Because the problem was never which company holds your biometric data. The problem is that anyone is being forced to hand it over in the first place.

Filed Under: age verification, data breaches, privacy, security

Companies: discord, persona


Tech

Apple Vision Pro users will get to see Disney's 'Muppet*Vision 3D' in all its glory

“The Muppet Show” rebirth has brought Jim Henson’s creations back into the spotlight, and fans are awaiting news of the virtual return of the fan-favorite “Muppet*Vision 3D” via Apple Vision Pro.

‘Muppet*Vision 3D’ may have closed, but it’s being kept alive in VR

Jim Henson was responsible for a lot of the world’s most popular entertainment, and even Apple has some in their studio. We’re not here to talk about Fraggle Rock, but instead, a green guy and his friends that are a little more popular.
It’s a great time to be a Muppets fan, as Seth Rogen’s new special seems to have successfully revived the brand. Long-time fans recently packed the theater for the first time in forever, and mourned the loss of the popular Muppet*Vision 3D attraction at Hollywood Studios in Orlando.

Tech

Do intl F&B chains have more value for money? Some S’poreans think so.

These F&B chains are winning over the taste buds of Singaporeans

“I support foreign F&B [brands] over local ones.”

It’s a statement that sparked debate on a Reddit thread—and it reflects a growing trend in Singapore’s dining scene. While Singaporeans still love their local fare, an increasing number are showing support for foreign F&B brands.

This shift is evident in the wave of international F&B chains expanding and growing their presence here.

Over the past few years, Singapore has seen a significant influx of international food and beverage operators. As of 2025, around 85 Chinese F&B brands alone were operating roughly 405 outlets in Singapore, a sharp increase from just 32 brands running 184 outlets in 2024.


Western brands are also entering the market, with names like Chick-fil-A and Yochi among those seeking to capture local diners.

Many of these international F&B brands cite Singapore’s strategic location, strong infrastructure, and vibrant business environment as ideal for testing and localising products for Asian markets, as well as coordinating regional operations and supply chains.

But potential alone isn’t enough—demand ultimately determines success. In Singapore, these brands have not only managed to establish a foothold but have also seen enough consumer support to thrive in a competitive market.

So why are Singaporeans turning towards these brands?

Over the last decade, consumer preferences have reshaped Singapore’s culinary landscape.


Today’s diners are increasingly health-conscious, environmentally aware, and eager to explore global flavours, often influenced by overseas travel. This openness has created opportunities for international brands offering novel concepts, regional specialities, and fusion menus.

US-based Mexican fast food chain Chipotle is set to launch in Singapore this year./ Image Credit: Chipotle

But for some consumers, the shift isn’t about novelty. It’s about value.

In online discussions about the growing presence of foreign F&B chains in Singapore, one comment summed up a recurring sentiment:

“Some of these foreign F&B provide better value, like free napkins, free-flow rice and water. Most local establishments charge for these, and they add up.”

It sounds trivial until you realise how price-sensitive Singapore’s mass dining market actually is. In a high-cost city, diners are acutely aware of incremental add-ons, like:

  • S$0.30–S$0.50 for takeaway containers
  • S$0.50 for water
  • Extra charges for rice top-ups
  • Service charge and GST

Individually, they seem negligible.

Collectively, a casual meal that costs S$10 could easily edge closer to S$15 after factoring in these add-ons—S$0.50 for water, another S$0.50 for a takeaway container, extra rice portions, plus service charge and GST.
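To see how those small charges compound, here is a quick back-of-the-envelope sketch. The line items and prices are illustrative assumptions drawn from the list above, not any particular eatery’s bill; the 10% service charge is the typical figure where one is levied, and 9% is Singapore’s GST rate.

```python
# Illustrative only: assumed add-ons on a S$10 casual meal in Singapore.
BASE = 10.00
ADD_ONS = {
    "water": 0.50,
    "takeaway container": 0.50,
    "extra rice": 1.00,  # assumed top-up price
}
SERVICE_CHARGE = 0.10  # typical 10%, where levied
GST = 0.09             # Singapore GST rate since 2024

# Service charge is applied to the subtotal, then GST on the result.
subtotal = BASE + sum(ADD_ONS.values())
total = subtotal * (1 + SERVICE_CHARGE) * (1 + GST)
print(f"S${subtotal:.2f} before charges -> S${total:.2f} all-in")
```

Under these assumptions a S$10 meal lands around S$14.40, which is how “negligible” add-ons push a casual meal toward the S$15 mark.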


For frequent diners, these incremental costs quickly add up, making international chains that offer bundled extras feel significantly more attractive, even if the base price is similar.

The ability of international chains to offer these perks ultimately comes down to scale and resources.

Many are backed by established parent companies, venture funding, or large franchise groups. That backing provides access to capital during early expansion, standardised operations, and lower costs through bulk purchasing and centralised procurement across multiple markets.

A single-outlet local eatery sourcing from domestic distributors, on the other hand, does not enjoy the same leverage. It would likely pay market rates for ingredients and double-digit monthly rents, so absorbing the cost of free-flow rice or drinks is far more challenging.

All customers need to do to order a coffee from Luckin Coffee is download the app. With just a few taps, they can place an order for pickup at any outlet, receive real-time status updates within the app, and earn rewards through an integrated loyalty programme./ Image Credit: Luckin Coffee

Beyond cost advantages, many international F&B brands have leveraged their resources to streamline operations from the outset, creating a customer experience that feels efficient, fuss-free, and reliable.

Take Luckin Coffee, for example: from the moment it launched in Singapore, the brand used app-based ordering, cashless payments, and standardised store layouts to minimise wait times and optimise service flow. For busy urban diners, this translates into convenience as much as value.

Other brands have focused on consistency across outlets, a factor that independent operators often struggle to match. Portion sizes, ingredient quality, and menu offerings are carefully standardised, meaning diners know exactly what to expect regardless of location.

CHAGEE is a case in point: a tea from its Plaza Singapura outlet tastes the same as one from Pagoda Street, thanks to strict SOPs, centralised ingredient sourcing, and staff training.

In contrast, local eateries may vary slightly between outlets, or even from day to day, depending on ingredient availability and staffing.


Why this matters

All of this is to say that Singaporean diners appear to be increasingly gravitating towards brands that can consistently deliver value, convenience, and quality—traits that larger, well-resourced F&B chains are often better equipped to provide.

For the industry, this intensifies competition. F&B operators in Singapore already operate on thin profit margins of 5–7%, leaving little room for error.

The first 10 months of 2025 alone saw 2,431 food business closures, underscoring the sector’s volatility. Alarmingly, over 60% of these businesses shuttered within five years of opening, and 82% were unprofitable, highlighting how difficult it is to survive in the current climate.

In this environment, businesses that can maintain operational efficiency, predictable quality, and value for money have a structural advantage in meeting these evolving expectations.


International F&B brands have a clear advantage: they can leverage scale, operational systems, and financial backing to meet evolving tastes and lifestyles, and capture Singaporean diners’ loyalty.

  • Read other articles we’ve written on Singaporean businesses here.

Featured Image Credit: @the_xw via Instagram/ SDQ International Productions

Tech

Samsung Galaxy S26, S26+, and S26 Ultra: Specs, Features, Price, Release Date

Samsung’s latest Galaxy smartphones—the Galaxy S26 series—are all about optimization and AI. Announced at its Galaxy Unpacked event in San Francisco, the phones are not hugely different from last year’s Galaxy S25 models, but the company is hyping up performance optimizations that purportedly boost AI processing. Naturally, there are a bunch of new AI features baked into the phones too.

The headline hardware change is reserved for the top-tier Galaxy S26 Ultra: the Privacy Display. It prevents stray eyes from peeping over your shoulder at sensitive information on your screen—no need to apply a third-party privacy screen protector. The Ultra otherwise doesn’t look as visually distinct next to the Galaxy S26+ and Galaxy S26; unlike the previous flagships, they now all share the same look.


Samsung Galaxy S26 Series

Photograph: Julian Chokkattu

The Galaxy S26 series is available for preorder now, with official sales kicking off on March 11. The Galaxy S26 and S26+ are getting a $100 price increase—likely due to a RAM bump, as RAM is expensive these days. They start at $900 and $1,100, respectively. The Galaxy S26 Ultra remains at the same price as its predecessor: $1,300. Samsung also unveiled a new pair of wireless earbuds, the Galaxy Buds4 ($179) and Buds4 Pro ($249), also arriving March 11. Here’s everything you need to know.


The Privacy Display

The Galaxy S26 Ultra has something you’ve never seen on a smartphone: a built-in privacy screen. This is a hardware-driven feature; there are two types of pixels on the OLED panel, one that shoots light directly to your eyes, and another next to it that is wider, allowing the light to reach the sides. That allows you to view the screen from all angles. When the Privacy Display is enabled, the latter pixels are turned off, severely limiting what people around you can see. It’s not just blocking the left and right sides of the smartphone like most two-way privacy screen protectors, but also the top and bottom.

What makes it more powerful than your usual privacy screen protector is that the Privacy Display can be customized via the software. You can toggle it on for the entire screen with a simple tap on the Quick Settings tile, or you can enable it for all incoming notifications, on a per-app basis, or for any app that requires a pin or passcode, like banking apps. Samsung says it’ll even work with its Routines, so you can automatically turn it on via geolocation, like when you leave the office.


Photograph: Julian Chokkattu


Copyright © 2025