Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
“Roadrunner” is a new bipedal wheeled robot prototype designed for multi-modal locomotion. It weighs around 15 kg (33 lb) and can seamlessly switch between its side-by-side and in-line wheel modes and stepping configurations depending on what is required for navigating its environment. The robot’s legs are entirely symmetric, allowing it to point its knees forward or backward, which can be used to avoid obstacles or manage specific movements. A single control policy was trained to handle both side-by-side and in-line driving. Several behaviors, including standing up from various ground configurations and balancing on one wheel, were successfully deployed zero-shot on the hardware.
Incredibly (INCREDIBLY!) NASA says that this is actually happening.
NASA’s SkyFall mission will build on the success of the Ingenuity Mars helicopter, which achieved the first powered, controlled flight on another planet. Using a daring mid-air deployment, SkyFall will deliver a team of next-gen Mars helicopters to scout human landing sites and map subsurface water ice.
NASA’s MoonFall mission will blaze a path for future Artemis missions by sending four highly mobile drones to survey the lunar surface around the Moon’s South Pole ahead of astronauts’ arrival there. MoonFall is built on the legacy of NASA’s Ingenuity Mars Helicopter. The drones will be launched together and released during descent to the surface. They will land and operate independently over the course of a lunar day (14 Earth days) and will be able to explore hard-to-reach areas, including permanently shadowed regions (PSRs), surveying terrain with high-definition optical cameras and other potential instruments.
For what it’s worth, Moon landings have a success rate well under 50%. So let’s send some robots there to land over and over!
In Science Robotics, researchers from the Tangible Media group led by Professor Hiroshi Ishii, together with colleagues from Politecnico di Bari, present Electrofluidic Fiber Muscles: a new class of artificial muscle fibers for robots and wearables. Unlike the rigid servo motors used in most robots, these fiber-shaped muscles are soft and flexible. They combine electrohydrodynamic (EHD) fiber pumps — slender tubes that move liquid using electric fields to generate pressure silently, with no moving parts — with fluid-filled fiber actuators. These artificial muscles could enable more agile untethered robots, as well as wearable assistive systems with compact actuation integrated directly into textiles.
In this study, we developed MEVIUS2, an open-source quadruped robot. It is comparable in size to Boston Dynamics Spot, equipped with two LiDARs and a C1 camera, and can freely climb stairs and steep slopes! All hardware, software, and learning environments are released as open source.
In this work, a multi-robot planning and control framework is presented and demonstrated with a team of 40 indoor robots, including both ground and aerial robots.
Quadrupedal robots can navigate cluttered environments like their animal counterparts, but their floating-base configuration makes them vulnerable to real-world uncertainties. Controllers that rely only on proprioception (body sensing) must physically collide with obstacles to detect them. Those that add exteroception (vision) need precisely modeled terrain maps that are hard to maintain in the wild. DreamWaQ++ bridges this gap by fusing both modalities through a resilient multi-modal reinforcement learning framework. The result: a single controller that handles rough terrains, steep slopes, and high-rise stairs—while gracefully recovering from sensor failures and situations it has never seen before.
While the pyramid exploration that iRobot did was very cool, they did it with a custom-made robot designed for a very specific environment. Cleaning your floors is way, way harder. Here’s a bit more detail on the pyramids thing:
MIT engineers have designed a wristband that lets wearers control a robotic hand with their own movements. By moving their hands and fingers, users can direct a robot to perform specific tasks, or they can manipulate objects in a virtual environment with high-dexterity control.
At NVIDIA GTC 2026, we showcased how AI is moving into the physical world. Visitors interacted with robots using voice commands, watching them interpret intent and act in real time — powered by our KinetIQ AI brain.
Developed by Zhejiang Humanoid Robot Innovation Center Co., Ltd., the Naviai Robot is an intelligent cooking device. It can autonomously process ingredients, perform cooking tasks with high accuracy, adjust smart kitchen equipment in real time, and complete post-cooking cleaning. Equipped with multi-modal perception technology, it adapts to daily kitchen environments and ensures safe and stable operation.
This CMU RI Seminar is by Hadas Kress-Gazit from Cornell, on “Formal Methods for Robotics in the Age of Big Data.”
Formal methods – mathematical techniques for describing systems, capturing requirements, and providing guarantees – have been used to synthesize robot control from high-level specification, and to verify robot behavior. Given the recent advances in robot learning and data-driven models, what role can, and should, formal methods play in advancing robotics? In this talk I will give a few examples for what we can do with formal methods, discuss their promise and challenges, and describe the synergies I see with data-driven approaches.
The European Commission is investigating a breach after a threat actor allegedly accessed at least one of its AWS cloud accounts and claimed to have stolen more than 350 GB of data, including databases and employee-related information. AWS says its own services were not breached. BleepingComputer reports: Sources familiar with the incident have told BleepingComputer that the attack was quickly detected and that the Commission’s cybersecurity incident response team is now investigating. While the Commission has yet to share any details about this breach, the threat actor who claimed responsibility for the attack reached out to BleepingComputer earlier this week, stating that they had stolen over 350 GB of data (including multiple databases).
They didn’t disclose how they breached the affected accounts, but they provided BleepingComputer with several screenshots as proof that they had access to information belonging to European Commission employees and to an email server used by Commission employees. The threat actor also told BleepingComputer that they will not attempt to extort the Commission using the allegedly stolen data as leverage, but intend to leak the data online at a later date.
Embedding fasteners or other hardware into 3D prints is a useful technique, but it can bring challenges when applied to large or non-flat objects. The solution? Use a gap-cap.
The gap-cap technique is essentially a 3D-printed lid. One pauses a print, inserts the hardware, then covers it with the lid before resuming the print. The lid — or gap-cap — does three things: it seals the component in place, it fills the empty space left above the component, and it provides a nice flat surface for subsequent layers, which makes the whole process much cleaner and more reliable.
This whole technique is a bit reminiscent of the idea of manual supports, except that the inserted piece is intended to be sealed into the print along with the embedded hardware under it.
If you have never inserted anything larger than a nut or small magnet into a 3D print, you may wonder why one needs to bother with a gap-cap at all. The short version is that what works for printing over small bits doesn’t reliably carry over to big, odd-shaped bits.
For one thing, filament generally doesn’t like to stick to embedded hardware. As the inserted object gets larger, especially if it isn’t flat, sealing it in cleanly becomes increasingly difficult for the printer. Because most nuts are small, even if the printer gets a little messy it probably doesn’t matter much. But what works for small nuts won’t work for something like an LED strip mounted on its side, as shown here.
Cross-section of a print with an embedded LED strip. The print pauses (A), LED strip is inserted and capped with a gap-cap (B, C), then printing resumes and completes (D).
In cases like these a gap-cap is ideal. By pre-printing a form-fitting cap that covers the inserted hardware, one gets a smooth, flat surface that both seals the component in snugly and provides an ideal base upon which to resume printing.
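Sizing a gap-cap comes down to simple arithmetic on the pocket and component dimensions. A toy helper in Python, where the clearance value and rounding rule are illustrative assumptions rather than a published rule of thumb:

```python
def gap_cap_plan(pocket_depth, component_height, layer_height, clearance=0.2):
    """Toy sizing sketch for a gap-cap (all numbers in mm; the 0.2 mm
    clearance is an illustrative assumption, not a standard value).
    The cap fills the space between the top of the inserted component
    and the pause layer, so printing can resume on a flat surface."""
    gap = pocket_depth - component_height      # empty space above the part
    cap_height = gap - clearance               # leave room to drop the cap in
    layers = round(cap_height / layer_height)  # whole layers the cap spans
    return {"gap_mm": round(gap, 2),
            "cap_height_mm": round(cap_height, 2),
            "cap_layers": layers}

plan = gap_cap_plan(pocket_depth=6.0, component_height=3.4, layer_height=0.2)
print(plan)  # {'gap_mm': 2.6, 'cap_height_mm': 2.4, 'cap_layers': 12}
```

In practice one would round the cap height to a multiple of the layer height so the pause lands exactly on a layer boundary.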
If needed, a bit of glue can help ensure a gap-cap doesn’t shift and cause trouble when printing resumes, but we can’t help but recall the pause-and-attach technique of embedding printed elements with the help of a LEGO-like connection. Perhaps a gap-cap designed in such a way would avoid needing any kind of adhesive at all.
Bellevue, Wash.-based wireless carrier T-Mobile confirmed it made an unspecified number of layoffs this week. A tipster told GeekWire the number was in the hundreds, which the company did not verify.
“To move even faster in a dynamic market while continuing to deliver best-in-class digital experiences for our customers, we’re further aligning our IT organization to support future growth and innovation,” T-Mobile said in a statement to GeekWire on Friday. “This includes the difficult decision of eliminating some roles while continuing to invest and hire in areas.”
Posts on LinkedIn referenced the layoffs, with some alluding to a “major corporate restructuring.”
The new round of cuts comes less than two months after T-Mobile shed 393 workers in Washington state. Those cuts impacted analysts, engineers and technicians, as well as directors, managers and VP-level executives.
T-Mobile employed about 75,000 people as of Dec. 31, 2025. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.
The Seattle area has been hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta, Zillow and other companies.
T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. Last November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.
T-Mobile grew service revenue to $71.3 billion in 2025, up 8% from the prior year, while posting $11 billion in net income and adding a record 7.6 million postpaid customers, underscoring how it continues to expand even as it trims IT and corporate roles.
The company said Friday it is “providing robust support to impacted employees as they transition.”
Processing 200,000 tokens through a large language model is expensive and slow: the longer the context, the faster the costs spiral. Researchers at Tsinghua University and Z.ai have built a technique called IndexCache that cuts up to 75% of the redundant computation in sparse attention models, delivering up to 1.82x faster time-to-first-token and 1.48x faster generation throughput at that context length.
The technique applies to models using the DeepSeek Sparse Attention architecture, including the latest DeepSeek and GLM families. It can help enterprises provide faster user experiences for production-scale, long-context models, a capability already proven in preliminary tests on the 744-billion-parameter GLM-5 model.
The DSA bottleneck
Large language models rely on the self-attention mechanism, a process where the model computes the relationship between every token in its context and all the preceding ones to predict the next token.
However, self-attention has a severe limitation. Its computational complexity scales quadratically with sequence length. For applications requiring extended context windows (e.g., large document processing, multi-step agentic workflows, or long chain-of-thought reasoning), this quadratic scaling leads to sluggish inference speeds and significant compute and memory costs.
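The quadratic cost is easy to see in a toy implementation: the score matrix alone has one entry per pair of tokens. A minimal single-head sketch (no causal mask or multi-head machinery, purely illustrative):

```python
import numpy as np

def self_attention(q, k, v):
    """Naive single-head attention: the score matrix is n x n, so compute
    and memory grow quadratically with sequence length n."""
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n, n) -- the quadratic term
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                        # (n, d) outputs

n, d = 8, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(n, d))
out = self_attention(x, x, x)
print(out.shape)  # (8, 4) -- one output per token, but n*n scores were computed
```

Doubling the context from 100K to 200K tokens quadruples the score-matrix work, which is why long-context prefill is so expensive.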
Sparse attention offers a principled solution to this scaling problem. Instead of calculating the relationship between every token and all preceding ones, sparse attention optimizes the process by having each query select and attend to only the most relevant subset of tokens.
DeepSeek Sparse Attention (DSA) is a highly efficient implementation of this concept, first introduced in DeepSeek-V3.2. To determine which tokens matter most, DSA introduces a lightweight “lightning indexer module” at every layer of the model. This indexer scores all preceding tokens and selects a small batch for the main core attention mechanism to process. By doing this, DSA slashes the heavy core attention computation from quadratic to linear, dramatically speeding up the model while preserving output quality.
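A toy sketch of the idea, with a cheap dot-product scorer standing in for DSA's learned lightning indexer (this is illustrative only, not the actual module):

```python
import numpy as np

def sparse_attention_step(q, keys, values, index_scores, k=4):
    """Toy indexer-guided sparse attention for one query: a cheap scorer
    picks the top-k past tokens, and full attention runs only over that
    subset -- linear rather than quadratic work per query."""
    top = np.argsort(index_scores)[-k:]        # indices of the k highest-scoring tokens
    sub_k, sub_v = keys[top], values[top]
    scores = sub_k @ q / np.sqrt(q.shape[-1])  # (k,) instead of (n,)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ sub_v, top

rng = np.random.default_rng(1)
n, d = 64, 8
q = rng.normal(size=d)
keys = rng.normal(size=(n, d))
values = rng.normal(size=(n, d))
idx_scores = keys @ q                          # stand-in for a lightning-indexer score
out, selected = sparse_attention_step(q, keys, values, idx_scores, k=4)
print(out.shape, len(selected))  # (8,) 4
```

The catch the researchers identified is that even this cheap scoring pass still touches every past token at every layer.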
But the researchers identified a lingering flaw: the DSA indexer itself still operates at a quadratic complexity at every single layer. Even though the indexer is computationally cheaper than the main attention process, as context lengths grow, the time the model spends running these indexers skyrockets. This severely slows down the model, especially during the initial “prefill” stage where the prompt is first processed.
The DSA indexing tax increases with context length (source: arXiv)
Caching attention with IndexCache
To solve the indexer bottleneck, the research team discovered a crucial characteristic of how DSA models process data. The subset of important tokens an indexer selects remains remarkably stable as data moves through consecutive transformer layers. Empirical tests on DSA models revealed that adjacent layers share between 70% and 100% of their selected tokens.
To capitalize on this cross-layer redundancy, the researchers developed IndexCache. The technique partitions the model’s layers into two categories. A small number of full (F) layers retain their indexers, actively scoring the tokens and choosing the most important ones to cache. The rest of the layers become shared (S), performing no indexing and reusing the cached indices from the nearest preceding F layer.
IndexCache splits layers into full and shared layers
During inference, the model simply checks the layer type. If it reaches an F layer, it calculates and caches fresh indices. If it is an S layer, it skips the math and copies the cached data.
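That dispatch can be sketched in a few lines (the layer labels and cache shape here are hypothetical; the real logic lives inside the serving engine):

```python
def run_layers(layer_types, compute_indices):
    """Sketch of IndexCache-style dispatch: 'F' layers compute fresh token
    indices and cache them; 'S' layers skip the indexer and reuse the most
    recent cache."""
    cached = None
    indexer_calls = 0
    plan = []
    for i, t in enumerate(layer_types):
        if t == "F":
            cached = compute_indices(i)   # the expensive indexer runs here
            indexer_calls += 1
        plan.append((i, t, cached))       # every layer attends over `cached`
    return plan, indexer_calls

# 1 full layer serving 3 shared layers -> 75% of indexer work removed
types = ["F", "S", "S", "S", "F", "S", "S", "S"]
plan, calls = run_layers(types, compute_indices=lambda i: {10, 42, 99})
print(calls, len(types))  # 2 indexer runs for 8 layers
```

With this F/S ratio, only a quarter of the layers pay the quadratic indexing cost.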
A wide range of optimization techniques addresses the attention bottleneck by compressing the KV cache, where the computed attention keys and values are stored. Instead of shrinking the memory footprint like standard KV cache compression, IndexCache attacks the compute bottleneck.
“IndexCache is not a traditional KV cache compression or sharing technique,” Yushi Bai, co-author of the paper, told VentureBeat. “It eliminates this redundancy by reusing indices across layers, thereby reducing computation rather than just memory footprint. It is complementary to existing approaches and can be combined with them.”
The researchers developed two deployment approaches for IndexCache. (It is worth noting that IndexCache only applies to models that use the DSA architecture, such as the latest DeepSeek models and the latest family of GLM models.)
For developers working with off-the-shelf DSA models where retraining is unfeasible or too expensive, they created a training-free method relying on a “greedy layer selection” algorithm. By running a small calibration dataset through the model, this algorithm automatically determines the optimal placement of F and S layers without any weight updates. Empirical evidence shows that the greedy algorithm can safely remove 75% of the indexers while matching the downstream performance of the original model.
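A simplified reading of such a greedy search might look like the following (the toy quality metric and the rule that layer 0 stays full are assumptions for illustration, not the paper's exact algorithm):

```python
def greedy_layer_selection(n_layers, keep, quality_of):
    """Simplified greedy search over which layers keep their indexers.
    Starting from all-full, repeatedly drop the indexer whose removal
    degrades a calibration metric the least, until only `keep` remain.
    `quality_of(full_set)` is a hypothetical stand-in for running the
    calibration dataset and scoring downstream quality."""
    full = set(range(n_layers))
    while len(full) > keep:
        # Layer 0 must stay full: shared layers need a preceding cache.
        candidates = [l for l in full if l != 0]
        best = max(candidates, key=lambda l: quality_of(full - {l}))
        full.remove(best)
    return sorted(full)

# Toy metric: prefer evenly spaced full layers (purely illustrative).
def toy_quality(full_set):
    s = sorted(full_set) + [16]
    return -max(b - a for a, b in zip(s, s[1:]))

config = greedy_layer_selection(n_layers=16, keep=4, quality_of=toy_quality)
print(config)
```

In the paper's setup the metric would come from running a calibration set through the real model, which is why domain-matched calibration data matters (see below on deployment).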
For teams pre-training or heavily fine-tuning their own foundation models, the researchers propose a training-aware version that optimizes the network parameters to natively support cross-layer sharing. This approach introduces a “multi-layer distillation loss” during training. It forces each retained indexer to learn how to select a consensus subset of tokens that will be highly relevant for all the subsequent layers it serves.
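One plausible shape for such a loss, sketched here as a KL divergence summed over the layers an indexer serves (this is an interpretation for illustration, not the paper's exact formulation):

```python
import numpy as np

def multilayer_distill_loss(student_scores, teacher_scores_per_layer):
    """Hedged sketch of a multi-layer distillation objective: the retained
    indexer's score distribution is pushed toward the selection patterns of
    every layer it will serve.  Computes sum of KL(teacher || student)."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p_student = softmax(student_scores)
    loss = 0.0
    for t in teacher_scores_per_layer:
        p_teacher = softmax(t)
        loss += np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    return loss

rng = np.random.default_rng(2)
student = rng.normal(size=32)                                  # retained indexer's scores
teachers = [student + 0.1 * rng.normal(size=32) for _ in range(3)]  # served layers' scores
print(float(multilayer_distill_loss(student, teachers)))
```

Minimizing a term like this encourages the one retained indexer to select a consensus token subset that works for all the shared layers downstream of it.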
Real-world speedups on production models
To test the impact of IndexCache, the researchers applied it to the 30-billion-parameter GLM-4.7 Flash model and compared it against the standard baseline.
At a 200K context length, removing 75% of the indexers slashed the prefill latency from 19.5 seconds down to just 10.7 seconds, delivering a 1.82x speedup. The researchers note these speedups are expected to be even greater in longer contexts.
During the decoding phase, where the model generates its response, IndexCache boosted per-request throughput from 58 tokens per second to 86 tokens per second at the 200K context mark, yielding a 1.48x speedup. When the server’s memory is fully saturated with requests, total decode throughput jumped by up to 51%.
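The headline multipliers follow directly from the reported figures:

```python
# Speedup factors from the reported raw numbers at 200K context
prefill_speedup = 19.5 / 10.7  # baseline seconds / IndexCache seconds
decode_speedup = 86 / 58       # IndexCache tokens/s / baseline tokens/s
print(round(prefill_speedup, 2), round(decode_speedup, 2))  # 1.82 1.48
```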
IndexCache speeds up the prefill and decode stages significantly (source: arXiv)
For enterprise teams, these efficiency gains translate directly into cost savings. “In terms of ROI, IndexCache provides consistent benefits across scenarios, but the gains are most noticeable in long-context workloads such as RAG, document analysis, and agentic pipelines,” Bai said. “In these cases, we observe at least an approximate 20% reduction in deployment cost and similar improvements in user-perceived latency.” He added that for very short-context tasks, the benefits hover around 5%.
Remarkably, these efficiency gains did not compromise reasoning capabilities. Using the training-free approach to eliminate 75% of indexers, the 30B model matched the original baseline’s average score on long-context benchmarks, scoring 49.9 against the original 50.2. On the highly complex AIME 2025 math reasoning benchmark, the optimized model actually outperformed the original baseline, scoring 92.6 compared to 91.0.
The team also ran preliminary experiments on the production-scale 744-billion-parameter GLM-5 model. They found that eliminating 75% of its indexers with the training-free method yielded at least a 1.3x speedup on contexts over 100K tokens. At the same time, the model maintained a nearly identical quality average on long-context tasks.
IndexCache increases the speed of GLM-5 by 20% while maintaining the accuracy (source: arXiv)
Getting IndexCache into production
For development teams wanting to implement the training-free approach today, the process is straightforward but requires careful setup. While the greedy search algorithm automatically finds the optimal layer configuration, the quality of that configuration depends on the data it processes.
“We recommend using domain-specific data as a calibration set so that the discovered layer-sharing pattern aligns with real workloads,” Bai said.
Once calibrated, the optimization is highly accessible for production environments. Open-source patches are already available on GitHub for major serving engines. “Integration is relatively straightforward — developers can apply the patch to existing inference stacks, such as vLLM or SGLang, and enable IndexCache with minimal configuration changes,” Bai said.
While IndexCache provides an immediate fix for today’s compute bottlenecks, its underlying philosophy points to a broader shift in how the AI industry will approach model design.
“Future foundation models will likely be architected with downstream inference constraints in mind from the beginning,” Bai concluded. “This means designs that are not only scalable in terms of model size, but also optimized for real-world throughput and latency, rather than treating these as post-hoc concerns.”
We’ve professionally sat in a lot of office chairs, and the Branch Ergonomic Chair Pro has held the top spot in our office chair buying guide ever since we first tested it. It’s easy to spend a lot on an office chair, but this one packs in plenty of features for a relatively modest price. We like it at full price, and we’ve shared deal stories when it has gone on sale for $450 in the past.
Right now, though, it’s down to $400 thanks to the Amazon Spring Sale. That’s $50 cheaper than we’ve seen it before, and so of course, we had to tell you.
Photograph: Julian Chokkattu
Many of the products and gadgets that we recommend are nice to have, but not necessary. Headphones are cool, but you might not need an upgrade. A fancy smart bird feeder is neat, but not crucial. But working from an inefficient, ergonomically poor office setup can wreak havoc on your body. It’s actually bad for you. If you’re sitting at a desk working from a computer, you genuinely, truly need a good office chair.
We recommend this chair for most people because it’s easy to adjust and offers several customizable features. Its armrests, seat, and back can be tilted and maneuvered to dial in the perfect fit for your sit, and there are several different upholstery options available, including leather, vegan leather, and mesh. (Although the Amazon sale only features the mesh option; you’ll have to go to Branch’s website for the other materials.) All of the finishes offer a nice mix of softness, durability, and breathability. You could spend a lot more money for a little more customization, some higher-end materials, or even more adjustments, but we think this mesh version does a darn good job for what you’ll pay and what most people need. Snagging it for $100 less is a no-brainer if you’re in the market.
Laser Welding is apparently the new hotness, in part because these sci-fi rayguns masquerading as tools are really cool. They cut! They weld! They Julienne Fry! Well, maybe not that last one. In any case, perhaps feeling the need to cancel out that coolness as quickly as he possibly could, YouTuber [Wesley Treat] decided to make a giant version of his own head.
[Wesley] had previously been 3D scanned as part of the maker scans project, which you can find over on Printables. Those of you who really hate YouTubers, take note: finally you have something to take your frustrations out on. [Wesley] takes that model into Blender to decimate and decapitate – fans of the band Tyr may wonder if the model questioned his sword – before feeding that head through an online papercraft tool called PaperMaker to generate cut files for his CNC. There are also a lot of welding montages interspersed throughout as he practices with the new tool. [Wesley] did first try out his new raygun on steel in a previous video, but even knowing that, he makes the learning curve on these lasers look quite scalable.
While we’re not likely to follow in [Wesley]’s footsteps and create our own low-poly Zardoz – Zardozes? Zardii? – using a papercraft toolchain and CNC equipment with sheet aluminum is absolutely a great idea worth stealing. It’s very similar to what another hacker did with PCBs, though that project was perhaps more reasonable in scale and ego.
Company establishes dominant position on world’s largest retail platform while building multi-channel distribution strategy
Innovative Eyewear, Inc. (NASDAQ: LUCY) has emerged as the clear category leader in the rapidly growing smart safety glasses segment, capturing approximately 44% market share on Amazon.com according to recent market analysis. This dominant position on the world’s most popular retail platform validates the company’s product strategy and provides a powerful foundation for broader retail expansion in 2026. The achievement is particularly significant given that Lucyd Armor represents the only smart safety glasses available on the platform with full safety certification in the United States, according to company research. This combination of regulatory compliance, smart features, and consumer accessibility creates a defensible competitive position that would be difficult for new entrants to replicate quickly.
Market Leadership Built on Product Innovation
Lucyd Armor has distinguished itself in the market by offering a unique combination of features that address real workplace needs. The product line delivers ANSI Z87.1+ certified protection alongside high-fidelity audio, hands-free walkie communication features, photochromic lenses, and prescription adaptability, all within a single frame design. This comprehensive feature set addresses a significant gap in the industrial and commercial safety eyewear market, where workers have historically been forced to choose between safety compliance and connectivity. Lucyd Armor eliminates this tradeoff, allowing professionals across construction, manufacturing, logistics, and other industries to maintain communication and access to information while meeting safety requirements. The product’s appeal extends beyond traditional industrial applications. Recent enterprise adoption includes a top-five global logistics company that placed an initial order to utilize Lucyd Armor with the Lucyd app’s Walkie feature, enabling secure, hands-free team communication through private encrypted channels.
Amazon as Strategic Foundation
Amazon’s role as both a consumer discovery platform and a business purchasing channel makes the company’s 44% market share particularly valuable. The platform serves as a primary research and purchasing venue for both individual consumers and business buyers, providing Innovative Eyewear with exposure to diverse customer segments. The Amazon channel also provides valuable market intelligence. Real-time sales data, customer reviews, and competitive positioning insights allow the company to rapidly iterate on product development and marketing approaches. This feedback loop has informed product expansions including the introduction of multiple Lucyd Armor variants to address specific use cases and preferences. Customer reviews on Amazon have consistently highlighted the product’s audio quality, comfort for all-day wear, and successful integration of safety certification with smart features. This organic customer validation reinforces the company’s product-market fit and provides social proof for prospective buyers researching the category.
Multi-Channel Expansion Strategy
While Amazon market leadership provides an important foundation, Innovative Eyewear has been systematically building distribution across complementary channels to maximize market reach and reduce platform concentration risk. The company’s products are now available through major national retailers including Walmart.com, Target.com, BestBuy.com, and DicksSportingGoods.com. This expansion into established retail ecosystems provides access to millions of additional customers who prefer shopping through these familiar platforms. Simultaneously, the company has been developing its optical industry presence through participation in major trade shows including Vision Expo West, MIDO Milan, and SILMO Paris. These efforts have resulted in approximately 40 new optical industry accounts and initial orders from key European markets including the UK, Romania, Greece, Spain, and France. The B2B channel development extends to specialized industrial and safety equipment distributors. By making Lucyd Armor available through channels where businesses already purchase personal protective equipment, Innovative Eyewear can accelerate adoption among commercial customers who may not discover the product through consumer retail channels.
The company’s investment in obtaining comprehensive safety certifications across multiple jurisdictions creates meaningful barriers to competitive entry. Lucyd Armor now carries ANSI Z87.1+ certification for U.S. markets, CSA Z94.3 for Canada, and EN 16639:2018 for European markets.
These certifications require significant time and investment to obtain, involving rigorous testing protocols and compliance documentation. For competitors seeking to enter the smart safety eyewear category, this regulatory burden creates delays and costs that protect Innovative Eyewear’s first-mover advantage. The certification strategy also enables geographic expansion. With compliance already secured for North American and European markets, the company can rapidly scale distribution in these regions without additional product development or testing delays.
Looking Ahead to 2026
Management has indicated that the company’s product mix and global fulfillment network position it to scale distribution across hardware, retail, and optical chains throughout 2026. This suggests upcoming partnership announcements and channel expansion that could significantly amplify the company’s market presence. The combination of Amazon market leadership, expanding multi-channel distribution, regulatory certifications, and demonstrated product-market fit creates a compelling growth narrative for investors. As smart safety glasses transition from niche product to standard workplace equipment, Innovative Eyewear’s established position and distribution infrastructure should enable it to capture disproportionate value from category expansion. For investors evaluating the wearable technology sector, Innovative Eyewear’s clear market leadership in an emerging category with significant growth potential represents a differentiated opportunity. The company’s success in establishing dominant Amazon share while simultaneously building diversified distribution demonstrates execution capability that reduces commercial risk.
About Innovative Eyewear
Innovative Eyewear develops and manufactures ChatGPT-enabled smart eyewear under the Lucyd®, Lucyd Armor®, Reebok®, Eddie Bauer®, and Nautica® brands. The company’s mission is to Upgrade Your Eyewear® by offering Bluetooth audio glasses that allow users to stay safely and ergonomically connected to their digital lives through hundreds of frame and lens combinations.
OPPO India has announced a major expansion of its service network across the country. The company is rolling out its Service Center 3.0 Pro to over 150 locations in India, going beyond its earlier plan of 110 centers. OPPO aims to launch more than 50 new service centers by June 2026 as part of this growth.
As smartphones continue to play a major role in everyday activities, the importance of strong after-sales support has increased. OPPO India is working to improve its service quality while making support easier to access for users across India. The brand is clearly focusing on delivering a smoother and more reliable customer experience.
To improve the service experience, the Service Center 3.0 Pro model brings several user-friendly features. OPPO India includes digital check-ins, real-time updates, and clear communication throughout the visit. Customers can see the repair process directly, making it more transparent. The centers also offer a cleaner layout, product display zones, and relaxing waiting spaces.
OPPO is strengthening its service quality by training staff and offering multilingual support, making interactions smoother for users. Customers are often attended to within minutes of arrival. As per Counterpoint Research, the brand is among the top performers in repair transparency, which builds greater trust among users.
Furthermore, the company provides assistance in 19 languages, making it easy for users across regions to interact without difficulty. Additionally, the company provides free pick-up and drop-off services for any repairs. This adds another layer of convenience for customers, especially if they are unable to reach the service center.
The company provides service for most repairs within a day, so customers do not have to wait long to start using their devices. This expansion by OPPO reinforces its dominance by ensuring customers receive reliable, convenient service.
Systemd now includes a user date-of-birth field for age verification purposes
Garuda Linux refuses to enforce age checks, citing no legal obligation
TBOTE Project claims Meta contributes significant funding to push age laws
Recent changes within the Linux ecosystem suggest that age verification could move closer to the operating system level.
An update to systemd introduces a new field for storing a user’s date of birth, designed to support compliance with laws in regions including California, Colorado, and Brazil.
The addition is intended to enable age verification requirements and may also support upcoming parental control features linked to application frameworks.
Age data will be stored
The feature stores user birth dates within system records, with modification restricted to users holding root privileges.
While the change has been merged into the codebase, its long-term role depends on adoption across distributions and whether it remains in future releases.
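At its core, verifying a minimum age from a stored date of birth is a simple calendar calculation. The sketch below is a hypothetical Python illustration of how a service might gate access once a birth date is available; the function names and the age threshold are assumptions for the example, not part of systemd's actual interface:

```python
from datetime import date

def age_on(birth_date: date, today: date) -> int:
    """Return full years elapsed between birth_date and today."""
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday hasn't occurred yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def meets_minimum_age(birth_date: date, minimum: int, today: date) -> bool:
    """Hypothetical gate: True if the user is at least `minimum` years old."""
    return age_on(birth_date, today) >= minimum

# A user born 2010-03-15, checked on 2026-02-01, is 15 years old.
print(meets_minimum_age(date(2010, 3, 15), 18, date(2026, 2, 1)))  # False
```

The month/day tuple comparison avoids the common off-by-one error of dividing elapsed days by 365, which drifts around leap years and birthdays.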
Reactions across Linux distros have been inconsistent, reflecting differing legal obligations and technical philosophies.
Developers associated with Garuda Linux stated that the distribution will not introduce age verification measures, citing the absence of legal requirements in its jurisdictions.
The maintainers also described the wider discussion as contentious, noting that “some of us have honestly been quite shocked at the way this conversation has been moving in the Linux community as a whole.”
They added that “distribution developers are being hounded at every corner for complying with these laws,” pointing to growing tension between compliance and community expectations.
The response illustrates how decentralized development models complicate unified approaches to regulatory changes.
The introduction of age-related features follows new legislation aimed at enforcing online safety requirements.
Reports linked to research from the TBOTE Project claim that lobbying efforts behind these laws are backed by substantial financial resources.
The research suggests that Meta has contributed funding toward initiatives such as the App Store Accountability Act, although these claims remain part of ongoing public debate.
Additional pressure is attributed to advocacy groups such as the Digital Childhood Alliance, which has reportedly influenced policy discussions despite its relatively recent formation.
These developments indicate that regulatory changes affecting operating systems may continue to expand beyond application-level controls.
The shift has broader implications for distributions that rely on systemd, as well as those that deliberately avoid it.
Some projects, including GrapheneOS, have publicly stated that they will not require personal data or identification for use, even if this limits availability in certain regions.
The integration of age-related data into system components may also affect related technologies, including application packaging systems and parental control frameworks.
As discussions continue, Linux distros will likely adopt different responses depending on legal exposure and community priorities.
The Meadow slips into a pocket without a second thought. Measuring just 1.3 by 2 by 0.4 inches and weighing four ounces, it feels closer to a good-luck charm than a conventional smartphone. The recycled polycarbonate shell has a smooth, understated feel that should hold up well to everyday use, and the three-inch square display sits centered in that compact body, clear enough for a quick glance but small enough that lingering on it for too long simply isn't that appealing. That last part is rather the point.
Setup takes under five minutes and works with your existing phone number, no new SIM required. Calls go to your main phone first, and if that is unavailable, the Meadow picks up automatically. Messaging works on a similar principle, with one deliberate restriction: only the 12 contacts you have approved can reach you by text. Anything from outside that list simply does not come through, which cuts spam and unwanted pings entirely. Leave your main phone behind and an auto-reply lets people know you are unreachable for the time being.
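Conceptually, the approved-contacts restriction is just a fixed-size allowlist consulted before a message is delivered. The minimal Python sketch below illustrates the idea; the class name, the silent-drop behavior, and the exact limit handling are assumptions about how such a filter could work, not Meadow's actual implementation:

```python
from dataclasses import dataclass, field

MAX_CONTACTS = 12  # the Meadow's approved-contact limit

@dataclass
class MessageFilter:
    approved: set = field(default_factory=set)

    def approve(self, number: str) -> bool:
        """Add a number to the allowlist; refuse once the limit is reached."""
        if number not in self.approved and len(self.approved) >= MAX_CONTACTS:
            return False  # allowlist is full
        self.approved.add(number)
        return True

    def allow(self, sender: str) -> bool:
        """Messages from unapproved senders are dropped silently."""
        return sender in self.approved

f = MessageFilter()
f.approve("+15551234567")
print(f.allow("+15551234567"))  # True: approved contact gets through
print(f.allow("+15559876543"))  # False: unknown sender is dropped
```

An allowlist with a hard cap is the inverse of the usual spam-blocklist design: instead of enumerating bad senders, everything is rejected by default, which is why unknown numbers never generate a notification at all.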
The app selection is deliberately minimal but covers what most people actually need day to day. You get calls, messaging, a camera, a clock, maps, notes, and weather. Spotify and Apple Music handle music streaming, with local playback and a dedicated app available for podcasts and audiobooks. Strava covers fitness tracking and Uber handles rides. That is the full list, and there is no app store to tempt you into adding more. For anyone who has grown tired of their attention being pulled in a dozen directions at once, that simplicity feels less like a limitation and more like a breath of fresh air.
The hardware, with 6GB of memory and 128GB of storage on board, is more than capable of handling the lean app selection without any lag. A single 13-megapixel rear camera is there when you need it, and the absence of a front-facing lens is a deliberate trade-off rather than an oversight. Battery life stretches to a day or two depending on use, and USB-C fast charging keeps top-ups quick. Bluetooth handles headphones and speakers without issue, though there is no headphone jack. Wi-Fi, Bluetooth, NFC, and 4G are all supported, with connectivity managed through a monthly service that costs $10 after the first nine months of free service included with purchase.
Pre-orders are open now at $399, with the price rising to $449 once stock arrives. US customers can expect delivery around June 2026, with each unit coming bundled with a beach pouch, an activity case, and a charging cable. [Source]