This teacher captured the broader moment in education. Over the past several years, schools have been urged to respond to the rapid emergence of generative AI tools such as ChatGPT while armed with limited information and plenty of hype and horror stories. Some have framed the technology as potentially transformative for teaching and learning, while others have claimed the opposite. Yet in many classrooms, adoption has been slower and more selective than the surrounding hype might suggest.
That hesitation is often read as resistance to innovation, but conversations with educators suggest a different explanation. In many cases, teachers behave as experts in most fields do when encountering a new technology, evaluating whether it solves a real problem. When professionals encounter a tool that is widely marketed but still evolving, they ask a basic question: What does this actually help me do better?
For many educators, that question remains unresolved when it comes to classroom instruction, and that’s what our research project aimed to answer: What are teachers experiencing with generative AI in their classrooms?
In fall 2024, EdSurge researchers facilitated discussions among a group of 17 teachers from around the world. We convened teachers of grades three through 12, and some of them designed and delivered their own lesson plans, either teaching with or about AI.
Overall, our participants’ responses reflect a few major themes, the most prominent being an air of indifference. One participant, a fourth grade math teacher, attempted to use generative AI in her instruction; before adopting it, however, she asked how AI could actually help her elementary students learn math. Her question captured what several participants were thinking, and it aligns with 2024 data from the Pew Research Center showing that educators were split on whether student AI use was more harmful than helpful.
A Technology Arriving Faster Than Schools Can Unpack
A high school computer science teacher from Georgia describes her fears about generative AI’s widespread push into classrooms:
One of my biggest fears is actually Arthur C. Clarke’s rule: any sufficiently advanced technology is indistinguishable from magic…we have students, parents, and teachers looking at AI as if it’s magic.
A high school library media specialist from New York described the same tension from a different angle:
There’s a fear about not being able to keep up with how things progress…the new tools and the impact it has on education.
Schools typically adopt new technologies through deliberate cycles of experimentation, professional development and evaluation. Generative AI has entered classrooms through a different pathway. Consumer tools became available to teachers and students simultaneously, often before schools had developed policies or instructional frameworks for using them.
The result is a situation in which educators encounter the technology while they are still trying to understand its implications.
Where AI Is Already Providing Value
In conversations with teachers, the pattern that appears consistently is a classic one in user design: the most immediate use cases for generative AI have little to do with student learning and much more to do with workload. An engineering and computer science teacher in New Jersey explained:
I have a running discussion with some of my colleagues about how to use AI to lesson plan. I use it routinely to lesson plan. I don’t really use the lessons, but we have to produce all this stuff for admin that no one reads… AI will just roll it off.
Another teacher described similar experimentation among colleagues:
It’s really great that so many people have kind of scratched the surface and are using it to support their productivity and efficiency… lesson planning and newsletters and stuff like that.
These examples reflect a pattern seen across many professions: Generative AI is particularly effective at drafting, summarizing and generating text. In contexts where professionals face time pressure and administrative demands, those capabilities can be immediately useful.
Teachers experience those same pressures. Beyond instruction, many juggle grading, lesson planning, parent communication, extracurricular supervision and administrative reporting. In that environment, a chatbot that helps compress routine tasks can feel genuinely helpful.
Recent research, as well as national survey data from RAND’s American Educator Panels, suggests that teachers are adopting generative AI primarily as a productivity tool rather than a core instructional technology, a pattern that mirrors how educators in this study described their own early experimentation.
However, decisions about classroom instruction are a different matter from a teacher’s administrative workload.
The Instructional Use Case Remains Unclear
When teachers consider introducing AI tools to students during class time, the calculations they make change. The relevant question becomes: What student learning problem does this tool solve? Many educators are still trying to answer this question, even after several years of exposure to generative AI in some capacity.
Some teachers are experimenting with AI in limited ways, such as using it as a revision partner in writing. A science teacher from Guam said:
Students write a first draft and then feed it into ChatGPT for a second draft… but I push them not to use it for research.
Others are designing lessons where the technology itself becomes the subject of inquiry. A high school special education teacher in New York shared how she removes the veil from the magic of chatbots:
We purposely trained [a chatbot] wrong, so students could understand the data is only as good as how and who trains it.
Learning science research suggests that students benefit most when technology supports reflection and revision, rather than replacing the productive struggle of critical thinking and problem solving, a principle that many teachers in this study have applied. In these cases, AI becomes a tool that students analyze and critique; the participants do not treat it as a source of authoritative knowledge.
AI Literacy as a Practical Classroom Entry Point
Many teachers see the most promising instructional opportunity in AI literacy, as it may feel most appropriate to teach students about the tools they’re hearing about and encountering daily. International guidance from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD) increasingly frames AI literacy as a foundational skill for students, encouraging schools to help young people understand how algorithmic systems generate information, rather than incorporating AI tools into everyday classroom tasks.
An elementary teacher from New York state describes focusing on helping students understand how these systems produce information and where they fail:
For me it starts with literacy — [teaching] students how to prompt, and then how to fact-check the information that’s generated to make sure there’s no bias in it.
A middle school teacher from New York uses simple analogies to illustrate how machine learning systems work:
We used an exercise about making the best peanut butter and jelly sandwich. The ingredients were the dataset, the procedure was the algorithm, and the output depended on how it was designed.
These lessons treat AI less as a productivity tool and more as a window into how digital systems generate knowledge.
Hallucinations, Bias and the Question of Trust
Teachers also raised consistent concerns about the reliability of generative AI outputs. An elementary library media specialist from New York said:
You ask ChatGPT to write a paper on something and it makes something up totally imaginary.
To illustrate the risks, some educators point to real-world examples. A high school French teacher shared:
I tried ChatGPT. I think it’s very useful if you know your content very well. If you don’t know your content, it’s hard to tell whether or not it’s accurate.
Others connect these issues to broader discussions about algorithmic bias, explaining why they fear that students will become reliant on these tools. A high school computer science teacher in New Jersey shares her concerns about the increased use of AI by students. She works at a school with large populations of African American, Latino and Black newcomer families from African and Caribbean countries:
When we talk about bias, we look at hiring data and incarceration data… and facial recognition systems where error rates vary depending on who the system is trying to recognize.
In these contexts, AI becomes less a tool for answering questions and more a case study of how technological systems shape information.
The “Air of Indifference”
Taken together, these conversations reveal a stance that is not often captured in public discussions of AI in schools. What initially looked like simple disengagement turned out to be a prominent theme, one that aligns with both existing and emerging research.
By and large, teachers are not rejecting the technology. But they are also not reorganizing their classrooms around AI.
Instead, many are adopting a posture that might be described as pragmatic indifference:
“I use it for lesson planning… but I don’t really use the lessons.”
“I push students not to use it for research.”
In other words, teachers are using AI where it clearly saves time while maintaining boundaries around core learning tasks. This posture reflects professional judgment, rather than resistance to inevitable technological innovation.
Schools exist partly to create conditions in which students practice complex cognitive work, such as deep reading, methodical writing, reasoning through problems and evaluating evidence. If a tool primarily reduces the need to perform that work, teachers have reason to question whether it advances or undermines learning.
And that brings us back to the fourth-grade teacher’s question: What can I use this for with fourth-grade math?
If the instructional use case for AI remains unclear, what should students be learning instead?
That question leads to a deeper conversation about the kinds of skills that remain valuable even as technologies change.
Bellevue, Wash.-based wireless carrier T-Mobile confirmed it made an unspecified number of layoffs this week. A tipster told GeekWire the number was in the hundreds, which the company did not verify.
“To move even faster in a dynamic market while continuing to deliver best-in-class digital experiences for our customers, we’re further aligning our IT organization to support future growth and innovation,” T-Mobile said in a statement to GeekWire on Friday. “This includes the difficult decision of eliminating some roles while continuing to invest and hire in areas.”
Posts on LinkedIn referenced the layoffs, with some alluding to a “major corporate restructuring.”
The new round of cuts comes less than two months after T-Mobile shed 393 workers in Washington state. Those cuts impacted analysts, engineers and technicians, as well as directors, managers and VP-level executives.
T-Mobile employed about 75,000 people as of Dec. 31, 2025. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.
The Seattle area has been hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta, Zillow and other companies.
T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. Last November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.
T‑Mobile grew service revenue to $71.3 billion in 2025, up 8% from the prior year, while posting $11 billion in net income and adding a record 7.6 million postpaid customers, underscoring how it continues to expand even as it trims IT and corporate roles.
The company said Friday it is “providing robust support to impacted employees as they transition.”
Processing 200,000 tokens through a large language model is expensive and slow: the longer the context, the faster the costs spiral. Researchers at Tsinghua University and Z.ai have built a technique called IndexCache that cuts up to 75% of the redundant computation in sparse attention models, delivering up to 1.82x faster time-to-first-token and 1.48x faster generation throughput at that context length.
The technique applies to models using the DeepSeek Sparse Attention architecture, including the latest DeepSeek and GLM families. It can help enterprises provide faster user experiences for production-scale, long-context models, a capability already proven in preliminary tests on the 744-billion-parameter GLM-5 model.
The DSA bottleneck
Large language models rely on the self-attention mechanism, a process where the model computes the relationship between every token in its context and all the preceding ones to predict the next token.
However, self-attention has a severe limitation. Its computational complexity scales quadratically with sequence length. For applications requiring extended context windows (e.g., large document processing, multi-step agentic workflows, or long chain-of-thought reasoning), this quadratic scaling leads to sluggish inference speeds and significant compute and memory costs.
Sparse attention offers a principled solution to this scaling problem. Instead of calculating the relationship between every token and all preceding ones, sparse attention optimizes the process by having each query select and attend to only the most relevant subset of tokens.
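For concreteness, here is the usual back-of-the-envelope comparison, with n the context length, d the attention head dimension, and k the number of selected tokens per query (k much smaller than n):

```latex
% Full self-attention touches every (query, key) pair; top-k sparse
% attention touches only k selected keys per query, so the core cost
% drops from quadratic to linear in the sequence length n.
\[
\text{full attention: } O(n^2 d)
\qquad\longrightarrow\qquad
\text{sparse attention: } O(nkd), \quad k \ll n
\]
```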
DeepSeek Sparse Attention (DSA) is a highly efficient implementation of this concept, first introduced in DeepSeek-V3.2. To determine which tokens matter most, DSA introduces a lightweight “lightning indexer module” at every layer of the model. This indexer scores all preceding tokens and selects a small batch for the main core attention mechanism to process. By doing this, DSA slashes the heavy core attention computation from quadratic to linear, dramatically speeding up the model while preserving output quality.
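To make that pipeline concrete, here is a minimal, illustrative sketch of the indexer-then-sparse-attention pattern for a single query token. The shapes, the scoring projection, and the top-k size are assumptions chosen for readability, not DeepSeek’s actual implementation:

```python
# Illustrative sketch only: a cheap indexer scores all past tokens, then
# full attention runs over just the top-k subset. Not DeepSeek's code.
import torch

def sparse_attention_step(q, keys, values, idx_w, k_top=64):
    """One query attends to only the k_top highest-scoring past tokens.

    q:      (d,)    current query vector
    keys:   (n, d)  keys of all preceding tokens
    values: (n, d)  values of all preceding tokens
    idx_w:  (d, d)  weights of the lightweight indexer projection
    """
    # 1. Indexer scoring: a cheap relevance score for every past token.
    scores = keys @ (idx_w @ q)                        # (n,)

    # 2. Keep only the positions of the k most relevant tokens.
    k = min(k_top, scores.numel())
    top_idx = torch.topk(scores, k).indices            # (k,)

    # 3. Run the expensive core attention over just that subset, making
    #    it linear in k instead of in the full context length n.
    sub_k, sub_v = keys[top_idx], values[top_idx]
    attn = torch.softmax((sub_k @ q) / q.shape[0] ** 0.5, dim=0)
    return attn @ sub_v, top_idx

# Tiny usage example with random data (n=1024 past tokens, d=64).
out, selected = sparse_attention_step(
    torch.randn(64), torch.randn(1024, 64), torch.randn(1024, 64),
    torch.randn(64, 64),
)
```

Note that step 1 still scores every preceding token for every query, which is exactly the residual quadratic cost the researchers set out to remove.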
But the researchers identified a lingering flaw: the DSA indexer itself still operates at a quadratic complexity at every single layer. Even though the indexer is computationally cheaper than the main attention process, as context lengths grow, the time the model spends running these indexers skyrockets. This severely slows down the model, especially during the initial “prefill” stage where the prompt is first processed.
The DSA indexing tax increases with context length (source: arXiv)
Caching attention with IndexCache
To solve the indexer bottleneck, the research team discovered a crucial characteristic of how DSA models process data. The subset of important tokens an indexer selects remains remarkably stable as data moves through consecutive transformer layers. Empirical tests on DSA models revealed that adjacent layers share between 70% and 100% of their selected tokens.
To capitalize on this cross-layer redundancy, the researchers developed IndexCache. The technique partitions the model’s layers into two categories. A small number of full (F) layers retain their indexers, actively scoring the tokens and choosing the most important ones to cache. The rest of the layers become shared (S), performing no indexing and reusing the cached indices from the nearest preceding F layer.
IndexCache splits layers into full and shared layers
During inference, the model simply checks the layer type. If it reaches an F layer, it calculates and caches fresh indices. If it is an S layer, it skips the math and copies the cached data.
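As a rough sketch of that control flow (the layer layout, the top-k size, and the indexer callable are illustrative assumptions, not the paper’s code):

```python
# Hedged sketch of IndexCache's inference flow: full (F) layers refresh
# the index cache, shared (S) layers copy it and skip indexing entirely.
import torch

class IndexCacheRunner:
    def __init__(self, full_layers, k_top=64):
        self.full_layers = set(full_layers)  # e.g. {0, 8, 16, 24}
        self.k_top = k_top
        self.cached_idx = None               # indices from nearest F layer

    def indices_for_layer(self, layer, indexer_scores_fn):
        if layer in self.full_layers:
            # F layer: run the lightning indexer and refresh the cache.
            scores = indexer_scores_fn()     # (n,) relevance scores
            k = min(self.k_top, scores.numel())
            self.cached_idx = torch.topk(scores, k).indices
        # S layer: no indexer math at all, just reuse the cached indices.
        return self.cached_idx
```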
A wide range of optimization techniques tries to address the attention bottleneck by compressing the KV cache, where the keys and values computed for earlier tokens are stored. Instead of shrinking the memory footprint like standard KV cache compression, IndexCache attacks the compute bottleneck.
“IndexCache is not a traditional KV cache compression or sharing technique,” Yushi Bai, co-author of the paper, told VentureBeat. “It eliminates this redundancy by reusing indices across layers, thereby reducing computation rather than just memory footprint. It is complementary to existing approaches and can be combined with them.”
The researchers developed two deployment approaches for IndexCache. (It is worth noting that IndexCache only applies to models that use the DSA architecture, such as the latest DeepSeek models and the latest family of GLM models.)
For developers working with off-the-shelf DSA models where retraining is unfeasible or too expensive, they created a training-free method relying on a “greedy layer selection” algorithm. By running a small calibration dataset through the model, this algorithm automatically determines the optimal placement of F and S layers without any weight updates. Empirical evidence shows that the greedy algorithm can safely remove 75% of the indexers while matching the downstream performance of the original model.
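A hedged sketch of what such a greedy search could look like, assuming a hypothetical score_config() helper that evaluates a candidate sharing pattern on the calibration set and returns a quality score (the paper’s actual criterion may differ):

```python
def greedy_layer_selection(n_layers, budget, score_config):
    """Greedily convert indexer (F) layers into shared (S) layers until
    only `budget` indexers remain. Layer 0 always keeps its indexer,
    since a shared layer needs a preceding full layer to copy from."""
    shared = set()
    while n_layers - len(shared) > budget:
        best_layer, best_score = None, float("-inf")
        for layer in range(1, n_layers):
            if layer in shared:
                continue
            # Trial: how does calibration quality change if this layer
            # drops its indexer and reuses cached indices instead?
            score = score_config(shared | {layer})
            if score > best_score:
                best_layer, best_score = layer, score
        shared.add(best_layer)
    return shared  # layers whose indexers can be removed
```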
For teams pre-training or heavily fine-tuning their own foundation models, the researchers propose a training-aware version that optimizes the network parameters to natively support cross-layer sharing. This approach introduces a “multi-layer distillation loss” during training. It forces each retained indexer to learn how to select a consensus subset of tokens that will be highly relevant for all the subsequent layers it serves.
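One plausible way to sketch such an objective (the paper’s exact formulation may differ) is to distill the retained indexer’s score distribution toward those of every layer it serves:

```python
# Hedged sketch of a multi-layer distillation loss: the one retained
# (student) indexer is pushed toward the score distributions of the
# original per-layer (teacher) indexers it will serve.
import torch
import torch.nn.functional as F

def multi_layer_distill_loss(student_scores, teacher_scores_per_layer):
    """student_scores: (n,) scores from the retained indexer.
    teacher_scores_per_layer: list of (n,) scores, one per served layer."""
    log_p = F.log_softmax(student_scores, dim=-1)
    loss = student_scores.new_zeros(())
    for t in teacher_scores_per_layer:
        # KL(teacher || student) encourages a consensus token ranking
        # that works for all the downstream shared layers.
        loss = loss + F.kl_div(log_p, F.softmax(t, dim=-1), reduction="sum")
    return loss / len(teacher_scores_per_layer)
```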
Real-world speedups on production models
To test the impact of IndexCache, the researchers applied it to the 30-billion-parameter GLM-4.7 Flash model and compared it against the standard baseline.
At a 200K context length, removing 75% of the indexers slashed the prefill latency from 19.5 seconds down to just 10.7 seconds, delivering a 1.82x speedup. The researchers note these speedups are expected to be even greater in longer contexts.
During the decoding phase, where the model generates its response, IndexCache boosted per-request throughput from 58 tokens per second to 86 tokens per second at the 200K context mark, yielding a 1.48x speedup. When the server’s memory is fully saturated with requests, total decode throughput jumped by up to 51%.
IndexCache speeds up the prefill and decode stages significantly (source: arXiv)
For enterprise teams, these efficiency gains translate directly into cost savings. “In terms of ROI, IndexCache provides consistent benefits across scenarios, but the gains are most noticeable in long-context workloads such as RAG, document analysis, and agentic pipelines,” Bai said. “In these cases, we observe at least an approximate 20% reduction in deployment cost and similar improvements in user-perceived latency.” He added that for very short-context tasks, the benefits hover around 5%.
Remarkably, these efficiency gains did not compromise reasoning capabilities. Using the training-free approach to eliminate 75% of indexers, the 30B model matched the original baseline’s average score on long-context benchmarks, scoring 49.9 against the original 50.2. On the highly complex AIME 2025 math reasoning benchmark, the optimized model actually outperformed the original baseline, scoring 92.6 compared to 91.0.
The team also ran preliminary experiments on the production-scale 744-billion-parameter GLM-5 model. They found that eliminating 75% of its indexers with the training-free method yielded at least a 1.3x speedup on contexts over 100K tokens. At the same time, the model maintained a nearly identical quality average on long-context tasks.
IndexCache increases the speed of GLM-5 by 20% while maintaining the accuracy (source: arXiv)
Getting IndexCache into production
For development teams wanting to implement the training-free approach today, the process is straightforward but requires careful setup. While the greedy search algorithm automatically finds the optimal layer configuration, the quality of that configuration depends on the data it processes.
“We recommend using domain-specific data as a calibration set so that the discovered layer-sharing pattern aligns with real workloads,” Bai said.
Once calibrated, the optimization is highly accessible for production environments. Open-source patches are already available on GitHub for major serving engines. “Integration is relatively straightforward — developers can apply the patch to existing inference stacks, such as vLLM or SGLang, and enable IndexCache with minimal configuration changes,” Bai said.
While IndexCache provides an immediate fix for today’s compute bottlenecks, its underlying philosophy points to a broader shift in how the AI industry will approach model design.
“Future foundation models will likely be architected with downstream inference constraints in mind from the beginning,” Bai concluded. “This means designs that are not only scalable in terms of model size, but also optimized for real-world throughput and latency, rather than treating these as post-hoc concerns.”
We’ve professionally sat in a lot of office chairs, and the Branch Ergonomic Chair Pro has held the top spot in our office chair buying guide ever since we first tested it. It’s easy to spend a lot on an office chair, but this one packs in plenty of features for a relatively modest price. We like it at full price, and we’ve shared deal stories when it has gone on sale for $450 in the past.
Right now, though, it’s down to $400 thanks to the Amazon Spring Sale. That’s $50 cheaper than we’ve seen it before, and so of course, we had to tell you.
Branch Ergonomic Chair Pro. Photograph: Julian Chokkattu
Many of the products and gadgets that we recommend are nice to have, but not necessary. Headphones are cool, but you might not need an upgrade. A fancy smart bird feeder is neat, but not crucial. But working from an inefficient, ergonomically poor office setup can wreak havoc on your body. It’s actually bad for you. If you’re sitting at a desk working from a computer, you genuinely, truly need a good office chair.
We recommend this chair for most people because it’s easy to adjust and offers several customizable features. Its armrests, seat, and back can be tilted and maneuvered to dial in the perfect fit for your sit, and there are several different upholstery options available, including leather, vegan leather, and mesh. (Although the Amazon sale only features the mesh option; you’ll have to go to Branch’s website for the other materials.) All of the finishes offer a nice mix of softness, durability, and breathability. You could spend a lot more money for a little more customization, some higher-end materials, or even more adjustments, but we think this mesh version does a darn good job for what you’ll pay and what most people need. Snagging it for $100 less is a no-brainer if you’re in the market.
Laser Welding is apparently the new hotness, in part because these sci-fi rayguns masquerading as tools are really cool. They cut! They weld! They Julienne Fry! Well, maybe not that last one. In any case, perhaps feeling the need to cancel out that coolness as quickly as he possibly could, YouTuber [Wesley Treat] decided to make a giant version of his own head.
[Wesley] had previously been 3D scanned as part of the maker scans project, which you can find over on Printables. Those of you who really hate YouTubers, take note: finally you have something to take your frustrations out on. [Wesley] takes that model into Blender to decimate and decapitate (fans of the band Tyr may wonder if the model questioned his sword) before feeding that head through an online papercraft tool called PaperMaker to generate cut files for his CNC. There are also a lot of welding montages interspersed there as he practices with the new tool. [Wesley] did first try out his new raygun on steel in a previous video, but even knowing that, he makes the learning curve on these lasers look quite scalable.
While we’re not likely to follow in [Wesley]’s footsteps and create our own low-poly Zardoz (Zardozes? Zardii?), using a papercraft toolchain and CNC equipment with sheet aluminum is absolutely a great idea worth stealing. It’s very similar to what another hacker did with PCBs, though that project was perhaps more reasonable in scale and ego.
Company establishes dominant position on world’s largest retail platform while building multi-channel distribution strategy
Innovative Eyewear, Inc. (NASDAQ: LUCY) has emerged as the clear category leader in the rapidly growing smart safety glasses segment, capturing approximately 44% market share on Amazon.com according to recent market analysis. This dominant position on the world’s most popular retail platform validates the company’s product strategy and provides a powerful foundation for broader retail expansion in 2026. The achievement is particularly significant given that Lucyd Armor represents the only smart safety glass available on the platform with full safety certification in the United States, according to company research. This combination of regulatory compliance, smart features, and consumer accessibility creates a defensible competitive position that would be difficult for new entrants to replicate quickly.
Market Leadership Built on Product Innovation
Lucyd Armor has distinguished itself in the market by offering a unique combination of features that address real workplace needs. The product line delivers ANSI Z87.1+ certified protection alongside high-fidelity audio, hands-free walkie communication features, photochromic lenses, and prescription adaptability, all within a single frame design. This comprehensive feature set addresses a significant gap in the industrial and commercial safety eyewear market, where workers have historically been forced to choose between safety compliance and connectivity.

Lucyd Armor eliminates this tradeoff, allowing professionals across construction, manufacturing, logistics, and other industries to maintain communication and access to information while meeting safety requirements. The product’s appeal extends beyond traditional industrial applications. Recent enterprise adoption includes a top-five global logistics company that placed an initial order to utilize Lucyd Armor with the Lucyd app’s Walkie feature, enabling secure, hands-free team communication through private encrypted channels.
Amazon as Strategic Foundation
Amazon’s role as both a consumer discovery platform and a business purchasing channel makes the company’s 44% market share particularly valuable. The platform serves as a primary research and purchasing venue for both individual consumers and business buyers, providing Innovative Eyewear with exposure to diverse customer segments. The Amazon channel also provides valuable market intelligence. Real-time sales data, customer reviews, and competitive positioning insights allow the company to rapidly iterate on product development and marketing approaches. This feedback loop has informed product expansions including the introduction of multiple Lucyd Armor variants to address specific use cases and preferences. Customer reviews on Amazon have consistently highlighted the product’s audio quality, comfort for all-day wear, and successful integration of safety certification with smart features. This organic customer validation reinforces the company’s product-market fit and provides social proof for prospective buyers researching the category.
Multi-Channel Expansion Strategy
While Amazon market leadership provides an important foundation, Innovative Eyewear has been systematically building distribution across complementary channels to maximize market reach and reduce platform concentration risk. The company’s products are now available through major national retailers including Walmart.com, Target.com, BestBuy.com, and DicksSportingGoods.com. This expansion into established retail ecosystems provides access to millions of additional customers who prefer shopping through these familiar platforms. Simultaneously, the company has been developing its optical industry presence through participation in major trade shows including Vision Expo West, MIDO Milan, and SILMO Paris. These efforts have resulted in approximately 40 new optical industry accounts and initial orders from key European markets including the UK, Romania, Greece, Spain, and France. The B2B channel development extends to specialized industrial and safety equipment distributors. By making Lucyd Armor available through channels where businesses already purchase personal protective equipment, Innovative Eyewear can accelerate adoption among commercial customers who may not discover the product through consumer retail channels.
The company’s investment in obtaining comprehensive safety certifications across multiple jurisdictions creates meaningful barriers to competitive entry. Lucyd Armor now carries ANSI Z87.1+ certification for U.S. markets, CSA Z94.3 for Canada, and EN 16639:2018 for European markets.
These certifications require significant time and investment to obtain, involving rigorous testing protocols and compliance documentation. For competitors seeking to enter the smart safety eyewear category, this regulatory burden creates delays and costs that protect Innovative Eyewear’s first-mover advantage. The certification strategy also enables geographic expansion: with compliance already secured for North American and European markets, the company can rapidly scale distribution in these regions without additional product development or testing delays.
Looking Ahead to 2026
Management has indicated that the company’s product mix and global fulfilment network position it to scale distribution across hardware, retail, and optical chains throughout 2026. This suggests upcoming partnership announcements and channel expansion that could significantly amplify the company’s market presence. The combination of Amazon market leadership, expanding multi-channel distribution, regulatory certifications, and demonstrated product-market fit creates a compelling growth narrative for investors. As smart safety glasses transition from niche product to standard workplace equipment, Innovative Eyewear’s established position and distribution infrastructure should enable it to capture disproportionate value from category expansion. For investors evaluating the wearable technology sector, Innovative Eyewear’s clear market leadership in an emerging category with significant growth potential represents a differentiated opportunity. The company’s success in establishing dominant Amazon share while simultaneously building diversified distribution demonstrates execution capability that reduces commercial risk.
About Innovative Eyewear
Innovative Eyewear develops and manufactures ChatGPT-enabled smart eyewear under the Lucyd®, Lucyd Armor®, Reebok®, Eddie Bauer®, and Nautica® brands. The company’s mission is to Upgrade Your Eyewear® by offering Bluetooth audio glasses that allow users to stay safely and ergonomically connected to their digital lives through hundreds of frame and lens combinations.
OPPO India has announced a major expansion of its service network across the country. The company is rolling out its Service Center 3.0 Pro to over 150 locations in India, going beyond its earlier plan of 110 centers. OPPO aims to launch more than 50 new service centers by June 2026 as part of this growth.
As smartphones continue to play a major role in everyday activities, the importance of strong after-sales support has increased. OPPO India is working to improve its service quality while making support easier to access for users across India. The brand is clearly focusing on delivering a smoother and more reliable customer experience.
To improve the service experience, the Service Center 3.0 Pro model brings several user-friendly features. OPPO India includes digital check-ins, real-time updates, and clear communication throughout the visit. Customers can see the repair process directly, making it more transparent. The centers also offer a cleaner layout, product display zones, and relaxing waiting spaces.
OPPO is strengthening its service quality by training staff and offering multilingual support, making interactions smoother for users. Customers are often attended to within minutes of arrival. As per Counterpoint Research, the brand is among the top performers in repair transparency, which builds greater trust among users.
Furthermore, the company provides assistance in 19 languages, making it easy for users across regions to interact without difficulty. Additionally, the company provides free pick-up and drop-off services for any repairs. This adds another layer of convenience for customers, especially if they are unable to reach the service center.
The company provides service for most repairs within a day, so customers do not have to wait long to start using their devices. This expansion by OPPO reinforces its dominance by ensuring customers receive reliable, convenient service.
Systemd now includes a user date-of-birth field for age verification purposes
Garuda Linux refuses to enforce age checks, citing no legal obligation
TBOTE Project claims Meta contributes significant funding to push age laws
Recent changes within the Linux ecosystem suggest that age verification could move closer to the operating system level.
An update to systemd introduces a new field for storing a user’s date of birth, designed to support compliance with laws in regions including California, Colorado, and Brazil.
The addition is intended to enable age verification requirements and may also support upcoming parental control features linked to application frameworks.
Age data will be stored
The feature stores user birth dates within system records, with modification restricted to users holding root privileges.
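As a purely hypothetical illustration (the article does not give the field’s actual name or the record format, so both are assumptions here), gating logic built on a birth-date field in a JSON user record might look like this:

```python
# Hypothetical sketch only: systemd user records are JSON documents, but
# the field name "dateOfBirth" and the record shape below are assumptions,
# not the confirmed systemd interface.
import json
from datetime import date

record = json.loads('{"userName": "alice", "dateOfBirth": "2012-05-14"}')

def years_old(rec, today=None):
    """Age in whole years, computed from the assumed birth-date field."""
    born = date.fromisoformat(rec["dateOfBirth"])
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet.
    return today.year - born.year - (
        (today.month, today.day) < (born.month, born.day)
    )

print(years_old(record) >= 18)  # e.g. a parental-control or age gate
```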
While the change has been merged into the codebase, its long-term role depends on adoption across distributions and whether it remains in future releases.
Reactions across Linux distros have been inconsistent, reflecting differing legal obligations and technical philosophies.
Developers associated with Garuda Linux stated that the distribution will not introduce age verification measures, citing the absence of legal requirements in its jurisdictions.
The maintainers also described the wider discussion as contentious, noting that “some of us have honestly been quite shocked at the way this conversation has been moving in the Linux community as a whole.”
They added that “distribution developers are being hounded at every corner for complying with these laws,” pointing to growing tension between compliance and community expectations.
The response illustrates how decentralized development models complicate unified approaches to regulatory changes.
The introduction of age-related features follows new legislation aimed at enforcing online safety requirements.
Reports linked to research from the TBOTE Project claim that lobbying efforts behind these laws are backed by substantial financial resources.
The research suggests that Meta has contributed funding toward initiatives such as the App Store Accountability Act, although these claims remain part of ongoing public debate.
Additional pressure is attributed to advocacy groups such as the Digital Childhood Alliance, which has reportedly influenced policy discussions despite its relatively recent formation.
These developments indicate that regulatory changes affecting operating systems may continue to expand beyond application-level controls.
The shift has broader implications for distributions that rely on systemd, as well as those that deliberately avoid it.
Some projects, including GrapheneOS, have publicly stated that they will not require personal data or identification for use, even if this limits availability in certain regions.
The integration of age-related data into system components may also affect related technologies, including application packaging systems and parental control frameworks.
As discussions continue, Linux distros will likely adopt different responses depending on legal exposure and community priorities.
The Meadow slips into a pocket without a second thought. Measuring just 1.3 by 2 by 0.4 inches and weighing four ounces, it feels closer to a good luck charm than a conventional smartphone. The recycled polycarbonate shell has a smooth, understated feel that should hold up well to everyday use, and the three inch square display sits centered in that compact body, clear enough for a quick glance but small enough that lingering on it for too long simply isn’t that appealing. That last part is rather the point.
Setup takes under five minutes and works with your existing phone number, no new SIM required. Calls go to your main phone first, and if that is unavailable Meadow picks up automatically. Messaging works on a similar principle, with one deliberate restriction: only 12 contacts you have approved can reach you by text. Anything from outside that list simply does not come through, which cuts spam and unwanted pings entirely. Leave your main phone behind and an auto-reply lets people know you are unreachable for the time being.
The app selection is deliberately minimal but covers what most people actually need day to day. You get calls, messaging, a camera, a clock, maps, notes, and weather. Spotify and Apple Music handle music streaming, with local playback and a dedicated app available for podcasts and audiobooks. Strava covers fitness tracking and Uber handles rides. That is the full list, and there is no app store to tempt you into adding more. For anyone who has grown tired of their attention being pulled in a dozen directions at once, that simplicity feels less like a limitation and more like a breath of fresh air.
The hardware is more than capable of handling the lean app selection without any lag, with 6GB of memory and 128GB of storage on board. A single 13 megapixel rear camera is there when you need it, and the absence of a front facing lens is a deliberate trade-off rather than an oversight. Battery life stretches to a day or two depending on how you use it, and USB-C fast charging keeps top-ups quick. Bluetooth handles headphones and speakers without issue, though there is no headphone jack. Wi-Fi, Bluetooth, NFC, and 4G are all supported, with connectivity managed through a monthly service that costs $10 after the first nine months of free service included with purchase.
Pre-orders are open now at $399, with the price rising to $449 once stock arrives. US customers can expect delivery around June 2026, with each unit coming bundled with a beach pouch, an activity case, and a charging cable. [Source]
Riders in one European capital will soon be able to summon a self-driving car and pay for it using a familiar app. Uber is collaborating with Pony.ai, a Chinese autonomous car technology startup, and Verne, a Croatian company familiar with the local scene. On March 26, 2026, the three firms revealed their plans, and they’ve decided to kick things off in Zagreb.
You can already witness test vehicles driving around Zagreb as part of the real-world testing procedure. They’re all powered by Pony.ai’s latest autonomous system, Gen-7 technology, which provides them with more than enough intelligence to navigate from A to B without the need for a human driver. They are all Arcfox Alpha T5s, and after the final checks are completed, fare collection will be only a few weeks or months away.
It all works relatively simply: Pony.ai provides the self-driving technology and software that allows the cars to traverse routes on their own. Given their local experience and presence in Zagreb, Verne owns the cars and manages the day-to-day operations, while Uber integrates the rides into their worldwide network, allowing anybody with the app to order one alongside a regular ride or bike, all from the same app.
Pony.ai has already launched commercial robotaxis in a number of Chinese cities, and the data show that they are covering costs and turning a profit. That track record gives the partners great confidence that they can replicate this success in Europe as well. Verne understands the local roads and rules, as well as client expectations across Europe. Together, they want to avoid the lengthy delays that have hindered other autonomous initiatives throughout the continent.
Next, the partners discuss expanding their fleet to thousands of vehicles and several cities in the coming years. For the time being, Zagreb serves as a proving ground. Success there will be the key to expanding into other European markets, and even beyond. Meanwhile, Verne is working with regulators to ensure that their safety standards remain similar no matter where the service ends up.
Dara Khosrowshahi, Uber’s CEO, says the goal is to make autonomous rides more accessible by combining great technology with a thorough grasp of the local market. James Peng, who founded Pony.ai, pointed to the same idea, noting that proven systems work best when paired with operators who understand each market. Marko Pejkovic, who leads Verne, put it simply: Europe has waited long enough for real autonomous service instead of endless tests. [Source]
The first day of the BGIS final has just concluded. Today, we saw some amazing battle action not just from the top teams but from almost everyone. Still, there were winners and losers. The biggest winner of the day was Soul, which topped the rankings, followed closely by Godlike and VS. At the bottom was Nebula, which had a horrible run of matches. Here’s what the standings look like after day one of the BGIS Grand Finals.
BGIS 2026 Grand Finals Standings After Day 1
| Teams | WWCD | Position Points | Finish Points | Total Points |
| --- | --- | --- | --- | --- |
| SOUL | 1 | 18 | 48 | 66 |
| GODL | 2 | 21 | 42 | 63 |
| VS | 1 | 23 | 34 | 57 |
| WF | 1 | 23 | 32 | 55 |
| GENS | 0 | 10 | 44 | 54 |
| VE | 1 | 17 | 31 | 48 |
| RGE | 0 | 17 | 25 | 42 |
| RNTX | 0 | 6 | 29 | 35 |
| OG | 0 | 7 | 21 | 28 |
| NINZ | 0 | 8 | 18 | 26 |
| K9 | 0 | 10 | 14 | 24 |
| MYTH | 0 | 10 | 14 | 24 |
| WELT | 0 | 8 | 13 | 21 |
| TT | 0 | 5 | 15 | 20 |
| LEFP | 0 | 5 | 11 | 16 |
| NBE | 0 | 4 | 10 | 14 |
Day 2 awaits us tomorrow, and it’s historically a day of comebacks in BGMI. We hope to see similar top-tier action. If you missed today’s games, check out our highlights of day 1.