
Tech

Google’s new TurboQuant algorithm speeds up AI memory 8x, cutting costs by 50% or more


As Large Language Models (LLMs) expand their context windows to process massive documents and intricate conversations, they encounter a brutal hardware reality known as the “Key-Value (KV) cache bottleneck.”

Every word a model processes must be stored as a high-dimensional vector in high-speed memory. For long-form tasks, this "digital cheat sheet" swells rapidly, devouring the video random access memory (VRAM) of the graphics processing units (GPUs) used during inference and steadily dragging down the model's performance.
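To make the scale concrete, here is a back-of-the-envelope sketch of how the KV cache grows with context length; the model dimensions below are illustrative (roughly those of an 8-billion-parameter transformer), not figures from the article:

```python
# Rough KV cache sizing for a transformer during inference.
# Dimensions are illustrative (roughly Llama-3.1-8B-like), not from the article.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_value):
    # Factor of 2: one Key vector and one Value vector per token, per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

fp16 = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128,
                      seq_len=128_000, bytes_per_value=2)
print(f"fp16 KV cache at 128K tokens: {fp16 / 2**30:.1f} GiB")    # 15.6 GiB
print(f"after a 6x compression:       {fp16 / 6 / 2**30:.1f} GiB")  # 2.6 GiB
```

Even a mid-sized model can burn tens of gigabytes of VRAM on the cache alone at long context, which is why a 6x reduction translates directly into fewer GPUs per deployment.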

But have no fear, Google Research is here: yesterday, the unit within the search giant released its TurboQuant algorithm suite, a software-only breakthrough that provides the mathematical blueprint for extreme KV cache compression. It enables a 6x average reduction in the KV memory a given model uses and an 8x speedup in computing attention logits, which could cut costs by more than 50% for enterprises that apply it to their models.

The theoretically grounded algorithms and associated research papers are now publicly available for free, including for enterprise usage, offering a training-free solution to reduce model memory without sacrificing intelligence.


The arrival of TurboQuant is the culmination of a multi-year research arc that began in 2024. While the underlying mathematical frameworks—including PolarQuant and Quantized Johnson-Lindenstrauss (QJL)—were documented in early 2025, their formal unveiling marks a transition from academic theory to large-scale production reality.

The timing is strategic, coinciding with upcoming presentations of these findings at the International Conference on Learning Representations (ICLR 2026) in Rio de Janeiro, Brazil, and the Annual Conference on Artificial Intelligence and Statistics (AISTATS 2026) in Tangier, Morocco.

By releasing these methodologies under an open research framework, Google is providing the essential "plumbing" for the burgeoning "Agentic AI" era: the need for massive, efficient, and searchable vectorized memory that can finally run on the hardware users already own. The release is already believed to be affecting the stock market, lowering the share prices of memory suppliers as traders read it as a sign that less memory will be needed (perhaps incorrectly, given Jevons' paradox).

The Architecture of Memory: Solving the Efficiency Tax

To understand why TurboQuant matters, one must first understand the “memory tax” of modern AI. Traditional vector quantization has historically been a “leaky” process.


When high-precision decimals are compressed into simple integers, the resulting “quantization error” accumulates, eventually causing models to hallucinate or lose semantic coherence.

Furthermore, most existing methods require “quantization constants”—meta-data stored alongside the compressed bits to tell the model how to decompress them. In many cases, these constants add so much overhead—sometimes 1 to 2 bits per number—that they negate the gains of compression entirely.
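That overhead arithmetic can be sketched with illustrative numbers (a common block-wise scheme stores an fp16 scale, and optionally a zero-point, per block of values; these figures are examples, not TurboQuant's):

```python
# Amortized overhead of per-block quantization constants (illustrative).
def overhead_bits_per_value(constant_bits, block_size):
    # The block's constants are shared by every value in the block.
    return constant_bits / block_size

# One fp16 scale per 16-value block -> 1 extra bit per number:
print(overhead_bits_per_value(constant_bits=16, block_size=16))  # 1.0
# An fp16 scale plus an fp16 zero-point per block -> 2 extra bits per number:
print(overhead_bits_per_value(constant_bits=32, block_size=16))  # 2.0
# At a 2-bit payload, the constants alone match or double the payload.
```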

TurboQuant resolves this paradox through a two-stage mathematical shield. The first stage utilizes PolarQuant, which reimagines how we map high-dimensional space.

Rather than using standard Cartesian coordinates (X, Y, Z), PolarQuant converts vectors into polar coordinates consisting of a radius and a set of angles.


The breakthrough lies in the geometry: after a random rotation, the distribution of these angles becomes highly predictable and concentrated. Because the “shape” of the data is now known, the system no longer needs to store expensive normalization constants for every data block. It simply maps the data onto a fixed, circular grid, eliminating the overhead that traditional methods must carry.
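A minimal sketch of that idea (this is not Google's implementation; the coordinate pairing, the 3-bit grid, and the function names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

def to_polar_pairs(v):
    # Treat consecutive coordinate pairs as 2-D points: (radius, angle).
    x, y = v[0::2], v[1::2]
    return np.hypot(x, y), np.arctan2(y, x)

def quantize_angles(theta, bits=3):
    # Snap each angle onto a fixed circular grid. No per-block scale is
    # needed, because every angle already lives in [-pi, pi].
    step = 2 * np.pi / 2**bits
    return np.round(theta / step) * step

d = 8
v = rng.normal(size=d)
radius, theta = to_polar_pairs(random_rotation(d) @ v)
theta_q = quantize_angles(theta)
# Rounding to the grid bounds the angular error by half a grid step:
print(np.max(np.abs(theta_q - theta)) <= np.pi / 2**3)  # True
```

Because the grid is fixed in advance, the quantizer carries no per-block normalization constants at all; only the radii and the gridded angles are stored.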

The second stage acts as a mathematical error-checker. Even with the efficiency of PolarQuant, a residual amount of error remains. TurboQuant applies a 1-bit Quantized Johnson-Lindenstrauss (QJL) transform to this leftover data. By reducing each error number to a simple sign bit (+1 or -1), QJL serves as a zero-bias estimator. This ensures that when the model calculates an “attention score”—the vital process of deciding which words in a prompt are most relevant—the compressed version remains statistically identical to the high-precision original.
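The sign-bit estimator can be sketched as follows; this is a simplified Gaussian sign sketch in the spirit of QJL, with arbitrary dimensions and an invented API, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def qjl_encode(k, S):
    # Store only 1 bit (the sign) per random projection, plus k's norm.
    return np.sign(S @ k), np.linalg.norm(k)

def qjl_inner(q, k_bits, k_norm, S):
    # Estimate <q, k> from the sign bits. The sqrt(pi/2) factor corrects
    # the shrinkage introduced by discarding magnitudes, making the
    # estimator unbiased (a standard property of Gaussian sign sketches).
    m = S.shape[0]
    return k_norm * np.sqrt(np.pi / 2) / m * ((S @ q) @ k_bits)

d, m = 64, 4096              # more projections m -> lower variance
S = rng.normal(size=(m, d))  # shared random Gaussian sketch
q, k = rng.normal(size=d), rng.normal(size=d)

k_bits, k_norm = qjl_encode(k, S)
est, exact = qjl_inner(q, k_bits, k_norm, S), q @ k
print(f"exact={exact:.2f}  estimate={est:.2f}")  # tracks the exact value up to sketching noise
```

In a KV cache setting, keys would be held as these sign bits while attention scores are computed from the estimator, which is how accuracy can survive such aggressive compression.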

Performance benchmarks and real-world reliability

The true test of any compression algorithm is the “Needle-in-a-Haystack” benchmark, which evaluates whether an AI can find a single specific sentence hidden within 100,000 words.

In testing across open-source models like Llama-3.1-8B and Mistral-7B, TurboQuant achieved perfect recall scores, mirroring the performance of uncompressed models while reducing the KV cache memory footprint by a factor of at least 6x.
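For intuition, a toy version of such a harness might look like the sketch below; the filler text, needle, and stand-in generate() are invented (a real run would send the full prompt to the model under test):

```python
# Toy needle-in-a-haystack harness with a stand-in model call.
FILLER = "The quick brown fox jumps over the lazy dog."
NEEDLE = "The secret passphrase is 'marmalade sunrise'."

def build_haystack(n_sentences, depth, needle):
    # Bury the needle at a fractional depth inside the filler text.
    sentences = [FILLER] * n_sentences
    sentences.insert(int(depth * n_sentences), needle)
    return " ".join(sentences)

def generate(prompt):
    # Stand-in for an LLM call: returns the sentence naming the passphrase.
    for sentence in prompt.split(". "):
        if "passphrase is" in sentence:
            return sentence
    return "not found"

haystack = build_haystack(n_sentences=2_000, depth=0.5, needle=NEEDLE)
answer = generate(haystack + "\n\nWhat is the secret passphrase?")
print("marmalade sunrise" in answer)  # True: perfect recall on the toy task
```

A full benchmark sweeps both the context length and the needle's depth, and reports recall at each combination.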


This “quality neutrality” is rare in the world of extreme quantization, where 3-bit systems usually suffer from significant logic degradation.

Beyond chatbots, TurboQuant is transformative for high-dimensional search. Modern search engines increasingly rely on "semantic search," comparing the meanings of billions of vectors rather than just matching keywords. TurboQuant consistently achieves superior recall ratios compared to existing state-of-the-art methods like RaBitQ and Product Quantization (PQ), all while requiring virtually zero indexing time.

This makes it an ideal candidate for real-time applications where data is constantly being added to a database and must be searchable immediately. Furthermore, on hardware like NVIDIA H100 accelerators, TurboQuant's 4-bit implementation achieved an 8x performance boost in computing attention logits, a critical speedup for real-world deployments.

Rapt community reaction

The reaction on X, obtained via a Grok search, included a mixture of technical awe and immediate practical experimentation.


The original announcement from @GoogleResearch generated massive engagement, with over 7.7 million views, signaling that the industry was hungry for a solution to the memory crisis.

Within 24 hours of the release, community members began porting the algorithm to popular local AI libraries like MLX for Apple Silicon and llama.cpp.

Technical analyst @Prince_Canuma shared one of the most compelling early benchmarks, implementing TurboQuant in MLX to test the Qwen3.5-35B model.

Across context lengths ranging from 8.5K to 64K tokens, he reported a 100% exact match at every quantization level, noting that 2.5-bit TurboQuant reduced the KV cache by nearly 5x with zero accuracy loss. This real-world validation echoed Google’s internal research, proving that the algorithm’s benefits translate seamlessly to third-party models.


Other users focused on the democratization of high-performance AI. @NoahEpstein_ provided a plain-English breakdown, arguing that TurboQuant significantly narrows the gap between free local AI and expensive cloud subscriptions.

He noted that models running locally on consumer hardware like a Mac Mini “just got dramatically better,” enabling 100,000-token conversations without the typical quality degradation.

Similarly, @PrajwalTomar_ highlighted the security and speed benefits of running “insane AI models locally for free,” expressing “huge respect” for Google’s decision to share the research rather than keeping it proprietary.

Market impact and the future of hardware

The release of TurboQuant has already begun to ripple through the broader tech economy. Following the announcement on Tuesday, analysts observed a downward trend in the stock prices of major memory suppliers, including Micron and Western Digital.


The market’s reaction reflects a realization that if AI giants can compress their memory requirements by a factor of six through software alone, the insatiable demand for High Bandwidth Memory (HBM) may be tempered by algorithmic efficiency.

As we move deeper into 2026, the arrival of TurboQuant suggests that the next era of AI progress will be defined as much by mathematical elegance as by brute force. By redefining efficiency through extreme compression, Google is enabling “smarter memory movement” for multi-step agents and dense retrieval pipelines. The industry is shifting from a focus on “bigger models” to “better memory,” a change that could lower AI serving costs globally.

Strategic considerations for enterprise decision-makers

For enterprises currently using or fine-tuning their own AI models, the release of TurboQuant offers a rare opportunity for immediate operational improvement.

Unlike many AI breakthroughs that require costly retraining or specialized datasets, TurboQuant is training-free and data-oblivious.


This means organizations can apply these quantization techniques to their existing fine-tuned models—whether they are based on Llama, Mistral, or Google’s own Gemma—to realize immediate memory savings and speedups without risking the specialized performance they have worked to build.

From a practical standpoint, enterprise IT and DevOps teams should consider the following steps to integrate this research into their operations:

Optimize Inference Pipelines: Integrating TurboQuant into production inference servers can reduce the number of GPUs required to serve long-context applications, potentially slashing cloud compute costs by 50% or more.

Expand Context Capabilities: Enterprises working with massive internal documentation can now offer much longer context windows for retrieval-augmented generation (RAG) tasks without the massive VRAM overhead that previously made such features cost-prohibitive.


Enhance Local Deployments: For organizations with strict data privacy requirements, TurboQuant makes it feasible to run highly capable, large-scale models on on-premise hardware or edge devices that were previously insufficient for 32-bit or even 8-bit model weights.

Re-evaluate Hardware Procurement: Before investing in massive HBM-heavy GPU clusters, operations leaders should assess how much of their bottleneck can be resolved through these software-driven efficiency gains.

Ultimately, TurboQuant proves that the limit of AI isn’t just how many transistors we can cram onto a chip, but how elegantly we can translate the infinite complexity of information into the finite space of a digital bit. For the enterprise, this is more than just a research paper; it is a tactical unlock that turns existing hardware into a significantly more powerful asset.


It’s here! NASA reveals full livestream schedule for crewed moon mission


The excitement is building with NASA now just a few days away from sending four astronauts on a voyage around the moon.

On Wednesday, the space agency shared its schedule for coverage of the final buildup and main event, including a Q&A with the astronauts this Friday, blast off on Wednesday, April 1, and regular updates as the crew make their way to the moon.

Americans Reid Wiseman, Victor Glover, and Christina Koch, together with Canadian Jeremy Hansen, will leave the launchpad aboard an Orion spacecraft carried skyward by NASA’s massive Space Launch System (SLS) rocket.

They’ll spend several days in Earth orbit checking the spacecraft’s systems before heading toward the moon. They won’t land on the lunar surface, but instead fly around it on a journey that will take humans farther from Earth than at any time since the Apollo era more than five decades ago.


Below is a summary of the events linked to the upcoming Artemis II mission. All times are Eastern Time (ET):

Friday, March 27

2:30 p.m.: The Artemis II crew will arrive at the Kennedy Space Center for a Q&A session with the press. NASA chief Jared Isaacman will also be in attendance, along with CSA (Canadian Space Agency) president Lisa Campbell.

Sunday, March 29


9:30 a.m.: The Artemis II crew will spend some time answering additional media questions, but this time virtually, from their quarantine facility.

2 p.m.: NASA officials linked to the mission will hold a status update on preparations for the Artemis II launch.

Monday, March 30

5 p.m.: Following a key mission meeting, NASA will host a news conference to provide a status update on preparations for launch.


Tuesday, March 31

1 p.m.: The space agency will hold a prelaunch news conference on the countdown status.

Wednesday, April 1

7:45 a.m.: Coverage begins on NASA+ of the tanking operations to load propellant into the SLS rocket. The livestream will include various views of the rocket and commentator analysis.


12:50 p.m.: NASA+ begins the official livestream for the much-anticipated launch, which is targeted for no earlier than 6:24 p.m. Following liftoff, coverage will continue on YouTube after Orion’s solar array wings deploy in space.

Around two-and-a-half hours after launch, and after the SLS rocket’s upper stage has performed a burn to send Orion and its crew to high-Earth orbit, NASA will hold a news conference to offer an update on the mission. The start time could change, depending on the precise liftoff time. In fact, the entire schedule could change, according to how the final preparations proceed. NASA will post any developments on its X account.

For information on the timing of daily updates during the mission, including live link-ups with the crew, check out NASA’s full schedule.


The least surprising chapter of the Manus story is what’s happening right now


Okay, so the U.S. and China are locked in an all-out race to build the most powerful AI on the planet. Beijing is throwing billions at homegrown models, tightening its grip on the tech sector, and watching nervously as its best AI talent gravitates to U.S. companies. A Carnegie Endowment study published late last year found that 87 of the 100 top Chinese AI researchers at U.S. institutions in 2019 are still there.

Yet Manus — one of China’s most buzzed-about AI startups — quietly relocated to Singapore and sold itself to Meta for $2 billion. Did anyone think there would not be a reckoning over this tie-up?

As industry watchers know, Manus burst onto the scene in the spring of last year with a demo video showing an AI agent screening job candidates, planning vacations, and analyzing stock portfolios, and it cheekily claimed it outperformed OpenAI’s Deep Research. Within weeks, Benchmark — the consummate Silicon Valley venture firm — led a $75 million funding round at a $500 million valuation. That was surprising. (Senator John Cornyn had thoughts, tweeting at the time, “Who thinks it is a good idea for American investors to subsidize our biggest adversary in AI, only to have the CCP use that technology to challenge us economically and militarily? Not me.”)

By December, Manus had millions of users and was pulling in over $100 million in annual recurring revenue. Then Meta came calling, and Mark Zuckerberg, who has staked the company’s future on AI, snapped it up for $2 billion.


It’s worth noting that Manus didn’t just sell itself to an American buyer; it spent the better part of last year actively trying to operate outside China’s orbit. The company relocated its headquarters and core team from Beijing to Singapore, restructured its ownership, and after the Meta deal was announced, Meta pledged to cut all ties with Manus’s Chinese investors and shut down its operations in China entirely. By every measure, Manus was trying to make itself a Singapore company.

But if that string of events raised eyebrows in Washington, you can only imagine that in Beijing, they were apoplectic.

China has a phrase for all of this: “selling young crops” — homegrown AI companies that move abroad and sell themselves to foreign buyers before they’ve fully matured, taking their intellectual property and talent with them.


Beijing hates it and has spent years establishing that no company operates outside its reach. Surely, we all remember that time Jack Ma gave a speech in 2020, mildly criticizing Chinese regulators, after which he disappeared from public life for months, Ant Group’s blockbuster IPO was killed overnight, and Alibaba was handed a $2.8 billion fine. China then spent the next two years methodically dismantling its own booming tech sector, wiping out hundreds of billions in market value. Chinese leaders are many things, but subtle is not one of them.


Which is why it wasn’t entirely surprising when, on Tuesday, the Financial Times reported that Manus co-founders Xiao Hong and Ji Yichao were summoned to a meeting this month with China’s National Development and Reform Commission and told that they wouldn’t be leaving the country for a while. No formal charges have been filed — just an inquiry into whether the Meta deal violated Beijing’s foreign investment rules.

Beijing is calling it a routine regulatory review.

At some point, someone at Manus probably thought they’d gotten away with it, and maybe they still will. But given the stakes of the AI race, that was always a big gamble. Now Beijing wants answers; Manus’s founders are apparently not going anywhere until it gets them.


GitHub adds AI-powered bug detection to expand security coverage



GitHub is adopting AI-based scanning for its Code Security tool to expand vulnerability detections beyond the CodeQL static analysis and cover more languages and frameworks.

The developer collaboration platform says that the move is meant to uncover security issues “in areas that are difficult to support with traditional static analysis alone.”

CodeQL will continue to provide deep semantic analysis for supported languages, while AI detections will provide broader coverage for Shell/Bash, Dockerfiles, Terraform, PHP, and other ecosystems.

The new hybrid model is expected to enter public preview in early Q2 2026, possibly as soon as next month.


Finding bugs before they bite

GitHub Code Security is a set of application security tools integrated directly into GitHub repositories and workflows.

It is available for free (with limitations) for all public repositories. However, paying users can access the full set of features for private/internal repositories as part of the GitHub Advanced Security (GHAS) add-on suite.

It offers code scanning for known vulnerabilities, dependency scanning to pinpoint vulnerable open-source libraries, secrets scanning to uncover leaked credentials on public assets, and provides security alerts with Copilot-powered remediation suggestions.

The security tools operate at the pull request level, with the platform selecting the appropriate tool (CodeQL or AI) for each case, so any issues are caught before merging the potentially problematic code.


If any issues, such as weak cryptography, misconfigurations, or insecure SQL, are detected, those are presented directly in the pull request.

GitHub's internal testing showed that the system processed over 170,000 findings over 30 days, with 80% positive developer feedback indicating that the flagged issues were valid.

These results showed “strong coverage” of the target ecosystems that had not been sufficiently scrutinized before.

GitHub also highlights the importance of Copilot Autofix, which suggests solutions for the problems detected through GitHub Code Security.


Statistics from 2025, covering over 460,000 security alerts handled by Autofix, show that resolution took 0.66 hours on average, compared to 1.29 hours when Autofix wasn't used.

GitHub’s adoption of AI-powered vulnerability detection marks a broader shift where security is becoming AI-augmented and also natively embedded within the development workflow itself.



Step Into the Michigan Factory That Builds Every Real Eames Lounge Chair


Photo credit: WSJ
Few pieces of furniture have earned a place in both design history and everyday luxury quite like the Eames Lounge Chair. The Wall Street Journal recently got a rare look inside the MillerKnoll factory in Zeeland, Michigan, where every authentic example is still assembled by hand, walking the production floor from raw wood all the way through to finished chair and making it very clear why each one carries a price tag somewhere between five and ten thousand dollars.



It starts with thin sheets of veneer cut from sustainably grown walnut or cherry. Workers layer seven of them together with glue, alternating the grain direction with each sheet before a hydraulic press applies heat and pressure until the wood begins to take on the chair's distinctive curves, forming the seat, back, and headrest that make the design instantly recognizable. Once cooled, the molded pieces move to a computer-guided cutter that trims everything to the correct dimensions. Because the wood itself dictates the final appearance, no two chairs ever come out looking exactly the same.

Every edge is then hand sanded, with workers running their fingertips along each surface to catch anything the machines might have missed. A coat of linseed oil goes on next, brushed in and left to soak, protecting the wood and deepening its color gradually over time. While that is happening, the metal components are being prepared, polished aluminum spines and bases that are as refined as anything else on the chair. The hardwood shells are then fastened to the frames with small spacers that keep everything locked in place and silent. It is a lengthy process by design, because a single misaligned hole or loose screw is enough to throw the whole thing off balance.

Upholstery takes place in a different area of the plant, where leather hides are pre-selected for uniform thickness and color before being dispatched to the cutting stations. Workers lay out patterns on each hide and cut them by hand using sharp knives, after which stitchers wrap and sew the covers around cushions filled with down and foam. The leather is pushed taut to flow smoothly over the chair’s curves with no creases. Each final cushion hooks onto its plywood shell using hidden fasteners, allowing owners to replace the covers decades later if necessary.

Quality control is strict, with each chair passing through a separate testing lab where Kyle Wright spends his days attempting to break them. In just a few hours, one machine rotates the base a hundred thousand times, replicating a decade of daily use. Another device presses down on the seat and back with weights that simulate the load of a big person shifting about after years of frequent use. If anything creaks, loosens, or gives way, the entire batch is returned to the factory floor for repairs. Only the chairs that pass all tests receive the little Herman Miller emblem sewn discreetly inside.

OpenAI shutters controversial AI video generator Sora


Reports suggest Disney’s $1bn equity investment into OpenAI will not progress.

OpenAI is shutting down its controversial AI video generator Sora just months after announcing a multi-year licensing deal with Disney. The company told the BBC that the discontinuation will enable it to focus on other developments, such as robotics “that will help people solve real-world, physical tasks”.

Details on the timeline of the app’s shutdown, API and data preservation will be shared soon, OpenAI’s Sora team said in a post on X. “To everyone who created with Sora, shared it, and built community around it – thank you. What you made with Sora mattered, and we know this news is disappointing,” the post read.

The BBC further reported that following Sora’s closure, OpenAI will no longer focus on video-generation tools.


Video models such as Sora, its later iteration Sora 2 – which came with a social media app to share the AI content – as well as more recent ones such as ByteDance's Seedance 2.0 have garnered strong criticism from artists and publishers who object to their copyrighted material being used to generate AI videos.

Prior to Sora’s launch in late 2024, protesting artists reportedly leaked the model on Hugging Face, claiming they were “lured” by OpenAI into “’art washing’ to tell the world that Sora is a useful tool for artists”.

Meanwhile, Disney's three-year partnership and licensing deal with OpenAI came after the company reportedly opted out of allowing its copyrighted material to be used by Sora.

The deal, announced in December 2025, gave OpenAI access to more than 200 Disney characters to be used by Sora and ChatGPT Images. Alongside the licensing agreement, Disney also agreed to make a $1bn equity investment in OpenAI. The investment has reportedly been scrapped.


“We respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson told news outlets.

“We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”

With Sora’s closure, OpenAI is seemingly shifting priorities towards AI tools suited for enterprise use, a sector where Anthropic is capturing a majority of newcomers. Meanwhile, Claude also overtook ChatGPT as the most downloaded app in the US last month.

To compete, OpenAI is building a new desktop ‘superapp’ by fusing together ChatGPT, Codex – the company’s coding tool – and Atlas, an AI-powered web browser launched last October.


“Sora was a resource black hole with strong compute costs and limited monetisation. The platform also struggled to prevent the creation of non-consensual imagery and realistic misinformation, not to mention major copyright infringement,” commented Forrester’s VP principal analyst Thomas Husson.

“In the context of its upcoming IPO, OpenAI likely decided to minimise the associated risks and prioritise profits and enterprise tools over experimental social apps, despite some consumer interest.

“Sora may be repurposed for some robotics and physical applications, but it is still very early days. At the end of the day, it highlights that OpenAI is still very far away from recouping its huge investments.”



The legendary 3dfx Voodoo is back in FPGA form



3dfx Voodoo graphics accelerators are likely to remain a key part of retro modding projects and gaming ventures for years to come. The Voodoo chip is now almost perfectly emulated in several DOS-based emulators, such as DOSBox-X, and PC emulators like PCem and 86Box, while hardware modders continue developing their…

Study: Delaying Kindergarten Has Few Longterm Benefits


In addition to screen time, which school to attend, the content children consume, and the food they eat, a new concern has cropped up for parents over the last few years: whether to hold their children back a year from entering kindergarten.

"Redshirting," a reference to collegiate sports in which an athlete sits out a year to boost their skills, has crept into the decision-making process for parents with children on the cusp of the kindergarten age cut-off, usually age 6 in most states. Parents can either have the student as one of the oldest in their grade or among the youngest, with some believing holding their child back can help academic achievement.

But according to a new report, the practice is not becoming more widespread. It has hovered steady at around 5 percent between the 1990s and 2010s; the number reached 6.4 percent during the pandemic.

“One of the reasons we wanted to look into it is because we felt like everyone talks about it, but only 1 in 20 students actually do it,” says Megan Kuhfeld, director of modeling and data analytics at NWEA, an education research firm. “So why does it feel like everyone was considering it for their children?”


Kuhfeld hypothesizes the smaller, more vocal group of parents considering redshirting was amplified on social media, but when it came time to make the decision, outside factors – like paying for an extra year of child care, which is becoming more costly than ever — played a large role.

“It might seem that this is a good idea but it’s, ‘We’re on the hook for an extra $15,000 in child-care costs,’ which may not be practical for a lot of families,” Kuhfeld says, adding she expects redshirting to stay steady. “The types to consider it will likely continue to, but a lot of people consider it then decide it’s not practical for a lot of reasons.”

The NWEA study did find that young boys were more likely to be kept back than girls, and white students more often than nonwhite students. In 2021, there were also upticks in rural areas, jumping from 6.2 percent to 9 percent, and high-poverty areas, jumping from 2.2 to 4.7 percent. That could be because child care is more affordable in smaller towns, or easier to arrange with a friend, family member, or neighbor.

Proponents of redshirting say it gives the child an academic and social advantage being an older kindergartner. However, the benefits generally are short-lived, according to the NWEA report. While children initially saw higher reading and math scores, equating to about 20 percent to 30 percent of a year of learning, those results evened out by third grade, when the children who entered kindergarten early catch up to the redshirters.

While children who started kindergarten later initially saw a large academic advantage in math and reading scores, by third grade, those gaps were filled.

Source: NWEA

There is at least one strong reason not to redshirt, according to the American Economic Association: Children who started kindergarten after 5 years old are more likely to drop out later on.

“People often focus on the short-term gains, but it’s important to keep in mind the perspective of what it means to be the older kid in class, where you turn 18 your junior year of high school,” Kuhfeld says. “It’s just keeping in mind these longer term outcomes and making the best decision for your child.”

Some states have begun pushing toward a forced redshirting of sorts. North Carolina public schools shifted their age cutoff in 2007, requiring students to be 5 years old or older by Aug. 31, moving the date up from a previous mid-October cutoff.

Jade Jenkins, an associate professor of education at the University of California, Irvine, found in a report that forced redshirting brought pros and cons. It helped math and reading scores in third through fifth grades, and students with forced delays into kindergarten also saw a 4 percent increase in the likelihood of being identified as academically gifted. The same report, however, found a 6 percent drop in disability identification. According to Jenkins' research, the policy benefited lower-income white students but brought no benefit to Hispanic students.

“Is the valuation of the academic benefits of delayed entry higher than the costs of the hold-out year and the public costs of increased racial-ethnic achievement gaps? Future research can provide a more precise estimate of this calculation, but we find this unlikely,” Jenkins says in the report.

The redshirting question is one of several debates parents face surrounding kindergarten. Some state legislators are pushing for kindergarten to become mandatory across the nation, while others are concerned about dipping levels of kindergarten readiness. Kindergarten has also become more academically focused than ever, which in part spurred the latest NWEA study.

“We wanted to get this information out in an accessible way to have both the advantages and disadvantages, and not get caught up in blanket guidance,” Kuhfeld says.

“Especially in high socio-economic status schools and districts, there’s already an arms race by preschool to get situated for college, which is where a lot of this comes from,” she adds. “There’s this attitude of, ‘We have to take every avenue to get ahead’ and I don’t think that is healthy.”

Researchers build experimental drone that flies without moving parts


The concept, known as a solid-state ornithopter, replaces the typical network of actuators with electricity-driven materials that deform when voltage is applied. This approach could represent a turning point for next-generation aerial vehicles, combining principles of aerodynamics, materials science, and biomechanics into a single design model.
This Meta smartglasses-detecting app is a great model for Apple Glass developers to follow

Meta’s smart glasses are being used to film people in bathrooms, courts, and doctor’s offices. A new app just released on the App Store is a perfect example of how safeguards should be implemented when Apple launches its own smart glasses.

Meta Ray-Ban Display. Image source: Meta

The Apple Vision Pro isn’t exactly stealthy. Meta’s Ray-Bans are, and are being used mainly to violate other people’s privacy.
I’ve already talked at length about the issue with smart glasses, especially ones designed to be relatively unclockable at a distance.
Android 17’s new Contact Picker stops apps from accessing your entire contact list

Android 17 is getting a new Contact Picker that changes how apps access your contacts list. Earlier reports hinted at this shift toward tighter privacy, and now Google is rolling it out.

📣 New feature in Android 17!

Android 17 is introducing a new Contact Picker feature that provides a standardized, secure, and searchable interface for contact selection.

Historically, apps needing access to your contacts relied on the broad “READ_CONTACTS” permission, which… pic.twitter.com/eLZ1zVRArS

— Mishaal Rahman (@MishaalRahman) March 25, 2026

Instead of giving apps full access to your address book, you will be able to choose exactly which contacts they can see. Previously, apps often relied on a broad permission that exposed your entire contact list.

That often meant apps collected more of your data than necessary, without you even realizing it. With this update, Android is trying to limit that exposure while keeping things simple for you.

How the new Contact Picker keeps your contacts private

The new Contact Picker in Android 17 offers a secure and searchable interface where you select specific contacts to share. Apps only receive the data you approve, not your full address book. This reduces unnecessary access and gives you more control over your information.

For apps built for Android 17 or newer, the system automatically routes existing contact-pick requests through the new, more secure Contact Picker interface. That means even apps that have not been fully updated may still benefit from better privacy protections.

Developers are also being pushed to adopt the new picker directly. It supports features like selecting multiple contacts in one go, making it more flexible than older methods. Apps can also request only the exact details they need, like a phone number or email address.

Android 17 changes how apps interact with your contacts

With this update, Android is moving away from blanket permissions and toward more precise, user-driven access. For you, that means fewer apps quietly pulling your entire contact list in the background.

This update does not just tighten privacy; it also sets a new standard for how apps should handle personal data going forward.

Recently, Android rolled out a new contact feature with customizable calling cards, which makes it easier to personalize how you appear on calls. Google is also working on a tap-to-share contact feature to quickly exchange contact details between devices, just like Apple’s NameDrop.
