
AI’s GPU problem is actually a data delivery problem


Presented by F5


As enterprises pour billions into GPU infrastructure for AI workloads, many are discovering that their expensive compute resources sit idle far more than expected. The culprit isn’t the hardware. It’s the often-invisible data delivery layer between storage and compute that’s starving GPUs of the information they need.

“While people are focusing their attention, justifiably so, on GPUs, because they’re very significant investments, those are rarely the limiting factor,” says Mark Menger, solutions architect at F5. “They’re capable of more work. They’re waiting on data.”

AI performance increasingly depends on an independent, programmable control point between AI frameworks and object storage — one that most enterprises haven’t deliberately architected. As AI workloads scale, bottlenecks and instability emerge when AI frameworks are tightly coupled to specific storage endpoints during scaling events, failures, and cloud transitions.


“Traditional storage access patterns were not designed for highly parallel, bursty, multi-consumer AI workloads,” says Maggie Stringfellow, VP of product management for BIG-IP at F5. “Efficient AI data movement requires a distinct data delivery layer designed to abstract, optimize, and secure data flows independently of storage systems, because GPU economics make inefficiency immediately visible and expensive.”

Why AI workloads overwhelm object storage

AI workloads hit object storage with bidirectional patterns: massive ingestion from continuous data capture, simulation output, and model checkpoints, combined with read-intensive training and inference. Together, these patterns stress the tightly coupled infrastructure that storage systems rely on.

While storage vendors have done significant work in scaling the data throughput into and out of their systems, that focus on throughput alone creates knock-on effects across the switching, traffic management, and security layers coupled to storage.

The stress on S3-compatible systems from AI workloads is multidimensional and differs significantly from traditional application patterns. It’s less about raw throughput and more about concurrency, metadata pressure, and fan-out considerations. Training and fine-tuning create particularly challenging patterns, like massive parallel reads of small to mid-size objects. These workloads also involve repeated passes through training data across epochs and periodic checkpoint write bursts.
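To make that access pattern concrete, here is a minimal sketch, assuming an S3-compatible endpoint reachable through boto3; the endpoint URL, bucket, key names, and worker count are hypothetical placeholders rather than details of any specific deployment.

```python
# Minimal sketch of the pattern described above: highly parallel reads of
# small-to-mid-size objects, repeated across epochs. Endpoint, bucket, and
# key names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.example.internal")

def read_shard(key: str) -> int:
    """Each call is an independent GET against the object store."""
    body = s3.get_object(Bucket="training-data", Key=key)["Body"]
    return len(body.read())

def read_epoch(keys: list[str], workers: int = 256) -> int:
    """A data loader issuing hundreds of concurrent GETs, once per epoch."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(read_shard, keys))

keys = [f"shards/part-{i:05d}.bin" for i in range(10_000)]
for epoch in range(3):    # repeated passes over the same objects each epoch
    read_epoch(keys)      # periodic checkpoint PUTs add bursty writes on top
```

Multiply that concurrency across many GPU nodes and the pressure lands on request handling and metadata, not just raw bandwidth.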


RAG workloads introduce their own complexity through request amplification. A single request can fan out into dozens or hundreds of additional data chunks, cascading into related chunks and further source documents. The stress concentrates less on capacity or raw storage speed and more on request management and traffic shaping.
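As a rough illustration of that amplification, the sketch below counts the object reads produced by a single request; the retrieval, fetch, and expansion functions are simplified stand-ins invented for the example, not any particular framework’s API.

```python
# Illustrative sketch of RAG request amplification: one user query fans out
# into many chunk fetches against object storage.
request_count = 0

def fetch_chunk(chunk_id: str) -> str:
    """Stand-in for a GET against the object store; counts every request issued."""
    global request_count
    request_count += 1
    return f"contents of {chunk_id}"

def retrieve_chunk_ids(query: str, top_k: int) -> list[str]:
    """Stand-in for a vector-index lookup returning candidate chunk IDs."""
    return [f"chunk-{i}" for i in range(top_k)]

def related_chunk_ids(chunk_id: str, expand: int) -> list[str]:
    """Stand-in for following references to related chunks or source documents."""
    return [f"{chunk_id}-ref-{j}" for j in range(expand)]

def answer_query(query: str, top_k: int = 50, expand: int = 3) -> list[str]:
    context = []
    for cid in retrieve_chunk_ids(query, top_k):      # 1 query -> top_k candidates
        context.append(fetch_chunk(cid))              # one read per candidate
        for rid in related_chunk_ids(cid, expand):    # each candidate pulls related chunks
            context.append(fetch_chunk(rid))          # cascading additional reads
    return context

answer_query("what changed in the Q3 contract?")
print(request_count)  # 50 * (1 + 3) = 200 object reads from one user request
```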

The risks of tightly coupling AI frameworks to storage

When AI frameworks connect directly to storage endpoints without an intermediate delivery layer, operational fragility compounds quickly during scaling events, failures, and cloud transitions, which can have major consequences.

“Any instability in the storage service now has an uncontained blast radius,” Menger says. “Anything here becomes a system failure, not a storage failure. Or frankly, aberrant behavior in one application can have knock-on effects to all consumers of that storage service.”

Menger describes a pattern he’s seen with three different customers, where tight coupling cascaded into complete system failures.


“We see large training or fine-tuning workloads overwhelm the storage infrastructure, and the storage infrastructure goes down,” he explains. “At that scale, the recovery is never measured in seconds. Minutes if you’re lucky. Usually hours. The GPUs are now not being fed. They’re starved for data. These high value resources, for that entire time the system is down, are negative ROI.”

How an independent data delivery layer improves GPU utilization and stability

The financial impact of introducing an independent data delivery layer extends beyond preventing catastrophic failures.

Decoupling allows data access to be optimized independently of storage hardware, improving GPU utilization by reducing idle time and contention while improving cost predictability and system performance as scale increases, Stringfellow says.

“It enables intelligent caching, traffic shaping, and protocol optimization closer to compute, which lowers cloud egress and storage amplification costs,” she explains. “Operationally, this isolation protects storage systems from unbounded AI access patterns, resulting in more predictable cost behavior and stable performance under growth and variability.”


Using a programmable control point between compute and storage

F5’s answer is to position its Application Delivery and Security Platform, powered by BIG-IP, as a “storage front door” that provides health-aware routing, hotspot avoidance, policy enforcement, and security controls without requiring application rewrites.

“Introducing a delivery tier in between compute and storage helps define boundaries of accountability,” Menger says. “Compute is about execution. Storage is about durability. Delivery is about reliability.”

The programmable control point, which uses event-based, conditional logic rather than generative AI, enables intelligent traffic management that goes beyond simple load balancing. Routing decisions are based on real backend health, with leading indicators monitored to catch early signs of trouble. And when problems emerge, the system can isolate misbehaving components without taking down the entire service.
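The sketch below illustrates that general idea of health-aware routing with isolation of a misbehaving backend; it is not F5's implementation, and the backend names, metrics, and thresholds are invented for the example.

```python
# Illustrative sketch of health-aware routing: exclude backends showing
# leading indicators of trouble instead of waiting for hard failures.
import random

BACKENDS = {
    "s3-node-a": {"p99_ms": 12.0, "errors": 0},
    "s3-node-b": {"p99_ms": 15.0, "errors": 1},
    "s3-node-c": {"p99_ms": 240.0, "errors": 37},  # early signs of trouble
}

LATENCY_LIMIT_MS = 100.0   # leading indicator: rising tail latency
ERROR_LIMIT = 10           # leading indicator: error bursts

def is_healthy(stats: dict) -> bool:
    """A backend is routable only while its leading indicators look sane."""
    return stats["p99_ms"] < LATENCY_LIMIT_MS and stats["errors"] < ERROR_LIMIT

def pick_backend() -> str:
    """Route around degraded nodes so one bad backend doesn't sink the service."""
    healthy = [name for name, stats in BACKENDS.items() if is_healthy(stats)]
    if not healthy:
        raise RuntimeError("no healthy storage backends available")
    return random.choice(healthy)

print(pick_backend())  # s3-node-c stays isolated until its metrics recover
```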

“An independent, programmable data delivery layer becomes necessary because it allows policy, optimization, security, and traffic control to be applied uniformly across both ingestion and consumption paths without modifying storage systems or AI frameworks,” Stringfellow says. “By decoupling data access from storage implementation, organizations can safely absorb bursty writes, optimize reads, and protect backend systems from unbounded AI access patterns.”


Handling security issues in AI data delivery

AI isn’t just pushing storage teams on throughput; it’s also forcing them to treat data movement as both a performance and a security problem, Stringfellow says. Security can no longer be assumed simply because data sits deep in the data center. AI introduces automated, high-volume access patterns that must be authenticated, encrypted, and governed at speed. That’s where F5 BIG-IP comes into play.

“F5 BIG-IP sits directly in the AI data path to deliver high-throughput access to object storage while enforcing policy, inspecting traffic, and making payload-informed traffic management decisions,” Stringfellow says. “Feeding GPUs quickly is necessary, but not sufficient; storage teams now need confidence that AI data flows are optimized, controlled, and secure.”

Why data delivery will define AI scalability

Looking ahead, the requirements for data delivery will only intensify, Stringfellow says.

“AI data delivery will shift from bulk optimization toward real-time, policy-driven data orchestration across distributed systems,” she says. “Agentic and RAG-based architectures will require fine-grained runtime control over latency, access scope, and delegated trust boundaries. Enterprises should start treating data delivery as programmable infrastructure, not a byproduct of storage or networking. The organizations that do this early will scale faster and with less risk.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.


Hackers breach SmarterTools network using flaw in its own software


SmarterTools confirmed last week that the Warlock ransomware gang breached its network after compromising an email system, but says the attack did not impact business applications or account data.

The company’s Chief Commercial Officer, Derek Curtis, says that the intrusion occurred on January 29, via a single SmarterMail virtual machine (VM) set up by an employee.

“Prior to the breach, we had approximately 30 servers/VMs with SmarterMail installed throughout our network,” Curtis explained.


“Unfortunately, we were unaware of one VM, set up by an employee, that was not being updated. As a result, that mail server was compromised, which led to the breach.”

Although SmarterTools says customer data wasn’t directly impacted by the breach, 12 Windows servers on the company’s office network, as well as a secondary data center used for laboratory tests, quality control, and hosting, were confirmed to have been compromised.


The attackers moved laterally from that one vulnerable VM via Active Directory, using Windows-centric tooling and persistence methods. Linux servers, which constitute the majority of the company’s infrastructure, were not compromised by this attack.

The vulnerability exploited in the attack to gain access is CVE-2026-23760, an authentication bypass flaw in SmarterMail before Build 9518, which allows resetting administrator passwords and obtaining full privileges.

SmarterTools reports that the attacks were conducted by the Warlock ransomware group, which has also compromised customer machines using similar tactics.

The ransomware operators waited roughly a week after gaining initial access before moving to the final stage: encrypting all reachable machines.


However, in this case, SentinelOne security products reportedly stopped the final payload from performing encryption, the impacted systems were isolated, and data was restored from fresh backups.

Tools used in the attacks include Velociraptor, SimpleHelp, and vulnerable versions of WinRAR, while startup items and scheduled tasks were also used for persistence, according to the company.
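For defenders, those persistence locations are straightforward to audit. Below is an illustrative Python triage sketch for a Windows host that lists scheduled tasks via the built-in schtasks utility and per-machine Run entries via the winreg module; it simply surfaces entries for review and is not tied to this incident or any vendor tooling.

```python
# Illustrative triage sketch: list two common persistence locations on Windows
# (scheduled tasks and the per-machine Run key) for manual review.
import subprocess
import winreg

def list_scheduled_tasks() -> list[str]:
    """Dump scheduled task names using the built-in schtasks utility."""
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "csv", "/nh"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split(",")[0].strip('"') for line in out.splitlines() if line]

def list_run_entries() -> list[str]:
    """Read per-machine Run entries, a common startup persistence location."""
    entries = []
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run")
    index = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, index)
            entries.append(f"{name} = {value}")
            index += 1
        except OSError:
            break
    return entries

for task in list_scheduled_tasks():
    print("task:", task)
for entry in list_run_entries():
    print("run key:", entry)
```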

Cisco Talos reported in the past that the threat actors were abusing the open-source DFIR tool Velociraptor.

In October 2025, cybersecurity company Halcyon linked the Warlock ransomware gang to a Chinese nation-state actor tracked as Storm-2603.


ReliaQuest published a report earlier today assessing, with moderate-to-high confidence, that the activity is linked to Storm-2603.

“While this vulnerability allows attackers to bypass authentication and reset administrator passwords, Storm-2603 chains this access with the software’s built-in ‘Volume Mount’ feature to gain full system control,” ReliaQuest said.

“Upon entry, the group installs Velociraptor, a legitimate digital forensics tool it has used in previous campaigns, to maintain access and set the stage for ransomware.”

ReliaQuest also saw probes for CVE-2026-24423, another SmarterMail flaw flagged by CISA as actively exploited by ransomware actors last week, although the primary vector was CVE-2026-23760.


The researchers note that CVE-2026-24423 provides a more direct API path to achieve remote code execution, but CVE-2026-23760 can be less noisy, blending into legitimate administrative activity, which is why Storm-2603 might have opted for that one instead.

To address all recent flaws in the SmarterMail product, administrators are advised to upgrade to Build 9511 or later as soon as possible.


What AI builders can learn from fraud models that run in 300 milliseconds


Fraud protection is a race against scale. 

For instance, Mastercard’s network processes roughly 160 billion transactions a year, and experiences surges of 70,000 transactions a second during peak periods (like the December holiday rush). Finding the fraudulent purchases among those — without chasing false alarms — is an incredible task, which is why fraudsters have been able to game the system. 

But now, sophisticated AI models can probe down to individual transactions, pinpointing the ones that seem suspicious — in milliseconds’ time. This is the heart of Mastercard’s flagship fraud platform, Decision Intelligence Pro (DI Pro). 

“DI Pro is specifically looking at each transaction and the risk associated with it,” Johan Gerber, Mastercard’s EVP of security solutions, said in a recent VB Beyond the Pilot podcast. “The fundamental problem we’re trying to solve here is assessing in real time.”


How DI Pro works

Mastercard’s DI Pro was built for speed and low latency. From the moment a consumer taps a card or clicks “buy,” that transaction flows through Mastercard’s orchestration layer, back onto the network, and then on to the issuing bank. Typically, this occurs in less than 300 milliseconds.

Ultimately, the bank makes the approve-or-decline decision, but the quality of that decision depends on Mastercard’s ability to deliver a precise, contextualized risk score indicating whether the transaction could be fraudulent. Complicating the whole process is the fact that they’re not looking for anomalies, per se; they’re looking for fraudulent transactions that are deliberately crafted to resemble normal consumer behavior.

At the core of DI Pro is a recurrent neural network (RNN) that Mastercard refers to as an “inverse recommender” architecture. This treats fraud detection as a recommendation problem; the RNN performs a pattern completion exercise to identify how merchants relate to one another. 


As Gerber explained: “Here’s where they’ve been before, here’s where they are right now. Does this make sense for them? Would we have recommended this merchant to them?” 
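Mastercard hasn’t published the model’s internals, but the framing can be sketched: treat the cardholder’s recent merchants as a sequence and score how plausible the current merchant is as the next item. The PyTorch sketch below only illustrates that idea; the vocabulary size, dimensions, and merchant IDs are made up, and the real DI Pro system is far more elaborate.

```python
# Minimal sketch of the "inverse recommender" framing: score how well the
# current merchant fits a cardholder's recent merchant sequence.
import torch
import torch.nn as nn

class MerchantFitScorer(nn.Module):
    def __init__(self, num_merchants: int = 10_000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_merchants, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)  # summarizes where the cardholder has been
        self.score = nn.Linear(dim, num_merchants)     # "would we recommend this merchant next?"

    def forward(self, merchant_history: torch.Tensor) -> torch.Tensor:
        embedded = self.embed(merchant_history)        # (batch, seq_len, dim)
        _, hidden = self.rnn(embedded)                 # hidden: (1, batch, dim)
        return self.score(hidden.squeeze(0))           # (batch, num_merchants) logits

model = MerchantFitScorer()
history = torch.tensor([[412, 97, 3051, 97, 8800]])    # recent merchant IDs for one card
logits = model(history)
fit = torch.softmax(logits, dim=-1)[0, 77]             # plausibility of merchant 77 being next
print(float(fit))                                      # a low fit feeds into the fraud risk score
```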

Chris Merz, SVP of data science at Mastercard, explained that the fraud problem can be broken down into two subcomponents: a user’s behavior patterns and a fraudster’s behavior patterns. “And we’re trying to tease those two things out,” he said.

Another “neat technique,” he said, is how Mastercard approaches data sovereignty, the principle that data is subject to the laws and governance structures of the region where it is collected, processed, or stored. To keep data “on soil,” the company’s fraud team relies on aggregated, “completely anonymized” data that is not sensitive to any privacy concerns and thus can be shared with models globally.

“So you still can have the global patterns influencing every local decision,” said Gerber. “We take a year’s worth of knowledge and squeeze it into a single transaction in 50 milliseconds to say yes or no, this is good or this is bad.”


Scamming the scammers

While AI is helping financial companies like Mastercard, it’s helping fraudsters, too; now, they’re able to rapidly develop new techniques and identify new avenues to exploit.  

Mastercard is fighting back by engaging cyber criminals on their turf. One way they’re doing so is by using “honeypots,” or artificial environments meant to essentially “trap” cyber criminals. When threat actors think they’ve got a legitimate mark, AI agents engage with them in the hopes of accessing mule accounts used to funnel money. That becomes “extremely powerful,” Gerber said, because defenders can apply graph techniques to determine how and where mule accounts are connected to legitimate accounts. 

In the end, to get their payout, scammers need a legitimate account somewhere, linked to mule accounts, even if it’s cloaked 10 layers down. When defenders can identify those links, they can map global fraud networks.
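Conceptually, that mapping is a path query over a transaction graph: once honeypot engagement surfaces mule accounts, defenders can ask which legitimate accounts they ultimately connect to, even several hops away. The sketch below illustrates the idea using networkx with invented account IDs; it is not Mastercard’s tooling.

```python
# Illustrative sketch: trace paths from known mule accounts to the legitimate
# payout accounts they eventually connect to, however many layers down.
import networkx as nx

g = nx.Graph()
transfers = [
    ("mule-01", "mule-02"),
    ("mule-02", "shell-co-7"),
    ("shell-co-7", "mule-03"),
    ("mule-03", "acct-legit-550"),   # the payout account, several layers down
]
g.add_edges_from(transfers)

known_mules = ["mule-01"]
candidate_payout_accounts = ["acct-legit-550"]

for mule in known_mules:
    for acct in candidate_payout_accounts:
        if nx.has_path(g, mule, acct):
            print(mule, "->", acct, ":", nx.shortest_path(g, mule, acct))
```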

“It’s a wonderful thing when we take the fight to them, because they cause us enough pain as it is,” Gerber said. 


Listen to the podcast to learn more about: 

  • How Mastercard created a “malware sandbox” with Recorded Future; 

  • Why a data science engineering requirements document (DSERD) was essential to align four separate engineering teams;

  • The importance of “relentless prioritization” and tough decision-making to move beyond “a thousand flowers blooming” to projects that actually have a strong business impact;

  • Why successful AI deployment should incorporate three phases: ideation, activation, and implementation — but many enterprises skip the second step. 

Listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.


Want full Discord perks? Prepare to prove you’re an adult


If you’re used to hopping between gaming servers, sharing memes, or catching up with friends on Discord, be ready for a small twist in how the platform works. Starting March 2026, Discord will introduce a global age verification system that automatically places every user into a “teen-appropriate experience” until they prove they’re an adult. To fully unlock age-restricted content and certain features, users will need to verify their age via face-based AI estimation or a government-issued ID.

According to Discord’s official announcement, this change is part of a broader safety push to control access to adult or graphic material and comply with international safety expectations. But before you panic that Discord is turning into a DMV, it’s worth noting what changes and what stays the same, if you don’t verify right away.

What works and what doesn’t without age verification

Under Discord’s upcoming teen-by-default setup, everyday features won’t suddenly vanish. Users can still send messages, chat in most regular servers and DMs, hop into voice calls with friends, and hang out in their usual gaming communities just like before.

However, anything marked as adult or age-restricted will stay locked behind a verification wall. That includes access to certain servers and channels, speaking privileges in Stage channels, and viewing unfiltered or sensitive content, which will remain blurred by default. Some message request controls and notification customizations may also stay limited until age verification is completed, meaning full access only unlocks once you confirm you’re old enough.

Discord knows this move isn’t going to earn universal applause. Any time an app asks for a face scan or ID, a few users are bound to hit the exit button instead. Still, the company says the bigger picture is safety first, aiming to set a more secure, teen-friendly default for everyone rather than letting the wild west stay wild.

The good news is your everyday Discord life isn’t getting kicked from the party. DMs, regular servers, and voice chats with the squad will keep humming along. But if you want the full, no-filters version of Discord, complete with adult servers, spicy memes, and every corner of your late-night gaming hangouts, you’ll have to show you’re old enough to enter. Think of it less like a ban and more like a digital bouncer asking for ID at the door.



Sony’s best wireless buds are getting a refresh imminently


Sony has all but confirmed what the leaks have been hinting at for months: a new pair of flagship earbuds is on the way.

The company has released a short teaser video for the WF-1000XM6, confirming that its next-generation wireless buds are “coming soon,” with a full reveal set for February 12 at 8am PT.

While the teaser doesn’t give much away on paper, it does make it clear that Sony is getting ready to refresh one of the most respected noise-cancelling earbud lines around. The WF-1000X series has long sat at the top of Sony’s audio lineup. Now, the XM6 looks set to continue that run.


The video itself only briefly shows the earbuds in shadow, but by this point the design isn’t exactly a mystery. Full renders have already leaked, suggesting a refined look rather than a dramatic redesign. Expect a similar premium feel, with subtle tweaks aimed at comfort and wearability rather than a radical change in direction.

As ever, active noise cancellation is expected to be the headline feature. Sony’s ANC performance has consistently ranked among the best in the business, and there’s little reason to think the WF-1000XM6 will be any different. While Sony hasn’t confirmed specific specs yet, improved processing and smarter noise handling feel like safe bets for a generational update.


Colour options are also starting to come into focus. Alongside the usual Black and Platinum Silver, leaks suggest Sony may introduce a new Sandpink finish. This would add a softer, more lifestyle-friendly option to the lineup.


Sony hasn’t said whether February 12 will be a straight reveal or a full launch, but either way, the wait won’t be long. With the WF-1000XM5 already regarded as one of the best pairs of wireless earbuds you can buy, expectations for the XM6 are understandably high.

If Sony sticks the landing, this could be one of the more important audio upgrades of the year. Additionally, it would be a clear signal that the premium earbud arms race isn’t slowing down anytime soon.


YouTube TV’s new bundles are here to help you lower your streaming bill

Published

on

YouTube TV announced plans to introduce new genre-specific bundles last year to give users the flexibility to pick and pay for the content they actually want to watch. Four of these bundles are now live in the US, with the platform planning to roll out over ten bundles across Sports, News, Entertainment, and Family content in the coming weeks.

YouTube TV announced the rollout in a recent blog post, highlighting the content included in the introductory bundles and their pricing. The base Sports plan, priced at $64.99/month, will give subscribers access to all major broadcasters, alongside sports networks like FS1, NBC Sports Network, and all ESPN networks. Subscribers will also gain access to ESPN Unlimited this fall.

The Sports + News plan includes everything from the Sports plan, along with national news networks such as CNBC, Fox News, MSNBC, CNN, CSPAN, Bloomberg, and Fox Business. It’s priced at $71.99/month. The Entertainment plan, which is the most affordable of the four, costs $54.99/month and includes all major broadcasters and content, “ranging from FX dramas to Hallmark classics, with channels such as Comedy Central, Bravo, Paramount, Food Network, HGTV, and many more.”

Finally, the News + Entertainment + Family plan, priced at $69.99/month, combines news and entertainment channels with family content like Disney Channel, Nickelodeon, National Geographic, Cartoon Network, PBS Kids and more.


YouTube TV bundles offer big savings, especially for new subscribers

All four bundles are priced lower than the main YouTube TV plan, the platform’s most comprehensive offering with over 100 networks across genres. For those who haven’t previously subscribed to YouTube TV, the bundles are available at a discount.

The Sports plan is available for $54.99 per month for the first year, while the Sports + News, Entertainment, and News + Entertainment + Family plans are priced at $56.99, $44.99, and $59.99/month, respectively, for the first three months.


Microsoft starts the countdown for the end of Exchange Web Services



  • EWS doesn’t deliver security, scale and reliability to today’s standards
  • It will be shut down for cloud environments from April 2027
  • ‘Scream tests’ will help show any dependencies

Microsoft has confirmed it will be phasing out Exchange Web Services (EWS) for Microsoft 365 and Exchange Online after nearly two decades of service, and we’ve been given all the key dates.

As early as October 2026, the company will disable EWS by default for Exchange Online tenants, with the final shutdown set for April 1, 2027.


Snag the Samsung Galaxy S25 FE on the cheap with $200 off


If you need a powerful smartphone upgrade without breaking the bank, look no further than this outstanding Galaxy S25 FE deal.

For anyone in the market for a phone upgrade with a ton of new features, Samsung’s 2025 Galaxy S25 FE is a strong pick, particularly now that it’s been discounted to just $509.99.

Deal: Samsung Galaxy S25 FE 256GB in Jet Black, down to just over $500 with $200 slashed off the price.


The Galaxy S25 FE sports a large display, which, beyond being great for games and watching films on, is also fantastic for productivity.

When you do stumble across a moment that requires you to capture it in detail, the S25 FE’s high-res camera is more than capable of taking the required picture, and thanks to all of Samsung’s AI photo editing software, it can quickly spruce up any picture on the go, saving you a ton of editing time when you reach a computer.

Just to further sell the device, we awarded it 4.5 stars out of 5 in our review, noting: “In both battery life and long-term software support, the Galaxy S25 FE won’t let you down, making it an easy option to recommend on the mid-range market.”


There’s also 256GB of storage, which is ideal for those who like to have as many apps, games and photos saved onto their phone as possible, as it allows you to download all of your must-haves without having to make frequent trips to the cloud.

There are a ton of other features to rattle off, but all you really need to know is that this Galaxy S25 FE packs far more than it should for a price of $509.99.

With the S25 FE typically sitting at a much higher $709.99, getting a near flagship-level phone for the same price as an entry-level device is what makes this deal so tempting.

Unless you’re dead-set on having a smaller phone that’s easy to use one-handed, the Samsung Galaxy S25 FE represents a far better buy than the standard S25, and one of the best value mid-range phones currently on the market.


  • Large, bright display is perfect for entertainment

  • Solid everyday performance

  • Bigger battery and improved charging speeds

  • Seven years of software updates

  • Struggles a bit with demanding 3D games

  • Secondary cameras aren’t the best at night-time photography



Ferrari’s first electric car is Luce, rocking interiors by ex-Apple designer Jony Ive


Ferrari is finally shifting gears and heading into the electric car era. Naturally, the company must do so in style, right? Well, today, Ferrari announced Luce, its first electric car, and also gave us a glimpse of its interiors. And guess who helped with the design process? Ferrari partnered with LoveFrom, a design firm started by Apple legend Sir Jony Ive. 

The “Apple touch” is visible

Ferrari says both firms worked for five years to design the car, and the results look stunning. Ive’s touch reflects his work at Apple, bringing clean metallic looks, rounded corners, and a seamless fusion of glass and other luxurious materials in the cabin. What’s notable is the focus on analog inputs, rather than the all-digital cockpits embraced by Mercedes-Benz and BMW.

The steering wheel is inspired by the wooden three-spoke Nardi wheel from the 1960s, and its recycled aluminum material was developed specifically for the Luce. The button placement is inspired by Formula One cars, while the start key features an e-ink display that lights up in tandem with the central console and binnacle.

The Ferrari Luce is also the first car from the brand to feature an instrument cluster mounted on the steering column, and it features a one-of-a-kind dual OLED display with three cutouts. The control panel, notably, is mounted on a ball-and-socket joint, while the instrument cluster graphics draw inspiration from helicopters and airplanes. 

Raw, tactile, and visceral

Ferrari also paid special attention to the shifter, which is made out of special Corning Fusion5 Glass with laser-etched microholes to create the backlit graphics. There are plenty of buttons, dials, toggles and switches in the cockpit, and the overall design is calmly minimalist, unlike other hypercars that either go too deep with digital controls or embrace over-the-top aggressive styling.

The key, in particular, looks like a miniature iPhone or iPod, with its metallic sides and polished finish. Ive’s touch is clearly visible on the car’s interior. The Luce, which was initially supposed to debut as the Ferrari Elettrica, will be introduced later this year. It’s also somewhat of a bittersweet chapter for Ive, who was involved with Apple’s cancelled electric car project. Notably, Ferrari’s announcement comes at a time when rumors are swirling that Porsche has cancelled two of its high-profile electric sports cars.



Why I Stopped Believing Every Child Belongs in Every Classroom


This story was published by a Voices of Change fellow. Learn more about the fellowship here.

One of my students has ADHD. In a traditional classroom, his restless energy might be seen as a constant disruption. But in my microschool in Atlanta, where short, active lessons and recess are built into the curriculum for grades four through 12, he thrives. He can barely sit still for 10 minutes, but he doesn’t need to. We’re always doing something that allows movement, and he belongs here.

Another student needs something different. He longs for a soft, nurturing presence, the kind that soothes with warmth. I’ll be honest: I was raised by my father, so my version of love is structure, humor and high expectations, not hugs and gentle tones. For him, I come across as harsh. While one child tells everyone how much he loves me, this child quietly believes I don’t like him. Same teacher. Two very different fits.

That’s when I began to see what I hadn’t been allowed to say in a public school: one child belonged here — the other did not. Those of us who teach and believe in education for all would like to believe that every classroom can meet the needs of every child. It sounds noble, even fair. But schools were never truly built that way.


Maybe real equity begins when we accept that belonging looks different for each child, and that true fairness means giving every student the chance to find the place where they actually fit.

A Shift in Perspective

When I worked in public schools, I had no choice in who entered my classroom. I was expected to reach every child, regardless of fit, and I carried guilt when my approach didn’t work for someone.

Later, when I started my own school, I thought I would serve everyone equally well. But reality set in quickly. For the first three years, I was the only teacher, attempting to teach every subject, planning every lesson and holding everything together. Soon, my capacity became clear: I could not teach science. I hated it, and every science teacher I hired struggled in the same way. My school was not built for the science enthusiast. Then came students with exceptionalities I wanted to support but couldn’t. Deep in debt, I couldn’t afford more training or certifications. Through trial and error, I learned that creating a thriving space sometimes means being selective — not in the discriminatory way I once criticized, but in a way that honors who we are as educators and who our school is built to serve.

I think about one student in particular, a bright boy with an exceptionality whose attendance was inconsistent. Though he was capable, his parent often excused him from the very work that would have helped him grow. I tried every strategy I knew, but his progress stalled. Eventually, I realized that without a parent’s commitment to time and a belief in their child’s ability, even the best intentions can’t create change.


Saying no to continuing his enrollment was one of the hardest choices I’ve ever made, but it wasn’t rooted in rejection; it was rooted in honesty. That moment taught me that being selective isn’t about exclusion; it’s about capacity, alignment and care.

My school is wonderful for the right family and for the children who need short lessons, movement, flexibility and structure wrapped in humor. For others, another school might be the better fit. That doesn’t make my school less. It makes it intentional.

What This Means for Schools

What if schools admitted this truth out loud? Not every child belongs in every school, and not every teacher’s style works for every child. What if we stopped shaming teachers for not reaching every child in the same way, and instead built ecosystems where families and educators could find the right match?

“School choice” is not just about privilege. It’s about belonging. It’s about giving children spaces where their needs and personalities are met, and giving teachers the freedom to serve in the ways they serve best.


Because at the end of the day, my realization always returns to the two students who first taught me this lesson: the one who thrived and the one who didn’t. One blossomed because my school was built for him. The other needed something I could not give. Both deserved to be in spaces that fit. That is the heart of school choice — not separation, not exclusion, but the belief that every child and every teacher should be able to say: This place was made for me.


Autodesk Takes Google To Court Over AI Movie Software Named ‘Flow’


Autodesk has sued Google in San Francisco federal court, alleging the search giant infringed its “Flow” trademark by launching competing AI-powered software for movie, TV and video game production in May 2025.

Autodesk says it has used the Flow name since September 2022 and that Google assured the company it would not commercialize a product under the same name — then filed a trademark application in Tonga, where filings are not publicly accessible, before seeking U.S. protection.
