Tech

The Texas Senate Primary Was a Preview of Creator Wars to Come

On Tuesday, James Talarico, a 36-year-old Presbyterian seminarian and state representative from Austin, Texas, defeated Congresswoman Jasmine Crockett in what has become one of the most closely watched primary races so far this year.

While both candidates boast immense social media followings—Talarico with 1.6 million followers and Crockett with 2.6 million followers on TikTok—it wasn’t just the candidates who drove the conversation. It was the creators around them, who offer a preview of the digital fights to come throughout the midterms and, ultimately, the 2028 presidential race.

The Talarico and Crockett campaigns ran distinctly different digital strategies. Crockett has built her congressional brand on confrontation, going massively viral last year after calling out Marjorie Taylor Greene for having a “bleach-blonde, bad-built, butch body” and telling Elon Musk to “fuck off.” Talarico’s digital presence reads more like a populist sermon delivered over his own social media accounts. He has carried that preaching to unconventional platforms, like “The Joe Rogan Experience,” which have rewarded him with countless viral clips.

But for the most part, the incendiary aspects of the digital-focused campaigns came from outside the candidates. In January, the hosts of “Las Culturistas,” a pop-culture and comedy podcast, set off a firestorm of criticism after discouraging listeners from supporting Crockett in an episode of the show. “Don’t waste your money sending to Jasmine Crockett, do not do it,” Matt Rogers, one of the hosts, said at the time. The show faced immediate backlash from members of its audience and Crockett backers, forcing the hosts to apologize.


It was the first in a series of online spats that would reach a fever pitch in February, when a Dallas-based creator named Morgan Thompson claimed that Talarico called Colin Allred, a former House representative, a “mediocre Black man.” The video, shared with her nearly 200,000 TikTok followers, went viral, breaking out from pro-Crockett communities online and into the mainstream press. Responding to the allegation, the Talarico campaign called the comment a “mischaracterization” of an off-the-record conversation the candidate had with Thompson, in which he called Allred’s method of campaigning “mediocre,” not the man himself.

“I would never attack him [Allred] on the basis of race,” Talarico said at the time. “As a Black man in America, Congressman Allred has had to work twice as hard to get where he is. I understand how my critiques of the Congressman’s campaign could be interpreted given this country’s painful legacy of racism, and I care deeply about the impact my words have on others.”

This episode illuminated a key question amongst strategists going into the heat of the 2026 midterms and the next presidential race: What role should creators play in campaigns? And how do you manage them? While working with creators has become commonplace in both Republican and Democratic campaigns, the relationships are often loosely defined and difficult to control.

“There are so many factors that the campaign staff themselves have to deal with and think about,” says Kyle Tharp, who writes the Chaotic Era newsletter that focuses on digital politics. “Do I put them in the press risers at the rally, or do I give them upfront VIP access? Do I give them a couple minutes with the candidate? Am I going to be screening their questions? Or do I just let them riff and hope for the best?”


President Donald Trump’s 2024 reelection campaign relied heavily on creators and podcasters to reach young, predominantly male voters. But many of those same creators have turned against Trump over the last year. In the leadup to the 2024 election, Trump appeared on “Flagrant,” a popular podcast hosted by comedian Andrew Schulz. But Schulz’s support for Trump quickly evolved into ire. Last summer, Schulz took issue with the administration’s failure to release files related to Justice Department investigations into convicted sex offender Jeffrey Epstein. Since then, Schulz has repeatedly leveraged his platform to criticize the administration.


Tech

Windows 10 KB5075039 update fixes broken Recovery Environment

Microsoft has released the KB5075039 Windows Recovery Environment update for Windows 10 to fix a long-standing issue that prevented some users from accessing the Recovery environment.

The Windows Recovery Environment (WinRE) is a minimal troubleshooting environment used to repair or restore the operating system after it fails to start, to diagnose crashes, or to remove malware.

Windows Recovery Environment
Source: BleepingComputer

In October 2025, Microsoft confirmed that the KB5066835 Patch Tuesday updates broke USB mouse and keyboard input when using the Windows 11 Recovery Environment, making it difficult for many to use the troubleshooting tool.

While Microsoft quickly rolled out a fix for that flaw, the company didn’t disclose until February that the Windows 10 KB5068164 update, also released in October, had broken WinRE as well.

“This update contains an issue that prevents the Windows Recovery Environment from starting successfully,” reads the February update to the change log.


Yesterday, Microsoft released the “KB5075039: Windows Recovery Environment update for Windows 10” to fix the WinRE issue introduced last year.

“[Windows Recovery Environment (WinRE)] Fixed: WinRE would not start after installing the October 14, 2025 update KB5068164,” reads the change log.

To install the update, your WinRE partition must be at least 256MB in size. If not, you will need to increase the partition size using these instructions.

Before resizing any partition, including the WinRE partition, it is always advisable to back up the data on the drive whose partitions are being resized.



Tech

This smart device stops sneaky AI gadgets from listening to your conversations

A new device aims to give people control over who can hear them in a world filled with gadgets that are always listening and capturing conversations. A startup called Deveillance has introduced Spectre I, a portable device designed to stop microphones in nearby devices from recording your voice.

“Today, we’re introducing Spectre I, the first smart device to stop unwanted audio recordings. We live in a world of always-on listening devices. Smart devices and AI dominate our world in business and private conversations. With Deveillance, you will @be_inaudible.” pic.twitter.com/WdxmnyFq1I

— Aida Baradari (@aidaxbaradari) March 3, 2026

The company says the device can make conversations unintelligible to phones, smart speakers, laptops, and other gadgets that constantly listen for audio. The idea addresses a growing concern around always-on devices.

According to the company, about 14.4 billion devices worldwide are continuously listening for voice input. These recordings often become valuable data sources, used for data mining, training artificial intelligence systems, and influencing buying behaviour and opinions.

Even a short sample of speech can reveal sensitive personal details. Around 30 seconds of voice data can help determine traits such as age, weight, income level, and even health information.


A device that creates a privacy bubble around your voice

Spectre I works by creating a two-meter protection zone around the user. When activated, it scans for nearby microphones and emits signals that humans cannot hear, but microphones can detect.

These signals overlay your speech so that recording devices receive distorted audio that cannot be understood.

Unlike traditional signal jammers that rely on strong radio interference, the device uses artificial intelligence, signal processing, and physics-based research to target microphones directly.

The system operates locally on the device and does not send any data to the cloud. The portable design of Spectre I makes it easy to carry anywhere.

Deveillance says this makes it useful in business meetings, personal conversations, or any situation where people want to keep discussions private.

The company has opened pre-orders for Spectre I with a refundable deposit of $1,199. The device is currently in development, with the first shipments expected in the second half of 2026.


Privacy groups like the Electronic Frontier Foundation have long warned about the risks of always-on surveillance. Deveillance says Spectre I is only the beginning of its effort to give users more control over how their data is collected and shared.


Tech

Apple’s budget MacBook Neo is here to take on the best cheap laptops and Chromebooks

It’s been rumoured for a long time, but Apple has finally taken the wraps off arguably its most exciting laptop in years.

The $599 MacBook Neo arrives as the most affordable entry in Apple’s laptop range, with a price that’s more in line with the brand’s iPad range – including the new iPad Air M4 – than the MacBook Air or MacBook Pro.

This is the first MacBook to be powered by the A18 Pro chip originally made for an iPhone, and, like the iMac, it also comes in a range of fun colours – Silver, Indigo, Blush, and Citrus – that are also a first for a MacBook. Apple has colour-matched the keyboard and feet, giving it a very distinct look.

The A18 Pro chip has a 6-core CPU (with 2 performance cores and 4 efficiency cores) and a 5-core GPU, along with ray-tracing support and a 16-core neural engine. Apple will only offer a single 8GB memory option, with storage sizes of either 256GB or 512GB.


Battery life is stated at 16 hours for video streaming and 11 hours of web browsing, and there’s a 1080p camera on the front. Unlike the other MacBook models, there’s no notch, but a bezel similar to that of an iPad. The display is 13 inches, with a 2408 x 1506 resolution and a reported 500 nits of brightness.

The base model ships without Touch ID, although you can pay a little more and get a version with the fingerprint unlock embedded into the keyboard.

Connectivity comes from two USB-C ports (one USB 2, the other USB 3) and a headphone jack, with no MagSafe charging. There is Wi-Fi 6E and Bluetooth 6, though, which is welcome.

MacBook Neo Price and Release Date

Prices start at £599/$599 for a model with 256GB storage and £699/$699 for a Touch ID-toting 512GB variant. It can only be selected with 8GB memory, and there’s no 1TB storage option.


This is a breaking news story. We’ll update it as we get more information.


Tech

Black Forest Labs’ new Self-Flow technique makes training multimodal AI models 2.8x more efficient

To create coherent images or videos, generative AI diffusion models like Stable Diffusion or FLUX have typically relied on external “teachers”—frozen encoders like CLIP or DINOv2—to provide the semantic understanding they couldn’t learn on their own.

But this reliance has come at a cost: a “bottleneck” where scaling up the model no longer yields better results because the external teacher has hit its limit.

Today, German AI startup Black Forest Labs (maker of the FLUX series of AI image models) has announced a potential end to this era of academic borrowing with the release of Self-Flow, a self-supervised flow matching framework that allows models to learn representation and generation simultaneously.

By integrating a novel Dual-Timestep Scheduling mechanism, Black Forest Labs has demonstrated that a single model can achieve state-of-the-art results across images, video, and audio without any external supervision.


The technology: breaking the “semantic gap”

The fundamental problem with traditional generative training is that it’s a “denoising” task. The model is shown noise and asked to find an image; it has very little incentive to understand what the image is, only what it looks like.

To fix this, researchers have previously “aligned” generative features with external discriminative models. However, Black Forest Labs argues this is fundamentally flawed: these external models often operate on misaligned objectives and fail to generalize across different modalities like audio or robotics.

The Labs’ new technique, Self-Flow, introduces an “information asymmetry” to solve this. Using a technique called Dual-Timestep Scheduling, the system applies different levels of noise to different parts of the input. The student receives a heavily corrupted version of the data, while the teacher—an Exponential Moving Average (EMA) version of the model itself—sees a “cleaner” version of the same data.

The student is then tasked not just with generating the final output, but with predicting what its “cleaner” self is seeing—a process of self-distillation where the teacher is at layer 20 and the student is at layer 8. This “Dual-Pass” approach forces the model to develop a deep, internal semantic understanding, effectively teaching itself how to see while it learns how to create.
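The asymmetry described above can be sketched in a few lines. This is a hedged illustration only: the function names, the EMA decay, and the specific timestep values are assumptions for the sketch, not Black Forest Labs’ actual code.

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.999):
    """The teacher's weights track an exponential moving average of the student's."""
    return decay * teacher_w + (1 - decay) * student_w

def corrupt(x, noise, t):
    """Flow-matching-style linear interpolation between clean data x and noise at time t."""
    return (1 - t) * x + t * noise

rng = np.random.default_rng(0)
x = rng.normal(size=8)       # a clean data sample
noise = rng.normal(size=8)

# Dual-Timestep Scheduling: the student receives a heavily corrupted view,
# while the EMA teacher sees a cleaner one -- the "information asymmetry".
t_student, t_teacher = 0.8, 0.3   # illustrative noise levels
x_student = corrupt(x, noise, t_student)
x_teacher = corrupt(x, noise, t_teacher)

# The teacher's view stays strictly closer to the clean data than the student's,
# so predicting the teacher's features forces semantic understanding.
print(np.linalg.norm(x_teacher - x) < np.linalg.norm(x_student - x))  # True
```

In a full training loop, the student would be optimized to match the EMA teacher’s intermediate features on these two views, with `ema_update` applied after every optimizer step.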


Product implications: faster, sharper, and multi-modal

The practical results of this shift are stark. According to the research paper, Self-Flow converges approximately 2.8x faster than the REpresentation Alignment (REPA) method, the current industry standard for feature alignment. Perhaps more importantly, it doesn’t plateau; as compute and parameters increase, Self-Flow continues to improve while older methods show diminishing returns.

The leap in training efficiency is best understood through the lens of raw computational steps: while standard “vanilla” training traditionally requires 7 million steps to reach a baseline performance level, REPA shortened that journey to just 400,000 steps, representing a 17.5x speedup.

Black Forest Labs’ Self-Flow framework pushes this frontier even further, operating 2.8x faster than REPA to hit the same performance milestone in roughly 143,000 steps.

Taken together, this evolution represents a nearly 50x reduction in the total number of training steps required to achieve high-quality results, effectively collapsing what was once a massive resource requirement into a significantly more accessible and streamlined process.
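The step counts quoted above are easy to verify: the roughly 143,000 figure is simply 400,000 divided by 2.8, and the overall reduction works out to 49x.

```python
vanilla_steps = 7_000_000   # baseline "vanilla" flow matching
repa_steps = 400_000        # REPA's reported step count

repa_speedup = vanilla_steps / repa_steps          # 17.5x, as stated
selfflow_steps = repa_steps / 2.8                  # ~142,857 ("roughly 143,000")
overall_speedup = vanilla_steps / selfflow_steps   # 49x ("nearly 50x")

print(repa_speedup, round(selfflow_steps), round(overall_speedup))  # 17.5 142857 49
```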


Black Forest Labs showcased these gains through a 4B parameter multi-modal model. Trained on a massive dataset of 200M images, 6M videos, and 2M audio-video pairs, the model demonstrated significant leaps in three key areas:

  1. Typography and text rendering: One of the most persistent “tells” of AI images has been garbled text. Self-Flow significantly outperforms vanilla flow matching in rendering complex, legible signs and labels, such as a neon sign correctly spelling “FLUX is multimodal”.

  2. Temporal consistency: In video generation, Self-Flow eliminates many of the “hallucinated” artifacts common in current models, such as limbs that spontaneously disappear during motion.

  3. Joint video-audio synthesis: Because the model learns representations natively, it can generate synchronized video and audio from a single prompt, a task where external “borrowed” representations often fail because an image-encoder doesn’t understand sound.

In terms of quantitative metrics, Self-Flow achieved superior results over competitive baselines. On Image FID, the model scored 3.61 compared to REPA’s 3.92. For video (FVD), it reached 47.81 compared to REPA’s 49.59, and in audio (FAD), it scored 145.65 against the vanilla baseline’s 148.87.

From pixels to planning: the path to world models

The announcement concludes with a look toward world models—AI that doesn’t just generate pretty pictures but understands the underlying physics and logic of a scene for planning and robotics.

By fine-tuning a 675M parameter version of Self-Flow on the RT-1 robotics dataset, researchers achieved significantly higher success rates in complex, multi-step tasks in the SIMPLER simulator. While standard flow matching struggled with complex “Open and Place” tasks, often failing entirely, the Self-Flow model maintained a steady success rate, suggesting that its internal representations are robust enough for real-world visual reasoning.


Implementation and engineering details

For researchers looking to verify these claims, Black Forest Labs has released an inference suite on GitHub specifically for ImageNet 256×256 generation. The project, primarily written in Python, provides the SelfFlowPerTokenDiT model architecture based on SiT-XL/2.

Engineers can utilize the provided sample.py script to generate 50,000 images for standard FID evaluation. The repository highlights that a key architectural modification in this implementation is per-token timestep conditioning, which allows each token in a sequence to be conditioned on its specific noising timestep. During training, the model utilized BFloat16 mixed precision and the AdamW optimizer with gradient clipping to maintain stability.
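Per-token timestep conditioning can be illustrated with a toy example. The shapes and the sinusoidal embedding below are conventional diffusion-model choices assumed for illustration, not the repository’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 32))   # a sequence of 16 tokens, 32-dim each
noise = rng.normal(size=tokens.shape)
t = rng.uniform(size=(16, 1))        # one timestep per TOKEN, not per sample

# Each token is noised at its own timestep (broadcast over the feature dim).
noised = (1 - t) * tokens + t * noise

def timestep_embedding(t, dim=32):
    """Sinusoidal embedding of each token's timestep, one row per token."""
    half = dim // 2
    freqs = np.exp(-np.arange(half) * np.log(10000.0) / half)
    angles = t * freqs  # broadcasts (16, 1) * (half,) -> (16, half)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# Each token gets a distinct conditioning vector matching its own noise level.
cond = timestep_embedding(t)         # shape (16, 32)
```

The point of the modification is visible in the shapes: where standard conditioning would produce a single embedding per sample, here every row of `cond` corresponds to one token’s individual timestep.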

Licensing and availability

Black Forest Labs has made the research paper and official inference code available via GitHub and their research portal. While this is currently a research preview, the company’s track record with the FLUX model family suggests these innovations will likely find their way into their commercial API and open-weights offerings in the near future.

For developers, the move away from external encoders is a massive win for efficiency. It eliminates the need to manage separate, heavy models like DINOv2 during training, simplifying the stack and allowing for more specialized, domain-specific training that isn’t beholden to someone else’s “frozen” understanding of the world.


Takeaways for enterprise technical decision-makers and adopters

For enterprises, the arrival of Self-Flow represents a significant shift in the cost-benefit analysis of developing proprietary AI.

While the most immediate beneficiaries are organizations training large-scale models from scratch, the research demonstrates that the technology is equally potent for high-resolution fine-tuning. Because the method converges nearly three times faster than current standards, companies can achieve state-of-the-art results with a fraction of the traditional compute budget.

This efficiency makes it viable for enterprises to move beyond generic off-the-shelf solutions and develop specialized models that are deeply aligned with their specific data domains, whether that involves niche medical imaging or proprietary industrial sensor data.

The practical applications for this technology extend into high-stakes industrial sectors, most notably robotics and autonomous systems. By leveraging the framework’s ability to learn “world models,” enterprises in manufacturing and logistics can develop vision-language-action (VLA) models that possess a superior understanding of physical space and sequential reasoning.


In simulation tests, Self-Flow allowed robotic controllers to successfully execute complex, multi-object tasks—such as opening a drawer to place an item inside—where traditional generative models failed. This suggests that the technology is a foundational tool for any enterprise seeking to bridge the gap between digital content generation and real-world physical automation.

Beyond performance gains, Self-Flow offers enterprises a strategic advantage by simplifying the underlying AI infrastructure. Most current generative systems are “Frankenstein” models that require complex, external semantic encoders often owned and licensed by third parties.

By unifying representation and generation into a single architecture, Self-Flow allows enterprises to eliminate these external dependencies, reducing technical debt and removing the “bottlenecks” associated with scaling third-party teachers. This self-contained nature ensures that as an enterprise scales its compute and data, the model’s performance scales predictably in lockstep, providing a clearer ROI for long-term AI investments.


Tech

Vehicle Tire Pressure Sensors Enable Silent Tracking

Longtime Slashdot reader linuxwrangler writes: Dark Reading reports that a team of researchers has determined that signals from tire pressure monitoring systems (TPMSs), required in U.S. cars since 2007, can be used to track the presence, type, weight, and driving pattern of vehicles. The researchers report (PDF) that the TPMS data, which includes unique sensor IDs, is sent in clear text without authentication and can be intercepted 40-50 meters from a vehicle using devices costing $100. “Researchers have discovered that most TPMS sensors transmit a unique identifier in clear text that never changes during the lifetime of the tire,” the researchers pointed out. “This unencrypted wireless communication makes the signals susceptible to eavesdropping and potential tracking by any third party in proximity to the car.”
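The privacy risk follows directly from the identifier being static: anyone logging sensor IDs at a few fixed points can link the sightings into a movement profile. A toy sketch of that linkage (the IDs, places, and times below are invented):

```python
from collections import defaultdict

# Hypothetical intercepted broadcasts: (location, timestamp, cleartext TPMS sensor ID).
sightings = [
    ("parking_garage_A", "2026-03-01T08:02", 0x1A2B3C4D),
    ("highway_overpass", "2026-03-01T08:31", 0x1A2B3C4D),
    ("parking_garage_A", "2026-03-02T08:05", 0x1A2B3C4D),
]

# Because the ID never changes over the tire's lifetime, grouping by it
# yields a per-vehicle track with no other identifying data needed.
tracks = defaultdict(list)
for place, when, sensor_id in sightings:
    tracks[sensor_id].append((when, place))

print(len(tracks))  # 1: three sightings collapse to a single vehicle's track
```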


Tech

Spotify and Liquid Death Launch Eternal Playlist Urn So the Music Never Dies Even If You Do

Spotify has partnered with beverage brand Liquid Death to launch one of the stranger marketing ideas to crawl out of the modern streaming era: the Liquid Death x Spotify Eternal Playlist Urn, a cremation urn paired with a tool that generates a personalized Spotify playlist meant to live on after you’re gone. The concept blends memorial products with algorithmic music discovery, allowing users to create what the companies call a “forever soundtrack” based on their listening history. It’s part branding stunt, part commentary on how deeply streaming has embedded itself into daily life and identity.

Of course, it raises a perfectly reasonable question: how macabre can a marketing campaign get? Memorializing someone with their favorite songs might sound touching in theory, but it also assumes your loved ones actually enjoy your music. Speaking personally, my family already rolls their eyes at half the things I play. The last thing they need is the possibility of being haunted by my eternal playlist from beyond the grave.


A Real Urn With a Bluetooth Speaker… And Apparently a Five-Per-Customer Limit

The Liquid Death x Spotify Eternal Playlist Urn is exactly what it sounds like: a limited-edition cremation urn designed to hold human ashes while also functioning as a Bluetooth speaker that plays a custom Spotify playlist. Priced at $495 and limited to just 150 units, the urn is made from 100% polyester resin and stands nearly a foot tall (29 cm). Each unit is produced in small batches and marketed as a one-of-a-kind piece, meaning small cosmetic imperfections are considered part of the design rather than defects.

The unusual twist is built directly into the lid. A wireless Bluetooth speaker is embedded at the top of the urn and powered by a rechargeable battery that charges via USB-C. Once connected to a phone, users can stream a personalized Spotify Eternal Playlist, which is generated through Spotify’s playlist tool based on a listener’s music history and preferences. In theory, the result is a curated soundtrack that reflects the music someone loved while they were alive and can continue playing long after they’re gone.


Of course, this is where the concept gets a little…unsettling. Unlike novelty memorial products or decorative keepsakes, this is an actual urn designed to hold cremated remains. That means the person whose playlist is blasting through the Bluetooth speaker could literally be inside the box producing the music. Whether that feels like a touching tribute or the world’s most awkward living room accessory probably depends on how much your family enjoyed your taste in music.

In my case, this isn’t likely to become a problem. Judaism traditionally prohibits cremation, so the Eternal Playlist Urn probably won’t be part of my exit strategy. If my kids stick to tradition, I’ll end up buried somewhere outdoors instead. Knowing New Jersey, that likely means the backyard under the big weeping willow. The space under the pine and oak trees is already spoken for. Jersey. You really don’t want to know.

The Bottom Line

And if all of that wasn’t strange enough, there’s the line in the product description that really makes you stop mid-scroll and stare: “Limited to 5 per customer.”


Five.

Not to be weird. Because this whole thing clearly hasn’t crossed that line yet.

But five urns? Who exactly is buying five cremation urns with Bluetooth speakers? Are these supposed to be Christmas gifts? A subtle 100th birthday present for Grandma that comes with a note reading, “It’s time to move on old lady. My 9 to 5 job isn’t paying for that new F-150 and fishing boat, but my inheritance might.” Maybe it’s for the dog, so he can sit in the living room listening to Dad’s eternal yacht rock playlist while contemplating the existential horror of Bluetooth connectivity from beyond the grave.

Or maybe the idea is that your entire family can go out together, each with their own urn blasting their personal soundtrack like some kind of posthumous silent disco.


And because nothing ever truly tops American commercialism and our endless appetite for things we probably don’t need, all 150 urns sold out in a single day.

Yes, the entire run of Bluetooth enabled afterlife sound systems disappeared almost instantly. Somewhere out there, people are proudly displaying a cremation urn that doubles as a wireless speaker while a Spotify playlist hums away on eternal repeat.

And if you missed the first batch of algorithmic immortality, don’t worry. More are coming. Because in America, even death apparently comes with a restock notification.


Where to order: $495 at Liquid Death or Create Your Eternal Playlist on Spotify.


Tech

Apple updates iOS, macOS Tahoe to 26.3.1 to support new Studio Displays

Right after concluding its week of product launches, Apple has rolled out iOS 26.3.1, iPadOS 26.3.1, and macOS 26.3.1 updates, adding support for its updated Studio Display and the new Studio Display XDR.

Apple’s two new Studio Displays — image credit: Apple

Apple periodically releases smaller updates for its operating systems, fixing bugs and adding support for new products. Wednesday’s updates firmly fall into the latter category.
The updates, rolling out to iPhone, iPad, and Mac, bring macOS Tahoe up to version 26.3.1, with iOS 26.3.1 and iPadOS 26.3.1 also released at the same time.


Tech

How this AI expert took an alternative route to tech

Aon’s Joseph Holland discusses how taking the route less travelled can lead you towards the career you were meant to have.

“I wanted to be an architect,” explains Joseph Holland, a director of digital foundations, AI platforms and developer experience at Aon. That was the plan; however, having completed the Leaving Cert, he found he didn’t have the required CAO points and “suddenly didn’t have a plan any more”. 

“I’d always been into computers and technology though. Even while I was unemployed, I was refurbishing old PCs and selling them on,” he tells SiliconRepublic.com. “So when a FÁS caseworker mentioned Fastrack into Information Technology (FIT), it caught my attention immediately.” 

He was accepted onto the programme and emerged with a QQI-FET level six Advanced Certificate in IT Specific Support and a one-year contract at Kepak Group that soon became permanent. 


From there, he moved on to Version 1 and then Aon where, having spotted a gap whereby there was no developer experience function, he made the case for building one. Today, he is leading the AI platform and developer experience service. Along the way, he also enrolled at Trinity College Dublin, as a mature student, where he completed his information systems degree. 

All that is to say that often, despite having a plan, you don’t always end up going in the direction you thought you would. Professionally, it can take time and research to figure out the best course of action.  

“I’m glad I did it,” says Holland of his degree programme. 

“I picked up useful skills around project management, systems analysis and understanding how technology fits into broader business strategy. But honestly, the experience and track record I’d already built mattered more to every employer than the piece of paper.”


No alternative to progress

Access to less typical educational and upskilling opportunities is, for Holland, “everything”, as he explains that without FIT, he likely would have chosen to retake the Leaving Cert, putting his career on a different trajectory. 

He notes: “The traditional system had written me off based on a set of exam results. FIT looked at me differently. What makes programmes like FIT work is the direct connection to industry.

“You’re not studying theory in isolation. You’re learning skills that employers actually need and you’re getting placed in real workplaces where you can prove yourself.”

Apprenticeships, he finds, have the power to break down the biggest barriers for young people struggling to get their foot in the door when they don’t have a degree on their CV. 


“The tech industry moves fast and it doesn’t particularly care where your qualification came from. It cares whether you can solve problems and keep learning. Alternative pathways are often better at developing those qualities than four years of lectures,” he says. 

And part of creating opportunities for young people, he explains, is breaking down harmful myths about alternative educational routes as a vehicle towards a tech-based career.

Mythbusters

“The biggest myth is that they are second-best. That if you were good enough, you’d have gone to university. University education has real value and I’m not knocking it,” he says.

“But I’ve worked with people from every educational background over the past 20 years and the route someone took tells you very little about how good they are at their job.” 


What matters, he finds, is what the individual has done with their time since. Another pervasive falsehood is that there is a ceiling that you will eventually hit. Holland explains that there is often a misguided belief that while you can access an entry-level role through an apprenticeship, once you start looking for a more senior position, you will run into roadblocks. 

“I’m a director at a Fortune 500 company. I got my degree years into my career, not before it. The ceiling is artificial and it’s maintained by hiring practices, not by any real limitation in what people from alternative routes can achieve.”

Lastly, he finds that there is also a misconception that alternative routes only lead to technical roles. In Holland’s experience, the skills developed through programmes such as FIT go far beyond coding or networking. 

“My own career moved from hands-on infrastructure work to leading enterprise AI strategy and building a new business function. Technology careers are built on continuous learning and the starting point matters far less than people think.”


To that point, Holland urges employers to take a serious look at how tech apprenticeships in particular can create a sturdy talent pipeline, noting that many in-demand skills – such as curiosity, a strong work ethic and a willingness to learn – never require a degree. 

And to any young person who didn’t get the points or exam results they needed, or who is sitting in a classroom wondering whether they are on the right path and whether alternatives really exist, he wants them to know that they do – and that he has been there too.

“The education system measures one very narrow type of ability at one very specific moment in your life. It doesn’t define you and it definitely doesn’t predict where you’ll end up. I went from an unemployed school leaver to directing AI platforms at a Fortune 500 while running an animal sanctuary and a music-tech start-up,” he says.  

“Life is broader and stranger and more interesting than any career guidance session will tell you. Programmes like FIT exist because the tech industry needs people who think differently and aren’t afraid to figure things out on the fly. If that sounds like you, there’s a path waiting. You just need to know it’s there.”






Premier League Soccer 2026: Stream Newcastle vs. Man United Live


When to watch Newcastle vs. Man United

  • Wednesday, March 4 at 3:15 p.m. ET (12:15 p.m. PT)

Where to watch Newcastle vs. Man United

  • The match will air in the US on Peacock.

Wednesday sees a crucial clash in the race for UEFA Champions League qualification as Manchester United travel to St. James’ Park looking to continue their excellent recent form against a Newcastle team whose European dream is in danger of slipping away.

The visitors have claimed a superb 19 points from a possible 21 since the January appointment of Michael Carrick as interim Red Devils boss. Sunday’s 2-1 win over Crystal Palace was the latest victory under the former Old Trafford midfielder. 

The Magpies will be looking to arrest a hugely disappointing slide, with Eddie Howe’s men having crashed to five defeats from their last six games in the English Premier League. That run has seen them slip into the bottom half of the table. 

Newcastle United takes on Manchester United on Wednesday, March 4, at St. James’ Park, with kickoff set for 8:15 p.m. GMT. That makes it a 3:15 p.m. ET or 12:15 p.m. PT start in the US and Canada, and a 7:15 a.m. AEDT kickoff in Australia early on Thursday morning.


In-form Slovenian striker Benjamin Šeško has scored seven goals in Man United’s last eight games. 


How to watch Newcastle vs. Man United in the US without cable

This match will be broadcast on the streaming service Peacock. To catch the game live, you’ll need a Peacock Premium or Premium Plus subscription. 


Peacock offers two Premium plans, and after recent price increases, the ad-supported Premium plan costs $11 a month and the ad-free Premium Plus plan costs $17 a month.

How to watch Premier League 2025-26 from anywhere with a VPN

If you’re traveling abroad and want to keep up with your favorite shows while away from home, a VPN can help enhance your privacy and security when streaming. It encrypts your traffic and prevents your internet service provider from throttling your speeds, and can also be helpful when connecting to public Wi-Fi networks while traveling, adding an extra layer of protection for your devices and logins.


VPNs are legal in many countries, including the US and Canada, and can be used for legitimate purposes such as improving online privacy and security. However, some streaming services may have policies that restrict VPN use to access region-specific content. If you’re considering a VPN for streaming, check the platform’s terms of service to ensure compliance.

If you choose to use a VPN, follow the provider’s installation instructions to ensure you’re connected securely and in compliance with applicable laws and service agreements. Some streaming platforms may block access when a VPN is detected, so verify whether your streaming subscription allows VPN use.


  • Price: $13 per month, $75 for the first year or $98 total for the first two years (one- and two-year plans renew at $100 per year)

  • Latest tests: No DNS leaks detected, 18% speed loss in 2025 tests

  • Jurisdiction: British Virgin Islands

  • Network: 3,000-plus servers in 105 countries


ExpressVPN is our best VPN pick for people who want a reliable and safe VPN that works on a variety of devices. Prices start at $3.49 a month on a two-year plan for the Basic tier. Note that ExpressVPN offers a 30-day money-back guarantee.


Livestream Newcastle vs. Man United in the UK

This Wednesday evening match at St. James’ Park is exclusive to TNT, which is airing all of this week’s midweek Premier League fixtures live in the UK across its channels. This game is set to be broadcast on TNT Sports 1.


TNT Sports offers a sizable 52 live matches this season, exclusively for viewers in the UK. Subscribers can access TNT Sports via Sky Q as a TV package or have the option of streaming online. It costs £31 either way and comes in a package that includes Discovery Plus’ library of documentary content.

Livestream Newcastle vs. Man United in Canada

If you want to livestream Premier League games in Canada this season, you need to subscribe to Fubo. The service has secured exclusive rights to the Premier League and is broadcasting all 380 matches live.


Fubo is the go-to destination for Canadians looking to watch the Premier League, with exclusive streaming rights to every match. It costs CA$27 for the first month, then CA$31.50 per month from then on.

Livestream Newcastle vs. Man United in Australia

Livestreaming rights for the EPL are now with Stan Sport, which is showing all 380 fixtures live, including this match.


Stan Sport will set you back AU$20 a month (on top of a Stan subscription, which starts at AU$12). It’s also worth noting that the streaming service is currently offering a seven-day free trial.

A subscription will also give you access to Premier League, Champions League and Europa League action, as well as international rugby and Formula E.



Linux Fu: The USB WiFi Dongle Exercise


The TX50U isn’t very Linux-friendly

If you’ve used Linux for a long time, you know that we are spoiled these days. Getting a new piece of hardware back in the day was often a horrible affair, requiring custom kernels and lots of work. Today, it should be easier. The default drivers on most distros cover a lot of ground, kernel modules make adding drivers easier, and dkms can automate the building of modules for specific kernels, even if it isn’t perfect.

So ordering a cheap WiFi dongle to improve your old laptop’s network connection should be easy, right? Obviously, the answer is no or this would be a very short post.

Plug and Pray

The USB dongle in question is a newish TP-Link Archer TX50U. It is probably perfectly serviceable for a Windows computer, and I got a “deal” on it. Plugging it in caused it to show up in the list of USB devices, but no driver attached to it, nor were any lights on the device blinking. Bad sign. Pro tip: lsusb -t will show you what drivers are attached to which devices. If you see a device with no driver, you know you have a problem. Use -tv if you want a little more detail.
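That pro tip can be sketched as follows. The tree below is sample output in the style of lsusb -t, not from the author’s machine: a line whose Driver= field is empty means no kernel driver claimed the device.

```shell
# Sample `lsusb -t`-style output (illustrative devices and ports); an empty
# `Driver=` field is the "bad sign" -- nothing is bound to that device.
tree='/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M
    |__ Port 4: Dev 2, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 5: Dev 3, If 0, Class=Vendor Specific Class, Driver=, 12M'

# Filter for driverless devices:
printf '%s\n' "$tree" | grep 'Driver=,'
```

On a real system you would pipe `lsusb -t` itself through the same grep.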

The lsusb output shows the device as a Realtek, so that tells you a little about the chipset inside. Unfortunately, it doesn’t tell you exactly which chip is in use.

Internet to the Rescue?

Note that most devices (including the network card) have drivers since this was taken after the driver install. The fingerprint scanner (port 5 device 3) does not have a driver, however.

My first attempt to install a Realtek driver from GitHub failed because it was for what turned out to be the wrong chipset. But I did find info that the adapter had an RTL8852CU chip inside. Armed with that nugget, I found [morrownr] had several versions, and I picked up the latest one.

Problem solved? Turns out, no. I should have read the documentation, but, of course, I didn’t. So after going through the build, I still had a dead dongle with no driver or blinking lights.


Then I decided to read the file in the repository that tells you what USB IDs the driver supports. According to that file, the code matches several Realtek IDs, an MSI device, one from Sihai Lianzong, and three from TP-Link. All of the TP-Link devices use the 35B2 vendor ID, and the last two of those use device IDs of 0101 and 0102.

Suspiciously, my dongle uses 0103 but with a vendor ID of 37AD. Still, it seemed like it would be worth a shot. I did a recursive grep for 0x0102 and found a table that sets the USB IDs in os_dep/linux/usb_intf.c.
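That recursive grep looks something like the following, recreated here against a throwaway stub tree so the paths and table line are illustrative rather than the real driver source:

```shell
# Build a stub source tree mirroring the layout named above, then search it.
tmp=$(mktemp -d)
mkdir -p "$tmp/os_dep/linux"
printf '%s\n' \
  '{USB_DEVICE_AND_INTERFACE_INFO(0x35b2, 0x0102, 0xff, 0xff, 0xff), .driver_info = RTL8852C},' \
  > "$tmp/os_dep/linux/usb_intf.c"

# -r: recurse, -l: print matching file names only
hit=$(grep -rl 0x0102 "$tmp")
echo "$hit"        # the file holding the USB ID table
rm -rf "$tmp"
```

Against the real checkout, `grep -rn 0x0102 .` from the driver’s top directory finds the same table.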

Of course, since I had already installed the driver, I had to change the dkms source, not the download from GitHub. That was, on my system, in /usr/src/rtl8852cu-v1.19.22-103/os_dep/linux/usb_intf.c. I copied the 0x0102 line and changed both IDs so there was now a 0x0103 line, too:

{USB_DEVICE_AND_INTERFACE_INFO(0x37ad, 0x0103, 0xff, 0xff, 0xff), .driver_info = RTL8852C}, /* TP-Link Archer TX50U */

Now it was a simple matter of asking dkms to rebuild and reinstall the driver. Blinking lights were a good sign and, in fact, it worked and worked well.
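The rebuild is the standard dkms remove/build/install cycle. A dry-run sketch, with the module name and version taken from the /usr/src path above and the commands echoed rather than executed, since the real thing needs root:

```shell
# Dry run of the dkms rebuild; drop the `echo`s to run it for real (as root).
MOD=rtl8852cu/1.19.22-103
echo dkms remove  "$MOD" --all   # forget the previously built module
echo dkms build   "$MOD"         # recompile against the running kernel
echo dkms install "$MOD"         # place the module under /lib/modules
```

A `modprobe` (or replug of the dongle) then loads the rebuilt module.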


DKMS

If you haven’t used DKMS much, it is a reasonable system that can rebuild drivers for specific Linux kernels. It basically copies each driver and version to a directory (usually /usr/src) and then has ways to build them against your kernel’s symbols and produce loadable modules.

The system also maintains a build/install state database in /var/lib. A module is “added” to DKMS, then “built” for one or more kernels, and finally “installed” into the corresponding location for use by that kernel. When a new kernel appears, DKMS detects the event — usually via package manager hooks or distribution-specific kernel install triggers — and automatically rebuilds registered modules against the new kernel headers. The system tracks which module versions are associated with which kernels, allowing parallel kernel installations without conflicts. This separation of source registration from per-kernel builds is what allows DKMS to scale cleanly across multiple kernel versions.
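The state database is easiest to inspect with `dkms status`. A sketch that parses sample output, where the module lines and version numbers are made up and the exact field layout varies between dkms releases:

```shell
# Made-up `dkms status` output; list the modules that report "installed"
# (as opposed to merely "added" or "built") for some kernel.
status='rtl8852cu/1.19.22-103, 6.8.0-41-generic, x86_64: installed
nvidia/550.90, 6.8.0-41-generic, x86_64: installed
vboxhost/7.0.14, 6.8.0-41-generic, x86_64: built'

printf '%s\n' "$status" | awk -F'[/,]' '/: installed$/ {print $1}'
```

The same one-liner applied to real `dkms status` output shows at a glance which registered modules made it all the way through the build/install cycle.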

If you didn’t use DKMS, you’d have to rebuild kernel modules by hand after every kernel update. That would be very inconvenient for important things like video drivers.

Of course, not everything is rosy. The Nvidia drivers, for example, often depend on kernel internals that are prone to change in future Linux releases. So one day, you get a kernel update, reboot, and you have no screen. DKMS is the first place to check: you’ll probably find it hit errors when building the graphics drivers.


Your choices are to look for a new driver, see if you can patch the old driver, or roll back to a previous working kernel. Sometimes the changes are almost trivial like when an API changes names. Sometimes they are massive changes and you really do want to wait for the next release. So while DKMS helps, it doesn’t solve all problems all the time.

Extras and Thoughts

I skipped over the part of turning off secure boot because I was too lazy to add a signing key to my BIOS. I’ll probably go back and do that later. Probably.

You have to wonder why this is so hard. There is already a way to pass module options, so you might as well let a user jam a USB ID in, too. Sure, that wouldn’t have helped for the enumeration case, but it would have been perfectly fine by me if I had just had to pass a modprobe or insmod parameter to make the card work. Even though I’m set up for rebuilding kernel modules and kernels, many people aren’t, and it seems silly to force them to recompile for a minor change like this.

Of course, another fun answer would be to have vendors actually support their devices for Linux. Wouldn’t that be nice?


You could write your own drivers if you have sufficient documentation or the desire to reverse-engineer the Windows drivers. But it can take a long time. User-space drivers are a little less scary, and some people like using Rust.

What’s your Linux hardware driver nightmare story? We know you have one. Let us hear about it in the comments.

