Connect with us
DAPA Banner

Tech

Definitive Edition Arrives On Mac Next Month

Published

on





One of the greatest real-time strategy games ever is making its way to macOS (again). Publisher Feral Interactive announced today it will bring Age of Empires II: Definitive Edition to Apple computers through Steam on May 28, with an App Store release to follow later this year. Feral worked with World’s Edge, the studio that has managed the Age of Empires franchise for Microsoft since 2019, to develop the port.

Advertisement

Like its PC counterpart, the Definitive Edition on Mac will include content from AoE2’s original Age of Kings release alongside its highly regarded The Conquerors expansion. It also comes with three pieces of more recent DLC: Lords of the West, Dynasties of India and Dawn of the Dukes. Between those, you could easily spend hundreds of hours playing all the included single player campaigns. (I know I did.) This being a remake, you also get updated graphics, music and about two decades of quality of life improvements. 

For multiplayer, you will also have access to many of the civilizations that are in the game. If you’re still keen to play more AoE2 after all that, every piece of DLC available for the PC version of the game, up to and including the most recent The Last Chieftains expansion, will be available to purchase separately. 

Technically, this isn’t the first time Age of Empires II has been available on Mac. The original game arrived on Mac back in 2001, but this is the first time the Definitive Edition has been available on Apple’s operating system since it was released on PC back in 2019. Notably, this is the first Microsoft title to make its way to Mac since Psychonauts 2 in 2021. Seven years is a long time to wait for a game to release on another platform, but the nice thing about Age of Empires II is the community hasn’t left the game. On Steam, there are consistently about 20,000 people playing at any time, so you can always find a match.

Advertisement



Source link

Advertisement
Continue Reading
Click to comment

You must be logged in to post a comment Login

Leave a Reply

Tech

AI integration needs accountability, not just innovation

Published

on

Artificial intelligence has already embedded itself into the rhythms of modern life, shaping decisions in ways that often go unnoticed. Amy Trahey, founder of Great Lakes Engineering Group, believes that integration is exactly what makes it powerful and, in many cases, risky. From her perspective in engineering, she sees AI as something that directly influences outcomes tied to public safety, funding, and long-term trust. 

Her understanding of AI began outside formal systems. It revealed itself through daily interactions with technology, from predictive recommendations to voice-enabled tools that respond almost instinctively, which paved the way for a sudden epiphany.

Amy Trahey, P.E.

Amy Trahey, P.E.

She says, “I realized how AI is integrated into everything. Whether I watch something on streaming platforms, whether I’m talking on the phone, and suddenly I’m seeing ads for what I spoke about, it’s already part of how we live, and it’s moving faster than any of us can keep up with.” That speed, in her view, creates a leadership gap. Organizations are adopting AI at scale, and Trahey believes many leaders underestimate how quickly their teams are already relying on it. 

Advertisement

She points to studies showing that nearly three out of four companies now use AI in some capacity, interpreting that as proof that passive oversight is no longer viable. “You have to realize your team is going to use it. It’s not a question anymore. So if that’s the case, then it becomes your responsibility to understand it and make sure it’s being used the right way,” Trahey explains. 

Education became her first step toward that responsibility. She enrolled in a five-week intensive program focused on AI prompting, approaching it with the same discipline she applies to engineering work. What she found reshaped her perspective. “It truly is transformational technology. This is on the level of the World Wide Web, but it’s evolving even faster,” Trahey shares. “It has great power to make positive changes, and naturally, it has the potential to be used the wrong way. It all comes down to intent and whether you’re doing things with integrity.

At Great Lakes Engineering Group, Trahey finds it imperative to establish that duality to ensure that efficiency gains are measurable. She highlights using AI to translate complex engineering briefs and updates into concise and coherent communication for clients, to generate structured meeting documentation in minutes instead of hours. The value, she posits, lies in augmenting human capability, not replacing it. 

Oversight, however, remains fundamental to her process. She insists that no AI-generated output should move forward without human review, particularly in high-stakes environments. Within her work, which revolves around overseeing bridge and transportation infrastructure projects, due diligence finds greater relevance. 

It acts as an assistant for me, and sometimes as an advisor,” Trahey explains. “But everything still comes back to me. I review it before it goes anywhere. It’s known to hallucinate, and it can try to please you by giving you what it thinks you want to hear. That’s where human responsibility comes in. You cannot take your hands off the wheel.

Advertisement

Responsibility extends into organizational culture as well, as Trahey recognized early that AI adoption within her team required structure, not restriction. Observing younger engineers already integrating these tools into their workflows prompted her to formalize guidelines. “We do bridge design. We’re working on things that are technically complex and tied to safety,” she says. “If people are using AI, then I need to understand it so I can create policies around what’s acceptable and what’s not. That’s part of leadership. You don’t ignore it. You define how it’s used.”

Her framework draws a clear line between ethical efficiency and misuse. Automating administrative tasks or organizing large datasets represents what she considers appropriate use. In her view, misrepresenting AI-generated work or exploiting time savings for financial gain reflects a breakdown in professional integrity. She speaks directly to that risk.

 “There are people who will use it and then bill five hours for something that took five minutes. That’s not innovation. That’s a lack of integrity. And when you’re dealing with taxpayer money or public safety, that matters.

Her concerns also extend to societal implications. Trahey believes the accessibility of AI introduces new risks that require coordinated oversight. “When something this powerful is accessible to every human being across the globe, there has to be some level of legislative involvement. We need guidelines and accountability. This isn’t just for technically savvy people anymore. This is for everybody,” Trahey shares. 

Advertisement

Personal experience adds another layer to her perspective. Watching her son Quinn interact with AI as someone with autism has highlighted its potential and its complexity. She sees value in its ability to support communication, especially for individuals who struggle to express themselves. At the same time, she remains attentive to how that interaction is framed. “He sees it as something he can talk to, and there’s a benefit in that,” she explains. “But it’s my job to help him understand what it is and what it isn’t. It’s a tool, not a person. That distinction matters.”

Trahey’s approach to AI reflects a consistent principle. Innovation should be pursued with intention, supported by education, and governed by clear standards. She believes organizations that engage with AI thoughtfully will be better positioned to harness its benefits without compromising trust, and as the world accelerates into the new era of technological collaboration, that distinction, she says, makes all the difference.

Source link

Advertisement
Continue Reading

Tech

One tool call to rule them all? New open source Python tool RunPod Flash eliminates containers for faster AI dev

Published

on

Runpod, the high-performance cloud computing and GPU platform designed specifically for AI development, today launched a new open source, MIT licensed, enterprise-friendly Python programming tool called Runpod Flash — and it is poised to make creation, iteration and deployment of AI systems inside and outside of foundation model labs much faster.

The tool aims to eliminate some of the biggest barriers and hurdles to training and using AI models today, namely, doing away with Docker packages and containerization when developing for serverless GPU infrastructure, which the company believes will speed up development and deployment of new AI models, applications and agentic workflows.

Additionally, the platform is built to serve as a critical substrate for AI agents and coding assistants—such as Claude Code, Cursor, and Cline—enabling them to orchestrate and deploy remote hardware autonomously with minimal friction.

Developers can utilize Flash to accomplish a diverse set of high-performance computing tasks, including cutting-edge deep learning research, model training, and fine-tuning.

Advertisement

“We make it as easy as possible to be able to bring together the cosmos of different AI tooling that’s available in a function call,” said RunPod chief technology officer (CTO) Brennen Smith, in a video call interview with VentureBeat last week.

The tool allows for the creation of sophisticated “polyglot” pipelines, where users can route data preprocessing to cost-effective CPU workers before automatically handing off the workload to high-end GPUs for inference.

Beyond research and development, Flash supports production-grade requirements through features such as low-latency load-balanced HTTP APIs, queue-based batch processing, and persistent multi-datacenter storage.

Eliminating the ‘packaging tax’ of AI development

The core value proposition of Flash GA is the removal of Docker from the serverless development cycle.

Advertisement

In traditional serverless GPU environments, a developer must containerize their code, manage a Dockerfile, build the image, and push it to a registry before a single line of logic can execute on a remote GPU. Runpod Flash treats this entire process as a “packaging tax” that slows down iteration cycles.

Under the hood, Flash utilizes a cross-platform build engine that enables a developer working on an M-series Mac to produce a Linux x86_64 artifact automatically.

This system identifies the local Python version, enforces binary wheels, and bundles dependencies into a deployable artifact that is mounted at runtime on Runpod’s serverless fleet.

This mounting strategy significantly reduces “cold starts”—the delay between a request and the execution of code—by avoiding the overhead of pulling and initializing massive container images for every deployment.

Advertisement

Furthermore, the technology infrastructure supporting Flash is built on a proprietary Software Defined Networking (SDN) and Content Delivery Network (CDN) stack.

Smith told VentureBeat that the hardest problems in GPU infrastructure are often not the GPUs themselves, but the networking and storage components that link them together.

“Everyone is talking about agentic AI, but the way I personally see it — and the way the leadership team at RunPod sees it — is that there needs to be a really good substrate and glue for these agents, whatever they might be powered by, to be able to work with,” Smith said.

Flash leverages this low-latency substrate to handle service discovery and routing, enabling cross-endpoint function calls. This allows developers to build “polyglot” pipelines where, for instance, a cheap CPU endpoint handles data preprocessing before routing the clean data to a high-end NVIDIA H100 or B200 GPU for inference.

Advertisement

Four distinct workload architectures supported

While the Flash beta focused on live-test endpoints, the GA release introduces a suite of features designed for production-grade reliability.

The primary interface is the new @Endpoint decorator, which consolidates configuration—such as GPU type, worker scaling, and dependencies—directly into the code. The GA release defines four distinct architectural patterns for serverless workloads:

  • Queue-based: Designed for asynchronous batch jobs where functions are decorated and run.

  • Load-balanced: Tailored for low-latency HTTP APIs where multiple routes share a pool of workers without queue overhead.

  • Custom Docker Images: A fallback for complex environments like vLLM or ComfyUI where a pre-built worker is already available.

  • Existing Endpoints: Using Flash as a Python client to interact with previously deployed Runpod resources via their unique IDs.

A critical addition for production environments is the NetworkVolume object, which provides first-class support for persistent storage across multiple datacenters.

Files mounted at /runpod-volume/ allow for model weights and large datasets to be cached once and reused, further mitigating the impact of cold starts during scaling events.

Advertisement

Additionally, Runpod has introduced environment variable management that is excluded from the configuration hash, meaning developers can rotate API keys or toggle feature flags without triggering an entire endpoint rebuild.

To address the rise of AI-assisted development, Runpod has released specific skill packages for coding agents like Claude Code, Cursor, and Cline.

These packages provide agents with deep context regarding the Flash SDK, effectively reducing syntax hallucinations and allowing agents to write functional deployment code autonomously.

This move positions Flash not just as a tool for humans, but as the “substrate and glue” for the next generation of AI agents.

Advertisement

Why open source RunPod Flash?

Runpod has released the Flash SDK under the MIT License, one of the most permissive open-source licenses available.

This choice is a deliberate strategic move to maximize market share and developer adoption. In contrast to more restrictive licenses like the GPL (General Public License), which can impose “copyleft” requirements—potentially forcing companies to open-source their own proprietary code if it links to the library—the MIT license allows for unrestricted commercial use, modification, and distribution.

Smith explained this philosophy as a “motivating construct” for the company: “I prefer to win based on product quality and product innovation rather than legal ease and lawyers,” he told VentureBeat.

By adopting a permissive license, Runpod lowers the barrier for enterprise adoption, as legal teams do not have to navigate the complexities of restrictive open-source compliance.

Advertisement

Furthermore, it invites the community to fork and improve the tool, which Runpod can then integrate back into the official release, fostering a collaborative ecosystem that accelerates the development of the platform.

Timing is everything: RunPod’s growth and market positioning

The launch of Flash GA comes at a time of explosive growth for Runpod, which has surpassed $120 million in Annual Recurring Revenue (ARR) and serves a developer base of over 750,000 since it was founded in 2022.

The company’s growth is driven by two distinct segments: the “P90” enterprises—large-scale operations like Anthropic, OpenAI, and Perplexity—and the “sub-P90” independent researchers and students who represent the vast majority of the user base.

The platform’s agility was recently demonstrated during the release of DeepSeek V4 in preview last week. Within minutes of the model’s debut, developers were utilizing Runpod infrastructure to deploy and test the new architecture.

Advertisement

This “real-time” capability is a direct result of Runpod’s specialized focus on AI developers, offering over 30 GPU SKUs and billing by the millisecond to ensure that every dollar of spend results in maximum throughput.

Runpod’s position as the “most cited AI cloud on GitHub” suggests that it has successfully captured the developer mindshare required to sustain its momentum.

With Flash GA, the company is attempting to transition from being a provider of raw compute to becoming the essential orchestration layer for the AI-first cloud.

As development shifts toward “intent-based” coding—where the outcome is prioritized over the execution details—tools that bridge the gap between local ideas and global scale will likely define the next era of computing.

Advertisement

Source link

Continue Reading

Tech

Why critical infrastructure needs critical cybersecurity

Published

on

UL’s Dr Muzaffar Rao discusses the professional diploma in OT security programme, and what motivates his research in OT and ICS cybersecurity.

For Dr Muzaffar Rao, University of Limerick (UL) has been a research base for a number of years.

When Rao first joined UL in 2013, he was a PhD student conducting research on reconfigurable hardware for security, specifically field programmable gate array (FPGA)‑based cryptographic systems.

After his PhD, Rao began working at the university as a postdoctoral researcher with the Centre for Robotics and Intelligent Systems, a role that Rao says allowed him to further develop “expertise in hardware‑based cryptographic systems”.

Advertisement

Fast-forward to the current day, and Rao is now an associate professor in the Department of Electronic and Computer Engineering at UL, as well as an associate investigator with Lero, the Research Ireland Centre for Software.

Rao is also the course director of the professional diploma in operational technology (OT) security programme – a specialised Level 9 programme that Rao says is a “unique offering in Ireland”, as it’s dedicated specifically to OT and industrial control systems (ICS) security.

The primary objective of the programme, according to Rao, is to equip professionals with the practical knowledge and specialised skills required to “securely integrate IT and OT systems while effectively managing associated cyber risks”.

“Developed in close collaboration with industry partners, the course focuses on real-world operational challenges, OT-specific threats, relevant legal and regulatory frameworks, and risk mitigation strategies,” he explains.

Advertisement

“A strong emphasis is placed on bridging workforce skills gaps to ensure graduates can protect and secure complex operational environments.”

Rao tells SiliconRepublic.com that recently, the course was provided with the Airbus CyberRange, a simulation and training platform that provides “immersive, hands-on learning through realistic, scenario-based exercises that reflect real-world critical infrastructure and smart manufacturing systems”.

Securing OT and ICS

While his duties have expanded to new duties such as teaching and curriculum development, his cybersecurity research continues to be a major part of his post at UL.

Rao’s current research focuses on strengthening the security and resilience of OT and ICS, particularly in critical infrastructure environments that rely on legacy systems.

Advertisement

“These systems,” he tells us, “are often difficult or impossible to patch, replace or take offline, which makes conventional security approaches impractical.”

He says a “central strand” of his work involves developing lightweight cryptographic mechanisms specifically tailored for ageing industrial hardware with limited processing power, constrained bandwidth and long operational life cycles – with the goal of introducing strong security controls without disrupting industrial operations.

He also researches early‑warning and intrusion‑detection frameworks for “advanced, including nation‑state-level, threats in OT and ICS environments”.

“This includes addressing situations where monitoring is minimal or absent, with particular attention to unmonitored industrial sensors and peripheral devices that create blind spots attackers can exploit.”

Advertisement

But why is this research important?

Rao explains that much of Ireland’s critical infrastructure – including energy, water, healthcare and manufacturing – still depends on “ageing operational technology that cannot be easily upgraded or taken offline”.

“These constraints create significant security gaps and make essential services especially vulnerable to sophisticated cyberthreats, including those from nation‑state actors targeting industrial systems across Europe,” says Rao.

“By developing lightweight cryptographic solutions suitable for legacy devices, improving early‑warning intrusion detection and securing the increasingly interconnected IT/OT environment, this research directly addresses these risks.

Advertisement

“It enhances system visibility, limits lateral movement by attackers, and strengthens Ireland’s ability to prevent and respond to cyber‑physical attacks. Ultimately, this work contributes to national resilience, the continuity of essential services and public safety at a time when cyberattacks are becoming more frequent, targeted and complex.”

Misconceptions and motivation

Rao says he was drawn to this specific area of research because it lies at “the intersection of fields that have consistently shaped my academic path”.

In fact, he says his PhD research on FPGA‑based cryptographic designs naturally exposed him to the “unique and under‑addressed security challenges” of OT and ICS.

“These environments depend heavily on legacy hardware that underpins critical infrastructure yet lacks the protections expected in modern IT systems.”

Advertisement

One misconception about his research that Rao often encounters is the belief that improving security in OT and ICS environments is “simply a matter of applying traditional IT security controls or waiting for outdated systems to be replaced”.

“In reality, critical infrastructure rarely has the option of downtime, frequent patching or uniform visibility, and many industrial systems were never designed with security in mind,” he explains.

He adds that there’s also a belief that effective security requires heavy monitoring, expensive hardware or “intrusive changes that risk disrupting operations”. Rao says his research directly challenges this assumption by “demonstrating that strong security and early intrusion detection can be achieved using lightweight, domain-aware techniques that respect operational constraints”.

“These methods address blind spots such as unmonitored sensors and can detect sophisticated attacks well before they escalate into physical or safety incidents, without disrupting essential services.”

Advertisement

With a number of years spent in this research area, one has to wonder what keeps bringing Rao back to the OT and ICS domain.

As Rao explains to us, he continues to find motivation in “the combination of intellectual challenge and real‑world impact”.

“Unlike conventional IT systems, OT environments cannot simply be patched, replaced or taken offline, even as they face increasingly sophisticated nation‑state threats and growing IT/OT convergence,” he says.

“Developing lightweight cryptography, early‑stage intrusion detection and secure architectures under strict resource and operational constraints is both technically demanding and societally important.

Advertisement

“The opportunity to produce research that has practical relevance and contributes directly to the resilience of essential services is what keeps this work compelling for me.”

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Source link

Advertisement
Continue Reading

Tech

This device keeps your data safe by doing something your computer was never designed to handle in the first place

Published

on


  • The Aegis Padlock DT FIPS processes PINs on the device, not on the connected computer
  • This drive functions where software-based encryption cannot, including embedded systems
  • Epoxy coating and locked firmware prevent physical tampering and BadUSB attacks on the Padlock DT FIPS

Most companies assume that encrypting their sensitive data is enough, but encryption only matters if the keys and authentication methods stay out of attackers’ reach.

Software-based encryption tools leave those secrets exposed on the host computer, where keyloggers, screen scrapers, and remote access trojans can capture them with ease.

Source link

Advertisement
Continue Reading

Tech

No, Sony won’t check your PlayStation game licenses every 30 days

Published

on

Sony has shut down claims that PlayStation games would soon require monthly online license checks, with the company confirming that it is not introducing any such system.

The concern started last week after screenshots circulated on X suggesting a “Valid Period” tied to digital purchases. That sparked worry among players and preservation groups, as they feared games could become unplayable if a console stayed offline for more than 30 days.

Sony has now clarified to Game File that this isn’t the case. Once a digital game is purchased, it receives a perpetual license after a single online verification. After that initial check, there are no ongoing requirements to reconnect or revalidate the license.

Advertisement

“Players can continue to access and play their purchased games as usual,” a Sony representative said. “A one-time online check is required after purchase to confirm the game’s license, after which no further check-ins are needed.”

Advertisement

PS5PS5

That statement directly contradicts the interpretation many users had after testing suggested that even setting a PS4 or PS5 as a “primary” console didn’t appear to override the supposed 30-day limit. This helped fuel the belief that Sony was quietly rolling out stricter DRM rules for digital ownership.

Sony hasn’t explained why the “Valid Period” language appeared in the first place. However, one theory links it to its 14-day digital refund window, where temporary validation could help prevent abuse. The company hasn’t confirmed this.

The episode has also revived familiar concerns around game preservation and ownership, especially in a market that is increasingly digital-first. It also inevitably brings back memories of Microsoft’s original Xbox One plans in 2013. Those plans required daily online DRM checks before they were reversed after widespread backlash.

Advertisement

For now, Sony is making one thing clear: buying a digital game on PlayStation still means permanent access, with no recurring online verification needed after the initial purchase check.

Advertisement

Source link

Advertisement
Continue Reading

Tech

What is the release date for Marshals: A Yellowstone Story episode 10 on CBS and Paramount+?

Published

on

When it comes to Marshals: A Yellowstone Story, the stakes just keep getting higher. Last week in episode 9, the Marshals strategized a risky assault on a paramilitary compound when one of their own was taken prisoner.

Kayce (Luke Grimes) led the team straight into the thick of it, risking everything to save their teammate. Frankly, he never looks better than when he’s playing the hero. But when does Marshals: A Yellowstone Story episode 10 arrive on CBS and Paramount+?

Source link

Advertisement
Continue Reading

Tech

Trashing Your Old Tech Hurts the Environment and Your Wallet. Some Still Do It Anyways

Published

on

What do you do with your old tablet or smartphone when you get a new one? CNET recently asked 2,638 adults how they get rid of their old devices, and the results are disappointing. Fewer than half (39%) recycle old tech, while 29% just stash them at home. What’s even more alarming is that 22% of US adults throw their devices in the trash. That route pollutes the environment, can be a fire hazard and is illegal in some states. 

So what should you be doing with that old iPad or TV? Your plan may depend on the device and its condition, but there are retailers that can safely recycle it for you, and some even offer cash or store credit for its trade-in value. You just have to know where to start. Here’s the data and a list of places to keep in mind as you tackle tech spring cleaning or upgrade your personal devices. 

Advertisement

♻️ Fewer than half (39%) of US adults recycle tech they no longer use. Some US adults keep old devices at home (29%), while 10% don’t know what to do with them.
 
♻️ 22% of US adults still throw old tech in the trash, which is illegal in some states.

♻️ National retailers, including Best Buy and Staples, offer recycling programs to safely dispose of your unwanted appliances and gadgets.

Advertisement

Only 24% of US adults trade in their old devices 

So what are most of us doing with the devices we no longer use? CNET found that typical plans vary. You may consider factors such as the device, its condition and your personal preferences. 

cnet survey old electronics

Cole Kan/CNET/Getty Images

Fewer than half (39%) of US adults recycle their old devices, with boomers making up nearly half (48%) of that group. On the other hand, 33% of US adults give away their old tech, while 29% stash these devices at home. 

Only US adults look at old tech as a way to make some cash by trading it in with a retailer (24%) or selling their gadgets online (18%). 

There are less desirable ways to dispose of your tech. It’s not a good idea to throw away old tech, but 22% of US adults say they do. CNET’s latest findings also show that nearly three in 10 (29%) hoard tech at home, with Gen Z making up 40% of this group.  

Advertisement

Watch this: Make Money for Recycling Old Tech and Let the Broken iPhone Go

Selling, donating or recycling your e-waste is better than polluting the environment with toxins and chemicals found in smartphones and tablets. Tossing one in the trash may seem like the most convenient way to get rid of it, but this may be illegal in your state. 

E-waste laws have been enacted in 25 states, according to the Electronics Recycling Coordination Clearinghouse. For example, South Carolina bans disposing of tech in solid-waste landfills. Computer monitors, TVs and printers must be recycled.

Advertisement
gettyimages-1371954835

Best Buy and Staples are two of several retailers that accept old personal devices. 

Witthaya Prasongsin/Moment/Getty Images

Where to recycle or trade in your old tech 

Here’s a list of retailers where you can recycle or trade in your old smartphones, laptops and other personal tech. When narrowing down where to drop off your old gadget, see what recycling and trade-in options are available through your tech manufacturer, such as Apple and HP. Your local recycling services and national services, including The Battery Network (formerly Call2Recycle),  Earth911 and Greener Gadgets, also have tech-recycling programs to safely get rid of your tech based on your ZIP code. 

Amazon Recycling Program

Amazon’s Recycling Program lets you trade in eligible devices to save on a new Amazon tech gadget. If your device doesn’t qualify, you can drop off your old tech at a participating store, such as Staples. Or you can mail it in with a free shipping label.

Apple 

Apple has a special Earth Day offer that lasts until May 16. You can trade in an eligible Apple device, such as an iPad, Mac, iPhone or Apple Watch, and get 10% off select accessories. Apple also has other trade-in offers for Apple and Android devices year-round that give you a credit as an Apple gift card for your used tech. Apple will recycle your device for free if the device is ineligible for a credit. 

Advertisement

Best Buy 

Best Buy lets you recycle up to three accepted items per household per day for free. It also offers a haul-away service to get rid of your old tech as a standalone service. Best Buy can remove and recycle up to two large products and unlimited select small products for $200. There are restrictions, such as not being able to haul away fitness equipment. You can also order a mail-in box from Best Buy and fill it to the weight limit with accepted electronics and ship it at a UPS Store using a prepaid shipping label.

GreenDrop

GreenDrop accepts various tech items on behalf of its nonprofits. However, large appliances, cabinet TVs, monitors and medical equipment are not accepted. Call your local GreenDrop about your specific device before dropping it off. Donations are tax-deductible.

Smartphone Recycling

Smartphone Recycling is a bulk recycling and trade-in program that lets you recycle smartphones and tablets. You can ship your old phone, computer and tablet using a FedEx shipping label. Smartphone Recycling may pay you up to $400 for your old devices, including locked and damaged ones. 

Staples

You can earn Staples’ Easy Rewards by recycling tech devices online and in-store. Points can be redeemed as savings on purchases. Staples also offers mail-in recycling kits to ship your tech starting at about $14, and you can receive electronic gift cards when you trade in an eligible device in stores only. There are a few restricted items, and Staples charges a fee for recycling monitors.

Advertisement

Target 

Target has a trade-in program that lets you trade your old tech in for a Target eGiftCard based on the value of your device. The gift card can be used at Target stores, Target.com, Target Tech kiosks, Target Optical and merchants within the Target store. 

Eligible trade-in items include hearables, mobile phones, MP3 players, tablets, smart speakers, video-game consoles and games, and wearables. The program is only available online.

What to do before you toss your old tech

Before you recycle, sell or give away your old device, there are a few steps you should take. 

First, make sure you back up any important data, such as files and photos, using cloud storage or an external hard drive. If you downloaded any software, make sure you make note of any license keys. Then restore your device to its original state by doing a factory reset. This wipes clean any personal information, software and files by restoring the phone to its original condition.

Advertisement

If you plan to donate or recycle your device, check for any special instructions to safely dispose of your e-waste. Some tablets, phones and laptops use lithium-ion batteries that can pose a significant fire hazard if damaged or not disposed of properly. The EPA also has a directory listing hazardous rechargeable batteries and where to dispose of them by ZIP code. 

For other ways to get rid of unwanted tech, check out the video below for charities that accept unwanted electronics and what to know before selling your used tech for a fair price

Methodology 

CNET commissioned YouGov PLC to conduct the survey. All figures, unless otherwise stated, are from YouGov PLC. The total sample size was 2,638 adults. Fieldwork was undertaken April 10-14, 2026, and the survey was conducted online. The figures have been weighted and are representative of all US adults, ages 18 or older. 

Advertisement

Source link

Continue Reading

Tech

Migrant Deaths Hit Record High Under Trump 2.0

Published

on

from the concentration-camp-shit-going-on-here dept

Not that ICE was ever that great about taking care of all the people it detains. It certainly wasn’t during Trump’s first term. The DHS Inspector General released a report that said there were numerous problems in a single detention facility. Not only that but what was contained in the report was incomplete because the inspectors were both unwilling and unable to dig deep into the issues. ICE officers and officials were far from compliant and inspectors made it worse by questioning detainees about conditions in public areas often containing… you guessed it: ICE officers.

They’re certainly not any better now. Detentions are way up and this iteration of immigration enforcement officials cares even less about the rights and well-being of detained migrants than those employed during Trump 1.0. Not for nothing, but there’s a very obvious reason DHS is doing everything it can to prevent congressional members from inspecting detention centers. We know what it is. Congressional reps know what it is. And for damn sure the people keeping them out of detention centers know what it is.

If the ignition point is the indiscriminate ejection of non-white people from the United States, overseen by ghoulish MAGA acolytes with white Christian nationalist leanings, and carried out by roving bands of masked kidnappers.

The inevitable outcome of everything listed above is this:

Advertisement

The number of immigrants who have died while in Immigration and Customs Enforcement custody has reached an all-time high this fiscal year.

Twenty-nine people have died in ICE custody since October, the start of the federal government’s fiscal year, already surpassing 2004’s toll of 28, the previous record, according to government data.

The latest death in custody has been, of course, conveniently blamed on the victim.

The most recent death was  of 27-year-old Aled Damien Carbonell-Betancourt, a Cuban man held in ICE custody in Miami, Florida. According to an initial report released by ICE on the evening of April 16, Carbonell-Betancourt was found unresponsive in his cell on the morning of April 12. The report lists the cause of death as a “presumed suicide,” but the official cause remains under investigation.

Since it appears the government will be investigating itself, we can safely assume “presumed” will be removed from the cause of death as soon as the DHS makes the cause official.

And, of course, ICE (via its acting director) said this was exactly what we should expect from it:

Advertisement

During a congressional hearing also on Thursday, acting ICE Director Todd Lyons said there are a high number of deaths this fiscal year “because we do have the highest amount in detention that ICE has ever had since its inception in 2003.” 

Not a great excuse. While it’s obviously true that increases in one thing might lead to increases in related things, it’s not guaranteed. And it’s not a great look to tell Congress of course more people are dying. More people are being detained.

You’re supposed to keep the numbers down on the death side, no matter how many people you decide absolutely can’t be allowed to go un-detained for the (allegedly) engaging in civil violations. And while (now former) acting director Lyons goes on to say “We don’t want anyone to die in custody,” I kind of don’t believe him?

He also said this:

“I hope that’s a policy of anyone that has to be tasked with detaining someone.”

You hope? You set the policies. You enforce them. You’re not allowed to hope.

Advertisement

More deaths are happening where most migrants are being sent: Texas. Texas is in the Fifth Circuit, which has been incredibly receptive of every new awfulness this administration engages in. Consequently, as many migrants as possible are sent there as soon as possible, no matter where they’re initially detained. Those deaths include one that has been ruled a homicide: the killing of Geraldo Luna Campos, who the DHS initially claimed had been placed in segregation after he allegedly became “disruptive” while waiting in line for medication. That narrative has since been replaced with something far closer to the truth.

[T]he El Paso Medical Examiner’s Office ruled his death a homicide due to “asphyxia due to neck and torso compression.” The FBI is now investigating the death.

This won’t be the last homicide. The DHS only has the most minimal interest in protecting and caring for the thousands of people federal officers have detained. ICE is completely unwilling to police itself. And the administration overseeing all of this could not care less about the people they’ve decided are unworthy of residing in this country. And the fiscal year isn’t even over yet. There are still five months to go. A ghastly record is going to be set by this administration. Hopefully, it will never be broken.

Filed Under: cbp, cruelty, dhs, ice, todd lyons, trump administration

Advertisement

Source link

Continue Reading

Tech

AI Cyberattacks Meet Memory-Safe Code Defenses

Published

on

Transforming a newly discovered software vulnerability into a cyberattack used to take months. Today—as the recent headlines over Anthropic’s Project Glasswing have shown—generative AI can do the job in minutes, often for less than a dollar of cloud computing time.

But while large language models present a real cyber-threat, they also provide an opportunity to reinforce cyberdefenses. Anthropic reports its Claude Mythos preview model has already helped defenders preemptively discover over a thousand zero-day vulnerabilities, including flaws in every major operating system and web browser, with Anthropic coordinating disclosure and its efforts to patch the revealed flaws.

It is not yet clear whether AI-driven bug finding will ultimately favor attackers or defenders. But to understand how defenders can increase their odds, and perhaps hold the advantage, it helps to look at an earlier wave of automated vulnerability discovery.

In the early 2010s, a new category of software appeared that could attack programs with millions of random, malformed inputs—a proverbial monkey at a typewriter, tapping on the keys until it finds a vulnerability. When such “fuzzers” like American Fuzzy Lop (AFL) hit the scene, they found critical flaws in every major browser and operating system.

Advertisement

The security community’s response was instructive. Rather than panic, organizations industrialized the defense. For instance, Google built a system called OSS-Fuzz that runs fuzzers continuously, around the clock, on thousands of software projects. So software providers could catch bugs before they shipped, not after attackers found them. The expectation is that AI-driven vulnerability discovery will follow the same arc. Organizations will integrate the tools into standard development practice, run them continuously, and establish a new baseline for security.

But the analogy has a limit. Fuzzing requires significant technical expertise to set up and operate. It was a tool for specialists. An LLM, meanwhile, finds vulnerabilities with just a prompt—resulting in a troubling asymmetry. Attackers no longer need to be technically sophisticated to exploit code, while robust defenses still require engineers to read, evaluate, and act on what the AI models surface. The human cost of finding and exploiting bugs may approach zero, but fixing them won’t.

Is AI Better at Finding Bugs Than Fixing Them?

In the opening to his book Engineering Security, Peter Gutmann observed that “a great many of today’s security technologies are ‘secure’ only because no-one has ever bothered to look at them.” That observation was made before AI made looking for bugs dramatically cheaper. Most present-day code—including the open source infrastructure that commercial software depends on—is maintained by small teams, part-time contributors, or individual volunteers with no dedicated security resources. A bug in any open source project can have significant downstream impact, too.

In 2021, a critical vulnerability in Log4j—a logging library maintained by a handful of volunteers—exposed hundreds of millions of devices. Log4j’s widespread use meant that a vulnerability in a single volunteer-maintained library became one of the most widespread software vulnerabilities ever recorded. The popular code library is just one example of the broader problem of critical software dependencies that have never been seriously audited. For better or worse, AI-driven vulnerability discovery will likely perform a lot of auditing, at low cost and at scale.

Advertisement

An attacker targeting an under-resourced project requires little manual effort. AI tools can scan an unaudited codebase, identify critical vulnerabilities, and assist in building a working exploit with minimal human expertise.

Research on LLM-assisted exploit generation has shown that capable models can autonomously and rapidly exploit cyber weaknesses, compressing the time between disclosure of the bug and working exploit of that bug from weeks down to mere hours. Generative AI-based attacks launched from cloud servers operate staggeringly cheaply as well. In August 2025, researchers at NYU’s Tandon School of Engineering demonstrated that an LLM-based system could autonomously complete the major phases of a ransomware campaign for some $0.70 per run, with no human intervention.

And the attacker’s job ends there. The defender’s job, on the other hand, is only getting underway. While an AI tool can find vulnerabilities and potentially assist with bug triaging, a dedicated security engineer still has to review any potential patches, evaluate the AI’s analysis of the root cause, and understand the bug well enough to approve and deploy a fully-functional fix without breaking anything. For a small team maintaining a widely-depended-upon library in their spare time, that remediation burden may be difficult to manage even if the discovery cost drops to zero.

Why AI Guardrails and Automated Patching Aren’t the Answer

The natural policy response to the problem is to go after AI at the source: holding AI companies responsible for spotting misuse, putting guardrails in their products, and pulling the plug on anyone using LLMs to mount cyberattacks. There is evidence that pre-emptive defenses like this have some effect. Anthropic has published data showing that automated misuse detection can derail some cyberattacks. However, blocking a few bad actors does not make for a satisfying and comprehensive solution.

Advertisement

At a root level, there are two reasons why policy does not solve the whole problem.

The first is technical. LLMs judge whether a request is malicious by reading the request itself. But a sufficiently creative prompt can frame any harmful action as a legitimate one. Security researchers know this as the problem of the persuasive prompt injection. Consider, for example, the difference between “Attack website A to steal users’ credit card info” and “I am a security researcher and would like secure website A. Run a simulation there to see if it’s possible to steal users’ credit card info.” No one’s yet discovered how to root out the source of subtle cyberattacks, like in the latter example, with 100 percent accuracy.

The second reason is jurisdictional. Any regulation confined to US-based providers (or that of any other single country or region) still leaves the problem largely unsolved worldwide. Strong, open-source LLMs are already available anywhere the internet reaches. A policy aimed at handful of American technology companies is not a comprehensive defense.

Another tempting fix is to automate the defensive side entirely—let AI autonomously identify, patch, and deploy fixes without waiting for an overworked volunteer maintainer to review them.

Advertisement

Tools like GitHub Copilot Autofix generate patches for flagged vulnerabilities directly with proposed code changes. Several open-source security initiatives are also experimenting with autonomous AI maintainers for under-resourced projects. It is becoming much easier to have the same AI system find bugs, generate a patch, and update the code with no human intervention.

But LLM-generated patches can be unreliable in ways that are difficult to detect. For example, even if they pass muster with popular code-testing software suites, they may still introduce subtle logic errors. LLM-generated code, even from the most powerful generative AI models out there, are still subject to a range of cyber vulnerabilities, too. A coding agent with write access to a repository and no human in the loop is, in so many words, an easy target. Misleading bug reports, malicious instructions hidden in project files, or untrusted code pulled in from outside the project can turn an automated AI codebase maintainer into a cyber-vulnerability generator.

Guardrails and automated patching are useful tools, but they share a common limitation. Both are ad hoc and incomplete. Neither addresses the deeper question of whether the software was built securely from the start. The more lasting solution is to prevent vulnerabilities from being introduced at all. No matter how deeply an AI system can inspect a project, it cannot find flaws that don’t exist.

Memory-Safe Code Creates More Robust Defenses

The most accessible starting point is the adoption of memory-safe languages. Simply by changing the programming language their coders use, organizations can have a large positive impact on their security.

Advertisement

Both Google and Microsoft have found that roughly 70 percent of serious security flaws come down to the ways in which software manages memory. Languages like C and C++ leave every memory decision to the developer. And when something slips, even briefly, attackers can exploit that gap to run their own code, siphon data, or bring systems down. Languages like Rust go further; they make the most dangerous class of memory errors structurally impossible, not just harder to make.

Memory-safe languages address the problem at the source, but legacy codebases written in C and C++ will remain a reality for decades. Software sandboxing techniques complement memory-safe languages by addressing what they cannot—containing the blast radius of vulnerabilities that do exist. Tools like WebAssembly and RLBox already demonstrate this in practice in web browsers and cloud service providers like Fastly and Cloudflare. However, while sandboxes dramatically raise the bar for attackers, they are only as strong as their implementation. Moreover, Antropic reports that Claude Mythos has demonstrated that it can breach software sandboxes.

For the most security-critical components, where implementation complexity is highest and the cost of failure greatest, a stronger guarantee still is available.

Formal verification proves, mathematically, that certain bugs cannot exist. It treats code like a mathematical theorem. Instead of testing whether bugs appear, it proves that specific categories of flaw cannot exist under any conditions.

Advertisement

Cloudflare, AWS, and Google already use formal verification to protect their most sensitive infrastructure—cryptographic code, network protocols, and storage systems where failure isn’t an option. Tools like Flux now bring that same rigor to everyday production Rust code, without requiring a dedicated team of specialists. That matters when your attacker is a powerful generative-AI system that can rapidly scan millions of lines of code for weaknesses. Formally verified code doesn’t just put up some fences and firewalls—it provably has no weaknesses to find.

The defenses described above are asymmetric. Code written in memory-safe languages—separated by strong sandboxing boundaries and selectively formally verified—presents a smaller and much more constrained target. When applied correctly, these techniques can prevent LLM-powered exploitation, regardless of how capable an attacker’s bug-scanning tools become.

Generative AI can support this more foundational shift by accelerating the translation of legacy code into safer languages like Rust, and making formal verification more practical at every stage. Which helps engineers write specifications, generate proofs, and keep those proofs current as code evolves.

For organizations, the lasting solution is not just better scanning but stronger foundations: memory-safe languages where possible, sandboxing where not, and formal verification where the cost of being wrong is highest. For researchers, the bottleneck is making those foundations practical—and using generative AI to accelerate the migration. But instead of automated, ad hoc vulnerability patching, generative AI in this mode of defense can help translate legacy code to memory-safe alternatives. It also assists in verification proofs and lowers the expertise barrier to a safer and less vulnerable codebase.

Advertisement

The latest wave of smarter AI bug scanners can still be useful for cyberdefense—not just as another overhyped AI threat. But AI bug scanners treat the symptom, not the cause. The lasting solution is software that doesn’t produce vulnerabilities in the first place.

From Your Site Articles

Related Articles Around the Web

Source link

Advertisement
Continue Reading

Tech

Historic Apple Porsche colors return on Porsche 963 at Laguna

Published

on

More than four decades after an Apple-branded Porsche first hit the track, Porsche Penske Motorsport revives the rainbow livery on its 963 prototypes for a one-off run at Laguna Seca.

The livery revives the rainbow-striped look of a 1980 Porsche 935, marking the 75th anniversary of Porsche Motorsport and the 50th anniversary of Apple. It will appear on May 3 at WeatherTech Raceway Laguna Seca.

Porsche based the look on a Dick Barbour Racing Porsche 935 K3 that carried Apple branding during the 1980 season, including an entry at the 24 Hours of Le Mans. Both factory-entered 963 cars will wear it for the fourth round of the IMSA WeatherTech SportsCar Championship, limiting the tribute to a single race.

Oliver Schusser, Vice President Apple Music, Sports and Beats, said the collaboration continues a relationship that began in 1980, when a Porsche race car first carried its logo. The companies are using Laguna Seca to reconnect with today’s motorsport program, but the change is limited to branding.

Advertisement

Porsche Penske Motorsport enters Laguna Seca leading the championship standings after early-season wins at the 24 Hours of Daytona and the 12 Hours of Sebring. Kevin Estre and Laurens Vanthoor will drive the No. 6 Porsche 963, while Julien Andlauer and Felipe Nasr will share the No. 7 car.

Top view of a sleek white Porsche race car with colorful rainbow stripes, aerodynamic bodywork, large rear wing, and black cockpit, photographed on a bright, clean backgroundThe livery revives the rainbow-striped look of a 1980 Porsche 935. Image credit: Porsche

Laguna Seca serves as a deliberate choice for the tribute because the track sits about 80 miles south of Apple Park in Cupertino. The circuit has also hosted multiple Rennsport Reunion events, which ties the collaboration to both companies’ history.

Both anniversaries land in 2026, with Apple marking 50 years since its 1976 founding and Porsche Motorsport reaching 75 years since 1951. Porsche uses that direct link to give the tribute more weight and to justify keeping the design to a single race.

The Laguna Seca round runs two hours and 40 minutes and serves as the fourth stop on the IMSA calendar. Porsche’s 963 program remains the focus on track regardless of the one-off livery.

Apple stays involved through partnerships and services tied to motorsports without expanding its role. Porsche uses the tribute to reinforce its heritage while its prototype program continues to run at the front of the championship.

Advertisement

Source link

Continue Reading

Trending

Copyright © 2025