
Tech

Kessler Syndrome Alert: Satellites’ 5.5-Day Countdown

Thousands of satellites are tightly packed into low Earth orbit, and the overcrowding is only growing.

Scientists have created a simple warning system called the CRASH Clock that answers a basic question: If satellites suddenly couldn’t steer around one another, how much time would elapse before there was a crash in orbit? Their current answer: 5.5 days.

The CRASH Clock metric was introduced in a paper first posted to the arXiv preprint server in December and currently under consideration for publication. The team’s research measures how quickly a catastrophic collision could occur if satellite operators lost the ability to maneuver—whether because of a solar storm, a software fault, or some other widespread failure.
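
The paper’s full analysis rests on detailed orbit simulations, but the intuition behind a time-to-first-collision metric can be captured with textbook kinetic theory: treat the satellites in an orbital shell like gas molecules, compute a total collision rate from number density, cross-section, and relative speed, then invert it to get a mean waiting time. Below is a minimal Python sketch of that estimate; every input value is an assumed round number for illustration, not a figure from the paper.

```python
import math

# Kinetic-theory estimate of the mean time to first collision in a thin
# orbital shell, assuming no maneuvers. All inputs are illustrative.
N = 10_000                    # active objects in the shell (assumed)
r_shell = 6_371e3 + 550e3     # Earth radius + 550 km altitude (m)
dr = 50e3                     # shell thickness (m, assumed)
sigma = math.pi * 10.0**2     # cross-section for a ~10 m combined radius (assumed)
v_rel = 10_000.0              # typical relative speed of crossing orbits (m/s, assumed)

volume = 4 * math.pi * r_shell**2 * dr   # thin-shell volume (m^3)
n = N / volume                           # number density (1/m^3)

# Total pairwise collision rate; the factor 1/2 avoids double-counting pairs.
rate = 0.5 * N * n * sigma * v_rel       # expected collisions per second
t_first = 1.0 / rate                     # Poisson mean waiting time (s)
print(f"mean time to first collision: {t_first / 86400:.1f} days")
```

With these round numbers the estimate comes out at a few days, the same order of magnitude as the published 5.5-day figure; the actual paper propagates cataloged orbits rather than assuming a uniform gas.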

To be clear, say the CRASH Clock scientists, low Earth orbit is not about to become a new unstable realm of collisions. But what the researchers have shown, consistent with recent research and public outcry, is that low Earth orbit’s current stability demands perfect decisions on the part of a range of satellite operators around the globe every day. A few mistakes at the wrong time and place in orbit could set a lot of chaos in motion.

But the biggest hidden threat isn’t always debris large enough to be seen from the ground or tracked by radar. Rather, the thousands of small pieces of junk that are still big enough to disrupt a satellite’s operations are what satellite operators have nightmares about these days. Making matters worse, SpaceX has essentially locked up one of the most valuable altitudes with its Starlink megaconstellation, forcing Chinese competitors to fly higher, through clouds of collision debris left over from earlier accidents.

IEEE Spectrum spoke with astrophysicists Sarah Thiele (graduate student at Princeton University), Aaron Boley (professor of physics and astronomy at the University of British Columbia, in Vancouver, Canada), and Samantha Lawler (associate professor of astronomy at the University of Regina, in Saskatchewan, Canada) about their new paper, and about how close satellites actually are to one another, why you can’t see most space junk, and what happens to the power grid when everything in orbit fails at once.

Does the CRASH Clock measure Kessler syndrome, or something different?

Sarah Thiele: A lot of people are claiming we’re saying Kessler syndrome is days away, and that’s not what our work is saying. We’re not making any claim about this being a runaway collisional cascade. We only look at the timescale to the first collision—we don’t simulate secondary or tertiary collisions. The CRASH Clock reflects how reliant we are on errorless operations and is an indicator for stress on the orbital environment.

Aaron Boley: A lot of people’s mental vision of Kessler syndrome is this very rapid runaway, and in reality this is something that can take decades to truly build.

Thiele: Recent papers found that altitudes between 520 and 1,000 kilometers have already reached this potential runaway threshold. Even in that case, the timescales for how slowly this happens are very long. It’s more about whether you have a significant number of objects at a given altitude such that controlling the proliferation of debris becomes difficult.

Understanding the CRASH Clock’s Implications

What does the CRASH Clock approaching zero actually mean?

Thiele: The CRASH Clock assumes no maneuvers can happen—a worst-case scenario where some catastrophic event like a solar storm has occurred. A zero value would mean if you lose maneuvering capabilities, you’re likely to have a collision right away. It’s possible to reach saturation where any maneuver triggers another maneuver, and you have this endless swarm of maneuvers where dodging doesn’t mean anything anymore.

Boley: I think about the CRASH Clock as an evaluation of stress on orbit. As you approach zero, there’s very little tolerance for error. If you have an accidental explosion—whether a battery exploded or debris slammed into a satellite—the risk of knock-on effects is amplified. It doesn’t mean a runaway, but you can have consequences that are still operationally bad. It means much higher costs—both economic and environmental—because companies have to replace satellites more often. More launches, more satellites going up and coming down. The orbital congestion, the atmospheric pollution, all of that gets amplified.

Are working satellites becoming a bigger danger to each other than debris?

Boley: The biggest risk on orbit is the lethal non-trackable debris—this middle region where you can’t track it, it won’t cause an explosion, but it can disable the spacecraft if hit. This population is very large compared with what we actually track. We often talk about Kessler syndrome in terms of number density, but really what’s also important is the collisional area on orbit. As you increase the area through the number of active satellites, you increase the probability of interacting with smaller debris.

Samantha Lawler: Starlink just released a conjunction report—they’re doing one collision avoidance maneuver every two minutes on average in their megaconstellation.

The orbit at 550 km altitude, in particular, is densely packed with Starlink satellites. Is that right?

Lawler: The way Starlink has occupied 550 km and filled it to very high density means anybody who wants to use a higher-altitude orbit has to get through that really dense shell. China’s megaconstellations are all at higher altitudes, so they have to go through Starlink. A couple of weeks ago, there was a headline about a Starlink satellite almost hitting a Chinese rocket. These problems are happening now. Starlink recently announced they’re moving down to 350 km, shifting satellites to even lower orbits. Really, everybody has to go through them—including the ISS and its astronauts.

Thiele: 550 km has the highest density of active payloads. There are other orbits of concern around 800 km—the altitude of the [2007] Chinese anti-satellite missile test and the [2009] Cosmos-Iridium collision. Above 600 km, atmospheric drag takes a very long time to bring objects down. Below 600 km, drag acts as a natural cleaning mechanism. In that 800 km to 900 km band, there’s a lot of debris that’s going to be there for centuries.

Impact of Collisions at 550 Kilometers

What happens if there’s a collision at 550 km? Would that orbit become unusable?

Thiele: No, it would not become unusable—not a Gravity movie scenario. Any catastrophic collision is an acute injection of debris. You would still be able to use that altitude, but your operating conditions change. You’re going to do a lot more collision-avoidance maneuvers. Because it’s below 600 km, that debris will come down within a handful of years. But in the meantime, you’re dealing with a lot more danger, especially because that’s the altitude with the highest density of Starlink satellites.

Lawler: I don’t know how quickly Starlink can respond to new debris injections. It takes days or weeks for debris to be tracked, cataloged, and made public. I hope Starlink has access to faster services, because in the meantime that’s an awful lot of risk.

How do solar storms affect orbital safety?

Lawler: Solar storms make the atmosphere puff up—high-energy particles smashing into the atmosphere. Drag can change very quickly. During the May 2024 solar storm, orbital uncertainties were kilometers. With things traveling 7 kilometers per second, that’s terrifying. Everything is maneuvering at the same time, which adds uncertainty. You want to have margin for error, time to recover after an event that changes many orbits. We’ve come off solar maximum, but over the next couple of years it’s very likely we’ll have more really powerful solar storms.

Thiele: The risk for collision within the first few days of a solar storm is a lot higher than under normal operating conditions. Even if you can still communicate with your satellite, there’s so much uncertainty in your positions when everything is moving because of atmospheric drag. When you have high density of objects, it makes the likelihood of collision a lot more prominent.

[Chart: chance of collision versus days without maneuvering, divided into danger, caution, and safe zones, with a dashed line marking June 2025.] Canadian and American researchers simulated satellite orbits in low Earth orbit and generated a metric, the CRASH Clock, that measures the number of days before collisions start happening if collision-avoidance maneuvers stop. Credit: Sarah Thiele, Skye R. Heiland, et al.

Between the first and second drafts of the paper uploaded to the preprint server, your key CRASH Clock figure was revised from 2.8 days to 5.5 days. Can you explain the revision?

Thiele: We updated based on community feedback, which was excellent. The newer numbers are 164 days for 2018 and 5.5 days for 2025. The paper is submitted and will hopefully go through peer review.

Lawler: It’s been a very interesting process putting this on Arxiv and receiving community feedback. I feel like it’s been peer-reviewed almost—we got really good feedback from top-tier experts that improved the paper. Sarah put a note, “feedback welcome,” and we got very helpful feedback. Sometimes the internet works well. If you think 5.5 days is okay when 2.8 days was not, you missed the point of the paper.

Thiele: The paper is quite interdisciplinary. My hope was to bridge astrophysicists, industry operators, and policymakers—give people a structure to assess space safety. All these different stakeholders use space for different reasons, so work that has an interdisciplinary connection can get conversations started between these different domains.

Tech

This Washer Brand Ranks The Highest For Customer Satisfaction

As fun as it can be to shop for a washing machine, we’re assuming that nobody really wants to be in the market for a new one. After all, if you are, it is likelier than not that your old machine has spun its last cycle. In that scenario, you’re probably hustling to ensure you and your family have freshly laundered clothing.

If there’s a silver lining to the death of your washing machine, it’s that the major appliance manufacturers of the world have plenty of options available on the retail scene. While those machines are all decked out with different bells and whistles, choosing which brand to buy from will no doubt be one of the first and most important decisions you make. 

Given that fact, it may be wise to research the customer satisfaction numbers on washers bearing the logo of major brands like Whirlpool, Samsung, or Bosch. The American Customer Satisfaction Index (ACSI) can offer unique insight into how real-world owners feel about their appliances. When it comes to washing machines, it would seem that none of the aforementioned brands left their customers quite as satisfied as LG. The South Korean manufacturer bested its competitors with an impressive score of 84 out of 100 points in 2025. It is unclear, however, where the likes of Samsung and Whirlpool stand in the rankings, as the ACSI survey only shows the top-rated brand. 

LG scored well on the brand’s survey in other departments, too

You might be wondering how the American Customer Satisfaction Index gathered the information that landed LG in the top spot of the washing machine satisfaction category. The consumer ratings group utilizes numbers collected from customer surveys and interviews as drivers for a multi-equation econometric “cause-and-effect” model first developed at the University of Michigan. The questions are designed to measure satisfaction based on several factors, including customer expectations, perceived quality, perceived value, and customer loyalty, among others.

For the record, those methods also helped LG earn top honors in the dishwasher category, though its score of 82 placed it in a tie with Bosch. Those are the only appliance-specific categories in which LG products took top honors. The brand did, however, score well in ACSI’s 2025 brand satisfaction sector of the survey, placing second overall with a score of 81.

LG did place second in the overall brand satisfaction survey, but two brands sit ahead of it: Samsung and Whirlpool tied for the top spot with a score of 82. Interestingly, LG would’ve made that a three-way tie if its score had held over from 2024, when the brand earned an 82. Even with the one-point dip, it was still a strong showing. Bosch, Electrolux, and Haier rounded out the top five in the ACSI overall brand satisfaction survey, though it should be noted that, since Haier now owns GE and Hotpoint, appliances from those brands are included in Haier’s overall satisfaction rating.



Tech

Ransomware Defense, Faster Replication, vSphere 9, and Proxmox VE 9.0 Support


The new release adds automated replication, support for newer VMware vSphere and Proxmox versions, and modern authentication for faster, safer recovery.

Sparks, Nevada – April 3rd, 2026 – NAKIVO Inc., trusted by over 16,000 organizations in 191 countries, announced the general availability of NAKIVO Backup & Replication v11.2, focused on fast, reliable, and proactive data protection.

As ransomware attacks evolve and downtime costs rise, v11.2 provides IT teams with tools to quicken recovery, support next-generation infrastructure, and maintain secure data protection without added complexity.

Automated Real-Time Replication

At the core of v11.2 is an automated real-time replication engine. It keeps replica VMs synchronized with production workloads, allowing organizations to fail over to a recent replica within minutes after hardware failures, ransomware, or human error.

For businesses where every minute of downtime carries measurable financial or reputational consequences, this capability closes one of the most dangerous blind spots in traditional backup strategies: the window between the last scheduled job and the moment of failure.

Support for VMware vSphere 9 and Proxmox VE 9.0 and 9.1

Keeping your backup stack aligned with hypervisor versions is mission-critical for teams managing VMware, Proxmox, or hybrid environments. NAKIVO Backup & Replication v11.2 addresses that directly while also tightening security and laying the groundwork for faster disaster recovery.

Full VMware vSphere 9 Support

The most significant update for VMware administrators: v11.2 delivers complete, production-ready support for vSphere 9, including vCenter Server 9.0.1.0, ESXi 9.0.1.0, and VDDK 9.0.1.0.

Earlier builds introduced initial compatibility, but v11.2 provides full readiness, enabling teams to upgrade their VMware infrastructure with confidence that NAKIVO will operate without disrupting existing jobs.

All core capabilities are fully operational under vSphere 9:

  • Agentless image-based backup and replication using Changed Block Tracking (CBT) for efficient, low-impact incrementals (see the sketch after this list)
  • Instant VM recovery to restore workloads in minutes, not hours
  • Granular file-level and application-object recovery for Exchange and SQL workloads, without restoring the entire VM
  • Built-in DR orchestration with failover, failback, and non-disruptive testing via Site Recovery
  • Ransomware resilience through immutable backups, AES-256 encryption, air-gapped copies, and pre-recovery malware scanning
  • Fast, deduplicated, compressed backups to minimize storage footprint across repositories
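
CBT itself is maintained by the hypervisor, which records which disk blocks a VM has written since the last backup so nothing needs to be rescanned. As a rough illustration of why block-level incrementals are cheap, here is a hypothetical Python sketch that finds changed blocks by hashing; the names are invented for this example, and in a real CBT workflow a query to the hypervisor’s change map replaces the full rescan.

```python
import hashlib
from pathlib import Path

BLOCK_SIZE = 1 << 20  # 1 MiB blocks (assumed granularity)

def block_hashes(image: Path) -> list[str]:
    """Hash a disk image in fixed-size blocks (stand-in for a CBT change map)."""
    hashes = []
    with image.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(previous: list[str], current: list[str]) -> list[int]:
    """Indices of blocks that differ since the last backup; an incremental
    job copies only these instead of the whole image."""
    return [i for i, h in enumerate(current)
            if i >= len(previous) or h != previous[i]]
```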

For organizations tracking the licensing shift away from standalone vSphere Standard and Enterprise Plus editions toward VMware vSphere Foundation 9.0, this update ensures NAKIVO keeps pace with where VMware is heading.


Proxmox VE 9.0 Support, with 9.1 Already in Scope

NAKIVO’s Proxmox support continues to mature. v11.2 brings full compatibility with Proxmox VE 9.0, and support for Proxmox VE 9.1 is already built in, letting Proxmox environments upgrade without risking protection gaps.

For environments running Proxmox at the edge, in cost-sensitive production, or as a VMware alternative, the full feature set includes:

  • Agentless host-level backup and replication with no guest agents required, keeping VM overhead minimal
  • Block-level incrementals via native change tracking, matching the efficiency of CBT in VMware environments
  • Instant VM and file-level recovery for rapid restoration of individual machines or specific files
  • Automated verification with screenshot confirmation to validate recoverability without manual intervention
  • Immutable backups on S3-compatible and object storage targets, including AWS S3, Wasabi, Azure Blob, and Backblaze B2
  • AES-256 encryption at source, in transit, and at rest with air-gapped copy options via tape or detached storage

For hybrid environments running VMware and Proxmox side by side, NAKIVO’s unified management interface provides a single workflow that covers both platforms, which matters as infrastructure grows in complexity.

Ransomware Defense Across the Board

Ransomware protection in v11.2 is integrated into the architecture rather than isolated as a single feature. Immutability is supported across a wide range of targets, including AWS S3, Wasabi, Azure Blob, Backblaze B2, HPE StoreOnce, NEC HYDRAstor, and Dell EMC Data Domain. Pre-recovery malware scanning catches threats before they re-enter production. Air-gapped options — tape, detached USB, or offline NAS — provide a last line of defense when network-connected copies are compromised.
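
The press release doesn’t detail the mechanism, but on S3-compatible targets immutability is generally implemented with S3 Object Lock: an object stored with a retention date cannot be overwritten or deleted until that date passes, and in compliance mode not even the account owner can shorten the window. A minimal boto3 sketch under those assumptions follows; the bucket (which must be created with Object Lock enabled) and key names are placeholders, and this illustrates the general S3 feature rather than NAKIVO’s internals.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Store a backup object that cannot be altered or deleted for 30 days,
# even if an attacker later obtains these credentials.
with open("vm-042.nbk", "rb") as backup:
    s3.put_object(
        Bucket="backup-repo",              # placeholder; Object Lock must be enabled
        Key="jobs/vm-042/2026-04-03.nbk",  # placeholder object key
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```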

“Our priority is to give customers a smooth and secure path forward as their environments evolve,” said Bruce Talley, CEO of NAKIVO. “v11.2 focuses on compatibility, security, and consistent performance as virtualization platforms advance.”

Matt Mitchell, Web Developer at SEHD at the University of Colorado Denver, said: “With NAKIVO Backup & Replication, I can recover VMware VMs within 10 minutes. With data deduplication, we were able to decrease storage space by 80%.”

OAuth 2.0: Secure Email Notifications by Default

v11.2 introduces native OAuth 2.0 authentication for email notifications, replacing the deprecated basic authentication that major providers like Google Workspace and Microsoft 365 are actively phasing out.

The shift to token-based authentication removes stored plain-text credentials from the equation, delivering a meaningful compliance and security improvement, particularly for organizations under regulatory scrutiny.
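
For a sense of what the token-based flow looks like on the wire: SMTP providers that support OAuth 2.0 accept the SASL XOAUTH2 mechanism, where the client presents a base64-encoded bearer token instead of a password. A hedged Python sketch follows; obtaining the token via the provider’s OAuth flow is not shown, the endpoint is a placeholder, and this illustrates the general mechanism rather than NAKIVO’s implementation.

```python
import base64
import smtplib

def xoauth2_arg(user: str, access_token: str) -> str:
    # SASL XOAUTH2 initial response: user=<addr>\x01auth=Bearer <token>\x01\x01
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode()).decode()

def send_notification(user: str, token: str, rcpt: str, message: str) -> None:
    with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder endpoint
        smtp.starttls()   # encrypt the session before presenting the token
        smtp.ehlo()
        # Short-lived bearer token instead of a stored plain-text password
        smtp.docmd("AUTH", "XOAUTH2 " + xoauth2_arg(user, token))
        smtp.sendmail(user, [rcpt], message)
```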

HPE StoreOnce users gain full support for VSA Gen 5, improving deduplication appliance integration and repository performance. The platform has also been updated to Java SE 24 and the latest Spring Framework, delivering stability improvements, security patches, and incremental gains in backup and restore throughput — benefits that compound over time in high-frequency backup environments.

Enhanced MSP Direct Connect for Multi-Tenant Management

Managed service providers running multi-tenant environments gain efficiency through enhanced MSP Direct Connect. The updated interface provides single-pane visibility across multiple tenants, reducing overhead and accelerating response times.

For MSPs scaling their service portfolios, this improvement directly supports growth without a proportional increase in administrative burden.

The Bottom Line

NAKIVO Backup & Replication v11.2 is an operationally important release. It removes the compatibility friction that holds teams back from upgrading infrastructure, strengthens ransomware resilience, and tightens security in areas that are easy to overlook until they become a problem. For VMware administrators preparing for a vSphere 9 migration, Proxmox environments approaching a version upgrade, or any organization seeking to enhance recovery capabilities, v11.2 provides a robust foundation for operational stability.

Availability

NAKIVO Backup & Replication v11.2 is available now. Organizations can download the fully featured free trial at nakivo.com.


About NAKIVO

NAKIVO is a US-based corporation dedicated to delivering the ultimate backup, ransomware protection, and disaster recovery solution for virtual, physical, cloud, and SaaS environments. Over 16,000 customers in 191 countries trust NAKIVO with protecting their data, including global brands like Coca-Cola, Honda, Siemens, and Cisco.

Visit: www.nakivo.com

Sponsored and written by NAKIVO.


Tech

PSX Development With Unity And LUA

The Unity game development platform was first released in 2005, long after the PlayStation had ceased to be a relevant part of the console market. And yet you can use Unity to develop for the platform, if you so desire, thanks to the efforts of [Bandwidth] and the team behind psxsplash.

Yes, it really is possible to design games for the original PlayStation using Unity and Lua. Using a tool called SplashEdit, you can whip up scenes, handle scripting, create loading screens and UIs, and do all the other little bits required to lash a game together. You can then run your creation via the psxsplash engine, deploying to an emulator or even real hardware with a single click. Currently, development requires a Windows or Linux machine and Unity 6000.0+, but other than that, it’s pretty straightforward to start making games with a modern toolset for one of the most popular consoles of all time. Just remember, you’ve only got 33 MHz and 2 MB of RAM to play with.

We still love to see the legendary grey machine get used and hacked in new and inventive ways, so many decades after release.

Thanks to [Nick] for the tip!


Tech

AKG K1000 Returns at AXPONA 2026 as Apos Revives Iconic ‘Earspeaker’ Headphone

There are headphones that people remember, and then there are the ones that never really leave the conversation. The AKG K1000 belongs in that second group. Alongside the Sony MDR-R10 and HiFiMAN HE-6, it set a standard that still holds up under scrutiny. Years later, all three continue to command serious money on the used market, not because they are rare, but because very few modern designs have matched what they got right.

Most attempts to revisit that level of performance have missed the mark. HiFiMAN has reworked the HE-6 and chased the R10 formula with uneven results, and while the MySphere 3.2 clearly draws inspiration from the K1000, it plays in a different lane at roughly $4,000 to $6,000 depending on configuration and market. The K1000 has been left alone. Until now. At AXPONA 2026, Apos is stepping in with something that aims directly at one of the most iconic designs in personal audio. That alone makes this more than just another product launch.

[Image: Apos x Community K1K Earspeakers at AXPONA 2026]

AKG’s Fall and the K1000 That Refused to Follow the Rules

Sadly, AKG today is a shell of what it once was. After being absorbed into Harman and eventually folded deeper into Samsung’s ecosystem, the brand lost much of the identity that made it matter in the first place. Some of the talent behind those earlier designs walked away and formed Austrian Audio, which tells you everything you need to know about how that transition went. What followed was a slow dilution. Models were revised, repositioned, or quietly dropped, and the through line that defined AKG through the 1990s and early 2000s became harder to recognize.

Those who spent time with the AKG that gave us the K240, K612, K712, and K553 know exactly what that meant. There was a consistency to the design language and tuning. You could spot them across the room and you could usually tell what you were listening to within a few minutes.

The K1000 never fit into that mold. It arrived earlier, in the late 1980s, and looked like it came from a completely different company. It was not really a headphone. It was closer to a pair of miniature loudspeakers suspended next to your ears. The drivers sat inside rectangular frames, hinged to a headband, allowing you to adjust the angle and distance from your ears. Open air in every direction. No seal. No isolation. Just space.

That design is exactly why people still talk about it. For some, it looks completely impractical. For others, that open geometry is the entire point. It creates a presentation that feels less like something clamped to your head and more like sound existing around you. But there was a cost to doing it that way. If it behaved like a speaker, it demanded to be powered like one.

The K1000 had a real appetite for power, and when it was released, the kind of dedicated headphone amplifiers we have today did not exist. So owners got creative. Most ran them straight off speaker taps from integrated amplifiers and receivers just to get them to wake up. It was inconvenient, sometimes risky, and absolutely necessary.

Despite the unusual design and the need for serious power, the K1000 has not lost its grip on the market. If anything, it has tightened it. A quick scan shows listings pushing toward $2,000 and beyond, with active bids not far behind. That is not casual interest. That is sustained demand for something that has not been available for decades.

Part of that comes down to how few were made. AKG produced roughly 10,000 units in total, split between earlier and later runs that listeners still argue about. The earlier version is often associated with a fuller low end and tends to command the highest prices. The later production models are easier to find by comparison, but not by much, and they still sell for well above their original retail price when they surface.

That imbalance tells the story. There are far more people looking for a K1000 than there are owners willing to part with one. Supply is fixed. Interest is not. That gap has been sitting there for years, waiting for someone to take a serious swing at it.

Apos K1K Steps In with a Modern Take on an Unfinished Story

APOS Audio is taking a real swing at it. Their new K1K is not a clone of the original, but it clearly draws from the same playbook. Recreating something like this was never going to be straightforward. Much of the original knowledge is gone, and tracking down details from a model developed in Vienna decades ago meant reverse engineering existing units and speaking with people who have long since moved on. That kind of effort shows in the final direction. This is not a cosmetic tribute. It is an attempt to understand what made the original work and build from there.


My first time with the K1K came at AXPONA in the APOS Audio room. In the hand, it feels familiar but not dated. On the head, it leans into what made the original compelling. The presentation is open and speaker-like, with a sense of space that most headphones still struggle to replicate. More importantly, it captures the weight and impact that defined the earlier version without sounding thin or clinical. That balance is not easy to get right, but based on the sample at the show, APOS seems to have found it.

There are still details to come. Final specifications have not been released and pricing was not locked at press time, but APOS indicated that it will land close to the original MSRP. Adjusted for today, that puts it in a very different position relative to what the market is charging for used K1000 units. If they stick that landing, it could shift the conversation quickly. Production is expected to begin in late 2026 or early 2027, and based on what I heard, this is one to watch closely.

Keep an eye on the APOS Audio K1K page for updates as more details are released: https://apos.audio/products/apos-x-community-k1k-earspeakers


Tech

A comet gets destroyed by the sun, data centers endanger the Potomac River, and more science news

The Artemis II astronauts are settling back into life on Earth, but we’re not quite tired yet of hearing about their amazing journey. There’s a new PBS documentary now streaming on YouTube that dives into the Artemis program and the latest efforts to send humans to the moon again. Also this week, NASA shared some awesome images of a comet flying into the sun, the nonprofit American Rivers released its annual report on the most endangered rivers in the US and ESA posted a throwback image of Mars to highlight some interesting changes down on the surface. Here are the science stories that caught our attention this week.

A comet grazes too close to the sun

Earlier this month, a recently discovered comet made a close approach to the sun — but it couldn’t handle the heat. NASA has shared incredible images of the encounter that took place on April 4, showing the comet exploding into dust as it swings around our star. As NASA notes in a social media post, this was “its first and last observed flyby of the Sun.”

The comet, C/2026 A1 (also known as MAPS), was first spotted on January 13 of this year. As it neared the sun, it was observed by a slew of instruments: NASA and ESA’s SOHO (Solar and Heliospheric Observatory) spacecraft, NASA’s STEREO (Solar Terrestrial Relations Observatory) and NASA’s PUNCH (Polarimeter to Unify the Corona and Heliosphere). This allowed for views of its passage from multiple angles. Seen in a narrow-field coronagraph view captured by SOHO, the comet appears to plunge directly into the sun. But the wide view from NASA’s STEREO shows it actually swinging closely around the sun before breaking apart.

MAPS belonged to a family of comets aptly called Kreutz sungrazers, and according to Karl Battams, the principal investigator for SOHO’s coronagraph, its destruction likely occurred several hours before what would have been its closest approach.

Potomac named most endangered river in the US

The nonprofit conservation organization American Rivers has released its 2026 report on the most endangered rivers in the country, and data centers play a major role in the status of its top pick. According to American Rivers, the Potomac River is the most endangered in the US due both to the threat of sewage pollution from aging pipe systems and the “unprecedented surge in data center development” in its vicinity.

The Potomac River basin spans parts of Pennsylvania, Maryland, Virginia, West Virginia and Washington, DC. In January, the catastrophic failure of the Potomac Interceptor wastewater pipe in Montgomery County, Maryland dumped hundreds of millions of gallons of untreated sewage into the Potomac River and the Chesapeake and Ohio (C&O) Canal, causing bacteria levels to hit over 4,000 times the safe recreational limit at sites closest to the incident, according to the report. The Potomac Interceptor is over 60 years old, and it is just one of many pipes in the region at or past their 50-year service life, American Rivers notes.

On top of that, data center development in places like Virginia and Maryland has skyrocketed, which could put a strain on local water and energy sources. Data centers also have the potential to cause further pollution to the river.

“The region currently has over 300 data centers and is on track to have a total of about 1,000 centers occupying roughly 200 million square feet of buildings — enough to cover 3,472 football fields — on an estimated 20,000 acres of land,” the report explains. “These facilities pose a significant and growing threat to both water quality and water quantity, yet are being approved without meaningful transparency, regulatory review, and assessment of cumulative impacts.”

The organization is calling for Congress to reauthorize infrastructure funding bills so aging systems can be upgraded, and for regulators in these states to require transparency about data centers’ resource use, along with comprehensive environmental assessments before development plans are approved.

Mars ash: then vs now

[Image: a section of Mars’ Utopia Planitia, with tan sand on the left and dark, purplish ash covering the land on the right, creating a stark contrast. Credit: ESA/DLR/FU Berlin]

The European Space Agency this week shared a look at how a region on Mars has changed since it was observed by NASA’s Viking orbiters way back in 1976. New images captured by ESA’s Mars Express spacecraft show how dark volcanic ash has encroached upon a swath of land in an area known as the Utopia Planitia basin. If you visit the blog post, you’ll find a side by side comparison of images from the two time periods.

It’s a rare example of an observable change on the surface of the red planet that’s occurred over such a short period of time, ESA notes. The agency explains, “The spread of the ash over the last 50 years has two possible explanations: either it has been picked up and moved about by martian winds, or the ochre dust that previously covered the dark ash has been blown away.”



Tech

Rockstar On Latest Potential Hack & Information Leak: Meh, We Don’t Care

from the this-is-the-way dept

Several years ago, Rockstar Games suffered an intrusion into its corporate network. During that intrusion, a trove of data, files, and information about the in-development and unfinished Grand Theft Auto 6 game was exfiltrated. Under monetary threat of that data leaking, Rockstar completely lost its mind and went on a DMCA takedown campaign to try to remove any leaked content or footage that was being teased by the hacker in circulation. Readers here will already know that this kind of DMCA whac-a-mole never works and instead served only to Streisand the whole story into wider consciousness, working directly against Rockstar’s purposes in the first place.

Today, Rockstar is under threat of a similar leak. The company has acknowledged that hacking group ShinyHunters gained access to Rockstar information through a third-party data breach, namely that of Anodot, and has threatened to leak all that data if it isn’t paid by Rockstar.

ShinyHunters claims to have breached Rockstar’s outsourced Snowflake cloud storage system by way of a third-party analytics tool, Anodot, which reportedly suffered its own breach recently. With authentication tokens from Anodot, ShinyHunters would not have needed to crack Snowflake’s security directly⁠. They would have just been recognized as an authorized party and let in through the front door, like Agent 47 in a security guard outfit. ShinyHunters claims to have had access to Rockstar’s database for a significant amount of time before anyone realized anything was amiss.

“Your Snowflake instances were compromised thanks to Anodot.com. Pay or leak,” ShinyHunters wrote in a post on their site. “This is a final warning to reach out by 14 Apr 2026 before we leak along with several annoying (digital) problems that’ll come your way. Make the right decision, don’t be the next headline.”

Unlike the previous hack and threat of a leak, however, Rockstar appears to be taking a completely different tack. In addition to once again refusing to pay any ransom, which is absolutely the correct course of action, the company has also basically shrugged its shoulders over this entire situation.

Rockstar quickly responded to Kotaku saying that while “a limited amount of non-material company information was accessed,” the incursion would have “no impact on our organization or our players.”

There’s still no clear idea of what data has been taken, but Rockstar is certainly playing it very cool. ShinyHunters, should it go through with plans to publish the information, will likely post it to its dark web pages from which it’ll eventually filter to the wider public.

Now, I want to be careful to not give Rockstar any undue credit here. As discussed below, the type of data that was gained in this particular breach is far more banal than the previous one, which included actual unfinished game footage, and perhaps it’s that which explains this change in stance.

But I would argue that this is mostly the right course even if that weren’t the case. You can’t bottle up the genie once the leak is out there, so you might as well put your PR hat on and engage with the public in a way that puts the company and the product in the best light, while also acknowledging the thirst for more information on the unreleased game.

This is something we’ve advocated for for years now. It’s as simple as putting out a statement roughly like:

Hey, everyone! We know there might be a leak about our company and the upcoming Grand Theft Auto title coming out soon and we know you’re interested in anything you can get your hands on about the game. We are too! We want you to see the game, but we do prefer you see it in its finished state. But if you can’t wait that long, we understand. Please just also understand that we are something of a victim in all of this. It kind of hurts and is frustrating to have our plans for this release get derailed by this kind of criminal activity, but all we ultimately care about is making sure you know just how awesome the next GTA is going to be!

Good will would abound, the hackers wouldn’t get the payout they wished for, and the company could appear awesome and, more importantly, human. I very much hope that this response from Rockstar thus far is an indication that that’s where the company is headed with all of this.

In this case, ShinyHunters did eventually release the leaked info, and you can see why Rockstar didn’t care:

Looking at the structure of the data, it does appear to come from automated exports generated by analytics pipelines. The files are compressed CSV outputs, commonly used for batch reporting in cloud data platforms like Snowflake. This supports earlier reporting that the access point was not Rockstar’s core network but a third-party analytics integration, believed to involve Anodot.

Some of the files also reference internal monitoring and testing. For example, dataset names linked to cheat detection models and platform-level revenue mismatches suggest the data includes operational insights used by Rockstar teams to manage gameplay balance and detect abuse. There are also references to Zendesk ticket metrics and customer support reporting, indicating visibility into service operations rather than individual player accounts.

What is not present in the leaked material is just as important. There are no player credentials, account data, or unreleased game assets such as GTA VI content. That aligns with Rockstar’s earlier statement that the breach involved limited company information and did not impact players.

So perhaps Rockstar’s reaction is more explained by the lack of any really problematic content in the leak. But, still, it is a reminder that you don’t have to completely freak out over every leak.

Filed Under: breach, grand theft auto, hack, leak, shinyhunters

Companies: rockstar games


Tech

Daily Deal: Python Crash Course

from the good-deals-on-cool-stuff dept

The Python Crash Course is a guide on how to get started in Python, why you should learn it, and how you can learn it. The language’s syntax is clean, and programs tend to be short. In this comprehensive course, you will get in-depth knowledge of data types, loops, Python command lines, docstrings, and much more. It’s fun to work in Python because it lets you think about the problem rather than the syntax. If all this excites you, then join this Python coding course today! It’s on sale for $11.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Filed Under: daily deal


Tech

Game Jam Winner Spotlight: Lilac Song

Published

on

from the gaming-like-it’s-1930 dept

We’re past the halfway point in our series of spotlight posts looking at the winners of our eighth annual public domain game jam, Gaming Like It’s 1930! We’ve already covered the Best Adaptation, Best Deep Cut, and Best Visuals winners, and this week we’re looking at the winner of Best Remix: Lilac Song by Autumn Chen.

There were fewer interactive fiction submissions in this year’s jam than there often have been in past editions, but even if the field had been more crowded, Lilac Song would have undoubtedly stood out. It’s a somber, thoughtful story that casts the player as a servant to Prussian Minister-President Otto Braun during the last few years of the Weimar Republic. It revolves around an intriguing and fitting premise: the servant has been designing a simulation game about power and politics in Germany, from which she aims to draw insights that could preserve democracy and prevent the rise of Hitler and the Nazi Party.

The story is far more than a cursory look at these events: it’s clearly rooted in robust historical knowledge about this critical time and place, with myriad details about the specifics of the political situation as well as an additional exploration of gender politics and transgenderism in the era. But what’s especially notable for this jam is the way it weaves in a wide variety of artistic and musical works from 1930, which form the backdrop of its setting and the game itself. Amidst the story unfolding (and careening towards its inevitable ending), the player wanders the halls of Braun’s house and chooses paintings to admire and music to listen to. These works (by Paul Klee, Wassily Kandinsky, Felix Mendelssohn, and more) become the wallpaper and soundtrack of the game.

Though the story takes center stage, the careful selection and use of these public domain works lend verisimilitude to the story and polish to the game design, resulting in more immersion than the text alone could achieve. For employing a curated combination of newly-public-domain works that elevates the interactive fiction without overtaking it, Lilac Song is this year’s Best Remix.

Congratulations to Autumn Chen for the win! You can play Lilac Song in your browser on Itch. We’ll be back next week with another winner spotlight, and don’t forget to check out the many great entries that didn’t quite make the cut. And stay tuned for next year, when we’ll be back for Gaming Like It’s 1931!

Filed Under: copyright, game jam, games, gaming, gaming like it’s 1930, public domain, winner spotlight


Tech

Auto Enthusiast Scores Running Tesla Model 3 for Two Grand and Turns It Into Bare-Bones Go-Kart

Remmy Evans learned via a friend that a Tesla Model 3 was sitting in some guy’s driveway in Idaho. The owner had bought it cheaply with the intention of removing the drivetrain and installing it in an old car from the 1970s, but he abandoned the plan after realizing how much time the body work would take. Evans was able to negotiate a price of exactly $2k and walk away with a rolling chassis that was still capable of moving on its own.



The seats and steering wheel remained in place, as did the motors, battery pack, and center screen. Everything else had been stripped out. There were no body panels remaining, not even a windshield. The tires were so worn down that you could see the wires through the rubber in a couple of locations, and the car had been sitting unregistered for at least two years. Nonetheless, the electric motors started right away, and the readout displayed a full battery charge.

Evans fitted it with bright red wheels and new tires so that it could really grip the road rather than destroying the tires every time he turned the wheel. As a safety measure, he looped a heavy-duty ratchet strap across his lap, similar to a makeshift harness. He also disabled the car’s road-sensing safety measures, allowing it to run freely in track mode.

However, charging was a potential disaster. The first attempt at a fast charger was unsuccessful because the adapter just would not fit the port. Evans went to a hardware store, got a cutting tool, and fixed the problem by trimming the top of the adapter on the spot so it could slide in. Once connected, the battery charged to full capacity and the LCD displayed 212 miles of range remaining. Home charging took roughly 7 or 8 hours on a Level 2 unit, or closer to 14 hours on a standard wall plug.
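
The Level 2 figure lines up with simple pack arithmetic. A quick check, using assumed round numbers since the article doesn’t state the pack size or charger current:

```python
# Back-of-envelope charge-time check; every input is an assumption.
pack_kwh = 75       # typical Model 3 Long Range pack capacity (assumed)
charger_kw = 9.6    # 240 V x 40 A home Level 2 unit (assumed)
efficiency = 0.90   # AC charging losses (assumed)

hours = pack_kwh / (charger_kw * efficiency)
print(f"full charge in about {hours:.1f} hours")  # ~8.7 h, near the reported 7-8
```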

So, with the car all fixed up, Evans took it for a test drive, and unexpectedly, it passed without drawing any unwelcome notice from the cops. Later outings included donuts in a parking lot, burnouts, and a few open-road runs that exceeded 60 mph. Because there was no top, the wind blew straight through the car, but it handled perfectly. At a friend’s home, the stripped-down Tesla ripped over dirt berms, launched off a tabletop jump, and just kept going. One of his buddies rode along and said it felt like the three-wheeled roadsters some people enjoy, only faster.

Heavy drifting around the lot took a toll on the battery, which depleted far faster than it would on a regular drive. After a long afternoon of zooming around, the range had dropped to just 18 miles remaining, yet the car still made it home with a mile to spare. There was only one evident drawback: the onboard computer was logging an alarming 78 error codes because all of the cameras and sensors were missing.

Tech

As if the plate wasn’t already full, AI is about to worsen the global e-waste crisis

AI is already changing how the world works, but it’s also quietly making one of our biggest environmental problems even worse. And no, this isn’t about energy consumption this time. It’s about the hardware. Because every smarter AI model comes with a physical cost.

AI is about to supercharge the e-waste problem

According to a study published in Nature Computational Science (via Rest of World), the rapid rise of AI could add between 1.2 and 5 million metric tons of e-waste by 2030. The reason is pretty simple: AI relies on high-performance hardware like GPUs and specialized servers, and that gear doesn’t last very long. Most of it gets replaced every 2 to 5 years, which means older hardware is quickly discarded as newer, faster systems take over.
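
The study models server shipments, hardware masses, and refresh cycles; the skeleton of such a projection is easy to sketch even though the real inputs are debated. A toy Python version with placeholder numbers that are not the study’s figures:

```python
# Toy e-waste projection; every input is a placeholder assumption,
# not a figure from the Nature Computational Science study.
servers_per_year = 2_000_000   # AI servers deployed annually (assumed)
mass_kg = 60                   # average mass per server (assumed)

# With a 2-5 year refresh cycle, units deployed a few years ago are
# retiring now, so in steady state retirements track past deployments.
annual_waste_tonnes = servers_per_year * mass_kg / 1000
cumulative_tonnes = annual_waste_tonnes * 5   # rough five-year window
print(f"~{annual_waste_tonnes:,.0f} t/year, ~{cumulative_tonnes/1e6:.1f} Mt over five years")
```

Scaled across years of growing deployments and the full mix of hardware, an estimate of this shape is how the study arrives at its 1.2 to 5 million tonne range; the sketch only shows the structure of the calculation.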

And this is happening at scale. As companies race to build bigger data centers and train more powerful models, the demand for hardware keeps rising, along with the pile of obsolete machines left behind.

This isn’t just a tech problem but a global one

E-waste is already one of the fastest-growing waste streams in the world, with tens of millions of tonnes generated every year. And the worst part? A large chunk of it isn’t properly recycled. Improper handling can release toxic materials like lead and mercury into the environment, posing serious risks to both ecosystems and human health. And here’s the uncomfortable truth: most of this waste ends up in lower-income countries, where recycling often happens under unsafe conditions. That means that while the benefits of AI are global, the environmental cost is not equally shared.

At the end of the day, AI might feel like a purely digital revolution. But behind the scenes, it’s building a very real, very physical footprint. And if things don’t change, that footprint is only going to keep growing.

