
How Bluetooth LE Audio Enhances Listening Experience


This is a sponsored article brought to you by Audio Precision.

Bluetooth started as a simple wireless connection between a phone and a headset. Since its inception, it has become the invisible scaffolding for music, calls, gaming, and hearing assistance across consumer and professional devices alike. Bluetooth’s evolution to support more use cases has been driven not by a single breakthrough but by a steady accumulation of radio innovations, codecs, transport schemes, and power management strategies that together enhance the user experience with wireless audio. Today, a new architectural baseline—Bluetooth Low Energy (LE) Audio—promises low-power, high-quality, and scalable audio delivery, opening up the standard to an even wider range of applications [1][2].

Evolution of Bluetooth Radio Technologies

The original Basic Rate (BR) radio introduced with Bluetooth 1.0 in 1999 used Gaussian frequency-shift keying (GFSK) at 1 Msym/s, hopping across 79 channels in the 2.4 GHz band with alternating transmission directions in a tight time-division duplex rhythm. The short-range robustness and reliability this afforded helped wireless headsets perform on par with their cable-based counterparts.

In 2003, the Advanced Audio Distribution Profile (A2DP) arrived as the enabling standard for stereo audio streaming over Bluetooth Classic, marking the technology’s expansion beyond voice into music playback. A2DP uses the Audio/Video Distribution Transport Protocol (AVDTP) for stream management and mandates the Sub-Band Codec (SBC) as its baseline audio compression format. The SBC codec employs 4- or 8-band analysis/synthesis filter banks with adaptive bit allocation, spanning bitrates from 128 to 345 kbps for stereo content. Embedded DSP work showed how to optimize SBC implementation—Weighted Overlap Add (WOLA) filter banks, fixed-point pipelines, and real-time decoding that is audibly indistinguishable from floating point reference implementations while consuming fewer MIPS and milliwatts [3].
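
The A2DP specification pins SBC's bitrate to a closed-form frame-length formula, which makes the 128–345 kbps range above easy to reproduce. A minimal Python sketch for the joint-stereo case (other channel modes change the formula slightly):

```python
import math

def sbc_bitrate_kbps(sample_rate, nrof_blocks, nrof_subbands, bitpool):
    """SBC bitrate for the joint-stereo case, per the A2DP frame-length formula."""
    nrof_channels = 2
    frame_length_bytes = (
        4                                           # fixed frame header
        + (4 * nrof_subbands * nrof_channels) // 8  # scale factors, 4 bits each
        + math.ceil((nrof_subbands                  # join bits (joint stereo)
                     + nrof_blocks * bitpool) / 8)  # quantized audio samples
    )
    # Each frame carries nrof_blocks * nrof_subbands PCM samples per channel.
    samples_per_frame = nrof_subbands * nrof_blocks
    return 8 * frame_length_bytes * sample_rate / samples_per_frame / 1000

# The common "high quality" stereo setting: 44.1 kHz, 16 blocks, 8 subbands, bitpool 53
print(round(sbc_bitrate_kbps(44100, 16, 8, 53)))  # 328
```

The "high quality" setting lands at the well-known 328 kbps figure, near the top of the bitrate range quoted above; the "middle quality" bitpool of 35 gives roughly 229 kbps.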


In 2004, Bluetooth 2.0 introduced Enhanced Data Rate (EDR), which moved payloads to π/4-DQPSK or 8DPSK modulation to boost gross throughput to 2–3 Mb/s while retaining GFSK for packet headers. This innovation boosted stereo streaming quality and adoption over the following decade.

Around 2010, Bluetooth 4.0 introduced Bluetooth Low Energy (BLE) and its LE 1M PHY. The new radio continued to use GFSK but was tuned for low duty cycles and intermittent bursts. This fundamental difference led to “Bluetooth Classic” becoming common shorthand for BR/EDR (Basic Rate/Enhanced Data Rate) devices, distinguishing them from BLE.

Isochronous Transport Architecture

In late 2016, Bluetooth 5.0 introduced the LE 2M PHY, doubling the symbol rate to 2 Msym/s. Where link margin is healthy, halving a packet’s airtime reduces collision exposure and lowers the energy spent per delivered bit. By 2020, Bluetooth 5.2, the foundation of Bluetooth LE Audio, radically shifted the focus from continuous streaming to a transport designed explicitly around deadlines. LE (Low Energy) Audio leverages the existing LE 1M and LE 2M PHYs but carries audio over isochronous channels—slots with timing commitments. The isochronous channel architecture comes in two forms. Connected Isochronous Streams (CIS) are unicast flows whose parameters (intervals, subevents, retransmissions) can be tuned to meet frame deadlines with bounded jitter, enabling the radio to sleep predictably between bursts while the application knows precisely when a frame will arrive. A systematic review of BLE performance corroborates that real-world throughput and latency are bounded as much by connection interval, event length, and retransmissions as by the raw symbol rate; under the right parameters, faster PHYs reduce radio-on time and improve energy efficiency, while coded long-range modes trade airtime for robustness in harsher channels [1].
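
The airtime claim is easy to sanity-check from the LE packet format. A back-of-envelope sketch (overhead counts cover preamble, access address, PDU header, and CRC; encryption MIC and interframe spacing are ignored):

```python
def airtime_us(payload_bytes, phy="1M"):
    """Approximate on-air time of one LE data packet (no encryption MIC)."""
    if phy == "1M":
        overhead = 1 + 4 + 2 + 3   # preamble, access address, PDU header, CRC
        us_per_byte = 8.0          # 1 Msym/s, 1 bit per symbol
    elif phy == "2M":
        overhead = 2 + 4 + 2 + 3   # the 2M PHY uses a 2-byte preamble
        us_per_byte = 4.0          # 2 Msym/s halves the time per byte
    else:
        raise ValueError(f"unknown PHY: {phy}")
    return (overhead + payload_bytes) * us_per_byte

print(airtime_us(100, "1M"))  # 880.0
print(airtime_us(100, "2M"))  # 444.0
```

For a 100-byte payload the 2M PHY needs 444 µs versus 880 µs at 1M: slightly more than half, because the 2M preamble is longer and the fixed overhead bytes do not shrink.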

Broadcast Isochronous Streams (BIS)—commercially branded as Auracast—extend that scheduling to one-to-many transmissions, enabling connectionless audio delivery to unlimited receivers [2][7].


This architectural shift away from continuous streams requires careful selection of intervals, packetization, and codec framing, along with appropriate models to determine parameters that meet deadlines without wasting airtime. Markov chain analyses of CIS—validated via simulation—translate developer choices (intervals, subevents, retransmission counts) into quantitative predictions for packet loss rate (PLR), backlog, delay, throughput, and average power consumption [7].
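
The full Markov model in [7] is beyond a snippet, but a toy version of its core tradeoff fits in a few lines: assuming independent per-attempt loss, each extra retransmission cuts residual loss geometrically while adding only fractionally to average airtime:

```python
def cis_toy_model(p_loss, retransmissions):
    """Toy CIS model: i.i.d. per-attempt loss, fixed retransmission budget.

    Returns (residual packet-loss rate, expected attempts per frame).
    """
    attempts = retransmissions + 1
    residual_plr = p_loss ** attempts              # all attempts must fail
    expected_attempts = sum(p_loss ** k for k in range(attempts))
    return residual_plr, expected_attempts

# At 10% per-attempt loss, allowing two retransmissions takes residual PLR
# from 1e-1 down to ~1e-3 while costing only ~11% extra airtime on average.
plr, avg_attempts = cis_toy_model(p_loss=0.1, retransmissions=2)
```

Real CIS behavior also depends on flush timeouts and subevent placement within the ISO interval, which is exactly what the cited Markov analyses capture.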

The LC3 Codec Advantage

LE Audio’s Low Complexity Communication Codec (LC3) fundamentally shifts the bitrate-quality-complexity balance. Peer-reviewed listening tests across speech and music demonstrate that LC3 delivers superior perceived quality compared with SBC and mSBC at roughly half the bitrate; it also provides robust packet loss concealment and flexible frame sizes, including low-latency modes that make the encoding delay a smaller slice of the end‑to-end budget [2]. The benefits are practical: lower bitrate shrinks airtime, which reduces collision risk; shorter frames pair cleanly with CIS scheduling so deadlines are easier to meet; the codec’s computational footprint is modest enough for miniature devices [2].
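
The frame-size arithmetic behind that airtime saving is one line. The bitrates below are illustrative choices, not figures from the cited listening tests:

```python
def frame_bytes(bitrate_bps, frame_ms):
    """Payload bytes one codec frame puts on the air."""
    return bitrate_bps * frame_ms / 1000 / 8

# LC3 at 96 kb/s: 10 ms frames carry 120 bytes; the shorter 7.5 ms
# low-latency mode carries 90 bytes, pairing naturally with a 7.5 ms
# CIS interval so each frame meets its delivery deadline.
print(frame_bytes(96_000, 10))   # 120.0
print(frame_bytes(96_000, 7.5))  # 90.0
```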

Audio Precision provides high-performance audio analyzers, accessories, and applications that have helped engineers worldwide design, validate, characterize, and manufacture audio products for over 40 years.

Hearing Aids: Power-Constrained Wireless Audio

Modern hearing devices are a complex assembly of multiple microphones, digital signal processors, and miniature power sources. Except for Completely-in-Canal (CIC) and Invisible-in-Canal (IIC) designs, which are so small they fit entirely within the ear canal, most hearing aids incorporate two or more microphones to support directional processing, beamforming, and noise reduction. Audio output is provided by a single electro-acoustic transducer. The compact form factor severely limits battery capacity, making energy efficiency critical.

Compared to Bluetooth Classic (A2DP/HFP), LE Audio improves energy efficiency through three broad mechanisms: the LC3 codec achieves equivalent perceived audio quality at significantly lower bitrates than the SBC codec used in Bluetooth Classic; the LE 1M and 2M PHYs reduce on-air time per packet relative to BR/EDR; and Connected Isochronous Streams (CIS) enable precise scheduling, allowing the radio to sleep between transmissions, whereas BR/EDR audio requires longer active radio periods.


BLE‑compliant wake‑up receivers (WuRx) monitor the air with micro- to nanowatt power budgets and trigger the main radio upon detecting packet preambles. Reported designs demonstrate sensitivity to extremely weak radio signals (down to −80 dBm), with within‑bit duty cycling that trades latency (from hundreds of microseconds to seconds) for power [4]. Sleep scheduling techniques apply heuristics such as periodic check‑ins, event‑driven wake-ups, clustering, and time division to stretch battery lifetime while meeting QoS targets [5][6].
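
A toy model makes the latency-for-power trade concrete. The power figures below are placeholder orders of magnitude, not measurements from the cited designs:

```python
def wurx_tradeoff(period_s, on_time_s, p_on_w=50e-6, p_sleep_w=50e-9):
    """Average power and worst-case wake-up latency of a duty-cycled WuRx.

    The 50 uW on-power and 50 nW sleep-power are assumed magnitudes,
    not figures from reference [4].
    """
    duty = on_time_s / period_s
    avg_power_w = p_on_w * duty + p_sleep_w * (1 - duty)
    worst_latency_s = period_s   # a wake-up just missed waits one full period
    return avg_power_w, worst_latency_s

# Stretching the check-in period from 1 ms to 1 s cuts average power by
# roughly two orders of magnitude, at the cost of second-scale latency.
for period in (0.001, 0.1, 1.0):
    power_w, latency_s = wurx_tradeoff(period, on_time_s=100e-6)
    print(f"period {period:5} s: {power_w * 1e6:7.3f} uW, latency <= {latency_s} s")
```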

From True Wireless Stereo to Coordinated Sets

Bluetooth Classic’s A2DP supports only a single audio stream. In Bluetooth Classic’s True Wireless Stereo (TWS) devices, one earbud acts as the primary, receiving the stereo stream from the phone and relaying audio to the secondary earbud—a forwarding or relay architecture. The additional transmission hop adds latency to the secondary earbud, while increasing power consumption in the primary.

LE Audio eliminates this limitation entirely. Dual CIS capability lets the phone establish independent connections to the left and right earbuds or hearing aids and send a synchronized stream directly to each, enabling stereo delivery without relaying.

Discovery and pairing have evolved to match multi‑device use. The Coordinated Set Identification Service (CSIS) allows two earbuds—or two hearing aids—to be discovered and managed as a coordinated set rather than independently, with resolvable identifiers and set‑level locks. While peer‑reviewed empirical literature on CSIS is thin, timing and carrier synchronization theory is mature: clock‑offset estimation, jitter control, phase‑locked loops, buffer alignment, and recovery strategies hold binaural timing within the tolerances required for lip‑sync and spatial imaging [9].


Gaming Headsets: Low Latency With Bidirectional Stereo

Gaming represents a demanding stress test for wireless audio. Bluetooth Classic’s Headset Profile (HSP) and Hands-Free Profile (HFP) support bidirectional audio for voice communication but are fundamentally limited: they transmit only in mono with a maximum sampling rate of 16 kHz, restricting both spatial audio quality and voice fidelity.

LE Audio Unicast Voice transforms this scenario by supporting stereo audio with sampling rates up to 32 kHz, significantly improving spatial audio and speech quality for gaming while maintaining voice communication with other players. End‑to‑end latency often must stay under a few tens of milliseconds for responsive play and coherent spatial sound. LC3’s shorter frames and lower bitrates shrink codec delay; tuned CIS parameters preserve deadlines while limiting retransmissions to useful values; beamforming improves capture quality for bidirectional voice without ballooning computational cost [2][7].
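
A rough latency budget shows why short LC3 frames and tight CIS intervals matter for gaming. The components and values below are illustrative assumptions, not figures from the LE Audio specification:

```python
def e2e_latency_ms(frame_ms, cis_interval_ms, max_retx,
                   decode_ms=1.0, buffer_frames=1):
    """Rough end-to-end audio latency for a unicast LE Audio link.

    One codec frame of buffering before encode, worst-case transport of the
    CIS interval times the allowed (re)transmissions, decode time, and a
    receive-side jitter buffer. All component values are illustrative.
    """
    encode = frame_ms                             # audio buffered into one frame
    transport = cis_interval_ms * (1 + max_retx)  # worst case over the air
    jitter_buffer = buffer_frames * frame_ms
    return encode + transport + decode_ms + jitter_buffer

# 7.5 ms LC3 frames on a 7.5 ms CIS interval with one retransmission:
print(e2e_latency_ms(7.5, 7.5, 1))  # 31.0 -> inside a few-tens-of-ms budget
```

Doubling the frame duration or allowing more retransmissions quickly pushes the total past the budget, which is why these parameters are tuned together.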

Audio Precision’s new Bluetooth® 5 module provides an interface to audio devices using the latest version of the Bluetooth specification, including LE Audio devices utilizing Unicast and Auracast™.

Public Broadcast Audio: Auracast

Bluetooth Classic supports only one active audio connection and typically provides a range of approximately 10 meters, making it fundamentally unsuitable for broadcast scenarios such as lecture halls, churches, gyms, and airports.

LE Audio introduces the Broadcast Isochronous Stream (BIS), commercially branded as Auracast, enabling true one-to-many audio transmission. Multiple hearing aids, headphones, and earbuds can receive the same broadcast, which may be public (e.g., airport announcements) or private (encrypted, non-discoverable, optional password protection). Typical Auracast ranges extend up to 30 meters indoors and 100 meters outdoors, depending on environment and configuration.


BIS’s connectionless nature scales easily to unlimited receivers without pairing overhead; isochronous delivery tolerates packet loss well through forward error correction and interleaving; and the unidirectional transmission eliminates return traffic, reducing radio congestion. Assistive listening studies report that bypassing room acoustics and delivering audio directly can improve signal‑to‑noise ratios by 15–20 dB, making announcements comprehensible and lectures clearer [8].

Ensuring It Sounds Good In, On, or Over the Listener’s Ear

LE Audio delivers the music or voice signal more efficiently than its predecessor, Bluetooth Classic. Audio engineers still need to verify their devices’ audio performance as experienced by the end user.

The listener’s pinna (the external part of the ear) and ear canal are a critical part of the playback system. For example, the low-frequency response and the effectiveness of active noise cancellation are highly dependent on the seal between the device and the listener’s ear canal. Similarly, on-ear and over-ear headphones interact with the listener’s pinnas.

Anthropomorphic test fixtures—most notably GRAS KEMAR (Knowles Electronics Manikin for Acoustic Research) head and torso simulators—incorporate soft, deformable anthropomorphic pinnas that replicate realistic insertion and sealing conditions. These allow accurate replication of insertion depth, sealing, low-frequency response, and ANC performance [10][12].


Gaming headsets both receive and send audio. Just like music headphones, gaming headset testing benefits from fixtures with a human-like pinna to ensure repeatable measurement of ear-pad interaction. The headset’s microphone can be either a traditional boom microphone positioned close to the mouth or an array of microphones located farther away on the ear cups incorporating beamforming to isolate the wearer’s voice from any background noise. Test fixtures use an artificial mouth and a microphone positioned at the Mouth Reference Point (MRP) according to ITU-T standards to evaluate microphone performance under realistic speech and background noise conditions [10].

For testing of devices intended as broadcast receivers, an integrated test system with Auracast broadcast capability—like the Audio Precision Bluetooth 5 module—proves invaluable.

Conclusion

Bluetooth audio is no longer defined by a single radio or a single profile. It is defined by a timed pipeline—a codec that makes better sound with fewer bits, a transport that guarantees when those bits arrive, a radio that can sleep most of the time, and front‑end processing that gives the codec an easier job.

Hearing aids illustrate the payoff: arrays and beamformers improve intelligibility first; LC3 compresses with low delay; CIS schedules delivery; the radio sleeps; batteries last. Enhancements in other applications, such as gaming and public broadcast, further strengthen the case for adoption of this cutting-edge technology.


While Bluetooth audio began as a low-bandwidth, mono voice technology over Basic Rate (BR) radio in 1999, more than 25 years of evolution has produced a fundamental architectural shift. LE Audio replaces continuous point-to-point streams with scheduled, low-power, scalable audio delivery, enabling new classes of devices and use cases. The standards are ready, and audio test systems like Audio Precision’s Bluetooth 5 module are updated to incorporate the new transmission technology; the rest is execution—deploying LE Audio broadly so audio becomes instant, clear, and inclusive [2][7].

References

[1] Tosi, J., Taffoni, F., Santacatterina, M., Sannino, R., & Formica, D. (2017). Performance evaluation of Bluetooth Low Energy: A systematic review. Sensors, 17(12), Article 2898. https://doi.org/10.3390/s17122898

[2] Schnell, M., Riedl, M., Löllmann, H., & Multrus, M. (2021). LC3 and LC3plus: The new audio transmission standards for wireless communication. Proceedings of the AES 150th Convention, Online.

[3] Hermann, D., Herre, J., & Teichmann, R. (2004). Low-power implementation of the Bluetooth subband audio codec. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Montreal, QC, Canada.


[4] Abdelhamid, M. R., Chen, R., Cho, J., Chandrakasan, A. P., & Wentzloff, D. D. (2018). A −80 dBm BLE-compliant, FSK wake-up receiver with system and within-bit duty-cycling for scalable power and latency. Proceedings of the IEEE Custom Integrated Circuits Conference (CICC), San Diego, CA, USA.

[5] Mutar, M. S., Mohammed, A. H., & Abdulkareem, M. B. (2024). A survey of sleep scheduling techniques in wireless sensor networks for maximizing energy efficiency. AIP Conference Proceedings.

[6] Mikhaylov, K., & Karvonen, H. (2020). Wake-up radio enabled BLE wearables: Empirical and analytical evaluation of energy efficiency. Proceedings of the IEEE International Symposium on Medical Information and Communication Technology (ISMICT).

[7] Yan, Z., Xu, H., & Shen, Z. (2024). Modeling and analysis of the performance for CIS-based Bluetooth LE Audio [Preprint].


[8] Kaufmann, T. B., Weller, T., Stiefelhagen, R., & Adiloglu, K. (2023). Requirements for mass adoption of assistive listening technology by the general public. arXiv. https://arxiv.org/abs/2303.02523

[9] Nasir, A. A., Durrani, S., Mehrpouyan, H., Blostein, S. D., & Kennedy, R. A. (2015). Timing and carrier synchronization in wireless communication systems: A survey and classification of research in the last five years. arXiv. https://arxiv.org/abs/1507.02032

[10] Okorn, E., & Wulf-Andersen, P. (2019). Acoustic test fixtures: From KEMAR and beyond! The Journal of the Acoustical Society of America, 146(4), 2815. https://doi.org/10.1121/1.5136656

[11] An analytical model of Bluetooth performance considering physical and link-layer effects. (2021). IEEE Xplore.


[12] IEC/ITU acoustic standards literature for headphone and earbud testing. (n.d.). Indexed in The Journal of the Acoustical Society of America and AIP Conference Proceedings.

Disclosure: AI tools were used by Wiley, which produced this sponsored article, to skim through research literature for technical insights on the evolution and state of the art of Bluetooth technology. AI was also used to polish the text for conciseness and technical accuracy.



14-inch MacBook Pro M5 vs Asus Zenbook A16: $2,000 shootout


The Asus Zenbook A16 is a thin and light Windows notebook aiming to take the portability crown from Apple. Here’s how it compares against a similarly-priced MacBook Pro.

M5 14-inch MacBook Pro vs Asus Zenbook A16

For our spec-sheet brawl, we’re going to put the $1,999 Asus Zenbook A16 against the 14-inch MacBook Pro with M5. As much as we would like to compare the similarly sized 16-inch MacBook Pro, the upgrades to its base-spec version push it to $2,699, which is a bit too high.
To make it a little closer in price, we will spec the 14-inch MacBook Pro with an enhanced memory allowance of 24GB or 32GB.


3 underrated Amazon Prime Video movies you should watch this weekend (April 10-12)


This weekend’s watchlist covers three different genres of movies, so you can pick whatever you are in the mood for. We have a trio of hidden gems on Amazon Prime Video that deserve way more attention.

There is a gritty Michael Caine revenge thriller you should not miss and a micro-budget 1950s sci-fi mystery that thrives on atmosphere and dialogue. For horror fans, we have a psychological horror film, one that gets under your skin, about a hospice nurse whose faith tips into something far more dangerous.

We also have guides to the best new movies to stream, the best movies on Netflix, the best movies on Hulu, the best free movies, and the best movies on Amazon Prime Video.

Saint Maud (2019)


Saint Maud is not a horror film in the traditional sense, and going in expecting one will work against you. What it actually is: a deeply unsettling psychological portrait of a young hospice nurse named Maud, a recent Catholic convert who becomes dangerously fixated on saving her terminally ill patient’s soul in ways that grow increasingly disturbing.

Morfydd Clark’s performance is the engine of the whole thing, holding a fragile, frightening line between piety and paranoia throughout. I really like how the film gets under your skin without ever fully explaining itself. You finish it feeling like you witnessed something you were not supposed to see, and that feeling does not leave quickly.

You can watch Saint Maud on Amazon Prime Video

Harry Brown (2009)


If you have a soft spot for slow-burn British crime dramas, Harry Brown is the movie you need to watch this weekend. Michael Caine plays the title character, a widowed, retired Royal Marines veteran living on a decaying South London housing estate overrun by gang violence. When his only friend is murdered, Harry stops looking the other way.

What makes this film work so well is how it refuses to glamorize what follows. Harry is not an action hero. He is an old man with emphysema who stumbles during a chase and collapses on a canal path.

I really like how the film earns every moment of tension because it keeps Harry vulnerable and the world around him genuinely threatening. Caine is absolutely extraordinary here, and there are sequences in this film that will make you forget you are watching a 77-year-old man.

You can watch Harry Brown on Amazon Prime Video


The Vast of Night (2019)

Have you ever accidentally tuned into a late-night radio broadcast and found you could not bring yourself to switch it off? Well, The Vast of Night is exactly that kind of sci-fi movie.

Set over a single night in 1950s small-town New Mexico, the film follows Fay, a teenage switchboard operator, and Everett, a fast-talking local radio DJ, as they stumble onto a mysterious audio frequency that sends them down a strange and increasingly eerie rabbit hole.

There are no big set pieces or alien invasions. The tension is built almost entirely through dialogue, long unbroken camera takes, and an incredibly precise sound design that makes the night feel alive.


What I really love about this movie is how it makes stillness feel tense. A long phone call, a quiet street, a voice crackling through static, and somehow all of it keeps you completely locked in. For a movie made on a low budget, The Vast of Night makes an entertaining watch.

You can watch The Vast of Night on Amazon Prime Video


Alibaba leads $293m round in Chinese AI start-up after HappyHorse reveal


HappyHorse 1.0 shot up to the top ranks in the Artificial Analysis leaderboard.

Chinese technology giant Alibaba’s cloud division led a $293m funding round into ShengShu Technology, a 2023-founded Beijing-based start-up behind the Vidu AI video-generation tool.

Baidu Ventures and Luminous Ventures also participated in the round. The company’s post-money valuation has not been disclosed.

The latest investment comes after ShengShu raised nearly $88m in a Series A round in February.


Vidu is marketed towards independent creators and animators, promising “effortless” production of content with “diverse artistic styles”.

The start-up is focusing on building ‘world models’ built on multimodal data such as audio, video and “touch”. The latest funding, the company said, will help support the development of a “general world model”.

The company’s latest Vidu Q3 Pro, which launched in January, places at the seventh rank on the Artificial Analysis leaderboard on text-to-video models, while making it to the 10th spot on the image-to-video rankings.

Vidu competes with other Chinese AI heavyweights, including ByteDance’s Seedance 2.0 and lead investor Alibaba’s own video model HappyHorse 1.0 that shot up to the top rank on the Artificial Analysis leaderboard.


Meanwhile, models from companies such as Singapore’s Skywork AI and Beijing-based Kuaishou, behind KlingAI, also rank high on the boards. These models are hungry to fill the gap in the video generation space left by OpenAI after it shuttered Sora late last month. Top leaderboard rankings are increasingly being filled by Chinese models.

HappyHorse was anonymously launched earlier this week before Alibaba claimed ownership today (10 April). The model is a product of Alibaba’s new Token Hub (ATH) innovation unit, placing first on the text-to-video and image-to-video rankings without audio, and second with audio.

Bloomberg News reported that HappyHorse 1.0, which is currently in beta testing, will be followed by more new ATH products. Alibaba’s share price shot up following speculation that the company was behind the model.

Alibaba made the decision last month to bring its AI services and development work under a single roof called ATH, led by CEO Eddie Wu.




Analysis of one billion CISA KEV remediation records exposes limits of human-scale security



Author: Saeed Abbasi, Senior Manager, Threat Research Unit, Qualys

With Time-to-Exploit now at negative seven days and autonomous AI agents accelerating threats, the data no longer supports incremental improvement. The architecture of defense must change.

What Leaders Need to Know

Analysis of CISA’s Known Exploited Vulnerabilities over the past four years shows critical vulnerabilities still open at Day 7 worsened from 56% to 63% despite teams closing 6.5x more tickets. Staffing cannot solve this.


Of the 52 tracked weaponized vulnerabilities in our study, 88% were patched more slowly than they were exploited — half were weaponized before any patch existed.

The problem is not speed. It is the operational model itself.

Cumulative exposure, not CVE counts, is the true risk metric that security teams now need to measure. While dashboards reward the sprint to get patches implemented, breaches exploit the tail. AI is not another attack surface — instead, the transition period where AI-powered attackers face human defenders is the industry’s most dangerous window.

In response, defenders have to implement their own autonomous, closed-loop risk operations.


The Broken Physics

New research from the Qualys Threat Research Unit, analyzing more than one billion CISA KEV remediation records from across 10,000 organizations over four years, quantifies what the industry has long suspected but never proved at scale. The operational model underpinning enterprise security is broken.

Vulnerability volumes have grown 6.5 times since 2022. According to Google M-Trends 2026, the average Time-to-Exploit has collapsed to negative seven days; in other words, adversaries are weaponizing the most serious vulnerabilities before patches exist. The percentage of critical vulnerabilities still open at seven days has climbed from 56 percent to 63 percent.

Yet this is not for lack of effort. Organizations now close 400 million more vulnerability events annually than they did at baseline. Teams work harder, but it fails to make the difference where it counts. Our researchers call this the “human ceiling” — a structural limit no amount of staffing or process maturity can overcome. The constraint is not effort. It is the model itself.

Of 52 high-profile weaponized vulnerabilities tracked with complete exploitation timelines, 88 percent were remediated slower than they were exploited. As an example, Spring4Shell was exploited two days before disclosure, yet the average enterprise needed 266 days to remediate.


Similarly, the flaw in Cisco IOS XE was weaponized a month early; average close was 263 days.

The attacker’s advantage was measured in days. The defender’s response was measured in seasons. This is not an intelligence failure. It is an operationalization failure.


The Manual Tax and Risk Mass

The report identifies a “Manual Tax” — the multiplier effect where long-tail assets that human processes cannot reach drag exposure from weeks into months. For Spring4Shell, average remediation was 5.4 times the median.

The median tells a manageable story. The average tells the truth. Infrastructure systems face a harsher reality: for Cisco IOS XE, even the median was 232 days — compared to endpoint medians consistently under 14 days. When the best-case outcome is eight months, the Manual Tax is no longer a multiplier. It is the baseline.

Looking at average figures is no longer helpful for decision-making. Instead, looking at Risk Mass — vulnerable assets multiplied by days exposed — captures what CVE counts obscure around cumulative exposure. A companion metric, Average Window of Exposure (AWE), measures the full duration from weaponization to remediation across the environment.

As an example, Follina was weaponized 30 days before disclosure with an average close at Day 55.


However, the AWE stretched to 85 days. While the blind spot before disclosure accounted for 36 percent of those 85 days, the long tail of patching accounted for a further 44 percent. In total, pre-disclosure and long tail together represent 80 percent; the sprint that gets measured makes up less than 20 percent.
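
The report's two metrics are straightforward to compute. A sketch using the Follina figures above (fleet sizes are hypothetical; the simple ratio lands near the report's 36 percent):

```python
def risk_mass(days_exposed_per_asset):
    """Risk Mass: cumulative exposure as summed asset-days."""
    return sum(days_exposed_per_asset)

def awe_split(pre_disclosure_days, awe_days):
    """Fraction of the Average Window of Exposure spent pre-disclosure."""
    return pre_disclosure_days / awe_days

# Follina: weaponized 30 days before disclosure, AWE of 85 days.
print(round(awe_split(30, 85) * 100), "percent pre-disclosure blind spot")

# Two hypothetical fleets with identical Risk Mass but very different medians:
tight_fleet = [10] * 100             # every asset patched by day 10
long_tail   = [1] * 90 + [91] * 10   # most patched fast, a 10 percent tail lingers
print(risk_mass(tight_fleet), risk_mass(long_tail))  # both 1000 asset-days
```

The second comparison illustrates the report's point: median-based dashboards would rate the long-tail fleet as faster, while its cumulative exposure is identical.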

At the same time, of 48,172 vulnerabilities disclosed in 2025, only 357 were remotely exploitable and actively weaponized. Organizations are burning remediation cycles on theoretical exposure while genuinely exploitable gaps persist.

Why the Gap Will Widen

Cybersecurity has long operated as a derivative of technology shifts — Windows security followed Windows, cloud security followed cloud. Leading practitioners and investors now argue AI breaks that pattern. It is not merely a new surface to defend; it is a fundamental transformation of the adversary itself.

Offensive agents can already discover, weaponize, and execute faster than any human-staffed operation can respond. The remediation data proves humans cannot keep pace today. Autonomous AI ensures the gap will accelerate tomorrow.


The transition period — where AI-powered attackers face human-speed defenders — represents the industry’s most dangerous window, compounded by the structural vulnerabilities that dominate the near term: attack surfaces expanded beyond what teams can govern, identity sprawl that outpaces policy, and remediation workflows still built on manual execution.

The traditional scan-and-report model — discover, score, ticket, manually route — was built for lower volumes of CVEs and longer exploit timelines. What replaces it is an end-to-end Risk Operations Center: embedded intelligence arriving as machine-readable decision logic, active confirmation validating whether a vulnerability is actually exploitable in a specific environment, and autonomous action compressing response to the timescale the threat demands.

The objective is not to eliminate human judgment but to elevate it, shifting practitioners from tactical execution to governing the policies that direct their own autonomous systems.

The organizations already winning the physics gap are not winning with larger teams. They are winning because they have removed human latency from the critical path.



Time-to-Exploit will not return to positive numbers. Vulnerability volume will not plateau. The reactive model has hit a hard mathematical ceiling.


The only remaining question is whether organizations will use the architecture to match the mathematics — before the window between human-scale defense and autonomous-scale offense closes for good.

Contact Qualys for insights into how companies manage remediation at scale with automation and AI, and how you can make that difference right now.

Sponsored and written by Qualys.


Tech

5 Tech Items You Shouldn’t Try To Donate To Thrift Stores

We may receive a commission on purchases made from links.

You might feel like offloading electronics at a thrift store is an easy way to get rid of them while also letting others enjoy their use. To be fair, there are always some cool gadgets and electronics to look out for as a buyer, but there are some tech items that you shouldn’t even try donating to thrift stores. Because of different policies and simple safety concerns, certain pieces of tech will be rejected by thrift stores before they even leave your hands.

A great number of thrift stores keep a list of items that they will or won't accept. These lists aren't always uniform across different outlets, but a few pieces of tech are more likely to be refused than not. The ones that get turned down tend to be old or volatile for one reason or another, and stores obviously wouldn't want to sell things that are broken or even dangerous. In some cases, there might also be items that you just shouldn't want to give them anyway. Here are five different types of items that just aren't worth trying to donate to thrift stores.


Printers and fax machines

Fax machines are generally seen as relics that the latest generation will never learn to use, and they aren't exactly small compared to other electronics like phones or even laptops. Printers are a bit more universal, but their size still makes them difficult for many thrift stores to accept. Generally, small electronics have a much better chance of being taken off your hands. It's less a matter of function and more a matter of size and space.

Some thrift stores won't have this issue with printers, but you might still run into trouble depending on the type of printer you're giving away. Many donors have had difficulty offloading printers that use proprietary ink and toner cartridges. These are expensive, manufacturer-specific, and sometimes aren't even made anymore. Even if these older printers are cheap, with so many restrictions on what allows them to work in the first place, many thrift stores simply don't find it worthwhile to stock them at all.


Batteries, or items with batteries

It shouldn't be too surprising to hear that thrift stores aren't very willing to accept loose batteries. You should already be aware of their safety risks, especially if you've ever dealt with batteries leaking from improper storage and use. Besides, you probably don't have much reason to donate AA or AAA batteries in the first place, and once they're used up, they should be recycled properly, not given away.

As you might expect, this rule can apply to more than just the batteries themselves. Car batteries and devices with built-in batteries pose very similar risks. You might get away with donating the latter, but rechargeable batteries integrated into small electronics such as smartphones can swell over time. A swollen battery is a sign that it's just about ready to catch fire, and it should go without saying that no thrift store will be happy about that.


Older tech, including CRTs

You might think that a thrift store would happily accept an older television set. They've been making a comeback in recent years, and they don't seem very harmful on the surface. But older CRT televisions are almost universally refused. Some shoppers have found thrift stores carrying CRTs in certain areas, but you might have a tough time getting your local store to accept one.

Once again, the problem here is safety above all else. Goodwill of the Southern Alleghenies explains that it had to stop accepting CRTs because they "contain five to eight pounds of lead." There's also a high cost for the store to offload them: it's forced to pay fees and find landfills that will actually take them. Few places have the freedom or motivation to deal with these issues, and fewer still will want to take the safety risks involved in keeping CRTs in stock.


Computer monitors and other screens

The aforementioned Goodwill location refuses to take flat-screen TVs for similar reasons as CRTs: hazardous materials and risks to safety. But the rules aren’t universal for every location, even when it comes to different Goodwill stores. And this goes for other screens and displays, too, such as computer monitors. It’s really up in the air whether you’ll be able to find a thrift store near you that’ll accept them.

LCD monitors might be an example of tech that's still worth buying used, but they can face notable quality issues such as dead pixels. OLED monitors also carry the risk of burn-in, which makes them even less attractive to buyers. Thrift stores aren't likely to accept broken or damaged electronics, and depending on how a store defines that, monitors with those problems could be quickly turned away. At that point, it's a much better decision to take those screens to a recycling center, not a thrift store.


Unwiped storage devices

Donors have faced difficulties giving their digital storage devices to certain thrift stores, though some locations will still accept them without much fuss. The problem here is on your end: you can't be sure these stores will reliably wipe the drives themselves. If you simply give away your older storage devices carelessly, whoever buys them could end up picking through your personal information. Even deleting everything might not protect you unless you use dedicated wiping software or physically destroy the old drive entirely, at which point there's no chance a thrift store will accept it anyway.
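For a sense of what file-wiping software actually does, here is a minimal Python sketch that overwrites a single file before deleting it. This is illustrative only: on SSDs and USB flash drives, wear-leveling means file-level overwrites are no guarantee, which is why dedicated sanitization tools or physical destruction remain the safer options.

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # replace every byte with random data
            f.flush()
            os.fsync(f.fileno())  # force each pass out to the device
    os.remove(path)
```

A plain delete only removes the directory entry; overwriting first is what keeps the old bytes from being recoverable with off-the-shelf tools.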


On top of hard drives, USB flash sticks, and solid state drives themselves, you should be aware of any device that might have storage built-in. This applies most to computers and laptops, obviously, but smart TVs and game consoles can be problematic to donate if you still have them signed into your accounts. Many of the electronics thrift stores refuse are a risk to their safety, but make sure the items they accept aren’t a risk to your own.



NVIDIA’s DLSS 5 Demo Video Briefly Taken Down Because YouTube’s Take Down Process Sucks


from the the-italian-job dept

Last month, we discussed NVIDIA’s demo video for its forthcoming DLSS 5 technology and the controversy surrounding it. While I’m going to continue to be of the posture that an injection of nuance is desperately needed in the reaction to AI tools and the like, our comments section largely disagreed with me on that post. That’s cool, that’s what this place is for, and I still love you all.

But this post is not about DLSS 5. Rather, it’s about the video itself and how it was briefly taken down over automated copyright claims thanks to an Italian news channel. Please note that the source material here was written while the video was still down, but it has since been restored.

And now, here we are in April, and NVIDIA’s DLSS 5 announcement trailer is no longer available to watch on YouTube on the company’s official GeForce channel. And no, it’s not because NVIDIA is responding to the feedback and retooling the technology for a re-reveal or re-announcement; it’s now blocked on “copyright grounds.”

A clear mistake, but also one that highlights the limitations of Google's automated system for YouTube. Apparently, the Italian television channel La7 included footage from the DLSS 5 reveal in a recent broadcast, and a copyright claim was then registered on that footage. From there, essentially every video on YouTube containing DLSS 5 trailer footage was hit with a copyright strike and taken down with the following message: "Video unavailable: This video contains content from La7, who has blocked it in your country on copyright grounds."

Yes, this was clearly a mistake. But it's a mistake that I'm frankly tired of hearing about, all while Google does absolutely nothing to iterate on its copyright processes and systems to mitigate such mistakes. The examples of this very thing are so legion as to be laughable. Whether through error or malicious intent, videos that include content from other videos for the purposes of reporting and commentary get claimed by those downstream users, resulting in takedowns of the original source material. It happens all the damned time.


This is almost certainly all automated, which means there are no human eyes looking for an error in the flagging of a copyright violation. It just gets tagged as such and taken down. And, no, the irony is not lost on me that we need human eyes to keep an automated copyright takedown on a video about AI from occurring.

What makes this alarming is that the video was taken down with seemingly no human interaction or input, even though it's obvious that NVIDIA not only created DLSS 5, for better or worse, but also the trailer that has been a hot topic of discussion this year. We're assuming this will be resolved fairly quickly. Still, it will be interesting to see whether YouTube responds to this case, given how prevalent false copyright infringement notices like this are on the platform.

Google hasn’t been terribly interested in commenting on the plethora of cases like this in the past, so I strongly doubt it will now. Which is a damned shame, honestly, because the company really should be advocating for all of the users on its platform, if not especially those that are negatively impacted by this haphazard process.

But, for now, the video is back, so you can go hate-watch it again if you like.

Filed Under: copyright, dlss 5, geforce, takedowns, video games

Companies: la7, nvidia, youtube



Florida launches probe into OpenAI as company eyes massive IPO

In a video posted to X, Florida's attorney general said his office is examining whether OpenAI's data and artificial intelligence systems "could fall into the hands of America's enemies, such as the Chinese Communist Party."


ChatGPT rolls out new $100 Pro subscription to challenge Claude


OpenAI has rolled out a new Pro subscription that costs $100 per month, bringing its pricing in line with Anthropic's, which offers a $100 Claude subscription alongside its $200 Max monthly plan.

Until now, OpenAI has offered three subscription tiers: Go at roughly $8 per month, Plus at $20, and a top plan at $200, a jump of $180 from the middle tier.


Anthropic, on the other hand, does not offer an $8 subscription, but its $100 plan sits between the cheapest $20 tier and the $200 Max plan, and the structure works for the company because it caters to the coding audience.

OpenAI has realized that it needs to go after coders and enterprises, similar to Anthropic’s strategy.


The company's answer is ChatGPT Pro, a $100 plan designed for people who rely on AI to get high-stakes, complex work done.

After this change, OpenAI’s offering looks like the following:

  • Plus $20 – For lighter use. Try advanced capabilities like Codex and Deep Research for select projects throughout the week.
  • Pro $100 – Built for real projects. For those who use advanced tools and models throughout the week, with 5× higher limits than Plus (and 10× Codex usage vs. Plus for a limited time).
  • Pro $200 – For heavy lifting. Run your most demanding workflows continuously, even across parallel projects, with 20× higher limits than Plus.

All Pro plans include access to advanced features, including:

  • Pro models
  • Codex
  • Deep research
  • Image creation
  • Memory
  • File uploads

OpenAI says the Pro plan also includes unlimited access to GPT-5 and legacy models, though it isn't truly unlimited: the usual "Terms of Use" policies still apply, including restrictions on account sharing.



Mythos autonomously exploited vulnerabilities that survived 27 years of human review. Security teams need a new detection playbook


A 27-year-old bug sat inside OpenBSD’s TCP stack while auditors reviewed the code, fuzzers ran against it, and the operating system earned its reputation as one of the most security-hardened platforms on earth. Two packets could crash any server running it. Finding that bug cost a single Anthropic discovery campaign approximately $20,000. The specific model run that surfaced the flaw cost under $50.

Anthropic’s Claude Mythos Preview found it. Autonomously. No human guided the discovery after the initial prompt.

The capability jump is not incremental

On Firefox 147 exploit writing, Mythos succeeded 181 times versus 2 for Claude Opus 4.6. A 90x improvement in a single generation. SWE-bench Pro: 77.8% versus 53.4%. CyberGym vulnerability reproduction: 83.1% versus 66.6%. Mythos saturated Anthropic’s Cybench CTF at 100%, forcing the red team to shift to real-world zero-day discovery as the only meaningful evaluation left. Then it surfaced thousands of zero-day vulnerabilities across every major operating system and every major browser, many one to two decades old. Anthropic engineers with no formal security training asked Mythos to find remote code execution vulnerabilities overnight and woke up to a complete, working exploit by morning, according to Anthropic’s red team assessment.

Anthropic assembled Project Glasswing, a 12-partner defensive coalition including CrowdStrike, Cisco, Palo Alto Networks, Microsoft, AWS, Apple, and the Linux Foundation, backed by $100 million in usage credits and $4 million in open-source grants. Over 40 additional organizations that build or maintain critical software infrastructure also received access. The partners have been running Mythos against their own infrastructure for weeks. Anthropic committed to a public findings report “within 90 days,” landing in early July 2026.


Security directors got the announcement. They didn’t get the playbook.

“I’ve been in this industry for 27 years,” Cisco SVP and Chief Security and Trust Officer Anthony Grieco told VentureBeat in an exclusive interview at RSAC 2026. “I have never been more optimistic for what we can do to change security because of the velocity. It’s also a little bit terrifying because we’re moving so quickly. It’s also terrifying because our adversaries have this capability as well, and so frankly, we must move this quickly.”

Security directors saw this story told fifteen different ways this week, including VentureBeat’s exclusive interview with Anthropic’s Newton Cheng. As one widely shared X post summarizing the Mythos findings noted, the model cracked cryptography libraries, broke into a production virtual machine monitor, and gave engineers with zero security training working exploits by morning. What that coverage left unanswered: Where does the detection ceiling sit in the methods they already run, and what should they change before July?

Seven vulnerability classes that show where every detection method hits its ceiling

  1. OpenBSD TCP SACK, 27 years old. Two crafted packets crash any server. SAST, fuzzers, and auditors missed a logic flaw requiring semantic reasoning about how TCP options interact under adversarial conditions. Campaign cost ~$20,000. Anthropic notes the $50 per-run figure reflects hindsight.

  2. FFmpeg H.264 codec, 16 years old. Fuzzers exercised the vulnerable code path 5 million times without triggering the flaw, according to Anthropic. Mythos caught it by reasoning about code semantics. Campaign cost ~$10,000.

  3. FreeBSD NFS remote code execution, CVE-2026-4747, 17 years old. Unauthenticated root from the internet, per Anthropic’s assessment and independent reproduction. Mythos built a 20-gadget ROP chain split across multiple packets. Fully autonomous.

  4. Linux kernel local privilege escalation. Mythos chained two to four low-severity vulnerabilities into full local privilege escalation via race conditions and KASLR bypasses. CSA’s Rich Mogull noted Mythos failed at remote kernel exploitation but succeeded locally. No automated tool chains vulnerabilities today.

  5. Browser zero-days across every major browser. Thousands identified. Some required human-model collaboration. In one case, Mythos chained four vulnerabilities into a JIT heap spray, escaping both the renderer and the OS sandboxes. Firefox 147: 181 working exploits versus two for Opus 4.6.

  6. Cryptography library vulnerabilities (TLS, AES-GCM, SSH). Implementation flaws enabling certificate forgery or decryption of encrypted communications, per Anthropic’s red team blog and Help Net Security. A critical Botan library certificate bypass was disclosed the same day as the Glasswing announcement. Bugs in the code that implements the math. Not attacks on the math itself.

  7. Virtual machine monitor guest-to-host escape. Guest-to-host memory corruption in a production VMM, the technology keeping cloud workloads from seeing each other’s data. Cloud security architectures assume workload isolation holds. This finding breaks that assumption.

Nicholas Carlini, in Anthropic’s launch briefing: “I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.”

VentureBeat’s prescriptive matrix

| Vulnerability Class | Why Current Methods Miss It | What Mythos Does | Security Director Action |
| --- | --- | --- | --- |
| OS kernel logic (OpenBSD 27yr, Linux 2-4 chain) | SAST lacks semantic reasoning. Fuzzers miss logic flaws. Pen testers time-boxed. Bounties scope-exclude kernel. | Chains 2-4 low-severity findings into local priv-esc. ~$20K campaign. | Add AI-assisted kernel review to pen test RFPs. Expand bounty scope. Request Glasswing findings from OS vendors before July. Re-score clustered findings by chainability. |
| Media codec (FFmpeg 16yr H.264) | SAST unflagged. Fuzzers hit path 5M times, never triggered. | Reasons about semantics beyond brute-force. ~$10K campaign. | Inventory FFmpeg, libwebp, ImageMagick, libpng. Stop treating fuzz coverage as security proxy. Track Glasswing codec CVEs from July. |
| Network stack RCE (FreeBSD 17yr, CVE-2026-4747) | DAST limited at protocol depth. Pen tests skip NFS. | Full autonomous chain to unauthenticated root. 20-gadget ROP chain. | Patch CVE-2026-4747 now. Inventory NFS/SMB/RPC services. Add protocol fuzzing to 2026 cycle. |
| Multi-vuln chaining (2-4 sequenced, local) | No tool chains. Pen testers hours-limited. CVSS scores in isolation. | Autonomous local chaining via race conditions + KASLR bypass. | Require AI-assisted chaining in pen test methodology. Build chainability scoring. Budget AI red teams for 2026. |
| Browser zero-days (thousands, 181 Firefox exploits) | Bounties + continuous fuzzing missed thousands. Some required human-model collaboration. | 90x over Opus 4.6. Chained 4 vulns into JIT heap spray escaping renderer + OS sandbox. | Shorten patch SLA to 72hr critical. Pre-stage pipeline for July cycle. Pressure vendors for Glasswing timelines. |
| Crypto libraries (TLS, AES-GCM, SSH, Botan bypass) | SAST limited on crypto logic. Pen testers rarely audit crypto depth. Formal verification not standard. | Found cert forgery + decryption flaws in battle-tested libraries. | Audit all crypto library versions now. Track Glasswing crypto CVEs from July. Accelerate PQC migration. |
| VMM / hypervisor (guest-to-host memory corruption) | Cloud security assumes isolation. Few pen tests target hypervisor. Bounties rarely scope VMM. | Guest-to-host escape in production VMM. | Inventory hypervisor/VMM versions. Request Glasswing findings from cloud providers. Reassess multi-tenant isolation assumptions. |

Attackers are faster. Defenders are patching once a year.

The CrowdStrike 2026 Global Threat Report documents a 29-minute average eCrime breakout time, 65% faster than 2024, with an 89% year-over-year surge in AI-augmented attacks. CrowdStrike CTO Elia Zaitsev put the operational reality plainly in an exclusive interview with VentureBeat. “Adversaries leveraging agentic AI can perform those attacks at such a great speed that a traditional human process of look at alert, triage, investigate for 15 to 20 minutes, take an action an hour, a day, a week later, it’s insufficient,” Zaitsev said. A $20,000 Mythos discovery campaign that runs in hours replaces months of nation-state research effort.


CrowdStrike CEO George Kurtz reinforced that timeline pressure on LinkedIn the same day as the Glasswing announcement. “AI is creating the largest security demand driver since enterprises moved to the cloud,” Kurtz wrote. The regulatory clock compounds the operational one. The EU AI Act’s next enforcement phase takes effect August 2, 2026, imposing automated audit trails, cybersecurity requirements for every high-risk AI system, incident reporting obligations, and penalties up to 3% of global revenue. Security directors face a two-wave sequence: July’s Glasswing disclosure cycle, then August’s compliance deadline.

Mike Riemer, Field CISO at Ivanti and a 25-year US Air Force veteran who works closely with federal cybersecurity agencies, told VentureBeat what he is hearing from the government. “Threat actors are reverse engineering patches, and the speed at which they’re doing it has been enhanced greatly by AI,” Riemer said. “They’re able to reverse engineer a patch within 72 hours. So if I release a patch and a customer doesn’t patch within 72 hours of that release, they’re open to exploit.” Riemer was blunt about where that leaves the industry. “They are so far in front of us as defenders,” he said.

Grieco confirmed the other side of that collision at RSAC 2026. “If you talk to an operational team and many of our customers, they’re only patching once a year,” Grieco told VentureBeat. “And frankly, even in the best of circumstances, that is not fast enough.”

CSA’s Mogull makes the structural case that defenders hold the long-term advantage: fix a vulnerability once and every deployment benefits. But the transition period, when attackers reverse-engineer patches in 72 hours and defenders patch once a year, favors offense.


Mythos is not the only model finding these bugs. Researchers at AISLE, an AI cybersecurity startup, tested Anthropic’s showcase vulnerabilities on small, open-weights models and found that eight out of eight detected the FreeBSD exploit. AISLE says one model had only 3.6 billion parameters and costs 11 cents per million tokens, and that a 5.1-billion-parameter open model recovered the core analysis chain of the 27-year-old OpenBSD bug. AISLE’s conclusion: “The moat in AI cybersecurity is the system, not the model.” That makes the detection ceiling a structural problem, not a Mythos-specific one. Cheap models find the same bugs. The July timeline gets shorter, not longer.

Over 99% of the vulnerabilities Mythos has identified have not yet been patched, per Anthropic’s red team blog. The public Glasswing report lands in early July 2026. It will trigger a high-volume patch cycle across operating systems, browsers, cryptography libraries, and major infrastructure software. Security directors who have not expanded their patch pipeline, re-scoped their bug bounty programs, and built chainability scoring by then will absorb that wave cold. July is not a disclosure event. It is a patch tsunami.

What to tell the board

Every security director tells the board “we have scanned everything.” Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, told VentureBeat that the statement does not survive Mythos without a qualifier.

“What security leaders actually mean is: we have exhaustively scanned for what our tools know how to see,” Baer said in an exclusive interview with VentureBeat. “That’s a very different claim.”


Baer proposed reframing residual risk for boards around three tiers: known-knowns (vulnerability classes your stack reliably detects), known-unknowns (classes you know exist but your tools only partially cover, like stateful logic flaws and auth boundary confusion), and unknown-unknowns (vulnerabilities that emerge from composition, how safe components interact in unsafe ways). “This is where Mythos is landing,” Baer said.

The board-level statement Baer recommends: “We have high confidence in detecting discrete, known vulnerability classes. Our residual risk is concentrated in cross-function, multi-step, and compositional flaws that evade single-point scanners. We are actively investing in capabilities that raise that detection ceiling.”

On chainability, Baer was equally direct. “Chainability has to become a first-class scoring dimension,” she said. “CVSS was built to score atomic vulnerabilities. Mythos is exposing that risk is increasingly graph-shaped, not point-in-time.” Baer outlined three shifts security programs need to make: from severity scoring to exploitability pathways, from vulnerability lists to vulnerability graphs that model relationships across identity, data flow, and permissions, and from remediation SLAs to path disruption, where fixing any node that breaks the chain gets priority over fixing the highest individual CVSS.
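Baer's "graph-shaped risk" framing can be sketched in a few lines. The attack graph and finding names below are invented for illustration; the point is only that path disruption prioritizes the node whose fix severs the most attack paths, not the node with the highest individual CVSS:

```python
# Toy illustration of path-disruption scoring over a hypothetical attack
# graph. Each edge means "compromising A enables exploiting B".
from itertools import chain

graph = {
    "phishable_user": ["low_sev_race", "stale_service_acct"],
    "low_sev_race": ["kaslr_bypass"],
    "stale_service_acct": ["kaslr_bypass"],
    "kaslr_bypass": ["crown_jewel"],
    "crown_jewel": [],
}

def attack_paths(node, target, path=()):
    """Enumerate all simple paths from node to target."""
    path = path + (node,)
    if node == target:
        return [path]
    return list(chain.from_iterable(
        attack_paths(nxt, target, path)
        for nxt in graph.get(node, ()) if nxt not in path))

paths = attack_paths("phishable_user", "crown_jewel")
# Path disruption: count how many attack paths each fixable (interior)
# finding sits on, and fix the one that appears most often.
counts = {}
for p in paths:
    for node in p[1:-1]:
        counts[node] = counts.get(node, 0) + 1
best_fix = max(counts, key=counts.get)
print(best_fix)  # kaslr_bypass sits on every path, so fixing it severs all of them
```

Notice that the low-severity race condition scores low in isolation, yet the graph shows the single best fix is the shared KASLR bypass node: exactly the shift from severity scoring to exploitability pathways.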

“Mythos isn’t just finding missed bugs,” Baer said. “It’s invalidating the assumption that vulnerabilities are independent. Security programs that don’t adapt, from coverage thinking to interaction thinking, will keep reporting green dashboards while sitting on red attack paths.”


VentureBeat will update this story with additional operational details from Glasswing’s founding partners as interviews are completed.


A Mercury Rover Could Explore The Planet By Sticking To The Terminator


The planet Mercury in true color. (Credit: NASA)

With multiple rovers currently scurrying around on the surface of Mars to continue a decades-long legacy, it can be easy to forget sometimes that repeating this feat on other planets that aren’t Earth or Mars isn’t quite as straightforward. In the case of Earth’s twin – Venus – the surface conditions are too extreme to consider such a mission. Yet Mercury might be a plausible target for a rover, according to a study by [M. Murillo] and [P. G. Lucey], via Universe Today’s coverage.

The advantages of putting a rover's wheels on a planet's surface are obvious, as it allows for direct sampling of geological and other features in a way that an orbiting or passing space probe cannot. But making this work on Mercury, which is in some ways a slightly larger version of Earth's Moon placed right next door to the Sun, is challenging to say the least.

With no atmosphere, it's exposed to some of the worst the Sun can throw at it, though it does have a magnetic field at 1.1% of Earth's strength to take some of the edge off the ionizing radiation. That still leaves a rover to deal with very high radiation levels and extreme temperature swings that at the equator range between −173 °C and 427 °C, with day and night each lasting roughly 88 Earth days. Compare this to the constant mean surface temperature on Venus of 464 °C.

To deal with these extreme conditions, the researchers propose that a rover might be able to thrive if it sticks to the terminator, the moving boundary between day and night. To survive, a solar-powered rover would need to gather enough power despite the Sun sitting very low in the sky. It would also need to keep pace with the terminator, which means sustaining at least 4.25 km/h, as being caught on either the day or night side of Mercury would mean certain demise. This would leave little time for casual exploration as on Mars, and would require a high level of autonomy akin to what is being pioneered today with the Martian rovers.
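As a back-of-envelope check on that figure, the terminator's ground speed at the equator is just Mercury's circumference divided by the length of its solar day. The quick calculation below lands a bit under the study's 4.25 km/h minimum, which presumably builds in margin for terrain detours and the longer effective route a real rover would drive:

```python
# Rough check of the terminator's ground speed at Mercury's equator.
# Inputs: equatorial radius ~2439.7 km; one full day/night cycle
# (solar day) ~176 Earth days, i.e. ~88 days each of day and night.
import math

radius_km = 2439.7          # Mercury's equatorial radius
solar_day_h = 176 * 24.0    # one full day/night cycle in hours

circumference = 2 * math.pi * radius_km
speed = circumference / solar_day_h
print(f"{speed:.2f} km/h")  # ~3.63 km/h at the equator
```

At walking pace, in other words, but with no option to ever stop walking.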


Top image: the planet Mercury with its magnetic field. (Credit: A loose necktie, Wikimedia)


Copyright © 2025