
SpaceX Prioritizes Lunar ‘Self-Growing City’ Over Mars Project, Musk Says


“Elon Musk said on Sunday that SpaceX has shifted its focus to building a ‘self-growing city’ on the moon,” reports Reuters, “which could be achieved in less than 10 years.”


SpaceX still intends to start on Musk’s long-held ambition of a city on Mars within five to seven years, he wrote on his X social media platform, “but the overriding priority is securing the future of civilization and the Moon is faster.”

Musk’s comments echo a Wall Street Journal report on Friday, stating that SpaceX has told investors it would prioritize going to the moon and attempt a trip to Mars at a later time, targeting March 2027 for an uncrewed lunar landing. As recently as last year, Musk said that he aimed to send an uncrewed mission to Mars by the end of 2026.


Gmail finally lets you change your cringey old usernames



Google is finally doing the thing Gmail users have spent years begging for: letting them change the actual username in their Gmail address. This is no longer just an early rollout; Google says the feature is now available to all Google Account users in the US. So it’s still a limited release, […]


Volvo’s parent just revealed a $15,000 extended-range EV, and it shows how wide the US value gap has become


Geely, the Chinese automotive giant that owns Volvo, has just unveiled the Boyue EREV in China with a limited-time price of 107,900 yuan, or roughly $14,900. That price is worth noting because this is not a stripped-down city car but an extended-range SUV, and it makes the value gulf between China and the US look even wider.

This isn’t some short-range compromise either. Geely says the Boyue EREV offers up to 375 km of CLTC electric range and as much as 1,525 km of combined range, depending on the variant. It uses a 1.5-liter range extender, a 160 kW electric motor, and either a 28.3 kWh or 50.4 kWh LFP battery pack. The larger battery also supports 3C fast charging, which Geely claims can take the pack from 30% to 80% in about 15 minutes.
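The charging claim is easy to sanity-check with back-of-the-envelope arithmetic, using only the figures quoted above. A rough sketch (real charging curves taper near full, so treat the 3C figure as a peak rate, not a sustained average):

```python
# Back-of-the-envelope check on Geely's fast-charging claim.
# Figures come from the article; the 3C rate is a peak, not an average.
pack_kwh = 50.4            # larger LFP pack option
soc_added = 0.80 - 0.30    # the claimed 30% -> 80% window
minutes = 15

energy_added_kwh = pack_kwh * soc_added            # energy delivered
avg_power_kw = energy_added_kwh / (minutes / 60)   # average power implied
peak_3c_kw = 3 * pack_kwh                          # 3C on a 50.4 kWh pack

print(f"energy added: {energy_added_kwh:.1f} kWh")
print(f"average power needed: {avg_power_kw:.0f} kW")
print(f"3C peak charging power: {peak_3c_kw:.0f} kW")
```

A roughly 151 kW peak comfortably covers the roughly 101 kW average the claim implies, so the numbers are at least internally consistent.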

What else does it offer?

The Boyue EREV also doesn’t cut corners for the price, offering a 14.6-inch central display, an 8.8-inch instrument cluster, Flyme Auto, and support for both Carlink and Huawei HiCar. Keeping up with other high-tech Chinese EVs, you also get 50W wireless charging, an optional 16-speaker audio system, an optional HUD, and L2-level driver assistance. It is a real family SUV too, measuring 4,680 mm long with a 2,778 mm wheelbase.

Why this is such a big deal

The bigger story here is not just Geely’s new SUV; it is what this kind of product says about the market split. Reuters reported earlier this week on Geely’s broader importance to Volvo as the Swedish brand navigates a tough car market, underlining just how central the Chinese parent has become. And despite US buyers’ interest in Chinese EVs, they remain largely shut out of this kind of value.


European Union wants to ban AI-created images and video in official messaging



  • EU reckons it could assert trust and authenticity by removing AI-generated content
  • The bloc is also drafting a code of practice to protect citizens
  • Blocking AI altogether might not be the best move, though

The European Union is reportedly considering a ban on AI-generated images and videos – otherwise known as deepfakes – in official communications.

According to new Politico reporting, with geopolitical tensions rising and elections running their course, the focus is believed to be protecting trust in government messaging.


Samsung Galaxy Book6 Pro review: a super thin slab with a glorious display



Samsung Galaxy Book6 Pro: Two-minute review

The Samsung Galaxy Book6 Pro is a laptop in the ultrabook class, featuring a sublime design that keeps bulk to a minimum.


Google fixes fourth Chrome zero-day exploited in attacks in 2026


Google released emergency updates to fix another Chrome zero-day vulnerability exploited in attacks, marking the fourth such security flaw patched since the start of the year.

“Google is aware that an exploit for CVE-2026-5281 exists in the wild,” Google said in a security advisory issued on Tuesday.

As detailed in the Chromium commit history, this vulnerability stems from a use-after-free weakness in Dawn, the underlying cross-platform implementation of the WebGPU standard used by the Chromium project.

Attackers can exploit this Dawn security flaw to trigger web browser crashes, data corruption, rendering issues, or other abnormal behavior.


While Google has found evidence that threat actors were exploiting this zero-day flaw in the wild, it did not share details about these incidents.

“Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed,” the company noted.


Google has now fixed the zero-day for users in the Stable Desktop channel, with new versions rolling out to Windows and macOS (146.0.7680.177/178) and to Linux (146.0.7680.177). While Google says that this out-of-band update could take days or weeks to reach all users, it was immediately available when BleepingComputer checked for updates today.

If you don’t want to update the browser manually, you can also have it check for updates at the next launch and install them automatically.


This is the fourth actively exploited Chrome zero-day patched since the start of the year. The first (CVE-2026-2441) was an iterator invalidation bug in CSSFontFeatureValuesMap (Chrome’s implementation of CSS font feature values), which Google addressed in mid-February.

Google patched two other Chrome zero-day bugs exploited in attacks earlier this month: the first is an out-of-bounds write weakness in the Skia 2D graphics library (CVE-2026-3909), and the second is an inappropriate implementation vulnerability in the V8 JavaScript and WebAssembly engine (CVE-2026-3910).

In 2025, Google fixed a total of eight zero-days exploited in the wild, many of which were discovered and reported by Google’s Threat Analysis Group (TAG), which is known for tracking and identifying zero-day exploits used in spyware attacks.


Startup Pitches ‘Brainless Clones’ To Serve the Role of Backup Human Bodies


MIT Technology Review discovered that startup R3 Bio has pitched an ethically and scientifically explosive long-term vision beyond its public work on non-sentient monkey “organ sacks”: creating human “brainless clones,” or replacement bodies for organs, as part of an extreme life-extension agenda. From the report: Imagine it like this: a baby version of yourself with only enough of a brain structure to be alive in case you ever need a new kidney or liver. Or, alternatively, R3’s founder has speculated, you might one day get your brain placed into a younger clone. That could be a way to gain a second lifespan through a still-hypothetical procedure known as a body transplant.

The fuller context of R3’s proposals, as well as activities of another stealth startup with related goals, have not previously been reported. They’ve been kept secret by a circle of extreme life-extension proponents who fear that their plans for immortality could be derailed by clickbait headlines and public backlash. And that’s because the idea can sound like something straight from a creepy science fiction film. One person who heard R3’s clone presentation, and spoke on the condition of anonymity, was left reeling by its implications and shaken by [R3 founder John Schloendorn’s] enthusiastic delivery. The briefing, this person said, was like a “close encounter of the third kind” with “Dr. Strangelove.” […]

MIT Technology Review found no evidence that R3 has cloned anyone, or even any animal bigger than a rodent. What we did find were documents, additional meeting agendas, and other sources outlining a technical road map for what R3 called “body replacement cloning” in a 2023 letter to supporters. That road map involved improvements to the cloning process and genetic wiring diagrams for how to create animals without complete brains. A main purpose of the fundraising, investors say, was to support efforts to try these techniques in monkeys from a base in the Caribbean. That offered a path to a nearer-term business plan for more ethical medical experiments and toxicology testing — if the company could develop what it now calls monkey “organ sacks.” However, this work would clearly inform any possible human version.


If TikTok doomscrolling wasn’t bad enough, it now serves an emoji game in DMs


As if endless scrolling wasn’t bad enough already, TikTok has now quietly added a hidden emoji game inside DMs. The mini-game is live right now and works in both one-on-one messages and group chats. It means the app now has one more little trick to keep users hanging around even when they are technically done watching videos.

And honestly, it is exactly the kind of feature you would expect from a platform that has spent years mastering the art of making “just five more minutes” turn into an hour.

What’s the game, and why you should be wary

The game kicks off when you send a single emoji in a chat. Tap it, and that emoji becomes part of the game itself, floating across the screen to give you a speed boost as you try to bounce upward across a stack of alligators.

The goal is to climb as high as possible while avoiding skeleton alligators, with some of these disappearing after one landing. So it’s all about quick reactions and enough chaos to make you give it another try. TikTok also shows both your score and your opponent’s high score in the top-right corner. So this basically turns it into a lightweight little competition instead of just a throwaway gimmick.

It is very on-brand

TikTok told TechCrunch that it launched the Easter egg to make messaging more fun and add a playful competitive element to DMs. This isn’t the first time we’re seeing something like this. Instagram added its own hidden emoji DM game two years ago, and Meta has also been experimenting with games inside Threads chats.

On paper, this is just a harmless little DM mini-game. But in practice, it is one more engagement hook dropped into a platform that was already very good at monopolizing attention.


Hackers slipped a trojan into the code library behind most of the internet. Your team is probably affected


Attackers stole a long-lived npm access token belonging to the lead maintainer of axios, the most popular HTTP client library in JavaScript, and used it to publish two poisoned versions that install a cross-platform remote access trojan. The malicious releases target macOS, Windows, and Linux. They were live on the npm registry for roughly three hours before removal.

Axios gets more than 100 million downloads per week. Wiz reports it sits in approximately 80% of cloud and code environments, touching everything from React front-ends to CI/CD pipelines to serverless functions. Huntress detected the first infections 89 seconds after the malicious package went live and confirmed at least 135 compromised systems among its customers during the exposure window.

This is the third major npm supply chain compromise in seven months. Every one exploited maintainer credentials. This time, the target had adopted every defense the security community recommended.

One credential, two branches, 39 minutes

The attacker took over the npm account of @jasonsaayman, a lead axios maintainer, changed the account email to an anonymous ProtonMail address, and published the poisoned packages through npm’s command-line interface. That bypassed the project’s GitHub Actions CI/CD pipeline entirely.


The attacker never touched the Axios source code. Instead, both release branches received a single new dependency: plain-crypto-js@4.2.1. No part of the codebase imports it. The package exists solely to run a postinstall script that drops a cross-platform RAT onto the developer’s machine.

The staging was precise. Eighteen hours before the axios releases, the attacker published a clean version of plain-crypto-js under a separate npm account to build publishing history and dodge new-package scanner alerts. Then came the weaponized 4.2.1. Both release branches hit within 39 minutes. Three platform-specific payloads were pre-built. The malware erases itself after execution and swaps in a clean package.json to frustrate forensic inspection.

StepSecurity, which identified the compromise alongside Socket, called it among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package.

The defense that existed on paper

Axios did the right things. Legitimate 1.x releases shipped through GitHub Actions using npm‘s OIDC Trusted Publisher mechanism, which cryptographically ties every publish to a verified CI/CD workflow. The project carried SLSA provenance attestations. By every modern measure, the security stack looked solid.


None of it mattered. Huntress dug into the publish workflow and found the gap. The project still passed NPM_TOKEN as an environment variable right alongside the OIDC credentials. When both are present, npm defaults to the token. The long-lived classic token was the real authentication method for every publish, regardless of how OIDC was configured. The attacker never had to defeat OIDC. They walked around it. A legacy token sat there as a parallel auth path, and npm‘s own hierarchy silently preferred it.
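The failure mode Huntress describes is auditable in your own repositories. A minimal, illustrative sketch: it flags GitHub Actions workflows where a long-lived NPM_TOKEN coexists with OIDC trusted publishing. The `id-token: write` marker is the standard OIDC permission, but the heuristic is a simple text search, not an exhaustive workflow parser:

```python
# Illustrative audit: flag publish workflows where a long-lived NPM_TOKEN
# still coexists with OIDC trusted publishing. Per the reporting above,
# when both are present npm silently prefers the classic token.
import re
from pathlib import Path

def audit_workflow(text: str) -> list[str]:
    findings = []
    has_oidc = "id-token: write" in text               # OIDC permission for trusted publishing
    has_token = bool(re.search(r"\bNPM_TOKEN\b", text))
    if has_oidc and has_token:
        findings.append("legacy NPM_TOKEN coexists with OIDC; npm will prefer the token")
    elif has_token:
        findings.append("publishing relies on a long-lived classic token")
    return findings

if __name__ == "__main__":
    wf_dir = Path(".github/workflows")
    if wf_dir.is_dir():
        for wf in sorted(wf_dir.glob("*.y*ml")):
            for finding in audit_workflow(wf.read_text()):
                print(f"{wf}: {finding}")
```

A hit on the first finding is exactly the axios blind spot: the OIDC configuration looks correct, but the parallel token is the credential that actually matters.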

“From my experience at AWS, it’s very common for old auth mechanisms to linger,” said Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS, in an exclusive interview with VentureBeat. “Modern controls get deployed, but if legacy tokens or keys aren’t retired, the system quietly favors them. Just like we saw with SolarWinds, where legacy scripts bypassed newer monitoring.”

The maintainer posted on GitHub after discovering the compromise: “I’m trying to get support to understand how this even happened. I have 2FA / MFA on practically everything I interact with.”

Endor Labs documented the forensic difference. Legitimate axios@1.14.0 showed OIDC provenance, a trusted publisher record, and a gitHead linking to a specific commit. Malicious axios@1.14.1 had none. Any tool checking provenance would have flagged the gap instantly. But provenance verification is opt-in. No registry gate rejected the package.


Three attacks, seven months, same root cause

Three npm supply chain compromises in seven months. Every one started with a stolen maintainer credential.

The Shai-Hulud worm hit in September 2025. A single phished maintainer account gave attackers a foothold that self-replicated across more than 500 packages, harvesting npm tokens, cloud credentials, and GitHub secrets as it spread. CISA issued an advisory. GitHub overhauled npm’s entire authentication model in response.

Then in January 2026, Koi Security’s PackageGate research dropped six zero-day vulnerabilities across npm, pnpm, vlt, and Bun that punched through the very defenses the ecosystem adopted after Shai-Hulud. Lockfile integrity and script-blocking both failed under specific conditions. Three of the four package managers patched within weeks. npm closed the report.

Now axios. A stolen long-lived token published a RAT through both release branches despite OIDC, SLSA, and every post-Shai-Hulud hardening measure in place.


npm shipped real reforms after Shai-Hulud. Creation of new classic tokens got deprecated, though pre-existing ones survived until a hard revocation deadline. FIDO 2FA became mandatory, granular access tokens were capped at seven days for publishing, and trusted publishing via OIDC gave projects a cryptographic alternative to stored credentials. Taken together, those changes hardened everything downstream of the maintainer account. What they didn’t change was the account itself. The credential remained the single point of failure.

“Credential compromise is the recurring theme across npm breaches,” Baer said. “This isn’t just a weak password problem. It’s structural. Without ephemeral credentials, enforced MFA, or isolated build and signing environments, maintainer access remains the weak link.”

What npm shipped vs. what this attack walked past

  • Block stolen tokens from publishing. Shipped: FIDO 2FA required; granular tokens with 7-day expiry; classic tokens deprecated. In the axios attack: bypassed, because a legacy token coexisted alongside OIDC and npm preferred the token. The gap: no enforcement removes legacy tokens when OIDC is configured.

  • Verify package provenance. Shipped: OIDC Trusted Publishing via GitHub Actions; SLSA attestations. In the axios attack: bypassed, because the malicious versions were published via the CLI and carried no provenance. The gap: no gate rejects packages missing provenance from projects that previously had it.

  • Catch malware before install. Shipped: automated scanning from Socket, Snyk, and Aikido. In the axios attack: partial, since Socket flagged the package in 6 minutes but the first infections hit at 89 seconds. The gap: detection-to-removal lag; scanners catch it, yet registry removal takes hours.

  • Block postinstall execution. Shipped: --ignore-scripts recommended in CI/CD. In the axios attack: not enforced, because npm runs postinstall by default (pnpm blocks it by default; npm does not). The gap: postinstall remains the primary malware vector in every major npm attack since 2024.

  • Lock dependency versions. Shipped: lockfile enforcement via npm ci. In the axios attack: effective only if the lockfile was committed before the compromise; caret ranges auto-resolved to the malicious versions. The gap: caret ranges are the npm default, and most projects auto-resolve to the latest minor.

What to do now at your enterprise

SOC leaders whose organizations run Node.js should treat this as an active incident until they confirm clean systems. The three-hour exposure window fell during peak development hours across Asia-Pacific time zones, and any CI/CD pipeline that ran npm install overnight could have pulled the compromised version automatically.

“The first priority is impact assessment: which builds and downstream consumers ingested the compromised package?” Baer said. “Then containment, patching, and finally, transparent reporting to leadership. What happened, what’s exposed, and what controls will prevent a repeat. Lessons from log4j and event-stream show speed and clarity matter as much as the fix itself.”

  • Check exposure. Search lockfiles and CI logs for axios@1.14.1, axios@0.30.4, or plain-crypto-js. Pin to axios@1.14.0 or axios@0.30.3.

  • Assume compromise if hit. Rebuild affected machines from a known-good state. Rotate every accessible credential: npm tokens, AWS keys, SSH keys, cloud credentials, CI/CD secrets, .env values.

  • Block the C2. Add sfrclak.com and 142.11.206.73 to DNS blocklists and firewall rules.

  • Check for RAT artifacts. /Library/Caches/com.apple.act.mond on macOS. %PROGRAMDATA%\wt.exe on Windows. /tmp/ld.py on Linux. If found, perform a full rebuild.

  • Harden going forward. Enforce npm ci --ignore-scripts in CI/CD. Require lockfile-only installs. Reject packages missing provenance from projects that previously had it. Audit whether legacy tokens coexist with OIDC in your own publishing workflows.
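The exposure check in the first bullet can be scripted. A minimal sketch for npm lockfile v2/v3 files, using the package names and versions reported in this incident (older v1 lockfiles use a different `dependencies` layout that this sketch does not handle):

```python
# Scan an npm lockfile (v2/v3 "packages" layout) for the compromised
# axios releases and the malicious dropper dependency named above.
import json
from pathlib import Path

COMPROMISED = {
    "axios": {"1.14.1", "0.30.4"},
    "plain-crypto-js": {"4.2.1"},   # the weaponized dropper version
}

def find_compromised(lockfile_path: str) -> list[str]:
    lock = json.loads(Path(lockfile_path).read_text())
    hits = []
    # v2/v3 lockfiles key packages by install path, e.g. "node_modules/axios"
    for path, meta in lock.get("packages", {}).items():
        name = path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return sorted(hits)
```

A hit should trigger the rebuild-and-rotate steps above, not just a version pin, since the RAT may already have executed.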

The credential gap nobody closed

Three attacks in seven months. Each different in execution, identical in root cause. npm’s security model still treats individual maintainer accounts as the ultimate trust anchor. Those accounts remain vulnerable to credential hijacking, no matter how many layers get added downstream.

“AI spots risky packages, audits legacy auth, and speeds SOC response,” Baer said. “But humans still control maintainer credentials. We mitigate risk. We don’t eliminate it.”

Mandatory provenance attestation, where manual CLI publishing is disabled entirely, would have caught this attack before it reached the registry. So would mandatory multi-party signing, where no single maintainer can push a release alone. Neither is enforced today. npm has signaled that disabling tokens by default when trusted publishing is enabled is on the roadmap. Until it ships, every project running OIDC alongside a legacy token has the same blind spot axios had.

The axios maintainer did what the community asked. A legacy token nobody realized was still active undermined all of it.


France buys supercomputer maker Bull in tech sovereignty push


‘By supporting the emergence of Bull, we are choosing strategic independence,’ said France’s minister delegate for artificial intelligence and digital affairs.

France has completed its acquisition of 100pc of the capital of supercomputer maker Bull from Atos Group, in a deal that marks a “major step forward for French and European technological sovereignty”.

The acquisition, the completion of which was announced yesterday (31 March), is expected to boost France and Europe’s tech sovereignty particularly in the areas of high‑performance computing, AI and quantum technologies, according to the French state and Bull. The French state is now the sole shareholder of Bull.

“The revival of Bull as an independent company supported by the French state marks a decisive step in our history,” said Emmanuel Le Roux, CEO of Bull. “With a long‑term strategic shareholder, we are strengthening our position as a trusted industrial partner across the entire value chain of high‑performance computing, quantum computing and artificial intelligence.”


The deal to acquire Bull from Atos Group was first agreed in July of last year, when France agreed to pay an enterprise value of up to €404m for the company.

Bull, which is headquartered in Bezons, France, designs and manufactures supercomputers and high‑performance servers, as well as enterprise servers, software solutions, AI use cases and innovations in quantum computing.

“The supercomputers produced there meet the most demanding needs of national defence, industry and fundamental research, and are also essential for training and deploying artificial intelligence models,” read yesterday’s announcement. “They are recognised for their performance and energy efficiency – two decisive criteria for training large AI models.”

The computing company has been in operation for nearly a century, having been founded in 1931. The company was acquired by Atos Group in 2014, when it became the organisation’s advanced computing business.


Europe’s sovereignty push

The completion of France’s purchase of Bull comes amid a wider push for tech sovereignty in Europe in recent times – particularly in the wake of recent transatlantic tensions with the current US administration.

France and Germany have been prominent figureheads in the push for European digital sovereignty, with both countries taking centre stage at last November’s Summit on European Digital Sovereignty to propose a number of initiatives – including the launch of a joint taskforce on European digital sovereignty led by the two nations.

Sovereignty efforts have seen milestones achieved in Europe’s supercomputing space in particular.

Last September, Jupiter, the first computer system in Europe to reach the exascale threshold – one that performs more than one quintillion operations per second – was inaugurated at the Jülich Supercomputing Centre in Germany.


Jupiter joined existing supercomputers in the EuroHPC network – namely, MareNostrum in Spain, Leonardo in Italy, Lumi in Finland, Discoverer in Bulgaria, MeluXina in Luxembourg, Vega in Slovenia, Karolina in Czechia and Deucalion in Portugal – together conducting billions of calculations per second.

A month later, the European High Performance Computing Joint Undertaking (EuroHPC JU) signed a procurement contract with Eviden for the delivery of Alice Recoque, a new European exascale supercomputer (named after the late pioneering French computer scientist) to be located in France.

“The state’s entry into Bull’s share capital marks a decisive step for our digital sovereignty,” said Anne Le Hénanff, France’s minister delegate for artificial intelligence and digital affairs. “At a time when artificial intelligence and quantum technologies are profoundly reshaping technological balances, France is equipping itself with a leading industrial player in high‑performance computing.

“By supporting the emergence of Bull, we are choosing strategic independence. It is a strong signal: that of a country that invests, that protects its expertise, and that is determined to remain sovereign in the technologies that will shape the world of tomorrow.”



This Deep Sea Submersible Let Humans Explore the Abyss


As a kid, I loved the 1980s aquatic adventure show Danger Bay. True to the TV show’s name, danger was always lurking at the Vancouver Aquarium, where the show was set. In one memorable episode, young Jonah and a friend get trapped in a sabotaged mini-submarine, and Jonah’s dad, a marine-mammal veterinarian, comes to the rescue in a bubble-shaped underwater vehicle. Good stuff! Only recently—as in when I started working on this column—did I learn that the rescue vehicle was not a stage prop but rather a real-world research submersible named Deep Rover.

What Was Deep Rover and What Did It Do?

Built in 1984 and launched the following year, Deep Rover was a departure from standard underwater vehicles, which typically required divers to lie in a prone position and look through tiny portholes while tethered to a support ship.

Deep Rover was designed to satisfy human curiosity about the underwater world. As the rover moved freely through the water down to depths of 1,000 meters, the operator sat up in relative comfort in the cab, inside a clear 13-centimeter-thick acrylic bubble with panoramic views—an inverted fishbowl, with the human immersed in breathable air while the sea creatures looked in. Used for scientific research and deepwater exploration, it set a number of dive records along the way.

[Photo: Submarine designer Graham Hawkes (left) and marine biologist Sylvia Earle (right) came up with the idea for Deep Rover. Alain Le Garsmeur/Alamy]

The team behind Deep Rover included U.S. marine biologist Sylvia Earle and British marine engineer and submarine designer Graham Hawkes. Earle and Hawkes’s collaboration had begun in May 1980, when Earle complained to Hawkes about the “stupid” arms on Jim, an atmospheric diving suit; she didn’t realize she was complaining to one of Jim’s designers. Hawkes explained the difficulty of designing flexible joints that could withstand dueling pressures of 101 kilopascals on the inside—that is, the normal atmospheric pressure at sea level—and up to about 4,100 kPa on the outside. But he listened carefully to Earle’s wish list for a useful manipulator. Several months later, he came back with a design for a superbly dexterous arm that could hold a pencil and write normal-size letters.


Earle and Hawkes next turned to designing a one-person bubble sub, which they considered so practical that it would be an easy sell. But after failing to attract funding, they decided to build it themselves. In the summer of 1981, they pooled their resources and cofounded Deep Ocean Technology, setting up shop in Earle’s garage in Oakland, Calif.

[Photo: Phil Nuytten, a Canadian designer of submersibles and dive systems, engineered Deep Rover. Stuart Westmorland/RGB Ventures/Alamy]

They still found that customers weren’t interested in their crewed submersible, though, so they turned to unmanned systems. Their first contract was for a remotely operated vehicle (ROV) for use in oil-rig inspection, maintenance, and repair. Other customers followed, and they ended up building 10 of these ROVs. In 1983, they returned to their original idea and contracted with the Canadian inventor and entrepreneur Phil Nuytten to engineer Deep Rover.

Nuytten didn’t have to be convinced of the value of the submersible. He had grown up on the water and shared their dream. As a teenager, he opened Vancouver’s first dive shop. He then worked as a commercial diver. He founded the ocean- and research-tech companies Can-Dive Services (in 1965) and Nuytco Research (in 1982), and he developed advanced submersibles as well as diving systems. These included the Newtsuit, an aluminum atmospheric diving suit for use on drilling rigs and salvage operations.


Deep Rover’s first assignment was to boost offshore oil exploration and drilling in eastern Canada. Funding came from the provincial government of Newfoundland and Labrador and the oil companies Petro-Canada and Husky Oil. But the collapse of oil prices in the mid-1980s made it uneconomical to operate the submersible. So the rover’s mission broadened to scientific research.

Deep Rover’s Technical Specs

The pilot could operate Deep Rover safely for 4 to 6 hours at a depth of 1,000 meters and speeds of up to 1.5 knots (46 meters per minute). The submersible could be tethered to a support ship or move freely on its own. Two deep-cycle, lead-acid battery pods weighing about 170 kilograms apiece provided power. It had a VHF radio and two frequencies of through-water communications, plus tracking beacons.
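The quoted speed figures are consistent with each other, since a knot is one nautical mile (1,852 m) per hour by definition. A quick conversion check:

```python
# Unit check on the quoted speed: 1 knot = 1 nautical mile per hour = 1852 m/h.
knots = 1.5
meters_per_minute = knots * 1852 / 60
print(round(meters_per_minute, 1))   # ~46.3 m/min, matching the article's 46 m/min
```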

[Photos: From 1987 to 1989, Deep Rover did a series of dives in Oregon’s Crater Lake, the deepest lake in the United States. During one dive, National Park Service biologist Mark Buktenica collected rock samples. NPS]

The rover’s four thrusters—two horizontal fixed aft thrusters and two rotating wing thrusters—could be activated in any combination through microswitches built into the armrest. The pilot navigated using a gyro compass, sonar, and depth gauges (both digital and analog).

Much to Earle’s delight, Deep Rover had two excellent manipulators, each with four degrees of freedom, thus solving the problem that had started her down this path of invention. The pilot controlled the manipulators with a joystick at the end of each armrest. Sensory feedback systems helped the pilot “feel” the force, motion, and touch. The two arms had wraparound jaws and could lift about 90 kg.


If something went wrong, Deep Rover carried five days’ worth of life support stores and had a variety of redundant safety features: oxygen and carbon dioxide monitoring equipment; a halon (breathable) fire extinguisher; a full-face BIBS (built-in breathing system) that tapped into the starboard air bank; and a ground fault-detection system.

If needed, the rover could surface quickly by jettisoning equipment, including the battery pods and a 90-kg drop weight in the forward bay. In dire circumstances, the pressure hull (the acrylic bubble, that is) could separate from the frame, taking with it only its oxygen tanks, strobe, through-water communications, and wing thrusters.

Deep Rover’s Achievements

From 1984 to 1992, Deep Rover conducted about 280 dives. It inspected two of the tunnels near Niagara Falls that divert water to the Sir Adam Beck II hydroelectric plant. In California’s Monterey Bay, the rover let researchers film previously unknown deep-sea marine life, which helped establish the Monterey Bay Aquarium Research Institute. At Crater Lake National Park, in Oregon, Deep Rover proved the existence of geothermal vents and bacteria mats, leading to the protection of the site from extractive drilling.

Deep Rover was featured in a short film shown at Vancouver’s Expo ’86, the first of several TV and movie appearances. There was Danger Bay. Director James Cameron used an early prototype of the submersible in his 1989 film The Abyss. Deep Rover also made an appearance in Cameron’s 2005 documentary Aliens of the Deep.


In 1992, Deep Rover came to the end of its working life. It now resides at Ingenium, Canada’s Museums of Science and Innovation, in Ottawa. For a time, Deep Ocean Engineering continued to develop later generations of the submersible. Eventually, though, uncrewed remotely operated and autonomous underwater vehicles became the norm for deep-sea missions, replacing human pilots with sensors and equipment. New ROVs can dive significantly deeper than human-piloted ones, and new cameras are so good that it feels like you’re there…almost. And yet, humans still long to have the personal experience of exploring the depths of the oceans.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the April 2026 print issue as “All Alone in the Abyss.”


Continue Reading
