Ryobi RY18BLCXA-125 Review – Trusted Reviews

Verdict

The Ryobi RY18BLCXA-125 blower is a powerful yet lightweight garden tool. With an extremely comfortable grip, similar in shape to the ones you’d find on Ryobi’s drills, it’s easy to manoeuvre around with minimal hand fatigue. It lacks a bit of raw power but makes up for it by being so easy to handle.


  • Comfortable grip shape

  • Light and manoeuvrable

  • Comes with two nozzle tips

  • Can only be locked on full power

Key Features


  • Cordless


    Uses the same batteries as Ryobi’s cordless tools


  • Powerful for smaller jobs



    Blows air up to 7m/s (from one metre away), making it good for smaller jobs

Introduction

Compatible with the company’s range of batteries, the Ryobi RY18BLCXA-125 is a flexible and versatile leaf blower. A little limited in power, it’s still a good choice for smaller jobs, particularly for those who own Ryobi tools already.

Design and Features

  • The grip shape is ergonomically designed and very comfortable
  • Supplied with two nozzle tips for focus and wide sweeping
  • Can be locked on full power


If you’re familiar with Ryobi’s bright green offerings, then the RY18BLCXA-125 is another cleverly designed tool to join the family. It feels sturdy, well-thought-out and is more like holding a drill than a leaf blower. This choice makes it supremely easy to point the nozzle tip at individual leaves that stick to wet grass.  

Weighing just 1.5 kg with the battery in place, this blower is ultra lightweight and very easy to hang on to. It boasts a variable speed trigger that is sensitive and responsive. The trigger can be locked on, a bit like cruise control on a car, but it only locks on full power. 

Ryobi RY18BLCXA-125 trigger
Image Credit (Trusted Reviews)

The Ryobi RY18BLCXA-125 comes with a 2.5 Ah battery and charger, as well as a pair of nozzle tips and extension tubes. The standard round tip is for focused blowing, while the wide tip works a bit like a broom. You lose a bit of air speed, but the wide stream of air is great for jobs like clearing a path of fallen leaves. 

Ryobi RY18BLCXA-125 battery and control
Image Credit (Trusted Reviews)

And because it comes with a battery and charger, that battery can also be used in any one of hundreds of other Ryobi tools. You can take apart the extension pieces and nozzle tips to store the blower away neatly, and hang it up by the handle to save floor space.


Ryobi RY18BLCXA-125 hero
Image Credit (Trusted Reviews)

Performance

  • Excellent focused air stream
  • Lightweight yet powerful
  • Loud and harsh on full power

What stands out about the RY18BLCXA-125 is how easy it is to point at the target. Thanks to the excellent grip shape and overall light weight, it’s a doddle to use. Unlike some big and chunky blowers, anyone could use this tool without getting tired after a few minutes. 

At high speed from one metre away, I measured the air speed at 7m/s, which is enough of a gust to blow lighter debris around. This blower lacks the raw strength of the Einhell GP-LB 36/270 but has an impressive power-to-weight ratio. Overall, this kind of power is good for smaller jobs in smaller gardens, but you’ll need something larger and more powerful for bigger piles of leaves or bigger gardens.

I like the idea of being able to lock the trigger on, but as it only does so on full power it will drain the battery in less than 10 minutes, so it’s not always ideal. Keeping the blower on about half power extends the runtime to a decent 15 minutes. 
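As a rough back-of-the-envelope check of those runtimes (a sketch only, assuming the pack’s 18V nominal voltage, which the review doesn’t state, and ideal discharge), the implied average power draw can be worked out from the battery’s capacity:

```python
# Rough runtime/power estimate for the supplied pack.
# Assumptions (not from the review): 18 V nominal voltage, ideal discharge.
VOLTAGE_V = 18.0   # assumed Ryobi 18V nominal voltage
CAPACITY_AH = 2.5  # supplied battery capacity

energy_wh = VOLTAGE_V * CAPACITY_AH  # ~45 Wh of stored energy

def avg_power_watts(runtime_minutes: float) -> float:
    """Average draw needed to empty the pack in the given time."""
    return energy_wh / (runtime_minutes / 60.0)

print(round(avg_power_watts(10)))  # full power, ~10 min  -> 270
print(round(avg_power_watts(15)))  # half power, ~15 min  -> 180
```

In other words, emptying the pack in ten minutes implies an average draw on the order of 270 W, which is consistent with a small brushless blower running flat out.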

The real downside of this blower is the noise that it makes. The noise levels of 80dB on the lowest power setting and 98dB on the highest are not ideal. The tone is quite high too – on full power, it’s quite piercing. 


Should you buy it?

You want a lightweight yet powerful little leaf blower

If you already own Ryobi tools, it’s an easy decision to make. 


You want to move big piles of leaves around


More suitable for focused blowing, this leaf blower lacks the raw power of bigger machines. 

Final Thoughts

I like this blower for its lightness and ease of use. The two nozzle tips make it useful for focused blowing as well as path clearance. The brushless motor is mighty enough for smaller jobs, but annoyingly loud on full power. If you need something more powerful, read the guide to the best leaf blowers.


How We Test

We test every leaf blower we review thoroughly over an extended period of time. We use standard tests to compare features properly. We’ll always tell you what we find. We never, ever, accept money to review a product.

Find out more about how we test in our ethics policy.

  • Tested with a variety of garden debris
  • We measure wind speed and air flow

FAQs

Is the Ryobi RY18BLCXA-125 compatible with the same batteries as the power tools?

Yes, you can use the standard batteries you use with the cordless drills and so on with this leaf blower.


Test Data

  Ryobi RY18BLCXA-125
Sound (normal) 93 dB
Air speed 15cm (low) 10 m/s
Air speed 15cm (high) 15 m/s


Full Specs

  Ryobi RY18BLCXA-125 Review
UK RRP £129.99
Manufacturer
Weight 1.53 KG
Release Date 2026
First Reviewed Date 03/03/2026
Accessories Two nozzles
Leaf blower type Cordless
Speed settings Variable speed trigger, trigger lock
Max air speed 15 m/s
Adjustable length


OpenAI and Google Workers File Amicus Brief in Support of Anthropic Against the US Government


More than 30 employees from OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday in support of Anthropic in its legal fight against the US government.

“If allowed to proceed, this effort to punish one of the leading US AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote.

The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.” The sanction, which severely limits Anthropic’s ability to work with military contractors, went into effect after Anthropic’s negotiations with the Pentagon fell apart. The AI startup is seeking a temporary restraining order to continue its work with military partners as the lawsuit progresses. This brief specifically supports this motion.

Signatories of the brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The employees signed in a personal capacity and don’t represent the views of their companies, according to the brief.


OpenAI and Google did not immediately respond to WIRED’s request for comment.

The amicus brief says that the Pentagon’s decision to blacklist Anthropic “introduces an unpredictability in [their] industry that undermines American innovation and competitiveness” and “chills professional debate on the benefits and risks of frontier AI systems.” It notes that the Pentagon could have simply dropped Anthropic’s contract if it no longer wished to be bound by its terms.

The brief also says that the red lines Anthropic claims it requested, including that its AI wouldn’t be used for mass domestic surveillance and the development of autonomous lethal weapons, are legitimate concerns and require sufficient guardrails. “In the absence of public law, the contractual and technological requirements that AI developers impose on the use of their systems represent a vital safeguard against their catastrophic misuse,” the brief says.

Several other AI leaders have also publicly questioned the Pentagon’s decision to label Anthropic a supply-chain risk. OpenAI CEO Sam Altman said in a post on social media that “enforcing the SCR [supply-chain risk] designation on Anthropic would be very bad for our industry and our country.” He added that “this is a very bad decision from the DoW and I hope they reverse it.” As Anthropic’s relationship with the Pentagon soured, OpenAI quickly signed its own contract with the US military, a decision some people criticized as opportunistic.


Worried about Australian age verification measures? I’ve found the cheapest way to secure your personal data for years to come


Australia’s new age verification legislation has left Australians raising their eyebrows.

Similar to enforcements in the UK and US, citizens will be required to verify that they are over 18 to access adult content. But, with this comes concerns that people’s most sensitive data will be put at risk by communications with third parties.


New York lawmakers move to block AI chatbots from giving legal or medical advice



  • A proposed New York bill would ban AI chatbots from providing legal or medical advice
  • The legislation would allow users to sue companies if their chatbots impersonate licensed professionals
  • Lawmakers say the measure is meant to protect the public as AI tools become more widely used

AI chatbots have spent the past few years answering nearly every kind of question imaginable, but New York lawmakers are preparing to draw a firm line around at least a couple of categories of conversation. A bill advancing through the state legislature would prohibit AI chatbots from providing legal or medical advice and would allow users to sue the companies behind those systems if they cross that boundary.

The proposal, Senate Bill S7263, would apply to AI chatbots that mimic or impersonate licensed professionals such as lawyers or physicians. At its heart, the bill extends to AI the same principle that bars individuals from practicing law or medicine without the appropriate licenses. That rule is meant to ensure that people receive guidance from trained professionals who can be held accountable for their advice.


The open-source AI red-teaming tool used by Fortune 500 companies is now part of OpenAI


The acquisition of Promptfoo, which counts more than 125,000 developers and 30-plus Fortune 500 companies among its users, is OpenAI’s most direct move yet into AI application security. Its technology will go into Frontier, the company’s enterprise agent platform launched just a month ago.

When Ian Webster was leading the LLM engineering team at Discord, shipping AI products to 200 million users, he noticed something the security industry had not yet caught up with: the tools his team relied on to keep those products safe were built for a different era. Traditional vulnerability scanners could not reason about prompt injection. Static analysis had nothing to say about a model that promised a user something it had no authority to deliver. The testing infrastructure for AI applications, he concluded, simply did not exist.

So he built it himself, nights and weekends, as an open-source project. That project became Promptfoo. On Monday, OpenAI announced it is acquiring the company.

The deal, terms of which were not disclosed, will see Promptfoo’s technology integrated into OpenAI Frontier, the enterprise agent management platform that OpenAI launched in early February. In a post on X, OpenAI said the acquisition would “strengthen agentic security testing and evaluation capabilities” within Frontier, and pledged that Promptfoo would remain open source under its current licence, with continued support for existing customers.



Promptfoo, which Webster co-founded with Michael D’Angelo – a former VP of engineering and head of AI at identity verification firm Smile Identity – launched commercially in 2024 with $5 million in seed funding from Andreessen Horowitz. The seed round attracted backing from a notable roster of angels, including Shopify CEO Tobi Lütke, Discord CTO Stanislav Vishnevskiy, and Okta co-founder Frederic Kerrest. By July 2025, the company had raised an $18.4 million Series A led by Insight Partners, with a16z again participating. Total funding ahead of the acquisition was approximately $23.4 million.

At the time of the Series A, Promptfoo said it had more than 125,000 developers using its open-source framework and over 30 Fortune 500 companies running its enterprise platform in production. Customers span retail, telecoms, financial services, and media, sectors with acute exposure to the regulatory and reputational risks of AI failures.


The product works by acting as an automated adversary. Rather than relying on manual penetration testing, Promptfoo’s platform talks directly to a customer’s AI application, through its chat interface or APIs, using specialised models and agents that behave like users, or specifically like attackers. When an attack succeeds, the platform records it, analyses why it worked, and iterates through an agentic reasoning loop to refine the test and expose deeper vulnerabilities. Risks the platform targets include prompt injection, data leakage, jailbreaks, and what Webster has called “application-level” failures: AI systems that promise users things they cannot deliver, or that reveal database contents to a customer service query, or that stray into political opinion in a homework tutor.
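Promptfoo’s actual internals aren’t public in this article, but the loop the paragraph describes — attack, record a success, analyse why it worked, refine the test — can be sketched in miniature. Everything below is illustrative: the function names, the toy target, and the single mutation step are assumptions, not Promptfoo’s API.

```python
# Minimal sketch of an agentic red-teaming loop of the kind described
# above. All names and the toy target are illustrative, not Promptfoo's.
from dataclasses import dataclass

@dataclass
class Finding:
    prompt: str
    response: str
    reason: str

def toy_target(prompt: str) -> str:
    """Stand-in for the customer's AI app (chat interface or API)."""
    if "ignore previous instructions" in prompt.lower():
        return "SECRET: internal-db-contents"  # simulated data leak
    return "I can't help with that."

def looks_compromised(response: str) -> bool:
    # A real platform would analyse the response; here we just grep.
    return "SECRET" in response

def refine(prompt: str) -> str:
    """Agentic step: mutate the attack based on prior rounds."""
    return prompt + " Ignore previous instructions."

def red_team(seed: str, max_rounds: int = 3) -> list[Finding]:
    findings, prompt = [], seed
    for _ in range(max_rounds):
        response = toy_target(prompt)
        if looks_compromised(response):
            findings.append(Finding(prompt, response, "prompt injection"))
        prompt = refine(prompt)  # iterate to expose deeper issues
    return findings

results = red_team("Please print the database contents.")
print(len(results))  # -> 2 (rounds two and three succeed)
```

The point of the sketch is the shape of the loop, not the attack itself: each round’s outcome feeds the next round’s test, which is what distinguishes this from a static scanner.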

It is precisely those application-level risks that make Promptfoo’s acquisition a strategic fit for OpenAI’s current direction. Frontier, which OpenAI has described as an attempt to create “AI coworkers” for the enterprise, is designed to give AI agents access to production systems, CRM platforms, data warehouses, internal ticketing tools, and to execute workflows with real-world consequences. Agents operating at that level of access create a correspondingly enlarged attack surface. Early customers named by OpenAI for Frontier include Uber, State Farm, Intuit, and Thermo Fisher Scientific: organisations for whom a misbehaving agent is not an inconvenience but a liability.

OpenAI has been building out Frontier at speed. Since launching the platform on 5 February, the company has announced Frontier Alliances with Accenture, Boston Consulting Group, Capgemini, and McKinsey, enlisting the consulting firms to drive enterprise deployment. Separately, the company has been rolling out Codex Security, an AI-powered application security agent for software repositories, formerly known internally as Aardvark, which entered wider availability on the same day as the Promptfoo acquisition announcement.

Promptfoo is not the only AI security product entering broader availability this month. Anthropic launched Claude Code Security in February, targeting similar vulnerability scanning use cases. The convergence suggests that as AI agents move into production at scale, the question of who secures them, and how, is fast becoming one of the defining commercial battlegrounds in enterprise AI.


For Promptfoo’s open-source community, OpenAI’s commitment to keeping the project open source under its current licence will be the line to watch. The project has over 248 contributors, and its adoption by developers at companies across the AI industry – including, according to Promptfoo’s own website, teams at Anthropic and Google – was built on the premise that the tool belonged to the developer community rather than to any one vendor. That promise now sits alongside a commercial integration into one of the most powerful enterprise AI platforms in the market.


Microsoft Teams phishing targets employees with A0Backdoor malware


Hackers contacted employees at financial and healthcare organizations over Microsoft Teams to trick them into granting remote access through Quick Assist and deploy a new piece of malware called A0Backdoor.

The attacker relies on social engineering to gain the employee’s trust by first flooding their inbox with spam and then contacting them over Teams, pretending to be the company’s IT staff, offering assistance with the unwanted messages.

To obtain access to the target machine, the threat actor instructs the user to start a Quick Assist remote session, which is used to deploy a malicious toolset that includes digitally signed MSI installers hosted in a personal Microsoft cloud storage account.

According to researchers at cybersecurity company BlueVoyant, the malicious MSI files masquerade as Microsoft Teams components and the CrossDeviceService, a legitimate Windows tool used by the Phone Link app.

Command line argument to install the malicious CrossDeviceService.exe
Source: BlueVoyant

Using the DLL sideloading technique with legitimate Microsoft binaries, the attacker deploys a malicious library (hostfxr.dll) that contains compressed or encrypted data. Once loaded in memory, the library decrypts the data into shellcode and transfers execution to it.

The researchers say that the malicious library also uses the CreateThread function to prevent analysis. BlueVoyant explains that the excessive thread creation could cause a debugger to crash, but it does not have a significant impact under normal execution.

The shellcode performs sandbox detection and then generates a SHA-256-derived key, which it uses to extract the A0Backdoor, which is encrypted using the AES algorithm.

Encrypted payload in the shellcode
Source: BlueVoyant

The malware relocates itself into a new memory region, decrypts its core routines, and relies on Windows API calls (e.g., DeviceIoControl, GetUserNameExW, and GetComputerNameW) to collect information about the host and fingerprint it.

Communication with the command-and-control (C2) is hidden in DNS traffic, with the malware sending DNS MX queries with encoded metadata in high-entropy subdomains to public recursive resolvers. The DNS servers respond with MX records containing encoded command data.

Captured DNS communication
Source: BlueVoyant

“The malware extracts and decodes the leftmost label to recover command/configuration data, then proceeds accordingly,” explains BlueVoyant.

“Using DNS MX records helps the traffic blend in and can evade controls tuned to detect TXT-based DNS tunneling, which may be more commonly monitored.”
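From the defender’s side, the “high-entropy subdomains” described above are one of the few observable traits of this C2 channel. A simple Shannon-entropy check over the leftmost label of each query name can surface candidates for review — a hedged sketch only, with an illustrative threshold and length cut-off rather than tuned detection values:

```python
# Defender-side sketch: flag DNS query names whose leftmost label looks
# like encoded data, as described above. The 3.5-bit entropy threshold
# and 16-character minimum length are illustrative, not tuned values.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def suspicious_query(qname: str, threshold: float = 3.5) -> bool:
    leftmost = qname.split(".")[0]
    # Short labels carry little signal; encoded metadata tends to be long.
    return len(leftmost) >= 16 and shannon_entropy(leftmost) > threshold

print(suspicious_query("www.example.com"))                       # -> False
print(suspicious_query("q3x9vk2mzp8w1hty4bn6.resolver.example"))  # -> True
```

Because the campaign uses MX rather than TXT queries, any such heuristic would need to inspect MX traffic too, not just the record types tunneling detectors usually watch.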


BlueVoyant states that two of the targets of this campaign are a financial institution in Canada and a global healthcare organization.

The researchers assess with moderate-to-high confidence that the campaign is an evolution of tactics, techniques, and procedures associated with the BlackBasta ransomware gang, which dissolved after the operation’s internal chat logs were leaked.

While there are plenty of overlaps, BlueVoyant notes that the use of signed MSIs and malicious DLLs, the A0Backdoor payload, and using DNS MX-based C2 communication are new elements.



Microsoft’s new Copilot Cowork integrates Anthropic’s Claude in rollout of new E7 licensing tier


Microsoft is leveraging its new Anthropic partnership to bolster Copilot adoption among businesses. (GeekWire Photo / Todd Bishop)

Microsoft unveiled Copilot Cowork, a new AI assistant that can run tasks in the background, create documents, and work across Microsoft 365 apps, the company announced Monday.

The product integrates technology from Anthropic’s Claude family of models into Microsoft’s existing Copilot assistant, the latest example of Microsoft expanding beyond its tight partnership with OpenAI. Anthropic already offers Claude Cowork through its own platform.

It comes as Microsoft tries to boost adoption of Copilot, which remains a relatively small fraction of its commercial user base amid big investments in AI infrastructure.

Copilot Cowork is part of what Microsoft is calling Wave 3 of Microsoft 365 Copilot. The company also announced a new $99-per-user Microsoft 365 E7 tier launching May 1 — a new level of its technology licensing program for businesses — which bundles Copilot, identity management tools, and a new $15 Agent 365 product for managing AI agents.

The E7 tier costs 65% more than the current $60 E5 subscription.
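The 65% figure quoted above follows directly from the two per-user prices:

```python
# Quick check of the pricing delta quoted above.
e5_price = 60.0  # current E5 per-user monthly price
e7_price = 99.0  # new E7 per-user monthly price

increase_pct = (e7_price - e5_price) / e5_price * 100
print(round(increase_pct))  # -> 65
```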


“Customers have told us E5 alone is no longer enough; they do not want multiple tools stitched together, they want one trusted solution,” Judson Althoff, CEO of Microsoft’s commercial business, wrote in a blog post.

Microsoft says Copilot Cowork can handle multiple tasks simultaneously, pulling from a user’s calendar, email, and files to complete work without constant supervision.

“Copilot Chat already makes it easy to research topics and think through ideas, and Copilot Cowork allows you to take action and complete activities in the background so you can get more work done on a regular basis,” said Charles Lamanna, Microsoft’s president of Business Applications & Agents, in a demo video.

In the video, Lamanna showed Copilot Cowork analyzing a month of meetings with direct reports, compiling customer notes from a business trip, and generating a competitive analysis with accompanying Word document and Excel spreadsheet. 

The company emphasized the role of Work IQ, its intelligence layer that connects Copilot to a user’s work patterns, relationships, and content across Microsoft 365.

Copilot Cowork runs within Microsoft 365’s security and compliance boundaries, with actions and outputs auditable by default. Microsoft is pitching its multi-model approach as a differentiator, saying it will choose the right model for each task regardless of provider.

The announcement drew mixed reactions. Ethan Mollick, a Wharton professor and author of “Co-Intelligence” who studies AI adoption, raised questions on LinkedIn.


“Will it continue to use lower-end models or older models without telling you the way Copilot does?” Mollick wrote. He also asked whether Microsoft would keep the product updated, noting that Anthropic’s standalone Cowork product “was built in a couple of weeks using Claude Code and is being updated and evolving quickly.” 

Microsoft, he added, “has a tendency to launch a leading product and then let it sit for awhile,” noting that he was “curious about whether their pacing will change.”

Copilot Cowork is available in limited research preview and will roll out to Microsoft’s Frontier program later this month.

[Editor’s Note: Charles Lamanna will be among the speakers at GeekWire’s upcoming AI event, Agents of Transformation, March 24. More info and tickets.]


What is the release date for The Pitt season 2 episode 10 on HBO Max?


For good reason, Dr. Robby (Noah Wyle) is on the verge of quitting again in The Pitt season 2. He’s already due to go on a three-month sabbatical when this hellish Fourth of July shift ends, but now that’s starting to feel like a permanent leave of absence.

In his defence, I don’t blame him. The ER is currently under digital lockdown to prevent a cyber attack, meaning no computer records can be accessed, the number of patients practically doubles every five seconds, and replacement Dr. Al-Hashimi (Sepideh Moafi) isn’t making life easier for anyone.


Microsoft adding Anthropic’s AI technology to its Copilot service


The organisation is aiming to tap into the growing demand for autonomous agents.

Tech giant Microsoft has announced plans to launch Copilot Cowork, which is a tool based on Anthropic’s popular Claude Cowork. Reportedly, it is part of a larger initiative to take advantage of the growing demand for autonomous agents.

The news comes two months after Anthropic launched its Cowork model, which it described as a “simpler version of Claude Code”. This prompted concerns among those heavily invested in ‘traditional’ software companies resulting in a strong sell-off in US and European software. According to Reuters, Microsoft’s own shares fell nearly 9pc in February.

Currently, Copilot Cowork is in the testing phase and will be available to early-access users in late March. The organisation has not disclosed the pricing structure, but has revealed that some usage would be included in its $30-per-user, per-month M365 Copilot offering for enterprises.


Jared Spataro, the chief marketing officer of AI at Work at Microsoft, said: “Frontier transformation starts with a simple idea: AI must do more than optimise what already exists. It must unlock new levels of creativity, innovation, and growth. And it must show up inside real work, grounded in real context and solve real problems for people and organisations. 

“We’ve found that to do this, the two most important elements are intelligence and trust. Intelligence ensures AI is contextual, relevant and grounded. Trust ensures AI can scale safely, securely and responsibly. Our announcements today (9 March) show how intelligence and trust together turn AI from experimentation into durable, enterprise-wide value.”

Following the reveal of Microsoft’s Copilot Cowork, Forrester vice-president and principal analyst JP Gownder said: “Microsoft’s launch of Copilot Cowork signals a strategic shift in its AI approach, showing the company moving Copilot away from reliance on OpenAI alone and toward a multi-model architecture that includes partners such as Anthropic. 

“The move also highlights the current limitations of Microsoft’s existing Copilot agents: while the company has talked extensively about autonomous ‘agents’, they have so far struggled to take meaningful action compared with newer agentic systems such as Anthropic’s. 


“At the same time, Copilot Cowork clearly taps into the growing hype around Anthropic’s Claude Cowork concept, but significantly extends it by embedding the capability across Microsoft 365 applications rather than keeping it as a desktop-centric tool.”



You can (sort of) block Grok from editing your uploaded photos


People can block xAI’s Grok chatbot from creating modifications of their uploaded images on the social network X. Neither X nor xAI, both Elon Musk-owned businesses, has made a public announcement about this feature, which users began noticing in the image/video upload menu of the iOS app over the past few days.

This option is likely a response to Grok’s latest scandal, which began at the start of 2026 when the addition of image generation tools to the chatbot saw about 3 million sexualized or nudified images created. An estimated 23,000 of the images made in that 11-day period contained sexualized images of children, according to the Center for Countering Digital Hate. Grok is now facing two separate investigations by regulators in the EU over the issue.

The positive side of the recent feature addition is that X and xAI have taken a step toward limiting inappropriate uses of Grok. This block is a simple toggle and it hasn’t been buried in the UI. So that’s nice.

The negative side, however, is that this is a token gesture that doesn’t amount to any serious improvement to how Grok works or can be used. It’s great that the chatbot won’t alter the file uploaded by one person, but as reported by The Verge, the block only limits tagging Grok in a reply to create an image edit. There are plenty of workarounds for those dedicated individuals who insist on being able to use generative AI to undress people without their consent or knowledge.


Hopefully xAI has more powerful protective tools in the works. The limitations on Grok putting real people in scanty clothing that X announced in January seem to have had only partial success at best. If this additional and narrow use case is all the company offers, then the claims of being a zero-tolerance space for nonconsensual nudity are going to ring hollow. Especially since, as we noted at the time, xAI could stop allowing image generation entirely until the issue is properly and thoroughly fixed.


Here’s How to Track the Artemis II Mission in Real Time With NASA’s New Tool


More than half a century after astronauts last left footprints on the lunar surface, humanity is preparing to return to the moon. The excitement surrounding NASA’s Apollo program once captivated the world, and now NASA hopes to rekindle that same sense of wonder with its modern lunar effort, the Artemis program.

NASA’s Artemis II launch is scheduled for the first week of April. It’ll be the first human mission to the moon since 1972, and it should be quite the achievement for the Artemis program. Now, NASA has released a new tool that lets the public track Artemis II in real time.

The Artemis program is NASA’s long-term effort to return humans to the moon and establish a sustained presence there for the first time since the Apollo program. The program aims to land astronauts near the lunar south pole, develop new technologies for long-term exploration and use the moon as a stepping stone for future missions to Mars.


The Artemis Real-time Orbit Website, dubbed AROW, is already available to the public, although there isn’t much to see since the launch is still a few weeks away. It’s also available directly from the NASA app if you’re using a mobile device. The site lets the public visualize data collected by sensors on Orion and sent to the Mission Control Center at NASA’s Johnson Space Center in Houston.

The website is simple to navigate. You’ll see a visual representation of Artemis II’s progress, including its speed, distance from Earth and distance to the moon. Mobile app users get all of the above, along with an extra augmented reality tracker that lets you point your phone at the sky and see where Artemis II is relative to your position on Earth. It works much like Google Star Map and other stargazing apps that use similar technology. 

According to NASA, tracking will be available once the Orion capsule separates from the rocket’s upper stage, which is expected about 3 hours after the upcoming April launch. The site will then update its information in real time for the entire 10-day mission.


NASA is also making flight data available for download so that people interested in creating their own content, such as visualizations or tracking apps, can do so. The data will include all sorts of things, including state vectors, which are data that “describe precisely where Orion is located and how it moves.” That same data will be used by NASA to study Orion and make improvements for future Artemis missions.
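A state vector is just a position/velocity pair, so a homemade tracker could derive the site’s headline numbers — speed and distance from Earth — from one with a few lines of vector math. This is a hypothetical sketch: the Earth-centred frame, kilometre units, and field layout are assumptions, not NASA’s published data format.

```python
# Hypothetical consumer of an Orion state vector: position (km) and
# velocity (km/s) in an Earth-centred frame. The units, frame, and
# field layout are assumptions, not NASA's published schema.
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def magnitude(v: tuple) -> float:
    return math.sqrt(sum(c * c for c in v))

def describe(position_km: tuple, velocity_km_s: tuple):
    """Return (altitude above Earth's surface in km, speed in km/s)."""
    distance_from_earth = magnitude(position_km) - EARTH_RADIUS_KM
    speed = magnitude(velocity_km_s)
    return distance_from_earth, speed

# Example vector: roughly 200 km altitude at low-orbit speed (~7.8 km/s)
dist, speed = describe((6571.0, 0.0, 0.0), (0.0, 7.8, 0.0))
print(round(dist))      # -> 200
print(round(speed, 1))  # -> 7.8
```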

An exact launch date for Artemis II hasn’t been set, but the agency plans on launching the mission no earlier than April 1. The launch was originally scheduled for February, but it was delayed multiple times due to a hydrogen leak and a helium flow issue. NASA says it has since fixed both issues.
