
Tech

Best 360 Cameras (2026): DJI, Insta360, GoPro


Top 5 360 Cameras Compared

Honorable Mentions

Two Insta360 cameras, long rectangular black devices, on a beachside rock.

Photograph: Scott Gilbertson

Insta360 X4 for $340: I’d recommend skipping this one unless you can get it on sale for under $300. The X4 Air is (usually) cheaper, smaller, and more capable, though the X4 does have a larger screen and better battery life (its video quality, again, trails the X4 Air’s). If you can find a killer deal under $300, the X4 is worth nabbing. Otherwise, stick with the X4 Air.


Qoocam 3 Ultra for $539: It’s not widely available, and we have not had a chance to try one, but Kandao’s Qoocam 3 Ultra is another 8K 360 camera that looks promising, at least on paper. The f/1.6 aperture is especially interesting, as most of the rest of these are in the f/2 and up range. We’ll update this guide when we’ve had a chance to test a Qoocam.

360 Cameras to Avoid

Insta360 One RS: Insta360’s interchangeable-lens action-camera/360-camera hybrid was a novel idea that just didn’t seem to catch on. Now it’s a bit dated. The video footage isn’t as good as that of the other cameras in this guide, but you can swap the lens and have an action camera in a moment, which is the major selling point. Ultimately I’d say skip this; get the X4 Air, and if you want to use it like a GoPro, just shoot in single-lens mode.

GoPro Max: You’ll still run across GoPro’s original Max sometimes, but again, there are better options.


Insta360 X3: Insta360’s older X3 is not worth buying at this point.

Insta360 One RS 1-Inch 360 Edition: Although I still like and use this camera, it appears to have been discontinued, and there’s no replacement in sight. The X5 delivers better video quality in a lighter, less fragile body, but I will miss those 1-inch sensors, which managed to pull in a lot of detail even though the footage topped out at 6K. These are still available used, but at outrageous prices. You’re better off with the X5.


Frequently Asked Questions

There are two reasons you’d want a 360-degree camera. The first is to shoot virtual reality content, where the final product is viewed on a 360-degree screen, such as a VR headset. So far this is mostly the province of professionals shooting on very expensive 360 rigs not covered in this guide, though there is a growing body of amateur creators as well. If this is what you want to do, go for the highest-resolution camera you can get. Either of our top two picks will work.

For most of us, though, the main appeal of a 360 camera is to shoot everything around you and then edit, or reframe, to the part of the scene you want to focus on, panning and tracking objects within the 360 footage, with the result being a typical, rectangular video that gets exported to the web. The video resolution and image quality will never match what you get from a high-end DSLR, but the DSLR might not be pointed at the right place at the right time. The 360 camera doesn’t have to be pointed anywhere; it just has to be on.

This is the best use case for the cameras on this page, which primarily produce HD (1080p) or better video—but not 4K—when reframed. I expect to see 12K-capable consumer-level 360 cameras in the next year or two (which is what you need to reframe to 4K), but for now, these are the best cameras you can buy.


Whether you’re shooting virtual tours or your kid’s birthday, the basic premise of a 360 camera is the same. The fisheye lens (usually two very wide-angle lenses combined) captures the entire scene around you, ideally editing out the selfie stick if you’re using one. Once you’ve captured your 360-degree view, you can then edit or reframe that content down to something ready to upload to YouTube, TikTok, and other video-sharing sites.

Why Is High Resolution Important in 360 Cameras?

Camera makers have been pushing ever-higher video resolution for so long that it feels like a gimmick in many cases, but not with 360 cameras. Because the camera is capturing a huge field of view, the canvas, if you will, is very large. To get a conventional video from that footage you have to crop, which zooms in on the image, meaning your 8K 360 shot becomes just under 2.7K when you reframe that footage.
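As a rough sketch of that math (my own back-of-the-envelope, not from the guide): the usable width of a reframed clip scales with the fraction of the 360-degree panorama your crop covers. Assuming an equirectangular source and a crop with a roughly 120-degree horizontal field of view, a common reframing default:

```python
def reframed_width(pano_width_px: int, crop_fov_deg: float = 120.0) -> int:
    """Horizontal pixels of source detail spanned by a flat crop.

    Assumes a simple equirectangular panorama, where each degree of the
    360-degree view maps to an equal share of the image width.
    """
    return round(pano_width_px * crop_fov_deg / 360.0)

# An 8K (7680 px wide) panorama reframes to about 2.5K:
print(reframed_width(7680))    # 2560
# A 12K (12288 px wide) panorama is what you need for a 4K crop:
print(reframed_width(12288))   # 4096
```

Wider crops recover a bit more detail (hence figures like “just under 2.7K” for 8K footage), but the principle holds: a reframed video only ever gets a slice of the panorama’s pixels.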

How Does “Reframing” Work?


Reframing is the process of taking the huge, 360-degree view of the world that your camera captures and zooming in on just a part of it to tell your story. This makes the 360 footage fit traditional movie formats (like 16:9), but as noted above it means cropping your footage, so the higher the resolution you start with, the better your reframed video will look.

If you’re shooting for VR headsets or other immersive tools, then you don’t have to reframe anything.

I’ve been shooting with 360 cameras since Insta360 released the X2 back in 2020. Early 360 cameras were fun, but the video they produced wasn’t high enough resolution to fit with footage from other cameras, limiting their usefulness. Thankfully, we’ve come a long way in the last five years. The 360 camera market has grown, and the footage these cameras produce is good enough to mix seamlessly with your action camera and even your high-end mirrorless camera footage.

To test 360 cameras, I’ve broken the process down into different shooting scenarios, especially scenes with different lighting conditions, to see how each performs. No camera is perfect, so which one is right for you depends on what you’re shooting. I’ve paid special attention to the ease of use of each camera (360 cameras can be confusing for beginners), along with the helpful extras each offers, like HDR modes and support for accessories.


The final element of the picture is the editing workflow and tools available for each camera. Since most people are shooting for social media, the raw 360 footage has to be edited before you post it anywhere. All the cameras above have software for mobile, Windows and macOS.



The Hazards Of Charging USB-C Equipped Cells In-Situ


Can you charge those Li-ion based cells with built-in USB-C charging ports without taking them out of the device? While you might expect an unequivocal ‘yes’, [Colin] recently found that doing so could easily have destroyed the device they were to be installed in.

After being tasked with finding a better way to keep the electronics of some exercise bikes powered than simply swapping the C cells all the time, [Colin] was led to consider using these Li-ion cells in such a manner. Fortunately, rather than just sticking the whole thing together and calling it a day, he decided to take some measurements to satisfy some burning safety questions.

As it turns out, at least the cells that he tested – charged via a twin USB-C lead off a single USB-A connector – have all their negative terminals and USB-C grounds connected. Since the cells are installed in the typical series configuration in the device, this would have made for an interesting outcome. Although you can of course use a separate USB-C lead and charger per cell, it’s still somewhat disconcerting to run them without any kind of electrical isolation.


In this regard, the suggestion by some commenters to use NiMH cells and trickle-charge these in situ, much like garden PV lights do, might be one of the least crazy solutions.



ShinyHunters claim responsibility for European Commission breach


Reportedly, the crime group accessed more than 350GB of stolen data related to data dumps of mail servers, databases, confidential documents, contracts and other sensitive material.

The extortion group ShinyHunters has been linked to the recent (24 March) breach of the European Commission’s Europa.eu platform, in which a reported 350GB of data, across multiple databases, was accessed and stolen. 

In a statement issued after the incident (27 March), the European Commission said its early findings suggest that private data has been accessed and that the Union entities affected by the attack will be contacted. The Commission’s internal systems are not believed to have been affected.

The Commission explained it will continue to monitor the situation, taking the necessary precautions to ensure the security of its systems and data, as well as work to analyse what happened so it can use the results to improve its cybersecurity capabilities.


While the Commission has not shared further details on the incident, alleged data dumps uploaded to ShinyHunters’ Tor data leak site are said to include content from mail servers, internal communications systems, databases, confidential documents, contracts and additional sensitive material. 90GB of information allegedly stolen from the European Commission’s compromised cloud network has already been shared. 

ShinyHunters is an extortion group, established around 2020, that has carried out a number of high-profile, financially motivated attacks on organisations such as Salesforce, Allianz Life, SoundCloud and Ticketmaster. The criminal organisation also claimed responsibility for an attack on Match Group, which owns Tinder, Hinge, Meetic, Match.com and OkCupid.

In July 2024, AT&T paid a member of the ShinyHunters hacking group $370,000 to delete the data of millions of customers following a massive data breach of its systems. Reportedly, the stolen data exposed the calls and texts of nearly all of the carrier’s 110m cellular customers after ShinyHunters stole the information from the cloud data giant Snowflake.



This Is How Trump Is Already Threatening the Midterms


The White House did not respond to a request for comment about the meetings, but an official who was not authorized to speak on the record told WIRED at the time: “The White House does not comment on mysterious meetings with unnamed staffers.”

Simultaneously, Trump has also sought to absolve officials of any wrongdoing in the wake of the 2020 election. Last year, Trump gave “full, complete and unconditional” pardons to a slate of people who had tried, and failed, to help him overturn the 2020 election results. In recent months, Trump has pressured Colorado governor Jared Polis to release Tina Peters, the former county clerk in Mesa County, Colorado, who became a hero for the right’s election deniers when she facilitated a security breach during a software update of her county’s election management system.

Peters was found guilty of four felonies, but Trump has been mounting a campaign in recent months to get her released, even going so far as to say he “pardoned” her, even though he has no power to do so given she was convicted on state charges.

Election Day Interference

While Trump has not announced specific plans to deploy troops to polling locations or seize voting machines, he and his administration have certainly been suggesting that such action is not off the table.


In January, Trump lamented not having the National Guard seize certain voting machines after the 2020 election. In early February, White House press secretary Karoline Leavitt told reporters that while she hasn’t specifically heard Trump discussing the possibility, she couldn’t “guarantee that an ICE agent won’t be around a polling location in November.” (The question was in response to former White House adviser Steve Bannon stating: “We’re going to have ICE surround the polls come November. We’re not going to sit here and allow you to steal the country again … We will never again allow an election to be stolen.”)

Earlier this month, during his confirmation hearing to head up the Department of Homeland Security, Senator Markwayne Mullin said he would be willing to deploy ICE to polling locations to address “a specific threat.”

The result of the Trump administration’s drip feed of threats and dog whistles is that those who are running elections in states across the country are already war-gaming what happens if ICE or the National Guard show up at their voting locations.

Michael McNulty, the policy director at Issue One, a nonprofit that tracks the impact of money in politics, also points to the fact that the Department of Justice sent monitors to oversee elections in November in New Jersey and California, despite no federal elections being held. “The concern is that this could become a massive deployment of, quote unquote, observers by the DOJ in 2026 who might do something more, whether it’s intimidation, whether it’s interfering with local election officials, to get data to confirm conspiracy theories,” McNulty tells WIRED.


FBI Raids

On January 28, the FBI raided the election office in Fulton County, Georgia, executing a search warrant that allowed it to seize ballots, ballot images, tabulator tapes, and the voter rolls related to the 2020 election. The search warrant affidavit, unsealed a few weeks ago, shows that the FBI relied on the work of Kurt Olsen, a lawyer who was appointed by the administration to investigate election security in October and who has a long history of working with some of the country’s biggest election deniers, including Patrick Byrne, Mike Lindell, and Kari Lake. Olsen’s claims are based on debunked and previously investigated conspiracy theories about the 2020 election.

The raid was also notable for the presence of Tulsi Gabbard, the director of national intelligence, who is, according to The Guardian, running a parallel investigation into the 2020 election with the apparent tacit approval of Trump.



Fiber HDMI cables enable full-bandwidth 8K over runs up to 990 feet



The product is an active optical cable (AOC) for HDMI. Instead of relying solely on copper, it carries most of its signal over fiber-optic strands. Inside the cable, HDMI electrical signals are converted into optical signals for the journey between the two ends, then converted back to electrical signals at…


In with a bang, out in silence — the end of the Mac Pro


For almost two decades, the Mac Pro bounced from coveted and beloved to derided and forgotten. Now, it’s finally over.

Silver computer tower with a handle, power button, and ventilation holes on the side.
Apple is reportedly pressing the off switch on the Mac Pro

All political careers end in failure, and all devices fade out as they are eventually superseded. Yet this time it’s more that the Mac Pro has been usurped, and possibly even stabbed in the back.
If you’re a Mac Pro fan, you knew this day was coming, and you probably don’t want to believe it. It’s true that the Mac Pro long ago lost its crown as the most powerful Mac, but still, this is the legendary Mac Pro.
Continue Reading on AppleInsider | Discuss on our Forums



Is It Time For Open Source to Start Charging For Access?


“It’s time to charge for access,” argues a new opinion piece at The Register. Begging billion-dollar companies to fund open source projects just isn’t enough, writes long-time tech reporter Steven J. Vaughan-Nichols:


Screw fair. Screw asking for dimes. You can’t live off one-off charity donations… Depending on what people put in a tip jar is no way to fund anything of value… [A]ccording to a 2024 Tidelift maintainer report, 60 percent of open source maintainers are unpaid, and 60 percent have quit or considered quitting, largely due to burnout and lack of compensation. Oh, and of those getting paid, only 26 percent earn more than $1,000 a year for their work. They’d be better paid asking “Would you like fries with that?” at your local McDonald’s…

Some organizations do support maintainers, for example, there’s HeroDevs and its $20 million Open Source Sustainability Fund. Its mission is to pay maintainers of critical, often end-of-life open source components so they can keep shipping patches without burning out. Sentry’s Open Source Pledge/Fund has given hundreds of thousands of dollars per year directly to maintainers of the packages Sentry depends on. Sentry is one of the few vendors that systematically maps its dependency tree and then actually cuts checks to the people maintaining that stack, as opposed to just talking about “giving back.”

Sentry is on to something. We have the Linux Foundation to manage commercial open source projects, the Apache Foundation to oversee its various open source programs, the Open Source Initiative (OSI) to coordinate open source licenses, and many more for various specific projects. It’s time we had an organization with the mission of ensuring that the top programmers and maintainers of valuable open source projects get a cut of the tech billionaire pie.


We must realign how businesses work with open source so that payment is no longer an optional charitable gift but a cost of doing business. To do that, we need an organization to create a viable, supportable path from big business to individual programmer. It’s time for someone to step up and make this happen. Businesses, open source software, and maintainers will all be better off for it.

One possible future… Bruce Perens wrote the original Open Source definition in 1997, and now proposes a not-for-profit corporation developing “the Post Open Collection” of software, distributing its licensing fees to developers while providing services like user support, documentation, hardware-based authentication for developers, and even help with government compliance and lobbying.



From “Hello, World!” to AI: What Skills Actually Prepare Students for the Future?


This article is part of the collection: Teaching Tech: Navigating Learning and AI in the Industrial Revolution.


A little over a decade ago, schools were swept into what many described as a movement to prepare students for the future of work. That work was coding — “Hello, world!”

Districts introduced new courses, nonprofits expanded access to computer science education and a growing ecosystem of programs promised to teach students the skills needed to enter the tech workforce. For many, it felt like a necessary correction to a rapidly digitizing world. But over time, a more complicated picture emerged.

While access to computer science education expanded, the relationship between early coding exposure and long-term workforce outcomes became uneven. The “learn to code” movement raised an important question that still lingers today: Which skills actually endure when technologies change? That question has resurfaced in a new form.


Today, generative AI is driving a similar wave of urgency. Schools are once again being encouraged to adapt quickly, often with the same underlying rationale that teachers must prepare students for a future shaped by emerging technologies.

But if the instructional role of AI remains unclear, and if the tools themselves are likely to evolve rapidly, the more persistent challenge may lie elsewhere.

After conducting a two-year research project alongside teachers who are adapting to AI and open to integrating it, we found that uptake is still minimal. Most of our participants, including those who are engineering or computer science teachers, still struggle to identify a clear or universal instructional use case for widespread AI integration.

So, what should students learn to help them adapt to whatever comes next?


A growing body of research suggests that the answer may lie not in teaching students how to use a particular AI system, but in helping them understand the computational ideas that make those systems possible.

The Limits of Teaching the Tool

In recent years, many discussions about AI education have centered on teaching students how to use generative tools effectively. Prompt engineering, for example, has become a common topic in professional development workshops and online tutorials.

Yet, focusing heavily on tool-specific skills can create a familiar educational problem, because technology changes faster than curricula.

Teaching students how to interact with a specific interface risks becoming the equivalent of teaching to standardized tests, rather than teaching students important lessons that don’t appear on state exams.


The history of computing education offers a useful example. In the early 2010s, a wave of coding initiatives encouraged schools to teach programming skills broadly. While many of those programs expanded access to computer science education, subsequent analysis showed that workforce pipelines in technology remained uneven, and many students learned tool-specific skills without developing deeper computational reasoning abilities.

That experience offers a cautionary lesson for the current AI moment. If the goal of integrating AI into education is long-term preparation for technological change, focusing narrowly on how to use today’s tools may not be the most durable strategy.

The Skill That Outlasts the Tool

A growing body of research suggests that computational thinking is a more durable educational objective.

Computational thinking refers to a set of problem-solving practices used in computer science and other analytical disciplines. These include:

  • breaking complex problems into smaller components
  • recognizing patterns
  • designing step-by-step processes
  • evaluating the outputs of automated systems

These skills apply not only to programming but also to fields ranging from engineering to public policy.

Importantly, they also help students understand how algorithmic systems operate.

When students learn computational thinking, they gain the ability to analyze how technologies like AI produce results rather than simply accepting those results as authoritative.

In this sense, computational thinking provides a conceptual bridge between traditional academic skills and emerging digital systems.

What Teachers Are Already Doing

Many teachers in our study were already moving in this direction, often without using the term computational thinking.


When teachers asked students to analyze chatbot errors, they were encouraging students to examine how algorithmic systems produce outputs. When they designed exercises comparing training data and algorithms to everyday processes, they were helping students reason about how automated systems work.

These approaches do not require students to rely heavily on AI tools themselves. Instead, they position AI as a case study for examining how technology shapes information.

That framing aligns with longstanding educational goals around critical thinking, media literacy and problem-solving.

Implications for Educators

If the instructional use case for generative AI remains uncertain, educators may benefit from focusing on skills that remain valuable regardless of which tools dominate in the future.


Several practical approaches are already emerging in classrooms. Teachers can use AI systems as objects of analysis, asking students to evaluate outputs, identify errors and investigate how models generate responses.

Lessons can connect AI to broader topics such as data quality, algorithmic bias and information reliability.

Assignments that emphasize reasoning, structured problem solving and evidence evaluation continue to support the kinds of cognitive work that remain central to learning.

These approaches allow students to engage with AI without allowing the technology to replace the thinking process itself.


Implications for EdTech Developers

The experiences teachers described also highlight an opportunity for edtech companies.

Many current AI tools were developed as general-purpose language systems and later introduced into education contexts. As a result, teachers are often left to determine whether and how those tools align with classroom learning goals. Future products may benefit from deeper collaboration with educators during the design process.

Teachers in our conversations were already experimenting with small classroom applications, designing AI literacy lessons and building course-specific chatbots.

These experiments resemble early-stage product development.


Partnerships between educators, edtech developers and product managers could help identify instructional problems that AI systems could realistically address.

The Next Phase of the Research

The conversations described in this series represent an early attempt to document how teachers are navigating the arrival of generative AI.

As schools continue experimenting with these tools, the next challenge will be to develop governance frameworks that help educators evaluate when and how AI should be used in learning environments.

Our research team is beginning the next phase of this work by partnering with school districts to develop guidance for AI governance and inviting edtech companies interested in exploring these questions collaboratively.


Rather than assuming that AI will inevitably transform classrooms, this phase of the project will focus on identifying the conditions under which AI tools actually support teaching and learning and how to reduce harm when they don’t.

The fourth-grade teacher’s question remains a useful guide: What can I actually use this for in math?

Until the answer becomes clearer, many teachers will likely continue doing what professionals in any field do when new technologies appear: experimenting cautiously, adopting what works and relying on their judgment to decide where or if the tool belongs.


If your school, district, organization, or edtech company is interested in learning more about joining our next project on AI governance, contact our research team at research@edsurge.com.



French AI start-up Mistral raises $830m in debt


The Paris-based company is building out ‘cutting-edge’ European data centres with a total capacity ambition of 200MW by 2027.

French AI start-up Mistral has raised $830m in its first debt financing to fund its data centre near Paris.

The company said the deal, supported by a consortium of seven “top-tier” global banks, would pay for Nvidia Grace Blackwell infrastructure with 13,800 Nvidia GB300 GPUs at the “cutting-edge” centre, bringing powered capacity to 44MW.

The data centre at Bruyères-le-Châtel, scheduled to be operational in the first half of this year, was previously earmarked to train AI models belonging to Mistral and its customers, while also “delivering high-performance inference services”, according to the company.


Last month, Mistral said it would spend over $1.4bn in Sweden on digital infrastructure, including a data centre, building towards its stated goal of 200MW of capacity across Europe by 2027.

“Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe,” said Arthur Mensch, CEO of Mistral AI.

“We will continue to invest in this area, given the surging and sustained demand from governments, enterprises and research institutions seeking to build their own customised AI environment, rather than depend on third-party cloud providers.”

Mistral said it is building a “vertically integrated AI company” comprising “frontier open-weight models, deep enterprise integration, production deployments and its own compute infrastructure”.


It counts organisations in the tech, retail, logistics and public sectors among its customers, and has already partnered with the likes of ASML, Ericsson and the European Space Agency to train models on their proprietary data.

Earlier this month, Mistral launched both ‘Small 4’, the newest model in its fully open-source ‘Small’ series with an aim of consolidating capabilities of its flagship models, and ‘Forge’, a platform that lets enterprises build custom models trained on their own data.

Last September, the 2023-founded French AI darling announced a Series C raise of around $2bn at a post-money valuation of more than $13bn, led by Dutch chipmaker ASML. Existing investors DST Global, Andreessen Horowitz, Bpifrance, General Catalyst, Index Ventures, Lightspeed and Nvidia took part.

Although a frontrunner in the European AI space, Mistral is far behind US competitors such as OpenAI and Anthropic in terms of funding levels and valuations.


Mistral is a founding member of the Nvidia Nemotron Coalition. As part of the initiative, Mistral and Nvidia plan to co-develop frontier open-source AI models.



NASA Picks Intuitive Machines for a 2030 Artemis Moon Delivery Loaded with Science Tools and a Human Time Capsule

Published

on

NASA has awarded Intuitive Machines a $180.4 million contract to deliver seven science payloads to a carefully chosen site near the lunar south pole. The Houston-based company will use one of its larger lander configurations for the mission, designated IM-5, with a target landing date of around 2030 at Mons Malapert. The location was selected for good reason. The ridge maintains a fairly consistent line of sight with Earth, receives relatively steady sunlight, and sits close to permanently shadowed regions that may hold water ice, a resource that could prove critical to sustaining long-term human operations on the Moon.



The lander arrives loaded with instruments ready to start collecting data from the moment it touches down. A stereo camera package developed at NASA’s Langley Research Center, called the Stereo Cameras for Lunar Plume Surface Studies, will capture how the descent engines disturb the fine lunar soil, information that will help engineers design landing systems that cause less disruption to the surface. A near infrared spectrometer mounted on a small rover from Honeybee Robotics, led by NASA’s Ames Research Center, will then scan for minerals and potential ice deposits while also measuring surface temperatures and mapping how the soil composition varies across the landing area.



A mass spectrometer called MSolo, built at NASA’s Kennedy Space Center, will analyze gases present at the landing site immediately after touchdown, focusing on lightweight molecules that could prove useful for future lunar explorers. Radiation monitoring is handled by a set of four detectors developed by the Korea Astronomy and Space Science Institute, measuring surface exposure levels to assess risks for both equipment and future crew while also providing insight into the geological history of the surrounding area.


A set of small sensors aboard the Australian Space Agency’s Roo-ver will track how landing plumes interact with surface materials across varying distances over time, part of NASA Goddard Space Flight Center’s Multifunctional Nanosensor Platform. The Roo-ver will also demonstrate its ability to navigate and move independently across uneven lunar terrain. A Laser Retroreflector Array, also out of Goddard, rounds out the payload with a compact set of mirrors designed to bounce laser signals back to orbiting spacecraft, improving navigation accuracy for future missions passing overhead or coming in to land nearby and helping establish reliable reference points across the lunar surface.



Rounding out the cargo is Sanctuary on the Moon, a time capsule developed in France containing information about human civilization, science, technology, culture, and the human genome, etched onto 24 durable synthetic sapphire discs. It is built to last, and designed to be found.


Google’s new compression drastically shrinks AI memory use while quietly speeding up performance across demanding workloads and modern hardware environments



  • Google TurboQuant reduces memory strain while maintaining accuracy across demanding workloads
  • Vector compression reaches new efficiency levels without additional training requirements
  • Key-value cache bottlenecks remain central to AI system performance limits

Large language models (LLMs) depend heavily on internal memory structures that store intermediate data for rapid reuse during processing.

One of the most critical components is the key-value cache, described as a “high-speed digital cheat sheet” that avoids repeated computation.
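This excerpt doesn’t describe how TurboQuant itself works, so the following is purely an illustrative sketch of the family of techniques it belongs to, not Google’s actual algorithm: plain symmetric 8-bit scalar quantization of a cache vector, storing one byte per value plus a single per-vector scale, which cuts a float32 entry’s footprint by roughly 4x.

```python
import array
import random

def quantize_int8(vec):
    """Compress a float vector to int8 codes plus one float scale."""
    scale = max(abs(v) for v in vec) / 127.0 or 1.0  # guard all-zero vectors
    codes = array.array('b', (round(v / scale) for v in vec))
    return codes, scale

def dequantize(codes, scale):
    """Recover an approximation of the original floats."""
    return [c * scale for c in codes]

random.seed(0)
cache_entry = [random.uniform(-1.0, 1.0) for _ in range(1024)]

codes, scale = quantize_int8(cache_entry)
approx = dequantize(codes, scale)

# 1 byte per value instead of 4 for float32: a ~4x smaller cache entry.
print(len(codes) * codes.itemsize)  # 1024 bytes, versus 4096 as float32
# Round-trip error stays under one quantization step.
print(max(abs(a - b) for a, b in zip(cache_entry, approx)) < scale)
```

Real systems layer more sophistication on top of this idea, such as per-channel scales or mixed precision for keys versus values, but the trade-off is the same: less memory per cached vector in exchange for a small, bounded approximation error.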

