If you’re excited, or even just a little curious, about the future of augmented reality, Meta’s Orion prototype makes the most compelling case yet for the technology.
For Meta, Orion is about more than finally making AR glasses a reality. It’s also the company’s best shot at becoming less dependent on Apple and Google’s app stores, and the rules that come with them. If Orion succeeds, then maybe we won’t need smartphones for much at all. Glasses, Zuckerberg , might eventually become “the main way we do computing.”
At the moment, it’s still way too early to know if Zuckerberg’s bet will actually pay off. Orion is, for now, still a prototype. Meta hasn’t said when it might become widely available or how much it might cost. That’s partly because the company, which has already poured tens of billions of dollars into AR and VR research, still needs to figure out how to make Orion significantly more affordable than the $10,000 it costs to make the current version. It also needs to refine Orion’s hardware and software. And, perhaps most importantly, the company will eventually need to persuade its vast user base that AI-infused, eye-tracking glasses offer a better way to navigate the world.
Still, Meta has been eager to show off Orion since at Connect. And, after recently getting a chance to try out Orion for myself, it’s easy to see why: Orion is the most impressive AR hardware I’ve seen.
Advertisement
Meta has clearly gone to great lengths to make its AR glasses look, well, normal. While Snap has been mocked for its oversized Spectacles, Orion’s shape and size is closer to a traditional pair of frames.
Even so, they’re still noticeably wide and chunky. The thick black frames, which house an array of cameras, sensors and custom silicon, may work on some face shapes, but I don’t think they are particularly flattering. And while they look less cartoonish than Snap’s AR Spectacles, I’m pretty sure I’d still get some funny looks if I walked around with them in public. At 98 grams, the glasses were noticeably bulkier than my typical prescription lenses, but never felt heavy.
1 / 4
Meta Orion glasses
Advertisement
Meta’s Orion glasses are still quite bulky.
In addition to the actual glasses, Orion relies on two other pieces of kit: a 182-gram “wireless compute puck, which needs to stay near the glasses, and an electromyography (EMG) wristband that allows you to control the AR interface with a series of hand gestures. The puck I saw was equipped with its own cameras and sensors, but Meta told me they’ve since simplified the remote control-shaped device so that it’s mainly used for connectivity and processing.
When I first saw the three-piece Orion setup at Connect, my first thought was that it was an interesting compromise in order to keep the glasses smaller. But after trying it all together, it really doesn’t feel like a compromise at all.
You control Orion’s interface through a combination of eye tracking and gestures. After a quick calibration the first time you put the glasses on, you can navigate the AR apps and menus by glancing around the interface and tapping your thumb and index finger together. Meta has been experimenting with wrist-based neural interfaces for years, and Orion’s EMG wristband is the result of that work. The band, which feels like little more than a fabric watch band, uses sensors to detect the electrical signals that occur with even subtle movements of your wrist and fingers. Meta then uses machine learning to decode those signals and send them to the glasses.
Advertisement
That may sound complicated, but I was surprised by how intuitive the navigation felt. The combination of quick gestures and eye tracking felt much more precise than hand tracking controls I’ve used in VR. And while Orion also has hand-tracking abilities, it feels much more natural to quickly tap your fingers together than to extend your hands out in front of your face.
What it’s like to use Orion
Meta walked me through a number of demos meant to show off Orion’s capabilities. I asked Meta AI to generate an image, and to come up with recipes based on a handful of ingredients on a shelf in front of me. The latter is a trick I’ve with the Ray-Ban Meta Smart Glasses, except with Orion, Meta AI was also able to project the recipe steps onto the wall in front of me.
I also answered a couple of video calls, including one from a surprisingly lifelike . I watched a YouTube video, scrolled Instagram Reels, and dictated a response to an incoming message. If you’ve used mixed reality headsets, much of this will sound familiar, and a lot of it wasn’t that different from what you can do in VR headsets.
The magic of AR, though, is that everything you see is overlaid onto the world around you and your surroundings are always fully visible. I particularly appreciated this when I got to the gaming portion of the walkthrough. I played a few rounds of a Meta-created game called Stargazer, where players control a retro-looking spacecraft by moving their head to avoid incoming obstacles while shooting enemies with finger tap gestures. Throughout that game, and a subsequent round of AR Pong, I was able to easily keep up a conversation with the people around me while I played. As someone who easily gets motion sick from VR gaming, I appreciated that I never felt disoriented or less aware of my surroundings.
Advertisement
Orion’s displays rely on silicon carbide lenses, micro-LED projectors and waveguides. The actual lenses are clear, though they can dim depending on your environment. One of the most impressive aspects is the 70-degree field of view. It was noticeably wider and more immersive than what I experienced with Snap’s AR Spectacles, which have a 46-degree field of view. At one point, I had three windows open in one multitasking view: Instagram Reels, a video call and a messaging inbox. And while I was definitely aware of the outer limits of the display, I could easily see all three windows without physically moving my head or adjusting my position. It’s still not the all-encompassing AR of sci-fi flicks, but it was wide enough I never struggled to keep the AR content in view.
What was slightly disappointing, though, was the resolution of Orion’s visuals. At 13 pixels per degree, the colors all seemed somewhat muted and projected text was noticeably fuzzy. None of it was difficult to make out, but it was much less vivid than what I saw on , which have a 37 pixels per degree resolution.
Meta’s VP of Wearable Devices, Ming Hua, told me that one of the company’s top priorities is to increase the brightness and resolution of Orion’s displays. She said that there’s already a version of the prototype with twice the pixel density, so there’s good reason to believe this will improve over time. She’s also optimistic that Meta will eventually be able to bring down the costs of its AR tech, eventually reducing it to something “similar to a high end phone.”
What does it mean?
Leaving my demo at Meta’s headquarters, I was reminded of the first time I tried out a prototype of the wireless VR headset that would eventually become known as Quest, back in 2016. Called at the time, it was immediately obvious, even to an infrequent VR user, that the wireless, room-tracking headset was the future of the company’s VR business. Now, it’s almost hard to believe there was a time when Meta’s headsets weren’t fully untethered.
Advertisement
Orion has the potential to be much bigger. Now, Meta isn’t just trying to create a more convenient form factor for mixed reality hobbyists and gamers. It’s offering a glimpse into how it views the future, and what our lives might look like when we’re no longer tethered to our phones.
For now, Orion is still just that: a glimpse. It’s far more complex than anything the company has attempted with VR. Meta still has a lot of work to do before that AR-enabled future can be a reality. But the prototype shows that much of that vision is closer than we think.
Epic Games is about to host big in-game event ahead of its next throwback season — and it could be pretty musical.
If you want to watch the event, here’s what you need to know.
The Remix: The Prelude event is set to kick off at 6:30PM ET / 3:30PM PT. Epic suggests logging in early so that you don’t miss the event; events have reached capacity in the past.
If you’re in the game, jump into a Battle Royale or Zero Build match ahead of the event’s start time and head to the Restored Reels location.
Advertisement
If you aren’t able to watch in-game, there will almost certainly be a bunch of streamers live-streaming the show, so check Twitch or YouTube to find one to watch.
Cybercriminals are attacking surveillance cameras from multiple manufacturers, leveraging two zero-day vulnerabilities to take over the endpoints, watch and manipulate the feeds, and more.
Cybersecurity researchers GreyNoise claim to have spotted the attacks after their AI-powered analysis tool Sift raised an alarm that crooks are attacking network device interface-enabled (NDI) pan-tilt-zoom (PTZ) cameras from multiple manufacturers.
The cameras can be found in different environments, including industrial and manufacturing plants, where they are used for machinery surveillance, and quality control. They can also be found in business conferences, used for high-definition video streaming and remote presentations, in healthcare (used for telehealth consultations and surgical live streams), state and local government environments, including courtrooms, and houses of worship, where they’re used for live streaming.
Waiting on patches
GreyNoise says the affected devices are typically high-cost, with some models costing several thousand dollars.
Advertisement
Affected devices use VHD PTZ camera firmware < 6.3.40 used in PTZOptics, Multicam Systems SAS, and SMTAV Corporation devices based on Hisilicon Hi3516A V600 SoC V60, V61, and V63.
The vulnerabilities in question are now tracked as CVE-2024-8956, and CVE-2024-8957. The former is deemed critical (9.1), and the latter high (7.2). When exploited, the vulnerabilities can be used to completely take over the cameras, view and manipulate video feeds, disable different camera operations, and assimilate the devies into a botnet.
While for some models, patches have already been released, others remain vulnerable. According to BleepingComputer, PTZOptics released a security update on September 17, but since multiple models reached end-of-life status (PT20X-NDI-G2 and PT12X-NDI-G2) not all were patched. Furthermore, PT20X-SE-NDI-G3, and PT30X-SE-NDI-G3 are still pending a fix.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
Advertisement
Chances are, the list of affected models is a lot longer than what the researchers determined at this time. Users are advised to check with their manufacturer if they’ve released a fix for the abovementioned flaws.
You haven’t fully shopped the best headphone deals until you’ve had a look at everything Beats has to offer. It’s one of the most popular headphone brands on the planet, but unlike Bose headphone deals and even Sony headphone deals, Beats headphone deals often turn out some significant price drops. Whether you’re looking for an in-ear option or a set of the best wireless headphones Beats has you covered, and we’ve got you covered when it comes to the check-out line. Below you’ll find all of the best Beats headphone deals. They include some substantial discounts on the Beats Studio 3 and Powerbeats Pro headphones, but if you’d like to consider some other options be sure to check out what’s going on among today’s best AirPods deals, best AirPods Pro deals, and best AirPods Max deals.
If you want to go for a pair of true wireless earbuds there are quite a few good options from Beats. There are some excellent deals on budget options, including some deals on refurbished Beats Studio Buds. One of the higher end options is the Beats Powerbeats Pro, which are seeing a great price drop right now.
If over-the-ear headphones are your listening preference, there’s plenty of savings in store on a new set of Beats. The Beats Solo 3 headphones are pretty much a regular when it comes to Beats deals, and that’s the case right now as well. You’ll also find some pretty impressive price drops on the Beats Studio 3 and Beats Studio Pro headphones.
What it all boils down to when picking either AirPods or Beats is what your budget is. In almost every straight comparison between an AirPods product or a Beats product, the AirPods will always win, like for example, when comparing the Studio Pro vs. Apple AirPods Max. That said, the AirPods Max is a couple of hundred dollars more expensive, and this will hold true of pretty much all AirPods to Beats comparisons. So, ultimately, if you can afford an AirPod, that’s generally the better audio quality, but if you feel that it’s out of your budget range, the Beats are cheaper and are essentially just as good.
In case you don’t know what Claude is, it’s one of the major competitors to ChatGPT and Gemini. Just like those two platforms, Claude has different models of varying capabilities. There’s the Sonnet model and the Haiku model. You can use the former model for free on the dedicated website. Haiku, on the other hand, requires a paid membership.
Claude now has a desktop app
Just recently, Anthropic announced that Claude now has the ability to perform actions on your computer by itself. Well, the desktop app doesn’t grant Claude that ability. What it does is give users a quick and easy way to access the chatbot. Just like the smartphone app, it provides a simple interface that lets you use the chatbot.
Claude’s interface isn’t very different from most other chatbot interfaces. The star of the show is the text field. You’ll see your conversation fill the screen as you write.
Advertisement
If you don’t have anything in the text field, you’ll see various other UI elements floating around. Right under the text field, you’ll see your recent conversations. The app will show up to six recent conversations unless you click on the View all button.
In between the text field and the recent conversations, you’ll see recent updates and news regarding Claude. Each bit of news will sit in a rounded rectangular button, and they’ll be stacked on one another. If you click on one of them, then you’ll be taken to a webpage on your default browser.
Just like most other chatbots, there’s a panel on the left of the screen that will list your recent conversations and let you access your account settings. To see your account settings, click on your name at the bottom of the panel.
Up top, you’ll see the menu that will have the File, Edit, View, and Help sections (if you’re using the Windows application). Clicking on the Settings button from the File menu will let you change the keyboard shortcut. This shortcut will summon a little floating text field.
Advertisement
The application is available for free. You’ll have to sign in to use it, but you’ll be able to use it even if you’re a free user.
There’s a new Star Wars show coming out in just over a month. Star Wars: Skeleton Crew premieres on December 3 with two episodes on Disney+. The streamer just released a brand-new trailer to prove it.
For the uninitiated, this is a live action show set during the same time period as and , or around ten years after the events of Return of the Jedi. We don’t know too much about the plot, other than it involves some suburban kids finding a spaceship and going on an adventure.
If that reminds you of some classic flicks from the 1980s, you aren’t alone. The whole thing seems to be an homage to Steven Spielberg, Amblin and the vast array of kid-friendly adventures from that decade. People have been calling it “Goonies in space,” but a more modern reference would be “Stranger Things in space.”
The trailer also showcases one of the things I’m personally most interested in with this show. Some of it is set in settled planets, likely core worlds such as Coruscant. There are suburban neighborhoods and schools. There are people going to work. We haven’t gotten many looks as to how regular people live in a galaxy far, far away. That’s my jam, right there.
Advertisement
The showrunners here are Jon Watts and Christopher Ford, who made the recent Spider-Man movies for the MCU. The cast is primarily composed of unknown kids, including an elephant alien who may or may not be related to Mos Espa band leader Max Rebo. However, Jude Law is in it. He’s likely playing a Jedi, though there could be a twist there.
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More
While many existing risks and controls can apply to generative AI, the groundbreaking technology has many nuances that require new tactics, as well.
Models are susceptible to hallucinations, or the production of inaccurate content. Other risks include the leaking of sensitive data via a model’s output, tainting of models that can allow for prompt manipulation and biases as a consequence of poor training data selection or insufficiently well-controlled fine-tuning and training.
Ultimately, conventional cyber detection and response needs to be expanded to monitor for AI abuses — and AI should conversely be used for defensive advantage, said Phil Venables, CISO of Google Cloud.
Advertisement
“The secure, safe and trusted use of AI encompasses a set of techniques that many teams have not historically brought together,” Venables noted in a virtual session at the recent Cloud Security AllianceGlobal AI Symposium.
Lessons learned at Google Cloud
Venables argued for the importance of delivering controls and common frameworks so that every AI instance or deployment does not start all over again from scratch.
“Remember that the problem is an end-to-end business process or mission objective, not just a technical problem in the environment,” he said.
Nearly everyone by now is familiar with many of the risks associated with the potential abuse of training data and fine-tuned data. “Mitigating the risks of data poisoning is vital, as is ensuring the appropriateness of the data for other risks,” said Venables.
Advertisement
Importantly, enterprises should ensure that data used for training and tuning is sanitized and protected and that the lineage or provenance of that data is maintained with “strong integrity.”
“Now, obviously, you can’t just wish this were true,” Venables acknowledged. “You have to actually do the work to curate and track the use of data.”
This requires implementing specific controls and tools with security built in that act together to deliver model training, fine-tuning and testing. This is particularly important to assure that models are not tampered with, either in the software, the weights or any of their other parameters, Venables noted.
“If we don’t take care of this, we expose ourselves to multiple different flavors of backdoor risks that can compromise the security and safety of the deployed business or mission process,” he said.
Advertisement
Filtering to fight against prompt injection
Another big issue is model abuse from outsiders. Models may be tainted through training data or other parameters that get them to behave against broader controls, said Venables. This could include adversarial tactics such as prompt manipulation and subversion.
Venables pointed out that there are plenty of examples of people manipulating prompts both directly and indirectly to cause unintended outcomes in the face of “naively defended, or flat-out unprotected models.”
This could be text embedded in images or other inputs in single or multimodal models, with problematic prompts “perturbing the output.”
“Much of the headline-grabbing attention is triggering on unsafe content generation, some of this can be quite amusing,” said Venables.
Advertisement
It’s important to ensure that inputs are filtered for a range of trust, safety and security goals, he said. This should include “pervasive logging” and observability, as well as strong access control controls that are maintained on models, code, data and test data, as well.
“The test data can influence model behavior in interesting and potentially risky ways,” said Venables.
Controlling the output, as well
Users getting models to misbehave is indicative of the need to manage not just the input, but the output, as well, Venables pointed out. Enterprises can create filters and outbound controls — or “circuit breakers” —around how a model can manipulate data, or actuate physical processes.
“It’s not just adversarial-driven behavior, but also accidental model behavior,” said Venables.
Advertisement
Organizations should monitor for and address software vulnerabilities in the supporting infrastructure itself, Venables advised. End-to-end platforms can control the data and the software lifecycle and help manage the operational risk of AI integration into business and mission-critical processes and applications.
“Ultimately here it’s about mitigating the operational risks of the actions of the model’s output, in essence, to control the agent behavior, to provide defensive depth of unintended actions,” said Venables.
He recommended sandboxing and enforcing the least privilege for all AI applications. Models should be governed and protected and tightly shielded through independent monitoring API filters or constructs to validate and regulate behavior. Applications should also be run in lockdown loads and enterprises need to focus on observability and logging actions.
In the end, “it’s all about sanitizing, protecting, governing your training, tuning and test data. It’s about enforcing strong access controls on the models, the data, the software and the deployed infrastructure. It’s about filtering inputs and outputs to and from those models, then finally making sure you’re sandboxing more use and applications in some risk and control framework that provides defense in depth.”
Advertisement
VB Daily
Stay in the know! Get the latest news in your inbox daily
You must be logged in to post a comment Login