Let’s get this straight up front: The Vive Focus Vision isn’t a competitor to the Meta Quest 3, or the recently released Quest 3S. At $999, how could it be? Instead, it’s another stab at the high-end VR market for HTC Vive, an audience it’s cultivated since the launch of the first Vive headset in 2016. While Meta has leaned more towards cheaper and more mainstream VR headsets over the last decade, HTC Vive has done practically the opposite, aiming for VR gearheads and enterprise customers with PC headsets like the Vive Pro 2 and feature-rich standalone models like the Focus 3.
You can think of the Vive Focus Vision as a cross between the Focus 3 and last year’s goggle-like XR Elite. It’s a standalone headset with two 16MP color cameras for mixed reality, built-in eye tracking and automatic interpupillary distance (IPD) adjustment. It could also be appealing to PC gamers with its $149 DisplayPort wired streaming kit, which gives you an uncompressed view of high-end VR experiences like Half-Life: Alyx.
The Vive Focus Vision is a sleek premium standalone VR headset that can also deliver solid PC VR. But it’s also running aging hardware, it’s riddled with software issues and it’s expensive compared to the Meta Quest 3.
As intriguing as its new features are, though, the more I tested the Focus Vision, the more it felt like a missed opportunity for HTC’s Vive VR platform. For one, it’s running the same Snapdragon XR2 chip as the Focus 3 and Quest 2. That chip originally debuted in 2020, and it simply seems inexcusable in a high-end headset today. Both the $300 Quest 3S and $500 Quest 3 sport the XR2 Gen 2 processor, which is 2.5 times faster than the original chip and also has up to eight times faster AI processing. For a high-end headset at the tail-end of 2024, I would have expected HTC to at least match the power of far cheaper competitors, or – even better – to include Qualcomm’s newer XR2+ Gen 2 chip.
The Focus Vision is also still using older Fresnel lens optics, which are prone to artifacts and light bleeding, instead of the sharper pancake lenses in the Quest 3. At least HTC shoved in 12GB of RAM this time around, compared to the 8GB found on the Focus 3 and Quest 3. And the company still has a resolution advantage over the Quest 3: The Focus Vision delivers 2,448 by 2,448 pixels per eye, compared to Meta’s 2,064 by 2,208 pixels per eye. HTC Vive’s 120-degree field of view also delivers a greater sense of immersion than the 110-degree FOV in the Quest 3.
The Focus Vision shines best when it comes to overall build quality and comfort. Even though it’s made of plastic like the Quest 3, it’s a sturdy device that clearly looks more high-end than Meta’s offerings. Ample cushioning helps the Focus Vision rest comfortably on your forehead and behind your noggin. And its halo-like head strap, together with the ability to flip up the visor, makes it easy to slip on over large glasses.
Best of all, the Focus Vision features a removable battery at the back of its headstrap. That provides a helpful counterweight to the bulky front-end, and it could conceivably let you stay in wireless VR all day if you’ve got enough spare batteries. The headset also has a small built-in battery, which allows you to stay in your VR session even when you’re swapping out the larger rear power cell. This is the sort of thing we’ll probably never see in a consumer Quest headset, as it’s simply too expensive to implement, and Meta isn’t building for enterprise customers who demand continuous wireless. (And to be fair, it’s also easy to just plug the Quest 3 into a USB battery pack.)
In use
Using the Focus Vision doesn’t feel much different from using the Focus 3, a headset I liked when I reviewed it in 2021, even as I warned that it was a business-focused device no consumer should actually buy. That’s not too surprising, I suppose, since both headsets share the same basic design, displays and CPU. In standalone VR mode, playing the Maestro demo genuinely made me feel like I was conducting an orchestra (an experience I also had on the Quest 3S), and I enjoyed hopping around a few virtual worlds in VR Chat.
Other experiences, like the classic underwater VR short theBlu, felt just as immersive as they did on clunkier tethered headsets. While I could tell the Focus Vision didn’t have the best lenses around, and I wished it had more graphical horsepower, it still delivered a thrill as I stood in the middle of a sunken shipwreck, waiting for an enormous blue whale to pass by. It was also nice to see the Vive app storefront a bit more populated than it was in 2021. Still, it pales in comparison to Meta’s Quest library, which has far more titles and plenty of compelling exclusives (including Star Wars titles like the Vader Immortal series and Tales from the Galaxy’s Edge).
We already knew that HTC Vive could build a decent headset – the Focus Vision’s controllers and speakers are just as capable as they were on the previous model – but what about the Focus Vision’s new features, like mixed reality and eye tracking? Unfortunately, there’s not much to say just yet. There are a handful of mixed reality experiences available, like the creation app Figmin XR and the shooter Yuki, but they’re not exactly mind blowing. The Focus Vision’s 16MP mixed reality cameras deliver a fuzzy view of the real world (similar to the Quest 3 and 3S), so it’s not nearly as immersive as something like the far pricier Apple Vision Pro.
The Focus Vision’s eye tracking feature also refused to work for me entirely, even after I tried to calibrate it without glasses multiple times. That didn’t seem like a huge loss though, as there are only a handful of games in the Vive store that support it (like Capsule Critters and Mare). It’s a feature that seems more useful for developers who want to build their own eye-tracking experiences than for people who just want to play games with it.
Solid standalone VR
A better selling point for the Focus Vision is its ability to stream uncompressed desktop VR experiences, but only if you invest in the $149 DisplayPort streaming kit. While Meta’s Quest headsets have been able to connect to PCs for years, first via USB-C cables and then wirelessly, they deliver a heavily compressed view of desktop VR. By going straight to the DisplayPort connection on your video card, HTC Vive aims to deliver something closer to what we saw with the Vive Pro 2 and other dedicated PC headsets.
After playing half an hour of Half-Life: Alyx, I can confirm that the Focus Vision delivers a solid desktop VR experience, especially for a standalone headset. But given that it already costs $999 and requires an additional $149 accessory to get there, it’s hard to tell who will find this compelling. True VR heads have likely already invested in serious desktop setups like the Valve Index, or the recent Bigscreen Beyond (which uses absurdly clear micro-OLED screens, like the Vision Pro).
The beauty of connecting standalone headsets to PCs has always been about value. It was a huge bonus when the $300 Quest 2 could deliver adequate desktop VR. But that just isn’t the case for the Focus Vision. I suppose if you’re a developer who wants a single device for testing both standalone VR and complex desktop experiences, or you work for a business that needs multi-use VR headsets, the Focus Vision could fill some sort of need. But either way, that seems like a fairly niche use case.
The Focus Vision’s auto-IPD adjustment, which scans your eyes and physically moves the lenses to be in the ideal position, was also hit-or-miss for me. Sometimes it worked just fine and landed near my prescribed IPD of 66. But sometimes the automatic process would land on an IPD of around 72, which made everything look a bit blurry. And occasionally the feature just wouldn’t work at all. Auto adjustment is helpful if you’re sharing a headset with other people, but otherwise manually choosing your preferred IPD is far more useful.
During my typical standalone usage, the Focus Vision lasted for around one hour and 45 minutes, close to the two-hour estimate from HTC Vive. That’s less than what I typically see on the Quest 3 and 3S, but at least you can purchase additional batteries and easily swap them in. The built-in battery, which enables hot swapping, lasts for about twenty minutes, but it’s not something you’ll typically put much strain on.
Should you buy the Vive Focus Vision?
Despite my issues, the Focus Vision still sits in an interesting position in the world of VR – especially since Meta gave up on the Quest Pro, which would have been a close competitor. It still delivers decent standalone VR, despite using an aging CPU and lenses. And if you don’t want the clutter of SteamVR sensors in your office, it’s a smart way to tap into powerful PCs for more immersive VR experiences (so long as you buy the $149 DisplayPort kit). But for a $999 headset, it’s a shame HTC Vive didn’t try harder to make the Focus Vision stand out.
Watching old episodes of ER won’t make you a doctor, but watching videos may be all the training a robotic surgeon’s AI brain needs to sew you up after a procedure. Researchers at Johns Hopkins University and Stanford University have published a new paper showing off a surgical robot as capable as a human in carrying out some procedures after simply watching humans do so.
The research team tested their idea with the popular da Vinci Surgical System, which is often used for minimally invasive surgery. Programming robots usually requires manually inputting every movement that you want them to make. The researchers bypassed this using imitation learning, a technique that imparts human-level surgical skills to the robots by letting them observe how humans do the work.
The researchers put together hundreds of videos recorded from wrist-mounted cameras demonstrating how human doctors do three particular tasks: needle manipulation, tissue lifting and suturing. They essentially used the kind of training that powers ChatGPT and other AI models, but instead of text, the model absorbed information about the way human hands and the tools they hold move. This kinematic data turns movement into math the model can apply to carry out the procedures on request. After watching the videos, the AI model could use the da Vinci platform to mimic the same techniques. It’s not too dissimilar from how Google is experimenting with teaching AI-powered robots to navigate spaces and complete tasks by showing them videos.
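To give a rough sense of how that kind of training works, here’s a minimal behavior-cloning sketch in Python. It simply regresses a small network onto an expert’s recorded movements; the shapes, network and random stand-in data are illustrative assumptions, not the Johns Hopkins and Stanford team’s actual model.

```python
# Minimal behavior-cloning sketch: learn a mapping from observations to
# kinematic actions (e.g., a 7-DoF tool pose delta) by imitating recorded
# human demonstrations. Shapes, model size and the random stand-in data are
# illustrative assumptions, not the actual surgical model.
import torch
import torch.nn as nn

obs_dim, act_dim = 512, 7  # e.g., an image embedding in, a tool pose delta out
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Stand-in for (observation, expert_action) pairs extracted from demonstration
# videos and their synchronized kinematic logs.
demo_obs = torch.randn(1024, obs_dim)
demo_act = torch.randn(1024, act_dim)

for step in range(1000):
    idx = torch.randint(0, demo_obs.shape[0], (64,))
    pred = policy(demo_obs[idx])
    loss = nn.functional.mse_loss(pred, demo_act[idx])  # imitate the expert
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The real system conditions on camera input and drives the da Vinci arms, but the core loop is the same idea: predict the motions the human demonstrators made.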
“It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery. We believe this marks a significant step forward toward a new frontier in medical robotics,” senior author and JHU assistant professor Axel Krieger said in a release. “The model is so good at learning things we haven’t taught it. Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do.”
The idea of an AI-controlled robot holding blades and needles around your body might sound scary, but the precision of machines can make them better than human doctors in some cases, and robotic surgery is already increasingly common. A robot performing complex procedures independently might actually be safer, with fewer medical errors. Human doctors could have more time and energy to focus on unexpected complications and the more difficult parts of a surgery that machines aren’t up to handling yet.
The researchers have plans to test using the same techniques to teach an AI how to do a complete surgery. They’re not alone in pursuing the idea of AI-assisted robotic healthcare. Earlier this year, AI dental technology developer Perceptive showcased the success of an AI-guided robot performing a dental procedure on a human without supervision.
Strava, a popular app for tracking fitness activities, is expanding its Heatmaps feature to help improve the safety of its users. The update should be especially useful now for users in the Northern Hemisphere, which is heading into winter with reduced daylight.
The new Night and Weekly Heatmaps were announced by the San Francisco-based company on Wednesday and are available to all Strava subscribers. As the name of the feature suggests, the Heatmaps show where Strava users are choosing to exercise, with dark thick lines showing well-used routes, and light thin lines showing less popular ones.
First up, the new Night Heatmaps are ideal for those doing their activities in the late evening or early morning hours, when there’s less light. They show the most popular areas for outdoor activities from sunset to sunrise, helping athletes better plan their routes during that window. If it’s a new area for you, you may also want to cross-check the Night Heatmap data with Google Street View images to get a better understanding of the place.
Weekly Heatmaps, on the other hand, show activity data from the last seven days, so users can see which trails and roads are currently in use, particularly during seasonal transitions when conditions may be affected by the weather.
“Our global community powers our Heatmaps and now we’ve made it easier for our community members to build routes with confidence, regardless of the season or time of day,” Matt Salazar, Strava’s chief product officer, said in Wednesday’s announcement about the new features. “We are continually improving our mapping technology to make human-powered movement easier for all skill levels.”
Strava has also shared a useful at-a-glance guide to all four of its Heatmaps (Night, Weekly, Global and Personal):
Night (new): Discover the most frequented areas between sunset and sunrise; ideal for evening or early morning users.
Weekly (new): Stay updated with the latest data from the past seven days; perfect for adjusting plans around seasonal changes or unexpected closures.
Global (existing): Viewable by anyone regardless of whether you have a Strava account, the Global Heatmap allows you to see what areas are most popular around the world based on community activity uploads.
Personal (existing): A one-of-a-kind illustration showing the record of everywhere you’ve logged a GPS activity. This heatmap is private and only available to you.
Google’s Gemini is useful as an educational tool to help you study for that exam. However, Gemini is more of an “everything chatbot,” built to handle just about any request rather than to teach. Now, Google has a new model for people looking for a more robust educational tool. Google calls it Learn About, and it could give other tools a run for their money.
Say what you want about Google’s AI, but the company has been hard at work making AI tools centered around teaching rather than cheating. For example, it has tools in Android Studio that guide programmers and help them learn to code. There’s also NotebookLM, the tool that takes your uploaded educational content and helps you digest it, along with its Audio Overviews feature, which turns your uploaded media into a podcast-style educational discussion.
So, Google has a strong focus on education with its AI tools. Let’s just hope that other companies will follow suit.
Google’s new AI tool is called Learn About
This tool is pretty self-explanatory, as it focuses on giving you more textbook-style explanations for your questions. Rather than simply giving you answers, this tool will go the extra mile to be more descriptive with its explanations. Along with that, Learn About will also provide extra context on the subject and give you other educational material on it.
Google achieved this by using a totally different model to power the tool. Rather than using the Gemini model, Learn About uses a model called LearnLM. At this point, we don’t really know much about this model, but we do know that Google steered it more towards providing academic answers.
Gemini’s answer vs. Learn About’s answer
We tested it out by asking what pulsars are, and we compared the answer to what Gemini gave us for the same question. Gemini delivered a pretty fleshed-out explanation in the form of a few paragraphs. It also snagged a few pictures from the internet and pasted the link to a page at the bottom. This is good for a person who’s casually looking up a definition. Maybe that person isn’t looking to learn the ins and outs of what a pulsar is.
There was one issue with Gemini’s answer: one of the images it pasted was of a motorcycle, the Bajaj Pulsar 150. So, while its name technically is Pulsar, a motorcycle shares very few similarities with massive, rapidly spinning neutron stars light-years away from Earth.
What about Learn About?
Learn About also gave an explanation in the form of a few paragraphs; however, its explanation was shorter. It makes up for that with more supplementary material. Along with images, it provided three links (one of which was a YouTube video) and chips with commands like Simplify, Go deeper and Get images (more on the chips below).
Under the chips, you’ll see suggestions of other queries that you can put in for additional context. Lastly, in textbook style, you’ll see colored blocks with additional content. For example, there’s a Why it matters block and a Stop & think block.
Chips
Going back to the aforementioned chips, the Simplify and Get images commands are self-explanatory enough. Tapping/clicking on the Go deeper chip is a bit more interesting: it brings up an Interactive List consisting of a selection of additional queries that provide extra information about pulsars. Each query you select will bring up even more information.
Textbook blocks
Think about the textbooks you used in school, and you’ll be familiar with these blocks. These blocks come in different colors. The Why it matters block tells you why this information is important. Next, the Stop & think block seems to give you a little bit of tangential information. It asks a question and has a button to reveal the answer. It’s a way to get you to think outside of the box a bit.
There’s a Build your vocab box that introduces you to a relevant term and shows you a dictionary-style definition of it. This is a term that the reader is most likely not familiar with.
The next block we encountered was the Test your knowledge block. This one poses a quiz-style question and gives you two options to choose from. Other subjects might offer more choices, but this is what we got in our usage.
We also saw a Common misconception block. This one pretty much explains itself.
Bottom bar
At the very bottom of the screen, you’ll see a bar with some additional chips. One chip shows the title of the current subject, and tapping/clicking on it will bring up a floating window with additional topic suggestions. In our case, we also saw the Interactive List from earlier, this time displayed in a floating window.
One issue
So, do you remember when Gemini gave us the image of the motorcycle? Well, while the majority of Learn About’s images were relevant to the subject, it still retrieved two images of motorcycles. As comical as that is, it shows that Google’s AI still has a ways to go before it’s perfect. However, barring that little mishap, Learn About runs as smoothly as the motorcycle it keeps surfacing pictures of.
Use it today!
You can try Learn About today if you want to check it out. Just go to the Learn About website and you’ll be able to start asking questions. Just know that, as with most Google services, availability will depend on your region. We were able to access it in the U.S. in English, but you may not have it in regions that Google typically overlooks.
You can use it regardless of whether you’re a free or paid user. Please note that Learn About is technically an experiment, which means Google has only released it for testing. Google could potentially lock it behind a paywall after the testing phase, or the feature could disappear down the line entirely. So, you’ll want to get in and use it while you can.
GOG is launching an effort to help make older video games playable on modern hardware. The program will label classic titles that the platform has taken steps to adapt in order to make them compatible with contemporary computer systems, controllers and screen resolutions, all while adhering to its DRM-free policy. The move could bring new life to games of decades past, just as GOG did two years ago with a similar refresh. So far, 92 games have received the preservation treatment.
“Our guarantee is that they work and they will keep working,” the company says in the video announcing the initiative.
Preservation has been a hot topic as more games go digital only. Not only do some platforms now ship without disk drives by default, but ownership over your library is more ephemeral than it seems. After all, most game purchases are really just licenses, and licenses can be revoked (as The Crew players know).
One-bit large language models (LLMs) have emerged as a promising approach to making generative AI more accessible and affordable. By representing model weights with a very limited number of bits, 1-bit LLMs dramatically reduce the memory and computational resources required to run them.
Microsoft Research has been pushing the boundaries of 1-bit LLMs with its BitNet architecture. In a new paper, the researchers introduce BitNet a4.8, a new technique that further improves the efficiency of 1-bit LLMs without sacrificing their performance.
The rise of 1-bit LLMs
Traditional LLMs use 16-bit floating-point numbers (FP16) to represent their parameters. This requires a lot of memory and compute resources, which limits the accessibility and deployment options for LLMs. One-bit LLMs address this challenge by drastically reducing the precision of model weights while matching the performance of full-precision models.
Previous BitNet models used 1.58-bit values (-1, 0, 1) to represent model weights and 8-bit values for activations. This approach significantly reduced memory and I/O costs, but the computational cost of matrix multiplications remained a bottleneck, and optimizing neural networks with extremely low-bit parameters is challenging.
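As a rough illustration of what those 1.58-bit weights look like in practice, here’s a simplified ternary quantizer in Python. It follows the general absmean-style approach described in the BitNet papers, though the single per-tensor scale here is a simplification of the published recipe.

```python
# Simplified ternary ("1.58-bit") weight quantization: every weight is mapped
# to -1, 0 or +1 with a single per-tensor scale. Illustrative only; the
# published BitNet recipe includes further details.
import torch

def ternarize(w: torch.Tensor, eps: float = 1e-5):
    scale = w.abs().mean() + eps            # absmean scale
    w_q = (w / scale).round().clamp(-1, 1)  # values in {-1, 0, +1}
    return w_q, scale

w = torch.randn(4096, 4096)
w_q, scale = ternarize(w)
w_approx = w_q * scale  # dequantized approximation of the original weights
```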
Two techniques help to address this problem. Sparsification reduces the number of computations by pruning activations with smaller magnitudes. This is particularly useful in LLMs because activation values tend to have a long-tailed distribution, with a few very large values and many small ones.
Quantization, on the other hand, uses a smaller number of bits to represent activations, reducing the computational and memory cost of processing them. However, simply lowering the precision of activations can lead to significant quantization errors and performance degradation.
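Here’s a small sketch of both activation-side tricks in isolation: top-K magnitude sparsification and symmetric absmax quantization. Real low-bit kernels fuse and refine these steps (per-token or per-group scales, packed integer formats), so treat this as conceptual rather than as the BitNet implementation.

```python
# Conceptual sketches of the two activation tricks described above.
import torch

def topk_sparsify(x: torch.Tensor, keep_ratio: float = 0.5):
    """Keep only the largest-magnitude activations and zero out the rest."""
    k = max(1, int(x.numel() * keep_ratio))
    threshold = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    return torch.where(x.abs() >= threshold, x, torch.zeros_like(x))

def quantize_activations(x: torch.Tensor, bits: int = 4, eps: float = 1e-5):
    """Symmetric absmax quantization to a signed low-bit grid."""
    qmax = 2 ** (bits - 1) - 1                     # 7 for 4-bit
    scale = x.abs().max() + eps
    x_q = (x / scale * qmax).round().clamp(-qmax, qmax)
    return x_q / qmax * scale                      # dequantized for readability

x = torch.randn(1, 4096)
x_sparse = topk_sparsify(x)           # fewer multiplications downstream
x_low_bit = quantize_activations(x)   # cheaper memory traffic and math
```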
Furthermore, combining sparsification and quantization is challenging, and presents special problems when training 1-bit LLMs.
“Both quantization and sparsification introduce non-differentiable operations, making gradient computation during training particularly challenging,” Furu Wei, Partner Research Manager at Microsoft Research, told VentureBeat.
Gradient computation is essential for calculating errors and updating parameters when training neural networks. The researchers also had to ensure that their techniques could be implemented efficiently on existing hardware while maintaining the benefits of both sparsification and quantization.
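A common workaround for that non-differentiability is the straight-through estimator, which applies the rounding step in the forward pass but treats it as the identity when gradients flow backward. The sketch below shows the generic trick; the exact training recipe used for BitNet may differ.

```python
# Straight-through estimator (STE): round in the forward pass, but pass
# gradients through unchanged in the backward pass.
import torch

class STERound(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.round()        # non-differentiable step

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output      # pretend the forward op was the identity

x = torch.randn(8, requires_grad=True)
y = STERound.apply(x * 7) / 7   # crude 4-bit-style rounding of x
y.sum().backward()              # x.grad exists, so training can proceed
```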
BitNet a4.8
BitNet a4.8 addresses the challenges of optimizing 1-bit LLMs through what the researchers describe as “hybrid quantization and sparsification.” They achieved this by designing an architecture that selectively applies quantization or sparsification to different components of the model based on the specific distribution pattern of activations. The architecture uses 4-bit activations for inputs to attention and feed-forward network (FFN) layers. It uses sparsification with 8 bits for intermediate states, keeping only the top 55% of the parameters. The architecture is also optimized to take advantage of existing hardware.
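Putting those pieces together, the sketch below shows roughly where each treatment lands within one transformer block under this hybrid scheme: 4-bit quantization on the inputs to attention and the FFN, and a sparsified 8-bit path for the intermediate states. The helper functions are crude stand-ins for real fused kernels, and the layer sizes are arbitrary.

```python
# Rough layout of the hybrid scheme across one transformer block. The helpers
# are crude stand-ins for fused low-bit kernels; only the bit-widths and the
# 55% figure follow the description above.
import torch

def quant4(x, eps=1e-5):                 # 4-bit absmax quantization
    s = x.abs().max() + eps
    return (x / s * 7).round().clamp(-7, 7) / 7 * s

def sparse8(x, keep=0.55, eps=1e-5):     # top-K sparsification + 8-bit quantization
    k = max(1, int(x.numel() * keep))
    thr = x.abs().flatten().kthvalue(x.numel() - k + 1).values
    x = torch.where(x.abs() >= thr, x, torch.zeros_like(x))
    s = x.abs().max() + eps
    return (x / s * 127).round().clamp(-127, 127) / 127 * s

hidden = torch.randn(1, 4096)
attn_in = quant4(hidden)                        # attention inputs: 4-bit
ffn_in = quant4(hidden)                         # FFN inputs: 4-bit
intermediate = sparse8(torch.randn(1, 11008))   # intermediate states: sparse 8-bit
```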
“With BitNet b1.58, the inference bottleneck of 1-bit LLMs switches from memory/IO to computation, which is constrained by the activation bits (i.e., 8-bit in BitNet b1.58),” Wei said. “In BitNet a4.8, we push the activation bits to 4-bit so that we can leverage 4-bit kernels (e.g., INT4/FP4) to bring 2x speed up for LLM inference on the GPU devices. The combination of 1-bit model weights from BitNet b1.58 and 4-bit activations from BitNet a4.8 effectively addresses both memory/IO and computational constraints in LLM inference.”
BitNet a4.8 also uses 3-bit values to represent the key (K) and value (V) states in the attention mechanism. The KV cache is a crucial component of transformer models. It stores the representations of previous tokens in the sequence. By lowering the precision of KV cache values, BitNet a4.8 further reduces memory requirements, especially when dealing with long sequences.
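Here’s what low-bit KV-cache storage looks like in sketch form: keys and values are quantized before they’re cached, trading a little precision for far less memory on long sequences. The 3-bit figure follows the article; the per-vector scaling and int8 container are assumptions for illustration, since a real implementation would pack the values tightly.

```python
# Low-bit KV-cache sketch: quantize keys/values before caching them.
import torch

def quantize_kv(x: torch.Tensor, bits: int = 3, eps: float = 1e-5):
    qmax = 2 ** (bits - 1) - 1                          # 3 for 3-bit
    scale = x.abs().amax(dim=-1, keepdim=True) + eps    # per-vector scale
    x_q = (x / scale * qmax).round().clamp(-qmax, qmax).to(torch.int8)
    return x_q, scale                                   # cache these instead of fp16

def dequantize_kv(x_q: torch.Tensor, scale: torch.Tensor, bits: int = 3):
    qmax = 2 ** (bits - 1) - 1
    return x_q.float() / qmax * scale

k = torch.randn(1, 32, 128, 64)   # (batch, heads, sequence length, head dim)
k_q, k_scale = quantize_kv(k)     # int8 container holding 3-bit-range values
k_restored = dequantize_kv(k_q, k_scale)
```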
The promise of BitNet a4.8
Experimental results show that BitNet a4.8 delivers performance comparable to its predecessor BitNet b1.58 while using less compute and memory.
Compared to full-precision Llama models, BitNet a4.8 reduces memory usage by a factor of 10 and achieves 4x speedup. Compared to BitNet b1.58, it achieves a 2x speedup through 4-bit activation kernels. But the design can deliver much more.
“The estimated computation improvement is based on the existing hardware (GPU),” Wei said. “With hardware specifically optimized for 1-bit LLMs, the computation improvements can be significantly enhanced. BitNet introduces a new computation paradigm that minimizes the need for matrix multiplication, a primary focus in current hardware design optimization.”
The efficiency of BitNet a4.8 makes it particularly suited for deploying LLMs at the edge and on resource-constrained devices. This can have important implications for privacy and security. By enabling on-device LLMs, users can benefit from the power of these models without needing to send their data to the cloud.
Wei and his team are continuing their work on 1-bit LLMs.
“We continue to advance our research and vision for the era of 1-bit LLMs,” Wei said. “While our current focus is on model architecture and software support (i.e., bitnet.cpp), we aim to explore the co-design and co-evolution of model architecture and hardware to fully unlock the potential of 1-bit LLMs.”
Starfish Space has closed a new tranche of funding led by a major defense tech investor as it looks to launch three full-size satellite servicing and inspection spacecraft in 2026.
The Washington-based startup’s Otter spacecraft is designed for two primary missions: extending the operational life of expensive satellites in geostationary orbit (GEO) and disposing of defunct satellites in low Earth orbit (LEO). It’s a series of capabilities that have never been available for satellite operators, who launch their satellites with the expectation that they’ll only have a limited span of useful life.
The aim, as Starfish CEO and co-founder Austin Link put it in a recent interview, is to “make it affordable enough that the benefits of having your satellite serviced outweigh the costs.”
The $29 million round was led by Shield Capital, a venture firm focused on funding technologies that will affect U.S. national security. It has participated in just a handful of other deals in the space industry. The round also includes participation from new investors Point72 Ventures, Booz Allen Ventures, Aero X Ventures, Trousdale Ventures, TRAC VC, and existing investors Munich Re Ventures, Toyota Ventures, NFX, and Industrious Ventures.
“You start a company because you want to build satellites, not because you want to fundraise,” Link told TechCrunch. Link founded Starfish in 2019 with Trevor Bennett after the pair worked as flight sciences engineers at Blue Origin. They raised $7 million in 2021 and $14 million two years later. Starfish launched its first demonstration mission, a sub-scale spacecraft fittingly called Otter Pup, last summer.
Although that mission did not quite go according to plan, Starfish has racked up several wins since then, including three separate contracts for full-size Otter spacecraft. That includes a $37.5 million deal with the U.S. Space Force for a first-of-its-kind docking and maneuvering mission with a defense satellite in GEO and a contract with major satellite communications company Intelsat for life extension services. The third contract, a $15 million NASA mission to inspect multiple defunct satellites in LEO, was announced while Starfish was in the middle of fundraising, Link said.
Starfish purposefully set out to find investors that had experience helping their portfolio companies navigate selling to the government, Link said. “The government is a customer that it sometimes can be harder to scale with, so having investors that understood the process a little better … we thought they’d be good additions to our cap table.”
Link added that the company is seeing a “fairly even split” in demand between government and commercial customers.
Satellite servicing, life extension, and satellite disposal are “exciting first steps,” Link said, but they’re stepping stones on the way to developing a broader suite of capabilities for even more ambitious missions on orbit.
“Along the way, we end up with this set of autonomy and robotics technologies and capabilities and datasets that allow us to go eventually do broadly a set of complex robotic or servicing or ISAM-type missions in space that maybe stretch a little beyond what we do with the Otter,” he said. “I think a lot of those are a long ways off, and not necessarily where our focus is right now … but some of the effort that goes into the Otter today and is funded through this funding round, and some of the growth there leads to a longer term where Starfish Space can have a broad impact on the way that humans go out into the universe.”