Liv, a pioneer in mixed reality capture, has partnered with Meta to become the official capture solution for Meta Quest wireless virtual reality headsets.
Founded by AJ “DrDoom” Shewki, Liv has inked a multi-year partnership with Meta to bring Liv’s mixed reality capture and virtual camera solutions to developers publishing on Meta Quest, as well as to creators who want to use those features in Meta Rift and Quest apps. Developers are adding the capability now for standalone headsets, and it should be available for consumers to use in VR games in the coming weeks.
A big part of game marketing today combines trailers with video content made by the creator and player community inside games, and Liv wants to keep supporting developers in their mission to reach as many creators, players and fans as possible in XR. If you think about it, Liv may well be the tool we use to record our first experiences in the metaverse.
“The first and most important thing is that it’s going to run natively on a Quest,” Shewki said in an interview with GamesBeat. “This will work on a standalone device, on the Quest — meaning the Quest, Quest 2 and Quest 3 and all the upcoming Quest devices. It’s part of our partnership with Meta.”
Shewki added, “We spent seven years building technologies. Our mission has always been to empower people in VR and AR to play and share their favorite moments with their friends, family and fans. This will be the first time that we can do this directly on standalone devices like the Quest.”
The Liv capture will work with Gorilla Tag, by Another Axiom; Penguin Paradise and the studio’s new game Skelly, by Sava Studios; and Scary Baboon, by Flixzy.
“Another Axiom builds fully realized spaces that are meant to be shared together, like in our popular game Gorilla Tag,” said David Yee, COO at Another Axiom, in a statement. “We’re always looking at new ways to give our players and creators a great experience they can share with their family and friends. This partnership between Liv and Meta provides access to best-in-class capture and virtual camera technology, introducing new ways to capture and share in-headset experiences. We can’t wait to see what the community does with these new tools.”
One of the key initial goals of the work was to make it easier for developers and creators to capture authentic mixed reality content using a PC with an external camera in both immersive apps and mixed reality apps that use the Meta Quest Presence Platform features. That work has been going on for a while and now it has moved on to standalone devices.
“It’s going to be a massive boost to user generated content inside the top VR applications,” Shewki said.
These capture possibilities include hand tracking, passthrough (where video cameras show you what’s happening in the real world while you’re wearing a headset), scene understanding, anchoring and occlusion. The latter two keep a VR app grounded on the floor and let it figure out when a real-world object blocks the view of a virtual object in the mixed reality space.
As part of this partnership, Meta is deprecating its own mixed reality capture (MRC) tooling and Liv will take over as the official solution.
With Liv, VR players can do things like take a virtual selfie, operate first-person and third-person cameras and perform real-time mixed reality capture with physical cameras.
How it works
MRC has shown its power as a tool to market XR apps and games, and Liv believes it will continue to be an important tool to drive game awareness through video content and hence drive sales. It’s particularly good for filming trailers. A developer can set up the SDK to work with a game in perhaps 30 minutes, Shewki said.
Liv spawns a camera in a game. Its impact on performance varies with the complexity and optimization of the game. In the wired version, the minimum spec for the PC the headset is connected to is a machine with an Nvidia GTX 1080 graphics processing unit (GPU) and an Intel Core i7 CPU.
If Liv crashes during a game, that’s OK. The Liv App and your game are decoupled, so the game will continue to run and only the Liv features will stop working for the user.
Background
Liv has served the VR game dev community since 2016 and will continue to do so now with the help and support of Meta, Shewki said. But for most of that time, the Liv technology for capturing videos only worked with wired VR headsets that connect to a PC. The tech started there because it was easier to have a recording application running alongside an actual game using the power of a PC. But now the VR wireless headsets have become more powerful, as has the Liv tech.
Now, the difference is that it will work with the most popular VR headsets, the Meta Quest standalone wireless headsets, which don’t need to be connected to a PC. The company has 24 people and they’ve been working on this part for about a year.
“We spent the last seven years building camera technologies for app developers, content creators and players in VR. Historically, we’ve primarily been on Steam. When we started seven years ago, OpenVR, or SteamVR, was the only platform available,” Shewki said. “We made a whole bunch of assumptions back then about how the technology ought to work and how it ought to integrate with games. So what we’re releasing is effectively taking those seven years’ worth of learnings, built up alongside developers, and releasing a new SDK.”
The company has a patent for its volumetric capture and replay system.
So far, roughly 50 to 100 developers have been downloading the SDK every month. Most of them are making games, and they’re developing for VR systems that are connected to the PC. Many of these are for educational users at schools and universities.
The Quest market, for wireless standalone headsets, is an order of magnitude bigger than the PCVR market, thanks to games like Gorilla Tag.
“We expect our monthly creator numbers to go up,” he said. “We are going to roll out with tons of games. Our goal has been to be on every device on every platform.”
And there are some games where this works now.
“As part of this announcement, we’re also excited to share that we’ve got tons of new games getting Liv support, including Gorilla Tag by Another Axiom, and Racket Club by Resolution Games,” Shewki said.
The rush to the Quest
With mixed reality capture, Liv can film real people composited into the game world, something that until now has primarily been used in high-end productions. Liv also has a trailer production studio in Australia that uses its own tooling, and it makes trailers for some of the biggest game developers and platforms in the world.
And then there is the Liv App, which people use for mixed reality capture and as a virtual camera. The limitation has been that it had to be wired to a PC. Now it will be available natively in games on standalone devices, with no need for a PC or a high-powered GPU.
“You won’t need to download a Liv app and run the Liv app in parallel. You will spawn the camera. You will film with the camera, and you will save the content, and soon also stream it, natively from the device without ever leaving the headset,” Shewki said. “So it is a solution built for people who are primarily in Quests and don’t have additional tooling on their PC.”
To clarify, he said that if you’re on a Quest and primarily playing Quest games without a PC, you can finally create really high-quality, rich video content from your favorite applications using a combination of selfie cameras, third-person cameras, drone cameras and first-person view cameras, with all the bells and whistles you need to make great content.
The rollout plan for the new SDK
The beta release of Liv’s new software development kit (V1.6 SDK) for Unity-based apps that support Presence Platform features is available now. The goal is to unlock the ability for developers who publish on Quest, but build on PC, to capture high-quality video content using real and virtual cameras.
Later this year will bring the beta release of the V1.6 SDK for Unreal-based apps that support Presence Platform features. That’s for apps built in Unreal Engine 4.27 or later.
All Liv SDK features are included in the V1.6 SDK, including the regular virtual cameras (first-person, selfie, third-person, drone) and avatars.
Beyond 2024, Liv is looking at improving the tooling for creators on the Liv App. Liv’s backlog has grown big over the years, Shewki said.
The Liv SDKs for Unity and Unreal are both MIT-licensed and free for game developers. For now, you can only use Liv with games running on PC via SteamVR and Oculus PCVR/Rift.
As for trailers, the company uses Liv for the raw captures, topped with a heap of editing and post-production magic courtesy of Liv Productions.
Sava, the creator at Sava Studios, said in a statement, “I am proud to say we added LCK into Penguin Paradise and our new game Skelly because we want to give our community a way to make the best and high quality content in our game.”
Asked if the capture tools will work natively on a wireless Quest headset without a PC, Shewki said this will happen.
“We are releasing our Quest SDK in a few weeks,” Shewki said.
Creators will be able to use the new Presence Platform features immediately. Mixed reality capture for passthrough applications has one complication: it’s typically filmed at a real location rather than in front of a green screen, which means the final composite happens in post-production, since the developer needs to cut out the human subject.
The Liv capture features are also being built into the Liv App on Steam, and the company said it is committed to helping creators across platforms. It will invest more in the Liv App on PC; a large part of the work with Meta is focused on bolstering that Steam app.
Shewki said he wants these cameras to feel like they belong in the game world.
“We specifically want to avoid people thinking of it as, ‘I have to go download an additional tool.’ I’m going to have a rich camera available to me that I can spawn at any time and record my favorite moments without ever leaving the headset. And that’s what this will unlock,” Shewki said. “We’re going to be rolling out with Gorilla Tag and some other big titles initially. Once we roll out, the SDK will be publicly accessible.”
In the app now, only one camera can be running at any given time. Rather than monetize game developers, Shewki said the company will monetize directly with Meta. The company will make sure its tech can work on all upcoming popular AR and VR devices when they launch, he said.
Streaming will be the next thing that Liv will work on. At that point, players will be able to upload and stream directly to Liv. But that’s not ready yet.
Meta AI has announced the open-source release of MobileLLM, a set of language models optimized for mobile devices, with model checkpoints and code now accessible on Hugging Face. However, it is presently only available under a Creative Commons 4.0 non-commercial license, meaning enterprises can’t use it in commercial products.
The release of these open weights makes MobileLLM a more direct, if roundabout, competitor to Apple Intelligence, Apple’s on-device/private cloud hybrid AI solution made up of multiple models, shipping out to users of its iOS 18 operating system in the U.S. and outside the EU this week. However, being restricted to research use and requiring downloading and installation from Hugging Face, it’s likely to remain limited to a computer science and academic audience for now.
More efficiency for mobile devices
MobileLLM aims to tackle the challenges of deploying AI models on smartphones and other resource-constrained devices.
With parameter counts ranging from 125 million to 1 billion, these models are designed to operate within the limited memory and energy capacities typical of mobile hardware.
By emphasizing architecture over sheer size, Meta’s research suggests that well-designed compact models can deliver robust AI performance directly on devices.
Resolving scaling issues
The design philosophy behind MobileLLM deviates from traditional AI scaling laws that emphasize width and large parameter counts.
Meta AI’s research instead focuses on deep, thin architectures to maximize performance, improving how abstract concepts are captured by the model.
Yann LeCun, Meta’s Chief AI Scientist, highlighted the importance of these depth-focused strategies in enabling advanced AI on everyday hardware.
MobileLLM incorporates several innovations aimed at making smaller models more effective:
• Depth Over Width: The models employ deep architectures, shown to outperform wider but shallower ones in small-scale scenarios.
• Embedding Sharing Techniques: These maximize weight efficiency, crucial for maintaining compact model architecture.
• Grouped Query Attention: Inspired by work from Ainslie et al. (2023), this method optimizes the attention mechanism by sharing each key/value head across a group of query heads (see the sketch after this list).
• Immediate Block-wise Weight Sharing: A novel strategy to reduce latency by minimizing memory movement, helping keep execution efficient on mobile devices.
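To make the grouped query attention item concrete, here is a minimal PyTorch sketch of the idea. It is an illustration under assumed head counts and dimensions, not Meta’s MobileLLM code; the function name and shapes are ours.

```python
# Minimal sketch of grouped query attention (GQA): several query heads
# share each key/value head, shrinking the K/V projections and KV cache.
# Illustrative only -- not Meta's MobileLLM implementation.
import torch

def grouped_query_attention(x, wq, wk, wv, n_q_heads, n_kv_heads):
    b, t, d = x.shape
    head_dim = d // n_q_heads
    # Queries use the full head count; keys/values use fewer heads.
    q = (x @ wq).view(b, t, n_q_heads, head_dim).transpose(1, 2)
    k = (x @ wk).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    v = (x @ wv).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    # Repeat each K/V head so it serves its whole group of query heads.
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    att = torch.softmax((q @ k.transpose(-2, -1)) / head_dim**0.5, dim=-1)
    return (att @ v).transpose(1, 2).reshape(b, t, d)

# Example: 8 query heads sharing 2 K/V heads cuts K/V memory by 4x.
dim, n_q, n_kv = 64, 8, 2
x = torch.randn(1, 10, dim)
wq = torch.randn(dim, dim)
wk = torch.randn(dim, dim // (n_q // n_kv))
wv = torch.randn(dim, dim // (n_q // n_kv))
print(grouped_query_attention(x, wq, wk, wv, n_q, n_kv).shape)  # (1, 10, 64)
```

The payoff is that the key/value weights, and the key/value cache at inference time, shrink by the group factor, which is exactly the kind of memory saving that matters on the constrained devices MobileLLM targets.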
Performance metrics and comparisons
Despite their compact size, MobileLLM models excel on benchmark tasks. The 125 million and 350 million parameter versions show 2.7% and 4.3% accuracy improvements over previous state-of-the-art (SOTA) models in zero-shot tasks.
Remarkably, the 350M version even matches the API calling performance of the much larger Meta Llama-2 7B model.
These gains demonstrate that well-architected smaller models can handle complex tasks effectively.
Designed for smartphones and the edge
MobileLLM’s release aligns with Meta AI’s broader efforts to democratize access to advanced AI technology.
With the increasing demand for on-device AI due to cloud costs and privacy concerns, models like MobileLLM are set to play a pivotal role.
The models are optimized for devices with memory constraints of 6-12 GB, making them practical for integration into popular smartphones like the iPhone and Google Pixel.
Open but non-commercial
Meta AI’s decision to open-source MobileLLM reflects the company’s stated commitment to collaboration and transparency. Unfortunately, the licensing terms prohibit commercial usage for now, so only researchers can benefit.
By sharing both the model weights and pre-training code, they invite the research community to build on and refine their work.
This could accelerate innovation in the field of small language models (SLMs), making high-quality AI accessible without reliance on extensive cloud infrastructure.
Developers and researchers interested in testing MobileLLM can now access the models on Hugging Face, fully integrated with the Transformers library. As these compact models evolve, they promise to redefine how advanced AI operates on everyday devices.
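As a rough illustration of that integration, loading a checkpoint through Transformers would look something like the sketch below. The model ID is our assumption about the published checkpoint name, so check the Hugging Face hub for the exact IDs, and keep the non-commercial license terms in mind.

```python
# Sketch of loading a MobileLLM checkpoint via Hugging Face Transformers.
# "facebook/MobileLLM-125M" is an assumed model ID -- verify on the hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/MobileLLM-125M"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short continuation to confirm the model runs locally.
inputs = tokenizer("On-device language models matter because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```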
Investors are rushing to throw millions at a hot startup called Kalshi, as loans or even, unusually, as we’ll-figure-out-the-terms-later cash. Kalshi is an exchange that lets people bet, via official commodity trading contracts, on the outcomes of cultural events, from election results to how long Taylor Swift’s latest album will top the charts.
Betting on the outcome of the upcoming U.S. election has spiked demand so high that Kalshi surged to the top spot in Apple’s App Store finance category, after years of being unranked, and to the seventh position overall as of this writing.
Kalshi’s need for cash reserves increased sharply to ensure it can provide instant funding for customers betting on the U.S. election. So, over the last several days, the Sequoia-backed five-year-old startup has received tens of millions from investors in short-term loans, according to a source with knowledge of the situation. Additionally, the company is currently in discussions with new and existing investors about raising a formal equity round of as much as $50 million, though it is also possible the startup could raise more, the person said.
Investors who provided capital to Kalshi so the company could sustain its growth until election day included VC firm Neo, one of its earliest backers. Neo’s founder, Ali Partovi, sent Kalshi a total of $12.4 million, comprising $5.4 million of Neo’s capital and $7 million of Partovi’s personal funds, according to a now-deleted tweet posted by Kalshi’s co-founder and CEO, Tarek Mansour. While it’s extremely rare for investors to send money (much less millions) without terms locked down and a signed contract, Partovi’s message to Mansour said, “We can figure out the terms later.”
Kalshi opened its election market last month after a judge denied the Commodity Futures Trading Commission’s request to block the trading of elections-linked derivatives. (The CFTC is appealing the court’s ruling.) Since then, the company has traded nearly $200 million in contract value for people wanting to bet on the outcome of the political race, Mansour told CNBC on Monday. “The demand curve is truly exponential,” he said.
Kalshi rushed to boost its cash position in anticipation of additional betting on the U.S. election. Like most brokerages, the company offers instant funding to new users. This means users can start trading right away, even though it may take two to three business days for the funds to be officially transferred from the customer’s bank account to Kalshi’s.
Although investors expect Kalshi’s growth spike to subside after the election, they believe the company grew so much over the past month that it won’t revert to its prior size, the person said.
Since Kalshi won the ruling against the CFTC, other companies have begun to offer election contract trading for U.S. citizens. On Monday, Robinhood introduced a market for betting on the presidential election. Interactive Brokers also launched election contracts following Kalshi’s legal victory.
In addition to Sequoia and Neo, Kalshi’s backers include Y Combinator, Henry Kravis, and Mantis VC, the fund managed by music duo The Chainsmokers. The company has raised a total of $106 million in equity capital and was last valued at $787 million, according to PitchBook data.
The music streaming app Tidal is laying off more workers. In a statement to Fortune, an unnamed Tidal spokesperson confirmed “the elimination of some roles across our business and design teams.”
On Wednesday, Fortune published a leaked memo from Jack Dorsey, the CEO of Tidal parent company Block, who said that the company is going to “part ways with a number of folks” on the team. “We’re reducing the size of our design team and foundational roles supporting TIDAL, and we will consider reducing engineering over the next few weeks as we have more clarity around leadership going forward,” Dorsey wrote, according to Fortune.
For decades now, I’ve been trying to reassure people that the coming robot revolution will not result in job loss for us humans. It’s a notion I firmly believed – or at least did until this morning.
Earlier this week, Boston Dynamics released a fresh demonstration video of its new Atlas humanoid robot. Unveiled earlier this year, this Atlas is a wholesale redesign and radical upgrade of the company’s already impressive, parkour-performing original Atlas. The new robot looks a lot more like us, though it can move in ways that none of us can.
The latest video is in some ways unremarkable: another humanoid robot performing drudgery tasks we’d rather not perform. In this case, Atlas is sorting plastic engine covers between a supplier container with horizontal slots and something called a “mobile sequencing dolly” with vertical slots. It does so in the drab environs of what appears to be some sort of manufacturing facility, though it’s probably just a warehouse on Boston Dynamics’ development campus.
What’s remarkable about the nearly three-minute video is that Atlas is doing it all autonomously. That’s right: unlike the remote-controlled Optimus robots Elon Musk and Tesla tried to pawn off as self-directed at the “We, Robot” event, there is, according to Boston Dynamics, no one guiding Atlas’ motion or decisions.
In the video, Atlas faces a cart full of plastic engine cover trays. The robot first reaches for one, placing its two ‘fingers’ underneath the cover, and then pulls it forward. Atlas then releases its grip and rotates its hand so that one ‘finger’ is on top and the other is on the bottom, grabs the tray, and pulls it out.
Viewed from a distance, you’d be forgiven for assuming you were watching a slow-moving human worker. Of course, the next bit would belie that notion. Atlas appears to walk backward toward the vertical set of tray holders but also twists its body around as it moves. As I said, it can do some things not possible with a human body.
Before inserting the tray into its new holder, Atlas appears to examine it. Later, we see an inset video feed that shows us how Atlas’s vision system is assessing the size and shape of the tray.
Atlas continues its work, crouching and bending down to grab engine covers on lower shelves. It all goes smoothly, except for one moment when a tray gets caught on the fabric edge of one shelf. Instead of pulling it back slowly, Atlas yanks it free before smoothly inserting the part.
Like I said, not exactly compelling viewing except when you consider what this means. Robots are widely used in manufacturing and warehouses but they’re often not employed when fine motor controls are required and especially not in places that require on-the-fly decisions.
It’s clear from this video, however, that we’re on the path to where robots that look and work like us will soon stand alongside or replace factory workers. They’ll do the job as well as us but also be able to walk backward while turning their head around 180 degrees.
Plus, with the introduction of generative AI, robots like Boston Dynamics’ Atlas will be able to report on their work, respond when you ask them questions about production levels, and even join you for some witty banter at lunchtime (they still won’t eat but may plug in for an hour).
So now I have to adjust what I tell people about robots: They won’t take our jobs yet, but in 10-20 years, you may be looking for another line of work.
Feedback is often both baffled and intrigued by the tricks advertisers will pull to try to sell things, but the latest gambit seems designed to wrong-foot: deliberately odd capitalisation and bad grammar.
During our time spent mucking around on our smartphone, Feedback has repeatedly seen ads for a mobile game that promises the “Hardest LEvel in the HisTory”. We have SPent days tRYing to Work out wHy it looks like thaT.
The game in question is called Go Climb! It is a puzzle game in which a group of mountaineers ascending a peak have got their safety lines tangled and the player must untangle them. So it is, essentially, the back of Feedback’s TV, except it has been gamified and is also at least somewhat possible to solve.
Feedback initially wondered if this was a case of non-English-speaking developers skimping on translation costs. There is precedent for this: back in 1991, the Japanese space shooter Zero Wing was released in Europe with a notoriously shonky translation. As a result, in the introductory cutscene, an alien invader announced: “All your base are belong to us.” After this was rediscovered in the late 1990s, it became one of the most widely shared internet memes of the time.
However, a closer look at Go Climb! suggests something else is going on. It is made by a company called FOMO Games. The firm is based in Turkey, but its staff clearly have an excellent command of English, as evidenced by the information provided about all its other games, not to mention the gloriously corporate text on its website explaining that “FOMO stands for Fear Of Missing Out, which defines our product vision and culture.”
Instead, Feedback suspects the bad English is intentionally designed to get our attention. In line with this, the advert has other odd features that add to the off-kilter feeling. Notably, in it, the mountaineers from the game are replaced with astronauts in spacesuits drifting around against a starry backdrop, so the game’s title makes absolutely no sense. It was only when we looked at the game in an app store that the mountaineering theme was revealed and things became clear.
This seems to be a new and devilish way to advertise a product online: purposely make a complete hash of your ad and hope this intrigues people enough to get them to click through.
And on some level it worked, because here we are. But Feedback hasn’t downloaded the game. On principle, we don’t believe in rewarding deliberately bad spelling.
Monkeys in politics
At the time of writing, the US presidential election is imminent and Feedback is trapped in an endless cycle of news stories reporting polls, pundits endlessly reinterpreting said polls, and then more polls. It is a terribly long-winded way of saying “we don’t know what’s going to happen”.
Now, our colleague Alexandra Thompson has highlighted an important new contribution to the field of psephological forecasting: a paper titled “Monkeys predict US elections”.
Sadly, this doesn’t involve placing an infinite number of monkeys into voting booths. Instead, researchers showed monkeys pairs of photos of candidates from senatorial and gubernatorial elections.
The monkeys spent more time looking at the losers than at the winners. This seems like a peculiar form of torture for politicians: not only did you lose, it says, but monkeys stared at you judgmentally.
The study extended previous work showing that children can identify the winners and losers in elections based purely on photos of the candidates. Both the children and the monkeys were picking based on face shape, with square jawlines being the key sign of an improved chance of victory.
Who would do such a study? Three of the researchers are at the University of Pennsylvania, but the fourth is based at a Portuguese institution called the Champalimaud Center for the Unknown. Feedback isn’t quite sure what to make of that.
It does seem that unconscious factors play into our voting decisions. It is often claimed that taller candidates tend to win US elections, and there appears to be some truth to this.
A 2013 study pulled data on all US presidential elections to date and found that taller candidates won more of the popular vote – although this didn’t translate to them being more likely to actually be elected. In what can only be described as double nominative determinism, one of the authors is a social psychologist called Abraham Buunk.
Readers who are invested in the outcome of the US election are hereby advised: whatever you do, don’t look up Donald Trump’s and Kamala Harris’s respective heights.
One more for the road
In such stressful times, like many people, Feedback has turned to the soothing alternative reality of The Great British Bake Off (The Great British Baking Show, if you are in North America).
There are all sorts of fascinating and delicious things to learn about the materials science of breads, cakes and biscuits, but we just want to point out that the show’s home economist, who produces all the sample biscuits, tarts and desserts for the technical challenges, is called Hattie Baker.
It wouldn’t be Halloween without everyone’s least favorite clown, Pennywise. In honor of the holiday, Max has unveiled eight first-look images from It: Welcome to Derry, HBO’s upcoming It prequel series from Andy Muschietti, Barbara Muschietti, and Jason Fuchs.
It: Welcome to Derry takes place in 1962, 27 years before the events of Muschietti’s It. The series explores the interludes written by Mike Hanlon, who interviews older residents who lived in the town in the 1960s, specifically his father, Will. Will and his Air Force buddies opened The Black Spot, a nightclub that catered to Black patrons.
In 1962, a white supremacist group known as the Maine Legion of White Decency burned the Black Spot down, killing several people inside. Through his investigation, Mike learns that Pennywise appeared on that tragic night in 1962. Instead of a clown, Pennywise showed up as a giant bird and snatched a victim in its talons.
“Twenty-seven years is the dormant period of Pennywise,” wrote the Muschiettis in an email to EW. “It’s a different part of American history with a new set of fears for children, as well as adults having in mind the cost of the Cold War. Our baseline is 1962, but we do a few jumps to the past … Every 27 years when It appears, its cycle is marked by two catastrophic events, one at the beginning and one at the end. We are using the Black Spot as an event around which many stories are built.”
It: Welcome to Derry stars Jovan Adepo, Taylour Paige, Chris Chalk, James Remar, Stephen Rider, Madeleine Stowe, and Rudy Mancuso. Bill Skarsgård returns as Pennywise from the It films. The prequel series will explore Pennywise’s origins. Character details remain hidden. However, Adepo is wearing a military uniform with the nametag “Hanlon,” suggesting he could be playing Will Hanlon.
Andy Muschietti will direct four of the nine episodes. Based on Stephen King’s It, the series premieres on HBO and streams on Max in 2025.