Tencent and Guillemot family considering buyout of Ubisoft | Bloomberg report

Bloomberg reported that Tencent and Ubisoft’s founding Guillemot family are discussing a potential buyout of the French video game developer, in a story citing unnamed sources.

Ubisoft has lost more than half its market value in 2024, battered by the poorly received open-world title Star Wars Outlaws and the delay of its holiday release, Assassin’s Creed Shadows.

Tencent is the biggest company in gaming, and it already holds a stake in Ubisoft from the time when it helped CEO Yves Guillemot fend off a hostile acquisition by Vivendi.

Bloomberg said Tencent and Guillemot Brothers Ltd. have been speaking with advisers to help explore ways to stabilize Ubisoft and bolster its value. One option is taking the company private together. That’s a logical solution, considering the company’s valuation has fallen 40% to around $1.9 billion this year. After the report, Ubisoft shares rose as much as 33% in Paris trading on Friday. That was the biggest gain since Ubisoft went public in 1996.

Of course, if they go private, Ubisoft’s numbers will become opaque, and we’ll lose another source of transparency in the game industry. Tencent has about 9.2% of Ubisoft’s net voting rights, while the Guillemot family holds about 20.5%.

The 3D environment of Assassin’s Creed Shadows.

Ubisoft said it does not comment on “rumors or speculation,” and we’re awaiting comment from Tencent.

Star Wars Outlaws has underperformed expectations. Assassin’s Creed Shadows, meanwhile, was highly anticipated for its setting in samurai-era Japan, but it has faced criticism from core gamers and some Japanese fans over its choice of main characters.

After Vivendi’s failed attempt, Ubisoft was reportedly in play a couple of years ago as well. In 2022, Tencent bought 49.9% of the Guillemot Brothers holding company, in addition to its stake in Ubisoft, as part of a friendly investment that helped ward off any hostile takeover.

Ubisoft has more than 19,000 employees, and its seasoned game development teams are the envy of the industry. It has also shown a willingness to expand to just about any new game platform. But those employees come with high operating costs that have eroded profits over time.


Samsung Galaxy Ring considered a medical device under HSA rules

The Samsung Galaxy Ring has become eligible to be treated as a medical device under certain HSA rules in the US. This could set a precedent for users to claim reimbursement for over-the-counter (OTC) commercial devices that do not need a prescription.

Samsung Galaxy Ring can be considered a medical device

The wearable market took a huge leap recently when two Apple devices secured FDA approval as medical devices. The Apple AirPods Pro 2 can serve as hearing aids and the Apple Watch can help detect sleep apnea.

FDA approval is just the first, but very important, milestone. The Samsung Galaxy Ring has gone a step further. According to Android Authority, Samsung’s new Galaxy Ring has become eligible for reimbursement. Simply put, buyers can recoup the amount they paid for the smart ring if they meet certain criteria.

The Samsung Galaxy Ring has reportedly become eligible to be considered a “medical device” under FSA and HSA rules. US citizens must have a Flexible Spending Account (FSA) or Health Savings Account (HSA) to claim reimbursement.

FSA and HSA accounts are different from standard health insurance. US citizens can open these accounts and pay for medical expenses that aren’t already covered by their insurance. Account holders build up a balance in these accounts by setting aside a portion of their income for medical expenses before taxes are applied.

These accounts are usually treated as an employee benefit, and depositing money in them lowers taxable income. Employers open FSAs for their employees, while individuals own HSAs. There are a few prerequisites to having these accounts; for HSAs, one is being enrolled in a high-deductible health plan.
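
To make the tax mechanics concrete, here is a minimal sketch of how a pre-tax contribution lowers taxable income. The income figure, contribution, and flat marginal rate are all hypothetical; real accounts are subject to IRS contribution limits and tiered tax brackets that this sketch ignores.

```python
# Hypothetical illustration of pre-tax FSA/HSA contributions.
# All figures are made up; real accounts have IRS contribution
# limits and tiered tax brackets that this sketch ignores.

def taxable_income(gross: float, pretax_contribution: float) -> float:
    """Pre-tax contributions come out of income before tax is assessed."""
    return gross - pretax_contribution

def tax_savings(pretax_contribution: float, marginal_rate: float) -> float:
    """The saving equals the contribution times the marginal tax rate."""
    return pretax_contribution * marginal_rate

gross = 60_000.00       # hypothetical annual income
contribution = 400.00   # roughly the price of a smart ring
rate = 0.22             # hypothetical marginal tax rate

print(taxable_income(gross, contribution))   # 59600.0
print(tax_savings(contribution, rate))       # 88.0
```

In other words, paying for an eligible device with pre-tax dollars effectively discounts it by the buyer’s marginal tax rate.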

Buying a Galaxy Ring is now a medical expense?

This is a big step forward for smart rings, smartwatches, and other wearable devices. If the Samsung Galaxy Ring can count as a medical device, several other wearables could qualify too.

Samsung says that getting an HSA or FSA to reimburse a Galaxy Ring isn’t a complicated process. After buying a Galaxy Ring, buyers submit the required information to an eligible HSA or FSA plan, and if the account holds sufficient funds, they are reimbursed for the purchase.

The money in an HSA or FSA belongs to the account holder, not the US government, so this isn’t a reimbursement in the strictest sense. Still, the approval confirms that OTC devices can be eligible for reimbursement from FSA or HSA accounts if they qualify as medical devices that do not need a prescription.

15U Wall-Mount Server Rack – RK15WALLO | StarTech.com

This open-frame wall-mount server rack provides 15U of storage, allowing you to save space and stay organized. The equipment rack can hold up to 198 lb. (90 kg).

Easy, hassle-free installation
The network rack is 12 in. (30.4 cm) deep, so it’s perfect for smaller spaces. It’s easy to assemble and features mounting holes spaced 16 in. apart, making it suitable for mounting on almost any wall, based on common wall-frame stud spacing. The wall-mount rack ships as a flat-pack, making it easy to transport and reducing shipping costs.

Free up space
You can mount the open-frame rack where space is limited, such as on a server room or office wall, or above a doorway, to expand your workspace and keep your equipment easy to access.

Keep your equipment cool and accessible
The open frame provides passive cooling for your equipment. It also keeps your equipment easy to access and reconfigure, helping you maximize productivity.

The RK15WALLO is backed by a StarTech.com 5-year warranty and free lifetime technical support.

To learn more visit StarTech.com

Gmail’s Gemini-powered Q&A feature comes to iOS

A few months ago, Google introduced a new way to search Gmail with the help of its Gemini AI. The feature, called Gmail Q&A, lets you find specific emails and information by asking the Gemini chatbot questions. You can ask things like “What time is our dinner reservation on Saturday?” to quickly find the information you need. Initially it was only available on Android devices, but now Google has started rolling it out to iPhones.

In addition to asking questions, you can use the feature to find unread emails from a specific sender simply by telling Gemini to “Find unread emails by [the person’s name].” You can ask the chatbot to summarize a topic you know is in your inbox, such as a months-long work project spanning multiple conversations across several threads. And you can even use Gemini in Gmail to run general search queries without leaving your inbox. To access Gemini, simply tap the star icon at the top right corner of your Gmail app.

Google says the feature could take up to 15 days to reach your devices. Take note, however, that you do need to have access to Gemini Business, Enterprise, Education, Education Premium or Google One AI Premium to be able to use it.

Meta enters AI video wars with powerful Movie Gen model

Meta founder and CEO Mark Zuckerberg, who built the company atop its hit social network Facebook, finished this week strong, posting a video of himself doing a leg press exercise at the gym on his personal Instagram (a social network Facebook acquired in 2012).

Except, in the video, the leg press machine transforms into a neon cyberpunk version, an Ancient Roman version, and a gold flaming version as well.

As it turned out, Zuck was doing more than just exercising: he was using the video to announce Movie Gen, Meta’s new family of generative multimodal AI models that can make both video and audio from text prompts, and allow users to customize their own videos, adding special effects, props, costumes and changing select elements simply through text guidance, as Zuck did in his video.

The models appear to be extremely powerful, allowing users to change only selected elements of a video clip rather than “re-roll” or regenerate the entire thing, similar to Pika’s spot editing on older models, yet with longer clip generation and sound built in.

Meta’s tests, outlined in a technical paper on the model family released today, show that it outperforms leading rivals in the space, including Runway Gen 3, Luma Dream Machine, OpenAI Sora and Kling 1.5, on many audience ratings of attributes such as consistency and “naturalness” of motion.

Meta has positioned Movie Gen as a tool both for everyday users looking to enhance their digital storytelling and for professional video creators and editors, even Hollywood filmmakers.

Movie Gen represents Meta’s latest step forward in generative AI technology, combining video and audio capabilities within a single system.

Specifically, Movie Gen consists of four models:

1. Movie Gen Video – a 30B parameter text-to-video generation model

2. Movie Gen Audio – a 13B parameter video-to-audio generation model

3. Personalized Movie Gen Video – a version of Movie Gen Video post-trained to generate personalized videos based on a person’s face

4. Movie Gen Edit – a model with a novel post-training procedure for precise video editing

These models enable the creation of realistic, personalized HD videos of up to 16 seconds at 16 FPS, along with 48kHz audio, and provide video editing capabilities.
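
For scale, here is the simple arithmetic those figures imply for a maximum-length clip. This is a back-of-the-envelope sketch on the quoted specs, not anything from Meta’s paper.

```python
# Back-of-the-envelope arithmetic on the quoted Movie Gen clip specs.
duration_s = 16          # maximum clip length in seconds
frame_rate = 16          # video frames per second
audio_rate = 48_000      # audio samples per second (48 kHz)

frames_per_clip = duration_s * frame_rate         # 256 video frames
audio_samples_per_clip = duration_s * audio_rate  # 768,000 audio samples

print(frames_per_clip, audio_samples_per_clip)
```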

Designed to handle tasks ranging from personalized video creation to sophisticated video editing and high-quality audio generation, Movie Gen leverages powerful AI models to enhance users’ creative options.

Key features of the Movie Gen suite include:

Video Generation: With Movie Gen, users can produce high-definition (HD) videos by simply entering text prompts. These videos can be rendered at 1080p resolution, up to 16 seconds long, and are supported by a 30 billion-parameter transformer model. The AI’s ability to manage detailed prompts allows it to handle various aspects of video creation, including camera motion, object interactions, and environmental physics.

Personalized Videos: Movie Gen offers an exciting personalized video feature, where users can upload an image of themselves or others to be featured within AI-generated videos. The model can adapt to various prompts while maintaining the identity of the individual, making it useful for customized content creation.

Precise Video Editing: The Movie Gen suite also includes advanced video editing capabilities that allow users to modify specific elements within a video. This model can alter localized aspects, like objects or colors, as well as global changes, such as background swaps, all based on simple text instructions.

Audio Generation: In addition to video capabilities, Movie Gen also incorporates a 13 billion-parameter audio generation model. This feature enables the generation of sound effects, ambient music, and synchronized audio that aligns seamlessly with visual content. Users can create Foley sounds (sound effects that reproduce real-life noises such as fabric rustling and footsteps echoing), instrumental music, and other audio elements up to 45 seconds long. Meta posted an example video with Foley sounds alongside the announcement.

Trained on 100 million videos and 1 billion images

Movie Gen is the latest advancement in Meta’s ongoing AI research efforts. To train the models, Meta says it relied upon “internet scale image, video, and audio data,” specifically, 100 million videos and 1 billion images from which it “learns about the visual world by ‘watching’ videos,” according to the technical paper.

However, Meta did not specify in the paper whether the data was licensed or in the public domain, or whether it simply scraped it, as many other AI model makers have. That practice has drawn criticism from artists and video creators such as YouTuber Marques Brownlee (MKBHD) and, in the case of AI video model provider Runway, a class-action copyright infringement suit by creators that is still moving through the courts. As such, one can expect Meta to face immediate criticism over its data sources.

The legal and ethical questions about the training aside, Meta is clearly positioning the Movie Gen creation process as novel, using a combination of typical diffusion model training (used commonly in video and audio AI) alongside large language model (LLM) training and a new technique called “Flow Matching,” the latter of which relies on modeling changes in a dataset’s distribution over time.

At each step, the model learns to predict the velocity at which samples should “move” toward the target distribution. Flow Matching differs from standard diffusion-based models in key ways:

Zero Terminal Signal-to-Noise Ratio (SNR): Unlike conventional diffusion models, which require specific noise schedules to maintain a zero terminal SNR, Flow Matching inherently ensures zero terminal SNR without additional adjustments. This provides robustness against the choice of noise schedules, contributing to more consistent and higher-quality video outputs.

Efficiency in Training and Inference: Flow Matching is more efficient than diffusion models in both training and inference. It offers flexibility in the type of noise schedules used and shows improved performance across a range of model sizes. This approach has also demonstrated better alignment with human evaluation results.
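
To make the velocity-prediction idea above concrete, here is a minimal sketch of a conditional flow-matching training step with a linear interpolation path. This is the generic objective from the flow-matching literature, not Meta’s actual Movie Gen training code, and the model argument stands in for whatever network predicts the velocity field.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model: torch.nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """Generic conditional flow-matching loss with a linear path.

    x1 is a batch of clean data samples (e.g., video latents). The model
    learns to predict the velocity that carries a noise sample toward the
    data; for the linear path x_t = (1 - t) * x0 + t * x1, that target
    velocity is simply x1 - x0.
    """
    x0 = torch.randn_like(x1)  # sample from the noise distribution
    # One random time per batch element, broadcastable over the other dims.
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1          # point along the straight-line path
    target_velocity = x1 - x0           # d(x_t)/dt for the linear path
    predicted_velocity = model(xt, t)   # network predicts the velocity field
    return F.mse_loss(predicted_velocity, target_velocity)
```

Note that at t = 0 the path is pure noise and at t = 1 pure data, so the endpoints are hit exactly; that is the zero-terminal-SNR property described above.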

The Movie Gen system’s training process focuses on maximizing flexibility and quality for both video and audio generation. It relies on two main models, each with extensive training and fine-tuning procedures:

Movie Gen Video Model: This model has 30 billion parameters and starts with basic text-to-image generation. It then progresses to text-to-video, producing videos up to 16 seconds long in HD quality. The training process involves a large dataset of videos and images, allowing the model to understand complex visual concepts like motion, interactions, and camera dynamics. To enhance the model’s capabilities, they fine-tuned it on a curated set of high-quality videos with text captions, which improved the realism and precision of its outputs. The team further expanded the model’s flexibility by training it to handle personalized content and editing commands.

Movie Gen Audio Model: With 13 billion parameters, this model generates high-quality audio that syncs with visual elements in the video. The training set included over a million hours of audio, which allowed the model to pick up on both physical and psychological connections between sound and visuals. They enhanced this model through supervised fine-tuning, using selected high-quality audio and text pairs. This process helped it generate realistic ambient sounds, synced sound effects, and mood-aligned background music for different video scenes.

It follows earlier projects like Make-A-Scene and the Llama Image models, which focused on high-quality image and animation generation.

This release marks the third major milestone in Meta’s generative AI journey and underscores the company’s commitment to pushing the boundaries of media creation tools.

Launching on Insta in 2025

Set to debut on Instagram in 2025, Movie Gen is poised to make advanced video creation more accessible to the platform’s wide range of users.

While the models are currently in a research phase, Meta has expressed optimism that Movie Gen will empower users to produce compelling content with ease.

As the product continues to develop, Meta intends to collaborate with creators and filmmakers to refine Movie Gen’s features and ensure it meets user needs.

Meta’s long-term vision for Movie Gen reflects a broader goal of democratizing access to sophisticated video editing tools. While the suite offers considerable potential, Meta acknowledges that generative AI tools like Movie Gen are meant to enhance, not replace, the work of professional artists and animators.

As Meta prepares to bring Movie Gen to market, the company remains focused on refining the technology and addressing any existing limitations. It plans further optimizations aimed at improving inference time and scaling up the model’s capabilities. Meta has also hinted at potential future applications, such as creating customized animated greetings or short films entirely driven by user input.

The release of Movie Gen could signal a new era for content creation on Meta’s platforms, with Instagram users among the first to experience this innovative tool. As the technology evolves, Movie Gen could become a vital part of Meta’s ecosystem and that of creators — pro and indie alike.

Epic has a plan for the rest of the decade

Just over a year ago, Epic Games laid off around 16 percent of its employees. The problem, Epic said, was its own big ideas for the future and just how expensive they were to build. “For a while now, we’ve been spending way more money than we earn,” Epic CEO Tim Sweeney wrote in an email to staff.

On Tuesday, onstage at the Unreal Fest conference in Seattle, Sweeney declared that the company is now “financially sound.” The announcement kicked off a packed two-hour keynote with updates on Unreal Engine, the Unreal Editor for Fortnite, the Epic Games Store, and more.

In an interview with The Verge, Sweeney says that reining in Epic’s spending was part of what brought the company to this point. “Last year, before Unreal Fest, we were spending about a billion dollars a year more than we were making,” Sweeney says. “Now, we’re spending a bit more than we’re making.”

Sweeney says the company is well set up for the future, too, and that it has the ability to make the types of long-term bets he spent the conference describing. “We have a very, very long runway comparing our savings in the bank to our expenditure,” Sweeney says. “We have a very robust amount of funding relative to pretty much any company in the industry and are making forward investments really judiciously that we could throttle up or down as our fortunes change. We feel we’re in a perfect position to execute for the rest of this decade and achieve all of our plans at our size.”

Epic has ambitious plans. Right now, Epic offers both Unreal Engine, its high-end game development tools, and Unreal Editor for Fortnite, which is designed to be simpler to use. What it’s building toward is a new version of Unreal Engine that can tie them together.

“The real power will come when we bring these two worlds together so we have the entire power of our high-end game engine merged with the ease of use that we put together in [Unreal Editor for Fortnite],” Sweeney says. “That’s going to take several years. And when that process is complete, that will be Unreal Engine 6.”

Unreal Engine 6 is meant to let developers “build an app once and then deploy it as a standalone game for any platform,” Sweeney says. Developers will be able to deploy the work that they do into Fortnite or other games that “choose to use this technology base,” which would allow for interoperable content.

The upcoming “persistent universe” Epic is building with Disney is an example of the vision. “We announced that we’re working with Disney to build a Disney ecosystem that’s theirs, but it fully interoperates with the Fortnite ecosystem,” Sweeney says. “And what we’re talking about with Unreal Engine 6 is the technology base that’s going to make that possible for everybody. Triple-A game developers to indie game developers to Fortnite creators achieving that same sort of thing.”

If you read my colleague Andrew Webster’s interview with Sweeney from March 2023, the idea of interoperability to make the metaverse work will seem familiar. At Unreal Fest this week, I got a better picture of how the mechanics of that might work with things like Unreal Engine 6 and the company’s soon-to-open Fab marketplace to shop for digital assets.

Fab will be able to host assets that can work in Minecraft or Roblox, Sweeney says. But the bigger goal is to let Fab creators offer “one logical asset that has different file formats that work in different contexts.” He gave an example of how a user might buy a forest mesh set that has different content optimized for Unreal Engine, Unity, Roblox, and Minecraft. “Having seamless movement of content from place to place is going to be one of the critical things that makes the metaverse work without duplication.”
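
As a purely illustrative sketch of what “one logical asset” with per-platform file formats might look like as a data structure, consider the following. Epic has not published a Fab manifest format, so every name here is invented.

```python
from dataclasses import dataclass, field

# Purely hypothetical sketch of a multi-platform asset manifest.
# Epic has not published a Fab manifest format; all names are invented.

@dataclass
class AssetVariant:
    platform: str      # e.g. "unreal", "unity", "roblox", "minecraft"
    file_format: str   # the platform-specific file type
    uri: str           # where the platform-specific file lives

@dataclass
class LogicalAsset:
    asset_id: str
    display_name: str
    variants: list[AssetVariant] = field(default_factory=list)

    def variant_for(self, platform: str) -> AssetVariant | None:
        """Resolve the single logical asset to the file a platform needs."""
        return next((v for v in self.variants if v.platform == platform), None)

# One purchase, several platform-optimized payloads.
forest = LogicalAsset(
    asset_id="forest-mesh-set-001",
    display_name="Forest Mesh Set",
    variants=[
        AssetVariant("unreal", ".uasset", "fab://forest/unreal"),
        AssetVariant("roblox", ".rbxm", "fab://forest/roblox"),
    ],
)
print(forest.variant_for("roblox").file_format)  # .rbxm
```

The point of the sketch is that the storefront sells the logical asset once, and each engine pulls the variant optimized for it.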

But for an interoperable metaverse to really be possible, companies like Epic, Roblox, and Microsoft will need to find ways for players to move between those worlds instead of keeping them siloed — and for the most part, that isn’t on the horizon.

Sweeney says Epic hasn’t had “those sorts of discussions” with anyone but Disney yet. “But we will, over time,” he says. He described an ideal where companies, working as peers, would use revenue sharing as a way to create incentives for item shops that people want to buy digital goods from and “sources of engagement” (like Fortnite experiences) that people want to spend time in.

“The whole thesis here is that players are gravitating towards games which they can play together with all their friends, and players are spending more on digital items in games that they trust they’re going to play for a long time,” Sweeney says. “If you’re just dabbling in a game, why would you spend money to buy an item that you’re never going to use again? If we have an interoperable economy, then that will increase player trust that today’s spending on buying digital goods results in things that they’re going to own for a long period of time, and it will work in all the places they go.”

“There’s no reason why we couldn’t have a federated way to flow between Roblox, Minecraft, and Fortnite,” Epic EVP Saxs Persson says. “From our perspective, that would be amazing, because it keeps people together and lets the best ecosystem win.” Epic sees in its surveys that “people are not dogmatic about where they play,” Persson says.

Of course, there’s plenty of opportunity for Epic, which already makes a widely played game and a widely used game engine and is building Fortnite into a game-making tool. (And I haven’t even mentioned how Unreal Engine is increasingly used in filmmaking and other industries.) The end state sounds great for Epic, but Epic also has to make the math make sense for everyone else.

And it has to do that without much of a presence on mobile. The company has spent years in legal battles with Apple and Google over their mobile app store practices, and it just sued Samsung, too. The Epic Games Store recently launched on Android globally and on iOS in the EU, but thanks to restrictions on third-party app stores, the company’s game store boss, Steve Allison, tells The Verge that reaching its end-of-year install goal is “likely impossible.” Any major change could take quite a while, according to Sweeney. “It will be a long battle, and it will likely result in a long series of battles, each of which moves a set of freedoms forward, rather than having a single worldwide moment of victory,” Sweeney says.

There’s one other battle Epic is fighting: Fortnite is still hugely popular, but there is waning interest — or hype, at least — in the metaverse. Sweeney and Persson, however, don’t exactly agree about the term seemingly falling out of popularity.

“It’s like there’s metaverse weather,” Sweeney says. “Some days it’s good, some days it’s bad. Depends on who’s doing the talking about it.”
