
Technology

Sequence launches on Google Cloud Marketplace for Web3 gaming tech


Sequence, the all-in-one platform for building Web3 games, has partnered with Google Cloud Marketplace to widen its reach to game developers.

This milestone brings a comprehensive suite of Web3 development tools and solutions directly to game developers, simplifying the integration of blockchain technology into their games.

Sequence will be listed on Google Cloud Marketplace as an integrated provider of Ethereum Virtual Machine (EVM) technology to Google Cloud customers, said Michael Sanders, cofounder and chief storyteller, in an interview with GamesBeat.

Google Cloud Marketplace enables software vendors to offer their products and services directly through Google Cloud, providing a convenient platform for users to discover, purchase, and deploy software solutions.


By making Sequence (a division of Horizon Blockchain Games) available on Google Cloud Marketplace, developers can now leverage innovative blockchain technology to enhance user acquisition, monetization, and retention as part of the full suite of their game’s backend.

“We launched on Google Cloud Marketplace as the integrated provider of EVM technology,” Sanders said.

Sam Barberie, head of strategy and partnerships at Sequence, said in an interview with GamesBeat that Google vetted the blockchain ecosystem and saw Sequence as a leader in solving Web3 game development problems.

Barberie said, “The benefit is that now that we’ve gone live, we both have the opportunity to help integrate Web3 and EVM for every Google Cloud client. The Sequence stack is built on Google Cloud, and we’ve already had a deep integration with the Google Cloud ecosystem for a while, but being on the marketplace means developers can just get access to Sequence and begin the integration for any game, whether that’s an existing triple-A developer wanting to integrate Web3 or a new studio building a game with Web3 capabilities for the first time.”


Google also has a relationship with Aptos, but most of the global blockchain activity is in EVM, and that steered Google in Sequence’s direction, Barberie said.

“EVM is the go-to tech here,” he said. “It has the largest developer ecosystem.”

The context

Sequence makes it easy for game devs to adopt crypto wallets.

Developers globally are integrating Web3 into existing and new games to enhance developer economics and reward players, as the gaming industry faces a range of challenges.

Barberie said, “Last year, Google Cloud focused on leveraging blockchain technology to solve what they saw as universal developer problems, where user acquisition costs are up 431%, development costs are up 20% and player spend is up only 1%. They’re thinking of Web3 as a way to harness and accelerate the market. So Google approached us.”

Game makers who integrate Sequence report significant changes in game performance, including 4.5 times day-30 retention, 7.2 times average revenue per user, and more than 20% incremental revenue.


Sequence leverages the benefits of Web3 for all players across platforms with invisible wallet solutions, boosts player acquisition with targeted analytics and marketing tools, and retains users with custom loyalty rewards.

The company said it can increase monetization by four times with app-store-compliant marketplaces, diverse and gamer-friendly payment options, and cross-platform trading.

And Sequence can help devs utilize a comprehensive Web3 gaming backend stack, real-time blockchain data access, and seamless integration with popular game engines like Unity and Unreal Engine across devices. The Sequence SDK suite is also the first EVM-based verified solution on both the Unity Asset Store and the Unreal Engine Marketplace.

“We’re excited to bring Sequence to Google Cloud Marketplace,” said Greg Canessa, president and COO at Sequence, in a statement. “This collaboration empowers game developers to leverage the player and developer benefits of Web3, allowing them to focus on creative execution and delivering amazing games. Our vision of dynamic, living games and solutions that solve universal game developer challenges aligns with Google Cloud, and we’re pleased to support game developers in building visionary experiences.”   


The Sequence platform supports all EVM chains, including layer 1s, layer 2s, and layer 3/app chains. As the simplest and most comprehensive Web3 development platform, Sequence is built for flexibility and is trusted by game publishers and studios, Sanders said.

“Bringing Sequence to Google Cloud Marketplace will help customers quickly deploy, manage, and grow the Web3 game development platform on Google Cloud’s trusted, global infrastructure,” said Dai Vu, managing director for marketplace and ISV GTM Programs at Google Cloud, in a statement. “Sequence can now securely scale and support customers on their digital transformation journeys.”

Sequence has 65 employees. It powers thousands of game developers who have millions of players. There have been $5.3 billion in transactions using Sequence’s technology.

Sequence helps onboard, monetize, grow, and retain players with its Web3 technology. From collectibles and ownable rewards to fully on-chain experiences, Sequence’s easy-to-integrate platform solves blockchain complexities, so developers can focus on creative execution and delivering amazing player experiences. Sequence is backed by Take-Two Interactive, Ubisoft, Xsolla, Bitkraft, Brevan Howard, Coinbase, Polygon, and more.


Ubisoft is using Sequence for a couple of its Web3 games. Sequence also has Off the Grid, a battle royale game that was just launched by Gunzilla Games. Popular streamers like Ninja and TimtheTatman touted the game during its recent launch.



Technology

M4 chip: everything we know about Apple’s latest silicon


Apple is on the cusp of announcing new Macs equipped with its latest M4 chip, bringing more powerful performance and extra features to its computers. But this won’t be the first time the M4 has made an appearance — it’s already out in the latest iPad Pro.

But is the M4 chip any good? Should you upgrade your Mac or iPad to take advantage of it? And what new features will it bring to your devices? We’ve set out to answer these questions and more, blending together what we’ve learned from the M4 iPad Pro and information that has been leaked ahead of the M4 Macs launching this year. That should give you everything you need to know about Apple’s latest chip.

Price and release date

Russian YouTuber Romancev768 with what is claimed to be a real M4 MacBook Pro unit.
Romancev768

Apple has opted for something of a staggered approach for the M4 chip’s launch. Unlike most years, where Apple first releases its new chips in Mac computers, this time the company opted to put the M4 chip in the iPad Pro first, which arrived in May 2024.

As for Macs equipped with the M4 chip, those are expected imminently. Bloomberg reporter Mark Gurman has claimed these computers will go on sale on or around November 1, which is just days away now. There could be an online event announcing these Macs a few days beforehand, Gurman believes.

We’re expecting to see the M4 chip in the MacBook Pro, the iMac and the Mac mini (the latter of which is due for a total makeover with a new, smaller design). Apple often launches new Macs in the fall, so the unveiling of these devices in a few days’ time makes sense.


That’s not the end of the M4 releases, though. The MacBook Air is due out in early 2025, with the Mac Studio and Mac Pro following later in the year. The MacBook Air will almost certainly be limited to the M4, while the Mac Studio and Mac Pro will come with more powerful variants (such as the M4 Max or M4 Ultra).

As for the price of the M4 Macs, we wouldn’t rule out an increase here. When Apple launched the M4 iPad Pro, it increased the price of both the larger and smaller models by $200 each. Granted, it changed other things (such as doubling the starting storage and introducing a new Tandem OLED display) that would have contributed to the price, but the fact remains that the cost to the consumer went up. That means we might end up seeing a similar price hike when the M4 chip comes to the Mac line.

Performance and features

Home Screen of the M4 iPad Pro.
Nadeem Sarwar / Digital Trends

Normally, assessing the performance of an Apple chip before it comes to the Mac would be mostly guesswork, but everything’s changed this time around. That’s down to two main reasons: the iPad Pro and a series of massive leaks of the M4 MacBook Pro.

Starting with the iPad Pro, we can infer how the M4 Macs might perform based on this tablet’s abilities. The M4 chip in the iPad Pro comes with a 10-core CPU and a 10-core GPU (up from the 8-core CPU in the M3 chip), which helps give it a bit more oomph in all sorts of workloads.

In our review, we saw significant improvements over both Apple’s past iPad Pros and rival tablets. Part of that improvement will be due to the second-generation 3nm process used to manufacture the chip, which Apple says is more efficient than its previous efforts. The M4 also contains an updated 16-core Neural Engine that can perform up to 38 trillion operations per second (TOPS). Apple says it’s more powerful than any neural processing unit in any AI PC today.


An M4 MacBook Pro being tested in Geekbench by Russian YouTuber Wylsacom.
Wylsacom

But what about Mac performance? Well, we have an inkling of what to expect here too, and it’s all thanks to a series of monumental leaks that saw the M4 MacBook Pro fall into the hands of various YouTubers well ahead of schedule, who proceeded to benchmark it and reveal its performance chops.

One of the YouTubers was a popular Russian tech reviewer named Wylsacom, who put the M4 MacBook Pro through its paces using a series of Geekbench tests. Here, the M4 chip in the MacBook Pro got a single-core score of 3864 and a multi-core result of 15288, which are roughly 27% and 31% better than those achieved by the M3 chip, respectively. That’s a sizable improvement.

As well as that, benchmarks for Apple’s Metal API have also surfaced on Geekbench, and these shed light on the upcoming MacBook Pro’s graphical capabilities. In these tests, the M4 chip scored 57603, which is about 20% ahead of the M3 chip. Again, that’s an encouraging result.
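For readers who want to sanity-check those figures, the implied M3 baseline scores can be back-calculated from the leaked M4 results and the quoted percentage gains (a quick sketch; the derived M3 numbers are implied, not independently measured):

```python
# Back-calculate implied M3 baselines from the leaked M4 Geekbench
# results and the percentage improvements quoted above.
m4_scores = {"single_core": 3864, "multi_core": 15288, "metal": 57603}
gains = {"single_core": 0.27, "multi_core": 0.31, "metal": 0.20}

implied_m3 = {
    test: score / (1 + gains[test])  # e.g. 3864 / 1.27
    for test, score in m4_scores.items()
}

for test, score in implied_m3.items():
    print(f"implied M3 {test}: {score:.0f}")
```

The derived values are roughly in line with publicly reported M3 Geekbench scores, which lends some credibility to the leaked numbers.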

Apple's Tim Millet presents the Apple silicon A14 Bionic chip.
Apple

There’s one more thing revealed by the M4 MacBook Pro leaks: Apple is likely to upgrade the starting memory in these laptops from 8GB to 16GB. That’s a long-overdue change and should enable the devices to better handle multitasking and demanding workflows.

Many of these improvements — the RAM increase in particular — are likely to help Apple’s devices handle artificial intelligence (AI) duties. With the launch of Apple Intelligence, Apple has finally entered the AI mainstream, and it needs its devices to perform well in this increasingly important area.

When it comes to the most powerful chip in the M4 range (the M4 Ultra), there’s a potentially momentous change that might be on the way. As claimed by YouTube channel Max Tech, the M3 Max reportedly does not feature Apple’s UltraFusion tech. This is what has previously allowed Apple to stitch two M2 Max chips together to create the M2 Ultra.


If the M3 Max doesn’t have this feature, it suggests that the M4 Ultra could be its own standalone chip rather than two M4 Max chips fused together. That could mean better performance scaling compared to previous generations. Alternatively, the M3 Max may have lacked UltraFusion because the M3 Ultra never launched, and thus there was no need to fuse two M3 Max chips together for a non-existent higher-end chip. We might not know for sure until the M4 Max and M4 Ultra launch in 2025.

Which devices will get the M4?

A person holds a MacBook Air at Apple's Worldwide Developer's Conference (WWDC) in 2023.
Apple

We can make an educated guess as to which Macs will end up with the M4 chip, as well as which will get the higher-end variations like the M4 Pro, M4 Max and M4 Ultra.

Starting with the M4, this is likely to land in the MacBook Air (both 13-inch and 15-inch sizes), the 14-inch MacBook Pro, the iMac and the Mac mini. Given the chip will be at the lower end of Apple’s hierarchy, it makes sense to expect it in more consumer-facing devices. That said, it’s in the new iPad Pro, which Apple touts as a very powerful device.

Moving onto the more pro-grade chips, we can expect the M4 Pro to end up in the 16-inch and 14-inch MacBook Pro and the Mac mini as an upgrade variant. The M4 Max, meanwhile, should come to the 16-inch and 14-inch MacBook Pro and the Mac Studio.

Finally, look for the flagship M4 Ultra inside the Mac Studio and Mac Pro. Those aren’t expected until some point in the second half of 2025, though, so there will probably be quite a wait until we see them in action.


Technology

Alienware 34 QD-OLED monitor drops to $612


Right now you can snag a great deal on the Alienware 34 QD-OLED gaming monitor that brings it down to its lowest price to date, and with how good this monitor is, that’s a deal you don’t want to pass up.

This monitor is understandably an exceptionally popular option for gamers thanks to its ultrawide design and 34-inch screen size, but also because it has a high refresh rate and great resolution. That’s on top of a bunch of other really nice features. Normally this monitor retails for $899.99, but you can currently pick it up for $612 through Dell. The initial discount brings the price down to $679.99, a price that Amazon and Best Buy are also offering.

However, if you go through Dell, there are additional savings to get the price down to $612. This is the lowest price this monitor has been since its release.

To get the price down to $612, you’ll have to make sure to go to the Dell Rewards page and sign in. After you sign in, go to the Dell Benefits Tab and claim the 10% off promo for Alienware monitors. Claiming this promo will give you a promo code to use when you check out with the monitor in your cart.
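The stacked discount is simple arithmetic (a quick check of the prices quoted above; the exact checkout total may vary slightly with rounding and tax):

```python
# Verify the advertised Alienware 34 QD-OLED pricing math.
list_price = 899.99   # regular retail price
sale_price = 679.99   # initial discount, matched by Amazon and Best Buy
promo_rate = 0.10     # Dell Rewards 10%-off promo for Alienware monitors

final_price = sale_price * (1 - promo_rate)
print(f"${final_price:.2f}")  # just under the advertised $612
```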


This monitor gives you one HDMI 2.0 port, two DisplayPort 1.4 ports, four USB-A 3.2 Gen1 ports, one USB SuperSpeed port, and an audio port for headphones or desk speakers. It also features AMD FreeSync Premium Pro, and several vision modes for optimizing your gameplay. For instance, the Night Vision mode adds clarity and contrast to darker scenes. There’s also a crosshair function you can enable for shooter games.

Then there’s the 165Hz refresh rate for super-smooth gameplay. There are lots of monitor options out there these days, and this is hardly the only awesome gaming monitor available. However, this is most certainly a great monitor at a great price.

Buy at Dell


Technology

Of course telecom companies are suing the FTC to block the new ‘click-to-cancel’ rule


An industry group representing telecom providers like Comcast and Charter has sued the FTC to block its new “click-to-cancel” rule. The NCTA filed the suit with the 5th U.S. Circuit Court of Appeals in New Orleans on the grounds that the rule oversteps the FTC’s authority.

The Interactive Advertising Bureau, which represents the online advertising industry, and the Electronic Security Association, which represents the home security industry, are also involved in the lawsuit. The groups call the FTC ruling “arbitrary, capricious, and an abuse of discretion.” There’s also language in the suit that suggests that jumping through annoying hoops to cancel a subscription is actually helpful to consumers. So this little mom-and-pop trade organization is just looking out for us, the little guy. I’m practically glowing with appreciation.

For news junkies, the lawsuit’s venue may have raised some eyebrows. The 5th U.S. Circuit Court of Appeals in New Orleans is widely considered to be the nation’s most conservative federal appeals court, so it’s where giant corporations and political entities like to drop suits like this.

Judges from this court have blocked federal officials, including the FBI and the Surgeon General, from urging social media companies to take down posts filled with misinformation. The court also invalidated a ban on bump stocks and limited the power of the Consumer Financial Protection Bureau (CFPB).


Several of these decisions were reversed by the Supreme Court, so the 5th Circuit is actually markedly more conservative than even SCOTUS. To that end, 12 of the 17 judges on the court were appointed by Republican presidents, with six being appointed by former President Trump. The NCTA and its industry partners have been accused by consumer advocacy groups of “venue shopping” by selecting a federal appeals court that would likely look favorably on the suit.

“The big businesses that deploy deceptive subscription models to trap customers are trying to sue their way out of this regulation to lower costs for millions of consumers,” Liz Zelnick said in a statement published by USA Today. “We’ve seen this movie before, with big industry players venue shopping in a corporate-friendly jurisdiction regardless of the impact on Americans.”

The FTC ratified the “click-to-cancel” rule in a vote that went down along party lines. Simply put, the rule requires providers to make it as easy to cancel a subscription as it is to sign up for one. It also prohibits companies from misrepresenting their recurring services and memberships.

“Too often, businesses make people jump through endless hoops just to cancel a subscription,” said Chair Lina Khan. “The FTC’s rule will end these tricks and traps, saving Americans time and money. Nobody should be stuck paying for a service they no longer want.”

If you buy something through a link in this article, we may earn commission.


Technology

Midjourney launches AI image editor: how to use it




Midjourney, the hit AI image generation startup founded and run by former Magic Leap engineer David Holz, is wowing users with a new feature unveiled last night: AI image editing.

As a good portion of Midjourney’s 20 million+ users (including some of us at VentureBeat) likely know, Midjourney previously allowed users to upload their own images gathered outside of the service to its alpha web interface and/or Discord server to serve as a reference for its AI image generator diffusion models — the latest one being Midjourney 6.1. After receiving an uploaded reference image, the Midjourney AI model is able to generate new images based on the user’s provided file.

However, this reference feature didn’t actually make any alterations to the source image — merely using it as a kind of loose starting point.


Now, with Midjourney’s new “Edit” feature, users can upload any image of their choosing and actually edit sections of it with AI, or change its style and texture from the source to something totally different, such as turning a vintage photograph into anime — while preserving most of the image’s subjects, objects, and spatial relationships.

It even works on doodles and hand drawings that the user submits, turning scribbles into full art pieces in seconds.

Midjourney posted a video demo showing how to use the new features.

VentureBeat uses Midjourney and other AI tools to create content for our website, social channels and other formats.


Note that despite its popularity, Midjourney is one of several AI companies being sued by a class action of human artists for alleged copyright infringement due to its scraping of human-created works without express permission, authorization, consent, or compensation to train its models. The case remains in court for now.

The Midjourney Image Editor appears to be restricted to the company’s latest AI model, Midjourney 6.1, which makes sense.

In a message to Midjourney’s Discord community, Holz wrote that: “All of these things are very new, and we want to give the community and human moderation staff time to ease into it gently…”

As a consequence, the new Midjourney Editor feature is for now restricted to users who have generated more than 10,000 images with the service, those with annual paid memberships, and those who have been a subscriber for a year or more.


However, if you fit those criteria, you can use the new Midjourney Image Editor by following the directions below.

How to find and start using Midjourney’s Image Editor

The new Midjourney Image Editor is only available on the alpha web interface, available at alpha.midjourney.com.

Once there and signed in, the qualifying user should see a new button along the left sidebar menu, about halfway down, with an icon showing a small pencil on a pad. Hovering over it will show that it reads “Edit” (or the text will display on its own persistently if your browser window is wide enough).

Clicking on this should pull up the new Editor screen, which should prompt the user with two major options: “Edit from URL” and “Edit Uploaded Image.”

The latter requires the user to have a file saved on their machine, whereas the former can accept a wide range of images hosted on various websites such as Wikimedia Commons, if the user simply pastes in the correct link to the web-hosted image. For purposes of this article, I included a URL to the following image of a concept car from Wikimedia Commons.


Once a copy of the file is uploaded to Midjourney via the URL or the user’s own file repository, the image should appear in the middle of the new editor screen like so:

You’ll note there are a wide variety of options and buttons on the left inner sidebar menu that users can select to modify the image with Midjourney 6.1. These include “1. Erase,” which allows the user to remove and paint over portions of the image with AI using a brush and a text prompt; “2. Move/Resize,” which allows the user to move the image around the virtual canvas and extend its edges with new matching AI imagery; and “3. Restore,” which is the inverse of Erase and lets the user bring back any portions of the source image that they accidentally painted over with the Erase brush.

The user can control the brush size with a slider on the left sidebar as well as the “scale” of the image, zooming in or out, and the aspect ratio itself with more presets below that.

There’s also a “Suggest Prompt” button, which Midjourney explains via helpful hover-over text is designed to aid the user in generating a prompt describing the image they’ve just uploaded — in case they want to alter that prompt or use it to generate a whole new similar image. The suggested prompt text should automatically appear in the prompt entry box/bar at the top of the screen.

Looking at our concept car example, I went ahead and used the Erase brush tool on the driver and used the text prompt entry bar at the top of the Midjourney web interface to replace the driver with a “flaming skeleton driving.” After I typed my text prompt in the top entry bar/box, I hit the button marked “Submit Edit” or enter on my keyboard to apply the changes.

As with Midjourney’s raw image generator, the Editor creates four versions automatically for each text prompt — visible on the right sidebar under the “Submit” button.

Here is the best result from my experiment:

The user can then choose to keep making new changes to this resulting image, upscale it with Midjourney’s built-in upscaler via a button below, or download it as is.

Retexturing turns images into new adaptations in different styles

In addition, the discerning reader and Midjourney user will note another whole set of options for the Editor, found by clicking the tab marked “Retexture” on the left sidebar.

As Midjourney itself explains in the left sidebar after clicking this option: “Retexture will change the contents of the input image while trying to preserve the original structure. For good results, avoid using prompts that are incompatible with the general structure of the image.”

As you’ll see in the screenshot I’ve embedded above, the Retexture screen has far less going on than the regular Edit screen. In fact, basically the only option is to use the prompt text entry bar/box at the top of the screen to spell out what kind of retexturing you want done to the source image you provided.

After entering this, the user can hit “Submit Retexture” and voilà: Midjourney will use AI to apply the new texture and adapt the image according to the user’s prompt, again generating four versions for them to choose from.

In my case, I tried a bunch of different styles, including anime, cave painting, colored sand, grotesque ooze, and cyberpunk, among others. See some of the retexturing examples I received below. One cautionary note from my limited tests so far: the retexturing feature does appear to warp and remove some detail from the resulting image, as well as gender-swap the subjects and add extraneous new details. However, this is part of the fun of using Midjourney or other generative AI creative tools — seeing what the model spits out based on your guidance!


Warm reception among AI image creators on X

The AI image and art community on the social network X applauded Midjourney’s new editor — which had been rumored for several weeks. Already, some of the leading AI creators have tried it out and posted their examples, many of which are impressive. Here’s a sampling:

If you’re a Midjourney user who meets the criteria outlined above, go ahead and log in and try it out! Let me know your thoughts: carl.franzen@venturebeat.com. Midjourney has also been open about its plans to launch a 3D or video editor, which may come later this year.



Technology

Cheers lets you play matchmaker with friends


The latest addition to the dating app scene is Cheers, a newly launched matchmaking app available to users in New York City. Founded by former Instagram engineer Sahil Ahuja, Cheers sets itself apart by offering friend-matchmaking and social posting features to facilitate new connections.

Cheers (where almost everybody knows your name) puts a twist on the familiar mechanics of dating apps. Users can swipe through profiles and engage in direct messaging, all while leveraging their existing friend network to play matchmaker: the app lets users swipe on behalf of their friends, share profiles, and ask for introductions, removing the awkwardness of meeting potential matches online.

Additionally, Cheers incorporates social media features such as photo sharing, letting users post as many images as they want, rather than being limited to the six or so photos that most dating apps allow. Ahuja believes that adding unlimited images, featured on an Instagram-style profile, will enhance the user experience, making the dating app more like a social platform. It also helps with the vetting process, since a potential match may be tagged in one of their friends’ photos.

Ahuja told TechCrunch, “I’ve talked to a lot of women who have felt that that’s actually something that is really beneficial for them because if they see people on there with a friend, it validates this is a legit person. They’re not sketchy. It’d be okay to go out with them in real life.”


Image Credits: Cheers

Ahuja worked at Instagram for four years before venturing into the startup world to begin his Web3 company, Soho, which was sold to Sound last year. It has always been his dream to build Cheers, but he wanted to work at Instagram first to enhance his skills. In a way, Instagram is its own dating app. More and more users have turned to the app to “slide into the DMs” of their latest crush.

The idea of a friend-matchmaking app is not a new one. Tinder attempted this in 2023 with “Tinder Matchmaker,” and Bumble has its “Recommend to a Friend” feature. Startups like Loop and Wingman also operate on the same principle.

Although friend matchmaking isn’t new, Cheers reflects the changing behavior of online daters, who are gravitating toward making more authentic connections. What better way to guarantee they’ll find a decent match than by relying on close friends who know them best?

Unlike the distant connections users may follow on Instagram or Facebook, Cheers takes a more personal approach. It requires users to exclusively invite individuals from their contact list, emphasizing a more intentional way of connecting. By restricting users to viewing only three matches per day, the app promotes deliberate and purposeful dating interactions, putting the user’s experience at the forefront.

Image Credits: Cheers

As many dating apps experiment with AI, Cheers is utilizing ChatGPT to suggest which photos to post and generate captions. It also uses AI to help users set up profiles. However, the app prohibits AI-generated profile images. 

Currently, Ahuja is working alone on Cheers and is focused on improving the app before promoting it in new markets or hiring anyone. He plans to add paid features in the future but wants to wait until Cheers has reached 5,000 users.


The app has garnered 150 signups so far, and new users need an invitation from friends to join. It’s currently only available on iOS.


Technology

Razer added RGB lighting to its Barracuda X wireless headphones


Razer has announced a new version of its Barracuda X wireless gaming headset that adds a glowing Razer logo and a ring of color-changing LEDs outlining each earcup. The new Barracuda X Chroma headphones are now available for preorder in a black or white colorway for $129.99 and are expected to ship in late October or early November.

As with other Razer headphones featuring RGB LEDs, the Barracuda X Chroma’s accent lighting can be customized through the Razer Chroma Studio desktop app or the Razer Audio mobile app. The headphones feature six zones that can be set to one of 16.8 million colors to match the motif of a gaming room or other hardware, or you can choose from preset effects, including lighting that corresponds to “over 300 games.”
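As an aside, the “16.8 million colors” figure is simply the size of the standard 24-bit RGB color space (256 levels each for red, green, and blue), which a quick calculation confirms:

```python
# 24-bit RGB: 8 bits (256 levels) per channel across three channels.
levels_per_channel = 256
colors = levels_per_channel ** 3

print(f"{colors:,} colors")  # 16,777,216, i.e. the quoted "16.8 million"
```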

The Barracuda X Chroma’s microphone is removable.
Image: Razer

The Barracuda X Chroma can wirelessly connect to devices over Bluetooth or using Razer’s 2.4GHz HyperSpeed connection for reduced lag. The company says the headphones’ battery life has been boosted to up to 70 hours with a 2.4GHz connection and the lighting turned off, but that’s halved to just 35 hours with the RGB LEDs running. A 15-minute charge will provide around six hours of playtime, and when using Bluetooth, battery life will potentially be even longer.


The Barracuda X Chroma are available in either a white or black colorway.
Image: Razer

Other features include memory foam cushions on swiveling earcups, 40mm drivers, a detachable HyperClear cardioid microphone on an adjustable arm, and a mute button with a volume dial integrated into the left earcup. At 285 grams, the headphones are also slightly heavier than the original Barracuda X, which weighed in at 250 grams.

The Barracuda X Chroma are compatible with PCs, Macs, Nintendo Switch, Sony PlayStation, Steam Deck, and Android and iOS mobile devices. But like their predecessor, they still don’t support the Xbox’s wireless protocol.



Copyright © 2024 WordupNews.com