Technology

OnePlus’ Android 15 open beta rollout schedule extends to 2025

Yesterday, OnePlus officially announced OxygenOS 15, the next big update to its custom Android skin for phones and tablets. The company showed off the main new features that the new firmware will bring, highlighting the arrival of multiple AI-powered features. Now, OnePlus has updated the information with the rollout schedule for the Android 15-based beta update, confirming that some models will have to wait until 2025.

According to the official rollout schedule, the OxygenOS 15 open beta will be available for the first eligible devices as early as October 30. However, the last models will get it in February next year.

Android 15 rollout schedule to eligible OnePlus devices for 2024 and 2025

Starting October 30, users of the OnePlus 12, OnePlus 12R, and OnePlus 12R Genshin Impact Edition will be able to install Android 15 beta. This is great news for users of said devices, as they are just a few days away from enjoying early access to the new features and improvements. Then, sometime in November, users of the OnePlus Open and OnePlus Pad 2 will be able to do so.

Starting in December, the OnePlus 11, OnePlus 11R, and every model in the OnePlus Nord 4 series will gain access to the beta program. The first OnePlus Pad is also on the list for the last month of the year. Next, in January 2025, Android 15 beta will be available to owners of the OnePlus 10 Pro, OnePlus 10T, and OnePlus Nord 3. Lastly, the OnePlus 10R and OnePlus Nord CE 3 will receive the open beta starting in February 2025.


Some key OxygenOS 15 improvements

OxygenOS 15 will bring a set of nice improvements and new features to users of eligible devices. There are a number of new AI-powered features for image editing and enhancement. The artificial intelligence will also enable grammar checks, suggested answers, and summaries. There’s also an AI Toolbox 2.0 sidebar that will offer direct access to the AI features available for each app. The company is even polishing the OxygenOS UI and redesigning its icons.


Technology

MBU launches balloon satellite in partnership with NARL


Mohan Babu University (MBU), in association with the National Atmospheric Research Laboratory, has launched a ‘High-Altitude Balloon Satellite’. The MBUSAT-1 project, which was launched in November last year, gives students and faculty real-world experience in space technology and atmospheric science.

The project also aims to ignite a passion for science and space exploration in young minds through community outreach programs, shaping the next generation of scientists and engineers. MBUSAT-1 will enable young innovators to pursue a career in the growing space industry as well. 

According to university officials, a key goal of the project is collecting vital atmospheric data to enhance weather forecasting and disaster management, with the potential to save lives. MBU has selected as many as 25 students from various engineering disciplines to lead this ambitious effort. Specialised teams will also be formed under faculty guidance as part of the project.

The project, partnering with NARL, boasts impressive cost-effectiveness, with a total investment of only Rs 1.5 lakh, making it an economical and efficient endeavour.


“The launch of MBUSAT-1 underscores MBU’s dedication to advancing space research and educational excellence,” said pro-chancellor Vishnu Manchu. 

NARL, a leading research institution, provides technical skills and experience in satellite design, testing, and launch operations. The collaborative venture between MBU and NARL marks a significant advancement in India’s space research efforts. 


Technology

Venom 3 is already falling short at the box office


It may only be halfway through its opening weekend, but Venom: The Last Dance is already falling short of its franchise’s standards at the box office.

The new comic book film brings Tom Hardy’s big-screen adventures as Eddie Brock to an end and was, therefore, marketed as a zany and climactic final outing for the superhero genre’s quirkiest duo. Unfortunately, while certain elements of the film’s early trailers — like the Venom-ized horse that Eddie and his symbiote pal use at one point to travel a bit more quickly — went viral online, it doesn’t seem like moviegoers feel as compelled to seek out Venom: The Last Dance like they did its two predecessors.

According to Variety, Venom: The Last Dance has so far earned only $22 million stateside from all of its Friday and preview screenings. That means the film is currently en route to falling shy of its initial box office projections, which placed its opening weekend total around $65 million. The movie’s first-day domestic gross is also lower than the $32 million that Venom accrued in the same amount of time in 2018 and the $37 million that 2021’s Venom: Let There Be Carnage raked in across its preview and initial Friday screenings.

Venom smiles underground in Venom: The Last Dance.
Sony Pictures Releasing

The fact that Sony’s third — and potentially final — Venom film is on track to perform worse financially than the two films that preceded it may not come as much of a surprise to those who have been keeping up with The Last Dance. The film received largely negative reviews prior to its theatrical debut, and multiple critics dinged the sequel for not focusing enough on the wacky, fun relationship between Hardy’s Eddie and his immature alien pal. The first two Venom movies didn’t receive overwhelmingly positive reviews, either, but The Last Dance’s current audience score on Rotten Tomatoes is also lower than those of its predecessors.

Ultimately, it remains to be seen how close Venom: The Last Dance comes to matching its predecessors’ overall $856 million and $506 million respective worldwide box office totals. Both Venom and Let There Be Carnage notably earned more outside of the U.S. than they did domestically, and it’s possible The Last Dance will continue that franchise trend and end up doing better than its current earnings suggest. Only time will tell.


Venom: The Last Dance is now playing in theaters.

Technology

Google could release Gemini 2.0 before the end of the year


Google Gemini could be getting a 2.0 version soon, as it’s now being reported that Google has plans for a launch by the end of the year. That would put it about 12 months after the release of the first version, which Google launched in December of 2023.

Gemini has been integrated into a number of Google products since it launched. You can find it on Google’s Pixel 8 and Pixel 9 smartphones, and it’s now integrated with the Pixel Buds Pro 2 earbuds. It’s also available in Gmail, on the Galaxy S24 series, and in several other products. At present, Google’s most advanced model is Gemini 1.5 Pro, and the company offers a handful of Gemini plans at varying price points.

The most advanced plan, aptly named Gemini Advanced, includes access to features like Gemini Live, which at present doesn’t do much beyond conversing with you the way a human would. However, that could change with the launch of Gemini 2.0.

Google may launch Gemini 2.0 in December of this year

Google announced and launched Gemini 1.0 in December of 2023, and now it looks like the plan is to do the same a year later. According to The Verge, Google intends to announce Gemini 2.0 sometime in December, with a launch at the same time or soon thereafter.


As with Gemini 1.0, Google may end up rolling out the new version of its AI tool to both developers and end users. No exact date has been mentioned, though, so it’s unclear when exactly in December the announcement and launch would happen.

It’s unclear what new features may be added

Gemini has several features in its current state, but at its most basic you can use it to get answers to your questions, either by feeding it text or images and asking it to provide more information or context. Although we now have a potential Gemini 2.0 announcement and release window, it’s still unclear what new features may come with the version update. Google has also yet to confirm any details about the launch or what the new model will bring to the table.


Technology

‘Ongoing notifications’ similar to Apple’s Live Activities could be coming to Android


Google is reportedly working on a new Android API for what it’s calling Rich Ongoing Notifications, which would allow apps to display at-a-glance information in the status bar, much like Apple’s Live Activities in the Dynamic Island on iPhone. This is according to journalist Mishaal Rahman, who spotted the code in the Android 15 QPR1 Beta 3 release. It could work a lot like the time tracker that currently appears when you’re on a phone call, with a bit of text in a bubble at the top of the display that you can tap to open the app for more details.

Writing for Android Authority, Rahman says the API “will let apps create chips with their own text and background color that live in the status bar.” It could be especially useful for things like transit updates, allowing users to keep track of pertinent information like departure times or an Uber’s ETA while using other apps. The feature isn’t yet complete, though, and it could still be some time before we see it. Rahman predicts it’ll arrive with Android 16.


Technology

How (and why) federated learning enhances cybersecurity




Each year, cyberattacks become more frequent and data breaches become more expensive. Whether companies seek to protect their AI system during development or use their algorithm to improve their security posture, they must alleviate cybersecurity risks. Federated learning might be able to do both.

What is federated learning?

Federated learning is an approach to AI development in which multiple parties train a single model separately. Each participant downloads the current primary model from a central cloud server, trains its configuration independently on local servers, and uploads the resulting update upon completion. This way, participants can collaborate remotely without exposing their raw training data.

The central server weights the update it receives from each disparately trained configuration by the number of samples that participant trained on, aggregating them to create a single global model. All raw information remains on each participant’s local servers or devices; the central repository combines the weighted updates instead of processing raw data.
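This weighting-and-aggregation step is commonly known as federated averaging (FedAvg). Here is a minimal sketch in plain Python, assuming each participant reports its locally trained weights as a flat list of floats along with its sample count; all names are illustrative.

```python
def federated_average(updates):
    """Aggregate client updates into one global model.

    `updates` is a list of (weights, num_samples) pairs, where
    `weights` is a flattened model represented as a list of floats.
    """
    total_samples = sum(n for _, n in updates)
    num_params = len(updates[0][0])
    global_weights = [0.0] * num_params
    for weights, n in updates:
        share = n / total_samples  # weight each client by its sample count
        for i, w in enumerate(weights):
            global_weights[i] += share * w
    return global_weights

# Two clients: one trained on 300 samples, one on 100.
clients = [([1.0, 2.0], 300), ([2.0, 4.0], 100)]
print(federated_average(clients))  # [1.25, 2.5]
```

The client with three times as many samples pulls the global model three times as hard, which is exactly the weighting the article describes.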


Federated learning’s popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance advantages. Research shows this technique can improve an image classification model’s accuracy by up to 20% — a substantial increase.

Horizontal federated learning

There are two types of federated learning. The conventional option is horizontal federated learning. In this approach, data is partitioned across various devices. The datasets share feature spaces but have different samples. This enables edge nodes to collaboratively train a machine learning (ML) model without sharing information.

Vertical federated learning

In vertical federated learning, the opposite is true — features differ, but samples are the same. Features are distributed vertically across participants, each possessing different attributes about the same set of entities. Since just one party has access to the complete set of sample labels, this approach preserves privacy. 
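The difference between the two partitioning schemes can be made concrete with a small, hypothetical example: the same table of users split by rows (horizontal) versus by columns (vertical). All entities and values below are invented for illustration.

```python
rows = [  # (user_id, age, income, label)
    ("u1", 34, 52000, 1),
    ("u2", 29, 48000, 0),
    ("u3", 41, 61000, 1),
    ("u4", 38, 57000, 0),
]

# Horizontal FL: same feature space, different samples.
# e.g. two banks in different regions hold different customers.
bank_a = rows[:2]  # u1, u2 with all features
bank_b = rows[2:]  # u3, u4 with all features

# Vertical FL: same samples, different features.
# e.g. a bank and a retailer both know users u1-u4 but hold
# different attributes; only the retailer holds the labels.
bank_view = [(uid, age) for uid, age, _, _ in rows]
retailer_view = [(uid, income, label) for uid, _, income, label in rows]
```

In the horizontal case each party can run ordinary local training on full rows; in the vertical case the parties must coordinate per sample, since no single party sees a complete feature vector.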

How federated learning strengthens cybersecurity

Traditional development is prone to security gaps. Although algorithms must have expansive, relevant datasets to maintain accuracy, involving multiple departments or vendors creates openings for threat actors. They can exploit the lack of visibility and broad attack surface to inject bias, conduct prompt engineering or exfiltrate sensitive training data.


When algorithms are deployed in cybersecurity roles, their performance can affect an organization’s security posture. Research shows that model accuracy can suddenly diminish when processing new data. Although AI systems may appear accurate, they may fail when tested elsewhere because they learned to take bogus shortcuts to produce convincing results.

Since AI cannot think critically or genuinely consider context, its accuracy diminishes over time. Even though ML models evolve as they absorb new information, their performance will stagnate if their decision-making skills are based on shortcuts. This is where federated learning comes in.

Other notable benefits of training a centralized model via disparate updates include privacy and security. Since every participant works independently, no one has to share proprietary or sensitive information to progress training. Moreover, the fewer data transfers there are, the lower the risk of a man-in-the-middle attack (MITM).

All updates are encrypted for secure aggregation. Multi-party computation hides them behind various encryption schemes, lowering the chances of a breach or MITM attack. Doing so enhances collaboration while minimizing risk, ultimately improving security posture.
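One common multi-party computation technique for this is pairwise masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server sees only masked values, yet the masks cancel in the aggregate. The toy Python sketch below is a simplification under stated assumptions (scalar updates, a shared seed standing in for cryptographic key agreement); production protocols such as the Bonawitz et al. secure aggregation design also handle client dropouts.

```python
import random

def mask_updates(raw_updates, seed=0):
    """Return masked updates whose sum equals the sum of the originals."""
    rng = random.Random(seed)
    n = len(raw_updates)
    masked = list(raw_updates)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-100, 100)  # pairwise shared mask
            masked[i] = masked[i] + r   # client i adds the mask
            masked[j] = masked[j] - r   # client j subtracts it
    return masked

raw = [0.5, 1.5, 2.0]      # clients' true (scalar) updates
masked = mask_updates(raw)
# Individual masked values look random to the server, but every
# mask is added once and subtracted once, so the sum is preserved.
print(abs(sum(masked) - sum(raw)) < 1e-6)  # True
```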


One overlooked advantage of federated learning is speed. It has a much lower latency than its centralized counterpart. Since training happens locally instead of on a central server, the algorithm can detect, classify and respond to threats much faster. Minimal delays and rapid data transmissions enable cybersecurity professionals to handle bad actors with ease.

Considerations for cybersecurity professionals

Before leveraging this training technique, AI engineers and cybersecurity teams should consider several technical, security and operational factors.

Resource usage

AI development is expensive. Teams building their own model should expect to spend anywhere from $5 million to $200 million upfront, and upwards of $5 million annually for upkeep. The financial commitment is significant even with costs spread out among multiple parties. Business leaders should account for cloud and edge computing costs.

Federated learning is also computationally intensive, which may introduce bandwidth, storage space or computing limitations. While the cloud enables on-demand scalability, cybersecurity teams risk vendor lock-in if they are not careful. Strategic hardware and vendor selection is of the utmost importance.


Participant trust

While disparate training is secure, it lacks transparency, making intentional bias and malicious injection a concern. A consensus mechanism is essential for approving model updates before the centralized algorithm aggregates them. This way, teams can minimize threat risk without sacrificing confidentiality or exposing sensitive information.

Training data security

While this machine learning training technique can improve a firm’s security posture, nothing is 100% secure. Developing a model in the cloud comes with the risk of insider threats, human error and data loss. Redundancy is key. Teams should create backups to prevent disruption and roll back updates if necessary.

Decision-makers should revisit their training datasets’ sources. In ML communities, heavy borrowing of datasets occurs, raising well-founded concerns about model misalignment. On Papers With Code, more than 50% of task communities use borrowed datasets at least 57.8% of the time. Moreover, 50% of the datasets there come from just 12 universities.

Applications of federated learning in cybersecurity

Once the primary algorithm aggregates and weighs participants’ updates, it can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold — while threat actors are left guessing since they cannot easily exfiltrate data, professionals pool insights for highly accurate output.


Federated learning is ideal for adjacent applications like threat classification or indicator of compromise detection. The AI’s large dataset size and extensive training build its knowledge base, curating expansive expertise. Cybersecurity professionals can use the model as a unified defense mechanism to protect broad attack surfaces.

ML models — especially those that make predictions — are prone to drift over time as concepts evolve or variables become less relevant. With federated learning, teams could periodically update their model with varied features or data samples, resulting in more accurate, timely insights.

Leveraging federated learning for cybersecurity

Whether companies want to secure their training dataset or leverage AI for threat detection, they should consider using federated learning. This technique could improve accuracy and performance and strengthen their security posture as long as they strategically navigate potential insider threats or breach risks.

 Zac Amos is the features editor at ReHack.


Technology

After selling Anchor to Spotify, co-founders reunite to build AI educational startup Oboe


The co-founders who sold their last startup to Spotify are working on a new project: an AI-powered educational startup called Oboe backed by a $4 million seed investment. The new company, hailing from Nir Zicherman and Michael Mignano, aims to democratize access to learning the way that their prior startup, Anchor, made it possible for anyone to create a podcast. That is, Oboe intends to produce a user-friendly interface that helps people accomplish the task at hand — in this case, expanding their knowledge via a combination of AI technology, audio, and video.

“This idea is something that Mike and I have been talking about for a long time now, because we have both felt for a while that there is a really big opportunity in the education space — much bigger than I think a lot of people realize,” Zicherman says.

After taking a brief period to recharge after leaving Spotify in October 2023, Zicherman was soon ready to roll up his sleeves and build something new with a small team, he says, similar to Anchor’s early days. He also took inspiration from his work at Spotify, where he had spent the last few years building out its audiobooks business and scaling it to more markets.

“One of the big things … that drew me to audiobooks, as a business, and as a product, was this idea of enabling so many more people than ever before to get access to incredible, high-quality content, including educational content and making that really ubiquitous,” he notes.


Oboe looks to expand on that mission, but not via audiobooks.

Instead, the team envisions a product that would allow more people to engage with “active learning journeys,” as the company calls them, by offering learning tools that optimize the development of a curriculum and personalize it to the way each individual user learns most effectively.

The tools offered will be available across platforms and will involve native applications, similar to existing online learning services.

However, the startup intends to differentiate itself from others in the space by leveraging AI to both customize the curriculum materials and enable an interactive experience. For instance, synthetic AI voices will be a part of the offering. Meanwhile, machine learning combined with Oboe’s back-end architecture will help to personalize how the material is presented and will improve over time.


Because AI tends to hallucinate or cite bad information, part of Oboe’s secret sauce will be focused on ensuring the content is accurate, high-quality, and scalable.

In part, Oboe will rely on third-party foundational AI models, but the team is also undertaking a “significant” amount of work in-house to build its data architecture and optimize the curriculum on a per-user basis, Zicherman says.

“This product is not one of these thin wrappers around existing LLMs. There’s a lot more happening under the hood,” he teases.

In addition, access to the material will be made available across different formats. When you can’t look at a screen — like if out for a jog or driving to work — you could tune in via audio. Other times, you may be watching videos, using an app, or engaging with a website.


Initially, Oboe will target just a few verticals, ranging from someone teaching themselves programming to a college student supplementing their classroom experience with more materials, for instance. These debut courses will focus on learners older than the K through 12 demographic, but Oboe’s eventual goal is to fulfill its mission of “making humanity smarter.” (A tall order, indeed.) That includes the K through 12 and higher-ed space, as well as those upskilling for their careers, or just engaged on their own to learn something new, like playing a new instrument. (Fun fact: Not only is the oboe the instrument an orchestra tunes to, but it’s also the root of the Japanese word meaning “to learn.”)

New York-based Oboe isn’t yet ready to share much more in terms of product details, but it has raised funding from a crowd of investors, including those who have worked with Zicherman and Mignano previously. Mignano will remain a full-time partner at Lightspeed but will serve on the board of the new company and support Zicherman in his role as CEO, he says.

“In my co-founding role at Oboe, Nir and I have worked closely together to set the company up for success through its initial strategy and product direction,” Mignano tells TechCrunch. “My partners at Lightspeed are super supportive of me being both investor and founder — there’s a long history of our investors starting or incubating their own companies. Nir and I were thrilled to raise this initial round from a number of amazing seed funds and angels — many who backed us previously at Anchor,” he adds.

Oboe’s $4 million seed round was led by Eniac Ventures — the VC firm that led Anchor’s seed. The round also includes investment from Haystack, Factorial Capital, Homebrew, Offline Ventures, Scott Belsky, Kayvon Beykpour, Nikita Bier, Tim Ferriss, and Matt Lieber.



Copyright © 2024 WordupNews.com