Qualcomm’s Snapdragon 8 Elite DESTROYS Apple’s A18 Pro

To close out the Snapdragon Summit this week, Qualcomm held a benchmarking session for the new chipset, showing just how confident it is in the new second-generation Oryon CPU. And it did not disappoint.

We were given about an hour to benchmark a reference design device and run all sorts of tests. We mostly ran the benchmarks we use in our reviews, so we can compare the chip against many others from the past year, like the A18 Pro, Snapdragon 8 Gen 3 and MediaTek Dimensity 9300. The results were impressive, though not entirely surprising.

Qualcomm has a reference device here at Snapdragon Summit, which we’ll call the “Snapdragon 8 Elite QRD” moving forward. It of course has the Snapdragon 8 Elite inside, along with 24GB of LPDDR5X RAM running at up to 4.8Gbps, 1TB of UFS 4.0 storage and a 6.8-inch WFHD+ 144Hz AMOLED display. It also has a fairly small battery at just 4,167mAh, but this device isn’t meant to show off battery life.

Geekbench 6

Let’s start off with Geekbench 6. As you likely know, Geekbench 6 tests the raw performance of the CPU, in both single-core and multi-core workloads, as well as the GPU. It’s a really good test for gauging just how powerful a device is, and here are the results:

  • Single-core: 3,220
  • Multi-core: 10,415
  • GPU: 17,867

These are pretty insane numbers, which actually beat Apple’s latest and greatest on the CPU side, even though Apple is still doing some impressive things with its CPUs. Here’s how it compares to the Galaxy S24 Ultra (Snapdragon 8 Gen 3 for Galaxy), Apple iPhone 16 Pro (A18 Pro), Samsung Galaxy Tab S10 Ultra (MediaTek Dimensity 9300), and the Google Pixel 9 Pro XL (Tensor G4).


On the single core, these scores break down as:

  • Snapdragon 8 Elite QRD: 3,220
  • Samsung Galaxy S24 Ultra: 2,176
  • Apple iPhone 16 Pro: 2,981
  • Samsung Galaxy Tab S10 Ultra: 2,191
  • Google Pixel 9 Pro XL: 1,947

On multi-core, the scores are:

  • Snapdragon 8 Elite QRD: 10,415
  • Samsung Galaxy S24 Ultra: 6,567
  • Apple iPhone 16 Pro: 7,939
  • Samsung Galaxy Tab S10 Ultra: 7,358
  • Google Pixel 9 Pro XL: 4,654

And finally on the GPU test, these scores are:

  • Snapdragon 8 Elite QRD: 17,867
  • Samsung Galaxy S24 Ultra: 11,414
  • Apple iPhone 16 Pro: 32,846
  • Samsung Galaxy Tab S10 Ultra: 12,204
  • Google Pixel 9 Pro XL: 6,464

As you can tell, Apple still has a healthy margin on the GPU, but then again the iPhone does run console-quality games on its chipsets, so that score is what you’d expect from an Apple chip. Comparing the Snapdragon 8 Gen 3 to the new Snapdragon 8 Elite, we’re seeing roughly a 48% single-core, 59% multi-core and 57% GPU increase year-over-year. That kind of jump is nearly unheard of in the world of processors, but it’s definitely good to see.
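For reference, those year-over-year figures fall straight out of the scores listed above; here’s a quick check in plain Python using the Geekbench 6 numbers quoted in this article:

```python
# Geekbench 6 scores quoted above: Snapdragon 8 Elite QRD vs. the
# Snapdragon 8 Gen 3 for Galaxy in the Galaxy S24 Ultra.
elite = {"single-core": 3220, "multi-core": 10415, "gpu": 17867}
gen3 = {"single-core": 2176, "multi-core": 6567, "gpu": 11414}

for test, score in elite.items():
    uplift = (score / gen3[test] - 1) * 100
    print(f"{test}: +{uplift:.0f}% year over year")
# single-core: +48%, multi-core: +59%, gpu: +57%
```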


3DMark Extreme Stress Test

This is another test we run quite a bit on our review devices, so we can show how the chip compares to other devices on the market today.

The Snapdragon 8 Elite QRD scored 7,121 in the 3DMark Extreme Stress Test. We typically measure thermals with this test as well, but since the QRD isn’t designed to dissipate heat like a retail phone, that comparison would be unfair. We’ll have to save it for a device that launches with the Snapdragon 8 Elite, which could come as soon as next week.

Compared to other chipsets, this really blows everything out of the water.

  • Snapdragon 8 Elite QRD: 7,121
  • Samsung Galaxy S24 Ultra: 4,376
  • Apple iPhone 16 Pro: 3,976
  • Samsung Galaxy Tab S10 Ultra: 5,352
  • Google Pixel 9 Pro XL: 2,565

That’s nearly double what the A18 Pro managed, about 63% faster than the Snapdragon 8 Gen 3 in the Galaxy S24 Ultra, and almost triple what the Tensor G4 has done in our testing. That last gap is expected, as the Tensor G4 is built for AI rather than raw speed.

AnTuTu

The last test we ran isn’t one we typically include in our reviews, but we had a few devices with different chipsets here in Maui to test against. Here’s how it broke down.

  • Snapdragon 8 Elite QRD: 3,035,115
  • Xiaomi 14 Ultra: 1,771,035
  • Apple iPhone 16 Pro: 1,657,579
  • Samsung Galaxy Tab S10 Ultra: 2,038,129
  • Google Pixel 9 Pro XL: 1,187,754

This means that on AnTuTu, the Snapdragon 8 Elite has roughly a million-point lead over every other device here, and it scores more than two and a half times what the Tensor G4 did, which is not surprising in the least.

There’s a new king in the world of mobile devices, and anything powered by the Snapdragon 8 Elite is going to have incredible performance.


Nvidia CEO touts India’s progress with sovereign AI and over 100K AI developers trained


Nvidia CEO Jensen Huang noted India’s progress in its AI journey in a conversation at the Nvidia AI Summit in India. India now has more than 2,000 Nvidia Inception AI companies and more than 100,000 developers trained in AI.

That compares to a global developer count of 650,000 people trained in Nvidia AI technologies, and India’s strategic move into AI is a good example of what Huang calls “sovereign AI,” where countries choose to create their own AI infrastructure to maintain control of their own data.

Nvidia said that India is becoming a key producer of AI for virtually every industry — powered by thousands of startups that are serving the country’s multilingual, multicultural population and scaling out to global users.

The country is one of the top six global economies leading generative AI adoption and has seen rapid growth in its startup and investor ecosystem, rocketing to more than 100,000 startups this year from under 500 in 2016.

More than 2,000 of India’s AI startups are part of Nvidia Inception, a free program for startups designed to accelerate innovation and growth through technical training and tools, go-to-market support and opportunities to connect with venture capitalists through the Inception VC Alliance.

At the NVIDIA AI Summit, taking place in Mumbai through Oct. 25, around 50 India-based startups are sharing AI innovations delivering impact in fields such as customer service, sports media, healthcare and robotics.

Conversational AI for Indian Railway customers


Bengaluru-based startup CoRover.ai already has over a billion users of its LLM-based conversational AI platform, which includes text, audio and video-based agents.

“The support of NVIDIA Inception is helping us advance our work to automate conversational AI use cases with domain-specific large language models,” said Ankush Sabharwal, CEO of CoRover, in a statement. “NVIDIA AI technology enables us to deliver enterprise-grade virtual assistants that support 1.3 billion users in over 100 languages.”

CoRover’s AI platform powers chatbots and customer service applications for major private and public sector customers, such as the Indian Railway Catering and Tourism Corporation (IRCTC), the official provider of online tickets, drinking water and food for India’s railway stations and trains.

Dubbed AskDISHA, after the Sanskrit word for direction, the IRCTC’s multimodal chatbot handles more than 150,000 user queries daily, and has facilitated over 10 billion interactions for more than 175 million passengers to date. It assists customers with tasks such as booking or canceling train tickets, changing boarding stations, requesting refunds, and checking the status of their booking in languages including English, Hindi, Gujarati and Hinglish — a mix of Hindi and English.

The deployment of AskDISHA has resulted in a 70% improvement in IRCTC’s customer satisfaction rate and a 70% reduction in queries through other channels like social media, phone calls and emails.

CoRover’s modular AI tools were developed using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI. They run on Nvidia GPUs in the cloud, enabling CoRover to automatically scale up compute resources during peak usage — such as the moment train tickets are released.

Nvidia also noted that VideoVerse, founded in Mumbai, has built a family of AI models using Nvidia technology to support AI-assisted content creation in the sports media industry — enabling global customers including the Indian Premier League for cricket, the Vietnam Basketball Association and the Mountain West Conference for American college football to generate game highlights up to 15 times faster and boost viewership. Its Magnifi platform uses techniques such as vision analysis to detect players and key moments for short-form video.

Nvidia also highlighted Mumbai-based startup Fluid AI, which offers generative AI chatbots, voice calling bots and a range of application programming interfaces to boost enterprise efficiency. Its AI tools let workers perform tasks like creating slide decks in under 15 seconds.

Karya, based in Bengaluru, is a smartphone-based digital work platform that enables members of low-income and marginalized communities across India to earn supplemental income by completing language-based tasks that support the development of multilingual AI models. Nearly 100,000 Karya workers are recording voice samples, transcribing audio or checking the accuracy of AI-generated sentences in their native languages, earning nearly 20 times India’s minimum wage for their work. Karya also provides royalties to all contributors each time its datasets are sold to AI developers.

Karya is also employing over 30,000 low-income women participants across six language groups in India to help create datasets that will support diverse AI applications across agriculture, healthcare and banking.

Serving over a billion local language speakers with LLMs


Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its government and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These public and private sector projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable government services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, Nvidia has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an Nvidia NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any Nvidia GPU-accelerated system for optimized performance.
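NIM LLM microservices expose an OpenAI-compatible HTTP API, so once the container for the Hindi model is up and running, calling it looks roughly like the sketch below. The port and model identifier here are illustrative assumptions rather than details from the article, so check the running service for the exact values.

```python
from openai import OpenAI

# Assumes the Nemotron Hindi NIM container is already running locally and
# serving its OpenAI-compatible API on port 8000. The model name is
# illustrative -- query the service's /v1/models endpoint for the real one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

response = client.chat.completions.create(
    model="nvidia/nemotron-4-mini-hindi-4b-instruct",  # placeholder identifier
    messages=[{"role": "user", "content": "भारत की राजधानी क्या है?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```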

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects.

Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion parameter multilingual language model developed by Nvidia. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.
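Nvidia hasn’t published the exact recipe here, but “distilled” generally means training the smaller model to reproduce a larger teacher model’s output distribution rather than only the raw labels. A minimal sketch of a generic distillation objective, with the temperature and mixing weight chosen purely for illustration:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Blend standard cross-entropy on the ground-truth labels with a KL term
    # that pushes the student's softened distribution toward the teacher's.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd_term = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term
```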

The dataset was created with Nvidia NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses Nvidia RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership.

It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.
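NeMo Curator ships those steps as prebuilt, GPU-accelerated pipelines; as a plain-Python illustration of what a quality filter plus exact deduplication conceptually does (this is not the library’s API), consider:

```python
import hashlib

def clean_corpus(docs, min_words=20):
    # Drop very short documents and remove exact duplicates by hashing
    # normalized text. Purely illustrative of the curation concept.
    seen, kept = set(), []
    for doc in docs:
        text = " ".join(doc.split())            # normalize whitespace
        if len(text.split()) < min_words:       # quality filter
            continue
        digest = hashlib.sha256(text.lower().encode("utf-8")).hexdigest()
        if digest in seen:                      # exact-duplicate filter
            continue
        seen.add(digest)
        kept.append(doc)
    return kept
```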

After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.

It’s available as part of the Nvidia AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments. A number of Indian companies are using the services.

India’s AI factories can transform economy


India’s leading cloud infrastructure providers and server manufacturers are ramping up accelerated data center capacity in what Nvidia calls AI factories. By year’s end, they’ll have boosted Nvidia GPU deployment in the country by nearly 10 times compared to 18 months ago.

Tens of thousands of Nvidia Hopper GPUs will be added to build AI factories — large-scale data centers for producing AI — that support India’s large businesses, startups and research centers running AI workloads in the cloud and on premises. This will cumulatively provide nearly 180 exaflops of compute to power innovation in healthcare, financial services and digital content creation.

Announced today at the Nvidia AI Summit, this buildout of accelerated computing technology is led by data center provider Yotta Data Services, global digital ecosystem enabler Tata Communications, cloud service provider E2E Networks and original equipment manufacturer Netweb.

Their systems will enable developers to harness domestic data center resources powerful enough to fuel a new wave of large language models, complex scientific visualizations and industrial digital twins that could propel India to the forefront of AI-accelerated innovation.

Yotta Data Services is providing Indian businesses, government departments and researchers access to managed cloud services through its Shakti Cloud platform to boost generative AI adoption and AI education.

Powered by thousands of Nvidia Hopper GPUs, these computing resources are complemented by Nvidia AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade copilots and other generative AI applications.

With Nvidia AI Enterprise, Yotta customers can access Nvidia NIM, a collection of microservices for optimized AI inference, and Nvidia NIM Agent Blueprints, a set of customizable reference architectures for generative AI applications. This will allow them to rapidly adopt optimized, state-of-the-art AI for applications including biomolecular generation, virtual avatar creation and language generation.

“The future of AI is about speed, flexibility and scalability, which is why Yotta’s Shakti Cloud platform is designed to eliminate the common barriers that organizations across industries face in AI adoption,” said Sunil Gupta, CEO of Yotta, in a statement. “Shakti Cloud brings together high-performance GPUs, optimized storage and a services layer that simplifies AI development from model training to deployment, so organizations can quickly scale their AI efforts, streamline operations and push the boundaries of what AI can accomplish.”



Cash collection startup Upflow also wants to handle B2B payments

Upflow, a French startup we’ve been covering for quite a while, originally focused on managing outstanding invoices. The company is now announcing a shift in its strategy to become a B2B payment platform with its own payment gateway to complement its accounts receivable automation solution.

Like many software-as-a-service products, Upflow started by building a central hub designed for one audience in particular: CFOs. From the Upflow dashboards, CFOs and finance teams could see all their company’s invoices, track payments, communicate with team members, and send reminders to clients.

It integrates nicely with other financial tools and services to automatically import data from those third-party services. And a tool like Upflow can be particularly important as many tech companies struggle to raise their next funding round and want to improve their cash balance.

But that was just the first step in a bigger roadmap.

“Basically, my vision has always been that the real problem is payment methods,” Upflow co-founder and CEO Alexandre Louisy (pictured above) told TechCrunch. “Today, when you pay in a store, you pay with your phone. When you pay for your Spotify subscription or your Amazon subscription, you don’t even think about how you pay.”

“But when you look at B2B payments, the way you pay today hasn’t changed in the last 50 years. And for us, that’s why people struggle with late payments. The thing I’m really trying to fight against is the idea that late payments are linked to bad payers.”

According to him, around 90% of B2B payments still happen offline in the U.S. It’s still mostly paper checks. In Europe, it’s a different story as companies have adopted bank transfers. But transfers “are completely unstructured and require manual reconciliation,” Louisy said.

Upflow sells its accounts receivable automation software to midsized companies with annual revenue between $10 million and $500 million. The company’s biggest client generates around $1 billion in annual revenue.

“But when you ask [CFOs], ‘What’s your strategy for setting up direct debit on part of your customer base?’ They don’t have a solution,” Louisy said.

Upflow helps you set up incentive strategies so that a portion of your client base moves to online payments, such as card payments or direct debits. The idea isn’t that all your clients are going to pay with a business card overnight. But Upflow can help you change the payment method for something like 20% or 30% of your client base.

Just like CRMs help you manage your sales processes with clients, Upflow now wants to be a financial relationship management (FRM) solution. It’s an interesting strategy as it shows how a startup like Upflow is thinking about diversifying its revenue sources.

“With our model shift, we’re moving from a model where we are 100% based on SaaS revenue to a hybrid model where we have SaaS revenue and payment revenue because we have our own payment gateway that we’ve set up with Stripe,” Louisy said.
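Upflow hasn’t shared the details of that gateway, but a Stripe-backed flow for collecting a B2B invoice typically boils down to creating a payment object per invoice and reconciling it afterwards. A minimal sketch under those assumptions, with the amount, currency, payment methods and metadata purely illustrative:

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder test key

# Illustrative only -- not Upflow's actual integration. Cards and SEPA direct
# debit are the kinds of online methods the article describes nudging B2B
# clients toward; the metadata ties the payment back to the invoice.
intent = stripe.PaymentIntent.create(
    amount=125_000,                                # 1,250.00 EUR, in cents
    currency="eur",
    payment_method_types=["card", "sepa_debit"],
    metadata={"invoice_id": "INV-2024-0042"},      # hypothetical invoice reference
)
print(intent.id, intent.status)
```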

Payments are the second building block in Upflow’s product suite. Up next, the company plans to integrate financing options, with B2B “buy now, pay later” payment methods on the supplier front and factoring for a company’s outstanding invoices.

“We evaluate solutions … that provide embedded finance,” Louisy said. “It’s not our core business to perform risk assessment. On the other hand, what’s interesting is that we can bring them useful data for credit scoring that they don’t necessarily have when they just connect to one of our users’ accounts.”


Automatic emergency braking is getting better at preventing crashes

Automatic emergency braking (AEB) isn’t perfect, but the technology is improving, according to a recent study conducted by AAA. The research comes on the heels of a new federal rule requiring all vehicles to have the most robust version of AEB by 2029.

AAA wanted to see how newer vehicles with AEB fared compared to older models with the technology. AEB uses forward-facing cameras and other sensors to automatically tell the car to apply the brakes when a crash is imminent. And according to the test results, newer versions of AEB are much better at preventing forward collisions than older versions of the tech.

The motorist group conducted its test on a private closed course using older (2017–2018) and newer versions (2024) of the same three vehicles: Jeep Cherokee, Nissan Rogue, and Subaru Outback. Each vehicle was tested at 12mph, 25mph, and 35mph to see how well AEB performed at different speeds. And a fake vehicle was placed in the middle of the road to see whether AEB could prevent a collision.

100 percent of new vehicles braked before a collision

Unsurprisingly, the newer models performed a lot better than the older ones: 100 percent of the 2024 vehicles braked before a collision, as compared to 51 percent of the older vehicles.

Still, this more recent test only involved forward collisions. Past AAA studies found AEB to be ill-equipped at preventing other common types of crashes, like T-bone collisions and left turns in front of approaching vehicles.

“Since we began testing AEB in 2014, the advancements by automakers are commendable and promising in improving driver safety,” said Greg Brannon, director of automotive engineering research. “There is still significant work ahead to ensure the systems work at higher speeds.”
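As a rough illustration of why higher speeds are harder (this is not AAA’s methodology), braking distance grows with the square of speed, so the margin for detection and reaction shrinks quickly; the deceleration and latency figures below are assumptions:

```python
# Illustrative stopping-distance estimate, assuming ~0.8 g of braking
# deceleration and 0.3 s of detection/actuation latency -- not AAA's test data.
G = 9.81            # gravity, m/s^2
DECEL = 0.8 * G     # assumed braking deceleration, m/s^2
LATENCY = 0.3       # assumed system delay before braking, seconds

for mph in (12, 25, 35):
    v = mph * 0.44704                          # mph -> m/s
    distance = v * LATENCY + v ** 2 / (2 * DECEL)
    print(f"{mph} mph: ~{distance:.1f} m to stop")
# 12 mph: ~3.4 m, 25 mph: ~11.3 m, 35 mph: ~20.3 m
```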

It’s a positive sign that AEB is improving, considering the National Highway Traffic Safety Administration (NHTSA) finalized a new requirement for all light-duty vehicles to have robust AEB systems by 2029. Around 90 percent of new vehicles today come standard with AEB, but the new rule requires automakers to adopt a more robust version of the technology that can stop vehicles traveling at higher speeds and detect vulnerable road users, like cyclists and pedestrians, even at night.

Even so, automakers are scrambling to put the brakes on the new rule’s adoption. Earlier this year, the Alliance for Automotive Innovation, which represents most of the major automakers, sent a letter to NHTSA arguing that the final rule is “practically impossible with available technology” and urging the agency to delay its implementation.


Liquid Web launches new GPU hosting service for AI and HPC — and users can access a range of Nvidia GPUs (including H100s)

Liquid Web has announced the launch of a new GPU hosting service designed to keep pace with growing high-performance computing (HPC) requirements.

The new offering harnesses Nvidia GPUs and is aimed specifically at developers focused on AI and machine learning workloads, the company confirmed.


Sunita Williams turns 59! Find out how the astronaut celebrated her birthday in space- The Week

NASA astronaut Sunita Williams turned 59 in space on Thursday. It was the second time she has celebrated a birthday aboard the International Space Station (ISS), which orbits around 400 kilometres above Earth.

Her first in-orbit birthday celebration took place during a 2012 mission.

Since June 6, Sunita Williams, along with NASA astronaut Barry ‘Butch’ Wilmore, has been aboard the ISS as part of the Boeing Crew Flight Test mission. Due to technical issues with the Boeing Starliner spacecraft, their stay has been unexpectedly extended.

They are expected to return in February 2025. 

On her special day, Williams took up the task of maintaining the space laboratory. 

Reportedly, Williams marked her birthday by replacing filters in the station’s waste and hygiene compartment, carrying out the essential maintenance with the help of NASA astronaut Don Pettit to ensure safe and healthy living conditions on the ISS.

Williams also participated in a conference with Mission Control in Houston, Texas, joining Wilmore and astronaut Frank Rubio in discussions with flight directors to outline mission objectives and upcoming tasks.

Sunita Williams also received birthday wishes from Bollywood stars along with loved ones and family. 

Saregama Official shared a heartwarming compilation video on Instagram featuring famous Indian stars singing ‘Happy Birthday’ in Hindi to the astronaut.

The video began with filmmaker Karan Johar sending birthday wishes to Williams, followed by singers Hariharan, Sonu Nigam, Neeti Mohan and Shaan Mukherji.

After joining NASA’s astronaut program in 1998, Williams launched into space for the first time on December 9, 2006, on the STS-116 mission.

As a flight engineer for Expeditions 14 and 15, Williams set multiple records, including over 29 hours of spacewalks and more than 195 days in orbit.

Piloting Boeing’s CST-100 Starliner on its first crewed test flight, Williams made history by successfully docking with the ISS despite facing technical challenges.


Watch Boston Dynamics’ Spot robot helping out at Michelin

Spot at Michelin | Boston Dynamics

It’s been four years since the robot wizards at Boston Dynamics declared their dog-like Spot robot ready for the workplace.

In that time, the quadruped robot has been trialed in various roles at a number of firms, including for factory mapping at Ford, safety inspections at a Kia auto plant, and radiation surveys for Dominion Energy.

Its latest gig is at a Michelin facility in Lexington, South Carolina, which manufactures passenger and light truck tires. A video (top) released by Boston Dynamics on Wednesday shows Spot making its way around the site, carrying out various tasks as part of a pilot program.

“We were like kids at Christmas when we first got Spot,” said Wayne Pender, a reliability manager at Michelin whose job it is to ensure that all of the facility’s equipment is running at optimal efficiency.

Ryan Burns, also a reliability manager, said it’s important to get ahead of equipment failures in order to avoid a plant shutdown. Spot helps out by scanning 350 locations with a thermal camera to see if any parts are overheating or otherwise performing abnormally. Using specially designed software called Orbit, Spot then processes the data and sends it to its operators for final analysis. If an anomaly is spotted, a human technician is sent out to review the situation before a final decision is made on how to respond.
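Boston Dynamics hasn’t detailed how Orbit flags anomalies, but conceptually a thermal inspection route like this often reduces to comparing each location’s reading against its own historical baseline. A minimal sketch of that idea (generic illustration, not the Orbit software):

```python
# Generic thermal-anomaly flagging -- not Boston Dynamics' Orbit software.
# Each inspection point is compared against its own baseline temperature;
# readings far above it get routed to a technician for review.
def flag_anomalies(readings, baselines, margin_c=10.0):
    """readings/baselines map a location id to a temperature in Celsius."""
    flagged = []
    for location, temp in readings.items():
        baseline = baselines.get(location)
        if baseline is not None and temp - baseline > margin_c:
            flagged.append((location, temp, baseline))
    return flagged

# Example: "pump-17" runs 18 C hotter than usual and gets flagged.
print(flag_anomalies({"pump-17": 78.0, "belt-04": 41.0},
                     {"pump-17": 60.0, "belt-04": 40.0}))
```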

“From a technician standpoint, Spot going out and doing these routes eliminates a mundane task that humans are doing,” Burns said. “By Spot finding these anomalies and these issues, it gives the technician more time to go out and plan and schedule how they’re going to fix the problem versus going out, identifying, then trying to plan and schedule everything.”

Burns added that it would be ideal to have more Spots at the facility so that the company can improve its inspection procedures, leading to enhanced efficiency and greater output.

Boston Dynamics is continuing to develop Spot and refine its capabilities through various pilot programs and partnerships in the U.S. and beyond.

