Technology

Nvidia CEO touts India’s progress with sovereign AI and over 100K AI developers trained


Nvidia CEO Jensen Huang noted India’s progress in its AI journey in a conversation at the Nvidia AI Summit in India. India now has more than 2,000 Nvidia Inception AI companies and more than 100,000 developers trained in AI.

That compares with a global count of 650,000 developers trained in Nvidia AI technologies. India’s strategic move into AI is a good example of what Huang calls “sovereign AI,” in which countries build their own AI infrastructure to maintain control of their own data.

Nvidia said that India is becoming a key producer of AI for virtually every industry — powered by thousands of startups that are serving the country’s multilingual, multicultural population and scaling out to global users.

The country is one of the top six global economies leading generative AI adoption and has seen rapid growth in its startup and investor ecosystem, rocketing to more than 100,000 startups this year from under 500 in 2016.

More than 2,000 of India’s AI startups are part of Nvidia Inception, a free program for startups designed to accelerate innovation and growth through technical training and tools, go-to-market support and opportunities to connect with venture capitalists through the Inception VC Alliance.

At the Nvidia AI Summit, taking place in Mumbai through Oct. 25, around 50 India-based startups are sharing AI innovations delivering impact in fields such as customer service, sports media, healthcare and robotics.

Conversational AI for Indian Railway customers

Nvidia is working closely with India on AI factories.

Bengaluru-based startup CoRover.ai already has over a billion users of its LLM-based conversational AI platform, which includes text, audio and video-based agents.

“The support of NVIDIA Inception is helping us advance our work to automate conversational AI use cases with domain-specific large language models,” said Ankush Sabharwal, CEO of CoRover, in a statement. “NVIDIA AI technology enables us to deliver enterprise-grade virtual assistants that support 1.3 billion users in over 100 languages.”

CoRover’s AI platform powers chatbots and customer service applications for major private and public sector customers, such as the Indian Railway Catering and Tourism Corporation, the official provider of online tickets, drinking water and food for India’s railway stations and trains.

Dubbed AskDISHA, after the Sanskrit word for direction, the IRCTC’s multimodal chatbot handles more than 150,000 user queries daily, and has facilitated over 10 billion interactions for more than 175 million passengers to date. It assists customers with tasks such as booking or canceling train tickets, changing boarding stations, requesting refunds, and checking the status of their booking in languages including English, Hindi, Gujarati and Hinglish — a mix of Hindi and English.

The deployment of AskDISHA has resulted in a 70% improvement in IRCTC’s customer satisfaction rate and a 70% reduction in queries through other channels like social media, phone calls and emails.

CoRover’s modular AI tools were developed using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI. They run on Nvidia GPUs in the cloud, enabling CoRover to automatically scale up compute resources during peak usage — such as the moment train tickets are released.

Nvidia also noted that VideoVerse, founded in Mumbai, has built a family of AI models using Nvidia technology to support AI-assisted content creation in the sports media industry — enabling global customers including the Indian Premier League for cricket, the Vietnam Basketball Association and the Mountain West Conference for American college football to generate game highlights up to 15 times faster and boost viewership. Its Magnifi platform uses techniques such as vision analysis to detect players and key moments for short-form video.

Nvidia also highlighted Mumbai-based startup Fluid AI, which offers generative AI chatbots, voice calling bots and a range of application programming interfaces to boost enterprise efficiency. Its AI tools let workers perform tasks like creating slide decks in under 15 seconds.

Karya, based in Bengaluru, is a smartphone-based digital work platform that enables members of low-income and marginalized communities across India to earn supplemental income by completing language-based tasks that support the development of multilingual AI models. Nearly 100,000 Karya workers are recording voice samples, transcribing audio or checking the accuracy of AI-generated sentences in their native languages, earning nearly 20 times India’s minimum wage for their work. Karya also provides royalties to all contributors each time its datasets are sold to AI developers.

Karya is also employing over 30,000 low-income women across six language groups in India to help create a dataset that will support the creation of diverse AI applications across agriculture, healthcare and banking.

Serving over a billion local language speakers with LLMs

India is investing in sovereign AI in an alliance with Nvidia.

Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its government and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These public and private sector projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable government services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, Nvidia has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an Nvidia NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any Nvidia GPU-accelerated system for optimized performance.
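NIM microservices expose an OpenAI-style chat-completions REST interface, so using a deployed model typically amounts to running the container and POSTing JSON to it. As a minimal sketch (the model identifier and the localhost endpoint in the comments are assumptions inferred from this article, not confirmed values), here is how such a request payload might be built:

```python
import json

# Hypothetical model identifier and endpoint, inferred from the article; check
# Nvidia's NIM catalog for the real values before use.
MODEL_ID = "nvidia/nemotron-4-mini-hindi-4b"
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = MODEL_ID) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

payload = build_chat_request("भारत की राजधानी क्या है?")  # "What is the capital of India?"
print(json.dumps(payload, ensure_ascii=False, indent=2))
# A deployment would POST this payload to NIM_URL with any HTTP client.
```

Because the interface is OpenAI-compatible, existing client libraries can usually be pointed at the NIM endpoint without code changes.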

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects.

Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion parameter multilingual language model developed by Nvidia. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.
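The pruning-and-distillation recipe is described only at a high level here, but the core distillation objective is standard: the small student model is trained to match the large teacher's temperature-softened output distribution. A minimal, generic sketch of that objective in pure Python (not Nvidia's actual training code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened probabilities; higher temperature flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's,
    scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q)) * temperature ** 2

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # matching student: loss is 0.0
print(distillation_loss(teacher, [0.0, 0.0, 2.0]))  # mismatched student: loss is larger
```

In practice this loss is minimized over the training corpus, nudging the pruned 4B student toward the 15B teacher's behavior.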

The dataset was created with Nvidia NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses Nvidia RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership.

It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.
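This is not the NeMo Curator API itself, but the two simplest of those building blocks, exact deduplication and a quality filter, can be illustrated in a few lines of plain Python:

```python
import hashlib

def dedupe_and_filter(docs, min_words=3):
    """Two basic curation steps: exact deduplication via content hashing
    (after light normalization), then a simple document-length filter."""
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest in seen:
            continue                    # drop exact duplicates
        seen.add(digest)
        if len(doc.split()) >= min_words:
            kept.append(doc)            # keep documents long enough to be useful
    return kept

corpus = [
    "Nvidia released a Hindi small language model.",
    "nvidia released a hindi small language model.",  # duplicate after normalization
    "Too short.",
    "NeMo Curator accelerates data processing pipelines.",
]
print(dedupe_and_filter(corpus))  # keeps the first and last documents
```

Production pipelines like Curator's add fuzzy and semantic deduplication and run these stages on GPUs across nodes, but the logical flow is the same.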

After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.

It’s available as part of the Nvidia AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments. A number of Indian companies are using the services.

India’s AI factories can transform the economy

India’s robotics ecosystem.

India’s leading cloud infrastructure providers and server manufacturers are ramping up accelerated data center capacity in what Nvidia calls AI factories. By year’s end, they’ll have boosted Nvidia GPU deployment in the country by nearly 10 times compared to 18 months ago.

Tens of thousands of Nvidia Hopper GPUs will be added to build AI factories — large-scale data centers for producing AI — that support India’s large businesses, startups and research centers running AI workloads in the cloud and on premises. This will cumulatively provide nearly 180 exaflops of compute to power innovation in healthcare, financial services and digital content creation.
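As a back-of-envelope check of those figures (the per-GPU number below is an assumption: roughly 4 petaflops of FP8 throughput per Hopper GPU, in the neighborhood of the H100's peak with sparsity; the article does not specify the GPU mix or precision):

```python
# Assumed figure: ~4 petaflops of FP8 throughput per Hopper GPU (roughly the
# H100's peak with sparsity); actual deployments vary by model and precision.
PFLOPS_PER_GPU = 4
TARGET_EXAFLOPS = 180
PFLOPS_PER_EXAFLOP = 1000

gpus_needed = TARGET_EXAFLOPS * PFLOPS_PER_EXAFLOP / PFLOPS_PER_GPU
print(f"{gpus_needed:,.0f} GPUs")  # 45,000 GPUs, consistent with "tens of thousands"
```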

Announced today at the Nvidia AI Summit, this buildout of accelerated computing technology is led by data center provider Yotta Data Services, global digital ecosystem enabler Tata Communications, cloud service provider E2E Networks and original equipment manufacturer Netweb.

Their systems will enable developers to harness domestic data center resources powerful enough to fuel a new wave of large language models, complex scientific visualizations and industrial digital twins that could propel India to the forefront of AI-accelerated innovation.

Yotta Data Services is providing Indian businesses, government departments and researchers access to managed cloud services through its Shakti Cloud platform to boost generative AI adoption and AI education.

Powered by thousands of Nvidia Hopper GPUs, these computing resources are complemented by Nvidia AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade copilots and other generative AI applications.

With Nvidia AI Enterprise, Yotta customers can access Nvidia NIM, a collection of microservices for optimized AI inference, and Nvidia NIM Agent Blueprints, a set of customizable reference architectures for generative AI applications. This will allow them to rapidly adopt optimized, state-of-the-art AI for applications including biomolecular generation, virtual avatar creation and language generation.

“The future of AI is about speed, flexibility and scalability, which is why Yotta’s Shakti Cloud platform is designed to eliminate the common barriers that organizations across industries face in AI adoption,” said Sunil Gupta, CEO of Yotta, in a statement. “Shakti Cloud brings together high-performance GPUs, optimized storage and a services layer that simplifies AI development from model training to deployment, so organizations can quickly scale their AI efforts, streamline operations and push the boundaries of what AI can accomplish.”


Cinemark’s Gladiator II AR-enabled popcorn bucket claims ‘you can eat war’

The popcorn bucket wars just became literal with Cinemark’s latest entry for Gladiator II. The theater chain’s new bucket is not only shaped like the Roman Colosseum, it plays a cutesy augmented reality gladiator battle when you point your smartphone at the QR code on the bottom (AR-ENA, get it?). The butter on the popcorn is Cinemark’s tagline, claiming “you can eat war.”

In fact, all of the ad copy is brilliantly cheesy: “Every kernel of strength, every ounce of honor, is for the glory of Rome. As you preside over this gladiator arena, you can… eat war. Finish the popcorn and unleash the battle within. You will be entertained.” Being intoned in the Honest Trailer style takes it up an extra notch.

It’s the latest popcorn bucket movie merch, following high-profile entries from Dune and Deadpool. We’ve also seen entries for Ghostbusters: Frozen Empire, Beetlejuice Beetlejuice and others that didn’t quite capture the same zeitgeist.

Sure, you might need to wipe off some popcorn grease to read the Cinemark bucket’s QR code, and the AR animation of two fighting gladiators is reminiscent of a PlayStation 2 render. Still, neither the Dune nor Wolverine buckets boast any interactive features, so the Gladiator II bucket has them beat there — and it’s a smart way to rope in the tech press.

Cinemark’s vessel is also plausibly shaped like a popcorn bucket with its Colosseum form. The same can’t be said for Dune’s sandworm-shaped bucket or Deadpool’s Wolverine head bucket (Dune director Denis Villeneuve called the latter “horrific,” and he’s right). If you’re looking to expand your collection, the Gladiator II popcorn bucket will arrive “soon” and the movie itself hits theaters on November 22.

OpenAI researchers develop new model that speeds up media generation by 50X

A pair of researchers at OpenAI has published a paper describing a new type of continuous-time consistency model (sCM) that speeds up AI generation of multimedia, including images, video and audio, by 50 times compared with traditional diffusion models, generating images in roughly a tenth of a second rather than the more than five seconds regular diffusion requires.

With the introduction of sCM, OpenAI has managed to achieve comparable sample quality with only two sampling steps, offering a solution that accelerates the generative process without compromising on quality.

Described in a paper published on arXiv.org ahead of peer review and in a blog post released today, both authored by Cheng Lu and Yang Song, the innovation enables these models to generate high-quality samples in just two steps — significantly faster than previous diffusion-based models that require hundreds of steps.

Song was also a lead author of a 2023 paper from OpenAI researchers, including former chief scientist Ilya Sutskever, that coined the idea of “consistency models”: models trained so that “points on the same trajectory map to the same initial point.”

While diffusion models have delivered outstanding results in producing realistic images, 3D models, audio, and video, their inefficiency in sampling—often requiring dozens to hundreds of sequential steps—has made them less suitable for real-time applications.

Theoretically, the technology could provide the basis for a near-realtime AI image generation model from OpenAI. As fellow VentureBeat reporter Sean Michael Kerner mused in our internal Slack channels, “can DALL-E 4 be far behind?”

Faster sampling while retaining high quality

In traditional diffusion models, a large number of denoising steps are needed to create a sample, which contributes to their slow speed.

In contrast, sCM converts noise into high-quality samples directly within one or two steps, cutting down on the computational cost and time.

OpenAI’s largest sCM model, which boasts 1.5 billion parameters, can generate a sample in just 0.11 seconds on a single A100 GPU.

This results in a 50x speed-up in wall-clock time compared to diffusion models, making real-time generative AI applications much more feasible.
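The arithmetic behind the speed-up is straightforward: wall-clock time scales with the number of sequential model evaluations, and an sCM needs one or two where a diffusion sampler needs on the order of a hundred. A toy illustration (the denoising math here is a placeholder; only the call counts matter):

```python
def diffusion_sample(steps=100):
    """Toy stand-in for iterative denoising: one model evaluation per step."""
    x, calls = 1.0, 0          # pretend x starts as pure noise
    for _ in range(steps):
        x *= 0.99              # placeholder for one denoising step
        calls += 1
    return x, calls

def consistency_sample(steps=2):
    """Toy stand-in for an sCM: noise maps toward data in one or two evaluations."""
    x, calls = 1.0, 0
    for _ in range(steps):
        x *= 0.5               # placeholder for one consistency-model step
        calls += 1
    return x, calls

_, diff_calls = diffusion_sample(100)
_, scm_calls = consistency_sample(2)
print(f"model calls: {diff_calls} vs {scm_calls}, a {diff_calls // scm_calls}x reduction")
```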

Reaching diffusion-model quality with far less computational resources

The team behind sCM trained a continuous-time consistency model on ImageNet 512×512, scaling up to 1.5 billion parameters.

Even at this scale, the model maintains a sample quality that rivals the best diffusion models, achieving a Fréchet Inception Distance (FID) score of 1.88 on ImageNet 512×512.

This brings the sample quality within 10% of diffusion models, which require significantly more computational effort to achieve similar results.
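FID measures how far the Gaussian fitted to Inception features of generated images is from the one fitted to real images; lower is better, and identical distributions score zero. A one-dimensional sketch of the underlying Fréchet distance (real FID uses multivariate means and covariance matrices):

```python
def frechet_distance_1d(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two 1-D Gaussians. FID applies the multivariate
    version to Gaussians fitted to Inception features of real vs. generated images."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Identical distributions score 0 (a perfect generator); any gap raises the score.
print(frechet_distance_1d(0.0, 1.0, 0.0, 1.0))  # 0.0
print(frechet_distance_1d(0.0, 1.0, 0.5, 1.2))
```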

Benchmarks reveal strong performance

OpenAI’s new approach has undergone extensive benchmarking against other state-of-the-art generative models.

By measuring both the sample quality using FID scores and the effective sampling compute, the research demonstrates that sCM provides top-tier results with significantly less computational overhead.

While previous fast-sampling methods have struggled with reduced sample quality or complex training setups, sCM manages to overcome these challenges, offering both speed and high fidelity.

The success of sCM is also attributed to its ability to scale proportionally with the teacher diffusion model from which it distills knowledge.

As both the sCM and the teacher diffusion model grow in size, the gap in sample quality narrows further, and increasing the number of sampling steps in sCM reduces the quality difference even more.

Applications and future uses

The fast sampling and scalability of sCM models open new possibilities for real-time generative AI across multiple domains.

From image generation to audio and video synthesis, sCM provides a practical solution for applications that demand rapid, high-quality output.

Additionally, OpenAI’s research hints at the potential for further system optimization that could accelerate performance even more, tailoring these models to the specific needs of various industries.


Vinted hits $5.4B valuation amid wave of secondary share sales in Europe

Vinted CEO Thomas Plantenga

Lithuania’s Vinted has secured a new valuation of €5 billion (around $5.4 billion at current exchange rates), after the second-hand fashion marketplace closed a secondary share sale worth €340 million ($367 million).

The transaction was led by private equity giant TPG, with other new participants including Baillie Gifford, FJ Labs, Hedosophia, Invus Opportunities, Manhattan Venture Partners, and Moore Strategic Ventures. It’s unclear how much Vinted’s existing investors cashed out, but the company says that all its existing institutional investors — which include Accel, EQT, Insight Partners, and Lightspeed Venture Partners — have retained at least some stake.

It’s proving to be a bumper year for secondary market transactions, particularly in Europe, as scale-ups seek to unlock liquidity for their employees and VCs in a decidedly tepid IPO market. In the past few months alone, we’ve seen neobanks Revolut and Monzo pursue secondary market routes, attaining lofty valuations off the back of strong user growth and profitability.

In the U.S., meanwhile, fintech giant Stripe followed a similar path to unlock liquidity, reaching a private valuation of $65 billion back in February as it continues to delay a long-rumored IPO. This figure later jumped to $70 billion as Sequoia sought a larger stake from existing investors.

Vinted CEO Thomas Plantenga (pictured above) noted that the sale “rewards our employees for their dedication in making Vinted a success.” The company was valued at €3.5 billion ($3.8 billion) pre-money for its previous €250 million Series F fundraise back in 2021. Since then, it has gone from strength to strength, reporting record revenue growth of 61% in 2023 compared to the previous year and reaching profitability for the first time.

At the same time, Vinted has expanded geographically and is also extending beyond its core fashion roots into the electronics realm — a growth trajectory that prompted marketplace stalwart eBay to respond by removing seller fees in key European markets.

Industry groups are suing the FTC to stop its click to cancel rule

Three industry groups are suing to prevent the Federal Trade Commission (FTC) from enforcing its new “Click to Cancel” rule that requires companies to make it easy to cancel subscriptions, according to Reuters. And yes, it’s exactly who you’d expect.

“Click to Cancel” expands the Negative Option Rule to forbid businesses from making customers cancel services using a method that differs from how they signed up. So, if you sign up online, you must be allowed to cancel online, rather than needing to call a support line, write a letter, or show up in person. Most aspects of the rule, assuming it isn’t blocked, will go into effect 180 days from its entry into the Federal Register.

That’s “arbitrary, capricious, and an abuse of discretion,” the Internet and Television Association, Electronic Security Association, and Interactive Advertising Bureau allege in their complaint, filed today with the US Court of Appeals for the Fifth Circuit. The groups — many of whose member companies profit from subscriptions that are easy to start and harder to stop — argue that the FTC is trying to “regulate consumer contracts for all companies in all industries and across all sectors of the economy.”

Indeed, the rule applies to any automatically renewing subscription, whether it’s a gym membership or Amazon Prime, including free trials or those plans that ship you easy-to-cook dinners. The horror!

Helldivers 2 will get a PS5 Pro upgrade, but Arrowhead is keeping quiet on the details

A Helldiver wears the CE-27 yellow and black Ground Breaker armor and runs towards the camera across a desert planet

Arrowhead Game Studios has confirmed that Helldivers 2 will receive a PS5 Pro upgrade in the future.

That’s according to Arrowhead community manager ‘Twinbeard’, who revealed on the Helldivers 2 Discord channel that the popular third-person online shooter will eventually get some form of PS5 Pro upgrade, though he stopped short of sharing what those enhancements will be (via PSU).

Australian cricketer Steve Waugh at Bengaluru Space Expo

Former Australian cricketer Steve Waugh says that he is learning a lot about space technology and is very excited about it. He was a surprise guest at the Bengaluru Space Expo (BSX) 2024, which began today. “I am surprised myself to be at the space expo. Space is exciting and new for me and I am learning a lot about space as I go along. I am very happy to be involved in this joint venture between Australia and India. I have been coming to India for the last forty years with charity, with cricket, with business and this is another opportunity to collaborate with India. I am excited to be involved in it and Australia and India can do great things together in space technology,” said Waugh, who is also the brand ambassador of Space Machines Company, an Australian in-space servicing firm.

Space Machines Company has forged strategic partnerships with two Indian companies, Ananth Technologies and Digantara. These partnerships are expected to play a significant role in the upcoming Space MAITRI (Mission for Australia-India Technology, Research and Innovation) mission and the launch of Space Machines Company’s second Optimus satellite. Scheduled for 2026, the satellite will be deployed aboard NewSpace India Limited’s (NSIL) Small Satellite Launch Vehicle (SSLV).

It will be the largest Australian-made spacecraft in orbit. The 450 kg Orbital Servicing Vehicle will be launched on NSIL’s Small Satellite Launch Vehicle as part of the first dedicated launch agreement between Australia and India.

The mission will focus on debris management and sustainability and will significantly advance Australia’s domestic space industry by combining Australian spacecraft capabilities with India’s launch expertise.

“We will work closely with Ananth Technologies and Digantara throughout the Space MAITRI project lifecycle, leveraging each company’s advanced engineering, logistics, and space situational awareness capabilities to fulfil the joint Australian-Indian mission of building a more sustainable space future,” said Rajat Kulshrestha, CEO and co-founder of Space Machines Company.

Under the partnership, Ananth Technologies will provide Assembly, Integration and Testing (AIT) and comprehensive engineering and logistics support throughout the Space MAITRI program. This will include the safe transportation and handling of all spacecraft components in India, extensive testing, and spacecraft fueling at the launch site. The collaboration will ensure that Space Machines Company’s second Optimus spacecraft is successfully integrated into the SSLV and ready for launch.

The collaboration with Digantara, meanwhile, will enable the Optimus spacecraft to track and engage short-range resident space objects, a vital capability when executing close-approach maneuvers during in-orbit operations.

Interestingly, the Australian government invested $8.5 million in the Space MAITRI mission in April 2024 through the Australian Space Agency’s $18 million International Space Investment India Projects program. “This mission and the collaborations that underpin it emphasise the role that space can play in enhancing cooperation in the Indo-Pacific region for mutual benefit. This mission leverages our nation’s respective capabilities and advantages to make space activities more sustainable, something the global space community is focussed on to protect and maintain the assets in orbit that are central to a functioning modern society,” remarked Enrico Palermo, head of the Australian Space Agency.

Space Machines Company is an Australian company that delivers on-orbit servicing and protection of critical space infrastructure through its Orbital Servicing Network. The company provides mobility, inspection, deorbiting, repair, life extension and protection capabilities to satellite customers when and where they need them.
