
Technology

Nvidia makes 7 tech announcements in Washington, D.C.




Nvidia showed off its technology today at its AI Summit in Washington, D.C., an event aimed at educating the nation’s capital about AI.

The world’s biggest maker of AI chips made seven big announcements at the summit, and we’ll summarize them here. First, it is teaming with U.S. tech leaders to help organizations create custom AI applications and transform the world’s industries using the latest Nvidia NIM Agent Blueprints and Nvidia NeMo and Nvidia NIM microservices.

Across industries, organizations like AT&T, Lowe’s and the University of Florida are using the microservices to create their own data-driven AI flywheels to power custom generative AI applications.


U.S. technology consulting leaders Accenture, Deloitte, Quantiphi and SoftServe are adopting Nvidia NIM Agent Blueprints and Nvidia NeMo and NIM microservices to help clients in healthcare, manufacturing, telecommunications, financial services and retail create custom generative AI agents and copilots.

Data and AI platform leaders Cadence, Cloudera, DataStax, Google Cloud, NetApp, SAP, ServiceNow and Teradata are advancing their data and AI platforms with Nvidia NIM.

“AI is driving transformation and shaping the future of global industries,” said Jensen Huang, CEO of Nvidia, in a statement. “In collaboration with U.S. companies, universities and government agencies, Nvidia will help advance AI adoption to boost productivity and drive economic growth.”

New NeMo microservices — NeMo Customizer, NeMo Evaluator and NeMo Guardrails — can be paired with NIM microservices to help developers easily curate data at scale, customize and evaluate models, and manage responses to align with business objectives. Developers can then seamlessly deploy a custom NIM microservice across any GPU-accelerated cloud, data center or workstation.
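To make that workflow concrete, here is a minimal sketch of querying a deployed NIM microservice from application code. It is not taken from Nvidia’s announcement: it assumes a NIM container is already running locally on port 8000 and exposing the OpenAI-compatible API that NIM LLM microservices provide, and the model name is a placeholder for whatever model was deployed.

```python
# Minimal sketch: querying a locally deployed NIM microservice.
# Assumes a NIM container is already running at localhost:8000 and exposes
# the OpenAI-compatible API that NIM LLM microservices provide; the model
# name below is a placeholder for whichever model you actually deployed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the local NIM endpoint, not api.openai.com
    api_key="not-used-for-local-nim",     # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # placeholder: use the model your NIM serves
    messages=[{"role": "user", "content": "Summarize our Q3 support tickets."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```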


Lowe’s, the home improvement retailer, is exploring the use of Nvidia NIM and NeMo microservices to improve experiences for customers and associates and to enhance store associates’ productivity. For example, the retailer is leveraging Nvidia NeMo Guardrails to enhance the safety and security of its generative AI platform.
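For readers unfamiliar with NeMo Guardrails, the sketch below shows how it is typically wired into a Python application. It is a minimal illustration under assumptions, not Lowe’s setup: the config directory, the rails it defines and the underlying model are all hypothetical.

```python
# Minimal sketch of wrapping an LLM application with NeMo Guardrails.
# Illustrative only: the config directory, its rails definitions and the
# underlying model are assumptions, not Lowe's production setup.
from nemoguardrails import LLMRails, RailsConfig

# The directory is expected to contain a config.yml (model settings) and
# Colang files defining input/output rails such as topic and safety checks.
config = RailsConfig.from_path("./guardrails_config")
rails = LLMRails(config)

# Requests and responses now pass through the configured rails, so off-topic
# or unsafe content can be blocked or rewritten before reaching the user.
reply = rails.generate(messages=[
    {"role": "user", "content": "Which deck stain holds up best in coastal weather?"}
])
print(reply["content"])
```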

Nvidia is helping SETI sift through radio data faster.

SETI Institute researchers are also using Nvidia tech to conduct the first real-time AI search for fast radio bursts that might be a sign of life somewhere else. To better understand new and rare astronomical phenomena, radio astronomers are adopting accelerated computing and AI on Nvidia Holoscan and IGX platforms.

This summer, scientists supercharged their tools in the hunt for signs of life beyond Earth. Researchers at the SETI Institute became the first to apply AI to the real-time direct detection of faint radio signals from space. Their advances in radio astronomy can benefit any field that applies accelerated computing and AI.
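To give a sense of what real-time detection of faint radio signals involves at the algorithmic level, here is a toy sketch of flagging burst candidates in a dynamic spectrum. It is emphatically not the SETI Institute’s pipeline, which also performs de-dispersion and runs on GPUs at streaming rates; the injected burst and the threshold below are illustrative assumptions.

```python
# Toy illustration of burst detection in a streaming dynamic spectrum.
# NOT the SETI Institute's pipeline: it only shows the basic idea of flagging
# short, bright events that stand out from the noise floor. A real system
# would also de-disperse the signal and run on GPUs at real-time rates.
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_time = 256, 4096
spectrum = rng.normal(0.0, 1.0, size=(n_freq, n_time))   # background noise
spectrum[:, 2048:2052] += 8.0                             # injected broadband burst

# Collapse over frequency, then flag time samples well above the noise level.
time_series = spectrum.mean(axis=0)
snr = (time_series - time_series.mean()) / time_series.std()
candidates = np.flatnonzero(snr > 6.0)                    # 6-sigma threshold
print("candidate time samples:", candidates)
```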

“We’re on the cusp of a fundamentally different way of analyzing streaming astronomical data, and the kinds of things we’ll be able to discover with it will be quite amazing,” said Andrew Siemion, Bernard M. Oliver Chair for SETI at the SETI Institute, a group formed in 1984 that now includes more than 120 scientists.

The SETI Institute operates the Allen Telescope Array in Northern California. It’s a cutting-edge telescope used in the search for extraterrestrial intelligence (SETI) as well as for the study of intriguing transient astronomical events such as fast radio bursts. The project started more than a decade ago, during early attempts to marry machine learning and astronomy.


Pittsburgh trades steel for AI tech

Pittsburgh is getting new Nvidia AI tech centers.

Carnegie Mellon University and the University of Pittsburgh will accelerate innovation and public-private collaboration through a pair of joint technology centers with Nvidia.

Serving as a bridge for academia, industry and public-sector groups to partner on artificial intelligence innovation, Nvidia is launching its inaugural AI Tech Community in Pittsburgh, Pennsylvania.

Collaborations with Carnegie Mellon University and the University of Pittsburgh, as well as startups, enterprises and organizations based in the “city of bridges,” are part of the new Nvidia AI Tech Community initiative, announced today during the Nvidia AI Summit in Washington, D.C.

The initiative aims to supercharge public-private partnerships across communities rich with potential for enabling technological transformation using AI. Two Nvidia joint technology centers will be established in Pittsburgh to tap into expertise in the region.

Nvidia’s Joint Center with Carnegie Mellon University (CMU) for Robotics, Autonomy and AI will equip higher-education faculty, students and researchers with the latest technologies and boost innovation in the fields of AI and robotics. And Nvidia’s Joint Center with the University of Pittsburgh for AI and Intelligent Systems will focus on computational opportunities across the health sciences, including applications of AI in clinical medicine and biomanufacturing.


CMU — the nation’s No. 1 AI university according to the U.S. News & World Report — has pioneered work in autonomous vehicles and natural language processing. CMU’s Robotics Institute, the world’s largest university-affiliated robotics research group, brings a diverse group of more than a thousand faculty, staff, students, post-doctoral fellows and visitors together to solve humanity’s toughest challenges through robotics.

The University of Pittsburgh — designated as an R1 research university at the forefront of innovation — is ranked No. 6 among U.S. universities in research funding from the National Institutes of Health, topping more than $1 billion in research expenditures in fiscal year 2022 and ranking No. 14 among U.S. universities granted utility patents. Nvidia will provide the centers with DGX for AI training, Omniverse for simulation and Jetson for robotics edge computing.

U.S. healthcare system deploys AI agents from research to rounds

The U.S. healthcare system is harnessing AI agents from research laboratories to clinical settings.

Nvidia also said the U.S. healthcare system is adopting digital health agents to harness AI across the board, from research laboratories to clinical settings.

The latest AI-accelerated tools — on display at the Nvidia AI Summit taking place this week in Washington, D.C. — include Nvidia NIM, a collection of cloud-native microservices that support AI model deployment and execution, and Nvidia NIM Agent Blueprints, a catalog of pretrained, customizable workflows.

These technologies are already in use in the public sector to advance the analysis of medical images, aid the search for new therapeutics and extract information from massive PDF databases containing text, tables and graphs.


For example, researchers at the National Cancer Institute, part of the National Institutes of Health (NIH), are using several AI models built with Nvidia MONAI for medical imaging — including the VISTA-3D NIM foundation model for segmenting and annotating 3D CT images. A team at NIH’s National Center for Advancing Translational Sciences (NCATS) is using the NIM Agent Blueprint for generative AI-based virtual screening to reduce the time and cost of developing novel drug molecules.
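To give a sense of what this looks like in practice, the sketch below uses MONAI to load and normalize a CT volume before handing it to a 3D segmentation model. The file path, voxel spacing and intensity window are assumptions for illustration, not the National Cancer Institute’s configuration, and the downstream model (for example a VISTA-3D endpoint) is left out.

```python
# Small sketch of preparing a CT volume with MONAI before running a 3D
# segmentation model. The file path, spacing and intensity window are
# illustrative assumptions, not the NCI's actual configuration.
from monai.transforms import (
    Compose, LoadImage, EnsureChannelFirst, Orientation, Spacing, ScaleIntensityRange,
)

preprocess = Compose([
    LoadImage(image_only=True),                         # read NIfTI/DICOM into an array
    EnsureChannelFirst(),                               # add a channel dimension
    Orientation(axcodes="RAS"),                         # standardize anatomical orientation
    Spacing(pixdim=(1.5, 1.5, 1.5), mode="bilinear"),   # resample to uniform voxels
    ScaleIntensityRange(a_min=-1000, a_max=1000, b_min=0.0, b_max=1.0, clip=True),
])

volume = preprocess("patient_001_ct.nii.gz")            # hypothetical input file
print(volume.shape)                                     # (1, D, H, W), ready for inference
```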

With the Nvidia tech, medical researchers across the public sector can jump-start their adoption of state-of-the-art, optimized AI models to accelerate their work. The pretrained models are customizable based on an organization’s own data and can be continually refined based on user feedback.

Massive quantities of healthcare data — including research papers, radiology reports and patient records — are unstructured and locked in PDF documents, making it difficult for researchers to quickly search for information.

The Genetic and Rare Diseases Information Center, also run by NCATS, is exploring using the PDF data extraction blueprint to develop generative AI tools that enhance the center’s ability to glean information from previously unsearchable databases. These tools will help answer questions from those affected by rare diseases.
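The underlying pattern is extract-then-retrieve: pull text out of the PDFs, index it, and fetch the most relevant passages for a question before a language model drafts an answer. The bare-bones sketch below illustrates that pattern with pypdf and TF-IDF as stand-ins for the blueprint’s GPU-accelerated extraction and embedding services; it is not the NIM Agent Blueprint itself, and the file names are hypothetical.

```python
# Bare-bones sketch of the retrieval pattern behind PDF question answering.
# Not Nvidia's NIM Agent Blueprint: pypdf and TF-IDF stand in for its
# GPU-accelerated extraction and embedding services, and file names are hypothetical.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. Extract one text chunk per page from a small set of PDFs.
chunks = []
for path in ["rare_disease_report_1.pdf", "rare_disease_report_2.pdf"]:
    for page in PdfReader(path).pages:
        text = page.extract_text() or ""
        if text.strip():
            chunks.append(text)

# 2. Index the chunks and retrieve the passages most similar to a question.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(chunks)
question = "What treatments are described for this condition?"
scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
for i in scores.argsort()[::-1][:3]:
    print(f"score={scores[i]:.2f}  {chunks[i][:120]}...")

# 3. In a full pipeline, the retrieved passages would be handed to an LLM
#    (e.g. via a NIM microservice) to generate a grounded answer.
```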


Nvidia leaders, customers and partners are presenting over 50 sessions highlighting impactful work in the public sector.

Nvidia’s blueprint for cybersecurity

Nvidia NIM Agent Blueprint for container security helps enterprises build safe AI using open-source software.

And Nvidia said Deloitte has adopted Nvidia NIM Agent Blueprint for container security to help enterprises build safe AI using open-source software.

AI is transforming cybersecurity with new generative AI tools and capabilities that were once the stuff of science fiction. And like many of the heroes in science fiction, they’re arriving just in time.

AI-enhanced cybersecurity can detect and respond to potential threats in real time — often before human analysts even become aware of them. It can analyze vast amounts of data to identify patterns and anomalies that might indicate a breach. And AI agents can automate routine security tasks, freeing up human experts to focus on more complex challenges.

All of these capabilities start with software, so Nvidia has introduced an Nvidia NIM Agent Blueprint for container security that developers can adapt to meet their own application requirements.


The blueprint uses Nvidia NIM microservices, the Nvidia Morpheus cybersecurity AI framework, Nvidia cuVS and Nvidia RAPIDS accelerated data analytics to help accelerate analysis of common vulnerabilities and exposures (CVEs) at enterprise scale, from days to just seconds.

All of this is included in Nvidia AI Enterprise, a cloud-native software platform for developing and deploying secure, supported production AI applications.

Deloitte is among the first to use the Nvidia NIM Agent Blueprint for container security in its cybersecurity solutions. The blueprint supports agentic analysis of open-source software to help enterprises build secure AI, and it can help enterprises enhance and simplify cybersecurity by improving efficiency and reducing the time needed to identify threats and potential adversarial activity.

Software containers incorporate large numbers of packages and releases, some of which may be subject to security vulnerabilities. Traditionally, security analysts would need to review each of these packages to understand potential security exploits across any software deployment. These manual processes are tedious, time-consuming and error-prone. They’re also difficult to automate effectively because of the complexity of aligning software packages, dependencies, configurations and the operating environment.


With generative AI, cybersecurity applications can rapidly digest and decipher information across a wide range of data sources, including natural language, to better understand the context in which potential vulnerabilities could be exploited.

Enterprises can then create cybersecurity AI agents that take action on this generative AI intelligence. The NIM Agent Blueprint for container security enables quick, automatic and actionable CVE risk analysis using large language models and retrieval-augmented generation for agentic AI applications. It helps developers and security teams protect software with AI, improving accuracy and efficiency while narrowing the set of potential issues that human analysts need to investigate.
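As a rough illustration of the kind of lookup such an agent automates, the sketch below checks a container’s Python dependencies against the public OSV vulnerability database. This is not Nvidia’s blueprint, which layers Morpheus, retrieval-augmented generation and LLM triage on top; the package list here is a hypothetical example.

```python
# Rough illustration of the kind of lookup a CVE-analysis agent automates:
# checking a container's Python packages against the public OSV database.
# Not Nvidia's blueprint (which adds Morpheus, RAG and GPU-accelerated LLM
# triage); the package list is a hypothetical example.
import requests

packages = {"requests": "2.25.0", "pillow": "9.0.0"}  # e.g. parsed from an SBOM

for name, version in packages.items():
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": "PyPI"}},
        timeout=30,
    )
    vulns = resp.json().get("vulns", [])
    print(f"{name}=={version}: {len(vulns)} known advisories")
    for v in vulns[:3]:
        print("  ", v.get("id"), "-", v.get("summary", "")[:80])

# An agentic pipeline would feed these findings, plus code and config context,
# to an LLM to rank exploitability and draft remediation steps for analysts.
```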

CUDA-X accelerates the Polars data processing library, speeding up AI development for data scientists

CUDA-X is helping data science.

Nvidia also said Polars, one of the fastest growing data analytics tools, has just crossed 9 million monthly downloads. As a modern DataFrame library, it is designed for efficiently processing datasets that fit on a single machine, without the overhead and complexity of distributed computing systems that are required for massive-scale workloads.

As enterprises grapple with complex data problems — ranging from detecting time-boxed patterns in credit card transactions to managing quickly shifting inventory needs across a global customer base — even higher performance is essential.

Polars and Nvidia engineers just released the Polars GPU engine powered by RAPIDS cuDF in open beta, bringing accelerated computing to the growing Polars community with zero code change required. This brings even more acceleration to query execution for Polars, making the already speedy data processing software up to 13x faster than running on CPUs. It’s like giving rocket fuel to a cheetah to help it sprint even faster.
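The “zero code change” claim refers to switching engines at collection time. The snippet below is a minimal sketch based on the open-beta API: the same lazy query runs on the default CPU engine or, when requested, on the RAPIDS cuDF-backed GPU engine. The CSV file is hypothetical, and the GPU path assumes the GPU-enabled Polars build and an Nvidia GPU.

```python
# Minimal sketch of the Polars GPU engine's "zero code change" workflow.
# The query is identical either way; only the engine argument at collect time
# changes. Requires the GPU-enabled Polars build (e.g. `pip install polars[gpu]`)
# and an Nvidia GPU; the CSV file below is hypothetical.
import polars as pl

query = (
    pl.scan_csv("transactions.csv")                     # lazy scan, nothing runs yet
      .filter(pl.col("amount") > 100)
      .group_by("merchant_category")
      .agg(pl.col("amount").sum().alias("total_spend"))
)

cpu_result = query.collect()                            # default CPU engine
gpu_result = query.collect(engine="gpu")                # RAPIDS cuDF-backed GPU engine
print(gpu_result.sort("total_spend", descending=True).head())
```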


With data science and engineering teams building more and more data processing pipelines to fuel AI applications, it’s critical to choose the right software and infrastructure for the job to keep things running smoothly. For workloads well suited to individual servers, workstations and laptops, developers frequently use libraries like Polars to accelerate iterations, reduce complexity in development environments and lower infrastructure costs.

On these single machine-sized workloads, quick iteration time is often the top priority, as data scientists often need to do exploratory analysis to guide downstream model training or decision-making. Performance bottlenecks from CPU-only computing reduce productivity and can limit the number of test/train cycles that can be completed.

For large-scale data processing workloads too large for a single machine, organizations turn to frameworks like Apache Spark to help them distribute the work across nodes in the data center. At this scale, cost- and power-efficiency are often the top priorities, but costs can quickly balloon due to the inefficiencies of using traditional CPU-based computing infrastructure.

Nvidia’s CUDA-X data processing platform is designed with these needs in mind, optimized for cost and energy efficiency on large-scale workloads and for performance on single-machine workloads.


[Updated: 8:33 a.m. on 10/8/24: Nvidia noted it has not been subpoenaed in an antitrust case in D.C.]



Servers computers

Server rack RGB lighting – What do you think?




Touchy subject, I know – but it looks SOOOO good!


Technology

AI mortgage startup LoanSnap loses license to operate in Connecticut

LoanSnap cofounder Karl Jacob.

LoanSnap, an AI-powered mortgage startup, has had its license to operate revoked in Connecticut, according to the state’s Banking Commissioner. This occurred after LoanSnap violated a consent order it had entered into with the Department of Banking in May.

The department revealed Tuesday that its consumer credit division opened an investigation that found multiple violations of state law following that prior consent order. As a result, the Department and LoanSnap entered into a new consent order on October 2 that resulted in revoking LoanSnap’s license.

The revocation comes four months after TechCrunch’s exclusive reporting about how LoanSnap became inundated with lawsuits and was evicted from its California headquarters, all while its business cratered amid sky-high interest rates.

LoanSnap founders Karl Jacob and Allan Carroll did not immediately respond to an emailed request for comment. Connecticut’s Department of Banking referred TechCrunch to the new consent order.


The Department of Banking says LoanSnap violated state law in a few ways. First, the company failed to file a change of address with the Nationwide Multistate Licensing System and Registry, the nationwide licensing system for loan originators — a change that was necessitated by the company’s eviction. LoanSnap also violated Connecticut law by failing to disclose multiple default judgements entered against the company as a result of multiple lawsuits, according to the new consent order.

Connecticut’s Department of Banking also says LoanSnap failed to craft new policies and procedures that it was supposed to implement as part of the May consent order, including making sure that unlicensed employees weren’t originating loans.

LoanSnap had raised millions of dollars since its 2017 founding from the likes of Richard Branson’s Virgin Group, the Chainsmokers’ Mantis Ventures, Baseline Ventures, and Reid Hoffman. It promised to use AI to simplify the home lending process, and in 2021, it originated nearly 1,300 loans for a total value of almost $500 million, according to data filed with federal regulators.

But as interest rates rose, LoanSnap’s business dried up. Recently-released data from the Consumer Financial Protection Bureau shows LoanSnap only originated 42 loans in 2023 for a combined value of $3.6 million.



Technology

The Oura Ring Gen 3 is 20 percent off for Prime Day


So long as you don’t have new product FOMO, this is an excellent opportunity for bargain hunters. Retailers and companies love discounts on older models right after launching a new one! This is the Horizon model as well, which is normally more expensive than the base Gen 3 due to its fully round shape. Given that Oura comes with a $6 monthly subscription to unlock all the features, getting a discount on hardware can help take some of the sting out. That said, Gen 3 inventory won’t last forever.

Even if you have FOMO, you won’t be missing out on much. I’ve used the Gen 3 for the past three years and tested a bunch of the competition — this is still the clear winner in my book. Compared to the new model, it’s only missing out on flatter sensors, a larger dock, and a slightly more comfortable form factor. The newer Oura Ring 4 does have an updated algorithm that’s supposed to be more accurate, but functionally, you’re still going to get the same metrics with the Gen 3.

That said, if you absolutely can’t stand the idea of a monthly subscription, my second favorite smart ring is also on sale. The Ultrahuman Ring Air is also 13 percent off for Prime Day at $305 and is a great choice if you’re focused on training metrics.


Servers computers

Tidying up data cables in a server rack




Transmart


Technology

First reviews of Intel’s fastest CPU ever show that it has finally caught up with AMD – 128-core Xeon 6980P CPU won’t come cheap though


Intel’s new 128-core Granite Rapids Xeon 6900P processor family is designed to compete directly with AMD’s EPYC offerings and comes in five variations.

There are the 6960P (72 cores), 6952P (96 cores), 6972P (96 cores) and 6979P (120 cores), plus the flagship 6980P with 128 cores and 256 threads, a 2.0 GHz clock speed and 504MB of L3 cache.


Science & Environment

Florida gas stations are running out of fuel as people flee Hurricane Milton



Florida gas stations are rapidly running out of fuel, says GasBuddy's Patrick De Haan

Nearly 16% of gas stations in Florida had run out of fuel as of late morning Tuesday as people flee north to escape the path of Hurricane Milton, according to data from GasBuddy.

Many stations simply can’t keep up with gasoline demand as millions of Floridians collectively evacuate, said Patrick De Haan, head of petroleum analysis at GasBuddy. About 1,200 of the state’s 7,900 gas stations are currently without fuel, according to the data.

“The sheer bulk of this is simply people getting out of harm’s way,” De Haan told CNBC. Prices should not rise as a consequence of the storm because infrastructure and refineries are not expected to be impacted, he said.

Milton is forecast to make landfall on the west-central coast of Florida on Wednesday night and then move east-northeastward across the central part of the state through Thursday, according to the National Hurricane Center.

Milton is currently a Category 4 storm with maximum sustained winds of 150 miles per hour. The storm is forecast to remain “an extremely dangerous hurricane” through landfall in Florida, according to the forecasters.


Florida Gov. Ron DeSantis said Tuesday morning that the state was stockpiling fuel ahead of the storm. Gas stations are running out of fuel and lines are long but there is not a shortage in the state, DeSantis said.

Fuel continues to arrive in Florida by port and trucks have been dispatching supplies to stations in the anticipated impact area, the governor said. The Port of Tampa and other Gulf Coast ports are not receiving ships but the facilities have fuel on hand and dockside operations will continue, he said.

There are no oil refineries in Florida and only 2,000 barrels of oil per day are produced in the state, said Andy Lipow, president of Lipow Oil Associates. However, there are several terminals along the waterfront in Tampa that could be impacted by Milton, Lipow said.

“They are at risk from storm surge, flooding, and power outages,” Lipow said in a Tuesday note. “Significant damage to these facilities will impact on the ability to get gasoline and diesel back into the area for distribution.”


Gasoline delivery by tanker likely will not be possible in Tampa until Sunday at the earliest, Lipow said. Terminals with power, however, could begin distribution of fuel Saturday and Sunday depending on worker and truck driver availability as well as whether local gas stations are working, he said.




