
Technology

Character.AI and Google sued after chatbot-obsessed teen’s death


A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google in the wake of a teenager’s death, alleging wrongful death, negligence, deceptive trade practices, and product liability. Filed by the teen’s mother, Megan Garcia, it claims the platform for custom AI chatbots was “unreasonably dangerous” and lacked safety guardrails while being marketed to children.

As outlined in the lawsuit, 14-year-old Sewell Setzer III began using Character.AI last year, interacting with chatbots modeled after characters from Game of Thrones, including Daenerys Targaryen. Setzer, who chatted with the bots continuously in the months before his death, died by suicide on February 28th, 2024, “seconds” after his last interaction with the bot.

Accusations include the site “anthropomorphizing” AI characters and that the platform’s chatbots offer “psychotherapy without a license.” Character.AI houses mental health-focused chatbots like “Therapist” and “Are You Feeling Lonely,” which Setzer interacted with.

Garcia’s lawyers quote Shazeer saying in an interview that he and De Freitas left Google to start their own company because “there’s just too much brand risk in large companies to ever launch anything fun” and that he wanted to “maximally accelerate” the tech. The suit says they left after Google decided against launching the Meena LLM they’d built. Google acquired the Character.AI leadership team in August.


Character.AI’s website and mobile app have hundreds of custom AI chatbots, many modeled after popular characters from TV shows, movies, and video games. A few months ago, The Verge wrote about the millions of young people, including teens, who make up the bulk of its user base, interacting with bots that might pretend to be Harry Styles or a therapist. Another recent report from Wired highlighted issues with Character.AI’s custom chatbots impersonating real people without their consent, including one posing as a teen who was murdered in 2006.

Because chatbots like Character.AI’s generate output that depends on what the user inputs, they fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers.

Character.AI has now announced several changes to the platform, with communications head Chelsea Harrison saying in an email to The Verge, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”


“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” Harrison said. Google didn’t immediately respond to The Verge’s request for comment.


Science & Environment

Polar bears face higher risk of disease in a warming Arctic



A polar bear mother and cubs (USGS)

In a warming Arctic, polar bears are spending more of their time on land

As the Arctic warms, polar bears face a growing risk of contracting viruses, bacteria and parasites that they were less likely to encounter just 30 years ago, research has revealed.

In a study that has provided clues about how polar bear disease could be linked to ice loss, scientists examined blood samples from bears in the Chukchi Sea – between Alaska and Russia.

They analysed samples that had been gathered between 1987 and 1994, then collected and studied samples three decades later – between 2008 and 2017.


The researchers found that significantly more of the recent blood samples contained chemical signals that bears had been infected with one of five viruses, bacteria or parasites.

Wildlife biologist Dr Karyn Rode of the US Geological Survey (here checking on a sedated wild polar bear in the Alaskan Arctic) and her colleagues collected blood samples from wild bears to monitor the animals’ health (USGS)

It is difficult to know, from blood samples, how the bears’ physical health was affected, but wildlife biologist Dr Karyn Rode from the US Geological Survey said it showed that something was changing throughout the whole Arctic ecosystem.

The researchers tested for six different pathogens in total – viruses, bacteria or parasites that are primarily associated with land-based animals but have been recorded before in marine animals, including species that polar bears hunt.

The study covered three decades, Dr Rode said, “when there had been a substantial loss of sea ice and there’s been increased land use in [this population of polar bears]”.


“So we wanted to know if exposure had changed – particularly for some of these pathogens that we think are primarily land-oriented.”

The five pathogens (the collective term for disease-causing agents) that have become more common in polar bears are two parasites that cause toxoplasmosis and neosporosis, two types of bacteria that cause rabbit fever and brucellosis, and the virus that causes canine distemper.

“Bears in general are pretty robust to disease,” explained Dr Rode. “It’s not typically been known to affect bear populations, but I think what it just highlights is that things [in the Arctic] are changing.”

Key polar bear facts

  • There are about 26,000 polar bears left in the world, with the majority in Canada. Populations are also found in the US, Russia, Greenland and Norway
  • Polar bears are listed as vulnerable to extinction by the International Union for Conservation of Nature, with climate change a key factor in their decline
  • Adult males can grow to be around 3m long and can weigh close to 600kg
  • Polar bears can eat up to 45kg of blubber in one sitting
  • These bears have a powerful sense of smell and can sniff out prey from up to 16km away
  • They are strong swimmers and have been spotted up to 100km offshore. They can swim at speeds of around 10km per hour, due in part to their paws being slightly webbed
A group of polar bears captured by a collar camera (USGS)

Studies with collar cameras have revealed what polar bears eat during the ice-free summer, as well as capturing surprising social interactions

In the US, polar bears are classified as a threatened species; scientists say the biggest threat to their future is the continuing loss of sea ice habitat, which they depend on as a platform from which to pounce on their marine prey.

Previous research using collar cameras on bears has shown that, as they spend more of the year on land – when there is no available sea ice to hunt from – the bears are unable to find enough calories.

Dr Rode explained that polar bears are top predators: “Our study suggested that they’re getting their exposure to some pathogens primarily through their prey species.

“So what we saw as changes in pathogen exposure for polar bears is indicative of changes that other species are also experiencing.”


The findings are published in the scientific journal PLOS ONE.





Technology

Mistral AI and Qualcomm partner to boost AI on Snapdragon devices


Qualcomm is one of the companies that has been driving the development of artificial intelligence in the tech industry. The company offers powerful AI processing capabilities for both laptops and mobile devices with its Snapdragon chips. There is also the Qualcomm AI Hub, which makes it easier for developers to access multiple AI models from a single site. Now, Qualcomm has announced Mistral AI as a new partner in the integration of more AI models on Snapdragon hardware.

The market for AI models is witnessing an increasing number of alternatives. New companies have emerged to compete with their own models adapted to different needs. For example, Meta recently presented an AI model capable of autonomously evaluating and training other AI models. There is also Personal AI, which enables offline, business-focused assistant experiences on Snapdragon-powered laptops.

Mistral AI is the latest Qualcomm partner to bring AI experiences to Snapdragon-powered devices

The Mistral AI models bear similarities to Personal AI, but they cater to a wider audience and possess sufficient versatility to seamlessly integrate with devices such as PCs, smartphones, and vehicles.

Qualcomm has announced the optimization of the Mistral AI models for its multiple hardware platforms. These include the Snapdragon 8 Elite, Snapdragon Cockpit Elite, Snapdragon Ride Elite, and Snapdragon X Elite. For reference, the Mistral AI models share a similar goal to that of the Gemini Nano. That is, the Mistral AI models are designed to be low-power models, making them ideal for enabling on-device AI experiences on mobile devices. However, Mistral AI asserts that its models are also compatible with cars.


“Mistral AI’s Ministral 3B and Ministral 8B will enable device manufacturers, software vendors, and digital service providers to deliver innovative experiences, such as AI assistants and other applications that understand users’ wants and needs, thanks to the immediacy, reliability, and enhanced privacy of on-device AI,” said Durga Malladi, senior vice president and general manager of technology planning and edge solutions at Qualcomm Technologies.

Mistral 7B v0.3 now available on Qualcomm AI Hub

Currently, the Mistral 7B v0.3 model is available on the Qualcomm AI Hub platform. Therefore, developers can now access this model to create experiences specifically tailored for the Snapdragon hardware. On the other hand, the Ministral 3B and Ministral 8B models will be available soon.


Technology

Minecraft is ending all virtual reality support next spring


For Minecraft players, virtual and mixed reality will soon go the way of a hissing creeper. Developer Mojang announced last month that March 2025 would bring the game’s final update for VR. Yesterday’s patch notes for the Bedrock edition of the game use similar language, stating that “Our ability to support VR/MR devices has come to an end, and will no longer be supported in updates after March of 2025.”

All is not lost for the block builders who have been enjoying Minecraft in virtual reality. After the final March 2025 update, the patch notes clarify that “you can keep building in your worlds, and your Marketplace purchases (including Minecoins) will continue to be available on a non-VR/MR graphics device such as a computer monitor.” It’s a sad development for a game that was such a good match for the VR experience. And given the numbers Minecraft continues to put up year after year, it’s also a bit discouraging for the broader virtual reality and mixed reality ecosystem to lose such an iconic title.

There is a silver lining for the Minecraft community, however. After a very long wait, the game finally has a native PlayStation 5 edition available. Sony’s latest console generation had been relegated to using the PS4 version until now, but going forward the game will have 4K resolution and 60 fps even at a longer draw distance. If you’re a PS5 owner who already has the PS4 version of Minecraft, you can claim the new update for free in the PlayStation Store. And with the Bundles of Bravery update rolling out yesterday, it’s a promising time to start a new blocky adventure.


Technology

Nvidia CEO touts India’s progress with sovereign AI and over 100K AI developers trained




Nvidia CEO Jensen Huang noted India’s progress in its AI journey in a conversation at the Nvidia AI Summit in India. India now has more than 2,000 Nvidia Inception AI companies and more than 100,000 developers trained in AI.

That compares to a global developer count of 650,000 people trained in Nvidia AI technologies, and India’s strategic move into AI is a good example of what Huang calls “sovereign AI,” where countries choose to create their own AI infrastructure to maintain control of their own data.

Nvidia said that India is becoming a key producer of AI for virtually every industry — powered by thousands of startups that are serving the country’s multilingual, multicultural population and scaling out to global users.


The country is one of the top six global economies leading generative AI adoption and has seen rapid growth in its startup and investor ecosystem, rocketing to more than 100,000 startups this year from under 500 in 2016.

More than 2,000 of India’s AI startups are part of Nvidia Inception, a free program for startups designed to accelerate innovation and growth through technical training and tools, go-to-market support and opportunities to connect with venture capitalists through the Inception VC Alliance.

At the NVIDIA AI Summit, taking place in Mumbai through Oct. 25, around 50 India-based startups are sharing AI innovations delivering impact in fields such as customer service, sports media, healthcare and robotics.

Conversational AI for Indian Railway customers

Nvidia is working closely with India on AI factories.

Bengaluru-based startup CoRover.ai already has over a billion users of its LLM-based conversational AI platform, which includes text, audio and video-based agents.

“The support of NVIDIA Inception is helping us advance our work to automate conversational AI use cases with domain-specific large language models,” said Ankush Sabharwal, CEO of CoRover, in a statement. “NVIDIA AI technology enables us to deliver enterprise-grade virtual assistants that support 1.3 billion users in over 100 languages.”


CoRover’s AI platform powers chatbots and customer service applications for major private and public sector customers, such as the Indian Railway Catering and Tourism Corporation, the official provider of online tickets, drinking water and food for India’s railway stations and trains.

Dubbed AskDISHA, after the Sanskrit word for direction, the IRCTC’s multimodal chatbot handles more than 150,000 user queries daily, and has facilitated over 10 billion interactions for more than 175 million passengers to date. It assists customers with tasks such as booking or canceling train tickets, changing boarding stations, requesting refunds, and checking the status of their booking in languages including English, Hindi, Gujarati and Hinglish — a mix of Hindi and English.

The deployment of AskDISHA has resulted in a 70% improvement in IRCTC’s customer satisfaction rate and a 70% reduction in queries through other channels like social media, phone calls and emails.

CoRover’s modular AI tools were developed using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI. They run on Nvidia GPUs in the cloud, enabling CoRover to automatically scale up compute resources during peak usage — such as the moment train tickets are released.


Nvidia also noted that VideoVerse, founded in Mumbai, has built a family of AI models using Nvidia technology to support AI-assisted content creation in the sports media industry — enabling global customers including the Indian Premier League for cricket, the Vietnam Basketball Association and the Mountain West Conference for American college football to generate game highlights up to 15 times faster and boost viewership. Its Magnifi platform uses technology such as vision analysis to detect players and key moments for short-form video.

Nvidia also highlighted Mumbai-based startup Fluid AI, which offers generative AI chatbots, voice calling bots and a range of application programming interfaces to boost enterprise efficiency. Its AI tools let workers perform tasks like creating slide decks in under 15 seconds.

Karya, based in Bengaluru, is a smartphone-based digital work platform that enables members of low-income and marginalized communities across India to earn supplemental income by completing language-based tasks that support the development of multilingual AI models. Nearly 100,000 Karya workers are recording voice samples, transcribing audio or checking the accuracy of AI-generated sentences in their native languages, earning nearly 20 times India’s minimum wage for their work. Karya also provides royalties to all contributors each time its datasets are sold to AI developers.

Karya is employing over 30,000 low-income women participants across six language groups in India to help create the dataset, which will support the creation of diverse AI applications across agriculture, healthcare and banking.


Serving over a billion local language speakers with LLMs

India is investing in sovereign AI in an alliance with Nvidia.

Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its government and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These public and private sector projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable government services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, Nvidia has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an Nvidia NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any Nvidia GPU-accelerated system for optimized performance.
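NIM microservices expose an OpenAI-compatible HTTP API, so calling a deployed model typically looks like a standard chat-completions request. A minimal sketch of what such a request body might look like — the endpoint URL and model identifier below are illustrative assumptions, not details from this article:

```python
import json

# Hypothetical endpoint and model name for a locally deployed NIM
# microservice; NIM services expose an OpenAI-compatible API.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "nvidia/nemotron-4-mini-hindi-4b-instruct"  # assumed identifier

def build_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_request("भारत की राजधानी क्या है?")  # "What is the capital of India?"
print(json.dumps(body, ensure_ascii=False))
# An actual call would POST this body to NIM_URL, e.g. with requests.post.
```

The request shape is the same whether the service runs on a workstation GPU or in a data center, which is the portability the NIM packaging is meant to provide.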

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects.


Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion parameter multilingual language model developed by Nvidia. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using Nvidia NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.

The dataset was created with Nvidia NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses Nvidia RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership.

It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.


After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.

It’s available as part of the Nvidia AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments. A number of Indian companies are using the services.

India’s AI factories can transform economy


India’s leading cloud infrastructure providers and server manufacturers are ramping up accelerated data center capacity in what Nvidia calls AI factories. By year’s end, they’ll have boosted Nvidia GPU deployment in the country by nearly 10 times compared to 18 months ago.

Tens of thousands of Nvidia Hopper GPUs will be added to build AI factories — large-scale data centers for producing AI — that support India’s large businesses, startups and research centers running AI workloads in the cloud and on premises. This will cumulatively provide nearly 180 exaflops of compute to power innovation in healthcare, financial services and digital content creation.

Announced today at the Nvidia AI Summit, this buildout of accelerated computing technology is led by data center provider Yotta Data Services, global digital ecosystem enabler Tata Communications, cloud service provider E2E Networks and original equipment manufacturer Netweb.


Their systems will enable developers to harness domestic data center resources powerful enough to fuel a new wave of large language models, complex scientific visualizations and industrial digital twins that could propel India to the forefront of AI-accelerated innovation.

Yotta Data Services is providing Indian businesses, government departments and researchers access to managed cloud services through its Shakti Cloud platform to boost generative AI adoption and AI education.

Powered by thousands of Nvidia Hopper GPUs, these computing resources are complemented by Nvidia AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade copilots and other generative AI applications.

With Nvidia AI Enterprise, Yotta customers can access Nvidia NIM, a collection of microservices for optimized AI inference, and Nvidia NIM Agent Blueprints, a set of customizable reference architectures for generative AI applications. This will allow them to rapidly adopt optimized, state-of-the-art AI for applications including biomolecular generation, virtual avatar creation and language generation.


“The future of AI is about speed, flexibility and scalability, which is why Yotta’s Shakti Cloud platform is designed to eliminate the common barriers that organizations across industries face in AI adoption,” said Sunil Gupta, CEO of Yotta, in a statement. “Shakti Cloud brings together high-performance GPUs, optimized storage and a services layer that simplifies AI development from model training to deployment, so organizations can quickly scale their AI efforts, streamline operations and push the boundaries of what AI can accomplish.”



Technology

Cash collection startup Upflow also wants to handle B2B payments


Upflow, a French startup we’ve been covering for quite a while, originally focused on managing outstanding invoices. The company is now announcing a shift in its strategy to become a B2B payment platform with its own payment gateway to complement its accounts receivable automation solution.

Like many software-as-a-service products, Upflow started by building a central hub designed for one audience in particular: CFOs. From the Upflow dashboards, CFOs and finance teams could see all their company’s invoices, track payments, communicate with team members, and send reminders to clients.

It integrates nicely with other financial tools and services to automatically import data from those third-party services. And a tool like Upflow can be particularly important as many tech companies struggle to raise their next funding round and want to improve their cash balance.

But that was just the first step in a bigger roadmap.


“Basically, my vision has always been that the real problem is payment methods,” Upflow co-founder and CEO Alexandre Louisy (pictured above) told TechCrunch. “Today, when you pay in a store, you pay with your phone. When you pay for your Spotify subscription or your Amazon subscription, you don’t even think about how you pay.”

“But when you look at B2B payments, the way you pay today hasn’t changed in the last 50 years. And for us, that’s why people struggle with late payments. The thing I’m really trying to fight against is the idea that late payments are linked to bad payers.”

According to him, around 90% of B2B payments still happen offline in the U.S. It’s still mostly paper checks. In Europe, it’s a different story as companies have adopted bank transfers. But transfers “are completely unstructured and require manual reconciliation,” Louisy said.

Upflow sells its accounts receivable automation software tool to midsized companies with a revenue between $10 million and $500 million per year. The company’s biggest client generates around $1 billion in annual revenue.


“But when you ask [CFOs], ‘What’s your strategy for setting up direct debit on part of your customer base?’ They don’t have a solution,” Louisy said.

Upflow helps you set up incentive strategies so that a portion of your client base moves to online payments, such as card payments or direct debits. The idea isn’t that all your clients are going to pay with a business card overnight. But Upflow can help you change the payment method for something like 20% or 30% of your client base.

Just like CRMs help you manage your sales processes with clients, Upflow now wants to be a financial relationship management (FRM) solution. It’s an interesting strategy as it shows how a startup like Upflow is thinking about diversifying its revenue sources.

“With our model shift, we’re moving from a model where we are 100% based on SaaS revenue to a hybrid model where we have SaaS revenue and payment revenue because we have our own payment gateway that we’ve set up with Stripe,” Louisy said.


Payment is the second brick in Upflow’s product suite. Up next, the company plans to integrate financing options with B2B “buy now, pay later” payment methods on the supplier front and factoring for a company’s outstanding invoices.

“We evaluate solutions … that provide embedded finance,” Louisy said. “It’s not our core business to perform risk assessment. On the other hand, what’s interesting is that we can bring them useful data for credit scoring that they don’t necessarily have when they just connect to one of our users’ accounts.”


Technology

Automatic emergency braking is getting better at preventing crashes


Automatic emergency braking (AEB) isn’t perfect, but the technology is improving, according to a recent study conducted by AAA. The research comes on the heels of a new federal rule requiring all vehicles to have the most robust version of AEB by 2029.

AAA wanted to see how newer vehicles with AEB fared compared to older models with the technology. AEB uses forward-facing cameras and other sensors to automatically tell the car to apply the brakes when a crash is imminent. And according to the test results, newer versions of AEB are much better at preventing forward collisions than older versions of the tech.
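A common way such systems decide a crash is “imminent” is a time-to-collision (TTC) check: distance to the object divided by the closing speed, with braking triggered when TTC drops below a threshold. A toy sketch of that logic — the 1.5-second threshold is an illustrative assumption, not a figure from AAA or any manufacturer:

```python
def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_ms <= 0:        # not closing on the object
        return float("inf")
    return distance_m / closing_speed_ms

def should_brake(distance_m: float, closing_speed_ms: float,
                 ttc_threshold_s: float = 1.5) -> bool:
    """Trigger emergency braking when time-to-collision falls below threshold."""
    return time_to_collision(distance_m, closing_speed_ms) < ttc_threshold_s

# 20 m from a stopped obstacle while closing at 35 mph (~15.6 m/s):
print(should_brake(20.0, 15.6))   # TTC ≈ 1.28 s -> True
```

Real systems fuse camera and radar estimates and use more sophisticated triggers, but the core idea is the same: the faster you are closing, the earlier the system must commit to braking.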

The motorist group conducted its test on a private closed course using older (2017–2018) and newer versions (2024) of the same three vehicles: Jeep Cherokee, Nissan Rogue, and Subaru Outback. Each vehicle was tested at 12mph, 25mph, and 35mph to see how well AEB performed at different speeds. And a fake vehicle was placed in the middle of the road to see whether AEB could prevent a collision.

100 percent of new vehicles braked before a collision


Unsurprisingly, the newer models performed a lot better than the older ones: 100 percent of the 2024 vehicles braked before a collision, as compared to 51 percent of the older vehicles.

Still, this more recent test only involved forward collisions. Past AAA studies found AEB to be ill-equipped at preventing other common types of crashes, like T-bone collisions and left turns in front of approaching vehicles.

“Since we began testing AEB in 2014, the advancements by automakers are commendable and promising in improving driver safety,” said Greg Brannon, AAA’s director of automotive engineering research. “There is still significant work ahead to ensure the systems work at higher speeds.”

The results are a positive sign that AEB is improving, considering the National Highway Traffic Safety Administration (NHTSA) finalized a new requirement for all light-duty vehicles to have robust AEB systems by 2029. Around 90 percent of vehicles on the road today come standard with AEB, but the new rule requires automakers to adopt a more robust version of the technology that can stop vehicles traveling at higher speeds and detect vulnerable road users, like cyclists and pedestrians, even at night.
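One reason higher speeds are harder: the minimum stopping distance grows with the square of speed, so a modest speed increase demands far earlier detection and braking. A back-of-the-envelope sketch using the idealized formula d = v² / (2·μ·g), where the friction coefficient of 0.7 (dry pavement) is an illustrative assumption:

```python
# Ideal stopping distance, ignoring driver/system reaction time.
MU = 0.7           # assumed tire-road friction coefficient (dry pavement)
G = 9.81           # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph: float) -> float:
    """Minimum braking distance in meters: v^2 / (2 * mu * g)."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * MU * G)

for mph in (12, 25, 35):   # the speeds used in AAA's test
    print(f"{mph} mph -> {stopping_distance_m(mph):.1f} m")
# Prints roughly 2.1 m, 9.1 m, and 17.8 m respectively.
```

Going from 12 mph to 35 mph nearly triples the speed but multiplies the required braking distance by about 8.5, which is why systems that pass low-speed tests can still fail at highway speeds.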


Even so, automakers are scrambling to put the brakes on the new rule’s adoption. Earlier this year, the Alliance for Automotive Innovation, which represents most of the major automakers, sent a letter to NHTSA arguing that the final rule is “practically impossible with available technology” and urging the agency to delay its implementation.



Copyright © 2024 WordupNews.com