
Technology

Why AI is a know-it-all know-nothing



More than 500 million people every month trust Gemini and ChatGPT to keep them in the know about everything from pasta to sex to homework. But if AI tells you to cook your pasta in gasoline, you probably shouldn’t take its advice on birth control or algebra, either.

At the World Economic Forum in January, OpenAI CEO Sam Altman was pointedly reassuring: “I can’t look in your brain to understand why you’re thinking what you’re thinking. But I can ask you to explain your reasoning and decide if that sounds reasonable to me or not. … I think our AI systems will also be able to do the same thing. They’ll be able to explain to us the steps from A to B, and we can decide whether we think those are good steps.”

Knowledge requires justification

It’s no surprise that Altman wants us to believe that large language models (LLMs) like ChatGPT can produce transparent explanations for everything they say: Without a good justification, nothing humans believe or suspect to be true ever amounts to knowledge. Why not? Well, think about when you feel comfortable saying you positively know something. Most likely, it’s when you feel absolutely confident in your belief because it is well supported — by evidence, arguments or the testimony of trusted authorities.


LLMs are meant to be trusted authorities: reliable purveyors of information. But unless they can explain their reasoning, we can’t know whether their assertions meet our standards for justification. For example, suppose you tell me today’s Tennessee haze is caused by wildfires in western Canada. I might take you at your word. But suppose yesterday you swore to me in all seriousness that snake fights are a routine part of a dissertation defense. Then I know you’re not entirely reliable. So I may ask why you think the smog is due to Canadian wildfires. For my belief to be justified, it’s important that I know your report is reliable.

The trouble is that today’s AI systems can’t earn our trust by sharing the reasoning behind what they say, because there is no such reasoning. LLMs aren’t even remotely designed to reason. Instead, models are trained on vast amounts of human writing to detect, then predict or extend, complex patterns in language. When a user inputs a text prompt, the response is simply the algorithm’s projection of how the pattern will most likely continue. These outputs mimic, ever more convincingly, what a knowledgeable human might say. But the underlying process has nothing whatsoever to do with whether the output is justified, let alone true. As Hicks, Humphries and Slater put it in “ChatGPT is Bullshit,” LLMs “are designed to produce text that looks truth-apt without any actual concern for truth.”
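To see how little the generation step has to do with justification, it helps to look at the mechanics. Below is a deliberately tiny sketch: a toy bigram model in Python, not a real transformer and not anyone's production code, but the core move is the same. The program picks a statistically likely continuation of the pattern, and truth is nowhere in the loop.

import random
from collections import defaultdict

# Toy "language model": count which word tends to follow which.
# Real LLMs use transformers over tokens, but the training objective is
# analogous: predict a likely continuation, not a justified or true one.
corpus = "the haze is caused by wildfires the haze is thick today".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt_word, length=5):
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample a statistically likely next word; no notion of truth here.
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the"))  # e.g. "the haze is caused by wildfires"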

So, if AI-generated content isn’t the artificial equivalent of human knowledge, what is it? Hicks, Humphries and Slater are right to call it bullshit. Still, a lot of what LLMs spit out is true. When these “bullshitting” machines produce factually accurate outputs, they produce what philosophers call Gettier cases (after the philosopher Edmund Gettier). These cases are interesting because of the strange way they combine true beliefs with ignorance about those beliefs’ justification.

AI outputs can be like a mirage

Consider this example, from the writings of the 8th-century Indian Buddhist philosopher Dharmottara: Imagine that we are seeking water on a hot day. We suddenly see water, or so we think. In fact, we are not seeing water but a mirage; yet when we reach the spot, we are lucky and find water right there under a rock. Can we say that we had genuine knowledge of water?


People widely agree that whatever knowledge is, the travelers in this example don’t have it. Instead, they lucked into finding water precisely where they had no good reason to believe they would find it.

The thing is, whenever we think we know something we learned from an LLM, we put ourselves in the same position as Dharmottara’s travelers. If the LLM was trained on a quality data set, then quite likely, its assertions will be true. Those assertions can be likened to the mirage. And evidence and arguments that could justify its assertions also probably exist somewhere in its data set — just as the water welling up under the rock turned out to be real. But the justificatory evidence and arguments that probably exist played no role in the LLM’s output — just as the existence of the water played no role in creating the illusion that supported the travelers’ belief they’d find it there.

Altman’s reassurances are, therefore, deeply misleading. If you ask an LLM to justify its outputs, what will it do? It’s not going to give you a real justification. It’s going to give you a Gettier justification: a natural language pattern that convincingly mimics a justification. A chimera of a justification. As Hicks et al. would put it, a bullshit justification. Which is, as we all know, no justification at all.

Right now, AI systems regularly mess up, or “hallucinate,” in ways that keep the mask slipping. But as the illusion of justification becomes more convincing, one of two things will happen.


For those who understand that true AI content is one big Gettier case, an LLM’s patently false claim to be explaining its own reasoning will undermine its credibility. We’ll know that AI is being deliberately designed and trained to be systematically deceptive.

And those of us who are not aware that AI spits out Gettier justifications — fake justifications? Well, we’ll just be deceived. To the extent we rely on LLMs we’ll be living in a sort of quasi-matrix, unable to sort fact from fiction and unaware we should be concerned there might be a difference.

Each output must be justified

When weighing the significance of this predicament, it’s important to keep in mind that there’s nothing wrong with LLMs working the way they do. They’re incredible, powerful tools. And people who understand that AI systems spit out Gettier cases instead of (artificial) knowledge already use LLMs in a way that takes that into account. Programmers use LLMs to draft code, then use their own coding expertise to modify it according to their own standards and purposes. Professors use LLMs to draft paper prompts and then revise them according to their own pedagogical aims. Any speechwriter worthy of the name during this election cycle is going to fact-check the heck out of any draft AI composes before they let their candidate walk onstage with it. And so on.

But most people turn to AI precisely where we lack expertise. Think of teens researching algebra… or prophylactics. Or seniors seeking dietary — or investment — advice. If LLMs are going to mediate the public’s access to that kind of crucial information, then at the very least we need to know whether and when we can trust them. And trust would require knowing the very thing LLMs can’t tell us: whether and how each output is justified.


Fortunately, you probably know that olive oil works much better than gasoline for cooking spaghetti. But what dangerous recipes for reality have you swallowed whole, without ever tasting the justification?

Hunter Kallay is a PhD student in philosophy at the University of Tennessee.

Kristina Gehrman, PhD, is an associate professor of philosophy at the University of Tennessee.


Technology

Cloud, edge or on-prem? Navigating the new AI infrastructure paradigm



This article is part of a VB Special Issue called “Fit for Purpose: Tailoring AI Infrastructure.”

No doubt, enterprise data infrastructure continues to transform with technological innovation — most notably today due to data- and resource-hungry generative AI.

As gen AI changes the enterprise itself, leaders continue to grapple with the cloud/edge/on-prem question. On the one hand, they need near-instant access to data; on the other, they need to know that that data is protected. 


As they face this conundrum, more and more enterprises see hybrid models as the way forward, as they can exploit the different advantages that cloud, edge and on-prem models have to offer. Case in point: 85% of cloud buyers have either deployed or are in the process of deploying a hybrid cloud, according to IDC.

“The pendulum between the edge and the cloud and all the hybrid flavors in between has kept shifting over the past decade,” Priyanka Tembey, co-founder and CTO at runtime application security company Operant, told VentureBeat. “There are quite a few use cases coming up where compute can benefit from running closer to the edge, or as a combination of edge plus cloud in a hybrid manner.”


The shifting data infrastructure pendulum

For a long time, cloud was associated with hyperscale data centers — but that is no longer the case, explained Dave McCarthy, research VP and global research lead for IDC’s cloud and edge services. “Organizations are realizing that the cloud is an operating model that can be deployed anywhere,” he said. 


“Cloud has been around long enough that it is time for customers to rethink their architectures,” he said. “This is opening the door for new ways of leveraging hybrid cloud and edge computing to maximize the value of AI.”

AI, notably, is driving the shift to hybrid cloud and edge because models need more and more computational power, as well as access to large datasets, noted Miguel Leon, senior director at app modernization company WinWire.

“The combination of hybrid cloud, edge computing and AI is changing the tech landscape in a big way,” he told VentureBeat. “As AI continues to evolve and becomes a de facto embedded technology to all businesses, its ties with hybrid cloud and edge computing will only get deeper and deeper.”

Edge addresses issues cloud can’t solve alone

According to IDC research, spending on edge is expected to reach $232 billion this year. This growth can be attributed to several factors, McCarthy noted — each of which addresses a problem that cloud computing can’t solve alone. 


One of the most significant is latency-sensitive applications. “Whether introduced by the network or the number of hops between the endpoint and server, latency represents a delay,” McCarthy explained. For instance, vision-based quality inspection systems used in manufacturing require real-time response to activity on a production line. “This is a situation where milliseconds matter, necessitating a local, edge-based system,” he said. 

“Edge computing processes data closer to where it’s generated, reducing latency and making businesses more agile,” Leon agreed. It also supports AI apps that need fast data processing for tasks like image recognition and predictive maintenance.
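To make the trade-off concrete, here is a minimal sketch of the kind of dispatcher such a system implies. The tier latencies, thresholds and names are illustrative assumptions, not taken from IDC or any vendor.

# Hypothetical dispatcher: route an inference request by latency budget.
# The tier latencies and names are illustrative assumptions.

EDGE_ROUND_TRIP_MS = 5     # on-site server, e.g. on the factory floor
CLOUD_ROUND_TRIP_MS = 120  # network hops to a regional cloud endpoint

def route_inference(latency_budget_ms):
    """Pick the cheapest tier that can still meet the deadline."""
    if latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        # e.g. vision-based quality inspection: milliseconds matter
        return "edge"
    return "cloud"

assert route_inference(30) == "edge"     # production-line inspection
assert route_inference(2000) == "cloud"  # overnight batch scoring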

Edge is also beneficial for limited-connectivity environments, such as internet of things (IoT) devices that may be mobile and move in and out of coverage areas, or that experience limited bandwidth, McCarthy noted. In certain cases — autonomous vehicles, for one — AI must be operational even if a network is unavailable.

Another issue that spans all computing environments is data — and lots of it. According to the latest estimates, approximately 328.77 million terabytes of data are generated every day. By 2025, the volume of data created annually is expected to exceed 170 zettabytes, representing a more than 145-fold increase in 15 years.
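As a back-of-the-envelope check on how those figures relate (decimal units; this is arithmetic on the cited numbers, not a new estimate):

# Back-of-the-envelope check of the cited figures (decimal units).
tb_per_day = 328.77e6          # 328.77 million terabytes per day
zb_per_day = tb_per_day / 1e9  # 1 zettabyte = 1e9 terabytes
zb_per_year = zb_per_day * 365
print(f"{zb_per_year:.0f} ZB per year")  # ~120 ZB, the same order of
                                         # magnitude as the 2025 projection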


As data in remote locations continues to increase, costs associated with transmitting it to a central data store also continue to grow, McCarthy pointed out. However, in the case of predictive AI, most inference data does not need to be stored long-term. “An edge computing system can determine what data is necessary to keep,” he said. 
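What such edge-side triage might look like, in a minimal sketch; the record schema and threshold are assumptions for illustration, not from IDC or any product.

# Illustrative edge-side triage: forward only inference records worth
# storing centrally; the schema and threshold are assumptions.

ANOMALY_THRESHOLD = 0.8

def triage(records):
    """Keep anomalous or failed inferences; drop routine ones locally."""
    return [
        r for r in records
        if r["anomaly_score"] >= ANOMALY_THRESHOLD or not r["passed"]
    ]

readings = [
    {"id": 1, "anomaly_score": 0.1, "passed": True},   # routine: discarded
    {"id": 2, "anomaly_score": 0.9, "passed": True},   # unusual: kept
    {"id": 3, "anomaly_score": 0.2, "passed": False},  # failure: kept
]
to_central_store = triage(readings)  # only ids 2 and 3 cross the network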

Also, whether due to government regulation or corporate governance, there can be restrictions on where data can reside, McCarthy noted. As governments continue to pursue data sovereignty legislation, businesses are increasingly challenged with compliance. This can occur when cloud or data center infrastructure is located outside a local jurisdiction. Edge can come in handy here as well.

With AI initiatives quickly moving from proof-of-concept trials to production deployments, scalability has become another big issue. 

“The influx of data can overwhelm core infrastructure,” said McCarthy. He explained that, in the early days of the internet, content delivery networks (CDNs) were created to cache content closer to users. “Edge computing will do the same for AI,” he said. 


Benefits and uses of hybrid models

Different cloud environments have different benefits, of course. For example, McCarthy noted, auto-scaling to meet peak usage demands is “perfect” for public cloud. Meanwhile, on-premises data centers and private cloud environments can help secure and provide better control over proprietary data. The edge, for its part, provides resiliency and performance in the field. Each plays its part in an enterprise’s overall architecture.

“The benefit of a hybrid cloud is that it allows you to choose the right tool for the job,” said McCarthy. 

He pointed to numerous use cases for hybrid models: For instance, in financial services, mainframe systems can be integrated with cloud environments so that institutions can maintain their own data centers for banking operations while leveraging the cloud for web and mobile-based customer access. Meanwhile, in retail, local in-store systems can continue to process point-of-sale transactions and inventory management independently of the cloud should an outage occur. 
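The retail failover McCarthy describes is essentially a local-first design. Here is a minimal sketch under assumed names; nothing in it comes from a real point-of-sale system.

# Hypothetical local-first sale recording: the in-store system is
# authoritative, and the cloud is synced opportunistically.

class CloudUnreachable(Exception):
    pass

def sync_to_cloud(transaction):
    # Placeholder for a real API call; assume the network is down here.
    raise CloudUnreachable

ledger = []        # local, in-store record of sales
pending_sync = []  # transactions to replay when the cloud returns

def record_sale(transaction):
    """Commit locally first, so an outage never blocks the checkout lane."""
    ledger.append(transaction)
    try:
        sync_to_cloud(transaction)
    except CloudUnreachable:
        pending_sync.append(transaction)

record_sale({"sku": "A-100", "amount": 19.99})
assert ledger and pending_sync  # the sale is recorded despite the outage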

“This will become even more important as these retailers roll out AI systems to track customer behavior and prevent shrinkage,” said McCarthy. 


Tembey also pointed out that a hybrid approach, combining AI that runs locally on a device or at the edge with larger private or public models, and using strict isolation techniques, can preserve sensitive data.
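One common way to realize what Tembey describes is a routing layer that keeps sensitive prompts on the local model. The sketch below is a hedged illustration; the detection rules and model labels are placeholder assumptions.

import re

# Placeholder rules: treat SSN-like strings and credential keywords as
# sensitive. A real deployment would use proper PII detection.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    re.compile(r"password|api[_-]?key", re.IGNORECASE),
]

def route_prompt(prompt):
    """Send sensitive prompts to an on-device model, the rest to the cloud."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "local-model"  # data never leaves the device/edge boundary
    return "cloud-model"      # larger shared model for everything else

assert route_prompt("my api_key is hunter2") == "local-model"
assert route_prompt("summarize this press release") == "cloud-model"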

That’s not to say there aren’t downsides — McCarthy pointed out, for instance, that hybrid can increase management complexity, especially in mixed-vendor environments.

“That is one reason why cloud providers have been extending their platforms to both on-prem and edge locations,” he said, adding that original equipment manufacturers (OEMs) and independent software vendors (ISVs) have also increasingly been integrating with cloud providers. 

Interestingly, at the same time, 80% of respondents to an IDC survey indicated that they have moved, or plan to move, some public cloud resources back on-prem.


“For a while, cloud providers tried to convince customers that on-premises data centers would go away and everything would run in the hyperscale cloud,” McCarthy noted. “That has proven not to be the case.”



Servers computers

Dell PowerEdge R640 NVMe 10 Bay Server Build | Configured To Order | Timelapse #technology #dell




At Cloud Ninjas, we pride ourselves on our quality control and high standards. As always, our technician wears ESD gear when coming into contact with any servers or components. They start by laying out all the components of the build on their workstation and go section by section, following all safety protocols. Finally, they finish off with a full system test, just like you see Scott do with Dell Diagnostics!

We have Dell, HP, Supermicro, Cisco, and IBM servers in stock. If you are interested in purchasing a custom-configured server, head over to our website https://cloudninjas.com/ or email us at Sales@CloudNinjas.com.

Please smash that subscribe button and learn more about what we offer at Cloud Ninjas.

Follow us on:
https://www.facebook.com/realcloudninjas/
https://twitter.com/realcloudninjas


Technology

Apple’s homework is due Monday no matter what, says judge


“THE COURT: — so let me make it clear then if you obviously didn’t understand. I want all of Apple’s documents relative to its decision-making process with respect to the issues in front of the Court. All of them. All. If there is a concern, then be overly broad.

MR. PERRY: Your Honor, may I ask time parameter for the Court’s request.

THE COURT: All.

MR. PERRY: Thank you, Your Honor.


THE COURT: So let’s say from the day that my decision came out until the present.”


Technology

NYT Strands today — hints, answers and spangram for Sunday, September 29 (game #210)


Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.

Want more word-based fun? Then check out my Wordle today, NYT Connections today and Quordle today pages for hints and answers for those games.


Servers computers

Dell PowerEdge 2950 Rack Server – Overview, Specifications, Benefits & Uses




Buy Refurbished Dell PowerEdge 2950 Server https://www.serverbasket.com/shop/refurbished-dell-poweredge-2950-server/
For information on the Dell PowerEdge 2950 Rack Server, contact us:
Website: https://www.serverbasket.com
Email: sales@serverbasket.com
Toll-Free No: 1800 123 1346
WhatsApp: +91 8886001858

Subscribe To Our Channel @ https://www.youtube.com/channel/UCO8bZFM0NzVsjG7Ss83LvOQ
Check out the Powerful Dell PowerEdge 2950 Rack Server.

Buy a refurbished Dell PowerEdge 2950 Rack Server from Server Basket; it is optimal for growing SMEs and demanding businesses. With high memory capacity and huge storage ability, the Dell PowerEdge 2950 Rack Server is an ideal server for tech startups and booming SME businesses.

Key Benefits:

– Flexibility And Storage Capacity
– High Performance and Availability To Maximize Uptime
– Manageability for Reduced Complexity
– Easy To Use
– Best Price in Market
– Quick Support

Dell PowerEdge 2950 Rack Server Specifications:

CPU Capacity:

– Supports 2 Processors
– Intel® Xeon® 5100, 5200, 5300 and 5400 series processors
– Single CPU: 4 cores max
– Dual CPU: 8 cores max
– Max VCPUs: 12 VCPUs

RAM Capacity:

– Inbuilt 8 DIMM Slots
– 8 GB Max Memory Per DIMM Slot
– 64 GB Maximum Memory Capacity
– Supported Technology: DDR2 Memory
– RAM Speed: 667 MHz

Storage Capacity:

– 8 x 2.5″ drive option: up to 8 SAS (10K) drives
– 4 x 3.5″ drive option: up to 4 SAS (10K/15K) or SATA (7.2K) drives
– 6 x 3.5″ drive option: up to 6 SAS (10K/15K) or SATA (7.2K) drives

– Max potential storage: 6 TB

RAID Controller:

-PERC integrated SAS/SATA

Power Supply:

– Single or redundant 750W hot-plug auto-switching 110/220 VAC, or redundant hot-plug -48 to -60V 20A DC power supplies

Operating System:

-Microsoft® Windows® Server
-Microsoft® Windows® Storage Server
-Red Hat® Linux® Enterprise
-Novell® Netware®
-Novell® SUSE Linux
-VMware® Virtual Infrastructure

Systems management:

-Dell OpenManage

Remote management:

– Baseboard Management Controller with IPMI 2.0 support; optional DRAC5 (advanced capabilities)

Check out the Powerful Dell PowerEdge 2950 Rack Server from Server Basket.
#ServerBasket #DellPoweredge2950 #Dell2950Server


Technology

Computer viruses can spread by using ChatGPT to write sneaky emails



Researchers have shown that a computer virus can use ChatGPT to rewrite its code to avoid detection, then write tailored emails that look like genuine replies, spreading itself in an email attachment.

As well as producing human-like text, large language models (LLMs) – the artificial intelligences behind powerful chatbots like ChatGPT – can also write computer code. David Zollikofer at ETH Zurich in Switzerland and Benjamin Zimmerman at Ohio State University are concerned that this facility could be exploited by viruses that rewrite their own code, known as metamorphic malware.


