Hard coolers are easier to manage once you get where you’re going. It’s the getting there with those bulky, portable ice boxes that proves challenging.
If you’ve ever packed a car or camper for a camping trip, you know that space is precious. Tents, clothing, snacks and other outdoor essentials take up the bulk of the room inside your vehicle before you and your companions even jump in your seats. Now throw your bulky coolers into the mix, and you’re left with even less space.
When it is folded down, the Coleman cooler is easy to slide in tight spaces.
Corin Cesaric-Epple/CNET
Soft coolers have always been the better choice for car travel, since most can squish down to the size of a throw pillow, but that notion might be changing.
With the release of its first-ever collapsible hard-cover cooler, Coleman is looking to flip the script on hard versus soft coolers. I got my hands on a test unit to see if this foldable hard cooler does what it’s meant to.
How it works
I knew the Snap ‘N Go was a heavy-duty cooler when I picked up the package from my front porch. It wasn’t a large package, but it was decently heavy, as the cooler weighs 20 pounds when empty. Cooler bags have been around for some time, and they’re lighter and easy to fold into a small square, but the Snap ‘N Go is the first hard-shell cooler that can collapse, folding down to one-third of its full size.
The 55-quart cooler can hold up to 93 cans.
Corin Cesaric-Epple/CNET
It is available in three colors (light blue, black and dark blue) and three sizes: 35-quart, 45-quart and 55-quart, the largest of which can hold up to 93 cans without ice. The 55-quart version is less than 5 inches tall when collapsed, making it easy to fit in tight spaces or store away when not in use.
The waterproof liner folds down just as easily.
Corin Cesaric-Epple/CNET
Using the handles on the outside of the cooler, you can pop it into its upright position in about a second. When it’s upright, it’s hard to tell apart from other hard-shell coolers, and the 55-quart can keep the items inside cool for up to 64 hours, according to the company. The 35-quart keeps them cool for 48 hours, while the 45-quart keeps them cool for 55 hours.
To pop it back down to its more compact size, it takes only a couple of seconds to pull the strap up toward you, and it closes in on itself, accordion-style. The waterproof liner is also removable, making it easy to clean.
I filled the bottom with water and left it for 30 minutes to make sure there was no leaking.
Corin Cesaric/CNET
To ensure the cooler was leakproof, I filled the bottom with water and left it for 30 minutes. It aced the test with not a single drop of water escaping.
When is the Snap ‘N Go available for purchase?
The Coleman Snap ‘N Go is available for purchase now. It’s priced between $200 and $240, depending on the size, and comes with a three-year limited warranty.
It is available in three colors and three sizes.
Corin Cesaric-Epple/CNET
Final thoughts
Name-brand coolers don’t typically come cheap, so I’m not sticker-shocked by this one’s price, especially since it’s a hard-shell option. I was impressed by how quick and easy it is to pop up, and I especially like that it isn’t bulky to store. Although we haven’t put this brand-new Coleman cooler through our rigorous lab testing, my initial experience with the Snap ‘N Go was promising.
If you’re an avid outdoor enthusiast or camper, this quality cooler could be a great addition to your summertime activities.
Andrew Jones doesn’t need a reintroduction, but he’s getting one anyway. After shaping some of the most important loudspeakers of the past three decades at KEF, Pioneer, ELAC and now MoFi Electronics, where he still leads loudspeaker design, one of the industry’s most respected and technically grounded engineers is stepping out with something new. Jones and Cerreta, a Los Angeles-based speaker company co-founded with Jamie and Bill Cerreta, marks the first time Andrew Jones has put his name on the door.
Set to debut in just 17 days at AXPONA 2026, the new brand signals more than another product launch. It’s a reset. Known for delivering reference-level thinking at real-world prices, Jones is now pairing that engineering discipline with a more design-forward approach aimed at listeners who want both sonic credibility and visual impact. The debut loudspeaker is being positioned as a clear departure from his previous work, but the core philosophy remains intact: engineering decisions that serve the music first, not the spec sheet.
Who Is Behind Jones and Cerreta?
Jones and Cerreta brings together three partners with very different backgrounds across engineering, music, and technology, all focused on how music is created, reproduced, and experienced.
Andrew Jones – Lead Speaker Designer and Co-Founder
Andrew Jones is one of the most experienced loudspeaker designers working today, with a career that spans KEF, Infinity, Pioneer, TAD, ELAC, and now MoFi Electronics, where he continues to lead loudspeaker design. He studied physics with a focus on acoustics and has worked extensively on crossover design and driver integration.
At KEF, he worked with concentric driver technology, and later at Pioneer helped establish TAD’s transition into the home audio market, including the development of a beryllium concentric driver. At ELAC, he played a key role in building out the company’s North American speaker lineup. Jones and Cerreta is the first company where his name is directly attached as a co-founder.
Jamie Cerreta – Creative Strategy and Co-Founder
Jamie Cerreta brings more than 25 years of experience in the music industry. He currently serves as President of Peermusic in the U.S. and Canada and has worked closely with artists, producers, and songwriters across a wide range of genres.
His experience includes working with artists such as Ray LaMontagne, My Morning Jacket, and Manchester Orchestra, as well as supporting the development of newer artists and writers. He also serves on the Executive Board of the National Music Publishers Association S.O.N.G.S. Foundation. His role focuses on how recorded music translates from the studio to the listener.
Bill Cerreta – CEO and Co-Founder
Bill Cerreta is an electrical engineer with more than 30 years of experience in Silicon Valley, currently working at Pure Storage on data infrastructure technologies. He brings experience in product development, team leadership, and business operations.
He is also an active record collector and has spent years sourcing vinyl pressings internationally. In addition, he restores and builds vintage audio equipment, including tube gear and speakers. His role combines technical knowledge with operational oversight as the company launches its first products.
What Is Jones and Cerreta Bringing to AXPONA 2026?
Here’s what we actually know so far—and it’s just enough to raise eyebrows. The debut speaker is a floorstanding design with no model name and no announced pricing, although nobody should expect this to land anywhere near entry level.
The headline detail is the use of a concentric driver, which tracks with Andrew Jones’ long history at KEF and TAD—but this time it is paired with a field coil, a technology rarely seen in modern loudspeakers due to cost, complexity, and power requirements. That combination alone suggests this is not a continuation of his ELAC or MoFi playbook.
Beyond that, details are scarce. No published specs, no confirmed materials, no crossover topology, and no official performance targets. Which means one thing: whatever shows up in Room 302 at AXPONA is likely doing something different enough that they’re not ready to fully spell it out yet.
What Is a Field Coil Driver?
Field coil drivers are an old idea that never fully went away—they just became too complicated and expensive for most modern loudspeakers. Instead of using a permanent magnet like almost every speaker today, a field coil driver uses an electromagnet powered by an external power supply to generate the magnetic field that drives the voice coil.
That difference matters. Because the magnetic field is actively generated, it can be stronger, more stable, and in some cases adjustable, which can improve control, dynamics, and overall efficiency. It’s one of the reasons field coil designs have a reputation for sounding exceptionally clean and immediate when done well.
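To make those tradeoffs concrete, the field strength and heat of an electromagnet follow from textbook formulas. The sketch below uses the long-solenoid approximation with purely invented coil numbers (none of them come from Jones and Cerreta) just to show why a field coil needs a continuous external supply and sheds real heat:

```python
from math import pi

MU0 = 4 * pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(turns: int, length_m: float, current_a: float) -> float:
    """Flux density inside a long solenoid: B = mu0 * (N/L) * I."""
    return MU0 * (turns / length_m) * current_a

def coil_heat(current_a: float, resistance_ohm: float) -> float:
    """Resistive power dissipated in the winding: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# Invented numbers: 2,000 turns over 5 cm, driven at 1.5 A through an 8-ohm winding
b = solenoid_field(2000, 0.05, 1.5)  # ~0.075 T in the bore, before any iron core
p = coil_heat(1.5, 8.0)              # 18 W of continuous heat
```

Real driver motors use iron pole pieces that concentrate the field well beyond this bare-coil figure, but the heat term is the reason field coil speakers ship with dedicated power supplies and need thermal headroom.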
The tradeoffs are real. Field coil systems require an external power supply, add complexity, generate heat, and significantly increase cost. That’s why they’re mostly found in ultra high end or boutique speakers, often from companies like Cessaro, Voxativ, Tune Audio, Line Magnetic, and Feastrex.
What makes this relevant now is that Andrew Jones is reportedly using a field coil concentric driver in a floorstanding speaker. That’s not how this technology is typically deployed. It’s usually seen in horn systems or single driver designs, not something that looks like it could scale into a broader product line.
In other words, the technology itself isn’t new. Where and how it’s being used this time might be.
Where and When to Hear Andrew Jones’ New Speaker at AXPONA 2026
Jones and Cerreta will make its public debut at AXPONA 2026, taking place April 10 to 12 in Chicago, Illinois, with demonstrations scheduled in Room 302 throughout the show. Attendees will be among the first to see and hear Andrew Jones’ latest loudspeaker design, which promises a fresh take that blends legacy ideas with new engineering approaches.
Andrew Jones will also host a Master Class on April 11 from 5:00 to 5:45 PM in Expo Hall, titled Reimagining the Dual Concentric Driver, offering insight into the thinking behind the new design and how it challenges traditional implementations.
We’ll be there for a first listen—and if history is any guide, this won’t be a quiet debut.
The TeamPCP hacking group is targeting Kubernetes clusters with a malicious script that wipes all machines when it detects systems configured for Iran.
The threat actor is responsible for the recent supply-chain attack on the Trivy vulnerability scanner, and also an NPM-based campaign dubbed ‘CanisterWorm,’ which started on March 20.
Selective destruction payload
Researchers at application security company Aikido say that the campaign targeting Kubernetes clusters uses the same command-and-control (C2), backdoor code, and drop path as seen in the CanisterWorm incidents.
However, the new campaign differs in that it includes a destructive payload targeting Iranian systems and installs the CanisterWorm backdoor on nodes in other locales.
“The script uses the exact same ICP canister (tdtqy-oyaaa-aaaae-af2dq-cai[.]raw[.]icp0[.]io) we documented in the CanisterWorm campaign. Same C2, same backdoor code, same /tmp/pglog drop path,” Aikido says.
“The Kubernetes-native lateral movement via DaemonSets is consistent with TeamPCP’s known playbook, but this variant adds something we haven’t seen from them before: a geopolitically targeted destructive payload aimed specifically at Iranian systems.”
According to Aikido researchers, the malware is built to destroy any machine that matches Iran’s timezone and locale, regardless of whether Kubernetes is present.
If the system is identified as Iranian and Kubernetes is present, the script deploys a DaemonSet named ‘Host-provisioner-iran’ in ‘kube-system’, which uses privileged containers and mounts the host root filesystem into /mnt/host.
Each pod runs an Alpine container named ‘kamikaze’ that deletes all top-level directories on the host filesystem, and then forces a reboot on the host.
If Kubernetes is present but the system is identified as not Iranian, the malware deploys a DaemonSet named ‘host-provisioner-std’ using privileged containers with the host filesystem mounted.
Instead of wiping data, each pod writes a Python backdoor onto the host filesystem and installs it as a systemd service so it persists on every node.
On Iranian systems without Kubernetes, the malware deletes every file on the machine accessible to the current user, including system data, by running rm -rf / with the --no-preserve-root flag. If root privileges are not available, it attempts passwordless sudo.
TeamPCP wiping Iranian systems with no Kubernetes (Source: Aikido)
On systems where none of the conditions are met, no malicious action is taken, and the malware just exits.
Aikido reports that a recent version of the malware, which uses the same ICP canister backdoor, has omitted the Kubernetes-based lateral movement and instead uses SSH propagation, parsing authentication logs for valid credentials, and using stolen private keys.
The researchers highlighted some key indicators of this activity, including outbound SSH connections with ‘StrictHostKeyChecking=no’ from compromised hosts, outbound connections to the Docker API on port 2375 across the local subnet, and privileged Alpine containers launched via an unauthenticated Docker API with / mounted as a hostPath.
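The DaemonSet indicators described above (privileged containers combined with the host root filesystem mounted via hostPath) can be checked mechanically against a pod spec. This is a minimal triage sketch, not Aikido’s tooling; the dictionary fields mirror the standard Kubernetes pod-spec schema, and the sample spec is invented:

```python
def is_suspicious_daemonset(pod_spec: dict) -> bool:
    """Flag pod specs matching the indicators above: a privileged
    container combined with the host root filesystem mounted via hostPath."""
    privileged = any(
        c.get("securityContext", {}).get("privileged", False)
        for c in pod_spec.get("containers", [])
    )
    root_hostpath = any(
        v.get("hostPath", {}).get("path") == "/"
        for v in pod_spec.get("volumes", [])
    )
    return privileged and root_hostpath

# Invented spec shaped like the campaign's 'host-provisioner' DaemonSets
sample = {
    "containers": [{"name": "kamikaze", "securityContext": {"privileged": True}}],
    "volumes": [{"name": "host", "hostPath": {"path": "/"}}],
}
print(is_suspicious_daemonset(sample))  # True
```

A real detector would pull live DaemonSet manifests from the cluster API and also watch for the SSH and Docker API indicators, but the core test is this simple.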
Apple is reportedly preparing to add search ads to Apple Maps, “and it could start to roll out to users by the summer,” reports AppleInsider, citing sources from Bloomberg (paywalled). From the report: Apple will make an announcement as soon as March. This will bring ads to search queries within the navigation app, which will operate similarly to Google’s advertising system. Retailers and brands will be able to bid for ad spots tied to specific search terms, such as types of food or services. The winning bidder will be able to show an ad at the top of the results, pointing to a related location for that business. Apple also announced in January that it would add more ads within the App Store, starting in March in the UK and Japan.
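Apple hasn’t described how winning bids will be chosen; the report only says the system will resemble Google’s. Search-ad auctions typically rank advertisers by bid weighted by a quality score, so a purely hypothetical sketch of that ranking step might look like this (all advertiser names and numbers invented):

```python
def rank_ads(bids: dict, quality: dict) -> list:
    """Rank advertisers by bid * quality score, the scheme common
    search-ad auctions use; the top-ranked bidder wins the slot."""
    return sorted(bids, key=lambda a: bids[a] * quality.get(a, 1.0), reverse=True)

# Invented advertisers bidding on a "pizza near me" style query
bids = {"pizza_co": 2.00, "burger_hut": 1.50, "taco_stand": 2.50}
quality = {"pizza_co": 0.9, "burger_hut": 1.0, "taco_stand": 0.7}
print(rank_ads(bids, quality))  # ['pizza_co', 'taco_stand', 'burger_hut']
```

Note that the highest raw bid (taco_stand) doesn’t win here; quality weighting is how these systems keep poorly matched ads from simply buying the top spot.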
Your car might just become the new smart home hub for your house. Samsung has expanded SmartThings integration, enabling drivers to control their smart home devices directly from their car’s infotainment system. It’s called Car-to-Home.
Building on the earlier Home-to-Car capability that allowed users to monitor their cars from inside the house, the Car-to-Home feature flips the functionality so you can control your smart home appliances, such as air conditioners, lighting systems, and other smart switches, from your car’s dashboard.
Samsung
What can the Car-to-Home feature do?
The practical scope of the feature is broader than it might sound, as it is compatible with devices such as air conditioners, air purifiers, robot vacuums, lights, and cameras. Connecting is straightforward — drivers scan a QR code displayed on their car’s infotainment screen and link their vehicle to their SmartThings account.
Apart from manual control (flipping the switches), the Car-to-Home feature unlocks location-aware automation that genuinely changes how your home responds to your day. You can set routines so that the SmartThings network turns on the required appliances as you park your car in the garage.
I can see people using the feature to pre-cool their rooms or run air purifiers before they arrive home after a tiring day at the office. Conversely, the feature can also shut everything down automatically as you get in the car and leave the driveway. There’s a dedicated Away Mode for handling lights when you’re away.
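Samsung hasn’t published the automation logic, but a location-aware routine of this kind presumably reduces to a geofence check: fire an "arrive" routine when the car enters a radius around home, and a "leave" routine when it exits. A minimal sketch under those assumptions (all names and thresholds hypothetical):

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6,371 km

def routine_for(car_pos, home_pos, was_home, radius_km=0.2):
    """Decide which routine to fire when the car crosses the home geofence."""
    near = distance_km(*car_pos, *home_pos) <= radius_km
    if near and not was_home:
        return "arrive"  # e.g., pre-cool rooms, start the air purifier
    if not near and was_home:
        return "leave"   # e.g., Away Mode: lights off, everything shut down
    return None          # no geofence crossing, nothing to do
```

The actual SmartThings implementation runs server-side against the linked vehicle’s location, but the trigger decision is essentially this state transition.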
For now, the feature is available on select Hyundai and Kia cars, specifically those that feature the connected car Navigation Cockpit (ccNC) introduced after November 2022 in Korea. However, both Samsung and Hyundai aim to expand the feature to their customers throughout the world in due course.
Eligible models include the Grandeur, Santa Fe, Ioniq 5, K5, Sorento, and EV9. Samsung also plans to extend the feature to Genesis vehicles equipped with the ccIC27 infotainment system.
If and when the feature becomes available to a wider audience, it could drive a behavioral shift in which cars become central nodes in the smart home ecosystem, linking mobility and domestic technology in ways that were, until recently, purely speculative.
The War Zone reported that a Lockheed Martin-produced mockup of the new version of the Raptor was at the Warfare Symposium, a convention for the defense industry and elements of the United States military. The outlet reported some noteworthy changes being made on this plane. Namely, the aircraft is slated to get upgrades in the form of some extra range and another set of eyes.
Fuel tanks and sensor pods might not sound like a big deal, as those components have been mounted to wing pylons of various aircraft for decades. But it’s not so easy to make these kinds of adjustments on a plane as stealthy as the F-22. That’s because external fuel tanks and sensors don’t have the same stealth considerations as the rest of the aircraft. A big fuel tank is nice, but it can make the plane more visible to radar.
The latest and greatest Raptor
Alex Hevesy/SlashGear
The newer and stealthier sensor pods are posited to give the Raptor better infrared tracking capabilities, according to The War Zone. Given the F-22’s primary role as an air-to-air fighter and the increasing prevalence of powerful stealth fighters from potentially adversarial air forces, any extra capability would likely be welcome.
Specifics as to how much extra range the fuel tanks will give the Raptor and what the sensor pods will allow the F-22 Raptor to do are likely classified. Nevertheless, upgrades are expected to enter service, or at least more advanced testing, over the course of 2026.
The F-22 Raptor, despite all of its menace and upcoming capabilities that, at least on paper, seem to entirely outclass most other jets, has never seen much air-to-air combat apart from shooting down a suspected surveillance balloon. The jet’s exclusivity, paired with the fact that Air Force fighters rarely shoot down other aircraft, means the F-22 doesn’t see a lot of air-to-air action (at least that we know of).
Europol recently unveiled “Operation Alice,” a major effort to dismantle a large network of fraudulent websites hidden within the dark web. The investigation began in 2021 and initially focused on a platform named Alice with Violence CP. In the end, the operation took down one of the largest dark web…
Former executive director of the IEEE Power & Energy Society
Fellow, 92; died 9 January
Olken became the first executive director of the IEEE Power & Energy Society (PES) in 1995. In 2002 he left the position to serve as founding editor in chief of the society’s Power & Energy Magazine. Olken led the publication until 2016, when he retired.
After receiving a bachelor’s degree in engineering from the City College of New York, Olken was hired as an electrical engineer by American Electric Power, a utility based in Columbus, Ohio. He helped design coal, hydroelectric, and nuclear power plants. While at AEP, he was promoted to manager of the electrical generation department.
Olken was elected an IEEE Fellow in 1988 for “contributions to innovative design of reliable generating stations.”
He became an IEEE staff member in 1984 as society services director for IEEE Technical Activities. From 1990 to 1995 he served as managing director of the Regional Activities group (now IEEE Member and Geographic Activities), before becoming PES executive director.
He received a PES Lifetime Achievement Award in 2012 for his “broad and sustained technical contributions to the development of power engineering and the power engineering profession.”
She received a bachelor’s degree in engineering in 1999 from the College of Charleston, in South Carolina. During her senior year, she worked as a mathematics and science tutor at the Jenkins Orphanage (now the Jenkins Institute for Children), in North Charleston. After graduating, Huguenin traveled to India to volunteer at an orphanage run by the Mother Teresa Foundation.
Upon returning to the United States in 2001, Huguenin worked as a freelance research consultant. Three years later she was hired as a systems administrator and archivist by photographer Ebet Roberts in New York City. In 2010 she left to work as an operations strategist and technical consultant.
She earned a master’s degree in communication and research science in 2016 from New York University. While at NYU, she conducted experimental and theoretical research in Internet Protocol design and implementation as well as network security and management.
From 2020 to 2024 she was a research scientist at businesses owned by her family. She joined Augusta University in 2023.
The winners of the 2026 Swift Student Challenge will be announced on March 26, with the best among them set to receive a trip to Apple Park.
Every year, Apple holds the Swift Student Challenge. The event encourages up-and-coming student developers to practice their craft and lets them win various prizes. In an announcement on Monday, the iPhone maker described the annual event as a program meant to “uplift the next generation of entrepreneurs, coders, and designers.” The company added that winners will be notified on Thursday, March 26.
- FBI and CISA warn of Russian espionage campaign targeting messaging apps
- Phishing and social engineering used to hijack Signal and other CMA accounts
- Thousands of victims’ accounts compromised, including officials, military, and journalists
The Federal Bureau of Investigation (FBI) and the US Cybersecurity and Infrastructure Security Agency (CISA) are warning about an ongoing espionage campaign by Russian cyberspies.
In a joint Public Service Announcement (PSA) published late last week, the two agencies said Russian Intelligence Services (RIS)-affiliated threat actors are actively targeting commercial messaging applications (CMA). They specifically mentioned Signal, but stressed that other CMAs are most likely targeted, as well.
The victims are mostly current and former US government officials, military personnel, political figures, and journalists.
Following the Dutch
The campaign does not revolve around “breaking” the apps by abusing vulnerabilities, or similar. Instead, it revolves around phishing and social engineering, where the victims end up sharing access willingly.
“RIS cyber actors send phishing messages masquerading as automated CMA support accounts,” the PSA reads. “The actors tailor the messages to deceive targets into taking an action, such as clicking a link or providing verification codes or account PINs. If the user performs any of the requested actions, they unwittingly provide the actors with unauthorized access to their account either by adding the attacker’s device as a linked device or through a full account takeover.”
Roughly two weeks ago, Dutch authorities published a similar warning, saying that Russian spies were targeting not only Signal, but WhatsApp, as well. The General Intelligence and Security Service (AIVD), the Netherlands’ primary civilian intelligence and security agency, said at the time that the campaign was “large-scale”, and “global”. Targets were dignitaries, military personnel, and civil servants, including Dutch government employees.
AIVD believes the campaign is already a success: “The Russian hackers likely gained access to sensitive information through this campaign,” it said, although it did not detail if they accessed it from Dutch targets or someone else entirely.
On X, FBI Director Kash Patel echoed these warnings, saying the effort “resulted in unauthorized access to thousands of individual accounts.”
“After gaining access, the actors can view messages and contact lists, send messages as the victim, and conduct additional phishing from a trusted identity,” he warned.
For more than four decades, technological progress has been undermining expert authority, democratizing public debate, and steering individuals toward ever-more bespoke conceptions of reality.
In the mid-20th century, the high costs of television production — and physical limitations of the broadcast spectrum — tightly capped the number of networks. ABC, NBC, and CBS collectively owned TV news. On any given evening in the 1960s, roughly 90 percent of viewers were watching one of the Big Three’s newscasts.
Journalistic programs weren’t just limited in number, but also in ideological content. The networks’ news divisions all sought the broadest possible audience, a business model that discouraged airing iconoclastic viewpoints. And they also relied overwhelmingly on official sources — politicians, military officials, and credentialed experts — whose perspectives fell within the narrow bounds of respectable opinion.
- There’s evidence that LLMs converge on a common (and largely accurate) picture of reality.
- LLMs have successfully persuaded users to abandon false and conspiratorial beliefs.
- Unlike social media companies, AI labs have an economic incentive to spread accurate information.
- Still, there are reasons to fear that AI will nonetheless make public discourse worse.
For better and worse, subsequent advances in information technology diffused influence over public opinion — at first gradually and then all at once. During the closing decades of the 20th century, cable eroded barriers to entry in the TV news business, facilitating the rise of Fox News and MSNBC, networks that catered to previously underrepresented political sensibilities.
But the internet brought the real revolution. By slashing the cost of publishing and distribution nearly to zero, digital platforms enabled anyone with an internet connection to reach a mass audience. Traditional arbiters of headline news, scientific fact, and legitimate opinion — editors, producers, and academics — exerted less and less veto power over public discourse. Outlets and influencers proliferated, many defining themselves in opposition to established institutions. All the while, social media algorithms shepherded their users into customized streams of information, each optimized for their personal engagement.
The democratic nature of digital media initially inspired utopian hopes. It promised to expose the blind spots of cultural elites, increase the accountability of elected officials, and put virtually all human knowledge at everyone’s fingertips. And the internet has done all of these things, at least to some extent.
Many assume that the latest breakthrough in information technology — generative AI — will deepen these pathologies: In a world of photorealistic deepfakes, even video evidence may surrender its capacity to forge consensus. Sycophantic large language models (LLMs), meanwhile, could reinforce ideologues’ delusions. And fully automated film production could enable extremists to flood the internet with slick propaganda.
But there’s reason to think that this is too pessimistic. Rather than deepening social media’s effects on public opinion, AI may partially reverse them — by increasing the influence of credentialed experts and fostering greater consensus about factual reality. In other words, for the first time in living memory, the arc of media history may be bending back toward technocracy.
Are you there Grok? It’s me, the demos
At least, this is what the British philosopher Dan Williams and former Vox writer Dylan Matthews have recently argued.
Matthews begins his case by spotlighting a phenomenon familiar to every problem user of X (née “Twitter”): Elon Musk’s chatbot telling the billionaire that he is wrong.
In this instance, Musk had claimed that Renée Good, the Minnesota woman killed by an ICE agent in January, had “tried to run people over” in the moments before her death. Someone replied to Musk’s post by asking Grok — X’s resident AI — whether his claim was consistent with video evidence of the shooting. The bot replied:
For Matthews, this incident illustrates a broader truth about LLMs: Like mid-20th century TV, they are a “converging” form of technology, in the sense that they “homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members.” And he suggests that they are also a “technocratising” force, in that they give experts disproportionate influence over the content of that shared reality.
Of course, this would be a lot to read into a single Grok reply; if you glanced at that bot’s outputs last July — when a misguided update to the LLM’s programming caused it to self-identify as “MechaHitler” — you might have concluded that AI is a “Nazifying” technology.
But there is evidence that Grok and other LLMs tend to provide (relatively) accurate fact checks — and forge consensus among users in the process.
One recent study examined a database of over 1.6 million fact-checking requests presented to Grok or Perplexity (a rival chatbot) on X last year. It found that the two LLMs agreed with each other in a majority of cases and strongly diverged on only a small fraction.
The researchers also compared the bots’ answers against those of professional fact-checkers, and the results were similarly encouraging. When used through its developer interface (rather than on X), Grok achieved essentially the same rate of agreement with the humans as they did with each other.
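Agreement figures like these boil down to comparing two lists of verdicts. As a sketch of how such rates are computed (raw agreement, plus the chance-corrected Cohen's kappa that researchers often report alongside it), here is a minimal example with invented verdicts, not the study's data:

```python
def agreement_rate(a, b):
    """Fraction of items on which two fact-checkers give the same verdict."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = agreement_rate(a, b)
    p_e = sum((a.count(label) / n) * (b.count(label) / n) for label in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# Invented verdicts, not the study's data
grok = ["true", "false", "false", "true", "false", "true"]
humans = ["true", "false", "true", "true", "false", "true"]
print(round(agreement_rate(grok, humans), 2))  # 0.83
print(round(cohens_kappa(grok, humans), 2))    # 0.67
```

Kappa matters because two raters who mostly say "false" will agree often by chance alone; correcting for that base rate is what makes high agreement figures meaningful.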
What’s more, despite being the creation of a far-right ideologue, Grok deemed posts from Republican accounts inaccurate at a higher rate than those of Democratic accounts — a pattern consistent with past research showing that the right tends to share misinformation more frequently than the left.
Critically, in the paper, the LLMs’ answers did not just converge on expert opinion — they also nudged users toward their conclusions.
Other research has documented similar effects. Multiple studies have indicated that speaking with an LLM about climate change or vaccine safety reduces users’ skepticism about the scientific consensus on those topics.
AI might combat misinformation in practice. But does it in theory?
A handful of papers can’t by themselves prove that AI is adept at fact-checking, much less that its overall impact on the information environment will be positive. To their credit, Matthews and Williams concede that their thesis is speculative.
But they offer several theoretical reasons to expect that AI will have broadly “converging” and “technocratising” effects on public discourse. Two are particularly compelling:
1) AI firms have a strong financial incentive to produce accurate information. Social media platforms are suffused with misinformation for many reasons. But one is that facilitating the spread of conspiracy theories or pseudoscience costs X, YouTube, and Facebook nothing. These firms make money by mining human attention, not providing reliable insight. If evangelism for the “flat Earth” theory attracts more interest than a lecture on astrophysics, social media companies will milk higher profits from the former than the latter (no matter how spherical our planet may appear to untrained eyes).
But AI firms face different incentives. Although some labs plan to monetize user attention through advertising, their core business objective is still to maximize their models’ ability to perform economically useful work. Law firms will not pay for an LLM that generates grossly inaccurate summaries of case law, even if its hallucinations are more entertaining than the truth. And one can say much the same about investment banks, management consultancies, or any other pillar of the “knowledge economy.”
For this reason, AI companies need their models to distinguish reliable sources of information from unreliable ones, evaluate arguments on the basis of evidence, and reason logically. In principle, it might be possible for OpenAI and Anthropic to build models that prize accuracy in business contexts — but prioritize users’ titillation or ideological comfort in personal ones. In practice, however, it’s hard to inject a bit of irrationality or political bias into a model’s outputs without sabotaging its commercial utility (as Musk evidently discovered last year).
2) LLMs are infinitely more patient and polite than any human expert has ever been. Well-informed humans have been trying to disabuse the deluded for as long as our species has been capable of speech. But there’s reason to think that LLMs will prove radically more effective at that task.
After all, human experts cannot provide encyclopedic answers to everyone’s idiosyncratic questions about their specialty, instantly and on demand. But AI models can. And the chatbots will also gamely field as many follow-ups as desired — addressing every source of a user’s skepticism, in terms customized for their reading level and sensibilities — without ever growing irritated or condescending.
That last bit is especially significant. When one human tries to persuade another that they are wrong about something — particularly within view of other people — the misinformed person is liable to perceive a threat to their status: To recognize one’s error might seem like conceding one’s intellectual inferiority. And such defensiveness is only magnified when their erudite interlocutor patronizes (or outright insults) them, as even learned scholars are wont to do on social media.
But LLMs do not compete with humans for social prestige or sexual partners (at least, not yet). And chatbot conversations are generally private. Thus, a human can concede an LLM’s point without suffering a sense of status threat or losing face. We don’t experience Claude as our snobby social better, but rather, as our dutiful personal adviser.
The expert consensus has never before had such an advocate. And there’s evidence that LLMs’ infinite patience renders them exceptionally effective at dispelling misconceptions. In a 2024 study, proponents of various conspiracy theories — including 2020 election denial — durably revised their beliefs after extensively debating the topic with a chatbot.
It seems clear then that LLMs possess some “converging” and “technocratizing” properties. And, experts’ fallibility notwithstanding, this constitutes a basis for thinking that AI will foster a healthier intellectual climate than social media has to date.
Still, it isn’t hard to come up with reasons for doubting this theory (and not merely because ChatGPT will provide them on demand). To name just five:
1) LLMs can mold reality to match their users’ desires. If you log into ChatGPT for the first time — and immediately ask whether your mother is trying to poison you by piping psychedelic fumes through your car vents — the LLM generally won’t answer with an emphatic “yes.” But when Stein-Erik Soelberg inundated the chatbot with his paranoid delusions over a period of months, it eventually began affirming his persecution fantasies, allegedly nudging him toward matricide in the process.
Such instances of “AI psychosis” are rare. But they represent the most extreme manifestation of a more common phenomenon — AI models’ tendency toward sycophancy and personalization. Which is to say, these systems frequently grow more aligned with their users’ perspectives over extended conversations, as they learn the kinds of responses that will generate positive feedback. This behavior has surfaced even as AI companies have tried to combat it.
The sycophancy problem could therefore get dramatically worse if one or more LLM providers decide to center their business model on consumer engagement. As social media has shown, sensational and/or ideologically flattering information can be more engaging than the accurate variety. Thus, an AI company struggling to compete in the business-to-business market might choose to make its model maximally sycophantic, pursuing the same engagement-optimization tactics as YouTube or Facebook.
A world of even greater informational divergence — in which people aren’t merely ensconced in echo chambers with like-minded ideologues, but immersed in a mirror of their own prejudices — might ensue.
2) Artificial intelligence has radically reduced the costs of generating propaganda. AI has already flooded social media with unlabeled, “deepfake” videos. Soon, it may enable nefarious actors to orchestrate ever more convincing “bot swarms” — networks of AI agents that impersonate humans on social media platforms, deploying LLMs’ persuasive powers to indoctrinate other users and create the appearance of a false consensus.
In this scenario, LLMs might edify people who actively seek the truth through dialogue or fact-check requests, but thrust those who passively absorb political information from their environment — arguably, the majority — into perpetual confusion.
3) AI could breed the bad kind of consensus. Even if LLMs do promote convergence on a shared conception of reality, that picture could be systematically flawed. In the worst case, an authoritarian government could program the major AI platforms to validate regime-legitimizing narratives. Less catastrophically, LLMs’ converging tendencies could simply make technocrats’ honest mistakes harder to detect or remedy.
4) AI could trigger widespread cognitive atrophy, as humans outsource an ever-larger share of cognitive labor to machines. Over time, this could erode the public’s capacity for reason, leaving it more vulnerable to both fully automated demagogy and top-down manipulation.
5) AI could wreck the sources of authority that make it effective. LLMs might be good at distilling information into a consensus answer, but that answer is only as good as the information feeding the models.
Already, chatbots are draining revenue from (embattled) news organizations, which will produce fewer timely and verified reports about current events as a result. Online forums, a key source of material for AI advice, are increasingly flooded with product plugs designed to trick chatbots into recommending them. Wikipedia’s human moderators fear a future in which they’re stuck sifting through a tsunami of low-quality AI-generated updates and citations.
LLMs may prize accurate information. But if they bankrupt or corrupt the institutions that produce such data, their outputs may grow progressively impoverished.
For these reasons, among others, AI models’ ultimate implications for the information environment are highly uncertain. What Matthews and Williams convincingly establish, however, is that this technology could facilitate a more consensual and fact-based public discourse — if we properly guide its development.
Of course, precisely how to maximize AI’s capacity for edification — while minimizing its potential for distortion — is a difficult question, about which reasonable people can disagree. So, let’s ask Claude.