Images from the missile strike in southern Iran were more horrifying than any of the case studies Air Force combat veteran Wes J. Bryant had pored over in his mission to overhaul how the U.S. military safeguards civilian life.
Parents wept over their children’s bodies. Crushed desks and blood-stained backpacks poked through the rubble. The death toll from the attack on an elementary school in Minab climbed past 165, most of them under age 12, with nearly 100 others wounded, according to Iranian health officials. Photos of small coffins and rows of fresh graves went viral, a devastating emblem of Day 1 in the open-ended U.S.-Israeli war in Iran.
Bryant, a former special operations targeting specialist, said he couldn’t help but think of what-ifs as he monitored fallout from the Feb. 28 attack.
Just over a year ago, he had been a senior adviser in an ambitious new Defense Department program aimed at reducing civilian harm during operations. Finally, Bryant said, the military was getting serious about reforms. He worked out of a newly opened Civilian Protection Center of Excellence, where his supervisor was a veteran strike-team targeter who had served as a United Nations war crimes investigator.
Today, that momentum is gone. Bryant was forced out of government in cuts last spring. The civilian protection mission was dissolved as Defense Secretary Pete Hegseth made “lethality” a top priority. And the world has witnessed a tragedy in Minab that, if U.S. responsibility is confirmed, would be the most civilians killed by the military in a single attack in decades.
Dismantling the fledgling harm-reduction effort, defense analysts say, is among several ways the Trump administration has reorganized national security around two principles: more aggression, less accountability.
Trump and his aides lowered the authorization level for lethal force, broadened target categories, inflated threat assessments and fired inspectors general, according to more than a dozen current and former national security personnel. Nearly all spoke on condition of anonymity for fear of retaliation.
“We’re departing from the rules and norms that we’ve tried to establish as a global community since at least World War II,” Bryant said. “There’s zero accountability.”
Citing open-source intelligence and government officials, several news outlets have concluded that the strike in Minab most likely was carried out by the United States. President Donald Trump, without providing evidence, told reporters March 7 that it was “done by Iran.” Hegseth, standing next to the president aboard Air Force One, said the matter was under investigation.
The next day, the open-source research outfit Bellingcat said it had authenticated a video showing a Tomahawk missile strike next to the school in Minab. Iranian state media later showed fragments of a U.S.-made Tomahawk, as identified by Bellingcat and others, at the site. The United States is the only party to the conflict known to possess Tomahawks. U.N. human rights experts have called for an investigation into whether the attack violated international law.
The Department of Defense and White House did not respond to requests for comment.
Since the post-9/11 invasions of Afghanistan and Iraq, successive U.S. administrations have faced controversies over civilian deaths. Defense officials eager to shed the legacy of the “forever wars” have periodically called for better protections for civilians, but there was no standardized framework until 2022, when Biden-era leaders adopted a strategy rooted in work that had begun under the first Trump presidency.
Formalized in a 2022 action plan and in a Defense Department instruction, the initiatives are known collectively as Civilian Harm Mitigation and Response, a clunky name often shortened to CHMR and pronounced “chimmer.” Around 200 personnel were assigned to the mission, including roughly 30 at the Civilian Protection Center of Excellence, a coordination hub near the Pentagon.
The CHMR strategy calls for more thorough planning before an attack, such as real-time mapping of the civilian presence in an area and in-depth analysis of the risks. After an operation, reports of harm to noncombatants would prompt an assessment or investigation to figure out what went wrong, with the lessons then incorporated into training.
By the time Trump returned to power, harm-mitigation teams were embedded with regional commands and special operations leadership. During Senate confirmation hearings, several Trump nominees for top defense posts voiced support for the mission. Once in office, however, they stood by as the program was gutted, current and former national security officials said.
Around 90% of the CHMR mission is gone, former personnel said, with no more than a single adviser now at most commands. At Central Command, where a 10-person team was cut to one, “a handful” of the eliminated positions were backfilled to help with the Iran campaign. Defense officials can’t formally close the Civilian Protection Center of Excellence without congressional approval, but Bryant and others say it now exists mostly on paper.
“It has no mission or mandate or budget,” Bryant said.
Spike in Strikes
Global conflict monitors have since recorded a dramatic increase in deadly U.S. military operations. Even before the Iran campaign, the number of strikes worldwide since Trump returned to office had surpassed the total from all four years of Joe Biden’s presidency.
Had the Defense Department’s harm-reduction mission continued apace, current and former officials say, the policies almost certainly would’ve reduced the number of noncombatants harmed over the past year.
Beyond the moral considerations, they added, civilian casualties fuel militant recruiting and hinder intelligence-gathering. Retired Gen. Stanley McChrystal, who commanded U.S. and NATO forces in Afghanistan, explains the risk in an equation he calls “insurgent math”: For every innocent killed, at least 10 new enemies are created.
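Written out as arithmetic (an illustrative formalization, not McChrystal’s own notation), the claim is simply:

$$E \geq 10\,C$$

where $C$ is the number of civilians killed and $E$ the number of new enemies created. By that rough math, a strike that kills more than 165 civilians would be expected to produce well over 1,600 new adversaries.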
U.S.-Israeli strikes have already killed more than 1,200 civilians in Iran, including nearly 200 children, according to Human Rights Activists News Agency, a U.S.-based group that verifies casualties through a network in Iran. The group says hundreds more deaths are under review, a difficult process given Iran’s internet blackout and dangerous conditions.
Defense analysts say the civilian toll of the Iran campaign, on top of dozens of recent noncombatant casualties in Yemen and Somalia, reopens dark chapters from the “war on terror” that had prompted reforms in the first place.
“It’s a recipe for disaster,” a senior counterterrorism official who left the government a few months ago said of the Trump administration’s yearlong bombing spree. “It’s ‘Groundhog Day’ — every day we’re just killing people and making more enemies.”
In 2015, two dozen patients and 14 staff members were killed when a heavily armed U.S. gunship fired for over an hour on a Doctors Without Borders hospital in northern Afghanistan, a disaster that has become a cautionary tale for military planners.
“Our patients burned in their beds, our medical staff were decapitated or lost limbs. Others were shot from the air while they fled the burning building,” the international aid group said in a report about the destruction of its trauma center in Kunduz.
A U.S. military investigation found that multiple human and systems errors had resulted in the strike team mistaking the building for a Taliban target. The Obama administration apologized and offered payouts of $6,000 to families of the dead.
Human rights advocates had hoped the Kunduz debacle would force the U.S. military into taking concrete steps to protect civilians during combat operations. Within a couple of years, however, the issue came roaring back with high civilian casualties in U.S.-led efforts to dislodge Islamic State extremists from strongholds in Syria and Iraq.
In a single week in March 2017, U.S. operations resulted in three incidents of mass civilian casualties: A drone attack on a mosque in Syria killed around 50; a strike in another part of Syria killed 40 in a school filled with displaced families; and bombing in the Iraqi city of Mosul led to a building collapse that killed more than 100 people taking shelter inside.
In heavy U.S. fighting to break Islamic State control over the Syrian city of Raqqa, “military leaders too often lacked a complete picture of conditions on the ground; too often waved off reports of civilian casualties; and too rarely learned any lessons from strikes gone wrong,” according to an analysis by the Pentagon-adjacent RAND Corp. think tank.
Released in 2019, the review that then-Defense Secretary Jim Mattis had launched was seen by some advocacy groups as narrow in scope but still a step in the right direction. Yet the issue soon dropped from national discourse, overshadowed by the coronavirus pandemic and landmark racial justice protests.
During the Biden administration’s chaotic withdrawal of U.S. forces from Afghanistan in August 2021, a missile strike in Kabul killed an aid worker and nine of his relatives, including seven children. Then-Defense Secretary Lloyd Austin apologized and said the department would “endeavor to learn from this horrible mistake.”
That incident, along with a New York Times investigative series into deaths from U.S. airstrikes, spurred the adoption of the Civilian Harm Mitigation and Response action plan in 2022. When they established the new Civilian Protection Center of Excellence the next year, defense officials tapped Michael McNerney — the lead author of the blunt RAND report — to be its director.
“The strike against the aid worker and his family in Kabul pushed Austin to say, ‘Do it right now,’” Bryant said.
The first harm-mitigation teams were assigned to leaders in charge of some of the military’s most sensitive counterterrorism and intelligence-gathering operations: Central Command at MacDill Air Force Base in Tampa, Florida; the Joint Special Operations Command at Fort Bragg, North Carolina; and Africa Command in Stuttgart, Germany.
A former CHMR adviser who joined in 2024 after a career in international conflict work said he was reassured to find a serious campaign with a $7 million budget and deep expertise. The adviser spoke on condition of anonymity for fear of retaliation.
Only a few years before, he recalled, he’d had to plead with the Pentagon to pay attention. “It was like a back-of-the-envelope thing — the cost of a Hellfire missile and the cost of hiring people to work on this.”
Bryant became the de facto liaison between the harm-mitigation team and special operations commanders. In December, he described the experience in detail in a private briefing for aides of Sen. Chris Van Hollen, D-Md., who had sought information on civilian casualty protocols involving boat strikes in the Caribbean Sea.
Bryant’s notes from the briefing, reviewed by ProPublica, describe an embrace of the CHMR mission by Adm. Frank Bradley, who at the time was head of the Joint Special Operations Command. In October, Bradley was promoted to lead Special Operations Command.
At the end of 2024 and into early 2025, Bryant worked closely with the commander’s staff. The notes describe Bradley as “incredibly supportive” of the three-person CHMR team embedded in his command.
Bradley, Bryant wrote, directed “comprehensive lookbacks” on civilian casualties in errant strikes and used the findings to mandate changes. He also introduced training on how to integrate harm prevention and international law into operations against high-value targets. “We viewed Bradley as a model,” Bryant said.
Still, the military remained slow to offer compensation to victims and some of the new policies were difficult to independently monitor, according to a report by the Stimson Center, a foreign policy think tank. The CHMR program also faced opposition from critics who say civilian protections are already baked into laws of war and targeting protocols; the argument is that extra oversight “could have a chilling effect” on commanders’ abilities to quickly tailor operations.
To keep reforms on track, Bryant said, CHMR advisers would have to break through a culture of denial among leaders who pride themselves on precision and moral authority.
“The initial gut response of all commands,” Bryant said, “is: ‘No, we didn’t kill civilians.’”
Reforms Unraveled
As the Trump administration returned to the White House pledging deep cuts across the federal government, military and political leaders scrambled to preserve the Civilian Harm Mitigation and Response framework.
At first, CHMR advisers were heartened by Senate confirmation hearings where Trump’s nominees for senior defense posts affirmed support for civilian protections.
Gen. Dan Caine, chairman of the Joint Chiefs of Staff, wrote during his confirmation that commanders “see positive impacts from the program.” Elbridge Colby, undersecretary of defense for policy, wrote that it’s in the national interest to “seek to reduce civilian harm to the degree possible.”
When questioned about cuts to the CHMR mission at a hearing last summer, U.S. Navy Vice Adm. Brad Cooper, head of Central Command, said he was committed to integrating the ideas as “part of our culture.”
Despite the top-level support, current and former officials say, the CHMR mission didn’t stand a chance under Hegseth’s signature lethality doctrine.
The former Fox News personality, who served as an Army National Guard infantry officer in Iraq and Afghanistan, disdains rules of engagement and other guardrails as constraining to the “warrior ethos.” He has defended U.S. troops accused of war crimes, including a Navy SEAL charged with stabbing an imprisoned teenage militant to death and then posing for a photo with the corpse.
A month after taking charge, Hegseth fired the military’s top judge advocates general, known as JAGs, who provide guidance to keep operations in line with U.S. or international law. Hegseth has described the attorneys as “roadblocks” and used the term “jagoff.”
At the Civilian Protection Center of Excellence, the staff tried in vain to save the program. At one point, Bryant said, he even floated the idea of renaming it the “Center for Precision Warfare” to put the mission in terms Hegseth wouldn’t consider “woke.”
By late February 2025, the CHMR mission was imploding, say current and former defense personnel.
Shortly before his job was eliminated, Bryant openly spoke out against the cuts in The Washington Post and Boston Globe, which he said landed him in deep trouble at the Pentagon. He was placed on leave in March, his security clearance at risk of revocation.
Bryant formally resigned in September and has since become a vocal critic of the administration’s defense policies. In columns and on TV, he warns that Hegseth’s cavalier attitude toward the rule of law and civilian protections is corroding military professionalism.
Bryant said it was hard to watch Bradley, the special operations commander and enthusiastic adopter of CHMR, defending a controversial “double-tap” on an alleged drug boat in which survivors of a first strike were killed in a follow-up hit. Legal experts have said such strikes could violate laws of warfare. Bradley did not respond to a request for comment.
“Everything else starts slipping when you have this culture of higher tolerance for civilian casualties,” Bryant said.
Concerns were renewed in early 2025 with the Trump administration’s revived counterterrorism campaign against Islamist militants regrouping in parts of Africa and the Middle East.
Last April, a U.S. air strike hit a migrant detention center in northwestern Yemen, killing at least 61 African migrants and injuring dozens of others in what Amnesty International says “qualifies as an indiscriminate attack and should be investigated as a war crime.”
Operations in Somalia also have become more lethal. In 2024, Biden’s last year in office, conflict monitors recorded 21 strikes in Somalia, with a combined death toll of 189. In year one of Trump’s second term, the U.S. carried out at least 125 strikes, with reported fatalities as high as 359, according to the New America think tank, which monitors counterterrorism operations.
“It is a strategy focused primarily on killing people,” said Alexander Palmer, a terrorism researcher at the Washington-based Center for Strategic and International Studies.
Last September, the U.S. military announced an attack in northeastern Somalia targeting a weapons dealer for the Islamist militia Al-Shabaab, a U.S.-designated terrorist group. On the ground, however, villagers said the missile strike incinerated Omar Abdullahi, a respected elder nicknamed “Omar Peacemaker” for his role as a clan mediator.
After the death, the U.S. military released no details, citing operational security.
“The U.S. killed an innocent man without proof or remorse,” Abdullahi’s brother, Ali, told Somali news outlets. “He preached peace, not war. Now his blood stains our soil.”
In Iran, former personnel say, the CHMR mission could have made a difference.
Under the scrapped harm-prevention framework, they said, plans for civilian protection would’ve begun months ago, when orders to draw up a potential Iran campaign likely came down from the White House and Pentagon.
CHMR personnel across commands would immediately begin a detailed mapping of what planners call “the civilian environment,” in this case a picture of the infrastructure and movements of ordinary Iranians. They would also check and update the “no-strike list,” which names civilian targets such as schools and hospitals that are strictly off-limits.
One key question is whether the school was on the no-strike list. It sits a few yards from a naval base for the Iranian Revolutionary Guard. The building was formerly part of the base, though it has been marked on maps as a school since at least 2013, according to visual forensics investigations.
“Whoever ‘hits the button’ on a Tomahawk — they’re part of a system,” the former adviser said. “What you want is for that person to feel really confident that when they hit that button, they’re not going to hit schoolchildren.”
If the guardrails failed and the Defense Department faced a disaster like the school strike, Bryant said, CHMR advisers would’ve jumped in to help with transparent public statements and an immediate inquiry.
Instead, he called the Trump administration’s response to the attack “shameful.”
“It’s back to where we were years ago,” Bryant said. If confirmed, “this will go down as one of the most egregious failures in targeting and civilian harm-mitigation in modern U.S. history.”
There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called bothsidesism. This false-balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.
Why bring this up? Because the new AI Doc film is based on it.
Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.
This review addresses both failures.
The “AI Doc” Movie
“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:
“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”
The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up.)
The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.
The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:
“Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
“By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”
That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.
After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.
The False Balance of The AI Doc
The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.
“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”
And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.
One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”
That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.
In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”
“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”
That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:
“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”
One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).
The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.
There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:
“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”
That sort of hyperbole seems relevant to Harris’s stance on such things, but it was not mentioned in the film at all.
Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”
This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.
Debunking the Falsehoods
The film goes way beyond just pushing a panic. It also recycles several misleading or plainly false claims, letting them pass as established facts. Three stood out in particular.
Anthropic’s blackmail study
One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan’s Center for Humane Technology, replies: “No, it learned to do that on its own.”
That is a misleading characterization of the actual experiment, and it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers explicitly engineered the setup so that blackmail became the default. Telling viewers that the model has gone full “HAL 9000” omits the facts about that heavily engineered experimental setup.
Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.
It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.
Is AI less regulated than sandwich shops? No.
Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It has been repeated by the Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.
State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.
So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.
Data center water usage
In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.
In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”
There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.
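As a quick check on that proportion, using only the figures quoted above:

$$\frac{0.905\ \text{billion gallons}}{29\ \text{billion gallons}} \approx 3.1\%$$

so the county’s data centers used roughly one-thirtieth of the water the country’s golf courses did, the “small fraction” the Post describes.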
It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption.
None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.
So, I think we can conclude that the AI Doc may want to appear balanced and thoughtful, but, unfortunately, too often it is not.
Final Remark
While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.
—————————
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.
If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even its efficiency is starting to be insufficient. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.
A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — they bring it down to a more normal mains voltage with a transformer just like the one on your street, and then they feed a battery-backed uninterruptible power supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low voltage DC the computer uses.
The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now waste a significant amount of power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to the cabinet, where the computer uses a more efficient switch-mode converter to reach the voltages it needs.
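To see why cutting conversion stages matters, a back-of-the-envelope comparison helps. The per-stage efficiencies in the sketch below are assumed round numbers for illustration, not measurements from any real facility:

```python
# Illustrative comparison of chained power-conversion efficiencies.
# All per-stage figures are assumptions, not data from a real data center.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    total = 1.0
    for _name, eff in stages:
        total *= eff
    return total

# Traditional chain: transformer -> UPS (AC->DC->AC) -> server rectifier/SMPS
traditional = [
    ("distribution transformer", 0.98),
    ("UPS rectifier (AC->DC)", 0.95),
    ("UPS inverter (DC->AC)", 0.95),
    ("server rectifier + SMPS", 0.94),
]

# Proposed chain: one central AC->DC conversion, then a rack-level DC/DC stage
dc_distribution = [
    ("central rectifier (AC -> 800 V DC)", 0.97),
    ("rack DC/DC converter", 0.97),
]

for label, chain in (("AC distribution", traditional),
                     ("800 V DC distribution", dc_distribution)):
    print(f"{label}: {chain_efficiency(chain):.1%} of input power reaches the load")
```

Even with generous assumptions for every stage, the two eliminated conversions show up directly in the end-to-end figure, which is the whole argument for DC distribution.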
It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law driven move towards locally-run LLMs may go some way towards solving that problem on their own.
Last June, Mia Ballard’s self-published novel Shy Girl took the internet by storm. After winning the hearts of readers and publisher Hachette alike, it was set for a major US debut in the coming months.
Now, the novel may never become available through any official channel again. Hachette has officially pulled the plug on the novel’s US release following a wave of allegations that generative AI played a role in the manuscript’s creation.
Originally self-published in February 2025, the horror novel was traditionally released by Hachette’s science fiction and fantasy label Orbit in the UK in November. After The New York Times provided evidence of AI usage in Shy Girl, Hachette canceled the planned spring US release and removed the book from its website completely.
“Hachette remains committed to protecting original creative expression and storytelling,” the publisher said in a statement to the Times.
Authors are required to disclose to Hachette whether AI was used in the creation of their work. Ballard has denied using AI tools to write the book, claiming an editor was responsible for the portions that appear to be AI-generated.
“My name is ruined for something I didn’t even personally do,” Ballard wrote in an email to the New York Times.
The cancellation of Shy Girl by Hachette marks the first time a major publisher has publicly pulled an existing title due to suspicions of AI-generated prose.
For the past few months, readers online have raised concerns about the book’s apparent use of AI.
A video from YouTuber frankie’s shelf provides a lengthy analysis of the novel, pointing out linguistic patterns that are characteristic of AI writing. The video also lists words in Shy Girl that are repeated with unusual frequency (“edge” is used 84 times and “sharp” 159 times), often in ways that are abstract and nonsensical.
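Counts like those are easy to reproduce with a few lines of code. Here is a minimal sketch of such a word-frequency tally (the filename is a placeholder, and this is not the reviewer’s actual tooling):

```python
import re
from collections import Counter

# "manuscript.txt" is a placeholder path, not the actual book file.
with open("manuscript.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
for word in ("edge", "sharp"):
    print(word, counts[word])

# Words that recur far more often than in typical prose can then be
# flagged for closer, human reading in context.
```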
In January, Max Spero, founder and chief executive of Pangram, ran the text of Shy Girl through his AI detection program. He claimed that the novel was 78% AI-generated.
The rise of AI has caught the publishing industry off guard. Though AI writing has already appeared in many self-published books, traditional publishers like Hachette are more critical of the technology.
Representatives for Hachette didn’t immediately respond to a request for comment.
A research team determined that the torpedo bat, left, and traditional bat perform equally well in hitting power with only a slight difference in the location of the bat’s sweet spot (WSU Photo / Voiland College of Engineering and Architecture)
The New York Yankees just cruised through Seattle and won two out of three games against the Mariners. On the other side of Washington state, the Bronx Bombers’ “torpedo bats” were being scientifically scrutinized.
In what Washington State University is calling the first-ever laboratory experiments on the new baseball bat design, researchers found that torpedo bats and traditional bats basically perform the same.
It didn’t look that way last season, when the Yankees hit a franchise-record nine home runs in a game against the Milwaukee Brewers and drew viral attention to the bats that they were swinging.
The torpedo bat design relies on a slightly different shape in which wood is removed from the barrel tip and added to the bat’s sweet spot, so that the diameter tapers down, a little like a bowling pin. But the hype appears overblown.
“Wood is wood,” Lloyd Smith, a professor in WSU’s School of Mechanical and Materials Engineering and director of the university’s Sports Science Laboratory, told WSU Insider. “When it comes to baseball, there’s not a lot you can do with wood. If your goal is to keep the game steady and consistent and not have a lot of change, wood bats are good.”
Smith is part of a research team that includes Alan Nathan from the University of Illinois and Daniel Russell from Penn State University. They’ll present their findings at the upcoming International Sports Engineering Association conference, June 1–4 in Pullman, Wash.
According to WSU Insider, the researchers created two maple bats that were duplicates of a standard Major League Baseball bat. Two additional maple bats were made with a torpedo-shaped barrel that gave them the same swing weight as the standard bat.
They measured how much energy the bat returns to the ball by firing baseballs from an air cannon at a stationary bat and using light gates and cameras to measure the speed of the incoming and rebounding ball.
The team found nearly identical performance for the torpedo and standard bats except that the sweet spot for the torpedo bat was a half inch farther from the bat tip than the standard bat.
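A common way to express what the cannon test measures is a collision efficiency, the ratio of rebound speed to incoming speed at each impact point. The sketch below only illustrates the metric; the numbers are invented, not the WSU lab’s data:

```python
def collision_efficiency(v_in_mph: float, v_out_mph: float) -> float:
    """Fraction of an incoming ball's speed returned as rebound speed
    when the ball is fired at a stationary bat."""
    return v_out_mph / v_in_mph

# Hypothetical light-gate readings for one impact location (mph).
standard = collision_efficiency(v_in_mph=135.0, v_out_mph=27.0)
torpedo = collision_efficiency(v_in_mph=135.0, v_out_mph=27.1)

# Nearly identical ratios mean nearly identical performance; the bats
# differ mainly in where along the barrel the peak ratio occurs.
print(f"standard: {standard:.3f}, torpedo: {torpedo:.3f}")
```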
“It was actually pretty phenomenal how close they were,” said Smith.
While some Yankees players said last year that any little tweak could provide an advantage, the team’s captain wasn’t convinced.
Aaron Judge hit an American League-record 62 homers in 2022, 58 in an MVP season in 2024 and 53 as repeat MVP in 2025. He had three homers using a traditional bat in that much-talked-about rout of the Brewers.
“The past couple of seasons kind of speak for itself,” Judge told ESPN last May. “Why try to change something?”
Slapping the Xteink X3 onto an iPhone takes only a few seconds, thanks in part to its built-in magnets, which align exactly with MagSafe and let it snap into place. You get a thin black or white slab that sits flush against the phone’s back without adding noticeable bulk. Anyone who reaches for their phone dozens of times a day will appreciate having a book right at their fingertips, in the same motion.
At only 58 grams, this device is easy to forget about until you need it, and then, as if by magic, it appears. Its overall size is a modest 100mm long and 60mm wide, so it goes unnoticed in a pocket until reading time beckons. Commuters and individuals waiting in lines can just pull out their phones and start reading a chapter without having to dig through their bags for another device.
The 3.7-inch E Ink screen displays clear text at over 250 pixels per inch. You can easily change the font size with a few simple adjustments, so even the smallest pages provide a comfortable reading experience. With adequate lighting, the characters simply pop, and unlike phone screens there is no eye strain to contend with. You also have real buttons on the sides and bottom for turning pages and accessing menus. One-handed operation feels perfectly normal, whether you’re on a train or lying in bed. The built-in gyroscope detects even a slight flick and turns the page, so you can keep a solid grip while reading at speed.
Navigation is straightforward, with a grid of icons instead of swipes or touches. Choose a book or change the settings with a few presses; it remains dependable even when your fingers are clumsy. This approach minimizes distractions and lets you concentrate on the words themselves. You can load books onto the device using either the 16GB microSD card included in the box or a companion app on your phone. Transferring EPUB files is quick and easy over Wi-Fi or by inserting the card into your computer, and storage can be expanded up to 512 gigabytes, so you can carry thousands of titles without running out of space.
The battery will last you 10 to 14 days on a single charge, even if you read for an hour or two every day, and charging is simple: the special cable’s magnetic pogo pins clip right into place. Okay, there is one little flaw: there is no built-in front light (yet), but you can get a separate clip-on light for only $9.99 if you plan on reading late into the evening. If you need more connectivity, there are Bluetooth and NFC connections available, as well as Wi-Fi for the occasional update or transfer. It’s available now on the official Xteink website for $79. [Source]
Maybe the third time is the charm. Writer/producer David E. Kelley is adapting Tom Wolfe’s “The Bonfire of the Vanities” novel into a series for Apple TV, with “The Batman” director Matt Reeves.
David E. Kelley is still best known for “The Practice” and “Ally McBeal,” but he’s also the writer of Apple TV’s “Presumed Innocent” and “Margo’s Got Money Troubles.” Now, according to Deadline, he’s dramatizing Tom Wolfe’s famous 1987 novel of greed and Wall Street money. Not to spoil the story, but as excellent as it is, Wolfe’s novel feels as if it fades out rather than building to a big finish, which has made it difficult to adapt successfully. It was filmed in 1990, with Tom Hanks starring and Brian De Palma directing from a screenplay by Michael Cristofer, but that movie was a flop.
Artificial intelligence is no longer a back-office enabler or a set of isolated automation tools. It is becoming a core component of how organizations operate, compete, and deliver value.
As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human.
Digital agents capable of making decisions, initiating actions, and influencing outcomes are now woven into the operational fabric of the company.
Peter Connolly
This shift represents far more than a technological upgrade. It is a structural transformation that puts business leaders in uncharted territory.
The World Economic Forum’s Four Futures framework warns of rising technological fragmentation, declining trust, and widening governance gaps.
In this context, the question for leaders is no longer whether to deploy autonomous AI, but how to govern a hybrid workforce of humans and digital agents without introducing systemic risk.
For many organizations, this is becoming one of the defining leadership challenges of the decade.
The Rise of the Non-Human Workforce
Agentic AI systems differ from traditional automation in one critical way: they do not merely execute predefined tasks but interpret data, make decisions, and adapt their behavior to context. In many organizations, these systems are already performing functions once reserved for skilled employees: triaging customer requests, optimizing supply chains, generating code, or even making financial recommendations.
The productivity gains are undeniable, but so is the complexity. When digital agents act with autonomy, they also introduce new forms of organizational risk. Decisions may be opaque, accountability may be unclear, and the potential for unintended consequences increases dramatically.
Leaders must now grapple with a workforce that does not think, behave, or act like humans, and that cannot be governed through traditional management structures. This is where structured identity, access, and behavioral governance become essential.
The Governance Gap: A Growing Leadership Risk
The most significant challenge is not the technology itself, but the governance vacuum surrounding it. Many organizations deploy autonomous systems faster than they establish the controls and guardrails required to manage them. This creates a widening gap between capability and oversight.
Several risks are already becoming visible:
1. Accountability gaps: When an AI agent makes a decision that leads to financial loss, regulatory exposure, or reputational harm, who is responsible? Without clear lines of accountability, organizations face legal and ethical uncertainty.
2. Insider-threat-like behavior: Autonomous systems often operate with high levels of privilege and can access sensitive data, trigger workflows, or interact with customers. If misconfigured or compromised, they can behave like highly privileged insider threats, an issue we frequently encounter when assessing digital identity posture.
3. Fragmentation and drift: As organizations deploy multiple AI agents across different functions, the risk of inconsistent behavior, configuration drift, and misaligned objectives increases. Without centralized governance, autonomous systems can evolve in ways that diverge from organizational intent.
4. Erosion of trust: Employees, customers, and regulators are increasingly concerned about how AI systems make decisions. A lack of transparency and explainability can undermine confidence and impede adoption.
AI adoption alone is no longer sufficient. Governance has become the true leadership mandate.
A Governance-First Mindset: The New Leadership Imperative
To navigate this new landscape, business leaders must adopt a governance-first mindset that aligns with the World Economic Forum’s call for Digital Trust and systemic resilience. This requires treating agentic AI not as a standalone technology, but as a governed member of the workforce.
Several principles should guide this shift:
Establish Clear Accountability Structures
Every AI agent must have an identified human owner responsible for its actions, performance, and outcomes. This includes defining escalation paths, decision boundaries, and audit requirements. Without explicit accountability, organizations risk regulatory exposure and operational ambiguity.
Apply Identity and Access Controls to Digital Agents
Just as employees have identities, permissions, and access levels, so too must AI agents. Leaders should ensure that digital agents are integrated into identity management frameworks with least-privilege access, continuous monitoring, and lifecycle management. This reduces the risk of insider-threat-like behavior and prevents privilege creep, key principles central to our approach to digital workforce governance.
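To make that concrete, here is a minimal sketch of registering a digital agent in an identity framework. The field names, scopes, and expiry policy are invented for illustration, not drawn from any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """An AI agent registered like any other workforce identity (illustrative)."""
    agent_id: str
    human_owner: str                          # accountable person for this agent
    scopes: set = field(default_factory=set)  # least-privilege permissions
    expires: datetime = field(                # lifecycle management: credentials
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=90)
    )                                         # age out unless renewed

    def can(self, scope: str) -> bool:
        """Deny by default; allow only explicitly granted, unexpired scopes."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires

triage_bot = AgentIdentity("triage-bot-01", human_owner="j.doe",
                           scopes={"tickets:read", "tickets:assign"})
assert triage_bot.can("tickets:assign")
assert not triage_bot.can("payments:initiate")  # privilege creep blocked by default-deny
```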
Implement Behavioral Guardrails
Autonomous systems require constraints that define acceptable behavior. These guardrails may include ethical guidelines, operational limits, safety checks, and real-time monitoring. Guardrails ensure that AI agents act within organizational intent and do not drift into unsafe or unintended territory.
Build Oversight and Auditability into the System
Transparency is essential for trust. AI agents must be auditable, explainable, and observable. This includes maintaining logs of decisions, enabling post-incident analysis, and ensuring that humans can intervene when necessary. Oversight is foundational to responsible autonomy.
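On the logging side, the starting point can be as simple as an append-only decision trail that reviewers can query after an incident. A minimal sketch, with invented field names rather than a prescribed schema:

```python
import json
import time

def log_decision(agent_id: str, action: str, inputs: dict, rationale: str,
                 path: str = "agent_audit.jsonl") -> None:
    """Append one agent decision to an audit trail for post-incident review."""
    record = {
        "ts": time.time(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,        # what the agent saw
        "rationale": rationale,  # why it acted, for explainability
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("triage-bot-01", "escalate_ticket",
             inputs={"ticket": 4821, "severity": "high"},
             rationale="severity exceeded auto-resolve threshold")
```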
Foster a Culture of Digital Trust
Governance is more than a technical challenge; it is a cultural one. Leaders must champion a culture that values transparency, accountability, and responsible innovation. This includes educating employees about how AI agents operate, how decisions are made, and how risks are managed. Organizations that succeed here tend to be those that treat governance as a strategic capability, not a compliance burden.
From Liability to Advantage: Building the Hybrid Workforce of the Future
When governed effectively, agentic AI can become a powerful force multiplier. It can enhance productivity, accelerate innovation, and enable organizations to operate with greater agility and precision. But without governance, the same systems can introduce systemic vulnerabilities that undermine resilience.
The role of business leaders is to ensure that autonomy does not outpace oversight. By reframing agentic AI as part of the workforce, subject to the same expectations, controls, and accountability as human employees, leaders can transform a potential liability into a strategic advantage.
The future of work will be hybrid. The organizations that continue to evolve in 2026 will be those that recognize that governing AI is not a technical task delegated to IT, but a core leadership responsibility.
Leaders who embrace this governance-first approach will not only mitigate risk but also build resilient, high-performing organizations that define the future of the workplace and how businesses function.
As with any new technology, there’s a scale of AI adoption among businesses, leaving some ahead of the curve and others much further behind as they continue to resist and delay.
But what’s clear is that adoption is happening with or without a formal strategy: nearly two-thirds (65%) of employees now say they intentionally use AI for work.
This shift is impacting expectations on many levels. It changes what organizations expect from their people, and it changes what people expect from their organizations.
Nick Pearson
Polished-sounding, in-depth output can now be generated in minutes, meaning everyone has the ability at their fingertips to produce more in less time.
As managers and organizations increasingly realize that this doesn’t always lead to good work, the real differentiator is becoming less about speed and more about who can work well alongside AI.
That means having the ability to analyze and assess its output and use it to make better human decisions – not replace them.
This marks a turning point for CIOs especially. The role, which used to center simply on identifying and providing access to new tools to improve efficiency, is now increasingly responsible for shaping an environment in which AI tools truly raise the bar.
AI is resetting the performance baseline
AI has, for some time now, been accelerating routine and repeatable work across every function, from drafting documents and analyzing data to summarizing meetings and generating code. At first, many employees approached these tools with caution. AI made them faster, but they still treated its output as something to sense-check and refine.
Now, as AI becomes more normalized and trusted, that caution can slip. In some cases, speed is no longer paired with scrutiny and teams rely on confident-sounding outputs that may be incomplete, biased or wrong if they haven’t been properly reviewed. So, while managers are getting used to quicker turnaround and coming to expect it, they may also be receiving work that looks finished but hasn’t been validated.
If work is easier to produce across the board, then volume alone becomes a much less reliable indicator of value. What matters more is the ability to work with AI’s output, interpreting and analyzing it in context and feeding it into final outputs and decisions rather than relying on it to do that for you.
Because of this, every role becomes more technical by default. This new expectation means employees need to be able not just to use AI tools but to use them well and understand their outputs. That includes framing prompts effectively, challenging assumptions, identifying bias and translating outputs within the right commercial and organizational context.
Without leaders prioritizing AI and how to use it correctly, this shift can create divergence. Some teams build confidence quickly, while others feel nervous and hesitate or over-rely on automation, which can result in uneven standards and unnecessary risk. The responsibility for avoiding that fragmentation sits with the CIO.
The answer isn’t simply introducing more technology; in fact, in many ways that may complicate things further. What employees need is better ways of working with the tools already embedded across the organization.
This starts with being clear about where AI is genuinely helping the business. Rather than experimenting everywhere at once, organizations need to identify the areas where AI can improve outcomes, whether that’s speeding up analysis, reducing manual work or improving decision-making.
Leadership teams play an important role here by setting priorities and making sure AI initiatives stay focused on solving real business challenges rather than chasing the latest trend.
But introducing tools alone isn’t enough. Employees need practical training on how to use AI well and how to check and interpret its outputs. Without that support, AI risks becoming either underused or over-relied on.
In many cases, the most effective approach is building confidence and competence over time through hands-on learning in the flow of work. When employees can experiment, give feedback on what’s working and refine how they use AI in real situations, organizations create a much stronger foundation for long-term progress.
Governance that enables trust and better decisions
If capability enables AI use, governance ensures it is used responsibly and consistently. Without clear guardrails, AI adoption can quickly become fragmented, with employees using different tools, handling data inconsistently or relying on outputs that haven’t been properly checked.
In practice, governance means giving employees clear guidance on how AI should be used across the organization. That could include clearly outlining which AI tools or large language models are approved for work, when enterprise or paid versions must be used and what kinds of data can or cannot be entered into these systems.
It also means making sure teams understand how to handle sensitive information and comply with local regulations. When these boundaries are clear, employees can innovate confidently and leadership can better trust their employees, tools and the outputs that the two together are able to produce. Without governance, the risk is unchecked, low-value outputs that affect results and increase exposure.
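Guidance like this can even be encoded as a checkable policy rather than a document nobody reads. A toy sketch, with invented tool names and data classifications:

```python
# Invented example policy: approved tools and the data classes each may receive.
POLICY = {
    "enterprise-llm": {"public", "internal"},  # paid tier with contractual data terms
    "public-chatbot": {"public"},              # free tier: public data only
}

def may_submit(tool: str, data_class: str) -> bool:
    """True if the organization's AI usage policy allows this data in this tool."""
    return data_class in POLICY.get(tool, set())

assert may_submit("enterprise-llm", "internal")
assert not may_submit("public-chatbot", "customer-pii")  # sensitive data stays out
assert not may_submit("shadow-ai-tool", "public")        # unapproved tools: default deny
```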
The CIO has the power to connect and align technology, ethics and responsibility: embedding review mechanisms, defining who owns what and making sure human judgement sits firmly at the center of it all.
Conclusion
AI is raising the bar across the workplace. The organizations that approach it in the right way build in clear direction on where it should be applied, practical support that helps people use it well and a governance model that protects the integrity of decisions.
For CIOs, the aim is to create an environment where experimentation is encouraged while standards stay high and accountability is clear. When capability and trust are built in tandem, AI becomes a lever for stronger outcomes over time, not just quicker output in the short term.
Technology may be redefining how work is produced, but it is leadership that determines whether those higher standards translate into long-term advantage.
OpenAI said the purchase will be part of its strategy to further the conversation on the changes brought about by artificial intelligence.
OpenAI, in what is being described as an unusual move, is set to purchase the Technology Business Programming Network (TBPN), a daily, live tech talk show hosted by Jordi Hays and John Coogan that often features high-profile tech leaders and entrepreneurs.
OpenAI’s chief executive officer of applications, Fidji Simo, said: “As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company.
“We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates, with builders and people using the technology at the centre.”
While the full details of the deal have yet to be disclosed, OpenAI said the TBPN team will maintain editorial independence and make decisions on their guests and programming. According to the Wall Street Journal, TBPN stated that it generated $5m in advertising revenue last year and is on track to exceed $30m in revenue in 2026.
However, an OpenAI spokesperson told Bloomberg that the platform is not aiming to make TBPN a money-making enterprise.
In a statement, Hays expressed excitement at the venture, while making note of the importance of a strong partnership where both parties work as a team to communicate change and innovation in the AI and tech spaces.
He said: “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right. Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
Earlier this week OpenAI closed a larger than expected funding round in which it raised $122bn, exceeding the projected figure of $110bn. Part of that funding is expected to be put towards the scale and growth of the platform’s AI technologies and research, in line with current global demands.
The best thing about retail warehouse stores is obviously the selection. After all, where else can you buy a new T-shirt, a birthday cake, and a set of tires on the same day? The ability to fill up with gas before leaving the parking lot is a plus as well, and it’s part of what makes stores like Costco so convenient. Now the company is moving forward with standalone gas stations, and its first in California is members-only.
Members will need to insert or scan their membership card to refuel, just as they would at Costco’s attached gas stations. However, non-members may be able to access the pumps using a Costco Shop Card, as they currently can at on-site locations. Costco’s new gas station is located in Mission Viejo, California, in a 17,000-square-foot facility operated by company employees. It has 40 pumps covered by a large canopy, and it will run from 5 a.m. to 10 p.m. daily.
The station is expected to open by the end of June 2026. But if you don’t live in California, you may not have to wait long. Costco is planning to build more standalone gas stations, beginning in Honolulu, Hawaii. As of this writing, the company hasn’t publicly explained the thinking behind the new program, but the belief is that standalone stations can help reduce the heavy traffic that currently plagues many on-site locations.
Costco’s gas boom and competitive pricing strategy
Costco’s first standalone gas station, which is also expected to undercut most competitors on price, was initially announced in the summer of 2025. The facility is located off Interstate 5 in Mission Viejo, California, at the site where a Bed Bath & Beyond once stood. At the time of the announcement, the company’s gas stations were experiencing a boom in business, thanks mostly to extended operating hours. The decision to move forward with a new test store may have been influenced by this positive reaction.
Costco members get access to gas prices that can often beat other competitors by anywhere from 10 to 25 cents per gallon. This is possible because of the company’s warehouse approach, which includes buying fuel in large quantities. Costco also works directly with suppliers to get the best cost and then passes that savings on to its members.
Costco’s first gas station opened in 1995, and its fuel business has grown ever since. The company currently has over 700 stations around the world, serving millions of paid members every day. Those members can use the Costco app to check fuel prices in real time, as well as store hours and nearby locations.