Tech
People are using ‘admin nights’ to turn productivity into a party
Spending a Friday evening doing your taxes probably isn’t the most appealing way to kick off your weekend…but what if you added drinks, delicious takeout, and a couple of buddies who were also tending to all the annoying little tasks they’ve been avoiding?
That’s the idea behind “admin nights,” a new trend that is proliferating on TikTok. The conceit is simple: Friends get together, pull out their laptops, and start hacking away at their to-do lists. Think of a girls’ night out, but…in, and centered on tedious tasks instead of cocktails and clubbing.
“It’s the perfect blend of both,” Brie Ever, a Birmingham, Alabama-based content creator who hosts weekly “admin nights,” told Vox. “There are moments when I know I need to lock in, and I’ll just put in my headphones. But for the most part, everyone’s talking, working, and having a glass of wine all at the same time.”
While it might seem strange that people are opting for errands or chores over happy hour, task-themed meetups have become a popular form of hanging out. Other examples you’ll see online include “freezer meal parties,” where friends prepare ready-to-microwave dinners, and “vision board nights,” where groups make collages of their life goals.
These gatherings represent the experimental and less obvious ways people are prioritizing friendship while tackling the struggles of modern living. Everything has the potential to be a party now.
Hanging out has become more complicated
Spending time with friends can naturally become more difficult as you get older. Work, romantic relationships, kids, and other caregiving responsibilities can completely drain your social battery and cut into the time that was once reserved for your pals. But even younger adults who theoretically have less on their plates aren’t free of the exhaustion that accompanies modern living.
Anna Goldfarb, author of Modern Friendship: How to Nurture Our Most Valued Connections, told Vox that a lot of friend groups have become decentralized, as people relocate and change jobs more frequently. “Our grandparents might’ve stayed in the same town for most of their lives,” Goldfarb said. “They might have stayed at the same job. They didn’t have to work so hard to keep these connections afloat.”
Life has also become more expensive for a lot of people due to inflation and tariffs. Going to the movies, restaurants, or out for drinks regularly can feel like a luxury for many consumers, and just might not feel worth it. (YouGov’s 2025 Dining Out Report found that 37 percent of US diners say they’re dining out less frequently than they were a year ago, with 69 percent citing “a perceived rise in expensiveness.” And a 2025 CivicScience poll found that 27 percent of respondents are ditching the multiplex and staying home due to movie ticket prices.)
With all these hurdles in mind, it’s not surprising that social gatherings are beginning to look a lot different.
Gathering is all about intention now
In the past few years, social activities have started to look a lot more productive and intentional. Running clubs, for example, became a more visible trend during the first two years of the pandemic, and book club events have been increasing, according to data from Eventbrite. There’s also the phenomenon of “soft clubbing,” first reported last summer, which sees typical nightlife activities replaced with sober, wellness-focused gatherings. (Think: cold-plunge parties and saunas featuring DJ sessions.) Admin nights are a natural evolution of this optimization of social activities, or at least just a collective desire to avoid hangovers.
Vision board nights and meal prep parties are a welcome hangout for organized, goal-oriented pals. In other instances, friends are getting together to clean each other’s homes, bake, and even provide life updates. Many of these gatherings lean into a psychological concept called “body doubling,” which is often used by people with ADHD. (Ever, the content creator, used the term when discussing the appeal of admin nights.) It simply means having other people present while you complete tasks to help you stay focused.
Irene S. Levine, a psychologist and author of the book Best Friends Forever: Surviving A Breakup With Your Best Friend, sees a lot of value in tackling errands with your pals, although it doesn’t have to be as structured as a planned party. “That could extend to going to the gym together or doing your food shopping together,” she told Vox. “When you’re stretched for time, doing things simultaneously with your friends kills two birds with one stone. You’re taking care of business, so there’s less guilt associated with it.”
But, Levine clarified, there’s nothing self-indulgent about spending quality time with your friends. “It’s actually so important to our health and emotional well-being,” she said.
There have been plenty of reports and casual handwringing over the idea that people are partying less nowadays, and that Gen Z isn’t having as much fun as earlier generations were at the same age. At first glance, these new modes of hanging out may not look like the stereotypical young person’s idea of a good time. There are presumably no hard drugs, no sex, and no stumbling home at 4 am involved in admin nights. But it makes sense that gatherings would look a bit different when the world looks dramatically different. As life becomes more difficult to manage and relationships get harder to maintain, the hottest club in town might be your friend’s couch, laptop open, finally setting up automated bill pay.
Tech
The AI Doc’s Falsehoods And False Balance
from the hype-without-substance dept

There is a familiar media failure in which opposing viewpoints are presented as equally valid, even when the evidence overwhelmingly supports one side. It’s called bothsidesism. This false-balance phenomenon legitimizes misinformation and undermines public understanding by giving disproportionate weight to baseless claims.
Why bring this up? Because the new AI Doc film is based on it.
The film wants credit for being “balanced” because it assembles a wide range of experts. But putting Prof. Fei-Fei Li, a pioneering computer scientist, next to someone like Eliezer Yudkowsky, an author of a Harry Potter fanfic, is not “balance.”
Once you understand that false equivalence is baked into the film’s storytelling, you understand how misleading and manipulative the documentary is. And it is compounded by a series of falsehoods that go unchallenged and uncorrected.
This review addresses both failures.
The “AI Doc” Movie
“The AI Doc: Or How I Became an Apocaloptimist,” co-directed by Daniel Roher and Charlie Tyrell, sets out to explore AI, especially its potential for good and bad, with a strong emphasis on the filmmakers’ anxieties and fears. Its basic premise is: “A father-to-be tries to figure out what is happening with all this AI insanity.” As summarized by Andrew Maynard from Future of Being Human:
“The documentary progresses through the eyes of director Daniel Roher as he faces a tsunami of existential AI angst while grappling with the responsibility of becoming a father. Motivated by a fear that artificial intelligence could spell the end of everything that matters, he sets out to interview some of the largest (and loudest) voices in AI to fathom out whether this is the best of times or worst of times for him and his wife (filmmaker Caroline Lindy) to bring a kid into the world.”
The “loudest voices” include many AI doomer figures, such as Eliezer Yudkowsky, Dan Hendrycks, Daniel Kokotajlo, Connor Leahy, Jeffrey Ladish, and two of the most populist voices on emerging tech (first social media and now AI): Tristan Harris and Yuval Noah Harari. The film also features voices on AI ethics, including David Evan Harris, Emily M. Bender, Timnit Gebru, Deborah Raji, and Karen Hao. On the more boosterish side, there are Peter Diamandis and Guillaume Verdon (AKA Beff Jezos). Three leading AI CEOs were also interviewed: OpenAI’s Sam Altman, DeepMind’s Demis Hassabis, and Anthropic’s Amodei siblings, Dario and Daniela. (Meta’s Mark Zuckerberg declined, and xAI’s Elon Musk agreed but never showed up.)
The movie started playing in theaters on March 27, but there are already plenty of reviews (dating back to the Sundance Film Festival). The praise is fairly consistent: It is timely, wide-ranging, visually energetic, and unusually well-connected, with access to major AI figures.
The most common criticism is that it is too deferential to interviewees and too thin on hard interrogation or concrete answers. As several reviewers put it:
- “Roher’s willingness to blindly accept any and all of his speakers’ pronouncements leaves The AI Doc feeling toothless.”
- “By giving its doomer and accelerationist voices so much time to present AI’s most hyperbolic potential outcomes with little pushback, the documentary’s first half plays more like an overlong advertisement for the technology as opposed to a piece of measured analysis.”
- “Roher acts as a fantastic storyteller, but he treats his subjects too gently. The film desperately needs more pushback during the interviews.”
Tristan Harris, co-founder of the Center for Humane Technology, told the AP: “My hope is that this film is kind of like ‘An Inconvenient Truth’ or ‘The Social Dilemma’ for AI.”
That is not reassuring. It is more like a glaring warning sign. Harris’s “Social Dilemma” and “AI Dilemma” movies were full of misinformation and nonsensical hyperbole, and both were designed to be manipulative and dishonest. If anything, his endorsement tells you exactly what kind of movie this is.
After watching the AI Doc, I realized what the doomers had managed to accomplish here: The film absorbs the panic rather than investigates it.
The False Balance of The AI Doc
The AI Doc starts with what one reviewer called a “Doom Parade.” It aims to set the tone.
“The worst AI predictions are presented first,” another reviewer noted. “Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, calmly talks of the ‘abrupt extermination’ of humanity.”
And it is worth remembering who Yudkowsky is and what he has actually advocated. In his notorious TIME op-ed, “Shut it All Down,” he argued that governments should “be willing to destroy a rogue datacenter by airstrike.” In his book “If Anyone Builds It, Everyone Dies,” which many reviewers found unconvincing and “unnecessarily dramatic sci-fi,” he (and his co-author Nate Soares) proposed that governments must bomb labs suspected of developing AI. Based on what exactly? On the authors’ overconfident, binary worldview and speculative scenarios, which they mistake for inevitability.
One review of that book observed, “The plan with If Anyone Builds It seems to be to sane-wash him [Yudkowsky] for the airport books crowd, sanding off his wild opinions.”
That is more or less what the new documentary does, too. The AI Doc sane-washes the loudest doomers for mainstream viewers, sanding off their wild opinions.
In his newsletter, David William Silva addresses the documentary’s “series of doomers,” who “describe AI-driven extinction with the calm confidence of people who have said these things so many times they have stopped noticing they have no evidence for them.”
“Roher’s reaction is full terror,” Silva adds. “I hope it is unequivocally evident that this is not journalism.”
That gets to the heart of it. The film pretends to weigh competing perspectives, but in practice, it grants disproportionate authority to people most invested in flooding the zone with AI panic. And there is a well-oiled machine behind this kind of AI panic. As Silva writes:
“The people behind the AI anxiety machine. […] They know that predicting human extinction by software is an extraordinary claim requiring extraordinary evidence. They know they don’t have it. They know ‘my kids won’t live to see middle age’ is nothing but performance. […] And they do it anyway. Why do you think that is? The calculation is simple. Some people will see through it, and they will be annoyed, write rebuttals, call it what it is. Ok, fine. Just an acceptable loss. The believers, on the other hand, are a market. As long as the ratio stays favorable, the machine is profitable.”
One of the biggest beneficiaries of this film is Harris.[1] He is framed as if he is in the middle between the two main camps (doomers and accelerationists), and his narrative gradually becomes the film’s narrative (similar to the Social Dilemma). His call to action even serves as the ending (with a QR code directing viewers to a designated website).
The problem is that this framing has very little to do with reality. Harris’s Center for Humane Technology got $500,000 from the Future of Life Institute for “AI-related policy work and messaging cohesion within the AI X-risk [existential risk] community.” That is not a neutral player.
There’s a touching scene in the film where Roher mentions his father’s cancer treatment and expresses hope that AI might help. Harris appears visibly emotional. But in other contexts, Harris has argued against looking at AI for help with cancer treatment… in the belief that it would lead to extinction. Here he is on Glenn Beck’s show in 2023:
“My mother died from cancer several years ago. And if you told me that we could have AI that was going to cure her of cancer, but on the other side of that coin was that all the world would go extinct a year later, because of the, the only way to develop that was to bring something, some Demon into the world that would we would not be able to control, as much as I love my mother, and I would want her to be here with me right now, I wouldn’t take that trade.”
That sort of hyperbole seems relevant to Harris’s stance on such things, but it was not mentioned in the film at all.
Connor Leahy of Conjecture and ControlAI gets a similar makeover. In the documentary, he appears as another pessimistic expert. Elsewhere, he said he does not expect humanity “to make it out of this century alive; I’m not even sure we’ll get out of this decade!” His “Narrow Path” proposal for policymakers begins with the claim that “AI poses extinction risks to human existence.” Instead of calling for a six-month AI pause, he argued for a 20-year pause, because “two decades provide the minimum time frame to construct our defenses.”
This is exactly why background checks matter. Viewers of the AI Doc deserve to know the full scope of the more extreme positions these interviewees have publicly taken elsewhere. If someone has publicly argued for destroying data centers by airstrikes or stopping AI for 20 years, the audience should know that.
Debunking the Falsehoods
The film goes way beyond just pushing a panic. It also recycles several misleading or plainly false claims, letting them pass as established facts. Three stood out in particular.
Anthropic’s Blackmail study
One of the most repeated “facts” in reviews of the movie is that Anthropic’s AI model, Claude, decided, unprompted, to blackmail a fictional employee. In the film, Daniel Roher asks, “And nobody taught it to do that?” Jeffrey Ladish, of Palisade Research and Tristan’s Center for Humane Technology, replies: “No, it learned to do that on its own.”
That is a misleading characterization of the actual experiment; it has already been debunked in “AI Blackmail: Fact-Checking a Misleading Narrative.” Anthropic researchers admitted that they strongly pressured the model and iterated through hundreds of prompts before producing that outcome. It wasn’t a spontaneous emergence of “evil” behavior; the researchers explicitly engineered the scenario so that blackmail became the default. Telling viewers that the model went full “HAL 9000” omits that heavily engineered experimental setup.
Although this is a classic case of big claims and thin evidence, the film offers so little pushback that viewers are left to take Ladish’s statements at face value.
It is also worth remembering that Ladish has fought against open-source AI, pushed for a crackdown on open-source models, and once said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” He later updated his position (and it’s good to revise such views). But does the film mention his earlier public hysteria? No.
Is AI less regulated than sandwich shops? No.
Connor Leahy tells Daniel Roher, “There is currently more regulation on selling a sandwich to the public” than there is on AI development. This talking point has become a favorite slogan in AI doomer circles. It was repeatedly stated by The Future of Life Institute’s Max Tegmark and, more recently, by Senator Bernie Sanders. It’s catchy. It’s also false.
State attorneys general from both parties have explicitly argued that existing laws already apply to AI. Lina Khan, writing on behalf of the Federal Trade Commission, stated that “AI is covered by existing laws. Each agency here today has legal authorities to readily combat AI-driven harm.” The existing AI regulatory stack already includes antitrust & competition regulation, civil rights & anti-discrimination law, consumer protection, data privacy & security, employment & labor law, financial regulation, insurance & accident compensation, property & contract law, among others.
So no, AI is not less regulated than sandwich shops. It’s a misleading soundbite, not a serious description of legal reality.
Data center water usage
In the film, Karen Hao criticizes data centers, warning that “People are literally at risk, potentially of running out of drinking water.” That sounds alarming, which is presumably the point. But it is highly misleading.
In fact, Karen Hao had to issue corrections to her “Empire of AI” book because a key water-use figure was off by a factor of 4,500. The discrepancy was not 45x or 450x, but rather 4,500x. That is not a rounding error. For detailed rebuttals, see Andy Masley’s “The AI water issue is fake” and “Empire of AI is widely misleading about AI water use.”
There is also a basic proportionality issue here. As demonstrated by The Washington Post, “The water used by data centers caused a stir in Arizona’s drought-prone Maricopa County. But while they used about 905 million gallons there last year, that’s a small fraction of the 29 billion gallons devoted to the country’s golf courses.” To put that plainly: data centers accounted for just 0.1% of the county’s water use.
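The proportion is easy to check from the figures quoted above; a quick sketch (using only the numbers cited by the Post):

```python
# Figures quoted from The Washington Post's Maricopa County comparison.
data_center_gallons = 905e6   # data center water use in the county, one year
golf_course_gallons = 29e9    # US golf course use cited for comparison

# Data centers' use as a share of the golf-course figure.
share_vs_golf = data_center_gallons / golf_course_gallons
print(f"data centers used about {share_vs_golf:.1%} of the golf-course figure")
```

Even against golf courses alone, the data centers' draw is a few percent; against total county water use, it shrinks to the 0.1% figure cited above.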
It is also worth noting that “most of the water used by data centers returns to its source unchanged.” In closed-loop cooling systems, for example, water is recirculated multiple times, which significantly reduces net consumption.
None of this is hidden information. A basic fact-check by the filmmakers could have brought it to light. But that was not the film’s goal. They chose fear-based framing over actual reporting. They could have pressed interviewees on their track records, failed predictions, and political agendas. Instead, they let them narrate the stakes, unchallenged.
So, I think we can conclude that the AI Doc may want to appear balanced and thoughtful, but, unfortunately, too often it is not.
Final Remark
While Western filmmakers are busy platforming advocates for “bombing data centers” and “Stop AI for 20 years,” the Chinese Communist Party is building the actual infrastructure. The CCP is not making doom-and-gloom documentaries; it is racing ahead. This is a real strategic threat, and it is far more concerning than anything featured in this film.
—————————
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.
Filed Under: ai, ai doomerism, daniel roher, eliezer yudkowsky, the ai doc, tristan harris
Tech
DC In The Data Center For A More Efficient Future
If you own a computer that’s not mobile, it’s almost certain that it will receive its power in some form from a mains wall outlet. Whether it’s 230 V at 50 Hz or 120 V at 60 Hz, where once there might have been a transformer and a rectifier there’s now a switch-mode power supply that delivers low voltage DC to your machine. It’s a system that’s efficient and works well on the desktop, but in the data center even its efficiency is starting to be insufficient. IEEE Spectrum has a look at newer data centers that are moving towards DC power distribution, raising some interesting points which bear a closer look.
A traditional data center has many computers which in power terms aren’t much different from your machine at home. They get their mains power at distribution voltage — probably 33 kV AC where this is being written — they bring it down to a more normal mains voltage with a transformer just like the one on your street, and then they feed a battery-backed Uninterruptible Power Supply (UPS) that converts from AC to DC, and then back again to AC. The AC then snakes around the data center from rack to rack, and inside each computer there’s another rectifier and switch-mode power supply to make the low voltage DC the computer uses.
The increasing demands of data centers full of GPUs for AI processing have raised power consumption to the extent that all these conversion steps now waste a significant amount of power. The new idea is to convert once to DC (at a rather scary 800 volts) and distribute it directly to each cabinet, where a more efficient switch-mode converter brings it down to the voltages the computers need.
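The appeal can be sketched with back-of-the-envelope arithmetic: every conversion stage multiplies in its own loss, so removing stages compounds the savings. A minimal sketch, using illustrative per-stage efficiencies that are assumptions for the sake of the example, not measured figures:

```python
# Back-of-the-envelope comparison of power-delivery chains.
# Per-stage efficiency values are illustrative assumptions only.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Traditional chain: transformer -> UPS (AC->DC->AC) -> rack PSU (AC->DC)
traditional = chain_efficiency([0.98, 0.94, 0.94])

# DC distribution: one conversion to 800 V DC -> in-cabinet DC-DC converter
dc_distribution = chain_efficiency([0.975, 0.97])

print(f"traditional chain: {traditional:.1%}")
print(f"800 V DC chain:    {dc_distribution:.1%}")
```

Even with made-up numbers, the shape of the argument is clear: three conversions in series lose noticeably more than two, and at data-center scale those percentage points are megawatts.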
It’s an attractive idea not just for the data center. We’ve mused on similar ideas in the past and even celebrated a solution at the local level. But given the potential ecological impact of these data centers, it’s a little hard to get excited about the idea in this context. The fourth of our rules for the responsible use of a new technology comes into play. Fortunately we think that both an inevitable cooling of the current AI hype and a Moore’s Law driven move towards locally-run LLMs may go some way towards solving that problem on its own.
header image: Christopher Bowns, CC BY-SA 2.0.
Tech
A Major Publisher Just Canceled This Book Over AI Writing Concerns
Last June, Mia Ballard’s self-published novel Shy Girl took the internet by storm. After winning the hearts of readers and publisher Hachette alike, it was set for a major US debut in the coming months.
Now, the novel may never become available through any official channel again. Hachette has officially pulled the plug on the novel’s US release following a wave of allegations that generative AI played a role in the manuscript’s creation.
Originally self-published in February 2025, the horror novel was traditionally released by Hachette’s science fiction and fantasy label Orbit in the UK in November. After The New York Times provided evidence of AI usage in Shy Girl, Hachette canceled the planned spring US release and removed the book from its website completely.
“Hachette remains committed to protecting original creative expression and storytelling,” the publisher said in a statement to the Times.
Authors are required to disclose to Hachette whether AI was used in the creation of their work. Ballard has denied using AI tools to write the book, claiming an editor was responsible for the portions that appear to be AI-generated.
“My name is ruined for something I didn’t even personally do,” Ballard wrote in an email to the New York Times.
The cancellation of Shy Girl by Hachette marks the first time a major publisher has publicly pulled an existing title due to suspicions of AI-generated prose.
For the past few months, readers online have raised concerns about the book’s apparent use of AI.
A video from YouTuber frankie’s shelf provides a lengthy analysis of the novel, pointing out linguistic patterns that are characteristic of AI writing. The video also lists words in Shy Girl that are repeated with unusual frequency (“edge” is used 84 times and “sharp” 159 times), often in ways that are abstract and nonsensical.
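Counts like these are straightforward to reproduce; a minimal sketch of how one might tally word frequency in a manuscript (the sample text here is invented, not a quote from the novel):

```python
from collections import Counter
import re

def word_counts(text):
    """Count occurrences of each lowercase word in a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Invented sample sentence, not taken from the book.
sample = "The sharp edge of the night felt sharp against the sharp glass."
counts = word_counts(sample)
print(counts["sharp"])  # 3
```

Run over a full manuscript, the same tally makes unusually frequent words like “edge” and “sharp” stand out immediately.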
In January, Max Spero, founder and chief executive of Pangram, ran the text of Shy Girl through his AI detection program. He claimed that the novel was 78% AI-generated.
The rise of AI has caught the publishing industry off guard. Though AI writing has already appeared in many self-published books, traditional publishers like Hachette are more critical of the technology.
Representatives for Hachette didn’t immediately respond to a request for comment.
Tech
‘Wood is wood’: WSU research finds Yankees’ viral ‘torpedo’ bats perform the same as traditional bats

The New York Yankees just cruised through Seattle and won two out of three games against the Mariners. On the other side of Washington state, the Bronx Bombers’ “torpedo bats” were being scientifically scrutinized.
In what Washington State University is calling the first-ever laboratory experiments on the new baseball bat design, researchers found that torpedo bats and traditional bats basically perform the same.
It didn’t look that way last season, when the Yankees hit a franchise-record nine home runs in a game against the Milwaukee Brewers and drew viral attention to the bats that they were swinging.
The torpedo bat design relies on a slightly different shape in which wood is removed from the barrel tip and added to the bat’s sweet spot, so that the diameter tapers down, a little like a bowling pin. But the hype appears overblown.
“Wood is wood,” Lloyd Smith, a professor in WSU’s School of Mechanical and Materials Engineering and director of the university’s Sports Science Laboratory, told WSU Insider. “When it comes to baseball, there’s not a lot you can do with wood. If your goal is to keep the game steady and consistent and not have a lot of change, wood bats are good.”
Smith is part of a research team that includes Alan Nathan of the University of Illinois and Daniel Russell of Penn State University. They’ll present their findings at the upcoming International Sports Engineering Association conference, June 1–4 in Pullman, Wash.
According to WSU Insider, the researchers created two maple bats that were duplicates of a standard Major League Baseball bat. Two additional maple bats were made with a torpedo-shaped barrel that gave them the same swing weight as the standard bat.
They measured how much energy the bat returns to the ball by firing baseballs from an air cannon at a stationary bat and using light gates and cameras to measure the speed of the incoming and rebounding ball.
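A common way to express what such a rig measures is collision efficiency: the ratio of the ball’s rebound speed off the stationary bat to its inbound speed. A minimal sketch, with made-up numbers rather than the WSU team’s actual data:

```python
def collision_efficiency(v_in, v_rebound):
    """Ratio of rebound speed to inbound speed for a ball fired
    at a stationary bat (dimensionless). Higher means a livelier bat."""
    return v_rebound / v_in

# Illustrative numbers only, not measurements from the WSU study.
v_in = 136.0      # mph, inbound ball speed from the air cannon
v_rebound = 28.5  # mph, rebound speed measured by the light gates

print(f"collision efficiency: {collision_efficiency(v_in, v_rebound):.3f}")
```

Comparing this ratio at matching impact points is what lets the team say two bat shapes perform “nearly identically” rather than relying on game outcomes.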
The team found nearly identical performance for the torpedo and standard bats except that the sweet spot for the torpedo bat was a half inch farther from the bat tip than the standard bat.
“It was actually pretty phenomenal how close they were,” said Smith.
While some Yankees players said last year that any little tweak could provide an advantage, the team’s captain wasn’t convinced.
Aaron Judge hit an American League-record 62 homers in 2022, 58 in an MVP season in 2024, and 53 as repeat MVP in 2025. He had three homers using a traditional bat in that much-talked-about rout of the Brewers.
“The past couple of seasons kind of speak for itself,” Judge told ESPN last May. “Why try to change something?”
Tech
Xteink’s X3 E-Reader Snaps Onto Your iPhone and Is Ready for Any Spare Moment

Slapping the Xteink X3 onto an iPhone takes only a few seconds, thanks in part to its built-in magnets, which align exactly with MagSafe and let it snap into place. You get a thin black or white slab that sits flush against the phone’s back without adding much bulk. Anyone who reaches for their phone dozens of times a day will appreciate having a book right at their fingertips, all from the same move.
At only 58 grams, this device is easy to forget about until you need it, and then, as if by magic, it appears. Its overall size is a modest 100mm long and 60mm wide, so it goes unnoticed in a pocket until reading time beckons. Commuters and individuals waiting in lines can just pull out their phones and start reading a chapter without having to dig through their bags for another device.
The 3.7-inch E Ink screen displays clear text at over 250 pixels per inch. You can change the font size with a few simple adjustments, so even the smallest pages remain comfortable to read. With adequate lighting the characters simply pop, and unlike a phone screen there is no eye strain to contend with. You also get real buttons on the sides and bottom for turning pages and accessing menus. One-handed operation feels perfectly natural, whether you’re on a train or lying in bed. A built-in gyroscope detects a slight shake and flips the page forward, so you can keep a solid grip during quick reading sessions.

Navigation is straightforward, with a grid of icons instead of swipes or touches. Choose a book or change a setting with a few presses, and it remains dependable even when your fingers are clumsy. The approach minimizes distractions and lets you concentrate on the words themselves. You can load books onto the device using either the 16GB microSD card included in the box or a companion app on your phone. Transferring EPUB files is quick over Wi-Fi or by inserting the card into your computer, and storage can be expanded to 512GB, letting you carry thousands of titles without running out of space.

The battery lasts 10 to 14 days on a single charge, even if you read for an hour or two every day, and charging is simple: the special cable with magnetic pogo pins clips right into place on the gadget. Okay, there is one little flaw: there is no built-in front light (yet), but you can get a clip-on light for $9.99 if you plan on reading late into the evening. If you need more connectivity, there are Bluetooth and NFC, as well as Wi-Fi for the occasional update or transfer. The X3 is available now on the official Xteink website for $79.
[Source]
Tech
'The Bonfire of the Vanities' series headed to Apple TV
Maybe the third time is the charm. Writer/producer David E. Kelley is adapting Tom Wolfe’s “The Bonfire of the Vanities” novel into a series for Apple TV, with “The Batman” director Matt Reeves.

Apple TV is dramatizing “The Bonfire of the Vanities” — image credit: Apple
David E. Kelley is still best known for “The Practice” and “Ally McBeal” shows, but he’s also the writer of Apple TV’s “Presumed Innocent” and “Margo’s Got Money Troubles.” Now according to Deadline, he’s dramatizing Tom Wolfe’s famous 1987 novel of greed and Wall Street money.
Not to spoil the story, but as excellent as it is, Wolfe’s novel feels as if it fades out rather than having a big finish, which has made it difficult to adapt successfully. It was filmed in 1990, with Tom Hanks starring and Brian De Palma directing from a screenplay by Michael Cristofer, but that was a flop.
Tech
The leadership dilemma: Governing the “Agentic AI” workforce
Artificial intelligence is no longer a back-office enabler or a set of isolated automation tools. It is becoming a core component of how organizations operate, compete, and deliver value.
As businesses accelerate their adoption of increasingly autonomous systems, often referred to as agentic AI, a significant leadership dilemma is emerging. The workforce is no longer exclusively human.
This shift represents far more than a technological upgrade. It is a structural transformation that puts business leaders in uncharted territory.
The World Economic Forum’s Four Futures framework warns of rising technological fragmentation, declining trust, and widening governance gaps.
In this context, the question for leaders is no longer whether to deploy autonomous AI, but how to govern a hybrid workforce of humans and digital agents without introducing systemic risk.
For many organizations, this is becoming one of the defining leadership challenges of the decade.
The Rise of the Non-Human Workforce
Agentic AI systems differ from traditional automation in one critical way: they do not merely execute predefined tasks but interpret data, make decisions, and adapt their behavior to context. In many organizations, these systems are already performing functions once reserved for skilled employees: triaging customer requests, optimizing supply chains, generating code, or even making financial recommendations.
The productivity gains are undeniable, but so is the complexity. When digital agents act with autonomy, they also introduce new forms of organizational risk. Decisions may be opaque, accountability may be unclear, and the potential for unintended consequences increases dramatically.
Leaders must now grapple with a workforce that does not think, behave, or act like humans and that cannot be governed through traditional management structures. This is where structured identity, access, and behavioral governance become essential.
The Governance Gap: A Growing Leadership Risk
The most significant challenge is not the technology itself, but the governance vacuum surrounding it. Many organizations deploy autonomous systems faster than they establish the controls and guardrails required to manage them. This creates a widening gap between capability and oversight.
Several risks are already becoming visible:
1. Accountability gaps: When an AI agent makes a decision that leads to financial loss, regulatory exposure, or reputational harm, who is responsible? Without clear lines of accountability, organizations face legal and ethical uncertainty.
2. Insider-threat-like behavior: Autonomous systems often operate with high levels of privilege and can access sensitive data, trigger workflows, or interact with customers. If misconfigured or compromised, they can behave like highly privileged insider threats, an issue we frequently encounter when assessing digital identity posture.
3. Fragmentation and drift: As organizations deploy multiple AI agents across different functions, the risk of inconsistent behavior, configuration drift, and misaligned objectives increases. Without centralized governance, autonomous systems can evolve in ways that diverge from organizational intent.
4. Erosion of trust: Employees, customers, and regulators are increasingly concerned about how AI systems make decisions. A lack of transparency and explainability can undermine confidence and impede adoption.
AI adoption alone is no longer sufficient. Governance has become the true leadership mandate.
A Governance-First Mindset: The New Leadership Imperative
To navigate this new landscape, business leaders must adopt a governance-first mindset that aligns with the World Economic Forum's call for Digital Trust and systemic resilience. This requires treating agentic AI not as a standalone technology, but as a governed member of the workforce.
Several principles should guide this shift:
Establish Clear Accountability Structures
Every AI agent must have an identified human owner responsible for its actions, performance, and outcomes. This includes defining escalation paths, decision boundaries, and audit requirements. Without explicit accountability, organizations risk regulatory exposure and operational ambiguity.
Apply Identity and Access Controls to Digital Agents
Just as employees have identities, permissions, and access levels, so too must AI agents. Leaders should ensure that digital agents are integrated into identity management frameworks with least-privilege access, continuous monitoring, and lifecycle management. This reduces the risk of insider-threat-like behavior and prevents privilege creep, principles that are central to our approach to digital workforce governance.
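Concretely, treating an agent like an employee account might look like the minimal Python sketch below. It is illustrative only: the class name, the scope strings, and the owner field are hypothetical, not the API of any particular identity-management product. The point is deny-by-default authorization, an accountable human owner, and credentials that expire rather than live forever.

```python
from datetime import datetime, timedelta

class AgentIdentity:
    """A digital-agent identity managed like an employee account (illustrative sketch)."""

    def __init__(self, agent_id, owner, scopes, ttl_days=90):
        self.agent_id = agent_id
        self.owner = owner                  # accountable human owner
        self.scopes = frozenset(scopes)     # least-privilege grants: only what was explicitly given
        # Lifecycle management: credentials expire and must be re-certified.
        self.expires = datetime.now() + timedelta(days=ttl_days)

    def authorize(self, action):
        """Deny by default: an action succeeds only with an explicit, unexpired scope."""
        if datetime.now() >= self.expires:
            raise PermissionError(f"{self.agent_id}: credentials expired, re-certification required")
        if action not in self.scopes:
            raise PermissionError(f"{self.agent_id}: scope '{action}' not granted (owner: {self.owner})")
        return True

# A triage bot gets only what it needs: no billing access, no data export.
triage_bot = AgentIdentity("triage-bot-01", owner="jane.doe",
                           scopes={"tickets:read", "tickets:assign"})
triage_bot.authorize("tickets:assign")      # allowed
# triage_bot.authorize("billing:write")     # would raise PermissionError: privilege never granted
```

Privilege creep is avoided because a new capability requires an explicit new grant from the named owner, which is itself an auditable event.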
Implement Behavioral Guardrails
Autonomous systems require constraints that define acceptable behavior. These guardrails may include ethical guidelines, operational limits, safety checks, and real time monitoring. Guardrails ensure that AI agents act within organizational intent and do not drift into unsafe or unintended territory.
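One way to picture such guardrails is as a wrapper around the agent's action, so that operational limits are enforced in code rather than left to the model's judgment. The sketch below is a hypothetical example (the limits, topic names, and function names are invented for illustration): anything outside the approved envelope is escalated instead of executed.

```python
class GuardrailViolation(Exception):
    """Raised when an agent action falls outside its approved operating limits."""

def with_guardrails(action_fn, max_refund=500, blocked_topics=("legal", "medical")):
    """Wrap an agent action so it executes only within explicit operational limits;
    requests outside those limits are escalated to a human, not acted on."""
    def guarded(request):
        # Safety check: some topics are never handled autonomously.
        if request.get("topic") in blocked_topics:
            raise GuardrailViolation(f"topic '{request['topic']}' must be handled by a human")
        # Operational limit: the agent cannot commit beyond a fixed threshold.
        if request.get("refund", 0) > max_refund:
            raise GuardrailViolation(f"refund above ${max_refund} requires human approval")
        return action_fn(request)
    return guarded

# A support agent that can issue small refunds autonomously, but nothing more.
refund_agent = with_guardrails(lambda req: f"refunded ${req['refund']}")
print(refund_agent({"topic": "billing", "refund": 120}))    # within limits: executed
# refund_agent({"topic": "billing", "refund": 5000})        # raises GuardrailViolation
```

Because the limits live outside the agent, they hold even if the underlying model drifts or is updated.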
Build Oversight and Auditability into the System
Transparency is essential for trust. AI agents must be auditable, explainable, and observable. This includes maintaining logs of decisions, enabling post-incident analysis, and ensuring that humans can intervene when necessary. Oversight is foundational to responsible autonomy.
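A minimal sketch of that pattern, with hypothetical names throughout, is to route every agent decision through a wrapper that records inputs, outputs, and timestamps, and to make human override a first-class, logged operation rather than an out-of-band fix:

```python
import json
from datetime import datetime, timezone

class AuditedAgent:
    """Wraps an agent's decision function so every decision is logged and reviewable."""

    def __init__(self, agent_id, decide_fn):
        self.agent_id = agent_id
        self.decide_fn = decide_fn
        self.log = []                       # in production: an append-only audit store

    def decide(self, request):
        decision = self.decide_fn(request)
        self.log.append({
            "agent": self.agent_id,
            "time": datetime.now(timezone.utc).isoformat(),
            "input": request,
            "decision": decision,
            "overridden": False,
        })
        return decision

    def override(self, index, human, new_decision):
        """Human-in-the-loop intervention: correct a logged decision, keeping the trail."""
        self.log[index].update(overridden=True, overridden_by=human,
                               final_decision=new_decision)
        return new_decision

# A toy credit agent whose every decision is traceable to its inputs.
agent = AuditedAgent("credit-agent-01",
                     decide_fn=lambda req: "approve" if req["score"] > 650 else "decline")
agent.decide({"customer": "C-1001", "score": 700})
agent.override(0, human="risk.officer", new_decision="decline")
print(json.dumps(agent.log, indent=2))      # post-incident analysis reads straight from the log
```

The override leaves the original decision in place alongside the correction, so the audit trail shows both what the agent did and what the human changed.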
Foster a Culture of Digital Trust
Governance is more than a technical challenge; it is a cultural one. Leaders must champion a culture that values transparency, accountability, and responsible innovation. This includes educating employees about how AI agents operate, how decisions are made, and how risks are managed. Organizations that succeed here tend to be those that treat governance as a strategic capability, not a compliance burden.
From Liability to Advantage: Building the Hybrid Workforce of the Future
When governed effectively, agentic AI can become a powerful force multiplier. It can enhance productivity, accelerate innovation, and enable organizations to operate with greater agility and precision. But without governance, the same systems can introduce systemic vulnerabilities that undermine resilience.
The role of business leaders is to ensure that autonomy does not outpace oversight. By reframing agentic AI as part of the workforce, subject to the same expectations, controls, and accountability as human employees, leaders can transform a potential liability into a strategic advantage.
The future of work will be hybrid. The organizations that continue to evolve in 2026 will be those that recognize that governing AI is not a technical task delegated to IT, but a core leadership responsibility.
Leaders who embrace this governance-first approach will not only mitigate risk but also build resilient, high-performing organizations that define the future of the workplace and how businesses function.
Tech
How CIOs can create a strong foundation for an AI-enabled workplace
As with any new tech, there's a spectrum of AI adoption among businesses: some are ahead of the curve, while others remain much further behind as they continue to resist and delay.
But what’s clear is that adoption is happening with or without formal strategy because nearly two-thirds (65%) of employees now say they intentionally use AI for work.
Polished-sounding, in-depth output can now be generated in minutes, meaning everyone has the ability to produce more in less time.
As managers and organizations increasingly realize that this doesn't always lead to good work, the differentiator is becoming less about speed and more about who can work well alongside AI.
That means having the ability to analyze and assess its output and use it to make better human decisions – not replace them.
This marks a turning point for CIOs especially. A role that used to center on identifying and providing access to new tools to improve efficiency is now increasingly responsible for shaping an environment in which AI tools truly raise the bar.
AI is resetting the performance baseline
AI has for some time been accelerating routine and repeatable work across every function, from drafting documents and analyzing data to summarizing meetings and generating code. At first, many employees approached these tools with caution: AI made them faster, but they still treated its output as something to sense-check and refine.
Now, as AI becomes more normalized and trusted, that caution can slip. In some cases, speed is no longer paired with scrutiny and teams rely on confident-sounding outputs that may be incomplete, biased or wrong if they haven’t been properly reviewed. So, while managers are getting used to quicker turnaround and coming to expect it, they may also be receiving work that looks finished but hasn’t been validated.
If work is easier to produce across the board, then volume alone becomes a much less reliable indicator of value. What matters more is the ability to work with AI's output, interpreting and analyzing it in context and feeding it into final outputs and decisions rather than relying on it to do that for you.
Because of this, every role becomes more technical by default. Employees now need not only to use AI tools but to use them well and understand their outputs. That includes framing prompts effectively, challenging assumptions, identifying bias, and translating outputs within the right commercial and organizational context.
Without leaders prioritizing AI and how to use it correctly, this shift can create divergence. Some teams build confidence quickly, while others feel nervous and hesitate or over-rely on automation, which can result in uneven standards and unnecessary risk. The responsibility for avoiding that fragmentation sits with the CIO.
The answer isn't simply introducing more technology; in fact, in many ways that may complicate things further. What employees need are better ways of working with the tools already embedded across the organization.
This starts with being clear about where AI is genuinely helping the business. Rather than experimenting everywhere at once, organizations need to identify the areas where AI can improve outcomes, whether that’s speeding up analysis, reducing manual work or improving decision-making.
Leadership teams play an important role here by setting priorities and making sure AI initiatives stay focused on solving real business challenges rather than chasing the latest trend.
But introducing tools alone isn't enough. Employees need practical training on how to use AI well and how to check and interpret its outputs. Without that support, AI risks becoming either underused or over-relied on.
In many cases, the most effective approach is building confidence and competence over time through hands-on learning in the flow of work. When employees can experiment, give feedback on what's working, and refine how they use AI in real situations, organizations create a much stronger foundation for long-term progress.
Governance that enables trust and better decisions
If capability enables AI use, governance ensures it is used responsibly and consistently. Without clear guardrails, AI adoption can quickly become fragmented, with employees using different tools, handling data inconsistently or relying on outputs that haven’t been properly checked.
In practice, governance means giving employees clear guidance on how AI should be used across the organization. That could include clearly outlining which AI tools or large language models are approved for work, when enterprise or paid versions must be used and what kinds of data can or cannot be entered into these systems.
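Such guidance is easiest to enforce when it is written down in machine-checkable form. The sketch below is a hypothetical example of what that might look like: the tool names, data classifications, and policy structure are all invented for illustration, not any real product's configuration.

```python
# Hypothetical usage policy: which tools are approved, and what data may go into them.
AI_USAGE_POLICY = {
    "approved_tools": {
        "enterprise-llm":   {"tier": "paid", "allowed_data": {"public", "internal"}},
        "consumer-chatbot": {"tier": "free", "allowed_data": {"public"}},
    },
}

def check_usage(tool, data_classification):
    """Return (allowed, reason) for sending data of a given classification to a tool."""
    entry = AI_USAGE_POLICY["approved_tools"].get(tool)
    if entry is None:
        return False, f"'{tool}' is not an approved tool"
    if data_classification not in entry["allowed_data"]:
        return False, f"'{data_classification}' data may not be entered into '{tool}'"
    return True, "allowed"

print(check_usage("enterprise-llm", "internal"))    # approved tool, permitted data class
print(check_usage("consumer-chatbot", "customer"))  # blocked: data class not permitted
print(check_usage("shadow-ai-app", "public"))       # blocked: tool not approved at all
```

A policy expressed this way can back both employee-facing guidance and automated checks at the point of use, so the boundaries the article describes stay consistent across teams.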
It also means making sure teams understand how to handle sensitive information and comply with local regulations. When these boundaries are clear, employees can innovate confidently and leadership can better trust their employees, tools and the outputs that the two together are able to produce. Without governance, the risk is unchecked, low-value outputs that affect results and increase exposure.
The CIO has the power to align technology, ethics, and responsibility: embedding review mechanisms, defining who owns what, and making sure human judgment sits firmly at the center of it all.
Conclusion
AI is raising the bar across the workplace. The organizations that approach it in the right way build in clear direction on where it should be applied, practical support that helps people use it well, and a governance model that protects the integrity of decisions.
For CIOs, the aim is to create an environment where experimentation is encouraged while standards stay high and accountability is clear. When capability and trust are built in tandem, AI becomes a lever for stronger outcomes over time, not just quicker output in the short term.
Technology may be redefining how work is produced, but it is leadership that determines whether those higher standards translate into long-term advantage.
Tech
OpenAI purchases online tech talk show TBPN
OpenAI said the purchase will be part of its strategy to further the conversation on the changes brought about by artificial intelligence.
OpenAI, in what is being described as an unusual move, is set to purchase the Technology Business Programming Network (TBPN), a daily, live tech talk show hosted by Jordi Hays and John Coogan that often features high-profile tech leaders and entrepreneurs.
OpenAI’s chief executive officer of applications Fidji Simo said: “As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us. We’re not a typical company.
“We’re driving a really big technological shift. And with our mission to ensure artificial general intelligence benefits all of humanity comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates, with builders and people using the technology at the centre.”
While the full details of the deal have yet to be disclosed, OpenAI said the TBPN team will maintain editorial independence and make decisions on their guests and programming. According to the Wall Street Journal, TBPN stated that it generated $5m in advertising revenue last year and is on track to exceed $30m in revenue in 2026.
However, an OpenAI spokesperson told Bloomberg that the platform is not aiming to make TBPN a money-making enterprise.
In a statement, Hays expressed excitement at the venture, while making note of the importance of a strong partnership where both parties work as a team to communicate change and innovation in the AI and tech spaces.
He said: “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right. Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
Earlier this week, OpenAI closed a larger-than-expected funding round, raising $122bn against a projected figure of $110bn. Part of that funding is expected to go towards scaling and growing the platform's AI technologies and research, in line with current global demand.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.
Tech
This State Has Costco's First Standalone Gas Station
The best thing about retail warehouse stores is obviously the selection. After all, where else can you buy a new T-shirt, birthday cake, and a set of tires on the same day? But the ability to fill up with gas before leaving the parking lot is a plus as well. That's part of what makes stores like Costco so convenient. Now the company is moving forward with standalone gas stations, and its first, in California, is members-only.
Members will need to insert or scan their membership card to refuel, just as they would at Costco's attached gas stations. However, non-members may be able to access the pumps using a Costco Shop card, as they currently can at on-site locations. Costco's new gas station is located in Mission Viejo, California, and it's a 17,000-square-foot facility operated by company employees. It has 40 pumps covered by a large canopy, and it will run from 5 a.m. to 10 p.m. daily.
The station is expected to open by the end of June 2026. But if you don't live in California, you may not have to wait long. Costco is planning to build more standalone gas stations, beginning in Honolulu, Hawaii. As of this writing, the company hasn't publicly addressed this new program. But the belief is that standalone stations can help reduce the heavy traffic flow that currently plagues many on-site locations.
Costco’s gas boom and competitive pricing strategy
Costco’s first standalone gas station (which will also strategically stay cheaper than most) was initially announced in the summer of 2025. The facility is located off Interstate 5 in Mission Viejo, California, at the site where a Bed Bath & Beyond once stood. At the time of the announcement, the company’s gas stations were experiencing a boom in business, thanks mostly to extended operating hours. The decision to move forward with a new test store may have been influenced by this positive reaction.
Costco members get access to gas prices that can often beat competitors by anywhere from 10 to 25 cents per gallon. This is possible because of the company's warehouse approach, which includes buying fuel in large quantities. Costco also works directly with suppliers to get the best cost and then passes those savings on to its members.
Costco's first gas station opened in 1995, and since then its fuel business has grown. The company currently has over 700 stations around the world, serving millions of paid members every day. Those members can use the Costco app to check fuel prices in real time, as well as store hours and nearby locations.