Convoy co-founder and Microsoft Corporate Vice President Dan Lewis at the Seattle AI Startup Summit on April 2, 2026. (Ken Yeung Photo)
Dan Lewis’ career is hard to summarize in a sentence. He was a product manager at Microsoft, then an early employee at the Seattle AI startup Wavii, which Google later acquired. He made a stop at Amazon before ultimately boomeranging back to Microsoft, where he is today.
At this week’s Seattle AI Startup Summit, it was his experience building Convoy, the one-time unicorn trucking startup that shuttered in 2023, that he wanted to talk about.
But instead of relitigating what led to Convoy’s collapse, Lewis used his time on stage to share lessons to help entrepreneurs build a startup from the ground up.
Be deliberate about culture
Every company develops a culture, whether the founder shapes it or not, Lewis said. “The question is, are you involved in influencing what that is and helping to shape it around something you think aligns with your mission and the people you want in the company?”
Codify values only after you see what’s working
Back when Lewis was at Amazon, he asked then-CEO Jeff Bezos how the leadership principles were derived. Bezos told him that “he started writing stuff down when he first created the company, and then he realized he didn’t quite know what he was doing. So he waited a year to see what was working and what wasn’t, just to get a feel for how things were going.”
Anything that Bezos wanted to keep was codified. Lewis mirrored this approach for Convoy.
Make sure people know why, not just what
Founders shouldn’t have a culture in which workers accept decisions simply because the CEO says so. Lewis called that dynamic “demotivating,” arguing that employees who don’t understand the reasoning behind decisions can’t act independently or feel real ownership. Without that context, he said, people won’t feel like they’re truly part of the company.
Name teams after problems, not solutions
Lewis urged founders to name teams after the customer problems they’re solving, not the products they’re building. He pointed to his time at Amazon, where he built a Q&A tool called “Ask a Question, Get an Answer” for the ratings and reviews team.
The team pushed back: their mandate was to grow ratings and reviews, not launch someone else’s product. Had the team been named around a broader goal like customer or buyer confidence, Lewis said, its members would have been more open to creative approaches rather than feeling like they were “executing somebody else’s plan.”
Innovate deliberately
Invest time and energy into the areas that will really differentiate your company and “give you a chance to win.” Lewis acknowledged that it can feel uncomfortable to copy someone else’s innovations in undifferentiated areas, but sometimes it’s OK, especially when you’re not spending time on things “that don’t matter a lot.”
Storytelling is a startup superpower
Convoy co-founder Dan Lewis discusses the power of storytelling at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
Another critical cultural value is the company’s story. Have you crafted a narrative that is interesting, something people can relate to, and want to be a part of?
“Think about for what [you’re] doing, what’s the context in the world?” Lewis said. “What is the opportunity that’s just right there in front of us? What’s the tension point as to why we can’t get that opportunity? What is holding the world back from it, and how are we going to unlock it for everyone so it makes everything better?”
When it came to Convoy, for example, he had his work cut out for him early on trying to sign new business. “Why would my customer, who’s never worked with a technology company, because they’re shipping freight, want to take a bet?” Lewis explained. “Because they want to be part of the story. It’s interesting.”
Clarify expectations bidirectionally
Trust between founders and employees doesn’t happen by accident. Lewis recommended sitting down — perhaps over a meal — and laying out expectations from both sides before the work begins. It’s a bidirectional process, meaning that both the leader and employee must be heard.
Hire deliberately — and reluctantly
Dan Lewis offers recruiting and hiring insights at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
When it comes to hiring, Lewis offered three tips.
First, every company wants team members who want to “show up every day, knock down walls, and make it happen.” But more established organizations also need a second type of employee: those capable of operating and improving existing systems. This creates conflict inside a large business, Lewis said, because the two cultures can’t live in harmony, nor is it possible to have “two compensation structures that manage the risk-reward.”
He argued that startups have the “pure play” advantage where there’s one culture, one risk-reward trade-off, and founders can focus on the type of person they need. In fact, Lewis thinks 80% of the workforce should possess that “wall-knocker” mentality.
Second, startups must be deliberate in hiring, applying filters throughout the candidate funnel and rating how someone introduced themselves, spoke during the first meeting, and followed up. At the end of the process, companies will “only have people that really want to be there and want to be part of this.”
Founders shouldn’t invest a lot of time trying to convince someone to join their company. If they are, “you’re working too hard,” Lewis said, and it’s “probably not the right sign for a startup.”
Lewis’ last tip: Don’t hire. He admitted that it may sound counterintuitive, but he wants founders to think that every time someone new is onboarded, “it was a failure to operate more efficiently and to innovate” in a way that wouldn’t have required bringing a new person aboard.
Instead, they should first ask whether there was an alternative way to complete the task — perhaps through AI — rather than increasing headcount.
And to be clear, Lewis isn’t advocating for the end of great hiring. Rather, he wants leaders to approach it this way: “Always consider it to be the thing that you wish you didn’t have to do. You wish you could have gotten it done without hiring that person.”
People don’t read instructions
At Convoy, Lewis said, they designed an operations system assuming people would carefully read each other’s notes during multi-day truck jobs with multiple support shifts. Most skipped the notes and started from scratch, irritating customers who had to repeat themselves.
When Lewis asked investor Henry Kravis of KKR for advice, the answer was blunt: “Stop building a system that assumes people are going to read.”
The lesson applies beyond operations. Whether it’s customers, employees, or end users, people scan for a button rather than read text. Founders should design processes and products, especially in the AI era, that work even if nobody reads the instructions.
Use data, and embrace concrete examples
Convoy founder Dan Lewis urges startups to back up data with concrete examples at the 2026 Seattle AI Startup Summit. (Ken Yeung Photo)
One final piece of advice from Lewis: be data-driven. When something’s wrong or there’s confusion, leave the jargon behind and look to the data as you talk it through with your team or customer.
But also be specific — use clear, concrete examples, along with the exact words customers use, to clarify quickly.
Lewis closed his keynote with a note of humility. None of these lessons came easily, he acknowledged. In fact, many of them weren’t obvious to him until his experience at Convoy forced the issue. The company reached the heights of the startup world before closing its doors, but for entrepreneurs trying to build something that lasts, that hard-won experience may be exactly the point.
His talk kicked off a day of conversation at the second annual Seattle AI Startup Summit, a conference that brings together investors, founders, executives and others.
A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 2 (game #761).
Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.
Want more word-based fun? Then check out my NYT Connections today and Quordle today pages for hints and answers for those games, and Marc’s Wordle today page for the original viral word game.
SPOILER WARNING: Information about NYT Strands today is below, so don’t read on if you don’t want to know the answers.
NYT Strands today (game #762) – hint #1 – today’s theme
What is the theme of today’s NYT Strands?
• Today’s NYT Strands theme is… Early risers
NYT Strands today (game #762) – hint #2 – clue words
Play any of these words to unlock the in-game hints system.
SOLID
COIN
PROUD
BINGO
TWINS
SOIL
NYT Strands today (game #762) – hint #3 – spangram letters
How many letters are in today’s spangram?
• Spangram has 13 letters
NYT Strands today (game #762) – hint #4 – spangram position
What are two sides of the board that today’s spangram touches?
First side: left, 4th column
Last side: top, 5th column
Right, the answers are below, so DO NOT SCROLL ANY FURTHER IF YOU DON’T WANT TO SEE THEM.
NYT Strands today (game #762) – the answers
(Image credit: New York Times)
The answers to today’s Strands, game #762, are…
DAFFODIL
TULIP
HYACINTH
CROCUS
SNOWDROP
SPANGRAM: SPRINGBLOSSOM
My rating: Hard
My score: 2 hints
I quickly abandoned my search for jobs that required an early morning start (mainly because I couldn’t think of any).
Fortunately, in my hunt for non-game words I found “blossom” and then “spring” and stuck them together for today’s spangram.
However, after finding SPRINGBLOSSOM along with the most obvious flower, DAFFODIL, I reached the limits of my floral knowledge and needed two hints to see me virtually to a full bouquet.
Yesterday’s NYT Strands answers (Thursday, April 3, game #761)
GUAVA
ACAI
PINEAPPLE
LYCHEE
MANGO
PAPAYA
SPANGRAM: TROPICALFRUIT
What is NYT Strands?
Strands is the NYT’s not-so-new-any-more word game, following Wordle and Connections. It’s now a fully fledged member of the NYT’s games stable that has been running for a year and which can be played on the NYT Games site on desktop or mobile.
I’ve got a full guide to how to play NYT Strands, complete with tips for solving it, so check that out if you’re struggling to beat it each day.
AI vibe coders have yet another reason to thank Andrej Karpathy, the coiner of the term.
The former Director of AI at Tesla and co-founder of OpenAI, now running his own independent AI project, recently posted on X describing an “LLM Knowledge Bases” approach he’s using to manage various topics of research interest.
By building a persistent, LLM-maintained record of his projects, Karpathy is solving the core frustration of “stateless” AI development: the dreaded context-limit reset.
As anyone who has vibe coded can attest, hitting a usage limit or ending a session often feels like a lobotomy for your project. You’re forced to spend valuable tokens (and time) reconstructing context for the AI, hoping it “remembers” the architectural nuances you just established.
Karpathy proposes something simpler and, in its loose, messy way, more elegant than the typical enterprise solution of a vector database and RAG pipeline.
Instead, he outlines a system where the LLM itself acts as a full-time “research librarian”—actively compiling, linting, and interlinking Markdown (.md) files, the most LLM-friendly and compact data format.
By diverting a significant portion of his “token throughput” into the manipulation of structured knowledge rather than boilerplate code, Karpathy has surfaced a blueprint for the next phase of the “Second Brain”—one that is self-healing, auditable, and entirely human-readable.
Beyond RAG
For the past three years, the dominant paradigm for giving LLMs access to proprietary data has been Retrieval-Augmented Generation (RAG).
In a standard RAG setup, documents are chopped into arbitrary “chunks,” converted into mathematical vectors (embeddings), and stored in a specialized database.
When a user asks a question, the system performs a “similarity search” to find the most relevant chunks and feeds them into the LLM.

Karpathy’s approach, which he calls LLM Knowledge Bases, rejects the complexity of vector databases for mid-sized datasets.
Instead, it relies on the LLM’s increasing ability to reason over structured text.
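The “similarity search” step in a standard RAG pipeline can be sketched with a toy example. This is purely illustrative, not Karpathy’s code: bag-of-words counts stand in for the neural embeddings a real pipeline would produce, and the chunk texts are made up.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A real RAG pipeline would call
    # an embedding model here and get back a dense vector instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank all chunks by similarity to the query and return the top k --
    # the "nearest neighbor" retrieval step that vector databases optimize.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Markdown files are compiled into a structured wiki",
    "Vector databases store opaque embeddings",
    "The LLM lints the wiki for broken links",
]
top = retrieve("wiki compiled from markdown", chunks, k=1)
```

The point of the contrast: the ranking logic above is opaque math, whereas in Karpathy’s scheme the LLM navigates explicit, human-readable structure.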
The system architecture, as visualized by X user @himanshu amid the wider reactions to Karpathy’s post, functions in three distinct stages:
Data Ingest: Raw materials—research papers, GitHub repositories, datasets, and web articles—are dumped into a raw/ directory. Karpathy utilizes the Obsidian Web Clipper to convert web content into Markdown (.md) files, ensuring even images are stored locally so the LLM can reference them via vision capabilities.
The Compilation Step: This is the core innovation. Instead of just indexing the files, the LLM “compiles” them. It reads the raw data and writes a structured wiki. This includes generating summaries, identifying key concepts, authoring encyclopedia-style articles, and—crucially—creating backlinks between related ideas.
Active Maintenance (Linting): The system isn’t static. Karpathy describes running “health checks” or “linting” passes where the LLM scans the wiki for inconsistencies, missing data, or new connections. As community member Charly Wargnier observed, “It acts as a living AI knowledge base that actually heals itself.”
By treating Markdown files as the “source of truth,” Karpathy avoids the “black box” problem of vector embeddings. Every claim made by the AI can be traced back to a specific .md file that a human can read, edit, or delete.
Implications for the enterprise
While Karpathy’s setup is currently described as a “hacky collection of scripts,” the implications for the enterprise are immediate.
Karpathy himself suggested that this methodology represents an “incredible new product” category.
Most companies currently “drown” in unstructured data—Slack logs, internal wikis, and PDF reports that no one has the time to synthesize.
A “Karpathy-style” enterprise layer wouldn’t just search these documents; it would actively author a “Company Bible” that updates in real-time.
As AI educator and newsletter author Ole Lehmann put it on X: “i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use, your bookmarks, your read-later app, your podcast app, your saved threads.”
Eugen Alpeza, co-founder and CEO of AI enterprise agent builder and orchestration startup Edra, noted in an X post that: “The jump from personal research wiki to enterprise operations is where it gets brutal. Thousands of employees, millions of records, tribal knowledge that contradicts itself across teams. Indeed, there is room for a new product and we’re building it in the enterprise.”
As the community explores the “Karpathy Pattern,” the focus is already shifting from personal research to multi-agent orchestration.
A recent architectural breakdown by @jumperz, founder of AI agent creation platform Secondmate, illustrates this evolution through a “Swarm Knowledge Base” that scales the wiki workflow to a 10-agent system managed via OpenClaw.
The core challenge of a multi-agent swarm—where one hallucination can compound and “infect” the collective memory—is addressed here by a dedicated “Quality Gate.”
Using the Hermes model (trained by Nous Research for structured evaluation) as an independent supervisor, every draft article is scored and validated before being promoted to the “live” wiki.
This system creates a “Compound Loop”: agents dump raw outputs, the compiler organizes them, Hermes validates the truth, and verified briefings are fed back to agents at the start of each session. This ensures that the swarm never “wakes up blank,” but instead begins every task with a filtered, high-integrity briefing of everything the collective has learned.
Scaling and performance
A common critique of non-vector approaches is scalability. However, Karpathy notes that at a scale of ~100 articles and ~400,000 words, the LLM’s ability to navigate via summaries and index files is more than sufficient.
For a departmental wiki or a personal research project, the “fancy RAG” infrastructure often introduces more latency and “retrieval noise” than it solves.
Tech podcaster Lex Fridman (@lexfridman) confirmed he uses a similar setup, adding a layer of dynamic visualization:
“I often have it generate dynamic html (with js) that allows me to sort/filter data and to tinker with visualizations interactively. Another useful thing is I have the system generate a temporary focused mini-knowledge-base… that I then load into an LLM for voice-mode interaction on a long 7-10 mile run.”
This “ephemeral wiki” concept suggests a future where users don’t just “chat” with an AI; they spawn a team of agents to build a custom research environment for a specific task, which then dissolves once the report is written.
Licensing and the ‘file-over-app’ philosophy
Technically, Karpathy’s methodology is built on an open standard (Markdown) but viewed through a proprietary-but-extensible lens (Obsidian, the note-taking and file organization app).
Markdown (.md): By choosing Markdown, Karpathy ensures his knowledge base is not locked into a specific vendor. It is future-proof; if Obsidian disappears, the files remain readable by any text editor.
Obsidian: While Obsidian is a proprietary application, its “local-first” philosophy and EULA (which allows for free personal use and requires a license for commercial use) align with the developer’s desire for data sovereignty.
The “Vibe-Coded” Tools: The search engines and CLI tools Karpathy mentions are custom scripts—likely Python-based—that bridge the gap between the LLM and the local file system.
This “file-over-app” philosophy is a direct challenge to SaaS-heavy models like Notion or Google Docs. In the Karpathy model, the user owns the data, and the AI is merely a highly sophisticated editor that “visits” the files to perform work.
Librarian vs. search engine
The AI community has reacted with a mix of technical validation and “vibe-coding” enthusiasm. The debate centers on whether the industry has over-indexed on Vector DBs for problems that are fundamentally about structure, not just similarity.
Jason Paul Michaels (@SpaceWelder314), a welder using Claude, echoed the sentiment that simpler tools are often more robust:
“No vector database. No embeddings… Just markdown, FTS5, and grep… Every bug fix… gets indexed. The knowledge compounds.”
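The stack that quote describes is easy to reproduce. As a sketch, assuming a Python build whose bundled SQLite includes the FTS5 extension (true of most modern builds), a full-text index over markdown notes takes a few lines:

```python
import sqlite3

# Build an in-memory SQLite FTS5 full-text index over markdown notes --
# the "just markdown, FTS5, and grep" search layer, with no embeddings.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(name, body)")

docs = [  # illustrative note contents, not from Karpathy's setup
    ("rag.md", "vector databases store embeddings for similarity search"),
    ("wiki.md", "the markdown wiki is compiled and linted by the LLM"),
]
db.executemany("INSERT INTO notes VALUES (?, ?)", docs)

# FTS5 MATCH implicitly ANDs the query terms; rank orders by relevance.
hits = db.execute(
    "SELECT name FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("markdown wiki",),
).fetchall()
```

Unlike a vector store, every hit here is directly traceable to a keyword in a file a human can open and read.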
However, the most significant praise came from Steph Ango (@Kepano), co-creator of Obsidian, who highlighted a concept called “Contamination Mitigation.”
He suggested that users should keep their personal “vault” clean and let the agents play in a “messy vault,” only bringing over the useful artifacts once the agent-facing workflow has distilled them.
Which solution is right for your enterprise vibe coding projects?

Feature | Vector DB / RAG | Karpathy’s Markdown Wiki
Data Format | Opaque Vectors (Math) | Human-Readable Markdown
Logic | Semantic Similarity (Nearest Neighbor) | Explicit Connections (Backlinks/Indices)
Auditability | Low (Black Box) | High (Direct Traceability)
Compounding | Static (Requires re-indexing) | Active (Self-healing through linting)
Ideal Scale | Millions of Documents | 100 – 10,000 High-Signal Documents
The “Vector DB” approach is like a massive, unorganized warehouse with a very fast forklift driver. You can find anything, but you don’t know why it’s there or how it relates to the pallet next to it. Karpathy’s “Markdown Wiki” is like a curated library with a head librarian who is constantly writing new books to explain the old ones.
The next phase
Karpathy’s final exploration points toward the ultimate destination of this data: Synthetic Data Generation and Fine-Tuning.
As the wiki grows and the data becomes more “pure” through continuous LLM linting, it becomes the perfect training set.
Instead of the LLM just reading the wiki in its “context window,” the user can eventually fine-tune a smaller, more efficient model on the wiki itself. This would allow the LLM to “know” the researcher’s personal knowledge base in its own weights, essentially turning a personal research project into a custom, private intelligence.
Bottom line: Karpathy hasn’t just shared a script; he’s shared a philosophy. By treating the LLM as an active agent that maintains its own memory, he has bypassed the limitations of “one-shot” AI interactions.
For the individual researcher, it means the end of the “forgotten bookmark.”
For the enterprise, it means the transition from a “raw/ data lake” to a “compiled knowledge asset.” As Karpathy himself summarized: “You rarely ever write or edit the wiki manually; it’s the domain of the LLM.” We are entering the era of the autonomous archive.
Nick Shirley—the right-wing creator whose YouTube investigation sparked the Trump administration’s immigration crackdown in Minnesota—claims that his most recent video about alleged fraud in California was bolstered by data provided by none other than Edward Coristine, one of the first members of the so-called Department of Government Efficiency (DOGE) known online as “Big Balls.”
Coristine, who joined DOGE at 19 years old with no prior government experience, was staffed across several agencies including the Social Security Administration (SSA) and the Small Business Administration (SBA). Before joining DOGE, Coristine worked at Elon Musk’s Neuralink for several months and founded a startup known for hiring black hat hackers.
In an interview with Coristine published on Shirley’s YouTube channel on Thursday, Shirley claims that Coristine personally pulled data on Medicaid spending for businesses based in California as potential targets. Coristine nodded along, telling Shirley that the government must create more opportunities to crowdsource fraud investigations.
The information Coristine allegedly pulled for Shirley was from a dataset published by the DOGE team at the Department of Health and Human Services (HHS) in February. In a post to X at the time, the HHS DOGE team referred to it as “the largest Medicaid dataset in department history.” The post also claimed that the dataset could be used to “detect” large-scale fraud.
“After that, I went to California based off that dataset you had helped me extract, and these fraudsters also weren’t even trying to hide it,” Shirley told Coristine in Thursday’s interview.
Coristine said that by open-sourcing data on government spending, vigilante investigators like Shirley who are “more well-positioned” could uncover fraudulent payments. “You are someone who actually went to the places where we were spending all this money and confronted the people and got to know the truth. I think we just have to create more opportunities for that to happen. We have to continue to open source data,” Coristine said.
The intersection of the right’s favorite fraud influencer and one of the most notorious DOGE engineers exemplifies the next evolution of DOGE and the Trump administration’s fight against “waste, fraud, and abuse.”
Shirley’s videos have become key pieces of evidence for the Trump administration’s fraud and immigration crackdowns. When Shirley released his December video claiming to have uncovered more than $100 million in Somali-run childcare fraud in Minnesota, figures like Vice President JD Vance shared it. A surge of immigration agents was then sent to Minnesota, resulting in mass arrests, detainments, and the deaths of two protesters, Renee Good and Alex Pretti.
Early in their YouTube video, Shirley and Coristine directly tie fraud to immigrant communities and foreigners. “A lot of the money is being stolen and siphoned out of the country,” Coristine says, without providing evidence. “Once that money is in a suitcase to Somalia, that’s never coming back,” Shirley replies.
Later in the video, Shirley and Coristine cite specific examples of “waste and fraud” identified by DOGE, including funding for a “Sesame Street style children’s TV program in Iraq” and “tax policy consulting in Liberia.” Both programs were supported by the US Agency for International Development (USAID), which DOGE effectively shut down in the early months of 2025. Coristine also alleged that the SBA “did a terrible job,” particularly with loans during the height of COVID, and that there were “no checks at all on who’s receiving money, not even the most basic checks of like, if [a Social Security number] is real.”
Emily Power, CEO of Ocean Made, shows the difference in the root structure of tomato plants grown in the startup’s kelp-based pots, on the left, versus plastic pots. (Ocean Made Photo)
The University of Washington’s CoMotion Labs has selected the second cohort of startups for its Climate Tech Incubator. The founders are tackling wide-ranging sustainability challenges including boosting EV adoption, reducing plastic use, supporting local food and beverage production, and developing smart climate strategies for cities.
The six-month program is located at the Seattle Climate Innovation Hub, a public-private partnership in the city’s downtown. The venue supports climate entrepreneurship beyond the incubator and hosts regular public events.
Eight early-stage startups participating in the program receive support in building teams, developing their business plans, forging strategic partnerships and preparing to make their pitch to investors. The cohort will share their progress at a demo day in September.
“If our region is serious about being a leader in climate tech, we need to find more ways to support more founders,” Silvia said. “The Climate Tech Incubator is a fantastic addition to the support ecosystem.”
Here are the participants:
Astraeus Ocean Systems is a maritime ag-tech startup, offering water-quality monitoring and crop modeling for shellfish and seaweed growing operations. The Bellingham, Wash.-based company’s founding team includes two Ph.D.-holding research scientists and a leader in business development.
Benchmark Star helps facilities managers comply with clean building regulations by automating regulation tracking and streamlining utility data reporting. The effort launched out of a Seattle Climate Innovation Hub hackathon last year and is led by Renee Gastineau, who has worked in clean energy for more than a decade.
Climate Solutions International is the brainchild of Jan Whittington, a UW urban planning professor. Whittington developed strategies for helping cities take action on climate change while making their infrastructure more resilient in a warming world. The World Bank funded her to apply the approach across 300 cities in 30 countries, and her startup is turning that expertise into a business.
EVQ is a one-stop, AI-powered platform helping drivers find, buy and operate electric vehicles, demystifying battery charging and other hurdles to EV ownership. The Seattle startup spun out of Coltura, a nonprofit promoting EV policies and research founded by EVQ CEO Matthew Metz.
FlameWise produces portable kilns for individuals and communities to turn unwanted woody debris into biochar that sequesters carbon and provides soil benefits. The kilns are a low-smoke alternative to burn piles. Seattle’s Korina Stark launched the effort following challenges to manage wood waste on her own 20-acre forested property.
OceanMade offers seaweed-based pots for nurseries, landscapers, gardeners and small farms who want to avoid plastic waste. The kelp containers also support root development and naturally degrade in the soil after planting. CEO Emily Power previously worked at Microsoft for nearly eight years before founding the Seattle startup in 2021.
REearthable is manufacturing biodegradable plastics from waste limestone recovered from mining operations. The material from the Seattle-area startup is suitable for cosmetics, food packaging and other applications. CEO Charlotte Wintermann is a serial entrepreneur with a background in sales, marketing and business strategy.
Seeking Ferments produces fermented beverages and kombucha that are brewed in Seattle from locally sourced ingredients. Co-founders Jeanette Macias and Lyz Macias launched their startup in 2019 and now sell their beverages online and at farmers markets and their “filling station.”
OpenAI announced a major reorganization on Friday as the company’s CEO of AGI deployment, Fidji Simo, takes medical leave to focus on her health. OpenAI president Greg Brockman will handle the product teams in Simo’s absence. Simo’s previous title was CEO of applications.
Brad Lightcap, the chief operating officer and one of CEO Sam Altman’s top deputies, is transitioning to a “special projects” role. Kate Rouch, the chief marketing officer, is taking a leave of absence to focus on her health. Rouch has been undergoing treatment for breast cancer. When she returns, it will be in “a different, more narrowly scoped role,” according to a note Simo shared with OpenAI staff which was viewed by WIRED.
“As I shared when I joined, I had a relapse of my neuroimmune condition a few weeks before starting the job,” Simo said in the note which was sent in OpenAI’s “core” Slack channel. “It’s been a bit of a rollercoaster since, and the last month has been particularly rough health-wise. For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work. I took time off for the first time two weeks before the break for some medical tests, and it’s now clear that I’ve pushed a little too far and I really need to try new interventions to stabilize my health.”
Simo is expected to take “several weeks” of leave according to her internal post.
In his new role, Lightcap will be in charge of the company’s forward-deployed engineers, who embed within enterprise organizations and help integrate OpenAI’s technology, among other duties.
OpenAI will begin searching for a new CMO, Simo said. The company is also looking for a chief communications officer to replace Hannah Wong, who left her position in January. Chris Lehane has taken over as the leader of the communications team in the interim.
“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases,” said an OpenAI spokesperson in a statement. “We’re well-positioned to keep executing with continuity and momentum.”
Simo joined OpenAI in August 2025, taking over many of the company’s consumer-facing products, including ChatGPT, Codex, and the social-video app Sora. She recently shuttered the Sora app and told staff that the company needed to cut side projects and refocus around its core products.
The decision comes as OpenAI eyes an IPO as soon as this year. The company recently raised $122 billion in the largest funding round the tech industry has ever seen, which valued the company at $852 billion.
The two largest Gemma 4 models – 26B Mixture of Experts and 31B Dense – require an 80GB Nvidia H100 GPU to run unquantized in bfloat16 format. Google claims these models deliver “frontier intelligence on personal computers” for students, researchers, and developers, providing advanced reasoning capabilities for IDEs, coding assistants, and agentic workflows.
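The 80GB requirement follows from simple arithmetic: bfloat16 stores each parameter in 2 bytes, so weights alone dominate the memory budget. A minimal sketch of that estimate (model sizes are from the article; activation and KV-cache overhead is ignored):

```typescript
// Estimate weight memory for an unquantized bfloat16 model.
// bfloat16 = 2 bytes per parameter; result is in decimal gigabytes.
function weightGb(paramsBillion: number, bytesPerParam: number = 2): number {
  return (paramsBillion * 1e9 * bytesPerParam) / 1e9;
}

// The two largest Gemma 4 variants mentioned above:
const models: Array<[string, number]> = [
  ["Gemma 4 26B MoE", 26],
  ["Gemma 4 31B dense", 31],
];
for (const [name, size] of models) {
  console.log(`${name}: ~${weightGb(size)} GB of weights`);
}
```

A 31B dense model needs roughly 62 GB for weights alone before any runtime overhead, which is consistent with the stated 80GB H100 requirement.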
OpenAI has spent the last few weeks seemingly trying to refocus on using AI for business instead of what execs dubbed “side quests,” dumping its AI video generator and its plans for an adult-themed chatbot. So this week, of course, the company announced it’s jumping into the media business.
OpenAI said it was acquiring Technology Business Programming Network, better known as TBPN, which runs a 3-hour show streamed on weekdays that delves into the biggest topics — and brings in the biggest names — in tech business.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
OpenAI acquired TBPN to “help create a space for a real, constructive conversation about the changes AI creates,” Fidji Simo, CEO of AGI deployment at OpenAI, wrote in a message to employees that the company shared. Simo said OpenAI also wanted to take advantage of TBPN’s marketing prowess: “They have a strong pulse on where the industry is going, their comms and marketing ideas have really impressed me.”
TBPN launched in October 2024 and has been compared to ESPN in how it covers tech — two guys at a big desk with news, analysis, commentary and banter about topics such as AI, crypto, startups and the defense industry. The show’s two hosts and co-founders, Jordi Hays and John Coogan, have had some of tech’s biggest names in studio — OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, entrepreneur Mark Cuban and Salesforce’s Marc Benioff, to name some.
The show is streamed live from 11 a.m. to 2 p.m. PT Monday through Friday on YouTube and X from the Ultradome, a studio on a Hollywood film lot. The show has 70,000 viewers daily and looks set to make more than $30 million in revenue this year, according to the Wall Street Journal.
TBPN co-host Hays acknowledged in a statement that the show has been “critical” of the AI industry.
“After getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” Hays said. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
In an era of fast-moving media consolidation, it’s a fair question — can TBPN keep saying what they really think, even if that ruffles OpenAI’s feathers? In her statement, Simo said OpenAI wants the show to maintain its “editorial independence.”
“TBPN will continue to run their programming, choose their guests, and make their own editorial decisions,” she said. “That’s foundational to their credibility, and it’s something we’re explicitly protecting as part of this agreement.”
Altman, OpenAI’s CEO, echoed that sentiment in a post on X, calling TBPN his “favorite tech show.”
“We want them to keep that going and for them to do what they do so well,” Altman posted. “I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions.”
The acquisition prompted some criticism and concern on social media as people wondered whether TBPN could really maintain editorial independence.
“Reporters doing accountability journalism are getting mowed down by mass layoffs & are now almost extinct — while the targets of their accountability reporting are giving hundreds of millions of dollars to pundits,” David Sirota, a longtime columnist and founder of the investigative news outlet The Lever, posted on X. “What stage of the media dystopia is this?”
TBPN will be under the supervision of OpenAI’s Chief Global Affairs Officer Chris Lehane, who joined the company in October 2024 and is the company’s main strategist in working with government officials. Decades ago, he worked in the White House of President Bill Clinton — helping to handle the Whitewater and Monica Lewinsky investigations — and as press secretary to Vice President Al Gore. Lehane also set up a pro-crypto super PAC called Fairshake that helped defeat anti-crypto candidates during the 2024 elections and helped Airbnb battle housing regulations.
HarperCollins is tapping into AI to bring some of its book franchises to life. Specifically, the publisher is teaming up with Toonstar, an AI animation studio, to turn them into digital shows. The first project will be an adaptation of Lisa Greenwald’s “Friendship List” series, which will be accompanied by a graphic novel.
You’d be forgiven for being unaware of Toonstar, a studio that received some buzz early on for simplifying typically complex animation pipelines with AI, but has mostly remained under the radar. Its biggest claim to fame is producing the StEvEn and Parker YouTube series, which has amassed 3.38 million subscribers and sometimes has episodes reaching around a million views. It’s not something I’ve heard animation fans speaking about, though. And honestly, it was tough to sit through a few minutes of its sub-South Park animation.
“By leaning into the [AI] technology, we can make full episodes 80 percent faster and 90 percent cheaper than industry norms,” Toonstar co-founder John Attanasio told The New York Times last year. In that same interview, the company revealed that it uses AI across its production, including having it dub dialogue for international audiences, as well as working on storylines.
Toonstar initially pitched itself as an animation studio leaning into Web3 and NFTs, but those technologies seem virtually absent from the company’s presence today. Space Junk, one of its early series, was “put on hold for a variety of reasons,” a representative told Engadget. “It’s possible we’ll resurrect the concept in the future,” they added. Its original domain now points to a crypto gambling site.
“We’re honored to bring Friendship List to life as an animated series,” Attanasio said in a press release. “Our artist-centered approach ensures these beloved characters and stories stay true to the author’s vision, while our Ink & Pixel production technology enables fast, high-quality production at scale which unlocks the ability to meet audiences where and when they enjoy content today.”
Toonstar has certainly proved it can make “content” for YouTube. Can it actually produce an enjoyable animated show? That’s another question entirely.
Iranian strikes have reportedly knocked out key AWS availability zones in Bahrain and Dubai, leaving parts of both regions effectively offline for an extended period and forcing Amazon to urge teams and customers to shift workloads elsewhere. “These two regions continue to be impaired, and services should not expect to be operating with normal levels of redundancy and resiliency,” an internal Amazon memo reads. “We are actively working to free and reserve as much capacity as possible in the region for customers, and services should be scaled to the minimal footprint required to support customer migration.” Big Technology reports: With the war now nearing its sixth week, Iran has made Amazon infrastructure in the Gulf an economic target and is now eyeing its peers. Amazon’s Bahrain facilities have been hit multiple times, including a Wednesday strike that caused a fire. And its facilities in the UAE also sustained multiple hits. The IRGC is threatening multiple other U.S. tech giants, including Microsoft, Google, and Apple.
Amazon’s infrastructure in Bahrain and Dubai each comprises three “availability zones,” or clusters of compute. Both Bahrain and Dubai have zones that are “hard down” and others that are “impaired but functioning,” per the internal communication. “We do not have a timeline for when DXB and BAH will return to normal operations,” the internal post said.
Researchers have determined that Microsoft’s LinkedIn is scanning browser plug-ins and other information without permission, building user profiles from data it was never authorized to collect.
A European advocacy group claims LinkedIn is probing browser extensions through its website code. Fairlinked e.V. published its “BrowserGate” report alleging LinkedIn detects installed browser extensions by probing for known identifiers through JavaScript. The group says the technique reveals personally identifiable information. Safari users are less likely to be affected by this specific mechanism, based on how extension detection typically works across browsers. Apple’s browser model limits fingerprinting surfaces, which reduces how much information sites can infer from installed extensions.
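The report does not publish LinkedIn’s actual code, but the general technique it describes is well known: a page tries to load a resource that only a specific installed extension can serve, and whether the load succeeds reveals whether that extension is present. A minimal sketch of that approach (the extension ID and resource path below are hypothetical placeholders, not identifiers from the report):

```typescript
// Build the probe URL for a Chromium-style extension resource.
// Pages can only reach files an extension lists as web-accessible,
// which is exactly what makes installed extensions fingerprintable.
function probeUrl(extensionId: string, resourcePath: string): string {
  return `chrome-extension://${extensionId}/${resourcePath}`;
}

// Browser-only detection sketch: attempt to load the resource as an
// image. onload fires only if the extension is installed and exposes
// the resource; onerror means absent or blocked by the browser.
function detectExtension(extensionId: string, resourcePath: string): Promise<boolean> {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    img.src = probeUrl(extensionId, resourcePath);
  });
}
```

Safari’s extension model does not expose resources at predictable, stable URLs the same way, which is one reason this particular probe is less effective there.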