
The “Tin Blimp” Was Neither Tin Nor A Blimp: The Detroit ZMC-2 Story

That fireball was LZ37. Nobody wanted to see repeats post-war.
Image: “The great exploit of lieutenant Warnefort 1916 England” by Gordon Crosby, public domain.

After all the crashing and burning of Imperial Germany’s Zeppelins in the later part of WWI – once the Brits managed to build interceptors that could hit their lofty altitude, and figured out the trick of using incendiary rounds to set off the hydrogen lift gas – there was a certain desire in airship circles to avoid fires. In the USA, that mostly took the form of substituting helium for hydrogen. Sure, it didn’t lift quite as well, but it also didn’t explode.
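Just how much lift was given up is easy to estimate. A back-of-envelope sketch, using standard sea-level gas densities (my own textbook inputs, not figures from this article):

```python
# Rough lift comparison at sea level. Densities are standard textbook
# values in kg/m^3; treat this as an illustration, not a design figure.
rho_air = 1.225      # air
rho_h2 = 0.090       # hydrogen
rho_he = 0.179       # helium

lift_h2 = rho_air - rho_h2            # kg of lift per cubic meter of hydrogen
lift_he = rho_air - rho_he            # kg of lift per cubic meter of helium
penalty = 1 - lift_he / lift_h2       # helium gives up roughly 8% of the lift
```

In other words, helium sacrifices only about 8% of hydrogen’s gross lift, which is why the trade was so easy to accept.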

Still, supplies of helium were– and are– very much limited, and at least on a rigid Zeppelin, the hydrogen wasn’t even the most flammable part. As has become widely known, thanks in large part to the Mythbusters episode about the Hindenburg disaster, the doped cotton skin in use in those days was more flammable than some firestarters you can buy these days.

That’s a problem, because, as came up in the comments of our last airship article, rigid airships beat blimps largely on Rule of Cool. Who invented the blimp? Well, arguably it was Henri Giffard with his steam-driven airship in 1852, but not many people have ever heard his name. Who invented the rigid airship? You know his name: Ferdinand Adolf Heinrich August Graf von Zeppelin. No relation. Probably. Well, admittedly most people don’t know the full name, but Count Zeppelin is still practically a household name over a century after his death. His invention was just that much cooler.

That unavoidable draw of coolness led to the Detroit Aircraft Company and their amazing tin blimp. The idea was the brainchild of a man named Ralph Upson, and it was startling in its simplicity: why not take the all-metal, monocoque design that was just then being so successfully applied to heavier-than-air flight, and use it to build an airship?

Of course everyone’s initial reaction to the idea is that it’s absurd: metal is too heavy to fly! They said that about airplanes once, too, but airships are surely a different matter. Airships must be lighter than air. Could a skin of aluminum really hold enough lift gas to keep itself in the air? Upson convinced no lesser a light than Henry Ford to back him, and the Detroit Aircraft Company ultimately found a customer for the design in the US Navy.

Schwarz’s unsuccessful airship, shortly before its crash.
Image credit: unknown, public domain.

It helped that Upson wasn’t exactly the first to come up with this idea: David Schwarz had tried to build a metal airship at the end of the 19th century. Arguably it is he who invented the rigid airship, not my aura-farming not-ancestor. His design had metal skin over an internal framework, rather than the lighter monocoque construction Upson was exploring. While it was by no means a success, being destroyed on its maiden flight, the fact that it had a maiden flight at all at least proved that metal structures could be made light enough to get off the ground.

The Detroit Aircraft Company’s first– and only, as it turned out– prototype was much more successful, as we will see. It was immediately nicknamed the “tin blimp” by the press when it was unveiled in 1929, but that name was incorrect in every particular. It wasn’t tin, and it wasn’t a blimp. Well, not exactly, anyway. More on that later.

How To Make a Metal Balloon

Compared to the various frames, longitudinal girders, bracing wires and fabric-backed gas bags of a Zeppelin-type airship, the ZMC-2’s balloon was simplicity itself. The balloon–if you can call it that–was a hollow spheroid built up of strips of 0.0095” (0.24 mm) Alclad sheeting. Alclad is a sort of metallic composite material: a sheet of duralumin coated with a very thin protective layer of pure aluminum to provide corrosion resistance. The ZMC-2 was actually the first major use of Alclad, but hardly the last. At least for skins, most aircraft aluminum is actually Alclad, as alloys with the desired strength-to-weight ratio are generally too vulnerable to corrosion to be exposed to the elements.

The cavernous interior of the ZMC-2’s gas ‘bag’, looking forwards. The ballonets have not yet been installed. Image credit unknown, via Aviation Rapture

So, contrary to popular belief, no tin was involved. And the sturdy aluminum spheroid was not at all flexible, so the ZMC-2 was not really any kind of blimp. It also was not, technically, a Zeppelin. It was a whole new beast: a metalclad airship.

There is a film of the ship being built, and it’s rather fascinating. The strips of Alclad are rolled into conical sections and riveted together, with a bituminous material serving as sealant. Even today, you would not want to weld this material, so instead three and a half million 0.035” (0.89 mm) rivets hold the plates together. A special automated riveting machine was invented for the construction of the metalclad airship, which “sewed” three rows simultaneously at a rate of five thousand rivets per hour.

Just like in most monocoque airplanes, then and now, the skin didn’t carry the entire load: there were five circular frames, flanged and full of lightening holes just like the ribs of an aeroplane fuselage, of various diameters to help the ‘gas bag’ hold its shape. The gondola would attach to two of these.


Amazingly, with all of those rivets and the low-tech sealant, the metalclad held helium much better than its rivals. Yes, helium. While more expensive than hydrogen, the US Navy had already transitioned away from that more volatile gas and had no interest in going back. All of their groundside infrastructure was centered around helium. If that meant that the fireproof metalclad would not be able to lift quite so much as it otherwise might, well, too bad.

By the time the ZMC-2 got to Lakehurst as pictured here, only helium was on tap.
Image: Naval History and Heritage Command

OK, It’s a Bit Like a Blimp

Aside from outward appearance, the metalclad airship is similar to a blimp in some respects. For one, like the blimps that would go on to serve into and well past WWII, and unlike every Zeppelin ever built, the metalclad design had no internal subdivisions. The great metal balloon, 52’ 8” (16 m) in diameter and 149’ 5” (45.5 m) long, held two air bladders, one fore and one aft, but was otherwise cavernously empty.

Just like the blimps, those air bladders were used for trim: by pressurizing the fore bladder, the nose becomes heavy and trims the blimp down; likewise pressurizing the rear bladder trims the nose upwards. With both under pressure, the overall excess lift of the gasbag is reduced slightly, though the hull was not designed to withstand enough pressure for that to be notably useful for adjusting overall buoyancy. The maximum the ZMC-2’s hull could take was said to be about two inches of water, or 0.07 PSIg (0.5 kPa).
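Those pressure figures check out; a quick conversion sketch (the water-column constant is a standard physical value, not from the article):

```python
# Convert the quoted hull limit of ~2 inches of water column.
PA_PER_INCH_H2O = 249.1          # pascals per inch of water column
p_pa = 2 * PA_PER_INCH_H2O       # ~498 Pa, i.e. ~0.5 kPa
p_psi = p_pa / 6894.76           # ~0.072 psi, matching the quoted ~0.07 PSIg
```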

Also like a blimp, that pressure was required to resist the force of aerodynamic drag, at least at high speeds. The aluminum skin could hold its own shape, obviously, and even at low speeds it was safe to fly at atmospheric pressure, but at speeds above about half the never-exceed speed (VNE) there was a risk of buckling the nose. So, like a blimp–or the balloon tanks on the much later Atlas rockets–gas pressure was used as reinforcement. For that reason, there was much consternation at the time–and since–whether to count the metalclad as a rigid or non-rigid airship. Ultimately the US Navy, whose code was “Z” for airship and “R” for rigid or “S” for non-rigid, called it ZMC– z-airship, metal clad. That dodged the issue well enough.

A larger ship might have been able to afford the weight of stronger aluminum to take the buffeting of high-speed flight, thanks to the square-cube law, but the comparatively tiny ZMC-2 lacked that lift capacity. Even larger ships were always intended to use pressure-reinforcement; it’s a key part of the metalclad concept. Why waste lift capacity on metal when the gas can do it for you? As it was, the useful load of the prototype ZMC-2 was only 750 lbs (340 kg). The ZMC-2 wasn’t designed for useful load, though; it was only ever meant as a testbed.


Flying the Tin Blimp

As a testbed, the ZMC-2 was reasonably successful, and also a complete failure. It was reasonably successful in that its logbooks recorded 2,265 incident-free hours over 725 flights between its debut in August 1929 and its grounding in August 1939. In those ten years, it was found to fly well, in spite of its oddities.

The control car, with its crew of two or three–plus four passengers–and a pair of 220 HP Wright Whirlwind engines, would not have looked out of place on a blimp of similar size. Its overall size was not unlike the blimps Goodyear was flying at the time. Nor was the ZMC-2 particularly speedy or unusually slow, with a top speed of 70 mph (113 km/h). Aside from the metal-clad construction, two things made the ZMC-2 stand out amongst its contemporaries. The empennage — the “tail” — was perhaps unique in airship history– as near as I can tell, the Detroit Aircraft Company was the only builder to ever fit eight equally-spaced fins to the rear of an airship. All had control surfaces, and in practice, there was no control mixing: four acted as elevators, and four as rudders. It worked well enough, as the ship was apparently quite maneuverable.

The only thing normal in this photo is the gondola. Note the four visible tail surfaces– there are four more on the other side. Image: Screenshot from “Tin Balloon” (Silent) by zrsmovie.com

The other oddity helped with this maneuverability: the airship’s fineness ratio. It was oddly squat, at only 2.83. Like much in the world of airships, the concept of a fineness ratio is borrowed from the naval world– there, it is the ratio between a ship’s length and its beam, or width. For a flying ship, it’s the length to diameter of the gas bag, but the effect is the same. Picture a racing skiff vs a coracle, or a whitewater kayak. The racing skiff has a very high fineness ratio, which gives it high speed and low maneuverability as it cuts through the water. A coracle or whitewater kayak, on the other hand, has a low fineness ratio, often less than two, so that they can turn on a dime. They’re also incredibly difficult to keep going in a straight line. The ZMC-2 wasn’t quite that squat, but from the boating analogy I can only imagine it was a handful to keep on a straight course at times.
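For what it’s worth, the quoted fineness ratio follows directly from the hull dimensions given earlier; a quick check in Python:

```python
# Fineness ratio = length / diameter, from the ZMC-2's quoted dimensions.
length_ft = 149 + 5 / 12      # 149' 5"
diameter_ft = 52 + 8 / 12     # 52' 8"
fineness = length_ft / diameter_ft   # ~2.84, close to the quoted 2.83
```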

ZMC-2 looks positively squat at top-right, compared to ZR-3 Los Angeles at center and the J-2 blimp on the left. That has pros and cons but was not an inherent characteristic of the metalclad concept.
Image: Naval History and Heritage Command

The only reason I dare call the fabulous tin blimp a failure is that there was no ZMC-3, or -4, or any ZMC-N with N ≠ 2. It was indeed the only metalclad to ever fly.

One of a Kind

It wasn’t the cute little prototype’s fault; it was the timing. The Detroit Aircraft Company launched the ZMC-2 with big plans– Upson’s first design was for a larger express passenger/cargo airship of 1,600,000 cu.ft. (45,307 m³) gas volume, compared to the meager 200,000 cu.ft. (5,663 m³) of the prototype. There was interest in the bigger designs, but the ZMC-2 would need to prove the concept– which it did, in August 1929. Then in October, the stock market crashed, the Great Depression hit, and there was a lot less money available for pie-in-the-sky ideas like metalclad airships.
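Those volume conversions are straightforward to verify:

```python
# Cubic feet to cubic meters for the two designs mentioned above.
M3_PER_CUFT = 0.0283168
big = 1_600_000 * M3_PER_CUFT    # ~45,307 m^3 for the proposed ship
proto = 200_000 * M3_PER_CUFT    # ~5,663 m^3 for the ZMC-2
scale = big / proto              # the proposed ship held 8x the gas volume
```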

The interest was there, mind you. The U.S. Army liked what they saw, and went hat-in-hand to Congress in 1931 asking for $4.5 million to buy a 20-ton-lift model that would have been larger than the Graf Zeppelin. At that point, Congress felt there were other priorities. Later on, Detroit’s metalclad design was the Navy’s preferred choice to replace the ill-fated Akron and Macon, but there were problems with funding, and the Detroit Aircraft Company didn’t have a hangar big enough to build the thing in anyway.

The Army’s large metalclad might have looked like this, according to Popular Mechanics
Image: Popular Mechanics April 1931, via lynceans.org

That was the end of it. Though there was no notable metal fatigue or corrosion, the ZMC-2 flew less and less as the odds of a successor dropped. Some accounts claim it was grounded completely in 1939; others imply a handful of flights until US entry into WWII. With the war on, aluminum was in short supply and the ZMC-2 was broken up for scrap in 1941. It was simply too small for the antisubmarine duty the Navy’s blimps were being put to, and too weird to use as a training ship. Though the gondola was kept for a time as a learning aid for ground school, it was not preserved. It is likely that no physical trace of the fabulous tin blimp remains.

Legacy

Ultimately, the ZMC-2 was successful in proving that a metalclad airship could fly. During the various aborted attempts at an ‘airship renaissance’, proposals for metalclads or similarly-built composite ships have been put forth, but as with Ralph Upson’s larger designs, no capital sufficient for construction ever materialized.

In spite of my praise of the non-rigid airship’s ability to shift with the winds– going so far as to say “Blimps win” in my last article, based on the historical record– I for one would love to see a metalclad fly again. Maybe it’s just the Rule of Cool– rigids are cooler, and metalclads are cooler yet. Maybe the image of the doughty ZMC-2 buzzing about like a giant, clumsy bumble bee has made me sentimental for the design. Maybe it’s just that there’s potential there. Thanks to the great Nan ships, we’ve got a pretty good idea of what non-rigid airships are capable of. The ZMC-2 only scratched the surface of what a metalclad could do; perhaps someday we’ll find out. With modern aluminum-lithium alloys being that much lighter, or the ‘black’ aluminum of carbon composites, we could probably build something exceeding Ralph Upson’s wildest dreams… if there was money to pay for it.

12 years was a good run for a prototype. So long, and thanks for all the AvGas.
Image: Naval History and Heritage Command


Microsoft launches three in-house AI models in direct challenge to OpenAI

Six months after renegotiating the contract that once barred it from independently pursuing frontier AI, Microsoft has released three in-house models that directly challenge the partner it spent $13 billion cultivating. MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 are now available in Microsoft Foundry, and they do not carry OpenAI’s name anywhere on the label.

The models are the first publicly released output of the MAI Superintelligence team that Mustafa Suleyman, CEO of Microsoft AI, formed in November 2025 with a stated mission of pursuing what the company calls “humanist superintelligence.” In a March internal memo first reported by Business Insider, Suleyman wrote that he intended to focus all of his energy on superintelligence and deliver world-class models for Microsoft over the next five years. That ambition now has its first tangible evidence.

MAI-Transcribe-1 is, on paper, the most immediately disruptive of the three. The speech-to-text model claims the lowest word error rate across 25 languages on the FLEURS benchmark, averaging 3.8 per cent, and Microsoft says it outperforms OpenAI’s Whisper-large-v3 on all 25 languages, Google’s Gemini 3.1 Flash on 22 of 25, and ElevenLabs’ Scribe v2 on 15 of 25. It runs 2.5 times faster than Microsoft’s previous Azure Fast transcription service and is priced at $0.36 per hour of audio. Perhaps most revealing is the team that built it: just 10 people.

MAI-Voice-1 completes the audio loop. The text-to-speech model generates 60 seconds of natural-sounding audio in under one second on a single GPU and supports custom voice creation from a few seconds of sample audio. Combined with MAI-Transcribe-1 and a large language model of the customer’s choosing, it forms a complete voice pipeline that runs entirely on Microsoft infrastructure without any dependency on OpenAI’s technology.


MAI-Image-2, the oldest of the three, had already debuted at number three on the Arena.ai text-to-image leaderboard in March, placing it behind only Google’s Gemini 3.1 Flash and OpenAI’s GPT Image 1.5. The model was developed in collaboration with photographers, designers, and visual storytellers, and WPP, one of the world’s largest marketing groups, is among the first enterprise partners building with it at scale.


The strategic context matters more than the benchmarks. Until the September 2025 renegotiation, Microsoft’s original partnership agreement with OpenAI contractually prevented the company from independently pursuing general AI development. The revised memorandum of understanding changed that calculus fundamentally. Microsoft retained licensing rights to everything OpenAI builds through 2032, gained $250 billion in new Azure cloud business commitments, and crucially won the freedom to build competing models. Suleyman acknowledged the pivot directly: the contract renegotiation, he said, enabled Microsoft to independently pursue its own superintelligence.

The timing is deliberate. Jacob Andreou, formerly a senior vice-president at Snap, took over as executive vice-president of Copilot on 17 March, freeing Suleyman from day-to-day product responsibilities. The MAI models landed barely two weeks later. Microsoft also hired Ali Farhadi, the former chief executive of the Allen Institute for AI, for Suleyman’s superintelligence team in March, a recruitment signal that the ambitions extend well beyond transcription and image generation.

For OpenAI, the development creates an awkward dynamic. Microsoft remains its single largest investor and its primary cloud infrastructure provider, and the two companies continue to share a platform in Foundry, which hosts both OpenAI and Microsoft models. But OpenAI’s own push into commercial monetisation is accelerating in parallel, and the relationship is beginning to resemble two companies orbiting the same market with overlapping products rather than a partnership with a clear division of labour. OpenAI’s $110 billion raise in February, backed by SoftBank, Nvidia, and Amazon, valued the company independently of Microsoft at a level that makes the original partnership framing increasingly anachronistic.

The broader AI model market is fragmenting along similar lines. Anthropic’s $30 billion raise at a $380 billion valuation established it as a credible third force in enterprise AI, with run-rate revenue of $14 billion. Google continues to iterate rapidly on Gemini. The era in which OpenAI was the only game in town for frontier AI capabilities, and Microsoft was content to be its exclusive distribution channel, is definitively over.


Microsoft Foundry, the platform formerly known as Azure AI Foundry and before that Azure AI Studio (the second rebrand in twelve months), now serves developers at more than 80,000 enterprises including 80 per cent of Fortune 500 companies. That distribution advantage is what makes the MAI model family strategically significant: Microsoft does not need to beat OpenAI on every benchmark to shift enterprise spending toward in-house models. It needs to be competitive enough that customers choose the integrated option over the third-party alternative, a dynamic that the past year of AI industry consolidation has made increasingly plausible.

Suleyman has said it will take another year or two before the superintelligence team produces frontier-class language models. What landed this week is the foundation: a multimodal toolkit that gives Microsoft its own voice, ears, and eyes independent of OpenAI. The $13 billion partnership is not ending. But the premise on which it was built, that Microsoft needed OpenAI to compete in AI, is being quietly dismantled one model release at a time.


NYT Strands hints and answers for Saturday, April 4 (game #762)

Published

on

Looking for a different day?

A new NYT Strands puzzle appears at midnight each day for your time zone – which means that some people are always playing ‘today’s game’ while others are playing ‘yesterday’s’. If you’re looking for Thursday’s puzzle instead then click here: NYT Strands hints and answers for Thursday, April 2 (game #761).

Strands is the NYT’s latest word game after the likes of Wordle, Spelling Bee and Connections – and it’s great fun. It can be difficult, though, so read on for my Strands hints.


Karpathy shares ‘LLM Knowledge Base’ architecture that bypasses RAG with an evolving markdown library maintained by AI

AI vibe coders have yet another reason to thank Andrej Karpathy, who coined the term.

The former Director of AI at Tesla and co-founder of OpenAI, now running his own independent AI project, recently posted on X describing an “LLM Knowledge Bases” approach he’s using to manage various topics of research interest.

By building a persistent, LLM-maintained record of his projects, Karpathy is solving the core frustration of “stateless” AI development: the dreaded context-limit reset.

As anyone who has vibe coded can attest, hitting a usage limit or ending a session often feels like a lobotomy for your project. You’re forced to spend valuable tokens (and time) reconstructing context for the AI, hoping it “remembers” the architectural nuances you just established.


Karpathy proposes something simpler, and more loosely and messily elegant, than the typical enterprise solution of a vector database and RAG pipeline.

Instead, he outlines a system where the LLM itself acts as a full-time “research librarian”—actively compiling, linting, and interlinking Markdown (.md) files, the most LLM-friendly and compact data format.

By diverting a significant portion of his “token throughput” into the manipulation of structured knowledge rather than boilerplate code, Karpathy has surfaced a blueprint for the next phase of the “Second Brain”—one that is self-healing, auditable, and entirely human-readable.

Beyond RAG

For the past three years, the dominant paradigm for giving LLMs access to proprietary data has been Retrieval-Augmented Generation (RAG).


In a standard RAG setup, documents are chopped into arbitrary “chunks,” converted into mathematical vectors (embeddings), and stored in a specialized database.

When a user asks a question, the system performs a “similarity search” to find the most relevant chunks and feeds them into the LLM.

Karpathy’s approach, which he calls LLM Knowledge Bases, rejects the complexity of vector databases for mid-sized datasets.

Instead, it relies on the LLM’s increasing ability to reason over structured text.
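For contrast, the similarity-search step at the heart of a conventional RAG pipeline can be sketched in a few lines. This is a toy illustration with hand-made vectors standing in for real embeddings; no actual vector database or embedding model is involved:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Fake 3-dimensional "embeddings" for three document chunks.
chunks = {
    "airship history": [0.9, 0.1, 0.0],
    "helium supply":   [0.1, 0.9, 0.2],
    "rivet spacing":   [0.0, 0.2, 0.9],
}
query = [0.15, 0.85, 0.1]   # pretend embedding of a question about helium

# Retrieval = pick the nearest chunk; it would then be fed to the LLM.
best = max(chunks, key=lambda name: cosine(chunks[name], query))
```

The key point: the match is purely geometric. Nothing in the system knows why two chunks are related, which is exactly the opacity the markdown-wiki approach trades away.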

The system architecture, as visualized by X user @himanshu as part of the wider reaction to Karpathy’s post, functions in three distinct stages:

  1. Data Ingest: Raw materials—research papers, GitHub repositories, datasets, and web articles—are dumped into a raw/ directory. Karpathy utilizes the Obsidian Web Clipper to convert web content into Markdown (.md) files, ensuring even images are stored locally so the LLM can reference them via vision capabilities.

  2. The Compilation Step: This is the core innovation. Instead of just indexing the files, the LLM “compiles” them. It reads the raw data and writes a structured wiki. This includes generating summaries, identifying key concepts, authoring encyclopedia-style articles, and—crucially—creating backlinks between related ideas.

  3. Active Maintenance (Linting): The system isn’t static. Karpathy describes running “health checks” or “linting” passes where the LLM scans the wiki for inconsistencies, missing data, or new connections. As community member Charly Wargnier observed, “It acts as a living AI knowledge base that actually heals itself.”
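The ingest-and-compile stages above can be sketched as a simple loop. Everything here is hypothetical: the function names, directory layout, and the `summarize` stub (which stands in for a real LLM call) are illustrative, not Karpathy’s actual scripts:

```python
from pathlib import Path

def summarize(text: str) -> str:
    # Stub standing in for an LLM call; a real system would send `text`
    # to a model and get back a summary, key concepts, and backlinks.
    first_line = text.strip().splitlines()[0]
    return f"Summary: {first_line}"

def compile_wiki(raw_dir: Path, wiki_dir: Path) -> list:
    """Compile every raw .md file into a wiki article with a backlink."""
    wiki_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(raw_dir.glob("*.md")):
        article = wiki_dir / src.name
        body = src.read_text()
        # Each compiled article carries a summary plus a backlink to its source.
        article.write_text(f"{summarize(body)}\n\nSource: [[raw/{src.name}]]\n")
        written.append(article)
    return written
```

A linting pass would be a similar loop over the wiki directory, asking the model to flag contradictions or missing links instead of writing fresh articles.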

By treating Markdown files as the “source of truth,” Karpathy avoids the “black box” problem of vector embeddings. Every claim made by the AI can be traced back to a specific .md file that a human can read, edit, or delete.

Implications for the enterprise

While Karpathy’s setup is currently described as a “hacky collection of scripts,” the implications for the enterprise are immediate.

As entrepreneur Vamshi Reddy (@tammireddy) noted in response to the announcement: “Every business has a raw/ directory. Nobody’s ever compiled it. That’s the product.”

Karpathy agreed, suggesting that this methodology represents an “incredible new product” category.


Most companies currently “drown” in unstructured data—Slack logs, internal wikis, and PDF reports that no one has the time to synthesize.

A “Karpathy-style” enterprise layer wouldn’t just search these documents; it would actively author a “Company Bible” that updates in real-time.

As AI educator and newsletter author Ole Lehmann put it on X: “i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use, your bookmarks, your read-later app, your podcast app, your saved threads.”

Eugen Alpeza, co-founder and CEO of AI enterprise agent builder and orchestration startup Edra, noted in an X post: “The jump from personal research wiki to enterprise operations is where it gets brutal. Thousands of employees, millions of records, tribal knowledge that contradicts itself across teams. Indeed, there is room for a new product and we’re building it in the enterprise.”


As the community explores the “Karpathy Pattern,” the focus is already shifting from personal research to multi-agent orchestration.

A recent architectural breakdown by @jumperz, founder of AI agent creation platform Secondmate, illustrates this evolution through a “Swarm Knowledge Base” that scales the wiki workflow to a 10-agent system managed via OpenClaw.

The core challenge of a multi-agent swarm—where one hallucination can compound and “infect” the collective memory—is addressed here by a dedicated “Quality Gate.”

Using the Hermes model (trained by Nous Research for structured evaluation) as an independent supervisor, every draft article is scored and validated before being promoted to the “live” wiki.


This system creates a “Compound Loop”: agents dump raw outputs, the compiler organizes them, Hermes validates the truth, and verified briefings are fed back to agents at the start of each session. This ensures that the swarm never “wakes up blank,” but instead begins every task with a filtered, high-integrity briefing of everything the collective has learned.

Scaling and performance

A common critique of non-vector approaches is scalability. However, Karpathy notes that at a scale of ~100 articles and ~400,000 words, the LLM’s ability to navigate via summaries and index files is more than sufficient.

For a departmental wiki or a personal research project, the “fancy RAG” infrastructure often introduces more latency and “retrieval noise” than it solves.

Tech podcaster Lex Fridman (@lexfridman) confirmed he uses a similar setup, adding a layer of dynamic visualization:


“I often have it generate dynamic html (with js) that allows me to sort/filter data and to tinker with visualizations interactively. Another useful thing is I have the system generate a temporary focused mini-knowledge-base… that I then load into an LLM for voice-mode interaction on a long 7-10 mile run.”

This “ephemeral wiki” concept suggests a future where users don’t just “chat” with an AI; they spawn a team of agents to build a custom research environment for a specific task, which then dissolves once the report is written.

Licensing and the ‘file-over-app’ philosophy

Technically, Karpathy’s methodology is built on an open standard (Markdown) but viewed through a proprietary-but-extensible lens (the note-taking and file-organization app Obsidian).

  • Markdown (.md): By choosing Markdown, Karpathy ensures his knowledge base is not locked into a specific vendor. It is future-proof; if Obsidian disappears, the files remain readable by any text editor.

  • Obsidian: While Obsidian is a proprietary application, its “local-first” philosophy and EULA (which allows for free personal use and requires a license for commercial use) align with the developer’s desire for data sovereignty.

  • The “Vibe-Coded” Tools: The search engines and CLI tools Karpathy mentions are custom scripts—likely Python-based—that bridge the gap between the LLM and the local file system.

This “file-over-app” philosophy is a direct challenge to SaaS-heavy models like Notion or Google Docs. In the Karpathy model, the user owns the data, and the AI is merely a highly sophisticated editor that “visits” the files to perform work.

Librarian vs. search engine

The AI community has reacted with a mix of technical validation and “vibe-coding” enthusiasm. The debate centers on whether the industry has over-indexed on Vector DBs for problems that are fundamentally about structure, not just similarity.


Jason Paul Michaels (@SpaceWelder314), a welder using Claude, echoed the sentiment that simpler tools are often more robust:

“No vector database. No embeddings… Just markdown, FTS5, and grep… Every bug fix… gets indexed. The knowledge compounds.”
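That stack is minimal enough to sketch in stdlib Python. The filenames and note text below are made up; the point is that SQLite’s FTS5 extension (bundled with most Python builds) gives you full-text search over markdown with no embeddings at all:

```python
import sqlite3

# Index a couple of hypothetical markdown notes in an in-memory FTS5 table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(path, body)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [
        ("notes/rag.md", "retrieval augmented generation basics"),
        ("notes/fts.md", "full text search with sqlite fts5"),
    ],
)
# Full-text query: which note mentions retrieval?
rows = conn.execute("SELECT path FROM notes WHERE notes MATCH 'retrieval'").fetchall()
```

Grep-level simplicity, but with tokenization and ranking handled by SQLite.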

However, the most significant praise came from Steph Ango (@Kepano), co-creator of Obsidian, who highlighted a concept called “Contamination Mitigation.”

He suggested that users should keep their personal “vault” clean and let the agents play in a “messy vault,” only bringing over the useful artifacts once the agent-facing workflow has distilled them.

Which solution is right for your enterprise vibe coding projects?

| Feature | Vector DB / RAG | Karpathy’s Markdown Wiki |
| --- | --- | --- |
| Data Format | Opaque Vectors (Math) | Human-Readable Markdown |
| Logic | Semantic Similarity (Nearest Neighbor) | Explicit Connections (Backlinks/Indices) |
| Auditability | Low (Black Box) | High (Direct Traceability) |
| Compounding | Static (Requires re-indexing) | Active (Self-healing through linting) |
| Ideal Scale | Millions of Documents | 100 – 10,000 High-Signal Documents |

The “Vector DB” approach is like a massive, unorganized warehouse with a very fast forklift driver. You can find anything, but you don’t know why it’s there or how it relates to the pallet next to it. Karpathy’s “Markdown Wiki” is like a curated library with a head librarian who is constantly writing new books to explain the old ones.

The next phase

Karpathy’s final exploration points toward the ultimate destination of this data: Synthetic Data Generation and Fine-Tuning.

As the wiki grows and the data becomes more “pure” through continuous LLM linting, it becomes the perfect training set.


Instead of the LLM just reading the wiki in its “context window,” the user can eventually fine-tune a smaller, more efficient model on the wiki itself. This would allow the LLM to “know” the researcher’s personal knowledge base in its own weights, essentially turning a personal research project into a custom, private intelligence.

Bottom line: Karpathy hasn’t just shared a script; he’s shared a philosophy. By treating the LLM as an active agent that maintains its own memory, he has bypassed the limitations of “one-shot” AI interactions.

For the individual researcher, it means the end of the “forgotten bookmark.”

For the enterprise, it means the transition from a “raw/ data lake” to a “compiled knowledge asset.” As Karpathy himself summarized: “You rarely ever write or edit the wiki manually; it’s the domain of the LLM.” We are entering the era of the autonomous archive.


Tech

Edward ‘Big Balls’ Coristine Is Helping Out on Viral Fraud Videos Now

Published

on

Nick Shirley—the right-wing creator whose YouTube investigation sparked the Trump administration’s immigration crackdown in Minnesota—claims that his most recent video about alleged fraud in California was bolstered by data provided by none other than Edward Coristine, one of the first members of the so-called Department of Government Efficiency (DOGE) known online as “Big Balls.”

Coristine, who joined DOGE at 19 years old with no prior government experience, was staffed across several agencies including the Social Security Administration (SSA) and the Small Business Administration (SBA). Before joining DOGE, Coristine worked at Elon Musk’s Neuralink for several months and founded a startup known for hiring black hat hackers.

In an interview with Coristine published on Shirley’s YouTube channel on Thursday, Shirley claims that Coristine personally pulled data on Medicaid spending for businesses based in California as potential targets. Coristine nodded along, telling Shirley that the government must create more opportunities to crowdsource fraud investigations.

The information Coristine allegedly pulled for Shirley was from a dataset published by the DOGE team at the Department of Health and Human Services (HHS) in February. In a post to X at the time, the HHS DOGE team referred to it as “the largest Medicaid dataset in department history.” The post also claimed that the dataset could be used to “detect” large-scale fraud.

“After that, I went to California based off that dataset you had helped me extract, and these fraudsters also weren’t even trying to hide it,” Shirley told Coristine in Thursday’s interview.

Coristine said that by open-sourcing data on government spending, vigilante investigators like Shirley who are “more well-positioned” could uncover fraudulent payments. “You are someone who actually went to the places where we were spending all this money and confronted the people and got to know the truth. I think we just have to create more opportunities for that to happen. We have to continue to open source data,” Coristine said.

The intersection of the right’s favorite fraud influencer and one of the most notorious DOGE engineers exemplifies the next evolution of DOGE and the Trump administration’s fight against “waste, fraud, and abuse.”

Shirley’s videos have become key pieces of evidence for the Trump administration’s fraud and immigration crackdowns. When Shirley released his December video claiming to have uncovered more than $100 million in Somali-run childcare fraud in Minnesota, figures like Vice President JD Vance shared it. A surge of immigration agents was then sent to Minnesota, resulting in mass arrests, detentions, and the deaths of two protesters, Renee Good and Alex Pretti.

Early in their YouTube video, Shirley and Coristine directly tie fraud to immigrant communities and foreigners. “A lot of the money is being stolen and siphoned out of the country,” Coristine says, without providing evidence. “Once that money is in a suitcase to Somalia, that’s never coming back,” Shirley replies.

Later in the video, Shirley and Coristine cite specific examples of “waste and fraud” identified by DOGE, including funding for a “Sesame Street style children’s TV program in Iraq” and “tax policy consulting in Liberia.” Both programs were supported by the US Agency for International Development (USAID), which DOGE effectively shut down in the early months of 2025. Coristine also alleged that the SBA “did a terrible job,” particularly with loans during the height of COVID, and that there were “no checks at all on who’s receiving money, not even the most basic checks of like, if [a Social Security number] is real.”


Tech

From kelp pots to kilns: UW’s CoMotion Labs reveals 8 startups joining its newest climate cohort

Published

on

Emily Power, CEO of Ocean Made, shows the difference in the root structure of tomato plants grown in the startup’s kelp-based pots, on the left, versus plastic pots. (Ocean Made Photo)

The University of Washington’s CoMotion Labs has selected the second cohort of startups for its Climate Tech Incubator. The founders are tackling wide-ranging sustainability challenges including boosting EV adoption, reducing plastic use, supporting local food and beverage production, and developing smart climate strategies for cities.

The six-month program is located at the Seattle Climate Innovation Hub, a public-private partnership in the city’s downtown. The venue supports climate entrepreneurship beyond the incubator and hosts regular public events.

Eight early-stage startups participating in the program receive support in building teams, developing their business plans, forging strategic partnerships and preparing to make their pitch to investors. The cohort will share their progress at a demo day in September.

Jared Silvia, partner at Gliding Ant Ventures and former CEO of BlueDot Photonics, is a CoMotion mentor.

“If our region is serious about being a leader in climate tech, we need to find more ways to support more founders,” Silvia said. “The Climate Tech Incubator is a fantastic addition to the support ecosystem.”

Here are the participants:

Astraeus Ocean Systems is a maritime ag-tech startup, offering water-quality monitoring and crop modeling for shellfish and seaweed growing operations. The Bellingham, Wash.-based company’s founding team includes two Ph.D.-holding research scientists and a leader in business development.

Benchmark Star helps facilities managers comply with clean building regulations by automating regulation tracking and streamlining utility data reporting. The effort launched out of a Seattle Climate Innovation Hub hackathon last year and is led by Renee Gastineau, who has worked in clean energy for more than a decade.

Climate Solutions International is the brainchild of Jan Whittington, a UW urban planning professor. Whittington developed strategies for helping cities take action on climate change while making their infrastructure more resilient in a warming world. The World Bank funded her to apply the approach across 300 cities in 30 countries, and her startup is turning that expertise into a business.

EVQ is a one-stop, AI-powered platform helping drivers find, buy and operate electric vehicles, demystifying battery charging and other hurdles to EV ownership. The Seattle startup spun out of Coltura, a nonprofit promoting EV policies and research founded by EVQ CEO Matthew Metz.

FlameWise produces portable kilns for individuals and communities to turn unwanted woody debris into biochar that sequesters carbon and provides soil benefits. The kilns are a low-smoke alternative to burn piles. Seattle’s Korina Stark launched the effort following challenges to manage wood waste on her own 20-acre forested property.

OceanMade offers seaweed-based pots for nurseries, landscapers, gardeners and small farms that want to avoid plastic waste. The kelp containers also support root development and naturally degrade in the soil after planting. CEO Emily Power worked at Microsoft for nearly eight years before founding the Seattle startup in 2021.

REearthable is manufacturing biodegradable plastics from waste limestone recovered from mining operations. The material from the Seattle-area startup is suitable for cosmetics, food packaging and other applications. CEO Charlotte Wintermann is a serial entrepreneur with a background in sales, marketing and business strategy.

Seeking Ferments produces fermented beverages and kombucha that are brewed in Seattle from locally sourced ingredients. Co-founders Jeanette Macias and Lyz Macias launched their startup in 2019 and now sell their beverages online and at farmers markets and their “filling station.”

Related: UW’s CoMotion Labs names six startups for inaugural climate and green tech incubator


Tech

OpenAI’s Fidji Simo Is Taking Medical Leave Amid an Executive Shake-Up

Published

on

OpenAI announced a major reorganization on Friday as the company’s CEO of AGI deployment, Fidji Simo, takes medical leave to focus on her health. OpenAI president Greg Brockman will handle the product teams in Simo’s absence. Simo’s previous title was CEO of applications.

Brad Lightcap, the chief operating officer and one of CEO Sam Altman’s top deputies, is transitioning to a “special projects” role. Kate Rouch, the chief marketing officer, is taking a leave of absence to focus on her health. Rouch has been undergoing treatment for breast cancer. When she returns, it will be in “a different, more narrowly scoped role,” according to a note Simo shared with OpenAI staff that was viewed by WIRED.

“As I shared when I joined, I had a relapse of my neuroimmune condition a few weeks before starting the job,” Simo said in the note, which was sent in OpenAI’s “core” Slack channel. “It’s been a bit of a rollercoaster since, and the last month has been particularly rough health-wise. For my entire time here, I’ve postponed medical tests and new therapies to stay completely focused on the job and not miss a single day of work. I took time off for the first time two weeks before the break for some medical tests, and it’s now clear that I’ve pushed a little too far and I really need to try new interventions to stabilize my health.”

Simo is expected to take “several weeks” of leave according to her internal post.

In his new role, Lightcap will be in charge of the company’s forward-deployed engineers, who embed within enterprise organizations and help integrate OpenAI’s technology, among other duties.

OpenAI will begin searching for a new CMO, Simo said. The company is also looking for a chief communications officer to replace Hannah Wong, who left her position in January. Chris Lehane has taken over as the leader of the communications team in the interim.

“We have a strong leadership team focused on our biggest priorities: advancing frontier research, growing our global user base of nearly 1 billion users, and powering enterprise use cases,” said an OpenAI spokesperson in a statement. “We’re well-positioned to keep executing with continuity and momentum.”

Simo joined OpenAI in August 2025, where she took over many of the company’s consumer-facing products, including ChatGPT, Codex, and the social-video app Sora. She recently shuttered the Sora app and told staff that the company needed to cut side projects and refocus around its core products.

The decision comes as OpenAI eyes an IPO as soon as this year. The company recently raised $122 billion in the largest funding round the tech industry has ever seen, which valued the company at $852 billion.


Tech

Google's Gemma 4 AI can run on smartphones, no Internet required

Published

on


The two largest Gemma 4 models – 26B Mixture of Experts and 31B Dense – require an 80GB Nvidia H100 GPU to run unquantized in bfloat16 format. Google claims these models deliver “frontier intelligence on personal computers” for students, researchers, and developers, providing advanced reasoning capabilities for IDEs, coding assistants, and agentic workflows.
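The 80GB figure is easy to sanity-check with back-of-envelope arithmetic: bfloat16 stores two bytes per parameter, so the weights of a 31B dense model alone occupy roughly 58 GiB before activations and KV cache are counted:

```python
def bf16_weight_gib(params_billions):
    """Memory for model weights alone in bfloat16 (2 bytes per parameter)."""
    return params_billions * 1e9 * 2 / 2**30

# A 31B dense model needs ~58 GiB just for weights, which is why an 80GB
# H100 is the stated floor for unquantized bfloat16 inference.
print(round(bf16_weight_gib(31), 1))  # -> 57.7
```

The same arithmetic explains the smartphone claim in the headline: only heavily quantized, much smaller variants could fit in a phone's memory budget.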

Tech

After Cutting Down on ‘Side Quests,’ OpenAI Bought a Talk Show

Published

on

OpenAI has spent the last few weeks seemingly trying to refocus on using AI for business instead of what execs dubbed “side quests,” dumping its AI video generator and its plans for an adult-themed chatbot. So this week, of course, the company announced it’s jumping into the media business.

OpenAI said it was acquiring Technology Business Programming Network, better known as TBPN, which runs a 3-hour show streamed on weekdays that delves into the biggest topics — and brings in the biggest names — in tech business.

(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

OpenAI said it added TBPN to “help create a space for a real, constructive conversation about the changes AI creates,” Fidji Simo, CEO of AGI deployment at OpenAI, wrote in a message to employees shared by OpenAI. Simo said the company also wanted to take advantage of TBPN’s marketing prowess. “They have a strong pulse on where the industry is going, their comms and marketing ideas have really impressed me,” Simo said.

TBPN launched in October 2024 and has been compared to ESPN in how it covers tech — two guys at a big desk with news, analysis, commentary and banter about topics such as AI, crypto, startups and the defense industry. The show’s two hosts and co-founders, Jordi Hays and John Coogan, have had some of tech’s biggest names in studio — OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, entrepreneur Mark Cuban and Salesforce’s Marc Benioff, to name some.

The show is streamed live from 11 a.m. to 2 p.m. PT Monday through Friday on YouTube and X from the Ultradome, a studio on a Hollywood film lot. The show has 70,000 viewers daily and looks set to make more than $30 million in revenue this year, according to the Wall Street Journal.

TBPN co-host Hays acknowledged in a statement that the show has been “critical” of the AI industry.


“After getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” Hays said. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”

In an era of fast-moving media consolidation, it’s a fair question — can TBPN keep saying what they really think, even if that ruffles OpenAI’s feathers? In her statement, Simo said OpenAI wants the show to maintain its “editorial independence.”

“TBPN will continue to run their programming, choose their guests, and make their own editorial decisions,” she said. “That’s foundational to their credibility, and it’s something we’re explicitly protecting as part of this agreement.”

Altman, an OpenAI founder, echoed that sentiment in a post on X, also calling TBPN his “favorite tech show.”

“We want them to keep that going and for them to do what they do so well,” Altman posted. “I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions.”

The acquisition prompted some criticism and concern on social media as people wondered whether TBPN could really maintain editorial independence.

“Reporters doing accountability journalism are getting mowed down by mass layoffs & are now almost extinct — while the targets of their accountability reporting are giving hundreds of millions of dollars to pundits,” David Sirota, a longtime columnist and founder of the investigative news outlet The Lever, posted on X. “What stage of the media dystopia is this?”

TBPN will be under the supervision of OpenAI’s Chief Global Affairs Officer Chris Lehane, who joined the company in October 2024 and is the company’s main strategist in working with government officials. Decades ago, he worked in the White House of President Bill Clinton — helping to handle the Whitewater and Monica Lewinsky investigations — and as press secretary to Vice President Al Gore. Lehane also set up a pro-crypto super PAC called Fairshake that helped defeat anti-crypto candidates during the 2024 elections and helped Airbnb battle housing regulations.


Tech

AI animation studio Toonstar will turn books into digital shows for HarperCollins

Published

on

HarperCollins is tapping into AI to bring some of its book franchises to life. Specifically, the publisher is teaming up with Toonstar, an AI animation studio, to turn them into digital shows. The first project will be an adaptation of Lisa Greenwald’s “Friendship List” series, which will also be joined by a graphic novel.

You’d be forgiven for being unaware of Toonstar, a studio that received some buzz early on for simplifying typically complex animation pipelines with AI but has mostly remained under the radar. Its biggest claim to fame is producing the StEvEn and Parker YouTube series, which has amassed 3.38 million subscribers and sometimes has episodes reaching around a million views. It’s not something I’ve heard animation fans speaking about, though. And honestly, it was tough to sit through a few minutes of its sub-South Park animation.

“By leaning into the [AI] technology, we can make full episodes 80 percent faster and 90 percent cheaper than industry norms,” Toonstar co-founder John Attanasio, told The New York Times last year. In that same interview, the company revealed that it uses AI across its production, including having it dub dialog for international audiences, as well as working on storylines.

Toonstar initially pitched itself as an animation studio leaning into Web3 and NFTs, but those technologies seem virtually absent from the company’s presence today. Space Junk, one of its early series, was “put on hold for a variety of reasons,” a representative told Engadget. “It’s possible we’ll resurrect the concept in the future,” they added. Its original domain now points to a crypto gambling site.

“We’re honored to bring Friendship List to life as an animated series,” Attanasio said in a press release. “Our artist-centered approach ensures these beloved characters and stories stay true to the author’s vision, while our Ink & Pixel production technology enables fast, high-quality production at scale which unlocks the ability to meet audiences where and when they enjoy content today.”

Toonstar has certainly proved it can make “content” for YouTube. Can it actually produce an enjoyable animated show? That’s another question entirely.


Tech

Iran Strikes Leave Amazon Availability Zones ‘Hard Down’ In Bahrain and Dubai

Published

on

Iranian strikes have reportedly knocked out key AWS availability zones in Bahrain and Dubai, leaving parts of both regions effectively offline for an extended period and forcing Amazon to urge teams and customers to shift workloads elsewhere. “These two regions continue to be impaired, and services should not expect to be operating with normal levels of redundancy and resiliency,” an internal Amazon communication memo reads. “We are actively working to free and reserve as much capacity as possible in the region for customers, and services should be scaled to the minimal footprint required to support customer migration.” Big Technology reports: With the war now nearing its sixth week, Iran has made Amazon infrastructure in the Gulf an economic target and is now eyeing its peers. Amazon’s Bahrain facilities have been hit multiple times, including a Wednesday strike that caused a fire. And its facilities in the UAE also sustained multiple hits. The IRGC is threatening multiple other U.S. tech giants, including Microsoft, Google, and Apple.

Amazon’s Bahrain and Dubai regions each have three “availability zones,” or clusters of compute. Both Bahrain and Dubai have zones that are “hard down” and “impaired but functioning,” per the internal communication. “We do not have a timeline for when DXB and BAH will return to normal operations,” the internal post said.
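The memo's migration guidance boils down to ranking zones by health and steering workloads to the best remaining ones. A toy sketch of that triage; the zone names and statuses below are illustrative, not Amazon's internal operational data:

```python
# Hypothetical zone-status map echoing the memo's wording; zone names
# and statuses are illustrative, not real AWS operational data.
zone_status = {
    "me-south-1a": "hard down",
    "me-south-1b": "impaired but functioning",
    "me-south-1c": "available",
}

def usable_zones(status_map):
    """Zones a workload could migrate to, healthiest first."""
    rank = {"available": 0, "impaired but functioning": 1}
    candidates = [z for z, s in status_map.items() if s in rank]
    return sorted(candidates, key=lambda z: rank[status_map[z]])

# "hard down" zones are excluded entirely; impaired ones are a last resort.
print(usable_zones(zone_status))
```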


Copyright © 2025