
Crypto World

Jack Dorsey says AI should replace the middle manager after Block (XYZ) cuts 4,000 jobs


In Jack Dorsey’s view of the world, the job most at risk from the AI revolution is the middle manager.

In a new essay, “From Hierarchy to Intelligence,” co-authored with Roelof Botha, the managing partner of Sequoia Capital and an investor in Block, Dorsey argues that his company’s decision to cut approximately 4,000 of its more than 10,000 employees was not a cost reduction but a permanent restructuring to replace middle managers with AI.

Corporate hierarchy, the essay argues, has always existed to solve one problem: routing information through organizations too large for any single person to oversee.

Managers aggregate context from below, act as messengers from above, and maintain alignment across teams. AI can now perform those functions continuously and at scale, the authors argue, making the messenger redundant.


In place of management layers, Dorsey and Botha propose two AI-driven “world models.”

One aggregates internal data from code, decisions, workflows, and performance metrics to create a continuously updated picture of company operations, replacing the context that managers traditionally carried.

The other maps customer and merchant behavior using transaction data from Cash App and Square.

Those models feed what Block calls an “intelligence layer” that composes financial products dynamically to fit market demand.


If done properly, the models absorb the coordination work that previously justified the existence of middle management.

Rather than building from fixed roadmaps, the essay proposes breaking Block’s business into modular capabilities, including payments, lending, card issuance and payroll.

When the system identifies a need — the essay’s example is a merchant facing a seasonal cash flow gap — it assembles a solution from existing capabilities. When it cannot, the missing capability defines what gets built next, replacing the product roadmap with a system-generated backlog.

The organizational structure is reduced accordingly. Block plans to operate with three roles: individual contributors who build the system, directly responsible individuals who own specific outcomes on 90-day cycles, and player-coaches who remain hands-on while developing people.


Dorsey told Wired in early March that the restructuring was triggered by a capability shift he observed in December in tools including Anthropic’s Opus 4.6 and OpenAI’s Codex 5.3, which he said were now capable of operating effectively in large codebases.

But current and former Block employees told the Guardian that roughly 95% of AI-generated code changes still require human modification, and that AI tools cannot yet lead in regulated areas like banking and money transfers.



US Law Firm Apologizes For AI Hallucinations in Filing


Sullivan & Cromwell’s Andrew Dietderich said the company has AI policies to prevent incorrect citations and other errors, but procedures weren’t followed on this occasion.

Wall Street law firm Sullivan & Cromwell has apologized to a federal judge after submitting a court filing that contained around 40 incorrect citations and other errors caused by AI hallucinations.

“We deeply regret that this has occurred,” Andrew Dietderich, co-head of Sullivan & Cromwell’s global restructuring team, wrote Friday in a letter to Chief Judge Martin Glenn of the US Bankruptcy Court for the Southern District of New York.


“The Firm and I are keenly aware of our responsibility to ensure the accuracy of all submissions including under Local Bankruptcy Rule 9011-1(d), and I take responsibility for the failure to do so,” he said of an emergency motion filed nine days earlier.

Excerpt from Andrew Dietderich’s letter to Chief Judge Martin Glenn. Source: Sullivan & Cromwell

The incident highlights the risk AI tools can pose in high-stakes professional work without proper oversight. A database managed by legal technologist Damien Charlotin has recorded 1,334 incidents of AI hallucinations in court filings around the world, including more than 900 in the US.

Charlotin pointed out that most of these hallucinations involve fabricated citations, though AI-generated legal arguments have also occasionally been identified.

Dietderich said Sullivan & Cromwell has policies in place for the use of AI tools, including a review of the citations they produce, but said the policies weren’t followed.

“Regrettably, this review process did not identify the inaccurate citations generated by AI, nor did it identify other errors that appear to have resulted in whole or in part from manual error.”

Sullivan & Cromwell is one of the largest law firms in the US by revenue, ranking 30th on the AmLaw Global 200. The firm also represented crypto exchange FTX in its bankruptcy case.


Sullivan & Cromwell is conducting an internal investigation

Dietderich said the law firm took “immediate remedial measures,” including a full review of the circumstances that led to the errors. 


The firm is also “evaluating whether further enhancements to its internal training and review processes are warranted,” Dietderich said.

Dietderich also noted that the errors were spotted by a rival law firm.


“I also called Boies Schiller Flexner LLP on Friday to thank them for bringing this matter to our attention and to apologize directly to them as well,” he said. 
