OpenAI Releases New ChatGPT Model For Working In Excel and Google Sheets

OpenAI today released GPT-5.4, an upgraded ChatGPT model designed to be faster, cheaper, and more accurate for workplace tasks. The update also introduces tools that let ChatGPT work directly inside Excel and Google Sheets. Axios reports: GPT-5.4 is designed to be less error-prone, more efficient and better at workplace tasks like drafting documents, OpenAI said. The new model can create files in fewer tries with less back-and-forth than prior models, the company said. GPT-5.4 outperformed office workers 83% of the time on GDPval, an OpenAI benchmark measuring performance on real-world tasks across 44 occupations.

The model can also solve problems using fewer tokens, OpenAI says, which can translate to faster responses and lower costs. The company is also debuting OpenAI for Financial Services, a set of new tools that includes the spreadsheet-embedded version of ChatGPT as well as new apps and skills within ChatGPT. Partners include FactSet, MSCI, Third Bridge and Moody’s.

Everyone is a builder: Microsoft and OpenAI execs on the new era of AI-powered personal software

Vijaye Raji, OpenAI’s CTO of applications and former CEO at Statsig, speaks at GeekWire’s Agents of Transformation event in Seattle on March 24. (GeekWire Photos / Kevin Lisota)

Vijaye Raji wanted to figure out how to keep up with the firehose of Slack messages. After a couple of prompts, he had a solution.

Raji, OpenAI’s CTO of applications, vibe-coded his own personal tool using Codex, OpenAI’s coding agent. It runs on his laptop and summarizes his messages, emails, and notifications every 15 minutes.

His story reflects how software in the age of AI agents is becoming something anyone can create on the fly — which could have major implications for the way “applications” are designed, built, and used.

“Everyone is going to be a builder,” said Raji, speaking at GeekWire’s Agents of Transformation event in Seattle on Tuesday. “You’re going to lower the threshold of what building is.”

GeekWire co-founder Todd Bishop interviews Vijaye Raji.

Raji said that when he has a new idea now, his first instinct isn’t to pitch it to a team and ask someone to code it up. Instead, he starts prototyping it himself using Codex.

That habit has become the norm across OpenAI, he said.

“People come to meetings, right before they start the meeting they send a prompt out, keep the laptop slightly open, and when the meeting ends you go back and see what it’s built,” Raji said.

During an earlier fireside chat, Charles Lamanna, Microsoft’s executive vice president of Business Applications & Agents, said he’s starting to see agents change the way his teams share information internally — shifting from static documents to lightweight, bespoke “mini web apps.”

In one recent example, a discussion about investment changes and team structure would have traditionally produced a spreadsheet and a PowerPoint deck. Instead, his group spun up an interactive web app that pulled live data from Microsoft’s employee directory and funding systems, letting leaders click through different scenarios in real time.

Charles Lamanna, Microsoft’s executive vice president of Business Applications & Agents.

He described a similar shift in customer meeting prep, where a set of internal agents automatically assembles product telemetry, CRM data, and account notes — work that used to take hours of manual effort.

The broader potential impact goes beyond any single tool. And the underlying technology continues to improve at a rapid pace. Raji described the current era as one of “capability overhang” — the idea that models can do far more than people are asking of them.

“People need to start adapting and learning,” he said. “What more could they do with these models? What more could they do with these agents? The people that are able to do that and go to that level are many, many times more productive and many more times able to accomplish larger tasks than those that haven’t.”

The AI skills gap is here, says AI company, and power users are pulling ahead

Anthropic’s latest research suggests that while AI is rapidly changing the way work gets done, it hasn’t meaningfully eliminated jobs. At least, not yet. But beneath what Anthropic’s head of economics, Peter McCrory, says is a “still healthy” labor market, early signs are pointing to uneven impacts, especially for younger workers just entering the workforce. 

In an interview on the sidelines of the Axios AI Summit in Washington, D.C., McCrory said the company’s newest economic impact report finds little evidence of widespread job displacement so far. 

“There’s no material difference in unemployment rates” between workers who use Claude for the “most central task of their job in automated ways” — like technical writers, data entry clerks, and software engineers — and workers in jobs less exposed to AI that require “physical interaction and dexterity with the real world.” 

But with AI adoption spreading across industries, that could shift — fast. If Anthropic CEO Dario Amodei is to be believed, AI could wipe out half of all entry-level white-collar jobs and push unemployment as high as 20% within the next five years.

“Displacement effects could materialize very quickly, so you want to establish a monitoring framework to understand that before it materializes so that we can catch it as it’s happening and ideally identify the appropriate policy response,” McCrory told TechCrunch.

Staying ahead of those trends is why tracking AI growth, adoption, and diffusion is so important, he said.

In theory, McCrory said, AI models like Claude can do almost anything a computer can do. In practice, most users are only scratching the surface of those capabilities.

He said Anthropic looked at which roles involve tasks that AI is particularly good at, that are already being automated, and that are tied to real workplace use cases — the areas most likely to signal where displacement could emerge. 

Anthropic’s fifth economic impact report, released Tuesday, also found that even where there hasn’t been much displacement yet, there’s a growing skills gap between earlier Claude adopters and newcomers.

Earlier adopters are more likely to get significantly more value from the model, using it for work-related tasks rather than casual or one-off purposes, and in more sophisticated ways, such as treating it as a “thought partner” for iteration and feedback.

McCrory said the findings suggest AI is becoming a technology that rewards those who already know how to use it — and that workers who can effectively incorporate it into their work will increasingly have an edge.

That advantage isn’t evenly distributed geographically, either. The report also found that “Claude is used more intensely in high-income countries, within the U.S. in places with more knowledge workers, and for a relatively small set of specialized tasks and occupations.”

In other words, despite promises of AI as an equalizer, adoption may already be tilting toward the wealthy and could amplify those advantages as power users pull further ahead.

Bring back the joy of buying new tech and toys

Imagine the perfect online shop. It’d offer great deals on the biggest tech, gaming and entertainment brands. It’d give you same-day delivery without charging extra. It’d have real humans answering the phone and 24/7 customer service. And it would stock everything from AirPods and action cameras to air fryers and large appliances.

We’ve just described Joybuy, a fantastic new place to shop for almost anything – and to celebrate its UK launch it’s offering amazing launch deals, including up to 50% off selected items from big brands, a “spend £99 and save £10” offer on selected products and a “spend £200 and save £100” deal on selected home appliances. And while we’re here mainly for the tech deals, you’ll also be able to get deep discounts on appliances, beauty, groceries and more.

The Joybuy home page. (Image credit: Joybuy)

Google’s new TurboQuant algorithm speeds up AI memory 8x, cutting costs by 50% or more

As Large Language Models (LLMs) expand their context windows to process massive documents and intricate conversations, they encounter a brutal hardware reality known as the “Key-Value (KV) cache bottleneck.”

Every word a model processes must be stored as a high-dimensional vector in high-speed memory. For long-form tasks, this “digital cheat sheet” swells rapidly, devouring the graphics processing unit (GPU) video random access memory (VRAM) used during inference and progressively slowing the model down.
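To get a rough sense of the scale involved, here is a back-of-the-envelope calculation. The model shape below (an 8B-class model with grouped-query attention) and the context length are illustrative assumptions, not figures from Google's research.

```python
# Rough KV cache size for one long-context request, under assumed model dimensions.
n_layers, n_kv_heads, head_dim = 32, 8, 128     # assumed 8B-class model with GQA
seq_len, bytes_per_value = 128_000, 2           # 128K-token context, fp16 values

# Keys and values are both cached, hence the factor of 2.
kv_bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value
print(f"fp16 KV cache: {kv_bytes / 2**30:.1f} GiB per sequence")    # ~15.6 GiB
print(f"after a ~6x compression: {kv_bytes / 6 / 2**30:.1f} GiB")   # ~2.6 GiB
```

At that size, a single long conversation can consume most of a consumer GPU's VRAM before the model weights are even counted, which is the bottleneck described here.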

But have no fear, Google Research is here: yesterday, the unit within the search giant released its TurboQuant algorithm suite, a software-only breakthrough that provides the mathematical blueprint for extreme KV cache compression. It delivers a roughly 6x average reduction in the amount of KV memory a given model uses and up to an 8x speedup in computing attention logits, which could cut costs by more than 50% for enterprises that implement it on their models.

The theoretically grounded algorithms and associated research papers are available now publicly for free, including for enterprise usage, offering a training-free solution to reduce model size without sacrificing intelligence.

The arrival of TurboQuant is the culmination of a multi-year research arc that began in 2024. While the underlying mathematical frameworks—including PolarQuant and Quantized Johnson-Lindenstrauss (QJL)—were documented in early 2025, their formal unveiling today marks a transition from academic theory to large-scale production reality.

The timing is strategic, coinciding with the planned presentation of these findings at the upcoming International Conference on Learning Representations (ICLR 2026) in Rio de Janeiro, Brazil, and the Annual Conference on Artificial Intelligence and Statistics (AISTATS 2026) in Tangier, Morocco.

By releasing these methodologies under an open research framework, Google is providing the essential “plumbing” for the burgeoning “Agentic AI” era: massive, efficient, and searchable vectorized memory that can finally run on the hardware users already own. The release is already believed to be affecting the stock market, pushing down the share prices of memory providers as traders read it as a sign that less memory will be needed (a reading that may prove incorrect, given Jevons’ paradox).

The Architecture of Memory: Solving the Efficiency Tax

To understand why TurboQuant matters, one must first understand the “memory tax” of modern AI. Traditional vector quantization has historically been a “leaky” process.

When high-precision decimals are compressed into simple integers, the resulting “quantization error” accumulates, eventually causing models to hallucinate or lose semantic coherence.

Furthermore, most existing methods require “quantization constants”—meta-data stored alongside the compressed bits to tell the model how to decompress them. In many cases, these constants add so much overhead—sometimes 1 to 2 bits per number—that they negate the gains of compression entirely.
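The arithmetic behind that overhead is simple. The metadata widths and block sizes below are typical choices in block-wise quantization schemes, assumed here purely for illustration.

```python
# Per-value overhead of storing quantization constants (a scale and a zero point)
# for each block of values. Widths and block sizes are illustrative assumptions.
def overhead_bits_per_value(block_size, scale_bits=16, zero_point_bits=8):
    return (scale_bits + zero_point_bits) / block_size

for block in (16, 32, 64):
    print(f"block of {block}: +{overhead_bits_per_value(block):.2f} bits per value")
# block of 16: +1.50, block of 32: +0.75, block of 64: +0.38
```

With small blocks, the metadata alone approaches the 1 to 2 extra bits per number mentioned above, which is why eliminating those constants matters so much at 2- and 3-bit precision.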

TurboQuant resolves this paradox through a two-stage mathematical shield. The first stage utilizes PolarQuant, which reimagines how we map high-dimensional space.

Rather than using standard Cartesian coordinates (X, Y, Z), PolarQuant converts vectors into polar coordinates consisting of a radius and a set of angles.

The breakthrough lies in the geometry: after a random rotation, the distribution of these angles becomes highly predictable and concentrated. Because the “shape” of the data is now known, the system no longer needs to store expensive normalization constants for every data block. It simply maps the data onto a fixed, circular grid, eliminating the overhead that traditional methods must carry.
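A toy NumPy sketch of that recipe (rotate, convert to polar form, quantize the angles on a fixed grid) might look like the following. Splitting the rotated vector into 2D pairs and keeping the radii in full precision are simplifications made for this illustration, not a description of Google's actual PolarQuant implementation.

```python
import numpy as np

# Toy illustration of rotate-then-polar-quantize; not Google's production algorithm.
rng = np.random.default_rng(0)
d, angle_bits = 128, 4                                   # vector dim, bits per angle
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))         # fixed random rotation
grid = 2 * np.pi / (1 << angle_bits)                     # fixed circular grid

def encode(v):
    y = (Q @ v).reshape(-1, 2)                           # rotate, group into 2D pairs
    radii = np.linalg.norm(y, axis=1)                    # kept in full precision here
    theta = np.arctan2(y[:, 1], y[:, 0])                 # angle of each pair
    codes = np.round(theta / grid).astype(np.int64) % (1 << angle_bits)
    return radii, codes                                  # no per-block scale constants

def decode(radii, codes):
    theta = codes * grid
    y = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=1)
    return Q.T @ y.reshape(-1)                           # undo the rotation

v = rng.standard_normal(d)
v_hat = decode(*encode(v))
print("relative reconstruction error:", np.linalg.norm(v - v_hat) / np.linalg.norm(v))
```

Because every angle follows the same known distribution after the rotation, the grid can be fixed in advance, which is the property credited here with removing per-block normalization constants.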

The second stage acts as a mathematical error-checker. Even with the efficiency of PolarQuant, a residual amount of error remains. TurboQuant applies a 1-bit Quantized Johnson-Lindenstrauss (QJL) transform to this leftover data. By reducing each error number to a simple sign bit (+1 or -1), QJL serves as a zero-bias estimator. This ensures that when the model calculates an “attention score”—the vital process of deciding which words in a prompt are most relevant—the compressed version remains statistically identical to the high-precision original.
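The sign-bit idea can be sketched in a few lines of NumPy. The construction below follows the standard QJL recipe (a shared Gaussian projection, one sign bit per projection, plus a stored norm); the production variant may differ in its details.

```python
import numpy as np

# Minimal sketch of a 1-bit, QJL-style estimator for attention inner products.
rng = np.random.default_rng(1)
d, m = 128, 4096                             # vector dim, number of projections
S = rng.standard_normal((m, d))              # shared random Gaussian projection

def quantize_key(k):
    # Store only m sign bits plus a single scalar norm per key vector.
    return np.sign(S @ k), np.linalg.norm(k)

def estimate_dot(q, key_signs, key_norm):
    # E[sign(<s,k>) * <s,q>] = sqrt(2/pi) * <q,k> / ||k||, so rescaling by
    # sqrt(pi/2) * ||k|| gives an unbiased estimate of the inner product <q,k>.
    return np.sqrt(np.pi / 2) * key_norm * (key_signs @ (S @ q)) / m

q, k = rng.standard_normal(d), rng.standard_normal(d)
signs, norm = quantize_key(k)
print("true:", q @ k, " estimated:", estimate_dot(q, signs, norm))  # close, up to noise
```

That zero-bias property is what lets attention scores computed against the compressed keys match the full-precision ones in expectation.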

Performance benchmarks and real-world reliability

The true test of any compression algorithm is the “Needle-in-a-Haystack” benchmark, which evaluates whether an AI can find a single specific sentence hidden within 100,000 words.

In testing across open-source models like Llama-3.1-8B and Mistral-7B, TurboQuant achieved perfect recall scores, mirroring the performance of uncompressed models while reducing the KV cache memory footprint by a factor of at least 6x.

This “quality neutrality” is rare in the world of extreme quantization, where 3-bit systems usually suffer from significant logic degradation.

Beyond chatbots, TurboQuant is transformative for high-dimensional search. Modern search engines increasingly rely on “semantic search,” comparing the meanings of billions of vectors rather than just matching keywords. TurboQuant consistently achieves superior recall ratios compared to existing state-of-the-art methods like RabbiQ and Product Quantization (PQ), all while requiring virtually zero indexing time.

This makes it an ideal candidate for real-time applications where data is constantly being added to a database and must be searchable immediately. Furthermore, on hardware like NVIDIA H100 accelerators, TurboQuant’s 4-bit implementation achieved an 8x performance boost in computing attention logits, a critical speedup for real-world deployments.

Rapt community reaction

The reaction on X, obtained via a Grok search, included a mixture of technical awe and immediate practical experimentation.

The original announcement from @GoogleResearch generated massive engagement, with over 7.7 million views, signaling that the industry was hungry for a solution to the memory crisis.

Within 24 hours of the release, community members began porting the algorithm to popular local AI libraries like MLX for Apple Silicon and llama.cpp.

Technical analyst @Prince_Canuma shared one of the most compelling early benchmarks, implementing TurboQuant in MLX to test the Qwen3.5-35B model.

Across context lengths ranging from 8.5K to 64K tokens, he reported a 100% exact match at every quantization level, noting that 2.5-bit TurboQuant reduced the KV cache by nearly 5x with zero accuracy loss. This real-world validation echoed Google’s internal research, proving that the algorithm’s benefits translate seamlessly to third-party models.

Other users focused on the democratization of high-performance AI. @NoahEpstein_ provided a plain-English breakdown, arguing that TurboQuant significantly narrows the gap between free local AI and expensive cloud subscriptions.

He noted that models running locally on consumer hardware like a Mac Mini “just got dramatically better,” enabling 100,000-token conversations without the typical quality degradation.

Similarly, @PrajwalTomar_ highlighted the security and speed benefits of running “insane AI models locally for free,” expressing “huge respect” for Google’s decision to share the research rather than keeping it proprietary.

Market impact and the future of hardware

The release of TurboQuant has already begun to ripple through the broader tech economy. Following the announcement on Tuesday, analysts observed a downward trend in the stock prices of major memory suppliers, including Micron and Western Digital.

The market’s reaction reflects a realization that if AI giants can compress their memory requirements by a factor of six through software alone, the insatiable demand for High Bandwidth Memory (HBM) may be tempered by algorithmic efficiency.

As we move deeper into 2026, the arrival of TurboQuant suggests that the next era of AI progress will be defined as much by mathematical elegance as by brute force. By redefining efficiency through extreme compression, Google is enabling “smarter memory movement” for multi-step agents and dense retrieval pipelines. The industry is shifting from a focus on “bigger models” to “better memory,” a change that could lower AI serving costs globally.

Strategic considerations for enterprise decision-makers

For enterprises currently using or fine-tuning their own AI models, the release of TurboQuant offers a rare opportunity for immediate operational improvement.

Unlike many AI breakthroughs that require costly retraining or specialized datasets, TurboQuant is training-free and data-oblivious.

This means organizations can apply these quantization techniques to their existing fine-tuned models—whether they are based on Llama, Mistral, or Google’s own Gemma—to realize immediate memory savings and speedups without risking the specialized performance they have worked to build.

From a practical standpoint, enterprise IT and DevOps teams should consider the following steps to integrate this research into their operations:

Optimize Inference Pipelines: Integrating TurboQuant into production inference servers can reduce the number of GPUs required to serve long-context applications, potentially slashing cloud compute costs by 50% or more.

Expand Context Capabilities: Enterprises working with massive internal documentation can now offer much longer context windows for retrieval-augmented generation (RAG) tasks without the massive VRAM overhead that previously made such features cost-prohibitive.

Enhance Local Deployments: For organizations with strict data privacy requirements, TurboQuant makes it feasible to run highly capable, large-scale models on on-premise hardware or edge devices that were previously insufficient for 32-bit or even 8-bit model weights.

Re-evaluate Hardware Procurement: Before investing in massive HBM-heavy GPU clusters, operations leaders should assess how much of their bottleneck can be resolved through these software-driven efficiency gains.

Ultimately, TurboQuant proves that the limit of AI isn’t just how many transistors we can cram onto a chip, but how elegantly we can translate the infinite complexity of information into the finite space of a digital bit. For the enterprise, this is more than just a research paper; it is a tactical unlock that turns existing hardware into a significantly more powerful asset.

Baseball’s new robot umpires look like a compromise. They’re not.

For a sport that’s more than 150 years old, the opening of the 2026 Major League Baseball season is set to feature an unusual number of firsts. The official Opening Day on March 26 is the earliest in baseball history. The first official game of the season tonight between the Giants and the Yankees — which is Opening Night, not Opening Day, totally different — will be the first-ever game streamed on Netflix.

And chances are that some time during that game, a player will tap his helmet or hat after a pitch is thrown, challenging the umpire’s call and triggering baseball’s first-ever Automated Ball-Strike (ABS) system review. The robot umpires are here.

The system is remarkably straightforward. Each team gets two challenges per game, retaining them if successful and losing them if wrong. Only the pitcher, catcher, or batter can challenge, only on ball and strike calls, and only within two seconds of the pitch.

Once a challenge is made, a network of 12 high-speed cameras installed around the stadium tracks the pitch’s exact location, and then software creates a 3D model of the pitch’s trajectory — on the Jumbotron for everyone to see — against the batter’s individualized strike zone. The verdict is made instantly. The umpire doesn’t go to a monitor and reconsider for minutes, like in NFL or NBA replay. He is merely the conduit to announce what the machine has decided.

This change should in theory make everyone better off. Teams have an appeal in the event of a potential blown call at a crucial moment (such as the brutal game-ending strike call for the Dominican Republic in this month’s World Baseball Classic). Challenges are limited and rapidly decided, so the game doesn’t slow down. The automated system is accurate to within 0.25 inches — roughly the width of a pencil — and quick enough to catch an Aroldis Chapman 103-mph fastball. Human umpires are still largely in charge of the game.

All in all, the ABS system appears to be an ideal compromise — preserving human judgement while allowing machines to correct the worst mistakes. While the system isn’t AI-powered, it seems like an example of how humans and AI could fruitfully work together in the future, with humans firmly in the loop but aided by the machines.

Except there’s a problem with splitting the difference between human and machine. Once you’ve conceded that the machine is the final authority on whether a call is right — which is exactly what baseball has done here — you’ve quietly eliminated the case for having the human there at all. What might seem like a stable equilibrium isn’t stable at all.

Calling balls and strikes

You can see this breakdown already underway in the minor leagues, which have been experimenting with the ABS system for years. Baseball reporter Jayson Stark has written about umpires in the AAA minors who, having grown tired of being overturned for all to see by the machine, began to change the way they handled the game, “calling balls and strikes the way they think the robot would call them.”

Because the league has given the machine final say, the human behind the mask doesn’t stay independent — he starts mimicking the machine. The umpire — once the lord of the diamond, whose word was law — becomes in effect the rough draft for the AI. Human knowledge and expertise becomes degraded.

To which a baseball fan might respond, perhaps with more colorful language, “they’re all bums anyway.” Which wouldn’t be quite fair to our carbon-based umpires, not that fairness to umps has ever been a concern for baseball fans. MLB estimates that umpires call 94 percent of pitches correctly, which on one hand is good — I’m not sure I’m 94 percent accurate on anything — but on the other hand, means they’re still making mistakes on around 17 or 18 pitches a game on average.

And even though the data suggests umpires have actually been getting better, we’re now able to see replays and precise pitch-tracking data that make it crystal clear just when a call has been blown. A guy named Ethan Singer even created an independent project called Umpire Scorecards, which uses publicly available Statcast/pitch tracking data to score every umpire, every game. The new ABS system just ratifies what previous technology made obvious years ago.

So the technological assault on the umpire’s authority has been underway for some time, and while even the ABS system has its margin of error, the end result of introducing machines will be a more accurately called game. But real human skills will be lost along the way. The best catchers are experts at framing pitches to make them look like strikes, even if they aren’t. Good batters learn an umpire’s individual strike zone and adjust game to game. (The Red Sox great Ted Williams used to say there were three strike zones: his own, the pitcher’s, and the umpire’s.) All of these skills were built on human imperfection, and all of them will become less valuable even as machines make the game “fairer.”

The one-way street of automation

To get a glimpse of baseball’s possible future, just look at tennis.

In 2006, pro tennis introduced the Hawk-Eye challenges, which allowed players to appeal a limited number of line calls to an automated camera system. The players were, initially, not fans. (As Marat Safin put it: “Who was the genius who came up with this stupid idea?”)

But the logic, especially as the sport got faster and faster, was undeniable. By 2020, the US Open had eliminated human line judging altogether, and Wimbledon followed suit in 2025. Human umpires are still employed, but mostly for the purposes of match management; i.e., shushing the crowd. The challenge system turned out to be just a stop on the path to near full-scale automation. And now baseball is stepping onto the same road.

The ABS system is what you get when an institution knows that the machine is better at the job but isn’t ready to say so. That’s exactly the position that a lot of organizations find themselves in right now, as AI grows ever more capable. The result, for the moment, tends to be a hybrid approach that leaves too many workers feeling stressed and disempowered, while failing to capture the benefits of more complete automation.

But over time, automation tends to prove to be a one-way street. The question isn’t whether machines will eventually call balls and strikes. It’s how much longer the halfway point can hold — for those umpires we love to hate, and for the rest of us.

A version of this story originally appeared in the Future Perfect newsletter.

PolyShell attacks target 56% of all vulnerable Magento stores

Attacks leveraging the ‘PolyShell’ vulnerability in version 2 of Magento Open Source and Adobe Commerce installations are underway, targeting more than half of all vulnerable stores.

According to eCommerce security company Sansec, hackers started exploiting the critical PolyShell issue en masse last week, just two days after public disclosure.

“Mass exploitation of PolyShell started on March 19th, and Sansec has now found PolyShell attacks on 56.7% of all vulnerable stores,” Sansec says.

The researchers previously reported that the problem lies in Magento’s REST API, which accepts file uploads as part of a cart item’s custom options, allowing polyglot files that can lead to remote code execution or account takeover via stored cross-site scripting (XSS), if the web server configuration allows it.

Adobe released a fix in version 2.4.9-beta1 on March 10, 2026, but it has not yet reached the stable branch. BleepingComputer previously contacted Adobe to ask about when a security update addressing PolyShell will become available for production versions, but we have not received a response.

Meanwhile, Sansec has published a list of IP addresses that have been seen scanning for web stores vulnerable to PolyShell.

WebRTC skimmer

Sansec reports that in some of the attacks suspected to exploit PolyShell, the threat actor delivers a novel payment card skimmer that uses Web Real-Time Communication (WebRTC) to exfiltrate data.

WebRTC uses DTLS-encrypted UDP rather than HTTP, so it is more likely to evade security controls even on sites with strict Content Security Policy (CSP) controls like “connect-src.”

The skimmer is a lightweight JavaScript loader that connects to a hardcoded command-and-control (C2) server via WebRTC, bypassing normal signaling by embedding a forged SDP exchange.

It receives a second-stage payload over the encrypted channel, then executes it while bypassing CSP, primarily by reusing an existing script nonce, or falling back to unsafe-eval or direct script injection. Execution is delayed using ‘requestIdleCallback’ to reduce detection.

Sansec noted that this skimmer was detected on the e-commerce website of a car maker valued at over $100 billion, which did not respond to their notifications.

The researchers provide a set of indicators of compromise that can help defenders protect against these attacks.

Supreme Court Sides With Internet Provider In Copyright Fight Over Pirated Music

Longtime Slashdot reader JackSpratts writes: The Supreme Court unanimously said on Wednesday that a major internet provider could not be held liable for the piracy of thousands of songs online in a closely watched copyright clash. Music labels and publishers sued Cox Communications in 2018, saying the company had failed to cut off the internet connections of subscribers who had been repeatedly flagged for illegally downloading and distributing copyrighted music. At issue for the justices was whether providers like Cox could be held legally responsible and required to pay steep damages — a billion dollars or more in Cox’s case — if they knew that customers were pirating music but did not take sufficient steps to terminate their internet access.

In its opinion released (PDF) on Wednesday, the court said a company was not liable for “merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights.” Writing for the court, Justice Clarence Thomas said a provider like Cox was liable “only if it intended that the provided service be used for infringement” and if it, for instance, “actively encourages infringement.” Justice Sonia Sotomayor, joined by Justice Ketanji Brown Jackson, wrote separately to say that she agreed with the outcome but for different reasons. […] Cox called the court’s unanimous decision a “decisive victory” for the industry and for Americans who “depend on reliable internet service.”

“This opinion affirms that internet service providers are not copyright police and should not be held liable for the actions of their customers,” the company said.

Judge Rejects Government’s Weak Attempt To Memory-Hole DOGE Deposition Videos

from the melted-snowflakes dept

Last week we covered how the government successfully convinced Judge Colleen McMahon to order the plaintiffs in the DOGE/National Endowment for the Humanities (NEH) lawsuit to “claw back” the viral deposition videos they had posted to YouTube — videos showing DOGE operatives Justin Fox and Nate Cavanaugh stumbling through questions about how they used ChatGPT to decide which humanities grants to kill, and struggling mightily to define “DEI” despite it apparently being the entire basis for their work.

The government’s argument was that the videos had led to harassment and death threats against Fox and Cavanaugh — the same two who had no problem obliterating hundreds of millions in already approved grants with a simplistic ChatGPT prompt, but apparently couldn’t handle the public seeing them struggle to explain themselves under oath. The government argued the videos needed to come down. The judge initially agreed and ordered the plaintiffs to pull them. As we noted at the time, archivists had already uploaded copies to the Internet Archive and distributed them as torrents, because that’s how the internet works.

Well, now Judge McMahon has issued a full ruling on the government’s motion for a protective order, and has reversed course. The government’s motion is denied. The videos are now back up, with hours and hours of utter nonsense for you to enjoy.
The ruling is worth reading in full, because McMahon manages to be critical of both sides while ultimately landing firmly against the government’s attempt to suppress the videos. She spends a good chunk of the opinion scolding the plaintiffs for what she clearly views as a procedural end-run — they sent the full deposition videos to chambers on a thumb drive without ever filing them on the docket or seeking permission to do so, which she sees as a transparent attempt to manufacture a “judicial documents” argument that would give the videos a presumption of public access.

McMahon doesn’t buy it:

When deciding a motion for summary judgment, the Court wants only those portions of a deposition on which a movant actually relies, and does not want to be burdened with irrelevant testimony merely because counsel chose to, or found it more convenient to, submit it. And because videos cannot be filed on the public docket without leave of court, there was no need for the rule to contain a specific reference to video transcriptions; the only way to get such materials on the docket (and so before the Court) was to make a motion, giving the Court the opportunity to decide whether the videos should be publicly docketed. This Plaintiffs did not do.

But if Plaintiffs wanted to know whether the Court’s rule applied to video-recorded depositions, they could easily have sought clarification – just as they could easily have filed a motion seeking leave to have the Clerk of Court accept the videos and place them on the public record. Again, they did not. At the hearing held on March 17, 2026, on Defendants’ present motion for a protective order, counsel for ACLS Plaintiffs, Daniel Jacobson, acknowledged the reason, stating “Frankly, your Honor, part of it was just the amount of time that it would have taken” to submit only the portions of the videos on which Plaintiffs intended to rely. Hr’g Tr., 15:6–7. In other words, “It would have been too much work.” That is not an acceptable excuse.

The Court is left with the firm impression that at least “part of” the reason counsel did not ask for clarification was because they wished to manufacture a “judicial documents” argument and did not wish to be told they could not do so. The Court declines to indulge that tactic.

Fair enough. But having knocked the plaintiffs for their procedural maneuver, the judge then turns to the actual question: has the government shown “good cause” under Rule 26(c) to justify a protective order keeping the videos off the internet? And the answer is a pretty resounding no. And that’s because public officials acting in their official capacities have significantly diminished privacy interests in their official conduct:

The Government’s motion fails for three independent reasons. First, the materials at issue concern the conduct of public officials acting in their official capacities, which substantially diminishes any cognizable privacy interest and weighs against restriction. Second, the Government has not made the particularized showing of a “clearly defined, specific and serious injury” required by Rule 26(c). Third, the Government has not demonstrated that the prospective relief it seeks would be effective in preventing the harms it identifies, particularly where those harms arise from the conduct of third-party actors beyond the control of the parties.

She cites Garrison v. Louisiana (the case that extended the “actual malice” standard from NY Times v. Sullivan) for the proposition that the public’s interest “necessarily includes anything which might touch on an official’s fitness for office,” and that “[f]ew personal attributes are more germane to fitness for office than dishonesty, malfeasance, or improper motivation.” Given that these depositions are literally about how government officials decided to terminate hundreds of millions of dollars in grants, that framing fits.

The judge also directly calls out the government’s arguments about harassment and reputational harm, and essentially says: that’s the cost of being a public official whose official conduct is being scrutinized. Suck it up, DOGE bros.

Reputational injury, public criticism, and even harsh commentary are not unexpected consequences of disclosing information about public conduct. They are foreseeable incidents of public scrutiny concerning government action. Where, as here, the material sought to be shielded by a protective order is testimony about the actions of government officials acting in their official capacities, embarrassment and reputational harm arising from the public’s reaction to official conduct is not the sort of harm against which Rule 26(c) protects. Public officials “accept certain necessary consequences” of involvement in public affairs, including “closer public scrutiny than might otherwise be the case.”

As for the death threats and harassment — which McMahon explicitly says she takes seriously and calls “deeply troubling” and “highly inappropriate” — she notes that there are actual laws against threats and cyberstalking, and that Rule 26(c) protective orders aren’t a substitute for law enforcement doing its job:

There are laws against threats and harassment; the Government and its witnesses have every right to ask law enforcement to take action against those who engage in such conduct, by enforcing federal prohibitions on interstate threats and cyberstalking, see, e.g., 18 U.S.C. §§ 875(c), 2261A, as well as comparable state laws. Rule 26(c) is not a substitute for those remedies.

And then there’s the practical reality McMahon acknowledges directly: it’s too damn late. The videos have already spread everywhere. A protective order aimed solely at the plaintiffs would accomplish approximately nothing.

At bottom, the Government has not shown that the relief it seeks is capable of addressing the harm it identifies. The videos have already been widely disseminated across multiple platforms, including YouTube, X, TikTok, Instagram, and Reddit, where they have been shared, reposted, and viewed by at least hundreds of thousands of users, resulting in near-instantaneous and effectively permanent global distribution. This is a predictable consequence of dissemination in the modern digital environment, where content can be copied, redistributed, and indefinitely preserved beyond the control of any single actor. Given this reality, a protective order directed solely at Plaintiffs would not meaningfully limit further dissemination or mitigate the Government’s asserted harms.

Separately, the plaintiffs asked for attorney’s fees, and McMahon denied that too, noting that she wasn’t going to “reward Plaintiffs for bypassing its procedures” even though the government’s motion ultimately failed. So everyone gets a little bit scolded here. But the bottom line is clear: you don’t get to send unqualified DOGE kids to nuke hundreds of millions in grants using a ChatGPT prompt, and then ask a court to hide the video of them trying to explain themselves under oath.

Releasing full deposition videos is certainly not the norm, but given that these are government officials who were making massively consequential decisions with a chatbot and no discernible expertise, the world is much better off with this kind of transparency — even if Justin and Nate had to face some people on the internet making fun of them for it.

Filed Under: depositions, doge, justin fox, nate cavanaugh, neh, public scrutiny

Companies: american council of learned societies, american historical association, authors guild

Apple Can Create Smaller On-Device AI Models From Google’s Gemini

Apple reportedly has full access to customize Google’s Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports: The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.
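In outline, this is sequence-level knowledge distillation: the teacher's answers and reasoning become supervised training text for a much smaller student. The sketch below illustrates the idea with a toy PyTorch model; the data, formatting, model shape, and training setup are all illustrative assumptions, not details of Apple's or Google's pipeline.

```python
import torch
import torch.nn as nn

# 1. Distillation corpus: prompts paired with the teacher's reasoning and answer
#    (illustrative placeholder data, not real teacher outputs).
corpus = [
    {"prompt": "Summarize: The meeting moved from 1pm to 3pm today.",
     "reasoning": "Only the new time matters for a summary.",
     "answer": "The meeting was rescheduled to 3pm."},
]

def to_training_text(ex):
    # The student learns to reproduce the teacher's reasoning and final answer.
    return f"{ex['prompt']}\n<think>{ex['reasoning']}</think>\n{ex['answer']}"

# 2. Toy character-level student standing in for a small on-device model.
vocab = sorted(set("".join(to_training_text(ex) for ex in corpus)))
stoi = {ch: i for i, ch in enumerate(vocab)}

class TinyStudent(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

student = TinyStudent(len(vocab))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# 3. Standard next-token cross-entropy on the teacher-generated text.
for step in range(200):
    for ex in corpus:
        ids = torch.tensor([[stoi[c] for c in to_training_text(ex)]])
        logits = student(ids[:, :-1])
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()

print("final distillation loss:", loss.item())
```

A real pipeline would use a pretrained small model and a vast corpus of teacher-generated examples, and might also match the teacher's token-level probabilities rather than only its text, but the training signal is the same: the student imitates outputs the larger model has already reasoned through.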

Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn’t always meet Apple’s needs.

Google & Meta found liable for social media addiction

Meta and Google have been found liable for building intentionally addictive social media services, in a trial that sets a strong precedent for hundreds of other lawsuits that are still pending.

Meta CEO Mark Zuckerberg

On Wednesday, a jury in Los Angeles Superior Court finished its deliberations in a lawsuit brought against Meta and Google by a young woman. The jury found the tech giants liable for enabling the woman, identified as Kaley, to become addicted to social media as a child.

The lawsuit commenced in January, and jury deliberations started on March 13.
