
Judge Rejects Government’s Weak Attempt To Memory-Hole DOGE Deposition Videos


from the melted-snowflakes dept

Last week we covered how the government successfully convinced Judge Colleen McMahon to order the plaintiffs in the DOGE/National Endowment for the Humanities (NEH) lawsuit to “claw back” the viral deposition videos they had posted to YouTube — videos showing DOGE operatives Justin Fox and Nate Cavanaugh stumbling through questions about how they used ChatGPT to decide which humanities grants to kill, and struggling mightily to define “DEI” despite it apparently being the entire basis for their work.

The government’s argument was that the videos had led to harassment and death threats against Fox and Cavanaugh — the same two who had no problem obliterating hundreds of millions in already approved grants with a simplistic ChatGPT prompt, but apparently couldn’t handle the public seeing them struggle to explain themselves under oath. The government argued the videos needed to come down. The judge initially agreed and ordered the plaintiffs to pull them. As we noted at the time, archivists had already uploaded copies to the Internet Archive and distributed them as torrents, because that’s how the internet works.

Well, now Judge McMahon has issued a full ruling on the government’s motion for a protective order, and has reversed course. The government’s motion is denied. The videos are back up, and there are hours and hours of utter nonsense for you to enjoy.



The ruling is worth reading in full, because McMahon manages to be critical of both sides while ultimately landing firmly against the government’s attempt to suppress the videos. She spends a good chunk of the opinion scolding the plaintiffs for what she clearly views as a procedural end-run — they sent the full deposition videos to chambers on a thumb drive without ever filing them on the docket or seeking permission to do so, which she sees as a transparent attempt to manufacture a “judicial documents” argument that would give the videos a presumption of public access.

McMahon doesn’t buy it:

When deciding a motion for summary judgment, the Court wants only those portions of a deposition on which a movant actually relies, and does not want to be burdened with irrelevant testimony merely because counsel chose to, or found it more convenient to, submit it. And because videos cannot be filed on the public docket without leave of court, there was no need for the rule to contain a specific reference to video transcriptions; the only way to get such materials on the docket (and so before the Court) was to make a motion, giving the Court the opportunity to decide whether the videos should be publicly docketed. This Plaintiffs did not do.

But if Plaintiffs wanted to know whether the Court’s rule applied to video-recorded depositions, they could easily have sought clarification – just as they could easily have filed a motion seeking leave to have the Clerk of Court accept the videos and place them on the public record. Again, they did not. At the hearing held on March 17, 2026, on Defendants’ present motion for a protective order, counsel for ACLS Plaintiffs, Daniel Jacobson, acknowledged the reason, stating “Frankly, your Honor, part of it was just the amount of time that it would have taken” to submit only the portions of the videos on which Plaintiffs intended to rely. Hr’g Tr., 15:6–7. In other words, “It would have been too much work.” That is not an acceptable excuse.

The Court is left with the firm impression that at least “part of” the reason counsel did not ask for clarification was because they wished to manufacture a “judicial documents” argument and did not wish to be told they could not do so. The Court declines to indulge that tactic.


Fair enough. But having knocked the plaintiffs for their procedural maneuver, the judge then turns to the actual question: has the government shown “good cause” under Rule 26(c) to justify a protective order keeping the videos off the internet? And the answer is a pretty resounding no. And that’s because public officials acting in their official capacities have significantly diminished privacy interests in their official conduct:

The Government’s motion fails for three independent reasons. First, the materials at issue concern the conduct of public officials acting in their official capacities, which substantially diminishes any cognizable privacy interest and weighs against restriction. Second, the Government has not made the particularized showing of a “clearly defined, specific and serious injury” required by Rule 26(c). Third, the Government has not demonstrated that the prospective relief it seeks would be effective in preventing the harms it identifies, particularly where those harms arise from the conduct of third-party actors beyond the control of the parties.

She cites Garrison v. Louisiana (the case that extended the “actual malice” standard from NY Times v. Sullivan) for the proposition that the public’s interest “necessarily includes anything which might touch on an official’s fitness for office,” and that “[f]ew personal attributes are more germane to fitness for office than dishonesty, malfeasance, or improper motivation.” Given that these depositions are literally about how government officials decided to terminate hundreds of millions of dollars in grants, that framing fits.

The judge also directly calls out the government’s arguments about harassment and reputational harm, and essentially says: that’s the cost of being a public official whose official conduct is being scrutinized. Suck it up, DOGE bros.

Reputational injury, public criticism, and even harsh commentary are not unexpected consequences of disclosing information about public conduct. They are foreseeable incidents of public scrutiny concerning government action. Where, as here, the material sought to be shielded by a protective order is testimony about the actions of government officials acting in their official capacities, embarrassment and reputational harm arising from the public’s reaction to official conduct is not the sort of harm against which Rule 26(c) protects. Public officials “accept certain necessary consequences” of involvement in public affairs, including “closer public scrutiny than might otherwise be the case.”

As for the death threats and harassment — which McMahon explicitly says she takes seriously and calls “deeply troubling” and “highly inappropriate” — she notes that there are actual laws against threats and cyberstalking, and that Rule 26(c) protective orders aren’t a substitute for law enforcement doing its job:


There are laws against threats and harassment; the Government and its witnesses have every right to ask law enforcement to take action against those who engage in such conduct, by enforcing federal prohibitions on interstate threats and cyberstalking, see, e.g., 18 U.S.C. §§ 875(c), 2261A, as well as comparable state laws. Rule 26(c) is not a substitute for those remedies.

And then there’s the practical reality McMahon acknowledges directly: it’s too damn late. The videos have already spread everywhere. A protective order aimed solely at the plaintiffs would accomplish approximately nothing.

At bottom, the Government has not shown that the relief it seeks is capable of addressing the harm it identifies. The videos have already been widely disseminated across multiple platforms, including YouTube, X, TikTok, Instagram, and Reddit, where they have been shared, reposted, and viewed by at least hundreds of thousands of users, resulting in near-instantaneous and effectively permanent global distribution. This is a predictable consequence of dissemination in the modern digital environment, where content can be copied, redistributed, and indefinitely preserved beyond the control of any single actor. Given this reality, a protective order directed solely at Plaintiffs would not meaningfully limit further dissemination or mitigate the Government’s asserted harms.

Separately, the plaintiffs asked for attorney’s fees, and McMahon denied that too, noting that she wasn’t going to “reward Plaintiffs for bypassing its procedures” even though the government’s motion ultimately failed. So everyone gets a little bit scolded here. But the bottom line is clear: you don’t get to send unqualified DOGE kids to nuke hundreds of millions in grants using a ChatGPT prompt, and then ask a court to hide the video of them trying to explain themselves under oath.

Releasing full deposition videos is certainly not the norm, but given that these are government officials who were making massively consequential decisions with a chatbot and no discernible expertise, the world is much better off with this kind of transparency — even if Justin and Nate had to face some people on the internet making fun of them for it.


Filed Under: depositions, doge, justin fox, nate cavanaugh, neh, public scrutiny

Companies: american council of learned societies, american historical association, authors guild


Oracle converges the AI data stack to give enterprise agents a single version of truth


Enterprise data teams moving agentic AI into production are hitting a consistent failure point at the data tier. Agents built across a vector store, a relational database, a graph store and a lakehouse require sync pipelines to keep context current. Under production load, that context goes stale. 

Oracle, whose database infrastructure runs the transaction systems of 97% of Fortune Global 100 companies by the company’s own count, is now making a direct architectural argument that the database is the right place to fix that problem.

This week, the company announced a set of agentic AI capabilities for Oracle AI Database, built as a direct counter to that fragmented pattern.

The core of the release is the Unified Memory Core, a single ACID (Atomicity, Consistency, Isolation, and Durability)-transactional engine that processes vector, JSON, graph, relational, spatial and columnar data without a sync layer. Alongside that, Oracle announced Vectors on Ice for native vector indexing on Apache Iceberg tables, a standalone Autonomous AI Vector Database service and an Autonomous AI Database MCP Server for direct agent access without custom integration code.


The news isn’t just that Oracle is adding new features; it’s that the world’s largest database vendor is acknowledging that the AI era demands more than what its namesake database has traditionally provided.

“As much as I’d love to tell you that everybody stores all their data in an Oracle database today — you and I live in the real world,” Maria Colgan, vice president of product management for mission-critical data and AI engines at Oracle, told VentureBeat. “We know that that’s not true.”

Four capabilities, one architectural bet against the fragmented agent stack

Oracle’s release spans four interconnected capabilities. Together they form the architectural argument that a converged database engine is a better foundation for production agentic AI than a stack of specialized tools.

Unified Memory Core. Agents reasoning across multiple data formats simultaneously — vector, JSON, graph, relational, spatial — require sync pipelines when those formats live in separate systems. The Unified Memory Core puts all of them in a single ACID-transactional engine. Under the hood it is an API layer over the Oracle database engine, meaning ACID consistency applies across every data type without a separate consistency mechanism.


“By having the memory live in the same place that the data does, we can control what it has access to the same way we would control the data inside the database,” Colgan explained.

Vectors on Ice. For teams running data lakehouse architectures on the open-source Apache Iceberg table format, Oracle now creates a vector index inside the database that references the Iceberg table directly. The index updates automatically as the underlying data changes and works with Iceberg tables that are managed by Databricks and Snowflake. Teams can combine Iceberg vector search with relational, JSON, spatial or graph data stored inside Oracle in a single query.
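To make the single-query idea concrete, here is a minimal, hypothetical sketch using the python-oracledb driver. The table and column names are invented, the Iceberg-backed table setup is not shown, and the SQL follows Oracle's published 23ai-era vector syntax (VECTOR_DISTANCE, TO_VECTOR), so treat it as an illustration of the converged-query pattern rather than a verified 26ai example.

import oracledb  # python-oracledb driver

# Hypothetical sketch: one statement that ranks documents by vector similarity
# while joining against ordinary relational rows. Names and connection details
# are placeholders, not an Oracle-provided example.
conn = oracledb.connect(user="app_user", password="app_pw", dsn="dbhost/agentpdb")
cur = conn.cursor()

query_embedding = "[0.12, -0.03, 0.88, 0.41]"  # normally produced by an embedding model

cur.execute(
    """
    SELECT d.doc_id, d.title, o.order_status
      FROM support_docs d                              -- vector-indexed (e.g. Iceberg-backed) table
      JOIN orders o ON o.product_id = d.product_id     -- relational data in the same engine
     ORDER BY VECTOR_DISTANCE(d.embedding, TO_VECTOR(:q), COSINE)
     FETCH FIRST 5 ROWS ONLY
    """,
    {"q": query_embedding},
)
for row in cur:
    print(row)

The point of the pattern is that the similarity ranking and the relational join run in one engine, so there is no sync pipeline between a separate vector store and the transactional data.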

Autonomous AI Vector Database. A fully managed, free-to-start vector database service built on the Oracle 26ai engine. The service is designed as a developer entry point with a one-click upgrade path to full Autonomous AI Database when workload requirements grow.

Autonomous AI Database MCP Server. Lets external agents and MCP clients connect to Autonomous AI Database without custom integration code. Oracle’s row-level and column-level access controls apply automatically when an agent connects, regardless of what the agent requests.


“Even though you are making the same standard API call you would make with other platforms, the privileges that user has continued to kick in when the LLM is asking those questions,” Colgan said.

Standalone vector databases are a starting point, not a destination

Oracle’s Autonomous AI Vector Database enters a market occupied by purpose-built vector services including Pinecone, Qdrant and Weaviate. The distinction Oracle is drawing is about what happens when vector alone is not enough.

“Once you are done with vectors, you do not really have an option,” Steve Zivanic, Global Vice President, Database and Autonomous Services, Product Marketing at Oracle, told VentureBeat. “With this, you can get graph, spatial, time series — whatever you may need. It is not a dead end.”

Holger Mueller, principal analyst at Constellation Research, said that the architectural argument is credible precisely because other vendors cannot make it without moving data first. Other database vendors require transactional data to move to a data lake before agents can reason across it. Oracle’s converged legacy, in his view, gives it a structural advantage that is difficult to replicate without a ground-up rebuild.


Not everyone sees the feature set as differentiated. Steven Dickens, CEO and principal analyst at HyperFRAME Research, told VentureBeat that vector search, RAG integration and Apache Iceberg support are now standard requirements across enterprise databases — Postgres, Snowflake and Databricks all offer comparable capabilities. 

“Oracle’s move to label the database itself as an AI Database is primarily a rebranding of its converged database strategy to match the current hype cycle,” Dickens said. In his view the real differentiation Oracle is claiming is not at the feature level but at the architectural level — and the Unified Memory Core is where that argument either holds or falls apart.

Where enterprise agent deployments actually break down

The four capabilities Oracle shipped this week are a response to a specific and well-documented production failure mode. Enterprise agent deployments are not breaking down at the model layer. They are breaking down at the data layer, where agents built across fragmented systems hit sync latency, stale context and inconsistent access controls the moment workloads scale.

Matt Kimball, vice president and principal analyst at Moor Insights and Strategy, told VentureBeat the data layer is where production constraints surface first.


 “The struggle is running them in production,” Kimball said. “The gap is seen almost immediately at the data layer — access, governance, latency and consistency. These all become constraints.”

Dickens frames the core mismatch as a stateless-versus-stateful problem. Most enterprise agent frameworks store memory as a flat list of past interactions, which means agents are effectively stateless while the databases they query are stateful. The lag between the two is where decisions go wrong.

“Data teams are exhausted by fragmentation fatigue,” Dickens said. “Managing a separate vector store, graph database and relational system just to power one agent is a DevOps nightmare.”

That fragmentation is precisely what Oracle’s Unified Memory Core is designed to eliminate. The control plane question follows directly.


“In a traditional application model, control lives in the app layer,” Kimball said. “With agentic systems, access control breaks down pretty quickly because agents generate actions dynamically and need consistent enforcement of policy. By pushing all that control into the database, it can all be applied in a more uniform way.”

What this means for enterprise data teams

The question of where control lives in an enterprise agentic AI stack is not settled.

Most organizations are still building across fragmented systems, and the architectural decisions being made now — which engine anchors agent memory, where access controls are enforced, how lakehouse data gets pulled into agent context — will be difficult to undo at scale.

The distributed data challenge is still the real test.


“Data is increasingly distributed across SaaS platforms, lakehouses and event-driven systems, each with its own control plane and governance model,” Kimball said. “The opportunity now is extending that model across the broader, more distributed data estates that define most enterprise environments today.”


GeekWire Awards: Breakthrough tech for healthcare and data centers highlight Innovation of the Year


The finalists for Innovation of the Year at the 2026 GeekWire Awards. Clockwise from top: Starcloud; RevealDx; Alpenglow Biosciences; VerAvanti; and Pacific Northwest National Laboratory. (GeekWire / Company Photos)

From the research lab to the healthcare clinic and all the way above Earth — the Pacific Northwest continues to produce game-changing innovation.

The finalists for Innovation of the Year at the 2026 GeekWire Awards — Alpenglow Biosciences; Pacific Northwest National Laboratory; RevealDx; Starcloud; and VerAvanti — include companies and organizations thinking outside the box to develop cutting-edge technology that powers data centers, modernizes healthcare diagnostics, and more.

Now in its 18th year, the GeekWire Awards is the premier event recognizing the top leaders, companies and breakthroughs in Pacific Northwest tech, bringing together hundreds of people to celebrate innovation and the entrepreneurial spirit. It takes place May 7 at the Showbox SoDo in Seattle.

Microsoft’s Majorana 1, a new quantum processor based on a novel state of matter, won Innovation of the Year honors last year.

This category is presented by Astound Business Solutions.

Continue reading for information on Innovation of the Year finalists, who were chosen by a panel of independent judges from community nominations. You can help pick the winner: Cast your ballot here or in the embedded form at the bottom. Voting runs through April 10.


Seattle-based Alpenglow Biosciences, which spun out of the University of Washington in 2018, has developed tools to quickly create multi-dimensional images from biological tissue samples and analyze the results. The company recently announced a partnership with PathNet, a leading U.S. pathology laboratory, to help commercialize use of the startup’s 3D microscope technology in clinical settings.

Alpenglow is led by CEO and co-founder Dr. Nick Reder, who helped launch the company to solve problems he experienced as a medical resident in pathology at the UW.


Pacific Northwest National Laboratory, known as PNNL, is a 60-year-old institution managed by the U.S. Department of Energy that performs research in areas including energy, chemistry, data analytics and other science and technology fields. More than 210 companies have their roots at the laboratory, and 3,213 patents have been issued for research that started at PNNL.

Some of the latest work from the lab includes research on quantum computing; the application of new AI models for scientific discovery; the intersection of robotics and lab experiments; and tiny fish monitoring technology.


RevealDx is a Seattle-based startup that develops software aimed at improving the way healthcare professionals diagnose lung cancer. The company’s product uses machine learning techniques to assess the probability that lung nodules found on chest CT scans are cancerous — an alternative to more invasive procedures. RevealDx recently received FDA clearance for its RevealAI-Lung imaging software.

The company is led by CEO Chris Wood, who previously founded Seattle health tech company Clario Medical Imaging and was CTO at Intelerad Medical Systems.


Starcloud is building space-based data centers, powered by grids of massive solar panels, that offer an alternative to data centers on Earth amid a surge in energy demand from the AI boom. The Redmond, Wash.-based company, previously known as Lumen Orbit, graduated from Y Combinator in 2024. NVIDIA showed off Starcloud’s data center at the beginning of Jensen Huang’s keynote at the chip giant’s recent GTC conference.

Starcloud is led by CEO and co-founder Philip Johnston, a former associate at McKinsey & Co. who also co-founded an e-commerce venture called Opontia.


VerAvanti, a Bothell, Wash.-based medical technology company founded in 2013, develops ultra-thin imaging scopes that can be used for diagnosis in cardiology, neurosurgery, and peripheral artery work. The company raised a $31.5 million round last year and later announced a $5 million investment from a Middle Eastern family office that operates as a medical device distributor.

VerAvanti is led by CEO Gerald McMorrow, who previously helped launch Verathon, another medical device company that sold in 2009 for $300 million.

Astound Business Solutions is the presenting sponsor of the 2026 GeekWire Awards. Thanks also to gold sponsors Amazon Sustainability, Baird, BECU, JLL, First Tech and Wilson Sonsini, and silver sponsor Prime Team Partners.

The event will feature a VIP reception, a sit-down dinner and fun entertainment mixed in. Tickets go fast, and a limited number of half-table and full-table sponsorships are available. Contact events@geekwire.com to reserve a spot for your team today.


Google bumps up Q Day deadline to 2029, far sooner than previously thought


Google is dramatically shortening its readiness deadline for the arrival of Q Day, the point at which quantum computers will be able to break the public-key cryptography algorithms that secure decades’ worth of secrets belonging to militaries, banks, governments, and nearly every individual on earth.

In a post published on Wednesday, Google said it is giving itself until 2029 to prepare for this event. The post went on to warn that the rest of the world needs to follow suit by adopting PQC—short for post-quantum cryptography—algorithms to augment or replace elliptic curves and RSA, both of which will be broken.

The end is nigh

“As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline,” wrote Heather Adkins, Google’s VP of security engineering, and Sophie Schmieg, a senior cryptography engineer. “By doing this, we hope to provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”

Separately, Google detailed its timeline for making Android quantum resistant, the first time the company has publicly discussed PQC support on the operating system. Starting with the beta version, Android 17 will support ML-DSA, a digital signature algorithm standardized by the National Institute of Standards and Technology. ML-DSA will be added to Android’s hardware root of trust. The move will allow developers to have PQC keys for signing their apps and verifying other software signatures.
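Android's own ML-DSA APIs haven't shipped yet, but the general shape of a post-quantum signature flow can be sketched with the open-source liboqs-python bindings from the Open Quantum Safe project. This is an illustration only, not Google's implementation; the message and the algorithm identifier string are assumptions, and the exact name ("ML-DSA-65") may vary between liboqs versions.

import oqs  # liboqs-python bindings (Open Quantum Safe project)

# Illustrative sketch of an ML-DSA sign/verify round trip. Not Android's API;
# the algorithm name and message below are assumptions for demonstration.
message = b"example-release-manifest"

with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()   # the private key stays inside the signer object
    signature = signer.sign(message)

with oqs.Signature("ML-DSA-65") as verifier:
    assert verifier.verify(message, signature, public_key)
    print("ML-DSA signature verified")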


Google said it now has ML-DSA integrated into the Android verified boot library, which secures the boot sequence against manipulation. Google engineers are also beginning to move remote attestation to PQC. Remote attestation is a feature that allows a device to prove its current state to a remote server to, for example, prove to a server on a corporate network that it’s running a secure OS version.


Sony’s Best Soundbars Just Got a Bass Boost (And Two Little Brothers)


Today Sony unveiled two new soundbars in their BRAVIA Theater line, the BRAVIA Theater Bar 5 (HTB-500) and BRAVIA Theater Bar 7 (HTA-7100). The Bar 5 is a simple two-piece 3.1-channel system that comes with the bar itself plus a powered subwoofer and can handle Dolby Atmos or DTS:X surround via virtualized surround sound. The Bar 7 is a step-up model that can be used on its own, or enhanced with rear speakers and a powered subwoofer (or two!). 

The company also announced a new pair of wireless surround speakers (BRAVIA Theater Rear 9), which are compatible with the new Bar 7 and the existing Bar 8 and Bar 9, as well as Sony’s latest generation of AVRs (audio/video receivers). Sony also announced three new subwoofers (BRAVIA Theater Sub 7, Sub 8 and Sub 9) that will be compatible with the new and existing soundbar-based systems and receivers.

The BRAVIA Theater Bar 5 soundbar comes with a wireless subwoofer.

But I’ve saved the best news for last. Lovers of deep, powerful cinematic bass will be happy to hear that Sony now supports the use of two subwoofers with the new BRAVIA Theater Bar 7 and the existing Bar 8 and Bar 9 soundbars. By using two subwoofers, you can get a more uniform, more extended bass response, even in larger rooms with open floor plans. This dual-sub functionality will come with the BRAVIA Theater Bar 7 right out of the box and will be added to the Bar 8 and Bar 9 via a free over-the-air software update.

Sony’s BRAVIA Theater Bar 9 shown here with the Sub 8 subwoofer and Rear 9 speakers.

In our review of the BRAVIA Theater Bar 9 system, our main gripe was that the bass response wasn’t as extended or powerful as we would have liked, even using their best (at the time) powered subwoofer. With the new larger Sub 9 subwoofer and the ability to add dual subs, it appears this criticism has been addressed. And, based on a quick audition of a system that used two Sub 9 subwoofers, we believe it will be more than up to the task of providing deep, precise bass even in large rooms.

The BRAVIA Theater Bar 7 supports Dolby Atmos, DTS:X, and Sony 360 Reality Audio, either on its own or with the addition of a pair of rear speakers and one or two powered subwoofers. With the addition of a subwoofer and rear speakers, the Bar 7 becomes IMAX Enhanced Certified, and can reproduce the IMAX Enhanced DTS:X soundtracks currently available on the Disney+ and Sony Pictures Core streaming services, as well as select Blu-ray Discs. The Theater Bar 7 is compatible with Sony’s current Rear 8 speakers and the new Rear 9 speakers. For subs, the BRAVIA Theater Bar 7 works with one or two of the new Sub 7, Sub 8 or Sub 9 subwoofers.

A Sony rep told us the company’s current BRAVIA Theater Quad system runs on a different chipset than the BRAVIA soundbars, so it will not be getting the dual-sub upgrade (at least not yet).


BRAVIA Theater Bar 7 – A Great Choice for Medium-Sized Screens

Smaller than the BRAVIA Theater Bar 8 ($999.99) and Bar 9 ($1,499.99), the Theater Bar 7 ($869.99) still packs a punch. It features a total of nine drivers including front-firing, up-firing and side-firing drivers to create a 5.1.2-channel system on its own, expandable to 7.2.4 with the addition of two subwoofers and a pair of the Rear 9 speakers. You can also use the more affordable Rear 8 speakers, but those lack up-firing drivers so you won’t get as pronounced a height effect as you will with the Rear 9s. Like the Bar 8 and Bar 9, the Bar 7 includes Sony’s 360 Spatial Sound Mapping (360 SSM) to create an immersive and enveloping soundstage, no matter where you place your speakers.

A peek inside the Sony BRAVIA Theater Bar 7 soundbar.

Like the Bar 8 and Bar 9, the Bar 7 can be controlled with the BRAVIA Connect mobile app, and can be fully integrated into the TV’s settings menu when used with a compatible Sony BRAVIA TV. It also supports Sony’s AI-based Voice Zoom 3 feature for intelligent enhanced dialogue reproduction that raises voices with minimal impact to the rest of the soundtrack (also requires a compatible Sony TV).

Holding Down the Rear

Sony’s new BRAVIA Theater Rear 9 speakers ($749.99/pair) are replacing the current SA-RS5 in the line-up. The cylinder-shaped Rear 9s appear similar in cosmetic design to their predecessors, but the new ones come with an integrated swivel stand which can help direct the rear channel sounds to the listening area better. This is particularly useful when your seating area or room layout is not ideal, like when your couch is right up against a rear wall. Directing the sound will help Sony’s 360 Spatial Sound Mapping work even better to create an immersive dome of sound, even with non-ideal speaker layouts.

The BRAVIA Theater Rear 9 speakers feature a swivel mount that allows you to point the drivers at your listening position for optimum immersion.

Bringing Up the Bass

Sony’s new BRAVIA Theater Sub 7 ($329), Sub 8 ($499) and Sub 9 ($899) offer customers three options based on budget and size preferences. As the size goes up, so does the price as well as the bass extension and output.

The BRAVIA Theater Sub 7 features a single 130mm bass driver in a slim cabinet.

As for driver sizes and configuration, the Sub 7 features a 130mm (5.1-inch) bass driver, the Sub 8 has a single 200mm (7.9-inch) bass driver and the Sub 9 includes dual 200mm (7.9-inch) drivers in a vibration-cancelling dual-opposing configuration for deep bass extension and low distortion. With dual subwoofers now an option, you can always start with one sub and add a second one later if you feel like you need more bass.

A peek inside the new Sony BRAVIA Theater Sub 9 subwoofer reveals its dual 200mm woofers.

The Bottom Line

We’re surprised (and pleased) to see Sony addressing the one main area of weakness of their soundbar-based systems: low bass reproduction. While we don’t have full specifications of the new woofers, we have heard a pair of Sub 9s in action and were quite impressed with what we heard. Of course, with this new functionality and performance, up goes the price. A fully spec’ed out system with the BRAVIA Theater Bar 9, Rear 9 speakers and pair of Sub 9 subwoofers will set you back around $4,000 (MSRP) and that’s quite a price tag for a soundbar-based system. But for those who want a simple, elegant, high performance and cosmetically pleasing solution, particularly for use with a large screen Sony BRAVIA TV or Projector, it may actually be worth the investment.

Pricing & Availability

All of these speakers are available now to pre-order at the following prices:

  • Sony BRAVIA Theater Bar 5 (HTB-500) – $329.99
  • Sony BRAVIA Theater Bar 7 (HTA-7100) – $869.00
  • Sony BRAVIA Theater Rear 9 (SA-RS9) – $749.99
  • Sony BRAVIA Theater Sub 7 (SA-SW7) – $329.99
  • Sony BRAVIA Theater Sub 8 (SA-SW8) – $499.99
  • Sony BRAVIA Theater Sub 9 (SA-SW9) – $899.99


OpenAI Gives Users a Long-Term Storage Option With ChatGPT Library


ChatGPT users can now store, browse and retrieve the files they upload and create with the AI tool, OpenAI announced this week. 

All of the documents you upload inside the normal chat window are automatically saved to the library, as long as you’re logged into your account. Now you can search for and pull up documents in one central place. 

The feature is limited to Plus, Pro and Business users, so you have to pay at least $20 per month to store files using ChatGPT Library. You also have to be online to access your files. 


(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

If you turn on ChatGPT’s Memory feature, the chatbot can also reference the files you’ve saved to bring up in future chats. 

OpenAI mentions documents, spreadsheets, presentations and images as supported file types. However, the images you generate using ChatGPT will remain in the Images tab.



Save your files in the chatbot

To use the Library feature, sign in to your account and click the plus sign on the left side of the window where you type commands. Select the “Add from Library” option to choose the file you want to bring up. 

The library is visible in a left-hand sidebar that’s searchable. You can filter results by file type and whether you uploaded or created the file. 

There are some restrictions on file size. The maximum file size is 512MB, and all documents and chat conversations are limited to 2 million tokens. Spreadsheets and CSV files must be 50MB or smaller, and images must be 20MB or smaller.

Deleting files is a little tricky. You can select a file in the library window and click “delete” or use the trash icon beside the file name. Then OpenAI will delete the file within 30 days, unless the company needs it for security or legal obligations, or if “the chat has already been de-identified and disassociated from you.”


OpenAI’s big recent changes 

Lately, OpenAI has been refining its models and expanding services for coders and developers, with faster models that are suited for debugging code. OpenAI announced these improved models as it competes with rivals that offer coding-specific tools, like Anthropic’s Claude Code.

OpenAI executives have also been talking about building a “superapp” desktop interface that consolidates its AI tools in one place. The three tools included in the app would be ChatGPT, the coding platform Codex and the internet browser Atlas, which uses AI as an assistant. 

The company also announced this week it would shut down its AI video app Sora as it pivots away from video generation into more coding and productivity tools, like Codex. 


Epic Games to lay off 1,000 employees as Fortnite engagement drops


The organisation explained that a number of internal and external factors have impacted working life and profits at Epic.

US games and software developer Epic Games has announced plans to lay off more than 1,000 people amid a drop in the popularity of its online gaming platform Fortnite over the last 12 months. 

In a memo issued to Epic’s workforce, CEO Tim Sweeney said he was sorry that the organisation is once again in this position, having previously cut 16pc of its workforce in 2023. He explained that the downturn in Fortnite engagement, which began in 2025, has resulted in the organisation spending more money than it is currently making. 

“This layoff, together with over $500m of identified cost savings in contracting, marketing and closing some open roles puts us in a more stable place,” said Sweeney. 


He added: “Some of the challenges we’re facing are industry-wide challenges, slower growth, weaker spending and tougher cost economics, current consoles selling less than last generation’s and games competing for time against other increasingly-engaging forms of entertainment.”

However, he explained that some of the issues are unique to Epic. For example, last week, Epic raised the prices of Fortnite’s in-game currency, saying that “the cost of running Fortnite has gone up a lot and we’re raising prices to help pay the bills”. 

Sweeney also noted that despite its prevalence in the industry and wider workplace conversation, the layoffs have not been prompted by AI. “To the extent it improves productivity, we want to have as many awesome developers developing great content and tech as we can.” 

Impacted employees will receive a severance package that includes at least four months of base pay, extended Epic-paid healthcare coverage, an acceleration of stock options vesting through January 2027 and extended equity exercise options for up to two years. There is to be a meeting on Thursday (26 March) to discuss the matter further. 


In November of last year, Google and Epic Games reached a settlement over an antitrust lawsuit that was filed in 2020 by Epic, in which the search engine giant was found to hold a Play Store monopoly. 

The more than five-year conflict began when Fortnite was removed from the Apple App Store and Google Play Store for violating their policies with an in-game payment system that let users pay Epic directly for in-app purchases. At the time, Epic said the practice of platform owners taking a 30pc cut from every transaction made through apps on their stores was unfair.



Everyone is a builder: Microsoft and OpenAI execs on the new era of AI-powered personal software


Vijaye Raji, OpenAI’s CTO of applications and former CEO at Statsig, speaks at GeekWire’s Agents of Transformation event in Seattle on March 24. (GeekWire Photos / Kevin Lisota)

Vijaye Raji wanted to figure out how to keep up with the firehose of Slack messages. After a couple prompts, he had a solution.

Raji, OpenAI’s CTO of applications, vibe-coded his own personal tool using Codex, OpenAI’s coding agent. It runs on his laptop and summarizes his messages, emails, and notifications every 15 minutes.

His story reflects how software in the age of AI agents is becoming something anyone can create on the fly — which could have major implications for the way “applications” are designed, built, and used.

“Everyone is going to be a builder,” said Raji, speaking at GeekWire’s Agents of Transformation event in Seattle on Tuesday. “You’re going to lower the threshold of what building is.”

GeekWire co-founder Todd Bishop interviews Vijaye Raji.

Raji said that when he has a new idea now, his first instinct isn’t to pitch it to a team and ask someone to code it up. Instead, he starts prototyping it himself using Codex.

That habit has become the norm across OpenAI, he said.


“People come to meetings, right before they start the meeting they send a prompt out, keep the laptop slightly open, and when the meeting ends you go back and see what it’s built,” Raji said.

During an earlier fireside chat, Charles Lamanna, Microsoft’s executive vice president of Business Applications & Agents, said he’s starting to see agents change the way his teams share information internally — shifting from static documents to lightweight, bespoke “mini web apps.”

In one recent example, a discussion about investment changes and team structure would have traditionally produced a spreadsheet and a PowerPoint deck. Instead, his group spun up an interactive web app that pulled live data from Microsoft’s employee directory and funding systems, letting leaders click through different scenarios in real time.

Charles Lamanna, Microsoft’s executive vice president of Business Applications & Agents.

He described a similar shift in customer meeting prep, where a set of internal agents automatically assembles product telemetry, CRM data, and account notes — work that used to take hours of manual effort.

The broader potential impact goes beyond any single tool. And the underlying technology continues to improve at a rapid pace. Raji described the current era as “capability overhang” — the idea that models can do far more than people are asking of them.


“People need to start adapting and learning,” he said. “What more could they do with these models? What more could they do with these agents? The people that are able to do that and go to that level are many, many times more productive and many more times able to accomplish larger tasks than those that haven’t.”


The AI skills gap is here, says AI company, and power users are pulling ahead


Anthropic’s latest research suggests that while AI is rapidly changing the way work gets done, it hasn’t meaningfully eliminated jobs. At least, not yet. But beneath what Anthropic’s head of economics, Peter McCrory, says is a “still healthy” labor market, early signs are pointing to uneven impacts, especially for younger workers just entering the workforce. 

In an interview on the sidelines of the Axios AI Summit in Washington, D.C., McCrory said the company’s newest economic impact report finds little evidence of widespread job displacement so far. 

“There’s no material difference in unemployment rates” between workers who use Claude for the “most central task of their job in automated ways” — like technical writers, data entry clerks, and software engineers — and workers in jobs less exposed to AI that require “physical interaction and dexterity with the real world.” 

But with AI adoption spreading across industries, that could shift — fast. If Anthropic CEO Dario Amodei is to be believed, AI could wipe out half of all entry-level white-collar jobs and push unemployment as high as 20% within the next five years.


“Displacement effects could materialize very quickly, so you want to establish a monitoring framework to understand that before it materializes so that we can catch it as it’s happening and ideally identify the appropriate policy response,” McCrory told TechCrunch.


Staying ahead of those trends is why tracking AI growth, adoption, and diffusion is so important, he said.

In theory, McCrory said, AI models like Claude can do almost anything a computer can do. In practice, most users are only scratching the surface of those capabilities.


He said Anthropic looked at which roles involve tasks that AI is particularly good at, that are already being automated, and that are tied to real workplace use cases — the areas most likely to signal where displacement could emerge. 

Anthropic’s fifth economic impact report, released Tuesday, also found that even where there hasn’t been much displacement yet, there’s a growing skills gap between earlier Claude adopters and newcomers.


Earlier adopters are more likely to get significantly more value from the model, using it for work-related tasks rather than casual or one-off purposes and in more sophisticated ways, like as a “thought partner” for iteration and feedback. 

McCrory said the findings suggest AI is becoming a technology that rewards those who already know how to use it — and that workers who can effectively incorporate it into their work will increasingly have an edge.

That advantage isn’t evenly distributed geographically, either. The report also found that “Claude is used more intensely in high-income countries, within the U.S. in places with more knowledge workers, and for a relatively small set of specialized tasks and occupations.”

In other words, despite promises of AI as an equalizer, adoption may already be tilting toward the wealthy and could amplify those advantages as power users pull further ahead.



Bring back the joy of buying new tech and toys


Imagine the perfect online shop. It’d offer great deals on the biggest tech, gaming and entertainment brands. It’d give you same-day delivery without charging extra. It’d have real humans answering the phone and 24/7 customer service. And it would stock everything from AirPods and action cameras to air fryers and large appliances.

We’ve just described Joybuy, a fantastic new place to shop for almost anything – and to celebrate its UK launch it’s offering amazing launch deals, including up to 50% off selected items from big brands, a “spend £99 and save £10” offer on selected products and a “spend £200 and save £100” deal on selected home appliances. And while we’re most interested in the tech deals, you’ll also be able to get some deep discounts on appliances, beauty, groceries and more.



Google’s new TurboQuant algorithm speeds up AI memory 8x, cutting costs by 50% or more


As Large Language Models (LLMs) expand their context windows to process massive documents and intricate conversations, they encounter a brutal hardware reality known as the “Key-Value (KV) cache bottleneck.”

Every word a model processes must be stored as a high-dimensional vector in high-speed memory. For long-form tasks, this “digital cheat sheet” swells rapidly, devouring the graphics processing unit (GPU) video random access memory (VRAM) used during inference and slowing model performance as the context grows.
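To see why the cache swells so quickly, here is a rough back-of-the-envelope estimate in Python for a Llama-3.1-8B-style model (one of the models cited in the benchmarks below). The architecture numbers are the publicly documented ones for that model family, but the result is an illustration, not a figure from Google's paper.

# Back-of-the-envelope KV-cache size for a Llama-3.1-8B-style model (grouped-query attention).
# Illustrative estimate only; not a figure from the TurboQuant release.
layers       = 32       # transformer layers
kv_heads     = 8        # KV heads (GQA)
head_dim     = 128      # dimension per head
bytes_fp16   = 2        # 16-bit precision
kv_per_token = layers * kv_heads * head_dim * 2 * bytes_fp16   # keys + values

context   = 100_000     # tokens, e.g. a long-document task
total_gib = kv_per_token * context / 2**30
print(f"{kv_per_token / 1024:.0f} KiB per token, ~{total_gib:.1f} GiB at {context:,} tokens")
# Prints: 128 KiB per token, ~12.2 GiB at 100,000 tokens; a 6x compression cuts that to roughly 2 GiB.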

But have no fear, Google Research is here: yesterday, the unit within the search giant released its TurboQuant algorithm suite — a software-only breakthrough that provides the mathematical blueprint for extreme KV cache compression, enabling a 6x reduction on average in the amount of KV memory a given model uses and an 8x performance increase in computing attention logits, which could reduce costs by more than 50% for enterprises that implement it on their models.

The theoretically grounded algorithms and associated research papers are available now publicly for free, including for enterprise usage, offering a training-free solution to reduce model size without sacrificing intelligence.


The arrival of TurboQuant is the culmination of a multi-year research arc that began in 2024. While the underlying mathematical frameworks—including PolarQuant and Quantized Johnson-Lindenstrauss (QJL)—were documented in early 2025, their formal unveiling today marks a transition from academic theory to large-scale production reality.

The timing is strategic, coinciding with presentations of these findings at the upcoming International Conference on Learning Representations (ICLR 2026) in Rio de Janeiro, Brazil, and the Annual Conference on Artificial Intelligence and Statistics (AISTATS 2026) in Tangier, Morocco.

By releasing these methodologies under an open research framework, Google is providing the essential “plumbing” for the burgeoning “Agentic AI” era: the massive, efficient, and searchable vectorized memory that can finally run on the hardware users already own. The release is already believed to be affecting the stock market, lowering the share prices of memory providers as traders read it as a sign that less memory will be needed (a reading that may prove incorrect, given Jevons’ Paradox).

The Architecture of Memory: Solving the Efficiency Tax

To understand why TurboQuant matters, one must first understand the “memory tax” of modern AI. Traditional vector quantization has historically been a “leaky” process.


When high-precision decimals are compressed into simple integers, the resulting “quantization error” accumulates, eventually causing models to hallucinate or lose semantic coherence.

Furthermore, most existing methods require “quantization constants”—meta-data stored alongside the compressed bits to tell the model how to decompress them. In many cases, these constants add so much overhead—sometimes 1 to 2 bits per number—that they negate the gains of compression entirely.

TurboQuant resolves this paradox through a two-stage mathematical shield. The first stage utilizes PolarQuant, which reimagines how we map high-dimensional space.

Rather than using standard Cartesian coordinates (X, Y, Z), PolarQuant converts vectors into polar coordinates consisting of a radius and a set of angles.


The breakthrough lies in the geometry: after a random rotation, the distribution of these angles becomes highly predictable and concentrated. Because the “shape” of the data is now known, the system no longer needs to store expensive normalization constants for every data block. It simply maps the data onto a fixed, circular grid, eliminating the overhead that traditional methods must carry.

The second stage acts as a mathematical error-checker. Even with the efficiency of PolarQuant, a residual amount of error remains. TurboQuant applies a 1-bit Quantized Johnson-Lindenstrauss (QJL) transform to this leftover data. By reducing each error number to a simple sign bit (+1 or -1), QJL serves as a zero-bias estimator. This ensures that when the model calculates an “attention score”—the vital process of deciding which words in a prompt are most relevant—the compressed version remains statistically identical to the high-precision original.
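To see why a single sign bit can still yield accurate attention scores, here is a small NumPy sketch of the generic 1-bit Johnson-Lindenstrauss idea described above. It illustrates the principle only (it is not Google's TurboQuant code), and the dimensions, the partially aligned query, and the sqrt(pi/2) scaling constant, which makes the estimator unbiased for Gaussian projections, are standard textbook choices rather than details from the paper.

import numpy as np

# Minimal sketch of 1-bit sign quantization with an unbiased inner-product estimate.
# Illustrative of the QJL principle only; not Google's TurboQuant implementation.
rng = np.random.default_rng(0)
d, m = 128, 2048                         # original dimension, number of random projections

key = rng.normal(size=d)
query = key + 0.3 * rng.normal(size=d)   # a query partially aligned with the key, as in attention

S = rng.normal(size=(m, d))              # Gaussian JL projection shared by keys and queries

# Compress the key: one sign bit per projection plus a single float for its norm.
key_bits = np.sign(S @ key)
key_norm = np.linalg.norm(key)

# Estimate the attention-style score <query, key> from the compressed key.
estimate = key_norm * np.sqrt(np.pi / 2) * ((S @ query) @ key_bits) / m
exact = query @ key

print(f"exact={exact:.2f}  1-bit estimate={estimate:.2f}")  # the two agree within a few percent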

Performance benchmarks and real-world reliability

The true test of any compression algorithm is the “Needle-in-a-Haystack” benchmark, which evaluates whether an AI can find a single specific sentence hidden within 100,000 words.

In testing across open-source models like Llama-3.1-8B and Mistral-7B, TurboQuant achieved perfect recall scores, mirroring the performance of uncompressed models while reducing the KV cache memory footprint by a factor of at least 6x.


This “quality neutrality” is rare in the world of extreme quantization, where 3-bit systems usually suffer from significant logic degradation.

Beyond chatbots, TurboQuant is transformative for high-dimensional search. Modern search engines increasingly rely on “semantic search,” comparing the meanings of billions of vectors rather than just matching keywords. TurboQuant consistently achieves superior recall ratios compared to existing state-of-the-art methods like RaBitQ and Product Quantization (PQ), all while requiring virtually zero indexing time.

This makes it an ideal candidate for real-time applications where data is constantly being added to a database and must be searchable immediately. Furthermore, on hardware like NVIDIA H100 accelerators, TurboQuant’s 4-bit implementation achieved an 8x performance boost in computing attention logits, a critical speedup for real-world deployments.

Rapt community reaction

The reaction on X, obtained via a Grok search, included a mixture of technical awe and immediate practical experimentation.


The original announcement from @GoogleResearch generated massive engagement, with over 7.7 million views, signaling that the industry was hungry for a solution to the memory crisis.

Within 24 hours of the release, community members began porting the algorithm to popular local AI libraries like MLX for Apple Silicon and llama.cpp.

Technical analyst @Prince_Canuma shared one of the most compelling early benchmarks, implementing TurboQuant in MLX to test the Qwen3.5-35B model.

Across context lengths ranging from 8.5K to 64K tokens, he reported a 100% exact match at every quantization level, noting that 2.5-bit TurboQuant reduced the KV cache by nearly 5x with zero accuracy loss. This real-world validation echoed Google’s internal research, proving that the algorithm’s benefits translate seamlessly to third-party models.


Other users focused on the democratization of high-performance AI. @NoahEpstein_ provided a plain-English breakdown, arguing that TurboQuant significantly narrows the gap between free local AI and expensive cloud subscriptions.

He noted that models running locally on consumer hardware like a Mac Mini “just got dramatically better,” enabling 100,000-token conversations without the typical quality degradation.

Similarly, @PrajwalTomar_ highlighted the security and speed benefits of running “insane AI models locally for free,” expressing “huge respect” for Google’s decision to share the research rather than keeping it proprietary.

Market impact and the future of hardware

The release of TurboQuant has already begun to ripple through the broader tech economy. Following the announcement on Tuesday, analysts observed a downward trend in the stock prices of major memory suppliers, including Micron and Western Digital.


The market’s reaction reflects a realization that if AI giants can compress their memory requirements by a factor of six through software alone, the insatiable demand for High Bandwidth Memory (HBM) may be tempered by algorithmic efficiency.

As we move deeper into 2026, the arrival of TurboQuant suggests that the next era of AI progress will be defined as much by mathematical elegance as by brute force. By redefining efficiency through extreme compression, Google is enabling “smarter memory movement” for multi-step agents and dense retrieval pipelines. The industry is shifting from a focus on “bigger models” to “better memory,” a change that could lower AI serving costs globally.

Strategic considerations for enterprise decision-makers

For enterprises currently using or fine-tuning their own AI models, the release of TurboQuant offers a rare opportunity for immediate operational improvement.

Unlike many AI breakthroughs that require costly retraining or specialized datasets, TurboQuant is training-free and data-oblivious.


This means organizations can apply these quantization techniques to their existing fine-tuned models—whether they are based on Llama, Mistral, or Google’s own Gemma—to realize immediate memory savings and speedups without risking the specialized performance they have worked to build.

From a practical standpoint, enterprise IT and DevOps teams should consider the following steps to integrate this research into their operations:

Optimize Inference Pipelines: Integrating TurboQuant into production inference servers can reduce the number of GPUs required to serve long-context applications, potentially slashing cloud compute costs by 50% or more.

Expand Context Capabilities: Enterprises working with massive internal documentation can now offer much longer context windows for retrieval-augmented generation (RAG) tasks without the massive VRAM overhead that previously made such features cost-prohibitive.


Enhance Local Deployments: For organizations with strict data privacy requirements, TurboQuant makes it feasible to run highly capable, large-scale models on on-premise hardware or edge devices that were previously insufficient for 32-bit or even 8-bit model weights.

Re-evaluate Hardware Procurement: Before investing in massive HBM-heavy GPU clusters, operations leaders should assess how much of their bottleneck can be resolved through these software-driven efficiency gains.

Ultimately, TurboQuant proves that the limit of AI isn’t just how many transistors we can cram onto a chip, but how elegantly we can translate the infinite complexity of information into the finite space of a digital bit. For the enterprise, this is more than just a research paper; it is a tactical unlock that turns existing hardware into a significantly more powerful asset.
