
Tech

Blocking The Internet Archive Won’t Stop AI, But It Will Erase The Web’s Historical Record


from the willingly-burning-libraries dept

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
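For context, robots.txt is the web's traditional, purely voluntary opt-out mechanism: a plain-text policy file that well-behaved crawlers choose to honor. A hypothetical robots.txt that excludes only the Archive's crawler (which has historically identified itself as ia_archiver) while leaving search engines untouched would look like this:

```
# Hypothetical robots.txt: ask the Internet Archive's crawler to stay out,
# while allowing all other crawlers to index the site.
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
```

Because compliance with robots.txt is voluntary, measures that "go beyond" it generally mean blocking the crawler's requests at the server, which leaves the Archive no way to opt back in.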

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.


The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Archiving and Search Are Legal 

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 


The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 

Republished from the EFF’s Deeplinks blog.


Filed Under: ai, archives, copyright, culture, fair use, history

Companies: internet archive, ny times, the guardian



Ryu and Ken Lead the Charge in the Latest Street Fighter Trailer

Set in neon-lit 1993, the new Street Fighter film follows Ryu and Ken, formerly best friends but now estranged, as they are brought back together by the mysterious Chun-Li. Nefarious powers lurk in the shadows of the World Warrior Tournament, forcing the pair to confront each other as well as unpleasant memories from a past they thought they had left behind.



Ken Masters appears on screen as the has-been celebrity he has become, the ideal host for some cringeworthy MTV-style show, but Noah Centineo manages to lend a little humor to the role, even when Ken is screaming off-key karaoke and has plainly lost a step. Despite this, he manages to unleash his trusted Shoryuken uppercut, followed by the roll back throw that usually sends opponents reeling, and land a Jinrai Kick straight out of the current games, a lovely touch that reminds everyone you never totally lose your fighting talent.

Ryu is about as far from the flashy showbiz life as you can get, a solitary martial artist who has stepped back from the limelight, but Andrew Koji adds depth to the character, revealing a man who is a force to be reckoned with even when he is alone in the still of the night, training for the next battle. Even as Ken delivers a flying kick, Ryu blinks briefly before the two exchange glances, sizing each other up. Later, we see Ryu gathering energy in his palms, and then, in a moment that will send every Street Fighter fan wild, he says the one phrase we’ve all been waiting for: “Hadouken.”

Chun-Li emerges as the driving force for Ryu and Ken’s reunion, and she possesses a steely will. Callina Liang has the poise and strength of an Interpol agent who is personally invested in seeing justice served. She wants vengeance on those who killed her father, and she believes the infamous M. Bison is behind it all. From the start, she sees Ryu as a man hiding from the world, and Ken as something of a sideshow relic, but when she recruits them both, we get a taste of the banter, including a throwaway exchange in which she trades barbs with Cammy about her famously muscular legs, a wink to all the old-school players. Chun-Li then demonstrates her own top training with the Hundred Lightning Legs move, a whirlwind of kicks that proves she’s one of the best.

Additional fighters round out the roster, resulting in some unforgettable scenes. Joe Anoa’i makes a brief appearance as Akuma, hurtling at Ryu with a dark energy whirling about him like a mad storm in a technique known as Empyrean’s End. Curtis Jackson appears as Balrog, the heavyweight boxer who has a serious go at Ken early on. Olivier Richters looms enormous as Zangief, who attacks Ken with a piledriver that appears to have been plucked straight from the games. Jason Momoa plays Blanka, the green-skinned warrior whose electric powers stem from a bizarre tale concerning an aircraft crash. Cody Rhodes exudes Guile’s confidence with military swagger. Meanwhile, lesser roles go to some fairly familiar faces, like Orville Peck as the clawed Vega, Vidyut Jammwal as the stretchy Dhalsim, and Eric Andre as the loudmouth announcer Don Sauvage, and each cameo lasts just long enough to get that old familiar spark going before the next blowout begins.

Director Kitao Sakurai, who has worked on episodes of Twisted Metal and other projects, keeps the whole thing moving at a high energy level while keeping it grounded. The film will be released in theaters on October 16, 2026.



How are balance, inclusion and skills critical to the workforce of the future?


Rent the Runway’s Stephanus Meiring discusses the workforce of the future and what it will take to make it productive and sustainable.

For Stephanus Meiring, vice-president of engineering, managing director and site lead at Rent the Runway, the biggest challenge facing the workplace and workforce of the future is the pace of change. 

He said, “Things are moving faster than many organisations and individuals are used to and that creates uncertainty. But the opportunity is just as real. The people who tend to do well in these moments are the ones who are willing to learn, adapt and get involved rather than dismissing change as hype. 

“In my space – software engineering and delivery – AI is already changing how work gets done. I think that same pattern will play out across most industries in different ways.”

What part will diversity and inclusion play in the make-up of the workforce of the future?

A very important one. As technology becomes a bigger part of how work gets done, different perspectives shaping the tools, the decisions and the guardrails around them are needed. Inclusion will also mean making sure people have access to learning and the opportunity to build new skills, not just access to jobs.  If that part is missed, the benefits of change will not be shared evenly.

Work-life balance is arguably central to job satisfaction – is this achievable by having a future-focused mindset?

It can be, but only if companies approach it in the right way. Technology should help remove repetitive work, reduce some of the manual grind and give people more time for the parts of work that really need judgement and creativity. Used well, that should improve work-life balance. But if companies simply use new tools to expect more and more output without rethinking workload and priorities, then it could just create a different kind of pressure.

We’ve recently seen increases in salary, particularly in tech. Do you think the future of work will bring in other types of non-salary benefits?

Yes, definitely. Salary will always matter, but I think other benefits will become even more important. Flexibility, learning opportunities, time to upskill, access to strong tools, good leadership and work that feels meaningful will all matter more. In a fast-changing environment, helping people stay relevant and grow their skills is a real benefit in itself.

We’re looking at a more automated future, so how do you think this will affect roles in your sector?

In software engineering, delivery and support, the nature of the work is already changing. The craft of sitting and typing code all day is shifting quickly. But writing code was never really the whole job, and it was rarely the biggest bottleneck on its own. The harder part has always been understanding the problem properly, making good decisions, and delivering something reliable, secure and useful.


AI can now help more with those parts too, which is exciting, but it also means human judgement, critical thinking and accountability become even more important, not less.

What new jobs do you think will come to the fore?

I think we will see more roles and responsibilities emerge around AI governance, automation oversight, quality control, platform enablement, model risk, as well as jobs where people need to bridge technical depth with business context. But beyond that, I think some of the most important jobs of the future may be ones we cannot fully describe yet.

A role a few years from now might look less like someone doing one narrow task all day, and more like someone directing systems, validating outputs, making judgement calls, and bringing creativity and context that machines do not have. In many cases, jobs may not disappear so much as evolve. A lot of what comes next is still speculation, so the best response is to get involved, keep learning and be ready for change.

What will companies need to do to attract and support the best talent?

They will need to create environments where people can learn quickly, use modern tools and still feel supported and trusted. The best talent wants room to grow, but also clarity and direction. Companies that combine ambition with sensible guardrails will be in the strongest position. People are much more likely to stay where they feel they are growing, where leadership is clear and where the company is helping them prepare for what is next rather than leaving them to figure it out alone.

In your opinion and from experience, what can companies do today to prepare for the work of tomorrow?

Start now. Encourage people to learn, experiment safely and build confidence with new tools. Update your ways of working, not just your technology. In the next year, I think the biggest shift will be how quickly teams bring AI into everyday work.

Over five years, I expect we will see many roles reshaped around judgement, oversight, creativity and decision-making far more than routine execution. The companies that prepare early, invest in people and build the right habits now will have a real advantage.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.



Google adds Nano Banana image generation to Gemini’s Personal Intelligence feature


In short: Google has added Nano Banana-powered image generation to Gemini’s Personal Intelligence feature, letting the AI create images informed by a user’s Gmail, Photos, Calendar, Drive, and other Google app data. The feature rolls out to Plus, Pro, and Ultra subscribers in the US first, with Europe excluded from the initial global launch. Nano Banana is Google’s native image generation family for Gemini, now spanning three model versions.

Google has added Nano Banana-powered image generation to Gemini’s Personal Intelligence feature, letting the AI create images that draw on a user’s personal context across Gmail, Photos, Calendar, Drive, and other Google apps. The update means Gemini can now generate images informed by who you are and what you do, not just what you type into the prompt.

The feature is rolling out to Plus, Pro, and Ultra subscribers in the United States in the coming days, with free users expected to gain access over the next few weeks. Google says it plans to expand to Gemini in Chrome on desktop and to additional markets, though Europe is notably excluded from the initial global rollout of Personal Intelligence.

What Nano Banana is

Nano Banana is Google’s native image generation capability for the Gemini model family, distinct from Imagen, Google’s dedicated text-to-image line. Where Imagen is built for users who prioritise quality, iteration speed, and professional workflows, Nano Banana is designed for conversational image generation within the Gemini interface, accepting text, images, or both as inputs.


The family now includes three versions. The original Nano Banana, built on Gemini 2.5 Flash, handles basic conversational image generation. Nano Banana 2, launched in February 2026 on Gemini 3.1 Flash, combines the advanced features of the Pro version with faster iteration speeds. Nano Banana Pro, built on Gemini 3 Pro, incorporates the model’s full reasoning and real-world knowledge into image generation, producing outputs that reflect deeper understanding of prompts rather than surface-level pattern matching.


The technical advantage Google claims is that Nano Banana uses the Gemini model’s language understanding to capture prompt nuance in ways that standalone image generators cannot. Because the image generation is native to Gemini rather than bolted on as a separate system, the model can reason about what you are asking for before generating the image, drawing on context from the conversation and, now, from your personal data.

The personal intelligence angle

Personal Intelligence is Google’s framework for connecting Gemini to a user’s Google account data. Launched in January 2026, it lets Gemini access text, photos, and videos from Gmail, Calendar, Drive, Google Photos, YouTube, Search, Maps, and other first-party apps. The feature is opt-in, with users controlling which apps Gemini can access, and Google says the AI does not train on personal data.

Until now, Personal Intelligence has primarily powered text-based personalisation: answering questions about your travel plans by reading your Gmail confirmations and calendar entries, or making shopping suggestions based on purchase history. Adding Nano Banana image generation extends the personalisation to visual outputs. Google’s example use cases include generating images that incorporate your personal photos, creating visuals informed by your preferences and context, and producing outputs that reflect an understanding of your life rather than generic stock imagery.

A “sources” button shows how Gemini derived the context for each personalised image, giving users visibility into which personal data informed the output. This is a meaningful transparency feature in a product category where the provenance of AI-generated content is increasingly contentious.


The competitive context

Google is not the first to combine personal context with AI image generation, but it has a structural advantage that no competitor can easily replicate: it already has more personal data than any other consumer technology company. Gmail, Google Photos, Drive, Calendar, Maps, Search, and YouTube collectively represent a more comprehensive picture of a user’s life than any single app or platform can offer. Connecting that data to a capable image generator creates a personalisation moat that is difficult for OpenAI, Apple, or Meta to match without equivalent data breadth.

The timing also matters. ChatGPT’s image generation capabilities have driven significant user engagement for OpenAI, and Apple Intelligence has been integrating on-device AI features across the iPhone ecosystem. Google’s response is to lean into what it does best: cross-product integration powered by its unmatched data infrastructure.

On-device image generation with Gemini Nano is also coming to Pixel phones and Android devices, which would enable instant, private generation without cloud dependency. That combination (cloud-powered personalised generation for complex requests, on-device generation for speed and privacy) positions Google to cover both ends of the use case spectrum.

The privacy question

The obvious concern is that giving an AI image generator access to your personal photos, emails, and browsing history creates risks that Google’s opt-in controls may not fully address. Google says it does not train on personal data, but the feature necessarily involves processing that data to generate contextually relevant images. The distinction between “training on” and “using for inference” is technically meaningful but may be lost on users who simply see an AI that knows what their house, their children, or their holiday looked like.


Europe’s exclusion from the rollout suggests that Google’s own assessment is that the feature may face regulatory friction under GDPR and the AI Act. The company has form here: Gemini’s initial launch was also delayed in Europe, and Google has repeatedly had to navigate the gap between the personalisation its products depend on and the data protection frameworks that European regulators enforce.

For users who opt in, the value proposition is clear: an AI assistant that can generate images reflecting your actual life, preferences, and context rather than producing generic outputs from a prompt alone. For users who are wary of handing that much personal context to an AI system, the “sources” button offers some transparency, but the fundamental trade-off (personalisation in exchange for data access) is one that Google has been asking its users to make for two decades. This is simply the latest, and most visually expressive, version of that bargain.

Google has not disclosed pricing changes or additional costs for Nano Banana-powered Personal Intelligence image generation. The feature appears to be included in existing subscription tiers, which range from the free plan through Plus, Pro, and Ultra, with the pace of rollout tied to subscription level.



The Always Pan People Made a Rice Cooker, and It’s Totally Adorable


Popular kitchen brand Our Place is expanding its lineup with a new rice cooker. The compact kitchen appliance was created with chef and best-selling author J. Kenji López-Alt, and we had a chance to test it firsthand.

The Wonder Oven Pro and the Always Pan 2.0 won millions of fans, including me, so I was excited to test the new appliance and see if it lived up to the company’s innovative reputation and social media hype.

Like all of Our Place’s products, the rice cooker has a sleek, simple, vaguely retro design with just the right amount of controls. A large button on the front of the appliance pops the device open, while the smaller buttons on the top control the mode and temperature.  


The rice cooker keeps a sleek look by showcasing the digital display only when it’s plugged in.

Corin Cesaric-Epple/CNET

There are four modes: white rice, brown rice, oatmeal and a custom option for additional grains. The cooker holds six cups of cooked rice.

It also uses “fuzzy logic technology,” a common feature in high-end rice cookers. Essentially, it means the appliance uses built-in sensors that read the internal temperature and adjust it as needed for perfect cooking.
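“Fuzzy logic” sounds opaque, but the core idea is simple: rather than snapping the heater fully on or off at a threshold, the controller blends overlapping temperature bands into a continuous power level. A toy sketch of the idea in Python (the band edges and power levels are invented for illustration, not Our Place’s actual firmware):

```python
def membership(temp, low, high):
    """Degree (0 to 1) to which temp has climbed through the low-to-high ramp."""
    if temp <= low:
        return 0.0
    if temp >= high:
        return 1.0
    return (temp - low) / (high - low)

def heater_power(temp, target=100.0):
    """Blend 'full power' and 'simmer' outputs by fuzzy closeness to target."""
    near = membership(temp, target - 15, target)  # 0 far below target, 1 at target
    too_cold = 1.0 - near
    # Weighted average of crisp outputs: 100% power when cold, 20% when near.
    return too_cold * 1.0 + near * 0.2

print(heater_power(70.0))   # 1.0  (well below target: full power)
print(heater_power(100.0))  # 0.2  (at target: hold at a simmer)
```

In a real cooker the same blending runs continuously against the sensor reading, which is why the heat tapers smoothly instead of cycling hard on and off.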


When the rice cooker is on, the digital display matches that of the Wonder Oven Pro, another element that creates the cohesive look the brand is known for. Using the rice cooker is easy and feels familiar if you’ve used any of Our Place’s other appliances. 


You can adjust time and temperature.

Corin Cesaric-Epple/CNET

The rice cooker comes with a silicone measuring cup and a small wooden spatula, essentially a mini version of the one included with the Always Pan. Plus, it won’t scratch the nonstick coating.


Although the rice maker feels small, it fits six cups of cooked rice.

The Our Place rice cooker is available now for $139 — admittedly a bit pricey for a rice cooker — in five different colors, including blue salt, steam, spice, char and the limited-edition pistachio. 



A first look at Metro 2039 shows how its Ukrainian developer turned the darkness up to 11


If the real world isn’t grim enough for you, Ukrainian developer 4A Games has your back: Metro 2039 has been announced and is scheduled to arrive this winter. And based on the developer’s first look at the title, Metro 2039 looks to be an even darker affair than previous titles in the series. A tall order, but the real-world turmoil that has enveloped 4A Games since Russia’s invasion of Ukraine sounds like it has turned into a painful inspiration for the developer.

The lengthy cinematic reveal, which also contains a brief bit of gameplay at the end, doesn’t give much of the story away. But it does serve to place you right in the ruined, terrifying world of the Metro series. Metro 2039 arrives about 25 years after a nuclear apocalypse wiped out most life on the planet. The series focuses on survivors who live in Moscow’s ruined metro system. 4A says that this time out, the different underground factions have been united by a group known as “the Novoreich,” complete with a new ruler, the Spartan known as Hunter.

Despite Hunter promising “salvation and a new life” for the survivors left on the surface, things aren’t exactly rosy underground. As you might expect, this supposedly “united” society is still a complete disaster, with propaganda, authoritarian rule and violence the hallmark of the regime.


Screenshot from Metro 2039. (4A Games)

The Metro series is based on novels by Dmitry Glukhovsky, a Russian author who has lived in exile since publicly denouncing Russia’s invasion of Ukraine. 4A Games says that while this new game isn’t based specifically on one of his works, the studio collaborated with Glukhovsky on a story for Metro 2039 “shaped by shared values of freedom and truth, and informed by the harsh realities of the world today.”


In statements from the studio, 4A directly acknowledges the conditions that Metro 2039 was created under. “Many developers continue to work from multiple locations, facing daily challenges never anticipated,” the studio says. “Through power outages, reliance on generators, and disruptions from missile and drone attacks, development has continued – driven by resilience, shared support, and a commitment to the work.”

It goes on to state: “The war has directly shaped the development of Metro 2039, with its story focused acutely on choices, actions, consequences, and the cost of securing a future. While told from a distinctly Ukrainian perspective, Metro 2039 remains an authentic Metro story.” While the Metro series has been unfailingly bleak, it’s not hard to imagine how Russia’s invasion could have influenced the storytelling coming out of a Ukrainian studio with an exiled Russian author on the story team. But the limited bit of the game we’ve seen so far doesn’t make anything too explicit.


Screenshot from Metro 2039’s reveal trailer. (4A Games)

The trailer shows off the new player-character known as The Stranger, the first voiced protagonist in the series (though we don’t hear him do anything but scream in the preview). The Stranger has apparently been surviving in the above-ground wasteland but is forced to return to the metro. The little bit of gameplay we saw was the standard first-person shooter view of The Stranger heading underground to be immediately ambushed by a pretty horrific monster that he barely escapes from — he’s then dragged to “safety” by a group of survivors who get the doors to their shelter shut just before a larger horde can overrun them. Creepy stuff.

The rest of the preview largely feels like a dream (or nightmare) sequence — but while it’s hard to put together what is going on, there’s no doubt that the detail in the environments and characters is top-notch. Given that the last Metro game, Metro Exodus, was released way back in 2019, it’s fair to say that we’re getting a more graphically impressive rendering of ruined Moscow and the tunnels beneath it.


There’s no exact release date yet, but 4A Games says Metro 2039 will arrive this winter for Xbox Series X/S, PlayStation 5 and PC.



Dan D’Agostino Momentum C2Z Preamplifier Debuts with Z-Circuitry, Optional Streaming Module, and Reference-Level Design


Dan D’Agostino Master Audio Systems has added a new flagship control component to its Momentum lineup with the Momentum C2Z Preamplifier, a two-chassis design derived from the earlier Momentum C2 and optimized for use with the company’s Momentum Z Monoblock Power Amplifiers. The visual differences are minimal, limited to the front panel, but the engineering focus is clear: improve system performance within the Momentum ecosystem rather than reinvent the product from scratch.

It also needs to be said plainly that this is ultra-high-end audio with pricing to match. The Momentum C2Z sits firmly in luxury-purchase territory on its own, and building the full matching system with two Momentum Z monoblocks pushes the investment into a range that will make even affluent buyers stop and do the math twice. That does not make it unreasonable in the context of cost-no-object audio, but it certainly puts the C2Z far outside the reach of anyone shopping for value.

Z-Circuitry

Engineered to take full advantage of the Z circuitry introduced in the Momentum Z monoblock amplifier, the C2Z builds on the established Momentum C2 preamplifier platform with targeted refinements rather than a complete redesign.

It does not replace the original Momentum C2, but instead serves as the more suitable match for the Momentum Z monoblocks. The updated circuitry focuses on improving the electrical interface between preamplifier and power amplifier, with the goal of delivering better system integration, control, and overall sonic accuracy.

Advertisement

“With the C2Z upgrade,” said Dan D’Agostino, Founder of Dan D’Agostino Master Audio Systems, “we set out to do more than refine an already exceptional platform—we wanted to fully exploit the performance of the Momentum Z Monoblock circuitry. We have created a more direct and optimized connection between the preamplifier and the amplifier.”

The FET Factor

At the core of the C2Z is a revised output stage for the Momentum preamplifier, built around a current-capable field-effect transistor (FET) architecture. The goal is straightforward: increase current delivery and improve stability, allowing the preamplifier to drive the Momentum Z amplifier’s input stage more effectively and make full use of the Z circuitry across the system.

This implementation integrates FETs directly into the output stage, enabling higher current delivery where it matters most. The result is a more controlled electrical interface between preamplifier and amplifier, with the C2Z providing current with greater precision and the Momentum Z monoblocks designed to receive and respond to it without bottlenecks.

FETs are well suited to high performance audio applications. Their high input impedance reduces loading on source components, helping preserve signal integrity, tonal balance, and dynamic detail. They also offer low noise and strong linearity, which is critical when handling low level signals at the front end of the amplification chain.


Unlike bipolar devices, FETs operate as voltage-controlled components and require minimal input current. That behavior contributes to smoother, more linear transfer characteristics, lower distortion, and a more stable connection between components.
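The loading argument reduces to a simple voltage-divider model: of a source’s open-circuit signal, the fraction that reaches the next stage is Zin / (Zout + Zin). A quick sanity check in Python, using the 1 MΩ input impedance and 0.01 Ω output impedance from the published specs (the 600 Ω comparison case is an invented counter-example, not a spec of any product here):

```python
# Resistive voltage-divider model of component-to-component loading:
# the load receives z_in / (z_out + z_in) of the source's signal.
def transfer_fraction(z_out, z_in):
    return z_in / (z_out + z_in)

# Published figures: 0.01-ohm output impedance into a 1-megohm input.
print(transfer_fraction(0.01, 1e6))        # ~1.0: negligible loading loss

# Invented comparison: a 600-ohm source into a 10-kilohm input
# surrenders nearly 6% of its signal to the interface.
print(transfer_fraction(600.0, 10_000.0))  # ~0.943
```

This is of course a simplified, purely resistive picture; it ignores reactance and frequency dependence, but it shows why a high input impedance paired with a vanishingly low output impedance keeps the interface essentially lossless.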

The Momentum Z Amplifier Factor

In the Momentum Z amplifier, the Z circuitry takes a different approach by lowering the input impedance typically associated with FET-based designs. This change allows the amplifier to draw more current from the Momentum C2Z preamplifier, strengthening the electrical interface between the two components.

The result is improved transient response, greater dynamic contrast, and more precise retrieval of musical detail. Rather than treating the preamplifier and amplifier as separate stages, the design focuses on how they interact as a system.


One of the more audible benefits is in low-frequency performance. Bass reproduction shows better control, texture, and definition, particularly at lower listening levels where detail can often be lost. With the full Z circuitry signal path in place, low-frequency information remains more consistent and better resolved without relying on higher volume levels.

Taken together, the Momentum C2Z preamplifier and Momentum Z monoblock amplifiers are designed to function as a tightly integrated system, with an emphasis on dynamic control, low-level resolution, and stable performance across a wide range of listening conditions.

Other Features


Control: The Momentum C2 and C2Z preamplifiers include a bi-directional remote housed in precision-machined aluminum and Delrin, with full access to unit functions and settings. The remote features an ergonomic layout and an integrated LCD screen that mirrors the information shown on the preamplifier’s front panel display, allowing for straightforward operation from the listening position. An iOS control app is also available for system management, although there is no Android support.


Optional Digital Streaming Module: The optional Digital Streaming Module adds optical, coaxial, and USB digital inputs, along with network connectivity via Ethernet (RJ45) or Wi-Fi using an external antenna. An active internet connection is required to access streaming services such as TIDAL, Qobuz, and Spotify. The module is also Roon Ready, allowing integration into a Roon-based playback system.

Power Supply: The power supply is housed in a separate chassis that connects to the C2 or C2Z preamplifier. It incorporates dedicated custom toroidal transformers for the analog and digital sections, helping to isolate critical circuits. Internal filtering is designed to reduce RF noise and address power line irregularities, supporting more stable overall operation.

Comparison

Dan D’Agostino Momentum: C2Z (2026) vs. C2 (2024)

Product Type: Preamplifier (both)
Price: TBD (C2Z); $55,000 to $65,000 (C2)
Frequency Response: 0.1 Hz to 1 MHz, -1 dB; 20 Hz to 20 kHz, ±0 dB (both)
Signal-to-Noise Ratio: >110 dB A-weighted; >120 dB unweighted (both)
Gain: 11.5 dB (C2Z); +10 dB (C2)
Input Impedance: 1 MΩ (both)
Output Impedance: 0.01 Ω (C2Z); 0.05 Ω (C2)
Distortion (full output): <0.00066% (C2Z); <0.002% (C2)
Analog Inputs: 3 pr balanced XLR stereo; 1 pr unbalanced RCA stereo (both)
Digital Inputs (with optional Digital Streaming Module): 1 SPDIF coaxial; optical; USB-B; RJ45 Ethernet; Wi-Fi (both)
Outputs: 2 pr balanced XLR stereo; 1/4-inch headphone (both)
Dimensions (WHD): 17.0 x 8.9 x 15.2 in / 43.2 x 22.6 x 38.6 cm (both)
Weight: 73.0 lb / 33.1 kg (both)
Momentum C2 (rear)
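A couple of figures in the comparison above are easier to appreciate with quick arithmetic: converting the signal-to-noise spec from decibels to a linear voltage ratio, and sizing the distortion improvement from C2 to C2Z:

```python
# Spec-sheet arithmetic using figures from the comparison above.

def db_to_voltage_ratio(db: float) -> float:
    return 10 ** (db / 20)

snr_ratio = db_to_voltage_ratio(110)   # >110 dB, A-weighted
distortion_gain = 0.002 / 0.00066      # C2 spec vs. C2Z spec, full output

print(f"110 dB SNR: noise ~{snr_ratio:,.0f}x below the signal")
print(f"C2Z distortion spec is ~{distortion_gain:.1f}x lower than the C2's")
```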

The Bottom Line 

The Momentum C2Z is not a ground-up redesign, but it does introduce meaningful changes where they matter. The revised output stage with FET-based current delivery is specifically engineered to work with the lower input impedance of the Momentum Z monoblocks, creating a more tightly coupled interface between preamplifier and amplifier. That system-level thinking is the real story here. This is not about adding features for the sake of it, but refining how the components interact to improve control, stability, and low-level detail.

What also stands out is the execution. At 73 pounds and built as a dual-chassis design with a dedicated power supply, the C2Z reflects the same emphasis on materials, isolation, and power management that defines the rest of the Momentum line. It is a physically and electrically substantial preamplifier, designed to anchor a reference-level system.

The reality, however, is the cost. While final pricing has not been confirmed, expectations in the $50,000 to $60,000 range place the C2Z firmly in cost-no-object territory. Building the intended system with two Momentum Z monoblocks at $125,000 each pushes the total investment into a level that requires serious financial commitment, even before speakers and source components enter the equation.

At this level of engineering, everything comes separately: the optional Digital Streaming Module leaves nothing to chance but adds to the total cost of ownership, and the same goes for adding a phono stage. If you need to ask the price, you can’t afford it.


Price & Availability

The official price of the Dan D’Agostino Momentum C2Z Preamplifier is still forthcoming, but its predecessor, the C2, is priced between $55,000 and $65,000. 

Dan D’Agostino Audio components can be purchased through Authorized Dealers in the US, Canada, Mexico, and internationally.

For more information: dandagostino.com



Tech

Intel Core Series 3 processors are here and they promise more performance for less money


Intel has just launched its new Core Series 3 mobile processors for the next generation of affordable laptops. The goal of these new chips is to give a more modern foundation to these accessible notebooks without dragging them into premium pricing territory.

The new lineup is aimed at value buyers, schools, small businesses, and essential edge devices. But the highlight is that these chips share the same broader foundation as Intel’s powerful new Core Ultra Series 3 family, bringing Intel’s 18A process node, hybrid CPU architecture, AI-ready capability, and updated connectivity to more affordable systems.

How Intel is bringing nicer features downmarket

Raise expectations for what everyday computing can deliver with #IntelCore Series 3 mobile processors—designed to transform computing for schools, small businesses, and value buyers while delivering the features people care about at unmatched scale.


— Intel (@intel) April 16, 2026


Intel says Core Series 3 is its first hybrid AI-ready Core Series processor, with support for up to 40 platform TOPS. You even get modern connectivity with up to two Thunderbolt 4 ports, Wi-Fi 7, and Bluetooth 6.

The company is also making big value claims against older PCs, with the Core Series 3 delivering up to 47% better single-thread performance, up to 41% better multi-thread performance, and up to 2.8x better GPU AI performance compared to a typical five-year-old PC. Against the previous-gen Core 7 150U, Intel also claims up to 2.1x faster creation and productivity, 64% lower processor power, and up to 2.7x AI GPU performance.

Why these are still everyday laptop-friendly

Under the hood, Intel’s Core Series 3 platform supports up to 2 Cougar Cove P-cores and 4 Darkmont LP E-cores, plus NPU 5, Xe graphics, support for LPDDR5X-7467 and DDR5-6400, and a design clearly tuned around battery life and lower-cost system builds. Intel also says the platform supports either UFS 3.0 or Gen 4 SSD storage, depending on system design.
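For a rough sense of what those memory speeds mean, peak theoretical bandwidth can be estimated from the transfer rate and bus width. The 128-bit (16-byte) bus below is an assumption typical of this laptop class, not a figure Intel has confirmed:

```python
# Back-of-envelope peak memory bandwidth for the supported memory types.
# Assumes a 128-bit (16-byte) memory bus, which is typical for this
# laptop class but not confirmed in Intel's announcement.

def peak_bandwidth_gbs(mega_transfers_per_s: int, bus_bytes: int = 16) -> float:
    return mega_transfers_per_s * bus_bytes / 1000  # MT/s * bytes -> GB/s

print(f"LPDDR5X-7467: ~{peak_bandwidth_gbs(7467):.1f} GB/s")
print(f"DDR5-6400:    ~{peak_bandwidth_gbs(6400):.1f} GB/s")
```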

In other words, the Intel Core Series 3 is making the next wave of affordable laptops feel less cheap in areas like battery life, responsiveness, video calls, light AI tasks, and basic creative work.


Intel Core Series 3 processor specs

Processor         Cores / Threads   Max Turbo (GHz)   NPU TOPS   Xe-cores   GPU TOPS   Base & Max Power
Intel Core 7 360  6 / 6             4.8               17         2          21         15W-35W
Intel Core 7 350  6 / 6             4.8               17         2          21         15W-35W
Intel Core 5 330  6 / 6             4.6               16         2          20         15W-35W
Intel Core 5 320  6 / 6             4.6               16         2          20         15W-35W
Intel Core 5 315  6 / 6             4.4               15         2          18         15W-35W
Intel Core 3 304  5 / 5             4.3               15         1          9          15W-35W

When do they drop?

Intel says more than 70 designs from OEM partners are on the way, with availability starting April 16, 2026 for consumer and commercial systems, while edge systems will start shipping in Q2 2026. The long list of partners includes Acer, Asus, Dell, HP, Lenovo, MSI, Samsung, and others.

The announcement sounds great on paper, but with the industry-wide price hikes, we’ll have to wait for the upcoming laptop releases to see if they are truly a solid value purchase.


Tech

The Kentucky Cave Wars, And Going Viral In 1925


Floyd Collins, the unfortunate star of this post. (Public Domain)

Information, it seems, flows at the speed of media. In the old days, information traveled with people on ships or horses, so if, say, a battle was won or lost, it could be months or even years before anyone back home knew what happened. While books and movable type let people store information, they still moved at the speed people moved. Before the telegraph, there were attempts to use things like semaphores to speed the flow of information,  but those were generally limited to line-of-sight operations. Carrier pigeons were handy, but don’t really move much faster than people.

The telegraph helped, but people didn’t have telegraph stations in their homes. At least not ordinary people. But radio was different. It didn’t take long for every home to have a radio, and while the means of broadcasting remained in the hands of a few, the message could go everywhere virtually instantly. This meant news could go from one side of the globe to the other in seconds. It also meant rumors, fads, and what we might think of today as memes could, too.

You might think that things “going viral” is a modern problem, but, in reality, media sensations have always been with us. All that changes is the number of them and their speed.

One of the earliest viral media sensations dealt with William Floyd Collins, an unfortunate man who was exploring caves during the Kentucky Cave Wars.

Background

Mammoth Cave in Kentucky had become a major tourist attraction. The accessible entrance to the cave was located on land owned by the Croghan family. The massive cave system had been made famous in the 19th century, and with the construction of a lock and dam nearby in 1906, Mammoth Cave became accessible to ordinary tourists.

Rescuers weighing options at the entrance to Sand Cave. (Public Domain)

However, the cave wasn’t completely under the Croghan land. There were also other caves that may or may not have been connected with Mammoth Cave. This led to fierce competition. The Croghan family suppressed information about exactly what land was over the cave. Meanwhile, other cave “owners” would intercept people heading for the cave, tell them that Mammoth Cave was closed, and “helpfully” direct them to another location.

By the 1920s, George Morrison had blasted new entrances to the cave on non-Croghan land. There was fierce interest in finding new entrances to the cave, or nearby caves, to capture tourist money.

Back to Floyd

Floyd Collins found an entrance into what would become known as the “Great Crystal Cave” in 1917 and opened it to tourists in 1918. Unfortunately, the cave was hard to access, so it didn’t make much money.

Floyd had started entering caves in 1893 at the age of six. He discovered his first cave in 1910. But Great Crystal Cave was too far off the main road. He entered into a deal with three farmers who owned land closer to the main highway. If Floyd could find a suitable cave or, even better, an entrance to Mammoth Cave, he’d partner with them and create a mutually profitable tourist attraction.

Floyd found a hole in what would become known as Sand Cave. Some of the passages he had to move through were as tight as 9 inches, which, of course, would not be suitable for tourists, but they opened, apparently, into a large grotto. He was determined to expand the entrance to make the cave commercially viable.


In January of 1925, he was working in the cave when his gas lamp started to dim. He tried to leave, but while trying to move through a small passage, he knocked over his light, leaving him in total darkness.

In the dark, he put his foot against a seemingly stable wall and caused a shift that pinned his leg with a rock weighing nearly 30 pounds. He was also buried in gravel. At this point, he was 150 feet from the hole to the surface.

The Media

The next day, people noticed Floyd was missing, but no one would dare to follow him through the narrow passages. His younger brother finally got close enough to determine what happened. He was able to give Floyd food and water as plans for a rescue developed.

After four days in the cave, several people tried to pull Floyd out using a rope and a harness, but they only wound up injuring him. Meanwhile, the media had taken interest in the case, and the publicity drew hundreds of tourists and amateur spelunkers. Campfires and, possibly, the electric light that had been placed to give Floyd some light and warmth, melted ice inside the cave, creating puddles of water around the trapped man.


Two days after the failed rescue attempt, rain and the melting ice caused the cave passage to collapse, and the rescue team determined it was too dangerous to dig it back out after making an attempt to do so. They decided to dig straight down to reach Floyd.

Digging

Unfortunately, the cave drew air in so they decided they could not use mechanical diggers without risking suffocating Floyd. That meant humans would have to dig the 55-foot shaft to reach the victim. The initial estimate that 75 volunteers could dig the shaft in 30 hours proved optimistic, as conditions worsened and the hole grew deeper.

Someone disconnected the wires from the light bulb and connected them to an audio amplifier to detect signs of life from the victim. They believed the repetitive crackling noise meant he was breathing.

The light bulb’s circuit went open on February 11th, twelve days after the incident started. Five days later, rescuers reached his body; he had been dead for several days.


You can find a well-done documentary from Remix Films in the video below. For a movie inspired by the event, check out the Billy Wilder film Ace in the Hole (1951) starring Kirk Douglas.

Viral

A newspaper reporter, William Miller, was on the scene and, being a small man, was able to help remove gravel from around Floyd before the cave-in. His interview with Collins, conducted inside the cave, won a Pulitzer Prize.

Not a circus. A cave rescue.

There was a time when this would have been only a sensational local story, but by the modern year of 1925, reports “went out on the wire” by telegraph and were picked up by newspapers worldwide. The nearest telegraph station was miles away, so two ham radio operators (9BRK and 9CHG) provided a link between the site, the newspaper, and the authorities.

The first broadcast radio station, KDKA, was only five years old, but stations provided news bulletins detailing the progress. Thanks to the media, crowds were reported to number in the tens of thousands. Eventually, the National Guard arrived to help control the crowds.


Vendors popped up to sell hamburgers and memorabilia like a macabre circus. As you can see in the video below, memorabilia about the event and Floyd Collins can be worth a pretty penny to collectors.

The whole thing became one of the three largest media events between World War I and World War II. The other two were Lindbergh’s transatlantic flight (1927) and the kidnapping of Lindbergh’s baby (1932). Oddly, Lindbergh was an acquaintance of Floyd’s and also flew news photos from the scene (although, reportedly, to the wrong newspaper).

While it wasn’t quite as big an event, Canada’s 1936 Moose River Gold Mine collapse was a similar situation and also received worldwide media attention. It has the distinction of being the first 24-hour radio coverage of a breaking news story in Canada.


Today

These days, sensational news stories pop up everywhere. It seems as if they hardly get started before they are displaced by another one. But we submit that “going viral” isn’t a modern phenomenon; only the speed at which it happens is. Even an 1835 newspaper was able to spur a viral hoax.

Featured image: “Mammoth Cave Saltpeter Mine” by [Bpluke01]


Tech

The Digital Accessibility Deadline Is Here. Schools Aren’t Ready.


A big civil rights deadline that impacts schools and vendors will hit this month.

Federal law has required accessibility for people with disabilities for decades, says Glenda Sims, chief information accessibility officer at Deque Systems, a company that specializes in digital accessibility.

But two years ago, the federal government finally gave schools a way to measure whether their websites, mobile apps and digital content were accessible under law when it released a “final rule.”

In essence, the 2024 final rule updated Title II of the Americans with Disabilities Act, a federal law concerning equal opportunity, setting out standards for public institutions around website and mobile app accessibility. When the deadline was put in place, disability experts told EdSurge that the rules provided clarity for schools and edtech vendors, and also set a ticking clock for when they would have to make changes. The rule set varying deadlines for school districts and state and local governments — in April 2026 or April 2027, based on population size.


On April 24, the first deadline will hit. By then, institutions have to make their web content and mobile apps comply with Level AA of the Web Content Accessibility Guidelines (WCAG) 2.1, a widely recognized accessibility standard that includes accommodations such as a minimum contrast ratio and a requirement for audio descriptions.
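The contrast requirement mentioned above is mechanically checkable: WCAG 2.1 defines a relative-luminance formula and a contrast ratio derived from it, with Level AA requiring at least 4.5:1 for normal-size text. A minimal implementation of that check:

```python
# WCAG 2.1 contrast-ratio check. Level AA requires at least 4.5:1 for
# normal-size text (3:1 for large text). Formulas follow the WCAG 2.1
# definitions of relative luminance and contrast ratio.

def relative_luminance(hex_color: str) -> float:
    def linearize(c8: int) -> float:
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0, the maximum
print(contrast_ratio("#767676", "#ffffff") >= 4.5)     # True: passes AA on white
print(contrast_ratio("#777777", "#ffffff") >= 4.5)     # False: just misses
```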

But with the well-advertised deadline just days away, schools are well behind schedule.

Some advocates worry that digital accessibility is being swept up in broader political trends. So, what happens when the deadline hits?

Not Ready for Prime Time

Only 14 percent of districts had completed the accessibility updates required by law, according to a survey from the National School Public Relations Association released last December. The survey also found fewer than half of districts prioritized digital accessibility or had procedures for vetting vendor accessibility, which is required by the rule.


It’s not just about course content, but also the apps that a school may use, says Sambhavi Chandrashekar, global accessibility lead at D2L, a company that runs a widely used learning management system. “I doubt if a single K-12 district in the U.S. or anywhere else has an inventory today of all the web apps and forms and content that they have that are not accessible,” Chandrashekar says.

Figuring that out requires performing an audit, which most schools likely haven’t done and which can be expensive, she adds.

At EdSurge’s request, AAAtraq, a company that sells disability-related legal compliance services, surveyed around 20 of the largest schools across a number of states — in California, Colorado, Florida, Illinois, New York, Texas and Washington state. Many school websites and online PDFs fell short on “basic accessibility fundamentals,” based on a benchmark the company uses to assess legal exposure. Alt text was missing, there was not enough color contrast, and many websites didn’t have an accessibility statement, the company reports. The company found that 88 percent received an “F,” the lowest possible grade.

The company uses AI in its assessments, which do not cover all of the WCAG technical guidelines, and its assessment was meant only as a rough barometer. In some cases, the use of AI in accessibility is controversial.
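Checks like the missing-alt-text test are straightforward to automate. Here is a minimal sketch using only Python's standard library; the sample markup and class name are invented, and note that an empty alt is actually valid for purely decorative images, so a real audit distinguishes "missing" from "intentionally empty":

```python
# A minimal version of one automated check such audits run: counting <img>
# tags whose alt text is missing or empty. Standard library only; the
# sample markup is invented for illustration.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flagged = 0  # images without usable alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt is None or not alt.strip():
                self.flagged += 1

page = ('<img src="logo.png">'          # missing alt -> flagged
        '<img src="chart.png" alt="">'  # empty alt -> flagged here too
        '<img src="kids.jpg" alt="Students in a classroom">')
checker = AltTextChecker()
checker.feed(page)
print(checker.flagged)  # 2
```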


“Title II should have been a wake up call,” said AAAtraq CEO Lawrence Shaw in an emailed comment, referring to the major disability law behind the “final rule.” Yet many schools, including some of the largest in the country, have left themselves open to legal action.

Digital Exhaustion

Schools’ relationship to technology has also changed since two years ago, from rushing to embrace it to trying to limit it.

These days, beset by digital exhaustion and regret over the reach of tech into children’s lives, schools have sought to restrict screens in schools.

But it’s important for schools and lawmakers to distinguish between meaningful tech and doomscrolling on social media, says Luis Pérez, senior director of disability and accessibility for CAST, a digital access advocacy group. Students are under more pressure to manage their own attention, Pérez says, but those with disabilities and multilingual learners rely on certain digital tools, such as text-to-speech and adjustable text sizing to navigate daily learning. When used correctly, digital tools that expand accessibility can foster a sense of belonging, especially for underrepresented groups.


He worries that screen time laws that lump all screens together could make digital accessibility harder.

K-12 schools may be having the toughest time. Universities are usually more prepared for digital accessibility than state or local governments, which run K-12 public schools, says Sims of Deque. That’s partly because students with disabilities represent a more identifiable group in universities and that allows them to advocate for accommodation, she says.

These schools are heavily reliant on vendors for accessibility, Sims says.

It doesn’t help that there’s uncertainty at the moment.


Old Rules, New Rulers

While the accessibility deadline is still in place, the intentions of the federal government have become murky.

Last year, the Department of Justice signaled that it might issue a new “interim final rule” that would impact the deadline. And recently, the Office of Information and Regulatory Affairs — a federal agency that is usually not involved with accessibility — has been holding meetings on the rule, as “credible rumors” have circulated that the rule is in danger of getting delayed or scrapped.

Yet, the federal government has not publicly released information about its intentions, according to Jarret Cummings, senior adviser for policy and public relations at Educause.

The Office of Information and Regulatory Affairs did not immediately respond to a question from EdSurge about whether a delay is expected.


However, some documents related to the meetings are publicly accessible, giving a glimpse into what they are hearing.

A group representing more than 800 Minnesota cities argued in written testimony that none of the Minnesota cities that would be impacted by the rule are fully compliant with the law. The letter states that the cost of compliance would squeeze small government budgets. In a similar argument, testimony from the National Association of Counties estimated that it would cost small counties about $32,000 to fix problems with accessibility on their sites, and large counties as much as $700,000.

Cummings’ organization, Educause, has also argued that two years was not enough time for most higher-ed institutions to make changes. It suggested that the government alter the timeline.

In contrast, Mark Riccobono, president of the National Federation of the Blind, testified that the rulemaking process has been ongoing for decades, with ample time for comment. The rule represents a compromise that clarifies requirements while reducing the burden on those covered by the law by providing exceptions and generous timelines, Riccobono argued.


Politically, the national mood has changed since the rule was issued a couple of years ago.

The affiliation of accessibility with diversity, equity and inclusion has politically backfired under the Trump administration. The administration has shredded grants it has identified with “radical” DEI ideology, and mass firings have gutted agencies like the Education Department, which the administration is actively trying to dismantle.

For students with disabilities, it means that there’s no guarantee of federal support, even when a federal complaint is filed.

“I would say that so many of the places that were reasonably staffed… have been reduced to almost bare bones, nothing. And so even if there are complaints coming in, there’s no way to truly handle them,” says Sims, of Deque.


Indeed, mass firings have led to 90 percent of all student civil rights complaints, including from students with disabilities, being dismissed by the federal government in the second half of last year, according to a nonpartisan government watchdog report published in January.

In the absence of federal help, people with disabilities have turned to the courts. There were more than 3,000 accessibility lawsuits filed in federal court last year, according to legal analysis of court data.

Long-term Goals

Pérez of CAST maintains that advocates should keep on track, focusing on long-term strategy, no matter what happens at the federal level. Accessibility benefits everyone, regardless of their background or disability status, he says.

Sims, of Deque, has also made a “business case” for considering accessibility during the design of products, suggesting that as schools embrace accessibility, the vendors that can show they build accessibility into their products will be rewarded.


Some hope that artificial intelligence tools will help students with disabilities access information on their own, and point toward tools like Aira, an AI tool that aids in remote video interpretation for people with visual impairment.

But even there, disability law experts insist that the federal rule hasn’t actually changed. “The rule is the rule until it isn’t,” wrote Lainey Feingold in early March.


Tech

AI lowered the cost of building software. Enterprise governance hasn’t caught up


Presented by Retool


The logic used to be: buying software is cheaper, faster, and safer for most use cases. Building was reserved for companies with large engineering teams, deep pockets, and problems so specific that no vendor could address them. But now, the cost to code a piece of software has dropped to zero.

Anyone can build their own software now, but enterprise and governance models have yet to catch up. Retool’s 2026 Build vs. Buy Shift Report, based on a survey of 817 builders, traces exactly how this shift is playing out.

The cost curve changed; SaaS pricing didn’t

Two years ago, a custom internal tool might have taken an engineering team weeks or months and cost six figures. Today, an operations lead with the right platform can have a working prototype in a day or two. This structural shift is driven by AI-assisted development and the maturation of enterprise app-building platforms.


Meanwhile, SaaS pricing hasn’t adjusted, still charging per-seat for generic software that requires customization and integration costs on top. When the cost of building drops by an order of magnitude but the cost of buying stays flat, the math changes for every company, not just the ones with large engineering teams.

The data reflects this. Retool’s report found that 35% of teams have already replaced at least one SaaS tool with a custom build, and 78% plan to build more custom tooling in 2026.

Workflow automations and admin tools are among SaaS tools at risk

The shift isn’t happening uniformly. The top SaaS tools respondents have replaced or considered replacing include workflow automations (35%) and internal admin tools (33%), followed by BI tools (29%) and CRMs (25%).

A purchased workflow automation tool has to serve thousands of customers, so it optimizes for the average case — and the average case is nobody’s actual case. Every company’s internal workflows are different. They reflect org structure, compliance requirements, data systems, and business logic unique to that organization.


Internal admin tools carry the same problem: they’re inherently company-specific. These categories were always the most awkward fit for off-the-shelf software, and there’s now an affordable, accessible alternative (MIT’s State of AI in Business reported $2-10 million in savings annually for customer service and document processing tasks).

The replacement pattern tends to be additive rather than wholesale (nobody is just ripping out Salesforce). They’re replacing the specific pieces that never quite fit: an approval flow that required three workarounds, the dashboard that couldn’t connect to their actual data … but those narrow replacements add up. Once a team builds one tool that works better than what they bought, the default question shifts from “What should we buy?” to “Can we build this?”

Builders go around IT, signaling broader procurement challenges

The clearest evidence that procurement processes haven’t kept up with building capability is the scale of shadow IT now occurring inside enterprises. Retool’s report found that 60% of builders have created tools, workflows, or automations outside of IT oversight in the past year — and 25% report doing so frequently.

Even experienced, high-judgment people choose speed over process. Two-thirds of survey respondents (64%) are senior managers and above. Existing procurement cycles weren’t designed for a world where building software takes days rather than months. When people quote the 95% generative AI pilot failure rate, they’re not accounting for the robust grassroots adoption happening under executives’ noses.


Shadow IT at this scale is a demand signal. The people closest to the problems are telling organizations that the existing process can’t keep up — 31% of those going around IT do so simply because they can build faster than IT can provision tools. So suppression isn’t a productive response. The challenge is that the tools built in the shadows are also the ones most likely to stall before they become useful.

A vibe-coded prototype running on sample data is impressive. A production tool connected to your actual Salesforce instance, with role-based access and a security review, is useful. The report found that 51% of builders have shipped production software currently in use by their teams, and among those, about half report saving six or more hours per week.

When building happens in an ungoverned environment, organizations get neither outcome reliably. Someone connects an AI-powered tool to production data with no audit trail, no access controls, and no owner. Multiply that by dozens of builders across an organization, and you have an expanding security surface that IT doesn’t even know exists.[1]

The teams whose homebuilt solutions reach production tend to have three things the others don’t: connectivity to real data sources, a security and permissions model they trust, and a review process for what gets deployed. Channeling builder energy into governed environments, where speed and security aren’t in conflict, is how organizations avoid shadow IT becoming a liability.
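The "permissions model" piece can be as simple as an explicit role-to-action policy that deployment tooling consults before anything ships. A minimal sketch with illustrative role and action names (none of these come from the report):

```python
# Minimal sketch of a role-based permission check for internally built
# tools. Roles, actions, and the policy itself are illustrative.

POLICY = {
    "viewer":  {"read"},
    "builder": {"read", "deploy_to_staging"},
    "admin":   {"read", "deploy_to_staging", "deploy_to_prod", "grant_access"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles get an empty permission set: deny by default.
    return action in POLICY.get(role, set())

print(is_allowed("builder", "deploy_to_staging"))  # True
print(is_allowed("builder", "deploy_to_prod"))     # False: prod needs review
```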


Governance will define the next era of SaaS

The build vs. buy shift is already underway. The more important question now is who controls the environment where that building happens.

Ungoverned building invites security risks and makes the ROI case difficult to close. You can’t measure time saved by tools IT doesn’t know exist, or that run only in one individual’s workflow. You can’t enforce access controls on a prototype that someone connected to production data last Tuesday. And those aren’t hypothetical risks: in Deloitte’s 2026 State of AI in the Enterprise survey of 3,200+ leaders, data privacy and security ranked as the top AI concern at 73%, with governance capabilities close behind at 46%. The 35% of organizations with no AI productivity metrics are missing more than just a dashboard. They’re missing the accountability infrastructure that justifies building over buying in the first place.

The organizations that treat governed environments as a prerequisite for building at scale will be the ones that can actually prove it’s working. The ones that don’t will find out when something breaks.

For a closer look at the data, including how enterprises are approaching AI-assisted building, read the full 2026 Build vs. Buy Shift Report.


[1] The cost of which can be steep: IBM’s 2025 Cost of Data Breach Report found that AI-associated cases cost organizations more than $650,000 per breach.


David Hsu is CEO at Retool.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

