
Tech

When AI turns software development inside-out: 170% throughput at 80% headcount


Many people have tried AI tools and walked away unimpressed. I get it — many demos promise magic, but in practice, the results can feel underwhelming.

That’s why I want to write this not as a futurist prediction, but from lived experience. Over the past six months, I turned my engineering organization AI-first. I’ve shared before about the system behind that transformation — how we built the workflows, the metrics, and the guardrails. Today, I want to zoom out from the mechanics and talk about what I’ve learned from that experience — about where our profession is heading when software development itself turns inside out. 

Before I do, a couple of numbers to illustrate the scale of change. Subjectively, it feels as though we are moving twice as fast. Objectively, here’s how the throughput evolved. Our total engineering headcount drifted from 36 at the beginning of the year to 30, so we got ~170% throughput on ~80% headcount, which matches the subjective ~2x.
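
The per-engineer arithmetic behind those figures can be checked in a couple of lines (an illustrative sanity check, not from the original article):

```python
# Quick check of the article's figures.
headcount_ratio = 30 / 36           # year-end vs. year-start headcount, ~83% ("~80%")
throughput_ratio = 1.70             # ~170% of year-start throughput
per_engineer_gain = throughput_ratio / headcount_ratio
print(round(per_engineer_gain, 2))  # ≈ 2.04, matching the subjective ~2x
```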

[Chart: total engineering throughput over the year]

Zooming in, I picked a couple of our senior engineers who started the year in a more traditional software engineering process and ended it in the AI-first way. [The dips correspond to vacations and off-sites]:

[Charts: the two engineers’ PR throughput over the year]

Note that our PRs are tied to JIRA tickets, and the average scope of those tickets didn’t change much through the year, so it’s as good a proxy as the data can give us. 

Qualitatively, looking at the business value, I actually see even higher uplift. One reason is that, as we started last year, our quality assurance (QA) team couldn’t keep up with our engineers’ velocity. As the company leader, I wasn’t happy with the quality of some of our early releases. As we progressed through the year, and tooled our AI workflows to include writing unit and end-to-end tests, our coverage improved, the number of bugs dropped, users became fans, and the business value of engineering work multiplied.

From big design to rapid experimentation

Before AI, we spent weeks perfecting user flows before writing code. It made sense when change was expensive. Agile helped, but even then, testing multiple product ideas was too costly.

Once we went AI-first, that trade-off disappeared. The cost of experimentation collapsed. An idea could go from whiteboard to a working prototype in a day: From idea to AI-generated product requirements document (PRD), to AI-generated tech spec, to AI-assisted implementation. 

It manifested in some amazing transformations. Our website—central to our acquisition and inbound demand—is now a product-scale system with hundreds of custom components, all designed, developed, and maintained directly in code by our creative director.

Now, instead of validating with slides or static prototypes, we validate with working products. We test ideas live, learn faster, and release major updates every other month, a pace I couldn’t imagine three years ago.

For example, Zen CLI was first written in Kotlin, but then we changed our minds and moved it to TypeScript with no loss of release velocity.

Instead of mocking the features, our UX designers and project managers vibe code them. And when the release-time crunch hit everyone, they jumped into action and fixed dozens of small details with production-ready PRs to help us ship a great product. This included an overnight UI layout change.

From coding to validation

The next shift came where I least expected it: Validation.

In a traditional org, most people write code and a smaller group tests it. But when AI generates much of the implementation, the leverage point moves. The real value lies in defining what “good” looks like — in making correctness explicit.

We support 70-plus programming languages and countless integrations. Our QA engineers have evolved into system architects. They build AI agents that generate and maintain acceptance tests directly from requirements. And those agents are embedded into our codified AI workflows, which let us achieve predictable engineering outcomes.

This is what “shift left” really means. Validation isn’t a stand-alone function; it’s an integral part of the production process. If the agent can’t validate its work, it can’t be trusted to generate production code. For QA professionals, this is a moment of reinvention: with the right upskilling, their work becomes a critical enabler and accelerator of AI adoption.

Product managers, tech leads, and data engineers now share this responsibility as well, because defining correctness has become a cross-functional skill, not a role confined to QA.
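
To make “defining correctness” concrete, here is a minimal, hypothetical example (not Zencoder’s actual tooling) of a requirement encoded directly as executable acceptance tests, the kind of artifact agents can generate and humans review:

```python
# Hypothetical requirement: "A discount reduces the cart total,
# and the total never drops below zero."

def apply_discount(total: float, discount: float) -> float:
    """Apply a discount, clamping the result at zero."""
    return max(total - discount, 0.0)

# Acceptance tests derived line by line from the requirement text.
# An agent can generate and maintain checks like these; the human's
# job is to confirm they faithfully capture the stated intent.
def test_discount_reduces_total():
    assert apply_discount(100.0, 20.0) == 80.0

def test_total_never_negative():
    assert apply_discount(10.0, 50.0) == 0.0
```

The requirement, not the implementation, is the source of truth here: if the tests and the prose disagree, the prose wins and the tests get regenerated.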

From diamond to double funnel

For decades, software development followed a “diamond” shape: A small product team handed off to a large engineering team, then narrowed again through QA.

Today, that geometry is flipping. Humans engage more deeply at the beginning — defining intent, exploring options — and again at the end, validating outcomes. The middle, where AI executes, is faster and narrower.

It’s not just a new workflow; it’s a structural inversion.

The model looks less like an assembly line and more like a control tower. Humans set direction and constraints, AI handles execution at speed, and people step back in to validate outcomes before decisions land in production.

Engineering at a higher level of abstraction

Every major leap in software raised our level of abstraction — from punch cards to high-level programming languages, from hardware to cloud. AI is the next step. Our engineers now work at a meta-layer: Orchestrating AI workflows, tuning agentic instructions and skills, and defining guardrails. The machines build; the humans decide what and why.

Teams now routinely decide when AI output is safe to merge without review, how tightly to bound agent autonomy in production systems, and what signals actually indicate correctness at scale: decisions that simply didn’t exist before.

And that’s the paradox of AI-first engineering — it feels less like coding, and more like thinking. Welcome to the new era of human intelligence, powered by AI.

Andrew Filev is founder and CEO of Zencoder

Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


How to back up your iPhone & iPad to your Mac before something goes wrong


Backing up your iPhone or iPad to your Mac is the fastest and most reliable way to protect your data, and is especially useful before updates, repairs, or device replacement.


Backing up your iPhone or iPad to your Mac remains the fastest and most complete way to protect your data before updates, repairs, or hardware changes. Apple built local backup support directly into macOS through Finder, allowing full-device backups without relying on an internet connection.
Local backups are like full system snapshots, saving your device settings, messages, app data, and media stored on your device. Backing up to iCloud does save your data, but restoring from a Mac is faster than restoring from iCloud because the data transfers directly over USB.
Continue Reading on AppleInsider | Discuss on our Forums


EE TV is using AI to help you find something to watch


EE is taking aim at one of streaming’s biggest annoyances: endlessly scrolling for something to watch.

The company has launched Smart Search, a new AI-powered feature on EE TV. It lets users find content simply by describing what they’re in the mood for.

Instead of typing exact titles, Smart Search understands more natural queries like “a funny detective show” or even a quote from a scene. It then pulls results from across live TV, on-demand services and integrated streaming apps. As a result, it presents everything in one place.

The idea is simple: less app-hopping, more watching.

Alongside it, EE is introducing Mood Matcher, another AI-driven tool designed to tackle the “what should we watch?” problem. Users answer a few quick prompts about mood, genre or themes. The system then serves up tailored recommendations. This is something EE says is particularly useful when multiple people are trying to agree on what to watch.

The launch leans heavily on a very real problem. EE’s own research suggests 41% of viewers struggle to discover new content, while 45% fall back on rewatching shows just to avoid the effort of choosing. Perhaps more tellingly, 38% say deciding what to watch causes household tension, which probably sounds familiar.

EE TV Box Pro design
Image Credit (Trusted Reviews)

There’s also the issue of fragmentation. With 42% of users relying on their phones to find content and 61% wanting a more unified viewing experience, EE is positioning Smart Search as a way to bring everything together into a single interface.

That broader shift, making discovery as important as the content itself, is becoming a key battleground for TV platforms. As analyst Paolo Pescatore puts it, speed and simplicity are now just as critical as having a deep catalogue.

Smart Search and Mood Matcher are available now through the EE TV app on compatible devices. A wider rollout is planned for EE TV Pro and EE TV Box Edge hardware in the near future.

For EE, the pitch is clear: stop searching like a database, and start searching like a human.


What Are The Biggest Limitations Of Supercomputers?

Supercomputers are built to solve very large, difficult problems and do it quickly. Instead of relying on a single processor, supercomputers like El Capitan at Lawrence Livermore National Laboratory and Frontier at Oak Ridge National Laboratory use a large number of processors working together simultaneously. That makes them especially useful for jobs like climate modeling, genetic research, nuclear simulations, artificial intelligence, and identifying flaws in jet engine design.

We’re not talking about quantum computers here, though. A supercomputer is still a classical computer: it uses ordinary bits, which are either 0 or 1, and it solves problems by doing massive numbers of conventional calculations very quickly. A quantum computer works differently by using quantum bits, or qubits. Quantum computing is still largely in the experimental and early developmental stage. Right now, the real work is being done by classical supercomputers, helping scientists explore problems that would take ordinary computers far too long to solve. Some of today’s fastest machines can perform more than a quintillion (a billion billion) calculations per second.

Even so, supercomputers are not all-powerful. Their biggest limitations usually come down to four things: workload scaling, data transfer issues, power consumption, and reliability. Engineers are making progress on all four, but none of these problems has disappeared.

Supercomputers work best when they can break tasks into chunks

One of the biggest limitations is that supercomputers are only useful for certain kinds of tasks. They are best at problems that can be broken into many smaller pieces and worked on concurrently. This is known as parallel processing; for example, a climate model can split the atmosphere and oceans into many sections and calculate each one in parallel. But some problems do not work that way. Some tasks have steps that must happen sequentially. When that happens, a supercomputer cannot speed things up very much. If part of a job has to wait for another task to be finished, the whole system slows down. The answer here often isn’t to add more hardware. Instead, it’s to redesign the software so more of the work can happen simultaneously. 
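
That ceiling imposed by the sequential portion is captured by Amdahl’s law; a small sketch:

```python
def amdahl_speedup(parallel_fraction: float, processors: int) -> float:
    """Upper bound on speedup when only part of a job parallelizes (Amdahl's law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# Even with 10,000 processors, a job that is 95% parallelizable tops out
# near 20x: the 5% that must run sequentially dominates the runtime.
print(round(amdahl_speedup(0.95, 10_000), 1))  # 20.0
```

This is why redesigning software to shrink the serial fraction often pays off more than adding hardware.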

Another major limitation involves the process of moving data around. A supercomputer may be able to calculate incredibly quickly, but it still needs to fetch information from memory. In many cases, the machine is not limited by calculation speed, but by the time it takes to move data from one place to another. To mitigate this challenge, supercomputers store data physically closer to the processors to move it more efficiently. Researchers are also redesigning programs to reuse data more effectively instead of constantly fetching it.
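
One common software answer is loop blocking (tiling): restructure the computation so data loaded into fast memory is reused before it is evicted. A sketch in plain Python of the classic example, a blocked matrix multiply (production codes do this in C or Fortran with cache-sized tiles):

```python
def matmul_tiled(A, B, tile=32):
    """Blocked matrix multiply, C = A x B: the three tile loops keep a small
    working set of A, B, and C in fast memory, so each value fetched from
    slow memory is reused many times before being evicted."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, tile):
        for k0 in range(0, n, tile):
            for j0 in range(0, n, tile):
                for i in range(i0, min(i0 + tile, n)):
                    for k in range(k0, min(k0 + tile, n)):
                        a_ik = A[i][k]  # fetched once, reused across the j tile
                        for j in range(j0, min(j0 + tile, n)):
                            C[i][j] += a_ik * B[k][j]
    return C
```

The arithmetic is identical to the naive triple loop; only the order of memory accesses changes.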

Supercomputers use a lot of power and have a lot of parts that can go wrong

Power use is also a huge limitation. The fastest supercomputers use enormous amounts of electricity. They also need advanced cooling systems to prevent overheating. This creates two problems. First, it makes supercomputers very expensive to run. Second, it raises environmental concerns, especially as people push back on the large data centers needed to house them. Building better supercomputers will depend not only on making them more powerful, but also on making them more energy-efficient.

Another problem is reliability. A supercomputer contains an enormous number of parts: processors, memory units, cables, storage systems, cooling equipment, and more. The more parts a machine has, the more chances there are for something to go wrong. A loose cable, faulty memory chip, or cooling issue can interrupt a major calculation. This matters because some scientific jobs run for hours or days. If something fails midway through, that work may need to be restarted or recovered from a saved checkpoint. Engineers employ tools like the Lawrence Livermore National Laboratory’s Scalable Checkpoint/Restart (SCR) to minimize the amount of work lost when an issue occurs, but there’s no way to fully prevent hardware issues from occurring. After all, building a massive machine also means there are a massive number of things that can break.
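
The checkpoint/restart idea can be sketched in a few lines (a toy illustration of the principle, not SCR’s actual API):

```python
import os
import pickle

def run(steps, ckpt_path, fail_at=None):
    """Toy long-running job: periodically save (next_step, state) to disk so
    a failure costs only the work done since the last checkpoint."""
    start, state = 0, 0
    if os.path.exists(ckpt_path):            # resume from the last checkpoint
        with open(ckpt_path, "rb") as f:
            start, state = pickle.load(f)
    for step in range(start, steps):
        if step == fail_at:                  # simulate a mid-run hardware fault
            raise RuntimeError("simulated hardware failure")
        state += step                        # the "science" being computed
        if step % 10 == 0:                   # checkpoint every 10 steps
            with open(ckpt_path, "wb") as f:
                pickle.dump((step + 1, state), f)
    return state
```

A first call that fails at step 30 loses only the steps since the last checkpoint; a second call resumes from there and produces the same result as an uninterrupted run.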


Daily Deal: StackSkills Premium Annual Pass


StackSkills Premium is your destination for mastering today’s most in-demand skills wherever and whenever your schedule allows. Now, with this exclusive limited-time offer, you’ll gain access to 1,000+ StackSkills courses for just one low annual fee!

Whether you’re looking to earn a promotion, make a career change, or pick up a side hustle to make some extra cash, StackSkills delivers engaging online courses featuring the skills that matter most today. From blockchain to growth hacking to iOS development, StackSkills stays ahead of the hottest trends to offer the most relevant courses and up-to-date information. Best of all, StackSkills’ elite instructors are experts in their fields and are passionate about sharing learnings based on first-hand successes and failures.

If you’re ready to commit to your personal and career growth, you won’t want to pass on this incredible all-access pass to the web’s top online courses. It’s on sale for $60.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.


Flipsnack and the shift toward motion-first business content with living visuals


Interactive content now generates 52.6% higher engagement than static formats, with users spending significantly longer interacting with dynamic media and showing higher recall for brands that use it. In practical terms, that shift may have transformed expectations around how digital content should be produced, especially in commerce and B2B environments, where attention is often a […]

This story continues at The Next Web


See The Computers That Powered The Voyager Space Program


Have you ever wanted to see the computers behind the first (and for now only) man-made objects to leave the heliosphere? [Gary Friedman] shows us, with an archived tour of JPL building 230 in the ’80s.

A NASA employee picks up a camcorder and decides to record a tour of the place “before they replace it all with mainframes”. They show us computers that would seem prehistoric compared to anything modern; early Univac and IBM machines whose power is outmatched today by even an ESP32, yet made the Voyager program possible all the way back in 1977. There are countless peripherals to see, from punch card writers to Univac debug panels where you can see the registers, and from impressive cabinets full of computing hardware to the zip-tied hacks “attaching” a small box they call the “NIU”, dangling off the inner wall of the cabinet. And don’t forget the tape drives that are as tall as a refrigerator!

We could go on ad nauseam, nerding out about the computing history, but why don’t you see it for yourself in the video after the break?

Thanks to [Michael] for the tip!


Invences Provides Smart Telecom Networks to Small Firms


To stay competitive, many small businesses need advanced wireless communication networks, not only to communicate but also to leverage technologies such as artificial intelligence, the Internet of Things, and robotics. Often, however, the businesses lack the technical expertise needed to install, configure, and maintain the systems.

Bhaskara Rallabandi, who spent more than two decades working for major telecom companies, decided to use his expertise to help small businesses. Rallabandi, an IEEE senior member, is an expert certified by the International Council on Systems Engineering.

Invences
Cofounder: Bhaskara Rallabandi
Founded: 2023
Headquarters: Frisco, Texas
Employees: 100

In 2023 he helped found Invences, a telecommunications automation company headquartered in Frisco, Texas.

Invences services include designing, building, and installing data centers, as well as cost-effective and secure wireless, private, IoT, and virtual communications networks.

The company has set up systems for farms, factories, and universities in rural and urban areas including underserved communities. Its mission, Rallabandi says, is to “build autonomous, ethical, and sustainable networks that connect communities intelligently.”

For his work, he was recognized last year for “entrepreneurial leadership in founding and scaling a U.S.-based technology company, advancing innovation in 5G/6G and Open RAN [radio access network], shaping global standards, and inspiring future leaders through mentorship and community impact” with the IEEE-USA Entrepreneur Achievement Award for Leadership in Entrepreneurial Spirit.

Building a telecommunications career

He began his telecommunications career in 2009 as a manager and principal network engineer at Verizon’s Innovation Labs in Waltham, Mass. He and his team ran some of the earliest long-term evolution and evolved packet core performance trials. (LTE is the 4G wireless broadband standard for mobile devices. EPC is the IP-based, high-performance core network architecture for 4G LTE networks.)

That work at Innovation Labs, he says, was key to the development of the first 4G systems. It set the stage for scalable, interoperable broadband architectures that underpin today’s 5G and 6G designs.

“We built the first bridge between legacy and cloud-native networks,” he says.

He left in 2011 to join AT&T Labs in Redmond, Wash. As senior manager and principal solutions architect, he oversaw the design, integration, and testing of the company’s next-generation wireless systems. He also led projects that redefined network automation and set up cloud computing systems, including FirstNet, the nationwide broadband network for first responders, and VoLTE, voice over LTE, which carries voice calls over the LTE data network.

In 2018 Rallabandi was hired as a principal and a senior manager of engineering at Samsung Networks Division’s Technology Solutions Division, in Plano, Texas. He led the development of 5G virtualization and Open RAN initiatives, which enable more flexible, scalable, and efficient large network deployments and interoperability among vendors.

Designing networks for small businesses

Feeling that he wasn’t reaching his full potential in the corporate world, and to help small businesses, he opted to start his own venture in 2023 with his wife, Lakshmi Rallabandi, a computer science engineer. She is Invences’s CEO, and he is its founding principal and chief technology advisor.

Invences, which is self-funded and employs about 100 people, has more than 50 customers from around the world.

“I wanted to do something more interesting where I could use the knowledge I gained working for these big companies to fill the gaps they overlooked in terms of automation” for small businesses, he says. “I have a team of people who, combined, have 200 years of technology experience.”

The startup builds networks that simplify its clients’ operations and reduce their costs, he says.

Instead of duplicating how major telecom carriers build networks for dense urban areas, he says, his designs reimagine the network architecture to lower its complexity, costs, and operational overhead.

The systems integrate new technologies such as Open RAN, virtualized RAN, digital twins, telemetry, and advanced analytics. Some networks also incorporate agentic AI, an autonomous system that runs independently of humans and uses AI agents that plan and act across the network. Digital twins evaluate the agent’s decisions before releasing them.

“Autonomy is not about removing humans from the loop,” Rallabandi says. “It is about giving systems the ability to manage complexity so humans can focus on intent and outcomes.”

Rallabandi also has worked on AI-driven telecom observability technologies designed to allow networks to detect anomalies and optimize performance automatically.

He has developed a virtual O-RAN innovation lab, where clients can test the interoperability of their 5G systems, try out their enhancements, run trials of future functions, and experiment with updates.

Invences partnered with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. FarmGrid used private 5G networks, edge-computing AI, and digital twins to make the operations more efficient.

“The project connects farms with sensors, analytics platforms, and autonomous equipment to enable precision agriculture, water optimization, and real-time decision-making,” Rallabandi says.

IEEE Senior Member Bhaskara Rallabandi talks about partnering with Trilogy Networks to build the FarmGrid platform for farms in Fargo, N.D., and Yuma, Ariz. (Video: TECKNEXUS)

Paying it forward through IEEE programs

Rallabandi says he believes staying involved with IEEE is important to his career development and a way to give back to the profession. He is a frequent invited speaker at IEEE conferences.

He is active with IEEE Future Networks and its Connecting the Unconnected (CTU) initiative. Members of the Future Networks technical community work to develop, standardize, and deploy 5G and 6G networks as well as successive generations.

CTU aims to bridge the digital divide by bringing Internet service to underserved communities. During its annual challenge, Rallabandi works with the winning students, researchers, and innovators to help them turn their concepts into affordable, cost-effective options.

“CTU represents the best of IEEE,” he says. “It is about taking innovation out of conferences and into communities that need it the most.

“Connectivity should not be a luxury. Rural communities deserve an infrastructure that fits their needs.”

He participates in the recently launched IEEE Future Networks Empowerment Through Mentorship initiative, which helps innovators, entrepreneurs, and startups expand their companies by educating them about finance, marketing, and related concepts.

“IEEE gives me both a voice and a responsibility,” Rallabandi says. “We’re not just developing technology; we are shaping how humanity connects.”


Singapore’s PixVerse picks Seattle area for its first U.S. office amid $300M funding round


AI video generator PixVerse is opening an office in Bellevue, Wash., and released its latest model, which it says “delivers complex scenes with coherent motion and consistent detail.” (PixVerse Image)

PixVerse, the Singapore-based AI video generator, is establishing its first U.S. office in Bellevue, Wash., and on Monday released the newest version of its model.

Earlier this month the startup announced a $300 million Series C round that raised its valuation to more than $1 billion and into unicorn territory. Unnamed sources cited by Bloomberg said the funds would help “accelerate its global expansion and target enterprise customers across North America and Asia.”

John He will lead the Seattle-area office as U.S. general manager, builder and chief of staff. He is PixVerse’s first and so far only U.S. employee, but is making job offers and hopes to build a team of six within the next couple of months. The Bellevue office will initially focus on product marketing and sales, with plans to expand into AI research and engineering this summer.

He is temporarily working out of an extraSlice co-working space while searching for a permanent office in downtown Bellevue. 

The company launched in 2024 and has at least 110 employees across its Singapore and Beijing offices, He said. There are also plans to open a second U.S. office in San Francisco.

Key investors in PixVerse include Alibaba Capital Partners, Ant Group and 37 Interactive Entertainment, according to PitchBook.

He joined the startup from Salesforce, saying he was attracted to PixVerse’s mission.

“It’s very simple,” he said. “They want to turn everyone’s imagination into reality.”

AI video generation tools have sparked major debates over ethics, misuse and sustainability. That includes concerns and legal challenges over deepfakes, copyright violations and the improper use of intellectual property. There are also significant environmental costs in energy and water usage given the high computational demands of AI video production.

He defended the technology, saying that it doesn’t aim to replace human creativity and that artists using PixVerse are able to increase their earnings.

“It empowers regular people to do a better job,” he said.

PixVerse says it has more than 100 million users across 175 countries. Its latest model adds “precision camera control, expressive character performance, and one-click commercial output,” the company says.

Last week, The Wall Street Journal and others reported that OpenAI was shutting down Sora, its AI video generation platform used by consumers, filmmakers and other professionals. The decision, WSJ said, would allow OpenAI to focus on productivity tools for enterprise and individual users.

OpenAI employees in the past have questioned the computational costs of the Sora technology and unproven demand from customers, WSJ added.

Other competitors in the space include Runway ML, Kling AI, Higgsfield and products from Google and Adobe.


Pete Hegseth’s War On Truth


from the good-news-only-is-not-journalism dept

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Martha Gellhorn stowed away on a hospital ship to become the only woman journalist to land on Normandy Beach on D-Day. She carried stretchers before writing her harrowing account of the invasion.

The New Yorker’s famously epicurean writer A.J. Liebling subsisted on military rations and came under fire during World War II to describe what it was like for the soldiers and sailors at war.

Syndicated columnist Ernie Pyle died, in a helmet and Army fatigues, among some of the troops whose names and hometowns he carefully included in his dispatches. “At this spot, the 77th Infantry lost a buddy,” read the makeshift sign posted at the place where a Japanese machine gun bullet felled him.

Those reporters told stories of war in all its gore and its glory, its exhilaration and its ennui. Others have laid bare the anxiety and doubts.

Veteran Vietnam correspondent Neil Sheehan broke the story of the Pentagon Papers, which showed how government officials deceived the public about the Vietnam war. Sheehan won a Pulitzer Prize for his book, “A Bright Shining Lie,” which chronicled the war’s impact on idealists who once believed in it, through the story of his relationship with an inside source.

Well before bombs started dropping on Iran and President Donald Trump began to tease the notion of a ground invasion, his defense secretary, Pete Hegseth, began putting obstacles in the way of the reporters with the most experience covering the nation’s military. While Hegseth’s moves haven’t stopped the reporters from doing their jobs, it has made it harder for them to keep the public informed.

As someone who worked as a Washington correspondent for decades, I worry that these obstacles could limit the number of reporters who have the experience with – and trust of – key sources to do the kind of in-depth, nuanced journalism that a war, with its price in lives and resources, deserves.

Corralling the watchdogs

Generally, war correspondents need the cooperation of the military they are covering to get to the front. For the U.S. press, that requires relationships and credibility at the Pentagon.

Early in 2025, Hegseth ordered major news organizations to give up their desks in the Pentagon press room to MAGA favorites. NPR’s desk went to Breitbart News. Roaming the hallways, where reporters sometimes found sources who would deviate from the company line, became verboten.

Eventually, the area in the Pentagon where reporters were allowed was circumscribed to a single corridor outside the press room – even though the public affairs officers who worked most closely with reporters were in an office on the other side of the 6½-million-square-foot building.

Then Hegseth conditioned press credentials on reporters agreeing to restrictions on what information they could gather and report, effectively giving military brass the right to censor or sanitize their reports.

As a result, almost the entire Pentagon press corps, which included outlets ranging from The Associated Press to The New York Times to Fox News and USNI News, which covers the Navy, moved out of the building in October 2025. Some have been invited back for the press briefings Hegseth and Gen. Dan Caine, chairman of the Joint Chiefs of Staff, have begun to give on progress of the battle in Iran.

But after the first of these briefings, the Pentagon abruptly banned photographers from attending, reportedly because Hegseth’s staff found some of their images of him to be unflattering.

Secretary on defense

Gone are the off-camera “background” briefings where Department of Defense brass could give trusted reporters greater context and nuance for battlefield decisions. Gone are the impromptu hallway meetings where reporters have, with luck or persistence, picked up information that deviates from an administration’s agreed-upon script.

Also not in evidence, at least not so far: the deployment of the kind of journalistic embed program that the Pentagon used during the Iraq war to give the American people an up-close look at troops in the conflict zone.


How might that affect what you, the public, get to know? It was a combination of an anonymous tip and insider access that led the legendary investigative reporter Seymour Hersh to break the devastating story of My Lai, the massacre of civilians by American soldiers during the Vietnam War.

At the made-for-TV briefings he does hold, Hegseth devotes most of the session to questions from outlets such as the Epoch Times, The Daily Caller and LindellTV – owned by Mike Lindell, the head of the well-known pillow company.

At one recent briefing, one of the favored new cadre tossed Hegseth a shameless softball. Referring to American troops in the Middle East, the questioner asked: “What is your prayer for them?”

Yet as hostilities drag on, even some among Hegseth’s chosen press corps have begun to ask irksome questions about the war. The normally Trump-friendly Daily Caller ran a less-than-flattering piece about the president berating a reporter for asking about troop deployments.


On March 4, 2026, Hegseth accused journalists of focusing on war casualties to make “the president look bad.” On March 13, Hegseth castigated as “more fake news” CNN’s report that the Trump administration had underestimated the impact of the war on shipping traffic in the Strait of Hormuz.

“The sooner David Ellison takes over that network, the better,” Hegseth concluded, adding fuel to the speculation that a Trump supporter who won a bidding war for CNN’s corporate parent is going to turn the network into a more administration-friendly outlet.

Soon after, Federal Communications Commission chairman Brendan Carr threatened network broadcast licenses over coverage critical of the administration’s conduct of the war. Echoing Carr’s threats the next day: the president himself.

‘Be a Marine’

The Trump administration is not alone in its disdain for a free press: Israel has long been notorious for restricting press access from areas where it is conducting military operations.


Leaders of the theocratic Iranian regime are even worse; the country is cited by press freedom advocate Reporters Without Borders as “one of the world’s most repressive countries in terms of press freedom.”

But the United States has historically distinguished itself by making freedom its calling card, even – or perhaps especially – in wartime.

“The news may be good, or bad. We shall tell you the truth,” Voice of America, a U.S. government-launched radio network, promised – in German – in its very first broadcast to Nazi Germany in 1942.

Now, however, the Trump administration is busy trying to undermine the editorial independence of Voice of America, which broadcasts news to countries that don’t have a free press.


Pentagon reporters are continuing to find ways to get around the propaganda. NPR’s Tom Bowman told me that he takes inspiration from a pep talk he overheard a military source deliver to another reporter crestfallen over the lack of access.

“Quit whining and be a Marine,” the official said. “Go over, under or around the obstacle. Find a way to do it.”

Most reporters and their organizations are doing just that, finding sources outside the administration, like the ones in Congress who told The Hill how much money the war is costing taxpayers per day. And they’re continuing to get information from sources on the inside, like the ones who told The Wall Street Journal that Trump’s military advisers warned him that Iran might block the Strait of Hormuz, but that he opted for war anyway.

So far, neither Hegseth’s obstacle course nor threats from the White House and the FCC have stopped the press from reporting stories or asking questions that the administration would rather not see or hear.


But restrictions on press freedom have a corrosive effect. We have already seen how Trump, using lawsuits and licensing threats, has made corporate media owners think twice about pursuing news he doesn’t like.

Seasoned Pentagon reporters will still find ways to get to sources they already have. But Hegseth’s tactic of blocking press access to the military keeps reporters from developing new sources and keeps new reporters from building the relationships they need to become seasoned Pentagon reporters.

Americans have long been able to understand the triumphs and tribulations of American troops at war, and to make intelligent decisions about whether they approve of a war’s cost, because a free press has been able to tell the story – good or bad. That tradition is now at risk.

Kathy Kiely is Professor and Lee Hills Chair of Free Press Studies at the University of Missouri-Columbia


Tech

Recreating One of the First Hackintoshes


Apple’s Intel era was a boon for many, especially for software developers, who were able to bring their software to the platform much more easily than in the PowerPC era. Macs at the time could even run Windows fairly easily, which was previously unheard of. A niche benefit for a few was that it became much easier to build Hackintosh-style computers, machines assembled from hardware not explicitly sanctioned by Apple but tricked into running OS X nonetheless. Although the Hackintosh scene exploded during this era, it actually goes back much further, and [This Does Not Compute] has put together one of the earliest examples, dating all the way back to the 1980s.

The build began with a Macintosh SE that had its original motherboard swapped out for one with a CPU accelerator card installed. This left the original motherboard free, and rather than let it accumulate as a spare part, [This Does Not Compute] decided to use it to investigate the Hackintosh scene of the late 80s. A few publications put out at the time documented how to get this done, so he followed them as guides and got to work. The only original Apple part needed in that era was the motherboard, which at the time could be found used for a bargain price. The rest of the parts could be sourced from PC components, which could also be found for lower prices than most Mac hardware. The cases at the time would be literally hacked together as well, but in the end a working Mac would come out of the process at a very reasonable cost.

[This Does Not Compute]’s case isn’t scrounged from 80s parts bins, though. He’s using a special beige filament to print a case with the appropriate color aesthetic for a computer of this era. There are also some modern parts that make this style of computer a little easier to use in today’s world, like a card that lets the Mac output a VGA signal, an SD card reader, and a much less clunky power supply than the original would have had. He is using an original floppy disk drive, though, so not everything needs to be modernized. But with these classic Macintosh computers, modernization can go to whatever extreme suits your needs.

Thanks to [Stephen] for the tip!
