Ctrl-Alt-Speech: Think Globally, Stack Locally

from the ctrl-alt-speech dept

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation‘s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this week’s roundup of the latest news in online speech, content moderation and internet regulation, Mike is joined by Konstantinos Komaitis, Senior Resident Fellow for Global and Democratic Governance at the Digital Forensics Research Lab (DFRLab) at the Atlantic Council. Together, they discuss:

Filed Under: content moderation, eu, europe, grok, iran, social media

Companies: meta, oracle, starlink, tiktok, twitter, upscrolled, x


Anthropic Supply-Chain-Risk Designation Halted by Judge

Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for the generative AI company as it tries to preserve its business and reputation.

“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote in justifying the temporary relief. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”

Anthropic and the Pentagon did not immediately respond to requests for comment on the ruling.

The Department of Defense, which under Trump calls itself the Department of War, has relied on Anthropic’s Claude AI tools for writing sensitive documents and analyzing classified data over the past couple of years. But this month, it began pulling the plug on Claude after determining that Anthropic could not be trusted. Pentagon officials cited numerous instances in which Anthropic allegedly placed or sought to put usage restrictions on its technology that the Trump administration found unnecessary.

The administration ultimately issued several directives, including designating the company a supply-chain risk, which have had the effect of slowly halting Claude usage across the federal government and hurting Anthropic’s sales and public reputation. The company filed two lawsuits challenging the sanctions as unconstitutional. In a hearing on Tuesday, Lin said the government had appeared to illegally “cripple” and “punish” Anthropic.

Lin’s ruling on Thursday “restores the status quo” to February 27, before the directives were issued. “It does not bar any defendant from taking any lawful action that would have been available to it” on that date, she wrote. “For example, this order does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers, so long as those actions are consistent with applicable regulations, statutes, and constitutional provisions.”

The ruling suggests the Pentagon and other federal agencies are still free to cancel deals with Anthropic and ask contractors that integrate Claude into their own tools to stop doing so, but without citing the supply-chain-risk designation as the basis.

The immediate impact is unclear because Lin’s order won’t take effect for a week. And a federal appeals court in Washington, DC, has yet to rule on the second lawsuit Anthropic filed, which focuses on a different law under which the company was also barred from providing software to the military.

But Anthropic could use Lin’s ruling to demonstrate to some customers concerned about working with an industry pariah that the law may be on its side in the long run. Lin has not yet set a schedule for a final ruling.


Landmark case finds Meta, YouTube addictive to children

‘These verdicts mark an unsurprising breaking point,’ said Forrester VP research director Mike Proulx.

A landmark legal case has found that Meta and YouTube are designed to be addictive to children. A day earlier, Meta lost a child safety lawsuit, which found that its platforms’ design features enable child sexual exploitation.

The mounting legal challenges are being heralded by some as Big Tech’s ‘Big Tobacco moment’, aiming to address some of the harm social media platforms cause to their youngest users.

A jury in Los Angeles deliberated for nine days and concluded that Meta and YouTube are liable to pay the 20-year-old plaintiff behind the lawsuit a total of $6m in damages. Meta has been assigned 70pc of the financial responsibility, and YouTube 30pc.

Half of each company’s penalty will be used to compensate the plaintiff for her losses, including mental health support, while the other half consists of punitive damages intended to punish the companies.
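As a quick sanity check, the split described above works out as follows (a minimal sketch; the figures come from the report, and the variable names are illustrative):

```python
# Damages allocation described in the verdict: $6m total,
# Meta assigned 70% and YouTube 30%, with each company's share
# split evenly between compensatory and punitive damages.
total_damages = 6_000_000
shares = {"Meta": 0.70, "YouTube": 0.30}

for company, share in shares.items():
    owed = total_damages * share
    compensatory = owed / 2  # compensates the plaintiff's losses
    punitive = owed / 2      # punishes the company
    print(f"{company}: ${owed:,.0f} total "
          f"(${compensatory:,.0f} compensatory, ${punitive:,.0f} punitive)")
```

Under this split, Meta’s $4.2m share and YouTube’s $1.8m share each divide into equal compensatory and punitive halves.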

Kaley GM’s lawsuit, filed in 2022, also included TikTok and Snapchat; however, both of them have since settled outside of court.

The young plaintiff said she began using YouTube from the age of six, and Instagram from nine. One day, she spent 16 hours on Instagram, she said. The plaintiff blamed the platforms for inflicting harm, including depression and body dysmorphia.

Her lawsuit is one of thousands currently pending, which together could impose serious financial damages on the companies involved and help change the legal landscape under which social media platforms operate.

Meta and Google said they disagreed with the verdict. Google said it plans to appeal, while Meta said it is evaluating its legal options.

“This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site,” Google added.

Attempts have been made in recent years to bolster child safety on social media, including a controversial underage social media ban that took effect in Australia and is currently being debated in several European countries. Platforms are also beginning to self-police.

‘Traditional’ social media aside, the advent of generative AI tools has added to the difficulty of protecting users online, as seen with Grok, where users can prompt the chatbot to undress people in pictures and videos.

“These verdicts mark an unsurprising breaking point. Negative sentiment toward social media has been building for years, and now it’s finally boiled over,” said Forrester’s VP research director Mike Proulx.

“This problem sits at the intersection of social media companies’ platform responsibility, years of government regulatory inaction, and the role parents and educators play in helping kids build healthier digital habits.

“These verdicts aren’t just about social media’s past. They’re a dire warning about how we handle the next wave of technology.”



Senators Elizabeth Warren and Josh Hawley Push for Data Center Energy Transparency

In a bipartisan team-up, Democratic Sen. Elizabeth Warren and Republican Sen. Josh Hawley are demanding more transparency regarding the energy use of data centers.

The pair sent a letter to the Energy Information Administration on Thursday, urging the EIA to “establish a mandatory annual reporting requirement for data centers.” Wired reported the news earlier. 

Data centers have become a major topic of debate, as tech giants like Amazon Web Services, Google, Meta and Microsoft continue to buy massive amounts of land to house artificial intelligence data centers. While some landowners are taking the payouts, others — like a Kentucky woman and her mother, who turned down $26 million to sell their land — are holding out because of their opposition to data centers.

The interested buyer in Kentucky remains anonymous, but the landowner told WLKY they were described as a “major artificial intelligence company.”

Not only do data centers need large plots of land for their infrastructure, but they also require substantial water and electricity to operate. The exact amounts are not always known, which is why the senators are urging the change. 

The collected information would help with grid planning and “will support policymaking to prevent large companies from increasing electricity costs for American families,” Warren and Hawley’s letter stated in part. 

BloombergNEF reports that by 2035, the energy demand for data centers will more than double.

On Wednesday, Rep. Alexandria Ocasio-Cortez and Sen. Bernie Sanders introduced a bill to pause all data center construction until the government enacts safeguards. 

“AI and robotics are creating the most sweeping technological revolution in the history of humanity,” Sanders said in a statement. “The scale, scope and speed of that change is unprecedented. Congress is way behind where it should be in understanding the nature of this revolution and its impacts.” 


Reddit cracks down on bots with new labels and human verification

The move comes just weeks after social aggregator Digg, which once aimed to rival Reddit, shut down its app, citing an inability to control a surge of bots. Reddit, by contrast, appears determined to tackle the problem head-on.

Apple Discontinues Mac Pro – Slashdot

Apple has discontinued the Mac Pro and says it has no plans for future models. “The ‘buy’ page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed,” reports 9to5Mac. From the report: The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.


Wikipedia cracks down on the use of AI in article writing

As AI makes inroads into the worlds of editorial and media, websites are scrambling to establish ground rules for its usage. This week, Wikipedia banned the use of AI-generated text by its editors — although it stopped short of banning AI outright from the site’s editorial processes.

In a recent policy change, the site now states that “the use of LLMs to generate or rewrite article content is prohibited.” This new language updates and clarifies previous, vaguer language that stated that LLMs “should not be used to generate new Wikipedia articles from scratch.”

AI in Wikipedia articles has become a contentious issue among the site’s sprawling, volunteer-driven community of editors. 404 Media reports that the new policy, which was put to a vote by the site’s editors, garnered majority support — 40 to 2.

That said, the new policy still makes room for continued AI use in some editorial processes.

“Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the new policy states. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”


AI smart glasses win $1.4 million prize as dementia care shifts toward assistive tech and everyday independence tools

  • CrossSense AI smart glasses gain recognition as funding flows into dementia support tools
  • $1.4 million prize reflects growing reliance on technology in cognitive care strategies
  • Early results suggest benefits, but long-term clinical effectiveness is yet to be confirmed

The Longitude Prize on Dementia has awarded £1 million (roughly $1.4 million) to a smart-glasses system designed to support people living with dementia.

Backed by Alzheimer’s Society and Innovate UK, the prize is a major incentive for practical innovation rather than theoretical research.


America’s Self-Proclaimed Free Speech Warrior, Brendan Carr, Gets A Letter Documenting His First Amendment Violations

from the censorial-dipshit dept

For years, certain folks on the left kept insisting they wanted to bring back the Fairness Doctrine — the old FCC policy that required broadcasters to present “both sides” of controversial issues. Many of us in the tech policy world kept explaining why that was a terrible idea, one ripe for abuse and fundamentally at odds with the First Amendment. The FCC itself repealed the Doctrine back in 1987, partly because it found that compelling broadcasters to present multiple views actually reduced the quality and volume of coverage on important issues — the exact opposite of what it was supposed to do. The requirement to air “both sides” of a controversial story was the kind of burden that just made the broadcast media less willing… to cover controversial stories at all.

Well, congratulations to everyone who wanted to reanimate that corpse. FCC Chairman Brendan Carr is doing something remarkably similar — except he’s only using it in one direction (the other problem with the Fairness Doctrine: it depends entirely on the enforcers), to punish outlets that report things the Trump administration doesn’t like, while conveniently leaving alone outlets that parrot the administration’s preferred narratives.

We’ve been covering Carr’s censorial ambitions for a while now. When Trump picked Carr to chair the FCC, we noted that despite all the “free speech warrior” branding from the administration and the credulous political press that repeated it, Carr had made it abundantly clear he wanted to be America’s top censor. And he’s delivered on that promise with remarkable enthusiasm — going after CBS over “60 Minutes”, threatening ABC over Jimmy Kimmel’s jokes, and most recently threatening to revoke broadcast licenses of outlets that accurately report on the disastrous war in Iran.

Now, a broad coalition of more than 80 legal scholars, former FCC officials, and civil society organizations — organized by TechFreedom and signed by groups ranging from the ACLU to EFF to the Knight First Amendment Institute to the Institute for Free Speech — has sent a formal letter to Carr laying out, in meticulous legal detail, exactly how his threats violate the First Amendment. I’m proud to note that our think tank, the Copia Institute, is among the signatories, and this was a very easy decision.

The letter is direct about what Carr is doing:

We write concerning your abuse of the “public interest” standard as a weapon against viewpoints you and President Donald Trump do not like. You assert that “[b]roadcasters … are running hoaxes and news distortions – also known as the fake news” in a retweet of President Donald Trump’s complaint that The Wall Street Journal and The New York Times were the “Fake News Media” because of headlines he alleged were misleading. You threatened that broadcasters who engaged in similar reporting would “lose their licenses” if they do not “correct course before their license renewals come up.” The next day, the President threatened broadcasters and programmers with “Charges for TREASON for the dissemination of false information!”

It’s kind of incredible how much of this is absolutely batshit crazy and simply could never have been imagined under any other presidential administration. The President of the United States threatened news outlets with treason charges — which carry the death penalty — for reporting things he didn’t like. And the FCC Chairman who spent years claiming to be a “free speech” absolutist, rather than defending the press from this kind of authoritarian nonsense, was the one who teed it up.

The letter does an excellent job of explaining why Carr’s reliance on the vague and essentially dormant “news distortion” policy is legally bankrupt. There’s an important distinction here that Carr is deliberately blurring: the FCC has an actual, codified Broadcast Hoax Rule that is extremely narrow and specific — it applies only when a broadcaster knowingly broadcasts false information about a crime or catastrophe, where it’s foreseeable that it will cause substantial public harm, and it actually does cause such harm. The FCC has applied it rarely, and typically only in cases involving the outright fabrication of news events like staged kidnappings.

That’s a world apart from what Carr is doing, which is invoking the far vaguer “news distortion” policy to go after headlines the president finds insufficiently flattering. As the letter notes:

[Y]our unsupported claim that unnamed broadcasters are engaged in unspecified “hoaxes,” combined with your invocation of the news distortion policy is plainly unconstitutional: it aims to do something the Supreme Court has forbidden—correcting bias or balancing speech—while its vagueness makes good-faith compliance impossible and invites arbitrary enforcement.

On that Supreme Court point, the letter cites Moody v. NetChoice (you remember: the Supreme Court case that ended Florida social media content moderation law). Recall, this is the very same Court that many expected would be friendly to conservative arguments about tech platforms supposedly “censoring” conservatives, but instead it made it crystal clear that the government has no business trying to reshape private editorial decisions:

In Moody v. Netchoice (2024), the Supreme Court rejected government efforts “to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks is biased.” “On the spectrum of dangers to free expression,” Moody said, “there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.”

The letter also draws on NRA v. Vullo, another unanimous Supreme Court decision which we cite often, which held that “a government official cannot do indirectly what she is barred from doing directly: A government official cannot coerce a private party to punish or suppress disfavored speech on her behalf.” That’s a pretty precise description of what Carr is doing when he posts threats on social media about license renewals while his boss muses about treason prosecutions.

The most damning part of the letter is the receipts on Carr’s own hypocrisy. Back in 2019, Carr himself tweeted: “The FCC does not have a roving mandate to police speech in the name of the ‘public interest.’”

As the letter dryly observes, if the law were as “clear” as Carr now claims, why did he insist the FCC needed to “start a rulemaking” on it?

If, as you now claim, the “law is clear,” you would not have needed to suggest in 2024 that “we should start a rulemaking to take a look at what [the public interest standard] means.” In fact, the “public interest” standard becomes less clear each time you invoke it.

The letter also points out that Carr’s former colleague and mentor Ajit Pai knows how messed up all this is:

Chairman Ajit Pai, your Republican predecessor, could “hardly think of an action more chilling of free speech than the federal government investigating a broadcast station because of disagreement with its news coverage or promotion of that coverage.” You have launched a flurry of such investigations.

And the letter documents that the chilling effect is already working:

Commissioner Anna Gomez has “heard from broadcasters who are telling their reporters to be careful about the way they cover this administration.”

Even Trump-supporting Republican officials like Ted Cruz have had enough of Brendan Carr’s censorial bullshit:

Sen. Ted Cruz (R-TX) understood that this is a “mafioso” tactic “right out of ‘Goodfellas,’” essentially: “‘nice bar you have here, it’d be a shame if something happened to it.’”

The fact that Ted Cruz of all people can see this for what it is should tell you something.

The signatories on this letter are worth noting. Beyond the civil society organizations, you’ve got former FCC officials from both parties, more than fifty First Amendment and communications law scholars from institutions ranging from Harvard to Stanford to Emory, and journalism scholars from across the country. There are people signed onto this letter who don’t agree with each other on much at all.

But on Brendan Carr’s censorship campaign, they all agree — because this really has nothing to do with partisan politics. This is about whether you believe the Constitution means what it says — or whether the First Amendment is just a talking point to wave around when it’s politically convenient and discard when it gets in the way. The same people who spent years fundraising off claims that Biden officials sending cranky emails about COVID misinformation represented an existential threat to free speech are now openly wielding license revocation and treason charges to dictate editorial content.

Look, we know Carr won’t do a damn thing in response to this letter. If anything, he’ll just screenshot parts and post it on X as proof that he’s upsetting the right people. That’s his whole game — the trolling, the culture war posturing, the audition tape for whatever higher office he’s eyeing. He doesn’t actually have to revoke any licenses (and likely couldn’t survive the legal challenge if he tried). The mere threat is the point, because, as the letter explains, the FCC can exercise “regulation by the lifted eyebrow” and hang a “Sword of Damocles” over each broadcaster’s head.

But highlighting the record still matters. When future scholars look back at this period and try to understand how a sitting FCC Chairman openly abandoned the First Amendment in service of a President who thinks “treason” is a synonym for “journalism I don’t like,” the documentation will be there.

And the breadth of the coalition sending this message matters too. This many scholars, former officials, and organizations — many of whom disagree vehemently on plenty of other issues — all looked at what Carr is doing and arrived at the same conclusion: this is unconstitutional, it’s dangerous, and someone needs to say so clearly and publicly, even if the person doing it couldn’t care less.

The letter closes with a quote from the Supreme Court that fits this moment uncomfortably well, drawn from West Virginia Board of Education v. Barnette, decided in 1943 when the country faced actual existential threats:

“[T]here is ‘one fixed star in our constitutional constellation: that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.’”

Brendan Carr has decided he can ignore all that and censor at will. He’ll likely ignore this letter too. But unlike Carr, the record doesn’t forget.

Filed Under: 1st amendment, brendan carr, fairness doctrine, fcc, free speech, jawboning, news distortion, public interest


Blocking The Internet Archive Won’t Stop AI, But It Will Erase The Web’s Historical Record

from the willingly-burning-libraries dept

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
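For readers unfamiliar with the mechanism: robots.txt is the web’s traditional plain-text opt-out, which well-behaved crawlers check before fetching pages, and it can be inspected with Python’s standard library. The sketch below is illustrative only; `ia_archiver` is the user-agent historically associated with the Wayback Machine’s crawler, and the rules and URL are made up for the example.

```python
# Sketch: how a traditional robots.txt rule would block one crawler
# while allowing others, checked with the standard library.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The Wayback Machine's crawler is blocked; everyone else is allowed.
print(parser.can_fetch("ia_archiver", "https://example.com/article"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))    # True
```

The point of the Times’ approach is precisely that it does not rely on this voluntary mechanism: technical blocking prevents archiving regardless of whether a crawler honors robots.txt.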

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Archiving and Search Are Legal 

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 

The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 

Republished from the EFF’s Deeplinks blog.

Filed Under: ai, archives, copyright, culture, fair use, history

Companies: internet archive, ny times, the guardian


AI amplifies whatever you feed it, including confusion

Most organizations are not failing at AI because of technology. They are failing because they do not know which data actually matters, and they are scaling that confusion faster than ever. At a time when investment continues to surge, the expectation is that more intelligence will naturally follow. Instead, many teams are finding themselves overwhelmed. The issue is the inability to distinguish between signal and noise in a way that leads to confident decisions. 

The broader landscape makes this tension hard to ignore. According to the State of Enterprise AI 2026, global spending is projected to reach $2.52 trillion, yet only 14% of CFOs report measurable returns. At the same time, 42% of companies abandoned most of their AI pilots in 2025. These figures point to a systemic disconnect between ambition and execution. As boards demand accountability and leaders look for proof of value, many organizations are confronting a difficult reality: they invested in capability without first ensuring clarity.

The usual explanation is that the data is not clean enough. That is not wrong, but it misses something more fundamental. Clean data has limited value if it is not relevant, connected, or usable in the context of real decisions. Over time, organizations have accumulated dashboards, reports, and tracking systems that create the appearance of visibility while leaving critical questions unresolved. Teams often cannot explain why a metric moves, how it connects to outcomes, or what action should follow. That gap between information and understanding is where progress stalls.

Part of the problem is scale. The volume of data has expanded faster than the systems used to interpret it. Teams track what they can, often without a clear view of why it matters, and the result is an environment filled with metrics that compete for attention. Definitions vary across departments, events are recorded inconsistently, and reporting relies on manual interventions that introduce further distortion. In that environment, it becomes difficult to form a single, coherent narrative. People operate from fragments, and those fragments rarely align. 

This fragmentation becomes more consequential as AI is introduced into the workflow. Systems trained on inconsistent inputs do not resolve ambiguity; they extend it. According to a report, 61% of data leaders say better data quality is helping move AI initiatives into production, yet 50% still identify data quality and retrieval as major barriers. There is also a concerning dynamic emerging around trust. While 65% of leaders believe employees trust the data used for AI, 75% acknowledge gaps in data literacy. That combination creates a situation where decisions are made with confidence but not necessarily with understanding.

There is a belief in some circles that better tools will eventually close this gap. We have seen the opposite. Organizations struggle because their operational systems were never designed to produce reliable signals. When processes are inconsistent, ownership is unclear, and metrics are loosely defined, the data generated from those systems reflects that ambiguity. Signals, which are meant to guide decisions and automation, end up reflecting fragmented realities instead of coherent ones. The outcome is hesitation and misalignment.

The effects show up in subtle but persistent ways. Teams spend more time reconciling numbers than acting on them. Leaders request additional reporting to compensate for uncertainty, which adds more layers without resolving the underlying issue. Priorities shift based on partial views of performance, and coordination across functions becomes more difficult. Over time, this erodes confidence, not just in the data, but in the systems that produce it. The organization moves, but without a shared understanding of direction.

A useful way to think about this is through navigation. Having more instruments in a cockpit does not guarantee a better flight if those instruments are not calibrated to the same reality. Pilots rely on a small number of trusted signals that are consistently defined and clearly understood. In many organizations, the opposite is true. There is an abundance of instrumentation, but little agreement on which signals matter or how they should be interpreted. The result is constant adjustment without meaningful progress. 

The urgency of this issue is reflected in broader research. A report shows that improving data governance has become a top priority for over 40% of leaders, even surpassing some AI-specific initiatives. The reasoning is straightforward: AI and automation amplify the condition of the data they rely on. When that condition is poor, the impact grows quickly, affecting both operational performance and strategic outcomes. This is a question of how organizations define, manage, and use information in practice.

Addressing this requires a shift in focus. The goal is not to build more sophisticated dashboards. It is to establish clarity around what decisions need to be made and what information is required to support them. That begins with defining ownership so that data is tied to accountability. It involves standardizing processes so that events are captured consistently across teams. It requires designing metrics that reflect how work actually happens, not just how it is reported. And it depends on building a data layer that brings these elements together into a coherent, usable view.

Even more important is the human dimension: understanding how people actually work day-to-day. Without that understanding, even well-structured data will fall short of its potential. People need to know not just how to access information, but how to apply it in the context of daily decisions. This is where change management becomes critical: helping teams separate meaningful signals from background noise and act with confidence on that distinction.

For those trying to move forward, there is a practical starting point that often gets overlooked. Identify the questions that are difficult to answer today. These are usually the questions that require excessive effort, multiple sources, or reliance on individual knowledge. They reveal where the gaps exist in how information is captured and structured. Once those gaps are visible, it becomes possible to design systems that address them directly, focusing on relevance and usability instead of volume.

AI will continue to advance, and its potential remains significant. But its effectiveness will always depend on the environment it operates within. Organizations that invest in clarity, in the form of clear processes, clear ownership, and clear signals, will find that technology enhances their capabilities. Those that do not will continue to struggle, regardless of how advanced their tools become. The difference comes down to discernment, and whether it is treated as a priority or an afterthought.
