
Tech

a16z partner Kofi Ampadu to leave firm after TxO program pause

Kofi Ampadu, the partner at a16z who led the firm’s Talent x Opportunity (TxO) fund and program, has left the firm, according to an email he sent to staff that TechCrunch obtained. This comes months after the firm paused TxO and laid off most of its staff.

“During my time at the firm, I was deeply grateful for the opportunity and the trust to lead this work,” Ampadu wrote in the email, sent Friday afternoon, with the subject line “Closing My a16z Chapter.”

“Identifying out-of-network entrepreneurs and supporting them as they sharpened their ideas, raised capital, and grew into confident leaders was one of the most meaningful experiences of my career,” he wrote.

Ampadu led the program, which launched in 2020, for over four years until its pause last November, taking over for the initial leader, Nait Jones. Afterward, Ampadu seems to have worked at a16z’s latest accelerator, Speedrun.

Ampadu’s departure perhaps signals the end of the TxO chapter. The fund and program focused on supporting underserved founders by providing access to tech networks and investment capital through a donor-advised fund. Though some founders spoke highly of the program, others criticized the donor-advised structure as controversial. TxO also launched a grant initiative in 2024 to provide $50,000 to nonprofits that help diverse founders.

Its last cohort was in March 2025, and its indefinite pause came as many top tech names reframed, cut, or eliminated prior public commitments to diversity, equity, and inclusion. We’ve reached out to a16z and Ampadu for comment.

His full note is below:

I moved to the United States three months before my 11th birthday. One month later, I started 6th grade in a school more than 5,000 miles from my home, my friends, and everything familiar. Recently, my mom reminded me that my school required me to enroll as an English-as-a-Second-Language student. My memory immediately returned to how confused I felt. Even at 10 years old, I knew it made no sense that a kid from Ghana, an English-speaking country, was being asked to learn a language he already spoke fluently.

This was a systems requirement, a blanket assumption about what students from certain places could or could not do. That same type of systemic assumption is what we set out to challenge through the Talent x Opportunity Initiative. The venture ecosystem often relies on proxies such as schools, networks, and prior credentials, which can obscure exceptional founders who do not follow the most common paths. TxO invested in and supported these overlooked founders to bridge the gap between talent and opportunity.

During my time at the firm, I was deeply grateful for the opportunity and the trust to lead this work. Identifying out-of-network entrepreneurs and supporting them as they sharpened their ideas, raised capital, and grew into confident leaders was one of the most meaningful experiences of my career.

As I move on to my next chapter, I leave with pride in what we built and gratitude for everyone who helped shape it. Thank you for the trust, the collaboration, and the belief in what is possible. There is more work to do and I am excited to keep building.


Apple Discontinues Mac Pro – Slashdot

Apple has discontinued the Mac Pro and says it has no plans for future models. “The ‘buy’ page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed,” reports 9to5Mac. From the report: The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.


Wikipedia cracks down on the use of AI in article writing

As AI makes inroads into the worlds of editorial and media, websites are scrambling to establish ground rules for its usage. This week, Wikipedia banned the use of AI-generated text by its editors — although it stopped short of banning AI outright from the site’s editorial processes.

In a recent policy change, the site now states that “the use of LLMs to generate or rewrite article content is prohibited.” This new language updates and clarifies previous, vaguer language that stated that LLMs “should not be used to generate new Wikipedia articles from scratch.”

AI in Wikipedia articles has become a contentious issue among the site’s sprawling, volunteer-driven community of editors. 404 Media reports that the new policy, which was put to a vote by the site’s editors, garnered overwhelming support — 40 votes to 2.

That said, the new policy still makes room for continued AI use in some editorial processes.

“Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own,” the new policy states. “Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.”


AI smart glasses win $1.4 million prize as dementia care shifts toward assistive tech and everyday independence tools

  • CrossSense AI smart glasses gain recognition as funding flows into dementia support tools
  • $1.4 million prize reflects growing reliance on technology in cognitive care strategies
  • Early results suggest benefits, but long-term clinical effectiveness is yet to be confirmed

The Longitude Prize on Dementia has awarded £1 million (roughly $1.4 million) to a smart-glasses system designed to support people living with dementia.

Backed by Alzheimer’s Society and Innovate UK, the prize is a major incentive for practical innovation rather than theoretical research.


America’s Self-Proclaimed Free Speech Warrior, Brendan Carr, Gets A Letter Documenting His First Amendment Violations

from the censorial-dipshit dept

For years, certain folks on the left kept insisting they wanted to bring back the Fairness Doctrine — the old FCC policy that required broadcasters to present “both sides” of controversial issues. Many of us in the tech policy world kept explaining why that was a terrible idea, one ripe for abuse and fundamentally at odds with the First Amendment. The FCC itself repealed the Doctrine back in 1987, partly because it found that compelling broadcasters to present multiple views actually reduced the quality and volume of coverage on important issues — the exact opposite of what it was supposed to do. The requirement to air “both sides” of a controversial story was the kind of burden that just made the broadcast media less willing… to cover controversial stories at all.

Well, congratulations to everyone who wanted to reanimate that corpse. FCC Chairman Brendan Carr is doing something remarkably similar — except he’s only using it in one direction (the other problem with the Fairness Doctrine: it depends entirely on the enforcers), to punish outlets that report things the Trump administration doesn’t like, while conveniently leaving alone outlets that parrot the administration’s preferred narratives.

We’ve been covering Carr’s censorial ambitions for a while now. When Trump picked Carr to chair the FCC, we noted that despite all the “free speech warrior” branding from the administration and the credulous political press that repeated it, Carr had made it abundantly clear he wanted to be America’s top censor. And he’s delivered on that promise with remarkable enthusiasm — going after CBS over “60 Minutes”, threatening ABC over Jimmy Kimmel’s jokes, and most recently threatening to revoke broadcast licenses of outlets that accurately report on the disastrous war in Iran.

Now, a broad coalition of more than 80 legal scholars, former FCC officials, and civil society organizations — organized by TechFreedom and signed by groups ranging from the ACLU to EFF to the Knight First Amendment Institute to the Institute for Free Speech — has sent a formal letter to Carr laying out, in meticulous legal detail, exactly how his threats violate the First Amendment. I’m proud to note that our think tank, the Copia Institute, is among the signatories, and this was a very easy decision.

The letter is direct about what Carr is doing:

We write concerning your abuse of the “public interest” standard as a weapon against viewpoints you and President Donald Trump do not like. You assert that “[b]roadcasters … are running hoaxes and news distortions – also known as the fake news” in a retweet of President Donald Trump’s complaint that The Wall Street Journal and The New York Times were the “Fake News Media” because of headlines he alleged were misleading. You threatened that broadcasters who engaged in similar reporting would “lose their licenses” if they do not “correct course before their license renewals come up.” The next day, the President threatened broadcasters and programmers with “Charges for TREASON for the dissemination of false information!”

It’s kind of incredible how much of this is absolutely batshit crazy and simply could never have been imagined under any other presidential administration. The President of the United States threatened news outlets with treason charges — which carry the death penalty — for reporting things he didn’t like. And the FCC Chairman who spent years claiming to be a “free speech” absolutist, rather than defending the press from this kind of authoritarian nonsense, was the one who teed it up.

The letter does an excellent job of explaining why Carr’s reliance on the vague and essentially dormant “news distortion” policy is legally bankrupt. There’s an important distinction here that Carr is deliberately blurring: the FCC has an actual, codified Broadcast Hoax Rule that is extremely narrow and specific — it applies only when a broadcaster knowingly broadcasts false information about a crime or catastrophe, where it’s foreseeable that it will cause substantial public harm, and it actually does cause such harm. The FCC has applied it rarely, and typically only in cases involving the outright fabrication of news events like staged kidnappings.

That’s a world apart from what Carr is doing, which is invoking the far vaguer “news distortion” policy to go after headlines the president finds insufficiently flattering. As the letter notes:

[Y]our unsupported claim that unnamed broadcasters are engaged in unspecified “hoaxes,” combined with your invocation of the news distortion policy is plainly unconstitutional: it aims to do something the Supreme Court has forbidden—correcting bias or balancing speech—while its vagueness makes good-faith compliance impossible and invites arbitrary enforcement.

On that Supreme Court point, the letter cites Moody v. NetChoice (you remember: the Supreme Court case over Florida’s social media content moderation law). Recall, this is the very same Court that many expected would be friendly to conservative arguments about tech platforms supposedly “censoring” conservatives, but instead it made it crystal clear that the government has no business trying to reshape private editorial decisions:

In Moody v. NetChoice (2024), the Supreme Court rejected government efforts “to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks is biased.” “On the spectrum of dangers to free expression,” Moody said, “there are few greater than allowing the government to change the speech of private actors in order to achieve its own conception of speech nirvana.”

The letter also draws on NRA v. Vullo, another unanimous Supreme Court decision which we cite often, which held that “a government official cannot do indirectly what she is barred from doing directly: A government official cannot coerce a private party to punish or suppress disfavored speech on her behalf.” That’s a pretty precise description of what Carr is doing when he posts threats on social media about license renewals while his boss muses about treason prosecutions.

The most damning part of the letter is the receipts on Carr’s own hypocrisy. Back in 2019, Carr himself tweeted: “The FCC does not have a roving mandate to police speech in the name of the ‘public interest.’”

As the letter dryly observes, if the law were as “clear” as Carr now claims, why did he insist the FCC needed to “start a rulemaking” on it?

If, as you now claim, the “law is clear,” you would not have needed to suggest in 2024, that “we should start a rulemaking to take a look at what [the public interest standard] means.” In fact, the “public interest” standard becomes less clear each time you invoke it.

The letter also points out that Carr’s former colleague and mentor Ajit Pai knows how messed up all this is:

Chairman Ajit Pai, your Republican predecessor, could “hardly think of an action more chilling of free speech than the federal government investigating a broadcast station because of disagreement with its news coverage or promotion of that coverage.” You have launched a flurry of such investigations.

And the letter documents that the chilling effect is already working:

Commissioner Anna Gomez has “heard from broadcasters who are telling their reporters to be careful about the way they cover this administration.”

Even Trump-supporting Republican officials like Ted Cruz have had enough of Brendan Carr’s censorial bullshit:

Sen. Ted Cruz (R-TX) understood that this is a “mafioso” tactic “right out of ‘Goodfellas,’” essentially: “‘Nice bar you have here, it’d be a shame if something happened to it.’”

The fact that Ted Cruz of all people can see this for what it is should tell you something.

The signatories on this letter are worth noting. Beyond the civil society organizations, you’ve got former FCC officials from both parties, more than fifty First Amendment and communications law scholars from institutions ranging from Harvard to Stanford to Emory, and journalism scholars from across the country. There are people signed onto this letter who don’t agree with each other on much at all.

But on Brendan Carr’s censorship campaign, they all agree — because this really has nothing to do with partisan politics. This is about whether you believe the Constitution means what it says — or whether the First Amendment is just a talking point to wave around when it’s politically convenient and discard when it gets in the way. The same people who spent years fundraising off claims that Biden officials sending cranky emails about COVID misinformation represented an existential threat to free speech are now openly wielding license revocation and treason charges to dictate editorial content.

Look, we know Carr won’t do a damn thing in response to this letter. If anything, he’ll just screenshot parts and post it on X as proof that he’s upsetting the right people. That’s his whole game — the trolling, the culture war posturing, the audition tape for whatever higher office he’s eyeing. He doesn’t actually have to revoke any licenses (and likely couldn’t survive the legal challenge if he tried). The mere threat is the point, because, as the letter explains, the FCC can exercise “regulation by the lifted eyebrow” and hang a “Sword of Damocles” over each broadcaster’s head.

But highlighting the record still matters. When future scholars look back at this period and try to understand how a sitting FCC Chairman openly abandoned the First Amendment in service of a President who thinks “treason” is a synonym for “journalism I don’t like,” the documentation will be there.

And the breadth of the coalition sending this message matters too. This many scholars, former officials, and organizations — many of whom disagree vehemently on plenty of other issues — all looked at what Carr is doing and arrived at the same conclusion: this is unconstitutional, it’s dangerous, and someone needs to say so clearly and publicly, even if the person doing it couldn’t care less.

The letter closes with a quote from the Supreme Court that fits this moment uncomfortably well, drawn from West Virginia State Board of Education v. Barnette, decided in 1943 when the country faced actual existential threats:

“[T]here is ‘one fixed star in our constitutional constellation: that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein.’”

Brendan Carr has decided he can ignore all that and censor at will. He’ll likely ignore this letter too. But unlike Carr, the record doesn’t forget.

Filed Under: 1st amendment, brendan carr, fairness doctrine, fcc, free speech, jawboning, news distortion, public interest


Blocking The Internet Archive Won’t Stop AI, But It Will Erase The Web’s Historical Record

from the willingly-burning-libraries dept

Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper. 

That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.

But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit. 
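For context, the "traditional robots.txt rules" the Times is reportedly going beyond work like this. Below is a minimal sketch using Python's standard-library `robotparser`; the rules and URLs are illustrative assumptions, and "ia_archiver" is the user-agent string historically associated with the Internet Archive's crawling, not taken from any real site's actual rules:

```python
from urllib import robotparser

# Hypothetical robots.txt a publisher might serve. "ia_archiver" is the
# user-agent historically honored by the Internet Archive's crawler;
# the rules below are invented for illustration.
RULES = """\
User-agent: ia_archiver
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(RULES.splitlines())

# The archival crawler is shut out while any other agent may fetch freely.
print(rp.can_fetch("ia_archiver", "https://example.com/2026/01/story.html"))   # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/2026/01/story.html"))  # True
```

Because robots.txt is purely advisory, crawlers honor it voluntarily; server-side blocking of the kind described above is a meaningful escalation beyond it.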

For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for. 

If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record. 

Archiving and Search Are Legal 

Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works. 
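The point that indexing requires copying can be made concrete with a toy inverted index (the pages and URLs below are invented for illustration): to answer "which pages mention this word?", the indexer must first ingest the full text of every page.

```python
from collections import defaultdict

# Toy corpus standing in for crawled pages (URLs and text are invented).
pages = {
    "/news/a": "the archive preserves the web for future readers",
    "/news/b": "publishers block the archive over ai scraping",
}

# Building the index means reading (copying) every page in full:
# each word maps to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

print(sorted(index["archive"]))     # pages mentioning "archive"
print(sorted(index["publishers"]))  # pages mentioning "publishers"
```

Without access to the underlying text, no such lookup structure can exist — which is the principle the Google Books ruling recognized as transformative.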

The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.

The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.

The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake. 

Republished from the EFF’s Deeplinks blog.

Filed Under: ai, archives, copyright, culture, fair use, history

Companies: internet archive, ny times, the guardian


AI amplifies whatever you feed it, including confusion

Most organizations are not failing at AI because of technology. They are failing because they do not know which data actually matters, and they are scaling that confusion faster than ever. At a time when investment continues to surge, the expectation is that more intelligence will naturally follow. Instead, many teams are finding themselves overwhelmed. The issue is the inability to distinguish between signal and noise in a way that leads to confident decisions. 

The broader landscape makes this tension hard to ignore. According to the State of Enterprise AI 2026, global spending is projected to reach $2.52 trillion, yet only 14% of CFOs report measurable returns. At the same time, 42% of companies abandoned most of their AI pilots in 2025. These figures point to a systemic disconnect between ambition and execution. As boards demand accountability and leaders look for proof of value, many organizations are confronting a difficult reality: they invested in capability without first ensuring clarity.

The usual explanation is that the data is not clean enough. That is not wrong, but it misses something more fundamental. Clean data has limited value if it is not relevant, connected, or usable in the context of real decisions. Over time, organizations have accumulated dashboards, reports, and tracking systems that create the appearance of visibility while leaving critical questions unresolved. Teams often cannot explain why a metric moves, how it connects to outcomes, or what action should follow. That gap between information and understanding is where progress stalls.

Part of the problem is scale. The volume of data has expanded faster than the systems used to interpret it. Teams track what they can, often without a clear view of why it matters, and the result is an environment filled with metrics that compete for attention. Definitions vary across departments, events are recorded inconsistently, and reporting relies on manual interventions that introduce further distortion. In that environment, it becomes difficult to form a single, coherent narrative. People operate from fragments, and those fragments rarely align. 

This fragmentation becomes more consequential as AI is introduced into the workflow. Systems trained on inconsistent inputs do not resolve ambiguity; they extend it. According to a report, 61% of data leaders say better data quality is helping move AI initiatives into production, yet 50% still identify data quality and retrieval as major barriers. There is also a concerning dynamic emerging around trust. While 65% of leaders believe employees trust the data used for AI, 75% acknowledge gaps in data literacy. That combination creates a situation where decisions are made with confidence but not necessarily with understanding.


There is a belief in some circles that better tools will eventually close this gap. We have seen the opposite. Organizations struggle because their operational systems were never designed to produce reliable signals. When processes are inconsistent, ownership is unclear, and metrics are loosely defined, the data generated from those systems reflects that ambiguity. Signals, which are meant to guide decisions and automation, end up reflecting fragmented realities instead of coherent ones. The outcome is hesitation and misalignment.

The effects show up in subtle but persistent ways. Teams spend more time reconciling numbers than acting on them. Leaders request additional reporting to compensate for uncertainty, which adds more layers without resolving the underlying issue. Priorities shift based on partial views of performance, and coordination across functions becomes more difficult. Over time, this erodes confidence, not just in the data, but in the systems that produce it. The organization moves, but without a shared understanding of direction.

A useful way to think about this is through navigation. Having more instruments in a cockpit does not guarantee a better flight if those instruments are not calibrated to the same reality. Pilots rely on a small number of trusted signals that are consistently defined and clearly understood. In many organizations, the opposite is true. There is an abundance of instrumentation, but little agreement on which signals matter or how they should be interpreted. The result is constant adjustment without meaningful progress. 

The urgency of this issue is reflected in broader research. A report shows that improving data governance has become a top priority for over 40% of leaders, even surpassing some AI-specific initiatives. The reasoning is straightforward: AI and automation amplify the condition of the data they rely on. When that condition is poor, the impact grows quickly, affecting both operational performance and strategic outcomes. This is a question of how organizations define, manage, and use information in practice.

Addressing this requires a shift in focus. The goal is not to build more sophisticated dashboards. It is to establish clarity around what decisions need to be made and what information is required to support them. That begins with defining ownership so that data is tied to accountability. It involves standardizing processes so that events are captured consistently across teams. It requires designing metrics that reflect how work actually happens, not just how it is reported. And it depends on building a data layer that brings these elements together into a coherent, usable view.

Even more important is the human dimension: understanding how people actually work day-to-day. Without that understanding, even well-structured data will fall short of its potential. People need to know not just how to access information, but how to apply it in the context of daily decisions. This is where change management becomes critical: helping teams separate meaningful signals from background noise and act with confidence on that distinction.

For those trying to move forward, there is a practical starting point that often gets overlooked. Identify the questions that are difficult to answer today. These are usually the questions that require excessive effort, multiple sources, or reliance on individual knowledge. They reveal where the gaps exist in how information is captured and structured. Once those gaps are visible, it becomes possible to design systems that address them directly, focusing on relevance and usability instead of volume.

AI will continue to advance, and its potential remains significant. But its effectiveness will always depend on the environment it operates within. Organizations that invest in clarity (clear processes, clear ownership, and clear signals) will find that technology enhances their capabilities. Those that do not will continue to struggle, regardless of how advanced their tools become. The difference comes down to discernment, and whether it is treated as a priority or an afterthought.


Looking At A Bike Built For The Apocalypse

So-called bug-out cars are a rather silly venture that serves little purpose beyond snagging your jumper. The odds of a car working well through a nuclear winter are rather minimal. But what about a bicycle? On paper it’s a better choice, with extreme efficiency and reliability, and it runs off whatever sustenance you can find in the barren landscape of a collapsed society. But [Seth] over at Berm Peak proved an apocalypse bike is at least as silly as a bug-out car.

While a utilitarian bike fit for a cross-country trek across a nuclear wasteland can certainly be a reasonable venture, this particular bicycle is not that. This three-wheeled monstrosity of a bicycle (is it still a bicycle if it has three wheels?) was built by [TOMO] for the Bespoked bike show’s apocalypse buildoff. It placed second among a number of strange bikes with features ranging from pedal-driven circular saws to beer keg grills. But this particular build is easily the strangest of the lot.

The features on this custom build are rather extensive, but the star of the show is the trailing-link, two-wheel-drive rear end. The third wheel was thrown on last minute, with a random shock providing some measure of compliance to the rather unwieldy system. While it adds unnecessary complexity, the third wheel does offer the benefit of bringing along a number of spare parts on the last bikepacking trip of a lifetime. Moreover, it can be easily removed to get something resembling a bicycle.

The front of the bike, while an actual bicycle, is likewise a rather strange build. It’s best described as a fat-tired, long-nosed tall cargo bike. The removable cargo rack is quite effective at carrying heavy loads: by keeping the center of gravity near or below the axles, it keeps the bike rideable even when heavily laden. And if ground clearance is needed, simply remove the cargo rack and the bike becomes capable of navigating the nuclear wasteland it was made for.

While this is a silly and questionable bike, it’s certainly not the first strange bike we have seen.

 


Can A $40 Knockoff DeWalt Chainsaw Beat The $130 Original? This Test Found Out

A range of brands make cordless pruning saws, and they’re potentially a very handy addition to your arsenal of outdoor tools. Much like its long-standing rivals Milwaukee and Makita, DeWalt makes a pruning saw that has been copied by knockoff brands that use the same interchangeable batteries as the real thing. Project Farm put all of these saws to the test in a variety of situations, and surprisingly, the knockoff versions of some big-name tools performed impressively well, with the knockoff DeWalt saw not far behind its legitimate counterpart.


In a test that timed how long each saw took to cut through 2×8 lumber, the knockoff DeWalt couldn’t match the real DeWalt saw, with the former taking 5.72 seconds while the latter took just 2.87 seconds. However, the knockoff still outperformed a genuine Makita saw, which took 5.93 seconds to make the same cut.

Another test was designed to see how much downward force each saw could take without stalling. The knockoff DeWalt stalled at 10 pounds, and the real DeWalt stalled at 21 pounds, comfortably beating its counterfeit counterpart. However, Ryobi and Craftsman’s saws both outperformed the real DeWalt, hitting over 30 pounds of downward force before stalling, while the Milwaukee saw that Project Farm tested hit 94 pounds and still didn’t stall, making it the winner by a large margin.


In one key area, the knockoff DeWalt beat the real thing

In terms of pure performance, the real DeWalt ranked mostly above its knockoff counterpart, but in efficiency, the knockoff claimed a surprise victory. Project Farm calculated the runtime per amp-hour of each saw, and the real DeWalt managed 1.38 minutes, the second worst of the test group. Meanwhile, the knockoff DeWalt could run for 1.72 minutes with the same amount of power.

The knockoff could also make significantly more cuts through a hardwood log per amp hour, achieving 10.4 cuts compared to the real DeWalt’s 8.1 cuts. However, both were far behind the best in class, with the Milwaukee saw delivering 41.5 cuts per amp hour. Kobalt took the second-place spot with 31.7 cuts per amp hour. Project Farm’s final combined ranking saw the knockoff DeWalt finish only one place behind the legitimate DeWalt saw, although both were roundly beaten by rivals from Milwaukee, Kobalt, and Ryobi. That might seem surprising considering the major price difference between the two.
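The efficiency comparison above boils down to simple arithmetic on the figures Project Farm reported. The sketch below just restates those numbers (cuts through a hardwood log per amp-hour) and ranks the saws; the dictionary keys and variable names are illustrative.

```python
# Efficiency figures as reported in the Project Farm test:
# cuts through a hardwood log per amp-hour of battery capacity.
cuts_per_ah = {
    "Milwaukee": 41.5,
    "Kobalt": 31.7,
    "Knockoff DeWalt": 10.4,
    "DeWalt": 8.1,
}

# Rank the saws from most to least efficient.
ranking = sorted(cuts_per_ah, key=cuts_per_ah.get, reverse=True)
print(ranking)  # ['Milwaukee', 'Kobalt', 'Knockoff DeWalt', 'DeWalt']

# The knockoff's relative efficiency advantage over the genuine DeWalt.
advantage = cuts_per_ah["Knockoff DeWalt"] / cuts_per_ah["DeWalt"]
print(f"{advantage:.2f}x")  # 1.28x
```

On these numbers the knockoff is roughly 28% more efficient per amp-hour than the genuine saw, even though both trail the category leaders by a wide margin.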

However, despite their close ranking in the test, you still probably shouldn’t buy knockoff DeWalt tools. Their lack of warranty and inconsistent production standards can potentially mean you end up spending more money in the long run, and in some cases, knockoffs may even pose a safety risk. Buyers looking for the best-performing pruning chainsaw would be better off considering a rival tool from another major chainsaw brand, or coughing up the cash for the real DeWalt saw, even if it isn’t the best in class.

The consequential AI work that actually moves the needle for enterprises

Presented by OutSystems


After two years of flashy AI demos, rushed agent prototypes, and breathless predictions, enterprise technology leaders are striking a more pragmatic tone in 2026. In a recent webinar hosted by OutSystems, a panel of software executives and enterprise practitioners made the case that the most consequential AI work happening now is focused on the practical matters of governance, orchestration, and iteration, along with integrating agents into the systems they’ve spent decades building.

Enterprise leaders are increasingly focused on fundamentals. The priority is using new AI technologies to accelerate productivity, improve delivery, and produce measurable business results.


Three elements shape this work:

  • The move from AI agent prototypes to agentic systems that deliver measurable ROI in production

  • The growing role of enterprise platforms in governing, orchestrating, and scaling AI agents safely

  • The rise of the generalist developer and enterprise architect as the most valuable technical profiles in an era of AI-generated code

Against this backdrop, the panel discussed governance frameworks, the economics of enterprise AI, and the limits of large language models without orchestration. The conversation ultimately turned to how leading organizations are building multi-agent systems grounded in existing enterprise data and workflows.

Agents in the real world

Enabling agents to work in production across the enterprise is best accomplished with a unified platform that handles development, iteration, and deployment. And that's where capabilities like the Agent Workbench in the OutSystems platform matter, said Rajkiran Vajreshwari, senior manager of app development at Thermo Fisher Scientific. It provides the infrastructure to learn, iterate, and govern agents at scale.

His team at Thermo Fisher has moved away from single-task AI assistants in customer service to building a coordinated team of specialized agents using the workbench. When a support case arrives, a triage assistant classifies the request and dynamically routes it to the right specialist agent, whether that’s an intent and priority agent, a product context agent, a troubleshooting agent, or a compliance agent.
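The triage pattern described above can be sketched in a few lines: a classifier inspects each incoming case and routes it to a narrowly scoped specialist. This is a hypothetical illustration only; the names (`route_case`, `SPECIALISTS`, the keyword rules) are invented for the example and are not OutSystems or Thermo Fisher APIs, and a real triage agent would use a model rather than keyword matching.

```python
# Illustrative routing table: each specialist agent owns a narrow domain.
SPECIALISTS = {
    "compliance": ["regulation", "audit", "gxp"],
    "troubleshooting": ["error", "broken", "failure"],
    "product_context": ["model", "spec", "compatibility"],
}

def route_case(case_text: str) -> str:
    """Return the name of the specialist agent for a support case."""
    text = case_text.lower()
    for agent, keywords in SPECIALISTS.items():
        if any(kw in text for kw in keywords):
            return agent
    # No specialist matched: fall back to a general intent-and-priority agent.
    return "intent_priority"

print(route_case("Instrument reports error code 42"))  # troubleshooting
```

The point of the pattern is the narrow role per agent: because each specialist sees only cases in its domain, its behavior stays easier to audit and guard.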


“We don’t have to think about what will work and how. It’s all pre-built,” he explained. “Each agent has a narrow role and clear guardrails. They stay accurate and auditable.”

Governing the risks of shadow AI

A new category of risk emerges when AI makes it possible for anyone in a company to generate production-level code without IT oversight. Basically, this is ungoverned shadow AI. These homegrown products are prone to hallucinations, data leakage, policy violations, model drift, and agents taking actions that were never formally approved.

To get ahead of the risk, leading organizations need to do three things, said Luis Blando, CPTO of OutSystems.

“Give users guardrails. They’re going to use AI whether you like it or not. Companies that seem to be getting ahead are using AI to govern AI across their full portfolio,” he explained. “That is the difference between shadow AI chaos and enterprise-grade scale.”


Eric Kavanagh, CEO of The Bloor Group, noted that governance requires a layered set of disciplines that includes securing data, monitoring models for drift, and making deliberate choices about where AI connects to existing business processes.

“Companies don’t have to be manually creating these controls,” he added. “A lot of those guardrails and levers are baked into platforms like OutSystems.”

Why the real challenge is orchestration, not model choice

Much of the early excitement around enterprise AI focused on selecting the right large language model. Now the harder challenge, and a far more durable source of value, is orchestration. This includes routing tasks, coordinating workflows, governing execution, and integrating AI into existing enterprise systems.

Scott Finkle, VP of development at McConkey Auction Group, noted that LLMs, however impressive, are pieces of complex workflows, not final solutions. Organizations should be ready to hot-swap between Gemini, ChatGPT, Claude, and whatever emerges next without having to rebuild the agentic system around it.
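The hot-swap idea Finkle describes amounts to having the orchestration layer depend on an abstract completion interface rather than any one vendor SDK, so the model behind it can change without touching the workflow. The sketch below is a minimal, hypothetical illustration of that design; the class and function names are invented for the example, and `EchoModel` is a stand-in where a real adapter would call a vendor API behind the same interface.

```python
from typing import Protocol

class CompletionModel(Protocol):
    """Abstract interface the orchestration layer depends on."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in model used to demonstrate the swap; a real adapter
    would call Gemini, ChatGPT, Claude, etc. behind this interface."""
    def __init__(self, name: str):
        self.name = name

    def complete(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

def run_workflow(model: CompletionModel, ticket: str) -> str:
    # The workflow logic never changes when the model is swapped out.
    return model.complete(f"Summarize: {ticket}")

# Swapping models is a one-argument change; the workflow is untouched.
print(run_workflow(EchoModel("model-a"), "refund request"))
```

This is the sense in which the workflows and models can change while the orchestration remains the same: only the adapter behind the interface is replaced.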


A platform with orchestration capabilities makes that possible. It manages the lifecycle, provides visibility, and ensures processes execute reliably, even as AI handles the reasoning layer on top.

“The AI and the models change, the workflows can change, but the orchestration remains the same,” Finkle said. “That’s how we’re going to extract value out of AI.”

The economics of enterprise AI investing

Security, compliance, governance, and platform-level AI capabilities will all command greater investment in 2026, particularly as AI moves into core workflows like finance and supply chain. Enterprises should favor incremental wins rather than expect big, immediate gains.

“We’re focusing on base hits,” Finkle said. “The way it counts is by getting something into production and having it make an impact. Big investments in pilot projects that don’t make it into production don’t save any money. It’s not going to happen overnight, but over time I think we’ll see tremendous savings.”


There’s still a split in how enterprises are approaching AI transformation. Some start from scratch and reimagine every process. Others, especially those with billions of dollars in existing infrastructure depreciating in-house, want AI to integrate with their systems. They want agentic systems to reuse data, APIs, and proven processes while speeding up delivery. The agent platform approach serves both camps, but particularly the latter. Organizations can deploy agents where they add clear value while preserving the integrity of established, deterministic workflows.

The rise of the enterprise architect and the generalist developer

As AI accelerates code generation, the bottleneck in software delivery is dissolving. In its place is a premium on systems thinking: the ability to understand the broader enterprise architecture, decompose complex business problems, and reason about how AI integrates with existing infrastructure. Kavanagh pointed to enterprise architects specifically as the professionals best positioned to capitalize on this moment.

“We’re entering a very interesting age of the generalist,” he explained. “The better you know your enterprise architecture and your business architecture and how those things align, the better off you’re going to be.”

“The result is faster delivery with fewer interruptions and fewer bugs,” Kavanagh said. “You can focus on the non-repetitive tasks. It’s a benefit to the developer, to the business, and to the whole IT organization.”


Catch the entire webinar here.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.

Blossom Health raises $20 million to put AI copilots alongside psychiatrists


Blossom Health, a New York-based telepsychiatry startup founded in 2024, has raised $20 million in combined seed and Series A funding to scale an AI-powered platform that pairs psychiatrists with clinical copilots and automated administrative support. The round was led by Headline, whose co-founder and managing partner Mathias Schilling is joining the company’s board. Village Global and TA Ventures returned from earlier rounds, with Operator Partners and Correlation Ventures joining as new institutional backers alongside angel investors including founders from General Catalyst, Flatiron Health, Sword Health, and Zip.

The company, founded by CEO John Zhao, is built around a specific premise: that the bottleneck in psychiatric care is not a shortage of clinical knowledge but a shortage of time. Psychiatrists in the United States spend roughly half their working hours on non-clinical tasks, including documentation, billing, insurance authorisation, and scheduling. Blossom’s platform automates much of this through a network of AI agents that handle billing, reception, care coordination, and medical scribing, while a separate set of clinical copilots assist with symptom evaluation, diagnosis refinement, and medication selection during patient encounters.

The scale of the problem

The psychiatric workforce shortage in the United States is severe and worsening. More than 122 million Americans live in federally designated mental health professional shortage areas, according to the Health Resources and Services Administration. The national psychiatrist-to-population ratio stands at one provider for every 5,058 residents. Roughly 60 per cent of practising psychiatrists are 55 or older, meaning a significant portion of the existing workforce will retire within the next decade. Wait times for an initial psychiatric appointment range from three weeks to six months depending on location, and in many rural counties there are no psychiatrists at all.

This gap has created a market. US digital health startups raised $14.2 billion in 2025, the highest total since 2022, with AI-powered companies accounting for 54 per cent of that funding. Within mental health specifically, Talkiatry, an in-network telepsychiatry platform, raised $210 million in February 2026. Spring Health, which uses AI for personalised treatment recommendations, is valued at $3.3 billion. Ambient clinical scribes, the category of AI that automatically generates notes from patient conversations, produced $600 million in revenue last year alone.


Blossom is small by comparison. The company says its tools are used by hundreds of clinicians treating more than 10,000 patients across multiple US states. Most patients are seen within 48 hours, with many receiving same-day appointments. Blossom accepts all major commercial insurers, including Optum UnitedHealthcare, Aetna, Cigna Evernorth, and Blue Cross Blue Shield, with average copays of around $22.

Copilot, not replacement

The “copilot” framing is deliberate and important. Blossom is not building a therapy chatbot. Its AI tools sit alongside licensed psychiatrists during clinical encounters, surfacing relevant information, helping evaluate symptoms against diagnostic criteria, and suggesting medication adjustments based on the patient’s history and current presentation. The psychiatrist retains clinical authority over every decision.


Between appointments, the platform uses AI agents to maintain contact with patients through text-based check-ins on sleep, mood, medication adherence, and other indicators. Fortune reported that in the case of postpartum depression, for example, the system follows up with conversational prompts that surface warning signs and prepare information for clinicians ahead of the next visit. This approach converts what has traditionally been episodic care, where a patient sees a psychiatrist for 15 minutes every few months and is otherwise unsupported, into something closer to continuous monitoring.

The clinical claims are plausible but early. Blossom says it has demonstrated the ability to stabilise mental health conditions and prevent progression toward more intensive care, but the company has not published peer-reviewed clinical evidence. At 10,000 patients, the dataset is meaningful for a company this young but far too small to draw population-level conclusions about clinical efficacy.

The Cerebral cautionary tale

Any startup operating at the intersection of AI, telepsychiatry, and controlled substance prescribing inherits the reputational burden of what came before. Cerebral, the telemental health company that raised $300 million at a $4.8 billion valuation in 2022, became the subject of a Department of Justice investigation into its prescribing practices for controlled substances and paid a $7 million settlement to the Federal Trade Commission over allegations of misleading cancellation policies and data sharing. The company’s rapid growth, which prioritised patient volume over clinical rigour, damaged trust across the sector.

Blossom’s architecture is different in important ways. It works through licensed psychiatrists rather than nurse practitioners prescribing independently, and its AI tools are positioned as decision support rather than decision-makers. But the fundamental tension remains: scaling psychiatric care through technology requires maintaining clinical quality at volumes that a traditional practice model was never designed to handle. The AI copilot must be good enough to genuinely assist clinicians without introducing errors that a time-pressed psychiatrist might not catch, particularly in medication selection, where psychiatric pharmacology is notoriously complex and highly individual.


The $20 million will fund expansion into additional US states, new insurance partnerships, clinician recruitment, and continued research and development. For a company founded less than two years ago, treating over 10,000 patients with in-network insurance coverage is a notable operational achievement. Whether the clinical copilot meaningfully improves outcomes, or simply makes it faster to deliver care at the same quality, is the question the next round of funding will need to answer.
