Steve Wozniak says he's "disappointed a lot" by AI and rarely uses it

In a CNN interview about Apple’s upcoming 50th anniversary and how the company has shaped the tech industry, Wozniak was asked what excites and scares him about AI.
What is DeerFlow 2.0 and what should enterprises know about this new, powerful local AI agent orchestrator?

ByteDance, the Chinese tech giant behind TikTok, last month released what may be one of the most ambitious open-source AI agent frameworks to date: DeerFlow 2.0. It’s now going viral across the machine learning community on social media. But is it safe and ready for enterprise use?

This is a so-called “SuperAgent harness” that orchestrates multiple AI sub-agents to autonomously complete complex, multi-hour tasks. Best of all: it is available under the permissive, enterprise-friendly MIT License, meaning anyone can use, modify, and build on it commercially at no cost.

DeerFlow 2.0 is designed for high-complexity, long-horizon tasks that require autonomous orchestration over minutes or hours, including:

- conducting deep research into industry trends
- generating comprehensive reports and slide decks
- building functional web pages
- producing AI-generated videos and reference images
- performing exploratory data analysis with insightful visualizations
- analyzing and summarizing podcasts or video content
- automating complex data and content workflows
- explaining technical architectures through creative formats like comic strips

ByteDance offers a bifurcated deployment strategy that separates the orchestration harness from the AI inference engine. Users can run the core harness directly on a local machine, deploy it across a private Kubernetes cluster for enterprise scale, or connect it to external messaging platforms like Slack or Telegram without requiring a public IP.


While many opt for cloud-based inference via OpenAI or Anthropic APIs, the framework is natively model-agnostic, supporting fully localized setups through tools like Ollama. This flexibility allows organizations to tailor the system to their specific data sovereignty needs, choosing between the convenience of cloud-hosted “brains” and the total privacy of a restricted on-premise stack.
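
In practice, “model-agnostic” means anything speaking the OpenAI-compatible API can serve as the brain. As a hedged sketch of that wiring (this is the generic Ollama pattern, not DeerFlow’s own code; the model name and port are Ollama defaults):

```python
# Sketch: pointing the standard OpenAI client at a local Ollama server.
# Ollama exposes an OpenAI-compatible endpoint at http://localhost:11434/v1,
# so the same code works against cloud or fully on-premise "brains."
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # swap for a cloud endpoint if desired
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.1",  # any model tag you have pulled locally
    messages=[{"role": "user", "content": "Summarize this week's AI agent news."}],
)
print(response.choices[0].message.content)
```

Swapping between a cloud-hosted brain and a local one then comes down to changing a base URL and model name in configuration, which is exactly the data-sovereignty lever described above.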

Importantly, choosing the local route does not mean sacrificing security or functional isolation. Even when running entirely on a single workstation, DeerFlow still utilizes a Docker-based “AIO Sandbox” to provide the agent with its own execution environment.

This sandbox—which contains its own browser, shell, and persistent filesystem—ensures that the agent’s “vibe coding” and file manipulations remain strictly contained. Whether the underlying models are served via the cloud or a local server, the agent’s actions always occur within this isolated container, allowing for safe, long-running tasks that can execute bash commands and manage data without risk to the host system’s core integrity.
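
To make the isolation pattern concrete, here is a minimal sketch of container-based sandboxing using the Docker SDK for Python. It illustrates the general technique (own filesystem, no network, hard resource caps), not DeerFlow’s actual AIO Sandbox code; the image, limits, and command are placeholders:

```python
# Sketch: run untrusted, agent-generated code in a throwaway container.
import docker

client = docker.from_env()

agent_generated_code = "print(sum(range(10)))"  # stand-in for agent output

logs = client.containers.run(
    image="python:3.12-slim",      # placeholder base image
    command=["python", "-c", agent_generated_code],
    network_disabled=True,         # the sandbox gets no network access
    mem_limit="512m",              # cap memory so a runaway task can't starve the host
    remove=True,                   # discard the container when the task finishes
)
print(logs.decode())               # only the captured output crosses the boundary
```

A persistent, mountable filesystem like the one DeerFlow describes would be added by mounting a named volume into the container rather than writing to the host directly.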

Since its release last month, it has accumulated more than 39,000 stars (user saves) and 4,600 forks — a growth trajectory that has developers and researchers alike paying close attention.


Not a chatbot wrapper: what DeerFlow 2.0 actually is

DeerFlow is not another thin wrapper around a large language model. The distinction matters.

While many AI tools give a model access to a search API and call it an agent, DeerFlow 2.0 gives its agents an actual isolated computer environment: a Docker sandbox with a persistent, mountable filesystem.

The system maintains both short- and long-term memory that builds user profiles across sessions. It loads modular “skills” — discrete workflows — on demand to keep context windows manageable. And when a task is too large for one agent, a lead agent decomposes it, spawns parallel sub-agents with isolated contexts, executes code and Bash commands safely, and synthesizes the results into a finished deliverable.
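
That decompose/fan-out/synthesize loop is the structural heart of any such harness. Below is a minimal sketch of the pattern in Python, with a placeholder function standing in for real model calls; it shows the generic architecture, not DeerFlow’s API:

```python
# Sketch: a lead agent decomposes a task, fans out to parallel sub-agents
# with isolated contexts, then synthesizes one deliverable.
import asyncio

async def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (cloud API or local server)."""
    await asyncio.sleep(0)  # simulate network I/O
    return f"[model output for: {prompt!r}]"

async def lead_agent(task: str) -> str:
    # 1. Decompose: ask the model to split the task into sub-tasks.
    plan = await call_llm(f"List independent sub-tasks for: {task}")
    subtasks = plan.splitlines() or [task]  # parsing kept trivial for the sketch

    # 2. Fan out: each sub-agent runs concurrently with its own isolated prompt.
    results = await asyncio.gather(*(call_llm(sub) for sub in subtasks))

    # 3. Fan in: synthesize the sub-agent outputs into one finished deliverable.
    return await call_llm(f"Combine these into one report: {results}")

print(asyncio.run(lead_agent("research open-source agent frameworks")))
```

Context isolation here is simply separate prompts; a production harness layers the sandboxing, memory, and skill loading described above on top of this skeleton.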

This sandboxed approach is similar to the one being pursued by NanoClaw, an OpenClaw variant, which recently partnered with Docker itself to offer enterprise-grade sandboxes for agents and sub-agents.


But while NanoClaw is extremely open-ended, DeerFlow has a more clearly defined architecture and task scope. Demos on the project’s official site, deerflow.tech, showcase real outputs: agent trend forecast reports, videos generated from literary prompts, comics explaining machine learning concepts, data analysis notebooks, and podcast summaries.

The framework is designed for tasks that take minutes to hours to complete — the kind of work that currently requires a human analyst or a paid subscription to a specialized AI service.

From Deep Research to Super Agent

DeerFlow’s original v1 launched in May 2025 as a focused deep-research framework. Version 2.0 is something categorically different: a ground-up rewrite on LangGraph 1.0 and LangChain that shares no code with its predecessor. ByteDance explicitly framed the release as a transition “from a Deep Research agent into a full-stack Super Agent.”

New in v2: a batteries-included runtime with filesystem access, sandboxed execution, persistent memory, and sub-agent spawning; progressive skill loading; Kubernetes support for distributed execution; and long-horizon task management that can run autonomously across extended timeframes.


The framework is fully model-agnostic, working with any OpenAI-compatible API. It has strong out-of-the-box support for ByteDance’s own Doubao-Seed models, as well as DeepSeek v3.2, Kimi 2.5, Anthropic’s Claude, OpenAI’s GPT variants, and local models run via Ollama. It also integrates with Claude Code for terminal-based tasks, and with messaging platforms including Slack, Telegram, and Feishu.

Why it’s going viral now

The project’s current viral moment is the result of a slow build that accelerated sharply this week.

The February 28 launch generated significant initial buzz, but it was coverage in machine learning media — including deeplearning.ai’s The Batch — over the following two weeks that built credibility in the research community.

Then, on March 21, AI influencer Min Choi posted to his large X following: “China’s ByteDance just dropped DeerFlow 2.0. This AI is a super agent harness with sub-agents, memory, sandboxes, IM channels, and Claude Code integration. 100% open source.” The post earned more than 1,300 likes and triggered a cascade of reposts and commentary across AI Twitter.


A search of X using Grok uncovered the full scope of that response. Influencer Brian Roemmele, after conducting what he described as intensive personal testing, declared that “DeerFlow 2.0 absolutely smokes anything we’ve ever put through its paces” and called it a “paradigm shift,” adding that his company had dropped competing frameworks entirely in favor of running DeerFlow locally. “We use 2.0 LOCAL ONLY. NO CLOUD VERSION,” he wrote.

More pointed commentary came from accounts focused on the business implications. One post from @Thewarlordai, published March 23, framed it bluntly: “MIT licensed AI employees are the death knell for every agent startup trying to sell seat-based subscriptions. The West is arguing over pricing while China just commoditized the entire workforce.”

Another widely shared post described DeerFlow as “an open-source AI staff that researches, codes and ships products while you sleep… now it’s a Python repo and ‘make up’ away.”

Cross-linguistic amplification — with substantive posts in English, Japanese, and Turkish — points to genuine global reach rather than a coordinated promotion campaign, though the latter is not out of the question and may be contributing to the current virality.


The ByteDance question

ByteDance’s involvement is the variable that makes DeerFlow’s reception more complicated than a typical open-source release.

On the technical merits, the open-source, MIT-licensed nature of the project means the code is fully auditable. Developers can inspect what it does, where data flows, and what it sends to external services. That is materially different from using a closed ByteDance consumer product.

But ByteDance operates under Chinese law, and for organizations in regulated industries — finance, healthcare, defense, government — the provenance of software tooling increasingly triggers formal review requirements, regardless of the code’s quality or openness.

The jurisdictional question is not hypothetical: U.S. federal agencies are already operating under guidance that treats Chinese-origin software as a category requiring scrutiny.


For individual developers and small teams running fully local deployments with their own LLM API keys, those concerns are less operationally pressing. For enterprise buyers evaluating DeerFlow as infrastructure, they are not.

A real tool, with limitations

The community enthusiasm is credible, but several caveats apply.

DeerFlow 2.0 is not a consumer product. Setup requires working knowledge of Docker, YAML configuration files, environment variables, and command-line tools. There is no graphical installer. For developers comfortable with that environment, the setup is described as relatively straightforward; for others, it is a meaningful barrier.

Performance when running fully local models — rather than cloud API endpoints — depends heavily on available VRAM and hardware, with context handoff between multiple specialized models a known challenge. For multi-agent tasks running several models in parallel, the resource requirements escalate quickly.


The project’s documentation, while improving, still has gaps for enterprise integration scenarios. There has been no independent public security audit of the sandboxed execution environment, which represents a non-trivial attack surface if exposed to untrusted inputs.

And the ecosystem, while growing fast, is weeks old. The plugin and skill library that would make DeerFlow comparably mature to established orchestration frameworks simply does not exist yet.

What does it mean for enterprises in the AI transformation age?

The deeper significance of DeerFlow 2.0 may be less about the tool itself and more about what it represents in the broader race to define autonomous AI infrastructure.

DeerFlow’s emergence as a fully capable, self-hostable, MIT-licensed agentic orchestrator adds yet another twist to the ongoing race among enterprises — and AI builders and model providers themselves — to turn generative AI models into something more than chatbots: something closer to full-time or at least part-time employees, capable of both communication and reliable action.


In a sense, it marks the natural next wave after OpenClaw: whereas that open-source tool sought to create a dependable, always-on autonomous AI agent the user could message, DeerFlow is designed to allow a user to deploy a fleet of them and keep track of them, all within the same system.

The decision to implement it in your enterprise hinges on whether your organization’s workload demands “long-horizon” execution—complex, multi-step tasks spanning minutes to hours that involve deep research, coding, and synthesis. Unlike a standard LLM interface, this “SuperAgent” harness decomposes broad prompts into parallel sub-tasks performed by specialized experts. This architecture is specifically designed for high-context workflows where a single-pass response is insufficient and where “vibe coding” or real-time file manipulation in a secure environment is necessary.

The primary condition for use is the technical readiness of an organization’s hardware and sandbox environment. Because each task runs within an isolated Docker container with its own filesystem, shell, and browser, DeerFlow acts as a “computer-in-a-box” for the agent. This makes it ideal for data-intensive workloads or software engineering tasks where an agent must execute and debug code safely without contaminating the host system. However, this “batteries-included” runtime places a significant burden on the infrastructure layer; decision-makers must ensure they have the GPU clusters and VRAM capacity to support multi-agent fleets running in parallel, as the framework’s resource requirements escalate quickly during complex tasks.

Strategic adoption is often a calculation between the overhead of seat-based SaaS subscriptions and the control of self-hosted open-source deployments. The MIT License positions DeerFlow 2.0 as a highly capable, royalty-free alternative to proprietary agent platforms, potentially functioning as a cost ceiling for the entire category. Enterprises should favor adoption if they prioritize data sovereignty and auditability, as the framework is model-agnostic and supports fully local execution with models like DeepSeek or Kimi. If the goal is to commoditize a digital workforce while maintaining total ownership of the tech stack, the framework provides a compelling, if technically demanding, benchmark.


Ultimately, the decision to deploy must be weighed against the inherent risks of an autonomous execution environment and its jurisdictional provenance. While sandboxing provides isolation, the ability of agents to execute bash commands creates a non-trivial attack surface that requires rigorous security governance and auditability. Furthermore, because the project is a ByteDance-led initiative via Volcengine and BytePlus, organizations in regulated sectors must reconcile its technical performance with emerging software-origin standards. Deployment is most appropriate for teams comfortable with a CLI-first, Docker-heavy setup who are ready to trade the convenience of a consumer product for a sophisticated and extensible SuperAgent harness.


I Only Listened to AI Music for a Week. It Was Terrible, but Not for the Reason You Think


Music is my constant companion. I’m almost always listening to a carefully curated playlist or new album. I wholeheartedly believe Spotify Wrapped Day should be a national holiday. So, as an AI reporter who has watched the so-called AI music industry grow over the past few years, I decided it was finally time to see how these artificial artists stack up. So I set a challenge for myself: I would only listen to AI-created music for a full week. 

It was a very, very long week. AI music really takes the “art” out of artificial. But it was an educational and revealing experience, too. 

The story of AI music is an old record that’s been played before. Musicians have debated the role of technology in music creation for hundreds of years, from the introduction of recorded music using phonographs to synthesizers, autotune and production tech going mainstream. What makes this moment unique is that AI can create entire songs with very little human guidance. But the AI models that do so are built using music created by actual humans, creating a haze of legal woes and ethical chaos — similar to that faced by other creators like writers, artists and filmmakers.


Music is one of the few universal cultural touchstones we have. Generative AI is rapidly changing how music is created, and in effect, changing our humanity with it.

A week of AI music

For the purpose of my self-imposed experiment, I only listened to songs that were verifiably altered by AI. I was pleased to see that the AI music sites offered a wide range of songs, but that initial excitement was short-lived. Most disappointingly, the vast majority of the pop music was shrill and squeaky — the musical version of plastic, in my opinion. 

A lot of the trending songs were electronic music, which I’m sure EDM fans would’ve appreciated more than me. It just reminded me of a canon event every young person experiences: Being stuck at a house party where the person on the aux is “an aspiring DJ.” The house and techno styles just reinforced the idea that I was listening to robotic AI music. It made it hard to enjoy when I knew there wasn’t even the illusion of human creation behind the songs.

I fared much better with country and folk music, which had a big focus on the instrumentals and an acoustic sound. A lot of it sounded like it could’ve been by Noah Kahan, Kacey Musgraves or Luke Combs. This is where I started to relax into my typical music habits — getting hooked by a particularly appealing song on a first listen, adding those interesting songs to a playlist that I would eventually prefer over exploring new music as I grew more comfortable and attached to my favorite songs. 


Then there was the truly weird, wacky AI music. Beyond Suno, there is an entire universe of unique AI music on sites like YouTube. My favorite (or the least worst one?) was the 8-minute Game of Thrones disco, complete with a music video, while my editor favored the Lord of the Rings version. I found the songs engrossing, probably because they’re music videos, not just songs, with haunting, AI slop visuals.

I have no idea what’s going on in this Game of Thrones music video, where white walkers dance like it’s the 1970s, but it was something. (WickedAI/Screenshot by CNET)

Tech and music: A song that’s been played before

Technology has always played a role in music. Musical AI is part of a longer arc in music’s history, Mark Ethier, founder of the music tech company iZotope and executive director of Berklee’s Emerging Artistic Technology Lab, told me.


“When GarageBand came out, people felt like, ‘Oh my gosh, I can make music because I can drag some samples of a guitar, have a bass and some drums, and I’ve made a song, right?’” said Ethier. “Where we are today is the most extreme version of that.” 


Traditional music software, such as GarageBand, was meant to enhance and democratize the process of creating music. AI music companies say they do the same, but there’s a big difference: You can pop out entire AI songs with just a sentence or two to guide the vibe. The underlying tech is similar to what runs in chatbots and image generators — transformers and diffusion methods, Suno cofounder Mikey Shulman said in 2023.

AI music generators like Suno do more than piece together a song or tweak a template. Like with imagery and videos, AI has made it quicker, cheaper and easier than ever to create something that feels like it was professionally produced.

“[What AI] has changed is just how much easier it is to do, and how indistinguishable the output is,” Ethier said. Before AI, throwing some loops together on GarageBand wouldn’t be enough to make a full song or hit record. “Now, that distinction is not as clear anymore,” he said.


The AI music arena has grown quickly in a short period of time. Sites like Suno and Udio have racked up subscribers and gained notoriety. Suno reached a milestone of 2 million paying subscribers, its cofounder shared in February. But like other creative AI companies, Suno and Udio have been sued by record labels alleging the AI companies used musicians’ work for AI training without permission or compensation. 


Can we make connections with AI music?

The amount of time I spent listening to music dropped significantly on the days when I was restricted to only AI music, and I felt that deprivation deeply. It wasn’t until I came across a specific category of AI music that I began to border on enjoying the experience. There’s a neuroscientific and psychological reason why, I learned.

Joy Allen, a music therapist and director of Berklee’s Music and Health Institute, told me that there’s a reason music from our teen years sticks so strongly with us. Our adolescent brains are sponges, and music is one of the only things that activates every part of our brain, Allen said. Those connections, fueled by teenage hormones and neurochemicals, stay with us long after.


“When you listen to music, it’s not just activating the auditory cortex. It’s activating where you process emotions [and] physical responses … Our brains love patterns,” Allen said. “If you think about music, it’s patterns, it’s chordal structures, it’s the melody line… so we get used to patterns and predictability.”

My teen years were largely set to the soundtrack of Taylor Swift, and anyone who’s met me knows she’s still my favorite artist. But even knowing what Allen told me, I was surprised at how emotional the AI covers of Taylor Swift songs made me. 

A lot of the AI covers I listened to took Swift’s songs and reimagined them in different genres. An AI pop punk version of “You Belong With Me” sounded like it could’ve been sung by another band from my teen years, 5 Seconds of Summer. It was strangely gratifying, with a heavy dose of nostalgia. It was also the only AI song to get stuck in my head.

Nothing like Taylor Swift for a good dose of nostalgia. (Katie Collins/CNET)

We can form emotional attachments to any music during those formative years, whether created by humans or, theoretically, AI, Allen said. But since my musical identity is already formed, the AI songs that brought out the more visceral, emotional reaction in me were those that drew on those connections and memories, firing those neurochemicals in my brain. I was more engaged and happier listening to these AI Swiftie covers than any other AI song. The songs were different, but they were still the lyrics I had sung into my hairbrush as a kid and in a million other scenarios throughout my life, brought to life in a new way.

While these songs were the highlight of my experiment, they didn’t sell me on AI music any more than the “original” songs did. The AI largely reminded me of the covers I had listened to in real life and seen clips of online. I liked the AI folk cover of Swift’s “All Too Well,” but it was a cheap imitation compared to the guitarist I heard sing it in a coffee shop last year, or the indie bands adding their own individual touches that I come across on TikTok.

The power of a great artist is their ability to create music that inspires others, to move them and spark flames of creativity. Covers by human musicians are a way to pay tribute and express appreciation; AI covers felt like cheap imitations and mockery by comparison. 


Music is human

I was irritatingly cognizant of my experiment while I was doing it. The AI music never held my attention the same way that human music did. With a few notable exceptions, the AI songs were basically white noise. I often caught myself drifting toward the Spotify app to turn on better music. In the final days of my experiment, I decided that no music at all was better than AI music. Even now as I write this, the car horns and bird chirps outside my window are better company than fake instruments.

AI has become a part of our lives, for better or worse. But it’s not just part of our technology; it’s slowly infiltrating our culture. Music is one of the strongest cultural touchstones we have, and to have AI so quickly and effectively mimic something that is inherently human is… awe-inspiring. Worrisome. But definitely a very clear sign that AI is remaking the very things that define our humanity. It left me with an increasingly deep sense of dread about the havoc AI is wreaking on our culture and humanity.

It’s not just listeners like me who are struggling — musicians are, too. AI-generated music is flooding streaming platforms, leaving companies like Apple Music and Spotify struggling to define what’s allowed, what isn’t and what’s monetizable. It’s even more complex from a legal and ethical point of view.

“As a musician, this is a really complicated time to be understanding tools,” Ethier said. “You used to be able to pick up a trumpet and play trumpet. You didn’t have to think about how that trumpet was trained, or if the trumpet owns your music.”


Music is intrinsically human and social by design. So it wasn’t surprising that I felt disconnected throughout my AI music week. It was an isolating experience — no memories tied to core moments, no TikTok dances, no culture. No artist personality, little fandom. No thoughts of “remember how she jumped an octave when she performed it live?” It was a superficial listening experience. I didn’t want to revisit them once my experiment was done.

So much of the music we listen to is tied to specific memories. The AI songs I felt most connected to were covers of songs I already had a strong emotional connection with: Taylor Swift songs I listened to for the first time at eight years old in the backseat with my childhood besties; songs that were inspired by but utterly lacking the emotion of the ’90s power ballad my dad loves but my mom bemoans every time he plays it; a “Stick Season” AI wannabe that lacks Noah Kahan’s signature “dance while the world burns” flavor.

Music scores so many of life’s moments, from big ones like a married couple’s first dance to the small ones that flow by without us noticing. All of that builds up over our lives. Removing the humanity — or worse, trying to mimic it — sucks the soul out of what makes music worthwhile.

So, no, I would not recommend listening to only AI-generated music for a week. But it was useful, if only to further refine my worries about the way AI is eroding our humanity.


Cauldron Ferm has turned microbes into nonstop assembly lines


Cauldron Ferm has an unlikely origin story, as startups go. Its core technology can be traced back to the 1960s, or maybe the 1970s. The exact start is a bit hazy, actually. What is known is that David and Polly McLennan had a dream of feeding the world using protein grown from microbes.

The pair knew they needed to improve the process, which was pricey and time-consuming. Most fermentation happens in batches. Picture a brewery or a vineyard. Ingredients go in and the microbes work for a while, but then the process stops when it’s time to take out the finished product. It works for alcohol because booze commands a premium price. Food, though? That needs to be cheaper.

Still, the McLennans stuck with it, starting a small business that would over the course of 40 years refine their approach to continuous fermentation, which turns microbes into assembly lines capable of cranking out products uninterrupted.

“We didn’t know what we had,” Michele Stansfield, co-founder and CEO of Cauldron Ferm, told TechCrunch. But eventually Stansfield, who arrived at the McLennans’ company in 2012, realized they had more than they initially thought.


“We didn’t understand the challenge of continuous fermentation for synthetic biology,” Stansfield said. But when she did, she sought to transform the company from a small fee-for-service operation to a fast-moving startup. “At that point, I raised a seed round and acquired the IP, physical, and business assets.”

Cauldron has now raised $13.25 million in a Series A2 round that was led by Main Sequence Ventures with participation from Horizons Ventures, NGS Super, and SOSV, the company exclusively told TechCrunch. It had previously raised $6.5 million in 2024. Cauldron plans to use the funding to “increase the technology moat,” Stansfield said. 

The company calls its technology “hyper fermentation,” which helps keep microbes in their maximally productive state. It can work in existing batch fermenters with a few modifications to the facility to accommodate the process. Cauldron’s customers bring their own microbes and strains, and the startup works to tweak their growing conditions, including nutrients, to keep them humming.


Currently, Cauldron is focused on producing fats and proteins, including whey protein, “a product that can just slip into supply chains,” Stansfield said, though she adds there are more products the company has its eyes on.


“Sixty percent of all inputs to [the] global economy can be produced from biology,” she said. “Food was where we started, but now we’re starting to really diversify.”


Jury struggles to reach verdict in social media addiction trial against Meta and YouTube



Jurors did not say whether the holdout relates to Meta or YouTube, but Kuhl told them to keep deliberating and warned that if they cannot reach a verdict, that part of the case will have to be retried before a new jury.

Dutch Ministry of Finance discloses breach affecting employees



The Dutch Ministry of Finance confirmed on Monday that some of its systems were breached in a cyberattack detected last week.

Officials said the ministry was notified by a third party of the breach on March 19, and it’s still investigating the cyberattack. An ongoing investigation found that the incident affects some employees.

“The Ministry of Finance’s ICT security detected unauthorized access to systems for a number of primary processes within the policy department on Thursday, March 19,” an official statement revealed.

“Following the alert, an immediate investigation was launched, and access to these systems has been blocked as of today. This affects the work of a portion of the employees.”


The ministry added that the cyberattack did not impact systems used to manage tax collection, import/export regulations, and income-linked subsidies, which handle over 9.5 million tax returns annually for income tax alone.

“Services to citizens and businesses provided by the Tax and Customs Administration, Customs, and Benefits have not been affected. We will update this message when we can share more information.”

Although the ministry said the breach affected some of its employees, it didn’t disclose how many were affected or whether the attackers stole any sensitive data. Also, no cybercrime group or threat actor has claimed responsibility for the attack.

BleepingComputer reached out to a Ministry of Finance spokesperson with questions about the incident, including the total number of impacted employees and how long the attackers had access to the compromised systems, but a response was not immediately available.


In September 2024, the Dutch national police (Politie) was also breached in a cyberattack believed to be orchestrated by a “state actor” that stole work-related contact details of multiple police officers.

More recently, in February, Dutch authorities arrested a 40-year-old man for an extortion attempt after he downloaded confidential documents mistakenly shared by the police and refused to delete them unless he received “something in return.”



Direct Pressure Advance Measurement For Fast Calibration


Some people love fiddling with their 3D printers, others love printing. Some fiddle so they can spend more time printing, which is probably where this latest project comes in: an automated pressure advance calibration tool by [markniu].

Most of us don’t take enough care with pressure advance (PA). But if you want absolutely perfect prints, it’s something you should be calibrating for every type of filament in your collection (some would argue for every individual spool). While that sort of dialing-in can be fun, it takes away from actually running off prints. Bambu printers automate PA by scanning the usual sort of calibration print, but that’s still a very indirect measurement. Why not just advance the filament and measure the pressure at the nozzle directly? That is what PA is meant to account for, after all: the pressure of the plastic in the hotend causing oozing and blobbing at corners.

Did we mention it connects via USB-C? That’s helpfully broken out well away from the heat with a ribbon cable.

[mark]’s solution comes very close to a direct measurement. It uses a strain gauge that sits directly on top of the heatbreak, with the sound logic that the strain experienced there will be directly proportional to the pressure inside, at least along the axis of flow. Instead of filling half the bed with lines, the calibration process is a ‘printer poop’ style extrusion that doesn’t take nearly as long, and seems to save plastic, too. Since this puts a strain gauge in your hotend, you also get the bonus of being able to use it for bed leveling if you should so desire.

[mark] is claiming sub-90-second calibration — as you can see in the demo video embedded below — versus over seven minutes for the indirect calibration print. The value is plugged directly into Klipper, assuming you configured everything correctly, which should be easy enough following the instructions in the GitHub repo.
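
For reference, here is what applying a measured PA value looks like on the Klipper side. This is a minimal sketch assuming a stock [extruder] section; the 0.050 figure is a placeholder, not a value from [mark]’s tool:

```
# printer.cfg: bake the measured value in as the default
[extruder]
pressure_advance: 0.050               # number reported by the calibration run
pressure_advance_smooth_time: 0.040   # Klipper's default smoothing window

# Or apply it at runtime, e.g. per filament, without a restart:
# SET_PRESSURE_ADVANCE EXTRUDER=extruder ADVANCE=0.050
```

Both the config option and the SET_PRESSURE_ADVANCE command are stock Klipper; the only thing the new tool changes is how you arrive at the number.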



Canonical Joins Rust Foundation – Slashdot


BrianFagioli writes: Canonical has joined the Rust Foundation as a Gold Member, signaling a deeper investment in the Rust programming language and its role in modern infrastructure. The company already maintains an up-to-date Rust toolchain for Ubuntu and has begun integrating Rust into parts of its stack, citing memory safety and reliability as key drivers. By joining at a higher tier, Canonical is not just adopting Rust but also stepping closer to its governance and long-term direction.

The move also highlights ongoing tensions in Rust’s ecosystem. While Rust can reduce entire classes of bugs, it often depends heavily on external crates, which can introduce complexity and auditing challenges, especially in enterprise environments. Canonical appears aware of that tradeoff and is positioning itself to influence how the ecosystem evolves, as Rust continues to gain traction across Linux and beyond. “As the publisher of Ubuntu, we understand the critical role systems software plays in modern infrastructure, and we see Rust as one of the most important tools for building it securely and reliably. Joining the Rust Foundation at the Gold level allows us to engage more directly in language and ecosystem governance, while continuing to improve the developer experience for Rust on Ubuntu,” said Jon Seager, VP Engineering at Canonical. “Of particular interest to Canonical is the security story behind the Rust package registry, crates.io, and minimizing the number of potentially unknown dependencies required to implement core concerns such as async support, HTTP handling, and cryptography — especially in regulated environments.”


What Does The Viral Afroman Trial Have to Do with Section 230?


from the because-i-got-section-230 dept

The internet has been rightfully enjoying videos from the defamation trial against Afroman, a musician known for his humorous songs, including “Because I Got High.” The lawsuit involves songs he wrote about a 2022 police raid on his house that was based on flimsy evidence. The songs justifiably mock the officers involved. Mike Masnick wrote a recap of the case here, which is worth reading for many reasons, but the songs and Afroman’s testimony are true highlights.

After the raid, Afroman released his songs on YouTube and they went viral initially on TikTok, both massive platforms for users to share their speech and that of other users. The officers who raided his home, seeking to silence someone making fun of them, sued Afroman for defamation, emotional distress, and other causes in 2023. 

Spoiler: Afroman won. The songs are not defamatory. But we didn’t know that for sure until a jury told us so this week. For three years, from the moment the lawsuit was filed until the jury issued its verdict, the songs were allegedly defamatory. And their continued “publication” ran the risk of liability.

So why could we still see the songs on YouTube, TikTok, Bluesky, and whatever other online platforms where we first encountered them? One big reason is Section 230 of the Communications Decency Act. 


Section 230 says that interactive computer service providers, like online platforms, cannot be treated as the publisher or speaker of information content provided by other information content providers. That means that YouTube could not be liable for the content of Afroman’s songs, even if they were defamatory. That’s the balance Section 230 strikes. Under 230, there is still accountability for the speaker, but online platforms are not liable for their users’ illegal speech.

By and large this balance has been incredibly beneficial to free expression online, supporting speech about everything from the profoundly consequential (#MeToo and Black Lives Matter) to the somewhat silly (a song about a cop who got distracted from a raid by a delicious looking “Lemon Pound Cake”). But now, members of Congress like Senator Lindsey Graham and Senator Dick Durbin want to repeal or replace Section 230 without much of a plan for what comes next. 

On March 18, Daphne Keller, a professor of law at Stanford and expert in intermediary liability laws around the world, testified before the Senate Commerce Committee. She tried to explain to the Senators that Section 230 may not be perfect, but it’s still better than any of the options she has seen. To understand why Daphne’s right, let’s think about what Afroman’s case might have looked like without Section 230. The moment Afroman was allowed to distribute his songs about the raid on YouTube, the company could have been liable for any potentially illegal speech they contained. That means YouTube probably also would have been a co-defendant in the cops’ suit. At the scale at which many online platforms operate, these kinds of accusations of defamation and lawsuits related to user posts would happen hundreds of thousands, if not millions, of times a day.

That’s a lot of litigation.


Staring down the barrel of that many potential lawsuits every day, no reasonable platform would have allowed Afroman’s speech to stay up. The moment an accusation of illegality surfaced, a platform acting reasonably would likely take the speech down. And to be clear, we have evidence that this is how they would react: That’s the incentive structure currently in place under the Digital Millennium Copyright Act (DMCA). The DMCA creates a notice and takedown system for alleged copyright violations, and evidence suggests that improper takedown requests are common and, even with the safeguards for speech built into that law, result in over-censorship. Replicating a version of the DMCA for all content on the internet writ large would likely produce the same over-censorship result. At a minimum, the platforms certainly wouldn’t allow their algorithms to recommend posts linking to the allegedly defamatory songs, effectively “shadowbanning” them, which is probably one of the main ways many people came across the songs to begin with.

The upshot is: Section 230 created the conditions that allowed us to hear Afroman’s songs, and allowed platforms to recommend them, even while their status was in legal limbo. 

There are millions of similar situations, large and small, every day where Section 230 ensures that online platforms do not have to try to make context-specific legal judgment calls. Section 230 may not be perfect. No law is. But it’s the best and most effective protection for free expression online we have, allowing online services to simply let their users speak. Congress should be very cautious about changing it, let alone eliminating it altogether.

Kate Ruane is the Director of the Free Expression Program at the Center for Democracy & Technology, where she advocates for the protection of free speech and human rights in the digital age.


Filed Under: afroman, defamation, intermediaries, section 230


Clear Drop Soft Plastic Compactor Review: Eco Experiment


Soft plastics are notorious for jamming sorting machines, slipping through processing lines, and wreaking havoc on the environment. They’re also not accepted in most municipal curbside recycling programs.

Facilities for recycling these types of plastic exist, but getting waste to these locations clean and free of what some call “wishful recycling” items (compostable cups, plastic utensils) is such a challenge that the majority of soft plastics, even the bags recycled at the front of grocery stores, end up in the trash. The SPC (Soft Plastic Compactor) is what Arbouzov calls a “pre-recycling device,” designed to simplify this stream and deliver plastic that’s contained, traceable, and more likely to make it through the system.

I tried to envision how the blocks would turn into patio furniture, as advertised, but didn’t learn exactly how until months later, when Arbouzov sent me a video of the blocks at their final destination—a facility in Frankfort, Indiana, that specializes in processing polyethylene and polypropylene films. The blocks get shredded into crumbles resembling, at least on video, handfuls of wet newspaper, which are then compressed into composite decking, chairs, garden edging, and more.


“The full cycle from mailing a block to it entering recycling processing typically takes a few weeks,” Arbouzov said, “depending on shipping time and batching schedules.” Right now, the Frankfort location is the only facility processing the blocks, but Arbouzov said he hopes this is only temporary.

“Our goal is to shift more of this processing closer to where the material is generated, so blocks can move in bulk through regional recycling infrastructure rather than through mail-based logistics,” he said. “The mail-back system is essentially a bridge that allows the material to be captured today while that larger infrastructure develops.”

Recycling, Rewired

I found that my household of three was able to produce a block every couple of weeks, which quickly outpaced the provided supply of mailers. As the blocks started piling up on the floor of my office, I found myself wishing the SPC made something useful for consumers. Spoons, straws, 3D-printing filament … anything that could be used at home.


However, a 2023 Greenpeace report found that recycling plastic can actually make it even more toxic than it already is—heating it can not only cause existing chemicals to escape into the air and water supply, but even create new ones, like benzene. Would I want this in my house? Does recycled plastic actually belong in a circular economy? I asked Arbouzov what he thought.


A Broken Game Boy Advance Returns Stronger Than Before


Plenty of old handhelds spend their retirement gathering dust in a box somewhere, and this Game Boy Advance was no exception. Abandoned, completely dead, and sporting a screen that had burned out from years of neglect, it was not an obvious candidate for a comeback. Odd Tinkering took it apart piece by piece anyway, worked through every problem methodically, and brought it back to life with a handful of modern upgrades that breathe new life into the hardware without losing any of what made it special in the first place.



From the start it was completely dead, just a dark screen and no response when you tried to power it on. Some thorough cleaning got the electricity flowing again, and original Game Boy and Game Boy Color titles loaded up without complaint. GBA games were a different story though, refusing to run no matter what. The small mode detection switch inside the cartridge slot got a good wipe, which seemed like it should have done the trick, but the games still wouldn’t cooperate. The real culprit turned out to be oxidation sitting on the pins of the main chip. One more cleaning session and the problem disappeared entirely, with the system reading every cartridge thrown at it without a single issue.

The screen was in rough shape, covered in dark blotches from years of burn-in. New polarizing film cleared that up, though the display was still noticeably dim by modern standards, so an IPS panel went in next and solved the brightness issue immediately. Colors are vivid and the viewing angles are excellent, exactly what you want from a handheld you are actually going to use. The upgraded screen meant the original shell no longer fit, so the team scanned it with a 3D scanner and printed a new one in resin, a deep blue that nods to the classic aesthetic while hiding the modern hardware inside. The fit is perfect, with no gaps or wobble anywhere.

The toolkit was refreshingly basic: a set of screwdrivers for disassembly, a soldering iron and desoldering tool for any stubborn connections, and hydrogen peroxide with UV light to lift the yellowing from the plastic. No specialty equipment, no secret techniques, just a clean and methodical process from the first screw to the last.
