Tech

Denon’s Home 2.0 takes the fight for the living room to Sonos and Bluesound

After several years in which we didn’t hear much from Denon’s Home speaker series, the Japanese brand has whipped up a brand-new range, with Dolby Atmos support across the entire line-up.

The range is made up of the Home 200, Home 400, and Home 600, which sort of, but not quite, replace the previous models. The Denon Home 150, Home 250, and Home 350 haven’t been wiped from existence, but they’ll be bundled in their own group, dubbed the Home 1.0.

You’ll be able to operate the Home 1.0 and Home 2.0 systems through the same app, unlike some rivals who decided to cordon off their older products from their more recent models (cough, Sonos, cough).

You’ll be able to play music to the old and new speakers within the same Denon HEOS ecosystem, though of course you won’t be able to stereo pair models across generations. You can, however, pair the speakers with the Denon Home 550 Soundbar to create a surround system.

In fact, you can use a single Home 600 speaker, which can split the audio signal into left and right channels to create the sense of two rear speakers.

Almost the same prices as before

Denon Home 600 (Image credit: Trusted Reviews)

Pricing for the new models is as follows:

  • Denon Home 200 — $399 | £299 | €349
  • Denon Home 400 — $599 | £449 | €499
  • Denon Home 600 — $799 | £599 | €699

That’s better than expected, given that it’s been almost seven years since the Home 1.0 range launched, yet the prices remain in a similar ballpark.

The Home 400 is the same price (in the UK at least) as the Home 250, and the Home 600 likewise matches the Home 350. The Home 200 is the one where the price has shot up, from £219 to £299. These aren’t necessarily equivalent devices, though, with the Home 200 adding virtual Dolby Atmos support.

New look, new sound

Denon Home 400 (Image credit: Trusted Reviews)

The new Denon Home series shares a “unified design and performance philosophy” that Denon says is built for modern living. There’s a choice of Stone or Charcoal finishes (we do like the Stone look), with physical controls on either the side or the top surface depending on the model, plus support for Wi-Fi, Bluetooth, USB-C audio, and aux-in.

What ties the experience together is Denon’s HEOS app, through which you can connect up to 64 HEOS products (AV receivers, mini systems, and so on) across 32 zones in your home. High-resolution audio is supported from Tidal, Amazon Music, and Qobuz, and you can stream with Spotify Connect as well.
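To make those limits concrete, here is a hypothetical Python sketch of the constraints described above. The class and method names are invented for illustration; this is not Denon's actual HEOS API.

```python
# Illustrative model of the HEOS limits quoted above: at most 64
# products in one household, spread across at most 32 zones.
# (Hypothetical sketch; not Denon's real SDK.)
MAX_PRODUCTS = 64
MAX_ZONES = 32

class HeosHome:
    def __init__(self):
        self.zones = {}  # zone name -> list of product names

    def add_product(self, zone, product):
        # Enforce the whole-home product cap before anything else.
        total = sum(len(products) for products in self.zones.values())
        if total >= MAX_PRODUCTS:
            raise ValueError("HEOS supports at most 64 products per home")
        # A new zone only counts against the zone cap if it doesn't exist yet.
        if zone not in self.zones and len(self.zones) >= MAX_ZONES:
            raise ValueError("HEOS supports at most 32 zones per home")
        self.zones.setdefault(zone, []).append(product)

home = HeosHome()
home.add_product("Kitchen", "Denon Home 200")
home.add_product("Living room", "Denon Home 600")
home.add_product("Living room", "Denon Home 550 Soundbar")
print(len(home.zones), sum(len(p) for p in home.zones.values()))  # 2 3
```

The point of the sketch is simply that the product cap and the zone cap are independent: three speakers can share two zones, and the 32-zone ceiling only bites when a product lands in a brand-new zone.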

Denon Home 200 (Image credit: Trusted Reviews)

We’ve heard the new speakers in the flesh and they sounded good, with an emphasis on rich, warm sound, decent bass thump and a wide soundstage. We also heard how they play with Dolby Atmos music, the soundstage stretching quite high with the Home 600 to create a sound bigger than the speaker itself.

It remains to be seen how well this new era of Denon’s Home speakers will fare, with Sonos set to release new speakers in 2026 and Bluesound releasing more models. But you can find out for yourself how good the Denon Home 2.0 series is, as it’s on sale now from Denon and authorised retailers.

I Only Listened to AI Music for a Week. It Was Terrible, but Not for the Reason You Think

Music is my constant companion. I’m almost always listening to a carefully curated playlist or new album. I wholeheartedly believe Spotify Wrapped Day should be a national holiday. So, as an AI reporter who has watched the so-called AI music industry grow over the past few years, I decided it was finally time to see how these artificial artists stack up. So I set a challenge for myself: I would only listen to AI-created music for a full week. 

It was a very, very long week. AI music really takes the “art” out of artificial. But it was an educational and revealing experience, too. 

The story of AI music is an old record that’s been played before. Musicians have debated the role of technology in music creation for hundreds of years, from the introduction of recorded music using phonographs to synthesizers, autotune and production tech going mainstream. What makes this moment unique is that AI can create entire songs with very little human guidance. But the AI models that do so are built using music created by actual humans, creating a haze of legal woes and ethical chaos — similar to that faced by other creators like writers, artists and filmmakers.

Music is one of the few universal cultural touchstones we have. Generative AI is rapidly changing how music is created, and in effect, changing our humanity with it.

A week of AI music

For the purpose of my self-imposed experiment, I only listened to songs that were verifiably altered by AI. I was pleased to see that the AI music sites offered a wide range of songs, but that initial excitement was short-lived. Most disappointingly, the vast majority of the pop music was shrill and squeaky — the musical version of plastic, in my opinion. 

A lot of the trending songs were electronic music, which I’m sure EDM fans would’ve appreciated more than me. It just reminded me of a canon event every young person experiences: Being stuck at a house party where the person on the aux is “an aspiring DJ.” The house and techno styles just reinforced the idea that I was listening to robotic AI music. It made it hard to enjoy when I knew there wasn’t even the illusion of human creation behind the songs.

I fared much better with country and folk music, which had a big focus on the instrumentals and an acoustic sound. A lot of it sounded like it could’ve been by Noah Kahan, Kacey Musgraves or Luke Combs. This is where I started to relax into my typical music habits — getting hooked by a particularly appealing song on a first listen, adding those interesting songs to a playlist that I would eventually prefer over exploring new music as I grew more comfortable and attached to my favorite songs. 

Then there was the truly weird, wacky AI music. Beyond Suno, there is an entire universe of unique AI music on sites like YouTube. My favorite (or the least worst one?) was the 8-minute Game of Thrones disco, complete with a music video, while my editor favored the Lord of the Rings version. I found the songs engrossing, probably because they’re music videos, not just songs, with haunting, AI slop visuals.

I have no idea what’s going on in this Game of Thrones music video, where white walkers dance like it’s the 1970s, but it was something.

WickedAI/Screenshot by CNET

Tech and music: A song that’s been played before

Technology has always played a role in music. Musical AI is part of a longer arc in music’s history, Mark Ethier, founder of the music tech company iZotope and executive director of Berklee’s Emerging Artistic Technology Lab, told me.

“When GarageBand came out, people felt like, ‘Oh my gosh, I can make music because I can drag some samples of a guitar, have a bass and some drums, and I’ve made a song, right?’” said Ethier. “Where we are today is the most extreme version of that.” 

Traditional music software, such as GarageBand, was meant to enhance and democratize the process of creating music. AI music companies say they do the same, but there’s a big difference: You can pop out entire AI songs with just a sentence or two to guide the vibe. The underlying tech is similar to what is running in chatbots and image generators — transformers and diffusion methods, Suno cofounder Mikey Shulman said in 2023.

AI music generators like Suno do more than piece together a song or tweak a template. Like with imagery and videos, AI has made it quicker, cheaper and easier than ever to create something that feels like it was professionally produced.

“What [AI] has changed is just how much easier it is to do, and how indistinguishable the output is,” Ethier said. Before AI, throwing some loops together on GarageBand wouldn’t be enough to make a full song or hit record. “Now, that distinction is not as clear anymore,” he said.

The AI music arena has grown quickly in a short period of time. Sites like Suno and Udio have racked up subscribers and gained notoriety. Suno reached a milestone of 2 million paying subscribers, its cofounder shared in February. But like other creative AI companies, Suno and Udio have been sued by record labels alleging the AI companies used musicians’ work for AI training without permission or compensation. 

Read More: AI Slop Is Destroying the Internet. These Are the People Fighting to Save It

Can we make connections with AI music?

The amount of time I spent listening to music dropped significantly on the days when I was restricted to only AI music, and I felt that deprivation deeply. It wasn’t until I came across a specific category of AI music that I began to border on enjoying the experience. There’s a neuroscientific and psychological reason why, I learned.

Joy Allen, a music therapist and director of Berklee’s Music and Health Institute, told me that there’s a reason music from our teen years sticks so strongly with us. Our adolescent brains are sponges, and music is one of the only things that activates every part of our brain, Allen said. Those connections, fueled by teenage hormones and neurochemicals, stay with us long after.

“When you listen to music, it’s not just activating the auditory cortex. It’s activating where you process emotions [and] physical responses … Our brains love patterns,” Allen said. “If you think about music, it’s patterns, it’s chordal structures, it’s the melody line… so we get used to patterns and predictability.”

My teen years were largely set to the soundtrack of Taylor Swift, and anyone who’s met me knows she’s still my favorite artist. But even knowing what Allen told me, I was surprised at how emotional the AI covers of Taylor Swift songs made me. 

A lot of the AI covers I listened to took Swift’s songs and reimagined them in different genres. An AI pop punk version of “You Belong With Me” sounded like it could’ve been sung by another band from my teen years, 5 Seconds of Summer. It was strangely gratifying, with a heavy dose of nostalgia. It was also the only AI song to get stuck in my head.


Nothing like Taylor Swift for a good dose of nostalgia.

Katie Collins/CNET

We can make emotional attachments to any music — created by humans or AI, theoretically, Allen said — during those formative years. But since my musical identity is already formed, the AI songs that brought out the more visceral, emotional reaction in me were those that drew on those connections and memories, firing those neurochemicals in my brain. I was more engaged and happier listening to these AI Swiftie covers than any other AI song. The songs were different, but they were still the lyrics I had sung into my hairbrush as a kid and in a million other scenarios throughout my life, brought to life in a new way.

While these songs were the highlight of my experiment, they didn’t sell me on AI music any more than the “original” songs did. The AI largely reminded me of the covers I had listened to in real life and seen clips of online. I liked the AI folk cover of Swift’s “All Too Well,” but it was a cheap imitation compared to the guitarist I heard sing it in a coffee shop last year, or the indie bands adding their own individual touches that I come across on TikTok.

The power of a great artist is their ability to create music that inspires others, to move them and spark flames of creativity. Covers by human musicians are a way to pay tribute and express appreciation; AI covers felt like cheap imitations and mockery by comparison. 

Music is human

I was irritatingly cognizant of my experiment while I was doing it. The AI music never held my attention the same way that human music did. With a few notable exceptions, the AI songs were basically white noise. I often caught myself drifting toward the Spotify app to turn on better music. In the final days of my experiment, I decided that no music at all was better than AI music. Even now as I write this, the car horns and bird chirps outside my window are better company than fake instruments.

AI has become a part of our lives, for better or worse. But it’s not just part of our technology; it’s slowly infiltrating our culture. Music is one of the strongest cultural touchstones we have, and to have AI so quickly and effectively mimic something that is inherently human is… awe-inspiring. Worrisome. But definitely a very clear sign that AI is remaking the very things that define our humanity. It left me with an increasingly deep sense of dread about the havoc AI is wreaking on our culture and humanity.

It’s not just listeners like me who are struggling — musicians are, too. AI-generated music is flooding streaming platforms, leaving companies like Apple Music and Spotify struggling to define what’s allowed, what isn’t and what’s monetizable. It’s even more complex from a legal and ethical point of view.

“As a musician, this is a really complicated time to be understanding tools,” Ethier said. “You used to be able to pick up a trumpet and play trumpet. You didn’t have to think about how that trumpet was trained, or if the trumpet owns your music.”

Music is intrinsically human and social by design. So it wasn’t surprising that I felt disconnected throughout my AI music week. It was an isolating experience — no memories tied to core moments, no TikTok dances, no culture. No artist personality, little fandom. No thoughts of “remember how she jumped an octave when she performed it live?” It was a superficial listening experience. I didn’t want to revisit them once my experiment was done.

So much of the music we listen to is tied to specific memories. The AI songs I felt most connected to were covers of songs I already had a strong emotional connection with: Taylor Swift songs I listened to for the first time at eight years old in the backseat with my childhood besties; songs that were inspired by but utterly lacking the emotion of the ’90s power ballad my dad loves but my mom bemoans every time he plays it; a “Stick Season” AI wannabe that lacks Noah Kahan’s signature “dance while the world burns” flavor.

Music scores so many moments of our lives, from the big ones, like a married couple’s first dance, to the small ones that flow by without us noticing. All of that builds up over our lives. Removing the humanity — or worse, trying to mimic it — sucks the soul out of what makes music worthwhile.

So, no, I would not recommend listening to only AI-generated music for a week. But it was useful, if only to further refine my worries about the way AI is eroding our humanity.

Cauldron Ferm has turned microbes into nonstop assembly lines

Cauldron Ferm has an unlikely origin story, as startups go. Its core technology can be traced back to the 1960s, or maybe the 1970s. The exact start is a bit hazy, actually. What is known is that David and Polly McLennan had a dream of feeding the world using protein grown from microbes.

The pair knew they needed to improve the process, which was pricy and time consuming. Most fermentation happens in batches. Picture a brewery or a vineyard. Ingredients go in and the microbes work for a while, but then the process stops when it’s time to take out the finished product. It works for alcohol because booze commands a premium price. Food, though? That needs to be cheaper.

Still, the McLennans stuck with it, starting a small business that would over the course of 40 years refine their approach to continuous fermentation, which turns microbes into assembly lines capable of cranking out products uninterrupted.
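The throughput advantage of continuous fermentation is easy to see with a toy model. The sketch below uses made-up numbers (the 48-hour batch time, yield, and turnaround are purely illustrative, not Cauldron's process data) to compare a week of batch runs against a continuous line of the same capacity:

```python
# Toy batch-vs-continuous comparison; all figures are illustrative,
# not Cauldron Ferm's actual process data.
HOURS_PER_WEEK = 7 * 24  # 168 hours

# Batch: 48 h of fermentation yields 100 kg of product, then the tank
# is emptied, cleaned, and refilled for 12 h before the next run.
BATCH_FERMENT_H = 48
BATCH_TURNAROUND_H = 12
BATCH_YIELD_KG = 100.0

cycles = HOURS_PER_WEEK // (BATCH_FERMENT_H + BATCH_TURNAROUND_H)
batch_total = cycles * BATCH_YIELD_KG  # only completed cycles count

# Continuous: the same microbes produce at their in-ferment rate
# around the clock, with no stop-and-restart downtime.
rate_kg_per_h = BATCH_YIELD_KG / BATCH_FERMENT_H
continuous_total = rate_kg_per_h * HOURS_PER_WEEK

print(f"batch: {batch_total:.0f} kg, continuous: {continuous_total:.0f} kg")
# prints "batch: 200 kg, continuous: 350 kg"
```

Even with these generous assumptions for the batch process, the downtime between runs (and the partial cycle lost at the end of the week) costs it well over a third of the continuous line's output, which is the economic argument the McLennans were chasing.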

“We didn’t know what we had,” Michele Stansfield, co-founder and CEO of Cauldron Ferm, told TechCrunch. But eventually Stansfield, who arrived at the McLennans’ company in 2012, realized they had more than they initially thought.

“We didn’t understand the challenge of continuous fermentation for synthetic biology,” Stansfield said. But when she did, she sought to transform the company from a small fee-for-service operator to a fast-moving startup. “At that point, I raised a seed round and acquired the IP, physical, and business assets.”

Cauldron has now raised $13.25 million in a Series A2 round that was led by Main Sequence Ventures with participation from Horizons Ventures, NGS Super, and SOSV, the company exclusively told TechCrunch. It had previously raised $6.5 million in 2024. Cauldron plans to use the funding to “increase the technology moat,” Stansfield said. 

The company calls its technology “hyper fermentation,” which helps keep microbes in their maximally productive state. It can work in existing batch fermenters with a few modifications to the facility to accommodate the process. Cauldron’s customers bring their own microbes and strains, and the startup works to tweak their growing conditions, including nutrients, to keep them humming.

Currently, Cauldron is focused on producing fats and proteins, including whey protein, “a product that can just slip into supply chains,” Stansfield said, though she adds there are more products the company has its eyes on.

“Sixty percent of all inputs to [the] global economy can be produced from biology,” she said. “Food was where we started, but now we’re starting to really diversify.”

Jury struggles to reach verdict in social media addiction trial against Meta and YouTube

Jurors did not say whether the holdout relates to Meta or YouTube, but Kuhl told them to keep deliberating and warned that if they cannot reach a verdict, that part of the case will have to be retried before a new jury.
Dutch Ministry of Finance discloses breach affecting employees

The Dutch Ministry of Finance confirmed on Monday that some of its systems were breached in a cyberattack detected last week.

Officials said the ministry was notified by a third party of the breach on March 19, and it’s still investigating the cyberattack. An ongoing investigation found that the incident affects some employees.

“The Ministry of Finance’s ICT security detected unauthorized access to systems for a number of primary processes within the policy department on Thursday, March 19,” an official statement revealed.

“Following the alert, an immediate investigation was launched, and access to these systems has been blocked as of today. This affects the work of a portion of the employees.”

The ministry added that the cyberattack did not impact systems used to manage tax collection, import/export regulations, and income-linked subsidies, which handle over 9.5 million tax returns annually for income tax alone.

“Services to citizens and businesses provided by the Tax and Customs Administration, Customs, and Benefits have not been affected. We will update this message when we can share more information.”

Although the ministry said the breach affected some of its employees, it didn’t disclose how many were affected or whether the attackers stole any sensitive data. So far, no cybercrime group or threat actor has claimed responsibility for the attack.

BleepingComputer reached out to a Ministry of Finance spokesperson with questions about the incident, including the total number of impacted employees and how long the attackers had access to the compromised systems, but a response was not immediately available.

In September 2024, the Dutch national police (Politie) was also breached in a cyberattack believed to be orchestrated by a “state actor” that stole work-related contact details of multiple police officers.

More recently, in February, Dutch authorities arrested a 40-year-old man for an extortion attempt after he downloaded confidential documents mistakenly shared by the police and refused to delete them unless he received “something in return.”

Direct Pressure Advance Measurement For Fast Calibration

Some people love fiddling with their 3D printers; others love printing. Some fiddle so they can spend more time printing, which is probably where this latest project comes in: an automated pressure advance calibration tool by [markniu].

Most of us don’t take enough care with pressure advance (PA). But if you want absolutely perfect prints, it’s something you should be calibrating for every type of filament in your collection (some would argue, ideally every individual spool). While that sort of dialing in can be fun, it takes away from actually running off prints. Bambu printers automate PA by scanning the usual sort of calibration print, but that’s still a very indirect measurement. Why not just advance the filament and measure the pressure at the nozzle directly? That is what PA is meant to account for, after all: the pressure of the plastic in the hotend causing oozing and blobbing at corners.

Did we mention it connects via USB-C? That’s helpfully broken out well away from the heat with a ribbon cable.

[mark]’s solution comes very close to a direct measurement. It uses a strain gauge that sits directly on top of the heatbreak, with the sound logic that the strain experienced there will be directly proportional to the pressure inside, at least along the axis of flow. Instead of filling half the bed with lines, the calibration process is a ‘printer poop’ style extrusion that doesn’t take nearly as long, and seems to save plastic, too. Since this puts a strain gauge in your hotend, you also get the bonus of being able to use it for bed leveling if you should so desire.

[mark] is claiming sub-90-second calibration — as you can see in the demo video embedded below — versus over seven minutes for the indirect calibration print. The value is plugged directly into Klipper, assuming you’ve configured everything correctly, which should be easy enough following the instructions on the GitHub.
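To get a feel for the numbers, here is a rough sketch of how a pressure-advance-style value could be pulled from strain readings. This is our own toy illustration with made-up samples, not [markniu]'s actual algorithm: if nozzle pressure behaves like a first-order system, the strain signal measured after extrusion stops decays exponentially, and the fitted time constant has the same units (seconds) and plays the same role as Klipper's pressure_advance.

```python
# Toy sketch (not [markniu]'s firmware): estimate a pressure-advance-like
# time constant from strain-gauge samples taken after the extruder stops.
import math

# Hypothetical strain readings (arbitrary units) sampled every 10 ms
# once filament stops advancing; pressure bleeds off through the nozzle.
dt = 0.010
samples = [100.0, 81.9, 67.0, 54.9, 44.9, 36.8]

# Fit strain(t) = s0 * exp(-t / tau) by least squares on log(strain),
# which turns the exponential into a straight line with slope -1/tau.
n = len(samples)
ts = [i * dt for i in range(n)]
logs = [math.log(s) for s in samples]
t_mean = sum(ts) / n
l_mean = sum(logs) / n
slope = sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, logs)) / \
        sum((t - t_mean) ** 2 for t in ts)
tau = -1.0 / slope  # seconds; the first-order stand-in for pressure advance

print(f"estimated time constant: {tau:.3f} s")  # prints roughly 0.050 s
```

The appeal of a direct measurement is exactly this: a handful of sensor samples and a one-line fit replace a long calibration print that a human then has to eyeball.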

Canonical Joins Rust Foundation – Slashdot

BrianFagioli writes: Canonical has joined the Rust Foundation as a Gold Member, signaling a deeper investment in the Rust programming language and its role in modern infrastructure. The company already maintains an up-to-date Rust toolchain for Ubuntu and has begun integrating Rust into parts of its stack, citing memory safety and reliability as key drivers. By joining at a higher tier, Canonical is not just adopting Rust but also stepping closer to its governance and long-term direction.

The move also highlights ongoing tensions in Rust’s ecosystem. While Rust can reduce entire classes of bugs, it often depends heavily on external crates, which can introduce complexity and auditing challenges, especially in enterprise environments. Canonical appears aware of that tradeoff and is positioning itself to influence how the ecosystem evolves, as Rust continues to gain traction across Linux and beyond. “As the publisher of Ubuntu, we understand the critical role systems software plays in modern infrastructure, and we see Rust as one of the most important tools for building it securely and reliably. Joining the Rust Foundation at the Gold level allows us to engage more directly in language and ecosystem governance, while continuing to improve the developer experience for Rust on Ubuntu,” said Jon Seager, VP Engineering at Canonical. “Of particular interest to Canonical is the security story behind the Rust package registry, crates.io, and minimizing the number of potentially unknown dependencies required to implement core concerns such as async support, HTTP handling, and cryptography — especially in regulated environments.”

Steve Wozniak says he's "disappointed a lot" by AI and rarely uses it

In a CNN interview in which he was asked about Apple’s upcoming 50th anniversary and how the company has shaped the tech industry, Wozniak was asked what excites and scares him about AI.
What Does The Viral Afroman Trial Have to Do with Section 230?

from the because-i-got-section-230 dept

The internet has been rightfully enjoying videos from the defamation trial against Afroman, a musician known for his humorous songs including “Because I Got High.” The lawsuit involves songs he wrote about a 2022 raid police conducted on his house, which was based on flimsy evidence. The songs justifiably mock the officers involved. Mike Masnick wrote a recap of the case here, which is worth reading for many reasons, but the songs and Afroman’s testimony are true highlights. 

After the raid, Afroman released his songs on YouTube and they went viral initially on TikTok, both massive platforms for users to share their speech and that of other users. The officers who raided his home, seeking to silence someone making fun of them, sued Afroman for defamation, emotional distress, and other causes in 2023. 

Spoiler: Afroman won. The songs are not defamatory. But we didn’t know that for sure until a jury told us so this week. For three years, from the moment the lawsuit was filed until the jury issued its verdict, the songs were allegedly defamatory. And their continued “publication” ran the risk of liability.

So why could we still see the songs on YouTube, TikTok, Bluesky, and whatever other online platforms where we first encountered them? One big reason is Section 230 of the Communications Decency Act. 

Section 230 says that interactive computer service providers, like online platforms, cannot be treated as the publisher or speaker of information content provided by other information content providers. That means that YouTube could not be liable for the content of Afroman’s songs, even if they were defamatory. That’s the balance Section 230 strikes. Under 230, there is still accountability for the speaker, but online platforms are not liable for their users’ illegal speech.

By and large this balance has been incredibly beneficial to free expression online, supporting speech about everything from the profoundly consequential (#MeToo and Black Lives Matter) to the somewhat silly (a song about a cop who got distracted from a raid by a delicious looking “Lemon Pound Cake”). But now, members of Congress like Senator Lindsey Graham and Senator Dick Durbin want to repeal or replace Section 230 without much of a plan for what comes next. 

On March 18, Daphne Keller, a professor of law at Stanford and expert in intermediary liability laws around the world, testified before the Senate Commerce Committee. She tried to explain to the Senators that Section 230 may not be perfect, but it’s still better than any of the options she has seen. To understand why Daphne’s right, let’s think about what Afroman’s case might have looked like without Section 230. The moment Afroman was allowed to distribute his songs about the raid on YouTube, the company could have been liable for any potentially illegal speech they contained. That means YouTube probably also would have been a co-defendant in the cops’ suit. At the scale at which many online platforms operate, these kinds of accusations of defamation and lawsuits related to user posts would happen hundreds of thousands, if not millions, of times a day.

That’s a lot of litigation.

Advertisement

Staring down the barrel of that many potential lawsuits every day, no reasonable platform would have allowed Afroman’s speech to stay up. The moment an accusation of illegality surfaced, a platform acting reasonably would likely take the speech down. And to be clear, we have evidence that this is how they would react: That’s the incentive structure currently in place under the Digital Millennium Copyright Act (DMCA). The DMCA creates a notice and takedown system for alleged copyright violations and evidence suggests that improper takedown requests are common and, even with the safeguards for speech built into that law, result in over-censorship. Replicating a version of the DMCA for all content on the internet writ large would likely produce the same over-censorship result. At a minimum, the platforms certainly wouldn’t allow their algorithms to recommend posts linking to the defamatory songs, effectively “shadowbanning” them, which is probably one of the main ways many people came across the songs to begin with.

The upshot is: Section 230 created the conditions that allowed us to hear Afroman’s songs, and allowed platforms to recommend them, even while their status was in legal limbo. 

There are millions of similar situations, large and small, every day where Section 230 ensures that online platforms do not have to try to make context-specific legal judgment calls. Section 230 may not be perfect. No law is. But it’s the best and most effective protection for free expression online we have, allowing online services to simply let their users speak. Congress should be very cautious about changing it, let alone eliminating it altogether.

Kate Ruane is the Director of the Free Expression Program at the Center for Democracy & Technology, where she advocates for the protection of free speech and human rights in the digital age.

Filed Under: afroman, defamation, intermediaries, section 230

Clear Drop Soft Plastic Compactor Review: Eco Experiment

Soft plastics are notorious for jamming sorting machines, slipping through processing lines, and wreaking havoc on the environment. They’re also not accepted in most municipal curbside recycling programs.

Facilities for recycling these types of plastic exist, but getting waste to these locations clean and free of what some call “wishful recycling” items (compostable cups, plastic utensils) is such a challenge that the majority of soft plastics, even the bags recycled at the front of grocery stores, end up in the trash. The Soft Plastic Compactor (SPC) is what Arbouzov calls a “pre-recycling device,” designed to simplify this stream and deliver plastic that’s contained, traceable, and more likely to make it through the system.

I tried to envision how the blocks would turn into patio furniture, as advertised, but didn’t learn exactly how until months later, when Arbouzov sent me a video of the blocks at their final destination—a facility in Frankfort, Indiana, that specializes in processing polyethylene and polypropylene films. The blocks get shredded into crumbles resembling, at least on video, handfuls of wet newspaper, which are then compressed into composite decking, chairs, garden edging, and more.

Courtesy of Clear Drop

“The full cycle from mailing a block to it entering recycling processing typically takes a few weeks,” Arbouzov said, “depending on shipping time and batching schedules.” Right now, the Frankfort location is the only facility processing the blocks, but Arbouzov said he hopes this is only temporary.

“Our goal is to shift more of this processing closer to where the material is generated, so blocks can move in bulk through regional recycling infrastructure rather than through mail-based logistics,” he said. “The mail-back system is essentially a bridge that allows the material to be captured today while that larger infrastructure develops.”

Recycling, Rewired

I found that my household of three was able to produce a block every couple of weeks, which quickly outpaced the provided supply of mailers. As the blocks started piling up on the floor of my office, I found myself wishing the SPC made something useful for consumers. Spoons, straws, 3D-printing filament … anything that could be used at home.

However, a 2023 Greenpeace report found that recycling plastic can actually make it even more toxic than it already is—heating it can not only cause existing chemicals to escape into the air and water supply, but even create new ones, like benzene. Would I want this in my house? Does recycled plastic actually belong in a circular economy? I asked Arbouzov what he thought.

A Broken Game Boy Advance Returns Stronger Than Before

Plenty of old handhelds spend their retirement gathering dust in a box somewhere, and this Game Boy Advance was no exception. Abandoned, completely dead, and sporting a screen that had burned out from years of neglect, it was not an obvious candidate for a comeback. Odd Tinkering took it apart piece by piece anyway, worked through every problem methodically, and brought it back to life with a handful of modern upgrades that breathe new life into the hardware without losing any of what made it special in the first place.



From the start it was completely dead, just a dark screen and no response when you tried to power it on. Some thorough cleaning got the electricity flowing again, and original Game Boy and Game Boy Color titles loaded up without complaint. GBA games were a different story though, refusing to run no matter what. The small mode detection switch inside the cartridge slot got a good wipe, which seemed like it should have done the trick, but the games still wouldn’t cooperate. The real culprit turned out to be oxidation sitting on the pins of the main chip. One more cleaning session and the problem disappeared entirely, with the system reading every cartridge thrown at it without a single issue.

The screen was in rough shape, covered in dark blotches from years of burn-in. New polarizing film cleared that up, though the display was still noticeably dim by modern standards, so an IPS panel went in next and solved the brightness issue immediately. Colors are vivid and the viewing angles are excellent, exactly what you want from a handheld you are actually going to use. The upgraded screen meant the original shell no longer fit, so the shell was scanned with a 3D scanner and a new one printed in resin, a deep blue that nods to the classic aesthetic while hiding the modern hardware inside. The fit is perfect, with no gaps or wobble anywhere.

The toolkit was refreshingly basic: a set of screwdrivers for disassembly, a soldering iron and desoldering tool for any stubborn connections, and hydrogen peroxide with UV light to lift the yellowing from the plastic. No specialty equipment, no secret techniques, just a clean and methodical process from the first screw to the last.
