
Recycling PLA And Other Plastic Waste With Compression Molding


After previously trying out low-tech compression molding with a toaster oven and 3D printed molds, [future things] is back with a video that seeks to answer some of the questions raised by the first one: how well does this method work with HDPE and PLA thermoplastics, can the flashing be cut off by the mold itself, and what are the right temperatures and times for heating the plastic before a charge is ready for inserting into the mold?

In this video the same PHA-based mold is used, but in a three-piece configuration to allow for a more complex shape. This way, game tokens could be made for the author’s son, which also shows one straightforward and very practical use of this method.

A big change here is that metal chopsticks are no longer used to handle the charge, as this was found to cool down the heated plastic too much. Instead the hot charge is handled with fingers and wooden chopsticks, with the plastic heated until it has about the consistency of thick honey. For LDPE this takes about 5-7 minutes at 130°C. After compressing the charge into the mold, about 30 seconds is all it takes for the plastic to cool down enough.

There was a question about the use of mold release spray, but this didn’t seem to cause any issues, so it can probably be used safely. As for other plastic types, HDPE works fine too, as long as you heat it at a slightly higher temperature and don’t mind it being tougher to handle.


The easiest is probably PLA, which should come as no surprise. Using some chopped-up PLA printing waste it was easy enough to make a few more game tokens, demonstrating that this method is very viable for converting scrap FDM print waste into such items. As noted in the comments by [edmundchao], this method also works great for PETG, using PETG molds, with a ratcheting clamp for extra pressure instead of just pressing by hand.


At his OpenAI trial, Musk relitigates an old friendship


The most interesting part of Elon Musk’s testimony Tuesday in his lawsuit against OpenAI wasn’t the charity he claims was stolen from him (we all knew that was coming). It was about an old friend.

Musk testified that one of his core motivations for co-founding OpenAI was a falling-out with Google’s Larry Page over AI safety — specifically, a conversation in which Musk raised the prospect of AI wiping out humanity and Page shrugged it off as “fine,” so long as AI itself survived. Page called Musk a “speciesist” for being “pro human.” Musk called the attitude “insane.”

That’s mostly notable given how close the two once were. Fortune included them on its 2016 list of business leaders who are secretly best friends; Musk was so comfortable with Page that he regularly crashed at his Palo Alto home. Page once told Charlie Rose that he’d rather give his money to Musk than to charity.

The friendship didn’t survive OpenAI. When Musk recruited Google AI star Ilya Sutskever to help launch the company in 2015, Page felt personally betrayed and cut off contact.


It’s a story Musk has told before — including to author Walter Isaacson for his bestselling biography of Musk — but Tuesday was the first time he told it under oath. Page hasn’t commented, and it’s worth remembering that everything Musk said was in service of a lawsuit. Still, as recently as 2023 he told tech podcaster Lex Fridman he wanted to patch things up: “We were friends for a very long time.”


South Africa withdraws national AI policy after at least 6 of 67 academic citations found to be AI-generated hallucinations


TL;DR

South Africa’s Communications Minister Solly Malatsi withdrew the country’s draft national AI policy after News24 discovered that at least 6 of its 67 academic citations were AI-generated hallucinations: references to fake articles in real journals. The policy had been approved by Cabinet in March and published for public comment. Malatsi called it an “unacceptable lapse” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about its institutional capacity to regulate the technology.

South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: skills capacity, responsible governance, ethical and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft on 25 March. The Government Gazette published it on 10 April for public comment. And then News24, the South African news outlet, checked the bibliography and discovered that at least six of the document’s 67 academic citations did not exist. The journals were real. The articles were not. The authors credited with foundational research on AI governance had never written the papers attributed to them. Editors at the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the cited articles had never been published in their pages. The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published the output without verifying a single reference. A government policy designed to govern artificial intelligence was undermined by the artificial intelligence it failed to govern.


The withdrawal

Malatsi announced the withdrawal on 27 April, calling the fictitious citations an “unacceptable lapse” that “compromised the integrity and credibility of the draft policy.” He said consequence management would follow for those responsible for drafting and quality assurance. “This failure is not a mere technical issue,” the minister said. The parliamentary portfolio committee chair offered a more concise assessment, suggesting the department “skip using ChatGPT this time” when redrafting. The document will be revised before being reissued for public comment, but no timeline has been given. South Africa is now without a formal AI governance framework at a time when governments worldwide are grappling with how to regulate AI, and the country’s credibility as a serious participant in that conversation has taken a blow that will outlast the policy revision.

The scandal is not simply that fake citations appeared in a government document. It is that they appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, during the exact period when the world’s most consequential AI governance debates are being fought in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with delayed standards and an implementation timeline that has been pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House attempts to preempt their efforts. China has enacted AI regulations but applies them selectively. Into this landscape, South Africa offered a policy that could not survive a bibliography check.

The pattern

South Africa’s hallucinated citations are an extreme case of a problem that is quietly spreading across institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 per cent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 per cent in 2024. If that rate holds across the roughly seven million scholarly publications from 2025, more than 110,000 papers contain invalid references. GPTZero, a Canadian detection startup, analysed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s premier AI conferences, and found over 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 per cent of AI-generated bibliographic references were entirely correct. The problem is structural: large language models generate citations through probabilistic token prediction rather than information retrieval. They do not look up papers. They predict what a citation should look like based on the patterns in their training data, and when the prediction is confident enough, they produce a reference that reads as authoritative but points to nothing.
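Checks like the one News24 ran can, in principle, be automated against a bibliographic index before a document ever ships. Below is a minimal sketch in Python against the public Crossref REST API; the workflow is our assumption (News24 has not described its method), the reference string is an invented stand-in, and a “no match” result is a flag for human review rather than proof of fabrication.

```python
# Minimal bibliography sanity check against the public Crossref REST API.
# Hypothetical illustration: the reference below is invented, and "no match"
# only means "escalate to a human", not "definitely fabricated".
import requests

CROSSREF = "https://api.crossref.org/works"

def best_match(reference: str) -> dict | None:
    """Return Crossref's closest match for a free-text reference, if any."""
    resp = requests.get(
        CROSSREF,
        params={"query.bibliographic": reference, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

references = [
    # Stand-in for one entry from a policy document's bibliography.
    "Smith, J. (2023). Governing machine minds. AI & Society.",
]

for ref in references:
    hit = best_match(ref)
    if hit is None:
        print(f"NO MATCH : {ref}")
    else:
        title = (hit.get("title") or ["<untitled>"])[0]
        print(f"candidate: {title!r} (doi:{hit.get('DOI')}) -> verify by hand")
```

Even a crude script like this surfaces references whose titles exist nowhere in the indexed literature, which is exactly the class of error the drafters failed to catch.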

The South African case is distinctive not because the technology hallucinated, which is a well-documented and inherent limitation of generative AI, but because the hallucinations were published in an official government policy document that passed through Cabinet approval without anyone verifying the references. The drafting process included civil servants, subject matter consultations, and ministerial review. Dumisani Sondlo, the department’s AI policy lead, had previously described the policy development as “an act of acknowledging that we don’t know enough.” That acknowledgment did not extend to acknowledging that the tool being used to help draft the policy was itself unreliable. The six fake citations that News24 identified are the ones that were caught. Whether additional citations in the document’s 67 references are genuine has not been publicly confirmed. The entire bibliography is now under suspicion, and by extension, so is the analytical foundation on which the policy’s proposals were built.

The implications

The immediate consequence is that South Africa’s AI governance timeline has been reset. The draft policy, which was intended to position the country as a leader in responsible AI adoption on the African continent, will need to be redrafted, reconsulted, and resubmitted. The institutional credibility damage extends beyond the policy itself. If the department responsible for governing AI cannot verify whether the sources in its own policy document are real, the question becomes whether it has the capacity to evaluate the AI systems it proposes to regulate. The policy envisioned a multi-regulator model in which AI governance and human oversight would be embedded within existing supervisory frameworks rather than centralised under a single authority. That model requires each participating regulator to have sufficient technical understanding to assess AI systems in their sector. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.


The broader lesson is not that governments should avoid using AI in policy development. It is that the failure mode of AI is not dramatic. It does not crash. It does not display an error message. It produces fluent, formatted, confident text that looks exactly like the output of a competent researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real people. They followed the formatting conventions of academic references. The only way to catch them was to check whether each one actually existed, a task that requires exactly the kind of methodical human verification that AI is supposed to make unnecessary. Growing public distrust of AI is not irrational. It is a response to a technology that is simultaneously powerful enough to draft a national policy and unreliable enough to fabricate the evidence that policy rests on. South Africa’s embarrassment is singular, but the underlying failure, using AI without the capacity to verify its output, is not. It is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first government to publish the receipts. The challenges of implementing AI regulation are real, but they begin with a prerequisite that South Africa’s department did not meet: understanding what the technology does before trying to write the rules for it.


The Secretary Of Health & Human Services Doesn’t Believe In The Foundation Of Modern Medicine


from the medicine-101 dept

We discussed RFK Jr.’s recent appearance before Congress, where he bravely declared that the current measles outbreak in America has absolutely nothing to do with him, despite that definitely not being true. But, unsurprisingly, that wasn’t the only craziness that Kennedy put on display in the hearing.

The Secretary of HHS doesn’t believe in the foundational theory that powers modern medicine.

Read that again. It’s an insane sentence, the sort of thing that should be fiction. What we’re talking about here is the germ theory of disease, which is the accepted science on how many diseases are caused by, and spread through, pathogens. We mentioned in a post last year, which was chiefly about how Kennedy decided to take his grandkids swimming in a creek filled with poop, that he had also written in a 2021 book that he doesn’t believe in germ theory, and instead believes in what he incorrectly labels “miasma theory”.

It’s one thing to write something like that in a book while we were mired in a global pandemic. But Kennedy both admitted that he doesn’t believe in germ theory, and defended that belief, before Congress.


In the hearing on Wednesday, Sen. Bernie Sanders called attention to Kennedy’s denial of germ theory and raised one of Kennedy’s shakier supporting arguments in order to debunk it. In opening statements, Sanders warned Kennedy that he wanted to question the “things that you have written which call in doubt the very existence of the germ theory.”

Sanders pointed out a 2024 study led by the World Health Organization and published in The Lancet that found that since 1974, vaccines had saved an estimated 154 million lives, including 146 million children under the age of 5—or, as WHO put it, vaccines saved the equivalent of six lives every minute of every year over the past 50 years.

“My question is a simple one,” Sanders said, “do you still believe that one of the central tenets of the germ theory, that vaccines sharply reduce infant mortality, is quote-unquote simply untrue?”

Kennedy first did what he always does: try to tell you that the experts and studies have no idea what they’re talking about, or are hopelessly corrupted tools of industry. He does this so often that you can set your watch by it. If a study agrees with him, it’s a good study. If it doesn’t, it’s bad. He’s more like Trump than any of us realized.

Then he launched into his own justification and offered up a 2000 study that he claimed demonstrated that it was improved nutrition and sanitation that reduced childhood deaths over the 20th century, and explicitly not medicines like vaccines. Unfortunately for Kennedy, Sen. Bill Cassidy piped up with a, oh, let’s call it a minor correction.


The study by Guyer notes that sanitation, among other public health strategies introduced in the first half of the 20th century, drove major declines in mortality. But, as Cassidy noted during the hearing, that’s not all the study found. Cassidy had looked up the studies Kennedy raised and read through them during the hearing.

The Guyer study highlighted that vaccination did not become widespread until after the middle of the century, and thus cannot account for mortality declines before that point. But it concluded, as Cassidy read aloud at the hearing:

The reductions in vaccine-preventable diseases, however, are impressive. In the early 1920s, diphtheria accounted for about 175,000 cases annually and pertussis for nearly 150,000 cases; measles accounted for about half a million annual cases before the introduction of vaccine in the 1960s. Deaths from these diseases have been virtually eliminated, as have deaths from Haemophilus influenzae, tetanus, and poliomyelitis.

Kennedy tried again, with another study, but Cassidy pointed out that it had the same issue as Kennedy’s first: it measured data from the beginning of the century to the early 1970s. Many of the vaccines Kennedy rails against had barely been out during the period the study analyzed, or in many cases hadn’t come out at all. Speaking specifically to the measles vaccine, released in 1963, Cassidy said:

“There’s 3.5 million cases of measles per year before the vaccine came along and about 550 deaths, and then the vaccine took those to less than 100 [cases] and like zero deaths,” Cassidy said. “So a tremendous impact of the vaccination.”

The problem with Cassidy is that he’s acting like he’s trying to convince Kennedy to change his mind on this. He’s not going to. Not ever. He’s made that clear.

So impeach him or convince Trump to make Kennedy his next cabinet firing. That’s all that’s left to do. Because we certainly cannot continue having someone run HHS who doesn’t believe in the very baseline theory for medicine.


Filed Under: bill cassidy, germ theory, health & human services, rfk jr.


What skills drive a senior software engineer’s work at Yahoo?


Graham Bartley discusses his role in the software engineering landscape and the evolution of his job over recent years.

“No two days are exactly the same, which is part of what keeps the job engaging,” Yahoo senior software engineer Graham Bartley told SiliconRepublic.com. 

“On days when I go into the Dublin office, I typically spend the first hour of the morning enjoying tea and catching up with my colleagues face-to-face,” he said.

“I always enjoy hearing their stories, learning a little about what they’re working on and spotting opportunities for collaboration.


“On remote days, I usually start by checking in on our squad’s Jira board and any overnight Slack discussions. I lead a squad called Optimus, which is Yahoo Demand Side Platform (DSP)’s core generative AI (GenAI) backend team, so there’s often a thread to catch up on from my squad members or a design question from one of the other squads we support.”

From there, Bartley’s day is a mix of writing code, reviewing pull requests, design work, managing Jira tickets and taking calls with squad members. On a coding day, he might find himself building out a new API endpoint for Yahoo’s troubleshooting agent, writing integration tests or debugging a production issue end-to-end across multiple services.
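To give a flavour of that endpoint work, here is a hypothetical sketch of what a troubleshooting-agent API route might look like. This is not Yahoo’s code; the route name, request/response models, and the diagnose_campaign stub are all invented for illustration.

```python
# Hypothetical sketch of a troubleshooting-agent endpoint (FastAPI).
# Not Yahoo's service: every name here is invented for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TroubleshootRequest(BaseModel):
    campaign_id: str
    question: str  # e.g. "Why is this campaign underdelivering?"

class TroubleshootResponse(BaseModel):
    diagnosis: str
    suggested_actions: list[str]

def diagnose_campaign(campaign_id: str, question: str) -> TroubleshootResponse:
    """Stand-in for the real agent call (retrieval + LLM + post-processing)."""
    return TroubleshootResponse(
        diagnosis=f"Campaign {campaign_id}: delivery may be limited by narrow targeting.",
        suggested_actions=["Broaden geo targeting", "Raise the bid cap"],
    )

@app.post("/troubleshoot", response_model=TroubleshootResponse)
def troubleshoot(req: TroubleshootRequest) -> TroubleshootResponse:
    # A production version would orchestrate data retrieval and an LLM call;
    # this stub just shows the shape of the contract.
    return diagnose_campaign(req.campaign_id, req.question)
```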

“There are usually quite a few meetings, including our daily squad standup, individual calls with each squad member, cross-team syncs with other squads, architecture meetings and office-hour sessions where engineers present designs for feedback.

“I try to protect blocks of focused time for deep work, but being a squad lead means always being available when someone needs support.”

As a senior software engineer, how does your role fit into the wider software industry?

I work on Yahoo’s DSP, which is the technology behind how advertisers plan, target and optimise digital advertising campaigns across channels like mobile, connected TV, desktop and audio. It’s a big part of what’s called programmatic advertising and it’s a space that keeps growing.

Within that, my current focus is on applying GenAI to make the platform smarter and more intuitive. I lead a squad that built Yahoo DSP’s first GenAI-powered feature, an AI troubleshooting agent that helps advertisers understand why their campaigns might be underdelivering and what they can do about it.

What skills do you use on a daily basis and how have they evolved over time?

Day to day, I write Python and Java, design APIs and distributed systems, review code and work across the full stack from database schemas through backend services to UI components, although recently my focus has been entirely on the backend as that is my squad’s responsibility. I also design technical architectures, write design documents, and present to both technical and non-technical audiences.

The breadth of what I do has grown a lot over time. Earlier in my career, I was heads-down writing code and delivering features within a couple of codebases based on the Jira tickets I was assigned. Now I spend as much time on system design, cross-team coordination and setting technical direction as I do writing code.


The biggest recent shift has been AI, both in what I build and how I build it. On the product side, I’ve gone deep on large language models, agent frameworks, model context protocol, vector databases, knowledge bases and evaluation pipelines for non-deterministic AI outputs. Two years ago, none of that was part of my daily vocabulary. 
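As an illustration of that last item, evaluating non-deterministic outputs usually means sampling the generator several times per prompt and scoring every sample against a reference. The sketch below is a generic stand-in, not Yahoo’s pipeline; the stub generator, the cheap lexical similarity metric, and the threshold are all invented for illustration (real pipelines would typically score with embeddings or an LLM judge).

```python
# Generic sketch of an evaluation check for non-deterministic LLM output.
# Hypothetical illustration; not any specific production pipeline.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity in [0, 1]; real pipelines often use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def evaluate(cases, generate, samples=3, threshold=0.7):
    """Sample each prompt several times; the worst sample must clear the bar."""
    results = []
    for prompt, reference in cases:
        scores = [similarity(generate(prompt), reference) for _ in range(samples)]
        results.append({
            "prompt": prompt,
            "mean_score": sum(scores) / len(scores),
            "passed": min(scores) >= threshold,
        })
    return results

if __name__ == "__main__":
    # Stub generator standing in for a real LLM call.
    fake_llm = lambda p: "Underdelivery is usually caused by narrow targeting."
    cases = [("Why is my campaign underdelivering?",
              "Underdelivery is usually caused by narrow targeting.")]
    for row in evaluate(cases, fake_llm):
        print(row)
```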

What are the hardest parts of your working day and how do you navigate them? 

Context-switching is perhaps the biggest challenge. As a squad lead and code-owner across many repositories, I might go from a deep debugging session to a design review to a cross-team Slack discussion about authentication architecture, all within a couple of hours. Each context requires full attention and the transitions are costly. I navigate that by being disciplined about time-blocking. I protect mornings for deep technical work wherever possible and cluster meetings in the afternoons. I also use structured notes and documentation heavily. If I’m interrupted mid-task, I want to be able to pick up exactly where I left off. 

AI tools actually help here, too. Being able to drop back into a Claude Code session and say ‘here’s where I was, here’s what I was trying to do’ and have it reconstruct the context is genuinely useful.

Working remotely from Ireland with teams distributed globally can present some challenges, such as working after-hours to interface with colleagues abroad. Managing this is definitely a skill in itself and can be difficult because it can feel very productive to work late when US colleagues are available, for example.


I try to make up for late hours worked so I can keep a good work-life balance, but it’s definitely challenging balancing the feeling of productivity with the reality of burnout.

Has your role changed as the sector has grown and evolved?

Significantly. When I started, engineering work in the DSP was largely about building features in reliable CRUD applications and optimising serving workflows. The biggest evolution since then has been the arrival of GenAI. In late 2024, I was part of a small working group exploring how GenAI could be applied to our product. Within months, that exploratory work became a full strategic initiative. I went from leading a commerce media team to founding and leading Yahoo DSP’s first GenAI squad.

The way we build software is changing too. A year ago, our team wrote code the traditional way. Now we’re using AI coding assistants daily. It’s still early days and there are real learning curves. We’ve found that the AI works brilliantly for well-defined, bounded tasks but struggles with cross-cutting concerns that span multiple services and large contexts. Knowing when to lean on it and when to step back and think for yourself is a skill in itself.

The organisational model has changed too. We’ve moved from traditional team structures to a ‘squads, tribes, chapters, and guilds’ model, which puts more emphasis on cross-functional collaboration and autonomous delivery. My role has shifted from primarily writing code to being a technical leader who sets direction, defines standards and enables other teams to build on the foundations we create together.

Is there anything you know now about working in software that you wish you knew starting out?

I wish I’d known more about work-life balance and recognising the signs of burnout. Early in my career, I had a tendency to just push through and it took me a while to realise that’s not sustainable. Now I’m much more intentional about managing my time, protecting my energy and knowing when to step away from a problem. You actually come back sharper when you do.

I also wish I’d challenged assumptions more. When you’re starting out, it’s natural to take requirements and existing designs at face value. But as I’ve become more senior and taken on more design work, I’ve found that some of the biggest improvements come from questioning why something is the way it is. The best solutions often come from pushing back on the premise, not just optimising within the constraints you’re given.

What advice do you have for others looking to follow in your professional footsteps?

Don’t be afraid to say ‘I don’t know’ and then go learn. This is crucial. The best engineers I’ve worked with aren’t the ones who know everything. They’re the ones who learn quickly and aren’t embarrassed to be a beginner at something new. Being a mentor also means being a mentee and I’ve learned a lot from the engineers I’ve mentored. A year ago, I had never built an AI agent. Now I lead a team that builds them.


Can You 3D Print A Pinball Machine That’s Fun To Play?


It seems fair to say that pinball machines are among the most universally loved gaming systems around, yet the full-sized ones are both very expensive and very large, while even the good quality table-sized ones tend to be on the expensive side. That raises the question of whether a fully 3D printed pinball machine could be fun at all, and not just feel like a cheap toy. A recent video by [Steven] of the [3D Printer Academy] YouTube channel makes a compelling argument that it might actually be worth considering.

In addition to being fully modular and customizable, the design’s most compelling element is probably its support for two- and four-player multiplayer. This sees the metal balls leave at the rear of one machine and enter the playing field of another player’s machine, which can probably get pretty chaotic.

Unfortunately this is part of a Kickstarter campaign, so you’ll have to either shell out some cash to get access to the print files or DIY your own version. We’d also be remiss not to address the durability concerns of a 100% plastic pinball machine like this, plus its lack of serious heft to compensate for more enthusiastic playing styles.

If you are more into traditional DIY pinball machines, we have covered these as well, along with small screen-based machines, and their miniature brethren for when space is really at a premium.


Musk Testifies OpenAI Was Created As Nonprofit To Counter Google


Elon Musk testified on day two of his trial against OpenAI, saying he helped create the company as a nonprofit counterweight to Google and would not have backed it if the goal had been private profit. CNBC reports: Musk on Tuesday was the first witness called to testify in the trial. He spoke about his upbringing, his many companies, his role in founding OpenAI and his understanding of its structure. Musk said in his testimony that he was not opposed to the creation of a small for-profit subsidiary, “as long as the tail didn’t wag the dog.” Musk said he was motivated to start OpenAI to serve as a counterweight to Google. He got the idea after an argument he had with Google co-founder Larry Page, who called Musk a “speciesist for being pro-human,” he testified. “I could have started it as a for profit and I chose not to,” Musk said on the stand.

Earlier, attorneys for Musk and OpenAI presented their opening arguments to the jury. Musk’s lead trial lawyer, Steven Molo, delivered the opening statement for the Tesla and SpaceX CEO. OpenAI lawyer William Savitt gave the opening statement for the AI company, Altman and Brockman. OpenAI has characterized Musk’s lawsuit as a baseless “harassment campaign.” The company said Monday in a post on X that it “can’t wait to make our case in court where both the truth and the law are on our side.”

During his testimony, Musk added that he was concerned Page was not taking AI safety seriously, which is why he wanted there to be a nonprofit, open source alternative to Google. Further reading: Elon Musk and OpenAI CEO Sam Altman Head To Court


Apple is finally building the AI photo editor that Google and Samsung have had for years


For years, Google’s Photos app has been doing things that Apple’s Photos app couldn’t, and the iPhone-maker has noticed. Bloomberg’s Mark Gurman, in his latest report, claims that iOS 27, iPadOS 27, and macOS 27 will come with a dedicated “Apple Intelligence Tools” section inside the Photos editing interface.

The “Apple Intelligence Tools” section will include three new AI-powered photo-editing features: Extend, Enhance, and Reframe. Before we get into what the features actually do, know that all of them will run entirely on-device and, in typical Apple fashion, complete their edits in seconds.

What will the new Apple Intelligence photo editing tools do?

Extend, as the name suggests, extends a picture’s boundaries by generating new imagery and seamlessly stitching it to the existing image. You should be able to use the feature to add surrounding scenery to close-up shots or add some negative space to either side of the subject.

Enhance, on the other hand, works as a one-tap enhancement button that immediately adjusts the color, lighting, and overall image quality, without making you dig through different editing options and fiddle with various sliders.

Reframe is designed primarily for spatial photos captured for the Vision Pro headset. It lets users shift the perspective of a 3D image after it’s already been taken, allowing you to move from a front-facing to a side-facing view. 


Is Apple actually ready to release all three features?

Not at the moment, no. Per Gurman, both the Extend and Reframe features are producing inconsistent results in internal testing. If the underlying AI models don’t adapt or the results don’t improve significantly before the September launch event, Apple might delay the features or scale them back.

While I’m a big fan of the Apple Photos app myself, it currently offers only one AI-based editing feature, Clean Up, and that doesn’t work as well as comparable features do on other smartphones like the Galaxy S or Pixel flagship series.

I remember when Google released its Magic Editor in 2023, and Samsung’s Galaxy AI followed quite aggressively in the years since. In response, the best Apple could come up with was Clean Up. In my opinion, Apple genuinely needs the Extend, Enhance, and Reframe features to work, and work in time for a showcase at WWDC 2026 and a public release in September.


Witcher 3 director's Blood of Dawnwalker launches September 3 with steep system requirements



Konrad Tomaszkiewicz, a writer and quest designer on CD Projekt Red’s first two Witcher games who went on to direct the award-winning Witcher 3, has spent the past several years building an original RPG at his new studio, Rebel Wolves. While the game resembles the director’s prior fantasy series in…

Tech Lobbyists Hard At Work Undermining Proposed Alaska ‘Right To Repair’ Law


from the fix-your-own-shit dept

There’s still a meaningful effort afoot to implement statewide “right to repair” laws that try to make it cheaper, easier, and environmentally friendlier for you to repair the technology you own. All fifty states have at least flirted with the idea, though only Massachusetts, New York, Texas, Minnesota, Colorado, California, Oregon, and Washington have actually passed laws.

Alaska could be up next. Two versions of a new right to repair law are winding their way through the Alaska state House and Senate. The bills would amend the Alaska Unfair Trade Practices and Consumer Protection Act, requiring tech hardware manufacturers to make parts, tools or software needed for repairs available to independent service providers and consumers.

As is always the case, the proposal has broad, bipartisan support among the actual public:

“In a lot of ways, this is a deeply conservative bill in the sense that for most of the 20th century, you could fix the stuff you bought, and the parts would be available, because it was another revenue stream for the businesses,” said Anchorage Democratic Sen. Forrest Dunbar, the sponsor of the Senate bill.

As is also always the case, hardware vendors from a variety of sectors (agricultural, medical, tech, consumer hardware) are lobbying against Alaska’s proposal, falsely claiming that easier, more affordable repairs constitute a privacy and security threat to the public.


TechNet (a lobbying coalition that includes Dell, Apple, Amazon, Google, Nvidia, and Verizon), for example, is trying to convince the Alaska state legislature that everything is working fine currently, and that fixing anything would make consumers less safe. Apparently because truly independent repair professionals are too incompetent if they don’t have big corporate oversight:

TechNet wrote that the bill would erode the current system where manufacturers work with authorized repair service providers, and that these agreements “ensure that technicians have the appropriate training, access to safe repair procedures, and the qualifications necessary to protect both the device and the consumer.”

TechNet is also trying to claim that the Alaska bill is “misaligned with language of right-to-repair bills from other states” such as New York. Granted they would say that, given that after New York passed its bill, tech lobbyists convinced NY Governor Kathy Hochul to water that state’s bill down to the point of uselessness.

The concern now is that lobbyists will manage to water the Alaska bill down so badly that it, too, ultimately becomes useless.

Something that’s broadly not mentioned in coverage of right to repair: while eight states have passed right to repair laws in recent years, not one of them has actually managed to actively enforce its law, despite no shortage of bad behavior by companies looking to secure repair monopolies. That’s something that needs to change if the movement is to have any serious impact.


Filed Under: alaska, bipartisan, hardware, independent, monopoly, right to repair, software


Why There’s Simply No Need For Smart TVs Anymore


We may receive a commission on purchases made from links.

With the rise of streaming services came smart TVs with built-in apps and features that make it easier than ever to start watching content on the big screen from the moment you take it out of the box. But let’s be honest, most smart TVs aren’t all they’re cracked up to be, and they really never have been. Many proprietary operating systems that come with your TV are both sluggish and confusing to navigate. They may drop support for popular apps over time, lack quality-of-life features, and turn the experience of using your smart TV into a constant headache.

Meanwhile, a few companies have created products that universalize the smart TV experience, adding features and app libraries inside of HDMI dongles that cost less than the sales tax you’ll pay on most TVs. That means you don’t even need to get a different television if you’re looking to escape the software experience on the one you own. It also means you can extend your TV’s longevity by years longer than you might think. Between all of those factors, the entire concept of a smart TV is becoming somewhat obsolete. Diving in a bit deeper, here’s why there’s hardly any need for smart TVs anymore.


Streaming boxes and dongles are cheap and capable

There’s one glaring reason to ignore your TV’s smart features: smart TV dongles and set-top boxes are cheap and often have a wider range of features. Even a “dumb” TV can instantly be made smart by plugging in something as cheap as an Amazon Fire TV Stick 4K or Roku Streaming Stick 4K for under $40, and even on the expensive side, a Google TV Streamer is just shy of $100. You can even flex your budget a drop more to $130 for an Apple TV 4K, which can integrate with other Apple devices. In essence, you’re plugging a brand-new operating system into your TV, and it will add just as much value to your setup, whether your display is a cheap computer monitor or a multi-thousand-dollar OLED with all the fixings.


All of the aforementioned products will immediately add features your TV may have lacked, such as voice control, the ability to pair Bluetooth headphones, and smart home integration. You can control Google Home through your Chromecast or control your Alexa ecosystem through your Fire TV Stick, and even Roku will integrate with both so that you can control media with voice commands by using a smart speaker.

Moreover, Fire OS, Roku OS, and Google TV (the respective operating systems used by those devices) have a robust app library that contains pretty much every streaming service you could ask for. No more hoping that the developers of that obscure service, which exclusively streams the movie you’re trying to watch, bothered to make an app for your Samsung Tizen TV. In the case of Google TV, you can even install many apps from the Play Store just like on an Android phone.


Many smart TV operating systems are simply awful

Not only do cheap streaming dongles make any TV into a modern smart TV, but you can configure most TVs to simply boot into the connected device when you turn them on, bypassing their built-in operating system entirely. That’s a good thing, not only because the major streaming operating systems offer features your TV likely lacks by default, but also because your TV’s own operating system may not be particularly great to use.

On the high end, TVs may have enough processing power and memory to run their OS fluidly, but budget TVs rarely do. That makes them frustratingly sluggish as you wait for apps and menus to load. Another common issue is that, due to their relatively small user bases, many developers abandon app development for proprietary smart TV operating systems. Even if your TV has great app support when you buy it, just a few years later, you might have lost some of your favorites. Among the apps that have dropped support for certain smart TVs in some capacity are YouTube, Disney+, and Netflix, all streamers that are staples of the modern media ecosystem.

Although smart capabilities are still essential for modern TVs, the real brains have moved off-device to streaming peripherals. That’s actually good news, since it means you can keep your TV up-to-date without needing to buy a brand new one.

