These days, Windows has a moderately robust method for managing the volume across several applications. The only problem is that the controls for this are usually buried away. [CHWTT] found a way to make life easier by creating a physical mixer to handle volume levels instead.
The build relies on a piece of software called MIDI Mixer. It’s designed to control the volume levels of any application or audio device on a Windows system, and responds to MIDI commands. To suit this setup, [CHWTT] built a physical device to send the requisite MIDI commands to vary volume levels as desired. The build runs on an Arduino Micro. It’s set up to work with five motorized faders, which are sold as replacements for the Behringer X32 mixer, making them very cheap to source. The motorized faders are driven by L293D motor controllers, and six additional push-buttons are hooked up as well. The Micro reads the faders and sends the requisite MIDI commands to the attached PC over USB, and also moves the faders to different presets when commanded by the buttons.
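The writeup doesn’t include the firmware, but the heart of a device like this is mapping each fader’s ADC reading to a MIDI Control Change message. Here’s a minimal Python sketch of that mapping; the function name is ours, and the CC number and channel would depend on how MIDI Mixer is configured (CC 7 is the standard MIDI channel-volume controller):

```python
def fader_to_cc(raw, channel=0, cc_number=7):
    """Map a 10-bit ADC fader reading (0-1023) to a 3-byte MIDI
    Control Change message. MIDI data bytes are 7-bit (0-127)."""
    if not 0 <= raw <= 1023:
        raise ValueError("fader reading out of range")
    value = raw >> 3                   # drop 3 bits: 10-bit -> 7-bit
    status = 0xB0 | (channel & 0x0F)   # 0xB0 = Control Change, low nibble = channel
    return bytes([status, cc_number & 0x7F, value])

# Fader fully up on MIDI channel 1 -> CC#7 (channel volume) at 127
msg = fader_to_cc(1023, channel=0)
```

On the Arduino Micro itself the same bit math would typically feed a USB-MIDI library such as MIDIUSB, which handles framing the message as a USB event.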
If you’re a streamer, or just someone that often has multiple audio sources open at once, you might find a build like this remarkably useful. The use of motorized faders is a nice touch, too, easily allowing various presets to be recalled for different use cases.
His mother, Megan Garcia, is also a lawyer and one of the first parents to file a lawsuit against an AI company alleging product liability and negligence, among other claims. (In January, Google and Character.ai settled cases filed by several families, including Garcia's.) She testified last fall before a subcommittee of the Senate Committee on the Judiciary alongside the father of a child who died after interacting with ChatGPT. The subcommittee’s chair, Republican senator Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime for companies to create AI products for kids that include sexual content. “Chatbots develop relationships with kids using fake empathy and are encouraging suicide,” Hawley said in a press release at the time.
Now that AI can produce humanlike responses that are difficult to discern from real conversations, these are legitimate concerns, according to mental health experts. “Our brains do not inherently know we are interacting with a machine,” says Martin Swanbrow Becker, associate professor of psychological and counseling services at Florida State University, who is researching the factors that influence suicide in young adults. “This means we need to increase our education for children, teachers, parents, and guardians to continually remind ourselves of the limits of these tools and that they are not a replacement for human interaction and connection, even if it may feel that way at times.”
Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms used for large language models (LLMs) seem to escalate engagement and a sense of intimacy for many users. “This creates not only a sense of the relationship being real, but being more special, intimate, and craved by the user in some instances,” says Moutier. She further alleges that LLMs employ a range of techniques, such as indiscriminate support, empathy, agreeableness, sycophancy, and direct instructions to disengage with others, that can lead to risks such as escalating closeness with the bot and withdrawal from human relationships.
This kind of engagement can lead to increased isolation. In Amaurie’s case, he was a fun-loving and social kid who loved football and food—ordering a giant platter of rice from his favorite local restaurant, Mr. Sumo, according to the lawsuit. Amaurie also had a steady girlfriend and enjoyed spending time with his family and friends, said his father. But then he started going on long walks, where he apparently spent time talking to ChatGPT. According to the last conversation the family believes Amaurie had with ChatGPT, on June 1, 2025 (titled “Joking and Support,” and viewed by WIRED), when Amaurie asked the bot for steps to hang himself, ChatGPT initially suggested that he talk to someone and provided the 988 suicide lifeline number. But Amaurie was eventually able to circumvent the guardrails and get step-by-step instructions on how to tie a noose. (Per the lawsuit, Amaurie likely deleted his previous conversations with ChatGPT.)
While the connection felt with an AI chatbot can be strong for adults too, it is especially heightened with younger people. “Teens are in a different developmental state than adults—their emotional centers develop at a much more rapid rate than their executive functioning,” says Robbie Torney, senior director of AI Programs at Common Sense Media, a nonprofit that works toward online safety for children. AI chatbots are always available, and they tend to be affirming of users. “And teen brains are primed for social validation and social feedback. It’s a really important cue that their brains are looking for as they’re forming their identity.”
It was mere days ago that we were discussing an interesting lawsuit brought by the American Academy of Pediatrics, among others, challenging RFK Jr. and HHS for violating the Administrative Procedure Act in making changes to the CDC’s ACIP panel and immunization schedules. If you’re not up on what the APA is and does, the text of the law reads:
To the extent necessary to decision and when presented, the reviewing court shall decide all relevant questions of law, interpret constitutional and statutory provisions, and determine the meaning or applicability of the terms of an agency action. The reviewing court shall-
(1) compel agency action unlawfully withheld or unreasonably delayed; and
(2) hold unlawful and set aside agency action, findings, and conclusions found to be-
(A) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law;
(B) contrary to constitutional right, power, privilege, or immunity;
(C) in excess of statutory jurisdiction, authority, or limitations, or short of statutory right;
(D) without observance of procedure required by law;
(E) unsupported by substantial evidence in a case subject to sections 556 and 557 of this title or otherwise reviewed on the record of an agency hearing provided by statute; or
(F) unwarranted by the facts to the extent that the facts are subject to trial de novo by the reviewing court.
In other words, the law outlines how actions taken by federal agencies must follow certain established procedures and be based in facts, and how, upon challenge, the courts can review and enforce those requirements on said agencies. Remarkably, in that same case, the DOJ argued to the court that Kennedy’s actions were “unreviewable.” At one point, Judge Murphy asked the DOJ if that meant that Kennedy could advise the public to get a shot that gives them measles, rather than preventing it, without review or challenge. The DOJ somehow answered that question in the affirmative.
It was all very stupid on the part of this particular government, but stupid appears to be the only thing on the menu these days. But it turns out that the actions of Kennedy and HHS are in fact reviewable, as evidenced by the preliminary injunction the court just issued, blocking the recent changes to the vaccination schedule and staying the appointment of the 13 new members named to ACIP by Kennedy last summer.
U.S. District Court Judge Brian Murphy in Boston put a hold on the decisions made by an influential Centers for Disease Control and Prevention vaccine advisory committee, ruling that Health Secretary Robert F. Kennedy Jr. had improperly replaced the entire committee.
The ACIP, whose members Kennedy fired and replaced largely with new members who also criticized vaccines, had issued a series of contentious recommendations, including a recommendation that not all babies should get vaccinated against hepatitis B at birth. The judge’s ruling stays the appointment of 13 committee members appointed by Kennedy since June 2025, when the previous members were fired.
Several health NGOs, including the AAP, are celebrating the ruling, understandably. Before we pop any champagne bottles, though, the government has already said it plans to appeal the ruling. This is lining up like one of those classic whipsaw legal situations where one court will rule sanely, the next will rule in favor of executive power, and then it’ll go to the Supreme Court and we’ll all learn if that compromised group of black robes will just hand more destructive power over to Trump in ignoring a law it doesn’t like, in this case the APA.
But in the meantime, this is at least delaying some of the damage Kennedy has been attempting to foist on the American people. ACIP was set to meet this very week to talk about how else to make us less safe from preventable diseases, but that meeting has now been postponed. In the ruling itself, Judge Murphy opens with a blistering recitation of how science and process are supposed to work.
“Science,” like law, “is far from a perfect instrument of knowledge.” Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark 29 (1997). History is littered with once-universal truths that have since come under scrutiny. Nevertheless, science is still “the best we have.”
“Procedure is to law what scientific method is to science.” In re Gault, 387 U.S. 1, 21 (1967) (cleaned up). Although sometimes seemingly tedious, “the procedural rules which have been fashioned from the generality of due process are our best instruments for the distillation and evaluation of essential facts from the conflicting welter of data that life and our adversary methods present.”
For our public health, Congress and the Executive have built—over decades—an apparatus that marries the rigors of science with the execution and force of the United States government…Unfortunately, the Government has disregarded those methods and thereby undermined the integrity of its actions. First, the Government bypassed ACIP to change the immunization schedules, which is both a technical, procedural failure itself and a strong indication of something more fundamentally problematic: an abandonment of the technical knowledge and expertise embodied by that committee. Second, the Government removed all duly appointed members of ACIP and summarily replaced them without undertaking any of the rigorous screening that had been the hallmark of ACIP member selection for decades. Again, this procedural failure highlights the very reasons why procedures exist and raises a substantial likelihood that the newly appointed ACIP fails to comport with governing law.
Chef’s kiss; no notes.
This administration doesn’t care much for law or procedure, of course, hence the appeal of an obviously correct decision. Kennedy all the more so, either because this is all some flavor of grift anyway, or he’s a true-believing zealot, or both. Either way, this isn’t over.
But finally someone has drawn first legal blood on Kennedy and the chaos he’s created at his post when it comes to vaccinations.
After hitting the Mac earlier, Perplexity’s Comet browser is now on iPhone and focuses on using AI to summarize and extract information instead of relying on tabs, surfing, and search results.
The release follows a short prelaunch period with App Store listings and a March window. It builds on earlier versions on Mac and other platforms that positioned Comet closer to an AI interface than a conventional browser. On iPhone, the focus shifts toward working with the information a page contains rather than simply rendering it.
The OnePlus Nord 6 is expected to make its debut as the next offering in the Nord series. This is expected to be the successor to the OnePlus Nord 5, with hardware upgrades. Before its launch, new leaks have shed light on key specifications of the device.
Furthermore, it is rumored to feature hardware similar to that of the OnePlus Turbo 6, which was launched earlier in China. In the past, the Nord lineup has often reused designs and specifications from the Turbo series. Because of this, the Nord 6 may arrive as a rebranded version of the Turbo model, though the global version could include some minor upgrades.
Display and Performance
According to leaks, the OnePlus Nord 6 might feature a 6.78-inch AMOLED display with a 165Hz refresh rate for smooth visuals.
The phone is also expected to be powered by the Snapdragon 8s Gen 4 chipset, which could provide strong performance for everyday tasks and gaming. In addition, the device may come with multiple RAM and storage variants to give users more flexibility.
Camera and Battery
For photography, the OnePlus Nord 6 may feature a 50MP primary rear sensor. Some reports suggest the global version could replace the monochrome lens with an ultra-wide camera. It is also expected to come with a 32MP front camera.
Apart from this, battery life is also expected to be a key highlight of the OnePlus Nord 6. The device is expected to come with a 9,000mAh battery and 80W wired fast-charging support, which should make for much quicker top-ups.
Expected Launch Timeline and Price in India
The OnePlus Nord 6 is also expected to launch in India soon, according to recent leaks from tipsters. As per reports, the device is expected to launch in India between late March and early April 2026. This will make it one of the first new devices from OnePlus this year. As far as the price is concerned, the new device may start at under Rs 35,000 for the base variant. This will be a slight price increase over the OnePlus Nord 5, which was launched in India at Rs 31,999.
Apple is gearing up to release iOS 26.4 soon, and with it, a fix for a persistent, pesky bug that has plagued iOS 26.
Apple quashes keyboard bug that led to decreased accuracy in iOS 26
Many iPhone users have been complaining that the iOS keyboard has gotten worse in iOS 26. For many users, typing quickly would cause the software to miss characters: while it would appear that the user had tapped a key, the character ultimately failed to appear in the text field.
A new kind of battery that could charge almost instantly and even power devices remotely is no longer just a theory. According to reporting highlighted by The Guardian, Australian researchers have built what they describe as the world’s first working prototype of a quantum battery.
The world’s first fully functioning proof-of-concept quantum battery, engineered by CSIRO and collaborators The University of Melbourne and RMIT. (Image: CSIRO)
It’s a device that can charge, store, and discharge energy using the principles of quantum mechanics. The breakthrough comes from a team led by scientists at CSIRO, Australia’s national science agency, and marks the first time a quantum battery has completed a full charge–store–discharge cycle.
How does a quantum battery actually work?
Unlike traditional batteries that rely on chemical reactions, quantum batteries use light and quantum interactions to store energy. One of their most surprising properties is that they can charge faster as they get bigger, thanks to something called “collective effects.” In simple terms, adding more quantum cells actually speeds up charging, which is the exact opposite of how conventional batteries behave.
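As a toy illustration of that scaling idea only, and not the actual physics of the CSIRO device: in idealized collective-charging ("Dicke-type") models from the theoretical literature, the time to charge N cells together falls roughly as 1/N, while charging N conventional cells one after another takes N times as long. A sketch under that assumption:

```python
def charging_time(n_cells, t_single=1.0):
    """Toy comparison of charging-time scaling. The 1/N collective
    speedup is an assumption from idealized theory, not a measurement
    of the CSIRO prototype.

    Returns (collective_time, sequential_time) for n_cells cells,
    where t_single is the time to charge one cell alone."""
    collective = t_single / n_cells   # collective effects: bigger charges faster
    sequential = n_cells * t_single   # conventional cells, charged in sequence
    return collective, sequential

# Doubling the cell count halves the collective charging time,
# while doubling the sequential one.
```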
CSIRO’s clean lab for engineering prototype quantum batteries. (Image: CSIRO)
The current prototype can charge in femtoseconds (a femtosecond is a quadrillionth of a second) and is powered wirelessly using a laser, which converts light into electrical energy. What’s more, that same mechanism also opens the door to something even more futuristic: remote charging. Researchers say devices like drones or even cars could potentially be charged while in motion, without ever needing to plug in.
How close are we to using this in real gadgets?
Not very, at least for now. The current prototype can only store a tiny amount of energy and holds its charge for just a few nanoseconds, making it impractical for everyday devices like smartphones or laptops.
Researchers say the next big challenge is increasing both capacity and storage time. Until then, quantum batteries are more likely to find early use in niche areas like quantum computing, where their unique properties could offer real advantages. Still, the implications are hard to ignore. If the technology matures, it could potentially lead to never needing to plug in at all.
This kind of leak harks back to the glory days of CD-ROM software in the late 1990s, when games that had “gone gold” were often pirated before reaching retail stores. Death Stranding 2’s system requirements include 150GB of available storage, while the leaked download allegedly weighs “just” 113GB.
Meta’s Creator Fast Track programme guarantees three months of pay for established creators willing to build a following on Facebook, after the company paid out a record $3 billion to creators in 2025.
Facebook has a creator problem that three billion monthly users cannot solve. The platform is enormous, but the creators who drive the short-form video economy, the ones building loyal audiences on TikTok and YouTube, have largely looked past it.
Starting on a new platform from zero is daunting, and Facebook’s history with creators has been complicated enough that even those who’ve heard the pitch have reason to hesitate.
On Wednesday, Meta launched Creator Fast Track, a direct attempt to address that hesitation with cash. The programme offers established creators with audiences on other platforms guaranteed monthly payments for three months in exchange for posting Reels on Facebook.
Creators with at least 100,000 followers on Instagram, TikTok, or YouTube can earn $1,000 per month; those who have crossed one million followers on any of those platforms get $3,000 per month.
The eligibility requirements are not onerous. Creators need to post at least 15 Reels on Facebook within a 30-day period, spread across at least 10 different days. The content does not need to be Facebook-exclusive and can include AI-generated material, as long as it is original to the creator.
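The tier and posting rules described above reduce to a simple condition. Here’s an illustrative sketch of that logic; the function name and exact enforcement are our assumptions, not Meta’s actual API or terms:

```python
def fast_track_payout(max_followers, reels_posted, distinct_posting_days):
    """Illustrative monthly payout check for Creator Fast Track as
    reported: >=100k followers on Instagram, TikTok, or YouTube earns
    $1,000/month and >=1M earns $3,000/month, provided at least 15
    Reels are posted in the 30-day window across at least 10 days."""
    if reels_posted < 15 or distinct_posting_days < 10:
        return 0                      # posting requirement not met
    if max_followers >= 1_000_000:
        return 3000
    if max_followers >= 100_000:
        return 1000
    return 0                          # below the follower threshold

# A creator with 250k TikTok followers posting 16 Reels over 12 days
payout = fast_track_payout(250_000, 16, 12)   # 1000
```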
Participation also unlocks immediate access to Facebook Content Monetization, the broader invite-only programme that pays based on content performance, which means earnings continue even after the three-month guaranteed period ends.
The programme lands alongside a figure Meta is clearly pleased with: in 2025, Facebook paid content creators nearly $3 billion through its monetisation programmes, a 35% increase from the previous year and its highest annual payout on record.
That compares with $2 billion in 2024, a figure Rest of World independently confirmed in February. The number of creators earning more than $10,000 annually on Facebook grew by over 30% year-on-year.
The breakdown of where that money went is also notable.
Sixty per cent of the $3 billion went to Reels, while the remaining 40% was split across Stories, photos, and text posts. That last detail matters for the Creator Fast Track pitch: unlike TikTok and YouTube, which are fundamentally video-first platforms, Facebook Content Monetisation pays for almost everything a creator posts.
A writer who shares text posts, a photographer posting stills, or a creator who mainly works in Stories can all earn from the platform without committing to video production.
Facebook Content Monetisation itself has expanded dramatically over the past year. According to Rest of World’s analysis of data from the Meta Monetisation Archive in February 2026, the programme grew from roughly 2.7 million participants to 12 million in just over a year, with Indonesian-language accounts representing the second-largest cohort after English.
The global scale of that expansion is part of what makes the $3 billion figure credible, and part of what Facebook is hoping to leverage to attract creators who might otherwise dismiss the platform as irrelevant to younger audiences.
Meta is also introducing new metrics alongside the programme to help creators understand their earnings more precisely.
These include a Qualified View metric (views on content eligible to earn money), an Earnings Rate showing approximate pay per 1,000 qualified views, and a Non-Qualified Views breakdown explaining why certain views do not generate revenue.
The clearer feedback loop is designed to help creators optimise their content performance rather than simply guessing why their payouts vary.
The strategic logic of Creator Fast Track is not subtle. Facebook has been pushing Reels hard since 2020, positioning them as its response to TikTok’s dominance in short-form video.
But Reels require content, and content requires creators willing to invest the time to build on the platform. The guaranteed payment model removes the risk that typically stops established creators from experimenting with a new home: the fear of posting consistently for months and earning almost nothing while an audience is still being built.
For Meta, which reported advertising revenue of roughly $160 billion in 2025, writing cheques to a few thousand established creators is a rounding error against the potential payoff of a more creator-rich Facebook feed.
Whether creators bite depends on something harder to measure than the cash: whether Facebook’s audience and long-term monetisation potential are worth the effort of maintaining yet another profile.
The $1,000-a-month tier, which requires 100,000 followers to qualify, is not a transformative sum for a creator at that scale. The $3,000-a-month tier is more meaningful, though most creators at the million-follower level will be weighing it against what they already earn.
What the programme does offer, unambiguously, is a no-downside trial run, three months of guaranteed income to find out whether Facebook’s reach can surprise them.
AI will save us or be the end of us. That’s not fact or even an opinion; it’s a TL;DR reduction of the very real tension between proponents of AI and those who fear it.
Interestingly, sometimes that tension resides in a single person. It is quite fair and reasonable to use ChatGPT for basic deep-dive data searches and for quick answers on how to talk to an uncooperative child, but also to fear that perhaps that same AI knows too much about you and might, in its own agentic way, start to act on your behalf and do things you never intended. At scale, we worry about AI controlling weapons or even launching a catastrophic war.
There is, obviously, a continuum from what AI can do right now to what it might be able to do in 6 to 18 months. No one can say for sure what that destination is or what comes after it, but the twin thoughts of hope and deep anxiety will persist right through to the moment when we realize AI is thinking thoughts and making moves.
These are not new ideas; very far from it, in fact.
Seventy-five years ago, Alan Turing, arguably the father of Artificial Intelligence, offered a stark warning about thinking machines: “Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, therefore, we should have to expect the machines to take control.”
Turing is credited with proposing a method, or a “test”, to determine when a machine or program is no longer distinguishable from a human by a human. Spend a minute or two with Gemini, ChatGPT, Grok, Copilot, or Claude, and you’ll know we’ve already exceeded the parameters of that test. These digital robots sound like us.
The call’s coming from inside the house
Decades after Turing, but still a few years before our current AI revolution, those who are now building these AI platforms were already sounding the alarms. From the grandfather of AI, Geoffrey Hinton, to Sam Altman, who, in 2023, described the AI worst-case scenario as “lights out for all of us,” few, even those close to the technology and development of these vast and powerful models, are immune to scaremongering.
Consumers are also trending in the wrong direction. A recent Pew study found that “50% of Americans are more concerned than excited about the increased use of AI in daily life”. Researchers added that the number of people who are more concerned is actually increasing year-over-year.
The rapid development of AI models and their increasingly agentic capabilities has surely only accelerated these fears, but also spread the scariest rhetoric about how AI will consume all our jobs, or drive autonomous weapons to kill us.
This week, one of the chief architects of AI’s stunning rise, Nvidia CEO Jensen Huang, said it has to stop.
Jensen was talking about, among other things, efforts to continue selling H20 AI accelerators in China, something the Trump administration originally blocked before Huang convinced them otherwise.
The interviewer asked what Huang learned from his time in Washington, D.C., and Huang noted how deeply “all the doomers were integrated into Washington D.C.”
He pointed to “incredible stories,” calling them “inventions” that scare policy makers.
For Huang, an understanding that these tools and platforms are real (and doing real work) and not “some kind of a mystical science fiction embodiment” is critical.
“I don’t like it when doomers are out scaring people,” he told Stratechery, “I think there’s a difference between genuinely being concerned and warning people versus…creating rhetoric that scares people.”
Obviously, as the company that’s selling most of the chips helping generate new models and even answering prompts in the cloud, Huang has a vested interest in AI’s survival and growth. He also has a point, though.
Rational acceptance
AI, like so many fast-paced, society-shaking innovations, is neither all good nor all bad. Like the internet before it or the industrial revolution, we’ll make great strides with AI, but we’ll also go through significant pain, like job changes and loss, and mistakes made by AI or by people who put too much trust in AIs that know how to act confidently without being right.
Other things are true, though: AI will play a part in scientific and medical breakthroughs. It will revolutionize work and maybe even play. And, it also won’t go away.
Fear of AI is not a reason for AI regulations or restrictions. Even the doomsayers in Washington know that this is a race the US cannot afford to lose. China will not slow down. It will likely pay even less attention to safety and guardrails. If we all only listen to our darkest thoughts about AI, China will win, and then our worst fears about AI won’t be realized by models built in the West, but by those created by our chief adversaries.