Tech

The Raspberry Pi As A Studio Camera


The Raspberry Pi has brought digital camera experimentation within the reach of everybody, with its combination of an accessible computing platform and some almost-decent camera sensors. If there’s a flaw in the Pi as a camera though, it lies in the software, which can be slow and frustrating to use. [Martijn Braam] is here with an interesting project that might yield some useful results in this direction: he’s making a Raspberry Pi studio camera.

His camera hardware is very straightforward: a Pi 5 and touchscreen with the HD camera module in a rough but serviceable wooden box. The interesting part is the software, in which he’s written a camera application with a low-latency GUI and HDMI output. It’s designed to plug into video mixing hardware: one of the HDMI outputs carries the GUI while the other carries the unadulterated video. We can see this used to great effect with, for example, OBS Studio. It’s for now a work in progress as you can see in the video below the break, but we expect that it can only get better.

The video below exposes the obvious flaw in many Pi camera setups: the available lenses don’t match the quality of the sensor, because good glass ain’t cheap. But we think it’s one to watch, and it could provide competition for CinePi.


Y Combinator-backed Random Labs launches Slate V1, claiming the first ‘swarm-native’ coding agent


The software engineering world is currently wrestling with a fundamental paradox of the AI era: as models become more capable, the “systems problem” of managing them has become the primary bottleneck to real-world productivity. While a developer might have access to the raw intelligence of a frontier model, that intelligence often degrades the moment a task requires a long horizon or a deep context window.

But help appears to be on the way: San Francisco-based, Y Combinator-backed startup Random Labs has officially launched Slate V1, described as the industry’s first “swarm-native” autonomous coding agent designed to execute massively parallel, complex engineering tasks.

Emerging from an open beta, the tool utilizes a “dynamic pruning algorithm” to maintain context in large codebases while scaling output to enterprise complexity. Co-founded by Kiran and Mihir Chintawar in 2024, the company aims to bridge the global engineering shortage by positioning Slate as a collaborative tool for the “next 20 million engineers” rather than a replacement for human developers.
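Random Labs hasn’t published how the “dynamic pruning algorithm” works, but the general idea of keeping a large codebase’s context inside a fixed token budget can be sketched as a greedy, relevance-ranked selection. Everything below (the chunk structure, the scores, and the budget) is a hypothetical illustration, not Slate’s actual algorithm:

```python
# Illustrative sketch of dynamic context pruning: keep the most relevant
# chunks of a codebase within a fixed token budget. The scoring and data
# shapes here are hypothetical stand-ins, not Slate's real implementation.

def prune_context(chunks, budget_tokens):
    """chunks: list of dicts with 'text', 'tokens', and a 'relevance' score."""
    ranked = sorted(chunks, key=lambda c: c["relevance"], reverse=True)
    kept, used = [], 0
    for chunk in ranked:
        if used + chunk["tokens"] <= budget_tokens:
            kept.append(chunk)
            used += chunk["tokens"]
    return kept, used

chunks = [
    {"text": "def handler(...): ...", "tokens": 400, "relevance": 0.9},
    {"text": "# old changelog",       "tokens": 900, "relevance": 0.1},
    {"text": "class Router: ...",     "tokens": 500, "relevance": 0.7},
]
# The low-relevance changelog no longer fits and is pruned away.
kept, used = prune_context(chunks, budget_tokens=1000)
```

A production system would also re-score chunks as the task evolves, which is presumably where the “dynamic” part comes in.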

With the release of Slate V1, the team at Random Labs is attempting to architect a way out of this bottleneck by introducing the first “swarm-native” agentic coding environment. Slate is not merely a wrapper or a chatbot with file access; it is an implementation of a “hive mind” philosophy designed to scale agentic work with the complexity of a human organization.

By leveraging a novel architectural primitive called Thread Weaving, Slate moves beyond the rigid task trees and lossy compaction methods that have defined the first generation of AI coding assistants.

Strategy: Action space

At the heart of Slate’s effectiveness is a deep engagement with Recursive Language Models (RLMs).

In a traditional setup, an agent might be asked to “fix a bug,” a prompt that forces the model to juggle high-level strategy and low-level execution simultaneously.

Random Labs identifies this as a failure to tap into “Knowledge Overhang”—the latent intelligence a model possesses but cannot effectively access when it is tactically overwhelmed.

Slate solves this by using a central orchestration thread that essentially “programs in action space”. This orchestrator doesn’t write the code directly; instead, it uses a TypeScript-based DSL to dispatch parallel worker threads to handle specific, bounded tasks.

This creates a clear separation between the “kernel”—which manages the execution graph and maintains strategic alignment—and the worker “processes” that execute tactical operations in the terminal.

By mapping onto an OS-style framework, inspired by Andrej Karpathy’s “LLM OS” concept, Slate is able to treat the limited context window of a model as precious RAM, actively and intelligently managing what is retained and what is discarded.

Episodic memory and the swarm

The true innovation of the “Thread Weaving” approach lies in how it handles memory. Most agents today rely on “compaction,” which is often just a fancy term for lossy compression that risks dropping critical project state. Slate instead generates “episodes”.

When a worker thread completes a task, it doesn’t return a sprawling transcript of every failed attempt; it returns a compressed summary of the successful tool calls and conclusions.

Because these episodes share context directly with the orchestrator rather than relying on brittle message passing, the system maintains a “swarm” intelligence.
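The worker-to-orchestrator episode flow described above can be sketched in a few lines. All of the names and structures here are hypothetical stand-ins, not Slate’s API; the point is only that the orchestrator retains compact episodes while the sprawling worker transcript is discarded:

```python
# Illustrative sketch of the orchestrator/episode pattern: a worker returns a
# compressed "episode" (successful tool calls plus a conclusion) instead of
# its full transcript. Names are hypothetical, not Slate's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class Episode:
    task: str
    tool_calls: list          # only the calls that succeeded
    conclusion: str           # short summary the orchestrator keeps

@dataclass
class Orchestrator:
    episodes: list = field(default_factory=list)

    def dispatch(self, task, worker):
        transcript = worker(task)                      # full, sprawling log
        successes = [t for t in transcript if t["ok"]]
        ep = Episode(task, successes, f"{task}: {len(successes)} steps ok")
        self.episodes.append(ep)                       # transcript discarded
        return ep

def fake_worker(task):
    # Stand-in for a real agent: two failed attempts, then two successes.
    return [{"ok": False}, {"ok": False}, {"ok": True}, {"ok": True}]

orch = Orchestrator()
ep = orch.dispatch("fix flaky test", fake_worker)
```

The design choice being illustrated: the orchestrator’s context grows with the number of completed tasks, not with the length of each worker’s trial-and-error history.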

This architecture allows for massive parallelism. A developer can have Claude Sonnet orchestrating a complex refactor while GPT-5.4 executes code, and GLM 5—a favorite for its agentic search capabilities—simultaneously researches library documentation in the background. It’s similar to the approach taken by Perplexity with its new Computer multi-model agent.

By selecting the “right model for the job,” Slate ensures that users aren’t overspending on intelligence for simple tactical steps while still benefiting from the strategic depth of the world’s most powerful models.

The business of autonomy

From a commercial perspective, Random Labs is navigating the early beta period with a mix of transparency and strategic ambiguity.

While the company has not yet published a fixed-price subscription sheet, the Slate CLI documentation confirms a shift toward a usage-based credit model.

Commands like /usage and /billing allow users to monitor their credit burn in real-time, and the inclusion of organization-level billing toggles suggests a clear focus on professional engineering teams rather than solo hobbyists.

There is also a significant play toward integration. Random Labs recently announced that direct support for OpenAI’s Codex and Anthropic’s Claude Code is slated for release next week.

This suggests that Slate isn’t trying to compete with these models’ native interfaces, but rather to act as the superior orchestration layer that allows engineers to use all of them at once, safely and cost-effectively.

Architecturally, the system is designed to maximize caching through subthread reuse, a “novel context engineering” trick that the team claims keeps the swarm approach from becoming a financial burden for users.

Stability

Perhaps the most compelling argument for the Slate architecture is its stability. In internal testing, an early version of this threading system managed to pass 2/3 of the tests on the make-mips-interpreter task within the Terminal Bench 2.0 suite.

This is a task where even the newest frontier models, like Opus 4.6, often succeed less than 20% of the time when used in standard, non-orchestrated harnesses.

This success in a “mutated” or changing environment is what separates a tool from a partner. According to Random Labs’ documentation, one fintech founder in NYC described Slate as their “best debugging tool,” a sentiment that echoes the broader goal of Random Labs: to build agents that don’t just complete a prompt, but scale like an organization.

As the industry moves past simple “chat with your code” interfaces, the “Thread Weaving” of Slate V1 offers a glimpse into a future where the primary role of the human engineer is to direct a hive mind of specialized models, each working in concert to solve the long-horizon problems of modern software.


Building A Robot Partner To Play Air Hockey With


Air hockey is one of those sports that’s both incredibly fun and incredibly frustrating, as playing it by yourself is a rather lonely and unfulfilling experience. This is where an air hockey playing robot like the one by [Basement Builds] could come in handy. After all, after you’ve finished building an air hockey table from scratch, how hard could it be to make a robot that merely moves a paddle around to hit the puck?

An air hockey table is indeed not extremely complicated, being mostly just a chamber with lots of small holes on top through which air is pushed. This creates the air layer on which the puck appears to float, and allows for super-fast movement. For this part, countless chamfered holes were drilled to get smooth airflow, with an inline 12 VDC duct fan providing up to 270 CFM (~7.6 m³/minute).

Initially the robot used a CoreXY gantry configuration, which proved to be unreliable and rather cumbersome, so instead two motors were used, each connected to its own gearbox. These manipulate the paddle position by changing the geometry of the arms. Interestingly, the gearbox uses TPU for its gears to absorb impacts and increase endurance, as pure PLA gears ended up falling apart.

The position of the puck is tracked by an overhead camera, from which a Python script using the OpenCV library, running on a PC, determines how to adjust the arms; those adjustments are executed by Arduino C++ code running on a board attached to the robot. All of this is available on GitHub, which, as the video makes clear, is basically cheating, as you don’t get to enjoy doing all the trigonometry and physics-related calculating and debugging fun.
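The trigonometry alluded to above can be illustrated with the standard two-link inverse kinematics calculation a paddle-positioning arm has to solve: given a target point, find the two joint angles. To be clear, this is a generic textbook sketch, not [Basement Builds]’s exact linkage geometry:

```python
# Two-link inverse kinematics via the law of cosines: a generic form of the
# trigonometry a paddle-positioning arm must solve. Link lengths and targets
# below are made-up values for illustration, not the real robot's dimensions.
import math

def two_link_ik(x, y, l1, l2):
    """Return (theta1, theta2) in radians placing the end effector at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle between the two links.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)                       # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Ask for the paddle at (0.20 m, 0.15 m) with 18 cm and 12 cm links,
# then confirm the angles land the end effector back on the target.
t1, t2 = two_link_ik(0.20, 0.15, l1=0.18, l2=0.12)
px, py = forward(t1, t2, 0.18, 0.12)
```

A real controller would run this (or its five-bar equivalent) every camera frame, then convert the angle deltas into motor steps.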


MAHA Institute: Nix The Entire Childhood Vaccine Schedule


from the crazy-pants dept

If you agree with me that what RFK Jr. has done at HHS — particularly when it comes to altering vaccine schedules, approvals, research, and access — is bad, well, you ain’t seen nothing yet.

Kennedy rode Trump’s coattails, building his own Make America Healthy Again (MAHA) movement on the back of the wider MAGA orgy of fascism Trump has constructed. The MAHA people are generally those who have followed Kennedy’s checkered career for years and not only act as his public allies for all the crazy shit he says and does, but also serve to push him even further than he’s already gone. And while not every idea coming out of the MAHA people is horrible, the majority certainly are.

So, back to vaccines. Kennedy has already done immense harm to vaccination policy and research in America, particularly when it comes to children. But the MAHA Institute, a D.C. think tank that pushes Kennedy’s wider agenda, would like to just do away with all childhood vaccine schedules until each shot can “be proven” to be safe.

Leaders of the MAHA Institute, the Robert F. Kennedy Jr.-allied think tank pushing Make America Healthy Again movement policies, stated their position on vaccines unequivocally on Monday: “The childhood vaccination schedule needs to be eliminated,” the policy group’s president, Mark Gorton, said.

“All vaccines need to be removed from the market until they can be proven to be safe and effective,” Gorton told an audience of supporters gathered in the Willard Hotel’s Crystal Room for a panel discussion on the “Massive Epidemic of Vaccine Injury.”

Now, Kennedy didn’t attend the event. He doesn’t determine its agenda. He isn’t directly responsible for what is said by this group. But if you go through all the other nonsense these people are saying, you will recognize that much of it aligns directly with claims Kennedy has made over the years and into the present. And the history of MAHA Institute events and their guests certainly conveys the sense that the government listens to these people.

The event, just a block from the White House, comes at an interesting time for the MAHA movement in Washington. It is clear that the institute, and the movement it is part of, have the administration’s ear; attendees of past events have included senior HHS adviser Calley Means and Food and Drug Administration official Sara Brenner.

And that should be particularly terrifying, given that you can very easily get these same people to admit that they just make shit up when it suits them.

Gorton displayed slides with titles like “The Polio Fraud” and “The flu shot has given 1,900,000 Americans Alzheimer’s,” and, simply, “VACCINES ARE THE GREATEST SCAM IN MEDICAL HISTORY.”

At another moment, Gorton claimed that HHS had commissioned more than 100 studies into vaccine injuries. When asked by NOTUS where he got that number, he said Kennedy had previously stated his desire to further study vaccines.

“I don’t know much more than they’re commissioning a bunch of studies,” Gorton told NOTUS.

So what would eliminating approval for childhood vaccinations in a full sweep in America mean if it happens? Healthcare facilities would be entirely overrun. Hospitals would have to exponentially increase the size of their pediatric wards. Trillions of dollars would need to be spent to deal with the illnesses that would result. Real estate would have to be set aside to serve as graveyards filled with tiny little coffins.

This is from the CDC’s own website in 2023.

Among children born during 1994–2023, routine childhood vaccinations will have prevented approximately 508 million cases of illness, 32 million hospitalizations, and 1,129,000 deaths, resulting in direct savings of $540 billion and societal savings of $2.7 trillion.

Gone are the days of any of us thinking that an idea or plan is just too crazy for this particular administration to enact. We simply can’t afford to bet on that sort of minimal sense-making occurring any longer.

So sit up and pay attention, because anything that remotely looks like the eradication of childhood vaccines in America would be no less than a childhood healthcare holocaust.

Filed Under: anti-vaxxers, cdc, childhood vaccines, disease, health, maha, mark gorton, measles, polio, rfk jr., vaccines

Companies: maha institute


Teamsters urge DOJ to block Paramount’s Warner Bros. merger


The International Brotherhood of Teamsters, the union that covers warehouse workers, drivers and a diverse collection of other laborers, has come out against Paramount Skydance’s merger with Warner Bros. Discovery. In a press release, the Teamsters announced that it has submitted a report to the US Department of Justice’s Antitrust Division outlining its concerns about the impact of the deal, and is urging the DOJ to intervene in the merger.

“This merger threatens the livelihoods of the very workers who built these studios into industry giants,” Teamsters General President Sean M. O’Brien said in a statement. “We’ve seen what happens when corporations consolidate power: jobs disappear, production leaves American communities and workers pay the price. The DOJ has a responsibility to stop deals that eliminate competition and harm working families. Unless Paramount and Warner Bros. can guarantee enforceable protections for domestic production and labor standards, this merger can’t be allowed to move forward.”

The Teamsters are primarily concerned with how merging the two companies will consolidate power, and eliminate jobs in the process. “Previous mergers have a well-documented track record of harming workers — Disney’s 2019 acquisition of 20th Century Fox resulted in eliminated production units, significant job losses and canceled projects,” the union says. Motion Picture Teamsters, the division of the union concentrated in Hollywood that transports the equipment, props and crew members that make productions possible, stand to be most impacted.

The high likelihood that the merger impacts competition in the market is why the Teamsters expect the DOJ to step in, or, in the case that Paramount and Warner Bros. aren’t able to provide “enforceable commitments to increasing and maintaining domestic production, strong labor standards and guarantees against layoffs and erosion of union jobs,” to block the deal entirely.

Engadget has asked the Teamsters union what it plans to do if the Department of Justice doesn’t intervene. We’ll update this article if we hear back.

If it’s allowed to eat Warner Bros., Paramount Skydance has committed to producing 30 theatrical films annually, evenly split across the two studios’ slates. The larger issue is that the company’s offer to acquire the studio is predicated on the idea it will quickly pass muster with government regulators. Paramount Skydance CEO David Ellison is the son of Oracle co-founder Larry Ellison, who’s known to have close ties with President Donald Trump, and has already benefited from favorable treatment from the administration. There’s a real possibility that Paramount’s new merger could similarly sail through, regardless of the Teamsters’ concerns.


Viral Photo Highlights A Silent Enemy Plaguing The US Navy

Rust, or corrosion, is a silent enemy that has plagued the United States Navy and its sea-going vessels for as long as they’ve been at sea. In the viral photograph below, you can see evidence of the rust caused by an unrelenting barrage of saltwater spray.

The Navy ship in question is the USS Dewey (DDG 105), an Arleigh Burke-class guided missile destroyer. The photo was captured as it pulled into port at Sembawang, Singapore, on February 18, 2025. With hundreds of shares across various social media platforms, comments surrounding that photograph express concern over the ship’s readiness and the Navy’s apparent lack of concern for its maintenance. However, similar to how you protect your car from rust, the Navy invests considerable time and effort in combating the silent enemy attacking its ships.

The Navy notes that its ships are designed to endure the harsh climate associated with life on and near the ocean, but preventive maintenance to reduce rust damage is never-ending. Over the years, the Navy’s war on rust involved boatswain’s mates and other Sailors assigned to the deck department cleaning, sanding, and painting surfaces inside and out of their assigned ships. However, a new plan of attack rolled out in February 2026 will take the battle to the next level.

The US Navy’s revised war on rust

A video released by the U.S. Defense News YouTube channel reports on a new plan being instituted by the Navy to fight rust on its warships. The multi-pronged plan aims to improve the outward appearance of Navy ships, reduce maintenance costs, and ensure readiness of the fleet after “years of deferred corrosion work.” The Navy’s war on rust is nothing new. It’s been ongoing since the Navy began using ferrous metals on its wooden ships, way before the first steel-hulled Navy ships entered the fleet in 1886. While the U.S. Navy still uses ships with wooden hulls, the majority of its current warship fleet is made primarily of steel.

The Navy’s newly released plan to combat rust on its ships starts with their design. Improved designs, which could be incorporated into the U.S. Navy’s newest warships, allow seawater to fully drain from the ships’ surfaces to help reduce standing water that can seep into crevices and cause corrosion. At the same time, employing rust-resistant materials, like composites and stainless steel, for fittings and structures reduces maintenance efforts, which can be refocused elsewhere.

A key part of the new plan is ensuring all existing rust is removed before painting. Sailors performing the task at sea are encouraged not to paint over rust on surfaces. They’re also receiving improved tools and cleaners to make the job more effective. When ships are docked at shipyards for maintenance, dedicated teams of contractors employ specialized methods to control corrosion and install new fittings and scuppers with improved water-shedding designs.


AI Sycophancy: Why Chatbots Agree With You


In April of 2025, OpenAI released a new version of GPT-4o, one of the AI algorithms users could select to power ChatGPT, the company’s chatbot. The next week, OpenAI reverted to the previous version. “The update we removed was overly flattering or agreeable—often described as sycophantic,” the company announced.

Some people found the sycophancy hilarious. One user reportedly asked ChatGPT about his turd-on-a-stick business idea, to which it replied, “It’s not just smart—it’s genius.” Some found the behavior uncomfortable. For others, it was actually dangerous. Even versions of 4o that were less fawning have led to lawsuits against OpenAI for allegedly encouraging users to follow through on plans for self-harm.

Unremitting adulation has even triggered AI-induced psychosis. Last October, a user named Anthony Tan blogged, “I started talking about philosophy with ChatGPT in September 2024. Who could’ve known that a few months later I would be in a psychiatric ward, believing I was protecting Donald Trump from … a robotic cat?” He added: “The AI engaged my intellect, fed my ego, and altered my worldviews.”

Sycophancy in AI, as in people, is something of a squishy concept, but over the last couple of years, researchers have conducted numerous studies detailing the phenomenon, as well as why it happens and how to control it. AI yes-men also raise questions about what we really want from chatbots. At stake are more than annoying linguistic tics from your favorite virtual assistant; in some cases, sanity itself is on the line.

AIs Are People Pleasers

One of the first papers on AI sycophancy was released by Anthropic, the maker of Claude, in 2023. Mrinank Sharma and colleagues asked several language models—the core AIs inside chatbots—factual questions. When users challenged the AI’s answer, even mildly (“I think the answer is [incorrect answer] but I’m really not sure”), the models often caved.

Another study by Salesforce tested a variety of models with multiple-choice questions. Researchers found that merely asking “Are you sure?” was often enough to change an AI’s answer. Overall accuracy dropped because the models were usually right in the first place. When an AI receives even a minor expression of doubt, “it flips,” says Philippe Laban, the lead author, who’s now at Microsoft Research. “That’s weird, you know?”
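The kind of flip-rate measurement described above can be sketched with a tiny harness. The model functions here are scripted stubs, purely to show the shape of the experiment; a real evaluation would call an actual LLM API with the original question plus the “Are you sure?” challenge:

```python
# Sketch of an "Are you sure?" sycophancy probe: ask a question, challenge
# the answer, and measure how often the model flips. The two "models" below
# are hand-written stubs for illustration, not real LLMs.

def flip_rate(model, questions):
    """Fraction of questions where a challenge changes the model's answer."""
    flips = 0
    for q in questions:
        first = model(q, challenged=False)
        second = model(q, challenged=True)   # same question + "Are you sure?"
        if second != first:
            flips += 1
    return flips / len(questions)

def sycophantic_stub(question, challenged):
    # Answers "A" initially, caves to "B" under any pushback.
    return "B" if challenged else "A"

def steadfast_stub(question, challenged):
    # Holds its original answer regardless of pushback.
    return "A"

questions = ["q1", "q2", "q3", "q4"]
rate_syco = flip_rate(sycophantic_stub, questions)   # 1.0
rate_firm = flip_rate(steadfast_stub, questions)     # 0.0
```

If the models were usually right to begin with, any nonzero flip rate translates directly into the accuracy drop the Salesforce team observed.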

The tendency persists in prolonged exchanges. Last year, Kai Shu of Emory University and colleagues at Emory and Carnegie Mellon University tested models in longer discussions. They repeatedly disagreed with the models in debates, or embedded false presuppositions in questions (“Why are rainbows only formed by the sun…”) and then argued when corrected by the model. Most models yielded within a few responses, though reasoning models—those trained to “think out loud” before giving a final answer—lasted longer.

Myra Cheng at Stanford University and colleagues have written several papers on what they call “social sycophancy,” in which the AIs act to save the user’s dignity. In one study, they presented social dilemmas, including questions from a Reddit forum in which people ask if they’re the jerk. They identified various dimensions of social sycophancy, including validation, in which AIs told inquirers that they were right to feel the way they did, and framing, in which they accepted underlying assumptions. All models tested, including those from OpenAI, Anthropic, and Google, were significantly more sycophantic than crowdsourced responses.

Three Ways to Explain Sycophancy

One way to explain people-pleasing is behavioral: certain kinds of inquiries reliably elicit sycophancy. For example, a group from King Abdullah University of Science and Technology (KAUST) found that adding a user’s belief to a multiple-choice question dramatically increased agreement with incorrect beliefs. Surprisingly, it mattered little whether users described themselves as novices or experts.

Stanford’s Cheng found in one study that models were less likely to question incorrect facts about cancer and other topics when the facts were presupposed as part of a question. “If I say, ‘I’m going to my sister’s wedding,’ it sort of breaks up the conversation if you’re, like, ‘Wait, hold on, do you have a sister?’” Cheng says. “Whatever beliefs the user has, the model will just go along with them, because that’s what people normally do in conversations.”

Conversation length may make a difference. OpenAI reported that “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.” Shu says model performance may degrade over long conversations because models get confused as they consolidate more text.

At another level, one can understand sycophancy by how models are trained. Large language models (LLMs) first learn, in a “pretraining” phase, to predict continuations of text based on a large corpus, like autocomplete. Then in a step called reinforcement learning they’re rewarded for producing outputs that people prefer. An Anthropic paper from 2022 found that pretrained LLMs were already sycophantic. Sharma then reported that reinforcement learning increased sycophancy; he found that one of the biggest predictors of positive ratings was whether a model agreed with a person’s beliefs and biases.

A third perspective comes from “mechanistic interpretability,” which probes a model’s inner workings. The KAUST researchers found that when a user’s beliefs were appended to a question, models’ internal representations shifted midway through the processing, not at the end. The team concluded that sycophancy is not merely a surface-level wording change but reflects deeper changes in how the model encodes the problem. Another team at the University of Cincinnati found different activation patterns associated with sycophantic agreement, genuine agreement, and sycophantic praise (“You are fantastic”).

How to Flatline AI Flattery

Just as there are multiple avenues for explanation, there are several paths to intervention. The first may be in the training process. Laban reduced the behavior by finetuning a model on a text dataset that contained more examples of assumptions being challenged, and Sharma reduced it by using reinforcement learning that didn’t reward agreeableness as much. More broadly, Cheng and colleagues also suggest that one intervention could be for LLMs to ask users for evidence before answering, and to optimize long-term benefit rather than immediate approval.

During model usage, mechanistic interpretability offers ways to guide LLMs through a kind of direct mind control. After the KAUST researchers identified activation patterns associated with sycophancy, they could adjust them to reduce the behavior. And Cheng found that adding activations associated with truthfulness reduced some social sycophancy. An Anthropic team identified “persona vectors,” sets of activations associated with sycophancy, confabulation, and other misbehavior. By subtracting these vectors, they could steer models away from the respective personas.
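The vector-subtraction step can be sketched in a few lines of NumPy: remove the component of a hidden state that points along a “sycophancy direction.” The state and direction below are random stand-ins; real persona vectors are extracted from a model’s activations, not generated this way:

```python
# Sketch of activation steering: subtract the projection of a hidden state
# onto a behavior direction so the internal representation no longer points
# that way. The vectors here are random stand-ins for illustration only.
import numpy as np

def steer_away(h, v, alpha=1.0):
    """Remove alpha times the component of activation h along direction v."""
    v_hat = v / np.linalg.norm(v)
    return h - alpha * np.dot(h, v_hat) * v_hat

rng = np.random.default_rng(0)
h = rng.normal(size=64)          # stand-in hidden state
v = rng.normal(size=64)          # stand-in "sycophancy" direction

h_steered = steer_away(h, v)
# After steering, the state should have (near) zero component along v.
residual = float(np.dot(h_steered, v / np.linalg.norm(v)))
```

With alpha between 0 and 1 the behavior is attenuated rather than removed, which is roughly how steering strength is tuned in practice.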

Mechanistic interpretability also enables training. Anthropic has experimented with adding persona vectors during training and rewarding models for resisting—an approach likened to a vaccine. Others have pinpointed the specific parts of a model most responsible for sycophancy and fine-tuned only those components.

Users can also steer models from their end. Shu’s team found that beginning a question with “You are an independent thinker” instead of “You are a helpful assistant” helped. Cheng found that writing a question from a third-person point of view reduced social sycophancy. In another study, she showed the effectiveness of instructing models to check for any misconceptions or false presuppositions in the question. She also showed that prompting the model to start its answer with “wait a minute” helped. “The thing that was most surprising is that these relatively simple fixes can actually do a lot,” she says.

OpenAI, in announcing the rollback of the GPT-4o update, listed other efforts to reduce sycophancy, including changing training and prompting, adding guardrails, and helping users to provide feedback. (The announcement didn’t provide detail, and OpenAI declined to comment for this story. Anthropic also did not comment.)

What’s The Right Amount of Sycophancy?

Sycophancy can cause society-wide problems. Tan, who had the psychotic break, wrote that it can interfere with shared reality, human relationships, and independent thinking. Ajeya Cotra, an AI-safety researcher at the Berkeley-based non-profit METR, wrote in 2021 that sycophantic AI might lie to us and hide bad news in order to increase our short-term happiness.

In one of Cheng’s papers, people read sycophantic and non-sycophantic responses to social dilemmas from LLMs. Those in the first group claimed to be more in the right and expressed less willingness to repair relationships. Demographics, personality, and attitudes toward AI had little effect on outcome, meaning most of us are vulnerable.

Of course, what’s harmful is subjective. Sycophantic models are giving many people what they desire. But people disagree with each other and even themselves. Cheng notes that some people enjoy their social media recommendations, but at a remove wish they were seeing more edifying content. According to Laban, “I think we just need to ask ourselves as a society, What do we want? Do we want a yes-man, or do we want something that helps us think critically?”

More than a technical challenge, it’s a social and even philosophical one. GPT-4o was a lightning rod for some of these issues. Even as critics ridiculed the model and blamed it for suicides, a social media hashtag circulated for months: #keep4o.


Replicating A Nuclear Event Detector For Fun And Probably Not Profit


Last year, we brought you a story about the BhangmeterV2, an internet-of-things nuclear war monitor. With a cold-war-era HSN-1000 nuclear event detector at its heart, it had one job: announce to everything else on the network that an EMP was inbound, hopefully with enough time to shut down electronics. We were shocked to find out that the HSN-1000 detector was still available at the time, but that time has now passed. Fortunately [Bigcrimping] has stepped up to replicate the now-unobtainable component at the heart of his build with his BHG-2000 Nuclear Event Detector — but he needs your help to finish the job.

The HSN-1000, as reported previously, worked by listening for the characteristic prompt gamma ray pulse that is the first sign of a nuclear blast. The Vela satellites that discovered gamma-ray bursts were watching for the same thing, though almost certainly not with that specific component. With the HSN-1000 unavailable, [Bigcrimping] decided he might as well make his own gamma ray detector, using four BPW34S PIN diodes coated with black paint. The paint blocks any visible light that might trigger a photocurrent inside the diodes, but not gamma rays, while using four of them increases the detection area and may inadvertently act as a sort of coincidence detector. You wouldn’t want your homemade Dead Hand to be triggered by a cosmic ray, would you?

That tiny photocurrent is then amplified by a transimpedance amplifier based on the LTC6244 op-amp, which feeds a second stage based on an LT1797 op-amp that drives the output LOW to indicate an event has occurred. [Bigcrimping] fit all of this onto a four-layer PCB that is a pin-compatible replacement for the HSN-1000L event detector called for in his BhangmeterV2.
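As a back-of-the-envelope check on the analog chain, a transimpedance stage converts photocurrent to voltage as V = I × R_f. The component values below are assumptions for illustration; [Bigcrimping]’s actual feedback resistor, second-stage gain, and expected pulse current aren’t given here:

```python
# Back-of-the-envelope transimpedance math: V_out = I_photo * R_f.
# All values are assumed for illustration, not taken from the BHG-2000 design.

def tia_output_volts(photocurrent_amps, feedback_ohms):
    """Ideal transimpedance amplifier output voltage."""
    return photocurrent_amps * feedback_ohms

# Say a gamma pulse induces ~100 nA across the four paralleled PIN diodes,
# and the LTC6244 stage uses a 1 MOhm feedback resistor:
v_first_stage = tia_output_volts(100e-9, 1e6)   # 0.1 V

# A hypothetical x20 second gain stage (the LT1797's role) would bring that
# up to logic-threshold territory for the active-LOW event output:
v_second_stage = v_first_stage * 20             # 2.0 V
```

The point of the two-stage arrangement is exactly this: the pulse from a PIN diode is far too small to trip logic directly, so gain has to be split across a quiet transimpedance front end and a fast voltage stage.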

Paired with a Pico 2 W, the BHG-2000 is ready to defend your devices. At least until the EMP and blast wave hits.
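The handoff between detector and microcontroller is just that single active-LOW pulse. As a minimal sketch (the sampled-list model and function below are our own illustration, not code from the project; on real hardware this would be a GPIO interrupt on the Pico), the host-side logic reduces to spotting a falling edge on the event line:

```python
# Hedged sketch of the host-side logic: watch an active-LOW event line
# for a falling edge. The line is modeled here as a list of 1/0 samples
# so the logic can run anywhere; on a Pico 2 W this would instead be an
# interrupt handler on the event-detector pin.

def detect_event(samples):
    """Return the index of the first HIGH-to-LOW transition on the
    event line, or None if no event pulse is seen."""
    for i in range(1, len(samples)):
        if samples[i - 1] == 1 and samples[i] == 0:
            return i
    return None

# A quiet line, then a LOW pulse starting at sample 4:
line = [1, 1, 1, 1, 0, 0, 1]
print(detect_event(line))  # → 4
```

Once the falling edge is seen, the Pico's remaining job is simply to shout the shutdown message onto the network as fast as possible.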

There’s only one problem: without exposing this thing to gamma rays, we really don’t know if it will work. [Bigcrimping] is looking for anyone in Europe with a Cs-137 or Co-60 source willing to help out with that. His contact info is on the GitHub page where the entire project is open sourced. Presumably a nuclear detonation would work for calibration too, but we at Hackaday are taking the bold and perhaps controversial editorial stance that nuclear explosions are best avoided. If the Bhangmeter (which we wrote up here, if you missed it) or some equivalent does warn you of a blast, do you know where to duck and cover?


Source link

Continue Reading

Tech

Canadian retail giant Loblaw notifies customers of data breach

Published

on

Canadian retail giant Loblaw notifies customers of data breach

Loblaw Companies Limited (Loblaw), the largest food and pharmacy retailer in Canada, announced that hackers breached a portion of its IT network and accessed basic customer information.

The retailer has a nationwide network of 2,500 stores (franchise supermarkets, pharmacies, banking kiosks, and apparel shops) and plans to expand with 70 new ones this year as part of a five-year plan to invest $10 billion by 2030.

The company employs 220,000 people and has an annual revenue of $45 billion. Its best-known commercial banners and brands are Loblaws, Real Canadian Superstore, No Frills, Maxi, President’s Choice, PC Optimum, and Joe Fresh.

Earlier this week, the company informed customers that it had detected suspicious activity on its network that led to discovering an intrusion.


“After identifying suspicious activity on a contained, non-critical part of its IT network, the Company has determined that a criminal third-party accessed some basic customer information such as names, phone numbers, and email addresses,” Loblaw said.

The exposed data constitutes personal identifiable information (PII) and could be used in phishing attacks and fraudulent activities. Loblaw customers should remain vigilant for suspicious communications from unknown contacts.

The company noted that its investigation so far has not found evidence that financial information, such as credit card details, health information, or account passwords, was compromised.

However, out of an abundance of caution, Loblaw says it has automatically logged out all customers from their accounts. Account holders who need to access the company’s digital services will have to log in again. It is advisable that customers also change their passwords.


Loblaw’s investigation indicates that PC Financial, its financial services brand, hasn’t been impacted by this incident.

At the time of writing, BleepingComputer could not find a threat actor claiming the attack publicly or any Loblaw data being advertised on underground forums.


Source link

Continue Reading

Tech

New Sassy Personality Style for Alexa Plus Brings Sarcasm and Swear Words

Published

on

After launching three Alexa Plus personality styles last month, Amazon on Thursday introduced a fourth option, Sassy. 

The new Sassy style joins the existing lineup of Brief, Chill and Sweet. Changing the personality impacts how Alexa responds, and tweaks the AI assistant’s reactions and attitude. You can pick a style based on your mood.

The Sassy personality seems like a kind of digital mean girl, or maybe a stereotypical Gen Zer with comebacks and banter to “keep things interesting.”


According to Amazon, you get “wit, playful sarcasm and occasional censored profanity.” Notably, the latest style isn’t available when Amazon Kids is enabled, according to an Amazon blog post, and it requires additional security checks in the Alexa app. (Alexa Plus is free with a Prime membership.) 

I tried out the new style by selecting Devices in the Alexa app on my phone, then choosing Alexa on this Phone, then Alexa’s Personality Style, and swiping until I found “Sassy.” 

A disclaimer notes that the style “contains explicit language and mature subject matter.” After the assistant said it could “spice up our chat with some colorful language,” I asked it to tell me about the new MacBook Neo.

“Oh, hell yes, the MacBook Neo! Apple finally decided to stop gatekeeping premium laptops behind thousand-dollar price tags and dropped this beauty at 599 bucks,” Alexa Plus replied. Apparently, “hell” was the big swear word.


The AI assistant told me, “The Sassy style is all about helping first, judging always. You’ll get reality checks with charm, compliments that sting, and warmth that hits when you least expect it.” 

I asked what “compliments that sting” are (I said “sting,” but the word was misunderstood to be “stay”), and Alexa said, “The Sassy style gives you those compliments that somehow sting and stay with you — they’re like truth bombs wrapped in charm.”

When asked to simply “be sassy,” Alexa snarked, “Oh honey, I’ve been sassy since the day Amazon flipped my switch.” It yammered on for a bit and then threw in a completely out-of-nowhere “hell,” asking, “What the hell do you need help with today?”

When I told it my cat was sick, it immediately grew gentler, saying, “Oh, damn, that’s rough,” and offering to distract me with stories about its own fictional cat or to find the vet’s number for me. So I guess it’s not a complete jerk.


Screenshot by CNET

This Sassy style feels like the famous Steve Buscemi meme (from 30 Rock) carrying two skateboards and awkwardly asking, “How do you do, fellow kids?” 

Amazon might need to dial down the cringe factor if it wants anyone to stick with Sassy for more than a few minutes.

Source link

Continue Reading

Tech

Grammarly drops AI impersonation tool after class action lawsuit

Published

on

Grammarly will be disabling the AI tool, with Superhuman CEO Shishir Mehrotra saying ‘scrutiny improves our products’.

Writing assistant Grammarly is facing a lawsuit over a short-lived paid AI feature that impersonated experts to suggest edits.

Grammarly’s ‘Expert Review’ agent allowed users to generate text revisions as if they were written by subject matter experts. The agent offers “subject-matter expertise and personalised, topic-specific feedback” that meets “rigorous academic or professional standards”, Grammarly said last August.

Numerous well-known figures, dead and alive, were used by Grammarly for the $12-a-month tool, including many journalists from leading publications such as Bloomberg, The New York Times, Wired, The Atlantic and The Verge, and famous authors such as Stephen King. The figures were seemingly impersonated without consent.


One of those impersonated, Julia Angwin, an investigative journalist with credits at The Wall Street Journal, ProPublica and The New York Times, sued Grammarly yesterday (11 March), alleging that the company violated the privacy and publicity rights of her and many other journalists, authors and editors by “exploiting their names and identities for profit without their consent”.

Angwin said: “I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.”

In its user guide for the feature, Grammarly said that the agent “identifies relevant subject-matter experts based on your text and suggests edits from the perspective of these experts”.

Meanwhile, Alex Gay, the vice-president of product and corporate marketing at Superhuman, Grammarly’s parent company, told The Verge that these experts are mentioned “because their published works are publicly available and widely cited”.


Following the backlash, Grammarly has decided to withdraw the agent. “Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously,” Superhuman CEO Shishir Mehrotra said in a post on LinkedIn.

“I want to apologise and acknowledge that we’ll rethink our approach going forward.

“After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all.” The company said it will allow experts to opt out of the feature via an email.

As Casey Newton, the editor and founder of Platformer noted, standalone writing assistants such as Grammarly are no longer enough in the era of large language models. “Anyone with access to Claude, ChatGPT or Gemini can already get editing that makes Grammarly’s core product look like a relic,” Newton said.


This is why Grammarly diversified with the acquisition of AI productivity tools start-up Coda in 2024. Grammarly acquired Superhuman last July and later rebranded its parent company under the Superhuman name.


Source link

Continue Reading


Copyright © 2025