Brittani Phillips checked her phone. A middle school counselor in Putnam County, Florida, Phillips receives messages from an artificial intelligence-enabled therapy platform that students use during nonschool hours. It flags when a student may be at risk for harming themself or others based on what the student types into a chat.
Phillips saw that this was a “severe” alert for an eighth grader.
So, Phillips spent her evening on the phone with the student’s mom, asking questions to figure out what was going on and how vulnerable the student was. Phillips also called the police, she says, noting that she tells students that the chats are confidential until they can’t be.
That was last school year, in the spring.
“He’s alive and well. He’s in ninth grade this year,” Phillips says. She believes that the interaction built trust between her and the family. When the student passes her in the hall now, he makes a point to greet her, she adds.
Navigating budget shortfalls and limited mental health staff, Interlachen Jr.-Sr. High School, where Phillips works, is using an AI platform to vet students’ mental health needs.
Phillips’ district has used Alongside, an automated student monitoring system, for three years. It’s an example of a growing category of tools marketed to K-12 schools for similar purposes, with at least nine companies landing funding deals since 2022.
Alongside says its tool is used by more than 200 schools around the US and argues that its platform offers better services than typical telehealth options because it has a social and emotional skill-building chat tool — where students chat about their life problems with a llama called Kiwi that tries to teach them resilience — and its AI-generated content is monitored by clinicians. The system offers resource-strapped schools, especially in rural areas, access to critical mental health support, company representatives say.
Many experts and families also worry that students attach to AI too strongly. Even as a recent national survey found that 20 percent of high schoolers have used AI romantically or know someone who has, there’s significant interest in keeping students from emotionally connecting with bots. That even includes a proposed federal law that would force AI companies to remind students that chatbots aren’t real people.
Still, in her job, Phillips says the tool her school uses is exceptional at putting out the “small fires.” With around 360 middle schoolers to support, having this tool to hand-hold them through the breakups and other routine problems they face allows her to focus her time with students nearing crisis. Plus, students sometimes find it easier to turn to AI for dealing with emotional problems, she says.
On the Digital Couch
Students’ nervousness about confiding in adults plays into why they are comfortable confiding in these technologies, school counselors say.
Speaking with a mental health professional can be intimidating, especially for adolescents, says Sarah Caliboso-Soto, a licensed clinical social worker who serves as the assistant director of clinical programs at the USC Suzanne Dworak-Peck School of Social Work and the clinical director for the Trauma Recovery Center and Telebehavioral Health at USC.
There’s a generational component as well. For students who’ve grown up encountering chat interfaces through social media and websites, AI interfaces can feel familiar. And kids today find that it’s easier to text than call someone on the phone, says Linda Charmaraman, director of the Youth, Media & Wellbeing Research Lab at Wellesley Centers for Women.
Using AI to work through emotions also allows students to avoid watching facial expressions, which they may worry will carry judgment, she adds. Also, chatbots are available at times when a human might not be, without the hassle of having to make an appointment, Charmaraman says.
“It’s almost more natural than interacting with another human being,” Caliboso-Soto says.
In her work with a telehealth clinic, Caliboso-Soto has seen rising use of crisis text lines and chat lines. The clinic doesn’t use AI of any kind, she says, but it often gets approached by companies looking to put AI into therapy sessions as notetakers.
It’s not necessarily bad in Caliboso-Soto’s opinion. For resource-strapped schools, AI can be used “as a first line of defense,” regularly checking in with students and pointing them in the right direction when they need more help, she says.
The starting price for a school to use Alongside’s services is about $10 per student per year, according to the company. Larger districts usually receive volume-based discounts.
But Caliboso-Soto worries about using AI as a substitute counselor. It lacks the discernment that clinicians provide when interacting with students, she notes. While large language models can be trained to notice symptoms in text, they cannot see or hear what a human clinician can when interacting with a student, the inflections of the voice and the movements of the body, nor can they reliably catch subtle observations or behaviors. “You can’t replace human connection, human judgment,” she adds.
While AI can speed up the diagnostic process or free up time for school counselors, it’s crucial not to overly rely on it for mental health, says Charmaraman. The technology can miss some of the nuances that a human counselor would catch, and it can give students unrealistic positive reinforcement. Schools need to adopt a holistic approach that includes families and caregivers, she argues.
Plus, if a school is increasingly using AI intervention to filter serious cases, it’s worth paying attention to whether students are having less frequent contact with clinically trained humans, Caliboso-Soto says.
For its part, Alongside representatives say that the platform is not meant as a replacement for human therapy. The app is a stepping stone to seeking help from adults, says Ava Shropshire, a junior at Washington University who serves as a youth adviser for Alongside. She argues that the app makes mental health and social-emotional learning feel more normal for students and can lead them to seek out human help.
Still, some students think it’s at best a Band-Aid.
Social Accountability
“Can you think of another time in history when people have been so lonely, when our communities have been so weak?” asks Sam Hiner, executive director of The Young People’s Alliance, a North Carolina-based organization that lobbies for more youth participation in politics and policymaking.
During a time of economic upheaval, technology and social media have manipulated and isolated students from one another, and that’s led to a deep yearning for community and belonging, Hiner says.
Students will get it wherever they can, even if that’s through ChatGPT, he adds.
The Young People’s Alliance released a framework for regulating AI that allows for some therapeutic uses of the technology.
But in general, the organization is striving to rebuild the human community and is set against use of AI when it threatens to replace human companionship, Hiner says. “That’s a critical aspect of therapy and of living a fulfilled life and having social connection and having mental well-being,” he adds.
So for Hiner, the main concern is what’s called a “parasocial relationship,” when students develop a one-sided emotional attachment, especially when the technology enters schools for therapeutic purposes. It might be valuable to have an AI that can provide feedback or conduct analysis, even related to mental health, but Hiner says the AI should not hint or convey that it has its own emotional state — for instance, saying “I’m proud of you” to a student user — because that encourages attachment.
Even though platforms often claim to decrease loneliness, they don’t really measure whether people are more connected and are more set up to live fulfilled, connected, happy lives in the long term, says Hiner: “All [tech platforms are] measuring is whether this bot is serving as an effective crutch for the immediate feelings of loneliness that they’re experiencing.”
What advocates want to prevent is these bots fueling the loss of social skills because they pull people away from relationships with other people, where they have social accountability, Hiner says.
Pushing Boundaries
Privacy experts note that these chatbots do not generally carry the same privacy protections as conversations with a licensed therapist. And at a time of heightened concern about student privacy and encounters with the police, use of these tools raises “messy” privacy questions, even when supervised by people with clinical training, a privacy law expert says.
Both the company and Phillips, the counselor in Putnam County, stress that, to work, these systems need human oversight. Phillips feels like this tool is an improvement over other monitoring tools the district has used, which point students toward in-school discipline rather than mental health help.
This school year, Phillips counted 19 “severe” alerts from the AI health tool as of February, out of 393 active users. The company doesn’t break the incidents down by student, and some students account for more than one of those 19 alerts, Phillips notes.
Phillips has learned, in using the tool, that it takes a human to perceive teenage humor, too.
That’s because some alerts aren’t genuine. On occasion, middle school students — usually boys — will test the boundaries of this technology, Phillips says. They type “my uncle touches me” or “my mom beat me with a pole” into the chat to test whether Phillips will follow up on it.
These boys are just trying to see if anyone is listening, to test whether anyone cares, she says. Sometimes, they just find it funny.
When she pulls them aside to discuss it, she can observe their body language, and whether it changes, which might suggest that the comment was real. If it was a joke, they often become apologetic. When a student doesn’t seem remorseful, Phillips will call and let the parents know what happened. But even in these cases, Phillips feels she has more options than other monitoring systems offered, which would have referred the student to in-school suspension.
Because Phillips is keeping her eye on the interactions, the students also learn to trust that she’s actually monitoring the system, she adds.
And, she says, the number of boys who do test the system in that way goes down every year.
Boston Dynamics’ Spot, a four-legged machine that has been making its way through factories, warehouses, and power plants on its own for years, can now connect to the Orbit platform and the AIVI-Learning tool. This Google Gemini-powered program uses the photos Spot captures to generate reports on safety, equipment health, and cleanliness. The system has done well with easy tasks, but when scenes become cluttered, things get a little hazy.
That all changed with Google Gemini Robotics ER 1.6. This new model brings some high-level thinking to the party, allowing Spot to assess its surroundings, plan its next step, and determine whether or not it has completed a task. It captures photographs from numerous viewpoints simultaneously, even if the illumination changes or anything obscures the view. It can point to objects on the screen and count them precisely, and it can even avoid reporting results that do not exist.
Pressure gauges are an excellent example of how all of this new technology adds up. Spot moves up to a dial, zooms in if necessary, and then reports the exact reading. It can even manage camera-angle distortions and check numerous needles at once if there is more than one to deal with. Sight glasses work similarly, letting the robot estimate liquid levels from empty to full in plain percentage terms, and the digital displays that used to give it a headache due to glare or bad typefaces now read much more consistently.
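Stripped of the vision work, the gauge-reading step comes down to mapping a detected needle angle onto the dial’s scale. Here is a minimal sketch in Python, assuming the needle angle has already been extracted by a vision model; the sweep and scale values are illustrative, not taken from any Boston Dynamics or Google API:

```python
# Hypothetical gauge geometry: needle sweeps from -45° (scale minimum)
# to 225° (scale maximum), reading 0 to 10 bar. Real gauges would need
# per-model calibration.

def gauge_reading(needle_deg, min_deg=-45.0, max_deg=225.0,
                  min_val=0.0, max_val=10.0):
    """Linearly map a needle angle (degrees) to a gauge value."""
    if not min_deg <= needle_deg <= max_deg:
        raise ValueError("needle angle outside gauge sweep")
    frac = (needle_deg - min_deg) / (max_deg - min_deg)
    return min_val + frac * (max_val - min_val)

print(gauge_reading(90.0))  # needle at mid-sweep -> 5.0
```

Checking “numerous needles at once” would then just mean running this mapping once per detected needle, each with its own calibration.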
Spot can also address the bigger picture, as it performs 5S compliance audits without issue, detecting misplaced tools or clutter that violates housekeeping guidelines. If it sees a puddle of liquid, it’s now clever enough to recognize it as a hazard rather than a harmless reflection. Conveyor belts, valves, and other equipment are all thoroughly inspected to detect any minor damage or leaks before they cause major problems.
Every inspection includes a step-by-step analysis of how the robot reached its decision, allowing customers to see exactly what the AI did rather than receiving a black-box response. When the stakes are high, with someone facing penalties or the business facing shutdown from unanticipated downtime, that transparency genuinely builds confidence. The good news is that all of these changes happen entirely behind the scenes, with Boston Dynamics and Google handling everything in the cloud, so your robot continues to function normally. As Spot conducts regular patrols, new photographs are fed back into the system, and the models gradually learn the unique layout, lighting, and equipment of each location.
An anonymous reader quotes a report from Cord Cutters News: Sony has notified owners of its recent BRAVIA television models that significant changes to the built-in TV Guide for its OTA TV antenna users and related menu features will take effect starting in late May 2026. The update affects a range of premium sets released between 2023 and 2025, marking another instance of feature adjustments for older smart TV hardware as manufacturers shift focus toward newer product lines. The changes primarily target the program guide functionality for over-the-air antenna TV channels received via the ATSC tuner. After the cutoff date, program information may fail to display on certain channels, limiting the guide’s usefulness for planning viewing schedules. Users will often see listings only for channels they have recently watched, rather than a comprehensive overview of available broadcasts. Additionally, channel logos that previously appeared in the guide will disappear, and any thumbnail images accompanying program descriptions will no longer load or show.
Further modifications will appear in the television’s menu system. For users relying on connected set-top boxes, the dedicated Set Top Box menu option will be removed entirely. In its place, a simpler Control menu will surface, streamlining access but eliminating some specialized navigation previously available. Program thumbnails, which provided visual previews in various menu sections, will also cease to appear across affected interfaces. These adjustments stem from Sony’s ongoing efforts to manage backend services and data feeds that support enhanced guide features on its Google TV-powered BRAVIA lineup. As television ecosystems evolve rapidly with advancements in processing power, artificial intelligence integration, and cloud-based content delivery, companies periodically retire select capabilities on prior-generation hardware to optimize resources. The 2023 through 2025 models, while still offering excellent picture quality through advanced OLED and LCD panels with features like XR processing, now fall into the category of devices receiving scaled-back support. These are the models impacted:
Microsoft recently released a new preview build of Windows 11 for the Windows Insider channels. Users enrolled in the Insider program can now test a somewhat historic change: a new “hard” size limit for disk volumes formatted with the FAT32 file system. This long-anticipated update may improve compatibility and flexibility…
Japanese entertainment company Toho has released a teaser video for Godzilla Minus Zero, the upcoming sequel to the award-winning film Godzilla Minus One. The teaser shows the famous monster next to the Statue of Liberty as it rampages across New York. Godzilla Minus Zero is set in 1949, two years after the events of the first film, and will be a direct sequel. You’ll see familiar faces from Minus One in the short trailer, as well, namely Koichi Shikishima and Noriko Oishi, two of the first movie’s main characters.
The kaiju flick was filmed specifically for IMAX with high-definition digital cameras. Even its audio was optimized for the massive screen’s immersive cinema experience. Minus One won an Oscar for Best Visual Effects, so expectations are high for this sequel. The good news is that this movie is also helmed by Takashi Yamazaki, who wrote, directed and oversaw the visual effects for Minus One. Godzilla Minus Zero is heading to cinemas in Japan on November 3 and in the United States on November 6 this year.
Soccer piracy losses estimated between $700M and $800M annually
Real-time AI detection cuts piracy rates across major matches
Traditional blocking tools struggle against large-scale streaming networks
Piracy of live football streams has grown into an industrial-scale problem, with Spanish clubs warning that illegal viewing is draining hundreds of millions of dollars from the sport each year.
LaLiga estimates piracy costs its clubs, which include Real Madrid, Barcelona, and Atlético Madrid, between $700m and $800m annually, a figure that reflects both lost subscriptions and declining broadcast value.
The league has been working with infrastructure company Fastly on tools which attempt to detect illegal streams as matches unfold rather than after they have already spread.
The problem of illegal streaming
Millions of unauthorized streams now operate in parallel during major matches, often appearing faster than traditional enforcement tools can react.
A study by Grant Thornton recorded at least 10.8 million unauthorized retransmissions of live events in 2024, with more than 81% never suspended and only 2.7% removed within the first 30 minutes.
Illegal streaming is widespread across Europe, with estimates suggesting nearly four million people in the UK use unauthorized sources to watch live sport.
Traditional methods such as IP blocking have long been used to restrict access to illegal streams, but those measures can disrupt legitimate viewers while pirate services quickly reappear under new addresses. That has created a cycle where enforcement lags behind distribution.
LaLiga and Fastly have been developing systems that rely on AI and content-based signals to identify illegal streams in real time. Instead of blocking large network ranges, the system focuses on detecting specific signals linked to copyrighted broadcasts.
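Neither LaLiga nor Fastly has published how its detection works, but one generic content-based approach is perceptual fingerprinting: reduce video frames to tiny hashes and flag streams whose hashes sit within a small Hamming distance of the licensed broadcast. A minimal sketch, with toy 2×2 “frames” standing in for real downsampled grayscale images:

```python
# Illustrative "average hash" fingerprinting, a standard technique and
# only an assumption about what a real system might use.

def average_hash(frame):
    """frame: 2D list of grayscale pixel values. Returns a bit string:
    1 where a pixel is brighter than the frame's mean, else 0."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

reference = average_hash([[200, 40], [180, 60]])   # frame from the licensed feed
suspect   = average_hash([[210, 35], [175, 70]])   # frame from a suspect stream

# A small distance suggests the same broadcast: flag for takedown
# rather than blocking the whole network range.
print(hamming(reference, suspect))  # -> 0
```

Matching on the content itself, rather than on IP addresses, is what lets such a system survive pirate services reappearing under new hosts.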
“At LaLiga, we have succeeded in reducing piracy of our streams in Spain by 60% during the 2024/25 season through a comprehensive, end-to-end strategy focused on legal, educational, institutional, and technological measures,” said Javier Tebas, President at LaLiga.
“This success is due in large part to our ecosystem of partners like Fastly, enabling us to continue exploring new and more effective ways to tackle piracy at its root. LaLiga remains firmly committed to putting an end to piracy, and achieving this goal requires the collaboration of all stakeholders working together.”
The partnership focuses on shrinking the time window in which illegal streams can operate before being flagged and removed.
Faster detection increases the chance of stopping unauthorized broadcasts before large audiences gather.
“Unlike alternative approaches based on regional blocking, our strategy focuses on precision, letting fans enjoy the game while protecting content from abuse by criminals,” said Kelly Shortridge, Chief Product Officer at Fastly.
“At Fastly, we love co-innovating with customers to solve their thorniest challenges, and we look forward to continuing our work with LaLiga to help protect content owners around the world.”
Efforts to curb piracy are becoming more technical as viewing habits shift online and illegal distribution tools grow more sophisticated. Leagues increasingly view rapid detection and targeted removal as necessary to protect broadcast revenue and limit the spread of unauthorized streams.
California-based auditor webXray reports that tech giants have continued to use cookies to track users across the internet, even when website visitors reject them. Google, Microsoft, and Meta have all disputed the findings.
The first step was to deal with the really grungy case. The shell was soaked in dish soap and given a good brushing before being packed and sent to a collaborator. Upon inspection of the internals, several unknown modifications to the PCB were evident. These were likely to support playing home-burned copies of pirated games, as well as an NTSC region hack (for this PAL version of the console), courtesy of a dodgy-looking crystal oscillator hanging on the end of some wires.
Luckily, the PS1 product design is highly modular, giving excellent repairability, which made reversing this a doddle. The mod wiring was removed by simply desoldering it, but the cut traces needed to be cleaned up and reconnected to return it to stock condition.
After the first round of fixes, [Elliot] plugged the console into the TV for a test. It was still outputting black-and-white; something was still amiss. He had simply connected one of the repair wires to the wrong spot on the PCB. After correcting that error (and getting lucky: no damage was done), the correct colour PAL output appeared.
An unidentified Chinese 1080p HDMI upscaler mod
Next, a PicoStation ZeroWire was soldered in place. This cleverly-shaped PCB hosts one of the Pico MCU chips and allows launching games from an SD card. Using a combination of large through holes on the PCB and a few castellated edge holes, installation looks very easy. ZeroWire is a bit of an unfortunate name, as it actually requires one jumper wire to be attached, but we’re just nitpicking here. Next, there was some really precarious-looking pin lifting on the CDROM controller chip. Cleanliness is in order here for a successful soldering mod. A special ESD toothbrush (not really) was pressed into service for cleaning with IPA. Proper ESD tools are not expensive, but you can get away without them.
An Amazon-sourced PAL-to-HDMI adapter was tried to perform some 720p “upscaling”. This reduced the obvious jaggies a bit, but it was not really good enough for [Elliot]. So instead, he installed an HDMI mod board sourced from an Aliexpress store (listing now defunct). The metal shielding can was removed to reveal the video ICs. The serial port connector was removed, as this is the location for the new HDMI port. The ‘fun’ part of this particular mod is attaching the custom flex PCB to the video chip. This is quite a daunting task for those not comfortable with SMT soldering techniques. It may look hard, but it’s actually dead easy to drag-solder this, so long as you use plenty of good-quality flux and keep the heat under control. Once that was out of the way, a second, smaller cable was routed to the audio chip.
The final result internals. Tidy!
Next up was to deal with the old-school wired controllers. The TechnoBit Videojuegos Re-Live BT controller board allows the use of a modern wireless controller. Its installation requires disassembling the original controller connector module. The PCB from the rear of the module is removed along with the ribbon cable connector and a through-hole Zener diode, both of which are reused and soldered to the new controller board. This seems like an unnecessary faff and could have easily been pre-installed or at least included with the PCB. Also, soldering the through-hole beeper to surface-mount pads made us cringe. That looks like someone forgot to make the correct footprint for a part that normal humans can solder.
Finally, a Robot Retro USB-C power supply was dropped in to replace the original AC power supply, bringing this build’s connectivity into the current decade. USB power, HDMI ‘1080p’ output, SD card game loading, and a BT controller. Nice! The last part of the build features a custom respray of the enclosure, a nod to the original ‘dev kit blue’ version when the PS1 was first announced all those years ago. Ah, we remember it well!
Fluidstack, a startup that builds specialized data centers for AI companies, is in talks to raise a $1 billion round at an $18 billion valuation, potentially led by Jane Street, Bloomberg reports.
Should this deal come to fruition, it would more than double Fluidstack’s valuation in a matter of months.
In December, the company was reportedly raising around $700 million at a $7.5 billion valuation, sources told Bloomberg at the time, although it didn’t formally announce the close of that round. That round was said to be led by Situational Awareness, an AGI-focused fund founded by former OpenAI researcher Leopold Aschenbrenner, and backed by Stripe’s Collison brothers, former GitHub CEO Nat Friedman, and the AI investor and entrepreneur Daniel Gross.
Talks were apparently still ongoing for this round in February, at least with Google, which was considering kicking in $100 million to the round, The Wall Street Journal reported.
There’s good reason for the hype over Fluidstack. In November, Anthropic announced that it had signed a $50 billion deal with the startup to build data centers custom-designed for its needs in Texas and New York. Unlike hyperscalers like AWS, which serve all kinds of computing needs, Fluidstack’s infrastructure is built specifically for AI.
The deal was a huge vote of confidence for Fluidstack, a company that was relatively unknown in the U.S. Anthropic primarily uses AWS and Google Cloud to serve Claude (though it also has a partnership with Microsoft to supply Claude to that software giant’s customers). But just like rival OpenAI, Anthropic is growing so fast that it needs more capacity, and this deal gives Anthropic more control over its own cloud infrastructure.
This partnership is so significant to the startup that Fluidstack — which was spun out of Oxford and had been a rising star in Europe’s AI scene — relocated its headquarters from the U.K. to New York. Last month, it also pulled out of a key €10 billion AI project in France, Bloomberg reported, to focus on U.S. opportunities.
In addition to Anthropic, it counts Meta, Poolside, Black Forest Labs, and others as customers. Prior to the deal with Anthropic, Fluidstack was probably best known for providing infrastructure to Mistral.
Fluidstack did not respond to a request for comment.
Have you ever deleted a game you were not finished with simply because your Xbox Series X|S had run out of room, only to face a lengthy re-download the next time you wanted to play?
That frustration is exactly what the WD_BLACK C50 2TB Storage Expansion Card addresses, and it is currently down from £282.99 to £189.99 on Amazon, making this one of the better moments to fix the problem properly.
With a 33% discount back on the table, the WD_BLACK C50 2TB is an easy way to expand your Xbox storage before things get tight
At this price, this WD_BLACK C50 deal is a straightforward upgrade for anyone who has to make difficult decisions about their game storage.
The key word is properly, because unlike plugging in an external USB drive, the C50 slots directly into the dedicated expansion port on your Xbox Series X and Series S and operates through Xbox Velocity Architecture, which means games stored on it run with the same speed and responsiveness as titles on the console’s internal SSD.
That matters more than it might sound, because Xbox Series X|S games are designed around that architecture, and running them from a slower external drive forces them off the internal storage entirely, costing you the fast load times and Quick Resume functionality that make the console worth owning in the first place.
Quick Resume itself is worth unpacking here, as it lets you suspend multiple games simultaneously and jump back into any of them almost instantly, but that feature depends entirely on having enough fast storage available to hold those suspended states ready to go.
At 2TB, the WD_BLACK C50 gives you room to keep a substantial library installed and ready without constant management, which changes the relationship you have with your game collection from one of rationing to one of just playing whatever you feel like.
The card weighs just 25 grams and is officially licensed by Microsoft, so it slots in without any setup process or compatibility concerns, and the five-year limited warranty means it is built to last well beyond the current console generation.
This is a straightforward upgrade for any Xbox Series X|S owner who has started making difficult decisions about which games to keep installed, and at £189.99 the WD_BLACK C50 2TB makes that problem disappear without a complicated solution.
More than 100 malicious extensions in the official Chrome Web Store are attempting to steal Google OAuth2 Bearer tokens, deploy backdoors, and carry out ad fraud.
Researchers at application security company Socket discovered that the malicious extensions are part of a coordinated campaign that uses the same command-and-control (C2) infrastructure.
The threat actor published the extensions under five distinct publisher identities in multiple categories: Telegram sidebar clients, slot machine and Keno games, YouTube and TikTok enhancers, a text translation tool, and utilities.
According to the researchers, the campaign uses a central backend hosted on a Contabo VPS, with multiple subdomains handling session hijacking, identity collection, command execution, and monetization operations.
Socket has found evidence indicating a Russian malware-as-a-service (MaaS) operation, based on comments in the code for authentication and session theft.
Extensions linked to the same campaign Source: Socket
Harvesting data and hijacking accounts
The largest cluster, comprising 78 extensions, injects attacker-controlled HTML into the user interface via the ‘innerHTML’ property.
The second-largest group, with 54 extensions, uses ‘chrome.identity.getAuthToken’ to collect the victim’s email, name, profile picture, and Google account ID.
They also steal the Google OAuth2 Bearer token, a short-lived access token that permits applications to access a user’s data or to act on their behalf.
Google account data harvesting Source: Socket
A third batch of 45 extensions features a hidden function that runs on browser startup, acting as a backdoor that fetches commands from the C2 and can open arbitrary URLs. This function does not require the user to interact with the extension.
One extension highlighted by Socket as “the most severe” steals Telegram Web sessions every 15 seconds, extracts session data from ‘localStorage’ and the session token for Telegram Web, and sends the info to the C2.
“The extension also handles an inbound message (set_session_changed) that performs the reverse operation: it clears the victim’s localStorage, overwrites it with threat actor-supplied session data, and force-reloads Telegram,” describes Socket.
“This allows the operator to swap any victim’s browser into a different Telegram account without the victim’s knowledge.”
The researchers also found three extensions that strip security headers and inject ads into YouTube and TikTok, one that proxies translation requests through a malicious server, and a non-active Telegram session theft extension that uses staged infrastructure.
Socket has notified Google about the campaign, but warns that, at the time its report was published, all of the malicious extensions were still available on the Chrome Web Store.
BleepingComputer confirms that many of the extensions listed in Socket’s report are still available at publishing time. We have reached out to Google for a comment on this, but we have not heard back.
Users are recommended to search their installed extensions against the IDs Socket published, and uninstall any matches immediately.
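That check can be scripted. Chrome stores each installed extension in a subdirectory named after its 32-character extension ID, so intersecting those directory names with the published IOC list is enough. A minimal sketch, using a placeholder blocklist (the real IDs are in Socket’s report) and the default profile path on Linux, which differs on Windows and macOS:

```python
import pathlib

# Placeholder IOC, NOT a real ID from the report; substitute Socket's list.
MALICIOUS_IDS = {"exampleidaaaaaaaaaaaaaaaaaaaaaaa"}

def find_flagged(extensions_dir):
    """Return installed extension IDs that appear in the blocklist."""
    root = pathlib.Path(extensions_dir).expanduser()
    installed = {p.name for p in root.iterdir() if p.is_dir()}
    return sorted(installed & MALICIOUS_IDS)

# Typical default profile location on Linux:
# find_flagged("~/.config/google-chrome/Default/Extensions")
```

Any IDs the function returns should be uninstalled immediately, and the associated Google and Telegram sessions revoked, given the token- and session-theft behavior described above.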