
Tech

Lumma Stealer and Ninja Browser malware campaign abusing Google Groups


CTM360 reports that more than 4,000 malicious Google Groups and 3,500 Google-hosted URLs are being used in an active malware campaign targeting global organizations.

The attackers abuse Google’s trusted ecosystem to distribute credential-stealing malware and establish persistent access on compromised devices.

The activity is global, with attackers embedding organization names and industry-relevant keywords into posts to increase credibility and drive downloads.

Read the full report here: https://www.ctm360.com/reports/ninja-browser-lumma-infostealer


How the campaign works

The attack chain begins with social engineering inside Google Groups. Threat actors infiltrate industry-related forums and post technical discussions that appear legitimate, covering topics such as network issues, authentication errors, or software configurations.

Within these threads, attackers embed download links disguised as: “Download {Organization_Name} for Windows 10”

To evade detection, they use URL shorteners or Google-hosted redirectors via Docs and Drive. The redirector is designed to detect the victim’s operating system and deliver different payloads depending on whether the target is using Windows or Linux.
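For defenders, the routing logic described above can be sketched in a few lines. This is an illustrative reconstruction of the pattern (branching on the User-Agent header), not the attackers' actual code:

```python
# Illustrative sketch of OS-based payload routing as described in the report.
# Shown so defenders can recognize the pattern, not the attackers' real code.

def classify_os(user_agent: str) -> str:
    """Crudely infer the OS family from an HTTP User-Agent header."""
    ua = user_agent.lower()
    if "windows" in ua:
        return "windows"
    if "linux" in ua and "android" not in ua:
        return "linux"
    return "other"

def payload_for(user_agent: str) -> str:
    """Map the inferred OS to the payload each branch of the campaign served."""
    return {
        "windows": "password-protected archive (Lumma Stealer)",
        "linux": "trojanized 'Ninja Browser' installer",
    }.get(classify_os(user_agent), "decoy page")
```

Because the branching happens server-side on the redirector, the same shortened link can serve different malware to different victims, which is why inspecting the full redirect chain matters.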

 

Malware lifecycle

Windows Infection Flow: Lumma Info-Stealer

For Windows users, the campaign delivers a password-protected compressed archive hosted on malicious file-sharing infrastructure.


Oversized archive to evade detection

The decompressed archive size is approximately 950MB, though the actual malicious payload is only around 33MB. CTM360 researchers found that the executable was padded with null bytes — a technique designed to exceed antivirus file-size scanning thresholds and disrupt static analysis engines.
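A simple heuristic can surface this kind of hollow, padded binary. The sketch below measures what fraction of a file is trailing null bytes; it is an illustrative triage check, not CTM360's tooling:

```python
def null_padding_ratio(data: bytes) -> float:
    """Fraction of a file made up of trailing null bytes.

    A near-1.0 ratio on a very large executable is a red flag for the
    AV-evasion padding technique described above. Heuristic sketch only.
    """
    if not data:
        return 0.0
    stripped = data.rstrip(b"\x00")
    return (len(data) - len(stripped)) / len(data)

# Example: a 950 MB file whose real content is ~33 MB would score ~0.965.
```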

AutoIt-based reconstruction

Once executed, the malware:

  • Reassembles segmented binary files.

  • Launches an AutoIt-compiled executable.

  • Decrypts and executes a memory-resident payload.

The behavior matches Lumma Stealer, a commercially sold infostealer frequently used in credential-harvesting campaigns.

Observed behavior includes:

  • Browser credential exfiltration.

  • Session cookie harvesting.

  • Shell-based command execution.

  • HTTP POST requests to C2 infrastructure (e.g., healgeni[.]live).

  • Use of multipart/form-data POST requests to mask exfiltrated content.
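One defensive angle on that last point is proxy-log triage: flag multipart/form-data POSTs to hosts outside an allowlist. The log format and allowlist below are illustrative assumptions, not a specific product's schema:

```python
import re

# Hypothetical proxy-log triage: flag multipart/form-data POSTs to hosts
# outside an allowlist -- the exfiltration shape described above.
ALLOWED_HOSTS = {"uploads.example.com"}  # placeholder allowlist

LOG_LINE = re.compile(r"POST https?://(?P<host>[^/\s]+)\S* .*multipart/form-data")

def suspicious_posts(log_lines):
    """Return the hosts of multipart POSTs that are not on the allowlist."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("host") not in ALLOWED_HOSTS:
            hits.append(m.group("host"))
    return hits
```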

CTM360 identified multiple associated IP addresses and SHA-256 hashes linked to the Lumma Stealer payload.


Linux Infection Flow: Trojanized “Ninja Browser”

Linux users are redirected to download a trojanized Chromium-based browser branded as “Ninja Browser.”

The software presents itself as a privacy-focused browser with built-in anonymity features.

However, CTM360’s analysis reveals that it silently installs malicious extensions without user consent and implements hidden persistence mechanisms that enable future compromise by the threat actor.

Malicious extension behavior

A built-in extension named “NinjaBrowserMonetisation” was observed to:

  • Track users via unique identifiers

  • Inject scripts into web sessions

  • Load remote content

  • Manipulate browser tabs and cookies

  • Store data externally

The extension contains heavily obfuscated JavaScript using XOR and Base56-like encoding.
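XOR layers in extension obfuscation are usually trivial to peel once spotted. Below is a minimal sketch of the brute-force step an analyst might automate, assuming a single-byte key (real samples layer XOR with custom base-N alphabets, so this is only a first pass, not a full decoder):

```python
from typing import Optional

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key; XOR is its own inverse."""
    return bytes(b ^ key for b in data)

def brute_force_xor(blob: bytes, marker: bytes) -> Optional[int]:
    """Try all 256 single-byte keys; return one that reveals `marker`.

    Searching for a known plaintext (an API name, a URL scheme) is the
    standard trick for recovering trivial XOR keys during triage.
    """
    for key in range(256):
        if marker in xor_bytes(blob, key):
            return key
    return None
```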

While the extension does not immediately activate all embedded domains, the infrastructure suggests future payload-deployment capability.

The extensions installed by the threat actor in the browser, as seen from the server side
Source: CTM360

Silent persistence mechanism

CTM360 also identified scheduled tasks configured to:

  • Poll attacker-controlled servers daily

  • Silently install updates without user interaction

  • Maintain long-term persistence
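The daily-poll pattern above is auditable on endpoints. A hedged sketch that scans crontab-style text for daily jobs fetching remote content follows; the patterns are illustrative heuristics, not signatures from the CTM360 report:

```python
import re

# Heuristic: a cron entry that runs every day AND pulls remote content
# matches the persistence shape described above.
DAILY = re.compile(r"^\S+\s+\S+\s+\*\s+\*\s+\*")       # "m h * * *" -- every day
FETCHER = re.compile(r"\b(curl|wget)\b.*https?://")     # fetches a remote URL

def flag_daily_pollers(crontab_text: str):
    """Return crontab lines that both run daily and fetch remote content."""
    return [
        line for line in crontab_text.splitlines()
        if DAILY.match(line) and FETCHER.search(line)
    ]
```

On systemd-based hosts the equivalent audit would cover timer units as well; the point is to alert on scheduled-task creation, not just scan once.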

Additionally, researchers observed that the browser defaults to a Russian-based search engine named “X-Finder” and redirects to another suspicious AI-themed search page.

The infrastructure appears tied to domains such as:

  • ninja-browser[.]com

  • nb-download[.]com

  • nbdownload[.]space

Campaign Infrastructure & Indicators of Compromise

CTM360 linked the activity to infrastructure, including:

IPs:

  • 152.42.139[.]18

  • 89.111.170[.]100

C2 domain:

  • healgeni[.]live

Multiple SHA-256 hashes and domains associated with credential harvesting and info-stealer distribution were identified and are available in the report.

Risks to organizations

Lumma Stealer risks:

  • Silent credential harvesting

  • Remote command execution

Ninja Browser risks:

  • Backdoor-like persistence

  • Automatic malicious updates without user consent

Because the campaign abuses Google-hosted services, the attack bypasses traditional trust-based filtering mechanisms and increases user confidence in malicious content.

Defensive recommendations

CTM360 advises organizations to:

  • Inspect shortened URLs and Google Docs/Drive redirect chains.

  • Block the IoCs at firewall and EDR levels.

  • Educate users against downloading software from public forums/sources without verification.

  • Monitor scheduled task creation on endpoints.

  • Audit browser extension installations.

The campaign highlights a broader trend: attackers are increasingly weaponizing trusted SaaS platforms as delivery infrastructure to evade detection.

About the Research

The findings were published in CTM360’s February 2026 threat intelligence report, “Ninja Browser & Lumma Infostealer Delivered via Weaponized Google Services.”


CTM360 continues to monitor this activity and track related infrastructure.

Read the full report here: https://www.ctm360.com/reports/ninja-browser-lumma-infostealer

Detect Cyber Threats 24/7 with CTM360

Monitor, analyze, and promptly mitigate risks across your external digital landscape with CTM360.

Join our Community Edition


Sponsored and written by CTM360.


Tech

Anthropic and the Pentagon are reportedly arguing over Claude usage


The Pentagon is pushing AI companies to allow the U.S. military to use their technology for “all lawful purposes,” but Anthropic is pushing back, according to a new report in Axios.

The government is reportedly making the same demand to OpenAI, Google, and xAI. An anonymous Trump administration official told Axios that one of those companies has agreed, while the other two have supposedly shown some flexibility.

Anthropic, meanwhile, has reportedly been the most resistant. In response, the Pentagon is apparently threatening to pull the plug on its $200 million contract with the AI company.

In January, the Wall Street Journal reported that there was significant disagreement between Anthropic and Defense Department officials over how its Claude models could be used. The WSJ subsequently said that Claude was used in the U.S. military’s operation to capture then-Venezuelan President Nicolás Maduro.


Anthropic did not immediately respond to TechCrunch’s request for comment.

A company spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War” but is instead “focused on a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance.”


Tech

How to Watch Netflix’s ‘America’s Next Top Model’ Docuseries


A new three-part Netflix docuseries will cover the chaos and complicated legacy of the hit reality series, America’s Next Top Model. 

ANTM premiered in 2003 and ran for 24 seasons, helping launch the careers of contestants like Eva Marcille, Lio Tipton and Yaya DaCosta. Netflix’s synopsis describes the new doc as the definitive chronicle of the modeling competition, which “became a pop-culture juggernaut defined by explosive drama, public meltdowns and controversies that still fuel viral moments today.” 


Former contestants, judges and producers — including host and creator Tyra Banks — took part in Netflix’s series, Reality Check, which you can stream soon.

When to watch Reality Check: Inside America’s Next Top Model on Netflix

Netflix will drop its three-episode doc on the modeling competition series in the early morning hours on Monday, Feb. 16 (3 a.m. ET, to be exact).

Like many other streaming services, Netflix’s cheapest tier is ad-supported, and you can opt for a pricier tier to avoid commercials. You can subscribe to Standard with ads for $8 per month, Standard for $18 per month or Premium for $25 per month.


For ad-free streaming and access to every title Netflix offers, you should opt for the streamer’s Standard or Premium tiers. The Standard with ads tier comes with some limits on what you can watch due to licensing restrictions. Netflix’s website lets you compare the simultaneous streams, downloads and extra member slots you get with each tier.


Tech

Solid-State EV Batteries Just Got One Step Closer To American Roads






Just about every modern electric vehicle on American roads is powered by one of three battery types: lithium-iron phosphate (the most common, also known as LFP), nickel-manganese cobalt (NMC), and nickel-cobalt aluminum (NCA). Each of these is a relatively mature and well-understood system, with each holding certain advantages — LFP batteries are cheap and stable, whereas NCA batteries are energy-dense and powerful. But these EVs have only really been commonplace on today’s roads for the past two decades or so, a comparatively small amount of time when measured against the common internal combustion engine’s history spanning almost 140 years. Technology advances at an ever-increasing pace, and we may be on the precipice of that next evolution — at least on American roads.

Enter the solid-state battery, a pioneering technology that promises to combine the benefits of the aforementioned chemistries into a single package: high performance, excellent energy density, long potential service life, and thermal stability. It comes at a steep cost, though — one that Karma Automotive appears willing to pay. In February 2026, Karma Automotive announced plans to ship the first mass-production vehicle powered by solid-state batteries stateside, equipped with Factorial FEST SSBs.


Karma Automotive is the only American ultra-luxury manufacturer offering a diverse portfolio of vehicles, a specialized firm dedicated to producing EVs deep into six-figure USD territory. The company currently fields six distinct models, but only one will receive the solid-state battery at first: the Kaveya super coupe, scheduled for a 2027 debut. Let’s dive in and explore more about the car and solid-state batteries, along with what the technology promises to accomplish.


How solid-state batteries work

First things first: what is a solid-state battery, and how does it differ from other EV battery types? In a typical EV battery, two electrodes sit on either side: the anode and cathode — negative and positive, respectively. In between, lithium ions shuttle from one electrode to the other through an electrolyte, like relay runners. There are several types of these batteries, the most common of which is lithium-ion, but they all use a gel-like or liquid electrolyte. Solid-state batteries, or SSBs for short, use a solid electrolyte instead, providing a more stable and energy-dense solution for power storage.

There are several variants of SSBs in service; the one Karma Automotive is testing is technically a quasi-solid-state battery. Produced by Factorial Energy, the quasi-SSB design prioritizes thermal stability (quasi-SSBs are inherently far less flammable than standard lithium-ion batteries) alongside high energy density, which translates to roughly double the range. Factorial’s website cites range figures of at least 500 miles for next-generation EVs, from a pack weighing roughly one third less than a typical 90 kWh battery. Factorial also lists the Solstice SSB as a potential candidate for future EVs alongside the FEST quasi-SSB.
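As a back-of-envelope check on those vendor claims: cutting pack mass by a third at constant stored energy implies roughly a 1.5x gain in gravimetric energy density. The 540 kg baseline mass below is an assumed figure for illustration, not a published spec:

```python
def density_gain(energy_wh: float, base_mass_kg: float, mass_reduction: float) -> float:
    """Ratio of pack energy density after cutting mass by `mass_reduction`,
    holding stored energy constant. Illustrative arithmetic only."""
    base_density = energy_wh / base_mass_kg
    new_density = energy_wh / (base_mass_kg * (1 - mass_reduction))
    return new_density / base_density

# e.g. a 90 kWh pack at an assumed 540 kg, made one-third lighter:
# density_gain(90_000, 540, 1/3) -> 1.5
```

Note that the density ratio depends only on the mass reduction, not on the assumed baseline, which is why the claim is plausible arithmetic regardless of the exact pack weight.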

With standard battery technology fully matured, the current consensus is that SSBs represent the next technological leap forward for battery technology. Implementing such designs in cars holds a number of benefits: lighter vehicles with higher ranges, greater battery longevity, and greater power. However, because it’s still an emerging technology as far as EVs go, costs are currently prohibitively expensive for regular mass-production cars in the United States, and so you still can’t buy them for any U.S.-sold EV — yet.


The Karma Kaveya

As for the car itself, the Karma Kaveya is a sleek, ultra-modern super coupe designed with a high-end grand tourer aesthetic. The name “Kaveya” is Sanskrit, meaning “power in motion,” a theme present in the promised statistics — Karma claims the high-end coupe to be capable of 0-60 times in less than 3 seconds and speeds in excess of 180 mph, thanks to its 1,000 hp powertrain. All of that is speculative for now, of course — especially given the emergent nature of the battery it houses.

According to the official figures listed on Karma’s website, the Kaveya pairs a 120 kWh high-voltage battery with a combined 1,270 lb-ft of available torque and a 10-80% charging time of about 45 minutes. This contrasts with an earlier estimate from Stellantis, which announced a partnership with Factorial back in April 2025 to use the batteries in Dodge demonstration vehicles; its figures listed an estimated charging time of 18 minutes from 15-90%.


Regardless of the battery’s performance now, it’ll likely exceed that of even the most advanced mass-production standard battery pack, albeit for a steep cost. But Karma isn’t in the business of cheap vehicles, so it’s a model that suits the company well. With the Kaveya representing the current cutting-edge of EV technology, Karma looks poised to leave a definitive mark in the ongoing electric arms race no matter what happens.




Tech

Nvidia, Groq and the limestone race to real-time AI: Why enterprises win or lose here


​From miles away across the desert, the Great Pyramid looks like a perfect, smooth geometry — a sleek triangle pointing to the stars. Stand at the base, however, and the illusion of smoothness vanishes. You see massive, jagged blocks of limestone. It is not a slope; it is a staircase.

​Remember this the next time you hear futurists talking about exponential growth.

​Intel co-founder Gordon Moore (of Moore’s Law) famously predicted in 1965 that the transistor count on a microchip would double every year. Another Intel executive, David House, later revised this to “compute power doubling every 18 months.” For a while, Intel’s CPUs were the poster child of this law. That is, until the growth in CPU performance flattened out like a block of limestone.

​If you zoom out, though, the next limestone block was already there — the growth in compute merely shifted from CPUs to the world of GPUs. Jensen Huang, Nvidia’s CEO, played a long game and came out a strong winner, building his own stepping stones initially with gaming, then computer vision, and most recently generative AI.


​The illusion of smooth growth

​Technology growth is full of sprints and plateaus, and gen AI is not immune. The current wave is driven by transformer architecture. To quote Anthropic’s CEO and co-founder Dario Amodei: “The exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”

​But just as the CPU plateaued and GPUs took the lead, we are seeing signs that LLM growth is shifting paradigms again. For example, late in 2024, DeepSeek surprised the world by training a world-class model on an impossibly small budget, in part by using the mixture-of-experts (MoE) technique.

​Do you remember where you recently saw this technique mentioned? Nvidia’s Rubin press release: The technology includes “…the latest generations of Nvidia NVLink interconnect technology… to accelerate agentic AI, advanced reasoning and massive-scale MoE model inference at up to 10x lower cost per token.”

​Jensen knows that achieving that coveted exponential growth in compute doesn’t come from pure brute force anymore. Sometimes you need to shift the architecture entirely to place the next stepping stone.


​The latency crisis: Where Groq fits in

​This long introduction brings us to Groq.

​The biggest gains in AI reasoning capabilities in 2025 were driven by “inference time compute” — or, in lay terms, “letting the model think for a longer period of time.” But time is money. Consumers and businesses do not like waiting.

​Groq comes into play here with its lightning-speed inference. If you bring together the architectural efficiency of models like DeepSeek and the sheer throughput of Groq, you get frontier intelligence at your fingertips. By executing inference faster, you can “out-reason” competitive models, offering a “smarter” system to customers without the penalty of lag.

​From universal chip to inference optimization

​For the last decade, the GPU has been the universal hammer for every AI nail. You use H100s to train the model; you use H100s (or trimmed-down versions) to run the model. But as models shift toward “System 2” thinking — where the AI reasons, self-corrects and iterates before answering — the computational workload changes.


​Training requires massive parallel brute force. Inference, especially for reasoning models, requires faster sequential processing. It must generate tokens instantly to facilitate complex chains of thought without the user waiting minutes for an answer. ​Groq’s LPU (Language Processing Unit) architecture removes the memory bandwidth bottleneck that plagues GPUs during small-batch inference, delivering lightning-fast inference.

​The engine for the next wave of growth

​For the C-Suite, this potential convergence solves the “thinking time” latency crisis. Consider the expectations from AI agents: We want them to autonomously book flights, code entire apps and research legal precedent. To do this reliably, a model might need to generate 10,000 internal “thought tokens” to verify its own work before it outputs a single word to the user.

  • On a standard GPU: 10,000 thought tokens might take 20 to 40 seconds. The user gets bored and leaves.

  • On Groq: That same chain of thought happens in less than 2 seconds.
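The bullet-point comparison above is just throughput arithmetic. The token rates below are illustrative assumptions for the sake of the calculation, not benchmark results:

```python
# Thinking latency is thought tokens divided by generation throughput.
def thinking_seconds(thought_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock time to generate a chain of internal 'thought' tokens."""
    return thought_tokens / tokens_per_second

gpu_time = thinking_seconds(10_000, 300)    # ~300 tok/s assumed for a GPU stack
lpu_time = thinking_seconds(10_000, 6_000)  # ~6,000 tok/s assumed for an LPU
# gpu_time lands in the tens of seconds; lpu_time under two seconds.
```

The lesson is that for reasoning workloads, latency scales linearly with hidden token count, so a 10-20x throughput gap is the difference between a usable agent and an abandoned one.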

​If Nvidia integrates Groq’s technology, they solve the “waiting for the robot to think” problem. They preserve the magic of AI. Just as they moved from rendering pixels (gaming) to rendering intelligence (gen AI), they would now move to rendering reasoning in real-time.

​Furthermore, this creates a formidable software moat. Groq’s biggest hurdle has always been the software stack; Nvidia’s biggest asset is CUDA. If Nvidia wraps its ecosystem around Groq’s hardware, they effectively dig a moat so wide that competitors cannot cross it. They would offer the universal platform: The best environment to train and the most efficient environment to run (Groq/LPU).


Consider what happens when you couple that raw inference power with a next-generation open source model (like the rumored DeepSeek 4): You get an offering that would rival today’s frontier models in cost, performance and speed. That opens up opportunities for Nvidia, from directly entering the inference business with its own cloud offering, to continuing to power an exponentially growing customer base.

​The next step on the pyramid

​Returning to our opening metaphor: The “exponential” growth of AI is not a smooth line of raw FLOPs; it is a staircase of bottlenecks being smashed.

  • Block 1: We couldn’t calculate fast enough. Solution: The GPU.

  • Block 2: We couldn’t train deep enough. Solution: Transformer architecture.

  • Block 3: We can’t “think” fast enough. Solution: Groq’s LPU.

​Jensen Huang has never been afraid to cannibalize his own product lines to own the future. By validating Groq, Nvidia wouldn’t just be buying a faster chip; they would be bringing next-generation intelligence to the masses.

Andrew Filev, founder and CEO of Zencoder


Welcome to the VentureBeat community!

Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Tech

Hideki Sato, known as the father of Sega hardware, has reportedly died


Hideki Sato, who led the design of Sega’s beloved consoles from the ’80s and ’90s, died on Friday, according to the Japanese gaming site Beep21. He was 77. Sato worked with Sega from 1971 until the early 2000s, but he’s best known for his involvement in developing the Sega arcade games and home consoles that defined many late Gen X and early millennial childhoods, from the SG-1000 through the Genesis, Saturn and Dreamcast.

Sato went on to serve as Sega’s president from 2001 to 2003. In the post announcing his death, Beep21, which interviewed Sato numerous times over the years, wrote (translated from Japanese), “He was truly a great figure who shaped Japanese gaming history and captivated Sega fans all around the world. The excitement and pioneering spirit of that era will remain forever in the hearts and memories of countless fans, for all eternity.” Sato’s passing comes just a few months after that of Sega co-founder David Rosen, who died in December at age 95. 


Tech

OpenClaw creator Peter Steinberger joins OpenAI


Peter Steinberger, who created the AI personal assistant now known as OpenClaw, has joined OpenAI.

Previously known as Clawdbot, then Moltbot, OpenClaw achieved viral popularity over the past few weeks with its promise to be the “AI that actually does things,” whether that’s managing your calendar, booking flights, or even joining a social network full of other AI assistants. (The name changed the first time after Anthropic threatened legal action over its similarity to Claude, then changed again because Steinberger liked the new name better.)

In a blog post announcing his decision to join OpenAI, the Austrian developer said that while he might have been able to turn OpenClaw into a huge company, “It’s not really exciting for me.”

“What I want is to change the world, not build a large company[,] and teaming up with OpenAI is the fastest way to bring this to everyone,” Steinberger said.


OpenAI CEO Sam Altman posted on X that in his new role, Steinberger will “drive the next generation of personal agents.” As for OpenClaw, Altman said it will “live in a foundation as an open source project that OpenAI will continue to support.”


Tech

Researchers turn Edison's 1879 light bulb into a mini graphene reactor



Graphene is a two-dimensional lattice of carbon atoms arranged in a hexagonal pattern, renowned for its exceptional electrical conductivity, thermal transport, and mechanical strength. Turbostratic graphene is a stacked variant in which the layers are rotated and misaligned, weakening interlayer coupling and making the material easier to process at scale.

Tech

Software Development On The Nintendo Famicom In Family BASIC


Back in the 1980s, your options for writing your own code and games were rather more limited than today. This also mostly depended on what home computer you could get your hands on, which was a market that — at least in Japan — Nintendo was very happy to slide into with their ‘Nintendo Family Computer’, or ‘Famicom’ for short. With the available peripherals, including a tape deck and keyboard, you could actually create a fairly decent home computer, as demonstrated by [Throaty Mumbo] in a recent video.

After a lengthy unboxing of the new-in-box components, we move on to the highlight of the show, the HVC-007 Family BASIC package, which includes a cartridge and the keyboard. The latter of these connects to the Famicom’s expansion port. Inside the package, you also find a big Family BASIC manual that includes sprites and code to copy. Of course, everything is in Japanese, so [Throaty] had to wrestle his way through the translations.

The cassette tape is used to save applications, with the BASIC package also including a tape with the Sample 3 application, which is used in the video to demonstrate loading software from tape on the Famicom. Although [Throaty] unfortunately didn’t sit down to type in the code from the sample listings in the manual, the video provides an interesting glimpse at the all-Nintendo family computer that the rest of the world never got to enjoy.



Tech

Google Docs can turn long documents into audio summaries in latest Workspace update



The new feature will roll out across Google Workspace over the next two weeks. It will appear under Tools > Audio > Listen to document summary, where users can trigger a small media player to control playback. The summaries, typically under three minutes, draw on information from multiple document tabs…

Tech

Longtime NPR host David Greene sues Google over NotebookLM voice


David Greene, the longtime host of NPR’s “Morning Edition,” is suing Google, alleging that the male podcast voice in the company’s NotebookLM tool is based on Greene, according to The Washington Post.

Greene said that after friends, family members, and coworkers began emailing him about the resemblance, he became convinced that the voice was replicating his cadence, intonation, and use of filler words like “uh.”

“My voice is, like, the most important part of who I am,” said Greene, who currently hosts the KCRW show “Left, Right, & Center.”

Among other features, Google’s NotebookLM allows users to generate a podcast with AI hosts. A company spokesperson told the Post that the voice used in this product is unrelated to Greene’s: “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.”


This isn’t the first dispute over AI voices resembling real people. In one notable example, OpenAI removed a ChatGPT voice after actress Scarlett Johansson complained that it was an imitation of her own.

