
Tech

Canadian employees targeted in payroll pirate attacks

A financially motivated threat actor tracked as Storm-2755 is stealing Canadian employees’ salary payments after hijacking their accounts in payroll redirection (also known as payroll pirate) attacks.

The attackers stole victims’ authentication tokens and session cookies by luring them to domains (e.g., bluegraintours[.]com) hosting fake Microsoft 365 sign-in pages, which were pushed to the top of search engine results through malvertising and SEO poisoning.

This allowed Storm-2755 to bypass multifactor authentication (MFA) in adversary‑in‑the‑middle (AiTM) attacks by replaying stolen session tokens rather than re-authenticating.

“Rather than harvesting only usernames and passwords, AiTM frameworks proxy the entire authentication flow in real time, enabling the capture of session cookies and OAuth access tokens issued upon successful authentication,” Microsoft explained.

“Due to these tokens representing a fully authenticated session, threat actors can reuse them to gain access to Microsoft services without being prompted for credentials or MFA, effectively bypassing legacy MFA protections not designed to be phishing-resistant.”

Storm-2755 attack flow (Microsoft)

After gaining access to an employee’s account, the attacker created inbox rules that automatically moved messages from human resources staff containing the words “direct deposit” or “bank” to hidden folders, preventing the victim from seeing the correspondence.

In the next stage, they searched for “payroll,” “HR,” “direct deposit,” and “finance,” then sent emails to human resources staff with the subject line “Question about direct deposit” to trick staff into updating banking information.

Where social engineering failed, the attacker logged directly into HR software platforms such as Workday, using the stolen session to manually update direct deposit details.

Storm-2755 emailing HR staff (Microsoft)

To harden defenses against AiTM and payroll pirate attacks, Microsoft advises defenders to block legacy authentication protocols and implement phishing-resistant MFA.

If any signs of compromise are detected, they should also revoke compromised tokens and sessions immediately, remove malicious inbox rules, and reset MFA methods and credentials for all affected accounts.
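Defenders hunting for the inbox-rule tradecraft described above can apply a simple heuristic: flag any rule that matches payroll-related keywords and diverts mail out of the inbox. The sketch below is a minimal illustration; the rule dictionaries and field names are simplified assumptions loosely modeled on Microsoft Graph's message-rule objects, not the exact API schema.

```python
# Heuristic audit of inbox rules for payroll-pirate tradecraft.
SUSPICIOUS_KEYWORDS = {"direct deposit", "bank", "payroll", "hr"}

def is_suspicious_rule(rule: dict) -> bool:
    """Flag rules that divert payroll-related mail out of the inbox."""
    keywords = {k.lower() for k in rule.get("bodyOrSubjectContains", [])}
    target = rule.get("moveToFolder", "").lower()
    diverts = bool(target) and target != "inbox"
    return diverts and bool(keywords & SUSPICIOUS_KEYWORDS)

rules = [
    {"name": "newsletter filter",
     "bodyOrSubjectContains": ["newsletter"],
     "moveToFolder": "Archive"},
    {"name": ".",  # attacker rules often have terse, meaningless names
     "bodyOrSubjectContains": ["direct deposit", "bank"],
     "moveToFolder": "RSS Subscriptions"},
]

flagged = [r["name"] for r in rules if is_suspicious_rule(r)]
print(flagged)  # → ['.']
```

In practice this logic would run over rules pulled from each mailbox during incident response, alongside the token revocation and credential resets Microsoft recommends.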

In October, Microsoft disrupted another payroll pirate campaign, active since March 2025, in which a cybercrime gang tracked as Storm-2657 targeted the Workday accounts of university employees across the United States to hijack their salary payments.


In these attacks, Storm-2657 breached the targets’ accounts via phishing emails and stole MFA codes using AiTM tactics, which allowed the threat actors to compromise the victims’ Exchange Online accounts.

Payroll pirate attacks are a variant of business email compromise (BEC) scams that target businesses and individuals who regularly make wire transfers. Last year, the FBI’s Internet Crime Complaint Center (IC3) recorded over 24,000 BEC fraud complaints, resulting in losses exceeding $3 billion, making it the second most lucrative crime type behind investment scams.
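The per-incident scale implied by those IC3 figures is worth spelling out: taking the quoted lower bounds at face value, the average BEC complaint represents a six-figure loss.

```python
# Average loss per BEC complaint, using the lower-bound figures quoted
# above (over 24,000 complaints, losses exceeding $3 billion).
complaints = 24_000
total_losses_usd = 3_000_000_000

avg_loss = total_losses_usd / complaints
print(f"${avg_loss:,.0f} per complaint")  # → $125,000 per complaint
```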



Tech

The AI RAM Shortage is Also Driving Up SSD Prices

In 2024, The Verge’s consumer tech reporter paid $173 for a WD Black SN850X 2TB SSD. But “now that same SSD costs $649…”

“Like with RAM, demand from the AI industry is swallowing up supply from a limited number of manufacturers, leading to a drastic reduction in the inventory that’s available to consumers” — and skyrocketing prices:

The price on my WD Black drive nearly quadrupled since November 2025, and consumer SSDs across the board are seeing similar increases, much like with RAM. The 4TB version of the popular Samsung 990 Pro SSD previously cost $320, but will now run you nearly $1,000. External SanDisk SSDs saw a 200 percent price hike at the Apple Store in March….

According to price trends from PC Part Picker, NVMe SSD prices began ticking upward in December 2025, with prices on 256GB to 4TB SSDs now double or triple what they were just a few months ago, and continuing to climb.
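Expressed as multiples, the jumps cited above work out as follows (prices in USD, taken directly from the article; the Samsung figure rounds the article's "nearly $1,000" up to $1,000):

```python
# Price multiples for the SSDs cited above: (old price, new price).
prices = {
    "WD Black SN850X 2TB": (173, 649),   # "nearly quadrupled"
    "Samsung 990 Pro 4TB": (320, 1000),  # "nearly $1,000"
}
multiples = {name: new / old for name, (old, new) in prices.items()}
for name, m in multiples.items():
    print(f"{name}: {m:.2f}x")
```

The WD drive comes out at roughly 3.75x its old price, consistent with the "nearly quadrupled" claim.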


Tech

DJI ‘confirms’ the Osmo Pocket 4 with a tempting teaser

DJI has all but confirmed that the Osmo Pocket 4 will be unveiled later this month, and we can’t wait.

A launch event is scheduled for April 16 at 12PM GMT. The company’s teasers also suggest this will be a global release, rather than a China-only debut.

The Osmo Pocket 4 has been a long time coming. Early references to the device date back to mid-2025, and test units were sighted repeatedly in the months that followed. It’s expected to replace the popular Osmo Pocket 3, the camera that helped cement DJI’s position in the compact vlogging camera space.

While DJI hasn’t shared full specifications yet, early details point to a meaningful upgrade rather than a minor refresh.

The Osmo Pocket 4 is expected to feature improved camera hardware. In addition, it should have better subject tracking and built-in storage, which could make it a more self-contained option for creators on the go.

There have also been rumours of a more advanced “Pro” variant, though DJI has yet to acknowledge its existence. For now, the company is keeping the focus firmly on the standard Osmo Pocket 4 ahead of its official reveal.

If the leaks hold up, pricing is expected to land at around $499 in the US, putting it in line with its predecessor. That would position it as a competitive option for vloggers. It will appeal to those looking for a compact, stabilised camera without stepping up to larger mirrorless setups.

Reaction online has been quick, and in some cases, already looking ahead. Some users welcomed the long-awaited announcement but noted that attention may soon shift to a rumoured Pro version. Others pointed to teaser imagery that might hint at a dual-lens design, fuelling speculation that DJI could expand the Pocket range further.

There’s also a knock-on effect for the current model. Several users mentioned snapping up discounted Pocket 3 bundles, while others are now tempted to buy one outright or hold off in anticipation of price drops once the new model lands.

With just days to go before the announcement, the Osmo Pocket 4 looks set to build on DJI’s existing formula, with enough upgrades to keep it relevant in an increasingly crowded creator market.


Tech

Meta’s Superintelligence Labs debuts first product Muse Spark

Muse Spark is part of a ‘ground-up overhaul’ of Meta’s AI efforts, the company said.

Nearly a year after being established, Meta Superintelligence Labs (MSL) has finally debuted its first product, a multimodal model “purpose-built” for Meta’s products.

Muse Spark is the first in the family of Muse models and represents a “ground-up overhaul” of the company’s AI efforts, Meta said in a statement. The launch comes after the company poured multiple billions into its supposed efforts towards ‘superintelligence’, a hypothetical AI system with abilities beyond human intelligence.

Muse Spark is the “first step toward a personal superintelligence”, Meta said. The model can be accessed via Meta.ai and the Meta AI app.

According to the company, Muse Spark achieves strong performance on visual STEM questions, entity recognition and localisation. It performs on par with existing models from AI rivals such as OpenAI’s GPT-5.4, Anthropic’s Opus 4.6 and Google’s Gemini 3.1 Pro.

Muse is also marketed as a way to “learn about and improve” user health, Meta added, and is expected to be rolled out to WhatsApp, Facebook, Instagram and the company’s AI glasses in the coming weeks.

The company said it collaborated with more than 1,000 physicians to curate training data that enables “factual and comprehensive” responses. For comparison, OpenAI said it worked with 260 physicians to develop its ChatGPT Health offering.

Moreover, Meta found that Muse Spark demonstrated a “strong refusal behaviour” across high-risk areas such as biological and chemical weapons. The model also does not demonstrate the autonomous capability or hazardous tendencies required to realise threat scenarios around cybersecurity, Meta added.

Meanwhile, Anthropic’s new Claude Mythos, released in preview to select users earlier this week, was found to be significantly more capable at generating exploits than other models.

Concerned that Meta was lagging behind the likes of OpenAI and Anthropic, CEO Mark Zuckerberg set up MSL last June after acquiring Scale AI for $14.3bn and hiring its CEO Alexandr Wang to lead the team.

“This is only the start. As we expand these features, expect richer, more visual results, with Reels, photos and posts woven directly into your answers,” Meta said.

MSL has continued to make big-name hires to add to the efforts, including Moltbook founders Matt Schlicht and Ben Parr, co-founder of Safe Superintelligence Daniel Gross and Apple’s former AI lead Ruoming Pang. The company cut 600 jobs at MSL in October.

Earlier this year, Meta said that it is budgeting up to $135bn in total expenses for 2026. The growth, it said, is driven by an increased investment to support MSL as well as its core business.


Tech

Sam Altman responds to ‘incendiary’ New Yorker article after attack on his home

OpenAI CEO Sam Altman published a blog post on Friday evening responding to both an apparent attack on his home and an in-depth New Yorker profile raising questions about his trustworthiness.

Early Friday morning, someone allegedly threw a Molotov cocktail at Altman’s home. No one was hurt in the incident, and a suspect was later arrested at OpenAI headquarters, where he was threatening to burn down the building, according to the San Francisco Police Department.

While the police have not identified the suspect publicly, Altman noted that the incident came a few days after “an incendiary article” was published about him. He said someone had suggested that the article’s publication “at a time of great anxiety about AI” could make things “more dangerous” for him.

“I brushed it aside,” Altman said. “Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives.”

The article in question was a lengthy investigative piece written by Ronan Farrow (who won a Pulitzer for reporting that revealed many of the sexual abuse allegations around Harvey Weinstein) and Andrew Marantz (who’s written extensively about technology and politics).

Farrow and Marantz said they interviewed more than 100 people with knowledge of Altman’s business conduct, most of whom described him as someone with “a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart.”

Echoing other journalists who have profiled Altman, Farrow and Marantz suggested that many sources raised questions about his trustworthiness, with one anonymous board member saying he combines “a strong desire to please people, to be liked in any given interaction” with “a sociopathic lack of concern for the consequences that may come from deceiving someone.”

In his response, Altman said that looking back, he can identify “a lot of things I’m proud of and a bunch of mistakes.”

Among the mistakes, he said, is a tendency towards “being conflict-averse,” which he said has “caused great pain for me and OpenAI.”

“I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company,” Altman said, presumably referring to his removal and rapid reinstatement as OpenAI CEO back in 2023. “I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.”

He added, “I am sorry to people I’ve hurt and wish I had learned more faster.”

Altman also acknowledged that there seems to be “so much Shakespearean drama between the companies in our field,” which he attributed to a “‘ring of power’ dynamic” that “makes people do crazy things.”

Of course, the correct way to deal with the ring of power is to destroy it, so Altman added, “I don’t mean that [artificial general intelligence] is the ring itself, but instead the totalizing philosophy of ‘being the one to control AGI.’” His proposed solution is “to orient towards sharing the technology with people broadly, and for no one to have the ring.”

Altman concluded by saying that he welcomes “good-faith criticism and debate,” while reiterating his belief that “technological progress can make the future unbelievably good, for your family and mine.”

“While we have that debate, we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally,” he said.


Tech

Two-Week Social Media ‘Detox’ Erases a Decade of Age-Related Decline, Study Finds

Critics say social media is engineered to be as addictive as tobacco or gambling, writes the Washington Post — while adding that “the science has been moving in parallel with the court’s recognition.”

A growing body of research links heavy social media use not only to declines in mental health but to measurable cognitive effects — on attention, memory and focus — that in some studies resemble accelerated aging. Science also suggests we have more control than we realize when it comes to reversing this damage, and the solution is surprisingly simple: Take a break… “Digital detoxes” can sound like a fad. But in one of the largest studies to date, published in PNAS Nexus and involving more than 467 participants with an average age of 32, even a short time away produced striking results — effectively erasing a decade of age-related cognitive decline.

For 14 days, participants used a commercially available app, Freedom, to block internet access on their phones. They were still allowed calls and text messages, essentially turning a smartphone into a dumb phone. Their time online decreased from 314 minutes to 161 minutes, and by the end of the period the participants showed improvements in sustained attention, mental health and self-reported well-being. The improvement in sustained attention was about the same magnitude as 10 years of age-related decline, the researchers noted, and the effect of the intervention on depression symptoms was larger than that of antidepressants and similar to that of cognitive behavioral therapy.
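The headline screen-time figures from the study translate into roughly a 49 percent cut in phone use:

```python
# Screen-time reduction reported in the PNAS Nexus study cited above:
# time online fell from 314 to 161 minutes over the two-week block.
before_min, after_min = 314, 161
reduction = (before_min - after_min) / before_min
print(f"{reduction:.1%} less time on the phone")  # → 48.7% less time on the phone
```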

But two things were even more mind-blowing… Even those people who cheated and broke the rules after a few days seemed to have positive effects from the break; and in follow-up reports after the two weeks, many people reported the positive effects lingered. “So you don’t have to necessarily restrict yourself forever. Even taking a partial digital detox, even for a few days, seems to work,” Kushlev said.
The article also notes a November Harvard study published in JAMA Network Open with nearly 400 participants, which found that even a short break can make a measurable difference: after just one week of reduced smartphone use, participants reported drops in anxiety (16.1 percent), depression (24.8 percent) and insomnia (14.5 percent).

“Other experiments point in the same direction — whether decreasing social media use by an hour a day for one week or stepping away from just Facebook and Instagram.”


Tech

I’ve tested every iPhone since the iPhone 12, and Ceramic Shield 2 is the first iPhone glass I fully trust

Marketing is one thing, but reality is quite another. Like many of us, I won’t forget the claimed “durable” microtwill of FineWoven, the shaky initial launch of Apple Maps, or the infamous butterfly keyboard that was supposedly four times more stable. Remember the promise of AirPower? Of course you don’t.

It’s worth celebrating when the real-world experience does actually live up to the hype, then. And that’s the case with Apple’s Ceramic Shield 2, the tech giant’s latest and unquestionably greatest iPhone glass.


Tech

IBM settles its DEI lawsuit with the DOJ for $17 million

IBM has agreed to settle the US Department of Justice’s accusations that the company violated civil rights laws with its DEI practices. According to a press release from the DOJ, IBM will pay more than $17 million to resolve allegations of taking “race, color, national origin, or sex” into account when making employment decisions. This settlement is the latest development in a longstanding effort from the Trump administration to end DEI programs, which was kick-started from an executive order in early 2025.

IBM denied any wrongdoing and said the settlement wasn’t an admission of liability, while the US government said that settling was not a concession that its claims lacked merit, according to the settlement agreement. According to the DOJ, IBM had violated the Civil Rights Act of 1964 with practices that included altering “interview criteria based on race or sex,” developing “race and sex demographic goals for business units,” using “a diversity modifier that tied bonus compensation to achieving demographic targets” and more.

An IBM spokesperson told Engadget in an email that the company “is pleased to have resolved this matter,” adding that “our workforce strategy is driven by a single principle: having the right people with the right skills that our clients depend on.”

According to Todd Blanche, the agency’s acting attorney general, this action is one of the first resolutions to come out of the Civil Rights Fraud Initiative, which was launched in May 2025. IBM isn’t the only company to alter its policies, with both T-Mobile and Meta agreeing to end their DEI initiatives last year.


Tech

What’s Your Favorite Kind Of Hack?

Talking with [Tom Nardi] on the podcast this week, he mentioned his favorite kind of hack: the community-developed open-source firmware that can be flashed onto a commercial product that has crappy firmware, thus saving it. The example, just for the record, is the CrossPoint open e-book reader firmware that turns a mediocre cheap e-reader into something that you can do anything you want with. Very nice!

And that got me thinking about “kinds of hacks” in general. Do we have a classification scheme for the hacks that we see here on Hackaday? For instance, the obvious precursor to many of Tom’s favorite hacks is the breaking-into-the-locked-firmware hack, where a device that didn’t want you loading your own firmware on it is convinced to let you do so. Junk-hacking is probably also a category of its own, where instead of finding your prey on AliExpress, you find it on eBay, or in the alleyway. And the save-it-from-the-landfill repair and renovation hacks are close relatives.

The doing-too-much-with-too-little hacks are maybe my personal favorite. I just love to see when someone manages to get DOOM running in Linux on a computer made with only 8-pin microcontrollers. Because of the nature of the game, these often also include a handful of abusing-a-component-to-do-something-it’s-not-meant-to-do hacks. Heck, we even had a challenge for just exactly those kinds of hacks.

Then there are fine-art-hacks, where the aesthetic outcome is as important as the technical, or games-hacks where fun is the end result.

What other broad categories of hacks are we missing? And which are your favorite?


Tech

AI And Cybersecurity: A Glass Half-Empty/Half-Full Proposition, Where The Glass Is Holding Nitroglycerin

from the yikes dept

First, some of the good news: certain AI models—currently Anthropic’s Mythos, but surely others are well on their way if they haven’t already arrived—turn out to be really good at finding cybersecurity vulnerabilities. As Anthropic itself reported:

During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The vulnerabilities it finds are often subtle or difficult to detect. Many of them are ten or twenty years old, with the oldest we have found so far being a now-patched 27-year-old bug in OpenBSD—an operating system known primarily for its security.

That’s quite the tool, if it can help find vulnerabilities so that they can be patched.

But it’s also quite the tool to help find vulnerabilities so that they can be exploited. Like so many tools, including technological tools, whether they are good or bad depends entirely on how they are used. A hammer is a really helpful tool for building things, but it also smashes windows. And with this news, AI now has the capability for some really destructive uses.

To try to prevent them, Anthropic is working with some of the largest tech companies in the world to let them use a preview of its model on their own software to help QA them and proactively patch vulnerabilities. As Casey Newton reports:

Anthropic announced Mythos alongside Project Glasswing, an initiative with more than 40 of the world’s biggest tech companies that will see Anthropic grant early access to the model to find and patch vulnerabilities across many of the world’s most important systems. Launch partners in the coalition include Apple, Google, Microsoft, Cisco and Broadcom.

They’ll be tasked with scanning and patching their own systems along with the critical open-source systems that modern digital infrastructure depends on. Anthropic is giving participants $100 million in usage credits for Mythos, and donating another $4 million to open-source security efforts.

This sounds like a great program. It also should be noted that the Mythos model is not consumer-grade AI; it takes expensive, dedicated infrastructure to run, which means that, at least for the moment, there’s not an imminent danger of it being misused. But trouble is nevertheless brewing, and someday it will be here, which raises certain questions, like:

(A) What about other AI models, which will inevitably be similarly powerful? What if they are produced by less ethical companies, who would have no compunction about rogue actors using their systems in destructive ways that Project Glasswing won’t have intercepted?

(B) And what about every single legacy technology system in use, which Project Glasswing is unlikely to be able to retroactively fix? Large, resourced companies may be able to weather the oncoming storm, but what about your local dentist’s office? Or a hospital? Municipal IT systems? Networked technology is everywhere, and these smaller businesses and institutions are likely to have older, unpatched technology and also fewer resources to update and secure it, or to deal with the consequences of a hack, which can be devastating for the business or the people they serve.

On the other hand, there does seem to be one other bit of good news with this revelation: governments, including that of the United States, have often engaged in the dubious practice of hoarding zero-days, or collecting information about vulnerabilities that they then kept secret so that they could exploit them themselves by using them on an adversary. For those unfamiliar, “zero-day” refers to a vulnerability that has yet to be disclosed, which is why it’s on “day zero,” or before the first day of it being a known vulnerability that could now be fixed.

Mythos’s capabilities would seem to obviate this strategy, because suddenly the stash of unknown vulnerabilities isn’t really going to be such a secret, since anyone using the model will be able to find them. Mythos’s existence changes the balance of interests, where the stronger national security play by the government would be to disclose any discovered vulnerability to the vendor as soon as possible so that they can be patched and our nation’s systems more secured. Arguably that was always the better national security play, but now there’s definitely no upside to trying to keep them secret because it now definitely needs to be presumed that adversaries will be able to find and exploit them. They’ll have the tools.

With these AI models we’re going to need to presume that everyone is going to have the tools to know about every vulnerability. Up to now there has been at least the illusion of some security, because vulnerabilities couldn’t be exploited if no one knew about them, and finding vulnerabilities is hard. But now that it will be easy, the risk to the nation’s cybersecurity is greater than we have ever before contended with.

It is also not really a great harbinger that we know about Mythos because… a copy of the software got leaked. Only the software was leaked, not the models it uses to tune its “reasoning,” so anyone trying to build their own Mythos is still missing an important piece if they want to mimic its full capabilities, but they would have a lot to work with. Which is probably why Anthropic has been sending DMCA takedown notices to have the leaked software removed from the Internet.

But doing so raises a related issue: the role of copyright law when it comes to “vibe coding,” or having an AI system write the software rather than a programmer, just by instructing it on what to do. This matters especially in light of the cybersecurity concerns software always raises, vibe-coded software included, since we have to trust that what’s produced does not have vulnerabilities. Copyright requires a human author, which raises the question: can software written by an AI be copyrightable? The answer would appear to be no, unless there was a great deal of creative effort on the part of a human being to instruct the AI or modify the output. But as Ed Lee chronicled, per Anthropic itself, even its own software (“pretty much 100%”) is being written by AI. And if that’s the case, then Anthropic has no business sending takedown notices for its software, because DMCA takedown notices are only for demanding the removal of copyrighted works, which, it would appear, Anthropic’s own code is not.

But maybe it’s better if software stops being subject to copyright. “Vibe coding” is becoming increasingly efficient, to the point that there is likely no need for copyright to incentivize software authorship. Instead, what public policy really needs to emphasize is that whatever software gets produced is secure software. In many ways copyright obstructs that goal. Its lengthy terms mean that while a copyright holder may have stopped maintaining its older software, no one else can maintain and patch it either without potentially infringing the software’s copyright. Its privileged secrecy (unusually for copyright, when it comes to software you don’t actually have to disclose all the actual code to register a copyright in it!) and other powers, like Section 1201 of the DMCA, can lock out security research efforts that aren’t specifically supported by the developer, assuming the developer supports any security testing at all; right now there aren’t necessarily incentives to make them care about it. Instead, public policy has given developers the ability, as with copyright, to escape oversight of the security of their software products, even as those products end up embedded in more and more of our lives.

It’s time to change that focus and get copyright out of the way of making software security our top policy priority.

And fast.

Filed Under: ai, claude, claude code, copyright, cybersecurity, mythos, project glasswing, vulnerabilities, zero days

Companies: anthropic


Tech

‘Crimson Desert’ Is a Cat Dad Simulator

Step into the shoes of the strongest, goodest boy in a game that is beautiful, baffling, and impossible to put down.

