Tech

Microsoft to disable NTLM by default in future Windows releases


Microsoft announced that it will disable the 30-year-old NTLM authentication protocol by default in upcoming Windows releases due to security vulnerabilities that expose organizations to cyberattacks.

NTLM (short for New Technology LAN Manager) is a challenge-response authentication protocol introduced in 1993 with Windows NT 3.1 and is the successor to the LAN Manager (LM) protocol.

Kerberos superseded NTLM and has been the default authentication protocol for domain-connected devices since Windows 2000. However, NTLM is still used today as a fallback when Kerberos is unavailable, even though it relies on weak cryptography and is vulnerable to attacks.

Since its release, NTLM has been widely exploited in NTLM relay attacks (where threat actors force compromised network devices to authenticate against attacker-controlled servers) to escalate privileges and take complete control over the Windows domain. Despite this, NTLM is still used on Windows servers, allowing attackers to exploit vulnerabilities such as PetitPotam, ShadowCoerce, DFSCoerce, and RemotePotato0 to bypass NTLM relay attack mitigations.

NTLM has also been targeted by pass-the-hash attacks, in which cybercriminals exploit system vulnerabilities or deploy malicious software to steal NTLM hashes (hashed passwords) from targeted systems. These hashed passwords are used to authenticate as the compromised user, allowing the attackers to steal sensitive data and spread laterally across the network.
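To make concrete why stolen hashes are so dangerous, here is a deliberately simplified challenge-response sketch. It is not the real NTLM algorithm: the actual NT hash is MD4 over the UTF-16LE password, and NTLMv2 responses use HMAC-MD5, but MD4 is often unavailable in modern crypto libraries, so SHA-256 stands in here. The structural point it illustrates is accurate, though: the response is computed from the stored hash rather than the password, so the hash alone is enough to authenticate.

```python
import hashlib
import hmac
import os

def password_hash(password: str) -> bytes:
    # Illustrative stand-in for the NT hash (really MD4 over the
    # UTF-16LE password); SHA-256 is used only to show the shape.
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def server_challenge() -> bytes:
    # The server sends the client a random nonce.
    return os.urandom(8)

def client_response(nt_hash: bytes, challenge: bytes) -> bytes:
    # The response is keyed by the stored hash, NOT the password.
    return hmac.new(nt_hash, challenge, hashlib.sha256).digest()

def server_verify(stored: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(stored, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Legitimate exchange: the client knows the password.
stored = password_hash("hunter2")
chal = server_challenge()
assert server_verify(stored, chal, client_response(stored, chal))

# Pass-the-hash: an attacker who stole only the hash (never the
# password) can answer any fresh challenge just as well.
stolen = stored
new_chal = server_challenge()
assert server_verify(stored, new_chal, client_response(stolen, new_chal))
```

Because the hash is a fixed, unsalted credential equivalent, rotating the password is the only way to invalidate a stolen hash.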

“Blocked and no longer used automatically”

On Thursday, as part of a broader push toward passwordless, phishing-resistant authentication methods, Microsoft announced that NTLM will finally be disabled by default in the next major Windows Server release and associated Windows client versions, marking a significant shift away from the legacy protocol to more secure Kerberos-based authentication.

Microsoft also outlined a three-phase transition plan designed to mitigate NTLM-related risks while minimizing disruption. In phase one, admins will be able to use enhanced auditing tools available in Windows 11 24H2 and Windows Server 2025 to identify where NTLM is still in use.
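Outside Microsoft's own tooling, the same inventory idea can be sketched against exported logon events: Windows Security event 4624 records which authentication package handled each logon, so counting packages per source host reveals where NTLM lingers. The CSV layout, host names and field names below are invented for illustration.

```python
import csv
import io
from collections import Counter

# Hypothetical export of logon events. A real audit would read Event ID
# 4624 records from the Security log; the column names here are invented.
sample_export = """\
timestamp,source_host,auth_package
2025-11-20T09:01:02,APP01,Kerberos
2025-11-20T09:01:05,LEGACY-FS,NTLM
2025-11-20T09:02:11,APP01,Kerberos
2025-11-20T09:03:40,LEGACY-FS,NTLM
"""

def ntlm_sources(export_text: str):
    """Count logons per auth package and list hosts still sending NTLM."""
    counts = Counter()
    ntlm_hosts = set()
    for row in csv.DictReader(io.StringIO(export_text)):
        counts[row["auth_package"]] += 1
        if row["auth_package"] == "NTLM":
            ntlm_hosts.add(row["source_host"])
    return counts, sorted(ntlm_hosts)

counts, hosts = ntlm_sources(sample_export)
print(counts["NTLM"], counts["Kerberos"])  # 2 2
print(hosts)                               # ['LEGACY-FS']
```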

Phase two, scheduled for the second half of 2026, will introduce new features, such as IAKerb and a Local Key Distribution Center, to address common scenarios that trigger NTLM fallback.

Phase three will disable network NTLM by default in future releases, even though the protocol will remain present in the operating system and can be explicitly re-enabled through policy controls if needed.

NTLM timeline (Microsoft)

”Disabling NTLM by default does not mean completely removing NTLM from Windows yet. Instead, it means that Windows will be delivered in a secure-by-default state where network NTLM authentication is blocked and no longer used automatically,” Microsoft said.

“The OS will prefer modern, more secure Kerberos-based alternatives. At the same time, common legacy scenarios will be addressed through new upcoming capabilities such as Local KDC and IAKerb (pre-release).”

Microsoft first announced plans to retire the NTLM authentication protocol in October 2023, noting that it also wanted to expand management controls to give administrators greater flexibility in monitoring and restricting NTLM usage within their environments.

It also officially deprecated NTLM authentication on Windows and Windows servers in July 2024, advising developers to transition to Kerberos or Negotiate authentication to prevent future issues.

Microsoft has been warning developers to stop using NTLM in their apps since 2010 and advising Windows admins to either disable NTLM or configure their servers to block NTLM relay attacks using Active Directory Certificate Services (AD CS).

WhatsApp rolls out updates including multiple accounts for iOS

WhatsApp shared multiple quality-of-life updates coming to its messaging platform starting today. The first is a long-awaited option to have two accounts on a single iOS device. The option has been available for years on Android, and iPhone users can now be logged into two separate accounts at once. The active account’s profile photo will be visible in the bottom tab, making it easy to double-check which persona you’re messaging as.

The other new features allow for easier movement of chat histories, both between platforms and devices in the same ecosystem. This chat transfer should make it easier to retain messages when upgrading to a new phone, especially if you’re switching between iOS and Android. There’s also a new option to delete large files directly from a WhatsApp chat to avoid storage clutter. It’s available under the Manage Storage option when you tap a chat’s name. It includes an option to delete just media files from a conversation.

And of course it wouldn’t be a tech news announcement without at least some AI features present. WhatsApp now supports using Meta AI for light photo editing, including removing backgrounds, changing aesthetic styles and deleting elements from the composition. There’s also a Writing Help prompt that uses AI to help draft a message, although Meta’s blog post states that using this still keeps chats private. The above features should be arriving to all WhatsApp users “soon,” according to the company.

Are interdisciplinary teams reshaping work in the engineering space?

We spoke with Sarthak Kumar Barik and Stephen Conneely about the engineering sector and how team dynamics are evolving with the times.

Blue and white engineering focus banner with font displayed

As technologies advance, siloed working environments have the potential to become a thing of the past, particularly as we find more convenient and effective ways to stay in contact with globally dispersed peers. 

The engineering space is no different and for Stephen Conneely, the director of QA engineering at Fidelity Investments Ireland, interdisciplinary engineering has reshaped how teams deliver results, especially in an environment where AI-assisted development has become more commonplace.  

Conneely told SiliconRepublic.com, “Teams bring together software engineers, quality engineers, analysts and platform specialists to jointly own problems end‑to‑end, with AI tools supporting activities such as test design, code review, and documentation.

“This shared ownership reduces hand‑offs and allows risks to surface earlier, while maintaining strong governance and accountability. Quality is designed in from the start, with disciplines collaborating to decide where AI accelerates delivery and where human validation remains critical.”

He explained that this is all happening in an atmosphere where engineers are expected to understand how their work impacts adjacent systems, data integrity and the client experience. “The result is teams that move faster with confidence, using AI as an enabler rather than a shortcut and delivering more predictable outcomes in complex environments.”

This is echoed by Workhuman’s Sarthak Kumar Barik, a principal engineer who stated, “As a platform team, our work does not exist in isolation. Product teams across the organisation own use cases built on the same legacy foundation and they face the same migration challenges, often without the same depth of context.” 

It is the responsibility of engineers and other employees, he finds, to close that gap in knowledge. He explained this can be achieved by translating the migration experience into reusable patterns, clear guidance and well-defined integration points that other teams can adopt without starting from scratch. 

He said, “We work alongside product teams as active partners, helping them map their existing behaviour to the new platform, identifying where the gaps are, and making sure each migration they undertake is faster and less risky than the one before. The goal is that knowledge compounds across the organisation rather than staying locked within a single team.”

A more tangible example of this reshaping, he explained, is in how the organisation uses artificial intelligence in the engineering workflow. Workhuman ran an AI-assisted workshop where developers provided context about the system, its architecture, data flows and constraints and used this as the foundation for AI-generated code. 

He said, “The difference compared to generic prompting was striking. When AI is given the real context of your system it becomes a genuine accelerator, producing code that is relevant, grounded and faster to review and adapt. This has changed how both our team and the product teams we support think about velocity.”

He added: “Interdisciplinary engineering, for us, is less about organisational charts and more about shared context. When platform and product teams work from the same understanding of the system, the target and the tools available, progress accelerates across the board.”

With that in mind, what skills and processes do engineers need to be on top of to ensure they are keeping pace with change across the sector?

Fundamentals and the future

For Conneely, the most in-demand skills in today’s engineering landscape are a combination of the fundamentals alongside the adoption of emerging technologies. He said, “We continue to prioritise deep capability in software engineering, quality engineering, cloud platforms and data, but increasingly value engineers who can use AI‑assisted tooling responsibly to improve productivity, quality and decision making.”

Engineers should also prioritise the ability to critically evaluate AI‑generated output, apply sound engineering judgement and understand where human oversight is essential, as well as adopt systems thinking, an automation-first approach and risk-based decision making, which he said are as important as framework or language expertise.

“Just as critical are communication skills, particularly in regulated environments where engineers must explain technical decisions, including AI usage, in clear business terms. As technology evolves, learning mindset and adaptability are now core competencies rather than nice‑to‑haves.”

Similarly, for Barik, the challenge is often in matching critical but older systems with newer, more advanced models and processes. He explained that the challenge is not just technical but also intuitive: you have to figure out whether you are actually making progress when the system is deeply coupled and cannot be taken offline.

He said, “We defined the target architecture upfront, not as an aspiration but as a concrete end state against which every decision is measured. From there, we decomposed the system into smaller subsystems with a roadmap of agreed milestones. Each milestone represents a discrete, verifiable unit of progress, a subsystem dialed down in the legacy platform and enabled in the new one. 

“Every pragmatic shortcut taken along the way is recorded as technical debt, so the team always knows exactly what remains rather than discovering it later. The most powerful measure of progress has been observability. By instrumenting both old and new systems, we track in real time what percentage of load is flowing through the new platform versus the legacy one. 

“A subsystem is not truly migrated until the traffic data confirms it. Progress is not a milestone ticked off, it is a measurable, visible shift in where the load is flowing.”
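The traffic-share measure described above can be reduced to a small sketch: request counters from both platforms are combined into a per-subsystem share, and a subsystem only counts as migrated once the new platform's observed share crosses a threshold. The subsystem names, counts and 99% threshold below are all invented for illustration.

```python
# Request counts observed over the same window on each platform
# (invented numbers standing in for real instrumentation).
LEGACY = {"payments": 120, "profiles": 0, "search": 900}
NEW = {"payments": 4880, "profiles": 5000, "search": 100}

def migration_share(subsystem: str) -> float:
    """Fraction of this subsystem's traffic flowing through the new platform."""
    total = LEGACY[subsystem] + NEW[subsystem]
    return NEW[subsystem] / total if total else 0.0

def is_migrated(subsystem: str, threshold: float = 0.99) -> bool:
    # "Not truly migrated until the traffic data confirms it."
    return migration_share(subsystem) >= threshold

for name in LEGACY:
    print(f"{name}: {migration_share(name):.1%} on new platform, migrated={is_migrated(name)}")
```

The design choice worth noting is that progress is defined by observed load, not by a milestone flag someone sets by hand.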

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

A ‘pound of flesh’ from data centers: one senator’s answer to AI job losses

The signs that AI could lead to mass job displacement are already piling up: entry-level job postings in the U.S. have sunk 35% since 2023, mass layoffs have swept across Big Tech, and even AI leaders themselves are warning about what’s coming. 

Backstage at the Axios AI Summit in Washington on Wednesday, Sen. Mark Warner (D-VA) said a venture capitalist recently told him he’s writing software investments down to zero in large part due to the strides of Anthropic’s Claude, and a major law firm told him it’s not hiring first-year associates because AI can now handle much of the work once assigned to junior lawyers.

Warner says the fear of AI-related job loss is “palpable,” even as data from one AI company suggests AI hasn’t yet started taking jobs. As those fears grow, they’re bleeding over into a different fight: who should foot the bill.

Warner has a proposal: tax the data centers powering the AI boom and use that revenue to help workers through the transition. He hasn’t introduced legislation yet, but the idea is gaining urgency as public anger toward AI and data centers grows.

Across the U.S., there’s been pushback on data centers, including a bill introduced on Wednesday by Sen. Bernie Sanders (I-VT) and Rep. Alexandria Ocasio-Cortez (D-NY) calling for a data center moratorium. The loudest concerns are about noise, pollution, and rising electricity costs. But there’s a bubbling resentment underneath those concerns: a resistance to suffering the potential ill effects of having a data center in your backyard that powers the technology some fear will replace workers.

Warner doesn’t plan to support his colleagues’ bill. On stage at the event, he said: “A data center moratorium simply means China is gonna move quicker, and this is one where we can’t lose.”

There’s no stuffing the genie back into the bottle when it comes to AI and data centers, he added. And while Warner believes in strict requirements that ensure data centers don’t pass their water and power costs to residents, he told TechCrunch he thinks there’s another way for communities to extract their “pound of flesh” in a way that addresses the underlying job loss fears. 

“I’ve thought for a long time there’s an obligation from the industry to help figure this out and help pay for it, but one of the questions I was asking was, Who should pay?” Warner told TechCrunch. “Should it be the chip makers, Jensen [Huang, Nvidia’s CEO]? Should it be the large language model companies? Should it be the Goldman Sachs of the world who are using these tools to cut back on a number of first-year associates?”

Ultimately, he said, he thinks the “easiest place to extract the pound of flesh is probably going to be from the data centers.”

That could look like putting data center tax revenue toward training for new nurses or funding AI upskilling programs — so long as there’s a “tangible benefit to communities” as they navigate this economic transition AI companies have foisted on them. 

Warner sees it as a way to balance the need to build data centers with some obligation to the communities bearing their costs.

The idea is not without precedent. Warner pointed to Henrico County, Virginia, which used the tax revenue from a local data center to kickstart a new affordable housing project.

Finding a way to connect data centers to a tangible benefit to the community will be essential, he says, because otherwise, “the pitchforks are coming out.”

The public mood suggests he could be on to something. According to a recent NBC News poll, AI has a lower public approval rating than Immigration and Customs Enforcement (ICE), with 46% of registered voters viewing AI negatively compared to only 26% viewing it positively. In Virginia, that is playing out in a proposal to repeal the state’s tax breaks for data center buildouts, which cost the state and localities nearly $2 billion a year in lost tax revenue in one of the world’s largest data center markets. Warner says other states might follow suit. 

AI and data centers, he said, are “easy to demonize.”

Legal-tech start-up Harvey valued at $11bn after new raise of $200m

Harvey’s platform uses AI agents to reduce manual effort for lawyers by running complete workflows for high-volume and increasingly complex tasks.

AI legal-tech start-up Harvey has raised $200m at a valuation of $11bn.

The new funds will be used to further develop the company’s AI agents for legal firms and in-house legal departments, and grow the engineering teams that support them.

The funding round was co-led by returning investors GIC and Sequoia, with participation from existing investors Andreessen Horowitz, Coatue, Conviction Partners, Elad Gil, Evantic and Kleiner Perkins.

The company has now raised more than $1bn to date.

“AI isn’t just assisting lawyers. It’s becoming the system through which legal work gets done,” said Winston Weinberg, CEO and co-founder of Harvey.

“The law firms and in-house teams leading the way are building agents that execute complex workflows so lawyers can focus on judgement, strategy and outcomes.”

The company said it runs more than 25,000 custom agents executing work in fields such as contracts, compliance, litigation, due diligence, and mergers and acquisitions.

“Harvey has become the platform on which legal work runs,” said Pat Grady, partner at Sequoia.

“More than 100,000 lawyers around the world run their most critical work on Harvey, and we believe it’s positioned to become one of the most important companies of the next decade.”

Harvey was founded in 2022 and is based in San Francisco. It claims more than 1,300 customers – including “global law firms and Fortune 500 enterprises” – in more than 60 countries around the world.

In January, Harvey began hiring for roles at a new Dublin office. At the end of last year, the company was valued at $8bn.

The legal-tech start-up sector is a lively one at the moment.

Two weeks ago, Swedish player Legora announced a Series D raise of $550m, bringing the company’s valuation to $5.55bn.

Last November, Canadian company Clio closed a $500m Series G funding round, taking it to a $5bn valuation, and also unveiled its plans for an office in Dublin.

Norwegian software company Newcode will also open a Dublin office after raising more than $6.5m this week, adding to its existing locations in the US and Europe.

And last November, Ireland and UK-based company TrialView secured $4.1m in a growth funding round led by Elkstone Ventures.

Supreme Court rules ISPs aren't liable for user piracy without intent

In a unanimous judgment for Cox Communications, the Court ruled that an ISP is contributorily liable for user infringement “only if it intended that the provided service be used for infringement,” and that intent can be shown “only if the party induced the infringement or the provided service is tailored…

Reddit Takes On Bots With ‘Human Verification’ Requirements

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears “fishy,” and that it is “not conducting sitewide human verification.” TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors — like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
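One of those account-level signals, how quickly an account attempts to post, can be illustrated with a toy heuristic: flag accounts whose typical gap between posts is implausibly short for a human. The five-second threshold and the sample timestamps are invented; Reddit's actual tooling is unspecified and presumably weighs many signals together.

```python
from statistics import median

def looks_fishy(post_times: list[float], min_gap_s: float = 5.0) -> bool:
    """Flag accounts whose median gap between posts is under min_gap_s seconds."""
    if len(post_times) < 3:
        return False  # too little activity to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return median(gaps) < min_gap_s

# Timestamps in seconds since the account's first post.
human = [0.0, 300.0, 1500.0, 4000.0]   # minutes-to-hours apart
bot = [0.0, 1.1, 2.0, 3.2, 4.0, 5.1]   # roughly one second apart

print(looks_fishy(human))  # False
print(looks_fishy(bot))    # True
```

Using the median rather than the mean keeps one long pause in an otherwise machine-speed posting run from masking the pattern.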

To verify an account is human, Reddit will leverage third-party tools like passkeys from Apple, Google, YubiKey, and other third-party biometric services, like Face ID or even Sam Altman’s World ID — or, in some countries, the use of government IDs. Reddit notes this last category may be required in some countries like the U.K. and Australia and some U.S. states, because of local regulations on age verification, but it’s not the company’s preferred method. “If we need to verify an account is human, we’ll do it in a privacy-first way,” Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. “Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn’t have to sacrifice one for the other.”

15 jobs to go in Meta’s Irish operations as global cuts announced

Just this week, Meta was found to be enabling social media addiction and endangering children on its platforms.

Meta has begun laying off several hundred employees globally, as the company continues to redirect priorities towards AI.

Some news publications have placed the total number of layoffs globally at 700. According to reports, affected departments include Reality Labs, Facebook, global operations, recruiting and sales.

The tech giant employs nearly 79,000 globally, with around 1,800 in Ireland spread across 80 teams. SiliconRepublic.com understands that around 15 jobs were impacted in Ireland, with no roles in Reality Labs affected – which, The Information reports, is expected to be hit hard globally.

“Teams across Meta regularly restructure or implement changes to ensure they’re in the best position to achieve their goals,” a Meta spokesperson told SiliconRepublic.com. “Where possible, we are finding other opportunities for employees whose positions may be impacted.”

Meanwhile, as the company lays off hundreds, a stock option for its key leaders announced on 24 March could see some of them increase their compensation by more than $900m over the next five years.

Earlier this month, Reuters reported that Meta was planning to cut 20pc or more of the company’s global workforce. Meta called this a “speculative report about theoretical approaches.” It is understood that the latest organisational changes are unrelated to Reuters’ story.

Reports from January 2026 suggested that Meta could cut 10pc of its Reality Labs division, which employs roughly 15,000. In December, it was speculated that the company would be reducing the budget and cutting staff in its ‘metaverse’ sections.

The layoffs highlight a strong shift in how Big Tech companies are approaching work and productivity. In January, Meta CEO Mark Zuckerberg said that 2026 might be the year “AI starts to dramatically change the way that we work.

“We’re starting to see projects that used to require big teams now be accomplished by a single very talented person,” he said.

Meta’s not alone in this – Atlassian, Amazon and Block have all laid off thousands in recent months as slimmer teams and AI tools take the industry by storm. Oracle could also cut thousands of jobs to funnel funds into its AI data centre expansion efforts.

The Instagram, WhatsApp and Facebook parent lost two landmark lawsuits this past week, with critics hailing this as Big Tech’s ‘Big Tobacco moment’.

Earlier this week, a New Mexico jury found that Meta endangered children by misleading users about the safety of its platforms, while yesterday, a Los Angeles jury found that Instagram and YouTube design their platforms to addict young users.

However, the $1.5trn company is facing penalties of less than $380m for both lawsuits combined.

Sony won’t bring back the Vita, but Anbernic did

Sony seems to have moved on from the PlayStation Vita, but its influence clearly hasn’t gone anywhere.

Anbernic has just unveiled the new RG Vita and RG Vita Pro, two handheld gaming consoles with a design inspired by the PS Vita. From the wide layout to the button placement and overall aesthetic, these pay homage to Sony’s last true portable console.

But these aren’t one-to-one copies; rather, they serve as a modern take on the Vita idea.

Everything you need to know about the Vitas

The lineup consists of two variants, namely the RG Vita and RG Vita Pro.

The standard Vita is a more affordable option that features a 5.46-inch IPS display with 720p resolution, powered by a Unisoc T618 chipset paired with 3GB of RAM and 64GB of storage. The RG Vita Pro steps things up with a slightly taller 1080p IPS display, a more capable Rockchip RK3576 processor and 4GB of RAM, along with the same expandable storage support via microSD.

Both models are powered by a 5,000mAh battery that promises several hours of gameplay.

Built for retro, but doesn’t stick to the past

Anbernic’s new RG Vita series is a throwback to a great age in gaming, but it isn’t just about nostalgia.

The consoles support Android (and Linux on the Pro), which allows them to run Android games and emulators for consoles like the PS2, PSP, GameCube, and more, making them a lot more versatile than their original inspiration. Anbernic is also adding modern touches like WiFi, Bluetooth, USB-C output, and AI-based features such as real-time translation and in-game assistance tools.

That said, this isn’t aiming to be a true successor to the PS Vita. Performance is aimed more at emulation and casual Android gaming than at running modern AAA titles.

Anbernic has yet to confirm the official pricing, but the devices are expected to land in the budget to mid-range handheld category.

Sony wants to mount your phone on a DualSense controller, and it could change how you game

Sony wants to use your phone as a secondary input for a PlayStation controller, and it might actually change how we play games. 

Gaming controllers have come a long way, but let’s be honest, they haven’t changed that much at all. Sure, we got haptic feedback, adaptive triggers, and TMR sensors, but the core design and gameplay have remained the same for decades. Sony might be about to change that, and the solution is your phone.

As reported by CheatHappens, a newly discovered Sony patent describes a hybrid input system that attaches your smartphone to a PlayStation controller using a magnetic attachment unit. 

The phone essentially becomes a second controller, giving developers access to its cameras, gyroscope, touchscreen, and other sensors to create entirely new gameplay experiences.

What’s the need for this patent?

The patent makes an interesting argument. Traditional controllers are excellent for certain game genres, such as racing titles, where physical buttons and triggers shine, but they’re not ideal for first-person shooters.

By mounting a phone onto the controller, developers get access to a much wider variety of inputs, making the hybrid system more versatile across all game genres.

The possibilities are exciting. Developers could use the phone’s camera for in-game avatar customization, leverage motion sensors for spatial awareness, or display extra gameplay data directly on the phone.

Is this just a concept or could it become a reality?

That’s the big question. Sony has filed several unconventional patents in recent years, and most of them haven’t progressed beyond the filing stage. It’s not just Sony: on average, only 2–5% of filed patents actually materialize into a real product, so the odds are not in its favor.

However, this patent has several advantages that could help it reach the market. It doesn’t require new hardware, the attachment mechanism should be straightforward, and the potential benefits for gamers are real. 

If Sony can make this work, it could genuinely add more depth to console gaming without asking players to buy an extra accessory.

3D Printed Wire Stripper Uses PLA Blades

One might think that [Da_Rius]’s mostly 3D printed wire stripper would count its insulation-shearing blades among the small number of metal parts required, but that turns out not to be the case. The blades are actually printed in PLA and seem to work just fine for this purpose. (We imagine they need somewhat frequent replacement, but still.)

Proper wire strippers are one of the most useful tools for a budding electronics enthusiast, because stripping hookup wire is a common task and purpose-built strippers make for quick and consistent results.

As far as tools go they are neither particularly expensive nor difficult to source, but making one’s own has a certain appeal to it. The process of assembling the tool is doubtless a rewarding one, and it looks like it results in a pretty good conversation starter if nothing else.

As mentioned, the tool is mostly 3D printed and does require some metal parts: fasteners, heat-set inserts, and a couple springs. Metal nuts and heat-set inserts are easy enough to obtain, but springs of particular size and shape are a bit trickier.

It is perfectly possible to make custom springs, and as it happens [Da_Rius] already has that covered with a separate project for using a hex key and printed jig to make exactly the right shapes and sizes from pre-tempered spring wire.
