Tech

AI joins the 8-hour workday as Z.ai ships open-source GLM-5.1, beating Opus 4.6 and GPT-5.4 on SWE-Bench Pro

Is China picking up the open source AI baton once again?

Z.ai, also known as Zhipu AI, a Chinese AI startup best known for its powerful open-source GLM family of models, unveiled GLM-5.1 today under a permissive MIT License, allowing enterprises to download it from Hugging Face, customize it, and use it for commercial purposes.

This follows last month's release of GLM-5 Turbo, a faster version available only under a proprietary license.

The new GLM-5.1 is designed to work autonomously for up to eight hours on a single task, marking a definitive shift from vibe coding to agentic engineering.

The release represents a pivotal moment in the evolution of artificial intelligence. While competitors have focused on increasing reasoning tokens for better logic, Z.ai is optimizing for productive horizons.

GLM-5.1 is a 754-billion parameter Mixture-of-Experts model engineered to maintain goal alignment over extended execution traces that span thousands of tool calls.

“agents could do about 20 steps by the end of last year,” wrote z.ai leader Lou on X. “glm-5.1 can do 1,700 rn. autonomous work time may be the most important curve after scaling laws. glm-5.1 will be the first point on that curve that the open-source community can verify with their own hands. hope y’all like it^^”

In a market increasingly crowded with fast models, Z.ai is betting on the marathon runner. The company, which listed on the Hong Kong Stock Exchange in early 2026 with a market capitalization of $52.83 billion, is using this release to cement its position as the leading independent developer of large language models in the region.

Technology: the staircase pattern of optimization

GLM-5.1's core technological breakthrough isn't just its scale, though its 754 billion parameters and 202,752-token context window are formidable; it's the model's ability to avoid the plateau effect seen in previous models.

In traditional agentic workflows, a model typically applies a few familiar techniques for quick initial gains and then stalls. Giving it more time or more tool calls usually results in diminishing returns or strategy drift.

Z.ai research demonstrates that GLM-5.1 operates via what they call a staircase pattern, characterized by periods of incremental tuning within a fixed strategy punctuated by structural changes that shift the performance frontier.

In Scenario 1 of their technical report, the model was tasked with optimizing a high-performance vector database, a challenge known as VectorDBBench.

VectorDBBench graphic from z.ai for GLM-5.1. Credit: z.ai

The model is provided with a Rust skeleton and empty implementation stubs, then uses tool-call-based agents to edit code, compile, test, and profile. While previous state-of-the-art results from models like Claude Opus 4.6 reached a performance ceiling of 3,547 queries per second, GLM-5.1 ran through 655 iterations and over 6,000 tool calls. The optimization trajectory was not linear but punctuated by structural breakthroughs.

At iteration 90, the model shifted from full-corpus scanning to IVF cluster probing with f16 vector compression, which reduced per-vector bandwidth from 512 bytes to 256 bytes and jumped performance to 6,400 queries per second.

By iteration 240, it autonomously introduced a two-stage pipeline involving u8 prescoring and f16 reranking, reaching 13,400 queries per second. Ultimately, the model identified and cleared six structural bottlenecks, including hierarchical routing via super-clusters and quantized routing using centroid scoring via VNNI. These efforts culminated in a final result of 21,500 queries per second, roughly six times the best result achieved in a single 50-turn session.
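The two-stage design the model converged on can be illustrated in miniature. The sketch below is a hypothetical pure-Python analogue of u8 prescoring followed by full-precision reranking, not Z.ai's actual Rust implementation: a cheap quantized score filters the corpus to a shortlist, and only that shortlist is rescored exactly.

```python
def quantize_u8(vec, lo=-1.0, hi=1.0):
    """Map float components into 0..255 so prescoring touches 1 byte per dim."""
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, int((x - lo) * scale))) for x in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def two_stage_search(query, corpus, shortlist=10, k=3):
    # Stage 1: cheap u8 prescoring over the whole corpus.
    q8 = quantize_u8(query)
    pre = sorted(range(len(corpus)),
                 key=lambda i: -dot(q8, quantize_u8(corpus[i])))[:shortlist]
    # Stage 2: full-precision rerank of the shortlist only.
    return sorted(pre, key=lambda i: -dot(query, corpus[i]))[:k]
```

Setting the shortlist to the full corpus degenerates to exact search, which makes a handy correctness check; in practice the shortlist is a small fraction of the corpus, so most of the scan bandwidth is spent on 1-byte rather than 4-byte components.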

This demonstrates a model that functions as its own research and development department, breaking complex problems down and running experiments with real precision.

The model also managed complex execution tightening, lowering scheduling overhead and improving cache locality. During the optimization of the Approximate Nearest Neighbor search, the model proactively removed nested parallelism in favor of a redesign using per-query single-threading and outer concurrency.

When the model encountered iterations where recall fell below the 95 percent threshold, it diagnosed the failure, adjusted its parameters, and implemented parameter compensation to recover the necessary accuracy. This level of autonomous correction is what separates GLM-5.1 from models that simply generate code without testing it in a live environment.
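That recover-and-retry behaviour amounts to a feedback loop: measure recall, and if it misses the threshold, spend more compute until it clears. Below is a minimal sketch of such a compensation loop, assuming an IVF-style index where probing more clusters (`nprobe`) trades throughput for recall; the callback stands in for a real benchmark run.

```python
def tune_nprobe(measure, target_recall=0.95, nprobe=8, max_nprobe=256):
    """Double the IVF probe count until measured recall clears the threshold.

    `measure(nprobe)` returns the recall achieved at that setting.
    """
    while nprobe <= max_nprobe:
        recall = measure(nprobe)
        if recall >= target_recall:
            return nprobe, recall
        nprobe *= 2  # compensate: probe more clusters, trading speed for recall
    raise RuntimeError("recall target unreachable within probe budget")
```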

KernelBench: pushing the machine learning frontier

The model’s endurance was further tested in KernelBench Level 3, which requires end-to-end optimization of complete machine learning architectures like MobileNet, VGG, MiniGPT, and Mamba.

In this setting, the goal is to produce a faster GPU kernel than the reference PyTorch implementation while maintaining identical outputs. Each of the 50 problems runs in an isolated Docker container with one H100 GPU and is limited to 1,200 tool-use turns. Correctness and performance are evaluated against a PyTorch eager baseline in separate CUDA contexts.

The results highlight a significant performance gap between GLM-5.1 and its predecessors. While the original GLM-5 improved quickly but leveled off early at a 2.6x speedup, GLM-5.1 sustained its optimization efforts far longer. It eventually delivered a 3.6x geometric mean speedup across 50 problems, continuing to make useful progress well past 1,000 tool-use turns.
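A geometric mean is the standard way to aggregate per-problem speedups, since it balances ratios symmetrically: a 2x gain and a 0.5x regression cancel out. The helper below shows how a figure like 3.6x would be computed from per-problem timings; the timing values in the test are illustrative, not from the benchmark.

```python
import math

def geomean_speedup(baseline_ms, kernel_ms):
    """Geometric mean of per-problem speedups (baseline time / kernel time)."""
    ratios = [b / k for b, k in zip(baseline_ms, kernel_ms)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```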

Although Claude Opus 4.6 remains the leader in this specific benchmark at 4.2x, GLM-5.1 has meaningfully extended the productive horizon for open-source models.

This capability is not simply about having a longer context window; it requires the model to maintain goal alignment over extended execution, reducing strategy drift, error accumulation, and ineffective trial and error. One of the key breakthroughs is the ability to form an autonomous experiment, analyze, and optimize loop, where the model can proactively run benchmarks, identify bottlenecks, adjust strategies, and continuously improve results through iterative refinement.

All solutions generated during this process were independently audited for benchmark exploitation, ensuring the optimizations did not rely on specific benchmark behaviors but worked with arbitrary new inputs while keeping computation on the default CUDA stream.

Product strategy: subscription and subsidies

GLM-5.1 is positioned as an engineering-grade tool rather than a consumer chatbot. To support this, Z.ai has integrated it into a comprehensive Coding Plan ecosystem designed to compete directly with high-end developer tools.

The product offering is divided into three subscription tiers, all of which include free Model Context Protocol tools for vision analysis, web search, web reader, and document reading.

The Lite tier, at $27 per quarter, is positioned for lightweight workloads and offers three times the usage of a comparable Claude Pro plan. The Pro tier, at $81 per quarter, is designed for complex workloads, offering five times the Lite plan's usage and 40 to 60 percent faster execution.

The Max tier at $216 per quarter is aimed at advanced developers with high-volume needs, ensuring guaranteed performance during peak hours.

For those using the API directly or through platforms like OpenRouter or Requesty, Z.ai has priced GLM-5.1 at $1.40 per one million input tokens and $4.40 per million output tokens, with cached input tokens discounted to $0.26 per million.
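At those rates, per-request cost is straightforward to estimate. A small helper, assuming the cache discount applies to previously cached input tokens (the exact caching semantics are an assumption; check Z.ai's pricing page):

```python
def request_cost_usd(input_tokens, output_tokens, cached_tokens=0):
    """Estimate GLM-5.1 API cost at the listed rates (USD per million tokens)."""
    INPUT, OUTPUT, CACHED = 1.40, 4.40, 0.26  # published per-million rates
    fresh = input_tokens - cached_tokens       # input billed at the full rate
    return (fresh * INPUT + cached_tokens * CACHED + output_tokens * OUTPUT) / 1e6
```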

Prices in USD per one million tokens:

| Model | Input | Output | Total Cost | Source |
| --- | --- | --- | --- | --- |
| Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | xAI |
| MiniMax M2.7 | $0.30 | $1.20 | $1.50 | MiniMax |
| Gemini 3 Flash | $0.50 | $3.00 | $3.50 | Google |
| Kimi-K2.5 | $0.60 | $3.00 | $3.60 | Moonshot |
| MiMo-V2-Pro (≤256K) | $1.00 | $3.00 | $4.00 | Xiaomi MiMo |
| GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai |
| GLM-5-Turbo | $1.20 | $4.00 | $5.20 | Z.ai |
| GLM-5.1 | $1.40 | $4.40 | $5.80 | Z.ai |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic |
| Qwen3-Max | $1.20 | $6.00 | $7.20 | Alibaba Cloud |
| Gemini 3 Pro | $2.00 | $12.00 | $14.00 | Google |
| GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI |
| GPT-5.4 | $2.50 | $15.00 | $17.50 | OpenAI |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 | Anthropic |
| GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | OpenAI |

Notably, the model consumes quota at three times the standard rate during peak hours, which are defined as 14:00 to 18:00 Beijing Time daily, though a limited-time promotion through April 2026 allows off-peak usage to be billed at a standard 1x rate. Complementing the flagship is the recently debuted GLM-5 Turbo.
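The peak multiplier is easy to model. A sketch, assuming the 3x rate applies to any usage falling inside the 14:00-18:00 window (how Z.ai prorates requests that straddle the boundary is not specified):

```python
from datetime import time

PEAK_START, PEAK_END = time(14, 0), time(18, 0)  # daily peak window, Beijing Time

def quota_consumed(units, at):
    """Quota drawn for `units` of usage at Beijing-local time `at`:
    3x inside the peak window, 1x otherwise."""
    return units * (3 if PEAK_START <= at < PEAK_END else 1)
```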

While 5.1 is the marathon runner, Turbo is the sprinter, proprietary and optimized for fast inference and tasks like tool use and persistent automation.

At $1.20 per million input tokens and $4 per million output tokens, it is more expensive than the base GLM-5 but more affordable than the new GLM-5.1, positioning it as a commercially attractive option for high-speed, supervised agent runs.

The model is also packaged for local deployment, supporting inference frameworks including vLLM, SGLang, and xLLM. Comprehensive deployment instructions are available at the official GitHub repository, allowing developers to run the 754 billion parameter MoE model on their own infrastructure.

For enterprise teams, the model includes advanced reasoning capabilities that can be accessed via a thinking parameter in API requests, allowing the model to show its step-by-step internal reasoning process before providing a final answer.
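In practice that looks like an extra field on the chat-completion request. The payload below is a sketch: the exact shape of the `thinking` field is an assumption based on Z.ai's published API conventions, so verify against the current docs before relying on it.

```python
import json

# Hypothetical request body for GLM-5.1 with reasoning surfaced.
# The "thinking" field name and shape are assumptions -- confirm in Z.ai's API docs.
payload = {
    "model": "glm-5.1",
    "messages": [
        {"role": "user", "content": "Profile and optimize this Rust hot loop."}
    ],
    "thinking": {"type": "enabled"},  # ask for step-by-step reasoning before the answer
}
body = json.dumps(payload)
```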

Benchmarks: a new global standard

The performance data for GLM-5.1 suggests it has leapfrogged several established Western models in coding and engineering tasks.

SWE-Bench Pro

SWE-Bench Pro benchmark comparison chart showing GLM-5.1 leading other major models. Credit: z.ai

On SWE-Bench Pro, which evaluates a model’s ability to resolve real-world GitHub issues using an instruction prompt and a 200,000 token context window, GLM-5.1 achieved a score of 58.4. For context, this outperforms GPT-5.4 at 57.7, Claude Opus 4.6 at 57.3, and Gemini 3.1 Pro at 54.2.

Beyond standardized coding tests, the model showed significant gains in reasoning and agentic benchmarks. It scored 63.5 on Terminal-Bench 2.0 when evaluated with the Terminus-2 framework and reached 66.5 when paired with the Claude Code harness.

On CyberGym, it achieved a 68.7 score based on a single-run pass over 1,507 tasks, demonstrating a nearly 20-point lead over the previous GLM-5 model. The model also performed strongly on the MCP-Atlas public set with a score of 71.8 and achieved a 70.6 on the T3-Bench.

In the reasoning domain, it scored 31.0 on Humanity's Last Exam, which jumped to 52.3 when the model was allowed to use external tools. On the AIME 2026 math competition benchmark, it reached 95.3, while scoring 86.2 on GPQA-Diamond for expert-level science reasoning.

The most impressive anecdotal benchmark was the Scenario 3 test: building a Linux-style desktop environment from scratch in eight hours.

Unlike previous models that might produce a basic taskbar and a placeholder window before declaring the task complete, GLM-5.1 autonomously filled out a file browser, terminal, text editor, system monitor, and even functional games.

It iteratively polished the styling and interaction logic until it had delivered a visually consistent, functional web application. This serves as a concrete example of what becomes possible when a model is given the time and the capability to keep refining its own work.

Licensing and the open-source strategy

The licensing of these two models tells a larger story about the current state of the global AI market. GLM-5.1 has been released under the MIT License, with its model weights made publicly available on Hugging Face and ModelScope.

This follows Z.ai's historical strategy of using open-source releases to build developer goodwill and ecosystem reach. However, GLM-5 Turbo remains proprietary and closed source. This reflects a growing trend among leading AI labs toward a hybrid model: using open-source releases for broad distribution while keeping execution-optimized variants behind a paywall.

Industry analysts note that this shift arrives amidst a rebalancing in the Chinese market, where heavyweights like Alibaba are also beginning to segment their proprietary work from their open releases.

Z.ai CEO Zhang Peng appears to be navigating this by ensuring that while the flagship’s core intelligence is open to the community, the high-speed execution infrastructure remains a revenue-driving asset.

The company is not explicitly promising to open-source GLM-5 Turbo itself, but says the findings will be folded into future open releases. This segmented strategy helps drive adoption while allowing the company to build a sustainable business model around its most commercially relevant work.

Community and user reactions: crushing a week’s work

The developer community response to the GLM-5.1 release has been overwhelmingly focused on the model’s reliability in production-grade environments.

User reviews suggest a high degree of trust in the model’s autonomy.

One developer noted that GLM-5.1 shocked them with how good it is, stating it seems to do what they want more reliably than other models with less reworking of prompts needed. Another developer mentioned that the model’s overall workflow from planning to project execution performs excellently, allowing them to confidently entrust it with complex tasks.

Specific case studies from users highlight significant efficiency gains.

A user from Crypto Economy News reported that a task involving preprocessing code, feature selection logic, and hyperparameter tuning solutions, which would ordinarily have taken a week, was completed in just two days. Other developers have noted that since subscribing to the GLM Coding Plan, they can operate more freely and focus on core development without worrying that resource shortages will hinder progress.

On social media, the launch announcement generated over 46,000 views in its first hour, with users captivated by the eight-hour autonomous claim. The sentiment among early adopters is that Z.ai has successfully moved past the hallucination-heavy era of AI into a period where models can be trusted to optimize themselves through repeated iteration.

The ability to build four applications rapidly through correct prompting and structured planning has been cited by multiple users as a game-changing development for individual developers.

The implications of long-horizon work

The release of GLM-5.1 suggests that the next frontier of AI competition will not be measured in tokens per second, but in autonomous duration.

If a model can work for eight hours without human intervention, it fundamentally changes the software development lifecycle.

However, Z.ai acknowledges that this is only the beginning. Significant challenges remain, such as developing reliable self-evaluation for tasks where no numeric metric exists to optimize against.

Escaping local optima earlier when incremental tuning stops paying off is another major hurdle, as is maintaining coherence over execution traces that span thousands of tool calls.

For now, Z.ai has put down a marker. With GLM-5.1, it has delivered a model that doesn't just answer questions, but finishes projects. The model is already compatible with a wide range of developer tools including Claude Code, OpenCode, Kilo Code, Roo Code, Cline, and Droid.

For developers and enterprises, the question is no longer, “what can I ask this AI?” but “what can I assign to it for the next eight hours?”

The focus of the industry is clearly shifting toward systems that can reliably execute multi-step work with less supervision. This transition to agentic engineering marks a new phase in the deployment of artificial intelligence within the global economy.

Tech

Golf star Bryson DeChambeau leads acquisition of Seattle-area startup Sportsbox AI

Bryson DeChambeau swings while the Sportsbox AI app captures his motion on a smartphone. (Sportsbox AI Photo)

First Bryson DeChambeau used Sportsbox AI to win a major. Then he invested in the Bellevue, Wash.-based startup. Now he’s taking a swing at the entire company.

DeChambeau, the two-time U.S. Open champion and one of golf’s most tech-obsessed players, is leading a group of investors that has acquired Sportsbox AI, the startup that uses AI and 3D motion capture to analyze golf swings from smartphone video.

Alongside Tuesday morning's announcement, the company unveiled SAMI, an upcoming agentic AI coaching assistant powered by Google Cloud that's designed to translate the app's swing data into personalized, conversational coaching advice.

As part of the partnership, DeChambeau will also carry the Google Cloud logo on his golf bag at the Masters and future tournaments — reportedly the first time the Google Cloud brand has appeared on a professional golfer’s bag.

“This is about making golf more accessible, especially premium coaching,” DeChambeau said in the acquisition announcement, saying they’re “building something that brings real coaching to anyone with a smartphone, not just elite players. That’s what gets me fired up.”

Financial details: DeChambeau, who is preparing to compete in the Masters later this week, told Bloomberg the transaction is worth eight figures, without being more specific. 

Sportsbox had raised more than $9 million, GeekWire previously reported. It was last valued at $41 million in a March 2023 seed round, according to PitchBook.

The press release announcing the acquisition describes the buyers as a group of investors led by DeChambeau but does not name the other members. 

Co-founders Jeehae Lee and Samuel Menaker will continue to run Sportsbox, a spokesperson confirmed. The company’s roughly 30 employees will stay on, and Sportsbox will remain headquartered in Bellevue, though many employees work remotely.

PitchBook lists 19 sellers who fully exited in the deal, including Elysian Park Ventures, the PGA of America, pro golfer Michelle Wie West, golf instructor David Leadbetter, Randi Zuckerberg, and Twitch co-founder Kevin Lin.

Backstory: Sportsbox launched in 2020 as a spinoff of AI Thinktank, a Bellevue-based incubator founded by Mike and Rich Kennewick, the brothers behind Voicebox Technologies, an early speech recognition company.

Lee, the CEO, is a former LPGA Tour player who previously led strategy and business development at Topgolf. Menaker, the CTO, was VP of engineering at Voicebox.

The app uses a smartphone camera to create a 3D model of a golfer’s swing and measure hundreds of data points that would otherwise require an expensive motion-capture studio. 

Sportsbox generates revenue through coaching subscriptions and a consumer tier for golfers at $15.99 per month or $110 per year.

DeChambeau’s connection: In the week leading up to the 2024 U.S. Open at Pinehurst, DeChambeau used Sportsbox to identify and fix a slight miss to the right in his shots. He gave the company a shout-out at his winner’s press conference and soon after joined as an investor.

SAMI — short for Sportsbox AI Motion Intelligence — is the next step.

Built on Google’s Gemini models, it’s designed to act as a conversational AI coach, interpreting the app’s 3D biomechanical data and delivering personalized advice. The press release describes it as moving Sportsbox from a passive measurement tool to a proactive AI agent.

SAMI is currently in beta, and the company said it will begin rolling out agentic AI features throughout the second quarter, starting this week with AI-generated highlights available to subscribers of its 3D Player and 3D Player Plus tiers on iOS.

DeChambeau told Bloomberg he’s been using the technology ahead of the Masters and plans to keep using it during and after the tournament. But he said it isn’t meant to replace coaches. 

“The camera and the phone are only going to tell you so much,” he told Bloomberg. “They can’t make you feel what you’re doing.”

Tech

Trump’s FY27 budget would cut $700M from CISA and kill election security

Published

on

In short: The Trump administration’s FY2027 budget proposes cutting $707 million from CISA, eliminating the agency’s election security programme entirely and shedding 860 positions, a dramatic escalation that would reduce the country’s primary civilian cybersecurity agency to a $2 billion operation after a year already defined by DOGE-driven layoffs and mass departures.

The United States’ central civilian cybersecurity agency has lost roughly a third of its workforce over the past 14 months. Its red team has been dissolved. Scores of staff working on election security, incident response, and continuous monitoring were fired by the Department of Government Efficiency in early 2025, then partially reinstated under court order, then placed on paid leave in legal limbo. Against that backdrop, the Trump administration released its FY2027 budget request on 7 April 2026, proposing to cut a further $707 million from the Cybersecurity and Infrastructure Security Agency, a reduction the White House frames as a long-overdue refocusing on the agency’s core mission and critics describe as an act of deliberate dismantlement.

The proposed cuts amount to approximately $700 million in programme eliminations, producing a net reduction of around $360 million once internal transfers and targeted new hires are factored in. If enacted, CISA’s operating budget would fall to roughly $2 billion, down from the approximately $3 billion it received when the current administration took office. The budget also projects eliminating around 867 positions, partially offset by transfers into the agency, for a net workforce reduction of approximately 860 roles.

What would disappear

The most politically conspicuous cut is the outright elimination of CISA’s election security programme. The proposal would end CISA’s funding for the Elections Infrastructure Information Sharing and Analysis Center, known as EI-ISAC, which serves as the primary hub for sharing cyber threat intelligence, ransomware alerts, and incident response resources with state and local election offices. It would also remove dedicated election security advisors stationed across the country and terminate the information-sharing support CISA has provided to state and local election officials since the agency’s founding in 2018. Those advisors have been the first point of contact for county clerks and election administrators facing phishing attacks, foreign probing of registration databases, and disinformation campaigns targeting election infrastructure.

Beyond elections, the proposal would substantially scale back CISA’s stakeholder engagement function, eliminating offices responsible for coordinating with private-sector infrastructure operators and managing the agency’s international affairs partnerships. Workforce development programmes and what the budget characterises as “duplicative” state and local cyber funding streams would also be cut. The proposal shifts more responsibility for certain infrastructure security and emergency communications programmes directly to state and local governments, though it does not specify additional funding to those governments to absorb the transfer.

The White House’s argument

The administration’s budget justification is pointed in its language. The document states that “CISA was more focused on censorship than on protecting the nation’s critical systems, and put them at risk due to poor management and inefficiency, as well as a focus on self-promotion.” The proposed reductions, it argues, “refocus CISA on its core mission” of securing the federal civilian network and helping critical infrastructure operators defend against cyberattacks and physical threats.

The censorship framing refers primarily to CISA’s now-disbanded counter-disinformation work, including a unit that coordinated with social media companies on election-related content moderation during the 2020 and 2022 election cycles. That work was shut down after Republican criticism and subsequent litigation. Sean Plankey, Trump’s nominee to lead CISA, addressed the issue directly during his confirmation hearings. “It is not CISA’s job, and nor is it in its authorities, to censor or determine the truths,” Plankey said, adding that the agency would not pursue such work under his leadership. Plankey also pledged to “rebuild and refocus” CISA, emphasising that his goal would be to “empower the operators to operate”, referring to the private sector entities responsible for critical infrastructure. Plankey has not yet been confirmed by the Senate.

A year already defined by cuts

The FY27 proposal lands on an agency that has spent the past year contracting sharply. When Trump returned to office in January 2025, CISA had approximately 3,300 employees. By December 2025, that figure had fallen to roughly 2,400, a loss of nearly 900 people. The departures came through a combination of voluntary exits during the deferred resignation programme, probationary staff terminations, and direct DOGE action. In late February and early March 2025, DOGE terminated contracts and fired staff in waves that eliminated CISA’s entire red team, more than 80 employees working on continuous monitoring, and between 30 and 50 incident response staff. A federal judge subsequently ordered the reinstatement of probationary employees, but reinstated staff were placed on paid administrative leave rather than returned to active duties.

The red team cuts drew particular alarm from security professionals, since red-team exercises, in which agency staff simulate real-world attacks against government networks to identify vulnerabilities before adversaries do, are among the most operationally consequential work any cybersecurity organisation undertakes. Removing that capability does not just reduce CISA’s headcount; it eliminates a specific function that cannot simply be assumed by the remaining staff. The governance of AI-assisted cybersecurity tools across critical infrastructure has become a defining challenge for 2026, and the debate about CISA’s role sits at its centre: the agency was positioned to set standards and share threat intelligence precisely as those questions become most consequential.

Congressional pushback, and its limits

The proposed $707 million cut represents a sharp escalation from the administration’s FY26 request, which sought approximately $490 million in reductions. Congressional resistance at that stage, including from Republican committee members who considered the cuts excessive, ultimately narrowed the actual reductions to somewhere between $130 million and $300 million. Whether that resistance holds in the current budget environment is uncertain.

The sharpest opposition came from the Democratic side of the aisle. Representative Bennie Thompson, Democrat of Mississippi and the ranking member on the House Homeland Security Committee, rejected both the scale of the proposed cuts and the administration’s framing. “Like the President’s cyber strategy, the President’s CISA budget reflects his utter lack of understanding of the urgency of the cyber threats we face and how to mobilize the government to help confront them,” Thompson said in a statement. Citing the threat environment that has intensified in recent months, Thompson added: “There is nothing that justifies a reckless $700 million cut to CISA, particularly at a time of heightened tensions with Iran and an increasingly aggressive China.”

Thompson said he was “committed to working with colleagues to push back against these cuts” and to ensuring the government can protect federal and critical infrastructure networks. Separately, bipartisan legislation introduced earlier in 2026 would require CISA to maintain “sufficient” staffing levels, though the bill has not advanced to a vote.

What $2 billion buys, and what it doesn’t

The cuts do not eliminate CISA. Under the proposed budget, the agency would retain its core federal network security functions, its role supporting critical infrastructure operators, and some capacity for coordination with the private sector. The Einstein intrusion detection system and the Continuous Diagnostics and Mitigation programme for federal civilian networks are expected to survive. What the budget removes is the outward-facing, partnership-intensive layer of CISA’s operations: the work with state and local governments, the election security apparatus, the international engagement, and the stakeholder advisory infrastructure that has grown since the agency’s founding.

The commercial cybersecurity sector is watching closely. CISA has historically been a significant source of free threat intelligence, vulnerability advisories, and incident response support for smaller organisations and local governments that cannot afford enterprise-grade security tools. As the AI-driven expansion of the threat landscape accelerated through 2025, the agency’s advisories on vulnerabilities in industrial control systems and critical infrastructure became more, not less, relied upon by the operators responsible for power grids, water systems, and financial networks. The proposed cuts do not formally end that advisory function, but an agency operating at $2 billion with 860 fewer staff will inevitably produce fewer advisories, respond to fewer incidents, and reach fewer of the operators it was designed to support.

The budget is a proposal, not a law. Congress must still appropriate the funds, and the FY26 experience suggests that the final number will likely be lower than requested. What has already happened at CISA, however, does not require a vote to reverse: a third of the workforce is gone, the red team no longer exists, and election security advisors have been standing down since early 2025. The budget fight now is largely about whether what remains gets smaller still.

Tech

Is your data integrity framework just a fancy spreadsheet?

Nahla Davies examines what constitutes an appropriate data integrity framework, and how inadequate frameworks damage data quality.

If you asked most companies whether they have a data integrity framework, they’d say yes without hesitation. They’d point you to a shared drive, maybe a Confluence page, possibly a colour-coded spreadsheet with tabs labelled ‘Validation Rules’ and ‘Ownership Matrix’. It looks official. It’s got a logo on it. Someone even added conditional formatting.

But here’s the thing: looking like a framework and actually functioning as one are two wildly different realities. Across industries, organisations are confusing documentation with governance, and the gap between those two things is where data quality quietly falls apart. The problem isn’t that teams don’t care. It’s that they’ve convinced themselves the spreadsheet is enough.

The spreadsheet trap is more common than anyone admits

There’s a pattern that plays out in nearly every mid-size org that’s undergone some kind of digital transformation push in the last five years. Someone in data engineering or analytics gets tasked with ‘building a data integrity framework’. They do their research, pull together some best practices, and create a document. Maybe it lives in Google Sheets, maybe it’s a Notion database, maybe it’s an actual PDF that got emailed around once and then forgotten about. Whatever form it takes, it checks a box. Leadership sees it and feels reassured.

The trouble starts when that document has to survive contact with reality. Data pipelines change. New sources get added. Team members rotate. And that spreadsheet? It doesn’t update itself. It doesn’t send alerts when a schema shifts or when a critical field starts returning nulls at twice the usual rate. It just sits there, frozen in the moment it was created, slowly becoming a historical artifact rather than an operational tool.

What’s worse is that people keep referencing it as though it’s still accurate. Decisions get made based on validation rules that haven’t been reviewed in months. Ownership columns list people who’ve left the company. It’s the organisational equivalent of navigating with a map from 2019 and wondering why you keep hitting dead ends.

And it’s not a niche problem. A 2023 Gartner survey found that poor data quality costs organisations an average of $12.9m per year. That number doesn’t come from dramatic, headline-grabbing breaches. It comes from the slow, invisible accumulation of bad records, missed anomalies, and unchecked assumptions that a static document simply can’t catch.

What a real framework actually looks like

So what separates a functioning data integrity framework from a well-formatted spreadsheet? It comes down to whether the thing can operate without someone manually babysitting it. A real framework is embedded in your infrastructure. It’s automated, observable and responsive.

That means validation checks run as part of your data pipelines, not as a quarterly audit someone remembers to do in the last week of the quarter. It means the data is correctly annotated and that there’s monitoring in place that flags anomalies in real time, whether that’s a sudden spike in null values or a mismatch between source and destination row counts. Tools like Great Expectations, Monte Carlo and dbt tests exist specifically to bring this kind of rigor into the workflow.
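As an illustration of what in-pipeline validation can look like, here is a minimal Python sketch. The field names, thresholds and alert format are all hypothetical, and it stands in for what tools like Great Expectations or dbt tests do far more thoroughly:

```python
def null_rate(rows, field):
    """Fraction of rows where `field` is missing."""
    if not rows:
        return 0.0
    return sum(1 for r in rows if r.get(field) is None) / len(rows)

def check_batch(rows, field, baseline_rate, source_count, spike_factor=2.0):
    """Return human-readable alerts for one pipeline batch.

    Flags the two failure modes mentioned above: a null-rate spike
    (more than `spike_factor` times the baseline) and a mismatch
    between source and destination row counts.
    """
    alerts = []
    rate = null_rate(rows, field)
    if baseline_rate and rate > spike_factor * baseline_rate:
        alerts.append(f"null rate for '{field}' is {rate:.1%} (baseline {baseline_rate:.1%})")
    if len(rows) != source_count:
        alerts.append(f"row count mismatch: source {source_count}, destination {len(rows)}")
    return alerts
```

The point is that this runs on every batch, inside the pipeline, rather than waiting for a quarterly audit to notice the drift.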

It also means ownership is enforced through tooling, not just documented in a tab. When a data asset has a registered owner in a data catalogue, and that catalogue integrates with your alerting system, accountability becomes structural. It stops being something you have to chase people about in Slack.
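A crude sketch of what ownership made structural can mean, with hypothetical catalogue entries and addresses: the alert path consults the registered owner and falls back to a default channel, so an unowned asset still pages someone instead of failing silently.

```python
# Illustrative only: asset names and contacts are invented, not from
# any real data catalogue product.
CATALOGUE = {
    "warehouse.orders": {"owner": "data-eng@example.com"},
    "warehouse.customers": {"owner": "crm-team@example.com"},
}

def route_alert(asset, message, default="data-platform@example.com"):
    """Route (here: format) an alert to the asset's registered owner,
    falling back to a default channel for unregistered assets."""
    owner = CATALOGUE.get(asset, {}).get("owner", default)
    return f"to={owner} asset={asset} msg={message}"
```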

There’s a cultural component here, too. Organisations with mature data integrity practices treat data quality as a product concern and are better prepared to establish proper AI governance. Product managers care about it. Analysts flag issues proactively instead of working around them. Engineers write tests for data the same way they write tests for code. That kind of culture doesn’t emerge from a spreadsheet. It emerges from leadership making it clear that data integrity is a priority, not a side project someone handles when things are slow.

The companies getting this right tend to share a few traits. They’ve invested in observability across their data stack. They treat schema changes as events that require review, not things that just happen silently. And they’ve moved past the idea that documentation alone equals governance.

Why it matters more now than it did five years ago

The stakes around data integrity have shifted significantly. Five years ago, a bad record in a reporting dashboard was annoying but manageable. Today, that same bad record might be feeding a machine learning model that’s making automated decisions about credit, hiring or patient care. The blast radius of poor data quality has expanded because the systems consuming that data have become more autonomous and more consequential.

Regulatory pressure is also mounting. Frameworks like the EU’s AI Act and evolving data privacy regulations are putting more scrutiny on how organisations manage the data that powers their products. It’s getting harder to shrug off data quality issues as ‘technical debt we’ll get to eventually’. Regulators want to see evidence of governance, and a spreadsheet with last year’s date on it won’t cut it.

There’s also the competitive angle. Companies that can trust their data move faster. They make decisions with more confidence. They spend less time reconciling conflicting reports and more time actually acting on insights. Data integrity isn’t glamorous, but it’s one of those foundational things that quietly determines whether an organisation can execute on its strategy or just talk about it.

Final thoughts

The uncomfortable truth is that most data integrity frameworks weren’t built to be frameworks at all. They were built to satisfy a request, to check a compliance box, or to give someone something to present in a meeting.

And that’s fine as a starting point. Every mature system started somewhere. But if your ‘framework’ is still a spreadsheet that no one’s touched in six months, it’s time to be honest about what you actually have.

Real integrity requires automation, observability and cultural buy-in. The spreadsheet was never the destination. Treat it as the rough draft it always was, and start building something that can actually keep up with your data.

 

By Nahla Davies

Nahla Davies is a software developer and tech writer. Before devoting her work full time to technical writing, she managed – among other intriguing things – to serve as a lead programmer at an Inc. 5,000 experiential branding organisation, where clients include Samsung, Time Warner, Netflix and Sony.


Tech

Elon Musk wants any damages from his OpenAI lawsuit given to the AI company’s nonprofit arm

Elon Musk is still taking OpenAI to court over its transition to a for-profit company, but today he amended the complaint so that he won’t personally get any of the $150 billion in damages he’s pushing for. The Wall Street Journal reported that if Musk wins in his upcoming trial, he wants any damages to be awarded to the OpenAI nonprofit branch. He’s also seeking OpenAI CEO Sam Altman’s removal from the nonprofit’s board of directors if his suit succeeds.

Musk launched a lawsuit against OpenAI in 2024, claiming that the business had become a “closed-source de facto subsidiary” of Microsoft when it dropped its nonprofit designation. He claims that, as a co-chair of the OpenAI founding group, the change to a for-profit operation defrauded him as a donor. As a result, he’s now claiming that he, or apparently the remaining nonprofit side of OpenAI, deserves a portion of the company’s current valuation.

Considering the reputation Musk, Altman and their various business endeavors have for creating spicy PR situations, it seems likely that the exchanges between the two camps will get more heated as the trial date approaches.


Tech

Stolen session cookies give hackers full account access for under a thousand dollars per month without raising alerts


  • Storm enables session hijacking that bypasses passwords and multi-factor authentication
  • Attackers can restore stolen sessions remotely without triggering standard security alerts
  • Malware operates server-side to process encrypted browser credentials for stealthy exploitation

A new strain of infostealer malware dubbed Storm is changing how account compromise works, experts have warned.

New findings from Varonis Threat Labs have outlined how this strain moves away from passwords and focuses on session cookies that keep users logged in.


Tech

Open Reel Ensemble’s Cyklepedia Spins Wikipedia Knowledge Into Magnetic Tape Music

Japanese musicians commemorated Wikipedia’s 25th anniversary with a unique composition built entirely from Wikipedia entries. Open Reel Ensemble produced the song as part of a virtual birthday celebration, and it’s a true journey: it’s performed entirely on vintage reel-to-reel machines that also function as instruments. Every sound is produced by physically moving the tape over the heads, with no artificial samples added after the fact.



The video shows the trio jamming at a table surrounded by recorders. Snippets of Wikipedia material appear on screen, and the lads freestyle their way through them: grab an entry on how a term is defined, and the machines come to life. One of them rewinds quickly to fit the description, producing a wonderfully smooth swooshing sound across the speakers. Another goes into fast-forward mode whenever the text flashes by, raising the pitch and adding a little edge to the beat.

Each one flows seamlessly into the next, and the overall effect just seems natural. Pitch control slows down for deeper tones and speeds up for brighter ones. Loops take small pieces and repeat them to create steady rhythms underneath, while vibrato adds some wavy passages by slowly changing the pace. Tremolo reduces the loudness in rapid little pulses, and when they scratch the tape edges, they make sharp little snappy noises. Then there are the wow effects, natural wobbles that rise and fall in the same rhythm as your breathing.

The layers just develop as the devices interact with one another; definition after definition (reel-to-reel recording, tension, cut-up technique, even magnetic punk) appears on screen, triggering its corresponding action. The music remains techno and dance-friendly throughout, but it is all anchored in the mechanical slapping and hissing of the tape. The moniker “Cyklepedia” refers to the cycle of information repeating itself through these physical rotations. Masaru Yoshida composed the song, Haruka Yoshida handled the camera and editing, and the entire group collaborated to bring the performance to life, with Wikipedia itself getting in on the action through the anniversary event.


Tech

I’ve Seen Sony’s Upcoming True RGB TV: Here’s Why It Could Be a Game-Changer

At an event at Sony’s TV headquarters in Tokyo last month, we were treated to some one-on-one time with Sony’s upcoming RGB LED-backlit LCD TV, and I can say this TV is clearly something special. We got to see the new set, which Sony is calling “True RGB,” in its final form and with its LCD panel and screen removed, exposing the RGB backlight unit. Next to it was Sony’s current Mini LED flagship TV, the BRAVIA 9, also in complete form and also with its LCD panel and screen removed, exposing the Mini LED backlight unit for comparison.

Sony BRAVIA 9 Mini LED backlight (left) vs. True RGB backlight.

The True RGB TV exceeded the BRAVIA 9’s performance in just about every measurable (and subjective) way, with a wider color gamut, impressive peak brightness and freedom from artifacts like aliasing and color banding. It also had black levels and contrast that will give an OLED TV a run for its money. The new set offered excellent off-axis viewing, with minimal dimming and color shift when viewed from the sides. The upcoming set, which will be publicly unveiled later this spring, does all this while actually using less power than its predecessor, thanks to highly efficient power management and precise control over its RGB backlighting system.

Mini LED/LCD TVs like the BRAVIA 9 have a relatively easy job when it comes to color reproduction. The blue LED elements combine with a quantum dot layer to generate a white backlight. Each pixel on the LCD panel then creates its color by adjusting the opacity of its red, green and blue subpixels. Because the backlight is uniform in color, the color filter process is entirely predictable and uniform from pixel to pixel.

With an RGB backlit TV, the image processor has to adjust both the intensity of each individual red, green and blue LED in each zone of the backlight unit and, at the pixel level, each of the red, green and blue LCD subpixels to create each pixel’s final color. This two-step process can deliver more accurate and more vivid color reproduction, a wider color gamut and higher overall brightness, but at the expense of requiring more processing power. It is just this complexity that has led Sony to take its time in releasing its first RGB-lit TV of the new era.
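That two-step process can be modelled very crudely, as a sketch only (this is not Sony’s actual processing): each channel of a pixel’s final color is the product of what the local backlight zone emits and what the LCD subpixel lets through.

```python
def pixel_color(zone_backlight, subpixel_transmittance):
    """Simplified model of an RGB-backlit LCD pixel.

    zone_backlight: (r, g, b) intensity of the local backlight zone, 0-1.
    subpixel_transmittance: (r, g, b) openness of the LCD subpixels, 0-1.
    Each output channel is the per-channel product of the two.
    """
    return tuple(round(b * t, 3) for b, t in zip(zone_backlight, subpixel_transmittance))
```

With a uniform white backlight (1, 1, 1), the filter alone sets the color, which is the conventional Mini LED case; with a red-weighted zone such as (1.0, 0.1, 0.1), even fully open subpixels produce a deeply saturated red, which is where the wider gamut comes from.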

Panel structure of an RGB-backlit LCD TV.

At Sony’s headquarters, we got to see the new True RGB set up against several RGB-backlit models from competitors. In this comparison, the Sony True RGB set was better able to remain in its true RGB backlighting mode, taking full advantage of its wide color gamut reproduction with independent control over its red, green and blue diodes, while at least one competitive model switched to a full white backlight whenever multiple contrasting colors were displayed on the screen concurrently. This caused the competing set to lose its RGB color advantage by reverting to a uniform white backlight. And this was evident in visible loss of color saturation.

We’ve seen some RGB backlit TVs struggle with reproduction of multiple colors onscreen at the same time, due to a condition called “color crosstalk.” This occurs when you have multiple colors on screen at a time, or white objects next to or surrounded by colored backgrounds. Some of that background color can bleed into the white due to less than perfect backlight and color filter management. The Sony True RGB set we saw in Japan exhibited none of these color crosstalk issues or color bleed.

Poor backlight control or color filter management on an RGB backlit TV can lead to color crosstalk. This artifact is shown here on a competitor’s RGB backlit TV where white dots in the image take on a magenta or aqua tinge based on the colored areas surrounding them.

Off-axis viewing and glare reduction were both exceptionally good on the True RGB TV, with the new TV able to maintain its colors and rich black levels, even when viewed from the sides in a fairly bright room. While there was occasionally some mild blooming on brightly colored images set against a black background, the use of RGB lighting elements made these faint artifacts nearly imperceptible. On traditional LED/LCD TVs, the bloom or halo around a bright object is typically white, while on a True RGB TV, the light bloom matches the color of the on-screen object, making it much less noticeable.

Sony’s True RGB TV (right) maintains good color accuracy, black levels and saturation even when viewed from the sides.

We viewed several challenging 4K/HDR clips highlighting HDR tone mapping and found that the new True RGB set outperformed the BRAVIA 9 Mini LED TV in both specular highlights and shadow detail. And the BRAVIA 9 is already a strong performer for tone mapping, so this was a pretty impressive feat.

The True RGB TV we spent time with in Japan was a 65-inch version, but, because these TVs use standard LCD “mother glass,” we can expect Sony’s True RGB tech to be available in much larger screen sizes, certainly larger than OLED TVs, which currently max out at 97 inches diagonally. More details will follow later this spring.

Sony’s Qualia 005 TV, released in 2004, was the first LCD TV to feature an RGB backlight unit.

The Bottom Line

While Sony was the first TV maker to use RGB LED backlights in an LCD TV, with the Qualia 005 from 2004, they were not first to market with this new wave of RGB-backlit LCD TVs. Models from Samsung, TCL and Hisense were introduced last year, and second generation models are coming soon from these same manufacturers. LG also unveiled their own RGB-backlit LCD TVs this year, though they are still standing behind OLED technology for their flagship TVs.

Sony has been working on perfecting RGB backlighting in LCD TVs for several years. About a year ago, we saw Sony’s then current prototype RGB backlit TV, which was impressive, but this latest version is even more so. From what we can gather, the company wanted to make sure their version of RGB backlighting was truly ready for prime time before its release. And, from what we’ve seen so far, the wait will be worth it.

Stay tuned to eCoustics for more details on Sony True RGB TVs, including industrial design, model numbers, screen sizes, prices and more, coming later this spring.



Tech

I can’t help rooting for tiny open source AI model maker Arcee

Arcee, a tiny 26-person U.S. startup that built a massive, 400B-parameter open source LLM on a $20 million shoestring budget, has released its new reasoning model. Arcee calls the model Trinity Large Thinking — and it’s the most capable open-weight model “ever released by a non-Chinese company,” CEO Mark McQuade told TechCrunch.

As that comment implies, Arcee has a goal that I can’t help but root for: It wants to give U.S. and Western companies a model that gives them no reason to use a Chinese-based one.

While Chinese models are extremely capable, they are perceived as risky, putting power, and perhaps data, into the hands of a government that doesn’t share all of the Western world’s ideals.

With Arcee, companies can download the model, train it to their own needs, and use it on premises. Companies can also use Arcee’s cloud-hosted version, accessible via API.

While Arcee’s models are not outperforming the closed source models from the big labs like Anthropic or OpenAI, they’re not being held hostage by the whims of those giants, either.

For instance, Claude, with its exceptional abilities to code, has been a popular choice for users of open source AI agent tool OpenClaw. But Anthropic pulled the rug out from under them last week when it told users that their Anthropic subscriptions will no longer cover OpenClaw usage — they will have to pay additionally for that. (In February, OpenClaw creator Peter Steinberger said he was joining Anthropic’s biggest rival, OpenAI.)

In contrast, McQuade proudly points to data from OpenRouter showing that Trinity has become one of the top models used with OpenClaw.

So, how good is Trinity Large Thinking? It is comparable to some of the other top open source models, according to the benchmark results it shared with TechCrunch.

Arcee Trinity Large Thinking benchmarks. Image Credits: Arcee

As we previously reported, it is not a head-to-head threat to the big cheese among U.S.-built open models: Meta’s Llama 4. But it also doesn’t have the odd, not-really open source license issues of Meta’s model. All of Arcee’s Trinity models are released under the gold standard for OS licenses, Apache 2.0.

Just to be clear, there are also countless other U.S. startups offering open source models and, as a fan of the ingenuity of startups, I’m rooting for them, too.


Tech

How Long Can You Drive With Expired Registration? What Florida Law Says


The internet has given us many things, including the infamous “Florida Man” trope. That isn’t the nickname of an unknown cryptid stalking the swamps of the Sunshine State; instead, it refers to a seemingly never-ending series of headlines featuring random Floridians doing wild and crazy things, usually involving one of the state’s many creatures (think possums, alligators, snakes, and iguanas). Oh, and they’re all true. 

Florida is also home to some truly weird traffic laws, but “Florida Man Drives With Expired Registration” doesn’t have quite the same ring as “Florida Man Ties Elephant to Parking Meter Without Paying Fee.” Still, the rule around expired tags in the state is a bit odd. Fundamentally, though, it’s not too dissimilar to other states: vehicles in Florida must have a valid registration, and letting it lapse can lead to a range of unpleasant consequences. 

However, section 320.07 (subsection 3A) of the state statutes lays out that anyone with an expired registration of less than six months is only committing “a noncriminal traffic infraction, punishable as a nonmoving violation.” There’s also a caveat to this otherwise very straightforward law: Police can’t write up a citation for it “until midnight on the last day of the owner’s birth month of the year the registration expires.” If it’s been expired for more than six months, though, the proverbial can of worms gets opened. First-time offenders may be subjected to a monetary penalty, while second-time offenders could face a second-degree misdemeanor with a $500 fine plus up to 60 days in jail.
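The tiers described above can be sketched in code. This is a rough illustration of the statute as summarised here, not legal advice, and it deliberately leaves out the birth-month citation-timing caveat:

```python
def classify_expired_tag(months_expired, prior_offense=False):
    """Rough sketch of how Fla. Stat. 320.07(3) treats an expired
    registration, per the tiers described in the article. Simplified:
    the birth-month citation-timing caveat is not modeled."""
    if months_expired <= 0:
        return "valid"
    if months_expired < 6:
        return "noncriminal traffic infraction"
    if prior_offense:
        return "second-degree misdemeanor"
    return "monetary penalty"
```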

Vehicle registrations in Florida

Florida wouldn’t make it difficult for itself, would it? Barring some truly obscure traffic laws that most drivers don’t know about, no, because most registrations expire at midnight on the owner’s birthday. They can be renewed for one or two years, beginning three months before expiration, at least for individual car owners. However, while registration technically expires on the owner’s birthday, penalties can’t be assessed — and the vehicle can still be driven — until the last day of the owner’s birth month. If and when you do get a ticket, you can either pay the fee (which varies by county, not to exceed $500) or show up for your day in court.

Initially, registering a vehicle in Florida will set you back $225 plus proof of insurance with minimum coverage levels ($10,000 in Personal Injury Protection and $10,000 in Property Damage Liability). Annual license taxes on privately owned vehicles are based on weight. One weighing less than 2,500 pounds costs a mere $14.50, while one weighing between 2,500 and 3,499 pounds incurs a fee of $22.50, and those over 3,500 pounds cost $32.50.

Furthermore, anyone cited for expired tags has 10 working days to obtain a valid certificate of registration. But there’s yet another caveat to this law, and it pertains to active service members. If their vehicle registration expires while they’re deployed, they will not be dinged — as long as the soldier can provide official military orders or a written statement from their commanding officer attesting to their deployment.




Tech

MasterPlug Auraline Black Glass Panel Heater Review

Verdict

A great-looking convection heater, available in black or white, the MasterPlug Auraline Black Glass Panel Heater is also great value for a smart heater. Its front control panel is a little basic, but the smart app offers versatility and remote control.

  • Great price

  • Flexible installation

  • Useful smart app

  • Feet are fiddly to attach

Key Features

Introduction

Convection heaters might all work in the same way, but that doesn’t mean that you have to compromise on style or features, as the MasterPlug Auraline Black Glass Panel Heater demonstrates.

Available in black or white, this panel heater can stand on the floor or you can wall-mount it to keep it out of the way. Its front control panel is a little basic, but connect this well-priced 2kW heater to Wi-Fi and you get more via the app.

Design and features

  • Can stand on the floor or be wall-mounted
  • Black or white versions
  • Compatible with the SmartLife app

A lot of convection heaters are very ugly, but the MasterPlug Auraline Black Glass Panel Heater is much more attractive than most. As the name says, this heater has a glossy glass finish that gives it an air of quality and makes it look a lot more expensive than it is. I’ve got the black version, but there’s also a white version.

The eagle-eyed may spot that the MasterPlug Auraline Black Glass Panel Heater looks very similar to the Princess Glass Smart Panel Heater that I reviewed a few years ago. Both have the same finish, screen and controls. While the Princess heater was a 1.5kW model, there was also a 2kW model – the same rating as the MasterPlug here.

However, the MasterPlug version is cheaper at the time of review, and there are a few app differences, too.

As with the Princess, the MasterPlug Auraline Black Glass Panel Heater can be wall-mounted, or you can attach the provided feet to the base and have it freestanding. The feet are as annoying to attach here as they were to the Princess heater.

MasterPlug Auraline Black Glass Panel Heater legs. Image Credit (Trusted Reviews)

Thanks to a deep recess on the feet, the tiny screws are hard to get through the holes. I recommend a magnetic screwdriver to hold the screws while you delicately move them into position.

Once attached, the legs provide a lot of stability, but if the heater is knocked over, tip-over protection cuts the power to prevent damage.

Once plugged in, the heater has a physical power switch on the side that cuts the power. In most cases, you can leave this switch on, but it’s handy to have the option to fully cut power in the warmer months or if you won’t be using the heater for a while.

MasterPlug Auraline Black Glass Panel Heater power switch. Image Credit (Trusted Reviews)

With the main switch on, the heater is controlled via the touch buttons on the front. There’s a simple power button that turns the heater on and brings the screen to life, showing the current room temperature.

MasterPlug Auraline Black Glass Panel Heater main screen and controls. Image Credit (Trusted Reviews)

The plus/minus buttons let me select the target temperature in 1°C increments up to 40°C. Once the target has been reached, the heater turns off until the temperature drops, and then the heating process starts again.
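That on/off behaviour is classic thermostat control with hysteresis; here is a sketch, with the deadband value purely an assumption, since the heater’s actual firmware behaviour isn’t documented:

```python
def heater_should_run(current_temp, target_temp, heating, hysteresis=0.5):
    """Simple thermostat logic like the behaviour described above.

    Keeps heating until the target is reached, then stays off until the
    room cools a little below target, avoiding rapid on/off cycling
    right at the setpoint. The 0.5 degree deadband is an assumption.
    """
    if heating:
        return current_temp < target_temp
    return current_temp <= target_temp - hysteresis
```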

MasterPlug Auraline Black Glass Panel Heater target temperature. Image Credit (Trusted Reviews)

There’s a timer button, which cycles through hourly increments up to 24 hours. It’s handy to have if you want to give a room a boost, but want the heater to shut down after a set time.

MasterPlug Auraline Black Glass Panel Heater timer. Image Credit (Trusted Reviews)

Finally, there’s a control to switch between high and low power modes (2000W and 1000W). The low power mode is useful if the weather is slightly warmer and you’re worried about overshooting the target temperature, or if you have solar power and want to keep power usage below your energy generation on a bright day.

To get more from the heater, you need to connect it to the app. This heater is compatible with the SmartLife app, where you can mix and match devices from different manufacturers, or you can use the MPSmartEnergy app instead. Both give you the same interface, so there’s no good reason to use the MPSmartEnergy app over SmartLife.

MasterPlug Auraline Black Glass Panel Heater app. Image Credit (Trusted Reviews)

Be careful, as the app gives the wrong instructions for getting the heater connected to Wi-Fi: it shows a flashing LED and says to hold the reset button, with a diagram of someone holding down the power button. In fact, the flashing icon is on the LCD and looks like a ringed planet, and the reset button is actually the mode select button.

Once connected to the app, you get the same controls as on the MasterPlug Auraline Black Glass Panel Heater itself, plus the timer can be set in minutes as well as hours.

Scheduling is available via simple rules: you need one for each time the heater should turn on and which temperature it should aim for, and another rule for each time you want it to turn off. It’s handy to have this option, but it’s not as thorough as the full scheduling tool you get with the Mill WiFi Max Portable Heater 1500W.

MasterPlug Auraline Black Glass Panel Heater app schedule. Image Credit (Trusted Reviews)

What MasterPlug offers over the similar Princess heaters is full energy monitoring in-app, so you can see how much energy the heater is using and how much it has used over time.

Performance

One of the main benefits of a convection heater is that it’s completely silent. Aside from a clunk as the heating element turns on or off, there’s no sound to be heard from the MasterPlug Auraline Black Glass Panel Heater at all.

The front gets hot, though not so hot that you’d burn yourself; most of the heat comes out of the vent at the top. As the MasterPlug Auraline Black Glass Panel Heater heats up, it causes air to circulate, warming the room.

Testing it in the large front room of the Trusted Reviews Home Technology Lab (just shy of the 25m² maximum that I’d recommend for a 2kW heater), it didn’t take long to raise the temperature from 17°C to a more pleasant 21°C. For living rooms and larger bedrooms, this heater would be all that you need.

I measured the MasterPlug Auraline Black Glass Panel Heater as drawing just under 2kW on maximum power and just over 1kW on low power. How much energy it uses will depend on many things: the target temperature, the starting temperature and the outdoor temperature. But overall running costs are the same for this electric heater as for any other model specified for a target room size.
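Running costs are then simple arithmetic; here is a sketch, assuming an illustrative unit rate of 27p per kWh (substitute your own tariff):

```python
def running_cost_gbp(power_kw, hours_on, pence_per_kwh=27.0):
    """Cost in pounds for a heater drawing power_kw for hours_on hours
    at the given unit rate. Note hours_on is element-on time, not
    wall-clock time: the thermostat means the element isn't always
    drawing once the room reaches the target temperature."""
    return power_kw * hours_on * pence_per_kwh / 100.0
```

At the measured 2kW, an hour of continuous heating at that assumed rate would cost about 54p.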

Should you buy it?

You want a cheap, good-looking smart heater

Versatile with wall- or floor-mounting and a smart app, this heater is well priced and attractive.

You want something smaller or with more features

If you’re limited on space, a fan heater might be better, and many of those can also double up as cooling fans in the summer.

Final Thoughts

Impressively cheap, the MasterPlug Auraline Black Glass Panel Heater also looks great and comes with a very useful smart app to get more out of it. If you want a fan heater or something for a smaller room, then check out my guide to the best electric heaters.

How we test

Unlike other sites, we test every heater we review thoroughly over an extended period of time. We use industry standard tests to compare features properly. We’ll always tell you what we find. We never, ever, accept money to review a product.

Find out more about how we test in our ethics policy.

  • Used as our main heater for the review period
  • We measure the fan speed (if available) using an anemometer so that we can accurately compare performance between models
  • We measure the heat output of the fan and its effect on our test lab.

FAQs

What does the MasterPlug Auraline Black Glass Panel Heater’s app do?

Using the app you can set schedules and more detailed timers, and view power usage.

Test Data

  MasterPlug Auraline Black Glass Panel Heater

Full Specs

  MasterPlug Auraline Black Glass Panel Heater Review
UK RRP £91.99
Manufacturer
Size (Dimensions) 920 x 235 x 430 mm
Weight 7.4 kg
Release Date 2026
First Reviewed Date 26/03/2026
Model Number MasterPlug Auraline Black Glass Panel Heater
Modes 1000W, 2000W
Stated Power 2000 W
App Control Yes
Timer Yes (hourly up to 24 hours)
Heater type Convection heater
Thermostat Yes
Safety features Overheat and tip-over protection


Copyright © 2025