Google Chrome Is Switching To a Two-Week Release Cycle

Google is accelerating Chrome’s major release cadence from four weeks to two starting with version 153 on September 8th. “…our goal is to ensure developers and users have immediate access to the latest performance improvements, fixes and new capabilities,” says Google. “Building on our history of adapting our release process to match the demands of a modern web, Chrome is moving to a two-week release cycle.” The company says the “smaller scope” of these releases “minimizes disruption and simplifies post-release debugging.” They also cite “recent process enhancements” that will “maintain [Chrome’s] high standards for stability.” 9to5Google reports: There will still be weekly security updates between milestones. This applies to desktop, Android, and iOS, while there are “no changes to the Dev and the Canary channels”: “A Chrome Beta for each version will ship three weeks before the stable release. We recommend developers test with the beta to keep up to date with any upcoming changes that might impact your sites and applications.”

The eight-week Extended Stable release schedule for enterprise customers and Chromium embedders will not change. Chromebooks will also have “extended release options”: “Our priority is a seamless experience, so the latest Chrome releases will roll out to Chromebooks after dedicated platform testing. We are adapting these channels for the new two-week browser cycle and we will share more details soon regarding milestone updates for managed devices.”

SIVGA SV021 Pro Review: Carved to Impress, But Is the Tuning?


In a headphone market crowded with plastic shells and predictable tuning, Dongguan-based brand SIVGA continues to carve out its niche the old-fashioned way: with real wood and old-school wired design. The new SV021 Pro arrives at $179 with handcrafted wooden earcups, a closed-back architecture, and a promise of premium aesthetics without the usual boutique markup.

But in a category where looks can only get you so far, the real question is whether SIVGA’s latest budget-friendly over-ear delivers the sonic performance to match its striking build, or if it’s simply another pretty face in a very competitive field.

Driver Technology

Inside each earcup of the SV021 Pro sits a 50mm dynamic driver developed specifically for this model rather than pulled from a generic parts bin. SIVGA states that considerable in-house tuning and material research went into its design.

The diaphragm uses a five-layer aluminum composite construction, intended to balance rigidity with controlled damping. In practical terms, a stiffer diaphragm can improve transient response and clarity, while proper damping helps prevent unwanted resonance that can blur detail.

Driving the diaphragm is an ultra-fine black copper-clad aluminum voice coil. This type of voice coil is commonly used to reduce moving mass while maintaining conductivity, which can improve efficiency and responsiveness. SIVGA also claims benefits to perceived resolution and micro-detail retrieval.

The driver assembly itself is mounted within a six-layer reinforced composite housing, engineered to minimize unwanted vibration and reduce distortion by improving structural stability and energy transfer.

The technical story sounds promising on paper. Whether that engineering translates into real-world sonic performance is something best judged in listening — which we’ll dive into next.

Design & Comfort

Before getting to the sound, it’s worth spending a moment on design and comfort because this is an area where SIVGA has built a strong reputation. With the SV021 Pro, the brand continues that tradition. At $179, the overall build quality and material selection feel well above what you typically expect at this price point.

Our review sample came in the lighter beechwood finish. The fine grain pattern is clearly visible and gives the headphones an authentically handcrafted character rather than a synthetic “wood look.” For those who prefer something darker and more understated, SIVGA also offers a zebrawood version, which delivers a similarly premium aesthetic with a subtler visual impact. Whichever finish you choose, the attention to detail in the woodwork stands out immediately.

SIVGA SV021 Pro Headband

The structural components, including the headband frame, yokes, and adjustment rails, are constructed from CNC-machined metal. This manufacturing process allows for tight tolerances and consistent precision, contributing to a solid, confidence-inspiring feel. The headphones never come across as fragile or delicate. While I didn’t test their durability with an accidental drop, they feel robust enough to handle normal daily use without anxiety.

Comfort is another area where the SV021 Pro performs well. At 289 grams, it’s relatively lightweight for a full-size closed-back design, and that lower mass pays dividends during longer listening sessions. The earpads are generously padded and notably soft, allowing the headphones to sit securely without creating pressure hotspots. On the head, they largely “disappear,” which is exactly what you want from a daily-use wired model.

The velour contact surfaces are also a smart choice. They feel gentle against the skin and do a good job of managing heat buildup, reducing that sweaty, sealed-in sensation that can occur with synthetic leather pads — especially in warmer environments.

There is one ergonomic limitation worth noting: the yokes do not swivel. For some listeners, a lack of horizontal articulation can affect how well the earcups conform to the jawline and head shape. In my case, the depth and plushness of the earpads compensated effectively, creating a consistent seal without issue. That said, fit is personal, and those with narrower or more angular head shapes may want to be aware of this design decision.

SIVGA SV021 Pro (Beechwood)

SIVGA includes a 1.6-meter detachable cable with the SV021 Pro, and it’s noticeably better than the generic rubber leads often bundled at this price. The supplied cable uses a quad-braided design with a fabric outer sheath, giving it a more premium feel while also helping to minimize microphonics. In daily use, it resists kinks and doesn’t retain awkward bends, which makes it easy to manage at a desk or in a portable setup.

The stock termination is 3.5mm single-ended, which will suit most users and devices. That said, those running balanced outputs from modern portable DAC/amps may wish SIVGA had offered a 4.4mm option in the box. Fortunately, the detachable design makes aftermarket upgrades straightforward.

Also included are a simple hemp storage pouch and a 6.35mm adapter for use with full-size amplifiers. With accessories covered, it’s time to focus on what matters most: how the SV021 Pro actually sounds.

Listening

On paper and certainly in the hand, the SV021 Pro checks a lot of boxes. It looks distinctive, feels premium for $179, and remains comfortable over long sessions.

Where things become more complicated is in its tuning.

In practice, the sonic presentation comes across as uneven, with balance issues that are immediately noticeable. Within the first few minutes of listening to familiar reference tracks, it became clear that something was not quite aligned. The frequency response does not feel cohesively voiced, and certain areas of the spectrum draw attention to themselves in ways that disrupt overall musicality.

That is somewhat surprising given SIVGA’s track record. The brand has demonstrated competent tuning before, and its sister company, Sendy Audio, recently impressed us with the Egret, a model that showed careful tonal balance and refinement. By comparison, the SV021 Pro feels less resolved in its final voicing decisions.

Let us break things down by frequency range, starting with the bass.

Bass

In the low frequencies, the SV021 Pro takes a decidedly heavy-handed approach. The bass is elevated to the point where it becomes dominant, introducing bloom that spills into the lower midrange and softens overall clarity.

There is certainly an audience for this kind of presentation. Listeners who prioritize impact over precision may enjoy the added weight, especially with bass-driven genres such as drum and bass or electronic music where a strong low end can create a more physical and immersive experience.

The trade-off is control. The excess warmth masks finer details and reduces separation between instruments. Male vocals are particularly affected, often sounding pushed back in the mix. On tracks like “Papaoutai” by Stromae, his voice loses immediacy and presence, as if it is positioned behind the instrumental layer rather than anchored at the forefront where it belongs.

Midrange

By contrast, the midrange feels recessed. There is a noticeable dip through the central vocal region that robs instruments and voices of density and presence. As a result, guitars lack crunch, pianos lose harmonic richness, and vocals struggle to anchor the mix.

The overall impression is one of distance and diffusion. Instead of sounding centered and tangible, the mids come across as washed out, with reduced impact and body. The tonal imbalance between the elevated bass and pulled-back midrange makes the presentation feel hollow rather than cohesive. Even casual listeners are likely to sense that the tuning does not sound quite right, as the core of the music lacks the weight and immediacy that define a natural-sounding headphone.

Treble

The frequency response does recover somewhat as it moves into the upper midrange and lower treble. Female vocals cut through the mix with more clarity than male baritones, benefiting from the added energy in this region. There is a greater sense of articulation here, which helps prevent the presentation from sounding completely veiled.

However, that lift continues into the lower treble where it becomes problematic. A noticeable glare is introduced, adding sharpness that can turn strident at moderate to higher volumes. Over time, this emphasis contributes to listening fatigue rather than engagement.

Brass instruments in particular highlight the issue. On “Careless Whisper” by George Michael, the iconic saxophone line carries more bite than body, making the track sound harsher and more fatiguing than intended. Instead of smooth, sultry texture, the upper register leans toward edge and glare, which further reinforces the uneven tonal balance of the SV021 Pro.

Technicalities & Soundstaging

There is a sense that the underlying hardware in the SV021 Pro is capable of more than what its final tuning allows. The custom 50mm drivers appear technically competent, but the chosen sound signature limits their ability to showcase resolution and balance.

The elevated bass does more than just mask fine detail. It also compresses the perceived space, leading to a narrower soundstage and less precise imaging. Instruments tend to cluster rather than occupy clearly defined positions, which reduces layering and separation. While closed-back headphones rarely deliver expansive staging, there is still a baseline expectation for coherence and placement that is not fully met here.

In direct comparison, models such as the FiiO FT1 and Beyerdynamic DT 700 Pro X present a more balanced frequency response with stronger spatial performance. Both offer better control in the low end and more convincing imaging, allowing them to sound more open and organized despite operating within the same closed-back category.

Drivability

With a rated sensitivity of 106 dB/mW and an impedance of 45 ohms, the SV021 Pro is very easy to drive. In practice, it reaches high listening levels straight from a standard smartphone headphone jack without strain. There is no need for a dedicated amplifier to achieve adequate volume.
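
Those numbers check out with a little arithmetic. Here is a back-of-the-envelope sketch in Python: the sensitivity and impedance are SIVGA’s rated figures, while the 110 dB target is our own assumption for a loud listening peak.

    import math

    sensitivity_db_mw = 106     # SIVGA's rated sensitivity: dB SPL at 1 mW
    impedance_ohm = 45          # SIVGA's rated impedance
    target_spl_db = 110         # assumption: a loud listening peak

    power_mw = 10 ** ((target_spl_db - sensitivity_db_mw) / 10)   # ~2.5 mW
    voltage_rms = math.sqrt(power_mw / 1000 * impedance_ohm)      # ~0.34 V

    print(f"{power_mw:.1f} mW, {voltage_rms:.2f} Vrms for {target_spl_db} dB peaks")

Roughly a third of a volt for loud peaks is comfortably within what a typical phone jack or dongle can deliver.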

Using external sources does bring incremental improvements. Paired with dongle DACs and a desktop chain consisting of the SMSL DO400 and Aune S17 Pro, the presentation gained slight refinement in control and clarity. However, the changes were subtle rather than transformative. Given the headphone’s accessible price point, investing in higher end source equipment does not materially alter its core tuning characteristics. The fundamental tonal imbalance remains, and additional amplification cannot meaningfully correct it.

The SIVGA SV021 Pro is available in Beechwood (left) or Zebrawood (right)

The Bottom Line

The SV021 Pro is a frustrating release because so much of it is done right. The wood earcups look fantastic, the CNC-machined metal frame feels durable and confidence-inspiring, the cable is better than most at this price, and long-term comfort is genuinely impressive. At 289 grams with plush velour pads, it is easy to wear for hours. From a design and build perspective, this is one of the more premium-feeling options under $200.

Unfortunately, none of that offsets the tuning. The elevated and bloated bass, recessed midrange, and glare in the lower treble combine to create an uneven and fatiguing presentation. Detail retrieval is masked, vocals lack natural body, and spatial performance suffers as a result. No amount of better amplification meaningfully corrects the core imbalance. That is the deal breaker.

SIVGA has proven it can voice headphones well in the past, which makes this outcome more disappointing. At $179, there are closed-back alternatives that deliver a more cohesive and accurate sound signature. When sound quality is the primary metric, as it should be with any headphone, the SV021 Pro falls short. For that reason, it is difficult to recommend despite its undeniable strengths in design and comfort.

Pros:

  • Custom 50mm dynamic drivers with multi-layer aluminum composite diaphragm and lightweight voice coil design
  • Genuine beechwood or zebrawood earcups with a solid CNC-machined metal frame give a premium look and feel for the price
  • Lightweight 289 g construction with plush velour pads delivers excellent long-term comfort
  • Elevated bass response adds strong impact for electronic, hip-hop, and other bass-driven genres

Cons:

  • No swivel in the yokes may affect fit and seal for some head shapes
  • Uneven tuning with boosted bass, recessed midrange, and pronounced lower-treble glare reduces tonal balance and realism
  • Congested staging and less precise imaging compared to similarly priced closed-back competitors

Podcast: Exploring Japan’s Hi-Fi Scene

Japan Hi-Fi and Music Culture Podcast with Eric Pye
Eric Pye (@audioloveyyc) returns to Japan to explore the hi-fi and music scene in late 2025.

Common IT Automation Mistakes to Avoid (With a Safer Workflow)

IT automation is supposed to reduce risk, speed delivery, and shrink operational overhead—but in real environments, it can also amplify mistakes, spread misconfigurations faster, and create “unknown unknowns” at scale.

This guide focuses on the failure patterns that hit intermediate-to-advanced teams (SRE/DevOps/Platform/IT Ops), plus a practical workflow for building automation that’s safe to run repeatedly, safe to change, and safe to roll back.

Quick take (read this first)

  • Avoid “automation theater.” If you can’t explain the goal, blast radius, and rollback, you’re not ready to automate that workflow.
  • Design for “safe retries”: idempotent actions, clear state checks, and predictable error handling.
  • Ship guardrails by default: input validation, rate limits, timeouts, and a human fallback when conditions look unsafe.
  • Treat automation as change management: version control, approvals where needed, and audit logs of what changed and who/what changed it.

The mistakes (and what to do instead)

1) Automating the wrong thing (or automating too early)

Mistake: Automating a workflow you don’t fully understand yet, or automating edge-case-heavy work before you’ve stabilized the “happy path.”

Do instead: Start by writing a one-page runbook that a human can follow, then automate that runbook. For a runbook pattern you can standardize across teams, create an internal “production runbooks” page like a production runbook template.

Practitioner note: If you can’t list the top 3 failure modes of the workflow, automation will discover them for you—at the worst possible time.

2) No clear definition of “done” (success criteria are vague)

Mistake: “Automate onboarding” without measurable success: time saved, error rate reduced, fewer tickets, fewer escalations.

Do instead: Pick one outcome metric (e.g., median provisioning time) and one safety metric (e.g., failed-run rate) before you write code. If you want a simple measurement framework, align it to toil-reduction thinking from Google’s guidance on eliminating toil.

3) Treating automation as a script, not as a product

Mistake: A “one-off” script becomes production-critical, but it has no owner, no lifecycle, and no on-call expectations.

Do instead: Assign an owner, a repo, and a release process (even if lightweight). For larger orgs, define a small internal policy page like an automation ownership model so abandoned automations don’t become permanent operational debt.

4) No change management (automation changes go out like ad-hoc edits)

Mistake: Updating automation directly on a server, or merging automation changes without review, testing, and traceability.

Do instead: Treat automation as change management: controlled changes, auditable history, and clear permissions. AWS explicitly frames change management as necessary for reliable operation and calls out automatic logging of changes as an auditing aid in the AWS Well-Architected change management guidance.

Advanced note: When you can, make changes small and reversible (roll-forward is nice; fast rollback is mandatory).

5) Skipping preflight checks (inputs aren’t validated)

Mistake: Assuming upstream systems always send sane values, or that “only admins will run it.”

Do instead: Validate inputs like a hostile internet user might control them: bounds checks, allowlists, required fields, and “dry run” modes that show intended actions without taking them.
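
To make that concrete, here is a minimal Python sketch of a preflight layer with a bounds check, an allowlist, and a dry-run default. The host, environment names, limits, and the restart action are all hypothetical stand-ins.

    import ipaddress
    import sys

    ALLOWED_ENVIRONMENTS = {"dev", "staging"}   # allowlist; prod goes through a separate, gated path

    def preflight(target_host: str, environment: str, batch_size: int) -> None:
        """Validate inputs as if a hostile user controlled them."""
        ipaddress.ip_address(target_host)        # required field; must parse as an IP address
        if environment not in ALLOWED_ENVIRONMENTS:
            raise ValueError(f"environment {environment!r} is not in the allowlist")
        if not 1 <= batch_size <= 50:            # bounds check caps the blast radius
            raise ValueError(f"batch_size {batch_size} is outside [1, 50]")

    def run(target_host: str, environment: str, batch_size: int, dry_run: bool) -> None:
        preflight(target_host, environment, batch_size)
        if dry_run:                              # show intended actions without taking them
            print(f"[dry-run] would restart {batch_size} workers on {target_host} ({environment})")
            return
        # ... the real action would go here ...

    if __name__ == "__main__":
        # Dry-run is the default; the operator opts in to real changes with --apply.
        run("10.0.0.12", "staging", batch_size=5, dry_run="--apply" not in sys.argv)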

6) No guardrails (automation can trigger outages at scale)

Mistake: Automation that loops aggressively, fans out without limits, or repeatedly performs expensive “read” operations that become costly at scale.

Do instead: Add guardrails: timeouts, rate limits, concurrency limits, and safety checks using live signals (error rates, saturation, dependency health). Google’s SRE guidance warns that even read operations can spike device load at scale and that automation should default to humans if it hits unsafe conditions.
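
Here is a minimal Python sketch of those guardrails: a crude rate limit, a failure-rate circuit breaker, and a live health check that defers to a human when things look unsafe. The thresholds and the action/health callbacks are illustrative assumptions.

    import time

    MAX_PER_MINUTE = 30        # rate limit: even cheap reads add load at scale
    MAX_FAILURE_RATE = 0.2     # circuit breaker: stop if >20% of attempts fail

    def guarded_run(targets, action, healthy):
        """Apply `action` to each target behind a rate limit and a failure-rate breaker.

        `action(target)` performs one change and should enforce its own timeout;
        `healthy()` samples a live signal (error rate, saturation, dependency health).
        """
        attempts = failures = 0
        for target in targets:
            if not healthy():
                raise RuntimeError("live signals look unsafe; stopping for a human")
            if attempts and failures / attempts > MAX_FAILURE_RATE:
                raise RuntimeError("failure rate too high; stopping for a human")
            try:
                action(target)
            except Exception:
                failures += 1
            attempts += 1
            time.sleep(60 / MAX_PER_MINUTE)      # at most MAX_PER_MINUTE actions per minute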

7) Not idempotent (re-runs cause damage)

Mistake: A failed run leaves partial state; rerunning makes it worse (duplicate accounts, duplicate firewall rules, double-billed resources).

Do instead: Design for safe retries: check current state first, apply only the delta, and make “no-op” a normal success path. If your team needs a shared pattern library, create idempotent automation patterns internally and enforce them in code reviews.
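
As one example, a check-state-first pattern might look like the following Python sketch; the firewall-rule scenario and helper names are hypothetical.

    def ensure_firewall_rule(existing_rules: set, desired_rule: str, add_rule) -> str:
        """Converge on the desired state instead of blindly appending."""
        if desired_rule in existing_rules:   # state check first
            return "no-op"                   # already converged: a normal success path
        add_rule(desired_rule)               # apply only the missing delta
        return "created"

    # Rerunning is safe: the first call creates the rule, every later call is a no-op.
    rules = set()
    assert ensure_firewall_rule(rules, "allow tcp/443", rules.add) == "created"
    assert ensure_firewall_rule(rules, "allow tcp/443", rules.add) == "no-op"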

8) Poor error handling (failures are silent, unclear, or non-actionable)

Mistake: Catch-all exceptions that hide real failures, or errors that don’t tell the operator what to do next.

Do instead: Use structured error handling, return explicit exit codes, and log enough context to remediate quickly. For PowerShell-heavy environments, follow Microsoft’s official try/catch/finally guidance to handle terminating errors predictably.
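
The same idea transposed into Python for illustration: narrow exception handling, explicit exit codes, and errors that tell the operator what to do next. The provision helper and exit-code values are assumptions of this sketch.

    import logging
    import sys

    log = logging.getLogger("provisioner")

    EXIT_OK, EXIT_BAD_INPUT, EXIT_UPSTREAM_DOWN = 0, 2, 3   # explicit, documented exit codes

    def provision(host: str) -> None:
        """Stand-in for the real work; assumed to raise ValueError on bad input
        and ConnectionError when a dependency is unreachable."""

    def main() -> int:
        try:
            provision("db-replica-7")        # hypothetical target
        except ValueError as err:
            log.error("bad input: %s; fix the request and rerun", err)
            return EXIT_BAD_INPUT
        except ConnectionError as err:
            log.error("upstream unreachable: %s; check dependency health, then rerun", err)
            return EXIT_UPSTREAM_DOWN
        log.info("provisioning completed")
        return EXIT_OK

    if __name__ == "__main__":
        logging.basicConfig(level=logging.INFO)
        sys.exit(main())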

9) No baseline config thinking (automation fights drift instead of controlling it)

Mistake: “Our automation sets the config” but there’s no approved baseline, no monitoring, and no controlled change process—so the environment drifts and nobody knows what “correct” is anymore.

Do instead: Establish and manage approved baselines and monitor for unauthorized changes as part of configuration management. NIST describes security-focused configuration management as managing and monitoring configurations to achieve adequate security and minimize organizational risk in NIST SP 800-128.

10) No “checklist” layer (you can’t verify automation outcomes)

Mistake: Automation changes settings, but you don’t have a consistent way to verify the final state (or detect unauthorized changes later).

Do instead: Treat verification as first-class: post-run checks, periodic compliance scans, and “expected state” reports. NIST describes security configuration checklists as instructions/procedures for securely configuring IT products, including verifying configuration and identifying unauthorized changes, in NIST SP 800-70 Rev. 5 (IPD).

11) Concurrency mistakes (two automations fight each other)

Mistake: Two pipeline runs apply infrastructure changes concurrently, or two operators run the same automation against the same target at the same time.

Do instead: Enforce locking and single-writer rules. Terraform state locking is designed to prevent concurrent writes and potential state corruption; if locking fails, Terraform doesn’t continue, per HashiCorp’s Terraform state locking documentation.
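
Outside of Terraform, the same single-writer rule can be enforced with any atomic primitive. Here is a minimal Python sketch that uses an atomic mkdir as the lock and, like Terraform, refuses to continue when the lock cannot be taken; the lock path is illustrative.

    import os
    import sys

    LOCK_DIR = "/tmp/automation.lock"    # illustrative; use one lock per target or state file

    def acquire_lock() -> bool:
        try:
            os.mkdir(LOCK_DIR)           # mkdir is atomic: exactly one writer can succeed
            return True
        except FileExistsError:
            return False

    if not acquire_lock():
        sys.exit("another run holds the lock; refusing to proceed")
    try:
        pass                             # ... apply changes as the single writer ...
    finally:
        os.rmdir(LOCK_DIR)               # always release, even on failure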

12) Supply-chain blind spots (automation depends on unpinned dependencies)

Mistake: CI/CD workflows pull third-party components by mutable tags, so what runs today isn’t guaranteed to be what ran yesterday.

Do instead: Pin and verify dependencies for your automation pipeline. GitHub’s guidance on secure use of Actions states that pinning to a full-length commit SHA is currently the only way to use an action as an immutable release in GitHub’s secure use reference.

If your org is standardizing workflow hardening, create a CI/CD security hardening playbook that covers pinning, reviews for sensitive workflows, and secret exposure pathways.

How to build automation safely (a practical workflow)

Step 1: Define the boundary

  • What is the exact trigger (human, ticket, webhook, schedule)?
  • What is the target scope (single host, one service, one environment, one account)?
  • What must never happen (data loss, public exposure, mass deletion, privilege escalation)?

Step 2: Design the safety model

  • Preflight: Validate inputs and permissions; confirm the target exists.
  • Guardrails: Timeouts, rate limits, concurrency limits, circuit breakers, and a “stop” switch.
  • Fallback: If conditions look unsafe, stop and route to a human with a clear message.

Step 3: Make it idempotent (safe retries)

  • Read current state.
  • Compute delta.
  • Apply changes.
  • Verify final state (and record evidence).

Step 4: Build observability and auditability

  • Log: who/what triggered the run, what changed, and where.
  • Metric: success rate, duration, retries, and rollbacks.
  • Traceability: link runs to commits and tickets.

From a governance perspective, automatic logging of changes helps audit and quickly identify actions that might have impacted reliability, as described in the AWS Well-Architected change management guidance.
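
One lightweight way to get all three is to emit a structured record per run. A minimal Python sketch follows; the field names and sample values are illustrative.

    import json
    import time
    import uuid

    def audit_record(trigger: str, target: str, change: str, commit: str, ticket: str) -> str:
        """One JSON line per run: enough context to answer what changed, and why."""
        return json.dumps({
            "run_id": str(uuid.uuid4()),   # unique handle linking logs, metrics, and tickets
            "ts": time.time(),
            "trigger": trigger,            # who or what started the run
            "target": target,
            "change": change,
            "commit": commit,              # traceability back to the code...
            "ticket": ticket,              # ...and to the change request
        })

    print(audit_record("webhook", "web-3", "rotated TLS cert", "a1b2c3d", "OPS-1432"))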

Step 5: Roll out like you mean it

  • Start small: Canary a subset of targets, then expand.
  • Prefer reversible changes: Plan rollback (or roll-forward) before the first run.
  • Write the “undo” path: If reversal is impossible, add extra approval gates.

AWS highlights that deployments are a major production risk area and encourages automation (including testing and deploying changes) in its guidance on deploying changes with automation.

Decision tree: should you automate this?

START
'-- Does the workflow happen often (weekly+) OR during incidents?
    |-- No  --> Keep manual; improve documentation/runbook.
    '-- Yes
        '-- Can you clearly define "success" AND "unsafe" conditions?
            |-- No  --> Stabilize process; add measurements; then automate.
            '-- Yes
                '-- Can you make it safe to retry (idempotent) with a bounded blast radius?
                    |-- No  --> Add guardrails/locks/approvals; then automate.
                    '-- Yes --> Automate; ship with preflight + guardrails + rollback + logging.

Implementation checklist (copy/paste for your PR)

  • Has an owner and a repo (not “a script on a server”).
  • Inputs validated; “dry run” supported for risky actions.
  • Idempotent behavior documented (what happens on rerun).
  • Concurrency controlled (locks, single-writer rules).
  • Guardrails present (timeouts, rate limits, circuit breakers).
  • Logs, metrics, and run IDs are emitted; changes are auditable.
  • Rollback path defined and tested (or explicit approval gates if not reversible).
  • Dependencies pinned and reviewed; CI/CD hardening applied.

Troubleshooting (real-world failure modes)

Problem: “It worked in staging but caused a production incident”

Common causes: missing guardrails, scale effects (read load), hidden dependencies, or assumptions about data shape. Add timeouts/rate limits and use live signals; SRE guidance notes automation needs safeguards and that scale can change the risk profile dramatically.

Problem: “We can’t explain what changed”

Fix: require versioned changes, run IDs, and change logs; align to controlled change management and automatic change logging as described in AWS’s change management guidance.

Problem: “Two runs conflicted and corrupted state”

Fix: enforce locking/single-writer rules; Terraform’s state locking model exists specifically to prevent concurrent state writes and to stop runs if locking fails.

Problem: “The automation fails, and the error is useless”

Fix: make errors actionable (what failed, why, what to do next), and use structured error handling. In PowerShell, ensure you’re handling terminating errors using try/catch/finally patterns described in Microsoft’s documentation.

FAQ

What’s the fastest way to reduce automation risk without slowing delivery?

Start with guardrails (timeouts, rate limits, safe defaults) and add change traceability (who/what/when) before you add more features.

When should we not automate?

Don’t automate workflows with unclear “unsafe conditions,” no rollback, or unclear ownership—until you fix those prerequisites.

How do we keep automation from creating more toil?

Measure the time spent operating the automation itself and ensure it reduces net operational work; toil framing and safeguards are emphasized in Google’s SRE guidance.

Is “configuration drift” always bad?

Not always—sometimes reality changes faster than code—but unmanaged drift makes environments less predictable; treat baselines and monitoring as first-class.

How do we implement configuration baselines in a practical way?

Define a baseline, implement it consistently, monitor deviations, and control changes; NIST’s security-focused configuration management guidance is a strong baseline reference for this program.

Do we need checklists if we already have IaC?

Yes—IaC expresses intent, but you still need verification that deployed systems match the intended secure configuration; NIST describes checklists as including verification and unauthorized-change detection.

What’s a minimum viable CI/CD hardening step for automation pipelines?

Pin third-party components to immutable identifiers; GitHub’s secure use guidance states full-length commit SHA pinning is the way to make an Action immutable.

How do we align “automation” with formal change management without drowning in process?

Automate the evidence: logs, approvals where needed, and a clear history of changes; AWS explicitly calls out automatic logging as an auditing aid.

Key takeaways

  • Automation failures are rarely “tool problems”—they’re safety, ownership, and change-management problems.
  • Make automation safe to rerun, safe to stop, and safe to explain.
  • Build guardrails that assume scale and bad inputs, and default to humans when unsafe.

CISA flags VMware Aria Operations RCE flaw as exploited in attacks

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a VMware Aria Operations vulnerability tracked as CVE-2026-22719 to its Known Exploited Vulnerabilities catalog, flagging the flaw as exploited in attacks.

Broadcom also warned that it is aware of reports indicating the vulnerability is exploited but says it cannot independently confirm the claims.

VMware Aria Operations is an enterprise monitoring platform that helps organizations track the performance and health of servers, networks, and cloud infrastructure.

The vulnerability was originally disclosed and patched on February 24, 2026, as part of VMware’s VMSA-2026-0001 advisory, which was rated Important with a CVSS score of 8.1.

The flaw has now been added to CISA’s Known Exploited Vulnerabilities (KEV) catalog, with the US cyber agency requiring federal civilian agencies to address the issue by March 24, 2026.

In a recent update to the advisory, Broadcom said it is aware of reports indicating the vulnerability is exploited in attacks but cannot confirm the claims.

“Broadcom is aware of reports of potential exploitation of CVE-2026-22719 in the wild, but we cannot independently confirm their validity,” states the updated advisory.

At this time, no technical details about how the flaw may be exploited have been publicly disclosed.

BleepingComputer contacted Broadcom with questions regarding the reported activity, but has not received a response.

The command injection flaw

According to Broadcom, CVE-2026-22719 is a command injection vulnerability that allows an unauthenticated attacker to execute arbitrary commands on vulnerable systems.

“A malicious unauthenticated actor may exploit this issue to execute arbitrary commands which may lead to remote code execution in VMware Aria Operations while support-assisted product migration is in progress,” the advisory explains.

Broadcom released security patches on February 24 and also provided a temporary workaround for organizations unable to apply the patches immediately.

The mitigation is a shell script named “aria-ops-rce-workaround.sh,” which must be executed as root on each Aria Operations appliance node.

The script disables components of the migration process that could be abused during exploitation, including removing the “/usr/lib/vmware-casa/migration/vmware-casa-migration-service.sh” and the following sudoers entry that allows vmware-casa-workflow.sh to run as root without a password:


    NOPASSWD: /usr/lib/vmware-casa/bin/vmware-casa-workflow.sh

Admins are advised to apply available VMware Aria Operations security patches or implement workarounds as soon as possible, especially if the flaw is being actively exploited in attacks.

Downdetector, Speedtest sold to IT service provider Accenture in $1.2B deal

In a statement, Accenture CEO and chair Julie Sweet said:

By acquiring Ookla, we will help our clients across business and government scale AI safely and build the trusted data foundations they need to deliver the reliable, seamless connectivity that creates value.

Current Accenture public sector clients include the US Air Force, the US Social Security Administration, and, recently, the US Department of State.

Speedtest and Downdetector are popular among people seeking something to help quickly test their current internet speed and the status of online services, respectively. Downdetector is often cited by media reports discussing the availability of websites, apps, banks, and more.

Under Ziff Davis, both programs also have business-to-business (B2B) applications. Using Speedtest, for instance, Ookla gathers, aggregates, and analyzes data for “billions of mobile network samples daily, which measure radio signal levels, network coverage, and availability, and [quality of experience] metrics for a number of connected experiences, such as streaming video, video conferencing, gaming, web browsing, and CDN and cloud provider performance,” Ookla says. Currently, Speedtest claims telecommunications operators, regulatory and trade bodies, analysts, journalists, and nonprofits as B2B customers.

Downdetector Explorer, meanwhile, is a monitoring tool that’s supposed to help businesses detect outages. Customers include streaming services, banks, social networks, and communication service providers.

Should Accenture’s acquisition close, the IT consultant will similarly use data from Speedtest and Downdetector to inform clients, and individual users will be subject to a new privacy policy and any other changes Accenture potentially makes.

An Accenture spokesperson told Ars Technica that Accenture plans to operate the Ookla “business as it operates today.” 

Meta signs a multimillion-dollar AI licensing deal with News Corp

Meta has signed an AI licensing deal with News Corp that will allow the maker of Meta AI to use content from The Wall Street Journal and other brands in its chatbot responses and to train its AI models. News Corp confirmed to Engadget that it had struck a deal with Meta, but didn’t provide specifics on the terms of the arrangement. According to The Wall Street Journal, Meta will pay News Corp “up to $50 million a year” for a three-year deal that covers content from The Journal, as well as the media giant’s other brands in the US and UK.

News Corp previously struck a five-year deal with OpenAI that was valued at around $250 million. During a recent appearance at Morgan Stanley’s annual Technology, Media & Telecom (TMT) conference, News Corp CEO Robert Thomson hinted that the media company was in the “advanced stage with other negotiations.”

He described the company’s overall approach to such arrangements as “a woo and a sue” strategy, depending on whether companies want to pay for content or scrape it without permission. “We have what you might call a woo and a sue strategy,” he said. “We’ll woo you. We’d like you to be our partner. But if you’re stealing our stuff, we are going to sue you. So there’ll be a discount for those who hand themselves in, and there’ll be a penalty for those that resist.”

Meta didn’t immediately respond to a request for comment. But the company, which has been reorganizing its AI teams as it looks to create its next model, has struck a number of licensing deals in recent months. It previously signed multi-year agreements with USA Today, People, CNN, Fox News and other outlets.

Best Business Laptop for 2026

There are a ton of laptops on the market at any given moment, and almost all of those models are available in multiple configurations to match your performance and budget needs. If you’re feeling overwhelmed with options when looking for a new laptop, it’s understandable. To help simplify things for you, here are the main things you should consider when you start looking.

Price

The search for a new laptop for most people starts with price, and laptop pricing is on the rise. If the statistics that chipmaker Intel and PC manufacturers hurl at us are correct, you’ll be holding onto your next laptop for at least three years. If you can afford to stretch your budget a little to get better specs, do it. That stands whether you’re spending $500 or more than $1,000. In the past, you could get away with spending less upfront and upgrading memory and storage later. Laptop makers are increasingly moving away from making components easily upgradable, so it’s best to get as good a laptop as you can afford from the start.

Generally speaking, the more you spend, the better the laptop. That could mean better components for faster performance, a nicer display, sturdier build quality, a smaller or lighter design from higher-end materials or even a more comfortable keyboard. All of these things add to the cost of a laptop. I’d love to say $500 will get you a powerful gaming laptop, for example, but that’s not the case. Right now, the sweet spot for a reliable laptop that can handle average work, home office or school tasks is between $700 and $800, while a reasonable model for creative work or gaming starts at about $1,000. The key is to look for discounts on models in all price ranges so you can get more laptop features for less.

Operating system

Choosing an operating system is part personal preference and part budget. For the most part, Microsoft Windows and Apple’s MacOS do the same things (except for gaming, where Windows is the winner), but they do them differently. Unless there’s an OS-specific application you need, go with the one you feel most comfortable using. If you’re not sure which that is, head to an Apple store or a local electronics store and test them out. Or ask friends or family to let you test theirs for a bit. If you have an iPhone or iPad and like it, chances are you’ll like MacOS too. 

When it comes to price and variety (and, again, PC gaming), Windows laptops win. If you want MacOS, you’re getting a MacBook. While Apple’s MacBooks regularly top our best lists, the least expensive one is the M1 MacBook Air for $999. It is regularly discounted to $750 or $800, but if you want a cheaper MacBook, you’ll have to consider older refurbished ones. 

Windows laptops can be found for as little as a couple of hundred dollars and come in all manner of sizes and designs. Granted, we’d be hard-pressed to find a $200 laptop we’d give a full-throated recommendation to, but if you need a laptop for online shopping, email and word processing, they exist. 

If you are on a tight budget, consider a Chromebook. ChromeOS is a different experience than Windows; make sure the applications you need have a Chrome, Android or Linux app before making the leap. If you spend most of your time roaming the web, writing, streaming video or using cloud-gaming services, they’re a good fit. 

Size

Consider whether a lighter, thinner laptop or a touchscreen laptop with good battery life will be important to you down the road. Size is primarily determined by the screen — hello, laws of physics — which in turn factors into battery size, laptop thickness, weight and price. Keep in mind other physics-related trade-offs: an ultrathin laptop isn’t necessarily lighter than a thick one, a small or ultrathin model can’t offer a wide array of connections, and so on.

Screen

When it comes to deciding on a screen, there are a variety of considerations: how much you need to display (which is surprisingly more about resolution than screen size), what types of content you’ll be looking at and whether you’ll be using it for gaming or creative work.

You want to optimize pixel density; that’s the number of pixels per inch the screen can display. Although other factors contribute to sharpness, a higher pixel density usually means sharper rendering of text and interface elements. (You can easily calculate the pixel density of any screen at DPI Calculator if you don’t feel like doing the math, and you can also find out what math you need to do there.) We recommend a pixel density of at least 100 pixels per inch (ppi) as a rule of thumb.
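
If you do feel like doing the math, it is one line: divide the diagonal resolution in pixels by the diagonal size in inches. A quick Python sketch, where the 14-inch 1080p panel is just an example:

    import math

    def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
        # Pixel density = diagonal resolution in pixels / diagonal size in inches.
        return math.hypot(width_px, height_px) / diagonal_in

    # A 14-inch 1920x1080 panel comes out to ~157 ppi, comfortably above 100 ppi.
    print(round(pixels_per_inch(1920, 1080, 14)))   # 157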

Because of the way Windows and MacOS scale for the display, you’re frequently better off with a higher resolution than you’d think. You can always make things bigger on a high-resolution screen, but you can never make them smaller — to fit more content in the view — on a low-resolution screen. This is why a 4K, 14-inch screen may sound like unnecessary overkill, but may not be if you need to, say, view a wide spreadsheet.

If you need a laptop with relatively accurate color that displays the most colors possible or that supports HDR, you can’t simply trust the specs. Manufacturers usually fail to provide the necessary context to understand what the specs they quote mean. You can find a ton of detail about considerations for different types of screen uses in our monitor buying guides for general-purpose monitors, creators, gamers and HDR viewing.

Processor

The processor, aka the CPU, is the brains of a laptop. Intel and AMD are the main CPU makers for Windows laptops, with Qualcomm as a new third option with its Arm-based Snapdragon X processors. Both Intel and AMD offer a staggering selection of mobile processors. Making things trickier, both manufacturers have chips designed for different laptop styles, like power-saving chips for ultraportables or faster processors for gaming laptops. Their naming conventions will let you know what type is used. You can head to Intel’s or AMD’s sites for explanations so you get the performance you want. Generally speaking, the faster the processor speed and the more cores it has, the better the performance will be.

Apple makes its own chips for MacBooks, which makes things slightly more straightforward. As with Intel and AMD, you’ll still want to pay attention to the naming conventions to know what kind of performance to expect. Apple uses its M-series chipsets in Macs. The entry-level MacBook Air uses an M1 chip with an eight-core CPU and seven-core GPU. The current models have M2-series silicon that starts with an eight-core CPU and 10-core GPU and goes up to the M2 Max with a 12-core CPU and a 38-core GPU. Again, generally speaking, the more cores it has, the better the performance.

Battery life has less to do with the number of cores and more to do with CPU architecture, Arm versus x86. Apple’s Arm-based MacBooks and the first Arm-based Copilot Plus PCs we’ve tested offer better battery life than laptops based on x86 processors from Intel and AMD.

Graphics

The graphics processor (GPU) handles all the work of driving the screen and generating what gets displayed, as well as speeding up a lot of graphics-related (and increasingly, AI-related) operations. For Windows laptops, there are two types of GPUs: integrated (iGPU) or discrete (dGPU). As the names imply, an iGPU is part of the CPU package, while a dGPU is a separate chip with dedicated memory (VRAM) that it communicates with directly, making it faster than sharing memory with the CPU.

Because the iGPU splits space, memory and power with the CPU, it’s constrained by the limits of those. It allows for smaller, lighter laptops, but doesn’t perform nearly as well as a dGPU. There are some games and creative software that won’t run unless they detect a dGPU or sufficient VRAM. Most productivity software, video streaming, web browsing and other nonspecialized apps will run fine on an iGPU, though.

For more power-hungry graphics needs, like video editing, gaming and streaming, design and so on, you’ll need a dGPU. Only two real companies make them, Nvidia and AMD, with Intel also offering a few based on the Xe-branded (formerly UHD Graphics) iGPU technology in its CPUs.

Memory

For memory, we highly recommend 16GB of RAM (8GB absolute minimum). RAM is where the operating system stores all the data for currently running applications, and it can fill up fast. After that, it starts swapping between RAM and SSD, which is slower. A lot of sub-$500 laptops have 4GB or 8GB, which in conjunction with a slower disk can make for a frustratingly slow Windows laptop experience. Also, many laptops now have the memory soldered onto the motherboard. Most manufacturers disclose this, but if the RAM type is LPDDR, assume it’s soldered and can’t be upgraded. 

Some PC makers will solder memory on and also leave an empty internal slot for adding a stick of RAM. You may need to contact the laptop manufacturer or find the laptop’s full specs online to confirm. Check the web for user experiences, too: the slot may still be hard to get to, or it may require nonstandard, hard-to-find memory, among other pitfalls.

Storage

You’ll still find cheaper hard drives in budget laptops and larger hard drives in gaming laptops, but faster solid-state drives (SSDs) have all but replaced hard drives in laptops. They can make a big difference in performance. Not all SSDs are equally speedy, and cheaper laptops typically have slower drives. If the laptop has only 4GB or 8GB of RAM, it may end up swapping to that drive and the system may slow down quickly while you’re working. 

Get what you can afford, and if you need to go with a smaller drive, you can always add an external drive or two down the road or use cloud storage to bolster a small internal drive. The one exception is gaming laptops: We don’t recommend going with less than a 512GB SSD unless you really like uninstalling games every time you want to play a new game.

The Perfect Cheat’s Racing Bicycle

One of the ongoing rumors and scandals in professional cycle sport concerns “motor doping” — the practice of concealing an electric motor in a bicycle to provide the rider with an unfair advantage. It’s investigated in a video from [Global Cycling Network], in which they talk about the background and then prove it’s possible by creating a motor-doped racing bike.

To do this they’ve recruited a couple of recent graduate engineers, who get to work in a way most of us would be familiar with: prototyping with a set of 18650 cells, some electronics, and electromagnets. The bike uses what they call a “Magic wheel”, which features magnets embedded in its rim that engage with hidden electromagnets. It gives somewhere just under 20 W of boost, which doesn’t sound like much, but could deliver those crucial extra seconds in a race.

Perhaps the most interesting part is the section which looks at the history of motor doping with some notable cases mentioned, and the steps taken by cycling competition authorities to detect it. They use infra-red cameras, magnetometers, backscatter detectors, and even X-ray machines, but even these haven’t killed persistent rumors in the sport. It’s a fascinating video we’ve placed below the break, and we thank [Seb] for the tip. Meanwhile the two lads who made the bike are looking for a job, so if any Hackaday readers are hiring, drop them a line.

Exploring Security Vulnerabilities In A Cheapo WiFi Extender

If all you want is a basic WiFi extender that gets some level of network connectivity to remote parts of your domicile, then it might be tempting to grab one of those $5, 300 Mbit extenders off Temu, as [Low Level] recently did for a security audit. Naturally, as he shows in the subsequent analysis of its firmware, you really don’t want to stick this thing into your LAN. In this context it is also worrying that the product page claims that over 100,000 of these have been sold.

The security audit starts with using $(reboot) as the WiFi password, just to see whether the firmware passes this value directly to a shell without sanitizing it. Shockingly, this soft-bricks the device with an infinite reboot loop until a factory reset is performed by long-pressing the reset button. Amusingly, after this the welcome page changed to the ‘Breed web recovery console’ interface, in Chinese.

Here we also see that it uses a Qualcomm Atheros QCA953X SoC, which incidentally is OpenWRT compatible. On this new page you can perform a ‘firmware backup’, making it easy to dump and reverse-engineer the firmware in Ghidra. Based on this code it was easy to determine that full remote access to these devices was available due to a complete lack of sanitization, proving once again that a lack of input sanitization is still the #1 security risk.
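
To make the failure mode concrete, here is a minimal Python sketch of the difference between interpolating untrusted input into a shell command and passing it as a plain argument. The uci call stands in for whatever the extender’s firmware actually runs and is purely illustrative.

    import subprocess

    def set_wifi_password_unsafe(password: str) -> None:
        # Vulnerable pattern: the password is interpolated into a shell string, so a
        # value like "$(reboot)" is executed by the shell instead of being stored.
        subprocess.run(f'uci set wireless.@wifi-iface[0].key="{password}"', shell=True)

    def set_wifi_password_safer(password: str) -> None:
        # Passing arguments as a list bypasses the shell, so "$(reboot)" is just an
        # odd-looking password rather than a command substitution.
        subprocess.run(["uci", "set", f"wireless.@wifi-iface[0].key={password}"])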

In the video it’s explained that they tried to find and contact a manufacturer about these security issues, but this proved to be basically impossible. This leaves probably thousands of these vulnerable devices scattered around on networks, but on the bright side they could be nice targets for OpenWRT and custom firmware development.

Just three companies dominated the $189 billion in VC investments last month

AI continues to dominate the venture world, per a new Crunchbase report.

A record $189 billion of global venture capital flowed to startups in February, according to the report. AI startups overall raised $171 billion, or 90% of the capital raised last month. It’s a stunning number that feels like only the start. 

That record spending was more than three times the global VC spend in January, and was dominated by mammoth funding rounds from just three companies: OpenAI, Anthropic, and Waymo.

OpenAI’s latest $110 billion raise led the pack. It was one of the largest private rounds ever raised and valued the company at $730 billion. Its rival Anthropic also nabbed a $30 billion Series G at a $380 billion valuation. Lastly, Waymo raised $16 billion at a valuation of $126 billion. These three companies alone were responsible for 83% of the venture dollars raised last month.

The amount raised by just OpenAI, Anthropic, and Waymo last month was one-third of the total $425 billion venture spend in 2025, according to Crunchbase.
