Tech

Critical Juniper Networks PTX flaw allows full router takeover

A critical vulnerability in the Junos OS Evolved network operating system running on PTX Series routers from Juniper Networks could allow an unauthenticated attacker to execute code remotely with root privileges.

PTX Series routers are high-performance core and peering routers built for high throughput, low latency, and scale. They are commonly used by internet service providers, telecommunication services, and cloud network applications.

The security issue is identified as CVE-2026-21902 and is caused by incorrect permission assignment in the ‘On-Box Anomaly Detection’ framework, which should be exposed to internal processes only over the internal routing interface.

However, the flaw allows the framework to be accessed over an externally exposed port, Juniper Networks explains in a security advisory.

Because the service runs as root and is enabled by default, successful exploitation would allow an attacker who is already on the network to take full control of the device without authentication.

The issue affects Junos OS Evolved versions before 25.4R1-S1-EVO and 25.4R2-EVO, on PTX Series routers. Older versions may also be impacted, but the vendor does not assess releases that have reached the end-of-engineering or end-of-life (EoL) phase.

Versions before 25.4R1-EVO, and standard (non-Evolved) Junos OS versions are not impacted by CVE-2026-21902. Juniper Networks has delivered fixes in versions 25.4R1-S1-EVO, 25.4R2-EVO, and 26.2R1-EVO of the product.

Juniper’s Security Incident Response Team (SIRT) states that it was not aware of malicious exploitation of the vulnerability at the time of publishing the security bulletin.

If immediate patching is not possible, the vendor’s recommendation is to restrict access to the vulnerable endpoints to trusted networks only using firewall filters or Access Control Lists (ACLs). Alternatively, administrators may disable the vulnerable service entirely using:

'request pfe anomalies disable'
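As a quick sanity check after applying either mitigation, a short script can confirm the endpoint no longer answers from an untrusted network segment. The advisory summarized above does not name the exposed port, so the host and port below are placeholders rather than values from Juniper's bulletin; this is a generic TCP reachability sketch, not an exploit check.

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After applying ACLs or running 'request pfe anomalies disable', the probe
# should fail when run from outside the trusted network.
# Any router IP and service port used here are placeholders, not values
# taken from the advisory.
```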

Juniper Networks products are an attractive target for advanced hackers, as the equipment is used by organizations requiring high bandwidth, such as service providers, cloud data centers, and large enterprises.

In March 2025, it was revealed that Chinese cyber-espionage actors were deploying custom backdoors on EoL Junos OS MX routers to drop a set of ‘TinyShell’ backdoor variants.

In January 2025, a malware campaign dubbed ‘J-magic’ targeted Juniper VPN gateways used in the semiconductor, energy, manufacturing, and IT sectors, deploying network-sniffing malware that activated upon receiving a “magic packet.”

Advertisement

In December 2024, Juniper Networks Session Smart routers became targets of Mirai botnet campaigns, getting enlisted in distributed denial-of-service (DDoS) swarms.

These Are Our Absolute Favorite Android Earbuds, and They’re Below $200

If you’re an esteemed Android user like me, and you felt left out of yesterday’s deal on the AirPods Pro 3, I’ve got you covered today with an even bigger discount on the Pixel Buds Pro 2. Both Amazon and Best Buy have the hazel color marked down from $229 to $180, a $49 discount on Google’s most upgraded wireless earbuds.

The first change you’ll notice from the previous generation Pixel Buds Pro is that the newer model is much lighter, and the buds are 27 percent smaller. As a result, these are an excellent choice for anyone with small ears, and they stay put super well. Reviewer Parker Hall “had no problem doing hours of tree pruning and going on long sweaty runs in Portland’s early fall heat wave.”

With some help from top-notch physical sound isolation, the active noise-canceling on these is just as good as Apple’s and even goes toe-to-toe with big hitters like Bose and Sony. The transparency mode works just as well, too, with a wider range and clearer audio than a lot of other headphones offer. When it’s time to actually turn up the tunes, you can enjoy a wide, natural soundstage that has excellent detail in the midrange and clear, sparkling treble.

The Gemini integration, unfortunately, leaves a bit to be desired. It’s not the smoothest experience, particularly when asking multiple questions, and the Pixel Buds Pro 2 aren’t offering anything that other earbuds can’t do. Apple’s live translations and heart rate monitors are more useful features, but if you’re on Android, you’re locked out of them anyway.

If you’re interested in upgrading your earbud game, and you already have a Pixel, you can grab the Pixel Buds Pro 2 in hazel for $180 from either Amazon or Best Buy. If that color doesn’t suit you, I also spotted lesser discounts on the peony color for $189, or the porcelain color for $210. For anyone who isn’t already sold on the Pixel Buds Pro 2, make sure to swing by our guide to the best wireless earbuds, with picks for both Apple and Android owners.

AMD FSR 4.1 update leaks with big image quality gains for Radeon GPUs

A Guru3D forums user named “The Creator” recently shared a beta DLL file for an unannounced update to AMD’s FSR upscaler. It didn’t take long for users on Guru3D and Reddit to circulate mirrors and begin publishing side-by-side comparisons. The early verdict: noticeably less blur than the public release at…

The scenery steals the show in this epic SpaceX rocket landing

Well, those Falcon 9 landings never get old. Imagine, just over a decade ago the idea of being able to land a rocket upright after it’d been to space seemed crazy. And then SpaceX went and did it.

Following its first successful touchdown in December 2015, SpaceX suffered the occasional mishap with its booster landings, but in recent years it’s well and truly nailed the process.

The Elon Musk-led spaceflight company shared a video (below) this week of its most recent landing, with dramatic footage captured by a camera attached to the rocket showing the spectacular early-morning ride home.

The Falcon 9’s mission started from Vandenberg Space Force Base in California, and involved the launch of 25 Starlink satellites to low-Earth orbit.

This was the 11th flight for the first-stage booster (B1093) supporting this mission, which has previously flown the SDA T1TL-B and SDA T1TL-C missions and, counting this launch, nine Starlink missions.

As the video shows, after deploying the upper stage, the 41.2 meter-tall (about 135 feet) booster returned to Earth minutes later, landing on the Of Course I Still Love You droneship waiting in the Pacific Ocean.

To achieve an autonomous landing like this, a Falcon 9 booster begins by performing a flip using cold gas thrusters after stage separation, sometimes followed by a boostback burn. As it descends, the booster deploys its grid fins to steer through the atmosphere before performing an entry burn to slow down. Finally, it executes a landing burn while deploying its legs for a stable touchdown.

The landings allow SpaceX to reuse its boosters multiple times, reducing the cost of spaceflight and opening access to more companies and organizations.

Just last weekend, another Falcon 9 booster set a new reuse record of 33 flights after launching for the first time in June 2021.

SpaceX has applied what it’s learned from the landings to its much bigger and more powerful Starship rocket, which is expected to take its 12th test flight in March.

Four Convicted Over Spyware Affair That Shook Greece

A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as “Greece’s Watergate,” surveillance software called Predator was used to target 87 people — among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court sentenced the four defendants to lengthy jail terms, suspended pending appeal. Although each faces 126 years on paper, no more than eight would typically be served, which is the upper limit for misdemeanours. One in three of the dozens of figures targeted had also been under legal surveillance by Greece’s intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court, and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis – then an MEP – was informed by the European Parliament’s IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device’s messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for “national security reasons” by Greece’s intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.

Plaintiffs took 'unsupported leap' in lawsuit Apple hopes to get dismissed

Apple has asked a court to dismiss the lawsuit over its AI delays and its response to the Epic injunction, arguing that both counts are unsubstantiated.

Apple’s AI delays are fodder for class action lawsuits

There are multiple lawsuits around Apple’s delay of a more personalized Siri. One class action suit is being led by South Korea’s National Pension Service, and claims that Apple’s recent actions have cost billions in stock market losses.
According to a report from Reuters, Apple is being targeted by two counts of defrauding shareholders. The first claim is that Apple is overpromising Siri capabilities,

Driving WS2812Bs With Pure Logic

The WS2812B has become one of the most popular addressable LEDs out there. They’re easy to drive from just about any microcontroller you can think of. But what if you don’t have a microcontroller at all? [Povilas Dumcius] decided to try to drive the LEDs with raw logic only.

The project consists of a small board full of old-school ICs that can be used to drive WS2812Bs in a simplistic manner. A 74HC14 Schmitt trigger oscillator provides the necessary beat for this tune, generating an 800 kHz clock to keep everything in time and provide the longer pulse trains that represent logic one to a WS2812B. A phase-shifted AND gate generates the shorter pulses necessary to indicate logic zero. Meanwhile, a binary counter cycles through 24 bits (8 per R, G, and B) to handle color. Pressing each one of the three pushbuttons allows each color channel to be activated or deactivated as desired. It can make the strip red, green, or blue, or combine the channels if you press multiple buttons at once. That’s all the control you get—it would take a bit more logic to enable variable levels of each channel. Certainly within the realms of possibility, though.
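For readers following along, the bit format the logic above has to reproduce can be sketched in a few lines. This is an illustrative encoder, not part of [Povilas]'s design; it just shows the 24-bit, GRB-ordered, MSB-first stream that the counter and gates generate in hardware.

```python
def ws2812_bitstream(r: int, g: int, b: int) -> list[int]:
    """Return the 24 bits a WS2812B expects for one pixel:
    green byte, then red, then blue, each sent most-significant bit first."""
    bits = []
    for byte in (g, r, b):          # WS2812B colour order is GRB, not RGB
        for i in range(7, -1, -1):  # MSB first
            bits.append((byte >> i) & 1)
    return bits

# At the 800 kHz bit clock described above, each bit slot lasts 1.25 µs:
# a '1' is a longer high pulse (~0.8 µs), a '0' a shorter one (~0.4 µs).
```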

We’ve featured some other nifty tricks for driving WS2812Bs in unconventional ways, like using DMA hardware or even I2S audio outputs. If you’ve got your own tricks, don’t hesitate to drop them in the tips line. Video after the break.

This Galaxy S26 Ultra pre-order deal is the one to beat: 512GB plus a $200 gift card

This is the first Galaxy S26 Ultra deal that actually feels worth talking about.

Amazon has the Samsung Galaxy S26 Ultra 512GB pre-order bundle for $1,299.99, and it comes with a $200 Amazon gift card. That alone is a strong launch offer, but the real reason this stands out is the storage angle: this promo gives you double the storage without forcing you to pay the usual premium for it.

That matters because flagship phone deals at launch are usually underwhelming. You might get a tiny credit, maybe a trade-in boost, and that’s it. This one is better. You’re getting the 512GB model, not the base-tier version, and Amazon is still sweetening it with store credit. If you were already planning to buy the S26 Ultra, this is the version to get.

What you’re getting

This bundle is for the unlocked Galaxy S26 Ultra 512GB. It’s Samsung’s newest top-end phone, so you’re getting the full flagship treatment: Privacy Display, Galaxy AI, AI camera features, Super Fast Charging 3.0, and the latest Ultra-level hardware.

More importantly, you’re getting the storage tier most people should buy anyway. On a phone like this, 512GB just makes more sense. Cameras are better, video files are bigger, AI features are heavier, and people keep their phones longer now. The base storage option always looks fine on paper, then starts feeling tight much sooner than you expect.

Why it’s worth it

This is a confident, easy recommendation because it’s genuinely a good deal if you were already planning on pre-ordering the S26 Ultra.

You aren’t just getting a bonus gift card. You’re also avoiding the usual upcharge for more storage. That means Amazon is solving the two biggest launch-day problems at once:

  • Flagship phones cost too much
  • The better storage tier usually costs even more

Here, the 512GB model is the obvious choice, and Amazon is making that choice easy. The $200 gift card is a great addition too. If you already buy from Amazon, that’s real value back in your pocket. Put it toward accessories, earbuds, a case, chargers, or just treat it like a straight offset against the cost of the phone. Either way, it makes this launch price a lot easier to swallow.

The bottom line

If you want the Galaxy S26 Ultra, buy it this way. The phone is brand new, the 512GB model is the one you actually want, and Amazon is pairing it with a $200 gift card while the pre-order offer is live. That is a better-than-usual launch bundle, full stop.

Apple starts rolling out iOS age verification in the UK

Apple has begun rolling out OS-level age verification to users in the UK, starting with the latest iOS 26.4 beta.

After installing the update, some users are prompted to confirm they’re over 18 (via The Verge). Apple warns that those who don’t verify their age may be unable to download apps, make purchases, or complete in-app transactions.

Screenshots shared by beta users show Apple explaining that it may automatically confirm someone’s age using the payment method linked to their Apple ID or existing account information. If that isn’t possible, users could be asked to scan a credit card.

Apple hasn’t yet provided an official statement detailing how widely the feature is rolling out in the UK. Also, it’s unclear whether all iOS 26.4 beta users are seeing the prompt.

The move comes as tech companies face growing regulatory pressure around age checks. Earlier this week, Apple confirmed it would begin blocking users in Australia, Brazil, and Singapore from downloading apps rated 18+ unless they verify their age using what it calls “reasonable methods.” The company has also said it will start sharing age category data with developers in certain US states, specifically Utah and Louisiana, to comply with local laws.

Online reaction has been mixed. Some users on Reddit have criticised the change, arguing that OS-level verification goes too far, while others point out that Apple is responding to legislation rather than acting independently. Age verification requirements have been expanding globally, particularly for platforms that distribute adult-rated content or enable in-app purchases.

For now, the UK rollout appears limited to beta software. However, the inclusion at the operating system level suggests Apple is preparing for broader enforcement.

Robot Looks Exactly Like A Roll Of Filament, If Filament Had Eyes

[Matt Denton]’s SpoolBot is a surprisingly agile remote-controlled robot that doesn’t just repurpose filament spool leftovers. It looks exactly like a 2 kg spool of filament; that’s real filament wound around the outside of the drum. In fact, Spoolie the SpoolBot looks so much like the real thing that [Matt] designed a googly-eye add-on, because the robot is so easily misplaced.

SpoolBot works by rotating its mass around the central hub, which causes it to roll forward or back. Steering is accomplished by tank-style turning of the independent spool ends. While conceptually simple, quite a bit of work is necessary to ensure SpoolBot rolls true, and doesn’t loop itself around inside the shell during maneuvers. Doing that means sensors, and software work.
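The tank-style steering described above boils down to a standard differential-drive mix. The sketch below is generic and not taken from [Matt]'s firmware; it simply shows how a single throttle/steer command pair maps onto the two independently driven spool ends.

```python
def arcade_to_tank(throttle: float, steer: float) -> tuple[float, float]:
    """Mix one throttle/steer command (each in [-1, 1]) into left and right
    drive speeds for tank-style differential steering, clamped to [-1, 1]."""
    def clamp(v: float) -> float:
        return max(-1.0, min(1.0, v))
    left = clamp(throttle + steer)   # steer > 0 speeds up the left end
    right = clamp(throttle - steer)  # ...and slows the right, turning right
    return left, right
```

Equal speeds roll the spool straight; opposite speeds spin it in place, which is exactly the behaviour the independent spool ends provide.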

To that end, a couple of rotary encoders complement the gearmotors and an IMU takes care of overall positional sensing while an ESP32 runs the show. The power supply uses NiMH battery packs, in part for their added weight. Since SpoolBot works by shifting its internal mass, heavier batteries are more effective.

The receiver is a standard RC PWM receiver which means any RC transmitter can be used, but [Matt] shows off a slick one-handed model that not only works well with SpoolBot but tucks neatly into the middle of the spool for storage. Just in case SpoolBot was not hard enough to spot among other filament rolls, we imagine.

The googly-eye add-on solves that, however. They clip to the central hub and so always show “forward” for the robot. They do add quite a bit of personality, as well as a visual indication of the internals’ position relative to the outside.

The GitHub repository and Printables page have all the design files, and the video (embedded just below) shows every piece of the internals.

The kind of hardware available nowadays makes self-balancing devices much more practical and accessible than they ever have been. Really, SpoolBot has quite a lot in common with other self-balancing robots and self-balancing electric vehicles (which are really just larger, ridable self-balancing robots) so there’s plenty of room for experimentation no matter one’s budget or skill level.

Google’s Nano Banana 2 takes aim at the production cost problem that’s kept AI image gen out of enterprise workflows

For the last six months, enterprises wanting to deploy high-quality AI image generation at scale have faced an uncomfortable trade-off: pay premium prices for Google’s Nano Banana Pro model, or settle for cheaper (sometimes free) and faster but noticeably inferior alternatives, especially for enterprise requirements like accurate embedded text, slides, diagrams, and other non-aesthetic content.

Today, Google DeepMind is attempting to collapse that gap with the launch of Nano Banana 2 (formally Gemini 3.1 Flash Image) — a model that brings the reasoning, text rendering, and creative control of the Pro tier down to Flash-level speed and pricing.

The release comes just sixteen days after Alibaba’s Qwen team dropped Qwen-Image-2.0, a 7-billion parameter open-weight challenger that many developers argued had already matched Nano Banana Pro’s quality at a fraction of the inference cost.

For IT leaders evaluating image generation pipelines, Nano Banana 2 reframes the decision matrix. The question is no longer whether AI image models are good enough for production — it’s which vendor’s cost curve best fits the workflow.

The production cost problem: why Nano Banana Pro stayed in the sandbox

When Google released Nano Banana Pro in November 2025, built on the Gemini 3 Pro backbone, the developer community was impressed by its visual fidelity and reasoning capabilities.

The model could render accurate text in images, maintain character consistency across multi-turn conversations, and follow complex compositional instructions — all capabilities that previous image generators struggled with.

But Pro-tier pricing created a barrier to deployment at scale. According to Google’s API pricing page, Nano Banana Pro’s image output is priced at $120 per million tokens, working out to roughly $0.134 per generated image at 1K pixel resolution.

For applications generating thousands of images daily — think e-commerce product visualization, marketing asset pipelines, or localized content generation — those costs compound quickly.

Nano Banana 2, built on the Gemini 3.1 Flash backbone, dramatically undercuts that pricing. Flash-tier image output is priced at $60 per million tokens, approximately $0.067 per image at 1K resolution — roughly 50% cheaper than the Pro model. For enterprises running high-volume image generation workflows, that’s the difference between a proof of concept and a production deployment.
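The per-image figures follow directly from the token prices. The sketch below reproduces the arithmetic; the ~1,120 output tokens per 1K image is back-solved from the article's own numbers ($120 per million tokens ≈ $0.134 per image) and is an assumption, not a documented constant.

```python
def cost_per_image(price_per_million_tokens: float, tokens_per_image: int) -> float:
    """Dollar cost of one generated image at a given per-million-token price."""
    return price_per_million_tokens * tokens_per_image / 1_000_000

# Assumed token count for one 1K-resolution image, inferred from the
# article's $120/M-token -> ~$0.134/image figure (not an official constant).
TOKENS_PER_1K_IMAGE = 1_120

pro = cost_per_image(120, TOKENS_PER_1K_IMAGE)    # ~$0.134 (Nano Banana Pro)
flash = cost_per_image(60, TOKENS_PER_1K_IMAGE)   # ~$0.067 (Nano Banana 2)

# At 10,000 images a day, the Flash tier saves roughly $672 daily.
daily_savings = (pro - flash) * 10_000
```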

What Nano Banana 2 actually delivers

The model is not simply a cheaper Nano Banana Pro. According to Google DeepMind’s announcement, Nano Banana 2 brings several capabilities that were previously exclusive to the Pro tier while introducing new features of its own.

The headline improvement is text rendering and translation. The model can generate images with accurate, legible text — a historically weak point for AI image generators — and then translate that text into different languages within the same image editing workflow. 

Subject consistency has also improved significantly. Nano Banana 2 can maintain character resemblance across up to five characters and preserve the fidelity of up to 14 reference objects in a single generation workflow.

This enables storyboarding, product photography with multiple SKUs, and brand asset creation where visual continuity matters. Google’s documentation highlights the ability to provide up to 14 different reference images as input, allowing the model to compose scenes incorporating multiple distinct objects or characters from separate sources.

On the technical specification side, the model supports full aspect ratio control, resolutions ranging from 512 pixels up to 4K, and two thinking levels that let developers balance quality against latency.

One notable addition that Nano Banana Pro lacks is an image search tool — the model can perform image searches and use retrieved images as grounding context for generation, expanding its utility for workflows that require visual reference material.

The Qwen-Image-2.0 factor: why Google needed to move fast

Google’s timing is not coincidental. On February 10, Alibaba’s Qwen team released Qwen-Image-2.0, a unified image generation and editing model that immediately drew comparisons to Nano Banana Pro — but with a dramatically smaller footprint.

Qwen-Image-2.0 runs on just 7 billion parameters, down from 20 billion in its predecessor, while unifying text-to-image generation and image editing into a single architecture.

The model generates natively at 2K resolution (2048×2048 pixels), supports prompts up to 1,000 tokens for complex layouts, and ranks at or near the top of AI Arena’s blind human evaluation leaderboard for both generation and editing tasks.

For enterprise buyers, the competitive dynamics are significant. Qwen-Image-2.0’s 7B parameter count means substantially lower inference costs when self-hosted — a critical consideration for organizations with data residency requirements or high-volume workloads.

The Qwen team’s previous model, Qwen-Image v1, was released under Apache 2.0 approximately one month after its initial announcement, and the developer community widely expects the same trajectory for v2.0. If open weights materialize, organizations could run a Nano Banana Pro-competitive image model on their own infrastructure without per-image API charges.

The model’s unified generation-and-editing architecture also simplifies deployment. Rather than chaining separate models for creation and modification — the current industry norm — Qwen-Image-2.0 handles both tasks in a single pass, reducing latency and the quality degradation that occurs when outputs are passed between different systems.

Where Qwen-Image-2.0 currently trails is ecosystem integration. Google’s Nano Banana 2 launches today across the Gemini app, Google Search (AI Mode and Lens), AI Studio, the Gemini API, Google Antigravity, Vertex AI, Google Cloud, and Flow — where it becomes the default image generation model at zero credit cost. That breadth of distribution is difficult for any challenger to replicate, particularly one whose API access is currently limited to Alibaba Cloud’s platform.

What this means for enterprise AI image strategies

The simultaneous availability of Nano Banana 2 and Qwen-Image-2.0 creates a decision framework that IT leaders haven’t had before in the image generation space.

For organizations already embedded in Google’s cloud ecosystem, Nano Banana 2 is the obvious first evaluation. The cost reduction from Pro pricing, combined with native integration across Google’s product surface, makes it the path of least resistance for teams that need production-quality image generation without re-architecting their stack. The model’s text rendering capabilities make it particularly well-suited for marketing asset generation, localization workflows, and any application where legible in-image text is a requirement.

For organizations with data sovereignty concerns, high-volume workloads that make per-image API pricing prohibitive, or a strategic preference for open-weight models, Qwen-Image-2.0 presents a compelling alternative — provided Alibaba follows through on open-weight availability. The model’s smaller parameter count translates to lower GPU requirements for self-hosting, and its unified generation-editing architecture reduces pipeline complexity.

The wild card is Nano Banana Pro itself, which isn’t going away. Google AI Pro and Ultra subscribers retain access to the Pro model for specialized tasks, accessible via the regeneration menu in the Gemini app. For use cases demanding maximum visual fidelity and creative reasoning — think high-end creative campaigns or applications where every image needs to look bespoke — Pro remains the ceiling.

The provenance layer: a quiet but important enterprise differentiator

Buried in Google’s announcement is a detail that may matter more to enterprise legal and compliance teams than any quality benchmark: provenance tooling. Nano Banana 2 ships with SynthID watermarking — Google’s AI-generated content identification technology — coupled with C2PA Content Credentials, the cross-industry standard for content authenticity metadata.

Google reports that since launching SynthID verification in the Gemini app last November, the feature has been used over 20 million times to identify AI-generated images, video, and audio. C2PA verification is coming to the Gemini app soon as well.

For enterprises operating in regulated industries or jurisdictions with emerging AI transparency requirements, baked-in provenance is no longer optional. It’s a compliance checkbox — and one that self-hosted open-weight alternatives like Qwen-Image-2.0 don’t natively provide.

The bottom line

Nano Banana 2 doesn’t represent a generational leap in image generation quality. What it represents is the maturation of AI image generation from a creative novelty into a production-ready infrastructure component. By collapsing the cost and speed gap between Flash and Pro tiers while retaining the reasoning and text rendering capabilities that make these models useful for actual business workflows, Google is making a calculated bet: the next wave of enterprise AI image adoption will be driven not by the models that produce the most beautiful images, but by the ones that produce good-enough images fast enough and cheaply enough to deploy at scale.

With Qwen-Image-2.0 pushing from the open-weight flank and Nano Banana Pro holding the quality ceiling, Nano Banana 2 occupies exactly the middle ground where most enterprise workloads actually live. For IT decision-makers who’ve been waiting for the cost curve to bend, it just did.
