
Google’s Nano Banana 2 takes aim at the production cost problem that’s kept AI image gen out of enterprise workflows


For the last six months, enterprises wanting to deploy high-quality AI image generation at scale have faced an uncomfortable trade-off: pay premium prices for Google’s Nano Banana Pro model, or settle for cheaper (sometimes free), faster, but noticeably inferior alternatives — especially on enterprise requirements like accurate embedded text, slides, diagrams, and other non-aesthetic information.

Today, Google DeepMind is attempting to collapse that gap with the launch of Nano Banana 2, formally designated Gemini 3.1 Flash Image — a model that brings the reasoning, text rendering, and creative control of the Pro tier down to Flash-level speed and pricing.

The release comes just sixteen days after Alibaba’s Qwen team dropped Qwen-Image-2.0, a 7-billion-parameter open-weight challenger that many developers argued had already matched Nano Banana Pro’s quality at a fraction of the inference cost.


For IT leaders evaluating image generation pipelines, Nano Banana 2 reframes the decision matrix. The question is no longer whether AI image models are good enough for production — it’s which vendor’s cost curve best fits the workflow.

The production cost problem: why Nano Banana Pro stayed in the sandbox

When Google released Nano Banana Pro in November 2025, built on the Gemini 3 Pro backbone, the developer community was impressed by its visual fidelity and reasoning capabilities.

The model could render accurate text in images, maintain character consistency across multi-turn conversations, and follow complex compositional instructions — all capabilities that previous image generators struggled with.

But Pro-tier pricing created a barrier to deployment at scale. According to Google’s API pricing page, Nano Banana Pro’s image output is priced at $120 per million tokens, working out to roughly $0.134 per generated image at 1K pixel resolution.


For applications generating thousands of images daily — think e-commerce product visualization, marketing asset pipelines, or localized content generation — those costs compound quickly.

Nano Banana 2, built on the Gemini 3.1 Flash backbone, dramatically undercuts that pricing. Flash-tier image output is priced at $60 per million tokens, approximately $0.067 per image at 1K resolution — roughly 50% cheaper than the Pro model. For enterprises running high-volume image generation workflows, that’s the difference between a proof of concept and a production deployment.
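To put those per-image numbers in budgeting terms, here is a minimal back-of-the-envelope sketch in Python. It uses only the prices quoted above; the roughly 1,120-output-tokens-per-image figure is inferred from Google’s stated per-image costs and is an approximation rather than an official number.

```python
# Back-of-the-envelope monthly cost comparison using the prices quoted above.
# TOKENS_PER_1K_IMAGE is inferred from ~$0.134/image at $120 per million tokens;
# it is an approximation, not an official Google figure.
TOKENS_PER_1K_IMAGE = 1_120

PRICE_PER_M_OUTPUT_TOKENS = {
    "Nano Banana Pro (Gemini 3 Pro)": 120.0,    # USD per million output tokens
    "Nano Banana 2 (Gemini 3.1 Flash)": 60.0,
}

def monthly_cost(images_per_day: int, price_per_m_tokens: float) -> float:
    """Rough monthly spend for a sustained daily image volume."""
    per_image = TOKENS_PER_1K_IMAGE / 1_000_000 * price_per_m_tokens
    return images_per_day * 30 * per_image

for model, price in PRICE_PER_M_OUTPUT_TOKENS.items():
    print(f"{model}: ~${monthly_cost(10_000, price):,.0f}/month at 10,000 images/day")
# Prints roughly $40,000/month at Pro pricing versus $20,000/month at Flash pricing.
```

At 10,000 images a day, the tier change alone is worth roughly $20,000 a month under these assumptions, which is why the per-token price matters more than any single benchmark for volume workloads.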

What Nano Banana 2 actually delivers

The model is not simply a cheaper Nano Banana Pro. According to Google DeepMind’s announcement, Nano Banana 2 brings several capabilities that were previously exclusive to the Pro tier while introducing new features of its own.

The headline improvement is text rendering and translation. The model can generate images with accurate, legible text — a historically weak point for AI image generators — and then translate that text into different languages within the same image editing workflow. 


Subject consistency has also improved significantly. Nano Banana 2 can maintain character resemblance across up to five characters and preserve the fidelity of up to 14 reference objects in a single generation workflow.

This enables storyboarding, product photography with multiple SKUs, and brand asset creation where visual continuity matters. Google’s documentation highlights the ability to provide up to 14 different reference images as input, allowing the model to compose scenes incorporating multiple distinct objects or characters from separate sources.

On the technical specification side, the model supports full aspect ratio control, resolutions ranging from 512 pixels up to 4K, and two thinking levels that let developers balance quality against latency.

One notable addition that Nano Banana Pro lacks is an image search tool — the model can perform image searches and use retrieved images as grounding context for generation, expanding its utility for workflows that require visual reference material.


The Qwen-Image-2.0 factor: why Google needed to move fast

Google’s timing is not coincidental. On February 10, Alibaba’s Qwen team released Qwen-Image-2.0, a unified image generation and editing model that immediately drew comparisons to Nano Banana Pro — but with a dramatically smaller footprint.

Qwen-Image-2.0 runs on just 7 billion parameters, down from 20 billion in its predecessor, while unifying text-to-image generation and image editing into a single architecture.

The model generates natively at 2K resolution (2048×2048 pixels), supports prompts up to 1,000 tokens for complex layouts, and ranks at or near the top of AI Arena’s blind human evaluation leaderboard for both generation and editing tasks.

For enterprise buyers, the competitive dynamics are significant. Qwen-Image-2.0’s 7B parameter count means substantially lower inference costs when self-hosted — a critical consideration for organizations with data residency requirements or high-volume workloads.


The Qwen team’s previous model, Qwen-Image v1, was released under Apache 2.0 approximately one month after its initial announcement, and the developer community widely expects the same trajectory for v2.0. If open weights materialize, organizations could run a Nano Banana Pro-competitive image model on their own infrastructure without per-image API charges.
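Should those weights ship, self-hosting would presumably follow the same pattern as the first Qwen-Image release, which is supported in Hugging Face’s diffusers library. The sketch below mirrors that v1 workflow; the "Qwen/Qwen-Image-2.0" repository ID is a placeholder, since no v2.0 weights or model card exist at the time of writing.

```python
# Self-hosting sketch modeled on the Qwen-Image v1 diffusers workflow.
# NOTE: "Qwen/Qwen-Image-2.0" is a hypothetical repo ID -- v2.0 weights are not
# published at the time of writing; the v1 model lives at "Qwen/Qwen-Image".
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-2.0",        # placeholder model ID
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Native 2K generation, per the resolution the Qwen team advertises for v2.0.
image = pipe(
    prompt="A storefront window sign reading 'Grand Opening - 50% Off', photorealistic",
    width=2048,
    height=2048,
    num_inference_steps=40,
).images[0]
image.save("storefront.png")
```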

The model’s unified generation-and-editing architecture also simplifies deployment. Rather than chaining separate models for creation and modification — the current industry norm — Qwen-Image-2.0 handles both tasks in a single pass, reducing latency and the quality degradation that occurs when outputs are passed between different systems.

Where Qwen-Image-2.0 currently trails is ecosystem integration. Google’s Nano Banana 2 launches today across the Gemini app, Google Search (AI Mode and Lens), AI Studio, the Gemini API, Google Antigravity, Vertex AI, Google Cloud, and Flow — where it becomes the default image generation model at zero credit cost. That breadth of distribution is difficult for any challenger to replicate, particularly one whose API access is currently limited to Alibaba Cloud’s platform.

What this means for enterprise AI image strategies

The simultaneous availability of Nano Banana 2 and Qwen-Image-2.0 creates a decision framework that IT leaders haven’t had before in the image generation space.


For organizations already embedded in Google’s cloud ecosystem, Nano Banana 2 is the obvious first evaluation. The cost reduction from Pro pricing, combined with native integration across Google’s product surface, makes it the path of least resistance for teams that need production-quality image generation without re-architecting their stack. The model’s text rendering capabilities make it particularly well-suited for marketing asset generation, localization workflows, and any application where legible in-image text is a requirement.

For organizations with data sovereignty concerns, high-volume workloads that make per-image API pricing prohibitive, or a strategic preference for open-weight models, Qwen-Image-2.0 presents a compelling alternative — provided Alibaba follows through on open-weight availability. The model’s smaller parameter count translates to lower GPU requirements for self-hosting, and its unified generation-editing architecture reduces pipeline complexity.

The wild card is Nano Banana Pro itself, which isn’t going away. Google AI Pro and Ultra subscribers retain access to the Pro model for specialized tasks, accessible via the regeneration menu in the Gemini app. For use cases demanding maximum visual fidelity and creative reasoning — think high-end creative campaigns or applications where every image needs to look bespoke — Pro remains the ceiling.

The provenance layer: a quiet but important enterprise differentiator

Buried in Google’s announcement is a detail that may matter more to enterprise legal and compliance teams than any quality benchmark: provenance tooling. Nano Banana 2 ships with SynthID watermarking — Google’s AI-generated content identification technology — coupled with C2PA Content Credentials, the cross-industry standard for content authenticity metadata.


Google reports that since launching SynthID verification in the Gemini app last November, the feature has been used over 20 million times to identify AI-generated images, video, and audio. C2PA verification is coming to the Gemini app soon as well.

For enterprises operating in regulated industries or jurisdictions with emerging AI transparency requirements, baked-in provenance is no longer optional. It’s a compliance checkbox — and one that self-hosted open-weight alternatives like Qwen-Image-2.0 don’t natively provide.

The bottom line

Nano Banana 2 doesn’t represent a generational leap in image generation quality. What it represents is the maturation of AI image generation from a creative novelty into a production-ready infrastructure component. By collapsing the cost and speed gap between Flash and Pro tiers while retaining the reasoning and text rendering capabilities that make these models useful for actual business workflows, Google is making a calculated bet: the next wave of enterprise AI image adoption will be driven not by the models that produce the most beautiful images, but by the ones that produce good-enough images fast enough and cheaply enough to deploy at scale.

With Qwen-Image-2.0 pushing from the open-weight flank and Nano Banana Pro holding the quality ceiling, Nano Banana 2 occupies exactly the middle ground where most enterprise workloads actually live. For IT decision-makers who’ve been waiting for the cost curve to bend, it just did.


Ford Suddenly Charges Drivers Extra For A Signature Mach-E Feature

It may not feel like it, but the Ford Mustang Mach-E has become a bit of an elder statesman in the electric crossover segment. Ford first unveiled this ambitious EV that controversially borrowed the Mustang’s name in 2019 and, in the years since, has given the now-familiar Mach-E some minor tweaks, including the addition of an exciting, rally-focused performance model for 2024. 

The latest change that Ford’s given the Mach-E, though, feels like more of a head-scratcher or, for lack of a better word, a cash grab. It doesn’t involve adding a new feature to the car, but rather taking one away and charging buyers extra if they’d like it back. For 2026, the Mustang Mach-E’s formerly standard front cargo compartment, better known as a “frunk,” is now a separate option that will set buyers back an extra $495. 


Yes, this is a relatively small change in the context of a car that starts at nearly $40,000, but removing any formerly standard feature (without an equivalent price drop) and then charging extra for it is generally not something that buyers appreciate. But Ford is justifying the move by arguing that few buyers were actually using the Mach-E’s frunk in the first place.


What’s in a frunk?

There are plenty of valid arguments that can be made against electric vehicles when comparing them to gas cars, but even the most dedicated EV critics would have to admit that the availability of a frunk is one of the best benefits of an electric vehicle. Not every single EV on the market has a frunk, but many use their lack of an engine to turn their underhood areas into an extra cargo compartment — as the name suggests, a front trunk.

Every Tesla currently on sale has a frunk, and Ford’s own F-150 Lightning has a massive “Mega Power Frunk” where its engine would be. Though not nearly as large as the Lightning’s frunk, the Mach-E has always had extra cargo space up front, and we listed this frunk as one of the Mach-E’s 10 coolest features back in 2022. Ford even filled a Mach-E’s frunk with shrimp and buffalo wings for a 2020 publicity stunt.

Adding extra cargo space without encroaching on the cabin seems like it’d be a win-win and a popular feature. But Ford found that Mach-E buyers were not using their frunks nearly as much as expected. According to Ford, this spurred the decision to change the frunk from a standard feature to a standalone extra on the options sheet.


Smart decision or cash grab?

There wouldn’t be any issue with this move if Ford simply dropped the Mach-E’s price by $495 while making the frunk a $495 option, but that’s not quite how Ford is going about it. While Ford did drop Mach-E prices slightly for 2026, adding the frunk as an option on the base RWD 2026 Mach-E makes it around $350 more expensive than the identical 2025 model. In a similar move to the frunk change, Ford has also removed the 2026 Mach-E Rally’s standard rear spoiler and made it a standalone option.

Are these changes likely to have a big impact on Mach-E demand on their own? Probably not, given that many buyers are already conditioned to expect car prices that creep up each year. But our reviews have shown that the Mach-E lags behind its competition in terms of value, and these price bumps certainly won’t help its case there.


Given the Mach-E’s relative age, it was once thought this EV would be due for a new generation, or at least a significant refresh, by 2026, but industry reports suggest it could be a while longer before Ford redesigns the Mach-E. Instead, Ford is said to be continuing work on the current platform to cut costs and increase profitability — and these small but notable equipment moves seem to back up that pivot.




Read AI rolls out ‘Digital Twin’ that can respond to work emails and schedule meetings



Seattle startup Read AI launched a new “Digital Twin” product that works through email and can help schedule meetings, answer questions, and keep conversations moving.

The AI bot, branded as “Ada,” builds on the company’s existing meeting and productivity tools. Read AI says it’s the largest deployment of a digital twin product to date.

Digital Twin enters a crowded market of AI agents and workplace copilots from giants like Microsoft and Google, along with startups that offer AI‑driven scheduling, inbox triage and autonomous task management. Read is trying to differentiate by centering the agent in email, tightly coupling it to meeting and document context, and offering enterprise branding such as a custom name and company domain for customers with 25 or more licenses.

Here’s how it works. Users cc ada@read.ai on a thread and can ask it to find time on everyone’s calendars, draft replies, or answer questions using context from their meetings, email, files, CRMs and other connected systems. Read says its platform pulls from more than 20 native integrations and, on average, about 10,000 documents per user.

For anything beyond scheduling, Ada “sidebars” with the user first, proposing draft responses and waiting for approval before sending them, and it must be cc’d on email threads where it takes action. The idea is to let the AI cover for you when you’re too busy or out of the office, while giving you veto power on anything sensitive or high‑stakes.


Read AI CEO David Shim likened Digital Twin to OpenClaw, an open-source AI digital assistant tool that works with messaging apps and went viral this month. “What OpenClaw did for tinkerers, Digital Twin brings to the mainstream,” he told GeekWire.

Shim framed the launch as an evolution from “AI assistant” to something closer to a software colleague that can act on your behalf. In internal beta, he said a quarter of user interactions with Ada were just to say “thank you,” a signal that people were treating the product more like a teammate than a tool.

He said the Digital Twin launch shifts Read AI from “a system of record for productivity” to an “extension of you.”

“This is the moment we change the way we interact with AI, from pull to push, where the agent acts on your behalf,” he said.


Looking further out, Shim is betting that digital twins — and AI assistants more broadly — will proliferate.

“If I said internet access was a human right 20 years ago, I’d be laughed out of the room — today, it’s an expected value,” he said. “We believe that digital twins will be a human right, akin to internet access, in the next few years, delivering a level playing field when it comes to AI and productivity.”

Founded in 2021 by Shim, Robert Williams, and Elliott Waldron, Read AI has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools. It has 5 million monthly active users.


Hands-On With Nano Banana 2, the Latest Version of Google’s AI Image Generator


Google just debuted Nano Banana 2, an updated version of its AI image generator. It combines the abilities of Google’s previous release, Nano Banana Pro—like text rendering and web searching—with speedier image generation. This tool will be the new default in Google’s Gemini chatbot.

The first image model from Google under the Nano Banana moniker dropped last August, and the Pro version arrived three months later. The AI tool was widely adopted online to alter photos of real people, from generating custom action figures to nostalgic images of people hugging younger versions of themselves.

Nano Banana 2 is not only faster at crafting images, it’s also a more powerful photo editor. Despite some rough edges and unconvincing generations in my initial hands-on experience through Gemini, Google’s latest release marks the continued improvement of photorealistic AI tools that can manipulate existing images and serves as a stark reminder to always scrutinize unverified images you see online.

Getting Started

If you want to try the new image model, the easiest way to access Nano Banana 2 is through the Gemini app or website. You can either click the banana emoji to generate images or just put the request in your prompts to the chatbot. This new image model is also available through Google’s Search tools, AI Studio, Cloud, and other services.
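For developers, the same model can also be reached programmatically through the Gemini API. The snippet below is a minimal sketch using the google-genai Python SDK; the model ID string is an assumption based on the “Gemini 3.1 Flash Image” naming and should be checked against the API documentation.

```python
# Minimal image-generation sketch with the google-genai SDK.
# NOTE: the model ID below is an assumption based on the "Gemini 3.1 Flash Image"
# naming; consult the Gemini API docs for the published identifier.
from google import genai
from google.genai import types

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3.1-flash-image",   # assumed identifier
    contents="An infographic of this weekend's ski conditions at a mountain resort",
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# Generated images come back as inline-data parts alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("ski_infographic.png", "wb") as f:
            f.write(part.inline_data.data)
```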


Google says the Nano Banana 2 image generator pulls real-time information from the web, which can be useful for generating infographics. To test this, I asked Gemini to generate a custom weather report for my upcoming weekend getaway. Here’s my prompt:

I’m going skiing in Dodge Ridge this weekend with some friends. Could you create an infographic that covers the weather conditions?


Nano Banana Pro made it easier to generate images with text—pulled from data on the web—and Nano Banana 2 makes that image generation speedier than ever.


At first glance, the result looks decent. No wobbly text or disfigured skiers in the background. The forecast for each day includes expected temperatures as well as wind and snow conditions. A small disclaimer at the bottom of the infographic reads, “Weather and conditions subject to change. Check official sources.”


I’m glad I did! When I looked up the forecast for this weekend from a different source, I realized that Gemini had messed up the dates and pulled the Google Weather context from last week. When I pointed out this mistake to the bot, it used Nano Banana 2 to replace the text from its first attempt with the correct weather data.

Tub Time

If you want more details about my getaway, I’m headed to a cozy ski lodge with friends who are skiers. I’m a novice and still deciding whether to actually hit the slopes or just turn into a wrinkly prune sitting in the hot tub all day long. Maybe Nano Banana 2 could make a dumb meme to send to the group chat? I uploaded a photo of myself to Gemini with this prompt:

Take this image and put me in a cozy outdoor jacuzzi surrounded by snow. Make my skin comically wrinkly from sitting in there for hours.


These Are Our Absolute Favorite Android Earbuds, and They’re Below $200


If you’re an esteemed Android user like me, and you felt left out of yesterday’s deal on the AirPods Pro 3, I’ve got you covered today with an even bigger discount on the Pixel Buds Pro 2. Both Amazon and Best Buy have the hazel color marked down from $229 to $180, a $49 discount on Google’s most upgraded wireless earbuds.


The first change you’ll notice from the previous generation Pixel Buds Pro is that the newer model is much lighter, and the buds are 27 percent smaller. As a result, these are an excellent choice for anyone with small ears, and they stay put super well. Reviewer Parker Hall “had no problem doing hours of tree pruning and going on long sweaty runs in Portland’s early fall heat wave.”

With some help from top-notch physical sound isolation, the active noise-canceling on these is just as good as Apple’s and even goes toe-to-toe with big hitters like Bose and Sony. The transparency mode works just as well, too, with a wider range and clearer audio than a lot of other headphones offer. When it’s time to actually turn up the tunes, you can enjoy a wide, natural soundstage that has excellent detail in the midrange and clear, sparkling treble.

The Gemini integration, unfortunately, leaves a bit to be desired. It’s not the smoothest experience, particularly when asking multiple questions, and the Pixel Buds Pro 2 aren’t offering anything that other earbuds can’t do. Apple’s live translations and heart rate monitors are more useful features, but if you’re on Android, you’re locked out of them anyway.

If you’re interested in upgrading your earbud game, and you already have a Pixel, you can grab the Pixel Buds Pro 2 in hazel for $180 from either Amazon or Best Buy. If that color doesn’t suit you, I also spotted lesser discounts on the peony color for $189, or the porcelain color for $210. For anyone who isn’t already sold on the Pixel Buds Pro 2, make sure to swing by our guide to the best wireless earbuds, with picks for both Apple and Android owners.


AMD FSR 4.1 update leaks with big image quality gains for Radeon GPUs

A Guru3D forums user named “The Creator” recently shared a beta DLL file for an unannounced update to AMD’s FSR upscaler. It didn’t take long for users on Guru3D and Reddit to circulate mirrors and begin publishing side-by-side comparisons. The early verdict: noticeably less blur than the public release at…

The scenery steals the show in this epic SpaceX rocket landing


Well, those Falcon 9 landings never get old. Imagine, just over a decade ago the idea of being able to land a rocket upright after it’d been to space seemed crazy. And then SpaceX went and did it.

Following its first successful touchdown in December 2015, SpaceX suffered the occasional mishap with its booster landings, but in recent years it’s well and truly nailed the process.

The Elon Musk-led spaceflight company shared a video (below) this week of its most recent landing, with dramatic footage captured by a camera attached to the rocket showing the spectacular early-morning ride home.

The Falcon 9’s mission started from Vandenberg Space Force Base in California, and involved the launch of 25 Starlink satellites to low-Earth orbit.


This was the 11th flight for the first-stage booster (B1093) supporting this mission, which previously launched SDA T1TL-B, SDA T1TL-C, and now nine Starlink missions.

As the video shows, after deploying the upper stage, the 41.2-meter-tall (about 135 feet) booster returned to Earth minutes later, landing on the Of Course I Still Love You droneship waiting in the Pacific Ocean.

To achieve an autonomous landing like this, a Falcon 9 booster begins by performing a flip using cold gas thrusters after stage separation, sometimes followed by a boostback burn. As it descends, the booster deploys its grid fins to steer through the atmosphere before performing an entry burn to slow down. Finally, it executes a landing burn while deploying its legs for a stable touchdown.

The landings allow SpaceX to reuse its boosters multiple times, reducing the cost of spaceflight and opening access to more companies and organizations.


Just last weekend, another Falcon 9 booster set a new reuse record of 33 flights after launching for the first time in June 2021.

SpaceX has applied what it’s learned from the landings to its much bigger and more powerful Starship rocket, which is expected to take its 12th test flight in March.


Four Convicted Over Spyware Affair That Shook Greece


A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as “Greece’s Watergate,” surveillance software called Predator was used to target 87 people — among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court handed the four defendants lengthy jail sentences, suspended pending appeal. Although each faces 126 years on paper, only eight would typically be served, which is the upper limit for misdemeanours. One in three of the dozens of figures targeted had also been under legal surveillance by Greece’s intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court, and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis – then an MEP – was informed by the European Parliament’s IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device’s messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for “national security reasons” by Greece’s intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.


Plaintiffs took 'unsupported leap' in lawsuit Apple hopes to get dismissed


Apple has asked for the dismissal of a lawsuit over its AI delays and its response to an Epic injunction, arguing that both counts are unsubstantiated.

Apple’s AI delays are fodder for class action lawsuits

There are multiple lawsuits around Apple’s delay of a more personalized Siri. One class action suit is being led by South Korea’s National Pension Service, and claims that Apple’s recent actions have cost billions in stock market losses.
According to a report from Reuters, Apple is being targeted by two counts of defrauding shareholders. The first claim is that Apple is overpromising Siri capabilities…


Critical Juniper Networks PTX flaw allows full router takeover


A critical vulnerability in the Junos OS Evolved network operating system running on PTX Series routers from Juniper Networks could allow an unauthenticated attacker to execute code remotely with root privileges.

PTX Series routers are high-performance core and peering routers built for high throughput, low latency, and scale. They are commonly used by internet service providers, telecommunication services, and cloud network applications.

The security issue is identified as CVE-2026-21902 and is caused by incorrect permission assignment in the ‘On-Box Anomaly Detection’ framework, which should be exposed to internal processes only over the internal routing interface.


However, the flaw makes the framework reachable over an externally exposed port, Juniper Networks explains in a security advisory.

Because the service runs as root and is enabled by default, successful exploitation would allow an attacker who is already on the network to take full control of the device without authentication.


The issue affects Junos OS Evolved versions before 25.4R1-S1-EVO and 25.4R2-EVO, on PTX Series routers. Older versions may also be impacted, but the vendor does not assess releases that have reached the end-of-engineering or end-of-life (EoL) phase.

Versions before 25.4R1-EVO, and standard (non-Evolved) Junos OS versions are not impacted by CVE-2026-21902. Juniper Networks has delivered fixes in versions 25.4R1-S1-EVO, 25.4R2-EVO, and 26.2R1-EVO of the product.

Juniper’s Security Incident Response Team (SIRT) states that it was not aware of malicious exploitation of the vulnerability at the time of publishing the security bulletin.

If immediate patching is not possible, the vendor’s recommendation is to restrict access to the vulnerable endpoints to trusted networks only using firewall filters or Access Control Lists (ACLs). Alternatively, administrators may disable the vulnerable service entirely using:


'request pfe anomalies disable'

Juniper Networks products are typically an attractive target for advanced hackers as the network equipment is used by service providers requiring high bandwidth, such as cloud data centers and large enterprises.

In March 2025, it was revealed that Chinese cyber-espionage actors were deploying custom backdoors on EoL Junos OS MX routers to drop a set of ‘TinyShell’ backdoor variants.

In January 2025, a malware campaign dubbed ‘J-magic’ targeted Juniper VPN gateways used in the semiconductor, energy, manufacturing, and IT sectors, deploying network-sniffing malware that activated upon receiving a “magic packet.”


In December 2024, Juniper Networks Session Smart routers became targets of Mirai botnet campaigns, getting enlisted in distributed denial-of-service (DDoS) swarms.


Driving WS2812Bs With Pure Logic


The WS2812B has become one of the most popular addressable LEDs out there. They’re easy to drive from just about any microcontroller you can think of. But what if you don’t have a microcontroller at all? [Povilas Dumcius] decided to try and drive the LEDs with raw logic only.

The project consists of a small board full of old-school ICs that can be used to drive WS2812Bs in a simplistic manner. A 74HC14 Schmitt trigger oscillator provides the necessary beat for this tune, generating an 800 kHz clock to keep everything in time and provide the longer pulse trains that represent logic one to a WS2812B. A phase-shifted AND gate generates the shorter pulses necessary to indicate logic zero. Meanwhile, a binary counter cycles through 24 bits (8 per R, G, and B) to handle color. Pressing each one of the three pushbuttons allows each color channel to be activated or deactivated as desired. It can make the strip red, green, or blue, or combine the channels if you press multiple buttons at once. That’s all the control you get—it would take a bit more logic to enable variable levels of each channel. Certainly within the realms of possibility, though.
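For reference, what that gate-level circuit reproduces is the WS2812B’s single-wire protocol: every bit occupies a roughly 1.25 µs slot, a short high pulse encodes a 0 and a long one encodes a 1, and each LED consumes 24 bits sent most-significant-bit first in green-red-blue order. The Python sketch below illustrates that encoding using nominal datasheet timings; it describes the protocol, not [Povilas]’s specific hardware.

```python
# Illustration of the WS2812B bit encoding that the logic-only board reproduces.
# Nominal datasheet timings: ~1.25 us per bit slot, the high time sets the value,
# 24 bits per LED (G, R, B; MSB first), and a >50 us low period latches the data.
T0H, T0L = 0.40, 0.85   # microseconds high/low for a logic 0
T1H, T1L = 0.80, 0.45   # microseconds high/low for a logic 1

def encode_led(g: int, r: int, b: int) -> list[tuple[float, float]]:
    """Return (high_us, low_us) pulse pairs for one LED's 24-bit GRB word."""
    word = (g << 16) | (r << 8) | b
    return [(T1H, T1L) if (word >> i) & 1 else (T0H, T0L) for i in range(23, -1, -1)]

# Example: full-brightness red, i.e. only the red channel's button held down.
pulses = encode_led(g=0, r=255, b=0)
frame_us = sum(high + low for high, low in pulses)
print(f"{len(pulses)} bits, ~{frame_us:.1f} us per LED")   # 24 bits, ~30 us
```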

We’ve featured some other nifty tricks for driving WS2812Bs in unconventional ways, like using DMA hardware or even I2S audio outputs. If you’ve got your own tricks, don’t hesitate to notify the tipsline. Video after the break.
