For the last six months, enterprises looking to deploy high-quality AI image generation at scale have faced an uncomfortable trade-off: pay premium prices for Google's Nano Banana Pro model, or settle for cheaper (sometimes free) and faster but noticeably inferior alternatives, especially for enterprise requirements such as accurate embedded text, slides, diagrams, and other non-aesthetic information.
Today, Google DeepMind is attempting to collapse that gap with the launch of Nano Banana 2 (officially Gemini 3.1 Flash Image), a model that brings the reasoning, text rendering, and creative control of the Pro tier down to Flash-level speed and pricing.
The release comes just sixteen days after Alibaba’s Qwen team dropped Qwen-Image-2.0, a 7-billion parameter open-weight challenger that many developers argued had already matched Nano Banana Pro’s quality at a fraction of the inference cost.
For IT leaders evaluating image generation pipelines, Nano Banana 2 reframes the decision matrix. The question is no longer whether AI image models are good enough for production — it’s which vendor’s cost curve best fits the workflow.
The production cost problem: why Nano Banana Pro stayed in the sandbox
When Google released Nano Banana Pro in November 2025, built on the Gemini 3 Pro backbone, the developer community was impressed by its visual fidelity and reasoning capabilities.
The model could render accurate text in images, maintain character consistency across multi-turn conversations, and follow complex compositional instructions — all capabilities that previous image generators struggled with.
But Pro-tier pricing created a barrier to deployment at scale. According to Google’s API pricing page, Nano Banana Pro’s image output is priced at $120 per million tokens, working out to roughly $0.134 per generated image at 1K pixel resolution.
For applications generating thousands of images daily — think e-commerce product visualization, marketing asset pipelines, or localized content generation — those costs compound quickly.
Nano Banana 2, built on the Gemini 3.1 Flash backbone, dramatically undercuts that pricing. Flash-tier image output is priced at $60 per million tokens, or approximately $0.067 per image at 1K resolution, roughly 50% cheaper than the Pro model. For enterprises running high-volume image generation workflows, that's the difference between a proof of concept and a production deployment.
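As a rough sanity check, the per-image prices quoted above can be reproduced from the per-token rates. The token count per image below is an assumption inferred by dividing the article's per-image cost by the per-million-token rate; it is not an official Google figure:

```python
# Back-of-envelope cost model for the per-image prices quoted above.
# ASSUMPTION: ~1,120 output tokens per 1K-resolution image, inferred
# from the article's figures, not published by Google.

TOKENS_PER_1K_IMAGE = 1_120

def cost_per_image(price_per_million_tokens: float,
                   tokens_per_image: int = TOKENS_PER_1K_IMAGE) -> float:
    """Dollar cost of one generated image at the given token rate."""
    return price_per_million_tokens * tokens_per_image / 1_000_000

pro = cost_per_image(120.0)    # Nano Banana Pro: $120 per 1M output tokens
flash = cost_per_image(60.0)   # Nano Banana 2:   $60 per 1M output tokens

daily_volume = 10_000  # e.g. an e-commerce visualization pipeline
monthly_savings = (pro - flash) * daily_volume * 30

print(f"Pro:   ${pro:.4f}/image")
print(f"Flash: ${flash:.4f}/image")
print(f"Savings at {daily_volume}/day: ${monthly_savings:,.0f}/month")
```

At ten thousand images a day, the tier difference compounds to five figures a month, which is the scale argument driving the production-deployment framing here.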
What Nano Banana 2 actually delivers
The model is not simply a cheaper Nano Banana Pro. According to Google DeepMind’s announcement, Nano Banana 2 brings several capabilities that were previously exclusive to the Pro tier while introducing new features of its own.
The headline improvement is text rendering and translation. The model can generate images with accurate, legible text — a historically weak point for AI image generators — and then translate that text into different languages within the same image editing workflow.
Subject consistency has also improved significantly. Nano Banana 2 can maintain character resemblance across up to five characters and preserve the fidelity of up to 14 reference objects in a single generation workflow.
This enables storyboarding, product photography with multiple SKUs, and brand asset creation where visual continuity matters. Google’s documentation highlights the ability to provide up to 14 different reference images as input, allowing the model to compose scenes incorporating multiple distinct objects or characters from separate sources.
On the technical specification side, the model supports full aspect ratio control, resolutions ranging from 512 pixels up to 4K, and two thinking levels that let developers balance quality against latency.
One notable addition that Nano Banana Pro lacks is an image search tool — the model can perform image searches and use retrieved images as grounding context for generation, expanding its utility for workflows that require visual reference material.
The Qwen-Image-2.0 factor: why Google needed to move fast
Google’s timing is not coincidental. On February 10, Alibaba’s Qwen team released Qwen-Image-2.0, a unified image generation and editing model that immediately drew comparisons to Nano Banana Pro — but with a dramatically smaller footprint.
Qwen-Image-2.0 runs on just 7 billion parameters, down from 20 billion in its predecessor, while unifying text-to-image generation and image editing into a single architecture.
The model generates natively at 2K resolution (2048×2048 pixels), supports prompts up to 1,000 tokens for complex layouts, and ranks at or near the top of AI Arena’s blind human evaluation leaderboard for both generation and editing tasks.
For enterprise buyers, the competitive dynamics are significant. Qwen-Image-2.0’s 7B parameter count means substantially lower inference costs when self-hosted — a critical consideration for organizations with data residency requirements or high-volume workloads.
The Qwen team’s previous model, Qwen-Image v1, was released under Apache 2.0 approximately one month after its initial announcement, and the developer community widely expects the same trajectory for v2.0. If open weights materialize, organizations could run a Nano Banana Pro-competitive image model on their own infrastructure without per-image API charges.
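If open weights do materialize, the API-versus-self-hosted decision becomes simple arithmetic. The sketch below is illustrative only; the GPU rental rate and throughput are assumed values for a ~7B-parameter model, not measured figures for Qwen-Image-2.0:

```python
# Rough API-vs-self-hosted break-even sketch for an open-weight model.
# ALL numbers below are illustrative assumptions except the $0.067/image
# Flash API price quoted earlier in the article.

API_PRICE_PER_IMAGE = 0.067    # Nano Banana 2 Flash tier (from article)
GPU_DOLLARS_PER_HOUR = 2.50    # assumed rental rate for a single GPU
IMAGES_PER_GPU_HOUR = 300      # assumed throughput for a ~7B model

def self_hosted_cost_per_image(gpu_rate: float, throughput: float) -> float:
    """Raw compute cost per image when renting a GPU at the given rate."""
    return gpu_rate / throughput

def break_even_images_per_hour(api_price: float, gpu_rate: float) -> float:
    """Hourly volume above which self-hosting beats the API on compute alone."""
    return gpu_rate / api_price

cost = self_hosted_cost_per_image(GPU_DOLLARS_PER_HOUR, IMAGES_PER_GPU_HOUR)
threshold = break_even_images_per_hour(API_PRICE_PER_IMAGE, GPU_DOLLARS_PER_HOUR)
print(f"Self-hosted: ${cost:.4f}/image; break-even ~{threshold:.0f} images/hour")
```

Note that this ignores engineering and operations overhead, which is exactly the cost that managed APIs absorb; the break-even point in practice sits well above the raw-compute threshold.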
The model’s unified generation-and-editing architecture also simplifies deployment. Rather than chaining separate models for creation and modification — the current industry norm — Qwen-Image-2.0 handles both tasks in a single pass, reducing latency and the quality degradation that occurs when outputs are passed between different systems.
Where Qwen-Image-2.0 currently trails is ecosystem integration. Google’s Nano Banana 2 launches today across the Gemini app, Google Search (AI Mode and Lens), AI Studio, the Gemini API, Google Antigravity, Vertex AI, Google Cloud, and Flow — where it becomes the default image generation model at zero credit cost. That breadth of distribution is difficult for any challenger to replicate, particularly one whose API access is currently limited to Alibaba Cloud’s platform.
What this means for enterprise AI image strategies
The simultaneous availability of Nano Banana 2 and Qwen-Image-2.0 creates a decision framework that IT leaders haven’t had before in the image generation space.
For organizations already embedded in Google’s cloud ecosystem, Nano Banana 2 is the obvious first evaluation. The cost reduction from Pro pricing, combined with native integration across Google’s product surface, makes it the path of least resistance for teams that need production-quality image generation without re-architecting their stack. The model’s text rendering capabilities make it particularly well-suited for marketing asset generation, localization workflows, and any application where legible in-image text is a requirement.
For organizations with data sovereignty concerns, high-volume workloads that make per-image API pricing prohibitive, or a strategic preference for open-weight models, Qwen-Image-2.0 presents a compelling alternative — provided Alibaba follows through on open-weight availability. The model’s smaller parameter count translates to lower GPU requirements for self-hosting, and its unified generation-editing architecture reduces pipeline complexity.
The wild card is Nano Banana Pro itself, which isn’t going away. Google AI Pro and Ultra subscribers retain access to the Pro model for specialized tasks, accessible via the regeneration menu in the Gemini app. For use cases demanding maximum visual fidelity and creative reasoning — think high-end creative campaigns or applications where every image needs to look bespoke — Pro remains the ceiling.
The provenance layer: a quiet but important enterprise differentiator
Buried in Google’s announcement is a detail that may matter more to enterprise legal and compliance teams than any quality benchmark: provenance tooling. Nano Banana 2 ships with SynthID watermarking — Google’s AI-generated content identification technology — coupled with C2PA Content Credentials, the cross-industry standard for content authenticity metadata.
Google reports that since launching SynthID verification in the Gemini app last November, the feature has been used over 20 million times to identify AI-generated images, video, and audio. C2PA verification is coming to the Gemini app soon as well.
For enterprises operating in regulated industries or jurisdictions with emerging AI transparency requirements, baked-in provenance is no longer optional. It’s a compliance checkbox — and one that self-hosted open-weight alternatives like Qwen-Image-2.0 don’t natively provide.
The bottom line
Nano Banana 2 doesn’t represent a generational leap in image generation quality. What it represents is the maturation of AI image generation from a creative novelty into a production-ready infrastructure component. By collapsing the cost and speed gap between Flash and Pro tiers while retaining the reasoning and text rendering capabilities that make these models useful for actual business workflows, Google is making a calculated bet: the next wave of enterprise AI image adoption will be driven not by the models that produce the most beautiful images, but by the ones that produce good-enough images fast enough and cheaply enough to deploy at scale.
With Qwen-Image-2.0 pushing from the open-weight flank and Nano Banana Pro holding the quality ceiling, Nano Banana 2 occupies exactly the middle ground where most enterprise workloads actually live. For IT decision-makers who’ve been waiting for the cost curve to bend, it just did.
Although it may only feel like yesterday, the Galaxy S24 Ultra is now two years old and has been succeeded by both the S25 Ultra and, more recently, the Galaxy S26 Ultra.
So what’s really new with the Galaxy S26 Ultra in comparison to its older sibling? Are there enough changes to warrant an upgrade?
To help you decide, we’ve compared the specs of the Galaxy S26 Ultra to the Galaxy S24 Ultra and noted the key changes that Samsung has made with its top-end flagship.
Otherwise, for more of an overview check out our best smartphones guide.
Spec             | Samsung Galaxy S26 Ultra            | Samsung Galaxy S24 Ultra
Battery          | 5000mAh                             | 5000mAh
Chipset          | Snapdragon 8 Elite Gen 5 for Galaxy | Snapdragon 8 Gen 3 for Galaxy
Front Camera     | 12MP                                | 12MP
Rear Camera      | 200MP + 50MP + 50MP + 10MP          | 200MP + 12MP + 50MP + 10MP
Screen Size      | 6.9 inches                          | 6.82 inches
Storage Capacity | 256GB, 512GB, 1TB                   | 256GB, 512GB, 1TB
UK RRP           | £1279                               | £1249
Wired Charging   | 60W                                 | 45W
Weight           | 214g                                | 232g
Price and Availability
Along with the rest of the Galaxy S26 series, the Galaxy S26 Ultra can be pre-ordered now ahead of its official launch on March 11th. Starting at £1279/$1279, you can nab the handset in a choice of four colours: Black, White, Sky Blue or Cobalt Violet. There are also a couple of Samsung-exclusive colours, including Pink Gold and Silver Shadow.
Unsurprisingly, the Galaxy S24 Ultra is no longer available to buy directly from Samsung’s official store. However, you can still nab the handset from third party retailers in both new and refurbished condition. The price will vary depending on the retailer and condition you opt for, as we’ve seen prices range from as little as £550 up to around £800.
Galaxy S26 Ultra is thinner and lighter
Not only is the Galaxy S26 Ultra thinner and lighter than the Galaxy S24 Ultra, it also claims the title of thinnest Ultra ever. It may not be as thin as the Galaxy S25 Edge, but at just 7.9mm it's noticeably slimmer in the hand than the 8.6mm-thick Galaxy S24 Ultra.
In addition, the Galaxy S26 Ultra weighs just 214g, whereas the Galaxy S24 Ultra weighs 232g. The difference comes down to the materials Samsung has used: while the Galaxy S24 Ultra sports a premium titanium frame, the Galaxy S26 Ultra has the same Armour Aluminium as the rest of the S26 range. Even so, we found the Galaxy S26 Ultra retains the premium feel the Ultra models are known for.
Different custom Snapdragon chips
Both the Galaxy S26 Ultra and Galaxy S24 Ultra are fitted with custom versions of Qualcomm chips. While the Galaxy S24 Ultra runs on the 2024 Snapdragon 8 Gen 3 for Galaxy, which was Qualcomm’s flagship chip at the time, the Galaxy S26 Ultra uses Qualcomm’s current top-end chip: Snapdragon 8 Elite Gen 5 for Galaxy.
As the Galaxy S24 Ultra was among the first to sport Samsung’s Galaxy AI toolkit, its chip needed to be powerful enough to ensure on-device AI ran as smoothly as possible. Fortunately, we concluded that to be the case with Snapdragon 8 Gen 3 for Galaxy. Not only did we find the likes of Circle to Search and Live Translation ran well, but the Galaxy S24 Ultra itself felt rapid in everyday use too.
Samsung Galaxy S24 Ultra. Image Credit (Trusted Reviews)
While we haven't tested the Galaxy S26 Ultra yet, Qualcomm's default Snapdragon 8 Elite Gen 5 chip is behind many of the best Android phones of 2026 and offers seriously fast and powerful performance. Even during intensive tasks like video editing or gaming, we've rarely noticed any slowdown with Snapdragon 8 Elite Gen 5. With this in mind, we expect the Galaxy S26 Ultra to offer similarly impressive performance.
Plus, naturally, the Galaxy S26 Ultra will be fitted with Galaxy AI features alongside some new additions like Now Nudge, which provides suggestions based on what you're doing on screen.
Samsung Galaxy S26 Ultra. Image Credit (Trusted Reviews)
Galaxy S26 Ultra has a privacy display
The Galaxy S26 Ultra's screen is slightly larger than the S24 Ultra's, measuring 6.9 inches compared to 6.82 inches. And that's not where the differences end: the Galaxy S26 Ultra is the first smartphone ever to sport a privacy display.
Essentially, the privacy display hides itself when it’s viewed at certain angles. That means although you’ll be able to perfectly see your content, prying eyes sitting next to you won’t. You can even adjust the level of privacy too, by either setting it on a per-app basis or by only blocking out specific areas on the screen. Or you can simply turn it off altogether.
Privacy Display on Galaxy S26 Ultra. Image Credit (Trusted Reviews)
This, combined with anti-reflective screen tech taken from the Galaxy S25 Ultra and Gorilla Armour 2 protection means the Galaxy S26 Ultra looks set to boast one of the best smartphone screens. Of course, we’ll need to spend more time with the phone before confirming this, but we’re certainly excited.
However, do keep in mind that the two are fitted with many of the same premium screen technologies including LTPO-enabled 1-120Hz refresh rate, QHD+ resolution and S-Pen support.
Galaxy S26 Ultra promises faster charging
Battery prowess has never been high on Samsung's list of priorities and, though the Galaxy S26 Ultra has some welcome improvements over the S24 Ultra, it still doesn't match the 7,000mAh-plus batteries found in the likes of the OnePlus 15 or Oppo Find X9 Pro.
Even so, we found the Galaxy S24 Ultra was easily an all-day device – in fact, we concluded that we’d end 16-hour days with around 50 to 60% left in the tank. While its 45W wired charging support isn’t particularly speedy, it still managed to get from 1-100% in just over 70 minutes.
Although we're yet to test the Galaxy S26 Ultra's battery, Samsung has upgraded the wired charging speed to 60W. According to Samsung, this means the S26 Ultra will take just 30 minutes to reach 75%. It's a welcome improvement, but the phone still falls short of many of the best Android phones.
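Samsung's "75% in 30 minutes" claim can be sanity-checked with basic arithmetic against the 60W peak rating. The nominal cell voltage below is an assumed typical value for phone lithium-ion packs, not a published Samsung spec:

```python
# Sanity check of the "75% in 30 minutes" claim against the 60W rating.
# ASSUMPTION: ~3.85V nominal cell voltage (typical, not confirmed by Samsung).

CAPACITY_MAH = 5_000
NOMINAL_VOLTS = 3.85

pack_wh = CAPACITY_MAH / 1_000 * NOMINAL_VOLTS   # pack energy in Wh
energy_to_75 = pack_wh * 0.75                    # energy for a 0-75% charge
avg_watts = energy_to_75 / 0.5                   # 30 minutes = 0.5 hours

print(f"Pack: {pack_wh:.2f} Wh; avg power for 75% in 30 min: {avg_watts:.1f} W")
```

The implied average of roughly 29W into the battery sits comfortably below the 60W peak, so the claim is plausible even before accounting for charging losses and the taper near full.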
The Galaxy S26 Ultra’s camera has had a slight tweak
Samsung’s Ultra handsets have repeatedly made their way into our best camera phones guide, and we expect this tradition will continue with the Galaxy S26 Ultra. Rather than a complete overhaul, the Galaxy S26 Ultra has made a few camera hardware improvements from the Galaxy S24 Ultra.
The most noticeable difference is the Galaxy S26 Ultra's 50MP ultrawide lens, whereas the Galaxy S24 Ultra sports a 12MP ultrawide. The Galaxy S24 Ultra's ultrawide might sound pretty average by comparison, and while it does a solid job in good lighting conditions, we found it let itself down when we tried to take photos at night.
The Galaxy S26 Ultra's 50MP ultrawide was actually first introduced with the Galaxy S25 Ultra. We found the biggest difference with the 50MP ultrawide was its ability to capture photos in darker conditions, so we expect to see the same results with the Galaxy S26 Ultra.
Otherwise, both handsets sport a 200MP wide lens plus 50MP and 10MP telephoto lenses, although Samsung has made some tweaks with the Galaxy S26 Ultra: it promises the wide lens now has an aperture that lets in 47% more light, while the 50MP telephoto should offer a 37% improvement in brightness.
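For readers who think in exposure stops, those percentage claims translate directly, assuming "brighter" means proportionally more light gathered (our reading, not Samsung's stated definition):

```python
# Converting Samsung's percentage brightness claims into exposure stops.
# ASSUMPTION: "47% brighter" means 1.47x the light gathered; one stop
# corresponds to a doubling of light.
import math

def stops_from_light_gain(gain: float) -> float:
    """Exposure stops corresponding to a multiplicative light gain."""
    return math.log2(gain)

main_lens = stops_from_light_gain(1.47)   # wide lens: ~0.56 stops
tele_lens = stops_from_light_gain(1.37)   # 50MP telephoto: ~0.45 stops
print(f"Main: +{main_lens:.2f} stops, telephoto: +{tele_lens:.2f} stops")
```

In other words, both gains are real but work out to roughly half a stop each, a modest rather than dramatic low-light improvement.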
Early Verdict
Although it isn’t a complete overhaul, the Galaxy S26 Ultra is fitted with plenty of improvements that make it a more appealing option over the Galaxy S24 Ultra. These tweaks include, but aren’t limited to, faster charging, a more capable camera and the latest top-end Qualcomm chip too.
That's not to say the Galaxy S24 Ultra has aged poorly, though. It remains a brilliant option that offers speedy performance, a still-solid camera setup and decent battery life, all for a much lower price.
At this stage, we'd suggest that if you want the best of the best, the Galaxy S26 Ultra is an easy recommendation. However, the Galaxy S24 Ultra remains a great smartphone that works admirably.
This week, the Uncanny Valley team dives into the feud that has been brewing between Anthropic and the Pentagon, and what it says about how the government interacts with tech companies. Later, Zoë Schiffer tells us why figuring out whether you are agentic or mimetic has become the new litmus test in Silicon Valley. Plus, we discuss the key takeaways from the State of the Union address and give a farewell to the TAT-8 undersea cables—the ones that made our modern internet possible.
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Brian Barrett: Hey, it’s Brian. Zoë, Leah, and I have really enjoyed being your new hosts these past few weeks, and we want to hear from you. If you like the show and have a minute, please leave us a review in the podcast or app of your choice. It really helps us reach more people. And for any questions and comments, you can always reach us at uncannyvalley@wired.com. Thank you for listening. On to the show.
Leah Feiger: Hey, how’s it going?
Zoë Schiffer: I feel great. Brian?
Brian Barrett: I feel terrific, and I know Leah does too, because Survivor‘s back tonight, another thing that we care about and you don’t.
Zoë Schiffer: How do you know I don’t? I mean, I don’t. I don’t, except for my best friend from childhood tried to go on it and then she didn’t get on, so it’s irrelevant.
Leah Feiger: Famously, one day I’m going to apply, and both Brian and our colleague Tim have assured me that I can leave for a month to the beaches of Fiji and come back and still keep my job.
Zoë Schiffer: I think most people would be like, Leah, you’re not going to survive out there, but they don’t know about your deep-sea-diving prowess.
Leah Feiger: I actually think I would be fine. I really, really want to do this. One day, you guys.
Brian Barrett: But Leah, it would require you to potentially kill some fish to eat them, which is not normally—
Leah Feiger: That’s OK.
Brian Barrett: Oh, OK.
Leah Feiger: No, no, no, no, fishing’s fine. Subsistence living, that’s very OK. It’s, like, the larger institutionalization of the mass murder of our sea that I take a bit of a bigger issue with.
Zoë Schiffer: And on that note, welcome to WIRED’s Uncanny Valley. I’m Zoë Schiffer, WIRED’s director of business and industry.
Brian Barrett: I’m Brian Barrett, executive editor.
Leah Feiger: And I’m Leah Feiger, senior politics editor.
As electric vehicles roll off assembly lines, a bottleneck sits upstream: lithium refinement. Turning raw lithium into the compounds needed for batteries is expensive, messy, and energy-intensive, but Mangrove Lithium, a Vancouver-based startup, says it has a better way. The company has developed an electrochemical refining process that converts lithium feedstocks into battery-grade lithium hydroxide.
Converting raw lithium to lithium hydroxide typically requires roasting spodumene—a mineral from which lithium is derived—at high temperatures, and then leaching it with acid to convert it to lithium sulfate. That compound then needs to be converted to lithium hydroxide. “It’s a thermochemical reaction that uses heavy amounts of reagent chemicals, and generates a sodium sulfate waste stream,” says Ryan Day, Mangrove Lithium’s director of operations.
Further tightening the bottleneck, the majority of the world’s lithium—60 to 70 percent—is now refined in China, and export restrictions and geopolitical tensions have disrupted supply chains in recent years. Shipping raw lithium overseas to be refined also adds to batteries’ total carbon footprint. A new model for lithium refining could reshape not just the economics of electric vehicles, but the geography and environmental footprint of the global battery supply chain.
Mangrove’s demo plant in British Columbia is scheduled to start production in the second half of 2026.
How Does Mangrove’s Refinement Work?
Mangrove replaces the conventional, resource-intensive reaction with a process that uses electricity, water, and oxygen. In an electrochemical cell, they flow brine through an electrolyzer, which consists of a metal box with three compartments between the cathode and anode. The compartments are separated by ion exchange membranes, semipermeable barriers that only allow certain ions to pass. Lithium sulfate flows through the central compartment, and the cell’s electric field splits the salt apart. “Lithium, which is a positive ion, will move across a membrane toward the cathode,” says Day. There, “we are reacting oxygen and water to create hydroxide ions, which join with the lithium from the salt to make lithium hydroxide.”
Meanwhile, on the opposite side of the cell, the sulfate—a negative ion—moves towards the anode, where water is being split to produce protons and oxygen gas. The protons combine with sulfate ions to make sulfuric acid.
“You run that process continuously, and over time you’re generating lithium hydroxide, which you can send to a crystallizer,” Day says. “There’s no significant waste product and all you’re feeding in is brine, water, oxygen, and electricity.” The sulfuric acid is recovered and can be circulated back upstream to leach more brine from the raw feed material.
In general, keeping the ion exchange membrane intact is one of the biggest challenges for scaling this type of process, says Feifei Shi, assistant professor of energy engineering at Penn State. Shi, who researches electrochemical-based refinement methods, notes that the approach can more easily activate the necessary reactions, but faces limitations for large-scale applications.
The electrochemical process separates out lithium by passing it through three compartments separated by semipermeable barriers. Mangrove Lithium
Mangrove’s Oxygen-Based Cathode
Mangrove’s key innovation and what enables the process is an oxygen-based cathode. “Driving the reaction requires detailed engineering,” says Day. The company designed an electrode that lets a gas and a liquid react together, using just enough water to make the oxygen reaction work—without adding so much that it floods the system and creates hydrogen gas instead.
The electrodes are made with a proprietary process that combines several dedicated layers, allowing a balanced flow of water and oxygen to reach the active catalyst sites. This design favors the oxygen reduction reaction for over 99.5 percent of the total cathode activity. It also reduces the amount of electricity needed to drive the process, because "oxygen reduction requires less voltage than water reduction," Day says.

Demand for battery minerals is surging beyond just lithium, with automakers competing for supplies of nickel, cobalt, graphite, and manganese. Simultaneously, utilities are deploying grid-scale batteries that use the same materials in even larger volumes. Refining capacity, not just mining, could become the critical choke point in this buildout, because battery makers require highly specified, ultra-pure compounds.
While Mangrove is initially targeting lithium, its electrochemical architecture is not inherently lithium-specific and could be adapted to other battery materials facing similar purification bottlenecks. Nickel and cobalt sulfate production, for example, still relies on multi-step precipitation and solvent-extraction processes that generate significant waste and require large reagent inputs. "It would work immediately in application to other alkali-metal salts," Day says.
Mangrove’s demo plant in British Columbia will make 1,000 tons per year of lithium hydroxide. If the company can scale its technology as it hopes, it could begin to reshape not just the battery supply chain, but the geopolitics of the energy transition.
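The cell chemistry described above implies a simple mass balance for that 1,000-ton target. A sketch under the net reaction Li2SO4 + 2 H2O → 2 LiOH + H2SO4 (hydroxide made at the cathode, acid recovered at the anode), using standard molar masses and ignoring yield losses:

```python
# Mass-balance sketch for a 1,000 t/yr lithium hydroxide plant.
# Net reaction (from the process described above, oxygen recirculated):
#   Li2SO4 + 2 H2O -> 2 LiOH + H2SO4
# Standard molar masses; real-world yield losses are ignored.

M_LIOH = 23.95     # g/mol, lithium hydroxide
M_LI2SO4 = 109.94  # g/mol, lithium sulfate

def feed_tons(lioh_tons: float) -> float:
    """Tons of lithium sulfate feed per target tonnage of LiOH (2:1 mole ratio)."""
    moles_lioh = lioh_tons / M_LIOH     # tonne-moles of product
    return (moles_lioh / 2) * M_LI2SO4  # one Li2SO4 yields two LiOH

print(f"Li2SO4 feed for 1,000 t LiOH: ~{feed_tons(1000):.0f} t/yr")
```

So the demo plant's output corresponds to roughly 2,300 tons of lithium sulfate passing through the electrolyzer per year, a useful sense of scale for the brine-handling side of the process.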
Among the titles added are three drama shows that really stood out to me. If you want a gripping story to keep you on the edge of your seat, there’s plenty to lose yourself in with my three new Netflix picks. They’re different types of dramas, but I think there’s something for everyone here.
Check out my three February recommendations below.
Lead Children
Lead Children | Official Trailer | Netflix – YouTube
Creator: Maciej Pieprzyca
Cast: Joanna Kulig, Kinga Preis, Michał Żurawski, Agata Kulesza, Marian Dziędziel, and Zbigniew Zamachowski
Seasons: 1
Age Rating: TV-MA
This Polish-language series is based on the life of Jolanta Wadowska-Król, a doctor who noticed that the children living near the Szopienice Steelworks were becoming very ill with lead poisoning.
Following this horrifying discovery, Wadowska-Król does everything in her power to save the sick children and confront those responsible. It’s a harrowing yet inspiring watch as we witness one woman who refused to back down, even though fighting the oppressive communist state of 1970s Upper Silesia was a seemingly impossible task.
You may not be familiar with Wadowska-Król’s work, but this excellent drama will teach you so much about her.
The Lincoln Lawyer
The Lincoln Lawyer Season 1 Trailer | Rotten Tomatoes TV – YouTube
Creator: David E. Kelley
Cast: Manuel Garcia-Rulfo, Neve Campbell, Becki Newton, Jazz Raycole, Angus Sampson, Yaya DaCosta
Seasons: 4 (5 on the way)
Age Rating: TV-MA
The fourth season of The Lincoln Lawyer is now on Netflix, making it the perfect time for you to watch it from the beginning if you haven’t seen it yet. If you have, then you can skip straight to the new episodes, so everybody wins.
The Lincoln Lawyer is an addictive series that follows lawyer Mickey Haller as he works in the back of his Lincoln Navigator and takes on various cases across Los Angeles. If you love the drama of an intriguing legal case, you won’t want to miss this.
Better yet, it’s been renewed for season 5, so there’s plenty more where that came from.
The Way Home
Preview – The Way Home – Hallmark Channel – YouTube
Finally, fantasy lovers will get a real kick out of The Way Home, which currently has a 100% Rotten Tomatoes score. It was originally a Hallmark release, and you can now watch all three seasons on Netflix.
Time travel is at the heart of this drama as we follow three generations of strong, willful, and independent women. They embark on a journey to find their way back to each other and learn many important lessons along the way.
A fourth season of The Way Home has been confirmed, so if you enjoy it, more episodes are heading our way soon.
It may not feel like it, but the Ford Mustang Mach-E has become a bit of an elder statesman in the electric crossover segment. Ford first unveiled this ambitious EV that controversially borrowed the Mustang’s name in 2019 and, in the years since, has given the now-familiar Mach-E some minor tweaks, including the addition of an exciting, rally-focused performance model for 2024.
The latest change that Ford’s given the Mach-E, though, feels like more of a head-scratcher or, for lack of a better word, a cash grab. It doesn’t involve adding a new feature to the car, but rather taking one away and charging buyers extra if they’d like it back. For 2026, the Mustang Mach-E’s formerly standard front cargo compartment, better known as a “frunk,” is now a separate option that will set buyers back an extra $495.
Yes, this is a relatively small change in the context of a car that starts at nearly $40,000, but removing any formerly standard feature (without an equivalent price drop) and then charging extra for it is generally not something that buyers appreciate. But Ford is justifying the move by arguing that few buyers were actually using the Mach-E’s frunk in the first place.
What’s in a frunk?
There are a lot of valid arguments that could be made against electric vehicles when comparing them to gas cars, but even the most dedicated EV critics would have to admit that the availability of a frunk is one of the best benefits of an electric vehicle. Not every EV on the market has a frunk, but many use the absence of an engine to turn the underhood area into an extra cargo compartment: as the name suggests, a front trunk.
Every Tesla currently on sale has a frunk, and Ford’s own F-150 Lightning has a massive “Mega Power Frunk” where its engine would be. Though not nearly as large as the Lightning’s frunk, the Mach-E has always had extra cargo space up front, and we listed this frunk as one of the Mach-E’s 10 coolest features back in 2022. Ford even filled a Mach-E’s frunk with shrimp and buffalo wings for a 2020 publicity stunt.
Adding extra cargo space without encroaching on the cabin seems like it'd be a win-win and a popular feature. But Ford found that Mach-E buyers were not using their frunks nearly as much as expected. According to Ford, this spurred the decision to change the frunk from a standard feature to a standalone extra on the options sheet.
Smart decision or cash grab?
There wouldn’t be any issue with this move if Ford simply dropped the Mach-E’s price by $495 while making the frunk a $495 option, but that’s not quite how Ford is going about it. While Ford did drop Mach-E prices slightly for 2026, adding the frunk as an option on the base RWD 2026 Mach-E makes it around $350 more expensive than the identical 2025 model. In a similar move to the frunk change, Ford has also removed the 2026 Mach-E Rally’s standard rear spoiler and made it a standalone option.
Are these changes likely to have a big impact on Mach-E demand on their own? Probably not, given that many buyers are already conditioned to expect car prices that creep up each year. But our reviews have shown that the Mach-E lags behind its competition in terms of value, and these price bumps certainly won’t help its case there.
Given the Mach-E's relative age, it was once thought this EV would be due for a new generation, or at least a significant refresh, by 2026, but industry reports suggest it could be a while longer before Ford redesigns the Mach-E. Instead, it's said that Ford will continue working on the current platform to cut costs and increase profitability — and these small but notable equipment moves seem to back up that pivot.
Seattle startup Read AI launched a new “Digital Twin” product that works through email and can help schedule meetings, answer questions, and keep conversations moving.
The AI bot, branded as “Ada,” builds on the company’s existing meeting and productivity tools. Read AI says it’s the largest deployment of a digital twin product to date.
Digital Twin enters a crowded market of AI agents and workplace copilots from giants like Microsoft and Google, along with startups that offer AI‑driven scheduling, inbox triage and autonomous task management. Read is trying to differentiate by centering the agent in email, tightly coupling it to meeting and document context, and offering enterprise branding such as a custom name and company domain for customers with 25 or more licenses.
Here’s how it works. Users cc ada@read.ai on a thread and can ask it to find time on everyone’s calendars, draft replies, or answer questions using context from their meetings, email, files, CRMs and other connected systems. Read says its platform pulls from more than 20 native integrations and, on average, about 10,000 documents per user.
For anything beyond scheduling, Ada “sidebars” with the user first, proposing draft responses and waiting for approval before sending them, and it must be cc’d on email threads where it takes action. The idea is to let the AI cover for you when you’re too busy or out of the office, while giving you veto power on anything sensitive or high‑stakes.
Read AI CEO David Shim likened Digital Twin to OpenClaw, an open source AI digital assistant tool that works with messaging apps and went viral this month. "What OpenClaw did for tinkerers, Digital Twin brings to the mainstream," he told GeekWire.
Shim framed the launch as an evolution from “AI assistant” to something closer to a software colleague that can act on your behalf. In internal beta, he said a quarter of user interactions with Ada were just to say “thank you,” a signal that people were treating the product more like a teammate than a tool.
He said the Digital Twin launch shifts Read AI from “a system of record for productivity” to an “extension of you.”
“This is the moment we change the way we interact with AI, from pull to push, where the agent acts on your behalf,” he said.
Looking ahead, Shim is betting that digital twins — and AI assistants more broadly — will proliferate.
“If I said internet access was a human right 20 years ago, I’d be laughed out of the room — today, it’s an expected value,” he said. “We believe that digital twins will be a human right, akin to internet access, in the next few years, delivering a level playing field when it comes to AI and productivity.”
Founded in 2021 by Shim, Robert Williams, and Elliott Waldron, Read AI has raised more than $80 million and landed major enterprise customers for its cross-platform AI meeting assistant and productivity tools. It has 5 million monthly active users.
Google just debuted Nano Banana 2, an updated version of its AI image generator. It combines the abilities of Google’s previous release, Nano Banana Pro—like text rendering and web searching—with speedier image generation. This tool will be the new default in Google’s Gemini chatbot.
The first image model from Google under the Nano Banana moniker dropped last August, and the Pro version arrived three months later. The AI tool was widely adopted online to alter photos of real people, from generating custom action figures to nostalgic images of people hugging younger versions of themselves.
Nano Banana 2 is not only faster at crafting images, it’s also a more powerful photo editor. Despite some rough edges and unconvincing generations in my initial hands-on experience through Gemini, Google’s latest release marks the continued improvement of photorealistic AI tools that can manipulate existing images and serves as a stark reminder to always scrutinize unverified images you see online.
Getting Started
If you want to try the new image model, the easiest way to access Nano Banana 2 is through the Gemini app or website. You can either click the banana emoji to generate images or just put the request in your prompts to the chatbot. This new image model is also available through Google’s Search tools, AI Studio, Cloud, and other services.
Google says the Nano Banana 2 image generator pulls real-time information from the web, which can be useful for generating infographics. To test this, I asked Gemini to generate a custom weather report for my upcoming weekend getaway. Here’s my prompt:
I’m going skiing in Dodge Ridge this weekend with some friends. Could you create an infographic that covers the weather conditions?
Nano Banana Pro made it easier to generate images with text—pulled from data on the web—and Nano Banana 2 makes that image generation speedier than ever.
At first glance, the result looks decent. No wobbly text or disfigured skiers in the background. The forecast for each day includes expected temperatures as well as wind and snow conditions. A small disclaimer at the bottom of the infographic reads, “Weather and conditions subject to change. Check official sources.”
I’m glad I did! When I looked up the forecast for this weekend from a different source, I realized that Gemini had messed up the dates and pulled the Google Weather context from last week. When I pointed out this mistake to the bot, it used Nano Banana 2 to replace the text from its first attempt with the correct weather data.
Tub Time
If you want more details about my getaway, I’m headed to a cozy ski lodge with friends who are skiers. I’m a novice and still deciding whether to actually hit the slopes or just turn into a wrinkly prune sitting in the hot tub all day long. Maybe Nano Banana 2 could make a dumb meme to send to the group chat? I uploaded a photo of myself to Gemini with this prompt:
Take this image and put me in a cozy outdoor jacuzzi surrounded by snow. Make my skin comically wrinkly from sitting in there for hours.
If you’re an esteemed Android user like me, and you felt left out of yesterday’s deal on the AirPods Pro 3, I’ve got you covered today with an even bigger discount on the Pixel Buds Pro 2. Both Amazon and Best Buy have the hazel color marked down from $229 to $180, a $49 discount on Google’s most upgraded wireless earbuds.
Photograph: Julian Chokkattu
The first change you’ll notice from the previous generation Pixel Buds Pro is that the newer model is much lighter, and the buds are 27 percent smaller. As a result, these are an excellent choice for anyone with small ears, and they stay put super well. Reviewer Parker Hall “had no problem doing hours of tree pruning and going on long sweaty runs in Portland’s early fall heat wave.”
With some help from top-notch physical sound isolation, the active noise canceling on these is just as good as Apple's and even goes toe-to-toe with heavy hitters like Bose and Sony. The transparency mode works just as well, too, with a wider range and clearer audio than a lot of other headphones offer. When it's time to actually turn up the tunes, you can enjoy a wide, natural soundstage that has excellent detail in the midrange and clear, sparkling treble.
The Gemini integration, unfortunately, leaves a bit to be desired. It’s not the smoothest experience, particularly when asking multiple questions, and the Pixel Buds Pro 2 aren’t offering anything that other earbuds can’t do. Apple’s live translations and heart rate monitors are more useful features, but if you’re on Android, you’re locked out of them anyway.
If you’re interested in upgrading your earbud game, and you already have a Pixel, you can grab the Pixel Buds Pro 2 in hazel for $180 from either Amazon or Best Buy. If that color doesn’t suit you, I also spotted lesser discounts on the peony color for $189, or the porcelain color for $210. For anyone who isn’t already sold on the Pixel Buds Pro 2, make sure to swing by our guide to the best wireless earbuds, with picks for both Apple and Android owners.
A Guru3D forums user named "The Creator" recently shared a beta DLL file for an unannounced update to AMD's FSR upscaler. It didn't take long for users on Guru3D and Reddit to circulate mirrors and begin publishing side-by-side comparisons. The early verdict: noticeably less blur than the public release at…