The new MacBook Pro will be offered in 14-inch and 16-inch configurations, both featuring Liquid Retina XDR displays with up to 1600 nits of peak HDR brightness. Storage capacity starts at 1 TB for the M5 Pro model and 2 TB for the M5 Max variant.
There are a ton of laptops on the market at any given moment, and almost all of those models are available in multiple configurations to match your performance and budget needs. If you’re feeling overwhelmed with options when looking for a new laptop, it’s understandable. To help simplify things for you, here are the main things you should consider when you start looking.
Price
The search for a new laptop for most people starts with price, and laptop pricing is on the rise. If the statistics that chipmaker Intel and PC manufacturers hurl at us are correct, you’ll be holding onto your next laptop for at least three years. If you can afford to stretch your budget a little to get better specs, do it. That stands whether you’re spending $500 or more than $1,000. In the past, you could get away with spending less upfront and look to upgrade memory and storage in the future. Laptop makers are increasingly moving away from making components easily upgradable, so it’s best to get as good a laptop as you can afford from the start.
Generally speaking, the more you spend, the better the laptop. That could mean better components for faster performance, a nicer display, sturdier build quality, a smaller or lighter design from higher-end materials or even a more comfortable keyboard. All of these things add to the cost of a laptop. I’d love to say $500 will get you a powerful gaming laptop, for example, but that’s not the case. Right now, the sweet spot for a reliable laptop that can handle average work, home office or school tasks is between $700 and $800, while a reasonable model for creative work or gaming starts at about $1,000. The key is to look for discounts on models in all price ranges so you can get more laptop features for less.
Operating system
Choosing an operating system is part personal preference and part budget. For the most part, Microsoft Windows and Apple’s MacOS do the same things (except for gaming, where Windows is the winner), but they do them differently. Unless there’s an OS-specific application you need, go with the one you feel most comfortable using. If you’re not sure which that is, head to an Apple store or a local electronics store and test them out. Or ask friends or family to let you test theirs for a bit. If you have an iPhone or iPad and like it, chances are you’ll like MacOS too.
When it comes to price and variety (and, again, PC gaming), Windows laptops win. If you want MacOS, you’re getting a MacBook. While Apple’s MacBooks regularly top our best lists, the least expensive one is the M1 MacBook Air for $999. It is regularly discounted to $750 or $800, but if you want a cheaper MacBook, you’ll have to consider older refurbished ones.
Windows laptops can be found for as little as a couple of hundred dollars and come in all manner of sizes and designs. Granted, we’d be hard-pressed to find a $200 laptop we’d give a full-throated recommendation to, but if you need a laptop for online shopping, email and word processing, they exist.
If you are on a tight budget, consider a Chromebook. ChromeOS is a different experience than Windows; make sure the applications you need have a Chrome, Android or Linux app before making the leap. If you spend most of your time roaming the web, writing, streaming video or using cloud-gaming services, they’re a good fit.
Size
Remember to consider whether having a lighter, thinner laptop or a touchscreen laptop with good battery life will be important to you in the future. Size is primarily determined by the screen — hello, laws of physics — which in turn factors into battery size, laptop thickness, weight and price. Keep in mind other physics-related trade-offs too: an ultrathin laptop isn’t necessarily lighter than a thick one, and you can’t expect a wide array of ports on a small or ultrathin model.
Screen
When it comes to deciding on a screen, there are a variety of considerations: how much you need to display (which is surprisingly more about resolution than screen size), what types of content you’ll be looking at and whether you’ll be using it for gaming or creative work.
You want to optimize pixel density; that’s the number of pixels per inch the screen can display. Although other factors contribute to sharpness, a higher pixel density usually means sharper rendering of text and interface elements. (You can easily calculate the pixel density of any screen at DPI Calculator if you don’t feel like doing the math, and the site also shows you the formula it uses.) We recommend a pixel density of at least 100 pixels per inch (ppi) as a rule of thumb.
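The calculation itself is short: the diagonal pixel count divided by the diagonal size in inches. A minimal sketch (the helper name is ours, not from the calculator mentioned above):

```python
import math

def pixel_density(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution divided by the diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# A 14-inch 1920x1200 panel comfortably clears the 100 ppi rule of thumb:
print(round(pixel_density(1920, 1200, 14)))  # → 162
```

Plug in any screen you're considering; a budget 15.6-inch 1366x768 panel lands right around the 100 ppi floor, which is why text on those machines looks noticeably softer.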
Because of the way Windows and MacOS scale for the display, you’re frequently better off with a higher resolution than you’d think. You can always make things bigger on a high-resolution screen, but you can never make them smaller — to fit more content in the view — on a low-resolution screen. This is why a 4K, 14-inch screen may sound like unnecessary overkill, but may not be if you need to, say, view a wide spreadsheet.
If you need a laptop with relatively accurate color that displays the most colors possible or that supports HDR, you can’t simply trust the specs. Manufacturers usually fail to provide the necessary context to understand what the specs they quote mean. You can find a ton of detail about considerations for different types of screen uses in our monitor buying guides for general-purpose monitors, creators, gamers and HDR viewing.
Processor
The processor, aka the CPU, is the brains of a laptop. Intel and AMD are the main CPU makers for Windows laptops, with Qualcomm as a new third option with its Arm-based Snapdragon X processors. Both Intel and AMD offer a staggering selection of mobile processors. Making things trickier, both manufacturers have chips designed for different laptop styles, like power-saving chips for ultraportables or faster processors for gaming laptops. Their naming conventions will let you know what type is used. You can head to Intel’s or AMD’s sites for explanations so you get the performance you want. Generally speaking, the faster the processor speed and the more cores it has, the better the performance will be.
Apple makes its own chips for MacBooks, which makes things slightly more straightforward. As with Intel and AMD chips, though, you’ll still want to pay attention to the naming conventions to know what kind of performance to expect. Apple uses its M-series chipsets in Macs. The entry-level MacBook Air uses an M1 chip with an eight-core CPU and seven-core GPU. The current models have M2-series silicon that starts with an eight-core CPU and 10-core GPU and goes up to the M2 Max with a 12-core CPU and a 38-core GPU. Again, generally speaking, the more cores it has, the better the performance.
Battery life has less to do with the number of cores and more to do with CPU architecture, Arm versus x86. Apple’s Arm-based MacBooks and the first Arm-based Copilot Plus PCs we’ve tested offer better battery life than laptops based on x86 processors from Intel and AMD.
Graphics
The graphics processor (GPU) handles all the work of driving the screen and generating what gets displayed, as well as speeding up a lot of graphics-related (and increasingly, AI-related) operations. For Windows laptops, there are two types of GPUs: integrated (iGPU) or discrete (dGPU). As the names imply, an iGPU is part of the CPU package, while a dGPU is a separate chip with dedicated memory (VRAM) that it communicates with directly, making it faster than sharing memory with the CPU.
Because the iGPU splits space, memory and power with the CPU, it’s constrained by the limits of those. It allows for smaller, lighter laptops, but doesn’t perform nearly as well as a dGPU. There are some games and creative software that won’t run unless they detect a dGPU or sufficient VRAM. Most productivity software, video streaming, web browsing and other nonspecialized apps will run fine on an iGPU, though.
For more power-hungry graphics needs, like video editing, gaming and streaming, or design, you’ll need a dGPU. Only two companies really make them, Nvidia and AMD, though Intel also offers some discrete GPUs based on the Xe-branded (or the older UHD Graphics-branded) iGPU technology in its CPUs.
Memory
For memory, we highly recommend 16GB of RAM (8GB absolute minimum). RAM is where the operating system stores all the data for currently running applications, and it can fill up fast. After that, it starts swapping between RAM and SSD, which is slower. A lot of sub-$500 laptops have 4GB or 8GB, which in conjunction with a slower disk can make for a frustratingly slow Windows laptop experience. Also, many laptops now have the memory soldered onto the motherboard. Most manufacturers disclose this, but if the RAM type is LPDDR, assume it’s soldered and can’t be upgraded.
Some PC makers will solder memory on and also leave an empty internal slot for adding a stick of RAM. You may need to contact the laptop manufacturer or find the laptop’s full specs online to confirm. Check the web for user experiences, too: the slot may still be hard to reach, or it may require nonstandard or hard-to-find memory, among other pitfalls.
Storage
You’ll still find cheaper hard drives in budget laptops and larger hard drives in gaming laptops, but faster solid-state drives (SSDs) have all but replaced hard drives in laptops. They can make a big difference in performance. Not all SSDs are equally speedy, and cheaper laptops typically have slower drives. If the laptop has only 4GB or 8GB of RAM, it may end up swapping to that drive and the system may slow down quickly while you’re working.
Get what you can afford, and if you need to go with a smaller drive, you can always add an external drive or two down the road or use cloud storage to bolster a small internal drive. The one exception is gaming laptops: We don’t recommend going with less than a 512GB SSD unless you really like uninstalling games every time you want to play a new game.
One of the ongoing rumors and scandals in professional cycle sport concerns “motor doping” — the practice of concealing an electric motor in a bicycle to provide the rider with an unfair advantage. It’s investigated in a video from [Global Cycling Network], in which they talk about the background and then prove it’s possible by creating a motor-doped racing bike.
To do this they’ve recruited a couple of recent engineering graduates, who get to work in a way most of us would be familiar with: prototyping with a set of 18650 cells, some electronics, and electromagnets. The result uses what they call a “magic wheel,” which features magnets embedded in its rim that engage with hidden electromagnets. It gives a boost of just under 20 W, which doesn’t sound like much but could deliver those crucial extra seconds in a race.
Perhaps the most interesting part is the section which looks at the history of motor doping with some notable cases mentioned, and the steps taken by cycling competition authorities to detect it. They use infra-red cameras, magnetometers, backscatter detectors, and even X-ray machines, but even these haven’t killed persistent rumors in the sport. It’s a fascinating video we’ve placed below the break, and we thank [Seb] for the tip. Meanwhile the two lads who made the bike are looking for a job, so if any Hackaday readers are hiring, drop them a line.
If all you want is a basic WiFi extender that gets some level of network connectivity to remote parts of your domicile, then it might be tempting to get some of those $5, 300 Mbit extenders off Temu, as [Low Level] recently did for a security audit. Naturally, as he shows in the subsequent analysis of its firmware, you really don’t want to stick this thing into your LAN. In this context it’s also worrying that the product page claims that over 100,000 of these have been sold.
The security audit starts by using $(reboot) as the WiFi password, just to see whether the firmware passes this value to a shell without sanitizing it. Shockingly, this soft-bricks the device with an infinite reboot loop until a factory reset is performed by long-pressing the reset button. Amusingly, after this the welcome page changed to the ‘Breed web recovery console’ interface, in Chinese.
Here we also see that it uses a Qualcomm Atheros QCA953X SoC, which incidentally is OpenWRT compatible. On this new page you can perform a ‘firmware backup’, making it easy to dump and reverse-engineer the firmware in Ghidra. Based on this code it was easy to determine that full remote access to these devices was available due to a complete lack of sanitization, proving once again that a lack of input sanitization is still the #1 security risk.
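The class of bug is easy to illustrate. As a hedged sketch (the extender's actual firmware is native code, and the helper names here are ours), the difference between splicing untrusted input straight into a shell line and quoting it first:

```python
import shlex

def build_cmd_unsafe(password: str) -> str:
    # The pattern the firmware appears to use: raw interpolation, so a
    # password like "$(reboot)" becomes a command substitution when the
    # resulting line is handed to a shell.
    return f"set_wifi_key {password}"

def build_cmd_safe(password: str) -> str:
    # shlex.quote wraps the value so a shell treats it as one literal word.
    return f"set_wifi_key {shlex.quote(password)}"

print(build_cmd_unsafe("$(reboot)"))  # set_wifi_key $(reboot)
print(build_cmd_safe("$(reboot)"))    # set_wifi_key '$(reboot)'
```

The safe variant costs one function call; avoiding the shell entirely (passing arguments as a list rather than a command string) is even better where the platform allows it.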
In the video it’s explained that they tried to find and contact the manufacturer about these security issues, but this proved basically impossible. That probably leaves thousands of these vulnerable devices scattered around on networks, but on the bright side they could be nice targets for OpenWRT and custom firmware development.
A record $189 billion of global venture capital flowed to startups in February, according to the report. AI startups overall raised $171 billion, or 90% of the capital raised last month. It’s a stunning number that feels like only the start.
That record spending was more than three times the global VC spend in January, and was dominated by mammoth funding rounds from just three companies: OpenAI, Anthropic, and Waymo.
OpenAI’s latest $110 billion raise led the pack. It was one of the largest private rounds ever raised and valued the company at $730 billion. Its rival Anthropic also nabbed a $30 billion Series G at a $380 billion valuation. Lastly, Waymo raised $16 billion at a valuation of $126 billion. These three companies alone were responsible for 83% of the venture dollars raised last month.
The amount raised by just OpenAI, Anthropic, and Waymo last month was one-third of the total $425 billion venture spend in 2025, according to Crunchbase.
The 85-inch model supports large vertical 4K storytelling for high-traffic commercial spaces
Samsung Electronics America has announced availability of its Spatial Signage, a commercial display system that delivers glasses-free 3D visuals in physical environments.
The company says the technology can change how organizations approach visual communication in retail, museums, and large venues.
Unlike conventional 3D installations that rely on bulky housings, this system operates within a slim 2-inch panel.
3D Displays for commercial spaces
At the core is Samsung’s patented 3D Plate technology, which uses a custom optical layer to bend light and generate perceived depth directly from the screen surface.
The approach removes the need for wearable devices while maintaining a flat-panel structure suitable for commercial interiors.
Conventional 3D displays often depend on large, box-like enclosures that restrict placement options and disrupt architectural aesthetics.
The 85-inch model, identified as SM85HX, offers 4K UHD resolution at 2,160 x 3,840 pixels in a 9:16 portrait orientation.
Samsung says this configuration supports large-format storytelling in high-visibility locations.
The company also confirmed that 32-inch and 55-inch versions will follow later in 2026.
Samsung states Spatial Signage is built for high-traffic indoor environments where lighting conditions vary throughout the day.
The display runs on the company’s Quantum Processor, supporting 4K upscaling, HDR refinement, and 16-bit color mapping.
An anti-glare panel helps maintain brightness and image clarity under direct indoor lighting.
The system integrates Samsung Visual eXperience Transformation, known as VXT, including an AI Studio tool that converts static images into video formatted for the display.
Samsung says the software refines shadows, adjusts margins, and enhances background treatments to strengthen perceived depth.
The system handles content optimization automatically, enabling remote campaign updates without additional production tools.
Samsung has shown strong interest in large displays and earlier released the 75-inch 5K ePaper display, described as the largest of its kind, which also integrates an internal battery.
According to the company, a gap in the display industry is driving demand for a better in-store experience.

Citing reports, Samsung states that 65% of retailers are not satisfied with current display technology, which falls short of modern expectations.

The company says it is committed to launching displays that deliver a seamless experience and improve engagement.
“Physical spaces are becoming strategic platforms for engagement, storytelling, and brand connection,” said David Phelps, Head of Display Solutions, Samsung Electronics America.
“As consumer expectations rise, organizations need to do more than showcase content — they need to create presence and impact. Spatial Signage reflects the next evolution of commercial displays, helping businesses differentiate and create meaningful connections that drive results.”
Whether you prefer to listen to your favourite tracks or to podcasts on your commute, these noise-cancelling earbuds are for you.
There are tons of earbuds out there, but if you want the best of the best in terms of features, comfort and more, then the new Samsung Galaxy Buds 4 Pro are a strong pick.
The Galaxy Buds 4 Pro now come with a free $30 Amazon gift card, making the offer even sweeter
Pick up the Samsung Galaxy Buds 4 Pro on Amazon and you’ll get a bonus $30 gift card included, turning a top‑tier pair of earbuds into an even more appealing offer.
Aside from looking incredibly chic in black, the Galaxy Buds 4 Pro pack a wealth of features that will make any type of audio you want to listen to sound incredible.
Almost everything is centred around the use of AI when it comes to the Buds 4 Pro.
When listening to music, the AI can enhance the audio through built-in Hi-Res Audio support. Pair this with the two-way speaker system found within the Buds 4 Pro, and you’ll be able to hear every nuance in all its glory.
Active Noise Cancellation (ANC) 2.0 also makes an appearance, doing a far better job at filtering through background noise than standard ANC, giving the audio you hear a chance to really open up.
As a final addition, the Buds 4 Pro have live translation capabilities so you can hear a voice in your native tongue, while someone else speaks in a language you don’t understand. This could be a huge boon for anyone who travels a lot and relies on these types of features during their journeys.
The earbuds have also been optimised for comfort, so you’ll forget you were even wearing them after a short while. They even have a certified IP57 rating, so you don’t have to worry too much if they encounter dust or even a bit of water.
Backed by a two-year warranty on the US version of the device, the Samsung Galaxy Buds 4 Pro give you peace of mind with your investment.
As data-center developers frantically seek to secure power for their operations, one startup is proposing a novel solution: Build them into floating offshore wind turbines.
San Francisco–based offshore wind-power developer Aikido Technologies today announced its plans to start housing data centers in the underwater tanks that keep its turbine platforms afloat. The turbines will supply the power for the servers, and onboard batteries and grid connection will provide backup.
The company’s first prototype, a 100-kilowatt unit, is scheduled to launch in the North Sea off the coast of Norway by the end of this year. A 15-to-18-megawatt project off the coast of the United Kingdom may follow in 2028.
Aikido is one of several companies planning data centers in unusual places—underwater, on floating buoys, in coal mines and now on offshore wind turbines. The creativity stems from the forces of several trends: rapidly rising energy demand from data centers, the need for domestic renewable power production, and limited real estate.
The North Sea serves as an ideal first spot for floating, wind-powered data centers because European policymakers and companies are looking to regain domestic control over energy production. They’re also looking to host an AI economy on servers within the continent’s boundaries. Floating wind platforms keep the compute out of sight while tapping the stronger, more consistent air streams that blow over deep waters, where traditional, seabed-mounted turbine monopiles can’t go.
“A lot of energy in the clean-energy space is focused on powering AI data centers quickly, reliably, and cleanly in a way that does not upset neighbors and remains safe, fast, and cheap,” says Ramez Naam, an independent clean-energy investor who does not have a stake in Aikido. “Aikido has that, and a smart team,” he says.
Floating Wind-Power Designs Evolve
Aikido’s design builds on many iterations tested by the growing floating wind industry. When Norwegian energy giant Equinor finished construction on the world’s first floating wind farm in 2017, it kept the turbines upright with ballasted steel columns extending 78 meters into the water—a design called a spar platform. This gave it a dense mass like the keel of a boat. Since then, the floating wind industry has largely coalesced around a semisubmersible design based on oil and gas platforms. Semisubmersibles don’t go as deep as spar platforms; instead, they extend buoyancy horizontally. Anchors, chains, and ropes keep the platform floating within a certain radius.
Aikido is taking the semisubmersible approach. Its football-field-size platform holds the turbine in the center, and three legs extend tripod-like outward, like a Christmas-tree stand. At the end of each leg is a ballast that reaches 20 meters deep. This holds tanks largely filled with fresh water to maintain the platform’s buoyancy in the salty ocean.
The data centers will go in the upper part of each ballast tank. There’s room for a 3- to 4-MW data hall in each tank, giving the platform a combined compute of 10 to 12 MW. Below the data halls is an open chamber used as a safety barrier, and below that sit the freshwater tanks. The water is piped up to the data center for liquid cooling of the servers. The warmed water is then funneled back down the ballast into the tank. There, proximity to the cold ocean water cools it again as the heat is conducted out through the tank’s steel walls.
“We have this power from the wind. We have free cooling. We think we can be quite cost competitive compared to conventional data-center solutions,” says Aikido CEO Sam Kanner. “This crunch in the next five years is an opportunity for us to prove this out and supply AI compute where it’s needed.”
One challenge, he says, is that liquid cooling can’t cover all the data center’s needs. For example, heat generated from Ethernet switches that connect the GPUs can’t be liquid-cooled with commercially available technology. So Aikido installed an air-conditioning method for that.
Another challenge is the marine environment, which is “pretty brutal to engineer around because there’s the increased salinity, there’s debris, and there’s various kinds of corrosion and fouling of metal piping that you wouldn’t have in a freshwater environment,” says Daniel King, a research fellow at the Foundation for American Innovation in Washington who focuses on AI infrastructure.
Offshore Data Centers Face Challenges
Aikido’s plan avoids the prickly not-in-my-backyard complaints that are dogging both onshore wind and data-center projects. It might also circumvent some inquiries into water usage and power demand, or so Aikido’s thinking goes.
But it might not be that easy. “Instinctively many people reach for offshore or even orbital outer-space data centers as a way to circumvent the typical burdens of environmental reviews,” says King. “But there could be more or additional requirements around discharging heat and the effects that has on marine life that are different from the considerations of a terrestrial data center. It’s unclear to me whether this actually makes life easier or harder for a developer.”
Prefabricated data halls could be installed quayside, followed by final electrical and plumbing connections to commission the data center. [Image: Aikido]
Aikido’s “design choice to use the fresh water in the ballast as a working fluid is a novel one” that, thanks to the closed-loop system, may “alleviate some of the engineering problems you see when a really high temperature fluid is pumping its heat directly into a marine environment,” King says.
Offshore sites are also vulnerable to sabotage, King notes. Since Russia’s invasion of Ukraine, fleets of vessels directed by the Kremlin have reportedly started messing with offshore wind and communications infrastructure in northern Europe. Russian and Chinese boats have allegedly cut subsea cables in recent years.
But vandalism is a risk anywhere, including at conventional data centers, Aikido CEO Kanner notes. Unlike those on land, where the local police have jurisdiction, Aikido’s data centers would enjoy protection from national coast guards, which he suggests gives an added degree of security.
North Sea Hosts Clean Energy
Kanner first began thinking about offshore wind turbines as a place to build data centers after a chance phone call with a cryptocurrency billionaire. The financier wanted to know whether turbines in international waters could power servers generating digital tokens at a moment when crypto-mining faced increased scrutiny from regulators. The talks fizzled. But that encounter sparked Kanner’s curiosity about how to use power generated onboard floating turbines.
When ChatGPT emerged in 2022 and sparked a heated debate over how to power and cool such technology, the idea to put the data center in the floating turbine clicked for Kanner. The idea really congealed after he met with the chief executive of Portland, Ore.–based Panthalassa. The wave-energy company was proposing to enclose small, remote data centers in buoys attached to equipment that generates power from the surf. Panthalassa just completed its full-scale prototype tests off the coast of Washington state last summer.
At that point, Aikido had already designed a modular platform for floating wind turbines. Each platform consists of 13 major steel components that are snapped together with pin joints—like IKEA furniture. The platforms fold up in a flat configuration that takes up roughly half the space of other designs, allowing it to be transported by a wider range of ships, according to Aikido. From there, it was a matter of figuring out how to accommodate a data center in the unused space.
Aikido’s prototype will use a refurbished Vestas V17 turbine. It will need onboard batteries for backup power and will also be connected to the grid for additional power during seasons with less wind. Aikido envisions eventually sprinkling its data centers among large arrays of offshore turbines to tap into that larger power infrastructure.
Between Russia’s threat to expand its war in Ukraine to EU countries and the Trump administration’s bid to pressure Denmark into ceding sovereignty of Greenland to Washington, Europe is scrambling to build up its own energy production and AI capabilities. The North Sea, increasingly, looks like a primary theater of that effort. In January, nearly a dozen European nations banded together in a pact to transform the North Sea into a “reservoir” of clean power from offshore wind.
Google’s newest AI model is here: Gemini 3.1 Flash-Lite, and the biggest improvements this time around come in cost and speed, especially for enterprises and developers seeking to leverage powerful reasoning and multimodal capabilities from the U.S. search and cloud giant.
Positioning it as the most cost-efficient and responsive model in the Gemini 3 series, Google is offering a solution built specifically for intelligence at scale.
This launch arrives just weeks after the February debut of its heavy-lifting sibling, Gemini 3.1 Pro, completing a tiered strategy that allows enterprises to scale intelligence across every layer of their infrastructure.
Technology: optimized for the “time to first token”
In the world of high-throughput AI, the metric that often dictates user experience isn’t just accuracy—it’s latency. For real-time customer support, live content moderation, or instant user interface generation, the “time to first answer token” is the primary indicator of whether an application feels like a tool or a teammate. If a model takes even two seconds to begin its response, the illusion of fluid interaction is broken.
Gemini 3.1 Flash-Lite is engineered specifically for this instant feel. According to internal benchmarks and third-party evaluations, Flash-Lite outperforms its predecessor, Gemini 2.5 Flash, with a 2.5X faster time to first token. Furthermore, it boasts a 45 percent increase in overall output speed — 363 tokens per second compared to 249.
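Both metrics are straightforward to measure against any streaming endpoint. A minimal sketch (the model stream here is simulated; this is not Google's client library):

```python
import time

def measure_stream(stream):
    """Return (time_to_first_token, tokens_per_second) for any token iterator."""
    start = time.monotonic()
    first_token_at = None
    count = 0
    for _ in stream:
        if first_token_at is None:
            # Latency until the very first chunk arrives: the "instant feel" metric.
            first_token_at = time.monotonic() - start
        count += 1
    elapsed = time.monotonic() - start
    return first_token_at, count / elapsed if elapsed > 0 else 0.0

def fake_model(ttft=0.05, n_tokens=20, per_token=0.005):
    # Stand-in for a streaming model response with a fixed startup delay.
    time.sleep(ttft)
    for _ in range(n_tokens):
        time.sleep(per_token)
        yield "tok"

ttft, tps = measure_stream(fake_model())
print(f"TTFT: {ttft:.3f}s, throughput: {tps:.0f} tok/s")
```

Swapping `fake_model()` for a real streaming response lets you reproduce vendor latency claims against your own prompts, which is generally more informative than headline benchmark figures.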
This speed is achieved through what Koray Kavukcuoglu, VP of Research at Google DeepMind, describes in an X post as an unbelievable amount of complex engineering to make AI feel instantaneous.
Perhaps the most innovative technical addition is the introduction of thinking levels.
Standardized across both the Flash-Lite and Pro variants, this feature allows developers to modulate the model’s reasoning intensity dynamically. For a simple classification task or a high-volume sentiment analysis, the model can be dialed down for maximum speed and minimum cost.
Conversely, for complex code exploration, generating dashboards, or creating simulations, the thinking can be dialed up, allowing the model to perform deeper reasoning and logic before emitting its first response.
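In practice this invites a simple routing layer in application code. A hypothetical sketch (the task names and level values below are illustrative assumptions, not a confirmed Gemini API surface):

```python
# Hypothetical mapping from task type to reasoning intensity.
# Level names are illustrative, not confirmed API values.
TASK_THINKING_LEVELS = {
    "classification": "low",          # high-volume, latency-sensitive
    "sentiment": "low",
    "dashboard_generation": "high",   # deeper reasoning before the first token
    "code_exploration": "high",
}

def pick_thinking_level(task: str) -> str:
    """Default to a middle setting for tasks that haven't been profiled."""
    return TASK_THINKING_LEVELS.get(task, "medium")

print(pick_thinking_level("classification"))    # → low
print(pick_thinking_level("code_exploration"))  # → high
```

The point of a table like this is that the speed/cost/depth trade-off becomes a per-request decision rather than a per-model one.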
Product: benchmarking the lite-weight heavy hitter
While the “Lite” suffix often implies a significant sacrifice in capability, the performance data suggests a model that punches well into the territory of much larger systems. Gemini 3.1 Flash-Lite achieved an Elo score of 1432 on the Arena.ai Leaderboard, placing it in a competitive tier with models much larger in parameter count.
Gemini 3.1 Flash-Lite benchmarks. Credit: Google
Key benchmark results highlight its specialized strengths across diverse cognitive domains:
Scientific knowledge: 86.9 percent on GPQA Diamond.
Multimodal understanding: 76.8 percent on MMMU-Pro.
Multilingual Q&A: 88.9 percent on MMMLU.
Parametric knowledge: 43.3 percent on SimpleQA Verified.
Abstract reasoning: 16.0 percent on Humanity’s Last Exam (full set).
The model is particularly adept at structured output compliance—a critical requirement for enterprise developers who need AI to generate valid JSON, SQL, or UI code that won’t break downstream systems.
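A common guardrail for that requirement is to validate model output before it reaches anything downstream. A minimal sketch (the helper name is ours):

```python
import json

def parse_model_json(raw: str):
    """Return (parsed, error): reject model output that isn't valid JSON
    before it can break a downstream system."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as exc:
        return None, str(exc)

ok, err = parse_model_json('{"sku": "A12", "qty": 3}')
print(ok)                  # → {'sku': 'A12', 'qty': 3}
bad, err = parse_model_json('{"sku": }')
print(err is not None)     # → True
```

In production you would typically layer schema validation on top of this, but even a bare parse-or-reject gate catches the most disruptive failure mode.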
In benchmarks like LiveCodeBench, Flash-Lite scored 72.0 percent, outperforming several rivals in its weight class; GPT-5 mini scored higher at 80.4 percent, albeit on a different subset, while lagging significantly in speed and cost efficiency.
Furthermore, its performance on CharXiv Reasoning (73.2 percent) and Video-MMMU (84.8 percent) demonstrates that its multimodal capabilities are robust enough for complex chart synthesis and knowledge acquisition from video.
The intelligence hierarchy: Flash-Lite vs. 3.1 Pro
To understand Flash-Lite’s place in the market, one must look at it alongside Gemini 3.1 Pro, which Google released in mid-February 2026 to retake the AI crown. While Flash-Lite is the reflexes of the Gemini system, 3.1 Pro is undoubtedly the brain.
The primary differentiator is the depth of cognitive processing. Gemini 3.1 Pro was engineered to double the reasoning performance of the previous generation, achieving a verified score of 77.1 percent on ARC-AGI-2—a benchmark designed to test a model’s ability to solve entirely new logic patterns it has not encountered during training.
While Flash-Lite holds its own in scientific knowledge at 86.9 percent, the Pro model pushes that boundary to a staggering 94.3 percent, making it the superior choice for deep research and high-stakes synthesis. The application focus also differs significantly based on these reasoning gaps.
Gemini 3.1 Pro is capable of vibe-coding—generating animated SVGs and complex 3D simulations directly from text prompts. For example, in one demonstration, Pro coded a complex 3D starling murmuration that users could manipulate via hand-tracking. It can even reason through abstract literary themes, such as translating the atmospheric tone of Emily Brontë’s Wuthering Heights into a functional web design.
Gemini 3.1 Flash-Lite, conversely, is the workhorse for high-volume execution. It handles the millions of daily tasks—translation, tagging, and moderation—that require consistent, repeatable results without the massive compute overhead of a reasoning-heavy model.
It fills a wireframe with hundreds of products instantly or orchestrates intent routing with 94 percent accuracy, as reported by early testers.
One-eighth the cost of the flagship Gemini 3.1 Pro (and cheaper than its predecessor, Gemini 2.5 Flash-Lite)
For enterprise technical decision-makers, the most compelling part of the Gemini 3.1 series is the reasoning-to-dollar ratio.
Google has priced Gemini 3.1 Flash-Lite at $0.25 per 1 million input tokens and $1.50 per 1 million output tokens.
This pricing makes it significantly more affordable than competitors like Claude 4.5 Haiku, which is priced at $1.00 per 1 million input and $5.00 per 1 million output tokens.
Even compared to Gemini 2.5 Flash, which cost $0.30 per 1 million input, Flash-Lite offers a cost reduction alongside its performance gains.
When contrasted with Gemini 3.1 Pro, which charges $2.00 per 1 million input tokens for prompts up to 200,000 tokens, the strategic advantage of the dual-model approach becomes clear. For high-context usage (above 200,000 tokens per interaction), Flash-Lite is between 12x and 16x cheaper.
By using a cascading architecture, an enterprise can use 3.1 Pro for the initial complex planning, architectural design, and deep logic, then hand off high-frequency, repetitive execution to Flash-Lite at one-eighth of the cost.
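The arithmetic behind that split is easy to check against the input-token prices quoted above. The 50-million-token job size below is made up purely for illustration:

```python
# Input-token prices quoted in this article, in USD per 1 million tokens.
FLASH_LITE_IN = 0.25
PRO_IN = 2.00  # Gemini 3.1 Pro, prompts up to 200,000 tokens

def input_cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of sending `tokens` input tokens at a per-1M-token price."""
    return tokens / 1_000_000 * price_per_million

# Pro costs 8x Flash-Lite per input token -- hence "one-eighth of the cost".
ratio = PRO_IN / FLASH_LITE_IN  # 8.0

# Example: a hypothetical 50M-token nightly log-tagging job on each model.
flash_cost = input_cost_usd(50_000_000, FLASH_LITE_IN)  # $12.50
pro_cost = input_cost_usd(50_000_000, PRO_IN)           # $100.00
```

At that scale, routing the repetitive tagging pass to Flash-Lite and reserving Pro for the planning step is the difference between a rounding error and a real line item.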
This shift effectively moves AI from an expensive experimental cost center to a utility-grade resource that can be run over every log file, email, and customer chat without exhausting the cloud budget.
Community and developer reactions
Early feedback from Google’s partner network suggests that the 3.1 series is successfully filling a critical gap in the market for reliable autonomy.
Andrew Carr, Chief Scientist at Cartwheel, has tested both models and noted their unique strengths. Regarding 3.1 Pro, he highlighted its substantially improved understanding of 3D transformations, which resolved long-standing rotation order bugs in animation pipelines.
However, he found Flash-Lite to be a different kind of unlock for the business: “3.1 Flash-Lite is a remarkably competent model. It is lightning fast, but still somehow finds a way to follow all instructions… The intelligence to speed ratio is unparalleled in any other model.”
For consumer-facing applications, the low latency of Flash-Lite has been the key to market expansion.
Kolby Nottingham, Head of AI at Latitude, shared that the model achieved a 20 percent higher success rate and 60 percent faster inference times than their previous model, bringing sophisticated storytelling to a much wider audience than would otherwise have been possible.
Reliability in data tagging has also emerged as a standout feature. Bianca Rangecroft, CEO of Whering, reported that by integrating 3.1 Flash-Lite into their classification pipeline, they achieved 100 percent consistency in item tagging, providing a highly reliable foundation for their label assignment and increasing confidence in structured outputs.
Kaan Ortabas, Co-Founder of HubX, noted that as a root orchestration engine, Flash-Lite delivered sub-10 second completions with near-instant streaming and 97 percent structured output compliance.
On the flagship side, Vladislav Tankov, Director of AI at JetBrains, noted a 15 percent quality improvement in the Pro model, emphasizing that it is stronger, faster, and more efficient, requiring fewer output tokens to achieve its goals.
Licensing and enterprise availability
Both Gemini 3.1 Flash-Lite and Pro are offered through Google AI Studio and Vertex AI. As proprietary models, they follow a standard commercial software-as-a-service model rather than an open-source license.
Operating through Vertex AI provides grounded reasoning within a secure perimeter, ensuring that high-volume workloads—like those being run by Databricks to achieve best-in-class results on the OfficeQA benchmark—remain protected by enterprise-grade security and data residency guarantees.
However, they are also limited in customizability and require persistent internet connectivity, unlike open-source rivals such as Alibaba's powerful new Qwen3.5 series, released over the past few weeks.
The current preview status for Flash-Lite allows Google to refine safety and performance based on real-world developer feedback before general availability.
For developers already building via the Gemini API, the transition to 3.1 Pro and Flash-Lite represents a direct performance upgrade at the same or lower price points, effectively lowering the barrier to entry for complex agentic workflows.
The verdict: the new standard for utility AI
The release of Gemini 3.1 Flash-Lite represents the final piece of a strategic pivot for Google. While the industry has been obsessed with state-of-the-art reasoning for the most complex problems, the vast majority of enterprise work consists of high-volume, repetitive, but high-precision tasks.
By providing both the brain in Gemini 3.1 Pro and the reflexes in Gemini 3.1 Flash-Lite, Google is signaling that the next phase of the AI race will be won by models that can think through a problem, but also execute that solution at scale.
For the CTO or technical lead deciding which model to bake into their 2026 product roadmap, the Gemini 3.1 series offers a compelling argument: you no longer have to pay a reasoning tax to get reliable, instantaneous results. As Flash-Lite rolls out in preview today, the message to the developer community is clear: the barrier to intelligence at scale hasn’t just been lowered—it’s been dismantled.
IPIC Theaters plans to permanently close its Redmond, Wash., movie theater on April 28, according to a WARN notice filed with Washington state regulators.
The upscale movie theater chain notified state officials on Feb. 23 that all operations at the site will cease “due to business circumstances.”
IPIC last week filed for bankruptcy. The company, which operates 13 dine-in theaters across the country, is also shutting down a theater in Atlanta.
IPIC runs luxury theaters known for recliner seating and full food and drink service. The company opened its theater at Redmond Town Center in 2011.
Movie theaters like IPIC have faced ongoing pressure from streaming services. North American box office numbers fell short of projections in 2025 — just above the prior year but well behind pre-pandemic highs.
“5G” is an umbrella term that encompasses the current fifth-generation cellular wireless network technologies. All the major carriers and phones support 5G connections, which can offer faster data speeds than older technologies such as 4G LTE or 3G.
Essentially, there are three types of 5G: millimeter-wave (mmWave), which can be very fast but has limited range; low-band 5G, which is slower but covers a much broader area; and midband, a balance between the two that's faster than low-band and covers a larger area than mmWave. Midband also incorporates C-band, a batch of spectrum auctioned off by the Federal Communications Commission in 2021.
Your phone’s 5G connection depends on which type blankets the area you’re in, as well as other factors, such as population density and infrastructure. For instance, mmWave is super fast, but its signals can be blocked by buildings, glass, or leaves, or simply by being inside a structure.
When your device is connected to a 5G network, it can show up as several variations such as 5G, 5G Plus, 5G UW or others, depending on the carrier. Here’s a list of icons you see at the top of your phone for the major services:
AT&T: 5GE (which isn’t actually 5G, but rather a sly marketing name for 4G LTE), 5G (low band), 5G Plus (mmWave, midband)
Verizon: 5G (low band, also called “Nationwide 5G”), 5G UW/5G UWB (midband and mmWave, also called “5G Ultra Wideband”)
T-Mobile: 5G (low band), 5G UC (midband and mmWave, also called “Ultra Capacity 5G”)
There’s also 5G Reduced Capacity (5G RedCap), which is a lower-power, smaller-capacity branch of 5G used by devices such as smartwatches and portable health devices; the Apple Watch Ultra 3, for example, connects via 5G RedCap.
Just around the corner is 5G Advanced, which promises much faster speeds through carrier aggregation, the technique of combining multiple spectrum bands.