
Tech

iPhone 18 series: Everything we know so far


Apple’s iPhone is a product that the world, from potential buyers to critics and competitors, watches obsessively. Over the years, the Cupertino giant has shown up every September with the latest iteration of its smartphone technology, spread across multiple Pro and non-Pro models. The iPhone 18 series, however, could change that tradition.

This year could be the first time the company splits its massive September event into two, focusing on different categories of the upcoming iPhones. The premium ones, including the Pro models and the purported Apple foldable, could arrive this fall, while the more affordable models could arrive in spring 2027. That’s why it’s all the more important to know about the purported iPhone 18 series this year, so that you can plan your upgrade (and prepare your wallet) well in advance.

iPhone 18 series: Latest news

Apple’s iPhone is one of those evergreen product lineups that attracts rumors and reports year-round. It doesn’t matter whether the iPhone 17 has just dropped or we’re almost half a year away from the expected iPhone 18 series launch time; the news just keeps coming in from all directions.

Release date and price rumors

Unlike previous years, Apple is heavily rumored to split its grand September launch event into two equally important events across 2026 and 2027.

The split strategy was initially reported by The Information in May 2025, and Bloomberg’s Mark Gurman later corroborated it, stating that it will help the company spread its engineering and marketing efforts across the calendar year, from fall to spring.

As part of the new launch paradigm, we should get to see the premium Apple iPhones, including the iPhone 18 Pro, the iPhone 18 Pro Max, and the iPhone Fold (Apple’s first-ever foldable), in early September 2026, with retail availability typically following about two weeks later. Some rumors also suggest the Fold’s retail availability could commence in December.


Price seems to be a sensitive topic this year, not just for the upcoming iPhone 18 series, but for every other smartphone in 2026. The ongoing memory crisis and rising component costs have compelled manufacturers to either raise prices or upsell buyers to higher-memory or storage variants at higher prices. 

Model                     Expected Release            Starting Price
iPhone 18 Pro             September 2026              ~$1,099
iPhone 18 Pro Max         September 2026              ~$1,199
iPhone Fold (or Ultra)    September – December 2026   ~$2,000 or more

Apple, however, might be in a slightly better position than other manufacturers, as per renowned analyst Ming-Chi Kuo. In January 2026, Kuo claimed that the company could leverage its position to lock in long-term deals with memory suppliers, potentially helping it absorb the higher cost, and, in the process, securing a higher market share as other brands hike prices. 

After the September 2026 event, Apple could return in March 2027 with more value-driven, consumer-centric models, including the regular iPhone 18 and the iPhone 18e.

The successor to the thinnest iPhone ever, the iPhone Air, could also break cover at the same time. Whether this would be through a live-streamed event, a pre-recorded presentation, or simply via a press release is something we’re yet to find out. 

Model          Expected Release   Starting Price
iPhone 18      March 2027         ~$799
iPhone 18e     March 2027         ~$599
iPhone Air 2   March 2027         ~$999

Please keep in mind that the prices mentioned here are speculation; Apple hasn’t confirmed them (yet).

Design and display

According to the most recent rumor from Fixed Focus Digital (via Weibo), the baseline iPhone 18 could look and feel the same as its predecessor, the iPhone 17. In other words, we could get the same glass-and-aluminum sandwich design with flat edges, rounded corners, the pill-shaped camera module, and a minimal yet premium visual appeal. 

The overall dimensions and weight of the handset might remain unchanged, barring minor modifications, and it could still feature a 6.27-inch LTPO OLED screen with a 120Hz refresh rate, perhaps with improvements to peak brightness and always-on efficiency.


It might have a smaller Dynamic Island, though newer leaks dispute this, suggesting that a smaller cutout on the screen could be reserved for the Pro models in the iPhone 18 series. The bezels are already quite slim on the baseline iPhone 17, and they might not get any slimmer on the successor. 

The iPhone 18 Pro models could also borrow their aluminum unibody (with the camera plateau) and glass (at the rear) chassis from the iPhone 17 Pro models. What could change, however, is the color difference between the metal body and the back glass, in favor of a more seamless look. 

In fact, Apple could also double down on more vibrant, fun colors with the iPhone 18 Pro (as the Cosmic Orange finish did quite well). Some leaks claimed Apple might ditch the Dynamic Island entirely and adopt an under-display Face ID module, resulting in punch-hole screens. But for now, a smaller Dynamic Island makes much more sense, given Apple’s slow-paced physical innovation cycle. It would also help with product segmentation. 

Beyond that, the handsets will almost certainly retain their current dimensions and weight, with minor changes always on the table (perhaps for a bigger battery). The iPhone 18 Pro could sport the same 6.3-inch OLED screen and the iPhone 18 Pro Max the same 6.9-inch OLED screen, both with 120Hz ProMotion, along with subtle refinements to the screen-to-body ratio and the anti-reflective coating.

Performance and software

The baseline iPhone 18 will almost certainly feature the A20 chip, while the iPhone 18 Pro models could get the A20 Pro chip. They’ll be the first Apple-designed chipsets based on TSMC’s 2nm fabrication technology. Technically, Samsung crossed the finish line first with 2nm chips (with its Exynos 2600), but Apple’s implementation is expected to be more polished and capable.

Apart from improvements in raw performance and efficiency, the purported mobile processors from Apple could be based on a new WMCM (Wafer-level Multi-Chip Module) design, as claimed by renowned analyst Ming-Chi Kuo and corroborated by a few other industry sources. 

Report: TSMC’s WMCM and SoIC Dual Support Ensures Apple’s Presence in Advanced Packaging

Advanced packaging continues to be a hot topic, and the industry is closely watching not only NVIDIA’s large orders with TSMC, but also Apple’s entry into the fray, with clear plans for…

— Jukan (@jukan05) June 22, 2025

The design allows the integration of several key components, including the CPU, GPU, and DRAM, into the same package, resulting in enhanced system performance and reduced material costs. Apple could also use the same tech for the upcoming M6 chip, which could break cover on a MacBook Pro later this year.

Even though the current A19 chips are extremely fast, the A20 family could deliver double-digit improvements in both CPU and GPU performance, making it ideal for a future iteration of the MacBook Neo. We’re also expecting better sustained performance from the A20 chips.

The baseline iPhone 18 could get a memory boost to 12GB (up from 8GB), while the iPhone 18 Pro could retain its 12GB of memory, perhaps with faster bandwidth for improved performance. Storage options should remain the same as on the current iPhone 17 lineup. The Pro models could also get better satellite connectivity, perhaps even 5G-via-satellite.

The iPhone 18 series should debut with iOS 27 out of the box, which is expected to rely heavily on AI-driven improvements and under-the-hood refinements rather than any big visual changes (it is also referred to as the “Snow Leopard” update).


The update will likely include a chatbot-like Siri with deeper integration across iOS and support for third-party AI models. We might get a standalone Siri app, much like other chatbots. 

Other major additions could include Health+, an AI-powered health-tracking platform with features like food logging, personal coaching, and an AI-based doctor or consultant. We could also get an improved, AI-integrated Spotlight search experience, better multitasking (especially on the big-screen iPhone Fold), an improved Shortcuts app, and a Liquid Glass slider for tweaking transparency.

We’ll get a glimpse of everything new in iOS 27 at WWDC 2026.

Cameras and battery

Both the iPhone 18 and the iPhone 18 Pro models are rumored to get a 24MP square-shaped sensor on the front, which could add the missing sharpness to the iPhone 17’s ultrawide selfies. However, newer reports assign the improved 24MP selfie shooter to the Pro models, not the baseline iPhone 18. 

Chinese tipster Digital Chat Station claims that the iPhone 18 Pro models could feature a DSLR-like variable aperture for the 48MP primary camera, alongside larger fixed apertures for the ultrawide and telephoto sensors. Simply put, users could get more control over the background blur and overall light in the frame (via the primary camera) and better low-light performance (via other sensors).

While Apple was also reportedly considering acquiring Lux Optics, the company behind the Halide Camera app (which provides creative and professional photography controls), the plans seem to be tangled in a legal mess, at least for now. Per a Chinese tipster, Apple was toying around with teleconverter lenses for the Pro models as well.

A simplified Camera Control button (without the capacitive touch layer) is also on the cards for all iPhone 18 models.


A leak from Instant Digital suggests a slight weight increase for the iPhone 18 Pro Max, possibly to accommodate a larger battery than the current model. The rumor was corroborated by Digital Chat Station, which stated that the non-Chinese version of the handset could feature a battery with a capacity between 5,100 and 5,200 mAh, a substantial improvement in battery life.

Apple is reportedly cleaning up iOS 27’s code to make it more efficient, which should also improve overall battery life for the iPhone 18 series and other supported iPhones. Beyond that, there are no leaks or rumors about the iPhone 18 series getting any charging upgrades, wired or MagSafe.



OpenAI’s new $100 ChatGPT Pro plan targets Claude Max with five times the Codex access


In short: OpenAI launched a new $100 per month Pro plan for ChatGPT on 9 April 2026, inserting a new tier between the existing $20 Plus plan and the $200 Pro plan and directly targeting Anthropic’s Claude Max, which is also priced at $100 per month. The new plan offers five times more Codex usage than Plus, access to the same model suite as the $200 tier, and a launch promotion that temporarily doubles that advantage: through 31 May 2026, subscribers get ten times the Codex usage of Plus. The move follows Codex crossing three million weekly users on 8 April, a growth rate the company describes as a 5x increase in three months.

What the $100 plan includes, and where it sits in ChatGPT’s pricing structure

The new plan is the sixth pricing tier in ChatGPT’s current structure, which now runs from a free account with advertising, through an $8 per month Go plan and the $20 per month Plus plan, to two versions of Pro at $100 and $200 per month, a $25 per user per month Business plan, and custom-priced Enterprise contracts. The $100 Pro plan sits directly between Plus and the existing $200 Pro tier, offering five times the Codex usage of Plus and targeting what OpenAI describes as “longer, high-effort Codex sessions” that Plus subscribers hit the ceiling on. The $200 Pro plan, by comparison, provides 20 times the Codex usage of Plus, making it four times more Codex-intensive than the new $100 tier.

Despite the difference in usage limits, both Pro tiers give access to the same model suite: the exclusive GPT-5.4 Pro model, unlimited use of GPT-5.4 Instant and GPT-5.4 Thinking, and all other features available on the $200 plan. The differentiation between the two tiers is usage volume, not capability. As a launch promotion, subscribers to the new $100 plan will receive ten times the Codex usage of Plus through 31 May 2026; after that date, the standard five times limit applies. OpenAI also announced a rebalancing of the Plus plan’s Codex allocation alongside the new tier, shifting Plus towards steadier day-to-day usage rather than allowing the longer burst sessions that the $100 plan is intended to serve.
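The relative allowances are easiest to see as simple arithmetic. A rough sketch, using the prices and multiples reported above (the tier labels and dictionary structure are just for illustration):

```python
# Monthly price and Codex allowance (as a multiple of the Plus plan's
# allocation), per the figures reported above. Purely illustrative.
tiers = {
    "Plus": (20, 1),
    "Pro (new)": (100, 5),
    "Pro (top)": (200, 20),
}

for name, (price, multiple) in tiers.items():
    # Dollars paid per "Plus-equivalent unit" of Codex usage.
    print(f"{name}: ${price / multiple:.0f} per unit")
```

On this rough measure, the $200 tier still delivers twice the Codex usage per dollar of the new $100 tier, even though the $100 plan narrows the gap for users who don't need 20x volume.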

Codex demand: the numbers that prompted the new tier

On 8 April 2026, the day before the $100 plan was announced, Sam Altman posted on X that OpenAI was resetting Codex’s usage limits across all plans “to celebrate 3M weekly codex users,” and committed to repeating the reset for every additional million users until Codex reaches ten million weekly users. Thibault Sottiaux, who leads the Codex product, stated: “Three million people are now using Codex weekly, up from two million a little under a month ago.” OpenAI described the growth trajectory as a 5x increase in the preceding three months, with 70% month-over-month user growth.


The scale of that growth reflects a shift in how developers are using AI coding tools. OpenAI rolled out a dedicated Codex app for macOS in February 2026, designed to move beyond line-by-line code generation into what the company called agentic, multi-task coding workflows: orchestrating multiple agents in parallel, running background jobs, and handling instructions that span hours rather than seconds. That architecture, with its longer-running sessions and heavier compute demands, is precisely the usage pattern that the $100 plan is priced to capture. A Plus subscriber who uses Codex for extended autonomous engineering tasks hits usage limits well before their billing cycle ends; the $100 plan is designed to be the next logical tier rather than a jump to $200.

The Claude Max comparison

OpenAI made no attempt to obscure the competitive framing. The new plan is priced identically to Anthropic’s Claude Max 5x tier, which also costs $100 per month and includes elevated limits for Claude Code, Anthropic’s terminal-based agentic coding product. Claude Code has become the fastest-growing part of Anthropic’s commercial portfolio, with an estimated $2.5 billion in annualised revenue by early 2026, and Anthropic has been constructing a developer ecosystem around it: Anthropic launched a marketplace for Claude-powered enterprise software in March 2026, with launch partners including Snowflake, Harvey, and Replit, connecting enterprise buyers with third-party applications built on Claude.


The competitive dynamic sharpened further in the week before OpenAI’s announcement. On 4 April 2026, Anthropic banned third-party agents from Claude Pro and Max subscriptions, preventing subscribers from routing their plan’s usage limits through external frameworks such as OpenClaw; users wanting to continue using those tools must now pay separately under a new per-session “extra usage” system. OpenAI’s announcement went in the opposite direction, increasing Codex availability at the $100 price point and doubling it temporarily to mark the launch. The contrast, at the identical price, was visible enough that most coverage described the new plan as a direct response to Anthropic’s developer subscriber base.

What OpenAI’s pricing move signals

The new tier arrives during a period of accelerating commercial momentum for OpenAI. OpenAI’s $122 billion raise at an $852 billion valuation, completed in March 2026, was led by SoftBank, NVIDIA, and Amazon, and included $3 billion from individual retail investors, a structure that many analysts read as groundwork for an IPO expected as early as the fourth quarter of 2026. The company is generating $2 billion in revenue per month and has more than 50 million paid subscribers across its plans. The $100 plan is part of a deliberate effort to fill the pricing gap between $20 and $200 that had, until now, left a large segment of heavy but not enterprise-grade users without a compelling upgrade path.

The model powering the Pro tiers, GPT-5.4, which launched in March 2026 and introduced native computer use directly into Codex and the API, is the clearest statement of where OpenAI sees the next phase of developer adoption going: not prompting, but autonomous agents operating software, navigating file systems, and running multi-step workflows across applications for hours at a time. The $100 plan is the pricing expression of that bet. Whether it moves enough developers at the $100 Claude Max price point to make a measurable difference in Anthropic’s subscriber base will be visible in both companies’ next quarterly metrics.



‘Marshals’ Release Schedule: When Episode 7 Hits Paramount Plus


Marshals, a new Yellowstone spinoff starring Luke Grimes as Kayce Dutton, is airing on CBS right now. You can also tune in with Paramount Plus. The Yellowstone sequel series sees Grimes’ former Navy SEAL join an elite unit of US Marshals to bring range justice to Montana, according to a synopsis from CBS.

The show includes Yellowstone actors Gil Birmingham as Thomas Rainwater, Mo Brings Plenty as Mo and Brecken Merrill as Tate. Spencer Hudnut is the showrunner of Marshals — formerly known as Y: Marshals — and Taylor Sheridan is an executive producer.


When to watch new Marshals episodes on Paramount Plus

Episode 7 of Marshals airs on CBS on Sunday, April 12. Viewing options for Paramount Plus customers vary by subscription tier. You can watch the episode live if you have Paramount Plus Premium, which includes your local CBS station. If you subscribe to Paramount Plus Essential, you can watch the installment on demand the following Monday, but not live on Sunday.

Here’s a release schedule for the next two episodes of Marshals.

  • Episode 7, Family Business: Premieres on CBS/Paramount Plus Premium on April 12 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on April 13.
  • Episode 8, Blowback: Premieres on CBS/Paramount Plus Premium on April 19 at 8 p.m. ET/8 p.m. PT/7 p.m. CT. Streams on Paramount Plus Essential on April 20.

You can also watch CBS and the seventh episode of Marshals without cable with a live TV streaming service such as YouTube TV, Hulu Plus Live TV or the DirecTV MyNews skinny bundle. In addition to offering a lower-cost option, Paramount Plus lets you watch the other two Yellowstone spinoffs: the prequels 1883 and 1923.


After a price increase in early 2026, the ad-supported Essential version runs $9 per month or $90 per year. The ad-free Premium version runs $14 per month or $140 per year. Paying more for Premium gives you downloads, the ability to watch more Showtime programming than Essential and access to your live, local CBS station.
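For readers weighing monthly against annual billing, the savings work out to simple arithmetic, using the prices above (the plan labels in this snippet are just for illustration):

```python
# ($ per month, $ per year) for each Paramount Plus tier, per the article.
plans = {"Essential": (9, 90), "Premium": (14, 140)}

for name, (monthly, yearly) in plans.items():
    # Annual billing versus paying month by month for a full year.
    print(f"{name}: save ${monthly * 12 - yearly} per year with annual billing")
```

In both cases the annual plan effectively gives you two months free.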



Sunday Reboot: MacBook Neo upgrades, masses of Mac minis, and iPhone re-entry


In this week’s “Sunday Reboot,” a storage upgrade for the MacBook Neo, an excuse to buy many Mac minis, and the iPhones come back to Earth with a late congratulatory message.


Sunday Reboot is a weekly column covering some of the lighter stories within the Apple reality distortion field from the past seven days. All to get the next week underway with a good first step.
This week, researchers managed to get around Apple Intelligence security measures using prompt injection techniques, a repairability report panned Apple’s hardware again, and Apple’s lawsuit with Epic Games over the App Store continued to roll on. There was also a bug found to break Mac networking every 49 days, 17 hours, two minutes, and 47 seconds.


Tesla Achieves European Breakthrough as Full Self-Driving Supervised Reaches Dutch Roads


Tesla received approval on April 10 from the Dutch vehicle regulator, the RDW, for its Full Self-Driving Supervised system to be used on European roads. It is the first company to receive such approval anywhere in Europe, a significant milestone. The program has been cleared to run on public roads in the Netherlands, and distribution began the next day, April 11, for a limited number of early testers.



Drivers with Hardware 4 (HW4) computers in their vehicles received an update to version 2026.3.6, which includes the European-tuned build of FSD 14.2.2.5. Before enabling the system, drivers must complete a short tutorial followed by a brief test in the car’s interface. Once that’s done, they can take their hands off the wheel in appropriate situations, while cabin cameras monitor their eyes for attention. If a driver becomes distracted, the system first displays visual alerts, then adds sounds and vibrations, and if all else fails, the car slows down and comes to a safe stop on its own.
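That escalation ladder (visual alerts, then sounds and vibrations, then a safe stop) can be sketched as a small state machine. This is purely an illustration of the behavior as reported, not Tesla's actual implementation; all names here are invented:

```python
from enum import Enum, auto

class AlertStage(Enum):
    # Hypothetical stages mirroring the escalation described in the article.
    NONE = auto()          # driver attentive, no alerts
    VISUAL = auto()        # on-screen warning
    AUDIO_HAPTIC = auto()  # sounds and vibrations
    SAFE_STOP = auto()     # car slows and stops on its own

def next_stage(stage: AlertStage, driver_attentive: bool) -> AlertStage:
    """Escalate one step while the driver stays inattentive; reset otherwise."""
    if driver_attentive:
        return AlertStage.NONE
    order = list(AlertStage)
    return order[min(order.index(stage) + 1, len(order) - 1)]
```

Three consecutive inattentive checks walk the sketch from NONE to SAFE_STOP, while a single attentive check resets it, matching the reported behavior.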



Overall, this was the product of 18 months of testing, which included more than 1.5 million kilometers of driving on European highways, as well as numerous controlled scenarios on closed tracks. Before approving the system, regulators reviewed almost 400 compliance points. The RDW pronounced it a beneficial addition to road safety, but stressed that drivers must remain in the driver’s seat and ready to take over at any time.


The European version of the software is substantially different from the one available in the United States, primarily because European regulators require pre-market checks, as opposed to the self-certification approach used by their US counterparts. As a result, the Dutch build is more cautious and limits some of the more aggressive driving behaviors available elsewhere. Automatic turns at junctions and navigation-based lane changes are still available, but several parking-lot summon capabilities found in the United States are not offered in the Netherlands.



Subscribers will pay 99 euros per month for the system, or 49 euros per month if they already have Enhanced Autopilot. Alternatively, they can purchase it outright for 7,500 euros. Tesla says the system leverages billions of kilometers of real-world data collected worldwide, and Elon Musk has stated that the RDW review process was particularly rigorous.

The Dutch approval carries a provisional validity period of at least 36 months, and other European states can adopt it on their own; authorities in Germany, France, and Italy are expected to do so within the next four to eight weeks. Tesla’s goal is to have the system more widely accepted across the EU by the summer, allowing millions of drivers to use it without repeating the testing procedure in each country.


Apple reportedly testing out four different styles for its smart glasses that will rival Meta Ray-Bans


Apple may be late to the smart glasses market, but it could be covering all its bases with up to four potential styles for its upcoming product. According to Bloomberg‘s Mark Gurman, Apple could launch some or all of the four styles it’s currently testing for its smart glasses.

Gurman reported Apple is testing out a large rectangular frame that’s comparable to Ray-Ban Wayfarers, a slimmer rectangular design like the glasses that Apple CEO Tim Cook wears, a larger oval or circular frame and a smaller oval or circle option. Apple is also working on a range of colors, including black, ocean blue and light brown, according to Bloomberg.

Internally code-named N50 for now, Apple’s upcoming smart glasses will compete directly with the second-gen Ray-Ban Meta model. While similar, Apple might be differentiating its design with “vertically oriented oval lenses with surrounding lights,” according to the report. Like Meta’s smart glasses, Apple’s upcoming product will capture photos and videos, but is meant to better sync with an iPhone, allowing users to take advantage of Apple’s ecosystem for editing, sharing, phone calls, notifications, music and even its voice assistant, according to Gurman. The release of Apple’s smart glasses could even coincide with the upcoming improved Siri that should arrive with iOS 27.

Gurman reported that Apple could reveal its smart glasses as soon as the end of 2026 or early 2027, followed by an official release sometime in 2027. As for the competition, Meta released its latest model that’s better suited for prescription lenses and offers a more customizable fit.



The MacBook Neo is moonlighting as a Windows gaming machine, and it’s doing it well


Apple didn’t position its most affordable MacBook as a gaming machine. But the MacBook Neo, a budget-leaning laptop that runs on Apple’s A18 Pro chip, the same chip that powers the iPhone 16 Pro models, has been put through a Windows 11 gaming test by YouTuber ETA Prime.

The results are genuinely surprising. Using Parallels Desktop, a paid virtualization app with 3D hardware acceleration, the channel ran Windows 11 ARM on the Neo’s 8GB of RAM (allocating 5GB to the virtual machine), and it performed better than most would expect.

What games actually ran well?

Dirt 3 held 75 fps at 1200p on high settings, while Portal 2 cleared 100 fps on medium settings. Skyrim, on the other hand, maintained roughly 60 fps at 1200p resolution on medium graphics settings, while Marvel Cosmic Invasion averaged around 60 fps at the maximum resolution.


What helped performance was games running as native Windows-on-ARM applications. GTA V was among the notable stumbles: frame rates under Parallels weren’t playable at all, though according to Notebookcheck, the game runs acceptably via CrossOver.

Why does this matter for everyday MacBook Neo users?

For users who work on their Mac but occasionally enjoy Windows-only games, the MacBook Neo’s ability to run native titles via Parallels comes as good news. The cost? Parallels Desktop’s Standard tier runs $99.99 per year, a recurring expense for those weekend gaming sessions.

Anyway, the bigger takeaway is that the MacBook Neo, even with 8GB of RAM (highlighted as a constraint in the video), can run low-to-mid-range Windows games. It also challenges the notion that budget Apple hardware is primarily for productivity tasks.

As virtualization tech continues to improve and Apple provides more RAM in future generations of the MacBook Neo, it could redefine what “budget” actually means for Apple buyers, bridging the gap between MacBook and Windows laptops even further. 



How the Budget-Friendly BougeRV 23-Quart 12V Fridge Keeps Food Fresh Through Every Drive


Summer heat makes travel difficult, especially if you’re transporting groceries and/or cold drinks. Drivers are frequently forced to rely on old-fashioned coolers with ice that melts faster than a popsicle on a hot day, leaving everything wet by the time they reach their destination. That’s where the BougeRV 23-quart unit comes in: priced at $159.97 (down from $189.99), it’s a more practical solution that plugs directly into your car’s standard 12V socket and keeps items chilled without any of the fuss.



The unit is 22 inches long and weighs just over 21 pounds, so it can fit into even the smallest trunks or backseats. It also has a built-in handle, making it simple to pull out at a rest stop or carry home after a long shopping excursion. Inside, there’s enough space for a couple of days’ worth of food or a full load of drinks and snacks for a family road trip.



It’s powered by a 12-volt socket, found in practically every modern car, with options for household outlets or even solar power if you’re parked for an extended period. The compressor cools quickly, in about 15 minutes, and maintains a consistent temperature between 8 degrees below zero and 50 degrees Fahrenheit, letting you choose between fridge and freezer mode as needed.


The portable fridge uses very little energy (around 36 watts in ECO mode), and smart compressor cycling keeps daily power consumption under one kilowatt-hour even on the warmest days. As a safeguard, a built-in battery monitor shuts the unit off before it drains your vehicle’s battery.
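The sub-1 kWh claim checks out with back-of-the-envelope math. A quick sketch (the 36 W figure comes from the article; continuous round-the-clock operation is a worst-case assumption, since the compressor actually cycles on and off):

```python
def daily_energy_kwh(avg_watts: float, hours: float = 24.0) -> float:
    """Energy used over a period, in kilowatt-hours."""
    return avg_watts * hours / 1000.0

# ~36 W average in ECO mode, assumed running around the clock:
print(f"{daily_energy_kwh(36):.3f} kWh per day")
```

Even without duty cycling, 36 W sustained for a full day is 0.864 kWh, comfortably under one kilowatt-hour.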

People who have used it on road trips note that it works effectively, keeping perishables from spoiling without having to constantly add ice, and it absorbs bumps in the road well, even while traveling at a 30-degree angle. If you’re only running to the store for a quick shopping trip, the fridge will keep running until you return home, even if you get stopped in traffic.



New FCC router rules could trap millions using outdated ISP hardware as supply chain limits stall upgrades and complicate security fixes



  • FCC rules block new foreign routers while old, vulnerable ones stay in homes longer
  • ISP customers cannot upgrade routers even when security risks become widely known
  • Router approvals now depend on waivers that may slow down nationwide replacements

The Federal Communications Commission (FCC) has issued new rules intended to address security risks posed by routers produced outside the United States.

A number of recent incidents have shown foreign routers are vulnerable to cyberattacks, with campaigns like Flax Typhoon, Volt Typhoon, and Salt Typhoon making headlines across the world.


Your developers are already running AI locally: Why on-device inference is the CISO’s new blind spot


For the last 18 months, the CISO playbook for generative AI has been relatively simple: Control the browser.

Security teams tightened cloud access security broker (CASB) policies, blocked or monitored traffic to well-known AI endpoints, and routed usage through sanctioned gateways. The operating model was clear: If sensitive data leaves the network for an external API call, we can observe it, log it, and stop it. But that model is starting to break.

A quiet hardware shift is pushing large language model (LLM) usage off the network and onto the endpoint. Call it Shadow AI 2.0, or the “bring your own model” (BYOM) era: Employees running capable models locally on laptops, offline, with no API calls and no obvious network signature. The governance conversation is still framed as “data exfiltration to the cloud,” but the more immediate enterprise risk is increasingly “unvetted inference inside the device.”

When inference happens locally, traditional data loss prevention (DLP) doesn’t see the interaction. And when security can’t see it, it can’t manage it.


Why local inference is suddenly practical

Two years ago, running a useful LLM on a work laptop was a niche stunt. Today, it’s routine for technical teams.

Three things converged:

  • Consumer-grade accelerators got serious: A MacBook Pro with 64GB unified memory can often run quantized 70B-class models at usable speeds (with practical limits on context length). What once required multi-GPU servers is now feasible on a high-end laptop for many real workflows.

  • Quantization went mainstream: It’s now easy to compress models into smaller, faster formats that fit within laptop memory, often with acceptable quality tradeoffs for many tasks.

  • Distribution is frictionless: Open-weight models are a single command away, and the tooling ecosystem makes “download → run → chat” trivial.

The result: An engineer can pull down a multi‑GB model artifact, turn off Wi‑Fi, and run sensitive workflows locally: source code review, document summarization, drafting customer communications, even exploratory analysis over regulated datasets. No outbound packets, no proxy logs, no cloud audit trail.

From a network-security perspective, that activity can look indistinguishable from “nothing happened”.


The risk isn’t only data leaving the company anymore

If the data isn’t leaving the laptop, why should a CISO care?

Because the dominant risks shift from exfiltration to integrity, provenance, and compliance. In practice, local inference creates three classes of blind spots that most enterprises have not operationalized.

1. Code and decision contamination (integrity risk)

Local models are often adopted because they’re fast, private, and “no approval required.” The downside is that they’re frequently unvetted for the enterprise environment.

A common scenario: A senior developer downloads a community-tuned coding model because it benchmarks well. They paste in internal auth logic, payment flows, or infrastructure scripts to “clean it up.” The model returns output that looks competent, compiles, and passes unit tests, but subtly degrades security posture (weak input validation, unsafe defaults, brittle concurrency changes, dependency choices that aren’t allowed internally). The engineer commits the change.


If that interaction happened offline, you may have no record that AI influenced the code path at all. And when you later do incident response, you’ll be investigating the symptom (a vulnerability) without visibility into a key cause (uncontrolled model usage).

2. Licensing and IP exposure (compliance risk)

Many high-performing models ship with licenses that include restrictions on commercial use, attribution requirements, field-of-use limits, or obligations that can be incompatible with proprietary product development. When employees run models locally, that usage can bypass the organization’s normal procurement and legal review process.

If a team uses a non-commercial model to generate production code, documentation, or product behavior, the company can inherit risk that shows up later during M&A diligence, customer security reviews, or litigation. The hard part is not just the license terms; it’s the lack of inventory and traceability. Without a governed model hub or usage record, you may not be able to prove what was used where.

3. Model supply chain exposure (provenance risk)

Local inference also changes the software supply chain problem. Endpoints begin accumulating large model artifacts and the toolchains around them: downloaders, converters, runtimes, plugins, UI shells, and Python packages.


There is a critical technical nuance here: The file format matters. While newer formats like Safetensors are designed to prevent arbitrary code execution, older Pickle-based PyTorch files can execute malicious payloads simply by being loaded. If your developers are grabbing unvetted checkpoints from Hugging Face or other repositories, they aren’t just downloading data — they could be downloading an exploit.

Security teams have spent decades learning to treat unknown executables as hostile. BYOM requires extending that mindset to model artifacts and the surrounding runtime stack. The biggest organizational gap today is that most companies have no equivalent of a software bill of materials for models: Provenance, hashes, allowed sources, scanning, and lifecycle management.
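
A model bill of materials can start small: pin approved artifacts by hash and verify them before loading. A minimal sketch in Python, where the allowlist contents and file name are hypothetical placeholders, not real digests:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: artifact name -> pinned SHA-256 from a curated model hub.
APPROVED_MODELS = {
    "approved-model-q4.gguf": "replace-with-pinned-sha256-digest",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-GB artifacts never sit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_approved(path: Path) -> bool:
    """True only if the artifact is on the allowlist AND its hash matches."""
    expected = APPROVED_MODELS.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

The same record extends naturally to provenance metadata — source, license, and scan results alongside each pinned hash.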

Mitigating BYOM: treat model weights like software artifacts

You can’t solve local inference by blocking URLs. You need endpoint-aware controls and a developer experience that makes the safe path the easy path.

Here are three practical ways:


1. Move governance down to the endpoint

Network DLP and CASB still matter for cloud usage, but they’re not sufficient for BYOM. Start treating local model usage as an endpoint governance problem by looking for specific signals:

  • Inventory and detection: Scan for high-fidelity indicators like .gguf files larger than 2GB, processes like llama.cpp or Ollama, and local listeners on common default ports such as 11434 (Ollama).

  • Process and runtime awareness: Monitor for repeated high GPU/NPU (neural processing unit) utilization from unapproved runtimes or unknown local inference servers.

  • Device policy: Use mobile device management (MDM) and endpoint detection and response (EDR) policies to control installation of unapproved runtimes and enforce baseline hardening on engineering devices.

The point isn’t to punish experimentation. It’s to regain visibility.
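
The inventory and detection signals above can be approximated with a short sweep. A minimal sketch, assuming a hypothetical scan root; in practice these checks would run through EDR or MDM tooling rather than a standalone script:

```python
import socket
from pathlib import Path

GGUF_SIZE_THRESHOLD = 2 * 1024**3  # flag .gguf artifacts larger than 2 GB

def find_large_gguf(root: Path) -> list[Path]:
    """Inventory pass: locate local model artifacts big enough to matter."""
    return [
        p for p in root.rglob("*.gguf")
        if p.is_file() and p.stat().st_size > GGUF_SIZE_THRESHOLD
    ]

def ollama_listening(host: str = "127.0.0.1", port: int = 11434) -> bool:
    """Detection pass: check whether anything is bound to Ollama's default port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

Either signal firing is a prompt for a conversation, not an automatic block.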

2. Provide a paved road: An internal, curated model hub

Shadow AI is often an outcome of friction. Approved tools are too restrictive, too generic, or too slow to approve. A better approach is to offer a curated internal catalog that includes:

  • Approved models for common tasks (coding, summarization, classification)

  • Verified licenses and usage guidance

  • Pinned versions with hashes (prioritizing safer formats like Safetensors)

  • Clear documentation for safe local usage, including where sensitive data is and isn’t allowed

If you want developers to stop scavenging, give them something better.

3. Update policy language: “Cloud services” isn’t enough anymore

Most acceptable use policies talk about SaaS and cloud tools. BYOM requires policy that explicitly covers:

  • Downloading and running model artifacts on corporate endpoints

  • Acceptable sources

  • License compliance requirements

  • Rules for using models with sensitive data

  • Retention and logging expectations for local inference tools

This doesn’t need to be heavy-handed. It needs to be unambiguous.

The perimeter is shifting back to the device

For a decade we moved security controls “up” into the cloud. Local inference is pulling a meaningful slice of AI activity back “down” to the endpoint.

5 signals shadow AI has moved to endpoints:

  • Large model artifacts: Unexplained storage consumption by .gguf or .pt files.

  • Local inference servers: Processes listening on ports like 11434 (Ollama).

  • GPU utilization patterns: Spikes in GPU usage while offline or disconnected from VPN.

  • Lack of model inventory: Inability to map code outputs to specific model versions.

  • License ambiguity: Presence of “non-commercial” model weights in production builds.

Shadow AI 2.0 isn’t a hypothetical future; it’s a predictable consequence of fast hardware, easy distribution, and developer demand. CISOs who focus only on network controls will miss what’s happening on the silicon sitting right on employees’ desks.

The next phase of AI governance is less about blocking websites and more about controlling artifacts, provenance, and policy at the endpoint, without killing productivity.

Jayachander Reddy Kandakatla is a senior MLOps engineer.

Welcome to the VentureBeat community!


Our guest posting program is where technical experts share insights and provide neutral, non-vested deep dives on AI, data infrastructure, cybersecurity and other cutting-edge technologies shaping the future of enterprise.

Read more from our guest post program — and check out our guidelines if you’re interested in contributing an article of your own!


Five signs data drift is already undermining your security models


Data drift happens when the statistical properties of a machine learning (ML) model’s input data change over time, eventually rendering its predictions less accurate. Cybersecurity professionals who rely on ML for tasks like malware detection and network threat analysis find that undetected data drift can create vulnerabilities. A model trained on old attack patterns may fail to see today’s sophisticated threats. Recognizing the early signs of data drift is the first step in maintaining reliable and efficient security systems.

Why data drift compromises security models

ML models are trained on a snapshot of historical data. When live data no longer resembles that snapshot, the model’s performance degrades, creating a critical cybersecurity risk. A threat detection model may generate more false negatives by missing real breaches, or more false positives, leading to alert fatigue for security teams.

Adversaries actively exploit this weakness. In 2024, attackers used echo-spoofing techniques to bypass email protection services. By exploiting misconfigurations in the system, they sent millions of spoofed emails that evaded the vendor’s ML classifiers. This incident demonstrates how threat actors can manipulate input data to exploit blind spots. When a security model fails to adapt to shifting tactics, it becomes a liability.

5 indicators of data drift

Security professionals can recognize the presence of drift (or its potential) in several ways.


1. A sudden drop in model performance

Accuracy, precision, and recall are often the first casualties. A consistent decline in these key metrics is a red flag that the model is no longer in sync with the current threat landscape.

Consider Klarna’s success: Its AI assistant handled 2.3 million customer service conversations in its first month and performed work equivalent to 700 agents. This efficiency drove a 25% decline in repeat inquiries and reduced resolution times to under two minutes.

Now imagine if those parameters suddenly reversed because of drift. In a security context, a similar drop in performance does not just mean unhappy clients — it also means successful intrusions and potential data exfiltration.

2. Shifts in statistical distributions

Security teams should monitor the core statistical properties of input features, such as the mean, median, and standard deviation. A significant change in these metrics from training data could indicate the underlying data has changed.


Monitoring for such shifts enables teams to catch drift before it causes a breach. For example, a phishing detection model might be trained on emails with an average attachment size of 2MB. If the average attachment size suddenly jumps to 10MB due to a new malware-delivery method, the model may fail to classify these emails correctly.
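
The attachment-size scenario can be checked with a few lines of standard-library Python. A minimal sketch; the 0.5-standard-deviation tolerance is an illustrative choice, not a recommended default:

```python
from statistics import mean, stdev

def stats_shifted(train: list[float], live: list[float],
                  tolerance: float = 0.5) -> bool:
    """Flag drift when the live mean moves more than `tolerance`
    training standard deviations away from the training mean."""
    return abs(mean(live) - mean(train)) > tolerance * stdev(train)

# Attachment sizes in MB: trained around 2MB, live traffic jumping to ~10MB.
train_sizes = [1.8, 2.0, 2.1, 1.9, 2.2, 2.0]
live_sizes = [9.5, 10.2, 10.0, 9.8, 10.4, 10.1]
```

The same check runs per feature, so a single shifted input (here, attachment size) surfaces even when other features look normal.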

3. Changes in prediction behavior

Even if overall accuracy seems stable, the distribution of a model’s predictions can change, a phenomenon often referred to as prediction drift.

For instance, if a fraud detection model historically flagged 1% of transactions as suspicious but suddenly starts flagging 5% or 0.1%, the nature of the input data has changed. It might indicate a new type of attack that confuses the model, or a shift in legitimate user behavior that the model was not trained to recognize.
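
That flag-rate shift is straightforward to monitor directly. A minimal sketch, where the 3x ratio threshold is an arbitrary illustrative cutoff:

```python
def flag_rate_drift(baseline_rate: float, recent_flags: int,
                    recent_total: int, ratio_limit: float = 3.0) -> bool:
    """Alert when the live flag rate deviates from the historical rate
    by more than `ratio_limit` in either direction."""
    live_rate = recent_flags / recent_total
    if live_rate == 0 or baseline_rate == 0:
        return live_rate != baseline_rate
    ratio = max(live_rate / baseline_rate, baseline_rate / live_rate)
    return ratio > ratio_limit
```

With a 1% baseline, a jump to 5% (ratio 5) trips the alert, while day-to-day noise around 1.2% does not.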

4. An increase in model uncertainty

For models that provide a confidence score or probability with their predictions, a general decrease in confidence can be a subtle sign of drift.


Recent studies highlight the value of uncertainty quantification in detecting adversarial attacks. If a model becomes less sure of its predictions across the board, it is likely facing data it was not trained on. In a cybersecurity setting, this uncertainty is an early sign of potential model failure, suggesting the model is operating on unfamiliar ground and that its decisions may no longer be reliable.

5. Changes in feature relationships

The correlation between different input features can also change over time. In a network intrusion model, traffic volume and packet size might be highly correlated during normal operations. If that correlation disappears, it can signal a change in network behavior that the model may not understand. A sudden decoupling of features could indicate a new tunneling tactic or a stealthy exfiltration attempt.
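
A feature-pair decoupling check needs only a Pearson correlation computed over a training window and a live window. A minimal sketch, with an illustrative 0.4 drop threshold:

```python
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def correlation_broke(train_x, train_y, live_x, live_y,
                      drop_threshold: float = 0.4) -> bool:
    """Flag when a feature pair that was strongly correlated in training
    loses that relationship in live traffic."""
    return pearson(train_x, train_y) - pearson(live_x, live_y) > drop_threshold
```

Here traffic volume and packet size would be the x and y series; a correlation that falls from near 1.0 to near 0 trips the check.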

Approaches to detecting and mitigating data drift

Common detection methods include the Kolmogorov-Smirnov (KS) test and the population stability index (PSI). These compare the distributions of live and training data to identify deviations. The KS test determines whether two samples differ significantly, while the PSI measures how much a variable’s distribution has shifted over time.
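
Both statistics can be sketched in pure Python. Production systems would more likely use scipy.stats.ks_2samp and a hardened PSI implementation; the bin count and zero-count floor below are illustrative choices:

```python
from bisect import bisect_right
from math import log

def ks_statistic(a: list[float], b: list[float]) -> float:
    """Two-sample KS statistic: the largest gap between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    return max(
        abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
        for x in a + b
    )

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population stability index over equal-width bins of the expected range."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample: list[float], i: int) -> float:
        upper = lo + (i + 1) * width if i < bins - 1 else float("inf")
        count = sum(1 for x in sample if lo + i * width <= x < upper)
        return max(count / len(sample), 1e-4)  # floor avoids log(0)

    return sum(
        (frac(actual, i) - frac(expected, i)) * log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Commonly cited rules of thumb read a PSI below 0.1 as stable and above 0.25 as a significant shift worth investigating.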

The mitigation method of choice often depends on how the drift manifests. Distribution changes may occur suddenly — customers’ buying behavior may change overnight with the launch of a new product or a promotion — or accumulate gradually over a longer period. Security teams must therefore adjust their monitoring cadence to capture both rapid spikes and slow burns. Mitigation typically involves retraining the model on more recent data to restore its effectiveness.


Proactively manage drift for stronger security

Data drift is an inevitable reality, and cybersecurity teams can maintain a strong security posture by treating detection as a continuous and automated process. Proactive monitoring and model retraining are fundamental practices to ensure ML systems remain reliable allies against developing threats.

Zac Amos is the Features Editor at ReHack.
