
M4 iPad Air review: Middle of the road is the best place to be


Apple’s M4 iPad Air refresh is overkill for nearly every tablet task, and sits in a strange but welcome spot between the iPad and iPad Pro.

M4 iPad Air review: iPad Air is now powered by the M4 chip

For a while now, the iPad Air has been simultaneously a compromise and an upgrade, sitting in the middle of the iPad roster. It gives users more performance than they’d get on the base iPad, without the sometimes-nosebleed pricing of the premium iPad Pro.
This is why it’s been our recommended iPad for most users: it delivers most of an iPad Pro, and more than an entry-level iPad, at a solid price point.

New Emoji, Playlist Generator and More: All the New Features iOS 26.4 Brings to Your iPhone


Apple released iOS 26.4 on Tuesday, March 24, about a week after the tech giant released iOS 26.3.1 (a), the company’s first Background Security Improvement update. The most recent update brings a slew of features to your iPhone, including new emoji and video podcasts.

Tech Tips

You can download iOS 26.4 now by going to Settings and tapping General. Next, select Software Update, tap Update Now and follow the prompts on your screen.

Here are some of the new features iOS 26.4 brings to your iPhone.


New emoji

All the new emoji iOS 26.4 brings to your iPhone.

With iOS 26.4, your iPhone gets eight new emoji, including an orca, a trombone, a landslide, a ballet dancer and a distorted face.

The Unicode Consortium is responsible for creating emoji, and it approved these new emoji in September as part of Unicode 17.0. But this is the first time the emoji are showing up on iPhones. 


Video podcasts come to Apple Podcasts

The iOS 26.4 update also brings video to your Podcasts app. To view these video podcasts, open the Podcasts app and start listening to an episode with the video player icon in the top right corner of the title card. Once you’re listening, open the media player and tap the Turn Video On button near the podcast’s progress bar. The podcast’s artwork will be replaced with the video. To turn the video off again, tap Turn Video Off and the podcast’s artwork will return.

Video podcasts are a fun addition to the Podcasts app.

Reduce some Liquid Glass effects across your device

Apple’s iOS 26.4 update adds another setting to minimize Liquid Glass effects across your device: Reduce Bright Effects. Here’s where to find this setting.


1. Tap Settings.
2. Tap Accessibility.
3. Tap Display & Text Size.
4. Scroll down the menu to find Reduce Bright Effects.

Reduce Bright Effects can eliminate some Liquid Glass effects.

Apple says the setting will minimize highlighting and flashing when interacting with on-screen elements, such as buttons or the keyboard. So if you find certain flashing effects annoying, you can now disable them.


Playlist Playground in Apple Music

The iOS 26.4 update also introduces a new playlist generator for Apple Music subscribers called Playlist Playground. Apple says the feature can create a playlist based on your description. Once you enter your description, it will create a playlist with a title, tracklist and general description.

To access Playlist Playground, first you have to be an Apple Music subscriber. Then, open Apple Music and go to your Library. In your Library, you’ll see a new icon at the top of your screen with a plus and a few lines next to it. Tap this, and you’ll be prompted to describe your playlist.

Playlist Playground can generate a playlist for you in no time.

Apple notes this feature is still in beta, so it might produce unexpected results. You might ask for a good gym mix and end up with some Whitney Houston, but who’s to say Whitney isn’t good gym music?

Find nearby concerts with the aptly named Concerts feature

iOS 26.4 brings a new Concerts feature to your Apple Music app. 

“Concerts helps you discover nearby shows from artists in your library and recommends new artists based on what you listen to,” Apple writes in the update’s description.

To find Concerts, tap the magnifying glass icon at the bottom of your Apple Music screen, then tap Concerts. The feature may ask for your location the first time you use it. Then you’ll see popular shows nearby, along with their dates, times and locations. Tapping into any of these shows gives you more information on the show, as well as a link to buy tickets.

The Concerts tab in Apple Music makes it easy to see upcoming shows in your area.

Shazam works offline, kind of

With iOS 26.4, the Shazam-powered music recognition control in Control Center works in more situations. Now, if you aren’t connected to the internet when you use it to identify a song, it will deliver the song’s identity once you’re back online.

Ambient Music home screen widgets

Apple introduced two new Ambient Music widgets for your home screen with iOS 26.4. These widgets let you easily access the four Ambient Music playlists: Sleep, Chill, Productivity and Wellbeing. You can quickly turn on a relaxing playlist to unwind after a long day, or one to help you focus on the task at hand.

The Ambient Music widget makes it easy to play music for just the right setting.

Apple introduced these playlists alongside iOS 18.4 in 2025, but at the time you could only access them from Control Center.

Let other adults in your Family Sharing group pay for themselves

In iOS 26.4, other adults in your Family Sharing group can use their own payment method instead of depending on the group organizer’s. That means if you share a group with your parents, siblings or other family members, you can buy a game, movie or anything else with your own payment information rather than using someone else’s and paying them back.


It’s a helpful feature that saves you the hassle of paying someone back after using their payment information. And if you’re the person whose card is always used, it’s a nice way to ensure others pay for their own stuff and don’t freeload off you.

More caption options when viewing videos

With iOS 26.4, you can easily change the caption style while watching content in certain apps, such as Apple TV. 

To see these options, start playing a video, then tap the speech bubble icon in the bottom-right corner of your screen to open the subtitle menu. Tap Style, and you’ll see the subtitle options Classic (the default setting), Large Text, Outline Text and Transparent Background. So if you and a few others are watching something on your iPhone and want to make sure everyone can see the captions, you might choose Large Text.

You can adjust the subtitles in some apps thanks to iOS 26.4.

More control over wallpaper Collections

The iOS 26.4 update also gives you more control over which wallpaper Collections are on your iPhone. Now, if you go to Settings > Wallpaper > Add New Wallpaper, you can tap Get under Collections like Weather and Astronomy. 

If you want to delete a Collection from your device, tap the check mark to the right of the downloaded Collection, and the option to Remove from Gallery appears. Tap this to delete the Collection from your iPhone, saving you some precious space.

You can remove wallpaper Collections from your iPhone if you want to save a little more space.

Here are the release notes for iOS 26.4.

Apple Music

  • Playlist Playground (beta) generates a playlist from your description, complete with a title, description and tracklist.
  • Concerts helps you discover nearby shows from artists in your library and recommends new artists based on what you listen to.
  • Offline Music Recognition in Control Center identifies songs without an internet connection and delivers results automatically when you’re back online.
  • Ambient Music widget for Sleep, Chill, Productivity and Wellbeing brings curated playlists to the Home Screen.
  • Full-screen backgrounds give album and playlist pages a more immersive look.

Accessibility

  • Reduce bright effects setting minimizes bright flashes when tapping on elements like buttons.
  • Subtitle and caption settings are available from the captions icon while viewing media, making them easier to find, customize and preview.
  • Reduce Motion setting more reliably reduces the animations of Liquid Glass for users sensitive to on-screen motion.

This update also includes the following enhancements:

  • Support for AirPods Max 2.
  • 8 new emoji, including an orca, trombone, landslide, ballet dancer and distorted face, are available in the emoji keyboard.
  • Freeform gains advanced image creation and editing tools, and a premium content library, joining Apple Creator Studio.
  • Mark reminders as urgent from the Quick Toolbar or by touching and holding, and filter for urgent reminders in your Smart Lists.
  • Purchase Sharing lets adult members in Family Sharing groups use their own payment method when making purchases, without relying on the family organizer.
  • Improved keyboard accuracy when typing quickly.

For more iOS news, check out what features were included in iOS 26.3 and iOS 26.2. You can also take a look at our iOS 26 cheat sheet for other tips and tricks.




Upcoming 3% Big Tech tax in Poland may leave Apple some wiggle room


Poland is moving ahead with a digital services tax aimed at Big Tech revenue, but the structure leaves enough room for companies like Apple to argue they don’t fully qualify.

Poland moves to tax Big Tech companies

The country will draft a digital services tax bill with a 3% levy on revenue from online advertising, user platforms, and data-driven services. Poland’s bill targets companies with more than $1.16 billion in global revenue and about $6.8 million in revenue within Poland.
Apple and other major U.S. tech firms fall within those thresholds and would be affected.

Cloudflare’s new Dynamic Workers ditch containers to run AI agent code 100x faster

Published

on

Web infrastructure giant Cloudflare is seeking to transform the way enterprises deploy AI agents with the open beta release of Dynamic Workers, a new lightweight, isolate-based sandboxing system that it says starts in milliseconds, uses only a few megabytes of memory, and can run on the same machine — even the same thread — as the request that created it.

Compared with traditional Linux containers, the company says Dynamic Workers are roughly 100x faster to start and between 10x and 100x more memory efficient.

Cloudflare has spent months pushing what it calls “Code Mode,” the idea that large language models often perform better when they are given an API and asked to write code against it, rather than being forced into one tool call after another.

The company says converting an MCP server into a TypeScript API can cut token usage by 81%, and it is now positioning Dynamic Workers as the secure execution layer that makes that approach practical at scale.


For enterprise technical decision makers, that is the bigger story. Cloudflare is trying to turn sandboxing itself into a strategic layer in the AI stack. If agents increasingly generate small pieces of code on the fly to retrieve data, transform files, call services or automate workflows, then the economics and safety of the runtime matter almost as much as the capabilities of the model. Cloudflare’s pitch is that containers and microVMs remain useful, but they are too heavy for a future where millions of users may each have one or more agents writing and executing code constantly.

The history of modern isolated runtime environments

To understand why Cloudflare is doing this, it helps to look at the longer arc of secure code execution. Modern sandboxing has evolved through three main models, each trying to build a better digital box: smaller, faster and more specialized than the one before it.

The first model is the isolate. Google introduced the v8::Isolate API in 2011 so the V8 JavaScript engine could run many separate execution contexts efficiently inside the same process. In effect, a single running program could spin up many small, tightly separated compartments, each with its own code and variables.

In 2017, Cloudflare adapted that browser-born idea for the cloud with Workers, betting that the traditional cloud stack was too slow for instant, globally distributed web tasks. The result was a runtime that could start code in milliseconds and pack many environments onto a single machine. The trade-off is that isolates are not full computers. They are strongest with JavaScript, TypeScript and WebAssembly, and less natural for workloads that expect a traditional machine environment.


The second model is the container. Containers had been technically possible for years through Linux kernel features, but Docker turned them into the default software packaging model after its release in 2013.

Containers solved a huge portability problem by letting developers package code, libraries and settings into a predictable unit that could run consistently across systems. That made them foundational to modern cloud infrastructure. But they are relatively heavy for the sort of short-lived tasks Cloudflare is talking about here. The company says containers generally take hundreds of milliseconds to boot and hundreds of megabytes of memory to run, which becomes costly and slow when an AI-generated task only needs to execute for a moment.

The third model is the microVM. Popularized by AWS Firecracker in 2018, microVMs were designed to offer stronger machine-like isolation than containers without the full bulk of a traditional virtual machine. They are attractive for running untrusted code, which is why they have started to show up in newer AI-agent systems such as Docker Sandboxes. But they still sit between the other two models: stronger isolation and more flexibility than an isolate, but slower and heavier as well.

That is the backdrop for Cloudflare’s pitch. The company is not claiming containers disappear, or that microVMs stop mattering. It is claiming that for a growing class of web-scale, short-lived AI-agent workloads, the default box has been too heavy, and the isolate may now be the better fit.


Cloudflare’s case against the container bottleneck

Cloudflare’s argument is blunt: for “consumer-scale” agents, containers are too slow and too expensive. In the company’s framing, a container is fine when a workload persists, but it is a bad fit when an agent needs to run one small computation, return a result and disappear. Developers either keep containers warm, which costs money, or tolerate cold-start delay, which hurts responsiveness. They may also be tempted to reuse a live sandbox across multiple tasks, which weakens isolation.

Dynamic Worker Loader is Cloudflare’s answer. The API allows one Worker to instantiate another Worker at runtime with code provided on the fly, usually by a language model. Because these dynamic Workers are built on isolates, Cloudflare says they can be created on demand, run one snippet of code, and then be thrown away immediately afterward. In many cases, they run on the same machine and even the same thread as the Worker that created them, which removes the need to hunt for a warm sandbox somewhere else on the network.
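The lifecycle Cloudflare describes (create a sandbox on demand, run one snippet, discard it) can be sketched in plain TypeScript. The `runDisposable` helper below is a toy stand-in that uses `new Function()` to play the role of a fresh isolate; it is not Cloudflare's Worker Loader API, which provides real isolation, bindings and module loading.

```typescript
// Toy model of the create-run-discard lifecycle of a dynamic Worker.
// NOTE: new Function() provides no real isolation; it only stands in
// for the idea of a fresh execution context per generated snippet.
type Snippet = { code: string; input: unknown };

function runDisposable({ code, input }: Snippet): unknown {
  // A fresh function object is built for each request, used exactly
  // once, and becomes garbage immediately afterward: no warm pool,
  // no sandbox reuse across tasks.
  const run = new Function("input", code);
  return run(input);
}

// Two "model-generated" snippets, each in its own throwaway context.
const doubled = runDisposable({ code: "return input * 2;", input: 21 }); // → 42
const shouted = runDisposable({ code: "return String(input).toUpperCase();", input: "hi" }); // → "HI"
```

In the real API, the loader would additionally scope what each snippet can reach (bindings, outbound fetch), which a bare `new Function()` cannot do.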

The company is also pushing hard on scale. It says many container-based sandbox providers limit concurrent sandboxes or the rate at which they can be created, while Dynamic Workers inherit the same platform characteristics that already let Workers scale to millions of requests per second. In Cloudflare’s telling, that makes it possible to imagine a world where every user-facing AI request gets its own fresh, isolated execution environment without collapsing under startup overhead.

Security remains the hardest part

Cloudflare does not pretend this is easy to secure. In fact, the company explicitly says hardening an isolate-based sandbox is trickier than relying on hardware virtual machines, and notes that security bugs in V8 are more common than those in typical hypervisors. That is an important admission, because the entire thesis depends on convincing developers that an ultra-fast software sandbox can also be safe enough for AI-generated code.


Cloudflare’s response is that it has nearly a decade of experience doing exactly that. The company points to automatic rollout of V8 security patches within hours, a custom second-layer sandbox, dynamic cordoning of tenants based on risk, extensions to the V8 sandbox using hardware features like MPK, and research into defenses against Spectre-style side-channel attacks. It also says it scans code for malicious patterns and can block or further sandbox suspicious workloads automatically. Dynamic Workers inherit that broader Workers security model.

That matters because without the security story, the speed story sounds risky. With it, Cloudflare is effectively arguing that it has already spent years making isolate-based multi-tenancy safe enough for the public web, and can now reuse that work for the age of AI agents.

Code Mode: from tool orchestration to generated logic

The release makes the most sense in the context of Cloudflare’s larger Code Mode strategy. The idea is simple: instead of giving an agent a long list of tools and asking it to call them one by one, give it a programming surface and let it write a short TypeScript function that performs the logic itself. That means the model can chain calls together, filter data, manipulate files and return only the final result, rather than filling the context window with every intermediate step. Cloudflare says that cuts both latency and token usage, and improves outcomes especially when the tool surface is large.
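As a concrete sketch of that difference, consider a hypothetical `crm` API exposed to the model as a typed TypeScript library rather than a list of discrete tools. The names here (`crm`, `listOrders`, `bigSpenders`) are illustrative, not part of any Cloudflare product.

```typescript
// Code Mode in miniature: rather than the model issuing one tool call
// per step and echoing every intermediate result back through the
// context window, it writes one short function against a typed API.
interface Order {
  customer: string;
  total: number;
}

// Hypothetical tool surface, exposed to the model as a typed library.
const crm = {
  listOrders: (): Order[] => [
    { customer: "a@example.com", total: 120 },
    { customer: "b@example.com", total: 40 },
    { customer: "c@example.com", total: 300 },
  ],
};

// The kind of snippet a model might generate: chain, filter and map
// locally, then return only the final answer.
function bigSpenders(minTotal: number): string[] {
  return crm
    .listOrders()
    .filter((o) => o.total >= minTotal)
    .map((o) => o.customer);
}
```

Run inside a disposable sandbox, only the final array crosses the boundary back to the model, which is where the latency and token savings come from.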

The company points to its own Cloudflare MCP server as proof of concept. Rather than exposing the full Cloudflare API as hundreds of individual tools, it says the server exposes the entire API through two tools — search and execute — in under 1,000 tokens because the model writes code against a typed API instead of navigating a long tool catalog.


That is a meaningful architectural shift. It moves the center of gravity from tool orchestration toward code execution. And it makes the execution layer itself far more important.

Why Cloudflare thinks TypeScript beats HTTP for agents

One of the more interesting parts of the launch is that Cloudflare is also arguing for a different interface layer. MCP, the company says, defines schemas for flat tool calls but not for programming APIs. OpenAPI can describe REST APIs, but it is verbose both in schema and in usage. TypeScript, by contrast, is concise, widely represented in model training data, and can communicate an API’s shape in far fewer tokens.

Cloudflare says the Workers runtime can automatically establish a Cap’n Web RPC bridge between the sandbox and the harness code, so a dynamic Worker can call those typed interfaces across the security boundary as if it were using a local library. That lets developers expose only the exact capabilities they want an agent to have, without forcing the model to reason through a sprawling HTTP interface.

The company is not banning HTTP. In fact, it says Dynamic Workers fully support HTTP APIs. But it clearly sees TypeScript RPC as the cleaner long-term interface for machine-generated code, both because it is cheaper in tokens and because it gives developers a narrower, more intentional security surface.


Credential injection and tighter control over outbound access

One of the more practical enterprise features in the release is globalOutbound, which lets developers intercept every outbound HTTP request from a Dynamic Worker. They can inspect it, rewrite it, inject credentials, respond to it directly, or block it entirely. That makes it possible to let an agent reach outside services while never exposing raw secrets to the generated code itself.
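The pattern can be sketched as a gate that every outbound request must pass through. The function names and types below are illustrative, not Cloudflare's actual `globalOutbound` interface.

```typescript
// Sketch of outbound interception: generated code builds requests with
// no credentials attached, and the gate injects them on the way out
// (or blocks the request entirely). Illustrative types only.
type OutboundRequest = { url: string; headers: Record<string, string> };

function makeOutboundGate(secret: string, allowedHost: string) {
  return (req: OutboundRequest): OutboundRequest => {
    const host = new URL(req.url).hostname;
    if (host !== allowedHost) {
      // Blast-radius control: the sandbox can only reach approved hosts.
      throw new Error(`blocked outbound request to ${host}`);
    }
    // The credential is added outside the sandbox's visible environment.
    return {
      ...req,
      headers: { ...req.headers, Authorization: `Bearer ${secret}` },
    };
  };
}

const gate = makeOutboundGate("example-secret", "api.example.com");
const signed = gate({ url: "https://api.example.com/v1/charge", headers: {} });
// signed.headers.Authorization now carries the token, yet the generated
// code that built the request never saw the secret.
```

This is the shape of the safety argument: the developer, not the model, decides which hosts are reachable and what credentials travel with each request.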

Cloudflare positions that as a safer way to connect agents to third-party services requiring authentication. Instead of trusting the model not to mishandle credentials, the developer can add them on the way out and keep them outside the agent’s visible environment. In enterprise settings, that kind of blast-radius control may matter as much as the performance gains.

More than a runtime: the helper libraries matter too

Another reason the announcement lands as more than a low-level runtime primitive is that Cloudflare is shipping a toolkit around it. The @cloudflare/codemode package is designed to simplify running model-generated code against AI tools using Dynamic Workers. At its core is DynamicWorkerExecutor(), which sets up a purpose-built sandbox with code normalization and direct control over outbound fetch behavior. The package also includes utility functions to wrap an MCP server into a single code() tool or generate MCP tooling from an OpenAPI spec.

The @cloudflare/worker-bundler package handles the fact that Dynamic Workers expect pre-bundled modules. It can resolve npm dependencies, bundle them with esbuild, and return the module map the Worker Loader expects. The @cloudflare/shell package adds a virtual filesystem backed by a durable Workspace using SQLite and R2, with higher-level operations like read, write, search, replace, diff and JSON update, plus transactional batch writes.


Taken together, those packages make the launch feel much more complete. Cloudflare is not just exposing a fast sandbox API. It is building the surrounding path from model-generated logic to packaged execution to persistent file manipulation.

Isolates versus microVMs: two different homes for agents

Cloudflare’s launch also highlights a growing split in the AI-agent market. One side emphasizes fast, disposable, web-scale execution. The other emphasizes deeper, more persistent environments with stronger machine-like boundaries.

Docker Sandboxes is a useful contrast. Rather than using standard containers alone, it uses lightweight microVMs to give each agent its own private Docker daemon, allowing the agent to install packages, run commands and modify files without directly exposing the host system. That is a better fit for persistent, local or developer-style environments. Cloudflare is optimizing for something different: short-lived, high-volume execution on the global web.

So the trade-off is not simply security versus speed. It is depth versus velocity. MicroVMs offer a sturdier private fortress and broader flexibility. Isolates offer startup speed, density and lower cost at internet scale. That distinction may become one of the main dividing lines in agent infrastructure over the next year.


Community reaction: hype, rivalry and the JavaScript catch

The release also drew immediate attention from developers on X, with reactions that captured both excitement and skepticism.

Brandon Strittmatter, a Cloudflare product lead and founder of Outerbase, called the move “classic Cloudflare,” praising the company for “changing the current paradigm on containers/sandboxes by reinventing them to be lightweight, less expensive, and ridiculously fast.”

Zephyr Cloud CEO Zack Chapple called the release “worth shouting from the mountain tops.”

But the strongest caveat surfaced quickly too: this system works best when the agent writes JavaScript. Cloudflare says Workers can technically run Python and WebAssembly, but that for small, on-demand snippets, “JavaScript will load and run much faster.”


That prompted criticism from YouTuber and ThursdAI podcast host Alex Volkov, who wrote that he “got excited… until I got here,” reacting to the language constraint.

Cloudflare’s defense is pragmatic and a little provocative. Humans have language loyalties, the company argues, but agents do not. In Cloudflare’s words, “AI will write any language you want it to,” and JavaScript is simply well suited to sandboxed execution on the web. That may be true in the narrow sense the company intends, but it also means the platform is most naturally aligned with teams already comfortable in the JavaScript and TypeScript ecosystem.

The announcement also triggered immediate competitive positioning. Nathan Flurry of Rivet used the moment to contrast his Secure Exec product as an open-source alternative that supports a broader range of platforms including Vercel, Railway and Kubernetes rather than being tied closely to Cloudflare’s own stack.

That reaction is worth noting because it shows how quickly the sandboxing market around agents is already splitting between vertically integrated platforms and more portable approaches.


Early use cases: AI apps, automations and generated platforms

Cloudflare is pitching Dynamic Workers for much more than quick code snippets. The company highlights Code Mode, AI-generated applications, fast development previews, custom automations and user platforms where customers upload or generate code that must run in a secure sandbox.

One example it spotlights is Zite, which Cloudflare says is building an app platform where users interact through chat while the model writes TypeScript behind the scenes to build CRUD apps, connect to services like Stripe, Airtable and Google Calendar, and run backend logic. Cloudflare quotes Zite CTO and co-founder Antony Toron saying Dynamic Workers “hit the mark” on speed, isolation and security, and that the company now handles “millions of execution requests daily” using the system.

Even allowing for vendor framing, that example gets at the company’s ambition. Cloudflare is not just trying to make agents a bit more efficient. It is trying to make AI-generated execution environments cheap and fast enough to sit underneath full products.

Pricing and availability

Dynamic Worker Loader is now in open beta and available to all users on the Workers Paid plan. Cloudflare says dynamically loaded Workers are priced at $0.002 per unique Worker loaded per day, in addition to standard CPU and invocation charges, though that per-Worker fee is waived during the beta period. For one-off code generation use cases, the company says that cost is typically negligible compared with the inference cost of generating the code itself.


That pricing model reinforces the larger thesis behind the product: that execution should become a small, routine part of the agent loop rather than a costly special case.

The bigger picture

Cloudflare’s launch lands at a moment when AI infrastructure is becoming more opinionated. Some vendors are leaning toward long-lived agent environments, persistent memory and machine-like execution. Cloudflare is taking the opposite angle. For many workloads, it argues, the right agent runtime is not a persistent container or a tiny VM, but a fast, disposable isolate that appears instantly, executes one generated program, and vanishes.

That does not mean containers or microVMs go away. It means the market is starting to split by workload. Some enterprises will want deeper, more persistent environments. Others — especially those building high-volume, web-facing AI systems — may want an execution layer that is as ephemeral as the requests it serves.

Cloudflare is betting that this second category gets very large, very quickly. And if that happens, Dynamic Workers may prove to be more than just another Workers feature. They may be Cloudflare’s attempt to define what the default execution layer for internet-scale AI agents looks like.



Razer’s Nikke collab finally lets you arm your rifle-wielding waifu with a cat-eared gamer headset



  • Goddess of Victory: Nikke is getting a Razer collaboration
  • It includes the ability to unlock a new character skin featuring the brand’s Razer Kraken Kitty V2 BT headset
  • There will also be pop-up events at some Razer stores

Goddess of Victory: Nikke publisher Level Infinite has revealed a new collaboration with gaming hardware giant Razer that brings one of the brand’s cutest headsets to the mobile game.

Starting on March 26, 2026, players will be able to unlock the new Punky Street skin for the character Viper by working their way through the limited-time Punky Street Pass. The skin decks out Viper in trendy streetwear and a white Razer Kraken Kitty V2 BT wireless gaming headset, complete with cat ears and some custom pink decals.


Retail Fail: The :CueCat Disaster

Published

on

Digital Convergence Corporation is hardly a household name, and there’s a good reason for that. However, it raised about $185 million in investments around the year 2000 from companies such as Coca-Cola, Radio Shack, GE, E. W. Scripps, and the media giant Belo Corporation. So what did all these companies want, and why didn’t it catch on? If you are old enough, you might remember the :CueCat, but you probably thought it was Radio Shack’s disaster. They were simply investors.

The Big Idea

The :CueCat was a barcode scanner that usually plugged into a PC’s keyboard port (in those days, normally a PS/2 port). A special cable, often called a wedge, acted like a Y-cable, letting you use your keyboard and the scanner on the same port. The scanner looked like a cat, of course.

However, the :CueCat was not just a generic barcode scanner. It was made to scan only “cues,” special barcodes that appeared in catalogs, newspapers, and other publications. The idea was that you’d see something in an ad or a catalog, rush to your computer to scan the barcode, and be transported to the retailer’s website to learn more and complete the purchase.

The software could also listen using your sound card for special audio codes that would play on radio or TV commercials and then automatically pop up the associated webpage. So, a piece of software that was reading your keyboard, listening to your room audio at all times, and could inject keystrokes into your computer. What could go wrong?


Of Interest

You might think this was some tiny startup that died with a whimper, but Radio Shack, Forbes, Wired, and several major newspapers were onboard. The :CueCat cost about $6.50 to produce, but most people never bought one. Radio Shack, Forbes, and Wired were giving them away.

The problem was that even free proved too high a price for most people. To use the device, you had to register and complete a long survey full of invasive questions. Then the software showed you an ad bar. Digital Convergence had your demographic info, your surfing habits, and knew exactly what you were scanning.

Even then, the scanner solved a non-problem. If you saw something in a Radio Shack catalog, for example, it was probably not so hard to go to their website and search for it by title or stock number. Especially if you were sitting in front of your computer. If you weren’t… well, then, the :CueCat didn’t help you in that case, anyway.

The Next Big Thing?

It is easy to look back on this and think, “What a bad idea!” But Digital Convergence and its investors were in a full-blown media blitz. The video below shows a contemporary demo of the technology.

If you still aren’t sold, look at how happy the woman in the Radio Shack commercial is that she didn’t have to manually search the web for her next phone purchase.

A clip from the Radio Shack 2002 catalog (from RadioShackCatalogs.com)

Problem solved, right? Want to buy that new ham radio? Scan the code, and you don’t have to type “Alinco” into a search box! Even the table of contents in the 2002 RadioShack catalog was festooned with barcodes.

The RadioShack catalog might have been an exception, though. A 2001 issue of Forbes magazine showed sparing use of the barcodes and no obvious ones linking to big advertisers. You would think the advertisers would have been a prime target, even if you had to make deals to get them onboard.

Hackers

Naturally, hacks immediately appeared. Drivers from [Pierre-Philippe Coupard] and [Michael Rothwell] allowed you to use the :CueCat without the invasive software or registration. You could even scan normal barcodes like UPC codes. Radio Shack and others wound up simply giving away $6.50 barcode scanners.
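Part of what those drivers undid was the scanner’s light obfuscation. Contemporary “declawing” write-ups described the output as a base64-style 6-bit packing over a nonstandard alphabet, with each decoded byte XOR’d against 0x43 (ASCII ‘C’). Here is a minimal Python sketch of that scheme; the alphabet ordering and field framing are assumptions based on those write-ups (details reportedly varied by firmware revision), and the encoder exists only so the round trip can be demonstrated:

```python
import string

# 64-character alphabet as described in period write-ups; treat the exact
# ordering as illustrative rather than authoritative.
ALPHABET = string.ascii_lowercase + string.ascii_uppercase + string.digits + "+-"
VALUE = {c: i for i, c in enumerate(ALPHABET)}

def encode_field(text: str) -> str:
    """Obfuscate `text` the way the scanner did (demo round-trips only)."""
    data = bytes(b ^ 0x43 for b in text.encode("ascii"))  # XOR with 'C'
    if len(data) % 3:
        raise ValueError("demo encoder handles 3-byte multiples only")
    chars = []
    for i in range(0, len(data), 3):
        n = int.from_bytes(data[i:i + 3], "big")  # 3 bytes -> 24 bits
        chars += [ALPHABET[(n >> s) & 0x3F] for s in (18, 12, 6, 0)]
    return "".join(chars)

def decode_field(field: str) -> str:
    """Reverse the obfuscation: unpack 6-bit groups, then XOR with 'C'."""
    out = bytearray()
    for i in range(0, len(field), 4):
        n = 0
        for ch in field[i:i + 4]:
            n = (n << 6) | VALUE[ch]
        out += n.to_bytes(3, "big")  # 4 chars -> 3 bytes
    return bytes(b ^ 0x43 for b in out).decode("ascii")

# Round trip on a UPC-style digit string (12 bytes, a multiple of 3):
assert decode_field(encode_field("043000181706")) == "043000181706"
```

Once you could decode the output yourself, the “invasive software” had nothing left to offer.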

While people were already prickly about the amount of information gathered and the tracking, hackers found a report file on a public server that revealed personal info about 140,000 users — a huge number for the year 2000.

With hackers attacking both the hardware and the company’s website, Digital Convergence had to act. They changed their license, claiming that you didn’t own the scanner and forbidding reverse engineering. There were no real lawsuits, but there were threats and, as you might imagine, that just made things worse.

The Decline

By 2001, a small number of USB-native :CueCats had been distributed. But the bad publicity and the lack of usefulness took their toll. By mid-year, most of the 225 employees at Digital Convergence had been let go. Later in the year, the investors decided to stop using the tech entirely.

By 2005, you could buy the now-surplus devices for $0.30 each, as long as you agreed to take 500,000 or more of them. You can still find them on the used market if you look. Open source software is still around that can make them do useful things, but honestly, unless you’re hacking it into a custom hardware setup, your phone is a better barcode scanner.

Hardware

You can still find some of the contemporary teardowns of the :CueCat online. There were, apparently, several revisions of the hardware, but at least one version had a cheap CPU, a serial EEPROM, an 8 KB static RAM, and a handful of small parts. For a free device, the insides looked pretty good.

:CueCat without cover by [Shaddack]

Removing the ID from the device was as easy as removing the EEPROM, although people were less equipped to remove SMD chips in those days. You could also just lift a single pin, which was slightly easier. At least one enterprising hacker added a DIP switch to experiment with the pin settings.

Aftermath

Of course, now we have QR codes. They are somewhat more private and work with the ubiquitous cell phone, yet even they haven’t caught on in the way Digital Convergence had planned for its cues.

Was it a good idea? That’s debatable. But giant privacy grabs usually go poorly. Granted, in 2000, that might not have been as obvious as it is today. Even so, it doesn’t keep companies from learning the lesson all over again.

Featured image: The :CueCat. Photo by [Jerry Whiting]

Source link

Continue Reading

Tech

Dell’s latest laptops shed some weight, trim the waistline, and get sensible names

Published

on

Dell has overhauled its commercial PC lineup with four new Pro notebooks: Pro Premium, Pro 7, Pro 5, and Pro 3. The devices are thinner and lighter than their predecessors, pack Intel and AMD processors, and finally ditch the old Latitude branding for a cleaner, number-based naming scheme.

Which laptop is actually built for you?

The Dell Pro Premium is the executive pick. It is up to 7% thinner, the lightest of them all, and wears a classy magnesium alloy chassis in a dark gray finish. The notebook offers an optional tandem OLED display and comes with an 8MP HDR camera for video calls that don’t make you look like you’re broadcasting from a basement.

The Dell Pro 7 is for those who want it all in a small package. Up to 18% thinner than the previous generation, the Pro 7 is the thinnest 13- and 14-inch commercial laptop and 2-in-1 in its class. The edge-to-edge Gorilla Glass touchscreen can achieve up to 500 nits, and the higher trims can add OLED displays, 8MP cameras, and a mini-LED backlit keyboard. 

The Dell Pro 5 could be a popular choice

The Dell Pro 5 delivers the most scalable performance of the lineup. Available in 14- and 16-inch sizes, it’s up to 12% thinner than last year’s model and up to 21% thinner than competing designs. It also houses a 70Wh battery and an optional OLED display, making it the practical workhorse of the range.

Finally, there’s the Pro 3, which starts at just 2.89 pounds with a scratch-resistant metallic finish, Wi-Fi 7, and solid battery life. Dell’s latest laptops run on Intel Core Ultra Series 3 or AMD Ryzen AI 400 processors with Copilot+ PC support.

Product               Sizes              Availability
Dell Pro 14 Premium   14-inch            March 31, 2026
Dell Pro 7            13-inch, 14-inch   May 2026
Dell Pro 5            14-inch, 16-inch   May 2026
Dell Pro 3            14-inch, 16-inch   May 2026

Beyond laptops, the company has also announced the compact Pro 5 Micro desktop, new Pro Precision workstations, and a range of Pro P monitors with built-in conferencing features. 

Source link

Continue Reading

Tech

Meet the 91-year-old gamer who beat Resident Evil Requiem the old-fashioned way

Published

on


Yang’s meticulous, analog approach has captivated gaming communities in China and abroad, where clips of him leafing through notebooks filled with hand-sketched maps and puzzle notes have drawn admiration and nostalgia in equal measure. His accomplishment – finishing Resident Evil Requiem entirely unaided – has been hailed by fans as…
Read Entire Article
Source link

Continue Reading

Tech

Citrix urges admins to patch NetScaler flaws as soon as possible

Published

on

Citrix

Citrix has patched two vulnerabilities affecting NetScaler ADC networking appliances and NetScaler Gateway secure remote access solutions, one of which is very similar to the CitrixBleed and CitrixBleed2 flaws exploited in zero-day attacks in recent years.

The critical security bug (tracked as CVE-2026-3055) stems from insufficient input validation, which can lead to a memory overread on Citrix ADC or Citrix Gateway appliances configured as a SAML identity provider (IDP), potentially enabling remote attackers without privileges to steal sensitive information such as session tokens.

“Cloud Software Group strongly urges affected customers of NetScaler ADC and NetScaler Gateway to install the relevant updated versions as soon as possible,” the company warned in a Monday advisory.

Citrix has also shared detailed guidance on how to identify and patch NetScaler instances vulnerable to CVE-2026-3055.

The company also patched the CVE-2026-4368 vulnerability affecting appliances configured as Gateways (SSL VPN, ICA Proxy, CVPN, RDP proxy) or AAA virtual servers, which can enable threat actors with low privileges on the targeted system to exploit a race condition in low-complexity attacks, potentially leading to user session mix-ups.

The two flaws affect NetScaler ADC and NetScaler Gateway versions 13.1 and 14.1 (fixed in 13.1-62.23 and 14.1-66.59) and NetScaler ADC 13.1-FIPS and 13.1-NDcPP (addressed in 13.1-37.262).
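As a rough illustration of the triage work this implies (this is not official Citrix tooling, and the `needs_patch` helper is entirely hypothetical), a script can compare a reported version string against the fixed builds for the standard branches named above; the FIPS/NDcPP builds sit on a separate branch and are deliberately left out:

```python
# Fixed builds from the advisory for the standard branches:
# 13.1-62.23 and 14.1-66.59. FIPS/NDcPP (fixed in 13.1-37.262) is a
# separate branch with its own numbering and is omitted here.
FIXED_BUILDS = {"13.1": (62, 23), "14.1": (66, 59)}

def needs_patch(version: str) -> bool:
    """True if `version` is on a covered branch but below the fixed build."""
    branch, _, build = version.partition("-")  # "14.1-66.59" -> "14.1", "66.59"
    if branch not in FIXED_BUILDS:
        return False  # unknown branch: triage manually
    major, minor = (int(part) for part in build.split("."))
    return (major, minor) < FIXED_BUILDS[branch]

assert needs_patch("14.1-60.17") is True    # below the fixed build
assert needs_patch("14.1-66.59") is False   # exactly the fixed build
assert needs_patch("13.1-62.23") is False
```

In practice you would feed this from an inventory export, but the point is simply that “fixed in 13.1-62.23 and 14.1-66.59” is a per-branch comparison, not a single version check.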

Internet security watchdog group Shadowserver is currently tracking over 30,000 NetScaler ADC instances and more than 2,300 Gateway instances exposed online. However, there is currently no information regarding how many of them are using vulnerable configurations or have already been patched against attacks.

Citrix NetScaler ADC instances exposed online
Citrix NetScaler ADC instances exposed online (Shadowserver)

Since Citrix released security updates to address the vulnerability, multiple cybersecurity companies have warned that it’s critical to secure NetScaler against attacks targeting CVE-2026-3055.

Many of them have also pointed out obvious similarities to the CitrixBleed and CitrixBleed2 out-of-bounds memory-read vulnerabilities exploited in zero-day attacks in recent years.

“Unfortunately, many will recognise this as sounding similar to the widely exploited ‘CitrixBleed’ vulnerability from 2023 and the subsequent ‘CitrixBleed2’ variant disclosed in 2025, both of which were and continue to be actively leveraged in real-world attacks,” cybersecurity company watchTowr said.

“Although Citrix states that the vulnerability was identified internally, it is reasonable to expect that threat actors will attempt to reverse engineer the patch to develop exploit capabilities.”

“Exploitation of CVE-2026-3055 is likely to occur once exploit code becomes public. Therefore, it is crucial that customers running affected Citrix systems remediate this vulnerability as soon as possible; Citrix software has previously seen memory leak vulnerabilities broadly exploited in the wild, including the infamous ‘CitrixBleed’ vulnerability, CVE-2023-4966, in 2023,” Rapid7 added.

In August 2025, CISA flagged CitrixBleed2 as actively exploited and gave federal agencies a single day to secure their systems. In total, the U.S. cybersecurity agency has tagged 21 Citrix vulnerabilities as exploited in the wild, seven of which were used in ransomware attacks.


Source link

Continue Reading

Tech

AI Economy Is a ‘Ponzi Scheme,’ Says AI Doc Director

Published

on

An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you’re even slightly interested in what’s going on with AI, it’s required viewing: The film touches on all aspects of the technology, from how it’s currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user — the point at which machines become autonomous, like Skynet in the Terminator franchise. […]

[Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google’s AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. “Have you seen that guy speak? He’s like a lizard man,” Roher says regarding Zuckerberg. “Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while,” adds [codirector Charlie Tyrell]. Altman, arguably AI’s greatest mascot, is prominently featured in the documentary. But Roher wasn’t buying it. “That guy doesn’t know what genuine means,” he says. “Every single thing he says and does is calculated. He is a machine. He’s like AI, and it’s in the service of growth, growth, growth. You can be disingenuous and media savvy.” […]

How, exactly, is Roher an apocaloptimist? “We are preaching a worldview,” he says, “in a world that’s asking you to either see this as the apocalypse or embrace it with this unbridled optimism.” He and his film are taking a stance that rests between those two poles. “It’s both at the same time. We have to try and embrace a middle ground so this technology doesn’t consume us, so we can stay in the driver’s seat,” says Roher — meaning, it’s up to all of us to chart the course. “You have to speak up,” says Tyrell. “Things like AI should disclose themselves. If your doctor’s office is using an AI bot, you have to say, I don’t like that.” The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. […]

Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge evaluations of these companies — and Roher gets going yet again. “This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we’ve seen,” he says. “I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It’s fucking crazy.” […] “These people are prospectors, and they are going up to the Yukon because it’s the gold rush.”

Source link

Continue Reading

Tech

Amazon and FedEx, together again, this time for e-commerce returns

Published

on

An Amazon Prime delivery van and a FedEx Ground van on a Seattle street. The two companies are expanding their rekindled partnership into returns. (GeekWire File Photo / Todd Bishop)

Amazon and FedEx are expanding their partnership after starting to patch things up last year.

The companies announced Wednesday that more than 1,500 FedEx Office locations nationwide are now accepting Amazon returns as part of a network of more than 10,000 drop-off points across the U.S. where customers can return items without a shipping box, tape, or label.

It’s notable in part because of the history between the two companies. 

FedEx severed its logistics relationship with Amazon in 2019 as the e-commerce giant built out its own logistics network. But the two have started working together again over the past year, with FedEx reportedly helping to fill delivery gaps for Amazon left by UPS, which said last year that it would cut its Amazon package volume by more than half.

Amazon says four out of five U.S. customers now have a drop-off point within five miles of their home. Other locations in the network include Whole Foods Market, The UPS Store, Kohl’s, Staples, and regional partners such as Winn-Dixie, Save Mart, and Goodwill.

Returns have become a competitive battleground in e-commerce logistics because they boost shipping volume, lock in merchant relationships, and generate foot traffic for retail partners. 

UPS acquired Happy Returns in 2023 and offers box-free returns at 5,000 UPS Store locations as part of a broader network. FedEx has been rolling out its own Easy Returns service.

Amazon benefits from the competition, gaining more drop-off density and better economics while also continuing to grow its own in-house network.

To make a return, customers start the process in their Amazon account, choose a nearby location, and receive a QR code. They bring the unpackaged item and QR code to the drop-off point, where it’s scanned and prepared for shipping.

Source link

Continue Reading

Copyright © 2025