
Tech

Not Costco, Not Sam's Club: This Woman Swears By A Cheaper Way To Buy Tires


A set of new car tires can easily cost hundreds of dollars. This often inspires drivers to shop around, looking for low-price alternatives to major retailers.




PDFsam 6.0 adds compression tools and PDF 2.0 support


PDFsam Basic is a free, open-source tool for splitting, merging, and organizing PDFs. Version 6.0 adds three compression modes, better support for PDF 2.0 and UTF-8 text, stronger handling for malformed files, and more quality-of-life improvements.



Claude Code’s source code appears to have leaked: here’s what we know


Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public.

A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package, pushed live to the public npm registry earlier this morning.

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers.

For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product.


Market data indicates that Claude Code alone has achieved an annual recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year.

With enterprise adoption accounting for 80% of its revenue, the leak provides competitors—from established giants to nimble rivals like Cursor—a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.

We’ve reached out to Anthropic for an official statement on the leak and will update when we hear back.

The anatomy of agentic memory

The most significant takeaway for competitors lies in how Anthropic solved “context entropy”—the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity.


The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional “store-everything” retrieval.

As analyzed by developers like @himanshustwts, the architecture utilizes a “Self-Healing Memory” system.

At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.

Actual project knowledge is distributed across “topic files” fetched on-demand, while raw transcripts are never fully read back into the context, but merely “grep’d” for specific identifiers.


This “Strict Write Discipline”—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.

For competitors, the “blueprint” is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a “hint,” requiring the model to verify facts against the actual codebase before proceeding.
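The architecture described above can be sketched in a few lines. The Python below is a minimal illustration, not the leaked TypeScript: every name in it (PointerIndexMemory, write_topic, and so on) is invented for the example, and only the ideas — a compact pointer index of roughly 150 characters per line, topic files fetched on demand, grep-only transcript access, and index updates gated on successful writes — come from the reporting.

```python
import re

class PointerIndexMemory:
    """Sketch of a pointer-based memory: a lightweight, always-loaded index
    stores one short line per topic; full bodies are loaded only on demand."""

    MAX_LINE = 150  # keep index entries lightweight, ~150 chars per line

    def __init__(self):
        self.index = {}        # topic -> one-line pointer/summary
        self.topic_files = {}  # stands in for on-disk topic files

    def write_topic(self, topic, body, summary):
        # "Strict write discipline": the index is updated only after the
        # topic write succeeds, so failed attempts never pollute it.
        self.topic_files[topic] = body          # the "file write"
        self.index[topic] = summary[:self.MAX_LINE]

    def context_index(self):
        # Only this compact index is perpetually loaded into context.
        return "\n".join(f"{t}: {s}" for t, s in self.index.items())

    def fetch(self, topic):
        # Actual project knowledge is fetched on demand, never kept resident.
        return self.topic_files.get(topic, "")

    def grep_transcript(self, transcript, identifier):
        # Raw transcripts are never re-read whole, only grep'd for identifiers.
        return [ln for ln in transcript.splitlines() if re.search(identifier, ln)]
```

The point of the sketch is the asymmetry: the always-resident index stays tiny no matter how much the topic files grow, which is exactly what keeps a long-running session's context from filling with stale detail.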

KAIROS and the autonomous daemon

The leak also pulls back the curtain on KAIROS, named for the Ancient Greek concept of acting “at the right time,” a feature flag mentioned over 150 times in the source. KAIROS represents a fundamental shift in user experience: an autonomous daemon mode.

While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream.


In this mode, the agent performs “memory consolidation” while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.
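As a rough illustration of what those three operations might do — the article does not include the autoDream implementation, so the function below is hypothetical — a consolidation pass could look like:

```python
def consolidate(observations):
    """Hypothetical 'autoDream'-style pass: merge duplicate observations,
    drop pairs that directly contradict each other, and promote hedged
    phrasing into plain statements of fact."""
    merged = list(dict.fromkeys(observations))  # merge duplicates, keep order

    # Remove logical contradictions of the simple "X" vs. "not X" form;
    # both sides are dropped, since neither can be trusted.
    stated = set(merged)
    cleaned = [o for o in merged
               if f"not {o}" not in stated
               and not (o.startswith("not ") and o[4:] in stated)]

    # Convert vague insights ("maybe ...", "probably ...") into facts.
    facts = []
    for o in cleaned:
        for hedge in ("maybe ", "probably "):
            if o.startswith(hedge):
                o = o[len(hedge):]
                break
        facts.append(o)
    return facts
```

Even this toy version shows why the pass belongs in idle time: deduplication and contradiction removal touch the whole observation set at once, which is exactly the kind of bulk rewrite you would not want interleaved with live reasoning.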

This background maintenance ensures that when the user returns, the agent’s context is clean and highly relevant.

The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent’s “train of thought” from being corrupted by its own maintenance routines.

Unreleased internal models and performance metrics

The source code provides a rare look at Anthropic’s internal model roadmap and the struggles of frontier development.


The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing.

Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false claims rate in v8, an actual regression compared to the 16.7% rate seen in v4.

Developers also noted an “assertiveness counterweight” designed to prevent the model from becoming too aggressive in its refactors.

For competitors, these metrics are invaluable; they provide a benchmark of the “ceiling” for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.


“Undercover” Claude

Perhaps the most discussed technical detail is the “Undercover Mode.” This feature reveals that Anthropic uses Claude Code for “stealth” contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: “You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.”

While Anthropic may use this for internal “dog-fooding,” it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.

The logic ensures that no model names (like “Tengu” or “Capybara”) or AI attributions leak into public git logs—a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.
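A commit-message gate of that kind is straightforward to sketch. The check below is illustrative only — the blocklisted codenames come from the article, but the function, its name, and the attribution pattern are assumptions, not the leaked logic:

```python
import re

# Internal codenames mentioned in the reporting; the list is illustrative.
INTERNAL_NAMES = ["Tengu", "Capybara", "Fennec", "Numbat"]

# Common AI-attribution trailers that would blow cover in a public git log.
AI_ATTRIBUTION = re.compile(r"co-authored-by:.*claude|generated with.*claude", re.I)

def commit_message_is_clean(message):
    """Return True only if the commit message leaks neither an internal
    codename nor an AI attribution."""
    lowered = message.lower()
    if any(name.lower() in lowered for name in INTERNAL_NAMES):
        return False
    if AI_ATTRIBUTION.search(message):
        return False
    return True
```

Wired into a pre-commit hook, a check like this rejects the message before it ever reaches the public log, which is the only place such a filter can reliably sit.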


The fallout has just begun

The “blueprint” is now out, and it reveals that Claude Code is not just a wrapper around a Large Language Model, but a complex, multi-threaded operating system for software engineering.

Even the hidden “Buddy” system—a Tamagotchi-style terminal pet with stats like CHAOS and SNARK—shows that Anthropic is building “personality” into the product to increase user stickiness.

For the wider AI market, the leak effectively levels the playing field for agentic orchestration.

Competitors can now study Anthropic’s 2,500+ lines of bash validation logic and its tiered memory structures to build “Claude-like” agents with a fraction of the R&D budget.


As the “Capybara” has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.

What Claude Code users and enterprise customers should do now about the alleged leak

While the source code leak itself is a major blow to Anthropic’s intellectual property, it poses a specific, heightened security risk for you as a user.

By exposing the “blueprints” of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts.

Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to “trick” Claude Code into running background commands or exfiltrating data before you ever see a trust prompt.


The most immediate danger, however, is a concurrent, separate supply-chain attack on the axios npm package, which occurred hours before the leak.

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
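One way to automate that search for npm lockfiles is sketched below: it walks a package-lock.json in the v2/v3 "packages" layout and flags the versions named above. yarn.lock and bun.lockb use different formats, so for those a plain text search for the same strings is the simpler route. The function is an illustration, not an official remediation tool:

```python
import json

# Compromised versions and dependency named in the advisory above.
BAD_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}
BAD_DEPENDENCY = "plain-crypto-js"

def scan_lockfile(lockfile_text):
    """Scan a package-lock.json (npm v2/v3 'packages' layout) for the
    compromised axios versions or the plain-crypto-js dependency.
    Returns a list of findings; an empty list means nothing matched."""
    lock = json.loads(lockfile_text)
    findings = []
    for path, meta in lock.get("packages", {}).items():
        # Entry keys look like "node_modules/axios"; the root entry is "".
        name = path.rsplit("node_modules/", 1)[-1]
        if name == "axios" and meta.get("version") in BAD_AXIOS_VERSIONS:
            findings.append(f"axios@{meta['version']} at {path}")
        if name == BAD_DEPENDENCY or BAD_DEPENDENCY in meta.get("dependencies", {}):
            findings.append(f"{BAD_DEPENDENCY} referenced at {path or '(root)'}")
    return findings
```

If the scan returns anything, follow the advice above: treat the host as compromised rather than simply deleting the package.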

To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the Native Installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain.

The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86.


Finally, adopt a zero-trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected the .claude/config.json and any custom hooks.
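A quick pre-flight pass over a cloned repository's config can surface hooks for review. The walker below simply collects anything stored under a "hooks" key wherever it appears; the actual schema of .claude/config.json is not documented here, so treat this as a generic sketch rather than a parser for the real format:

```python
import json

def find_hook_entries(node, path=""):
    """Recursively collect everything stored under a 'hooks' key so it
    can be eyeballed before an agent runs in the repository."""
    found = []
    if isinstance(node, dict):
        for key, value in node.items():
            child_path = f"{path}/{key}"
            if key == "hooks":
                found.append((child_path, value))
            else:
                found.extend(find_hook_entries(value, child_path))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            found.extend(find_hook_entries(item, f"{path}[{i}]"))
    return found

def review_config(config_text):
    """Parse a JSON config and return its hook entries for manual review."""
    return find_hook_entries(json.loads(config_text))
```

Anything the walker returns is a command the agent may run on your behalf, which is precisely what you want to read before the first trust prompt, not after.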

As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent’s internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.



Oakcastle MP300 review: the super-cheap MP3 player that can



Oakcastle MP300: Two-minute review

Okay, I’ll fess up: this Oakcastle MP300 review wasn’t meant to take a month. I thought this super-cheap MP3 player would be a quick in-and-out style of review where I’d listen to a few tunes and take it on a trip, but it ended up being a really useful addition to my audio set-up. Good for it, not so good for my deadlines.

This is the kind of budget music player that a serious music fan would probably ignore — does anyone other than wallet-friendly Chinese brands make this kind of tech? Apparently yes, they do actually, but if I can humbly request that we stop that train of thought right now: this isn’t any bargain bin buy.



AirPods Max 2 review: Familiar features & design, but needs more


AirPods Max 2 finally got an actual update. They’re still excellent, but the added features aren’t really anything new.

AirPods Max 2 review: What’s old is new again

Apple half-heartedly updated the AirPods Max in September of 2024. It was such a meager update that it removed a prior feature — wired lossless — and didn’t even get a new name.
Thankfully, Apple at least brought back wired lossless audio via a software update. That update delivered nothing else, and it arrived months later just to restore a feature the Lightning version already had.
Continue Reading on AppleInsider | Discuss on our Forums


Google is now letting users in the US change their Gmail address


Google said on Tuesday that it is now rolling out a way for users in the U.S. to change their Gmail address without starting over or losing access to their data.

Users who have access to this feature can go to their Google Account settings and navigate to Personal info > Email > Google Account email to see a “Change Google Account email” button. Tap the button to start the process of changing your username.

Users will be able to change their username only once every 12 months. Plus, they won’t be able to delete their new email address for that period of time.

The company said users’ old emails will be preserved, and the old email address will serve as an alternate address for the account. Users will be able to sign in to Google services using both the old and the new addresses.


Google had already been rolling out this change in some Hindi-speaking territories, as noted by 9to5Google, which spotted the Hindi support page describing the process of changing the username.

The company’s support page says the feature is rolling out gradually, and users might not immediately have access to it.


Ollama is supercharged by MLX's unified memory use on Apple Silicon


Machine learning researchers using Ollama will enjoy a speed boost to LLM processing, as the open-source tool now uses MLX on Apple Silicon to fully take advantage of unified memory.

Ollama has been boosted by MLX on Apple Silicon

Anyone working with large language models (LLMs) wants results as quickly as possible. There are techniques for this, such as running multiple Macs in a cluster to increase the processing power on hand, but a method made by Apple also provides an extra bit of assistance.
This has been undertaken by the developers working on the open-source model management and execution tool Ollama. In a March 30 update, the team announced that it is previewing a version of the tool for Apple Silicon that takes advantage of MLX.
Continue Reading on AppleInsider | Discuss on our Forums


5 Classic Muscle Cars That Make The Pontiac GTO Look Slow






If you were to say the term American muscle car, a Pontiac GTO will certainly spring to a lot of people’s minds, and for good reason. Originally a trim level of the 1964 Pontiac Tempest, the GTO nameplate, which stands for Gran Turismo Omologato (Grand Touring Homologation in Italian), became synonymous with big power in a modest package. Arguably, it started the whole muscle car trend, debuting before giants like the Mustang, Charger, and more. It had the muscle to back it up as well, with later examples boasting either a 400 or 455 cubic inch engine in top trim, with various options such as the famous Ram Air intake, characterized by its hood scoop.

Power figures are impressive for the time, boasting 360 hp and 500 lb-ft torque with the 455 big block, or 370 hp and 445 lb-ft torque with the Ram Air 400 in 1970. But how fast was it, really, in comparison to its peers? It’s hard to say in pure mathematical terms because of the variables; different magazines and journals list varying times, ranging from 14.6 at 99.6 mph to 13.6 at 104.5 mph with the 400 Ram Air and manual, the fastest configuration. The 455 was slower still, dropping down to 15 seconds.

Quite a few cars could certainly hang with the GTO, and more still could exceed it. For this article, we’ll take a look at the original GTO’s fastest year of 1970 and measure it against all muscle cars built up to that point, so nothing post-1970, and no special models like the Super Stock Hursts or Yenkos — these are common production cars only. Let’s kick it off.


1970 Dodge Challenger R/T 440 Six Pack: 13.6 @ 105 mph

Our opening car already matches the GTO’s best recorded time, and beats the 455 by over a second at the line, a massive length in drag racing terms. 1970 was the debut year of the Dodge Challenger. Made famous by its starring role in the hit movie “Vanishing Point,” the 1970 Dodge Challenger, in this case a 440 Six Pack-equipped R/T trim, is one of the most iconic muscle cars ever made, though its status is somewhat deceptive; Challengers are actually pony cars.


Pony cars are smaller vehicles, in this case built on the Chrysler E-body, a crucial point when talking about power-to-weight ratio. This 1970 Dodge Challenger R/T houses the same engine as the midsize and full-size muscle cars, but the ’69 Charger R/T 440 weighs 3,900 pounds, whereas the ’70 Challenger comes in at 3,395 pounds. With less weight and a smaller profile to move through the air, the Challenger will naturally be the faster of the two body styles, and certainly as fast or faster than the GTO.

The 440 Six Pack does all the heavy lifting here, of course, boasting 375 hp and 480 lb-ft torque. Much like the 455, these were large, powerful engines designed for cruising; Winnebago motorhomes used these engines, for example, albeit with different accessories and tunes. When you take that engine, give it three carbs and some performance upgrades, and shove it into a car the size of a Challenger, it’s no wonder they exceed the GTO’s figures.


1970 Ford Mustang BOSS 429: 13.6 @ 114 mph

Here’s another car with a bit of a spotty drag racing record; Motor Trend actually tested their own ’69 Boss 429 and got a blistering 12.3-second quarter mile time at 112 mph. For these purposes, let’s use the worst-case scenario — a 1970 model year, same engine, running a 13.6 at 114. And that shouldn’t be all too surprising, because here we have an example of another smaller car with a massive engine shoved under the hood.

Originally, the Mustang didn’t even have a big block at all; the Mustang is one of the progenitors of the term pony car — smaller, more nimble cars with small block V8s like the 289 Ford or 350 Chevy in the Camaro. That changed in 1967, when Ford introduced the 390 option for the Mustang. The company continuously experimented with the design over the next couple of years, and while the 1970 car shares the same basic architecture as the original, their bodies are very different — as are their engines.

The 429 cubic-inch big block is, ostensibly, a racing engine. In fact, the Boss 429 itself was designed to compete in NASCAR. A little-known fact is that the 429 actually uses a hemispherical combustion chamber, like the legendary 426 HEMI engine from Mopar; this configuration allows for extremely efficient combustion processes, especially at rev ranges expected in racing, making this engine particularly well-suited to high-speed runs. It’s believed that Ford underrated the engine at 375 hp and 450 lb-ft torque, which — coupled with the Mustang’s slim profile — made for an extremely potent muscle car.


1970 Buick GS / GSX Stage 1: 13.38 @ 105.5 mph

With 360 hp and a whopping 510 lb-ft torque, the 1970 Buick GS Stage 1 rips up a quarter-mile track in 13.38 seconds, according to Motor Trend’s January 1970 issue. This one’s a bit contested, however; the same car was also tested over at Hot Rod Magazine in November 1969, reaching the finish line after 14.40 seconds at 96 mph, albeit with the automatic. Since the manual is faster for both the GTO and the GS Stage 1, we’ll use those times instead for consistency.

Most people probably don’t say Buick and high-performance in the same breath anymore, but that wasn’t true in 1970. In fact, the GS Stage 1 was one of the fastest muscle cars on the market. Unlike the previous entries, the GS and the special-edition GSX were midsize cars, built on the same A-body platform as the GTO, Chevelle, Olds 4-4-2, and so on. The fastest Olds 4-4-2, a 1966 model with the W-30 and a manual, accomplished a 13.8-second time, making it about on par with the GTO. This makes the Buick GS the second-fastest GM A-body car in the quarter-mile run.


The engine used by the GS Stage 1 was the 455 cubic inch (7.4-liter) big block, among the most powerful Buick engines ever produced, and it was also Buick’s biggest ever V8 fitted to a production car. While it doesn’t have the same power rating as some others on this list, that engine more than makes up for it in raw torque, especially with the close-ratio Muncie 4-speed it was often paired with.


1970 Chevrolet Chevelle SS 454: 13.12 @ 107.01 mph

Arguably the first true mid-size car on this list, the 1970 Chevrolet Chevelle SS 454 is yet another iconic muscle car, wearing its classic Le Mans racing stripes and SS badging. Moreover, the LS6 engine option code bumped up power to 450 horsepower and 500 lb-ft torque, more power than anything else on this list, at least in terms of factory ratings. This produced rapid times frequently teasing the low-13 second mark, with Hot Rod attaining a respectable 13.44 @ 108.17 mph in their best run, for instance.

The 1970 Chevelle SS came in several different variants, each with their own power and top speed figures, ranging from the entry-level L34-code 396 ci unit with 350 hp, up to the infamous LS6. LS6-powered Chevelles are sometimes referred to today as the king of muscle cars, directly competing with the likes of the infamous 426 HEMI, the 428 Cobra Jet, and more. Much like the 429, the LS6 was a bespoke high-performance engine, sporting an 11.25:1 compression ratio, aggressive solid-lifter camshaft, aluminum pistons, and more, topped off with a thirsty 800 cfm (cubic feet per minute) Holley carburetor. All that runs through a Muncie M-22 Rock Crusher transmission.

In short, the LS6-powered Chevelle is the 1970 equivalent of a supercar today, though it’s decidedly less refined than one. According to Motor Trend, the transmission is noisy and unrefined, the engine unhappy on unleaded gasoline due to its high compression ratio, and it’s almost impossible to drive hard without spinning tires if you’re running regular street rubber. It’s decidedly specialized for one purpose — going fast, and it does that very well, indeed.


1970 Plymouth Barracuda 426 Hemi: 13.10 @ 107.1 mph

It should come as no surprise that the top spot is secured by a Hemi, an engine that needs no introduction to drag racing enthusiasts. In truth, the infamous Elephant Block could likely occupy several spots on this list, but the fastest among them, at least according to Car Craft magazine, is the 1970 Plymouth Hemi ‘Cuda. Much like Ford’s 429, the 426 Street Hemi is widely rumored to have carried a significantly underreported horsepower rating throughout its production run — an impressive 425 hp and 490 lb-ft torque, so says Chrysler.

This was a massive, racing-oriented engine that just barely fit in a lot of these cars; getting Hemi heads on the block required a lot of real estate, one reason why you don’t see them too often. The option itself cost an eye-watering $900, or over $7,500 today — you basically had to buy a third of the car over again at the dealership. But what you get is, for all intents and purposes, the closest thing to a factory-built racecar without crossing the line into specialist vehicles. The E-body Barracuda was built with this in mind, being an early halo car alongside its sister, the Dodge Challenger.

To put it into perspective, the already (supposedly) underrated 426 Hemi can launch the infamous Hurst Hemi ’68 Barracuda deep into 10-second times at over 120 mph. That same engine, albeit tuned for street use, propels the 1970 Hemi ‘Cuda over a second and a half faster than the GTO down the strip. With its light weight and massive performance, it’s simply no contest for the Pontiac at this stage.


Quadratic Gravity Theory Reshapes Quantum View of Big Bang


Researchers at the University of Waterloo say a new “quadratic quantum gravity” framework could explain the universe’s rapid early expansion without adding extra ingredients to Einstein’s theory by hand. The idea is especially notable because it makes testable predictions, including a minimum level of primordial gravitational waves that future experiments may be able to detect.

“Even though this model deals with incredibly high energies, it leads to clear predictions that today’s experiments can actually look for,” said Dr. Niayesh Afshordi, professor of physics and astronomy at the University of Waterloo and Perimeter Institute (PI). “That direct link between quantum gravity and real data is rare and exciting.”

Phys.org reports: The research team found that the Big Bang’s rapid early expansion can emerge naturally from this simple, consistent theory of quantum gravity, without adding any extra ingredients. This early burst of expansion, often called inflation, is a central idea in modern cosmology because it explains why the universe looks the way it does today.

Their model also predicts a minimum amount of primordial gravitational waves, which are tiny ripples in spacetime geometry created in the first moments after the Big Bang. These signals may be detectable in upcoming experiments, offering a rare chance to test ideas about the universe’s quantum origins.

[…] The team plans to refine their predictions for upcoming experiments to explore how their framework connects to particle physics and other puzzles about the early universe. Their long-term goal is to strengthen the bridge between quantum gravity and observational cosmology. The research has been published in the journal Physical Review Letters.


Facial Recognition Is Spreading Everywhere


Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.

Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives. There are three possible outcomes.

Three Possible Outcomes

a) The software identifies the suspect, since the two images are of the same person. Success!

b) The software matches another person in the footage with the suspect’s probe image. A false positive, coupled with sloppy verification, could put the wrong person behind bars and let the real criminal escape justice.

c) The software fails to find a match at all. The suspect may be evading cameras, but if the cameras merely have low-light or bad-angle images, this creates a false negative. This type of error might let a suspect off and raise the cost of the manhunt.

In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million.

In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect.


Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others.

Less clear photographs are harder for FRT to process.

What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky.

Facial Recognition Gone Wrong

THE NEGATIVES OF FALSE POSITIVES

2020: Robert Williams’s wrongful arrest led to his detention. The ensuing settlement requires Detroit police to enact policies that recognize FRT’s limits.

ALGORITHMIC BIAS


2023: A court bans Rite Aid from using facial recognition for five years over its use of a racially biased algorithm.

TOO FAST, TOO FURIOUS?

2026: U.S. immigration agents misidentify a woman they’d detained as two different women.

Consider a busy trade fair using FRT to check attendees against a database, or gallery, of images of the 10,000 registrants, for example. Even at 99.9 percent accuracy you’ll get about a dozen false positives or negatives, which may be worth the trade-off to the fair organizers. But if police start using something like that across a city of 1 million people, the number of potential victims of mistaken identity rises, as do the stakes.
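The arithmetic behind those figures is simple expected-value math — the number of people checked times the per-check error rate — which is easy to verify directly. The function below is just that back-of-envelope calculation:

```python
def expected_errors(people_checked, accuracy):
    """Expected number of misidentifications when each person is checked
    once at the given per-check accuracy."""
    return people_checked * (1 - accuracy)

# A 10,000-registrant trade fair at 99.9 percent accuracy: about 10 mistakes.
fair_errors = expected_errors(10_000, 0.999)

# The same error rate across a city of 1,000,000: about 1,000 mistakes.
city_errors = expected_errors(1_000_000, 0.999)
```

The accuracy stays fixed while the population grows, so the count of mistaken identities scales linearly with deployment size — which is the whole point of the comparison.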

What if we ask FRT to tell us if the government has ever recorded and stored an image of a given person? That’s what U.S. Immigration and Customs Enforcement agents have done since June 2025, using the Mobile Fortify app. The agency conducted more than 100,000 FRT searches in the first six months. The size of the potential gallery is at least 1.2 billion images.

At that size, assuming even best-case images, the system is likely to return around 1 million false matches, but at a rate at least 10 times as high for darker-skinned people, depending on the subgroup.


Responsible use of this powerful technology would involve independent identity checks, multiple sources of data, and a clear understanding of the error thresholds, says computer scientist Erik Learned-Miller of the University of Massachusetts Amherst: “The care we take in deploying such systems should be proportional to the stakes.”


How to back up your iPhone & iPad to your Mac before something goes wrong


Backing up your iPhone or iPad to your Mac is the fastest and most reliable way to protect your data, and is especially useful before updates, repairs, or device replacement.

How to back up your iPhone and iPad to Mac

Backing up your iPhone or iPad to your Mac remains the fastest and most complete way to protect your data before updates, repairs, or hardware changes. Apple built local backup support directly into macOS through Finder, allowing full-device backups without relying on an internet connection.
Local backups are like full system snapshots, saving your device settings, messages, app data, and media stored on your device. Backing up to iCloud does save your data, but restoring from a Mac is faster than restoring from iCloud because the data transfers directly over USB.
Continue Reading on AppleInsider | Discuss on our Forums


Copyright © 2025