

Stephen Thaler’s Legendary AI Copyright Losing Streak Ends With Nowhere Left To Appeal


from the public-domain-ftw dept

We’ve been covering Stephen Thaler’s quixotic quest to get copyright (and patent) protection for works generated entirely by his AI systems for years now. (The copyright case concerned output from his “Creativity Machine”; the parallel patent cases involved his system “DABUS.”) If there’s one thing Thaler has proved beyond all reasonable doubt, it’s that you can be comprehensively, thoroughly, and repeatedly wrong at every level of the American legal system and still keep going. He loses everywhere, every time, at every level. The Copyright Office rejected him. A federal district court rejected him. The DC Circuit rejected him. The Patent Office rejected him. Courts rejected his parallel patent claims. Even the Trump administration, not exactly known for its nuanced intellectual property positions, told the Supreme Court not to bother hearing his appeal.

And now, the Supreme Court has declined to take up the case, putting the final period on what has been one of the most impressive losing streaks in recent IP law history.

Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator.

That was always the fatal flaw with his argument. He wasn’t making the more nuanced claim that a human who uses AI as a tool should get some copyright protection. He was making the maximalist claim: the AI did it all by itself, and it (or rather, he, as the AI’s owner) should get the copyright anyway.

The image in question, “A Recent Entrance to Paradise,” depicts train tracks entering a portal surrounded by green and purple plant-like imagery. According to Thaler, it was created entirely by his AI with no human creative input. Every single institution that looked at this said no.


A federal judge in Washington upheld the office’s decision in Thaler’s case in 2023, writing that human authorship is a “bedrock requirement of copyright.” The U.S. Court of Appeals for the District of Columbia Circuit affirmed the ruling in 2025.

Thaler’s lawyers, for their part, tried to argue that the stakes were too high for the Court to sit this one out:

If the court refused to hear the appeal, Thaler’s lawyers argued, “even if it later overturns the Copyright Office’s test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

That’s rich. The Copyright Office is already working through the genuinely harder questions in cases involving tools like Midjourney—cases where humans actually did have meaningful creative input. Those cases are moving through the system right now. The problem for Thaler is that he chose the worst possible vehicle to force a Supreme Court showdown: a case so maximalist in its claims (the AI did everything, humans did nothing, give us the copyright anyway) that courts could rule against him on the narrowest possible grounds without ever having to engage with the nuanced questions at all. His all-or-nothing bet made this an easy case.

Still, the question of what happens when a human uses AI as a creative tool—rather than letting the machine do everything—isn’t actually as novel or unsettled as many people seem to think.

Copyright law has required human creative choices since at least Burrow-Giles Lithographic Co. v. Sarony all the way back in 1884, a case about whether photographs could be copyrighted at all. And the wonderful Feist Publications v. Rural Telephone Service from 1991 (a case we cite often) hammered the point home by establishing that copyright demands original creative expression.

Consider how this already works with photography. A photographer who frames a shot of a landscape gets copyright protection in the creative choices they made: the composition, the angle, the timing, the lighting. But the landscape itself? No human created that. It gets no copyright. The camera mechanically captured what was in front of it, but the human’s original creative decisions (and only those decisions) are what copyright protects.


AI-generated works should work roughly the same way. If a human’s creative input—through a sufficiently specific and expressive prompt, through selection and arrangement, through iterative creative choices—meaningfully shapes the output, that human contribution can be protected. But the parts that the AI generated autonomously, without human creative direction? Those are “the landscape.” They’re the thing no human authored.

There will certainly be disputes at the margins about exactly how much human input is enough, and where the line sits between “I told the AI to make something cool” and genuine creative direction. But the fundamental framework for handling this already exists. We’ve been here before with every new creative tool, from cameras to Photoshop. The principle has always been the same: copyright protects human creativity, regardless of the tool used to express it.

Thaler chose to fight for the one position that had no support in law (or in common sense). His losing streak is now complete, and there’s nowhere left to appeal. But the legacy of his many, many losses is actually kind of useful: he has, through sheer persistence, generated an incredibly clear and consistent body of authority establishing that purely AI-generated works, with no human creative input, do not get copyright protection.

So, thanks for that, I guess. Oh, and we can now confidently post that “Recent Entrance to Paradise” image, as it, like the monkey selfie before it, is officially in the public domain.


Filed Under: ai, copyright, copyright office, copyrightable subject matter, dabus, human creativity, stephen thaler, supreme court



MLB's robot-assisted strike zone is exposing umpire errors in real time



The technology is designed to reduce strike zone disputes, long the source of baseball’s most heated arguments. Under the new system, each team receives two challenges per game and only loses a challenge if it is incorrect. In practice, this incentive has quickly reshaped game-day strategy – and last Saturday’s…



Scientists Have Made a French Fry Breakthrough


French fries are delicious, but notoriously unhealthy. A research team at the University of Illinois, however, has developed a deceptively straightforward method to keep the satisfying taste and crunch without requiring as much oil.

The cooking method combines traditional frying with microwave heating. Adding the microwave step could reduce the amount of oil used in the process, meaning you would absorb less fat with each bite. The method is detailed in two studies, published in Current Research in Food Science and the Journal of Food Science.

French Fries and Health

Although popular, fried foods contain high levels of fat, which is linked to several health problems, including obesity and hypertension. “Consumers want healthy foods, but at the time of purchase, cravings often prevail,” says Pawan Singh Takhar, author of one of the two studies. “The high oil content adds flavor, but it also contains a lot of energy and calories.”

It is precisely to help consumers make better food choices, without leaving them feeling deprived, that researchers have been trying to work out how to cook healthier french fries: lower in fat, but unchanged in taste and texture.


One of the main difficulties in frying, as the studies explain, is preventing the oil from penetrating the food. In the early stages of frying, the pores of the potato are filled with water, leaving no room for oil.

As cooking continues, however, the water evaporates, creating empty spaces that allow the oil to be drawn in by negative pressure. Much of the frying process takes place under that negative pressure, which essentially increases the tendency of the oil to be sucked into the fries.

A New Wavelength

In the new study, therefore, the researchers tried to figure out how to extend the time in positive pressure and reduce the period under negative pressure. “When we heat something in a traditional oven, the heat transfers from the outside to the inside, but a microwave oven heats from the inside to the outside because the microwaves penetrate everywhere in the material,” Takhar says.

Specifically, microwaves cause water molecules to oscillate, resulting in increased vapor formation and thus shifting the pressure profile toward positive values that prevent the oil from being easily absorbed.


Microwave frying alone, however, would not produce the desired texture. “If only microwaving is used, the food turns out mushy,” says Takhar. In order to achieve crispness, frying and microwaving should be combined.

To achieve the right balance, the researchers carried out an experiment in which they specially designed a microwave fryer, monitoring the temperature, pressure, volume, texture, moisture, and oil content of the fries. “We propose to combine the two methods in the same device. Traditional heating maintains crispness, while microwave heating reduces oil consumption,” the study concludes.



Microsoft releases new AI models to expand further beyond OpenAI


Mustafa Suleyman, CEO of Microsoft AI. (GeekWire File Photo / Kevin Lisota)

Microsoft is expanding its roster of in-house AI models, releasing a new speech-to-text system and making two existing models broadly available to developers for the first time.

The moves by Microsoft AI (MAI) are part of a broader effort by the company to expand its proprietary AI capabilities beyond its partnership with OpenAI, giving Microsoft more control over its own destiny in the competition against Google, Amazon, and others.

On Thursday, Microsoft announced MAI-Transcribe-1, a speech-to-text model that it says is the most accurate currently available. The company also released its existing voice and image generation models, known as MAI-Voice-1 and MAI-Image-2, for broad commercial use.

It’s Microsoft’s first major model release since a March reorganization, announced by CEO Satya Nadella, in which Microsoft AI CEO Mustafa Suleyman shifted away from day-to-day Copilot oversight to focus on frontier model development and superintelligence.

Suleyman told The Verge that the transcription model runs at “half the GPU cost of the other state-of-the-art models.” He told VentureBeat that the model was built by a team of just 10 people, and that Microsoft plans to eventually build a frontier large language model to be “completely independent” if needed.


Microsoft also recently hired former Allen Institute for AI CEO Ali Farhadi and other top AI researchers from the Seattle-based institute to further bolster Suleyman’s team, as GeekWire reported last week.

MAI-Transcribe-1 is designed to handle noisy real-world conditions such as call centers and conference rooms, and Microsoft says it is testing integrations with Copilot and Teams. The company says the model offers the best price-performance of any large cloud provider’s offering, competing directly with OpenAI’s Whisper and Google’s Gemini on the FLEURS benchmark.

In a blog post, Suleyman called the model “not just the most accurate but also lightning fast.”

MAI-Voice-1 generates natural-sounding speech and now lets developers create custom voices from short snippets of sample audio. MAI-Image-2 ranks in the top three on the Arena.ai image generation leaderboard and is rolling out in Bing and PowerPoint.


All three are available on the Microsoft Foundry developer AI platform and MAI Playground.



‘You Guys Look Great’: Artemis Astronauts Share Earth’s Out-of-This-World Views


It’s been more than 50 years since NASA astronaut Harrison Schmitt took the famous Blue Marble photograph, a breathtaking vision of Earth captured aboard the Apollo 17 spacecraft on its way to the moon. Now, as the four-astronaut crew of the Artemis II mission heads toward the moon, more spectacular images are being released.

This stunning photo is perhaps the most reminiscent of the Blue Marble, showing Earth in all its fragile, lovely glory.

“That’s us!” NASA wrote in a post. The post also quoted astronaut Christina Koch as saying of Earth, “You guys look great.”


In a reply to questions on the post, NASA wrote, “Two auroras (top right and bottom left) are visible in this image. Zodiacal light (bottom right) is also visible, as well as airglow from Earth’s atmosphere.”

Another neat photo from the Artemis mission shows the planet neatly bisected, with one side lit up by the sun and the other in darkness.

This image of the Earth, half in shadow, was taken by one of the Artemis II crew out Orion’s window. (Reid Wiseman/NASA)

“You look amazing, you look beautiful,” Victor Glover, Artemis II pilot, said of the views of Earth in a video call with ABC News.

A view of the Earth from NASA’s Orion spacecraft as it orbits above the planet during the Artemis II test flight. (NASA)

Another intriguing image shows part of the spacecraft itself. USA Today noted that “the image appears to show the bottom of Orion’s service module where its main engine and auxiliary thrusters are housed.”

We’re tracking the 10-day Artemis II mission with a regularly updated blog.

Keep an eye on NASA’s image repository to see the latest photos.



Tesla’s Texas factory workforce reportedly shrank 22% in 2025


The total workforce at Tesla’s factory outside Austin, Texas, shrank dramatically last year as the company suffered its second straight year of declining sales, according to a compliance report spotted by the Austin American-Statesman.

Tesla went from employing 21,191 people at the factory in 2024 to 16,506 workers in 2025, a drop of 22%. That’s despite the company’s global workforce growing from 125,665 employees in 2024 to 134,785 employees in 2025, according to filings with the U.S. Securities and Exchange Commission.
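
For those who want to check the math, the reported percentages follow directly from the headcount figures above. A quick sketch of the arithmetic:

```python
# Headcounts from the compliance report and SEC filings cited above.
austin_2024, austin_2025 = 21_191, 16_506
global_2024, global_2025 = 125_665, 134_785

austin_change = (austin_2025 - austin_2024) / austin_2024
global_change = (global_2025 - global_2024) / global_2024

print(f"Austin factory: {austin_change:+.1%}")  # -22.1%, the ~22% drop reported
print(f"Global:         {global_change:+.1%}")  # +7.3% growth over the same period
```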

It’s not clear which teams were most affected by Tesla scaling back its workforce at the plant. But the company has become one of the largest employers in the Austin area since it opened the factory in 2022. CEO Elon Musk also relocated Tesla’s headquarters to the factory in 2021 before it opened. The company has invested more than $6.3 billion in the facility to date, according to the new report.



AirPods Max 2 teardown reveals nothing has changed beyond the H2 chip


Though the AirPods Max 2 offer new features, a teardown of the headphones shows they’re still plagued by the same flaws as the original 2020 model.

Apple’s AirPods Max 2 gained the H2 chip, but not much else.

Apple’s AirPods Max 2 debuted on March 16, with the H2 chip as their core new feature. With it, Apple’s high-end headphones gained capabilities like Adaptive Audio, Conversation Awareness, and gesture controls, among others. Active Noise Cancellation was improved as well.

However, as explained in our review, the AirPods Max 2 are an iterative upgrade that ultimately leaves something to be desired. New features and ANC enhancements aside, Apple effectively delivered more of the same with its AirPods Max 2.



Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk


Meta has paused all its work with the data contracting firm Mercor while it investigates a major security breach that impacted the startup, two sources confirmed to WIRED. The pause is indefinite, the sources said. Other major AI labs are also reevaluating their work with Mercor as they assess the scope of the incident, according to people familiar with the matter.

Mercor is one of a few firms that OpenAI, Anthropic, and other AI labs rely on to generate training data for their models. The company hires massive networks of human contractors to generate bespoke, proprietary datasets for these labs, which are typically kept highly secret as they’re a core ingredient in the recipe to generate valuable AI models that power products like ChatGPT and Claude Code. AI labs are sensitive about this data because it can reveal to competitors—including other AI labs in the US and China—key details about the ways they train AI models. It’s unclear at this time whether the data exposed in Mercor’s breach would meaningfully help a competitor.

While OpenAI has not stopped its current projects with Mercor, it is investigating the startup’s security incident to see how its proprietary training data may have been exposed, a spokesperson for the company confirmed to WIRED. The spokesperson says that the incident in no way affects OpenAI user data, however. Anthropic did not immediately respond to WIRED’s request for comment.

Mercor confirmed the attack in an email to staff on March 31. “There was a recent security incident that affected our systems along with thousands of other organizations worldwide,” the company wrote.


A Mercor employee echoed these points in a message to contractors on Thursday, WIRED has learned. Contractors who were staffed on Meta projects cannot log hours until, and unless, the project resumes, meaning they could functionally be out of work, a source familiar with the matter claims. The company is working to find additional projects for those impacted, according to internal conversations viewed by WIRED.

Mercor contractors were not told exactly why their Meta projects were being paused. In a Slack channel related to the Chordus initiative—a Meta-specific project to teach AI models to use multiple internet sources to verify their responses to user queries—a project lead told staff that Mercor was “currently reassessing the project scope.”

An attacker known as TeamPCP appears to have recently compromised two versions of the AI API tool LiteLLM. The breach exposed companies and services that incorporate LiteLLM and had installed the tainted updates. There could be thousands of victims, including other major AI companies, but the breach at Mercor illustrates the sensitivity of the compromised data.
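
This is the classic software supply-chain pattern: a trusted upstream package ships a poisoned release, and everything that auto-updates inherits the compromise. One standard defense is pinning dependencies to vetted versions rather than pulling whatever is latest. A minimal sketch in Python of checking an installed package against an allowlist (the version numbers here are hypothetical placeholders, not the actual affected or safe LiteLLM releases):

```python
from importlib.metadata import PackageNotFoundError, version

# Releases vetted by your own security review. The version numbers below are
# hypothetical placeholders for illustration, NOT real LiteLLM advisories.
KNOWN_GOOD = {"litellm": {"1.50.0", "1.50.1"}}

def check_allowlisted(package: str) -> bool:
    """Return True if the installed version of `package` is on the allowlist."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        print(f"{package}: not installed")
        return False
    ok = installed in KNOWN_GOOD.get(package, set())
    print(f"{package}=={installed}: {'allowlisted' if ok else 'NOT allowlisted'}")
    return ok

check_allowlisted("litellm")
```

Hash pinning (pip's --require-hashes mode with per-package hashes in a requirements file) goes a step further, rejecting any downloaded artifact whose checksum differs from the one recorded at vetting time.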

Mercor and its competitors—such as Surge, Handshake, Turing, Labelbox, and Scale AI—have developed a reputation for being incredibly secretive about the services they offer to major AI labs. It’s rare to see the CEOs of these firms speaking publicly about the specific work they offer, and they internally use codenames to describe their projects.


Adding to the confusion around the hack, a group going by the well-known name Lapsus$ claimed this week that it had breached Mercor. On a Telegram account and on a BreachForums clone, the actor offered to sell an array of alleged Mercor data, including a 200-plus GB database, nearly 1 TB of source code, and 3 TB of video and other information. But researchers say that many cybercriminal groups now periodically take up the Lapsus$ name, and that Mercor’s confirmation of the LiteLLM connection means the attacker is likely TeamPCP or an actor connected to the group.

TeamPCP appears to have compromised the two LiteLLM updates as part of an even larger supply chain hacking spree in recent months that has been gaining momentum, catapulting TeamPCP to prominence. And while launching data extortion attacks and working with ransomware groups, such as the group known as Vect, TeamPCP has also strayed into political territory, spreading a data wiping worm known as “CanisterWorm” through vulnerable cloud instances with Farsi as their default language or clocks set to Iran’s time zone.

“TeamPCP is definitely financially motivated,” says Allan Liska, an analyst for the security firm Recorded Future who specializes in ransomware. “There might be some geopolitical stuff as well, but it’s hard to determine what’s real and what’s bluster, especially with a group this new.”

Looking at the dark-web posts of the alleged Mercor data, Liska adds, “There is absolutely nothing that connects this to the original Lapsus$.”



Intel’s 270K Plus CPU dominates content creation workloads while challenging expensive AMD chips without breaking the bank for professionals



  • Intel Core Ultra 270K Plus improves Adobe Premiere workflows by 15% over 9700X
  • Rendering in Cinebench and Blender achieves up to 23% faster results
  • 250K Plus outperforms previous-generation AMD CPUs by roughly 35%

Intel’s latest Core Ultra 200S Plus series has drawn attention for delivering performance that is difficult to ignore, especially compared to older Intel models and some similarly priced AMD processors.

In testing by Puget Systems, the 270K Plus and 250K Plus both increase E-core counts, boost clocks, and raise maximum memory speeds, creating a tangible improvement over prior generations.



SpaceX and Blue Origin race to orbit while scientists question the physics


The pitch is seductive in its simplicity: AI needs more power than terrestrial grids can supply, so move the data centres into orbit, where the sun never sets and the electricity is free. SpaceX, Blue Origin, and a growing constellation of startups are now racing to make that vision real. The problem, according to the scientists and engineers who would have to make the physics work, is that the vision skips several chapters of thermodynamics, economics, and orbital mechanics that have not yet been written.

SpaceX filed with the Federal Communications Commission on 30 January for permission to launch up to one million satellites into low Earth orbit, each carrying computing hardware that would collectively form what the company described as a constellation with “unprecedented computing capacity to power advanced artificial intelligence models.” The satellites would operate at altitudes between 500 and 2,000 kilometres, in orbits designed to maximise time in sunlight, and route traffic through SpaceX’s existing Starlink network. SpaceX requested a waiver of the FCC’s standard deployment milestones, which typically require half a constellation to be operational within six years.

Seven weeks later, Blue Origin filed its own application. Project Sunrise proposes 51,600 satellites in sun-synchronous orbits between 500 and 1,800 kilometres, complemented by the previously announced TeraWave constellation of 5,408 satellites providing ultra-high-speed optical backhaul. Where SpaceX’s filing emphasised raw scale, Blue Origin’s emphasised architecture: the system would perform computation in orbit and relay results to the ground through TeraWave’s mesh network.

The startup ecosystem is moving even faster. Starcloud, formerly Lumen Orbit, raised $170 million at a $1.1 billion valuation in March, becoming the fastest unicorn in Y Combinator history just 17 months after completing the programme. The company launched its first satellite carrying an Nvidia H100 GPU in November 2025 and filed with the FCC in February for a constellation of up to 88,000 satellites. Aethero, a defence-focused startup building space-grade computers with Nvidia Orin NX chips wrapped in radiation shielding, raised $8.4 million and is testing hardware on orbit this year.


The commercial logic rests on a genuine problem. Global data centre electricity consumption reached roughly 415 terawatt-hours in 2024 and the International Energy Agency projects it could exceed 1,000 TWh by 2026, with accelerated AI servers driving 30 per cent annual growth. In Virginia alone, data centres consume 26 per cent of total electricity supply. Ireland’s share could reach 32 per cent by year’s end. The grid constraints are real, the permitting delays are real, and the political resistance to building more terrestrial capacity is real.


What is also real, scientists argue, is the physics that makes orbital computing spectacularly difficult at any meaningful scale. The most fundamental challenge is heat. In space, there is no air to carry heat away from processors, only radiative cooling, which requires vast surface areas. Dissipating just one megawatt of thermal energy while keeping electronics at a stable 20 degrees Celsius demands approximately 1,200 square metres of radiator, roughly four tennis courts. A several-hundred-megawatt data centre, the minimum threshold for commercial relevance, would require radiators thousands of times larger than anything ever deployed on the International Space Station.
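
The tennis-court figure falls straight out of the Stefan-Boltzmann law. A back-of-the-envelope sketch, assuming an idealized double-sided panel with high emissivity and ignoring absorbed sunlight and Earth's infrared (both of which would make the real area larger):

```python
# Radiator area needed to reject waste heat by thermal radiation alone.
# Assumptions (ours, not the article's): emissivity 0.95, both faces of a
# flat panel radiate freely to deep space, panel held at 20 C. This is an
# optimistic lower bound.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.95     # typical for spacecraft radiator coatings
T_PANEL_K = 293.15    # 20 degrees Celsius
FACES = 2             # a flat panel radiates from both sides

def radiator_area_m2(power_w: float) -> float:
    """Panel area required to radiate away `power_w` watts of waste heat."""
    flux = FACES * EMISSIVITY * SIGMA * T_PANEL_K ** 4  # W per m^2 of panel
    return power_w / flux

print(f"  1 MW -> {radiator_area_m2(1e6):>11,.0f} m^2")  # ~1,260 m^2, near the quoted 1,200
print(f"300 MW -> {radiator_area_m2(3e8):>11,.0f} m^2")  # ~377,000 m^2 for a commercial-scale plant
```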

Radiation presents the second structural problem. Low Earth orbit exposes unshielded chips to cosmic rays and trapped particles that induce bit flips and permanent circuit damage. Radiation hardening adds 30 to 50 per cent to hardware costs and reduces performance by 20 to 30 per cent. The alternative, triple modular redundancy, means launching three copies of every chip, three times the cooling, three times the electricity, and three times the mass. Starcloud’s approach of flying commercial GPUs with external shielding is an interesting experiment, but no one has demonstrated that it works at scale or over hardware lifetimes measured in years rather than months.

Latency is the third constraint. A million satellites spread across orbital shells from 500 to 2,000 kilometres cannot achieve the tight coupling required for frontier model training, where inter-node communication latencies must remain in the microsecond range. Low Earth orbit introduces minimum latencies of several milliseconds for inter-satellite links and 60 to 190 milliseconds for ground-to-orbit round trips, compared to 10 to 50 milliseconds for terrestrial content delivery networks. That makes orbital infrastructure potentially viable for inference workloads, not for training, which is where the overwhelming majority of AI compute demand currently sits.
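
The speed of light sets a hard floor under those numbers. A small sketch of the pure light-travel times involved (the link distances are illustrative; real paths, routing, and protocol overhead only add to these figures, which is how ground round trips reach the tens-to-hundreds of milliseconds cited above):

```python
# Light-travel latency floors; real systems add routing and protocol overhead.
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_ms(distance_km: float) -> float:
    """Milliseconds for light to cover `distance_km` in vacuum."""
    return distance_km / C_KM_PER_S * 1_000

# An illustrative 1,000 km inter-satellite laser hop:
print(f"inter-satellite hop (1,000 km): {one_way_ms(1_000):.2f} ms one way")  # ~3.34 ms

# Best-case ground round trip, straight up to a 500 km orbit and back:
print(f"ground round trip (500 km):     {2 * one_way_ms(500):.2f} ms")        # ~3.34 ms

# Training interconnects budget on the order of microseconds per exchange,
# so even a single orbital hop is roughly a thousand times too slow.
```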

Then there is cost. IEEE Spectrum estimated that a one-gigawatt orbital data centre would cost upwards of $50 billion, roughly three times the cost of an equivalent terrestrial facility including five years of operation. Google has said that launch costs must fall to under $200 per kilogram before space-based computing begins to make economic sense. SpaceX’s current Starlink economics operate at roughly $1,000 to $2,000 per kilogram. Some analysts argue the true threshold for competing with terrestrial refresh economics is $20 to $30 per kilogram, a figure no credible projection places within the next two decades. The economics look even less favourable when set against the deep-tech funding landscape on the ground, where terrestrial infrastructure projects can draw on established supply chains and proven unit economics.
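
To put those thresholds in perspective, the gap is multiplicative. A quick sketch using the per-kilogram figures quoted above:

```python
# How far launch costs must fall, per the estimates quoted above ($/kg).
current = (1_000, 2_000)       # SpaceX's current Starlink-era economics
google_threshold = 200         # where Google says the concept starts to make sense
refresh_threshold = (20, 30)   # where analysts say it competes with terrestrial refresh

print(f"to Google's threshold: {current[0] / google_threshold:.0f}x - "
      f"{current[1] / google_threshold:.0f}x cheaper")          # 5x - 10x
print(f"to refresh economics:  {current[0] / refresh_threshold[1]:.0f}x - "
      f"{current[1] / refresh_threshold[0]:.0f}x cheaper")      # 33x - 100x
```

In other words, launch costs need to fall five- to ten-fold just to reach plausibility, and by one to two orders of magnitude to compete outright.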


Even OpenAI’s Sam Altman, who explored a multibillion-dollar investment in rocket maker Stoke Space as a potential SpaceX competitor for orbital data centres, has publicly called the concept “ridiculous” for the current decade. Altman told journalists that the rough maths of launch costs relative to terrestrial power costs simply does not work yet, and he pointedly asked how anyone plans to fix a broken GPU in space.

The astronomical community adds a separate objection entirely. The vast majority of the roughly 1,000 public comments on SpaceX’s FCC filing urged the commission not to proceed. If approved, the constellation would place more satellites than visible stars in the sky for large portions of the night throughout the year, further militarising and commercialising an orbital environment that is already straining under the weight of existing megaconstellations.

None of this means orbital data centres will never exist. SpaceX’s Starship, if it achieves its cost targets, could fundamentally change the mass-to-orbit economics that currently make the concept unworkable. Starcloud’s incremental approach of flying small payloads and iterating on radiation performance is the kind of engineering pathway that occasionally produces breakthroughs. And the terrestrial grid constraints driving the interest are not going away.

But the gap between filing an FCC application for a million satellites and actually making orbital computation economically competitive with a warehouse full of GPUs in Iowa is not measured in years. It is measured in physics problems that the current pace of AI infrastructure investment cannot shortcut, no matter how many billionaires are willing to try. The question scientists are asking is not whether space data centres are theoretically possible. It is why, given the magnitude of the unsolved engineering, anyone is treating them as a near-term solution to a problem that requires near-term answers. The sky, it turns out, is not the limit. The radiator is.



Ireland begins digital wallet testing and consultation


The wallet will also be used to verify age for accessing online platforms.

The Irish Government is inviting people to try out the new official ‘Government Digital Wallet’ as the platform enters its testing phase.

Countries including Nigeria, Laos and New Zealand – and the US state of California – are all piloting their own versions of a digital ID platform, as governments across borders try to bolster security and make administration smoother.

The digital wallet makes up a key part of the Government’s Digital Public Services Plan 2030, which aims to use digital technology to make accessing public services easier and more efficient.


It provides identity management that residents should be able to use across the EU to access public and private services. The wallet can be used both offline and online, and will allow users to self-manage how their data is shared.

The ID can help holders obtain a marriage certificate or register for key welfare supports, and they can also obtain digital versions of their birth certificates, driving licences and other official documents. The wallet will also be used to verify age on online platforms, amid debate in the region over banning social media for those under 16.

It is also expected to reduce the need to repeat the same information to different Government departments and make everyday interactions with state administration more seamless.

The EU mandates that all member states must make a digital wallet available to their citizens by the end of 2026. The Irish wallet will be developed to EU digital identity standards, the Government said.


The digital wallet will “make it simpler for people to verify their identity, apply for supports and access entitlements”, said Minister for Public Expenditure, Infrastructure, Public Service Reform and Digitalisation Jack Chambers, TD.

“The wallet is designed so that all personal data is fully protected, and the user stays in control of what information they put in the wallet and choose to share. Only the details needed for a service will be shared, and nothing more.”
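
That “only the details needed” principle is selective disclosure, the core design idea behind EU digital identity wallets. A toy sketch of the concept in Python (an illustration only, with invented names and fields; the real EUDI framework uses cryptographic credential formats such as SD-JWT and ISO mdoc rather than plain dictionaries):

```python
from datetime import date

# A wallet credential holds many attributes; the verifier sees only what it
# asks for. All names and fields here are invented for illustration.
CREDENTIAL = {
    "family_name": "Murphy",
    "birth_date": date(1990, 5, 1),
    "driving_licence_no": "IE-1234567",
}

def present(requested: set[str]) -> dict:
    """Disclose only the requested claims, deriving them where possible."""
    disclosed = {}
    if "age_over_18" in requested:
        today = date.today()
        born = CREDENTIAL["birth_date"]
        had_birthday = (today.month, today.day) >= (born.month, born.day)
        age = today.year - born.year - (not had_birthday)
        disclosed["age_over_18"] = age >= 18  # a boolean, not the birth date itself
    for claim in requested - {"age_over_18"}:
        if claim in CREDENTIAL:
            disclosed[claim] = CREDENTIAL[claim]
    return disclosed

# An online platform verifying age learns a single boolean and nothing else:
print(present({"age_over_18"}))  # {'age_over_18': True}
```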

Minister of State at the Department of Public Expenditure, Infrastructure, Public Service Reform and Digitalisation Frank Feighan, TD said that the wallet will be “a crucial element of the Government’s overall portfolio of digital services”.

He added: “It will be able to facilitate secure age verification capability as set out in Digital Ireland and the implementation of the Online Safety Code, under which designated platforms must have age verification measures in place to help protect, in particular, children and young people from online harm.”



