Connect with us

Tech

The ancient IRC protocol is back in action, thanks to SSHStalker’s Linux botnet exploiting cloud servers for profit

Published

on


  • SSHStalker uses IRC channels and multiple bots to control infected Linux hosts
  • Automated SSH brute-forcing rapidly spreads the botnet through cloud server infrastructures
  • Compilers are downloaded locally to build payloads for reliable cross-distribution execution

SSHStalker, a recently discovered Linux botnet, is apparently relying on the classic IRC (Internet Relay Chat) protocol to manage its operations.

Created in 1988, IRCwas once the dominant instant messaging system for technical communities due to its simplicity, low bandwidth needs, and cross-platform compatibility.

Advertisement

Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Tech

Nintendo sues the US government for a refund on tariffs

Published

on

Nintendo filed a lawsuit against the U.S. government on Friday over its extraction of tariffs from global businesses. The gaming giant is seeking a refund for any duties it paid due to President Donald Trump’s executive orders that invoke the International Emergency Economic Powers Act (IEEPA).

This lawsuit, filed in the U.S. Court of International Trade, comes after a Supreme Court decision struck down the tariffs that the president imposed under IEEPA, arguing that he exceeded his authority. More than a thousand other companies have already sued for refunds on the tariffs that they pay; according to Nintendo’s complaint, viewed by TechCrunch, these tariffs have resulted in the collection of over $200 billion on imports in total.

“We can confirm that we have filed a request,” Nintendo told TechCrunch in a statement. “We have nothing else to share on the topic.”

In response to the Supreme Court’s decision — which he called “extraordinarily anti-American” — President Trump raised tariffs from 10% to 15%. Now, 24 states have sued to argue that the president has once again overstepped the limits of his power by making this change.

Advertisement

Source link

Continue Reading

Tech

EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

Published

on

EC-Council

With $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling, four new AI certifications and Certified CISO v4 help close the gap between AI adoption and workforce readiness

EC-Council, creator of the world-renowned Certified Ethical Hacker (CEH) credential and a global leader in applied cybersecurity education, today launched its Enterprise AI Credential Suite, with four new role-based AI certifications debuting alongside Certified CISO v4, an overhauled executive cyber leadership program.

The dual launch is the largest single expansion of EC-Council’s portfolio in its 25-year history. It addresses a structural gap no single tool, platform, or policy can solve alone: AI is scaling faster than the workforce trained to run, secure, and govern it.

The launch aligns with U.S. priorities on workforce development and applied AI education outlined in Executive Order 14179, the July 2025 AI Action Plan’s workforce development pillar, and Executive Orders 14277 and 14278, which emphasize expanding AI education pathways and building job-relevant skills across professional and skilled-trade roles, at a time when organizations are moving AI from pilot projects into everyday operations and decision-making.

Advertisement

That urgency is visible in both economic exposure and workforce capacity. IDC estimates that unmanaged AI risk could reach $5.5 trillion globally, while Bain & Company projects a 700,000-person AI and cybersecurity reskilling gap in the United States.

The International Monetary Fund (IMF) and the World Economic Forum (WEF) have also pointed to workforce readiness, rather than access to technology, as a primary constraint on AI-driven productivity and growth, especially as adoption accelerates across sectors.

Security pressure is rising in parallel with adoption. Eighty-seven percent of organizations report AI-driven attacks, and generative AI traffic has surged by 890 percent, expanding attack surfaces that many teams are still learning how to defend, while AI capability remains concentrated, with 67 percent of AI talent located in just 15 U.S. cities and women representing only 28 percent of the AI workforce, highlighting persistent access and participation gaps as demand increases.

AI is moving from experimentation to infrastructure, and the workforce has to move with it,” said Jay Bavisi, Group President, EC-Council. “These programs are built to give professionals practical capability across adoption, security, and governance, so organizations can scale AI with confidence and clear accountability.

Advertisement

Role-Aligned Certifications

The Enterprise AI Credential Suite is structured to mirror how AI capability is developed in practice. Artificial Intelligence Essentials (AIE) serves as the baseline, building practical AI fluency and responsible usage across roles, and it is supported by EC-Council’s proprietary Adopt. Defend. Govern. (ADG) framework, which defines how AI should be operationalized at scale in real environments.

Adopt: Prepare teams to deploy AI deliberately, with readiness and safeguards

Defend: Secure AI systems against threats such as prompt injection, data poisoning, model exploitation, and AI supply-chain compromise

Govern: Embed accountability, oversight, and risk management into AI systems from the outset

Advertisement

Within this structure, the four new certifications align directly to specific workforce needs across the AI lifecycle.

  • Artificial Intelligence Essentials (AIE) builds foundational AI literacy.
  • Certified AI Program Manager (CAIPM) equips to translate AI strategy into execution, aligning teams, governance, and delivery to drive measurable ROI and enterprise-scale intelligence.
  • Certified Offensive AI Security Professional (COASP) builds elite capabilities to test vulnerabilities in LLMs, simulate exploits, and secure AI infrastructure hardening enterprises against emerging threats.
  • Certified Responsible AI Governance & Ethics (CRAGE) credential focuses on Responsible AI, Governance and Ethics at enterprise scale with NIST/ISO compliance.

Alongside the new AI certifications, Certified CISO v4 updates executive cyber leadership education for AI-driven risk environments, strengthening leadership readiness as intelligent systems become part of core business operations and security decision-making.

Security leaders are now accountable for systems that learn, adapt, and influence outcomes at speed,” Bavisi added. “Certified CISO v4 prepares leaders to manage AI-driven risk with clarity, strengthen governance, and make informed decisions when responsibility is on the line.

The portfolio also builds on EC-Council’s long-standing work with government and defense organizations, including its existing DoD 8140 baseline certification recognition, as AI security and workforce readiness take on greater national importance.

To explore the full range of training and certification opportunities, visit the EC-Council AI Courses library.

Advertisement

About EC-Council:

EC-Council is the creator of the Certified Ethical Hacker (CEH) program and a leader in cybersecurity education. Founded in 2001, EC-Council’s mission is to provide high-quality training and certifications for cybersecurity professionals to keep organizations safe from cyber threats. EC-Council offers over 200 certifications and degrees in various cybersecurity domains, including forensics, security analysis, threat intelligence, and information security.

An ISO/IEC 17024 accredited organization, EC-Council has certified over 350,000 professionals worldwide, with clients ranging from government agencies to Fortune 100 companies. EC-Council is the gold standard in cybersecurity certification, trusted by the U.S. Department of Defense, the Army, Navy, Air Force, and leading global corporations.

For more information, visit: www.eccouncil.org

Sponsored and written by EC-Council.

Advertisement

Source link

Continue Reading

Tech

This British Car Combined Two Aircraft Engines For Nearly 1000 HP In The ’20s

Published

on





Carl Benz patented his squat, three-wheeled Benz Patent Motor Car (Model no. 1) in 1886, and it didn’t take long for humanity’s obsession with automobiles to take hold. In 40 short years, we went from a German one-cylinder four-stroke engine producing just 0.75 hp to a four-wheeled, British-made bullet powered by two 22.4-liter V12 Matabele airplane engines each producing 435 hp. The combo isn’t a big deal now, admittedly, with half a dozen production cars packing 1,000 horses or more, but it was certainly impressive for the 1920s. 

This behemoth, known as the Sunbeam 1,000 HP, was nearly 24 feet long and weighed 4 tons, yet it was the first car to go faster than 200 mph — exactly what it was made to do. Henry Segrave was at the wheel of the Sunbeam, sometimes referred to as “The Slug” or “Mystery,” when he broke that 200-mph barrier on March 29, 1927. Seagrave and The Slug achieved that milestone on the hard white sands of Daytona Beach, Florida, which had seen 30 years of record-breaking speed trials since racing began there in 1902, including Segrave’s successful attempt. 

Advertisement

The Sunbeam’s achievement came about 20 years after the first-ever 100-mph run, which took place on July 21, 1904. That year, Frenchman Louis Emile Rigolly hit 103.561 mph on a beach in Ostend, Belgium.

Advertisement

This was not your ordinary Slug

Sunbeam driver Henry Segrave had previously set a Land Speed Record almost exactly a year earlier, hitting 152.33 mph while driving a 4.0-liter Sunbeam Tiger, so he was very familiar with the need for speed. This new, more powerful Sunbeam 1000 was the brainchild of chief engineer and designer Louis Coatalen, who decided to place the two Matabele airplane engines in line. 

Both of the massive V12s had double overhead camshafts and 48 valves. The one sitting up front was mated to a custom-built three-speed gearbox, while the rear engine was connected to the back wheels via chain sprockets. Segrave was nestled tightly in between the beast’s metallic hearts, which had a wild past all of their own.

Both Matabele engines were built in 1918 and destined for World War I airplanes, but were never used. Two years later, they (along with two other engines) were dropped into a 39-foot single-step hydroplane (the Maple Leaf V) and used for powerboat racing. The following year, they were transferred to the 34-foot Maple Leaf VII and used again, although the boat sank on its first run. Both engines were recovered and sent back to the U.K., where they sat around until being used in the Sunbeam.

Advertisement

Ironically, the slug-like body of the Sunbeam actually resembled an upside-down boat in many ways, an intentional decision to improve aerodynamics. Additionally, it had a flat underbelly, with the idea that it would help the car slide along the beach if it lost a wheel, thus avoiding a major catastrophe.

Advertisement

The British beast comes back to life

Louis Coatalen developed the engine placement and internal workings, while Captain JA “Jack” Irving built the Mystery using a chassis from John Thompson Motor Pressings, steel forgings from Vickers, a set of special Hartford shock absorbers, and a braking system from Dewandre Vacuum. When driver Henry Segrave heard the beast roar for the first time, the car reportedly shook the Sunbeam Moorfield facility in Wolverhampton so hard that it convinced Segrave it couldn’t be driven. But drive the monster he did, achieving an average speed of 203.79 mph at Daytona Beach.

Records are made to be broken, and this one fell less than a year later when Malcolm Campbell drove another Sunbeam, known as the Blue Bird, to 206.956 mph at Daytona on February 19, 1928, becoming one of the many cars to hold the title of fastest in the world over the years. With its glory faded, the Sunbeam 1000 was parked and nearly forgotten for a time. Once rediscovered, it bounced around until it was eventually purchased by the Montagu Motor Museum in the United Kingdom (the forerunner to the National Motor Museum) in 1970.

A total refurbishment began in 2024, aiming to finish by March 2027, so it could be sent to Daytona Beach for the 100th anniversary of its land speed record. The fully rebuilt rear engine was fired up for the first time in 90 years in front of onlookers at the National Motor Museum in September 2025. Only time will tell whether the team behind the restoration can cross the finish line in Daytona in 2027.

Advertisement



Source link

Advertisement
Continue Reading

Tech

CISA warns feds to patch iOS flaws exploited in crypto-theft attacks

Published

on

CISA

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered federal agencies to patch three iOS security flaws targeted in cyberespionage and crypto-theft attacks using the Coruna exploit kit.

As Google Threat Intelligence Group (GTIG) researchers revealed earlier this week, Coruna uses multiple exploit chains targeting 23 iOS vulnerabilities, many of which were deployed in zero-day attacks.

However, the exploits will not work on recent versions of iOS and will be blocked if the target is using private browsing or has enabled Apple’s Lockdown Mode anti-spyware protection feature.

Coruna provides threat actors with Pointer Authentication Code (PAC) bypass, sandbox escape, and PPL (Page Protection Layer) bypass capabilities, and enables them to gain WebKit remote code execution and escalate permissions to Kernel privileges on vulnerable devices.

Advertisement

GTIG observed the exploit kit being used by multiple threat actors last year, including a surveillance vendor customer, a suspected Russian state-backed hacking group (UNC6353), and a financially motivated Chinese threat actor (UNC6691).

The latter deployed it on fake gambling and crypto websites and used it to deliver a malware payload designed to steal infected victims’ cryptocurrency wallets.

Coruna attacks timeline
Coruna attacks timeline (GTIG)

Mobile security firm iVerify also said that Coruna is an example of “sophisticated spyware-grade capabilities” that migrated “from commercial surveillance vendors into the hands of nation-state actors and, ultimately, mass-scale criminal operations.”

On Thursday, CISA added three of the 23 Coruna vulnerabilities to its catalog of Known Exploited Vulnerabilities, ordering Federal Civilian Executive Branch (FCEB) agencies to secure their devices by March 26, as mandated by the Binding Operational Directive (BOD) 22-01.

“Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable,” CISA warned.

Advertisement

“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise.”

Although BOD 22-01 applies only to federal agencies, CISA urged all organizations, including private sector companies, to prioritize patching these flaws to secure their devices against attacks as soon as possible.

Malware is getting smarter. The Red Report 2026 reveals how new threats use math to detect sandboxes and hide in plain sight.

Download our analysis of 1.1 million malicious samples to uncover the top 10 techniques and see if your security stack is blinded.

Source link

Advertisement
Continue Reading

Tech

Oracle to cut ‘thousands’ of jobs, reports Bloomberg

Published

on

Oracle employs around 162,000 globally, with 900 workers situated in Ireland.

Oracle will cut thousands of jobs to funnel funds into its major AI data centre expansion efforts, according to Bloomberg.

The cuts will affect divisions across the company and may come as soon as this month, the publication said. Some of the cuts might target jobs that Oracle needs less due to AI.

Latest data shows that Oracle employs around 162,000 globally, with around 900 workers situated in Ireland.

Advertisement

Last September, the company revealed plans for its largest-ever restructuring, set to cost up to $1.6bn. At the time, Oracle’s Irish arm sent a collective redundancy notification to the Government.

SiliconRepublic.com has contacted Oracle for details on the latest layoffs and its effects in Ireland.

Oracle is one of the world’s largest cloud operators, having cemented itself as a leading AI infrastructure provider tapped by major cloud users, such as OpenAI.

OpenAI has promised Oracle $300bn for its compute power, but, as TechCrunch highlights, much of the promised spending is speculative and highly dependent on the companies’ growth.

Advertisement

Plus, data compiled by Bloomberg shows that Oracle will have negative cash flow on account of the data centre buildout until 2030. The massive AI expenditures have turned Oracle’s cash flow negative last year for the first time since 1992, noted the publication.

Early last month, Oracle said it plans to raise up to $50bn through debt and equity sales to build additional cloud capacity.

The Larry Ellison-led company is also pouring money into OpenAI as part of the major $500bn AI infrastructure build-out called Stargate, while a close relationship with the US government helped it towards a stake of 15pc of the new TikTok USDS entity, as well as control over the platform’s algorithm.

Oracle enjoyed strong investor support in the initial years of the AI boom, which boosted the company stock 61pc in 2024 and 20pc in 2025. The support briefly made Ellison the world’s richest man in September last year.

Advertisement

However, investors have been wary of massive AI spending in recent months, sending Oracle shares down 54pc since September.

Several Big Tech firms have laid off employees over the past year, including Microsoft, which axed thousands, Block, which is cutting around 40pc of its workforce, and Amazon, which has cut more than 30,000 jobs since October.

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.

Larry Ellison, 2010. Image: Ilan Costica, via Wikimedia Commons (CC BY-SA 3.0)

Advertisement

Source link

Continue Reading

Tech

GoPro Lit Hero Review: a tiny action cam, with too many compromises

Published

on

Why you can trust TechRadar


We spend hours testing every product or service we review, so you can be sure you’re buying the best. Find out more about how we test.

GoPro Lit Hero: two-minute review

GoPro is a name that’s synonymous with the action cam market, with the brand having largely been responsible for the explosion in popularity of such cameras over the past two decades. The brand has come a long way since its first Hero camera, a 35mm film-compatible wearable model released in 2004.

Advertisement

Source link

Continue Reading

Tech

LangChain’s CEO argues that better models alone won’t get your AI agent to production

Published

on

As models get smarter and more capable, the “harnesses” around them must also evolve.

This “harness engineering” is an extension of context engineering, says LangChain co-founder and CEO Harrison Chase in a new VentureBeat Beyond the Pilot podcast episode. Whereas traditional AI harnesses have tended to constrain models from running in loops and calling tools, harnesses specifically built for AI agents allow them to interact more independently and effectively perform long-running tasks.

Chase also weighed in on OpenAI’s acquisition of OpenClaw, arguing that its viral success came down to a willingness to “let it rip” in ways that no major lab would — and questioning whether the acquisition actually gets OpenAI closer to a safe enterprise version of the product.

“The trend in harnesses is to actually give the large language model (LLM) itself more control over context engineering, letting it decide what it sees and what it doesn’t see,” Chase says. “Now, this idea of a long-running, more autonomous assistant is viable.”

Advertisement

Tracking progress and maintaining coherence

While the concept of allowing LLMs to run in a loop and call tools seems relatively simple, it’s difficult to pull off reliably, Chase noted. For a while, models were “below the threshold of usefulness” and simply couldn’t run in a loop, so devs used graphs and wrote chains to get around that. Chase pointed to AutoGPT — once the fastest-growing GitHub project ever — as a cautionary example: same architecture as today’s top agents, but the models weren’t good enough yet to run reliably in a loop, so it faded fast.

But as LLMs keep improving, teams can construct environments where models can run in loops and plan over longer horizons, and they can continually improve these harnesses. Previously, “you couldn’t really make improvements to the harness because you couldn’t actually run the model in a harness,” Chase said.

LangChain’s answer to this is Deep Agents, a customizable general-purpose harness.

Advertisement

Built on LangChain and LangGraph, it has planning capabilities, a virtual filesystem, context and token management, code execution, and skills and memory functions. Further, it can delegate tasks to subagents; these are specialized with different tools and configurations and can work in parallel. Context is also isolated, meaning subagent work doesn’t clutter the main agent’s context, and large subtask context is compressed into a single result for token efficiency.

All of these agents have access to file systems, Chase explained, and can essentially create to-do lists that they can execute on and track over time.

“When it goes on to the next step, and it goes on to step two or step three or step four out of a 200 step process, it has a way to track its progress and keep that coherence,” Chase said. “It comes down to letting the LLM write its thoughts down as it goes along, essentially.”

He emphasized that harnesses should be designed so that models can maintain coherence over longer tasks, and be “amenable” to models deciding when to compact context at points it determines is “advantageous.”

Advertisement

Also, giving agents access to code interpreters and BASH tools increases flexibility. And, providing agents with skills as opposed to just tools loaded up front allows them to load information when they need it. “So rather than hard code everything into one big system prompt,” Chase explained, “you could have a smaller system prompt, ‘This is the core foundation, but if I need to do X, let me read the skill for X. If I need to do Y, let me read the skill for Y.’”

Essentially, context engineering is a “really fancy” way of saying: What is the LLM seeing? Because that’s different from what developers see, he noted. When human devs can analyze agent traces, they can put themselves in the AI’s “mindset” and answer questions like: What is the system prompt? How is it created? Is it static or is it populated? What tools does the agent have? When it makes a tool call, and gets a response back, how is that presented?

“When agents mess up, they mess up because they don’t have the right context; when they succeed, they succeed because they have the right context,” Chase said. “I think of context engineering as bringing the right information in the right format to the LLM at the right time.”

Listen to the podcast to hear more about:

Advertisement
  • How LangChain built its stack: LangGraph as the core pillar, LangChain at the center, Deep Agents on top.

  • Why code sandboxes will be the next big thing.

  • How a different type of UX will evolve as agents run at longer intervals (or continuously).

  • Why traces and observability are core to building an agent that actually works.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.

Source link

Continue Reading

Tech

Iceland Foods Finally Surrenders In Trademark Fight With Iceland, The Country

Published

on

from the who’s-the-moron-in-a-hurry-here dept

The ten year war over Iceland is over and Iceland has come out the victor.

If you don’t know what I’m talking about, be prepared to listen to a whole bunch of stupid. In 2016, we wrote about Iceland Foods, a UK grocer, which had somehow convinced the EU to give it a trademark for “Iceland” and which then went about bullying other companies and opposing trademarks for any that included the name of that country. One of the entities that Iceland Foods found itself in a trademark opposition with was Iceland, as in the country, when it attempted to trademark “Inspired by Iceland.” The Icelandic government didn’t take too kindly to that appropriation of its own name and petitioned to cancel the Iceland Foods trademark, which is exactly what happened. Rather than put an end to this absurdity, Iceland Foods appealed that decision, lost, then appealed it again, lost again, appealed a third time, only to lose there as well.

From there, Iceland Foods had but one final option for appealing all of these perfectly sane rulings, which would be to take this before the Court of Justice of the EU. And, while that would obviously be crazy, everything I’d seen to date led me to believe the grocer would do just that.

But sanity seems to finally be on the menu, I guess. Iceland Foods has publicly announced that it is ending the fight and surrendering.

Advertisement

Executive chairman Richard Walker revealed the supermarket would drop the legal dispute, which centres on the right to use the phrase Iceland in the EU, following its third legal loss in July 2025.

Iceland had one fourth and final route of appeal, via the Court of Justice of the European Union, but Walker told the Financial Times it would instead use the “couple of hundred grand” it would save in legal fees to give a “rapprochement discount” to Icelandic shoppers.

Yeah, that’s how this should have been approached from the jump, folks. And this actually goes back even further, where this broad, geographic trademark by a private entity consisting of the name of a sovereign nation never should have been granted a trademark to begin with.

But that’s all over now. Iceland Foods’ trademark is invalidated. Iceland once more is free from being bullied over its own name, as would be other companies from the island nation. Iceland Foods can keep on operating as it always has, sans the ability to bully others with this ridiculous mark. Walker himself said as much, in a very frustrating manner.

“We lost for a third time. We’re going to throw in the towel,” Walker told the FT. “It’s actually fine — we don’t have to change our name.”

Exactly. You never had to. That was never in question. The only question is whether you got to keep your laughable trademark and bully others over it.

Advertisement

Instead, the grocer wasted everyone’s time, and who knows how much of its own money, trying to wage this silly war.

Filed Under: cjeu, iceland, iceland iceland iceland, trademark, uk

Companies: iceland foods

Source link

Advertisement
Continue Reading

Tech

Reverse Engineering The PROM For The SGI O2

Published

on

The SGI O2 was SGI’s last-ditch attempt at a low-end MIPS-based workstation back in 1996, and correspondingly didn’t use the hottest parts of the time, nor did it offer much of an upgrade path. None of which is a concern to hobbyists who are more than happy to work around any hardware- and software limitations to e.g. install much faster CPUs. While quite a few CPU upgrades were possible with just some BGA chip reworking skills, installing the 900 MHz RM7900 would require some PROM hacking, which [mattst88] recently took a shake at.

The initial work on upgrading SGI O2 systems was done in the early 2000s, with [Joe Page] and [Ian Mapleson] running into the issue that these higher frequency MIPS CPUs required a custom IP32 PROM image, for which they figured that they’d need either SGI’s help or do some tricky reverse-engineering. Since SGI is no longer around, [mattst88] decided to take up the torch.

After downloading a 512 kB binary dump of the last version of the O2’s PROM, he set to work reverse-engineering it, starting by dissembling the file. A big part of understanding MIPS PROM code is understanding how the MIPS architecture works, including its boot process, so much of what followed was a crash-course on the subject.

Advertisement

With that knowledge it was much easier to properly direct the Capstone disassembler and begin the arduous process of making sense of the blob of data and code. The resulting source files now reassemble into bit-identical ROM files, which makes it likely that modifying it to support different CPUs is now possible with just a bit more work.

For those who want to play along, [mattst88] has made his ip32prom-decompiler project available on GitHub.

Thanks to [adistuder] for the tip.


Top image: Silicon Graphics 1600SW LCD display and O2 workstation. (Source: Wikimedia)

Advertisement

Source link

Advertisement
Continue Reading

Tech

Apple is adding a warning against AI music content

Published

on

Apple Music is introducing a new way to flag AI-generated music. However, it’s relying on the music industry itself to disclose it.

As reported by Music Business Worldwide, the streaming service has launched Transparency Tags, a new metadata system that allows record labels and distributors to mark when artificial intelligence has been used in different parts of a release.

The tags can be applied immediately. Eventually, they will become a requirement when partners deliver new content to the platform.

Rather than analysing songs itself, Apple is placing the responsibility on the supply chain. Labels and distributors will decide whether a track or release qualifies as AI-generated. They will apply the tags during the delivery process – this is similar to how genres or credits are currently submitted.

Advertisement

The system covers four areas of a release. Artwork tags flag when AI is used to create album artwork or other visuals. Track tags indicate that AI helped generate the sound recording itself. Composition tags apply when lyrics or other songwriting elements are created using AI. Meanwhile, Music Video tags identify AI-generated visuals tied to releases.

Advertisement

Apple says the goal is to give the industry better visibility into how generative AI is being used in music production. In a note to industry partners, the company described the tags as a “first step”. This is toward building clearer policies and best practices around AI-created content.

The approach stands in contrast to how some rivals are tackling the issue. Streaming platform Deezer, for example, has built its own AI detection system. It scans uploads automatically rather than relying on labels to self-report.

Advertisement

That difference matters given how quickly AI-generated music is growing. Deezer said earlier this year that it now receives more than 60,000 fully AI-generated tracks every day. Synthetic music now accounts for roughly 39% of all uploads to the platform.

The company also claims most of that content is tied to streaming fraud rather than artistic experimentation. According to Deezer, up to 85% of streams on AI-generated tracks were fraudulent in 2025. Those plays were removed from the royalty pool.

Apple’s Transparency Tags don’t currently include a visible enforcement mechanism or verification system. This means the accuracy of the labels will largely depend on the honesty of the distributors supplying the music.

Advertisement

Advertisement

For now, though, Apple’s move signals that AI disclosure is quickly becoming the next battleground for music streaming platforms.

Source link

Advertisement
Continue Reading

Trending

Copyright © 2025