
The NFL Won A Lawsuit Over Its Bluesky Ban. Its Social Media Strategy Is Still A Loser


from the a-social-media-fumble dept

Full disclosure up front: I sit on the board of Bluesky. That said, I had absolutely no idea this lawsuit existed until recently. Which, honestly, tells you something about how much of a legal non-event it was. But the underlying story here—about the NFL treating social media the way it treats television broadcast rights—is worth digging into, because it reveals something deeply broken about how major sports leagues think about the internet.

The 2025-2026 NFL season just wrapped up, and along with it came a federal court ruling in a case called Brown v. NFL that most people missed entirely. Two football fans—one in Illinois, one in California—sued the NFL under the Sherman Act, claiming the league violated antitrust law by barring its teams from posting on Bluesky. The fans wanted to follow their teams—the Bears and the now-champion Seahawks—on the platform they actually use, rather than on Elon Musk’s X. The court dismissed the case for lack of standing, and honestly, that was probably the right legal outcome.

The fans couldn’t demonstrate a concrete injury—the information they wanted was still available, for free, on X. As the court put it, their grievance reduced to being “denied the ability to obtain real-time NFL team information on a private platform with which they are ideologically comfortable.” And “I don’t like Elon Musk” is not an antitrust injury. The Sherman Act targets conspiracies that restrain trade and harm competition—not content distribution preferences. You can’t force a private organization to distribute its content on the platform you like best, just as we’ve called out attempts to force social media platforms to carry content they don’t want to carry.

But the fact that the NFL is legally allowed to be this myopic doesn’t make it a smart business decision. You can be entirely within your rights and still be making a spectacularly bad call.


Since 2013, the NFL has had a “content partnership” with X (dating back to when it was the useful site known as Twitter). The deal lets X publish real-time highlights, and in return the league gets… money, presumably. As the court noted in its ruling:

Since 2013, the NFL and X (formerly Twitter, Inc.) have had a “content partnership.” It allows X to publish real-time highlights from football games, such as touchdowns. During the offseason, reporters post on X with news about team practices and other NFL-related topics, and fans on X discuss teams’ acquisitions of free agents and other roster changes. For example, during the NFL draft (the high-profile annual event in which teams select eligible players to join their rosters), X published more than one million posts concerning the NFL; these appeared on users’ screens more than 800 million times. The NFL has repeatedly renewed its partnership with X. Fans do not pay money to receive NFL news on X.

Fine. Lots of organizations have deals with social media platforms. But this just seems like self-sabotage: the NFL apparently used this partnership as justification to tell its own teams they couldn’t even exist on a competing platform. Multiple NFL teams—including the New England Patriots—had set up accounts on Bluesky, started posting, and were building audiences. And then the league office stepped in and told them to shut it all down.

From the ruling:

Initially, multiple NFL teams, including the New England Patriots, had accounts on Bluesky to communicate with fans….

As alleged, however, the NFL later instructed its member teams to delete their Bluesky accounts. But for this instruction, at least some NFL teams would use Bluesky. The Patriots’ vice president of content, Fred Kirsch, for example, has stated: “Whenever the league gives us the green light[,] we’ll get back on Bluesky.”


Yes, the (Super Bowl-losing) Patriots’ VP of content is publicly saying his team wants to be on Bluesky and is just waiting for the league to let them. This wasn’t a case of teams being uninterested. Teams saw the audience there, set up shop, and were actively communicating with fans—and the NFL made them stop.

As Front Office Sports reported at the time, the league specifically told the Patriots to take down their Bluesky account. The league apparently hasn’t even approved Threads—Meta’s X competitor—for team real-time updates either.

So the NFL has essentially decided that when it comes to the kind of real-time updates that fans actually care about, X is the only approved outlet. Everything else is locked out.

This is “broadcast-brain” thinking applied to the internet, and it’s spectacularly dumb.


The NFL is treating social media platforms the way it treats regional sports networks or its Sunday Ticket package: as exclusive territories to be carved up and sold to the highest bidder. In the television world, that model makes a certain kind of sense—there’s a limited amount of spectrum, a limited number of cable channels, and that scarcity creates value. But social media doesn’t work that way. There’s no scarcity. Posting an injury report on Bluesky doesn’t remove it from X. Cross-posting is literally free. The entire point of social media for a brand is to be everywhere your audience is.

And the audience, increasingly, is on Bluesky. As Mashable noted last year heading into the season, the NFL community on Bluesky had already hit a kind of critical mass:

You need the presence and regular posting of big names to legitimize a platform. It certainly helped that folks like Kimes and a large portion of the NFL writers at popular sports sites like The Ringer made Bluesky home. And last season it felt like Bluesky hit terminal velocity, where enough people joined that you could fully exit to the site for football content. And with the migration of the professionals, the shitposters naturally came along, too. Because that’s where the discussion was happening. There is genuine, easy-to-find, fun NFL talk on Bluesky with minimal interruptions from, say, weird ads or angry reply guys you might find on X.

That’s a real community. A vibrant, engaged community of exactly the kind of hardcore football fans that the NFL should be desperate to cultivate. These are, as Mashable noted, the “ball knowers.” They’ve moved to Bluesky because, well, X kind of sucks now for following sports. As Mashable also noted:

Bluesky does have a leg-up in some areas — Elon Musk’s site recently has proven unreliable for NFL fans. The site crashed the morning free agency launched, which is one of the most important days for NFL social media. And the sports tab — which used to be an easy, fun way to follow games in the Twitter days — degraded into near uselessness years ago. And, in general, X has morphed with Musk’s image, which is focused more on AI and politics — not things like following football. Of course you can still follow the NFL on X, but it does involve wading through more junk than it used to. Bluesky offers an interesting alternative in that regard.

So the most engaged, most knowledgeable football community has moved to Bluesky. The teams themselves want to be on Bluesky. And the NFL’s response to all of this is… to ban its teams from showing up.


It’s the digital equivalent of a local blackout (something we’ve been calling out for well over a decade)—punishing your most dedicated fans because of some deal you cut with a middleman in an effort to create an artificial and unnecessary scarcity.

Meanwhile, the platform the NFL is propping up with this exclusivity arrangement is one where fans who tuned in for the Super Bowl halftime show got to watch a significant chunk of the X user base have a full-blown racist meltdown over Bad Bunny performing. The NFL specifically chose Bad Bunny to appeal to a broader, more global audience—and the audience that actually appreciated the choice? They were on Bluesky where there was an overwhelming wave of support for the performance. The league is betting its real-time presence on the platform where its expansion strategy gets shouted down, while blocking teams from the one where those new fans are actually showing up.

This kind of control-freakery from the NFL shouldn’t surprise anyone who has followed the league’s behavior over the years. This is the same organization that has spent decades aggressively lying to bars, restaurants, and small businesses about the scope of its “Super Bowl” trademarks, sending threatening letters suggesting you can’t even say the words “Super Bowl” in an ad without a license—something that has never actually been true.

The NFL’s institutional DNA is “control equals value,” and they apply that logic to everything, from what a church can call its viewing party to which social media apps their teams are permitted to use.


The problem is that control-based thinking only works when you actually can control the ecosystem. You can (sort of) control which networks broadcast your games. You can control which streaming service gets Sunday Ticket. You cannot control where fans choose to talk about football on the internet. The conversation is going to happen whether the NFL’s official accounts are there or not. The only question is whether the league’s teams get to participate in it.

Any organization whose core business depends on fan engagement should be finding fans where they are, not herding them onto a single platform because you cut an exclusivity deal. Especially when that platform is increasingly known for being a hellscape of AI slop, political rage, and engagement-bait, while the platform you’re blocking your teams from is the one where people are actually talking about your product with genuine enthusiasm.

The NFL generates billions in revenue. And yet, when it comes to social media strategy, it’s stuck in a 2005 mindset. That’s not how any of this works anymore.

Someone at NFL HQ needs to understand that when your most passionate fans have moved to a new platform and your own teams are begging for permission to follow them there, the smart play is to let them go.


Filed Under: blackouts, exclusivity, fans, football, social media

Companies: bluesky, nfl, twitter, x



BMW Sends AEON Humanoid Robots to the Line in Leipzig

BMW employees at the Leipzig plant handle the most complex parts of vehicle assembly, especially the hefty battery modules. Now a new member has joined the team: the AEON humanoid robot. AEON was created by Hexagon Robotics, a company BMW has collaborated with for years, to take over the physically taxing and repetitive tasks that wear people out.



By April 2026, the AEON prototype will be put through its paces in a larger round of assessments, with a complete pilot phase scheduled to begin this summer. The goal is to make AEON extremely adaptable so that it can transition between activities as needed; simply switch out the gripper or scanner and you’re ready to go. It travels from station to station without the need for fixed rails because it has wheels rather than legs.



Electric vehicle battery modules need extra care, and workers frequently have to wear safety gear simply to move them. After a few shifts the work becomes monotonous, and AEON can take it on without adding strain elsewhere on the line. External component production is also in scope since, let's face it, robotic consistency is a huge bonus when the same operation is performed over and over.

BMW is doing something a little different from the conventional industrial arms bolted to the floor. Thanks to data from BMW's recently unified systems, AEON can move around, adapt to any arrangement, and get a little smarter every day. To make that work, the company first had to dismantle the outdated data silos that were creating so much friction; now all of that data streams directly into AEON. Naturally, safety is the top priority, which means improved wireless coverage, additional barriers to keep people clear of the robot, and other measures. And because everything ties into the existing Smart Robotics network, nothing gets left behind.

The main point is that human labor remains essential to the entire operation. BMW wants employees to be able to do something more interesting for a change by eliminating the monotony of their jobs. Early buy-in from safety teams, IT, and logistics won over the personnel on the floor, and having that support from the beginning has made everything go much more smoothly. The strategy was shaped by lessons from an earlier test at the US facility in Spartanburg, which showed how rapidly AEON could get up to speed and how dependably it ran in an actual production environment.

Amid all of this, Leipzig has established a new Center of Competence for Physical AI, whose experts are getting to work assessing partners, running pilots, and scaling up the concepts that prove effective. Executives are already discussing quantifiable improvements in speed and accuracy on these difficult tasks as a way to stay competitive in Europe.


City of Seattle CTO Rob Lloyd is resigning to lead a government institute with national reach


Rob Lloyd, chief technology officer for the City of Seattle, announced that he is stepping down from his role in March. (Photo courtesy of Rob Lloyd)

Rob Lloyd, Seattle’s chief technology officer, is leaving his post to become executive director of the Center for Digital Government. His last day will be March 27.

“Leading IT and our dedicated teams in service to Seattle has been an honor,” Lloyd said to colleagues in an email sent Thursday night.

Lloyd told GeekWire that while he appreciated Mayor Katie Wilson’s invitation to stay in the role, he was “beyond excited” to take the new job, which would allow him to perform similar work with local and state governments nationwide.

Lloyd became CTO in June 2024 after eight years as deputy city manager of San José, Calif. While his new employer is based in California, he will remain in Seattle. “My family wanted it no other way,” Lloyd said.

The city provided GeekWire with Lloyd’s letter of resignation, in which he said the “timing is right for a change.” The mayor is reshaping her executive team and its direction, he wrote, and strategizing actions related to the budget and this summer’s FIFA World Cup games.


Seattle is facing about a $140 million budget deficit for next year. The Seattle Times reported that Wilson is asking departments to provide plans for funding cuts of 5% to 10%.

In the letter, Lloyd also highlighted some of his team’s accomplishments during his tenure, including:

  • Recovering more than $130 million “in failing and stalled technology projects.”
  • Executing the city’s IT Strategic Plan.
  • Partnering with fire, police, mental health and emergency management services on public safety technologies.
  • Managing a $21 million operating budget reduction while increasing service reliability and employee retention.
  • Updating cybersecurity practices.
  • Formalizing his department’s first customer service and staff feedback surveys.

Lloyd has been responsible for overseeing roughly 670 employees, and joined the city with a $270 million operating budget and a capital budget of about $24 million.

In December, the city appointed Lisa Qian as its first AI Officer. Her experience includes serving as a senior manager of data science at LinkedIn, as well as other tech company leadership positions.

When Lloyd came to Seattle, he told GeekWire he hoped the city would be his “forever home” — and that he wanted to step outside City Hall and build relationships with the community members and companies driving the region’s tech scene. He was eager to play a part in tackling difficult issues such as public safety, homelessness and downtown recovery.


In his email to employees, Lloyd said that during his final weeks he would “be focused on completing the final commitments I made to the organization when I arrived.”

“What I’ll carry most from my time here isn’t the projects or the milestones though, it’s the memories of you and our partners,” Lloyd continued. “So many people made this work a true gift. Thank you to the City for letting me serve this community with you.”



Microsoft’s new AI training method eliminates bloated system prompts without sacrificing model performance


In building LLM applications, enterprises often have to create very long system prompts to adjust the model’s behavior for their applications. These prompts contain company knowledge, preferences, and application-specific instructions. At enterprise scale, these contexts can push inference latency past acceptable thresholds and drive per-query costs up significantly. 

On-Policy Context Distillation (OPCD), a new training framework proposed by researchers at Microsoft, helps bake the knowledge and preferences of applications directly into a model. OPCD uses the model’s own responses during training, which avoids some of the pitfalls of other training techniques. This improves the abilities of models for bespoke applications while preserving their general capabilities. 

Why long system prompts become a liability

In-context learning allows developers to update a model’s behavior at inference time without modifying its underlying parameters. Updating parameters is typically a slow and expensive process. However, in-context knowledge is transient. This knowledge does not carry across different conversations with the model, meaning you have to feed the model the exact same massive set of instructions or documents every time. For an enterprise application, this might mean repeatedly pasting company policies, customer tickets, or dense technical manuals into the prompt. This eventually slows down the model, drives up costs, and can confuse the system.

“Enterprises often use long system prompts to enforce safety constraints (e.g., hate speech detection) or to provide domain-specific expertise (e.g., medical knowledge),” said Tianzhu Ye, co-author of the paper and researcher at Microsoft Research Asia, in comments provided to VentureBeat. “However, lengthy prompts significantly increase computational overhead and latency at inference time.”
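The overhead Ye describes is easy to put rough numbers on. Here is a back-of-envelope sketch; every figure below is a hypothetical assumption for illustration, not a number from the paper:

```python
# Hypothetical workload: every query re-sends a long system prompt.
system_prompt_tokens = 4_000   # assumed: policies, examples, domain knowledge
query_tokens = 200             # assumed: the actual user question
queries_per_day = 100_000      # assumed volume

tokens_with_prompt = (system_prompt_tokens + query_tokens) * queries_per_day
tokens_distilled = query_tokens * queries_per_day  # prompt baked into weights

# The fixed prompt dominates: 21x more input tokens processed per day.
assert tokens_with_prompt // tokens_distilled == 21
```

At these made-up volumes, distilling the prompt away removes over 95% of the input tokens the model must process on every query, which is where the latency and cost savings come from.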


The main idea behind context distillation is to train a model to internalize the information that you repeatedly insert into the context. Like other distillation techniques, it follows a teacher-student paradigm. The teacher is an AI model that receives the massive, detailed prompt. Because it has all the instructions and reference documents, it generates highly tailored responses. The student is a model being trained that only sees the main question and doesn’t have access to the full context. Its goal is simply to observe the teacher’s responses and learn to mimic its behavior.

Through this training process, the student model effectively compresses the complex instructions from the teacher's prompt directly into its parameters. For an enterprise, the primary value shows up at inference time. Because the student model has internalized the context, you can deploy it in your application without pasting in the lengthy instructions again, making it significantly faster with far less computational overhead.
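As a toy illustration of that classic recipe (a minimal numpy sketch under simplifying assumptions, not the paper's implementation), one can model the "long prompt" as a fixed bias on a teacher's logits and train a prompt-free student to match the teacher's output distribution:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_kl(p, q):
    # KL(p || q) over a discrete vocabulary
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
vocab = 8

# Toy "teacher": the long system prompt is modeled as a fixed bias added to
# the base logits, shifting the next-token distribution.
base_logits = rng.normal(size=vocab)
prompt_bias = rng.normal(size=vocab) * 2.0
teacher = softmax(base_logits + prompt_bias)

# Prompt-free "student": classic context distillation minimizes forward
# KL(teacher || student), i.e. gradient descent on cross-entropy to teacher.
student_logits = base_logits.copy()
for _ in range(3000):
    grad = softmax(student_logits) - teacher  # d(cross-entropy)/d(logits)
    student_logits -= 0.5 * grad

before = forward_kl(teacher, softmax(base_logits))
after = forward_kl(teacher, softmax(student_logits))
assert after < before / 100  # the prompt's effect is baked into the weights
```

The student ends up reproducing the teacher's prompt-conditioned distribution without ever seeing the prompt, which is the "compression into parameters" described above.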


However, classic context distillation relies on a flawed training method called “off-policy training,” where the model is trained on fixed datasets that were collected before the training process. This is problematic in several ways. During training, the student is only exposed to ground-truth data and teacher-generated answers, creating what Ye calls “exposure bias.” In production, the model must come up with its own token sequences to reach those answers. Because it never practiced making its own decisions or recovering from its own mistakes during training, it can easily derail when operating independently. It’s like showing a student videos of a professional driver and expecting them to learn driving without trial and error.

Another problem is the "forward Kullback-Leibler (KL) divergence" minimization objective used to train the model. Under this objective, the model is graded on how similar its answers are to the teacher's, which encourages "mode-covering" behavior, Ye says. The student model is often smaller or lacks the rich context the teacher had, meaning it simply lacks the capacity to perfectly replicate the teacher's complex reasoning. Because the student is forced to try to cover all of the teacher's possibilities anyway, its underlying guesses become overly broad and unfocused.
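The difference between the two KL directions is easy to demonstrate numerically. In this toy numpy example (illustrative, not from the paper), a bimodal "teacher" distribution is compared against two candidate "students", one diffuse and one committed to a single mode:

```python
import numpy as np

def kl(p, q):
    # KL(p || q) with a small epsilon for numerical stability
    eps = 1e-12
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Bimodal "teacher" over four tokens; two unimodal "student" candidates.
teacher = np.array([0.45, 0.05, 0.45, 0.05])
spread  = np.array([0.25, 0.25, 0.25, 0.25])  # diffusely covers both modes
focused = np.array([0.85, 0.05, 0.05, 0.05])  # commits to a single mode

# Forward KL(teacher || student) punishes missing teacher mass,
# so the diffuse student scores better: mode-covering.
assert kl(teacher, spread) < kl(teacher, focused)

# Reverse KL(student || teacher) punishes mass where the teacher has little,
# so the focused student scores better: mode-seeking.
assert kl(focused, teacher) < kl(spread, teacher)
```

Forward KL rewards the student for spreading probability over everything the teacher might say; the reverse direction, which OPCD adopts, rewards committing to what the student can actually model.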

In real-world applications, this can result in hallucinations, where the AI gets confused and confidently makes things up because it is trying to mimic a depth of knowledge it does not actually possess. It also means that the model cannot generalize well to new tasks.


How OPCD fixes the teacher-student problem

To fix the critical issues with the old teacher-student dynamic, the Microsoft researchers introduced On-Policy Context Distillation (OPCD). The most important shift in OPCD is that the student model learns from its own generation trajectories as opposed to a static dataset (which is why it is called “on-policy”). Instead of passively studying a dataset of the teacher’s perfect outputs, the student is given a task without seeing the massive instruction prompt and has to generate an answer entirely on its own.

As the student generates its answer, the teacher acts as a live instructor. The teacher has access to the full, customized prompt and evaluates the student’s output. At every step along the student’s generation, the system compares the student’s token distribution against what the context-aware teacher would do.


OPCD uses “reverse KL divergence” to grade the student. “By minimizing reverse KL divergence, it promotes ‘mode-seeking’ behavior. It focuses on high-probability regions of the student’s distribution,” Ye said. “It suppresses tokens that the student considers unlikely, even if the teacher’s belief assigned them high probability. This alignment helps the student correct its own mistakes and avoid the broad, hallucinatory distributions of standard distillation.”


Because the student model actively practices making its own decisions and learns to correct its own mistakes during training, it behaves more reliably when deployed in a live application. It successfully bakes complex business rules, safety constraints, or specialized knowledge directly into its permanent memory.
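That training dynamic, in which the student practices on its own rollouts while the context-aware teacher grades each token, can be sketched as a toy tabular loop (a numpy illustration under strong simplifying assumptions, not the paper's RLVR-based implementation; the teacher's lookup table stands in for a model conditioned on the full prompt):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reverse_kl(s_row, t_row):
    # KL(student || teacher) for one state's next-token distribution
    p, t = softmax(s_row), softmax(t_row)
    return float(np.sum(p * np.log(p / t)))

def reverse_kl_grad(s_row, t_row):
    # Exact gradient of KL(student || teacher) w.r.t. the student's logits
    p, t = softmax(s_row), softmax(t_row)
    logratio = np.log(p) - np.log(t)
    return p * (logratio - np.sum(p * logratio))

rng = np.random.default_rng(1)
vocab, horizon = 5, 4

# Tabular "policies": row s holds the logits for the next token after token s.
# The teacher's table stands in for a model that saw the full system prompt.
teacher_logits = rng.normal(size=(vocab, vocab)) * 2.0
student_logits = np.zeros((vocab, vocab))  # student starts uniform, prompt-free

before = reverse_kl(student_logits[0], teacher_logits[0])
for _ in range(300):                      # episodes
    tok = 0
    for _ in range(horizon):              # on-policy rollout: student's own tokens
        g = reverse_kl_grad(student_logits[tok], teacher_logits[tok])
        student_logits[tok] -= 0.5 * g    # teacher supervises the visited state
        tok = int(rng.choice(vocab, p=softmax(student_logits[tok])))
after = reverse_kl(student_logits[0], teacher_logits[0])
assert after < before / 10  # student matches the teacher on states it visits
```

The key on-policy property is that updates land only on states the student actually reaches with its own sampling, so it practices exactly the decisions it will face when deployed.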

What OPCD delivers: The benchmark results

The researchers tested OPCD in two key areas: experiential knowledge distillation and system prompt distillation. For experiential knowledge distillation, the researchers wanted to see if an LLM could learn from its own past successes and permanently adopt those lessons. They tested this on models of various sizes, using mathematical reasoning problems.

First, the model solved problems and was asked to write down general rules it learned from its successes. Then, using OPCD, the researchers baked those written lessons directly into the model's parameters. The results showed that the models improved dramatically without needing the learned experience pasted into their prompts anymore. On complex math problems, an 8-billion-parameter model improved from a 75.0% baseline to 80.9%. And on the Frozen Lake navigation game, a small 1.7-billion-parameter model initially had a success rate of 6.3%; after OPCD baked in the learned experience, its accuracy jumped to 38.3%.

The second set of experiments was on long system prompts. Enterprises often use massive system prompts to enforce strict behavioral guidelines, like maintaining a professional tone, ensuring medical accuracy, or filtering out toxic language. The researchers tested whether OPCD could permanently bake these dense behavioral rules into the models so they would not have to be sent with every single user query. Their experiments show that OPCD successfully internalized these complex rules and massively boosted performance. When testing a 3-billion-parameter Llama model on safety and toxicity classification, the base model scored 30.7%. After using OPCD to internalize the safety prompt, its accuracy spiked to 83.1%. On medical question answering, the same model improved from 59.4% to 76.3%.


One of the key challenges of fine-tuning models is catastrophic forgetting, where the model becomes too focused on the fine-tune task and worse at general tasks. The researchers tracked out-of-distribution performance to test for this tunnel vision. When they distilled strict safety rules into a model, they immediately tested its ability to answer unrelated medical questions. OPCD successfully maintained the model’s general medical knowledge, outperforming the old off-policy methods by approximately 4 percentage points. It specialized without losing its broader intelligence.

Where OPCD fits — and where it doesn’t

While OPCD is a powerful tool for internalizing static knowledge and complex rules, it does not replace all external context methods. “RAG is better when the required information is highly dynamic or involves a massive, frequently updated external database that cannot be compressed into model weights,” Ye said.

For enterprise teams evaluating their pipelines, adopting OPCD does not require overhauling existing systems or investing in specialized hardware. “OPCD can be integrated into existing workflows with very little friction,” Ye said. “Any team already running standard RLVR [Reinforcement Learning from Verifiable Rewards] pipelines can adopt OPCD without major architectural changes.”

In practice, the student model acts as the policy model performing rollouts, while the frozen teacher model serves as a reference providing logits. The hardware requirements are highly accessible. According to Ye, enterprise teams can reproduce the researchers’ experiments using about eight A100 GPUs.


The data requirements are similarly lightweight. For experiential knowledge distillation, developers only need around 30 seed examples to generate solution traces. Because the technique is applied to previously unoptimized environments, even a small amount of data yields the majority of the performance improvement. For system prompt distillation, existing optimized prompts and standard task datasets are sufficient.

The researchers built their own implementation on verl, an open-source RLVR codebase, proving that the technique fits cleanly within conventional reinforcement learning frameworks. They plan to release their implementation as open source following internal reviews.

The self-improving model: What comes next

Looking ahead, OPCD paves the way for genuinely self-improving models that continuously adapt to bespoke enterprise environments. Once deployed, a model can extract lessons from real-world interactions and use OPCD to progressively internalize those characteristics without requiring manual supervision or data annotation from model trainers.

“This represents a fundamental paradigm shift in model improvement: the core improvements to the model would move from training time to test time,” Ye said. “Using the model—and allowing it to gather experience—would become the primary driver of its advancement.”



Ultrahuman’s Pro smart ring can go for two weeks


Ultrahuman is back with its most capable smart ring yet, the Ring Pro.

This third-generation smart ring delivers up to 15 days of battery life, and launches alongside a new Pro Charging Case and an AI health platform called Jade.

The 15-day battery claim triples the four to six-day lifespan of the Ring Air, a gap that Ultrahuman frames as a category-defining shift.

The Ring Pro uses a titanium unibody construction and carries a redesigned internal architecture. A redesigned heart-rate sensing architecture improves signal quality during sleep and recovery, while an upgraded dual-core processor handles faster data processing and on-chip machine learning, both of which directly affect the accuracy of the health metrics the ring generates overnight.


ProRelease Technology allows the ring to be cut apart more easily in the event of finger swelling or injury, a safety feature that most competing smart rings have not addressed despite growing consumer concerns about wearables worn continuously for extended periods.


The Charging Case carries its own 45-day battery reserve and stores up to one year of ring data, using a magnetic UltraSnap connection that Ultrahuman states generates less heat than conventional wireless charging during repeated daily use.


Jade and PowerPlugs

Jade, the company’s new biointelligence AI platform, pulls real-time data from across the Ultrahuman ecosystem, including the Ring Pro, Blood Vision biomarkers, M1 continuous glucose monitoring, and Ultrahuman Home environmental sensors to surface personalised health insights rather than retrospective summaries.

The system differs from standard AI health integrations by executing real-time actions such as triggering AFib detection or initiating breathwork sessions, a capability that Ultrahuman positions closer to Tesla’s Full Self-Driving model of continuous real-time processing than a conventional backward-looking data query tool.


PowerPlugs, the company’s expanding micro-application platform, adds new capabilities including GLP-1 lifestyle tracking, respiratory health and snoring analysis through a Sleep Cycle integration, and migraine management tools built on Click Therapeutics’ FDA-authorised digital therapeutic technology.


The Ring Pro is available to pre-order globally, excluding the United States, for $479, with shipments beginning in March, and trade-in discounts of up to $115 apply for existing Ring Air and other smart ring owners.



Who is hiring in AI, robotics and automation right now?


AI solutions architect, senior automation engineer and machine learning engineer are just three of the exciting roles open to qualified professionals in Ireland at the moment.

Click here to access the entire catalogue of Automation Focus.

February at SiliconRepublic.com is when we take a closer look at all things AI, robotics and automation. It is a rapidly evolving space that often requires a significant commitment to upskilling and training, but on the upside, it is also a fascinating ecosystem to be a part of. 

So, if you are a STEM professional looking for a new role or opportunity, then why not consider applying at one of these 14 organisations? Each has a range of positions in Ireland open to experts aiming to work in AI, robotics or automation, or a combination of the three. 

Analog Devices

US semiconductor manufacturing company Analog Devices has a presence in a number of countries, including Ireland. Currently, in Limerick, the organisation is looking to onboard a robotics and automation graduate. Responsibilities will include assisting in the calibration and maintenance of robotic arms and automated handlers, supporting troubleshooting of robotic motion control systems and sensors, learning about robotic integration with high-vacuum and pneumatic systems, and participating in projects to optimise robotic throughput and reliability, among other tasks.

BMS

US pharmaceutical company Bristol Myers Squibb (BMS) is currently advertising a position for a senior manager in EHSS performance enablement. The role will require a professional with the skills to support the execution of EHSS performance monitoring, ensure that EHSS systems and processes deliver accurate, reliable data for decision-making, and lay the foundation for future predictive analytics capabilities.

BMS states that the position supports the maintenance and operational improvement of EHSS data and reporting systems, including performance measurement frameworks, maintaining data quality, and enabling analytics and visualisation through data science to track EHSS performance.

There is also a similar role for a senior manager in EHSS systems implementation.

Clio

In November of last year, Canadian AI legal-tech company Clio officially opened a new office in Dublin’s Docklands, just a few days after the company announced a $500m raise. Over the next couple of months, Clio plans to expand its Dublin-based team from 60 to more than 100 employees, adding new roles.

At the moment, someone with AI or automation skills could be well suited to a role as a software developer at the organisation. There is also a vacancy for a senior compliance analyst, EMEA, a job that involves expanding and automating compliance programmes. The successful candidate will work with stakeholders across Clio’s operations to support compliance initiatives such as risk mitigation, supporting innovation in AI and product development, customer inquiry support, control maintenance and instilling best practices.

Equinix

AI infrastructure provider Equinix recently announced plans to create 200 jobs in Dundalk, Co Louth via an investment of up to $700m in a new facility that will be built by local company Hanley Energy. The new roles are expected to be in a range of technical areas, such as precision engineering, quality assurance and lean manufacturing. While more jobs are likely to be announced down the line, one of the roles currently open to prospective employees is that of senior director, controls engineering and service management.

EXL

Data and artificial intelligence company EXL is headquartered in New York; however, the organisation has a presence in multiple countries, including Ireland, where there are two opportunities open to professionals with AI and automation skills: senior engineer in applied AI and digital AI solution architect. Both roles are advertised as hybrid and full-time. EXL is also offering Dublin-based full-stack engineering positions for people with varying degrees of seniority. 

EY

UK-based professional services firm EY has a number of AI-specific job opportunities for qualified professionals. Anyone interested in applying could consider jobs in agentic AI engineering, AI-lab full-stack engineering, agentic and generative intelligence, AI analytics, and data architecture, among others. The Dublin office is also recruiting for an intelligent automation assistant manager and an intelligent automation manager. 

Fidelity Investments 

For professionals looking for a new role in which AI and automation skills are a plus, Fidelity Investments has opportunities at its Dublin and Galway-based facilities. In the capital, there is a vacancy for a principal site reliability engineer, and out west there are roles for principal full-stack engineer, senior software engineer, principal software engineer and senior full-stack engineer.

Fixify

US software company Fixify has a position in senior data science for an Ireland-based professional looking to work remotely. Fixify states that the successful candidate will be at the forefront of transformation by wielding machine learning, AI and “data wizardry”. 

Liberty IT

Liberty IT, the technology arm of the insurance company Liberty Mutual Insurance, has offices in Belfast, Dublin and Galway. The team is looking for a professional skilled in ML automated workflows, software engineering, cloud architecture and AI, for a role as a senior AI solutions architect. The role is open to professionals located in any of the three Irish premises. Also on offer to an expert with automation skills is a role as a senior data engineer. 

MSD

Pharmaceutical multinational MSD has a vacancy for a senior specialist in manufacturing automation. The successful candidate will join the team at the multiproduct facility in Dunboyne, Co Meath and will work closely with colleagues across engineering, operations, quality, validation and global technology, among other teams.

PwC

Professional services company PwC is adding to its AI and automation capabilities with a number of key hires. In Dublin, open roles include AI Azure architect in data and AI, senior associate engineer for agentic AI, AI technology consultant manager in data and AI, and senior manager and AI architect in Azure.  

TCS

IT services, consulting and business solutions platform TCS has an opportunity open to a Cork-based professional in its IT department. Its Little Island facility is hiring for a senior automation engineer, and desired skills for the role include experience in a manufacturing environment, high volume automated assembly experience and medical device manufacturing experience.

Version1

Dublin-headquartered IT services and consulting company Version1 is looking to expand its teams. Currently, the organisation is recruiting professionals armed with a range of AI and automation skills. Vacant roles include AI engineer, cloud/AI solution architect, power apps architect, full-stack developer and Azure cloud consultant, among others. 

Yahoo

Technology company Yahoo is looking to recruit professionals with AI and automation skills to a number of its Ireland-based teams. The Yahoo Mail division has a vacancy for multiple positions, including backend engineer II, principal software apps engineer, senior data engineer and machine learning engineer II. 

eBay cuts 800 jobs after Depop acquisition

eBay acquired Depop from Etsy for $1.2bn earlier this month.

Online marketplace eBay is laying off around 800 employees, or 6pc of its global workforce. The company said the cuts reflect its operating model needs and future priorities.

As of 31 December last year, company filings show that eBay employed approximately 12,300 people globally, 5,100 of them outside the US.

“We are taking steps to reinvest across our business and align our structure with our strategic priorities, which will affect certain roles across our workforce,” an eBay spokesperson told news publications.

“We are grateful for the contributions of the employees impacted and are committed to supporting them with care and respect.” However, eBay said it will continue to hire in key areas.

The announcement comes after eBay posted a 2025 annual net revenue of $11.1bn, up 8pc from the year before, while gross merchandise volume (GMV) was up 7pc to $79.6bn. eBay noted that fashion alone represents more than $10bn in GMV annually.

The layoffs also come on the heels of eBay’s $1.2bn acquisition of second-hand fashion marketplace Depop from Etsy. With the acquisition, eBay aims to target the under-34 consumer base, which represents a majority of Depop’s users.

Etsy purchased Depop for $1.6bn in 2021. The same year, it bought Brazilian online marketplace Elo7 for $217m. In 2019, it purchased music gear marketplace Reverb. All three have since been sold by Etsy, which has been suffering from slowed growth in recent years. Its year-over-year revenue grew by just 2.2pc in 2024, down from 7.1pc in 2023.

eBay laid off 9pc of its total workforce in 2024, or 1,000 jobs, citing macroeconomic conditions. In early 2023, it cut 500 jobs.

Company filings showed that eBay’s Irish arm, which handles its European operations, paid out more than €1.8m in redundancy costs after cutting 75 jobs in 2024.

PlayStation 5 Pro is getting a big graphics upgrade with AMD tech

The PlayStation 5 Pro is getting a notable graphics upgrade, and it comes straight from Sony’s long-running partnership with AMD. As shared by Sony in an official blog post, the console’s image-upscaling technology is about to receive a major refresh.

Grateful for the shared vision with @cerny on Project Amethyst.

🎮 Through deep co-engineering between @PlayStation and @AMD, we’re seeing that vision power the PlayStation 5 Pro, delivering higher resolution, higher frame rates, and beautiful visual fidelity for gamers.

🧠… https://t.co/vzebLidCbE

— Jack Huynh (@jackhuynh) February 27, 2026

At the center of the news is an upgraded version of PSSR (PlayStation Spectral Super Resolution), Sony’s AI-driven upscaling technology. The new version has been co-developed with AMD and is derived from the company’s latest FSR 4 upscaling technology. Sony says the improved PSSR will roll out soon, with Resident Evil Requiem confirmed as the first game to support it. The update is said to bring sharper image quality, reduced ghosting, and better detail reconstruction compared to the original version.

A glimpse at the future of console graphics

The new PSSR update is not just a small tweak. Sony describes it as the result of months of additional refinement built on top of AMD’s FSR 4 technology. That matters because upscaling has become one of the most important tools in modern gaming, allowing consoles to deliver higher resolutions and smoother frame rates without requiring dramatically more powerful hardware. Instead of brute-force rendering every pixel, AI-assisted upscaling reconstructs high-resolution images from lower-resolution frames. The result is better performance while still delivering near-native visual quality.
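To make the idea concrete, here is a minimal, purely illustrative sketch of the classical non-AI baseline that techniques like PSSR and FSR improve upon: bilinear interpolation. Every output pixel is a weighted average of its four nearest source pixels, so no new detail is created; learned upscalers instead use a trained network to reconstruct plausible high-frequency detail from low-resolution frames. This code is not Sony’s or AMD’s implementation, just the textbook baseline.

```python
import numpy as np

def bilinear_upscale(frame: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a 2-D grayscale frame by interpolating between pixels.

    The classical, non-AI baseline: each output pixel is a weighted
    average of its four nearest source pixels, so no detail beyond the
    source is created. Learned upscalers replace this averaging with a
    network trained to reconstruct high-frequency detail.
    """
    h, w = frame.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel back into source-image coordinates.
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = frame[np.ix_(y0, x0)] * (1 - wx) + frame[np.ix_(y0, x1)] * wx
    bot = frame[np.ix_(y1, x0)] * (1 - wx) + frame[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Render at a quarter of the pixel count, present at full resolution.
low_res = np.random.rand(540, 960)
high_res = bilinear_upscale(low_res, 2)
print(high_res.shape)  # (1080, 1920)
```

The performance win in both cases is the same: the expensive rendering happens at low resolution, and the upscaler fills in the rest.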

For PS5 Pro owners, this upgrade could mean clearer visuals, better performance, and more future-proof graphics as new games begin adopting the updated PSSR technology. And because the upgrade is system-level, it has the potential to benefit multiple upcoming titles rather than being limited to a single release. As Sony and AMD continue to work together, the PS5 Pro may end up feeling more like a living platform that improves over time. With the PS6 reportedly pushed further down the road, it’s reassuring to see the PS5 Pro getting a meaningful performance boost in the meantime.

Enterprise MCP adoption is outpacing security controls

AI agents now carry more access and more connections to enterprise systems than any other software in the environment. That makes them a bigger attack surface than anything security teams have had to govern before, and the industry doesn’t yet have a framework for it. “If that attack vector gets utilized, it can result in a data breach, or even worse,” said Spiros Xanthos, founder and CEO of Resolve AI, speaking at a recent VentureBeat AI Impact Series event.

Traditional security frameworks are built around human interactions. There’s not yet an agreed-upon construct for AI agents that have personas and can work autonomously, noted Jon Aniano, SVP of product and CRM applications at Zendesk, at the same event. Agentic AI is moving faster than enterprises can build guardrails — and Model Context Protocol (MCP), while decreasing integration complexity, is making the problem worse.

“Right now it’s an unsolved problem because it’s the wild, wild West,” Aniano said. “We don’t even have a defined technical agent-to-agent protocol that all companies agree on. How do you balance user expectations versus what keeps your platform safe?”

MCP still “extremely permissive”

Enterprises are increasingly hooking into MCP servers because they simplify integration between agents, tools and data. However, MCP servers tend to be “extremely permissive,” Aniano said.

They are “actually probably worse than an API,” he contended, because APIs at least have more controls in place to impose upon agents.

Today’s agents are acting on behalf of humans based on explicit permissions, thus establishing human accountability. “But you might have tens, hundreds of agents in the future with their own identity, their own access,” said Xanthos. “It becomes a very complex matrix.”
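The “complex matrix” Xanthos describes can be sketched in code. The following is an illustrative, deny-by-default permission check for agents with their own identity; all names (tools, scopes, the `AgentIdentity` type) are hypothetical, not any vendor’s API:

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Optional

@dataclass(frozen=True)
class Permission:
    tool: str                 # hypothetical tool name, e.g. "splunk.search"
    actions: FrozenSet[str]   # e.g. frozenset({"read"})
    scope: str                # resource the grant is limited to

@dataclass
class AgentIdentity:
    name: str
    acting_for: Optional[str]  # human principal, preserving accountability
    grants: List[Permission] = field(default_factory=list)

def is_allowed(agent: AgentIdentity, tool: str, action: str, scope: str) -> bool:
    """Deny by default: the agent may act only under an explicit grant."""
    return any(
        g.tool == tool and action in g.actions and g.scope == scope
        for g in agent.grants
    )

# A support agent acting for a human, read-only, limited to one index.
agent = AgentIdentity(
    name="support-bot-7",
    acting_for="alice@example.com",
    grants=[Permission("splunk.search", frozenset({"read"}), "index=tickets")],
)
print(is_allowed(agent, "splunk.search", "read", "index=tickets"))   # True
print(is_allowed(agent, "splunk.search", "write", "index=tickets"))  # False
print(is_allowed(agent, "prod.shell", "exec", "*"))                  # False
```

The point of the sketch is the shape of the problem: each new agent adds a row to this matrix, and each tool a column, which is exactly what existing human-oriented frameworks were not built to manage.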

Even as his startup is developing autonomous AI agents for site reliability engineering (SRE) and system management, he acknowledged that the industry “completely lacks the framework” for autonomous agents.

“It’s completely on us and to anybody who builds agents to figure out what restrictions to give them,” he said. And customers must be able to trust those decisions.

Some existing security tools do offer fine-grained access — Splunk, for instance, developed a method to provide access to certain indexes in underlying data stores, he noted — but most are broader and human-oriented.

“We’re trying to figure this out with existing tools,” he said. “But I don’t think they’re sufficient for the era of agents.”

[Image from the AI Impact Series event. Credit: Michael O’Donnell, ShinyRedPhoto]

Who’s accountable when an AI mis-authenticates a user?

At Zendesk and other customer relationship management (CRM) platform providers, AI is involved in a number of user interactions, Aniano noted — in fact, now it’s at a “volume and a scale that we haven’t contemplated as businesses and as a society.”

It can get tricky when AI is helping out human agents; the audit trail can become a labyrinth.

“So now you’ve got a human talking to a human that’s talking to an AI,” Aniano noted. “The human tells the AI to take action. Who’s at fault if it’s the wrong action?” This becomes even more complicated when there are “multiple pieces of AI and multiple humans” in the mix.

To prevent agents from going off the rails, Zendesk tends to be “very strict” about access and scope; however, customers can define their own guardrails based on their needs. In most cases, AI can access knowledge sources, but they’re not writing code or running commands on servers, Aniano said. If an AI does call an API, it is “declaratively designed” and sanctioned, and actions are specifically called out.

However, customer demand is flooding these scenarios and “we’re kind of holding the gates right now,” he said.

The industry must develop concrete standards for agent interactions. “We’re entering a world where, with things like MCP that can auto-discover tools, we’re going to have to create new methods of safety for deciding what tools these bots can interact with,” said Aniano.

When it comes to security, enterprises are rightly concerned when AI takes over authentication tasks, such as sending out and processing one-time passwords (OTP), SMS codes, or other two-step verification methods, he said. What happens if an AI mis-authenticates or misidentifies someone? This can lead to sensitive data leakage or open the door for attackers.

“There’s a spectrum now, and the end of that spectrum today is a human,” Aniano said. However, “the end of that spectrum tomorrow might be a specialized agent designed to do the same kind of gut feeling or human-level interaction.”

Customers themselves are on a spectrum of adoption and comfort. In certain companies — particularly financial services or other highly-regulated environments — humans still must be involved in authentication, Aniano noted. In other cases, legacy companies or old guards only trust humans to authenticate other humans.

He noted that Zendesk is experimenting with new AI agents that are “a little more connected to systems,” and working with a select group of customers around guardrailing.

Standing authorization is coming

In some future, agents may actually be more trusted than humans to do some tasks, and granted permissions “way beyond” what humans have today, Xanthos said. But we’re a long way from that, and, for the most part, the fear of something going wrong is what’s holding enterprises back.

“Which is a good fear, right? I’m not saying that it is a bad thing,” he said. Many enterprises simply aren’t yet comfortable with an agent doing all steps of a workflow or fully closing the loop by itself. They still want human review.

Resolve AI is on the cusp of giving agents standing authorization in a few cases that are “generally safe,” such as in coding; from there they’ll move to more open-ended scenarios that are not all that risky, Xanthos explained. But he acknowledged that there will always be very risky situations where AI mistakes could “mutate the state of the production system,” as he put it.

Ultimately, though: “There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?”

What security teams can do now

Both speakers pointed to interim measures available within existing tooling. Xanthos noted that some tools — Splunk among them — already offer fine-grained index-level access controls that can be applied to agents. Aniano described Zendesk’s approach as a practical starting point: declaratively designed API calls with explicitly sanctioned actions, strict access and scope limits, and human review before expanding agent permissions.

The underlying principle, as Aniano put it: “We’re always checking those gates and seeing how we can widen the aperture” — meaning don’t grant standing authorization until you’ve validated each expansion.
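That validate-before-expanding pattern can be sketched as code. The sketch below is a hypothetical illustration of the idea, not Zendesk’s or Resolve AI’s implementation: an agent earns standing authorization for a task category only after a threshold of human-validated successes, and any rejected action resets trust for that category.

```python
from collections import defaultdict

REQUIRED_VALIDATIONS = 25  # threshold before a category runs unreviewed (tunable)

class AutonomyGate:
    """Track per-category review outcomes; widen the aperture only after
    enough human-validated successes. Standing auth is denied by default."""

    def __init__(self):
        self.validated = defaultdict(int)  # category -> consecutive approvals
        self.standing = set()              # categories with standing auth

    def record_review(self, category: str, approved: bool) -> None:
        if approved:
            self.validated[category] += 1
            if self.validated[category] >= REQUIRED_VALIDATIONS:
                self.standing.add(category)
        else:
            # Any rejection resets earned trust for that category.
            self.validated[category] = 0
            self.standing.discard(category)

    def needs_human_review(self, category: str) -> bool:
        return category not in self.standing

gate = AutonomyGate()
for _ in range(25):
    gate.record_review("code-fix", approved=True)
print(gate.needs_human_review("code-fix"))     # False: standing auth earned
print(gate.needs_human_review("prod-change"))  # True: never validated
gate.record_review("code-fix", approved=False)
print(gate.needs_human_review("code-fix"))     # True: trust reset
```

The design mirrors the speakers’ framing: autonomy is granted per task category, starting with “generally safe” ones like coding, and a single bad outcome pulls the category back behind human review.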

Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’

United States Secretary of Defense Pete Hegseth directed the Pentagon to designate Anthropic as a “supply-chain risk” on Friday, sending shockwaves through Silicon Valley and leaving many companies scrambling to understand whether they can keep using one of the industry’s most popular AI models.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote in a social media post.

The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup’s AI models. In a blog post this week, Anthropic argued its contracts with the Pentagon should not allow for its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked that Anthropic agree to let the US military apply its AI to “all lawful uses” with no specific exceptions.

A supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise.

Anthropic responded in another blog post on Friday evening, saying it would “challenge any supply chain risk designation in court,” and that such a designation would “set a dangerous precedent for any American company that negotiates with the government.”

Anthropic added that it hadn’t received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.

“Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement,” the company wrote.

The Pentagon declined to comment.

“This is the most shocking, damaging, and over-reaching thing I have ever seen the United States government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy advisor for AI at the White House. “We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now.”

People across Silicon Valley chimed in on social media expressing similar shock and dismay. “The people running this administration are impulsive and vindictive. I believe this is sufficient to explain their behavior,” said Paul Graham, co-founder of the startup accelerator Y Combinator.

Boaz Barak, an OpenAI researcher, said in a post that “kneecapping one of our leading AI companies is right about the worst own goal we can do. I hope very much that cooler heads prevail and this announcement is reversed.”

Meanwhile, OpenAI CEO Sam Altman announced on Friday night that the company reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carveouts. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” said Altman. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Confused Customers

In its Friday blog post, Anthropic said a supply chain risk designation under 10 USC 3252 applies only to Department of Defense contracts directly with suppliers, and doesn’t cover how contractors use its Claude AI software to serve other customers.

Three experts in federal contracts say it’s impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company. Hegseth’s announcement “is not mired in any law we can divine right now,” says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.

Washington state is primed to let Rivian and Lucid sell EVs directly to consumers

The Rivian showroom at University Village in Seattle in 2025. (GeekWire Photo / Kurt Schlosser)

With the threat of a ballot initiative looming, a slate of auto dealers in Washington have come out in support of EV makers Rivian and Lucid Motors in their pursuit of direct sales to consumers.

Electric automakers have fought for years to change the state law that only allows Tesla to directly sell cars, operate showrooms and offer test drives to potential customers in Washington. The rule exists to prevent manufacturers from competing against franchised dealerships.

In December, Rivian began taking steps to put the issue before voters this fall — an action that could exclude dealers from having a say in how the new rules are drafted.

On Friday afternoon, the Senate Committee on Transportation will consider Senate Bill 6354, which carves out a narrow exemption that applies to Rivian and Lucid, but excludes smaller EV makers and any new entrants to the market.

Multiple local dealership owners testified on Tuesday in favor of the recently introduced measure — but they also offered caveats and concerns.

“I believe that SB 6354 provides a fair compromise and gives legal status to non-franchise EV manufacturers while keeping the guardrails in place to prevent abuse by our franchise manufacturers,” said Greg Rairdon, whose family owns 13 franchise dealerships in Western Washington.

He warned that further opening of direct sales to automakers would give manufacturers an insurmountable competitive advantage and force his and similar businesses to close up shop.

Oregon, California, Idaho, Arizona, Nevada and most other Western states allow all EV manufacturers to offer direct sales. Because Rivian and Lucid don’t offer their vehicles through traditional dealers, consumers have needed to make their purchases online and get them delivered, or travel out of state to shop.

Abigail Ramsden, western state policy manager for Rivian, gave an enthusiastic endorsement of the bill.

The legislation ensures “we can provide seamless customer service,” she said. “Rivian welcomes the opportunity to operate within a clear regulatory framework.”

Two years ago, EV auto companies and environmental groups pushed hard for legislation opening up direct sales. After that failed, Rivian launched a campaign dubbed the Washington Coalition for Consumer Choice and Innovation to put the issue to voters through a fall ballot initiative.

Rivian pledged nearly $4.7 million for the effort, and has spent $270,000, according to state records. The organization has not filed proposed language for an initiative. It would need to collect and submit at least 308,911 voter signatures by early July to be included on the November ballot.

If SB 6354 passes, those actions won’t be necessary. But not all dealers and automakers back the move.

“If the Legislature really believes that the franchise model provides benefits to communities, then why would the state ever entertain any concept like this that would create the erosion of those benefits by exempting some companies?” said Curt Augustine, senior director of state affairs of the Alliance for Automotive Innovation, in testimony this week.

Augustine also expressed his frustration in having the bill sprung upon him unannounced and without the opportunity to engage in negotiations. Craig Orlan, a government affairs director for the American Honda Motor Company, likewise shared his opposition.

“I’m disappointed to see several dealers supporting this legislation, which signals a slow erosion of a franchise system that has proven to be beneficial for manufacturers, dealers and consumers,” Orlan said. “In addition to those benefits, this model has also proven to be highly effective at selling and servicing electric vehicles.”

Beyond allowing the two EV makers to join Tesla in selling their vehicles directly to consumers, SB 6354 would:

  • Increase the vehicle dealer documentary service fee from $200 to $250 until Dec. 31, 2036.
  • Of the $50 increase, a portion would be allocated to the Electric Vehicle Account for instant rebates granted to low income households and to the Multimodal Transportation Account, which helps fund public transit, bike and pedestrian infrastructure and other travel.

SB 6354 has missed legislative cutoffs that apply to most bills, but because it includes a fee related to state funds it’s considered necessary to implement the budget and is exempt from most deadlines.

The legislative session is scheduled to end on March 12.
