The facility is part of a five-year, €5m signature innovation partnership between Medtronic and the university.
US and Irish medical device company Medtronic and the University of Galway have launched their Medical Device Prototype Hub, a specialist facility designed to support the medtech ecosystem, STEM engagement and research.
Development of the hub, which belongs to the university’s new Technology Services Directorate, is part of a five-year, €5m signature innovation partnership between Medtronic and the university.
Professor David Burn, the president of the university, said: “The launch of the Medical Device Prototype Hub at University of Galway marks a hugely significant milestone in our signature partnership with Medtronic, but it also sends a strong message to all those in the sector and all those who are driving innovation.
“University of Galway is creating the ecosystem in which our partners in research and innovation can thrive. We look forward to celebrating the breakthroughs and successes that this initiative enables.”
The Medical Device Prototype Hub forms part of the Institute for Health Discovery and Innovation, which was established at the university in 2024.
It will be further supported via collaborations with government agencies and industry leaders, aiming to create a collaborative environment that promotes innovation and regional growth in life sciences and medical technologies.
The university said that the hub has a range of expert staff to facilitate concept creation, development and manufacturing of innovative medical device prototypes.
It offers a suite of services to support early-stage medical device innovation, such as virtual and physical prototyping, enabling rapid design iteration through computer-aided design, modelling and simulation.
“The Technology Services Directorate brings together key research facilities that support fundamental research at University of Galway,” said Aoife Duffy, the head of the directorate.
“It aims to advance our research excellence by bringing together state-of-the-art core facilities and making strategic decisions on infrastructure and investment. The new prototype hub significantly enhances the innovation pathway available for the university research community and wider, and we look forward to working with Medtronic on this partnership.”
Ronan Rogers, senior R&D director at Medtronic, added: “Today’s launch of the Medical Device Prototype Hub represents an exciting next step in our long‑standing partnership with University of Galway. Medtronic has deep roots in the west of Ireland, and this facility strengthens a shared commitment to advancing research, accelerating innovation and developing the next generation of medical technologies.
“We are proud to invest in an ecosystem that not only drives technological progress but also supports talent development. This hub will unlock new avenues for discovery and accelerate the path from promising ideas to real‑world medical solutions for patients.”
Just last week (27 January), two University of Galway projects won proof-of-concept grants from the European Research Council. One of the winning Galway projects is called Concept-AM and is being led by Prof Ted Vaughan, who is also involved with the new hub.
Concept-AM aims to advance software that enables engineers to design lighter, stronger and more efficient components optimised for 3D printing across biomedical, automotive and aerospace applications, creating complex and lightweight parts with less material waste.
This week on the GeekWire Podcast: Andy Jassy tells Wall Street that Amazon is planning $200 billion in capital expenses this year, mostly to build out AI infrastructure, and investors give it a thumbs down.
Microsoft’s financial results beat expectations but the company loses $357 billion in market value in a single day after investors learn the extent of its dependence on OpenAI.
In our featured conversation, recorded at a dinner hosted by Accenture in Bellevue, GeekWire co-founder Todd Bishop sits down with computer scientist and entrepreneur Oren Etzioni to talk about AI agents, the startup landscape, the fight against deepfakes, and what good AI leadership looks like.
Etzioni is co-founder of AI agent startup Vercept, founder of the AI2 Incubator, professor emeritus at the UW Allen School, venture partner at Madrona, and the former founding CEO of the Allen Institute for AI.
Etzioni also weighed in on Moltbook, an early network for AI agents. “Moltbook is to agent networks as Myspace was to social networks,” he posted on LinkedIn. “It’s a sign of what’s to come, and will soon be supplanted by more secure and more pervasive alternatives.”
Upcoming GeekWire Podcast Live Event: Join us from 4 p.m. to 6 p.m. Thursday, Feb 12 at Fremont Brewing for a live recording of the GeekWire Podcast with Todd Bishop and John Cook. Free for Fremont Chamber members, $15 otherwise. Register here.
Autoflight’s Matrix, a game changer in the eVTOL category of the aviation world, is the first of its kind to exceed the 5-ton class, with a maximum takeoff weight of 5,700kg (about 12,566lbs). The aircraft has a 20-meter wingspan, a length of 17.1 meters and a height of just 3.3 meters, yet the cabin itself is surprisingly roomy: 5.25 meters long, 1.8 meters wide and 1.85 meters high, giving you a comfortable 13.9 cubic meters in which to spread out.
To get off the ground, Autoflight engineers designed a novel solution: a compound-wing lift-and-cruise system in a triplane style, combined with a six-arm framework that can draw on up to 20 engines, just in case. As a result, the transition from vertical launch to forward flight goes relatively smoothly. Most importantly, the aircraft can maintain flight even if one or more of those engines fail.
There are several Matrix variations to pick from. The totally electric vehicle has a range of 250 kilometers (155 miles) for short journeys. The hybrid-electric variant, as expected, has a longer range of 1,500 kilometers (932 miles). Let’s just say that getting from A to B takes precedence over speed.
As one might expect, it comes with a host of amenities: 10 comfortable business class seats or 6 VIP seats, all with climate control, beautiful ambient lighting, wide windows to enjoy the view from above, and, yeah, a proper bathroom because, you know, priorities. And if you need to carry some luggage, this aircraft can carry up to 1,500kg (3,300lb) via the large forward opening door, which is also beneficial for the hybrid system.
Autoflight tested the Matrix prototype at its low-altitude test site in Kunshan, China, in February 2026. To prove the concept, the company flew a full demonstration in which the eVTOL lifted off vertically, transitioned from vertical to cruising mode, and then descended vertically, making the Matrix the first 5-ton eVTOL to achieve this feat. Furthermore, the eVTOL operated alongside the company’s smaller, 2-ton CarryAll cargo eVTOL.
So, what does Autoflight’s Matrix offer? Well, the company believes it’s great for regional transit, large freight, and emergency response operations. As Tian Yu, the company’s founder and CEO, says, it will be a game changer, propelling eVTOLs beyond the normal short excursions and light cargoes. He believes that by improving the capabilities of eVTOLs, they will be able to reduce costs per seat or ton, which might be a significant advantage. [Source]
If you’re going to upgrade your TV for the Super Bowl, this is the kind of deal that actually changes the experience, not just the number on the spec sheet. A 77-inch OLED is the “everyone on the couch can see everything clearly” size, and OLED is the tech that makes the biggest difference on broadcast-style content: strong contrast, clean highlights, and better-looking motion in fast action.
Right now, the Samsung 77-inch S90F Series OLED (2025) is $1,999.99, which is $1,500 off the $3,499.99 compared value. The key detail is the deadline: the deal ends February 9, 2026, so this is very much a “plan your setup now” situation.
What you’re getting
This is a 77-inch 4K OLED with Samsung’s Tizen smart platform and Samsung Vision AI branding around picture processing and smart features. The practical benefit is simple: OLED’s pixel-level control delivers deep blacks and strong contrast, which helps games look more dimensional, especially in mixed lighting.
For the Super Bowl specifically, a screen this size is great for the details that matter: jersey textures, sideline action, the ball in motion, and those quick camera cuts that can look smeary on older TVs. With a modern OLED panel, the picture tends to look cleaner and more premium without you needing to crank settings to extremes.
Why it’s worth it
The real story here is value per inch for a premium display. At $1,999.99, you’re getting into “big statement TV” territory while still landing in a price band that’s far more approachable than most 77-inch OLED pricing historically.
It also helps that the timing lines up perfectly with a common buying moment. If you host, even casually, a TV like this does a lot of the heavy lifting. You do not need fancy décor or a full surround system to make the room feel upgraded. A 77-inch OLED becomes the focal point instantly.
The bottom line
At $1,999.99, the Samsung 77-inch S90F OLED is a standout deal for anyone who wants a huge, premium screen ahead of the Super Bowl. The size is legitimately immersive, OLED is a visible upgrade, and saving $1,500 is the kind of discount that justifies moving now instead of “someday.” Just remember the deadline: this deal ends February 9, 2026.
Whether you’re handing off an AirTag or trying to resolve pairing issues, knowing how to properly reset Apple’s item tracker ensures you can set it up on a new iPhone easily.
How to factory reset a second generation AirTag
Every AirTag can be associated with only a single Apple Account. If you want to gift your AirTag to another person, you’ll need to reset it. While this does take a little effort, the whole process can be done in about a minute. Before you get started, we highly recommend that all small children and pets are out of the area while you reset an AirTag. AirTags, for all their usefulness, are choking hazards and can cause internal damage if they pass through the digestive tract.
A state-sponsored threat group has compromised dozens of networks of government and critical infrastructure entities in 37 countries in global-scale operations dubbed ‘Shadow Campaigns’.
Between November and December last year, the actor also engaged in reconnaissance activity targeting government entities connected to 155 countries.
According to Palo Alto Networks’ Unit 42 division, the group has been active since at least January 2024, and there is high confidence that it operates from Asia. Until definitive attribution is possible, the researchers track the actor as TGR-STA-1030/UNC6619.
‘Shadow Campaigns’ activity focuses primarily on government ministries, law enforcement, border control, finance, trade, energy, mining, immigration, and diplomatic agencies.
Unit 42 researchers confirmed that the attacks successfully compromised at least 70 government and critical infrastructure organizations across 37 countries.
This includes organizations engaged in trade policy, geopolitical issues, and elections in the Americas; ministries and parliaments across multiple European states; the Treasury Department in Australia; and government and critical infrastructure in Taiwan.
Targeted countries (top) and confirmed compromises (bottom). Source: Unit 42
The list of countries with targeted or compromised organizations is extensive, with a regional focus and timing that appear to have been driven by specific geopolitical events.
The researchers say that during the U.S. government shutdown in October 2025, the threat actor showed increased interest in scanning entities across North, Central and South America (Brazil, Canada, Dominican Republic, Guatemala, Honduras, Jamaica, Mexico, Panama, and Trinidad and Tobago).
Significant reconnaissance activity was discovered against “at least 200 IP addresses hosting Government of Honduras infrastructure” just 30 days before the national election, as both candidates indicated willingness to restore diplomatic ties with Taiwan.
Unit 42 assesses that the threat group compromised the following entities:
Brazil’s Ministry of Mines and Energy
the network of a Bolivian entity associated with mining
two of Mexico’s ministries
government infrastructure in Panama
an IP address that geolocates to a Venezolana de Industria Tecnológica facility
government entities in Cyprus, Czechia, Germany, Greece, Italy, Poland, Portugal, and Serbia
an Indonesian airline
multiple Malaysian government departments and ministries
a Mongolian law enforcement entity
a major supplier in Taiwan’s power equipment industry
a Thai government department (likely for economic and international trade information)
critical infrastructure entities in the Democratic Republic of the Congo, Djibouti, Ethiopia, Namibia, Niger, Nigeria, and Zambia
Unit 42 also believes that TGR-STA-1030/UNC6619 tried to connect over SSH to infrastructure associated with Australia’s Treasury Department, Afghanistan’s Ministry of Finance, and Nepal’s Office of the Prime Minister and Council of Ministers.
Apart from these compromises, the researchers found evidence indicating reconnaissance activity and breach attempts targeting organizations in other countries.
They say that the actor scanned infrastructure connected to the Czech government (Army, Police, Parliament, Ministries of Interior, Finance, Foreign Affairs, and the president’s website).
The threat group also tried to connect to European Union infrastructure, targeting more than 600 IP addresses hosting *.europa.eu domains. In July 2025, the group focused on Germany and initiated connections to more than 490 IP addresses that hosted government systems.
Shadow Campaigns attack chain
Early operations relied on highly tailored phishing emails sent to government officials, with lures commonly referencing internal ministry reorganization efforts.
The emails embedded links to malicious archives with localized naming hosted on the Mega.nz storage service. The compressed files contained a malware loader called Diaoyu and a zero-byte PNG file named pic1.png.
Sample of the phishing email used in Shadow Campaigns operations. Source: Unit 42
Unit 42 researchers found that the Diaoyu loader would fetch Cobalt Strike payloads and the VShell framework for command-and-control (C2) only under certain conditions that amount to analysis-evasion checks.
“Beyond the hardware requirement of a horizontal screen resolution greater than or equal to 1440, the sample performs an environmental dependency check for a specific file (pic1.png) in its execution directory,” the researchers say.
They explain that the zero-byte image acts as a file-based integrity check. In its absence, the malware terminates before inspecting the compromised host.
To evade detection, the loader looks for running processes from the following security products: Kaspersky, Avira, Bitdefender, Sentinel One, and Norton (Symantec).
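For defenders, these artifacts suggest a simple triage heuristic. The Python sketch below is an illustration only, not something taken from Unit 42’s report: it walks a directory tree and flags any folder that contains a zero-byte pic1.png sitting next to an executable, mirroring the file-based check the researchers describe. The extension list is an assumption, and a hit is just a lead for further investigation, not proof of compromise.

```python
# Minimal triage sketch (illustrative only, not taken from Unit 42's report):
# flag directories that contain a zero-byte "pic1.png" next to an executable,
# mirroring the file-based evasion check described for the Diaoyu loader.
import os
import sys

EXEC_EXTENSIONS = {".exe", ".dll", ".scr", ".com"}  # assumption: Windows payloads

def suspicious_dirs(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        # Find the marker file regardless of case.
        marker = next((n for n in filenames if n.lower() == "pic1.png"), None)
        if marker is None:
            continue
        try:
            zero_byte = os.path.getsize(os.path.join(dirpath, marker)) == 0
        except OSError:
            continue
        has_executable = any(
            os.path.splitext(n)[1].lower() in EXEC_EXTENSIONS for n in filenames
        )
        if zero_byte and has_executable:
            hits.append(dirpath)
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in suspicious_dirs(root):
        print(f"[!] possible Diaoyu staging directory: {path}")
```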
Apart from phishing, TGR-STA-1030/UNC6619 also exploited at least 15 known vulnerabilities to achieve initial access. Unit 42 found that the threat actor leveraged security issues in SAP Solution Manager, Microsoft Exchange Server, D-Link, and Microsoft Windows.
New Linux rootkit
TGR-STA-1030/UNC6619’s toolkit used for Shadow Campaigns activity is extensive and includes webshells such as Behinder, Godzilla, and Neo-reGeorg, as well as network tunneling tools such as GO Simple Tunnel (GOST), Fast Reverse Proxy Server (FRPS), and IOX.
However, researchers also discovered a custom Linux kernel eBPF rootkit called ‘ShadowGuard’ that they believe to be unique to the TGR-STA-1030/UNC6619 threat actor.
“eBPF backdoors are notoriously difficult to detect because they operate entirely within the highly trusted kernel space,” the researchers explain.
“This allows them to manipulate core system functions and audit logs before security tools or system monitoring applications can see the true data.”
ShadowGuard conceals malicious process information at the kernel level, hiding up to 32 PIDs from standard Linux monitoring tools through syscall interception. It can also hide files and directories named swsecret from manual inspection.
Additionally, the malware features a mechanism that lets its operator define processes that should remain visible.
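PID hiding of this kind is typically hunted with cross-view consistency checks. The Python sketch below is a generic illustration of that idea, not a ShadowGuard-specific detector: it compares the PIDs visible when enumerating /proc against PIDs that answer a signal-0 probe, in the spirit of tools such as unhide. It assumes a Linux host, and because processes start and exit constantly, a mismatch should be re-checked before being treated as evidence.

```python
# Cross-view PID check (generic illustration, not a ShadowGuard-specific detector):
# compare PIDs visible via /proc directory enumeration with PIDs that respond
# to a signal-0 probe; a running process hidden from the listing shows up only
# in the probe. Linux only; transient processes can cause false positives.
import os

def max_pid():
    try:
        with open("/proc/sys/kernel/pid_max") as f:
            return int(f.read())
    except OSError:
        return 32768  # conservative fallback

def listed_pids():
    """PIDs visible by listing /proc (roughly what ps and top rely on)."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def probed_pids():
    """PIDs that exist according to kill(pid, 0), whether listed in /proc or not."""
    alive = set()
    for pid in range(1, max_pid() + 1):
        try:
            os.kill(pid, 0)
            alive.add(pid)
        except ProcessLookupError:
            continue            # no such process
        except PermissionError:
            alive.add(pid)      # exists, but owned by another user
    return alive

if __name__ == "__main__":
    hidden = probed_pids() - listed_pids()
    for pid in sorted(hidden):
        print(f"[!] PID {pid} answers signals but is missing from the /proc listing")
    if not hidden:
        print("No hidden PIDs found by this simple cross-view check.")
```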
The infrastructure used in Shadow Campaigns relies on victim-facing servers hosted with legitimate VPS providers in the U.S., Singapore, and the UK, as well as relay servers for traffic obfuscation, and residential proxies or Tor for proxying.
The researchers noticed the use of C2 domains that would appear familiar to the target, such as a .gouv domain extension for French-speaking countries, or the dog3rj[.]tech domain in attacks across Europe.
“It’s possible that the domain name could be a reference to ‘DOGE Jr,’ which has several meanings in a Western context, such as the U.S. Department of Government Efficiency or the name of a cryptocurrency,” the researchers explain.
According to Unit 42, TGR-STA-1030/UNC6619 represents an operationally mature espionage actor who prioritizes strategic, economic, and political intelligence and has already impacted dozens of governments worldwide.
Unit 42’s report includes indicators of compromise (IoCs) to help defenders detect and block these attacks.
In the grand scheme of things — the wanton cruelty, the routine violations of rights, the actual fucking murders — this may only seem like a blip on the mass deportation continuum. But this report from Dell Cameron for Wired is still important. It not only explains why federal officers are approaching people with cellphones drawn nearly as often as they’re approaching them with guns drawn, but also shows the administration is yet again pretending it’s a law unto itself.
On Wednesday, the Department of Homeland Security published new details about Mobile Fortify, the face recognition app that federal immigration agents use to identify people in the field, undocumented immigrants and US citizens alike. The details, including the company behind the app, were published as part of DHS’s 2025 AI Use Case Inventory, which federal agencies are required to release periodically.
The inventory includes two entries for Mobile Fortify—one for Customs and Border Protection (CBP), another for Immigration and Customs Enforcement (ICE)—and says the app is in the “deployment” stage for both. CBP says that Mobile Fortify became “operational” at the beginning of May last year, while ICE got access to it on May 20, 2025. That date is about a month before 404 Media first reported on the app’s existence.
A lot was going on last May, in terms of anti-migrant efforts and the casual refusal to recognize long-standing constitutional rights. That was the same month immigration officers were told they could enter people’s homes while only carrying self-issued “administrative warrants,” which definitely aren’t the same thing as the judicial warrants the government actually needs to enter areas afforded the utmost Fourth Amendment protection.
The app federal officers are using is made by NEC, a tech company that’s been around since long before ICE and CBP became the mobile atrocities they are. Prior to this revelation, NEC had only been associated with developing biometric software with an eye on crafting something that could be swiftly deployed and just as quickly scaled to meet the government’s needs. This particular app had never been made public before.
ICE claims it’s not a direct customer. It’s only a beneficiary of the CBP’s existing contract with NEC. That’s a meaningless distinction when multiple federal agencies have been co-opted into the administration’s bigoted push to rid the nation of brown people.
As is always the case (and this precedes Trump 2.0), CBP and ICE are rolling out tech far ahead of the privacy impact paperwork that’s supposed to be filed before anything goes live.
While CBP says there are “sufficient monitoring protocols” in place for the app, ICE says that the development of monitoring protocols is in progress, and that it will identify potential impacts during an AI impact assessment. According to guidance from the Office of Management and Budget, which was issued before the inventory says the app was deployed for either CBP or ICE, agencies are supposed to complete an AI impact assessment before deploying any high-impact use case. Both CBP and ICE say the app is “high-impact” and “deployed.”
This is standard operating procedure for the federal government. The FBI and DEA were deploying surveillance tech well ahead of Privacy Impact Assessments (PIAs) as far back as [oh wow] 2014, while the nation was still being run by someone who generally appeared to be a competent statesman. That nothing has changed since makes it clear this problem is endemic.
But things are a bit worse now that Trump is running an administration stocked with fully-cooked MAGA acolytes. In the past, our rights might have received a bit of lip service and the occasional congressional hearing about the lack of required Privacy Impact Assessments.
None of that will be happening now. No one in the DHS is even going to bother to apply pressure to those charged with crafting these assessments. And no one will threaten (much less terminate) the tech deployment until these assessments have been completed. I would fully expect this second Trump term to come and go without the delivery of legally-required paperwork, especially since oversight of these agencies will be completely nonexistent as long as the GOP holds a congressional majority.
We lose. The freshly stocked swamp wins. And while it’s normal to expect the federal government to bristle at the suggestion of oversight, it’s entirely abnormal to allow an administration that embraces white Christian nationalism to act as though the only holy text any Trump appointee subscribes to was handed down by Aleister Crowley: Do what thou wilt. That is the whole of the law.
Most 3D design software requires visual dragging and rotating—posing a challenge for blind and low-vision users. As a result, a range of hardware design, robotics, coding, and engineering work is inaccessible to interested programmers. A visually-impaired programmer might write great code. But because of the lack of accessible modeling software, the coder can’t model, design, and verify physical and virtual components of their system.
However, new 3D modeling tools are beginning to change this equation. A new prototype program called A11yShape aims to close the gap. There are already code-based tools that let users describe 3D models in text, such as the popular OpenSCAD software. Other recent large-language-model tools generate 3D code from natural-language prompts. But even with these, blind and low-vision programmers still depend on sighted feedback to bridge the gap between their code and its visual output.
Blind and low-vision programmers previously had to rely on a sighted person to visually check every update of a model to describe what changed. But with A11yShape, blind and low-vision programmers can independently create, inspect, and refine 3D models without relying on sighted peers.
A11yShape does this by generating accessible model descriptions, organizing the model into a semantic hierarchy, and ensuring every step works with screen readers.
The project began when Liang He, assistant professor of computer science at the University of Texas at Dallas, spoke with his low-vision classmate who was studying 3D modeling. He saw an opportunity to turn his classmate’s coding strategies, learned in a 3D modeling for blind programmers course at the University of Washington, into a streamlined tool.
“I want to design something useful and practical for the group,” he says. “Not just something I created from my imagination and applied to the group.”
Re-imagining Assistive 3D Design With OpenSCAD
A11yShape assumes the user is running OpenSCAD, the script-based 3D modeling editor. The program builds on OpenSCAD, connecting each component of the modeling workflow across three application UI panels.
OpenSCAD allows users to create models entirely through typing, eliminating the need for clicking and dragging. Other common graphics-based user interfaces are difficult for blind programmers to navigate.
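To make that text-only workflow concrete, here is a minimal sketch (an illustration, not part of A11yShape itself): a short Python script writes a small .scad model as plain text and asks the openscad command-line tool, assuming it is installed and on the PATH, to render it headlessly to an STL file that could then be 3D printed.

```python
# Minimal sketch of a fully text-based OpenSCAD workflow (not part of A11yShape).
# Assumes the "openscad" command-line binary is installed and on the PATH.
import subprocess
from pathlib import Path

# A simple model described entirely in text: a plate with a hole through it.
SCAD_SOURCE = """
difference() {
    cube([40, 40, 5], center = true);          // base plate
    cylinder(h = 10, r = 3, center = true);    // hole through the middle
}
"""

def render_to_stl(scad_code: str, out_path: str = "model.stl") -> None:
    src = Path("model.scad")
    src.write_text(scad_code)
    # "openscad -o <output> <input>" performs a headless render, no GUI needed.
    subprocess.run(["openscad", "-o", out_path, str(src)], check=True)

if __name__ == "__main__":
    render_to_stl(SCAD_SOURCE)
    print("Wrote model.stl, ready for a slicer or a tactile print.")
```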
A11yShape introduces an AI Assistance Panel, where users can submit real-time queries to ChatGPT-4o to validate design decisions and debug existing OpenSCAD scripts.
A11yShape’s three panels synchronize code, AI descriptions, and model structure so blind programmers can discover how code changes affect designs independently. Source: Anhong Guo, Liang He, et al.
If a user selects a piece of code or a model component, A11yShape highlights the matching part across all three panels and updates the description, so blind and low-vision users always know what they’re working on.
User Feedback Improved Accessible Interface
The research team recruited four participants with a range of visual impairments and programming backgrounds. The team asked the participants to design models using A11yShape and observed their workflows.
One participant, who had never modeled before, said the tool “provided [the blind and low-vision community] with a new perspective on 3D modeling, demonstrating that we can indeed create relatively simple structures.”
Participants also reported that long text descriptions still make it hard to grasp complex shapes, and several said that without eventually touching a physical model or using a tactile display, it was difficult to fully “see” the design in their mind.
To evaluate the accuracy of the AI-generated descriptions, the research team recruited 15 sighted participants. On a 1–5 scale, the descriptions earned average scores between about 4.1 and 5 for geometric accuracy, clarity and avoiding hallucinations, suggesting the AI is reliable enough for everyday use.
A11yShape, a new assistive program, helps blind and low-vision programmers verify the design of their models. Source: Anhong Guo, Liang He, et al.
The feedback will help to inform future iterations—which He says could integrate tactile displays, real-time 3D printing, and more concise AI-generated audio descriptions.
Beyond its applications in the professional computer programming community, He noted that A11yShape also lowers the barrier to entry for blind and low-vision computer programming learners.
“People like being able to express themselves in creative ways … using technology such as 3D printing to make things for utility or entertainment,” says Stephanie Ludi, director of DiscoverABILITY Lab and professor in the department of computer science and engineering at the University of North Texas. “Persons who are blind and visually impaired share that interest, with A11yShape serving as a model to support accessibility in the maker community.”
The team presented A11yShape in October at the ASSETS conference in Denver.
Aiming to encourage app development and celebrate the most creative participants, Apple’s Swift Student Challenge is back and the winners will get to visit Apple Park.
Apple’s 2026 Swift Student Challenge is open for applications — image credit: Apple
As it has done every year since 2020, Apple is running a Swift Student Challenge. Applications for the contest to find innovative new app developers are open now and close on February 28, 2026. Applications are sought from students working with Swift Playground 4.6 or Xcode 26, or later. As well as having only around two weeks to apply, applicants must meet many eligibility requirements.
Everyone knows Super Bowl commercials are expensive, bombastic, and designed to be talked about. What we didn’t expect was an AI startup using the biggest ad stage of the year to throw shade at a rival’s advertising strategy. That’s exactly what Anthropic has done. The company bought Super Bowl airtime to broadcast a simple message: “Ads are coming to AI, but not to Claude.” Its ads depict a chatbot spitting product pitches mid-conversation, ending with a clear contrast to its own ad-free promise. Even ads these days aren’t what they used to be.
The 2026 Super Bowl between the New England Patriots and the Seattle Seahawks will air on NBC this Sunday, Feb. 8. The game will also stream on Peacock. If you don’t have NBC over the air and don’t subscribe to Peacock, there are still ways to watch Super Bowl LX — and Bad Bunny’s history-making Halftime Show — for free. Here’s how to tune in.
How to watch Super Bowl LX free:
Date: Sunday, Feb. 8
Time: 6:30 p.m. ET
Location: Levi’s Stadium in Santa Clara, Calif.
TV channel: NBC, Telemundo
Streaming: Peacock, DirecTV, NFL+ and more
2026 Super Bowl game channel
Super Bowl LX will air on NBC. A Spanish-language broadcast is available on Telemundo.
In addition to carrying NBC’s Super Bowl broadcast, DirecTV’s Entertainment tier gets you access to loads of channels where you can tune in to college and pro sports throughout the year, including ESPN, TNT, ACC Network, Big Ten Network, CBS Sports Network, and, depending on where you live, local affiliates for ABC, CBS, Fox and NBC.
Whichever package you choose, you’ll get unlimited Cloud DVR storage and access to ESPN Unlimited.
DirecTV’s Entertainment tier package is $89.99/month. But you can currently try all this out for free for 5 days. If you’re interested in trying out a live-TV streaming service for football, but aren’t ready to commit, we recommend starting with DirecTV.
Peacock is the streaming home of the 2026 Super Bowl.
While a regular Peacock subscription begins at $10.99 a month for a Premium Plan and goes up to $16.99 for the ad-free Premium Plus plan, you can get an ad-supported subscription for free if you’re a Walmart+ subscriber.
Walmart+ members actually get their choice between Paramount+ or Peacock included in their membership at no additional cost. A monthly subscription to Walmart+ costs $12.99, and an annual plan usually costs $98. But you can try the service out totally free. Beyond free Peacock, Walmart+ has additional perks like five free months of Apple Music, discounts on Cinemark movie theater memberships, free shipping and delivery on Walmart purchases, discounts on gas and much more.
Instacart+ subscribers are able to get an annual Peacock Premium plan (a $109.99 value) for free. After a free 14-day trial, Instacart+ plans cost $99/year, meaning you’ll save more on Peacock simply by subscribing to the delivery service, but you’ll get tons of extras, like free grocery and restaurant delivery and a free subscription to the New York Times Cooking app.
What time is the 2026 Super Bowl?
The 2026 Super Bowl kicks off at 6:30 p.m. ET/3:30 p.m. PT on Sunday, Feb. 8. Green Day will be performing a pre-game special starting at 6 p.m. ET.
Who is playing in the Super Bowl?
The AFC champions, the New England Patriots, will play the NFC champions, the Seattle Seahawks.
Where is the 2026 Super Bowl being played?
The 2026 Super Bowl will be held at Levi’s Stadium in Santa Clara, Calif., home of the San Francisco 49ers.
Who is performing at the 2026 Super Bowl halftime show?
Bad Bunny is headlining the 2026 Super Bowl halftime performance. You can expect that show to begin after the second quarter, likely between 8-8:30 p.m. ET. Green Day will perform a pre-game show starting at 6 p.m. ET. If you’re tuning in before the game, singer Charlie Puth will perform the National Anthem, Brandi Carlile is scheduled to sing “America the Beautiful,” and Grammy winner Coco Jones will perform “Lift Every Voice and Sing.”