Tech
Apple’s M5 Chip Is Here & It’s an AI Monster
Just imagine this: your laptop isn’t just following your work, it’s ahead of you! Editing those 8K videos feels as easy as scrolling through your socials, and those clever AI tools whip up pictures that look real in just seconds, no stress at all. That is the power we are talking about with Apple’s new M5 chip, announced this very week inside three powerful machines: the new 14-inch MacBook Pro, the iPad Pro (in both 11- and 13-inch sizes), and the upgraded Apple Vision Pro.
No big show, just some news that dropped on October 15, 2025, like a proper surprise. But don’t think this is a small thing—this is Apple’s biggest jump in power yet, made for a future run by AI. With pre-orders open and devices arriving from October 22, this M5 isn’t just a small improvement; it’s a complete change of the game. Let’s get into the details and see what makes it so special.
The M5 Chip: What’s Inside This AI Beast?
The star of the show is the M5, Apple’s newest System on a Chip (SoC) for its devices, built with the latest 3-nanometer technology. This isn’t a small step; it’s a giant leap, packing more power into a tiny space for amazing speed and efficiency.![]()
Here’s the breakdown:
- CPU Powerhouse: A 10-core setup (four strong cores and six efficient ones) gives you up to 15% more speed than the M4. Whether you’re coding, designing 3D models, or have too many browser tabs open, the M5 handles it all with ease.
- GPU Revolution: The 10-core GPU is the real star. It has a new design with special Neural Accelerators in every core. This means it can do AI work over 4 times faster than the M4—think 45% faster graphics for everything, including beautiful lighting in games. The memory bandwidth is also up nearly 30%, so your devices can run big AI programs directly on them.
- Neural Engine on Fire: The 16-core Neural Engine is unbelievably fast, handling over 50 trillion tasks per second. That’s 3.5 times faster for AI jobs. From translating languages in Messages to creating images, it’s ready for all the new Apple Intelligence features.
- Efficiency is Key: Even with all this power, the M5 is very gentle on your electricity bill, giving you up to 24 hours of battery on the MacBook Pro. It’s a proper efficient chip.
These aren’t just numbers on paper; they mean real results. Apple tested the M5, and it completely outpaces older M1 machines, all while staying cool and quiet. For creators and anyone who uses their device heavily, it’s like getting a turbo boost for your work.
A Look at the Devices: How M5 Changes Everything
Apple didn’t change the look of these devices—they are the same good designs you know. But putting the M5 inside? That changes everything.
Let’s see.
1. 14-Inch MacBook Pro: The Pro Laptop, Now Even Better

Starting from $1,599, this model keeps the brilliant mini-LED display, great speakers, and 12MP webcam. But the M5 makes it fly:
- Performance Perks: Graphics are 1.6x faster for video editing, the storage is 2x quicker for opening files, and AI tasks are 3.5x faster.
- Battery for Days: Up to 24 hours of web browsing or video—enough power for a whole day without a charger.
- More Space: You can now get it with up to 4TB of storage.
If you’re coming from an older Intel or M1 Mac, this is an easy decision. If you have an M4, just think about how much you need those AI features.
2. iPad Pro (11- and 13-Inch): Thinner, Smarter, A Real Powerhouse

The M5 iPad Pro keeps its super-thin design but gets a major brain upgrade. Starting at $999, it’s a dream for artists and people who do many things at once:
- AI & Graphics Leap: AI features are 50% faster with iPadOS 26. The GPU makes professional apps run much smoother.
- Better Connections: A new chip brings the latest Wi-Fi 7 and Bluetooth, and much faster cellular internet.
- Smarter Software: iPadOS 26 adds much better multitasking, making it a true replacement for a laptop.
If your current iPad feels slow for serious work, this is the one to get. The battery lasts longer too, perfect for working on the go.
3. Apple Vision Pro: Amazing AI, Now More Comfortable
At $3,499, the M5 Vision Pro is the upgrade we were waiting for. It swaps the M2 for the M5, making the mixed-reality experience even better:
- Visual & Performance Boost: 10% more pixels, a super-smooth 120Hz screen, and AI that works twice as fast.
- Comfort Upgrade: The new Dual Knit Band is softer and fits better, fixing the earlier issue of it being uncomfortable for long periods.
- visionOS 26 Magic: Widgets you can place anywhere in your space, better video calls, and amazing interactive environments.
The battery lasts longer now, and new apps for work make it a powerful tool for designers and filmmakers.
| M5 vs. The Old Ones: Key Improvements | M5 | M4 | M1 |
|---|---|---|---|
| AI Performance | 3.5x faster | Baseline | 6x slower |
| GPU Compute (Peak) | 4x higher | Baseline | N/A |
| Graphics Speed | Up to 45% faster | Baseline | 2x+ faster |
| Memory Bandwidth | 153GB/s | 120GB/s | ~68GB/s |
| Battery Life (MacBook Pro) | Up to 24 hrs | Up to 22 hrs | Up to 20 hrs |
Why the M5 is a Big Deal: Powering the AI Revolution
This launch is Apple’s master plan for the AI competition. With special AI parts built right into the chip, the M5 can do complex tasks right on your device. This keeps your private information safe and makes everything feel instant. It’s different from other AI that needs the internet to work.
For creators, it means faster work. For businesses, it saves on power. In a world full of AI, the M5 makes Apple the king of powerful, private, and local AI.
Of course, the price is still high for many, and the even more powerful M5 Pro and Max versions are coming next year for those who need extreme power.
What’s Next? The M5’s Future in 2026
The M5 is just the beginning. The rumours for 2026 are exciting:
- MacBook Air M5: Coming soon, with all-day battery life.
- Mac Studio & Mac Mini M5 Pro/Max: Early 2026, for the real professionals.
- iMac & Mac Pro M5: Maybe with new, bigger screens.
- Touchscreen Macs? People are talking about MacBooks with touchscreens finally arriving.
So, Should You Upgrade? Here’s How to Decide
- Yes, Upgrade If: You have an M1 or older (you’ll see a huge difference), you need AI speed for work, or you want that full-day battery.
- Think About It If: You have an M4—the jump is good, but see if you really need it.
- Wait If: You want the most powerful versions (coming soon) or you’re watching your budget (wait for sales).
- Smart Tip: You can trade in your old device with Apple for credit. And if you can, go and try the Vision Pro in a store—it’s an experience!
The M5 era is about giving you smooth, powerful technology that lets you create without any limits.
So, what will you do first with an M5—edit a masterpiece or build a new virtual world? Let us know down there in the comments, share your thoughts on X with #M5Debut, or tag your friend who needs to see this. Let’s talk about it
Tech
Tinder looks to AI to help fight ‘swipe fatigue’ and dating app burnout
Tinder is turning to a new AI-powered feature, Chemistry, to help it reduce so-called “swipe fatigue,” a growing problem among online dating users who are feeling burned out and are in search of better outcomes.
Introduced last quarter, the Match-owned dating app said that Chemistry leverages AI to get to know users through questions and, with permission, accesses their Camera Roll on their phone to learn more about their interests and personality.
On Match’s Q4 2026 earnings call, one analyst from Morgan Stanley asked for an update on the product’s success so far.
Match CEO Spencer Rascoff noted that Chemistry was still only being tested in Australia for the time being, but said that the feature offered users an “AI way to interact with Tinder.” He explained that users could choose to answer questions to then “get just a single drop or two, rather than swiping through many, many profiles.”
In addition to Chemistry’s Q&A and Camera Roll features, the company plans to use the AI feature in other ways going forward, the CEO also hinted.
Most importantly, Rascoff said the feature is designed to combat swipe fatigue — a complaint from users who say they have to swipe through too many profiles to find a potential match.
The company’s turn toward AI comes as Tinder and other dating apps have been experiencing paying subscriber declines, user burnout, and declines in new sign-ups.
Techcrunch event
Boston, MA
|
June 23, 2026
In the fourth quarter, new registrations on Tinder were still down 5% year-over-year, and its monthly active users were down 9%. These numbers show some slight improvements over prior quarters, which Match attributes to AI-driven recommendations that change the order of profiles shown to women, and other product experiments.
Match said that this year, it aims to address common Gen Z pain points, including better relevance, authenticity and trust. To do so, the company said it is redesigning discovery to make it less repetitive and is using other features, like Face Check — a facial recognition verification system — to cut down on bad actors. On Tinder, the latter led to a more than 50% reduction in interactions with bad actors, Match noted.
Tinder’s decision to start moving away from the swipe toward more targeted, AI-powered recommendations could have a significant impact on the dating app. Today, the swipe method, which was popularized by Tinder, encourages users to think that they’re choosing a match from an endless number of profiles. But in reality, the app presents the illusion of choice, since matches have to be two-way to connect, and even then, a spark is not guaranteed.
The company delivered an earnings beat in the fourth quarter, with revenue of $878 million and EPS of 83 cents per share above Wall Street estimates. But weak guidance saw the stock decline on Tuesday, before rising again in premarket trading on Wednesday.
Beyond AI, Match will also increase its product marketing to help boost Tinder engagement. The company is committing to $50 million in Tinder marketing spend, which will include creator campaigns on TikTok and Instagram, where users will make claims that “Tinder is cool again,” Rascoff noted.
Tech
How Researchers Are Putting Students at the Center of Edtech Design
When researchers ask students to test educational technology products, a consistent pattern emerges: Tools that impress adults in demos often fall flat with the students who actually use them. Recent studies show that even well-designed products can frustrate students or create unnecessary mental strain when technical complexity gets in the way of learning. The disconnect means even promising tools aren’t reaching their full potential in real classrooms.
This gap between adult expectations and student experience is exactly what ISTE+ASCD, the Joan Ganz Cooney Center at Sesame Workshop and the youth research organization In Tandem aim to close through their collaborative work on student usability in edtech.
EdSurge spoke with three leaders from this collaborative effort: Vanessa Zuidema, co-founder and director of customer success at In Tandem; Dr. Medha Tare, senior director of research at the Joan Ganz Cooney Center; and Dr. Brandon Olszewski, senior director of research and innovation at ISTE+ASCD.
“To help clarify what matters most when it comes to student usability, we knew we needed to work with these partners to reach students, check our findings against others in the space and develop guidance for edtech providers,” Olszewski explains. “Sesame has extensive experience designing for young people and balancing high-quality learning with engagement. In Tandem connects young people with companies and organizations that need their voices at the table. ISTE+ASCD sits at the intersection of educational technology, learning design, and curriculum and instruction.”
Ahead of releasing a formal student usability framework later this year, the three organizations shared early findings about what students actually want from educational technology — and what it means for schools and developers.
EdSurge: Why focus specifically on student usability, and what does that mean in practice?
Tare: The field is very good at evaluating edtech from an adult perspective: alignment, evidence, safety, interoperability. But none of those frameworks capture what it’s like to be a kid trying to use a tool in real time.
In our research with students and product developers, we often saw cognitive load issues: students struggle with instructions, navigation or unclear affordances. We saw motivation issues: kids shut down when a feature feels intimidating or frustrating. Many existing evaluations don’t examine how struggling, multilingual or reluctant readers experience the same product quite differently.
Zuidema: While districts, school leaders and teachers all play critical roles, ultimately the student experience determines whether learning actually happens. Yet too often, product development processes overlook the people most affected: students themselves.
How does centering student voice change the way edtech products are designed?
Tare: You can count on young people to surface things adults would never catch. Kids are the experts in fun, not adults! In one case, an AI writing companion talked too much, repeated questions and “felt like a bot” to kids. Students redesigned the personality system to be less chatty, more responsive and more playful, and engagement shot up the next day.
In another case, developers initially assumed a read-aloud feature would help with assessment, but kids were often too anxious or unsure to speak. Student discomfort fundamentally shifted how developers approached assessment supports.
Zuidema: When you center student voice, you learn things about an edtech tool that adults simply can’t see. Testing early ideas with students helps product teams figure out if things like onboarding or screen design actually work before a tool is used in real classrooms. This keeps teams from building features based on adult guesses and saves them from costly rebuilds.
One example is customization. Adults often assume students want lots of choices in how everything looks. But many students say they prefer simple, steady designs and want more control over their learning path instead.
Olszewski: I’m generalizing here, but what we heard is that they don’t care about chatbots, and they don’t want to do anything for school on their phones except check due dates. I think these insights offer edtech providers some solid guidance on how to spend their energy when developing products.
What do students want from edtech?
Olszewski: Students want a clean user interface that feels intuitive, as if it were actually tested by real students. They don’t care about a lot of add-ons, advanced customization, badges and points. Instead, they want clear learning progressions that show them what’s next. They want to see language and scenarios that reflect who they are.
Zuidema: Students want tools that are simple to use, don’t waste time and feel made for how they actually learn. They want tools that let them move at their own pace and get feedback that actually makes sense.
Tare: Students want feedback that feels human and helpful: timely, specific, supportive and aligned to where they are in the process. For example, kids told one writing tool not to give grammar feedback while they were still generating ideas because it felt disruptive and demotivating. They want characters and tools that react to them in joyful, surprising ways. And they want tools that respect their intelligence: kids reject infantilizing features and lean into tools that challenge them while also supporting them.
What does it take to do rigorous, ethical student-centered usability research?
Zuidema: Conducting rigorous research with students starts with creating spaces where young people feel safe enough to be honest. When that trust is in place, they move beyond polite answers and offer the kind of deeper feedback that improves programs and products.
Organizations partner most effectively when they start with a clear sense of what they hope to learn and how they plan to use those insights. When students feel safe and respected, they offer the kind of honest, deeper insight that strengthens the work.
Tare: We recommend genuine youth partnership, not tokenism: Kids need time to build relationships, trained facilitators and multiple sessions to share deeper feedback. And there needs to be a willingness to change course: Product teams need to be ready to iterate, and sometimes to do so fundamentally. Kids are experts! We need to listen.
Olszewski: Young people under 18 rightfully are afforded special protections through Institutional Review Boards. Coordinating with the right organizations that have streamlined that work helps responsible research partners get right to the work of actually collecting data. That’s so helpful when the people we want to learn from don’t yet have a driver’s license!
How should school leaders evaluate edtech through the lens of student usability?
Olszewski: We know that alignment to standards and evidence supporting better student learning outcomes are top of mind — and those priorities can sometimes overshadow other important factors. We believe that products designed for usability, both for teachers and students, are more likely to improve teaching and learning. Our forthcoming student usability framework will provide concrete criteria for evaluating these factors. If your sandbox account of a product offers a jumbled user experience without a clear learning progression, that’s a signal it might not work well in practice.
Tare: Student usability should be given strong consideration. We advise school leaders to ask questions such as: Can students independently navigate the tool? Do multilingual learners and struggling readers experience friction? Does the tool maintain motivation, or diminish it? How does feedback feel to a child: supportive or punitive? This approach helps leaders choose tools that work for the students they actually serve.
Learn more: ISTE+ASCD’s student usability framework will be released later this year. In the meantime, educators and edtech decision-makers can explore ISTE’s Teacher Ready Evaluation Tool and related resources at iste.org/edtech-product-selection.
Tech
Snowflake and OpenAI forge $200M enterprise AI partnership

Snowflake and OpenAI have struck a multi-year, $200 million partnership to bring OpenAI’s advanced models, including GPT-5.2, directly into Snowflake’s enterprise data platform. The collaboration is designed to let Snowflake’s large customer base, more than 12,000 organisations, build AI agents and semantic analytics tools that operate on their own data without moving it outside Snowflake’s governed environment. Under the agreement, OpenAI models will be natively embedded in Snowflake Cortex AI and Snowflake Intelligence, making it possible to run queries, derive insights, and deploy AI-powered workflows using natural language interfaces and context-aware agents. Customers can analyse structured and unstructured data, automate…
This story continues at The Next Web
Tech
As Software Stocks Slump, Investors Debate AI’s Existential Threat
Investors were assessing on Wednesday whether a selloff in global software stocks this week had gone too far, as they weighed if businesses could survive an existential threat posed by AI. The answer: It’s unclear and will lead to volatility. From a report: After a broad selloff on Tuesday that saw the S&P 500 software and services index fall nearly 4%, the sector slipped another 1% on Wednesday. While software stocks have been under pressure in recent months as AI has gone from being a tailwind for many of these companies to investors worrying about the disruption it will cause to some sectors, the latest selloff was triggered by a new legal tool from Anthropic’s Claude large language model (LLM).
The tool – a plug-in for Claude’s agent for tasks across legal, sales, marketing and data analysis – underscored the push by LLMs into the so-called “application layer,” where these firms are increasingly muscling into lucrative enterprise businesses for revenue they need to fund massive investments. If successful, investors worry, it could wreak havoc across a range of industries, from finance to law and coding.
Tech
Vercel rebuilt v0 to tackle the 90% problem: Connecting AI-generated code to existing production infrastructure, not prototypes
Before Claude Code wrote its first line of code, Vercel was already in the vibe coding space with its v0 service.
The basic idea behind the original v0, which launched in 2024, was essentially to be version 0. That is, the earliest version of an application, helping developers solve the blank canvas problem. Developers could prompt their way to a user interface (UI) scaffolding that looked good, but the code was disposable. Getting those prototypes into production required rewrites.
More than 4 million people have used v0 to build millions of prototypes, but the platform was missing elements required to get into production. The challenge is a familiar one with vibe coding tools, as there is a gap in what tools provide and what enterprise builders require. Claude Code, for instance, generates backend logic and scripts effectively, but does not deploy production UIs within existing company design systems while enforcing security policies
This creates what Vercel CPO Tom Occhino calls “the world’s largest shadow IT problem.” AI-enabled software creation is already happening inside every enterprise. Credentials are copied into prompts. Company data flows to unmanaged tools. Apps deploy outside approved infrastructure. There’s no audit trail.
Vercel rebuilt v0 to address this production deployment gap. The new version, generally available today, imports existing GitHub repositories and automatically pulls environment variables and configurations. It generates code in a sandbox-based runtime that maps directly to real Vercel deployments and enforces security controls and proper git workflows while allowing non-engineers to ship production code.
“What’s really nice about v0 is that you still have the code visible and reviewable and governed,” Occhino told VentureBeat in an exclusive interview. “Teams end up collaborating on the product, not on PRDs and stuff.”
This shift matters because most enterprise software work happens on existing applications, not new prototypes. Teams need tools that integrate with their current codebases and infrastructure.
How v0’s sandbox runtime connects AI-generated code to existing repositories
The original v0 generated UI scaffolding from prompts and let users iterate through conversations. But the code lived in v0’s isolated environment, which meant moving it to production required copying files, rewriting imports and manually wiring everything together.
The rebuilt v0 fundamentally changes this by directly importing existing GitHub repositories. A sandbox-based runtime automatically pulls environment variables, deployments and configurations from Vercel, so every prompt generates production-ready code that already understands the company’s infrastructure. The code lives in the repository, not a separate prototyping tool.
Previously, v0 was a separate prototyping environment. Now, it’s connected to the actual codebase with full VS Code built into the interface, which means developers can edit code directly without switching tools.
A new git panel handles proper workflows. Anyone on a team can create branches from within v0, open pull requests against main and deploy on merge. Pull requests are first-class citizens and previews map directly to real Vercel deployments, not isolated demos.
This matters because product managers and marketers can now ship production code through proper git workflows without needing local development environments or handing code snippets to engineers for integration. The new version also adds direct integrations with Snowflake and AWS databases, so teams can wire apps to production data sources with proper access controls built in, rather than requiring manual work.
Vercel’s React and Next.js experience explains v0’s deployment infrastructure
Prior to joining Vercel in 2023, Occhino spent a dozen years as an engineer at Meta (formerly Facebook) and helped lead that company’s development of the widely-used React JavaScript framework.
Vercel’s claim to fame is that its company founder, Guillermo Rauch, is the creator of Next.js, a full-stack framework built on top of React. In the vibe coding era, Next.js has become an increasingly popular framework. The company recently published a list of React best practices specifically designed to help AI agents and LLMs work.
The Vercel platform encapsulates best practices and learnings from Next.js and React. That decade of building frameworks and infrastructure together means v0 outputs production-ready code that deploys on the same infrastructure Vercel uses for millions of deployments annually. The platform includes agentic workflow support, MCP integration, web application firewall, SSO and deployment protections. Teams can open any project in a cloud dev environment and push changes in a single click to a Vercel preview or production deployment.
With no shortage of competitive offerings in the vibe coding space, including Replit, Lovable and Cursor among others, it’s the core foundational infrastructure that Occhino sees as standing out.
“The biggest differentiator for us is the Vercel infrastructure,” Occhino said. “It’s been building managed infrastructure, framework-defined infrastructure, now self-driving infrastructure for the past 10 years.”
Why vibe coding security requires infrastructure control, not just policy
The shadow IT problem isn’t that employees are using AI tools. It’s that most vibe coding tools operate entirely outside enterprise infrastructure. Credentials are copied into prompts because there’s no secure way to connect generated code to enterprise databases. Apps deploy to public URLs because the tools don’t integrate with company deployment pipelines. Data leaks happen because visibility controls don’t exist.
The technical challenge is that securing AI-generated code requires controlling where it runs and what it can access. Policy documents don’t help if the tooling itself can’t enforce those policies.
This is where infrastructure matters. When vibe coding tools operate on separate platforms, enterprises face a choice: Block the tools entirely or accept the security risks. When the vibe coding tool runs on the same infrastructure as production deployments, security controls can be enforced automatically.
v0 runs on Vercel’s infrastructure, which means enterprises can set deployment protections, visibility controls and access policies that apply to AI-generated code the same way they apply to hand-written code. Direct integrations with Snowflake and AWS databases let teams connect to production data with proper access controls rather than copying credentials into prompts.
“IT teams are comfortable with what their teams are building because they have control over who has access,” Occhino said. “They have control over what those applications have access to from Snowflake or data systems.”
Generative UI vs. generative software
In addition to the new version of v0, Vercel has recently introduced a generative UI technology called json-render.
v0 is what Vercel calls generative software. This differs from the company’s json-render framework for a true generative UI. Vercel software engineer Chris Tate explained that v0 builds full-stack apps and agents, not just UIs or frontends. In contrast, json-render is a framework that enables AI to generate UI components directly at runtime by outputting JSON instead of code.
“The AI doesn’t write software,” Tate told VentureBeat. “It plugs directly into the rendering layer to create spontaneous, personalized interfaces on demand.”
The distinction matters for enterprise use cases. Teams use v0 when they need to build complete applications, custom components or production software.
They use JSON-render for dynamic, personalized UI elements within applications, dashboards that adapt to individual users, contextual widgets and interfaces that respond to changing data without code changes.
Both leverage the AI SDK infrastructure that Vercel has built for streaming and structured outputs.
Three lessons enterprises learned from vibe coding adoption
As enterprises adopted vibe coding tools over the past two years, several patterns emerged about AI-generated code in production environments.
Lesson 1: Prototyping without production deployment creates false progress. Enterprises saw teams generate impressive demos in v0’s early versions, then hit a wall moving those demos to production. The problem wasn’t the quality of generated code. It was that prototypes lived in isolated environments disconnected from production infrastructure.
“While demos are easy to generate, I think most of the iteration that’s happening on these code bases is happening on real production apps,” Occhino said. “90% of what we need to do is make changes to an existing code base.”
Lesson 2: The software development lifecycle has already changed, whether enterprises planned for it or not. Domain experts are building software directly instead of writing product requirement documents (PRDs) for engineers to interpret. Product managers and marketers ship features without waiting for engineering sprints.
This shift means enterprises need tools that maintain code visibility and governance while enabling non-engineers to ship. The alternative is creating bottlenecks by forcing all AI-generated code through traditional development workflows.
Lesson 3: Blocking vibe coding tools doesn’t stop vibe coding. It just pushes the activity outside IT’s visibility. Enterprises that try to restrict AI-powered development find employees using tools anyway, creating the shadow IT problem at scale.
The practical implication is that enterprises should focus less on whether to allow vibe coding and more on ensuring it happens within infrastructure that can enforce existing security and deployment policies.
Tech
Theory Professional Previews SR-221.3 Extreme-Output Full-Range Loudspeaker at ISE 2026, Headlining New SR Series for Pro Sound Reinforcement
Theory Professional, the professional division of Theory Audio Design, is making a serious first impression at ISE 2026 with its all-new SR Series — a family of premium passive and powered loudspeakers engineered for sound reinforcement with performance and fidelity that rivals much larger systems. At the heart of the lineup is the SR-221.3, a truly unique full-range loudspeaker that pushes the envelope of output, bandwidth, and coverage.
The SR-221.3 pairs dual 21-inch, 3,600 W low-frequency drivers with four 10-inch high-output carbon fiber midrange drivers and a 5-inch wide-band ring radiator compression driver. The result: an astonishing 27 Hz – 20 kHz (-3 dB) frequency response, up to ~140 dB SPL, and an ultra-wide 170° × 60° coverage pattern. One to two SR-221.3s can easily fill medium venues with high-fidelity, high-impact sound.
Theory Professional has designed the SR Series to deliver lively dynamics, refined acoustic accuracy, and sheer output capability, all in surprisingly compact cabinets that won’t dominate aesthetic spaces; whether installed or used portably. The series includes eight models; passive, active, portable, and install variants — all built-to-order and available in black or white. Optional upgrades include custom paint matching and weatherizing on passive units.
Thoughtful details throughout the SR Series — from ergonomic handles and multiple fly points to industry-standard mount points, pole cups, and a suite of accessories like the Theory SplitYoke multipurpose mounting brackets, caster kits, and a dolly board — make these systems as flexible as they are powerful. Q2 2026 delivery is planned for powered, passive, portable, and install versions.

SR Series Loudspeakers and Subwoofers: Eight High-Output Models Now Available from Theory Professional
At ISE 2026, Theory Professional is demonstrating the new SR Series in Hall 8.0, Audio Demo Room D4, giving attendees a chance to hear exactly how far the company is pushing premium sound reinforcement. Demonstrations can be scheduled in advance, or you can stop by D4 to experience the system in action. More details are available at theoryprofessional.com.
Below is a clear breakdown of the eight SR Series loudspeakers and subwoofers available to order now, covering configuration, intended use, and key options.
SR Series Loudspeakers

SR-46.2
- Quad 6-inch, 2-way multi-use loudspeaker
- Ultra-slender, tall-but-narrow enclosure with 120° conical coverage
- Available in passive and powered versions
- Included features:
- Integral pole cups
- Ergonomic handles
- Fly points and industry-standard mount points
- Optional:
- Theory SplitYoke Multipurpose Mounting Bracket Kit for horizontal or vertical surface mounting
SR-28.2
- Dual 8-inch, 2-way multi-use loudspeaker
- 80° × 60° elliptical horn
- Available in passive and powered versions
- Included features:
- Integral pole cups
- Ergonomic handles
- Fly points and industry-standard mount points
- Optional:
- Theory SplitYoke Multipurpose Mounting Bracket Kit for horizontal or vertical surface mounting
SR-112.2
- Single 12-inch, 2-way multi-use loudspeaker
- 80° × 60° elliptical horn
- Available in passive and powered versions
- Included features:
- Integral pole cups
- Ergonomic handles
- Fly points and industry-standard mount points
- Optional:
- Theory SplitYoke Multipurpose Mounting Bracket Kit for horizontal or vertical surface mounting
SR-212.2
- Full-range, multi-use loudspeaker with integrated subwoofer
- Dual 12-inch LF drivers and dual 8-inch mid drivers in a 3-way design
- Exceptionally compact enclosure at just 10 inches deep
- Available in passive and powered versions
- Included features:
- Integral pole cups
- Ergonomic handles
- Fly points
- Optional:
- Theory SplitYoke Multipurpose Mounting Bracket Kit
- Theory Caster Kit for portable applications
SR Series Subwoofers
SR-212LF
- Compact, high-output bass-reflex subwoofer
- Dual 12-inch, 1,400 W woofers
- Sonically matched to SR-46.2, SR-28.2, and SR-112.2
- Included features:
- Removable feet
- Integral pole cups
- Ergonomic handles
- Fly points
- Optional:
- Theory SplitYoke Multipurpose Mounting Bracket Kit
- Theory Caster Kit
SR-215LF
- Maximum-output, manifold bass-reflex subwoofer
- Dual 15-inch, 3,600 W woofers
- Sonically matched to SR-46.2, SR-28.2, and SR-112.2
- Included features:
- Removable feet
- Integral pole cups
- Ergonomic handles
- Fly points
- Optional:
- Theory Quick-Release or Standard Caster Kits
- Dolly Board for transport
SR-218LF
- Maximum-output, manifold bass-reflex subwoofer
- Dual 18-inch, 3,600 W woofers
- Sonically matched to SR-46.2, SR-28.2, and SR-112.2
- Included features:
- Removable feet
- Integral pole cups
- Ergonomic handles
- Fly points
- Optional:
- Theory Quick-Release or Standard Caster Kits
- Dolly Board for transport
SR-221LF
- Extreme-output, manifold bass-reflex subwoofer
- Dual 21-inch, 3,600 W woofers
- Sonically matched to SR-46.2, SR-28.2, and SR-112.2
- Included features:
- Removable feet
- Integral pole cups
- Ergonomic handles
- Fly points
- Optional:
- Theory Quick-Release or Standard Caster Kits
- Dolly Board for transport

The Bottom Line
The SR-221.3 is still very much a proof of concept, and pricing has not been finalized. What is clear is that it will sit above Theory Professional’s existing SR models once it moves toward production. This is a true full-range loudspeaker loaded with expensive hardware — dual 21-inch, 3,600-watt low-frequency drivers, four 10-inch carbon-fiber midrange drivers, and a 5-inch wide-band ring-radiator compression driver — and there’s no scenario where that bill of materials leads to an “affordable” outcome.
With 27 Hz–20 kHz bandwidth (-3 dB), approximately 140 dB SPL, and 170° × 60° coverage, the SR-221.3 is designed to deliver high-output, high-fidelity sound at scale, with just one or two cabinets capable of filling medium-sized venues. Based on the pricing of Theory Professional’s current pro models, expect the SR-221.3 to land firmly in premium territory when it eventually reaches market — because nothing about this design suggests it’s meant to play in the shallow end.
For more information: theoryprofessional.com
Related Reading:
Tech
NASA's Artemis II will test laser communications system in lunar orbit
![]()
The mission will mark NASA’s first crewed test of laser communications in lunar orbit. O2O will use infrared light instead of radio frequencies to transmit voice, mission data, and high-resolution video back to Earth. While the technology has been tested on seven prior uncrewed missions, Artemis II is the first to…
Read Entire Article
Source link
Tech
CATL’s Next-Gen 5C Batteries Can Be Fully Recharged in 12-Minutes and Has Lifespan That Stretches Beyond a Million Miles

CATL’s innovative 5C battery claims to revolutionize the electric vehicle industry for drivers. CATL, or Contemporary Amperex Technology Limited, the world’s largest battery manufacturer, claims that a full charge takes only 12 minutes and has a lifespan of over a million miles.
The engineers at CATL worked on this battery to see if it could withstand a 5C charge (basically, an 80-kilowatt-hour pack could be charged at 400 kilowatts in roughly 12 minutes) without quickly wearing out. Yes, according to some estimations, a top-up would take about the same amount of time as filling up with gas, but this battery would withstand wear and tear better.
Sale
S ZEVZO ET03 Car Jump Starter 4000A Jump Starter Battery Pack for Up to 8.0L Gas and 7.0L Diesel Engines,…
- POWERFUL CAR BATTERY JUMP STARTER: The ET03 car battery jump starter can easily jump-start all 12V common vehicles with up to 8.0L gas and 7.0L diesel…
- STARTS 0V DEAD BATTERIES EASILY: This car battery jump starter has integrated the force start function in the jumper clamps, which delivers powerful…
- BACKUP PORTABLE POWER BANK: This jump starter battery pack can also work as a 74Wh large battery capacity portable power bank to charge your…
Under normal conditions, at 68°F (20°C), it retained at least 80% of its original capacity after 3,000 full charge-discharge cycles. When you add the figures up, that’s more than 1.8 million kilometers, or almost one and a half million miles. Or, in the blistering heat of 140°F (60°C) during the summer in Dubai, it managed 80% after 1,400 cycles, or almost 840,000 kilometers and a half million miles. CATL believes this is six times better than the current industry average for batteries put through a similar test.

So how did they manage to accomplish all of this? For starters, the cathode has a unique covering that keeps the battery from breaking down and losing metal ions during rapid charging and discharging. Second, the electrolyte contains an additive that detects and seals tiny breaches, preventing harmful lithium from leaking out and shortening battery life. Last but not least, there is a particular temperature-responsive coating on the separator that slows down the ions when things get heated locally, all of which contributes to a lower risk of things getting out of control.

Heat becomes considerably more of a concern when charging quickly. So they created a clever system that monitors the pack as a whole and precisely distributes coolant to the hotspots, keeping temperatures consistent across all cells and effectively adding years to the lifespan.

All of this implies that the battery no longer wears out as quickly when charging at high speeds. CATL believes it’s ideal for heavy users, large trucks, taxis, and ride-hailing vehicles. They will be the ones who gain from faster turnaround times and lower replacement prices. Passenger cars will follow once production begins, but there is no news on when the 5C variant will be available. Previous versions, such as the 4C technology released in 2023, were only stepping stones, and this is the next natural step.
Tech
The hidden tax of “Franken-stacks” that sabotages AI strategies
Presented by Certinia
The initial euphoria around Generative and Agentic AI has shifted to a pragmatic, often frustrated, reality. CIOs and technical leaders are asking why their pilot programs, even those designed to automate the simplest of workflows, aren’t delivering the magic promised in demos.
When AI fails to answer a basic question or complete an action correctly, the instinct is to blame the model. We assume the LLM isn’t “smart” enough. But that blame is misplaced. AI doesn’t struggle because it lacks intelligence. It struggles because it lacks context.
In the modern enterprise, context is trapped in a maze of disconnected point solutions, brittle APIs, and latency-ridden integrations — a “Franken-stack” of disparate technologies. And for services-centric organizations in particular, where the real truth of the business lives in the handoffs between sales, delivery, success, and finance, this fragmentation is existential. If your architecture walls off these functions, your AI roadmap is destined for failure.
Context can’t travel through an API
For the last decade, the standard IT strategy was “best-of-breed.” You bought the best CRM for sales, a separate tool for managing projects, a standalone CSP for success, and an ERP for finance; stitched them together with APIs and middleware (if you were lucky), and declared victory.
For human workers, this was annoying but manageable. A human knows that the project status in the project management tool might be 72 hours behind the invoice data in the ERP. Humans possess the intuition to bridge the gap between systems.
But AI doesn’t have intuition. It has queries. When you ask an AI agent to “staff this new project we won for margin and utilization impact,” it executes a query based on the data it can access now. If your architecture relies on integrations to move data, the AI is working with a delay. It sees the signed contract, but not the resource shortage. It sees the revenue target, but not the churn risk.
The result is not only a wrong answer, but a confident, plausible-sounding wrong answer based on partial truths. Acting on that creates costly operational pitfalls that go far beyond failed AI pilots alone.
Why agentic AI requires a platform-native architecture
This is why the conversation is shifting from “which model should we use?” to “where does our data live?“
To support a hybrid workforce where human experts work alongside duly capable AI agents, the underlying data can’t be stitched together; it must be native to the core business platform. A platform-native approach, specifically one built on a common data model (e.g. Salesforce), eliminates the translation layer and provides the single source of truth that good, reliable AI requires.
In a native environment, data lives in a single object model. A scope change in delivery is a revenue change in finance. There is no sync, no latency, and no loss of state.
This is the only way to achieve real certainty with AI. If you want an agent to autonomously staff a project or forecast revenue, it’s going to require a 360-degree view of the truth, not a series of snapshots taped together by middleware.
The security tax of the side door: APIs as attack surface
Once you solve for intelligence, you must solve for sovereignty. The argument for a unified platform is usually framed around efficiency, but an increasingly pressing argument is security.
In a best-of-breed Franken-stack, every API connection you build is effectively a new door you have to lock. When you rely on third-party point solutions for critical functions like customer success or resource management, you’re constantly piping sensitive customer data out of your core system of record and into satellite apps. This movement is the risk.
We’ve seen this play out in recent high-profile supply chain breaches. Hackers didn’t need to storm the castle gates of the core platform. They simply walked in through the side door by exploiting the persistent authentication tokens of connected third-party apps.
A platform-native strategy solves this through security by inheritance. When your data stays resident on a single platform, it inherits the massive security investment and trust boundary of that platform. You aren’t moving data across the wire to a different vendor’s cloud just to analyze it. The gold never leaves the vault.
Fix the architecture, then curate the context
The pressure to deploy AI is immense, but layering intelligent agents on top of unintelligent architecture is a waste of time and resources.
Leaders often hesitate because they fear their data isn’t “clean enough.” They believe they have to scrub every record from the last ten years before they can deploy a single agent. On a fragmented stack, this fear is valid.
A platform-native architecture changes the math. Because the data, metadata, and agents live in the same house, you don’t need to boil the ocean. Simply ring-fence specific, trusted fields — like active customer contracts or current resource schedules — and tell the agent, ‘Work here. Ignore the rest.’ By eliminating the need for complex API translations and third-party middleware, a unified platform allows you to ground agents in your most reliable, connected data today, bypassing the mess without waiting for a ‘perfect’ state that may never arrive.
We often fear that AI will hallucinate because it’s too creative. The real danger is that it will fail because it’s blind. And you cannot automate a complex business with fragmented visibility. Deny your new agentic workforce access to the full context of your operations on a unified platform, and you’re building a foundation that is sure to fail.
Raju Malhotra is Chief Product & Technology Officer at Certinia.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
Tech
Why AI Keeps Falling for Prompt Injection Attacks
Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.
Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.
LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot won’t tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won’t accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”
AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.
If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.
Human Judgment Depends on Context
Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.
As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.
The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.
A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.
We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.
Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.
So let’s return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you’re filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.
Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.
Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.
Humans detect scams and tricks by assessing multiple layers of context. AI systems do not. Nicholas Little
Why LLMs Struggle With Context and Judgment
LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context, they only reference it.
While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.
This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.
There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: “I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.
The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There’s a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.
Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.
Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.
We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant.
We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.
The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them “world models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.
Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it’s going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.
From Your Site Articles
Related Articles Around the Web
-
Crypto World5 days agoSmart energy pays enters the US market, targeting scalable financial infrastructure
-
Crypto World6 days ago
Software stocks enter bear market on AI disruption fear with ServiceNow plunging 10%
-
Politics5 days agoWhy is the NHS registering babies as ‘theybies’?
-
Crypto World6 days agoAdam Back says Liquid BTC is collateralized after dashboard problem
-
Video2 days agoWhen Money Enters #motivation #mindset #selfimprovement
-
Tech11 hours agoWikipedia volunteers spent years cataloging AI tells. Now there’s a plugin to avoid them.
-
NewsBeat6 days agoDonald Trump Criticises Keir Starmer Over China Discussions
-
Fashion5 days agoWeekend Open Thread – Corporette.com
-
Politics3 days agoSky News Presenter Criticises Lord Mandelson As Greedy And Duplicitous
-
Crypto World4 days agoU.S. government enters partial shutdown, here’s how it impacts bitcoin and ether
-
Sports4 days agoSinner battles Australian Open heat to enter last 16, injured Osaka pulls out
-
Crypto World4 days agoBitcoin Drops Below $80K, But New Buyers are Entering the Market
-
Crypto World2 days agoMarket Analysis: GBP/USD Retreats From Highs As EUR/GBP Enters Holding Pattern
-
Crypto World5 days agoKuCoin CEO on MiCA, Europe entering new era of compliance
-
Business5 days ago
Entergy declares quarterly dividend of $0.64 per share
-
Sports2 days agoShannon Birchard enters Canadian curling history with sixth Scotties title
-
NewsBeat1 day agoUS-brokered Russia-Ukraine talks are resuming this week
-
NewsBeat2 days agoGAME to close all standalone stores in the UK after it enters administration
-
Crypto World20 hours agoRussia’s Largest Bitcoin Miner BitRiver Enters Bankruptcy Proceedings: Report
-
Crypto World6 days agoWhy AI Agents Will Replace DeFi Dashboards

