
Tech

This Sennheiser Momentum 4 deal slashes $250 off the best ANC headphones around


The Sennheiser Momentum 4 has earned a reputation as one of the most capable over-ear headphones at its price point, and this discount makes a strong case for picking one up right now.

The $250 discount brings the Sennheiser Momentum 4 Wireless down to $199.95 on Amazon from its original $449.95, a 56% saving on a pair of noise-cancelling headphones that genuinely punch above their discounted price.

The 42mm dynamic drivers are a meaningful part of that story, delivering a frequency range of 6Hz to 22kHz that captures both the deep sub-bass rumble in electronic music and the fine harmonic detail in acoustic recordings that smaller drivers tend to smear.


AptX Adaptive Bluetooth helps maintain that audio quality wirelessly, adjusting the bitrate dynamically so that the signal holds up even when you are moving through environments with heavy wireless interference, like a crowded commuter train or a busy airport.

Adaptive noise cancellation handles the environmental side of things, and the transparency mode lets you flip back into the world around you without removing the headphones, which matters when you are navigating a city or need to catch a platform announcement.


Four beamforming microphones handle calls with enough directional precision to suppress wind and ambient noise independently, so the person on the other end hears your voice rather than the background of wherever you happen to be.


Battery life is where the 4.5-star Momentum 4 genuinely separates itself from the competition: 60 hours of playback on a single charge means most users will go weeks between top-ups under realistic daily use.

The foldable design and included carry case make the headphones practical for travel as well, and the package also includes a USB-C cable and a 3.5mm to 2.5mm audio cable for wired listening when Bluetooth is not an option.

Sennheiser’s Smart Control Plus app rounds out the experience, giving you access to a parametric equaliser, preset sound modes, and granular controls over noise cancellation and transparency levels.

For commuters and frequent travellers who will lean on both the ANC and the battery capacity daily, the Momentum 4 at $199.95 represents one of the strongest value propositions currently available in the premium over-ear headphone market.


An excellent pair of wireless headphones that delivers a balanced, neutral presentation, long battery life and very good noise cancellation. The Sennheiser Momentum 4 Wireless’s all-round performance is excellent, though the Sony WH-1000XM5 are better in most respects and available for similar money.

For:

  • Great comfort

  • Clear, musical audio

  • Very good noise cancellation

  • Massive battery life

  • Excellent wireless performance

Against:

  • Functional look

  • Not the best ANC at this price

  • Beaten for call quality


Tech

Anthropic unveils new powerful AI that finds software flaws, but says it's too dangerous to release



In Project Glasswing, announced Tuesday, Anthropic is giving a select group of major tech and financial firms access to Claude Mythos Preview, a frontier model that has already uncovered thousands of previously unknown software vulnerabilities. The company says the model is too dangerous to release to the general public.


Tech

X’s Grok AI now breaks language barriers and lets you edit photos using simple prompts


Elon Musk’s X is continuing its push to bake AI deeper into the platform with two new Grok-powered features aimed at helping users reach a wider audience and edit images seamlessly.

What’s new on X?

The company has rolled out automatic translation for posts worldwide, allowing users to instantly read content in their preferred language without needing to tap on the translation option. The feature, powered by xAI’s Grok models, is designed to give posts a broader global reach while reducing friction for cross-language conversations. Users who prefer the original text can still toggle translations off on a per-language basis.

We’re rolling out auto-translate worldwide to give posts in any language global reach on X.

The translations are powered by Grok and have improved substantially over the last couple months.

If you prefer to read in the original language, you can always turn off auto-translate…


— Nikita Bier (@nikitabier) April 7, 2026

Alongside translation, X has also introduced a new in-app photo editor on iOS. The tool gives users access to basic editing options like drawing, text overlays, and blur controls for hiding sensitive information, such as faces or personal details.

Ladies and gentlemen, we’re launching a brand new Photo Editor in our post composer.

It has long-overdue features like drawing & text. But we also included special add-ons that are unique to X:

• Edit with words, powered by Grok
• Add a blur to redact parts of the photo… pic.twitter.com/38Zaw8b5jl


— Nikita Bier (@nikitabier) April 7, 2026

The editor also utilizes AI to help users edit images with natural language prompts. According to X’s head of product, Nikita Bier, users can ask Grok to transform images in specific ways. For example, they can ask Grok to turn a regular photo into something styled like a painting. For now, the feature is limited to X’s iOS app, but Android support is coming soon.

What does this mean for users?

With these additions, X is trying to get users to spend more time inside its app instead of relying on third-party tools. Other social media platforms have released similar AI-driven translation features, and X is now joining the fray to make Grok a core part of how people create and engage on the platform.

Whether this push pays off will ultimately come down to execution. If these tools feel genuinely useful and intuitive, they could make posting and discovery smoother. If not, they risk blending into the background as features more users ignore, adding complexity without meaningfully improving the experience.


Tech

Did you mean to buy that?


As agentic shopping becomes more commonplace, how do you dispute a purchase your AI made?

Companies are preparing for a near future where consumers will allow AI agents to shop on their behalf.

Studies have found that most European consumers already use AI to help shape their purchase decisions, but not at checkout, where the money changes hands – although that could change, and fast.

‘Agentic commerce’ is seen as a natural consequence of AI-powered search, which already makes up more than half of global search engine volume. McKinsey trend analysis finds this number could rise significantly over the coming years.


McKinsey found that by 2030, agentic commerce could orchestrate up to $5trn globally. But while Morgan Stanley earlier this year noted that only 1pc of shoppers currently choose the agentic route, newer research elsewhere finds that AI agents could account for a significant share of a business’s customers in the coming years.

In the background, infrastructure work to make agentic commerce possible is underway at fintechs such as Revolut, Stripe, Visa, Mastercard and PayPal. More are expected to follow.

Did you mean to buy that?

A growing number of users say they would trust AI systems to place orders and execute payments on their behalf. But such a combination of trust and automation will end up creating a whole new category of purchase disputes that companies are yet to get ahead of, says Monica Eaton, the founder and CEO of Chargebacks 911.

“The infrastructure for agentic commerce is being built quickly, but the safeguards need to evolve at the same pace,” she says.


In the era of agentic commerce, both customers and businesses will find it hard to define intent – or a lack thereof – when purchases are made by AI agents. It is easier to determine intent when humans make a deliberate choice to press ‘buy’, but agentic commerce removes that moment in the transaction. And currently, there aren’t many ways to dispute an agentic AI-made purchase, Eaton notes.

“Most customers do not have access to detailed records of the instructions they gave, the permissions in place, or how the agent reached its decision. In many cases, the transaction is technically authorised, which makes it difficult to challenge,” she adds.

To solve this, platforms need to prioritise transparency before a transaction occurs. The AI agent in question must be able to show what it is about to do and why, and ensure it has customer authorisation before going forward with a transaction. An audit trail for agentic purchases will provide an added layer of protection, says Eaton.

Meanwhile, clear permission frameworks that define where and what agents can purchase, and how much they can spend, will further protect customers.
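To make that concrete, the kind of permission framework Eaton describes could be sketched as a mandate the customer grants the agent, checked before every transaction and logged for later dispute resolution. The sketch below is illustrative only; every name and field is hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentMandate:
    """Hypothetical permission set a customer grants their shopping agent."""
    allowed_merchants: set
    allowed_categories: set
    spend_limit_per_order: float
    audit_log: list = field(default_factory=list)

    def authorise(self, merchant: str, category: str, amount: float) -> bool:
        decision = (
            merchant in self.allowed_merchants
            and category in self.allowed_categories
            and amount <= self.spend_limit_per_order
        )
        # Every decision is recorded, approved or not, so there is an
        # audit trail to consult when a purchase is later disputed.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "merchant": merchant,
            "category": category,
            "amount": amount,
            "approved": decision,
        })
        return decision

mandate = AgentMandate(
    allowed_merchants={"example-grocer"},
    allowed_categories={"groceries"},
    spend_limit_per_order=50.0,
)
ok = mandate.authorise("example-grocer", "groceries", 30.0)
blocked = mandate.authorise("example-electronics", "gadgets", 300.0)
```

Here the log captures both the approved grocery order and the refused electronics purchase, giving customer and merchant a shared record of what the agent was and was not permitted to do – exactly the kind of evidence Eaton says today’s dispute processes lack.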


This may only work in the short term, says Eaton. Longer term protections would involve platforms providing transparency and access to activity logs, while dispute processes will need to evolve to recognise when an agent’s decision does not align with the customer’s intent.

Shift in responsibility

This new category of purchase dispute lies somewhere between fraud and ‘buyer’s remorse’, and current systems are not equipped to handle this anomaly, says Eaton.

“In an agentic environment, platforms need to take greater responsibility for how instructions are captured, interpreted and executed”, and merchants should not be expected to absorb this liability by default, she explains.

Moreover, if effective frameworks are not built ahead of time, customers could end up in a situation where they are arguing with an AI customer service bot about an unauthorised purchase made by a personal AI agent.


There is still time to get ahead of this eventuality, but the window is narrowing, Eaton says. “Businesses need to treat agentic commerce as a fundamentally different transaction environment, not just a faster version of existing e-commerce.”

It is important not to wait for regulation to catch up, Eaton warns. “Businesses that build trust into agentic commerce early will be in a much stronger position than those that react later.

“As for the future of customer service, it does not have to become AI versus AI. The key is to keep the human at the centre of the process. Agentic commerce should reflect and support human intent. If that principle is lost, trust will follow.”

Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.


Tech

Valve Releases Native Steam Link App For Apple’s Vision Pro


Valve has released a native Steam Link beta for Apple Vision Pro, letting users stream their existing Steam games onto a large virtual screen in visionOS. It supports up to 4K resolution and will let you dynamically adjust the curve of the display. The Mac Observer reports: Steam Link does not support VR titles in this beta, and Valve clearly states that the app is limited to 2D game streaming, but this still opens up a large library of games that users can play on a massive virtual screen inside Vision Pro.

At the same time, Vision Pro already handles 2D media very well, and this update builds on that strength by turning the headset into a portable gaming display that connects directly to your existing setup without needing extra hardware.

You can join the Steam Link beta through TestFlight right now, and this early release shows how Apple Vision Pro continues to expand beyond media into more practical and everyday use cases like gaming.


Tech

OPPO F33 Pro Launching April 15 With 50MP Ultra-Wide Selfie Camera, New Design


The new OPPO F33 series is just around the corner, and the Chinese smartphone maker has shared a lot more about the upcoming phones ahead of its India launch on April 15, 2026. The headline features are the all-new ultrawide selfie camera and a more polished design. Here’s everything we know so far.

New Cameras

Selfie captured from the OPPO F33 Pro

As on the new iPhone 17, the highlight of the OPPO F33 Pro is its 50MP ultra-wide front camera with a 100° field of view. That’s significantly wider than what most phones in this segment offer, and OPPO says it can capture up to 30% more area in group selfies. To make that useful in real-world scenarios, the phone also includes an “AI Groupfie Expert” system. It can automatically switch to a wider 0.6x view when more people enter the frame and correct facial distortion for up to six faces at once.
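The switching behaviour OPPO describes can be illustrated with a trivial sketch; the face-count threshold and the two zoom values below are assumptions for illustration, since OPPO has not published the actual decision logic.

```python
def choose_zoom(face_count: int, wide_threshold: int = 3) -> float:
    """Pick the selfie-camera zoom factor from the number of detected faces.

    Illustrative only: the 0.6x/1.0x values mirror the behaviour OPPO
    describes, but the threshold is a guess, not a disclosed figure.
    """
    return 0.6 if face_count >= wide_threshold else 1.0

solo = choose_zoom(1)    # a solo selfie stays at the normal view
group = choose_zoom(4)   # a group shot switches to the 0.6x ultra-wide view
```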

On the back, the F33 series uses a 50MP main sensor paired with a 2MP depth sensor. While that setup isn’t groundbreaking on paper, one of the new features is AI Portrait Glow, which adjusts lighting in real time depending on the scene. It offers multiple lighting styles, including Natural, Rim, and Studio modes, to improve portraits in tricky lighting conditions. Another interesting addition is the Colorful Front Fill Light, which replaces the usual harsh white flash with softer, adjustable tones to make selfies look more natural, especially at night. The phone also introduces creative features like Popout, which lets users create layered photos with a sense of depth directly from the camera, and Dual-View Video.

Redesigned Build

People posing for a selfie with the F33 Pro

Beyond cameras, OPPO is also making noticeable changes to the design. The F33 Pro introduces a new “Starry Sea” camera module with a cleaner layout and a more prominent lens design. The phone uses a one-piece back panel made from a thicker composite material, which OPPO claims improves durability without adding the fragility of glass. It’s also using a CNC carving process to create a mix of glossy and matte finishes on the same surface.

The F33 Pro will be available in three finishes: Misty Forest, Starry Blue, and Passion Red, each with a slightly different texture and visual style. The device features a 6.57-inch flat display and weighs 194 grams, keeping things relatively slim and manageable.


Tech

Astropad’s Workbench reimagines remote desktop for AI agents, not IT support


Demand for Apple’s Mac Mini has skyrocketed, particularly in China, as the small computer has become an ideal platform for experimenting with autonomous AI agents like OpenClaw and others. Now, a company called Astropad is building out a remote desktop solution specifically for this use case.

On Tuesday, Astropad CEO Matt Ronge introduced Astropad Workbench, a remote desktop solution for Apple devices that he pitches as made “for the AI era.”

While an AI agent running on a Mac Mini may not need a screen, its operator (the human) will want to log in at times to see what’s happening in order to check logs, monitor outputs, or restart stuck tasks, he says.


The new remote desktop solution offers a variety of features, including high-fidelity streaming; the ability to dictate prompts and commands with your voice; plus support for other input methods like the keyboard, Apple Pencil, or touch; and clients for both the iPad and iPhone — the latter essentially putting the remote desktop solution into your pocket for on-the-go access.

If you’re running AI agents across multiple Macs, Workbench offers a device chooser so you can move between them.


The idea came about because it was something the team at Astropad had wanted for themselves, as had their friends.

“We have heavily adopted AI at Astropad, and we’ve been using agents. And sometimes, you have an agent running on a long task, and you want to check on it,” says Ronge. “There’s not a great way to do this…there were existing remote desktop tools, but nothing built specifically for this,” he continues. “There have also been ways where you can use a terminal, or there are things like Telegram chats, but they’re limited. I mean, there are times you’ve got to see what’s happening on your Mac. You’ve got to approve a dialog or save something, or just visually see what’s happening.”

Workbench also leverages the company’s proprietary low-latency display protocol, called LIQUID, which supports the workflows creative professionals use. It retains full fidelity, even at Retina resolutions, Astropad claims, and doesn’t blur lines or pixelate data. The protocol already powers Astropad’s other products, like Luna Display, which turns your iPad into a second display, and Astropad Studio, which lets you use an iPad as a professional drawing tablet.

While monitoring an AI agent may not always need a high-fidelity solution, Ronge points out that it’s something that’s nice to have — especially if you’re approving designs or mock-ups your AI agent made.


Of course, remote desktop software has existed for some time, meaning Astropad has well-established rivals like Jump Desktop, RustDesk, AnyDesk, Parsec, VNC-based solutions, and many more.

But Ronge suggests that those weren’t designed for the specific needs of using remote desktop software to keep tabs on AI agents. With Workbench, it’s easy to check on the status of logs to see your AI agents’ progress in order to spot issues, restart stalled jobs, and make other changes, but what’s more, you can do this from your iPhone or iPad.


“We’ve been doing iPad stuff for years — it’s been, like, our whole company for the past 10 years. So we have a lot of experience in making good iPad apps,” Ronge says. “We know how to make good iOS apps…so we did that, and then we also added a voice model.”


The tech uses Apple’s voice model so you can talk to your phone and direct your AI agent to do something with a press of the microphone button.

“It’s a very natural way to work with agents. That’s the kind of feature that existing remote desktop [apps] just don’t have — they’re built for more traditional, enterprise-style remote desktop.”

As a new release, there will still be some bugs and polishing needed, but the team is continuing to work on the product. Next up, they plan to launch Windows and Linux support and refine the iPhone app.

The new software runs on macOS 15 and up and iOS 26, and is available as a free download offering 20 minutes of access per day. For unlimited access, the cost is $10 per month, or $50 per year.


Astropad, a bootstrapped and profitable small tech business, has over 100,000 customers, including those who have bought its iPad hardware accessories and its software. With Workbench, Ronge believes the company has the potential to reach both AI enthusiasts and businesses as remote support for AI agents becomes more common.

“I totally think businesses are gonna buy it. I mean, just the productivity gains I’m seeing from it myself — this is totally headed to businesses. It’s just too powerful,” he notes.


Tech

Temple University Student On IEEE Membership Perks


Kyle McGinley graduated from high school in 2018 and, like many teenagers, he was unsure what career he wanted to pursue. Recuperating from a sports injury led him to consider becoming a physical therapist for athletes. But he was skilled at repairing cars and fixing things around the house, so he thought about becoming an engineer, like his father.

McGinley, who lives in Sellersville, Pa., took some classes at Montgomery County Community College in Blue Bell, while also working. During his years at the college, he took a variety of courses and was drawn to electrical engineering and computing, he says. He left to pursue a bachelor’s degree in electrical and computer engineering in Philadelphia at Temple University, where he is currently a junior.

Kyle McGinley

Member grade: Student member

University: Temple, in Philadelphia

Major: Electrical and computer engineering

The 26-year-old is also a teaching assistant and a research assistant at Temple. His research focuses on applying artificial intelligence to electrical hardware and robotics. He helped build an AI-integrated android companion to assist in-home caregivers.

Temple recognized McGinley’s efforts last year with its Butz scholarship, which is awarded annually to an electrical and computer engineering undergraduate with an interest in software development, AI development systems, health education software, or a similar field.

An IEEE student member, he is active within the university’s student branch.

“My career ambition after I graduate is to gain real-world experience in the engineering industry to learn skills outside of academia,” he says. “Long term, I want to do project management or work in a technical lead role, with the primary goal of creating impactful projects that I can be proud of.”


Building a robot aide

McGinley is a teaching assistant for his digital circuit design course. In a class of 35 students, it can be a struggle for some to digest the professor’s words, he says.

“My job is to answer students’ questions if they are having problems following the professor’s lecture or are confused about any of the topics,” he says. “In the lab, I help students debug code or with hardware issues they have on the FPGA [field-programmable gate array] boards.”

He also conducts research for the university’s Computer Fusion Lab under the supervision of IEEE Senior Member Li Bai, a professor of electrical and computer engineering. McGinley writes software programs at the lab.


One such assignment was working with the Temple School of Social Work at the Barnett College of Public Health to build a robot companion integrated with AI to assist individuals with Parkinson’s disease and their caregivers.

“I realized the need for this with my grandmother, when she was taking care of my grandfather,” he says. “It was a lot for her, trying to remember everything.”

Using the latest software and hardware, he and three classmates rebuilt an older lab robot. They installed an operating system and used Python and C++ for its control, perception, and behavior, he says. The students also incorporated Google’s Gemini AI to help with routine tasks such as scheduling medication reminders and setting alarms for upcoming doctor visits.
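The article notes the rebuilt robot’s control and behaviour were written in Python and C++; the reminder side of such a system might look roughly like the Python sketch below, where every name is hypothetical rather than taken from the team’s code.

```python
from datetime import datetime, timedelta

def next_due(reminders, now):
    """Return the reminders that have come due, soonest first.

    `reminders` is a list of (label, due_time) tuples; a real robot
    would then speak the label aloud or flag it to the caregiver.
    """
    due = [(label, t) for label, t in reminders if t <= now]
    return sorted(due, key=lambda pair: pair[1])

now = datetime(2026, 4, 7, 9, 0)
reminders = [
    ("Take medication", now - timedelta(minutes=5)),
    ("Neurologist visit", now + timedelta(days=2)),
]
due_now = next_due(reminders, now)  # only the medication reminder is due
```

In the actual project, a generative model such as Gemini would sit in front of logic like this, turning a caregiver’s spoken request into the structured reminder entries.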

A small humanoid robot standing on a kitchen counter. Kyle McGinley helped build an AI-integrated android to assist individuals with Parkinson’s disease and their caregivers. Image: Temple University of Public Health

The AI-integrated android was intended to assist, not replace, the caregivers by handling the mental load of remembering tasks, he says.


“This was one of the cool things that drew me to working in the robotics field,” he says. “Something where AI could be used to help caregivers do simple tasks.”

The benefits of a student branch

McGinley joined Temple’s IEEE student branch last year after one of his professors offered extra credit to students who did so. After attending meetings and participating in a few workshops, he found he really liked the club, he says, adding that he made new friends and enjoyed the camaraderie with other engineering students.

After the student branch’s board members got to know McGinley better, they asked him to become the club’s historian and manage its social media account. He also helps with event planning, creating and posting fliers, taking pictures, and shooting videos of the gatherings.

The branch has benefited from McGinley’s involvement, but he says it’s a two-way street.


“The biggest things I’ve learned are being held accountable and being reliable,” he says. “I am responsible for other people knowing what’s going on.”

Being an active volunteer has improved his communication skills, he says.

“Learning to clearly communicate with other people to make sure everyone is on the same page is important,” he says. “In school, they don’t teach you how to communicate with people. They only teach you how to remember stuff. Working well with people is one of the most underrated skills that a lot of students don’t understand is important.”

He encourages students to join their university’s IEEE branch.


“I know it can be scary because you might not know anyone, but it honestly can’t hurt you; it could actually benefit you,” he says. “Being active is going to help you with a lot of skills that you need.

“You’ll definitely get opportunities that you would have never known about, like a scholarship or working in the research lab. I would have never gotten these opportunities if I hadn’t shown up. Joining IEEE and being active is the best thing you can do for your career.”


Tech

Intel joins Musk’s Terafab as foundry partner in $25B chip megaproject


In short: Intel has signed on as the primary foundry partner for Elon Musk’s Terafab, a $25 billion joint venture between Tesla, SpaceX, and xAI targeting a terawatt of AI compute per year, handing the struggling chip giant the marquee customer it has been searching for since pivoting to a foundry-first strategy.

On 7 April 2026, Intel announced it is joining the Terafab project, becoming the foundry partner for the most ambitious semiconductor facility ever proposed in the United States. The announcement came two weeks after Musk first unveiled Terafab at the North Campus of Giga Texas in Austin, a joint venture between Tesla, SpaceX, and xAI that claims it will produce one terawatt of AI compute every year. Intel’s role is to contribute its most advanced process node, packaging expertise, and manufacturing scale to make that claim real. For Intel chief executive Lip-Bu Tan, who has spent the past year attempting to rebuild Intel around an external foundry business, the deal is the most significant external customer win the company has landed since he took the job.

What Terafab is claiming to build

Terafab is designed as a vertically integrated semiconductor complex covering chip design, lithography, fabrication, memory production, advanced packaging, and testing under a single roof, with a stated goal of producing between 100 billion and 200 billion custom AI and memory chips per year. The initial buildout targets 100,000 wafer starts per month, with ambitions to eventually scale to one million wafer starts per month at full capacity. The project involves two separate facilities on the Giga Texas campus: one dedicated to chips for automotive and humanoid robotics applications, including Tesla’s Full Self-Driving system, its Cybercab robotaxi programme, and the Optimus robot line; and a second for high-performance AI data centre infrastructure and specialised processors for orbital deployments.
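The stated targets can be sanity-checked with simple arithmetic. Using only figures from the announcement, the implied die count per wafer works out as follows (the per-wafer figure is a derived plausibility check, not anything Terafab has disclosed):

```python
# Rough sanity check on Terafab's stated targets, using only the
# figures quoted in the announcement.
initial_wspm = 100_000          # initial wafer starts per month
full_wspm = 1_000_000           # stated full-capacity ambition
chips_low = 100e9               # low end of the claimed chips per year

wafers_per_year_initial = initial_wspm * 12   # 1.2 million wafers/year
wafers_per_year_full = full_wspm * 12         # 12 million wafers/year

# Dies per wafer implied by the low end of the chip target.
dies_needed_initial = chips_low / wafers_per_year_initial  # ~83,000
dies_needed_full = chips_low / wafers_per_year_full        # ~8,300
```

Even at full capacity, the low end of the chip target implies over 8,000 dies per wafer, which would require very small dies; at the initial buildout the implied figure is roughly ten times higher. That gap gives some sense of why analysts treat the headline numbers cautiously.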

That orbital component is central to the project’s rationale. SpaceX, which completed its acquisition of xAI in an all-stock deal in February 2026, creating a combined entity valued at approximately $1.25 trillion, is building out a constellation of space-based AI satellites internally designated AI Sat Mini. Musk has said 80% of Terafab’s compute output will be directed toward that orbital infrastructure, with the remaining 20% for ground-based applications. The full cost of the project has been cited as between $20 billion and $25 billion, though independent analysts have been sharply sceptical of whether that figure is remotely sufficient to meet the stated production targets. A note from Bernstein Research estimated the true capital required to hit one terawatt of annual compute at approximately $5 trillion,  more than 70% of the total annual United States federal budget.


Intel’s role, and what the deal is worth

Intel will contribute its 18A process node, the company’s most advanced logic manufacturing technology, currently ramping to high-volume production at Intel’s fabrication plants in Arizona and Oregon. Intel’s 18A is a 1.8-nanometre-class node, placing it in the same tier as the most advanced processes currently entering commercial production globally, and it represents the most sophisticated semiconductor capability manufactured entirely within the United States. Intel’s statement on joining Terafab was direct: “Intel is proud to join the Terafab project with SpaceX, xAI, and Tesla to help refactor silicon fab technology.” The company added: “Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics.”


Tan’s post on X was more personal in its framing. “Elon has a proven track record of reimagining entire industries,” he wrote. “This is exactly what is needed in semiconductor manufacturing today. Terafab represents a step change in how silicon logic, memory and packaging will get built in the future. Intel is proud to be a partner.” Intel’s shares rose approximately 4% on the announcement, closing at $52.91. The market reaction reflects how significant the deal is for Intel’s foundry ambitions: in its most recent full year, Intel Foundry generated just $307 million in external customer revenue, a figure that makes the company a distant also-ran against Taiwan Semiconductor Manufacturing Company, which generates tens of billions annually from external customers. Terafab, if even partially realised, would transform Intel Foundry’s commercial profile entirely.

Intel’s recovery, and what this bet requires

Tan inherited an Intel in acute crisis. The company had lost ground to TSMC and AMD across almost every major product category, its own manufacturing roadmap had slipped repeatedly, and its foundry business, the effort to manufacture chips for external customers as TSMC does, had attracted little meaningful interest beyond government-supported contracts under the US CHIPS and Science Act. Tan’s restructuring has been aggressive: thousands of redundancies, a sharper focus on Intel’s 18A and 14A process nodes as the foundation of the foundry pitch, and a deliberate effort to position Intel’s domestic manufacturing capability as a geopolitical differentiator at a moment when US policymakers are intensely focused on reducing dependence on Taiwanese chipmaking.

Terafab is the clearest expression yet of where that pitch lands. The CHIPS Act tailwinds, the Trump administration’s desire to see advanced semiconductor production in the United States, and the specific demand Musk’s companies represent for high-volume, US-manufactured chips at the leading edge: all of those forces converge in this partnership. Whether Intel’s 18A can deliver at the yields and volumes Terafab’s targets require is a separate question. The node has been in development for several years and is only now entering volume ramp; the gap between a controlled high-volume manufacturing ramp and the production scales Terafab envisions remains very large. Chipmakers building the largest foundries in the world require several years of construction and billions of dollars before the first wafer is processed. The scale of capital commitments now characterising AI infrastructure investment gives some context for what serious execution at Terafab’s claimed targets would actually require.

The credibility problem Terafab has not solved

The scepticism around Terafab is structural, not merely financial. Building a 2nm-class fabrication facility capable of 100,000 wafer starts per month costs roughly $25-35 billion on its own, according to Tom’s Hardware’s analysis of Bernstein’s research, meaning the entire stated Terafab budget is roughly enough to build a single fab operating at a fraction of the claimed full-capacity scale. Reaching one million wafer starts per month would require dozens of such facilities. The $20-25 billion figure appears to represent initial construction capital for the first phase, rather than the cost of the stated ambition.

There is also the question of the companies at the table. SpaceX-xAI’s internal situation has been turbulent: all 11 of xAI’s original co-founders have now left the company since the SpaceX acquisition, a rate of attrition that has raised questions about the organisation’s technical continuity. Musk’s companies also have a documented history of announcing timelines for facilities and products that subsequently stretch by years: Tesla’s Cybertruck, Optimus, and Full Self-Driving have each missed multiple committed dates without affecting the company’s willingness to make new commitments. None of this disqualifies Terafab; Musk’s companies have also delivered on goals that were widely dismissed, most notably SpaceX’s orbital launch programme. But it does establish why analysts are not taking the one-terawatt headline at face value.

What the partnership means for the chip industry

Intel’s arrival at Terafab lands at a moment when the chip industry is navigating a broader restructuring of who makes what and for whom. The rise of custom AI silicon (Amazon’s Trainium, Google’s TPUs, Microsoft’s Maia) has been eating into the share of AI workloads that run on Nvidia hardware. Nvidia’s response has been to open its NVLink Fusion interconnect to third-party silicon, including Marvell’s custom AI accelerators, a strategy designed to keep custom chip buyers inside Nvidia’s ecosystem even as they move off pure Nvidia hardware. Terafab represents something different: a vertically integrated attempt to produce custom silicon at a scale that has no precedent outside of the established foundry giants. If the project proceeds anywhere near its stated ambitions, it would add a third major domestic US semiconductor manufacturing ecosystem to a landscape currently dominated by TSMC’s Arizona expansion and Samsung’s Texas operations.

For Intel, the strategic logic is clear. As hyperscalers and technology companies increasingly pilot non-Nvidia chips for AI training and inference workloads, the market for foundry services from a domestically situated, leading-edge manufacturer is growing precisely when Intel has positioned itself to serve it. Whether Terafab is the vehicle that finally validates that positioning, or another ambitious announcement that tests the distance between Musk’s projections and physical reality, will become clearer as construction begins and wafer starts are counted rather than promised. The capital flowing into AI infrastructure at this scale has a way of turning implausible timelines into achieved ones, and Intel, for the first time in years, is positioned to benefit if it does.

Tech

Sony’s upcoming True RGB TVs look to set “a new benchmark” for picture quality

After several months of teasing, Sony has decided to fill us in on a few more details about its upcoming True RGB TV.

Although only a little bit more.

We still don’t have an idea of what the TV looks like (though we assume it’ll look like any other Sony TV), and there’s no word on pricing yet, but we do know there’ll be multiple TVs, as the press alert refers to “Bravia TVs”. On top of that, you won’t have too long to wait: they’re set for release this spring.

Decades in the making

This latest release provides another nugget of information, just a few days after Sony and TCL came to an agreement over the new TV venture they’ve established together, rather nicely called Bravia Inc.

Sony comments that its True RGB technology intends to set a new benchmark for RGB LED picture performance. Unlike conventional approaches to the technology, True RGB is said to use independently controlled red, green, and blue light sources (diodes) that can apparently deliver “purer colour, greater brightness, and the largest colour volume ever achieved in Sony’s home TV history”.

“True RGB” is the name Sony has given to the proprietary display technology powering its upcoming televisions, one it says sets a new benchmark for RGB LED picture performance. By combining individual RGB LEDs with the strengths of both Mini LED and OLED in one TV, we’re potentially looking at the ultimate TV viewing experience.

Sony’s hope with its True RGB technology is that picture quality looks more natural, more three dimensional, and more accurate, whether you’re viewing in a bright living room or otherwise.

Mini LED vs New Sony RGB Backlight (Image credit: Sony)

What makes Sony’s True RGB tick is the “proprietary optical structure and precision backlight control” that’s driven by a new RGB backlight driver. You can add “faithful colour reproduction from wider viewing angles” to the list of plusses that Sony’s True RGB backlight is bringing to the table.

Sony says that its True RGB is the culmination of more than 20 years of its “innovation in LED control”, from the first RGB light sources introduced in the QUALIA 005 in 2004, through to the much-praised flagship Backlight Master Drive that launched in 2016.

Similar to how James Bond will return, additional details will be shared in the “near future”. Trusted Reviews has been invited to a Sony Home Cinema event in May, and it’s looking likely that we’ll be seeing the future of Sony’s TVs there. Will it be a brighter and more colourful one?


Tech

Anthropic Unveils ‘Claude Mythos’, Powerful AI With Major Cyber Implications

“Anthropic has unveiled Claude Mythos, a new AI model capable of discovering critical vulnerabilities at scale,” writes Slashdot reader wiredmikey. “It’s already powering Project Glasswing, a joint effort with major tech firms to secure critical software. But the same capabilities could also accelerate offensive cyber operations.” SecurityWeek reports: Mythos is not an incremental improvement but a step change in performance over Anthropic’s current range of frontier models: Haiku (smallest), Sonnet (middle ground), and Opus (most powerful). Mythos sits in a fourth tier named Copybara, and Anthropic describes it as superior to any other existing AI frontier model. It embraces the defining trend in modern AI: agentic operation. “The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills… the model has the highest scores of any model yet developed on a variety of software coding tasks,” notes Anthropic in a blog post titled Project Glasswing — Securing critical software for the AI era.

In the last few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many classified as critical. Several are 10 or 20 years old; the oldest found so far is a 27-year-old bug in OpenBSD. Elsewhere, a 16-year-old vulnerability found in video software had survived five million hits from other automated testing tools without ever being discovered. And it autonomously found and chained together several vulnerabilities in the Linux kernel, allowing an attacker to escalate from ordinary user access to complete control of the machine. […] Anthropic is concerned that Mythos’ capabilities could unleash cyberattacks too fast and too sophisticated for defenders to block. It hopes that Mythos can be used to improve cybersecurity generally before malicious actors can get access to it.

To this end, the firm has announced the next stage of this preparation as Project Glasswing, powered by Mythos Preview. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. “Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play.” Claude Mythos Preview is described as a general-purpose, unreleased frontier model from Anthropic that has nevertheless completed its training phase. The firm does not plan to make Mythos Preview generally available. The implication is that ‘Preview’ is a term used solely to describe the current state of Mythos and the market’s readiness to receive it, and will be dropped when the firm gets closer to general release.
