
Tech

Amazon’s Tax Bill Plunges 87% After Tax Cuts


An anonymous reader shares a report: Republicans’ tax cuts shaved billions off Amazon’s tax bill, new government filings show. The company reported a $1.2 billion tax bill last year, down from $9 billion the previous year, even as its profits jumped 45% to nearly $90 billion.

That’s largely because of the generous new depreciation breaks GOP lawmakers included in their One Big Beautiful Bill, something that’s particularly important to Amazon which — in addition to maintaining a vast infrastructure for its ubiquitous delivery business — has been spending billions to build out artificial intelligence data centers.

Also helping, though less important: the law’s expanded breaks for businesses’ research and development expenses. The company has long been criticized by Democrats for paying little in tax, and it appeared to be bracing for criticism in the wake of the report to the Securities and Exchange Commission.


Tech

In with a bang, out in silence — the end of the Mac Pro


For almost two decades, the Mac Pro swung from coveted and beloved to derided and forgotten. Now, it’s finally over.

Apple is reportedly pressing the off switch on the Mac Pro

All political careers end in failure, and all devices fade out as they are eventually superseded. Yet this time it’s more that the Mac Pro has been usurped, and possibly even stabbed in the back.
If you’re a Mac Pro fan, you knew this day was coming, and you probably didn’t want to believe it. It’s true that the Mac Pro long ago lost its crown as the most powerful Mac, but this is still the legendary Mac Pro.
Continue Reading on AppleInsider | Discuss on our Forums


Tech

Is It Time For Open Source to Start Charging For Access?


“It’s time to charge for access,” argues a new opinion piece at The Register. Begging billion-dollar companies to fund open source projects just isn’t enough, writes long-time tech reporter Steven J. Vaughan-Nichols:


Screw fair. Screw asking for dimes. You can’t live off one-off charity donations… Depending on what people put in a tip jar is no way to fund anything of value… [A]ccording to a 2024 Tidelift maintainer report, 60 percent of open source maintainers are unpaid, and 60 percent have quit or considered quitting, largely due to burnout and lack of compensation. Oh, and of those getting paid, only 26 percent earn more than $1,000 a year for their work. They’d be better paid asking “Would you like fries with that?” at your local McDonald’s…

Some organizations do support maintainers. There’s HeroDevs, for example, and its $20 million Open Source Sustainability Fund. Its mission is to pay maintainers of critical, often end-of-life open source components so they can keep shipping patches without burning out. Sentry’s Open Source Pledge/Fund has given hundreds of thousands of dollars per year directly to maintainers of the packages Sentry depends on. Sentry is one of the few vendors that systematically maps its dependency tree and then actually cuts checks to the people maintaining that stack, as opposed to just talking about “giving back.”

Sentry is on to something. We have the Linux Foundation to manage commercial open source projects, the Apache Foundation to oversee its various open source programs, the Open Source Initiative (OSI) to coordinate open source licenses, and many more for various specific projects. It’s time we had an organization with the mission of ensuring that the top programmers and maintainers of valuable open source projects get a cut of the tech billionaire pie.


We must realign how businesses work with open source so that payment is no longer an optional charitable gift but a cost of doing business. To do that, we need an organization to create a viable, supportable path from big business to individual programmer. It’s time for someone to step up and make this happen. Businesses, open source software, and maintainers will all be better off for it.

One possible future… Bruce Perens wrote the original Open Source definition in 1997, and now proposes a not-for-profit corporation developing “the Post Open Collection” of software, distributing its licensing fees to developers while providing services like user support, documentation, hardware-based authentication for developers, and even help with government compliance and lobbying.


Tech

From “Hello, World!” to AI: What Skills Actually Prepare Students for the Future?


This article is part of the collection: Teaching Tech: Navigating Learning and AI in the Industrial Revolution.


A little over a decade ago, schools were swept into what many described as a movement to prepare students for the future of work. That work was coding — “Hello, world!”

Districts introduced new courses, nonprofits expanded access to computer science education and a growing ecosystem of programs promised to teach students the skills needed to enter the tech workforce. For many, it felt like a necessary correction to a rapidly digitizing world. But over time, a more complicated picture emerged.

While access to computer science education expanded, the relationship between early coding exposure and long-term workforce outcomes became uneven. The “learn to code” movement raised an important question that still lingers today: Which skills actually endure when technologies change? That question has resurfaced in a new form.


Today, generative AI is driving a similar wave of urgency. Schools are once again being encouraged to adapt quickly, often with the same underlying rationale that teachers must prepare students for a future shaped by emerging technologies.

But if the instructional role of AI remains unclear, and if the tools themselves are likely to evolve rapidly, the more persistent challenge may lie elsewhere.

After conducting a two-year research project alongside teachers who are adapting to and open to integrating AI, we found that uptake is still minimal. Most of our participants, including engineering and computer science teachers, still struggle to identify a clear or universal instructional use case for widespread AI integration.

So, what should students learn to help them adapt to whatever comes next?


A growing body of research suggests that the answer may lie not in teaching students how to use a particular AI system, but in helping them understand the computational ideas that make those systems possible.

The Limits of Teaching the Tool

In recent years, many discussions about AI education have centered on teaching students how to use generative tools effectively. Prompt engineering, for example, has become a common topic in professional development workshops and online tutorials.

Yet, focusing heavily on tool-specific skills can create a familiar educational problem, because technology changes faster than curricula.

Teaching students how to interact with a specific interface risks becoming the equivalent of teaching to standardized tests, rather than teaching students important lessons that don’t appear on state exams.


The history of computing education offers a useful example. In the early 2010s, a wave of coding initiatives encouraged schools to teach programming skills broadly. While many of those programs expanded access to computer science education, subsequent analysis showed that workforce pipelines in technology remained uneven, and many students learned tool-specific skills without developing deeper computational reasoning abilities.

That experience offers a cautionary lesson for the current AI moment. If the goal of integrating AI into education is long-term preparation for technological change, focusing narrowly on how to use today’s tools may not be the most durable strategy.

The Skill That Outlasts the Tool

A growing body of research suggests that computational thinking is a more durable educational objective.

Computational thinking refers to a set of problem-solving practices used in computer science and other analytical disciplines. These include:

  • breaking complex problems into smaller components
  • recognizing patterns
  • designing step-by-step processes
  • evaluating the outputs of automated systems
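As a hypothetical illustration (not drawn from the study, and using invented function names), these practices can be made concrete in a few lines of code: a larger question is decomposed into small named steps, the steps are composed into a process, and the output is checked rather than accepted blindly.

```python
# Hypothetical classroom-style example of computational thinking:
# decompose a problem, build a step-by-step process, evaluate the output.

def pages_per_day(total_pages: int, days: int) -> float:
    """Decomposition: one small, testable sub-problem."""
    return total_pages / days

def fits_schedule(total_pages: int, days: int, daily_limit: int) -> bool:
    """Step-by-step process built from the smaller component."""
    return pages_per_day(total_pages, days) <= daily_limit

# Evaluating the output of the automated process, not just trusting it:
result = fits_schedule(total_pages=210, days=7, daily_limit=30)
assert pages_per_day(210, 7) == 30.0   # sanity-check the intermediate step
print(result)  # → True (30 pages a day is exactly at the limit)
```

The point is not the arithmetic but the habit: each step is small enough to inspect, and the final answer can be traced back through the process that produced it.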

These skills apply not only to programming but also to fields ranging from engineering to public policy.

Importantly, they also help students understand how algorithmic systems operate.

When students learn computational thinking, they gain the ability to analyze how technologies like AI produce results rather than simply accepting those results as authoritative.

In this sense, computational thinking provides a conceptual bridge between traditional academic skills and emerging digital systems.

What Teachers Are Already Doing

Many teachers in our study were already moving in this direction, often without using the term computational thinking.


When teachers asked students to analyze chatbot errors, they were encouraging students to examine how algorithmic systems produce outputs. When they designed exercises comparing training data and algorithms to everyday processes, they were helping students reason about how automated systems work.

These approaches do not require students to rely heavily on AI tools themselves. Instead, they position AI as a case study for examining how technology shapes information.

That framing aligns with longstanding educational goals around critical thinking, media literacy and problem-solving.

Implications for Educators

If the instructional use case for generative AI remains uncertain, educators may benefit from focusing on skills that remain valuable regardless of which tools dominate in the future.


Several practical approaches are already emerging in classrooms. Teachers can use AI systems as objects of analysis, asking students to evaluate outputs, identify errors and investigate how models generate responses.

Lessons can connect AI to broader topics such as data quality, algorithmic bias and information reliability.

Assignments that emphasize reasoning, structured problem solving and evidence evaluation continue to support the kinds of cognitive work that remain central to learning.

These approaches allow students to engage with AI without allowing the technology to replace the thinking process itself.


Implications for EdTech Developers

The experiences teachers described also highlight an opportunity for edtech companies.

Many current AI tools were developed as general-purpose language systems and later introduced into education contexts. As a result, teachers are often left to determine whether and how those tools align with classroom learning goals. Future products may benefit from deeper collaboration with educators during the design process.

Teachers in our conversations were already experimenting with small classroom applications, designing AI literacy lessons and building course-specific chatbots.

These experiments resemble early-stage product development.


Partnerships between educators, edtech developers and product managers could help identify instructional problems that AI systems could realistically address.

The Next Phase of the Research

The conversations described in this series represent an early attempt to document how teachers are navigating the arrival of generative AI.

As schools continue experimenting with these tools, the next challenge will be to develop governance frameworks that help educators evaluate when and how AI should be used in learning environments.

Our research team is beginning the next phase of this work by partnering with school districts to develop guidance for AI governance and inviting edtech companies interested in exploring these questions collaboratively.


Rather than assuming that AI will inevitably transform classrooms, this phase of the project will focus on identifying the conditions under which AI tools actually support teaching and learning and how to reduce harm when they don’t.

The fourth-grade teacher’s question remains a useful guide: What can I actually use this for in math?

Until the answer becomes clearer, many teachers will likely continue doing what professionals in any field do when new technologies appear: experimenting cautiously, adopting what works and relying on their judgment to decide where or if the tool belongs.


If your school, district, organization, or edtech company is interested in learning more about joining our next project on AI governance, contact our research team at research@edsurge.com.


Tech

French AI start-up Mistral raises $830m in debt


The Paris-based company is building out ‘cutting-edge’ European data centres with a total capacity ambition of 200MW by 2027.

French AI start-up Mistral has raised $830m in its first debt financing to fund its data centre near Paris.

The company said the deal, supported by a consortium of seven “top-tier” global banks, would pay for Nvidia Grace Blackwell infrastructure with 13,800 Nvidia GB300 GPUs at the “cutting-edge” centre, bringing powered capacity to 44MW.

The data centre at Bruyères-le-Châtel, scheduled to be operational in the first half of this year, was previously earmarked to train AI models belonging to Mistral and its customers, while also “delivering high-performance inference services”, according to the company.


Last month, Mistral said it would spend over $1.4bn in Sweden on digital infrastructure, including a data centre, building towards its stated goal of 200MW of capacity across Europe by 2027.

“Scaling our infrastructure in Europe is critical to empower our customers and to ensure AI innovation and autonomy remain at the heart of Europe,” said Arthur Mensch, CEO of Mistral AI.

“We will continue to invest in this area, given the surging and sustained demand from governments, enterprises and research institutions seeking to build their own customised AI environment, rather than depend on third-party cloud providers.”

Mistral said it is building a “vertically integrated AI company” comprising “frontier open-weight models, deep enterprise integration, production deployments and its own compute infrastructure”.


It counts organisations in the tech, retail, logistics and public sectors among its customers, and has already partnered with the likes of ASML, Ericsson and the European Space Agency to train models on their proprietary data.

Earlier this month, Mistral launched both ‘Small 4’, the newest model in its fully open-source ‘Small’ series with the aim of consolidating the capabilities of its flagship models, and ‘Forge’, a platform that lets enterprises build custom models trained on their own data.

Last September, the 2023-founded French AI darling announced a Series C raise of around $2bn at a post-money valuation of more than $13bn, led by Dutch chipmaker ASML. Existing investors DST Global, Andreessen Horowitz, Bpifrance, General Catalyst, Index Ventures, Lightspeed and Nvidia took part.

Although a frontrunner in the European AI space, Mistral is far behind US competitors such as OpenAI and Anthropic in terms of funding levels and valuations.


Mistral is a founding member of the Nvidia Nemotron Coalition. As part of the initiative, Mistral and Nvidia plan to co-develop frontier open-source AI models.



Tech

NASA Picks Intuitive Machines for a 2030 Artemis Moon Delivery Loaded with Science Tools and a Human Time Capsule


NASA has awarded Intuitive Machines a $180.4 million contract to deliver seven science payloads to a carefully chosen site near the lunar south pole. The Houston-based company will use one of its larger lander configurations for the mission, designated IM-5, with a target landing date of around 2030 at Mons Malapert. The location was selected for good reason. The ridge maintains a fairly consistent line of sight with Earth, receives relatively steady sunlight, and sits close to permanently shadowed regions that may hold water ice, a resource that could prove critical to sustaining long-term human operations on the Moon.



The lander arrives loaded with instruments ready to start collecting data from the moment it touches down. A stereo camera package developed at NASA’s Langley Research Center, called the Stereo Cameras for Lunar Plume Surface Studies, will capture how the descent engines disturb the fine lunar soil, information that will help engineers design landing systems that cause less disruption to the surface. A near infrared spectrometer mounted on a small rover from Honeybee Robotics, led by NASA’s Ames Research Center, will then scan for minerals and potential ice deposits while also measuring surface temperatures and mapping how the soil composition varies across the landing area.



A mass spectrometer called MSolo, built at NASA’s Kennedy Space Center, will analyze gases present at the landing site immediately after touchdown, focusing on lightweight molecules that could prove useful for future lunar explorers. Radiation monitoring is handled by a set of four detectors developed by the Korea Astronomy and Space Science Institute, measuring surface exposure levels to assess risks for both equipment and future crew while also providing insight into the geological history of the surrounding area.


A set of small sensors aboard the Australian Space Agency’s Roo-ver will track how landing plumes interact with surface materials across varying distances over time, part of NASA Goddard Space Flight Center’s Multifunctional Nanosensor Platform. The Roo-ver will also demonstrate its ability to navigate and move independently across uneven lunar terrain. A Laser Retroreflector Array, also out of Goddard, rounds out the payload with a compact set of mirrors designed to bounce laser signals back to orbiting spacecraft, improving navigation accuracy for future missions passing overhead or coming in to land nearby and helping establish reliable reference points across the lunar surface.



Rounding out the cargo is Sanctuary on the Moon, a time capsule developed in France containing information about human civilization, science, technology, culture, and the human genome, etched onto 24 durable synthetic sapphire discs. It is built to last, and designed to be found.


Tech

Google’s new compression drastically shrinks AI memory use while quietly speeding up performance across demanding workloads and modern hardware environments



  • Google TurboQuant reduces memory strain while maintaining accuracy across demanding workloads
  • Vector compression reaches new efficiency levels without additional training requirements
  • Key-value cache bottlenecks remain central to AI system performance limits

Large language models (LLMs) depend heavily on internal memory structures that store intermediate data for rapid reuse during processing.

One of the most critical components is the key-value cache, described as a “high-speed digital cheat sheet” that avoids repeated computation.
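The excerpt does not detail how TurboQuant itself works, but the memory pressure it targets can be sketched with a much simpler, generic technique: per-vector int8 quantization of a toy key-value cache. Everything below is an illustrative assumption, not Google’s method; it shows the basic trade being made, a small reconstruction error in exchange for roughly a 4x memory saving over float32.

```python
import numpy as np

# Illustrative sketch only (NOT TurboQuant): compress a toy KV cache
# from float32 to int8 with one scale factor per vector.

rng = np.random.default_rng(0)
kv_cache = rng.standard_normal((1024, 128)).astype(np.float32)  # toy cache

def quantize(v):
    """Map a float32 vector onto int8 using a per-vector scale."""
    scale = np.abs(v).max() / 127.0
    return (v / scale).round().astype(np.int8), scale

def dequantize(q, scale):
    """Recover an approximate float32 vector from its int8 form."""
    return q.astype(np.float32) * scale

quantized = [quantize(v) for v in kv_cache]
restored = np.stack([dequantize(q, s) for q, s in quantized])

fp32_bytes = kv_cache.nbytes                      # 1024 * 128 * 4 bytes
int8_bytes = sum(q.nbytes for q, _ in quantized)  # 1024 * 128 * 1 byte
print(fp32_bytes / int8_bytes)                    # → 4.0 (ignoring the scales)
print(np.abs(kv_cache - restored).max())          # small reconstruction error
```

A production scheme has to keep that reconstruction error low enough that model accuracy is unaffected, and to do so without retraining, which is the hard part the article’s bullet points allude to.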


Tech

Apple’s Early Days: Massive Oral History Shares Stories About Young Wozniak and Jobs


Apple’s 50th anniversary is this week — and Fast Company’s Harry McCracken just published an 11,000-word oral history with some fun stories from Apple’s earliest days and the long and winding road to its very first home computers:


Steve Wozniak, cofounder, Apple: I told my dad when I was in high school, “I’m going to own a computer someday.” My dad said, “It costs as much as a house.” And I sat there at the table — I remember right where we were sitting — and I said, “I’ll live in an apartment.” I was going to have a computer if it was ever possible. I didn’t need a house.

Woz even remembers trying to build a home computer early on with a teenaged Steve Jobs and Bill Fernandez from rejected parts procured from local electronics companies. Woz designed it — “not from anybody else’s design or from a manual. And Fernandez was one of those kids that could use a soldering iron.”

Bill Fernandez: The computer was very basic. It was working, and we were starting to talk about how we could hook a teletype up to it. Mrs. Wozniak called a reporter from the San Jose Mercury, and he came over with a photographer. We set up the computer on the floor of Steve Wozniak’s bedroom.

Well, the core integrated circuit that ran the power supply that I built was an old reject part. We turned on the computer, and the power supply smoked and burnt out the circuitry. So we didn’t get our photos in the paper with an article about the boy geniuses.

But within a few years Jobs and Wozniak both wound up with jobs at local tech companies. Atari cofounder Nolan Bushnell remembers that Steve Jobs “wasn’t a good engineer, but he was a great technician. He was pristine in his ability to solder, which was actually important in those days.” Meanwhile Allen Baum had shared Wozniak’s high school interest in computers, and later got Woz a job working at Hewlett-Packard — where employees were allowed to use stockroom parts for private projects. (“When he needed some parts, even if we didn’t have them, I could order them.”) Baum helped with the Apple I and II, and joined Apple a decade later.


Wozniak remembers being inspired to build that first Apple I by the local Homebrew Computing Club, people “talking about great things that would happen to society, that we would be able to communicate like we never did [before] and educate in new ways. And being a geek would be important and have value.” And once he’d built his first computer, “I wanted these people to help create the revolution. And so I passed out my designs with no copyright notices — public domain, open source, everything. A couple of other people in the club did build it.”

But Woz and Jobs had even tried pitching the computer as a Hewlett-Packard product, Woz remembers:

Steve Wozniak: I showed them what it would cost and how it would work and what it could do with my little demos. They had all the engineering people and the marketing people, and they turned me down. That was the first of five turndowns from Hewlett-Packard. Steve Jobs and I had to go into business on our own.

In the end, Randy Wigginton, Apple employee No. 6, remembers witnessing Jobs, Wozniak, and Ronald Wayne signing Apple’s founding contract, “which is pretty funny, because I was 15 at the time.” And it was Allen Baum’s father who gave Wozniak and Jobs the bridge loan to buy the parts they’d need for their first 500 computers.

After all the memories, the article concludes that “Trying to connect every dot between Apple, the tiny, dirt-poor 1970s startup, and Apple, the $3.7 trillion 21st-century global colossus, is impossible.”

But this much is clear: The company has always been at its best when its original quirky humanity and willingness to be an outlier shine through.


Mark Johnson, Apple employee No. 13: I was in Cupertino just yesterday. It’s totally different. They own Cupertino now.

Jonathan Rotenberg, who cofounded the Boston Computer Society in 1977 at age 13: People want to hate Apple, because it is big and powerful. But Apple has an underlying moral purpose that is immensely deep and expansive…

Mike Markkula, the early retiree from Intel whose guidance and money turned the garage startup into a company: The culture mattered. People were there for the right reasons — to build something transformative — not just to make money. That alignment produced extraordinary results…

Steve Wozniak: Everything you do in life should have some element of joy in it. Even your work should have an element of joy… When you’re about to die, you have certain memories. And for me, it’s not going to be Apple going public or Apple being huge and all that. It’s really going to be stories from the period when humble people spotted something that was interesting and followed it.


I’ll be thinking of that when I die, along with a lot of pranks I played. The important things.


Tech

Kandou AI raises $225M from SoftBank and Synopsys to solve AI’s memory wall


Kandou AI, a Swiss semiconductor company that builds chip-to-chip interconnect technology, has raised $225 million in what it calls a Series A round, led by Maverick Silicon with strategic participation from SoftBank, Synopsys, Cadence Design Systems, and Alchip Technologies. The round values the company at $400 million. The label is worth pausing on: Kandou was founded in 2011 and previously raised more than $163 million across Series B and C rounds under the name Kandou Bus. The “Series A” designation reflects a rebrand and leadership change, not a fresh start.

The company’s new chief executive, Srujan Linga, a former Goldman Sachs managing director, took over in 2025 from founder Amin Shokrollahi, an EPFL professor of mathematics and computer science who invented the core technology. Shokrollahi’s contribution, a signalling method called Chord that sends correlated signals across multiple wires to increase bandwidth by a factor of two to four while halving power consumption, remains the technical foundation. The rebrand to Kandou AI and the repositioning toward artificial intelligence infrastructure is Linga’s doing, and it appears to have worked: the $225 million raise is the largest in the company’s history and brings SoftBank, one of the most aggressive AI infrastructure investors, onto the cap table.

The bet against light

What makes Kandou AI’s position unusual is not the problem it is trying to solve but the material it proposes to solve it with. The AI industry’s interconnect bottleneck is real and well documented. As models scale to hundreds of billions of parameters and training clusters expand to tens of thousands of GPUs, the speed at which data moves between processors and memory has become the binding constraint on performance. At signalling speeds of 224 gigabits per second, traditional copper interconnects consume roughly 30 per cent of total cluster power, with signal degradation so severe that reach is limited to less than a metre without amplification.

The prevailing industry response has been to move to optics. Ayar Labs raised $500 million in March 2026 at a $3.8 billion valuation for its co-packaged optical interconnects. Marvell completed a $3.25 billion acquisition of Celestial AI in February, buying photonic fabric technology that claims 25 times the bandwidth of copper alternatives at a tenth of the latency. The optical interconnect market for AI data centres is projected to grow from $3.75 billion in 2025 to $18.36 billion by 2033.


Kandou AI is betting that copper is not finished. Its Chord signalling technology, the company claims, can achieve path-to-Shannon-capacity efficiency, reducing power consumption and system costs by a factor of ten while extending copper links to 448 gigabits per second and beyond. If that claim holds, it would mean that the billions being spent on optical interconnect transitions are at least partially premature, and that existing copper infrastructure can be made to work for several more hardware generations at a fraction of the cost.

The strategic investors tell the story

The composition of the investor syndicate matters more than the headline figure. Synopsys and Cadence are the two dominant providers of electronic design automation tools. Their participation is not purely financial; it signals potential integration of Kandou AI’s serialiser/deserialiser intellectual property into the design flows that chip architects use to build processors and memory controllers. Alchip, a Taiwanese ASIC design services company, provides a path to manufacturing. SoftBank, which has invested more than $100 billion in AI-adjacent companies through its Vision Fund and direct investments, adds the scale capital and the strategic network.

The practical implication is that Kandou AI’s technology could appear inside chips designed by other companies rather than requiring customers to adopt Kandou’s own silicon. This is a licensing and IP model, similar in structure to Arm’s approach in mobile processors, and it is a more capital-efficient path to market dominance than manufacturing and selling chips directly. Whether Kandou can execute on that model with a $400 million valuation and $225 million in fresh capital, against optical competitors valued at ten times as much, is the central question.

The valuation gap

At $400 million, Kandou AI is valued at roughly a tenth of Ayar Labs and an eighth of what Marvell paid for Celestial AI. That gap could reflect market scepticism about copper’s longevity in AI infrastructure, or it could reflect the fact that Kandou’s technology, if it works as claimed, does not require the industry to rip out its existing wiring. Copper is already in every data centre. If Kandou’s signalling technology can make it fast enough for another generation of AI workloads, the adoption curve would be faster and cheaper than an optical transition.

The risk is that “another generation” may not be long enough. AI model sizes and training cluster scales are growing at a pace that consistently outstrips infrastructure predictions. What is adequate at 448 gigabits per second today may be inadequate at the terabit-per-second speeds that next-generation models will demand within two to three years. Optical interconnects, for all their cost and complexity, offer a higher theoretical ceiling.


Kandou AI’s $225 million buys it time to prove that the ceiling can wait. The company’s 15-year history and the technical credibility of Chord signalling, which has been deployed commercially in consumer electronics since the mid-2010s, lend substance to the bet. But the AI infrastructure market has a pattern of rewarding ambition over incrementalism, and a company arguing that the existing material is good enough faces a harder narrative sell than one promising to replace it entirely. The investors on this round appear to be betting on engineering pragmatism. Whether the market agrees will depend on how quickly the optical transition matures, and whether Kandou’s copper can keep pace with an industry that has shown little interest in waiting for anything.


Tech

Meta launches prescription Ray-Ban smart glasses to reach billions of eyewear buyers

Published

on

Meta is preparing to launch two new Ray-Ban smart glasses models designed specifically for prescription wearers, according to a Bloomberg report published on Thursday. The models, codenamed Scriber and Blazer, were first spotted in Federal Communications Commission filings and are expected to reach consumers as early as next week. They do not represent a new generation of hardware. They represent something potentially more important: a distribution strategy.

Prescription eyewear accounts for roughly 69 per cent of the $223 billion global eyewear market. Meta sold more than seven million Ray-Ban and Oakley AI frames in 2025, an impressive figure for a product category that barely existed three years ago, but a rounding error against the estimated 1.5 billion people worldwide who wear corrective lenses. The new models are Meta’s clearest attempt yet to move smart glasses from consumer electronics into mainstream optical retail, where the customers, the margins, and the scale are all substantially larger.

What the new models are, and what they are not

Scriber and Blazer are non-display AI glasses, similar in capability to the existing Ray-Ban Meta line: camera, microphone, speakers, and Meta AI integration, but no screen. Blazer will come in regular and large sizes; Scriber appears to be a single-size offering. Both include Wi-Fi 6 UNII-4 band support, an upgrade over current models, and will ship with charging cases.

The distinction matters because Meta already sells a display-equipped model. The Ray-Ban Meta Display, launched at Connect 2025, includes a full-colour heads-up display, a 12-megapixel camera with 3x zoom, and pairs with a neural wristband that reads muscle signals to navigate the interface. It costs $799. Orion, Meta’s full augmented reality prototype with holographic displays, remains a research project with no consumer launch date.

Scriber and Blazer sit below both in the product hierarchy. Their purpose is not to showcase Meta’s most advanced technology but to put Meta AI into the frames that people already need to buy. The insight behind the move is straightforward: if someone requires prescription lenses and is going to spend several hundred dollars at an optician regardless, the incremental cost of making those lenses smart drops significantly. Mark Zuckerberg made the strategic logic explicit on a recent earnings call, noting that “billions of people wear glasses or contacts for vision correction” and suggesting it is “hard to imagine a world in several years where most glasses that people wear aren’t AI glasses.”

The EssilorLuxottica question

The prescription pivot also runs directly into the most complex relationship in Meta’s hardware business. EssilorLuxottica, the Franco-Italian conglomerate that owns Ray-Ban, Oakley, LensCrafters, and Sunglass Hut, manufactures all of Meta’s smart glasses and controls the optical retail channel through which the new models will be sold. The partnership has delivered results, but it has also generated friction.

Bloomberg reported in February that the two companies are working through disagreements over pricing and strategy. EssilorLuxottica’s adjusted gross margin fell 2.6 percentage points in 2025 to 60.9 per cent, partly because of the higher component costs that smart glasses require compared with conventional frames. Meta wanted to offer Black Friday discounts in 2023; EssilorLuxottica, which guards its luxury positioning carefully, rejected the idea. The tension is structural: Meta wants to maximise adoption and lock users into its AI ecosystem. EssilorLuxottica wants to protect margins on a product line that is eroding them.

Prescription models could ease that tension. Prescription lenses carry higher retail prices and fatter margins than non-prescription sunglasses. The lens coatings, custom grinding, and fitting appointments that prescription orders require generate additional revenue at every stage of the value chain. If smart glasses move into the prescription channel at scale, the economics improve for EssilorLuxottica even as volumes increase for Meta. The companies are reportedly considering doubling their combined production target to 20 million units per year, up from an estimated 10 million capacity by the end of 2026.

The risks in the optician’s chair

Selling smart glasses through optical retail introduces complications that consumer electronics channels do not. Opticians are trained to fit lenses, not to explain AI assistants, camera privacy settings, or software updates. The customer experience in a LensCrafters is fundamentally different from the experience in a Meta Store or an Apple Store, and the staff training, product support, and return handling required for a connected device are orders of magnitude more complex than for a pair of Wayfarers.

There is also the legal exposure. Solos Technology filed a patent infringement suit against Meta and EssilorLuxottica in January 2026, claiming that the Ray-Ban Meta line violates several patents covering core smart eyewear technologies and seeking “multiple billions of dollars” in damages. A second patent front, on top of the partnership tension and the margin pressure, adds risk to a product line that Meta is treating as the foundation of its wearable AI strategy.

The smart glasses market itself is growing rapidly, from an estimated $2.5 billion in 2025 to a projected $14.4 billion by 2033 according to Grand View Research, but nearly all of that growth is speculative and dependent on whether consumers will choose connected frames when ordinary ones are cheaper, lighter, and carry no privacy concerns. Meta’s bet is that AI functionality, specifically the ability to ask questions, get real-time information, and interact with digital services without reaching for a phone, will be compelling enough to overcome those objections.
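For context, the growth rate implied by those projections is steep but not unprecedented for a young category (the CAGR below is derived from the quoted figures, not stated by Grand View Research):

```python
# Implied compound annual growth rate from $2.5bn (2025) to $14.4bn (2033).
start_bn, end_bn, years = 2.5, 14.4, 8
cagr = (end_bn / start_bn) ** (1 / years) - 1   # roughly 24.5% per year
```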

Scriber and Blazer are not the product that will test that bet definitively. They are the product that puts Meta’s AI into opticians’ fitting trays, onto the faces of people who were going to buy new glasses anyway, and into a distribution channel that reaches billions of potential customers. The technology is incremental. The strategic ambition is not.

Source link

Continue Reading

Tech

Hisense UR9 RGB MiniLED 4K TVs Aim To Set New Standard in Color Performance in 2026

Published

on

The TV technology arms race is accelerating in 2026, and RGB MiniLED has quickly emerged as one of the key battlegrounds. Hisense is stepping directly into that fight with its newly announced UR9 RGB MiniLED TV lineup, offered in 65, 75, 85, and 100-inch screen sizes. The move puts it alongside Samsung, TCL, and LG, all of whom are pushing next-generation backlighting systems aimed at improving brightness, color accuracy, and contrast control, right as consumers continue to gravitate toward much larger displays.

That shift in demand is impossible to ignore. Screens that once felt excessive now look like the new normal, especially as prices fall and living rooms evolve into full-time viewing spaces. Hisense is clearly leaning into that trend with the UR9 series, positioning RGB MiniLED as a practical upgrade for buyers who want bigger screens and better performance without stepping into ultra-premium territory.

“The living room has become the social centerpiece of the home, with your screen starring at the center of it all,” said James Fishler, Chief Commercial Officer at Hisense USA. “Nearly 90% of Americans say bold, vibrant color makes them more interested in what they’re watching — and that’s exactly why we built the UR9. As the first to bring RGB MiniLED to market, we’re setting a new standard for color performance in home viewing experiences.”

RGB MiniLED Explained: Why This New Backlight Tech Matters

Hisense RGB MiniLED backlight diagram

An RGB MiniLED TV is still an LCD-based display, but it takes MiniLED backlighting a step further by using individual red, green, and blue LEDs instead of the traditional white or blue-only LEDs found in most LED and MiniLED TVs. This tri-color backlight structure allows for far more precise control over both brightness and color, rather than relying on filters to shape the image after the fact.

The result is a wider color range, up to full BT.2020 coverage, along with improved contrast and more accurate detail rendering. With Pantone Validated RGB MiniLED color support, the technology is designed to deliver more lifelike images with better separation between light and dark areas, exposing details that conventional LED backlit displays often miss.
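The difference is easiest to see in a toy model of a single pixel (a deliberate simplification: real panels add color filters, LED spectra, and thousands of dimming zones):

```python
def pixel_output(backlight_rgb, lc_transmittance):
    """Light emitted per channel: backlight intensity gated by the LC cell.

    A conventional white backlight shares one intensity across all three
    channels, so dark colors leak light; an RGB backlight can also dim
    each primary at the source, before the LC layer shapes it further.
    """
    return tuple(b * t for b, t in zip(backlight_rgb, lc_transmittance))

lc_near_black = (0.02, 0.02, 0.02)                    # LC cells showing near-black
white = pixel_output((1.0, 1.0, 1.0), lc_near_black)  # white backlight stays on
rgb = pixel_output((0.05, 0.05, 0.05), lc_near_black) # RGB zone dims itself too
# rgb leaks about 20x less light than white, i.e. visibly deeper blacks
```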

For a deeper dive into RGB MiniLED and Micro RGB LED technology, check out our reference article: WTF Are RGB MiniLED and Micro RGB LED TVs? Breaking Down the Next Gen Display Tech.

Hisense was first to bring RGB MiniLED technology to market, getting out ahead of rivals with a consumer-ready implementation. The UR9 Series represents the next phase of that strategy, expanding the lineup with a broader range of screen sizes aimed at meeting growing demand for larger displays.

Hisense UR9 RGB MiniLED TVs: Key Features, Screen Sizes, and What Sets Them Apart

RGB MiniLED: This is the foundation of the UR9 series. Available in 65, 75, 85, and 100-inch screen sizes, it uses independent red, green, and blue MiniLED light sources to generate color directly, rather than relying on a white backlight and filters. The payoff is more accurate color reproduction, improved contrast, deeper blacks with better shadow detail, and brighter, more controlled highlights.

Hi-View AI Engine RGB: To support the RGB MiniLED backlight system, the UR9 series integrates Hisense’s Hi-View AI Engine, which analyzes content in real time and adjusts brightness, contrast, and color temperature on the fly. It can recognize different types of content, such as sports, movies, streaming, and gaming, and optimize the picture accordingly, reducing the need for constant manual adjustments.

The 65/75/85-inch Hisense UR9 models all share the same stand.

Obsidian Panel: Hisense’s low reflection screen surface is designed to reduce glare from windows and room lighting while maintaining strong contrast. Dark scenes hold onto their detail and depth even in bright environments, making daytime viewing far less of a compromise.

Up to 5000 Nits Peak Brightness: Combined with the low reflection properties of the Hisense Obsidian Panel, the UR9 series can deliver up to 5000 nits of peak brightness, depending on the model and screen size. This level of light output helps maintain image clarity and impact in bright rooms and daytime viewing conditions.

AI RGB Light Sensor: This feature automatically adjusts brightness and color temperature based on the lighting in your room, helping the picture stay balanced and natural whether you are watching during the day or at night. It works hand in hand with the UR9’s high light output to keep the image consistent without constant manual adjustment.

IMAX Enhanced and Filmmaker Mode: IMAX Enhanced support allows the UR9 to deliver optimized picture and DTS audio performance for compatible content, including the correct aspect ratio used in IMAX presentations. Filmmaker Mode takes a different approach by preserving the original aspect ratio, color, frame rate, and sound, ensuring content is presented as the director intended without added processing.

Native 180Hz Game Mode: For gamers, the UR9 series supports a native 180Hz refresh rate, delivering fast, responsive performance with reduced motion blur and input lag. Rapid camera movement, competitive gameplay, and live sports all benefit from sharper detail and smoother motion.

Enhanced Game Bar: Hisense’s Game Bar provides real-time access to key settings such as FPS, VRR, and HDR. It allows for quick adjustments without interrupting gameplay, which is exactly how it should work.

AI Smooth Motion with MEMC: The UR9 also includes AI Smooth Motion along with standard motion estimation and motion compensation (MEMC) processing to reduce blur, judder, and stutter. It can improve clarity across sports, movies, and games, but there is a trade-off: for films, it is best to turn it off if you want to preserve a more natural look. Filmmaker Mode handles that automatically and saves you from digging through menus.
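For the curious, here is a deliberately tiny one-dimensional sketch of why motion compensation beats simple frame blending (real MEMC does block matching over 2-D frames; nothing here reflects Hisense’s actual pipeline):

```python
def blend(frame_a, frame_b):
    """Naive interpolation: average two frames; moving objects ghost."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def motion_compensated(frame, shift):
    """Toy MEMC: place the object at half the estimated motion."""
    half = shift // 2
    return frame[-half:] + frame[:-half] if half else frame[:]

a = [0, 0, 1, 0, 0, 0, 0, 0]             # bright object at index 2
b = [0, 0, 0, 0, 1, 0, 0, 0]             # object has moved to index 4
ghosted = blend(a, b)                    # two half-bright copies at 2 and 4
interpolated = motion_compensated(a, 2)  # one full-bright object at index 3
```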

Total HDR Solution: The Hisense UR9 is compatible with advanced HDR formats (Dolby Vision IQ, HDR10+ Adaptive), preserving creative detail and dynamically adjusting brightness based on both content and room lighting.

4K UltraHD Resolution & AI 4K Upscaler: The UR9 Series TVs have a native 4K UHD resolution and use AI 4K upscaling to get the best possible image from lower-resolution content.

3x HDMI 2.1: The Hisense UR9 TVs support HDMI 2.1 on all three of their HDMI inputs. For details on what HDMI 2.1 supports, refer to our reference article: WTF is HDMI 2.1?
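A simplified bandwidth estimate shows why the 180Hz panel still fits (with compression) on HDMI 2.1, and why some expected HDMI 2.2 here. This sketch ignores blanking intervals, and the effective-payload figure assumes HDMI 2.1’s 48 Gb/s FRL link with 16b/18b encoding:

```python
def uncompressed_gbps(width, height, fps, bits_per_channel=10, channels=3):
    """Raw video data rate in Gb/s, ignoring blanking and link overhead."""
    return width * height * fps * bits_per_channel * channels / 1e9

rate_4k180 = uncompressed_gbps(3840, 2160, 180)   # about 44.8 Gb/s at 10-bit color
HDMI21_EFFECTIVE_GBPS = 48 * 16 / 18              # about 42.7 Gb/s usable payload
needs_dsc = rate_4k180 > HDMI21_EFFECTIVE_GBPS    # True: DSC compression required
```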

Wi-Fi 6E: The UR9 Series is equipped with Wi-Fi 6E connectivity, which supports high-resolution streaming, cloud gaming, and multi-device households, provided you have high-speed broadband access.

4.1.2 Multi-Channel Surround & Tuned by Devialet: To provide a solid foundation for TV listening, the UR9 incorporates precision speakers tuned by Devialet that deliver layered, multidirectional, room-filling audio. Dialogue stays clear while effects move naturally around and above you. The UR9 series is also Dolby Atmos compatible.

Hi-Concerto: In addition to Devialet tuning, UR9 TVs include Hisense Hi-Concerto, which lets the TV’s speakers work in tandem with compatible Hisense soundbars or the HT Saturn audio system. This is similar to Samsung’s Q-Symphony, offering users a more integrated audio setup without needing to disable the TV’s own speakers.

Google TV with Gemini: The UR9 series runs Google TV, bringing together movies, shows, and live TV from your streaming services into a single interface with access to over 10,000 apps. Gemini adds a more conversational layer, allowing users to ask more natural questions and get useful responses, while also helping with voice control and basic automation.


Backlit Remote: Hisense includes a backlit voice remote with practical touches like a customizable favorite key for quick app access and a Find My Remote function. The backlighting adjusts automatically based on room conditions, making it easier to use in both bright and dark environments.

Minimalist Design: The UR9 features a clean, understated chassis with a slim profile that keeps the focus on the screen. It integrates easily into a range of setups, whether wall mounted or placed on a stand or media console.

Left to right: Hisense UR9 65/75-inch, 85-inch, and 100-inch side views

The Bottom Line 

The Hisense UR9 series is one of the first serious attempts to bring RGB MiniLED into a full consumer lineup, not just a limited run of oversized flagship screens. That alone makes it notable. Hisense moved quickly, beating Samsung to market with a broader range of sizes from 65 to 100 inches, and positioned RGB MiniLED as a practical step forward in backlight precision, color performance, and brightness for real world viewing.

But there are tradeoffs. Pricing is steep, with the 65-inch model starting at $3,499, and there is no support for HDMI 2.2, which some expected to see at this level. That makes the UR9 feel a bit early-adopter focused. Hisense is clearly aware of that, which is why the pre-order promotion matters. Offering a free 55-inch TV alongside the purchase could take some of the sting out of the price and give buyers a reason to jump in sooner rather than later.

The bigger picture is where things get interesting. This puts immediate pressure on Samsung, which has talked up Micro RGB LED but has yet to deliver a full lineup, and leaves the door open for TCL and Sony to respond with their own approaches. The UR9 is not a safe play, but it is a strategic one. If RGB MiniLED delivers on its promise, Hisense just bought itself a head start in what is shaping up to be the next major TV technology fight.


Availability & Pricing

Pre-orders for the Hisense UR9 RGB MiniLED TV are now open at Hisense and Best Buy.

Pro Tip: Customers who pre-order a UR9 Series RGB MiniLED TV between March 26 and April 22 will receive a unique redemption code for a 55-inch Hisense CanvasTV ($687.99 at Amazon), redeemable at Hisense with a qualifying purchase. Additional terms and conditions apply.

Hisense Out Host 

“Out Host with Hisense” Campaign: Alongside the UR9 pre-order launch, Hisense is rolling out its “Out Host with Hisense” campaign, timed with the FIFA World Cup 2026 coming to the United States this summer, where the brand is serving as an official sponsor. The campaign leans into a familiar message for Hisense, focusing on how TVs bring people together at home and anchor shared viewing experiences.

“Out Host with Hisense” highlights different hosting styles and ties them to the brand’s 2026 TV lineup, positioning its products as part of how people gather, watch, and share major moments. As part of the campaign, users can visit the official Hisense site to take a Hosting Style Quiz, identify their hosting persona, and get matched with a recommended TV setup.

Source link

Continue Reading

Trending

Copyright © 2025